Search Results

Search found 7217 results on 289 pages for 'nice'.


  • The Windows Store... why did I sign up with this mess again?

    - by FransBouma
    Yesterday, Microsoft revealed that the Windows Store is now open to all developers in a wide range of countries and locations. For the people who think "wtf is the 'Windows Store'?", it's the central place where Windows 8 users will be able to find, download and purchase applications (or as we now have to say to not look like a computer illiterate: <accent style="Kentucky">aaaaappss</accent>) for Windows 8. As this is the store which is integrated into Windows 8, it's an interesting place for ISVs, as potential customers might very well look there first. This of course isn't true for all kinds of software, and developer tools in general aren't the kind of applications most users will download from the Windows Store, but a presence there can't hurt.

    Now, this Windows Store hosts two kinds of applications: 'Metro-style' applications and 'Desktop' applications. The 'Metro-style' applications are applications created for the new 'Metro' UI which is present on the Windows 8 desktop and Windows RT (the single color/big font fingerpaint-oriented UI). 'Desktop' applications are the applications we all run and use on Windows today. Our software consists of desktop applications.

    The Windows Store hosts all Metro-style applications locally in the store and handles the payment for these applications. This means you upload your application (sorry, 'app') to the store, jump through a lot of hoops, Microsoft verifies that your application is not violating a tremendously long list of rules, and after everything is OK it's published and hopefully you get customers and thus earn money. Money which Microsoft will pay you on a regular basis after customers buy your application. Desktop applications don't follow this path, however. Desktop applications aren't hosted by the Windows Store. Instead, the Windows Store more or less hosts a page with the application's information and where to get the goods. I.o.w.: it's nothing more than a product's Facebook page. Microsoft will simply redirect a visitor of the Windows Store to your website, and the visitor will then use your site's system to purchase and download the application. This last bit of information is very important.

    So, this morning I started with fresh energy to register our company 'Solutions Design bv' at the Windows Store, along with our two applications, LLBLGen Pro and ORM Profiler. First I went to the Windows Store dashboard page. There you have to sign in with a live account, or sign up if you don't have one. I signed in with my live account. After that, it greeted me with a page where I had to fill in a code which was mailed to me. My local mail server polls for email every few minutes, so I had to kick it to get the mail immediately. I grabbed the code from the email and was presented with a multi-step process to register myself as a company or as an individual. In red I was warned that this choice was permanent and not changeable. I chuckled: Microsoft apparently stores its data on paper, not in digital form.

    I chose 'company' and was presented with a lengthy form to fill out. On the form there were two strange remarks: per company there can be just 1 (one, uno, not zero, not two or more) registered developer, and only that developer is able to upload stuff to the store. I have no idea how this works with large companies, oh the overhead nightmares... "Sorry, but John, our registered developer with the Windows Store, is on holiday for 3 months, backpacking through Australia, no, he's not reachable at this point. M'yeah, sorry bud.
    Hey, did you fill in those TPS reports yesterday?" A separate Approver has to be specified, who has to be a different person than the registered developer. Apparently to Microsoft a company with just 1 person is not a company. Luckily there are two of us! *phew*, dodged that one, otherwise I would have been stuck forever: the choice I had already made was not reversible!

    After I had filled out the form, and it was all well and good and accepted by the Microsoft lackey who had to write it all down in some paper notebook ("Hey, be warned! It's a permanent choice! Written down in ink, can't be changed!"), I was presented with the question of how I wanted to pay for all this. "Pay for what?" I wondered. Must be the paper they were scribbling the information on, I concluded. After all, there's a financial crisis going on! How could I forget! Silly me. "OK, fair enough". The price was 75 Euros, not the end of the world. I could only pay by credit card, so it was accepted quickly. Or so I thought.

    You see, Microsoft has a different idea about CC payments. In the normal world, you type in your CC number, some date, a name and a security code and that's it. But Microsoft wants to verify this even more. They want to make a verification purchase of a very small amount, and they do that with a special code in the description. You then have to type that code into a special form in the Windows Store dashboard, and after that you're verified. Of course they'll refund the small amount they pull from your card. Sounds simple, right? Well... no. The problem starts with the fact that I can't see the CC activity on some website: I have a bank-issued CC card. I get the CC activity once a month on a piece of paper sent to me. The bank's online website doesn't show it. So it's possible I'll have to wait for this code till October 12th. One month. "So what, I'm not going to use it anyway, Desktop applications don't use the payment system", I thought. "Haha, you're so naive, dear developer!" Microsoft won't allow you to publish any applications till this verification is done. So no application publishing for a month. Wouldn't it be nice if things were, you know, digital, so things got done instantly? But of course, that lackey who scribbled everything in the Big Windows Store Registration Book isn't that quick. Can't blame him though. He's just doing his job.

    Now, after the payment was done, I was presented with a page which tells me Microsoft is going to use a third party company called 'Symantec', which will verify my identity again. The page explains to me that this could be done through email or phone, and that they'll contact the Approver to verify my identity. "Phone?", I thought... that's a little drastic for a developer account to publish a single page of information about an externally hosted software product, isn't it? On Facebook I just added a page, done. And paying you, Microsoft, took less information: you were happy to take my money before my identity was even 'verified' by this 3rd party's minions! "Double standards!", I roared. No-one cared. But it's the thought of getting it off your chest, you know. Luckily for me, everyone at Symantec was asleep when I was registering, so they went for the fallback option in case phone calls were not possible: my Approver received an email. Imagine having to explain the idiot web of security theater I was caught in to someone else, who then has to tell a random person over the internet that I indeed was who I said I was.
    As she's a true sweetheart, she gave me the benefit of the doubt and confirmed that, for now, I was who I said I was. Remember, this is for a desktop application, which is only a link to a website, some pictures and a piece of text. No file hosting, no payment processing, nothing, just a single page. Yeah, I also thought it was crazy.

    But we're not at the end of this quest yet. I clicked around in the confusing menus of the Windows Store dashboard and found the 'Desktop' section. I got a helpful screen with a warning in red that it can't find any certified 'apps'. True, I'm just getting started, buddy. I see a link: "Check the Windows apps you submitted for certification". Well, I haven't submitted anything, but let's see where it brings me. Oh the thrill of adventure! I click the link and I end up on this site: the hardware/desktop dashboard account registration. "Erm... but I just registered...", I mumbled to no-one in particular. Apparently for desktop registration / verification I have to register again, it tells me. But not only that: the desktop application has to be signed with a certificate. And not just some random el-cheapo certificate you can get at any mall's discount store. No, this certificate is special. It's precious. This certificate, the 'Microsoft Authenticode' Digital Certificate, is the only certificate that's acceptable, and jolly, it can be purchased from VeriSign for the price of only... $99.-, but be quick, because this is a limited time offer! After that it's, I kid you not, $499.-. 500 dollars for a certificate to sign an executable. But I do feel special, I got a special price. Only for me! I'm glowing. Not for long though.

    Here I started to wonder what the benefit of it all was. I now again had to pay money for a shiny certificate which will add 'Solutions Design bv' to our installer as the publisher instead of 'unknown', while our customers download the file from our website. Not only that, but this was all about a Desktop application, which isn't hosted by Microsoft. They only link to it. And make no mistake: these prices aren't single payments. Every year they have to be renewed. Like a membership of an exclusive club: you're special and privileged, but only if you cough up the dough.

    To give you an example of how silly this all is: I added LLBLGen Pro and ORM Profiler to the Visual Studio Gallery some time ago. It's the same thing: a central place where one can find software which adds to / extends / works with Visual Studio. I could simply create the pages, add the information, and they show up inside Visual Studio. No files are hosted at Microsoft; they're downloaded from our website. Exactly the same system.

    As I have to wait for the CC transcripts to arrive anyway, I can't proceed with publishing in this new shiny store. After the verification is complete I have to wait for verification of my software by Microsoft. Even Desktop applications need to be verified against a long list of rules which are mainly focused on Metro-style applications – even though they're not hosted by Microsoft. I wonder what they'll find. "Your application wasn't approved. It violates rule 14 X sub D: it provides more value than our own competing framework".

    While I was writing this post, I tried to check something in the Windows Store dashboard, to see whether I remembered it correctly. After logging in with my live account I was presented again with the question to enter a code that had just been mailed to me. Not the previous code, a brand new one.
    Again I had to kick my mail server to pull the email to proceed. That was it. This 'experience' is so beyond miserable, I'm afraid I have to say goodbye for now to the 'Windows Store'. It's simply not worth my time.

    Now, about live accounts. You might know this: live accounts are tied to everything you do with Microsoft. So if you have an MSDN subscription, e.g. the one which costs over $5000.-, it's tied to this same live account. But the fun thing is, you can log in to the MSDN subscriptions with just the account id and password. No additional code is mailed to you, even though it gives you access to all Microsoft software available, including your licenses. Why the draconian security theater with this Windows Store, while all I want is to publish some desktop applications, when on other Microsoft sites it's OK to simply sign in with your live account: no codes needed, no verification and no certificates?

    Microsoft, one thing you need with this store, and that's: apps. Apps, apps, apps, apps, aaaaaaaaapps. Sorry, my bad, got carried away. I just can't stand the word 'app'. This store's shelves have to be filled to the brim with goods. But instead of being welcomed into the store with open arms, I have to fight an uphill battle against an endless list of rules and bullshit to earn the privilege of publishing in this shiny store. As if I have to be thrilled to be part of the exclusive club called 'Windows Store Publishers'. As if Microsoft doesn't want it to succeed.

    Craig Stuntz sent me a link to an old blog post of his regarding code signing and uploading to Microsoft's old mobile store, from back in the WinMo5 days: http://blogs.teamb.com/craigstuntz/2006/10/11/28357/. A good read and good background info about how little things have changed over the years.

    I hope this helps Microsoft make things clearer and smoother, and also helps ISVs with their decision whether to go with the Windows Store scheme or ignore it. For now, I don't see the advantage of publishing there, especially not with the nonsense rules Microsoft cooked up. Perhaps that will change in the future, who knows.


  • C#/.NET Little Wonders: The Joy of Anonymous Types

    - by James Michael Hare
    Once again, in this series of posts I look at the parts of the .NET Framework that may seem trivial, but can help improve your code by making it easier to write and maintain. The index of all my past little wonders posts can be found here.

    In C# 3.0 (the .NET 3.5 Framework), Microsoft introduced the concept of anonymous types, which provide a way to create quick, compiler-generated types at the point of instantiation. These may seem trivial, but are very handy for concisely creating lightweight, strongly-typed objects containing only read-only properties that can be used within a given scope.

    Creating an Anonymous Type

    In short, an anonymous type is a reference type that derives directly from object and is defined by its set of properties based on their names, number, types, and order as given at initialization. In addition to just holding these properties, it is also given appropriate overridden implementations of Equals() and GetHashCode() that take into account all of the properties to correctly perform property comparisons and hashing. Also overridden is an implementation of ToString(), which makes it easy to display the contents of an anonymous type instance in a fairly concise manner.

    To construct an anonymous type instance, you use basically the same initialization syntax as with a regular type. So, for example, if we wanted to create an anonymous type to represent a particular point, we could do this:

        var point = new { X = 13, Y = 7 };

    Note the similarity between anonymous type initialization and regular initialization. The main difference is that the compiler generates the type name and the properties (as read-only) based on the names and order provided, inferring their types from the expressions they are assigned to.

    It is key to remember that all of those factors (number, names, types, order of properties) determine the anonymous type. This is important, because while these two instances share the same anonymous type:

        // same names, types, and order
        var point1 = new { X = 13, Y = 7 };
        var point2 = new { X = 5, Y = 0 };

    These similar ones do not:

        var point3 = new { Y = 3, X = 5 };         // different order
        var point4 = new { X = 3, Y = 5.0 };       // different type for Y
        var point5 = new { MyX = 3, MyY = 5 };     // different names
        var point6 = new { X = 1, Y = 2, Z = 3 };  // different count

    Limitations on Property Initialization Expressions

    The expression for a property in an anonymous type initialization cannot be the null literal (though it can evaluate to null) or an anonymous function. For example, the following are illegal:

        // Null can't be used directly. Null reference of what type?
        var cantUseNull = new { Value = null };

        // Anonymous methods cannot be used.
        var cantUseAnonymousFxn = new { Value = () => Console.WriteLine("Can't.") };

    Note that the restriction on null is just that you can't use it directly as the expression, because otherwise how would the compiler be able to determine the type? You can, however, use it indirectly, by assigning a null expression such as a typed variable with the value null, or by casting null to a specific type:

        string str = null;
        var fineIndirectly = new { Value = str };
        var fineCast = new { Value = (string)null };
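    As an aside, the same casting trick can sidestep the anonymous-function restriction: give the lambda a concrete delegate type and it becomes a legal property expression. A minimal sketch (not from the original article):

        // a cast to a concrete delegate type (here Action) gives the
        // lambda a definite type, so it's now a valid property expression
        var fineFxn = new { Value = (Action)(() => Console.WriteLine("Fine.")) };
        fineFxn.Value();   // prints: Fine.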
    All of the examples above name the properties explicitly, but you can also implicitly name properties if they are being set from a property, field, or variable. In these cases, when a field, property, or variable is used alone, and you don't specify a property name to assign to it, the new property will have the same name. For example:

        int variable = 42;

        // creates two properties named variable and Now
        var implicitProperties = new { variable, DateTime.Now };

    Is the same type as:

        var explicitProperties = new { variable = variable, Now = DateTime.Now };

    But this only works if you are using an existing field, variable, or property directly as the expression. If you use a more complex expression then the name cannot be inferred:

        // can't infer the name variable from variable * 2, must name explicitly
        var wontWork = new { variable * 2, DateTime.Now };

    In the example above, since we typed variable * 2, it is no longer just a variable and thus we would have to assign the property a name explicitly.

    ToString() on Anonymous Types

    One of the more trivial overrides that an anonymous type provides you is a ToString() method that prints the value of the anonymous type instance in much the same format as it was initialized (except with actual values instead of expressions, as appropriate, of course). For example, if you had:

        var point = new { X = 13, Y = 42 };

    And then print it out:

        Console.WriteLine(point.ToString());

    You will get:

        { X = 13, Y = 42 }

    While this isn't necessarily the most stunning feature of anonymous types, it can be handy for debugging or logging values in a fairly easy to read format.

    Comparing Anonymous Type Instances

    Because anonymous types automatically create appropriate overrides of Equals() and GetHashCode() based on the underlying properties, we can reliably compare two instances or get hash codes. For example, if we had the following 3 points:

        var point1 = new { X = 1, Y = 2 };
        var point2 = new { X = 1, Y = 2 };
        var point3 = new { Y = 2, X = 1 };

    If we compare point1 and point2 we'll see that Equals() returns true, because the overridden version of Equals() sees that the types are the same (same number, names, types, and order of properties) and that the values are the same. In addition, because all equal objects should have the same hash code, we'll see that the hash codes evaluate to the same as well:

        // true, same type, same values
        Console.WriteLine(point1.Equals(point2));

        // true, equal anonymous type instances always have same hash code
        Console.WriteLine(point1.GetHashCode() == point2.GetHashCode());

    However, if we compare point2 and point3 we get false. Even though the names, types, and values of the properties are the same, the order is not, thus they are two different types and cannot be compared (and thus return false). And, since they are not equal objects (even though they have the same values) there is a good chance their hash codes are different as well (though not guaranteed):

        // false, different types
        Console.WriteLine(point2.Equals(point3));

        // quite possibly false (was false on my machine)
        Console.WriteLine(point2.GetHashCode() == point3.GetHashCode());
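    A quick way to watch these overrides do real work is to run a value-based LINQ operator over anonymous type instances. A small sketch (assuming a using System.Linq directive; not code from the original article):

        var points = new[]
        {
            new { X = 1, Y = 2 },
            new { X = 1, Y = 2 },   // duplicate by value, not by reference
            new { X = 3, Y = 4 }
        };

        // Distinct() relies on Equals()/GetHashCode(), so the value-duplicate collapses
        Console.WriteLine(points.Distinct().Count());   // prints: 2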
    Using Anonymous Types

    Now that we've created instances of anonymous types, let's actually use them. The property names (whether implicit or explicit) are used to access the individual properties of the anonymous type. The main thing, once again, to keep in mind is that the properties are read-only, so you cannot assign the properties a new value (note: this does not mean that instances referred to by a property are immutable – for more information check out C#/.NET Fundamentals: Returning Data Immutably in a Mutable World). Thus, if we have the following anonymous type instance:

        var point = new { X = 13, Y = 42 };

    We can get the properties as you'd expect:

        Console.WriteLine("The point is: ({0},{1})", point.X, point.Y);

    But we cannot alter the property values:

        // compiler error, properties are readonly
        point.X = 99;

    Further, since the anonymous type name is only known by the compiler, there is no easy way to pass anonymous type instances outside of a given scope. The only real choices are to pass them as object or dynamic. But really that is not the intention of using anonymous types. If you find yourself needing to pass an anonymous type outside of a given scope, you should really consider making a POCO (Plain Old CLR Object – i.e. a class that contains just properties to hold data, with little/no business logic) instead.

    Given that, why use them at all? Couldn't you always just create a POCO to represent every anonymous type you needed? Sure you could, but then you might litter your solution with many small POCO classes that have very localized uses.

    It turns out this is the key to when to use anonymous types to your advantage: when you just need a lightweight type in a local context to store intermediate results, consider an anonymous type – but when that result is more long-lived and used outside of the current scope, consider a POCO instead.

    So what do we mean by intermediate results in a local context? Well, a classic example would be filtering down results from a LINQ expression. For example, let's say we had a List<Transaction>, where Transaction is defined something like:

        public class Transaction
        {
            public string UserId { get; set; }
            public DateTime At { get; set; }
            public decimal Amount { get; set; }
            // ...
        }

    And let's say we had this data in our List<Transaction>:

        var transactions = new List<Transaction>
        {
            new Transaction { UserId = "Jim", At = DateTime.Now, Amount = 2200.00m },
            new Transaction { UserId = "Jim", At = DateTime.Now, Amount = -1100.00m },
            new Transaction { UserId = "Jim", At = DateTime.Now.AddDays(-1), Amount = 900.00m },
            new Transaction { UserId = "John", At = DateTime.Now.AddDays(-2), Amount = 300.00m },
            new Transaction { UserId = "John", At = DateTime.Now, Amount = -10.00m },
            new Transaction { UserId = "Jane", At = DateTime.Now, Amount = 200.00m },
            new Transaction { UserId = "Jane", At = DateTime.Now, Amount = -50.00m },
            new Transaction { UserId = "Jaime", At = DateTime.Now.AddDays(-3), Amount = -100.00m },
            new Transaction { UserId = "Jaime", At = DateTime.Now.AddDays(-3), Amount = 300.00m },
        };

    So let's say we wanted to get the transactions for each day for each user. That is, for each day we'd want to see the transactions each user performed. We could do this very simply with a nice LINQ expression, without the need of creating any POCOs:

        // group the transactions based on an anonymous type with properties UserId and Date:
        var byUserAndDay = transactions
            .GroupBy(tx => new { tx.UserId, tx.At.Date })
            .OrderBy(grp => grp.Key.Date)
            .ThenBy(grp => grp.Key.UserId);
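    As a side note, the same anonymous key can be carried into a projection; summing each group down to a per-user, per-day total is just one more step. A sketch building on the list above (not code from the original article):

        var dailyTotals = transactions
            .GroupBy(tx => new { tx.UserId, tx.At.Date })
            .Select(grp => new
            {
                grp.Key.UserId,
                grp.Key.Date,
                Total = grp.Sum(tx => tx.Amount)   // net amount for that user/day
            });

        foreach (var t in dailyTotals)
        {
            Console.WriteLine("{0} on {1:MM/dd/yyyy} netted {2}", t.UserId, t.Date, t.Total);
        }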
    Now, those of you who have attempted to use custom classes as a grouping type before (such as with GroupBy(), Distinct(), etc.) may have discovered the hard way that LINQ gets a lot of its speed by utilizing not only Equals(), but also GetHashCode() on the type you are grouping by. Thus, when you use custom types for these purposes, you generally end up having to write custom Equals() and GetHashCode() implementations or you won't get the results you were expecting (the default implementations of Equals() and GetHashCode() are reference equality and reference identity based, respectively).

    As we said before, it turns out that anonymous types already do these critical overrides for you. This makes them even more convenient to use! Instead of creating a small POCO to handle this grouping, and then having to implement a custom Equals() and GetHashCode() every time, we can just take advantage of the fact that anonymous types automatically override these methods with appropriate implementations that take into account the values of all of the properties. Now, we can look at our results:

        foreach (var group in byUserAndDay)
        {
            // the group's Key is an instance of our anonymous type
            Console.WriteLine("{0} on {1:MM/dd/yyyy} did:", group.Key.UserId, group.Key.Date);

            // each grouping contains a sequence of the items.
            foreach (var tx in group)
            {
                Console.WriteLine("\t{0}", tx.Amount);
            }
        }

    And see:

        Jaime on 06/18/2012 did:
            -100.00
            300.00

        John on 06/19/2012 did:
            300.00

        Jim on 06/20/2012 did:
            900.00

        Jane on 06/21/2012 did:
            200.00
            -50.00

        Jim on 06/21/2012 did:
            2200.00
            -1100.00

        John on 06/21/2012 did:
            -10.00

    Again, sure, we could have just built a POCO to do this, given it an appropriate Equals() and GetHashCode() method, but that would have bloated our code with many extra lines and been more difficult to maintain if the properties change.

    Summary

    Anonymous types are one of those Little Wonders of the .NET language that are perfect at exactly that time when you need a temporary type to hold a set of properties together for an intermediate result. While they are not very useful beyond the scope in which they are defined, they are excellent in LINQ expressions as a way to create and use intermediary values for further expressions and analysis.

    Anonymous types are defined by the compiler based on the number, type, names, and order of the properties created, and they automatically implement appropriate Equals() and GetHashCode() overrides (as well as ToString()), which makes them ideal for LINQ expressions where you need to create a set of properties to group, evaluate, etc.

    Technorati Tags: C#,CSharp,.NET,Little Wonders,Anonymous Types,LINQ


  • High CPU usage with Team Speak 3.0.0-rc2

    - by AlexTheBird
    The CPU usage is always around 40 percent. I use push-to-talk, and I had uninstalled PulseAudio; now I use ALSA. I don't even have to connect to a server: simply starting TS pushes the CPU usage up to 40 percent, and it stays there. The CPU usage of 3.0.0-rc1 [Build: 14468] is a constant 14 percent. This is the output of top, mpstat and ps aux while TS3 is running, of course:

        alexandros@alexandros-laptop:~$ top
        top - 18:20:07 up 2:22, 3 users, load average: 1.02, 0.85, 0.77
        Tasks: 163 total, 1 running, 162 sleeping, 0 stopped, 0 zombie
        Cpu(s): 5.3%us, 1.9%sy, 0.1%ni, 91.8%id, 0.7%wa, 0.1%hi, 0.1%si, 0.0%st
        Mem: 2061344k total, 964028k used, 1097316k free, 69116k buffers
        Swap: 3997688k total, 0k used, 3997688k free, 449032k cached

        PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
        2714 alexandr 20 0 206m 31m 24m S 37 1.6 0:12.78 ts3client_linux
        868 root 20 0 47564 27m 10m S 8 1.4 3:21.73 Xorg
        1 root 20 0 2804 1660 1204 S 0 0.1 0:00.53 init
        2 root 20 0 0 0 0 S 0 0.0 0:00.00 kthreadd
        3 root RT 0 0 0 0 S 0 0.0 0:00.01 migration/0
        4 root 20 0 0 0 0 S 0 0.0 0:00.45 ksoftirqd/0
        5 root RT 0 0 0 0 S 0 0.0 0:00.00 watchdog/0
        6 root RT 0 0 0 0 S 0 0.0 0:00.00 migration/1
        7 root 20 0 0 0 0 S 0 0.0 0:00.08 ksoftirqd/1
        8 root RT 0 0 0 0 S 0 0.0 0:00.00 watchdog/1
        9 root 20 0 0 0 0 S 0 0.0 0:01.17 events/0
        10 root 20 0 0 0 0 S 0 0.0 0:00.81 events/1
        11 root 20 0 0 0 0 S 0 0.0 0:00.00 cpuset
        12 root 20 0 0 0 0 S 0 0.0 0:00.00 khelper
        13 root 20 0 0 0 0 S 0 0.0 0:00.00 async/mgr
        14 root 20 0 0 0 0 S 0 0.0 0:00.00 pm
        16 root 20 0 0 0 0 S 0 0.0 0:00.00 sync_supers
        17 root 20 0 0 0 0 S 0 0.0 0:00.00 bdi-default
        18 root 20 0 0 0 0 S 0 0.0 0:00.00 kintegrityd/0
        19 root 20 0 0 0 0 S 0 0.0 0:00.00 kintegrityd/1
        20 root 20 0 0 0 0 S 0 0.0 0:00.05 kblockd/0
        21 root 20 0 0 0 0 S 0 0.0 0:00.02 kblockd/1
        22 root 20 0 0 0 0 S 0 0.0 0:00.00 kacpid
        23 root 20 0 0 0 0 S 0 0.0 0:00.00 kacpi_notify
        24 root 20 0 0 0 0 S 0 0.0 0:00.00 kacpi_hotplug
        25 root 20 0 0 0 0 S 0 0.0 0:00.99 ata/0
        26 root 20 0 0 0 0 S 0 0.0 0:00.92 ata/1
        27 root 20 0 0 0 0 S 0 0.0 0:00.00 ata_aux
        28 root 20 0 0 0 0 S 0 0.0 0:00.00 ksuspend_usbd
        29 root 20 0 0 0 0 S 0 0.0 0:00.00 khubd

        alexandros@alexandros-laptop:~$ mpstat
        Linux 2.6.32-32-generic (alexandros-laptop) 16.06.2011 _i686_ (2 CPU)
        18:20:15 CPU %usr %nice %sys %iowait %irq %soft %steal %guest %idle
        18:20:15 all 5,36 0,09 1,91 0,68 0,07 0,06 0,00 0,00 91,83

        alexandros@alexandros-laptop:~$ ps aux
        USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
        root 1 0.0 0.0 2804 1660 ? Ss 15:58 0:00 /sbin/init
        root 2 0.0 0.0 0 0 ? S 15:58 0:00 [kthreadd]
        root 3 0.0 0.0 0 0 ? S 15:58 0:00 [migration/0]
        root 4 0.0 0.0 0 0 ? S 15:58 0:00 [ksoftirqd/0]
        root 5 0.0 0.0 0 0 ? S 15:58 0:00 [watchdog/0]
        root 6 0.0 0.0 0 0 ? S 15:58 0:00 [migration/1]
        root 7 0.0 0.0 0 0 ? S 15:58 0:00 [ksoftirqd/1]
        root 8 0.0 0.0 0 0 ? S 15:58 0:00 [watchdog/1]
        root 9 0.0 0.0 0 0 ? S 15:58 0:01 [events/0]
        root 10 0.0 0.0 0 0 ? S 15:58 0:00 [events/1]
        root 11 0.0 0.0 0 0 ? S 15:58 0:00 [cpuset]
        root 12 0.0 0.0 0 0 ? S 15:58 0:00 [khelper]
        root 13 0.0 0.0 0 0 ? S 15:58 0:00 [async/mgr]
        root 14 0.0 0.0 0 0 ? S 15:58 0:00 [pm]
        root 16 0.0 0.0 0 0 ? S 15:58 0:00 [sync_supers]
        root 17 0.0 0.0 0 0 ? S 15:58 0:00 [bdi-default]
        root 18 0.0 0.0 0 0 ? S 15:58 0:00 [kintegrityd/0]
        root 19 0.0 0.0 0 0 ? S 15:58 0:00 [kintegrityd/1]
        root 20 0.0 0.0 0 0 ? S 15:58 0:00 [kblockd/0]
        root 21 0.0 0.0 0 0 ? S 15:58 0:00 [kblockd/1]
        root 22 0.0 0.0 0 0 ? S 15:58 0:00 [kacpid]
        root 23 0.0 0.0 0 0 ? S 15:58 0:00 [kacpi_notify]
        root 24 0.0 0.0 0 0 ? S 15:58 0:00 [kacpi_hotplug]
        root 25 0.0 0.0 0 0 ? S 15:58 0:00 [ata/0]
        root 26 0.0 0.0 0 0 ? S 15:58 0:00 [ata/1]
        root 27 0.0 0.0 0 0 ? S 15:58 0:00 [ata_aux]
        root 28 0.0 0.0 0 0 ? S 15:58 0:00 [ksuspend_usbd]
        root 29 0.0 0.0 0 0 ? S 15:58 0:00 [khubd]
        root 30 0.0 0.0 0 0 ? S 15:58 0:00 [kseriod]
        root 31 0.0 0.0 0 0 ? S 15:58 0:00 [kmmcd]
        root 34 0.0 0.0 0 0 ? S 15:58 0:00 [khungtaskd]
        root 35 0.0 0.0 0 0 ? S 15:58 0:00 [kswapd0]
        root 36 0.0 0.0 0 0 ? SN 15:58 0:00 [ksmd]
        root 37 0.0 0.0 0 0 ? S 15:58 0:00 [aio/0]
        root 38 0.0 0.0 0 0 ? S 15:58 0:00 [aio/1]
        root 39 0.0 0.0 0 0 ? S 15:58 0:00 [ecryptfs-kthrea]
        root 40 0.0 0.0 0 0 ? S 15:58 0:00 [crypto/0]
        root 41 0.0 0.0 0 0 ? S 15:58 0:00 [crypto/1]
        root 48 0.0 0.0 0 0 ? S 15:58 0:03 [scsi_eh_0]
        root 50 0.0 0.0 0 0 ? S 15:58 0:00 [scsi_eh_1]
        root 53 0.0 0.0 0 0 ? S 15:58 0:00 [kstriped]
        root 54 0.0 0.0 0 0 ? S 15:58 0:00 [kmpathd/0]
        root 55 0.0 0.0 0 0 ? S 15:58 0:00 [kmpathd/1]
        root 56 0.0 0.0 0 0 ? S 15:58 0:00 [kmpath_handlerd]
        root 57 0.0 0.0 0 0 ? S 15:58 0:00 [ksnapd]
        root 58 0.0 0.0 0 0 ? S 15:58 0:03 [kondemand/0]
        root 59 0.0 0.0 0 0 ? S 15:58 0:02 [kondemand/1]
        root 60 0.0 0.0 0 0 ? S 15:58 0:00 [kconservative/0]
        root 61 0.0 0.0 0 0 ? S 15:58 0:00 [kconservative/1]
        root 213 0.0 0.0 0 0 ? S 15:58 0:00 [scsi_eh_2]
        root 222 0.0 0.0 0 0 ? S 15:58 0:00 [scsi_eh_3]
        root 234 0.0 0.0 0 0 ? S 15:58 0:00 [scsi_eh_4]
        root 235 0.0 0.0 0 0 ? S 15:58 0:01 [usb-storage]
        root 255 0.0 0.0 0 0 ? S 15:58 0:00 [jbd2/sda5-8]
        root 256 0.0 0.0 0 0 ? S 15:58 0:00 [ext4-dio-unwrit]
        root 257 0.0 0.0 0 0 ? S 15:58 0:00 [ext4-dio-unwrit]
        root 290 0.0 0.0 0 0 ? S 15:58 0:00 [flush-8:0]
        root 318 0.0 0.0 2316 888 ? S 15:58 0:00 upstart-udev-bridge --daemon
        root 321 0.0 0.0 2616 1024 ? S<s 15:58 0:00 udevd --daemon
        root 526 0.0 0.0 0 0 ? S 15:58 0:00 [kpsmoused]
        root 528 0.0 0.0 0 0 ? S 15:58 0:00 [led_workqueue]
        root 650 0.0 0.0 0 0 ? S 15:58 0:00 [radeon/0]
        root 651 0.0 0.0 0 0 ? S 15:58 0:00 [radeon/1]
        root 652 0.0 0.0 0 0 ? S 15:58 0:00 [ttm_swap]
        root 654 0.0 0.0 2612 984 ? S< 15:58 0:00 udevd --daemon
        root 656 0.0 0.0 0 0 ? S 15:58 0:00 [hd-audio0]
        root 657 0.0 0.0 2612 916 ? S< 15:58 0:00 udevd --daemon
        root 674 0.6 0.0 0 0 ? S 15:58 0:57 [phy0]
        syslog 715 0.0 0.0 34812 1776 ? Sl 15:58 0:00 rsyslogd -c4
        102 731 0.0 0.0 3236 1512 ? Ss 15:58 0:02 dbus-daemon --system --fork
        root 740 0.0 0.1 19088 3380 ? Ssl 15:58 0:00 gdm-binary
        root 744 0.0 0.1 18900 4032 ? Ssl 15:58 0:01 NetworkManager
        avahi 749 0.0 0.0 2928 1520 ? S 15:58 0:00 avahi-daemon: running [alexandros-laptop.local]
        avahi 752 0.0 0.0 2928 544 ? Ss 15:58 0:00 avahi-daemon: chroot helper
        root 753 0.0 0.1 4172 2300 ? S 15:58 0:00 /usr/sbin/modem-manager
        root 762 0.0 0.1 20584 3152 ? Sl 15:58 0:00 /usr/sbin/console-kit-daemon --no-daemon
        root 836 0.0 0.1 20856 3864 ? Sl 15:58 0:00 /usr/lib/gdm/gdm-simple-slave --display-id /org/gnome/DisplayManager/Display1
        root 856 0.0 0.1 4836 2388 ? S 15:58 0:00 /sbin/wpa_supplicant -u -s
        root 868 2.3 1.3 36932 27924 tty7 Rs+ 15:58 3:22 /usr/bin/X :0 -nr -verbose -auth /var/run/gdm/auth-for-gdm-a46T4j/database -nolisten
        root 891 0.0 0.0 1792 564 tty4 Ss+ 15:58 0:00 /sbin/getty -8 38400 tty4
        root 901 0.0 0.0 1792 564 tty5 Ss+ 15:58 0:00 /sbin/getty -8 38400 tty5
        root 908 0.0 0.0 1792 564 tty2 Ss+ 15:58 0:00 /sbin/getty -8 38400 tty2
        root 910 0.0 0.0 1792 568 tty3 Ss+ 15:58 0:00 /sbin/getty -8 38400 tty3
        root 913 0.0 0.0 1792 564 tty6 Ss+ 15:58 0:00 /sbin/getty -8 38400 tty6
        root 917 0.0 0.0 2180 1072 ? Ss 15:58 0:00 acpid -c /etc/acpi/events -s /var/run/acpid.socket
        daemon 924 0.0 0.0 2248 432 ? Ss 15:58 0:00 atd
        root 927 0.0 0.0 2376 900 ? Ss 15:58 0:00 cron
        root 950 0.0 0.0 11736 1372 ? Ss 15:58 0:00 /usr/sbin/winbindd
        root 958 0.0 0.0 11736 1184 ? S 15:58 0:00 /usr/sbin/winbindd
        root 974 0.0 0.1 6832 2580 ? Ss 15:58 0:00 /usr/sbin/cupsd -C /etc/cups/cupsd.conf
        root 1078 0.0 0.0 1792 564 tty1 Ss+ 15:58 0:00 /sbin/getty -8 38400 tty1
        gdm 1097 0.0 0.0 3392 772 ? S 15:58 0:00 /usr/bin/dbus-launch --exit-with-session
        root 1112 0.0 0.1 19216 3292 ? Sl 15:58 0:00 /usr/lib/gdm/gdm-session-worker
        root 1116 0.0 0.1 5540 2932 ? S 15:58 0:01 /usr/lib/upower/upowerd
        root 1131 0.0 0.1 6308 3824 ? S 15:58 0:00 /usr/lib/policykit-1/polkitd
        108 1163 0.0 0.2 16788 4360 ? Ssl 15:58 0:01 /usr/sbin/hald
        root 1164 0.0 0.0 3536 1300 ? S 15:58 0:00 hald-runner
        root 1188 0.0 0.0 3612 1256 ? S 15:58 0:00 hald-addon-input: Listening on /dev/input/event6 /dev/input/event5 /dev/input/event2
        root 1194 0.0 0.0 3612 1224 ? S 15:58 0:00 /usr/lib/hal/hald-addon-rfkill-killswitch
        root 1200 0.0 0.0 3608 1240 ? S 15:58 0:00 /usr/lib/hal/hald-addon-generic-backlight
        root 1202 0.0 0.0 3616 1236 ? S 15:58 0:02 hald-addon-storage: polling /dev/sr0 (every 2 sec)
        root 1204 0.0 0.0 3616 1236 ? S 15:58 0:00 hald-addon-storage: polling /dev/sdb (every 2 sec)
        root 1211 0.0 0.0 3624 1220 ? S 15:58 0:00 /usr/lib/hal/hald-addon-cpufreq
        108 1212 0.0 0.0 3420 1200 ? S 15:58 0:00 hald-addon-acpi: listening on acpid socket /var/run/acpid.socket
        1000 1222 0.0 0.1 24196 2816 ? Sl 15:58 0:00 /usr/bin/gnome-keyring-daemon --daemonize --login
        1000 1240 0.0 0.3 28228 7312 ? Ssl 15:58 0:00 gnome-session
        1000 1274 0.0 0.0 3284 356 ? Ss 15:58 0:00 /usr/bin/ssh-agent /usr/bin/dbus-launch --exit-with-session gnome-session
        1000 1277 0.0 0.0 3392 772 ? S 15:58 0:00 /usr/bin/dbus-launch --exit-with-session gnome-session
        1000 1278 0.0 0.0 3160 1652 ? Ss 15:58 0:00 /bin/dbus-daemon --fork --print-pid 5 --print-address 7 --session
        1000 1281 0.0 0.2 8172 4636 ? S 15:58 0:00 /usr/lib/libgconf2-4/gconfd-2
        1000 1287 0.0 0.5 24228 10896 ? Ss 15:58 0:03 /usr/lib/gnome-settings-daemon/gnome-settings-daemon
        1000 1290 0.0 0.1 6468 2364 ? S 15:58 0:00 /usr/lib/gvfs/gvfsd
        1000 1293 0.0 0.6 38104 13004 ? S 15:58 0:03 metacity
        1000 1296 0.0 0.1 30280 2628 ? Ssl 15:58 0:00 /usr/lib/gvfs//gvfs-fuse-daemon /home/alexandros/.gvfs
        1000 1301 0.0 0.0 3344 988 ? S 15:58 0:03 syndaemon -i 0.5 -k
        1000 1303 0.0 0.1 8060 3488 ? S 15:58 0:00 /usr/lib/gvfs/gvfs-gdu-volume-monitor
        root 1306 0.0 0.1 15692 3104 ? Sl 15:58 0:00 /usr/lib/udisks/udisks-daemon
        1000 1307 0.4 1.0 50748 21684 ? S 15:58 0:34 python -u /usr/share/screenlets/DigiClock/DigiClockScreenlet.py
        1000 1308 0.0 0.9 35608 18564 ? S 15:58 0:00 python /usr/share/screenlets-manager/screenlets-daemon.py
        1000 1309 0.0 0.3 19524 6468 ? S 15:58 0:00 /usr/lib/policykit-1-gnome/polkit-gnome-authentication-agent-1
        1000 1311 0.0 0.5 37412 11788 ? S 15:58 0:01 gnome-power-manager
        1000 1312 0.0 1.0 50772 22628 ? S 15:58 0:03 gnome-panel
        1000 1313 0.1 1.5 102648 31184 ? Sl 15:58 0:10 nautilus
        root 1314 0.0 0.0 5188 996 ? S 15:58 0:02 udisks-daemon: polling /dev/sdb /dev/sr0
        1000 1315 0.0 0.6 51948 12464 ? SL 15:58 0:01 nm-applet --sm-disable
        1000 1317 0.0 0.1 16956 2364 ? Sl 15:58 0:00 /usr/lib/gvfs/gvfs-afc-volume-monitor
        1000 1318 0.0 0.3 20164 7792 ? S 15:58 0:00 bluetooth-applet
        1000 1321 0.0 0.1 7260 2384 ? S 15:58 0:00 /usr/lib/gvfs/gvfs-gphoto2-volume-monitor
        1000 1323 0.0 0.5 37436 12124 ? S 15:58 0:00 /usr/lib/notify-osd/notify-osd
        1000 1324 0.0 1.9 197928 40456 ? Ssl 15:58 0:06 /home/alexandros/.dropbox-dist/dropbox
        1000 1329 0.0 0.3 20136 7968 ? S 15:58 0:00 /usr/bin/gnome-screensaver --no-daemon
        1000 1331 0.0 0.1 7056 3112 ? S 15:58 0:00 /usr/lib/gvfs/gvfsd-trash --spawner :1.6 /org/gtk/gvfs/exec_spaw/0
        root 1340 0.0 0.0 2236 1008 ? S 15:58 0:00 /sbin/dhclient -d -sf /usr/lib/NetworkManager/nm-dhcp-client.action -pf /var/run/dhcl
        1000 1348 0.0 0.1 42252 3680 ? Ssl 15:58 0:00 /usr/lib/bonobo-activation/bonobo-activation-server --ac-activate --ior-output-fd=19
        1000 1384 0.0 1.7 80244 35480 ? Sl 15:58 0:02 /usr/bin/python /usr/lib/deskbar-applet/deskbar-applet/deskbar-applet --oaf-activate-
        1000 1388 0.0 0.5 26196 11804 ? S 15:58 0:01 /usr/lib/gnome-panel/wnck-applet --oaf-activate-iid=OAFIID:GNOME_Wncklet_Factory --oa
        1000 1393 0.1 0.5 25876 11548 ? S 15:58 0:08 /usr/lib/gnome-applets/multiload-applet-2 --oaf-activate-iid=OAFIID:GNOME_MultiLoadAp
        1000 1394 0.0 0.5 25600 11140 ? S 15:58 0:03 /usr/lib/gnome-applets/cpufreq-applet --oaf-activate-iid=OAFIID:GNOME_CPUFreqApplet_F
        1000 1415 0.0 0.5 39192 11156 ? S 15:58 0:01 /usr/lib/gnome-power-manager/gnome-inhibit-applet --oaf-activate-iid=OAFIID:GNOME_Inh
        1000 1417 0.0 0.7 53544 15488 ? Sl 15:58 0:00 /usr/lib/gnome-applets/mixer_applet2 --oaf-activate-iid=OAFIID:GNOME_MixerApplet_Fact
        1000 1419 0.0 0.4 23816 9068 ? S 15:58 0:00 /usr/lib/gnome-panel/notification-area-applet --oaf-activate-iid=OAFIID:GNOME_Notific
        1000 1488 0.0 0.3 20964 7548 ? S 15:58 0:00 /usr/lib/gnome-disk-utility/gdu-notification-daemon
        1000 1490 0.0 0.1 6608 2484 ? S 15:58 0:00 /usr/lib/gvfs/gvfsd-burn --spawner :1.6 /org/gtk/gvfs/exec_spaw/1
        1000 1510 0.0 0.1 6348 2084 ? S 15:58 0:00 /usr/lib/gvfs/gvfsd-metadata
        1000 1531 0.0 0.3 19472 6616 ? S 15:58 0:00 /usr/lib/gnome-user-share/gnome-user-share
        1000 1535 0.0 0.4 77128 8392 ? Sl 15:58 0:00 /usr/lib/evolution/evolution-data-server-2.28 --oaf-activate-iid=OAFIID:GNOME_Evoluti
        1000 1601 0.0 0.5 69576 11800 ? Sl 15:59 0:00 /usr/lib/evolution/2.28/evolution-alarm-notify
        1000 1604 0.0 0.7 33924 15888 ? S 15:59 0:00 python /usr/share/system-config-printer/applet.py
        1000 1701 0.0 0.5 37116 11968 ? S 15:59 0:00 update-notifier
        1000 1892 4.5 7.0 406720 145312 ? Sl 17:11 3:09 /opt/google/chrome/chrome
        1000 1896 0.0 0.1 69812 3680 ? S 17:11 0:02 /opt/google/chrome/chrome
        1000 1898 0.0 0.6 91420 14080 ? S 17:11 0:00 /opt/google/chrome/chrome --type=zygote
        1000 1916 0.2 1.3 140780 27220 ? Sl 17:11 0:12 /opt/google/chrome/chrome --type=extension --disable-client-side-phishing-detection -
        1000 1918 0.7 1.8 155720 37912 ? Sl 17:11 0:31 /opt/google/chrome/chrome --type=extension --disable-client-side-phishing-detection -
        1000 1921 0.0 1.0 135904 21052 ? Sl 17:11 0:02 /opt/google/chrome/chrome --type=extension --disable-client-side-phishing-detection -
        1000 1927 6.5 3.6 194604 74960 ? Sl 17:11 4:32 /opt/google/chrome/chrome --type=renderer --disable-client-side-phishing-detection --
        1000 2156 0.4 0.7 48344 14896 ? Rl 18:03 0:04 gnome-terminal
        1000 2157 0.0 0.0 1988 712 ? S 18:03 0:00 gnome-pty-helper
        1000 2158 0.0 0.1 6504 3860 pts/0 Ss 18:03 0:00 bash
        1000 2564 0.2 0.1 6624 3984 pts/1 Ss+ 18:17 0:00 bash
        1000 2711 0.0 0.0 4208 1352 ? S 18:19 0:00 /bin/bash /home/alexandros/Programme/TeamSpeak3-Client-linux_x86_back/ts3client_runsc
        1000 2714 36.5 1.5 210872 31960 ? SLl 18:19 0:18 ./ts3client_linux_x86
        1000 2743 0.0 0.0 2716 1068 pts/0 R+ 18:20 0:00 ps aux

    Output of vmstat:

        alexandros@alexandros-laptop:~$ vmstat
        procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu----
        r b swpd free buff cache si so bi bo in cs us sy id wa
        0 0 0 1093324 69840 449496 0 0 27 10 476 667 6 2 91 1

    Output of lspci:

        alexandros@alexandros-laptop:~$ lspci
        00:00.0 Host bridge: Silicon Integrated Systems [SiS] 671MX
        00:01.0 PCI bridge: Silicon Integrated Systems [SiS] PCI-to-PCI bridge
        00:02.0 ISA bridge: Silicon Integrated Systems [SiS] SiS968 [MuTIOL Media IO] (rev 01)
        00:02.5 IDE interface: Silicon Integrated Systems [SiS] 5513 [IDE] (rev 01)
        00:03.0 USB Controller: Silicon Integrated Systems [SiS] USB 1.1 Controller (rev 0f)
        00:03.1 USB Controller: Silicon Integrated Systems [SiS] USB 1.1 Controller (rev 0f)
        00:03.3 USB Controller: Silicon Integrated Systems [SiS] USB 2.0 Controller
        00:05.0 IDE interface: Silicon Integrated Systems [SiS] SATA Controller / IDE mode (rev 03)
        00:06.0 PCI bridge: Silicon Integrated Systems [SiS] PCI-to-PCI bridge
        00:07.0 PCI bridge: Silicon Integrated Systems [SiS] PCI-to-PCI bridge
        00:0d.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL-8139/8139C/8139C+ (rev 10)
        00:0f.0 Audio device: Silicon Integrated Systems [SiS] Azalia Audio Controller
        01:00.0 VGA compatible controller: ATI Technologies Inc Mobility Radeon X2300
        02:00.0 Ethernet controller: Atheros Communications Inc. AR5001 Wireless Network Adapter (rev 01)

    The TeamSpeak log file:

        2011-06-19 19:04:04.223522|INFO | | | Logging started, clientlib version: 3.0.0-rc2 [Build: 14642]
        2011-06-19 19:04:04.761149|ERROR |SoundBckndIntf| | /home/alexandros/Programme/TeamSpeak3-Client-linux_x86_back/soundbackends/libpulseaudio_linux_x86.so error: NOT_CONNECTED
        2011-06-19 19:04:05.871770|INFO |ClientUI | | Failed to init text to speech engine
        2011-06-19 19:04:05.894623|INFO |ClientUI | | TeamSpeak 3 client version: 3.0.0-rc2 [Build: 14642]
        2011-06-19 19:04:05.895421|INFO |ClientUI | | Qt version: 4.7.2
        2011-06-19 19:04:05.895571|INFO |ClientUI | | Using configuration location: /home/alexandros/.ts3client/ts3clientui_qt.conf
        2011-06-19 19:04:06.559596|INFO |ClientUI | | Last update check was: Sa. Jun 18 00:08:43 2011
        2011-06-19 19:04:06.560506|INFO | | | Checking for updates...
        2011-06-19 19:04:07.357869|INFO | | | Update check, my version: 14642, latest version: 14642
        2011-06-19 19:05:52.978481|INFO |PreProSpeex | 1| Speex version: 1.2rc1
        2011-06-19 19:05:54.055347|INFO |UIHelpers | | setClientVolumeModifier: 10 -8
        2011-06-19 19:05:54.057196|INFO |UIHelpers | | setClientVolumeModifier: 11 2

    Thanks for taking the time to read my message.

    UPDATE: Thanks to nickguletskii's link I googled for "alsa cpu usage" (without quotes) and it brought me to a forum. A user there wrote that selecting the hardware directly with "plughw:x.x" won't impact the performance of the system. I selected that in the TS3 configuration and it worked. But this solution is not optimal, because now no other program can access the sound output. If you need any further information, or my question is unclear, then please tell me.
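    For what it's worth, the usual ALSA answer to that drawback is the dmix plugin, which mixes several streams in software on top of direct hardware access instead of handing the device exclusively to one program. A sketch of a typical ~/.asoundrc (the card/device numbers and buffer sizes are illustrative and have to match the actual hardware):

        # ~/.asoundrc -- software mixing on top of direct hardware access
        pcm.dmixer {
            type dmix
            ipc_key 1024          # any unique integer
            slave {
                pcm "hw:0,0"      # first card, first device -- adjust as needed
                period_size 1024
                buffer_size 4096
            }
        }

        pcm.!default {
            type plug
            slave.pcm "dmixer"    # route everything through the mixer
        }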


  • CodePlex Daily Summary for Friday, December 31, 2010

    Popular Releases

    Free Silverlight & WPF Chart Control - Visifire: Visifire SL and WPF Charts v3.6.6 Released: Hi, today we are releasing the final version of Visifire, v3.6.6, with the following new features: * TextDecorations property is implemented in Title for Chart. * TitleTextDecorations property is implemented in Axis. * MinPointHeight property is now applicable for Column and Bar Charts. This release also includes a few bug fixes: * ToolTipText property of DataSeries was not getting applied from Style. * Chart threw an exception if IndicatorEnabled property was set to true and Too...

    Windows Weibo all in one for Sina Sohu and QQ: WeiBee V0.1: WeiBee is an all in one Twitter tool, which can update WeiBo at the same time for several websites. It intends to support t.sohu.com, t.sina.com.cn and t.qq.com.cn. If you have WeiBo at SOHU, SINA and QQ, you can try this tool to save time opening all the webpages to update your status. For any business opportunity, such as placing advertising on the China Twitter market, or building a custom WeiBo tool, please reach my email box qq1800@gmail.com. My official WeiBo is http://mediaroom.t.sohu.com

    StyleCop Compliant Visual Studio Code Snippets: Visual Studio Code Snippets - January 2011: Visual Studio 2010 provides C# developers with 38 code snippets, enhancing developer productivity and increasing the consistency of the code. Within this project the original code snippets have been refactored to provide StyleCop compliant versions of the original code snippets while also adding many new code snippets. Within the January 2011 release you'll find 82 code snippets to make you more productive and the code you write more consistent!...

    WPF Application Framework (WAF): WPF Application Framework (WAF) 2.0.0.2: Version: 2.0.0.2 (Milestone 2): This release contains the source code of the WPF Application Framework (WAF) and the sample applications. Requirements: .NET Framework 4.0 (the package contains a solution file for Visual Studio 2010); the unit test projects require Visual Studio 2010 Professional. Remark: the sample applications use Microsoft's IoC container MEF. However, the WPF Application Framework (WAF) doesn't force you to use the same IoC container in your application. You can use ...

    eCompany: eCompany v0.2.0 Build 63: Version 0.2.0 Build 63: Added splash screen & about box. Added downloading of currencies when eCompany is launched for the first time (must close any bug caused by no currency rate existing). Added corp creation when eCompany is launched for the first time (for now, you don't need to edit the company.xml file manually). You just need to decompress the file "eCompany v0.2.0.63.zip" into your current eCompany install directory.

    SQL Monitor - tracking sql server activities: SQL Monitor 3.0 alpha 8: 1. added truncate table/defrag index/check db functions 2. improved alert 3. fixed problem with alert causing config file corruption (hopefully)

    Analysis Services Stored Procedure Project: 1.3.5 Release: This release includes the following fixes and new functionality: updates to GetCubeLastProcessedDate to work with perspectives; fixes to reports that call Discover functions; improved drillthrough functions against perspectives; improved ExecuteDrillthroughAndFixColumns logic; fixed a situation where an MDX query calling certain ASSP sprocs which opened external connections caused a deadlock in SSAS processing; small fix to Partition code when the DbColumnName property doesn't exist; changes...

    DocX: DocX v1.0.0.11: Building the Examples project: to build the Examples project, download DocX.dll and add it as a reference to the project. Overview: this version of DocX contains many bug fixes; it is a serious step towards a stable release. Added: 1) unit testing project, 2) Examples project, 3) too many bug fixes to list here, see the source code change list history.

    Cosmos (C# Open Source Managed Operating System): 71406: This is the second release supporting the full line of Visual Studio 2010 editions. Changes since release 71246 include: debug info is now stored in a single .cpdb file (which is a Firebird database); keyboard input works now (using Console.ReadLine); console colors work (using Console.ForegroundColor and .BackgroundColor)

    AutoLoL: AutoLoL v1.5.0: Added the all new Masteries Browser which replaces the Quick Open combobox. AutoLoL will now attempt to create file associations for mastery (*.lolm) files. Each Mastery Build can now contain keywords that the Masteries Browser will use for filtering. Changed the way AutoLoL detects if another instance is already running. Changed the format of the mastery files to allow more information to be stored in* Dialogs will now focus the Ok or Cancel button, which allows the user to press Return to clo...

    Paint.NET PSD Plugin: 1.6.0: Handling of layer masks has been greatly improved. Improved reliability: many PSD files that previously loaded in as garbage will now load in correctly. Parallelized loading: PSD files containing layer masks will load in a bit quicker thanks to the removal of the sequential bottleneck. Hidden layers are no longer made visible on save. Many thanks to the users who helped expose the layer masks problem: Rob Horowitz, M_Lyons10. Please keep sending in those bug reports and PSD repro files!

    Facebook C# SDK: 4.1.1: From the 4.1.1 release: authentication bug fix caused by a Facebook change (error with redirects in Safari); Authenticator fix, always returning true. From the 4.1.0 release: lots of bug fixes; removed Dynamic Runtime Language dependencies from non-dynamic platforms; samples included in the release for ASP.NET, MVC, Silverlight, Windows Phone 7, WPF, WinForms, and one Visual Basic sample; changed internal serialization to use Json.net. BREAKING CHANGE: Canvas Session is no longer supported. Use Signed...

    Catel - WPF and Silverlight MVVM library: 1.0.0: And there it is, the final release of Catel, and it is no longer a beta version!

    EnhSim: EnhSim 2.2.7 ALPHA: This release supports WoW patch 4.03a at level 85. To use this release, you must have the Microsoft Visual C++ 2010 Redistributable Package installed. This can be downloaded from http://www.microsoft.com/downloads/en/details.aspx?FamilyID=A7B7A05E-6DE6-4D3A-A423-37BF0912DB84 To use the GUI you must have the .NET 4.0 Framework installed. This can be downloaded from http://www.microsoft.com/downloads/en/details.aspx?FamilyID=9cfb2d51-5ff4-4491-b0e5-b386f32c0992 - Mongoose has bee...

    Euro for Windows XP: ChangeRegionalSettings 1..0: *

    Rocket Framework (.Net 4.0): Rocket Framework for Windows V 1.0.0: The architecture has been reviewed and adjusted so that the Web version and WPF version of this framework can be introduced next. Rocket.Core is introduced. Controller button functions revisited and updated. DB is renewed to suit the implemented features. Create New button functionality is changed. Added question handling features.

    Flickr Wallpaper Rotator (for Windows desktop): Wallpaper Flickr 1.1: Some minor bugfixes (mostly covering when the network connection is flaky; I discovered them all while at my parents' house for Christmas).

    NoSimplerAccounting: NoSimplerAccounting 6.0: Fixed a bug in the expense category report.

    NHibernate Mapping Generator: NHibernate Mapping Generator 2.0: Added support for Postgres (thanks to Angelo).

    NewLife XCode: XCode v6.5.2010.1223 (the remainder of these release notes is Chinese text that is garbled in the source; the recoverable items mention NewLife.Core, NewLife.Net, XControl, XTemplate, XAgent, NewLife.CommonEnitty, XCode v3.5.2009.0714, XCoder and XCoder_Src)

    New Projects

    1i2m3i4s5e6r7p: 1i2m3i4s5e6r7p
    A3 Fashion Web: A complete e-commerce site.
    Afonsoft Blog - Blog de teste: A test blog for ASP.NET 3.5 development against MySQL or SQL Server.
    AnyGrid for ASP.NET MVC: Which grid component should you use for your ASP.NET MVC project? How about all of them? AnyGrid makes it easy to switch between grid implementations, allowing a single action to, e.g., use two different grids for desktop and mobile views. It also supports DataAnnotations.
    BizTalk Map Test Framework: The Map Test Framework makes it easier for BizTalk developers to test their maps. You'll no longer have to maintain a whole bunch of XML files for your tests. The use of template files and xpath queries to perform tests will increase your productivity tremendously.
    BlogResult.NET: A simple blogging starter kit using S#arp Architecture. This project was originally course material sample code for an MVC class, but we decided to open source it and make it available to the community.
    Caro: Demo code for a Caro game.
    Ecozany Skin for DotNetNuke: This Ecozany package contains three sample skins and a collection of containers for use in your DotNetNuke web sites.
    Enhanced lookup field: Enhanced lookup field makes it easier for end users to create filters. You'll no longer have to use a custom field or javascript to have a parent child lookup, for example. It's developed in C#. Features: parent/multiple child lookups; advanced filter options; easy to use.
    jobglance: This is a job site.
    MaxLeafWeb_K3: MaxLeafWeb_K3
    Miage Kart: An M1 Java project.
    msystem: An MVC .NET web system.
    NetController: A controller without direction keys, using the accelerometer and the .NET Micro Framework.
    Pencils - A Blog Site framework: Another blog site framework.
    PICTShell: PICT is an efficient way to design test cases and test configurations for software systems. PICTShell is the GUI for PICT.
    Portal de Solicitação de Serviços: The service request portal is a solution that serves various service-provider companies. It was initially developed for an IT services company, but the goal is that, with the source code published, it will be extended to many other kinds of providers.
    Runery: Runery RSPS
    sCut4s: sCut4s
    Sumacê Jogava Caxangá?: sumace makes it easier for gamers to play. You'll no longer have to play. It's developed in XNA.
    Supply_Chain_FInance: Supply chain finance thrives these years all over the world. In exploring its inner working mechanism, we sought a way to embed an information system in it. We developed the system to suit domestic supply chain finance.
    TestAmir: (description garbled in the source)
    The Cosmos: School project on a SOA based system (loan system).
    Time zone: A C# library for managing time changes for any time zone in the world. Based on the Olson database.
    Validate: Validate is a collection of extension methods that let you add validations to any object and display validations in a user/developer friendly format. It's an extremely lightweight validation framework with a zero learning curve.
    Windows Phone 7 Silverlight ListBox with CheckBox Control: A Windows Phone 7 ListBox control, providing CheckBoxes for item selection when in "Choose State" and no CheckBoxes in "Normal State", like the built-in Email app, with a nice transition. To switch between states, set the "IsInChooseState" property. The control inherits ListBox.
    XQSOFT.WF.Designer: This is a workflow designer.
    zhanghai: My test project.


  • AngularJs ng-cloak Problems on large Pages

    - by Rick Strahl
    I've been working on a rather complex and large Angular page. Unlike a typical AngularJs SPA style 'application', this particular page is just that: a single page with a large amount of data on it that has to be visible all at once. The problem is that when this large page loads, it flickers and displays template markup briefly before kicking into its actual content rendering. This is what the Angular ng-cloak directive is supposed to address, but in this case I had no luck getting it to work properly.

    This application is a shop floor app where workers need to see all related information in one big screen view, so some of the benefits of Angular's routing and view swapping features couldn't be applied. Instead, we decided to have one very big view but lots of ng-controllers and directives to break out the logic for code separation. For code separation this works great – there are a number of small controllers that deal with their own individual and isolated application concerns. For HTML separation we used partial ASP.NET MVC Razor views, which made breaking the HTML out into manageable pieces super easy and made migration of this page from a previous server side Razor page much easier. We were also able to leverage most of our server side localization without a lot of changes as a bonus. But as a result of this choice, the initial HTML document that loads is rather large – even without any data loaded into it – resulting in a fairly large DOM tree that Angular must manage.

    Large Page and Angular Startup

    The problem on this particular page is that there's quite a bit of markup – 35k's worth of markup without any data loaded, in fact. It's a large HTML page with a complex DOM tree, and there are quite a lot of Angular {{ }} markup expressions in the document. Angular provides the ng-cloak directive to try and hide the element it cloaks, so that you don't see the flash of these markup expressions when the page initially loads, before Angular has a chance to render the data into the markup expressions.

        <div id="mainContainer" class="mainContainer boxshadow"
             ng-app="app" ng-cloak>

    Note the ng-cloak attribute on this element, which here is an outer wrapper element of most of this large page's content. ng-cloak is supposed to prevent displaying the content below it until Angular has taken control and is ready to render the data into the templates. Alas, with this large page the end result unfortunately is a brief flicker of un-rendered markup. It's brief, but plenty ugly – right? And depending on the speed of the machine, this flash gets more noticeable on slow machines that take longer to process the initial HTML DOM.

    ng-cloak Styles

    ng-cloak works by temporarily hiding the marked-up element, and it does this by essentially applying a style that does this:

        [ng\:cloak], [ng-cloak], [data-ng-cloak], [x-ng-cloak], .ng-cloak, .x-ng-cloak {
            display: none !important;
        }

    This style is inlined as part of AngularJs itself.
If you look at the angular.js source file you’ll find this at the very end of the file:!angular.$$csp() && angular.element(document) .find('head') .prepend('<style type="text/css">@charset "UTF-8";[ng\\:cloak],[ng-cloak],' + '[data-ng-cloak],[x-ng-cloak],.ng-cloak,.x-ng-cloak,' + '.ng-hide{display:none !important;}ng\\:form{display:block;}' + '.ng-animate-block-transitions{transition:0s all!important;-webkit-transition:0s all!important;}' + '</style>'); This is meant to initially hide any elements that contain the ng-cloak attribute or one of the other Angular directive permutations of the markup. Unfortunately on this particular web page ng-cloak had no effect – I still see the flicker. Why doesn’t ng-cloak work? The problem is of course – timing. Angular actually needs to get control of the page before it ever starts doing anything – even processing the ng-cloak attribute (or style etc.). Because this page is rather large (about 35k of non-data HTML) it takes a while for the browser to actually plow through the HTML. With the Angular <script> tag defined at the bottom of the page after the HTML DOM content there’s a slight delay, and that delay causes the flicker. For smaller pages the initial DOM load/parse cycle is so fast that the markup never shows, but with larger content pages it may show and become an annoying problem. Workarounds There are a number of simple ways around this issue and some of them are hinted at in the Angular documentation. Load Angular Sooner One obvious thing that would help is to load Angular at the top of the page BEFORE the DOM loads, which would give it much earlier control. The old ng-cloak documentation actually recommended putting the Angular.js script into the header of the page (apparently this was recently removed), but generally it’s not a good practice to load scripts in the header for page load performance. This is especially true if you load other libraries like jQuery, which should be loaded prior to loading Angular so it can use jQuery rather than its own jqLite subset. This is not something I normally would like to do and also something that I’d likely forget in the future and end up right back here :-). Use ng-include for Child Content Angular supports nesting of child templates via the ng-include directive, which essentially delay-loads HTML content. This helps by removing a lot of the template content from the main page and so getting control to Angular a lot sooner in order to hide the markup template content. In the application in question, I realize that in hindsight it might have been smarter to break this page out with client side ng-include directives instead of the MVC Razor partial views we used to break up the page sections. Razor partial views give that nice separation as well, but in the end Razor puts humpty dumpty (i.e. the HTML) back together into a whole single and rather large HTML document. Razor provides the logical separation, but still results in a large physical result document. But Razor also ended up being helpful for a few security related blocks handled via server side template logic that simply excludes certain parts of the UI the user is not allowed to see – something that you can’t really do with client side exclusion like ng-hide/ng-show: client side content is always there, whereas on the server side you can simply not send it to the client. Another reason I’m not a huge fan of ng-include is that it adds another HTTP hit to a request, as templates are loaded from the server dynamically as needed.
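To illustrate, a minimal ng-include based partitioning of a page like this might look like the following sketch (the template paths here are hypothetical placeholders, not from the actual application):
<div id="mainContainer" class="mainContainer boxshadow" ng-app="app" ng-cloak>
    <!-- Each section is fetched and compiled by Angular on demand, so the
         raw template markup never appears in the initial HTML document. -->
    <div ng-include="'templates/orderHeader.html'"></div>
    <div ng-include="'templates/workItemList.html'"></div>
    <div ng-include="'templates/statusFooter.html'"></div>
</div>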
Given that this page was already heavy with resources, adding another 10 separate ng-include directives wouldn’t be beneficial :-). ng-include is a valid option, though, if you start from scratch and partition your logic. Of course, if you don’t have complex pages, having completely separate views that are swapped in as they are accessed is even better, but we didn’t have this option due to the information having to be on screen all at once. Avoid using {{ }} Expressions The biggest issue that ng-cloak attempts to address isn’t so much displaying the original content – it’s displaying empty {{ }} markup expression tags that get embedded into content. It gives you the dreaded “now you see it, now you don’t” effect where you sometimes see three separate rendering states: markup junk, empty views, then views filled with data. If you can remove {{ }} expressions from the page, you remove most of the perceived double draw effect, as you would effectively start with a blank form and go straight to a filled form. To do this you can forgo {{ }} expressions and replace them with ng-bind directives on DOM elements. For example you can turn:<div class="list-item-name listViewOrderNo"> <a href='#'>{{lineItem.MpsOrderNo}}</a> </div> into:<div class="list-item-name listViewOrderNo"> <a href="#" ng-bind="lineItem.MpsOrderNo"></a> </div> to get identical results, but because the {{ }} expression has been removed there’s no double draw effect for this element. Again, not a great solution. The {{ }} syntax sure reads cleaner and is more fluent to type IMHO. In some cases you may also not have an outer element to attach ng-bind to, which then requires you to artificially inject DOM elements into the page. This is especially painful if you have several consecutive values like {{Firstname}} {{Lastname}} for example. It’s an option, though, especially if you think of this issue up front and you don’t have a ton of expressions to deal with. Add the ng-cloak Styles manually You can also explicitly define the CSS styles that Angular injects via code in your application’s style sheet. By doing so the styles become immediately available and so are applied right when the page loads – no flicker. I use the minimal:[ng-cloak] { display: none !important; } which works for:<div id="mainContainer" class="mainContainer dialog boxshadow" ng-app="app" ng-cloak> If you use one of the other combinations add the other CSS selectors as well, or use the full style shown earlier. Angular will still inject its version of the ng-cloak styling and override these settings later, but your own style does the trick of hiding the content before that CSS is injected into the page. Adding the CSS in your own style sheet works well, and is IMHO by far the best option. The nuclear option: Hiding the Content manually Using the explicit CSS is the best choice, so the following shouldn’t ever be necessary. But I’ll mention it here as it gives some insight into how you can hide/show content manually on load for other frameworks or in your own markup based templates. Before I figured out that I could explicitly embed the CSS style into the page, I had tried to figure out why ng-cloak wasn’t doing its job. After wasting an hour getting nowhere I finally decided to just manually hide and show the container. The idea is simple – initially hide the container, then show it once Angular has done its initial processing and removal of the template markup from the page. You can manually hide the content and make it visible after Angular has gotten control.
To do this I used:<div id="mainContainer" class="mainContainer boxshadow" ng-app="app" style="display:none"> Notice the display:none style that explicitly hides the element initially on the page. Then, once Angular has run its initialization and effectively processed the template markup on the page, you can show the content. For Angular this ‘ready’ event is the app.run() function:app.run( function ($rootScope, $location, cellService) { $("#mainContainer").show(); … }); This effectively removes the display:none style and the content displays. By the time app.run() fires, the DOM is ready to be displayed with filled data or at least empty data – Angular has gotten control. Edge Case Clearly this is an edge case. In general initial HTML pages tend to be reasonably sized and the load times for the HTML and Angular are fast enough that there’s no flicker between the rendering times. This only becomes an issue as the initial pages get rather large. Regardless – if you have an Angular application it’s probably a good idea to add the CSS style into your application’s CSS (or a common shared one) just to make sure that content is always hidden. You never know how slow of a browser somebody might be running, and while your super fast dev machine might not show any flicker, grandma’s old XP box very well might… © Rick Strahl, West Wind Technologies, 2005-2014. Posted in Angular, JavaScript, CSS, HTML

    Read the article

  • Help with Collision Resolution?

    - by Milo
    I'm trying to learn about physics by trying to make a simplified GTA 2 clone. My only problem is collision resolution. Everything else works great. I have a rigid body class and from there cars and a wheel class: class RigidBody extends Entity { //linear private Vector2D velocity = new Vector2D(); private Vector2D forces = new Vector2D(); private OBB2D predictionRect = new OBB2D(new Vector2D(), 1.0f, 1.0f, 0.0f); private float mass; private Vector2D deltaVec = new Vector2D(); private Vector2D v = new Vector2D(); //angular private float angularVelocity; private float torque; private float inertia; //graphical private Vector2D halfSize = new Vector2D(); private Bitmap image; private Matrix mat = new Matrix(); private float[] Vector2Ds = new float[2]; private Vector2D tangent = new Vector2D(); private static Vector2D worldRelVec = new Vector2D(); private static Vector2D relWorldVec = new Vector2D(); private static Vector2D pointVelVec = new Vector2D(); public RigidBody() { //set these defaults so we don't get divide by zeros mass = 1.0f; inertia = 1.0f; setLayer(LAYER_OBJECTS); } protected void rectChanged() { if(getWorld() != null) { getWorld().updateDynamic(this); } } //intialize out parameters public void initialize(Vector2D halfSize, float mass, Bitmap bitmap) { //store physical parameters this.halfSize = halfSize; this.mass = mass; image = bitmap; inertia = (1.0f / 20.0f) * (halfSize.x * halfSize.x) * (halfSize.y * halfSize.y) * mass; RectF rect = new RectF(); float scalar = 10.0f; rect.left = (int)-halfSize.x * scalar; rect.top = (int)-halfSize.y * scalar; rect.right = rect.left + (int)(halfSize.x * 2.0f * scalar); rect.bottom = rect.top + (int)(halfSize.y * 2.0f * scalar); setRect(rect); predictionRect.set(rect); } public void setLocation(Vector2D position, float angle) { getRect().set(position, getWidth(), getHeight(), angle); rectChanged(); } public void setPredictionLocation(Vector2D position, float angle) { getPredictionRect().set(position, getWidth(), getHeight(), angle); } public void setPredictionCenter(Vector2D center) { getPredictionRect().moveTo(center); } public void setPredictionAngle(float angle) { predictionRect.setAngle(angle); } public Vector2D getPosition() { return getRect().getCenter(); } public OBB2D getPredictionRect() { return predictionRect; } @Override public void update(float timeStep) { doUpdate(false,timeStep); } public void doUpdate(boolean prediction, float timeStep) { //integrate physics //linear Vector2D acceleration = Vector2D.scalarDivide(forces, mass); if(prediction) { Vector2D velocity = Vector2D.add(this.velocity, Vector2D.scalarMultiply(acceleration, timeStep)); Vector2D c = getRect().getCenter(); c = Vector2D.add(getRect().getCenter(), Vector2D.scalarMultiply(velocity , timeStep)); setPredictionCenter(c); //forces = new Vector2D(0,0); //clear forces } else { velocity.x += (acceleration.x * timeStep); velocity.y += (acceleration.y * timeStep); //velocity = Vector2D.add(velocity, Vector2D.scalarMultiply(acceleration, timeStep)); Vector2D c = getRect().getCenter(); v.x = getRect().getCenter().getX() + (velocity.x * timeStep); v.y = getRect().getCenter().getY() + (velocity.y * timeStep); deltaVec.x = v.x - c.x; deltaVec.y = v.y - c.y; deltaVec.normalize(); setCenter(v.x, v.y); forces.x = 0; //clear forces forces.y = 0; } //angular float angAcc = torque / inertia; if(prediction) { float angularVelocity = this.angularVelocity + angAcc * timeStep; setPredictionAngle(getAngle() + angularVelocity * timeStep); //torque = 0; //clear torque } else { 
angularVelocity += angAcc * timeStep; setAngle(getAngle() + angularVelocity * timeStep); torque = 0; //clear torque } } public void updatePrediction(float timeStep) { doUpdate(true, timeStep); } //take a relative Vector2D and make it a world Vector2D public Vector2D relativeToWorld(Vector2D relative) { mat.reset(); Vector2Ds[0] = relative.x; Vector2Ds[1] = relative.y; mat.postRotate(JMath.radToDeg(getAngle())); mat.mapVectors(Vector2Ds); relWorldVec.x = Vector2Ds[0]; relWorldVec.y = Vector2Ds[1]; return new Vector2D(Vector2Ds[0], Vector2Ds[1]); } //take a world Vector2D and make it a relative Vector2D public Vector2D worldToRelative(Vector2D world) { mat.reset(); Vector2Ds[0] = world.x; Vector2Ds[1] = world.y; mat.postRotate(JMath.radToDeg(-getAngle())); mat.mapVectors(Vector2Ds); return new Vector2D(Vector2Ds[0], Vector2Ds[1]); } //velocity of a point on body public Vector2D pointVelocity(Vector2D worldOffset) { tangent.x = -worldOffset.y; tangent.y = worldOffset.x; return Vector2D.add( Vector2D.scalarMultiply(tangent, angularVelocity) , velocity); } public void applyForce(Vector2D worldForce, Vector2D worldOffset) { //add linear force forces.x += worldForce.x; forces.y += worldForce.y; //add associated torque torque += Vector2D.cross(worldOffset, worldForce); } @Override public void draw( GraphicsContext c) { c.drawRotatedScaledBitmap(image, getPosition().x, getPosition().y, getWidth(), getHeight(), getAngle()); } public Vector2D getVelocity() { return velocity; } public void setVelocity(Vector2D velocity) { this.velocity = velocity; } public Vector2D getDeltaVec() { return deltaVec; } } Vehicle public class Wheel { private Vector2D forwardVec; private Vector2D sideVec; private float wheelTorque; private float wheelSpeed; private float wheelInertia; private float wheelRadius; private Vector2D position = new Vector2D(); public Wheel(Vector2D position, float radius) { this.position = position; setSteeringAngle(0); wheelSpeed = 0; wheelRadius = radius; wheelInertia = (radius * radius) * 1.1f; } public void setSteeringAngle(float newAngle) { Matrix mat = new Matrix(); float []vecArray = new float[4]; //forward Vector vecArray[0] = 0; vecArray[1] = 1; //side Vector vecArray[2] = -1; vecArray[3] = 0; mat.postRotate(newAngle / (float)Math.PI * 180.0f); mat.mapVectors(vecArray); forwardVec = new Vector2D(vecArray[0], vecArray[1]); sideVec = new Vector2D(vecArray[2], vecArray[3]); } public void addTransmissionTorque(float newValue) { wheelTorque += newValue; } public float getWheelSpeed() { return wheelSpeed; } public Vector2D getAnchorPoint() { return position; } public Vector2D calculateForce(Vector2D relativeGroundSpeed, float timeStep, boolean prediction) { //calculate speed of tire patch at ground Vector2D patchSpeed = Vector2D.scalarMultiply(Vector2D.scalarMultiply( Vector2D.negative(forwardVec), wheelSpeed), wheelRadius); //get velocity difference between ground and patch Vector2D velDifference = Vector2D.add(relativeGroundSpeed , patchSpeed); //project ground speed onto side axis Float forwardMag = new Float(0.0f); Vector2D sideVel = velDifference.project(sideVec); Vector2D forwardVel = velDifference.project(forwardVec, forwardMag); //calculate super fake friction forces //calculate response force Vector2D responseForce = Vector2D.scalarMultiply(Vector2D.negative(sideVel), 2.0f); responseForce = Vector2D.subtract(responseForce, forwardVel); float topSpeed = 500.0f; //calculate torque on wheel wheelTorque += forwardMag * wheelRadius; //integrate total torque into wheel wheelSpeed += 
wheelTorque / wheelInertia * timeStep; //top speed limit (kind of a hack) if(wheelSpeed > topSpeed) { wheelSpeed = topSpeed; } //clear our transmission torque accumulator wheelTorque = 0; //return force acting on body return responseForce; } public void setTransmissionTorque(float newValue) { wheelTorque = newValue; } public float getTransmissionTourque() { return wheelTorque; } public void setWheelSpeed(float speed) { wheelSpeed = speed; } } //our vehicle object public class Vehicle extends RigidBody { private Wheel [] wheels = new Wheel[4]; private boolean throttled = false; public void initialize(Vector2D halfSize, float mass, Bitmap bitmap) { //front wheels wheels[0] = new Wheel(new Vector2D(halfSize.x, halfSize.y), 0.45f); wheels[1] = new Wheel(new Vector2D(-halfSize.x, halfSize.y), 0.45f); //rear wheels wheels[2] = new Wheel(new Vector2D(halfSize.x, -halfSize.y), 0.75f); wheels[3] = new Wheel(new Vector2D(-halfSize.x, -halfSize.y), 0.75f); super.initialize(halfSize, mass, bitmap); } public void setSteering(float steering) { float steeringLock = 0.13f; //apply steering angle to front wheels wheels[0].setSteeringAngle(steering * steeringLock); wheels[1].setSteeringAngle(steering * steeringLock); } public void setThrottle(float throttle, boolean allWheel) { float torque = 85.0f; throttled = true; //apply transmission torque to back wheels if (allWheel) { wheels[0].addTransmissionTorque(throttle * torque); wheels[1].addTransmissionTorque(throttle * torque); } wheels[2].addTransmissionTorque(throttle * torque); wheels[3].addTransmissionTorque(throttle * torque); } public void setBrakes(float brakes) { float brakeTorque = 15.0f; //apply brake torque opposing wheel vel for (Wheel wheel : wheels) { float wheelVel = wheel.getWheelSpeed(); wheel.addTransmissionTorque(-wheelVel * brakeTorque * brakes); } } public void doUpdate(float timeStep, boolean prediction) { for (Wheel wheel : wheels) { float wheelVel = wheel.getWheelSpeed(); //apply negative force to naturally slow down car if(!throttled && !prediction) wheel.addTransmissionTorque(-wheelVel * 0.11f); Vector2D worldWheelOffset = relativeToWorld(wheel.getAnchorPoint()); Vector2D worldGroundVel = pointVelocity(worldWheelOffset); Vector2D relativeGroundSpeed = worldToRelative(worldGroundVel); Vector2D relativeResponseForce = wheel.calculateForce(relativeGroundSpeed, timeStep,prediction); Vector2D worldResponseForce = relativeToWorld(relativeResponseForce); applyForce(worldResponseForce, worldWheelOffset); } //no throttling yet this frame throttled = false; if(prediction) { super.updatePrediction(timeStep); } else { super.update(timeStep); } } @Override public void update(float timeStep) { doUpdate(timeStep,false); } public void updatePrediction(float timeStep) { doUpdate(timeStep,true); } public void inverseThrottle() { float scalar = 0.2f; for(Wheel wheel : wheels) { wheel.setTransmissionTorque(-wheel.getTransmissionTourque() * scalar); wheel.setWheelSpeed(-wheel.getWheelSpeed() * 0.1f); } } } And my big hack collision resolution: private void update() { camera.setPosition((vehicle.getPosition().x * camera.getScale()) - ((getWidth() ) / 2.0f), (vehicle.getPosition().y * camera.getScale()) - ((getHeight() ) / 2.0f)); //camera.move(input.getAnalogStick().getStickValueX() * 15.0f, input.getAnalogStick().getStickValueY() * 15.0f); if(input.isPressed(ControlButton.BUTTON_GAS)) { vehicle.setThrottle(1.0f, false); } if(input.isPressed(ControlButton.BUTTON_STEAL_CAR)) { vehicle.setThrottle(-1.0f, false); } 
if(input.isPressed(ControlButton.BUTTON_BRAKE)) { vehicle.setBrakes(1.0f); } vehicle.setSteering(input.getAnalogStick().getStickValueX()); //vehicle.update(16.6666666f / 1000.0f); boolean colided = false; vehicle.updatePrediction(16.66666f / 1000.0f); List<Entity> buildings = world.queryStaticSolid(vehicle,vehicle.getPredictionRect()); if(buildings.size() > 0) { colided = true; } if(!colided) { vehicle.update(16.66f / 1000.0f); } else { Vector2D delta = vehicle.getDeltaVec(); vehicle.setVelocity(Vector2D.negative(vehicle.getVelocity().multiply(0.2f)). add(delta.multiply(-1.0f))); vehicle.inverseThrottle(); } } Here is OBB public class OBB2D { // Corners of the box, where 0 is the lower left. private Vector2D corner[] = new Vector2D[4]; private Vector2D center = new Vector2D(); private Vector2D extents = new Vector2D(); private RectF boundingRect = new RectF(); private float angle; //Two edges of the box extended away from corner[0]. private Vector2D axis[] = new Vector2D[2]; private double origin[] = new double[2]; public OBB2D(Vector2D center, float w, float h, float angle) { set(center,w,h,angle); } public OBB2D(float left, float top, float width, float height) { set(new Vector2D(left + (width / 2), top + (height / 2)),width,height,0.0f); } public void set(Vector2D center,float w, float h,float angle) { Vector2D X = new Vector2D( (float)Math.cos(angle), (float)Math.sin(angle)); Vector2D Y = new Vector2D((float)-Math.sin(angle), (float)Math.cos(angle)); X = X.multiply( w / 2); Y = Y.multiply( h / 2); corner[0] = center.subtract(X).subtract(Y); corner[1] = center.add(X).subtract(Y); corner[2] = center.add(X).add(Y); corner[3] = center.subtract(X).add(Y); computeAxes(); extents.x = w / 2; extents.y = h / 2; computeDimensions(center,angle); } private void computeDimensions(Vector2D center,float angle) { this.center.x = center.x; this.center.y = center.y; this.angle = angle; boundingRect.left = Math.min(Math.min(corner[0].x, corner[3].x), Math.min(corner[1].x, corner[2].x)); boundingRect.top = Math.min(Math.min(corner[0].y, corner[1].y),Math.min(corner[2].y, corner[3].y)); boundingRect.right = Math.max(Math.max(corner[1].x, corner[2].x), Math.max(corner[0].x, corner[3].x)); boundingRect.bottom = Math.max(Math.max(corner[2].y, corner[3].y),Math.max(corner[0].y, corner[1].y)); } public void set(RectF rect) { set(new Vector2D(rect.centerX(),rect.centerY()),rect.width(),rect.height(),0.0f); } // Returns true if other overlaps one dimension of this. private boolean overlaps1Way(OBB2D other) { for (int a = 0; a < axis.length; ++a) { double t = other.corner[0].dot(axis[a]); // Find the extent of box 2 on axis a double tMin = t; double tMax = t; for (int c = 1; c < corner.length; ++c) { t = other.corner[c].dot(axis[a]); if (t < tMin) { tMin = t; } else if (t > tMax) { tMax = t; } } // We have to subtract off the origin // See if [tMin, tMax] intersects [0, 1] if ((tMin > 1 + origin[a]) || (tMax < origin[a])) { // There was no intersection along this dimension; // the boxes cannot possibly overlap. return false; } } // There was no dimension along which there is no intersection. // Therefore the boxes overlap. return true; } //Updates the axes after the corners move. Assumes the //corners actually form a rectangle. private void computeAxes() { axis[0] = corner[1].subtract(corner[0]); axis[1] = corner[3].subtract(corner[0]); // Make the length of each axis 1/edge length so we know any // dot product must be less than 1 to fall within the edge. 
for (int a = 0; a < axis.length; ++a) { axis[a] = axis[a].divide((axis[a].length() * axis[a].length())); origin[a] = corner[0].dot(axis[a]); } } public void moveTo(Vector2D center) { Vector2D centroid = (corner[0].add(corner[1]).add(corner[2]).add(corner[3])).divide(4.0f); Vector2D translation = center.subtract(centroid); for (int c = 0; c < 4; ++c) { corner[c] = corner[c].add(translation); } computeAxes(); computeDimensions(center,angle); } // Returns true if the intersection of the boxes is non-empty. public boolean overlaps(OBB2D other) { if(right() < other.left()) { return false; } if(bottom() < other.top()) { return false; } if(left() > other.right()) { return false; } if(top() > other.bottom()) { return false; } if(other.getAngle() == 0.0f && getAngle() == 0.0f) { return true; } return overlaps1Way(other) && other.overlaps1Way(this); } public Vector2D getCenter() { return center; } public float getWidth() { return extents.x * 2; } public float getHeight() { return extents.y * 2; } public void setAngle(float angle) { set(center,getWidth(),getHeight(),angle); } public float getAngle() { return angle; } public void setSize(float w,float h) { set(center,w,h,angle); } public float left() { return boundingRect.left; } public float right() { return boundingRect.right; } public float bottom() { return boundingRect.bottom; } public float top() { return boundingRect.top; } public RectF getBoundingRect() { return boundingRect; } public boolean overlaps(float left, float top, float right, float bottom) { if(right() < left) { return false; } if(bottom() < top) { return false; } if(left() > right) { return false; } if(top() > bottom) { return false; } return true; } }; What I do is, when I predict a hit on the car, I force it back. It does not work that well and seems like a bad idea. What could I do to get proper collision resolution, such that if I hit a wall I never get stuck in it, and if I hit the side of a wall I can steer my way out of it? Thanks. I found this nice PPT. It talks about pulling objects apart and calculating new velocities. How could I calculate new velocities in my case? http://coitweb.uncc.edu/~tbarnes2/GameDesignFall05/Slides/Ch4.2-CollDet.ppt
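One common approach, sketched below in the question's own Vector2D/OBB2D style, is to pull the objects apart first and then fix the velocity. Note this is only a sketch: computeMTV() is a hypothetical helper that would return the minimum translation vector (the smallest push that separates the car's OBB from the building's OBB), and building stands for one of the overlapping entities returned by the world query.
// Sketch, not drop-in code: computeMTV() is a hypothetical helper returning the
// smallest vector that pushes the vehicle's OBB out of the building's OBB.
Vector2D mtv = computeMTV(vehicle.getPredictionRect(), building.getRect());

// 1) Positional correction: move the car fully out of the wall so it can never get stuck.
Vector2D corrected = vehicle.getPosition().add(mtv);
vehicle.setCenter(corrected.x, corrected.y);

// 2) Velocity correction: cancel only the velocity component along the contact
//    normal; the tangential part is kept so you can still steer along the wall.
Vector2D n = mtv.divide(mtv.length());            // contact normal, points away from the wall
float vn = (float) vehicle.getVelocity().dot(n);  // approach speed (negative when moving into the wall)
if (vn < 0) {
    float restitution = 0.2f;                     // 0 = slide along the wall, 1 = full bounce
    vehicle.setVelocity(vehicle.getVelocity().add(n.multiply(-(1.0f + restitution) * vn)));
}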

    Read the article

  • Syncing Data with a Server using Silverlight and HTTP Polling Duplex

    - by dwahlin
    Many applications have the need to stay in-sync with data provided by a service. Although web applications typically rely on standard polling techniques to check if data has changed, Silverlight provides several interesting options for keeping an application in-sync that rely on server “push” technologies. A few years back I wrote several blog posts covering different “push” technologies available in Silverlight that rely on sockets or HTTP Polling Duplex. We recently had a project that looked like it could benefit from pushing data from a server to one or more clients, so I thought I’d revisit the subject and provide some updates to the original code posted. If you’ve worked with AJAX before in Web applications then you know that until browsers fully support web sockets or other duplex (bi-directional communication) technologies, it’s difficult to keep applications in-sync with a server without relying on polling. The problem with polling is that you have to check for changes on the server on a timed basis, which can often be wasteful and take up unnecessary resources. With server “push” technologies, data can be pushed from the server to the client as it changes. Once the data is received, the client can update the user interface as appropriate. Using “push” technologies allows the client to listen for changes to the data but stay 100% focused on client activities, as opposed to worrying about polling and asking the server if anything has changed. Silverlight provides several options for pushing data from a server to a client including sockets, TCP bindings and HTTP Polling Duplex. Each has its own strengths and weaknesses as far as performance and setup work, with HTTP Polling Duplex arguably being the easiest to set up and get going. In this article I’ll demonstrate how HTTP Polling Duplex can be used in Silverlight 4 applications to push data and show how you can create a WCF server that provides an HTTP Polling Duplex binding that a Silverlight client can consume. What is HTTP Polling Duplex? Technologies that allow data to be pushed from a server to a client rely on duplex functionality. Duplex (or bi-directional) communication allows data to be passed in both directions. A client can call a service and the server can call the client. HTTP Polling Duplex (as its name implies) allows a server to communicate with a client without forcing the client to constantly poll the server. It has the benefit of being able to run on port 80, making setup a breeze compared to the other options, which require specific ports to be used and cross-domain policy files to be exposed on port 943 (as with sockets and TCP bindings). Having said that, if you’re looking for the best speed possible then sockets and TCP bindings are the way to go. But they’re not the only game in town when it comes to duplex communication. The first time I heard about HTTP Polling Duplex (initially available in Silverlight 2) I wasn’t exactly sure how it was any better than standard polling used in AJAX applications. I read the Silverlight SDK, looked at various resources and generally found the following definition unhelpful as far as understanding the actual benefits that HTTP Polling Duplex provided: "The Silverlight client periodically polls the service on the network layer, and checks for any new messages that the service wants to send on the callback channel. The service queues all messages sent on the client callback channel and delivers them to the client when the client polls the service."
Although the previous definition explained the overall process, it sounded as if standard polling was used. Fortunately, Microsoft’s Scott Guthrie provided me with a clearer definition several years back that explains the benefits provided by HTTP Polling Duplex quite well (used with his permission): "The [HTTP Polling Duplex] duplex support does use polling in the background to implement notifications – although the way it does it is different than manual polling. It initiates a network request, and then the request is effectively “put to sleep” waiting for the server to respond (it doesn’t come back immediately). The server then keeps the connection open but not active until it has something to send back (or the connection times out after 90 seconds – at which point the duplex client will connect again and wait). This way you are avoiding hitting the server repeatedly – but still get an immediate response when there is data to send." After hearing Scott’s definition the light bulb went on and it all made sense. A client makes a request to a server to check for changes, but instead of the request returning immediately, it parks itself on the server and waits for data. It’s kind of like waiting to pick up a pizza at the store. Instead of calling the store over and over to check the status, you sit in the store and wait until the pizza (the request data) is ready. Once it’s ready you take it back home (to the client). This technique provides a lot of efficiency gains over standard polling techniques even though it does use some polling of its own, as a request is initially made from a client to a server. So how do you implement HTTP Polling Duplex in your Silverlight applications? Let’s take a look at the process by starting with the server. Creating an HTTP Polling Duplex WCF Service Creating a WCF service that exposes an HTTP Polling Duplex binding is straightforward as far as coding goes. Add some one way operations into an interface, create a client callback interface and you’re ready to go. The most challenging part comes into play when configuring the service to properly support the necessary binding, and that’s more of a cut and paste operation once you know the configuration code to use. To create an HTTP Polling Duplex service you’ll need to expose server-side and client-side interfaces and reference the System.ServiceModel.PollingDuplex assembly (located at C:\Program Files (x86)\Microsoft SDKs\Silverlight\v4.0\Libraries\Server on my machine) in the server project. For the demo application I upgraded a basketball simulation service to support the latest polling duplex assemblies. The service simulates a simple basketball game using a Game class and pushes information about the game such as score, fouls, shots and more to the client as the game changes over time. Before jumping too far into the game push service, it’s important to discuss two interfaces used by the service to communicate in a bi-directional manner. The first is called IGameStreamService and defines the methods/operations that the client can call on the server (see Listing 1). The second is IGameStreamClient which defines the callback methods that a server can use to communicate with a client (see Listing 2).   [ServiceContract(Namespace = "Silverlight", CallbackContract = typeof(IGameStreamClient))] public interface IGameStreamService { [OperationContract(IsOneWay = true)] void GetTeamData(); } Listing 1. The IGameStreamService interface defines the operations that a client can call on the server.
[ServiceContract] public interface IGameStreamClient { [OperationContract(IsOneWay = true)] void ReceiveTeamData(List<Team> teamData); [OperationContract(IsOneWay = true, AsyncPattern=true)] IAsyncResult BeginReceiveGameData(GameData gameData, AsyncCallback callback, object state); void EndReceiveGameData(IAsyncResult result); } Listing 2. The IGameStreamClient interface defines the client operations that a server can call. The IGameStreamService interface is decorated with the standard ServiceContract attribute but also contains a value for the CallbackContract property. This property is used to define the interface that the client will expose (IGameStreamClient in this example) and use to receive data pushed from the service. Notice that each OperationContract attribute in both interfaces sets the IsOneWay property to true. This means that the operation can be called and passed data as appropriate; however, no data will be passed back. Instead, data will be pushed back to the client as it’s available. Looking through the IGameStreamService interface you can see that the client can request team data, whereas the IGameStreamClient interface allows team and game data to be received by the client. One interesting point about the IGameStreamClient interface is the inclusion of the AsyncPattern property on the BeginReceiveGameData operation. I initially created this operation as a standard one-way operation and it worked most of the time. However, as I disconnected clients and reconnected new ones, game data wasn’t being passed properly. After researching the problem more I realized that because the service could take up to 7 seconds to return game data, things were getting hung up. By setting the AsyncPattern property to true on the BeginReceiveGameData operation and providing a corresponding EndReceiveGameData operation I was able to get around this problem and get everything running properly. I’ll provide more details on the implementation of these two methods later in this post. Once the interfaces were created I moved on to the game service class. The first order of business was to create a class that implemented the IGameStreamService interface. Since the service can be used by multiple clients wanting game data, I added the ServiceBehavior attribute to the class definition so that I could set its InstanceContextMode to InstanceContextMode.Single (in effect creating a Singleton service object). Listing 3 shows the game service class as well as its fields and constructor.   [ServiceBehavior(ConcurrencyMode = ConcurrencyMode.Multiple, InstanceContextMode = InstanceContextMode.Single)] public class GameStreamService : IGameStreamService { object _Key = new object(); Game _Game = null; Timer _Timer = null; Random _Random = null; Dictionary<string, IGameStreamClient> _ClientCallbacks = new Dictionary<string, IGameStreamClient>(); static AsyncCallback _ReceiveGameDataCompleted = new AsyncCallback(ReceiveGameDataCompleted); public GameStreamService() { _Game = new Game(); _Timer = new Timer { Enabled = false, Interval = 2000, AutoReset = true }; _Timer.Elapsed += new ElapsedEventHandler(_Timer_Elapsed); _Timer.Start(); _Random = new Random(); } } Listing 3. The GameStreamService implements the IGameStreamService interface, which defines a callback contract that allows the service class to push data back to the client.
By implementing the IGameStreamService interface, GameStreamService must supply a GetTeamData() method, which is responsible for supplying information about the teams that are playing as well as individual players. GetTeamData() also acts as a client subscription method that tracks clients wanting to receive game data. Listing 4 shows the GetTeamData() method. public void GetTeamData() { //Get client callback channel var context = OperationContext.Current; var sessionID = context.SessionId; var currClient = context.GetCallbackChannel<IGameStreamClient>(); context.Channel.Faulted += Disconnect; context.Channel.Closed += Disconnect; IGameStreamClient client; if (!_ClientCallbacks.TryGetValue(sessionID, out client)) { lock (_Key) { _ClientCallbacks[sessionID] = currClient; } } currClient.ReceiveTeamData(_Game.GetTeamData()); //Start timer which when fired sends updated score information to client if (!_Timer.Enabled) { _Timer.Enabled = true; } } Listing 4. The GetTeamData() method subscribes a given client to the game service and returns team data to it. The key line of code in the GetTeamData() method is the call to GetCallbackChannel<IGameStreamClient>(). This method is responsible for accessing the calling client’s callback channel. The callback channel is defined by the IGameStreamClient interface shown earlier in Listing 2 and used by the server to communicate with the client. Before passing team data back to the client, GetTeamData() grabs the client’s session ID and checks if it already exists in the _ClientCallbacks dictionary object used to track clients wanting callbacks from the server. If the client doesn’t exist it adds it into the collection. It then pushes team data from the Game class back to the client by calling ReceiveTeamData(). Since the service simulates a basketball game, a timer is started if it’s not already enabled, which is then used to send data to the client at random intervals. When the timer fires, game data is pushed down to the client. Listing 5 shows the _Timer_Elapsed() method that is called when the timer fires, as well as the SendGameData() method used to send data to the client. void _Timer_Elapsed(object sender, ElapsedEventArgs e) { int interval = _Random.Next(3000, 7000); lock (_Key) { _Timer.Interval = interval; _Timer.Enabled = false; } SendGameData(_Game.GetGameData()); } private void SendGameData(GameData gameData) { var cbs = _ClientCallbacks.Where(cb => ((IContextChannel)cb.Value).State == CommunicationState.Opened); for (int i = 0; i < cbs.Count(); i++) { var cb = cbs.ElementAt(i).Value; try { cb.BeginReceiveGameData(gameData, _ReceiveGameDataCompleted, cb); } catch (TimeoutException texp) { //Log timeout error } catch (CommunicationException cexp) { //Log communication error } } lock (_Key) _Timer.Enabled = true; } private static void ReceiveGameDataCompleted(IAsyncResult result) { try { ((IGameStreamClient)(result.AsyncState)).EndReceiveGameData(result); } catch (CommunicationException) { // empty } catch (TimeoutException) { // empty } } Listing 5. _Timer_Elapsed is used to simulate time in a basketball game. When _Timer_Elapsed() fires, the SendGameData() method is called, which iterates through the clients wanting to be notified of changes. As each client is identified, its BeginReceiveGameData() method is called, which ultimately pushes game data down to the client. Recall that this method was defined in the client callback interface named IGameStreamClient shown earlier in Listing 2.
Notice that BeginReceiveGameData() accepts _ReceiveGameDataCompleted as its second parameter (an AsyncCallback delegate defined in the service class) and passes the client callback as the third parameter. The initial version of the sample application had a standard ReceiveGameData() method in the client callback interface. However, sometimes the client callbacks would work properly and sometimes they wouldn’t, which was a little baffling at first glance. After some investigation I realized that I needed to implement an asynchronous pattern for client callbacks to work properly, since 3–7 second delays occur as a result of the timer. Once I added the BeginReceiveGameData() and ReceiveGameDataCompleted() methods everything worked properly, since each call was handled in an asynchronous manner. The final task that had to be completed to get the server working properly with HTTP Polling Duplex was adding configuration code into web.config. In the interest of brevity I won’t post all of the code here since the sample application includes everything you need. However, Listing 6 shows the key configuration code to handle creating a custom binding named pollingDuplexBinding and associate it with the service’s endpoint.   <bindings> <customBinding> <binding name="pollingDuplexBinding"> <binaryMessageEncoding /> <pollingDuplex maxPendingSessions="2147483647" maxPendingMessagesPerSession="2147483647" inactivityTimeout="02:00:00" serverPollTimeout="00:05:00"/> <httpTransport /> </binding> </customBinding> </bindings> <services> <service name="GameService.GameStreamService" behaviorConfiguration="GameStreamServiceBehavior"> <endpoint address="" binding="customBinding" bindingConfiguration="pollingDuplexBinding" contract="GameService.IGameStreamService"/> <endpoint address="mex" binding="mexHttpBinding" contract="IMetadataExchange" /> </service> </services>   Listing 6. Configuring an HTTP Polling Duplex binding in web.config and associating an endpoint with it. Calling the Service and Receiving “Pushed” Data Calling the service and handling data that is pushed from the server is a simple and straightforward process in Silverlight. Since the service is configured with a MEX endpoint and exposes a WSDL file, you can right-click on the Silverlight project and select the standard Add Service Reference item. After the web service proxy is created you may notice that the ServiceReferences.ClientConfig file only contains an empty configuration element instead of the normal configuration elements created when creating a standard WCF proxy. You can certainly update the file if you want to read from it at runtime, but for the sample application I fed the service URI directly to the service proxy as shown next: var address = new EndpointAddress("http://localhost.:5661/GameStreamService.svc"); var binding = new PollingDuplexHttpBinding(); _Proxy = new GameStreamServiceClient(binding, address); _Proxy.ReceiveTeamDataReceived += _Proxy_ReceiveTeamDataReceived; _Proxy.ReceiveGameDataReceived += _Proxy_ReceiveGameDataReceived; _Proxy.GetTeamDataAsync(); This code creates the proxy and passes the endpoint address and binding to use to its constructor. It then wires the different receive events to callback methods and calls GetTeamDataAsync(). Calling GetTeamDataAsync() causes the server to store the client in the server-side dictionary collection mentioned earlier so that it can receive data that is pushed.
As the server-side timer fires and game data is pushed to the client, the user interface is updated as shown in Listing 7. Listing 8 shows the _Proxy_ReceiveGameDataReceived() method responsible for handling the data and calling UpdateGameData() to process it.   Listing 7. The Silverlight interface. Game data is pushed from the server to the client using HTTP Polling Duplex. void _Proxy_ReceiveGameDataReceived(object sender, ReceiveGameDataReceivedEventArgs e) { UpdateGameData(e.gameData); } private void UpdateGameData(GameData gameData) { //Update Score this.tbTeam1Score.Text = gameData.Team1Score.ToString(); this.tbTeam2Score.Text = gameData.Team2Score.ToString(); //Update ball visibility if (gameData.Action != ActionsEnum.Foul) { if (tbTeam1.Text == gameData.TeamOnOffense) { AnimateBall(this.BB1, this.BB2); } else //Team 2 { AnimateBall(this.BB2, this.BB1); } } if (this.lbActions.Items.Count > 9) this.lbActions.Items.Clear(); this.lbActions.Items.Add(gameData.LastAction); if (this.lbActions.Visibility == Visibility.Collapsed) this.lbActions.Visibility = Visibility.Visible; } private void AnimateBall(Image onBall, Image offBall) { this.FadeIn.Stop(); Storyboard.SetTarget(this.FadeInAnimation, onBall); Storyboard.SetTarget(this.FadeOutAnimation, offBall); this.FadeIn.Begin(); } Listing 8. As the server pushes game data, the client’s _Proxy_ReceiveGameDataReceived() method is called to process the data. In a real-life application I’d go with a ViewModel class to handle retrieving team data, set up data bindings and handle data that is pushed from the server. However, for the sample application I wanted to focus on HTTP Polling Duplex and keep things as simple as possible.   Summary Silverlight supports three options when duplex communication is required in an application: sockets, TCP bindings and HTTP Polling Duplex. In this post you’ve seen how HTTP Polling Duplex interfaces can be created and implemented on the server as well as how they can be consumed by a Silverlight client. HTTP Polling Duplex provides a nice way to “push” data from a server while still allowing the data to flow over port 80 or another port of your choice.   Sample Application Download
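As a closing footnote, if you strip away all of the WCF plumbing, the long-poll pattern the binding implements boils down to a loop along these lines. This is a conceptual sketch only, not part of the sample: the endpoint URL is made up, and it uses the modern .NET HttpClient/async APIs for brevity rather than the Silverlight-era stack.
using System;
using System.Net.Http;
using System.Threading.Tasks;

class LongPollSketch
{
    static async Task Main()
    {
        // Allow the request to sit open longer than the server's ~90 second hold time.
        var client = new HttpClient { Timeout = TimeSpan.FromSeconds(120) };
        while (true)
        {
            // The request "parks" on the server until data is available
            // (or the server-side timeout elapses), then we immediately re-poll.
            var response = await client.GetAsync("http://example.com/gamedata/poll");
            if (response.IsSuccessStatusCode)
            {
                string payload = await response.Content.ReadAsStringAsync();
                Console.WriteLine("Pushed from server: " + payload);
            }
        }
    }
}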

    Read the article

  • Designing for the future

    - by Dennis Vroegop
    User interface and user experience design is a fast-moving field. It’s something that changes pretty quickly: what feels fresh today will look outdated tomorrow. I remember the day I first got a beta version of Windows 95 and felt swept away by the user interface of the OS. It felt so modern! If I look back now, it feels old. Well, it should: the design is 17 years old, which is an eternity in our field. Of course, this is not limited to UI. The same goes for many industries. I want you to think back to the cars that amazed you when you were in your teens (if you are in your teens then this may not apply to you). Didn’t they feel like part of the future? Didn’t you think that this was the ultimate in design? And aren’t those designs hopelessly outdated today (again, depending on your age, it may just be me)? Let’s review the Win95 design: And let’s compare that to Windows 7: There are so many differences here, I wouldn’t even know where to start explaining them. The general feeling however is one of more usability: studies have shown Windows 7 is much easier for new users to understand than the older versions of Windows were. Of course, experienced Windows users didn’t like it: people are usually afraid of changes and like to stick to what they know. But for new users this was a huge improvement. And that is what UX design is all about: making a product easier to use, with less training required, and making users feel more productive. Still, there are areas where this doesn’t hold up. There are plenty of examples of designs from the past that are still fresh today. But if you look closely at them, you’ll notice some subtle differences. These differences are what keep the designs fresh. A good example is the signs you’ll find on the road. They haven’t changed much over the years (otherwise people wouldn’t recognize them anymore) but they have been changing gradually to reflect changes in traffic. The same goes for computer interfaces. With each new product or version of a product, the UI and UX are changed gradually. Every now and then however, a bigger change is needed. Just think about the introduction of the Ribbon in Microsoft Office 2007: the whole UI was redesigned. A lot of old users (not in age, but in terms of experience with older versions) didn’t like it one bit, but new or casual users seem to be more efficient using the product. Which, of course, is exactly the reason behind the changes. I believe that a big engine behind the changes in user experience design has been the web. In the old days (i.e. before the explosion of the internet) user interface design in Windows applications was limited to choosing the margins between your battleship gray buttons. When the web came along, and especially web 2.0 where browsers started to act more and more as application platforms, designers stepped in and made a huge impact. In the browser, they could do whatever they wanted. In the beginning this was limited to the darn blink tag, but gradually people really started to think about UX. Even more so: the design of the UI and the whole experience was taken away from the developers and put into the hands of people who knew what they were doing: UX designers. This caused some problems. Everyone who has done a web project in the early 2000’s must have had the same experience: the designers give you a set of Photoshop files and tell you to translate them to HTML. Which, of course, is very hard to do. However, with new tooling and new standards this became much easier.
The latest versions of HTML and CSS have taken responsibility for the design away from the developers and placed it in the capable hands of the designers. And that’s where that responsibility belongs; after all, I don’t want a designer mucking around in my C# code any more than he or she wants me poking in the site’s style definitions. This change in responsibilities resulted in good-looking and, more importantly, better-thought-out user interfaces on websites. And when websites became more and more interactive, people started to expect the same sort of look and feel from their desktop applications. But that didn’t really happen. Most business applications still have that battleship gray look and feel. Ok, they may use a different color, but we’re not talking colors here but usability. Now, you may not be able to read the Dutch captions, but even if you did you wouldn’t understand what was going on. At least, not when you first see it. You have to scan the screen, read all the labels, see how they are related to the other elements on the screen and then figure out what they do. If you’re an experienced user of this application however, this might be a good thing: you know what to do and you get all the information you need in one single screen. But for most applications this isn’t the case. A lot of people only use their computer for a limited time a day (a weird concept for me, but it happens) and need it to get something done and then get on with their lives. For them, a user interface experience like the above isn’t working. (disclaimer: I just picked a screenshot, I am not saying this is bad software but it is an example of about 95% of the Windows applications out there). For the knowledge worker, this isn’t a problem. They use one or two systems and they know exactly what they need to do to achieve their goal. They don’t want any clutter on their screen that distracts them from their task; they just want to be as efficient as possible. When they know the systems they are very productive. The point is, how long does it take to become productive? And: could they be even more productive if the UX was better? Are there things missing that they don’t know about? Are there better ways to achieve what they want to achieve? Also: could a system be designed in such a way that it is not only much easier to work with but also less tiring? In the example above you need to switch between the keyboard and mouse a lot, something that we now know can be very tiring. The goal of most applications (being client apps or websites on any kind of device) is to provide information. Information is data that, when given to the right people, at the right time, in the right place, and when it is correct, adds value for that person (please, remember that definition: I still hear the statement “the information was wrong” which doesn’t make sense: data can be wrong, information cannot be). So if a system provides data, how can we make sure the chance of that data becoming information is as high as possible? A good example of a well thought-out system that attempts this is the Zune client. It is a very good application, and I think the UX is much better than that of its main competitor, iTunes. Have a look at both: On the left you see the iTunes screenshot, on the right the Zune. As you notice, the Zune screen has more images but less chrome (chrome being visuals not part of the data you want to show, i.e. edges around buttons).
The whole thing is text oriented or image oriented, where that text or image is part of the information you need. What is important is big, what’s less important is smaller. Yet everything you need to know at that point is present, and your attention is drawn immediately to what you’re trying to achieve: information about music. You can easily switch between the content on your machine and content on your Zune player by clicking on the image of the player. But if you didn’t know that, you’d find out soon enough: the whole UX is designed in such a way that it invites you to play around. So sooner or later (probably sooner) you’d click on that image and you would see what it does. In the iTunes version it’s harder to find: the discoverability is a lot lower. For inexperienced people the Zune player feels much more natural than the iTunes player, and they get up to speed a lot faster. How does this all work? Why is this UX better? The answer lies in a project from Microsoft with the codename (it seems to be becoming the official name though) “Metro”. Metro is a design language, based on certain principles. When they thought about UX they took a good long look around them and went out in search of metaphors. And they found them. The team noticed that signage in streets, airports, roads, buildings and so on is usually very clear and very precise. These signs give you the information you need and nothing more. It’s simple, clear and fast to understand. A good example is airport signage. Airports can be intimidating places, especially for the inexperienced traveler. In the early 1990’s Amsterdam Airport Schiphol decided to redesign all the signage to make the traveller feel less disoriented. They developed a set of guidelines for signs and implemented those. Soon, most airports around the world adopted these ideas and you see variations of the Dutch signs everywhere on the globe. The signs are text-oriented. Yes, there are icons explaining what it all means for the people who can’t read or don’t understand the language, but the basic sign language is text. It’s clear, it’s high-contrast and it’s easy to understand. One look at the sign and you know where to go. The only thing I don’t like is the green sign pointing to the emergency exit, but since this is the default style for emergency exits I understand why they did this. If you look at the Zune UI again, you’ll notice the similarities: text oriented, little or no icons, clear usage of fonts and all the information you need. This design language has a set of principles: clean, light, open and fast; content, not chrome; soulful and alive. These are just a couple of the principles; you can read the whole philosophy behind Metro for Windows Phone 7 here. These ideas seem to work. I love my Windows Phone 7. It’s easy to use, it’s clear, there’s no clutter that I do not need. It works for me. And I noticed it works for a lot of other people as well, especially people who aren’t as proficient with computers as I am. You see these ideas in a lot of other places. Corning, a manufacturer of glass, has made a video of possible usages of their products. It’s their glimpse into the future. You’ll notice that a lot of the UI in the screens looks a lot like what Microsoft is doing with Metro (not coincidentally, Corning is the supplier for the Gorilla Glass display surface on the new SUR40 device (or Surface v2.0 as a lot of people call it)). The idea behind this vision is that data should be available everywhere you need it.
Systems should be available at all times, and data presented in a clear and light manner so that you can turn that data into information. You don’t need a lot of fancy animations that only distract from the data. You want the data and you want it fast. Have a look at the truly inspiring video that Corning made: this is what I believe the future will look like. Of course, not everything is possible, or even desirable. But it is a nice way to think about the future. I feel very strongly about designing applications in such a way that they add value to the user. Designing applications that turn data into information. Applications that make the user feel happy to use them. So… when are you going to drop the battleship-gray designs? Technorati tags: surface, design, windows phone 7, wp7, metro

    Read the article

  • Metro Walkthrough: Creating a Task List with a ListView and IndexedDB

    - by Stephen.Walther
    The goal of this blog entry is to describe how you can work with data in a Metro style application written with JavaScript. In particular, we create a super simple Task List application which enables you to create and delete tasks. Here’s a video which demonstrates how the Task List application works: In order to build this application, I had to take advantage of several features of the WinJS library and technologies including: IndexedDB – The Task List application stores data in an IndexedDB database. HTML5 Form Validation – The Task List application uses HTML5 validation to ensure that a required field has a value. ListView Control – The Task List application displays the tasks retrieved from the IndexedDB database in a WinJS ListView control. Creating the IndexedDB Database The Task List application stores all of its data in an IndexedDB database named TasksDB. This database is opened/created with the following code: var db; var req = window.msIndexedDB.open("TasksDB", 1); req.onerror = function () { console.log("Could not open database"); }; req.onupgradeneeded = function (evt) { var newDB = evt.target.result; newDB.createObjectStore("tasks", { keyPath: "id", autoIncrement:true }); }; The msIndexedDB.open() method accepts two parameters: the name of the database to open and the version of the database to open. If a database with a matching version already exists, then calling the msIndexedDB.open() method opens a connection to the existing database. If the database does not exist then the upgradeneeded event is raised. You handle the upgradeneeded event to create a new database. In the code above, the upgradeneeded event handler creates an object store named "tasks" (an object store roughly corresponds to a database table). When you add items to the tasks object store, each item automatically gets an id property with an auto-incremented value. The code above also includes an error event handler. If the IndexedDB database cannot be opened or created, for whatever reason, then an error message is written to the Visual Studio JavaScript Console window. Displaying a List of Tasks The Task List application retrieves its list of tasks from the tasks object store, which we created above, and displays the list of tasks in a ListView control. Here is how the ListView control is declared: <div id="tasksListView" data-win-control="WinJS.UI.ListView" data-win-options="{ itemDataSource: TaskList.tasks.dataSource, itemTemplate: select('#taskTemplate'), tapBehavior: 'toggleSelect', selectionMode: 'multi', layout: { type: WinJS.UI.ListLayout } }"> </div> The ListView control is bound to the TaskList.tasks.dataSource data source. The TaskList.tasks.dataSource is created with the following code: // Create the data source var tasks = new WinJS.Binding.List(); // Open the database var db; var req = window.msIndexedDB.open("TasksDB", 1); req.onerror = function () { console.log("Could not open database"); }; req.onupgradeneeded = function (evt) { var newDB = evt.target.result; newDB.createObjectStore("tasks", { keyPath: "id", autoIncrement:true }); }; // Load the data source with data from the database req.onsuccess = function () { db = req.result; var tran = db.transaction("tasks"); tran.objectStore("tasks").openCursor().onsuccess = function(event) { var cursor = event.target.result; if (cursor) { tasks.dataSource.insertAtEnd(null, cursor.value); cursor.continue(); }; }; }; // Expose the data source and functions WinJS.Namespace.define("TaskList", { tasks: tasks }); Notice the success event handler.
This handler is called when a database is successfully opened/created. In the code above, all of the items from the tasks object store are retrieved with a cursor and added to a WinJS.Binding.List object named tasks. Because the ListView control is bound to the WinJS.Binding.List object, copying the tasks from the object store into the WinJS.Binding.List object causes the tasks to appear in the ListView. Adding a New Task You add a new task in the Task List application by entering the title of a new task into an HTML form and clicking the Add button. Here's the markup for creating the form: <form id="addTaskForm"> <input id="newTaskTitle" title="New Task" required /> <button>Add</button> </form> Notice that the INPUT element includes a required attribute. In a Metro application, you can take advantage of HTML5 validation to validate form fields. If you don't enter a value for the newTaskTitle field, a validation error message is displayed. For a brief introduction to HTML5 validation, see my previous blog entry: http://stephenwalther.com/blog/archive/2012/03/13/html5-form-validation.aspx When you click the Add button, the form is submitted and the form submit event is raised. The following code is executed in the default.js file: // Handle Add Task document.getElementById("addTaskForm").addEventListener("submit", function (evt) { evt.preventDefault(); var newTaskTitle = document.getElementById("newTaskTitle"); TaskList.addTask({ title: newTaskTitle.value }); newTaskTitle.value = ""; }); The code above retrieves the title of the new task and calls the addTask() method in the tasks.js file. Here's the code for the addTask() method, which is responsible for actually adding the new task to the IndexedDB database: // Add a new task function addTask(taskToAdd) { var transaction = db.transaction("tasks", "readwrite"); var addRequest = transaction.objectStore("tasks").add(taskToAdd); addRequest.onsuccess = function (evt) { taskToAdd.id = evt.target.result; tasks.dataSource.insertAtEnd(null, taskToAdd); } } The code above does two things. First, it adds the new task to the tasks object store in the IndexedDB database. Second, it adds the new task to the data source bound to the ListView. The dataSource.insertAtEnd() method is called to add the new task to the data source so the new task will appear in the ListView (with a nice little animation). Deleting Existing Tasks The Task List application enables you to select one or more tasks by clicking or tapping on one or more tasks in the ListView. When you click the Delete button, the selected tasks are removed from both the IndexedDB database and the ListView. Selected tasks appear with a teal background and a checkmark. When you click the Delete button, the following code in the default.js file is executed: // Handle Delete Tasks document.getElementById("btnDeleteTasks").addEventListener("click", function (evt) { tasksListView.winControl.selection.getItems().then(function(items) { items.forEach(function (item) { TaskList.deleteTask(item); }); }); }); The selected tasks are retrieved with the ListView's selection.getItems() method. In the code above, the deleteTask() method is called for each of the selected tasks. 
Here's the code for the deleteTask() method: // Delete an existing task function deleteTask(listViewItem) { // Database key != ListView key var dbKey = listViewItem.data.id; var listViewKey = listViewItem.key; // Remove item from db and, if successful, remove item from ListView var transaction = db.transaction("tasks", "readwrite"); var deleteRequest = transaction.objectStore("tasks").delete(dbKey); deleteRequest.onsuccess = function () { tasks.dataSource.remove(listViewKey); } } This code does two things: it deletes the existing task from the database and removes the existing task from the ListView. In both cases, the right task is removed by using the key associated with the task. However, the task key is different in the case of the database and in the case of the ListView. In the case of the database, the task key is the value of the task id property. In the case of the ListView, on the other hand, the task key is auto-generated by the ListView. When the task is removed from the ListView, an animation is used to collapse the tasks which appear above and below the task which was removed. The Complete Code Above, I did a lot of jumping around between different files in the application and I left out sections of code. For the sake of completeness, I want to include the entire code here: the default.html, default.js, and tasks.js files. Here are the contents of the default.html file. This file contains the UI for the Task List application: <!DOCTYPE html> <html> <head> <meta charset="utf-8"> <title>Task List</title> <!-- WinJS references --> <link href="//Microsoft.WinJS.0.6/css/ui-dark.css" rel="stylesheet"> <script src="//Microsoft.WinJS.0.6/js/base.js"></script> <script src="//Microsoft.WinJS.0.6/js/ui.js"></script> <!-- TaskList references --> <link href="/css/default.css" rel="stylesheet"> <script src="/js/default.js"></script> <script type="text/javascript" src="js/tasks.js"></script> <style type="text/css"> body { font-size: x-large; } form { display: inline; } #appContainer { margin: 20px; width: 600px; } .win-container { padding: 10px; } </style> </head> <body> <div> <!-- Templates --> <div id="taskTemplate" data-win-control="WinJS.Binding.Template"> <div> <span data-win-bind="innerText:title"></span> </div> </div> <h1>Super Task List</h1> <div id="appContainer"> <form id="addTaskForm"> <input id="newTaskTitle" title="New Task" required /> <button>Add</button> </form> <button id="btnDeleteTasks">Delete</button> <div id="tasksListView" data-win-control="WinJS.UI.ListView" data-win-options="{ itemDataSource: TaskList.tasks.dataSource, itemTemplate: select('#taskTemplate'), tapBehavior: 'toggleSelect', selectionMode: 'multi', layout: { type: WinJS.UI.ListLayout } }"> </div> </div> </div> </body> </html> Here is the code for the default.js file. 
This code wires up the Add Task form and Delete button: (function () { "use strict"; var app = WinJS.Application; app.onactivated = function (eventObject) { if (eventObject.detail.kind === Windows.ApplicationModel.Activation.ActivationKind.launch) { WinJS.UI.processAll().then(function () { // Get reference to Tasks ListView var tasksListView = document.getElementById("tasksListView"); // Handle Add Task document.getElementById("addTaskForm").addEventListener("submit", function (evt) { evt.preventDefault(); var newTaskTitle = document.getElementById("newTaskTitle"); TaskList.addTask({ title: newTaskTitle.value }); newTaskTitle.value = ""; }); // Handle Delete Tasks document.getElementById("btnDeleteTasks").addEventListener("click", function (evt) { tasksListView.winControl.selection.getItems().then(function(items) { items.forEach(function (item) { TaskList.deleteTask(item); }); }); }); }); } }; app.start(); })(); Finally, here is the tasks.js file. This file contains all of the code for opening, creating, and interacting with IndexedDB: (function () { "use strict"; // Create the data source var tasks = new WinJS.Binding.List(); // Open the database var db; var req = window.msIndexedDB.open("TasksDB", 1); req.onerror = function () { console.log("Could not open database"); }; req.onupgradeneeded = function (evt) { var newDB = evt.target.result; newDB.createObjectStore("tasks", { keyPath: "id", autoIncrement:true }); }; // Load the data source with data from the database req.onsuccess = function () { db = req.result; var tran = db.transaction("tasks"); tran.objectStore("tasks").openCursor().onsuccess = function(event) { var cursor = event.target.result; if (cursor) { tasks.dataSource.insertAtEnd(null, cursor.value); cursor.continue(); }; }; }; // Add a new task function addTask(taskToAdd) { var transaction = db.transaction("tasks", "readwrite"); var addRequest = transaction.objectStore("tasks").add(taskToAdd); addRequest.onsuccess = function (evt) { taskToAdd.id = evt.target.result; tasks.dataSource.insertAtEnd(null, taskToAdd); } } // Delete an existing task function deleteTask(listViewItem) { // Database key != ListView key var dbKey = listViewItem.data.id; var listViewKey = listViewItem.key; // Remove item from db and, if success, remove item from ListView var transaction = db.transaction("tasks", "readwrite"); var deleteRequest = transaction.objectStore("tasks").delete(dbKey); deleteRequest.onsuccess = function () { tasks.dataSource.remove(listViewKey); } } // Expose the data source and functions WinJS.Namespace.define("TaskList", { tasks: tasks, addTask: addTask, deleteTask: deleteTask }); })(); Summary I wrote this blog entry because I wanted to create a walkthrough of building a simple database-driven application. In particular, I wanted to demonstrate how you can use a ListView control with an IndexedDB database to store and retrieve database data.

    Read the article

  • ANTS CLR and Memory Profiler In Depth Review (Part 1 of 2 – CLR Profiler)

    - by ToStringTheory
    One of the things that people might not know about me is my obsession with making my code as efficient as possible.  Many people might not realize how much of an undertaking this can be, but it is surely a task as monumental as climbing Mount Everest, except this time it is a challenge for the mind…  In trying to make code efficient, there are many different factors that play a part – size of project or solution, tiers, language used, experience and training of the programmer, technologies used, maintainability of the code – the list can go on for quite some time. I spend quite a bit of time when developing trying to determine the best way to implement a feature to accomplish the efficiency that I look to achieve.  One product that I have recently come to learn about – the Red Gate ANTS Performance (CLR) and Memory profiler – gives me tools to accomplish that job more efficiently as well.  In this review, I am going to cover some of the features of the ANTS profiler set by compiling some hideous example code to test against. Notice As a member of the Geeks With Blogs Influencers program, one of the perks is the ability to review products in exchange for a free license to the program.  I have not let this affect my opinions of the product in any way, and neither Red Gate nor Geeks With Blogs has tried to influence my opinion regarding this product in any way. Introduction The ANTS Profiler pack provided by Red Gate was something that I had not heard of before receiving an email regarding an offer to review it for a license.  Since I look to make my code efficient, it was a no-brainer for me to try it out!  One thing that I have to say took me by surprise is that upon downloading the program and installing it, you fill out a form for your usual contact information.  Sure enough, within 2 hours I received an email from a sales representative at Red Gate asking if she could help me to achieve the most out of my trial time so it wouldn't go to waste.  After replying to her and explaining that I was looking to review its feature set, she put me in contact with someone who set up a demo session to give me a quick rundown of its features via an online meeting.  After having dealt with a massive ordeal with one of my utility companies and their complete lack of customer service, Red Gate's friendly and helpful representatives were a breath of fresh air, and something I was thankful for. ANTS CLR Profiler The ANTS CLR profiler is the thing I want to focus on the most in this post, so I am going to dive right in now. Install was simple and took no time at all.  It installed not only the CLR and Memory profilers, but also Visual Studio extensions to facilitate the usage of the profilers (click any images for full size images): The Visual Studio menu options (under the ANTS menu). Starting the CLR Performance Profiler from the start menu yields this window. If you follow the instructions after launching the program from the start menu (click File > New Profiling Session to start a new project), you are given a dialog with plenty of options for profiling: The New Session dialog.  Lots of options.  One thing I noticed is that the buttons in the lower right were half-covered by the panel of the application.  If I had to guess, I would imagine that this is caused by my DPI settings being set to 125%.  This is a problem I have seen in other applications as well that don't scale well at different DPI settings. 
The profiler options give you the ability to profile: a .NET executable; an ASP.NET web application (hosted in IIS); an ASP.NET web application (hosted in IIS Express); an ASP.NET web application (hosted in the Cassini Web Development Server); a SharePoint web application (hosted in IIS); a Silverlight 4+ application; a Windows Service; a COM+ server; an XBAP (local XAML browser application); or attach to an already running .NET 4 process. Choosing each option provides a varying set of other variables/options that one can set, including options such as application arguments, operating path, whether to record I/O performance, which performance counters to record (43 counters in all!), etc…  All in all, they give you the ability to profile many different .NET project types, and make it simple to do so.  In most cases of my using this application, I would be using the built-in Visual Studio extensions, as they automatically start a new profiling project in ANTS with the options set up and start your program; however, Red Gate has made it easy enough to profile outside of Visual Studio as well. On the flip side of this, as someone who lives most of their work life in Visual Studio, one thing I do wish is that, instead of opening an entirely separate application/GUI to perform profiling, they would provide a Visual Studio panel with the information and integrate more of the profiling project information into Visual Studio.  So, now that we have an idea of the options the profiler gives us, it's time to test its abilities and features. Horrendous Example Code – Prime Number Generator One of my interests besides development is physics and math – what I went to college for.  I have especially always been interested in prime numbers, as they are something of a mystery…  So, to test the abilities of the profiler, I decided to write a small program, website, and library to generate prime numbers in the quantity that you ask for.  I am going to start off with some terrible code, and show how I would see the profiler being used as a development tool. First off, the IPrimes interface (all code is downloadable at the end of the post): interface IPrimes { IEnumerable<int> GetPrimes(int retrieve); } Simple enough, right?  Anything that implements the interface will (hopefully) provide an IEnumerable of int, with the quantity specified in the parameter argument.  Next, I am going to implement this interface in the most basic way: public class DumbPrimes : IPrimes { public IEnumerable<int> GetPrimes(int retrieve) { //store a list of primes already found var _foundPrimes = new List<int>() { 2, 3 }; //if I ask for 1 or 2 primes, return what was asked for if (retrieve <= _foundPrimes.Count()) return _foundPrimes.Take(retrieve); //the next number to look at int _analyzing = 4; //since I already determined I don't have enough //execute at least once, and until quantity is sufficed do { //assume prime until otherwise determined bool isPrime = true; //start dividing at 2 //divide until number is reached, or determined not prime for (int i = 2; i < _analyzing && isPrime; i++) { //if (i) goes into _analyzing without a remainder, //_analyzing is NOT prime if (_analyzing % i == 0) isPrime = false; } //if it is prime, add to found list if (isPrime) _foundPrimes.Add(_analyzing); //increment number to analyze next _analyzing++; } while (_foundPrimes.Count() < retrieve); return _foundPrimes; } } This is the simplest way to get primes, in my opinion.  
It checks each number against the straight definition of a prime – is it divisible by anything besides 1 and itself? I have included this code in a base class library for my solution, as I am going to use it to demonstrate a couple of features of ANTS.  This class library is consumed by a simple non-MVVM WPF application and a simple MVC4 website.  I will not post the WPF code here inline, as it is simply an ObservableCollection<int>, a label, two textboxes, and a button. Starting a new Profiling Session So, in Visual Studio, I have just completed my first stint developing the GUI and the DumbPrimes IPrimes class, so now I want to check my code's efficiency by profiling it.  All I have to do is build the solution (surprised initiating a profiling session doesn't do this, but I suppose I can understand it), and then click the ANTS menu, followed by Profile Performance.  I am then greeted by the profiler starting up and already monitoring my program live. You are provided with a realtime graph at the top, and a pane at the bottom giving you information on how to proceed.  I am going to start by asking my program to show me the first 15000 primes. After the program finally began responding again (I did all the work on the main UI thread – how bad!), I stopped the profiler, which did kill the process of my program too.  One important thing to note is that the profiler by default wants to give you a lot of detail about the operation – line hit counts, time per line, percent time per line, etc…  The important thing to remember is that this itself takes a lot of time.  When running my program without the profiler attached, it can generate the 15000 primes in 5.18 seconds, compared to 74.5 seconds with it attached – more than a 1,300 percent increase.  While this may seem like a lot, remember that there is a trade-off.  It may be WAY more inefficient; however, I am able to drill down and make improvements to specific problem areas, and then decrease execution time all around. Analyzing the Profiling Session After clicking 'Stop Profiling', the process running my application stopped, and the entire execution time was automatically selected by ANTS, with the results shown below. Now there are a number of interesting things going on here, so I am going to cover each in a section of its own: Real Time Performance Counter Bar (top of screen) At the top of the screen is the real-time performance bar.  As your application is running, this will constantly update with the currently selected performance counter's status.  One cool thing to note is that you can drag a selection around specific time periods to drill the detail views in the lower two panels down to information pertaining to only that period. After selecting a time period, you can bookmark a section and name it, so that it is easy to find later, or after the session is reloaded at a later time.  You can also zoom in, out, or fit the graph to the space provided – useful for drilling down. It may be hard to see, but at the top of the processor time graph – below the time ticks but above the red usage graph – there is a green bar. This bar shows at what times a method that is selected in the 'Call tree' panel is called. Very cool to be able to click on a method and see at what times it made an impact. As I said before, ANTS provides 43 different performance counters you can hook into.  
Clicking the arrow next to the Performance tab at the top allows you to change between different counters if you have them selected. Method Call Tree, ADO.Net Database Calls, File IO – Detail Panel Red Gate really hit the mark here, I think. When you select a section of the run with the graph, the call tree populates to fill a hierarchical tree of method calls, with information regarding each of the methods.   By default, methods are hidden where the source is not provided (framework-type code); however, Red Gate has integrated Reflector into ANTS, so even if you don't have source for something, you can select a method and get the source if you want.  Methods are also hidden where the impact is seen as insignificant – methods that are only executed for 1% of the overall calling method's time; in other words, working on making them better is not where your efforts should be focused – smart! Source Panel – Detail Panel The source panel is where you can see line-level information on your code, showing the code for the currently selected method from the Method Call Tree.  If the code is not available, Reflector takes care of it and shows the code anyway! As you can see, there does seem to be a problem with how ANTS determines what line is the actual line that a call is completed on. I have suspicions that this may be due to some of the inline code optimizations that the CLR applies upon compilation of the assembly.  In a method with comments, the problem is much more severe. As you can see here, apparently the most offending code in my base library was a comment – *gasp*!  Removing the comments does help quite a bit; however, I hope that Red Gate works on their counter algorithm soon to improve the logic on positioning for statistics. I did a small test just to demonstrate the lines are correct without comments. For me, it isn't a deal breaker, as I can usually determine the correct placements by looking at the application code in the region and determining what makes sense, but it is something that would probably build up some irritation with time. Feature – Suggest Method for Optimization A neat feature to really help those in need of a pointer is the menu option under Tools to automatically suggest methods to optimize/improve. Nice feature – clicking it filters the call tree and stars methods that it thinks are good candidates for optimization.  I do wish that they would have made it more visible for those of us who aren't great on sight. Process Integration I do think that this could have a place in my process.  After experimenting with the profiler, I do think it would be a great benefit to do some development, testing, and then after all the bugs are worked out, use the profiler to check on things to make sure nothing seems like it is hogging more than its fair share.  For example, with this program, I would have developed it, run it, tested it – it works, but slowly. After looking at the profiler, and seeing the massive amount of time spent in one method, I might go ahead and try to re-implement IPrimes (I actually would probably rewrite the offending code, but so that I can distribute both sets of code easily, I'm just going to make another implementation of IPrimes).  
Using two pieces of knowledge about prime numbers can make this method MUCH more efficient – all primes greater than 3 fall into two buckets, 6k-1 and 6k+1, and a number is prime if it is not divisible by any of the primes before it: public class SmartPrimes : IPrimes { public IEnumerable<int> GetPrimes(int retrieve) { //store a list of primes already found var _foundPrimes = new List<int>() { 2, 3 }; //if I ask for 1 or 2 primes, return what was asked for if (retrieve <= _foundPrimes.Count()) return _foundPrimes.Take(retrieve); //the next number to look at int _k = 1; //since I already determined I don't have enough //execute at least once, and until quantity is sufficed do { //assume prime until otherwise determined bool isPrime = true; int potentialPrime; //analyze 6k-1 //assign the value to potential potentialPrime = 6 * _k - 1; //if there are any primes that divide this, it is NOT a prime number //using PLINQ for quick boost isPrime = !_foundPrimes.AsParallel() .Any(prime => potentialPrime % prime == 0); //if it is prime, add to found list if (isPrime) _foundPrimes.Add(potentialPrime); if (_foundPrimes.Count() == retrieve) break; //analyze 6k+1 //assign the value to potential potentialPrime = 6 * _k + 1; //if there are any primes that divide this, it is NOT a prime number //using PLINQ for quick boost isPrime = !_foundPrimes.AsParallel() .Any(prime => potentialPrime % prime == 0); //if it is prime, add to found list if (isPrime) _foundPrimes.Add(potentialPrime); //increment k to analyze next _k++; } while (_foundPrimes.Count() < retrieve); return _foundPrimes; } } Now there are definitely more things I can do to help make this more efficient, but for the scope of this example, I think this is fine (but still hideous)! Profiling this now yields a happy surprise: 27 seconds to generate the 15000 primes with the profiler attached, and only 1.43 seconds without.  One important thing I wanted to call out though was the performance graph now: Notice anything odd?  The %Processor time is above 100%.  This is because there is now more than one core in the operation.  A better label for the chart in my mind would have been %Core time, but to each their own. Another odd thing I noticed was that the profiler seemed to be spot on this time in my DumbPrimes class with line details in source, even with comments… Odd. Profiling Web Applications The last thing that I wanted to cover, which means a lot to me as a web developer, is the great amount of work that Red Gate put into the profiler when profiling web applications. In my solution, I have a simple MVC4 application set up with one page containing a single input form that outputs prime values as my WPF app did. Launching the profiler from Visual Studio as before, nothing is really different in the profiler window; however, I did receive a UAC prompt, without notification, for a Red Gate helper app to integrate with the web server. After requesting 500, 1000, 2000, and 5000 primes, and looking at the profiler session, things are slightly different from before. As you can see, there are 4 spikes of activity in the processor time graph, but there is also something new in the call tree: That's right – ANTS will actually group method calls by get/post operations, so it is easier to find out what action/page is giving the largest problems…  Pretty cool in my mind! Overview Overall, I think that the Red Gate ANTS CLR Profiler has a lot to offer; however, I think it also has a long way to go.  
3 Biggest Pros: the ability to easily drill down from time graph, to method calls, to source code; a wide variety of counters to choose from when profiling your application; excellent integration/grouping of methods being called from web applications by request – BRILLIANT! 3 Biggest Cons: the issue regarding line details in source view; nit pick – Processor time vs. Core time; nit pick – lack of full integration with Visual Studio. Ratings Ease of Use (7/10) – I marked down here because of the problems with the line level details and the extra work that that entails, and the lack of better integration with Visual Studio. Effectiveness (10/10) – I believe that the profiler does EXACTLY what it purports to do.  Especially with its large variety of performance counters, a definite plus! Features (9/10) – Besides the real time performance monitoring and the drill-downs that I've shown here, ANTS also has great integration with ADO.Net, with the ability to show database queries run by your application in the profiler.  This, together with the line-level details, the web request grouping, Reflector integration, and various options to customize your profiling session, creates a great set of features, I think! Customer Service (10/10) – My entire experience with Red Gate personnel has been nothing but good.  Their people are friendly, helpful, and happy! UI / UX (8/10) – The interface is very easy to get around, and all of the options are easy to find.  With a little bit of poking around, you'll be optimizing Hello World in no time flat! Overall (8/10) – Overall, I am happy with the Performance Profiler and its features, as well as with the service I received when working with the Red Gate personnel.  I WOULD recommend trying the application and seeing if it fits into your process, BUT remember there are still some kinks in it to hopefully be worked out. My next post will definitely be shorter (hopefully), but thank you for reading up to here, or skipping ahead!  Please, if you do try the product, drop me a message and let me know what you think!  I would love to hear any opinions you may have on the product. Code Feel free to download the code I used above – download via DropBox
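Side note (mine, not part of the original review): the with/without-profiler timings quoted above are easy to reproduce with a plain Stopwatch harness. The sketch below is a hypothetical baseline runner, assuming the IPrimes interface and the DumbPrimes/SmartPrimes implementations from the review are compiled into the same project; the prime count comes from the post, everything else is illustrative.

using System;
using System.Diagnostics;

class BaselineBenchmark
{
    static void Main()
    {
        // Time each implementation without the profiler attached.
        TimeIt("DumbPrimes", new DumbPrimes(), 15000);
        TimeIt("SmartPrimes", new SmartPrimes(), 15000);
    }

    static void TimeIt(string name, IPrimes generator, int count)
    {
        var sw = Stopwatch.StartNew();   // high-resolution timer
        generator.GetPrimes(count);      // both implementations compute eagerly
        sw.Stop();
        Console.WriteLine("{0}: first {1} primes in {2:F2} s",
            name, count, sw.Elapsed.TotalSeconds);
    }
}

Timing the raw calls this way gives the un-instrumented numbers (5.18 s and 1.43 s in the review) against which the profiler's line-level overhead should be judged.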

    Read the article

  • Generating EF Code First model classes from an existing database

    - by Jon Galloway
    Entity Framework Code First is a lightweight way to "turn on" data access for a simple CLR class. As the name implies, the intended use is that you're writing the code first and thinking about the database later. However, I really like the way Entity Framework Code First works, and I want to use it in existing projects and projects with pre-existing databases. For example, MVC Music Store comes with a SQL Express database that's pre-loaded with a catalog of music (including genres, artists, and songs), and while it may eventually make sense to load that seed data from a different source, for the MVC 3 release we wanted to keep using the existing database. While I'm not getting the full benefit of Code First - writing code which drives the database schema - I can still benefit from the simplicity of the lightweight code approach. Scott Guthrie blogged about how to use Entity Framework with an existing database, looking at how you can override the Entity Framework Code First conventions so that it can work with a database which was created following other conventions. That gives you the information you need to create the model classes manually. However, it turns out that with Entity Framework 4 CTP 5, there's a way to generate the model classes from the database schema. Once the grunt work is done, of course, you can go in and modify the model classes as you'd like, but you can save the time and frustration of figuring out things like mapping SQL database types to .NET types. Note that this template requires Entity Framework 4 CTP 5 or later. You can install EF 4 CTP 5 here. Step One: Generate an EF Model from your existing database The code generation system in Entity Framework works from a model. You can add a model to your existing project and delete it when you're done, but I think it's simpler to just spin up a separate project to generate the model classes. When you're done, you can delete the project without affecting your application, or you may choose to keep it around in case you have other database schema updates which require model changes. I chose to add the Model classes to the Models folder of a new MVC 3 application. Right-click the folder and select "Add / New Item..."   Next, select ADO.NET Entity Data Model from the Data Templates list, and name it whatever you want (the name is unimportant).   Next, select "Generate from database." This is important - it's what kicks off the next few steps, which read your database's schema.   Now it's time to point the Entity Data Model Wizard at your existing database. I'll assume you know how to find your database - if not, I covered that a bit in the MVC Music Store tutorial section on Models and Data. Select your database, uncheck "Save entity connection settings in Web.config" (since we won't be using them within the application), and click Next.   Now you can select the database objects you'd like modeled. I just selected all tables and clicked Finish.   And there's your model. If you want, you can make additional changes here before going on to generate the code.   Step Two: Add the DbContext Generator Like most code generation systems in Visual Studio lately, Entity Framework uses T4 templates which allow for some control over how the code is generated. K Scott Allen wrote a detailed article on T4 Templates and the Entity Framework on MSDN recently, if you'd like to know more. Fortunately for us, there's already a template that does just what we need without any customization. 
Right-click a blank space in the Entity Framework model surface and select "Add Code Generation Item..." Select the Code group in the Installed Templates section and pick the ADO.NET DbContext Generator. If you don't see this listed, make sure you've got EF 4 CTP 5 installed and that you're looking at the Code templates group. Note that the DbContext Generator template is similar to the EF POCO template which came out last year, but with "fix up" code (unnecessary in EF Code First) removed.   As soon as you do this, you'll see two terrifying Security Warnings - unless you click the "Do not show this message again" checkbox the first time. It will also be displayed (twice) every time you rebuild the project, so I checked the box and no immediate harm befell my computer (fingers crossed!).   Here's the payoff: two templates (filenames ending with .tt) have been added to the project, and they've generated the code I needed.   The "MusicStoreEntities.Context.tt" template built a DbContext class which holds the entity collections, and the "MusicStoreEntities.tt" template built a separate class for each table I selected earlier. We'll customize them in the next step. I recommend copying all the generated .cs files into your application at this point, since accidentally rebuilding the generation project will overwrite your changes if you leave them there. Step Three: Modify and use your POCO entity classes Note: I made a bunch of tweaks to my POCO classes after they were generated. You don't have to do any of this, but I think it's important that you can - they're your classes, and EF Code First respects that. Modify them as you need for your application, or don't. The Context class derives from DbContext, which is what turns on the EF Code First features. It holds a DbSet<T> for each entity. Think of DbSet<T> as a simple List<T>, but with Entity Framework features turned on.   //------------------------------------------------------------------------------ // <auto-generated> // This code was generated from a template. // // Changes to this file may cause incorrect behavior and will be lost if // the code is regenerated. // </auto-generated> //------------------------------------------------------------------------------ namespace EF_CodeFirst_From_Existing_Database.Models { using System; using System.Data.Entity; public partial class Entities : DbContext { public Entities() : base("name=Entities") { } public DbSet<Album> Albums { get; set; } public DbSet<Artist> Artists { get; set; } public DbSet<Cart> Carts { get; set; } public DbSet<Genre> Genres { get; set; } public DbSet<OrderDetail> OrderDetails { get; set; } public DbSet<Order> Orders { get; set; } } } It's a pretty lightweight class as generated, so I just took out the comments, set the namespace, removed the constructor, and formatted it a bit. Done. If I wanted, though, I could have added or removed DbSets, overridden conventions, etc. using System.Data.Entity; namespace MvcMusicStore.Models { public class MusicStoreEntities : DbContext { public DbSet<Album> Albums { get; set; } public DbSet<Genre> Genres { get; set; } public DbSet<Artist> Artists { get; set; } public DbSet<Cart> Carts { get; set; } public DbSet<Order> Orders { get; set; } public DbSet<OrderDetail> OrderDetails { get; set; } } } Next, it's time to look at the individual classes. Some of mine were pretty simple - for the Cart class, I just need to remove the header and clean up the namespace. //------------------------------------------------------------------------------ // <auto-generated> // This code was generated from a template. 
// // Changes to this file may cause incorrect behavior and will be lost if // the code is regenerated. // </auto-generated> //------------------------------------------------------------------------------ namespace EF_CodeFirst_From_Existing_Database.Models { using System; using System.Collections.Generic; public partial class Cart { // Primitive properties public int RecordId { get; set; } public string CartId { get; set; } public int AlbumId { get; set; } public int Count { get; set; } public System.DateTime DateCreated { get; set; } // Navigation properties public virtual Album Album { get; set; } } } I did a bit more customization on the Album class. Here's what was generated: //------------------------------------------------------------------------------ // <auto-generated> // This code was generated from a template. // // Changes to this file may cause incorrect behavior and will be lost if // the code is regenerated. // </auto-generated> //------------------------------------------------------------------------------ namespace EF_CodeFirst_From_Existing_Database.Models { using System; using System.Collections.Generic; public partial class Album { public Album() { this.Carts = new HashSet<Cart>(); this.OrderDetails = new HashSet<OrderDetail>(); } // Primitive properties public int AlbumId { get; set; } public int GenreId { get; set; } public int ArtistId { get; set; } public string Title { get; set; } public decimal Price { get; set; } public string AlbumArtUrl { get; set; } // Navigation properties public virtual Artist Artist { get; set; } public virtual Genre Genre { get; set; } public virtual ICollection<Cart> Carts { get; set; } public virtual ICollection<OrderDetail> OrderDetails { get; set; } } } I removed the header, changed the namespace, and removed some of the navigation properties. One nice thing about EF Code First is that you don't have to have a property for each database column or foreign key. In the Music Store sample, for instance, we build the app up using code first and start with just a few columns, adding in fields and navigation properties as the application needs them. EF Code First handles the columns we've told it about and doesn't complain about the others. Here's the basic class: using System.ComponentModel; using System.ComponentModel.DataAnnotations; using System.Web.Mvc; using System.Collections.Generic; namespace MvcMusicStore.Models { public class Album { public int AlbumId { get; set; } public int GenreId { get; set; } public int ArtistId { get; set; } public string Title { get; set; } public decimal Price { get; set; } public string AlbumArtUrl { get; set; } public virtual Genre Genre { get; set; } public virtual Artist Artist { get; set; } public virtual List<OrderDetail> OrderDetails { get; set; } } } It's my class, not Entity Framework's, so I'm free to do what I want with it. 
I added a bunch of MVC 3 annotations for scaffolding and validation support, as shown below: using System.ComponentModel; using System.ComponentModel.DataAnnotations; using System.Web.Mvc; using System.Collections.Generic; namespace MvcMusicStore.Models { [Bind(Exclude = "AlbumId")] public class Album { [ScaffoldColumn(false)] public int AlbumId { get; set; } [DisplayName("Genre")] public int GenreId { get; set; } [DisplayName("Artist")] public int ArtistId { get; set; } [Required(ErrorMessage = "An Album Title is required")] [StringLength(160)] public string Title { get; set; } [Required(ErrorMessage = "Price is required")] [Range(0.01, 100.00, ErrorMessage = "Price must be between 0.01 and 100.00")] public decimal Price { get; set; } [DisplayName("Album Art URL")] [StringLength(1024)] public string AlbumArtUrl { get; set; } public virtual Genre Genre { get; set; } public virtual Artist Artist { get; set; } public virtual List<OrderDetail> OrderDetails { get; set; } } } The end result was that I had working EF Code First model code for the finished application. You can follow along through the tutorial to see how I built up to the finished model classes, starting with simple 2-3 property classes and building up to the full working schema. Thanks to Diego Vega (on the Entity Framework team) for pointing me to the DbContext template.
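As an aside: if you would rather skip the generator entirely and follow the convention-override route Scott Guthrie described at the start of this post, the sketch below shows the general shape. This is my illustration, not code from the tutorial; it uses the EF 4.1-style DbModelBuilder API (names may differ slightly under CTP 5), and the legacy table/column names are made up for the example.

using System.Data.Entity;

namespace MvcMusicStore.Models
{
    public class MusicStoreEntities : DbContext
    {
        public DbSet<Album> Albums { get; set; }

        protected override void OnModelCreating(DbModelBuilder modelBuilder)
        {
            // Hypothetical legacy names: table "tbl_Album", key column "AlbumID".
            modelBuilder.Entity<Album>().ToTable("tbl_Album");
            modelBuilder.Entity<Album>().HasKey(a => a.AlbumId);
            modelBuilder.Entity<Album>()
                        .Property(a => a.AlbumId)
                        .HasColumnName("AlbumID");
        }
    }
}

The trade-off is the one the post already hints at: hand-written mappings give you full control over how classes meet an existing schema, while the generator saves you the type-mapping grunt work.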

    Read the article

  • CodePlex Daily Summary for Sunday, February 13, 2011

    CodePlex Daily Summary for Sunday, February 13, 2011. Popular Releases: TV4Home - The all-in-one TV solution!: 0.1.0.0 Preview: This is the beta preview release of the TV4Home software. Finestra Virtual Desktops: 1.2: Fixes a few minor issues with 1.1, including the broken per-desktop backgrounds; further improves the speed of switching desktops; a few UI performance improvements; added donations links. NuGet: NuGet 1.1: NuGet is a free, open source, developer-focused package management system for the .NET platform intent on simplifying the process of incorporating third-party libraries into a .NET application during development. This release is a Visual Studio 2010 extension and contains the Package Manager Console and the Add Package Dialog. The URL to the package OData feed is: http://go.microsoft.com/fwlink/?LinkID=206669 To see the list of issues fixed in this release, visit our issues list. EnhSim: EnhSim 2.4.0: This release supports WoW patch 4.06 at level 85. To use this release, you must have the Microsoft Visual C++ 2010 Redistributable Package installed. This can be downloaded from http://www.microsoft.com/downloads/en/details.aspx?FamilyID=A7B7A05E-6DE6-4D3A-A423-37BF0912DB84 To use the GUI you must have the .NET 4.0 Framework installed. This can be downloaded from http://www.microsoft.com/downloads/en/details.aspx?FamilyID=9cfb2d51-5ff4-4491-b0e5-b386f32c0992 Changes since 2.3.0 - Upd... Sterling Isolated Storage Database with LINQ for Silverlight and Windows Phone 7: Sterling OODB v1.0: Note: use this changeset to download the source example that has been extended to show database generation, backup, and restore in the desktop example. Welcome to the Sterling 1.0 RTM. This version is not backwards-compatible with previous versions of Sterling. Sterling is also available via NuGet. This product has been used and tested in many applications and contains a full suite of unit tests. You can refer to the User's Guide for complete documentation, and use the unit tests as guide... PDF Rider: PDF Rider 0.5.1: Changes from the previous version: * Use dynamic layout to better fit text in other languages * Includes French and Spanish localizations. Prerequisites: * Microsoft Windows Operating Systems (XP - Vista - 7) * Microsoft .NET Framework 3.5 runtime * A PDF rendering software (i.e. Adobe Reader) that can be opened inside Internet Explorer. Installation instructions: Choose one of the following methods: 1. Download and run the "pdfRider0.5.1-setup.exe" (recommended) 2. Down... Snoop, the WPF Spy Utility: Snoop 2.6.1: This release is a bug fixing release. Most importantly, issues have been seen around WPF 4.0 applications not always showing up in the app chooser. Hopefully, they are fixed now. I thought this issue warranted a minor release since more and more people are going WPF 4.0 and I don't want anyone to have any problems. Dan Hanan also contributes again with several usability features. Thanks Dan! Happy Snooping! p.s. By request, I am also attaching a .zip file ... so that people can install it ... RIBA - Rich Internet Business Application for Silverlight: Preview of MVVM Framework Source + Tutorials: This is a first public release of the MVVM Framework which is part of the final RIBA application. The complete RIBA example LOB application has yet to be published. Further documentation on the MVVM part can be found on the blog, http://www.SilverlightBlog.Net and in the downloadable source ( mvvm/doc/ ). 
Please post all issues and suggestions in the issue tracker. SharePoint Learning Kit: 1.5: SharePoint Learning Kit 1.5 has the following new functionality: *Support for SharePoint 2010 *E-Learning Actions can be localised *Two New Document Library Edit Options *Automatically add the Assignment List Web Part to the Web Part Gallery *Various Bug Fixes for the Drop Box. There are 2 downloads for this release: SLK-1.5-2010.zip for SharePoint 2010 and SLK-1.5-2007.zip for SharePoint 2007 (WSS3 & MOSS 2007). Facebook C# SDK: 5.0.3 (BETA): This is the fourth BETA release of the version 5 branch of the Facebook C# SDK. Remember this is a BETA build. Some things may change or not work exactly as planned. We are absolutely looking for feedback on this release to help us improve the final 5.X.X release. For more information about this release see the following blog posts: Facebook C# SDK - Writing your first Facebook Application, Facebook C# SDK v5 Beta Internals, and Facebook C# SDK V5.0.0 (BETA) Released. We have spent time trying ... NodeXL: Network Overview, Discovery and Exploration for Excel: NodeXL Excel Template, version 1.0.1.161: The NodeXL Excel template displays a network graph using edge and vertex lists stored in an Excel 2007 or Excel 2010 workbook. What's New: This release adds a new Twitter List network importer, makes some minor feature improvements, and fixes a few bugs. See the Complete NodeXL Release History for details. Installation Steps: Follow these steps to install and use the template: Download the Zip file. Unzip it into any folder. Use WinZip or a similar program, or just right-click the Zip file... WCF Data Services Toolkit: WCF Data Services Toolkit: The source code and binary releases of the WCF Data Services Toolkit. For simplicity, the source code download doesn't include any of the MSTest files. If you want those, you can pull the code down via Mercurial. youtubeFisher: youtubeFisher 3.0 [beta]: What's new: Video capturing improved; supports YouTube's new layout (January 2011); internal refactoring. Nearforums - ASP.NET MVC forum engine: Nearforums v5.0: Version 5.0 of the ASP.NET MVC Forum Engine, containing the following improvements: .NET 4.0 as target framework using ASP.NET MVC 3; all views migrated to Razor for cleaner markup; alternate template (Layout file) for mobile devices; 4 bug fixes since Version 4.1. Visit the project Roadmap for more details. Webdeploy package sha1 checksum: 28785b7248052465ea0738a7775e8e8744d84c27. fuv: 1.0 release, codename Chopper Joe: features: search/replace, :o to open file, :s to save file, :q to quit. ASP.NET MVC Project Awesome, jQuery Ajax helpers (controls): 1.7: A rich set of helpers (controls) that you can use to build highly responsive and interactive Ajax-enabled Web applications. These helpers include Autocomplete, AjaxDropdown, Lookup, Confirm Dialog, Popup Form, Popup and Pager. HTML generation optimized; new features for the lookup (add additional search data); live demo went aero. AutoLoL: AutoLoL v1.5.5: AutoChat now allows up to 6 items. Items with nr. 7-0 will be removed! 
News page URLs are now opened in the default browser. Added a context menu to the system tray icon (thanks to Alex Banagos). AutoChat now allows configuring the Chat Keys and the Modifier Key. The recent files list now supports compact and full mode. Fix: Swapped mouse buttons are now properly detected. Fix: Sometimes the Play button was pressed while still greyed out. Champion: Karma. Note: You can also run the u... mojoPortal: 2.3.6.2: see release notes on mojoportal.com http://www.mojoportal.com/mojoportal-2362-released.aspx Note that we have separate deployment packages for .NET 3.5 and .NET 4.0. The deployment package downloads on this page are pre-compiled and ready for production deployment; they contain no C# source code. To download the source code see the Source Code tab. I recommend getting the latest source code using TortoiseHG; you can get the source code corresponding to this release here. Rawr: Rawr 4.0.19 Beta: Rawr is now web-based. The link to use Rawr4 is: http://elitistjerks.com/rawr.php This is the Cataclysm Beta Release. More details can be found at the following link http://rawr.codeplex.com/Thread/View.aspx?ThreadId=237262 As of the 4.0.16 release, you can now also begin using the new downloadable WPF version of Rawr! This is a pre-alpha release of the WPF version; there are likely to be a lot of issues. If you have a problem, please follow the Posting Guidelines and put it into the Issue Trac... IronRuby: 1.1.2: IronRuby 1.1.2 is a servicing release that keeps on improving compatibility with Ruby 1.9.2 and includes IronRuby integration to Visual Studio 2010. We decided to drop 1.8.6 compatibility mode in all post-1.0 releases. We recommend using IronRuby 1.0 if you need 1.8.6 compatibility. In this release we fixed several major issues: - problems that blocked Gem installation in certain cases - regex syntax: the parser was replaced with a new one that is much more compatible with Ruby 1.9.2 - cras... New Projects: Abstract | .NET DDD abstraction for infra-structure (Data, Blobs, Queues): In the last few years we have seen many tools abstract access to infra-structures. They are all very different, which makes it difficult for you to move from Azure or to Azure. Abstract makes migration easier by standardising access to these infra-structures. Apex APRS: Apex APRS is a new APRS client application that is unlike any other. Key features: online and offline-cached map viewing from multiple popular sources; fast, simple, intuitive & powerful user interface. Customizable Notification System: Customizable Notification System. Daniel Singleton for C++: An elegant solution for C++ singletons using dependency declaration to control lifetime. One object created during any execution, lazy-init, thread safety... nice and compact. Deduplicator: Deduplicator helps to organize your file system. Create one folder organized by choice containing unique files. To be used for photos, MP3s or any other binary format. Deduplicator is not released yet; the user interface is limited and some hardcoding is still in place. Digitypon (ASP.NET MVC 3): Digitypon will be a new web application specialized to be used by those who want to set up an e-newspaper or an e-magazine. 
The main difference from other CMSs is that Digitypon's workflow is a virtualized way of how employees of printed matter (newspapers, magazines) work. EdgeJournalImporter: Import journal files written on the Entourage (Pocket) Edge into Microsoft OneNote 2007+. FlatFileSerializer: Serialize and deserialize flat file records from and to self-defined classes using Attributes. Google Chart Helper: Controls to insert Google Charts into your web application. No JavaScript code to write - we do it for you! How to display records from MySQL 5.1 database in asp.net using VB.net or CSharp: How to display records from a MySQL 5.1+ database in asp.net with VB.net or C# code. HTTP Filer: HTTP Filer is a utility that allows users to share files and documents over the HTTP protocol. This utility was designed especially for Windows Phone users to send files from computer to their phone easily without sending emails with attachments or uploading files to an internet server. ibamonitoring: Source code for the avian point-count data collection web site www.ibamonitoring.org. JoPack Ultra Light Packaging for large teams: JoPack is an open-source ultra-light package management software targeted at simplifying development with large teams sharing volatile assemblies across several solutions. Latest project source code can be found on the project home site: http://code.google.com/p/jo-pack/ L-System Turtle Based Fractal Tool (L-Fractal Tool): A tool to help you play with L-System turtle-graphics-based fractal curves ( http://en.wikipedia.org/wiki/L-system ). This tool helps you look into some of the well-known curves and lets you define new patterns and production rules to build your own. Have a fun-fractal day! mailer: mailer is an application to mail. It's developed in Python. NJamb: A C# DSL for more rigorous tests: NJamb is a C# syntax for tests and DDD specifications. It makes them more readable, faster to write, and more rigorous. Its Linq-style expressions can assert preconditions and postconditions. IntelliSense makes the syntax almost foolproof. And, it's designed to be extended. NUpdater: NUpdater makes it easier for .NET Framework developers to add auto-updating capability to their software. Putting together numerous patching capabilities, this library is an all-around updater. Developed in C# with CLS compliance (this library is fully compatible with Mono). Perihelia - The .NET & Silverlight Socket Project: Perihelia is an open-source socket framework. The framework includes (or will include) all the necessities you need to satisfy your networking needs. Windows and WPF applications are currently supported, and Silverlight applications will be supported soon. PLogger: PLogger is a light, fast, configuration-less file appender logger built using a parallel pipeline architecture. It is much easier and faster to set up and use than Log4Net or the Enterprise Library. Quasar: Quasar is a professional .NET utility library which adds sugar on top of the .NET Framework. Sectors Game Engine: Sectors is an XNA-based 2.5D (Doom-like) game engine with console and scripting support for Windows. SharePoint 2010 Server-Side-Scanner WebPart - embDocumentInhalator: embDocumentInhalator makes it possible for SharePoint 2010 users to scan documents from scanners attached directly to the server. For developers it may help to see the relationship between the individual components required. 
SIAJUR: A web project for document control (original title: "Projeto Web para controle de documentos"). svcutil2: svcutil2 generates WCF client proxies from WSDL2 documents. TemporalMemoryNetwork: TemporalMemoryNetwork is a research project exploring how dynamical systems can store and represent patterns that occur through time. WebDAV#: This project aims to implement WebDAV support for .NET, both for client software as well as software hosting their own WebDAV server. The project will start with the server portion. The project will be developed in C# 3.5 for .NET 3.5 and 4.0.

    Read the article

  • External USB attached drive works in Windows XP but not in Windows 7. How to fix?

    - by irrational John
    Earlier this week I purchased this "N52300 EZQuest Pro" external hard drive enclosure from here. I can connect the enclosure using USB 2.0 and access the files in both NTFS partitions on the MBR-partitioned drive when I use either Windows XP (SP3) or Mac OS X 10.6. So it works as expected in XP & Snow Leopard. However, the enclosure does not work in Windows 7 (Home Premium), either 64-bit or 32-bit, or in Ubuntu 10.04 (kernel 2.6.32-23-generic). I'm thinking this must be a Windows 7 driver problem because the enclosure works in XP & Snow Leopard. I do know that no special drivers are required to use this enclosure. It is supported using the USB mass storage drivers included with XP and OS X. It should also work fine using the mass storage support in Windows 7, no? FWIW, I have also tried using 32-bit Windows 7 on both my desktop, a Gigabyte GA-965P-DS3 with a Pentium Dual-Core E6500 @ 2.93GHz, and on my early 2008 MacBook. I see the same failure in both cases that I see with 64-bit Windows 7. So it doesn't appear to be specific to one hardware platform. I'm hoping someone out there can help me either get the enclosure to work in Windows 7 or convince me that the enclosure hardware is bad and should be RMAed. At the moment, though, an RMA seems pointless since this appears to be a (Windows 7) device driver problem. I have tried to track down any updates to the mass storage drivers included with Windows 7 but have so far come up empty. Heck, I can't even figure out how to place a bug report with Microsoft since apparently the grace period for Windows 7 email support is only a few months. I came across a link to some USB troubleshooting steps in another question. I haven't had a chance to look over the suggestions on that site or try them yet. Maybe tomorrow if I have time ... ;-) I'll finish up with some more details about the problem. When I connect the enclosure to Windows 7 using USB, at first it appears everything worked. Windows detects the drive and installs a driver for it. Looking in Device Manager, there is an entry under the Hard Drives section with the title Hitachi HDT721010SLA360 USB Device. When you open Windows Disk Management the first time after the enclosure has been attached, the drive appears as "Not Initialized" and I'm prompted to initialize it. This is bogus. After all, the drive worked fine in XP so I know it has already been initialized, partitioned, and formatted. So of course I never try to initialize it "again". (It's a 1 TB drive and I don't want to lose the data on it.) Except for this first time, the drive never shows up in Disk Management again unless I uninstall the Hitachi HDT721010SLA360 USB Device entry under Hard Drives, unplug, and then replug the enclosure. If I do that, the process in the previous paragraph repeats. In Ubuntu the enclosure never shows up at all at the file system level. Below are an excerpt from kern.log and an excerpt from the result of lsusb -v after attaching the enclosure. It appears that Ubuntu at first recognizes the enclosure and attempts to attach it, but encounters errors which prevent it from doing so. Unfortunately, I don't know whether any of this info is useful or not. 
excerpt from kern.log

[ 2684.240015] usb 1-2: new high speed USB device using ehci_hcd and address 22
[ 2684.393618] usb 1-2: configuration #1 chosen from 1 choice
[ 2684.395399] scsi17 : SCSI emulation for USB Mass Storage devices
[ 2684.395570] usb-storage: device found at 22
[ 2684.395572] usb-storage: waiting for device to settle before scanning
[ 2689.390412] usb-storage: device scan complete
[ 2689.390894] scsi 17:0:0:0: Direct-Access Hitachi HDT721010SLA360 ST6O PQ: 0 ANSI: 4
[ 2689.392237] sd 17:0:0:0: Attached scsi generic sg7 type 0
[ 2689.395269] sd 17:0:0:0: [sde] 1953525168 512-byte logical blocks: (1.00 TB/931 GiB)
[ 2689.395632] sd 17:0:0:0: [sde] Write Protect is off
[ 2689.395636] sd 17:0:0:0: [sde] Mode Sense: 11 00 00 00
[ 2689.395639] sd 17:0:0:0: [sde] Assuming drive cache: write through
[ 2689.412003] sd 17:0:0:0: [sde] Assuming drive cache: write through
[ 2689.412009] sde: sde1 sde2
[ 2689.455759] sd 17:0:0:0: [sde] Assuming drive cache: write through
[ 2689.455765] sd 17:0:0:0: [sde] Attached SCSI disk
[ 2692.620017] usb 1-2: reset high speed USB device using ehci_hcd and address 22
[ 2707.740014] usb 1-2: device descriptor read/64, error -110
[ 2722.970103] usb 1-2: device descriptor read/64, error -110
[ 2723.200027] usb 1-2: reset high speed USB device using ehci_hcd and address 22
[ 2738.320019] usb 1-2: device descriptor read/64, error -110
[ 2753.550024] usb 1-2: device descriptor read/64, error -110
[ 2753.780020] usb 1-2: reset high speed USB device using ehci_hcd and address 22
[ 2758.810147] usb 1-2: device descriptor read/8, error -110
[ 2763.940142] usb 1-2: device descriptor read/8, error -110
[ 2764.170014] usb 1-2: reset high speed USB device using ehci_hcd and address 22
[ 2769.200141] usb 1-2: device descriptor read/8, error -110
[ 2774.330137] usb 1-2: device descriptor read/8, error -110
[ 2774.440069] usb 1-2: USB disconnect, address 22
[ 2774.440503] sd 17:0:0:0: Device offlined - not ready after error recovery
[ 2774.590023] usb 1-2: new high speed USB device using ehci_hcd and address 23
[ 2789.710020] usb 1-2: device descriptor read/64, error -110
[ 2804.940020] usb 1-2: device descriptor read/64, error -110
[ 2805.170026] usb 1-2: new high speed USB device using ehci_hcd and address 24
[ 2820.290019] usb 1-2: device descriptor read/64, error -110
[ 2835.520027] usb 1-2: device descriptor read/64, error -110
[ 2835.750018] usb 1-2: new high speed USB device using ehci_hcd and address 25
[ 2840.780085] usb 1-2: device descriptor read/8, error -110
[ 2845.910079] usb 1-2: device descriptor read/8, error -110
[ 2846.140023] usb 1-2: new high speed USB device using ehci_hcd and address 26
[ 2851.170112] usb 1-2: device descriptor read/8, error -110
[ 2856.300077] usb 1-2: device descriptor read/8, error -110
[ 2856.410027] hub 1-0:1.0: unable to enumerate USB device on port 2
[ 2856.730033] usb 3-2: new full speed USB device using uhci_hcd and address 11
[ 2871.850017] usb 3-2: device descriptor read/64, error -110
[ 2887.080014] usb 3-2: device descriptor read/64, error -110
[ 2887.310011] usb 3-2: new full speed USB device using uhci_hcd and address 12
[ 2902.430021] usb 3-2: device descriptor read/64, error -110
[ 2917.660013] usb 3-2: device descriptor read/64, error -110
[ 2917.890016] usb 3-2: new full speed USB device using uhci_hcd and address 13
[ 2922.911623] usb 3-2: device descriptor read/8, error -110
[ 2928.051753] usb 3-2: device descriptor read/8, error -110
[ 2928.280013] usb 3-2: new full speed USB device using uhci_hcd and address 14
[ 2933.301876] usb 3-2: device descriptor read/8, error -110
[ 2938.431993] usb 3-2: device descriptor read/8, error -110
[ 2938.540073] hub 3-0:1.0: unable to enumerate USB device on port 2

excerpt from lsusb -v

Bus 001 Device 017: ID 0dc4:0000 Macpower Peripherals, Ltd
Device Descriptor:
  bLength                18
  bDescriptorType         1
  bcdUSB               2.00
  bDeviceClass            0 (Defined at Interface level)
  bDeviceSubClass         0
  bDeviceProtocol         0
  bMaxPacketSize0        64
  idVendor           0x0dc4 Macpower Peripherals, Ltd
  idProduct          0x0000
  bcdDevice            0.01
  iManufacturer           1 EZ QUEST
  iProduct                2 USB Mass Storage
  iSerial                 3 220417
  bNumConfigurations      1
  Configuration Descriptor:
    bLength                 9
    bDescriptorType         2
    wTotalLength           32
    bNumInterfaces          1
    bConfigurationValue     1
    iConfiguration          5 Config0
    bmAttributes         0xc0
      Self Powered
    MaxPower              0mA
    Interface Descriptor:
      bLength                 9
      bDescriptorType         4
      bInterfaceNumber        0
      bAlternateSetting       0
      bNumEndpoints           2
      bInterfaceClass         8 Mass Storage
      bInterfaceSubClass      6 SCSI
      bInterfaceProtocol     80 Bulk (Zip)
      iInterface              4 Interface0
      Endpoint Descriptor:
        bLength                 7
        bDescriptorType         5
        bEndpointAddress     0x01 EP 1 OUT
        bmAttributes            2
          Transfer Type          Bulk
          Synch Type             None
          Usage Type             Data
        wMaxPacketSize     0x0200 1x 512 bytes
        bInterval               0
      Endpoint Descriptor:
        bLength                 7
        bDescriptorType         5
        bEndpointAddress     0x81 EP 1 IN
        bmAttributes            2
          Transfer Type          Bulk
          Synch Type             None
          Usage Type             Data
        wMaxPacketSize     0x0200 1x 512 bytes
        bInterval               0
Device Qualifier (for other device speed):
  bLength                10
  bDescriptorType         6
  bcdUSB               2.00
  bDeviceClass            0 (Defined at Interface level)
  bDeviceSubClass         0
  bDeviceProtocol         0
  bMaxPacketSize0        64
  bNumConfigurations      1
Device Status:     0x0001
  Self Powered

Update: Results using Firewire to connect. Today I received a 1394b 9-pin to 1394a 6-pin cable which allowed me to connect the "EZQuest Pro" via Firewire. Everything works. When I use Firewire I can connect whether I'm using Windows 7 or Ubuntu 10.04. I even tried booting my Gigabyte desktop as an OS X 10.6.3 Hackintosh and it worked there as well. (Though if I recall correctly, it also worked when using USB 2.0 and booting OS X on the desktop. Certainly it works with USB 2.0 and my MacBook.) I believe the firmware on the device is at the latest level available, v1.07. I base this on the excerpt below from the OS X System Profiler, which shows Firmware Revision: 0x107. Bottom line: It's nice that the enclosure is actually usable when I connect with Firewire. But I am still searching for an answer as to why it does not work correctly when using USB 2.0 in Windows 7 (and Ubuntu ... but really Windows 7 is my biggest concern).

OXFORD IDE Device 1:
  Manufacturer: EZ QUEST
  Model: 0x0
  GUID: 0x1D202E0220417
  Maximum Speed: Up to 800 Mb/sec
  Connection Speed: Up to 400 Mb/sec
  Sub-units:
    OXFORD IDE Device 1 Unit:
      Unit Software Version: 0x10483
      Unit Spec ID: 0x609E
      Firmware Revision: 0x107
      Product Revision Level: ST6O
      Sub-units:
        OXFORD IDE Device 1 SBP-LUN:
          Capacity: 1 TB (1,000,204,886,016 bytes)
          Removable Media: Yes
          BSD Name: disk3
          Partition Map Type: MBR (Master Boot Record)
          S.M.A.R.T. status: Not Supported
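For anyone chasing a similar failure: the error -110 in those kern.log lines is -ETIMEDOUT, meaning the device never answered the descriptor read at all. A few stock Linux commands help narrow such a problem down; this is a generic diagnostic sketch, not specific to this enclosure (the bus/device pair 001:017 is taken from the lsusb excerpt above):

# Watch the kernel's enumeration attempts live while plugging the drive in
tail -f /var/log/kern.log

# Show the bus topology, to see which controller/hub the enclosure sits behind
lsusb -t

# Dump the descriptors of a single device
lsusb -v -s 001:017

Timeouts like these usually point at a marginal cable, a bus-power shortfall, or a flaky bridge chip rather than at the mass storage driver itself, so retrying on a different port, behind a powered hub, and with a second cable is a quick way to split the problem in half.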

    Read the article

  • MSCC: Global Windows Azure Bootcamp

Mauritius participated and contributed to the Global Windows Azure Bootcamp 2014 (GWAB). Again! And this time stronger than ever, together with 137 other locations in 56 countries worldwide. We had 62 named registrations, 7 guest additions and approximately 10 offline participants prior to the event day. Most interestingly, the organisation of the GWAB through the MSCC helped to increase the number of craftsmen: the Mauritius Software Craftsmanship Community currently has 138 registered members - in less than one year! Those numbers alone let us proudly say that all the preparations and hard work towards this event have already paid off. Personally, I'm really grateful that we had this kind of response, and the feedback from some attendees confirmed that the MSCC is on the right track here on Cyber Island Mauritius. Inspired and motivated by the success of this event, rest assured that there will be more public events like the GWAB.

This time it took some time to reflect on our meetup, following my first impression right on the spot: "Wow, what an experience to organise and participate in this global event. Overall, I've been very pleased with the preparations and the event itself. Surely, there have been some nicks that we have to address and to improve for future activities like this. Quite frankly, we are not professional event organisers (not yet) but we learned a lot over the past couple of days. A big Thank You to our event sponsors, namely Microsoft Indian Ocean Islands & French Pacific, Ceridian Mauritius and Emtel. Without them this event wouldn't have happened - Thank You! And to the cool team members of Microsoft Student Partners (MSPs). You geeks did a great job! Thanks!"

So, how many attendees did we actually have? 61! - Awesome - 61 cloud computing instances to help with the research on diabetes. During Saturday afternoon there was even an online publication on L'Express: Les développeurs mauriciens se joignent au combat contre le diabète

Reactions of other attendees
Don't take my word for granted... Here are some impressions and feedback from our participants:
"Awesome event, really appreciated the presentations :-)" -- Kevin on event comments
"very interesting and enriching." -- Diana on event comments
"#gwab #gwabmru 2014 great success. Looking forward for gwab 2015" -- Wasiim on Twitter
"Was there till the end. Awesome Event. I'll surely join upcoming meetup sessions :)" -- Luchmun on event comments
"#gwabmru was not that cool. left early" -- Mohammad on Twitter

The overall feedback is positive, but we are absolutely aware that there were quite a number of problems we had to face. We are already looking into them and drafting ideas and action plans on how we will be able to improve things for future events.

The sessions
We started the day with welcoming speeches by Thierry Coret, Sr. Marketing Manager of Microsoft Indian Ocean Islands & French Pacific, and Vidia Mooneegan, Managing Director and Sr. Vice President of Ceridian Mauritius. The clear emphasis was on the endless possibilities of cloud computing and how it can enable any kind of sector here in the country. Then it was about time to set up the cloud computing services in order to contribute each attendee's cloud computing resources to the global research on diabetes, in a step-by-step guide presented by Arnaud Meslier, Technical Evangelist at Microsoft. Given a rendering package and a configuration file, it was very interesting to follow the individual steps in Windows Azure.
Also, during the day we were not sure whether the setup had been done correctly, as Mauritius didn't show up on the results board - which should have been the case after approximately 20 to 30 minutes. Anyways, let the minions work... Next, Arnaud gave a brief overview of the variety of services Windows Azure has to offer: whether you need a development environment for your websites or mobile apps, want to run a virtual machine with your existing applications, or simply want to put a SQL database online, Windows Azure has the right packages available, and the online management portal is really easy to handle.

After this, we got a little bit more business oriented while Wasiim Hosenbocus, an employee at Ceridian, took the attendees through the internals of a real-life application and demoed a couple of the existing features. He did a great job of showing how the different services of Windows Azure can be created and pulled together.

After the lunch break it is always tough to keep the audience awake... And it was my turn. I gave a brief overview of operating and managing a SQL database on Windows Azure. Well, there are actually two options available, and depending on your individual requirements you should be aware of both. The simpler version is called SQL Database, and while provisioning only takes a couple of seconds, you should take into consideration that SQL Database has a number of constraints, like limitations on the actual database size - up to 5 GB as Web edition and up to 150 GB maximum as Business edition - among others.

Next, it was Chervine Bhiwoo's session on Windows Azure Mobile Services. It was absolutely amazing to see that the mobile services directly offer you various project templates, like Windows 8 Store app, Android app, iOS app, and even Xamarin cross-platform app development. So, within a couple of minutes you can have your first mobile app active and running on Windows Azure. Furthermore, Chervine showed the attendees that adding another user interface, like a web site running on ASP.NET MVC 4, only takes a couple of minutes, too.

And last but not least, we rounded up the day with Windows Azure Websites and hosting of Virtual Machines, presented by some members of the local Microsoft Student Partners programme. Surely, one of the big advantages of using Windows Azure is the availability of pre-defined installation packages of known web applications, like WordPress, Joomla!, or Ghost. Compared to running your own web site with a traditional web hoster it is surely on par, and depending on your personal level of expertise, Windows Azure provides you with more liberty in terms of configuration than a shared hosting environment might.

Running a pre-defined web application is one thing, but in case you would like to have more control over your hosting environment, it is highly advisable to opt for a virtual machine. Provisioning of an Ubuntu 12.04 LTS system was very simple to do, but it takes a few more minutes than you might expect. So, please be patient and take your time while Windows Azure gets everything in place for you. Afterwards, you can use a Secure Shell (ssh) client like PuTTY in case of a Linux-based machine, or Remote Desktop Services when operating a Windows Server system, to log in to your virtual machine.

At the end of the day we had a great Q&A session, and we finalised the event with our raffle of goodies. Participation in the raffle was bound to submission of the session survey, and most gratefully we had a give-away for everyone.
What a nice coincidence to finish off the day.

Note: All session slides (and demo code) will be made available on the MSCC event page. Please check the Files section over there.

(Some) Visual impressions from the event
Just to give you an idea about what happened during the GWAB 2014 at Ebene...
Speakers and Microsoft Student Partners are getting ready for the Global Windows Azure Bootcamp 2014
GWAB 2014 attendees are fully integrated into the hands-on labs and setting up their individual cloud computing services
60 attendees at the GWAB 2014. Despite some technical difficulties we had a great time running the event
GWAB 2014: Using the lunch break for networking and exchange of ideas - great conversations and topics amongst attendees
There are more pictures on the original event page:

Questions & Answers
Following are a couple of questions which were asked but didn't get an answer during the event:
Q: Is it possible to upload static pages via FTP?
A: Yes, you can. Have a look at the right side column on the dashboard of your website. There you'll find information about the FTP and SFTP host names. You can use any FTP client, like FileZilla, to log in. FTP also gives you access to your log files.
Q: What are the size limitations on SQL Database?
A: 5 GB on the Web edition, and 150 GB on the Business edition. A maximum of 150 databases (including 'master') per SQL Database server. More details here: General Guidelines and Limitations (Azure SQL Database)
Q: What's the maximum size of a SQL Server running in a Virtual Machine?
A: The maximum Windows Azure VM currently has 8 CPU cores, 14 or 56 GB of RAM and 16x 1 TB hard drives. More details here: Virtual Machine and Cloud Service Sizes for Azure
Q: How can we register for Windows Azure?
A: Mauritius is currently not listed for phone verification. Please get in touch with Arnaud Meslier at Microsoft IOI & FP
Q: Can I use my own domain name for Windows Azure websites?
A: Yes, you can. But this might require upscaling your account to Standard.
In case I missed a question and answer, please use the comment section at the end of the article. Thanks!

Final results
Every participant was instructed during the hands-on-lab session on how to set up a cloud computing service in their account. Of course, I won't keep the results from you...
Global Azure Lab GWAB 2014: Our cloud computing contribution to the research on diabetes
And I would say Mauritius did a good job!

Upcoming Events
What are the upcoming events here in Mauritius? So far, we have the following ones (incomplete list as usual) in chronological order:
Launch of Microsoft SQL Server 2014 (15.4.2014)
Corsair Hackers Reboot (19.4.2014)
WebCup (TBA ~ June 2014)
Developers Conference (TBA ~ July 2014)
Linuxfest 2014 (TBA ~ November 2014)
Hopefully, there will be more announcements during the next couple of weeks and months. If you know about any other event, like a bootcamp, a code challenge or a hackathon here in Mauritius, please drop me a note in the comment section below this article. Thanks!

Networking and job/project opportunities
Despite having technical presentations on Windows Azure, an event like this always offers a great bunch of possibilities and opportunities to get in touch with new people in IT and to have an exchange of experience with other like-minded people.
As I already wrote in Communities - The importance of exchange and discussion - I had a great conversation with representatives of the University des Mascareignes, which is currently embracing cloud infrastructure and cloud computing for its various campuses in the Indian Ocean. As for the MSCC, it would be a great experience to stay in touch with them and to work on upcoming, common activities. Furthermore, I had a very good conversation with Thierry and Ludovic of Microsoft IOI & FP on the necessity of user groups and IT communities here on the island. It's great to see that the MSCC is currently on a roll and that local companies are sharing our thoughts on promoting IT careers and the exchange of IT knowledge in such an open way. I'm also looking forward to being able to participate in and contribute to more events in the near future.

My resume of the day
We learned a lot today and there is always room for improvement! It was an awesome event and, quite frankly, it was a pleasure to spend the day with so many enthusiastic IT people in the same room. It was a great experience to organise such an event locally and to participate on a global scale to support the GlyQ-IQ Technology in its research on diabetes. I was so pleased to see the involvement of new MSCC members in taking the opportunity to share and learn about the power of cloud computing. The Mauritius Software Craftsmanship Community is on the right way, and this year's bootcamp on Windows Azure only marked the beginning of our journey. Thank you to our sponsors and my kudos to the MSPs!

Update: Media coverage
The event has been reported in local media, too. Following are some resources:
Orange - Local - Business: Le cloud, pour des recherches approfondies sur le diabète
Maurice Info.mu: Le cloud, pour des recherches approfondies sur le diabète
Le Quotidien Pg 2: Global Windows Azure Bootcamp 2014 - Le cloud pour des recherches approfondies sur le diabète
The Observer Pg 12: Le cloud, pour des recherches approfondies sur le diabète

    Read the article

  • Simplex Noise Help

    - by Alex Larsen
I'm making a Minecraft-like game in XNA (C#) and I need to generate land with caves. This is the simplex noise code I have:

/// <summary>
/// 1D simplex noise
/// </summary>
/// <param name="x"></param>
/// <returns></returns>
public static float Generate(float x)
{
    int i0 = FastFloor(x);
    int i1 = i0 + 1;
    float x0 = x - i0;
    float x1 = x0 - 1.0f;

    float n0, n1;

    float t0 = 1.0f - x0 * x0;
    t0 *= t0;
    n0 = t0 * t0 * grad(perm[i0 & 0xff], x0);

    float t1 = 1.0f - x1 * x1;
    t1 *= t1;
    n1 = t1 * t1 * grad(perm[i1 & 0xff], x1);

    // The maximum value of this noise is 8*(3/4)^4 = 2.53125
    // A factor of 0.395 scales to fit exactly within [-1,1]
    return 0.395f * (n0 + n1);
}

/// <summary>
/// 2D simplex noise
/// </summary>
/// <param name="x"></param>
/// <param name="y"></param>
/// <returns></returns>
public static float Generate(float x, float y)
{
    const float F2 = 0.366025403f; // F2 = 0.5*(sqrt(3.0)-1.0)
    const float G2 = 0.211324865f; // G2 = (3.0-Math.sqrt(3.0))/6.0

    float n0, n1, n2; // Noise contributions from the three corners

    // Skew the input space to determine which simplex cell we're in
    float s = (x + y) * F2; // Hairy factor for 2D
    float xs = x + s;
    float ys = y + s;
    int i = FastFloor(xs);
    int j = FastFloor(ys);

    float t = (float)(i + j) * G2;
    float X0 = i - t; // Unskew the cell origin back to (x,y) space
    float Y0 = j - t;
    float x0 = x - X0; // The x,y distances from the cell origin
    float y0 = y - Y0;

    // For the 2D case, the simplex shape is an equilateral triangle.
    // Determine which simplex we are in.
    int i1, j1; // Offsets for second (middle) corner of simplex in (i,j) coords
    if (x0 > y0) { i1 = 1; j1 = 0; } // lower triangle, XY order: (0,0)->(1,0)->(1,1)
    else { i1 = 0; j1 = 1; }         // upper triangle, YX order: (0,0)->(0,1)->(1,1)

    // A step of (1,0) in (i,j) means a step of (1-c,-c) in (x,y), and
    // a step of (0,1) in (i,j) means a step of (-c,1-c) in (x,y), where
    // c = (3-sqrt(3))/6
    float x1 = x0 - i1 + G2; // Offsets for middle corner in (x,y) unskewed coords
    float y1 = y0 - j1 + G2;
    float x2 = x0 - 1.0f + 2.0f * G2; // Offsets for last corner in (x,y) unskewed coords
    float y2 = y0 - 1.0f + 2.0f * G2;

    // Wrap the integer indices at 256, to avoid indexing perm[] out of bounds
    int ii = i % 256;
    int jj = j % 256;

    // Calculate the contribution from the three corners
    float t0 = 0.5f - x0 * x0 - y0 * y0;
    if (t0 < 0.0f) n0 = 0.0f;
    else { t0 *= t0; n0 = t0 * t0 * grad(perm[ii + perm[jj]], x0, y0); }

    float t1 = 0.5f - x1 * x1 - y1 * y1;
    if (t1 < 0.0f) n1 = 0.0f;
    else { t1 *= t1; n1 = t1 * t1 * grad(perm[ii + i1 + perm[jj + j1]], x1, y1); }

    float t2 = 0.5f - x2 * x2 - y2 * y2;
    if (t2 < 0.0f) n2 = 0.0f;
    else { t2 *= t2; n2 = t2 * t2 * grad(perm[ii + 1 + perm[jj + 1]], x2, y2); }

    // Add contributions from each corner to get the final noise value.
    // The result is scaled to return values in the interval [-1,1].
    return 40.0f * (n0 + n1 + n2); // TODO: The scale factor is preliminary!
}
public static float Generate(float x, float y, float z)
{
    // Simple skewing factors for the 3D case
    const float F3 = 0.333333333f;
    const float G3 = 0.166666667f;

    float n0, n1, n2, n3; // Noise contributions from the four corners

    // Skew the input space to determine which simplex cell we're in
    float s = (x + y + z) * F3; // Very nice and simple skew factor for 3D
    float xs = x + s;
    float ys = y + s;
    float zs = z + s;
    int i = FastFloor(xs);
    int j = FastFloor(ys);
    int k = FastFloor(zs);

    float t = (float)(i + j + k) * G3;
    float X0 = i - t; // Unskew the cell origin back to (x,y,z) space
    float Y0 = j - t;
    float Z0 = k - t;
    float x0 = x - X0; // The x,y,z distances from the cell origin
    float y0 = y - Y0;
    float z0 = z - Z0;

    // For the 3D case, the simplex shape is a slightly irregular tetrahedron.
    // Determine which simplex we are in.
    int i1, j1, k1; // Offsets for second corner of simplex in (i,j,k) coords
    int i2, j2, k2; // Offsets for third corner of simplex in (i,j,k) coords

    /* This code would benefit from a backport from the GLSL version! */
    if (x0 >= y0)
    {
        if (y0 >= z0) { i1 = 1; j1 = 0; k1 = 0; i2 = 1; j2 = 1; k2 = 0; }      // X Y Z order
        else if (x0 >= z0) { i1 = 1; j1 = 0; k1 = 0; i2 = 1; j2 = 0; k2 = 1; } // X Z Y order
        else { i1 = 0; j1 = 0; k1 = 1; i2 = 1; j2 = 0; k2 = 1; }               // Z X Y order
    }
    else // x0 < y0
    {
        if (y0 < z0) { i1 = 0; j1 = 0; k1 = 1; i2 = 0; j2 = 1; k2 = 1; }       // Z Y X order
        else if (x0 < z0) { i1 = 0; j1 = 1; k1 = 0; i2 = 0; j2 = 1; k2 = 1; }  // Y Z X order
        else { i1 = 0; j1 = 1; k1 = 0; i2 = 1; j2 = 1; k2 = 0; }               // Y X Z order
    }

    // A step of (1,0,0) in (i,j,k) means a step of (1-c,-c,-c) in (x,y,z),
    // a step of (0,1,0) in (i,j,k) means a step of (-c,1-c,-c) in (x,y,z), and
    // a step of (0,0,1) in (i,j,k) means a step of (-c,-c,1-c) in (x,y,z), where
    // c = 1/6.
    float x1 = x0 - i1 + G3; // Offsets for second corner in (x,y,z) coords
    float y1 = y0 - j1 + G3;
    float z1 = z0 - k1 + G3;
    float x2 = x0 - i2 + 2.0f * G3; // Offsets for third corner in (x,y,z) coords
    float y2 = y0 - j2 + 2.0f * G3;
    float z2 = z0 - k2 + 2.0f * G3;
    float x3 = x0 - 1.0f + 3.0f * G3; // Offsets for last corner in (x,y,z) coords
    float y3 = y0 - 1.0f + 3.0f * G3;
    float z3 = z0 - 1.0f + 3.0f * G3;

    // Wrap the integer indices at 256, to avoid indexing perm[] out of bounds
    int ii = i % 256;
    int jj = j % 256;
    int kk = k % 256;

    // Calculate the contribution from the four corners
    float t0 = 0.6f - x0 * x0 - y0 * y0 - z0 * z0;
    if (t0 < 0.0f) n0 = 0.0f;
    else { t0 *= t0; n0 = t0 * t0 * grad(perm[ii + perm[jj + perm[kk]]], x0, y0, z0); }

    float t1 = 0.6f - x1 * x1 - y1 * y1 - z1 * z1;
    if (t1 < 0.0f) n1 = 0.0f;
    else { t1 *= t1; n1 = t1 * t1 * grad(perm[ii + i1 + perm[jj + j1 + perm[kk + k1]]], x1, y1, z1); }

    float t2 = 0.6f - x2 * x2 - y2 * y2 - z2 * z2;
    if (t2 < 0.0f) n2 = 0.0f;
    else { t2 *= t2; n2 = t2 * t2 * grad(perm[ii + i2 + perm[jj + j2 + perm[kk + k2]]], x2, y2, z2); }

    float t3 = 0.6f - x3 * x3 - y3 * y3 - z3 * z3;
    if (t3 < 0.0f) n3 = 0.0f;
    else { t3 *= t3; n3 = t3 * t3 * grad(perm[ii + 1 + perm[jj + 1 + perm[kk + 1]]], x3, y3, z3); }

    // Add contributions from each corner to get the final noise value.
    // The result is scaled to stay just inside [-1,1]
    return 32.0f * (n0 + n1 + n2 + n3); // TODO: The scale factor is preliminary!
}
private static byte[] perm = new byte[512]
{
    151,160,137,91,90,15,
    131,13,201,95,96,53,194,233,7,225,140,36,103,30,69,142,8,99,37,240,21,10,23,
    190,6,148,247,120,234,75,0,26,197,62,94,252,219,203,117,35,11,32,57,177,33,
    88,237,149,56,87,174,20,125,136,171,168,68,175,74,165,71,134,139,48,27,166,
    77,146,158,231,83,111,229,122,60,211,133,230,220,105,92,41,55,46,245,40,244,
    102,143,54,65,25,63,161,1,216,80,73,209,76,132,187,208,89,18,169,200,196,
    135,130,116,188,159,86,164,100,109,198,173,186,3,64,52,217,226,250,124,123,
    5,202,38,147,118,126,255,82,85,212,207,206,59,227,47,16,58,17,182,189,28,42,
    223,183,170,213,119,248,152,2,44,154,163,70,221,153,101,155,167,43,172,9,
    129,22,39,253,19,98,108,110,79,113,224,232,178,185,112,104,218,246,97,228,
    251,34,242,193,238,210,144,12,191,179,162,241,81,51,145,235,249,14,239,107,
    49,192,214,31,181,199,106,157,184,84,204,176,115,121,50,45,127,4,150,254,
    138,236,205,93,222,114,67,29,24,72,243,141,128,195,78,66,215,61,156,180,
    151,160,137,91,90,15,
    131,13,201,95,96,53,194,233,7,225,140,36,103,30,69,142,8,99,37,240,21,10,23,
    190,6,148,247,120,234,75,0,26,197,62,94,252,219,203,117,35,11,32,57,177,33,
    88,237,149,56,87,174,20,125,136,171,168,68,175,74,165,71,134,139,48,27,166,
    77,146,158,231,83,111,229,122,60,211,133,230,220,105,92,41,55,46,245,40,244,
    102,143,54,65,25,63,161,1,216,80,73,209,76,132,187,208,89,18,169,200,196,
    135,130,116,188,159,86,164,100,109,198,173,186,3,64,52,217,226,250,124,123,
    5,202,38,147,118,126,255,82,85,212,207,206,59,227,47,16,58,17,182,189,28,42,
    223,183,170,213,119,248,152,2,44,154,163,70,221,153,101,155,167,43,172,9,
    129,22,39,253,19,98,108,110,79,113,224,232,178,185,112,104,218,246,97,228,
    251,34,242,193,238,210,144,12,191,179,162,241,81,51,145,235,249,14,239,107,
    49,192,214,31,181,199,106,157,184,84,204,176,115,121,50,45,127,4,150,254,
    138,236,205,93,222,114,67,29,24,72,243,141,128,195,78,66,215,61,156,180
};

private static int FastFloor(float x)
{
    return (x > 0) ? ((int)x) : (((int)x) - 1);
}

private static float grad(int hash, float x)
{
    int h = hash & 15;
    float grad = 1.0f + (h & 7);    // Gradient value 1.0, 2.0, ..., 8.0
    if ((h & 8) != 0) grad = -grad; // Set a random sign for the gradient
    return (grad * x);              // Multiply the gradient with the distance
}

private static float grad(int hash, float x, float y)
{
    int h = hash & 7;        // Convert low 3 bits of hash code
    float u = h < 4 ? x : y; // into 8 simple gradient directions,
    float v = h < 4 ? y : x; // and compute the dot product with (x,y).
    return ((h & 1) != 0 ? -u : u) + ((h & 2) != 0 ? -2.0f * v : 2.0f * v);
}

private static float grad(int hash, float x, float y, float z)
{
    int h = hash & 15;       // Convert low 4 bits of hash code into 12 simple
    float u = h < 8 ? x : y; // gradient directions, and compute dot product.
    float v = h < 4 ? y : h == 12 || h == 14 ? x : z; // Fix repeats at h = 12 to 15
    return ((h & 1) != 0 ? -u : u) + ((h & 2) != 0 ? -v : v);
}

private static float grad(int hash, float x, float y, float z, float t)
{
    int h = hash & 31;        // Convert low 5 bits of hash code into 32 simple
    float u = h < 24 ? x : y; // gradient directions, and compute dot product.
    float v = h < 16 ? y : z;
    float w = h < 8 ? z : t;
    return ((h & 1) != 0 ? -u : u) + ((h & 2) != 0 ? -v : v) + ((h & 4) != 0 ? -w : w);
}
This is my world generation code:

Block[,] BlocksInMap = new Block[1024, 256];
public bool IsWorldGenerated = false;
Random r = new Random();

private void RunThread()
{
    // Loop over the full 1024 x 256 world; indices must stay below the array bounds
    for (int BH = 0; BH < 256; BH++)
    {
        for (int BW = 0; BW < 1024; BW++)
        {
            Block b = new Block();
            if (BH >= 192)
            {
            }
            BlocksInMap[BW, BH] = b;
        }
    }
    IsWorldGenerated = true;
}

public void GenWorld()
{
    new Thread(new ThreadStart(RunThread)).Start();
}

And this is an example of how I set blocks:

Block b = new Block();
b.BlockType = Block.BlockTypes.Air;

This is an example of how I set models:

foreach (Block b in MyWorld)
{
    switch (b.BlockType)
    {
        case Block.BlockTypes.Dirt:
            b.Model = DirtModel;
            break;
        // etc.
    }
}

How would I use these to generate the world (the Block array) and, if possible, thread it more? BTW, it's 1024 blocks wide and 256 tall.
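One way to tie these pieces together, offered as a sketch only: use the 1D noise to pick a surface height per column and the 2D noise to carve caves below it. The frequencies (0.01f, 0.05f) and the cave threshold (0.6f) are made-up starting values to tune, it assumes row 0 is the top of the map (matching the `BH >= 192` stub above), and it uses only the Block.BlockTypes members already shown in the question:

private void RunThread()
{
    for (int BW = 0; BW < 1024; BW++)
    {
        // 1D noise in [-1,1] -> ground level around row 192, +/- 32 rows
        float h = Generate(BW * 0.01f);       // 0.01f = terrain frequency (tune me)
        int surface = 192 + (int)(h * 32.0f);

        for (int BH = 0; BH < 256; BH++)
        {
            Block b = new Block();
            if (BH < surface)
            {
                b.BlockType = Block.BlockTypes.Air;  // above the surface: sky
            }
            else
            {
                // 2D noise decides caves: high values become air pockets
                float cave = Generate(BW * 0.05f, BH * 0.05f);
                b.BlockType = cave > 0.6f ? Block.BlockTypes.Air
                                          : Block.BlockTypes.Dirt;
            }
            BlocksInMap[BW, BH] = b;
        }
    }
    IsWorldGenerated = true;
}

To thread it more, the outer BW loop can be partitioned into slices (say four threads handling 256 columns each): every column writes to distinct array cells, so no locking is needed, but IsWorldGenerated should only be set after all worker threads have been joined or counted down.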

    Read the article

  • Metro Walkthrough: Creating a Task List with a ListView and IndexedDB

    - by Stephen.Walther
    The goal of this blog entry is to describe how you can work with data in a Metro style application written with JavaScript. In particular, we create a super simple Task List application which enables you to create and delete tasks. Here’s a video which demonstrates how the Task List application works: In order to build this application, I had to take advantage of several features of the WinJS library and technologies including: IndexedDB – The Task List application stores data in an IndexedDB database. HTML5 Form Validation – The Task List application uses HTML5 validation to ensure that a required field has a value. ListView Control – The Task List application displays the tasks retrieved from the IndexedDB database in a WinJS ListView control. Creating the IndexedDB Database The Task List application stores all of its data in an IndexedDB database named TasksDB. This database is opened/created with the following code: var db; var req = window.msIndexedDB.open("TasksDB", 1); req.onerror = function () { console.log("Could not open database"); }; req.onupgradeneeded = function (evt) { var newDB = evt.target.result; newDB.createObjectStore("tasks", { keyPath: "id", autoIncrement:true }); }; The msIndexedDB.open() method accepts two parameters: the name of the database to open and the version of the database to open. If a database with a matching version already exists, then calling the msIndexedDB.open() method opens a connection to the existing database. If the database does not exist then the upgradeneeded event is raised. You handle the upgradeneeded event to create a new database. In the code above, the upgradeneeded event handler creates an object store named “tasks” (An object store roughly corresponds to a database table). When you add items to the tasks object store then each item gets an id property with an auto-incremented value automatically. The code above also includes an error event handler. If the IndexedDB database cannot be opened or created, for whatever reason, then an error message is written to the Visual Studio JavaScript Console window. Displaying a List of Tasks The TaskList application retrieves its list of tasks from the tasks object store, which we created above, and displays the list of tasks in a ListView control. Here is how the ListView control is declared: <div id="tasksListView" data-win-control="WinJS.UI.ListView" data-win-options="{ itemDataSource: TaskList.tasks.dataSource, itemTemplate: select('#taskTemplate'), tapBehavior: 'toggleSelect', selectionMode: 'multi', layout: { type: WinJS.UI.ListLayout } }"> </div> The ListView control is bound to the TaskList.tasks.dataSource data source. 
The TaskList.tasks.dataSource is created with the following code: // Create the data source var tasks = new WinJS.Binding.List(); // Open the database var db; var req = window.msIndexedDB.open("TasksDB", 1); req.onerror = function () { console.log("Could not open database"); }; req.onupgradeneeded = function (evt) { var newDB = evt.target.result; newDB.createObjectStore("tasks", { keyPath: "id", autoIncrement:true }); }; // Load the data source with data from the database req.onsuccess = function () { db = req.result; var tran = db.transaction("tasks"); tran.objectStore("tasks").openCursor().onsuccess = function(event) { var cursor = event.target.result; tasks.dataSource.beginEdits(); if (cursor) { tasks.dataSource.insertAtEnd(null, cursor.value); cursor.continue(); } else { tasks.dataSource.endEdits(); }; }; }; // Expose the data source and functions WinJS.Namespace.define("TaskList", { tasks: tasks }); Notice the success event handler. This handler is called when a database is successfully opened/created. In the code above, all of the items from the tasks object store are retrieved into a cursor and added to a WinJS.Binding.List object named tasks. Because the ListView control is bound to the WinJS.Binding.List object, copying the tasks from the object store into the WinJS.Binding.List object causes the tasks to appear in the ListView: Adding a New Task You add a new task in the Task List application by entering the title of a new task into an HTML form and clicking the Add button. Here’s the markup for creating the form: <form id="addTaskForm"> <input id="newTaskTitle" title="New Task" required /> <button>Add</button> </form> Notice that the INPUT element includes a required attribute. In a Metro application, you can take advantage of HTML5 Validation to validate form fields. If you don’t enter a value for the newTaskTitle field then the following validation error message is displayed: For a brief introduction to HTML5 validation, see my previous blog entry: http://stephenwalther.com/blog/archive/2012/03/13/html5-form-validation.aspx When you click the Add button, the form is submitted and the form submit event is raised. The following code is executed in the default.js file: // Handle Add Task document.getElementById("addTaskForm").addEventListener("submit", function (evt) { evt.preventDefault(); var newTaskTitle = document.getElementById("newTaskTitle"); TaskList.addTask({ title: newTaskTitle.value }); newTaskTitle.value = ""; }); The code above retrieves the title of the new task and calls the addTask() method in the tasks.js file. Here’s the code for the addTask() method which is responsible for actually adding the new task to the IndexedDB database: // Add a new task function addTask(taskToAdd) { var transaction = db.transaction("tasks", IDBTransaction.READ_WRITE); var addRequest = transaction.objectStore("tasks").add(taskToAdd); addRequest.onsuccess = function (evt) { taskToAdd.id = evt.target.result; tasks.dataSource.insertAtEnd(null, taskToAdd); } } The code above does two things. First, it adds the new task to the tasks object store in the IndexedDB database. Second, it adds the new task to the data source bound to the ListView. The dataSource.insertAtEnd() method is called to add the new task to the data source so the new task will appear in the ListView (with a nice little animation). Deleting Existing Tasks The Task List application enables you to select one or more tasks by clicking or tapping on one or more tasks in the ListView. 
When you click the Delete button, the selected tasks are removed from both the IndexedDB database and the ListView. For example, in the following screenshot, two tasks are selected. The selected tasks appear with a teal background and a checkmark: When you click the Delete button, the following code in the default.js file is executed: // Handle Delete Tasks document.getElementById("btnDeleteTasks").addEventListener("click", function (evt) { tasksListView.winControl.selection.getItems().then(function(items) { items.forEach(function (item) { TaskList.deleteTask(item); }); }); }); The selected tasks are retrieved with the ListView's selection.getItems() method. In the code above, the deleteTask() method is called for each of the selected tasks. Here's the code for the deleteTask() method: // Delete an existing task function deleteTask(listViewItem) { // Database key != ListView key var dbKey = listViewItem.data.id; var listViewKey = listViewItem.key; // Remove item from db and, if success, remove item from ListView var transaction = db.transaction("tasks", IDBTransaction.READ_WRITE); var deleteRequest = transaction.objectStore("tasks").delete(dbKey); deleteRequest.onsuccess = function () { tasks.dataSource.remove(listViewKey); } } This code does two things: it deletes the existing task from the database and removes the existing task from the ListView. In both cases, the right task is removed by using the key associated with the task. However, the task key is different in the case of the database and in the case of the ListView. In the case of the database, the task key is the value of the task id property. In the case of the ListView, on the other hand, the task key is auto-generated by the ListView. When the task is removed from the ListView, an animation is used to collapse the tasks which appear above and below the task which was removed. The Complete Code Above, I did a lot of jumping around between different files in the application and I left out sections of code. For the sake of completeness, I want to include the entire code here: the default.html, default.js, and tasks.js files. Here are the contents of the default.html file.
This file contains the UI for the Task List application: <!DOCTYPE html> <html> <head> <meta charset="utf-8"> <title>Task List</title> <!-- WinJS references --> <link href="//Microsoft.WinJS.0.6/css/ui-dark.css" rel="stylesheet"> <script src="//Microsoft.WinJS.0.6/js/base.js"></script> <script src="//Microsoft.WinJS.0.6/js/ui.js"></script> <!-- TaskList references --> <link href="/css/default.css" rel="stylesheet"> <script src="/js/default.js"></script> <script type="text/javascript" src="js/tasks.js"></script> <style type="text/css"> body { font-size: x-large; } form { display: inline; } #appContainer { margin: 20px; width: 600px; } .win-container { padding: 10px; } </style> </head> <body> <div> <!-- Templates --> <div id="taskTemplate" data-win-control="WinJS.Binding.Template"> <div> <span data-win-bind="innerText:title"></span> </div> </div> <h1>Super Task List</h1> <div id="appContainer"> <form id="addTaskForm"> <input id="newTaskTitle" title="New Task" required /> <button>Add</button> </form> <button id="btnDeleteTasks">Delete</button> <div id="tasksListView" data-win-control="WinJS.UI.ListView" data-win-options="{ itemDataSource: TaskList.tasks.dataSource, itemTemplate: select('#taskTemplate'), tapBehavior: 'toggleSelect', selectionMode: 'multi', layout: { type: WinJS.UI.ListLayout } }"> </div> </div> </div> </body> </html> Here is the code for the default.js file. This code wires up the Add Task form and Delete button: (function () { "use strict"; var app = WinJS.Application; app.onactivated = function (eventObject) { if (eventObject.detail.kind === Windows.ApplicationModel.Activation.ActivationKind.launch) { WinJS.UI.processAll().then(function () { // Get reference to Tasks ListView var tasksListView = document.getElementById("tasksListView"); // Handle Add Task document.getElementById("addTaskForm").addEventListener("submit", function (evt) { evt.preventDefault(); var newTaskTitle = document.getElementById("newTaskTitle"); TaskList.addTask({ title: newTaskTitle.value }); newTaskTitle.value = ""; }); // Handle Delete Tasks document.getElementById("btnDeleteTasks").addEventListener("click", function (evt) { tasksListView.winControl.selection.getItems().then(function(items) { items.forEach(function (item) { TaskList.deleteTask(item); }); }); }); }); } }; app.start(); })(); Finally, here is the tasks.js file. 
This file contains all of the code for opening, creating, and interacting with IndexedDB: (function () { "use strict"; // Create the data source var tasks = new WinJS.Binding.List(); // Open the database var db; var req = window.msIndexedDB.open("TasksDB", 1); req.onerror = function () { console.log("Could not open database"); }; req.onupgradeneeded = function (evt) { var newDB = evt.target.result; newDB.createObjectStore("tasks", { keyPath: "id", autoIncrement:true }); }; // Load the data source with data from the database req.onsuccess = function () { db = req.result; var tran = db.transaction("tasks"); tran.objectStore("tasks").openCursor().onsuccess = function(event) { var cursor = event.target.result; tasks.dataSource.beginEdits(); if (cursor) { tasks.dataSource.insertAtEnd(null, cursor.value); cursor.continue(); } else { tasks.dataSource.endEdits(); }; }; }; // Add a new task function addTask(taskToAdd) { var transaction = db.transaction("tasks", IDBTransaction.READ_WRITE); var addRequest = transaction.objectStore("tasks").add(taskToAdd); addRequest.onsuccess = function (evt) { taskToAdd.id = evt.target.result; tasks.dataSource.insertAtEnd(null, taskToAdd); } } // Delete an existing task function deleteTask(listViewItem) { // Database key != ListView key var dbKey = listViewItem.data.id; var listViewKey = listViewItem.key; // Remove item from db and, if success, remove item from ListView var transaction = db.transaction("tasks", IDBTransaction.READ_WRITE); var deleteRequest = transaction.objectStore("tasks").delete(dbKey); deleteRequest.onsuccess = function () { tasks.dataSource.remove(listViewKey); } } // Expose the data source and functions WinJS.Namespace.define("TaskList", { tasks: tasks, addTask: addTask, deleteTask: deleteTask }); })(); Summary I wrote this blog entry because I wanted to create a walkthrough of building a simple database-driven application. In particular, I wanted to demonstrate how you can use a ListView control with an IndexedDB database to store and retrieve database data.
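One operation the walkthrough leaves out is editing an existing task. Under the same assumptions as the code above (the db handle and tasks binding list from tasks.js), a hypothetical updateTask() helper could follow the same pattern, since IDBObjectStore.put() overwrites the stored record whose key matches; this is a sketch, not part of the original walkthrough:

// Update an existing task (illustrative sketch)
function updateTask(listViewItem, newTitle) {
    var task = listViewItem.data;
    task.title = newTitle;

    // put() replaces the stored record with the matching id
    var transaction = db.transaction("tasks", IDBTransaction.READ_WRITE);
    var putRequest = transaction.objectStore("tasks").put(task);
    putRequest.onsuccess = function () {
        // Keep the ListView in sync by replacing the bound item
        tasks.dataSource.change(listViewItem.key, task);
    };
}

As with addTask() and deleteTask(), the ListView is only touched after the database request succeeds, so the UI never shows a change the database did not accept.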

    Read the article

  • CodePlex Daily Summary for Monday, April 02, 2012

    CodePlex Daily Summary for Monday, April 02, 2012Popular ReleasesDocument.Editor: 2012.2: Whats New for Document.Editor 2012.2: New Save Copy support New Page Setup support Minor Bug Fix's, improvements and speed upsVidCoder: 1.3.2: Added option for the minimum title length to scan. Added support to enable or disable LibDVDNav. Added option to prompt to delete source files after clearing successful completed items. Added option to disable remembering recent files and folders. Tweaked number box to only select all on a quick click.MJP's DirectX 11 Samples: Light Indexed Deferred Rendering: Implements light indexed deferred using per-tile light lists calculated in a compute shader, as well as a traditional deferred renderer that uses a compute shader for per-tile light culling and per-pixel shading.Pcap.Net: Pcap.Net 0.9.0 (66492): Pcap.Net - March 2012 Release Pcap.Net is a .NET wrapper for WinPcap written in C++/CLI and C#. It Features almost all WinPcap features and includes a packet interpretation framework. Version 0.9.0 (Change Set 66492)March 31, 2012 release of the Pcap.Net framework. Follow Pcap.Net on Google+Follow Pcap.Net on Google+ Files Pcap.Net.DevelopersPack.0.9.0.66492.zip - Includes all the tutorial example projects source files, the binaries in a 3rdParty directory and the documentation. It include...Extended WPF Toolkit: Extended WPF Toolkit - 1.6.0: Want an easier way to install the Extended WPF Toolkit?The Extended WPF Toolkit is available on Nuget. What's in the 1.6.0 Release?BusyIndicator ButtonSpinner Calculator CalculatorUpDown CheckListBox - Breaking Changes CheckComboBox - New Control ChildWindow CollectionEditor CollectionEditorDialog ColorCanvas ColorPicker DateTimePicker DateTimeUpDown DecimalUpDown DoubleUpDown DropDownButton IntegerUpDown Magnifier MaskedTextBox MessageBox MultiLineTex...ScriptIDE: Release 4.4: ...Media Companion: MC 3.434b Release: General This release should be the last beta for 3.4xx. If there are no major problems, by the end of the week it will upgraded to 3.500 Stable! The latest mc_com.exe should be included too! TV Bug fix - crash when using XBMC scraper for TV episodes. Bug fix - episode count update when adding new episodes. Bug fix - crash when actors name was missing. Enhanced TV scrape progress text. Enhancements made to missing episodes display. Movies Bug fix - hide "Play Trailer" when multisaev...Better Explorer: Better Explorer 2.0.0.831 Alpha: - A new release with: - many bugfixes - changed icon - added code for more failsafe registry usage on x64 systems - not needed regfix anymore - added ribbon shortcut keys - Other fixes Note: If you have problems opening system libraries, a suggestion was given to copy all of these libraries and then delete the originals. Thanks to Gaugamela for that! (see discussion here: 349015 ) Note2: I was upload again the setup due to missing file!MonoGame - Write Once, Play Everywhere: MonoGame 2.5: The MonoGame team are pleased to announce that MonoGame v2.5 has been released. This release contains important bug fixes, implements optimisations and adds key features. MonoGame now has the capability to use OpenGLES 2.0 on Android and iOS devices, meaning it now supports custom shaders across mobile and desktop platforms. Also included in this release are native orientation animations on iOS devices and better Orientation support for Android. 
There have also been a lot of bug fixes since t...Circuit Diagram: Circuit Diagram 2.0 Alpha 3: New in this release: Added components: Microcontroller Demultiplexer Flip & rotate components Open XML files from older versions of Circuit Diagram Text formatting for components New CDDX syntax Other fixesUmbraco CMS: Umbraco 5.1 CMS (Beta): Beta build for testing - please report issues at issues.umbraco.org (Latest uploaded: 5.1.0.123) What's new in 5.1? The full list of changes is on our http://progress.umbraco.org task tracking page. It shows items complete for 5.1, and 5.1 includes items for 5.0.1 and 5.0.2 listed there too. Here's two headline acts: Members5.1 adds support for backoffice editing of Members. We support the pairing up of our content type system in Hive with regular ASP.NET Membership providers (we ship a def...51Degrees.mobi - Mobile Device Detection and Redirection: 2.1.2.11: One Click Install from NuGet Changes to Version 2.1.2.11Code Changes 1. The project is now licenced under the Mozilla Public Licence 2. 2. User interface control and associated data access layer classes have been added to aid developers integrating 51Degrees.mobi into wider projects such as content management systems or web hosting management solutions. Use the following in a web form or user control to access these new UI components. <%@ Register Assembly="FiftyOne.Foundation" Namespace="...JSON Toolkit: JSON Toolkit 3.1: slight performance improvement (5% - 10%) new JsonException classPicturethrill: Version 2.3.28.0: Straightforward image selection. New clean UI look. Super stable. Simplified user experience.SQL Monitor - managing sql server performance: SQL Monitor 4.2 alpha 16: 1. finally fixed problem with logic fault checking for temporary table name... I really mean finally ...ScintillaNET: ScintillaNET 2.5: A slew of bug-fixes with a few new features sprinkled in. This release also upgrades the SciLexer and SciLexer64 DLLs to version 3.0.4. The official stuff: Issue # Title 32402 32402 27137 27137 31548 31548 30179 30179 24932 24932 29701 29701 31238 31238 26875 26875 30052 30052 Vodigi Open Source Interactive Digital Signage: Vodigi Release 5.0: Vodigi Release 5.0 The .ZIP file for this release contains everything you need to setup and install Vodigi 5.0. Setup and intallation documentation is included in the .ZIP file. Vodigi Release 5.0 consists of the following core components: Vodigi Administrator Web Site Vodigi Player Windows Application Vodigi Media Uploader Windows Application Vodigi Databases Refer to the documentation included in the .ZIP file to setup and configure your servers and player devices for this release.Harness: Harness 2.0.2: change to .NET Framework Client Profile bug fix the download dialog auto answer. bug fix setFocus command. add "SendKeys" command. remove "closeAll" command. minor bugs fixed.BugNET Issue Tracker: BugNET 0.9.161: Below is a list of fixes in this release. Bug BGN-2092 - Link in Email "visit your profile" not functional BGN-2083 - Manager of bugnet can not edit project when it is not public BGN-2080 - clicking on a link in the project summary causes error (0.9.152.0) BGN-2070 - Missing Functionality On Feed.aspx BGN-2069 - Calendar View does not work BGN-2068 - Time tracking totals not ok BGN-2067 - Issues List Page Size Bug: Index was out of range. 
Must be non-negative and less than the si...YAF.NET (aka Yet Another Forum.NET): v1.9.6.1 RTW: v1.9.6.1 FINAL is .NET v4.0 ONLY v1.9.6.1 has: Performance Improvements .NET v4.0 improvements Improved FaceBook Integration KNOWN ISSUES WITH THIS RELEASE: ON INSTALL PLEASE DON'T CHECK "Upgrade BBCode Extensions...". More complete change list and discussion here: http://forum.yetanotherforum.net/yaf_postst14201_v1-9-6-1-RTW-Dated--3-26-2012.aspxNew ProjectsAdvanced JavaScript outlining for Visual Studio 11: This is extension for Visual Studio 11 that adds additional outlining for JavaScript Editorakrypt2: qt-based GUI for libaxel http://axelkenzo.ru/index.php?section=libaxel.downloadaluminium: aluminium calculationAuditDbContext - Entity Framework Auditing Context: AuditDbContext provides entity change auditing for Entity Framework POCO entities.AutoBox: Creates a fresh .Net developer environment from a bare OS utilizing powershell, chocolatey and webpi.Bookregator: Bookregator is a C# application written to aggregate data using Amazon's Advertising API, WorldCat, and GoodReads.Bootstrap.ConfirmModal: There is a situation that we need user to confirm before they proceed their action. You don’t want to accidently delete very important information. So I come up the idea to extend the bootstrap modal popup to create a confirm modal before calling the function to delete some stuffColour Lovers .NET: A .NET library for the Colour Lovers API.Cursos y Causas: Cursos y Causas desarrollado en asp.net MVCDataModels: DataModels is a project which aims to allow for easy reuse of specific data models using a very simple API. easy framework is used to fast work: codingEGM Engine: The engine for the Express Game Maker editor.E-Junkey: Project personalExpress Game Maker: You can use Express Game Maker to makes games without the need to write a single line of code! EGM's source is shared and is constantly improved by developers around the world. Learn Express Game Maker in no time with tutorials, videos and templates. Share what you learn with the community and ask the community for help. Express Game Maker is free, if you paid for it anywhere, we suggest you ask for a refund.Extending Razor Engine: Extending Razor Engine. Nice and clean solution for CMS system, such as Kentico, red dot, etc.FloodWarn: A series of server and client apps for monitoring flood levels on the Snoqualmie River in King County, Washington.GetThatList: With GetThatList people will find an easy way to copy a music playlist and its songs to another location, being another folder or a remote computer. It is designed so that it can be exposed to the final user as an standalone application or a Shell extension for playlist files.HashMapper - Object-Hash Mapper for Redis: Object-Hash Mapper for Redis and BookSleeve.hostedit: small utility to quickly change the host file. toggles a single clickI-Control: SecretIIS Hosts File Manager: Here's an IIS 7.5 and 8.0 module to add host headers to the Hosts file without having to edit it with notepad. Very useful if you create a lot of web sites for testing or demo purposes. Interval Trainer: Inspired by new research* on interval training this application will help you easily transition into interval based workouts. 
Current Features Two interval cycles that can be individually customized in time length Preloaded ideal workout for aerobic exercise based on suggested research* (high intensity sprints during workout interval, light jogs during rest) Coming Soon Custom interval workouts with as many intervals in a cycle as necessary Persistent workout settings *Links to t...MicroRuntime: The MicroRuntime project is a .NET utility library.MS Office Word Navigation: Navigate forward/backward inside a Word document MvcFlow: Integration between Workflow Foundation 4.5 and ASP.NET MVC 4NewLineReplacer: Replace letter fast and easy in great textfilesObject-Oriented CAML: Using CAML objectsOpenCover: A code coverage tool for .NET 2 and above, support for 32 and 64 processes (including Silverlight) with both branch and sequence points; now supporting coverage by test feature. This is a mirror of the original github repository to allow codeplex users to contribute. The latest downloads can be found here https://github.com/sawilde/opencover/downloads and is also available using nuget. Pavings.NET: Library for applied interval analysis including intervals, boxes and sub-pavings. Interval analysis is a method of approximating sets with any degree of precision and it has applications from optimization to robotics. Inspired by book "Applied Interval Analysis" by Luc Jaulin et al.proyectoIntegrador3: Proyecto Integrador 3QPAPrintLib: Print every document by its recommended programmsChord: Typing text on the consoles doesn’t have to mean another trauma. Having to navigate to each letter with arrows and analog sticks is really inefficient & lame. In fact you can easily encode each character as a combination of positions of two analog sticksServer DateTime: Server DateTime renders the date and time from the server and make it active using javascript. It is in Military Time Format.SjclHelpers: Helpers for using the Stanford Javascript Crypto Library with .NET.SSAS AMO DB: SSAS AMO DB is a database version of AMO which helps to view the metadata stored in the SSAS cube. The Metadata will be loaded from the SSAS cube using AMO into a SQL Server database using SSIS package. From that database user can generate reports for the SSAS metadata. This database stores the below SSAS objects and their properties Server Databases DataSource DataSourceView DataSourceViewTablecolumns Cube CubeDimension DimensionAttribute AttributeKeyColumns AttributeKeyColum...Stump: A really small BDD framework built on top of nunitSvnbox.org svn sync project (dropbox like): Sync your folder on svn repository work as a teamTextShadowWrapper: TextShadowWrapper is a custom server control for ASP.NET web pages. It inherits from System.Web.UI.WebControls.Label and supports CSS3 text shadows.tjnetSite: Web Site for the Tijuana .Net User groupTönnenKlapps: XNA game where you try to smash a 3D spinning barrel using the correct coloured buttons and the right timing.WP7 Selected Pivot: an example showing how to navigate from one page to desired pivot on another pageXAML Metro Application Isolated Storage Helper: XAMLMetroAppIsolatedStorageHelper helps to Save, Retrieve and Delete structured data in the Isolated Storage. This helper class helps in XAML based Metro application. xsockets: XSockets Test

    Read the article

  • Does anyone really understand how HFSC scheduling in Linux/BSD works?

    - by Mecki
I read the original SIGCOMM '97 PostScript paper about HFSC. It is very technical, but I understand the basic concept: instead of giving a linear service curve (as with pretty much every other scheduling algorithm), you can specify a convex or concave service curve, and thus it is possible to decouple bandwidth and delay. However, even though this paper mentions the two kinds of scheduling algorithms being used (real-time and link-share), it always only mentions ONE curve per scheduling class (the decoupling is done by specifying this curve; only one curve is needed for that).

Now HFSC has been implemented for BSD (OpenBSD, FreeBSD, etc.) using the ALTQ scheduling framework, and it has been implemented for Linux using the TC scheduling framework (part of iproute2). Both implementations added two additional service curves that were NOT in the original paper: a real-time service curve and an upper-limit service curve. Again, please note that the original paper mentions two scheduling algorithms (real-time and link-share), but in that paper both work with one single service curve. There never have been two independent service curves for either one as you currently find in BSD and Linux.

Even worse, some versions of ALTQ seem to add an additional queue priority to HFSC (there is no such thing as priority in the original paper either). I found several BSD HowTos mentioning this priority setting (even though the man page of the latest ALTQ release knows no such parameter for HFSC, so officially it does not even exist). This all makes HFSC scheduling even more complex than the algorithm described in the original paper, and there are tons of tutorials on the Internet that often contradict each other, one claiming the opposite of the other. This is probably the main reason why nobody really seems to understand how HFSC scheduling really works. Before I can ask my questions, we need a sample setup of some kind. I'll use a very simple one, as seen in the image below:

Here are some questions I cannot answer because the tutorials contradict each other:

1. What do I need a real-time curve for at all? Assuming A1, A2, B1, B2 are all 128 kbit/s link-share (no real-time curve for either one), then each of those will get 128 kbit/s if the root has 512 kbit/s to distribute (and A and B are both 256 kbit/s of course), right? Why would I additionally give A1 and B1 a real-time curve with 128 kbit/s? What would this be good for? To give those two a higher priority? According to the original paper I can give them a higher priority by using a curve; that's what HFSC is all about after all. By giving both classes a curve of [256kbit/s 20ms 128kbit/s] both have twice the priority of A2 and B2 automatically (still only getting 128 kbit/s on average).

2. Does the real-time bandwidth count towards the link-share bandwidth? E.g. if A1 and B1 both only have 64 kbit/s real-time and 64 kbit/s link-share bandwidth, does that mean once they are served 64 kbit/s via real-time, their link-share requirement is satisfied as well (they might get excess bandwidth, but let's ignore that for a second), or does that mean they get another 64 kbit/s via link-share? So does each class have a bandwidth "requirement" of real-time plus link-share? Or does a class only have a higher requirement than the real-time curve if the link-share curve is higher than the real-time curve (current link-share requirement equals specified link-share requirement minus real-time bandwidth already provided to this class)?

3. Is the upper-limit curve applied to real-time as well, only to link-share, or maybe to both? Some tutorials say one way, some say the other. Some even claim upper-limit is the maximum for real-time bandwidth plus link-share bandwidth. What is the truth?

4. Assuming A2 and B2 are both 128 kbit/s, does it make any difference if A1 and B1 are 128 kbit/s link-share only, or 64 kbit/s real-time and 128 kbit/s link-share, and if so, what difference?

5. If I use the separate real-time curve to increase priorities of classes, why would I need "curves" at all? Why isn't real-time a flat value and link-share also a flat value? Why are both curves? The need for curves is clear in the original paper, because there is only one attribute of that kind per class. But now, having three attributes (real-time, link-share, and upper-limit), what do I still need curves on each one for? Why would I want the curves' shapes (not their average bandwidth, but their slopes) to be different for real-time and link-share traffic?

6. According to the little documentation available, real-time curve values are totally ignored for inner classes (class A and B); they are only applied to leaf classes (A1, A2, B1, B2). If that is true, why does the ALTQ HFSC sample configuration (search for 3.3 Sample configuration) set real-time curves on inner classes and claim that those set the guaranteed rate of those inner classes? Isn't that completely pointless? (Note: pshare sets the link-share curve in ALTQ and grate the real-time curve; you can see this in the paragraph above the sample configuration.)

7. Some tutorials say the sum of all real-time curves may not be higher than 80% of the line speed, others say it must not be higher than 70% of the line speed. Which one is right, or are they maybe both wrong?

8. One tutorial said you shall forget all the theory. No matter how things really work (schedulers and bandwidth distribution), imagine the three curves according to the following "simplified mind model": real-time is the guaranteed bandwidth that this class will always get. Link-share is the bandwidth that this class wants to become fully satisfied, but satisfaction cannot be guaranteed. In case there is excess bandwidth, the class might even get offered more bandwidth than necessary to become satisfied, but it may never use more than upper-limit says. For all this to work, the sum of all real-time bandwidths may not be above xx% of the line speed (see the question above, the percentage varies). Question: Is this more or less accurate, or a total misunderstanding of HFSC?

9. And if the assumption above is really accurate, where is prioritization in that model? E.g. every class might have a real-time bandwidth (guaranteed), a link-share bandwidth (not guaranteed) and maybe an upper-limit, but still some classes have higher priority needs than other classes. In that case I must still prioritize somehow, even among real-time traffic of those classes. Would I prioritize by the slope of the curves? And if so, which curve? The real-time curve? The link-share curve? The upper-limit curve? All of them? Would I give all of them the same slope or each a different one, and how do I find out the right slope?

I still haven't lost hope that there exists at least a handful of people in this world who really understand HFSC and are able to answer all these questions accurately. And doing so without contradicting each other in the answers would be really nice ;-)
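To make the sample setup concrete, this is roughly how that topology would be written with Linux tc (the interface name, class handles and the rt curve values are illustrative; real-time curves are placed on the leaves only, per question 6 above):

tc qdisc add dev eth0 root handle 1: hfsc default 12
tc class add dev eth0 parent 1:  classid 1:1  hfsc ls m2 512kbit ul m2 512kbit   # root: 512 kbit/s
tc class add dev eth0 parent 1:1 classid 1:10 hfsc ls m2 256kbit                 # A
tc class add dev eth0 parent 1:1 classid 1:20 hfsc ls m2 256kbit                 # B
# A1 and B1: the questioned two-piece real-time curve [256kbit/s 20ms 128kbit/s]
tc class add dev eth0 parent 1:10 classid 1:11 hfsc rt m1 256kbit d 20ms m2 128kbit ls m2 128kbit
tc class add dev eth0 parent 1:10 classid 1:12 hfsc ls m2 128kbit                # A2
tc class add dev eth0 parent 1:20 classid 1:21 hfsc rt m1 256kbit d 20ms m2 128kbit ls m2 128kbit
tc class add dev eth0 parent 1:20 classid 1:22 hfsc ls m2 128kbit                # B2

In tc's syntax each service curve is given as an optional first slope m1 lasting d, followed by the long-term slope m2, which is exactly the two-piece [256kbit/s 20ms 128kbit/s] notation the questions use.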

    Read the article

  • The case of the phantom ADF developer (and other yarns)

    - by Chris Muir
    A few years of ADF experience means I see common mistakes made by different developers, some of which I regularly make myself. This post is designed to help beginners to the Oracle JDeveloper Application Development Framework (ADF) avoid a common ADF pitfall: the case of the phantom ADF developer [add Scooby-Doo music here]. ADF Business Components: triggers, default table values and instead-of views. Oracle's JDeveloper tutorials help with the A-B-Cs of ADF development, typically built on the nice 'n safe demo schemas provided with the Oracle database, such as the HR demo schema. However it's not too long until ADF beginners, having built up some confidence from learning with the tutorials and vanilla demo schemas, start building ADF Business Components based upon their own existing database schema objects. This is where unexpected problems can sneak in. The crime: developers may encounter a surprising error at runtime when editing a record they just created or updated and committed to the database, based on their own existing tables, namely the error: JBO-25014: Another user has changed the row with primary key oracle.jbo.Key[x] ...where x is the primary key value of the row at hand. In a production environment with multiple users this error may be legitimate: one of the other users has updated the row since you queried it. Yet in a development environment this error is just plain confusing. If developers are isolated in their own database, creating and editing records they know other users can't possibly be working with, or all the other developers have gone home for the day, how is this error possible? There are no other users. It must be the phantom ADF developer! [insert dramatic music here] This is what you'll see in the Business Component Browser (the picture is not reproduced in this excerpt), and you'll receive a similar error message via an ADF Faces page. A false conclusion: what can possibly cause this issue if it isn't our phantom ADF developer? Doesn't ADF BC implement record locking, locking database records when the row is modified in the ADF middle tier by a user? How can our phantom ADF developer even take out a lock if this is the case? Maybe ADF has a bug; maybe ADF isn't implementing record locking at all? Shouldn't we see the error "JBO-26030: Failed to lock the record, another user holds the lock" as we attempt to modify the record? Why do we see JBO-25014? Let's verify that ADF is in fact issuing the correct SQL LOCK-FOR-UPDATE statement to the database. First we need to verify ADF's locking strategy. It is determined by the Application Module's jbo.locking.mode property. The default (as of JDev 11.1.1.4.0, if memory serves me correctly) and recommended value is optimistic; the other valid value is pessimistic. Next we need a mechanism to check that ADF is issuing the LOCK statements to the database. We could ask DBAs to monitor locks with OEM, but optimally we'd rather not involve overworked DBAs in this process, so instead we can use the ADF runtime setting -Djbo.debugoutput=console. At runtime this option turns on instrumentation within the ADF BC layer which, among a lot of extra detail displayed in the log window, will show the actual SQL statements issued to the database, including the LOCK statement we're looking to confirm.
Setting our locking mode to pessimistic, opening the Business Component Browser or a JSF page allowing us to edit a record, say the CHARGEABLE field within a BOOKINGS record where BOOKING_NO = 1206, upon editing the record we see, among others, the following log entries:

[421] Built select: 'SELECT BOOKING_NO, EVENT_NO, RESOURCE_CODE, CHARGEABLE, MADE_BY, QUANTITY, COST, STATUS, COMMENTS FROM BOOKINGS Bookings'
[422] Executing LOCK...SELECT BOOKING_NO, EVENT_NO, RESOURCE_CODE, CHARGEABLE, MADE_BY, QUANTITY, COST, STATUS, COMMENTS FROM BOOKINGS Bookings WHERE BOOKING_NO=:1 FOR UPDATE NOWAIT
[423] Where binding param 1: 1206

As can be seen on line 422, a LOCK-FOR-UPDATE is indeed issued to the database. Later, when we commit the record, we see:

[441] OracleSQLBuilder: SAVEPOINT 'BO_SP'
[442] OracleSQLBuilder Executing, Lock 1 DML on: BOOKINGS (Update)
[443] UPDATE buf Bookings>#u SQLStmtBufLen: 210, actual=62
[444] UPDATE BOOKINGS Bookings SET CHARGEABLE=:1 WHERE BOOKING_NO=:2
[445] Update binding param 1: N
[446] Where binding param 2: 1206
[447] BookingsView1 notify COMMIT ...
[448] _LOCAL_VIEW_USAGE_model_Bookings_ResourceTypesView1 notify COMMIT ...
[449] EntityCache close prepared statement

...and as a result the changes are saved to the database, and the lock is released. Let's see what happens when we use the optimistic locking mode, this time changing the same BOOKINGS record's CHARGEABLE column again. As soon as we edit the record we see little activity in the logs, nothing to indicate any SQL statement, let alone a LOCK, has been taken out on the row. However when we save our record by issuing a commit, the following is recorded in the logs:

[509] OracleSQLBuilder: SAVEPOINT 'BO_SP'
[510] OracleSQLBuilder Executing doEntitySelect on: BOOKINGS (true)
[511] Built select: 'SELECT BOOKING_NO, EVENT_NO, RESOURCE_CODE, CHARGEABLE, MADE_BY, QUANTITY, COST, STATUS, COMMENTS FROM BOOKINGS Bookings'
[512] Executing LOCK...SELECT BOOKING_NO, EVENT_NO, RESOURCE_CODE, CHARGEABLE, MADE_BY, QUANTITY, COST, STATUS, COMMENTS FROM BOOKINGS Bookings WHERE BOOKING_NO=:1 FOR UPDATE NOWAIT
[513] Where binding param 1: 1205
[514] OracleSQLBuilder Executing, Lock 2 DML on: BOOKINGS (Update)
[515] UPDATE buf Bookings>#u SQLStmtBufLen: 210, actual=62
[516] UPDATE BOOKINGS Bookings SET CHARGEABLE=:1 WHERE BOOKING_NO=:2
[517] Update binding param 1: Y
[518] Where binding param 2: 1205
[519] BookingsView1 notify COMMIT ...
[520] _LOCAL_VIEW_USAGE_model_Bookings_ResourceTypesView1 notify COMMIT ...
[521] EntityCache close prepared statement

So even though we're seeing the mid-tier delay the LOCK statement until commit time, it is in fact occurring, on line 512, and is released as part of the commit issued on line 519. Therefore with either optimistic or pessimistic locking a lock is indeed issued. Our conclusion at this point must be: unless the LOCK statement is somehow never really hitting the database (unlikely), or the database has a bug (even less likely), ADF does in fact take out a lock on the record before allowing the current user to update it. So there's no way our phantom ADF developer could modify the record, even if he tried, without at least someone receiving a lock error. Hmm, we can only conclude the locking mode is a red herring and not the true cause of our problem. Who is the phantom? At this point we'll need to conclude that the error message "JBO-25014: Another user has changed" is somehow legitimate, even though we don't understand yet what's causing it.
This leads on to two further questions: how does ADF know another user has changed the row, and what has been changed anyway? To answer the first question, the Fusion Guide's section 4.10.11, "How to Protect Against Losing Simultaneously Updated Data", which details the Entity Object Change-Indicator property, gives us the answer: at runtime the framework provides automatic "lost update" detection for entity objects to ensure that a user cannot unknowingly modify data that another user has updated and committed in the meantime. Typically, this check is performed by comparing the original values of each persistent entity attribute against the corresponding current column values in the database at the time the underlying row is locked. Before updating a row, the entity object verifies that the row to be updated is still consistent with the current state of the database. The guide further suggests how to make this solution more efficient: you can make the lost update detection more efficient by identifying any attributes of your entity whose values you know will be updated whenever the entity is modified. Typical candidates include a version number column or an updated date column in the row. To detect whether the row has been modified since the user queried it in the most efficient way, select the Change Indicator option to compare only the change-indicator attribute values. We now know that ADF BC doesn't use the locking mechanism at all to protect the current user against updates; rather, it keeps a copy of the original record fetched, separate from the user-changed version of the record, and it compares the original record against the one in the database when the lock is taken out. If the values don't match, be it via the default compare-all-columns behaviour or the more efficient Change Indicator mechanism, ADF BC will throw the JBO-25014 error. This leaves one last question. Now we know the mechanism by which ADF identifies a changed row; what we don't know is what has changed and who changed it. The real culprit: what's changed? We know the record in the mid-tier has been changed by the user, but ADF doesn't use the changed mid-tier record to compare to the database record; rather it uses a copy of the original record before it was changed. This leaves us to conclude the database record has changed, but how and by whom? There are three potential causes. Database triggers: a database trigger, among other uses, can be configured to fire PL/SQL code on a database table insert, update or delete. In particular, in an insert or update the trigger can override the value assigned to a particular column. The trigger execution is actioned by the database on behalf of the user initiating the insert or update action. The reason this causes our ADF-specific issue is that when we insert or update a record in the database via ADF, ADF keeps a copy of the record written to the database. However the cached record is instantly out of date, as the database trigger has modified the record that was actually written to the database. Thus when we update the record we just inserted or updated for a second time, ADF compares its original copy of the record to that in the database, detects the record has been changed, and gives us JBO-25014. This is probably the most common cause of this problem (see the trigger sketch at the end of this excerpt). Default values: a second reason this issue can occur is another database feature, default column values.
When creating a database table the schema designer can define default values for specific columns. For example a CREATED_BY column could be set to SYSDATE, or a flag column to Y or N. Default values are only used by the database when a user inserts a new record and the specific column is assigned NULL. The database in this case will overwrite the column with the default value. As per the database trigger section, it then becomes apparent why ADF chokes on this feature, though it can only occur in an insert-commit-update-commit scenario, not an update-commit-update-commit scenario. Instead-of trigger views: I must admit I haven't double-checked this scenario, but it seems plausible: the Oracle database's instead-of trigger view (sometimes referred to as an instead-of view). A view in the database is based on a query and, depending on the query's complexity, may support insert, update and delete functionality to a limited degree. In order to support fully insertable, updatable and deletable views, Oracle introduced the instead-of view, which gives the view designer the ability to define not only the view query, but a set of programmatic PL/SQL triggers in which the developer can define their own logic for inserts, updates and deletes. While this provides the database programmer a very powerful feature, it can cause issues for our ADF application. On inserting or updating a record in the instead-of view, the record and its data that go in are not necessarily the data that comes out when ADF compares the records, as the view developer has the option to do practically anything with the incoming data, including throwing it away or pushing it to tables which aren't used by the view's underlying query for fetching the data. Readers are at this point reminded that this article is specifically about how the JBO-25014 error occurs in the context of one developer on an isolated database. The article is not considering how the error occurs in a production environment where there are multiple users who can cause this error in a legitimate fashion. Assuming none of the above features are the cause of the problem, and optimistic locking is turned on (this error is not possible if pessimistic locking is the default mode *and* none of the previous causes are possible), JBO-25014 is quite feasible in a production ADF application if two users modify the same record. At this point, under project timeline pressure, the obvious fix for developers is to drop both the database triggers and the default values from the underlying tables. However we must be careful that these legacy constructs aren't used and assumed to be in place by other legacy systems. Dropping the database triggers or default values that existing Oracle Forms applications assume and require to be in place could cause unexpected behaviour and bugs in the Forms application. Proficient software engineers would recognize such a change may require a partial or full regression test of the existing legacy system, a potentially costly and time-consuming exercise; not ideal. Solving the mystery once and for all: luckily ADF has built-in functionality to deal with this issue, which is no surprise, as Oracle, the author of ADF, also built the database and is fully aware of the Oracle database's feature set. The solution lies at the Entity Object attribute level: the Refresh After Insert and Refresh After Update properties.
Simply selecting these instructs ADF BC, after inserting or updating a record to the database, to expect the database to modify the said attributes, and to read a copy of the changed attributes back into its cached mid-tier record. Thus the next time the developer modifies the current record, the mid-tier record and the database record match, and "JBO-25014: Another user has changed" is no longer an issue. [Post edit: as Oracle's Steven Davelaar correctly points out in the comments below, the above solution will not work for instead-of-trigger views, as it relies on the SQL RETURNING clause, which is incompatible with this type of view.] Alternatively you can set the Change Indicator on one of the attributes. This will work as long as the related column for the attribute in the database itself isn't inadvertently updated. In turn you're possibly just masking the issue rather than solving it, because if another developer turns the Change Indicator back on the original issue will return.
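To make the most common cause above concrete, here is a minimal sketch of a database trigger that silently changes a row behind ADF's back; the table and column names are hypothetical, not taken from the article:

CREATE OR REPLACE TRIGGER bookings_biu
BEFORE INSERT OR UPDATE ON bookings
FOR EACH ROW
BEGIN
  -- The database overrides whatever the mid-tier wrote, so ADF's cached
  -- copy of the row is stale the moment the insert or update completes.
  :NEW.modified_date := SYSDATE;
END;
/

Without Refresh After Insert/Update set on the corresponding MODIFIED_DATE attribute, the very next edit of this row through ADF would raise JBO-25014.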

    Read the article

  • The Incremental Architect's Napkin - #2 - Balancing the forces

    - by Ralf Westphal
    Originally posted on: http://geekswithblogs.net/theArchitectsNapkin/archive/2014/06/02/the-incremental-architectacutes-napkin---2---balancing-the-forces.aspx Categorizing requirements is the prerequisite for economic architectural decisions. Not all requirements are created equal. However, to truly understand and describe the requirement forces pulling on software development, I think further examination of the requirement aspects is warranted. Aspects of Functionality. There are two sides to Functionality requirements. It's about what a software should do. I call that the Operations it implements. Operations are defined by expressions and control structures or calls to frameworks of some sort, i.e. (business) logic statements. Operations calculate, transform, aggregate, validate, send, receive, load, store etc. Operations are about behavior; they take input and produce output by considering state. I'm not using the term "function" here, because functions - or methods or sub-programs - are not necessary to implement Operations. Functions belong to a different sub-aspect of requirements (see below). Operations alone are not enough, though, to make a customer happy with regard to his/her Functionality requirements. Only correctly implemented Operations provide full value. This should make clear why testing is so important. And not just manual tests during development of some operational feature, but automated tests. Because only automated tests scale when, over time, the number of operations increases. Without automated tests there is no guarantee formerly correct operations are still correct after more got added. To retest all previous operations manually is infeasible. So whoever relies on manual tests alone is not really balancing the two forces Operations and Correctness; with manual tests more weight is put on the Operations side of the scale. That might be OK for a short period of time - but in the long run it will bite you. You need to plan for Correctness in the long run from the first day of your project on. Aspects of Quality. As important as Functionality is, it's not the driver for software development. No software has ever been written just to implement some operation in code. We don't need computers just to do something. Everything computers can do with software, we can do without them - well, at least given enough time and resources. We could calculate the most complex formulas without computers. We could run auctions with millions of people without computers. The only reason we want computers to help us with these and a million other Operations is: we don't want to wait very long for the results. Or we want fewer errors. Or we want easier access to complicated solutions. So the main reason for customers to buy/order software is some Quality. They want some Functionality with a higher Quality (e.g. performance, scalability, usability, security...) than without the software. But Qualities come in at least two flavors: Most important are Primary Qualities. Those are the Qualities software truly is written for. Take an online auction website for example. Its Primary Qualities are performance, scalability, and usability, I'd say. Auctions should come within reach of millions of people; setting up an auction should be very easy; finding a suitable auction and bidding on it should be as fast as possible. Only once those Qualities have been implemented does security become relevant. A secure auction website is important - but not as important as a fast auction website.
Nobody would want to use the most secure auction website if it was unbearably slow. But there would be people willing to use the fastest auction website even if it was lacking security. That's why security - with regard to online auction software - is not a Primary Quality, but just a Secondary Quality. It's a supporting quality, so to speak. It does not deliver value by itself. With a password manager this might be different: there, security might be a Primary Quality. Please get me right: I don't want to denigrate any Quality. There's a long list of non-functional requirements at Wikipedia. They are all created equal - but that does not mean they are equally important for all software projects. When confronted with Quality requirements, check with the customer which are primary and which are secondary. That will help to make good economic decisions when in a crunch. Resources are always limited - but requirements are a bottomless ocean. Aspects of Security of Investment. Functionality and Quality are traditionally the requirement aspects cared for most - by customers and developers alike. Even today, when pressure rises in a project, tunnel vision will focus on them. Any measures to create and hold up Security of Investment (SoI) will be out of the window pretty quickly. Resistance to customers and/or management is futile. As long as SoI is not placed on equal footing with Functionality and Quality, it's bound to suffer under pressure. Looking closer at what SoI means will help us become more conscious about it, and make customers and management aware of the risks of neglecting it. SoI to me has two facets: Production Efficiency (PE) is about speed of delivering value. Customers like short response times. Short response times mean less money spent. So whatever makes software development faster supports this requirement. This must not lead to duct-tape programming and banging out features by the dozen, though. Because customers don't just want Operations and Quality, but also Correctness. So if Correctness gets compromised by focusing too much on Production Efficiency, it will fire back. Customers want PE not just today, but over the whole course of a software's lifecycle. That means it's not just about coding speed, but equally about code quality. If code quality leads to rework, the PE is at an unsatisfactory level. Also if code production leads to waste, it's unsatisfactory, because the effort which went into waste could have been used to produce value. Rework and waste cost money. Rework and waste abound, however, as long as PE is not addressed explicitly with management and customers. Thanks to the Agile and Lean movements, that's increasingly the case. Nevertheless more could and should be done in many teams. Each and every developer should keep in mind that Production Efficiency is as important to the customer as Functionality and Quality - whether he/she states it or not. Making software development more efficient is important - but sooner or later even agile projects are going to hit a glass ceiling. At least as long as they neglect the second SoI facet: Evolvability. Delivering correct, high-quality functionality in short cycles today is good. But not just any software structure will allow this to happen for an indefinite amount of time.[1] The less explicitly software was designed, the sooner it's going to get stuck.
Big ball of mud, monolith, brownfield, legacy code, technical debt... there are many names for software structures that have lost the ability to evolve, to be easily changed to accommodate new requirements. An evolvable code base is the opposite of a brownfield. It's code which can be easily understood (by developers with sufficient domain expertise) and then easily changed to accommodate new requirements. Ideally the cost of adding feature X to an evolvable code base is independent of when it is requested - or at least the cost should only increase linearly, not exponentially.[2] Clean Code, Agile Architecture, and even traditional Software Engineering are concerned with Evolvability. However, it seems no systematic way of achieving it has been laid out yet. TDD + SOLID help - but still... When I look at the design ability reality in teams, I see much room for improvement. As stated previously, SoI - or to be more precise: Evolvability - can hardly be measured. Plus, the customer rarely states an explicit expectation with regard to it. That's why I think special care must be taken not to neglect it. Postponing it to some large refactorings should not be an option. Rather, Evolvability needs to be a core concern for every single developer day. This should not mean Evolvability is more important than any of the other requirement aspects. But neither is it less important. That's why more effort needs to be invested into it, to bring it on par with the other aspects, which usually are much more in focus. In closing. As you see, requirements are of quite different kinds. Not taking that into account will make it harder to understand the customer and to make economic decisions. Those sub-aspects of requirements are forces pulling in different directions. To improve performance might have an impact on Evolvability. To increase Production Efficiency might have an impact on security, etc. No requirement aspect should go unchecked when deciding how to allocate resources. Balancing should be explicit. And it should be possible to trace back each decision to a requirement. Why is there a null-check on parameters at the start of the method? Why are there 5000 LOC in this method? Why are there interfaces on those classes? Why is this functionality running on the threadpool? Why is this function defined on that class? Why is this class depending on three other classes? These and a thousand more questions are not meant to imply anything should be different in a code base. But it's important to know the reason behind all of these decisions, because not knowing the reason possibly means waste and having decided suboptimally. And how do we ensure all requirement aspects are balanced? That needs practices and transparency. Practices means doing things a certain way and not another, even though that might be possible. We're dealing with dangerous tools here. Like a knife is a dangerous tool. Harm can be done if we use our tools in just any way, at the whim of the moment. Over the centuries, rules and practices have been established for how to use knives. You don't put them in people's legs just because you feel like it. You hand over a knife with the handle towards the receiver. You might not even be allowed to cut round food like potatoes or eggs with it. The same should be the case for dangerous tools like object-orientation, remote communication, threads etc. We need practices to use them in a way so that requirements are balanced almost automatically.
In addition, to be able to work on software as a team, we need transparency. We need means to share our thoughts, to work jointly on mental models. So far our tools are focused on working with code. Testing frameworks, build servers, DI containers, IntelliSense, refactoring support... That's all nice and well, and I don't want to miss any of it. But I think it's not enough. We're missing mental tools, tools for making thinking and talking about software (independently of code) easier. You might think enough such tools already exist, like all those UML diagram types or flow charts. But then, isn't it strange that hardly any team is using them to design software? Or is that just due to a lack of education? I don't think so. It's a matter of value/weight ratio: the current mental tools are too heavyweight compared to the value they deliver. So my conclusion is, we need lightweight tools to really be able to balance requirements. Software development is complex. We need guidance not to forget important aspects. That's like flying an airplane. Pilots don't just jump in and take off for their destination. Yes, there are times when they are "flying by the seats of their pants", when they are just experts doing things intuitively. But most of the time they are going through honed practices called checklists. See "The Checklist Manifesto" for very enlightening details on this. Maybe then I should say it like this: we need more checklists for the complex business of software development.[3] [1] But that's what software development mostly is about: changing software over an unknown period of time. It needs to be corrected in order to finally provide promised operations. It needs to be enhanced to provide ever more operations and qualities. All this without knowing when it's going to stop. Probably never - until "maintainability" hits a wall when the technical debt is too large, the brownfield too deep. Software development is not a sprint, is not a marathon, not even an ultra marathon, because to all of those there is a foreseeable end. Software development is like continuously and forever running. [2] And sometimes I dare to think that costs could even decrease over time. Think of it: with each feature a software becomes richer in functionality. So with each additional feature, the chance increases that there is already functionality helping its implementation. That should lead to lower costs for feature X if it's requested later rather than sooner. X requested later could stand on the shoulders of previous features. Alas, reality seems to be far from this despite 20+ years of admonishing developers to think in terms of reusability.[1] [3] Please don't get me wrong: I don't want to bog down the "art" of software development with heavyweight practices and heaps of rules to follow. The framework we need should be lightweight. It should not stand in the way of delivering value to the customer. Its purpose is even to make that easier, by helping us to focus and decreasing waste and rework.

    Read the article

  • New in MySQL Enterprise Edition: Policy-based Auditing!

    - by Rob Young
For those with an interest in MySQL, this weekend's MySQL Connect conference in San Francisco has gotten off to a great start. On Saturday Tomas announced the feature-complete MySQL 5.6 Release Candidate that is now available for Community adoption and testing. This announcement marks the sprint to GA, which should be ready for release within the next 90 days. You can get a quick summary of the key 5.6 features here, or better yet download the 5.6 RC (under “Development Releases”), review what's new and try it out for yourself! There were also product-related announcements around MySQL Cluster 7.3 and MySQL Enterprise Edition. This latter announcement is of particular interest if you are faced with internal and regulatory compliance requirements, as it addresses and solves a pain point that is shared by most developers and DBAs: new, out-of-the-box compliance for MySQL applications via policy-based audit logging of user and query level activity. One of the most common requests we get for the MySQL roadmap is for quick and easy logging of audit events.
This is mainly due to how web-based applications have evolved from nice-to-have enablers to mission-critical revenue generators, and the important role MySQL plays in that new dynamic. In today's virtual marketplace, PCI compliance guidelines ensure credit card data is secure within e-commerce apps; from a corporate standpoint, Sarbanes-Oxley, HIPAA and other regulations guard the medical, financial, public sector and other personal-data-centric industries. For the supporting applications, audit policies and controls that monitor the eyes and hands that have viewed and acted upon the most sensitive data are most commonly implemented on the back-end database. With this in mind, MySQL 5.5 introduced an open audit plugin API that enables all MySQL users to write their own auditing plugins based on application-specific requirements. While the supporting docs are very complete and provide working code samples, writing an audit plugin requires time and low-level expertise to develop, test, implement and maintain. To help those who don't have the time and/or expertise to develop such a plugin, Oracle now ships MySQL 5.5.28 and higher with an easy-to-use, out-of-the-box auditing solution: MySQL Enterprise Audit. MySQL Enterprise Audit: the premise behind MySQL Enterprise Audit is simple. We wanted to provide an easy-to-use, policy-based auditing solution that enables you to quickly and seamlessly add compliance to your MySQL applications. MySQL Enterprise Audit meets this requirement by enabling you to: 1. Easily install the needed components. Installation requires an upgrade to MySQL 5.5.28 (Enterprise Edition), which can be downloaded from the My Oracle Support portal or the Oracle Software Delivery Cloud. After installation, you simply add the following to your my.cnf file to register and enable the audit plugin:

[mysqld]
plugin-load=audit_log.so

(keep in mind the audit_log suffix is platform dependent, so .dll on Windows, etc.), or alternatively you can load the plugin at runtime:

mysql> INSTALL PLUGIN audit_log SONAME 'audit_log.so';

2. Dynamically enable and disable the audit stream for a specific MySQL server. A new global variable called audit_log_policy allows you to dynamically enable and disable audit stream logging for a specific MySQL server. The variable parameters are described below. 3. Define audit policy based on what needs to be logged (everything, logins, queries, or nothing), by server.
The new audit_log_policy variable uses the following valid, descriptively named values to enable or disable audit stream logging and to filter the audit events that are logged to the audit stream:

"ALL" - enable the audit stream and log all events
"LOGINS" - enable the audit stream and log only login events
"QUERIES" - enable the audit stream and log only query events
"NONE" - disable the audit stream

4. Manage audit log files using basic MySQL log rotation features. A new global variable, audit_log_rotate_on_size, allows you to automate the rotation and archival of audit stream log files based on size, with archived log files renamed and appended with a datetime stamp when a new file is opened for logging. 5. Integrate the MySQL audit stream with MySQL tools, Oracle tools and other third-party solutions. The MySQL audit stream is written as XML, using UTF-8, and can be easily formatted for viewing using a standard XML parser. This enables you to leverage tools from MySQL and others to view the contents. The audit stream was also developed to meet the Oracle database audit stream specification, so combined Oracle/MySQL shops can import and manage MySQL audit images using the same Oracle tools they use for their Oracle databases. So, assuming a successful MySQL 5.5.28 upgrade or installation, a common setup and use case scenario might look something like the short sketch at the end of this excerpt. It should be noted that MySQL Enterprise Audit was designed to be transparent at the application layer, by allowing you to control the mix of log output buffering and asynchronous or synchronous disk writes to minimize the overhead that comes when the audit stream is enabled. The net result is that, depending on the chosen audit log stream options, most application users will see little to no difference in response times when the audit stream is enabled. So what are your next steps? Get all of the details on MySQL Enterprise Audit, including all of the additional configuration options, from the MySQL documentation.
MySQL Enterprise Edition customers can download MySQL 5.5.28 with the Audit extension for production use from the My Oracle Support portal. Everyone can download MySQL 5.5.28 with the Audit extension for evaluation from the Oracle Software Delivery Cloud. Learn more about MySQL Enterprise Edition. As always, thanks for your continued support of MySQL!
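As a quick illustration of steps 2 through 4 above, here is a minimal sketch, assuming the audit_log plugin from step 1 is already loaded; the specific values are illustrative, not taken from the original post:

mysql> SET GLOBAL audit_log_policy = 'LOGINS';        -- log only login events
mysql> SET GLOBAL audit_log_rotate_on_size = 1048576; -- rotate/archive the log at ~1 MB
mysql> SHOW GLOBAL VARIABLES LIKE 'audit_log%';       -- verify the audit settings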

    Read the article

  • Much Ado About Nothing: Stub Objects

    - by user9154181
    The Solaris 11 link-editor (ld) contains support for a new type of object that we call a stub object. A stub object is a shared object, built entirely from mapfiles, that supplies the same linking interface as the real object, while containing no code or data. Stub objects cannot be executed — the runtime linker will kill any process that attempts to load one. However, you can link to a stub object as a dependency, allowing the stub to act as a proxy for the real version of the object. You may well wonder if there is a point to producing an object that contains nothing but linking interface. As it turns out, stub objects are very useful for building large bodies of code such as Solaris. In the last year, we've had considerable success in applying them to one of our oldest and thorniest build problems. In this discussion, I will describe how we came to invent these objects, and how we apply them to building Solaris. This posting explains where the idea for stub objects came from, and details our long and twisty journey from hallway idea to standard link-editor feature. I expect that these details are mainly of interest to those who work on Solaris and its makefiles, those who have done so in the past, and those who work with other similar bodies of code. A subsequent posting will omit the history and background details, and instead discuss how to build and use stub objects. If you are mainly interested in what stub objects are, and don't care about the underlying software war stories, I encourage you to skip ahead. The Long Road To Stubs. This all started for me with an email discussion in May of 2008, regarding a change request that was filed in 2002, entitled: 4631488 lib/Makefile is too patient: .WAITs should be reduced. This CR encapsulates a number of chronic issues with Solaris builds: We build Solaris with a parallel make (dmake) that tries to build as much of the code base in parallel as possible. There is a lot of code to build, and we've long made use of parallelized builds to get the job done quicker. This is even more important in today's world of massively multicore hardware. Solaris contains a large number of executables and shared objects. Executables depend on shared objects, and shared objects can depend on each other. Before you can build an object, you need to ensure that the objects it needs have been built. This implies a need for serialization, which is in direct opposition to the desire to build everything in parallel. To accurately build objects in the right order requires an accurate set of make rules defining the things that depend on each other. This sounds simple, but the reality is quite complex. In practice, having programmers explicitly specify these dependencies is a losing strategy: It's really hard to get right. It's really easy to get wrong and never know it, because things build anyway. Even if you get it right, it won't stay that way, because dependencies between objects change over time, and make cannot help you detect such drifting. You won't know that you got it wrong until the builds break. That can be a long time after the change that triggered the breakage, making it hard to connect cause and effect. Usually this happens just before a release, when the pressure is on, it's hard to think calmly, and there is no time for deep fixes.
As a poor compromise, the libraries in core Solaris were built using a set of grossly incomplete hand-written rules, supplemented with a number of dmake .WAIT directives used to group the libraries into sets of non-interacting groups that can be built in parallel, because we think they don't depend on each other. From time to time, someone will suggest that we could analyze the built objects themselves to determine their dependencies and then generate make rules based on those relationships. This is possible, but there are complications that limit the usefulness of that approach: To analyze an object, you have to build it first. This is a classic chicken-and-egg scenario. You could analyze the results of a previous build, but then you're not necessarily going to get accurate rules for the current code. It should be possible to build the code without having a built workspace available. The analysis will take time, and remember that we're constantly trying to make builds faster, not slower. By definition, such an approach will always be approximate, and therefore only incrementally more accurate than the hand-written rules described above. The hand-written rules are fast and cheap, while this idea is slow and complex, so we stayed with the hand-written approach. Solaris was built that way, essentially forever, because these are genuinely difficult problems that had no easy answer. The makefiles were full of build races in which the right outcomes happened reliably for years, until a new machine or a change in build server workload upset the accidental balance of things. After figuring out what had happened, you'd mutter "How did that ever work?", add another incomplete and soon-to-be-inaccurate make dependency rule to the system, and move on. This was not a satisfying solution, as we tend to be perfectionists in the Solaris group, but we didn't have a better answer. It worked well enough, approximately. And so it went for years. We needed a different approach — a new idea to cut the Gordian Knot. In that discussion from May 2008, my fellow linker-alien Rod Evans had the initial spark that led us to a game-changing series of realizations: The link-editor is used to link objects together, but it only uses the ELF metadata in the object, consisting of symbol tables, ELF versioning sections, and similar data. Notably, it does not look at, or understand, the machine code that makes an object useful at runtime. If you had an object that only contained the ELF metadata for a dependency, but not the code or data, the link-editor would find it equally useful for linking, and would never know the difference. Call it a stub object. In the core Solaris OS, we require all objects to be built with a link-editor mapfile that describes all of their publicly available functions and data. Could we build a stub object using the mapfile for the real object? It ought to be very fast to build stub objects, as there are no input objects to process. Unlike the real object, stub objects would not actually require any dependencies, and so all of the stubs for the entire system could be built in parallel. When building the real objects, one could link against the stub objects instead of the real dependencies. This means that all the real objects can be built in parallel too, without any serialization. We could replace a system that requires perfect makefile rules with a system that requires no ordering rules whatsoever. The results would be considerably more robust.
We immediately realized that this idea had potential, but also that there were many details to sort out, lots of work to do, and that perhaps it wouldn't really pan out. As is often the case, it would be necessary to do the work and see how it turned out. Following that conversation, I set about trying to build a stub object. We determined that a faithful stub has to do the following: Present the same set of global symbols, with the same ELF versioning, as the real object. Functions are simple — it suffices to have a symbol of the right type, possibly, but not necessarily, referencing a null function in its text segment. Copy relocations make data more complicated to stub. The possibility of a copy relocation means that when you create a stub, the data symbols must have the actual size of the real data. Any error in this will go uncaught at link time, and will cause tragic failures at runtime that are very hard to diagnose. For reasons too obscure to go into here, involving tentative symbols, it is also important that the data reside in bss, or not, matching its placement in the real object. If the real object has more than one symbol pointing at the same data item, we call these aliased symbols. All data symbols in the stub object must exhibit the same aliasing as the real object. We imagined the stub library feature working as follows: A command line option to ld tells it to produce a stub rather than a real object. In this mode, only mapfiles are examined, and any objects or shared libraries on the command line are ignored. The extra information needed (function or data, size, and bss details) would be added to the mapfile. When building the real object instead of the stub, the extra information for building stubs would be validated against the resulting object to ensure that they match. In exploring these ideas, I immediately ran headfirst into the reality of the original mapfile syntax, a subject that I would later write about as The Problem(s) With Solaris SVR4 Link-Editor Mapfiles. The idea of extending that poor language was a non-starter. Until a better mapfile syntax became available, which seemed unlikely in 2008, the solution could not involve extensions to the mapfile syntax. Instead, we cooked up the idea (hack) of augmenting mapfiles with stylized comments that would carry the necessary information. A typical definition might look like:

# DATA(i386) __iob 0x3c0
# DATA(amd64,sparcv9) __iob 0xa00
# DATA(sparc) __iob 0x140
__iob;

A further problem then became clear: If we can't extend the mapfile syntax, then there's no good way to extend ld with an option to produce stub objects, and to validate them against the real objects. The idea of having ld read comments in a mapfile and parse them for content is an unacceptable hack. The entire point of comments is that they are strictly for the human reader, and explicitly ignored by the tool. Taking all of these speed bumps into account, I made a new plan: A Perl script reads the mapfiles, generates some small C glue code to produce empty functions and data definitions, compiles and links the stub object from the generated glue code, and then deletes the generated glue code. Another Perl script, used after both objects have been built, compares the real and stub objects, using data from elfdump, and validates that they present the same linking interface. By June 2008, I had written the above, and generated a stub object for libc. It was a useful prototype process to go through, and it allowed me to explore the ideas at a deep level.
Ultimately, though, the result was unsatisfactory as a basis for a real product. There were so many issues: The use of stylized comments was fine for a prototype, but not close to professional enough for shipping product. The idea of having to document and support it was a large concern. The ideal solution for stub objects really does involve having the link-editor accept the same arguments used to build the real object, augmented with a single extra command line option. Any other solution, such as our prototype script, will require makefiles to be modified in deeper ways to support building stubs, and so will raise barriers to converting existing code. A validation script that rederives what the linker knew when it built an object will always be at a disadvantage relative to the actual linker that did the work. A stub object should be identifiable as such. In the prototype, there was no tag or other metadata that would let you know that they weren't real objects. Being able to identify a stub object in this way means that the file command can tell you what it is, and that the runtime linker can refuse to try and run a program that loads one. At that point, we needed to apply this prototype to building Solaris. As you might imagine, modifying all the makefiles in the core Solaris code base in order to do this is a massive task, and not something you'd enter into lightly. The quality of the prototype just wasn't good enough to justify that sort of time commitment, so I tabled the project, putting it on my list of long-term things to think about, and moved on to other work. It would sit there for a couple of years. Semi-coincidentally, one of the projects I tackled after that was to create a new mapfile syntax for the Solaris link-editor. We had wanted to do something about the old mapfile syntax for many years. Others before me had done some paper designs, and a great deal of thought had already gone into the features it should, and should not, have, but for various reasons things had never moved beyond the idea stage. When I joined Sun in late 2005, I got involved in reviewing those things and thinking about the problem. Now in 2008, fresh from relearning for the Nth time why the old mapfile syntax was a huge impediment to linker progress, it seemed like the right time to tackle the mapfile issue. Paving the way for proper stub object support was not the driving force behind that effort, but I certainly had it in mind as I moved forward. The new mapfile syntax, which we call version 2, integrated into Nevada build snv_135 in February 2010: 6916788 ld version 2 mapfile syntax; PSARC/2009/688 Human readable and extensible ld mapfile syntax. In order to prove that the new mapfile syntax was adequate for general purpose use, I had also done an overhaul of the ON consolidation to convert all mapfiles to use the new syntax, and put checks in place that would ensure that no use of the old syntax would creep back in. That work went back into snv_144 in June 2010: 6916796 OSnet mapfiles should use version 2 link-editor syntax. That was a big putback, modifying 517 files, adding 18 new files, and removing 110 old ones. I would have done this putback anyway, as the work was already done, and the benefits of human readable syntax are obvious. However, among the justifications listed in CR 6916796 was this: We anticipate adding additional features to the new mapfile language that will be applicable to ON, and which will require all sharable object mapfiles to use the new syntax.
I never explained what those additional features were, and no one asked. It was premature to say so, but this was a reference to stub objects. By that point, I had already put together a working prototype link-editor with the necessary support for stub objects. I was pleased to find that building stubs was indeed very fast. On my desktop system (Ultra 24), an amd64 stub for libc can be built in a fraction of a second:

% ptime ld -64 -z stub -o stubs/libc.so.1 -G -hlibc.so.1 \
    -ztext -zdefs -Bdirect ...
real        0.019708910
user        0.010101680
sys         0.008528431

In order to go from prototype to integrated link-editor feature, I knew that I would need to prove that stub objects were valuable. And to do that, I knew that I'd have to switch the Solaris ON consolidation to use stub objects and evaluate the outcome. And in order to do that experiment, ON would first need to be converted to version 2 mapfiles. Sub-mission accomplished. Normally when you design a new feature, you can devise reasonably small tests to show it works, and then deploy it incrementally, letting it prove its value as it goes. The entire point of stub objects, however, was to demonstrate that they could be successfully applied to an extremely large and complex code base, and specifically to solve the Solaris build issues detailed above. There was no way to finesse the matter — in order to move ahead, I would have to successfully use stub objects to build the entire ON consolidation and demonstrate their value. In software, the need to boil the ocean can often be a warning sign that things are trending in the wrong direction. Conversely, sometimes progress demands that you build something large and new all at once. A big win, or a big loss — sometimes all you can do is try it and see what happens. And so, I spent some time staring at ON makefiles trying to get a handle on how things work, and how they'd have to change. It's a big and messy world, full of complex interactions, unspecified dependencies, special cases, and knowledge of arcane makefile features... ...and so, I backed away, put it down for a few months and did other work... ...until the fall, when I felt it was time to stop thinking and pondering (some would say stalling) and get on with it. Without stubs, the following gives a simplified high level view of how Solaris is built: An initially empty directory known as the proto, and referenced via the ROOT makefile macro, is established to receive the files that make up the Solaris distribution. A top level setup rule creates the proto area, and performs operations needed to initialize the workspace so that the main build operations can be launched, such as copying needed header files into the proto area. Parallel builds are launched to build the kernel (usr/src/uts), libraries (usr/src/lib), and commands. The install makefile target builds each item and delivers a copy to the proto area. All libraries and executables link against the objects previously installed in the proto, implying the need to synchronize the order in which things are built. Subsequent passes run lint, and do packaging. Given this structure, the additions to use stub objects are: A new second proto area is established, known as the stub proto and referenced via the STUBROOT makefile macro. The stub proto has the same structure as the real proto, but is used to hold stub objects. All files in the real proto are delivered as part of the Solaris product. In contrast, the stub proto is used to build the product, and then thrown away.
A new target is added to library Makefiles called stub. This rule builds the stub objects. The ld command is designed so that you can build a stub object using the same ld command line you'd use to build the real object, with the addition of a single -z stub option. This means that the makefile rules for building the stub objects are very similar to those used to build the real objects, and many existing makefile definitions can be shared between them. A new target is added to the Makefiles called stubinstall, which delivers the stub objects built by the stub rule into the stub proto. These rules reuse much of the plumbing used by the existing install rule. The setup rule runs stubinstall over the entire lib subtree as part of its initialization. All libraries and executables link against the objects in the stub proto rather than the main proto, and can therefore be built in parallel without any synchronization (a hypothetical sketch of this workflow appears at the end of this excerpt). There was no small way to try this that would yield meaningful results. I would have to take a leap of faith and edit approximately 1850 makefiles and 300 mapfiles first, trusting that it would all work out. Once the editing was done, I'd type make and see what happened. This took about 6 weeks to do, and there were many dark days when I'd question the entire project, or struggle to understand some of the many twisted and complex situations I'd uncover in the makefiles. I even found a couple of new issues that required changes to the new stub object related code I'd added to ld. With a substantial amount of encouragement and help from some key people in the Solaris group, I eventually got the editing done and stub objects for the entire workspace built. I found that my desktop system could build all the stub objects in the workspace in roughly a minute. This was great news, as it meant that use of the feature is effectively free — no one was likely to notice or care about the cost of building them. After another week of typing make, fixing whatever failed, and doing it again, I succeeded in getting a complete build! The next step was to remove all of the make rules and .WAIT statements dedicated to controlling the order in which libraries under usr/src/lib are built. This came together pretty quickly, and after a few more speed bumps, I had a workspace that built cleanly and looked like something you might actually be able to integrate someday. This was a significant milestone, but there was still much left to do. I turned to doing full nightly builds. Every type of build (open, closed, OpenSolaris, export, domestic) had to be tried. Each type failed in a new and unique way, requiring some thinking and rework. As things came together, I became aware of things that could have been done better, simpler, or cleaner, and those things also required some rethinking, the seeking of wisdom from others, and some rework. After another couple of weeks, it was in close to final form. My focus turned towards the end game and integration. This was a huge workspace, and needed to go back soon, before changes in the gate would make merging increasingly difficult. At this point, I knew that the stub objects had greatly simplified the makefile logic and uncovered a number of race conditions, some of which had been there for years. I assumed that the builds were faster too, so I did some builds intended to quantify the speedup in build time that resulted from this approach. It had never occurred to me that there might not be one.
And so, I was very surprised to find that the wall-clock build times for a stock ON workspace were essentially identical to the times for my stub-library-enabled version! This is why it is important to always measure, and not just to assume. One can tell from first principles, based on all those removed dependency rules in the library makefiles, that the stub object version of ON gives dmake considerably more opportunities to overlap library construction, so where did that advantage go? Some hypotheses were proposed, and shot down:

  - Could we have disabled dmake's parallel feature? No, a quick check showed things being built in parallel.
  - It was suggested that we might be I/O bound, and so the threads would be mostly idle. That's a plausible explanation, but system stats didn't really support it. Plus, the timings for the stub and non-stub cases were just too suspiciously identical.
  - Are our machines already handling as much parallelism as they are capable of, and unable to exploit these additional opportunities? Once again, we didn't see the evidence to back this up.

Eventually, a more plausible and obvious reason emerged: we build the libraries and commands (usr/src/lib, usr/src/cmd) in parallel with the kernel (usr/src/uts). The kernel is the long leg in that race, and so wall-clock measurements of build time are essentially showing how long it takes to build uts. Although it would have been nice to post a huge speedup immediately, we can take solace in knowing that stub objects simplify the makefiles and reduce the possibility of race conditions. The next step in reducing build time should be to find ways to reduce or overlap the uts part of the builds. When that leg of the build becomes shorter, the increased parallelism in the libs and commands will pay additional dividends. Until then, we'll just have to settle for simpler and more robust.

And so, I integrated the link-editor support for creating stub objects into snv_153 (November 2010) with

    6993877 ld should produce stub objects
    PSARC/2010/397 ELF Stub Objects

followed by the work to convert the ON consolidation in snv_161 (February 2011) with

    7009826 OSnet should use stub objects
    4631488 lib/Makefile is too patient: .WAITs should be reduced

This was a huge putback, with 2108 modified files, 8 new files, and 2 removed files. Due to the size, I was allowed a window after snv_160 closed in which to do the putback. It went pretty smoothly for something this big; a few more preexisting race conditions would be discovered and addressed over the next few weeks, and things have been quiet since then.

Conclusions and Looking Forward

Solaris has been built with stub objects since February. The fact that developers no longer specify the order in which libraries are built has been a big success, and we've eliminated an entire class of build error. That's not to say that there are no build races left in the ON makefiles, but we've taken a substantial bite out of the problem while generally simplifying and improving things. The introduction of a stub proto area has also opened some interesting new possibilities for other build improvements. As this article has become quite long, and as those uses do not involve stub objects, I will defer that discussion to a future article.
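For readers who want to see the shape of the stub and stubinstall rules described above, here is a minimal sketch of what they might look like in a library Makefile. The ROOT and STUBROOT macros and the -z stub option come from the article; everything else (library name, paths, the MAPFILES variable) is an illustrative assumption, not the actual ON makefile code:

    # Hypothetical library Makefile fragment; illustrates the pattern only.
    LIBRARY  = libfoo.so.1
    OBJECTS  = foo.o bar.o
    MAPFILES = mapfile-vers

    # Real object: linked from compiled objects, delivered to $(ROOT)
    # by the usual install rule.
    $(LIBRARY): $(OBJECTS)
    	$(LD) -G -h$(LIBRARY) -M$(MAPFILES) -ztext -zdefs -Bdirect \
    	    -o $@ $(OBJECTS)

    # Stub object: the same link line plus -z stub. No compiled objects
    # are needed; the interface comes from the version 2 mapfiles.
    stub:
    	$(LD) -G -h$(LIBRARY) -M$(MAPFILES) -ztext -zdefs -Bdirect \
    	    -z stub -o stubs/$(LIBRARY)

    # Deliver the stub into the stub proto instead of the real proto.
    stubinstall: stub
    	cp stubs/$(LIBRARY) $(STUBROOT)/usr/lib/$(LIBRARY)

Because the stub link line differs from the real one only by -z stub, both rules can share the same macro definitions, which is what makes a conversion across roughly 1850 makefiles tractable.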

    Read the article

  • Using multiple column layout with HTML 5 and CSS 3

    - by nikolaosk
This is going to be the fourth post in a series of posts regarding HTML 5. You can find the other posts here, here and here. In this post I will provide a hands-on example with HTML 5 and CSS 3 on how to create a page with multiple columns and proper layout. I will show you how to use CSS 3 to create columns much more easily than relying on DIV elements and the float CSS rule. I will also show you how to use browser-specific prefix rules (-moz for Firefox and -webkit for Chrome and Safari) for browsers that do not fully support CSS 3.

In order to be absolutely clear, this is not (and could not be) a detailed tutorial on HTML 5. There are other great resources for that. Navigate to the excellent interactive tutorials of W3School. Another excellent resource is HTML 5 Doctor. Two very nice sites that show you what features and specifications are implemented by various browsers and their versions are http://caniuse.com/ and http://html5test.com/. At this time Chrome seems to support most of the HTML 5 specifications. Another excellent way to find out if the browser supports HTML 5 and CSS 3 features is to use the lightweight Javascript library Modernizr.

In this hands-on example I will be using Expression Web 4.0. This application is not a free application, and you can use any HTML editor you like. You can use Visual Studio 2012 Express edition; you can download it here.

I will create a simple page with information about HTML 5, CSS 3 and JQuery. This is the full HTML 5 code:

    <!DOCTYPE html>
    <html lang="en">
      <head>
        <title>HTML 5, CSS3 and JQuery</title>
        <meta http-equiv="Content-Type" content="text/html;charset=utf-8" >
        <link rel="stylesheet" type="text/css" href="style.css">
      </head>
      <body>
        <div id="header">
          <h1>Learn cutting edge technologies</h1>
          <p>HTML 5, JQuery, CSS3</p>
        </div>
        <div id="main">
          <div id="mainnews">
            <div>
              <h2>HTML 5</h2>
            </div>
            <div>
              <p>
                HTML5 is the latest version of HTML and XHTML. The HTML standard defines a single language that can be written in HTML and XML. It attempts to solve issues found in previous iterations of HTML and addresses the needs of Web Applications, an area previously not adequately covered by HTML.
              </p>
              <div class="quote">
                <h4>Do More with Less</h4>
                <p>
                  jQuery is a fast and concise JavaScript Library that simplifies HTML document traversing, event handling, animating, and Ajax interactions for rapid web development.
                </p>
              </div>
              <p>
                The HTML5 test (html5test.com) score is an indication of how well your browser supports the upcoming HTML5 standard and related specifications. Even though the specification isn't finalized yet, all major browser manufacturers are making sure their browser is ready for the future. Find out which parts of HTML5 are already supported by your browser today and compare the results with other browsers. The HTML5 test does not try to test all of the new features offered by HTML5, nor does it try to test the functionality of each feature it does detect. Despite these shortcomings we hope that by quantifying the level of support, users and web developers will get an idea of how hard the browser manufacturers work on improving their browsers and the web as a development platform.
              </p>
            </div>
          </div>
          <div id="CSS">
            <div>
              <h2>CSS 3 Intro</h2>
            </div>
            <div>
              <p>
                Cascading Style Sheets (CSS) is a style sheet language used for describing the presentation semantics (the look and formatting) of a document written in a markup language. Its most common application is to style web pages written in HTML and XHTML, but the language can also be applied to any kind of XML document, including plain XML, SVG and XUL.
              </p>
            </div>
          </div>
          <div id="CSSmore">
            <div>
              <h2>CSS 3 Purpose</h2>
            </div>
            <div>
              <p>
                CSS is designed primarily to enable the separation of document content (written in HTML or a similar markup language) from document presentation, including elements such as the layout, colors, and fonts.[1] This separation can improve content accessibility, provide more flexibility and control in the specification of presentation characteristics, enable multiple pages to share formatting, and reduce complexity and repetition in the structural content (such as by allowing for tableless web design).
              </p>
            </div>
          </div>
        </div>
        <div id="footer">
          <p>Feel free to google more about the subject</p>
        </div>
      </body>
    </html>

The markup is very easy to follow. I have used some HTML 5 tags and the relevant HTML 5 doctype. The CSS code (style.css) follows:

    body {
      line-height: 30px;
      width: 1024px;
      background-color: #eee;
    }

    p {
      font-size: 17px;
      font-family: "Comic Sans MS";
    }

    p, h2, h3, h4 {
      margin: 0 0 20px 0;
    }

    #main, #header, #footer {
      width: 100%;
      margin: 0px auto;
      display: block;
    }

    #header {
      text-align: center;
      border-bottom: 1px solid #000;
      margin-bottom: 30px;
    }

    #footer {
      text-align: center;
      border-top: 1px solid #000;
      margin-bottom: 30px;
    }

    .quote {
      width: 200px;
      margin-left: 10px;
      padding: 5px;
      float: right;
      border: 2px solid #000;
      background-color: #F9ACAE;
    }

    .quote :last-child {
      margin-bottom: 0;
    }

    #main {
      column-count: 2;
      column-gap: 20px;
      column-rule: 1px solid #000;
      -moz-column-count: 2;
      -webkit-column-count: 2;
      -moz-column-gap: 20px;
      -webkit-column-gap: 20px;
      -moz-column-rule: 1px solid #000;
      -webkit-column-rule: 1px solid #000;
    }

All the rules in the css code are pretty simple. The layout is achieved with this CSS rule:

    #main {
      column-count: 2;
      column-gap: 20px;
      column-rule: 1px solid #000;
      -moz-column-count: 2;
      -webkit-column-count: 2;
      -moz-column-gap: 20px;
      -webkit-column-gap: 20px;
      -moz-column-rule: 1px solid #000;
      -webkit-column-rule: 1px solid #000;
    }

Do note the column-count, column-gap and column-rule properties. These properties make the two-column layout possible. Please have a look at the picture below to see why I used prefixes for Firefox (-moz) and Chrome (-webkit). It clearly indicates that the unprefixed CSS 3 column layout properties are not yet supported by Firefox and Chrome, which is why the prefixed versions are needed.
Finally, I test my simple HTML 5 page using the latest versions of Firefox, Internet Explorer and Chrome. On my machine I have installed Firefox 15.0.1; have a look at the picture below to see how the page looks. I have also installed Google Chrome 21.0; have a look at the picture below to see how the page looks there. And have a look at the picture below to see how my page looks in IE 10. My page looks the same in all browsers. Hope it helps!!!
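If you prefer detecting multi-column support at runtime rather than relying on prefixes alone, the Modernizr library mentioned earlier can be used for feature detection. A minimal sketch, assuming your Modernizr build includes the csscolumns detect (the fallback class name is illustrative):

    <script src="modernizr.js"></script>
    <script>
      // Modernizr exposes each feature detect as a boolean property.
      if (Modernizr.csscolumns) {
        // The column-count / column-gap / column-rule rules in style.css apply.
      } else {
        // Fall back to the DIV + float based layout mentioned at the start.
        document.getElementById("main").className = "float-layout";
      }
    </script>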

    Read the article

  • CodePlex Daily Summary for Wednesday, March 02, 2011

CodePlex Daily Summary for Wednesday, March 02, 2011

Popular Releases

  - Document.Editor: 2011.9: What's new for Document.Editor 2011.9: new Templates system, new Plug-in system, new Replace dialog, new reset settings, minor bug fixes, improvements and speed ups.
  - TortoiseHg: TortoiseHg 2.0: TortoiseHg 2.0 is a complete rewrite of TortoiseHg 1.1, switching from PyGtk to PyQt.
  - VG Content Display Web Part: VG Content Display Web Part V1.0: Install package shall be installed using PowerShell or stsadm commands or using the Central Administration interface. Please see instructions in the http://vgcdwp.codeplex.com/releases/view/61805#DownloadId=212622 file.
  - DirectQ: Release 1.8.7 (Beta 3): Fixes some problems and adds some more enhancements.
  - Sandcastle Help File Builder: SHFB v1.9.2.0 Release: This release supports the Sandcastle June 2010 Release (v2.6.10621.1). It includes full support for generating, installing, and removing MS Help Viewer files. This new release is compiled under .NET 4.0, supports Visual Studio 2010 solutions and projects as documentation sources, and adds support for projects targeting the Silverlight Framework. NOTE: The included help file and the online help have not been completely updated to reflect all changes in this release. A refresh will be issue...
  - Network Monitor Open Source Parsers: Microsoft Network Monitor Parsers 3.4.2554: The Network Monitor Parsers packages contain parsers for more than 400 network protocols, including RFC based public protocols and protocols for Microsoft products defined in the Microsoft Open Specifications for Windows and SQL Server. NetworkMonitor_Parsers.msi is the base parser package which defines parsers for commonly used public protocols and protocols for Microsoft Windows. In this release, we have added 4 new protocol parsers and updated 79 existing parsers in the NetworkMonitor_Pa...
  - Ajax Minifier: Microsoft Ajax Minifier 4.13: New features: switches and settings for turning off Conditional Compilation comment processing; for adding variable and/or function names that should not be renamed automatically; for adding manual renaming of variables/functions/properties; for automatic evaluation of certain literal expressions (but not all).
  - Image Resizer for Windows: Image Resizer 3 Preview 1: Prepare to have your minds blown. This is the first preview of what will eventually become 39613. There are still a lot of rough edges and plenty of areas still under construction, but for your basic needs, it should be relatively stable. Note: You will need the .NET Framework 4 installed to use this version. Below is a status report of where this release is in terms of the overall goal for version 3.
    If you're feeling a bit technically ambitious and want to check out some of the features th...
  - JSON Toolkit: JSON Toolkit 1.1: Updated GetAllJsonObjects() method and GetAllProperties() methods to JsonObject and Properties properties.
  - Facebook Graph Toolkit: Facebook Graph Toolkit 1.0: Refer to http://computerbeacon.net for documentation and tutorial. New features: added FQL support; added Expires property to Api object; added support for publishing to a user's friend / Facebook Page; added support for posting and removing comments on posts; added support for adding and removing likes on posts and comments; added static methods for Page class; added support for Iframe Application Tab of Facebook Page; added support for obtaining the user's country, locale and age in If...
  - ASP.NET MVC Project Awesome, jQuery Ajax helpers (controls): 1.7.1: A rich set of helpers (controls) that you can use to build highly responsive and interactive Ajax-enabled Web applications. These helpers include Autocomplete, AjaxDropdown, Lookup, Confirm Dialog, Popup Form, Popup and Pager. Small improvements for some helpers, and AjaxDropdown now has Data like the Lookup, except its value gets reset and the list refilled if any element from data gets changed.
  - Managed Extensibility Framework: MEF 2 Preview 3: This release targets .NET 4.0 and Silverlight 4.0. Accordingly, there are two solution files. The assemblies are named System.ComponentModel.Composition.Codeplex.dll as a way to avoid clashing with the version shipped with the 4th version of the framework. Introduced CompositionOptions to container instantiation; CompositionOptions.DisableSilentRejection makes MEF throw an exception on composition errors, useful for diagnostics. Support for open generics. Support for attribute-less registr...
  - PHPExcel: PHPExcel 1.7.6 Production: Donations: donate via PayPal. If you want to, we can also add your name / company on our Donation Acknowledgements page. PEAR channel: we now also have a full PEAR channel! Here's how to use it. New installation: pear channel-discover pear.pearplex.net, then pear install pearplex/PHPExcel. Or if you've already installed PHPExcel before: pear upgrade pearplex/PHPExcel. The official page can be found at http://pearplex.net. Want to contribute? Please refer to the Contribute page.
  - WPF Application Framework (WAF): WPF Application Framework (WAF) 2.0.0.4: Version: 2.0.0.4 (Milestone 4): This release contains the source code of the WPF Application Framework (WAF) and the sample applications. Requirements: .NET Framework 4.0 (the package contains a solution file for Visual Studio 2010); the unit test projects require Visual Studio 2010 Professional. Remark: the sample applications are using Microsoft's IoC container MEF. However, the WPF Application Framework (WAF) doesn't force you to use the same IoC container in your application. You can use ...
  - VidCoder: 0.8.2: Updated auto-naming to handle seconds and frames ranges as well. Deprecated the {chapters} token for auto-naming in favor of {range}. Allowing file drag to preview window and enabling main window shortcut keys to work no matter what window is focused. Added option in config to enable giving custom names to audio tracks. (Note that these names will only show up in certain players like iTunes or on the iPod. Players that support custom track names normally may not show them.)
    Added tooltips ...
  - SQL Server Compact Toolbox: Standalone version 2.0 for SQL Server Compact 4.0: Download the Visual Studio add-in for SQL Server Compact 4.0 and 3.5 from here. Standalone version of (most of) the same functionality as the add-in, for SQL Server Compact 4.0. Useful for anyone not having Visual Studio Professional or higher installed. Requires .NET 4.0. Any feedback much appreciated.
  - Claims Based Identity & Access Control Guide: Drop 1 - Claims Identity Guide V2: Highlights of drop #1: this is the first drop of the new "Claims Identity Guide" edition. In this release you will find: all previous samples updated and enhanced; all code upgraded to .NET 4 and Visual Studio 2010; extensive cleanup; refactored Simulated Issuers (each solution now gets its own issuers, which results in much cleaner and simpler to understand code); added Single Sign Out support; added first sample using ACS ("ACS as a Federation Provider"). This sample extends the ori...
  - Simple Notify: Simple Notify Beta 2011-02-25: Features: host the service with a single click in console; host the service as a Windows service; notification client application; push client application; push notifications from your PowerShell script; C# wrapper libraries for your applications.
  - patterns & practices: Project Silk: Project Silk Community Drop 3 - 25 Feb 2011: Introduction: welcome to the third community drop of Project Silk. For this drop we are requesting feedback on overall application architecture, code review of the JavaScript Conductor and Widgets, and general direction of the application. Project Silk provides guidance and sample implementations that describe and illustrate recommended practices for building modern web applications using technologies such as HTML5, jQuery, CSS3 and Internet Explorer 9. This guidance is intended for experien...
  - Minemapper: Minemapper v0.1.5: Now supports the new Minecraft beta v1.3 map format, thanks to updated mcmap. Disabled biomes, until Minecraft Biome Extractor supports the new format.

New Projects

  - Andrea Angella Personal Repository: This is my personal code repository.
  - Audipyme: Web application for the analysis and management of risks based on the ISO 27001 standard.
  - Backfire 4 Umbraco: An implementation of Backfire to work with Umbraco.
  - Badminton: Source code.
  - CarManagement: Car management.
  - CEQuery: CEQuery is useful for managing SQL Server CE databases. It is developed in C# and supports SQL Server CE 3.5 and 4.0.
  - Circuit Diagram: Circuit Diagram enables you to make electronic circuit diagrams and allows them to be exported as images. Ideal for use in coursework, you no longer have to use image editing programs to paste components together.
  - CodeI2iportal: fdfdf
  - Cycling waypoints for Windows Phone 7: Show cycling POI on the Windows Phone 7.
  - d3d test: no summary
  - daemoniq - windows service hosting for mere mortals: Daemoniq provides a layer of abstraction on top of System.ServiceProcess. This allows developers to concentrate on the functionality of their Windows services in .NET by providing functionality such as configuration, deployment and debuggability.
  - DimMock: DimMock is an object mocking framework for .NET Framework 4.0 that enables developers to mock simple and mildly complex classes extremely quickly.
  - euler 11: euler 11 problem.
  - euler 30: euler 30.
  - euler 36 problem: euler 36 problem.
  - Fiverw: Fiverw with ASP.NET.
  - GameList Creator for Wario's Jewel (GameBoy emulator for Windows Phone 7): This application makes it easier to create a game list for mobile emulators like Wario's Jewels, a GameBoy emulator for WP7.
  - Gestion no Conformes: Management of non-conformities.
  - IamV Silverlight player: Silverlight photo album and music player.
  - Icon Manager for Umbraco: Icon Manager for Umbraco - manage the icons within the settings section of Umbraco by adding / removing from an available icon pool.
  - IE9 Extensions 4 Umbraco: A set of IE 9 extensions for Umbraco.
  - ie9ify: A jQuery plugin for adding IE9 features (site pinning, site mode, etc.) to your websites.
  - ImgUR.NET: ImgUr.NET helps the new .NET programmer host images on ImgUr. Developed in Visual Basic 2010, it can be used using only one line of code! You don't even need to parse the XML: ImgUr.NET automagically extracts the URL and gives it to you! You can also get raw XML to parse yourself.
  - Markup Programmability: Markup Programmability extends Blend Interactivity to a full programming language in markup: control flow, expressions, functions, objects, commands, converters, events, and more. Write an MVVM prototype in markup-only or use it for enhanced interactivity.
  - MSCRM 4.0 Appointment Reminders (popup application without needing Outlook!): Want to have your CRM appointments pop up on your desktop but don't have Outlook? Frustrated by the CRM connection always going down and disconnecting itself? Built for local network use.
  - myAspPracticals: My ASP.NET practicals - practical2.
  - NickMaoMix: It is only a sample project in Spring.Net + NHibernate + ASP.NET MVC.
  - Orchard Content Permissions: Orchard module enabling users to permit access to specified content items.
  - Orchard LatestTwitter Module: Display latest Tweets for a user in an Orchard widget.
  - PDF Little Signer: PDF Little Signer is a .NET 3.5 library for self-signing PDF documents. It's very easy to use. It uses iTextSharp.
  - perolas: perolas
  - PGeom2D: PGeom2D is an attempt to derive the project http://livegeometry.codeplex.com/ as a more CAD oriented / desktop application. It should use GDI+ instead of Silverlight / WPF; it should provide layers and blocks features; it should support more export/import formats.
  - Practical4: This is my practical4 web application.
  - Qxado: A .NET framework written by Bill Qian.
  - Reactive Extensions (Rx) Koans: Use this VS2010 project to learn Reactive (Rx) by doing small hacks in prepared lessons. Koans are a great way to get started learning Rx. Included topics are: observable streams, time, events, aggregations, subscriptions.
  - RegularPractice: It is a project where I can upload my own practice and demo projects of ASP.NET.
  - RSVP04: RSVP system.
  - school project: Study guide project.
  - showyourfrustration: This web site is made for fun and enjoyment over the internet.
  - Silverlight Classified Cabinet: The Silverlight Classified Cabinet is a component that makes it easier to show classified items in a nice way. With the power of Silverlight, it lets users zoom and drag the items, categorize them into shelves and see a detailed view.
  - SkilledRESKAT: This module helps SkilledRES with automating student knowledge assessment.
  - Tera X Emulator: Tera emulator based on C#.
  - Test Tasks: Collection of completed test exercises.
  - VG Content Display Web Part: VG Content Display Web Part is an alternative to Content Query Web Part (CQWP). It is easy to set up, uses a custom XSLT file and it works with SharePoint 2010 Foundation. It allows displaying context-related content using current page metadata as filter values.
  - vkPlayer: The standalone player for the VKontakte site.
  - WarMap Downloader: Easy way to download Warcraft 3 maps from the popular epicwar.com server. Hope you enjoy!
  - Web Crawler Sample: Simple solution that shows how NUnit, RhinoMock, Unity 2.0 DI container, and Parallel Extensions can play together. It is not considered a best practice but rather aims to serve as a quick-start solution.
  - WP7Clipboard: WP7Clipboard is a library to mimic a shared clipboard on Windows Phone 7. This enables the user to copy objects from one application and paste them into another without having to use a webservice/etc.

    Read the article

  • Abstracting functionality

    - by Ralf Westphal
Originally posted on: http://geekswithblogs.net/theArchitectsNapkin/archive/2014/08/22/abstracting-functionality.aspx

What is more important than data? Functionality. Yes, I strongly believe we should switch to a functionality-over-data mindset in programming. Or actually switch back to it.

Focus on functionality

Functionality once was at the core of software development. Back then, algorithms were the first thing you heard about in CS classes. Sure, data structures, too, were important - but always from the point of view of algorithms. (Niklaus Wirth gave one of his books the title "Algorithms + Data Structures" instead of "Data Structures + Algorithms" for a reason.) The reason for the focus on functionality? Firstly, because software was and is about doing stuff. Secondly, because sufficient performance was hard to achieve; memory efficiency came only third.

But then hardware became more powerful. That gave rise to a new mindset: object orientation. And with it functionality was devalued. Data took over its place as the most important aspect. Now discussions revolved around structures motivated by data relationships. (John Beidler gave his book the title "Data Structures and Algorithms: An Object Oriented Approach" instead of the other way around for a reason.) Sure, this data could be embellished with functionality. But nevertheless functionality was second.

When you look at (domain) object models, what you mostly find is (domain) data object models. The common object oriented approach is: data aka structure over functionality. This is true even for the most modern modeling approaches like Domain Driven Design. Look at the literature and what you find is recommendations on how to get data structures right: aggregates, entities, value objects. I'm not saying this is what object orientation was invented for. But I'm saying that's what I happen to see across many teams now, some 25 years after object orientation became mainstream through C++, Delphi, and Java.

But why should we switch back? Because software development cannot become truly agile with a data focus. The reason for that lies in what customers need first: functionality, behavior, operations. To be clear, that's not why software is built. The purpose of software is to be more efficient than the alternative. Money mainly is spent to get a certain level of quality (e.g. performance, scalability, security etc.). But without functionality being present, there is nothing to work on the quality of. What customers want is functionality of a certain quality. ASAP. And tomorrow new functionality needs to be added, existing functionality needs to be changed, and quality needs to be increased.

No customer ever wanted data or structures. Of course data should be processed. Data is there, data gets generated, transformed, stored. But how the data is structured for this to happen efficiently is of no concern to the customer. Ask a customer (or user) whether she likes the data structured this way or that way. She'll say, "I don't care." But ask a customer (or user) whether he likes the functionality and its quality this way or that way. He'll say, "I like it" (or "I don't like it").

Build software incrementally

From this very natural focus of customers and users on functionality and its quality it follows that we should develop software incrementally. That's what Agility is about. Deliver small increments quickly and often to get frequent feedback.
That way less waste is produced, and learning can take place much more easily (on the side of the customer as well as on the side of the developers). An increment is some added functionality, or added quality of functionality.[1]

So as it turns out, Agility is about functionality over whatever. But software developers' thinking is still stuck in the object oriented mindset of whatever over functionality. Bummer. I guess that (at least partly) explains why Agility always hits a glass ceiling in projects. It's a clash of mindsets, of cultures. Driving software development by demanding small increases in functionality runs against thinking about software as growing (data) structures sprinkled with functionality. (Excuse me if this sounds a bit broad-brush. But you get my point.)

The need for abstraction

In the end there need to be data structures. Of course. Small and large ones. The phrase functionality over data does not deny that. It's not functionality instead of data or something. It's just over, i.e. functionality should be thought of first. It's a tad more important. It's what the customer wants.

That's why we need a way to design functionality. Small and large. We need to be able to think about functionality before implementing it. We need to be able to reason about it among team members. We need to be able to communicate our mental models of functionality not just by speaking about them, but also on paper. Otherwise reasoning about it does not scale.

We learned thinking about functionality in the small using flow charts, Nassi-Shneiderman diagrams, pseudo code, or UML sequence diagrams. That's nice and well. But it does not scale. You can use these tools to describe manageable algorithms. But it does not work for the functionality triggered by pressing the "1-Click Order" button on an amazon product page, for example.

There are several reasons for that, I'd say. Firstly, the level of abstraction over code is negligible. It's essentially non-existent. Drawing a flow chart, writing pseudo code, or writing actual code are very, very much alike. All these tools are about control flow, like code is.[2] In addition, all these tools are computationally complete: they are about logic, which is expressions and especially control statements. Whatever you code in Java you can fully (!) describe using a flow chart.

And then there is no data. These tools are about control flow and leave out the data altogether. Thus data mostly is assumed to be global. That's shooting yourself in the foot, as I hope you agree. Even if it's functionality over data, that does not mean "don't think about data". Right to the contrary! Functionality only makes sense with regard to data. So data needs to be in the picture right from the start - but it must not dominate the thinking. The above tools fail on this.

Bottom line: so far we're unable to reason in a scalable and abstract manner about functionality. That's why programmers are so driven to start coding once they are presented with a problem. Programming languages are the only tool they've learned to use to reason about functional solutions.

Or, well, there might be exceptions. Mathematical notation and SQL may have come to your mind already. Indeed they are tools on a higher level of abstraction than flow charts etc. That's because they are declarative and not computationally complete. They leave out details - in order to deliver higher efficiency in devising overall solutions. We can easily reason about functionality using mathematics and SQL. That's great.
Except that they are domain specific languages. They are not general purpose. (And they don't scale either, I'd say.) Bummer.

So, to be more precise, we need a scalable general purpose tool on a higher-than-code level of abstraction that does not neglect data. Enter: Flow Design.

Abstracting functionality using data flows

I believe the solution to the problem of abstracting functionality lies in switching from control flow to data flow. Data flow very naturally is not about logic details anymore. There are no expressions and no control statements anymore. There are not even statements anymore. Data flow is declarative by nature. With data flow we get rid of all the limiting traits of former approaches to modeling functionality. In addition, nomen est omen, data flows include data in the functionality picture. With data flows, data is visibly flowing from processing step to processing step. Control is not flowing. Control is wherever it's needed to process data coming in. That's a crucial difference and needs some rewiring in your head to be fully appreciated.[2]

Since data flows are declarative, they are not the right tool to describe algorithms, though, I'd say. With them you don't design functionality on a low level. During design, data flow processing steps are black boxes. They get fleshed out during coding. Data flow design thus is more coarse grained than flow chart design. It starts on a higher level of abstraction - but then is not limited. By nesting data flows indefinitely you can design functionality of any size, without losing sight of your data. Data flows scale very well during design. They can be used on any level of granularity. And they can easily be depicted. Communicating designs using data flows is easy, and scales well, too.

The result of functional design using data flows is not algorithms (too low level), but processes. Think of data flows as descriptions of industrial production lines. Data as material runs through a number of processing steps to be analyzed, enhanced, transformed. On the top level of a data flow design there might be just one processing step, e.g. "execute 1-click order". But below that are arbitrary levels of flows with smaller and smaller steps. That's not layering as in "layered architecture", though. Rather it's a stratified design à la Abelson/Sussman.

Refining data flows is not your grandpa's functional decomposition. That was rooted in control flows. Refining data flows does not suffer from the limits of functional decomposition against which object orientation was supposed to be an antidote.

Summary

I've been working exclusively with data flows for functional design for the past 4 years. It has changed my life as a programmer. What once was difficult is now easy. And, no, I'm not using Clojure or F#. And I'm not an async/parallel execution buff.

Designing the functionality of increments using data flows works great with teams. It produces design documentation which can easily be translated into code - in which then the smallest data flow processing steps have to be fleshed out - which is comparatively easy. Using a systematic translation approach, code can mirror the data flow design. That way, later on, the design can easily be reproduced from the code if need be.

And finally, data flow designs play well with object orientation. They are a great starting point for class design. But that's a story for another day. To me, data flow design simply is one of the missing links of systematic lightweight software design.
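To give a rough feel for what "code can mirror the data flow design" might look like, here is a tiny sketch in C#. The flow, the step names, and the data types are invented for illustration; this is not the author's own notation or any particular framework, just a minimal rendering of nested processing steps under those assumptions:

    using System;

    // A data flow rendered as plain code: each processing step takes data in
    // and hands data out; control lives inside the steps, not between them.
    // All names and types here are hypothetical.
    class OneClickOrderFlow
    {
        // Top-level flow: "execute 1-click order", refined into smaller steps.
        public static Confirmation Execute(OrderRequest request)
        {
            OrderRequest validated = Validate(request);
            PricedOrder priced = Price(validated);
            PlacedOrder placed = Place(priced);
            return Notify(placed);
        }

        // During design these steps are black boxes; they get fleshed out
        // during coding, or refined into nested flows of their own.
        static OrderRequest Validate(OrderRequest r) { return r; }
        static PricedOrder Price(OrderRequest r) { return new PricedOrder(); }
        static PlacedOrder Place(PricedOrder o) { return new PlacedOrder(); }
        static Confirmation Notify(PlacedOrder o) { return new Confirmation(); }
    }

    class OrderRequest { }
    class PricedOrder { }
    class PlacedOrder { }
    class Confirmation { }

The point is only the shape: data flows top to bottom through Execute, and each step can be replaced by a nested flow without its callers noticing.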
[1] There are also other artifacts software development can produce to get feedback, e.g. process descriptions, test cases. But customers can be delighted more easily with code-based increments in functionality.

[2] No, I'm not talking about the endless possibilities this opens for parallel processing. Data flows are useful independently of multi-core processors and Actor-based designs. That's my whole point here. Data flows are good for reasoning and evolvability. So forget about any special frameworks you might need to reap benefits from data flows. None are necessary. Translating data flow designs even into plain old Java is possible.

    Read the article
