Search Results

Search found 8687 results on 348 pages for 'per ersson'.


  • Making firefox refresh images faster

    - by Earlz
    I need a web page to stream a series of images from the local client computer. There is a very simple example running here: http://jsbin.com/idowi/34 and the code is equally simple:

        setTimeout("refreshImage()", 100);
        function refreshImage() {
            var date = new Date();
            var ticks = date.getTime();
            $('#image').attr('src', 'http://127.0.0.1:2723/signature?' + ticks.toString());
            setTimeout("refreshImage()", 100);
        }

    Basically, a signature pad is attached to the client machine. We want the signature to show up in the web page so people can watch themselves signing it (the pad does not have an LCD to show it to them right there). So I set up a simple local HTTP server that grabs an image of the current state of the signature pad and sends it to the browser. This works fine in every browser I tested (IE7, IE8 and Chrome) except Firefox, where it is extremely laggy and jumpy and does not keep up with the 10 FPS rate. Does anyone have any ideas on how to fix this? I've tried creating very simple double buffering in JavaScript, but that just made things worse. For a bit more information: Firefox seems to execute the JavaScript at the correct frame rate, since the requests arrive at the server at a constant speed, but the images are refreshed inconsistently, ranging from 5 times per second all the way down to 0 times per second (taking 2 seconds to do a refresh). I have also tried different image formats, all with the same results; the formats include bitmaps, PNGs and GIFs (though GIFs caused a minor flicker problem in Chrome). Could it be that Firefox is somehow caching my images, causing the lag? I already send these headers:

        Pragma-directive: no-cache
        Cache-directive: no-cache
        Cache-control: no-cache
        Pragma: no-cache
        Expires: 0
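
    The local image endpoint in the question is a custom server whose implementation is not shown. As a hedged illustration only, here is a minimal Java servlet sketch (the class name, URL path and capture helper are invented for this example) that serves a freshly captured image with the standard no-store headers, which is usually enough to stop a browser from caching a polled image URL:

        import java.io.IOException;
        import java.io.OutputStream;
        import javax.servlet.http.HttpServlet;
        import javax.servlet.http.HttpServletRequest;
        import javax.servlet.http.HttpServletResponse;

        // Hypothetical endpoint mirroring http://127.0.0.1:2723/signature
        public class SignatureImageServlet extends HttpServlet {
            @Override
            protected void doGet(HttpServletRequest req, HttpServletResponse resp)
                    throws IOException {
                byte[] png = captureSignaturePng(); // assumption: pad capture lives elsewhere

                // Standard anti-caching headers; HTTP/1.1 clients honour Cache-Control.
                resp.setHeader("Cache-Control", "no-store, no-cache, must-revalidate");
                resp.setHeader("Pragma", "no-cache");
                resp.setDateHeader("Expires", 0);

                resp.setContentType("image/png");
                resp.setContentLength(png.length);
                try (OutputStream out = resp.getOutputStream()) {
                    out.write(png);
                }
            }

            // Placeholder: in the real app this would read the signature pad driver.
            private byte[] captureSignaturePng() {
                return new byte[0];
            }
        }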

    Read the article

  • Twitter Bootstrap: how to put unknown number of span* within a row-fluid?

    - by StackOverflowNewbie
    Assume I have the following nesting:

        <div class="container-fluid">
            <div class="row-fluid">
                <div class="span3">
                    <!-- left sidebar here -->
                </div>
                <div class="span9">
                    <!-- main content here -->
                </div>
            </div>
        </div>

    I'd like to put an unknown number of <div class="span3"></div> elements in the main content area. (Each span3 is supposed to contain a product photo, name, price, etc.) My aim, of course, is for this to be responsive. So I might display 20 products: 5 products per "row" on a wide screen, then 4 products per "row" on a slightly less wide screen, then 3, then 2, then 1. For example (each X represents a product):

        Wide Screen
        row 1: X X X X X
        row 2: X X X X X
        row 3: X X X X X
        row 4: X X X X X

        Less Wide Screen
        row 1: X X X X
        row 2: X X X X
        row 3: X X X X
        row 4: X X X X
        row 5: X X X X

        Even Less Wide Screen
        row 1: X X X
        row 2: X X X
        row 3: X X X
        row 4: X X X
        row 5: X X X
        row 6: X X X
        row 7: X X

    It seems like I need nested rows. However, if I do that, I can only fit a certain number of products in each nested row, which causes problems as the screen width decreases, for example (each X represents a product):

        Wide Screen
        row 1: X X X X X
        row 2: X X X X X
        row 3: X X X X X

        Less Wide Screen
        row 1: X X X X X
        row 2: X X X X X
        row 3: X X X X X

    How do I do what I want to do in Twitter Bootstrap?

    Read the article

  • How to store and collect data for mining such information as most viewed for last 24 hours, last 7 d

    - by Kirzilla
    Hello. Let's imagine we have a high-traffic project (a tube site) which should provide sorting on the following options (NOT in real time). The number of videos is about 200K, all information about the videos is stored in MySQL, and the number of daily video views is about 1.5 million. The instruments we have are the hard disk drive (text files), MySQL and Redis. The views we need are:

        top viewed
        top viewed last 24 hours
        top viewed last 7 days
        top viewed last 30 days
        top rated last 365 days

    How should I store such information? The first idea is to log all visits to text files (a single file per hour, for example visits_20080101_00.log). At the beginning of each hour, calculate the views per video for the previous hour and insert this information into MySQL, then recalculate the totals (for the last 24 hours) and update the statistics tables. At the beginning of every day we have to do the same, but recalculate for the last 7 days, the last 30 days and the last 365 days. This method seems very poor to me because we have to keep information about the last 365 days for each video to make the calculations correct. Are there any better methods? Perhaps we have to choose other instruments for this? Thank you.
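
    As a hedged illustration of the hourly-bucket scheme described above (not a recommendation for any particular storage engine): each view increments a per-video counter for the current hour, and a "top viewed last N hours" query sums the most recent N buckets, so nothing older than the largest window ever has to be re-read. The class and method names are invented, and the HashMap stands in for whatever Redis or MySQL structure would actually hold the buckets:

        import java.time.Instant;
        import java.time.temporal.ChronoUnit;
        import java.util.HashMap;
        import java.util.Map;

        // Illustrative only: hourly view buckets kept in memory.
        public class HourlyViewCounter {
            // key = hour bucket (epoch hours), value = views per video id
            private final Map<Long, Map<Long, Long>> buckets = new HashMap<>();

            private long currentHour() {
                return Instant.now().truncatedTo(ChronoUnit.HOURS).getEpochSecond() / 3600;
            }

            public void recordView(long videoId) {
                buckets.computeIfAbsent(currentHour(), h -> new HashMap<>())
                       .merge(videoId, 1L, Long::sum);
            }

            // Sum the last `hours` buckets, e.g. 24 for "top viewed last 24 hours".
            public Map<Long, Long> viewsForLastHours(int hours) {
                Map<Long, Long> totals = new HashMap<>();
                long now = currentHour();
                for (long h = now - hours + 1; h <= now; h++) {
                    Map<Long, Long> bucket = buckets.get(h);
                    if (bucket == null) continue;
                    bucket.forEach((video, views) -> totals.merge(video, views, Long::sum));
                }
                return totals;
            }
        }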

    Read the article

  • hash fragments and collisions cont.

    - by Mark
    For this application of mine, I feel like I can get away with a 40-bit hash key, which seems awfully low, but see if you can confirm my reasoning (I want a small key because I want a small filename, and the key will be converted to a filename). (Note: only accidental collisions are a concern, no security issues.) A key point here is that the population in question is divided into groups, and a collision is only relevant if it occurs within the same group. A "group" is a directory on a user's system (the contents of files are hashed, and a collision only matters if it occurs for files within the same directory). Speculating roughly 100,000 potential users, say 2^17, that corresponds to 2^18 "groups", assuming 2 directories per user on average. So with a 40-bit key I can expect 2^(20+9) files created (among all users) before a collision occurs for some user somewhere. (Or, in other words, 2^((40+18)/2), due to the "birthday effect".) That's an average of 4096 unique files created per user, for 2^17 users, before a single collision occurs for some user somewhere. And then that long again before another collision occurs somewhere (right?)
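
    As a rough check of the reasoning above (a sketch, not a proof): treat each of the 2^18 directories as an independent birthday problem over a space of 2^40 hash values. The expected number of files before the first collision anywhere then lands near the poster's 2^29 figure:

        % Probability of at least one collision with n files spread over g groups,
        % each hash drawn uniformly from d = 2^{40} values (n/g files per group):
        P(\text{collision}) \approx 1 - \exp\!\left(-\,g \cdot \frac{(n/g)^2}{2d}\right)
                            = 1 - \exp\!\left(-\,\frac{n^2}{2\,g\,d}\right)

        % Setting the exponent to 1/2 (collision probability about 0.39) gives the usual scale:
        n \approx \sqrt{g \cdot d} = \sqrt{2^{18} \cdot 2^{40}} = 2^{29}
        \quad\Rightarrow\quad 2^{29} / 2^{17}\ \text{users} = 2^{12} = 4096\ \text{files per user.}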

    Read the article

  • Project management software, available options

    - by canni
    Hey, sorry for posting this here. I know this question better suits SuperUser, but I would like answers from a developer's point of view. I have been using Indefero for project management etc. for some time, but I found that Indefero's limitations are too big for my team. I'm searching for project-management software that best suits these needs:

        Open source, but I can consider commercial apps
        Git integration is mandatory, ideally with support for multiple repos per project
        Time tracking; good if it can have a Gantt chart connected with issues etc.
        Issue, milestone and task tracking
        Good if it can be integrated with Gitosis, or has similar repository access control
        It must be possible to set it up on our own server
        Markdown syntax support is mandatory (or an easy way to install a plugin for it)
        Issue tagging will be an advantage
        It will be used by the developer team 99% of the time, but it has to have a simple interface so that clients can file bug reports etc. per project

    It does not have to meet all of these needs, but good if it can :) What options do you know of, and can recommend?

    Read the article

  • SQL: Limit rows linked to each joined row

    - by SolidSnakeGTI
    Hello, I have a situation that requires a certain result set from a MySQL query. Let's look at the current query first; my questions follow:

        SELECT thread.dateline AS tdateline, post.dateline AS pdateline, MIN(post.dateline)
        FROM thread AS thread
        LEFT JOIN post AS post ON (thread.threadid = post.threadid)
        LEFT JOIN forum AS forum ON (thread.forumid = forum.forumid)
        WHERE post.postid != thread.firstpostid
          AND thread.open = 1
          AND thread.visible = 1
          AND thread.replycount >= 1
          AND post.visible = 1
          AND (forum.options & 1)
          AND (forum.options & 2)
          AND (forum.options & 4)
          AND forum.forumid IN (1,2,3)
        GROUP BY post.threadid
        ORDER BY tdateline DESC, pdateline ASC

    As you can see, I mainly need to select the dateline of threads from the 'thread' table, plus the dateline of the second post of each thread, all under the conditions you see in the WHERE clause. Since each thread has many posts and I need only one result per thread, I've used the GROUP BY clause for that purpose. This query returns only one post's dateline with its related unique thread. My questions are:

        How do I limit the returned threads per forum? Suppose I need a maximum of 5 threads returned for each forum declared in the WHERE clause 'forum.forumid IN(1,2,3)'; how can this be achieved?
        Are there any recommendations for optimizing this query (of course, after solving the first point)?

    Notes: I prefer not to use sub-queries, but if that's the only solution available I'll accept it. Double queries are not recommended. I'm sure there's a smart solution for this situation. I'm using MySQL 4.1+, but if you know the answer for another engine, just share. Appreciated advice in advance :)
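
    The question itself is left open above. As a hedged sketch only, here is one common "top N per group" pattern that works on older MySQL versions: a correlated subquery keeps a thread only if fewer than 5 threads in the same forum have a newer dateline. The poster prefers to avoid sub-queries, so treat this purely as an illustration of the pattern; the JDBC wrapper, simplified column list and connection string are assumptions, and the interesting part is the SQL in the string:

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.ResultSet;
        import java.sql.Statement;

        public class TopThreadsPerForum {
            public static void main(String[] args) throws Exception {
                // Assumed connection details; adjust for the real schema.
                try (Connection con = DriverManager.getConnection(
                         "jdbc:mysql://localhost/forumdb", "user", "password");
                     Statement st = con.createStatement()) {

                    // Keep a thread only if fewer than 5 newer threads exist in its forum.
                    String sql =
                        "SELECT t.threadid, t.forumid, t.dateline " +
                        "FROM thread AS t " +
                        "WHERE t.forumid IN (1,2,3) " +
                        "  AND ( SELECT COUNT(*) FROM thread AS newer " +
                        "        WHERE newer.forumid = t.forumid " +
                        "          AND newer.dateline > t.dateline ) < 5 " +
                        "ORDER BY t.forumid, t.dateline DESC";

                    try (ResultSet rs = st.executeQuery(sql)) {
                        while (rs.next()) {
                            System.out.printf("forum %d thread %d%n",
                                rs.getLong("forumid"), rs.getLong("threadid"));
                        }
                    }
                }
            }
        }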

    Read the article

  • Fake location for the device (custom)

    - by AtomRiot
    I know there are a few apps out there to fake a device's location, but specifically I want to use a location grabbed from a URL. What direction should I look in for setting the location on the device? The scenario I have is a jailbroken Wi-Fi iPad tethered to a Nexus One. The Nexus One would host a background service that, when a request is received, responds with GPS data for its current location. The jailbroken iPad would have a background service that updates the location either on a time interval or on a per-request basis (depending on how I have to implement it) by submitting a request to the tethered Nexus One's service. That data would then be set on the iPad, and any application requesting location would get the service's data. The goal is to recreate the location faker app's functionality, except that the spoofed location comes from the Nexus One's GPS via the service; but I have not yet found out how to set the location data for the device. I can of course implement this on a per-app basis, but it would be awesome for any app to be able to use it.

    Read the article

  • Merging rows with uniqueness constraints

    - by Flambino
    I've got a little time-tracking web app (implemented in Rails 3.2.8 and MySQL). The app has several users who add their time to specific tasks, on a given date. The system is set up so a user can only have 1 time entry (i.e. row) per task per date; if you add time twice to the same task and date, it adds time to the existing row rather than creating a new one. Now I'm looking to merge 2 tasks. In the simplest terms, merging task ID 2 into task ID 1 would take this:

        time | user_id | task_id | date
        -----+---------+---------+-----------
          10 |       1 |       1 | 2012-10-29
          15 |       2 |       1 | 2012-10-29
          10 |       1 |       2 | 2012-10-29
           5 |       3 |       2 | 2012-10-29

    and change it into this:

        time | user_id | task_id | date
        -----+---------+---------+-----------
          20 |       1 |       1 | 2012-10-29   <-- time values merged (summed)
          15 |       2 |       1 | 2012-10-29   <-- no change
           5 |       3 |       1 | 2012-10-29   <-- task_id changed (no merging necessary)

    I.e. merge by summing the time values where the given user/date/task combination would otherwise conflict. I figure I can use a unique constraint and an ON DUPLICATE KEY UPDATE ... if I do an insert for every task_id=2 entry, but that seems pretty inelegant. I've also tried to figure out a way to first update all the rows in task 1 with the summed-up times, but I can't quite work that out. Any ideas?
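
    A hedged sketch of one way to perform the merge described above in MySQL, assuming a table named time_entries and a unique index on (user_id, task_id, date), both of which are guesses about the schema: re-insert task 2's rows as task 1 rows, let ON DUPLICATE KEY UPDATE sum the time where a task 1 row already exists, then delete the task 2 rows. Shown wrapped in a small JDBC transaction so both statements succeed or fail together:

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.Statement;

        public class MergeTasks {
            public static void main(String[] args) throws Exception {
                try (Connection con = DriverManager.getConnection(
                        "jdbc:mysql://localhost/timetracker", "user", "password")) {
                    con.setAutoCommit(false);
                    try (Statement st = con.createStatement()) {
                        // Fold task 2 rows into task 1, summing `time` on key conflicts.
                        st.executeUpdate(
                            "INSERT INTO time_entries (time, user_id, task_id, date) " +
                            "SELECT time, user_id, 1, date FROM time_entries WHERE task_id = 2 " +
                            "ON DUPLICATE KEY UPDATE time = time + VALUES(time)");

                        // Remove the now-merged task 2 rows.
                        st.executeUpdate("DELETE FROM time_entries WHERE task_id = 2");
                        con.commit();
                    } catch (Exception e) {
                        con.rollback();
                        throw e;
                    }
                }
            }
        }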

    Read the article

  • Clustered Graphs Visualization Techniques

    - by jameszhao00
    I need to visualize a relatively large graph (6K nodes, 8K edges) that has the following properties: Distinct Clusters. Approximately 50-100 Nodes per cluster and moderate interconnectivity at the cluster level Minimal (5-10 inter-cluster edges per cluster) interconnectivity between clusters Let global edge overlap = The edge overlaps caused by directly visualizing a graph of Clusters = {A, B, C, D, E}, Edges = {Pentagram of those clusters, which is non-planar by the way and will definitely generate edge overlap if you draw it out directly} Let Local Edge Overlap = the above but { A, B, C, D, E } are just nodes. I need to visualize graphs with the above in a way that satisfies the following requirements No global edge overlap (i.e. edge overlaps caused by inter-cluster properties is not okay) Local edge overlap within a cluster is fine Anyone have thoughts on how to best visualize a graph with the requirements above? One solution I've come up with to deal with the global edge overlap is to make sure a cluster A can only have a max of 1 direct edge to another cluster (B) during visualization. Any additional inter-cluster edges between cluster A - C, A - D, ... are disconnected and additional node/edges A - A_C, C - C_A, A - A_D, D - D_A... are created. Anyone have any thoughts?

    Read the article

  • Automatically Persisting a Complex Java Object

    - by VeeArr
    For a project I am working on, I need to persist a number of POJOs to a database. The POJO class definitions are sometimes highly nested, but they should flatten okay, as the nesting is tree-like and contains no cycles (and the base elements are eventually primitives/Strings). The preferred solution would create one table per data type, with one field per primitive member in the POJO. Subclassing and similar problems are not issues for this particular project. Does anybody know of existing solutions that can:

        Automatically generate a CREATE TABLE definition from the class definition
        Automatically generate a query to persist an object to the database, given an instance of the object
        Automatically generate a query to retrieve an object from the database and return it as a POJO, given a key

    Solutions that can do this with minimal modifications/annotations to the class files and minimal external configuration are preferred. Example Java classes:

        // Class to be persisted
        class TypeA {
            String guid;
            long timestamp;
            TypeB data1;
            TypeC data2;
        }

        class TypeB {
            int id;
            int someData;
        }

        class TypeC {
            int id;
            int otherData;
        }

    could map to

        CREATE TABLE TypeA (
            guid CHAR(255),
            timestamp BIGINT,
            data1_id INT,
            data1_someData INT,
            data2_id INT,
            data2_otherData INT
        );

    or something similar.
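
    A minimal, hedged sketch of the first requirement only (generating a CREATE TABLE from a class via reflection), flattening nested POJO fields with an underscore prefix as in the example above. The type mapping is deliberately tiny and the class and method names are invented; real-world answers here would be an ORM such as Hibernate/JPA, but the sketch shows the mechanics:

        import java.lang.reflect.Field;

        public class TableGenerator {
            // Very small Java-to-SQL type mapping; extend as needed.
            private static String sqlType(Class<?> t) {
                if (t == int.class) return "INT";
                if (t == long.class) return "BIGINT";
                if (t == String.class) return "CHAR(255)";
                throw new IllegalArgumentException("unmapped type: " + t);
            }

            private static boolean isScalar(Class<?> t) {
                return t.isPrimitive() || t == String.class;
            }

            // Recursively flatten fields: nested POJOs become prefix_field columns.
            private static void appendColumns(Class<?> type, String prefix, StringBuilder out) {
                for (Field f : type.getDeclaredFields()) {
                    String name = prefix.isEmpty() ? f.getName() : prefix + "_" + f.getName();
                    if (isScalar(f.getType())) {
                        out.append("    ").append(name).append(' ')
                           .append(sqlType(f.getType())).append(",\n");
                    } else {
                        appendColumns(f.getType(), name, out);
                    }
                }
            }

            public static String createTableFor(Class<?> type) {
                StringBuilder sb = new StringBuilder("CREATE TABLE " + type.getSimpleName() + " (\n");
                appendColumns(type, "", sb);
                sb.setLength(sb.length() - 2);          // drop the trailing ",\n"
                return sb.append("\n);").toString();
            }
        }

    Calling TableGenerator.createTableFor(TypeA.class) on the classes above would produce DDL equivalent to the CREATE TABLE shown in the question.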

    Read the article

  • Creating 3rd party library in Android. Need suggestions!!

    - by Lalith
    Hello all, I am planning to create a 3rd-party library for Android application developers. The main concern is that I do not want users to make many changes to their code base. The following are the main things I want to clarify before diving into the project.

    1) I want to attach my own event handlers dynamically to the application. Given the limitation I mentioned above, I cannot ask users to put the relevant code in their handlers, as I need to support existing applications as well. I found out that we cannot attach more than one event handler per listener. For example, Button.setOnClickListener(somelistener) will set one listener. In traditional desktop applications we can add more than one handler, but here calling Button.setOnClickListener(someOtherListener) again overwrites the previous listener. Is there any way I can attach my own listener alongside the application's existing listeners dynamically? Also, is this really not supported on Android yet?

    2) Is there an existing 3rd-party alternative that monitors users' interactions on Android?

    3) If this is not supported now, are there any future plans to support this feature?

    Please let me know, as I did not get much feedback on Stack Overflow. Any suggestions and workarounds would really help me continue with this project. Regards, Lalith
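
    The Android SDK does not expose a public getter for a view's currently attached click listener, so a library in this position usually asks the host app to register through a wrapper (or resorts to reflection). As a hedged sketch only, the hypothetical class below shows the wrapper idea: forward each click to both the library's handler and the app's original listener:

        import android.view.View;

        // Hypothetical wrapper: forwards clicks to the app's original listener
        // and to the library's own handler, so both run on every click.
        public class WrappingClickListener implements View.OnClickListener {
            private final View.OnClickListener original;       // may be null
            private final View.OnClickListener libraryHandler;

            public WrappingClickListener(View.OnClickListener original,
                                         View.OnClickListener libraryHandler) {
                this.original = original;
                this.libraryHandler = libraryHandler;
            }

            @Override
            public void onClick(View v) {
                libraryHandler.onClick(v);   // library-side tracking first
                if (original != null) {
                    original.onClick(v);     // then the app's own behaviour
                }
            }
        }

    The hard part the question raises, obtaining the already-registered listener without touching the app's code, is not solved by this sketch; it only shows how two handlers can coexist once the library is in the registration path.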

    Read the article

  • Adding an element to a multidimensional array

    - by stef
    How can I loop through the array below and add an element to each inner array, with key "url_slug" and value "foo"? I tried array_push but that gets rid of the key names (it seems?). Doing a foreach($array as $k => $v) doesn't do it either, I think. The new array should be exactly the same, only with 4 elements per inner array instead of 3, using the key/value above.

        Array
        (
            [0] => Array
                (
                    [name_en] => Test 5
                    [url_name_nl] => test-5
                    [cat_name] => mobile
                )
            [1] => Array
                (
                    [name_en] => Test 10
                    [url_name_nl] => test-10
                    [cat_name] => mobile
                )
            [2] => Array
                (
                    [name_en] => Test 25
                    [url_name_nl] => test-25
                    [cat_name] => mobile
                )
        )

    EDIT: full working solution, a little more complex than originally described:

        foreach ($prods as $key => &$value) {
            if ($key == "cat_name") $slug = $value['cat_name'];
            $url_slug = $this->lang->line($slug);
            $value['url_slug'] = $url_slug;
        }

    Read the article

  • Why is this giving me an infinite loop?

    - by Chase Yuan
    I was going through code used to calculate how long an investment takes to double, and I get an infinite loop that I can't seem to solve. Can anyone figure out why this gives an infinite loop? I've gone through it myself but I can't find the problem. The "period" referred to is how many times per year the interest is compounded.

        double account = 0;   // declares the variables to be used
        double base = 0;
        double interest = 0;
        double rate = 0;
        double result = 0;
        double times = 0;
        int years = 0;
        int j;

        System.out.println("This is a program that calculates interest.");
        Scanner kbReader = new Scanner(System.in);   // enters in all data
        System.out.print("Enter account balance: ");
        account = kbReader.nextDouble();
        System.out.print("Enter interest rate (as decimal): ");
        rate = kbReader.nextDouble();
        System.out.println("    " + "Years to double" + "    " + "Ending balance");
        base = account;
        result = account;

        for (j = 0; j < 3; j++) {
            System.out.print("Enter period: ");
            times = kbReader.nextDouble();
            while (account < base * 2) {
                interest = account * rate / times;
                account = interest + base;
                years++;
            }
            account = (((int) (account * 100)) / 100.0);
            // results
            System.out.print("    " + i + "    " + account + "\n");
            account = result;
        }

    The code should ask for three "periods", i.e. three different numbers of times the entered data is compounded per year (e.g. annually, monthly, daily, etc.). Thanks a lot!
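
    The snippet above is reproduced as posted, bugs included. As a hedged reading (not the original author's intent, just one interpretation): the loop never terminates because account = interest + base resets the balance every pass instead of compounding it, so unless the per-period rate is at least 50% the balance converges below double and the condition never becomes false; i is also undefined and years is never reset between periods. A corrected, self-contained sketch of the same calculation might look like this:

        import java.util.Scanner;

        public class DoublingTime {
            public static void main(String[] args) {
                Scanner in = new Scanner(System.in);
                System.out.print("Enter account balance: ");
                double base = in.nextDouble();
                System.out.print("Enter interest rate (as decimal): ");
                double rate = in.nextDouble();
                System.out.print("Enter periods per year: ");
                double times = in.nextDouble();

                double account = base;
                int periods = 0;
                while (account < base * 2) {
                    account += account * rate / times;   // compound, don't reset
                    periods++;
                }
                double years = periods / times;          // convert periods to years
                double rounded = Math.floor(account * 100) / 100.0;
                System.out.println("Years to double: " + years
                        + "    Ending balance: " + rounded);
            }
        }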

    Read the article

  • Foreign-key-like merge in R

    - by skyl
    I'm merging a bunch of csv with 1 row per id/pk/seqn:

        > full = merge(demo, lab13am, by="seqn", all=TRUE)
        > full = merge(full, cdq, by="seqn", all=TRUE)
        > full = merge(full, mcq, by="seqn", all=TRUE)
        > full = merge(full, cfq, by="seqn", all=TRUE)
        > full = merge(full, diq, by="seqn", all=TRUE)
        > print(length(full$ridageyr))
        [1] 9965
        > print(summary(full$ridageyr))
           Min. 1st Qu.  Median    Mean 3rd Qu.    Max.
           0.00   11.00   19.00   29.73   48.00   85.00

    Everything is great. But I have another file which has multiple rows per id, like:

        "seqn","rxd030","rxd240b","nhcode","rxq250"
        56,2,"","",NA,NA,""
        57,1,"ACETAMINOPHEN","01200",2
        57,1,"BUDESONIDE","08800",1
        58,1,"99999","",NA

    57 has two rows. So, if I naively try to merge this file, I have a ton more rows and my data gets all skewed up:

        > full = merge(full, rxq, by="seqn", all=TRUE)
        > print(length(full$ridageyr))
        [1] 15643
        > print(summary(full$ridageyr))
           Min. 1st Qu.  Median    Mean 3rd Qu.    Max.
           0.00   14.00   41.00   40.28   66.00   85.00

    Is there a normal idiomatic way to deal with data like this? Suppose I want a way to make a simple model like

        MYSPECIAL_FACTOR <- somehow()
        glm(MYSPECIAL_FACTOR ~ full$ridageyr, family=binomial)

    where MYSPECIAL_FACTOR is, say, whether or not rxd240b == "ACETAMINOPHEN" for the observations which are unique by seqn. You can reproduce by running the first bit of this.

    Read the article

  • Prevent cached objects to end up in the database with Entity Framework

    - by Dirk Boer
    We have an ASP.NET project with Entity Framework and SQL Azure. A big part of our data only needs to be updated a few times a day; other data is very volatile. The data that barely changes we cache in memory at startup, detach from the context and then use mainly for reading, drastically lowering the number of database requests we have to make. The volatile data is requested every time by a DbContext per HTTP request. When we update the cached data, we send a message to all instances to fetch a fresh version of all the data from the SQL server. So far, so good. Until we introduced a bug that linked one of these 'cached' objects to the 'volatile' data and did a SaveChanges. Well, that was quite a mess. The whole data tree was added again and again by every update, corrupting the whole database with a lot of duplicated data. As a complete hack, I added a completely arbitrary column with a UniqueConstraint and some gibberish data on one of the root tables, hopefully failing the SaveChanges() the next time we introduce such a bug because it will violate the unique constraint. But it is of course hacky, and I'm still pretty scared ;P Are there any better ways to prevent whole trees of cached objects from ending up in the database? More information: the project is ASP.NET MVC. I cache this data because it is mainly read-only, and it saves a ton of extra database calls per HTTP request.

    Read the article

  • Differences Between NHibernate and Entity Framework

    - by Ricardo Peres
    Introduction NHibernate and Entity Framework are two of the most popular O/RM frameworks on the .NET world. Although they share some functionality, there are some aspects on which they are quite different. This post will describe this differences and will hopefully help you get started with the one you know less. Mind you, this is a personal selection of features to compare, it is by no way an exhaustive list. History First, a bit of history. NHibernate is an open-source project that was first ported from Java’s venerable Hibernate framework, one of the first O/RM frameworks, but nowadays it is not tied to it, for example, it has .NET specific features, and has evolved in different ways from those of its Java counterpart. Current version is 3.3, with 3.4 on the horizon. It currently targets .NET 3.5, but can be used as well in .NET 4, it only makes no use of any of its specific functionality. You can find its home page at NHForge. Entity Framework 1 came out with .NET 3.5 and is now on its second major version, despite being version 4. Code First sits on top of it and but came separately and will also continue to be released out of line with major .NET distributions. It is currently on version 4.3.1 and version 5 will be released together with .NET Framework 4.5. All versions will target the current version of .NET, at the time of their release. Its home location is located at MSDN. Architecture In NHibernate, there is a separation between the Unit of Work and the configuration and model instances. You start off by creating a Configuration object, where you specify all global NHibernate settings such as the database and dialect to use, the batch sizes, the mappings, etc, then you build an ISessionFactory from it. The ISessionFactory holds model and metadata that is tied to a particular database and to the settings that came from the Configuration object, and, there will typically be only one instance of each in a process. Finally, you create instances of ISession from the ISessionFactory, which is the NHibernate representation of the Unit of Work and Identity Map. This is a lightweight object, it basically opens and closes a database connection as required and keeps track of the entities associated with it. ISession objects are cheap to create and dispose, because all of the model complexity is stored in the ISessionFactory and Configuration objects. As for Entity Framework, the ObjectContext/DbContext holds the configuration, model and acts as the Unit of Work, holding references to all of the known entity instances. This class is therefore not lightweight as its NHibernate counterpart and it is not uncommon to see examples where an instance is cached on a field. Mappings Both NHibernate and Entity Framework (Code First) support the use of POCOs to represent entities, no base classes are required (or even possible, in the case of NHibernate). As for mapping to and from the database, NHibernate supports three types of mappings: XML-based, which have the advantage of not tying the entity classes to a particular O/RM; the XML files can be deployed as files on the file system or as embedded resources in an assembly; Attribute-based, for keeping both the entities and database details on the same place at the expense of polluting the entity classes with NHibernate-specific attributes; Strongly-typed code-based, which allows dynamic creation of the model and strongly typing it, so that if, for example, a property name changes, the mapping will also be updated. 
Entity Framework can use: Attribute-based (although attributes cannot express all of the available possibilities – for example, cascading); Strongly-typed code mappings. Database Support With NHibernate you can use mostly any database you want, including: SQL Server; SQL Server Compact; SQL Server Azure; Oracle; DB2; PostgreSQL; MySQL; Sybase Adaptive Server/SQL Anywhere; Firebird; SQLLite; Informix; Any through OLE DB; Any through ODBC. Out of the box, Entity Framework only supports SQL Server, but a number of providers exist, both free and commercial, for some of the most used databases, such as Oracle and MySQL. See a list here. Inheritance Strategies Both NHibernate and Entity Framework support the three canonical inheritance strategies: Table Per Type Hierarchy (Single Table Inheritance), Table Per Type (Class Table Inheritance) and Table Per Concrete Type (Concrete Table Inheritance). Associations Regarding associations, both support one to one, one to many and many to many. However, NHibernate offers far more collection types: Bags of entities or values: unordered, possibly with duplicates; Lists of entities or values: ordered, indexed by a number column; Maps of entities or values: indexed by either an entity or any value; Sets of entities or values: unordered, no duplicates; Arrays of entities or values: indexed, immutable. Querying NHibernate exposes several querying APIs: LINQ is probably the most used nowadays, and really does not need to be introduced; Hibernate Query Language (HQL) is a database-agnostic, object-oriented SQL-alike language that exists since NHibernate’s creation and still offers the most advanced querying possibilities; well suited for dynamic queries, even if using string concatenation; Criteria API is an implementation of the Query Object pattern where you create a semi-abstract conceptual representation of the query you wish to execute by means of a class model; also a good choice for dynamic querying; Query Over offers a similar API to Criteria, but using strongly-typed LINQ expressions instead of strings; for this, although more refactor-friendlier that Criteria, it is also less suited for dynamic queries; SQL, including stored procedures, can also be used; Integration with Lucene.NET indexer is available. As for Entity Framework: LINQ to Entities is fully supported, and its implementation is considered very complete; it is the API of choice for most developers; Entity-SQL, HQL’s counterpart, is also an object-oriented, database-independent querying language that can be used for dynamic queries; SQL, of course, is also supported. Caching Both NHibernate and Entity Framework, of course, feature first-level cache. NHibernate also supports a second-level cache, that can be used among multiple ISessionFactorys, even in different processes/machines: Hashtable (in-memory); SysCache (uses ASP.NET as the cache provider); SysCache2 (same as above but with support for SQL Server SQL Dependencies); Prevalence; SharedCache; Memcached; Redis; NCache; Appfabric Caching. Out of the box, Entity Framework does not have any second-level cache mechanism, however, there are some public samples that show how we can add this. 
ID Generators NHibernate supports different ID generation strategies, coming from the database and otherwise: Identity (for SQL Server, MySQL, and databases who support identity columns); Sequence (for Oracle, PostgreSQL, and others who support sequences); Trigger-based; HiLo; Sequence HiLo (for databases that support sequences); Several GUID flavors, both in GUID as well as in string format; Increment (for single-user uses); Assigned (must know what you’re doing); Sequence-style (either uses an actual sequence or a single-column table); Table of ids; Pooled (similar to HiLo but stores high values in a table); Native (uses whatever mechanism the current database supports, identity or sequence). Entity Framework only supports: Identity generation; GUIDs; Assigned values. Properties NHibernate supports properties of entity types (one to one or many to one), collections (one to many or many to many) as well as scalars and enumerations. It offers a mechanism for having complex property types generated from the database, which even include support for querying. It also supports properties originated from SQL formulas. Entity Framework only supports scalars, entity types and collections. Enumerations support will come in the next version. Events and Interception NHibernate has a very rich event model, that exposes more than 20 events, either for synchronous pre-execution or asynchronous post-execution, including: Pre/Post-Load; Pre/Post-Delete; Pre/Post-Insert; Pre/Post-Update; Pre/Post-Flush. It also features interception of class instancing and SQL generation. As for Entity Framework, only two events exist: ObjectMaterialized (after loading an entity from the database); SavingChanges (before saving changes, which include deleting, inserting and updating). Tracking Changes For NHibernate as well as Entity Framework, all changes are tracked by their respective Unit of Work implementation. Entities can be attached and detached to it, Entity Framework does, however, also support self-tracking entities. Optimistic Concurrency Control NHibernate supports all of the imaginable scenarios: SQL Server’s ROWVERSION; Oracle’s ORA_ROWSCN; A column containing date and time; A column containing a version number; All/dirty columns comparison. Entity Framework is more focused on Entity Framework, so it only supports: SQL Server’s ROWVERSION; Comparing all/some columns. Batching NHibernate has full support for insertion batching, but only if the ID generator in use is not database-based (for example, it cannot be used with Identity), whereas Entity Framework has no batching at all. Cascading Both support cascading for collections and associations: when an entity is deleted, their conceptual children are also deleted. NHibernate also offers the possibility to set the foreign key column on children to NULL instead of removing them. Flushing Changes NHibernate’s ISession has a FlushMode property that can have the following values: Auto: changes are sent to the database when necessary, for example, if there are dirty instances of an entity type, and a query is performed against this entity type, or if the ISession is being disposed; Commit: changes are sent when committing the current transaction; Never: changes are only sent when explicitly calling Flush(). As for Entity Framework, changes have to be explicitly sent through a call to AcceptAllChanges()/SaveChanges(). 
Lazy Loading NHibernate supports lazy loading for Associated entities (one to one, many to one); Collections (one to many, many to many); Scalar properties (thing of BLOBs or CLOBs). Entity Framework only supports lazy loading for: Associated entities; Collections. Generating and Updating the Database Both NHibernate and Entity Framework Code First (with the Migrations API) allow creating the database model from the mapping and updating it if the mapping changes. Extensibility As you can guess, NHibernate is far more extensible than Entity Framework. Basically, everything can be extended, from ID generation, to LINQ to SQL transformation, HQL native SQL support, custom column types, custom association collections, SQL generation, supported databases, etc. With Entity Framework your options are more limited, at least, because practically no information exists as to what can be extended/changed. It features a provider model that can be extended to support any database. Integration With Other Microsoft APIs and Tools When it comes to integration with Microsoft technologies, it will come as no surprise that Entity Framework offers the best support. For example, the following technologies are fully supported: ASP.NET (through the EntityDataSource); ASP.NET Dynamic Data; WCF Data Services; WCF RIA Services; Visual Studio (through the integrated designer). Documentation This is another point where Entity Framework is superior: NHibernate lacks, for starters, an up to date API reference synchronized with its current version. It does have a community mailing list, blogs and wikis, although not much used. Entity Framework has a number of resources on MSDN and, of course, several forums and discussion groups exist. Conclusion Like I said, this is a personal list. I may come as a surprise to some that Entity Framework is so behind NHibernate in so many aspects, but it is true that NHibernate is much older and, due to its open-source nature, is not tied to product-specific timeframes and can thus evolve much more rapidly. I do like both, and I chose whichever is best for the job I have at hands. I am looking forward to the changes in EF5 which will add significant value to an already interesting product. So, what do you think? Did I forget anything important or is there anything else worth talking about? Looking forward for your comments!

    Read the article

  • Windows Azure: General Availability of Web Sites + Mobile Services, New AutoScale + Alerts Support, No Credit Card Needed for MSDN

    - by ScottGu
    This morning we released a major set of updates to Windows Azure.  These updates included: Web Sites: General Availability Release of Windows Azure Web Sites with SLA Mobile Services: General Availability Release of Windows Azure Mobile Services with SLA Auto-Scale: New automatic scaling support for Web Sites, Cloud Services and Virtual Machines Alerts/Notifications: New email alerting support for all Compute Services (Web Sites, Mobile Services, Cloud Services, and Virtual Machines) MSDN: No more credit card requirement for sign-up All of these improvements are now available to use immediately (note: some are still in preview).  Below are more details about them. Web Sites: General Availability Release of Windows Azure Web Sites I’m incredibly excited to announce the General Availability release of Windows Azure Web Sites. The Windows Azure Web Sites service is perfect for hosting a web presence, building customer engagement solutions, and delivering business web apps.  Today’s General Availability release means we are taking off the “preview” tag from the Free and Standard (formerly called reserved) tiers of Windows Azure Web Sites.  This means we are providing: A 99.9% monthly SLA (Service Level Agreement) for the Standard tier Microsoft Support available on a 24x7 basis (with plans that range from developer plans to enterprise Premier support) The Free tier runs in a shared compute environment and supports up to 10 web sites. While the Free tier does not come with an SLA, it works great for rapid development and testing and enables you to quickly spike out ideas at no cost. The Standard tier, which was called “Reserved” during the preview, runs using dedicated per-customer VM instances for great performance, isolation and scalability, and enables you to host up to 500 different Web sites within them.  You can easily scale your Standard instances on-demand using the Windows Azure Management Portal.  You can adjust VM instance sizes from a Small instance size (1 core, 1.75GB of RAM), up to a Medium instance size (2 core, 3.5GB of RAM), or Large instance (4 cores and 7 GB RAM).  You can choose to run between 1 and 10 Standard instances, enabling you to easily scale up your web backend to 40 cores of CPU and 70GB of RAM: Today’s release also includes general availability support for custom domain SSL certificate bindings for web sites running using the Standard tier. Customers will be able to utilize certificates they purchase for their custom domains and use either SNI or IP based SSL encryption. SNI encryption is available for all modern browsers and does not require an IP address.  SSL certificates can be used for individual sites or wild-card mapped across multiple sites (we charge extra for the use of a SSL cert – but the fee is per-cert and not per site which means you pay once for it regardless of how many sites you use it with).  Today’s release also includes the following new features: Auto-Scale support Today’s Windows Azure release adds preview support for Auto-Scaling web sites.  This enables you to setup automatic scale rules based on the activity of your instances – allowing you to automatically scale down (and save money) when they are below a CPU threshold you define, and automatically scale up quickly when traffic increases.  See below for more details. 64-bit and 32-bit mode support You can now choose to run your standard tier instances in either 32-bit or 64-bit mode (previously they only ran in 32-bit mode).  
This enables you to address even more memory within individual web applications. Memory dumps Memory dumps can be very useful for diagnosing issues and debugging apps. Using a REST API, you can now get a memory dump of your sites, which you can then use for investigating issues in Visual Studio Debugger, WinDbg, and other tools. Scaling Sites Independently Prior to today’s release, all sites scaled up/down together whenever you scaled any site in a sub-region. So you may have had to keep your proof-of-concept or testing sites in a separate sub-region if you wanted to keep them in the Free tier. This will no longer be necessary.  Windows Azure Web Sites can now mix different tier levels in the same geographic sub-region. This allows you, for example, to selectively move some of your sites in the West US sub-region up to Standard tier when they require the features, scalability, and SLA of the Standard tier. Full pricing details on Windows Azure Web Sites can be found here.  Note that the “Shared Tier” of Windows Azure Web Sites remains in preview mode (and continues to have discounted preview pricing).  Mobile Services: General Availability Release of Windows Azure Mobile Services I’m incredibly excited to announce the General Availability release of Windows Azure Mobile Services.  Mobile Services is perfect for building scalable cloud back-ends for Windows 8.x, Windows Phone, Apple iOS, Android, and HTML/JavaScript applications.  Customers We’ve seen tremendous adoption of Windows Azure Mobile Services since we first previewed it last September, and more than 20,000 customers are now running mobile back-ends in production using it.  These customers range from startups like Yatterbox, to university students using Mobile Services to complete apps like Sly Fox in their spare time, to media giants like Verdens Gang finding new ways to deliver content, and telcos like TalkTalk Business delivering the up-to-the-minute information their customers require.  In today’s Build keynote, we demonstrated how TalkTalk Business is using Windows Azure Mobile Services to deliver service, outage and billing information to its customers, wherever they might be. Partners When we unveiled the source control and Custom API features I blogged about two weeks ago, we enabled a range of new scenarios, one of which is a more flexible way to work with third party services.  The following blogs, samples and tutorials from our partners cover great ways you can extend Mobile Services to help you build rich modern apps: New Relic allows developers to monitor and manage the end-to-end performance of iOS and Android applications connected to Mobile Services. SendGrid eliminates the complexity of sending email from Mobile Services, saving time and money, while providing reliable delivery to the inbox. Twilio provides a telephony infrastructure web service in the cloud that you can use with Mobile Services to integrate phone calls, text messages and IP voice communications into your mobile apps. Xamarin provides a Mobile Services add on to make it easy building cross-platform connected mobile aps. Pusher allows quickly and securely add scalable real-time messaging functionality to Mobile Services-based web and mobile apps. Visual Studio 2013 and Windows 8.1 This week during //build/ keynote, we demonstrated how Visual Studio 2013, Mobile Services and Windows 8.1 make building connected apps easier than ever. 
Developers building Windows 8 applications in Visual Studio can now connect them to Windows Azure Mobile Services by simply right clicking then choosing Add Connected Service. You can either create a new Mobile Service or choose existing Mobile Service in the Add Connected Service dialog. Once completed, Visual Studio adds a reference to Mobile Services SDK to your project and generates a Mobile Services client initialization snippet automatically. Add Push Notifications Push Notifications and Live Tiles are a key to building engaging experiences. Visual Studio 2013 and Mobile Services make it super easy to add push notifications to your Windows 8.1 app, by clicking Add a Push Notification item: The Add Push Notification wizard will then guide you through the registration with the Windows Store as well as connecting your app to a new or existing mobile service. Upon completion of the wizard, Visual Studio will configure your mobile service with the WNS credentials, as well as add sample logic to your client project and your mobile service that demonstrates how to send push notifications to your app. Server Explorer Integration In Visual Studio 2013 you can also now view your Mobile Services in the the Server Explorer. You can add tables, edit, and save server side scripts without ever leaving Visual Studio, as shown on the image below: Pricing With today’s general availability release we are announcing that we will be offering Mobile Services in three tiers – Free, Standard, and Premium.  Each tier is metered using a simple pricing model based on the # of API calls (bandwidth is included at no extra charge), and the Standard and Premium tiers are backed by 99.9% monthly SLAs.  You can elastically scale up or down the number of instances you have of each tier to increase the # of API requests your service can support – allowing you to efficiently scale as your business grows. The following table summarizes the new pricing model (full pricing details here):   You can find the full details of the new pricing model here. Build Conference Talks The //BUILD/ conference will be packed with sessions covering every aspect of developing connected applications with Mobile Services. The best part is that, even if you can’t be with us in San Francisco, every session is being streamed live. Be sure not to miss these talks: Mobile Services – Soup to Nuts — Josh Twist Building Cross-Platform Apps with Windows Azure Mobile Services — Chris Risner Connected Windows Phone Apps made Easy with Mobile Services — Yavor Georgiev Build Connected Windows 8.1 Apps with Mobile Services — Nick Harris Who’s that user? Identity in Mobile Apps — Dinesh Kulkarni Building REST Services with JavaScript — Nathan Totten Going Live and Beyond with Windows Azure Mobile Services — Kirill Gavrylyuk , Paul Batum Protips for Windows Azure Mobile Services — Chris Risner AutoScale: Dynamically scale up/down your app based on real-world usage One of the key benefits of Windows Azure is that you can dynamically scale your application in response to changing demand. In the past, though, you have had to either manually change the scale of your application, or use additional tooling (such as WASABi or MetricsHub) to automatically scale your application. Today, we’re announcing that AutoScale will be built-into Windows Azure directly.  With today’s release it is now enabled for Cloud Services, Virtual Machines and Web Sites (Mobile Services support will come soon). 
Auto-scale enables you to configure Windows Azure to automatically scale your application dynamically on your behalf (without any manual intervention) so you can achieve the ideal performance and cost balance. Once configured it will regularly adjust the number of instances running in response to the load in your application. Currently, we support two different load metrics: CPU percentage Storage queue depth (Cloud Services and Virtual Machines only) We’ll enable automatic scaling on even more scale metrics in future updates. When to use Auto-Scale The following are good criteria for services/apps that will benefit from the use of auto-scale: The service/app can scale horizontally (e.g. it can be duplicated to multiple instances) The service/app load changes over time If your app meets these criteria, then you should look to leverage auto-scale. How to Enable Auto-Scale To enable auto-scale, simply navigate to the Scale tab in the Windows Azure Management Portal for the app/service you wish to enable.  Within the scale tab turn the Auto-Scale setting on to either CPU or Queue (for Cloud Services and VMs) to enable Auto-Scale.  Then change the instance count and target CPU settings to configure the Auto-Scale ranges you want to maintain. The image below demonstrates how to enable Auto-Scale on a Windows Azure Web-Site.  I’ve configured the web-site so that it will run using between 1 and 5 VM instances.  The exact # used will depend on the aggregate CPU of the VMs using the 40-70% range I’ve configured below.  If the aggregate CPU goes above 70%, then Windows Azure will automatically add new VMs to the pool (up to the maximum of 5 instances I’ve configured it to use).  If the aggregate CPU drops below 40% then Windows Azure will automatically start shutting down VMs to save me money: Once you’ve turned auto-scale on, you can return to the Scale tab at any point and select Off to manually set the number of instances. Using the Auto-Scale Preview With today’s update you can now, in just a few minutes, have Windows Azure automatically adjust the number of instances you have running  in your apps to keep your service performant at an even better cost. Auto-scale is being released today as a preview feature, and will be free until General Availability. During preview, each subscription is limited to 10 separate auto-scale rules across all of the resources they have (Web sites, Cloud services or Virtual Machines). If you hit the 10 limit, you can disable auto-scale for any resource to enable it for another. Alerts and Notifications Starting today we are now providing the ability to configure threshold based alerts on monitoring metrics. This feature is available for compute services (cloud services, VM, websites and mobiles services). Alerts provide you the ability to get proactively notified of active or impending issues within your application.  You can define alert rules for: Virtual machine monitoring metrics that are collected from the host operating system (CPU percentage, network in/out, disk read bytes/sec and disk write bytes/sec) and on monitoring metrics from monitoring web endpoint urls (response time and uptime) that you have configured. Cloud service monitoring metrics that are collected from the host operating system (same as VM), monitoring metrics from the guest VM (from performance counters within the VM) and on monitoring metrics from monitoring web endpoint urls (response time and uptime) that you have configured. 
For Web Sites and Mobile Services, alerting rules can be configured on monitoring metrics from monitoring endpoint urls (response time and uptime) that you have configured. Creating Alert Rules You can add an alert rule for a monitoring metric by navigating to the Setting -> Alerts tab in the Windows Azure Management Portal. Click on the Add Rule button to create an alert rule. Give the alert rule a name and optionally add a description. Then pick the service which you want to define the alert rule on: The next step in the alert creation wizard will then filter the monitoring metrics based on the service you selected:   Once created the rule will show up in your alerts list within the settings tab: The rule above is defined as “not activated” since it hasn’t tripped over the CPU threshold we set.  If the CPU on the above machine goes over the limit, though, I’ll get an email notifying me from an Windows Azure Alerts email address ([email protected]). And when I log into the portal and revisit the alerts tab I’ll see it highlighted in red.  Clicking it will then enable me to see what is causing it to fail, as well as view the history of when it has happened in the past. Alert Notifications With today’s initial preview you can now easily create alerting rules based on monitoring metrics and get notified on active or impending issues within your application that require attention. During preview, each subscription is limited to 10 alert rules across all of the services that support alert rules. No More Credit Card Requirement for MSDN Subscribers Earlier this month (during TechEd 2013), Windows Azure announced that MSDN users will get Windows Azure Credits every month that they can use for any Windows Azure services they want. You can read details about this in my previous Dev/Test blog post. Today we are making further updates to enable an easier Windows Azure signup for MSDN users. MSDN users will now not be required to provide payment information (e.g. no credit card) during sign-up, so long as they use the service within the included monetary credit for the billing period. For usage beyond the monetary credit, they can enable overages by providing the payment information and remove the spending limit. This enables a super easy, one page sign-up experience for MSDN users.  Simply sign-up for your Windows Azure trial using the same Microsoft ID that you use to manage your MSDN account, then complete the one page sign-up form below and you will be able to spend your free monthly MSDN credits (up to $150 each month) on any Windows Azure resource for dev/test:   This makes it trivially easy for every MDSN customer to start using Windows Azure today.  If you haven’t signed up yet, I definitely recommend checking it out. Summary Today’s release includes a ton of great features that enable you to build even better cloud solutions.  If you don’t already have a Windows Azure account, you can sign-up for a free trial and start using all of the above features today.  Then visit the Windows Azure Developer Center to learn more about how to build apps with it. Hope this helps, Scott P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

    Read the article

  • Load and Web Performance Testing using Visual Studio Ultimate 2010-Part 3

    - by Tarun Arora
    Welcome back once again, in Part 1 of Load and Web Performance Testing using Visual Studio 2010 I talked about why Performance Testing the application is important, the test tools available in Visual Studio Ultimate 2010 and various test rig topologies, in Part 2 of Load and Web Performance Testing using Visual Studio 2010 I discussed the details of web performance & load tests as well as why it’s important to follow a goal based pattern while performance testing your application. In part 3 I’ll be discussing Test Result Analysis, Test Result Drill through, Test Report Generation, Test Run Comparison, Asp.net Profiler and some closing thoughts. Test Results – I see some creepy worms! In Part 2 we put together a web performance test and a load test, lets run the test to see load test to see how the Web site responds to the load simulation. While the load test is running you will be able to see close to real time analysis in the Load Test Analyser window. You can use the Load Test Analyser to conduct load test analysis in three ways: Monitor a running load test - A condensed set of the performance counter data is maintained in memory. To prevent the results memory requirements from growing unbounded, up to 200 samples for each performance counter are maintained. This includes 100 evenly spaced samples that span the current elapsed time of the run and the most recent 100 samples.         After the load test run is completed - The test controller spools all collected performance counter data to a database while the test is running. Additional data, such as timing details and error details, is loaded into the database when the test completes. The performance data for a completed test is loaded from the database and analysed by the Load Test Analyser. Below you can see a screen shot of the summary view, this provides key results in a format that is compact and easy to read. You can also print the load test summary, this is generated after the test has completed or been stopped.         Analyse the load test results of a previously run load test – We’ll see this in the section where i discuss comparison between two test runs. The performance counters can be plotted on the graphs. You also have the option to highlight a selected part of the test and view details, drill down to the user activity chart where you can hover over to see more details of the test run.   Generate Report => Test Run Comparisons The level of reports you can generate using the Load Test Analyser is astonishing. You have the option to create excel reports and conduct side by side analysis of two test results or to track trend analysis. The tools also allows you to export the graph data either to MS Excel or to a CSV file. You can view the ASP.NET profiler report to conduct further analysis as well. View Data and Diagnostic Attachments opens the Choose Diagnostic Data Adapter Attachment dialog box to select an adapter to analyse the result type. For example, you can select an IntelliTrace adapter, click OK and open the IntelliTrace summary for the test agent that was used in the load test.   Compare results This creates a set of reports that compares the data from two load test results using tables and bar charts. I have taken these screen shots from the MSDN documentation, I would highly recommend exploring the wealth of knowledge available on MSDN. 
Leaving Thoughts While load testing the application with an excessive load for a longer duration of time, i managed to bring the IIS to its knees by piling up a huge queue of requests waiting to be processed. This clearly means that the IIS had run out of threads as all the threads were busy processing existing request, one easy way of fixing this is by increasing the default number of allocated threads, but this might escalate the problem. The better suggestion is to try and drill down to the actual root cause of the problem. When ever the garbage collection runs it stops processing any pages so all requests that come in during that period are queued up, but realistically the garbage collection completes in fraction of a a second. To understand this better lets look at the .net heap, it is divided into large heap and small heap, anything greater than 85kB in size will be allocated to the Large object heap, the Large object heap is non compacting and remember large objects are expensive to move around, so if you are allocating something in the large object heap, make sure that you really need it! The small object heap on the other hand is divided into generations, so all objects that are supposed to be short-lived are suppose to live in Gen-0 and the long living objects eventually move to Gen-2 as garbage collection goes through.  As you can see in the picture below all < 85 KB size objects are first assigned to Gen-0, when Gen-0 fills up and a new object comes in and finds Gen-0 full, the garbage collection process is started, the process checks for all the dead objects and assigns them as the valid candidate for deletion to free up memory and promotes all the remaining objects in Gen-0 to Gen-1. So in the future when ever you clean up Gen-1 you have to clean up Gen-0 as well. When you fill up Gen – 0 again, all of Gen – 1 dead objects are drenched and rest are moved to Gen-2 and Gen-0 objects are moved to Gen-1 to free up Gen-0, but by this time your Garbage collection process has started to take much more time than it usually takes. Now as I mentioned earlier when garbage collection is being run all page requests that come in during that period are queued up. Does this explain why possibly page requests are getting queued up, apart from this it could also be the case that you are waiting for a long running database process to complete.      Lets explore the heap a bit more… What is really a case of crisis is when the objects are living long enough to make it to Gen-2 and then dying, this is definitely a high cost operation. But sometimes you need objects in memory, for example when you cache data you hold on to the objects because you need to use them right across the user session, which is acceptable. But if you wanted to see what extreme caching can do to your server then write a simple application that chucks in a lot of data in cache, run a load test over it for about 10-15 minutes, forcing a lot of data in memory causing the heap to run out of memory. If you get to such a state where you start running out of memory the IIS as a mode of recovery restarts the worker process. It is great way to free up all your memory in the heap but this would clear the cache. The problem with this is if the customer had 10 items in their shopping basket and that data was stored in the application cache, the user basket will now be empty forcing them either to get frustrated and go to a competitor website or if the customer is really patient, give it another try! 
    How can you address this? Well, there are two ways of addressing it:

    1. Workaround – An x86 processor only allows a maximum of 4 GB of RAM, which means the machine effectively has around 3.4 GB of RAM available. The OS needs about 1.5 GB of RAM to run efficiently, and IIS and the .NET framework also need their share of memory, leaving you a heap of around 800 MB to play with. Because Team Builds by default build your application with the platform set to 'Any CPU', the application is built such that it will run in x86 mode on an x86 processor and in x64 mode on an x64 processor. The problem with this is that not all applications are really x64 compatible, especially if you are using COM objects or external libraries. So, as a quick win, if you compile your application in x86 mode by changing the 'Any CPU' selection to x86 in the team build, you will be able to run your application on an x64 machine in x86 mode (WOW – by running Windows on Windows), and what that means is you could use 8 GB+ worth of RAM; if you take away everything else your application will roughly get a heap size of at least 4 GB to play with, which is immense. If you need a heap size of more than 4 GB you have either built software for NASA or there is something fundamentally wrong in your application.

    2. Solution – Now that you have put a workaround in place, IIS will not restart the worker process that regularly, which means you can take a breather and start working to get to the root cause of this memory leak. But this begs the question: “How do I identify possible memory leaks in my application?” Well, I won’t say that there is one single tool that can tell you where the memory leak is, but trust me, ‘Performance Profiling’ is a great starting point; it definitely gets you started in the right direction. Let’s have a look at how.

    Performance Wizard - Start the Performance Wizard and select Instrumentation; this lets you measure function call counts and timings. Before running the performance session, right-click the performance session settings and choose Properties from the context menu to bring up the Performance Session properties page and, as shown in the screen shot below, check the check boxes in the group ‘.NET memory profiling collection’, namely ‘Collect .NET object allocation information’ and ‘Also collect the .NET object lifetime information’.

    Now if you fire off the profiling session on your pages you will notice that the results allow you to view ‘Object Lifetime’, which shows you the number of objects that made it to Gen-0, Gen-1, Gen-2, the large object heap, etc. Another great feature of the profiler is that if your application has more than 5% of cases where objects die right after making it to Gen-2 storage, a threshold alert is generated to warn you. You also have the option to view the most expensive methods and, by capturing the IntelliTrace data, you can drill in and narrow down to the line of code that is the root cause of the problem.

    So now we have seen how crucial memory management is and how easy Visual Studio Ultimate 2010 makes it for us to identify and reproduce the problem with the best-of-breed tools in the product.

    Caching One of the main ways to improve performance is caching, which basically means you tell the web server that instead of going to the database for each request you keep the data on the web server, and when the user asks for it you serve it from the web server itself. BUT that can have consequences!
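    As a quick sanity check (not part of the original post), a .NET 4.0 application can report at runtime whether it actually ended up as a 32-bit process under WOW64 or as a true 64-bit process after the build platform change described above:

    using System;

    class BitnessCheck
    {
        static void Main()
        {
            // IntPtr.Size is 4 in a 32-bit process and 8 in a 64-bit process.
            Console.WriteLine("Pointer size            : {0} bytes", IntPtr.Size);

            // Available from .NET 4.0: distinguishes a 32-bit process (possibly
            // running under WOW64) from a true 64-bit process and OS.
            Console.WriteLine("64-bit process          : {0}", Environment.Is64BitProcess);
            Console.WriteLine("64-bit operating system : {0}", Environment.Is64BitOperatingSystem);
        }
    }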
    Let’s look at some code; trust me, caching code is not very intuitive. I define a cache key for almost all searches made through the common search page and cache the results. The approach works fine: the first time I get the data from the database and the second time the data is served from the cache, a significant performance improvement, EXCEPT when two users try to do the same operation and run into each other. But it is easy to handle this by adding a lock, as you can see in the snippet below. So, when a user comes in and finds that the cache is empty, the user takes the lock and starts to build the cache: no more concurrency issues. But let’s say you are processing 10 requests per second; by the time I have locked the operation to get the results from the database, 9 other users have come in and found that the cache key is null, so after I have come out and populated the cache they will still go in to get the results again. The application will still be faster because the next set of 10 users, and so on, would continue to get data from the cache. BUT if we add another null check after taking the lock and before the actual call to the database, then the 9 users who follow me would not make the extra trip to the database at all, and that would really increase the performance. But didn’t I say that the code won’t be very intuitive? Maybe you should leave a comment; you don’t want another developer to come in and think “what a fresher, why is he checking the cache key for null twice!”

    The downside of caching is that you are storing the data outside of the database, and the data could be wrong because updates applied to the database would make the data cached at the web server out of sync. So, how do you invalidate the cache? Well, if you only had one way of updating the data, let’s say only one entry point to the data update, you could write some logic to say that every time new data is entered, set the cache object to null. But this approach will not work as soon as you have several ways of feeding data to the system or your system is scaled out across a farm of web servers. The perfect solution to this is micro caching, which means you cache the query for a set time duration and invalidate the cache after that set duration. The advantage is that every time the user queries for that data within the time span for which you have cached the results, there are no calls made to the database and the data is served right from the server, which makes the response immensely quick. Now, figuring out the appropriate time span for which you micro cache the query results really depends on the application. Let’s say your website gets 10 requests per second; if you retain the cached results for even 1 minute you will have immense performance gains, reducing the search hits to the database by 90%.

    Ever wondered why, when you go to e-bookers.com or xpedia.com or yatra.com to book a flight and you click on the book button because the fare seems too exciting, you get an error message telling you that the fare is not valid any more? Yes, exactly: that is a cache failure! These travel sites or price-compare engines are not going to hit the database every time you hit the compare button; instead the results will be served from the cache, because the query results are micro cached. It’s a perfect trade-off: by micro caching the results the site gains 100% performance benefits but every once in a while annoys a customer because the fare has expired.
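    The snippet referred to above is not reproduced in this excerpt. Purely as an illustration, a double-checked, micro-cached lookup along the lines described might look like this in C# (the cache key, the 1-minute duration and the repository call are my own assumptions, not the post’s actual code):

    using System;
    using System.Collections.Generic;
    using System.Web;
    using System.Web.Caching;

    public class ProductSearchService
    {
        private static readonly object CacheLock = new object();

        public IList<string> Search(string term)
        {
            string cacheKey = "search:" + term;                  // hypothetical cache key
            var results = HttpRuntime.Cache[cacheKey] as IList<string>;
            if (results == null)
            {
                lock (CacheLock)
                {
                    // Second null check: requests queued behind the lock find the
                    // cache already populated and skip the database trip entirely.
                    results = HttpRuntime.Cache[cacheKey] as IList<string>;
                    if (results == null)
                    {
                        results = QueryDatabase(term);            // the expensive call
                        HttpRuntime.Cache.Insert(
                            cacheKey,
                            results,
                            null,
                            DateTime.UtcNow.AddMinutes(1),        // micro cache: expire after 1 minute
                            Cache.NoSlidingExpiration);
                    }
                }
            }
            return results;
        }

        private IList<string> QueryDatabase(string term)
        {
            // Placeholder for the real data access code.
            return new List<string> { "result for " + term };
        }
    }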
    But the trade-off works in favour of these sites, as they are still able to process 30+ page requests per second, which means they cater for the site traffic while maybe losing one customer every once in a while to a competitor who is also using a similar caching technique; and what are the odds that the user will not come back to their site sooner or later?

    Recap

    Resources Below are some key resources you might like to review. I would highly recommend the documentation, walkthroughs and videos available on MSDN. You can always make use of Fiddler to debug Web Performance Tests. Some community test extensions and plug-ins available on CodePlex might also be of interest to you.

    The Road Ahead Thank you for taking the time out to read this blog post; you may also want to read Part I and Part II if you haven’t so far. If you enjoyed the post, remember to subscribe to http://feeds.feedburner.com/TarunArora. Questions/feedback/suggestions, etc.: please leave a comment. Next is ‘Load Testing in the cloud’, where I’ll be exploring the possibilities of running the test controller/agents in the cloud. See you on the other side! Thank You!

    Share this post : CodeProject

    Read the article

  • The Inkremental Architect's Napkin - #4 - Make increments tangible

    - by Ralf Westphal
    Originally posted on: http://geekswithblogs.net/theArchitectsNapkin/archive/2014/06/12/the-inkremental-architectacutes-napkin---4---make-increments-tangible.aspx

    The drivers of software development are increments, small increments, tiny increments. With an increment being a slice of the overall requirement scope thin enough to implement and get feedback from a product owner within 2 days max. Such an increment might concern Functionality or Quality.[1] To make such high-frequency delivery of increments possible, the transition from talking to coding needs to be as easy as possible. A user story or some other documentation of what's supposed to get implemented until tomorrow evening at the latest is one side of the coin. The other is where to put the logic in all of the code base.

    To implement an increment, only logic statements are needed. Functionality, like Quality, is just about expressions and control flow statements. Think of Assembler code without the CALL/RET instructions. That's all that is needed. Forget about functions, forget about classes. To make a user happy none of that is really needed. It's just about the right expressions and conditional execution paths plus some memory allocation. The automatic function inlining of compilers makes it clear how unimportant functions are for delivering value to users at runtime.

    But why then are there functions? Because they were invented for optimization purposes. We need them for better Evolvability and Production Efficiency. Nothing more, nothing less. No software has become faster, more secure, more scalable, more functional because we gathered logic under the roof of a function or two or a thousand. Functions make logic easier to understand. Functions make us faster in producing logic. Functions make it easier to keep logic consistent. Functions help to conserve memory. That said, functions are important. They are even the pivotal element of software development. We can't code without them - whether you write a function yourself or not. Because there's always at least one function in play: the Entry Point of a program.

    In Ruby the simplest program looks like this:

    puts "Hello, world!"

    In C# more is necessary:

    class Program {
        public static void Main () {
            System.Console.Write("Hello, world!");
        }
    }

    C# makes the Entry Point function explicit, not so Ruby. But still it's there. So you can think of logic always running in some function.

    Which brings me back to increments: In order to make the transition from talking to code as easy as possible, it has to be crystal clear into which function you should put the logic. Product owners might be content once there is a sticky note, a user story, on the Scrum or Kanban board. But developers need an idea of what that sticky note means in terms of functions. Because with a function in hand, with a signature to run tests against, they have something to focus on. All's well once there is a function behind whose signature logic can be piled up. Then testing frameworks can be used to check if the logic is correct. Then practices like TDD can help to drive the implementation. That's why most code katas define exactly how the API of a solution should look. It's a function, maybe two or three, not more. A requirement like "Write a function f which takes this as parameters and produces such and such output by doing x" makes a developer comfortable. Yes, there are all kinds of details to think about, like which algorithm or technology to use, or what kind of state and side effects to consider.
    Even a single function must not only deliver on Functionality, but also on Quality and Evolvability. Nevertheless, once it's clear which function to put logic in, you have a tangible starting point. So, yes, what I'm suggesting is to find a single function to put all the logic in that's necessary to deliver on the requirements of an increment. Or to put it the other way around: slice requirements in a way that each increment's logic can be located under the roof of a single function.

    Entry points Of course, the logic of a software system will always be spread across many, many functions. But there's always an Entry Point. That's the most important function for each increment, because that's the root to put integration or even acceptance tests on. A batch program like the above hello-world application only has a single Entry Point. All logic is reached from there, regardless how deep it's nested in classes. But a program with a user interface like this has at least two Entry Points: one is the main function called upon startup; the other is the button click event handler for "Show my score". But maybe there are even more, like another Entry Point being a handler for the event fired when one of the choices gets selected, because then some logic could check if the button should be enabled because all questions got answered. Or another Entry Point for the logic to be executed when the program is closed, because then the choices made should be persisted.

    You see, an Entry Point to me is a function which gets triggered by the user of a software system. With batch programs that's the main function. With GUI programs on the desktop that's event handlers. With web programs that's handlers for URL routes. And my basic suggestion to help you with slicing requirements for Spinning is: slice them in a way so that each increment is related to only one Entry Point function.[2] Entry Points are the "outer functions" of a program. That's where the environment triggers behavior. That's where hardware meets software. Entry points always get called because something happened to hardware state, e.g. a key was pressed, a mouse button clicked, the system timer ticked, data arrived over a wire.[3] Viewed from the outside, software is just a collection of Entry Point functions made accessible via buttons to press, menu items to click, gestures, URLs to open, keys to enter.

    Collections of batch processors I'd thus say we haven't moved forward since the early days of software development. We're still writing batch programs. Forget about "event-driven programming" with its fancy GUI applications. Software is just a collection of batch processors. Earlier it was just one per program, today it's hundreds we bundle up into applications. Each batch processor is represented by an Entry Point as its root that works on a number of resources from which it reads data to process and to which it writes results. These resources can be the keyboard or main memory or a hard disk or a communication line or a display. Together many batch processors - large and small - form applications the user perceives as a single whole: software development that way becomes quite simple: just implement one batch processor after another. Well, at least in principle ;-)

    Features Each batch processor entered through an Entry Point delivers value to the user. It's an increment. Sometimes its logic is trivial, sometimes it's very complex. Regardless, each Entry Point represents an increment. An Entry Point implemented thus is a step forward in terms of Agility.
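    As an illustration only (the original post shows screenshots instead of code), a minimal C# sketch of a small GUI program with two Entry Points as described above, the Main function called at startup and the click handler of a "Show my score" button, might look like this:

    using System;
    using System.Windows.Forms;

    // Entry Point #1: the function the runtime calls when the program starts.
    static class Program
    {
        [STAThread]
        static void Main()
        {
            Application.EnableVisualStyles();
            Application.Run(new QuizForm());
        }
    }

    public class QuizForm : Form
    {
        private readonly Button showScore = new Button { Text = "Show my score" };

        public QuizForm()
        {
            // Entry Point #2: triggered when the user clicks the button.
            showScore.Click += ShowScore_Click;
            Controls.Add(showScore);
        }

        private void ShowScore_Click(object sender, EventArgs e)
        {
            // Hypothetical scoring logic; the real feature logic would live behind this handler.
            MessageBox.Show("Your score: 42");
        }
    }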
    At the same time it's a tangible unit for developers. Therefore, identifying the more or less numerous batch processors in a software system is a rewarding task for product owners and developers alike. That's where user stories meet code. In this example the user story translates to the Entry Point triggered by clicking the login button on a dialog like this:

    The batch then retrieves what has been entered via keyboard, loads data from a user store, and finally outputs some kind of response on the screen, e.g. by displaying an error message or showing the next dialog. This is all very simple, but you see, there is not just one thing happening, but several:

    1. Get input (email address, password)
    2. Load user for email address
       2.1 If user not found report error
    3. Check password
       3.1 Hash password
       3.2 Compare hash to hash stored in user
    4. Show next dialog

    Viewed from 10,000 feet it's all done by the Entry Point function. And of course that's technically possible. It's just a bunch of logic and calling a couple of API functions. However, I suggest taking these steps as distinct aspects of the overall requirement described by the user story. Such aspects of requirements I call Features. Features too are increments. Each provides some (small) value of its own to the user. Each can be checked individually by a product owner. Instead of implementing all the logic behind the Login() entry point at once you can move forward increment by increment, e.g.

    First implement the dialog, let the user enter any credentials, and log him/her in without any checks. Features 1 and 4.
    Then hard-code a single user and check the email address. Features 2 and 2.1.
    Then check the password without hashing it (or use a very simple hash like the length of the password). Features 3 and 3.2.
    Replace the hard-coded user with a persistent user directory, but a very simple one, e.g. a CSV file. Refinement of feature 2.
    Calculate the real hash for the password. Feature 3.1.
    Switch to the final user directory technology.

    Each feature provides an opportunity to deliver results in a short amount of time and get feedback. If you're in doubt whether you can implement the whole entry point function until tomorrow night, then just go for a couple of features or even just one. That's also why I think you should strive for wrapping feature logic into a function of its own. It's a matter of Evolvability and Production Efficiency. A function per feature makes the code more readable, since the language of requirements analysis and design is carried over into implementation. It makes it easier to apply changes to features because it's clear where their logic is located. And finally, of course, it lets you re-use features in different contexts (read: increments). Feature functions make it easier for you to think of features as Spinning increments, to implement them independently, to let the product owner check them for acceptance individually. A sketch of such an entry point built from feature functions follows below.

    Increments consist of features, entry point functions consist of feature functions. So you can view software as a hierarchy of requirements from broad to thin which map to a hierarchy of functions - with entry points at the top.

    I like this image of software as a self-similar structure on many levels of abstraction where requirements and code match each other. That to me is true agile design: the core tenet of Agility to move forward in increments is carried over into implementation. Increments on paper are retained in code. This way developers can easily relate to product owners. Elusive and fuzzy requirements are not tangible.
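    Here is a rough C# sketch (my own illustration, not the post's code) of the Login() entry point composed of one function per feature; the type names, the user store and the deliberately primitive hash are assumptions standing in for an early increment:

    using System;

    public class LoginController
    {
        private readonly IUserStore users;

        public LoginController(IUserStore users) { this.users = users; }

        // Entry point: triggered by the login button. It only orchestrates feature functions.
        public LoginResult Login(string email, string password)       // Feature 1: input arrives here
        {
            var user = LoadUser(email);                                // Feature 2
            if (user == null)
                return LoginResult.Error("Unknown email address");     // Feature 2.1

            if (!CheckPassword(user, password))                        // Feature 3
                return LoginResult.Error("Wrong password");

            return LoginResult.Success();                              // Feature 4: caller shows the next dialog
        }

        private User LoadUser(string email) { return users.FindByEmail(email); }

        private bool CheckPassword(User user, string password)
        {
            var hash = HashPassword(password);                         // Feature 3.1
            return hash == user.PasswordHash;                          // Feature 3.2
        }

        private string HashPassword(string password)
        {
            // Deliberately primitive stand-in; a later increment would swap in a real hash.
            return password.Length.ToString();
        }
    }

    public interface IUserStore { User FindByEmail(string email); }
    public class User { public string Email; public string PasswordHash; }

    public class LoginResult
    {
        public bool Succeeded; public string Message;
        public static LoginResult Success() { return new LoginResult { Succeeded = true }; }
        public static LoginResult Error(string message) { return new LoginResult { Succeeded = false, Message = message }; }
    }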
    Software production is moving forward through requirements one increment at a time, and one function at a time.

    In closing Product owners and developers are different - but they need to work together towards a shared goal: working software. So their notions of software need to be made compatible; they need to be connected. The increments of the product owner - user stories and features - need to be mapped straightforwardly to something which is relevant to developers. To me that's functions. Yes, functions, not classes nor components nor micro services. We're talking about behavior, actions, activities, processes. Their natural representation is a function. Something has to be done. Logic has to be executed. That's the purpose of functions. Later, classes and other containers are needed to stay on top of a growing amount of logic. But to connect developers and product owners, functions are the appropriate glue. Functions which represent increments.

    Can such a small increment always be found to deliver by tomorrow evening? I boldly say yes. Yes, it's always possible. But maybe you have to start thinking differently. Maybe the product owner needs to start thinking differently. Completion is not the goal anymore. Neither is checking the delivery of an increment through the user interface of a piece of software. Product owners need to become comfortable using test beds for certain features. If it's hard to slice requirements thin enough for Spinning, the reason is too little knowledge of something. Maybe you don't yet understand the problem domain well enough? Maybe you don't yet feel comfortable with some tool or technology? Then it's time to acknowledge this fact. Be honest about your not knowing. And instead of trying to deliver as a craftsman, officially become a researcher. Research and check back with the product owner every day - until your understanding has grown to a level where you are able to define the next Spinning increment.

    [2] Sometimes even thin requirement slices will cover several Entry Points, like "Add validation of email addresses to all relevant dialogs." Validation then will be put into a dozen functions. Still, though, it's important to determine which Entry Points exactly get affected. That's much easier if you strive for keeping the number of Entry Points per increment to 1.

    [3] If you like, call Entry Point functions event handlers, because that's what they are. They all handle events of some kind, whether that's palpable in your code or not. A public void btnSave_Click(object sender, EventArgs e) {…} might look like an event handler to you, but public static void Main() {…} is one also - for the event "program started".

    Read the article

  • CodePlex Daily Summary for Sunday, April 01, 2012

    CodePlex Daily Summary for Sunday, April 01, 2012Popular Releasesxyzzy+: April 1, 2012: SHA1: 6a07f0ed8d8006f26936a5bb45cf85405d8de8a4 WarningThis release is not for daily use, just for fun. keymaps are broken. (For example, C-g, #\TAB and #\RET will not work in minibuffer) dialogs are completely broken. Usual xyzzy+class lisp_object; typedef lisp_object *lisp; lsymbol *p = ldata <lsymbol, Tsymbol>::alloc (); Today's xyzzy+ref class lisp_object; typedef lisp_object ^lisp; lsymbol ^p = gcnew lsymbol (Tsymbol); PrerequisitesMicrosoft Visual C++ 2010 SP1 Redistributable Pack...VidCoder: 1.3.2: Added option for the minimum title length to scan. Added support to enable or disable LibDVDNav. Added option to prompt to delete source files after clearing successful completed items. Added option to disable remembering recent files and folders. Tweaked number box to only select all on a quick click.MJP's DirectX 11 Samples: Light Indexed Deferred Rendering: Implements light indexed deferred using per-tile light lists calculated in a compute shader, as well as a traditional deferred renderer that uses a compute shader for per-tile light culling and per-pixel shading.Extended WPF Toolkit: Extended WPF Toolkit - 1.6.0: Want an easier way to install the Extended WPF Toolkit?The Extended WPF Toolkit is available on Nuget. What's in the 1.6.0 Release?BusyIndicator ButtonSpinner Calculator CalculatorUpDown CheckListBox - Breaking Changes CheckComboBox - New Control ChildWindow CollectionEditor CollectionEditorDialog ColorCanvas ColorPicker DateTimePicker DateTimeUpDown DecimalUpDown DoubleUpDown DropDownButton IntegerUpDown Magnifier MaskedTextBox MessageBox MultiLineTex...ScriptIDE: Release 4.4: ...Media Companion: MC 3.434b Release: General This release should be the last beta for 3.4xx. If there are no major problems, by the end of the week it will upgraded to 3.500 Stable! The latest mc_com.exe should be included too! TV Bug fix - crash when using XBMC scraper for TV episodes. Bug fix - episode count update when adding new episodes. Bug fix - crash when actors name was missing. Enhanced TV scrape progress text. Enhancements made to missing episodes display. Movies Bug fix - hide "Play Trailer" when multisaev...Better Explorer: Better Explorer 2.0.0.831 Alpha: - A new release with: - many bugfixes - changed icon - added code for more failsafe registry usage on x64 systems - not needed regfix anymore - added ribbon shortcut keys - Other fixes Note: If you have problems opening system libraries, a suggestion was given to copy all of these libraries and then delete the originals. Thanks to Gaugamela for that! (see discussion here: 349015 ) Note2: I was upload again the setup due to missing file!LINQ Extensions Library: 1.0.2.7: Append and Prepend extensions (1.0.2.7) IndexOf extensions (1.0.2.7) New Align/Match extensions (1.0.2.6) Ready to use stable code with comprehensive unit tests and samples New Pivot extensions New Filter ExtensionsMonoGame - Write Once, Play Everywhere: MonoGame 2.5: The MonoGame team are pleased to announce that MonoGame v2.5 has been released. This release contains important bug fixes, implements optimisations and adds key features. MonoGame now has the capability to use OpenGLES 2.0 on Android and iOS devices, meaning it now supports custom shaders across mobile and desktop platforms. Also included in this release are native orientation animations on iOS devices and better Orientation support for Android. 
There have also been a lot of bug fixes since t...SQL Server Reporting Services MSBuild Tasks: Beta Release 1.1.15427: This update beta release base on feedback from a user. Also a coding error was corrected. The updates are as follows: Remove Redundant task: CreateDataSubscriptions. Updated CreateSubscriptions To handle both Subscriptions and Data-Driven Subscriptions. Also the change how the CreateSubscriptions works. If the report, for wihch if define for the subscription, already has subscription define then by default all the Subscriptions for that report are not deploy. This can be overr...Circuit Diagram: Circuit Diagram 2.0 Alpha 3: New in this release: Added components: Microcontroller Demultiplexer Flip & rotate components Open XML files from older versions of Circuit Diagram Text formatting for components New CDDX syntax Other fixesUmbraco CMS: Umbraco 5.1 CMS (Beta): Beta build for testing - please report issues at issues.umbraco.org (Latest uploaded: 5.1.0.123) What's new in 5.1? The full list of changes is on our http://progress.umbraco.org task tracking page. It shows items complete for 5.1, and 5.1 includes items for 5.0.1 and 5.0.2 listed there too. Here's two headline acts: Members5.1 adds support for backoffice editing of Members. We support the pairing up of our content type system in Hive with regular ASP.NET Membership providers (we ship a def...51Degrees.mobi - Mobile Device Detection and Redirection: 2.1.2.11: One Click Install from NuGet Changes to Version 2.1.2.11Code Changes 1. The project is now licenced under the Mozilla Public Licence 2. 2. User interface control and associated data access layer classes have been added to aid developers integrating 51Degrees.mobi into wider projects such as content management systems or web hosting management solutions. Use the following in a web form or user control to access these new UI components. <%@ Register Assembly="FiftyOne.Foundation" Namespace="...JSON Toolkit: JSON Toolkit 3.1: slight performance improvement (5% - 10%) new JsonException classPicturethrill: Version 2.3.28.0: Straightforward image selection. New clean UI look. Super stable. Simplified user experience.SQL Monitor - managing sql server performance: SQL Monitor 4.2 alpha 16: 1. finally fixed problem with logic fault checking for temporary table name... I really mean finally ...ScintillaNET: ScintillaNET 2.5: A slew of bug-fixes with a few new features sprinkled in. This release also upgrades the SciLexer and SciLexer64 DLLs to version 3.0.4. The official stuff: Issue # Title 32402 32402 27137 27137 31548 31548 30179 30179 24932 24932 29701 29701 31238 31238 26875 26875 30052 30052 Harness: Harness 2.0.2: change to .NET Framework Client Profile bug fix the download dialog auto answer. bug fix setFocus command. add "SendKeys" command. remove "closeAll" command. minor bugs fixed.BugNET Issue Tracker: BugNET 0.9.161: Below is a list of fixes in this release. Bug BGN-2092 - Link in Email "visit your profile" not functional BGN-2083 - Manager of bugnet can not edit project when it is not public BGN-2080 - clicking on a link in the project summary causes error (0.9.152.0) BGN-2070 - Missing Functionality On Feed.aspx BGN-2069 - Calendar View does not work BGN-2068 - Time tracking totals not ok BGN-2067 - Issues List Page Size Bug: Index was out of range. 
Must be non-negative and less than the si...YAF.NET (aka Yet Another Forum.NET): v1.9.6.1 RTW: v1.9.6.1 FINAL is .NET v4.0 ONLY v1.9.6.1 has: Performance Improvements .NET v4.0 improvements Improved FaceBook Integration KNOWN ISSUES WITH THIS RELEASE: ON INSTALL PLEASE DON'T CHECK "Upgrade BBCode Extensions...". More complete change list and discussion here: http://forum.yetanotherforum.net/yaf_postst14201_v1-9-6-1-RTW-Dated--3-26-2012.aspxNew Projects.NET Micro Framework - String Extensions: String Extension class library for .NET Micro Framework. This includes basic type conversion from 'byte' to 'string'.AGS: AGSAtlas Engine: Atlas is a game object-component engine using XNA 4 for Windows Phone 7.1. It is currently very early in it's development and is very much a work in progress.Cet Open Toolbox: Public repository for open sources projects brought to you by CET Electronics. Featuring .Net, .Net Micro Framework and several related technologies.ClassM: ClassM is an app that uses Metro Style for Windows 8. This application is intended to facilitate the management of classes taught by a teacher.CommandLineHelp: CommandLineHelp is a framework for simplifying the automated execution of command-line programs and saving their output.Conectayas: Conectayas is an open source "Connect Four" alike game but transformable to "Tic-Tac-Toe" and to a lot of similar games that uses mouse. Written in DHTML (JavaScript, CSS and HTML). Very configurable. This cross-platform and cross-browser game was tested under BeOS, Linux, *BSD, Windows and others.Crudo: CRUDO - The MCG (Model-Controller-Generator) CGF (Code Generation Framework) Visit The Project HomePage: http://adityayadav.com/CRUDO_The_MCG_Model_Controller_Generator_CGF_Code_Generation_Framework.aspx Licenses: 1) GPL v2 2) Commercial (contact us for information)Desafio Dot.Net: Projeto para o Desafio DotNetFurcadia Heimdall Tester: An application that helps Furcadia technicians test the integrity of the game server. It checks for availability of each heimdall, its connectivity to the rest of the system (horton/tribble) and how often it receives a user compared to the rest of them.GS1: D is a 2D game demo written in C++ and using an API : HAPI for the graphic part and the audio part. All the xml files are handled with tinyXML. It is a vertical scrolling shoot'em up where the player controls a dragon flying in Central Park.GS2: In Zombies, you are a wizard, the most powerful wizard in the world, and two days ago, the Devil forces began to attack our world. The only person capable of stopping them is you, this is why the Devil himself came to you and took your powers. You're now alone, without any weaponHeterogeneous Data Centre: The Heterogeneous Data Centre project supersedes the Materials Data Centre, a JISC-funded initiative to build an infrastructure for materials scientists and engineers to publish their experimental data online. The HDC can support data from any discipline, not just engineering.HJJM Adv. Database Project: Advanced database project for Hughes, Johnson, Johnson, and McShannon.Hundiyas: Hundiyas is an open source "Battleship" alike game totally written in DHTML (JavaScript, CSS and HTML) that uses mouse. 
This cross-platform and cross-browser game was tested under BeOS, Linux, *BSD, Windows and others.IpSpy: IpSpy is a Windows Service Application that checks External IP address and if it changed, IpSpy sends Email with new IP to specified email addressMake calculator in asp.net: create calculator in asp.netMarTech SharePoint Sandboxed Solutions: Microsoft SharePoint 2010 is missing some key functionalities to make sure SharePoint is easy to use. My Sandbox Solutions adds these missing functionalities and makes it easier for consultants to implement the wanted functionalities. By using sandboxed solutions no farm solution has to be installed and every site can have it own solutions. Sandbox solutions gives flexibility to the site administrator without disturbing the farm administrator and security risks.MDS Administration: Master Data Services Administrator. Compare MDS models from the same or different serversmicrostockUploader: Uploads multiple JPEG images with additional files (RAW, EPS) to multiple microstocks. Supports FTP resume. Supports buggy routers which drop FTP connection after some timeout.Min-Mang: A logical game implementation.Multiverse OS: A Cosmos based O.S.N2F Yverdon Database Helper: A class to aid in performing simple database queries within N2F Yverdon. Also provides the capability to store queries for later use.N2F Yverdon Scryle Manager: This extension will provide a way to manage javascript and stylesheet files for inclusion in your templates. Compression, combination and minification are included.OPSM: OPSM Miner & information projectPatternPro Regular Expression Engine: PatternPro RXE is a Regular Expression Engine coded entirely in C# that has some features not offered in the MS implementation. The PatterProRXE project also contains a multi-state text scanner that makes it easy to create multi-state text scanners and parsers.PinBeiWang: PinBeiWangProgram Options: Parse command line optionsrealestateanalytics: Analytics for real estateRegistrationManagement: registration management of our company using asp.netSchool Project 12: SchoolProject12SelfService: Simple self service projectSMVector3: Vector3 class implemented as float array or with SIMD instructions with the same interface so it is transparant whether you decide to use one version or another. You can also change version during the life cycle of the projects.SVNTAGWC - Tag a SVN working copy: SVNTAGWC will help users and configuration managers tag builds of their projects. It will automatically freeze all external revisions and add all unversioned files to a specified copy (or tag).WeiboImage: a weibo image projectweizhi: sina weibo readerWindows Media Autorization: Windows Media Autorizaton PlugIn for windows media 9 WinRtBehaviors: A project for WinRT Attached behaviorswpfPostgres: Started...ZLib: by zapline 278998871@qq.com???????????: ???????? «???????????», ???????????? ? ?????? ?????????????? ??????????? ???????? ?? C#. ???????? ?? C#.

    Read the article

  • CodePlex Daily Summary for Monday, April 02, 2012

    CodePlex Daily Summary for Monday, April 02, 2012Popular ReleasesDocument.Editor: 2012.2: Whats New for Document.Editor 2012.2: New Save Copy support New Page Setup support Minor Bug Fix's, improvements and speed upsVidCoder: 1.3.2: Added option for the minimum title length to scan. Added support to enable or disable LibDVDNav. Added option to prompt to delete source files after clearing successful completed items. Added option to disable remembering recent files and folders. Tweaked number box to only select all on a quick click.MJP's DirectX 11 Samples: Light Indexed Deferred Rendering: Implements light indexed deferred using per-tile light lists calculated in a compute shader, as well as a traditional deferred renderer that uses a compute shader for per-tile light culling and per-pixel shading.Pcap.Net: Pcap.Net 0.9.0 (66492): Pcap.Net - March 2012 Release Pcap.Net is a .NET wrapper for WinPcap written in C++/CLI and C#. It Features almost all WinPcap features and includes a packet interpretation framework. Version 0.9.0 (Change Set 66492)March 31, 2012 release of the Pcap.Net framework. Follow Pcap.Net on Google+Follow Pcap.Net on Google+ Files Pcap.Net.DevelopersPack.0.9.0.66492.zip - Includes all the tutorial example projects source files, the binaries in a 3rdParty directory and the documentation. It include...Extended WPF Toolkit: Extended WPF Toolkit - 1.6.0: Want an easier way to install the Extended WPF Toolkit?The Extended WPF Toolkit is available on Nuget. What's in the 1.6.0 Release?BusyIndicator ButtonSpinner Calculator CalculatorUpDown CheckListBox - Breaking Changes CheckComboBox - New Control ChildWindow CollectionEditor CollectionEditorDialog ColorCanvas ColorPicker DateTimePicker DateTimeUpDown DecimalUpDown DoubleUpDown DropDownButton IntegerUpDown Magnifier MaskedTextBox MessageBox MultiLineTex...ScriptIDE: Release 4.4: ...Media Companion: MC 3.434b Release: General This release should be the last beta for 3.4xx. If there are no major problems, by the end of the week it will upgraded to 3.500 Stable! The latest mc_com.exe should be included too! TV Bug fix - crash when using XBMC scraper for TV episodes. Bug fix - episode count update when adding new episodes. Bug fix - crash when actors name was missing. Enhanced TV scrape progress text. Enhancements made to missing episodes display. Movies Bug fix - hide "Play Trailer" when multisaev...Better Explorer: Better Explorer 2.0.0.831 Alpha: - A new release with: - many bugfixes - changed icon - added code for more failsafe registry usage on x64 systems - not needed regfix anymore - added ribbon shortcut keys - Other fixes Note: If you have problems opening system libraries, a suggestion was given to copy all of these libraries and then delete the originals. Thanks to Gaugamela for that! (see discussion here: 349015 ) Note2: I was upload again the setup due to missing file!MonoGame - Write Once, Play Everywhere: MonoGame 2.5: The MonoGame team are pleased to announce that MonoGame v2.5 has been released. This release contains important bug fixes, implements optimisations and adds key features. MonoGame now has the capability to use OpenGLES 2.0 on Android and iOS devices, meaning it now supports custom shaders across mobile and desktop platforms. Also included in this release are native orientation animations on iOS devices and better Orientation support for Android. 
There have also been a lot of bug fixes since t...Circuit Diagram: Circuit Diagram 2.0 Alpha 3: New in this release: Added components: Microcontroller Demultiplexer Flip & rotate components Open XML files from older versions of Circuit Diagram Text formatting for components New CDDX syntax Other fixesUmbraco CMS: Umbraco 5.1 CMS (Beta): Beta build for testing - please report issues at issues.umbraco.org (Latest uploaded: 5.1.0.123) What's new in 5.1? The full list of changes is on our http://progress.umbraco.org task tracking page. It shows items complete for 5.1, and 5.1 includes items for 5.0.1 and 5.0.2 listed there too. Here's two headline acts: Members5.1 adds support for backoffice editing of Members. We support the pairing up of our content type system in Hive with regular ASP.NET Membership providers (we ship a def...51Degrees.mobi - Mobile Device Detection and Redirection: 2.1.2.11: One Click Install from NuGet Changes to Version 2.1.2.11Code Changes 1. The project is now licenced under the Mozilla Public Licence 2. 2. User interface control and associated data access layer classes have been added to aid developers integrating 51Degrees.mobi into wider projects such as content management systems or web hosting management solutions. Use the following in a web form or user control to access these new UI components. <%@ Register Assembly="FiftyOne.Foundation" Namespace="...JSON Toolkit: JSON Toolkit 3.1: slight performance improvement (5% - 10%) new JsonException classPicturethrill: Version 2.3.28.0: Straightforward image selection. New clean UI look. Super stable. Simplified user experience.SQL Monitor - managing sql server performance: SQL Monitor 4.2 alpha 16: 1. finally fixed problem with logic fault checking for temporary table name... I really mean finally ...ScintillaNET: ScintillaNET 2.5: A slew of bug-fixes with a few new features sprinkled in. This release also upgrades the SciLexer and SciLexer64 DLLs to version 3.0.4. The official stuff: Issue # Title 32402 32402 27137 27137 31548 31548 30179 30179 24932 24932 29701 29701 31238 31238 26875 26875 30052 30052 Vodigi Open Source Interactive Digital Signage: Vodigi Release 5.0: Vodigi Release 5.0 The .ZIP file for this release contains everything you need to setup and install Vodigi 5.0. Setup and intallation documentation is included in the .ZIP file. Vodigi Release 5.0 consists of the following core components: Vodigi Administrator Web Site Vodigi Player Windows Application Vodigi Media Uploader Windows Application Vodigi Databases Refer to the documentation included in the .ZIP file to setup and configure your servers and player devices for this release.Harness: Harness 2.0.2: change to .NET Framework Client Profile bug fix the download dialog auto answer. bug fix setFocus command. add "SendKeys" command. remove "closeAll" command. minor bugs fixed.BugNET Issue Tracker: BugNET 0.9.161: Below is a list of fixes in this release. Bug BGN-2092 - Link in Email "visit your profile" not functional BGN-2083 - Manager of bugnet can not edit project when it is not public BGN-2080 - clicking on a link in the project summary causes error (0.9.152.0) BGN-2070 - Missing Functionality On Feed.aspx BGN-2069 - Calendar View does not work BGN-2068 - Time tracking totals not ok BGN-2067 - Issues List Page Size Bug: Index was out of range. 
Must be non-negative and less than the si...YAF.NET (aka Yet Another Forum.NET): v1.9.6.1 RTW: v1.9.6.1 FINAL is .NET v4.0 ONLY v1.9.6.1 has: Performance Improvements .NET v4.0 improvements Improved FaceBook Integration KNOWN ISSUES WITH THIS RELEASE: ON INSTALL PLEASE DON'T CHECK "Upgrade BBCode Extensions...". More complete change list and discussion here: http://forum.yetanotherforum.net/yaf_postst14201_v1-9-6-1-RTW-Dated--3-26-2012.aspxNew ProjectsAdvanced JavaScript outlining for Visual Studio 11: This is extension for Visual Studio 11 that adds additional outlining for JavaScript Editorakrypt2: qt-based GUI for libaxel http://axelkenzo.ru/index.php?section=libaxel.downloadaluminium: aluminium calculationAuditDbContext - Entity Framework Auditing Context: AuditDbContext provides entity change auditing for Entity Framework POCO entities.AutoBox: Creates a fresh .Net developer environment from a bare OS utilizing powershell, chocolatey and webpi.Bookregator: Bookregator is a C# application written to aggregate data using Amazon's Advertising API, WorldCat, and GoodReads.Bootstrap.ConfirmModal: There is a situation that we need user to confirm before they proceed their action. You don’t want to accidently delete very important information. So I come up the idea to extend the bootstrap modal popup to create a confirm modal before calling the function to delete some stuffColour Lovers .NET: A .NET library for the Colour Lovers API.Cursos y Causas: Cursos y Causas desarrollado en asp.net MVCDataModels: DataModels is a project which aims to allow for easy reuse of specific data models using a very simple API. easy framework is used to fast work: codingEGM Engine: The engine for the Express Game Maker editor.E-Junkey: Project personalExpress Game Maker: You can use Express Game Maker to makes games without the need to write a single line of code! EGM's source is shared and is constantly improved by developers around the world. Learn Express Game Maker in no time with tutorials, videos and templates. Share what you learn with the community and ask the community for help. Express Game Maker is free, if you paid for it anywhere, we suggest you ask for a refund.Extending Razor Engine: Extending Razor Engine. Nice and clean solution for CMS system, such as Kentico, red dot, etc.FloodWarn: A series of server and client apps for monitoring flood levels on the Snoqualmie River in King County, Washington.GetThatList: With GetThatList people will find an easy way to copy a music playlist and its songs to another location, being another folder or a remote computer. It is designed so that it can be exposed to the final user as an standalone application or a Shell extension for playlist files.HashMapper - Object-Hash Mapper for Redis: Object-Hash Mapper for Redis and BookSleeve.hostedit: small utility to quickly change the host file. toggles a single clickI-Control: SecretIIS Hosts File Manager: Here's an IIS 7.5 and 8.0 module to add host headers to the Hosts file without having to edit it with notepad. Very useful if you create a lot of web sites for testing or demo purposes. Interval Trainer: Inspired by new research* on interval training this application will help you easily transition into interval based workouts. 
Current Features Two interval cycles that can be individually customized in time length Preloaded ideal workout for aerobic exercise based on suggested research* (high intensity sprints during workout interval, light jogs during rest) Coming Soon Custom interval workouts with as many intervals in a cycle as necessary Persistent workout settings *Links to t...MicroRuntime: The MicroRuntime project is a .NET utility library.MS Office Word Navigation: Navigate forward/backward inside a Word document MvcFlow: Integration between Workflow Foundation 4.5 and ASP.NET MVC 4NewLineReplacer: Replace letter fast and easy in great textfilesObject-Oriented CAML: Using CAML objectsOpenCover: A code coverage tool for .NET 2 and above, support for 32 and 64 processes (including Silverlight) with both branch and sequence points; now supporting coverage by test feature. This is a mirror of the original github repository to allow codeplex users to contribute. The latest downloads can be found here https://github.com/sawilde/opencover/downloads and is also available using nuget. Pavings.NET: Library for applied interval analysis including intervals, boxes and sub-pavings. Interval analysis is a method of approximating sets with any degree of precision and it has applications from optimization to robotics. Inspired by book "Applied Interval Analysis" by Luc Jaulin et al.proyectoIntegrador3: Proyecto Integrador 3QPAPrintLib: Print every document by its recommended programmsChord: Typing text on the consoles doesn’t have to mean another trauma. Having to navigate to each letter with arrows and analog sticks is really inefficient & lame. In fact you can easily encode each character as a combination of positions of two analog sticksServer DateTime: Server DateTime renders the date and time from the server and make it active using javascript. It is in Military Time Format.SjclHelpers: Helpers for using the Stanford Javascript Crypto Library with .NET.SSAS AMO DB: SSAS AMO DB is a database version of AMO which helps to view the metadata stored in the SSAS cube. The Metadata will be loaded from the SSAS cube using AMO into a SQL Server database using SSIS package. From that database user can generate reports for the SSAS metadata. This database stores the below SSAS objects and their properties Server Databases DataSource DataSourceView DataSourceViewTablecolumns Cube CubeDimension DimensionAttribute AttributeKeyColumns AttributeKeyColum...Stump: A really small BDD framework built on top of nunitSvnbox.org svn sync project (dropbox like): Sync your folder on svn repository work as a teamTextShadowWrapper: TextShadowWrapper is a custom server control for ASP.NET web pages. It inherits from System.Web.UI.WebControls.Label and supports CSS3 text shadows.tjnetSite: Web Site for the Tijuana .Net User groupTönnenKlapps: XNA game where you try to smash a 3D spinning barrel using the correct coloured buttons and the right timing.WP7 Selected Pivot: an example showing how to navigate from one page to desired pivot on another pageXAML Metro Application Isolated Storage Helper: XAMLMetroAppIsolatedStorageHelper helps to Save, Retrieve and Delete structured data in the Isolated Storage. This helper class helps in XAML based Metro application. xsockets: XSockets Test

    Read the article

  • why is Outlook 2007 continuously losing Connection to Exchange Server

    - by Manu
    What could be the reason for Outlook 2007 continuously losing and re-establishing the connection to Exchange? I tried disabling all anti-viruses and firewalls but it did not help. I should mention that even though this seems to happen to all users, some users cannot even send emails because it happens every few seconds, while others can work relatively undisturbed (it happens a few times per hour).

    Read the article

  • Bandwidth Limit for IIS FTP 7.0(or 7.5)

    - by oruchreis
    Hi, FTP Server 7 doesn't support setting a bandwidth limit on upload or download. Is there any way to limit the bandwidth of either upload or download? Is there any extension or plugin that provides this feature? I don't want to use third-party FTP servers. Maybe there is an extension for IIS that can do the limiting. Edit: I want to limit per user or per virtual directory. Regards.

    Read the article

  • Windows Server 2003 Terminal Server does not give out all available licenses (solved)

    - by Erwin Blonk
    I installed the Terminal Server role in Windows Server 2003 Standard 64-bit. Still, only 2 connections are allowed. The License Manager says that there are 10 Device CALs available, which is correct, and that none are given out. For good measure I let the server reboot, to no effect. Before this, there was another server (same Windows, except that it is 32-bit) active as a licensing server. I removed the role first and then added it to the new server. I then removed the Terminal Server Licensing Server component from the old one and added it to the new one. After that, I added the licenses. When that didn't give the required result, I rebooted the new server. Still, the new server, with licenses and all, acts as if it only has the default 2 RDP connections. The servers are all stand-alone; there is no Active Directory set up. Both servers are in different workgroups.

    Update (4/12/10): The server has changed the entries in Terminal Server Licensing a few times. After installing the licenses it added an entry whose exact phrasing I forgot, but it was about temporary Windows 2003 device licenses. Later it added Windows Server 2003 - TS Per Device CAL. The temporary entry held 2 licenses (standard RDP licenses, I think) and the other held 10. At some point, seemingly unrelated to the testing we did, it used a license from the new pool. This morning, 2 licenses were used from the pool of 10 and only 1 from the temporary/RDP pool (I wish I had screenshots to show; it changed every few hours or so it seems). Although I had already activated the server over the internet, and re-activated it, I decided to go through the whole procedure by phone.

    Update 2 (4/12/10): The problem has been solved. It seems the activation over the web, while it claimed to have succeeded, did not work correctly. After activating by phone, it did work. What was different from the old setup, and what put me on the wrong foot from that moment, was that I now need to create separate user accounts, because a session opened with one user account is taken over when someone else logs in with that same account. On the previous server, it was possible to open several sessions with the same account. We now use Per Device licenses; I'm not sure what was used before. Thanks all for the replies.

    Read the article

  • JVM process resident set size "equals" max heap size, not current heap size

    - by Volune
    After a bit of reading about JVM memory (here, here, here, others I forgot...), I am expecting the resident set size of my Java process to be roughly equal to the current heap space capacity. That's not what the numbers are saying; it seems to be roughly equal to the max heap space capacity:

    Resident set size:

    # echo 0 $(cat /proc/1/smaps | grep Rss | awk '{print $2}' | sed 's#^#+#') | bc
    11507912
    # ps -C java -O rss | gawk '{ count ++; sum += $2 }; END {count --; print "Number of processes =",count; print "Memory usage per process =",sum/1024/count, "MB"; print "Total memory usage =", sum/1024, "MB" ;};'
    Number of processes = 1
    Memory usage per process = 11237.8 MB
    Total memory usage = 11237.8 MB

    Java heap:

    # jmap -heap 1
    Attaching to process ID 1, please wait...
    Debugger attached successfully.
    Server compiler detected.
    JVM version is 24.55-b03

    using thread-local object allocation.
    Garbage-First (G1) GC with 18 thread(s)

    Heap Configuration:
       MinHeapFreeRatio = 10
       MaxHeapFreeRatio = 20
       MaxHeapSize      = 10737418240 (10240.0MB)
       NewSize          = 1363144 (1.2999954223632812MB)
       MaxNewSize       = 17592186044415 MB
       OldSize          = 5452592 (5.1999969482421875MB)
       NewRatio         = 2
       SurvivorRatio    = 8
       PermSize         = 20971520 (20.0MB)
       MaxPermSize      = 85983232 (82.0MB)
       G1HeapRegionSize = 2097152 (2.0MB)

    Heap Usage:
    G1 Heap:
       regions  = 2560
       capacity = 5368709120 (5120.0MB)
       used     = 1672045416 (1594.586769104004MB)
       free     = 3696663704 (3525.413230895996MB)
       31.144272834062576% used
    G1 Young Generation:
    Eden Space:
       regions  = 627
       capacity = 3279945728 (3128.0MB)
       used     = 1314914304 (1254.0MB)
       free     = 1965031424 (1874.0MB)
       40.089514066496164% used
    Survivor Space:
       regions  = 49
       capacity = 102760448 (98.0MB)
       used     = 102760448 (98.0MB)
       free     = 0 (0.0MB)
       100.0% used
    G1 Old Generation:
       regions  = 147
       capacity = 1986002944 (1894.0MB)
       used     = 252273512 (240.5867691040039MB)
       free     = 1733729432 (1653.413230895996MB)
       12.702574926293766% used
    Perm Generation:
       capacity = 39845888 (38.0MB)
       used     = 38884120 (37.082786560058594MB)
       free     = 961768 (0.9172134399414062MB)
       97.58628042120682% used

    14654 interned Strings occupying 2188928 bytes.

    Are my expectations wrong? What should I expect? I need the heap space to be able to grow during spikes (to avoid very slow full GCs), but I would like to have the resident set size as low as possible the rest of the time, to benefit the other processes running on the server. Is there a better way to achieve that?

    Linux 3.13.0-32-generic x86_64, java version "1.7.0_55", running in Docker version 1.1.2. Java is running elasticsearch 1.2.0:

    /usr/bin/java -Xms5g -Xmx10g -XX:MinHeapFreeRatio=10 -XX:MaxHeapFreeRatio=20 -Xss256k -Djava.awt.headless=true -XX:+UseG1GC -XX:MaxGCPauseMillis=350 -XX:InitiatingHeapOccupancyPercent=45 -XX:+AggressiveOpts -XX:+UseCompressedOops -XX:-OmitStackTraceInFastThrow -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintClassHistogram -XX:+PrintTenuringDistribution -XX:+PrintGCApplicationStoppedTime -XX:+PrintGCApplicationConcurrentTime -Xloggc:/opt/elasticsearch/logs/gc.log -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/opt/elasticsearch/logs/heapdump.hprof -XX:ErrorFile=/opt/elasticsearch/logs/hs_err.log -Des.logger.port=99999 -Des.logger.host=999.999.999.999 -Delasticsearch -Des.foreground=yes -Des.path.home=/opt/elasticsearch -cp :/opt/elasticsearch/lib/elasticsearch-1.2.0.jar:/opt/elasticsearch/lib/*:/opt/elasticsearch/lib/sigar/* org.elasticsearch.bootstrap.Elasticsearch

    There actually are 5 elasticsearch nodes, each in a different Docker container. All have about the same memory usage.
Some stats about the index: size: 9.71Gi (19.4Gi) docs: 3,925,398 (4,052,694)

    Read the article

< Previous Page | 83 84 85 86 87 88 89 90 91 92 93 94  | Next Page >