Search Results

Search found 772 results on 31 pages for 'ordering'.


  • Help dealing with data dependency between two registration forms

    - by franko75
    I have a tricky issue here with the registration of both a user and his/her pet. The user and the pet are treated as separate entities and both require separate registration forms, but the user's pet has to be linked to the user via a foreign key in the database. When a new user joins the site, they first register their pet, then they register themselves. The reason for this order is to check the pet's eligibility for the site first (there are some criteria to be met), instead of having the user sign up only to then find out their pet is ineligible. It is this ordering of the form submissions which is causing me a bit of a headache, as follows...

    The site is being developed with an MVC framework: the user registration process is managed by a method in the User_form controller, while the pet registration process is managed by a method in the Pet_form controller. The pet registration form happens first, and the pet data can be saved without the owner_id at this stage, with the user id possibly being added later (e.g. by retrieving the pet's id from the session) following user registration. However, doing it this way could result in redundant data: pet records would be created in the database, but if the user doesn't actually register themselves too, the pets will be ownerless records in the DB.

    The other option is to serialize the new pet's data at the pet registration stage and not save it to the DB until the user fills out their registration form. Once the user is created, I can pass the serialized data AND the owner_id to a method in the Pet model which can update the DB. However, I also need to set the newly created $pet to $this->pet, which I then access for a sequence of other related forms. Should I just set the session variable in the model method? Then in the Pet controller constructor, check for a pet stored in the session and, if found, assign it to $this->pet... If this makes any sense to anybody and you have some advice, I'd be grateful to hear it!
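    A minimal sketch of the session-based approach described above, in framework-agnostic PHP (the property, key, and method names are all made up for illustration):

        // Pet_form controller: pet passed eligibility, so park it in the
        // session instead of the DB -- no ownerless rows if the user bails.
        $_SESSION['pending_pet'] = serialize($pet);

        // User_form controller, right after the user row is created:
        if (isset($_SESSION['pending_pet'])) {
            $pet = unserialize($_SESSION['pending_pet']);
            $pet->owner_id = $newUserId;    // hypothetical FK property
            $this->petModel->save($pet);    // hypothetical model save
            $_SESSION['pet_id'] = $pet->id; // Pet controller constructor can
                                            // rehydrate $this->pet from this
        }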

    Read the article

  • boost multi_index partial indexes

    - by Gokul
    Hi, I want to implement, inside a Boost multi-index container, two sets of keys with the same search criteria but different eviction criteria. Say I have two sets of data with the same search condition, but one set needs an MRU (Most Recently Used) list of 100 and the other set requires an MRU of 200. Say the entry is like this:

        class Student
        {
            int         student_no;
            char        sex;
            std::string address;
        };

    The search criterion is student_no, but for sex = 'm' we need an MRU of 200 and for sex = 'f' we need an MRU of 100. Now I have a solution where I introduce a new ordered index to maintain the ordering. For example, the IndexSpecifierList will be something like this:

        typedef multi_index_container<
            Student,
            indexed_by<
                ordered_unique< member<Student, int, &Student::student_no> >,
                ordered_unique<
                    composite_key<
                        Student, // composite_key takes the value type first
                        member<Student, char, &Student::sex>,
                        member<Student, int, &Student::sex_specific_student_counter>
                    >
                >
            >
        > student_set;

    Now, every time I am inserting a new one, I have to take an equal_range for that sex using index 2 and remove the oldest one, and if something is getting re-used, I have to update it by incrementing the counter. Is there a better solution to this kind of problem? Thanks, Gokul.
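    A sketch of the eviction step described above (assuming sex_specific_student_counter only ever grows, so within one sex the smallest counter is the oldest entry):

        // Trim one sex's MRU list down to its cap after an insert.
        void evict(student_set& ss, char sex, std::size_t cap)
        {
            auto& idx = ss.get<1>(); // the composite-key index ("index 2" above)
            auto range = idx.equal_range(boost::make_tuple(sex)); // partial lookup
            std::size_t n = std::distance(range.first, range.second);
            while (n-- > cap)
                range.first = idx.erase(range.first); // lowest counter = oldest
        }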

    Read the article

  • How to use clearInterval() and then make changes to the DOM?

    - by George D.
    I have this code, and the problem is that I want to stop the loop and then replace the text 'Done!' that comes from the sms-loader.php script with the "textnew" string. The problem is that the loop runs one more time, so the text in the div.checkstatus field is replaced again by the calling PHP script. The strange thing is that I see the log message and then still get a new (and final) request, although the ordering in my script is the opposite (first stop, then replace the text()). I need to understand why this is happening.

        $(document).ready(function() {
            var interval = "";
            $('.checkstatus').each(function() {
                var msgid = $(this).data('msg'),
                    $this = $(this),
                    hid = $this.data('history'),
                    textnew = '<a href="sms-dstatus.php?id=' + msgid + '&sid=' + hid +
                        '&amp;keepThis=true&amp;TB_iframe=true&amp;height=430&amp;width=770" ' +
                        'title="Delivery Status" class="thickbox"><img ' +
                        'src="../template/icons/supermini/chart_curve.png" alt="status" ' +
                        'width="16" height="16" /></a>';
                interval = setInterval(function() {
                    $this.load('../pages/sms-loader.php?id=' + msgid);
                    // stop the loop
                    if ($this.text() == 'Done!') {
                        // stop it
                        clearInterval(interval);
                        console.log(textnew);
                        this.html(textnew); /// this line is the problem
                    }
                }, 5000); // 5 secs
            });
        });
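    For what it's worth, a sketch of one likely fix (my best guess at the intent, not tested against the real page): each tick issues the load() before checking the text, and because .load() is asynchronous the check sees the previous tick's response; also, inside the interval callback this is window, not the jQuery object. Testing the response in load()'s own callback and using $this sidesteps both:

        interval = setInterval(function() {
            $this.load('../pages/sms-loader.php?id=' + msgid, function(response) {
                if (response == 'Done!') {
                    clearInterval(interval); // no further requests get scheduled
                    $this.html(textnew);     // safe now: nothing overwrites it
                }
            });
        }, 5000);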

    Read the article

  • Performing measures within the execution of a c++ code every t milliseconds

    - by user506901
    Given a while loop and the function ordering as follows:

        int k = 0;
        int total = 100;
        while (k < total) {
            doSomething();
            if (/* approx. t milliseconds elapsed */) {
                measure();
            }
            ++k;
        }

    I want to perform 'measure' every t milliseconds. However, since 'doSomething' can finish close to the t-th millisecond after the last execution, it is acceptable to perform the measure after approximately t milliseconds have elapsed since the last measure. My question is: how could this be achieved? One solution would be to set a timer to zero and check it after every 'doSomething'; when it is within the acceptable range, perform the measure and reset the timer. However, I'm not sure which C++ function I should use for such a task. As far as I can see, there are several candidate functions, but the debate on which one is the most appropriate is outside of my understanding. Note that some of the functions actually take into account the time used by other processes, but I want my timer to measure only the time spent executing my C++ code (I hope that is clear). Another issue is the resolution of the measurements, as pointed out below. Assume the medium option of those suggested.
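    As a sketch of the timer-and-check approach: std::clock() counts processor time used by this program, which matches the "only my code" requirement, though its resolution is limited to CLOCKS_PER_SEC ticks (the value of t here is an assumed example):

        #include <ctime>

        std::clock_t last = std::clock();
        const double t_ms = 50.0;  // desired period in ms (assumed value)
        int k = 0;
        const int total = 100;
        while (k < total) {
            doSomething();         // from the question
            double elapsed = 1000.0 * (std::clock() - last) / CLOCKS_PER_SEC;
            if (elapsed >= t_ms) { // "approximately t ms" since the last measure
                measure();         // from the question
                last = std::clock(); // reset the timer
            }
            ++k;
        }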

    Read the article

  • List with non-null elements ends up containing null. A synchronization issue?

    - by Alix
    Hi. First of all, sorry about the title -- I couldn't figure out one that was short and clear enough. Here's the issue: I have a list List<MyClass> list to which I always add newly-created instances of MyClass, like this: list.Add(new MyClass()). I don't add elements any other way. However, when I then iterate over the list with foreach, I find that there are some null entries. That is, the following code will sometimes throw an exception:

        foreach (MyClass entry in list)
            if (entry == null) throw new Exception("null entry!");

    I should point out that the list.Add(new MyClass()) calls are performed from different threads running concurrently. The only thing I can think of to account for the null entries is the concurrent access; List<> isn't thread-safe, after all. Though I still find it strange that it ends up containing null entries, instead of just not offering any guarantees on ordering. Can you think of any other reason?

    Also, I don't care in which order the items are added, and I don't want the calling threads to block while waiting to add their items. If synchronization is truly the issue, can you recommend a simple way to call the Add method asynchronously, i.e., create a delegate that takes care of that while my thread keeps running its code? I know I can create a delegate for Add and call BeginInvoke on it. Does that seem appropriate? Thanks.
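    A minimal sketch of the two usual fixes (the second assumes .NET 4 is available; order of entries is unspecified either way, which the question says is acceptable):

        // Option 1: one shared lock guarding every Add. The critical section is
        // tiny, so contention is rarely worth a delegate/BeginInvoke detour.
        private static readonly object listLock = new object();
        // ...
        lock (listLock) { list.Add(new MyClass()); }

        // Option 2: a thread-safe collection instead of List<T>.
        var queue = new System.Collections.Concurrent.ConcurrentQueue<MyClass>();
        queue.Enqueue(new MyClass()); // safe from any thread, no null phantoms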

    Read the article

  • sort elements in jQuery object

    - by Fresheyeball
    <div class="myClass">1</div> <div class="myClass">2</div> <div class="myClass">3</div> <div class="myClass">4</div> <div class="myClass">5</div> <div class="myClass">6</div> var $myClass = $('.myClass'); $myClass.eq(0).text() //returns '1' $myClass.eq(4).text() //returns '5' $myClass.eq(5).text() //returns '6' What I want to do is reorder the objects in jQuery manually. //fantasy command reversing $myClass.eqSorter([5,4,3,2,1,0]); $myClass.eq(0).text() //returns '6' $myClass.eq(5).text() //returns '1' What I really want is to allow a fully custom order input //fantasy command custom ordering $myClass.eqSorter([3,5,4,2,0,1]); $myClass.eq(0).text() //returns '4' $myClass.eq(5).text() //returns '2' Now I've looked at basic ways of doing this like .sort as a part of javascript, but .sort takes a dynamic argument the compares things, and does not allow for a fully custom order. I also looked at making a new blank jquery object that I could pass elements into like this $newObjectVar.eq(0) = $myClass.eq(3); for example. But as of yet I have not found a way to make a blank jQuery object like this.

    Read the article

  • Is there any PDF parser written in objective-c or c?

    - by user549683
    I'm writing a PDF reader iPhone application. I know how to show a PDF file in a view using the CGPDF* classes in iOS. What I want to do now is to search text in the PDF file and highlight the found text, so I need a library which can detect what text is at what position. Besides, I want the library to be able to handle Unicode and Chinese characters. I've searched for a few days but still cannot find anything suitable. I've tried xpdf, but it is written in C++, and I don't know how to use C++ code in an iPhone app. I've also tried http://www.codeproject.com/KB/cpp/ExtractPDFText.aspx but it does not handle Chinese characters. I've tried to code it myself, but the encoding in PDF is really complicated. For example, I don't know what to refer to when I want to decode the text using the following font:

        8 0 obj
        << /Type /Font /Subtype /Type0 /Encoding /Identity-H
           /BaseFont /RNXJTV+PMingLiU /DescendantFonts [ 157 0 R ] >>
        endobj

        157 0 obj
        << /Type /Font /Subtype /CIDFontType2 /BaseFont /RNXJTV+PMingLiU
           /CIDSystemInfo << /Registry (Adobe) /Ordering (CNS1) /Supplement 0 >>
           /FontDescriptor 158 0 R /W 161 0 R /DW 1000 /CIDToGIDMap 162 0 R >>
        endobj

        158 0 obj
        << /Type /FontDescriptor /Ascent 801 /CapHeight 711 /Descent -199
           /Flags 32 /FontBBox [0 -199 999 801] /FontName /RNXJTV+PMingLiU
           /ItalicAngle 0 /StemV 0 /Leading 199 /MaxWidth 1000 /XHeight 533
           /FontFile2 159 0 R >>
        endobj

    Read the article

  • Identity alternative for SQL Azure Federation : are Azure Queues or Service Bus Queues a good choice?

    - by JYL
    Like many developers, I'm looking for a way to integrate my existing app with SQL Azure Federations, and replacing the identity columns (the primary keys of my tables) is a big problem. For many reasons, I do NOT want to use GUIDs for my primary keys (please don't open the debate about GUID or not; it's not my question -- I just don't want a GUID, period). So I need to build a key provider to replace the "identity" feature of a standard SQL database.

    I'm using Entity Framework, so I can easily find one place to set the Id value just before the insert (by overriding the SaveChanges method of my ObjectContext class). I just need to find a "not too complicated" implementation for getting the current Id, which is "farm-ready". I've read the SO post "ID Generation for Sharded Database (Azure Federated Database)" and "Synchronizing Multiple Nodes in Windows Azure" from MSDN Magazine, but that solution sounds a bit complicated to me.

    I'm thinking about creating (automatically) one Azure queue for each SQL table, pre-loaded with a list of consecutive integers. When I want an Id value, I just have to get a message from the queue (which becomes invisible and is deleted on the way), which gives me the next available Id. As for the choice between Windows Azure Queues and Windows Azure Service Bus Queues, I prefer Windows Azure Queues, due to the "high" latency of Service Bus Queues. I don't think the lack of an ordering guarantee in Azure Queues is a problem.

    What do you think about that idea of using Azure Queues to provide Id values? Do you see any argument to give up that idea? Do you have a better idea, or even a good practice, for providing integer ids in SQL Azure Federation databases? Thanks.
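    A sketch of the consume step with the Azure storage client library (the queue name and error handling are made up; each message is assumed to have been pre-loaded with one integer):

        // Pop the next pre-loaded id from this table's queue.
        CloudQueue queue = queueClient.GetQueueReference("ids-mytable");
        CloudQueueMessage msg = queue.GetMessage(); // message goes invisible here
        if (msg == null)
            throw new InvalidOperationException("id queue needs refilling");
        int nextId = int.Parse(msg.AsString);
        queue.DeleteMessage(msg);                   // consume it for good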

    Read the article

  • Getting the current item number or index when using will_paginate in rails app

    - by Rich
    I have a Rails app that stores movies watched, books read, etc. The index page for each type lists a paged collection of all its items, using will_paginate to bring back 50 items per page. When I output the items I want to display a number indicating each item's position in the total collection. The numbering should be reversed, as the collection is displayed most recent first. This might not relate to will_paginate but rather to some other method of calculation; I will be using the same ordering for multiple types, so it needs to be reusable.

    As an example, say I have 51 movies. The first item of the first page should display:

        Fight Club - Watched: 30th Dec 2010

    whilst the last item on the page should display:

        The Matrix - Watched: 3rd Jan 2010

    The paged collection is available as an instance variable, e.g. @movies, and @movies.count gives the number of items in the paged collection. So if we're on page 1, @movies.count == 50, whilst on page 2, @movies.count == 1. Using Movie.count would give 51. If the page number and page size can be accessed, the number could be calculated -- so how can they be returned? Though I'm hopeful there is something that already exists to handle this calculation!
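    A sketch of the calculation using will_paginate's collection accessors (current_page, per_page and total_entries are provided by its paginated collections; the helper name is made up):

        # Reverse-numbered position of the item at `index` (0-based) on the
        # current page: the newest item gets the highest number.
        def display_number(collection, index)
          collection.total_entries -
            ((collection.current_page - 1) * collection.per_page + index)
        end

        # With 51 movies: page 1, first item => 51; page 1, last item => 2;
        # page 2, only item => 1.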

    Read the article

  • CakePHP - Paginating an Array

    - by Ashok
    Cake handles pagination of a model with a simple $this->paginate(), but what should I use if I want to paginate an array of values? The scenario is like this:

        $this->set('sitepages', $this->paginate());

    This code in my index() returns an array like:

        Array
        (
            [0] => Array
                (
                    [Sitepage] => Array
                        (
                            [id] => 13
                            [name] => Home
                            [urlslug] => home
                            [parent_id] => 1
                            [page_title] => Welcome to KIAMS, Pune
                            [order] => 1
                        )
                )
            [1] => Array
                (
                    [Sitepage] => Array
                        (
                            [id] => 26
                            [name] => About Us
                            [urlslug] => aboutus
                            [parent_id] => 1
                            [page_title] =>
                            [order] => 2
                        )
                )
            [2] => Array
                (
                    [Sitepage] => Array
                        (
                            [id] => 27
                            [name] => Overview of KIAMS
                            [urlslug] => aboutus/overview
                            [parent_id] => 26
                            [page_title] =>
                            [order] => 2
                        )
                )
        )

    I retrieved the same data using $this->Sitepage->find('all'), then performed some manipulations as required to build an array which is very similar to the one above, but with a different ordering. I want to paginate this new array and pass it to the view. I tried:

        $this->set('sitepages', $this->paginate($newarray));

    But the data is not getting paginated. Can someone please help with paginating $newarray in CakePHP?
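    If Cake's paginator can't be bent to accept the array, a manual fallback is plain array_slice (a sketch only; the page parameter plumbing and the limit are assumptions):

        // Hypothetical manual pagination of the already-sorted array.
        $page = isset($this->params['named']['page'])
            ? (int)$this->params['named']['page'] : 1;
        $limit = 20;
        $paged = array_slice($newarray, ($page - 1) * $limit, $limit);
        $this->set('sitepages', $paged);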

    Read the article

  • How do I sort an activerecord result set on a i18n translated column?

    - by PlanetMaster
    Hi, I have the following line in a view:

        <%= f.select(:province_id, options_from_collection_for_select(
              Province.find(:all,
                :conditions => { :country_id => @property.country_id },
                :order => "provinces.name ASC"),
              :id, :name)) %>

    In the Province model I have the following:

        def name
          I18n.t(super)
        end

    The problem is that the :name field is translated (through the Province model) and that the ordering is done by ActiveRecord on the English name, so a non-English result set can be wrongly sorted. We have a province in Belgium called 'Oost-Vlaanderen'; in English that is 'East-Flanders'. Not good for sorting :)

    I need something like this, but it does not work:

        <%= f.select(:province_id, options_from_collection_for_select(
              Province.find(:all,
                :conditions => { :country_id => @property.country_id },
                :order => "provinces.I18n.t(name) ASC"),
              :id, :name)) %>

    What would be the best approach to solve this? As you may have noticed, my coding knowledge is very limited, sorry for that.
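    Since the database only ever sees the untranslated value, one common workaround is to drop the :order clause and sort the translated names in Ruby (a sketch; for large tables the sort belongs in SQL against a column of stored translations instead):

        provinces = Province.find(:all,
          :conditions => { :country_id => @property.country_id })
        sorted = provinces.sort_by(&:name) # name returns the I18n translation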

    Read the article

  • Code Golf: Connecting the dots

    - by ChristopheD
    Description: The input is multiple lines (each terminated by a newline) which describe a 'field'. There are 'numbers' scattered across this field:

        - the numbers always start at 1
        - they follow the ordering of the natural numbers: every 'next number' is incremented by 1
        - every number is surrounded by (at least) one whitespace on its left and right

    Task: Draw lines between these numbers in their natural order (1 -> 2 -> 3 -> ... N) with the following characteristics:

        1. replace a number with a '+' character
        2. for horizontal lines: use '-'
        3. for vertical lines: use '|'
        4. going left and down or right and up: /
        5. going left and up or right and down: \

    Important note: When drawing lines of type 4 and 5, you can assume that (given points to connect with coordinates x1, y1 and x2, y2) distance(x1, x2) == distance(y1, y2). Have a look at the examples to see where you should 'attach' the lines. It is important to follow the order in which the dots are connected (newer lines can be drawn over older lines).

    Sample input 1:

        9 10 8 7 6 5 11 13 12 3 4 14 15 16 1 2

    Sample output 1:

        /+ / | / | +/ +--+ | +\ | \ | \+ /+ | / | /+-------------+/ +---+ / | +--+ | + | +--------------------------+

    Sample input 2:

        4 2 3 5 6 1 8 7

    Sample output 2:

        /+ / | / | / | /+------------------+/ +--------+\ / \ +/ +--------------------------------------+

    Winner: shortest solution (by code count). Input can be read via the command line.

    Read the article

  • ORDERBY "human" alphabetical order using SQL string manipulation

    - by supertrue
    I have a table of posts with titles that are in "human" alphabetical order but not in computer alphabetical order. These come in two flavors, numerical and alphabetical:

        Numerical:    Figure 1.9, Figure 1.10, Figure 1.11...
        Alphabetical: Figure 1A ... Figure 1Z ... Figure 1AA

    If I ORDER BY title, the result is that 1.10-1.19 come between 1.1 and 1.2, and 1AA-1AZ come between 1A and 1B. But this is not what I want; I want "human" alphabetical order, in which 1.10 comes after 1.9 and 1AA comes after 1Z.

    I am wondering if there's a way in SQL to get the order that I want using string manipulation (or something else I haven't thought of). I am not an expert in SQL, so I don't know if this is possible, but if there were a way to do conditional replacement, then it seems I could impose the order I want by doing this:

        1. delete the period (which can be done with REPLACE, right?)
        2. if the remaining figure number is fewer than three characters, add a 0 (zero) after the first character.

    This would seem to give me the outcome I want: 1.9 would become 109, which comes before 110; 1Z would become 10Z, which comes before 1AA. But can it be done in SQL? If so, what would the syntax be?

    Note that I don't want to modify the data itself -- just to output the results of the query in the order described. This is in the context of a Wordpress installation, but I think the question is more suitably an SQL question, because various things (such as pagination) depend on the ordering happening at the MySQL query stage rather than in PHP.
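    A sketch of that exact scheme in MySQL (the wp_posts table and post_title column are from a stock WordPress install, but the fixed 7-character "Figure " prefix is an assumption):

        -- Strip the dot, then pad a 0 after the first character of any
        -- figure number shorter than three characters: 1.9 -> 109, 1Z -> 10Z.
        SELECT post_title
        FROM wp_posts
        ORDER BY
          CASE
            WHEN CHAR_LENGTH(REPLACE(SUBSTRING(post_title, 8), '.', '')) < 3
            THEN CONCAT(
                   LEFT(REPLACE(SUBSTRING(post_title, 8), '.', ''), 1), '0',
                   SUBSTRING(REPLACE(SUBSTRING(post_title, 8), '.', ''), 2))
            ELSE REPLACE(SUBSTRING(post_title, 8), '.', '')
          END;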

    Read the article

  • What is the n in O(n) when comparing sorting algorithms?

    - by Mumfi
    The question is rather simple, but I just can't find a good enough answer. I've taken a look at the most upvoted question regarding Big-O notation, namely this one: "Plain English explanation of Big O". It says there that:

        For example, sorting algorithms are typically compared based on comparison
        operations (comparing two nodes to determine their relative ordering).

    Now let's consider the simple bubble sort algorithm:

        for (int i = arr.length - 1; i > 0; i--) {
            for (int j = 0; j < i; j++) {
                if (arr[j] > arr[j+1]) {
                    switchPlaces(...);
                }
            }
        }

    I know that the worst case is O(n^2) and the best case is O(n), but what is n exactly? If we attempt to sort an already-sorted array (best case), we would end up doing nothing, so why is it still O(n)? We are still looping through two for-loops, so if anything it should be O(n^2). n can't be the number of comparison operations, because we still compare all the elements, right? This confuses me, and I'd appreciate it if someone could help me.
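    For what it's worth, the O(n) best case usually refers to the early-exit variant of bubble sort, which the code above doesn't include -- a sketch:

        for (int i = arr.length - 1; i > 0; i--) {
            boolean swapped = false;
            for (int j = 0; j < i; j++) {
                if (arr[j] > arr[j+1]) {
                    int tmp = arr[j]; arr[j] = arr[j+1]; arr[j+1] = tmp;
                    swapped = true;
                }
            }
            // Already-sorted input: one pass, n-1 comparisons, then stop => O(n).
            if (!swapped) break;
        }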

    Read the article

  • TCP/IP Implementation General Questions

    - by user2971023
    I've implemented the concepts shown here: http://wiki.unity3d.com/index.php/Simple_TCP/IP_Client_-_Server outside of Unity, and it works (though I had to create the TCPIPServerApp from scratch, as I could not find the base project anywhere). I have some general questions on how to use TCP/IP properly, however. I've done some research on TCP/IP itself, but I'm still a little confused.

    It seems like using the method above doesn't guarantee that I'll see the message (res); it just checks on every update whether there is a different message in res. What if multiple messages are sent and the program lags or something -- will I miss the earlier packet(s)? Should I instead use an array so it stores the last X messages? How do I know the data was received? Do I need to add a message id and build my own ack into the data? Should I check whether the port is in use before setting up a connection? Sorry for all the questions. This is all new to me, but I enjoy this very much!

    ... Below already answered by Anton, thanks:

    It sounds like TCP uses its own packet numbering to ensure the packets end up in the right order on the other side. What if a packet is missed -- are the subsequent packets thrown away? Or is this numbering and packet ordering only for handling data that is broken out into multiple packets? TCP will automatically break the data into multiple packets if necessary, right?

    Read the article

  • filtering search results with php

    - by fl3x7
    Hello, I can't really find any useful information on this through Google, so I hope someone here with some knowledge can help. I have a set of results which are pulled from a multi-dimensional array. Currently the array key is the price of a product, whilst the item contains another array holding all the product details:

        key => Item(name => test, foo => bar)

    So currently, when I list the items, I just order by the key, smallest first, and it lists the products cheapest first. However, I want to build on this so that when a user sees the results they can choose other ordering options -- list all products by name, certain manufacturer, colour, x, y, z, etc. -- from a drop-down box (or something similar).

    This is where I need some guidance. I'm just not sure how to go about it, or what best practice is. The only way I can think of is to order all the items by the nested array, e.g. by the name, manufacturer, etc. -- but how do I do that in PHP? I hope you understand what I'm trying to achieve (if not, just ask). Any help on this with ideas, approaches or examples would be great. Thanks for reading. P.S. I'm using PHP5.
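    A minimal sketch with usort(), which reorders by any field of the nested array (the 'name' key mirrors the structure above; the closure needs PHP 5.3+ -- on older PHP 5 use a named callback instead):

        usort($items, function ($a, $b) {
            return strcmp($a['name'], $b['name']); // swap 'name' for any field
        });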

    Read the article

  • NHibernate: Using value tables for optimization AND dynamic join

    - by Kostya
    Hi all, my situation is this: there are two entities with a many-to-many relation, e.g. Products and Categories. Also, categories have a hierarchical structure, like a tree. I need to select all products that belong to some given category together with all its children (the whole branch). So I use the following SQL statement to do that:

        SELECT *
        FROM Products p
        WHERE p.ID IN (
            SELECT DISTINCT pc.ProductID
            FROM ProductsCategories pc
            INNER JOIN Categories c ON c.ID = pc.CategoryID
            WHERE c.TLeft >= 1 AND c.TRight <= 33378
        )

    But with a big data set this query takes very long to execute, and I found a way to optimize it -- look at this:

        DECLARE @CatProducts TABLE
        (
            ProductID int NOT NULL
        )

        INSERT INTO @CatProducts
        SELECT DISTINCT pc.ProductID
        FROM ProductsCategories pc
        INNER JOIN Categories c ON c.ID = pc.CategoryID
        WHERE c.TLeft >= 1 AND c.TRight <= 33378

        SELECT *
        FROM Products p
        INNER JOIN @CatProducts cp ON cp.ProductID = p.ID

    This query executes very fast, but I don't know how to do the same with NHibernate. Note that I must use only ICriteria, because of dynamic filtering/ordering. If someone knows a solution for that, it would be fantastic -- but I'll be pleased to hear any suggestions, of course. Thank you ahead, Kostya
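    There's no direct ICriteria equivalent of the table-variable trick, but the inner SELECT itself can be expressed with a DetachedCriteria feeding Subqueries.PropertyIn (a sketch only -- the entity and property names are guesses at the mapping):

        // Inner SELECT: distinct product ids under the category branch.
        DetachedCriteria productIds = DetachedCriteria.For<ProductCategory>("pc")
            .CreateAlias("pc.Category", "c")
            .Add(Restrictions.Ge("c.TLeft", 1))
            .Add(Restrictions.Le("c.TRight", 33378))
            .SetProjection(Projections.Distinct(Projections.Property("pc.ProductId")));

        // Outer query stays a plain ICriteria, so dynamic filtering/ordering
        // can still be tacked on before List() is called.
        IList<Product> products = session.CreateCriteria<Product>()
            .Add(Subqueries.PropertyIn("ID", productIds))
            .List<Product>();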

    Read the article

  • JQUERY - how to get updated value after ajax removes data from within it?

    - by Brian
    I have an element with thumbnails. I allow users to sort their display order (which fires off an update to the DB via ajax). I also allow them to delete images (which, after deletion, fires off a request to update the display order for all remaining images). My problem is with binding or live, I think, but I don't know where to apply it.

    The array fired off upon delete contains ALL the ids for the images that were there on page load. The issue is that after they delete an image, the array STILL contains the original ids (including the one that was deleted), so it is obviously not refreshing the value of the element after ajax has removed things from inside it. I need to tell it to go get the refreshed contents... From what I have been reading, this is normal, but I don't understand how to tie it into my routine. I need to trigger the mass re-ordering after any deletion. Any ideas, gurus?

        $('a.delimg').click(function() {
            var parent = $(this).parent().parent();
            var id = $(this).attr('id');
            $.ajax({
                type: "POST",
                url: "../updateImages.php",
                data: "action=delete&id=" + id,
                beforeSend: function() {
                    parent.animate({'backgroundColor':'#fb6c6c'}, 300);
                    $.jnotify("<strong>Deleting This Image & Updating The Image Order</strong>", 5000);
                },
                success: function(data) {
                    parent.slideUp(300, function() {
                        parent.remove();
                        $("#images ul").sortable(function() { //NEEDS TO GET THE UPDATED CONTENT
                            var order = $(this).sortable("serialize") + '&action=updateRecordsListings';
                            $.post("../updateImages.php", order, function(theResponse) {
                                $.jnotify("<strong>" + theResponse + "</strong>", 2000);
                            });
                        });
                    });
                }
            });
            return false;
        });

    Thanks for any help you can be.
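    For what it's worth, a sketch of a success handler that reads the list only after the node is gone, instead of re-initialising sortable with a function (which jQuery UI's sortable doesn't accept as an argument):

        success: function(data) {
            parent.slideUp(300, function() {
                parent.remove(); // the thumbnail's markup is out of the DOM now
                // serialize() walks the *current* children, so the deleted id
                // can no longer show up in the order string
                var order = $("#images ul").sortable("serialize") + '&action=updateRecordsListings';
                $.post("../updateImages.php", order, function(theResponse) {
                    $.jnotify("<strong>" + theResponse + "</strong>", 2000);
                });
            });
        }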

    Read the article

  • Oracle Fusion Procurement Designed for User Productivity

    - by Applications User Experience
    Sean Rice, Manager, Applications User Experience

    Oracle Fusion Procurement Design Goals

    In Oracle Fusion Procurement, we set out to create a streamlined user experience based on the way users do their jobs. Oracle has spent hundreds of hours with customers to get to the heart of what users need to do their jobs. By designing a procurement application around user needs, Oracle has crafted a user experience that puts the tools that people need at their fingertips.

    In Oracle Fusion Procurement, the user experience is designed to provide the user with information that will drive navigation, rather than requiring the user to find information. One of our design goals for Oracle Fusion Procurement was to reduce the number of screens and clicks that a user must go through to complete frequently performed tasks. The requisition process in Oracle Fusion Procurement (Figure 1) illustrates how we have streamlined workflows. Oracle Fusion Self-Service Procurement brings together billing metrics, descriptions of the order, justification for the order, a breakdown of the components of the order, and the amount -- all in one place. Previous generations of procurement software required the user to navigate to several different pages to gather all of this information. With Oracle Fusion, everything is presented on one page. The result is that users can complete their tasks in less time. The focus is on completing the work, not finding the work.

    Figure 1. Creating a requisition in Oracle Fusion Self-Service Procurement is a consumer-like shopping experience.

    Will Oracle Fusion Procurement Increase Productivity?

    To answer this question, Oracle sought to model how two experts working head to head -- one in an existing enterprise application and another in Oracle Fusion Procurement -- would perform the same task. We compared Oracle Fusion designs to corresponding existing applications using the keystroke-level modeling (KLM) method. This method is based on years of research at universities such as Carnegie Mellon and research labs like Xerox Palo Alto Research Center. The KLM method breaks tasks into a sequence of operations and uses standardized models to evaluate all of the physical and cognitive actions that a person must take to complete a task: what a user would have to click, how long each click would take (not only the physical action of the click or the typing of a letter, but also how long someone would have to think about the page when taking the action), and the user interface changes that result from the click. By applying standard time estimates to all of the operators in the task, an estimate of the overall task time is calculated. Task times from the model enable researchers to predict end-user productivity.

    For the study, we focused on modeling procurement business process task flows that were considered business- or mission-critical: high-frequency tasks and high-value tasks. The designs evaluated encompassed tasks that are currently performed by employees, professional buyers, suppliers, and sourcing professionals in advanced procurement applications. For each of these flows, we created detailed task scenarios that provided the context for each task, conducted task walk-throughs in both the Oracle Fusion design and the existing application, analyzed and documented the steps and actions required to complete each task, and applied standard time estimates to the operators in each task to estimate overall task completion times.

    The Results

    The KLM method predicted that the Oracle Fusion Procurement designs would result in productivity gains in each task, ranging from 13 percent to 38 percent, with an overall productivity gain of 22.5 percent. These performance gains can be attributed to a reduction in the number of clicks and screens needed to complete the tasks. For example, creating a requisition in Oracle Fusion Procurement takes a user through only two screens, while ordering the same item in a previous version requires six screens to complete the task.

    Modeling user productivity has resulted not only in advances in Oracle Fusion applications, but also in advances in other areas. We leveraged lessons learned from the KLM studies in established products like Oracle E-Business Suite (EBS). New user experience features in EBS 12.1.3, such as navigational improvements to the main menu, a Google-type search using auto-suggest, embedded analytics, and an in-context list-of-values tool, help to reduce clicks and improve efficiency. For more information about KLM, refer to the Measuring User Productivity blog.

    Read the article

  • Windows Azure Use Case: Agility

    - by BuckWoody
    This is one in a series of posts on when and where to use a distributed architecture design in your organization's computing needs. You can find the main post here: http://blogs.msdn.com/b/buckwoody/archive/2011/01/18/windows-azure-and-sql-azure-use-cases.aspx

    Description: Agility in this context is defined as the ability to quickly develop and deploy an application. In theory, the speed at which your organization can develop and deploy an application on available hardware is identical to what you could deploy in a distributed environment. But in practice, this is not always the case. Having the option to use a distributed environment can make both the deployment and even the development process much faster.

    Implementation: When an organization designs code, it essentially becomes a Software-as-a-Service (SaaS) provider to its own organization. To do that, the IT operations team becomes the Infrastructure-as-a-Service (IaaS) provider to the development teams. From there, the software is developed and deployed using an Application Lifecycle Management (ALM) process. A simplified view of an ALM process is as follows:

        1. Requirements
        2. Analysis
        3. Design and Development
        4. Implementation
        5. Testing
        6. Deployment to Production
        7. Maintenance

    In an on-premise environment, this often equates to the following process map:

        Requirements: Business requirements formed by Business Analysts, Developers and Data Professionals.
        Analysis: Feasibility studies, including physical plant, security, manpower and other resources. Request is placed on the work task list if approved.
        Design and Development: Code written according to the organization's chosen methodology, either on-premise or by multiple development teams on and off premise.
        Implementation: Code checked into the main branch. Code forked as needed.
        Testing: Code deployed to on-premise testing servers. If no server capacity is available, more resources are procured through standard budgeting and ordering processes. Manual and automated functional, load, security, etc. testing performed.
        Deployment to Production: Server team involved to select platform and environments with available capacity. If no server capacity is available, the standard budgeting and procurement process is followed, and systems are built, configured and put under standard organizational IT control. Systems configured for proper operating systems, patches, security and virus scans. System maintenance, HA/DR, backups and recovery plans configured and put into place.
        Maintenance: Code changes evaluated and altered according to need.

    In a distributed computing environment like Windows Azure, the process maps a bit differently:

        Requirements: Business requirements formed by Business Analysts, Developers and Data Professionals.
        Analysis: Feasibility studies, including budget, security, manpower and other resources. Request is placed on the work task list if approved.
        Design and Development: Code written according to the organization's chosen methodology, either on-premise or by multiple development teams on and off premise.
        Implementation: Code checked into the main branch. Code forked as needed.
        Testing: Code deployed to Azure. Manual and automated functional, load, security, etc. testing performed.
        Deployment to Production: Code deployed to Azure. Point-in-time backup and recovery plans configured and put into place. (HA/DR and automated backups are already present in the Azure fabric.)
        Maintenance: Code changes evaluated and altered according to need.

    This means that several steps can be removed or expedited. It also means that the business function requesting the application can be held directly responsible for the funding of that request, speeding the process further, since the IT budgeting process may not be involved in the Azure scenario. An additional benefit is the "Azure Marketplace": in effect, this becomes an app store for enterprises to select pre-defined code and data applications to mesh or bolt into their current code, possibly saving development time.

    Resources:

        Whitepaper download - What is ALM? http://go.microsoft.com/?linkid=9743693
        Whitepaper download - ALM and Business Strategy: http://go.microsoft.com/?linkid=9743690
        LiveMeeting recording on ALM and Windows Azure (registration required, but free): http://www.microsoft.com/uk/msdn/visualstudio/contact-us.aspx?sbj=Developing with Windows Azure (ALM perspective) - 10:00-11:00 - 19th Jan 2011

    Read the article

  • Generic Sorting using C# and Lambda Expression

    - by Haitham Khedre
    Download: GenericSortTester.zip

    I have been working on this class for a long time, and I think it is a nice piece of code worth sharing -- it might help other people searching for the same concept. It will help you sort any collection easily without needing to write special code for each data type. If you need special ordering you can still do it; leave a comment and I will see if I need to write another article to cover the other cases. I have also attached a fully working example so you can see how you would use it.

        public static class GenericSorter
        {
            public static IOrderedEnumerable<T> Sort<T>(IEnumerable<T> toSort,
                Dictionary<string, SortingOrder> sortOptions)
            {
                IOrderedEnumerable<T> orderedList = null;
                foreach (KeyValuePair<string, SortingOrder> entry in sortOptions)
                {
                    if (orderedList != null)
                    {
                        if (entry.Value == SortingOrder.Ascending)
                        {
                            orderedList = orderedList.ApplyOrder<T>(entry.Key, "ThenBy");
                        }
                        else
                        {
                            orderedList = orderedList.ApplyOrder<T>(entry.Key, "ThenByDescending");
                        }
                    }
                    else
                    {
                        if (entry.Value == SortingOrder.Ascending)
                        {
                            orderedList = toSort.ApplyOrder<T>(entry.Key, "OrderBy");
                        }
                        else
                        {
                            orderedList = toSort.ApplyOrder<T>(entry.Key, "OrderByDescending");
                        }
                    }
                }
                return orderedList;
            }

            private static IOrderedEnumerable<T> ApplyOrder<T>(this IEnumerable<T> source,
                string property, string methodName)
            {
                ParameterExpression param = Expression.Parameter(typeof(T), "x");
                Expression expr = param;
                foreach (string prop in property.Split('.'))
                {
                    expr = Expression.PropertyOrField(expr, prop);
                }
                Type delegateType = typeof(Func<,>).MakeGenericType(typeof(T), expr.Type);
                LambdaExpression lambda = Expression.Lambda(delegateType, expr, param);

                MethodInfo mi = typeof(Enumerable).GetMethods().Single(
                        method => method.Name == methodName
                            && method.IsGenericMethodDefinition
                            && method.GetGenericArguments().Length == 2
                            && method.GetParameters().Length == 2)
                    .MakeGenericMethod(typeof(T), expr.Type);

                return (IOrderedEnumerable<T>)mi.Invoke(
                    null, new object[] { source, lambda.Compile() });
            }
        }
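    A hypothetical usage snippet (SortingOrder is the two-value enum the class expects; Person, people, and the property names are made up -- the attached sample shows the real thing):

        var options = new Dictionary<string, SortingOrder>
        {
            { "LastName", SortingOrder.Ascending },  // first entry -> OrderBy
            { "Age",      SortingOrder.Descending }  // later entries -> ThenBy*
        };
        IOrderedEnumerable<Person> sorted = GenericSorter.Sort(people, options);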

    Read the article

  • Brainless Backups

    - by Jesse
    I’m a software developer by trade, which means to my friends and family I’m just a “computer guy”. It’s assumed that I know everything about every facet of computing, from removing spyware to replacing hardware. I also can do all of this blindly over the phone, or after hearing a five-to-ten-word description of the problem over dinner ;-) In my position as CIO of my friends and families I’ve been in the unfortunate position of trying to recover music, pictures, or documents off of failed hard drives on more than one occasion. It’s not a great situation for anyone, and it’s always at these times that the importance of backups becomes so clear.

    Several months back a friend of mine found himself in this situation. The hard drive on his 8-year-old laptop failed and took a good number of his digital photos with it. I think most folks can deal with losing some of their music and even some of their documents, but it really stings to lose pictures of past events and loved ones. After ordering a new laptop, my friend went out and bought an external hard drive so that he could start keeping a backup of his data. As fate would have it, several months later the drive in his new laptop failed and he learned the hard way that simply buying the external hard drive isn’t enough… you actually have to copy your stuff over every once in awhile!

    The importance of backup and recovery plans is (hopefully) well known in IT organizations. Well-executed backup plans are in place, and hopefully the backup and recovery process is tested regularly. When you’re talking about users at home, however, the need for these backups is often understood far too late. Most typical users can’t be expected to remember to back up their data regularly, and also don’t always have the know-how to set up automated backups. For my friends and family members in this situation I recommend tools like Dropbox, Carbonite, and Mozy. Here’s why I like them:

    They’re affordable: Dropbox and Mozy both have free offerings, though most people with lots of music and/or photos to back up will probably exceed the storage limitations of those free plans pretty quickly. Still, all three offer pretty affordable monthly or yearly plans. In my opinion, Carbonite’s unlimited storage plan for $50-$60 per year is the best value around.

    They’re easy to set up: Both Dropbox and Carbonite are very easy to get set up and start using. I’ve never used Mozy, but I imagine it’s similarly painless to get up and running.

    Backups are automatically “off-site”: A backup that is sitting on an external hard drive right next to your computer is great, but might not protect against flood damage, a power surge, or other disasters in that single location. These services exist “in the cloud”, so to speak, helping mitigate those concerns. Granted, this kind of backup scheme requires some trust in the 3rd party to protect your data from both malicious people and disastrous events. This truly is a bit of a double-edged sword, but I sleep well at night knowing that my data is being backed up and secured by a company made up of engineers that focus on the business of doing backups right.

    Backups are “brainless”: What I like most about services like these is that they work “automagically” in the background, watching for files to be updated and automatically backing up those changes. There’s no need to remember to plug in that external drive and copy your data over.

    Since starting to recommend these services to my friends and family, I find myself wearing my “data recovery” hat far less often. The only way backups are effective for your standard computer user is if they’re completely automatic. Backups need to be brainless, or they just won’t work.

    Read the article

  • Creating shapes on the fly

    - by Bertrand Le Roy
    Most Orchard shapes get created from part drivers, but they are a lot more versatile than that. They can actually be created from pretty much anywhere, including from templates. One example can be found in the Layout.cshtml file of the ThemeMachine theme:

        WorkContext.Layout.Footer.Add(New.BadgeOfHonor(), "5");

    What this is really doing is creating a new shape called BadgeOfHonor and injecting it into the Footer global zone (which has not yet been defined, which in itself is quite awesome) with an ordering rank of "5". We can actually come up with something simpler, if we want to render the shape inline instead of sending it into a zone:

        @Display(New.BadgeOfHonor())

    Now let's try something a little more elaborate and create a new shape for displaying a date and time:

        @Display(New.DateTime(date: DateTime.Now, format: "d/M/yyyy"))

    For the moment, this throws a "Shape type DateTime not found" exception, because the system has no clue how to render a shape called "DateTime" yet. The BadgeOfHonor shape above was rendering something because there is a template for it in the theme: Themes/TheThemeMachine/Views/BadgeOfHonor.cshtml. We need to provide a template for our new shape to get rendered. Let's add a DateTime.cshtml file into our theme's Views folder in order to make the exception go away:

        Hi, I'm a date time shape.

    Now we're just missing one thing. Instead of displaying some static text, which is not very interesting, we can display the actual time that got passed into the shape's dynamic constructor. Those parameters will get added to the template's Model, so they are easy to retrieve:

        @(((DateTime)Model.date).ToString(Model.format))

    Now that may remind you a little of WebForms' user controls. That's a fair comparison, except that these shapes are much more flexible (you can add properties on the fly as necessary), and the actual rendering is decoupled from the "control". For example, any theme can override the template for a shape; you can use alternates, wrappers, etc. Most importantly, there is no lifecycle and protocol abstraction like there was in WebForms. I think this is a real improvement over previous attempts at similar things.

    Read the article

  • #iPad at One Week: A Great Device Made with a Heavy Hand

    - by andrewbrust
    I have now had my iPad for a little over a week. In that time, Apple introduced the world to its iPhone OS 4 (and the SDK agreement's draconian new section 3.3.1), HP introduced its Slate, and Microsoft got ready to launch Visual Studio 2010 and .NET 4.0. And through it all I have used my iPad. I've used it for email, calendar, controlling my Sonos, and writing an essay. I've used it for getting on TripIt and Twitter, and surfing the Web. I've used it for online banking, and online ordering and delivery of food.

    And the verdict? Honestly? I think it's a great device and I thoroughly enjoy using it. The screen is bright and vibrant. I am surprisingly fast and accurate when I type on it. The touch screen's responsiveness is nearly flawless. The software, including a number of third-party applications, includes pleasing animations and use of color that make it fun to get work done. And speaking of work, the Exchange integration is, dare I say it, robust. Not as full-featured as on a PC or Windows Mobile device, but still offering core functionality and, so far at least, without bugs.

    The UI is intuitive, not just to me, but also to my 5 1/2-year-old, and also to my nearly-3-year-old son. They picked it up and, with just a few pointers from me, almost immediately knew what to do, whether they were looking at photos (and swiping/flicking along as they did so), using a drawing program, playing a game, or watching YouTube videos. The younger of the two of them even tried to get up on a chair and grab the thing today. He dropped it, from about 4 feet off the ground. And it's still fine. (Meanwhile, I'll be keeping it on a higher shelf.)

    I cannot fully describe yet what makes this form factor and this product so appealing. Maybe it's that it's an always-on device. Maybe it's just being able to hold such a nice, relatively large display so close. Maybe it's the design sensibility that seems to pervade throughout the app ecosystem. Or maybe it's that one's fingers, and not pens or mice, are the software's preferred input device. Whatever the attraction, it's strong. And no matter how much I tend to root for Microsoft and against Apple, Cupertino has, in my mind, scored big.

    Can Microsoft compete? Yes, but not with the Windows 7 standard UI (nor with individual OEMs' own UIs on top). I hope Microsoft builds a variant of Windows Phone 7 specifically for tablet devices. And I hope they make it clear that all developers, and programming languages, are welcome on the platform. Once that's established, the OEMs have to build great hardware with fast, responsive touch screens, under Microsoft's watchful eye. That may be the hardest part of getting this right. No matter what, Microsoft's got a fight on its hands. I don't know if it can count on winning that fight, either. But Silverlight and Live Tiles could certainly help. And so can treating developers like adults. Apple seems intent on treating its devs like kids, and then giving the kids a curfew. For that, dev-friendly Microsoft may one day give thanks.

    Read the article

  • Deploying an SSL Application to Windows Azure &ndash; The Dark Secret

    - by ToStringTheory
    When working on an application that had been in production for some time, but was about to have a shopping cart added to it, the necessity of SSL certificates came up. When ordering the certificates through the vendor, the certificate signing request (CSR) was generated through the provider's (http://register.com) web interface, and within a day we had our certificate. At first, I thought that the certification process would be the hard part... Little did I know that my fun was just beginning...

    The Problem

    I'll be honest: I had never secured a site with SSL before. This was a learning experience for me in the first place, but little did I know that I would be learning more than the simple procedure. I understood a bit about SSL already -- the mechanisms of how it works: the secure handshake, CAs, chains, etc. What I didn't realize was the importance of the CSR in the whole process. Apparently, when the CSR is created, a public key is created at the same time, as well as a private key that is stored locally on the PC that generated the request. When the certificate comes back and you import it into IIS (assuming you used IIS to generate the CSR), all of the information is combined together and the SSL certificate is added into your store.

    Since, at the time the certificate was ordered for our site, the online interface was chosen to generate the CSR, the certificate came back to us in 5 separate files:

        1. A root certificate (*.crt file)
        2. An intermediate certificate (*.crt file)
        3. Another intermediate certificate (*.crt file)
        4. The SSL certificate for our site (*.crt file)
        5. The private key for our certificate (*.key file)

    Well, in case you don't know much about Windows Azure and SSL certificates, the first thing you should learn is that certificates can only be uploaded to Azure if they are in a PFX package, securable by a password. Also, in the case of our SSL certificate, you need to include the private key with the file. As you can see, we didn't have a PFX file to upload. If you don't get the simple PFX from your hosting provider, but rather the multiple files, you will soon find out that the process has turned from something that should be simple into one that borders on a circle of hell... probably between the fifth and seventh somewhere...

    The Solution

    The solution is to take the files that make up the certificate's chain and key, and combine them into a file that can be imported into your local computer's store, as well as uploaded to Windows Azure. I cannot take the credit for this information, as I simply researched a while before finding out how to do this.

        1. Download the OpenSSL for Windows toolkit (Win32 OpenSSL v1.0.1c)
        2. Install the OpenSSL for Windows toolkit
        3. Download and move all of your certificate files to an easily accessible location (you'll be pointing to them in the command prompt, so I put them in a subdirectory of the OpenSSL installation)
        4. Open a command prompt
        5. Navigate to the folder where you installed OpenSSL
        6. Run the following command:

            openssl pkcs12 -export -out {outcert.pfx} -inkey {keyfile.key} -in {sslcert.crt} -certfile {ca1.crt} -certfile {ca2.crt}

    From this command, you will get a file, outcert.pfx, with the sum total of your SSL certificate (sslcert.crt), private key (keyfile.key), and as many CA/chain files as you need (ca1.crt, ca2.crt). Taking this file, you can then import it into your own IIS in one operation, instead of importing each certificate individually. You can also upload the PFX to Azure, and once you add the SSL certificate links to the cloud project in Visual Studio, you're good to go!

    Conclusion

    When I first looked around for a solution to this problem, there were not many places online that had the information I was looking for. While what I ended up having to do may seem obvious, it isn't for everyone, and I hope that this can at least help one developer out there solve the problem without hours of work!
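    As a quick sanity check before importing or uploading (not part of the original post), OpenSSL can dump the bundle back out; you should see the private key plus every certificate in the chain:

        openssl pkcs12 -info -in outcert.pfx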

    Read the article
