Search Results

Search found 1770 results on 71 pages for 'stupid idiot'.


  • PHP and XPath Loop

    - by user1794852
    Thank you all in advance. I've got some great answers to my sometimes stupid questions, so thank you again. I'm trying to parse a SOAP response using PHP, Xpath (namespaces) and SimpleXML. Down below is a snippet of the Response. What I need to do is loop through each <ns1:file></ns1:file> and add it to a DB. But I'm not sure how to do that. Please help! Namespace Stuff $x = simplexml_load_string($response); $x->registerXPathNamespace('ns1', 'http://ws.icontent.idefense.com/V3/2'); Here's the response: <ns1:mal_files> <ns1:file> <ns1:id>2895144</ns1:id> <ns1:md5>2189c3d3857ba0cabd19c8aa031d63cd</ns1:md5> <ns1:sha1>c20b26148caa059ecf85e9b29df4e28e8354d655</ns1:sha1> <ns1:path>%WINDIR%\Temp\Temporary Internet Files\Content.IE5\K9ANOPQB\1219831[1].htm</ns1:path> <ns1:size>110</ns1:size> <ns1:code_available>true</ns1:code_available> <ns1:len_char>fixed</ns1:len_char> </ns1:file> <ns1:file> <ns1:id>2895147</ns1:id> <ns1:md5>a533825ef1752630a300125b3eef6825</ns1:md5> <ns1:sha1>ec7feb1414b3c2e720f6c06e2750421b73634f87</ns1:sha1> <ns1:path>%WINDIR%\Temp\Temporary Internet Files\Content.IE5\4PMB8T67\bg[1].jpg</ns1:path> <ns1:size>707</ns1:size> <ns1:code_available>true</ns1:code_available> <ns1:len_char>fixed</ns1:len_char> </ns1:file> <ns1:file> <ns1:id>2895155</ns1:id> <ns1:md5>c88724e985efcc82173c0d3aa0b77dfd</ns1:md5> <ns1:sha1>ecc042ca06aac988cd4593bb2b25fa39c4b2a819</ns1:sha1> <ns1:path>%WINDIR%\Prefetch\24604775.EXE-3ADEC0C2.pf</ns1:path> <ns1:size>44940</ns1:size> <ns1:code_available>true</ns1:code_available> <ns1:len_char>fixed</ns1:len_char> </ns1:file> <ns1:file> <ns1:id>2895158</ns1:id> <ns1:md5>422a011793af6195ea517c2c4a26bdbc</ns1:md5> <ns1:sha1>f449240c091c27e2531342d6b291d6a7e4655834</ns1:sha1> <ns1:path>%WINDIR%\Temp\Temporary Internet Files\Content.IE5\KPYV0TM7\gamesleap[1].png</ns1:path> <ns1:size>2676</ns1:size> <ns1:code_available>true</ns1:code_available> <ns1:len_char>fixed</ns1:len_char> </ns1:file> </ns1:mal_files> PHP Code $md5 = $x->xpath('//ns1:mal_files/ns1:file/ns1:md5'); // Returns array :( The PHP code returns an Array of all MD5 for every <ns1:file></ns1:file> and that's not what I need. What I need is to loop through each of the (nodes?) for each of the <ns1:file> keying on the <ns1:id> and add all their associated elements (id, md5, sha1, etc) as a record, then move onto the next <ns1:file> and repeat. And obviously the PHP I'm using returns an array, which doesn't work here. Hopefully I've made my question clear... I appreciate any help. Thanks!
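
    One way to get per-file records instead of a flat MD5 list (a sketch, not the poster's code) is to query for the <ns1:file> nodes themselves and then read each child through children() with the namespace URI; insert_file_record() below is just a placeholder for whatever database call is actually used:

        $x = simplexml_load_string($response);
        $x->registerXPathNamespace('ns1', 'http://ws.icontent.idefense.com/V3/2');

        // Fetch whole <ns1:file> nodes rather than a flat list of <ns1:md5> values.
        $files = $x->xpath('//ns1:mal_files/ns1:file');

        foreach ($files as $file) {
            // children() needs the namespace URI to expose the ns1-prefixed elements.
            $f = $file->children('http://ws.icontent.idefense.com/V3/2');

            $record = array(
                'id'   => (string) $f->id,
                'md5'  => (string) $f->md5,
                'sha1' => (string) $f->sha1,
                'path' => (string) $f->path,
                'size' => (int)    $f->size,
            );

            // Placeholder: replace with a PDO/mysqli prepared-statement insert.
            insert_file_record($record);
        }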

    Read the article

  • How do I process the largest match first in PHP?

    - by animuson
    Ok, so I tried searching around first but I didn't exactly know how to word this question or a search phrase. Let me explain. I have data that looks like this: <!-- data:start --> <!-- 0:start --> <!-- 0:start -->0,9<!-- 0:stop --> <!-- 1:start -->0,0<!-- 1:stop --> <!-- 2:start -->9,0<!-- 2:stop --> <!-- 3:start -->9,9<!-- 3:stop --> <!-- 4:start -->0,9<!-- 4:stop --> <!-- 0:stop --> <!-- 1:start --> <!-- 0:start -->1,5<!-- 0:stop --> <!-- 1:start -->1,6<!-- 1:stop --> <!-- 2:start -->3,6<!-- 2:stop --> <!-- 3:start -->3,8<!-- 3:stop --> <!-- 4:start -->4,8<!-- 4:stop --> <!-- 1:stop --> <!-- 2:start --> <!-- 0:start -->0,7<!-- 0:stop --> <!-- 1:start -->1,7<!-- 1:stop --> <!-- 2:stop --> <!-- data:stop --> So it's basically a bunch of points. Here is the code I'm currently using to try and parse it so that it would create an array like so: Array ( 0 => Array ( 0 => "0,9", 1 => "0,0", 2 => "9,0", 3 => "9,9", 4 => "0,9" ), 1 => Array ( 0 => "1,5", 1 => "1,6", 2 => "3,6", 3 => "3,8", 4 => "4,8" ), 2 => Array ( 0 => "0,7", 1 => "1,7" ) ) However, it is returning an array that looks like this: Array ( 0 => "0,9", 1 => "0,0", 2 => "9,0" ) Viewing the larger array that I have on my screen, you see that it's setting the first instance of that variable when matching. So how do I get it to find the widest match first and then process the insides. Here is the function I am currently using: function explosion($text) { $number = preg_match_all("/(<!-- ([\w]+):start -->)\n?(.*?)\n?(<!-- \\2:stop -->)/s", $text, $matches, PREG_SET_ORDER); if ($number == 0) return $text; else unset($item); foreach ($matches as $item) if (empty($data[$item[2]])) $data[$item[2]] = $this->explosion($item[3]); return $data; } I'm sure it will be something stupid and simple that I've overlooked, but that just makes it an easy answer for you I suppose.
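
    Rather than trying to make a single regular expression find the widest <!-- n:start --> ... <!-- n:stop --> span first, one alternative (a sketch, assuming the markers are always well nested and balanced) is to tokenize the markers and rebuild the nesting with a small recursive walk:

        // Builds a nested array from a flat token list; $pos advances by reference.
        function parse_blocks(array $tokens, &$pos)
        {
            $result = array();
            $buffer = '';

            while ($pos < count($tokens)) {
                $t = $tokens[$pos++];

                if ($t[2] === 'start') {             // open a nested block
                    $result[$t[1]] = parse_blocks($tokens, $pos);
                } elseif ($t[2] === 'stop') {        // close the current block
                    break;                           // assumes starts/stops are balanced
                } else {                             // plain text between markers
                    $buffer .= $t[0];
                }
            }

            // A block with no nested children is just its trimmed text content.
            return count($result) ? $result : trim($buffer);
        }

        // Tokenize into markers and the text between them.
        preg_match_all(
            '/<!-- (\w+):(start|stop) -->|((?:(?!<!--).)+)/s',
            $text,
            $tokens,
            PREG_SET_ORDER
        );

        $pos  = 0;
        $tree = parse_blocks($tokens, $pos);
        // $tree['data'][0] is array('0,9', '0,0', '9,0', '9,9', '0,9'),
        // $tree['data'][2] is array('0,7', '1,7'), and so on.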

    Read the article

  • Are all <canvas> tag dimensions in pixels?

    - by Simon Omega
    Are all <canvas> tag dimensions in pixels? I am asking because I understood them to be. But my math is broken or I am just not grasping something here. I have been doing Python mostly and just jumped back into JavaScript. If I am just doing something stupid let me know. For a game I am writing, I wanted to have a blocky gradient. I have the following: HTML <canvas id="heir"></canvas> CSS @media screen { body { font-size: 12pt } /* Game Rendering Space */ canvas { width: 640px; height: 480px; border-style: solid; border-width: 1px; } } JavaScript (Shortened) function testDraw ( thecontext ) { var myblue = 255; thecontext.save(); // Save All Settings (Before this Function was called) for (var i = 0; i < 480; i = i + 10 ) { if (myblue.toString(16).length == 1) { thecontext.fillStyle = "#00000" + myblue.toString(16); } else { thecontext.fillStyle = "#0000" + myblue.toString(16); } thecontext.fillRect(0, i, 640, 10); myblue = myblue - 2; }; thecontext.restore(); // Restore Settings to Save Point (Removing Styles, etc...) } function main () { var targetcontext = document.getElementById("main").getContext("2d"); testDraw(targetcontext); } To me this should produce a series of 640w by 10h pixel bars. In Google Chrome and Firefox I get 15 bars. To me that means ( 480 / 15 ) is 32 pixel high bars. So I change the code to: function testDraw ( thecontext ) { var myblue = 255; thecontext.save(); // Save All Settings (Before this Function was called) for (var i = 0; i < 16; i++ ) { if (myblue.toString(16).length == 1) { thecontext.fillStyle = "#00000" + myblue.toString(16); } else { thecontext.fillStyle = "#0000" + myblue.toString(16); } thecontext.fillRect(0, (i * 10), 640, 10); myblue = myblue - 10; }; thecontext.restore(); // Restore Settings to Save Point (Removing Styles, etc...) } And get a true 32 pixel height result for comparison. Other than the fact that the first code snippet has shades of blue rendering in non-visible portions of the <canvas>, they are measuring 32 pixels. Now back to the original JavaScript code... If I inspect the <canvas> tag in Chrome it reports 640 x 480. If I inspect it in Firefox it reports 640 x 480. BUT! Firefox exports the original code to png at 300 x 150 (which is 15 rows of 10). Is it somehow being resized to 640 x 480 by the CSS instead of being set to a true 640 x 480? Why, how, what? O_o I'm confused...

    Read the article

  • Find the right value in recursive array

    - by fire
    I have an array with multiple sub-arrays like this: Array ( [0] => Array ( [Page_ID] => 1 [Page_Parent_ID] => 0 [Page_Title] => Overview [Page_URL] => overview [Page_Type] => content [Page_Order] => 1 ) [1] => Array ( [0] => Array ( [Page_ID] => 2 [Page_Parent_ID] => 1 [Page_Title] => Team [Page_URL] => overview/team [Page_Type] => content [Page_Order] => 1 ) ) [2] => Array ( [Page_ID] => 3 [Page_Parent_ID] => 0 [Page_Title] => Funds [Page_URL] => funds [Page_Type] => content [Page_Order] => 2 ) [3] => Array ( [0] => Array ( [Page_ID] => 4 [Page_Parent_ID] => 3 [Page_Title] => Strategy [Page_URL] => funds/strategy [Page_Type] => content [Page_Order] => 1 ) [1] => Array ( [0] => Array ( [Page_ID] => 7 [Page_Parent_ID] => 4 [Page_Title] => A Class Fund [Page_URL] => funds/strategy/a-class-fund [Page_Type] => content [Page_Order] => 1 ) [1] => Array ( [0] => Array ( [Page_ID] => 10 [Page_Parent_ID] => 7 [Page_Title] => Information [Page_URL] => funds/strategy/a-class-fund/information [Page_Type] => content [Page_Order] => 1 ) [1] => Array ( [Page_ID] => 11 [Page_Parent_ID] => 7 [Page_Title] => Fund Data [Page_URL] => funds/strategy/a-class-fund/fund-data [Page_Type] => content [Page_Order] => 2 ) ) [2] => Array ( [Page_ID] => 8 [Page_Parent_ID] => 4 [Page_Title] => B Class Fund [Page_URL] => funds/strategy/b-class-fund [Page_Type] => content [Page_Order] => 2 ) I need a function to find the right Page_URL so if you know the $url is "funds/strategy/a-class-fund" I need to pass that to a function that returns a single array result (which would be the Page_ID = 7 array in this example). Having a bit of a stupid day, any help would be appreciated!
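
    A minimal sketch of one way to do it (assuming $pages holds the nested array shown above): recurse into the numerically keyed child arrays and return the first record whose Page_URL matches:

        function find_page_by_url(array $pages, $url)
        {
            foreach ($pages as $entry) {
                if (!is_array($entry)) {
                    continue;
                }

                if (isset($entry['Page_URL'])) {
                    // A page record: check it directly.
                    if ($entry['Page_URL'] === $url) {
                        return $entry;
                    }
                } else {
                    // A numerically keyed array of children: search inside it.
                    $match = find_page_by_url($entry, $url);
                    if ($match !== null) {
                        return $match;
                    }
                }
            }

            return null;    // no match anywhere in this branch
        }

        $page = find_page_by_url($pages, 'funds/strategy/a-class-fund');
        // $page['Page_ID'] would be 7 for the sample data above.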

    Read the article

  • Critique my heap debugger

    - by FredOverflow
    I wrote the following heap debugger in order to demonstrate memory leaks, double deletes and wrong forms of deletes (i.e. trying to delete an array with delete p instead of delete[] p) to beginning programmers. I would love to get some feedback on that from strong C++ programmers because I have never done this before and I'm sure I've done some stupid mistakes. Thanks! #include <cstdlib> #include <iostream> #include <new> namespace { const int ALIGNMENT = 16; const char* const ERR = "*** ERROR: "; int counter = 0; struct heap_debugger { heap_debugger() { std::cerr << "*** heap debugger started\n"; } ~heap_debugger() { std::cerr << "*** heap debugger shutting down\n"; if (counter > 0) { std::cerr << ERR << "failed to release memory " << counter << " times\n"; } else if (counter < 0) { std::cerr << ERR << (-counter) << " double deletes detected\n"; } } } instance; void* allocate(size_t size, const char* kind_of_memory, size_t token) throw (std::bad_alloc) { void* raw = malloc(size + ALIGNMENT); if (raw == 0) throw std::bad_alloc(); *static_cast<size_t*>(raw) = token; void* payload = static_cast<char*>(raw) + ALIGNMENT; ++counter; std::cerr << "*** allocated " << kind_of_memory << " at " << payload << " (" << size << " bytes)\n"; return payload; } void release(void* payload, const char* kind_of_memory, size_t correct_token, size_t wrong_token) throw () { if (payload == 0) return; std::cerr << "*** releasing " << kind_of_memory << " at " << payload << '\n'; --counter; void* raw = static_cast<char*>(payload) - ALIGNMENT; size_t* token = static_cast<size_t*>(raw); if (*token == correct_token) { *token = 0xDEADBEEF; free(raw); } else if (*token == wrong_token) { *token = 0x177E6A7; std::cerr << ERR << "wrong form of delete\n"; } else { std::cerr << ERR << "double delete\n"; } } } void* operator new(size_t size) throw (std::bad_alloc) { return allocate(size, "non-array memory", 0x5AFE6A8D); } void* operator new[](size_t size) throw (std::bad_alloc) { return allocate(size, " array memory", 0x5AFE6A8E); } void operator delete(void* payload) throw () { release(payload, "non-array memory", 0x5AFE6A8D, 0x5AFE6A8E); } void operator delete[](void* payload) throw () { release(payload, " array memory", 0x5AFE6A8E, 0x5AFE6A8D); }

    Read the article

  • Why doesn't PHP's oci_connect return false?

    - by absolethe
    I have a situation in which we have two production databases that synchronize with one other. Server One is considered the primary. Sometimes due to maintenance or a disaster Server Two will become primary. In some of our code that means we have to manually go in and edit the server name for database connections. I find this annoying, so the last thing I wrote I put the server information for both and set up a loop. If oci_connect failed on the Server One 3 times it would move on to Server Two. If Server Two failed 3 times it would notify the user a connection couldn't be made. This has worked fine most times we've had the situation of switching the servers. Yesterday, for example, it worked fine. Today it didn't. It just sat and spun endlessly. No error in the PHP error log. No failure to move on from. No error output to the screen. Nothing for 5 minutes. So then I had to manually edit the stupid config file. I asked what could possibly be different and I was told "yesterday the database was down, but not the server. today the server is down." Okay...? But I don't see a distinction. I would expect oci_connect to return false if it can't establish any sort of communication with the server. I'd expect it to timeout and error. Not just pass it on when it receives an error code from the server. What if there's a network problem, for example? Is this a bug in oci_connect or is there a possibility that something in our PHP configuration gives oci_connect a crazily long timeout? If it is a sort of "bug" is there some way I can check to see if the server is up first? Like a ping? (Of course when I did a ping through the command prompt I got a response from Server One and then was told, "it's back now" although I am skeptical about the timing on that.) Anyway, if anyone could shed some light on why oci_connect might run endlessly without failing and how to keep it from doing so I'd be grateful. -- Edit: My code looks like the examples on PHP.net only in some loops. $count = count($servers); for($i = 0; $i < $count; $i++){ if((!isset($connection)) || ($connection == false)){ // Attempt to connect to the oracle database $connection = @oci_connect($servers[$i]["user"], $servers[$i]["pass"], $servers[$i]["conid"]) or ($conn_error = oracle_error()); // Try again if there was a failure if(($connection == false) || (isset($con_error))){ // Three (two more) tries per alternative for($j = $st; $j < $fn; $j++){ // Try again to connect $connection = @oci_connect($servers[$i]["user"], $servers[$i]["pass"], $servers[$i]["conid"]) or ($conn_error = oracle_error()); } // for($j = 2; $j < 4; $j++) } // if($connection == false) } // if(!isset($connection) || ($connection == false)) } // for($i = 0; $i < $count; $i++)
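
    One thing that may help (a sketch only, not a guaranteed fix): probe the listener port with a short socket timeout before calling oci_connect(), so a host that is completely down fails fast instead of hanging. The 'host' key and port 1521 below are assumptions to pull from your own config, and a reachable port still does not prove the database itself is open; client-side timeouts (for example SQLNET.OUTBOUND_CONNECT_TIMEOUT in sqlnet.ora, where the client version supports it) are also worth checking.

        function listener_is_reachable($host, $port = 1521, $timeout = 3)
        {
            $errno  = 0;
            $errstr = '';
            $socket = @fsockopen($host, $port, $errno, $errstr, $timeout);

            if ($socket === false) {
                return false;       // host down, port closed, or network problem
            }

            fclose($socket);
            return true;
        }

        foreach ($servers as $server) {
            // 'host' is an assumed extra key in the $servers config array.
            if (!listener_is_reachable($server['host'])) {
                continue;           // move straight on to the alternate server
            }

            $connection = @oci_connect($server['user'], $server['pass'], $server['conid']);
            if ($connection !== false) {
                break;              // connected
            }

            $error = oci_error(); // connect errors: call with no arguments
            // log $error['message'] here before trying the next server
        }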

    Read the article

  • Getting 404 in Android app while trying to get xml from localhost

    - by Patrick
    This must be something really stupid, trying to solve this issue for a couple of days now and it's really not working. I searched everywhere and there probably is someone with the same problem, but I can't seem to find it. I'm working on an Android app and this app pulls some xml from a website. Since this website is down a lot, I decided to save it and run it locally. Now what I did; - I downloaded the kWs app for hosting the downloaded xml file - I put the file in the right directory and could access it through the mobile browser, but not with my app (same code as I used with pulling it from some other website, not hosted by me, only difference was the URL obviously) So I tried to host it on my PC and access it with my app from there. Again the same results, the mobile browsers had no problem finding it, but the app kept saying 404 Not Found: "The requested URL /test.xml&parama=Someone&paramb= was not found on this server." Note: Don't mind the 2 parameters I am sending, I needed that to get the right stuff from the website that wasn't hosted by me. My code: public String getStuff(String name){ String URL = "http://10.0.0.8/test.xml"; ArrayList<NameValuePair> params = new ArrayList<NameValuePair>(2); params.add(new BasicNameValuePair("parama", name)); params.add(new BasicNameValuePair("paramb", "")); APIRequest request = new APIRequest(URL, params); try { RequestXML rxml = new RequestXML(); AsyncTask<APIRequest, Void, String> a = rxml.execute(request); ... } catch(Exception e) { e.printStackTrace(); } return null; } That should be working correctly. Now the RequestXML class part: class RequestXML extends AsyncTask<APIRequest, Void, String>{ @Override protected String doInBackground(APIRequest... uri) { HttpClient httpclient = new DefaultHttpClient(); String completeUrl = uri[0].url; // ... Add parameters to URL ... HttpGet request = null; try { request = new HttpGet(new URI(completeUrl)); } catch (URISyntaxException e1) { e1.printStackTrace(); } HttpResponse response; String responseString = ""; try { response = httpclient.execute(request); StatusLine statusLine = response.getStatusLine(); if(statusLine.getStatusCode() == HttpStatus.SC_OK){ // .. It crashes here, because statusLine.getStatusCode() returns a 404 instead of a 200. The xml is just plain xml, nothing special about it. I changed the contents of the .htaccess file into "ALLOW FROM ALL" (works, cause the browser on my mobile device can access it and shows the correct xml). I am running Android 4.0.4 and I am using the default browser AND chrome on my mobile device. I am using MoWeS to host the website on my PC. Any help would be appreciated and if you need to know anything before you can find an answer to this problem, I'll be more than happy to give you that info. Thank you for you time! Cheers.

    Read the article

  • jQuery .find() doesn't return data in IE but does in Firefox and Chrome

    - by Steve Hiner
    I helped a friend out by doing a little web work for him. Part of what he needed was an easy way to change a couple pieces of text on his site. Rather than having him edit the HTML I decided to provide an XML file with the messages in it and I used jQuery to pull them out of the file and insert them into the page. It works great... In Firefox and Chrome, not so great in IE7. I was hoping one of you could tell me why. I did a fair but of googling but couldn't find what I'm looking for. Here's the XML: <?xml version="1.0" encoding="utf-8" ?> <messages> <message type="HeaderMessage"> This message is put up in the header area. </message> <message type="FooterMessage"> This message is put in the lower left cell. </message> </messages> And here's my jQuery call: <script type="text/javascript"> $(document).ready(function() { $.get('messages.xml', function(d) { //I have confirmed that it gets to here in IE //and it has the xml loaded. //alert(d); gives me a message box with the xml text in it //alert($(d).find('message')); gives me "[object Object]" //alert($(d).find('message')[0]); gives me "undefined" //alert($(d).find('message').Length); gives me "undefined" $(d).find('message').each(function() { //But it never gets to here in IE var $msg = $(this); var type = $msg.attr("type"); var message = $msg.text(); switch (type) { case "HeaderMessage": $("#HeaderMessageDiv").html(message); break; case "FooterMessage": $("#footermessagecell").html(message); break; default: } }); }); }); </script> Is there something I need to do differently in IE? Based on the message box with [object Object] I'm assumed that .find was working in IE but since I can't index into the array with [0] or check it's Length I'm guessing that means .find isn't returning any results. Any reason why that would work perfectly in Firefox and Chrome but fail in IE? I'm a total newbie with jQuery so I hope I haven't just done something stupid. That code above was scraped out of a forum and modified to suit my needs. Since jQuery is cross-platform I figured I wouldn't have to deal with this mess. Edit: I've found that if I load the page in Visual Studio 2008 and run it then it will work in IE. So it turns out it always works when run through the development web server. Now I'm thinking IE just doesn't like doing .find in XML loaded off of my local drive so maybe when this is on an actual web server it will work OK. I have confirmed that it works fine when browsed from a web server. Must be a peculiarity with IE. I'm guessing it's because the web server sets the mime type for the xml data file transfer and without that IE doesn't parse the xml correctly.

    Read the article

  • How does java.util.Collections.contains() perform faster than a linear search?

    - by The111
    I've been fooling around with a bunch of different ways of searching collections, collections of collections, etc. Doing lots of stupid little tests to verify my understanding. Here is one which boggles me (source code further below). In short, I am generating N random integers and adding them to a list. The list is NOT sorted. I then use Collections.contains() to look for a value in the list. I intentionally look for a value that I know won't be there, because I want to ensure that the entire list space is probed. I time this search. I then do another linear search manually, iterating through each element of the list and checking if it matches my target. I also time this search. On average, the second search takes 33% longer than the first one. By my logic, the first search must also be linear, because the list is unsorted. The only possibility I could think of (which I immediately discard) is that Java is making a sorted copy of my list just for the search, but (1) I did not authorize that usage of memory space and (2) I would think that would result in MUCH more significant time savings with such a large N. So if both searches are linear, they should both take the same amount of time. Somehow the Collections class has optimized this search, but I can't figure out how. So... what am I missing? import java.util.*; public class ListSearch { public static void main(String[] args) { int N = 10000000; // number of ints to add to the list int high = 100; // upper limit for random int generation List<Integer> ints; int target = -1; // target will not be found, forces search of entire list space long start; long end; ints = new ArrayList<Integer>(); start = System.currentTimeMillis(); System.out.print("Generating new list... "); for (int i = 0; i < N; i++) { ints.add(((int) (Math.random() * high)) + 1); } end = System.currentTimeMillis(); System.out.println("took " + (end-start) + "ms."); start = System.currentTimeMillis(); System.out.print("Searching list for target (method 1)... "); if (ints.contains(target)) { // nothing } end = System.currentTimeMillis(); System.out.println(" Took " + (end-start) + "ms."); System.out.println(); ints = new ArrayList<Integer>(); start = System.currentTimeMillis(); System.out.print("Generating new list... "); for (int i = 0; i < N; i++) { ints.add(((int) (Math.random() * high)) + 1); } end = System.currentTimeMillis(); System.out.println("took " + (end-start) + "ms."); start = System.currentTimeMillis(); System.out.print("Searching list for target (method 2)... "); for (Integer i : ints) { // nothing } end = System.currentTimeMillis(); System.out.println(" Took " + (end-start) + "ms."); } }

    Read the article

  • Q1 2010 New Feature: Paging with RadGridView for Silverlight and WPF

    We are glad to announce that the Q1 2010 Release has added another weapon to RadGridViews growing arsenal of features. This is the brand new RadDataPager control which provides the user interface for paging through a collection of data. The good news is that RadDataPager can be used to page any collection. It does not depend on RadGridView in any way, so you will be free to use it with the rest of your ItemsControls if you chose to do so. Before you read on, you might want to download the samples solution that I have attached. It contains a sample project for every scenario that I will discuss later on. Looking at the code while reading will make things much easier for you. There is something for everyone among the 10 Visual Studio projects that are included in the solution. So go and grab it. I. Paging essentials The single most important piece of software concerning paging in Silverlight is the System.ComponentModel.IPagedCollectionView interface. Those of you who are on the WPF front need not worry though. As you might already know, Teleriks Silverlight and WPF controls is share the same code-base. Since WPF does not contain a similar interface, Telerik has provided its own Telerik.Windows.Data.IPagedCollectionView. The IPagedCollectionView interface contains several important members which are used by RadGridView to perform the actual paging. Silverlight provides a default implementation of this interface which, naturally, is called PagedCollectionView. You should definitely take a look at its source code in case you are interested in what is going on under the hood. But this is not a prerequisite for our discussion. The WPF default implementation of the interface is Teleriks QueryableCollectionView which, among many other interfaces, implements IPagedCollectionView. II. No Paging In order to gradually build up my case, I will start with a very simple example that lacks paging whatsoever. It might sound stupid, but this will help us build on top of this paging-devoid example. Let us imagine that we have the simplest possible scenario. That is a simple IEnumerable and an ItemsControl that shows its contents. This will look like this: No Paging IEnumerable itemsSource = Enumerable.Range(0, 1000); this.itemsControl.ItemsSource = itemsSource; XAML <Border Grid.Row="0" BorderBrush="Black" BorderThickness="1" Margin="5">     <ListBox Name="itemsControl"/> </Border> <Border Grid.Row="1" BorderBrush="Black" BorderThickness="1" Margin="5">     <TextBlock Text="No Paging"/> </Border> Nothing special for now. Just some data displayed in a ListBox. The two sample projects in the solution that I have attached are: NoPaging_WPF NoPaging_SL3 With every next sample those two project will evolve in some way or another. III. Paging simple collections The single most important property of RadDataPager is its Source property. This is where you pass in your collection of data for paging. More often than not your collection will not be an IPagedCollectionView. It will either be a simple List<T>, or an ObservableCollection<T>, or anything that is simply IEnumerable. Unless you had paging in mind when you designed your project, it is almost certain that your data source will not be pageable out of the box. So what are the options? III. 1. Wrapping the simple collection in an IPagedCollectionView If you look at the constructors of PagedCollectionView and QueryableCollectionView you will notice that you can pass in a simple IEnumerable as a parameter. 
Those two classes will wrap it and provide paging capabilities over your original data. In fact, this is what RadGridView does internally. It wraps your original collection in an QueryableCollectionView in order to easily perform many useful tasks such as filtering, sorting, and others, but in our case the most important one is paging. So let us start our series of examples with the most simplistic one. Imagine that you have a simple IEnumerable which is the source for an ItemsControl. Here is how to wrap it in order to enable paging: Silverlight IEnumerable itemsSource = Enumerable.Range(0, 1000); var pagedSource = new PagedCollectionView(itemsSource); this.radDataPager.Source = pagedSource; this.itemsControl.ItemsSource = pagedSource; WPF IEnumerable itemsSource = Enumerable.Range(0, 1000); var pagedSource = new QueryableCollectionView(itemsSource); this.radDataPager.Source = pagedSource; this.itemsControl.ItemsSource = pagedSource; XAML <Border Grid.Row="0"         BorderBrush="Black"         BorderThickness="1"         Margin="5">     <ListBox Name="itemsControl"/> </Border> <Border Grid.Row="1"         BorderBrush="Black"         BorderThickness="1"         Margin="5">     <telerikGrid:RadDataPager Name="radDataPager"                               PageSize="10"                              IsTotalItemCountFixed="True"                              DisplayMode="All"/> This will do the trick. It is quite simple, isnt it? The two sample projects in the solution that I have attached are: PagingSimpleCollectionWithWrapping_WPF PagingSimpleCollectionWithWrapping_SL3 III. 2. Binding to RadDataPager.PagedSource In case you do not like this approach there is a better one. When you assign an IEnumerable as the Source of a RadDataPager it will automatically wrap it in a QueryableCollectionView and expose it through its PagedSource property. From then on, you can attach any number of ItemsControls to the PagedSource and they will be automatically paged. Here is how to do this entirely in XAML: Using RadDataPager.PagedSource <Border Grid.Row="0"         BorderBrush="Black"         BorderThickness="1" Margin="5">     <ListBox Name="itemsControl"              ItemsSource="{Binding PagedSource, ElementName=radDataPager}"/> </Border> <Border Grid.Row="1"         BorderBrush="Black"         BorderThickness="1"         Margin="5">     <telerikGrid:RadDataPager Name="radDataPager"                               Source="{Binding ItemsSource}"                              PageSize="10"                              IsTotalItemCountFixed="True"                              DisplayMode="All"/> The two sample projects in the solution that I have attached are: PagingSimpleCollectionWithPagedSource_WPF PagingSimpleCollectionWithPagedSource_SL3 IV. Paging collections implementing IPagedCollectionView Those of you who are using WCF RIA Services should feel very lucky. After a quick look with Reflector or the debugger we can see that the DomainDataSource.Data property is in fact an instance of the DomainDataSourceView class. This class implements a handful of useful interfaces: ICollectionView IEnumerable INotifyCollectionChanged IEditableCollectionView IPagedCollectionView INotifyPropertyChanged Luckily, IPagedCollectionView is among them which lets you do the whole paging in the server. So lets do this. We will add a DomainDataSource control to our page/window and connect the items control and the pager to it. 
Here is how to do this: MainPage <riaControls:DomainDataSource x:Name="invoicesDataSource"                               AutoLoad="True"                               QueryName="GetInvoicesQuery">     <riaControls:DomainDataSource.DomainContext>         <services:ChinookDomainContext/>     </riaControls:DomainDataSource.DomainContext> </riaControls:DomainDataSource> <Border Grid.Row="0"         BorderBrush="Black"         BorderThickness="1"         Margin="5">     <ListBox Name="itemsControl"              ItemsSource="{Binding Data, ElementName=invoicesDataSource}"/> </Border> <Border Grid.Row="1"         BorderBrush="Black"         BorderThickness="1"         Margin="5">     <telerikGrid:RadDataPager Name="radDataPager"                               Source="{Binding Data, ElementName=invoicesDataSource}"                              PageSize="10"                              IsTotalItemCountFixed="True"                              DisplayMode="All"/> By the way, you can replace the ListBox from the above code snippet with any other ItemsControl. It can be RadGridView, it can be the MS DataGrid, you name it. Essentially, RadDataPager is sending paging commands to the the DomainDataSource.Data. It does not care who, what, or how many different controls are bound to this same Data property of the DomainDataSource control. So if you would like to experiment with this, you can throw in any number of other ItemsControls next to the ListBox, bind them in the same manner, and all of them will be paged by our single RadDataPager. Furthermore, you can throw in any number of RadDataPagers and bind them to the same property. Then when you page with any one of them will automatically update all of the rest. The whole picture is simply beautiful and we can do all of this thanks to WCF RIA Services. The two sample projects (Silverlight only) in the solution that I have attached are: PagingIPagedCollectionView PagingIPagedCollectionView.Web IV. Paging RadGridView While you can replace the ListBox in any of the above examples with a RadGridView, RadGridView offers something extra. Similar to the DomainDataSource.Data property, the RadGridView.Items collection implements the IPagedCollectionView interface. So you are already thinking: Then why not bind the Source property of RadDataPager to RadGridView.Items? Well thats exactly what you can do and you will start paging RadGridView out-of-the-box. It is as simple as that, no code-behind is involved: MainPage <Border Grid.Row="0"         BorderBrush="Black"         BorderThickness="1" Margin="5">     <telerikGrid:RadGridView Name="radGridView"                              ItemsSource="{Binding ItemsSource}"/> </Border> <Border Grid.Row="1"         BorderBrush="Black"         BorderThickness="1"         Margin="5">     <telerikGrid:RadDataPager Name="radDataPager"                               Source="{Binding Items, ElementName=radGridView}"                              PageSize="10"                              IsTotalItemCountFixed="True"                              DisplayMode="All"/> The two sample projects in the solution that I have attached are: PagingRadGridView_SL3 PagingRadGridView_WPF With this last example I think I have covered every possible paging combination. In case you would like to see an example of something that I have not covered, please let me know. 
Also, make sure you check out those great online examples: WCF RIA Services with DomainDataSource Paging Configurator Endless Paging Paging Any Collection Paging RadGridView Happy Paging! Download Full Source Code

    Read the article

  • Building The Right SharePoint Team For Your Organization

    - by Mark Rackley
    I see the question posted fairly often asking what kind SharePoint team an organization should have. How many people do I need? What roles do I need to fill? What is best for my organization? Well, just like every other answer in SharePoint, the correct answer is “it depends”. Do you ever get sick of hearing that??? I know I do… So, let me give you my thoughts and opinions based upon my experience and what I’ve seen and let you come to your own conclusions. What are the possible SharePoint roles? I guess the first thing you need to understand are the different roles that exist in SharePoint (and their are LOTS). Remember, SharePoint is a massive beast and you will NOT find one person who can do it all. If you are hoping to find that person you will be sorely disappointed. For the most part this is true in SharePoint 2007 and 2010. However, generally things are improved in 2010 and easier for junior individuals to grasp. SharePoint Administrator The absolutely positively only role that you should not be without no matter the size of your organization or SharePoint deployment is a SharePoint administrator. These guys are essential to keeping things running and figuring out what’s wrong when things aren’t running well. These unsung heroes do more before 10 am than I do all day. The bad thing is, when these guys are awesome, you don’t even know they exist because everything is running so smoothly. You should definitely invest some time and money here to make sure you have some competent if not rockstar help. You need an admin who truly loves SharePoint and will go that extra mile when necessary. Let me give you a real world example of what I’m talking about: We have a rockstar admin… and I’m sure she’s sick of my throwing her name around so she’ll just have to live with remaining anonymous in this post… sorry Lori… Anyway! A couple of weeks ago our Server teams came to us and said Hi Lori, I’m finalizing the MOSS servers and doing updates that require a restart; can I restart them? Seems like a harmless request from your server team does it not? Sure, go ahead and apply the patches and reboot during our scheduled maintenance window. No problem? right? Sounded fair to me… but no…. not to our fearless SharePoint admin… I need a complete list of patches that will be applied. There is an update that is out there that will break SharePoint… KB973917 is the patch that has been shown to cause issues. What? You mean Microsoft released a patch that would actually adversely affect SharePoint? If we did NOT have a rockstar admin, our server team would have applied these patches and then when some problem occurred in SharePoint we’d have to go through the fun task of tracking down exactly what caused the issue and resolve it. How much time would that have taken? If you have a junior SharePoint admin or an admin who’s not out there staying on top of what’s going on you could have spent days tracking down something so simple as applying a patch you should not have applied. I will even go as far to say the only SharePoint rockstar you NEED in your organization is a SharePoint admin. You can always outsource really complicated development projects or bring in a rockstar contractor every now and then to make sure you aren’t way off track in other areas. For your day-to-day sanity and to keep SharePoint running smoothly, you need an awesome Admin. Some rockstars in this category are: Ben Curry, Mike Watson, Joel Oleson, Todd Klindt, Shane Young, John Ferringer, Sean McDonough, and of course Lori Gowin. 
SharePoint Developer Another essential role for your SharePoint deployment is a SharePoint developer. Things do start to get a little hazy here and there are many flavors of “developers”. Are you writing custom code? using SharePoint Designer? What about SharePoint Branding?  Are all of these considered developers? I would say yes. Are they interchangeable? I’d say no. Development in SharePoint is such a large beast in itself. I would say that it’s not so large that you can’t know it all well, but it is so large that there are many people who specialize in one particular category. If you are lucky enough to have someone on staff who knows it all well, you better make sure they are well taken care of because those guys are ready-made to move over to a consulting role and charge you 3 times what you are probably paying them. :) Some of the all-around rockstars are Eric Shupps, Andrew Connell (go Razorbacks), Rob Foster, Paul Schaeflein, and Todd Bleeker SharePoint Power User/No-Code Solutions Developer These SharePoint Swiss Army Knives are essential for quick wins in your organization. These people can twist the out-of-the-box functionality to make it do things you would not even imagine. Give these guys SharePoint Designer, jQuery, InfoPath, and a little time and they will create views, dashboards, and KPI’s that will blow your mind away and give your execs the “wow” they are looking for. Not only can they deliver that wow factor, but they can mashup, merge, and really help make your SharePoint application usable and deliver an overall better user experience. Before you hand off a project to your SharePoint Custom Code developer, let one of these rockstars look at it and show you what they can do (in probably less time). I would say the second most important role you can fill in your organization is one of these guys. Rockstars in this category are Christina Wheeler, Laura Rogers, Jennifer Mason, and Mark Miller SharePoint Developer – Custom Code If you want to really integrate SharePoint into your legacy systems, or really twist it and make it bend to your will, you are going to have to open up Visual Studio and write some custom code.  Remember, SharePoint is essentially just a big, huge, ginormous .NET application, so you CAN write code to make it do ANYTHING, but do you really want to spend the time and effort to do so? At some point with every other form of SharePoint development you are going to run into SOME limitation (SPD Workflows is the big one that comes to mind). If you truly want to knock down all the walls then custom development is the way to go. PLEASE keep in mind when you are looking for a custom code developer that a .NET developer does NOT equal a SharePoint developer. Just SOME of the things these guys write are: Custom Workflows Custom Web Parts Web Service functionality Import data from legacy systems Export data to legacy systems Custom Actions Event Receivers Service Applications (2010) These guys are also the ones generally responsible for packaging everything up into solution packages (you are doing that, right?). Rockstars in this category are Phil Wicklund, Christina Wheeler, Geoff Varosky, and Brian Jackett. SharePoint Branding “But it LOOKS like SharePoint!” Somebody call the WAAAAAAAAAAAAHMbulance…   Themes, Master Pages, Page Layouts, Zones, and over 2000 styles in CSS.. these guys not only have to be comfortable with all of SharePoint’s quirks and pain points when branding, but they have to know it TWICE for publishing and non-publishing sites.  
Not only that, but these guys really need to have an eye for graphic design and be able to translate the ramblings of business into something visually stunning. They also have to be comfortable with XSLT, XML, and be able to hand off what they do to your custom developers for them to package as solutions (which you are doing, right?). These rockstars include Heater Waterman, Cathy Dew, and Marcy Kellar SharePoint Architect SharePoint Architects are generally SharePoint Admins or Developers who have moved into more of a BA role? Is that fair to say? These guys really have a grasp and understanding for what SharePoint IS and what it can do. These guys help you structure your farms to meet your needs and help you design your applications the correct way. It’s always a good idea to bring in a rockstar SharePoint Architect to do a sanity check and make sure you aren’t doing anything stupid.  Most organizations probably do not have a rockstar architect on staff. These guys are generally brought in at the deployment of a farm, upgrade of a farm, or for large development projects. I personally also find architects very useful for sitting down with the business to translate their needs into what SharePoint can do. A good architect will be able to pick out what can be done out-of-the-box and what has to be custom built and hand those requirements to the development Staff. Architects can generally fill in as an admin or a developer when needed. Some rockstar architects are Rick Taylor, Dan Usher, Bill English, Spence Harbar, Neil Hodgkins, Eric Harlan, and Bjørn Furuknap. Other Roles / Specialties On top of all these other roles you also get these people who specialize in things like Reporting, BDC (BCS in 2010), Search, Performance, Security, Project Management, etc... etc... etc... Again, most organizations will not have one of these gurus on staff, they’ll just pay out the nose for them when they need them. :) SharePoint End User Everyone else in your organization that touches SharePoint falls into this category. What they actually DO in SharePoint is determined by your governance and what permissions you give these guys. Hopefully you have these guys on a fairly short leash and are NOT giving them access to tools like SharePoint Designer. Sadly end users are the ones who truly make your deployment a success by using it, but are also your biggest enemy in breaking it.  :)  We love you guys… really!!! Okay, all that’s fine and dandy, but what should MY SharePoint team look like? It depends! Okay… Are you just doing out of the box team sites with no custom development? Then you are probably fine with a great Admin team and a great No-Code Solution Development team. How many people do you need? Depends on how busy you can keep them. Sorry, can’t answer the question about numbers without knowing your specific needs. I can just tell you who you MIGHT need and what they will do for you. I’ll leave you with what my ideal SharePoint Team would look like for a particular scenario: Farm / Organization Structure Dev, QA, and 2 Production Farms. 
5000 – 10000 Users Custom Development and Integration with legacy systems Team Sites, My Sites, Intranet, Document libraries and overall company collaboration Team Rockstar SharePoint Administrator 2-3 junior SharePoint Administrators SharePoint Architect / Lead Developer 2 Power User / No-Code Solution Developers 2-3 Custom Code developers Branding expert With a team of that size and skill set, they should be able to keep a substantial SharePoint deployment running smoothly and meet your business needs. This does NOT mean that you would not need to bring in contract help from time to time when you need an uber specialist in one area. Also, this team assumes there will be ongoing development for the life of your SharePoint farm. If you are just going to be doing sporadic custom development, it might make sense to partner with an awesome firm that specializes in that sort of work (I can give you the name of a couple if you are interested).  Again though, the size of your team depends on the number of requests you are receiving and how much active deployment you are doing. So, don’t bring in a team that looks like this and then yell at me because they are sitting around with nothing to do or are so overwhelmed that nothing is getting done. I do URGE you to take the proper time to asses your needs and determine what team is BEST for your organization. Also, PLEASE PLEASE PLEASE do not skimp on the talent. When it comes to SharePoint you really do get what you pay for when it comes to employees, contractors, and software.  SharePoint can become absolutely critical to your business and because you skimped on hiring a developer he created a web part that brings down the farm because he doesn’t know what he’s doing, or you hire an admin who thinks it’s fine to stick everything in the same Content Database and then can’t figure out why people are complaining. SharePoint can be an enormous blessing to an organization or it’s biggest curse. Spend the time and money to do it right, or be prepared to spending even more time and money later to fix it.

    Read the article

  • The Product Owner

    - by Robert May
    In a previous post, I outlined the rules of Scrum.  This post details one of those rules. Picking a most important part of Scrum is difficult.  All of the rules are required, but if there were one rule that is “more” required that every other rule, its having a good Product Owner.  Simply put, the Product Owner can make or break the project. Duties of the Product Owner A Product Owner has many duties and responsibilities.  I’ll talk about each of these duties in detail below. A Product Owner: Discovers and records stories for the backlog. Prioritizes stories in the Product Backlog, Release Backlog and Iteration Backlog. Determines Release dates and Iteration Dates. Develops story details and helps the team understand those details. Helps QA to develop acceptance tests. Interact with the Customer to make sure that the product is meeting the customer’s needs. Discovers and Records Stories for the Backlog When I do Scrum, I always use User Stories as the means for capturing functionality that’s required in the system.  Some people will use Use Cases, but the same rule applies.  The Product Owner has the ultimate responsibility for figuring out what functionality will be in the system.  Many different mechanisms for capturing this input can be used.  User interviews are great, but all sources should be considered, including talking with Customer Support types.  Often, they hear what users are struggling with the most and are a great source for stories that can make the application easier to use. Care should be taken when soliciting user stories from technical types such as programmers and the people that manage them.  They will almost always give stories that are very technical in nature and may not have a direct benefit for the end user.  Stories are about adding value to the company.  If the stories don’t have direct benefit to the end user, the Product Owner should question whether or not the story should be implemented.  In general, technical stories should be included as tasks in User Stories.  Technical stories are often needed, but the ultimate value to the user is in user based functionality, so technical stories should be considered nothing more than overhead in providing that user functionality. Until the iteration prior to development, stories should be nothing more than short, one line placeholders. An exercise called Story Planning can be used to brainstorm and come up with stories.  I’ll save the description of this activity for another blog post. For more information on User Stories, please read the book User Stories Applied by Mike Cohn. Prioritizes Stories in the Product Backlog, Release Backlog and Iteration Backlog Prioritization of stories is one of the most difficult tasks that a Product Owner must do.  A key concept of Scrum done right is the need to have the team working from a single set of prioritized stories.  If the team does not have a single set of prioritized stories, Scrum will likely fail at your organization.  The Product Owner is the ONLY person who has the responsibility to prioritize that list.  The Product Owner must be very diplomatic and sincerely listen to the people around him so that he can get the priorities correct. Just listening will still not yield the proper priorities.  Care must also be taken to ensure that Return on Investment is also considered.  Ultimately, determining which stories give the most value to the company for the least cost is the most important factor in determining priorities.  
Product Owners should be willing to look at cold, hard numbers to determine the order for stories.  Even when many people want a feature, if that features is costly to develop, it may not have as high of a return on investment as features that are cheaper, but not as popular. The act of prioritization often causes conflict in an environment.  Customer Service thinks that feature X is the most important, because it will stop people from calling.  Operations thinks that feature Y is the most important, because it will stop servers from crashing.  Developers think that feature Z is most important because it will make writing software much easier for them.  All of these are useful goals, but the team can have only one list of items, and each item must have a priority that is different from all other stories.  The Product Owner will determine which feature gives the best return on investment and the other features will have to wait their turn, which means that someone will not have their top priority feature implemented first. A weak Product Owner will refuse to do prioritization.  I’ve heard from multiple Product Owners the following phrase, “Well, it’s all got to be done, so what does it matter what order we do it in?”  If your product owner is using this phrase, you need a new Product Owner.  Order is VERY important.  In Scrum, every release is potentially shippable.  If the wrong priority items are developed, then the value added in each release isn’t what it should be.  Additionally, the Product Owner with this mindset doesn’t understand Agile.  A product is NEVER finished, until the company has decided that it is no longer a going concern and they are no longer going to sell the product.  Therefore, prioritization isn’t an event, its something that continues every day.  The logical extension of the phrase “It’s all got to be done” is that you will never ship your product, since a product is never “done.”  Once stories have been prioritized, assigning them to the Release Backlog and the Iteration Backlog becomes relatively simple.  The top priority items are copied into the respective backlogs in order and the task is complete.  The team does have the right to shuffle things around a little in the iteration backlog.  For example, they may determine that working on story C with story A is appropriate because they’re related, even though story B is technically a higher priority than story C.  Or they may decide that story B is too big to complete in the time available after Story A has tasks created, so they’ll work on Story C since it’s smaller.  They can’t, however, go deep into the backlog to pick stories to implement.  The team and the Product Owner should work together to determine what’s best for the company. Prioritization is time consuming, but its one of the most important things a Product Owner does. Determines Release Dates and Iteration Dates Product owners are responsible for determining release dates for a product.  A common misconception that Product Owners have is that every “release” needs to correspond with an actual release to customers.  This is not the case.  In general, releases should be no more than 3 months long.  You  may decide to release the product to the customers, and many companies do release the product to customers, but it may also be an internal release. If a release date is too far away, developers will fall into the trap of not feeling a sense of urgency.  The date is far enough away that they don’t need to give the release their full attention.  
Additionally, important tasks, such as performance tuning, regression testing, user documentation, and release preparation, will not happen regularly, making them much more difficult and time consuming to do.  The more frequently you do these tasks, the easier they are to accomplish. The Product Owner will be a key participant in determining whether or not a release should be sent out to the customers.  The determination should be made on whether or not the features contained in the release are valuable enough  and complete enough that the customers will see real value in the release.  Often, some features will take more than three months to get them to a state where they qualify for a release or need additional supporting features to be released.  The product owner has the right to make this determination. In addition to release dates, the Product Owner also will help determine iteration dates.  In general, an iteration length should be chosen and the team should follow that iteration length for an extended period of time.  If the iteration length is changed every iteration, you’re not doing Scrum.  Iteration lengths help the team and company get into a rhythm of developing quality software.  Iterations should be somewhere between 2 and 4 weeks in length.  Any shorter, and significant software will likely not be developed.  Any longer, and the team won’t feel urgency and planning will become very difficult. Iterations may not be extended during the iteration.  Companies where Scrum isn’t really followed will often use this as a strategy to complete all stories.  They don’t want to face the harsh reality of what their true performance is, and looking good is more important than seeking visibility and improving the process and team.  Companies like this typically don’t allow failure.  This is unhealthy.  Failure is part of life and unless we learn from it, we can’t improve.  I would much rather see a team push out stories to the next iteration and then have healthy discussions about why they failed rather than extend the iteration and not deal with the core problems. If iteration length varies, retrospectives become more difficult.  For example, evaluating the performance of the team’s estimation efforts becomes much more difficult if the iteration length varies.  Also, the team must have a velocity measurement.  If the iteration length varies, measuring velocity becomes impossible and upper management no longer will have the ability to evaluate the teams performance.  People external to the team will no longer have the ability to determine when key features are likely to be developed.  Variable iterations cause the entire company to fail and likely cause Scrum to fail at an organization. Develops Story Details and Helps the Team Understand Those Details A key concept in Scrum is that the stories are nothing more than a placeholder for a conversation.  Stories should be nothing more than short, one line statements about the functionality.  The team will then converse with the Product Owner about the details about that story.  The product owner needs to have a very good idea about what the details of the story are and needs to be able to help the team understand those details. Too often, we see this requirement as being translated into the need for comprehensive documentation about the story, including old fashioned requirements documentation.  
The team should only develop the documentation that is required and should not develop documentation that is only created because their is a process to do so. In general, what we see that works best is the iteration before a team starts development work on a story, the Product Owner, with other appropriate business analysts, will develop the details of that story.  They’ll figure out what business rules are required, potentially make paper prototypes or other light weight mock-ups, and they seek to understand the story and what is implied.  Note that the time allowed for this task is deliberately short.  The Product Owner only has a single iteration to develop all of the stories for the next iteration. If more than one iteration is used, I’ve found that teams will end up with Big Design Up Front and traditional requirements documents.  This is a waste of time, since the team will need to then have discussions with the Product Owner to figure out what the requirements document says.  Instead of this, skip making the pretty pictures and detailing the nuances of the requirements and build only what is minimally needed by the team to do development.  If something comes up during development, you can address it at that time and figure out what you want to do.  The goal is to keep things as light weight as possible so that everyone can move as quickly as possible. Helps QA to Develop Acceptance Tests In Scrum, no story can be counted until it is accepted by QA.  Because of this, acceptance tests are very important to the team.  In general, acceptance tests need to be developed prior to the iteration or at the very beginning of the iteration so that the team can make sure that the tasks that they develop will fulfill the acceptance criteria. The Product Owner will help the team, including QA, understand what will make the story acceptable.  Note that the Product Owner needs to be careful about specifying that the feature will work “Perfectly” at the end of the iteration.  In general, features are developed a little bit at a time, so only the bit that is being developed should be considered as necessary for acceptance. A weak Product Owner will make statements like “Do it right the first time.”  Not only are these statements damaging to the team (like they would try to do it WRONG the first time . . .), they’re also ignoring the iterative nature of Scrum.  Additionally, a weak product owner will seek to add scope in the acceptance testing.  For example, they will refuse to determine acceptance at the beginning of the iteration, and then, after the team has planned and committed to the iteration, they will expand scope by defining acceptance.  This often causes the team to miss the iteration because scope that wasn’t planned on is included.  There are ways that the team can mitigate this problem.  For example, include extra “Product Owner” time to deal with the uncertainty that you know will be introduced by the Product Owner.  This will slow the perceived velocity of the team and is not ideal, since they’ll be doing more work than they get credit for. Interact with the Customer to Make Sure that the Product is Meeting the Customer’s Needs Once development is complete, what the team has worked on should be put in front of real live people to see if it meets the needs of the customer.  One of the great things about Agile is that if something doesn’t work, we can revisit it in a future iteration!  
This frees up the team to make the best decision now and know that if that decision proves to be incorrect, the team can revisit it and change that decision. Features are about adding value to the customer, so if the customer doesn’t find them useful, then having the team make tweaks is valuable. In general, most software will be 80 to 90 percent “right” after the initial round and only minor tweaks are required. If proper coding standards are followed, these tweaks are usually minor and easy to accomplish. Product Owners who are doing a good job will encourage real users to see and use the software, since they know that they are trying to add value to the customer. Poor Product Owners will think that they know the answers already, that their customers are silly and do stupid things and that they don’t need customer input. If you have a Product Owner who is afraid to show the team’s work to real customers, you probably need a different Product Owner. Up Next, “Who Makes a Good Product Owner.” Followed by, “Messing with the Team.” Technorati Tags: Scrum, Product Owner

    Read the article

  • From Bluehost to WP Engine, My WordPress Story

    - by thatjeffsmith
    This is probably the longest blog post I’ve written in a LONG time. And if you’re used to coming here for the Oracle stuff, this post is not about that. It’s about my blog, and the stuff under the hood that makes it run, AKA WordPress. If you want to skip to the juicy stuff, then use these shortcuts: My Site Slowed Down, How I Moved to WP Engine, How WP Engine ‘Hooked’ Me, Why WP Engine? I started thatJeffSmith.com on May 28th, 2010. I had already been blogging for several years, but a couple of really smart people I respected (Andy, Brent – thanks again!) suggested that I take ownership of my content and begin building my personal brand. I thought that was a good idea, and so I signed up for service with bluehost. Bluehost makes setting up a WordPress site very, very easy. And, they continued to be easy to work with for the past 2 years. I would even recommend them to anyone looking to host their own WordPress install/site. For $83.40, I purchased a year’s worth of service and my domain name registration – a very good value. And then last year I paid $107.40 for another year’s service. And when that year expired I paid another $190.80 for an additional two years’ service in advance. I had been, up to that point, getting my money’s worth. And then, just a few weeks ago… My Site Slowed to a Crawl That spike was from an April Fool's Day post, I think. Why? Well, when I first started blogging, I had the same problem that most beginner bloggers have – not many readers. In my first year of blogging, I think the highest number of readers on a single day was about 125. I remember that day as I was very excited to break 100! Bluehost was very reliable, serving up my content with maybe a total of 3-4 outages in the past 2 years. Support was usually very prompt with answers and solutions, and I love their ‘Chat now’ technology – much nicer than message boards only or pay-to-talk phone support. In the past 6 months, however, I noticed a couple of things: daily traffic was increasing – woohoo! – and my service was experiencing severe CPU throttling – doh! To be honest, I wasn’t aware the throttling was occurring, but I did know that the response time of my blog was starting to lag. Average load times were approaching 20-30 seconds. Not good when good sites are loading in 5 seconds or less. And just this past week, in getting ready to launch a new website for work that sucked in an RSS feed from my blog, the new page was left waiting for more than a minute. Not good! In fact my boss asked, why aren’t you blogging on Blogger? Ugh. I tried a few things to fix the problem: I paid for a premium WordPress theme – Themify’s Grido (thanks to @SQLRockstar for the heads-up), I installed a couple of WP caching plugins, and I read every WP optimization blog post I could get my greedy little eyes on. However, at the same time I was also getting addicted to WordPress bloggers talking about all the cool things you could do with your blog. As a result I had at one point about 30 different plugins installed. WordPress runs on MySQL, and certain queries running via these plugins were starving for CPU. Plugins that would be called on every page load meant that as more people clicked on my site, the more CPU I needed. I’m not stupid, so I eventually figured out that maybe fewer plugins were better, and was able to go down to just 20. But still, the site was running like a dog. CPU throttling makes MySQL wait to run a query. Bluehost runs shared servers. Your site runs on the same box that several hundred (or thousand?) 
other services are running on. If you take more CPU than they think you should have, they will limit your service by making you stand in line for CPU, AKA ‘throttling.’ This is not bad. This business model allows them to serve many, many users for a very fair price. It works great until, well, until it doesn’t. I noticed in the last week that for every minute of service, I was being throttled between 60 and 300 seconds. If there were 5 MySQL processes running, then every single one of them was being held in check. Blog visitors noticed this as their page requests would take a minute or more to be answered. Bluehost unfortunately doesn’t offer dedicated server hosting, so there was no real upgrade path for me to follow and remain one of their customers. So what was I to do? Uninstall every plugin and hope the site sped up? Ask for people to take turns on my blog? I decided to spend my way out of the problem. I signed up for service with WP Engine and moved ThatJeffSmith.com. The first 2 months are free, and after that it’s about $29/month to run my site on their system. My math tells me that’s a good bit more expensive than what Bluehost was charging me – to the tune of about 300% more a month. Oh, and I should just say that my blog is a personal blog even though I talk about work stuff here. I don’t get paid for blogging, I don’t sell ads, and I don’t expense the service fees – this is my personal passion. So is it worth it? In the first 4 days, it seems to be totally worth it. Load times have gone from 20-30 seconds to less than 5 seconds. A few folks have told me via Twitter that they notice faster page loads. I anticipate this will indirectly lead to more traffic as Google penalizes you in search results if your site is too slow, and of course some folks won’t even bother waiting more than 5-10 seconds. I noticed right away that writing posts, uploading pictures, and just using the WordPress dashboard in general was much more responsive. So writing is less of a chore now, which means I won’t have a good reason not to write. How I Moved to WP Engine: I signed up for the service and registered my domain. I then took a full export of my ‘old’ site by doing an FTP GET of all my files, then did a MySQL database backup, exported my WordPress theme settings to a .zip file, and then finally used the WordPress ‘Export’ feature. I then used the WordPress ‘Import’ on the new site to load up my posts. Then I uploaded the theme .zip package from Themify. Then I FTP’d the ‘wp-content’ directory up to my new server using SFTP (WP Engine only supports secure FTP – good on them!). Using a temporary URL to see my new site, I was able to confirm that everything looked mostly OK – I’ll detail the challenges and issues of fixing the content next – but then it was time to ‘flip the switch.’ I updated the IP address that the DNS lookup tables use to route traffic to my new server. In a matter of minutes the DNS servers around the world were updated and it was time to see the new site! But It Was ‘Broken’: I had never moved a website before, and in my rush to update the DNS, I had changed the records without really finding out what I was supposed to do first. After re-reading the directions provided by WP Engine and following the guidance of their support engineer, I realized I needed to set the CNAME (Alias) ‘www’ record to point to a different URL than the ‘www.thatjeffsmith.com’ entry I had set. Once corrected, the site was up and running in less than a minute. 
Then It Was Only Mostly Broken: Many of my plugins weren’t working. Apparently just ftp’ing the wp-content directory up wasn’t the proper way to re-install the plugins. I suspect file permissions or file ownership wasn’t proper. Some plug-ins were working, many had their settings wiped to the defaults, and a few just didn’t work again. I had to delete the directory of the plug-in manually via SFTP, and then use the WP Dashboard to install it from scratch. And here was my first ‘lesson’ – don’t switch the DNS records until you’ve completely tested your new site. I wasn’t able to navigate the old WP console to review my plug-in settings. Thankfully I was able to use the Wayback Machine to reverse engineer some things, and of course most plug-ins aren’t that complicated to set up to begin with. An example of one that I had to redo from scratch is the ‘Twitter @Anywhere Plus’ plugin that I use to create the form that allows folks to tweet a post they enjoyed at the end of each story. How WP Engine ‘Hooked’ Me: I actually signed up with another provider first. They ranked highly in Google searches and a few Tweeps recommended them to me. But hours after signing up, I still didn’t have a server ready, and I was ready to give up on them. They offered no chat or phone support – only email and message boards. And the message boards were rife with posts about how the service had gone downhill in the past 6 months. To their credit, they did make it easy to cancel, although I did have to do so via email as their website ‘cancel’ button was non-existent. Within minutes of activating my WP Engine account I had received my welcome message and directions on how to get started. I was able to see my staged website right away. They also did something very cool before I even got started – they looked at my existing site and told me by how much they could improve its performance. The proof is in the web pudding. I like this for a few reasons, but primarily I liked their business model. It told me they knew what they were doing, and that they were willing to put their money where their mouth was. This was further evidenced by their 60-day money-back guarantee. And if I understand it correctly, they don’t even take your money until after that 60-day period is over. After a day, I was welcomed by the WP Engine social media team, and was given the opportunity to subscribe to their newsletter and follow their account on Twitter. I noticed their Twitter team is sure to post regular WordPress tips several times a day. It’s not just an account that’s set up for the sake of having a Twitter presence. These little things add up and give me confidence in my decision to choose them as my hosting partner. ‘Partner’ – that’s a lot nicer word than just ‘service provider,’ isn’t it? Oh, and they offered me a t-shirt. Don’t ever doubt the power of a ‘free’ t-shirt! How awesome is this e-mail, from a customer perspective? I wasn’t really expecting any of this. Exceeding expectations before I have even handed over a single dollar seems like a pretty good business plan. This is how you treat customers. Love them to death, and they reward you with loyalty. But Jeff, You Skipped a Piece Here, Why WP Engine? I found them on one of those ‘Top 10’ list posts, and pulled up their webpage. I noticed they offered a specialized service – they host WordPress installs, and that’s it. Their servers are tuned specifically for running WordPress. They had, in bolded text, things like ‘INSANELY FAST. 
INFINITELY SCALABLE.’ and ‘LIGHTNING SPEED.’ And then they offered insurance against hackers and they took care of automatic backups and restores. The only drawbacks I have noticed so far relate to plugins I used that have been ‘blacklisted.’ In order to guarantee that ‘lightning’ speed, they have banned the use of the CPU-suckiest plugins. One of those is the ‘Related Posts’ plugin. So if you are a subscriber and are reading this in your email, you’ll notice there are no links back to my blog to continue reading other related stories. Since that referral traffic is in the very low single digits for my site, I decided that I’m OK with that. I’d rather have the warp-speed page loads. Again, I think that will lead to higher traffic down the road. In 50+ days I will need to decide if WP Engine is a permanent solution. I’ll be sure to update this post when that time comes and let y’all know how it turns out.

    Read the article

  • CodePlex Daily Summary for Tuesday, January 04, 2011

    CodePlex Daily Summary for Tuesday, January 04, 2011Popular ReleasesMini Memory Dump Diagnosis using Asp.Net: MiniDump HealthMonitor: Enable developers to generate mini memory dumps in case of unhandled exceptions or for specific exception scenarios without using any external tools , only using Asp.Net 2.0 and above. Many times in production , QA Servers the issues require post-mortem or low-level system debugging efforts. This Memory dump generator will help in those scenarios.EnhSim: EnhSim 2.2.9 BETA: 2.2.9 BETAThis release supports WoW patch 4.03a at level 85 To use this release, you must have the Microsoft Visual C++ 2010 Redistributable Package installed. This can be downloaded from http://www.microsoft.com/downloads/en/details.aspx?FamilyID=A7B7A05E-6DE6-4D3A-A423-37BF0912DB84 To use the GUI you must have the .NET 4.0 Framework installed. This can be downloaded from http://www.microsoft.com/downloads/en/details.aspx?FamilyID=9cfb2d51-5ff4-4491-b0e5-b386f32c0992 - Added in the Gobl...xUnit.net - Unit Testing for .NET: xUnit.net 1.7 Beta: xUnit.net release 1.7 betaBuild #1533 Important notes for Resharper users: Resharper support has been moved to the xUnit.net Contrib project. Important note for TestDriven.net users: If you are having issues running xUnit.net tests in TestDriven.net, especially on 64-bit Windows, we strongly recommend you upgrade to TD.NET version 3.0 or later. This release adds the following new features: Added support for ASP.NET MVC 3 Added Assert.Equal(double expected, double actual, int precision)...Json.NET: Json.NET 4.0 Release 1: New feature - Added Windows Phone 7 project New feature - Added dynamic support to LINQ to JSON New feature - Added dynamic support to serializer New feature - Added INotifyCollectionChanged to JContainer in .NET 4 build New feature - Added ReadAsDateTimeOffset to JsonReader New feature - Added ReadAsDecimal to JsonReader New feature - Added covariance to IJEnumerable type parameter New feature - Added XmlSerializer style Specified property support New feature - Added ...StyleCop for ReSharper: StyleCop for ReSharper 5.1.14977.000: Prerequisites: ============== o Visual Studio 2008 / Visual Studio 2010 o ReSharper 5.1.1753.4 o StyleCop 4.4.1.2 Preview This release adds no new features, has bug fixes around performance and unhandled errors reported on YouTrack.BloodSim: BloodSim - 1.3.1.0: - Restructured simulation log back end to something less stupid to drastically reduce simulation time and memory usage - Removed a debug log entry that was left over from testing of 1.3.0.0 - Fixed a rounding and calculation error with Haste rating - Added option for Rune of SwordshatteringDbDocument: DbDoc Initial Version: DbDoc Initial versionASP .NET MVC CMS (Content Management System): Atomic CMS 2.1.2: Atomic CMS 2.1.2 release notes Atomic CMS installation guide Kind Of Magic MSBuild Task: Beta 4: Update to keep up with latest bug fixes. To those who don't like Magic/NoMagic attributes, you may change these names in KindOfMagic.targets file: Change this line: <MagicTask Assembly="@(IntermediateAssembly)" References="@(ReferencePath)"/> to something like this: <MagicTask Assembly="@(IntermediateAssembly)" References="@(ReferencePath)" MagicAttribute="MyMagicAttribute" NoMagicAttribute="MyNoMagicAttribute"/>N2 CMS: 2.1: N2 is a lightweight CMS framework for ASP.NET. It helps you build great web sites that anyone can update. 
Major Changes Support for auto-implemented properties ({get;set;}, based on contribution by And Poulsen) All-round improvements and bugfixes File manager improvements (multiple file upload, resize images to fit) New image gallery Infinite scroll paging on news Content templates First time with N2? Try the demo site Download one of the template packs (above) and open the proj...Wii Backup Fusion: Wii Backup Fusion 1.0: - Norwegian translation - French translation - German translation - WBFS dump for analysis - Scalable full HQ cover - Support for log file - Load game images improved - Support for image splitting - Diff for images after transfer - Support for scrubbing modes - Search functionality for log - Recurse depth for Files/Load - Show progress while downloading game cover - Supports more databases for cover download - Game cover loading routines improvedAutoLoL: AutoLoL v1.5.1: Fix: Fixed a bug where pressing Save As would not select the Mastery Directory by default Unexpected errors are now always reported to the user before closing AutoLoL down.* Extracted champion data to Data directory** Added disclaimer to notify users this application has nothing to do with Riot Games Inc. Updated Codeplex image * An error report will be shown to the user which can help the developers to find out what caused the error, this should improve support ** We are working on ...TortoiseHg: TortoiseHg 1.1.8: TortoiseHg 1.1.8 is a minor bug fix release, with minor improvementsBlogEngine.NET: BlogEngine.NET 2.0: Get DotNetBlogEngine for 3 Months Free! Click Here for More Info 3 Months FREE – BlogEngine.NET Hosting – Click Here! If you want to set up and start using BlogEngine.NET right away, you should download the Web project. If you want to extend or modify BlogEngine.NET, you should download the source code. If you are upgrading from a previous version of BlogEngine.NET, please take a look at the Upgrading to BlogEngine.NET 2.0 instructions. To get started, be sure to check out our installatio...Free Silverlight & WPF Chart Control - Visifire: Visifire SL and WPF Charts v3.6.6 Released: Hi, Today we are releasing final version of Visifire, v3.6.6 with the following new feature: * TextDecorations property is implemented in Title for Chart. * TitleTextDecorations property is implemented in Axis. * MinPointHeight property is now applicable for Column and Bar Charts. Also this release includes few bug fixes: * ToolTipText property of DataSeries was not getting applied from Style. * Chart threw exception if IndicatorEnabled property was set to true and Too...StyleCop Compliant Visual Studio Code Snippets: Visual Studio Code Snippets - January 2011: StyleCop Compliant Visual Studio Code Snippets Visual Studio 2010 provides C# developers with 38 code snippets, enhancing developer productivty and increasing the consistency of the code. Within this project the original code snippets have been refactored to provide StyleCop compliant versions of the original code snippets while also adding many new code snippets. Within the January 2011 release you'll find 82 code snippets to make you more productive and the code you write more consistent!...WPF Application Framework (WAF): WPF Application Framework (WAF) 2.0.0.2: Version: 2.0.0.2 (Milestone 2): This release contains the source code of the WPF Application Framework (WAF) and the sample applications. 
Requirements .NET Framework 4.0 (The package contains a solution file for Visual Studio 2010) The unit test projects require Visual Studio 2010 Professional Remark The sample applications are using Microsoft’s IoC container MEF. However, the WPF Application Framework (WAF) doesn’t force you to use the same IoC container in your application. You can use ...DocX: DocX v1.0.0.11: Building Examples projectTo build the Examples project, download DocX.dll and add it as a reference to the project. OverviewThis version of DocX contains many bug fixes, it is a serious step towards a stable release. Added1) Unit testing project, 2) Examples project, 3) To many bug fixes to list here, see the source code change list history.Cosmos (C# Open Source Managed Operating System): 71406: This is the second release supporting the full line of Visual Studio 2010 editions. Changes since release 71246 include: Debug info is now stored in a single .cpdb file (which is a Firebird database) Keyboard input works now (using Console.ReadLine) Console colors work (using Console.ForegroundColor and .BackgroundColor)Paint.NET PSD Plugin: 1.6.0: Handling of layer masks has been greatly improved. Improved reliability. Many PSD files that previously loaded in as garbage will now load in correctly. Parallelized loading. PSD files containing layer masks will load in a bit quicker thanks to the removal of the sequential bottleneck. Hidden layers are no longer made visible on save. Many thanks to the users who helped expose the layer masks problem: Rob Horowitz, M_Lyons10. Please keep sending in those bug reports and PSD repro files!New Projects3D SharePoint Tag Cloud in Silverlight: This is a Silverlight Tag Cloud intended for use in SharePoint but can be configured for use separately as well. Based on a blog post from Peter Gerritsen (http://blog.petergerritsen.nl/2009/02/14/creating-a-3d-tagcloud-in-silverlight-part-1/).AffinityUI for Unity 3D: AffinityUI makes writing GUI code for Unity 3D easier with a component based API that allows you to write GUIs in a declarative fashion using a fluent interface, update events and powerful two-way databinding. A visual editor and scene GUI support are planned for the future.calendar synchronization: This will enable tight integration between ivvy core and infragistics scheduler. Will utilize oData and native isolated strorage instead of SQLite as planed earlier.Card Games Library: The aim of this project is to offer a framework for creating card games logics and rules. This project is not focused on particular games but offers a felxible and extensible base for creating card games, like; poker, solitaire, rummy, etcDbDocument: DbDoc tool seeks to be helper tool side by side with MS SQL server management studio tool, you can design your DB Tables in visualized way through Diagrams and then use “DbDoc” tool to generate design document in MS Word table formatDelux: Delux project for educationDnEcuDiag - A .NET Ecu Diagnostic Application: DnEcuDiag is an application framework for developing electronic control unit diagnostic functionality without the need to implement code.Dynamics CRM 4.0 Recycle Bin: This add-on for Microsoft Dynamics CRM 4.0 to enable Recycle Bin Features (Restore, Permenant Delete) for CRM users.EPPLib.it: Il sistema sincrono di registrazione e mantenimento dei nomi a dominio del Registro.it utilizza il protocollo EPP (Extensible Provisioning Protocol). 
EPPLib consente di interfacciare i propri progetti con il sistema sincrono del Registro.it.IoC4Fun: Very LightWeight IoC or DI Container only required Net Framework 3.5 written in C#. For Small Apps or Educational purpose. Not intrusion code added like others Containers. Just Plan Register Dependencies and Let Container resolve magically the Injection Dependencies.ISocial: WP7 Application, centralise les réseaux sociauxJohnnyIssa: begASP.NET Version 4.LiveCPE: LiveCPE is an Academic project based on C# and ASP.Net. It aims to be a social network website. MS CRM - Skype Connector: Skype Addon for Microsoft Dynamics CRM allows to dial CRM Contacts, Lead and so on via Skype. It's developed in ASP.NET and C#.Ryan's Projects: a group of random c# projectsSeleniuMspec: SeleniuMspec is a template for Selenium IDE that generates Selenium tests for Mspec (a .NET BDD framework). SharePoint FAQ Web Part: The SharePoint FAQ Web Part makes it easier for SharePoint Users to display FAQs feature on their SharePoint sites. You'll no longer have to maintain HTML and anchors in the Content Editor Web Part, as the FAQs are now automatically generated from a SharePoint list. Silverlight Planet: Projetos da comunidade Silverlight Planet www.silverlightplanet.net.brWP7 Expense Report: Expense Report for Windows Phone 7xll - the easy way to create Excel add-ins: The xll C++ library makes it easy to create xll add-ins for Excel. It also supports the big grid, wide-character strings, and thread-safe functions. ????????: ????,??,??,????,????,??????

    Read the article

  • CodePlex Daily Summary for Saturday, January 29, 2011

    CodePlex Daily Summary for Saturday, January 29, 2011Popular ReleasesRawr: Rawr 4.0.17 Beta: Rawr is now web-based. The link to use Rawr4 is: http://elitistjerks.com/rawr.phpThis is the Cataclysm Beta Release. More details can be found at the following link http://rawr.codeplex.com/Thread/View.aspx?ThreadId=237262 and on the Version Notes page: http://rawr.codeplex.com/wikipage?title=VersionNotes As of the 4.0.16 release, you can now also begin using the new Downloadable WPF version of Rawr!This is a pre-alpha release of the WPF version, there are likely to be a lot of issues. If you...VivoSocial: VivoSocial 7.4.2: Version 7.4.2 of VivoSocial has been released. If you experienced any issues with the previous version, please update your modules to the 7.4.2 release and see if they persist. If you have any questions about this release, please post them in our Support forums. If you are experiencing a bug or would like to request a new feature, please submit it to our issue tracker. Web Controls * Updated Business Objects and added a new SQL Data Provider File. Groups * Fixed a security issue whe...PHP Manager for IIS: PHP Manager 1.1.1 for IIS 7: This is a minor release of PHP Manager for IIS 7. It contains all the functionality available in 56962 plus several bug fixes (see change list for more details). Also, this release includes Russian language support. SHA1 codes for the downloads are: PHPManagerForIIS-1.1.0-x86.msi - 6570B4A8AC8B5B776171C2BA0572C190F0900DE2 PHPManagerForIIS-1.1.0-x64.msi - 12EDE004EFEE57282EF11A8BAD1DC1ADFD66A654BloodSim: WControls.dll update: Priority Update It's just come to my attention that the latest version of WControls.dll was not included in the 1.4 release and as a result, BloodSim has been unuseable. Please download WControls.dll from here, and this will rectify the issue.VFPX: VFP2C32 2.0.0.8 Release Candidate: This release includes several bugfixes, new functions and finally a CHM help file for the complete library.DB>doc for Microsoft SQL Server: 1.0.0.0: Initial release Supported output HTML WikiPlex markup Raw XML Supported objects Tables Primary Keys Foreign Keys ViewsmojoPortal: 2.3.6.1: see release notes on mojoportal.com http://www.mojoportal.com/mojoportal-2361-released.aspx Note that we have separate deployment packages for .NET 3.5 and .NET 4.0 The deployment package downloads on this page are pre-compiled and ready for production deployment, they contain no C# source code. To download the source code see the Source Code Tab I recommend getting the latest source code using TortoiseHG, you can get the source code corresponding to this release here.Office Web.UI: Alpha preview: This is the first alpha release. Very exciting moment... This download is just the demo application : "Contoso backoffice Web App". No source included for the moment, just the app, for testing purposes and for feedbacks !! This package includes a set of official MS Office icons (143 png 16x16 and 132 png 32x32) to really make a great app ! Please rate and give feedback ThanksParallel Programming with Microsoft Visual C++: Drop 6 - Chapters 4 and 5: This is Drop 6. It includes: Drafts of the Preface, Introduction, Chapters 2-7, Appendix B & C and the glossary Sample code for chapters 2-7 and Appendix A & B. The new material we'd like feedback on is: Chapter 4 - Parallel Aggregation Chapter 5 - Futures The source code requires Visual Studio 2010 in order to run. There is a known bug in the A-Dash sample when the user attempts to cancel a parallel calculation. 
We are working to fix this.Catel - WPF and Silverlight MVVM library: 1.1: (+) Styles can now be changed dynamically, see Examples application for a how-to (+) ViewModelBase class now have a constructor that allows services injection (+) ViewModelBase services can now be configured by IoC (via Microsoft.Unity) (+) All ViewModelBase services now have a unit test implementation (+) Added IProcessService to run processes from a viewmodel with directly using the process class (which makes it easier to unit test view models) (*) If the HasErrors property of DataObjec...NodeXL: Network Overview, Discovery and Exploration for Excel: NodeXL Excel Template, version 1.0.1.160: The NodeXL Excel template displays a network graph using edge and vertex lists stored in an Excel 2007 or Excel 2010 workbook. What's NewThis release improves NodeXL's Twitter and Pajek features. See the Complete NodeXL Release History for details. Installation StepsFollow these steps to install and use the template: Download the Zip file. Unzip it into any folder. Use WinZip or a similar program, or just right-click the Zip file in Windows Explorer and select "Extract All." Close Ex...Kooboo CMS: Kooboo CMS 3.0 CTP: Files in this downloadkooboo_CMS.zip: The kooboo application files Content_DBProvider.zip: Additional content database implementation of MSSQL, RavenDB and SQLCE. Default is XML based database. To use them, copy the related dlls into web root bin folder and remove old content provider dlls. Content provider has the name like "Kooboo.CMS.Content.Persistence.SQLServer.dll" View_Engines.zip: Supports of Razor, webform and NVelocity view engine. Copy the dlls into web root bin folder to enable...UOB & ME: UOB ME 2.6: UOB ME 2.6????: ???? V1.0: ???? V1.0 ??Password Generator: 2.2: Parallel password generation Password strength calculation ( Same method used by Microsoft here : https://www.microsoft.com/protect/fraud/passwords/checker.aspx ) Minor code refactoringVisual Studio 2010 Architecture Tooling Guidance: Spanish - Architecture Guidance: Francisco Fagas http://geeks.ms/blogs/ffagas, Microsoft Most Valuable Professional (MVP), localized the Visual Studio 2010 Quick Reference Guidance for the Spanish communities, based on http://vsarchitectureguide.codeplex.com/releases/view/47828. Release Notes The guidance is available in a xps-only (default) or complete package. The complete package contains the files in xps, pdf and Office 2007 formats. 2011-01-24 Publish version 1.0 of the Spanish localized bits.ASP.NET MVC Project Awesome, jQuery Ajax helpers (controls): 1.6.2: A rich set of helpers (controls) that you can use to build highly responsive and interactive Ajax-enabled Web applications. 
These helpers include Autocomplete, AjaxDropdown, Lookup, Confirm Dialog, Popup Form, Popup and Pager the html generation has been optimized, the html page size is much smaller nowFacebook Graph Toolkit: Facebook Graph Toolkit 0.6: new Facebook Graph objects: Application, Page, Post, Comment Improved Intellisense documentation new Graph Api connections: albums, photos, posts, feed, home, friends JSON Toolkit upgraded to version 0.9 (beta release) with bug fixes and new features bug fixed: error when handling empty JSON arrays bug fixed: error when handling JSON array with square or large brackets in the message bug fixed: error when handling JSON obejcts with double quotation in the message bug fixed: erro...Microsoft All-In-One Code Framework: Visual Studio 2008 Code Samples 2011-01-23: Code samples for Visual Studio 2008MVVM Light Toolkit: MVVM Light Toolkit V3 SP1 (3): Instructions for installation: http://www.galasoft.ch/mvvm/installing/manually/ Includes the hotfix templates for Windows Phone 7 development. This is only relevant if you didn't already install the hotfix described at http://blog.galasoft.ch/archive/2010/07/22/mvvm-light-hotfix-for-windows-phone-7-developer-tools-beta.aspx.New ProjectsAsp.Net MvcRoute Debugger Visualizer: Asp.Net MvcRoute Debugger Visualizer is an extension for Visual Studio 2010.It displays the matched route data and datatokens of all defined routes in your mvc applicationAutoLaunch for Windows Embedded Compact (CE): AutoLaunch component for Windows Embedded CE 6.0 and Windows Embedded Compact 7BicubicResizeSilverlight: A client-side Silverlight library for performing a bicubic resize. Cloud Monitoring Studio: Cloud Monitoring Studio provides real-time performance monitoring mechanism for Azure deployed applications/Azure roles with the help of easy-to-understand graphs. Users can configure required performance counters for individual Azure roles through easy-to-use & intuitive GUI.CoreCon for Windows Embedded Compact (CE): CoreCon component for Windows Embedded CE 6.0 and Windows Embedded Compact 7, to simplify the tasks needed to include CoreCon files to the OS image.DokanDiscUtils Bridge: This a a bridge for the Dokan library and the DiscUtils library that allows Any partition/image that discutils can open to be mounted using dokanEverything About My Share: Everything About My ShareExtraordinary Ideas: This project contains some Extra oradinary solutions for difficult software problems.. Article's are in Turkish Language tahircakmak.blogspot.comIE9 Helper: The IE9 Helper is an ASP.NET Helper to makes adding IE9 features simple.ITSolutions: .NET Sandbox SolutionsKitty - Async Server: Kitty is an Async Server with C# and Java versions. It builds on Parallelism to deliver a Internet Scale Server StarterKit.mediainfocollector: Retrieves info from mediafilesMini - Async WebSocket Server: Mini is an Async WebSocket Server with versions in C# and Java. It builds on Parallelism and Kitty to deliver an Internet Scale WebSocket Server StarterKit. MVC N-tier EMR Sample Application: This sample uses EMR to showcase a MVC N-Tier application. 
It uses WCF to separate the presentation from the DB/Backend.MyPhotoGallery: MyPhotoGallery is a photo gallery developed in ASP.Net MVC.NuGet: NuGet is a free, open source developer focused package management system for the .NET platform intent on simplifying the process of incorporating third party libraries into a .NET application during development.OneCode CodeBox: OneCode CodeBoxonecode toolkit: toolsOrchard SSL: Orchard Secured Sockets Layer (SSL) adds new features to force the usage of SSL in sections like authentication pages or the dashboard.OrchardCMS - Lokalizacija: OrchardCMS - Lokalizacija je projekat lokalizacije Orchard CMS-a na srpski, hrvatski, srpskohrvatski, hrvatskosrpski i sve ostale jezike sa podrucija ex-YU. Pokušacemo da iskorisitmo slicnosti u jezicima kako bi jedni drugima pomogli i olakšali lokalizaciju Orchard-a. Pyxis Install Maker: This utility creates single file installs for Pyxis 2 applications.SaleCard: TestSharePoint Social Networking: A Collection of solutions aimed at adding social network to your SharePoint 2010 site. The collection will include web parts, ribbon extensions and other type of solutions that will add or enhance the social networking capabilities of SharePoint. The solution is written in c#.SmartScreenTerminator: Pissed off by the stupid "smart screen" asking you to "remember to protect your password" every time you click someone else's blog link in your Windows Live homepage? Use this three line appication to allow your IE to automatically navigate to the destination page.StreetScene: StreetScene is a sample project showing how to use Microsoft technologies such as ASP.NET, Azure and Silverlight to build rich applications.svmnetx: testtest_project: test projectTweet-Links: Tweet-Links is designed for ultra-fast sending a twitter selected text and files and images.WCMF - Windows Content Management Framework: WCMF is a framework that allows one to build CMS appllications that target the Windows platform (whereas a typical CMS system is usually web based). Used technologies are MEF, WCF, WPF (including Silverlight) and Caliburn.Micro.WebNoteAOP: A sample project for aspect oriented programming with PostSharp. The main purpose is a short introduction. All aspects are made in a very simple way. They should be useful for simple applications. Please keep in mind, that complex problems often require a complex solution, too.

    Read the article

  • Animation Color [on hold]

    - by user2425429
    I'm having problems in my java program for animation. I'm trying to draw a hexagon with a shape similar to that of a trapezoid. Then, I'm making it move to the right for a certain amount of time (DEMO_TIME). Animation and ScreenManager are "API" classes, and AnimationTest1 is a demo. In my test program, it runs with a black screen and white stroke color. I'd like to know why this happened and how to fix it. I'm a beginner, so I apologize for this question being stupid to all you game programmers. Here is the code I have now: import java.awt.DisplayMode; import java.awt.Graphics; import java.awt.Graphics2D; import java.awt.Image; import java.awt.Polygon; import java.util.ArrayList; import java.util.List; import java.util.concurrent.Executor; import java.util.concurrent.Executors; import javax.swing.ImageIcon; public class AnimationTest1 { public static void main(String args[]) { AnimationTest1 test = new AnimationTest1(); test.run(); } private static final DisplayMode POSSIBLE_MODES[] = { new DisplayMode(800, 600, 32, 0), new DisplayMode(800, 600, 24, 0), new DisplayMode(800, 600, 16, 0), new DisplayMode(640, 480, 32, 0), new DisplayMode(640, 480, 24, 0), new DisplayMode(640, 480, 16, 0) }; private static final long DEMO_TIME = 4000; private ScreenManager screen; private Image bgImage; private Animation anim; public void loadImages() { // create animation List<Polygon> polygons=new ArrayList(); int[] x=new int[]{20,4,4,20,40,56,56,40}; int[] y=new int[]{20,32,40,44,44,40,32,20}; polygons.add(new Polygon(x,y,8)); anim = new Animation(); //# of frames long startTime = System.currentTimeMillis(); long currTimer = startTime; long elapsedTime = 0; boolean animated = false; Graphics2D g = screen.getGraphics(); int width=200; int height=200; while (currTimer - startTime < DEMO_TIME*2) { //draw the polygons if(!animated){ for(int j=0; j<polygons.size();j++){ for(int pos=0; pos<polygons.get(j).npoints; pos++){ polygons.get(j).xpoints[pos]+=1; } } anim.setNewPolyFrame(polygons , width , height , 64); } else{ // update animation anim.update(elapsedTime); draw(g); g.dispose(); screen.update(); try{ Thread.sleep(20); } catch(InterruptedException ie){} } if(currTimer - startTime == DEMO_TIME) animated=true; elapsedTime = System.currentTimeMillis() - currTimer; currTimer += elapsedTime; } } public void run() { screen = new ScreenManager(); try { DisplayMode displayMode = screen.findFirstCompatibleMode(POSSIBLE_MODES); screen.setFullScreen(displayMode); loadImages(); } finally { screen.restoreScreen(); } } public void draw(Graphics g) { // draw background g.drawImage(bgImage, 0, 0, null); // draw image g.drawImage(anim.getImage(), 0, 0, null); } } ScreenManager: import java.awt.Color; import java.awt.DisplayMode; import java.awt.Graphics; import java.awt.Graphics2D; import java.awt.GraphicsConfiguration; import java.awt.GraphicsDevice; import java.awt.GraphicsEnvironment; import java.awt.Toolkit; import java.awt.Window; import java.awt.event.KeyListener; import java.awt.event.MouseListener; import java.awt.image.BufferStrategy; import java.awt.image.BufferedImage; import javax.swing.JFrame; import javax.swing.JPanel; public class ScreenManager extends JPanel { private GraphicsDevice device; /** Creates a new ScreenManager object. */ public ScreenManager() { GraphicsEnvironment environment=GraphicsEnvironment.getLocalGraphicsEnvironment(); device = environment.getDefaultScreenDevice(); setBackground(Color.white); } /** Returns a list of compatible display modes for the default device on the system. 
*/ public DisplayMode[] getCompatibleDisplayModes() { return device.getDisplayModes(); } /** Returns the first compatible mode in a list of modes. Returns null if no modes are compatible. */ public DisplayMode findFirstCompatibleMode( DisplayMode modes[]) { DisplayMode goodModes[] = device.getDisplayModes(); for (int i = 0; i < modes.length; i++) { for (int j = 0; j < goodModes.length; j++) { if (displayModesMatch(modes[i], goodModes[j])) { return modes[i]; } } } return null; } /** Returns the current display mode. */ public DisplayMode getCurrentDisplayMode() { return device.getDisplayMode(); } /** Determines if two display modes "match". Two display modes match if they have the same resolution, bit depth, and refresh rate. The bit depth is ignored if one of the modes has a bit depth of DisplayMode.BIT_DEPTH_MULTI. Likewise, the refresh rate is ignored if one of the modes has a refresh rate of DisplayMode.REFRESH_RATE_UNKNOWN. */ public boolean displayModesMatch(DisplayMode mode1, DisplayMode mode2) { if (mode1.getWidth() != mode2.getWidth() || mode1.getHeight() != mode2.getHeight()) { return false; } if (mode1.getBitDepth() != DisplayMode.BIT_DEPTH_MULTI && mode2.getBitDepth() != DisplayMode.BIT_DEPTH_MULTI && mode1.getBitDepth() != mode2.getBitDepth()) { return false; } if (mode1.getRefreshRate() != DisplayMode.REFRESH_RATE_UNKNOWN && mode2.getRefreshRate() != DisplayMode.REFRESH_RATE_UNKNOWN && mode1.getRefreshRate() != mode2.getRefreshRate()) { return false; } return true; } /** Enters full screen mode and changes the display mode. If the specified display mode is null or not compatible with this device, or if the display mode cannot be changed on this system, the current display mode is used. <p> The display uses a BufferStrategy with 2 buffers. */ public void setFullScreen(DisplayMode displayMode) { JFrame frame = new JFrame(); frame.setUndecorated(true); frame.setIgnoreRepaint(true); frame.setResizable(true); device.setFullScreenWindow(frame); if (displayMode != null && device.isDisplayChangeSupported()) { try { device.setDisplayMode(displayMode); } catch (IllegalArgumentException ex) { } } frame.createBufferStrategy(2); Graphics g=frame.getGraphics(); g.setColor(Color.white); g.drawRect(0, 0, frame.WIDTH, frame.HEIGHT); frame.paintAll(g); g.setColor(Color.black); g.dispose(); } /** Gets the graphics context for the display. The ScreenManager uses double buffering, so applications must call update() to show any graphics drawn. <p> The application must dispose of the graphics object. */ public Graphics2D getGraphics() { Window window = device.getFullScreenWindow(); if (window != null) { BufferStrategy strategy = window.getBufferStrategy(); return (Graphics2D)strategy.getDrawGraphics(); } else { return null; } } /** Updates the display. */ public void update() { Window window = device.getFullScreenWindow(); if (window != null) { BufferStrategy strategy = window.getBufferStrategy(); if (!strategy.contentsLost()) { strategy.show(); } } // Sync the display on some systems. // (on Linux, this fixes event queue problems) Toolkit.getDefaultToolkit().sync(); } /** Returns the window currently used in full screen mode. Returns null if the device is not in full screen mode. */ public Window getFullScreenWindow() { return device.getFullScreenWindow(); } /** Returns the width of the window currently used in full screen mode. Returns 0 if the device is not in full screen mode. 
*/ public int getWidth() { Window window = device.getFullScreenWindow(); if (window != null) { return window.getWidth(); } else { return 0; } } /** Returns the height of the window currently used in full screen mode. Returns 0 if the device is not in full screen mode. */ public int getHeight() { Window window = device.getFullScreenWindow(); if (window != null) { return window.getHeight(); } else { return 0; } } /** Restores the screen's display mode. */ public void restoreScreen() { Window window = device.getFullScreenWindow(); if (window != null) { window.dispose(); } device.setFullScreenWindow(null); } /** Creates an image compatible with the current display. */ public BufferedImage createCompatibleImage(int w, int h, int transparency) { Window window = device.getFullScreenWindow(); if (window != null) { GraphicsConfiguration gc = window.getGraphicsConfiguration(); return gc.createCompatibleImage(w, h, transparency); } return null; } } Animation: import java.awt.Color; import java.awt.Graphics; import java.awt.Graphics2D; import java.awt.Image; import java.awt.Polygon; import java.awt.image.BufferedImage; import java.util.ArrayList; import java.util.List; /** The Animation class manages a series of images (frames) and the amount of time to display each frame. */ public class Animation { private ArrayList frames; private int currFrameIndex; private long animTime; private long totalDuration; /** Creates a new, empty Animation. */ public Animation() { frames = new ArrayList(); totalDuration = 0; start(); } /** Adds an image to the animation with the specified duration (time to display the image). */ public synchronized void addFrame(BufferedImage image, long duration){ ScreenManager s = new ScreenManager(); totalDuration += duration; frames.add(new AnimFrame(image, totalDuration)); } /** Starts the animation over from the beginning. */ public synchronized void start() { animTime = 0; currFrameIndex = 0; } /** Updates the animation's current image (frame), if necessary. */ public synchronized void update(long elapsedTime) { if (frames.size() >= 1) { animTime += elapsedTime; /*if (animTime >= totalDuration) { animTime = animTime % totalDuration; currFrameIndex = 0; }*/ while (animTime > getFrame(0).endTime) { frames.remove(0); } } } /** Gets the Animation's current image. Returns null if this animation has no images. */ public synchronized Image getImage() { if (frames.size() > 0&&!(currFrameIndex>=frames.size())) { return getFrame(currFrameIndex).image; } else{ System.out.println("There are no frames!"); System.exit(0); } return null; } private AnimFrame getFrame(int i) { return (AnimFrame)frames.get(i); } private class AnimFrame { Image image; long endTime; public AnimFrame(Image image, long endTime) { this.image = image; this.endTime = endTime; } } public void setNewPolyFrame(List<Polygon> polys,int imagewidth,int imageheight,int time){ BufferedImage image=new BufferedImage(imagewidth, imageheight, 1); Graphics g=image.getGraphics(); for(int i=0;i<polys.size();i++){ g.drawPolygon(polys.get(i)); } addFrame(image,time); g.dispose(); } }

    Read the article

  • From HttpRuntime.Cache to Windows Azure Caching (Preview)

    - by Jeff
    I don’t know about you, but the announcement of Windows Azure Caching (Preview) (yes, the parentheses are apparently part of the interim name) made me a lot more excited about using Azure. Why? Because one of the great performance tricks of any Web app is to cache frequently used data in memory, so it doesn’t have to hit the database, a service, or whatever. When you run your Web app on one box, HttpRuntime.Cache is a sweet and stupid-simple solution. Somewhere in the data fetching pieces of your app, you can see if an object is available in cache, and return that instead of hitting the data store. I did this quite a bit in POP Forums, and it dramatically cuts down on the database chatter. The problem is that it falls apart if you run the app on many servers, in a Web farm, where one server may initiate a change to that data, and the others will have no knowledge of the change, making it stale. Of course, if you have the infrastructure to do so, you can use something like memcached or AppFabric to do a distributed cache, and achieve the caching flavor you desire. You could do the same thing in Azure before, but it would cost more because you’d need to pay for another role or VM or something to host the cache. Now, you can use a portion of the memory from each instance of a Web role to act as that cache, with no additional cost. That’s huge. So if you’re using a percentage of memory that comes out to 100 MB, and you have three instances running, that’s 300 MB available for caching. For the uninitiated, a Web role in Azure is essentially a VM that runs a Web app (worker roles are the same idea, only without the IIS part). You can spin up many instances of the role, and traffic is load balanced to the various instances. It’s like adding or removing servers to a Web farm all willy-nilly and at your discretion, and it’s what the cloud is all about. I’d say it’s my favorite thing about Windows Azure. The slightly annoying thing about developing for a Web role in Azure is that the local emulator that’s launched by Visual Studio is a little on the slow side. If you’re used to using the built-in Web server, you’re used to building and then alt-tabbing to your browser and refreshing a page. If you’re just changing an MVC view, you’re not even doing the building part. Spinning up the simulated Azure environment is too slow for this, but ideally you want to code your app to use this fantastic distributed cache mechanism. So first off, here’s the link to the page showing how to code using the caching feature. If you’re used to using HttpRuntime.Cache, this should be pretty familiar to you. Let’s say that you want to use the Azure cache preview when you’re running in Azure, but HttpRuntime.Cache if you’re running local, or in a regular IIS server environment. Through the magic of dependency injection, we can get there pretty quickly. First, design an interface to handle the cache insertion, fetching and removal. 
Mine looks like this: public interface ICacheProvider {     void Add(string key, object item, int duration);     T Get<T>(string key) where T : class;     void Remove(string key); } Now we’ll create two implementations of this interface… one for Azure cache, one for HttpRuntime: public class AzureCacheProvider : ICacheProvider {     public AzureCacheProvider()     {         _cache = new DataCache("default"); // in Microsoft.ApplicationServer.Caching, see how-to      }         private readonly DataCache _cache;     public void Add(string key, object item, int duration)     {         _cache.Add(key, item, new TimeSpan(0, 0, 0, 0, duration));     }     public T Get<T>(string key) where T : class     {         return _cache.Get(key) as T;     }     public void Remove(string key)     {         _cache.Remove(key);     } } public class LocalCacheProvider : ICacheProvider {     public LocalCacheProvider()     {         _cache = HttpRuntime.Cache;     }     private readonly System.Web.Caching.Cache _cache;     public void Add(string key, object item, int duration)     {         _cache.Insert(key, item, null, DateTime.UtcNow.AddMilliseconds(duration), System.Web.Caching.Cache.NoSlidingExpiration);     }     public T Get<T>(string key) where T : class     {         return _cache[key] as T;     }     public void Remove(string key)     {         _cache.Remove(key);     } } Feel free to expand these to use whatever cache features you want. I’m not going to go over dependency injection here, but I assume that if you’re using ASP.NET MVC, you’re using it. Somewhere in your app, you set up the DI container that resolves interfaces to concrete implementations (Ninject call is a “kernel” instead of a container). For this example, I’ll show you how StructureMap does it. It uses a convention based scheme, where if you need to get an instance of IFoo, it looks for a class named Foo. You can also do this mapping explicitly. The initialization of the container looks something like this: ObjectFactory.Initialize(x =>             {                 x.Scan(scan =>                         {                             scan.AssembliesFromApplicationBaseDirectory();                             scan.WithDefaultConventions();                         });                 if (Microsoft.WindowsAzure.ServiceRuntime.RoleEnvironment.IsAvailable)                     x.For<ICacheProvider>().Use<AzureCacheProvider>();                 else                     x.For<ICacheProvider>().Use<LocalCacheProvider>();             }); If you use Ninject or Windsor or something else, that’s OK. Conceptually they’re all about the same. The important part is the conditional statement that checks to see if the app is running in Azure. If it is, it maps ICacheProvider to AzureCacheProvider, otherwise it maps to LocalCacheProvider. Now when a request comes into your MVC app, and the chain of dependency resolution occurs, you can see to it that the right caching code is called. A typical design may have a call stack that goes: Controller –> BusinessLogicClass –> Repository. 
Let’s say your repository class looks like this: public class MyRepo : IMyRepo {     public MyRepo(ICacheProvider cacheProvider)     {         _context = new MyDataContext();         _cache = cacheProvider;     }     private readonly MyDataContext _context;     private readonly ICacheProvider _cache;     public SomeType Get(int someTypeID)     {         var key = "somename-" + someTypeID;         var cachedObject = _cache.Get<SomeType>(key);         if (cachedObject != null)         {             _context.SomeTypes.Attach(cachedObject);             return cachedObject;         }         var someType = _context.SomeTypes.SingleOrDefault(p => p.SomeTypeID == someTypeID);         _cache.Add(key, someType, 60000);         return someType;     } ... // more stuff to update, delete or whatever, being sure to remove // from cache when you do so  When the DI container gets an instance of the repo, it passes an instance of ICacheProvider to the constructor, which in this case will be whatever implementation was specified when the container was initialized. The Get method first tries to hit the cache, and of course doesn’t care what the underlying implementation is, Azure, HttpRuntime, or otherwise. If it finds the object, it returns it right then. If not, it hits the database (this example is using Entity Framework), and inserts the object into the cache before returning it. The important thing not pictured here is that other methods in the repo class will construct the key for the cached object, in this case “somename-“ plus the ID of the object, and then remove it from cache, in any method that alters or deletes the object. That way, no matter what instance of the role is processing the request, it won’t find the object if it has been made stale, that is, updated or outright deleted, forcing it to attempt to hit the database. So is this good technique? Well, sort of. It depends on how you use it, and what your testing looks like around it. Because of differences in behavior and execution of the two caching providers, for example, you could see some strange errors. For example, I immediately got an error indicating there was no parameterless constructor for an MVC controller, because the DI resolver failed to create instances for the dependencies it had. In reality, the NuGet packaged DI resolver for StructureMap was eating an exception thrown by the Azure components that said my configuration, outlined in that how-to article, was wrong. That error wouldn’t occur when using the HttpRuntime. That’s something a lot of people debate about using different components like that, and how you configure them. I kinda hate XML config files, and like the idea of the code-based approach above, but you should be darn sure that your unit and integration testing can account for the differences.
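One thing that can make that testing story a little easier – and this is just a sketch of my own, not something from the caching how-to – is to add a third, purely in-memory implementation of the same ICacheProvider interface for unit tests, so the repository logic can be exercised without Azure or the HttpRuntime in the picture at all. The class name and the expiration handling below are illustrative assumptions, not part of any official API:

using System;
using System.Collections.Generic;

// Hypothetical in-memory fake of the ICacheProvider interface above, intended for unit tests.
// Entries overwrite on Add (like HttpRuntime's Insert) and honor the millisecond duration.
public class FakeCacheProvider : ICacheProvider
{
    private readonly Dictionary<string, Tuple<object, DateTime>> _items =
        new Dictionary<string, Tuple<object, DateTime>>();

    public void Add(string key, object item, int duration)
    {
        // duration is milliseconds, matching the other two providers
        _items[key] = Tuple.Create(item, DateTime.UtcNow.AddMilliseconds(duration));
    }

    public T Get<T>(string key) where T : class
    {
        Tuple<object, DateTime> entry;
        if (!_items.TryGetValue(key, out entry))
            return null;
        if (entry.Item2 <= DateTime.UtcNow)
        {
            // expired – behave like a cache miss
            _items.Remove(key);
            return null;
        }
        return entry.Item1 as T;
    }

    public void Remove(string key)
    {
        _items.Remove(key);
    }
}

In a test you can hand it straight to the repository (var repo = new MyRepo(new FakeCacheProvider());) or map ICacheProvider to it in the container’s test configuration, and then write assertions against what ends up in the cache.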

    Read the article

  • terminate called after throwing an instance of 'std::length_error'

    - by mark
    hello all, this is my first post here. As i am newbie, the problem might be stupid. I was writing a piece of code while the following error message shown, terminate called after throwing an instance of 'std::length_error' what(): basic_string::_S_create /home/gcj/finals /home/gcj/quals where Aborted the following is the offending code especially Line 39 to Line 52. It is weired for me as this block of code is almost same as the Line64 to Line79. int main(){ std::vector<std::string> dirs, need; std::string tmp_str; std::ifstream fp_in("small.in"); std::ofstream fp_out("output"); std::string::iterator iter_substr_begin, iter_substr_end; std::string slash("/"); int T, N, M; fp_in>>T; for (int t = 0; t < T; t++){ std::cout<<" time "<< t << std::endl; fp_in >> N >> M; for (int n =0; n<N; n++){ fp_in>>tmp_str; dirs.push_back(tmp_str); tmp_str.clear(); } for (int m=0; m<M; m++){ fp_in>>tmp_str; need.push_back(tmp_str); tmp_str.clear(); } for (std::vector<std::string>::iterator iter = dirs.begin(); iter!=dirs.end(); iter++){ for (std::string::iterator iter_str = (*iter).begin()+1; iter_str<(*iter).end(); ++iter_str){ if ((*iter_str)=='/') { std::string tmp_str2((*iter).begin(), iter_str); if (find(dirs.begin(), dirs.end(), tmp_str2)==dirs.end()) { dirs.push_back(tmp_str2); } } } } for (std::vector<std::string>::iterator iter_tmp = dirs.begin(); iter_tmp!= dirs.end(); ++iter_tmp) std::cout<<*iter_tmp<<" "; dirs.clear(); std::cout<<std::endl; std::cout<<" need "<<std::endl; //processing the next for (std::vector<std::string>::iterator iter_tmp = need.begin(); iter_tmp!=need.end(); ++iter_tmp) std::cout<<*iter_tmp<<" "; std::cout<<" where "; for (std::vector<std::string>::iterator iter = need.begin(); iter!=need.end(); iter++){ for (std::string::iterator iter_str = (*iter).begin()+1; iter_str<(*iter).end(); ++iter_str){ if ((*iter_str)=='/') { std::string tmp_str2((*iter).begin(), iter_str); if (find(need.begin(), need.end(), tmp_str2)==need.end()) { need.push_back(tmp_str2); } } } } for (std::vector<std::string>::iterator iter_tmp = need.begin(); iter_tmp!= need.end(); ++iter_tmp) std::cout<<*iter_tmp<<" "; need.clear(); std::cout<<std::endl; //finish processing the next } for (std::vector<std::string>::iterator iter= dirs.begin(); iter!=dirs.end(); iter++) std::cout<<*iter<<" "; std::cout<<std::endl; for (std::vector<std::string>::iterator iter= need.begin(); iter!=need.end(); iter++) std::cout<<*iter<<" "; std::cout<<std::endl; fp_out.close(); } best regards, Mark

    Read the article

  • Divide a large amount of text into an arbitrary number of equal parts.

    - by kalininew
    I probably already fed up with their stupid questions, but I have one more question. I have a large piece of text <p> Sed ut perspiciatis, unde omnis iste natus error sit voluptatem accusantium doloremque laudantium, totam rem aperiam eaque ipsa, quae ab illo inventore veritatis et quasi architecto beatae vitae dicta sunt, explicabo. Nemo enim ipsam voluptatem, quia voluptas sit, </p> <p> aspernatur aut odit aut fugit, sed quia consequuntur magni dolores eos, qui ratione voluptatem sequi nesciunt, neque porro quisquam est, qui dolorem ipsum, quia dolor sit, amet, consectetur, adipisci velit, sed quia non numquam eius modi tempora incidunt, ut labore et dolore magnam aliquam quaerat voluptatem. Ut enim ad minima veniam, quis nostrum exercitationem ullam corporis suscipit laboriosam, nisi ut aliquid ex ea commodi consequatur? </p> <p> Quis autem vel eum iure reprehenderit, qui in ea voluptate velit esse, quam nihil molestiae consequatur, vel illum, qui dolorem eum fugiat, quo voluptas nulla pariatur? At vero eos et accusamus et iusto odio dignissimos ducimus, qui blanditiis praesentium voluptatum deleniti atque corrupti, quos dolores et quas molestias excepturi sint, obcaecati cupiditate non provident, similique sunt in culpa, qui officia deserunt mollitia animi, id est laborum et dolorum fuga. Et harum quidem rerum facilis est et expedita distinctio. Nam libero tempore, cum </p> <p> soluta nobis est eligendi optio, cumque nihil impedit, quo minus id, quod maxime placeat, facere possimus, omnis voluptas assumenda est, omnis dolor repellendus. Temporibus autem quibusdam et aut officiis debitis aut rerum necessitatibus saepe eveniet, ut et voluptates repudiandae sint et molestiae non recusandae. Itaque earum rerum hic tenetur a sapiente delectus, ut aut reiciendis voluptatibus maiores alias consequatur aut perferendis doloribus asperiores repellat. </p> <p> Sed ut perspiciatis, unde omnis iste natus error sit voluptatem accusantium doloremque laudantium, totam rem aperiam eaque ipsa, quae ab illo inventore veritatis et quasi architecto beatae vitae dicta sunt, explicabo. Nemo enim ipsam voluptatem, quia voluptas sit, </p> <p> Sed ut perspiciatis, unde omnis iste natus error sit voluptatem accusantium doloremque laudantium, totam rem aperiam eaque ipsa, quae ab illo inventore veritatis et quasi architecto beatae vitae dicta sunt, explicabo. Nemo enim ipsam voluptatem, quia voluptas sit, </p> <p> aspernatur aut odit aut fugit, sed quia consequuntur magni dolores eos, qui ratione voluptatem sequi nesciunt, neque porro quisquam est, qui dolorem ipsum, quia dolor sit, amet, consectetur, adipisci velit, sed quia non numquam eius modi tempora incidunt, ut labore et dolore magnam aliquam quaerat voluptatem. Ut enim ad minima veniam, quis nostrum exercitationem ullam corporis suscipit laboriosam, nisi ut aliquid ex ea commodi consequatur? </p> <p> Quis autem vel eum iure reprehenderit, qui in ea voluptate velit esse, quam nihil molestiae consequatur, vel illum, qui dolorem eum fugiat, quo voluptas nulla pariatur? At vero eos et accusamus et iusto odio dignissimos ducimus, qui blanditiis praesentium voluptatum deleniti atque corrupti, quos dolores et quas molestias excepturi sint, obcaecati cupiditate non provident, similique sunt in culpa, qui officia deserunt mollitia animi, id est laborum et dolorum fuga. Et harum quidem rerum facilis est et expedita distinctio. 
Nam libero tempore, cum </p> <p> soluta nobis est eligendi optio, cumque nihil impedit, quo minus id, quod maxime placeat, facere possimus, omnis voluptas assumenda est, omnis dolor repellendus. Temporibus autem quibusdam et aut officiis debitis aut rerum necessitatibus saepe eveniet, ut et voluptates repudiandae sint et molestiae non recusandae. Itaque earum rerum hic tenetur a sapiente delectus, ut aut reiciendis voluptatibus maiores alias consequatur aut perferendis doloribus asperiores repellat. </p> <p> Quis autem vel eum iure reprehenderit, qui in ea voluptate velit esse, quam nihil molestiae consequatur, vel illum, qui dolorem eum fugiat, quo voluptas nulla pariatur? At vero eos et accusamus et iusto odio dignissimos ducimus, qui blanditiis praesentium voluptatum deleniti atque corrupti, quos dolores et quas molestias excepturi sint, obcaecati cupiditate non provident, similique sunt in culpa, qui officia deserunt mollitia animi, id est laborum et dolorum fuga. Et harum quidem rerum facilis est et expedita distinctio. Nam libero tempore, cum </p> <p> Sed ut perspiciatis, unde omnis iste natus error sit voluptatem accusantium doloremque laudantium, totam rem aperiam eaque ipsa, quae ab illo inventore veritatis et quasi architecto beatae vitae dicta sunt, explicabo. Nemo enim ipsam voluptatem, quia voluptas sit, </p> <p> soluta nobis est eligendi optio, cumque nihil impedit, quo minus id, quod maxime placeat, facere possimus, omnis voluptas assumenda est, omnis dolor repellendus. Temporibus autem quibusdam et aut officiis debitis aut rerum necessitatibus saepe eveniet, ut et voluptates repudiandae sint et molestiae non recusandae. Itaque earum rerum hic tenetur a sapiente delectus, ut aut reiciendis voluptatibus maiores alias consequatur aut perferendis doloribus asperiores repellat. </p> <p> Sed ut perspiciatis, unde omnis iste natus error sit voluptatem accusantium doloremque laudantium, totam rem aperiam eaque ipsa, quae ab illo inventore veritatis et quasi architecto beatae vitae dicta sunt, explicabo. Nemo enim ipsam voluptatem, quia voluptas sit, </p> At the output I need to divide the text into "n" equal parts, so that each part contains about the same amount of text. These parts are then arranged in columns, and I need the columns to look about the same height. Another condition: tags may be broken (I mean that if a "p" tag contains a lot of text, it can be divided into two parts, carrying the rest over into another column). I think this is a monumental task; I shall be grateful for any help.
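One way to approach this — a rough sketch, not a tested solution — is to reduce the markup to plain text, work out the total length, and cut at the nearest word boundary to each n-th fraction. The helper name and the strip_tags() pre-processing below are assumptions for illustration; a real solution would need to split per <p> block so the tags can be re-opened in the next column.

<?php
// Split $text into at most $n parts of roughly equal length, cutting only on spaces.
// Sketch only: it assumes plain text, so run the HTML through strip_tags() first
// (or split each <p> block separately).
function split_into_parts($text, $n)
{
    $words   = preg_split('/\s+/', trim($text));   // break on whitespace
    $target  = ceil(strlen($text) / $n);           // desired size of each part, in bytes
    $parts   = array();
    $current = '';

    foreach ($words as $word) {
        // start a new part once the current one has reached the target size,
        // but never create more than $n parts
        if ($current !== '' && strlen($current) + strlen($word) + 1 > $target
            && count($parts) < $n - 1) {
            $parts[] = $current;
            $current = '';
        }
        $current .= ($current === '' ? '' : ' ') . $word;
    }
    $parts[] = $current; // the last part takes whatever is left

    return $parts;
}

// Usage sketch: wrap each part in its own column container.
$columns = split_into_parts(strip_tags($html), 3);
foreach ($columns as $column) {
    echo '<div class="column"><p>' . htmlspecialchars($column) . '</p></div>';
}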

    Read the article

  • How to simulate browser file upload with HttpWebRequest

    - by cucicov
    Hi, guys, first of all thanks for your contributions, I've found great responses here. Yet I've ran into a problem I can't figure out and if someone could provide any help, it would be greatly appreciated. I'm developing this application in C# that could upload an image from computer to user photoblog. For this I'm usig pixelpost platform for photoblogs that is written mainly in PHP. I've searched here and on other web pages, but the exmples provided there didn't work for me. Here is what I used in my example: (http://stackoverflow.com/questions/566462/upload-files-with-httpwebrequest-multipart-form-data) and (http://bytes.com/topic/c-sharp/answers/268661-how-upload-file-via-c-code) Once it is ready I will make it available for free on the internet and maybe also create a windows mobile version of it, since I'm a fan of pixelpost. here is the code I've used: string formUrl = "http://localhost/pixelpost/admin/index.php?x=login"; string formParams = string.Format("user={0}&password={1}", "user-String", "password-String"); string cookieHeader; HttpWebRequest req = (HttpWebRequest)HttpWebRequest.Create(formUrl); req.ContentType = "application/x-www-form-urlencoded"; req.Method = "POST"; req.AllowAutoRedirect = false; byte[] bytes = Encoding.ASCII.GetBytes(formParams); req.ContentLength = bytes.Length; using (Stream os = req.GetRequestStream()) { os.Write(bytes, 0, bytes.Length); } HttpWebResponse resp = (HttpWebResponse)req.GetResponse(); cookieHeader = resp.Headers["Set-Cookie"]; string pageSource; using (StreamReader sr = new StreamReader(resp.GetResponseStream())) { pageSource = sr.ReadToEnd(); Console.WriteLine(); } string getUrl = "http://localhost/pixelpost/admin/index.php"; HttpWebRequest getRequest = (HttpWebRequest)HttpWebRequest.Create(getUrl); getRequest.Headers.Add("Cookie", cookieHeader); HttpWebResponse getResponse = (HttpWebResponse)getRequest.GetResponse(); using (StreamReader sr = new StreamReader(getResponse.GetResponseStream())) { pageSource = sr.ReadToEnd(); } // end first part: login to admin panel long length = 0; string boundary = "----------------------------" + DateTime.Now.Ticks.ToString("x"); HttpWebRequest httpWebRequest2 = (HttpWebRequest)WebRequest.Create("http://localhost/pixelpost/admin/index.php?x=save"); httpWebRequest2.ContentType = "multipart/form-data; boundary=" + boundary; httpWebRequest2.Method = "POST"; httpWebRequest2.AllowAutoRedirect = false; httpWebRequest2.KeepAlive = false; httpWebRequest2.Credentials = System.Net.CredentialCache.DefaultCredentials; httpWebRequest2.Headers.Add("Cookie", cookieHeader); Stream memStream = new System.IO.MemoryStream(); byte[] boundarybytes = System.Text.Encoding.ASCII.GetBytes("\r\n--" + boundary + "\r\n"); string formdataTemplate = "\r\n--" + boundary + "\r\nContent-Disposition: form-data; name=\"{0}\";\r\n\r\n{1}"; string formitem = string.Format(formdataTemplate, "headline", "image-name"); byte[] formitembytes = System.Text.Encoding.UTF8.GetBytes(formitem); memStream.Write(formitembytes, 0, formitembytes.Length); memStream.Write(boundarybytes, 0, boundarybytes.Length); string headerTemplate = "\r\nContent-Disposition: form-data; name=\"{0}\"; filename=\"{1}\"\r\nContent-Type: application/octet-stream\r\n\r\n"; string header = string.Format(headerTemplate, "userfile", "path-to-the-local-file"); byte[] headerbytes = System.Text.Encoding.UTF8.GetBytes(header); memStream.Write(headerbytes, 0, headerbytes.Length); FileStream fileStream = new FileStream("path-to-the-local-file", FileMode.Open, FileAccess.Read); 
byte[] buffer = new byte[1024]; int bytesRead = 0; while ((bytesRead = fileStream.Read(buffer, 0, buffer.Length)) != 0) { memStream.Write(buffer, 0, bytesRead); } memStream.Write(boundarybytes, 0, boundarybytes.Length); fileStream.Close(); httpWebRequest2.ContentLength = memStream.Length; Stream requestStream = httpWebRequest2.GetRequestStream(); memStream.Position = 0; byte[] tempBuffer = new byte[memStream.Length]; memStream.Read(tempBuffer, 0, tempBuffer.Length); memStream.Close(); requestStream.Write(tempBuffer, 0, tempBuffer.Length); requestStream.Close(); WebResponse webResponse2 = httpWebRequest2.GetResponse(); Stream stream2 = webResponse2.GetResponseStream(); StreamReader reader2 = new StreamReader(stream2); Console.WriteLine(reader2.ReadToEnd()); webResponse2.Close(); httpWebRequest2 = null; webResponse2 = null; and also here is the PHP: (http://dl.dropbox.com/u/3149888/index.php) and (http://dl.dropbox.com/u/3149888/new_image.php) the mandatory fields are headline and userfile so I can't figure out where the mistake is, as the format sent in right. I'm guessing there is something wrong with the octet-stream sent to the form. Maybe it's a stupid mistake I wasn't able to trace, in any case, if you could help me that would mean a lot. thanks,

    Read the article

  • PHP - not returning a count number for filled array...

    - by Phil Jackson
    Morning, this is eating me alive so I'm hoping it's not something stupid, lol.

    $arrg = array();
    if( str_word_count( $str ) > 1 ) {
        $input_arr = explode(' ', $str);
        die(print_r($input_arr));
        $count = count($input_arr);
        die($count);

    The above is part of a function. When I run it I get:

    Array ( [0] => luke [1] => snowden [2] => create [3] => develop [4] => web [5] => applications [6] => sites [7] => alse [8] => dab [9] => hand [10] => design [11] => love [12] => helping [13] => business [14] => thrive [15] => latest [16] => industry [17] => developer [18] => act [19] => designs [20] => php [21] => mysql [22] => jquery [23] => ajax [24] => xhtml [25] => css [26] => de [27] => montfont [28] => award [29] => advanced [30] => programming [31] => taught [32] => development [33] => years [34] => experience [35] => topic [36] => fully [37] => qualified [38] => electrician [39] => city [40] => amp [41] => guilds [42] => level )

    which is what I'm expecting. Run this, however, and nothing is returned!?!?!

    $arrg = array();
    if( str_word_count( $str ) > 1 ) {
        $input_arr = explode(' ', $str);
        //die(print_r($input_arr));
        $count = count($input_arr);
        die($count);

    Can anyone see anything that my eyes can't?? Regards, Phil
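    For what it's worth, the usual culprit in a snippet like this — a guess, not a confirmed diagnosis of the code above — is that die()/exit() treats an integer argument as the process exit status rather than as output, so die($count) prints nothing. Making the value a string (or dumping it) makes it visible:

    $count = count($input_arr);

    // die() with an integer sets the exit status and prints nothing,
    // so convert it (or inspect it) before dying:
    die((string) $count);      // prints e.g. "43"
    // or:
    // var_dump($count); exit;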

    Read the article

  • How should I change my Graph structure (very slow insertion)?

    - by Nazgulled
    Hi, This program I'm doing is about a social network, which means there are users and their profiles. The profiles structure is UserProfile. Now, there are various possible Graph implementations and I don't think I'm using the best one. I have a Graph structure and inside, there's a pointer to a linked list of type Vertex. Each Vertex element has a value, a pointer to the next Vertex and a pointer to a linked list of type Edge. Each Edge element has a value (so I can define weights and whatever it's needed), a pointer to the next Edge and a pointer to the Vertex owner. I have a 2 sample files with data to process (in CSV style) and insert into the Graph. The first one is the user data (one user per line); the second one is the user relations (for the graph). The first file is quickly inserted into the graph cause I always insert at the head and there's like ~18000 users. The second file takes ages but I still insert the edges at the head. The file has about ~520000 lines of user relations and takes between 13-15mins to insert into the Graph. I made a quick test and reading the data is pretty quickly, instantaneously really. The problem is in the insertion. This problem exists because I have a Graph implemented with linked lists for the vertices. Every time I need to insert a relation, I need to lookup for 2 vertices, so I can link them together. This is the problem... Doing this for ~520000 relations, takes a while. How should I solve this? Solution 1) Some people recommended me to implement the Graph (the vertices part) as an array instead of a linked list. This way I have direct access to every vertex and the insertion is probably going to drop considerably. But, I don't like the idea of allocating an array with [18000] elements. How practically is this? My sample data has ~18000, but what if I need much less or much more? The linked list approach has that flexibility, I can have whatever size I want as long as there's memory for it. But the array doesn't, how am I going to handle such situation? What are your suggestions? Using linked lists is good for space complexity but bad for time complexity. And using an array is good for time complexity but bad for space complexity. Any thoughts about this solution? Solution 2) This project also demands that I have some sort of data structures that allows quick lookup based on a name index and an ID index. For this I decided to use Hash Tables. My tables are implemented with separate chaining as collision resolution and when a load factor of 0.70 is reach, I normally recreate the table. I base the next table size on this http://planetmath.org/encyclopedia/GoodHashTablePrimes.html. Currently, both Hash Tables hold a pointer to the UserProfile instead of duplication the user profile itself. That would be stupid, changing data would require 3 changes and it's really dumb to do it that way. So I just save the pointer to the UserProfile. The same user profile pointer is also saved as value in each Graph Vertex. So, I have 3 data structures, one Graph and two Hash Tables and every single one of them point to the same exact UserProfile. The Graph structure will serve the purpose of finding the shortest path and stuff like that while the Hash Tables serve as quick index by name and ID. What I'm thinking to solve my Graph problem is to, instead of having the Hash Tables value point to the UserProfile, I point it to the corresponding Vertex. It's still a pointer, no more and no less space is used, I just change what I point to. 
Like this, I can easily and quickly lookup for each Vertex I need and link them together. This will insert the ~520000 relations pretty quickly. I thought of this solution because I already have the Hash Tables and I need to have them, then, why not take advantage of them for indexing the Graph vertices instead of the user profile? It's basically the same thing, I can still access the UserProfile pretty quickly, just go to the Vertex and then to the UserProfile. But, do you see any cons on this second solution against the first one? Or only pros that overpower the pros and cons on the first solution? Other Solution) If you have any other solution, I'm all ears. But please explain the pros and cons of that solution over the previous 2. I really don't have much time to be wasting with this right now, I need to move on with this project, so, if I'm doing to do such a change, I need to understand exactly what to change and if that's really the way to go. Hopefully no one fell asleep reading this and closed the browser, sorry for the big testament. But I really need to decide what to do about this and I really need to make a change. P.S: When answering my proposed solutions, please enumerate them as I did so I know exactly what are you talking about and don't confuse my self more than I already am.

    Read the article

  • Is there any better way for creating a dynamic HTML table without using any javascript library like

    - by piemesons
    Dont worry we dont need to find out any bug in this code.. Its working perfectly.:-P My boss came to me and said "Hey just tell me whats the best of way of writing code for a dynamic HTML table (add row, delete row, update row).No need to add any CSS. Just javascript. No Jquery library etc. I was confused that in the middle of the project why he asking for some stupid exercise like this. What ever i wrote the following code and mailed him and after 15 mins i got a mail from him. " I was expecting much better code from a guy like you. Anyways good job monkey.(And with a picture of monkey as attachment.) thats was the mail. Line by line. I want to reply him but before that i want to know about the quality of my code. Is this really shitty...!!! Or he was just making fun of mine. I dont think that code is really shitty. Still correct me if you can.Code is working perfectly fine. Just copy paste it in a HTML file. <html> <head> <title> Exercise CSS </title> <script type="text/javascript"> function add_row() { var table = document.getElementById('table'); var rowCount = table.rows.length; var row = table.insertRow(rowCount); var cell1 = row.insertCell(0); var element1 = document.createElement("input"); element1.type = "text"; cell1.appendChild(element1); var cell2 = row.insertCell(1); var element2 = document.createElement("input"); element2.type = "text"; cell2.appendChild(element2); var cell3 = row.insertCell(2); cell3.innerHTML = ' <span onClick="edit(this)">Edit</span>/<span onClick="delete_row(this)">Delete</span>'; cell3.setAttribute("style", "display:none;"); var cell4 = row.insertCell(3); cell4.innerHTML = '<span onClick="save(this)">Save</span>'; } function save(e) { var elTableCells = e.parentNode.parentNode.getElementsByTagName("td"); elTableCells[0].innerHTML=elTableCells[0].firstChild.value; elTableCells[1].innerHTML=elTableCells[1].firstChild.value; elTableCells[2].setAttribute("style", "display:block;"); elTableCells[3].setAttribute("style", "display:none;"); } function edit(e) { var elTableCells = e.parentNode.parentNode.getElementsByTagName("td"); elTableCells[0].innerHTML='<input type="text" value="'+elTableCells[0].innerHTML+'">'; elTableCells[1].innerHTML='<input type="text" value="'+elTableCells[1].innerHTML+'">'; elTableCells[2].setAttribute("style", "display:none;"); elTableCells[3].setAttribute("style", "display:block;"); } function delete_row(e) { e.parentNode.parentNode.parentNode.removeChild(e.parentNode.parentNode); } </script> </head> <body > <div id="display"> <table id='table'> <tr id='id'> <td> Piemesons </td> <td> 23 </td> <td > <span onClick="edit(this)">Edit</span>/<span onClick="delete_row(this)">Delete</span> </td> <td style="display:none;"> <span onClick="save(this)">Save</span> </td> </tr> </table> <input type="button" value="Add new row" onClick="add_row();" /> </div> </body>

    Read the article

  • asp .net MVC 2.0 Validation

    - by ANDyW
    Hi I’m trying to do some validation in asp .net MVC 2.0 for my application. I want to have some nice client side validation. Validation should be done most time on model side with DataAnnotations with custom attributes( like CompareTo, StringLenght, MinPasswordLenght (from Membership.MinimumumpassworkdLenght value). For that purpose I tried to use xval with jquery.validation. Some specific thing is that most of forms will be working with ajax and most problems are when I want to validate form with ajax. Here is link for sample project http://www.sendspace.com/file/m9gl54 . I got two forms as controls ValidFormControl1.ascx, ValidFormControl2.ascx <% using (Ajax.BeginForm("CreateValidForm", "Test", new AjaxOptions { HttpMethod = "Post" })) {%> <div id="validationSummary1"> <%= Html.ValidationSummary(true)%> </div> <fieldset> <legend>Fields</legend> <div class="editor-label"> <%= Html.LabelFor(model => model.Name)%> </div> <div class="editor-field"> <%= Html.TextBoxFor(model => model.Name)%> <%= Html.ValidationMessageFor(model => model.Name)%> </div> <div class="editor-label"> <%= Html.LabelFor(model => model.Email)%> </div> <div class="editor-field"> <%= Html.TextBoxFor(model => model.Email)%> <%= Html.ValidationMessageFor(model => model.Email)%> </div> <div class="editor-label"> <%= Html.LabelFor(model => model.Password)%> </div> <div class="editor-field"> <%= Html.TextBoxFor(model => model.Password)%> <%= Html.ValidationMessageFor(model => model.Password)%> </div> <div class="editor-label"> <%= Html.LabelFor(model => model.ConfirmPassword)%> </div> <div class="editor-field"> <%= Html.TextBoxFor(model => model.ConfirmPassword)%> <%= Html.ValidationMessageFor(model => model.ConfirmPassword)%> </div> <p> <input type="submit" value="Create" /> </p> </fieldset> <% } %> <%= Html.ClientSideValidation<ValidModel>() .UseValidationSummary("validationSummary1", "Please fix the following problems:") %> Both look same the difference is only validation summaryID (validationSummary1, validationSummary2). Both controls are rendered on one page : Form2 <%Html.RenderPartial("~/Views/Test/ValidFormControl2.ascx", null); %> Form1 <%Html.RenderPartial("~/Views/Test/ValidFormControl.ascx", null); %> Validation property First problem, when we have two controls with same type to validate it don’t work becosue html elements are rendered by field name ( so we have two element with same name “Password” ). Only first form will be validated by client side. The worst thing is that even if we have different types and their fields name is same validation won’t work too ( this thing is what I need to repair it will be stupid to name some unique properites for validation ). Is there any solution for this ? Custom attributes validation Next thing custom attributes validation ( All those error are when I use Ajax for on normal form validation is working without problem. ): CompareTo - Simple compare to that is done in mvc template for account model ( class attribute saying with two property will be compared ) , and it wasn’t show on page. To do it I created own CachingRulesProvider with compareRule and my Attribute. Maybe there is more easy way to do it? StringLenght with minimum and maximum value, I won’t describe how I done it but is there any easy whey to do it? Validation summary When I have two two control on page all summary validation information goes to first control validation summary element, even xval generated script say that elementID are different for summary. Any one know how to repair it? 
Validation Information: Is there any option to turn on messages in the place where Html.ValidationMessageFor(model => model.ConfirmPassword) sits? For me it doesn't show up. I would like to have the summary and the next-to-field message too, not only the red border. Anyone know how to do it? Ajax submit: Anyone know an easy way, without massive JavaScript code, to do the submit via JavaScript? This will be used to change the input submit into an href element (a).

    Read the article

  • PHP: How to automate building 100 <UL>/<LI> menu items, while keeping the Menu Structure File Flat / Simply Manageable?

    - by Sam
    Above: current "stupid" menu. (entire ul/li menu for javascript menu system) + (some li lines as page-specific submenu) Hi folks! With passion for automation and elegancy, but limited knowledge/knowhow, im stuck with "my hands in my hair" as we Dutch say, for my current menu system works perfectly, but is a pain in the a*s to update! So, i would appreciate it greatly, if you can suggest how to automate this in php: how to let the php generate the html menu code basing on a flat menu input file with TABS indented. OLD SITUATION <ul> <!-- about 100 of these <li>....</li> lines --> <li><a href="carrot.php"><p class="mnu" style="background-position:0 -820px"><? echo __("carrot juice") ?></p></a></li> <!-- lots of data, with only little bit thats really the menu itself--> </ul a javascript file reads a ul/li structure as input to build menu of format in that ul/li, the items with a hyperlink and sprite-bg position represent webpages, (inside LI) while items without hyperlink and sprite-bg are just headers of that menusection, (inside H6) to highlight the current page in the menu, the javascript menumaker uses an id number. this number corresponds to the consequtive li that is a webpage, skips h6 headers correctly. these h6 headers are only there for when importing sections of the same menu as submenu. non-li headers are not shown in menu, nore counted by the javascript menu for their ID. to know which page should be shown, i have to count from ID 0, the li items till finding the current webpage in the li structure and then manually put it in each webpage! BUT: changing an item in li order, means stupidly re-counting their entire li again! each webpage has an icon (= sprite bg-position numer), which is also used in the webpage. INTENDED RESULT I dream of, once setting what the current webpage is (e.g carrot.php) the menu system automatically "finds" and "counts" the li's and returns the id nr (for proper highlight of main menu); generates the entire menu html, and depending on which headings are set for submenu, (e.g. meals, drinks) generates those submenu (entire section below each given header); ginally adds h5 highlight inside the li of that submenu item. For the menu, i wish an easily readable, simple plain txt menu that is indented with tabs, (each tab is one depth for example) and further tabs follow for url and sprite position of icon. MY DREAM MENU-MANAGEMENT FILE |>TAB SEPARATED/INDENTED FLATMENU FILE |MUST BE CALCULATED BY PHP: |>MENUTEXT============URL=============SPRITE=====|ID===TAG================== |>about "#" -520 |00 li |> INFORMATION |—— h6 |> physical state "physical.php" -920 |01 li |> mental health "mental.php" -10 |02 li |> |>apetite "#" -1290 |03 li |> meals "#" -600 |04 li |> COLD MEAL |—— h6 |> egg salade "salad.php" -1040 |05 li |> salmon fish "salmon.php" -540 |06 li |> HOT MEAL |—— h6 |> spare ribs "spareribs.php" -120 |07 li |> di macaroni "macaroni.php" -870 |08 li |> |> drinks "#" -230 |09 li |> JUCY DRINK |—— h6 |> carrot juice "carrot.php" -820 |10 li |> mango hive "mango.php" -270 |11 li DESIRED CHRONOLOGY php outputs the entire ul/li html so the javascript can show the menu: webpage items go inside li tags, and header items go inside h6 tags, e.g. <h6>JUCY DRINK</h6> Each website page has a url filename [eg: salad.php]. Based on this given fact, the php menu generator detects the pagename, gives the IDnr of the position of that page according to the li-item nr and sets variable for javascript to highlight current menu item. 
the menu items below the specified headers are loaded as a submenu, in which the current page.php is wrapped inside h5 to highlight the current page in the submenu: e.g. (<li><h5><a href="carrot.php"><p>..etc..</p></h5></li>). Question: Which methods / steps / (chronological) ways are there for doing this? I am no good at PHP programming, but am learning it, so please don't write any code without a line of comment on why I should use that method, etc. Where do I start? If I am unclear in my question, please ask. Thanks. Much appreciated!! Concrete Task List from the provided Comments/Answers, so far: (RobertB) First, get some PHP code working that can read through a tab-delimited file and put the data into an appropriate data structure. NOW WORKING AT THIS (a rough sketch of that first step is below).
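A minimal, untested sketch of that first step — reading a tab-indented menu file into a nested PHP array. The file name, column order, and array keys are assumptions for illustration; the real file shown above has decorative "|>" prefixes and extra columns that would need to be stripped or split off in the same way.

<?php
// Parse a tab-indented menu file into a nested array.
// Assumed line format: <tabs for depth><label>\t<"url">\t<sprite offset>
// Header rows (no URL) and the ID counting are left out of this sketch.
function parse_menu($filename)
{
    $root = array('children' => array());
    $stack = array();
    $stack[0] = &$root;   // last node seen at each depth

    foreach (file($filename, FILE_IGNORE_NEW_LINES | FILE_SKIP_EMPTY_LINES) as $line) {
        $depth = strspn($line, "\t");          // number of leading tabs = nesting depth
        $cols  = explode("\t", trim($line));

        $node = array(
            'label'    => $cols[0],
            'url'      => isset($cols[1]) ? trim($cols[1], '"') : null,
            'sprite'   => isset($cols[2]) ? (int) $cols[2] : null,
            'children' => array(),
        );

        // attach the node to the most recent node one level up,
        // then remember it as the current node at its own depth
        $parent = &$stack[$depth];
        $parent['children'][] = $node;
        $stack[$depth + 1] = &$parent['children'][count($parent['children']) - 1];
    }

    return $root['children'];
}

// Once the tree exists, a recursive function can emit the <ul>/<li> HTML,
// count only the items that are real pages to produce the ID numbers,
// and wrap the current page's link in <h5> for the submenu highlight.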

    Read the article

< Previous Page | 66 67 68 69 70 71  | Next Page >