Search Results

Search found 2902 results on 117 pages for 'directed graph'.


  • Robust way to save/load objects with dependencies?

    - by mrteacup
    I'm writing an Android game in Java and I need a robust way to save and load application state quickly. The question seems to apply to most OO languages.

    To understand what I need to save: I'm using a Strategy pattern to control my game entities. The idea is I have a very general Entity class which e.g. stores the location of a bullet/player/enemy, and I then attach a Behaviour class that tells the entity how to act:

        class Entity {
            float x;
            float y;
            Behavior b;
        }

        abstract class Behavior {
            abstract void update(Entity e);
        }

        // Move about at a constant speed
        class MoveBehavior extends Behavior {
            float speed;
            void update ...
        }

        // Chase after another entity
        class ChaseBehavior extends Behavior {
            Entity target;
            void update ...
        }

        // Perform two behaviours in sequence
        class CombineBehavior extends Behavior {
            Behavior a, b;
            void update ...
        }

    Essentially, Entity objects are easy to save, but Behaviour objects can have a semi-complex graph of dependencies on other Entity objects and other Behaviour objects. I also have cases where a Behaviour object is shared between entities. I'm willing to change my design to make saving/loading state easier, but the above design works really well for structuring the game. Anyway, the options I've considered are:

    - Use Java serialization. This is meant to be really slow on Android (I'll profile it sometime). I'm also worried about robustness when changes are made between versions.
    - Use something like JSON or XML. I'm not sure how I would cope with storing the dependencies between objects, however. Would I have to give each object a unique ID and then use these IDs on loading to link the right objects together? I thought I could e.g. change the ChaseBehaviour to store an ID for an entity, instead of a reference, that would be used to look up the Entity before performing the behaviour.

    I'd rather avoid having to write lots of loading/saving code myself, as I find it really easy to make mistakes (e.g. forgetting to save something, reading things out in the wrong order). Can anyone give me any tips on good formats to save to, or class designs that make saving state easier?
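
    The ID-indirection idea floated above (give each object a unique ID, save references as IDs, re-link on load) is language-agnostic, so here is a minimal sketch of it in Python rather than Java; all names (Entity, ChaseBehavior, save/load) are illustrative, not a real library API. Saving assigns IDs in one pass; loading runs two passes - create every object, then resolve IDs back into references - which also preserves behaviours shared between entities:

        class Entity:
            def __init__(self, x, y, behavior=None):
                self.x, self.y, self.behavior = x, y, behavior

        class ChaseBehavior:
            def __init__(self, target):
                self.target = target  # a reference to another Entity

        def save(entities):
            ids = {}
            for i, e in enumerate(entities):
                ids[e] = "e%d" % i
            # Dedupe behaviours so a shared one gets a single ID and sharing
            # survives the round trip.
            behaviors = list(dict.fromkeys(e.behavior for e in entities
                                           if e.behavior is not None))
            for i, b in enumerate(behaviors):
                ids[b] = "b%d" % i
            data = {"entities": [], "behaviors": []}
            for e in entities:
                data["entities"].append({"id": ids[e], "x": e.x, "y": e.y,
                                         "behavior": ids.get(e.behavior)})
            for b in behaviors:
                data["behaviors"].append({"id": ids[b], "target": ids[b.target]})
            return data  # only IDs and plain values: safe to dump as JSON

        def load(data):
            entities = {d["id"]: Entity(d["x"], d["y"]) for d in data["entities"]}
            behaviors = {d["id"]: ChaseBehavior(None) for d in data["behaviors"]}
            for d in data["behaviors"]:          # second pass: IDs back to references
                behaviors[d["id"]].target = entities[d["target"]]
            for d in data["entities"]:
                if d["behavior"] is not None:
                    entities[d["id"]].behavior = behaviors[d["behavior"]]
            return list(entities.values())

        a = Entity(0, 0)
        b = Entity(5, 5, ChaseBehavior(a))
        restored = restored = load(save([a, b]))  # restored[1].behavior.target is restored[0]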


  • Problem in storing the dynamic data in NSMutableArray?

    - by Rajendra Bhole
    I want to develop an application in which I first defined a structure for storing X and Y coordinates:

        struct TCo_ordinates {
            float x;
            float y;
        };

    Then in the drawRect: method I create an object of the structure:

        struct TCo_ordinates *tCoordianates;

    Now I'm drawing the Y axis of the graph; its code is:

        fltX1 = 30; fltY1 = 5;
        fltX2 = fltX1; fltY2 = 270;
        CGContextMoveToPoint(ctx, fltX1, fltY1);
        CGContextAddLineToPoint(ctx, fltX2, fltY2);
        NSArray *hoursInDays = [[NSArray alloc] initWithObjects:@"1",@"2",@"3",@"4",@"5",@"6",@"7",@"8",@"9",@"10",@"11",@"12", nil];
        for(int intIndex = 0; intIndex < [hoursInDays count]; fltY2 -= 20, intIndex++)
        {
            CGContextSetRGBStrokeColor(ctx, 2, 2, 2, 1);
            //CGContextSetRGBStrokeColor(ctx, 1.0f/255.0f, 1.0f/255.0f, 1.0f/255.0f, 1.0f);
            CGContextMoveToPoint(ctx, fltX1-3, fltY2-40);
            CGContextAddLineToPoint(ctx, fltX1+3, fltY2-40);
            CGContextSelectFont(ctx, "Helvetica", 14.0, kCGEncodingMacRoman);
            CGContextSetTextDrawingMode(ctx, kCGTextFill);
            CGContextSetRGBFillColor(ctx, 0, 255, 255, 1);
            CGAffineTransform xform = CGAffineTransformMake(1.0, 0.0, 0.0, -1.0, 0.0, 0.0);
            CGContextSetTextMatrix(ctx, xform);
            const char *arrayDataForYAxis = [[hoursInDays objectAtIndex:intIndex] UTF8String];
            float x1 = fltX1-23;
            float y1 = fltY2-37;
            CGContextShowTextAtPoint(ctx, x1, y1, arrayDataForYAxis, strlen(arrayDataForYAxis));
            CGContextStrokePath(ctx);
        }

    Now I want to store the generated values of x1 and y1 in an NSMutableArray dynamically. For that I wrote:

        NSMutableArray *yAxisCoordinates = [[NSMutableArray alloc] autorelease];
        for(int yObject = 0; yObject < intIndex; yObject++)
        {
            [yAxisCoordinates insertObject:(tCoordianates->x = x1, tCoordianates->y = y1) atIndex:yObject];
        }

    But it isn't working. How do I store the x1 and y1 values in the yAxisCoordinates object? Is the above code correct?
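
    One thing worth noting: the insertObject: call above passes the value of a C comma expression (a float), whereas an NSMutableArray can only hold objects, so each coordinate pair needs to be wrapped in an object first. To show the shape of the intended loop, here is a language-agnostic sketch in Python (where pairs can be stored directly in a list); the numbers mirror the drawing code above and are purely illustrative:

        # Collect each (x1, y1) pair produced while drawing the axis into a list.
        y_axis_coordinates = []
        flt_y2 = 270
        for index in range(12):                   # one entry per hour label, as in hoursInDays
            x1 = 30 - 23
            y1 = flt_y2 - 37
            y_axis_coordinates.append((x1, y1))   # store the pair itself, not an assignment expression
            flt_y2 -= 20
        print(y_axis_coordinates[:3])             # [(7, 233), (7, 213), (7, 193)]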


  • .NET Free memory usage (how to prevent overallocation / release memory to the OS)

    - by Ronan Thibaudau
    I'm currently working on a website that makes heavy use of cached data to avoid roundtrips. At startup we get a "large" graph (hundreds of thousands of different kinds of objects). Those objects are retrieved over WCF and deserialized (we use protocol buffers for serialization). I'm using Red Gate's memory profiler to debug memory issues (the memory didn't seem to fit with how much memory we should need "after" we're done initializing), and I end up with this report. What we can gather from the report is that:

    1) Most of the memory .NET allocated is free (it may have been rightfully allocated during deserialization, but now that it's free, I'd like for it to return to the OS).

    2) Memory is fragmented (which is bad, as every time I refresh the cache I need to redo the memory-hungry deserialization process, and this in turn creates large objects that may throw an OutOfMemoryException due to fragmentation).

    3) I have no clue why the space is fragmented, because when I look at the large object heap there are only 30 instances: 15 object[] arrays are directly attached to the GC and totally unrelated to me, 1 is a char array also attached directly to the GC heap, and the remaining 15 are mine but are not the cause of this, as I get the same report if I comment them out in code.

    So my question is: what can I do to go further with this? I'm not really sure what to look for in debugging/tools, as it seems my memory is fragmented, but not by me, and huge amounts of free space are allocated by .NET which I can't release. Also, please make sure you understand the question well before answering: I'm not looking for a way to free memory within .NET (GC.Collect), but to free memory that is already free in .NET to the system, as well as to defragment said memory. Note that a slow solution is fine; if it's possible to manually defragment the large object heap, I'd be all for it, as I can call it at the end of RefreshCache and it's OK if it takes 1 or 2 seconds to run. Thanks for your help!

    A few notes I forgot:

    1) The project is a .NET 2.0 website; I get the same results running it in a .NET 4 pool, and likewise if I run it in a .NET 4 pool and convert it to .NET 4 and recompile.

    2) These are results of a release build, so a debug build cannot be the issue.

    3) And this is probably quite important: I do not get these issues at all in the webdev server, only in IIS. In webdev I get memory consumption rather close to my actual consumption (well, more, but not 5-10x more!)


  • How do I make a GUI that behaves like this?

    - by Karl Knechtel
    This is difficult to explain without illustration, so - behold, an illustration, cobbled together from screenshots of a few hello-world examples and a lot of Paint work. I have started out using Windows Forms on .NET (via IronPython, but that shouldn't be important), and haven't been able to figure out very much. GUI libraries in general are very intimidating, simply because every class has so many possible attributes. Documentation is good at explaining what everything does, but not so good at helping you figure out what you need. I will be assembling the GUI dynamically, but I'm not expecting that to be the hard part. The sticking points for me right now are:

    - How do I get text labels to size themselves automatically to the width of the contained text (so that the text doesn't clip, and I also don't reserve unnecessary space for them when resizing the window)?
    - How do I make the vertical scrollbar always appear? Setting the VScroll property (why is this protected when AutoScroll is public, BTW?) doesn't seem to do anything.
    - How come the horizontal scrollbar is not added by AutoScroll when contents are laid out vertically (via Dock = DockStyle.Top)? I can use a minimum size for panels to prevent the label and corresponding control from overlapping when the window is shrunk horizontally, but then the scrollbar doesn't appear and the control is inaccessible.
    - How can I put limits on window resizing (e.g. set a minimum width) without disabling it completely? (Just set minimum/maximum sizes for the Form?) Related to that, is there any way to set minimum/maximum widths or heights without setting a minimum/maximum size (i.e. can I constrain the size in only one dimension)?
    - Is there a built-in control suitable for hex editing, or am I going to have to build something myself?

    ... And should I be using something else (perhaps something more capable)? I've heard WPF mentioned, but I understand that this involves XML, and I really don't want to build a GUI from XML - I already have data in an object graph, and doing some kind of weird XML pseudo-serialization (in Python, no less!) in order to create a GUI seems incredibly roundabout.
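
    Since the poster is already on IronPython, a couple of these sticking points (auto-sizing labels, a scrollable vertical layout, a minimum window size) can be sketched directly against standard WinForms members. This is only a minimal sketch, not a full answer - in particular it does not force the vertical scrollbar to be always visible:

        import clr
        clr.AddReference("System.Windows.Forms")
        clr.AddReference("System.Drawing")
        from System.Windows.Forms import (Application, Form, Label,
                                          FlowLayoutPanel, FlowDirection, DockStyle)
        from System.Drawing import Size

        form = Form()
        form.MinimumSize = Size(300, 200)   # limits resizing without disabling it;
                                            # Size(300, 0) would constrain width only

        panel = FlowLayoutPanel()           # stacks children vertically and scrolls
        panel.Dock = DockStyle.Fill
        panel.FlowDirection = FlowDirection.TopDown
        panel.AutoScroll = True
        form.Controls.Add(panel)

        label = Label()
        label.AutoSize = True               # label sizes itself to its text
        label.Text = "A label that sizes itself to its text"
        panel.Controls.Add(label)

        Application.Run(form)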


  • Project Euler #18 - how to brute force all possible paths in tree-like structure using Python?

    - by euler user
    I'm trying to learn Python the Atlantic way and am stuck on Project Euler #18. All of the stuff I can find on the web (and there's a LOT more googling that happened beyond that) is some variation on 'well you COULD brute force it, but here's a more elegant solution'... I get it, I totally do. There are really neat solutions out there, and I look forward to the day when the phrase 'acyclic graph' conjures up something more than a hazy, 1-megapixel resolution in my head. But I need to walk before I run here, see the state, and toy around with the brute force answer.

    So, question: how do I generate (enumerate?) all valid paths for the triangle in Project Euler #18 and store them in an appropriate Python data structure? (A list of lists is my initial inclination.) I don't want the answer - I want to know how to brute force all the paths and store them in a data structure.

    Here's what I've got. I'm definitely looping over the data set wrong. The desired behavior would be to go 'depth first(?)' rather than just looping over each row ineffectually. I read ch. 3 of Norvig's book but couldn't translate the pseudo-code. Tried reading over the AIMA Python library for ch. 3, but it makes too many leaps.

        triangle = [
            [75],
            [95, 64],
            [17, 47, 82],
            [18, 35, 87, 10],
            [20, 4, 82, 47, 65],
            [19, 1, 23, 75, 3, 34],
            [88, 2, 77, 73, 7, 63, 67],
            [99, 65, 4, 28, 6, 16, 70, 92],
            [41, 41, 26, 56, 83, 40, 80, 70, 33],
            [41, 48, 72, 33, 47, 32, 37, 16, 94, 29],
            [53, 71, 44, 65, 25, 43, 91, 52, 97, 51, 14],
            [70, 11, 33, 28, 77, 73, 17, 78, 39, 68, 17, 57],
            [91, 71, 52, 38, 17, 14, 91, 43, 58, 50, 27, 29, 48],
            [63, 66, 4, 68, 89, 53, 67, 30, 73, 16, 69, 87, 40, 31],
            [04, 62, 98, 27, 23, 9, 70, 98, 73, 93, 38, 53, 60, 4, 23],
        ]

        def expand_node(r, c):
            return [[r+1, c+0], [r+1, c+1]]

        all_paths = []
        my_path = []
        for i in xrange(0, len(triangle)):
            for j in xrange(0, len(triangle[i])):
                print 'row ', i, ' and col ', j, ' value is ', triangle[i][j]
                # ??my_path = somehow chain these together???
                if my_path not in all_paths:
                    all_paths.append(my_path)

    Answers that avoid external libraries (like itertools) preferred.
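
    A minimal recursive sketch of the brute-force enumeration being asked for, with no external libraries (the function name and the small demo triangle are illustrative):

        def all_paths(triangle, row=0, col=0):
            # Every path from (row, col) to the bottom is this value followed by
            # every path from one of its two children, (row+1, col) and (row+1, col+1).
            value = triangle[row][col]
            if row == len(triangle) - 1:
                return [[value]]              # a leaf: one path containing just this value
            paths = []
            for child_col in (col, col + 1):
                for rest in all_paths(triangle, row + 1, child_col):
                    paths.append([value] + rest)
            return paths

        demo = [[75], [95, 64], [17, 47, 82]]
        print(all_paths(demo))
        # [[75, 95, 17], [75, 95, 47], [75, 64, 47], [75, 64, 82]]
        # The full 15-row triangle yields 2**14 = 16384 paths, so a list of
        # lists is still perfectly feasible for brute force.

    Once the list of lists exists, max(sum(p) for p in all_paths(triangle)) reduces it to the best total.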


  • How to store array of NSManagedObjects in an NSManagedObject

    - by David Tay
    I am loading my app with a property list of data from a web site. This property list file contains an NSArray of NSDictionaries, which itself contains an NSArray of NSDictionaries. Basically, I'm trying to load a tableView of restaurant menu categories, each of which contains menu items. My property list file is fine. I am able to load the file, loop through the node structure creating NSEntityDescriptions, and save to Core Data. Everything works fine and as expected, except that in my menu category managed object I have an NSArray of menu items for that category. Later on, when I fetch the categories, the pointers to the menu items in a category are lost and I get all the menu items.

    Am I supposed to be using predicates, or does Core Data keep track of my object graph for me? Can anyone look at how I am loading Core Data and point out the flaw in my logic? I'm pretty good with either SQL or OOP by themselves, but am a little bewildered by ORM. I thought that I should just be able to use aggregation in my managed objects and that the framework would keep track of the pointers for me, but apparently not.

        NSError *error;
        NSURL *url = [NSURL URLWithString:@"http://foo.com"];
        NSArray *categories = [[NSArray alloc] initWithContentsOfURL:url];
        NSMutableArray *menuCategories = [[NSMutableArray alloc] init];
        for (int i=0; i<[categories count]; i++){
            MenuCategory *menuCategory = [NSEntityDescription
                insertNewObjectForEntityForName:@"MenuCategory"
                inManagedObjectContext:[self managedObjectContext]];
            NSDictionary *category = [categories objectAtIndex:i];
            menuCategory.name = [category objectForKey:@"name"];
            NSArray *items = [category objectForKey:@"items"];
            NSMutableArray *menuItems = [[NSMutableArray alloc] init];
            for (int j=0; j<[items count]; j++){
                MenuItem *menuItem = [NSEntityDescription
                    insertNewObjectForEntityForName:@"MenuItem"
                    inManagedObjectContext:[self managedObjectContext]];
                NSDictionary *item = [items objectAtIndex:j];
                menuItem.name = [item objectForKey:@"name"];
                menuItem.price = [item objectForKey:@"price"];
                menuItem.image = [item objectForKey:@"image"];
                menuItem.details = [item objectForKey:@"details"];
                [menuItems addObject:menuItem];
            }
            [menuCategory setValue:menuItems forKey:@"menuItems"];
            [menuCategories addObject:menuCategory];
            [menuItems release];
        }
        if (![[self managedObjectContext] save:&error]) {
            NSLog(@"An error occurred: %@", [error localizedDescription]);
        }


  • Object relationships

    - by Hammerstein
    This stems from a recent couple of posts I've made on events and memory management in general. I'm making a new question as I don't think the software I'm using has anything to do with the overall problem, and I'm trying to understand a little more about how to properly manage things. This is ASP.NET.

    I've been trying to understand the need for Dispose/Finalize over the past few days and believe that I've got to a stage where I'm pretty happy with when I should/shouldn't implement Dispose/Finalize. 'If I have members that implement IDisposable, put explicit calls to their dispose in my dispose method' seems to be my understanding. So, now I'm thinking maybe my understanding of object lifetimes and what holds on to what is just wrong!

    Rather than come up with some sample code that I think will illustrate my point, I'm going to describe the actual code as best I can and see if someone can talk me through it. So, I have a repository class; in it I have a DataContext that I create when the repository is created. I implement IDisposable, and when my calling object is done, I call Dispose on my repository and explicitly call DataContext.Dispose(). Now, one of the methods of this class creates and returns a list of objects that's handed back to my front end. Front End - Controller - Repository - Controller - Front End. (Using Redgate Memory Profiler, I take a snapshot of my software when the page is first loaded.)

    My front end creates a controller object on page load and then makes a request to the repository, sending back a list of items. When the page is finished loading, I call Dispose on the controller, which in turn calls dispose on the context. In my mind, that should mean that my connection is closed and that I have no instances of my controller class. If I then refresh the page, it jumps to two 'Live' instances of the controller class. If I look at the object retention graph, the objects I created in my call to the list are being held onto, ultimately by what looks like Linq.

    The controller/repository aside: if I create a list of objects somewhere, or I create an object and return it somewhere, am I safe to just assume that .NET will eventually come and clean things up for me, or is there a best practice? The 'Live' instances suggest to me that these are still in-memory, active objects; the fact that RMP apparently forces GC doesn't mean anything?


  • prolog sets problem, stack overflow

    - by garm0nboz1a
    Hi. I'm going to show some code and ask: what could be optimized, and where am I stuck?

        sublist([], []).
        sublist([H | Tail1], [H | Tail2]) :- sublist(Tail1, Tail2).
        sublist(H, [_ | Tail]) :- sublist(H, Tail).

        less(X, X, _).
        less(X, Z, RelationList) :- member([X,Z], RelationList).
        less(X, Z, RelationList) :-
            member([X,Y], RelationList),
            less(Y, Z, RelationList),
            \+less(Z, X, RelationList).

        lessList(X, LessList, RelationList) :-
            findall(Y, less(X, Y, RelationList), List),
            list_to_set(List, L),
            sort(L, LessList), !.

        list_mltpl(List1, List2, List) :-
            findall(X, ( member(X, List1), member(X, List2) ), List).

        chain([_], _).
        chain([H,T | Tail], RelationList) :-
            less(H, T, RelationList),
            chain([T|Tail], RelationList), !.

        have_inf(X1, X2, RelationList) :-
            lessList(X1, X1_cone, RelationList),
            lessList(X2, X2_cone, RelationList),
            list_mltpl(X1_cone, X2_cone, Cone),
            chain(Cone, RelationList), !.

        relations(List, E) :-
            findall([X1,X2], (member(X1, E), member(X2, E), X1 =\= X2), Relations),
            sublist(List, Relations).

        semilattice(List, E) :-
            forall( (member(X1, E), member(X2, E), X1 < X2),
                    have_inf(X1, X2, List) ).

        main(E) :- relations(X, E), semilattice(X, E).

    I'm trying to model all possible graph sets of N elements. The predicate relations(List, E) connects the list of possible graphs (List) and the input set E. Then I'm describing the semilattice predicate to check relations' List for some properties. So, what I have:

    1) semilattice/2 works fast and clean:

        ?- semilattice([[1,3],[2,4],[3,5],[4,5]],[1,2,3,4,5]).
        true.

        ?- semilattice([[1,3],[1,4],[2,3],[2,4],[3,5],[4,5]],[1,2,3,4,5]).
        false.

    2) relations/2 does not work well:

        ?- findall(X, relations(X,[1,2,3,4]), List), length(List, Len), writeln(Len), fail.
        4096
        false.

        ?- findall(X, relations(X,[1,2,3,4,5]), List), length(List, Len), writeln(Len), fail.
        ERROR: Out of global stack
        ^  Exception: (11) setup_call_catcher_cleanup('$bags':'$new_findall_bag'(17852886), '$bags':fa_loop(_G263, user:relations(_G263, [1, 2, 3, 4|...]), 17852886, _G268, []), _G835, '$bags':'$destroy_findall_bag'(17852886)) ? abort
        % Execution Aborted

    3) A mix of them, to find all possible semilattices, does not work at all:

        ?- main([1,2]).
        ERROR: Out of local stack
        ^  Exception: (15) setup_call_catcher_cleanup('$bags':'$new_findall_bag'(17852886), '$bags':fa_loop(_G41, user:less(1, _G41, [[1, 2], [2, 1]]), 17852886, _G52, []), _G4767764, '$bags':'$destroy_findall_bag'(17852886)) ?
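
    The 4096 above is no accident: relations/2 backtracks over every subset of the ordered pairs of E, so the number of candidate relation lists is 2^(n·(n-1)). A quick back-of-the-envelope check (plain arithmetic, sketched in Python since the blow-up is language-independent):

        # Candidate relation lists relations/2 can enumerate: every subset of
        # the ordered pairs (X1, X2) with X1 =\= X2.
        for n in (4, 5):
            pairs = n * (n - 1)
            print(n, pairs, 2 ** pairs)
        # 4 12 4096        <- matches the findall count in case 2) above
        # 5 20 1048576     <- over a million lists, hence the out-of-stack errors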


  • Move an object in the direction of a bezier curve?

    - by Sent1nel
    I have an object that I would like to make follow a Bezier curve, and I am a little lost right now as to how to make it do that based on time rather than the points that make up the curve.

    The current system: each object in my scene graph is made from position, rotation and scale vectors. These vectors are used to form their corresponding matrices: scale, rotation and translation, which are then multiplied in that order to form the local transform matrix. A world transform (usually the identity matrix) is then multiplied against the local transform matrix.

        class CObject
        {
        public:
            // Local transform functions
            Matrix4f GetLocalTransform() const;
            void SetPosition(const Vector3f& pos);
            void SetRotation(const Vector3f& rot);
            void SetScale(const Vector3f& scale);

            // Local transform
            Matrix4f m_local;
            Vector3f m_localPosition;
            Vector3f m_localRotation; // rotation in degrees (xrot, yrot, zrot)
            Vector3f m_localScale;
        };

        Matrix4f CObject::GetLocalTransform()
        {
            Matrix4f out(Matrix4f::IDENTITY);
            Matrix4f scale, rotation, translation;
            scale.SetScale(m_localScale);
            rotation.SetRotationDegrees(m_localRotation);
            translation.SetTranslation(m_localPosition);
            out = scale * rotation * translation;
            return out;
        }

    The big questions I have are:

    1) How do I orientate my object to face the tangent of the Bezier curve?
    2) How do I move that object along the curve without just setting the object's position to that of a point on the Bezier curve?

    Here's an overview of the function thus far:

        void CNodeControllerPieceWise::AnimateNode(CObject* pSpatial, double deltaTime)
        {
            // Get object's latest position.
            Vector3f posDelta = pSpatial->GetWorldTransform().GetTranslation();

            // Get position on curve
            Vector3f pos = curve.GetPosition(m_t);

            // Get tangent of curve
            Vector3f tangent = curve.GetFirstDerivative(m_t);
        }

    Edit: sorry, it's not very clear. I've been working on this for ages and it's making my brain turn to mush. I want the object to be attached to the curve and face the direction of the curve. As for movement, I want the object to follow the curve based on time, so that it creates smooth movement throughout the curve.
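
    The question's own GetPosition / GetFirstDerivative split is the right shape: for a cubic Bezier both are standard closed-form formulas, and the facing direction is just the normalized tangent. The question is C++, but the formulas are language-agnostic, so here is a hedged sketch in Python (all names illustrative):

        # Cubic Bezier:  B(t)  = (1-t)^3 P0 + 3(1-t)^2 t P1 + 3(1-t) t^2 P2 + t^3 P3
        # Tangent:       B'(t) = 3(1-t)^2 (P1-P0) + 6(1-t)t (P2-P1) + 3t^2 (P3-P2)

        def bezier_position(p0, p1, p2, p3, t):
            u = 1.0 - t
            return tuple(u*u*u*a + 3*u*u*t*b + 3*u*t*t*c + t*t*t*d
                         for a, b, c, d in zip(p0, p1, p2, p3))

        def bezier_tangent(p0, p1, p2, p3, t):
            u = 1.0 - t
            return tuple(3*u*u*(b - a) + 6*u*t*(c - b) + 3*t*t*(d - c)
                         for a, b, c, d in zip(p0, p1, p2, p3))

        def animate(ctrl, t, delta_time, duration):
            # Advance the curve parameter by elapsed time, place the object on
            # the curve, and aim it along the tangent (normalize it, then build
            # a rotation from that direction vector).
            t = min(t + delta_time / duration, 1.0)
            pos = bezier_position(*ctrl, t)
            direction = bezier_tangent(*ctrl, t)
            return t, pos, direction

    One caveat: stepping t uniformly gives uniform parameter speed, not uniform world speed; if the object must move at constant velocity along the curve, the parameter needs to be reparameterized by arc length.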


  • WP7 Return the last 7 days of data from an xml web service

    - by cvandal
    Hello, I'm trying to return the last 7 days of data from an XML web service, but with no luck. Could someone please explain how I would accomplish this? The XML is as follows:

        <node>
          <api>
            <usagelist>
              <usage day="2011-01-01">
                <traffic name="total" unit="bytes">23579797</traffic>
              </usage>
              <usage day="2011-01-02">
                <traffic name="total" unit="bytes">23579797</traffic>
              </usage>
              <usage day="2011-01-03">
                <traffic name="total" unit="bytes">23579797</traffic>
              </usage>
              <usage day="2011-01-04">
                <traffic name="total" unit="bytes">23579797</traffic>
              </usage>
            </usagelist>
          </api>
        </node>

    EDIT: The data I want to retrieve will be used to populate a line graph. Specifically, I require the day attribute value and the traffic element value for the past 7 days. At the moment I have the code below in place; however, it's only showing the first day 7 times and the traffic for the first day 7 times.

        XDocument xDocument = XDocument.Parse(e.Result);
        var values = from query in xDocument.Descendants("usagelist")
                     select new History
                     {
                         day = query.Element("usage").Attribute("day").Value,
                         traffic = query.Element("usage").Element("traffic").Value
                     };

        foreach (History history in values)
        {
            ObservableCollection<LineGraphItem> Data = new ObservableCollection<LineGraphItem>()
            {
                new LineGraphItem() { yyyymmdd = history.day, value = double.Parse(history.traffic) },
                new LineGraphItem() { yyyymmdd = history.day, value = double.Parse(history.traffic) },
                new LineGraphItem() { yyyymmdd = history.day, value = double.Parse(history.traffic) },
                new LineGraphItem() { yyyymmdd = history.day, value = double.Parse(history.traffic) },
                new LineGraphItem() { yyyymmdd = history.day, value = double.Parse(history.traffic) },
                new LineGraphItem() { yyyymmdd = history.day, value = double.Parse(history.traffic) },
                new LineGraphItem() { yyyymmdd = history.day, value = double.Parse(history.traffic) },
            };
            lineGraph1.DataSource = Data;
        }
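
    The symptom (the first day repeated seven times) comes from selecting the usagelist element and then reading only its first usage child. The shape of the fix is to enumerate every usage element and keep the last seven. The question is C# / LINQ to XML, but the idea is easy to sketch in Python with the standard library (data trimmed for brevity):

        import xml.etree.ElementTree as ET

        xml_text = """<node><api><usagelist>
          <usage day="2011-01-01"><traffic name="total" unit="bytes">100</traffic></usage>
          <usage day="2011-01-02"><traffic name="total" unit="bytes">200</traffic></usage>
        </usagelist></api></node>"""

        root = ET.fromstring(xml_text)
        # Iterate over every <usage> element, not just the first child of
        # <usagelist>, then slice off the last 7 entries for the graph.
        usages = [(u.get("day"), int(u.find("traffic").text))
                  for u in root.iter("usage")]
        last_seven = usages[-7:]
        print(last_seven)   # [('2011-01-01', 100), ('2011-01-02', 200)]

    In the C# above, the equivalent change would be to query xDocument.Descendants("usage") and build one LineGraphItem per element, rather than constructing seven items from the same History object inside the loop.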


  • Enable Php Fastcgi and Get 500 Internal Server Error (Lighttpd)

    - by skycrew
    Can anyone help me? I just got this problem today. Before this my site was running smoothly with FastCGI enabled, but now it shows a 500 internal server error with the logs below. I need to disable PHP FastCGI in LxAdmin so that my visitors can access my site, but when I disable PHP FastCGI, my web performance is very slow, with high load on the server. I also include the performance screenshot. What should I do? This is the error log I got:

        2010-06-16 21:59:52: (mod_cgi.c.584) cgi died, pid: 24055
        2010-06-16 21:59:52: (mod_cgi.c.584) cgi died, pid: 21622
        2010-06-16 21:59:52: (mod_fastcgi.c.2462) unexpected end-of-file (perhaps the fastcgi process died): pid: 3342 socket: unix:/var/tmp/lighttpd/php.socket.lyrics.skycrewz.net.3333-1
        2010-06-16 21:59:52: (mod_fastcgi.c.3207) child exited, pid: 3342 status: 0
        2010-06-16 21:59:52: (mod_fastcgi.c.3254) response not received, request sent: 836 on socket: unix:/var/tmp/lighttpd/php.socket.lyrics.skycrewz.net.3333-1 for /index.php , closing connection
        2010-06-16 21:59:52: (mod_fastcgi.c.2462) unexpected end-of-file (perhaps the fastcgi process died): pid: 24447 socket: unix:/var/tmp/lighttpd/php.socket.lyrics.skycrewz.net.3333-1
        2010-06-16 21:59:52: (mod_fastcgi.c.3254) response not received, request sent: 860 on socket: unix:/var/tmp/lighttpd/php.socket.lyrics.skycrewz.net.3333-1 for /index.php , closing connection
        2010-06-16 21:59:52: (mod_fastcgi.c.2462) unexpected end-of-file (perhaps the fastcgi process died): pid: 24447 socket: unix:/var/tmp/lighttpd/php.socket.lyrics.skycrewz.net.3333-1
        2010-06-16 21:59:52: (mod_fastcgi.c.3254) response not received, request sent: 836 on socket: unix:/var/tmp/lighttpd/php.socket.lyrics.skycrewz.net.3333-1 for /index.php , closing connection
        2010-06-16 21:59:52: (mod_fastcgi.c.2462) unexpected end-of-file (perhaps the fastcgi process died): pid: 24447 socket: unix:/var/tmp/lighttpd/php.socket.lyrics.skycrewz.net.3333-1
        2010-06-16 21:59:52: (mod_fastcgi.c.3254) response not received, request sent: 878 on socket: unix:/var/tmp/lighttpd/php.socket.lyrics.skycrewz.net.3333-1 for /index.php , closing connection
        2010-06-16 21:59:52: (mod_fastcgi.c.2462) unexpected end-of-file (perhaps the fastcgi process died): pid: 24447 socket: unix:/var/tmp/lighttpd/php.socket.lyrics.skycrewz.net.3333-1
        2010-06-16 21:59:52: (mod_fastcgi.c.3254) response not received, request sent: 878 on socket: unix:/var/tmp/lighttpd/php.socket.lyrics.skycrewz.net.3333-1 for /index.php , closing connection
        2010-06-16 21:59:52: (mod_fastcgi.c.2462) unexpected end-of-file (perhaps the fastcgi process died): pid: 24447 socket: unix:/var/tmp/lighttpd/php.socket.lyrics.skycrewz.net.3333-1
        2010-06-16 21:59:52: (mod_fastcgi.c.3254) response not received, request sent: 878 on socket: unix:/var/tmp/lighttpd/php.socket.lyrics.skycrewz.net.3333-1 for /index.php , closing connection
        2010-06-16 21:59:52: (mod_cgi.c.584) cgi died, pid: 22325
        2010-06-16 21:59:52: (mod_fastcgi.c.2462) unexpected end-of-file (perhaps the fastcgi process died): pid: 24447 socket: unix:/var/tmp/lighttpd/php.socket.lyrics.skycrewz.net.3333-1
        2010-06-16 21:59:52: (mod_fastcgi.c.3254) response not received, request sent: 852 on socket: unix:/var/tmp/lighttpd/php.socket.lyrics.skycrewz.net.3333-1 for /index.php , closing connection
        2010-06-16 21:59:52: (mod_cgi.c.584) cgi died, pid: 24032
        2010-06-16 21:59:52: (mod_cgi.c.584) cgi died, pid: 20402
        2010-06-16 21:59:52: (mod_fastcgi.c.2462) unexpected end-of-file (perhaps the fastcgi process died): pid: 3336 socket: unix:/var/tmp/lighttpd/php.socket.lyrics.skycrewz.net.3333-0
        2010-06-16 21:59:52: (mod_fastcgi.c.3207) child exited, pid: 3336 status: 0
        2010-06-16 21:59:52: (mod_fastcgi.c.3254) response not received, request sent: 855 on socket: unix:/var/tmp/lighttpd/php.socket.lyrics.skycrewz.net.3333-0 for /index.php , closing connection
        2010-06-16 21:59:52: (mod_fastcgi.c.2462) unexpected end-of-file (perhaps the fastcgi process died): pid: 24448 socket: unix:/var/tmp/lighttpd/php.socket.lyrics.skycrewz.net.3333-0
        2010-06-16 21:59:52: (mod_fastcgi.c.3254) response not received, request sent: 860 on socket: unix:/var/tmp/lighttpd/php.socket.lyrics.skycrewz.net.3333-0 for /index.php , closing connection
        2010-06-16 21:59:52: (mod_cgi.c.1231) cgi died ?
        2010-06-16 21:59:53: (mod_fastcgi.c.2462) unexpected end-of-file (perhaps the fastcgi process died): pid: 24448 socket: unix:/var/tmp/lighttpd/php.socket.lyrics.skycrewz.net.3333-0
        2010-06-16 21:59:53: (mod_fastcgi.c.3254) response not received, request sent: 860 on socket: unix:/var/tmp/lighttpd/php.socket.lyrics.skycrewz.net.3333-0 for /index.php , closing connection
        2010-06-16 21:59:53: (mod_fastcgi.c.2462) unexpected end-of-file (perhaps the fastcgi process died): pid: 24448 socket: unix:/var/tmp/lighttpd/php.socket.lyrics.skycrewz.net.3333-0
        2010-06-16 21:59:53: (mod_fastcgi.c.3254) response not received, request sent: 878 on socket: unix:/var/tmp/lighttpd/php.socket.lyrics.skycrewz.net.3333-0 for /index.php , closing connection
        2010-06-16 21:59:53: (mod_fastcgi.c.2462) unexpected end-of-file (perhaps the fastcgi process died): pid: 24448 socket: unix:/var/tmp/lighttpd/php.socket.lyrics.skycrewz.net.3333-0
        2010-06-16 21:59:53: (mod_fastcgi.c.3254) response not received, request sent: 860 on socket: unix:/var/tmp/lighttpd/php.socket.lyrics.skycrewz.net.3333-0 for /index.php , closing connection
        2010-06-16 21:59:53: (mod_fastcgi.c.1731) connect failed: Connection refused on unix:/var/tmp/lighttpd/php.socket.lyrics-hub.com.3333-1
        2010-06-16 21:59:53: (mod_fastcgi.c.2885) backend died; we'll disable it for 5 seconds and send the request to another backend instead: reconnects: 0 load: 1
        2010-06-16 21:59:56: (server.c.1470) server stopped by UID = 0 PID = 24439
        2010-06-16 22:00:23: (log.c.75) server started

    Performance graph as below:
    http://img404.imageshack.us/img404/3498/memorylxadmin.jpg


  • Why do we get a sudden spike in response times?

    - by Christian Hagelid
    We have an API that is implemented using ServiceStack and hosted in IIS. While performing load testing of the API, we discovered that the response times are good, but that they deteriorate rapidly as soon as we hit about 3,500 concurrent users per server. We have two servers, and when hitting them with 7,000 users the average response times sit below 500 ms for all endpoints. The boxes are behind a load balancer, so we get 3,500 concurrents per server. However, as soon as we increase the number of total concurrent users, we see a significant increase in response times. Increasing the concurrent users to 5,000 per server gives us an average response time per endpoint of around 7 seconds.

    The memory and CPU on the servers are quite low, both while the response times are good and after they deteriorate. At peak, with 10,000 concurrent users, the CPU averages just below 50% and the RAM sits around 3-4 GB out of 16. This leaves us thinking that we are hitting some kind of limit somewhere. The screenshot (not reproduced here) shows some key counters in perfmon during a load test with a total of 10,000 concurrent users; the highlighted counter is requests/second. To the right of the screenshot, the requests-per-second graph becomes really erratic. This is the main indicator for slow response times: as soon as we see this pattern, we notice slow response times in the load test.

    How do we go about troubleshooting this performance issue? We are trying to identify whether this is a coding issue or a configuration issue. Are there any settings in web.config or IIS that could explain this behaviour? The application pool is running .NET v4.0 and the IIS version is 7.5. The only change we have made from the default settings is to update the application pool Queue Length value from 1,000 to 5,000. We have also added the following config settings to the Aspnet.config file:

        <system.web>
            <applicationPool
                maxConcurrentRequestsPerCPU="5000"
                maxConcurrentThreadsPerCPU="0"
                requestQueueLimit="5000" />
        </system.web>

    More details: the purpose of the API is to combine data from various external sources and return it as JSON. It currently uses an in-memory cache implementation to cache individual external calls at the data layer. The first request to a resource fetches all the data required, and any subsequent requests for the same resource get results from the cache. We have a 'cache runner', implemented as a background process, that updates the information in the cache at certain set intervals. We have added locking around the code that fetches data from the external resources. We have also implemented the services that fetch data from the external sources in an asynchronous fashion, so that an endpoint should only be as slow as the slowest external call (unless we have data in the cache, of course). This is done using the System.Threading.Tasks.Task class. Could we be hitting a limitation in terms of the number of threads available to the process?
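
    One sanity check worth running on numbers like these is Little's law (concurrency = throughput × response time). It says nothing about the cause, but it shows how sharply throughput must have dropped between the two measurements. A small arithmetic sketch in Python, using only the figures from the question:

        # Little's law: L (concurrent users) = X (requests/sec) * R (response time)
        # => implied throughput X = L / R
        for users, resp_s in [(3500, 0.5), (5000, 7.0)]:
            print(users, "concurrent @", resp_s, "s ->", round(users / resp_s), "req/s")
        # 3500 concurrent @ 0.5 s -> 7000 req/s
        # 5000 concurrent @ 7.0 s ->  714 req/s

    Throughput collapsing by roughly an order of magnitude while CPU and RAM stay low would be consistent with requests queueing behind some serialized resource (a lock or a thread/connection limit) rather than running out of hardware - which also fits the erratic requests/second counter described above. That reading is an inference, not something the data proves on its own.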


  • Munin not creating HTML files in Ubuntu Server 14.04

    - by lepe
    I have used Munin on several servers, and this is the first time it's taking me so much time to set up. When I telnet to munin-node directly I can list the services, there are no errors in the logs, and munin is being updated every 5 minutes. However, no HTML files are created. I'm using the default location (/var/cache/munin/www) and I can confirm the permissions of that directory are set to munin.munin. (The IP and domain have been changed.)

    munin.conf:

        dbdir /var/lib/munin
        htmldir /var/cache/munin/www
        logdir /var/log/munin
        rundir /var/run/munin

        [example.com;]
        address 100.100.50.200

    munin-node.conf:

        log_level 4
        log_file /var/log/munin/munin-node.log
        pid_file /var/run/munin/munin-node.pid
        background 1
        setsid 1
        user root
        group root
        host_name example.com
        allow ^127\.0\.0\.1$
        allow ^100\.100\.50\.200$
        allow ^::1$

    /etc/hosts:

        100.100.50.200 example.com
        127.0.0.1 localhost

    Telnet session:

        $ telnet example.com 4949
        Trying 100.100.50.200...
        Connected to example.com.
        Escape character is '^]'.
        # munin node at example.com
        list
        apache_accesses apache_processes apache_volume cpu cpuspeed df df_inode entropy fail2ban forks fw_packets if_err_eth0 if_err_eth1 if_eth0 if_eth1 interrupts ipmi_fans ipmi_power ipmi_temp irqstats load memory munin_stats mysql_bin_relay_log mysql_commands mysql_connections mysql_files_tables mysql_innodb_bpool mysql_innodb_bpool_act mysql_innodb_insert_buf mysql_innodb_io mysql_innodb_io_pend mysql_innodb_log mysql_innodb_rows mysql_innodb_semaphores mysql_innodb_tnx mysql_myisam_indexes mysql_network_traffic mysql_qcache mysql_qcache_mem mysql_replication mysql_select_types mysql_slow mysql_sorts mysql_table_locks mysql_tmp_tables ntp_2001:e40:100:208::123 ntp_91.189.94.4 ntp_kernel_err ntp_kernel_pll_freq ntp_kernel_pll_off ntp_offset ntp_states open_files open_inodes postfix_mailqueue postfix_mailvolume proc_pri processes swap threads uptime users vmstat
        fetch df
        _dev_sda3.value 2.1762874086869
        _sys_fs_cgroup.value 0
        _run.value 0.0503536980635825
        _run_lock.value 0
        _run_shm.value 0
        _run_user.value 0
        _dev_sda5.value 0.0176986285727571
        _dev_sda8.value 1.08464646179852
        _dev_sda7.value 0.0346633563514803
        _dev_sda9.value 6.81031810822797
        _dev_sda6.value 9.0932802215469
        .

    /var/log/munin/munin-node.log:

        Process Backgrounded
        2014/08/16-14:13:36 Munin::Node::Server (type Net::Server::Fork) starting! pid(19610)
        Binding to TCP port 4949 on host 100.100.50.200 with IPv4
        2014/08/16-14:23:11 CONNECT TCP Peer: "[100.100.50.200]:55949" Local: "[100.100.50.200]:4949"
        2014/08/16-14:36:16 CONNECT TCP Peer: "[100.100.50.200]:56209" Local: "[100.100.50.200]:4949"

    /var/log/munin/munin-update.log:

        ...
        2014/08/16 14:30:01 [INFO]: Starting munin-update
        2014/08/16 14:30:01 [INFO]: Munin-update finished (0.00 sec)
        2014/08/16 14:35:02 [INFO]: Starting munin-update
        2014/08/16 14:35:02 [INFO]: Munin-update finished (0.00 sec)
        2014/08/16 14:40:01 [INFO]: Starting munin-update
        2014/08/16 14:40:01 [INFO]: Munin-update finished (0.00 sec)

        $ ls -la /var/cache/munin/www/
        drwxr-xr-x 3 munin munin   19 Aug 16 13:55 .
        drwxr-xr-x 3 root  root    16 Aug 16 13:54 ..
        drwxr-xr-x 2 munin munin 4096 Aug 16 13:55 static

    Any ideas on why it is not working?

    EDIT: This is how /var/log/munin/ looks after some days:

        -rw-r----- 1 www-data    0 Aug 16 13:54 munin-cgi-graph.log
        -rw-r----- 1 www-data    0 Aug 16 13:54 munin-cgi-html.log
        -rw-rw-r-- 1 munin       0 Aug 16 13:55 munin-html.log
        -rw-r----- 1 munin       0 Aug 19 06:18 munin-limits.log
        -rw-r----- 1 munin     15K Aug 18 14:10 munin-limits.log.1
        -rw-r----- 1 munin    1.8K Aug 18 06:15 munin-limits.log.2.gz
        -rw-rw-r-- 1 munin    1.3K Aug 17 06:15 munin-limits.log.3.gz
        -rw-r--r-- 1 root     6.5K Aug 16 13:55 munin-node-configure.log
        -rw-r--r-- 1 root        0 Aug 17 06:18 munin-node.log
        -rw-r--r-- 1 root      420 Aug 16 14:52 munin-node.log.1.gz
        -rw-r----- 1 munin       0 Aug 19 06:18 munin-update.log
        -rw-r----- 1 munin     11K Aug 18 14:10 munin-update.log.1
        -rw-r----- 1 munin    1.6K Aug 18 06:15 munin-update.log.2.gz
        -rw-rw-r-- 1 munin    1.5K Aug 17 06:15 munin-update.log.3.gz


  • MySQL Memory usage

    - by Rob Stevenson-Leggett
    Our MySQL server seems to be using a lot of memory. I've tried looking for slow queries and queries with no index, and have halved the peak CPU usage and Apache memory usage, but the MySQL memory stays constant at 2.2 GB (~51% of available memory on the server). Here's the graph from Plesk; running top in the SSH window shows the same figures. Does anyone have any ideas on why the memory usage is constant like this, rather than showing peaks and troughs with usage of the app? Here's the output of the MySQL Tuning Primer script:

        -- MYSQL PERFORMANCE TUNING PRIMER --
        - By: Matthew Montgomery -

        MySQL Version 5.0.77-log x86_64
        Uptime = 1 days 14 hrs 4 min 21 sec
        Avg. qps = 22
        Total Questions = 3059456
        Threads Connected = 13
        Warning: Server has not been running for at least 48hrs.
        It may not be safe to use these recommendations

        To find out more information on how each of these runtime variables effects
        performance visit:
        http://dev.mysql.com/doc/refman/5.0/en/server-system-variables.html
        Visit http://www.mysql.com/products/enterprise/advisors.html for info about
        MySQL's Enterprise Monitoring and Advisory Service

        SLOW QUERIES
        The slow query log is enabled.
        Current long_query_time = 1 sec.
        You have 6 out of 3059477 that take longer than 1 sec. to complete
        Your long_query_time seems to be fine

        BINARY UPDATE LOG
        The binary update log is NOT enabled.
        You will not be able to do point in time recovery
        See http://dev.mysql.com/doc/refman/5.0/en/point-in-time-recovery.html

        WORKER THREADS
        Current thread_cache_size = 0
        Current threads_cached = 0
        Current threads_per_sec = 2
        Historic threads_per_sec = 0
        Threads created per/sec are overrunning threads cached
        You should raise thread_cache_size

        MAX CONNECTIONS
        Current max_connections = 100
        Current threads_connected = 14
        Historic max_used_connections = 20
        The number of used connections is 20% of the configured maximum.
        Your max_connections variable seems to be fine.

        INNODB STATUS
        Current InnoDB index space = 6 M
        Current InnoDB data space = 18 M
        Current InnoDB buffer pool free = 0 %
        Current innodb_buffer_pool_size = 8 M
        Depending on how much space your innodb indexes take up it may be safe
        to increase this value to up to 2 / 3 of total system memory

        MEMORY USAGE
        Max Memory Ever Allocated : 2.07 G
        Configured Max Per-thread Buffers : 274 M
        Configured Max Global Buffers : 2.01 G
        Configured Max Memory Limit : 2.28 G
        Physical Memory : 3.84 G
        Max memory limit seem to be within acceptable norms

        KEY BUFFER
        Current MyISAM index space = 4 M
        Current key_buffer_size = 7 M
        Key cache miss rate is 1 : 40
        Key buffer free ratio = 81 %
        Your key_buffer_size seems to be fine

        QUERY CACHE
        Query cache is supported but not enabled
        Perhaps you should set the query_cache_size

        SORT OPERATIONS
        Current sort_buffer_size = 2 M
        Current read_rnd_buffer_size = 256 K
        Sort buffer seems to be fine

        JOINS
        Current join_buffer_size = 132.00 K
        You have had 16 queries where a join could not use an index properly
        You should enable "log-queries-not-using-indexes"
        Then look for non indexed joins in the slow query log.
        If you are unable to optimize your queries you may want to increase your
        join_buffer_size to accommodate larger joins in one pass.
        Note! This script will still suggest raising the join_buffer_size when
        ANY joins not using indexes are found.

        OPEN FILES LIMIT
        Current open_files_limit = 1024 files
        The open_files_limit should typically be set to at least 2x-3x that of
        table_cache if you have heavy MyISAM usage.
        Your open_files_limit value seems to be fine

        TABLE CACHE
        Current table_cache value = 64 tables
        You have a total of 426 tables
        You have 64 open tables.
        Current table_cache hit rate is 1%, while 100% of your table cache is in use
        You should probably increase your table_cache

        TEMP TABLES
        Current max_heap_table_size = 16 M
        Current tmp_table_size = 32 M
        Of 15134 temp tables, 9% were created on disk
        Effective in-memory tmp_table_size is limited to max_heap_table_size.
        Created disk tmp tables ratio seems fine

        TABLE SCANS
        Current read_buffer_size = 128 K
        Current table scan ratio = 2915 : 1
        read_buffer_size seems to be fine

        TABLE LOCKING
        Current Lock Wait ratio = 1 : 142213
        Your table locking seems to be fine

    The app is a Facebook game with about 50-100 concurrent users. Thanks, Rob
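
    For context on where the primer's headline figures come from: the "Configured Max Memory Limit" is essentially global buffers plus max_connections times the per-thread buffers, and both inputs appear in the output above. A rough reconstruction in Python (the per-thread decomposition is approximate; the primer also counts a few small buffers such as the thread stack that are not shown here):

        # Per-thread buffers visible in the output, in MB:
        # sort (2M) + read_rnd (256K) + read (128K) + join (132K) ~= 2.5 MB,
        # plus the thread stack -> roughly 2.7 MB per connection.
        per_thread_mb = 2 + 0.25 + 0.125 + 0.129
        print(per_thread_mb * 100)   # ~250 MB across max_connections = 100
                                     # (the primer reports 274 M with the extras)

        # And the headline numbers are just a sum, in GB:
        print(2.01 + 0.274)          # ~2.28 -> the reported "Configured Max Memory Limit"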


  • Does this prove a network bandwidth bottleneck?

    - by Yuji Tomita
    I've incorrectly assumed that my internal AB testing means my server can handle 1k concurrency @ 3k hits per second. My theory at the moment is that the network is the bottleneck: the server can't send enough data fast enough.

    External testing from blitz.io at 1k concurrency shows my hits/s capping off at 180, with pages taking longer and longer to respond as the server is only able to return 180 per second. I've served a blank file from nginx and benched it: it scales 1:1 with concurrency. Now, to rule out IO/memcached bottlenecks (nginx normally pulls from memcached), I serve up a static version of the cached page from the filesystem. The results are very similar to my original test: I'm capped at around 180 RPS. Splitting the HTML page in half gives me double the RPS, so it's definitely limited by the size of the page.

    If I internally ApacheBench from the local server, I get consistent results of around 4k RPS on both the full page and the half page, at high transfer rates:

        Transfer rate: 62586.14 [Kbytes/sec] received

    If I AB from an external server, I get around 180 RPS - the same as the blitz.io results. How do I know it's not intentional throttling? If I benchmark from multiple external servers, all results become poor, which leads me to believe the problem is in MY server's outbound traffic, not a download-speed issue with my benchmarking servers / blitz.io. So I'm back to my conclusion that my server can't send data fast enough.

    Am I right? Are there other ways to interpret this data? Is the solution/optimization to set up multiple servers + load balancing that can each serve 180 hits per second? I'm quite new to server optimization, so I'd appreciate any confirmation in interpreting this data.

    Outbound traffic

    Here's more information about the outbound bandwidth: the network graph shows a maximum output of 16 Mb/s, i.e. 16 megabits per second. Doesn't sound like much at all. Due to a suggestion about throttling, I looked into this and found that Linode has a 50 Mbps cap (which I'm not even close to hitting, apparently). I had it raised to 100 Mbps. Since Linode caps my traffic, and I'm not even hitting it, does this mean that my server should indeed be capable of outputting up to 100 Mbps, but is limited by some other internal bottleneck? I just don't understand how networks at this large a scale work; can they literally send data as fast as they can read from the HDD? Is the network pipe that big?

    In conclusion

    1: Based on the above, I'm thinking I can definitely raise my 180 RPS by adding an nginx load balancer on top of a multi-nginx-server setup, at exactly 180 RPS per server behind the LB.

    2: If Linode has a 50/100 Mbit limit that I'm not hitting at all, there must be something I can do to hit that limit with my single-server setup. If I can read/transmit data fast enough locally, and Linode even bothers to have a 50/100 Mbit cap, there must be an internal bottleneck that's not allowing me to hit those caps, and I'm not sure how to detect it. Correct?

    I realize the question is huge and vague now, but I'm not sure how to condense it. Any input is appreciated on any conclusion I've made.
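
    The "limited by page size" theory is easy to sanity-check with arithmetic: requests per second times bytes per response must fit inside the outbound pipe. A small Python sketch using only the figures above:

        # Observed: ~180 req/s from external testing, ~16 Mbit/s peak outbound.
        mbps_out = 16
        rps = 180
        bytes_per_req = mbps_out * 1000000 / 8 / rps
        print(round(bytes_per_req))     # ~11111 -> roughly an 11 KB response

        # If the response really is ~11 KB, the rated caps would predict:
        for cap_mbps in (50, 100):
            print(cap_mbps, "Mbit/s ->", round(cap_mbps * 1000000 / 8 / bytes_per_req), "req/s")
        # 50 Mbit/s  ->  563 req/s
        # 100 Mbit/s -> 1125 req/s

    The arithmetic is consistent with the observations in the question: halving the page doubles the RPS (bandwidth-bound behaviour), and the observed 16 Mbit/s ceiling sits well below the rated 50-100 Mbit/s cap, which supports the suspicion of an internal bottleneck rather than the provider limit.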


  • Mongodb Slave replication lag

    - by Leonid Bugaev
    We're using a standard Mongo setup: 2 replicas + 1 arbiter. Both replica servers use the same AWS m1.medium with RAID10 EBS. We're experiencing constantly growing replication lag on the secondary replica. I tried doing a full resync - you can see it on the graph - but it helped only for some hours. Our Mongo usage is really low now, and frankly I can't understand why this happens.

    iostat 1 for the secondary:

        avg-cpu:  %user   %nice %system %iowait  %steal   %idle
                  80.39    0.00    2.94    0.00   16.67    0.00

        Device:    tps    kB_read/s    kB_wrtn/s    kB_read    kB_wrtn
        xvdap1    0.00         0.00         0.00          0          0
        xvdb      0.00         0.00         0.00          0          0
        xvdfp4   12.75         0.00       189.22          0        193
        xvdfp3   12.75         0.00       189.22          0        193
        xvdfp2    7.84         0.00        40.20          0         41
        xvdfp1    7.84         0.00        40.20          0         41
        md127    19.61         0.00       219.61          0        224

    mongostat for the secondary (why 100% locked? I guess that's the problem):

        insert query update delete getmore command flushes mapped vsize  res   faults locked % idx miss % qr|qw ar|aw netIn netOut conn set        repl time
        *10    *0    *16    *0     0       2|4     0       30.9g  62.4g  1.65g 0      107      0          0|0   0|0   198b  1k     16   replset-01 SEC  06:55:37
        *4     *0    *8     *0     0       12|0    0       30.9g  62.4g  1.65g 0      91.7     0          0|0   0|0   837b  5k     16   replset-01 SEC  06:55:38
        *4     *0    *7     *0     0       3|0     0       30.9g  62.4g  1.64g 0      110      0          0|0   0|0   342b  1k     16   replset-01 SEC  06:55:39
        *4     *0    *8     *0     0       1|0     0       30.9g  62.4g  1.64g 0      82.9     0          0|0   0|0   62b   1k     16   replset-01 SEC  06:55:40
        *3     *0    *7     *0     0       5|0     0       30.9g  62.4g  1.6g  0      75.2     0          0|0   0|0   466b  2k     16   replset-01 SEC  06:55:41
        *4     *0    *7     *0     0       1|0     0       30.9g  62.4g  1.64g 0      138      0          0|0   0|1   62b   1k     16   replset-01 SEC  06:55:42
        *7     *0    *15    *0     0       3|0     0       30.9g  62.4g  1.64g 0      95.4     0          0|0   0|0   342b  1k     16   replset-01 SEC  06:55:43
        *7     *0    *14    *0     0       1|0     0       30.9g  62.4g  1.64g 0      98       0          0|0   0|0   62b   1k     16   replset-01 SEC  06:55:44
        *8     *0    *17    *0     0       3|0     0       30.9g  62.4g  1.64g 0      96.3     0          0|0   0|0   342b  1k     16   replset-01 SEC  06:55:45
        *7     *0    *14    *0     0       3|0     0       30.9g  62.4g  1.64g 0      96.1     0          0|0   0|0   186b  2k     16   replset-01 SEC  06:55:46

    mongostat for the primary:

        insert query update delete getmore command flushes mapped vsize  res   faults locked % idx miss % qr|qw ar|aw netIn netOut conn set        repl time
        12     30    20     0      0       3       0       30.9g  62.6g  641m  0      0.9      0          0|0   0|0   212k  619k   48   replset-01 M    06:56:41
        5      17    10     0      0       2       0       30.9g  62.6g  641m  0      0.5      0          0|0   0|0   159k  429k   48   replset-01 M    06:56:42
        9      22    16     0      0       3       0       30.9g  62.6g  642m  0      0.7      0          0|0   0|0   158k  276k   48   replset-01 M    06:56:43
        6      18    12     0      0       2       0       30.9g  62.6g  640m  0      0.7      0          0|0   0|0   93k   231k   48   replset-01 M    06:56:44
        6      12    8      0      0       3       0       30.9g  62.6g  640m  0      0.3      0          0|0   0|0   80k   125k   48   replset-01 M    06:56:45
        8      21    14     0      0       9       0       30.9g  62.6g  641m  0      0.6      0          0|0   0|0   118k  419k   48   replset-01 M    06:56:46
        10     34    20     0      0       6       0       30.9g  62.6g  640m  0      1.3      0          0|0   0|0   164k  527k   48   replset-01 M    06:56:47
        6      21    13     0      0       2       0       30.9g  62.6g  641m  0      0.7      0          0|0   0|0   111k  477k   48   replset-01 M    06:56:48
        8      21    15     0      0       2       0       30.9g  62.6g  641m  0      0.7      0          0|0   0|0   204k  336k   48   replset-01 M    06:56:49
        4      12    8      0      0       8       0       30.9g  62.6g  641m  0      0.5      0          0|0   0|0   156k  530k   48   replset-01 M    06:56:50

    Mongo version: 2.0.6


  • Oracle Enhances Oracle Social Cloud with Next-Generation User Experience

    - by Richard Lefebvre
    Today's enterprise must meet the technology standards of today's consumer. According to a recent IDG Enterprise report, enterprises that invest in consumerized, easy-to-use technologies experience a 56 percent increase in employee productivity and a 46 percent increase in customer satisfaction. In order to deliver that simple and intuitive experience across even the most advanced social management capabilities, Oracle today introduced Social Station, an innovative new workspace within Oracle Social Cloud's Social Relationship Management (SRM) platform. With Social Station, users benefit from a personalized and intuitive user experience that helps increase both the productivity and performance of social business practices.

    News Facts

    Oracle today introduced Social Station, an innovative new workspace within Oracle Social Cloud's Social Relationship Management (SRM) platform that helps organizations socially enable the way they do business. With an advanced yet intuitive user interface, Social Station delivers a compelling user experience that improves productivity and helps users more easily deliver on social objectives. To help users quickly and easily build out and configure their social workspaces, Social Station provides drag-and-drop capabilities that allow users to personalize their workspace with different social modules.

    With a new Custom Analytics module that mixes and matches more than 120 metrics with thousands of customizable reporting options, users can customize their view of social data and access constantly refreshed updates that support real-time understanding. One-click sharing capabilities and annotation functionality within the new Custom Analytics module also drive productivity by improving sharing and collaboration across teams, departments, and executives. Multiview layout capabilities further allow visibility into social insights by offering users the flexibility to monitor conversations by network, stream, metric, graph type, date range, and relative time period. Social Station also includes an Enhanced Calendar module that provides a clear visual representation of content, posts, networks, and views, helping users easily and efficiently understand information and toggle between various functions and views. To support different user personas and social business needs, Oracle plans to continue building out Social Station with additional modules, including content curation, influencer engagement, and command center creation.

    Read the article

  • Iphone SDK - adding UITableView to UIView

    - by Shashi
Hi, I am trying to learn how to use different views. For this sample test app, I have a login page; upon successful logon, the user is redirected to a table view, and upon selecting an item in the table view, the user is directed to a third page showing details of that item. The first page works just fine, but the problem occurs when I go to the second page: the table shown doesn't have a title, and I cannot add a title, a toolbar, or anything other than the content of the table cells themselves. And when I click on an item, needless to say, nothing happens. No errors either. I am fairly new to programming and have always worked in Java, never in C (although I have some basic knowledge of C), and Objective-C is new to me. Here is the code.

#import <UIKit/UIKit.h>

@interface NavigationTestAppDelegate : NSObject <UIApplicationDelegate> {
    UIWindow *window;
    UIViewController *viewController;
    IBOutlet UITextField *username;
    IBOutlet UITextField *password;
    IBOutlet UILabel *loginError;
    //UINavigationController *navigationController;
}

@property (nonatomic, retain) IBOutlet UIViewController *viewController;
@property (nonatomic, retain) IBOutlet UIWindow *window;
@property (nonatomic, retain) IBOutlet UITextField *username;
@property (nonatomic, retain) IBOutlet UITextField *password;
@property (nonatomic, retain) IBOutlet UILabel *loginError;

- (IBAction)login;
- (IBAction)hideKeyboard:(id)sender;

@end

#import "NavigationTestAppDelegate.h"
#import "RootViewController.h"

@implementation NavigationTestAppDelegate

@synthesize window;
@synthesize viewController;
@synthesize username;
@synthesize password;
@synthesize loginError;

#pragma mark -
#pragma mark Application lifecycle

- (IBAction)hideKeyboard:(id)sender {
    [sender resignFirstResponder];
}

- (BOOL)application:(UIApplication *)application didFinishLaunchingWithOptions:(NSDictionary *)launchOptions {
    // Override point for customization after app launch
    //RootViewController *rootViewController = [[RootViewController alloc] init];
    //[window addSubview:[navigationController view]];
    [window addSubview:[viewController view]];
    [window makeKeyAndVisible];
    return YES;
}

- (IBAction)login {
    RootViewController *rootViewController = [[RootViewController alloc] init];
    if ([username.text isEqualToString:@"test"] && [password.text isEqualToString:@"test"]) {
        [window addSubview:[rootViewController view]];
        //[window addSubview:[navigationController view]];
        [window makeKeyAndVisible];
    } else {
        loginError.text = @"LOGIN ERROR";
        [window addSubview:[viewController view]];
        [window makeKeyAndVisible];
    }
}

- (void)applicationWillTerminate:(UIApplication *)application {
    // Save data if appropriate
}

#pragma mark -
#pragma mark Memory management

- (void)dealloc {
    //[navigationController release];
    [viewController release];
    [window release];
    [super dealloc];
}

@end

#import <UIKit/UIKit.h>

@interface RootViewController : UITableViewController {
    IBOutlet NSMutableArray *views;
}

@property (nonatomic, retain) IBOutlet NSMutableArray *views;

@end

//
// RootViewController.m
// NavigationTest
//
// Created by guest on 4/23/10.
// Copyright MyCompanyName 2010. All rights reserved.
//

#import "RootViewController.h"
#import "OpportunityOne.h"

@implementation RootViewController

@synthesize views;

#pragma mark -
#pragma mark View lifecycle

- (void)viewDidLoad {
    views = [[NSMutableArray alloc] init];
    OpportunityOne *opportunityOneController;
    for (int i = 1; i <= 20; i++) {
        opportunityOneController = [[OpportunityOne alloc] init];
        opportunityOneController.title = [[NSString alloc] initWithFormat:@"Opportunity %i", i];
        [views addObject:[NSDictionary dictionaryWithObjectsAndKeys:
            [[NSString alloc] initWithFormat:@"Opportunity %i", i], @"title",
            opportunityOneController, @"controller", nil]];
        self.title = @"GPS";
    }
    [super viewDidLoad];
    // Uncomment the following line to display an Edit button in the navigation bar for this view controller.
    // self.navigationItem.rightBarButtonItem = self.editButtonItem;
}

#pragma mark -
#pragma mark Table view data source

// Customize the number of sections in the table view.
- (NSInteger)numberOfSectionsInTableView:(UITableView *)tableView {
    return 1;
}

// Customize the number of rows in the table view.
- (NSInteger)tableView:(UITableView *)tableView numberOfRowsInSection:(NSInteger)section {
    return [views count];
}

// Customize the appearance of table view cells.
- (UITableViewCell *)tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath {
    static NSString *CellIdentifier = @"Cell";
    UITableViewCell *cell = [tableView dequeueReusableCellWithIdentifier:CellIdentifier];
    if (cell == nil) {
        cell = [[[UITableViewCell alloc] initWithStyle:UITableViewCellStyleDefault reuseIdentifier:CellIdentifier] autorelease];
    }
    // Configure the cell.
    cell.textLabel.text = [[views objectAtIndex:indexPath.row] objectForKey:@"title"];
    return cell;
}

#pragma mark -
#pragma mark Table view delegate

- (void)tableView:(UITableView *)tableView didSelectRowAtIndexPath:(NSIndexPath *)indexPath {
    UIViewController *targetViewController = [[views objectAtIndex:indexPath.row] objectForKey:@"controller"];
    [[self navigationController] pushViewController:targetViewController animated:YES];
}

#pragma mark -
#pragma mark Memory management

- (void)didReceiveMemoryWarning {
    // Releases the view if it doesn't have a superview.
    [super didReceiveMemoryWarning];
}

- (void)viewDidUnload {
    // Relinquish ownership of anything that can be recreated in viewDidLoad or on demand.
    // For example: self.myOutlet = nil;
}

- (void)dealloc {
    [views release];
    [super dealloc];
}

@end

Wow, I was finding it really hard to post the code. I apologize for the bad formatting, but I just couldn't get past the formatting rules for this text editor. Thanks, Shashi

    Read the article

  • Coding With Windows Azure IaaS

    - by Hisham El-bereky
This post focuses on some advanced programming topics for IaaS (Infrastructure as a Service), which Windows Azure provides as virtual machines (with related resources such as virtual disks and virtual networks). Windows Azure started as a PaaS cloud platform, but some business cases need full control over their virtual machines, so Windows Azure moved toward providing IaaS as well. Sometimes you will need to manage your cloud IaaS through code, perhaps for these reasons:

Working on a hyper-cloud system by providing a bursting connector to Windows Azure virtual machines
Providing a multi-tenant system which consumes Windows Azure virtual machines
Automated processes on your on-premises or cloud service which need to utilize some virtual resources

We are going to implement the following basic operations using C# code:

List images
Create virtual machine
List virtual machines
Restart virtual machine
Delete virtual machine

Before implementing the above operations we need to prepare the client side and the Windows Azure subscription so they can communicate correctly, by providing a management certificate (an X.509 v3 certificate) that permits client access to resources in your Windows Azure subscription; requests made using the Windows Azure Service Management REST API require authentication against a certificate that you provide to Windows Azure. More info about setting up the management certificate is located here. To install the .cer on another client machine you will need the .pfx file (if it does not exist, export the .cer as a .pfx).

Note: You will need to install .NET 4.5 on your machine to try the code.

So let's start. This post builds on the post by Michael Washam, "Advanced Windows Azure IaaS – Demo Code"; I'm here to clarify some points and to add new operations that are not present in Michael's demo.

The basic C# class used here as a client to the Azure REST API for the IaaS service is HttpClient (it provides a base class for sending HTTP requests and receiving HTTP responses from a resource identified by a URI). This object must be initialized with the required data, like certificate, headers and content, as required.
I'd also like to note that the code is based on asynchronous programming, with calls to Azure that enhance performance and make it possible to compose complex operations that depend on more than one sub-call. The following code shows how to get the certificate and initialize the HttpClient object with the required data, like headers and content:

HttpClient GetHttpClient()
{
    X509Store certificateStore = null;
    X509Certificate2 certificate = null;
    try
    {
        certificateStore = new X509Store(StoreName.My, StoreLocation.CurrentUser);
        certificateStore.Open(OpenFlags.ReadOnly);
        string thumbprint = ConfigurationManager.AppSettings["CertThumbprint"];
        var certificates = certificateStore.Certificates.Find(X509FindType.FindByThumbprint, thumbprint, false);
        if (certificates.Count > 0)
        {
            certificate = certificates[0];
        }
    }
    finally
    {
        if (certificateStore != null)
            certificateStore.Close();
    }

    WebRequestHandler handler = new WebRequestHandler();
    if (certificate != null)
    {
        handler.ClientCertificates.Add(certificate);
        HttpClient httpClient = new HttpClient(handler);
        // Set the required headers, like x-ms-version
        httpClient.DefaultRequestHeaders.Add("x-ms-version", "2012-03-01");
        httpClient.DefaultRequestHeaders.Accept.Add(new MediaTypeWithQualityHeaderValue("application/xml"));
        return httpClient;
    }
    return null;
}

Let us keep the httpClient object as the reference object used to call the Windows Azure REST API IaaS service. For each request operation we need to define: the request URI, the HTTP method, the headers, and the content body (if required).

(1) List Images
The List OS Images operation retrieves a list of the OS images from the image repository.
Request URI: https://management.core.windows.net/<subscription-id>/services/images (replace <subscription-id> with your Windows Azure subscription ID)
HTTP Method: GET (HTTP 1.1)
Headers: x-ms-version: 2012-03-01
Body: None.

C# code:

List<String> imageList = new List<String>();
// Replace _subscriptionid with your WA subscription
String uri = String.Format("https://management.core.windows.net/{0}/services/images", _subscriptionid);
HttpClient http = GetHttpClient();
Stream responseStream = await http.GetStreamAsync(uri);
if (responseStream != null)
{
    XDocument xml = XDocument.Load(responseStream);
    var images = xml.Root.Descendants(ns + "OSImage").Where(i => i.Element(ns + "OS").Value == "Windows");
    foreach (var image in images)
    {
        string img = image.Element(ns + "Name").Value;
        imageList.Add(img);
    }
}

More information about the REST call (request/response) is located here: http://msdn.microsoft.com/en-us/library/windowsazure/jj157191.aspx

(2) Create Virtual Machine
Creating a virtual machine requires a service and a deployment to be created first, so creating a VM is done in three steps when the hosted service and deployment do not exist yet:

Create hosted service: a container for service deployments in Windows Azure. A subscription may have zero or more hosted services.
Create deployment: a service that is running on Windows Azure. A deployment may be running in either the staging or production deployment environment. It may be managed either by referencing its deployment ID, or by referencing the deployment environment in which it's running.
Create virtual machine: the info from the previous two steps is required in this step.

I suggest using the same name for service, deployment and virtual machine, to make it easy to manage virtual machines.

Note: the name of the hosted service must be unique within Windows Azure.
This name is the DNS prefix name and can be used to access the hosted service, for example: http://ServiceName.cloudapp.net/

2.1 Create Service
Request URI: https://management.core.windows.net/<subscription-id>/services/hostedservices
HTTP Method: POST (HTTP 1.1)
Headers: x-ms-version: 2012-03-01; Content-Type: application/xml
Body: more details about the request body (and other information) are located here: http://msdn.microsoft.com/en-us/library/windowsazure/gg441304.aspx

C# code: the following method shows how to create a hosted service.

async public Task<String> NewAzureCloudService(String ServiceName, String Location, String AffinityGroup, String subscriptionid)
{
    String requestID = String.Empty;
    String uri = String.Format("https://management.core.windows.net/{0}/services/hostedservices", subscriptionid);
    HttpClient http = GetHttpClient();

    System.Text.ASCIIEncoding ae = new System.Text.ASCIIEncoding();
    byte[] svcNameBytes = ae.GetBytes(ServiceName);

    String locationEl = String.Empty;
    String locationVal = String.Empty;
    if (String.IsNullOrEmpty(Location) == false)
    {
        locationEl = "Location";
        locationVal = Location;
    }
    else
    {
        locationEl = "AffinityGroup";
        locationVal = AffinityGroup;
    }

    XElement srcTree = new XElement("CreateHostedService",
        new XAttribute(XNamespace.Xmlns + "i", ns1),
        new XElement("ServiceName", ServiceName),
        new XElement("Label", Convert.ToBase64String(svcNameBytes)),
        new XElement(locationEl, locationVal)
    );
    ApplyNamespace(srcTree, ns);

    XDocument CSXML = new XDocument(srcTree);
    HttpContent content = new StringContent(CSXML.ToString());
    content.Headers.ContentType = new System.Net.Http.Headers.MediaTypeHeaderValue("application/xml");

    HttpResponseMessage responseMsg = await http.PostAsync(uri, content);
    if (responseMsg != null)
    {
        requestID = responseMsg.Headers.GetValues("x-ms-request-id").FirstOrDefault();
    }
    return requestID;
}

2.2 Create Deployment
Request URI: https://management.core.windows.net/<subscription-id>/services/hostedservices/<service-name>/deploymentslots/<deployment-slot-name>
Replace <deployment-slot-name> with staging or production, depending on where you wish to deploy your service package; <service-name> is provided as input from the previous step.
HTTP Method: POST (HTTP 1.1)
Headers: x-ms-version: 2012-03-01; Content-Type: application/xml
Body: more details about the request body (and other information) are located here: http://msdn.microsoft.com/en-us/library/windowsazure/ee460813.aspx

C# code: the following method shows how to create a hosted service deployment.

async public Task<String> NewAzureVMDeployment(String ServiceName, String VMName, String VNETName, XDocument VMXML, XDocument DNSXML)
{
    String requestID = String.Empty;
    String uri = String.Format("https://management.core.windows.net/{0}/services/hostedservices/{1}/deployments", _subscriptionid, ServiceName);
    HttpClient http = GetHttpClient();
    XElement srcTree = new XElement("Deployment",
        new XAttribute(XNamespace.Xmlns + "i", ns1),
        new XElement("Name", ServiceName),
        new XElement("DeploymentSlot", "Production"),
        new XElement("Label", ServiceName),
        new XElement("RoleList", null)
    );

    if (String.IsNullOrEmpty(VNETName) == false)
    {
        srcTree.Add(new XElement("VirtualNetworkName", VNETName));
    }
    if (DNSXML != null)
    {
        srcTree.Add(new XElement("DNS", new XElement("DNSServers", DNSXML)));
    }

    XDocument deploymentXML = new XDocument(srcTree);
    ApplyNamespace(srcTree, ns);
    deploymentXML.Descendants(ns + "RoleList").FirstOrDefault().Add(VMXML.Root);

    String fixedXML = deploymentXML.ToString().Replace(" xmlns=\"\"", "");
    HttpContent content = new StringContent(fixedXML);
    content.Headers.ContentType = new System.Net.Http.Headers.MediaTypeHeaderValue("application/xml");

    HttpResponseMessage responseMsg = await http.PostAsync(uri, content);
    if (responseMsg != null)
    {
        requestID = responseMsg.Headers.GetValues("x-ms-request-id").FirstOrDefault();
    }
    return requestID;
}

2.3 Create Virtual Machine
Request URI: https://management.core.windows.net/<subscription-id>/services/hostedservices/<cloudservice-name>/deployments/<deployment-name>/roles
<cloudservice-name> and <deployment-name> are provided as input from the previous steps.
HTTP Method: POST (HTTP 1.1)
Headers: x-ms-version: 2012-03-01; Content-Type: application/xml
Body: more details about the request body (and other information) are located here: http://msdn.microsoft.com/en-us/library/windowsazure/jj157186.aspx

C# code:

async public Task<String> NewAzureVM(String ServiceName, String VMName, XDocument VMXML)
{
    String requestID = String.Empty;
    String deployment = await GetAzureDeploymentName(ServiceName);
    String uri = String.Format("https://management.core.windows.net/{0}/services/hostedservices/{1}/deployments/{2}/roles", _subscriptionid, ServiceName, deployment);

    HttpClient http = GetHttpClient();
    HttpContent content = new StringContent(VMXML.ToString());
    content.Headers.ContentType = new System.Net.Http.Headers.MediaTypeHeaderValue("application/xml");
    HttpResponseMessage responseMsg = await http.PostAsync(uri, content);
    if (responseMsg != null)
    {
        requestID = responseMsg.Headers.GetValues("x-ms-request-id").FirstOrDefault();
    }
    return requestID;
}

(3) List Virtual Machines
To list the virtual machines hosted in a Windows Azure subscription we have to loop over all hosted services to get their hosted virtual machines. To do that we need to execute the following operations: list the hosted services, then list each hosted service's virtual machines.

3.1 Listing Hosted Services
Request URI: https://management.core.windows.net/<subscription-id>/services/hostedservices
HTTP Method: GET (HTTP 1.1)
Headers: x-ms-version: 2012-03-01
Body: None.
More info about this HTTP request is located here: http://msdn.microsoft.com/en-us/library/windowsazure/ee460781.aspx

C# code:

async private Task<List<XDocument>> GetAzureServices(String subscriptionid)
{
    String uri = String.Format("https://management.core.windows.net/{0}/services/hostedservices", subscriptionid);
    List<XDocument> services = new List<XDocument>();

    HttpClient http = GetHttpClient();
    Stream responseStream = await http.GetStreamAsync(uri);
    if (responseStream != null)
    {
        XDocument xml = XDocument.Load(responseStream);
        var svcs = xml.Root.Descendants(ns + "HostedService");
        foreach (XElement r in svcs)
        {
            XDocument vm = new XDocument(r);
            services.Add(vm);
        }
    }
    return services;
}

3.2 Listing Hosted Service Virtual Machines
Request URI: https://management.core.windows.net/<subscription-id>/services/hostedservices/<service-name>/deployments/<deployment-name>/roles/<role-name>
HTTP Method: GET (HTTP 1.1)
Headers: x-ms-version: 2012-03-01
Body: None.
More info about this HTTP request is located here: http://msdn.microsoft.com/en-us/library/windowsazure/jj157193.aspx

C# code:

async public Task<XDocument> GetAzureVM(String ServiceName, String VMName, String subscriptionid)
{
    String deployment = await GetAzureDeploymentName(ServiceName);
    XDocument vmXML = new XDocument();
    String uri = String.Format("https://management.core.windows.net/{0}/services/hostedservices/{1}/deployments/{2}/roles/{3}", subscriptionid, ServiceName, deployment, VMName);

    HttpClient http = GetHttpClient();
    Stream responseStream = await http.GetStreamAsync(uri);
    if (responseStream != null)
    {
        vmXML = XDocument.Load(responseStream);
    }
    return vmXML;
}

So the final method, which can be used to list all virtual machines, is:

async public Task<XDocument> GetAzureVMs()
{
    List<XDocument> services = await GetAzureServices(_subscriptionid);
    XDocument vms = new XDocument();
    vms.Add(new XElement("VirtualMachines"));
    ApplyNamespace(vms.Root, ns);
    foreach (var svc in services)
    {
        string ServiceName = svc.Root.Element(ns + "ServiceName").Value;
        String uri = String.Format("https://management.core.windows.net/{0}/services/hostedservices/{1}/deploymentslots/{2}", _subscriptionid, ServiceName, "Production");
        try
        {
            HttpClient http = GetHttpClient();
            Stream responseStream = await http.GetStreamAsync(uri);
            if (responseStream != null)
            {
                XDocument xml = XDocument.Load(responseStream);
                var roles = xml.Root.Descendants(ns + "RoleInstance");
                foreach (XElement r in roles)
                {
                    XElement svcnameel = new XElement("ServiceName", ServiceName);
                    ApplyNamespace(svcnameel, ns);
                    r.Add(svcnameel); // not part of the RoleInstance
                    vms.Root.Add(r);
                }
            }
        }
        catch (HttpRequestException)
        {
            // no VMs in this cloud service
        }
    }
    return vms;
}

(4) Restart Virtual Machine
Request URI: https://management.core.windows.net/<subscription-id>/services/hostedservices/<service-name>/deployments/<deployment-name>/roles/<role-name>/Operations
HTTP Method: POST (HTTP 1.1)
Headers: x-ms-version: 2012-03-01; Content-Type: application/xml
Body:
<RestartRoleOperation xmlns:i="http://www.w3.org/2001/XMLSchema-instance">
  <OperationType>RestartRoleOperation</OperationType>
</RestartRoleOperation>

More details about this HTTP request are located here: http://msdn.microsoft.com/en-us/library/windowsazure/jj157197.aspx

C# code:

async public Task<String> RebootVM(String ServiceName, String RoleName)
{
    String requestID = String.Empty;
    String deployment = await GetAzureDeploymentName(ServiceName);
    String uri = String.Format("https://management.core.windows.net/{0}/services/hostedservices/{1}/deployments/{2}/roleInstances/{3}/Operations", _subscriptionid, ServiceName, deployment, RoleName);

    HttpClient http = GetHttpClient();
    XElement srcTree = new XElement("RestartRoleOperation",
        new XAttribute(XNamespace.Xmlns + "i", ns1),
        new XElement("OperationType", "RestartRoleOperation")
    );
    ApplyNamespace(srcTree, ns);

    XDocument CSXML = new XDocument(srcTree);
    HttpContent content = new StringContent(CSXML.ToString());
    content.Headers.ContentType = new System.Net.Http.Headers.MediaTypeHeaderValue("application/xml");

    HttpResponseMessage responseMsg = await http.PostAsync(uri, content);
    if (responseMsg != null)
    {
        requestID = responseMsg.Headers.GetValues("x-ms-request-id").FirstOrDefault();
    }
    return requestID;
}

(5) Delete Virtual Machine
You can delete a hosted virtual machine by deleting its deployment, but I prefer to delete its hosted service as well, so you can more easily manage your virtual machines from code.

5.1 Delete Deployment
Request URI: https://management.core.windows.net/<subscription-id>/services/hostedservices/<service-name>/deployments/<deployment-name>
HTTP Method: DELETE (HTTP 1.1)
Headers: x-ms-version: 2012-03-01
Body: None.

C# code:

async public Task<HttpResponseMessage> DeleteDeployment(string deploymentName)
{
    String uri = String.Format("https://management.core.windows.net/{0}/services/hostedservices/{1}/deployments/{2}", _subscriptionid, deploymentName, deploymentName);
    HttpClient http = GetHttpClient();
    HttpResponseMessage responseMessage = await http.DeleteAsync(uri);
    return responseMessage;
}

5.2 Delete Hosted Service
Request URI: https://management.core.windows.net/<subscription-id>/services/hostedservices/<service-name>
HTTP Method: DELETE (HTTP 1.1)
Headers: x-ms-version: 2012-03-01
Body: None.

C# code:

async public Task<HttpResponseMessage> DeleteService(string serviceName)
{
    String uri = String.Format("https://management.core.windows.net/{0}/services/hostedservices/{1}", _subscriptionid, serviceName);
    Log.Info("Windows Azure URI (http DELETE verb): " + uri, typeof(VMManager));
    HttpClient http = GetHttpClient();
    HttpResponseMessage responseMessage = await http.DeleteAsync(uri);
    return responseMessage;
}

And the following method can be used to delete both the deployment and the service:

async public Task<string> DeleteVM(string vmName)
{
    string responseString = string.Empty;

    // As a convention in this post, a unified name is used for service,
    // deployment and VM instance to make it easy to manage VMs
    HttpResponseMessage responseMessage = await DeleteDeployment(vmName);
    if (responseMessage != null)
    {
        string requestID = responseMessage.Headers.GetValues("x-ms-request-id").FirstOrDefault();
        OperationResult result = await PollGetOperationStatus(requestID, 5, 120);
        if (result.Status == OperationStatus.Succeeded)
        {
            responseString = result.Message;
            HttpResponseMessage sResponseMessage = await DeleteService(vmName);
            if (sResponseMessage != null)
            {
                OperationResult sResult = await PollGetOperationStatus(requestID, 5, 120);
                responseString += sResult.Message;
            }
        }
        else
        {
            responseString = result.Message;
        }
    }
    return responseString;
}

Note: This article is subject to updates.
Hisham

References
Advanced Windows Azure IaaS – Demo Code
Windows Azure Service Management REST API Reference
Introduction to the Azure Platform
Representational state transfer
Asynchronous Programming with Async and Await (C# and Visual Basic)
HttpClient Class
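To tie the pieces together, here is a minimal usage sketch. It assumes the methods above live in the same class (so _subscriptionid and GetHttpClient are in scope), and that PollGetOperationStatus and OperationResult, which the post references but never shows, poll the service management Get Operation Status API. Treat the names, the input file and the flow as illustrative assumptions, not as part of the original demo.

// Hypothetical end-to-end driver for the methods shown above.
// ASSUMPTIONS: runs inside the same class as the methods; "vmrole.xml"
// is a placeholder role definition built from an image returned by List Images.
async Task RunDemoAsync()
{
    // One name for service, deployment and VM, per the post's convention
    const string name = "mytestvm01";

    // Create the hosted service in a location (affinity group left null)
    await NewAzureCloudService(name, "West Europe", null, _subscriptionid);

    // Create the deployment carrying the VM role (no VNET or DNS settings)
    XDocument vmXml = XDocument.Load("vmrole.xml"); // placeholder input
    await NewAzureVMDeployment(name, name, null, vmXml, null);

    // List everything currently running in the subscription
    Console.WriteLine(await GetAzureVMs());

    // Restart the VM, then delete deployment + service
    await RebootVM(name, name);
    Console.WriteLine(await DeleteVM(name));
}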

    Read the article

  • Session memory – who’s this guy named Max and what’s he doing with my memory?

    - by extended_events
SQL Server MVP Jonathan Kehayias (blog) emailed me a question last week when he noticed that the total memory used by the buffers for an event session was larger than the value he specified for the MAX_MEMORY option in the CREATE EVENT SESSION DDL. The answer here seems like an excellent subject for me to kick off my new "401 – Internals" tag that identifies posts where I pull back the curtains a bit and let you peek into what's going on inside the Extended Events engine.

In a previous post (Option Trading: Getting the most out of the event session options) I explained that we use a set of buffers to store the event data before we write the event data to asynchronous targets. The MAX_MEMORY along with the MEMORY_PARTITION_MODE defines how big each buffer will be. Theoretically, that means that I can predict the size of each buffer using the following formula:

max memory / # of buffers = buffer size

If it were that simple I wouldn't be writing this post.

I'll take "boundary" for 64K, Alex

For a number of reasons that are beyond the scope of this blog, we create event buffers in 64K chunks. The result of this is that the buffer size indicated by the formula above is rounded up to the next 64K boundary, and that is the size used to create the buffers. If you think visually, this means that the graph of your max_memory option compared to the actual buffer size that results will look like a set of stairs rather than a smooth line. You can see this behavior by looking at the output of dm_xe_sessions, specifically the fields related to the buffer sizes, over a range of different memory inputs.

Note: This test was run on a 2-core machine using per_cpu partitioning, which results in 5 buffers. (See my previous post referenced above for the math behind buffer count.)

input_memory_kb  total_regular_buffers  regular_buffer_size  total_buffer_size
637              5                      130867               654335
638              5                      130867               654335
639              5                      130867               654335
640              5                      196403               982015
641              5                      196403               982015
642              5                      196403               982015

This is just a segment of the results that shows one of the "jumps" between the buffer boundary at 639 KB and 640 KB. You can verify the size boundary by doing the math on the regular_buffer_size field, which is returned in bytes:

196403 – 130867 = 65536 bytes
65536 / 1024 = 64 KB

The relationship between the input for max_memory and when the regular_buffer_size is going to jump from one 64K boundary to the next is going to change based on the number of buffers being created. The number of buffers is dependent on the partition mode you choose. If you choose any partition mode other than NONE, the number of buffers will depend on your hardware configuration. (Again, see the earlier post referenced above.) With the default partition mode of NONE, you always get three buffers, regardless of machine configuration, so I generated a "range table" for max_memory settings between 1 KB and 4096 KB as an example.
start_memory_range_kb  end_memory_range_kb  total_regular_buffers  regular_buffer_size  total_buffer_size
1                      191                  NULL                   NULL                 NULL
192                    383                  3                      130867               392601
384                    575                  3                      196403               589209
576                    767                  3                      261939               785817
768                    959                  3                      327475               982425
960                    1151                 3                      393011               1179033
1152                   1343                 3                      458547               1375641
1344                   1535                 3                      524083               1572249
1536                   1727                 3                      589619               1768857
1728                   1919                 3                      655155               1965465
1920                   2111                 3                      720691               2162073
2112                   2303                 3                      786227               2358681
2304                   2495                 3                      851763               2555289
2496                   2687                 3                      917299               2751897
2688                   2879                 3                      982835               2948505
2880                   3071                 3                      1048371              3145113
3072                   3263                 3                      1113907              3341721
3264                   3455                 3                      1179443              3538329
3456                   3647                 3                      1244979              3734937
3648                   3839                 3                      1310515              3931545
3840                   4031                 3                      1376051              4128153
4032                   4096                 3                      1441587              4324761

As you can see, there are 21 "steps" within this range, and max_memory values below 192 KB fall below the 64K-per-buffer limit, so they generate an error when you attempt to specify them.

Max approximates True as memory approaches 64K

The upshot of this is that the max_memory option does not imply a contract for the maximum memory that will be used for the session buffers (those of you who read Take it to the Max (and beyond) know that max_memory really refers only to the event session buffer memory), but is more of an estimate of total buffer size to the nearest higher multiple of 64K times the number of buffers you have. The maximum delta between your initial max_memory setting and the true total buffer size occurs right after you break through a 64K boundary; for example, if you set max_memory = 576 KB (see the green line in the table), your actual buffer size will be closer to 767 KB in a non-partitioned event session. You get "stepped up" for every 191 KB block of initial max_memory, which isn't likely to cause a problem for most machines.

Things get more interesting when you consider a partitioned event session on a computer that has a large number of logical CPUs or NUMA nodes. Since each buffer gets "stepped up" when you break a boundary, the delta can get much larger because it's multiplied by the number of buffers. For example, a machine with 64 logical CPUs will have 160 buffers using per_cpu partitioning, or, if you have 8 NUMA nodes configured on that machine, you would have 24 buffers when using per_node. If you've just broken through a 64K boundary and get "stepped up" to the next buffer size, you'll end up with a total buffer size approximately 10240 KB and 1536 KB respectively (64K * # of buffers) larger than the max_memory value you might think you're getting. Using per_cpu partitioning on a large machine has the most impact because of the large number of buffers created.

If the amount of memory being used by your system within these ranges is important to you, then this is something worth paying attention to and considering when you configure your event sessions. The DMV dm_xe_sessions is the tool to use to identify the exact buffer size for your sessions. In addition to the regular buffers (read: event session buffers) you'll also see the details for large buffers if you have configured MAX_EVENT_SIZE.

The "buffer steps" for any given hardware configuration should be static within each partition mode, so if you want a handy reference available when you configure your event sessions, you can use the following code to generate a range table similar to the one above that is applicable to your specific machine and chosen partition mode.
DECLARE @buf_size_output table (input_memory_kb bigint, total_regular_buffers bigint, regular_buffer_size bigint, total_buffer_size bigint)
DECLARE @buf_size int, @part_mode varchar(8)
SET @buf_size = 1          -- Set to the beginning of your max_memory range (KB)
SET @part_mode = 'per_cpu' -- Set to the partition mode for the table you want to generate

WHILE @buf_size <= 4096    -- Set to the end of your max_memory range (KB)
BEGIN
    BEGIN TRY
        IF EXISTS (SELECT * FROM sys.server_event_sessions WHERE name = 'buffer_size_test')
            DROP EVENT SESSION buffer_size_test ON SERVER

        DECLARE @session nvarchar(max)
        SET @session = 'create event session buffer_size_test on server
                        add event sql_statement_completed
                        add target ring_buffer
                        with (max_memory = ' + CAST(@buf_size as nvarchar(4)) + ' KB, memory_partition_mode = ' + @part_mode + ')'
        EXEC sp_executesql @session

        SET @session = 'alter event session buffer_size_test on server
                        state = start'
        EXEC sp_executesql @session

        INSERT @buf_size_output (input_memory_kb, total_regular_buffers, regular_buffer_size, total_buffer_size)
            SELECT @buf_size, total_regular_buffers, regular_buffer_size, total_buffer_size
            FROM sys.dm_xe_sessions WHERE name = 'buffer_size_test'
    END TRY
    BEGIN CATCH
        INSERT @buf_size_output (input_memory_kb)
            SELECT @buf_size
    END CATCH
    SET @buf_size = @buf_size + 1
END

DROP EVENT SESSION buffer_size_test ON SERVER

SELECT MIN(input_memory_kb) start_memory_range_kb, MAX(input_memory_kb) end_memory_range_kb,
       total_regular_buffers, regular_buffer_size, total_buffer_size
FROM @buf_size_output
GROUP BY total_regular_buffers, regular_buffer_size, total_buffer_size

Thanks to Jonathan for an interesting question and a chance to explore some of the details of Extended Events internals. - Mike
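As a postscript, the stair-step arithmetic can also be estimated offline. In this minimal C# sketch, the 64K chunking comes straight from the post; the ~205-byte per-buffer header is only an assumption inferred from the DMV numbers shown earlier (131072 - 130867 = 205), not a documented constant.

// Sketch: estimate regular_buffer_size for a given max_memory and buffer count.
using System;

class BufferSizeEstimate
{
    const long Chunk = 64 * 1024;  // buffers are created in 64K chunks (from the post)
    const long HeaderGuess = 205;  // ASSUMPTION inferred from dm_xe_sessions output

    static long EstimateBufferSize(long maxMemoryKb, int bufferCount)
    {
        long target = maxMemoryKb * 1024 / bufferCount;           // max memory / # of buffers
        long chunks = (target + HeaderGuess + Chunk - 1) / Chunk; // round up to the next 64K boundary
        return chunks * Chunk - HeaderGuess;                      // usable bytes per buffer
    }

    static void Main()
    {
        // Reproduces the jump shown in the post for a 5-buffer (per_cpu, 2-core) session:
        // 637 KB -> 130867, 639 KB -> 130867, 640 KB -> 196403
        foreach (var kb in new long[] { 637, 639, 640 })
            Console.WriteLine($"{kb} KB -> {EstimateBufferSize(kb, 5)} bytes per buffer");
    }
}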

    Read the article

  • Xobni Free Powers Up Outlook’s Search and Contacts

    - by Matthew Guay
Want to find out more about your contacts, discover email trends, and even sync Yahoo! email accounts in Outlook? Here's how you can do this and more with Xobni Free.

Email is one of the most important communications mediums today, but even with all of the advances in Outlook over the years it can still be difficult to keep track of conversations, files, and contacts. Xobni makes it easy by indexing your emails and organizing them by sender. You can use its powerful search to quickly find any email, find related messages, and then view more information about that contact with information from social networks. And, to top it off, it even lets you view your Yahoo! emails directly in Outlook without upgrading to a Yahoo! Plus account. Xobni runs in Outlook 2003, 2007, and 2010, including the 64-bit version of Outlook 2010, and users of older versions will especially enjoy the new features Xobni brings for free.

Getting Started

Download the Xobni Free installer (link below), and run it to start the installation. Make sure to exit Outlook before installing. Xobni may need to download additional files, which may take a few moments. When the download is finished, proceed with the install as normal. You can opt out of the Product Improvement Program at the end of the installation by unchecking the box. Additionally, you are asked to share Xobni with your friends on social networks, but this is not required.

Next time you open Outlook, you'll notice the new Xobni sidebar. You can choose to watch an introduction video that will help you quickly get up to speed on how Xobni works. While this is playing, Xobni is indexing your email in the background. Once the first indexing is finished, click Let's Go! to start using Xobni. Here's how Xobni looks in Outlook 2010:

Advanced Email Information

Select an email, and now you can see lots of info about it in your new Xobni sidebar. On the top of the sidebar, select the graph icon to see when and how often you email with a contact. Each contact is given an Xobni rank so you can quickly see who you email the most. You can see all related emails sorted into conversations, and also all attachments in the conversation, not just in this email. Xobni can also show you all scheduled appointments and links exchanged with a contact, but this is only available in the Plus version. If you'd rather not see the tab for a feature you can't use, click Don't show this tab to banish it from Xobni for good. Searching emails from the Xobni toolbar is very fast, and you can preview a message by simply hovering over it in the search pane.

Get More Information About Your Contacts

Xobni's coolest feature is its social integration. Whenever you select an email, you may see a brief bio, picture, and more, all pulled from social networks. Select one of the tabs to find more information. You may need to log in to view information on your contacts from certain networks. The Twitter tab lets you see recent tweets. Xobni will search for related Twitter accounts, and will ask you to confirm if the choice is correct. Now you can see this contact's recent tweets directly from Outlook. The Hoovers tab can give you interesting information about the businesses you're in contact with. If the information isn't correct, you can edit it and add your own information. Click the Edit button, and then add any information you want. You can also remove a network you don't wish to see.
Right-click on the network tabs, select Manage Extensions, and uncheck any you don't want to see. But sometimes online contact just doesn't cut it. For these times, click on the orange folder button to request a contact's phone number or schedule a time with them. This will open a new email message ready to send with the information you want. Edit as you please, and send.

Add Yahoo! Email to Outlook for Free

One of Xobni's neatest features is that it lets you add your Yahoo! email account to Outlook for free. Click the gear icon at the bottom of the Xobni sidebar and select Options to set it up. Select the Integration tab, and click Enable to add Yahoo! mail to Xobni. Sign in with your Yahoo! account, and make sure to check the Keep me signed in box. Note that you may have to re-sign in every two weeks to keep your Yahoo! account connected. Select I agree to finish setting it up. Xobni will now download and index your recent Yahoo! mail. Your Yahoo! messages will only show up in the Xobni sidebar. Whenever you select a contact, you will see related messages from your Yahoo! account as well. Or, you can search from the sidebar to find individual messages from your Yahoo! account. Note the Y! logo beside Yahoo! messages. Select a message to read it in the sidebar. You can open the email in Yahoo! in your browser, or reply to it using your default Outlook email account. If you have many older messages in your Yahoo! account, make sure to go back to the Integration tab and select Index Yahoo! Mail to index all of your emails.

Conclusion

Xobni is a great tool to help you get more out of your daily Outlook experience. Whether you struggle to find attachments a coworker sent you or want to access Yahoo! email from Outlook, Xobni might be the perfect tool for you. And with the extra things you learn about your contacts through the social network integration, you might boost your own PR skills without even trying!

Link: Download Xobni

    Read the article

  • A Guided Tour of Complexity

    - by JoshReuben
I just re-read Complexity – A Guided Tour by Melanie Mitchell, protégée of Douglas Hofstadter (author of "Gödel, Escher, Bach") http://www.amazon.com/Complexity-Guided-Tour-Melanie-Mitchell/dp/0199798109/ref=sr_1_1?ie=UTF8&qid=1339744329&sr=8-1. Here are some notes and links.

The field evolved from Cybernetics, General Systems Theory, and Synergetics. Some interesting transdisciplinary fields to investigate:

Chaos Theory – http://en.wikipedia.org/wiki/Chaos_theory – small differences in initial conditions (such as those due to rounding errors in numerical computation) yield widely diverging outcomes for chaotic systems, rendering long-term prediction impossible.
System Dynamics / Cybernetics – http://en.wikipedia.org/wiki/System_Dynamics – the study of how feedback changes system behavior.
Network Theory – http://en.wikipedia.org/wiki/Network_theory – leverages graph theory to analyze symmetric / asymmetric relations between discrete objects.
Algebraic Topology – http://en.wikipedia.org/wiki/Algebraic_topology – leverages abstract algebra to analyze topological spaces.

There are limits to deterministic systems and to computation. Chaos theory definitely applies to training an ANN (artificial neural network): different weights will emerge depending upon the random selection of the training set. In recursive non-linear systems (http://en.wikipedia.org/wiki/Nonlinear_system) output is not directly inferable from input, e.g. the logistic map: Xt+1 = R Xt (1 - Xt). Different types of bifurcations, attractor states and oscillations may occur, e.g. a Lorenz attractor (http://en.wikipedia.org/wiki/Lorenz_system). The Feigenbaum constants (http://en.wikipedia.org/wiki/Feigenbaum_constants) express ratios in a bifurcation diagram for a non-linear map; the convergent limit of R (the rate of period-doubling bifurcations) is 4.6692016.

Maxwell's Demon (http://en.wikipedia.org/wiki/Maxwell%27s_demon): the Second Law of Thermodynamics has only a statistical certainty; the universe (and thus information) tends towards entropy. While any computation can theoretically be done without expending energy, with finite memory the act of erasing memory is permanent and increases entropy. Life and thought are counter-examples to the universe's tendency towards entropy.

Leo Szilard and later Claude Shannon came up with the information theory of entropy (http://en.wikipedia.org/wiki/Entropy_(information_theory)), whereby Shannon entropy quantifies the expected value of a message's information in bits, in order to determine channel capacity and leverage coding theory (compression analysis). Ludwig Boltzmann came up with Statistical Mechanics (http://en.wikipedia.org/wiki/Statistical_mechanics), whereby our Newtonian perception of continuous reality is a probabilistic and statistical aggregate of many discrete quantum microstates. This is relevant for Quantum Information Theory (http://en.wikipedia.org/wiki/Quantum_information) and the Physics of Information (http://en.wikipedia.org/wiki/Physical_information).

Hilbert's problems (http://en.wikipedia.org/wiki/Hilbert's_problems) pondered whether mathematics is complete, consistent, and decidable (the Decision Problem – http://en.wikipedia.org/wiki/Entscheidungsproblem – is there always an algorithm that can determine whether a statement is true?). Gödel's incompleteness theorems (http://en.wikipedia.org/wiki/G%C3%B6del's_incompleteness_theorems) proved that mathematics cannot be both complete and consistent (e.g. "This statement is not provable").
Turing showed, through the use of Turing machines (http://en.wikipedia.org/wiki/Turing_machine – symbol processors that can prove mathematical statements) and universal Turing machines (http://en.wikipedia.org/wiki/Universal_Turing_machine – Turing machines that can emulate any other Turing machine by accepting programs as well as data as input symbols), that computation is limited, by demonstrating the Halting Problem (http://en.wikipedia.org/wiki/Halting_problem): it is not possible in general to know when a program will complete; you cannot build an infinite-loop detector.

You may be used to thinking of 1 / 2 / 3 dimensional systems, but fractal systems (http://en.wikipedia.org/wiki/Fractal) are defined by self-similarity and have non-integer Hausdorff dimensions (http://en.wikipedia.org/wiki/List_of_fractals_by_Hausdorff_dimension). The fractal dimension quantifies the number of copies of a self-similar object at each level of detail, e.g. the Koch snowflake (http://en.wikipedia.org/wiki/Koch_snowflake).

Definitions of complexity: size, Shannon entropy, Algorithmic Information Content (http://en.wikipedia.org/wiki/Algorithmic_information_theory – the size of the shortest program that can generate a description of an object), logical depth (amount of info processed), thermodynamic depth (resources required). Complexity is statistical and fractal.

John von Neumann's other machine was the self-reproducing automaton (http://en.wikipedia.org/wiki/Self-replicating_machine). Cellular automata (http://en.wikipedia.org/wiki/Cellular_automaton) are an alternative form of universal Turing machine to traditional von Neumann machines, where grid cells are locally synchronized with their neighbors according to a rule. Conway's Game of Life (http://en.wikipedia.org/wiki/Conway's_Game_of_Life) demonstrates various emergent constructs such as "glider guns" and "spaceships". Cellular automata are not practical because logical ops require a large number of cells – wasteful and inefficient. There are no compilers or general programming languages available for cellular automata (as far as I am aware).

Random Boolean networks (http://en.wikipedia.org/wiki/Boolean_network) are extensions of cellular automata where nodes are connected at random (not to spatial neighbors) and each node has its own rule; they demonstrate the emergence of complex and self-organized behavior.

Stephen Wolfram's (creator of Mathematica, so give him the benefit of the doubt) A New Kind of Science (http://en.wikipedia.org/wiki/A_New_Kind_of_Science) proposes that the universe may be a discrete finite state automaton (http://en.wikipedia.org/wiki/Finite-state_machine) whereby reality emerges from simple rules. I am 2/3 through this book. It is feasible that the universe is quantum-discrete at the Planck scale and that it computes itself – Digital Physics (http://en.wikipedia.org/wiki/Digital_physics) – a simulated reality? Anyway, all behavior is supposedly derived from simple algorithmic rules and falls into 4 patterns: uniform; nested / cyclical; random (Rule 30 – http://en.wikipedia.org/wiki/Rule_30); and mixed (Rule 110 – http://en.wikipedia.org/wiki/Rule_110 – localized structures; it is this that is interesting). Interaction between colliding propagating signal inputs is then information processing. Wolfram proposes the Principle of Computational Equivalence (http://mathworld.wolfram.com/PrincipleofComputationalEquivalence.html): all processes that are not obviously simple can be viewed as computations of equivalent sophistication.
Meaning in information may emerge from analogy & conceptual slippages – see the CopyCat program: http://cognitrn.psych.indiana.edu/rgoldsto/courses/concepts/copycat.pdf Scale Free Networks http://en.wikipedia.org/wiki/Scale-free_network have a distribution governed by a Power Law (http://en.wikipedia.org/wiki/Power_law - much more common than Normal Distribution). They are characterized by hubs (resilience to random deletion of nodes), heterogeneity of degree values, self similarity, & small world structure. They grow via preferential attachment http://en.wikipedia.org/wiki/Preferential_attachment – tipping points triggered by positive feedback loops. 2 theories of cascading system failures in complex systems are Self-Organized Criticality http://en.wikipedia.org/wiki/Self-organized_criticality and Highly Optimized Tolerance http://en.wikipedia.org/wiki/Highly_optimized_tolerance. Computational Mechanics http://en.wikipedia.org/wiki/Computational_mechanics – use of computational methods to study phenomena governed by the principles of mechanics. This book is a great intuition pump, but does not cover the more mathematical subject of Computational Complexity Theory – http://en.wikipedia.org/wiki/Computational_complexity_theory I am currently reading this book on this subject: http://www.amazon.com/Computational-Complexity-Christos-H-Papadimitriou/dp/0201530821/ref=pd_sim_b_1   stay tuned for that review!
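In the meantime, to make the logistic map's sensitivity to initial conditions concrete, here is a tiny C# sketch of Xt+1 = R Xt (1 - Xt); the R values, seeds and step count are chosen purely for illustration:

// Iterate the logistic map from two nearly identical seeds.
// At R = 4.0 (chaotic regime) the trajectories diverge quickly;
// at R = 2.5 they settle to the same fixed point.
using System;

class LogisticMap
{
    static void Run(double r, double x0, double delta, int steps)
    {
        double a = x0, b = x0 + delta;
        for (int t = 0; t < steps; t++)
        {
            a = r * a * (1 - a);
            b = r * b * (1 - b);
        }
        Console.WriteLine($"R={r}: x={a:F6}, x'={b:F6}, |diff|={Math.Abs(a - b):E2}");
    }

    static void Main()
    {
        Run(2.5, 0.2, 1e-9, 50); // converges: the seed difference stays negligible
        Run(4.0, 0.2, 1e-9, 50); // chaotic: the tiny seed difference blows up
    }
}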

    Read the article

  • CodePlex Daily Summary for Thursday, March 11, 2010

CodePlex Daily Summary for Thursday, March 11, 2010

New Projects

ASP.NET Wiki Control: This ASP.NET user control allows you to embed a very useful wiki directly into your already existing ASP.NET website taking advantage of the popula...
BabyLog: Log baby daily activity.
buddyHome: buddyHome is a project that can make your home smarter, as good as your buddy.
Cloud Community: Cloud Community makes it easier for organizations to have a simple to use community platform. Our mission is to create an easy to use community pl...
Community Connectors for Microsoft CRM 4.0: Community Connectors for Microsoft CRM 4.0 allows Microsoft CRM 4.0 customers and partners to monitor and analyze customers’ interaction from their...
Console Highlighter: Highlights the Microsoft Windows Command prompt (cmd.exe) by outputting ANSI VT100 control sequences to color the output. These sequences are not hand...
Cornell Store: This is IN NO WAY officially affiliated or related to the Cornell University store. Instead, this is a project that I am doing for a class. Ther...
DevUtilities: This project is for creating some utility tools, and they will be useful during development.
DotNetNuke® Skin Maple: A DotNetNuke Design Challenge skin package submitted to the "Personal" category by DyNNamite.co.uk. The package includes 4 color variations and sev...
HRNet: HRNet
IIS Web Site Monitoring: Software for monitoring a particular web site on IIS, even if its IP is shared between different web sites.
Iowa Code Camp: The source code for the Iowa Code Camp website.
Leonidas: Leonidas is a virtual tutor.
Lunch 'n Learn: The Lunch 'n Learn web application is an open source ASP.NET MVC application that allows you to setup lunch 'n learn presentations for your team, c...
MNT Cryptography: A very simple cryptography class.
MooiNooi MVC2LINQ2SQL Web Databinder: mvc2linq2sql is a databinder for ASP.NET MVC that enables developers to cleanly bind objects from HTML forms to LINQ entities. Even 1 to N relations ...
MoqBot: MoqBot is an auto mocking library for Moq and Ninject.
mtExperience1: hoi
MvcPager: MvcPager is a free paging component for ASP.NET MVC web applications; it exposes a series of extension methods for use in ASP.NET MVC applications...
OCal: OCal is based on object calisthenics to identify code smells.
Pex Custom Arithmetic Solver: Pex Custom Arithmetic Solver contains a collection of meta-heuristic search algorithms. The goal is to improve Pex's code coverage for code involvi...
SetControls: Extended controls for ASP.NET applications. Full information closer to release...
shadowrage1597: CTC 195 Game Design class.
SharePoint Team-Mailer: A SharePoint 2007 solution that defines a generic CustomList for sending e-mails to SharePoint Groups.
Sql Share: SQL Share is a collaboration tool used within the sciences to allow database engineers to work tightly with domain scientists.
TechCalendar: Tech Events Calendar ASP.NET project.
ZLYScript: A very simple script language compiler.

New Releases

ALGLIB: ALGLIB 2.4.0: The new ALGLIB release contains improved versions of several linear algebra algorithms: QR decomposition, matrix inversion, condition number estimatio...
AmiBroker Plug-Ins with C#: AmiBroker Plug-Ins v0.0.2: Source code and a binary.
AppFabric Caching UI Admin Tool: AppFabric Caching Beta 2 UI Admin Tool: System requirements: .NET 4.0 RC, AppFabric Caching Beta2. Tested on: Win 7 (x64).
Autodocs - WCF REST Automatic API Documentation Generator: Autodocs.ServiceModel.Web: This archive contains the reference DLL, instructions and license.
Compact Plugs & Compact Injection: Compact Injection and Compact Plugs 1.1 Beta: First release of Compact Plugs (CP). The solution includes a simple example project of CP, called "TestCompactPlugs1". Also some fixes were made ...
Console Highlighter: Console Highlighter 0.9 (preview release): Preliminary release.
Encrypted Notes: Encrypted Notes 1.3: This is the latest version of Encrypted Notes (1.3). It has an installer; it will create a directory 'CPascoe' in My Documents. The last one was ...
Family Tree Analyzer: Version 1.0.2: Family Tree Analyzer Version 1.0.2. This early beta version implements loading a GEDCOM file and displaying some basic reports. These reports inclu...
FRC1103 - FRC Dashboard viewer: 2010 Documentation v0.1: This is my current version of the control system documentation for 2010. It isn't complete, but it has the information required for a custom dashbo...
jQuery.cssLess: jQuery.cssLess 0.5 (Even less release): NEW: support for nested special CSS classes (like :hover). MAIN RELEASE. This release, code "Even less", is the one that will interpret cssLess wit...
MooiNooi MVC2LINQ2SQL Web Databinder: MooiNooi MVC2LINQ2SQL DataBinder: I didn't try this... I just took it off from my project. Please tell me about any problem implementing it in your own development and I'll be pleased to h...
MvcPager: MvcPager 1.2 for ASP.NET MVC 1.0: MvcPager 1.2 for ASP.NET MVC 1.0.
Mytrip.Mvc: Mytrip 1.0 preview 1: Article Manager, Blog Manager, L2S Membership (.NET Framework 3.5), EF Membership (.NET Framework 4), User Manager, File Manager, Localization, Captcha ...
NodeXL: Network Overview, Discovery and Exploration for Excel: NodeXL Excel 2007 Template, version 1.0.1.117: The NodeXL Excel 2007 template displays a network graph using edge and vertex lists stored in an Excel 2007 workbook. What's new: this version adds ...
Pex Custom Arithmetic Solver: PexCustomArithmeticSolver: This is the alpha release containing the Alternating Variable Method and Evolution Strategies to try and solve constraints over floating point vari...
Scrum Sprint Monitor: v1.0.0.44877: What is new in this release? Major performance increase in animations (up to 50 fps from 2 fps) by replacing the DropShadow effect with PNG bitmaps; ...
sELedit: sELedit v1.0b: Added support for empty strings / wstrings; fixed a critical bug in configuration files (list 53).
sPWadmin: pwAdmin v0.9_nightly: Fixed: XML editor can now open and save character templates. Added: PWI item name database. Added: plugin support.
TechCalendar: Events Calendar v.1.0: Initial release.
The Silverlight Hyper Video Player [http://slhvp.com]: Beta 2: Beta 2.0. Some fixes from Beta 1, and a couple of small enhancements. Intensive testing continues, and I will continue to update the code at least ever...
ThreadSafe.Caching: 2010.03.10.1: Updates to the scavenging behaviour since the last release. Scavenging will now occur every 30 seconds by default and all objects in the cache will be ...
VCC: Latest build, v2.1.30310.0: Automatic drop of latest build.
Visual Studio DSite: Email Sender (C++): The same Email Sender program, but made in Visual C++ 2008 instead of Visual Basic 2008.
Web Forms MVP: Web Forms MVP CTP7: The release can be considered stable, and is in use behind several high-traffic, public websites. It has been marked as a CTP release as it is not ...
White Tiger: 0.0.3.1: Now you can load or create files with whatever root element you want. Checks for / sets file permissions.

Most Popular Projects: MetaSharp; WBFS Manager; Rawr; AJAX Control Toolkit; Microsoft SQL Server Product Samples: Database; Silverlight Toolkit; Windows Presentation Foundation (WPF); ASP.NET; Microsoft SQL Server Community & Samples; ASP.NET Ajax Library

Most Active Projects: Umbraco CMS; Rawr; SDS: Scientific DataSet library and tools; N2 CMS; Fasterflect - A Fast and Simple Reflection API; jQuery Library for SharePoint Web Services; BlogEngine.NET; Farseer Physics Engine; patterns & practices – Enterprise Library; Caliburn: An Application Framework for WPF and Silverlight

    Read the article

  • Is Social Media The Vital Skill You Aren’t Tracking?

    - by HCM-Oracle
By Mark Bennett - Originally featured in Talent Management Excellence

The ever-increasing presence of the workforce on social media presents opportunities as well as risks for organizations. While on the one hand we read about social media embarrassments happening to organizations, on the other we see that social media activities by workers and candidates can enhance a company’s brand and provide insight into which individuals are, or can become, influencers in the social media sphere. HR can play a key role in helping organizations get the most value out of the activities and presence of workers and candidates, while at the same time also helping to manage the risks that come with the permanence and viral nature of social media.

What is Missing from Understanding Our Workforce?

“If only HP knew what HP knows, we would be three times more productive.” - Lew Platt, Former Chairman, President, CEO, Hewlett-Packard

What Lew Platt recognized was that organizations only have a partial understanding of what their workforce is capable of. This lack of understanding impacts the company in several negative ways:

1. A particular skill that the company needs to access in one part of the organization might exist somewhere else, but there is no record that the skill exists, so the need is unfulfilled.
2. As market conditions change rapidly, the company needs to know its strategic options, but some options are missed entirely because the company doesn’t know that sufficient capability already exists to enable those options.
3. Employees may miss out on opportunities to demonstrate how their hidden skills could create new value for the company.

Why don’t companies have that more complete picture of their workforce capabilities; that is, why do they not know what they know? One very good explanation is that companies put most of their efforts into rating their workforce according to the jobs and roles they are filling today. This is the essence of two important talent management processes: recruiting and performance appraisals.

In recruiting, a set of requirements is put together for a job, either explicitly or indirectly through a job description. During the recruiting process, much of the attention is paid to whether the candidate has the qualifications, the skills, the experience and the cultural fit to be successful in the role. This makes a lot of sense.

In the performance appraisal process, an employee is measured on how well they performed the functions of their role and, in an effort to help the employee do even better next time, they are also measured on proficiency in the competencies that are deemed to be key in doing that job. Again, the logic is impeccable.

But in both these cases, two adages come to mind:

1. What gets measured is what gets managed.
2. You only see what you are looking for.

In other words, because the current roles the workforce performs are the basis for measuring which capabilities the workforce has, those become the only capabilities to be measured. What was initially meant to be a positive (identify what is needed to perform well and measure it, so that it can be managed) comes with the unintended negative consequence of overshadowing the other capabilities the workforce has. This also comes with an employee engagement price, for the measurement and management of workforce capabilities typically focuses on where the workforce comes up short.
    Again, it makes sense to do this, since improving a capability that appears to drive performance benefits both the individual, through improved performance ratings, and the company, through improved productivity. But this rests on the assumption that the capabilities identified, and their required proficiencies, are the only attributes of the individual that matter. Anything else the individual brings that results in high performance often goes unrecognized, or underappreciated at best.

    As social media occupies a more important part of current and future roles in organizations, businesses must incorporate social media savvy and innovation into job descriptions and expectations. These new measures could provide insight into how well someone can use social media tools to influence communities and decision makers; keep abreast of trends in fast-moving industries; present a positive brand image for the organization around thought leadership, customer focus and social responsibility; and coordinate and collaborate with partners. These measures should capture the "social capital" the individual has invested in and developed over time. Without this dimension, "short cut" methods may generate a narrow set of positive metrics that do not have real, long-lasting benefits to the organization.

    How Workforce Reputation Management Helps HR Harness Social Media

    With hundreds of petabytes of social media data flowing across Facebook, LinkedIn and Twitter, businesses are tapping technology solutions to effectively leverage social for HR. Workforce reputation management technology helps organizations discover, mobilize and retain talent by providing insight into the social reputation and influence of the workforce, while also helping organizations monitor employee social media policy compliance and mitigate social media risk. There are three major ways that workforce reputation management technology can play a strategic role in supporting HR:

    1. Improve Awareness and Decisions on Talent

    Many organizations measure the skills and competencies that they know they need today, but are unaware of what other skills and competencies their workforce has that could be essential tomorrow. And what about whether your workforce has the reputation and influence to make those skills and competencies more effective? Many organizations lack insight into the social media "reach" of their workforce, which is becoming ever more critical to business performance. These capabilities help organizations, managers and employees improve talent processes and decision making, including the following:

    Hiring and Assignments. People and teams with higher reputations are considered more valuable and effective workers. Someone with a high reputation who refers a candidate also carries high credibility as a source for hires.

    Training and Development. Reputation trend analysis can inform decisions about training offerings by showing how reputation and influence across the workforce change in concert with training. Worker reputation informs development plans and goal choices by helping the individual see which development efforts result in improved reputation and influence.

    Finding Hidden Talent. Managers can discover hidden talent and skills among employees based on a combination of social profile information and social media reputation, and employees can improve their personal brand and accelerate their career development.
    2. Talent Search and Discovery

    The right technology helps organizations find information on people that might otherwise stay hidden. By leveraging access to candidate and worker social profiles as well as their social relationships, workforce reputation management gives companies a more complete picture of the workforce's knowledge, skills and attributes, and of what the workforce can in turn access. This more complete information helps to find the right talent outside the organization, as well as the right, perhaps previously hidden, talent within it, to fill roles and staff projects, particularly those required in reaction to fast-changing opportunities and circumstances.

    3. Reputation Brings Credibility

    Workforce reputation management technology provides a clearer picture of how candidates and workers are viewed by their peers and communities across a wide range of social reputation and influence metrics. This information is less subject to individual bias and can inform critical decision making. Knowing an individual's reputation and influence enables the organization to predict how well their capabilities and behaviors will contribute to desired business outcomes; many of the roles with the highest impact on overall business performance depend on the individual's influence and reputation. In addition, reputation and influence measures offer a very tangible source of feedback for workers, providing insight that helps them develop themselves and their careers, and letting them see the effectiveness of those efforts by tracking changes in their reputation and influence over time.

    The following are some examples of the reputation and influence measures that workforce reputation management could gather and analyze (a sketch of how such measures might be computed follows the list):

    Generosity: How often the user reposts others' posts.
    Influence: How often the user's material is reposted by others.
    Engagement: The ratio of recent posts with references (e.g. links to other posts) to the total number of posts.
    Activity: How frequently the user posts (e.g. number per day).
    Impact: The size of the user's social networks, which indicates their ability to reach unique followers, friends, or users.
    Clout: The number of references to and citations of the user's material in others' posts.
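    To make the definitions above concrete, here is a minimal sketch in Python of how such measures might be derived from raw post data. The Post record, its field names, and the exact formulas are illustrative assumptions based only on the descriptions above; they are not part of any actual workforce reputation management product.

        from dataclasses import dataclass, field
        from datetime import datetime, timedelta

        @dataclass
        class Post:
            """One social media post (all fields are illustrative assumptions)."""
            author: str
            timestamp: datetime
            cites: list = field(default_factory=list)      # authors whose material this post links to or cites
            is_repost: bool = False                        # True if this post reshares someone else's material
            reposted_by: set = field(default_factory=set)  # users who reshared this post

        def reputation_metrics(user: str, posts: list, network_size: int,
                               window_days: int = 30) -> dict:
            """Compute the reputation and influence measures described above for one user."""
            mine = [p for p in posts if p.author == user]
            if not mine:
                return {}
            cutoff = datetime.now() - timedelta(days=window_days)
            recent = [p for p in mine if p.timestamp >= cutoff]
            return {
                # Generosity: how often the user reposts others' material
                "generosity": sum(p.is_repost for p in mine) / len(mine),
                # Influence: how often the user's material is reposted by others
                "influence": sum(len(p.reposted_by) for p in mine) / len(mine),
                # Engagement: ratio of recent posts with references to total posts
                "engagement": sum(bool(p.cites) for p in recent) / len(mine),
                # Activity: posting frequency, in posts per day over the window
                "activity": len(recent) / window_days,
                # Impact: size of the user's social network (reach)
                "impact": network_size,
                # Clout: others' posts that cite the user's material
                "clout": sum(user in p.cites for p in posts if p.author != user),
            }

    In practice, a real system would normalize these raw values and track them over time, since, as the article notes, it is the trend in reputation and influence that gives individuals and managers actionable feedback.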
    The Vital Ingredient of Workforce Reputation Management: Employee Participation

    "Nothing about me, without me." - Valerie Billingham, "Through the Patient's Eyes", Salzburg Seminar Session 356, 1998

    Since the data resides primarily in social media, a question arises: how is that data collected? While much social media activity is publicly accessible (as many who wished otherwise have learned to their chagrin), social norms have developed that put some restrictions on what is acceptable behavior, and by whom. Disregarding these norms risks a repercussion firestorm. One of the more recognized norms is that while individuals can follow and engage with other individuals' public social activity (e.g. Twitter updates) fairly freely, the more an organization does this unprompted and without getting permission from the individual beforehand, the more likely it is to produce the opposite of the desired outcome. Instead, the organization must ask the individual for permission, which can be met with resistance.

    That resistance comes from not knowing how the information will be used or how it will be shared with others, and from not receiving enough benefit in return for granting permission. As the quote above about patient concerns and rights succinctly states, no one likes feeling out of control of information about themselves, or uncertain about where it will be used. This is well understood in consumer social media (i.e. permission-based marketing) and is equally applicable to workforce reputation management. However, asking permission leaves open the very real possibility that no one, or very few, will grant it, resulting in a data set too small to be useful to the company.

    Connecting Individual Motivation to Organization Needs

    So what makes an individual decide to grant an organization access to the data it wants? It is when the individual's own motivations are in alignment with the organization's objectives. In the case of workforce reputation management, when the individual is motivated by a desire for increased visibility and career growth opportunities to advertise their skills and level of influence and reputation, they are aligned with the organization's objectives: to fill resource needs, and to strategically build better awareness of the skills, influence and reputation present in the workforce.

    Individuals can come to see the benefit of granting access permission through several means. One is simple social awareness: they begin to discover that the peers who are getting more career opportunities are those who have signed up for workforce reputation management. Another is for companies to take the message directly to the individual: "We think you would benefit from signing up with our workforce reputation management solution." A third, more strategic approach is to make reputation management part of a larger career development effort by the company, providing a wide set of tools that help the workforce plan and take action to achieve their career aspirations in the organization.

    An effective mechanism for connecting the visibility and career growth motivations of the workforce with the larger context of the organization's business objectives is to use game mechanics to help individuals transform their career goals into concrete, actionable steps, such as signing up for reputation management. This works in favor of companies looking to use workforce reputation management, because the workforce is more apt to see how it fits into achieving their overall career goals, and to see how further participation brings additional benefits.

    Once individuals have signed up with reputation management, not only have they made themselves more visible within the organization and increased their career growth opportunities, they have also enabled a tool they can use to better understand how their actions and behaviors affect their influence and reputation. Since they can see their reputation and influence measurements change over time, they gain better insight into how reputation and influence affect their effectiveness in a role, and how their behaviors and skill levels in turn affect their influence and reputation. This insight can trigger much more directed, and effective, efforts by the individual to perform at a higher level and become more productive.
    The increased sense of autonomy the individual experiences, in linking the insight they gain to the actions and behavior changes they make, greatly enhances their engagement with their role as well as their career prospects within the company.

    Workforce reputation management takes the wide range of disparate data about the workforce produced across various social media platforms and transforms it into accessible, relevant and actionable information that helps the organization achieve its desired business objectives. Social media holds untapped insights about your talent, brand and business, and workforce reputation management can help unlock them. Imagine: if you could find the hidden secrets of your business, how much more productive and efficient would your organization be?

    Mark Bennett is a Director of Product Strategy at Oracle. Mark focuses on setting the strategic vision and direction for tools that help organizations understand, shape, and leverage the capabilities of their workforce to achieve business objectives, as well as help individuals work effectively to achieve their goals and navigate their own growth. His combination of a deep technical background in software design and development with broad knowledge of the business challenges of today's globalized, rapidly changing, technology-accelerated economy has enabled him to identify and incorporate key innovations central to Oracle Fusion's unique value proposition. Over the course of his career, Mark has been in charge of the design, development, and strategy of Talent Management products, and of the design and development of cutting-edge software equipped to handle the increasingly complex demands of users while remaining easy to use. Follow him @mpbennett

    Read the article
