Search Results

Search found 93612 results on 3745 pages for 'inquisitive one'.


  • Spritegroups and colorkeys

    - by Fristi
    I have a problem using sprite groups in pygame. In my situation I have 2 sprite groups, one for humans, one for "infected". A human is represented by a blue circle: image = pygame.Surface((32,32)) image.fill((255,255,255)) pygame.draw.circle(image,(0,0,255),(16,16),16) image = image.convert() image.set_colorkey((255,255,255)) An infected is represented by a red one (same code, different color). I update my sprite groups as follows: self.humans.clear(self.screen, self.bg) self.humans.update(time_passed) self.humans.draw(self.screen) self.infected.clear(self.screen, self.bg) self.infected.update(time_passed) self.infected.draw(self.screen) self.bg is defined as: self.bg = pygame.Surface((SCREEN_WIDTH, SCREEN_HEIGHT)) self.bg.fill((255,255,255)) self.bg.convert() This all works, except that when a red circle overlaps with a blue one, you can see the white corners of the bounding box around the actual circle. Within a single sprite group the set_colorkey call does its job: this does not happen with overlapping blue circles or overlapping red circles. I tried adding a colorkey to self.bg, but that did not work. Same for adding a colorkey to self.screen.
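
    One likely culprit is the order of the clear/update/draw calls: because self.infected.clear() runs after self.humans.draw(), it can stamp a rectangle of the white background over a circle the other group has just drawn. Below is a minimal, hedged sketch of the reordering that usually cures this (with hypothetical stand-ins for the Human/Infected sprites): clear both groups before drawing either.

        import pygame

        pygame.init()
        screen = pygame.display.set_mode((200, 200))
        bg = pygame.Surface(screen.get_size())
        bg.fill((255, 255, 255))

        def make_sprite(color, pos):
            # Same circle-on-colorkey construction as in the question.
            s = pygame.sprite.Sprite()
            s.image = pygame.Surface((32, 32))
            s.image.fill((255, 255, 255))
            pygame.draw.circle(s.image, color, (16, 16), 16)
            s.image.set_colorkey((255, 255, 255))
            s.rect = s.image.get_rect(topleft=pos)
            return s

        humans = pygame.sprite.Group(make_sprite((0, 0, 255), (80, 80)))
        infected = pygame.sprite.Group(make_sprite((255, 0, 0), (96, 80)))

        screen.blit(bg, (0, 0))
        # Per frame: clear BOTH groups first, then update both, then draw both,
        # so no clear() erases a circle the other group has already drawn.
        humans.clear(screen, bg)
        infected.clear(screen, bg)
        humans.update()
        infected.update()
        humans.draw(screen)
        infected.draw(screen)
        pygame.display.flip()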

    Read the article

  • Why don't windows of the same application behave as they should in Oneiric?

    - by Yuttadhammo
    Somewhere along the upgrade path, Unity has developed some strange logic behind window layering. First, before Oneiric, there was a way to see all the windows of an application - I think it was when you clicked on the icon in the launcher. Now, clicking on the icon often does nothing. Suppose I have two terminals open, one behind this Firefox window, and one in front of it. Clicking on the launcher does nothing - the only way to find the second terminal, afaics, is to move the Firefox window or use the task switcher (which has a whole slew of problems of its own). Secondly, once I have both terminals on top and then decide to close one of them, they suddenly both disappear (the second one, for some reason, has gone into hiding behind the Firefox window). Third (though I can't pin it down now), sometimes when a window is on top, focus is still on a window in back; I click on the top x to close the window in front, only to find I've closed an important window in the back. I can't really believe these are bugs, since they seem too obvious not to have been fixed by now. My question is, am I missing something? Some compiz option I can set to make it act like it used to? Or is this really how Unity is supposed to act?

    Read the article

  • Mirroring of Apps across servers

    - by user1038814
    We wish to host multiple apps across multiple servers. What we are looking for (ideally) is an existing solution which will work. For example, normally to do it we'd follow a route (for failover) like: the app is installed on one server along with its MySQL database. The app is also installed on a second server. Rsync is used to mirror the files over to the second server and ensure consistency. MySQL is installed with a master-slave setup. We use a service such as DNS Made Easy which has DNS failover; if one server goes down, it automatically routes traffic to the backup server. We have done the above a few times and generally it's fine. The issue I have here is that the above is for one app. What I would like to look at is how we can manage this for multiple apps, and whether there is a layer (such as VMware) that has complete mirroring built in at the OS level. For example, how do web hosts currently do it when they ensure that more than one machine is running a bunch of hosted websites? If you were running hosting and you had 200 clients on a server, you would want the same clients across 2 or more servers and want everything mirrored. Any advice would be much appreciated.

    Read the article

  • Do there exist programming languages where a variable can truly know its own name?

    - by Job
    In PHP and Python one can iterate over the local variables and, if there is only one choice where the value matches, you could say that you know what the variable's name is, but this does not always work. Machine code does not have variable names. C compiles to assembly and does not have any native reflection capabilities, so it would not know its name. (Edit: per Anton's answer the pre-processor can know the variable's name.) Do there exist programming languages where a variable would know its name? It gets tricky if you do something like b = a and b does not become a copy of a but a reference to the same place. EDIT: Why in the world would you want this? I can think of one example: error checking that can survive automatic refactoring. Consider this C# snippet: private void CheckEnumStr(string paramName, string paramValue) { if (paramName != "pony" && paramName != "horse") { string exceptionMessage = String.Format( "Unexpected value '{0}' of the parameter named '{1}'.", paramValue, paramName); throw new ArgumentException(exceptionMessage); } } ... CheckEnumStr("a", a); // Var 'a' does not know its name - this will not survive naive auto-refactoring There are other libraries provided by Microsoft and others that allow you to check for errors (sorry, the names have escaped me). I have seen one library which, with the help of closures/lambdas, can accomplish error checking that survives refactoring, but it does not feel idiomatic. This would be one reason why I might want a language where a variable knows its name.
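
    For what it's worth, here is a small Python sketch of the locals()-scanning trick mentioned above, including the b = a case where it breaks down because two names reference the same object (names_of is just an illustrative helper, not a standard function):

        def names_of(value, scope):
            """Return every name in `scope` bound to exactly this object."""
            return [name for name, val in scope.items() if val is value]

        def demo():
            a = [1, 2, 3]
            b = a                          # b is another name for a, not a copy
            print(names_of(a, locals()))   # ['a', 'b'] -- ambiguous, so the trick fails
            c = "only one name matches"
            print(names_of(c, locals()))   # ['c'] -- only here do we "know" the name

        demo()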

    Read the article

  • Dual boot :Windows 7 partition deleted after Kubuntu 14.04 install...Weird!

    - by user292152
    I've bought two new SSDs in order to install Kubuntu on one and Win 7 on the other. Before, I had Linux Mint and Win7 together on just one SSD. So first I installed Win7 as recommended, and then used the guided installer of Kubuntu to install Kubuntu. I selected the second SSD and chose the option "use entire disk and install", but to my surprise, after rebooting and selecting the Win7 boot loader from GRUB2, I got a prompt that my Windows installation was damaged and that I needed to run the repair option from the installation disk. So I booted into Kubuntu again, fired up kparted and saw that indeed my Windows partition had been deleted, except for the recovery partition. I don't understand what happened. I am not new to this topic, and this was not my first time installing Ubuntu alongside Windows. I have never ever had that problem. What can I do to make sure this won't happen again, so I won't waste another 2 hours of my life? Thanks a lot!

    Read the article

  • Attempting to install ubuntu 11.10

    - by Orin
    I installed version 9 some time ago and have since forgotten the process for partitioning, or the layout is different. I have 5 partitions but only Windows XP installed on the PC in question; it sits on one of those 5 partitions, which is NTFS, 34444 MB (it's a 40 GB hard drive). My first question is: is there a way to get a screenshot of the partitioner when I am running the demo session straight from the disc? These 5 partitions are fragmenting the other 4-ish GB needed to install. I get an error message which says to go back and make sure one partition has at least 2.5 GB or so, but I have no idea what I am supposed to set these remaining 4 partitions to in order to proceed. I have read up on install guides and understand that one must be "/" (root) and another swap, but so far I have not hit on the correct combination. A few screenshots would no doubt help you answer, as I'm baffled as to what specific details to give; each partition shows various settings on inspection, and I don't really feel like writing it all down manually and then posting the specs for each one.

    Read the article

  • Where do I read more about building an architecture like Google has? [on hold]

    - by user107148
    I want to develop a program that watches and traverses a rather big network for data. This data should then be available to search through with my program, maybe through a web interface or something. My intuitive thought at the moment is to build an architecture kind of like the one Google has: one "front-end" (the Google Search page) which is essentially a regular web application, and one "back-end" (which in Google's case traverses the web). Now for the hard part: if I decide to make such a system, how should communication be done between these parts? One idea I had is to use some kind of database that both the back-end and front-end can access, but then come the issues of concurrent writes and reads. Another issue with just using a database to communicate is that it makes it hard to "notify" the other part when something changes. Let's say that I want the "front-end" part to push changes to the UI when a change is noticed in the back-end. Then the back-end would have to have some way of notifying the front-end of this.
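
    As a rough sketch of the notification idea (the names and the in-process queue here are purely illustrative; with the front-end and back-end in separate processes you would reach for a message broker or a database change log instead), the back-end can push change events onto a queue that the front-end blocks on, rather than both sides polling a shared database:

        import queue
        import threading
        import time

        changes = queue.Queue()  # stands in for a real message channel

        def backend_crawler():
            for n in range(3):
                time.sleep(0.5)                    # pretend we traversed part of the network
                changes.put(f"indexed item {n}")   # notify the front-end of the change

        def frontend_listener():
            while True:
                event = changes.get()              # wakes up only when notified
                print("front-end refreshes its view because:", event)
                changes.task_done()

        threading.Thread(target=frontend_listener, daemon=True).start()
        backend_crawler()
        changes.join()                             # wait until all events are handled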

    Read the article

  • Java single Array best choice for accessing pixels for manipulation?

    - by Petrol
    I am just watching this tutorial https://www.youtube.com/watch?v=HwUnMy_pR6A and the guy (who seems to be pretty competent) is using a single array to store and access the pixels of his to-be-rendered image. I was wondering if this really is the best way to do this. The multi-dimensional array alternative does cost one extra pointer dereference, but arrays have O(1) access by index either way, and calculating the index into a single array seems to take one addition and one multiplication operation per pixel. And if multi-dimensional arrays really are bad, can't you use something with hashing to avoid those addition and multiplication operations? EDIT: here is his code... public class Screen { private int width, height; public int[] pixels; public Screen(int width, int height) { this.width = width; this.height = height; // creating array the size of one index/int for every pixel // single array has better performance than multi-array pixels = new int[width * height]; } public void render() { for (int y = 0; y < height; y++) { for (int x = 0; x < width; x++) { pixels[x + y * width] = 0xff00ff; } } } }
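
    The index calculation being discussed is just a row-major mapping, and the trade-off is easy to see in a quick sketch (written in Python here for brevity; the arithmetic is identical in Java):

        width, height = 4, 3

        # Flat, row-major buffer: one multiply and one add per lookup.
        flat = [0] * (width * height)
        flat[1 + 2 * width] = 0xFF00FF             # pixel at (x=1, y=2)

        # Nested lists ("multi-array"): one extra indirection per lookup.
        nested = [[0] * width for _ in range(height)]
        nested[2][1] = 0xFF00FF                    # same pixel, row index first

        assert flat[1 + 2 * width] == nested[2][1]

    Hashing would not help here: a hash lookup also has to compute a function of (x, y) and then probe a table, which is strictly more work than one multiply-add into contiguous memory.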

    Read the article

  • How to keep track of user images when using a CDN? [closed]

    - by Programmer
    We are considering moving our user profile images from the local server to the Rackspace CDN (Cloud Files). However, how do you keep track of where each user's profile image is located on the CDN? Wouldn't you have to store the CDN URL for each user image in the local database and query it every time you display a user image? Isn't that slower than accessing a user image directly on the local server, which requires no such DB query since you already know where it is stored based on the user's user ID? What if a user has an album of pics? How would you keep track of all those images that belong to just that one user? What about the order of those pics? In the case of the Rackspace CDN, we're looking at using a Container for each individual user to help keep things more logically organized, but we don't know what the best way to track all of it is, since the CDN provides a seemingly random URL for each image. To make matters worse, you can't even delete a non-empty Container belonging to a user when they delete their account; you actually have to delete each object inside the Container one by one before deleting the Container itself. It doesn't end there: you can't have nested Containers or "sub-folders", and you can't rename a file (you must copy it with a new name and delete the old one manually). It just sounds so incredibly more complicated than we thought it would be, and it certainly does not feel "intuitive" compared to local storage, so we don't know what to do. Please help.
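
    One way to frame the tracking question is as a plain mapping table in the local database, so the CDN stays a dumb object store while ordering and albums live locally; below is a minimal sketch using SQLite, with hypothetical table and column names:

        import sqlite3

        db = sqlite3.connect(":memory:")
        db.execute("""
            CREATE TABLE user_images (
                user_id   INTEGER NOT NULL,
                album_id  INTEGER NOT NULL,
                position  INTEGER NOT NULL,   -- order of the picture in the album
                cdn_url   TEXT    NOT NULL,   -- opaque URL handed back by the CDN
                PRIMARY KEY (user_id, album_id, position)
            )
        """)
        db.execute("INSERT INTO user_images VALUES (?, ?, ?, ?)",
                   (42, 1, 0, "https://cdn.example.com/abc123/profile.jpg"))

        # One indexed lookup per page view; the URL itself never needs parsing.
        rows = db.execute("SELECT cdn_url FROM user_images "
                          "WHERE user_id = ? AND album_id = ? ORDER BY position",
                          (42, 1)).fetchall()
        print([r[0] for r in rows])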

    Read the article

  • Characteristics, what's the inverse of (x*(x+1))/2? [closed]

    - by Valmond
    In my game you can spend points to upgrade characteristics. Each characteristic has a formula like: A) out = in: for one point spent, one point gained (you spend 1 point on Force so your Force goes from 5 to 6). B) out = last level (starting at 1): so the first point spent earns you 1 point, the next point spent earns you an additional 2 and so on (+3, +4, +5...). C) The inverse of B): you need to spend 1 point to earn one, then you need to spend 2 to earn another one and so on. I have already found the formula for calculating the actual level of B) when points spent = x: charac = (x*(x+1))/2. But I'd like to know what the "reverse" version of B) (usable for C)) is, i.e. if I have spent x points, how many have I earned, if 1 spent gives 1, 1+2=3 gives 2, 1+2+3=6 gives 3 and so on. I know I can just calculate the numbers, but I'd like to have the formula because it's neater and so that I can stick it in an Excel sheet, for example... Thanks! ps. I think I have nailed it down to something like charac = sqrt(x*m + k), but then I'm stuck doing number guessing for k and m, and I feel I might be wrong anyway as I get close but never hit the spot.
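
    For the record, the "reverse" formula drops out of solving the triangular-number expression above for the number of steps; with x points spent:

        % invert charac = n(n+1)/2, keeping the positive root of n^2 + n - 2x = 0
        \[
          x = \frac{n(n+1)}{2}
          \quad\Longrightarrow\quad
          n = \frac{-1 + \sqrt{1 + 8x}}{2},
          \qquad
          \text{levels earned for C)} = \left\lfloor \frac{-1 + \sqrt{1 + 8x}}{2} \right\rfloor.
        \]

    Spending 1, 3 or 6 points gives 1, 2 or 3 levels respectively, and the floor takes care of the partially paid levels in between; this also matches the guessed shape sqrt(x*m + k) with m = 8 and k = 1, up to the extra -1 and the division by 2.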

    Read the article

  • Multiple "pages" in GWT with human friendly URLs

    - by Andreas Borglin
    Hi. I'm playing with a GWT/GAE project which will have three different "pages", although they are not really pages in a GWT sense. The top views (one for each page) will have completely different layouts, but some of the widgets will be shared. One of the pages is the main page which is loaded by the default url (http://www.site.com), but the other two need additional URL information to differentiate the page type. They also need a name parameter (like http://www.site.com/project/project-name). There are at least two solutions to this that I'm aware of: 1. Use the GWT history mechanism and let page type and parameters (such as project name) be part of the history token. 2. Use servlets with url-mapping patterns (like /project/*). The first choice might seem obvious at first, but it has several drawbacks. First, a user should be able to easily remember and type the URL directly to a project. It is hard to produce a human friendly URL with history tokens. Second, I'm using gwt-presenter and this approach would mean that we need to support subplaces in one token, which I'd rather avoid. Third, a user will typically stay at one page, so it makes more sense that the page information is part of the "static" URL. Using servlets solves all these problems, but also creates other ones. So my first question is, what is the best solution here? If I go for the servlet solution, new questions pop up. It might make sense to split the GWT app into three separate modules, each with an entry point. Each servlet that is mapped to a certain page would then simply forward the request to the GWT module that handles that page. Since a user typically stays at one page, the browser only needs to load the js for that page. Based on what I've read, this solution is not really recommended. I could also stick with one module, but then GWT needs to find out which page it should display. It could either query the server or parse the URL itself. If I stick with one GWT module, I need to keep the page information stored on the server side. Naturally I thought about sessions, but I'm not sure if it's a good idea to mix page information with user data. A session usually lives between user login and logout, but in this case it would need different behavior. Would it be bad practice to handle this via sessions? The one GWT module + servlet solution also leads to another problem. If a user goes from a project page to the main page, how will GWT know that this has happened? The app will not be reloaded, so it will be treated as a simple state change. It seems rather inefficient to have to check page info for every state change. Anyone care to guide me out of the foggy darkness that surrounds me? :-)

    Read the article

  • jquerymobile - include .js and .html

    - by Girija
    Hi, I have described my problem in the following lines; kindly clarify it for me. In my application I am using more than one HTML page for displaying the content, and each page has its own .js file. When I call the HTML page, its .js file is also included. In the .js I am using $('div').live('pageshow',function(){}). I am calling the HTML file from the .js (using $.mobile.changePage("htmlpage")). My problem: consider that I have two HTML files. second.html is called from within one.js. When I show second.html, one.js is run again at that point: I am getting the alert "one.js" and then "second.js". Please help me. Thanks in advance. Please point out my mistake. I have attached the code. one.html <!DOCTYPE html> <html> <head> <title>Page Title</title> <link rel="stylesheet" href="jquery.mobile-1.0a2.min.css" /> <script src="jquery-1.4.3.min.js"></script> <script src="jquery.mobile-1.0a2.min.js"></script> <script src="Scripts/one.js"></script> </head> <body> <div data-role="page"> </div> </body> </html> Second.html <!DOCTYPE html> <html> <head> <title>Sample </title> <link rel="stylesheet" href="../jquery.mobile-1.0a2.min.css" /> <script src="../jquery-1.4.3.min.js"></script> <script src="../jquery.mobile-1.0a2.min.js"></script> <script type="text/javascript" src="Scripts/second.js"></script> </head> <body> <div data-role="page"> <div data-role="button" id="link" >Second</div> </div><!-- /page --> </body> </html> one.js $('div').live('pageshow',function() { alert("one.js"); //AJAX Calling //success result than call the second.html $.mobile.changePage("second.html"); }); second.js $('div').live('pageshow',function(){ { alert('second.js'); //AJAX Calling //success result than call the second.html $.mobile.changePage("third.html"); }); Note: when I show fourth.html, the following files are reloaded (one.js, second.js, third.js and fourth.js, but I need fourth.js alone). I tried to use the $.document.ready(function(){}); but then the .js did not get called. :(

    Read the article

  • Limiting TCP sends with a "to-be-sent" queue and other design issues.

    - by Poni
    Hello all! This question is the result of two other questions I've asked in the last few days. I'm creating a new question because I think it's related to the "next step" in my understanding of how to control the flow of my send/receive, something I didn't get a full answer to yet. The other related questions are: http://stackoverflow.com/questions/3028376/an-iocp-documentation-interpretation-question-buffer-ownership-ambiguity http://stackoverflow.com/questions/3028998/non-blocking-tcp-buffer-issues In summary, I'm using Windows I/O Completion Ports. I have several threads that process notifications from the completion port. I believe the question is platform-independent and would have the same answer as if doing the same thing on a *nix, *BSD, or Solaris system. So, I need to have my own flow control system. Fine. So I send and send and send, a lot. How do I know when to start queueing the sends, as the receiver side is limited to X amount? Let's take an example (closest thing to my question): FTP protocol. I have two servers; one is on a 100Mb link and the other is on a 10Mb link. I order the 100Mb one to send a 1GB file to the other one (the 10Mb linked one). It finishes with an average transfer rate of 1.25MB/s. How did the sender (the 100Mb linked one) know when to hold off sending, so the slower one wouldn't be flooded? Another way to ask this: can I get a "hold-your-sendings" notification from the remote side? Is it built into TCP, or does the so-called "reliable network protocol" need me to do it myself? Again, I have a loop with many sends to a remote server, and at some point, within that loop, I'll have to determine whether I should queue that send or whether I can pass it on to the transport layer (TCP). How do I do that? What would you do? Of course, when I get a completion notification from IOCP that a send was done, I'll issue other pending sends; that's clear. Another design question related to this: since I am to use custom buffers with a send queue, and these buffers are freed to be reused (thus not using the "delete" keyword) when a "send-done" notification has arrived, I'll have to use mutual exclusion on that buffer pool. Using a mutex slows things down, so I've been thinking: why not have each thread keep its own buffer pool? Accessing it, at least when getting the required buffers for a send operation, will then require no mutex, because the pool belongs to that thread only. The buffer pool is located at the thread-local storage (TLS) level. No shared pool implies no lock needed, which implies faster operations, BUT it also implies more memory used by the app, because even if one thread has already allocated 1000 buffers, another thread that is sending right now and needs 1000 buffers to send something will need to allocate its own. This is a long question and I hope none got hurt (: Thank you all!
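
    On the "hold-your-sendings" part specifically: TCP's window-based flow control is built in, and it surfaces to the application as a send buffer that fills up; with IOCP an overlapped send simply stays pending until the kernel can accept the data. Below is a minimal Python sketch of what that back-pressure looks like with a non-blocking socket (try_send is just an illustrative helper, not part of any library):

        import socket

        def try_send(sock: socket.socket, data: bytes) -> int:
            """Hand the kernel as much as it will take; return how many bytes it accepted."""
            try:
                return sock.send(data)      # may accept only part of `data`
            except BlockingIOError:
                return 0                    # send buffer full: the receiver is slow,
                                            # so keep the rest in your own queue and retry later

        # Sketch of usage, assuming `sock` is a connected non-blocking TCP socket:
        #   sock.setblocking(False)
        #   pending = bytearray(payload)
        #   sent = try_send(sock, bytes(pending))
        #   del pending[:sent]              # whatever the kernel refused stays queued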

    Read the article

  • XSD: how to use 'unique' & 'key'/'keyref' with element values?

    - by Koohoolinn
    I trying to use and / with element values but I just can't get it to work. If I do it with attrubute values it works like a charm. Test.xml <test:config xmlns:test="http://www.example.org/Test" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.example.org/Test Test.xsd "> <test:location id="id1" path="/path2"> <test:roles> <test:role>role1</test:role> <test:role>role2</test:role> <test:role>role2</test:role> <!-- DUPLICATE: FAIL VALIDATION --> </test:roles> <test:action name="action1"> <test:roles> <test:role>role1</test:role> <test:role>role1</test:role> <!-- DUPLICATE: FAIL VALIDATION --> <test:role>role3</test:role> <!-- NOT DEFINED: FAIL VALIDATION --> </test:roles> </test:action> </test:location> </test:config> I want ensure that roles are only defined once and that the roles defined under the action element are only those defined at the upper level. Test.xsd <xs:element name="config"> <xs:complexType> <xs:sequence> <xs:element ref="test:location" maxOccurs="unbounded" /> </xs:sequence> </xs:complexType> </xs:element> <xs:element name="location" type="test:LocationType"> <xs:key name="keyRole"> <xs:selector xpath="test:roles" /> <xs:field xpath="test:role" /> </xs:key> <xs:keyref name="keyrefRole" refer="test:keyRole"> <xs:selector xpath="test:action/test:roles" /> <xs:field xpath="test:role" /> </xs:keyref> </xs:element> <xs:complexType name="LocationType"> <xs:sequence> <xs:element ref="test:roles" minOccurs="0" /> <xs:element name="action" type="test:ActionType" minOccurs="0" maxOccurs="unbounded"/> </xs:sequence> <xs:attribute name="id" type="xs:string" use="required"/> <xs:attribute name="path" type="xs:string" use="required"/> </xs:complexType> <xs:element name="roles" type="test:RolesType"> <xs:unique name="uniqueRole"> <xs:selector xpath="." /> <xs:field xpath="test:role" /> </xs:unique> </xs:element> <xs:complexType name="RolesType"> <xs:sequence> <xs:element name="role" type="xs:string" maxOccurs="unbounded"/> </xs:sequence> </xs:complexType> <xs:complexType name="ActionType"> <xs:sequence> <xs:element ref="test:roles" /> </xs:sequence> <xs:attribute name="name" type="xs:string" use="required" /> </xs:complexType> The validation fails with these messages: Description Resource Path Location Type cvc-identity-constraint.3: Field "./test:role" of identity constraint "keyrefRole" matches more than one value within the scope of its selector; fields must match unique values. Test.xml /filebrowser-ejb/src/test/resources line 15 XML Problem cvc-identity-constraint.3: Field "./test:role" of identity constraint "keyrefRole" matches more than one value within the scope of its selector; fields must match unique values. Test.xml /filebrowser-ejb/src/test/resources line 16 XML Problem cvc-identity-constraint.3: Field "./test:role" of identity constraint "keyRole" matches more than one value within the scope of its selector; fields must match unique values. Test.xml /filebrowser-ejb/src/test/resources line 9 XML Problem cvc-identity-constraint.3: Field "./test:role" of identity constraint "keyRole" matches more than one value within the scope of its selector; fields must match unique values. Test.xml /filebrowser-ejb/src/test/resources line 10 XML Problem cvc-identity-constraint.3: Field "./test:role" of identity constraint "uniqueRole" matches more than one value within the scope of its selector; fields must match unique values. 
Test.xml /filebrowser-ejb/src/test/resources line 9 XML Problem cvc-identity-constraint.3: Field "./test:role" of identity constraint "uniqueRole" matches more than one value within the scope of its selector; fields must match unique values. Test.xml /filebrowser-ejb/src/test/resources line 10 XML Problem cvc-identity-constraint.3: Field "./test:role" of identity constraint "uniqueRole" matches more than one value within the scope of its selector; fields must match unique values. Test.xml /filebrowser-ejb/src/test/resources line 15 XML Problem cvc-identity-constraint.3: Field "./test:role" of identity constraint "uniqueRole" matches more than one value within the scope of its selector; fields must match unique values. Test.xml /filebrowser-ejb/src/test/resources line 16 XML Problem cvc-identity-constraint.4.1: Duplicate unique value [role1] declared for identity constraint "uniqueRole" of element "roles". Test.xml /filebrowser-ejb/src/test/resources line 9 XML Problem cvc-identity-constraint.4.1: Duplicate unique value [role1] declared for identity constraint "uniqueRole" of element "roles". Test.xml /filebrowser-ejb/src/test/resources line 15 XML Problem cvc-identity-constraint.4.2.2: Duplicate key value [role1] declared for identity constraint "keyRole" of element "location". Test.xml /filebrowser-ejb/src/test/resources line 9 XML Problem cvc-identity-constraint.4.3: Key 'keyrefRole' with value 'role3' not found for identity constraint of element 'location'. Test.xml /filebrowser-ejb/src/test/resources line 19 XML Problem If I comment out the lines that should fail, validation still fails now with these messages: Description Resource Path Location Type cvc-identity-constraint.3: Field "./test:role" of identity constraint "keyRole" matches more than one value within the scope of its selector; fields must match unique values. Test.xml /filebrowser-ejb/src/test/resources line 10 XML Problem cvc-identity-constraint.3: Field "./test:role" of identity constraint "uniqueRole" matches more than one value within the scope of its selector; fields must match unique values. Test.xml /filebrowser-ejb/src/test/resources line 10 XML Problem What am I doing wrong?

    Read the article

  • Using javascript to limit survey choices to three unique values

    - by leanne
    I'm required to use a limited survey application, and have to adapt the provided code to meet more advanced functionality. I need to create a weighted ranking question, so users can select their top three choices and the data will go into the survey application and be accessible in the survey reports. The application only supports 2 types of questions (text fill & multiple choice) but I can alter the code, as long as it still sends the form data back to the survey application. The code is set up so it will show a drop-down menu of 0-3 for each option. Now I want to limit the user's choices so they can only select one "1" "2" or "3", three choices total. Ideally, if the user already had "2" selected for one option and they tried to select it for another option, it would set the first "2" as "0" or blank. Is this possible to do with javascript? If so, does anyone know of a site that might show code like this, or provide similar enough examples that I could adapt it? Current code here: <html> <head><title>Survey</title></head> <!-- Changes - remove br to put dropdown next to text for each item. Switch text & dropdown order for each item. - add comments to separate each question - removed blue title font - add instructions Goals - limit choices to one 1 one 2 and one 3, three choices total. --> <link href="---" rel="stylesheet" type="text/css"> <body bgcolor="#3c76a3"> <!-- TRANSITIONAL DIALOG BOX --> <table border="0" align="center" cellpadding="0" cellspacing="0" style="background-attachment: scroll; background-color: #3c76a3; background-repeat: no-repeat; background-position: left top;" bgcolor="#3c76a3" topmargin="0" marginwidth="0" marginheight="0" width="100%" height="100%"> <tr> <td> <table border="0" align="center" cellpadding="0" cellspacing="0" id="survey"> <tr> <td><p>&nbsp;</p> <!-- HEADER END --> <!-- FORM START TAG --><form name="survey" action="---" method="POST"> <FONT face="Verdana, Arial, Helvetica, sans-serif"> <b>survey</b><hr> <!-- 1 --> <input type=hidden name="Buy R.J. a DeLorean_multiple_answers" value="one"> <font size=2><select name="Buy R.J. a DeLorean" SIZE=1> <option value=""> <option value="0">0 <option value="1">1 <option value="2">2 <option value="3">3 </select></font> <input type="hidden" name="Buy R.J. a DeLorean_help" value=""> <b><font size=2>Buy R.J. a DeLorean</font></b> <hr size=1> <!-- 2 --> <input type=hidden name="Fill Lisa's office with marshmallows._multiple_answers" value="one"> <font size=2><select name="Fill Lisa's office with marshmallows." SIZE=1> <option value=""> <option value="0">0 <option value="1">1 <option value="2">2 <option value="3">3 </select></font> <input type="hidden" name="Fill Lisa's office with marshmallows._help" value=""> <b><font size=2>Fill Lisa's office with marshmallows.</font></b> <hr size=1> <!-- 3 --> <input type=hidden name="Install a beer fridge in everyone's filing cabinets._multiple_answers" value="one"> <font size=2><select name="Install a beer fridge in everyone's filing cabinets." 
SIZE=1> <option value=""> <option value="0">0 <option value="1">1 <option value="2">2 <option value="3">3 </select></font> <input type="hidden" name="Install a beer fridge in everyone's filing cabinets._help" value=""> <b><font size=2>Install a beer fridge in everyone's filing cabinets.</font></b> <hr size=1> <!-- 4 --> <input type=hidden name="Buy a company Cessna_multiple_answers" value="one"> <font size=2><select name="Buy a company Cessna" SIZE=1> <option value=""> <option value="0">0 <option value="1">1 <option value="2">2 <option value="3">3 </select></font> <input type="hidden" name="Buy a company Cessna_help" value=""> <b><font size=2>Buy a company Cessna</font></b><br> <hr size=1> <!-- 5 --> <input type=hidden name="Replace Conf2's chairs with miniature ponies._multiple_answers" value="one"> <font size=2><select name="Replace Conf2's chairs with miniature ponies." SIZE=1> <option value=""> <option value="0">0 <option value="1">1 <option value="2">2 <option value="3">3 </select></font> <input type="hidden" name="Replace Conf2's chairs with miniature ponies._help" value=""> <b><font size=2>Replace Conf2's chairs with miniature ponies.</font></b> <hr size=1> <input type="hidden" name="question_names" value="{Buy R.J. a DeLorean} {Fill Lisa's office with marshmallows.} {Install a beer fridge in everyone's filing cabinets.} {Buy a company Cessna} {Replace Conf2's chairs with miniature ponies.}"> <p align="right"><input type="image" BORDER=0 title="Save Changes" alt="Save Changes" src="---" name="button_save_changes"> <input type="hidden" name="showconfirm" value="T"> <input type="hidden" name="showresults" value="F"> <input type="hidden" name="preventdupesmemberid" value="T"> <input type="hidden" name="preventdupesip" value="F"> <input type="hidden" name="numberquestions" value="F"> <input type="hidden" name="destinationurl" value=""> <input type="hidden" name="original_survey_id" value="62"> <!-- FORM END TAG --></form> <!-- FOOTER START --> </td> </tr> </table> </td> </tr> </table> <!-- END HEADER --> </body> </html>

    Read the article

  • The Complete List of iPad Tips, Tricks, and Tutorials

    - by Ross
    The Apple iPad is the latest new toy, and we’ve put together a comprehensive list of every tip, trick, and tutorial that we could find to help you get the most out of it—and we’re even giving one away to one lucky reader. So read on! Note: We’ll be keeping this page updated as we find more great articles, so you should bookmark this page for future reference. Want Your Own iPad? How-To Geek is Giving One Away! All you have to do to enter is become a fan of our Facebook page, and we’ll pick a random fan to win the prize. Win an iPad on the How-To Geek Facebook Fan Page Disable the “clicking sound” on the iPad Keyboard Does the clicking sound when you tap the iPad keyboard bother you? Thankfully it’s easy to disable with a couple of taps. How to disable the “clicking sound” on your iPad’s keyboard Enable and add bookmarks to the Safari Bookmarks Bar on your iPad By default, Safari doesn’t display the Bookmarks Bar. This tip shows you how to change that. How to enable and add bookmarks to the Safari Bookmarks Bar on your iPad Clear the Cache, History and Cookies in Safari for the iPad You’re probably used to clearing this kind of data right from within the browser. Not so with Safari on the iPad – but here’s how you can. How to clear the cache, history and cookies in Safari for iPad How to add more Apps to your iPad Dock The iPad has four icons in its ‘dock’. Did you know it can hold 6? How to add more Apps to your iPad Dock Convert PDF files to ePub files to read on your iPad with iBooks ePub is the format that iBooks are in. So for those of you with large eBook collections in PDF, here’s how you convert them to read in iBooks. How to convert PDF files to ePub files to read on your iPad with iBooks How to force your iPad to restart Has an app caused your iPad to freeze up, and you can’t escape? This tip shows you how to force your iPad to restart. How to force your iPad to restart How to export Keynote for iPad presentations to your Mac or PC Exporting Keynote presentations from your iPad to your Mac or PC isn’t as straight forward as you might have expected. This tutorial shows you how. How to export Keynote for iPad presentations to your Mac or PC How to import presentations to Keynote on your iPad Having trouble getting your presentations onto your iPad? How to import presentations to Keynote on your iPad How to import documents to Pages on your iPad This guide shows you how to transfer documents (MS Word or Pages) from your Mac/PC to your iPad. How to import documents to Pages on your iPad How to insert photos in a Pages document using iPad and share it as a PDF Want to spice up that doc with a picture you just took? This tutorial will show you how – and how to export that document as a PDF. How to insert photos in a Pages document using iPad and share it as a PDF How to lock your iPad If you have kids or co-workers/friends who think it’s funny to mess with your iPad – lock it. How to lock your iPad How to remove the “Sent from my iPad” signature from outgoing email on your iPad Does everyone need to know you just sent that email from your iPad? Probably not. This guide shows you how to remove the “Sent from my iPad” signature and replace it with your own (or none). How to remove the “Sent from my iPad” signature from outgoing email on your iPad How To Sync Multiple Calendars to the iPad With Google Sync This tutorial will show you a workaround on how to sync multiple calendars on your iPad using Google Sync. 
How to Sync Multiple Calendars to the iPad With Google Sync How to determine the MAC address of your iPad If your network restricts connections via MAC address – this guide will show you how to determine what yours is. How to determine the MAC address of your iPad How to take a screenshot of your iPad Do you need to take a screenshot of your iPad? This quick tip shows you how to do just that. How to take a screenshot of your iPad How to delete apps from your iPod Touch, iPhone or iPad Anyone who had an iPod Touch or iPhone before they had an iPad won’t need this tutorial. But if you’re new to the experience, this one will help. How to delete apps from your iPod Touch, iPhone or iPad How to determine the iPad ECID on Windows and Mac iPadintosh shows us how to determine the iPad’s ECID code – something you’ll want to have come Jailbreak time. How to grab the iPad ECID in Windows or OS X iPad Apps: Twitter and social networking essentials Enggadget has you covered with reviews of the first slew of iPad specific Twitter and other social networking apps. iPad Apps: Twitter and social networking essentials What does your website look like on an iPad? iPad Peek is a web based tool that allows you to enter any given URL, and it will display that page the same way Safari on the iPad does. Great for web site owners who don’t have access to an iPad. iPadPeek Stream Music and Videos to your iPad Gizmodo reviews the iPad app StreamToMe, which allows you to stream media from your Mac to your iPad across your local network. Their feelings in a nutshell – worth the $3, but not perfect. Review: StreamToMe for the iPad Apple iPad : Change links in Google Reader to point to full HTML webpage How to change links in Safari for iPad so that Google Reader points to a full HTML webpage How to connect an iPad to your existing wireless keyboard This video will show you how to connect your iPad to a wireless keyboard if you’re having any problems – and from the sound of things, quite a few folks are. via TUAW How to get started with the iPad Mashable has a very entry-level guide that will help you set up your iPad for the first time. Mashable’s Guide to Setting up the iPad Essential iPad Apps Downloadsquad gives mini-reviews to 8 iPad apps that you should install as soon as you get your iPad. iPad App Buyers Guide: Essential Apps you should get on day one Videos: The Official iPad Guided Tours From none other than Apple! Great getting started videos for all the included iPad apps. The Official iPad Guided Tours The Official iPad Manual When you buy an iPad, you don’t get a manual. But that’s not to say there isn’t one. Apple provides a 150 guide for your iPad in PDF format. The Official iPad Manual (pdf) How to print from your iPad Sure, it’s actually just an App (PrintCentral – $9.99 USD), but as of right now, it’s the only way. PrintCentral How to make your own iPad Wallpaper A perfectly detailed tutorial on how to make your own wallpaper for your iPad. The author also provides a really nice sample wallpaper, published under the Attribution-Noncommercial 2.0 Generic license. How to make your own iPad Wallpaper Got any more tips? Share them in the comments, and we’ll update the post with the links, or just the tip itself. Similar Articles Productive Geek Tips Want an iPad? How-To Geek is Giving One Away!Why Wait? Amazing New Add-on Turns Your iPhone into an iPad! 

    Read the article

  • Consume WCF Service InProcess using Agatha and WCF

    - by REA_ANDREW
    I have been looking into this lately for a specific reason.  Some integration tests I want to write I want to control the types of instances which are used inside the service layer but I want that control from the test class instance.  One of the problems with just referencing the service is that a lot of the time this will by default be done inside a different process.  I am using StructureMap as my DI of choice and one of the tools which I am using inline with RhinoMocks is StructureMap.AutoMocking.  With StructureMap the main entry point is the ObjectFactory.  This will be process specific so if I decide that the I want a certain instance of a type to be used inside the ServiceLayer I cannot configure the ObjectFactory from my test class as that will only apply to the process which it belongs to. This is were I started thinking about two things: Running a WCF in process Being able to share mocked instances across processes A colleague in work pointed me to a project which is for the latter but I thought that it would be a better solution if I could run the WCF Service in process.  One of the projects which I use when I think about WCF Services is AGATHA, and the one which I have to used to try and get my head around doing this. Another asset I have is a book called Programming WCF Services by Juval Lowy and if you have not heard of it or read it I would definately recommend it.  One of the many topics that is inside this book is the type of configuration you need to communicate with a service in the same process, and it turns out to be quite simple from a config point of view. <system.serviceModel> <services> <service name="Agatha.ServiceLayer.WCF.WcfRequestProcessor"> <endpoint address ="net.pipe://localhost/MyPipe" binding="netNamedPipeBinding" contract="Agatha.Common.WCF.IWcfRequestProcessor"/> </service> </services> <client> <endpoint name="MyEndpoint" address="net.pipe://localhost/MyPipe" binding="netNamedPipeBinding" contract="Agatha.Common.WCF.IWcfRequestProcessor"/> </client> </system.serviceModel>   You can see here that I am referencing the Agatha object and contract here, but also that my binding and the address is something called Named Pipes.  THis is sort of the “Magic” which makes it happen in the same process. Next I need to open the service prior to calling the methods on a proxy which I also need.  My initial attempt at the proxy did not use any Agatha specific coding and one of the pains I found was that you obviously need to give your proxy the known types which the serializer can be aware of.  So we need to add to the known types of the proxy programmatically.  I came across the following blog post which showed me how easy it was http://bloggingabout.net/blogs/vagif/archive/2009/05/18/how-to-programmatically-define-known-types-in-wcf.aspx. First Pass So with this in mind, and inside a console app this was my first pass at consuming a service in process.  First here is the proxy which I made making use of the Agatha IWcfRequestProcessor contract. 
public class InProcProxy : ClientBase<Agatha.Common.WCF.IWcfRequestProcessor>, Agatha.Common.WCF.IWcfRequestProcessor { public InProcProxy() { } public InProcProxy(string configurationName) : base(configurationName) { } public Agatha.Common.Response[] Process(params Agatha.Common.Request[] requests) { return Channel.Process(requests); } public void ProcessOneWayRequests(params Agatha.Common.OneWayRequest[] requests) { Channel.ProcessOneWayRequests(requests); } } So with the proxy in place I could then use this after opening the service so here is the code which I use inside the console app make the request. static void Main(string[] args) { ComponentRegistration.Register(); ServiceHost serviceHost = new ServiceHost(typeof(Agatha.ServiceLayer.WCF.WcfRequestProcessor)); serviceHost.Open(); Console.WriteLine("Service is running...."); using (var proxy = new InProcProxy()) { foreach (var operation in proxy.Endpoint.Contract.Operations) { foreach (var t in KnownTypeProvider.GetKnownTypes(null)) { operation.KnownTypes.Add(t); } } var request = new GetProductsRequest(); var responses = proxy.Process(new[] { request }); var response = (GetProductsResponse)responses[0]; Console.WriteLine("{0} Products have been retrieved", response.Products.Count); } serviceHost.Close(); Console.WriteLine("Finished"); Console.ReadLine(); } So what I used here is the KnownTypeProvider of Agatha to easily get all the types I need for the service/proxy and add them to the proxy.  My Request handler for this was just a test one which always returned 2 products. public class GetProductsHandler : RequestHandler<GetProductsRequest,GetProductsResponse> { public override Agatha.Common.Response Handle(GetProductsRequest request) { return new GetProductsResponse { Products = new List<ProductDto> { new ProductDto{}, new ProductDto{} } }; } } Second Pass Now after I did this I started reading up some more on some resources including more by Davy Brion and others on Agatha.  Now it turns out that the work I did above to create a derived class of the ClientBase implementing Agatha.Common.WCF.IWcfRequestProcessor was not necessary due to a nice class which is present inside the Agatha code base, RequestProcessorProxy which takes care of this for you! :-) So disregarding that class I made for the proxy and changing my code to use it I am now left with the following: static void Main(string[] args) { ComponentRegistration.Register(); ServiceHost serviceHost = new ServiceHost(typeof(Agatha.ServiceLayer.WCF.WcfRequestProcessor)); serviceHost.Open(); Console.WriteLine("Service is running...."); using (var proxy = new RequestProcessorProxy()) { var request = new GetProductsRequest(); var responses = proxy.Process(new[] { request }); var response = (GetProductsResponse)responses[0]; Console.WriteLine("{0} Products have been retrieved", response.Products.Count); } serviceHost.Close(); Console.WriteLine("Finished"); Console.ReadLine(); }   Cheers for now, Andy References Agatha WCF InProcess Without WCF StructureMap.AutoMocking Cross Process Mocking Agatha Programming WCF Services by Juval Lowy

    Read the article

  • Parallelism in .NET – Part 5, Partitioning of Work

    - by Reed
    When parallelizing any routine, we start by decomposing the problem.  Once the problem is understood, we need to break our work into separate tasks, so each task can be run on a different processing element.  This process is called partitioning. Partitioning our tasks is a challenging feat.  There are opposing forces at work here: too many partitions adds overhead, too few partitions leaves processors idle.  Trying to work the perfect balance between the two extremes is the goal for which we should aim.  Luckily, the Task Parallel Library automatically handles much of this process.  However, there are situations where the default partitioning may not be appropriate, and knowledge of our routines may allow us to guide the framework to making better decisions. First off, I’d like to say that this is a more advanced topic.  It is perfectly acceptable to use the parallel constructs in the framework without considering the partitioning taking place.  The default behavior in the Task Parallel Library is very well-behaved, even for unusual work loads, and should rarely be adjusted.  I have found few situations where the default partitioning behavior in the TPL is not as good or better than my own hand-written partitioning routines, and recommend using the defaults unless there is a strong, measured, and profiled reason to avoid using them.  However, understanding partitioning, and how the TPL partitions your data, helps in understanding the proper usage of the TPL. I indirectly mentioned partitioning while discussing aggregation.  Typically, our systems will have a limited number of Processing Elements (PE), which is the terminology used for hardware capable of processing a stream of instructions.  For example, in a standard Intel i7 system, there are four processor cores, each of which has two potential hardware threads due to Hyperthreading.  This gives us a total of 8 PEs – theoretically, we can have up to eight operations occurring concurrently within our system. In order to fully exploit this power, we need to partition our work into Tasks.  A task is a simple set of instructions that can be run on a PE.  Ideally, we want to have at least one task per PE in the system, since fewer tasks means that some of our processing power will be sitting idle.  A naive implementation would be to just take our data, and partition it with one element in our collection being treated as one task.  When we loop through our collection in parallel, using this approach, we’d just process one item at a time, then reuse that thread to process the next, etc.  There’s a flaw in this approach, however.  It will tend to be slower than necessary, often slower than processing the data serially. The problem is that there is overhead associated with each task.  When we take a simple foreach loop body and implement it using the TPL, we add overhead.  First, we change the body from a simple statement to a delegate, which must be invoked.  In order to invoke the delegate on a separate thread, the delegate gets added to the ThreadPool’s current work queue, and the ThreadPool must pull this off the queue, assign it to a free thread, then execute it.  If our collection had one million elements, the overhead of trying to spawn one million tasks would destroy our performance. The answer, here, is to partition our collection into groups, and have each group of elements treated as a single task.  
By adding a partitioning step, we can break our total work into small enough tasks to keep our processors busy, but large enough tasks to avoid overburdening the ThreadPool.  There are two clear, opposing goals here: Always try to keep each processor working, but also try to keep the individual partitions as large as possible. When using Parallel.For, the partitioning is always handled automatically.  At first, partitioning here seems simple.  A naive implementation would merely split the total element count up by the number of PEs in the system, and assign a chunk of data to each processor.  Many hand-written partitioning schemes work in this exactly manner.  This perfectly balanced, static partitioning scheme works very well if the amount of work is constant for each element.  However, this is rarely the case.  Often, the length of time required to process an element grows as we progress through the collection, especially if we’re doing numerical computations.  In this case, the first PEs will finish early, and sit idle waiting on the last chunks to finish.  Sometimes, work can decrease as we progress, since previous computations may be used to speed up later computations.  In this situation, the first chunks will be working far longer than the last chunks.  In order to balance the workload, many implementations create many small chunks, and reuse threads.  This adds overhead, but does provide better load balancing, which in turn improves performance. The Task Parallel Library handles this more elaborately.  Chunks are determined at runtime, and start small.  They grow slowly over time, getting larger and larger.  This tends to lead to a near optimum load balancing, even in odd cases such as increasing or decreasing workloads.  Parallel.ForEach is a bit more complicated, however. When working with a generic IEnumerable<T>, the number of items required for processing is not known in advance, and must be discovered at runtime.  In addition, since we don’t have direct access to each element, the scheduler must enumerate the collection to process it.  Since IEnumerable<T> is not thread safe, it must lock on elements as it enumerates, create temporary collections for each chunk to process, and schedule this out.  By default, it uses a partitioning method similar to the one described above.  We can see this directly by looking at the Visual Partitioning sample shipped by the Task Parallel Library team, and available as part of the Samples for Parallel Programming.  When we run the sample, with four cores and the default, Load Balancing partitioning scheme, we see this: The colored bands represent each processing core.  You can see that, when we started (at the top), we begin with very small bands of color.  As the routine progresses through the Parallel.ForEach, the chunks get larger and larger (seen by larger and larger stripes). Most of the time, this is fantastic behavior, and most likely will out perform any custom written partitioning.  However, if your routine is not scaling well, it may be due to a failure in the default partitioning to handle your specific case.  With prior knowledge about your work, it may be possible to partition data more meaningfully than the default Partitioner. There is the option to use an overload of Parallel.ForEach which takes a Partitioner<T> instance.  The Partitioner<T> class is an abstract class which allows for both static and dynamic partitioning.  
By overriding Partitioner<T>.SupportsDynamicPartitions, you can specify whether a dynamic approach is available.  If not, your custom Partitioner<T> subclass would override GetPartitions(int), which returns a list of IEnumerator<T> instances.  These are then used by the Parallel class to split work up amongst processors.  When dynamic partitioning is available, GetDynamicPartitions() is used, which returns an IEnumerable<T> for each partition.  If you do decide to implement your own Partitioner<T>, keep in mind the goals and tradeoffs of different partitioning strategies, and design appropriately. The Samples for Parallel Programming project includes a ChunkPartitioner class in the ParallelExtensionsExtras project.  This provides example code for implementing your own, custom allocation strategies, including a static allocator of a given chunk size.  Although implementing your own Partitioner<T> is possible, as I mentioned above, this is rarely required or useful in practice.  The default behavior of the TPL is very good, often better than any hand written partitioning strategy.
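
    The growing-chunk idea itself is easy to illustrate outside .NET; the generator below is a hypothetical Python sketch of the strategy only, not the TPL's actual partitioning algorithm:

        def growing_chunks(items, start=1, factor=2, max_chunk=512):
            # Yield chunks that start small and grow geometrically: small early
            # chunks keep every worker busy, larger later chunks keep overhead low.
            it = iter(items)
            size = start
            while True:
                chunk = []
                for _ in range(size):
                    try:
                        chunk.append(next(it))
                    except StopIteration:
                        if chunk:
                            yield chunk
                        return
                yield chunk
                size = min(size * factor, max_chunk)

        for chunk in growing_chunks(range(20)):
            print(chunk)   # [0], [1, 2], [3, 4, 5, 6], [7, ..., 14], [15, ..., 19]

    Early, tiny chunks keep every worker busy at the start; later, larger chunks keep the per-chunk overhead down, which is exactly the trade-off described above.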

    Read the article

  • TFS 2010 Basic Concepts

    - by jehan
    Here, I'm going to discuss some key architectural changes and concepts introduced in TFS 2010 compared to TFS 2008. In TFS 2010, first you run the installation and then you configure one of the available installation features. This is a bit similar to a SharePoint installation, where you first do the installation and then configure the SharePoint farms. 1) Installation features available in TFS 2010: a) Basic: the most compact TFS installation possible. It will install and configure Source Control, Work Item Tracking and Build services only (SharePoint and Reporting integration will not be possible). b) Standard Single Server: suitable for a single-server deployment of TFS. It will install and configure Windows SharePoint Services for you and will use the default instance of SQL Server. c) Advanced: suitable if you want to use remote servers for the SQL Server databases, SharePoint Products and Technologies, and SQL Server Reporting Services. d) Application Tier Only: if you want to configure high availability for Team Foundation Server in a load-balanced environment (NLB), move Team Foundation Server from one server to another, or restore TFS. e) Upgrade: if you want to upgrade from a prior version of TFS. Note: one more important thing to know about TFS 2010 Basic is that it can be installed on client operating systems (Windows 7 and Windows Vista SP3), whereas previous versions of TFS (2008 and 2005) could not be installed on a client OS. 2) Team Project Collections: Connect to TFS dialog box in TFS 2008: In TFS 2008, the TFS server contains a set of team projects; each project may or may not be independent of the others, every check-in gets an ever-increasing changeset ID irrespective of the team project in which it is checked in, and the same applies to work items, which also get unique work item IDs. The main problem with this approach was that certain things which the application development process requires were impossible to do: a) If something has gone wrong in one team project and you want to restore it back to an earlier state where it was working properly, you have to restore the Team Foundation Server database from the backup you have taken as per your maintenance plans, and because of this the other team projects may lose work which is not backed up. b) Your company had a merger with some other company and now you have two TFS servers.
One TFS server is the one you have been working on and the other is the one the other company was using, and now after the merge you want to integrate the team projects from the two TFS servers into one, which is almost impossible to achieve in TFS 2008. You can manually create the team projects you want to integrate from the other TFS server on one server (in Source Control), but you will lose the history of changesets, work items and other data that is very important. There were a few more issues of this sort which were difficult to resolve in TFS 2008. To resolve these kinds of scenarios, which mainly relate to TFS maintenance, integration, migration and security, Microsoft has introduced the Team Project Collections concept in TFS 2010. This concept is similar to SharePoint site collections, and if you are familiar with the SharePoint architecture it will help you understand the TFS 2010 architecture easily. Connect to TFS dialog box in TFS 2010: In the above dialog box you can see there are two team project collections. Each team project collection can contain any number of team projects; on the right side it shows the two team projects in the team project collection (Default Collection) which I have chosen. Note: you can connect to only one team project collection at a time using an instance of TFS Team Explorer. How does it work? To introduce team project collections, the TFS databases have been reorganized. TFS 2008 was composed of 5-7 databases partitioned by subsystem (one each for Version Control, Work Item Tracking, Build, Integration, Project Management...). The new TFS 2010 database architecture is: TFS_Config: the root database, containing centralized TFS configuration data, including the list of all team project collections that exist on the TFS server. TFS_Warehouse: the data warehouse, containing all the reporting data served by this server (farm). TFS_*: one database per team project collection, containing all of that collection’s operational data regardless of subsystem. In addition to these, you will have databases for SharePoint and Report Server. 3) TFS Farms: As TFS 2010 can be configured with multiple application tiers and multiple database tiers, it is more appropriate to call a multi-server installation of TFS a TFS farm. NLB support for TFS application tiers: with TFS 2010 you can configure multiple TFS application tier machines to serve the same set of team project collections. The primary purpose of NLB support is to enable cleaner and more complete high availability than in TFS 2008. Even if an application tier in the farm fails, the farm will automatically continue to work with hardly any indication to end users that there is a problem. SQL data tiers: with TFS 2010 you can configure many SQL Servers. Each database can be configured to live on any SQL Server, because each team project collection is an independent database. This can also be used to load balance databases across SQL Servers. These new capabilities will significantly change the way enterprises manage their TFS installations in the future. With team project collections and TFS farms you can create a single, arbitrarily large TFS installation and grow it incrementally by adding application tiers and SQL Servers as needed.
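As a small client-side illustration of the collection concept, here is a hedged sketch using the TFS 2010 client object model: a connection always targets one specific team project collection, whose name is part of the URL. The server URL and collection name below are placeholders, and the API calls are an assumption on my part rather than anything shown in the article.

using System;
using Microsoft.TeamFoundation.Client;
using Microsoft.TeamFoundation.VersionControl.Client;

class ListTeamProjects
{
    static void Main()
    {
        // Placeholder address: the collection name ("DefaultCollection") is part
        // of the URL, so this connection targets exactly one team project collection.
        var collectionUri = new Uri("http://tfsserver:8080/tfs/DefaultCollection");

        using (var collection = new TfsTeamProjectCollection(collectionUri))
        {
            collection.EnsureAuthenticated();

            // Each collection carries its own independent version control store,
            // so this only lists the team projects inside that one collection.
            var versionControl = collection.GetService<VersionControlServer>();
            foreach (TeamProject project in versionControl.GetAllTeamProjects(false))
            {
                Console.WriteLine(project.Name);
            }
        }
    }
}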

    Read the article

  • Replacing jQuery.live() with jQuery.on()

    - by Rick Strahl
    jQuery 1.9 and 1.10 have introduced a host of changes, but for the most part these changes are mostly transparent to existing application usage of jQuery. After spending some time last week with a few of my projects and going through them with a specific eye for jQuery failures I found that for the most part there wasn't a big issue. The vast majority of code continues to run just fine with either 1.9 or 1.10 (which are supposed to be in sync but with 1.10 removing support for legacy Internet Explorer pre-9.0 versions). However, one particular change in the new versions has caused me quite a bit of update trouble, is the removal of the jQuery.live() function. This is my own fault I suppose - .live() has been deprecated for a while, but with 1.9 and later it was finally removed altogether from jQuery. In the past I had quite a bit of jQuery code that used .live() and it's one of the things that's holding back my upgrade process, although I'm slowly cleaning up my code and switching to the .on() function as the replacement. jQuery.live() jQuery.live() was introduced a long time ago to simplify handling events on matched elements that exist currently on the document and those that are are added in the future and also match the selector. jQuery uses event bubbling, special event binding, plus some magic using meta data attached to a parent level element to check and see if the original target event element matches the selected selected elements (for more info see Elijah Manor's comment below). An Example Assume a list of items like the following in HTML for example and further assume that the items in this list can be appended to at a later point. In this app there's a smallish initial list that loads to start, and as the user scrolls towards the end of the initial small list more items are loaded dynamically and added to the list.<div id="PostItemContainer" class="scrollbox"> <div class="postitem" data-id="4z6qhomm"> <div class="post-icon"></div> <div class="postitemheader"><a href="show/4z6qhomm" target="Content">1999 Buick Century For Sale!</a></div> <div class="postitemprice rightalign">$ 3,500 O.B.O.</div> <div class="smalltext leftalign">Jun. 07 @ 1:06am</div> <div class="post-byline">- Vehicles - Automobiles</div> </div> <div class="postitem" data-id="2jtvuu17"> <div class="postitemheader"><a href="show/2jtvuu17" target="Content">Toyota VAN 1987</a></div> <div class="postitemprice rightalign">$950</div> <div class="smalltext leftalign">Jun. 07 @ 12:29am</div> <div class="post-byline">- Vehicles - Automobiles</div> </div> … </div> With the jQuery.live() function you could easily select elements and hook up a click handler like this:$(".postitem").live("click", function() {...}); Simple and perfectly readable. The behavior of the .live handler generally was the same as the corresponding simple event handlers like .click(), except that you have to explicitly name the event instead of using one of the methods. Re-writing with jQuery.on() With .live() removed in 1.9 and later we have to re-write .live() code above with an alternative. The jQuery documentation points you at the .on() or .delegate() functions to update your code. jQuery.on() is a more generic event handler function, and it's what jQuery uses internally to map the high level event functions like .click(),.change() etc. that jQuery exposes. Using jQuery.on() however is not a one to one replacement of the .live() function. 
While .on() can handle events directly and use the same syntax as .live() did, you'll find if you simply switch out .live() with .on() that events on not-yet existing elements will not fire. IOW, the key feature of .live() is not working. You can use .on() to get the desired effect however, but you have to change the syntax to explicitly handle the event you're interested in on the container and then provide a filter selector to specify which elements you are actually interested in for handling the event for. Sounds more complicated than it is and it's easier to see with an example. For the list above hooking .postitem clicks, using jQuery.on() looks like this:$("#PostItemContainer").on("click", ".postitem", function() {...}); You specify a container that can handle the .click event and then provide a filter selector to find the child elements that trigger the  the actual event. So here #PostItemContainer contains many .postitems, whose click events I want to handle. Any container will do including document, but I tend to use the container closest to the elements I actually want to handle the events on to minimize the event bubbling that occurs to capture the event. With this code I get the same behavior as with .live() and now as new .postitem elements are added the click events are always available. Sweet. Here's the full event signature for the .on() function: .on( events [, selector ] [, data ], handler(eventObject) ) Note that the selector is optional - if you omit it you essentially create a simple event handler that handles the event directly on the selected object. The filter/child selector required if you want life-like - uh, .live() like behavior to happen. While it's a bit more verbose than what .live() did, .on() provides the same functionality by being more explicit on what your parent container for trapping events is. .on() is good Practice even for ordinary static Element Lists As a side note, it's a good practice to use jQuery.on() or jQuery.delegate() for events in most cases anyway, using this 'container event trapping' syntax. That's because rather than requiring lots of event handlers on each of the child elements (.postitem in the sample above), there's just one event handler on the container, and only when clicked does jQuery drill down to find the matching filter element and tries to match it to the originating element. In the early days of jQuery I used manually build handlers that did this and manually drilled from the event object into the originalTarget to determine if it's a matching element. With later versions of jQuery the various event functions in jQuery essentially provide this functionality out of the box with functions like .on() and .delegate(). All of this is nothing new, but I thought I'd write this up because I have on a few occasions forgotten what exactly was needed to replace the many .live() function calls that litter my code - especially older code. This will be a nice reminder next time I have a memory blank on this topic. 
And maybe along the way I’ve helped one or two of you as well to clean up your .live() code… © Rick Strahl, West Wind Technologies, 2005-2013. Posted in jQuery.

    Read the article

  • T4 Template error - Assembly Directive cannot locate referenced assembly in Visual Studio 2010 proje

    - by CodeSniper
    I ran into the following error recently in Visual Studio 2010 while trying to port Phil Haack’s excellent T4CSS template which was originally built for Visual Studio 2008.   The Problem Error Compiling transformation: Metadata file 'dotless.Core' could not be found In “T4 speak”, this simply means that you have an Assembly directive in your T4 template but the T4 engine was not able to locate or load the referenced assembly. In the case of the T4CSS Template, this was a showstopper for making it work in Visual Studio 2010. On a side note: The T4CSS template is a sweet little wrapper to allow you to use DotLessCss to generate static .css files from .less files rather than using their default HttpHandler or command-line tool.    If you haven't tried DotLessCSS yet, go check it out now!  In short, it is a tool that allows you to templatize and program your CSS files so that you can use variables, expressions, and mixins within your CSS which enables rapid changes and a lot of developer-flexibility as you evolve your CSS and UI. Back to our regularly scheduled program… Anyhow, this post isn't about DotLessCss, its about the T4 Templates and the errors I ran into when converting them from Visual Studio 2008 to Visual Studio 2010. In VS2010, there were quite a few changes to the T4 Template Engine; most were excellent changes, but this one bit me with T4CSS: “Project assemblies are no longer used to resolve template assembly directives.” In VS2008, if you wanted to reference a custom assembly in your T4 Template (.tt file) you would simply right click on your project, choose Add Reference and select that assembly.  Afterwards you were allowed to use the following syntax in your T4 template to tell it to look at the local references: <#@ assembly name="dotless.Core.dll" #> This told the engine to look in the “usual place” for the assembly, which is your project references. However, this is exactly what they changed in VS2010.  They now basically sandbox the T4 Engine to keep your T4 assemblies separate from your project assemblies.  This can come in handy if you want to support different versions of an assembly referenced both by your T4 templates and your project. Who broke the build?  Oh, Microsoft Did! In our case, this change causes a problem since the templates are no longer compatible when upgrading to VS 2010 – thus its a breaking change.  So, how do we make this work in VS 2010? Luckily, Microsoft now offers several options for referencing assemblies from T4 Templates: GAC your assemblies and use Namespace Reference or Fully Qualified Type Name Use a hard-coded Fully Qualified UNC path Copy assembly to Visual Studio "Public Assemblies Folder" and use Namespace Reference or Fully Qualified Type Name.  Use or Define a Windows Environment Variable to build a Fully Qualified UNC path. Use a Visual Studio Macro to build a Fully Qualified UNC path. Option #1 & 2 were already supported in Visual Studio 2008, so if you want to keep your templates compatible with both Visual Studio versions, then you would have to adopt one of these approaches. Yakkety Yak, use the GAC! Option #1 requires an additional pre-build step to GAC the referenced assembly, which could be a pain.  But, if you go that route, then after you GAC, all you need is a simple type name or namespace reference such as: <#@ assembly name="dotless.Core" #> Hard Coding aint that hard! 
The other option of using hard-coded paths in Option #2 is pretty impractical in most situations since each developer would have to use the same local project folder paths, or modify this setting each time for their local machines as well as for production deployment.  However, if you want to go that route, simply use the following assembly directive style: <#@ assembly name="C:\Code\Lib\dotless.Core.dll" #> Lets go Public! Option #3, the Visual Studio Public Assemblies Folder, is the recommended place to put commonly used tools and libraries that are only needed for Visual Studio.  Think of it like a VS-only GAC.  This is likely the best place for something like dotLessCSS and is my preferred solution.  However, you will need to either use an installer or a pre-build action to copy the assembly to the right folder location.   Normally this is located at:  C:\Program Files (x86)\Microsoft Visual Studio 10.0\Common7\IDE\PublicAssemblies Once you have copied your assembly there, you use the type name or namespace syntax again: <#@ assembly name="dotless.Core" #> Save the Environment! Option #4, using a Windows Environment Variable, is interesting for enterprise use where you may have standard locations for files, but less useful for demo-code, frameworks, and products where you don't have control over the local system.  The syntax for including a environment variable in your assembly directive looks like the following, just as you would expect: <#@ assembly name="%mypath%\dotless.Core.dll" #> “mypath” is a Windows environment variable you setup that points to some fully qualified UNC path on your system.  In the right situation this can be a great solution such as one where you use a msi installer for deployment, or where you have a pre-existing environment variable you can re-use. OMG Macros! Finally, Option #5 is a very nice option if you want to keep your T4 template’s assembly reference local and relative to the project or solution without muddying-up your dev environment or GAC with extra deployments.  An example looks like this: <#@ assembly name="$(SolutionDir)lib\dotless.Core.dll" #> In this example, I’m using the “SolutionDir” VS macro so I can reference an assembly in a “/lib” folder at the root of the solution.   This is just one of the many macros you can use.  If you are familiar with creating Pre/Post-build Event scripts, you can use its dialog to look at all of the different VS macros available. This option gives the best solution for local assemblies without the hassle of extra installers or other setup before the build.   However, its still not compatible with Visual Studio 2008, so if you have a T4 Template you want to use with both, then you may have to create multiple .tt files, one for each IDE version, or require the developer to set a value in the .tt file manually.   I’m not sure if T4 Templates support any form of compiler switches like “#if (VS2010)”  statements, but it would definitely be nice in this case to switch between this option and one of the ones more compatible with VS 2008. Conclusion As you can see, we went from 3 options with Visual Studio 2008, to 5 options (plus one problem) with Visual Studio 2010.  As a whole, I think the changes are great, but the short-term growing pains during the migration may be annoying until we get used to our new found power. Hopefully this all made sense and was helpful to you.  If nothing else, I’ll just use it as a reference the next time I need to port a T4 template to Visual Studio 2010.  
Happy T4 templating, and “May the fourth be with you!”

    Read the article

  • The Partner Perspective from Oracle OpenWorld 2012 - IDC’s Darren Bibby report

    - by Richard Lefebvre
    Below is IDC’s Darren Bibby report on ‘The Partner Perspective from Oracle OpenWorld 2012’. If you missed the 2012 edition, I trust this will give you the willingness to attend next year one! October 26, 2012 I attended my fourth Oracle OpenWorld earlier in October. I always go in with the lens of, "What's in it for partners this year?" Although it's primarily thought of as a customer event - and yes, the bulk of the almost 50,000 attendees are customers - this year's conference was clearly the largest and most important partner event Oracle has ever run. Oracle PartnerNetwork (OPN) Exchange There were more partner attendees than ever, with Oracle citing somewhere around 5000. But the format for partners this year was different. And it was better. Traditionally, Oracle hosts a one-day only Partner Forum on the Sunday before the customer-focused conference begins. This year, the partner content still began on the Sunday, but the worldwide alliances and channels group created an exclusive track throughout the week, just for partners. It featured content specifically targeted towards partners, and was anchored at a nearby hotel. This was a great move for Oracle. The Oracle PartnerNetwork (OPN) team has been in a tricky position for years in that they have enough partners that they need a landmark event in the year, but perhaps not enough to justify a separate, worldwide, large, partner-only event. Coinciding a four day event with Oracle OpenWorld, where anybody who's anybody in the Oracle world attends anyway, is a good solution. The channels leadership team can build from this success for an even better conference next year. It's expected that they will follow a similar strategy. Cloud Announcements for Partners As for the content, it was primarily about the Cloud. For customers, for VARs, for ISVs, for everyone. There were five key Cloud related announcements for partners at the event: Cloud Builder Specialization. This is one of the first broader Specializations that isn't focused on one unique product. It is a designation for partners that offer design and implementation services for private cloud solutions. As such, it will surely be something that nearly every partner will consider, and many will pursue. New Specializations for Cloud Services. Unlike the broad, almost "strategy-level" Specialization above, there are a group of new product-based "merit badges" for many of the new Cloud offerings. Think about a Specialization for the Cloud version of HCM, for instance. Each of these particular specializations will also have Rapid Start implementation methodologies that allow a partner to offer a fixed scope and fixed price bid to customers. Based on the learnings from Oracle Consulting, this means a partner might be able to deliver Cloud HCM in six weeks for a fixed price. In the end, this means more consistent experiences for Oracle customers. Cloud Resale Program. For those partners who achieve one of these Cloud Specializations, it will mean they can actually resell the subscription-based Cloud product. This is important because it has been somewhat of a rarity in the emerging Cloud channel for partners to be able to "take the paper", take the revenue, do the billing, be first line of support etc. This is an important step for Oracle and one the partners will be happy to see. Cloud Referral Program. 
For those partners who are not as engaged with these specific Cloud products that the Specializations revolve around, there is a new referral program that provides an incentive to recommend Oracle Cloud products. This one-two punch of referral and resale programs is similar in many ways to other vendors who allow more committed partners to resell, while more casual partners can collect fees. It's the model that seems to work. The key to allow a company to resell a subscription product - something that is inherently delivered directly between the vendor and customer - is trust. Achieving a specialization is a good bar to have to meet. Platform as a Service for ISVs. Leveraging some of the overall announcements made by CEO Larry Ellison around a cloud version of its famous database, Oracle also outlined a new ability for ISVs to build cloud services on its new PaaS offering. Details were less available for this announcement, though it's an expected and fitting play for ISVs comfortable with Oracle technology who can now more easily build out cloud applications. There wasn't much talk of an app store to go along with this, but surely it's in the works. Specializations And "The Gap" Coming back to Specializations, Oracle PartnerNetwork (OPN) has 4600 partners worldwide that hold 20,000 Specializations. These are impressive numbers just three years into the new OPN framework. The actual number of Specializations has also grown significantly, up to 111 today and soon around 125 or so with the new Cloud designations. Oracle may need to look at grouping some of these and creating higher level, broader designations that partners could achieve by earning several Specializations in that group. At 125 and growing, this is a lot. On the top of the pyramid, Hitachi Ltd. successfully became the eleventh partner to make it to the highly prestigious Diamond level. Partner programs partially exist in order to recognize capable partners. And it's more than abundantly clear that the Diamond level does this. But I think Oracle has a gap. Specializations show capability in a very specific product area, and all sizes of partners can achieve these. The next level at which to show a level of expertise is the Advanced Specialization. However, this is a massive step up from the regular Specialization. The advanced level requires 50 people to have certification in that particular product area. Most other industry programs have similar higher level statuses, but none are even close to that number. Whereas a customer who sees an Oracle partner with an advanced specialization can be very sure of capability, there is a gap in that there are hundreds or even thousands of 20-50 person solution providers who are top notch in their area of expertise. They will never get to Advanced due to numbers alone. These boutique partners don't really have a way of showing off their talents in the current program. Advanced may not need to be so high to really show that a company has deep expertise. Overall it was a very successful Oracle OpenWorld for Oracle partners of all sizes. There was progress made on making it a bigger and more relevant event. And also on catching up and maybe even leading in some cases with cloud opportunities for partners.

    Read the article

  • Another Marketing Conference, part two – the afternoon

    - by Roger Hart
    In my previous post, I’ve covered the morning sessions at AMC2012. Here’s the rest of the write-up. I’ve skipped Charles Nixon’s session which was a blend of funky futurism and professional development advice, but you can see his slides here. I’ve also skipped the Google presentation, as it was a little thin on insight. 6 – Brand ambassadors: Getting universal buy in across the organisation, Vanessa Northam Slides are here This was the strongest enforcement of the idea that brand and campaign values need to be delivered throughout the organization if they’re going to work. Vanessa runs internal communications at e-on, and shared her experience of using internal comms to align an organization and thereby get the most out of a campaign. She views the purpose of internal comms as: “…to help leaders, to communicate the purpose and future of an organization, and support change.” This (and culture) primes front line staff, which creates customer experience and spreads brand. You ensure a whole organization knows what’s going on with both internal and external comms. If everybody is aligned and informed, if everybody can clearly articulate your brand and campaign goals, then you can turn everybody into an advocate. Alignment is a powerful tool for delivering a consistent experience and message. The pathological counter example is the one in which a marketing message goes out, which creates inbound customer contacts that front line contact staff haven’t been briefed to handle. The NatWest campaign was again mentioned in this context. The good example was e-on’s cheaper tariff campaign. Building a groundswell of internal excitement, and even running an internal launch meant everyone could contribute to a good customer experience. They found that meter readers were excited – not a group they’d considered as obvious in providing customer experience. But they were a group that has a lot of face-to-face contact with customers, and often were asked questions they may not have been briefed to answer. Being able to communicate a simple new message made it easier for them, and also let them become a sales and marketing asset to the organization. 7 – Goodbye Internet, Hello Outernet: the rise and rise of augmented reality, Matt Mills I wasn’t going to write this up, because it was essentially a sales demo for Aurasma. But the technology does merit some discussion. Basically, it replaces QR codes with visual recognition, and provides a simple-looking back end for attaching content. It’s quite sexy. But here’s my beef with it: QR codes had a clear visual language – when you saw one you knew what it was and what to do with it. They were clunky, but they had the “getting started” problem solved out of the box once you knew what you were looking at. However, they fail because QR code reading isn’t native to the platform. You needed an app, which meant you needed to know to download one. Consequentially, you can’t use QR codes with and ubiquity, or depend on them. This means marketers, content providers, etc, never pushed them, and they remained and awkward oddity, a minority sport. Aurasma half solves problem two, and re-introduces problem one, making it potentially half as useful as a QR code. It’s free, and you can apparently build it into your own apps. Add to that the likelihood of it becoming native to the platform if it takes off, and it may have legs. I guess we’ll see. 8 – We all need to code, Helen Mayor Great title – good point. 
If there was anybody in the room who didn’t at least know basic HTML, and if Helen’s presentation inspired them to learn, that’s fantastic. However, this was a half hour sales pitch for a basic coding training course. Beyond advocating coding skills it contained no useful content. Marketers may also like to consider some of these resources if they’re looking to learn code: Code Academy – free interactive tutorials Treehouse – learn web design, web dev, or app dev WebPlatform.org – tutorials and documentation for web tech  11 – Understanding our inner creativity, Margaret Boden This session was the most theoretical and probably least actionable of the day. It also held my attention utterly. Margaret spoke fluently, fascinatingly, without slides, on the subject of types of creativity and how they work. It was splendid. Yes, it raised a wry smile whenever she spoke of “the content of advertisements” and gave an example from 1970s TV ads, but even without the attempt to meet the conference’s theme this would have been thoroughly engaging. There are, Margaret suggested, three types of creativity: Combinatorial creativity The most common form, and consisting of synthesising ideas from existing and familiar concepts and tropes. Exploratory creativity Less common, this involves exploring the limits and quirks of a particular constraint or style. Transformational creativity This is uncommon, and arises from finding a way to do something that the existing rules would hold to be impossible. In essence, this involves breaking one of the constraints that exploratory creativity is composed from. Combinatorial creativity, she suggested, is particularly important for attaching favourable ideas to existing things. As such is it probably worth developing for marketing. Exploratory creativity may then come into play in something like developing and optimising an idea or campaign that now has momentum. Transformational creativity exists at the edges of this exploration. She suggested that products may often be transformational, but that marketing seemed unlikely to in her experience. This made me wonder about Listerine. Crucially, transformational creativity is characterised by there being some element of continuity with the strictures of previous thinking. Once it has happened, there may be  move from a revolutionary instance into an explored style. Again, from a marketing perspective, this seems to chime well with the thinking in Youngme Moon’s book: Different Talking about the birth of Modernism is visual art, Margaret pointed out that transformational creativity has historically risked a backlash, demanding what is essentially an education of the market. This is best accomplished by referring back to the continuities with the past in order to make the new familiar. Thoughts The afternoon is harder to sum up than the morning. It felt less concrete, and was troubled by a short run of poor presentations in the middle. Mainly, I found myself wrestling with the internal comms issue. It’s one of those things that seems astonishingly obvious in hindsight, but any campaign – particularly any large one – is doomed if the people involved can’t believe in it. We’ve run things here that haven’t gone so well, of course we have; who hasn’t? I’m not going to air any laundry, but people not being informed (much less aligned) feels like a common factor. It’s tough though. Managing and anticipating information needs across an organization of any size can’t be easy. 
Even the simple things like ensuring sales and support departments know what’s in a product release, and what messages go with it are easy to botch. The thing I like about framing this as a brand and campaign advocacy problem is that it makes it likely to get addressed. Better is always sexier than less-worse. Any technical communicator who’s ever felt crowded out by a content strategist or marketing copywriter  knows this – increasing revenue gets a seat at the table far more readily than reducing support costs, even if the financial impact is identical. So that’s it from AMC. The big thought-provokers were social buying behaviour and eliciting behaviour change, and the value of internal communications in ensuring successful campaigns and continuity of customer experience. I’ll be chewing over that for a while, and I’d definitely return next year.      

    Read the article

  • The new workflow management of Oracle´s Hyperion Planning: Define more details with Planning Unit Hierarchies and Promotional Paths

    - by Alexandra Georgescu
    After having been almost unchanged for several years, starting with the 11.1.2 release of Oracle´s Hyperion Planning the Process Management has not only got a new name: “Approvals” now is offering the possibility to further split Planning Units (comprised of a unique Scenario-Version-Entity combination) into more detailed combinations along additional secondary dimensions, a so called Planning Unit Hierarchy, and also to pre-define a path of planners, reviewers and approvers, called Promotional Path. I´d like to introduce you to changes and enhancements in this new process management and arouse your curiosity for checking out more details on it. One reason of using the former process management in Planning was to limit data entry rights to one person at a time based on the assignment of a planning unit. So the lowest level of granularity for this assignment was, for a given Scenario-Version combination, the individual entity. Even if in many cases one person wasn´t responsible for all data being entered into that entity, but for only part of it, it was not possible to split the ownership along another additional dimension, for example by assigning ownership to different accounts at the same time. By defining a so called Planning Unit Hierarchy (PUH) in Approvals this gap is now closed. Complementing new Shared Services roles for Planning have been created in order to manage set up and use of Approvals: The Approvals Administrator consisting of the following roles: Approvals Ownership Assigner, who assigns owners and reviewers to planning units for which Write access is assigned (including Planner responsibilities). Approvals Supervisor, who stops and starts planning units and takes any action on planning units for which Write access is assigned. Approvals Process Designer, who can modify planning unit hierarchy secondary dimensions and entity members for which Write access is assigned, can also modify scenarios and versions that are assigned to planning unit hierarchies and can edit validation rules on data forms for which access is assigned. (this includes as well Planner and Ownership Assigner responsibilities) Set up of a Planning Unit Hierarchy is done under the Administration menu, by selecting Approvals, then Planning Unit Hierarchy. Here you create new PUH´s or edit existing ones. The following window displays: After providing a name and an optional description, a pre-selection of entities can be made for which the PUH will be defined. Available options are: All, which pre-selects all entities to be included for the definitions on the subsequent tabs None, manual entity selections will be made subsequently Custom, which offers the selection for an ancestor and the relative generations, that should be included for further definitions. Finally a pattern needs to be selected, which will determine the general flow of ownership: Free-form, uses the flow/assignment of ownerships according to Planning releases prior to 11.1.2 In Bottom-up, data input is done at the leaf member level. Ownership follows the hierarchy of approval along the entity dimension, including refinements using a secondary dimension in the PUH, amended by defined additional reviewers in the promotional path. Distributed, uses data input at the leaf level, while ownership starts at the top level and then is distributed down the organizational hierarchy (entities). After ownership reaches the lower levels, budgets are submitted back to the top through the approval process. 
Proceeding to the next step, now a secondary dimension and the respective members from that dimension might be selected, in order to create more detailed combinations underneath each entity. After selecting the Dimension and a Parent Member, the definition of a Relative Generation below this member assists in populating the field for Selected Members, while the Count column shows the number of selected members. For refining this list, you might click on the icon right beside the selected member field and use the check-boxes in the appearing list for deselecting members. -------------------------------------------------------------------------------------------------------- TIP: In order to reduce maintenance of the PUH due to changes in the dimensions included (members added, moved or removed) you should consider to dynamically link those dimensions in the PUH with the dimension hierarchies in the planning application. For secondary dimensions this is done using the check-boxes in the Auto Include column. For the primary dimension, the respective selection criteria is applied by right-clicking the name of an entity activated as planning unit, then selecting an item of the shown list of include or exclude options (children, descendants, etc.). Anyway in order to apply dimension changes impacting the PUH a synchronization must be run. If this is really necessary or not is shown on the first screen after selecting from the menu Administration, then Approvals, then Planning Unit Hierarchy: under Synchronized you find the statuses Yes, No or Locked, where the last one indicates, that another user is just changing or synchronizing the PUH. Select one of the not synchronized PUH´s (status No) and click the Synchronize option in order to execute. -------------------------------------------------------------------------------------------------------- In the next step owners and reviewers are assigned to the PUH. Using the icons with the magnifying glass right besides the columns for Owner and Reviewer the respective assignments can be made in the ordermthat you want them to review the planning unit. While it is possible to assign only one owner per entity or combination of entity+ member of the secondary dimension, the selection for reviewers might consist of more than one person. The complete Promotional Path, including the defined owners and reviewers for the entity parents, can be shown by clicking the icon. In addition optional users might be defined for being notified about promotions for a planning unit. -------------------------------------------------------------------------------------------------------- TIP: Reviewers cannot change data, but can only review data according to their data access permissions and reject or promote planning units. -------------------------------------------------------------------------------------------------------- In order to complete your PUH definitions click Finish - this saves the PUH and closes the window. As a final step, before starting the approvals process, you need to assign the PUH to the Scenario-Version combination for which it should be used. From the Administration menu select Approvals, then Scenario and Version Assignment. Expand the PUH in order to see already existing assignments. Under Actions click the add icon and select scenarios and versions to be assigned. If needed, click the remove icon in order to delete entries. After these steps, set up is completed for starting the approvals process. 
Start, stop and control of the approvals process is now done under the Tools menu, and then Manage Approvals. The new PUH feature is complemented by various additional settings and features; some of them at least should be mentioned here: Export/Import of PUHs: Out of Office agent: Validation Rules changing promotional/approval path if violated (including the use of User-defined Attributes (UDAs)): And various new and helpful reviewer actions with corresponding approval states. About the Author: Bernhard Kinkel started working for Hyperion Solutions as a Presales Consultant and Consultant in 1998 and moved to Hyperion Education Services in 1999. He joined Oracle University in 2007 where he is a Principal Education Consultant. Based on these many years of working with Hyperion products he has detailed product knowledge across several versions. He delivers both classroom and live virtual courses. His areas of expertise are Oracle/Hyperion Essbase, Oracle Hyperion Planning and Hyperion Web Analysis.

    Read the article

  • Class-Level Model Validation with EF Code First and ASP.NET MVC 3

    - by ScottGu
    Earlier this week the data team released the CTP5 build of the new Entity Framework Code-First library.  In my blog post a few days ago I talked about a few of the improvements introduced with the new CTP5 build.  Automatic support for enforcing DataAnnotation validation attributes on models was one of the improvements I discussed.  It provides a pretty easy way to enable property-level validation logic within your model layer. You can apply validation attributes like [Required], [Range], and [RegularExpression] – all of which are built-into .NET 4 – to your model classes in order to enforce that the model properties are valid before they are persisted to a database.  You can also create your own custom validation attributes (like this cool [CreditCard] validator) and have them be automatically enforced by EF Code First as well.  This provides a really easy way to validate property values on your models.  I showed some code samples of this in action in my previous post. Class-Level Model Validation using IValidatableObject DataAnnotation attributes provides an easy way to validate individual property values on your model classes.  Several people have asked - “Does EF Code First also support a way to implement class-level validation methods on model objects, for validation rules than need to span multiple property values?”  It does – and one easy way you can enable this is by implementing the IValidatableObject interface on your model classes. IValidatableObject.Validate() Method Below is an example of using the IValidatableObject interface (which is built-into .NET 4 within the System.ComponentModel.DataAnnotations namespace) to implement two custom validation rules on a Product model class.  The two rules ensure that: New units can’t be ordered if the Product is in a discontinued state New units can’t be ordered if there are already more than 100 units in stock We will enforce these business rules by implementing the IValidatableObject interface on our Product class, and by implementing its Validate() method like so: The IValidatableObject.Validate() method can apply validation rules that span across multiple properties, and can yield back multiple validation errors. Each ValidationResult returned can supply both an error message as well as an optional list of property names that caused the violation (which is useful when displaying error messages within UI). Automatic Validation Enforcement EF Code-First (starting with CTP5) now automatically invokes the Validate() method when a model object that implements the IValidatableObject interface is saved.  You do not need to write any code to cause this to happen – this support is now enabled by default. This new support means that the below code – which violates one of our above business rules – will automatically throw an exception (and abort the transaction) when we call the “SaveChanges()” method on our Northwind DbContext: In addition to reactively handling validation exceptions, EF Code First also allows you to proactively check for validation errors.  Starting with CTP5, you can call the “GetValidationErrors()” method on the DbContext base class to retrieve a list of validation errors within the model objects you are working with.  GetValidationErrors() will return a list of all validation errors – regardless of whether they are generated via DataAnnotation attributes or by an IValidatableObject.Validate() implementation.  
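The code in the original post appears only as screenshots, which don’t survive in this text-only excerpt. As a rough sketch of what the Product rules described above and a proactive GetValidationErrors() check might look like: the property names, the NorthwindContext/Products names and the SaveIfValid helper are assumptions based on the description, and the validation result types shown are the ones from the released EF 4.1 bits, which may differ slightly from CTP5.

using System;
using System.Collections.Generic;
using System.ComponentModel.DataAnnotations;
using System.Data.Entity;
using System.Linq;

public class Product : IValidatableObject
{
    public int ProductID { get; set; }

    [Required]
    public string ProductName { get; set; }

    public int UnitsInStock { get; set; }
    public int UnitsOnOrder { get; set; }
    public bool Discontinued { get; set; }

    // Class-level rules that span multiple property values.
    public IEnumerable<ValidationResult> Validate(ValidationContext validationContext)
    {
        if (Discontinued && UnitsOnOrder > 0)
        {
            yield return new ValidationResult(
                "New units can't be ordered for a discontinued product.",
                new[] { "UnitsOnOrder" });
        }

        if (UnitsInStock > 100 && UnitsOnOrder > 0)
        {
            yield return new ValidationResult(
                "New units can't be ordered while more than 100 units are already in stock.",
                new[] { "UnitsOnOrder" });
        }
    }
}

// A hypothetical Code First context for the example.
public class NorthwindContext : DbContext
{
    public DbSet<Product> Products { get; set; }
}

public static class ValidationDemo
{
    // Proactively check for errors (DataAnnotations and IValidatableObject alike)
    // before attempting SaveChanges().
    public static void SaveIfValid(NorthwindContext db)
    {
        var errors = db.GetValidationErrors().ToList();

        if (errors.Any())
        {
            foreach (var entityResult in errors)
                foreach (var error in entityResult.ValidationErrors)
                    Console.WriteLine("{0}: {1}", error.PropertyName, error.ErrorMessage);
        }
        else
        {
            db.SaveChanges();
        }
    }
}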
Below is an example of proactively using the GetValidationErrors() method to check (and handle) errors before trying to call SaveChanges(): ASP.NET MVC 3 and IValidatableObject ASP.NET MVC 2 included support for automatically honoring and enforcing DataAnnotation attributes on model objects that are used with ASP.NET MVC’s model binding infrastructure.  ASP.NET MVC 3 goes further and also honors the IValidatableObject interface.  This combined support for model validation makes it easy to display appropriate error messages within forms when validation errors occur.  To see this in action, let’s consider a simple Create form that allows users to create a new Product: We can implement the above Create functionality using a ProductsController class that has two “Create” action methods like below: The first Create() method implements a version of the /Products/Create URL that handles HTTP-GET requests - and displays the HTML form to fill-out.  The second Create() method implements a version of the /Products/Create URL that handles HTTP-POST requests - and which takes the posted form data, ensures that is is valid, and if it is valid saves it in the database.  If there are validation issues it redisplays the form with the posted values.  The razor view template of our “Create” view (which renders the form) looks like below: One of the nice things about the above Controller + View implementation is that we did not write any validation logic within it.  The validation logic and business rules are instead implemented entirely within our model layer, and the ProductsController simply checks whether it is valid (by calling the ModelState.IsValid helper method) to determine whether to try and save the changes or redisplay the form with errors. The Html.ValidationMessageFor() helper method calls within our view simply display the error messages our Product model’s DataAnnotations and IValidatableObject.Validate() method returned.  We can see the above scenario in action by filling out invalid data within the form and attempting to submit it: Notice above how when we hit the “Create” button we got an error message.  This was because we ticked the “Discontinued” checkbox while also entering a value for the UnitsOnOrder (and so violated one of our business rules).  You might ask – how did ASP.NET MVC know to highlight and display the error message next to the UnitsOnOrder textbox?  It did this because ASP.NET MVC 3 now honors the IValidatableObject interface when performing model binding, and will retrieve the error messages from validation failures with it. The business rule within our Product model class indicated that the “UnitsOnOrder” property should be highlighted when the business rule we hit was violated: Our Html.ValidationMessageFor() helper method knew to display the business rule error message (next to the UnitsOnOrder edit box) because of the above property name hint we supplied: Keeping things DRY ASP.NET MVC and EF Code First enables you to keep your validation and business rules in one place (within your model layer), and avoid having it creep into your Controllers and Views.  Keeping the validation logic in the model layer helps ensure that you do not duplicate validation/business logic as you add more Controllers and Views to your application.  It allows you to quickly change your business rules/validation logic in one single place (within your model layer) – and have all controllers/views across your application immediately reflect it.  
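The ProductsController described above likewise appears only as a screenshot in the original post. A hedged sketch of that controller pattern, reusing the hypothetical NorthwindContext from the earlier sketch (the action names and the Index redirect are assumptions), might look roughly like this:

using System.Web.Mvc;

public class ProductsController : Controller
{
    private readonly NorthwindContext db = new NorthwindContext();

    // GET: /Products/Create - renders the empty form.
    public ActionResult Create()
    {
        return View();
    }

    // POST: /Products/Create - model binding runs both the DataAnnotation
    // and IValidatableObject validation before this code ever sees the model.
    [HttpPost]
    public ActionResult Create(Product product)
    {
        if (ModelState.IsValid)
        {
            db.Products.Add(product);
            db.SaveChanges();
            return RedirectToAction("Index");
        }

        // Redisplay the form; Html.ValidationMessageFor() in the view picks up
        // the error messages keyed by the property names the model reported.
        return View(product);
    }
}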
This helps keep your application code clean and easily maintainable, and makes it much easier to evolve and update your application in the future. Summary EF Code First (starting with CTP5) now has built-in support for both DataAnnotations and the IValidatableObject interface.  This allows you to easily add validation and business rules to your models, and have EF automatically ensure that they are enforced any time someone tries to persist changes to the database.  ASP.NET MVC 3 now supports both DataAnnotations and IValidatableObject as well, which makes it even easier to use them with your EF Code First model layer – and then have the controllers/views within your web layer automatically honor and support them.  This makes it easy to build clean and highly maintainable applications. You don’t have to use DataAnnotations or IValidatableObject to perform your validation/business logic.  You can always roll your own custom validation architecture and/or use other more advanced validation frameworks/patterns if you want.  But for a lot of applications this built-in support will probably be sufficient – and provide a highly productive way to build solutions. Hope this helps, Scott P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

    Read the article
