Search Results

Search found 878 results on 36 pages for 'pairs'.

Page 32/36

  • EF4: ObjectContext inconsistent when inserting into a view with triggers

    - by user613567
    I get an Invalid Operation Exception when inserting records in a View that uses “Instead of” triggers in SQL Server with ADO.NET Entity Framework 4. The error message says: {"The changes to the database were committed successfully, but an error occurred while updating the object context. The ObjectContext might be in an inconsistent state. Inner exception message: The key-value pairs that define an EntityKey cannot be null or empty. Parameter name: record"} @ at System.Data.Objects.ObjectContext.SaveChanges(SaveOptions options) at System.Data.Objects.ObjectContext.SaveChanges() In this simplified example I created two tables, Contacts and Employers, and one view Contacts_x_Employers which allows me to insert or retrieve rows into/from these two tables at once. The Tables only have a Name and an ID attributes and the view is based on a join of both: CREATE VIEW [dbo].[Contacts_x_Employers] AS SELECT dbo.Contacts.ContactName, dbo.Employers.EmployerName FROM dbo.Contacts INNER JOIN dbo.Employers ON dbo.Contacts.EmployerID = dbo.Employers.EmployerID And has this trigger: Create TRIGGER C_x_E_Inserts ON Contacts_x_Employers INSTEAD of INSERT AS BEGIN SET NOCOUNT ON; insert into Employers (EmployerName) select i.EmployerName from inserted i where not i.EmployerName in (select EmployerName from Employers) insert into Contacts (ContactName, EmployerID) select i.ContactName, e.EmployerID from inserted i inner join employers e on i.EmployerName = e.EmployerName; END GO The .NET Code follows: using (var Context = new TriggersTestEntities()) { Contacts_x_Employers CE1 = new Contacts_x_Employers(); CE1.ContactName = "J"; CE1.EmployerName = "T"; Contacts_x_Employers CE2 = new Contacts_x_Employers(); CE1.ContactName = "W"; CE1.EmployerName = "C"; Context.Contacts_x_Employers.AddObject(CE1); Context.Contacts_x_Employers.AddObject(CE2); Context.SaveChanges(); //? line with error } SSDL and CSDL (the view nodes): <EntityType Name="Contacts_x_Employers"> <Key> <PropertyRef Name="ContactName" /> <PropertyRef Name="EmployerName" /> </Key> <Property Name="ContactName" Type="varchar" Nullable="false" MaxLength="50" /> <Property Name="EmployerName" Type="varchar" Nullable="false" MaxLength="50" /> </EntityType> <EntityType Name="Contacts_x_Employers"> <Key> <PropertyRef Name="ContactName" /> <PropertyRef Name="EmployerName" /> </Key> <Property Name="ContactName" Type="String" Nullable="false" MaxLength="50" Unicode="false" FixedLength="false" /> <Property Name="EmployerName" Type="String" Nullable="false" MaxLength="50" Unicode="false" FixedLength="false" /> </EntityType> The Visual Studio solution and the SQL Scripts to re-create the whole application can be found in the TestViewTrggers.zip at ftp://JulioSantos.com/files/TriggerBug/. I appreciate any assistance that can be provided. I already spent days working on this problem.

    Read the article

  • Code golf - hex to (raw) binary conversion

    - by Alnitak
    In response to this question asking about hex to (raw) binary conversion, a comment suggested that it could be solved in "5-10 lines of C, or any other language." I'm sure that for (some) scripting languages that could be achieved, and would like to see how. Can we prove that comment true, for C, too? NB: this doesn't mean hex to ASCII binary - specifically the output should be a raw octet stream corresponding to the input ASCII hex. Also, the input parser should skip/ignore white space. edit (by Brian Campbell) May I propose the following rules, for consistency? Feel free to edit or delete these if you don't think these are helpful, but I think that since there has been some discussion of how certain cases should work, some clarification would be helpful. The program must read from stdin and write to stdout (we could also allow reading from and writing to files passed in on the command line, but I can't imagine that would be shorter in any language than stdin and stdout) The program must use only packages included with your base, standard language distribution. In the case of C/C++, this means their respective standard libraries, and not POSIX. The program must compile or run without any special options passed to the compiler or interpreter (so, 'gcc myprog.c' or 'python myprog.py' or 'ruby myprog.rb' are OK, while 'ruby -rscanf myprog.rb' is not allowed; requiring/importing modules counts against your character count). The program should read integer bytes represented by pairs of adjacent hexadecimal digits (upper, lower, or mixed case), optionally separated by whitespace, and write the corresponding bytes to output. Each pair of hexadecimal digits is written with most significant nibble first. The behavior of the program on invalid input (characters besides [a-fA-F \t\r\n], spaces separating the two characters in an individual byte, an odd number of hex digits in the input) is undefined; any behavior (other than actively damaging the user's computer or something) on bad input is acceptable (throwing an error, stopping output, ignoring bad characters, treating a single character as the value of one byte, are all OK) The program may write no additional bytes to output. Code is scored by fewest total bytes in the source file. (Or, if we wanted to be more true to the original challenge, the score would be based on lowest number of lines of code; I would impose an 80 character limit per line in that case, since otherwise you'd get a bunch of ties for 1 line).
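    For reference (not a golf entry), here is a plain C# sketch of the behaviour those rules describe — read ASCII hex from stdin, skip whitespace, and emit the raw octets on stdout. The class and variable names are mine, not from the thread.

```csharp
using System;

class HexToRaw
{
    static void Main()
    {
        using (var stdout = Console.OpenStandardOutput())
        {
            int high = -1; // pending most-significant nibble, -1 if none
            int c;
            while ((c = Console.Read()) != -1)
            {
                if (char.IsWhiteSpace((char)c)) continue;               // whitespace between pairs is ignored
                int nibble = Convert.ToInt32(((char)c).ToString(), 16); // throws on non-hex input (undefined per the rules)

                if (high < 0)
                {
                    high = nibble;                                      // first digit of the pair
                }
                else
                {
                    stdout.WriteByte((byte)((high << 4) | nibble));     // most significant nibble first
                    high = -1;
                }
            }
            // A trailing odd digit is silently dropped, which the rules also leave undefined.
        }
    }
}
```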

    Read the article

  • DCVS + hosting for a startup commercial multiplatform phone app

    - by AG
    I'm in lean startup mode, working on a simple phone app that will be published initially as a iThingy app and an Android app with, possibly, Blackberry and Symbian versions to follow. I'm about to go from no repository to needing a central repository that up to 4 very part-time resources will be sharing. Two of us have no version control background, one has used Subversion, and I've used most of the major centralized VCS systems. I'm not going to be pushing the technical limitations of any VCS for a long time; I'm sure that any of the major systems would work fine. And the hosting accounts I've looked at seem reasonable. So I'm really focussed on minimizing the downside risks. That is, I'd like to find a stable setup that is easy to learn in general, easy to use from Windows/Eclipse, and won't paint me into any obvious corners for the next 12 months or so. A quick search of the web had led me to consider the following pairs of DVCS and hosting service, with what I think I'm hearing as their strengths and weaknesses (for my purposes): Bazaar/Launchpad -- My initial choice since I need to get more familiar with this pair for the Google Summer of Code mentoring I'm doing. But, whatever the technical merits, a non-starter for me because they are purely open source, no private repositories plans to purchase that I can see. Git/GitHub -- Git: Fast, light, ultimately flexible, but relatively less Windows friendly, Eclipse plugin (eGit) available but relatively young, GitHub: widely used, pricing is fine Mercurial/BitBucket -- Mercurial: a little less flexible, a little more Windows friendly, Eclipse plugin seems a bit more mature, BitBucket: widely used, pricing is fine, includes a wiki and an issue tracker that we might be able to use instead of something like BaseCamp, at least for a while. Mercurial/BitBucket seem like the winning pair so far for my particular situation; at least two of us are definitely going to be working mostly from Eclipse on Windows and reducing my own learning curve is a priority. ;-) But I have two specific questions: 1) Am I wrong about Bazaar/Launchpad and is there a viable, secure way to use them for proprietary code? 2) Any reason to think that the Mecurial/Bitbucket pair will end up being a headache for my Mac developer, soon, or for Blackberry or Symbian developers a little later? ag

    Read the article

  • C++ using cdb_read returns extra characters on some reads

    - by Moe Be
    Hi All, I am using the following function to loop through a couple of open CDB hash tables. Sometimes the value for a given key is returned along with an additional character (specifically a CTRL-P (a DLE character/0x16/0o020)). I have checked the cdb key/value pairs with a couple of different utilities and none of them show any additional characters appended to the values. I get the character if I use cdb_read() or cdb_getdata() (the commented out code below). If I had to guess I would say I am doing something wrong with the buffer I create to get the result from the cdb functions. Any advice or assistance is greatly appreciated. char* HashReducer::getValueFromDb(const string &id, vector <struct cdb *> &myHashFiles) { unsigned char hex_value[BUFSIZ]; size_t hex_len; //construct a real hex (not ascii-hex) value to use for database lookups atoh(id,hex_value,&hex_len); char *value = NULL; vector <struct cdb *>::iterator my_iter = myHashFiles.begin(); vector <struct cdb *>::iterator my_end = myHashFiles.end(); try { //while there are more databases to search and we have not found a match for(; my_iter != my_end && !value ; my_iter++) { //cerr << "\n looking for this MD5:" << id << " hex(" << hex_value << ") \n"; if (cdb_find(*my_iter, hex_value, hex_len)){ //cerr << "\n\nI found the key " << id << " and it is " << cdb_datalen(*my_iter) << " long\n\n"; value = (char *)malloc(cdb_datalen(*my_iter)); cdb_read(*my_iter,value,cdb_datalen(*my_iter),cdb_datapos(*my_iter)); //value = (char *)cdb_getdata(*my_iter); //cerr << "\n\nThe value is:" << value << " len is:" << strlen(value)<< "\n\n"; }; } } catch (...){} return value; }

    Read the article

  • How to discover classes with [Authorize] attributes using Reflection in C#? (or How to build Dynamic

    - by Pretzel
    Maybe I should back-up and widen the scope before diving into the title question... I'm currently writing a web app in ASP.NET MVC 1.0 (although I do have MVC 2.0 installed on my PC, so I'm not exactly restricted to 1.0) -- I've started with the standard MVC project which has your basic "Welcome to ASP.NET MVC" and shows both the [Home] tab and [About] tab in the upper-right corner. Pretty standard, right? I've added 4 new Controller classes, let's call them "Astronomer", "Biologist", "Chemist", and "Physicist". Attached to each new controller class is the [Authorize] attribute. For example, for the BiologistController.cs [Authorize(Roles = "Biologist,Admin")] public class BiologistController : Controller { public ActionResult Index() { return View(); } } These [Authorize] tags naturally limit which user can access different controllers depending on Roles, but I want to dynamically build a Menu at the top of my website in the Site.Master Page based on the Roles the user is a part of. So for example, if JoeUser was a member of Roles "Astronomer" and "Physicist", the navigation menu would say: [Home] [Astronomer] [Physicist] [About] And naturally, it would not list links to "Biologist" or "Chemist" controller Index page. Or if "JohnAdmin" was a member of Role "Admin", links to all 4 controllers would show up in the navigation bar. Ok, you prolly get the idea... Starting with the answer from this StackOverflow topic about Dynamic Menu building in ASP.NET, I'm trying to understand how I would fully implement this. (I'm a newbie and need a little more guidance, so please bare with me.) The answer proposes Extending the Controller class (call it "ExtController") and then have each new WhateverController inherit from ExtController. My conclusion is that I would need to use Reflection in this ExtController Constructor to determine which Classes and Methods have [Authorize] attributes attached to them to determine the Roles. Then using a Static Dictionary, store the Roles and Controllers/Methods in key-value pairs. I imagine it something like this: public class ExtController : Controller { protected static Dictionary<Type,List<string>> ControllerRolesDictionary; protected override void OnActionExecuted(ActionExecutedContext filterContext) { // build list of menu items based on user's permissions, and add it to ViewData IEnumerable<MenuItem> menu = BuildMenu(); ViewData["Menu"] = menu; } private IEnumerable<MenuItem> BuildMenu() { // Code to build a menu SomeRoleProvider rp = new SomeRoleProvider(); foreach (var role in rp.GetRolesForUser(HttpContext.User.Identity.Name)) { } } public ExtController() { // Use this.GetType() to determine if this Controller is already in the Dictionary if (!ControllerRolesDictionary.ContainsKey(this.GetType())) { // If not, use Reflection to add List of Roles to Dictionary // associating with Controller } } } Is this doable? If so, how do I perform Reflection in the ExtController constructor to discover the [Authorize] attribute and related Roles (if any) ALSO! Feel free to go out-of-scope on this question and suggest an alternate way of solving this "Dynamic Site.Master Menu based on Roles" problem. I'm the first to admit that this may not be the best approach.
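    As an illustration of the reflection step only (not the whole menu-building answer), something along these lines could populate the proposed dictionary. It assumes the stock System.Web.Mvc AuthorizeAttribute and Controller types and that the controllers live in the scanning assembly; the class name is mine.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using System.Reflection;
using System.Web.Mvc;

public static class AuthorizeRoleScanner
{
    // Maps each Controller type to the roles named in its [Authorize] attribute(s).
    public static Dictionary<Type, List<string>> Scan()
    {
        var map = new Dictionary<Type, List<string>>();

        var controllerTypes = Assembly.GetExecutingAssembly()
            .GetTypes()
            .Where(t => typeof(Controller).IsAssignableFrom(t) && !t.IsAbstract);

        foreach (var type in controllerTypes)
        {
            var roles = type
                .GetCustomAttributes(typeof(AuthorizeAttribute), inherit: true)
                .Cast<AuthorizeAttribute>()
                .SelectMany(a => (a.Roles ?? "").Split(new[] { ',' }, StringSplitOptions.RemoveEmptyEntries))
                .Select(r => r.Trim())
                .ToList();

            map[type] = roles; // an empty list means the controller carries no role restriction
        }
        return map;
    }
}
```

    A menu builder could then intersect each controller's role list with the current user's roles (from the role provider) to decide which tabs to render.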

    Read the article

  • Sessions not persisting between requests

    - by klonq
    My session objects are only stored within the request scope on google app engine and I can't figure out how to persist objects between requests. The docs are next to useless on this matter and I can't find anyone who's experienced a similar problem. Please help. When I store session objects in the servlet and forward the request to a JSP using: getServletContext().getRequestDispatcher("/example.jsp").forward(request,response); Everything works like it should. But when I store objects to the session and redirect the request using: response.sendRedirect("/example/url"); The session objects are lost to the ether. In fact when I dump session key/value pairs on new requests there is absolutely nothing, session objects only appear within the request scope of servlets which create session objects. It appears to me that the objects are not being written to Memcache or Datastore. In terms of configuring sessions for my application I have set <sessions-enabled>true</sessions-enabled> In appengine-web.xml. Is there anything else I am missing? The single paragraph of documentation on sessions also notes that only objects which implement Serializable can be stored in the session between requests. I have included an example of the code which is not working below. The obvious solution is to not use redirects, and this might be ok for the example given below but some application data does need to be stored in the session between requests so I need to find a solution to this problem. EXAMPLE: The class FlashMessage gives feedback to the user from server-side operations. if (email.send()) { FlashMessage flash = new FlashMessage(FlashMessage.SUCCESS, "Your message has been sent."); session.setAttribute(FlashMessage.SESSION_KEY, flash); // The flash message will not be available in the session object in the next request response.sendRedirect(URL.HOME); } else { FlashMessage flash = new FlashMessage(FlashMessage.ERROR, FlashMessage.INVALID_FORM_DATA); session.setAttribute(FlashMessage.SESSION_KEY, flash); // The flash message is displayed without problem getServletContext().getRequestDispatcher(Templates.CONTACT_FORM).forward(request,response); } FlashMessage.java import java.io.Serializable; public class FlashMessage implements Serializable { private static final long serialVersionUID = 8109520737272565760L; // I have tried using different, default and no serialVersionUID public static final String SESSION_KEY = "flashMessage"; public static final String ERROR = "error"; public static final String SUCCESS = "success"; public static final String INVALID_FORM_DATA = "Your request failed to validate."; private String message; private String type; public FlashMessage (String type, String message) { this.type = type; this.message = message; } public String display(){ return "<div id='flash' class='" + type + "'>" + message + "</div>"; } }

    Read the article

  • plane bombing problems - help

    - by peiska
    I'm training code problems, and on this one I am having problems to solve it, can you give me some tips how to solve it please. The problem is something like this: Your task is to find the sequence of points on the map that the bomber is expected to travel such that it hits all vital links. A link from A to B is vital when its absence isolates completely A from B. In other words, the only way to go from A to B (or vice versa) is via that link. Notice that if we destroy for example link (d,e), it becomes impossible to go from d to e,m,l or n in any way. A vital link can be hit at any point that lies in its segment (e.g. a hit close to d is as valid as a hit close to e). Of course, only one hit is enough to neutralize a vital link. Moreover, each bomb affects an exact circle of radius R, i.e., every segment that intersects that circle is considered hit. Due to enemy counter-attack, the plane may have to retreat at any moment, so the plane should follow, at each moment, to the closest vital link possible, even if in the end the total distance grows larger. Given all coordinates (the initial position of the plane and the nodes in the map) and the range R, you have to determine the sequence of positions in which the plane has to drop bombs. This sequence should start (takeoff) and finish (landing) at the initial position. Except for the start and finish, all the other positions have to fall exactly in a segment of the map (i.e. it should correspond to a point in a non-hit vital link segment). The coordinate system used will be UTM (Universal Transverse Mercator) northing and easting, which basically corresponds to a Euclidian perspective of the world (X=Easting; Y=Northing). Input Each input file will start with three floating point numbers indicating the X0 and Y0 coordinates of the airport and the range R. The second line contains an integer, N, indicating the number of nodes in the road network graph. Then, the next N (<10000) lines will each contain a pair of floating point numbers indicating the Xi and Yi coordinates (1 No two links will ever cross with each other. Output The program will print the sequence of coordinates (pairs of floating point numbers with exactly one decimal place), each one at a line, in the order that the plane should visit (starting and ending in the airport). Sample input 1 102.3 553.9 0.2 14 342.2 832.5 596.2 638.5 479.7 991.3 720.4 874.8 744.3 1284.1 1294.6 924.2 1467.5 659.6 1802.6 659.6 1686.2 860.7 1548.6 1111.2 1834.4 1054.8 564.4 1442.8 850.1 1460.5 1294.6 1485.1 17 1 2 1 3 2 4 3 4 4 5 4 6 6 7 7 8 8 9 8 10 9 10 10 11 6 11 5 12 5 13 12 13 13 14 Sample output 1 102.3 553.9 720.4 874.8 850.1 1460.5 102.3 553.9
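    One building block, sketched in C# since the contest is language-agnostic: the "vital links" are exactly the bridges of the road graph, which a standard lowlink DFS finds in O(N+M). The closest-link-first ordering and the radius-R geometry are left out, and the names are mine.

```csharp
using System;
using System.Collections.Generic;

class BridgeFinder
{
    private readonly List<int>[] adj;
    private readonly int[] disc, low;
    private int timer = 1;

    public readonly List<(int U, int V)> Bridges = new List<(int U, int V)>();

    public BridgeFinder(int nodeCount, IEnumerable<(int U, int V)> edges)
    {
        adj = new List<int>[nodeCount];
        for (int i = 0; i < nodeCount; i++) adj[i] = new List<int>();
        foreach (var (u, v) in edges) { adj[u].Add(v); adj[v].Add(u); }

        disc = new int[nodeCount]; // 0 = not visited yet
        low = new int[nodeCount];
        for (int i = 0; i < nodeCount; i++)
            if (disc[i] == 0) Dfs(i, -1);
    }

    // Classic lowlink rule: tree edge (u,v) is a bridge iff low[v] > disc[u].
    private void Dfs(int u, int parent)
    {
        disc[u] = low[u] = timer++;
        bool parentSkipped = false;
        foreach (int v in adj[u])
        {
            if (v == parent && !parentSkipped) { parentSkipped = true; continue; }
            if (disc[v] == 0)
            {
                Dfs(v, u);
                low[u] = Math.Min(low[u], low[v]);
                if (low[v] > disc[u]) Bridges.Add((u, v)); // removing (u,v) isolates v's subtree
            }
            else
            {
                low[u] = Math.Min(low[u], disc[v]); // back edge
            }
        }
    }
}
```

    Every reported bridge is a segment the plane must hit; all other links can be ignored when planning the bombing route.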

    Read the article

  • Marshalling non-Blittable Structs from C# to C++

    - by Greggo
    I'm in the process of rewriting an overengineered and unmaintainable chunk of my company's library code that interfaces between C# and C++. I've started looking into P/Invoke, but it seems like there's not much in the way of accessible help. We're passing a struct that contains various parameters and settings down to unmanaged codes, so we're defining identical structs. We don't need to change any of those parameters on the C++ side, but we do need to access them after the P/Invoked function has returned. My questions are: What is the best way to pass strings? Some are short (device id's which can be set by us), and some are file paths (which may contain Asian characters) Should I pass an IntPtr to the C# struct or should I just let the Marshaller take care of it by putting the struct type in the function signature? Should I be worried about any non-pointer datatypes like bools or enums (in other, related structs)? We have the treat warnings as errors flag set in C++ so we can't use the Microsoft extension for enums to force a datatype. Is P/Invoke actually the way to go? There was some Microsoft documentation about Implicit P/Invoke that said it was more type-safe and performant. For reference, here is one of the pairs of structs I've written so far: C++ /** Struct used for marshalling Scan parameters from managed to unmanaged code. */ struct ScanParameters { LPSTR deviceID; LPSTR spdClock; LPSTR spdStartTrigger; double spinRpm; double startRadius; double endRadius; double trackSpacing; UINT64 numTracks; UINT32 nominalSampleCount; double gainLimit; double sampleRate; double scanHeight; LPWSTR qmoPath; //includes filename LPWSTR qzpPath; //includes filename }; C# /// <summary> /// Struct used for marshalling scan parameters between managed and unmanaged code. /// </summary> [StructLayout(LayoutKind.Sequential)] public struct ScanParameters { [MarshalAs(UnmanagedType.LPStr)] public string deviceID; [MarshalAs(UnmanagedType.LPStr)] public string spdClock; [MarshalAs(UnmanagedType.LPStr)] public string spdStartTrigger; public Double spinRpm; public Double startRadius; public Double endRadius; public Double trackSpacing; public UInt64 numTracks; public UInt32 nominalSampleCount; public Double gainLimit; public Double sampleRate; public Double scanHeight; [MarshalAs(UnmanagedType.LPWStr)] public string qmoPath; [MarshalAs(UnmanagedType.LPWStr)] public string qzpPath; }
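    For question 2, here is a sketch of what letting the marshaller do the work looks like. The DLL name, entry point and calling convention are invented for illustration, not taken from the post.

```csharp
using System.Runtime.InteropServices;

public static class NativeScanner
{
    // Hypothetical native export:  extern "C" int StartScan(ScanParameters* p);
    // Passing the struct by ref lets the default marshaller build the unmanaged copy
    // (including the LPSTR/LPWSTR strings) for the duration of the call and copy any
    // changes back afterwards, so no manual IntPtr or GCHandle code is needed.
    [DllImport("ScannerNative.dll", CallingConvention = CallingConvention.Cdecl)]
    public static extern int StartScan(ref ScanParameters parameters);
}
```

    On question 3: a C# bool field in a struct marshals as the 4-byte Win32 BOOL by default, so a C++ bool field usually needs [MarshalAs(UnmanagedType.I1)], and enum fields marshal as their underlying integer type.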

    Read the article

  • How can a C/C++ program put itself into the background?

    - by Larry Gritz
    What's the best way for a running C or C++ program that's been launched from the command line to put itself into the background, equivalent to if the user had launched from the unix shell with '&' at the end of the command? (But the user didn't.) It's a GUI app and doesn't need any shell I/O, so there's no reason to tie up the shell after launch. But I want a shell command launch to be auto-backgrounded without the '&' (or on Windows). Ideally, I want a solution that would work on any of Linux, OS X, and Windows. (Or separate solutions that I can select with #ifdef.) It's ok to assume that this should be done right at the beginning of execution, as opposed to somewhere in the middle. One solution is to have the main program be a script that launches the real binary, carefully putting it into the background. But it seems unsatisfying to need these coupled shell/binary pairs. Another solution is to immediately launch another executed version (with 'system' or CreateProcess), with the same command line arguments, but putting the child in the background and then having the parent exit. But this seems clunky compared to the process putting itself into background. Edited after a few answers: Yes, a fork() (or system(), or CreateProcess on Windows) is one way to sort of do this, that I hinted at in my original question. But all of these solutions make a SECOND process that is backgrounded, and then terminate the original process. I was wondering if there was a way to put the EXISTING process into the background. One difference is that if the app was launched from a script that recorded its process id (perhaps for later killing or other purpose), the newly forked or created process will have a different id and so will not be controllable by any launching script, if you see what I'm getting at. Edit #2: fork() isn't a good solution for OS X, where the man page for 'fork' says that it's unsafe if certain frameworks or libraries are being used. I tried it, and my app complains loudly at runtime: "The process has forked and you cannot use this CoreFoundation functionality safely. You MUST exec()." I was intrigued by daemon(), but when I tried it on OS X, it gave the same error message, so I assume that it's just a fancy wrapper for fork() and has the same restrictions. Excuse the OS X centrism, it just happens to be the system in front of me at the moment. But I am indeed looking for a solution to all three platforms.

    Read the article

  • KeyNotFound Exception in Dictionary(of T)

    - by C Patton
    I'm about ready to bang my head against the wall I have a class called Map which has a dictionary called tiles. class Map { public Dictionary<Location, Tile> tiles = new Dictionary<Location, Tile>(); public Size mapSize; public Map(Size size) { this.mapSize = size; } //etc... I fill this dictionary temporarily to test some things.. public void FillTemp(Dictionary<int, Item> itemInfo) { Random r = new Random(); for(int i =0; i < mapSize.Width; i++) { for(int j=0; j<mapSize.Height; j++) { Location temp = new Location(i, j, 0); int rint = r.Next(0, (itemInfo.Count - 1)); Tile t = new Tile(new Item(rint, rint)); tiles[temp] = t; } } } and in my main program code Map m = new Map(10, 10); m.FillTemp(iInfo); Tile t = m.GetTile(new Location(2, 2, 0)); //The problem line now, if I add a breakpoint in my code, I can clearly see that my instance (m) of the map class is filled with pairs via the function above, but when I try to access a value with the GetTile function: public Tile GetTile(Location location) { if(this.tiles.ContainsKey(location)) { return this.tiles[location]; } else { return null; } } it ALWAYS returns null. Again, if I view inside the Map object and find the Location key where x=2,y=2,z=0 , I clearly see the value being a Tile that FillTemp generated.. Why is it doing this? I've had no problems with a Dictionary such as this so far. I have no idea why it's returning null. and again, when debugging, I can CLEARLY see that the Map instance contains the Location key it says it does not... very frustrating. Any clues? Need any more info? Help would be greatly appreciated :)
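    A note on the most common cause of this symptom: if Location is a class that does not override Equals and GetHashCode, the dictionary compares keys by reference, so a freshly constructed new Location(2, 2, 0) never matches the stored key even though the debugger shows identical values. A sketch of the overrides involved, assuming Location exposes X, Y and Z:

```csharp
public sealed class Location
{
    public readonly int X, Y, Z;

    public Location(int x, int y, int z) { X = x; Y = y; Z = z; }

    // Value equality: two Location instances with the same coordinates
    // are treated as the same dictionary key.
    public override bool Equals(object obj) =>
        obj is Location other && X == other.X && Y == other.Y && Z == other.Z;

    public override int GetHashCode() =>
        unchecked((((X * 397) ^ Y) * 397) ^ Z);
}
```

    Declaring Location as a struct gives value-based equality by default, which also makes the lookup work; overriding both members explicitly is still the usual recommendation.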

    Read the article

  • Rendering of graphics different depending on position

    - by jedierikb
    When drawing parallel vertical lines with a fixed distance between them (1.75 pixels) with a non-integer x-value-offset to both lines, the lines are drawn differently based on the offset. In the picture below are two pairs of very close together vertical lines. As you can see, they look very different. This is frustrating, especially when animating the sprite. Any ideas how ensure that sprites-with-non-integer-positions' graphics will visually display the same? package { import flash.display.Sprite; import flash.display.StageAlign; import flash.display.StageScaleMode; import flash.events.Event; public class tmpa extends Sprite { private var _sp1:Sprite; private var _sp2:Sprite; private var _num:Number; public function tmpa( ):void { stage.align = StageAlign.TOP_LEFT; stage.scaleMode = StageScaleMode.NO_SCALE; _sp1 = new Sprite( ); drawButt( _sp1, 0 ); _sp1.x = 100; _sp1.y = 100; _num = 0; _sp2 = new Sprite( ); drawButt( _sp2, _num ); _sp2.x = 100; _sp2.y = 200; addChild( _sp1 ); addChild( _sp2 ); addEventListener( Event.ENTER_FRAME, efCb, false, 0, true ); } private function efCb( evt:Event ):void { _num += .1; if (_num > 400) { _num = 0; } drawButt( _sp2, _num ); } private function drawButt( sp:Sprite, offset:Number ):void { var px1:Number = 1 + offset; var px2:Number = 2.75 + offset; sp.graphics.clear( ); sp.graphics.lineStyle( 1, 0, 1, true ); sp.graphics.moveTo( px1, 1 ); sp.graphics.lineTo( px1, 100 ); sp.graphics.lineStyle( 1, 0, 1, true ); sp.graphics.moveTo( px2, 1 ); sp.graphics.lineTo( px2, 100 ); } } } edit from original post which thought the problem was tied to the x-position of the sprite.

    Read the article

  • OpenGL ES Polygon with Normals rendering (Note the 'ES!')

    - by MarqueIV
    Ok... imagine I have a relatively simple solid that has six distinct normals but actually has close to 48 faces (8 faces per direction) and there are a LOT of shared vertices between faces. What's the most efficient way to render that in OpenGL? I know I can place the vertices in an array, then use an index array to render them, but I have to keep breaking my rendering steps down to change the normals (i.e. set normal 1... render 8 faces... set normal 2... render 8 faces, etc.) Because of that I have to maintain an array of index arrays... one for each normal! Not good! The other way I can do it is to use separate normal and vertex arrays (or even interleave them) but that means I need to have a one-to-one ratio for normals to vertices and that means the normals would be duplicated 8 times more than they need to be! On something with a spherical or even curved surface, every normal most likely is different, but for this, it really seems like a waste of memory. In a perfect world I'd like to have my vertex and normal arrays have different lengths, then when I go to draw my triangles or quads To specify the index to each array for that vertex. Now the OBJ file format lets you specify exactly that... a vertex array and a normal array of different lengths, then when you specify the face you are rendering, you specify a vertex and a normal index (as well as a UV coord if you are using textures too) which seems like the perfect solution! 48 vertices but only 8 normals, then pairs of indexes defining the shapes' faces. But I'm not sure how to render that in OpenGL ES (again, note the 'ES'.) Currently I have to 'denormalize' (sorry for the SQL pun there) the normals back to a 1-to-1 with the vertex array, then render. Just wastes memory to me. Anyone help? I hope I'm missing something very simple here. Mark

    Read the article

  • Why does getting page content from a URL always return "no permission"?

    - by tiendv
    I have a method that returns the page content of a link, but when it runs it always returns "Do not Permission access URL". Please check it; here is the code that returns the page content as a string: public static String getPageContent(String targetURL) throws Exception { Hashtable contentHash = new Hashtable(); URL url; URLConnection conn; // The data streams used to read from and write to the URL connection. DataOutputStream out; DataInputStream in; // String returned as the result . String returnString = ""; // Create the URL object and make a connection to it. url = new URL(targetURL); conn = url.openConnection(); // check out permission of access URL if (conn.getPermission() != null) { returnString = "Do not Permission access URL "; } else { // Set connection parameters. We need to perform input and output, // so set both as true. conn.setDoInput(true); conn.setDoOutput(true); // Disable use of caches. conn.setUseCaches(false); // Set the content type we are POSTing. We impersonate it as // encoded form data conn.setRequestProperty("Content-Type", "application/x-www-form-urlencoded"); // get the output stream . out = new DataOutputStream(conn.getOutputStream()); String content = ""; // Create a single String value pairs for all the keys // in the Hashtable passed to us. Enumeration e = contentHash.keys(); boolean first = true; while (e.hasMoreElements()) { // For each key and value pair in the hashtable Object key = e.nextElement(); Object value = contentHash.get(key); // If this is not the first key-value pair in the hashtable, // concatenate an "&" sign to the constructed String if (!first) content += "&"; // append to a single string. Encode the value portion content += (String) key + "=" + URLEncoder.encode((String) value); first = false; } // Write out the bytes of the content string to the stream. out.writeBytes(content); out.flush(); out.close(); // check if can't read from URL // Read input from the input stream. in = new DataInputStream(conn.getInputStream()); String str; while (null != ((str = in.readLine()))) { returnString += str + "\n"; } in.close(); } // return the string that was read. return returnString; }

    Read the article

  • Making an efficient algorithm

    - by James P.
    Here's my recent submission for the FB programming contest (qualifying round only requires to upload program output so source code doesn't matter). The objective is to find two squares that add up to a given value. I've left it as it is as an example. It does the job but is too slow for my liking. Here's the points that are obviously eating up time: List of squares is being recalculated for each call of getNumOfDoubleSquares(). This could be precalculated or extended when needed. Both squares are being checked for when it is only necessary to check for one (complements). There might be a more efficient way than a double-nested loop to find pairs. Other suggestions? Besides this particular problem, what do you look for when optimizing an algorithm? public static int getNumOfDoubleSquares( Integer target ){ int num = 0; ArrayList<Integer> squares = new ArrayList<Integer>(); ArrayList<Integer> found = new ArrayList<Integer>(); int squareValue = 0; for( int j=0; squareValue<=target; j++ ){ squares.add(j, squareValue); squareValue = (int)Math.pow(j+1,2); } int squareSum = 0; System.out.println( "Target=" + target ); for( int i = 0; i < squares.size(); i++ ){ int square1 = squares.get(i); for( int j = 0; j < squares.size(); j++ ){ int square2 = squares.get(j); squareSum = square1 + square2; if( squareSum == target && !found.contains( square1 ) && !found.contains( square2 ) ){ found.add(square1); found.add(square2); System.out.println( "Found !" + square1 +"+"+ square2 +"="+ squareSum); num++; } } } return num; }
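    Sketched in C# rather than the original Java, one shape the complement idea can take: build the set of squares up to the target once, then walk only the smaller square of each pair and test whether the remainder is also a square. Each unordered pair is counted exactly once and the double loop disappears.

```csharp
using System.Collections.Generic;

static class DoubleSquares
{
    // Counts unordered pairs (a, b) with a <= b and a*a + b*b == target.
    public static int Count(int target)
    {
        var squares = new HashSet<long>();
        for (long i = 0; i * i <= target; i++) squares.Add(i * i);

        int count = 0;
        for (long a = 0; 2 * a * a <= target; a++)        // a <= b, so no pair is seen twice
            if (squares.Contains(target - a * a)) count++;
        return count;
    }
}
```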

    Read the article

  • Why does my if statement always evaluate to true?

    - by Pobe
    I need to go through the months of the year and find out if the last day of the month is 28, 29, 30 or 31. My problem is that the first if statement always evaluates to true: MOIS_I = 31 if (mois == "Janvier" || "Mars" || "Mai" || "Juillet" || "Août" || "Octobre" || "Décembre" || "1" || "3" || "5" || "7" || "8" || "10" || "12" || "01" || "03" || "05" || "07" || "08") { window.alert("Le mois " + mois + " de l'année " + annee + " compte " + MOIS_I + " jours "); } Also, it seems like it is necessary to do if (mois == "Janver" || mois == "Février" || ... ) and so on, but I wanted to know if there was a better way to do it. Here is the full code: var mois, annee, test4, test100, test400; const MOIS_P = 30; const MOIS_I = 31; const FEV_NORM = 28; const FEV_BISSEX = 29; const TEST_4 = 4; const TEST_100 = 100; const TEST_400 = 400; mois = window.prompt("Entrez un mois de l'année", ""); annee = window.prompt("Entrez l'année de ce mois", ""); /* MOIS IMPAIRS */ if (mois == "Janvier" || "Mars" || "Mai" || "Juillet" || "Août" || "Octobre" || "Décembre" || "1" || "3" || "5" || "7" || "8" || "10" || "12" || "01" || "03" || "05" || "07" || "08") { window.alert("Le mois " + mois + " de l'année " + annee + " compte " + MOIS_I + " jours "); /* MOIS PAIRS */ } else if (mois == "Février" || "Avril" || "Juin" || "Septembre" || "Novembre" || "2" || "4" || "6" || "9" || "11" || "02" || "04" || "06" || "09") { if (mois == "Février") { test4 = parseInt(annee) % TEST_4; test100 = parseInt(annee) % TEST_100; test400 = parseInt(annee) % TEST_400; if (test4 == 0) { if (test100 != 0) { window.alert("Le mois " + mois + " de l'année " + annee + " compte " + FEV_BISSEX + " jours "); } else { window.alert("Le mois " + mois + " de l'année " + annee + " compte " + FEV_NORM + " jours "); } } else if (test400 == 0) { window.alert("Le mois " + mois + " de l'année " + annee + " compte " + FEV_BISSEX + " jours "); } else { window.alert("Le mois " + mois + " de l'année " + annee + " compte " + FEV_NORM + " jours "); } } else { window.alert("Le mois " + mois + " de l'année " + annee + " compte " + MOIS_P + " jours "); } } else { window.alert("Apocalypse!"); } TEST_4, TEST_100, TEST_400 are to test if the year is a leap year (which means february has 29 days instead of 28). Thank you!

    Read the article

  • How can I represent a line of music notes in a way that allows fast insertion at any index?

    - by chairbender
    For "fun", and to learn functional programming, I'm developing a program in Clojure that does algorithmic composition using ideas from this theory of music called "Westergaardian Theory". It generates lines of music (where a line is just a single staff consisting of a sequence of notes, each with pitches and durations). It basically works like this: Start with a line consisting of three notes (the specifics of how these are chosen are not important). Randomly perform one of several "operations" on this line. The operation picks randomly from all pairs of adjacent notes that meet a certain criteria (for each pair, the criteria only depends on the pair and is independent of the other notes in the line). It inserts 1 or several notes (depending on the operation) between the chosen pair. Each operation has its own unique criteria. Continue randomly performing these operations on the line until the line is the desired length. The issue I've run into is that my implementation of this is quite slow, and I suspect it could be made faster. I'm new to Clojure and functional programming in general (though I'm experienced with OO), so I'm hoping someone with more experience can point out if I'm not thinking in a functional paradigm or missing out on some FP technique. My current implementation is that each line is a vector containing maps. Each map has a :note and a :dur. :note's value is a keyword representing a musical note like :A4 or :C#3. :dur's value is a fraction, representing the duration of the note (1 is a whole note, 1/4 is a quarter note, etc...). So, for example, a line representing the C major scale starting on C3 would look like this: [ {:note :C3 :dur 1} {:note :D3 :dur 1} {:note :E3 :dur 1} {:note :F3 :dur 1} {:note :G3 :dur 1} {:note :A4 :dur 1} {:note :B4 :dur 1} ] This is a problematic representation because there's not really a quick way to insert into an arbitrary index of a vector. But insertion is the most frequently performed operation on these lines. My current terrible function for inserting notes into a line basically splits the vector using subvec at the point of insertion, uses conj to join the first part + notes + last part, then uses flatten and vec to make them all be in a one-dimensional vector. For example if I want to insert C3 and D3 into the the C major scale at index 3 (where the F3 is), it would do this (I'll use the note name in place of the :note and :dur maps): (conj [C3 D3 E3] [C3 D3] [F3 G3 A4 B4]), which creates [C3 D3 E3 [C3 D3] [F3 G3 A4 B4]] (vec (flatten previous-vector)) which gives [C3 D3 E3 C3 D3 F3 G3 A4 B4] The run time of that is O(n), AFAIK. I'm looking for a way to make this insertion faster. I've searched for information on Clojure data structures that have fast insertion but haven't found anything that would work. I found "finger trees" but they only allow fast insertion at the start or end of the list. Edit: I split this into two questions. The other part is here.

    Read the article

  • What is the best way to reduce code and loop through a hierarchical commission script?

    - by JM4
    I have a script which currently "works" but is nearly 3600 lines of code and makes well over 50 database calls within a single script. From my experience, there is no way to really "loop" the script and minimize it because each call to the database is a subquery of the ones before based on referral ids. Perhaps I can give a very simple example of what I am trying to accomplish and see if anybody has experience with something similar. In my example, there are three tables: Table 1 - Sellers ID | Comm_level | Parent ----------------------------------- 1 | 4 | NULL 2 | 3 | 1 3 | 2 | 1 4 | 2 | 2 5 | 2 | 2 6 | 1 | 3 Where ID is the id of one of our sales agents, comm_level will determine what his commission percentage is for each product he sells, parent indicates the ID for whom recruited that particular agent. In the example above, 1 is the top agent, he recruited two agents, 2 and 3. 2 recruited two agents, 4 and 5. 3 recruited one agent, 6. NOTE: An agent can NEVER recruit anybody equal to or higher than their own level. Table 2 - Commissions Level | Item 1 | Item 2 | Item 3 ----------------------------------------------------- 4 | .5 | .4 | .3 3 | .45 | .35 | .25 2 | .4 | .3 | .2 1 | .35 | .25 | .15 This table lays out the commission percentages for each agent based on their actual comm_level (if an agent is at a level 4, he will receive 50% on every item 1 sold, 40% on every item 2, 30% on every item 3 and so on. Table 3 - Items Sold ID | Item --------------------- 4 | item_1 4 | item_2 1 | item_1 2 | item_3 6 | item_2 1 | item_3 This table pairs the actual item sold with the seller who sold the item. When generating the commission report, calculating individual values is very simple. Calculating their commission based on their sub_sellers however is very difficult. In this example, Seller ID 1 gets a piece of every single item sold. The commission percentages indicate individual sales or the height of their commission. For example: When seller ID 6 sold one of item_2 above, the tree for commissions will look like the following: -ID 6 - 25% of cost(item_1) -ID 3 - 5% of cost(item_1) - (30% is his comm - 25% comm of seller id 6) -ID 1 - 10% of cost(item_1) - (40% is his comm - 30% of seller id 3) This must be calculated for every agent in the system from the top down (hence the DB calls within while loops throughout my enormous script). Anybody have a good suggestion or samples they may have used in the past?
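    One general way to collapse the per-level queries, sketched in C# only to keep the examples on this page in one language: load the sellers, rates and sales once, build the tree in memory, and for each sale walk up the parent chain, paying each ancestor the difference between their rate and what has already been paid below them (the 25% / 5% / 10% split in the example). The type and member names are mine, not from the script.

```csharp
using System.Collections.Generic;

public class Seller
{
    public int Id;
    public int CommLevel;
    public Seller Parent; // null for the top-level seller
}

public static class CommissionCalc
{
    // rates[level][item] = commission fraction for that level and item (e.g. 0.40m).
    public static Dictionary<int, decimal> Distribute(
        IEnumerable<(int SellerId, string Item, decimal Price)> sales,
        IReadOnlyDictionary<int, Seller> sellers,
        IReadOnlyDictionary<int, IReadOnlyDictionary<string, decimal>> rates)
    {
        var payout = new Dictionary<int, decimal>();

        foreach (var (sellerId, item, price) in sales)
        {
            decimal paidBelow = 0m; // commission fraction already consumed by lower levels
            for (var s = sellers[sellerId]; s != null; s = s.Parent)
            {
                decimal rate = rates[s.CommLevel][item];
                decimal slice = rate - paidBelow;   // e.g. 0.30 - 0.25 = 0.05 override
                if (slice > 0m)
                {
                    payout.TryGetValue(s.Id, out var current);
                    payout[s.Id] = current + slice * price;
                    paidBelow = rate;
                }
            }
        }
        return payout;
    }
}
```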

    Read the article

  • How to use jQuery to generate 2 new associated objects in a nested form?

    - by mind.blank
    I have a model called Pair, which has_many :questions, and each Question has_one :answer. I've been following this railscast on creating nested forms, however I want to generate both a Question field and it's Answer field when clicking on an "Add Question" link. After following the railscast this is what I have: ..javascripts/common.js.coffee: window.remove_fields = (link)-> $(link).closest(".question_remove").remove() window.add_fields = (link, association, content)-> new_id = new Date().getTime() regexp = new RegExp("new_" + association, "g") $(link).before(content.replace(regexp, new_id)) application_helper.rb: def link_to_add_fields(name, f, association) new_object = f.object.class.reflect_on_association(association).klass.new fields = f.simple_fields_for(association, new_object, :child_index => "new_#{association}") do |builder| render(association.to_s.singularize + "_fields", :f => builder) end link_to_function(name, "window.add_fields(this, \"#{association}\", \"#{escape_javascript(fields)}\")", class: "btn btn-inverse") end views/pairs/_form.html.erb: <%= simple_form_for(@pair) do |f| %> <div class="row"> <div class="well span4"> <%= f.input :sys_heading, label: "System Heading", placeholder: "required", input_html: { class: "span4" } %> <%= f.input :heading, label: "User Heading", input_html: { class: "span4" } %> <%= f.input :instructions, as: :text, input_html: { class: "span4 input_text" } %> </div> </div> <%= f.simple_fields_for :questions do |builder| %> <%= render 'question_fields', f: builder %> <% end %> <%= link_to_add_fields "<i class='icon-plus icon-white'></i> Add Another Question".html_safe, f, :questions %> <%= f.button :submit, "Save Pair", class: "btn btn-success" %> <% end %> _question_fields.html.erb partial: <div class="question_remove"> <div class="row"> <div class="well span4"> <%= f.input :text, label: "Question", input_html: { class: "span4" }, placeholder: "your question...?" %> <%= f.simple_fields_for :answer do |builder| %> <%= render 'answer_fields', f: builder %> <% end %> </div> </div> </div> _answer_fields.html.erb partial: <%= f.input :text, label: "Answer", input_html: { class: "span4" }, placeholder: "your answer" %> <%= link_to_function "remove", "remove_fields(this)", class: "float-right" %> I'm especially confused by the reflect_on_association part, for example how does calling .new there create an association? I usually need to use .build Also for a has_one I use .build_answer rather than answers.build - so what does this mean for the jQuery part?

    Read the article

  • Embedding ADF UI Components into OAF regions

    - by Juan Camilo Ruiz
    Having finished the 2 Webcast on ADF integration with Oracle E-Business Suite, Sara Woodhull, Principal Product Manager on the Oracle E-Business Suite Applications Technology team and I are going to continue adding entries to the series on this topic, trying to cover as many use cases as possible. In this entry, Sara created an overview on how Oracle ADF pages can be embedded into an Oracle Application Framework region. This is a very interesting approach that will enable those of you who are exploring ADF as a technology stack to enhanced some of the Oracle E-Business Suite flows and leverage your skill on Oracle Applications Framework (OAF). In upcoming entries we will start unveiling the internals needed to achieve session sharing between the regions. Stay tuned for more entries and enjoy this new post.   Document Scope This document only covers information that is specific to embedding an Oracle ADF page in an Oracle Application Framework–based page. It assumes knowledge of Oracle ADF and Oracle Application Framework development. It also assumes knowledge of the material in My Oracle Support Note 974949.1, “Oracle E-Business Suite SDK for Java” and My Oracle Support Note 1296491.1, "FAQ for Integration of Oracle E-Business Suite and Oracle Application Development Framework (ADF) Applications". Prerequisite Patch Download Patch 12726556:R12.FND.B from My Oracle Support and install it. The implementation described below requires Patch 12726556:R12.FND.B to provide the accessors for the ADF page. This patch is required in addition to the Oracle E-Business Suite SDK for Java patch described in My Oracle Support Note 974949.1. Development Environments You need two different JDeveloper environments: Oracle ADF and OA Framework. Oracle ADF Development Environment You build your Oracle ADF page using JDeveloper 11g. You should use JDeveloper 11g R1 (the latest is 11.1.1.6.0) if you need to use other products in the Oracle Fusion Middleware Stack, such as Oracle WebCenter, Oracle SOA Suite, or BI. You should use JDeveloper 11g R2 (the latest is 11.1.2.3.0) if you do not need other Oracle Fusion Middleware products. JDeveloper 11g R2 is an Oracle ADF-specific release that supports the latest Java EE standards and has various core improvements. Oracle Application Framework Development Environment Build your OA Framework page using a development environment corresponding to your Oracle E-Business Suite version. You must use Release 12.1.2 or later because the rich content container was introduced in Release 12.1.2. See “OA Framework - How to find the correct version of JDeveloper to use with eBusiness Suite 11i or Release 12.x” (My Oracle Support Doc ID 416708.1). Building your Oracle ADF Page Typically you build your ADF page using the session management feature of the Oracle E-Business Suite SDK for Java as described in My Oracle Support Note 974949.1. Also see My Oracle Support Note 1296491.1, "FAQ for Integration of Oracle E-Business Suite and Oracle Application Development Framework (ADF) Applications". Building an ADF Page with the Hierarchy Viewer If you are using the ADF hierarchy viewer, you should set up the structure and settings of the ADF page as follows or the hierarchy viewer may not fill the entire area it is supposed to fill (especially a problem in Firefox). Create a stretchable component as the parent component for the hierarchy viewer, such as af:panelStretchLayout (underneath the af:form component in the structure). 
Use af:panelStretchLayout for Oracle ADF 11.1.1.6 and earlier. For later versions of Oracle ADF, use af:panelGridLayout. Create your hierarchy viewer component inside the stretchable component. Create Function in Oracle E-Business Suite Instance In your Oracle E-Business Suite instance, create a function for your ADF page with the following parameters. You can use either the Functions window in the System Administrator responsibility or the Functions page in the Functional Administrator responsibility. Function Function Name Type=External ADF Function (ADFX) HTML Call=GWY.jsp?targetPage=faces/<your ADF page> ">You must also add your function to an Oracle E-Business Suite menu or permission set and set up function security or role-based access control (RBAC) so that the user has authorization to access the function. If you do not want the function to appear on the navigation menu, add the function without a menu prompt. See the Oracle E-Business Suite System Administrator's Guide Documentation Set for more information. Testing the Function from the Oracle E-Business Suite Home Page It’s a good idea to test launching your ADF page from the Oracle E-Business Suite Home Page. Add your function to the navigation menu for your responsibility with a prompt and try launching it. If your ADF page expects parameters from the surrounding page, those might not be available, however. Setting up the Oracle Application Framework Rich Container Once you have built your Oracle ADF 11g page, you need to embed it in your Oracle Application Framework page. Create Rich Content Container in your OA Framework JDeveloper environment In the OA Extension Structure pane for your OAF page, select the region where you want to add the rich content, and add a richContainer item to the region. Set the following properties on the richContainer item: id Content Type=Others (for Release 12.1.3. This property value may change in a future release.) Destination Function=[function code] Width (in pixels or percent, such as 100%) Height (in pixels) Parameters=[any parameters your Oracle ADF page is expecting to receive from the Oracle Application Framework page] Parameters In the Parameters property, specify parameters that will be passed to the embedded content as a list of comma-separated, name-value pairs. Dynamic parameters may be specified as paramName={@viewAttr}. Dynamic Rich Content Container Properties If you want your rich content container to display a different Oracle ADF page depending on other information, you would set up a different function for each different Oracle ADF page. You would then set the Destination Function and Parameters properties programmatically, instead of setting them in the Property Inspector. 
In the processRequest() method of your Oracle Application Framework page controller, where OAFRichContentPage is the ID of your richContainer item and the parameters are whatever parameters your ADF page expects, your code might look similar to this code fragment: OARichContainerBean richBean = (OARichContainerBean) webBean.findChildRecursive("OAFRichContentPage"); if(richBean != null){ if(isFirstCondition){ richBean.setFunctionName("ADF_EXAMPLE_EMBEDDED"); richBean.setParameters("ParamLoginPersonId="+loginPersonId +"&ParamPersonId="+personId+"&ParamUserId="+userId +"&ParamRespId="+respId+"&ParamRespApplId="+respApplId +"&ParamFromOA=Y"+"&ParamSecurityGroupId="+securityGroupId); } else if(isSecondCondition){ richBean.setFunctionName("ADF_EXAMPLE_OTHER_FUNCTION"); richBean.setParameters("ParamLoginPersonId=" +loginPersonId+"&ParamPersonId="+personId +"&ParamUserId="+userId+"&ParamRespId="+respId +"&ParamRespApplId="+respApplId +"&ParamFromOA=Y" +"&ParamSecurityGroupId="+securityGroupId); } }

    Read the article

  • ASP.NET MVC 3: Implicit and Explicit code nuggets with Razor

    - by ScottGu
    This is another in a series of posts I’m doing that cover some of the new ASP.NET MVC 3 features: New @model keyword in Razor (Oct 19th) Layouts with Razor (Oct 22nd) Server-Side Comments with Razor (Nov 12th) Razor’s @: and <text> syntax (Dec 15th) Implicit and Explicit code nuggets with Razor (today) In today’s post I’m going to discuss how Razor enables you to both implicitly and explicitly define code nuggets within your view templates, and walkthrough some code examples of each of them.  Fluid Coding with Razor ASP.NET MVC 3 ships with a new view-engine option called “Razor” (in addition to the existing .aspx view engine).  You can learn more about Razor, why we are introducing it, and the syntax it supports from my Introducing Razor blog post. Razor minimizes the number of characters and keystrokes required when writing a view template, and enables a fast, fluid coding workflow. Unlike most template syntaxes, you do not need to interrupt your coding to explicitly denote the start and end of server blocks within your HTML. The Razor parser is smart enough to infer this from your code. This enables a compact and expressive syntax which is clean, fast and fun to type. For example, the Razor snippet below can be used to iterate a collection of products and output a <ul> list of product names that link to their corresponding product pages: When run, the above code generates output like below: Notice above how we were able to embed two code nuggets within the content of the foreach loop.  One of them outputs the name of the Product, and the other embeds the ProductID within a hyperlink.  Notice that we didn’t have to explicitly wrap these code-nuggets - Razor was instead smart enough to implicitly identify where the code began and ended in both of these situations.  How Razor Enables Implicit Code Nuggets Razor does not define its own language.  Instead, the code you write within Razor code nuggets is standard C# or VB.  This allows you to re-use your existing language skills, and avoid having to learn a customized language grammar. The Razor parser has smarts built into it so that whenever possible you do not need to explicitly mark the end of C#/VB code nuggets you write.  This makes coding more fluid and productive, and enables a nice, clean, concise template syntax.  Below are a few scenarios that Razor supports where you can avoid having to explicitly mark the beginning/end of a code nugget, and instead have Razor implicitly identify the code nugget scope for you: Property Access Razor allows you to output a variable value, or a sub-property on a variable that is referenced via “dot” notation: You can also use “dot” notation to access sub-properties multiple levels deep: Array/Collection Indexing: Razor allows you to index into collections or arrays: Calling Methods: Razor also allows you to invoke methods: Notice how for all of the scenarios above how we did not have to explicitly end the code nugget.  Razor was able to implicitly identify the end of the code block for us. 
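    (The post's inline snippets are images and do not survive in this listing. The opening product-list example it describes would look roughly like the following, assuming a products collection is in scope; @p.Name and the @p.ProductID inside the href are the two implicit code nuggets being discussed.)

```cshtml
<ul>
    @foreach (var p in products) {
        <li><a href="/Products/@p.ProductID">@p.Name</a></li>
    }
</ul>
```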
Razor’s Parsing Algorithm for Code Nuggets The below algorithm captures the core parsing logic we use to support “@” expressions within Razor, and to enable the implicit code nugget scenarios above: Parse an identifier - As soon as we see a character that isn't valid in a C# or VB identifier, we stop and move to step 2 Check for brackets - If we see "(" or "[", go to step 2.1., otherwise, go to step 3  Parse until the matching ")" or "]" (we track nested "()" and "[]" pairs and ignore "()[]" we see in strings or comments) Go back to step 2 Check for a "." - If we see one, go to step 3.1, otherwise, DO NOT ACCEPT THE "." as code, and go to step 4 If the character AFTER the "." is a valid identifier, accept the "." and go back to step 1, otherwise, go to step 4 Done! Differentiating between code and content Step 3.1 is a particularly interesting part of the above algorithm, and enables Razor to differentiate between scenarios where an identifier is being used as part of the code statement, and when it should instead be treated as static content: Notice how in the snippet above we have ? and ! characters at the end of our code nuggets.  These are both legal C# identifiers – but Razor is able to implicitly identify that they should be treated as static string content as opposed to being part of the code expression because there is whitespace after them.  This is pretty cool and saves us keystrokes. Explicit Code Nuggets in Razor Razor is smart enough to implicitly identify a lot of code nugget scenarios.  But there are still times when you want/need to be more explicit in how you scope the code nugget expression.  The @(expression) syntax allows you to do this: You can write any C#/VB code statement you want within the @() syntax.  Razor will treat the wrapping () characters as the explicit scope of the code nugget statement.  Below are a few scenarios where we could use the explicit code nugget feature: Perform Arithmetic Calculation/Modification: You can perform arithmetic calculations within an explicit code nugget: Appending Text to a Code Expression Result: You can use the explicit expression syntax to append static text at the end of a code nugget without having to worry about it being incorrectly parsed as code: Above we have embedded a code nugget within an <img> element’s src attribute.  It allows us to link to images with URLs like “/Images/Beverages.jpg”.  Without the explicit parenthesis, Razor would have looked for a “.jpg” property on the CategoryName (and raised an error).  By being explicit we can clearly denote where the code ends and the text begins. Using Generics and Lambdas Explicit expressions also allow us to use generic types and generic methods within code expressions – and enable us to avoid the <> characters in generics from being ambiguous with tag elements. One More Thing….Intellisense within Attributes We have used code nuggets within HTML attributes in several of the examples above.  One nice feature supported by the Razor code editor within Visual Studio is the ability to still get VB/C# intellisense when doing this. Below is an example of C# code intellisense when using an implicit code nugget within an <a> href=”” attribute: Below is an example of C# code intellisense when using an explicit code nugget embedded in the middle of a <img> src=”” attribute: Notice how we are getting full code intellisense for both scenarios – despite the fact that the code expression is embedded within an HTML attribute (something the existing .aspx code editor doesn’t support).  
This makes writing code even easier, and ensures that you can take advantage of intellisense everywhere.

Summary

Razor enables a clean and concise templating syntax that supports a very fluid coding workflow. Razor's ability to implicitly scope code nuggets reduces the amount of typing you need to perform, and leaves you with really clean code. When necessary, you can also explicitly scope code expressions using the @(expression) syntax to provide greater clarity around your intent, as well as to disambiguate code statements from static markup.

Hope this helps,

Scott

P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

    Read the article

  • Analytic functions – they’re not aggregates

    - by Rob Farley
SQL 2012 brings us a bunch of new analytic functions, together with enhancements to the OVER clause. People who have known me over the years will remember that I'm a big fan of the OVER clause and the types of things that it brings us when applied to aggregate functions, as well as the ranking functions that it enables. The OVER clause was introduced in SQL Server 2005, and remained frustratingly unchanged until SQL Server 2012.

This post is going to look at a particular aspect of the analytic functions though (not the enhancements to the OVER clause). When I give presentations about the analytic functions around Australia as part of the tour of SQL Saturdays (starting in Brisbane this Thursday), and in Chicago next month, I'll make sure it's sufficiently well described. But for this post – I'm going to skip that and assume you get it.

The analytic functions introduced in SQL 2012 seem to come in pairs – FIRST_VALUE and LAST_VALUE, LAG and LEAD, CUME_DIST and PERCENT_RANK, PERCENTILE_CONT and PERCENTILE_DISC. Perhaps frustratingly, they take slightly different forms as well. The ones I want to look at now are FIRST_VALUE and LAST_VALUE, and PERCENTILE_CONT and PERCENTILE_DISC. The reason I'm pulling these ones out is that they always produce the same result within their partitions (if you're applying them to the whole partition). Consider the following query:

SELECT
    YEAR(OrderDate),
    FIRST_VALUE(TotalDue)
        OVER (PARTITION BY YEAR(OrderDate)
              ORDER BY OrderDate, SalesOrderID
              RANGE BETWEEN UNBOUNDED PRECEDING
                        AND UNBOUNDED FOLLOWING),
    LAST_VALUE(TotalDue)
        OVER (PARTITION BY YEAR(OrderDate)
              ORDER BY OrderDate, SalesOrderID
              RANGE BETWEEN UNBOUNDED PRECEDING
                        AND UNBOUNDED FOLLOWING),
    PERCENTILE_CONT(0.95)
        WITHIN GROUP (ORDER BY TotalDue)
        OVER (PARTITION BY YEAR(OrderDate)),
    PERCENTILE_DISC(0.95)
        WITHIN GROUP (ORDER BY TotalDue)
        OVER (PARTITION BY YEAR(OrderDate))
FROM Sales.SalesOrderHeader
;

This is designed to get the TotalDue for the first order of the year, the last order of the year, and also the 95th percentile, using both the continuous and discrete methods ('discrete' means it picks the closest one from the values available – 'continuous' means it will happily use something in between, similar to what you would do for a traditional median of four values; for example, the 0.5 percentile of the values 10, 20, 30 and 40 is 25 with PERCENTILE_CONT but 20 with PERCENTILE_DISC). I'm sure you can imagine the results – a different value for each field, but within each year, all the rows the same.

Notice that I'm not grouping by the year. Nor am I filtering. This query gives us a result for every row in the SalesOrderHeader table – 31465 in this case (using the original AdventureWorks that dates back to the SQL 2005 days). The RANGE BETWEEN bit in FIRST_VALUE and LAST_VALUE is needed to make sure that we're considering all the rows available. If we don't specify that, it assumes we only mean "RANGE BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW", which means that LAST_VALUE ends up being the row we're looking at.

At this point you might think about other environments such as Access or Reporting Services, and remember aggregate functions like FIRST.
We really should be able to do something like:

SELECT
    YEAR(OrderDate),
    FIRST_VALUE(TotalDue)
        OVER (PARTITION BY YEAR(OrderDate)
              ORDER BY OrderDate, SalesOrderID
              RANGE BETWEEN UNBOUNDED PRECEDING
                        AND UNBOUNDED FOLLOWING)
FROM Sales.SalesOrderHeader
GROUP BY YEAR(OrderDate)
;

But you can't. You get that age-old error:

Msg 8120, Level 16, State 1, Line 5
Column 'Sales.SalesOrderHeader.OrderDate' is invalid in the select list because it is not contained in either an aggregate function or the GROUP BY clause.
Msg 8120, Level 16, State 1, Line 5
Column 'Sales.SalesOrderHeader.SalesOrderID' is invalid in the select list because it is not contained in either an aggregate function or the GROUP BY clause.

Hmm. You see, FIRST_VALUE isn't an aggregate function. None of these analytic functions are. There are too many things involved for SQL to realise that the values produced might be identical within the group. Furthermore, you can't even surround it in a MAX. Then you get a different error, telling you that you can't use windowed functions in the context of an aggregate. And so we end up grouping by doing a DISTINCT.

SELECT DISTINCT
    YEAR(OrderDate),
    FIRST_VALUE(TotalDue)
        OVER (PARTITION BY YEAR(OrderDate)
              ORDER BY OrderDate, SalesOrderID
              RANGE BETWEEN UNBOUNDED PRECEDING
                        AND UNBOUNDED FOLLOWING),
    LAST_VALUE(TotalDue)
        OVER (PARTITION BY YEAR(OrderDate)
              ORDER BY OrderDate, SalesOrderID
              RANGE BETWEEN UNBOUNDED PRECEDING
                        AND UNBOUNDED FOLLOWING),
    PERCENTILE_CONT(0.95)
        WITHIN GROUP (ORDER BY TotalDue)
        OVER (PARTITION BY YEAR(OrderDate)),
    PERCENTILE_DISC(0.95)
        WITHIN GROUP (ORDER BY TotalDue)
        OVER (PARTITION BY YEAR(OrderDate))
FROM Sales.SalesOrderHeader
;

I'm sorry. It's just the way it goes. Hopefully it'll change in the future, but for now, it's what you'll have to do.

If we look in the execution plan, we see that it's incredibly ugly, and actually works out the results of these analytic functions for all 31465 rows, finally performing the distinct operation to convert it into the four rows we get in the results. You might be able to achieve a better plan using things like TOP, or the kind of calculation that I used in http://sqlblog.com/blogs/rob_farley/archive/2011/08/23/t-sql-thoughts-about-the-95th-percentile.aspx (which is how PERCENTILE_CONT works), but it's definitely convenient to use these functions, and in time, I'm sure we'll see good improvements in the way that they are implemented.

Oh, and this post should be good for fellow SQL Server MVP Nigel Sammy's T-SQL Tuesday this month.

    Read the article

  • Using Find/Replace with regular expressions inside a SSIS package

    - by jamiet
Another one of those might-be-useful-again-one-day-so-I'll-share-it-in-a-blog-post blog posts.

I am currently working on a SQL Server Integration Services (SSIS) 2012 implementation where each package contains a parameter called ETLIfcHist_ID. During normal execution this will get altered when the package is executed from the Execute Package Task; however, we want to make sure that at deployment-time they all have a default value of -1. Of course, they tend to get changed during development, so I wanted a way of easily changing them all back to the default value. Opening up each package in turn and editing them was an option, but given that we have over 40 packages and we might want to carry out this reset fairly frequently, I needed a more automated method, so I turned to Visual Studio's Find/Replace… feature. Of course, we don't know what value will be in that parameter so I can't simply search for a particular value; hence I opted to use a regular expression to identify the value to be changed. In the rest of this blog post I'll explain how to do that.

For demonstration purposes I have taken the contents of a .dtsx file and stripped out everything except the element containing the parameters (<DTS:PackageParameters>). If you want to play along at home you can copy-paste the XML document below into a new XML file and open it up in Visual Studio:

<?xml version="1.0"?>
<DTS:Executable xmlns:DTS="www.microsoft.com/SqlServer/Dts">
  <DTS:PackageParameters>
    <DTS:PackageParameter
      DTS:CreationName=""
      DTS:DataType="3"
      DTS:Description="InterfaceHistory_ID: used for Lineage"
      DTS:DTSID="{635616DB-EEEE-45C8-89AA-713E25846C7E}"
      DTS:ObjectName="ETLIfcHist_ID">
      <DTS:Property
        DTS:DataType="3"
        DTS:Name="ParameterValue">VALUE_TO_BE_CHANGED</DTS:Property>
    </DTS:PackageParameter>
    <DTS:PackageParameter
      DTS:CreationName=""
      DTS:DataType="3"
      DTS:Description="Some other description"
      DTS:DTSID="{635616DB-EEEE-45C8-89AA-713E25845C7E}"
      DTS:ObjectName="SomeOtherObjectName">
      <DTS:Property
        DTS:DataType="3"
        DTS:Name="ParameterValue">SomeOtherValue</DTS:Property>
    </DTS:PackageParameter>
  </DTS:PackageParameters>
</DTS:Executable>

We are trying to identify the value of the parameter whose name is ETLIfcHist_ID – notice that in the XML document above that value is VALUE_TO_BE_CHANGED. The following regular expression will find the appropriate portion of the XML document:

{\<DTS\:PackageParameter[\n ]*DTS\:CreationName="[A-Za-z0-9\:_\{\}- ]*"[\n ]*DTS\:DataType="[A-Za-z0-9\:_\{\}- ]*"[\n ]*DTS\:Description="[A-Za-z0-9\:_\{\}- ]*"[\n ]*DTS\:DTSID="[A-Za-z0-9\:_\{\}- ]*"[\n ]*DTS\:ObjectName="ETLIfcHist_ID"\>[\n ]*\<DTS\:Property[\n ]*DTS\:DataType="[A-Za-z0-9\:_\{\}- ]*"[\n ]*DTS\:Name="ParameterValue"\>}[A-Za-z0-9\:_\{\}- ]*{\<\/DTS\:Property\>}

I have highlighted the name of the parameter that we're looking for.
I have also highlighted two portions identified by pairs of curly braces "{…}"; these are important because they pick out the two portions either side of the value I want to replace, in other words the portions highlighted here:

<DTS:PackageParameters>
    <DTS:PackageParameter
      DTS:CreationName=""
      DTS:DataType="3"
      DTS:Description="InterfaceHistory_ID: used for Lineage"
      DTS:DTSID="{635616DB-EEEE-45C8-89AA-713E25846C7E}"
      DTS:ObjectName="ETLIfcHist_ID">
      <DTS:Property
        DTS:DataType="3"
        DTS:Name="ParameterValue">VALUE_TO_BE_CHANGED</DTS:Property>
    </DTS:PackageParameter>

Those sections in the curly braces are termed tag expressions and can be identified in the replace expression using a backslash and a number identifying which tag expression you're referring to according to its ordinal position. Hence, our replace expression is simply:

\1-1\2

We're saying the portion of our file identified by the regular expression should be replaced by the first curly brace section, then the literal -1, then the second curly brace section. Make sense? Give it a go yourself by plugging those two expressions into Visual Studio's Find and Replace dialog. If you set it to look in "All Open Documents" then you can open up the code-behind of all your packages and change all of them at once. That's it!

I realise that not everyone will be looking to change the value of a parameter but hopefully I have shown you a technique that you can modify to work for your own scenario. Given that this blog post is, y'know, on the web, I have no doubt that someone is going to find a fault with my find regex expression, and if that person is you… that's OK. Let me know about it in the comments below and perhaps we can work together to come up with something better! Note that some parameters may have a different set of properties (for example some, but not all, of my parameters have a DTS:Required attribute), so your find regular expression may have to change accordingly.

When researching this I found the following article to be invaluable: Visual Studio Find/Replace Regular Expression Usage

@Jamiet
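As an aside (not from the original post), the same capture-group substitution could also be scripted with the .NET Regex class rather than the Find and Replace dialog. Note that .NET regular expressions use (...) groups and ${n} replacements instead of the Visual Studio 2010 {...} tag expressions and \1-style replacements shown above; the pattern below is a deliberately simplified sketch that matches the sample XML in this post (it does not account for optional attributes such as DTS:Required), and the file location is just an assumption:

    using System.IO;
    using System.Text.RegularExpressions;

    class ResetEtlIfcHistId
    {
        static void Main()
        {
            // Capture the markup either side of the ETLIfcHist_ID ParameterValue (groups 1 and 2),
            // then write -1 back between the two captured groups.
            var pattern = @"(DTS:ObjectName=""ETLIfcHist_ID"">\s*<DTS:Property\s+DTS:DataType=""3""\s+DTS:Name=""ParameterValue"">)[^<]*(</DTS:Property>)";

            foreach (var file in Directory.GetFiles(".", "*.dtsx"))
            {
                var xml = File.ReadAllText(file);
                File.WriteAllText(file, Regex.Replace(xml, pattern, "${1}-1${2}"));
            }
        }
    }

Treat this as a starting point rather than a production tool; the interactive Find/Replace approach described above remains the simplest way to do the reset inside Visual Studio.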

    Read the article

  • CodePlex Daily Summary for Monday, May 31, 2010

    CodePlex Daily Summary for Monday, May 31, 2010New ProjectsAndrew's XNA Helpers: A collection of simple, yet useful methods and ways of accessing crucial variables such as the ContentManager or SpriteBatch from anywhere in your ...BASIC-DOS: BASIC-DOS OS Makes It Easier For People To Use DOS As It Comes With An Graphical User Interface That Loads Up During Boot Up. No Need To Type Any C...Chirpy - Visual Studio Add In For Handling Js, Css, and DotLess Files: Mashes, minifies, and validates your javascript, stylesheet, and dotless files.fprparser: Fortify XML Report parser in the form of an Excel Add-inHL7ToXmlConverter: Class library to transform HL7 Version 2.x to HL7Xml Version 2 depends on the used HL7 grammar.imdb movie downloader: basically download info from imdb and it s realy FAST ! need some development about thread pooling and webclient issues.. just try ;) for it run ...IMIfmoOptimisation: Задание по оптимизацииMarketView: MarketViewMediaStreamSources: 这是 CodePlex 上第一个可以呈现视频的 MediaStreamSource 项目。Migrate User Profile Values: A tool to move values between SharePoint 2007 User Profiles. The console application MigrateUserProfileValues.exe will export User Profile values ...Nexus6Studio Development Space: This is a working repository for development efforst of the Nexus6studio team.Project BlueLabel: BlueLabelSergioTools: SergioTools is a collection of tools and sample codes to help C# developers to improve your productivity and skills.SharePoint Property Bag Settings 2010: The Property Bag Settings can store any metadata as Key-Value pairs such as connection strings, server names, file paths, and other miscellaneous s...Silverlight Isolated Storage Cache: IsoCache is a small framework the make it easy to store dll's and xap files in the isolated storage so they can be use to speed up the startup of t...StackPivot: StackPivot is an app which can generate Microsoft Pivot "Collections" on-the-fly based on the data collected from the Stack Exchange APIs. Its d...Suspension Calculator: The Suspension Calculator aims to help people who are building race cars perform suspension related calculations. The calculations vary from motion...TFS Timesheets: A custom work item control for Team Foundation Server that allows timesheet data to be captured against a work item.Umbraco Membership infrastructure: Improvements to the Umbraco membership API implemented as a package. Includes user properties and improved membership API support.VolgaTransTelecomClient: VolgaTransTelecomClient makes it easier for clients of "Volga TransTelecom" company to get info about account. It's developed in C#.Wouter's SharePoint Demo Land: This site contains many of the SharePoint demos that I create while training or hobbying, both for the 2007 and the 2010 release. Please click the...New ReleasesAgUnit - Silverlight unit testing with ReSharper: AgUnit 0.1: Initial release of AgUnit. Copy the extracted files from AgUnit-0.1-ReSharper5.0.zip into the "Bin\Plugins\" folder of your ReSharper installatio...Andrew's XNA Helpers: Andrew's XNA Helpers v1: My first version of my Library of XNA Helpers. 
Includes: Variables class - A static class that can be accessed anywhere throughout your project -...Clean your Database: Database Cleaner Setup: Database Cleaner SetupClean your Database: Database Cleaner Source: Database Cleaner Source codeCommunity Forums NNTP bridge: Community Forums NNTP Bridge V16: Release of the Community Forums NNTP Bridge to access the social and anwsers MS forums with a single, open source NNTP bridge. This release has add...Community Forums NNTP bridge: Community Forums NNTP Bridge V17: Release of the Community Forums NNTP Bridge to access the social and anwsers MS forums with a single, open source NNTP bridge. This release is prim...Folder Bookmarks: Folder Bookmarks 1.5.8: The latest version of Folder Bookmarks (1.5.8), with new GUI improvements and 'Help' feature - all the instructions needed to use the software (If ...HL7ToXmlConverter: HL7ToXmlConverter Version 0.0.0.9: First distribution on codepleximdb movie downloader: 0.9 Fist Tryout of MyImdb SourceCode: Fist Tryout of MyImdb SourceCodeLightweight Fluent Workflow: Objectflow 1.0.0.2: The features of this release take advantage of .Net 3.5 featrures; Lamda support is back for Constraints and Functions can now be used in workflows...MDownloader: MDownloader-0.15.16.59384: Fixed password detector in context of .rar files. Fixed FileFactory provider implementation. Fixed detection of internet connection failures.MediaStreamSources: MediaStreamSources 1.0 Beta1: 1.0 Beta1Mongodb Management Studio: Mongodb Management Studio v1.1: MongodbManagementStudio v1.1 1.服务器管理功能 添加服务器,删除服务器 2.服务器,数据库,表,列,索引,树形显示和状态信息查看 3.查询分析器功能. 支持select,insert,Delete,update 支持自定义分页函数 $rowid(1,5) 查询...Multiplayer Quiz: Release 1_7_1_0: Latest Release. .NET 4.0 required to run server, and recommended for client. Download: http://www.microsoft.com/downloads/details.aspx?FamilyID=9cf...SCSM PowerShell Cmdlets: SCSM PowerShell Cmdlets Version 0.1.1: First release! This is a minimal build with limited funcitonallity. Should be handled as a preview of what's to come. Included Cmdlets are: New-...SharePoint Property Bag Settings 2010: PropertyBagSettings2010.wsp: The SharePoint Property Bag Settings is a reusable component that you can include in your own SharePoint applications. It can store simple types, s...Sharp Tests Ex: Sharp Tests Ex 1.0.0RTM: Project Description #TestsEx (Sharp Tests Extensions) is a set of extensible extensions. The main target is write short assertions where the Visual...Silverlight Testing Automation Tool: StatLight V1.1: FeaturesApplied some UnitDriven specific changes from justncase80/StatLight Updated to the 0.0.5 release over on rul:UnitDriven.codeplex.com Upda...SQL Server 2005 and 2008 - Backup, Integrity Check and Index Optimization: 30 May 2010: This is the latest version of my solution for Backup, Integrity Check and Index Optimization in SQL Server 2005, SQL Server 2008 and SQL Server 200...Suspension Calculator: SuspensionCalculator_V1.0.0.29: The Suspension Calculator aims to help people who are building race cars perform suspension related calculations. The calculations vary from motion...Svn2Svn: copy, sync, replay or reflect changes across SVN repositories: 1.2 (Beta): Build 1.2.8932.0. 
Added /incremental (/i) mode, svn2svn detects all previously synced revisions and starts at the latest revision that has not bee...VCC: Latest build, v2.1.30530.0: Automatic drop of latest buildVolgaTransTelecomClient: V.1.0.2.0: v.1.0.2.0 releaseWatchersNET.SkinObjects.ModulActionsMenu: ModulActionsMenu 01.00.01: changes CSS Fixed for the Microsoft Internet ExplorerWouter's SharePoint Demo Land: Navigation Service Basic: This sample shows how to create a simple Service Application for SharePoint 2010. You can read up on the how and why on my blog series about Serv...Most Popular ProjectsRawrWBFS ManagerAJAX Control ToolkitMicrosoft SQL Server Product Samples: DatabaseSilverlight ToolkitWindows Presentation Foundation (WPF)patterns & practices – Enterprise LibraryMicrosoft SQL Server Community & SamplesPHPExcelASP.NETMost Active ProjectsCommunity Forums NNTP bridgeAStar.netpatterns & practices – Enterprise LibraryBlogEngine.NETGMap.NET - Great Maps for Windows Forms & PresentationMirror Testing SystemIonics Isapi Rewrite FilterCustomer Portal Accelerator for Microsoft Dynamics CRMN2 CMSpatterns & practices: Windows Azure Security Guidance

    Read the article

  • Using Sitecore RenderingContext Parameters as MVC controller action arguments

    - by Kyle Burns
I have been working with the Technical Preview of Sitecore 6.6 on a project and have been for the most part happy with the way that Sitecore (which truly is an MVC implementation unto itself) has been expanded to support ASP.NET MVC. That said, getting up to speed with the combined platform has not been entirely without stumbles, and today I want to share one area where Sitecore could have really made things shine from the "it just works" perspective.

A couple of days ago I was asked by a colleague about the usage of the "Parameters" field that is defined on Sitecore's Controller Rendering data template. Based on the standard way that Sitecore handles a field named Parameters, I was able to deduce that the field expected key/value pairs separated by the "&" character, but beyond that I wasn't sure and didn't see anything from a documentation perspective to guide me, so it was time to dig and find out where the data in the field was made available.

My first thought was that it would be really nice if Sitecore handled the parameters in this field consistently with the way that ASP.NET MVC handles the various parameter collections on the HttpRequest object, and automatically mapped them to parameters of the executing action method. Being the hopeful sort, I configured a name/value pair on one of my renderings, added a parameter with a matching name to the controller action and fired up the bugger to see... that the parameter was not populated. Having established that the field's value was not going to be presented to me the way that I had hoped, my next working assumption was that Sitecore would handle this field similarly to other such data and would plug it into some ambient object that I could reference from within the controller method. After a considerable amount of guessing, testing, and cracking code open with Redgate's Reflector (a must-have companion to Sitecore documentation), I found that the most direct way to access the parameter was through the ambient RenderingContext object, using code similar to:

string myArgument = string.Empty;
var rc = Sitecore.Mvc.Presentation.RenderingContext.CurrentOrNull;
if (rc != null)
{
    var parms = rc.Rendering.Parameters;
    myArgument = parms["myArgument"];
}

At this point, we know how this field is used out of the box by Sitecore and can provide information from Sitecore's Content Editor that will be available when the controller action is executing, but it feels a little dirty. In order to properly test the action method I would have to do a lot of setup work and possibly use an isolation framework such as Pex and Moles to get at a value that my action method is dependent upon. Notice I said that my method is dependent upon the value, but in order to meet that dependency I've accepted another dependency upon Sitecore's RenderingContext. I'm a big believer in, when possible, ensuring that any piece of code explicitly advertises its dependencies through the method signature, so I found myself still wanting this to work the same as if the parameters were in the request route, querystring, or form: by being able to add a myArgument parameter to the action method and have that parameter populated by the framework. Lucky for us, the ASP.NET MVC framework is extremely flexible and provides extensibility points that are easy to grok and use.
ASP.NET MVC is able to provide information from the request as input parameters to controller actions because it uses objects which implement an interface called IValueProvider and have been registered to service the application. The most basic statement of responsibility for an IValueProvider implementation is "I know about some data which is indexed by key. If you hand me the key for a piece of data that I know about, I give you that data." When preparing to invoke a controller action, the framework queries the registered IValueProvider implementations with the name of each method argument to see if the value provider can supply a value for that parameter. (The rest of this post will assume you're working along, and will make a lot more sense if you do.)

Let's pull Sitecore out of the equation for a second to simplify things and create an extremely simple IValueProvider implementation. For this example, I first create a new ASP.NET MVC 3 project in Visual Studio, selecting "Internet Application" and otherwise taking defaults (I'm assuming that anyone reading this far in the post either already knows how to do this or will need to take a quick run through one of the many available basic MVC tutorials, such as the MVC Music Store). Once the new project is created, go to the Index action of HomeController. This action sets a Message property on the ViewBag to "Welcome to ASP.NET MVC!" and invokes the View, which has been coded to display the Message. For our example, we will remove the hard-coded message from this controller (although we'll leave it just as hard-coded somewhere else - this is sample code).

For the first step in our exercise, add a string parameter to the Index action method called welcomeMessage and use the value of this argument to set the ViewBag.Message property. The updated Index action should look like:

public ActionResult Index(string welcomeMessage)
{
    ViewBag.Message = welcomeMessage;
    return View();
}

This represents the entirety of the change that you will make to either the controller or the view. If you run the application now, the home page will display and no message will be presented to the user, because no value was supplied to the action method.

Let's now write a value provider to ensure this parameter gets populated. We'll start by creating a new class called StaticValueProvider. When the class is created, we'll update the using statements to ensure that they include the following:

using System.Collections.Specialized;
using System.Globalization;
using System.Web.Mvc;

With the appropriate using statements in place, we'll update the StaticValueProvider class to implement the IValueProvider interface. The System.Web.Mvc library already contains a pretty flexible dictionary-like implementation called NameValueCollectionValueProvider, so we'll just wrap that and let it do most of the real work for us.
The completed class looks like:

public class StaticValueProvider : IValueProvider
{
    private NameValueCollectionValueProvider _wrappedProvider;

    public StaticValueProvider(ControllerContext controllerContext)
    {
        var parameters = new NameValueCollection();
        parameters.Add("welcomeMessage", "Hello from the value provider!");
        _wrappedProvider = new NameValueCollectionValueProvider(parameters, CultureInfo.InvariantCulture);
    }

    public bool ContainsPrefix(string prefix)
    {
        return _wrappedProvider.ContainsPrefix(prefix);
    }

    public ValueProviderResult GetValue(string key)
    {
        return _wrappedProvider.GetValue(key);
    }
}

Notice that the only entry in the collection matches the name of the argument to our HomeController's Index action. This is the important "secret sauce" that will make things work.

We've got our new value provider now, but that's not quite enough to be finished. MVC obtains IValueProvider instances using factories that are registered when the application starts up. These factories extend the abstract ValueProviderFactory class by initializing and returning the appropriate implementation of IValueProvider from the GetValueProvider method. While I wouldn't do so in production code, for the sake of this example I'm going to add the following class definition within the StaticValueProvider.cs source file:

public class StaticValueProviderFactory : ValueProviderFactory
{
    public override IValueProvider GetValueProvider(ControllerContext controllerContext)
    {
        return new StaticValueProvider(controllerContext);
    }
}

Now that we have a factory, we can register it by adding the following line to the end of the Application_Start method in Global.asax.cs:

ValueProviderFactories.Factories.Add(new StaticValueProviderFactory());

If you've done everything right to this point, you should be able to run the application and be presented with the home page reading "Hello from the value provider!".

Now that you have the basics of the IValueProvider down, you have everything you need to enhance your Sitecore MVC implementation by adding an IValueProvider that exposes values from the ambient RenderingContext's Parameters property. I'll provide the code for the IValueProvider implementation (which should look VERY familiar) and you can use the work we've already done as a reference to create and register the factory:

public class RenderingContextValueProvider : IValueProvider
{
    private NameValueCollectionValueProvider _wrappedProvider = null;

    public RenderingContextValueProvider(ControllerContext controllerContext)
    {
        var collection = new NameValueCollection();
        var rc = RenderingContext.CurrentOrNull;
        if (rc != null && rc.Rendering != null)
        {
            foreach (var parameter in rc.Rendering.Parameters)
            {
                collection.Add(parameter.Key, parameter.Value);
            }
        }
        _wrappedProvider = new NameValueCollectionValueProvider(collection, CultureInfo.InvariantCulture);
    }

    public bool ContainsPrefix(string prefix)
    {
        return _wrappedProvider.ContainsPrefix(prefix);
    }

    public ValueProviderResult GetValue(string key)
    {
        return _wrappedProvider.GetValue(key);
    }
}

In this post I've discussed the MVC IValueProvider used to map data to controller action method arguments and how this can be integrated into your Sitecore 6.6 MVC solution.
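The post leaves creating and registering the factory for this provider as an exercise; a sketch (mirroring the StaticValueProviderFactory shown earlier) would look something like this:

public class RenderingContextValueProviderFactory : ValueProviderFactory
{
    public override IValueProvider GetValueProvider(ControllerContext controllerContext)
    {
        // Hand MVC a Sitecore-aware provider for each request, just like the static example above
        return new RenderingContextValueProvider(controllerContext);
    }
}

With this factory added to ValueProviderFactories.Factories in Application_Start (exactly as shown for StaticValueProviderFactory), a rendering parameter such as myArgument can then be accepted directly as a controller action argument, the same way welcomeMessage was earlier.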

    Read the article

< Previous Page | 28 29 30 31 32 33 34 35 36  | Next Page >