Search Results

Search found 7619 results on 305 pages for 'chace fields'.


  • MVC's Html.DropDownList and "There is no ViewData item of type 'IEnumerable<SelectListItem>' that has the key '...'"

    - by pjohnson
    ASP.NET MVC's HtmlHelper extension methods take out a lot of the HTML-by-hand drudgery to which MVC re-introduced us former WebForms programmers. Another thing to which MVC re-introduced us is poor documentation, after the excellent documentation for most of the rest of ASP.NET and the .NET Framework, which I now realize I'd taken for granted.

    I'd come to regard using HtmlHelper methods instead of writing HTML by hand as a best practice. When I upgraded a project from MVC 3 to MVC 4, several hidden fields with boolean values broke, because MVC 3 called ToString() on those values implicitly, and MVC 4 threw an exception until you called ToString() explicitly. Fields that used HtmlHelper weren't affected. I then went through dozens of views and manually replaced hidden inputs that had been coded by hand with Html.Hidden calls.

    So for a dropdown list I was rendering on the initial page as empty, then populating via JavaScript after an AJAX call, I tried to use an HtmlHelper method:

        @Html.DropDownList("myDropdown")

    which threw an exception:

        System.InvalidOperationException: There is no ViewData item of type 'IEnumerable<SelectListItem>' that has the key 'myDropdown'.

    That's funny--I made no indication I wanted to use ViewData. Why was it looking there? Just render an empty select list for me. When I populated the list with items, it worked, but I didn't want to do that:

        @Html.DropDownList("myDropdown", new List<SelectListItem>() { new SelectListItem() { Text = "", Value = "" } })

    I removed this dummy item in JavaScript after the AJAX call, so this worked fine, but I shouldn't have to give it a list with a dummy item when what I really want is an empty select. A bit of research with JetBrains dotPeek (helpfully recommended by Scott Hanselman) revealed the problem. Html.DropDownList requires some sort of data to render or it throws an error. The documentation hints at this but doesn't make it very clear. Behind the scenes, it checks if you've provided the DropDownList method any data. If you haven't, it looks in ViewData. If it's not there, you get the exception above.

    In my case, the helper wasn't doing much for me anyway, so I reverted to writing the HTML by hand (I ain't scared), and amended my best practice: when an HTML control has an associated HtmlHelper method and you're populating that control with data on the initial view, use the HtmlHelper method instead of writing by hand.
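    For what it's worth, here are the two shapes of the fix as a sketch; the empty-list variant is an assumption about the helper's null check (it only falls back to ViewData when the list argument is null), not something the post verifies:

        @* Option 1: hand-written HTML, as the post ultimately chose *@
        <select id="myDropdown" name="myDropdown"></select>

        @* Option 2 (assumed): a non-null empty list, so the helper never consults ViewData *@
        @Html.DropDownList("myDropdown", Enumerable.Empty<SelectListItem>())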

    Read the article

  • Defining Discovery: Core Concepts

    - by Joe Lamantia
    Discovery tools have had a referenceable working definition since at least 2001, when Ben Shneiderman published 'Inventing Discovery Tools: Combining Information Visualization with Data Mining'. Dr. Shneiderman suggested the combination of the two distinct fields of data mining and information visualization could manifest as a new category of tools for discovery, an understanding that remains essentially unaltered over ten years later. An industry analyst report titled Visual Discovery Tools: Market Segmentation and Product Positioning from March of this year, for example, reads, "Visual discovery tools are designed for visual data exploration, analysis and lightweight data mining."

    Tools should follow from the activities people undertake (a foundational tenet of activity-centered design), however, and Dr. Shneiderman does not in fact describe or define discovery activity or capability. As I read it, discovery is assumed to be the implied sum of the separate fields of visualization and data mining as they were then understood. As a working definition that catalyzes a field of product prototyping, it's adequate in the short term. In the long term, it makes the boundaries of discovery both derived and temporary, and leaves a substantial gap in the landscape of core concepts around discovery, making consensus on the nature of most aspects of discovery difficult or impossible to reach. I think this definitional gap is a major reason that discovery is still an ambiguous product landscape.

    To help close that gap, I'm suggesting definitions of four core aspects of discovery. These come out of our sustained research into discovery needs and practices, and have the goal of clarifying the relationship between discovery and other analytical categories. They are suggestions, but they should be internally coherent and consistent.

    Discovery activity is: "Purposeful sense making activity that intends to arrive at new insights and understanding through exploration and analysis (and for these we have specific definitions as well) of all types and sources of data."

    Discovery capability is: "The ability of people and organizations to purposefully realize valuable insights that address the full spectrum of business questions and problems by engaging effectively with all types and sources of data."

    Discovery tools: "Enhance individual and organizational ability to realize novel insights by augmenting and accelerating human sense making to allow engagement with all types of data at all useful scales."

    Discovery environments: "Enable organizations to undertake effective discovery efforts for all business purposes and perspectives, in an empirical and cooperative fashion."

    Note: applicability to a world of Big Data is assumed - hence the references to all scales / types / sources - rather than stated explicitly. I like that Big Data doesn't have to be written into this core set of definitions, because I think it's a transitional label - the new version of Web 2.0 - and will go away over time.

    References and Resources: Inventing Discovery Tools; Visual Discovery Tools: Market Segmentation and Product Positioning; Logic versus usage: the case for activity-centered design; A Taxonomy of Enterprise Search and Discovery

    Read the article

  • ID number for sites [closed]

    - by Jonathan
    Possible Duplicate: please add a key fields to stackauth results

    It would be easier if sites each had an ID; it would help with keeping track of them numerically (which is generally easier and smaller than using name strings), and with changes of site names (such as when a site graduates from beta, or decides its name is not quite right during beta). Everything else has IDs, so why not sites?

    Read the article

  • WordPress plugin for handling User Submitted Posts

    - by Ravish
    The User Submitted Posts plugin provides a highly useful form, which can be embedded in the desired areas of your WordPress site using a shortcode. The User Submitted Posts plugin will allow you to customize the fields in the form, like title or tags, and provides you with useful tools to control uploads. Why do you need this? [...]

    Related posts:
    - Insights WordPress Plugin For Efficient Blogging
    - WordPress User related Plug-ins
    - AddInto Social Bookmarking plugin for WordPress & Blogger

    Read the article

  • Few Steps For Making Your Website SEO Friendly

    Given the current dynamics of the internet, every web designer of class is expected to be acquainted with, or at least have a basic working knowledge of, search engine optimization tools. Web promotion as a tool for driving increased traffic to a given site is one skill that web designers, just like online marketers, require in their respective fields to conquer the market.

    Read the article

  • Anatomy of a .NET Assembly - Custom attribute encoding

    - by Simon Cooper
    In my previous post, I covered how field, method, and other types of signatures are encoded in a .NET assembly. Custom attribute signatures differ quite a bit from these, which consequently affects attribute specifications in C#.

    Custom attribute specifications

    In C#, you can apply a custom attribute to a type or type member, specifying a constructor as well as the values of fields or properties on the attribute type:

        public class ExampleAttribute : Attribute {
            public ExampleAttribute(int ctorArg1, string ctorArg2) { ... }
            public Type ExampleType { get; set; }
        }

        [Example(5, "6", ExampleType = typeof(string))]
        public class C { ... }

    How does this specification actually get encoded and stored in an assembly?

    Specification blob values

    Custom attribute specification signatures use the same building blocks as other types of signatures: the ELEMENT_TYPE structure. However, they significantly differ from other types of signatures, in that the actual parameter values need to be stored along with type information. There are two types of specification arguments in a signature blob: fixed args and named args. Fixed args are the arguments to the attribute type constructor; named args are specified after the constructor arguments to provide a value to a field or property on the constructed attribute type (PropertyName = propValue).

    Values in an attribute blob are limited to one of the basic types (one of the number types, character, or boolean), a reference to a type, an enum (which, in .NET, has to use one of the integer types as a base representation), or arrays of any of those. Enums and the basic types are easy to store in a blob - you simply store the binary representation. Strings are stored starting with a compressed integer indicating the length of the string, followed by the UTF8 characters. Array values start with an integer indicating the number of elements in the array, then the item values concatenated together.

    Rather than using a coded token, Type values are stored using a string representing the type name and fully qualified assembly name (for example, MyNs.MyType, MyAssembly, Version=1.0.0.0, Culture=neutral, PublicKeyToken=0123456789abcdef). If the type is in the current assembly or mscorlib then just the type name can be used. This is probably done to prevent direct references between assemblies solely because of attribute specification arguments; assemblies can be loaded in the reflection-only context and attribute arguments still processed, without loading the entire assembly.

    Fixed and named arguments

    Each entry in the CustomAttribute metadata table contains a reference to the object the attribute is applied to, the attribute constructor, and the specification blob. The number and type of arguments to the constructor (the fixed args) can be worked out from the method signature referenced by the attribute constructor, and so the fixed args can simply be concatenated together in the blob without any extra type information.

    Named args are different. These specify the value to assign to a field or property once the attribute type has been constructed. In the CLR, fields and properties can be overloaded just on their type; different fields and properties can have the same name. Therefore, to uniquely identify a field or property you need:

    - whether it's a field or property (indicated using byte values 0x53 and 0x54, respectively)
    - the field or property type
    - the field or property name

    After the fixed arg values is a 2-byte number specifying the number of named args in the blob. Each named argument has the above information concatenated together, mostly using the basic ELEMENT_TYPE values, in the same way as a method or field signature. A Type argument is represented using the byte 0x50, and an enum argument is represented using the byte 0x55 followed by a string specifying the name and assembly of the enum type. The named argument property information is followed by the argument value, using the same encoding as fixed args.

    Boxed objects

    This would be all very well, were it not for object and object[]. Arguments and properties of type object allow a value of any allowed argument type to be specified. As a result, more information needs to be specified in the blob to interpret the argument bytes as the correct type. So the argument value is simply prepended with the type of the value, by specifying the ELEMENT_TYPE or the name of the enum the value represents. For named arguments, a field or property of type object is represented using the byte 0x51, with the actual type specified in the argument value.

    Some examples...

    All specification blobs start with the 2-byte prolog value 0x0001. Similar to my previous post in the series, names in capitals correspond to a particular byte value in the ELEMENT_TYPE structure. For strings, I'll simply give the string value, rather than the length and UTF8 encoding in the actual blob. I'll be using the following enum and attribute types to demonstrate specification encodings:

        class AttrAttribute : Attribute {
            public AttrAttribute() {}
            public AttrAttribute(Type[] tArray) {}
            public AttrAttribute(object o) {}
            public AttrAttribute(MyEnum e) {}
            public AttrAttribute(ushort x, int y) {}
            public AttrAttribute(string str, Type type1, Type type2) {}

            public int Prop1 { get; set; }
            public object Prop2 { get; set; }
            public object[] ObjectArray;
        }

        enum MyEnum : int {
            Val1 = 1, Val2 = 2
        }

    Now, some examples. Here, the specification binds to the (ushort, int) attribute constructor, with fixed args only. The specification blob starts off with a prolog, followed by the two constructor arguments, then the number of named arguments (zero):

        [Attr(42, 84)]
        0x0001
            0x002a
            0x00000054
        0x0000

    An example of string and type encoding:

        [Attr("MyString", typeof(Array), typeof(System.Windows.Forms.Form))]
        0x0001
            "MyString"
            "System.Array"
            "System.Windows.Forms.Form, System.Windows.Forms, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089"
        0x0000

    As you can see, the full assembly specification of a type is only needed if the type isn't in the current assembly or mscorlib. Note, however, that the C# compiler currently chooses to fully-qualify mscorlib types anyway.

    An object argument (this binds to the object attribute constructor), and two named arguments (a null string is represented by 0xff and the empty string by 0x00):

        [Attr((ushort)40, Prop1 = 12, Prop2 = "")]
        0x0001
            U2 0x0028
        0x0002
            0x54 I4 "Prop1" 0x0000000c
            0x54 0x51 "Prop2" STRING 0x00

    Right, more complicated now. A type array as a fixed argument:

        [Attr(new[] { typeof(string), typeof(object) })]
        0x0001
            0x00000002  // the number of elements
            "System.String"
            "System.Object"
        0x0000

    An enum value, which is simply represented using the underlying value. The CLR works out that it's an enum using information in the attribute constructor signature:

        [Attr(MyEnum.Val1)]
        0x0001
            0x00000001
        0x0000

    And finally, a null array, and an object array as a named argument:

        [Attr((Type[])null, ObjectArray = new object[] { (byte)2, typeof(decimal), null, MyEnum.Val2 })]
        0x0001
            0xffffffff
        0x0001
            0x53 SZARRAY 0x51 "ObjectArray"
                0x00000004
                U1 0x02
                0x50 "System.Decimal"
                STRING 0xff
                0x55 "MyEnum" 0x00000002

    As you'll notice, a null object is encoded as a null string value, and a null array is represented using a length of -1 (0xffffffff).

    How does this affect C#?

    So, we can now explain why the limits on attribute arguments are so strict in C#. Attribute specification blobs are limited to basic numbers, enums, types, and arrays. As you can see, this is because the raw CLR encoding can only accommodate those types. Special byte patterns have to be used to indicate object, string, Type, or enum values in named arguments; you can't specify an arbitrary object type, as there isn't a generalised way of encoding the resulting value in the specification blob.

    In particular, decimal values can't be encoded, as decimal isn't a 'built-in' CLR type that has a native representation (you'll notice that decimal constants in C# programs are compiled as several integer arguments to DecimalConstantAttribute). Jagged arrays also aren't natively supported, although you can get around it by using an array as a value to an object argument:

        [Attr(new object[] { new object[] { new Type[] { typeof(string) } }, 42 })]

    Finally...

    Phew! That was a bit longer than I thought it would be. Custom attribute encodings are complicated! Hopefully this series has been an informative look at what exactly goes on inside a .NET assembly. In the next blog posts, I'll be carrying on with the 'Inside Red Gate' series.
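    As an aside to the reflection-only point above: you can inspect these decoded fixed and named arguments yourself via CustomAttributeData, without ever running the attribute constructors. A minimal sketch (the assembly path and type name are placeholders, and resolving dependent assemblies is left out):

        // Sketch: dump attribute arguments from a reflection-only-loaded assembly.
        using System;
        using System.Reflection;

        static class AttrDump
        {
            static void Main()
            {
                Assembly asm = Assembly.ReflectionOnlyLoadFrom(@"C:\path\to\MyAssembly.dll");
                Type type = asm.GetType("MyNs.C");
                foreach (CustomAttributeData cad in CustomAttributeData.GetCustomAttributes(type))
                {
                    Console.WriteLine(cad.Constructor.DeclaringType);
                    foreach (CustomAttributeTypedArgument fixedArg in cad.ConstructorArguments)
                        Console.WriteLine("  fixed: {0} = {1}", fixedArg.ArgumentType, fixedArg.Value);
                    foreach (CustomAttributeNamedArgument namedArg in cad.NamedArguments)
                        Console.WriteLine("  named: {0} = {1}", namedArg.MemberInfo.Name, namedArg.TypedValue);
                }
            }
        }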

    Read the article

  • What do game development companies prefer? UDK experience or C++ game projects?

    - by momboco
    What do game development companies prefer? A developer with experience in the UDK engine, or a developer with projects made entirely in C++ with a graphics engine like Ogre3D? I think that a coder can better demonstrate his abilities with games made in C++, because they require deeper knowledge in many fields. However, there are currently a lot of companies that develop their games with UDK. Now I don't know if it is better to specialize in a game engine like UDK.

    Read the article

  • Getting started with Document Set in SharePoint2010

    - by ybbest
    Folders are widely used in traditional file-based systems, and in the SharePoint world you can create folders in a document library as well. However, there is a new, improved feature in SharePoint called the Document Set; you can attach metadata to the document set. To get started with Document Sets, you can perform the following steps.

    1. Go to Site Settings >> Site collection features >> Activate the Document Sets feature.
    2. After the Document Sets feature is activated, you will get a new content type called Document Set.
    3. Next, we can create a custom content type called Loan Application Document Set that inherits from the Document Set content type.
    4. Then I create a new column called Application Number.
    5. Add this field to the Loan Application content type.
    6. Create a new content type called Loan Contract Form that inherits from the Document content type.
    7. Add the Application Number to the Loan Contract Form content type.
    8. Create a new content type called Loan Application Form that inherits from the Document content type and add Application Number to it (the same steps as above).
    9. Go to the Loan Application Document Set content type and open the Document Set Settings.
    10. You can define which content types you would like this document set to contain, and you can also define the default document for each content type. When you create a new document set, those default documents will get created automatically in the document set. You can also define the shared fields that are shared across content types; in my case I define Application Number and Description as my shared fields. Finally, you can define the fields that you'd like to show on the document set welcome page.
    11. Now create a new document library, attach those content types to the document library, and create a new loan application document set.
    12. You will see the default documents created in the document set. If you update Application Number on the document set, the field will get updated in the documents inside the document set as well.
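    If you ever need to create document sets in code rather than through the UI, the server object model exposes a DocumentSet class. A minimal sketch, assuming the Document Sets feature is already activated; the site URL, library name, content type name and field name below are placeholders for this walkthrough's artifacts:

        // Sketch: create a document set with shared metadata (SharePoint 2010 server-side).
        using System.Collections;
        using Microsoft.SharePoint;
        using Microsoft.Office.DocumentManagement.DocumentSets;

        static class LoanDocSetDemo
        {
            static void Main()
            {
                using (SPSite site = new SPSite("http://intranet/sites/loans"))
                using (SPWeb web = site.OpenWeb())
                {
                    SPList library = web.Lists["Loan Applications"];
                    SPContentType ct = library.ContentTypes["Loan Application Document Set"];

                    // Shared metadata stamped onto the new document set
                    // (keys should be the fields' internal names, which may differ).
                    Hashtable properties = new Hashtable();
                    properties["Application Number"] = "LA-0042";

                    // Last argument provisions the default documents defined in step 10.
                    DocumentSet.Create(library.RootFolder, "Loan Application LA-0042",
                                       ct.Id, properties, true);
                }
            }
        }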

    Read the article

  • Cool Enhancements Everyone Can Enjoy

    - by Ruth
    With Release 17, we have a few visual and functional enhancements that make using CRM On Demand that much better for us all. I'll mention a few here, but to get the full outline of these upgrades, I recommend taking 10 minutes to view the Release 17 Usability Transfer of Information course.

    First and foremost, I find the ability to customize your theme (or skin) pretty cool, but I've said that before. Take a look at the Selecting Your Theme and the Themes - Create Your CRM Style blog articles for more information.

    My next favorite is the resizeable user interface (UI). CRM On Demand will dynamically fit the device and screen resolution you're using, which includes the resizing of fields, field editors and pop-ups. If you have a wide screen like me, you should appreciate that one very much.

    To make it easier to see that resized UI, the detail pages got a little face lift. New horizontal lines and other subtle changes make those pages easier to read. Also, those things you need to know, like error messages and inline help, are highlighted with a little icon to show the message type.

    You may not think every change to the detail pages is particularly exciting, but I'm sure you'll enjoy the new Head Up Display, which saves you scrolling time by adding links to related information sections. I like that the head up display travels with me as I move up and down the page...it's like a little friend that takes me where I want to go as fast as possible.

    You may also really like the fact that the copy record feature is now available for all record types from both detail pages and lists. Your company administrator can choose which fields get copied, so you can maximize your efficiency when creating new records.

    Lists also got a face lift. Alternating colors in rows make it easier to see your data. Also, the Favorite Lists icon is now on the list itself, so you can save your most useful lists with one click. If you've ever tried to create a new list with 10 columns or more, you'll be happy to hear that the maximum number of columns in a list has increased from 9 to 20. This is great news, but doesn't mean you should include the kitchen sink in your list...excess columns can slow list performance. So choose your columns wisely.

    Again, these are just a few of my favorite things. Let us know what you think about the new usability features. What are your favorite things?

    Read the article

  • Creating metadata value relationships

    - by kyle.hatlestad
    I was recently asked a question about an interesting use case. They wanted content to be submitted into UCM with a particular ID in a custom metadata field, but they wanted that ID to be translated into an employee name in another metadata field upon submission.

    My initial thought was that this could be done with a dependent choice list (DCL), one option list field driving the choices in another. But this didn't work in this case for a couple of reasons. First, the number of IDs could potentially be very large, so making that into a drop-down list would not be practical; the preference was for that field to simply be a text field to type in the ID. Secondly, data could be submitted through methods other than the web-based check-in form, and without an interface to select the DCL choices, the system needed a way to determine and populate the name field.

    So instead I went the approach of having the value of the ID field drive the value of the Name field using the derived field approach in my rule. In looking at it, though, it was easy to simply copy the value of the ID field into the Name field...but having it look up and translate the value proved to be the tricky part. So here is the approach I took...

    First, I create my two metadata fields as standard text fields in the Configuration Manager applet. Next, I create a table that stores the relationship between the IDs and Names. I then create a View into that table and set the column to the EmployeeID. I now create a new Application Field and set it as an option list using the View I created in the previous step. The reason I create it as an Application field is because I don't need to display the field or store a value in it; I simply need to make use of the option list in the next step...

    Finally, I create a Rule in which I select the Employee Name field and turn on the 'Is derived field' checkbox. I edit the derived value and add a new condition. Because the option list is an Application field and not an Information field, I can't use the Compute button. Instead, I insert this line directly in the Value field:

        @getFieldViewValue("EmployeeMapping", #active.xEmployeeID, "EmployeeName")

    The "EmployeeMapping" parameter designates that the value should be pulled from the EmployeeMapping Application field that I had created in the previous step. The #active.xEmployeeID field is the ID value that should be pulled from what the user entered. "EmployeeName" is the column name in the table which has the value that corresponds to the ID. The extracted name then becomes the value within our Employee Name field.

    That's it. You can then add additional Rules to make the Name field read-only/hidden on the check-in page and such.
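    For illustration, the underlying mapping table from the second step might look like the following sketch; the names are assumptions chosen to match the rule above, and the View on top of the table is defined in the Configuration Manager applet rather than in SQL:

        -- Hypothetical ID-to-name mapping table behind the EmployeeMapping view.
        CREATE TABLE EmployeeMapping (
            EmployeeID   VARCHAR(20)  PRIMARY KEY,
            EmployeeName VARCHAR(100) NOT NULL
        );

        INSERT INTO EmployeeMapping (EmployeeID, EmployeeName) VALUES ('E1001', 'Jane Smith');
        INSERT INTO EmployeeMapping (EmployeeID, EmployeeName) VALUES ('E1002', 'John Doe');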

    Read the article

  • Right approach to convert a Word document that contains forms into a web app

    - by carlo
    I would like to know if someone can suggest a good approach for converting a Word document that contains forms into a web app, specifically an application built with WaveMaker (but I'm also curious about a general approach not strictly dependent on the technology I have mentioned). For example, if I have a page in a Word document that maps the fields of a user entity, what could be my "programmer approach" to convert it without much use of copy-paste, but with a dynamic methodology?

    Read the article

  • I want to install and get to building a personal MySQL DB

    - by Ari Hall
    So how do I go from installing MySQL from the Software Center to inputting data into fields and bringing in a comma-delimited file? I've only had brief experience with MS Access and OOo Base a long time ago, so details are appreciated; I just want to get up and running. I have Ubuntu 10.10, 64-bit, if that affects much. If you can link me to a howto that does exactly what I'm looking for, that would work. Again, thanks!
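    For what it's worth, the core steps after installation boil down to a sketch like this, run inside the mysql client (mysql -u root -p); the database, table, column and file names are made up for the example:

        -- Sketch: create a database and table, then load a comma-delimited file.
        CREATE DATABASE mydb;
        USE mydb;

        CREATE TABLE people (
            id    INT AUTO_INCREMENT PRIMARY KEY,
            name  VARCHAR(100),
            email VARCHAR(100)
        );

        -- Import a CSV file; LOCAL reads the file from the client machine.
        LOAD DATA LOCAL INFILE '/home/ari/data.csv'
        INTO TABLE people
        FIELDS TERMINATED BY ','
        OPTIONALLY ENCLOSED BY '"'
        LINES TERMINATED BY '\n'
        IGNORE 1 LINES
        (name, email);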

    Read the article

  • Determining distribution of NULL values

    - by AaronBertrand
    Today on the Twitter hashtag #sqlhelp, @leenux_tux asked: "How can I figure out the percentage of fields that don't have data?" After further clarification, it turns out he is after what proportion of columns are NULL. Some folks suggested using a data profiling task in SSIS. There may be some validity to that, but I'm still a fan of sticking to T-SQL when I can, so here is how I would approach it: Create a #temp table or @table variable to store the results. Create a cursor that loops through all...(read more)
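    The truncated post builds the full solution with a cursor over all columns, but the per-column calculation at its core is presumably something like this sketch (table and column names invented):

        -- Sketch of the core NULL-percentage calculation for a single column.
        SELECT
            100.0 * SUM(CASE WHEN SomeColumn IS NULL THEN 1 ELSE 0 END)
                  / COUNT(*) AS PercentNull
        FROM dbo.SomeTable;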

    Read the article

  • Pre-Conference Sessions at the PASS Summit

    - by andyleonard
    Introduction: I have some thoughts on the selection of pre-conference and post-conference session presenters at the PASS Summit. PASS pre-conference and post-conference sessions are $395. Trainers and speakers in the various SQL Server fields (relational engine, business intelligence, etc.) are selected to deliver these day-long seminars before and (now) after each PASS Summit. I have attended a few, and the quality and amount of the training easily justifies the $395 price tag. Full Disclosure: I've...(read more)

    Read the article

  • ASP.NET 4.0 and the Entity Framework 4 - Part 5 - Using the GridView and the EntityDataSource

    In this article, Vince demonstrates the usage of the GridView control to view, add, update, and delete records using the Entity Framework 4. After providing a short introduction, he provides the steps required to create a web site, entity data model, web form and template fields with the help of relevant source code and screenshots.

    Read the article

  • SQL Server Training in the UK–SSIS, MDX, Admin, MDS, Internals

    - by simonsabin
    If you are looking for SQL Server training then there is no better place to start than a new company, Technitrain. It's been set up by a fellow MVP and SQLBits organiser, Chris Webb. Why this company rather than any others? Training based on real-world experience by the best in the business. The key to Technitrain's model is not to cram the shelves high with courses and get some average Joe trainers to deliver them. Technitrain bring in world-renowned experts in their fields to deliver courses written...(read more)

    Read the article

  • Customizing the Outlook Address Book

    It is possible to change the fields in the Outlook address book to make them a better fit for your organisation. Exchange provides templates for Users, Contacts, Groups, Public Folders, and Mailbox Agents that are downloaded to Outlook. Any changes will be visible in the address book. As usual with Exchange, careful planning before you make changes pays dividends, Ben Lye explains.

    Read the article

  • Is it important for reflection-based serialization to maintain consistent field ordering?

    - by Matchlighter
    I just finished writing a packet builder that dynamically loads data into a data stream for eventual network transmission. Each builder operates by finding fields in a given class (and its superclasses) that are marked with a @data annotation. When I finished my implementation, I remembered that getFields() does not return results in any specific order. Should reflection-based methods for serializing arbitrary data (like my packets) attempt to preserve a specific field ordering (such as alphabetical), and if so, how?
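    Since the question concerns Java's getFields(), one way to impose a deterministic order is to sort the reflected fields yourself before writing them. A sketch; the marker annotation stands in for the question's @data annotation, and everything else is illustrative:

        // Sketch: deterministic field ordering for reflection-based serialization.
        import java.lang.annotation.Annotation;
        import java.lang.reflect.Field;
        import java.util.ArrayList;
        import java.util.Comparator;
        import java.util.List;

        public final class FieldOrder {
            public static List<Field> annotatedFieldsInOrder(Class<?> type,
                                                             Class<? extends Annotation> marker) {
                List<Field> fields = new ArrayList<>();
                for (Field f : type.getFields()) {   // public fields, including superclasses
                    if (f.isAnnotationPresent(marker)) {
                        fields.add(f);
                    }
                }
                // getFields() makes no ordering guarantee, so sort by declaring class then name.
                fields.sort(Comparator
                        .comparing((Field f) -> f.getDeclaringClass().getName())
                        .thenComparing(Field::getName));
                return fields;
            }
        }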

    Read the article

  • Unit testing in Django

    - by acjohnson55
    I'm really struggling to write effective unit tests for a large Django project. I have reasonably good test coverage, but I've come to realize that the tests I've been writing are definitely integration/acceptance tests, not unit tests at all, and I have critical portions of my application that are not being tested effectively. I want to fix this ASAP.

    Here's my problem. My schema is deeply relational and heavily time-oriented, giving my model objects high internal coupling and lots of state. Many of my model methods query based on time intervals, and I've got a lot of auto_now_add going on in timestamped fields. So take a method that looks like this, for example:

        def summary(self, startTime=None, endTime=None):
            # ... logic to assign a proper start and end time
            # if none was provided, probably using datetime.now()
            objects = self.related_model_set.manager_method.filter(...)
            return sum(object.key_method(startTime, endTime) for object in objects)

    How does one approach testing something like this? Here's where I am so far. It occurs to me that the unit testing objective should be: given some mocked behavior by key_method on its arguments, does summary correctly filter/aggregate to produce a correct result?

    Mocking datetime.now() is straightforward enough, but how can I mock out the rest of the behavior? I could use fixtures, but I've heard pros and cons of using fixtures for building my data (poor maintainability being a con that hits home for me). I could also set up my data through the ORM, but that can be limiting, because then I have to create related objects as well. And the ORM doesn't let you mess with auto_now_add fields manually. Mocking the ORM is another option, but not only is it tricky to mock deeply nested ORM methods, but the logic in the ORM code gets mocked out of the test, and mocking seems to make the test really dependent on the internals and dependencies of the function-under-test.

    The toughest nuts to crack seem to be the functions like this, that sit on a few layers of models and lower-level functions and are very dependent on the time, even though these functions may not be super complicated. My overall problem is that no matter how I seem to slice it, my tests are looking way more complex than the functions they are testing.
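    One sketch of the mocking approach the question is circling around, using the standard library's unittest.mock; the model is a minimal stand-in for the snippet above so the example runs without Django:

        # Sketch: unit-testing summary() by mocking the manager chain and key_method.
        import unittest
        from datetime import datetime
        from unittest.mock import MagicMock

        class MyModel:
            """Minimal stand-in for the question's model (no Django required)."""
            def summary(self, startTime=None, endTime=None):
                objects = self.related_model_set.manager_method.filter()
                return sum(o.key_method(startTime, endTime) for o in objects)

        class SummaryTest(unittest.TestCase):
            def test_summary_aggregates_key_method_results(self):
                instance = MyModel()

                # Two fake related objects whose key_method returns known values.
                obj_a, obj_b = MagicMock(), MagicMock()
                obj_a.key_method.return_value = 10
                obj_b.key_method.return_value = 32

                # Replace the related manager chain so no database is touched.
                instance.related_model_set = MagicMock()
                instance.related_model_set.manager_method.filter.return_value = [obj_a, obj_b]

                start, end = datetime(2012, 1, 1), datetime(2012, 2, 1)
                self.assertEqual(instance.summary(start, end), 42)
                obj_a.key_method.assert_called_once_with(start, end)

        if __name__ == "__main__":
            unittest.main()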

    Read the article

  • Reuse Business Logic between Web and API

    - by fesja
    We have a website and two mobile apps that connect through an API. All the platforms do exactly the same things. Right now the structure is the following:

    Website. It manages the models, controllers and views for the website. It also executes all background tasks. So if a user creates a place, everything is executed in this code.

    API. It manages models and controllers, and returns JSON. If a user creates a place on the mobile app, the place is created here. Afterwards, we add a background task to update other fields. This background task is executed by the website.

    We are redoing everything, so it's time to improve the approach. What is the best way to reuse the business logic so that I only need to code the insert/edit/delete of a place (and other related actions) in just one place? Is a service-oriented approach a good idea? For example:

    Service. It has the models and gets, adds, updates and deletes info from the DB.
    Website. It sends the info to the service, and it renders HTML.
    API. It sends info to the service, and it returns JSON.

    Some problems I have found: more initial work? Not sure... It can work slower. Any experience? The benefits: we only have the business logic in one place, both for web and API, and it's easier to scale; we can put each piece on different servers.

    Other solutions: duplicate the code and be careful not to forget anything (do tests!), or duplicate some code but execute background tasks that update the related fields and execute other things (emails, indexing...). A "small" detail: we are 1.3 people on the backend, for now ;)
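    Since the question doesn't state a stack, here is an illustrative Python sketch of the service-layer idea, with invented names; both front ends stay thin and all business rules live in one class:

        # Sketch: one shared service used by both the website and the API layer.
        import json

        class PlaceRepository:
            """Data access; in a real app this would talk to the DB."""
            def __init__(self):
                self._places = {}
                self._next_id = 1

            def insert(self, name):
                place = {"id": self._next_id, "name": name}
                self._places[self._next_id] = place
                self._next_id += 1
                return place

        class PlaceService:
            """All business logic lives here, once, for every front end."""
            def __init__(self, repo):
                self.repo = repo

            def create_place(self, name):
                if not name:
                    raise ValueError("a place needs a name")
                place = self.repo.insert(name)
                # enqueue background work (emails, indexing, ...) here, in one spot
                return place

        service = PlaceService(PlaceRepository())

        def website_create_place(name):   # website controller: renders HTML
            place = service.create_place(name)
            return "<h1>Created %s</h1>" % place["name"]

        def api_create_place(name):       # API controller: returns JSON
            return json.dumps(service.create_place(name))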

    Read the article

  • Smoothing found path on grid

    - by Denis Ermolin
    I implemented several approaches, such as A* and potential fields, for my tower defense game. But I want smooth paths. First I tried to find paths on a very fine grid (5x5 pixels per tile), but it is extremely slow. I found a nice video showing an RTS demo where paths are found on a big grid but units don't move from cell center to cell center. How do I implement such behavior? Some examples would be great.
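    A common answer (my note, not from the post) is to keep the coarse grid for pathfinding and then smooth the resulting waypoint list with a line-of-sight pass: drop any intermediate waypoint that a straight walkable segment can bypass. A sketch, where the line-of-sight test is assumed to be provided by your map's collision code:

        // Sketch: string-pulling / line-of-sight smoothing over grid waypoints.
        using System;
        using System.Collections.Generic;

        public static class PathSmoother
        {
            public static List<T> Smooth<T>(List<T> path, Func<T, T, bool> hasLineOfSight)
            {
                if (path.Count < 3) return new List<T>(path);

                var smoothed = new List<T> { path[0] };
                int anchor = 0;
                for (int i = 2; i < path.Count; i++)
                {
                    // If waypoint i is no longer visible from the anchor,
                    // waypoint i-1 becomes a mandatory corner.
                    if (!hasLineOfSight(path[anchor], path[i]))
                    {
                        anchor = i - 1;
                        smoothed.Add(path[anchor]);
                    }
                }
                smoothed.Add(path[path.Count - 1]);
                return smoothed;
            }
        }

    Units then steer along the smoothed segments instead of hopping between cell centers.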

    Read the article

  • Oracle Partner Store (OPS) New Enhancements

    - by Kristin Rose
    Effective June 29th, Oracle Partner Store (OPS) will release the enhancements listed below to improve your overall ordering experience.

    • Online Transactional Oracle Master Agreement (Online TOMA)

    The Online TOMA enables end users to execute a transactional end user license agreement with Oracle. The new Online TOMA in OPS will replace the need for you to obtain a signed hard copy of the TOMA from the end user. You will now initiate the Online TOMA via OPS.

    Navigation: OPS Home > Order Tools > Online TOMA Query > Request Online TOMA > End User Contact, click "Select for TOMA" > Select Language > Submit (an automated email is sent immediately to the requestor and the end user)

    The Online TOMA can also be initiated from the 'My OPS' tab. Under the Online TOMA Query section partners can track Online TOMA request details submitted to end users. The status of the Online TOMA request and the OMA Key generated (once the Ts&Cs of the Online TOMA are accepted by an end user) are also displayed in this table. There is also the ability to resend pending Online TOMA requests by clicking 'Resend'.

    Navigation: OPS Home > Order Tools > Online TOMA Query

    For more details on the Transactional OMA, please click here.

    • Convert Deals to Carts

    The partner deal registration system within OPS will now allow you to convert approved deals into carts with a simple click of a button. VADs can use Deal to Cart on all of their partners' registrations, regardless of whether they submitted on their partner's behalf, or the partner submitted themselves.

    Navigation: Login > Deal Registrations > Deal Registration List > Open the approved deal > Click the Deal Reg ID number link to open > Click on the 'Create Cart' link

    You can locate your newly created cart in the Saved Carts section of OPS. Links are also available from within an open deal or from the Deal Registration List. Click on the cart number to proceed.

    • Partner Opportunity Management

    Deal Registration on OPS now allows you to see updated information on your opportunities from Oracle's Fusion CRM opportunity management system. Key fields such as close date, sales stage, products and status can be viewed by clicking the opportunity ID associated with the deal registration. This new feature allows you to see regular updates to your opportunities after registrations are approved. Through ongoing communication with Oracle Channel Managers and Sales Reps, you can ensure that Oracle has the latest information on your active registered deals.

    • Product Recommendations

    When adding products to the Deal Registrations tab, OPS will now show additional products that you can try to include to maximize your sale and rebate.

    • Advanced Customer Support (ACS) Services

    Note: this will be available from July 9th. Initiate the purchase of the complete stack (HW/SW/Services) online with one single OPS order. More ACS services are now supported online, with the exception of the Start-Up Pack:

    - New SW installation services for Standard Configurations & standalone System Software
    - New Pre-production & Go-live services for Standard & Engineered Systems
    - New SW configuration & Platinum Pre-Production & Go-Live services for Engineered Systems
    - New Travel & Expenses Estimate included
    - New Partner & VAD volume discount supported

    • Software as a Service (SaaS) for Independent Software Vendors (ISVs)

    Oracle SaaS ISVs can now use OPS to submit their monthly usage reports to Oracle within 20 days after the end of every month.

    Navigation: OPS Home > Cart > Transaction Type: Partner SaaS for ISVs > Add Eligible Products > Check out

    • Existing Approvals

    In an effort to reduce the processing time of discount approvals, we have added a new section in the Request Approval page for you to communicate pre-existing approvals without having to attach the DAT. Just enter the Approval ID and submit your request. In the case of existing software approvals, you will be required to submit the DAT with the Contact Information section filled out.

    • Additional Data for Shipping Box Labels and Packing Slips

    OPS now has additional fields in the Shipping Notes section for you to add PO details. This will help you easily identify shipments as they arrive. Partners will have an End User PO field, whereas VADs will have VAR and End User PO fields.

    • Shipping Notes on OPS

    Hardware delivery Shipping Notes will now have multiple options to better suit your requirements.

    • Reminders for Royalty Reporting Partners

    If you have not submitted your royalty report online, OPS will now send an automated alert to remind you.

    • Order Tracker Changes

    - Order Tracker will now have a deal reg flag (Yes/No). You can now clearly distinguish between orders that have registered opportunities.
    - All lines of the order will be visible in the order details list.

    • Changes in Terminology

    - You will notice textual changes on some of our labels and messages relating to approval requests. "Discount Requests" has been replaced with "Approval Requests" to cater to some of our other offerings.
    - First Line Support (FLS) transaction type has been renamed to Support Provider Partner (SPP).

    OPS Support

    For more details on these enhancements, please request a training here. For assistance on the Oracle Partner Store, please contact the OPS support team in your region.

    NAMER: [email protected]
    LAD: [email protected]
    EMEA: [email protected]
    APAC: [email protected]
    Japan: [email protected]

    You can even call us on our Hotline! Find your local number here.

    Thank you,
    Oracle Partner Store Support Team

    Read the article

  • Deserialize inherited classes into the same list in XNA

    - by M0rgenstern
    I am writing a GUI for a game (for what else ...). For this I wrote a class GuiElement which has some serializable fields. From this class I derive a class Button, which has one more serializable field. Furthermore, I have a class GuiWindow, which is likewise derived from GuiElement. An object of this class has a field HandledElements of the type List<GuiElement>. To store the layout of the menus, I use XML files, which look like this (for example):

        <?xml version="1.0" encoding="utf-8" ?>
        <XnaContent xmlns:Generic="System.Collections.Generic">
          <Asset Type="System.Collections.Generic.List[GUI.GuiWindow]">
            <Item>
              <Position>0 0</Position>
              <AlternativeImagePath></AlternativeImagePath>
              <IsActive>true</IsActive>
              <Name>MainMenu</Name>
              <HandledElements>
                <Item>
                  <Position>100 100</Position>
                  <AlternativeImagePath></AlternativeImagePath>
                  <IsActive>true</IsActive>
                  <Name>Optionen</Name>
                  <Caption>Optionen</Caption>
                </Item>
              </HandledElements>
            </Item>
            <Item>
              <Position>0 0</Position>
              <AlternativeImagePath></AlternativeImagePath>
              <IsActive>false</IsActive>
              <Name>Options</Name>
              <HandledElements>
              </HandledElements>
            </Item>
          </Asset>
        </XnaContent>

    As you can see, the first window has in its HandledElements list an Item with the <Caption> field, which only a Button has. The problem is now: I can't deserialize this XML file, because GuiElement does not have this field; it only has the few fields above. I thought the serializer would know automatically which class to use, but it doesn't. Do I really have to give my windows a list for each child class of GuiElement? Or is there another workaround?
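    For what it's worth, XNA's IntermediateSerializer can usually deserialize polymorphic list items when each Item declares its concrete runtime type with a Type attribute. A sketch of what the button entry above might then look like, assuming Button lives in the GUI namespace:

        <!-- Sketch: declare the concrete type of the polymorphic item. -->
        <HandledElements>
          <Item Type="GUI.Button">
            <Position>100 100</Position>
            <AlternativeImagePath></AlternativeImagePath>
            <IsActive>true</IsActive>
            <Name>Optionen</Name>
            <Caption>Optionen</Caption>
          </Item>
        </HandledElements>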

    Read the article

  • Theory Of A Weird Thought - Forms Submission

    - by user2738336
    In theory, suppose you were to open two computers that were perfectly synced together on a website that has a form. This form has fields where, say, the username has to be unique. Assuming both computers have the same information on the form, and in theory let's say that the submit button was pressed at the same time, and that these two computers have the exact same build and internet speed and the same response time from the server: whose information would be submitted to the database, and whose information would be denied, given that the username field is unique?
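    The usual resolution (my note, not from the post) is that the database, not the web tier, breaks the tie: with a unique constraint on the username column, the database serializes the two inserts, one commits, and the other fails with a duplicate-key error, no matter how simultaneous the submissions were. A sketch:

        -- Sketch: the unique constraint makes the race harmless.
        CREATE TABLE users (
            id       INT AUTO_INCREMENT PRIMARY KEY,
            username VARCHAR(50) NOT NULL UNIQUE
        );

        -- Both requests run this; whichever the database serializes first succeeds,
        -- and the second fails with a duplicate-key error for the application to handle.
        INSERT INTO users (username) VALUES ('samename');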

    Read the article

  • How do you manage extensibility in your multi-tenant systems?

    - by Brian MacKay
    I've got a few big web-based multi-tenant products now, and very soon I can see that there will be a lot of customizations that are tenant-specific. An extra field here or there, maybe an extra page or some extra logic in the middle of a workflow - that sort of thing. Some of these customizations can be rolled into the core product, and that's great. Some of them are highly specific and would get in everyone else's way.

    I have a few ideas in mind for managing this, but none of them seem to scale well. The obvious solution is to introduce a ton of client-level settings, allowing various 'features' to be enabled on a per-client basis. The downside with that, of course, is massive complexity and clutter. You could introduce a truly huge number of settings, and over time various types of logic (presentation, business) could get way out of hand. Then there's the problem of client-specific fields, which begs for something cleaner than just adding a bunch of nullable fields to the existing tables.

    So what are people doing to manage this? Force.com seems to be the master of extensibility; obviously they've created a platform from the ground up that is super extensible. You can add on to almost anything with their web-based UI. FogBugz did something similar where they created a robust plugin model that, come to think of it, might have actually been inspired by Force. I know they spent a lot of time and money on it and if I'm not mistaken the intention was to actually use it internally for future product development. Sounds like the kind of thing I could be tempted to build but probably shouldn't. :)

    Is a massive investment in pluggable architecture the only way to go? How are you managing these problems, and what kind of results are you seeing?

    EDIT: It does look as though FogBugz handled the problem by building a fairly robust platform and then using that to put together their screens. To extend it you create a DLL containing classes that implement interfaces like ISearchScreenGridColumn, and that becomes a module. I'm sure it was tremendously expensive to build considering that they have a large number of devs and they worked on it for months, plus their surface area is perhaps 5% of the size of my application. Right now I am seriously wondering if Force.com is the right way to handle this. And I am a hard-core ASP.Net guy, so this is a strange position to find myself in.
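    For the client-specific fields problem specifically, one common alternative to nullable columns (my illustration, not from the post) is a pair of extension tables, essentially an entity-attribute-value design; a sketch in T-SQL with invented names:

        -- Sketch: per-tenant custom fields without altering core tables.
        CREATE TABLE CustomFieldDefinition (
            FieldId   INT IDENTITY PRIMARY KEY,
            TenantId  INT NOT NULL,
            TableName VARCHAR(64) NOT NULL,   -- which core entity this extends
            FieldName VARCHAR(64) NOT NULL,
            FieldType VARCHAR(16) NOT NULL    -- e.g. 'string', 'int', 'date'
        );

        CREATE TABLE CustomFieldValue (
            FieldId  INT NOT NULL REFERENCES CustomFieldDefinition(FieldId),
            RecordId INT NOT NULL,            -- PK of the extended core row
            Value    VARCHAR(MAX) NULL,       -- stored as text, cast on read
            PRIMARY KEY (FieldId, RecordId)
        );

    The trade-off is that querying and indexing these values is clumsier than real columns, which is exactly the tension described above.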

    Read the article
