Search Results

Search found 10463 results on 419 pages for 'required'.

Page 66/419 | < Previous Page | 62 63 64 65 66 67 68 69 70 71 72 73  | Next Page >

  • Adding Attributes to Generated Classes

    ASP.NET MVC 2 adds support for data annotations, implemented via attributes on your model classes.  Depending on your design, you may be using an OR/M tool like Entity Framework or LINQ-to-SQL to generate your entity classes, and you may further be using these entities directly as your Model.  This is fairly common, and alleviates the need to map between POCO domain objects and such entities (though there are certainly pros and cons to using such entities directly). As an example, the current version of the NerdDinner application (available on CodePlex at nerddinner.codeplex.com) uses Entity Framework for its model.  Thus, there is a NerdDinner.edmx file in the project, and a generated NerdDinner.Models.Dinner class.  Fortunately, these generated classes are marked as partial, so you can extend their behavior via your own partial class in a separate file.  However, if for instance the generated Dinner class has a property Title of type string, you can't then add your own Title of type string for the purpose of adding data annotations to it, like this:

    public partial class Dinner
    {
        [Required]
        public string Title { get; set; }
    }

    This will result in a compilation error, because the generated Dinner class already contains a definition of Title.  How then can we add attributes to this generated code?  Do we need to go into the T4 template and add a special case that says "if we're generating a Dinner class and it has a Title property, add this attribute"?  Ick.

    MetadataType to the Rescue

    The MetadataType attribute can be used to define a type which contains attributes (metadata) for a given class.  It is applied to the class you want to add metadata to (Dinner), and it refers to a totally separate class to which you're free to add whatever methods and properties you like.  Using this attribute, our partial Dinner class might look like this:

    [MetadataType(typeof(Dinner_Validation))]
    public partial class Dinner {}

    public class Dinner_Validation
    {
        [Required]
        public string Title { get; set; }
    }

    In this case the Dinner_Validation class is public, but if you were concerned about muddying your API with such classes, it could instead have been created as a private class within Dinner.  Having the validation attributes specified in their own class (with no other responsibilities) complies with the Single Responsibility Principle and makes it easy for you to test that the validation rules you expect are in place via these annotations/attributes. Thanks to Julie Lerman for her help with this.  Right after she showed me how to do this, I realized it was also already being done in the project I was working on.
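
    As a rough illustration of the "easy to test" point above, a unit test can simply reflect over the metadata class and assert that the annotation is present. This is a minimal sketch, not code from the article; it is shown without a test-framework attribute, so add [Test]/[Fact] from whichever framework you use and swap the throw for your framework's assertion:

    using System;
    using System.ComponentModel.DataAnnotations;
    using System.Linq;
    using System.Reflection;

    public class DinnerValidationTests
    {
        // Verifies the [Required] annotation is declared on the metadata ("buddy") class.
        public void Title_is_marked_required()
        {
            PropertyInfo title = typeof(Dinner_Validation).GetProperty("Title");
            bool hasRequired = title.GetCustomAttributes(typeof(RequiredAttribute), true).Any();

            if (!hasRequired)
                throw new Exception("Expected [Required] on Dinner_Validation.Title");
        }
    }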

    Read the article

  • Using SQL Sentry Plan Explorer

    - by fatherjack
    LiveJournal Tags: How To,SSMS,Tips and tricks,Execution Plans This is a quick tip that I hope will help you use SQL Sentry's Plan Explorer tool. It's a really great tool for viewing Execution Plans - something that SSMS isn't too great at. If you don't have the tool then you can download it for free from http://www.sqlsentry.net/plan-explorer/sql-server-query-view.asp. So, just a little setup is required before I can show you the tip in full. Create a directory on your Desktop called Execution...(read more)

    Read the article

  • Service Stack

    - by csharp-source.net
    ServiceStack allows you to build re-usable SOA-style web services with plain POCO DataContract classes. The same DTOs can be shared with a .NET client application, eliminating the need for any generated code. With no configuration required, web services created are immediately discoverable and callable via the following supported endpoints: REST and XML, REST and JSON, and SOAP 1.1 / 1.2. Services can run on both Mono and the .NET Framework and be hosted in an ASP.NET Web Application, a Windows Service or a Console application.
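
    To give a feel for the "plain POCO DataContract" style described above, here is a minimal sketch of my own, not taken from the ServiceStack documentation. The DTOs are ordinary DataContract classes that both server and client can reference. The service contract has changed across ServiceStack versions, so treat the IService<Hello>/Execute shape shown here as an assumption about the classic API of that era:

    using System.Runtime.Serialization;

    [DataContract]
    public class Hello
    {
        [DataMember]
        public string Name { get; set; }
    }

    [DataContract]
    public class HelloResponse
    {
        [DataMember]
        public string Result { get; set; }
    }

    // The same Hello/HelloResponse DTOs can be referenced by a .NET client,
    // so no proxy code needs to be generated.
    public class HelloService : IService<Hello>   // IService<T> lives in the ServiceStack assemblies
    {
        public object Execute(Hello request)
        {
            return new HelloResponse { Result = "Hello, " + request.Name };
        }
    }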

    Read the article

  • InvokeRequired not reliable?

    - by marocanu2001
    InvalidOperationException: Cross-thread operation not valid: Control 'progressBar' accessed from a thread other than the thread it was created ... Now this is not a nice way to start a day! Not even if it's a Monday! So you have seen this and already thought: come on, this is an old one, just check InvokeRequired before calling the method on the control, something like this:

    if (progressBar.InvokeRequired)
    {
        progressBar.Invoke(new MethodInvoker(delegate { progressBar.Value = 50; }));
    }
    else
    {
        progressBar.Value = 50;
    }

    Unfortunately this was not working the way I would have expected. I got this error, debugged, and though during debugging InvokeRequired had become true, the error was thrown on the branch that did not require Invoke. So there was a scenario where InvokeRequired returned false and accessing the control was still not on the right thread ... The problem was that I kept showing and hiding the little form containing the progress bar. The progress bar was updating on an event, ProgressChanged, and I could not guarantee the little form was loaded by the time the event was raised. So, having spotted the problem: if none of the parents of the control you try to modify has been created at the time of the method invocation, InvokeRequired returns false! That causes your code to execute on the wrong thread. Of course, updating the UI before the window has been created is not a legitimate action either, but I would still have expected a different error. MSDN: "If the control's handle does not yet exist, InvokeRequired searches up the control's parent chain until it finds a control or form that does have a window handle. If no appropriate handle can be found, the InvokeRequired method returns false. This means that InvokeRequired can return false if Invoke is not required (the call occurs on the same thread), or if the control was created on a different thread but the control's handle has not yet been created." Have a look at InvokeRequired's implementation:

    public bool InvokeRequired
    {
        get
        {
            HandleRef hWnd;
            int lpdwProcessId;
            if (this.IsHandleCreated)
            {
                hWnd = new HandleRef(this, this.Handle);
            }
            else
            {
                Control wrapper = this.FindMarshallingControl();
                if (!wrapper.IsHandleCreated)
                {
                    return false; // <==========
                }
                hWnd = new HandleRef(wrapper, wrapper.Handle);
            }
            int windowThreadProcessId = SafeNativeMethods.GetWindowThreadProcessId(hWnd, out lpdwProcessId);
            int currentThreadId = SafeNativeMethods.GetCurrentThreadId();
            return (windowThreadProcessId != currentThreadId);
        }
    }

    Here's a good article about this and a workaround: http://www.ikriv.com/en/prog/info/dotnet/MysteriousHang.html
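
    One defensive pattern that follows from this (a sketch of my own, not from the article or the linked workaround): check IsHandleCreated yourself before trusting InvokeRequired, and decide explicitly what to do when the handle does not exist yet. SetProgress is a hypothetical helper name:

    void SetProgress(int value)
    {
        if (!progressBar.IsHandleCreated)
        {
            // The form is not fully created yet; InvokeRequired would return false
            // here even from a worker thread, so skip (or queue) the update instead.
            return;
        }

        if (progressBar.InvokeRequired)
        {
            progressBar.Invoke(new MethodInvoker(delegate { progressBar.Value = value; }));
        }
        else
        {
            progressBar.Value = value;
        }
    }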

    Read the article

  • How can I convince cowboy programmers to use source control?

    - by P.Brian.Mackey
    UPDATE: I work on a small team of devs, 4 guys. They have all used source control. Most of them can't stand source control and instead choose not to use it. I strongly believe source control is a necessary part of professional development. Several issues make it very difficult to convince them to use source control:

    The team is not used to using TFS. I've had 2 training sessions, but was only allotted 1 hour, which is insufficient.

    Team members directly modify code on the server. This keeps code out of sync, requiring comparisons just to be sure you are working with the latest code, and complex merge problems arise.

    Time estimates offered by developers exclude the time required to fix any of these problems. So, if I say "no, it will take 10x longer...", I have to constantly explain these issues and put myself at risk because management may now perceive me as "slow".

    The physical files on the server differ in unknown ways across ~100 files. Merging requires knowledge of the project at hand and, therefore, developer cooperation, which I am not able to obtain. Other projects are falling out of sync.

    Developers continue to distrust source control and therefore compound the issue by not using it.

    Developers argue that using source control is wasteful because merging is error prone and difficult. This is a difficult point to argue, because when source control is so badly misused and continually bypassed, it is error prone indeed. Therefore, the evidence "speaks for itself" in their view.

    Developers argue that directly modifying server code, bypassing TFS, saves time. This is also difficult to argue, because the merge required to synchronize the code in the first place is time consuming. Multiply this by the 10+ projects we manage.

    Permanent files are often stored in the same directory as the web project, so publishing (a full publish) erases these files that are not in source control. This also drives distrust of source control, because "publishing breaks the project". Fixing this (moving stored files out of the solution subfolders) takes a great deal of time and debugging, as these locations are not set in web.config and often exist across multiple code points.

    So, the culture perpetuates itself. Bad practice begets more bad practice. Bad solutions drive new hacks to "fix" much deeper, much more time-consuming problems. Servers and hard drive space are extremely difficult to come by. Yet user expectations are rising. What can be done in this situation?

    Read the article

  • ASP.NET 3.5 User Input Validation Basics

    User input validation is essential, and a requirement, for any web application deployed on the Internet. This is because on the Internet no one can be sure that the user will enter the required inputs with the correct format, type and values. This is especially true for confused web application users and for malicious users. This article series will show you how to validate user input in ASP.NET....
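
    As a taste of the server-side half of this topic, here is a small sketch of my own (not from the article series) showing the usual ASP.NET Web Forms pattern: a validator attached to an input control plus a Page.IsValid check in the code-behind. The page and control names (NewDinner, TitleTextBox, SubmitButton) are made up, and the sketch assumes the .aspx markup declares the TextBox and a RequiredFieldValidator with ControlToValidate="TitleTextBox":

    using System;
    using System.Web.UI;
    using System.Web.UI.WebControls;

    public partial class NewDinner : Page
    {
        protected TextBox TitleTextBox;   // normally wired up from the markup/designer file

        protected void SubmitButton_Click(object sender, EventArgs e)
        {
            Page.Validate();              // re-run the validators on the server
            if (!Page.IsValid)
                return;                   // never trust client-side validation alone

            SaveDinner(TitleTextBox.Text); // the required-field rule has passed at this point
        }

        private void SaveDinner(string title) { /* persist the value */ }
    }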

    Read the article

  • YouTube Direct: Getting Started Guide

    YouTube Direct: Getting Started Guide Jeff Posnick narrates a screencast detailing all aspects of getting started with YouTube Direct, from required downloads to configuration to deployment. For even more information about YouTube Direct, see code.google.com From: GoogleDevelopers Views: 9685 32 ratings Time: 19:58 More in Science & Technology

    Read the article

  • Does relying on intellisense and documentation a lot while coding make you a bad programmer? [duplicate]

    - by sharp12345
    This question already has an answer here: Forgetting basic language functions due to use of IDE, over reliance? [duplicate] 4 answers Is a programmer required to learn and memorize all syntax, or is it OK to keep some documentation handy? Would it affect the way that managers look at coders? What are the downsides of depending on IntelliSense and auto-complete technologies and PDF documentation?

    Read the article

  • C# Performance Pitfall – Interop Scenarios Change the Rules

    - by Reed
    C# and .NET, overall, really do have fantastic performance in my opinion.  That being said, the performance characteristics dramatically differ from native programming, and take some relearning if you're used to doing performance optimization in most other languages, especially C, C++, and similar.  However, there are times when revisiting tricks learned in native code plays a critical role in performance optimization in C#. I recently ran across a nasty scenario that illustrated to me how dangerous following any fixed rules for optimization can be… The rules in C# when optimizing code are very different than in C or C++.  Often, they're exactly backwards.  For example, in C and C++, lifting a variable out of loops in order to avoid memory allocations often can have huge advantages.  If some function within a call graph is allocating memory dynamically, and that gets called in a loop, it can dramatically slow down a routine. This can be a tricky bottleneck to track down, even with a profiler.  Looking at the memory allocation graph is usually the key for spotting this routine, as it's often "hidden" deep in the call graph.  For example, while optimizing some of my scientific routines, I ran into a situation where I had a loop similar to:

    for (i=0; i<numberToProcess; ++i)
    {
        // Do some work
        ProcessElement(element[i]);
    }

    This loop was at a fairly high level in the call graph, and often could take many hours to complete, depending on the input data.  As such, any performance optimization we could achieve would be greatly appreciated by our users. After a fair bit of profiling, I noticed that a couple of function calls down the call graph (inside of ProcessElement), there was some code that effectively was doing:

    // Allocate some data required
    DataStructure* data = new DataStructure(num);
    // Call into a subroutine that passed around and manipulated this data highly
    CallSubroutine(data);
    // Read and use some values from here
    double values = data->Foo;
    // Cleanup
    delete data;
    // ...
    return bar;

    Normally, if "DataStructure" was a simple data type, I could just allocate it on the stack.  However, its constructor, internally, allocated its own memory using new, so this wouldn't eliminate the problem.  In this case, however, I could change the call signatures to allow the pointer to the data structure to be passed into ProcessElement and through the call graph, allowing the inner routine to reuse the same "data" memory instead of allocating.  At the highest level, my code effectively changed to something like:

    DataStructure* data = new DataStructure(numberToProcess);
    for (i=0; i<numberToProcess; ++i)
    {
        // Do some work
        ProcessElement(element[i], data);
    }
    delete data;

    Granted, this dramatically reduced the maintainability of the code, so it wasn't something I wanted to do unless there was a significant benefit.
In this case, after profiling the new version, I found that it increased the overall performance dramatically – my main test case went from 35 minutes of runtime down to 21 minutes.  This was such a significant improvement, I felt it was worth the reduction in maintainability. In C and C++, it's generally a good idea (for performance) to:

    Reduce the number of memory allocations as much as possible,
    Use fewer, larger memory allocations instead of many smaller ones, and
    Allocate as high up the call stack as possible, and reuse memory.

    I've seen many people try to make similar optimizations in C# code.  For good or bad, this is typically not a good idea.  The garbage collector in .NET completely changes the rules here. In C#, reallocating memory in a loop is not always a bad idea.  In this scenario, for example, I may have been much better off leaving the original code alone.  The reason for this is the garbage collector.  The GC in .NET is incredibly effective, and leaving the allocation deep inside the call stack has some huge advantages.  First and foremost, it tends to make the code more maintainable – passing around object references tends to couple the methods together more than necessary, and overall increases the complexity of the code.  This is something that should be avoided unless there is a significant reason.  Second, (unlike C and C++) memory allocation of a single object in C# is normally cheap and fast.  Finally, and most critically, there is a large advantage to having short-lived objects.  If you lift a variable out of the loop and reuse the memory, it's much more likely that the object will get promoted to Gen1 (or worse, Gen2).  This can cause expensive compaction operations to be required, and also lead to (at least temporary) memory fragmentation as well as more costly collections later. As such, I've found that it's often (though not always) faster to leave memory allocations where you'd naturally place them – deep inside of the call graph, inside of the loops.  This causes the objects to stay very short lived, which in turn increases the efficiency of the garbage collector, and can dramatically improve the overall performance of the routine as a whole. In C#, I tend to:

    Keep variable declarations in the tightest scope possible
    Declare and allocate objects at usage

    While this serves some of the same goals (reducing unnecessary allocations, etc.), the goal here is a bit different – it's about keeping the objects rooted for as little time as possible in order to (attempt to) keep them completely in Gen0, or worst case, Gen1.  It also has the huge advantage of keeping the code very maintainable – objects are used and "released" as soon as possible, which keeps the code very clean.  It does, however, often have the side effect of causing more allocations to occur, while keeping the objects rooted for a much shorter time. Now – nowhere here am I suggesting that these are hard-and-fast rules that are always true.  That being said, my time spent optimizing over the years encourages me to naturally write code that follows the above guidelines, then profile and adjust as necessary.  In my current project, however, I ran across one of those nasty little pitfalls that's something to keep in mind – interop changes the rules. In this case, I was dealing with an API that, internally, used some COM objects.  These COM objects were leading to native allocations (most likely C++) occurring in a loop deep in my call graph.
Even though I was writing nice, clean managed code, the normal managed code rules for performance no longer applied.  After profiling to find the bottleneck in my code, I realized that my inner loop, an innocuous-looking block of C# code, was effectively causing a set of native memory allocations in every iteration.  This required going back to a "native programming" mindset for optimization.  Lifting these variables and reusing them took a 1:10 routine down to 0:20 – again, a very worthwhile improvement. Overall, the lessons here are:

    Always profile if you suspect a performance problem – don't assume any rule is correct, or any code is efficient, just because it looks like it should be.
    Remember to check memory allocations when profiling, not just CPU cycles.
    Interop scenarios often cause managed code to act very differently than "normal" managed code.
    Native code can be hidden very cleverly inside of managed wrappers.
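
    To make the allocation-placement point concrete, here is a small C# sketch of my own (not from the article) contrasting the two shapes discussed; the type and method names are placeholders:

    // Allocation left inside the loop: each buffer is short lived and,
    // in pure managed code, usually dies cheaply in Gen0.
    for (int i = 0; i < numberToProcess; ++i)
    {
        var scratch = new double[1024];
        ProcessElement(elements[i], scratch);
    }

    // The "native mindset" version: one buffer hoisted out and reused.
    // It stays rooted for the whole loop (risking promotion to Gen1/Gen2),
    // but becomes the faster choice when each allocation actually triggers
    // native/COM allocations under the covers, as in the interop case above.
    var shared = new double[1024];
    for (int i = 0; i < numberToProcess; ++i)
    {
        ProcessElement(elements[i], shared);
    }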

    Read the article

  • Source of (programmer) inefficiency

    - by Daniel
    I am interested in gaining better insight into the possible reasons for personal inefficiency as programmers (and only in programming) due to – simply – our own errors (because we are human – well, almost all of us). I am not interested in how productive we are or in how many adjustments the customer asks for when the work is done, but in where and how each of us spends that part of our time on tasks that are unproductive, where there is no one to blame except ourselves. Excluding ego-feeding and/or self-gratification, what I am trying to get (for all of us) is: what are the common issues eating our time; insight into the reasons for those issues; and simple ways for each of us, personally (not delegating actions to others or our organizations), to correct our own problems. Please do not think in academic terms, but aim at the opportunity to compare our daily experiences and understand what our personal deficiencies are and how we try to fix them. If you are interested in responding to this post, please: extend the list if you see something important (or obvious) missing; honestly highlight or name your first issue, telling us the way you try to address and solve it, acting on yourself and yourself only, in a sort of "continuous quality improvement". My criterion for accepting the answer is: choose the best solution (feasibility and utility) to fix one (or more) of the problems on the list. Of course, selecting an error is not a vote on our skills: maybe we are hyper-professional programmers and lose only ten minutes every year, or we are terribly inefficient, losing a couple of days a week: the reasons for inefficiency could be really the same – just on a different scale. A possible list:

    Plain errors in names (variables, functions).
    Inability to see the obvious in your code.
    Misreading.
    Lack of concentration.
    Trying to use a technology you have not mastered.
    Errors with data types.
    Time required to understand your previous code or your documentation.
    Trying to do something more than requested because you enjoy it.
    Using solutions more complicated than required because you enjoy it.
    Plain logical errors.
    Errors due to your own fault in communication.
    Distraction.

    My first personal issue: "Trying to use a technology you do not master." I have to use several technologies daily and I often need to spend significant time correcting code because my assumptions were plainly wrong. The reason for this: production needs put on high pressure and make it difficult to find the time to learn. I try to address this by reading technical books – as many as I can – even if this actually consumes a lot of time.

    Read the article

  • New UK SQL Server community event

    - by GavinPayneUK
    I’m pleased to announce that with the support of VMware I will be holding a new UK SQL Server community event in January 2011. Wednesday January 19th 2011, 6.45-9.00pm. Free registration required, free parking on-site. Registration link here. SQL Server in the Evening, hosted at VMware's UK headquarters in Frimley in Surrey, will cover contemporary technology topics for those using SQL Server in 2011, as well as providing a chance to meet and make SQL Server community friends. The event will have...(read more)

    Read the article

  • Building dynamic OLAP data marts on-the-fly

    - by DrJohn
    At the forthcoming SQLBits conference, I will be presenting a session on how to dynamically build an OLAP data mart on-the-fly. This blog entry is intended to clarify exactly what I mean by an OLAP data mart, why you may need to build them on-the-fly and finally outline the steps needed to build them dynamically. In subsequent blog entries, I will present exactly how to implement some of the techniques involved.

    What is an OLAP data mart? In data warehousing parlance, a data mart is a subset of the overall corporate data provided to business users to meet specific business needs. Of course, the term does not specify the technology involved, so I coined the term "OLAP data mart" to identify a subset of data which is delivered in the form of an OLAP cube which may be accompanied by the relational database upon which it was built. To clarify, the relational database is specifically created and loaded with the subset of data and then the OLAP cube is built and processed to make the data available to the end-users via standard OLAP client tools.

    Why build OLAP data marts? Market research companies sell data to their clients to make money. To gain competitive advantage, market research providers like to "add value" to their data by providing systems that enhance analytics, thereby allowing clients to make best use of the data. As such, OLAP cubes have become a standard way of delivering added value to clients. They can be built on-the-fly to hold specific data sets and meet particular needs and then hosted on a secure intranet site for remote access, or shipped to clients' own infrastructure for hosting. Even better, they support a wide range of different tools for analytical purposes, including the ever popular Microsoft Excel.

    Extension Attributes: The Challenge. One of the key challenges in building multiple OLAP data marts based on the same 'template' is handling extension attributes. These are attributes that meet the client's specific reporting needs, but do not form part of the standard template. Now clearly, these extension attributes have to come into the system via additional files and ultimately be added to relational tables so they can end up in the OLAP cube. However, processing these files and filling dynamically altered tables with SSIS is a challenge, as SSIS packages tend to break as soon as the database schema changes. There are two approaches to this: (1) dynamically build an SSIS package in memory to match the new database schema using C#, or (2) have the extension attributes provided as name/value pairs so the file's schema does not change and can easily be loaded using SSIS. The problem with the first approach is the complexity of writing an awful lot of complex C# code. The problem with the second approach is that name/value pairs are useless to an OLAP cube; so they have to be pivoted back into a proper relational table somewhere in the data load process WITHOUT breaking SSIS. How this can be done will be part of a future blog entry.

    What is involved in building an OLAP data mart? There are a great many steps involved in building OLAP data marts on-the-fly. The key point is that all the steps must be automated to allow for the production of multiple OLAP data marts per day (i.e. many thousands, each with its own specific data set and attributes). Now most of these steps have a great deal in common with standard data warehouse practices. The key difference is that the databases are all built to order.
The only permanent database is the metadata database (shown in orange) which holds all the metadata needed to build everything else (i.e. client orders, configuration information, connection strings, client-specific requirements and attributes etc.). The staging database (shown in red) has a short life: it is built, populated and then ripped down as soon as the OLAP Data Mart has been populated. In the diagram below, the OLAP data mart comprises the two blue components: the Data Mart, which is a relational database, and the OLAP Cube, which is an OLAP database implemented using Microsoft Analysis Services (SSAS). The client may receive just the OLAP cube or both components together depending on their reporting requirements.  So, in broad terms the steps required to fulfil a client order are as follows:

    Step 1: Prepare metadata. Create a set of database names unique to the client's order. Modify all package connection strings to be used by SSIS to point to the new databases and file locations.

    Step 2: Create relational databases. Create the staging and data mart relational databases using dynamic SQL and set the database recovery mode to SIMPLE, as we do not need the overhead of logging anything. Execute SQL scripts to build all database objects (tables, views, functions and stored procedures) in the two databases.

    Step 3: Load staging database. Use SSIS to load all data files into the staging database in a parallel operation. Load extension files containing name/value pairs; these will provide client-specific attributes in the OLAP cube.

    Step 4: Load data mart relational database. Load the data from staging into the data mart relational database, again in parallel where possible. Allocate surrogate keys and use SSIS to perform surrogate key lookups during the load of fact tables.

    Step 5: Load extension tables & attributes. Pivot the extension attributes from their native name/value pairs into proper relational tables. Add the extension attributes to the views used by the OLAP cube.

    Step 6: Deploy & process OLAP cube. Deploy the OLAP database directly to the server using a C# script task in SSIS. Modify the connection string used by the OLAP cube to point to the data mart relational database. Modify the cube structure to add the extension attributes to both the data source view and the relevant dimensions. Remove any standard attributes that are not required. Process the OLAP cube.

    Step 7: Backup and drop databases. Drop the staging database as it is no longer required. Back up the data mart relational and OLAP databases and ship these to the client's infrastructure. Drop the data mart relational and OLAP databases from the build server. Mark the order complete. Start processing the next order, ad infinitum.

    So my future blog posts and my forthcoming session at the SQLBits conference will all focus on some of the more interesting aspects of building OLAP data marts on-the-fly, such as handling the load of extension attributes and how to dynamically alter the structure of an OLAP cube using C#.
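
    As a flavour of what the processing part of Step 6 involves, the sketch below (my own, not from the article) uses the Analysis Services AMO library to connect to a server and fully process a freshly deployed cube database. The server name, database name and the surrounding structure are placeholders; the article's actual implementation lives inside an SSIS C# script task:

    using Microsoft.AnalysisServices;

    public static class CubeProcessor
    {
        public static void ProcessFull(string serverName, string olapDatabaseName)
        {
            var server = new Server();
            server.Connect("Data Source=" + serverName);
            try
            {
                // Find the freshly deployed OLAP database and process everything in it.
                Database database = server.Databases.FindByName(olapDatabaseName);
                database.Process(ProcessType.ProcessFull);
            }
            finally
            {
                server.Disconnect();
            }
        }
    }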

    Read the article

  • NNTP bridge for MS forums

    - by Luca Calligaris
    For those who want to use their newsreader to interact with MS forums, there's a new tool: the NNTP Bridge application serves as a channel that enables NNTP newsreaders to read and write content on Microsoft Forums. You can download the application and documentation from http://connect.microsoft.com/MicrosoftForums (registration required).

    Read the article

  • What's the point of adding Unicode identifier support to various language implementations?

    - by Egor Tensin
    I personally find reading code full of Unicode identifiers confusing. In my opinion, it also prevents the code from being easily maintained. Not to mention all the effort required for authors of various translators to implement such support. I also constantly notice the lack (or the presence) of Unicode identifier support in the lists of (dis)advantages of various language implementations (as if it really matters). I don't get it: why so much attention?

    Read the article

  • Reduce Repetitive Initialization Code in C++ Applications by Using Delegating Constructors

    You're often required to repeat identical pieces of initialization code in every constructor of a class that declares multiple constructors. That's because, unlike a few other programming languages, the C++ programming language doesn't allow a constructor to call another constructor of the same class. Luckily, this problem is about to disappear with the recent approval of a new C++0x feature called delegating constructors, which is explained in this C++ tutorial.
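
    For comparison, C# already offers this idea through constructor chaining with ": this(...)". A minimal C# sketch of the concept follows (an illustration only; the actual C++0x delegating-constructor syntax is covered in the linked tutorial, and the Connection class here is made up):

    public class Connection
    {
        private readonly string host;
        private readonly int port;
        private readonly int timeoutSeconds;

        // All the shared initialization lives in one "designated" constructor...
        public Connection(string host, int port, int timeoutSeconds)
        {
            this.host = host;
            this.port = port;
            this.timeoutSeconds = timeoutSeconds;
        }

        // ...and the other constructors delegate to it instead of repeating the code.
        public Connection(string host, int port) : this(host, port, 30) { }

        public Connection(string host) : this(host, 80) { }
    }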

    Read the article

  • AMIS JDEV 12c, ADF 12c and Weblogic 12c presentations

    - by JuergenKress
    AMIS held a JDEV 12c, ADF 12c and Weblogic 12c preview event yesterday. The presentations can be found on their SlideShare: www.slideshare.net/AMIS_Services. WebLogic Partner Community For regular information, become a member of the WebLogic Partner Community: please visit http://www.oracle.com/partners/goto/wls-emea (OPN account required). If you need support with your account please contact the Oracle Partner Business Center. Blog Twitter LinkedIn Mix Forum Wiki Technorati Tags: Amis,WebLogic,WebLogic Community,Oracle,OPN,Jürgen Kress

    Read the article

  • PayPal India Problems Continues

    - by Ravish
    Reserve Bank of India has been giving a hard time to PayPal and its users in India. RBI had previously blocked PayPal transactions in India a few times, and they made it difficult to withdraw payments by enforcing export- and forex-related compliance. Here is yet another piece of bad news for Indian PayPal users. With effect from March 1st, Indian users cannot receive payments of more than $500 in their PayPal account. Moreover, you cannot keep or use any funds in your PayPal account: you cannot use your PayPal balance to pay for any goods or services, and you must withdraw it to your bank account within 7 days of receipt. These changes have rendered PayPal almost useless for small businesses, webmasters and publishers. Most webmasters and publishers rely on PayPal to receive payments from advertisers and clients. It has also made it impossible to buy anything online with PayPal. Sending payments abroad via other channels is already a pain; sending a bank wire requires too many formalities, documentation and time. Moreover, you are even required to deduct TDS on payments you make for any products or services. The restrictions will take effect on March 1st, so you have 30 days to complete any pending transactions you may have. This step by RBI is yet another gimmick by a corrupt Indian Government to make life difficult for entrepreneurs, kill innovation, slap on more taxes and create more channels for bribes. Following is the notification from PayPal about this issue: As part of our commitment to provide a high level of customer service, we would like to give you a 30-day advance notice on changes to our user agreement for India. With effect from 1 March 2011, you are required to comply with the requirements set out in the notification of the Reserve Bank of India governing the processing and settlement of export-related receipts facilitated by online payment gateways ("RBI Guidelines"). In order to comply with the RBI Guidelines, our user agreement in India will be amended for the following services as follows: Any balance in and all future payments into your PayPal account may not be used to buy goods or services and must be transferred to your bank account in India within 7 days from the receipt of confirmation from the buyer in respect of the goods or services; and Export-related payments for goods and services into your PayPal account may not exceed US$500 per transaction. We seek your understanding as we continue to employ our best efforts to comply with the RBI Guidelines in a timely manner.

    Read the article

  • How to implement an experience system?

    - by Roflcoptr
    I'm currently writing a small game that is based on earning experience by killing enemies. As usual, each level requires more experience gain than the level before, and at higher levels killing enemies awards more experience. But I have a problem balancing this system. Are there any prebuilt algorithms that help calculate what the experience curve required for each level should look like? And how much experience an average enemy at a specific level should provide?
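
    There is no single canonical formula, but a common starting point (a sketch of one convention, not an authoritative answer) is a polynomial curve for the per-level requirement plus a reward that scales with enemy level; the constants below are arbitrary tuning knobs:

    using System;

    static class ExperienceCurve
    {
        // XP needed to go from level (n-1) to level n; grows roughly as n^1.5.
        public static int XpRequiredForLevel(int level, int baseXp = 100)
        {
            return (int)Math.Round(baseXp * Math.Pow(level, 1.5));
        }

        // XP awarded for killing an enemy of a given level.
        public static int XpAwardedByEnemy(int enemyLevel, int baseReward = 10, int perLevelBonus = 5)
        {
            return baseReward + enemyLevel * perLevelBonus;
        }
    }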

    Read the article

  • Setting useLegacyV2RuntimeActivationPolicy At Runtime

    - by Reed
    Version 4.0 of the .NET Framework included a new CLR which is almost entirely backwards compatible with the 2.0 version of the CLR.  However, by default, mixed-mode assemblies targeting .NET 3.5sp1 and earlier will fail to load in a .NET 4 application.  Fixing this requires setting useLegacyV2RuntimeActivationPolicy in your app.config for the application.  While there are many good reasons for this decision, there are times when this is extremely frustrating, especially when writing a library.  As such, there are (rare) times when it would be beneficial to set this in code, at runtime, as well as verify that it's running correctly prior to receiving a FileLoadException. Typically, loading a pre-.NET 4 mixed mode assembly is handled simply by changing your app.config file, and including the relevant attribute in the startup element:

    <?xml version="1.0" encoding="utf-8" ?>
    <configuration>
      <startup useLegacyV2RuntimeActivationPolicy="true">
        <supportedRuntime version="v4.0"/>
      </startup>
    </configuration>

    This causes your application to run correctly, and load the older, mixed-mode assembly without issues. For full details on what's happening here and why, I recommend reading Mark Miller's detailed explanation of this attribute and the reasoning behind it. Before I show any code, let me say: I strongly recommend using the official approach of using app.config to set this policy. That being said, there are (rare) times when, for one reason or another, changing the application configuration file is less than ideal. While this is the supported approach to handling this issue, the CLR Hosting API includes a means of setting this programmatically via the ICLRRuntimeInfo interface.  Normally, this is used if you're hosting the CLR in a native application in order to set this, at runtime, prior to loading the assemblies.  However, the F# Samples include a nice trick showing how to load this API and bind this policy, at runtime.  This was required in order to host the Managed DirectX API, which is built against an older version of the CLR. This is fairly easy to port to C#.
Instead of a direct port, I also added a little addition – by trapping the COM exception received if unable to bind (which will occur if the 2.0 CLR is already bound), I also allow a runtime check of whether this property was set up properly:

    public static class RuntimePolicyHelper
    {
        public static bool LegacyV2RuntimeEnabledSuccessfully { get; private set; }

        static RuntimePolicyHelper()
        {
            ICLRRuntimeInfo clrRuntimeInfo = (ICLRRuntimeInfo)RuntimeEnvironment.GetRuntimeInterfaceAsObject(
                Guid.Empty,
                typeof(ICLRRuntimeInfo).GUID);
            try
            {
                clrRuntimeInfo.BindAsLegacyV2Runtime();
                LegacyV2RuntimeEnabledSuccessfully = true;
            }
            catch (COMException)
            {
                // This occurs with an HRESULT meaning
                // "A different runtime was already bound to the legacy CLR version 2 activation policy."
                LegacyV2RuntimeEnabledSuccessfully = false;
            }
        }

        [ComImport]
        [InterfaceType(ComInterfaceType.InterfaceIsIUnknown)]
        [Guid("BD39D1D2-BA2F-486A-89B0-B4B0CB466891")]
        private interface ICLRRuntimeInfo
        {
            void xGetVersionString();
            void xGetRuntimeDirectory();
            void xIsLoaded();
            void xIsLoadable();
            void xLoadErrorString();
            void xLoadLibrary();
            void xGetProcAddress();
            void xGetInterface();
            void xSetDefaultStartupFlags();
            void xGetDefaultStartupFlags();

            [MethodImpl(MethodImplOptions.InternalCall, MethodCodeType = MethodCodeType.Runtime)]
            void BindAsLegacyV2Runtime();
        }
    }

    Using this, it's possible to not only set this at runtime, but also verify, prior to loading your mixed mode assembly, whether this will succeed. In my case, this was quite useful – I am working on a library purely for internal use which uses a numerical package that is supplied with both a completely managed as well as a native solver.  The native solver uses a CLR 2 mixed-mode assembly, but is dramatically faster than the pure managed approach.  By checking RuntimePolicyHelper.LegacyV2RuntimeEnabledSuccessfully at runtime, I can decide whether to enable the native solver, and only do so if I successfully bound this policy. There are some tricks required here – to enable this sort of fallback behavior, you must make these checks in a type that doesn't cause the mixed mode assembly to be loaded.  In my case, this forced me to encapsulate the library I was using entirely in a separate class, perform the check, then pass through the required calls to that class.  Otherwise, the library will load before the hosting process gets enabled, which in turn will fail. This code will also, of course, try to enable the runtime policy before the first time you use this class – which typically means just before the first time you check the boolean value.  As a result, checking this early on in the application is more likely to allow it to work. Finally, if you're using a library, this has to be called prior to the 2.0 CLR loading.  This will cause it to fail if you try to use it to enable this policy in a plugin for most third party applications that don't have their app.config set up properly, as they will likely have already loaded the 2.0 runtime. As an example, take a simple audio player.
The code below shows how this can be used, at runtime, to use the "native" API only if the policy bound successfully, and to fall back (or raise a nicer exception) if it did not:

    public class AudioPlayer
    {
        private IAudioEngine audioEngine;

        public AudioPlayer()
        {
            if (RuntimePolicyHelper.LegacyV2RuntimeEnabledSuccessfully)
            {
                // This will load a CLR 2 mixed mode assembly
                this.audioEngine = new AudioEngineNative();
            }
            else
            {
                this.audioEngine = new AudioEngineManaged();
            }
        }

        public void Play(string filename)
        {
            this.audioEngine.Play(filename);
        }
    }

    Now – the warning: this approach works, but I would be very hesitant to use it in public-facing production code, especially for anything other than initializing your own application.  While this should work in a library, using it has a very nasty side effect: you change the runtime policy of the executing application in a way that is very hidden and non-obvious.

    Read the article

  • GPL vs plugin interfaces not designed with a specific application in mind

    - by Kristóf Marussy
    I am not seeking or in need of legal advice, but an interesting thought experiment came to my mind. Imagine the following situation (I cannot really think of a concrete example and I am unsure if a real manifestation even exists): there is a free (libre) API A licensed under some permissive license or even the LGPL. Non-free application B implements this API in order to host plugins, but there are other free software programs doing the same thing. Moreover, there is plugin C acting as a plugin under API A. It links to library D, which is under the GPL, so C is also under the GPL. Plugins using A are loaded into hosts via a dlopen-like mechanism and use complex data structures for host-plugin communication. Neither B nor C distributes any files that may be required for A to function properly (like headers containing the structure definitions of A, or dynamic libraries containing helper functions for A written by the authors of A), but such things may exist. Now some user installs application B and plugin C on his machine, along with anything that may be required for API A to function properly. Then he proceeds to load C into B and creates some intellectual property with B which is not a piece of software. Did a GPL violation happen at some point, and if so, who violated the GPL and why? The authors of C violate D's license by making it possible for C to be used in non-free host B? This is a possibility because they can't give an exception to the GPL (like one described in http://www.gnu.org/licenses/gpl-faq.html#GPLPluginsInNF or http://www.gnu.org/licenses/gpl-faq.html#LinkingOverControlledInterface) due to D's license terms. The authors of B violate C's and D's licenses by making it possible for C to be loaded in B? This is a possibility because http://www.gnu.org/licenses/gpl-faq.html#NFUseGPLPlugins disallows the mechanisms A uses for communication between the free and non-free modules. The authors of A, because the API may be used (and in this case, was used) for communication between GPL'd and non-free software? This would be extremely absurd. The user, because at the moment of loading C into B, he made a derived work of C? I think this is impossible, because he does not distribute it. But would the situation change if he decided to release a configuration file of B which makes B load C as a plugin? Nobody, because A counts as a 'system library', and both B and C directly interact only with A, not each other? In a sane world, this would happen... A concrete example of A could be some kind of audio (think LADSPA) or image processing API. However, I could find no such interface (that is free software, generic and is also implemented by commercial tools). A real-world example could also be quite enlightening.

    Read the article

  • Think It's Hard to Integrate the Front and Back Office? Think Again...

    - by ruth.donohue
    There's no doubt about it, fragmented customer information across application silos exists because integration isn't easy. It can be expensive. And it can be further complicated by proprietary architectures and vendors. But by leveraging Oracle Application Integration Architecture, Pillar Data Systems was able to integrate Oracle CRM On Demand with Oracle E-Business Suite in six weeks, reducing the time required to complete the integration by 50% and maintenance by 20% to 25%, freeing IT resources to focus on strategic initiatives. Learn more...

    Read the article

  • ASP.NET 4.0 and the Entity Framework 4 - Part 5 - Using the GridView and the EntityDataSource

    In this article, Vince demonstrates the usage of the GridView control to view, add, update, and delete records using the Entity Framework 4. After providing a short introduction, he provides the steps required to create a web site, entity data model, web form and template fields with the help of relevant source code and screenshots.

    Read the article

  • Oracle 12cR1: "What-If" evaluation of a crsctl command with Oracle Clusterware

    - by grantunez-Oracle
    In its new 12cR1 release, Oracle introduced a small new feature in Oracle Clusterware, but being small does not mean it is not very useful. In earlier versions, if we wanted to know what was going to happen when running a command with the crsctl tool, we had to do it in a test environment, because if we did not know what the command did, running it against production was very dangerous. Oracle Clusterware 12cR1 introduces "What-If" command evaluation in that tool, crsctl eval, which lets us see what would happen if the command ran, without actually running it. First, let's look at which resources we have up:

    [oracle@oel6-112-rac1 ~]$ crsctl stat res -t
    --------------------------------------------------------------------------------
    Name           Target  State        Server                   State details
    --------------------------------------------------------------------------------
    Local Resources
    --------------------------------------------------------------------------------
    ora.ASMNET1LSNR_ASM.lsnr
                   ONLINE  ONLINE       oel6-112-rac1            STABLE
                   ONLINE  ONLINE       oel6-112-rac2            STABLE
    ora.DATA.dg
                   ONLINE  ONLINE       oel6-112-rac1            STABLE
                   ONLINE  ONLINE       oel6-112-rac2            STABLE
    ora.LISTENER.lsnr
                   ONLINE  ONLINE       oel6-112-rac1            STABLE
                   ONLINE  ONLINE       oel6-112-rac2            STABLE
    ora.net1.network
                   ONLINE  ONLINE       oel6-112-rac1            STABLE
                   ONLINE  ONLINE       oel6-112-rac2            STABLE
    ora.ons
                   ONLINE  ONLINE       oel6-112-rac1            STABLE
                   ONLINE  ONLINE       oel6-112-rac2            STABLE
    ora.proxy_advm
                   ONLINE  ONLINE       oel6-112-rac1            STABLE
                   ONLINE  OFFLINE      oel6-112-rac2            CLEANING
    ora.LISTENER_SCAN1.lsnr
          1        ONLINE  ONLINE       oel6-112-rac2            STABLE
    ora.LISTENER_SCAN2.lsnr
          1        ONLINE  ONLINE       oel6-112-rac1            STABLE
    ora.LISTENER_SCAN3.lsnr
          1        ONLINE  ONLINE       oel6-112-rac1            STABLE
    ora.MGMTLSNR
          1        ONLINE  ONLINE       oel6-112-rac1            169.254.247.50 192.168.1.111,STABLE
    ora.asm
          1        ONLINE  ONLINE       oel6-112-rac1            STABLE
          2        ONLINE  ONLINE       oel6-112-rac2            STABLE
          3        OFFLINE OFFLINE                               STABLE
    ora.cvu
          1        ONLINE  ONLINE       oel6-112-rac1            STABLE
    ora.gns
          1        ONLINE  ONLINE       oel6-112-rac1            STABLE
    ora.gns.vip
          1        ONLINE  ONLINE       oel6-112-rac1            STABLE
    ora.mgmtdb
          1        ONLINE  ONLINE       oel6-112-rac1            Open,STABLE
    ora.oc4j
          1        ONLINE  ONLINE       oel6-112-rac1            STABLE
    ora.oel6-112-rac1.vip
          1        ONLINE  ONLINE       oel6-112-rac1            STABLE
    ora.oel6-112-rac2.vip
          1        ONLINE  ONLINE       oel6-112-rac2            STABLE
    ora.orcl.db
          1        OFFLINE OFFLINE      oel6-112-rac2            Instance Shutdown,STABLE
          2        ONLINE  ONLINE       oel6-112-rac1            Open,STABLE
    ora.scan1.vip
          1        ONLINE  ONLINE       oel6-112-rac2            STABLE
    ora.scan2.vip
          1        ONLINE  ONLINE       oel6-112-rac1            STABLE
    ora.scan3.vip
          1        ONLINE  ONLINE       oel6-112-rac1            STABLE

    Now what we are going to do is evaluate what would happen if, for example, the ASM resource were to fail on our node:

    [oracle@oel6-112-rac1 ~]$ crsctl eval fail resource ora.asm
    Stage Group 1:
    --------------------------------------------------------------------------------
    Stage Number Required Action
    --------------------------------------------------------------------------------
          1    N Create new group (Stage Group = 2)
               Y Resource 'ora.asm' (1/1) will be in state [ONLINE|INTERMEDIATE] on server [oel6-112-rac1]
               Y Resource 'ora.asm' (2/1) will be in state [ONLINE|INTERMEDIATE] on server [oel6-112-rac2]
    --------------------------------------------------------------------------------
    Stage Group 2:
    --------------------------------------------------------------------------------
    Stage Number Required Action
    --------------------------------------------------------------------------------
          1    N Resource 'ora.proxy_advm' (oel6-112-rac2) will be in state [ONLINE|INTERMEDIATE] on server [oel6-112-rac2]
    --------------------------------------------------------------------------------

    As we will see next, it is not the same if we decide to stop the resource; in this case we have to force it, since it is a resource that cannot be stopped without the "-f" option:

    [oracle@oel6-112-rac1 ~]$ crsctl eval stop resource ora.asm
    Stage Group 1:
    --------------------------------------------------------------------------------
    Stage Number Required Action
    --------------------------------------------------------------------------------
          1    N Error code [222] for entity [ora.asm]. Message is [CRS-2529: Unable to act on 'ora.asm' because that would require stopping or relocating 'ora.DATA.dg', but the force option was not specified].
    --------------------------------------------------------------------------------

    [oracle@oel6-112-rac1 ~]$ crsctl eval stop resource ora.asm -f
    Stage Group 1:
    --------------------------------------------------------------------------------
    Stage Number Required Action
    --------------------------------------------------------------------------------
          1    Y Resource 'ora.DATA.dg' (oel6-112-rac1) will be in state [OFFLINE]
               Y Resource 'ora.DATA.dg' (oel6-112-rac2) will be in state [OFFLINE]
               Y Resource 'ora.orcl.db' (2/1) will be in state [OFFLINE]
               Y Resource 'ora.proxy_advm' (oel6-112-rac1) will be in state [OFFLINE]
          2    Y Resource 'ora.asm' (1/1) will be in state [OFFLINE]
               Y Resource 'ora.asm' (2/1) will be in state [OFFLINE]
    --------------------------------------------------------------------------------

    As you can see, it is a small new feature, but quite useful for evaluating all your crsctl commands without impacting any of your resources. It lets you assess the impact the command you are about to run will have. You can find more information at: Utilizando el comando eval

    Read the article

  • There's More to SEO Than Google

    Though the 10-year search partnership deal between the former rivals was finalised in December, it still required approval from both the US Department of Justice and the European Commission. Microsoft and Yahoo! CEOs seem thrilled with the deal, with Microsoft's Steve Ballmer defining it as 'an exciting milestone' whilst Yahoo!'s Carol Bartz declared it a 'breakthrough search alliance.'

    Read the article

  • Part 1: What are EBS Customizations?

    - by volker.eckardt(at)oracle.com
    Everything that is not shipped as Oracle standard may be called a customization. Very often we differentiate between setup and customization, although setup can also be required when working with customizations. This highlights one of the first challenges, because someone needs to track the setup brought over with customizations, and this needs to be synchronized with the (standard) setup done manually. This is not only a tracking issue, but also a documentation issue. I will cover this in one of the following blogs in more detail.

    But back to the topic itself. Mainly our code pieces (Java, PL/SQL, SQL, shell scripts), custom objects (tables, views, packages etc.) and application objects (concurrent programs, lookups, forms, reports, OAF pages etc.) are treated as customizations. In general we define two types: customization by extension and customization by modification. For sure we like to minimize standard code modifications, but sometimes it is just not possible to provide a certain functionality without doing it. Keep in mind that the EBS provides a number of alternatives to modifications, just to mention some:

    Files in the file system: add your custom top before the standard top in the path.
    BI Publisher report: add a custom layout and disable the standard layout; yours will automatically be taken.
    Form/OAF change: use personalization or substitution.

    Using such techniques you are on the safe side regarding standard patches, but for sure a retest is always required! Many customizations grow over time: initially it was just one file, but before long we have 5, 10 or 15 files in our customization pack. The more files you have, the more important the installation order becomes. Last but not least, personalizations are also treated as customizations, although you may not use any deployment pack to transfer such personalizations (but you can). For OAF personalizations you can use iSetup; I have also enabled iSetup to allow Forms personalizations to be transported. Interfaces and conversion objects are quite often also categorized as customizations, and I promote this decision. Your development standards are related to all these kinds of custom code, whether we are exchanging data with users (via form or report) or with other systems (via inbound or outbound interface).

    To cover all these types of customizations, two acronyms have been defined: RICE and CEMLI.
    RICE = Reports, Interfaces, Conversions, and Extensions
    CEMLI = Customization, Extension, Modification, Localization, Integration
    The word CEMLI has been introduced by Oracle On Demand and is used within Oracle projects quite often, but RICE is also well known as an acronym. It doesn't matter which acronym you are using; the main task here is to classify and categorize your customizations to allow everyone to understand when you talk about RICE-211, CEMLI XXFI_BAST or XXOM_RPT_030. Side note: such references are not automatically object prefixes, but they are often used as such. I also plan to address this point in another blog.

    Thank you! Volker

    Read the article
