Search Results

Search found 17526 results on 702 pages for 'dynamic methods'.


  • Using Dynamic LINQ to get a filter for my Web API

    - by Espo
    We are considering using the Dynamic.CS LINQ sample included in the "Samples" directory of Visual Studio 2008 for our Web API project to allow clients to query our data. The interface would be something like this (in addition to the normal GET methods): public HttpResponseMessage List(string filter = null); The plan is to use the dynamic library to parse the "filter" variable and then execute the query against the DB. Any thoughts on whether this is a good idea? Is it a security problem?
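
    As a rough sketch of what is being described (not the asker's actual code), the compiled Dynamic.CS sample exposes a Where(string) extension for IQueryable<T>, so a controller could apply the client-supplied filter roughly like this; the context and entity names below are invented for illustration:

        using System.Linq;
        using System.Linq.Dynamic;   // namespace provided by the compiled Dynamic.CS sample
        using System.Net;
        using System.Net.Http;
        using System.Web.Http;

        public class CustomersController : ApiController
        {
            private readonly MyDataContext db = new MyDataContext();   // hypothetical LINQ-enabled context

            // GET api/customers?filter=City == "London" and Orders.Count > 5
            public HttpResponseMessage List(string filter = null)
            {
                IQueryable<Customer> query = db.Customers;

                if (!string.IsNullOrWhiteSpace(filter))
                {
                    // Dynamic LINQ parses the string into an expression tree,
                    // so the filter is translated to SQL rather than concatenated into it.
                    query = query.Where(filter);
                }

                return Request.CreateResponse(HttpStatusCode.OK, query.ToList());
            }
        }

    A point worth noting for the security question: because the string is parsed into an expression tree rather than spliced into SQL text, the usual injection vector is generally closed, but clients can still submit expensive or probing filter expressions.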

    Read the article

  • These Are Obvious Advantages and Disadvantages to Static As Well As Dynamic Sites

    Most of the websites developed these days are either static sites or dynamic sites, and each of these techniques has its own importance. Which one you opt for should depend on your requirements for your website. Generally, if you want a website that is going to generate revenue with PPC or affiliate programs, then you should go with a static site, but if you wish to create a website that will be more appealing to your audience, with more interactivity, then a dynamic site...

    Read the article

  • Using Dynamic SQL in Stored Procedures

    Dynamic SQL allows stored procedures to “write” or dynamically generate their SQL statements. The most common use case for dynamic SQL is stored procedures with optional parameters in the WHERE clause. These are typically called from reports or screens that have multiple, optional search criteria. This article describes how to write these types of stored procedures so they execute well and resist SQL injection attacks.
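
    The article's own examples live in T-SQL inside the procedure; purely as a hedged illustration of the same principle applied from C# application code (the table and column names are invented), the statement only grows by the criteria that were actually supplied, and every value still travels as a parameter rather than as concatenated text:

        using System;
        using System.Collections.Generic;
        using System.Data;
        using System.Data.SqlClient;
        using System.Text;

        public static class OrderSearch
        {
            // Two optional search criteria, as a report screen might supply them.
            public static DataTable Search(string connectionString, string customerId, DateTime? fromDate)
            {
                var sql = new StringBuilder("SELECT OrderID, CustomerID, OrderDate FROM dbo.Orders WHERE 1 = 1");
                var parameters = new List<SqlParameter>();

                if (!string.IsNullOrEmpty(customerId))
                {
                    sql.Append(" AND CustomerID = @CustomerID");   // clause added only when the criterion is supplied
                    parameters.Add(new SqlParameter("@CustomerID", customerId));
                }
                if (fromDate.HasValue)
                {
                    sql.Append(" AND OrderDate >= @FromDate");
                    parameters.Add(new SqlParameter("@FromDate", fromDate.Value));
                }

                using (var connection = new SqlConnection(connectionString))
                using (var command = new SqlCommand(sql.ToString(), connection))
                using (var adapter = new SqlDataAdapter(command))
                {
                    command.Parameters.AddRange(parameters.ToArray());
                    var table = new DataTable();
                    adapter.Fill(table);   // Fill opens and closes the connection itself
                    return table;
                }
            }
        }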

    Read the article

  • How to Optimize a Dynamic Website

    Internet technologies and e-commerce have advanced considerably and are still developing day by day. As a result, people prefer to have a dynamic website for their businesses or their online presence. So for webmasters or new SEOs whose experience is limited to doing SEO for simple static websites, it becomes necessary to also know about optimizing a dynamic website.

    Read the article

  • Should I Have a Static Or Dynamic Website?

    Are you a business currently looking to get a website built, but don't know whether to get a static or dynamic one? In this article, I explain what the difference is between a static and dynamic website, and the questions you need to ask yourself to help decide which one will be best for your business's website.

    Read the article

  • Fixed and dynamic IPs in ISC DHCPD lead to double lease

    - by GorillaPatch
    I would like to have a small dynamic address range, while most clients are assigned a fixed IP address. My dhcpd.conf looks like this: use-host-decl-names on; authoritative; allow client-updates; ddns-updates on; # settings for DHCP leases default-lease-time 3600; max-lease-time 86400; lease-file-name "/var/lib/dhcpd/dhcpd.leases"; subnet 192.168.11.0 netmask 255.255.255.0 { ddns-updates on; pool { # IP range which will be assigned statically range 192.168.11.1 192.168.11.240; deny all clients; } pool { # small dynamic range range 192.168.11.241 192.168.11.254; # used for temporary devices } } group { host pc1 { hardware ethernet xx:xx:xx:xx:xx:xx; fixed-address 192.168.11.11; } } The motivation for the pool declaration with "deny all clients" comes from the ISC DHCPD homepage http://www.isc.org/files/auth.html. This allows hosts to first be added to the network, where they receive a temporary IP from the 241-254 address range, and an explicit host declaration to be written later. Upon the next connect they will receive the right configuration. The problem is that I am getting error messages that 192.168.11.13 has both a dynamic and a static lease. I am a bit confused, as I expected the pool declaration with "deny all clients" would not count as dynamic. Dynamic and static leases present for 192.168.11.13. Remove host declaration pc1 or remove 192.168.11.13 from the dynamic address pool for 192.168.11.0/24 Is there a way to have the DHCP server send a DHCPNAK to clients that have a host statement, and still retain this dynamic range?

    Read the article

  • SSH Public Key - No supported authentication methods available (server sent public key)

    - by F21
    I have a 12.10 server set up in a virtual machine with its network set to bridged (essentially it will be seen as a computer connected to my switch). I installed opensshd via apt-get and was able to connect to the server using putty with my username and password. I then set about trying to get it to use public/private key authentication. I did the following: Generated the keys using PuttyGen. Moved the public key to /etc/ssh/myusername/authorized_keys (I am using encrypted home directories). Set up sshd_config like so: PubkeyAuthentication yes AuthorizedKeysFile /etc/ssh/%u/authorized_keys StrictModes no PasswordAuthentication no UsePAM yes When I connect using putty or WinSCP, I get an error saying No supported authentication methods available (server sent public key). If I run sshd in debug mode, I see: PAM: initializing for "username" PAM: setting PAM_RHOST to "192.168.1.7" PAM: setting PAM_TTY to "ssh" userauth-request for user username service ssh-connection method publickey [preauth] attempt 1 failures 0 [preauth] test whether pkalg/pkblob are acceptable [preauth] Checking blacklist file /usr/share/ssh/blacklist.RSA-1023 Checking blacklist file /etc/ssh/blacklist.RSA-1023 temporarily_use_uid: 1000/1000 (e=0/0) trying public key file /etc/ssh/username/authorized_keys fd4 clearing O_NONBLOCK restore_uid: 0/0 Failed publickey for username from 192.168.1.7 port 14343 ssh2 Received disconnect from 192.168.1.7: 14: No supported authentication methods available [preauth] do_cleanup [preauth] monitor_read_log: child log fd closed do_cleanup PAM: cleanup Why is this happening and how can I fix this?

    Read the article

  • The Most Effective Learning Methods – The Results

    - by BuckWoody
    Yesterday I posted a blank graph and asked where you thought the labels should go for the most effective learning methods, according to a study they read to me and other teachers here at the University of Washington. Here are the labels in the correct order according to that study – and remember, "Teaching" here means one student explaining something to another: It isn't really that surprising to learn that we comprehend best when we have to teach a subject to someone else, and you can see that the "participation factor" is the key in the learning methods. The real shocker was the retention level at the various learning modes – lecture was down near the single digits! What does this have to do with databases or the DBA? Well, we all need to learn new things – and many of us are asked to teach others a new task. To be a good teacher, we have to know how a student learns best – and of course that makes us better students as well. So next time you're asked to transfer some knowledge to someone else, take a look at this chart first – and let me know how it affected your knowledge transfer.

    Read the article

  • Dynamic Grouping and Columns

    - by Tim Dexter
    Some good collaboration between myself and Kan Nishida (Oracle BIP Consulting) over at bipconsulting on a question that came in yesterday to an internal mailing list. Is there a way to allow columns to be placed into a template dynamically? This would be similar to the Answers column selector. A customer has said Crystal can do this and I am trying to see how BI Pub can do the same. Example: a report has Regions as a dimension in a table; they want the user to select a parameter that will insert either Units or Dollars without having to create multiple templates. Now whether Crystal can actually do it or not is another question, but can Publisher? Yes we can! Kan took the first stab. His approach was to allow swapping out columns in a table in the report. Some quick steps: 1. Create a parameter from the BIP server UI 2. Declare the parameter in the RTF template You can check this post to see how you can declare the parameter from the server. http://bipconsulting.blogspot.com/2010/02/how-to-pass-user-input-values-to-report.html 3. Use the parameter value to condition whether a particular column needs to be displayed or not. You can use the <?if@column:.....?> syntax for a column-level IF condition. The if@column syntax is covered in the user documentation. This would allow a developer to create a report with the parameter, or multiple parameters, to allow the user to pick a column to be included in the report. I took a slightly different tack: with the mention of the column selector in the Answers report, I took that to mean that the user wanted to select more of a dimensional column and then have the report recalculate all its totals and subtotals based on that selected column. This is a little bit more involved and needs some smart XSL and XPATH expressions, but it is still very doable. The user can select a column as a parameter that is passed to the template rather than the query. The parameter value that is actually passed is the element name that you want to regroup the data by. Inside the template we then reference that parameter value in our for-each-group loop. That's where we need the tricky XSL/XPATH code to get the regrouping to happen. At this juncture, I need to tip my hat to Klaus for his article on dynamic sorting that he wrote back in 2006. I basically took his sorting code and applied it to the for-each loop. You can follow both of Kan's first two steps above, i.e. Create a parameter from the BIP server UI - this just needs to be based on a 'list' type list of values with name/value pairs e.g. Department/DEPARTMENT_NAME, Job/JOB_TITLE, etc. The user picks the 'friendly' value and the server passes the element name to the template. Declare the parameter in the RTF template - been here before lots of times, right? <?param@begin:group1;'"DEPARTMENT_NAME"'?> I have used a default value so that I can test the functionality inside the template builder (notice the single and double quotes). The next step is to use the template builder to build a re-grouped report layout. It does not matter if it's hard coded right now; we will add in the dynamic piece next. Once you have a functioning template that is re-grouping correctly, open up the for-each-group field and modify it to use the parameter: <?for-each-group:ROW;./*[name(.) = $group1]?> 'group1' is my grouping parameter, declared above. We need the XPATH expression to find the column in the XML structure that matches the one passed by the parameter, i.e. the one we want to group by. It's essentially looking through the data tree for a match.
We can show the actual grouping value in the report output with a similar XPATH expression <?./*[name(.) = $group1]?> In my example, I took things a little further so that I could have a dynamic label for the parameter value. For instance, if I am using MANAGER as the parameter, I want to show: Manager: Tim Dexter My XML elements are readable, e.g. DEPARTMENT_NAME. It's a simple case of replacing the underscore with a space and then 'initcapping' the result: <?xdoxslt:init_cap(translate($group1,'_',' '))?> With this in place, the user can now select a grouping column in the BIP report viewer and the layout will re-group the data and any calculations based on that column. I built a group-above report but you could equally build the group-left version to truly mimic the Answers column selector. If you are interested you can get an example report, sample data and layout template here. Of course, you can combine Klaus' dynamic sorting, Kan's conditional column approach and this dynamic grouping to build a real kick-ass report for users that will keep them happy for hours...

    Read the article

  • C#/.NET Little Wonders: Fun With Enum Methods

    - by James Michael Hare
    Once again let's dive into the Little Wonders of .NET, those small things in the .NET languages and BCL classes that make development easier by increasing readability, maintainability, and/or performance. So probably every one of us has used an enumerated type at one time or another in a C# program. The enumerated types we create are a great way to represent that a value can be one of a set of discrete values (or a combination of those values in the case of bit flags). But the power of enum types goes far beyond simple assignment and comparison; there are many methods in the Enum class (that all enum types "inherit" from) that can give you even more power when dealing with them. IsDefined() – check if a given value exists in the enum Are you reading a value for an enum from a data source, but are unsure if it is actually a valid value or not? Casting won't tell you this, and Parse() isn't guaranteed to balk either if you give it an int or a combination of flags. So what can we do? Let's assume we have a small enum like this for result codes we want to return back from our business logic layer: 1: public enum ResultCode 2: { 3: Success, 4: Warning, 5: Error 6: } In this enum, Success will be zero (unless given another value explicitly), Warning will be one, and Error will be two. So what happens if we have code like this where perhaps we're getting the result code from another data source (could be a database, could be a web service, etc.)? 1: public ResultCode PerformAction() 2: { 3: // set up and call some method that returns an int. 4: int result = ResultCodeFromDataSource(); 5: 6: // this will succeed even if result is < 0 or > 2. 7: return (ResultCode) result; 8: } So what happens if result is –1 or 4? Well, the cast does not fail, so what we end up with would be an instance of a ResultCode that would have a value that's outside of the bounds of the enum constants we defined. This means if you had a block of code like: 1: switch (result) 2: { 3: case ResultCode.Success: 4: // do success stuff 5: break; 6: 7: case ResultCode.Warning: 8: // do warning stuff 9: break; 10: 11: case ResultCode.Error: 12: // do error stuff 13: break; 14: } you would hit none of these blocks (which is a good argument for always having a default in a switch, by the way). So what can you do? Well, there is a handy static method called IsDefined() on the Enum class which will tell you if an enum value is defined. 1: public ResultCode PerformAction() 2: { 3: int result = ResultCodeFromDataSource(); 4: 5: if (!Enum.IsDefined(typeof(ResultCode), result)) 6: { 7: throw new InvalidOperationException("Enum out of range."); 8: } 9: 10: return (ResultCode) result; 11: } In fact, this is often recommended after you Parse() or cast a value to an enum, as there are ways for values to get past these methods that may not be defined. If you don't like the syntax of passing in the type of the enum, you could clean it up a bit by creating an extension method instead that would allow you to call IsDefined() off any instance of the enum: 1: public static class EnumExtensions 2: { 3: // helper method that tells you if an enum value is defined for its enumeration 4: public static bool IsDefined(this Enum value) 5: { 6: return Enum.IsDefined(value.GetType(), value); 7: } 8: } HasFlag() – an easier way to see if a bit (or bits) are set Most of us who came from the land of C programming have had to deal extensively with bit flags many times in our lives.
As such, using bit flags may be almost second nature (for a quick refresher on bit flags in enum types see one of my old posts here). However, in higher-level languages like C#, the need to manipulate individual bit flags is somewhat diminished, and the code to check for bit flag enum values may be obvious to an advanced developer but cryptic to a novice developer. For example, let's say you have an enum for a messaging platform that contains bit flags: 1: // usually, we pluralize flags enum type names 2: [Flags] 3: public enum MessagingOptions 4: { 5: None = 0, 6: Buffered = 0x01, 7: Persistent = 0x02, 8: Durable = 0x04, 9: Broadcast = 0x08 10: } We can combine these bit flags using the bitwise OR operator (the '|' pipe character): 1: // combine bit flags using 2: var myMessenger = new Messenger(MessagingOptions.Buffered | MessagingOptions.Broadcast); Now, if we wanted to check the flags, we'd have to test them using the bitwise AND operator (the '&' character): 1: if ((options & MessagingOptions.Buffered) == MessagingOptions.Buffered) 2: { 3: // do code to set up buffering... 4: // ... 5: } While the '|' for combining flags is easy enough to read for advanced developers, the '&' test tends to be easy for novice developers to get wrong. First of all you have to AND the flag combination with the value, and then typically you should test against the flag combination itself (and not just for a non-zero)! This is because the flag combination you are testing with may combine multiple bits, in which case if only one bit is set, the result will be non-zero but not necessarily all desired bits! Thank goodness in .NET 4.0 they gave us the HasFlag() method. This method can be called from an enum instance to test to see if a flag is set, and best of all you can avoid writing the bitwise logic yourself. Not to mention it will be more readable to a novice developer as well: 1: if (options.HasFlag(MessagingOptions.Buffered)) 2: { 3: // do code to set up buffering... 4: // ... 5: } It is much more concise and unambiguous, thus increasing your maintainability and readability. It would be nice to have a corresponding SetFlag() method, but unfortunately generic types don't allow you to specialize on Enum, which makes it a bit more difficult. It can be done but you have to do some conversions to numeric and then back to the enum, which makes it less of a payoff than having the HasFlag() method. But if you want to create it for symmetry, it would look something like this: 1: public static T SetFlag<T>(this Enum value, T flags) 2: { 3: if (!value.GetType().IsEquivalentTo(typeof(T))) 4: { 5: throw new ArgumentException("Enum value and flags types don't match."); 6: } 7: 8: // yes this is ugly, but unfortunately we need to use an intermediate boxing cast 9: return (T)Enum.ToObject(typeof (T), Convert.ToUInt64(value) | Convert.ToUInt64(flags)); 10: } Note that since the enum types are value types, we need to assign the result to something (much like string.Trim()). Also, you could chain several SetFlag() operations together or create one that takes a variable arg list if desired. Parse() and ToString() – transitioning from string to enum and back Sometimes, you may want to be able to parse an enum from a string or convert it to a string - Enum has methods built in to let you do this. Now, many may already know this, but may not appreciate how much power is in these two methods.
For example, if you want to parse a string as an enum, it's easy and works just like you'd expect from the numeric types: 1: string optionsString = "Persistent"; 2: 3: // can use Enum.Parse, which throws if it finds something it doesn't like... 4: var result = (MessagingOptions)Enum.Parse(typeof (MessagingOptions), optionsString); 5: 6: if (result == MessagingOptions.Persistent) 7: { 8: Console.WriteLine("It worked!"); 9: } Note that Enum.Parse() will throw if it finds a value it doesn't like. But the values it likes are fairly flexible! You can pass in a single value, or a comma-separated list of values for flags, and it will parse them all and set all bits: 1: // for string values, can have one, or comma separated. 2: string optionsString = "Persistent, Buffered"; 3: 4: var result = (MessagingOptions)Enum.Parse(typeof (MessagingOptions), optionsString); 5: 6: if (result.HasFlag(MessagingOptions.Persistent) && result.HasFlag(MessagingOptions.Buffered)) 7: { 8: Console.WriteLine("It worked!"); 9: } Or you can pass in a string containing a number that represents a single value or combination of values to set: 1: // 3 is the combination of Buffered (0x01) and Persistent (0x02) 2: var optionsString = "3"; 3: 4: var result = (MessagingOptions) Enum.Parse(typeof (MessagingOptions), optionsString); 5: 6: if (result.HasFlag(MessagingOptions.Persistent) && result.HasFlag(MessagingOptions.Buffered)) 7: { 8: Console.WriteLine("It worked again!"); 9: } And, if you really aren't sure if the parse will work, and don't want to handle an exception, you can use TryParse() instead: 1: string optionsString = "Persistent, Buffered"; 2: MessagingOptions result; 3: 4: // try parse returns true if successful, and takes an out parm for the result 5: if (Enum.TryParse(optionsString, out result)) 6: { 7: if (result.HasFlag(MessagingOptions.Persistent) && result.HasFlag(MessagingOptions.Buffered)) 8: { 9: Console.WriteLine("It worked!"); 10: } 11: } So we covered parsing a string to an enum; what about reversing that and converting an enum to a string? The ToString() method is the obvious and most basic choice for most of us, but did you know you can pass a format string for enum types that dictates how they are written as a string? 1: MessagingOptions value = MessagingOptions.Buffered | MessagingOptions.Persistent; 2: 3: // general format, which is the default, 4: Console.WriteLine("Default : " + value); 5: Console.WriteLine("G (default): " + value.ToString("G")); 6: 7: // Flags format, even if type does not have Flags attribute. 8: Console.WriteLine("F (flags) : " + value.ToString("F")); 9: 10: // integer format, value as number. 11: Console.WriteLine("D (num) : " + value.ToString("D")); 12: 13: // hex format, value as hex 14: Console.WriteLine("X (hex) : " + value.ToString("X")); Which displays: 1: Default : Buffered, Persistent 2: G (default): Buffered, Persistent 3: F (flags) : Buffered, Persistent 4: D (num) : 3 5: X (hex) : 00000003 Now, you may not really see a difference here between G and F because I used a [Flags] enum; the difference is that the "F" option treats the enum as if it were flags even if the [Flags] attribute is not present. Let's take a non-flags enum like the ResultCode used earlier: 1: // yes, we can do this even if it is not a [Flags] enum.
2: ResultCode value = ResultCode.Warning | ResultCode.Error; And if we run that through the same formats again we get: 1: Default : 3 2: G (default): 3 3: F (flags) : Warning, Error 4: D (num) : 3 5: X (hex) : 00000003 Notice that since we had multiple values combined, but it was not a [Flags] marked enum, the G and default format gave us a number instead of a value name.  This is because the value was not a valid single-value constant of the enum.  However, using the F flags format string, it broke out the value into its component flags even though it wasn’t marked [Flags]. So, if you want to get an enum to display appropriately for whether or not it has the [Flags] attribute, use G which is the default.  If you always want it to attempt to break down the flags, use F.  For numeric output, obviously D or  X are the best choice depending on whether you want decimal or hex. Summary Hopefully, you learned a couple of new tricks with using the Enum class today!  I’ll add more little wonders as I think of them and thanks for all the invaluable input!   Technorati Tags: C#,.NET,Little Wonders,Enum,BlackRabbitCoder

    Read the article

  • Unit testing internal methods in a strongly named assembly/project

    - by Rohit Gupta
    If you need to create unit tests for internal methods within an assembly in Visual Studio 2005 or greater, then you need to add an entry in the AssemblyInfo.cs file of the assembly for which you are creating the unit tests. For example, if you need to create tests for an assembly named FincadFunctions.dll, and this assembly contains internal/friend methods which you need to write unit tests for, then we add an entry in the FincadFunctions.dll's AssemblyInfo.cs file like so: 1: [assembly: System.Runtime.CompilerServices.InternalsVisibleTo("FincadFunctionsTests")] where FincadFunctionsTests is the name of the unit test project which contains the unit tests. However, if FincadFunctions.dll is a strongly named assembly then you will get the following error when compiling the FincadFunctions.dll assembly: Friend assembly reference "FincadFunctionsTests" is invalid. Strong-name assemblies must specify a public key in their InternalsVisibleTo declarations. Thus, to add a public key token to the InternalsVisibleTo declaration, do the following: You need the .snk file that was used to strong-name the FincadFunctions.dll assembly. You can extract the public key from this .snk with the sn.exe tool from the .NET SDK. First we extract just the public key from the key pair (.snk) file into another .snk file: sn -p test.snk test.pub Then we ask for the value of that public key (note we need the long hex key, not the short public key token): sn -tp test.pub We end up getting a super LONG string of hex, but that's just what we want, the public key value of this key pair. We add it to the strongly named project "FincadFunctions.dll" that we want to expose our internals from. Before, it looked like: 1: [assembly: System.Runtime.CompilerServices.InternalsVisibleTo("FincadFunctionsTests")] Now it looks like: 1: [assembly: System.Runtime.CompilerServices.InternalsVisibleTo("FincadFunctionsTests, 2: PublicKey=002400000480000094000000060200000024000052534131000400000100010011fdf2e48bb")] And we're done. Hope this helps.

    Read the article

  • Are Get-Set methods a violation of Encapsulation?

    - by Dipan Mehta
    In an object-oriented framework, one believes there must be strict encapsulation, so internal variables are not to be exposed to outside applications. But in many codebases we see tons of get/set methods which essentially open a formal window to modify internal variables that were originally intended to be strictly off-limits. Isn't that a clear violation of encapsulation? How widespread is this practice, and what should be done about it? EDIT: I have seen discussions with two opinions at the extremes: on one hand, people believe that because the get/set interface is the means used to modify any parameter, it does not qualify as violating encapsulation. On the other hand, there are people who believe it does violate it. Here is my point. Take the case of a UDP server with methods get_URL() and set_URL(). The URL (to listen on) is genuinely a parameter that the application needs to supply and modify. However, in the same case, properties like get_byte_buffer_length() and set_byte_buffer_length() clearly point to values which are quite internal. Doesn't that imply a violation of encapsulation? In fact, even get_byte_buffer_length(), which otherwise doesn't modify the object, still misses the point of encapsulation, because now there is certainly an app which knows I have an internal buffer! Tomorrow, if the internal buffer is replaced by something like a *packet_list*, the method becomes dysfunctional. Is there a universal yes/no for get/set methods? Is there any strong guideline that tells programmers (especially the junior ones) when it violates encapsulation and when it does not?
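
    As a rough C# illustration of the distinction being drawn (the class and member names are invented): the listen address is part of the object's contract, while the receive buffer is an implementation detail that a getter/setter pair would leak to callers:

        using System;

        public class UdpServer
        {
            // Part of the contract: callers legitimately supply and change where the server listens.
            public Uri ListenUrl { get; set; }

            // Implementation detail: kept private so it could later become a packet list,
            // a pooled buffer, etc. without breaking any caller.
            private byte[] receiveBuffer = new byte[8192];

            // If tuning really is needed, expose the intent rather than the field.
            public void ConfigureForLargeDatagrams()
            {
                receiveBuffer = new byte[64 * 1024];
            }
        }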

    Read the article

  • Discovery methods

    - by Owen Allen
    In Ops Center, asset discovery is a process in which the software determines what assets exist in your environment. You can't monitor an asset, or do anything to it through Ops Center, until it's discovered. I've seen a couple of questions about how to discover various types of asset, so I thought I'd explain the discovery methods and what they each do. Find Assets - This discovery method searches for service tags on all known networks. Service tags are small files on some hardware and operating systems that provide basic identification info. Once a service tag has been found, you provide credentials to manage the asset. This method can discover assets quickly, but only if the target assets have service tags. Add Assets with discovery profile - This method lets you specify targets by providing IP addresses, IP ranges, or hostnames, as well as the credentials needed to connect to and manage these assets. You can create discovery profiles for any type of asset. Declare asset - This method lets you specify the details of a server, with or without a configured service processor. You can then use Ops Center to install a new operating system or configure the SP. This method works well for new hardware. These methods are all discussed in more detail in the Asset Management chapter of the Feature Reference guide.

    Read the article

  • Find methods related to testcases in Java

    - by user3623718
    I want to automatically change some methods in a program. These methods contain compiler errors, and my program aims to fix them. After fixing the compiler errors I need to run the test cases related to the changed method (or class) to know whether it is correct and, if not, which test cases failed. As the programs under investigation are very big, I only need to run the test cases related to the changes. For example, if I change one method, then I only need to run the test cases related to that method. Therefore, what I need is to be able to programmatically find the test cases related to each method and class. It would also be useful if there were a tool that could do that for me, for example a tool which creates a matrix showing which test case is related to which method(s). One easy way to do that is to run all test cases and record the functions they access. However, the problem is that at the beginning the input program contains compiler errors, so it is not possible to run the test cases. Please let me know the best way to do this. An API or a tool that I can use programmatically would be best for me.

    Read the article

  • BIP 11g Dynamic SQL

    - by Tim Dexter
    Back in the 10g release, if you wanted something beyond the standard query for your report extract, you needed to break out your favorite text editor. You gotta love 'vi' and hate emacs, am I right? And get to building a data template; they were/are lovely to write, such fun ... not! It's not fun writing them by hand, but you do get to do some cool stuff around the data extract, including dynamic SQL. By that I mean the ability to add content dynamically to your query at runtime. With 11g, we spoiled you with a visual builder: no more vi or notepad sessions, just a friendly drag-and-drop interface allowing you to build hierarchical data sets, calculated columns, summary columns, etc. You can still create the dynamic SQL statements; it's not so well documented right now, so in lieu of doc updates here's the skinny. If you check out the 10g process to create dynamic SQL in the docs, you'll see you need to create a data trigger function where you assign the dynamic SQL to a global variable that's matched in your report SQL. In 11g, the process is really the same; BI Publisher just provides a bit more help to define what trigger code needs to be called. You still need to create the function and place it inside a package in the db. Here's a simple PL/SQL package with the 'beforedata' function trigger. Spec create or replace PACKAGE BIREPORTS AS whereCols varchar2(2000); FUNCTION beforeReportTrig return boolean; end BIREPORTS; Body create or replace PACKAGE BODY BIREPORTS AS FUNCTION beforeReportTrig return boolean AS BEGIN whereCols := ' and d.department_id = 100'; RETURN true; END beforeReportTrig; END BIREPORTS; You'll notice the additional where clause (whereCols, declared as a public variable) is hard coded. I'll cover parameterizing that in my next post. If you cannot wait, check the 10g docs for an example. I have my package compiling successfully in the db. Now, onto the BIP data model definition. 1. Create a new data model and go ahead and create your query(s) as you would normally. 2. In the query dialog box, add in the variables you want replaced at runtime using an ampersand rather than a colon, e.g. &whereCols.   select     d.DEPARTMENT_NAME, ...  from    "OE"."EMPLOYEES" e,     "OE"."DEPARTMENTS" d  where   d."DEPARTMENT_ID"= e."DEPARTMENT_ID" &whereCols   Note that 'whereCols' matches the global variable name in our package. When you click OK to clear the dialog, you'll be asked for a default value for the variable; just use ' and 1=1'. That leading space is important to keep the SQL valid, i.e. required whitespace. This value will be used for the where clause in case it's not set by the function code. 3. Now click on the Event Triggers tree node and create a new trigger of the type Before Data. Type in the default package name, in my example 'BIREPORTS'. Then hit the update button to get BIP to fetch the valid functions. In my case I get to see the following: select the BEFOREREPORTTRIG function (or your function's name) and shuttle it across. 4. Save your data model and now test it. For now, you can update the where clause via the PL/SQL package. Next time ... parameterizing the dynamic clause.

    Read the article

  • share code between check and process methods

    - by undu
    My job is to refactor an old library for GIS vector data processing. The main class encapsulates a collection of building outlines, and offers different methods for checking data consistency. Those checking functions have an optional parameter that allows some processing to be performed. For instance: std::vector<Point> checkIntersections(int process_mode = 0); This method tests if some building outlines are intersecting, and returns the intersection points. But if you pass a non-zero argument, the method will modify the outlines to remove the intersection. I think it's pretty bad (at the call site, a reader not familiar with the code base will assume that a method called checkSomething only performs a check and doesn't modify data) and I want to change this. I also want to avoid code duplication, as the check and process methods are mostly similar. So I was thinking of something like this: // a private worker std::vector<Point> workerIntersections(int process_mode = 0) { // it's the equivalent of the current checkIntersections, it may perform // a process depending on process_mode } // public interfaces for check and process std::vector<Point> checkIntersections() /* const */ { return workerIntersections(0); } std::vector<Point> processIntersections(int process_mode /*I have different process modes*/) { return workerIntersections(process_mode); } But that forces me to break const correctness, as workerIntersections is a non-const method. How can I separate check and process, avoiding code duplication and keeping const-correctness?

    Read the article

  • Good practice to create extension methods that apply to System.Object?

    - by Christian
    Hello, I'm wondering whether I should create extension methods that apply at the object level or whether they should be located at a lower point in the class hierarchy. What I mean is something along the lines of: public static string SafeToString(this Object o) { if (o == null || o is System.DBNull) return ""; else { if (o is string) return (string)o; else return ""; } } public static int SafeToInt(this Object o) { if (o == null || o is System.DBNull) return 0; else { if (o.IsNumeric()) return Convert.ToInt32(o); else return 0; } } //same for double.. etc I wrote those methods since I have to deal a lot with database data (from the OleDbDataReader) that can be null (it shouldn't be, though) because the underlying database is unfortunately very liberal with columns that may be null. And to make my life a little easier, I came up with those extension methods. What I'd like to know is whether this is good style, acceptable style or bad style. I kinda have my worries about it since it kinda "pollutes" the Object-class. Thank you in advance & Best Regards :) Christian P.S. I didn't tag it as "subjective" intentionally.
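
    For context, this is roughly how such extensions end up being used against the loosely typed reader values mentioned above (the query and column names are invented, and SafeToInt as posted also relies on an IsNumeric() extension that is not shown):

        using System;
        using System.Data.OleDb;

        public static class OrderPrinter
        {
            public static void PrintOrders(OleDbCommand command)   // e.g. SELECT CUSTOMER_NAME, QUANTITY FROM ORDERS
            {
                using (OleDbDataReader reader = command.ExecuteReader())
                {
                    while (reader.Read())
                    {
                        // Null or DBNull columns simply become "" / 0 instead of throwing.
                        string name = reader["CUSTOMER_NAME"].SafeToString();
                        int quantity = reader["QUANTITY"].SafeToInt();
                        Console.WriteLine("{0}: {1}", name, quantity);
                    }
                }
            }
        }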

    Read the article

  • master pages, form pages, form runat=server > all onclick methods on master page?

    - by b0x0rz
    The problem I have is that I have multiple nested master pages: level 1: global (header, footer, login, navigation, etc...); level 2: specific (search pages, account pages, etc...); level 3: the page itself. Now, since only one form can have runat=server, I put the form on the global page (so I can handle things like login, feedback, etc...). With this solution I'd also have to put the level 3 (see above) methods, such as search, on the level 1 master page, but this will lead to that page being heavy (for development) with code from all over, even code that is used on a single page only (the change email form, for example). Is there any way to delegate such methods (for example: ChangeEMail) from level 1 (the global master page) to level 3 (the single page itself)? To be even more clear: I do NOT want to have the ChangeEMail method in the global master page code-behind, but would like to 'MOVE' it somehow to the only page that will actually use it. The reason it currently has to be on the global master is that the global master has the form runat=server, and there can be only one of those per aspx page. This way it will be easier (more logical) to structure the code. Thnx (hope I explained it correctly). I have searched but did not find any general info on handling this case; usually the answer is: have all the methods on the master page, but I don't like it. So ANY way of moving it to the specific page would be awesome. Thnx

    Read the article

  • Use properties or methods to expose business rules in C#?

    - by Val
    I'm writing a class to encapsulate some business rules, each of which is represented by a boolean value. The class will be used in processing an InfoPath form, so the rules get the current program state by looking up values in a global XML data structure using XPath operations. What's the best (most idiomatic) way to expose these rules to callers -- properties or public methods? Call using properties Rules rules = new Rules(); if ( rules.ProjectRequiresApproval ) { // get approval } else { // skip approval } Call using methods Rules rules = new Rules(); if ( rules.ProjectRequiresApproval() ) { // get approval } else { // skip approval } Rules class exposing rules as properties public class Rules { private int _amount; private int threshold = 100; public Rules() { _amount = someExpensiveXpathOperation; } // rule property public bool ProjectRequiresApproval { get { return _amount < threshold; } } } Rules class exposing rules as methods public class Rules { private int _amount; private int threshold = 100; public Rules() { _amount = someExpensiveXpathOperation; } // rule method public bool ProjectRequiresApproval() { return _amount < threshold; } } What are the pros and cons of one over the other?

    Read the article

  • Do I have to implement Add/Delete methods in my NHibernate entities?

    - by Lisa
    This is a sample from the Fluent NHibernate website. Compared to the Entity Framework, I have ADD methods in my POCO in this NHibernate code sample. With EF I did context.Add or context.AddObject etc...; the context had the methods to put one entity into the other entity's collection! Do I really have to implement Add/Delete/Update methods (I do not mean the real database CRUD operations!) in an NHibernate entity? public class Store { public virtual int Id { get; private set; } public virtual string Name { get; set; } public virtual IList<Product> Products { get; set; } public virtual IList<Employee> Staff { get; set; } public Store() { Products = new List<Product>(); Staff = new List<Employee>(); } public virtual void AddProduct(Product product) { product.StoresStockedIn.Add(this); Products.Add(product); } public virtual void AddEmployee(Employee employee) { employee.Store = this; Staff.Add(employee); } }
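
    For comparison with the EF workflow described above, a small usage sketch (assuming an open NHibernate ISession, cascades mapped, and a Product class with a Name property and a StoresStockedIn collection, as the sample implies): the Add methods only keep both ends of the in-memory association consistent, while persistence still goes through the session:

        using NHibernate;

        public static class StoreSetup
        {
            public static void CreateStore(ISession session)
            {
                var store = new Store { Name = "Main Street Store" };
                var apples = new Product { Name = "Apples" };

                // Wires up both sides of the association in memory; nothing here touches the database.
                store.AddProduct(apples);

                using (ITransaction tx = session.BeginTransaction())
                {
                    session.Save(store);   // NHibernate persists the graph according to the mappings
                    tx.Commit();
                }
            }
        }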

    Read the article

  • Dynamic LINQ in an Assembly Near By

    - by Ricardo Peres
    You may recall my post on Dynamic LINQ. I said then that you had to download Microsoft's samples and compile the DynamicQuery project (or just grab my copy), but there's another way. It turns out Microsoft included the Dynamic LINQ classes in the System.Web.Extensions assembly, not the one from ASP.NET 2.0, but the one that was included with ASP.NET 3.5! The only problem is that all types are private. Here's how to use it: Assembly asm = typeof(UpdatePanel).Assembly; Type dynamicExpressionType = asm.GetType("System.Web.Query.Dynamic.DynamicExpression"); MethodInfo parseLambdaMethod = dynamicExpressionType.GetMethods(BindingFlags.Public | BindingFlags.Static).Where(m => (m.Name == "ParseLambda") && (m.GetParameters().Length == 2)).Single().MakeGenericMethod(typeof(DateTime), typeof(Boolean)); Func<DateTime, Boolean> filterExpression = (parseLambdaMethod.Invoke(null, new Object [] { "Year == 2010", new Object [ 0 ] }) as Expression<Func<DateTime, Boolean>>).Compile(); List<DateTime> list = new List<DateTime> { new DateTime(2010, 1, 1), new DateTime(1999, 1, 12), new DateTime(1900, 10, 10), new DateTime(1900, 2, 20), new DateTime(2012, 5, 5), new DateTime(2012, 1, 20) }; IEnumerable<DateTime> filteredDates = list.Where(filterExpression);

    Read the article

  • Dynamic endpoint binding in Oracle SOA Suite by Cattle Crew

    - by JuergenKress
    Why is dynamic endpoint binding needed? Sometimes a BPEL process instance has to determine at run-time which implementation of a web service interface is to be called. We'll show you how to achieve that using dynamic endpoint binding. Let's imagine the following scenario: we're running a car rental agency called RYLC (Rent Your Legacy Car) which operates in different locations. The process of renting a car is basically identical for all locations except for determining which cars are currently available. This is depicted in the following diagram: There are three different implementations of the GetAvailableCars service. But how can we achieve calling them dynamically at run-time using Oracle SOA Suite? How to dynamically set the service endpoint There are just a couple of implementation steps we need to perform to enable dynamic endpoint binding: create a new SOA project in JDeveloper; add a CarRental BPEL process; add an external reference to the GetAvailableCars service within the composite; create a DVM file containing the URIs by which the services for the different locations can be accessed; and set the endpointURI property on the Invoke component calling the GetAvailableCars service (the value is taken from the DVM file). Read the complete article here. SOA & BPM Partner Community For regular information on Oracle SOA Suite become a member in the SOA & BPM Partner Community. For registration please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account please contact the Oracle Partner Business Center.

    Read the article

  • How to deal with static utility classes when designing for testability

    - by Benedikt
    We are trying to design our system to be testable and in most parts developed using TDD. Currently we are trying to solve the following problem: In various places it is necessary for us to use static helper methods like ImageIO and URLEncoder (both standard Java API) and various other libraries that consist mostly of static methods (like the Apache Commons libraries). But it is extremely difficult to test those methods that use such static helper classes. I have several ideas for solving this problem: Use a mock framework that can mock static classes (like PowerMock). This may be the simplest solution but somehow feels like giving up. Create instantiable wrapper classes around all those static utilities so they can be injected into the classes that use them. This sounds like a relatively clean solution but I fear we'll end up creating an awful lot of those wrapper classes. Extract every call to these static helper classes into a function that can be overridden and test a subclass of the class I actually want to test. But I keep thinking that this just has to be a problem that many people have to face when doing TDD - so there must already be solutions for this problem. What is the best strategy to keep classes that use these static helpers testable?
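
    The question is Java-centric, but as a rough, language-neutral sketch of the second idea in C# (the interface and class names are invented): a thin instantiable wrapper around the static call gives you a seam that tests can replace with a fake:

        using System;

        // Thin seam around a static helper (here Uri.EscapeDataString).
        public interface IUrlEncoder
        {
            string Encode(string value);
        }

        public class UrlEncoderWrapper : IUrlEncoder
        {
            public string Encode(string value)
            {
                return Uri.EscapeDataString(value);
            }
        }

        // Production code depends on the interface, so a test can inject a stub
        // instead of relying on the real static implementation.
        public class LinkBuilder
        {
            private readonly IUrlEncoder encoder;

            public LinkBuilder(IUrlEncoder encoder)
            {
                this.encoder = encoder;
            }

            public string BuildSearchLink(string term)
            {
                return "https://example.com/search?q=" + encoder.Encode(term);
            }
        }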

    Read the article
