Search Results

Search found 8616 results on 345 pages for 'primitive types'.


  • displaying search results of more than one word

    - by fusion
In my search form, if the user types 'good', it displays all the results which contain the keyword 'good'. However, if the user types 'good sweetest', it displays no results, because there is no record with the two words appearing next to each other, even though both words appear in the same entry at different places. For example, the record says:

    A good action is an ever-remaining store and a pure yield

If the user types 'good', this record will show up, but if the user types 'good' + 'pure', it will not show anything. Or, if the record contains the keyword 'good-deeds' and the user types 'good deeds' without the hyphen, it will not show anything. What I would like is that if the user types 'good' + 'pure' or 'good deeds', it should show the records containing these keywords, highlighting them.

search.php code:

    <?php
    $search_result = $_POST["q"];
    $search_result = trim($search_result);

    // Check if the string is empty
    if ($search_result == "") {
        echo "<p class='error'>Search Error. Please Enter Your Search Query.</p>";
        exit();
    }

    if ($search_result == "%" || $search_result == "_" || $search_result == "+") {
        echo "<p class='error1'>Search Error. Please Enter a Valid Search Query.</p>";
        exit();
    }

    $result = mysql_query('SELECT cQuotes, vAuthor, cArabic, vReference FROM thquotes
                           WHERE cQuotes LIKE "%' . mysql_real_escape_string($search_result) . '%"
                           ORDER BY idQuotes DESC', $conn)
              or die('Error: ' . mysql_error());

    function h($s) {
        echo htmlspecialchars($s, ENT_QUOTES);
    }

    function highlightWords($string, $word) {
        $string = preg_replace("/" . preg_quote($word, "/") . "/i", "<span class='highlight'>$0</span>", $string);
        /*** return the highlighted string ***/
        return $string;
    }
    ?>
    <div class="caption">Search Results</div>
    <div class="center_div">
    <table>
    <?php while ($row = mysql_fetch_array($result, MYSQL_ASSOC)) {
        $cQuote = highlightWords(htmlspecialchars($row['cQuotes']), $search_result);
    ?>
        <tr>
            <td style="text-align:right; font-size:18px;"><?php h($row['cArabic']); ?></td>
            <td style="font-size:16px;"><?php echo $cQuote; ?></td>
            <td style="font-size:12px;"><?php h($row['vAuthor']); ?></td>
            <td style="font-size:12px; font-style:italic; text-align:right;"><?php h($row['vReference']); ?></td>
        </tr>
    <?php } ?>
    </table>
    </div>

search.html:

    <form name="myform" class="wrapper">
        <input type="text" name="q" onkeyup="showUser()" class="txt_search"/>
        <input type="button" name="button" onclick="showUser()" class="button"/>
        <div id="txtHint"></div>
    </form>

    Read the article

  • BizTalk 2009 - Naming Guidelines

    - by StuartBrierley
The following is effectively a repost of the BizTalk 2004 naming guidelines that I have previously detailed. I have posted these again for completeness under BizTalk 2009 and to allow an element of separation in case I find some reason to amend these for BizTalk 2009. These guidelines should be universal across any version of BizTalk you may wish to apply them to.

General Rules

All names should follow the Pascal naming convention.

Project Namespaces

For message schemas: [CompanyName].XML.Schemas.[FunctionalName]*
Examples: ABC.XML.Schemas.Underwriting, DEF.XML.Schemas.MarshmellowTradingExchange
* Denotes potential for multiple levels of functional name, such as Underwriting.Dictionary.Valuation

For web services: [CompanyName].Web.Services.[FunctionalName]
Example: ABC.Web.Services.OrderJellyBeans

For the main BizTalk projects: [CompanyName].BizTalk.[AssemblyType].[FunctionalName]*
Examples: ABC.BizTalk.Mappings.Underwriting, ABC.BizTalk.Orchestrations.Underwriting
* Denotes potential for multiple levels of functional name, such as Mappings.Underwriting.Valuations

Assemblies

BizTalk assembly names should match the associated project namespace, such as ABC.BizTalk.Mappings.Underwriting. This pertains to both the formal assembly name and the DLL name. The solution name should take the name of the main project within the solution, and therefore also the namespace for that project. Although long names such as this can be unwieldy to work with, the benefits of having the full scope available when the assemblies are installed on the target server are generally judged to outweigh this inconvenience.

Messaging Artifacts

Schema: <DescriptiveName>.xsd
  Notes: The .NET type name should match, without the file extension. The .NET namespace will likely match the assembly name.
  Examples: PurchaseOrderAcknowledge_FF.xsd, FNMA100330_FF.xsd

Property Schema: <DescriptiveName>.xsd
  Notes: Should be named to reflect possible common usage across multiple schemas.
  Examples: IspecMessagePropertySchema.xsd, UnderwritingOrchestrationKeys.xsd

Map: <SourceSchema>2<DestinationSchema>.btm
  Notes: Exceptions to this may be made where the source and destination schemas share the majority of the name, such as in mainframe web service maps.
  Examples: InstructionResponse2CustomEmailRequest.btm (exception example), AccountCustomerAddressSummaryRequest2MainframeRequest.btm

Orchestration: <DescriptiveName>.odx
  Examples: GetValuationReports.odx, SendMTEDecisionResponse.odx

Send/Receive Pipeline: <DescriptiveName>.btp
  Examples: ValidatingXMLReceivePipeline.btp, FlatFileAssembler.btp

Receive Port: A plainly worded phrase that will clearly explain the function.
  Examples: FraudPreventionServices, LetterProcessing

Receive Location: A plainly worded phrase that will clearly explain the function. (Do we want to include the transport type here?)
  Example: Arrears Web Service

Send Port Group: A plainly worded phrase that will clearly explain the function.
  Example: Customer Updates

Send Port: A plainly worded phrase that will clearly explain the function.
  Examples: ABCProductUpdater, LogLendingPolicyOutput

Parties: A meaningful name for a trading partner. If dealing with multiple entities within a trading partner organization, the organization name could be used as a prefix.

Roles: A meaningful name for the role that a trading partner plays.

Orchestration Workflow Shapes

Scope: <DescriptionOfContainedWork> or <DescOfContainedWork><TxType>
  Notes: Including info about the transaction type may be appropriate in some situations where it adds significant documentation value to the diagram.
  Example: HandleReportResponse

Receive: Receive<MessageName>
  Notes: Typically, MessageName will be the same as the name of the message variable that is being received "into".
  Example: ReceiveReportResponse

Send: Send<MessageName>
  Notes: Typically, MessageName will be the same as the name of the message variable that is being sent.
  Example: SendValuationDetailsRequest

Expression: <DescriptionOfEffect>
  Notes: Expression shapes should be named to describe the net effect of the expression, similar to naming a method. The exception to this is the case where the expression is interacting with an external .NET component to perform a function that overlaps with existing BizTalk functionality – use the closest BizTalk shape for this case.
  Example: CreatePrintXML

Decide: <DescriptionOfDecision>
  Notes: A description of what will be decided in the "if" branch.
  Examples: Report Type?, Perform MF Save?

If-Branch: <DescriptionOfDecision>
  Notes: A (potentially abbreviated) description of what is being decided.
  Example: Mortgage Valuation Yes

Else-Branch: Else
  Notes: Else-branch shapes should always be named "Else".

Construct Message (Assign): Create<Message> (for the Construct shape); <ExpressionDescription> (for the expression)
  Notes: If a Construct shape contains a message assignment, it should be prefixed with "Create" followed by an abbreviated name of the message being assigned. The actual message assignment shape contained within should be named to describe the expression that is contained.
  Example: CreateReportDataMV, which contains the expression ExtractReportData

Construct Message (Transform): Create<Message> (for the Construct shape); <SourceSchema>2<DestSchema> (for the transform)
  Notes: If a Construct shape contains a message transform, it should be prefixed with "Create" followed by an abbreviated name of the message being assigned. The actual message transform shape contained within should generally be named the same as the called map.
  Example: CreateReportDataMV, which contains the transform ReportDataMV2ReportDataMV

Construct Message (containing multiple shapes):
  Notes: If a Construct Message shape uses multiple assignments or transforms, the overall shape should be named to communicate the net effect, using no prefix.

Call/Start Orchestration: Call<OrchestrationName> or Start<OrchestrationName>

Throw: Throw<ExceptionType>
  Notes: The corresponding variable name for the exception type should (often) be the same name as the exception type, only camel-cased.
  Example: ThrowRuleException, which references the "ruleException" variable.

Parallel: <DescriptionOfParallelWork>
  Notes: Parallel shapes should be named by a description of what work will be done in parallel.

Delay: <DescriptionOfWhatWaitingFor>
  Notes: Delay shapes should be named by a description of what is being waited for.
  Example: POAcknowledgeTimeout

Listen: <DescriptionOfOutcomes>
  Notes: Listen shapes should be named by a description that captures (to the degree possible) all the branches of the Listen shape.
  Examples: POAckOrTimeout, FirstShippingBid

Loop: <DescriptionOfLoop>
  Notes: A (potentially abbreviated) description of what the loop is.
  Examples: ForEachValuationReport, WhileErrorFlagTrue

Role Link: See "Roles" in the messaging naming conventions above.

Suspend: <ReasonDescription>
  Notes: Describe what action an administrator must take to resume the orchestration. More detail can be passed to the error property, and should include what should be done by the administrator before resuming the orchestration.
  Example: ReEstablishCreditLink

Terminate: <ReasonDescription>
  Notes: Describe why the orchestration terminated. More detail can be passed to the error property.
  Example: TimeoutsExpired

Call Rules: Call<PolicyName>
  Notes: The policy name may need to be abbreviated.
  Example: CallLendingPolicy

Compensate: Compensate or Compensate<TxName>
  Notes: If the shape compensates nested transactions, names should be suffixed with the name of the nested transaction; otherwise it should simply be Compensate.
  Example: CompensateTransferFunds

Orchestration Types

Multi-Part Message Type: <LogicalDocumentType>
  Notes: Multi-part types encapsulate multiple parts. The WSDL spec indicates "parts are a flexible mechanism for describing the logical abstract content of a message." The name of the multi-part type should correspond to the "logical" document type, i.e. what the sum of the parts describes.
  Example: InvoiceReceipt (which might encapsulate an invoice acknowledgement and a payment voucher)

Multi-Part Message Part: <SchemaNameOfPart>
  Notes: Should be named (most often) simply for the schema (or simple type) associated with the part.
  Example: InvoiceHeader

Message: <SchemaName> or <MultiPartMessageTypeName>
  Notes: Should be named based on the corresponding schema type or multi-part message type. If there is more than one variable of a type, name for its use within the orchestration.
  Examples: ReportDataMV, UpdatedReportDataMV

Variable: <DescriptiveName>
  Examples: TargetFilePath, StringProcessor

Port Type: <FunctionDescription>PortType
  Notes: Should be named to suggest the nature of an endpoint, with Pascal casing and suffixed with "PortType". If there will be more than one port for a port type, the port type should be named according to the abstract service supplied. The WSDL spec indicates port types are "a named set of abstract operations and the abstract messages involved" that also encapsulates the message pattern (i.e. one-way, request-response, solicit-response) that all operations on the port type adhere to.
  Examples: ReceiveReportResponsePortType, CallEAEPortType (this is a two-way port, so Receive or Send alone would not be appropriate; it could have been ProcessEAERequestPortType, etc.)

Port: <FunctionDescription>Port
  Notes: Should be named to suggest a grouping of functionality, with Pascal casing and suffixed with "Port".
  Examples: ReceiveReportResponsePort, CallEAEPort

Correlation Type: <DescriptiveName>
  Notes: Should be named based on the logical name of what is being used to correlate.
  Example: PurchaseOrderNumber

Correlation Set: <DescriptiveName>
  Notes: Should be named based on the corresponding correlation type. If there is more than one, it should be named to reflect its specific purpose within the orchestration.
  Example: PurchaseOrderNumber

Orchestration Parameter: <DescriptiveName>
  Notes: Should be named to match the caller's names for the corresponding variables where appropriate.

    Read the article

  • Recover Data Like a Forensics Expert Using an Ubuntu Live CD

    - by Trevor Bekolay
There are lots of utilities to recover deleted files, but what if you can't boot up your computer, or the whole drive has been formatted? We'll show you some tools that will dig deep and recover the most elusive deleted files, or even whole hard drive partitions. We've shown you simple ways to recover accidentally deleted files, even a simple method that can be done from an Ubuntu Live CD, but for hard disks that have been heavily corrupted, those methods aren't going to cut it. In this article, we'll examine four tools that can recover data from the most messed up hard drives, regardless of whether they were formatted for a Windows, Linux, or Mac computer, or even if the partition table is wiped out entirely. Note: These tools cannot recover data that has been overwritten on a hard disk. Whether a deleted file has been overwritten depends on many factors – the quicker you realize that you want to recover a file, the more likely you will be able to do so. Our setup To show these tools in action, we've set up a small 1 GB hard drive, with half of the space partitioned as ext2, a file system used in Linux, and half the space partitioned as FAT32, a file system used in older Windows systems. We stored ten random pictures on each partition. We then wiped the partition table from the hard drive by deleting the partitions in GParted. Is our data lost forever? Installing the tools All of the tools we're going to use are in Ubuntu's universe repository. To enable the repository, open Synaptic Package Manager by clicking on System in the top-left, then Administration > Synaptic Package Manager. Click on Settings > Repositories and add a check in the box labelled "Community-maintained Open Source software (universe)". Click Close, and then in the main Synaptic Package Manager window, click the Reload button. Once the package list has reloaded, and the search index rebuilt, search for and mark for installation one or all of the following packages: testdisk, foremost, and scalpel. The testdisk package includes TestDisk, which can recover lost partitions and repair boot sectors, and PhotoRec, which can recover many different types of files from tons of different file systems. Foremost, originally developed by the US Air Force Office of Special Investigations, recovers files based on their headers and other internal structures. Foremost operates on hard drives or drive image files generated by various tools. Finally, scalpel performs the same functions as foremost, but is focused on enhanced performance and lower memory usage. Scalpel may run better if you have an older machine with less RAM. Recover hard drive partitions If you can't mount your hard drive, then its partition table might be corrupted. Before you start trying to recover your important files, it may be possible to recover one or more partitions on your drive, recovering all of your files with one step. TestDisk is the tool for the job. Start it by opening a terminal (Applications > Accessories > Terminal) and typing in: sudo testdisk If you'd like, you can create a log file, though it won't affect how much data you recover. Once you make your choice, you're greeted with a list of the storage media on your machine. You should be able to identify the hard drive you want to recover partitions from by its size and label. TestDisk asks you to select the type of partition table to search for. In most cases (ext2/3, NTFS, FAT32, etc.) you should select Intel and press Enter. Highlight Analyse and press Enter.
In our case, our small hard drive has previously been formatted as NTFS. Amazingly, TestDisk finds this partition, though it is unable to recover it. It also finds the two partitions we just deleted. We are able to change their attributes, or add more partitions, but we'll just recover them by pressing Enter. If TestDisk hasn't found all of your partitions, you can try doing a deeper search by selecting that option with the left and right arrow keys. We only had these two partitions, so we'll recover them by selecting Write and pressing Enter. TestDisk informs us that we will have to reboot. Note: If your Ubuntu Live CD is not persistent, then when you reboot you will have to reinstall any tools that you installed earlier. After restarting, both of our partitions are back to their original states, pictures and all. Recover files of certain types For the following examples, we deleted the 10 pictures from both partitions and then reformatted them. PhotoRec Of the three tools we'll show, PhotoRec is the most user-friendly, despite being a console-based utility. To start recovering files, open a terminal (Applications > Accessories > Terminal) and type in: sudo photorec To begin, you are asked to select a storage device to search. You should be able to identify the right device by its size and label. Select the right device, and then hit Enter. PhotoRec asks you to select the type of partition to search. In most cases (ext2/3, NTFS, FAT, etc.) you should select Intel and press Enter. You are given a list of the partitions on your selected hard drive. If you want to recover all of the files on a partition, then select Search and hit Enter. However, this process can be very slow, and in our case we only want to search for picture files, so instead we use the right arrow key to select File Opt and press Enter. PhotoRec can recover many different types of files, and deselecting each one would take a long time. Instead, we press "s" to clear all of the selections, and then find the appropriate file types – jpg, gif, and png – and select them by pressing the right arrow key. Once we've selected these three, we press "b" to save these selections. Press Enter to return to the list of hard drive partitions. We want to search both of our partitions, so we highlight "No partition" and "Search" and then press Enter. PhotoRec prompts for a location to store the recovered files. If you have a different healthy hard drive, then we recommend storing the recovered files there. Since we're not recovering very much, we'll store it on the Ubuntu Live CD's desktop. Note: Do not recover files to the hard drive you're recovering from. PhotoRec is able to recover the 20 pictures from the partitions on our hard drive! A quick look in the recup_dir.1 directory that it creates confirms that PhotoRec has recovered all of our pictures, save for the file names. Foremost Foremost is a command-line program with no interactive interface like PhotoRec, but it offers a number of command-line options to get as much data out of your hard drive as possible. For a full list of options that can be tweaked via the command line, open up a terminal (Applications > Accessories > Terminal) and type in: foremost -h In our case, the command-line options that we are going to use are: -t, a comma-separated list of types of files to search for (in our case, "jpeg,png,gif"); -v, enabling verbose mode, giving us more information about what foremost is doing; and -o, the output folder to store recovered files in.
In our case, we created a directory called "foremost" on the desktop. Finally, -i is the input that will be searched for files. This can be a disk image in several different formats; however, we will use a hard disk, /dev/sda. Our foremost invocation is: sudo foremost -t jpeg,png,gif -o foremost -v -i /dev/sda Your invocation will differ depending on what you're searching for and where you're searching for it. Foremost is able to recover 17 of the 20 files stored on the hard drive. Looking at the files, we can confirm that these files were recovered relatively well, though we can see some errors in the thumbnail for 00622449.jpg. Part of this may be due to the ext2 filesystem. Foremost recommends using the -d command-line option for Linux file systems like ext2. We'll run foremost again, adding the -d command-line option to our foremost invocation: sudo foremost -t jpeg,png,gif -d -o foremost -v -i /dev/sda This time, foremost is able to recover all 20 images! A final look at the pictures reveals that the pictures were recovered with no problems. Scalpel Scalpel is another powerful program that, like Foremost, is heavily configurable. Unlike Foremost, Scalpel requires you to edit a configuration file before attempting any data recovery. Any text editor will do, but we'll use gedit to change the configuration file. In a terminal window (Applications > Accessories > Terminal), type in: sudo gedit /etc/scalpel/scalpel.conf scalpel.conf contains information about a number of different file types. Scroll through this file and uncomment lines that start with a file type that you want to recover (i.e. remove the "#" character at the start of those lines). Save the file and close it. Return to the terminal window. Scalpel also has a ton of command-line options that can help you search quickly and effectively; however, we'll just define the input device (/dev/sda) and the output folder (a folder called "scalpel" that we created on the desktop). Our invocation is: sudo scalpel /dev/sda -o scalpel Scalpel is able to recover 18 of our 20 files. A quick look at the files scalpel recovered reveals that most of our files were recovered successfully, though there were some problems (e.g. 00000012.jpg). Conclusion In our quick toy example, TestDisk was able to recover two deleted partitions, and PhotoRec and Foremost were able to recover all 20 deleted images. Scalpel recovered most of the files, but it's very likely that playing with the command-line options for scalpel would have enabled us to recover all 20 images. These tools are lifesavers when something goes wrong with your hard drive. If your data is on the hard drive somewhere, then one of these tools will track it down!

    Read the article

  • Identity Claims Encoding for SharePoint

    - by Shawn Cicoria
Just to remind myself, the list of claim types and their encodings is listed here at the bottom: http://msdn.microsoft.com/en-us/library/gg481769.aspx

Where, for example: i:0#.w|contoso\scicoria

'i' = identity, could be 'c' for others
'#' == SPClaimTypes.UserLogonName
'.' == Microsoft.IdentityModel.Claims.ClaimValueTypes.String

Tables for reference:

Table 1. Claim types encoding

Character  Claim Type
!   SPClaimTypes.IdentityProvider
"   SPClaimTypes.UserIdentifier
#   SPClaimTypes.UserLogonName
$   SPClaimTypes.DistributionListClaimType
%   SPClaimTypes.FarmId
&   SPClaimTypes.ProcessIdentitySID
'   SPClaimTypes.ProcessIdentityLogonName
(   SPClaimTypes.IsAuthenticated
)   Microsoft.IdentityModel.Claims.ClaimTypes.PrimarySid
*   Microsoft.IdentityModel.Claims.ClaimTypes.PrimaryGroupSid
+   Microsoft.IdentityModel.Claims.ClaimTypes.GroupSid
-   Microsoft.IdentityModel.Claims.ClaimTypes.Role
.   System.IdentityModel.Claims.ClaimTypes.Anonymous
/   System.IdentityModel.Claims.ClaimTypes.Authentication
0   System.IdentityModel.Claims.ClaimTypes.AuthorizationDecision
1   System.IdentityModel.Claims.ClaimTypes.Country
2   System.IdentityModel.Claims.ClaimTypes.DateOfBirth
3   System.IdentityModel.Claims.ClaimTypes.DenyOnlySid
4   System.IdentityModel.Claims.ClaimTypes.Dns
5   System.IdentityModel.Claims.ClaimTypes.Email
6   System.IdentityModel.Claims.ClaimTypes.Gender
7   System.IdentityModel.Claims.ClaimTypes.GivenName
8   System.IdentityModel.Claims.ClaimTypes.Hash
9   System.IdentityModel.Claims.ClaimTypes.HomePhone
<   System.IdentityModel.Claims.ClaimTypes.Locality
=   System.IdentityModel.Claims.ClaimTypes.MobilePhone
>   System.IdentityModel.Claims.ClaimTypes.Name
?   System.IdentityModel.Claims.ClaimTypes.NameIdentifier
@   System.IdentityModel.Claims.ClaimTypes.OtherPhone
[   System.IdentityModel.Claims.ClaimTypes.PostalCode
\   System.IdentityModel.Claims.ClaimTypes.PPID
]   System.IdentityModel.Claims.ClaimTypes.Rsa
^   System.IdentityModel.Claims.ClaimTypes.Sid
_   System.IdentityModel.Claims.ClaimTypes.Spn
`   System.IdentityModel.Claims.ClaimTypes.StateOrProvince
a   System.IdentityModel.Claims.ClaimTypes.StreetAddress
b   System.IdentityModel.Claims.ClaimTypes.Surname
c   System.IdentityModel.Claims.ClaimTypes.System
d   System.IdentityModel.Claims.ClaimTypes.Thumbprint
e   System.IdentityModel.Claims.ClaimTypes.Upn
f   System.IdentityModel.Claims.ClaimTypes.Uri
g   System.IdentityModel.Claims.ClaimTypes.Webpage

Table 2. Claim value types encoding

Character  Claim Value Type
!   Microsoft.IdentityModel.Claims.ClaimValueTypes.Base64Binary
"   Microsoft.IdentityModel.Claims.ClaimValueTypes.Boolean
#   Microsoft.IdentityModel.Claims.ClaimValueTypes.Date
$   Microsoft.IdentityModel.Claims.ClaimValueTypes.Datetime
%   Microsoft.IdentityModel.Claims.ClaimValueTypes.DaytimeDuration
&   Microsoft.IdentityModel.Claims.ClaimValueTypes.Double
'   Microsoft.IdentityModel.Claims.ClaimValueTypes.DsaKeyValue
(   Microsoft.IdentityModel.Claims.ClaimValueTypes.HexBinary
)   Microsoft.IdentityModel.Claims.ClaimValueTypes.Integer
*   Microsoft.IdentityModel.Claims.ClaimValueTypes.KeyInfo
+   Microsoft.IdentityModel.Claims.ClaimValueTypes.Rfc822Name
-   Microsoft.IdentityModel.Claims.ClaimValueTypes.RsaKeyValue
.   Microsoft.IdentityModel.Claims.ClaimValueTypes.String
/   Microsoft.IdentityModel.Claims.ClaimValueTypes.Time
0   Microsoft.IdentityModel.Claims.ClaimValueTypes.X500Name
1   Microsoft.IdentityModel.Claims.ClaimValueTypes.YearMonthDuration
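Putting the two tables to work, an encoded claim like the one above can be pulled apart by position. Below is a minimal C# sketch of my own (the supported route in real code is the SPClaimProviderManager API, not hand parsing); the lookup tables are abbreviated to the few characters used here, and the trailing 'w' in the prefix denotes the original issuer (Windows authentication):

    using System;
    using System.Collections.Generic;

    static class ClaimEncodingDemo
    {
        // Abbreviated lookups based on Table 1 and Table 2 above
        static readonly Dictionary<char, string> ClaimTypes = new Dictionary<char, string>
        {
            { '!', "SPClaimTypes.IdentityProvider" },
            { '#', "SPClaimTypes.UserLogonName" },
            { '5', "System.IdentityModel.Claims.ClaimTypes.Email" },
        };

        static readonly Dictionary<char, string> ValueTypes = new Dictionary<char, string>
        {
            { '.', "Microsoft.IdentityModel.Claims.ClaimValueTypes.String" },
        };

        static void Main()
        {
            string encoded = @"i:0#.w|contoso\scicoria";

            string[] parts = encoded.Split('|');   // prefix | claim value
            string prefix = parts[0];              // "i:0#.w"

            Console.WriteLine("Identity claim: {0}", prefix[0] == 'i'); // 'i' = identity, 'c' = other
            Console.WriteLine("Claim type:     {0}", ClaimTypes[prefix[3]]);
            Console.WriteLine("Value type:     {0}", ValueTypes[prefix[4]]);
            Console.WriteLine("Value:          {0}", parts[1]);
        }
    }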

    Read the article

  • C#/.NET Little Wonders: Tuples and Tuple Factory Methods

    - by James Michael Hare
Once again, in this series of posts I look at the parts of the .NET Framework that may seem trivial, but can really help improve your code by making it easier to write and maintain. This week, we look at the System.Tuple class and the handy factory methods for creating a Tuple by inferring the types.

What is a Tuple?

The System.Tuple is a class that tends to inspire a reaction in one of two ways: love or hate. Simply put, a Tuple is a data structure that holds a specific number of items of a specific type in a specific order. That is, a Tuple<int, string, int> is a tuple that contains exactly three items: an int, followed by a string, followed by an int. The sequence is important not only to distinguish between two members of the tuple with the same type, but also for comparisons between tuples.

Some people tend to love tuples because they give you a quick way to combine multiple values into one result. This can be handy for returning more than one value from a method (without using out or ref parameters), or for creating a compound key to a Dictionary, or any other purpose you can think of. They can be especially handy when passing a series of items into a call that only takes one object parameter, such as passing an argument to a thread's startup routine (see the sketch at the end of this post). In these cases, you do not need to define a class; simply create a tuple containing the types you wish to return, and you are ready to go.

On the other hand, there are some people who see tuples as a crutch in object-oriented design. They may view the tuple as a very watered down class with very little inherent semantic meaning. As an example, what if you saw this in a piece of code:

    var x = new Tuple<int, int>(2, 5);

What are the contents of this tuple? If the tuple isn't named appropriately, and if the contents of each member are not self-evident from the type, this can be a confusing question. The people who tend to be against tuples would rather you explicitly code a class to contain the values, such as:

    public sealed class RetrySettings
    {
        public int TimeoutSeconds { get; set; }
        public int MaxRetries { get; set; }
    }

Here, the meaning of each int in the class is much more clear, but it's a bit more work to create the class, and it can clutter a solution with extra classes.

So, what's the correct way to go? That's a tough call. You will have people who will argue quite well for one or the other. For me, I consider the Tuple to be a tool to make it easy to collect values together easily. There are times when I just need to combine items for a key or a result, in which case the tuple is short-lived and the meaning isn't easily lost, so I feel this is a good compromise. If the scope of the collection of items, though, is more application-wide, I tend to favor creating a full class.

Finally, it should be noted that tuples are immutable. That means they are assigned a value at construction, and that value cannot be changed. Of course, if the tuple contains an item of a reference type, this means that the reference is immutable and not the item referred to.

Tuples from 1 to N

Tuples come in all sizes: you can have as few as one element in your tuple, or as many as you like. However, since C# generics can't have an infinite generic type parameter list, any items after 7 have to be collapsed into another tuple, as we'll show shortly.
So when you declare your tuple from sizes 1 (a 1-tuple or singleton) to 7 (a 7-tuple or septuple), simply include the appropriate number of type arguments:

    // a singleton tuple of integer
    Tuple<int> x;

    // or more
    Tuple<int, double> y;

    // up to seven
    Tuple<int, double, char, double, int, string, uint> z;

Anything eight and above, and we have to nest tuples inside of tuples. The last element of the 8-tuple is the generic type parameter Rest; this is special in that the Tuple checks at runtime to make sure that the type is a Tuple. This means that a simple 8-tuple must nest a singleton tuple (one of the good uses for a singleton tuple, by the way) for the Rest property.

    // an 8-tuple
    Tuple<int, int, int, int, int, double, char, Tuple<string>> t8;

    // a 9-tuple
    Tuple<int, int, int, int, double, int, char, Tuple<string, DateTime>> t9;

    // a 16-tuple
    Tuple<int, int, int, int, int, int, int, Tuple<int, int, int, int, int, int, int, Tuple<int, int>>> t16;

Notice that on the 16-tuple we had to have a nested tuple in the nested tuple. Since the tuple can only support up to seven items, and then a rest element, that means that if the nested tuple needs more than seven items you must nest in it as well.

Constructing tuples

Constructing tuples is just as straightforward as declaring them. That said, you have two distinct ways to do it. The first is to construct the tuple explicitly yourself:

    var t3 = new Tuple<int, string, double>(1, "Hello", 3.1415927);

This creates a triple that has an int, string, and double and assigns the values 1, "Hello", and 3.1415927 respectively. Make sure the order of the arguments supplied matches the order of the types! Also notice that we can't half-assign a tuple or create a default tuple. Tuples are immutable (you can't change the values once constructed), so you must provide all values at construction time.

Another way to easily create tuples is to do it implicitly using the System.Tuple static class's Create() factory methods. These methods (much like C++'s std::make_pair function) will infer the types from the method call so you don't have to type them in. This can dramatically reduce the amount of typing required, especially for complex tuples!

    // this 4-tuple is typed Tuple<int, double, string, char>
    var t4 = Tuple.Create(42, 3.1415927, "Love", 'X');

Notice how much easier it is to use the factory methods and infer the types? This can cut down on typing quite a bit when constructing tuples. The Create() factory method can construct from a 1-tuple (singleton) to an 8-tuple (octuple), which of course will be an octuple where the last item is a singleton, as we described before in nested tuples.

Accessing tuple members

Accessing a tuple's members is simplicity itself… mostly. The properties for accessing up to the first seven items are Item1, Item2, …, Item7. If you have an octuple or beyond, the final property is Rest, which will give you the nested tuple, which you can then access in a similar manner. Once again, keep in mind that these are read-only properties and cannot be changed.
    // for septuples and below, use the Item properties
    var t1 = Tuple.Create(42, 3.14);

    Console.WriteLine("First item is {0} and second is {1}",
        t1.Item1, t1.Item2);

    // for octuples and above, use Rest to retrieve the nested tuple
    var t9 = new Tuple<int, int, int, int, int, int, int,
        Tuple<int, int>>(1, 2, 3, 4, 5, 6, 7, Tuple.Create(8, 9));

    Console.WriteLine("The 8th item is {0}", t9.Rest.Item1);

Tuples are IStructuralComparable and IStructuralEquatable

Most of you know about IComparable and IEquatable; what you may not know is that there are two sister interfaces to these that were added in .NET 4.0 to help support tuples. These IStructuralComparable and IStructuralEquatable interfaces make it easy to compare two tuples for equality and ordering. This is invaluable for sorting, and makes it easy to use tuples as a compound key to a dictionary (one of my favorite uses)!

Why is this so important? Remember when we said that some folks think tuples are too generic and you should define a custom class? This is all well and good, but if you want to design a custom class that can automatically order itself based on its members and build a hash code for itself based on its members, it is no longer a trivial task! Thankfully the tuple does this all for you through the explicit implementations of these interfaces.

For equality, two tuples are equal if all elements are equal between the two tuples; that is, if t1.Item1 == t2.Item1 and t1.Item2 == t2.Item2, and so on. For ordering, it's a little more complex in that it compares the two tuples one item at a time starting at Item1, and sees which one has a smaller Item1. If one has a smaller Item1, it is the smaller tuple. However, if both Item1 are the same, it compares Item2 and so on.

For example:

    var t1 = Tuple.Create(1, 3.14, "Hi");
    var t2 = Tuple.Create(1, 3.14, "Hi");
    var t3 = Tuple.Create(2, 2.72, "Bye");

    // true, t1 == t2 because all items are ==
    Console.WriteLine("t1 == t2 : " + t1.Equals(t2));

    // false, t2 != t3 because at least one item is different
    Console.WriteLine("t2 == t3 : " + t2.Equals(t3));

The actual implementation of IComparable, IEquatable, IStructuralComparable, and IStructuralEquatable is explicit, so if you want to invoke the methods defined there you'll have to manually cast to the appropriate interface:

    // true because t1.Item1 < t3.Item1; if they had been the same it would check Item2 and so on
    Console.WriteLine("t1 < t3 : " + (((IComparable)t1).CompareTo(t3) < 0));

So, as I mentioned, the fact that tuples are automatically equatable and comparable (provided the types you use define equality and comparability as needed) means that we can use tuples for compound keys in hashing and ordering containers like Dictionary and SortedList:

    var tupleDict = new Dictionary<Tuple<int, double, string>, string>();

    tupleDict.Add(t1, "First tuple");
    tupleDict.Add(t3, "Third tuple");

    // t2 is structurally equal to t1, so it is the same key;
    // Add(t2, ...) would throw, and the indexer overwrites t1's entry instead
    tupleDict[t2] = "Second tuple";

Because IStructuralEquatable defines GetHashCode(), and Tuple's implementation creates this hash code by combining the hash codes of the members, this makes using the tuple as a complex key quite easy!
For example, let's say you are creating account charts for a financial application, and you want to cache those charts in a Dictionary based on the account number and the number of days of chart data (for example, a 1 day chart, 1 week chart, etc.):

    // the account number (string) and number of days (int) are the key to a cached chart
    var chartCache = new Dictionary<Tuple<string, int>, IChart>();

Summary

The System.Tuple, like any tool, is best used where it will achieve a greater benefit. I wouldn't advise overusing them on objects with a large scope, or it can become difficult to maintain. However, when used properly in a well-defined scope they can make your code cleaner and easier to maintain by removing the need for extraneous POCOs and custom property hashing and ordering. They are especially useful in defining compound keys to IDictionary implementations and for returning multiple values from methods, or passing multiple values to a single object parameter.

Technorati Tags: C#,.NET,Tuple,Little Wonders
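As promised above, here is the thread-startup use in action: a minimal sketch (the host name and retry count are invented for illustration) that packs two values into a tuple to squeeze them through Thread.Start's single object parameter:

    using System;
    using System.Threading;

    class Program
    {
        static void Main()
        {
            // Pack the host and retry count into one object for the thread start parameter
            var worker = new Thread(Worker);
            worker.Start(Tuple.Create("server01", 3));
            worker.Join();
        }

        static void Worker(object arg)
        {
            // Unpack the tuple: Item1 is the host, Item2 the retry count
            var settings = (Tuple<string, int>)arg;
            Console.WriteLine("Connecting to {0} with {1} retries",
                settings.Item1, settings.Item2);
        }
    }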

    Read the article

  • C#/.NET Little Wonders: Constraining Generics with Where Clause

    - by James Michael Hare
Back when I was primarily a C++ developer, I loved C++ templates. The power of writing very reusable generic classes brought the art of programming to a brand new level. Unfortunately, when .NET 1.0 came about, it didn't have a template equivalent. With .NET 2.0, however, we finally got generics, which once again let us spread our wings and program more generically in the world of .NET. However, C# generics behave in some ways very differently from their C++ template cousins. There is a handy clause that helps you navigate these waters to make your generics more powerful.

The Problem – C# Assumes the Lowest Common Denominator

In C++, you can create a template and do nearly anything syntactically possible on the template parameter, and C++ will not check whether the methods/fields/operations invoked are valid until you declare a realization of the type. Let me illustrate with a C++ example:

    // compiles fine, C++ makes no assumptions as to T
    template <typename T>
    class ReverseComparer
    {
    public:
        int Compare(const T& lhs, const T& rhs)
        {
            return rhs.CompareTo(lhs);
        }
    };

Notice that we are invoking a method CompareTo() off of template type T. Because we don't know at this point what type T is, C++ makes no assumptions and there are no errors. C++ tends to take the path of not checking the template type usage until the template is actually instantiated with a specific type, which differs from the behavior of C#:

    // this will NOT compile! C# assumes the lowest common denominator.
    public class ReverseComparer<T>
    {
        public int Compare(T lhs, T rhs)
        {
            return lhs.CompareTo(rhs);
        }
    }

So why does C# give us a compiler error even when we don't yet know what type T is? This is because C# took a different path in how it made generics. Unless you specify otherwise, for the purposes of the code inside the generic method, T is basically treated like an object (notice I didn't say T is an object). That means that any operations, fields, methods, properties, etc. that you attempt to use on type T must be available at the lowest common denominator type: object. Now, while object has the broadest applicability, it also has the fewest specifics. So how do we allow our generic type placeholder to do more than just what object can do?

Solution: Constrain the Type With the Where Clause

So how do we get around this in C#? The answer is to constrain the generic type placeholder with the where clause. Basically, the where clause allows you to specify additional constraints on what the actual type used to fill the generic type placeholder must support.

You might think that narrowing the scope of a generic means a weaker generic. In reality, though it limits the number of types that can be used with the generic, it also gives the generic more power to deal with those types. In effect these constraints say that if the type meets the given constraint, you can perform the activities that pertain to that constraint with the generic placeholders.

Constraining a Generic Type to an Interface or Superclass

One of the handiest where clause constraints is the ability to specify that the generic type must implement a certain interface or be inherited from a certain base class.
For example, you can't call CompareTo() in our first C# generic without constraints, but if we constrain T to IComparable<T>, we can:

    public class ReverseComparer<T>
        where T : IComparable<T>
    {
        public int Compare(T lhs, T rhs)
        {
            return lhs.CompareTo(rhs);
        }
    }

Now that we've constrained T to an implementation of IComparable<T>, this means that our variables of generic type T may now call any members specified in IComparable<T> as well. This means that the call to CompareTo() is now legal.

If you constrain your type, you will also get compiler errors if you attempt to use a type that doesn't meet the constraint. This is much better than the syntax error you would get within the C++ template code itself when you used a type not supported by a C++ template.

Constraining a Generic Type to Only Reference Types

Sometimes, you want to assign an instance of a generic type to null, but you can't do this without constraints, because you have no guarantee that the type used to realize the generic is not a value type, where null is meaningless.

Well, we can fix this by specifying the class constraint in the where clause. By declaring that a generic type must be a class, we are saying that it is a reference type, and this allows us to assign null to instances of that type:

    public static class ObjectExtensions
    {
        public static TOut Maybe<TIn, TOut>(this TIn value, Func<TIn, TOut> accessor)
            where TOut : class
            where TIn : class
        {
            return (value != null) ? accessor(value) : null;
        }
    }

In the example above, we want to be able to access a property off of a reference, and if that reference is null, pass the null on down the line. To do this, both the input type and the output type must be reference types (yes, nullable value types could also be considered applicable at a logical level, but there's not a direct constraint for those).

Constraining a Generic Type to Only Value Types

Similarly to constraining a generic type to be a reference type, you can also constrain a generic type to be a value type. To do this you use the struct constraint, which specifies that the generic type must be a value type (primitive, struct, enum, etc.).

Consider the following method, which will convert anything that is IConvertible (int, double, string, etc.) to the value type you specify, or null if the instance is null:

    public static T? ConvertToNullable<T>(IConvertible value)
        where T : struct
    {
        T? result = null;

        if (value != null)
        {
            result = (T)Convert.ChangeType(value, typeof(T));
        }

        return result;
    }

Because T is constrained to be a value type, we can use T? (System.Nullable<T>), which we could not do if T were a reference type.

Constraining a Generic Type to Require a Default Constructor

You can also constrain a type to require the existence of a default constructor. Because by default C# doesn't know what constructors a generic type placeholder does or does not have available, it can't typically allow you to call one. That said, if you give it the new() constraint, it will mean that the type used to realize the generic type must have a default (no-argument) constructor.

Let's assume you have a generic adapter class that, given some mappings, will adapt an item from type TFrom to type TTo.
Because it must create a new instance of type TTo in the process, we need to specify that TTo has a default constructor:

    // Given a set of Action<TFrom, TTo> mappings, will map TFrom to TTo
    public class Adapter<TFrom, TTo> : IEnumerable<Action<TFrom, TTo>>
        where TTo : class, new()
    {
        // The list of translations from TFrom to TTo
        public List<Action<TFrom, TTo>> Translations { get; private set; }

        // Construct with an empty translation set.
        public Adapter()
        {
            // did this instead of auto-properties to allow simple use of initializers
            Translations = new List<Action<TFrom, TTo>>();
        }

        // Add a translator to the collection, useful for initializer list
        public void Add(Action<TFrom, TTo> translation)
        {
            Translations.Add(translation);
        }

        // Add a translator that first checks a predicate to determine if the translation
        // should be performed, then translates if the predicate returns true
        public void Add(Predicate<TFrom> conditional, Action<TFrom, TTo> translation)
        {
            Translations.Add((from, to) =>
            {
                if (conditional(from))
                {
                    translation(from, to);
                }
            });
        }

        // Translates an object forward from a TFrom object to a TTo object.
        public TTo Adapt(TFrom sourceObject)
        {
            var resultObject = new TTo();

            // Process each translation
            Translations.ForEach(t => t(sourceObject, resultObject));

            return resultObject;
        }

        // Returns an enumerator that iterates through the collection.
        public IEnumerator<Action<TFrom, TTo>> GetEnumerator()
        {
            return Translations.GetEnumerator();
        }

        // Returns an enumerator that iterates through a collection.
        IEnumerator IEnumerable.GetEnumerator()
        {
            return GetEnumerator();
        }
    }

Notice, however, you can't specify any other constructor; you can only specify that the type has a default (no-argument) constructor.

Summary

The where clause is an excellent tool that gives your .NET generics even more power to perform tasks beyond just the base "object level" behavior. There are a few things you cannot specify with constraints (currently), though:

- You cannot specify that the generic type must be an enum.
- You cannot specify that the generic type must have a certain property or method without specifying a base class or interface – that is, you can't say that the generic must have a Start() method.
- You cannot specify that the generic type allows arithmetic operations.
- You cannot specify that the generic type requires a specific non-default constructor.

In addition, you cannot overload a template definition with different, opposing constraints. For example, you can't define an Adapter<T> where T : struct and an Adapter<T> where T : class. Hopefully, in the future we will get some of these things to make the where clause even more useful, but until then what we have is extremely valuable in making our generics more user-friendly and more powerful!

Technorati Tags: C#,.NET,Little Wonders,BlackRabbitCoder,where,generics
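To round this out, here is how the Adapter above might be consumed. The PersonRecord and PersonDto types are invented for illustration, and the snippet assumes the Adapter<TFrom, TTo> class from the post is compiled into the same project; note how the collection initializer syntax works only because Adapter implements IEnumerable and exposes Add() overloads, and how Adapt() can call new TTo() only because of the new() constraint:

    using System;

    // Hypothetical source and destination types for Adapter<TFrom, TTo>
    public class PersonRecord
    {
        public string FullName { get; set; }
        public int AgeYears { get; set; }
    }

    public class PersonDto
    {
        public string Name { get; set; }
        public bool IsAdult { get; set; }
    }

    class Demo
    {
        static void Main()
        {
            // Initializer list maps onto the two Add() overloads
            var adapter = new Adapter<PersonRecord, PersonDto>
            {
                (from, to) => to.Name = from.FullName,
                { p => p.AgeYears >= 18, (from, to) => to.IsAdult = true }
            };

            // new TTo() inside Adapt() is legal thanks to the new() constraint
            PersonDto dto = adapter.Adapt(new PersonRecord { FullName = "Ada Lovelace", AgeYears = 36 });
            Console.WriteLine("{0} (adult: {1})", dto.Name, dto.IsAdult);
        }
    }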

    Read the article

  • SharePoint Content Type Cheat Sheet

    - by Bil Simser
Principle

Any application or solution built in SharePoint must use a custom content type over adding columns to lists. The only exceptions to this are one-off solutions that have no life-cycle, proof-of-concepts, etc.

Creating Content Types

- Web UI. Not portable, POC only.
- C# or declarative (XML). Must deploy these as Features.

Rule

Do not change the base XML for a content type after deploying. The only exception to this rule is that you can re-deploy a modified content type definition only after completely removing it from the environment (either programmatically or by hand).

Updating Content Types

Update and push down to child types:

- Web UI. Manual for each environment. Document the steps required for repeatability.
- Feature upgrade. Preferred solution.
- C#. If you created the content type through code you might want to go this route.
- Create new modified content types and hide the old one. Not recommended, but useful for legacy.

References

- Create Custom Content Types in SharePoint 2010 (C#)
- Content Type Definitions (XML)
- Creating Content Types (XML and C#)
- Updating Approaches
- Updating Child Content Types

Agree or disagree?
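For reference, here is a minimal sketch of the C# route using the server object model. The site URL, the "Invoice" name, and the assumption that an "Amount" site column already exists are all mine, not from the cheat sheet, and in a real solution this code would live in a feature receiver rather than a console app:

    using System;
    using Microsoft.SharePoint;

    class ContentTypeDemo
    {
        static void Main()
        {
            using (SPSite site = new SPSite("http://intranet"))  // hypothetical URL
            using (SPWeb web = site.RootWeb)
            {
                // Parent the new type off the built-in Document content type
                SPContentType parent = web.AvailableContentTypes[SPBuiltInContentTypeId.Document];
                SPContentType invoice = new SPContentType(parent, web.ContentTypes, "Invoice");

                // Add to the web first, then work with the added instance
                invoice = web.ContentTypes.Add(invoice);

                // Reuse an existing site column rather than adding a column to a list
                SPField amount = web.Fields["Amount"];  // assumes this site column exists
                invoice.FieldLinks.Add(new SPFieldLink(amount));

                invoice.Update();
            }
        }
    }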

    Read the article

  • A Look Inside JSR 360 - CLDC 8

    - by Roger Brinkley
If you didn't notice, during JavaOne the Java Micro Edition took a major step forward in its consolidation with the Java Standard Edition when JSR 360 was proposed to the JCP community. Over the last couple of years there has been a focus on moving Java ME back in line with its big brother, Java SE. We see evidence of this in the JCP itself, which just recently merged the ME and SE/EE Executive Committees into a single Java Executive Committee. But just before that occurred, JSR 360 was proposed and approved for development on October 29. So let's take a look at what changes are now being proposed.

In a way, JSR 360 is returning to the original roots of Java ME when it was first introduced. It was indeed a subset of the JDK 4 language, but as Java progressed many of the language changes were not implemented in Java ME. Back then the trade-off was still functionality versus footprint, but the major market was feature phones. Today the market has changed, and CLDC, while it will still target feature phones, will have its primary emphasis on embedded devices like wireless modules, smart meters, health care monitoring, and other M2M devices.

The major changes will come in three areas: language feature changes, library changes, and consolidating the Generic Connection Framework. There have been three Java SE versions implemented since Java ME was first developed, so the language feature changes can be divided into those that came in JDK 5 and those in JDK 7, which mostly consist of the Project Coin changes. There were no language changes in JDK 6. The changes from JDK 5 are:

Assertions - Assertions enable you to test your assumptions about your program. For example, if you write a method that calculates the speed of a particle, you might assert that the calculated speed is less than the speed of light. In the example code below, if the interval isn't between 0 and 1,000, an error of "Invalid value?" is thrown.

    private void setInterval(int interval) {
        assert interval > 0 && interval <= 1000 : "Invalid value?";
    }

Generics - Generics add stability to your code by making more of your bugs detectable at compile time. Code that uses generics has many benefits over non-generic code: stronger type checks at compile time, elimination of casts, and the ability to implement generic algorithms.

Enhanced for loop - The enhanced for loop allows you to iterate through a collection without having to create an Iterator or having to calculate beginning and end conditions for a counter variable. The enhanced for loop is the easiest of the new features to immediately incorporate in your code, replacing more traditional ways of sequentially accessing elements in a collection.

    void processList(Vector<String> list) {
        for (String item : list) {
            ...
        }
    }

Autoboxing/Unboxing - This facility eliminates the drudgery of manual conversion between primitive types, such as int, and wrapper types, such as Integer.

    Hashtable<Integer, String> data = new Hashtable<Integer, String>();

    void add(int id, String value) {
        data.put(id, value);
    }

Enumeration - Prior to JDK 5, enumerations were not typesafe, had no namespace, were brittle because they were compile-time constants, and provided no informative print values. JDK 5 added support for enumerated types as a full-fledged class (dubbed an enum type).
In addition to solving all the problems mentioned above, it allows you to add arbitrary methods and fields to an enum type, to implement arbitrary interfaces, and more. Enum types provide high-quality implementations of all the Object methods. They are Comparable and Serializable, and the serial form is designed to withstand arbitrary changes in the enum type.

    enum Season { WINTER, SPRING, SUMMER, FALL }

    private Season season;

    void setSeason(Season newSeason) {
        season = newSeason;
    }

Varargs - Varargs eliminates the need for manually boxing up argument lists into an array when invoking methods that accept variable-length argument lists. The three periods after the final parameter's type indicate that the final argument may be passed as an array or as a sequence of arguments. Varargs can be used only in the final argument position.

    void warning(String format, String... parameters) {
        ...
        for (String p : parameters) {
            ...process(p);...
        }
        ...
    }

Static imports - The static import construct allows unqualified access to static members without inheriting from the type containing the static members. Instead, the program imports the members either individually or en masse. Once the static members have been imported, they may be used without qualification. The static import declaration is analogous to the normal import declaration: where the normal import declaration imports classes from packages, allowing them to be used without package qualification, the static import declaration imports static members from classes, allowing them to be used without class qualification.

    import static data.Constants.RATIO;
    ...
    double r = Math.cos(RATIO * theta);

Annotations - Annotations provide data about a program that is not part of the program itself. They have no direct effect on the operation of the code they annotate. There are a number of uses for annotations, including information for the compiler, compile-time and deployment-time processing, and run-time processing. They can be applied to a program's declarations of classes, fields, methods, and other program elements.

    @Deprecated
    public void clear();

The language changes from JDK 7 are a little more familiar, as they are mostly the changes from Project Coin:

String in switch - Hey, it only took us 18 years, but the String class can now be used in the expression of a switch statement. Fortunately for us it won't take that long for Java ME to adopt it.

    switch (arg) {
        case "-data": ...
        case "-out": ...

Binary integral literals and underscores in numeric literals - Largely for readability, the integral types (byte, short, int, and long) can also be expressed using the binary number system, and any number of underscore characters (_) can appear between digits in a numeric literal.

    byte flags = 0b01001111;
    long mask = 0xfff0_ff08_4fff_0fffL;

Multi-catch and more precise rethrow - A single catch block can handle more than one type of exception. In addition, the compiler performs more precise analysis of rethrown exceptions than earlier releases of Java SE. This enables you to specify more specific exception types in the throws clause of a method declaration.

    catch (IOException | InterruptedException ex) {
        logger.log(ex);
        throw ex;
    }

Type inference for generic instance creation - Otherwise known as the diamond operator, the type arguments required to invoke the constructor of a generic class can be replaced with an empty set of type parameters (<>) as long as the compiler can infer the type arguments from the context.
    map = new Hashtable<>();

Try-with-resources statement - The try-with-resources statement is a try statement that declares one or more resources. A resource is an object that must be closed after the program is finished with it. The try-with-resources statement ensures that each resource is closed at the end of the statement.

    try (DataInputStream is = new DataInputStream(...)) {
        return is.readDouble();
    }

Simplified varargs method invocation - The Java compiler generates a warning at the declaration site of a varargs method or constructor with a non-reifiable varargs formal parameter. Java SE 7 introduced the compiler option -Xlint:varargs and the annotations @SafeVarargs and @SuppressWarnings({"unchecked", "varargs"}) to suppress these warnings.

On the library side there are new features that will be added to satisfy the language requirements above, and some to improve the currently available set of APIs. The library changes include:

Collections update - New Collection, List, Set, Map, Iterable, and Iterator interfaces, as well as implementations including Hashtable and Vector. Most of this work is to support generics.

String - New StringBuilder and CharSequence, as well as a String formatter. The javac compiler now uses StringBuilder instead of StringBuffer; since StringBuilder is not synchronized there is a performance increase, which has changed how the String constructor works.

Comparable interface - The Comparable interface works with Collections, making it easier to reuse.

Try with resources - Closeable and AutoCloseable.

Annotations - While support for annotations is provided, it will only be compile-time support: SuppressWarnings, Deprecated, Override.

NIO - There is a subset of the NIO Buffer classes that has been in use in some of the graphics packages and needs to be pulled in, and also support for an NIO File I/O subset.

Platform extensibility via service providers (ServiceLoader) - The ServiceLoader interface does late binding of an interface to existing implementations. It helps to package an interface and bind the behavior of the implementation at a later point in time. Provider classes must have a zero-argument constructor so that they can be instantiated during loading. They are located and instantiated on demand and are identified via a provider-configuration file in the META-INF/services resource directory. This is a mechanism from Java SE.

    import com.XYZ.ServiceA;
    ServiceLoader<ServiceA> sl1 = ServiceLoader.load(ServiceA.class);

    Resources:
    META-INF/services/com.XYZ.ServiceA:
        ServiceAProvider1
        ServiceAProvider2
        ServiceAProvider3
    META-INF/services/ServiceB:
        ServiceBProvider1
        ServiceBProvider2

The Generic Connection Framework (GCF) was previously specified in a number of different JSRs, including CLDC, MIDP, CDC 1.2, and JSR 197. JSR 360 represents a rare opportunity to consolidate and reintegrate parts that were duplicated in other specifications into a single specification, upgrade the APIs, and provide new functionality. The proposal is to specify a combined GCF specification that can be used with Java ME or Java SE and be backwards compatible with previous implementations. Because of size limitations as well as complexity, some features like InvokeDynamic and Unicode 6 will not be included. Additionally, any language or library changes in JDK 8 will not be included. On the upside, with all the changes being made, backwards compatibility will still be maintained.
JSR 360 is a major step forward for Java ME in terms of platform modernization, language alignment, and embedded support. If you're interested in following the progress of this JSR, see the JSR's java.net project for details of the email lists and discussion groups.

    Read the article

  • My father, a doctor with no programming background, is insisting on writing a database to store non-critical patient information

    - by Dominic Bou-Samra
    So, my father is currently in the process of "hacking" together a database for his small (4-doctor) practice using FileMaker Pro, a GUI-based database tool. The database will be used to help ease the burden of reporting from medical machines, streamlining quite a clumsy process. He's got no programming background, and seems to be doing everything in his power to not learn things correctly. He's got duplicate data types, no database-enforced relationships (foreign/primary key constraints) and a dozen other issues. He's doing it all by hand via the GUI tool, using YouTube videos. My issue is that whilst I want him to succeed 100%, I don't think it's appropriate for him to be handling these types of decisions. How do I convince him that without some sort of education in these topics, a hacked-together solution is a bad idea? He can be quite stubborn and I think he sees these types of jobs as "child's play". How should I approach this? Is it even that bad an idea - or am I correct in thinking he should hire a proper DBA/developer to handle this so that it doesn't become a maintenance nightmare? NB: I am a developer consultant of 4 years and I've seen my share of painful customer implementations.

    Read the article

  • Automate #include refactoring in C++ [on hold]

    - by Mikhail
    I have a big project with hundreds of files. And as it often happens to C++ projects, the #include directives are messed up. I want to refactor them to increase clarity, decrease compilation time and simplify analysis.

    For each .h file I want to make sure that:
    - it has #include directives only for the types it is using, but
    - it has only forward declarations of types that are used as T* or T&.

    For each .cpp file I want to make sure that:
    - it has #include directives only for the types it is using and not already included by other headers (no indirect includes when possible).

    I'm looking for a tool which will help me automate this refactoring. For now I only know of tools that help to remove redundant includes; there are many:
    - PC-lint
    - include-what-you-use
    - cppclean
    - ProFactor IncludeManager

    But I know of no tools to help me move the necessary includes into .h files or replace includes with forward declarations. Any ideas? Tools for Windows and Visual Studio are preferred. Update. Considered to be off-topic. Please follow the link on Software Recommendations: http://softwarerecs.stackexchange.com/q/4461/3331

    Read the article

  • Misconceptions about purely functional languages?

    - by Giorgio
    I often encounter the following statements / arguments:

    1. Pure functional programming languages do not allow side effects (and are therefore of little use in practice, because any useful program does have side effects, e.g. when it interacts with the external world).
    2. Pure functional programming languages do not allow writing a program that maintains state (which makes programming very awkward, because in many applications you do need state).

    I am not an expert in functional languages but here is what I have understood about these topics until now. Regarding point 1, you can interact with the environment in purely functional languages, but you have to explicitly mark the code (functions) that introduces side effects (e.g. in Haskell by means of monadic types). Also, AFAIK computing by side effects (destructively updating data) should also be possible (using monadic types?) but is not the preferred way of working. Regarding point 2, AFAIK you can represent state by threading values through several computation steps (in Haskell, again, using monadic types), but I have no practical experience doing this and my understanding is rather vague. So, are the two statements above correct in any sense or are they just misconceptions about purely functional languages? If they are misconceptions, how did they come about? Could you write a (possibly small) code snippet illustrating the Haskell idiomatic way to (1) implement side effects and (2) implement a computation with state?
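    The idiomatic Haskell answers involve the IO and State monads; without attempting Haskell here, the core of point 2 - passing state in and returning new state out instead of mutating - can be sketched in plain Java (the Counter class is invented purely for illustration):

        // A pure approach to state: each "update" returns a fresh value and
        // leaves the old one untouched, so nothing is ever mutated.
        final class Counter {
            final int value;

            Counter(int value) { this.value = value; }

            // Pure: depends only on its input and produces a new Counter.
            Counter increment() { return new Counter(value + 1); }
        }

        class Demo {
            public static void main(String[] args) {
                Counter c0 = new Counter(0);
                Counter c1 = c0.increment();
                Counter c2 = c1.increment();
                // c0 and c1 still hold 0 and 1; state was threaded, not mutated.
                System.out.println(c2.value); // 2
            }
        }

    Haskell's State monad packages exactly this threading so it does not have to be written out by hand.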

    Read the article

  • EE vs Computer Science: Effect on Developers' Approaches, Styles?

    - by DarenW
    Are there any systematic differences between software developers (sw engineers, architects, whatever job title) with an electronics or other engineering background, compared to those who entered the profession through computer science? By electronics background, I mean an EE degree, a self-taught electronics tinkerer, or other types of engineers and experimental physicists. I'm wondering if coming into the software-making professions from a strong knowledge of flip-flops, tristate buffers, clock edge rise times and so forth usually leads to a distinct approach to problems, mindsets, or superior skills at certain specialties and a lack of skills at others, when compared to the computer science types who are full of concepts like abstract data types, object orientation and database normalization, who speak of "closures" in programming languages - things that make little sense to the soldering iron crowd until they learn enough programming. The real world, I'm sure, offers a wild range of individual exceptions, but for the most part, can you say there are overall differences? Would these have hiring implications, e.g. (to make up something) "never hire an electron wrangler to do database design"? Could knowing about any differences help job seekers find something appropriate more effectively? Or provide enlightenment or some practical advice for those who find themselves misfits in a particular job role? (Btw, I've never taken any computer science classes; my impression of exactly what they cover is fuzzy. I'm an electronics/physics/art type, myself.)

    Read the article

  • JSR 308 Moves Forward

    - by abuckley
    I am pleased to announce a number of recent milestones for JSR 308, Annotations on Java Types:

    Adoption of JCP 2.8 - Thanks to the agreement of the Expert Group, JSR 308 operates under JCP 2.8 from September 2012. There is a publicly archived mailing list for EG members, and a companion list for anyone who wishes to follow EG traffic by email. There is also a "suggestion box" mailing list where anyone can send feedback to the EG directly. Feedback will be discussed on the main EG list. Co-spec lead Prof. Michael Ernst maintains an issue tracker and a document archive.

    Early-Access Builds of the Reference Implementation - Oracle has published binaries for all platforms of JDK 8 with support for type annotations. Builds are generated from OpenJDK's type-annotations/type-annotations forest (notably the langtools repo). The forest is owned by the Type Annotations project.

    Integration with Enhanced Metadata - On the enhanced metadata mailing list, Oracle has proposed support for repeating annotations in the Java language in Java SE 8. For completeness, it must be possible to repeat annotations on types, not only on declarations. The implementation of repeating annotations on declarations is already in the type-annotations/type-annotations forest (and hence in the early-access builds above) and work is underway to extend it to types.
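    For readers unfamiliar with what JSR 308 enables, here is a small sketch: the @NonNull annotation below is a user-defined example (checker frameworks define their own), not something shipped in the JDK.

        import java.lang.annotation.ElementType;
        import java.lang.annotation.Target;
        import java.util.List;

        // ElementType.TYPE_USE is the JSR 308 addition: the annotation may now
        // appear on any use of a type, not just on declarations.
        @Target(ElementType.TYPE_USE)
        @interface NonNull {}

        class TypeAnnotationExample {
            // Annotations on type arguments and on local variable types:
            void process(List<@NonNull String> names) {
                for (@NonNull String name : names) {
                    System.out.println(name.length());
                }
            }
        }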

    Read the article

  • Representing a world in memory

    - by user9993
    I'm attempting to write a chunk-based map system for a game, where as the player moves around chunks are loaded/unloaded, so that the program doesn't run out of memory by having the whole map in memory. I have this part mostly working; however, I've hit a wall regarding how to represent the contents of each chunk in memory because of my so far limited understanding of OOP languages. The design I have currently has a ChunkManager class that uses a .NET List type to store instances of Chunk classes. The "chunks" consist of "blocks". It's similar to a Minecraft-style game. Inside the Chunk classes, I have some information such as the chunk's X/Y coordinate etc., and I also have a three-dimensional array of Block objects. (Three-dimensional because I need XYZ values.) Here's the problem: the Block class has some basic properties, and I had planned on making different types of blocks inherit from this "base" class. So for example, I would have "StoneBlock", "WaterBlock" etc. So because I have blocks of many different types, I don't know how I would create an array with different object types in each cell. This is how I currently have the three-dimensional array declared in my Chunk class: private Block[][][] ArrayOfBlocks; But obviously this will only accept Block objects, not any of the other classes that inherit from Block. How would I go about creating this? Or am I doing it completely wrong and there's a better way?
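    The premise in the last paragraph is actually the misconception: an array whose element type is a base class can hold instances of any subclass. A minimal sketch follows (in Java for illustration; the same principle applies to the C#/.NET code in the question, and the class names Block, StoneBlock and WaterBlock are taken from it):

        // Base class plus two specializations, as described in the question.
        class Block { int hardness; }
        class StoneBlock extends Block { }
        class WaterBlock extends Block { }

        class Chunk {
            // Element type Block means "Block or any subclass of Block".
            private Block[][][] blocks = new Block[16][16][16];

            void fill() {
                blocks[0][0][0] = new StoneBlock(); // legal: StoneBlock is-a Block
                blocks[0][0][1] = new WaterBlock(); // legal: WaterBlock is-a Block
            }

            void inspect() {
                Block b = blocks[0][0][0];
                if (b instanceof StoneBlock) {
                    // Downcast only where subclass-specific behavior is needed.
                    StoneBlock stone = (StoneBlock) b;
                }
            }
        }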

    Read the article

  • Creating a dynamic proxy generator with c# – Part 3 – Creating the constructors

    - by SeanMcAlinden
    Creating a dynamic proxy generator with c# – Part 1 – Creating the Assembly builder, Module builder and caching mechanism
    Creating a dynamic proxy generator with c# – Part 2 – Interceptor Design
    For the latest code go to http://rapidioc.codeplex.com/

    When building our proxy type, the first thing we need to do is build the constructors. There needs to be a corresponding constructor for each constructor on the passed-in base type. We also want to create a field to store the interceptors and construct this list within each constructor. So assuming the passed-in base type is a User<int, IRepository> class, we're looking to generate constructor code like the following:

    Default Constructor

        public User`2_RapidDynamicBaseProxy()
        {
            this.interceptors = new List<IInterceptor<User<int, IRepository>>>();
            DefaultInterceptor<User<int, IRepository>> item = new DefaultInterceptor<User<int, IRepository>>();
            this.interceptors.Add(item);
        }

    Parameterised Constructor

        public User`2_RapidDynamicBaseProxy(IRepository repository1) : base(repository1)
        {
            this.interceptors = new List<IInterceptor<User<int, IRepository>>>();
            DefaultInterceptor<User<int, IRepository>> item = new DefaultInterceptor<User<int, IRepository>>();
            this.interceptors.Add(item);
        }

    As you can see, we first populate a field on the class with a new list of the passed-in base type, construct our DefaultInterceptor class, then add the DefaultInterceptor instance to our interceptor collection. Although this seems like a relatively small task, there is a fair amount of work required to get this going. Instead of going through every line of code – please download the latest from http://rapidioc.codeplex.com/ and debug through. In this post I'm going to concentrate on explaining how it works.

    TypeBuilder

    The TypeBuilder class is the main class used to create the type. You instantiate a new TypeBuilder using the assembly module we created in part 1.

        /// <summary>
        /// Creates a type builder.
        /// </summary>
        /// <typeparam name="TBase">The type of the base class to be proxied.</typeparam>
        public static TypeBuilder CreateTypeBuilder<TBase>() where TBase : class
        {
            TypeBuilder typeBuilder = DynamicModuleCache.Get.DefineType
                (
                    CreateTypeName<TBase>(),
                    TypeAttributes.Class | TypeAttributes.Public,
                    typeof(TBase),
                    new Type[] { typeof(IProxy) }
                );

            if (typeof(TBase).IsGenericType)
            {
                GenericsHelper.MakeGenericType(typeof(TBase), typeBuilder);
            }

            return typeBuilder;
        }

        private static string CreateTypeName<TBase>() where TBase : class
        {
            return string.Format("{0}_RapidDynamicBaseProxy", typeof(TBase).Name);
        }

    As you can see, I've created a new public class derived from TBase which also implements my IProxy interface; this is used later for adding interceptors. If the base type is generic, the following GenericsHelper.MakeGenericType method is called.

    GenericsHelper

        using System;
        using System.Reflection.Emit;

        namespace Rapid.DynamicProxy.Types.Helpers
        {
            /// <summary>
            /// Helper class for generic types and methods.
            /// </summary>
            internal static class GenericsHelper
            {
                /// <summary>
                /// Makes the typeBuilder a generic.
                /// </summary>
                /// <param name="concrete">The concrete.</param>
                /// <param name="typeBuilder">The type builder.</param>
                public static void MakeGenericType(Type baseType, TypeBuilder typeBuilder)
                {
                    Type[] genericArguments = baseType.GetGenericArguments();

                    string[] genericArgumentNames = GetArgumentNames(genericArguments);

                    GenericTypeParameterBuilder[] genericTypeParameterBuilder
                        = typeBuilder.DefineGenericParameters(genericArgumentNames);

                    typeBuilder.MakeGenericType(genericTypeParameterBuilder);
                }

                /// <summary>
                /// Gets the argument names from an array of generic argument types.
                /// </summary>
                /// <param name="genericArguments">The generic arguments.</param>
                public static string[] GetArgumentNames(Type[] genericArguments)
                {
                    string[] genericArgumentNames = new string[genericArguments.Length];

                    for (int i = 0; i < genericArguments.Length; i++)
                    {
                        genericArgumentNames[i] = genericArguments[i].Name;
                    }

                    return genericArgumentNames;
                }
            }
        }

    As you can see, I'm getting all of the generic argument types and names, creating a GenericTypeParameterBuilder and then using the typeBuilder to make the new type generic.

    InterceptorsField

    The interceptors field will store a List<IInterceptor<TBase>>. Fields are simply made using the FieldBuilder class. The following code demonstrates how to create the interceptor field.

        FieldBuilder interceptorsField = typeBuilder.DefineField
            (
                "interceptors",
                typeof(System.Collections.Generic.List<>).MakeGenericType(typeof(IInterceptor<TBase>)),
                FieldAttributes.Private
            );

    The field will now exist with the new Type although it currently has no data – we'll deal with this in the constructor.

    Add method for interceptorsField

    To enable us to add to the interceptorsField list, we are going to utilise the Add method that already exists within the System.Collections.Generic.List class. We still however have to create the MethodInfo necessary to call the Add method. This can be done similar to the following:

    Add Interceptor Field

        MethodInfo addInterceptor = typeof(List<>)
            .MakeGenericType(new Type[] { typeof(IInterceptor<>).MakeGenericType(typeof(TBase)) })
            .GetMethod
            (
                "Add",
                BindingFlags.Instance | BindingFlags.Public | BindingFlags.NonPublic,
                null,
                new Type[] { typeof(IInterceptor<>).MakeGenericType(typeof(TBase)) },
                null
            );

    So we've created a List<IInterceptor<TBase>> type, then using the type created a MethodInfo called Add which accepts an IInterceptor<TBase>. Now in our constructor we can use this to call this.interceptors.Add(// interceptor);

    Building the Constructors

    This will be the first hard-core part of the proxy building process, so I'm going to show the class and then try to explain what everything is doing. For a clear view, download the source from http://rapidioc.codeplex.com/, go to the test project and debug through the constructor building section. Anyway, here it is:

    DynamicConstructorBuilder

        using System;
        using System.Collections.Generic;
        using System.Reflection;
        using System.Reflection.Emit;
        using Rapid.DynamicProxy.Interception;
        using Rapid.DynamicProxy.Types.Helpers;

        namespace Rapid.DynamicProxy.Types.Constructors
        {
            /// <summary>
            /// Class for creating the proxy constructors.
            /// </summary>
            internal static class DynamicConstructorBuilder
            {
                /// <summary>
                /// Builds the constructors.
                /// </summary>
                /// <typeparam name="TBase">The base type.</typeparam>
                /// <param name="typeBuilder">The type builder.</param>
                /// <param name="interceptorsField">The interceptors field.</param>
                public static void BuildConstructors<TBase>
                    (
                        TypeBuilder typeBuilder,
                        FieldBuilder interceptorsField,
                        MethodInfo addInterceptor
                    )
                    where TBase : class
                {
                    ConstructorInfo interceptorsFieldConstructor = CreateInterceptorsFieldConstructor<TBase>();

                    ConstructorInfo defaultInterceptorConstructor = CreateDefaultInterceptorConstructor<TBase>();

                    ConstructorInfo[] constructors = typeof(TBase).GetConstructors();

                    foreach (ConstructorInfo constructorInfo in constructors)
                    {
                        CreateConstructor<TBase>
                            (
                                typeBuilder,
                                interceptorsField,
                                interceptorsFieldConstructor,
                                defaultInterceptorConstructor,
                                addInterceptor,
                                constructorInfo
                            );
                    }
                }

                #region Private Methods

                private static void CreateConstructor<TBase>
                    (
                        TypeBuilder typeBuilder,
                        FieldBuilder interceptorsField,
                        ConstructorInfo interceptorsFieldConstructor,
                        ConstructorInfo defaultInterceptorConstructor,
                        MethodInfo AddDefaultInterceptor,
                        ConstructorInfo constructorInfo
                    ) where TBase : class
                {
                    Type[] parameterTypes = GetParameterTypes(constructorInfo);

                    ConstructorBuilder constructorBuilder = CreateConstructorBuilder(typeBuilder, parameterTypes);

                    ILGenerator cIL = constructorBuilder.GetILGenerator();

                    LocalBuilder defaultInterceptorMethodVariable =
                        cIL.DeclareLocal(typeof(DefaultInterceptor<>).MakeGenericType(typeof(TBase)));

                    ConstructInterceptorsField(interceptorsField, interceptorsFieldConstructor, cIL);

                    ConstructDefaultInterceptor(defaultInterceptorConstructor, cIL, defaultInterceptorMethodVariable);

                    AddDefaultInterceptorToInterceptorsList
                        (
                            interceptorsField,
                            AddDefaultInterceptor,
                            cIL,
                            defaultInterceptorMethodVariable
                        );

                    CreateConstructor(constructorInfo, parameterTypes, cIL);
                }

                private static void CreateConstructor(ConstructorInfo constructorInfo, Type[] parameterTypes, ILGenerator cIL)
                {
                    cIL.Emit(OpCodes.Ldarg_0);

                    if (parameterTypes.Length > 0)
                    {
                        LoadParameterTypes(parameterTypes, cIL);
                    }

                    cIL.Emit(OpCodes.Call, constructorInfo);
                    cIL.Emit(OpCodes.Ret);
                }

                private static void LoadParameterTypes(Type[] parameterTypes, ILGenerator cIL)
                {
                    for (int i = 1; i <= parameterTypes.Length; i++)
                    {
                        cIL.Emit(OpCodes.Ldarg_S, i);
                    }
                }

                private static void AddDefaultInterceptorToInterceptorsList
                    (
                        FieldBuilder interceptorsField,
                        MethodInfo AddDefaultInterceptor,
                        ILGenerator cIL,
                        LocalBuilder defaultInterceptorMethodVariable
                    )
                {
                    cIL.Emit(OpCodes.Ldarg_0);
                    cIL.Emit(OpCodes.Ldfld, interceptorsField);
                    cIL.Emit(OpCodes.Ldloc, defaultInterceptorMethodVariable);
                    cIL.Emit(OpCodes.Callvirt, AddDefaultInterceptor);
                }

                private static void ConstructDefaultInterceptor
                    (
                        ConstructorInfo defaultInterceptorConstructor,
                        ILGenerator cIL,
                        LocalBuilder defaultInterceptorMethodVariable
                    )
                {
                    cIL.Emit(OpCodes.Newobj, defaultInterceptorConstructor);
                    cIL.Emit(OpCodes.Stloc, defaultInterceptorMethodVariable);
                }

                private static void ConstructInterceptorsField
                    (
                        FieldBuilder interceptorsField,
                        ConstructorInfo interceptorsFieldConstructor,
                        ILGenerator cIL
                    )
                {
                    cIL.Emit(OpCodes.Ldarg_0);
                    cIL.Emit(OpCodes.Newobj, interceptorsFieldConstructor);
                    cIL.Emit(OpCodes.Stfld, interceptorsField);
                }

                private static ConstructorBuilder CreateConstructorBuilder(TypeBuilder typeBuilder, Type[] parameterTypes)
                {
                    return typeBuilder.DefineConstructor
                        (
                            MethodAttributes.Public | MethodAttributes.SpecialName | MethodAttributes.RTSpecialName
                            | MethodAttributes.HideBySig, CallingConventions.Standard, parameterTypes
                        );
                }

                private static Type[] GetParameterTypes(ConstructorInfo constructorInfo)
                {
                    ParameterInfo[] parameterInfoArray = constructorInfo.GetParameters();

                    Type[] parameterTypes = new Type[parameterInfoArray.Length];

                    for (int p = 0; p < parameterInfoArray.Length; p++)
                    {
                        parameterTypes[p] = parameterInfoArray[p].ParameterType;
                    }

                    return parameterTypes;
                }

                private static ConstructorInfo CreateInterceptorsFieldConstructor<TBase>() where TBase : class
                {
                    return ConstructorHelper.CreateGenericConstructorInfo
                        (
                            typeof(List<>),
                            new Type[] { typeof(IInterceptor<TBase>) },
                            BindingFlags.Instance | BindingFlags.Public | BindingFlags.NonPublic
                        );
                }

                private static ConstructorInfo CreateDefaultInterceptorConstructor<TBase>() where TBase : class
                {
                    return ConstructorHelper.CreateGenericConstructorInfo
                        (
                            typeof(DefaultInterceptor<>),
                            new Type[] { typeof(TBase) },
                            BindingFlags.Instance | BindingFlags.Public | BindingFlags.NonPublic
                        );
                }

                #endregion
            }
        }

    So, the first two tasks within the class should be fairly clear: we are creating a ConstructorInfo for the interceptorsField list and a ConstructorInfo for the DefaultInterceptor; this is for instantiating them in each constructor. We then, using Reflection, get an array of all of the constructors in the base class, loop through the array and create a corresponding proxy constructor. Hopefully, the code is fairly easy to follow other than some new types and the dreaded Opcodes.

    ConstructorBuilder

    This class defines a new constructor on the type.
    ILGenerator

    The ILGenerator allows the use of Reflection.Emit to create the method body.

    LocalBuilder

    The local builder allows the storage of data in local variables within a method; in this case it's the constructed DefaultInterceptor.

    Constructing the interceptors field

    The first bit of IL you'll come across as you follow through the code is the following private method used for constructing the field list of interceptors.

        private static void ConstructInterceptorsField
            (
                FieldBuilder interceptorsField,
                ConstructorInfo interceptorsFieldConstructor,
                ILGenerator cIL
            )
        {
            cIL.Emit(OpCodes.Ldarg_0);
            cIL.Emit(OpCodes.Newobj, interceptorsFieldConstructor);
            cIL.Emit(OpCodes.Stfld, interceptorsField);
        }

    The first thing to know about generating code using IL is that you are using a stack; if you want to use something, you need to push it up the stack.

    OpCodes.Ldarg_0

    This opcode is a really interesting one: basically each method has a hidden first argument of the containing class instance (apart from static methods), and constructors are no different. This is the reason you can use syntax like this.myField.

    So back to the method: as we want to instantiate the List in the interceptorsField, first we need to load the class instance onto the stack, we then load the new object (new List<TBase>) and finally we store it in the interceptorsField. Hopefully, that should follow easily enough in the method. In each constructor you would now have this.interceptors = new List<IInterceptor<User<int, IRepository>>>();

    Constructing and storing the DefaultInterceptor

    The next bit of code we need to create is the constructed DefaultInterceptor. Firstly, we create a local builder to store the constructed type.

    Create a local builder

        LocalBuilder defaultInterceptorMethodVariable =
            cIL.DeclareLocal(typeof(DefaultInterceptor<>).MakeGenericType(typeof(TBase)));

    Once our local builder is ready, we then need to construct the DefaultInterceptor<TBase> and store it in the variable.

    Construct DefaultInterceptor

        private static void ConstructDefaultInterceptor
            (
                ConstructorInfo defaultInterceptorConstructor,
                ILGenerator cIL,
                LocalBuilder defaultInterceptorMethodVariable
            )
        {
            cIL.Emit(OpCodes.Newobj, defaultInterceptorConstructor);
            cIL.Emit(OpCodes.Stloc, defaultInterceptorMethodVariable);
        }

    As you can see, using the ConstructorInfo named defaultInterceptorConstructor, we load the new object onto the stack. Then, using the store local opcode (OpCodes.Stloc), we store the new object in the local builder named defaultInterceptorMethodVariable.

    Add the constructed DefaultInterceptor to the interceptors field collection

    Using the Add method created earlier in this post, we are going to add the new DefaultInterceptor object to the interceptors field collection.

    Add Default Interceptor

        private static void AddDefaultInterceptorToInterceptorsList
            (
                FieldBuilder interceptorsField,
                MethodInfo AddDefaultInterceptor,
                ILGenerator cIL,
                LocalBuilder defaultInterceptorMethodVariable
            )
        {
            cIL.Emit(OpCodes.Ldarg_0);
            cIL.Emit(OpCodes.Ldfld, interceptorsField);
            cIL.Emit(OpCodes.Ldloc, defaultInterceptorMethodVariable);
            cIL.Emit(OpCodes.Callvirt, AddDefaultInterceptor);
        }

    So, here's what's going on.
    The class instance is first loaded onto the stack using the load argument at index 0 opcode (OpCodes.Ldarg_0) (remember the first arg is the hidden class instance). The interceptorsField is then loaded onto the stack using the load field opcode (OpCodes.Ldfld). We then load the DefaultInterceptor object we stored locally using the load local opcode (OpCodes.Ldloc). Then finally we call the AddDefaultInterceptor method using the call virtual opcode (OpCodes.Callvirt).

    Completing the constructor

    The last thing we need to do is complete the constructor.

    Complete the constructor

        private static void CreateConstructor(ConstructorInfo constructorInfo, Type[] parameterTypes, ILGenerator cIL)
        {
            cIL.Emit(OpCodes.Ldarg_0);

            if (parameterTypes.Length > 0)
            {
                LoadParameterTypes(parameterTypes, cIL);
            }

            cIL.Emit(OpCodes.Call, constructorInfo);
            cIL.Emit(OpCodes.Ret);
        }

        private static void LoadParameterTypes(Type[] parameterTypes, ILGenerator cIL)
        {
            for (int i = 1; i <= parameterTypes.Length; i++)
            {
                cIL.Emit(OpCodes.Ldarg_S, i);
            }
        }

    So, the first thing we do again is load the class instance using the load argument at index 0 opcode (OpCodes.Ldarg_0). We then load each parameter using OpCodes.Ldarg_S; this opcode allows us to specify an index position for each argument. We then set up calling the base constructor using OpCodes.Call and the base constructor's ConstructorInfo. Finally, all methods are required to return, even when they have a void return. As there are no values on the stack after the OpCodes.Call line, we can safely call OpCodes.Ret to give the constructor a void return. If there was a value, we would have to pop it off the stack before calling return; otherwise, the method would try to return a value.

    Conclusion

    This was a slightly hardcore post but hopefully it hasn't been too hard to follow. The main thing is that a number of the really useful opcodes have been used and now the dynamic proxy is capable of being constructed. If you download the code and debug through the tests at http://rapidioc.codeplex.com/, you'll be able to create proxies at this point; they cannot do anything in terms of interception yet, but you can happily run the tests, call base methods and properties and also take a look at the created assembly in Reflector. Hope this is useful. The next post should be up soon; it will cover creating the private methods for calling the base class methods and properties. Kind Regards, Sean.

    Read the article

  • Storing a set of criteria in another table

    - by bendataclear
    I have a large table with sales data; the useful columns are below:

        RowID   Date        Customer  Salesperson  Product_Type  Manufacturer  Quantity  Value
        1       01-06-2004  James     Ian          Taps          Tap Ltd       200       £850
        2       02-06-2004  Apple     Fran         Hats          Hats Inc      30        £350
        3       04-06-2004  James     Lawrence     Pencils       ABC Ltd       2000      £980
        ...     (many rows later)
        185352  03-09-2012  Apple     Ian          Washers       Tap Ltd       600       £80

    I need to calculate a large set of targets from a table containing values of different types; the target table is under my control and so far looks like:

        TargetID  Year  Month  Salesperson  Target_Type  Quantity
        1         2012  7      Ian          1            6000
        2         2012  8      James        2            2000
        3         2012  9      Ian          2            6500

    At present I am working out target types using a view of the first table which has a lot of extra columns:

        SELECT YEAR(Date)
             , MONTH(Date)
             , Salesperson
             , Quantity
             , CASE WHEN Manufacturer IN ('Tap Ltd','Hats Inc') AND Product_Type = 'Hats'
                    THEN True ELSE False END AS IsType1
             , CASE WHEN Manufacturer = 'Hats Inc' AND Product_Type IN ('Hats','Coats')
                    THEN True ELSE False END AS IsType2
             ...
             , CASE WHEN Manufacturer IN ('Tap Ltd','Hats Inc') AND Product_Type = 'Hats'
                    THEN True ELSE False END AS IsType24
             , CASE WHEN Manufacturer IN ('Tap Ltd','Hats Inc') AND Product_Type = 'Hats'
                    THEN True ELSE False END AS IsType25
        FROM SalesTable
        WHERE [some stuff here]

    This is horrible to read/debug and I hate it!! I've tried a few different ways of simplifying this but have been unable to get it to work. The closest I have come is to have a third table holding the definition of the types, with the values for each field and the type number; this can be joined to the tables to give me the full values, but I can't work out a way to cope with multiple values for each field. Finally, the question: is there a standard way this can be done, or an easier/neater method other than one column for each type of target? I know this is a complex problem so if anything is unclear please let me know.

    Edit - What I need to get: at the very end of the process I need to have targets displayed with actual sales:

        Type  Year  Month  Salesperson  TargetQty  ActualQty
        2     2012  8      James        2000       2809
        2     2012  9      Ian          6500       6251

    Each row of the sales table could potentially satisfy 8 of the types. Some more points:

    - I have 5 different columns that need to be defined against the targets (or set to NULL to include any value).
    - I have between 30 and 40 different types that need to be defined; several of the columns could contain as many as 10 different values.
    - For point 2, if I am using a row for each permutation of values, 2 columns with 10 values each would give me 100 rows for each salesperson for each month, which is a lot, but if this is the only way to define multiple values I will have to do this.

    Sorry if this makes no sense!

    Read the article

  • What does this WCF error mean: "Custom tool warning: Cannot import wsdl:portType"

    - by stiank81
    I created a WCF service library project in my solution, and have service references to this. I use the services from a class library, so I have references from my WPF application project in addition to the class library. Services are set up straightforwardly - only changed to get async service functions. Everything was working fine - until I wanted to update my service references. It failed, so I eventually rolled back and retried, but it failed even then! So - updating the service references fails without making any changes to them. Why?! The error I get is this one:

        Custom tool error: Failed to generate code for the service reference 'MyServiceReference'. Please check other error and warning messages for details.

    The warning gives more information:

        Custom tool warning: Cannot import wsdl:portType
        Detail: An exception was thrown while running a WSDL import extension: System.ServiceModel.Description.DataContractSerializerMessageContractImporter
        Error: List of referenced types contains more than one type with data contract name 'Patient' in namespace 'http://schemas.datacontract.org/2004/07/MyApp.Model'. Need to exclude all but one of the following types. Only matching types can be valid references:
        "MyApp.Dashboard.MyServiceReference.Patient, Medski.Dashboard, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null" (matching)
        "MyApp.Model.Patient, MyApp.Model, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null" (matching)
        XPath to Error Source: //wsdl:definitions[@targetNamespace='http://tempuri.org/']/wsdl:portType[@name='ISomeService']

    There are two similar warnings too, saying:

        Custom tool warning: Cannot import wsdl:binding
        Detail: There was an error importing a wsdl:portType that the wsdl:binding is dependent on.
        XPath to wsdl:portType: //wsdl:definitions[@targetNamespace='http://tempuri.org/']/wsdl:portType[@name='ISomeService']
        XPath to Error Source: //wsdl:definitions[@targetNamespace='http://tempuri.org/']/wsdl:binding[@name='WSHttpBinding_ISomeService']

    And the same for "Custom tool warning: Cannot import wsdl:port". I find this all confusing. I don't have a Patient class on the client-side Dashboard except the one I got through the service reference. So what does it mean? And why does it suddenly show? Remember: I didn't even change anything! Now, the solution to this was found here, but without an explanation of what it means. So, in the "Configure service reference" dialog for the service I uncheck the "Reuse types in the referenced assemblies" checkbox. Rebuilding, it all now works fine without problems. But what did I really change? Will this have an impact on my application? And when should one uncheck this? I do want to reuse the types I've set up DataContract on, but no more. Will I still get access to those without this checked?

    Read the article

  • Optimizing if-else /switch-case with string options

    - by cc
    What modification would you bring to this piece of code? In the last lines, should I use more if-else structures instead of "if-if-if"?

        if (action.equals("opt1")) {
            //something
        } else {
            if (action.equals("opt2")) {
                //something
            } else {
                if ((action.equals("opt3")) || (action.equals("opt4"))) {
                    //something
                }
                if (action.equals("opt5")) {
                    //something
                }
                if (action.equals("opt6")) {
                    //something
                }
            }
        }

    Later Edit: This is Java. I don't think that the switch-case structure will work with Strings. Later Edit 2: A switch works with the byte, short, char, and int primitive data types. It also works with enumerated types (discussed in Classes and Inheritance) and a few special classes that "wrap" certain primitive types: Character, Byte, Short, and Integer (discussed in Simple Data Objects).
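    Worth noting: since Java 7 the switch statement does accept Strings (the "Later Edit" predates that), and on earlier versions a flat else-if chain is the usual cleanup. A sketch of both, assuming each branch keeps its original body; since action can match at most one literal, the flattened chain is behaviorally equivalent to the nested original:

        // Java 7 and later: switch on String directly.
        switch (action) {
            case "opt1":
                // something
                break;
            case "opt3":
            case "opt4":
                // shared handling for opt3 and opt4
                break;
            case "opt5":
                // something
                break;
            default:
                // unknown option
        }

        // Java 6 and earlier: flatten the nesting into an else-if chain.
        if (action.equals("opt1")) {
            // something
        } else if (action.equals("opt2")) {
            // something
        } else if (action.equals("opt3") || action.equals("opt4")) {
            // something
        } else if (action.equals("opt5")) {
            // something
        } else if (action.equals("opt6")) {
            // something
        }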

    Read the article

  • BigInteger or not BigInteger?

    - by Alon
    In Java, most of the primitive types are signed (one bit is used to represent the +/-), and therefore when I exceed the limits of such a type, I can get many strange things, like negative numbers. In the BigInteger class, I have no limits and there are some helpful functions there, but it is pretty depressing to convert your beautiful code to work with the BigInteger class, especially when primitive operators don't work there and you must use functions from this class. I was wondering if you have had this problem, and if you have any better solution than the BigInteger class? Thank you.
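    A minimal sketch of the trade-off the question describes: primitive arithmetic silently wraps on overflow, while BigInteger grows without bound at the cost of method-call syntax.

        import java.math.BigInteger;

        public class OverflowDemo {
            public static void main(String[] args) {
                // A primitive long wraps around silently at its limit.
                long max = Long.MAX_VALUE;
                System.out.println(max + 1); // -9223372036854775808 (wrapped!)

                // BigInteger has no fixed limit, but operators become method calls.
                BigInteger big = BigInteger.valueOf(Long.MAX_VALUE);
                System.out.println(big.add(BigInteger.ONE)); // 9223372036854775808
            }
        }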

    Read the article

  • Django Multi-Table Inheritance VS Specifying Explicit OneToOne Relationship in Models

    - by chefsmart
    Hope all this makes sense :) I'll clarify via comments if necessary. Also, I am experimenting using bold text in this question, and will edit it out if I (or you) find it distracting. With that out of the way... Using django.contrib.auth gives us User and Group, among other useful things that I can't do without (like basic messaging). In my app I have several different types of users. A user can be of only one type. That would easily be handled by groups, with a little extra care. However, these different users are related to each other in hierarchies / relationships. Let's take a look at these users:

    - Principals - "top level" users
    - Administrators - each administrator reports to a Principal
    - Coordinators - each coordinator reports to an Administrator

    Apart from these there are other user types that are not directly related, but may get related later on. For example, "Company" is another type of user, and can have various "Products", and products may be supervised by a "Coordinator". "Buyer" is another kind of user that may buy products. Now all these users have various other attributes, some of which are common to all types of users and some of which are distinct only to one user type. For example, all types of users have to have an address. On the other hand, only the Principal user belongs to a "BranchOffice". Another point, which was stated above, is that a User can only ever be of one type. The app also needs to keep track of who created and/or modified Principals, Administrators, Coordinators, Companies, Products etc. (So that's two more links to the User model.) In this scenario, is it a good idea to use Django's multi-table inheritance as follows:

        from django.contrib.auth.models import User

        class Principal(User):
            # # #
            branchoffice = models.ForeignKey(BranchOffice)
            landline = models.CharField(blank=True, max_length=20)
            mobile = models.CharField(blank=True, max_length=20)
            created_by = models.ForeignKey(User, editable=False, blank=True, related_name="principalcreator")
            modified_by = models.ForeignKey(User, editable=False, blank=True, related_name="principalmodifier")
            # # #

    Or should I go about doing it like this:

        class Principal(models.Model):
            # # #
            user = models.OneToOneField(User, blank=True)
            branchoffice = models.ForeignKey(BranchOffice)
            landline = models.CharField(blank=True, max_length=20)
            mobile = models.CharField(blank=True, max_length=20)
            created_by = models.ForeignKey(User, editable=False, blank=True, related_name="principalcreator")
            modified_by = models.ForeignKey(User, editable=False, blank=True, related_name="principalmodifier")
            # # #

    Please keep in mind that there are other user types that are related via foreign keys, for example:

        class Administrator(models.Model):
            # # #
            principal = models.ForeignKey(Principal, help_text="The supervising principal for this Administrator")
            user = models.OneToOneField(User, blank=True)
            province = models.ForeignKey(Province)
            landline = models.CharField(blank=True, max_length=20)
            mobile = models.CharField(blank=True, max_length=20)
            created_by = models.ForeignKey(User, editable=False, blank=True, related_name="administratorcreator")
            modified_by = models.ForeignKey(User, editable=False, blank=True, related_name="administratormodifier")

    I am aware that Django does use a one-to-one relationship for multi-table inheritance behind the scenes. I am just not qualified enough to decide which is a more sound approach.

    Read the article

  • How to define generic super type for static factory method?

    - by Esko
    If this has already been asked, please link and close this one. I'm currently prototyping a design for a simplified API of another API that's a lot more complex (and potentially dangerous) to use. Considering the somewhat complex object creation involved, I decided to use static factory methods to simplify the API, and I currently have the following, which works as expected:

        public class Glue<T> {
            private List<Type<T>> types;

            private Glue() {
                types = new ArrayList<Type<T>>();
            }

            private static class Type<T> {
                private T value;
                /* some other properties, omitted for simplicity */

                public Type(T value) {
                    this.value = value;
                }
            }

            public static <T> Glue<T> glueFactory(String name, T first, T second) {
                Glue<T> g = new Glue<T>();
                Type<T> firstType = new Glue.Type<T>(first);
                Type<T> secondType = new Glue.Type<T>(second);
                g.types.add(firstType);
                g.types.add(secondType);
                /* omitted complex stuff */
                return g;
            }
        }

    As said, this works as intended. When the API user (= another developer) types

        Glue<Horse> strongGlue = Glue.glueFactory("2HP", new Horse(), new Horse());

    he gets exactly what he wanted. What I'm missing is: how do I enforce that Horse - or whatever is put into the factory method - always implements both Serializable and Comparable? Simply adding them to the factory method's signature using <T extends Comparable<T> & Serializable> doesn't necessarily enforce this rule in all cases, only when this simplified API is used. That's why I'd like to add them to the class definition and then modify the factory method accordingly. PS: No horses (and definitely no ponies!) were harmed in writing of this question.
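    A sketch of the class-level bound the question is asking for; note that because glueFactory is static it cannot see the class's type parameter, so it has to redeclare the same bounds on its own T (everything else is as in the original):

        import java.io.Serializable;

        // The bound on the class itself now guarantees that every Glue<T>,
        // however it is created, has a T that is Comparable and Serializable.
        public class Glue<T extends Comparable<T> & Serializable> {

            private Glue() { /* as before */ }

            // Static methods don't inherit the class type parameter, so the
            // factory repeats the same bounds on its own T.
            public static <T extends Comparable<T> & Serializable> Glue<T>
                    glueFactory(String name, T first, T second) {
                Glue<T> g = new Glue<T>();
                // ... construction details omitted, as in the original post ...
                return g;
            }
        }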

    Read the article

  • How to mark empty in a single element in a float array

    - by Vineeth Mohan
    I have a large float (primitive) array and not every element in the array is filled. How can I mark a particular element as EMPTY? I understand this can be achieved by some special symbol, but still I would like to know the standard way. Even if I am using some special symbol, how will I handle a situation where the actual data item is the value of the special symbol? In short, my question is how to implement the NULL feature in a primitive type array in Java. PS - The reason why I am not using the Float object type is to achieve high memory and speed performance. Thanks Vineeth
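    One common approach, sketched below: use Float.NaN as the sentinel. It keeps the primitive array (so no boxing cost), and since NaN is the one float value for which x != x, it must be tested with Float.isNaN rather than ==. If NaN can legitimately occur in the data, a parallel java.util.BitSet tracking the filled slots avoids exactly the collision problem the question raises.

        import java.util.Arrays;

        public class SparseFloatArray {
            private final float[] data;

            public SparseFloatArray(int size) {
                data = new float[size];
                Arrays.fill(data, Float.NaN); // every slot starts out "empty"
            }

            public void set(int index, float value) {
                data[index] = value;
            }

            public boolean isEmpty(int index) {
                // NaN != NaN, so an == test would always be false; use isNaN.
                return Float.isNaN(data[index]);
            }
        }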

    Read the article

  • Using Singleton synchronized array with NSThread

    - by hmthur
    I have a books app with a UISearchBar, where the user types any book name and gets search results (from an external API call) below as he types. I am using a singleton variable in my app called retrievedArray which stores all the books.

        @interface Shared : NSObject {
            NSMutableArray *books;
        }
        @property (nonatomic, retain) NSMutableArray *books;
        + (id)sharedManager;
        @end

    This is accessed in multiple .m files using

        NSMutableArray *retrievedArray;   // ...in the header file
        retrievedArray = [[Shared sharedManager] books];

    My question is how do I ensure that the values inside retrievedArray remain synchronized across all the classes. The values inside retrievedArray get added through an NSXMLParser (i.e. through an external web service API). There is a separate XMLParser.m file, where I do all the parsing and fill the array. The parsing is done on a separate thread.

        - (void) run: (id) param {
            NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];
            NSXMLParser *parser = [[NSXMLParser alloc] initWithContentsOfURL: [self URL]];
            [parser setDelegate: self];
            [parser parse];
            [parser release];
            NSString *tmpURLStr = [[self URL] absoluteString];
            NSRange range_srch_book = [tmpURLStr rangeOfString:@"v1/books"];
            if (range_srch_book.location != NSNotFound)
                [delegate performSelectorOnMainThread:@selector(parseDidComplete_srch_book) withObject:nil waitUntilDone:YES];
            [pool release];
        }

        - (void) parseXMLFile: (NSURL *) url {
            NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];
            [self setURL: url];
            NSThread* myThread = [[NSThread alloc] initWithTarget:self selector:@selector(run:) object: nil];
            [retrievedArray removeAllObjects];
            [myThread start];
            [pool release];
        }

    There seem to be some synchronization issues if the user types very quickly (it seems to be working fine if the user types slowly). There are 2 views in which the content of an object in this shared array is displayed: List and Detail. If the user types fast and clicks on A in the List view, he is shown B in the Detail view. That is the main issue. I have tried literally all the solutions I could think of, but am still unable to fix the issue. Please suggest some suitable fixes.

    Read the article

  • Create LINQ to entities OrderBy expression on the fly

    - by AyKarsi
    I'm trying to add the orderby expression on the fly. But when the query below is executed I get the following exception:

        System.NotSupportedException: Unable to create a constant value of type 'Closure type'. Only primitive types ('such as Int32, String, and Guid') are supported in this context.

    The strange thing is, I am querying exactly those primitive types only.

        string sortBy = HttpContext.Current.Request.QueryString["sidx"];
        ParameterExpression prm = Expression.Parameter(typeof(buskerPosting), "posting");
        Expression orderByProperty = Expression.Property(prm, sortBy);

        // get the paged records
        IQueryable<PostingListItemDto> query =
            (from posting in be.buskerPosting
             where posting.buskerAccount.cmsMember.nodeId == m.Id
             orderby orderByProperty
             //orderby posting.Created
             select new PostingListItemDto { Set = posting })
            .Skip<PostingListItemDto>((page - 1) * pageSize)
            .Take<PostingListItemDto>(pageSize);

    Hope somebody can shed some light on this!

    Read the article

  • how to refactor user-permission system?

    - by John
    Sorry for the lengthy question. I can't tell if this should be a programming question or a project management question. Any advice will help. I inherited a reasonably large web project (1 year old) from a solo freelancer who architected it then abandoned it. The project was a mess, but I cleaned up what I could, and now the system is more maintainable. I need suggestions on how to extend the user-permission system. As it is now, the database has a t_user table with the column t_user.membership_type. Currently, there are 4 membership types with the following properties:

    - 3 of the membership types are almost functionally the same, except for the different monthly fees each must pay.
    - 1 of the membership types is a "fake-user" type which has limited access (different business logic also applies).

    With regards to the fake-user type, if you look in the system's business logic files, you will see a lot of hard-coded IF statements that do something like

        if (fake-user) {
            // do something
        } else {
            // a paid member of type 1, 2 or 3
            // proceed normally
        }

    My client asked me to add 3 more membership types to the system, each of them with unique features to be implemented this month, and substantive "to-be-determined" features next month. My first reaction is that I need to refactor the user-permission system. But it concerns me that I don't have enough information on the "to-be-determined" membership type features for next month. Refactoring the user-permission system will take a substantial amount of time. I don't want to refactor something and throw it out the following month. I get substantial feature requests on a monthly basis that come out of the blue. There is no project road map. I've asked my client to provide me with a roadmap of what they intend to do with the new membership types, but their answer is along the lines of "We just want to do [feature here] this month. We'll think of something new next month." So questions that come to mind are:

    1) Is it dangerous for me to refactor the user permission system not knowing what membership type features exist beyond a month from now?
    2) Should I refactor the user permission system regardless? Or just continue adding IF statements as needed in all my controller files? Or can you recommend a different approach to user permission systems? Maybe role-based? (See the sketch below.)
    3) Should this project have a road map? For a 1-year-old project like mine, how far into the future should this roadmap project?
    4) Any general advice on the best way to add 3 new membership types?
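    On the role-based idea in question 2, here is a minimal, hedged sketch of the general shape (in Java purely for illustration; the question doesn't state the project's stack, and the role and permission names are invented). The point is that business logic asks about capabilities instead of branching on membership types, so a new membership type becomes a new role definition rather than a new set of IF statements.

        import java.util.Collections;
        import java.util.EnumSet;

        // Fine-grained actions the system can gate on.
        enum Permission { VIEW_CONTENT, POST_CONTENT, MANAGE_BILLING }

        // Each membership type maps to a role that bundles its permissions.
        enum Role {
            FAKE_USER(Permission.VIEW_CONTENT),
            PAID_MEMBER(Permission.VIEW_CONTENT, Permission.POST_CONTENT,
                        Permission.MANAGE_BILLING);

            private final EnumSet<Permission> permissions;

            Role(Permission... granted) {
                permissions = EnumSet.noneOf(Permission.class);
                Collections.addAll(permissions, granted);
            }

            boolean can(Permission p) {
                return permissions.contains(p);
            }
        }

        // Business logic then reads as a capability check, not a type check:
        // if (user.getRole().can(Permission.POST_CONTENT)) { ... }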

    Read the article

< Previous Page | 55 56 57 58 59 60 61 62 63 64 65 66  | Next Page >