Search Results

Search found 7651 results on 307 pages for 'pattern matching'.


  • Sed: Deleting all content matching a pattern

    - by Svish
    I have some plist files on Mac OS X that I would like to shrink. They have a lot of <dict> elements with <key> entries and values. One of these keys is a thumbnail, whose <data> value holds base64-encoded binary (I think). I would like to remove this key and its value. I was thinking this could maybe be done with sed, but I don't really know how to use it, and it seems like sed only works on a line-by-line basis? Either way, I was hoping someone could help me out.

    In the file I would like to delete everything that matches the following pattern, or something close to it:

        <key>Thumbnail<\/key>[^<]*<\/data>

    In the file it looks like this:

        // Other keys and values
        <key>Thumbnail</key>
        <data>
        TU0AKgAAOEi25Pqx3/ip2fak0vOdzPCVxu2RweuPv+mLu+mIt+aGtuaEtOSB
        ...
        dCBBcHBsZSBDb21wdXRlciwgSW5jLiwgMjAwNQAAAAA=
        </data>
        // Other keys and values

    Anyone know how I could do this? Also, if there are any better tools I can use in the terminal to do this, I would like to know about that as well :)
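    One way that fits here, even though sed edits line by line: a range address spans multiple lines, so the whole block from the Thumbnail key through the closing </data> can be deleted in one pass. A minimal sketch, assuming each of those tags sits on its own line as in the excerpt above (file names are placeholders):

        #!/bin/sh
        # Delete everything from the <key>Thumbnail</key> line through the
        # next </data> line, inclusive.
        sed '/<key>Thumbnail<\/key>/,/<\/data>/d' Input.plist > Shrunk.plist

    If the plist is in binary rather than XML form, it would need converting first (plutil -convert xml1 can do that on Mac OS X).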

    Read the article

  • Oracle Enterprise Data Quality: Ever Integration-ready

    - by Mala Narasimharajan
    It is closing in on a year now since Oracle's acquisition of Datanomic, and the addition of Oracle Enterprise Data Quality (EDQ) to the Oracle software family. The big move has caused some big shifts in emphasis and some very encouraging excitement from the field. To give an illustration, combined with a shameless promotion of how EDQ can help to give quick insights into your data, I did a quick Phrase Profile of the subject field of emails to the Global EDQ mailing list since it was set up last September. The results revealed a very clear theme: Integration, Integration, Integration!

    As well as the important Siebel and Oracle Data Integrator (ODI) integrations, we have been asked about integration with a huge variety of Oracle applications, including EBS, PeopleSoft, CRM On Demand, Fusion, DRM, Endeca, RightNow, and more - and we have not stood still! While it would not have been possible to develop specific pre-integrations with all of the above within a year, we have developed a package of feature-rich, out-of-the-box web services and batch processes that can be plugged into any application or middleware technology with ease. And with Siebel, they work out of the box.

    Oracle Enterprise Data Quality version 9.0.4 includes the Customer Data Services (CDS) pack - a ready set of standard processes with standard interfaces that provide integrated:

    Address verification and cleansing
    Individual matching
    Organization matching

    The services are suitable for either batch or real-time processing, and are enabled for international data, with simple configuration options driving the set of locale-specific dictionaries that are used. For example, large dictionaries are provided to support international name transcription and variant matching, including highly specialized handling for Arabic, Japanese, Chinese, and Korean data. In total, across all locales, CDS includes well over a million dictionary entries.

    Excerpt from EDQ's CDS Individual Name Standardization Dictionary

    CDS has been developed to replace the OEM of Informatica Identity Resolution (IIR) for attached Data Quality on the Oracle price list, but does this in a way that creates a 'best of both worlds' situation for customers, who can harness not only the out-of-the-box functionality of pre-packaged matching and standardization services, but also the flexibility of OEDQ if they want to customize the interfaces or the process logic, without having to learn more than one product. From a competitive point of view, we believe this stands us in good stead against our key competitors, including Informatica, who have separate 'Identity Resolution' and general DQ products, and IBM, who provide limited out-of-the-box capabilities (with a steep learning curve) in both their QualityStage data quality and Initiate matching products.

    Here is a brief guide to the main services provided in the pack.

    Address Verification and Standardization

    EDQ's CDS Address Cleaning Process

    The Address Verification and Standardization service uses EDQ Address Verification (an OEM of Loqate software) to verify and clean addresses in either real-time or batch.
    The Address Verification processor is wrapped in an EDQ process - this adds significant capabilities over calling the underlying Address Verification API directly, specifically:

    Country-specific thresholds to determine when to accept the verification result (and therefore to change the input address), based on the confidence level reported by the API
    Optimization of address verification by pre-standardizing data where required
    Formatting of output addresses into the input address fields normally used by applications
    Adding descriptions of the address verification and geocoding return codes

    The process can then be used to provide real-time and batch address cleansing in any application, such as a simple web page calling address cleaning and geocoding as part of a check on individual data.

    Duplicate Prevention

    Unlike Informatica Identity Resolution (IIR), EDQ uses stateless services for duplicate prevention, to avoid issues caused by complex replication and synchronization of large-volume customer data. When a record is added or updated in an application, the EDQ Cluster Key Generation service is called and returns a number of key values. These are used to select other records ('candidates') that may match in the application data (which has been pre-seeded with keys using the same service). The 'driving record' (the new or updated record) is then presented along with all selected candidates to the EDQ Matching Service, which decides which of the candidates are a good match with the driving record and scores them according to the strength of match.

    In this model, complex multi-locale EDQ techniques can be used to generate the keys and ensure that the right balance between performance and matching effectiveness is maintained, while ensuring that the application retains control of data integrity and transactional commits. The process is explained below:

    EDQ Duplicate Prevention Architecture

    Note that where the integration is with a hub, there may be an additional call to the Cluster Key Generation service if the master record has changed due to merges with other records (and therefore needs to have new key values generated before commit).
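    To make the call sequence concrete, here is a hedged pseudocode sketch of the stateless flow just described; every service and method name is an illustrative stand-in, not the actual EDQ API:

        MATCH_THRESHOLD = 0.9  # illustrative score cut-off

        def on_record_saved(record, app_db, edq):
            keys = edq.generate_cluster_keys(record)   # Cluster Key Generation service
            candidates = app_db.find_by_keys(keys)     # application selects candidates
            scored = edq.match(record, candidates)     # Matching Service scores them
            duplicates = [c for c, score in scored if score >= MATCH_THRESHOLD]
            if duplicates:
                return duplicates                      # let the application decide
            app_db.commit(record, keys)                # app keeps control of commits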
    Batch Matching

    To allow customers to use different match rules in batch than in real time, separate matching templates are provided for batch matching. For example, some customers want to minimize intervention in key user flows (such as adding new customers) in front-end applications, but to conduct a more exhaustive match on a regular basis in the back office. The batch matching jobs are also used when migrating data between systems, and in this case a more precise (and automated) type of matching is normally required, in order to minimize the review work performed by Data Stewards.

    In batch matching, data is captured into EDQ using its standard interfaces, and records are standardized, clustered, and matched in an EDQ job before matches are written out. As with all EDQ jobs, batch matching may be called from Oracle Data Integrator (ODI) if required. When working with Siebel CRM (or master data in Siebel UCM), Siebel's Data Quality Manager is used to instigate batch jobs, and a shared staging database is used to write records for matching and to consume match results. The CDS batch matching processes automatically adjust to Siebel's 'Full Match' (match all records against each other) and 'Incremental Match' (match a subset of records against all of their selected candidates) modes.

    The Future

    The Customer Data Services Pack is an important part of the Oracle strategy for EDQ, offering a clear path to making Data Quality Assurance an integral part of enterprise applications, and providing a strong value proposition for adopting EDQ. We are planning various additions and improvements, including:

    An out-of-the-box Data Quality Dashboard
    Even more comprehensive international data handling
    Address search (suggesting multiple results)
    Integrated address matching

    The EDQ Customer Data Services Pack is part of the Enterprise Data Quality Media Pack, available for download at http://www.oracle.com/technetwork/middleware/oedq/downloads/index.html.

    Read the article

  • categorize a set of phrases into a set of similar phrases

    - by Dingo
    I have a few apps that generate textual tracing information (logs) to log files. The tracing information is the typical printf() style - i.e. there are a lot of log entries that are similar (same format argument to printf), but differ where the format string had parameters.

    What would be an algorithm (URL, books, articles, ...) that will allow me to analyze the log entries and categorize them into several bins/containers, where each bin has one associated format? Essentially, what I would like is to transform the raw log entries into (formatA, arg0 ... argN) instances, where formatA is shared among many log entries. formatA does not have to be the exact format used to generate the entry (even more so if this makes the algorithm simpler).

    Most of the literature and web info I found deals with exact matching, maximal substring matching, or k-difference matching (with k known/fixed ahead of time). Also, it focuses on matching a pair of (long) strings, or producing a single bin (one match among all input). My case is somewhat different, since I have to discover what represents a (good-enough) match (generally a sequence of discontinuous strings), and then categorize each input entry into one of the discovered matches. Lastly, I'm not looking for a perfect algorithm, but something simple/easy to maintain. Thanks!
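    One simple way to start, in the spirit of the "discover the shared format" idea: mask the substrings that usually come from printf arguments (numbers, hex ids, quoted strings) and group entries by what survives, which approximates the format string. A hedged Python sketch; the regexes and file name are illustrative, not tuned to any particular log:

        import re
        from collections import defaultdict

        def template_of(line):
            """Mask the pieces that usually come from printf arguments;
            what survives approximates the original format string."""
            line = re.sub(r'"[^"]*"', '"<STR>"', line)
            line = re.sub(r'\b0x[0-9a-fA-F]+\b', '<HEX>', line)
            line = re.sub(r'\b\d+\b', '<NUM>', line)
            return line

        bins = defaultdict(list)
        with open('app.log') as log:
            for raw in log:
                bins[template_of(raw.rstrip('\n'))].append(raw)

        # Largest bins first: these are the most common "formats".
        for template, entries in sorted(bins.items(), key=lambda kv: -len(kv[1])):
            print(len(entries), template)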

    Read the article

  • Excel: conditionally format a cell using the format of another, content-matching cell

    - by Eric A. Meyer
    I have an Excel spreadsheet where I'd like to be able to create a "key" of formatted cells with unique values, and then in another sheet format cells using the key formatting. So for example, my key is as follows, with one value per cell and the visual formatting indicated in parentheses:

    A (red background)
    B (green background)
    C (blue background)

    So that's on one sheet (or in a remote corner of the current sheet - whichever is better). Then, in an area that I mark for conditional formatting, I can type one of those three letters and have the cell where I typed it visually formatted according to the key. So if I type a "B" into one of the conditionally formatted cells, it gets a green background. (Note that I'm using backgrounds here solely for ease of explanation: ideally I want to have all visual formatting copied over, whether it's foreground color, background color, font weight, borders, or whatever. But I'll take what I can get, obviously.)

    And - just to make it extra-tricky - if I change the formatting in the key, that change should be reflected in cells that reference the key. Thus, if I change the "B" formatting in the key from a green background to a purple background, any "B" in the main sheet should switch to the new color. Similarly, it should be possible to add or remove values from the key and have those changes applied to the main data set. I'm okay with the formatting-update-on-key-change being triggered by clicking a button or something.

    I suspect that if any of this is possible it will require VBA, but I've never used it so I've no idea where to start if that's the case. I'm hoping it's possible without VBA. I know it's possible to just use multiple conditional formats, but my use case here is that I'm trying to create the above-described capability for someone who isn't conversant with conditional formatting. I'd like to let them be able to define a key, update it if necessary, and keep on truckin' without me having to rewrite the spreadsheet's formatting rules for them.

    --- UPDATE ---

    So I think I was a bit unclear about my original request. Let me try again with an image. The image shows the "key" on the left, where values and styles are defined using keyboard and mouse input. On the right, you see the data that should be formatted to match the key. Thus if I type a "C" into a cell in the Data area, it should be blue-backed. Furthermore, if I change the formatting of "C" in the Key to have a purple background, all the "C" cells should switch from blue to purple. For further craziness, if I add more to the Key (say, "D" with a yellow background) then any "D" cells will be styled to match; if I remove a Key entry, then matching values in the Data area should revert to default styling.

    So. Is that more clear? Is it possible, in whole or in part? I don't have to use conditional formatting for this; in fact, at this point I suspect I probably shouldn't. But I'm open to any approach!
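    A hedged VBA sketch of the update-on-demand approach (run from a button): it reverts each data cell to default styling, then copies fill, font colour, and bold from the first key cell whose value matches. The sheet names "Key" and "Data" and both ranges are placeholder assumptions to adapt:

        Sub ApplyKeyFormatting()
            ' Sheet names and ranges below are assumptions, not from the post.
            Dim keyCell As Range, dataCell As Range
            For Each dataCell In Worksheets("Data").Range("A1:F50")
                ' Revert to default first, so removed key entries lose their look.
                dataCell.Interior.Pattern = xlNone
                dataCell.Font.ColorIndex = xlColorIndexAutomatic
                dataCell.Font.Bold = False
                For Each keyCell In Worksheets("Key").Range("A1:A10")
                    If Not IsEmpty(keyCell) Then
                        If dataCell.Value = keyCell.Value Then
                            dataCell.Interior.Color = keyCell.Interior.Color
                            dataCell.Font.Color = keyCell.Font.Color
                            dataCell.Font.Bold = keyCell.Font.Bold
                        End If
                    End If
                Next keyCell
            Next dataCell
        End Sub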

    Read the article

  • Word mergefield wildcard not correctly matching

    - by aZn137
    Hello, below is my mergefield code:

        { IF { MERGEFIELD Subs_State } = "GA" "blah blah" "{ IF { MERGEFIELD CEOrgStates } = "GA" "blah blah" ""} "}

    I'm pulling records from an MS Access db. My goal is to check whether a record has a Subs_State field matching "GA", or a CEOrgStates field containing the word "GA" (some records have stuff like "|FL|CA|GA|CT|KY|" (no quotes)). When I merged the docs, Word didn't seem to be able to match with the wildcards: if I compare against "*GA" (fields ending with GA), it works; however, the double wildcards "*GA*" don't seem to work at all.

    Here are the things I've tried:

    Have data in lowercase, then compare with lowercase
    Have data in lowercase, convert to and then compare with uppercase
    Do the opposite of the above two with uppercase data
    Use "*GA*" and "*ga*" (no pipe)
    Use different delimiters

    Nothing seems to work with the double wildcard matching. What am I doing wrong? Thanks!

    Read the article

  • Matching tag in HTML keyboard shortcut

    - by kape123
    Is there a shortcut in Visual Studio (2008) that will allow me to jump to the matching HTML tag, as CTRL+] does for matching braces when you are in code view? Example:

        <table>
          <tr>
            <td>
            </td>
          </tr>
        </table|>

    Cursor is on the closing table tag and I would like to press something like CTRL+] to jump to the opening table tag. Any ideas?

    Read the article

  • PHP and MySQL - Printing rows matching a column value

    - by Michael
    Hello, I need to write a PHP script that will print out results from a MySQL database. For example, say I have 9 fields. Field 1 is an auto-incrementing number; field 2 is a three-digit number. I need the script to read a three-digit number (it'll come from a POST), find the rows whose field 2 matches it, and then display those rows, including the 7 other fields. I am already connected to the database in this script. I guess I'm really at a loss of where to begin. How would one start something like this? Thank you.
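    As a starting point, a minimal sketch using mysqli prepared statements; the table name records, the column name code, and the connection variable $db are placeholder assumptions:

        <?php
        // Assumes an existing mysqli connection in $db.
        $code = $_POST['code'];
        $stmt = $db->prepare('SELECT * FROM records WHERE code = ?');
        $stmt->bind_param('s', $code);
        $stmt->execute();
        $result = $stmt->get_result();   // requires the mysqlnd driver

        while ($row = $result->fetch_assoc()) {
            // Print every field of each matching row.
            foreach ($row as $field => $value) {
                echo htmlspecialchars($field) . ': ' . htmlspecialchars($value) . "<br>\n";
            }
            echo "<hr>\n";
        }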

    Read the article

  • g++ no matching function call error

    - by gufftan
    I've got a compiler error but I can't figure out why.

    The .hpp:

        #ifndef _CGERADE_HPP
        #define _CGERADE_HPP

        #include "CVektor.hpp"
        #include <string>

        class CGerade
        {
        protected:
            CVektor o, rv;

        public:
            CGerade(CVektor n_o, CVektor n_rv);
            CVektor getPoint(float t);
            string toString();
        };

        #endif

    The .cpp:

        #include "CGerade.hpp"

        CGerade::CGerade(CVektor n_o, CVektor n_rv)
        {
            o = n_o;
            rv = n_rv.getUnitVector();
        }

    The error message (reported once for each of the two members):

        CGerade.cpp:10: error: no matching function for call to 'CVektor::CVektor()'
        CVektor.hpp:28: note: candidates are: CVektor::CVektor(float, float, float)
        CVektor.hpp:26: note:                 CVektor::CVektor(bool, float, float, float)
        CVektor.hpp:16: note:                 CVektor::CVektor(const CVektor&)
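    For what it's worth, this error usually means what the candidate list hints at: CVektor has no default constructor, and the members o and rv are default-constructed before the constructor body runs. Initializing them in the member initializer list sidesteps that; a minimal sketch of the fix:

        #include "CGerade.hpp"

        // Initialize the members directly instead of assigning in the body,
        // so no CVektor default constructor is ever needed.
        CGerade::CGerade(CVektor n_o, CVektor n_rv)
            : o(n_o), rv(n_rv.getUnitVector())
        {
        }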

    Read the article

  • grep command is not searching the complete pattern

    - by Sumit Vedi
    I am facing a problem while using the grep command in a shell script. I have one file (PCF_STARHUB_20130625_1) which contains the records below:

        SH_5.55916.00.00.100029_20130601_0001_NUC.csv.gz|438|3556691115
        SH_5.55916.00.00.100029_20130601_0001_Summary.csv.gz|275|3919504621
        SH_5.55916.00.00.100029_20130601_0001_UI.csv.gz|226|593316831
        SH_5.55916.00.00.100029_20130601_0001_US.csv.gz|349|1700116234
        SH_5.55916.00.00.100038_20130601_0001_NUC.csv.gz|368|3553014997
        SH_5.55916.00.00.100038_20130601_0001_Summary.csv.gz|276|2625719449
        SH_5.55916.00.00.100038_20130601_0001_UI.csv.gz|226|3825232121
        SH_5.55916.00.00.100038_20130601_0001_US.csv.gz|199|2099616349
        SH_5.75470.00.00.100015_20130601_0001_NUC.csv.gz|425|1627227450

    I have a pattern stored in one variable (INPUT_FILE_T), and I want to search for that pattern in the file (PCF_STARHUB_20130625_1). For that I have used the command below:

        INPUT_FILE_T="SH?*???????????????US.*"
        grep ${INPUT_FILE_T} PCF_STARHUB_20130625_1

    The output of the above command comes out as:

        PCF_STARHUB_20130625_1:SH_5.55916.00.00.100029_20130601_0001_US.csv.gz|349|1700116234

    I have two problems with this output. First, only one entry is shown (it should contain two entries), and second, the output contains "PCF_STARHUB_20130625_1:", which should not appear. The output should look like:

        SH_5.55916.00.00.100029_20130601_0001_US.csv.gz|349|1700116234
        SH_5.55916.00.00.100038_20130601_0001_US.csv.gz|199|2099616349

    If there is any technique other than grep, please let me know. Please help me with this issue.
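    A hedged reading of what is happening, with a sketch of a fix: the unquoted ${INPUT_FILE_T} is subject to shell filename expansion before grep ever sees it, and the pattern is written like a shell glob while grep expects a regular expression (where ? and * are quantifiers, not wildcards). If the glob happens to match files in the current directory, grep receives several file names, which would also explain the "PCF_STARHUB_20130625_1:" prefix: grep prints the file name whenever it searches more than one file. The regex below is an assumption about the intended match (lines whose name part contains "_US."):

        #!/bin/sh
        INPUT_FILE_T='^SH_.*_US\.'
        grep "$INPUT_FILE_T" PCF_STARHUB_20130625_1   # quotes stop glob expansion

        # If several files are ever searched, -h suppresses the name prefix:
        # grep -h "$INPUT_FILE_T" PCF_STARHUB_*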

    Read the article

  • Sessions I Submitted to the PASS Summit 2010

    - by andyleonard
    Introduction
    I'm borrowing an idea and blog post title from Brent Ozar ( Blog - @BrentO ). I am honored the PASS Summit 2010 (Seattle, 8 - 11 Nov 2010) would consider allowing me to present. It's a truly awesome event. If you have an opportunity to attend and read this blog, please find me and introduce yourself. If you've built a cool solution to a business or technical problem; or written a script - or a bunch of scripts - to automate part of your daily / weekly / monthly routine; or have some...(read more)

    Read the article

  • Security in Software

    The term security has many meanings based on the context and perspective in which it is used. Security from the perspective of software/system development is the continuous process of maintaining confidentiality, integrity, and availability of a system, sub-system, and system data. At a very high level, this definition can be restated as: computer security is a continuous process dealing with confidentiality, integrity, and availability on multiple layers of a system.

    Key Aspects of Software Security

    Integrity
    Confidentiality
    Availability

    Integrity within a system is the concept of ensuring that only authorized users can manipulate information, and only through authorized methods and procedures. An example of this can be seen in a simple lead management application. If the business decides that each sales member may update only their own leads in the system, while sales managers can update all leads in the system, then an integrity violation occurs when a sales member attempts to update someone else's leads: the attempt breaks the business rule that leads can only be updated by the originating sales member.

    Confidentiality within a system is the concept of preventing unauthorized access to specific information or tools. In a perfect world, even the knowledge of the existence of confidential information/tools would be hidden from those who do not have access. When this concept is applied within the context of an application, only the authorized information/tools will be available. If we look at the sales lead management system again, leads can only be updated by the originating sales member, so all sales leads are confidential between the system and the sales person who entered the lead into the system. The other sales team members do not need to know about the leads, let alone access them.

    Availability within a system is the concept of authorized users being able to access the system. A real-world example can be seen again in the lead management system. If that system is hosted on a web server, then IP restriction can be put in place to limit access to the system based on the requesting IP address. If, in this example, all of the sales members access the system from the 192.168.1.23 IP address, then removing access from all other IPs ensures that improper access is prevented while approved users can access the system from an authorized location. In essence, if the requesting user is not coming from an authorized IP address, the system will appear unavailable to them. This is one way of controlling where a system is accessed from.

    Through the years, several design principles have been identified as being beneficial when integrating security aspects into a system. These principles, in various combinations, allow for a system to achieve the previously defined aspects of security based on generic architectural models.

    Security Design Principles

    Least Privilege
    Fail-Safe Defaults
    Economy of Mechanism
    Complete Mediation
    Open Design
    Separation of Privilege
    Least Common Mechanism
    Psychological Acceptability
    Defense in Depth

    Least Privilege Design Principle
    The Least Privilege design principle requires a minimalistic approach to granting user access rights to specific information and tools. Additionally, access rights should be time-based, so that access to a resource is bound to the time needed to complete the necessary tasks. Granting access beyond this scope allows for unnecessary access and the potential for data to be updated out of the approved context. Assigning access rights this way limits system-damaging attacks from users, whether intentional or not, and reduces potential damage from accident or error by reducing the number of possible interactions with a resource.

    Fail-Safe Defaults Design Principle
    The Fail-Safe Defaults design principle pertains to allowing access to resources based on granted access rather than access exclusion: a resource may be accessed only if explicit access has been granted to a user. By default, users have no access to any resources until access is granted, which prevents unauthorized users from gaining access before access is given.
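    As a small illustration of Fail-Safe Defaults in code, access is denied unless an explicit grant exists; the grant table and the names in it are invented for this example:

        # A missing entry means "no access" - the safe default.
        GRANTS = {
            ("alice", "lead:42"): {"read", "update"},
        }

        def can(user, resource, action):
            return action in GRANTS.get((user, resource), set())

        assert can("alice", "lead:42", "update")
        assert not can("bob", "lead:42", "update")  # no grant recorded, so denied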
    Economy of Mechanism Design Principle
    The Economy of Mechanism design principle requires that systems be designed as simply and small as possible, because design and implementation errors can result in unauthorized access to resources that would not be noticed during normal use.

    Complete Mediation Design Principle
    The Complete Mediation design principle states that every access to every resource must be validated for authorization.

    Open Design Design Principle
    The Open Design design principle is the concept that the security of a system and its algorithms should not depend on the secrecy of its design or implementation.

    Separation of Privilege Design Principle
    The Separation of Privilege design principle requires that approved resource access be granted based on more than a single condition. For example, a user should be validated both for active status and for access to the specific resource.

    Least Common Mechanism Design Principle
    The Least Common Mechanism design principle declares that mechanisms used to access resources should not be shared.

    Psychological Acceptability Design Principle
    The Psychological Acceptability design principle states that security mechanisms should not make resources more difficult to access than if the security mechanisms were not present.

    Defense in Depth Design Principle
    The Defense in Depth design principle is the concept that layering resource access authorization verification in a system reduces the chance of a successful attack. This layered approach requires unauthorized users to circumvent each authorization check to gain access to a resource.

    When designing a system that must meet a security quality attribute, architects need to consider the scope of the security needs and the minimum required security qualities. Not every system needs to use all of the basic security design principles; each will use one or more in combination, based on the company's and architect's threshold for system security, because the existence of security in an application adds an additional layer to the overall system and can affect performance. That is why a definition of minimum acceptable security is needed when a system is designed: this quality attribute must be weighed against the other system quality attributes so that the system adheres to all of them according to their priorities.

    Resources:
    Barnum, Sean. Gegick, Michael. (2005). Least Privilege. Retrieved on August 28, 2011 from https://buildsecurityin.us-cert.gov/bsi/articles/knowledge/principles/351-BSI.html
    Saltzer, Jerry. (2011). Basic Principles of Information Protection. Retrieved on August 28, 2011 from http://web.mit.edu/Saltzer/www/publications/protection/Basic.html
    Barnum, Sean. Gegick, Michael. (2005). Defense in Depth. Retrieved on August 28, 2011 from https://buildsecurityin.us-cert.gov/bsi/articles/knowledge/principles/347-BSI.html
    Bertino, Elisa. (2005). Design Principles for Security. Retrieved on August 28, 2011 from http://homes.cerias.purdue.edu/~bhargav/cs526/security-9.pdf

    Read the article

  • Presenting Designing an SSIS Execution Framework to Steel City SQL 18 Jan 2011!

    - by andyleonard
    I'm honored to present Designing an SSIS Execution Framework (Level 300) to Steel City SQL - the Birmingham, Alabama chapter of PASS - on 18 Jan 2011! The meeting starts at 6:00 PM and will be held at:

    New Horizons Computer Learning Center
    601 Beacon Pkwy. West, Suite 106
    Birmingham, Alabama 35209
    ( Map for directions )

    Abstract
    In this "demo-tastic" presentation, SSIS trainer, author, and consultant Andy Leonard explains the what, why, and how of an SSIS framework that delivers metadata-driven...(read more)

    Read the article

  • Join the Authors of SSIS Design Patterns at the PASS Summit 2012!

    - by andyleonard
    My fellow authors and I will be presenting a day-long pre-conference session titled SSIS Design Patterns at the PASS Summit 2012 in Seattle Monday 5 Nov 2012! Register to learn patterns for:

    Package execution
    Package logging
    Loading flat file sources
    Loading XML sources
    Loading the cloud
    Dynamic package generation
    SSIS Frameworks
    Data warehouse ETL
    Data flow performance

    Presenting this session:

    Matt Masson
    Tim Mitchell
    Jessica Moss
    Michelle Ufford
    Andy Leonard

    I hope to see you in Seattle!...(read more)

    Read the article

  • Presenting at SQL Saturday #70 - Columbia SC

    - by andyleonard
    Introduction
    I'm honored to be presenting at SQL Saturday #70 in Columbia SC 19 Mar 2011! It's always good to travel to places where I don't have to suppress my accent (what accent? I talk normal. Everyone else sounds funny...) and repeat my order at Waffle House. It's always an honor to hang out with The Keeper of the Duck (K. Brian Kelley) ( Blog | @kbriankelley ) and the cool crew in Columbia.

    Presentations
    There are some stellar presentations from awesome speakers scheduled for the event... plus...(read more)

    Read the article

  • Learn SSIS from the Authors of SSIS Design Patterns at the PASS Summit 2012!

    - by andyleonard
    Jessica Moss ( blog | @jessicammoss ), Michelle Ufford ( blog | @sqlfool ), Tim Mitchell ( blog | @tim_mitchell ), Matt Masson ( blog | @mattmasson ), and me – we are all presenting the SSIS Design Patterns pre-conference session at the PASS Summit 2012 ! We will be covering material from, and based upon, the book. We will describe and demonstrate patterns for package execution, package logging, loading flat file and XML sources, loading the cloud, dynamic package generation, SSIS Frameworks, data...(read more)

    Read the article

  • Book Review: SSIS Design Patterns

    - by andyleonard
    Samuel Vanga ( Blog | @SamuelVanga ) has posted a review of our new book SSIS Design Patterns at his blog . Several of Sam’s statements struck me, but none more than this: Within a few hours of reading SQL Server 2012 Integration Services Design Patterns , it stood out that none of the authors were trying to impress by showing what they all know in SSIS. Instead, they focused on describing solutions and patterns in a great detail (exactly why I paid for). Sam mentions he could not locate the source...(read more)

    Read the article

  • Presenting at Roanoke Code Camp Saturday!

    - by andyleonard
    Introduction
    I am honored to once again be selected to present at Roanoke Code Camp!

    An Introductory Topic
    One of my presentations is titled "I See a Control Flow Tab. Now What?" It's a Level 100 talk for those wishing to learn how to build their very first SSIS package. This highly-interactive, demo-intense presentation is for beginners and developers just getting started with SSIS. Attend and learn how to build SSIS packages from the ground up.

    Designing an SSIS Framework
    I'm also presenting...(read more)

    Read the article

  • How to "translate" interdependent object states in code?

    - by Earl Grey
    I have the following problem. My UI interface contains several buttons, labels, and other visual elements. I am able to describe every possible workflow scenario that should be allowed on that UI. That means I can describe it like this: when button A is pressed, the following should follow.

    In the case of that A button, there are three independent factors that influence the possible result of pushing it: the state of the session (blank, single, multi, multi special), the actual work being done by the system at the moment of pressing the A button (nothing was happening, work was being done, work was paused), and a separate UI element that has two states (on, off). This gives me a 3-dimensional cube with 24 possible outcomes. I could write code for this using if statements, switch statements, etc., but the problem is that I have another 7 buttons on that UI, I can enter this UI from different states, some buttons change the state, and some change parameters...

    To sum up, the combinations are mind-boggling, and I am not able to come up with a methodology that scales and is systematically reliable. I am able to describe EVERY workflow in words, and I am sure my description is complete and without logical errors. But I am not able to translate that into code. I was trying to draw flowcharts, but it soon became visually too complicated due to too many "if" branches. Can you advise how to proceed?
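    One approach that tends to scale here is to make the state explicit and table-driven: encode each factor as an enum and map each allowed combination to a handler, so every cell of the 24-cell cube becomes a visible, reviewable entry rather than a nest of ifs. A hedged C# sketch; all names are illustrative stand-ins, not from the original application:

        using System;
        using System.Collections.Generic;

        enum Session { Blank, Single, Multi, MultiSpecial }
        enum Work { Idle, Running, Paused }
        enum Panel { On, Off }

        static class ButtonA
        {
            // One entry per row of the described workflow; the full cube
            // would have 24 explicit entries.
            static readonly Dictionary<(Session, Work, Panel), Action> Handlers =
                new Dictionary<(Session, Work, Panel), Action>
                {
                    { (Session.Blank, Work.Idle, Panel.Off), () => Console.WriteLine("start a new session") },
                    { (Session.Single, Work.Running, Panel.On), () => Console.WriteLine("pause and prompt") },
                };

            public static void Press(Session s, Work w, Panel p)
            {
                if (Handlers.TryGetValue((s, w, p), out var handle))
                    handle();
                else
                    Console.WriteLine("combination not allowed"); // fail-safe default
            }
        }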

    Read the article

  • A first look at ConfORM - Part 1

    - by thangchung
    All source code for this post can be found here. Have you ever heard of ConfORM? I first read about it three months ago, when I wrote a post about NHibernate and Autofac. At that time the project had only just started and was still in beta, so I did not pay it much attention. But recently, while reading Jason Dentler's book NHibernate 3.0 Cookbook, I started to pay attention to it. The author mentions quite a lot of OSS in his book, so I reviewed ConfORM once again. I have joined the ConfORM development group on Google and read some articles about it. Fabio Maulo has put a lot of work into this OSS, and I hope it will fit NHibernate well (he contributes to NHibernate, after all).

    So what is ConfORM? It stands for "Configuration ORM", and it tries to use a number of heuristics to identify entities from C# code. Development today is mostly model-first driven, so the first thing to do is build the entity model. This is really important; we can see it as the heart of the business software. Then we have to tell the DB about the entities of this model: the database schema is generated from the entity model we just built. From then on we are not interested in the DB any more; we focus only on the entity model.

    Fluent NHibernate is really good; I liked that OSS. Sharp Architecture integrates Fluent NHibernate with applications very well, and its Multiple Database technique is truly awesome: it takes a configuration, a connection string, and a DLL containing the entity model, creates a SessionFactory, and caches it in memory. Since the number of SessionFactories can grow large and fill up memory, it also provides a way of caching SessionFactories in a file. This post will not completely explain building a multiple-database model; I have just tried to pull together a number of posts from the community and apply some of my knowledge to build a session management model for ConfORM.

    Like Fluent NHibernate, ConfORM supports mapping against interfaces; see this to understand it. So the first thing is to build the entity model. Here is the model I will use for this article: a simple model for managing news and polls. It may be too simple for some people, but I hope it keeps complexity out of this post. I will then have some code to build the super type for the entity model.
        public interface IEntity<TId>
        {
            TId Id { get; set; }
        }

        public abstract class EntityBase<TId> : IEntity<TId>
        {
            public virtual TId Id { get; set; }

            public override bool Equals(object obj)
            {
                return Equals(obj as EntityBase<TId>);
            }

            private static bool IsTransient(EntityBase<TId> obj)
            {
                return obj != null && Equals(obj.Id, default(TId));
            }

            private Type GetUnproxiedType()
            {
                return GetType();
            }

            public virtual bool Equals(EntityBase<TId> other)
            {
                if (other == null)
                    return false;
                if (ReferenceEquals(this, other))
                    return true;
                if (!IsTransient(this) && !IsTransient(other) && Equals(Id, other.Id))
                {
                    var otherType = other.GetUnproxiedType();
                    var thisType = GetUnproxiedType();
                    return thisType.IsAssignableFrom(otherType) ||
                           otherType.IsAssignableFrom(thisType);
                }
                return false;
            }

            public override int GetHashCode()
            {
                if (Equals(Id, default(TId)))
                    return base.GetHashCode();
                return Id.GetHashCode();
            }
        }

    The database schema will then be generated from this model.

    The next step is to build the ConfORM builder that creates an NHibernate Configuration. Patrick has an excellent article about it here. Its contract:

        public interface IConfigBuilder
        {
            Configuration BuildConfiguration(string connectionString, string sessionFactoryName);
        }

    The idea here is that I pass in a connection string and a set of DLLs containing the entity model, and it builds me an NHibernate Configuration (I confess I stole this idea from Sharp Architecture).
    And here is its code:

        public abstract class ConfORMConfigBuilder : RootObject, IConfigBuilder
        {
            private static IConfigurator _configurator;

            protected IEnumerable<Type> DomainTypes;

            private readonly IEnumerable<string> _assemblies;

            protected ConfORMConfigBuilder(IEnumerable<string> assemblies)
                : this(new Configurator(), assemblies)
            {
                _assemblies = assemblies;
            }

            protected ConfORMConfigBuilder(IConfigurator configurator, IEnumerable<string> assemblies)
            {
                _configurator = configurator;
                _assemblies = assemblies;
            }

            public abstract void GetDatabaseIntegration(IDbIntegrationConfigurationProperties dBIntegration, string connectionString);

            protected abstract HbmMapping GetMapping();

            public Configuration BuildConfiguration(string connectionString, string sessionFactoryName)
            {
                Contract.Requires(!string.IsNullOrEmpty(connectionString), "ConnectionString is null or empty");
                Contract.Requires(!string.IsNullOrEmpty(sessionFactoryName), "SessionFactory name is null or empty");
                Contract.Requires(_configurator != null, "Configurator is null");

                return CatchExceptionHelper.TryCatchFunction(
                    () =>
                    {
                        DomainTypes = GetTypeOfEntities(_assemblies);

                        if (DomainTypes == null)
                            throw new Exception("Type of domains is null");

                        var configure = new Configuration();
                        configure.SessionFactoryName(sessionFactoryName);

                        configure.Proxy(p => p.ProxyFactoryFactory<ProxyFactoryFactory>());
                        configure.DataBaseIntegration(db => GetDatabaseIntegration(db, connectionString));

                        if (_configurator.GetAppSettingString("IsCreateNewDatabase").ConvertToBoolean())
                        {
                            configure.SetProperty("hbm2ddl.auto", "create-drop");
                        }

                        configure.Properties.Add("default_schema", _configurator.GetAppSettingString("DefaultSchema"));
                        configure.AddDeserializedMapping(GetMapping(), _configurator.GetAppSettingString("DocumentFileName"));

                        SchemaMetadataUpdater.QuoteTableAndColumns(configure);

                        return configure;
                    }, Logger);
            }

            protected IEnumerable<Type> GetTypeOfEntities(IEnumerable<string> assemblies)
            {
                var type = typeof(EntityBase<Guid>);
                var domainTypes = new List<Type>();

                foreach (var assembly in assemblies)
                {
                    var realAssembly = Assembly.LoadFrom(assembly);

                    if (realAssembly == null)
                        throw new NullReferenceException();

                    domainTypes.AddRange(realAssembly.GetTypes().Where(
                        t =>
                        {
                            if (t.BaseType != null)
                                return string.Compare(t.BaseType.FullName, type.FullName) == 0;
                            return false;
                        }));
                }

                return domainTypes;
            }
        }

    I do not want to depend on any particular RDBMS, so I made the builder an abstract class; here is a concrete implementation for SQL Server 2008:

        public class SqlServerConfORMConfigBuilder : ConfORMConfigBuilder
        {
            public SqlServerConfORMConfigBuilder(IEnumerable<string> assemblies)
                : base(assemblies)
            {
            }

            public override void GetDatabaseIntegration(IDbIntegrationConfigurationProperties dBIntegration, string connectionString)
            {
                dBIntegration.Dialect<MsSql2008Dialect>();
                dBIntegration.Driver<SqlClientDriver>();
                dBIntegration.KeywordsAutoImport = Hbm2DDLKeyWords.AutoQuote;
                dBIntegration.IsolationLevel = IsolationLevel.ReadCommitted;
                dBIntegration.ConnectionString = connectionString;
                dBIntegration.LogSqlInConsole = true;
                dBIntegration.Timeout = 10;
                dBIntegration.LogFormatedSql = true;
                dBIntegration.HqlToSqlSubstitutions = "true 1, false 0, yes 'Y', no 'N'";
            }

            protected override HbmMapping GetMapping()
            {
                var orm = new ObjectRelationalMapper();

                orm.Patterns.PoidStrategies.Add(new GuidPoidPattern());

                var patternsAppliers = new CoolPatternsAppliersHolder(orm);
                //patternsAppliers.Merge(new DatePropertyByNameApplier()).Merge(new MsSQL2008DateTimeApplier());
                patternsAppliers.Merge(new ManyToOneColumnNamingApplier());
                patternsAppliers.Merge(new OneToManyKeyColumnNamingApplier(orm));

                var mapper = new Mapper(orm, patternsAppliers);

                var entities = new List<Type>();

                DomainDefinition(orm);
                Customize(mapper);

                entities.AddRange(DomainTypes);

                return mapper.CompileMappingFor(entities);
            }

            private void DomainDefinition(IObjectRelationalMapper orm)
            {
                orm.TablePerClassHierarchy(new[] { typeof(EntityBase<Guid>) });
                orm.TablePerClass(DomainTypes);

                orm.OneToOne<News, Poll>();
                orm.ManyToOne<Category, News>();

                orm.Cascade<Category, News>(Cascade.All);
                orm.Cascade<News, Poll>(Cascade.All);
                orm.Cascade<User, Poll>(Cascade.All);
            }

            private static void Customize(Mapper mapper)
            {
                CustomizeRelations(mapper);
                CustomizeTables(mapper);
                CustomizeColumns(mapper);
            }

            private static void CustomizeRelations(Mapper mapper)
            {
            }

            private static void CustomizeTables(Mapper mapper)
            {
            }

            private static void CustomizeColumns(Mapper mapper)
            {
                mapper.Class<Category>(
                    cm =>
                    {
                        cm.Property(x => x.Name, m => m.NotNullable(true));
                        cm.Property(x => x.CreatedDate, m => m.NotNullable(true));
                    });

                mapper.Class<News>(
                    cm =>
                    {
                        cm.Property(x => x.Title, m => m.NotNullable(true));
                        cm.Property(x => x.ShortDescription, m => m.NotNullable(true));
                        cm.Property(x => x.Content, m => m.NotNullable(true));
                    });

                mapper.Class<Poll>(
                    cm =>
                    {
                        cm.Property(x => x.Value, m => m.NotNullable(true));
                        cm.Property(x => x.VoteDate, m => m.NotNullable(true));
                        cm.Property(x => x.WhoVote, m => m.NotNullable(true));
                    });

                mapper.Class<User>(
                    cm =>
                    {
                        cm.Property(x => x.UserName, m => m.NotNullable(true));
                        cm.Property(x => x.Password, m => m.NotNullable(true));
                    });
            }
        }

    As you can see, we can do many things in this class, such as customizing entity relationships, column bindings, table names, and so on. Here I have made two appliers, for the ManyToOne and OneToMany relationships; you can refer to them here:

        public class ManyToOneColumnNamingApplier : IPatternApplier<PropertyPath, IManyToOneMapper>
        {
            public void Apply(PropertyPath subject, IManyToOneMapper applyTo)
            {
                applyTo.Column(subject.ToColumnName() + "Id");
            }

            public bool Match(PropertyPath subject)
            {
                return subject != null;
            }
        }

        public class OneToManyKeyColumnNamingApplier : OneToManyPattern, IPatternApplier<PropertyPath, ICollectionPropertiesMapper>
        {
            public OneToManyKeyColumnNamingApplier(IDomainInspector domainInspector) : base(domainInspector) { }

            public bool Match(PropertyPath subject)
            {
                return Match(subject.LocalMember);
            }

            public void Apply(PropertyPath subject, ICollectionPropertiesMapper applyTo)
            {
                applyTo.Key(km => km.Column(GetKeyColumnName(subject)));
            }

            protected virtual string GetKeyColumnName(PropertyPath subject)
            {
                Type propertyType = subject.LocalMember.GetPropertyOrFieldType();
                Type childType = propertyType.DetermineCollectionElementType();
                var entity = subject.GetContainerEntity(DomainInspector);
                var parentPropertyInChild = childType.GetFirstPropertyOfType(entity);
                var baseName = parentPropertyInChild == null
                    ? subject.PreviousPath == null ? entity.Name : entity.Name + subject.PreviousPath
                    : parentPropertyInChild.Name;
                return GetKeyColumnName(baseName);
            }

            protected virtual string GetKeyColumnName(string baseName)
            {
                return string.Format("{0}Id", baseName);
            }
        }

    Everyone can also download the ConfORM source at Google Code and see the examples inside it. In the next part I will write about the multiple-database factory. I hope you enjoy it. Happy coding, and see you in the next part.

    Read the article

  • Presenting Loading Data Warehouse Partitions with SSIS 2012 at SQL Saturday DC!

    - by andyleonard
    Join Darryll Petrancuri and me as we present Loading Data Warehouse Partitions with SSIS 2012 Saturday 8 Dec 2012 at SQL Saturday 173 in DC ! SQL Server 2012 table partitions offer powerful Big Data solutions to the Data Warehouse ETL Developer. In this presentation, Darryll Petrancuri and Andy Leonard demonstrate one approach to loading partitioned tables and managing the partitions using SSIS 2012, and reporting partition metrics using SSRS 2012. Objectives A practical solution for loading Big...(read more)

    Read the article

  • Access functions from user control without events?

    - by BornToCode
    I have an application made with user controls and a function on the main form that removes the previous user controls and shows the desired user control, centered and tweaked:

        public void DisplayControl(UserControl uControl)

    I find it much easier to make this function static, or to access it by reference from the user control, like this:

        MainForm mainform_functions = (MainForm)Parent;
        mainform_functions.DisplayControl(uc_a);

    You probably think it's a sin to access a function in the main form from the user control; however, raising an event seems much more complex in such a case. I'll give a simple example: let's say I raise an event from usercontrol_A to show usercontrol_B on the main form, so I write this:

        uc_a.show_uc_b += (s, e) =>
        {
            usercontrol_B uc_b = new usercontrol_B();
            DisplayControl(uc_b);
        };

    Now what if I want usercontrol_B to also have an event to show usercontrol_C? Then it would look like this:

        uc_a.show_uc_b += (s, e) =>
        {
            usercontrol_B uc_b = new usercontrol_B();
            DisplayControl(uc_b);
            uc_b.show_uc_c += (s2, e2) =>
            {
                usercontrol_C uc_c = new usercontrol_C();
                DisplayControl(uc_c);
            };
        };

    THIS LOOKS AWFUL! The code is much simpler and more readable when you actually access the function from the user control itself, so I came to the conclusion that in such a case it's not so terrible if I break the rules and don't use events for such a general function. I also think that a readable user control that needs small adjustments for another app is preferable to a 100% 'generic' one that makes my code look like a pile of mud. What is your opinion? Am I mistaken?
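    For what it's worth, one middle ground between casting to the parent and chaining events is to hide the main form behind a small interface, so user controls depend on a navigation contract rather than on MainForm itself. A hedged C# sketch; names other than DisplayControl are illustrative:

        using System.Windows.Forms;

        public interface INavigator
        {
            void DisplayControl(UserControl uControl);
        }

        public partial class MainForm : Form, INavigator
        {
            public void DisplayControl(UserControl uControl)
            {
                // existing remove-previous-control-and-center logic goes here
            }
        }

        public class usercontrol_A : UserControl
        {
            void ShowB()
            {
                // FindForm() walks up the parent chain, so this works even when
                // the control is not a direct child of the form.
                if (FindForm() is INavigator navigator)
                    navigator.DisplayControl(new usercontrol_B());
            }
        }

        public class usercontrol_B : UserControl { }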

    Read the article

  • Speaking at Triangle SQL Server User Group 16 Mar 2010!

    - by andyleonard
    I'm excited to present Applied SSIS Design Patterns to the Triangle SQL Server User Group 16 Mar 2010! This is a reprise of my PASS Summit 2009 spotlight session. If you read this blog and make the meeting, introduce yourself! :{> Andy ...(read more)

    Read the article

  • PASS Summit 2010 Presentation Feedback

    - by andyleonard
    Introduction
    It's always an honor to present anywhere. Presenting at the PASS Summit is a special honor. I delivered three presentations last month:

    Database Design for Developers
    SSIS Design Patterns, Part 2
    A Lightning Talk on SSIS

    Database Design for Developers
    First, a bit of explanation (defense): I submitted this abstract to the PASS Abstracts folks by mistake. I kid you not. Inspired by Adam Machanic ( Blog | @AdamMachanic ) I maintain a document of current presentations. I've recently published...(read more)

    Read the article
