Search Results

Search found 1215 results on 49 pages for 'recursive acronym'.

  • AutoMapper strings to enum descriptions

    - by 6footunder
    Given the requirement: take an object graph and set every enum-typed property based on the processed value of a companion string property. Convention dictates that the source string property has the same name as the enum property, with a postfix of "Raw". By "processed" we mean we'll need to strip specified characters, etc. I've looked at custom formatters, value resolvers and type converters, and none of them seems like a solution for this. We want to use AutoMapper as opposed to our own reflection routine for two reasons: a) it's used extensively throughout the rest of the project, and b) it gives you recursive traversal out of the box. Example: given the (simple) structure below, and this:

        var tmp = new SimpleClass { CountryRaw = "United States", Person = new Person { GenderRaw = "Male" } };
        var tmp2 = new SimpleClass();
        Mapper.Map(tmp, tmp2);

    we'd expect tmp2's MappedCountry enum to be Country.UnitedStates and the Person property to have a gender of Gender.Male.

        public class SimpleClass
        {
            public string CountryRaw { get; set; }
            public Country MappedCountry { get; set; }
            public Person Person { get; set; }
        }

        public class Person
        {
            public string GenderRaw { get; set; }
            public Gender Gender { get; set; }
            public string Surname { get; set; }
        }

        public enum Country { UnitedStates = 1, NewZealand = 2 }

        public enum Gender { Male, Female, Unknown }

    Thanks
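
    For illustration: the question is about AutoMapper in C#, but the convention itself ("X set from XRaw, characters stripped, applied recursively") is compact to pin down in a sketch. The Python below is not AutoMapper; every name in it (map_raw_enums, the strip pattern, the enum_types registry) is invented for the sketch. In AutoMapper terms this per-property work is roughly what a custom resolver would do for each enum member; the open question in the post is wiring it up by convention rather than per member.

        import re
        from enum import Enum

        class Gender(Enum):
            Male = 0
            Female = 1
            Unknown = 2

        class Person:
            def __init__(self, gender_raw):
                self.GenderRaw = gender_raw
                self.Gender = None

        def map_raw_enums(obj, enum_types, strip_pattern=r"[^A-Za-z0-9]"):
            # Convention: for each enum property E registered for this type,
            # read the sibling string property E + "Raw", strip unwanted
            # characters (so "United States" -> "UnitedStates"), and resolve
            # the enum member by name.
            for prop, enum_type in enum_types.get(type(obj), {}).items():
                raw = getattr(obj, prop + "Raw", None)
                if isinstance(raw, str):
                    setattr(obj, prop, enum_type[re.sub(strip_pattern, "", raw)])
            # Recursive traversal into nested plain objects.
            for child in vars(obj).values():
                if hasattr(child, "__dict__") and not isinstance(child, (Enum, type)):
                    map_raw_enums(child, enum_types, strip_pattern)

        p = Person("Male")
        map_raw_enums(p, {Person: {"Gender": Gender}})
        print(p.Gender)  # Gender.Male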

  • Is this the best way to grab common elements from a Hash of arrays?

    - by Hulihan Applications
    I'm trying to get a common element from a group of arrays in Ruby. Normally you can use the & operator to compare two arrays, which returns the elements present in both. This is all good, except when you're trying to get the common elements of more than two arrays. In my case I want the common elements of an unknown, dynamic number of arrays stored in a hash, so I resorted to Ruby's eval() method, which executes a string as actual code. Here's the function I wrote:

        def get_common_elements_for_hash_of_arrays(hash)
          # Get the elements common to every array in the hash, e.g.:
          #   ["1","2","3"] & ["2","4","5"] & ["2","5","6"]  # => ["2"]
          #   eval("[\"1\",\"2\",\"3\"] & [\"2\",\"4\",\"5\"] & [\"2\",\"5\",\"6\"]")  # => ["2"]
          eval_string_array = Array.new # strings of arrays, e.g. "[\"2\",\"5\",\"6\"]", which we will join with & to get all common elements
          hash.each do |key, array|
            eval_string_array << array.inspect
          end
          eval_string = eval_string_array.join(" & ") # create an eval string delimited with & so we can get common values
          return eval(eval_string)
        end

        example_hash = {:item_0 => ["1","2","3"], :item_1 => ["2","4","5"], :item_2 => ["2","5","6"]}
        puts get_common_elements_for_hash_of_arrays(example_hash) # => 2

    This works, but I'm wondering... eval, really? Is this the best way to do it? Are there any other ways to accomplish this (besides a recursive function, of course)? If anyone has any suggestions, I'm all ears. Otherwise, feel free to use this code if you need to grab a common element from a group or hash of arrays; it can also easily be adapted to search an array of arrays.
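
    For what it's worth, the fold can be done without eval: Ruby's Array#& composes, so hash.values.inject(:&) returns the elements common to every array in one line. The sketch below expresses the same idea in Python with set intersection (function and variable names are mine; it assumes the hash contains at least one array):

        from functools import reduce

        def common_elements(hash_of_arrays):
            # Fold set intersection across every array; no eval() required.
            sets = (set(a) for a in hash_of_arrays.values())
            return sorted(reduce(lambda x, y: x & y, sets))

        example = {"item_0": ["1", "2", "3"],
                   "item_1": ["2", "4", "5"],
                   "item_2": ["2", "5", "6"]}
        print(common_elements(example))  # ['2']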

  • Shell halts while looping and 'transforming' values in dictionary (Python 2.7.5)

    - by Gus
    I'm building a program that will sum digits in a given list in a recursive way. Say, if the source list has 10 elements, the second list will have 9, the third 8, and so on, until the last list has only one element. This is done by adding the first element to the second, then the second to the third, and so on. I'm stuck without feedback from the shell: it halts without throwing any errors, then in a couple of seconds the fan is spinning like crazy. I've read quite a few posts here and changed my approach, but I'm not sure that what I have so far can produce the results I'm looking for. Thanks in advance:

        #---------------------------------------------------
        # functions
        #---------------------------------------------------

        # sum up pairs in a list
        def reduce(inputList):
            i = 0
            while (i < len(inputList)):
                # ref to current and next item
                j = i + 1
                # don't go for the last item
                if j != len(inputList):
                    # new number eq current + next number
                    newNumber = inputList[i] + inputList[j]
                    if newNumber >= 10:
                        # reduce newNumber to single digit
                        newNumber = sum(map(int, str(newNumber)))
                    # collect into temp list
                    outputList.append(newNumber)
                    i = i + 1
            return outputList

        #---------------------------------------------------
        # program starts here
        #---------------------------------------------------
        outputList = []
        sourceList = [7, 3, 1, 2, 1, 4, 6]
        counter = len(sourceList)
        dict = {}
        dict[0] = sourceList
        print '-------------'
        print 'Level 0:', dict[0]
        for i in range(counter):
            j = i + 1
            if j != counter:
                baseList = dict.get(i)
                # check function to understand what it does
                newList = reduce(baseList)
                # new key and value from previous/transformed value
                dict[j] = newList
                print 'Level %d: %s' % (j, dict[j])
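
    A hedged reading of the hang, assuming the indentation above matches the original: i = i + 1 only runs inside the if block, so when i reaches the last index (where j == len(inputList)) the while loop never advances - a silent infinite loop, which fits the frozen shell and the fan spinning up. outputList is also a module-level list that reduce() keeps appending to, so each level re-processes earlier results. A sketch of the same algorithm with both issues fixed (Python 3 syntax; the names are mine):

        def reduce_pairs(input_list):
            # Sum each adjacent pair, folding two-digit sums back to one digit.
            output = []                          # local, so repeated calls don't accumulate
            for i in range(len(input_list) - 1):
                n = input_list[i] + input_list[i + 1]
                if n >= 10:
                    n = sum(map(int, str(n)))    # e.g. 13 -> 1 + 3 -> 4
                output.append(n)
            return output

        levels = [[7, 3, 1, 2, 1, 4, 6]]
        while len(levels[-1]) > 1:
            levels.append(reduce_pairs(levels[-1]))
        for depth, row in enumerate(levels):
            print('Level %d: %s' % (depth, row))

    (Shadowing the built-in reduce, and using dict as a variable name, are worth avoiding too; the index bookkeeping disappears entirely once a for loop replaces the stallable while.)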

  • HELP IN JAVA!! URGENT

    - by annayena
    The following function accepts two strings, the second (not the first) of which may contain *'s (asterisks). An * is a replacement for a string (empty, one char or more); it can appear (only in s2) once, twice, more, or not at all, and it cannot be adjacent to another * (as in ab**c) - no need to check for that.

        public static boolean samePattern(String s1, String s2)

    It returns true if the strings are of the same pattern. It must be recursive and must not use any loops or static or global variables. It is also prohibited to use the equals method of the String class. Local variables and method overloading are allowed, as are only these methods: charAt(i), substring(i), substring(i, j), length(). Examples:

        1: TheExamIsEasy; 2: "The*xamIs*y" --- true
        1: TheExamIsEasy; 2: "Th*mIsEasy*" --- true
        1: TheExamIsEasy; 2: "*" --- true
        1: TheExamIsEasy; 2: "TheExamIsEasy" --- true
        1: TheExamIsEasy; 2: "The*IsHard" --- FALSE

    I've been stuck on this question for many hours now! I need the solution in Java; please kindly help me.
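
    Without doing the homework outright: this is the classic two-way recursion on the pattern string. The skeleton below is in Python purely to keep it short (the names are mine); translating it to Java with charAt, substring and length is mechanical. The key cases: an empty pattern matches only an empty string, and a leading * either matches nothing (drop the *) or swallows one character of s1 (keep the *):

        def same_pattern(s1, s2):
            # Pattern exhausted: match only if the subject is exhausted too.
            if len(s2) == 0:
                return len(s1) == 0
            if s2[0] == '*':
                # '*' matches the empty string (drop it) ...
                if same_pattern(s1, s2[1:]):
                    return True
                # ... or swallows one more character of s1 (keep it).
                return len(s1) > 0 and same_pattern(s1[1:], s2)
            # Literal characters must agree, then recurse on both tails.
            return len(s1) > 0 and s1[0] == s2[0] and same_pattern(s1[1:], s2[1:])

        print(same_pattern("TheExamIsEasy", "The*xamIs*y"))   # True
        print(same_pattern("TheExamIsEasy", "The*IsHard"))    # False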

  • Rsync over ssh: "ERROR: module is read only" suddenly appeared

    - by user978548
    For some time I've used rsync over ssh to back up my shared host's contents to my personal Synology NAS (a 212j, for that matter), and it worked quite well. For information, I use a password-less ssh connection. Three days ago I updated my NAS software, and since then (or at least I believe that's the cause) the backup won't work anymore. I get the following error on the host:

        rsync: writefd_unbuffered failed to write 4 bytes to socket [sender]: Broken pipe (32)
        ERROR: module is read only

    ...which I do not understand. Besides that, nothing that I know of has changed in either the source or the destination that could be related to rsync or ssh. I did check a few things, and all seems to be alright: I can still connect through ssh from the host to my NAS with the right user, so ssh things like keys haven't changed. I also have the correct file permissions on the NAS (I checked, and I also tried to create files and directories with the user that rsync uses over ssh). I read here and there that the error means I have to ensure my rsyncd.conf has read only = no in it, but as far as I know I never used rsyncd, never configured anything for it, and until now everything worked like a charm. I use the following command to do the backup:

        rsync -ab --recursive \
              --files-from="$FILES_FROM" \
              --backup-dir=backup_$SUFFIX \
              --delete \
              --filter='protect backup_*' \
              $WDIRECTORY/ \
              remote_backup:$REMOTE_BACKUP/

    So I'm stuck and really can't figure out what happened.

    Edit: As suggested in the comments, I tried passing commands to ssh directly (not from inside an ssh session), which worked as expected, and I also tried a single rsync command, which didn't work, failing just like the complete backup command:

        (sharedHost):hostuser:~ > touch test.txt
        (sharedHost):hostuser:~ > rsync test.txt remote_backup:backups/test.txt
        ERROR: module is read only
        rsync error: syntax or usage error (code 1) at main.c(1034) [Receiver=3.0.8]
        rsync: connection unexpectedly closed (9 bytes received so far) [sender]
        rsync error: error in rsync protocol data stream (code 12) at io.c(601) [sender=3.0.7]

    and:

        (sharedHost):hostuser:~ > ssh remote_backup 'touch /abs_path_to_backups/backups/test2.txt && echo "ProoF" > /abs_path_to_backups/backups/test2.txt'
        (sharedHost):hostuser:~ > ssh remote_backup 'cat /abs_path_to_backups/backups/test2.txt'
        ProoF
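
    A hedged pointer rather than a confirmed fix: "module" is rsync-daemon vocabulary, and Synology DSM updates are known to reset or regenerate the NAS-side rsync service configuration, so the NAS may now be answering as a read-only rsyncd module even though the transport is ssh. If the NAS exposes an rsyncd.conf (the path /etc/rsyncd.conf is an assumption, as it varies by DSM version, and [backups] below is a made-up module name), the relevant stanza would need:

        # /etc/rsyncd.conf on the NAS (path and module name are assumptions)
        [backups]
            path = /volume1/backups
            read only = no

    Re-enabling the network backup (rsync) service in DSM's settings after the update is also commonly reported to rewrite this configuration, which is worth checking before editing files by hand.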

  • Trying to get DNS services running on Windows Server 2008 R2, what am I getting wrong ?

    - by LaserBeak
    OK, so I am basically trying to get a home server PC up that will provide domain name services and act as mail server and web server. I have one static IP; well, it's not officially static, but it hasn't changed in two years, so I'll call it static. I have done the following:

    1. Configured router NAT/virtual port forwarding of UDP/TCP port 53 to the internal IP of my server, 192.168.1.16. In adapter settings I specified the manual settings: IP 192.168.1.16, gateway 192.168.1.1, subnet 255.255.255.0 and loopback DNS 127.0.0.1.
    2. Checked, using my public IP and http://www.canyouseeme.org/, that port 53 is open and is not being blocked by my ISP; it can see services on this port.
    3. Registered a domain name (mydomain.com.au).
    4. Updated the whois database through the domain registrar's site and registered the nameserver names ns0.mydomain.com.au and ns2.mydomain.com.au, both associated with my single public IP. (Waited 24 hours.)
    5. Updated the nameservers for mydomain.com.au: primary ns0.mydomain.com.au, secondary ns2.mydomain.com.au. (Waited 24+ hours.)
    6. Installed Server 2008 R2 and installed the web server role and DNS role. The web server works: when I enter my public IP into the browser of any PC/mobile, I get the IIS7 welcome page.
    7. In the DNS server, created a new forward lookup zone:

        ;
        ;  Database file mydomain.com.au.dns for mydomain.com.au zone.
        ;      Zone version:  10
        ;
        @                       IN  SOA mydomain.com.au. mydomain.testdomain.com. (
                                        10           ; serial number
                                        900          ; refresh
                                        600          ; retry
                                        86400        ; expire
                                        3600       ) ; default TTL
        ;
        ;  Zone NS records
        ;
        @                       NS      ns0.mydomain.com.au.
        @                       NS      ns1.mydomain.com.au.
        ;
        ;  Zone records
        ;
        @                       A       192.168.1.16
        www                     A       192.168.1.16

    The domain name services will, however, not work. The whois database updated with ns0.mydomain.com.au etc., but when I type my site name www.mydomain.com.au from an external machine it will not open the site, and I can't even ping it (can't find host). When I check the ns0.mydomain.com.au NS record using a tool like http://www.squish.net/dnscheck/ I get:

        Security: Server ns0.mydomain.com.au (XXX.XXX.XXX.XX <- my public IP) is recursive
        Domain exists but there is no such record

    Any ideas? Thanks...
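
    Two things in that zone stand out; treat this as an educated guess, since only part of the setup is shown. The zone's NS records name ns0/ns1 while the registrar has ns0/ns2, and every A record points at 192.168.1.16, a private address that external resolvers cannot reach; there are also no A records for the nameserver hosts themselves. A sketch of the corrected records, with 203.0.113.10 standing in for the real public IP:

        ;
        ;  Zone NS records
        ;
        @                       NS      ns0.mydomain.com.au.
        @                       NS      ns2.mydomain.com.au.
        ns0                     A       203.0.113.10
        ns2                     A       203.0.113.10
        ;
        ;  Zone records
        ;
        @                       A       203.0.113.10
        www                     A       203.0.113.10

    The dnscheck warning about the server being recursive is a separate issue: for a public authoritative server you would normally disable recursion (in DNS Manager: server properties, Advanced tab, "Disable recursion").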

  • Secure copy uucp style

    - by Alexander Janssen
    I often have to make a lot of hops to reach a remote host, just because there is no direct routing between my client and that host. When I need to copy files from a remote host two or more hops away, I always have to:

        client$ ssh host1
        host1$ ssh host2
        host2$ scp host3:/myfile .
        host2$ exit
        host1$ scp host2:myfile .
        host1$ exit
        client$ scp host1:myfile .

    Back when uucp was still being used, this would be as simple as:

        uucp host1!host2!host3 /myfile .

    I know there's uucp over ssh, but unfortunately I don't have the proper privileges on those machines to set it up. Also, I'm not sure I really want to fiddle around with customers' machines. Does anyone know of a method for doing this task without setting up a lot of tunnels or deploying new software to the remote hosts? Maybe some kind of recursive script which clones itself to all the remote hosts, doing the hard work for me? Assume that authentication takes place with public keys and that all hosts do SSH agent forwarding.

    Edit: I'm not looking for a way to automatically forward my interactive session to the next-hop host. I want a solution to copy files bangpath-style using scp via multiple hops, without the need to install uucp on any of those machines. I don't have the (legal) rights or the privileges to make permanent changes to the ssh config. Also, I'm sharing this username and these hosts with a lot of other people. I'm willing to hack up my own script, but I wanted to know if anyone knows something which already does this. Minimum-invasive changes to hosts on the bangpath, simple invocation from the client.

    Edit 2: To give you an impression of how it's properly done for interactive sessions, have a look at the GXPC cluster shell. This is basically a Python script which spawns itself over to all remote hosts that have connectivity and where your ssh key is installed. The great thing about it is that you can tell it "I can reach HostC via HostB via HostA", and it just works. I want to have this for scp.
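
    One stock-OpenSSH way to get a single client-side command with nothing installed or reconfigured on the hops: chain ProxyCommand options so scp tunnels straight to the last host. A sketch using the host names from the question; -W needs OpenSSH 5.4 or newer (on older versions substitute 'nc %h %p', which requires netcat on the hop), and the doubled %% is deliberate so the inner token is expanded by the inner ssh rather than the outer one:

        # one hop: reach host2 through host1
        scp -o 'ProxyCommand=ssh host1 -W %h:%p' host2:myfile .

        # two hops: reach host3 through host1, then host2
        scp -o 'ProxyCommand=ssh -o "ProxyCommand=ssh host1 -W %%h:%%p" host2 -W %h:%p' host3:/myfile .

    Nothing permanent changes on any machine - the options live only on the command line - and public-key auth with agent forwarding works as usual.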

  • SQL SERVER – Weekly Series – Memory Lane – #038

    - by Pinal Dave
    Here is a list of selected articles from SQLAuthority.com across the years. Instead of listing every article, I have picked a few of my favorites and listed them here with additional notes. Let me know which of the following is your favorite article from memory lane.

    2007

    CASE Statement in ORDER BY Clause – ORDER BY using Variable. This article was requested by the application development team leader of my company. His team encountered code where the application prepared a string for the ORDER BY clause of a SELECT statement, passed that string as a variable to a stored procedure (SP), and the SP used EXEC to execute the SQL string. This is not good for performance, as the stored procedure has to recompile every time due to EXEC. sp_executesql can do the same task, but is still not the best for performance.

    SSMS – View/Send Query Results to Text/Grid/Files. Results to Text – CTRL + T; Results to Grid – CTRL + D; Results to File – CTRL + SHIFT + F.

    2008

    Introduction to SPARSE Columns Part 2. I wrote about Introduction to SPARSE Columns Part 1; I suggest you read the first part before continuing with this article, in which we understand the concept of the SPARSE column in more detail. All SPARSE columns are stored as one XML column in the database. Let us see some of the advantages and disadvantages of SPARSE columns.

    Deferred Name Resolution. How come an SP can be created successfully when the table name is incorrect, but not when an incorrect column is used?

    2009

    Backup Timeline and Understanding of Database Restore Process in Full Recovery Model. In general, database backups in full recovery mode are taken as three different kinds of backup files: Full Database Backup, Differential Database Backup, and Log Backup.

    Restore Sequence and Understanding NORECOVERY and RECOVERY. While restoring database files, always use the NORECOVERY option, as it keeps the database in a state where more backup files can be restored. It also keeps the database offline to prevent any changes that could create integrity issues. Once all backup files are restored, run the RESTORE command with the RECOVERY option to bring the database online and operational.

    Four Different Ways to Find Recovery Model for Database. Perhaps the best thing about the technical domain is that most things can be done in more than one way. It is always useful to know the various methods of performing a single task.

    Two Methods to Retrieve List of Primary Keys and Foreign Keys of Database. When Information Schema is used, we cannot discern between primary key and foreign key; we get both keys together. With the sys schema, we can query the data in our preferred way and join the result to other tables to retrieve additional data.

    Get Last Running Query Based on SPID. SPID returns the session ID of the current user process. The acronym SPID comes from the name of its earlier version, Server Process ID.

    2010

    SELECT * FROM dual – Dual Equivalent. Dual is a table created by Oracle together with the data dictionary. It consists of exactly one column named "dummy" and one record, whose value is X. You can check the content of the DUAL table using the following syntax: SELECT * FROM dual.

    Identifying Statistics Used by Query. Someone asked this question in my query optimization and performance tuning training class: "Can I know which statistics were used by my query?"

    2011

    SQL SERVER – Interview Questions and Answers – Frequently Asked Questions – Day 14 of 31. What are the basic functions of the master, msdb, model, tempdb and resource databases? What is the maximum number of indexes per table? Explain a few of the new features of SQL Server 2008 Management Studio: IntelliSense for query editing, multiserver queries, query editor regions, Object Explorer enhancements, Activity Monitor.

    Day 15 of 31. What is Service Broker? Where are SQL Server usernames and passwords stored? What is Policy Management? What is Database Mirroring? What are Sparse Columns? What does the TOP operator do? What is CTE? What is the MERGE statement? What is a filtered index? Which new data types were introduced in SQL Server 2008?

    Day 16 of 31. What are the advantages of using CTEs? How can we rewrite sub-queries as simple SELECT statements or with joins? What is CLR? What are synonyms? What is LINQ? What are isolation levels? What is the use of the EXCEPT clause? What is XPath? What is NOLOCK? What is the difference between an update lock and an exclusive lock?

    Day 17 of 31. How will you handle errors in SQL Server 2008? What is RAISERROR? How do you rebuild the master database? What is the XML datatype? What is data compression? What is the use of DBCC commands? How do you copy tables, schemas and views from one SQL Server to another? How do you find tables without indexes?

    Day 18 of 31. How do you copy data from one table to another? What are catalog views? What are PIVOT and UNPIVOT? What is a Filestream? What is SQLCMD? What do you mean by TABLESAMPLE? What is ROW_NUMBER()? What are ranking functions? What is Change Data Capture (CDC) in SQL Server 2008?

    Day 19 of 31. How can I track changes or identify the latest insert/update/delete on a table? What is CPU pressure? How can I get data from a database on another server? What are bookmark lookups and RID lookups? What is the difference between ROLLBACK IMMEDIATE and WITH NO_WAIT during ALTER DATABASE? What is the difference between GETDATE and SYSDATETIME in SQL Server 2008? How can I check whether automatic statistics update is enabled? How do you find the size of each index on a table? What is the difference between a seek predicate and a predicate? What are the basics and advantages of Policy Management?

    Day 20 of 31. What are Policy Management terms? What is FILLFACTOR? Where in MS SQL Server is '100' equal to '0'? What are points to remember while using the FILLFACTOR argument? What is a ROLLUP clause? What are the various limitations of views? What is a covered index? When I delete data from a table, does SQL Server reduce the size of that table? What are wait types? How do you stop a log file from growing too big? If a stored procedure is encrypted, can we see its definition in Activity Monitor?

    2012

    Example of Width Sensitive and Width Insensitive Collation. A collation is width sensitive when a single-byte (half-width) character and the same character represented as a double-byte (full-width) character compare as not equal. In this example we have one table with two columns; one column has a width-sensitive collation and the other a width-insensitive collation.

    Find Column Used in Stored Procedure – Search Stored Procedure for Column Name. A very interesting conversation about how to find a column used in a stored procedure. There are two characters in the story, both discussing how to find a column in a stored procedure. Here is the two-part story: Part 1 | Part 2.

    SQL SERVER – 2012 Functions – FORMAT() and CONCAT() – An Interesting Usage.

    Generate Script for Schema and Data – SQL in Sixty Seconds #021 – Video. In simple words: databases often move from one place to another, and it is not always possible to back up and restore them. Sometimes only part of the database (schema and data) has to be moved. In this video we learn how to easily generate a script for schema and data and move it from one server to another.

    INFORMATION_SCHEMA.COLUMNS and Value Character Maximum Length -1. I often see the value -1 in the CHARACTER_MAXIMUM_LENGTH column of the INFORMATION_SCHEMA.COLUMNS table. I understand that the length of a column can be between 0 and a large number, but I do not get it when I see a negative value (i.e. -1). Any insight on this subject?

    Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Memory Lane, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
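
    To make the 2007 ORDER BY item concrete: the pattern the article title refers to sorts by a variable without building a SQL string at all, so there is no EXEC recompile and no sp_executesql plumbing. A hedged sketch with made-up table and column names:

        -- Hypothetical table Person(FirstName, LastName), both varchar
        DECLARE @SortBy VARCHAR(20) = 'LastName';

        SELECT FirstName, LastName
        FROM Person
        ORDER BY CASE @SortBy
                     WHEN 'FirstName' THEN FirstName
                     WHEN 'LastName'  THEN LastName
                 END;

    (The branches of the CASE must be type-compatible, or SQL Server will try to convert them to a single type; mixing, say, an int column and a varchar column in the same CASE is a classic trap with this technique.)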

  • CodePlex Daily Summary for Sunday, June 10, 2012

    Popular Releases

    - RCon Development Server: BF3DevServer-Console v0.3: Solved issues 9, 10, 11, 13, 14, 15, 16, 17.
    - SVNUG.CodePlex: Cloud Development with Windows Azure: This release contains the slides for the Cloud Development with Windows Azure presentation.
    - Image Cropper for Umbraco 5: Image Cropper for Umbraco 5.1: for Umbraco version 5.1.
    - SHA-1 Hash Checker: SHA-1 Hash Checker (for Windows): Fixed major bugs. Removed false negatives.
    - Grid.Mvc: Grid.Mvc 1.3: Added Html helper extension methods (see: Documentation); fixed minor bugs; changed namespace to 'GridMvc'.
    - AutoUpdaterdotNET: AutoUpdater.NET 1.0: Everything seems perfect; if you find any problem you can report it at http://www.rbsoft.org/contact.html.
    - Media Companion: Media Companion 3.503b: It has been a while, so it's about time we release another build! Major effort has been for fixing trailer downloads, plus a little bit of work for the episode guide tag in TV show NFOs.
    - Microsoft SQL Server Product Samples: Database: AdventureWorks Sample Reports 2008 R2: AdventureWorks Sample Reports 2008 R2.zip contains several reports, including Sales Reason Comparisons SQL2008R2.rdl, which uses Adventure Works DW 2008R2 as a data source reference. For more information, go to the Sales Reason Comparisons report.
    - Json.NET: Json.NET 4.5 Release 7: Fixed Metro build to pass the Windows Application Certification Kit on Windows 8 Release Preview; fixed Metro build error caused by an anonymous type; fixed ItemConverter not being used when serializing dictionaries; fixed an incorrect object being passed to the Error event when serializing dictionaries; fixed decimal properties not being correctly ignored with DefaultValueHandling.
    - LINQ Extensions Library: 1.0.3.0: New to release 1.0.3.0: Combinatronics: combinations (unique), combinations (with repetition), permutations (unique), permutations (with repetition); convert jagged arrays to fixed multidimensional arrays and fixed multidimensional arrays to jagged arrays; ElementAtMax, ElementAtMin, ElementAtAverage. New set of array extensions (1.0.2.8): Rotate, Flip, Resize (maintaining data), Split, Fuse, Replace. Append and Prepend extensions (1.0.2.7); IndexOf extensions (1.0.2.7); Ne...
    - API for .Net SDK: SDK for .Net, Release 1: (release notes largely lost to encoding; they cover builds for .Net 2.0/3.5/4.0, a Dynamic-based SDK for .Net 4.0, and OAuth AccessToken handling using AppKey, AppSecret and CallbackUrl).
    - Audio Pitch & Shift: Audio Pitch And Shift 4.5.0: Added Instruments tab for modules; open folder content feature; some bug fixes.
    - Python Tools for Visual Studio: 1.5 Beta 1: We're pleased to announce the release of Python Tools for Visual Studio 1.5 Beta. Python Tools for Visual Studio (PTVS) is an open-source plug-in for Visual Studio which supports programming with the Python language. PTVS supports a broad range of features, including: CPython, IronPython, Jython and PyPy; a Python editor with advanced member and signature IntelliSense and refactoring; code navigation ("Find all refs", go to definition, and object browser); local and remote debugging; ...
    - Circuit Diagram: Circuit Diagram 2.0 Beta 1: New in this release: automatically flip components when placing; delete components using the keyboard delete key; resize document; document properties window; print document; recent files list; confirm when exiting with unsaved changes; thumbnail previews in Windows Explorer for CDDX files; show shortcut keys in toolbox; highlight selected item in toolbox; zoom using the mouse scroll wheel while holding down the Ctrl key; plugin support for custom export formats and custom import formats; open...
    - Umbraco CMS: Umbraco CMS 5.2 Beta: The future of Umbraco: v5 represents the future architecture of Umbraco, so please be aware that while it's technically superior to v4, it's not yet on a par feature- or performance-wise. What's new? For full details see our http://progress.umbraco.org task tracking page showing all items complete for 5.2. In a nutshell: Package Builder, Starter Kits, Dynamic Extension Methods, Querying/IsHelpers, friendly alt template URLs, localization, various bug fixes and performance enhancements. Gett...
    - JayData - The cross-platform HTML5 data-management library for JavaScript: JayData 1.0.5: JayData is a unified data access library for JavaScript developers to query and update data from different sources like WebSQL, IndexedDB, OData, Facebook or YQL. See it in action in this 6 minutes video. New features in JayData 1.0.5: http://jaydata.org/blog/jaydata-1.0.5-is-here-with-authentication-support-and-more http://jaydata.org/blog/release-notes Sencha Touch 2 module (read-only): this module can be used to bind data retrieved by JayData to a Sencha Touch 2 generated user interface. (exam...
    - Application Architecture Guidelines: Application Architecture Guidelines 3.0.7: 3.0.7.
    - Jolt Environment: Jolt v2 Stable: Many new features. Follow development here for more information: http://www.rune-server.org/runescape-development/rs-503-client-server/projects/298763-jolt-environment-v2.html Setup instructions in download.
    - SharePoint Euro 2012 - UEFA European Football Predictor: havivi.euro2012.wsp (1.5): New features: multilingual support; max users property in the Standings web part; games time zone change (UTC +1); bug fix for the version 1.4 locking problem (http://euro2012.codeplex.com/discussions/358262); bug fix for "Field Title not found" (v1.3, German SP; http://euro2012.codeplex.com/discussions/358189#post844228); bug fix for "Access is denied" for users with contribute rights; bug fix for installing on non-English versions of SharePoint; bug fix for title rules. Installing SharePoint Euro 2012 Predictor SharePoint E...
    - myManga: myManga v1.0.0.4: ChangeLog - updating from the previous version: extract contents of Release - myManga v1.0.0.4.zip to the previous version's folder. Replaces: myManga.exe, BakaBox.dll, CoreMangaClasses.dll, Manga.dll, Plugins/MangaReader.manga.dll, Plugins/MangaFox.manga.dll, Plugins/MangaHere.manga.dll, Plugins/MangaPanda.manga.dll.

    New Projects

    - Database Based Config Management: This project helps you to consolidate all your app configs into a DB and access them from a single location.
    - eLogistics: My logistics system.
    - Facebook Web Parts for SharePoint 2010: Going beyond authentication with Facebook and SharePoint 2010.
    - FsJson: A JSON parser in F#.
    - Google Web Service API for Windows Phone: Google Web Service API ported to .NET for Windows Phone.
    - Hedge when you can, not when you have to: Classic Black-Scholes/Merton option hedging assumes options are continuously hedged. This project is for exploring what happens in the real world of option hedging.
    - Infragistics via PRISM: Using Infragistics RibbonBar and DockManager with PRISM.
    - LightBus: (Chinese description largely lost to encoding; mentions Silverlight out-of-browser and Windows 8 Metro clients.)
    - metaPost: metaPost provides a MetaWeblog interface for managing content in DotNetNuke modules using MetaWeblog-enabled editors such as Windows Live Writer. The metaPost module defines a framework that can be used to easily add MetaWeblog publishing support to existing DotNetNuke modules.
    - MPerfs Tool: MPerfs is a tool from the MSSQL Performance Tool web site, developed in PHP/JavaScript with graphs and tables, using an MSSQL database containing DMV data aggregations and historicals. Supported versions: Microsoft SQL Server 2005 and 2008 R1 (2008 R2 soon). Important: the tool doesn't monitor SSAS, SSIS or SSRS.
    - NanoMVVM: a lightweight WPF MVVM framework: This is a lightweight C# 4.0 ViewModel-first MVVM framework designed to aid in the creation of desktop WPF applications.
    - Open Personal Response System: OpenPRS is designed to be an audience-feedback tool for presenters to keep audiences engaged in a presentation, as well as facilitating information gathering from the audience and presentation to the presenter and other interested parties.
    - Panda TimeManager: Panda TimeManager is software for management of timesheets.
    - Progetto Sicurezza: A *VERY* basic implementation of a Certification Authority and a client to use it, made with VB.NET, BouncyCastle and iTextSharp.
    - Proyectos de Pruebas de UTB Minor Sql 2012: Test projects for the UTB Minor SQL 2012.
    - Really fast Javascript Base64 encoder/decoder with utf-8 support: If you wonder why another one, then focus on the title. I've seen a lot of implementations (custom ones and in libraries/frameworks) that are fast, but not as fast as this one. What you get is significant performance in encoding and light speed in decoding.
    - Rezerwior - JSF: A web application project using Java Server Faces 2.0.
    - Rules of Acquisition: Ferengi rules of acquisition for Windows Phone.
    - SCOMA - FIM Connector for System Center Orchestrator: SCOMA is the acronym for the Web Service-based FIM connector (aka Management Agent) for System Center Orchestrator, SCO for short. SCOMA is written in C# and based on the new ECMA2 (Extensible Connectivity 2.0 Management Agents) interface that is part of FIM 2010 R2 and FIM 2010 Update 2.
    - SHA-1 Hash Checker: Offline command-line tool that generates a SHA-1 hash for a text string or pass-phrase. Additionally, you may check your hash against published lists of compromised hashes, to check whether your password has been compromised or not.
    - Testprojekt: This is just a test.
    - Tmib Video Downloader: A small YouTube video downloader. Created in C#.
    - TVGrid: Watch several web streams simultaneously.
    - (project name lost to encoding): An ARPG (Chinese description lost to encoding).

  • Weird CSS-behaviour [migrated]

    - by WMRKameleon
        <!DOCTYPE html>
        <html>
        <head>
            <title>PakHet</title>
            <link rel="stylesheet" type="text/css" href="css/basis.css" />
        </head>
        <body>
            <div class="wrapper">
                <div id='cssmenu'>
                    <ul>
                        <li class="active"><a href='index.html'><span>Start</span></a></li>
                        <li><a href='pakhet.html'><span>Over PakHet</span></a></li>
                        <li><a href='overons.html'><span>Over Ons</span></a></li>
                        <li class='has-sub'><a href='#'><span>Uw pakket</span></a>
                            <ul>
                                <li><a href='aanmelden.php'><span>Aanmelden</span></a></li>
                                <li><a href='traceren.php'><span>Traceren</span></a></li>
                            </ul>
                        </li>
                    </ul>
                </div>
                <div class="header">
                    <h1>Hier komt de titel van de website</h1>
                </div>
                <div class="content">
                    <p>Dit is de tekst van de content. Dit is de indexpagina.</p>
                </div>
            </div>
        </body>
        </html>

    And this is the CSS:

        /* CSS reset */
        html, body, div, span, applet, object, iframe, h1, h2, h3, h4, h5, h6,
        p, blockquote, pre, a, abbr, acronym, address, big, cite, code, del,
        dfn, em, img, ins, kbd, q, s, samp, small, strike, strong, sub, sup,
        tt, var, b, u, i, center, dl, dt, dd, ol, ul, li, fieldset, form,
        label, legend, table, caption, tbody, tfoot, thead, tr, th, td,
        article, aside, canvas, details, embed, figure, figcaption, footer,
        header, hgroup, menu, nav, output, ruby, section, summary, time,
        mark, audio, video, * {
            margin: 0;
            padding: 0;
            border: 0;
            vertical-align: baseline;
        }

        table {
            border-collapse: collapse;
            border-spacing: 0;
        }

        /* End of the CSS reset; now the real code */

        html, body { background: url(../images/bg_picture.jpg) fixed no-repeat; }

        .wrapper { margin: 0 auto; }

        .header {
            margin: 0 auto;
            background-color: rgba(0, 0, 0, 0.4);
        }

        .content {
            background-color: rgba(0, 0, 0, 0.4);
            width: 600px;
            margin: 0 auto;
            margin-top: 50px;
        }

        .content p {
            color: white;
            text-shadow: 1px 1px 0 rgba(0, 0, 0, 0.5);
            font-family: "Lucida Grande", sans-serif;
        }

        #cssmenu {
            height: 37px;
            display: block;
            padding: 0;
            margin: 0;
            border: 1px solid;
        }

        #cssmenu > ul { list-style: inside none; padding: 0; margin: 0; }

        #cssmenu > ul > li {
            list-style: inside none;
            padding: 0;
            margin: 0;
            float: left;
            display: block;
            position: relative;
        }

        #cssmenu > ul > li > a {
            outline: none;
            display: block;
            position: relative;
            padding: 12px 20px;
            font: bold 13px/100% "Lucida Grande", Helvetica, sans-serif;
            text-align: center;
            text-decoration: none;
            text-shadow: 1px 1px 0 rgba(0, 0, 0, 0.4);
        }

        #cssmenu > ul > li:first-child > a { border-radius: 5px 0 0 5px; }

        #cssmenu > ul > li > a:after {
            content: '';
            position: absolute;
            border-right: 1px solid;
            top: -1px;
            bottom: -1px;
            right: -2px;
            z-index: 99;
        }

        #cssmenu ul li.has-sub:hover > a:after { top: 0; bottom: 0; }

        #cssmenu > ul > li.has-sub > a:before {
            content: '';
            position: absolute;
            top: 18px;
            right: 6px;
            border: 5px solid transparent;
            border-top: 5px solid #fff;
        }

        #cssmenu > ul > li.has-sub:hover > a:before { top: 19px; }

        #cssmenu ul li.has-sub:hover > a {
            background: #3f3f3f;
            border-color: #3f3f3f;
            padding-bottom: 13px;
            padding-top: 13px;
            top: -1px;
            z-index: 999;
        }

        #cssmenu ul li.has-sub:hover > ul,
        #cssmenu ul li.has-sub:hover > div { display: block; }

        #cssmenu ul li.has-sub > a:hover { background: #3f3f3f; border-color: #3f3f3f; }

        #cssmenu ul li > ul,
        #cssmenu ul li > div {
            display: none;
            width: auto;
            position: absolute;
            top: 38px;
            padding: 10px 0;
            background: #3f3f3f;
            border-radius: 0 0 5px 5px;
            z-index: 999;
        }

        #cssmenu ul li > ul { width: 200px; }

        #cssmenu ul li > ul li {
            display: block;
            list-style: inside none;
            padding: 0;
            margin: 0;
            position: relative;
        }

        #cssmenu ul li > ul li a {
            outline: none;
            display: block;
            position: relative;
            margin: 0;
            padding: 8px 20px;
            font: 10pt "Lucida Grande", Helvetica, sans-serif;
            color: #fff;
            text-decoration: none;
            text-shadow: 1px 1px 0 rgba(0, 0, 0, 0.5);
        }

        #cssmenu,
        #cssmenu > ul > li > ul > li a:hover {
            background: #333333;
            background: -moz-linear-gradient(top, #333333 0%, #222222 100%);
            background: -webkit-gradient(linear, left top, left bottom, color-stop(0%, #333333), color-stop(100%, #222222));
            background: -webkit-linear-gradient(top, #333333 0%, #222222 100%);
            background: -o-linear-gradient(top, #333333 0%, #222222 100%);
            background: -ms-linear-gradient(top, #333333 0%, #222222 100%);
            background: linear-gradient(top, #333333 0%, #222222 100%);
            filter: progid:DXImageTransform.Microsoft.gradient(startColorstr='#333333', endColorstr='#222222', GradientType=0);
        }

        #cssmenu { border-color: #000; }
        #cssmenu > ul > li > a { border-right: 1px solid #000; color: #fff; }
        #cssmenu > ul > li > a:after { border-color: #444; }
        #cssmenu > ul > li > a:hover { background: #111; }
        #cssmenu > ul > li.active > a { color: orange; }

        .header { clear: both; }

    The problem is that whenever I hover over the dropdown menu, a 1px margin appears between the menu and the header. Can I solve that? I can't seem to find the solution.

  • Managed Service Architectures Part I

    - by barryoreilly
    Instead of thinking about service oriented architecture, a concept that is continually defined, redefined, abused and mistreated, perhaps it is time to drop the acronym and consider what we actually need to get the job done.

    'Pure' SOA involves the modeling of an organisation's processes, the so-called 'top down' approach, followed by the implementation of these processes as services. Another approach, more commonly seen in the wild, is the bottom up approach. This usually involves services that simply start popping up in the organisation, and SOA in this case is often just an attempt to rein in these services. Such projects, although described as SOA projects for a variety of reasons, have clearly little relation to process driven architecture. Much has been written about these two approaches, with many deciding that a hybrid of both methods is needed to succeed with SOA.

    These hybrid methods are a sensible compromise, but one gets the feeling that there is too much focus on 'succeeding with SOA'. Organisations who focus too much on bottom up development, or who waste too much time and money on top down approaches that don't produce results, are often recommended to attempt an 'agile' (Erl) or 'middle-out' (Microsoft) approach in order to succeed with SOA. The problem with recommending this approach is that, in most cases, succeeding with SOA isn't the aim of the project. If a project is started with the simple aim of 'succeeding with SOA', then the reasons for the project's existence probably need to be questioned.

    There are a number of things we can be sure of:

    - An organisation will have a number of disparate IT systems
    - Some of these systems will have redundant data and functionality
    - Integration will give considerable ROI
    - Integration will already be under way
    - Services will already exist in the organisation
    - These services will be inconsistent in their implementation and in their governance

    So there are three goals here:

    1. Alignment between the business and IT
    2. Integration of disparate systems
    3. Management of services

    2 and 3 are going to happen; in fact they must happen if any degree of return is expected from the IT department. Ignoring 1 is considered a typical mistake in SOA implementations, as it ignores the business implications. However, the business implication of this approach is the money saved in more efficient IT processes. 2 and 3 are ongoing, and they will continue happening even if a large project to produce a SOA metamodel is started. The result will then be an unstructured cackle of services, and a metamodel that is already going out of date. So we get stuck in and rebuild our services so that they match the metamodel, regardless of the far-reaching consequences this will have on all our current LOB systems. Let's imagine that this actually works (how often do we rip and replace working software because it doesn't fit a certain pattern? Never - that's the point of integration); we will now be working with a metamodel that is out of date, and most likely incomplete if the organisation is large.

    Accepting that an object can have more than one model over time, with perhaps more than one model being valid at any given time, will help us realise the limitations of the top down model. It is entirely normal, and perhaps necessary, for an organisation to be able to view an entity from different perspectives.

    So, instead of trying to constantly force these goals into a straight line, why not let them happen in parallel, and manage the changes in each layer?

    If company A has chosen to model their business processes and create a business architecture, there will be a reason behind this. Often the aim is to make the business more flexible and able to cope with change, through alignment between the business and the IT department. If company B's IT department recognises the problem of wild services springing up everywhere, and decides to do something about it by designing a platform and processes for the introduction of services, is this not a valid approach?

    With the hybrid approach, it is recommended that company A begin deploying services as quickly as possible, based on models that are clearly incomplete and which will therefore change rapidly and often in the near future. Natural business evolution will also mean that the models are guaranteed to change in the not so near future. To 'succeed with SOA', company B needs to go back to the drawing board and start modeling processes and objects. So, in effect, we are telling business analysts to start developing code based on a model they are unsure of, and telling programmers to ignore the obvious and growing problems in their IT department and start drawing lines and boxes.

    Could the problem be that there are two different problem domains? And that the whole concept of SOA, as it is being described by clever salespeople today, creates an example of the oft-dreaded 'tight coupling' between these two domains? Could it be that we have taken two large problem areas and bundled the solution together in order to create a magic bullet? And then convinced ourselves that the bullet actually exists?

    Company A wants a closer relationship between the business and its IT department, in order to become a more flexible organization. Company B wants to decrease the maintenance costs of its IT infrastructure. If both companies focus on succeeding with SOA, then they aren't focusing on their actual goals. If company A starts building services from incomplete models, without a gameplan, they will end up in the same situation as company B, with wild services. If company B focuses on modeling, they could easily end up with the same problems as company A. Now we have two companies who, a short while ago, had one problem each, and now have two problems each. This has happened because of a focus on 'succeeding with SOA' rather than solving the problem at hand.

    This is not to suggest that the two problem domains are unrelated; a strategy that encompasses both will obviously be good for the organization. But only if the organization realizes this and can develop such a strategy. This strategy cannot be bought in a box.

    Anyone who has worked with SOA for a while will be used to analyzing the solutions to a problem and judging the solution's level of coupling. If we have two applications that each perform separate functions, but need to communicate with each other, we create an integration layer between them, perhaps with a service, but we do all we can to reduce the dependency between the two systems. Using the same approach, we can separate the modeling (business architecture) and the service hosting (technical architecture). The business architecture describes the processes and business objects in the business domain. The technical architecture describes the hosting, management and implementation of services.

    The glue that binds these together, the integration layer in our analogy, is the service contract, where the operations map the processes to their technical implementation, and the messages map business concepts to software objects in the implementation. If we reduce the coupling between these layers, we should be able to allow developers to develop services, and business analysts to develop models, without the changes rippling through from one side to the other. This would allow company A to carry on modeling, and company B to develop a service platform, each achieving their intended goal, without necessarily creating the problems seen in pure top down or bottom up approaches. Company B could then at a later date map their service infrastructure to a unified model, and company A could carry on modeling, insulating deployed services from changes in the ongoing modeling.

    How do we do this? The concept of service virtualization has been around for a while, and is instantly realizable in Microsoft's Managed Services Engine. Here we can create a layer of virtual services, which represent the business analyst's view, presenting uniform contracts to the outside world. These services can then transform and route messages to the actual service implementations. I like to think of the virtual services with their beautifully modeled interfaces as 'SOA services', and the implementations as simple integration 'adapter' services providing an interface to a technical implementation. The Managed Services Engine also provides policy based control over services, regardless of where they are deployed, simplifying the handling of security, logging, exception handling, etc.

    This solves a big problem. The pressure to deliver services quickly is always there in projects, and it is very important to quickly show value when implementing service architectures. There is also pressure to deliver quality, and you can't easily do both at the same time. This approach allows quick delivery with quality increasing over time, allowing modeling and service development to occur in parallel and independently of each other. The link between business modeling and service implementation is not one that is obvious to many organizations, and it requires a certain maturity to realize and drive forward. It is also completely possible for a company to benefit from one without the other; even if this approach is frowned upon today, there are many companies doing so and seeing ROI.

    Of course there are disadvantages to this, the biggest one being the transformations necessary between the virtual interfaces and the service implementations. Bad choices when developing the services in the service implementation could mean that it is impossible to map the modeled processes to the implementation without redeveloping the service. In many cases the architect will not have a choice here anyway, as proprietary systems are often delivered with predeveloped services. The alternative is to wait until the model is finished and then build the services according to the model. However, if that approach worked, we wouldn't be having this discussion! And even when it does work, natural business evolution will mean that the two concepts (model and implementation) will immediately start to drift away from each other, so coupling them tightly together, forever bound to the model that only applied at the time of the modeling work, will not really achieve a great deal. Architecture is all about trade-offs, and here a choice has to be made.

    The choice is between something that will initially be of low quality but will work, and something that may well be impossible to achieve in most situations.

    In conclusion, top-down is a natural approach for business analysts, and bottom-up is a natural approach for developers. Instead of trying to force something on both that neither wants, and which has not shown itself to be successful, why not let them get on with their jobs, and let an enterprise architect coordinate the processes?
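
    To make the virtual-service layer concrete, here is a deliberately tiny sketch in Python. It has nothing to do with the Managed Services Engine's actual API - every name in it is invented - it only shows the shape of the idea: the modeled contract stays stable while routing and message transformation hide the implementation.

        # The 'adapter' implementation: a legacy system's native shape.
        def legacy_get_customer(cust_id):
            return {"CUST_NAME": "ACME", "CUST_NO": cust_id}

        # Message transformation: implementation record -> business model.
        def to_business_model(record):
            return {"name": record["CUST_NAME"], "customerId": record["CUST_NO"]}

        class VirtualService:
            def __init__(self):
                self.routes = {}  # operation name -> (implementation, transform)

            def expose(self, operation, impl, transform):
                self.routes[operation] = (impl, transform)

            def invoke(self, operation, *args):
                # Route to the implementation, then reshape the reply.
                impl, transform = self.routes[operation]
                return transform(impl(*args))

        svc = VirtualService()
        svc.expose("GetCustomer", legacy_get_customer, to_business_model)
        print(svc.invoke("GetCustomer", 42))  # {'name': 'ACME', 'customerId': 42}

    Swapping legacy_get_customer for a different implementation, or reshaping to_business_model as the business model evolves, changes nothing for callers of the virtual contract - which is exactly the decoupling argued for above.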

  • Software development is (mostly) a trade, and what to do about it

    - by Jeff
    (This is another cross-post from my personal blog. I don’t even remember when I first started to write it, but I feel like my opinion is well enough baked to share.) I've been sitting on this for a long time, particularly as my opinion has changed dramatically over the last few years. That I've encountered more crappy code than maintainable, quality code in my career as a software developer only reinforces what I'm about to say. Software development is just a trade for most, and not a huge academic endeavor. For those of you with computer science degrees readying your pitchforks and collecting your algorithm interview questions, let me explain. This is not an assault on your way of life, and if you've been around, you know I'm right about the quality problem. You also know the HR problem is very real, or we wouldn't be paying top dollar for mediocre developers and importing people from all over the world to fill the jobs we can't fill. I'm going to try and outline what I see as some of the problems, and hopefully offer my views on how to address them. The recruiting problem I think a lot of companies are doing it wrong. Over the years, I've had two kinds of interview experiences. The first, and right, kind of experience involves talking about real life achievements, followed by some variation on white boarding in pseudo-code, drafting some basic system architecture, or even sitting down at a comprooder and pecking out some basic code to tackle a real problem. I can honestly say that I've had a job offer for every interview like this, save for one, because the task was to debug something and they didn't like me asking where to look ("everyone else in the company died in a plane crash"). The other interview experience, the wrong one, involves the classic torture test designed to make the candidate feel stupid and do things they never have, and never will do in their job. First they will question you about obscure academic material you've never seen, or don't care to remember. Then they'll ask you to white board some ridiculous algorithm involving prime numbers or some kind of string manipulation no one would ever do. In fact, if you had to do something like this, you'd Google for a solution instead of waste time on a solved problem. Some will tell you that the academic gauntlet interview is useful to see how people respond to pressure, how they engage in complex logic, etc. That might be true, unless of course you have someone who brushed up on the solutions to the silly puzzles, and they're playing you. But here's the real reason why the second experience is wrong: You're evaluating for things that aren't the job. These might have been useful tactics when you had to hire people to write machine language or C++, but in a world dominated by managed code in C#, or Java, people aren't managing memory or trying to be smarter than the compilers. They're using well known design patterns and techniques to deliver software. More to the point, these puzzle gauntlets don't evaluate things that really matter. They don't get into code design, issues of loose coupling and testability, knowledge of the basics around HTTP, or anything else that relates to building supportable and maintainable software. The first situation, involving real life problems, gives you an immediate idea of how the candidate will work out. One of my favorite experiences as an interviewee was with a guy who literally brought his work from that day and asked me how to deal with his problem. 
I had to demonstrate how I would design a class, make sure the unit testing coverage was solid, etc. I worked at that company for two years. So stop looking for algorithm puzzle crunchers, because a guy who can crush a Fibonacci sequence might also be a guy who writes a class with 5,000 lines of untestable code. Fashion your interview process on ways to reveal a developer who can write supportable and maintainable code. I would even go so far as to let them use the Google. If they want to cut-and-paste code, pass on them, but if they're looking for context or straight class references, hire them, because they're going to be life-long learners. The contractor problem I doubt anyone has ever worked in a place where contractors weren't used. The use of contractors seems like an obvious way to control costs. You can hire someone for just as long as you need them and then let them go. You can even give them the work that no one else wants to do. In practice, most places I've worked have retained and budgeted for the contractor year-round, meaning that the $90+ per hour they're paying (of which half goes to the person) would have been better spent on a full-time person with a $100k salary and benefits. But it's not even the cost that is an issue. It's the quality of work delivered. The accountability of a contractor is totally transient. They only need to deliver for as long as you keep them around, and chances are they'll never again touch the code. There's no incentive for them to get things right, there's little incentive to understand your system or learn anything. At the risk of making an unfair generalization, craftsmanship doesn't matter to most contractors. The education problem I don't know what they teach in college CS courses. I've believed for most of my adult life that a college degree was an essential part of being successful. Of course I would hold that bias, since I did it, and have the paper to show for it in a box somewhere in the basement. My first clue that maybe this wasn't a fully qualified opinion comes from the fact that I double-majored in journalism and radio/TV, not computer science. Eventually I worked with people who skipped college entirely, many of them at Microsoft. Then I worked with people who had a masters degree who sucked at writing code, next to the high school diploma types that rock it every day. I still think there's a lot to be said for the social development of someone who has the on-campus experience, but for software developers, college might not matter. As I mentioned before, most of us are not writing compilers, and we never will. It's actually surprising to find how many people are self-taught in the art of software development, and that should reveal some interesting truths about how we learn. The first truth is that we learn largely out of necessity. There's something that we want to achieve, so we do what I call just-in-time learning to meet those goals. We acquire knowledge when we need it. So what about the gaps in our knowledge? That's where the most valuable education occurs, via our mentors. They're the people we work next to and the people who write blogs. They are critical to our professional development. They don't need to be an encyclopedia of jargon, but they understand the craft. Even at this stage of my career, I probably can't tell you what SOLID stands for, but you can bet that I practice the principles behind that acronym every day. That comes from experience, augmented by my peers. 
I'm hell bent on passing that experience to others. Process issues If you're a manager type and don't do much in the way of writing code these days (shame on you for not messing around at least), then your job is to isolate your tradespeople from nonsense, while bringing your business into the realm of modern software development. That doesn't mean you slap up a white board with sticky notes and start calling yourself agile, it means getting all of your stakeholders to understand that frequent delivery of quality software is the best way to deal with change and evolving expectations. It also means that you have to play technical overlord to make sure the education and quality issues are dealt with. That's why I make the crack about sticky notes, because without the right technique being practiced among your code monkeys, you're just a guy with sticky notes. You're asking your business to accept frequent and iterative delivery, now make sure that the folks writing the code can handle the same thing. This means unit testing, the right instrumentation, integration tests, automated builds and deployments... all of the stuff that makes it easy to see when change breaks stuff. The prognosis I strongly believe that education is the most important part of what we do. I'm encouraged by things like The Starter League, and it's the kind of thing I'd love to see more of. I would go as far as to say I'd love to start something like this internally at an existing company. Most of all though, I can't emphasize enough how important it is that we mentor each other and share our knowledge. If you have people on your staff who don't want to learn, fire them. Seriously, get rid of them. A few months working with someone really good, who understands the craftsmanship required to build supportable and maintainable code, will change that person forever and increase their value immeasurably.

    Read the article

  • Accessing Oracle 6i and 9i/10g Databases using C#

    - by Mike M
Hi all, I am making two build files using NAnt. The first aims to automatically compile Oracle 6i forms and reports and the second aims to compile Oracle 9i/10g forms and reports. Within the NAnt task is a C# script which prompts the developer for database credentials (username, password, database) in order to compile the forms and reports. I want to then run these credentials against the relevant database to ensure the credentials entered are correct and, if they are not, prompt the user to re-enter their credentials. My script currently looks as follows: class GetInput { public static void ScriptMain(Project project) { Console.Clear(); Console.WriteLine("==================================================================="); Console.WriteLine("Welcome to the Compile and Deploy Oracle Forms and Reports Facility"); Console.WriteLine("==================================================================="); Console.WriteLine(); Console.WriteLine("Please enter the acronym of the project to work on from the following list:"); Console.WriteLine(); Console.WriteLine("--------"); Console.WriteLine("- BCS"); Console.WriteLine("- COPEN"); Console.WriteLine("- FCDD"); Console.WriteLine("--------"); Console.WriteLine(); Console.Write("Selection: "); project.Properties["project.type"] = Console.ReadLine(); Console.WriteLine(); Console.Write("Please enter username: "); string username = Console.ReadLine(); project.Properties["username"] = username; string password = ReturnPassword(); project.Properties["password"] = password; Console.WriteLine(); Console.Write("Please enter database: "); string database = Console.ReadLine(); project.Properties["database"] = database; Console.WriteLine(); //Call method to verify user credentials Console.WriteLine(); Console.WriteLine("Compiling files..."); } public static string ReturnPassword() { Console.Write("Please enter password: "); string password = ""; ConsoleKeyInfo nextKey = Console.ReadKey(true); while (nextKey.Key != ConsoleKey.Enter) { if (nextKey.Key == ConsoleKey.Backspace) { if (password.Length > 0) { password = password.Substring(0, password.Length - 1); Console.Write(nextKey.KeyChar); Console.Write(" "); Console.Write(nextKey.KeyChar); } } else { password += nextKey.KeyChar; Console.Write("*"); } nextKey = Console.ReadKey(true); } return password; } } Having done a bit of research, I find that you can connect to Oracle databases using the System.Data.OracleClient namespace clicky. However, as mentioned in the link, Microsoft is discontinuing support for this so it is not a desirable solution. I have also found that Oracle provides its own classes for connecting to Oracle databases clicky. However, this only seems to support connecting to Oracle 9 or newer databases (clicky) so it is not a feasible solution as I also need to connect to Oracle 6i databases. I could achieve this by calling a bat script from within the C# script, but I would much prefer to have a single build file for simplicity. Ideally, I would like to run a series of commands such as is contained in the following .bat script: rem -- Set Database SID -- set ORACLE_SID=%DBSID% sqlplus -s %nameofuser%/%password%@%dbsid% set cmdsep on set cmdsep '"'; --" set term on set echo off set heading off select '========================================' || CHR(10) || 'Have checked and found both Password and ' || chr(10) || 'Database Identifier are valid, continuing ...' 
|| CHR(10) || '========================================' from dual; exit; This requires me to set the environment variable ORACLE_SID and then run sqlplus in silent mode (-s) followed by a series of SQL set commands (set x), the actual select statement and an exit command. Can I achieve this within a C# script without calling a bat script, or am I forced to call a bat script? Thanks in advance!
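One possible approach (a sketch of my own, not from the original post): instead of a .bat script, shell out to sqlplus itself from the C# script via System.Diagnostics.Process. This reuses whichever sqlplus client is on the PATH, so it should work against both the 6i and 9i/10g environments. The -L flag (attempt the logon only once) and the ORA-01017 invalid-credentials check are assumptions about how a failed logon surfaces:

using System;
using System.Diagnostics;

class CredentialChecker
{
    // Returns true if sqlplus can log on with the supplied credentials.
    // Assumes sqlplus is on the PATH; a bad username/password is expected
    // to surface as a non-zero exit code and/or ORA-01017 in the output.
    public static bool VerifyCredentials(string username, string password, string database)
    {
        ProcessStartInfo psi = new ProcessStartInfo
        {
            FileName = "sqlplus",
            // -L: attempt the logon only once; -S: silent mode
            Arguments = string.Format("-L -S {0}/{1}@{2}", username, password, database),
            RedirectStandardInput = true,
            RedirectStandardOutput = true,
            UseShellExecute = false,
            CreateNoWindow = true
        };

        using (Process p = Process.Start(psi))
        {
            p.StandardInput.WriteLine("exit;");          // leave sqlplus immediately
            string output = p.StandardOutput.ReadToEnd();
            p.WaitForExit();
            return p.ExitCode == 0 && !output.Contains("ORA-01017");
        }
    }
}

If that holds up in testing, the credential prompts in ScriptMain could simply loop until VerifyCredentials returns true, keeping everything in a single build file.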

    Read the article

  • Writing a code example

    - by Stefano Borini
I would like to have your feedback regarding code examples. One of the most frustrating experiences I sometimes have when learning a new technology is finding useless examples. I think of an example as the most precious thing that comes with a new library, language, or technology. It must be a starting point, a wise and unadulterated explanation of how to achieve a given result. A perfect example must have the following characteristics: Self-contained: it should be small enough to be compiled or executed as a single program, without dependencies or complex makefiles. An example is also a strong functional test of whether you correctly installed the new technology. The more issues that could arise, the more likely it is that something goes wrong, and the more difficult it is to debug and solve the situation. Pertinent: it should demonstrate one, and only one, specific feature of your software/library, involving the minimal additional behavior from external libraries. Helpful: the code should bring you forward, step by step, using comments or self-documenting code. Extensible: the example code should be a small “framework” or blueprint for additional tinkering. A learner can start by adding features to this blueprint. Recyclable: it should be possible to extract parts of the example to use in your own code. Easy: example code is not the place to show your code-fu skillz. Keep it easy. A helpful acronym: SPHERE. Prototypical examples of violations of those rules are the following: Violation of self-containedness: an example spanning multiple files without any real need for it. If your example is a Python program, keep everything in a single module file. Don’t sub-modularize it. In Java, try to keep everything in a single class, unless you really must partition some entity into a meaningful object you need to pass around (and Java mandates one public class per file, if I remember correctly). Violation of pertinency: when showing how many different shapes you can draw, adding radio buttons and complex controls with all the possible choices for point shapes is a bad idea. You de-focalize your example code, introducing code for event handling, controls initialization, etc., and this is not part of the feature you want to demonstrate; it is unnecessary noise in the understanding of the crucial mechanisms providing the feature. Violation of helpfulness: code containing dubious naming, wrong comments, hacks, and functions longer than one page of code. Violation of extensibility: badly factored code that has everything in a single function, with potentially swappable entities embedded within the code. Example: if an example reads data from a file and displays it, create a method getData() returning a useful entity, instead of opening the file raw and plotting the stuff. This way, if the user of the library needs to read data from an HTTP server instead, he just has to modify the getData() method and use the example almost as-is. Another violation of extensibility occurs if the example code is not under a fully liberal (e.g. MIT or BSD) license. Violation of recyclability: when the code layout is so intermingled that it is difficult to easily copy and paste parts of it and recycle them into another program. Again, licensing is also a factor. 
Violation of easiness: yes, you are a functional-programming nerd and want to show how cool you are by doing everything on a single line of map, filter and so on, but that may not be helpful to someone else, who is already under pressure to understand your library and now has to understand your code as well. And in general, the final rule: if it takes more than 10 minutes to compile the code, run it, read the source, and understand it fully, the example is not a good one. Please let me know your opinion, either positive or negative, or your experience in this regard.
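To make the extensibility point concrete, here is a minimal sketch of the getData() factoring described above (my own illustration, in C#; the data.txt file name is hypothetical):

using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;

class PlotExample
{
    // The swappable part: fetch the data points. A reader who needs to
    // pull data from an HTTP server instead of a file only rewrites this.
    static IEnumerable<double> GetData()
    {
        return File.ReadLines("data.txt").Select(double.Parse);
    }

    // The part that demonstrates the feature: display the data.
    static void Main()
    {
        foreach (double value in GetData())
            Console.WriteLine(value);
    }
}

Everything else in the example stays untouched when the data source changes, which is exactly the property the Extensible rule asks for.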

    Read the article

  • XSLT templates and recursion

    - by user333411
Hi All, I'm new to XSLT and am having some problems trying to format an XML document which has recursive nodes. Hopefully my XML code below shows that: all <item> nodes are nested within <items>; an item can have either just attributes or sub-nodes; and the level to which <item> nodes are nested can be infinitely deep. My XML code: <?xml version="1.0" encoding="utf-8" ?> <items> <item groupID="1" name="Home" url="//" /> <item groupID="2" name="Guides" url="/Guides/"> <items> <item groupID="26" name="Online-Poker-Guide" url="/Guides/Online-Poker-Guide/"> <items> <item> <id>107</id> <title> <![CDATA[ Poker Betting - Online Poker Betting Structures ]]> </title> <url> <![CDATA[ /Guides/Online-Poker-Guide/online-poker-betting-structures ]]> </url> </item> <item> <id>114</id> <title> <![CDATA[ Beginners&#39; Poker - Poker Hand Ranking ]]> </title> <url> <![CDATA[ /Guides/Online-Poker-Guide/online-poker-hand-ranking ]]> </url> </item> <item> <id>115</id> <title> <![CDATA[ Poker Terms - 4th Street and 5th Street ]]> </title> <url> <![CDATA[ /Guides/Online-Poker-Guide/online-poker-poker-terms ]]> </url> </item> <item> <id>116</id> <title> <![CDATA[ Popular Poker - The Popularity of Texas Hold&#39;em ]]> </title> <url> <![CDATA[ /Guides/Online-Poker-Guide/online-poker-popularity-texas-holdem ]]> </url> </item> <item> <id>364</id> <title> <![CDATA[ The Impact of Traditional Poker on Online Poker (and vice versa) ]]> </title> <url> <![CDATA[ /Guides/Online-Poker-Guide/online-poker-tradional-vs-online ]]> </url> </item> <item> <id>365</id> <title> <![CDATA[ The Ultimate, Absolute Online Poker Scandal ]]> </title> <url> <![CDATA[ /Guides/Online-Poker-Guide/online-poker-scandal ]]> </url> </item> </items> <items> <item groupID="27" name="Beginners-Poker" url="/Guides/Online-Poker-Guide/Beginners-Poker/"> <items> <item> <id>101</id> <title> <![CDATA[ Poker Betting - All-in On the Flop ]]> </title> <url> <![CDATA[ /Guides/Online-Poker-Guide/Beginners-Poker/poker-betting-all-in-on-the-flop ]]> </url> </item> <item> <id>102</id> <title> <![CDATA[ Beginners&#39; Poker - Choosing an Online Poker Room ]]> </title> <url> <![CDATA[ /Guides/Online-Poker-Guide/Beginners-Poker/beginners-poker-choosing-a-room ]]> </url> </item> <item> <id>105</id> <title> <![CDATA[ Beginners&#39; Poker - Choosing What Type of Poker to Play ]]> </title> <url> <![CDATA[ /Guides/Online-Poker-Guide/Beginners-Poker/beginners-poker-choosing-type-to-play ]]> </url> </item> <item> <id>106</id> <title> <![CDATA[ Online Poker - Different Types of Online Poker ]]> </title> <url> <![CDATA[ /Guides/Online-Poker-Guide/Beginners-Poker/online-poker ]]> </url> </item> <item> <id>109</id> <title> <![CDATA[ Online Poker - Opening an Account at an Online Poker Site ]]> </title> <url> <![CDATA[ /Guides/Online-Poker-Guide/Beginners-Poker/online-poker-opening-an-account ]]> </url> </item> <item> <id>111</id> <title> <![CDATA[ Beginners&#39; Poker - Poker Glossary ]]> </title> <url> <![CDATA[ /Guides/Online-Poker-Guide/Beginners-Poker/beginners-poker-glossary ]]> </url> </item> <item> <id>117</id> <title> <![CDATA[ Poker Betting - What is a Blind? ]]> </title> <url> <![CDATA[ /Guides/Online-Poker-Guide/Beginners-Poker/poker-betting-what-is-a-blind ]]> </url> </item> <item> <id>118</id> <title> <![CDATA[ Poker Betting - What is an Ante? ]]> </title> <url> <![CDATA[ /Guides/Online-Poker-Guide/Beginners-Poker/poker-betting-what-is-an-ante ]]> </url> </item> <item> <id>119</id> <title> <![CDATA[ Beginners Poker - What is Bluffing? ]]> </title> <url> <![CDATA[ /Guides/Online-Poker-Guide/Beginners-Poker/online-poker-what-is-bluffing ]]> </url> </item> <item> <id>120</id> <title> <![CDATA[ Poker Games - What is Community Card Poker? ]]> </title> <url> <![CDATA[ /Guides/Online-Poker-Guide/Beginners-Poker/online-poker-what-is-community-card-poker ]]> </url> </item> <item> <id>121</id> <title> <![CDATA[ Online Poker - What is Online Poker? ]]> </title> <url> <![CDATA[ /Guides/Online-Poker-Guide/Beginners-Poker/online-poker-what-is-online-poker ]]> </url> </item> </items> </item> </items> </item> </items> </item> </items> The XSL code: <?xml version="1.0" encoding="ISO-8859-1"?> <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform"> <xsl:output method="html" indent="yes"/> <xsl:template name="loop"> <xsl:for-each select="items/item"> <ul> <li><xsl:value-of select="@name" /></li> <xsl:if test="@name and child::node()"> <ul> <xsl:for-each select="items/item"> <li><xsl:value-of select="@name" />test</li> </xsl:for-each> </ul> <xsl:call-template name="loop" /> </xsl:if> <xsl:if test="child::node() and not(@name)"> <xsl:for-each select="/items"> <li><xsl:value-of select="id" /></li> </xsl:for-each> </xsl:if> </ul> </xsl:for-each> <xsl:for-each select="item/items/item"> <li>hi</li> </xsl:for-each> </xsl:template> <xsl:template match="/" name="test"> <xsl:call-template name="loop" /> </xsl:template> </xsl:stylesheet> I'm trying to write the XSL so that every <items> node will render a <ul> and every <item> node will render an <li>. The XSL needs to be recursive because I can't tell how deep the nested nodes will go. Can anyone help? Regards, Al

    Read the article

  • .NET 4: &ldquo;Slim&rdquo;-style performance boost!

    - by Vitus
RTM versions of .NET 4 and Visual Studio 2010 are available, and now we can do some tests with them. Parallel Extensions is one of the most valuable parts of .NET 4.0. It’s a set of good tools for easily consuming multicore hardware power. It also contains some “upgraded” sync primitives – the Slim versions. For example, it includes an updated variant of the widely known ManualResetEvent. For people who don’t know about it: you can synchronize concurrent execution of pieces of code with this sync primitive. An instance of ManualResetEvent can be in two states: signaled and non-signaled. Transitions between them are made by calling the Set() and Reset() methods. A short explanation:

Thread 1                      Thread 2            Time
mre.Reset(); mre.WaitOne();   //code execution    0
//waiting                     //code execution    1
//waiting                     //code execution    2
//waiting                     //code execution    3
//waiting                     mre.Set();          4
//code execution              //…                 5

The upgraded version of this primitive is ManualResetEventSlim. The idea is to decrease the performance cost in the case when only one thread uses it. The main concept is the “hybrid sync schema”, which can be implemented as follows:   internal sealed class SimpleHybridLock : IDisposable { private Int32 m_waiters = 0; private AutoResetEvent m_waiterLock = new AutoResetEvent(false);   public void Enter() { if (Interlocked.Increment(ref m_waiters) == 1) return; m_waiterLock.WaitOne(); }   public void Leave() { if (Interlocked.Decrement(ref m_waiters) == 0) return; m_waiterLock.Set(); }   public void Dispose() { m_waiterLock.Dispose(); } } It’s a sample from Jeffrey Richter’s book “CLR via C#”, 3rd edition. The SimpleHybridLock primitive has two public methods: Enter() and Leave(). You can put your concurrency-critical code between calls to these methods, and it will be executed by only one thread at a time. The code is really simple: the first thread to call Enter() increments the counter and returns. A second thread also increments the counter and then suspends until m_waiterLock is signaled. So, if there is no concurrent access to our lock, the “heavy” methods WaitOne() and Set() are never called. This can give a performance bonus. ManualResetEventSlim uses a similar idea. Of course, it has more “smart” techniques inside, like checking for recursive calls, and so on. I want to know the real difference between the classic ManualResetEvent implementation and the new Slim one. 
I wrote a simple “benchmark”: class Program { static void Main(string[] args) { ManualResetEventSlim mres = new ManualResetEventSlim(false); ManualResetEventSlim mres2 = new ManualResetEventSlim(false);   ManualResetEvent mre = new ManualResetEvent(false);   long total = 0; int COUNT = 50;   for (int i = 0; i < COUNT; i++) { mres2.Reset(); Stopwatch sw = Stopwatch.StartNew();   ThreadPool.QueueUserWorkItem((obj) => { //Method(mres, true); Method2(mre, true); mres2.Set(); }); //Method(mres, false); Method2(mre, false);   mres2.Wait(); sw.Stop();   Console.WriteLine("Pass {0}: {1} ms", i, sw.ElapsedMilliseconds); total += sw.ElapsedMilliseconds; }   Console.WriteLine(); Console.WriteLine("==============================="); Console.WriteLine("Done in average=" + total / (double)COUNT); Console.ReadLine(); }   private static void Method(ManualResetEventSlim mre, bool value) { for (int i = 0; i < 9000000; i++) { if (value) { mre.Set(); } else { mre.Reset(); } } }   private static void Method2(ManualResetEvent mre, bool value) { for (int i = 0; i < 9000000; i++) { if (value) { mre.Set(); } else { mre.Reset(); } } } } I use two concurrent threads (the main thread and one from the thread pool) to set and reset the events, run the test COUNT times, and calculate the average execution time. Here are the results, measured on my dual-core notebook with a T7250 CPU and Windows 7 x64 (the per-pass timing screenshots for ManualResetEvent and ManualResetEventSlim are omitted here): the difference is obvious and serious – a factor of 10! So I think the preferable way is to use ManualResetEventSlim, because calling Set() and Reset() will not always invoke the “heavy” methods that work with Windows kernel-mode objects. It’s a small and nice improvement! ;)
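A side note of mine, not from the post: ManualResetEventSlim also exposes the hybrid behavior directly through its constructor's spin count, i.e. how many spin iterations to attempt before falling back to a kernel-mode wait. A minimal sketch:

using System.Threading;

class SpinCountExample
{
    static void Main()
    {
        // Spin up to 100 iterations before waiting on the kernel event;
        // a spin count of 0 falls back to the kernel object immediately.
        ManualResetEventSlim slim = new ManualResetEventSlim(false, 100);

        ThreadPool.QueueUserWorkItem(_ => slim.Set());
        slim.Wait();   // cheap if Set() arrives during the spin phase
        slim.Dispose();
    }
}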

    Read the article

  • Explaining Explain Plan Notes for Auto DOP

    - by jean-pierre.dijcks
    I've recently gotten some questions around "why do I not see a parallel plan" while Auto DOP is on (I think)...? It is probably worthwhile to quickly go over some of the ways to find out what Auto DOP was thinking. In general, there is no need to go tracing sessions and look under the hood. The thing to start with is to do an explain plan on your statement and to look at the parameter settings on the system. Parameter Settings to Look At First and foremost, make sure that parallel_degree_policy = AUTO. If you have that parameter set to LIMITED you will not have queuing and we will only do the auto magic if your objects are set to default parallel (so no degree specified). Next you want to look at the value of parallel_degree_limit. It is typically set to CPU, which in default settings equates to the Default DOP of the system. If you are testing Auto DOP itself and the impact it has on performance you may want to leave it at this CPU setting. If you are running concurrent statements you may want to give this some more thoughts. See here for more information. In general, do stick with either CPU or with a specific number. For now avoid the IO setting as I've seen some mixed results with that... In 11.2.0.2 you should also check that IO Calibrate has been run. Best to simply do a: SQL> select * from V$IO_CALIBRATION_STATUS; STATUS        CALIBRATION_TIME ------------- ---------------------------------------------------------------- READY         04-JAN-11 10.04.13.104 AM You should see that your IO Calibrate is READY and therefore Auto DOP is ready. In any case, if you did not run the IO Calibrate step you will get the following note in the explain plan: Note -----    - automatic DOP: skipped because of IO calibrate statistics are missing One more note on calibrate_io, if you do not have asynchronous IO enabled you will see:  ERROR at line 1: ORA-56708: Could not find any datafiles with asynchronous i/o capability ORA-06512: at "SYS.DBMS_RMIN", line 463 ORA-06512: at "SYS.DBMS_RESOURCE_MANAGER", line 1296 ORA-06512: at line 7 While this is changed in some fixes to the calibrate procedure, you should really consider switching asynchronous IO on for your data warehouse. Explain Plan Explanation To see the notes that are shown and explained here (and the above little snippet ) you can use a simple explain plan mechanism. There should  be no need to add +parallel etc. explain plan for <statement> SELECT PLAN_TABLE_OUTPUT FROM TABLE(DBMS_XPLAN.DISPLAY()); Auto DOP The note structure displaying why Auto DOP did not work (with the exception noted above on IO Calibrate) is like this: Automatic degree of parallelism is disabled: <reason> These are the reason codes: Parameter -  parallel_degree_policy = manual which will not allow Auto DOP to kick in  Hint - One of the following hints are used NOPARALLEL, PARALLEL(1), PARALLEL(MANUAL) Outline - A SQL outline of an older version (before 11.2) is used SQL property restriction - The statement type does not allow for parallel processing Rule-based mode - Instead of the Cost Based Optimizer the system is using the RBO Recursive SQL statement - The statement type does not allow for parallel processing pq disabled/pdml disabled/pddl disabled - For some reason (alter session?) parallelism is disabled Limited mode but no parallel objects referenced - your parallel_degree_policy = LIMITED and no objects in the statement are decorated with the default PARALLEL degree. 
In most cases all objects have a specific degree, in which case Auto DOP will honor that degree. Parallel Degree Limited When Auto DOP does its work, you may see the cap you imposed with parallel_degree_limit showing up in the note section of the explain plan: Note -----    - automatic DOP: Computed Degree of Parallelism is 16 because of degree limit This is an obvious indication that you are being capped for this statement. There is one quite interesting case that happens when you are being capped at DOP = 1. First off, you get a serial plan, and the note changes slightly in that it does not indicate it is being capped (we hope to update the note at some point in time to be more specific). It right now looks like this: Note -----    - automatic DOP: Computed Degree of Parallelism is 1 Dynamic Sampling With 11.2.0.2 you will start seeing another interesting change in parallel plans, and since we are talking about the note section here, I figured we'd throw this in for good measure. If we deem the parallel (!) statement complex enough, we will enact dynamic sampling on your query. This happens as long as you did not change the default for dynamic sampling on the system. The note looks like this: Note ----- - dynamic sampling used for this statement (level=5)

    Read the article

  • Goto for the Java Programming Language

    - by darcy
Work on JDK 8 is well underway, but we thought this late-breaking JEP for another language change for the platform couldn't wait another day before being published. Title: Goto for the Java Programming Language Author: Joseph D. Darcy Organization: Oracle. Created: 2012/04/01 Type: Feature State: Funded Exposure: Open Component: core/lang Scope: SE JSR: 901 MR Discussion: compiler dash dev at openjdk dot java dot net Start: 2012/Q2 Effort: XS Duration: S Template: 1.0 Reviewed-by: Duke Endorsed-by: Edsger Dijkstra Funded-by: Blue Sun Corporation Summary Provide the benefits of the time-tested goto control structure to Java programs. The Java language has a history of adding new control structures over time: the assert statement in 1.4, the enhanced for-loop in 1.5, and try-with-resources in 7. Having support for goto is long overdue and simple to implement since the JVM already has goto instructions. Success Metrics The goto statement will allow inefficient and verbose recursive algorithms and explicit loops to be replaced with more compact code. The effort will be a success if at least twenty-five percent of the JDK's explicit loops are replaced with gotos. Coordination with IDE vendors is expected to help facilitate this goal. Motivation The goto construct offers numerous benefits to the Java platform, from increased expressiveness, to more compact code, to providing new programming paradigms to appeal to a broader demographic. In JDK 8, there is a renewed focus on using the Java platform on embedded devices with more modest resources than desktop or server environments. In such contexts, static and dynamic memory footprint is a concern. One significant component of footprint is the code attribute of class files, and certain classes of important algorithms can be expressed more compactly using goto than using other constructs, saving footprint. For example, to implement state machines recursively, some parties have asked for the JVM to support tail calls, that is, to perform a complex transformation with security implications to turn a method call into a goto. Such complicated machinery should not be assumed for an embedded context. A better solution is just to expose to the programmer the desired functionality, goto. The web has familiarized users with a model of traversing links among different HTML pages in a free-form fashion with some state being maintained on the side, such as login credentials, to effect behavior. This is exactly the programming model of goto and labels. While in the past this has been derided as leading to "spaghetti code," spaghetti is a tasty and nutritious meal for programmers, unlike quiche. The invokedynamic instruction added by JSR 292 exposes the JVM's linkage operation to programmers. This is a low-level operation that can be leveraged by sophisticated programmers. Likewise, goto is also a low-level operation that should not be hidden from programmers who can use more efficient idioms. Some may object that goto was consciously excluded from the original design of Java as one of the removed features from C and C++. However, the designers of the Java programming language have revisited these removals before. The enum construct was also left out, only to be added in JDK 5, and multiple inheritance was left out, only to be added back by the virtual extension methods of Project Lambda. 
As a living language, the needs of the growing Java community today should be used to judge what features are needed in the platform tomorrow; the language should not be forever bound by the decisions of the past. Description From its initial version, the JVM has had two instructions for unconditional transfer of control within a method, goto (0xa7) and goto_w (0xc8). The goto_w instruction is used for larger jumps. All versions of the Java language have supported labeled statements; however, only the break and continue statements were able to specify a particular label as a target, with the onerous restriction that the label must be lexically enclosing. The grammar addition for the goto statement is: GotoStatement: goto Identifier ; The new goto statement is similar to break except that the target label can be anywhere inside the method and the identifier is mandatory. The compiler simply translates the goto statement into one of the JVM goto instructions targeting the right offset in the method. Therefore, adding the goto statement to the platform is only a small effort since existing compiler and JVM functionality is reused. Other language changes to support goto include obvious updates to definite assignment analysis, reachability analysis, and exception analysis. Possible future extensions include a computed goto as found in gcc, which would replace the identifier in the goto statement with an expression having the type of a label. Testing Since goto will be implemented using largely existing facilities, only light levels of testing are needed. Impact Compatibility: Since goto is already a keyword, there are no source compatibility implications. Performance/scalability: Performance will improve with more compact code. JVMs already need to handle irreducible flow graphs since goto is a VM instruction.

    Read the article

  • Design Pattern for Complex Data Modeling

    - by Aaron Hayman
I'm developing a program that has a SQL database as a backing store. As a very broad description, the program itself allows a user to generate records in any number of user-defined tables and make connections between them. As for specs: Any record generated must be able to be connected to any other record in any other user table (excluding itself...the record, not the table). These "connections" are directional, and the list of connections a record has is user-ordered. Moreover, a record must "know" of connections made from it to others as well as connections made to it from others. The connections are kind of the point of this program, so there is a strong possibility that the number of connections made is very high, especially if the user is using the software as intended. A record's fields can also include aggregate information from its connections (like obtaining average, sum, etc.) that must be updated on change from another record it's connected to. To conserve memory, only relevant information must be loaded at any one time (can't load the entire database into memory at load and go from there). I cannot assume the backing store is local. Right now it is, but eventually this program will include syncing to a remote db. Neither the user tables, connections, nor records are known at design time, as they are user-generated. I've spent a lot of time trying to figure out how to design the backing store and the object model to best fit these specs. In my first design attempt on this, I had one object managing all a table's records and connections. I attempted this first because it kept the memory footprint smaller (records and connections were simple dicts), but maintaining aggregate and link information between tables became....onerous (ie...a huge spaghettified mess). Tracing dependencies using this method almost became impossible. Instead, I've settled on a distributed graph model where each record and connection is 'aware' of what's around it by managing its own data and connections to other records. Doing this increases my memory footprint but also lets me create a faulting system so connections/records aren't loaded into memory until they're needed. It's also much easier to code: trace dependencies, eliminate cyclic recursive updates, etc. My biggest problem is storing/loading the connections. I'm not happy with any of my current solutions/ideas so I wanted to ask and see if anybody else has any ideas of how this should be structured. Connections are fairly simple. They contain: fromRecordID, fromTableID, fromRecordOrder, toRecordID, toTableID, toRecordOrder. Here's what I've come up with so far: Store all the connections in one big table. If I do this, either I load all connections at once (one big db call) or make a call every time a user table is loaded. The big issue here: the size of the connections table has the potential to be huge, and I'm afraid it would slow things down. Store in separate tables all the outgoing connections for each user table. This is probably the worst idea I've had. Now my connections are 'spread out' over multiple tables (one for each user table), which means I have to make a separate DB call to each table (or make a huge join) just to find all the incoming connections for a particular user table. I've avoided making "one big ass table", but I'm not sure the cost is worth it. Store in separate tables all outgoing AND incoming connections for each user table (using a flag to distinguish between incoming vs outgoing). 
This is the idea I'm leaning towards, but it will essentially double the total DB storage for all the connections (as each connection will be stored in two tables). It also means I have to make sure connection information is kept in sync in both places. This is obviously not ideal, but it does mean that when I load a user table, I only need to load one 'connection' table and have all the information I need. This also presents a separate problem, that of connection object creation. Since each user table has a list of all connections, there are two opportunities for a connection object to be made. However, connection objects (designed to facilitate communication between records) should only be created once. This means I'll have to devise a common caching/factory object to make sure only one connection object is made per connection. Does anybody have any ideas of a better way to do this? Once I've committed to a particular design pattern I'm pretty much stuck with it, so I want to make sure I've come up with the best one possible.
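On the caching/factory point specifically, one common shape for it (a sketch of mine, hedged — the ConnectionKey and Connection types are hypothetical stand-ins) is a weak-reference cache keyed by the connection's identity, which guarantees at most one live object per stored connection while still letting unused ones be collected:

using System;
using System.Collections.Generic;

// Identity of a stored connection: which record points at which.
struct ConnectionKey : IEquatable<ConnectionKey>
{
    public int FromTableId, FromRecordId, ToTableId, ToRecordId;

    public bool Equals(ConnectionKey other)
    {
        return FromTableId == other.FromTableId && FromRecordId == other.FromRecordId
            && ToTableId == other.ToTableId && ToRecordId == other.ToRecordId;
    }

    public override int GetHashCode()
    {
        return Tuple.Create(FromTableId, FromRecordId, ToTableId, ToRecordId).GetHashCode();
    }
}

class Connection { /* the record-to-record link object, elided */ }

class ConnectionFactory
{
    // Weak references let unused connection objects be collected,
    // while still guaranteeing uniqueness for live ones.
    private readonly Dictionary<ConnectionKey, WeakReference<Connection>> cache =
        new Dictionary<ConnectionKey, WeakReference<Connection>>();

    public Connection Get(ConnectionKey key, Func<Connection> load)
    {
        WeakReference<Connection> weak;
        Connection conn;
        if (cache.TryGetValue(key, out weak) && weak.TryGetTarget(out conn))
            return conn;                          // one live object per connection
        conn = load();                            // e.g. materialize from the DB row
        cache[key] = new WeakReference<Connection>(conn);
        return conn;
    }
}

Both the outgoing and the incoming connection lists would then resolve their entries through the same factory, so the two storage tables can never produce two objects for one connection.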

    Read the article

  • When is my View too smart?

    - by Kyle Burns
    In this posting, I will discuss the motivation behind keeping View code as thin as possible when using patterns such as MVC, MVVM, and MVP.  Once the motivation is identified, I will examine some ways to determine whether a View contains logic that belongs in another part of the application.  While the concepts that I will discuss are applicable to most any pattern which favors a thin View, any concrete examples that I present will center on ASP.NET MVC. Design patterns that include a Model, a View, and other components such as a Controller, ViewModel, or Presenter are not new to application development.  These patterns have, in fact, been around since the early days of building applications with graphical interfaces.  The reason that these patterns emerged is simple – the code running closest to the user tends to be littered with logic and library calls that center around implementation details of showing and manipulating user interface widgets and when this type of code is interspersed with application domain logic it becomes difficult to understand and much more difficult to adequately test.  By removing domain logic from the View, we ensure that the View has a single responsibility of drawing the screen which, in turn, makes our application easier to understand and maintain. I was recently asked to take a look at an ASP.NET MVC View because the developer reviewing it thought that it possibly had too much going on in the view.  I looked at the .CSHTML file and the first thing that occurred to me was that it began with 40 lines of code declaring member variables and performing the necessary calculations to populate these variables, which were later either output directly to the page or used to control some conditional rendering action (such as adding a class name to an HTML element or not rendering another element at all).  This exhibited both of what I consider the primary heuristics (or code smells) indicating that the View is too smart: Member variables – in general, variables in View code are an indication that the Model to which the View is being bound is not sufficient for the needs of the View and that the View has had to augment that Model.  Notable exceptions to this guideline include variables used to hold information specifically related to rendering (such as a dynamically determined CSS class name or the depth within a recursive structure for indentation purposes) and variables which are used to facilitate looping through collections while binding. Arithmetic – as with member variables, the presence of arithmetic operators within View code are an indication that the Model servicing the View is insufficient for its needs.  For example, if the Model represents a line item in a sales order, it might seem perfectly natural to “normalize” the Model by storing the quantity and unit price in the Model and multiply these within the View to show the line total.  While this does seem natural, it introduces a business rule to the View code and makes it impossible to test that the rounding of the result meets the requirement of the business without executing the View.  Within View code, arithmetic should only be used for activities such as incrementing loop counters and calculating element widths. In addition to the two characteristics of a “Smart View” that I’ve discussed already, this View also exhibited another heuristic that commonly indicates to me the need to refactor a View and make it a bit less smart.  
That characteristic is the existence of Boolean logic that either does not work directly with properties of the Model or works with too many properties of the Model.  Consider the following code and consider how logic that does not work directly with properties of the Model is just another form of the “member variable” heuristic covered earlier: @if(DateTime.Now.Hour < 12) {     <div>Good Morning!</div> } else {     <div>Greetings</div> } This code performs business logic to determine whether it is morning.  A possible refactoring would be to add an IsMorning property to the Model, but in this particular case there is enough similarity between the branches that the entire branching structure could be collapsed by adding a Greeting property to the Model and using it similarly to the following: <div>@Model.Greeting</div> Now let’s look at some complex logic around multiple Model properties: @if (Model.PageNumber + Model.NumbersToDisplay == Model.PageCount         || (Model.PageCount != Model.CurrentPage             && !Model.DisplayValues.Contains(Model.PageCount))) {     <div>There's more to see!</div> } In this scenario, not only is the View code difficult to read (you shouldn’t have to play “human compiler” to determine the purpose of the code), but it is also complex enough to be at risk for logical errors that cannot be detected without executing the View.  Conditional logic that requires more than a single logical operator should be looked at more closely to determine whether the condition should be evaluated elsewhere and exposed as a single property of the Model.  Moving the logic above outside of the View and exposing a new Model property would simplify the View code to: @if(Model.HasMoreToSee) {     <div>There’s more to see!</div> } In this posting I have briefly discussed some of the more prominent heuristics that indicate a need to push code from the View into other pieces of the application.  You should now be able to recognize these symptoms when building or maintaining Views (or the Models that support them) in your applications.
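As a companion to that refactoring, the evaluated-elsewhere logic typically lands on the Model itself. A sketch of mine follows (property names borrowed from the post's example, the class name is hypothetical); the point is that every branch can now be unit tested without ever rendering the View:

using System;
using System.Linq;

public class PagedListViewModel
{
    public int PageNumber { get; set; }
    public int NumbersToDisplay { get; set; }
    public int PageCount { get; set; }
    public int CurrentPage { get; set; }
    public int[] DisplayValues { get; set; }

    // The multi-operator condition from the View, now in one named, testable place.
    public bool HasMoreToSee
    {
        get
        {
            return PageNumber + NumbersToDisplay == PageCount
                || (PageCount != CurrentPage && !DisplayValues.Contains(PageCount));
        }
    }

    // The time-of-day branch from the first example.
    public string Greeting
    {
        get { return DateTime.Now.Hour < 12 ? "Good Morning!" : "Greetings"; }
    }
}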

    Read the article

  • Can I read an Outlook (2003/2007) PST file in C#?

    - by Andy May
Is it possible to read a .PST file using C#? I would like to do this as a standalone application, not as an Outlook addin (if that is possible). I have seen other SO questions similar to this mention MailNavigator but I am looking to do this programmatically in C#. I have looked at the Microsoft.Office.Interop.Outlook namespace but that appears to be just for Outlook addins. LibPST appears to be able to read PST files, but this is in C (sorry Joel, I didn't learn C before graduating). Any help would be greatly appreciated, thanks! EDIT: Thank you all for the responses! I accepted Matthew Ruston's response as the answer because it ultimately led me to the code I was looking for. Here is a simple example of what I got to work (you will need to add a reference to Microsoft.Office.Interop.Outlook): using System; using System.Collections.Generic; using Microsoft.Office.Interop.Outlook; namespace PSTReader { class Program { static void Main () { try { IEnumerable<MailItem> mailItems = readPst(@"C:\temp\PST\Test.pst", "Test PST"); foreach (MailItem mailItem in mailItems) { Console.WriteLine(mailItem.SenderName + " - " + mailItem.Subject); } } catch (System.Exception ex) { Console.WriteLine(ex.Message); } Console.ReadLine(); } private static IEnumerable<MailItem> readPst(string pstFilePath, string pstName) { List<MailItem> mailItems = new List<MailItem>(); Application app = new Application(); NameSpace outlookNs = app.GetNamespace("MAPI"); // Add PST file (Outlook Data File) to Default Profile outlookNs.AddStore(pstFilePath); MAPIFolder rootFolder = outlookNs.Stores[pstName].GetRootFolder(); // Traverse through all folders in the PST file // TODO: This is not recursive, refactor Folders subFolders = rootFolder.Folders; foreach (Folder folder in subFolders) { Items items = folder.Items; foreach (object item in items) { if (item is MailItem) { MailItem mailItem = item as MailItem; mailItems.Add(mailItem); } } } // Remove PST file from Default Profile outlookNs.RemoveStore(rootFolder); return mailItems; } } } Note: This code assumes that Outlook is installed and already configured for the current user. It uses the Default Profile (you can edit the default profile by going to Mail in the Control Panel). One major improvement on this code would be to create a temporary profile to use instead of the Default, then destroy it once completed.
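For the TODO about recursion, a small helper along these lines (a sketch of mine, using the same interop types as the code above) would pick up mail in nested folders as well:

// Recursively collects MailItems from a folder and all of its subfolders.
private static void CollectMailItems(Folder folder, List<MailItem> mailItems)
{
    foreach (object item in folder.Items)
    {
        MailItem mailItem = item as MailItem;
        if (mailItem != null)
        {
            mailItems.Add(mailItem);
        }
    }
    foreach (Folder subFolder in folder.Folders)
    {
        CollectMailItems(subFolder, mailItems);   // recurse into nested folders
    }
}

In readPst, the body of the loop over subFolders would then shrink to a single call, CollectMailItems(folder, mailItems), so folders nested at any depth contribute their mail.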

    Read the article

  • SUPER CSV write bean to CSV.

    - by ButtersB
    Here is my class, public class FreebasePeopleResults { public String intendedSearch; public String weight; public Double heightMeters; public Integer age; public String type; public String parents; public String profession; public String alias; public String children; public String siblings; public String spouse; public String degree; public String institution; public String wikipediaId; public String guid; public String id; public String gender; public String name; public String ethnicity; public String articleText; public String dob; public String getWeight() { return weight; } public void setWeight(String weight) { this.weight = weight; } public Double getHeightMeters() { return heightMeters; } public void setHeightMeters(Double heightMeters) { this.heightMeters = heightMeters; } public String getParents() { return parents; } public void setParents(String parents) { this.parents = parents; } public Integer getAge() { return age; } public void setAge(Integer age) { this.age = age; } public String getProfession() { return profession; } public void setProfession(String profession) { this.profession = profession; } public String getAlias() { return alias; } public void setAlias(String alias) { this.alias = alias; } public String getChildren() { return children; } public void setChildren(String children) { this.children = children; } public String getSpouse() { return spouse; } public void setSpouse(String spouse) { this.spouse = spouse; } public String getDegree() { return degree; } public void setDegree(String degree) { this.degree = degree; } public String getInstitution() { return institution; } public void setInstitution(String institution) { this.institution = institution; } public String getWikipediaId() { return wikipediaId; } public void setWikipediaId(String wikipediaId) { this.wikipediaId = wikipediaId; } public String getGuid() { return guid; } public void setGuid(String guid) { this.guid = guid; } public String getId() { return id; } public void setId(String id) { this.id = id; } public String getGender() { return gender; } public void setGender(String gender) { this.gender = gender; } public String getName() { return name; } public void setName(String name) { this.name = name; } public String getEthnicity() { return ethnicity; } public void setEthnicity(String ethnicity) { this.ethnicity = ethnicity; } public String getArticleText() { return articleText; } public void setArticleText(String articleText) { this.articleText = articleText; } public String getDob() { return dob; } public void setDob(String dob) { this.dob = dob; } public String getType() { return type; } public void setType(String type) { this.type = type; } public String getSiblings() { return siblings; } public void setSiblings(String siblings) { this.siblings = siblings; } public String getIntendedSearch() { return intendedSearch; } public void setIntendedSearch(String intendedSearch) { this.intendedSearch = intendedSearch; } } Here is my CSV writer method import java.io.FileWriter; import java.io.IOException; import java.util.ArrayList; import org.supercsv.io.CsvBeanWriter; import org.supercsv.prefs.CsvPreference; public class CSVUtils { public static void writeCSVFromList(ArrayList<FreebasePeopleResults> people, boolean writeHeader) throws IOException{ //String[] header = new String []{"title","acronym","globalId","interfaceId","developer","description","publisher","genre","subGenre","platform","esrb","reviewScore","releaseDate","price","cheatArticleId"}; FileWriter file = new 
FileWriter("/brian/brian/Documents/people-freebase.csv", true); // write the partial data CsvBeanWriter writer = new CsvBeanWriter(file, CsvPreference.EXCEL_PREFERENCE); for(FreebasePeopleResults person:people){ writer.write(person); } writer.close(); // show output } } I keep getting output errors. Here is the error: There is no content to write for line 2 context: Line: 2 Column: 0 Raw line: null Now, I know it is now totally null, so I am confused.

    Read the article

  • Convert IEnumerable to EntitySet

    - by Gregorius
Hey all, hoping somebody can shed some light, and perhaps a possible solution to this issue I'm having... I have used LINQ to SQL to pull some data from a database into local entities. They are products from a shopping cart system. A product can contain a collection of KitGroups (which are stored in an EntitySet (System.Data.Linq.EntitySet)). KitGroups contain collections of KitItems, and KitItems can contain Nested Products (which link back up to the original Product type - so it's recursive). From these entities I'm building XML using LINQ to XML - all good here - my XML looks beautiful, calling a "GenerateProductElement" function, which calls itself recursively to generate the nested products. Wonderful stuff. However, here's where I'm stuck... I'm now trying to deserialize that XML back to the original objects (all autogenerated by LINQ to SQL)... and herein lies the problem. LINQ to SQL expects my collections to be EntitySet collections, however LINQ to XML (which I'm trying to use to deserialise) is returning IEnumerable. I've experimented with a few ways of casting between the two, but nothing seems to work... I'm starting to think that I should just deserialise manually (with some funky loops and conditionals to determine which KitGroup KitItems belong to, etc)... however it's really quite tricky and that code is likely to be quite ugly, so I'd love to find a more elegant solution to this problem. Any suggestions? Here's a code snippet: private Product GenerateProductFromXML(XDocument inDoc) { var prod = from p in inDoc.Descendants("Product") select new Product { ProductID = (int)p.Attribute("ID"), ProductGUID = (Guid)p.Attribute("GUID"), Name = (string)p.Element("Name"), Summary = (string)p.Element("Summary"), Description = (string)p.Element("Description"), SEName = (string)p.Element("SEName"), SETitle = (string)p.Element("SETitle"), XmlPackage = (string)p.Element("XmlPackage"), IsAKit = (byte)(int)p.Element("IsAKit"), ExtensionData = (string)p.Element("ExtensionData"), }; //TODO: UUGGGGGGG Converting b/w IEnumerable & EntitySet var kitGroups = (from kg in inDoc.Descendants("KitGroups").Elements("KitGroup") select new KitGroup { KitGroupID = (int) kg.Attribute("ID"), KitGroupGUID = (Guid) kg.Attribute("GUID"), Name = (string) kg.Element("Name"), KitItems = // THIS IS WHERE IT FAILS - "Cannot convert source type IEnumerable to target type EntitySet..." (from ki in kg.Descendants("KitItems").Elements("KitItem") select new KitItem { KitItemID = (int) ki.Attribute("ID"), KitItemGUID = (Guid) ki.Attribute("GUID") }); }); Product ImportedProduct = prod.First(); ImportedProduct.KitGroups = new EntitySet<KitGroup>(); ImportedProduct.KitGroups.AddRange(kitGroups); return ImportedProduct; }
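One possible way out (a hedged sketch of mine): since EntitySet<T> is a concrete collection rather than something an IEnumerable<T> can be cast to, materialize the query results into one — for instance with a small extension method:

using System.Collections.Generic;
using System.Data.Linq;

public static class EntitySetExtensions
{
    // EntitySet<T> cannot be cast from the IEnumerable<T> that a
    // LINQ to XML query produces; the items have to be copied into one.
    public static EntitySet<T> ToEntitySet<T>(this IEnumerable<T> source)
        where T : class
    {
        EntitySet<T> set = new EntitySet<T>();
        set.AddRange(source);
        return set;
    }
}

With that in place, the failing assignment becomes KitItems = (from ki in ... select new KitItem { ... }).ToEntitySet(), and the same helper replaces the explicit new EntitySet<KitGroup>() / AddRange pair at the bottom of GenerateProductFromXML.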

    Read the article

  • Sort latitude and longitude coordinates into clockwise ordered quadrilateral

    - by Dave Jarvis
    Problem Users can provide up to four latitude and longitude coordinates, in any order. They do so with Google Maps. Using Google's Polygon API (v3), the coordinates they select should highlight the selected area between the four coordinates. Solutions and Searches http://www.geocodezip.com/map-markers_ConvexHull_Polygon.asp http://softsurfer.com/Archive/algorithm_0103/algorithm_0103.htm http://stackoverflow.com/questions/2374708/how-to-sort-points-in-a-google-maps-polygon-so-that-lines-do-not-cross http://stackoverflow.com/questions/242404/sort-four-points-in-clockwise-order http://en.literateprograms.org/Quickhull_%28Javascript%29 Graham's scan seems too complicated for four coordinates Sort the coordinates into two arrays (one by latitude, the other longitude) ... then? Jarvis March algorithm? Question How do you sort the coordinates in (counter-)clockwise order, using JavaScript? Code Here is what I have so far: // Ensures the markers are sorted: NW, NE, SE, SW function sortMarkers() { var ns = markers.slice( 0 ); var ew = markers.slice( 0 ); ew.sort( function( a, b ) { if( a.position.lat() < b.position.lat() ) { return -1; } else if( a.position.lat() > b.position.lat() ) { return 1; } return 0; }); ns.sort( function( a, b ) { if( a.position.lng() < b.position.lng() ) { return -1; } else if( a.position.lng() > b.position.lng() ) { return 1; } return 0; }); var nw; var ne; var se; var sw; if( ew.indexOf( ns[0] ) > 1 ) { nw = ns[0]; } else { ne = ns[0]; } if( ew.indexOf( ns[1] ) > 1 ) { nw = ns[1]; } else { ne = ns[1]; } if( ew.indexOf( ns[2] ) > 1 ) { sw = ns[2]; } else { se = ns[2]; } if( ew.indexOf( ns[3] ) > 1 ) { sw = ns[3]; } else { se = ns[3]; } markers[0] = nw; markers[1] = ne; markers[2] = se; markers[3] = sw; } What is a better approach? The recursive Convex Hull algorithm is overkill for four points in the data set. Thank you.
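For what it's worth, a common answer to the question is to sort the points by their bearing from the centroid. Below is a sketch of mine — written in C# for consistency with the other examples on this page rather than the JavaScript the post asks for, but the comparator translates line-for-line into a JavaScript Array.prototype.sort callback. With latitude as "y up", descending atan2 order is clockwise, and because the centroid lies inside the hull of the four points, the resulting polygon never self-intersects:

using System;
using System.Collections.Generic;
using System.Linq;

class ClockwiseSort
{
    // Sorts four { lat, lng } points clockwise around their centroid.
    static List<double[]> SortClockwise(List<double[]> pts)
    {
        double midLat = pts.Average(p => p[0]);
        double midLng = pts.Average(p => p[1]);
        return pts
            .OrderByDescending(p => Math.Atan2(p[0] - midLat, p[1] - midLng))
            .ToList();
    }

    static void Main()
    {
        var pts = new List<double[]>
        {
            new[] { 49.2, -123.1 }, new[] { 49.3, -123.0 },
            new[] { 49.2, -123.0 }, new[] { 49.3, -123.1 },
        };
        foreach (var p in SortClockwise(pts))
            Console.WriteLine("{0}, {1}", p[0], p[1]);
    }
}

For the sample square this prints NW, NE, SE, SW — the order sortMarkers above is aiming for — with no special-casing of the four points and no convex hull machinery.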

    Read the article

< Previous Page | 40 41 42 43 44 45 46 47 48 49  | Next Page >