Search Results

Search found 544 results on 22 pages for 'marc jr'.


  • Delphi Prism and LINQ to SQL / Entity Framework

    - by Vegar
    I have found many posts and examples of using LINQ syntax in Delphi Prism (Oxygene), but I have never found anything on LINQ to SQL or Entity Framework. Is it possible to use LINQ to SQL or Entity Framework together with Prism? Where can I find such an example?

    Update: Olaf gives an answer on his blog. The question now is whether any visual tools and code generation are provided, or whether everything must be done by hand...

    Second update: Olaf has answered the tool/code-generation question in a comment on his site: the class designer is there, but there is no Pascal code gen. According to Marc Hoffman, that is currently not on their list; for now you have to live with manual mapping. I guess, if you had Visual Studio (not just the VS shell), you could add a C# library project to your solution and reference it from your Prism project, then create the table-class mapping in the C# project using the visual designer. Somewhat ugly, maybe, but possibly the key to getting the designer + code gen integrated into Prism. Who cares what language is used for the mapping? I will say this is a 1-0 to C# vs. Prism. If I did not care which language is used for the mapping - why should I care which language is used for the rest?
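
    For illustration, a minimal sketch of that workaround - a C# class library holding a LINQ to SQL mapping, which the Prism project would then reference; the table, column, and type names here are hypothetical:

        using System.Data.Linq;
        using System.Data.Linq.Mapping;

        // Hypothetical entity, mapped with LINQ to SQL attributes (or generated
        // by the VS designer); an Oxygene/Prism project references this assembly.
        [Table(Name = "Customers")]
        public class Customer
        {
            [Column(IsPrimaryKey = true)]
            public int Id { get; set; }

            [Column]
            public string Name { get; set; }
        }

        public class ShopContext : DataContext
        {
            public ShopContext(string connectionString) : base(connectionString) { }
            public Table<Customer> Customers { get { return GetTable<Customer>(); } }
        }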


  • template engine implementation

    - by qwerty
    I am currently building this small template engine. It takes as parameters a string containing the template and a dictionary of tag/value pairs to fill in the template. In the engine, I have no idea which tags will be in the template and which won't. I currently iterate (foreach) over the dictionary, parsing the string that I have put in a StringBuilder, and replacing the tags in the template with their corresponding values. Is there a more efficient/convenient way of doing this? I know the main drawback here is that the StringBuilder is parsed entirely for each tag, which is quite bad... (I also check after the process, though this is not included in the sample, that my template no longer contains any tags. They are all formatted the same way: @@tag@@)

        //Dictionary<string, string> tagsValueCorrespondence;
        //string template;
        StringBuilder outputBuilder = new StringBuilder(template);
        foreach (string tag in tagsValueCorrespondence.Keys)
        {
            outputBuilder.Replace(tag, tagsValueCorrespondence[tag]);
        }
        template = outputBuilder.ToString();

    Responses: @Marc:

        string template = "Some @@foobar@@ text in a @@bar@@ template";
        StringDictionary data = new StringDictionary();
        data.Add("foo", "value1");
        data.Add("bar", "value2");
        data.Add("foo2bar", "value3");

    Output: "Some text in a value2 template" instead of: "Some @@foobar@@ text in a value2 template"
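
    One common alternative, sketched here on the assumption that the dictionary keys are the bare tag names: a single Regex pass that only matches complete @@tag@@ tokens, which avoids both the repeated full scans and the partial-match problem shown in the @@foobar@@ example above:

        using System.Collections.Generic;
        using System.Text.RegularExpressions;

        static string FillTemplate(string template, IDictionary<string, string> tags)
        {
            // One pass over the template; only whole @@tag@@ tokens are touched,
            // so a "foo" entry can never corrupt an unrelated @@foobar@@ tag.
            return Regex.Replace(template, @"@@(\w+)@@", match =>
            {
                string value;
                return tags.TryGetValue(match.Groups[1].Value, out value)
                    ? value
                    : match.Value; // leave unknown tags in place for later validation
            });
        }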


  • How to route all subdomains to a single host using mDNS?

    - by John Mee
    I have a development webserver hosted as "myhost.local", which is found using Bonjour/mDNS. The server is running avahi-daemon. The webserver also wants to handle any subdomains of itself, e.g. "cat.myhost.local", "dog.myhost.local" and "guppy.myhost.local". Given that myhost.local is on a dynamic IP address from DHCP, is there still a way to route all requests for the subdomains to myhost.local? I'm starting to think it's not currently possible... This exchange from the avahi list sums up the problem (http://marc.info/?l=freedesktop-avahi&m=119561596630960&w=2):

    "You can do this with the /etc/avahi/hosts file. Alternatively you can use avahi-publish-host-name."

    "No, he cannot, since he wants to define an alias, not a new hostname. I.e. he only wants to register an A RR, no reverse PTR RR. But if you stick something into /etc/avahi/hosts then it registers both, and detects a collision if the PTR RR is non-unique, which would be the case for an alias."
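
    For reference, a sketch of the /etc/avahi/hosts approach mentioned in that exchange - the file takes one "address hostname" pair per line. It assumes a static address, which is exactly what the DHCP setup here rules out (addresses and names are hypothetical):

        # /etc/avahi/hosts
        192.168.1.10 cat.myhost.local
        192.168.1.10 dog.myhost.local
        192.168.1.10 guppy.myhost.local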


  • Refactoring ADO.NET - SqlTransaction vs. TransactionScope

    - by marc_s
    I have "inherited" a little C# method that creates an ADO.NET SqlCommand object and loops over a list of items to be saved to the database (SQL Server 2005). Right now, the traditional SqlConnection/SqlCommand approach is used, and to make sure everything works, the two steps (delete old entries, then insert new ones) are wrapped into an ADO.NET SqlTransaction. using (SqlConnection _con = new SqlConnection(_connectionString)) { using (SqlTransaction _tran = _con.BeginTransaction()) { try { SqlCommand _deleteOld = new SqlCommand(......., _con); _deleteOld.Transaction = _tran; _deleteOld.Parameters.AddWithValue("@ID", 5); _con.Open(); _deleteOld.ExecuteNonQuery(); SqlCommand _insertCmd = new SqlCommand(......, _con); _insertCmd.Transaction = _tran; // add parameters to _insertCmd foreach (Item item in listOfItem) { _insertCmd.ExecuteNonQuery(); } _tran.Commit(); _con.Close(); } catch (Exception ex) { // log exception _tran.Rollback(); throw; } } } Now, I've been reading a lot about the .NET TransactionScope class lately, and I was wondering, what's the preferred approach here? Would I gain anything (readibility, speed, reliability) by switching to using using (TransactionScope _scope = new TransactionScope()) { using (SqlConnection _con = new SqlConnection(_connectionString)) { .... } _scope.Complete(); } What you would prefer, and why? Marc


  • How to properly implement plugins in C#?

    - by MartyIX
    I'm trying to add plugins to my game, and what I'm trying to implement is this: plugins will be either mine or third-party, so I would like a solution where a crash in a plugin does not mean a crash in the main application. Methods of plugins are called very often (for example, for drawing game objects).

    What I've found so far:

    1) http://www.codeproject.com/KB/cs/pluginsincsharp.aspx - a simple concept that seems like it should work nicely. Since plugins are used in my game for every round, it would suffice to add a Restart() method, and if a plugin is no longer needed, an Unload() method + GC should take care of it.

    2) http://mef.codeplex.com/Wikipage - the Managed Extensibility Framework - my program should work on .NET 3.5, I don't want to add any other framework separately, and I want to write my plugin system myself. Therefore this solution is out of the question.

    3) Microsoft provides http://msdn.microsoft.com/en-us/library/system.addin.aspx, but according to a few articles I've read, it is very complex.

    4) Different AppDomains for plugins. According to Marc Gravell ( http://stackoverflow.com/questions/665668/usage-of-appdomain-in-c ), different AppDomains allow isolation, and unloading of plugins would be easy. But what would the performance cost be? I need to call plugin methods very often (to draw objects, for example). See also Using Application Domains - http://msdn.microsoft.com/en-us/library/yb506139.aspx - and a few tutorials on java2s.com. (A sketch of this option follows below.)

    Could you please comment on my findings? New approaches are also welcome! Thanks!
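
    To make option 4 concrete, a rough sketch of AppDomain-based isolation, under the assumption that the plugin type derives from MarshalByRefObject so calls cross the domain boundary by proxy; the assembly and type names are hypothetical. Note that each cross-domain call carries marshalling overhead, which is the performance concern raised above:

        using System;

        // Shared contract, referenced by both the host and the plugins.
        public interface IPlugin
        {
            void Draw();
        }

        // Lives in the plugin assembly; MarshalByRefObject keeps the instance
        // in the plugin's AppDomain and hands the host a transparent proxy.
        public class ExamplePlugin : MarshalByRefObject, IPlugin
        {
            public void Draw() { /* render something */ }
        }

        public static class PluginHost
        {
            public static void Run()
            {
                AppDomain domain = AppDomain.CreateDomain("PluginDomain");
                try
                {
                    IPlugin plugin = (IPlugin)domain.CreateInstanceAndUnwrap(
                        "MyGame.Plugins", "MyGame.Plugins.ExamplePlugin");
                    plugin.Draw(); // proxied cross-domain call
                }
                finally
                {
                    AppDomain.Unload(domain); // drops the plugin, host keeps running
                }
            }
        }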


  • phpmyadmin shows numbers or blob for mysql's utf8_bin collation columns?

    - by marc40000
    Hi! I have a table with a varchar column. Its collation is set to utf8_bin. My software using this table and column works perfectly, but when I look at the content in phpMyAdmin, I only see some hex values or [Blob xB]. Can I make phpMyAdmin show the content correctly? When I set the collation to utf8_general_ci or utf8_unicode_ci, phpMyAdmin shows the content correctly. Thx, Marc

    [edit] Hah, I found out: there is a small "+Options" link above every table in phpMyAdmin. It opens several options, including "Show BLOB contents" - which turns the [Blob] into readable text when enabled - and "Show binary contents as HEX", which shows the hex codes as text when disabled. No idea why there are two options, though, nor why sometimes there is a [Blob] and sometimes hex values. Well, now I'm still wondering: these option settings get lost when I go to another table, and I have to set them again every time. Is there a way to save those options? [/edit]


  • Can protobuf-net serialize this combination of interface and generic collection?

    - by tsupe
    I am trying to serialize an ItemTransaction, and protobuf-net (r282) is having a problem.

        ItemTransaction : IEnumerable<KeyValuePair<Type, IItemCollection>>

    and ItemCollection is like this:

        FooCollection : ItemCollection<Foo>
        ItemCollection<T> : BindingList<T>, IItemCollection
        IItemCollection : IList<Item>

    where T is a derived type of Item. ItemCollection also has a property of type IItemCollection. I am serializing like this:

        IItemCollection itemCol = someService.Blah(...);
        ...
        SerializeWithLengthPrefix<IItemCollection>(stream, itemCol, PrefixStyle.Base128);

    My eventual goal is to serialize ItemTransaction, but I am snagged on IItemCollection. Item and its derived types can be [de]serialized with no issues, see [1], but deserializing an IItemCollection fails (serializing works). ItemCollection has an ItemExpression property, and when deserializing, protobuf-net can't create an abstract class. This makes sense to me, but I'm not sure how to get through it.

        ItemExpression<T> : ItemExpression, IItemExpression
        ItemExpression : Expression

    ItemExpression is abstract, as is Expression. How do I get this to work properly? Also, I am concerned that ItemTransaction will fail, since the IItemCollections are going to be differing and unknown at compile time (an ItemTransaction will have FooCollection, BarCollection, FlimCollection, FlamCollection, etc.). What am I missing (Marc)?

    [1] http://stackoverflow.com/questions/2276104/protobuf-net-deserializing-across-assembly-boundaries
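
    Not the poster's types, but for context: the general protobuf-net mechanism for abstract bases is to declare the concrete subtypes up front, so the deserializer knows what to instantiate. A minimal sketch, with hypothetical members and tag numbers:

        using ProtoBuf;

        // The base type advertises each concrete subtype with a unique field tag,
        // which is what lets the deserializer construct the right concrete class.
        [ProtoContract]
        [ProtoInclude(10, typeof(ItemExpression<Foo>))]
        public abstract class ItemExpression
        {
            [ProtoMember(1)]
            public string Name { get; set; } // hypothetical member
        }

        [ProtoContract]
        public class ItemExpression<T> : ItemExpression
        {
            [ProtoMember(1)]
            public T Value { get; set; } // hypothetical member
        }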


  • Possible to Inspect Innards of Core C# Functionality

    - by Nick Babcock
    I was struck today with the inclination to compare the innards of Buffer.BlockCopy and Array.CopyTo. I am curious to see if Array.CopyTo calls Buffer.BlockCopy behind the scenes. There is no practical purpose behind this; I just want to further my understanding of the C# language and how it is implemented. Don't jump the gun and accuse me of micro-optimization, but you can accuse me of being curious! When I ran ILDasm on mscorlib.dll, I received this for Array.CopyTo:

        .method public hidebysig newslot virtual final
                instance void CopyTo(class System.Array 'array', int32 index) cil managed
        {
            // Code size 0 (0x0)
        } // end of method Array::CopyTo

    and this for Buffer.BlockCopy:

        .method public hidebysig static void BlockCopy(class System.Array src, int32 srcOffset,
                class System.Array dst, int32 dstOffset, int32 count) cil managed internalcall
        {
            .custom instance void System.Security.SecuritySafeCriticalAttribute::.ctor() = ( 01 00 00 00 )
        } // end of method Buffer::BlockCopy

    Which, frankly, baffles me. I've never run ILDasm on a dll/exe I didn't create. Does this mean that I won't be able to see how these functions are implemented? Searching around only revealed a Stack Overflow question in which Marc Gravell said "[Buffer.BlockCopy] is basically a wrapper over a raw mem-copy". While insightful, it doesn't answer my question of whether Array.CopyTo calls Buffer.BlockCopy. I'm specifically interested in whether I'm able to see how these two functions are implemented, and, if I had future questions about the internals of C#, whether it is possible for me to investigate them. Or am I out of luck?
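
    For context, an "internalcall" body like the one above corresponds roughly to a declaration of the following shape in managed source - a sketch, not the actual mscorlib code. The internalcall flag means the implementation lives natively inside the CLR, so there is no IL body for a disassembler to show:

        using System.Runtime.CompilerServices;

        public static class NativeBacked
        {
            // InternalCall binds this method to a native implementation inside
            // the runtime; the managed method deliberately has no body.
            [MethodImpl(MethodImplOptions.InternalCall)]
            public static extern void BlockCopy(
                System.Array src, int srcOffset, System.Array dst, int dstOffset, int count);
        }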


  • Android Multiple Handlers Design Question

    - by Soumya Simanta
    This question is related to an existing question I asked; I thought I'd ask a new question instead of replying back to the other one, since I cannot comment on my previous question because of a word limit. Marc wrote: "'I've more than one Handlers in an Activity.' Why? If you do not want a complicated handleMessage() method, then use post() (on Handler or View) to break the logic up into individual Runnables. Multiple Handlers makes me nervous." I'm new to Android, and my question is: is having multiple handlers in a single activity a bad design?

    Here is a sketch of my current implementation. I have a MapActivity that creates a data thread (a UDP socket that listens for data). My first handler is responsible for sending data from the data thread to the activity. On the map I have a bunch of "dynamic" markers that are refreshed frequently. Some of these markers are video markers, i.e., if the user clicks a video marker, I add a video view that extends android.opengl.GLSurfaceView to my map activity and display video on this new view. I use my second handler to send information about the marker that the user tapped on, from the ItemizedOverlay onTap(int index) method. The user can close the video view by tapping on it; I use my third handler for this. I would appreciate it if people could tell me what's wrong with this approach and suggest better ways to implement it. Thanks.


  • How can I stop SQL Server Management Studio replacing 'SELECT *' with the column list ?

    - by Ben McIntyre
    SQL Server Management Studio is driving me crazy. If I create a view and SELECT * from a table, it's all OK and I can save the view. Looking at the SQL for the view (e.g. by scripting a CREATE) reveals that the SELECT * really is saved to the view's SQL. But as soon as I reopen the view using the GUI (right click, Modify), SELECT * is replaced with a column list of all the columns in the table. How can I stop Management Studio from doing this? I want my SELECT * to remain just that. Perhaps it's just the difficulty of googling "SELECT *" that prevented me from finding anything remotely relevant to this (I did put it in double quotes). Please - I am highly experienced in Transact-SQL, so please DON'T give me a lecture on why I shouldn't be using SELECT *. I know all the pros and cons and I do use it at times. It's a language feature, and like all language features it can be used for good or evil (I emphatically do NOT agree that it is never appropriate to use it).

    Edit: I'm giving Marc the answer, since it seems it is not possible to turn this behaviour off. The problem is considered closed. I note that Enterprise Manager did nothing similar. The workaround is to either edit the SQL as text, or go to a product other than Management Studio. Or constantly edit out the column list and replace the * every time you edit a view. Sigh.


  • Java: split a List into two sub-Lists?

    - by Chris Conway
    What's the simplest, most standard, and/or most efficient way to split a List into two sub-Lists in Java? It's OK to mutate the original List, so no copying should be necessary. The method signature could be

        /** Split a list into two sublists. The original list will be modified to
         * have size i and will contain exactly the same elements at indices 0
         * through i-1 as it had originally; the returned list will have size
         * len-i (where len is the size of the original list before the call)
         * and will have the same elements at indices 0 through len-(i+1) as
         * the original list had at indices i through len-1. */
        <T> List<T> split(List<T> list, int i);

    [EDIT] List.subList returns a view on the original list, which becomes invalid if the original is modified. So split can't use subList unless it also dispenses with the original reference (or, as in Marc Novakowski's answer, uses subList but immediately copies the result).


  • Can the Singleton be replaced by Factory?

    - by lostiniceland
    Hello everyone. There are already quite a few posts about the Singleton pattern around, but I would like to start another one on this topic, since I would like to know whether the Factory pattern would be the right approach to remove this "anti-pattern". In the past I used the singleton quite a lot, as did my fellow colleagues, since it is so easy to use. For example, the Eclipse IDE - or better, its workbench model - makes heavy usage of singletons as well. It was some posts about E4 (the next big Eclipse version) that made me start to rethink the singleton. The bottom line was that due to these singletons the dependencies in Eclipse 3.x are tightly coupled. Let's assume I want to get rid of all singletons completely and instead use factories. My thoughts were as follows:

    - hide complexity
    - less coupling
    - I have control over how many instances are created (just store the reference in a private field of the factory)
    - mock the factory for testing (with Dependency Injection) when it is behind an interface
    - in some cases the factories can make more than one singleton obsolete (depending on business logic/component composition)

    Does this make sense? If not, please give good reasons for why you think so. An alternative solution is also appreciated. Thanks, Marc
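
    A minimal sketch of the idea in C# (the question itself is language-agnostic, and all names here are hypothetical): the factory caps the instance count behind an interface, and a test can substitute a mock factory via dependency injection:

        public interface IWorkbench
        {
            void Open();
        }

        public interface IWorkbenchFactory
        {
            IWorkbench Create();
        }

        // Production factory: keeps at most one instance, but unlike a classic
        // singleton, callers depend only on the interfaces above.
        public class WorkbenchFactory : IWorkbenchFactory
        {
            private IWorkbench _instance;

            public IWorkbench Create()
            {
                if (_instance == null)
                    _instance = new DefaultWorkbench();
                return _instance;
            }

            private class DefaultWorkbench : IWorkbench
            {
                public void Open() { /* ... */ }
            }
        }

        // A consumer receives the factory instead of calling Workbench.Instance,
        // so a test can inject a mock IWorkbenchFactory.
        public class Editor
        {
            private readonly IWorkbenchFactory _factory;
            public Editor(IWorkbenchFactory factory) { _factory = factory; }
            public void Show() { _factory.Create().Open(); }
        }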


  • Doctrine Default Primary Key Problem (Again)

    - by 01010011
    Hi. Should I change all of my uniquely-named MySQL database primary keys to 'id' to avoid getting errors related to Doctrine's default primary key set in the plugin 'doctrine_pi.php'? To further elaborate, I am getting the following recurring error, this time after trying to log in to my login page:

        SQLSTATE[42S22]: Column not found: 1054 Unknown column 'u.book_id' in 'field list' in...

    I suspect the problem resides in a MySQL table used for my login, which has a primary key called id. Marc B originally solved an identical problem for me in this post http://stackoverflow.com/questions/2702229/doctrine-codeigniter-mysql-crud-errors when I had the same problem with a different table within the same database. Following his suggestion, I changed the default primary key located at system/application/plugins/doctrine_pi.php from 'id' to 'book_id':

        <?php
        // system/application/plugins/doctrine_pi.php
        ...
        // set the default primary key to be named 'id', integer, 4 bytes
        Doctrine_Manager::getInstance()->setAttribute(
            Doctrine::ATTR_DEFAULT_IDENTIFIER_OPTIONS,
            array('name' => 'book_id', 'type' => 'integer', 'length' => 4));

    and that solved my previous problem. However, my login page stopped working. So what is the safe thing to do? Change all of my primary keys to 'id' (and will that solve the problem without causing some other problem I am not aware of)? Or should I add some lines of code in doctrine_pi.php?


  • Checking instance of non-class constrained type parameter for null in generic method

    - by casperOne
    I currently have a generic method where I want to do some validation on the parameters before working on them. Specifically, if the instance of the type parameter T is a reference type, I want to check to see if it's null and throw an ArgumentNullException if it is. Something along the lines of:

        // This can be a method on a generic class, it does not matter.
        public void DoSomething<T>(T instance)
        {
            if (instance == null)
                throw new ArgumentNullException("instance");

    Note: I do not wish to constrain my type parameter using the class constraint. I thought I could use Marc Gravell's answer on "How do I compare a generic type to its default value?" and use the EqualityComparer<T> class like so:

        static void DoSomething<T>(T instance)
        {
            if (EqualityComparer<T>.Default.Equals(instance, null))
                throw new ArgumentNullException("instance");

    But it gives a very ambiguous error on the call to Equals:

        Member 'object.Equals(object, object)' cannot be accessed with an instance reference; qualify it with a type name instead

    How can I check an instance of T against null when T is not constrained to being a value or reference type?
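
    Not from the thread itself, but for comparison, two spellings that do compile for an unconstrained T - a sketch; both are no-ops when T is a non-nullable value type, which matches the intent that only reference types should fail:

        using System;

        static class Guard
        {
            // Compiles without a class constraint: C# permits comparing an
            // unconstrained type parameter to the null literal, and for a
            // non-nullable value type T the comparison is simply always false.
            public static void DoSomething<T>(T instance)
            {
                if (instance == null)
                    throw new ArgumentNullException("instance");
                // ... work with instance ...
            }

            // Equivalent spelling; boxing makes this false for value types too.
            public static void DoSomethingAlt<T>(T instance)
            {
                if (ReferenceEquals(instance, null))
                    throw new ArgumentNullException("instance");
            }
        }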


  • Installing a Windows Service from a separate GUI - how to install .config file along with it?

    - by Shaul
    I have written a GUI (call it MyGUI) for ClickOnce deployment on any given client site. That GUI installs and configures a Windows Service (MyService), using the method described here by @Marc Gravell. Here's my code, run from inside MyGUI, which contains a reference to MyService:

        using (var inst = new AssemblyInstaller(typeof(MyService.Program).Assembly, new string[] { }))
        {
            IDictionary state = new Hashtable();
            inst.UseNewContext = true;
            try
            {
                if (uninstall)
                {
                    inst.Uninstall(state);
                }
                else
                {
                    inst.Install(state);
                    inst.Commit(state);
                }
            }
            catch
            {
                try { inst.Rollback(state); } catch { }
                throw;
            }
        }

    Take note of that first line: I'm grabbing the assembly for MyService and installing that. Now, the trouble is, the way I've done the deployment, I'm effectively referencing the service's EXE file from the GUI's app folder. So now the service fires up, starts looking for stuff in the MyService.config file, and can't find it, because it's living in someone else's app folder, with only the GUI's MyGUI.config file present. So, how do I get MyService.config to be available to the service?
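
    One hedged sketch of a fix, assuming the standard probing rule that a process reads its configuration from the file named after its own executable (MyService.exe.config, sitting next to MyService.exe); the file and folder names here are hypothetical:

        using System.IO;

        // Run from the GUI just before installing/starting the service: put a
        // config file where the service runtime will actually look for it.
        static void DeployServiceConfig(string guiFolder, string serviceExePath)
        {
            // Shipped alongside the GUI via ClickOnce.
            string source = Path.Combine(guiFolder, "MyService.exe.config");
            // Ends up as MyService.exe.config next to the service binary.
            string target = serviceExePath + ".config";
            File.Copy(source, target, true); // overwrite any stale copy
        }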


  • MSI install sequence - run DB scripts before services start

    - by marc_s
    Folks, we're running into some sequencing troubles with our MSI install. As part of our app, we install a bunch of services and allow the user to pick whether to start them right away or later. When they start right away, they seem to start too early in the install sequence - before our database manager has had a chance to update the database. Right now, our custom action to run the database updater looks like this - it's being run after "InstallFinalize", very late in the process:

        <InstallExecuteSequence>
            <RemoveExistingProducts After='InstallInitialize' />
            <Custom Action='RunDbUpdateManagerAction' After='InstallFinalize'>&DbUpdateManager=3</Custom>
        </InstallExecuteSequence>

    What would be the more appropriate step to run after or before, to make sure the DB scripts are executed before any of the installed services start up? Is there a "BeforeServiceStart" step?

    EDIT: Just defining the "Before='StartServices'" attribute on the <Custom> tag didn't solve my problem. I am assuming the issue is this: the custom action has an "inner text", which represents a condition, and this condition is "&DbUpdateManager=3". From what I can deduce from trial & error, this probably means "the DbUpdateManager feature must be published". Now, the trouble is: "PublishFeature" comes way at the end of the install sequence, just before "InstallFinalize", and definitely AFTER InstallServices / StartServices. So when I specify the "Before='StartServices'" requirement, the condition "the DbUpdateManager feature must be published" isn't true yet, and DbUpdateManager doesn't get executed :-( I tried removing the condition - in that case, my DbUpdateManager sometimes doesn't execute at all, sometimes more than once - no real clear pattern as to what happens when. Any more ideas? Is there a way I could check for a condition "the DbUpdateManager feature is installed" which would be true after the "InstallFiles" step? Marc


  • Updating the $PATH for running a command through SSH with an LDAP user account

    - by Guillaume Bodi
    Hi all, I am setting up a Mac OS X 10.6 server to host Git repositories, so we need to push commits to the server through SSH. The server has only an admin account and uses a user list from an LDAP server. Now, since git accesses the server through a non-interactive shell, git operations cannot complete, because the git executables are not in the default path. As the users are network users, they do not have a local home folder, so I cannot use a ~/.bashrc or similar solution. I browsed several articles here and there but could not get it working in a nice and clean setup. Here is the info on the methods I have gathered so far:

    - I could update the default PATH environment to include the git executables folder. However, I could not manage to do it successfully. Updating /etc/paths didn't change anything, and since it's not an interactive shell, /etc/profile and /etc/bashrc are ignored.

    - From the ssh manpage, I read that a BASH_ENV variable can be set to get an optional script executed. However, I cannot figure out how to set it system-wide on the server. If it needs to be set up on the client machine, this is not an acceptable solution.

    - I can fix this problem by creating a .bashrc with a PATH correction in the system root (since all network users start there, as they do not have a home). But it just feels wrong. Additionally, if we do create a home folder for a user, then the git command would fail again.

    - I can install a third-party application to set up hooks on login and then run a script creating a home directory with the necessary path corrections. This smells like a backyard-tinkering, duct-tape solution.

    - I can install a small script on the server and ForceCommand sshd to this script on login. The script then looks for a command to execute ($SSH_ORIGINAL_COMMAND) and triggers a login shell to run that command, or just triggers a regular login shell for an interactive session. The full details of this method can be found here: http://marc.info/?l=git&m=121378876831164

    The last one is the best method I have found so far. Any suggestions on how to deal with this properly?


  • JavaOne 2012 Sunday Strategy Keynote

    - by Janice J. Heiss
    At the Sunday Strategy Keynote, held at the Masonic Auditorium, Hasan Rizvi, EVP, Middleware and Java Development, stated that the theme for this year's JavaOne is "Make the future Java"--meaning that Java continues in its role as the most popular, complete, productive, secure, and innovative development platform. But it also means, he qualified, the process by which we make the future Java--an open, transparent, collaborative, and community-driven evolution. "Many of you have bet your businesses and your careers on Java, and we have bet our business on Java," he said.

    Rizvi detailed the three factors they consider critical to the success of Java--technology innovation, community participation, and Oracle's leadership/stewardship. He offered a scorecard in these three realms over the past year--with OS X and Linux ARM support on Java SE, open sourcing of JavaFX by the end of the year, the release of the Java Embedded Suite 7.0 middleware platform, and multiple releases on the Java EE side. The JCP process continues, with new JSR activity, and JUGs show a 25% increase in participation since last year. Oracle, meanwhile, continues its commitment to both technology and community development/outreach--with four regional JavaOne conferences last year in various parts of the world, as well as the release of Java Magazine, with over 120,000 current subscribers.

    Georges Saab, VP Development, Java SE, next reviewed features of Java SE 7--the first major revision to the platform under Oracle's stewardship, which has included near-monthly update releases offering hundreds of fixes, performance enhancements, and new features. Saab indicated that developers, ISVs, and hosting providers have all been rapid adopters of the platform. He also noted that Oracle's entire Fusion Middleware stack is supported on SE 7. The supported platforms for SE 7 have also increased--from Windows, Linux, and Solaris to OS X, Linux ARM, and the emerging ARM micro-server market. "In the last year, we've added as many new platforms for Java as were added in the previous decade," said Saab.

    Saab also explored the upcoming JDK 8 release--including Project Lambda, Project Nashorn (a modern implementation of JavaScript running on the JVM), and others. He noted that Nashorn functionality had already been used internally in NetBeans 7.3, and announced that Oracle was planning to contribute the implementation to OpenJDK.

    Nandini Ramani, VP Development, Java Client, ME and Card, discussed the latest news pertaining to JavaFX 2.0--releases on Windows, OS X, and Linux; release of the FX Scene Builder tool; the JavaFX WebView component in NetBeans 7.3; and an OpenJFX project in OpenJDK. Ramani announced, as of Sunday, the availability for download of JavaFX on Linux ARM (developer preview), as well as Scene Builder on Linux. She noted that for next year's JDK 8 release, JavaFX will offer 3D as well as third-party component integration. Avinder Brar, Senior Software Engineer, Navis, and Dierk König, Canoo Fellow, next took the stage and demonstrated all that JavaFX offers, with a feature-rich, animation-rich, real-time cargo management application that employs Canoo's just open-sourced Dolphin technology.

    Saab also explored Java SE 9 and beyond--Jigsaw modularity, the Penrose Project for interoperability with OSGi, improved multi-tenancy for Java in the cloud, and Project Sumatra.
    Phil Rogers, HSA Foundation President and AMD Corporate Fellow, explored heterogeneous computing platforms that combine the CPU and the parallel processor of the GPU into a single piece of silicon and shared memory—a hardware technology driven by such advanced functionalities as HD video, face recognition, and cloud workloads. Project Sumatra is an OpenJDK project targeted at bringing Java to such heterogeneous platforms--with hardware and software experts working together to modify the JVM for these advanced applications and platforms.

    Ramani next discussed the latest with Java in the embedded space--"the Internet of things" and M2M--declaring this to be "the next IT revolution," with Java as the ideal technology for the ecosystem. Last week, Oracle released Java ME Embedded 3.2 (for micro-controllers and low-power devices) and Java Embedded Suite 7.0 (a middleware stack based on Java SE 7). Axel Hansmann, VP Strategy and Marketing, Cinterion, explored his company's use of Java in M2M and their new release of EHS5, the world's smallest 3G-capable M2M module, running Java ME Embedded. Hansmann explained that Java offers them the ability to create a "simple to use, scalable, coherent, end-to-end layer" for such diverse edge devices.

    Marc Brule, Chief Financial Officer, Royal Canadian Mint, also explored the fascinating use case of JavaCard in his country's MintChip e-cash technology--deployable on smartphones, USB devices, computers, tablets, or the cloud. In parting, Ramani encouraged developers to download the latest releases of Java Embedded and try them out.

    Cameron Purdy, VP, Fusion Middleware Development and Java EE, summarized the latest developments and announcements in the enterprise space--greater developer productivity in Java EE 6 (with more on the way in EE 7), portability between platforms and vendors, and even cloud-to-cloud portability. The earliest version of the Java EE 7 SDK is now available for download--in GlassFish 4--with WebSocket support, better JSON support, and more. The final release is scheduled for April of 2013. Nicole Otto, Senior Director, Consumer Digital Technology, Nike, explored her company's Java-technology-driven enterprise ecosystem for all things sports, including the NikeFuel accelerometer wristband. Looking beyond Java EE 7, Purdy mentioned NoSQL database functionality for EE 8, the concurrency utilities (possibly in EE 7), some of the Avatar projects in EE 7, some in EE 8, multi-tenancy for the cloud, supporting SaaS applications, and more.

    Rizvi ended by introducing Dr. Robert Ballard, oceanographer and National Geographic Explorer in Residence--part of Oracle's philanthropic relationship with the National Geographic Society to fund K-12 education around ocean science and conservation. Ballard is best known for having discovered the wreckage of the Titanic. He offered a fascinating video and overview of the cutting-edge technology used in such deep-sea explorations, noting that in his early days, high-bandwidth exploration meant that you'd go down in a submarine and "stick your face up against the window." Now, it's remotely operated telepresence: "I think of my Hercules vehicle as my equivalent of a Na'vi. When I go beneath the sea, I actually send my spirit." Using high-bandwidth satellite links, such amazing explorations can now occur via smartphone, laptop, or whatever platform. Ballard's team regularly offers live feeds and programming out to schools and the world, spanning 188 countries--with educators embedded as part of the expeditions. It's technology at its finest, inspiring the next generation of scientists and explorers!


  • Mark Hurd on the Customer Revolution: Oracle's Top 10 Insights

    - by Richard Lefebvre
    Reprint of an article from Forbes.

    Businesses that fail to focus on customer experience will hear a giant sucking sound from their vanishing profitability. Because in today's dynamic global marketplace, consumers now hold the power in the buyer-seller equation, and sellers need to revamp their strategy for this new world order. The ability to relentlessly deliver connected, personalized and rewarding customer experiences is rapidly becoming one of the primary sources of competitive advantage in today's dynamic global marketplace. And the inability or unwillingness to realize that the customer is a company's most important asset will lead, inevitably, to decline and failure.

    Welcome to the lifecycle of customer experience, in which consumers explore, engage, shop, buy, ask, compare, complain, socialize, exchange, and more across multiple channels, with the unconditional expectation that each of those interactions will be completed in an efficient and personalized manner however, wherever, and whenever the customer wants. While many niche companies are offering point solutions within that sprawling and complex spectrum of needs and requirements, businesses looking to deliver superb customer experiences are still left having to do multiple product evaluations, multiple contract negotiations, multiple test projects, multiple deployments, and--perhaps most annoying of all--multiple and never-ending integration projects to string together all those niche products from all those niche vendors.

    With its new suite of customer-experience solutions, Oracle believes it can help companies unravel these challenges and move at the speed of their customers, anticipating their needs and desires and creating enduring and profitable relationships. Those solutions span the full range of marketing, selling, commerce, service, listening/insights, and social and collaboration tools for employees. When Oracle launched its suite of Customer Experience solutions at a recent event in New York City, President Mark Hurd analyzed the customer experience revolution taking place and presented Oracle's strategy for empowering companies to capitalize on this important market shift. From Hurd's presentation and related materials, I've extracted a list of Hurd's Top 10 Insights into the Customer Revolution.

    1. Please Don't Feed the Competitor's Pipeline! After enduring a poor experience, 89% of consumers say they would immediately take their business to your competitor. (Except where noted, the source for these findings is the 2011 Customer Experience Impact (CEI) Report, including a survey commissioned by RightNow (acquired by Oracle in March 2012) and conducted by Harris Interactive.)

    2. The Addressable Market Is Massive. Only 1% of consumers say their expectations were always met by their actual experiences.

    3. They're Willing to Pay More! In return for a great experience, 86% of consumers say they'll pay up to 25% more.

    4. The Social Media Microphone Is Always Live. After suffering through a poor experience, more than 25% of consumers said they posted a negative comment on Twitter or Facebook or other social media sites. Conversely, of those consumers who got a response after complaining, 22% posted positive comments about the company.

    5. The New Deal Is Never Done: Embrace the Entire Customer Lifecycle. An appropriately active and engaged relationship, says Hurd, extends across every step of the entire process: need, research, select, purchase, receive, use, maintain, and recommend.

    6. The 360-Degree Commitment. Customers want to do business with companies that actively and openly demonstrate the desire to establish strong and seamless connections across employees, the company, and the customer, says research firm Temkin Group in its report called "The CX Competencies."

    7. Understand the Emotional Drivers Behind Brand Love. What makes consumers fall in love with a brand? Among the top factors are friendly employees and customer reps (73%), easy access to information and support (55%), and personalized experiences, such as when companies know precisely what products or services customers have purchased in the past and what issues those customers have raised (36%).

    8. The Importance of Immediate Action. You've got one week to respond--and then the opportunity's lost. If your company needs more than a week to answer a prospect's question or request, most of those prospects will terminate the relationship.

    9. Want More Revenue, Less Churn, and More Referrals? Then improve the overall customer experience: Forrester's research says that approach put an extra $900 million in the pockets of wireless service providers, $800 million for hotels, and $400 million for airlines.

    10. The Formula for CX Success. Hurd says it includes three elegantly interlaced factors: Connected Engagement, to personalize the experience; Actionable Insight, to maximize the engagement; and Optimized Execution, to deliver on the promise of value.

    RECOMMENDED READING:
    - The Top 10 Strategic CIO Issues For 2013
    - Wal-Mart, Amazon, eBay: Who's the Speed King of Retail?
    - Career Suicide and the CIO: 4 Deadly New Threats
    - Memo to Marc Benioff: Social Is a Tool, Not an App


  • How do we, as a community, help encourage programming in public schools? (Or state Schools for the U

    - by NoMoreZealots
    PRIMARY MOTIVATION

    My office gets involved with the "FIRST Robotics" competitions, and one thing that lingers from year to year is that the students typically have no preparation for doing even simple programming as part of the public school system. While the science classes provide some basic grasp of mechanical and electrical concepts, by and large computer programming gets no coverage in the curriculum. (This may be different in other areas of the country/world.) What makes it worse is that there is only a short period of time in which to prepare the students and help them design the robot. Talking to some professors from local colleges, it's a problem because you can't assume even the most basic understanding from freshman CS majors. Languages like Python, Lua and BASIC are simple enough for at least high-school-level students, if not younger.

    SCOPE

    So how do you get public schools to support programming, at least to the level of the "Try it in BASIC" examples that used to be at the end of a chapter in my Algebra book? At least enough to prepare students for events such as the FIRST Robotics competitions, whose primary objectives are to teach problem solving and teamwork, and possibly to foster an interest in Math, Science and Engineering in general. (Not force-feed it to them, as some people here seem to be implying.)

    Edit: Why teach kids? (Since 2000, CS enrollment in US colleges has decreased by 70% while college enrollment has increased; this is a PROBLEM.)

    Saying there is no value in teaching someone programming in Jr./High school because they might think "they know programming" is like saying there's no value in teaching high school science and physics, because they might decide they "know physics" - leading to abuse like: "I passed a high school physics class, I'm going to develop a Unified Quantum Gravitational Theory."

    Better prepared students are better students. It would allow college programs to raise the bar on entry-level courses, letting students be weeded out based on their understanding of more advanced material. Plus, people who did poorly in the topic in high school aren't as likely to say "I think there's money in computers, so I'll do computer science." And if people take it in high school and decide then that it's not for them, that's better than them paying a college to figure it out. The result is that people who take the degree are more likely to succeed and to be there for the RIGHT reasons (i.e. it's what they REALLY want to do, and that's REALLY the key to being good at anything).

    Programming is like anything else: the more practice and genuine interest you have, the better you get. If you start them later, they get less practice. The earlier you give them the opportunity to start, the more practice they will get. All other things being equal, the more practice, the better the programmer.


  • SQL SERVER – Weekly Series – Memory Lane – #048

    - by Pinal Dave
    Here is the list of selected articles of SQLAuthority.com across all these years. Instead of just listing all the articles, I have selected a few of my most favorite articles and have listed them here with additional notes below them. Let me know which one of the following is your favorite article from memory lane.

    2007

    Order of Result Set of SELECT Statement on Clustered Indexed Table When ORDER BY is Not Used - The above theory is true in most cases. However, SQL Server does not use that logic when returning the result set; SQL Server always returns the result set which it can return fastest. In most cases, the result set which can be returned fastest is the one returned using the clustered index.

    Effect of TRANSACTION on Local Variable – After ROLLBACK and After COMMIT - One of the Jr. Developers asked me this question (what will be the effect of TRANSACTION on a local variable, after ROLLBACK and after COMMIT?) while I was rushing to an important meeting. I was getting late, so I asked him to talk with his application tech lead. When I came back from the meeting, both of them were looking for me. They said they were confused. I quickly wrote down the following example for them.

    2008

    SQL SERVER – Guidelines and Coding Standards Complete List Download - Coding standards and guidelines are very important for any developer on the path of a successful career. A coding standard is a set of guidelines, rules and regulations on how to write code. Coding standards should be flexible enough to take care of situations where they would otherwise prevent best practices for coding. They are basically the guidelines that one should follow for better understanding. Download the complete Guidelines and Coding Standards list.

    Get Answer in Float When Dividing Two Integers - Many times we have requirements for calculations amongst different fields in tables. One of the software developers here was trying to calculate on fields having integer values and divide them, which gave incorrect integer results where accurate results including decimals were expected.

    Puzzle – Computed Columns Datatype Explanation - SQL Server automatically does a cast to the data type having the highest precedence. So the result of INT and INT will be INT, but INT and FLOAT will be FLOAT, because FLOAT has a higher precedence. If you want a different data type, you need to do an EXPLICIT cast.

    Renaming SP is Not Good Idea – Renaming Stored Procedure Does Not Update sys.procedures - I have written many articles about renaming tables, columns and procedures (SQL SERVER – How to Rename a Column Name or Table Name); here I found something interesting about renaming stored procedures and felt like sharing it with you all. The interesting fact is that when we rename a stored procedure using the SP_Rename command, the stored procedure is successfully renamed. But when we try to view the procedure using sp_helptext, the procedure still has the old name instead of the new one.

    2009

    Insert Values of Stored Procedure in Table – Use Table Valued Function - It is clear from the result set that the approach where I have converted the stored procedure logic into a table valued function is much better in terms of logic, as it saves a large number of operations. However, this option should be used carefully; the performance of a stored procedure is "usually" better than that of functions.

    Interesting Observation – Index on Index View Used in Similar Query - Recently, I was working on an optimization project for one of the largest organizations. While working on one of the queries, we came across a very interesting observation: there was a query on the base table which, when run, used an index that did not exist on the base table. On careful examination, we found that the query was using an index that was on another view. This was very interesting, as I have personally never experienced a scenario like this. In simple words, "a query on the base table can use the index created on the indexed view of the same base table."

    Interesting Observation – Execution Plan and Results of Aggregate Concatenation Queries - Working with SQL Server has never seemed monotonous, no matter how long one has worked with it. Quite often, I come across excellent comments that I feel deserve acknowledgment as blog posts. Recently, I wrote an article on SQL SERVER – Execution Plan and Results of Aggregate Concatenation Queries Depend Upon Expression Location, which was well received in the community.

    2010

    I encourage all of you to go through the complete series and write your own on the subject. If you write an article and send it to me, I will publish it on this blog with due credit to you. If you write on your own blog, I will update this blog post to point to your blog post.

    SQL SERVER – ORDER BY Does Not Work – Limitation of the View 1
    SQL SERVER – Adding Column is Expensive by Joining Table Outside View – Limitation of the View 2
    SQL SERVER – Index Created on View not Used Often – Limitation of the View 3
    SQL SERVER – SELECT * and Adding Column Issue in View – Limitation of the View 4
    SQL SERVER – COUNT(*) Not Allowed but COUNT_BIG(*) Allowed – Limitation of the View 5
    SQL SERVER – UNION Not Allowed but OR Allowed in Index View – Limitation of the View 6
    SQL SERVER – Cross Database Queries Not Allowed in Indexed View – Limitation of the View 7
    SQL SERVER – Outer Join Not Allowed in Indexed Views – Limitation of the View 8
    SQL SERVER – SELF JOIN Not Allowed in Indexed View – Limitation of the View 9
    SQL SERVER – Keywords View Definition Must Not Contain for Indexed View – Limitation of the View 10
    SQL SERVER – View Over the View Not Possible with Index View – Limitations of the View 11

    2011

    Startup Parameters Easy to Configure - If you are a regular reader of this blog, you must be aware that I have written about SQL Server Denali recently. Here is the quickest way to reach the screen where we can change the startup parameters: go to SQL Server Configuration Manager >> SQL Server Services >> right click on the server >> Properties >> Startup Parameters.

    2012

    Validating Unique Columnname Across Whole Database - I sometimes come across very strange requirements, and often I do not receive a proper explanation of them. Here is one of those examples: "Our business requirement is that when we add a new column, we want it unique across the current database." Read the solution to this strange request in the blog post.

    Excel Losing Decimal Values When Value Pasted from SSMS ResultSet - It is very common, when users copy a result set to Excel, for floating point or decimal values to be lost. The solution is very simple and requires a small adjustment in Excel. By default, Excel is very smart: when it detects that the value being pasted is numeric, it changes the column format to accommodate that.

    Basic Calculation and PEMDAS Order of Operation - Read this interesting blog post for a fantastic conversation about the subject.

    Copy Column Headers from Resultset – SQL in Sixty Seconds #027 – Video - http://www.youtube.com/watch?v=x_-3tLqTRv0

    Delete From Multiple Table – Update Multiple Table in Single Statement - There are two questions which I get multiple times every single day; in my Gmail, I have created a standard canned reply for them. Let us see the questions here: "I want to delete from multiple tables in a single statement - how do I do it?" and "I want to update multiple tables in a single statement - how do I do it?" Read the answer in the blog post.

    Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Memory Lane, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology


  • SQL SERVER – Weekly Series – Memory Lane – #050

    - by Pinal Dave
    Here is the list of selected articles of SQLAuthority.com across all these years. Instead of just listing all the articles, I have selected a few of my most favorite articles and have listed them here with additional notes below them. Let me know which one of the following is your favorite article from memory lane.

    2007

    Executing Remote Stored Procedure – Calling Stored Procedure on Linked Server - In this example we see two different methods of calling stored procedures remotely.

    Connection Property of SQL Server Management Studio SSMS - A very simple example of how to build connection properties for SQL Server with the help of SSMS.

    Sample Example of RANKING Functions – ROW_NUMBER, RANK, DENSE_RANK, NTILE - SQL Server has a total of 4 ranking functions. Ranking functions return a ranking value for each row in a partition. All the ranking functions are non-deterministic.

    T-SQL Script to Add Clustered Primary Key - A Jr. DBA asked me three times in a day how to create a clustered primary key. I gave him the following sample example. That was the last time he asked "How do I create a clustered primary key on a table?"

    2008

    TRIM() Function – User Defined Function - SQL Server does not have a function which can trim leading and trailing spaces of a string at the same time. SQL does have LTRIM() and RTRIM(), which can trim leading and trailing spaces respectively. SQL Server 2008 also does not have a TRIM() function. Users can easily use LTRIM() and RTRIM() together to simulate TRIM() functionality. http://www.youtube.com/watch?v=1-hhApy6MHM

    2009

    Earlier I wrote two different articles on the subject Remove Bookmark Lookup; this article is part 3 of the original. Please read the first two articles before continuing with this one:
    Query Optimization – Remove Bookmark Lookup – Remove RID Lookup – Remove Key Lookup
    Query Optimization – Remove Bookmark Lookup – Remove RID Lookup – Remove Key Lookup – Part 2
    Query Optimization – Remove Bookmark Lookup – Remove RID Lookup – Remove Key Lookup – Part 3

    Interesting Observation – Query Hint – FORCE ORDER - SQL Server never stops amazing me. As regular readers of this blog already know, besides conducting corporate training, I work on large-scale query optimization and server tuning projects. In one of the recent projects, I noticed that a junior database developer used the query hint FORCE ORDER; when I asked for details, I found out that the basic concept was not properly understood by him.

    Queries Waiting for Memory Allocation to Execute - In one of the recent projects, I was asked to create a report of queries that are waiting for memory allocation. The reason was that we were doubtful about whether the memory was sufficient for the application. The following query can be useful in similar cases; queries that do not have to wait for a memory grant will not appear in its result set.

    2010

    Quickest Way to Identify Blocking Query and Resolution – Dirty Solution - As the title suggests, this is quite a dirty solution; it's not as elegant as you might expect. However, it works totally fine.

    Simple Explanation of Data Type Precedence - While I was working on creating a question for SQL SERVER – SQL Quiz – The View, The Table and The Clustered Index Confusion, I had actually created yet another question along with it. However, I felt that the one posted on the SQL Quiz is much better than this one, because what makes it more challenging is that it has multiple answers.

    Encrypted Stored Procedure and Activity Monitor - I recently received a question: if a stored procedure is encrypted, can we see its definition in Activity Monitor? The answer is no. Let us do a quick test: create the following stored procedure, then launch the Activity Monitor and check the text.

    Indexed View always Uses Index on Table - A single table can have a maximum of 249 nonclustered indexes and 1 clustered index. In SQL Server 2008, a single table can have a maximum of 999 nonclustered indexes and 1 clustered index. It is widely believed that a table can have only 1 clustered index, and this belief is true. I have some questions for all of you. Let us assume that I create a view from the table itself and then create a clustered index on it. In my view, I am selecting the complete table itself.

    2011

    Detecting Database Case Sensitive Property using fn_helpcollations() - I received a question on how to determine the case sensitivity of a database. The quick answer is to identify the collation of the database and check the properties of that collation. I have previously written about how one can identify database collation. Once you have figured out the collation of the database, you can put it in the WHERE condition of the following T-SQL and check the case sensitivity from the description.

    Server Side Paging in SQL Server CE (Compact Edition) - SQL Server Denali is coming out with new T-SQL syntax for paging. I have written about it earlier: SQL SERVER – Server Side Paging in SQL Server Denali – A Better Alternative, SQL SERVER – Server Side Paging in SQL Server Denali Performance Comparison, SQL SERVER – Server Side Paging in SQL Server Denali – Part 2. What is very interesting is that SQL Server CE 4.0 introduces the same feature. Here is a quick example; to run the script in the example, you will have to install WebMatrix 4.0 and download the sample database. Once done, you can run the following script.

    Why I am Going to Attend PASS Summit Unite 2011 - The four-day event will be marked by a lot of learning, sharing, and networking, which will help me increase both my knowledge and contacts. Every year, PASS Summit provides me a golden opportunity to build my network as well as to identify and meet potential customers or employees.

    2012

    Manage Help Settings – CTRL + ALT + F1 - This is a very interesting read, as my daughter once accidentally came across a screen in SQL Server Management Studio. It took me 2-3 minutes to figure out how she had created that screen.

    Recover the Accidentally Renamed Table - "I accidentally renamed a table in my SSMS. I was scrolling very fast and I made a mistake; it was either because I double clicked or pressed F2 (the shortcut key for renaming). However, I have made the mistake, and now I have no idea how to fix it." If you have renamed the table, I think you are pretty much out of luck. Here are a few things you can do which can give you an idea of what your table name might have been, if you are lucky.

    Identify Numbers of Non Clustered Index on Tables for Entire Database - Here is the script which will give you the number of nonclustered indexes on any table in the entire database.

    Identify Most Resource Intensive Queries – SQL in Sixty Seconds #029 – Video - Here is the complete script which I used in the SQL in Sixty Seconds video. Thanks, Harsh, for the important tip in the comment. http://www.youtube.com/watch?v=3kDHC_Tjrns

    Advanced Data Quality Services with Melissa Data – Azure Data Market - For the purposes of the review, I used a database I had in an Excel spreadsheet with name and address information. Upon a cursory inspection, there are miscellaneous problems with these records: some addresses are missing ZIP codes, others are missing a city, and some records are slightly misspelled or have unparsed suites. With DQS, I can easily add a knowledge base to help standardize my values, such as state abbreviations. But how do I know that my address is correct?

    Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Memory Lane, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology


  • ASP.NET 3.5 GridView - row editing - dynamic binding to a DropDownList

    - by marc_s
    This is driving me crazy :-) I'm trying to get an ASP.NET 3.5 GridView to show a selected value as a string when being displayed, and to show a DropDownList to allow me to pick a value from a given list of options when being edited. Seems simple enough? My GridView looks like this (simplified):

        <asp:GridView ID="grvSecondaryLocations" runat="server" DataKeyNames="ID"
            OnInit="grvSecondaryLocations_Init"
            OnRowCommand="grvSecondaryLocations_RowCommand"
            OnRowCancelingEdit="grvSecondaryLocations_RowCancelingEdit"
            OnRowDeleting="grvSecondaryLocations_RowDeleting"
            OnRowEditing="grvSecondaryLocations_RowEditing"
            OnRowUpdating="grvSecondaryLocations_RowUpdating">
            <Columns>
                <asp:TemplateField>
                    <ItemTemplate>
                        <asp:Label ID="lblPbxTypeCaption" runat="server"
                            Text='<%# Eval("PBXTypeCaptionValue") %>' />
                    </ItemTemplate>
                    <EditItemTemplate>
                        <asp:DropDownList ID="ddlPBXTypeNS" runat="server" Width="200px"
                            DataTextField="CaptionValue" DataValueField="OID" />
                    </EditItemTemplate>
                </asp:TemplateField>
            </Columns>
        </asp:GridView>

    The grid gets displayed OK when not in editing mode - the selected PBX type shows its value in the asp:Label control. No surprise there. I load the list of values for the DropDownList into a local member called _pbxTypes in the OnLoad event of the form. I verified this - it works, the values are there. Now my challenge is: when the grid goes into editing mode for a particular row, I need to bind the list of PBXs stored in _pbxTypes. Simple enough, I thought - just grab the DropDownList object in the RowEditing event and attach the list:

        protected void grvSecondaryLocations_RowEditing(object sender, GridViewEditEventArgs e)
        {
            grvSecondaryLocations.EditIndex = e.NewEditIndex;
            GridViewRow editingRow = grvSecondaryLocations.Rows[e.NewEditIndex];

            DropDownList ddlPbx = (editingRow.FindControl("ddlPBXTypeNS") as DropDownList);
            if (ddlPbx != null)
            {
                ddlPbx.DataSource = _pbxTypes;
                ddlPbx.DataBind();
            }
            .... (more stuff)
        }

    The trouble is: I never get anything back from the FindControl call - it seems the ddlPBXTypeNS doesn't exist (or can't be found). What am I missing?? Must be something really stupid... but so far, all my Googling, reading up on GridView controls, and asking buddies hasn't helped. Who can spot the missing link? ;-) Marc
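
    Not from the thread, but one common explanation with a sketch of a fix: when RowEditing fires, the row has not yet been rebuilt with the EditItemTemplate, so ddlPBXTypeNS does not exist at that point. Binding in RowDataBound (wired up via OnRowDataBound), after the grid re-binds with the new EditIndex, avoids that - assuming the same control and member names, plus a hypothetical BindGrid() helper:

        protected void grvSecondaryLocations_RowEditing(object sender, GridViewEditEventArgs e)
        {
            // Only switch the edit index here; the edit row does not exist yet.
            grvSecondaryLocations.EditIndex = e.NewEditIndex;
            BindGrid(); // hypothetical helper that re-binds the GridView's data source
        }

        protected void grvSecondaryLocations_RowDataBound(object sender, GridViewRowEventArgs e)
        {
            // Runs after the row (including the EditItemTemplate) has been created.
            if (e.Row.RowType == DataControlRowType.DataRow &&
                (e.Row.RowState & DataControlRowState.Edit) == DataControlRowState.Edit)
            {
                DropDownList ddlPbx = e.Row.FindControl("ddlPBXTypeNS") as DropDownList;
                if (ddlPbx != null)
                {
                    ddlPbx.DataSource = _pbxTypes;
                    ddlPbx.DataBind();
                }
            }
        }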


  • C# searching for new Tool for the tool box, how to template this code

    - by Nix
    All: I have something I have been trying to do for a while and have yet to find a good strategy for; I am not sure C# can even support what I am trying to do. Example: imagine a template like this, repeated in manager code. The overarching concept is a function that returns a result consisting of a success flag and an error list:

        public Result<Boolean> RemoveLocation(LocationKey key)
        {
            List<Error> errorList = new List<Error>();
            Boolean? result = null;
            try
            {
                result = locationDAO.RemoveLocation(key);
            }
            catch (UpdateException ue)
            {
                // Error happened - let's pass this back to the user!
                errorList = ue.ErrorList;
            }
            return new Result<Boolean>(result, errorList);
        }

    I am looking to turn it into a template like the one below, where DoSomething is some call (preferably not static) that returns a Boolean. I know I could do this in a stack sense, but I am really looking for a way to do it via object reference.

        public Result<Boolean> RemoveLocation(LocationKey key)
        {
            var magic = locationDAO.RemoveLocation(key);
            return ProtectedDAOCall(magic);
        }

        public Result<Boolean> CreateLocation(LocationKey key)
        {
            var magic = locationDAO.CreateLocation(key);
            return ProtectedDAOCall(magic);
        }

        public Result<Boolean> ProtectedDAOCall(Func<..., bool> doSomething)
        {
            List<Error> errorList = new List<Error>();
            Boolean? result = null;
            try
            {
                result = doSomething();
            }
            catch (UpdateException ue)
            {
                // Error happened - let's pass this back to the user!
                errorList = ue.ErrorList;
            }
            return new Result<Boolean>(result, errorList);
        }

    If there is any more information you may need, let me know. I am interested to see what someone else can come up with.

    Marc's solution applied to the code above:

        public Result<Boolean> CreateLocation(LocationKey key)
        {
            LocationDAO locationDAO = new LocationDAO();
            return WrapMethod(() => locationDAO.CreateLocation(key));
        }

        public Result<Boolean> RemoveLocation(LocationKey key)
        {
            LocationDAO locationDAO = new LocationDAO();
            return WrapMethod(() => locationDAO.RemoveLocation(key));
        }

        static Result<T> WrapMethod<T>(Func<Result<T>> func)
        {
            try
            {
                return func();
            }
            catch (UpdateException ue)
            {
                return new Result<T>(default(T), ue.ErrorList);
            }
        }


  • Using XmlDiffPatch when writing to stream

    - by Mark Smith
    I am trying to use XmlDiffPatch when comparing two XMLs (one from a stream, the other from a file) and writing the diff patch to a stream. The first method writes my XML to a memory stream. The second method loads an XML from a file and creates a stream for the patched file to be written into. The third method actually compares the two files and writes the third. The xmldiff.Compare(originalFile, finalFile, dgw) method takes (XmlReader, XmlReader, XmlWriter). I always get that both files are identical, even though they are not, so I know that I am missing something. Any help is appreciated!

        public MemoryStream FirstXml()
        {
            string[] names = { "John", "Mohammed", "Marc", "Tamara", "joy" };
            MemoryStream ms = new MemoryStream();
            XmlTextWriter xtw = new XmlTextWriter(ms, Encoding.UTF8);
            xtw.WriteStartDocument();
            xtw.WriteStartElement("root");
            foreach (string s in names)
            {
                xtw.WriteStartElement(s);
                xtw.WriteEndElement();
            }
            xtw.WriteEndElement();
            xtw.WriteEndDocument();
            return ms;
        }

        public Stream SecondXml()
        {
            XmlReader finalFile = XmlReader.Create(@"c:\......\something.xml");
            MemoryStream ms = FirstXml();
            XmlReader originalFile = XmlReader.Create(ms);
            MemoryStream ms2 = new MemoryStream();
            XmlTextWriter dgw = new XmlTextWriter(ms2, Encoding.UTF8);
            GenerateDiffGram(originalFile, finalFile, dgw);
            return ms2;
        }

        public void GenerateDiffGram(XmlReader originalFile, XmlReader finalFile, XmlWriter dgw)
        {
            XmlDiff xmldiff = new XmlDiff();
            bool bIdentical = xmldiff.Compare(originalFile, finalFile, dgw);
            dgw.Close();
            StreamReader sr = new StreamReader(SecondXml());
            string xmlOutput = sr.ReadToEnd();
            if (xmlOutput.Contains("</xd:xmldiff>"))
            {
                Console.WriteLine("Xml files are not identical");
                Console.Read();
            }
            else
            {
                Console.WriteLine("Xml files are identical");
                Console.Read();
            }
        }
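
    A guess at a likely culprit rather than a confirmed fix: FirstXml never flushes the writer or rewinds the stream, so the XmlReader created over it starts at the end and sees an empty document (and ms2 in SecondXml would need the same rewind before being read back). A minimal sketch of the change at the end of FirstXml:

        // ...end of FirstXml():
        xtw.WriteEndElement();
        xtw.WriteEndDocument();
        xtw.Flush();      // push buffered XML into the underlying MemoryStream
        ms.Position = 0;  // rewind so XmlReader.Create(ms) reads from the start
        return ms;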

