Search Results

Search found 600 results on 24 pages for 'splitting'.


  • How do I return integers from a string?

    - by kannan.ambadi
    Suppose you are passing a string (for example, "My name has 1 K, 2 A and 3 N") which may contain integers, letters or special characters, and you want to retrieve only the numbers from the input string. We can implement this in many ways, such as splitting the string into an array or using the TryParse method. I would like to share another idea: regular expressions. All you have to do is create an instance of the Regex class with a pattern that matches everything except digits. The Regex class defines a Split method, which splits the specified input string based on the pattern provided during object initialization. We can write the code as given below:

        public static int[] SplitIdSeqenceValues(object combinedArgs)
        {
            // @"\D+" matches runs of non-digit characters, so Split returns the digit runs.
            var _argsSeperator = new Regex(@"\D+", RegexOptions.Compiled);

            string[] splitedIntegers = _argsSeperator.Split(combinedArgs.ToString());

            var args = new int[splitedIntegers.Length];
            for (int i = 0; i < splitedIntegers.Length; i++)
                args[i] = MakeSafe.ToSafeInt32(splitedIntegers[i]);   // MakeSafe is a conversion helper from the author's codebase (not shown)

            return args;
        }

    It is better to pass RegexOptions.Compiled so that the regular expression is compiled once and reused, which gives matching a performance boost. Happy Programming :)
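    Since the snippet above relies on the author's MakeSafe helper, here is a hedged, self-contained variation of the same idea using int.TryParse; the class and method names are illustrative only, not part of the original post.

        using System;
        using System.Collections.Generic;
        using System.Text.RegularExpressions;

        static class NumberExtractor
        {
            // Compiled once and reused; @"\D+" matches runs of non-digit characters,
            // so Split returns the digit runs (plus possible empty strings at the edges).
            private static readonly Regex NonDigits = new Regex(@"\D+", RegexOptions.Compiled);

            public static int[] ExtractIntegers(string input)
            {
                string[] parts = NonDigits.Split(input ?? string.Empty);
                var result = new List<int>(parts.Length);
                foreach (string part in parts)
                {
                    int value;
                    if (int.TryParse(part, out value))   // skips the empty strings Split can produce
                        result.Add(value);
                }
                return result.ToArray();
            }
        }

        // Usage: prints 1, 2 and 3 on separate lines.
        // foreach (int n in NumberExtractor.ExtractIntegers("My name has 1 K, 2 A and 3 N"))
        //     Console.WriteLine(n);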


  • Pre-filtering and shaping OData feeds using WCF Data Services and the Entity Framework - Part 2

    - by rajbk
    In the previous post, you saw how to create an OData feed and pre-filter the data. In this post, we will see how to shape the data. A sample project is attached at the bottom of this post. Pre-filtering and shaping OData feeds using WCF Data Services and the Entity Framework - Part 1

    Shaping the feed. The Product feed we created earlier returns too much information about our products. Let's change this so that only the following properties are returned: ProductID, ProductName, QuantityPerUnit, UnitPrice, UnitsInStock. We also want to return only Products that are not discontinued.

    Splitting the Entity. To shape our data according to the requirements above, we are going to split our Product entity into two and expose one through the feed. The exposed entity will contain only the properties listed above. We will use the other entity in our query interceptor to pre-filter the data so that discontinued products are not returned. Go to the design surface for the Entity Model and make a copy of the Product entity. A "Product1" entity gets created. Rename Product1 to ProductDetail. Right click on the Product entity, select "Add Association" and make a one-to-one association between Product and ProductDetail. Keep only the properties we wish to expose on the Product entity and delete all other properties on it (you delete a property on an entity by right clicking on the property and selecting "delete"). Keep the ProductID on ProductDetail, and delete any other property on the ProductDetail entity that is already present in the Product entity.

    Mapping Entity to Database Tables. Right click on "ProductDetail", go to "Table Mapping" and add a mapping to the "Products" table in the Mapping Details.

    Add a referential constraint. Let's add a referential constraint, which is similar to a referential integrity constraint in SQL. Double click on the association between the entities and add the constraint with "Principal" set to "Product".

    Let us review what we did so far. We made a copy of the Product entity and called it ProductDetail. We created a one-to-one association between these entities. Excluding the ProductID, we made sure properties were not duplicated between them. We added a ProductDetail entity to Products table mapping (entity to database). We added a referential constraint between the entities.

    Let's build our project. We get the following error: "'NortwindODataFeed.Product' does not contain a definition for 'Discontinued' and no extension method 'Discontinued' accepting a first argument of type 'NortwindODataFeed.Product' could be found …" The reason for this error is that our Product entity no longer has a "Discontinued" property; we "moved" it to the ProductDetail entity, since we want our Product entity to contain only properties that will be exposed by our feed. Since we have a one-to-one association between the entities, we can easily rewrite our query interceptor like so:

        [QueryInterceptor("Products")]
        public Expression<Func<Product, bool>> OnReadProducts()
        {
            return o => o.ProductDetail.Discontinued == false;
        }

    Similarly, all "hidden" properties of the Product table are available to us internally (through the ProductDetail entity) for any additional logic we wish to implement. Compile the project and view the feed. We see that the feed returns only the properties that were part of the requirement.
    To see the data in JSON format, you have to create a request with the following request header: Accept: application/json, text/javascript, */* (easy to do in jQuery). The result should look like this:

        { "d" : { "results": [
            { "__metadata": { "uri": "http://localhost.:2576/DataService.svc/Products(1)", "type": "NorthwindModel.Product" },
              "ProductID": 1, "ProductName": "Chai", "QuantityPerUnit": "10 boxes x 20 bags", "UnitPrice": "18.0000", "UnitsInStock": 39 },
            { "__metadata": { "uri": "http://localhost.:2576/DataService.svc/Products(2)", "type": "NorthwindModel.Product" },
              "ProductID": 2, "ProductName": "Chang", "QuantityPerUnit": "24 - 12 oz bottles", "UnitPrice": "19.0000", "UnitsInStock": 17 },
            ...

    If anyone has the $format option working, please post a comment; it was not working for me at the time of writing this. We have successfully pre-filtered our data to expose only products that have not been discontinued, and shaped our data so that only certain properties of the Entity are exposed. Note that there are several other ways you could implement this, such as creating a QueryView, a Stored Procedure or a DefiningQuery. You have seen how easy it is to create an OData feed, shape the data and pre-filter it while hardly writing any code of your own. For more details on OData, Google it with your favorite search engine :-) Also check out one of the most passionate persons I have ever met, Pablo Castro – the architect of Astoria (WCF Data Services). Watch his MIX 2010 presentation titled "OData: There's a Feed for That" here. Download the sample project for VS 2010 RTM: NortwindODataFeed.zip
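    As a hedged aside for anyone following along with WCF Data Services: the service class is also where you control which entity sets are visible at all. A minimal sketch (the context type name is assumed to match the sample project, not taken from it) looks like this:

        using System.Data.Services;
        using System.Data.Services.Common;

        public class DataService : DataService<NortwindODataFeedEntities>
        {
            public static void InitializeService(DataServiceConfiguration config)
            {
                // Expose only the shaped Products set, read-only.
                config.SetEntitySetAccessRule("Products", EntitySetRights.AllRead);
                config.DataServiceBehavior.MaxProtocolVersion = DataServiceProtocolVersion.V2;
            }
        }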


  • Is Linear Tape File System (LTFS) Best For Transportable Storage?

    - by rickramsey
    Those of us in tape storage engineering take a lot of pride in what we do, but understand that tape is the right answer to a storage problem only some of the time. And, unfortunately for a storage medium with such a long history, it has built up a few preconceived notions that are no longer valid. When I hear customers debate whether to implement tape vs. disk, one of the common strikes against tape is its perceived lack of usability. If you could go back a few generations of corporate acquisitions, you would discover that StorageTek engineers recognized this problem and started developing a solution where a tape drive could look just like a memory stick to a user. The goal was to not have to care about where files were on the cartridge, but to simply see the list of files that were on the tape, and click on them to open them up. Eventually, our friends in tape over at IBM built upon our work at StorageTek and Sun Microsystems and released the Linear Tape File System (LTFS) feature for the current LTO5 generation of tape drives as an open specification. LTFS is really a wonderful feature and we’re proud to have taken part in its beginnings and, as you’ll soon read, its future. Today we offer LTFS-Open Edition, which is free for you to use in your in Oracle Enterprise Linux 5.5 environment - not only on your LTO5 drives, but also on your Oracle StorageTek T10000C drives. You can download it free from Oracle and try it out. LTFS does exactly what its forefathers imagined. Now you can see immediately which files are on a cartridge. LTFS does this by splitting a cartridge into two partitions. The first holds all of the necessary metadata to create a directory structure for you to easily view the contents of the cartridge. The second partition holds all of the files themselves. When tape media is loaded onto a drive, a complete file system image is presented to the user. Adding files to a cartridge can be as simple as a drag-and-drop just as you do today on your laptop when transferring files from your hard drive to a thumb drive or with standard POSIX file operations. You may be thinking all of this sounds nice, but asking, “when will I actually use it?” As I mentioned at the beginning, tape is not the right solution all of the time. However, if you ever need to physically move data between locations, tape storage with LTFS should be your most cost-effective and reliable answer. I will give you a few use cases examples of when LTFS can be utilized. Media and Entertainment (M&E), Oil and Gas (O&G), and other industries have a strong need for their storage to be transportable. For example, an O&G company hunting for new oil deposits in remote locations takes very large underground seismic images which need to be shipped back to a central data center. M&E operations conduct similar activities when shooting video for productions. M&E companies also often transfers files to third-parties for editing and other activities. These companies have three highly flawed options for transporting data: electronic transfer, disk storage transport, or tape storage transport. The first option, electronic transfer, is impractical because of the expense of the bandwidth required to transfer multi-terabyte files reliably and efficiently. If there’s one place that has bandwidth, it’s your local post office so many companies revert to physically shipping storage media. Typically, M&E companies rely on transporting disk storage between sites even though it, too, is expensive. 
    Tape storage should be the preferred format because, as IDC points out, "Tape is more suitable for physical transportation of large amounts of data as it is less vulnerable to mechanical damage during transportation compared with disk" (see note 1, below). However, tape storage has not been used in the past because of the restrictions created by proprietary formats. A tape may only be readable if both the sender and receiver have the same proprietary application used to write the file. In addition, the workflows may be slowed by the need to read the entire tape cartridge during recall. LTFS solves both of these problems, clearing the way for tape to become the standard platform for transferring large files. LTFS is open and, as long as you've downloaded the free reader from our website or that of anyone in the LTO consortium, you can read the data. So if a movie studio ships a scene to a third-party partner to add, for example, sound effects or a music score, it doesn't have to care what technology the third party has. If it's written back to an LTFS-formatted tape cartridge, it can be read. Some tape vendors like to claim LTFS is a "standard," but beauty is in the eye of the beholder. It's a specification at this point, not a standard. That said, we're already seeing application vendors create functionality to write in an LTFS format based on the specification. And it's my belief that both customers and the tape storage industry will see the most benefit if we all follow the same path. As such, we have volunteered to lead the way in making LTFS a standard, first with the Storage Networking Industry Association (SNIA), and eventually through standards bodies such as the American National Standards Institute (ANSI). Expect to hear good news soon about our efforts. So, if storage transportability is one of your requirements, I recommend giving LTFS a look. It makes tape much more user-friendly and it's free, which allows tape to maintain all of its cost advantages over disk!

    Note 1 - IDC Report, April 2011: "IDC's Archival Storage Solutions Taxonomy, 2011"

    - Brian Zents


  • Creating Property Set Expression Trees In A Developer Friendly Way

    - by Paulo Morgado
    In a previous post I showed how to create expression trees to set properties on an object. The way I did it was not very developer friendly. It involved explicitly creating the necessary expressions because the compiler won't generate expression trees with property or field set expressions. Recently someone contacted me to help develop some kind of command pattern framework that used developer friendly lambdas to generate property set expression trees. Simply put, given this entity class:

        public class Person
        {
            public string Name { get; set; }
        }

    the person in question wanted to write code like this:

        var et = Set((Person p) => p.Name = "me");

    where et is the expression tree that represents the property assignment. So, since we can't do this, let's try the next best thing: splitting the retrieval of the property information from the retrieval of the value to assign to the property:

        var et = Set((Person p) => p.Name, () => "me");

    And this is something that the compiler can handle. The implementation of Set receives an expression to retrieve the property information from and another expression to retrieve the value to assign to the property:

        public static Expression<Action<TEntity>> Set<TEntity, TValue>(
            Expression<Func<TEntity, TValue>> propertyGetExpression,
            Expression<Func<TValue>> valueExpression)

    The implementation of this method gets the property information from the body of the property get expression (propertyGetExpression) and the value expression (valueExpression) to build an assign expression, and builds a lambda expression using the same parameter of the property get expression as its parameter:

        public static Expression<Action<TEntity>> Set<TEntity, TValue>(
            Expression<Func<TEntity, TValue>> propertyGetExpression,
            Expression<Func<TValue>> valueExpression)
        {
            var entityParameterExpression =
                (ParameterExpression)(((MemberExpression)(propertyGetExpression.Body)).Expression);

            return Expression.Lambda<Action<TEntity>>(
                Expression.Assign(propertyGetExpression.Body, valueExpression.Body),
                entityParameterExpression);
        }

    And now we can use the expression to translate to another context or just compile and use it:

        var et = Set((Person p) => p.Name, () => "me");
        Console.WriteLine(et);          // Prints: p => (p.Name = "me")

        var d = et.Compile();
        d(person);
        Console.WriteLine(person.Name); // Prints: me

    It can even support closures:

        var et = Set((Person p) => p.Name, () => name);
        Console.WriteLine(et);          // Prints: p => (p.Name = value(<>c__DisplayClass0).name)

        var d = et.Compile();

        name = "me";
        d(person);
        Console.WriteLine(person.Name); // Prints: me

        name = "you";
        d(person);
        Console.WriteLine(person.Name); // Prints: you

    Not so useful in the intended scenario (but still possible) is building an expression tree that receives the value to assign to the property as a parameter:

        public static Expression<Action<TEntity, TValue>> Set<TEntity, TValue>(
            Expression<Func<TEntity, TValue>> propertyGetExpression)
        {
            var entityParameterExpression =
                (ParameterExpression)(((MemberExpression)(propertyGetExpression.Body)).Expression);
            var valueParameterExpression = Expression.Parameter(typeof(TValue));

            return Expression.Lambda<Action<TEntity, TValue>>(
                Expression.Assign(propertyGetExpression.Body, valueParameterExpression),
                entityParameterExpression,
                valueParameterExpression);
        }

    This new expression can be used like this:

        var et = Set((Person p) => p.Name);
        Console.WriteLine(et);          // Prints: (p, Param_0) => (p.Name = Param_0)

        var d = et.Compile();

        d(person, "me");
        Console.WriteLine(person.Name); // Prints: me

        d(person, "you");
        Console.WriteLine(person.Name); // Prints: you

    The only caveat is that we need to be able to write code to read the property in order to write to it.
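    One small, hedged addition to the pattern above: nothing stops a caller from passing a lambda that is not a simple property access, so it can be worth failing fast before building the assignment. This sketch assumes the same Set signature and namespaces used in the snippets above, and only adds the guard:

        public static Expression<Action<TEntity>> Set<TEntity, TValue>(
            Expression<Func<TEntity, TValue>> propertyGetExpression,
            Expression<Func<TValue>> valueExpression)
        {
            // Accept only expressions of the form p => p.SomeProperty.
            var memberExpression = propertyGetExpression.Body as MemberExpression;
            if (memberExpression == null || !(memberExpression.Expression is ParameterExpression))
                throw new ArgumentException(
                    "Expected a simple property access such as p => p.Name.",
                    "propertyGetExpression");

            var entityParameterExpression = (ParameterExpression)memberExpression.Expression;
            return Expression.Lambda<Action<TEntity>>(
                Expression.Assign(memberExpression, valueExpression.Body),
                entityParameterExpression);
        }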


  • Functional Adaptation

    - by Charles Courchaine
    In real life and OO programming we’re often faced with using adapters, DVI to VGA, 1/4” to 1/8” audio connections, 110V to 220V, wrapping an incompatible interface with a new one, and so on.  Where the adapter pattern is generally considered for interfaces and classes a similar technique can be applied to method signatures.  To be fair, this adaptation is generally used to reduce the number of parameters but I’m sure there are other clever possibilities to be had.  As Jan questioned in the last post, how can we use a common method to execute an action if the action has a differing number of parameters, going back to the greeting example it was suggested having an AddName method that takes a first and last name as parameters.  This is exactly what we’ll address in this post. Let’s set the stage with some review and some code changes.  First, our method that handles the setup/tear-down infrastructure for our WCF service: 1: private static TResult ExecuteGreetingFunc<TResult>(Func<IGreeting, TResult> theGreetingFunc) 2: { 3: IGreeting aGreetingService = null; 4: try 5: { 6: aGreetingService = GetGreetingChannel(); 7: return theGreetingFunc(aGreetingService); 8: } 9: finally 10: { 11: CloseWCFChannel((IChannel)aGreetingService); 12: } 13: } Our original AddName method: 1: private static string AddName(string theName) 2: { 3: return ExecuteGreetingFunc<string>(theGreetingService => theGreetingService.AddName(theName)); 4: } Our new AddName method: 1: private static int AddName(string firstName, string lastName) 2: { 3: return ExecuteGreetingFunc<int>(theGreetingService => theGreetingService.AddName(firstName, lastName)); 4: } Let’s change the AddName method, just a little bit more for this example and have it take the greeting service as a parameter. 1: private static int AddName(IGreeting greetingService, string firstName, string lastName) 2: { 3: return greetingService.AddName(firstName, lastName); 4: } The new signature of AddName using the Func delegate is now Func<IGreeting, string, string, int>, which can’t be used with ExecuteGreetingFunc as is because it expects Func<IGreeting, TResult>.  Somehow we have to eliminate the two string parameters before we can use this with our existing method.  This is where we need to adapt AddName to match what ExecuteGreetingFunc expects, and we’ll do so in the following progression. 1: Func<IGreeting, string, string, int> -> Func<IGreeting, string, int> 2: Func<IGreeting, string, int> -> Func<IGreeting, int>   For the first step, we’ll create a method using the lambda syntax that will “eliminate” the last name parameter: 1: string lastNameToAdd = "Smith"; 2: //Func<IGreeting, string, string, int> -> Func<IGreeting, string, int> 3: Func<IGreeting, string, int> addName = (greetingService, firstName) => AddName(greetingService, firstName, lastNameToAdd); The new addName method gets us one step close to the signature we need.  
Let’s say we’re going to call this in a loop to add several names, we’ll take the final step from Func<IGreeting, string, int> -> Func<IGreeting, int> in line as a lambda passed to ExecuteGreetingFunc like so: 1: List<string> firstNames = new List<string>() { "Bob", "John" }; 2: int aID; 3: foreach (string firstName in firstNames) 4: { 5: //Func<IGreeting, string, int> -> Func<IGreeting, int> 6: aID = ExecuteGreetingFunc<int>(greetingService => addName(greetingService, firstName)); 7: Console.WriteLine(GetGreeting(aID)); 8: } If for some reason you needed to break out the lambda on line 6 you could replace it with 1: aID = ExecuteGreetingFunc<int>(ApplyAddName(addName, firstName)); and use this method: 1: private static Func<IGreeting, int> ApplyAddName(Func<IGreeting, string, int> addName, string lastName) 2: { 3: return greetingService => addName(greetingService, lastName); 4: } Splitting out a lambda into its own method is useful both in this style of coding as well as LINQ queries to improve the debugging experience.  It is not strictly necessary to break apart the steps & functions as was shown above; the lambda in line 6 (of the foreach example) could include both the last name and first name instead of being composed of two functions.  The process demonstrated above is one of partially applying functions, this could have also been done with Currying (also see Dustin Campbell’s excellent post on Currying for the canonical curried add example).  Matthew Podwysocki also has some good posts explaining both Currying and partial application and a follow up post that further clarifies the difference between Currying and partial application.  In either technique the ultimate goal is to reduce the number of parameters passed to a function.  Currying makes it a single parameter passed at each step, where partial application allows one to use multiple parameters at a time as we’ve done here.  This technique isn’t for everyone or every problem, but can be extremely handy when you need to adapt a call to something you don’t control.
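    To see the two techniques mentioned above side by side outside the greeting example, here is a hedged, generic sketch of partial application and currying for a two-argument function; the helper names are invented for illustration:

        using System;

        static class FunctionalHelpers
        {
            // Partial application: fix the first argument now, supply the second later.
            public static Func<T2, TResult> Partial<T1, T2, TResult>(Func<T1, T2, TResult> f, T1 first)
            {
                return second => f(first, second);
            }

            // Currying: turn a two-argument function into a chain of one-argument functions.
            public static Func<T1, Func<T2, TResult>> Curry<T1, T2, TResult>(Func<T1, T2, TResult> f)
            {
                return first => second => f(first, second);
            }
        }

        // Usage sketch:
        // Func<string, string, int> addName = (firstName, lastName) => 42;   // stand-in for the service call
        // var addBob = FunctionalHelpers.Partial(addName, "Bob");            // still needs a last name
        // int id1 = addBob("Smith");
        // int id2 = FunctionalHelpers.Curry(addName)("John")("Smith");       // one argument at a time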


  • Project of Projects with team Foundation Server 2010

    - by Martin Hinshelwood
    It is pretty much accepted that you should use Areas instead of having many small Team Projects when you are using Team Foundation Server 2010. I have implemented this scenario many times and this is the current iteration of layout and considerations. If, like me, you work with many customers, you will find that you get into a groove for how to set these things up to make them as easily understandable for everyone, while giving the best functionality. The trick is in making it as intuitive as possible for both you and the developers that need to work with it. There are five main places where you need to give the Product or Project name prominence over any other value:

    - Area
    - Iteration
    - Source Code
    - Work Item Queries
    - Build

    Once you decide how you are doing this in each of these places you need to keep to it religiously. Even if you have only one source code file to keep, make sure it is in the right place. This makes your developers and others working with the format familiar with where everything should go, as well as building up muscle memory. This prevents the neat system degenerating into a nasty mess.

    Areas. Areas are traditionally used to separate out parts of your product / project so that you can see how much effort has gone into each.

    Figure: The top level areas are for reporting and work item separation

    There are massive advantages to using this method: you can move work from one project to another, and you can rename a project / product. It is far more likely that a project or product gets renamed than a department. Tip: If you have many projects, over 100, you should consider categorising them here, but make sure that the actual project name always sits at the same level so you know which is which.

    Figure: Always keep things that are the same at the same level

    Note: You may use these categories only at the Area/Iteration level to make it easier to select on drop down lists. You may not want to use them everywhere. On the other hand, for consistency it would be better to.

    Iterations. Iterations are usually tied to some sort of time-based consideration. Here I am splitting into Iterations with periodic releases.

    Figure: Each product needs to be able to have its own cadence

    The ability to have each project run at its own pace and to enable them to have their own release schedule is often of paramount importance, and you don't want to force your 100+ projects to all be released on the same date.

    Source Code. Having a good structure for your source, even if you are not branching or having multiple products under the same structure, is always a good idea.

    Figure: Separate out your products' source

    You need to think about both your branches as well as the structure of your source. All your code should be under "Source", and everything you need to build your solution, including build scripts and 3rd party tools, should be under your "Main" (branch) folder. This should then be branched by "Quality", "Release" or both to get the most out of your branching structure. The important thing is to make sure you branch (or are able to branch) everything you need to build, test and deploy your application to an environment. That environment may be development, test or even production, but I can't stress enough the importance of having everything you need. Note: You usually will not be able to install custom software on your build server. Store any *.dll's or *.exe's that you need under the "Tools\Tool1" folder.
    Note: Consult the Branching Guidance for Team Foundation Server 2010 for more on branching.

    Figure: Adding a category may be a necessary evil

    Even if you have to have a couple of categories called "Default", it is better than not knowing the difference between a folder, a Product and a Branch.

    Work Item Queries. Queries are used to load lists of Work Items out of TFS so you can see what work you have. This means that you also want to separate queries out by Product / project to make it easier to find the work for each one.

    Figure: Again you have the same first level structure

    Since Work Item Tracking also has folders, we do the same thing. We put all the queries under a folder named for the Product / Project and change each query to have "AreaPath=[TeamProject]\[ProductX]" in the query instead of the standard "Project=@Project". Tip: Don't have a folder with new queries for each iteration. Instead have a single "Current" folder that has queries that point to the current iteration; just change the queries as you move from one iteration to another. Tip: You can ctrl+drag the "Product1" folder to create your "Product2" folder.

    Builds. You may have many builds, both for individual products and for different qualities. This can be further complicated by having some builds that action "Gated Check-In" and others that are specifically for "Release", "Test" or another purpose.

    Figure: There are no folders, yet, for the builds so you need a good naming convention

    It's a pity that there are no folders under builds; some way to categorise would be nice. In lieu of that, at the moment you can use a functional naming convention that at least allows you to find what you want.

    Conclusion. It is really easy to both achieve and stick to this format if you take the time to do it. Unless you have 1000+ builds or 100+ Products you are unlikely to run into any issues. Even then there are things you can do to mitigate the issues, and I have described some of them above. Let me know if you can think of any other things to make this easier.


  • Modularity through HTTP

    - by Michael Williamson
    As programmers, we strive for modularity in the code we write. We hope that splitting the problem up makes it easier to solve, and allows us to reuse parts of our code in other applications. Object-orientation is the most obvious of many attempts to get us closer to this ideal, and yet one of the most successful approaches is almost accidental: the web. Programming languages provide us with functions and classes, and plenty of other ways to modularize our code. This allows us to take our large problem, split it into small parts, and solve those small parts without having to worry about the whole. It also makes it easier to reason about our code. So far, so good, but now that we’ve written our small, independent module, for example to send out e-mails to my customers, we’d like to reuse it in another application. By creating DLLs, JARs or our platform’s package container of choice, we can do just that – provided our new application is on the same platform. Want to use a Java library from C#? Well, good luck – it might be possible, but it’s not going to be smooth sailing. Even if a library exists, it doesn’t mean that using it going to be a pleasant experience. Say I want to use Java to write out an XML document to an output stream. You’d imagine this would be a simple one-liner. You’d be wrong: import org.w3c.dom.*; import java.io.*; import javax.xml.transform.*; import javax.xml.transform.dom.*; import javax.xml.transform.stream.*; private static final void writeDoc(Document doc, OutputStream out) throws IOException { try { Transformer t = TransformerFactory.newInstance().newTransformer(); t.setOutputProperty(OutputKeys.DOCTYPE_SYSTEM, doc.getDoctype().getSystemId()); t.transform(new DOMSource(doc), new StreamResult(out)); } catch (TransformerException e) { throw new AssertionError(e); // Can't happen! } } Most of the time, there is a good chance somebody else has written the code before, but if nobody can understand the interface to that code, nobody’s going to use it. The result is that most of the code we write is just a variation on a theme. Despite our best efforts, we’ve fallen a little short of our ideal, but the web brings us closer. If we want to send e-mails to our customers, we could write an e-mail-sending library. More likely, we’d use an existing one for our language. Even then, we probably wouldn’t have niceties like A/B testing or DKIM signing. Alternatively, we could just fire some HTTP requests at MailChimp, and get a whole slew of features without getting anywhere near the code that implements them. The web is inherently language agnostic. So long as your language can send and receive text over HTTP, and probably parse some JSON, you’re about as well equipped as anybody. Instead of building libraries for a specific language, we can build a service that almost every language can reuse. The text-based nature of HTTP also helps to limit the complexity of the API. As SOAP will attest, you can still make a horrible mess using HTTP, but at least it is an obvious horrible mess. Complex data structures are tedious to marshal to and from text, providing a strong incentive to keep things simple. By contrast, spotting the complexities in a class hierarchy is often not as easy. HTTP doesn’t solve every problem. It probably isn’t such a good idea to use it inside an inner loop that’s executed thousands of times per second. What’s more, the HTTP approach might introduce some new problems. We often need to add a thin shim to each application that we wish to communicate over HTTP. 
For instance, we might need to write a small plugin in PHP if we want to integrate WordPress into our system. Suddenly, instead of a system written in one language, we’re maintaining a system with several distinct languages and platforms. Even then, we should strive to avoid re-implementing the same old thing. As programmers, we consistently underestimate both the cost of building a system and the ongoing maintenance. If we allow ourselves to integrate existing applications, even if they’re in unfamiliar languages, we save ourselves those development and maintenance costs, as well as being able to pick the best solution for our problem. Thanks to the web, HTTP is often the easiest way to get there.
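    To make the language-agnostic point concrete, here is a hedged C# sketch of calling a mail service over HTTP; the endpoint and JSON fields are invented for illustration and are not MailChimp's real API. A production client would use a JSON serializer rather than string concatenation.

        using System.Net.Http;
        using System.Text;
        using System.Threading.Tasks;

        class MailServiceClient
        {
            private static readonly HttpClient Client = new HttpClient();

            // Sends one e-mail via a (hypothetical) HTTP mail service.
            public static async Task SendAsync(string to, string subject, string body)
            {
                string json = "{\"to\":\"" + to + "\",\"subject\":\"" + subject + "\",\"body\":\"" + body + "\"}";
                using (var content = new StringContent(json, Encoding.UTF8, "application/json"))
                {
                    HttpResponseMessage response = await Client.PostAsync("https://mail.example.com/v1/messages", content);
                    response.EnsureSuccessStatusCode();
                }
            }
        }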


  • Rebuilding CoasterBuzz, Part II: Hot data objects

    - by Jeff
    This is the second post, originally from my personal blog, in a series about rebuilding one of my Web sites, which has been around for 12 years. More: Part I: Evolution, and death to WCF After the rush to get moving on stuff, I temporarily lost interest. I went almost two weeks without touching the project, in part because the next thing on my backlog was doing up a bunch of administrative pages. So boring. Unfortunately, because most of the site's content is user-generated, you need some facilities for editing data. CoasterBuzz has a database full of amusement parks and roller coasters. The entities enjoy the relationships that you would expect, though they're further defined by "instances" of a coaster, to define one that has moved between parks as one, with different names and operational dates. And of course, there are pictures and news items, too. It's not horribly complex, except when you have to account for a name change and display just the newest name. In all previous versions, data access was straight SQL. As so much of the old code was rooted in 2003, with some changes in 2008, there wasn't much in the way of ORM frameworks going on then. Let me rephrase that, I mostly wasn't interested in ORM's. Since that time, I used a little LINQ to SQL in some projects, and a whole bunch of nHibernate while at Microsoft. Through all of that experience, I have to admit that these frameworks are often a bigger pain in the ass than not. They're great for basic crud operations, but when you start having all kinds of exotic relationships, they get difficult, and generate all kinds of weird SQL under the covers. The black box can quickly turn into a black hole. Sometimes you end up having to build all kinds of new expertise to do things "right" with a framework. Still, despite my reservations, I used the newer version of Entity Framework, with the "code first" modeling, in a science project and I really liked it. Since it's just a right-click away with NuGet, I figured I'd give it a shot here. My initial effort was spent defining the context class, which requires a bit of work because I deviate quite a bit from the conventions that EF uses, starting with table names. Then throw some partial querying of certain tables (where you'll find image data), and you're splitting tables across several objects (navigation properties). I won't go into the details, because these are all things that are well documented around the Internet, but there was a minor learning curve there. The basics of reading data using EF are fantastic. For example, a roller coaster object has a park associated with it, as well as a number of instances (if it was ever relocated), and there also might be a big banner image for it. This is stupid easy to use because it takes one line of code in your repository class, and by the time you pass it to the view, you have a rich object graph that has everything you need to display stuff. Likewise, editing simple data is also, well, simple. For this goodness, thank the ASP.NET MVC framework. The UpdateModel() method on the controllers is very elegant. Remember the old days of assigning all kinds of properties to objects in your Webforms code-behind? What a time consuming mess that used to be. Even if you're not using an ORM tool, having hydrated objects come off the wire is such a time saver. Not everything is easy, though. 
When you have to persist a complex graph of objects, particularly if they were composed in the user interface with all kinds of AJAX elements and list boxes, it's not just a simple matter of submitting the form. There were a few instances where I ended up going back to "old-fashioned" SQL just in the interest of time. It's not that I couldn't do what I needed with EF, it's just that the efficiency, both my own and that of the generated SQL, wasn't good. Since EF context objects expose a database connection object, you can use that to do the old school ADO.NET stuff you've done for a decade. Using various extension methods from POP Forums' data project, it was a breeze. You just have to stick to your decision, in this case. When you start messing with SQL directly, you can't go back in the same code to messing with entities because EF doesn't know what you're changing. Not really a big deal. There are a number of take-aways from using EF. The first is that you write a lot less code, which has always been a desired outcome of ORM's. The other lesson, and I particularly learned this the hard way working on the MSDN forums back in the day, is that trying to retrofit an ORM framework into an existing schema isn't fun at all. The CoasterBuzz database isn't bad, but there are design decisions I'd make differently if I were starting from scratch. Now that I have some of this stuff done, I feel like I can start to move on to the more interesting things on the backlog. There's a lot to do, but at least it's fun stuff, and not more forms that will be used infrequently.
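    For readers who have not seen the "code first" style mentioned above, here is a hedged sketch of what a context and an eager-loading query can look like; the entity and property names are guesses based on the description, not the actual CoasterBuzz schema.

        using System.Collections.Generic;
        using System.Data.Entity;   // Entity Framework 4.1 "code first"
        using System.Linq;

        public class Park { public int ParkId { get; set; } public string Name { get; set; } }
        public class CoasterInstance { public int CoasterInstanceId { get; set; } public string Name { get; set; } }

        public class Coaster
        {
            public int CoasterId { get; set; }
            public string Name { get; set; }
            public virtual Park Park { get; set; }
            public virtual ICollection<CoasterInstance> Instances { get; set; }
        }

        public class CoasterBuzzContext : DbContext
        {
            public DbSet<Coaster> Coasters { get; set; }
            public DbSet<Park> Parks { get; set; }
        }

        // One repository call can hand the view a populated graph:
        // var coaster = context.Coasters
        //     .Include("Park")
        //     .Include("Instances")
        //     .SingleOrDefault(c => c.CoasterId == id);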


  • Proxied calls not working as expected

    - by AndyH
    I have been modifying an application to have a cleaner client/server split to allow for load splitting and resource sharing etc. Everything is written to an interface so it was easy to add a remoting layer to the interface using a proxy. Everything worked fine. The next phase was to add a caching layer to the interface and again this worked fine and speed was improved but not as much as I would have expected. On inspection it became very clear what was going on. I feel sure that this behavior has been seen many times before and there is probably a design pattern to solve the problem but it eludes me and I'm not even sure how to describe it. It is easiest explained with an example. Let's imagine the interface is interface IMyCode { List<IThing> getLots( List<String> ); IThing getOne( String id ); } The getLots() method calls getOne() and fills up the list before returning. The interface is implemented at the client which is proxied to a remoting client which then calls the remoting server which in turn calls the implementation at the server. At the client and the server layers there is also a cache. So we have :- Client interface | Client cache | Remote client | Remote server | Server cache | Server interface If we call getOne("A") at the client interface, the call is passed to the client cache which faults. This then calls the remote client which passes the call to the remote server. This then calls the server cache which also faults and so the call is eventually passed to the server interface which actually gets the IThing. In turn the server cache is filled and finally the client cache also. If getOne("A") is again called at the client interface the client cache has the data and it gets returned immediately. If a second client called getOne("B") it would fill the server cache with "B" as well as it's own client cache. Then, when the first client calls getOne("B") the client cache faults but the server cache has the data. This is all as one would expect and works well. Now lets call getLots( [ "C", "D" ] ). This works as you would expect by calling getOne() twice but there is a subtlety here. The call to getLots() cannot directly make use of the cache. Therefore the sequence is to call the client interface which in turn calls the remote client, then the remote server and eventually the server interface. This then calls getOne() to fill the list before returning. The problem is that the getOne() calls are being satisfied at the server when ideally they should be satisfied at the client. If you imagine that the client/server link is really slow then it becomes clear why the client call is more efficient than the server call once the client cache has the data. This example is contrived to illustrate the point. The more general problem is that you cannot just keep adding proxied layers to an interface and expect it to work as you would imagine. As soon as the call goes 'through' the proxy any subsequent calls are on the proxied side rather than 'self' side. Have I failed to learn or not learned something correctly? All this is implemented in Java and I haven't used EJBs. It seems that the example may be confusing. The problem is nothing to do with cache efficiencies. It is more to do with an illusion created by the use of proxies or AOP techniques in general. When you have an object whose class implements an interface there is an assumption that a call on that object might make further calls on that same object. 
For example, public String getInternalString() { return InetAddress.getLocalHost().toString(); } public String getString() { return getInternalString(); } If you get an object and call getString() the result depends where the code is running. If you add a remoting proxy to the class then the result could be different for calls to getString() and getInternalString() on the same object. This is because the initial call gets 'deproxied' before the actual method is called. I find this not only confusing but I wonder how I can control this behavior especially as the use of the proxy may be by a third party. The concept is fine but the practice is certainly not what I expected. Have I missed the point somewhere?


  • Separating text strings into a table of individual words in SQL via XML.

    - by Phil Factor
    Nearly nine years ago, Mike Rorke of the SQL Server 2005 XML team blogged 'Querying Over Constructed XML Using Sub-queries'. I remember reading it at the time without being able to think of a use for what he was demonstrating. Just a few weeks ago, whilst preparing my article on searching strings, I got out my trusty function for splitting strings into words and something reminded me of the old blog. I'd been trying to think of a way of using XML to split strings reliably into words. The routine I devised turned out to be slightly slower than the iterative word chop I've always used in the past, so I didn't publish it. It was then I suddenly remembered the old routine. Here is my version of it. I've unwrapped it from its obvious home in a function or procedure just so it is easy to appreciate. What it does is to chop a text string into individual words using XQuery and the good old nodes() method. I've benchmarked it and it is quicker than any of the SQL ways of doing it that I know about. Obviously, you can't use the trick I described here to do it, because it is awkward to use REPLACE() on 1…n characters of whitespace. I'll carry on using my iterative function, since it is able to tell me the location of each word as a character-offset from the start, and also because this method leaves punctuation in (removing it takes time!). However, I can see other uses for this in passing lists as input or output parameters, or as return values.

        if exists (Select * from sys.xml_schema_collections where name like 'WordList')
          drop XML SCHEMA COLLECTION WordList
        go

        create xml schema collection WordList as
        '<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">
          <xs:element name="words">
            <xs:simpleType>
              <xs:list itemType="xs:string" />
            </xs:simpleType>
          </xs:element>
        </xs:schema>'
        go

        DECLARE @string VARCHAR(MAX) --we'll get some sample data from the great Ogden Nash
        Select @String='This is a song to celebrate banks, Because they are full of money and you go into them and all you hear is clinks and clanks, Or maybe a sound like the wind in the trees on the hills, Which is the rustling of the thousand dollar bills. Most bankers dwell in marble halls, Which they get to dwell in because they encourage deposits and discourage withdrawals, And particularly because they all observe one rule which woe betides the banker who fails to heed it, Which is you must never lend any money to anybody unless they don''t need it. I know you, you cautious conservative banks! If people are worried about their rent it is your duty to deny them the loan of one nickel, yes, even one copper engraving of the martyred son of the late Nancy Hanks; Yes, if they request fifty dollars to pay for a baby you must look at them like Tarzan looking at an uppity ape in the jungle, And tell them what do they think a bank is, anyhow, they had better go get the money from their wife''s aunt or ungle. But suppose people come in and they have a million and they want another million to pile on top of it, Why, you brim with the milk of human kindness and you urge them to accept every drop of it, And you lend them the million so then they have two million and this gives them the idea that they would be better off with four, So they already have two million as security so you have no hesitation in lending them two more, And all the vice-presidents nod their heads in rhythm, And the only question asked is do the borrowers want the money sent or do they want to take it withm. Because I think they deserve our appreciation and thanks, the jackasses who go around saying that health and happi- ness are everything and money isn''t essential, Because as soon as they have to borrow some unimportant money to maintain their health and happiness they starve to death so they can''t go around any more sneering at good old money, which is nothing short of providential. '

        --we now turn it into XML
        declare @xml_data xml(WordList)
        set @xml_data='<words>'+ replace(@string,'&', '&amp;')+'</words>'

        select T.ref.value('.', 'nvarchar(100)')
        from (Select @xml_data.query('
                  for $i in data(/words) return
                  element li { $i }
              ')) A(list)
        cross apply A.List.nodes('/li') T(ref)

    …which gives (truncated, of course)…


  • How to avoid open-redirect vulnerability and safely redirect on successful login (HINT: ASP.NET MVC

    - by Brad B.
    Normally, when a site requires that you are logged in before you can access a certain page, you are taken to the login screen and, after successfully authenticating yourself, you are redirected back to the originally requested page. This is great for usability - but without careful scrutiny, this feature can easily become an open redirect vulnerability. Sadly, for an example of this vulnerability, look no further than the default LogOn action provided by ASP.NET MVC 2:

        [HttpPost]
        public ActionResult LogOn(LogOnModel model, string returnUrl)
        {
            if (ModelState.IsValid)
            {
                if (MembershipService.ValidateUser(model.UserName, model.Password))
                {
                    FormsService.SignIn(model.UserName, model.RememberMe);
                    if (!String.IsNullOrEmpty(returnUrl))
                    {
                        return Redirect(returnUrl); // open redirect vulnerability HERE
                    }
                    else
                    {
                        return RedirectToAction("Index", "Home");
                    }
                }
                else
                {
                    ModelState.AddModelError("", "User name or password incorrect...");
                }
            }
            return View(model);
        }

    If a user is successfully authenticated, they are redirected to "returnUrl" (if it was provided via the login form submission). Here is a simple example attack (one of many, actually) that exploits this vulnerability:

    1. The attacker, pretending to be the victim's bank, sends an email to the victim containing a link, like this: http://www.mybank.com/logon?returnUrl=http://www.badsite.com
    2. Having been taught to verify the ENTIRE domain name (e.g., google.com = GOOD, google.com.as31x.example.com = BAD), the victim knows the link is OK - there isn't any tricky sub-domain phishing going on.
    3. The victim clicks the link, sees their actual familiar banking website and is asked to log on.
    4. The victim logs on and is subsequently redirected to http://www.badsite.com, which is made to look exactly like the victim's bank's website, so the victim doesn't know he is now on a different site.
    5. http://www.badsite.com says something like "We need to update our records - please type in some extremely personal information below: [ssn], [address], [phone number], etc."
    6. The victim, still thinking he is on his banking website, falls for the ploy and provides the attacker with the information.

    Any ideas on how to maintain this redirect-on-successful-login functionality yet avoid the open-redirect vulnerability? I'm leaning toward the option of splitting the "returnUrl" parameter into controller/action parts and using "RedirectToRouteResult" instead of simply "Redirect". Does this approach open any new vulnerabilities? Side note: I know this open redirect may not seem to be a big deal compared to the likes of XSS and CSRF, but us developers are the only thing protecting our customers from the bad guys - anything we can do to make the bad guys' job harder is a win in my book. Thanks, Brad
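    A common mitigation, offered here as a hedged sketch rather than the official fix, is to redirect only when the return URL is local to your own site. ASP.NET MVC 3 and later expose Url.IsLocalUrl for this; on MVC 2 an equivalent check can be hand-rolled:

        [HttpPost]
        public ActionResult LogOn(LogOnModel model, string returnUrl)
        {
            if (ModelState.IsValid)
            {
                if (MembershipService.ValidateUser(model.UserName, model.Password))
                {
                    FormsService.SignIn(model.UserName, model.RememberMe);
                    if (IsLocalUrl(returnUrl))
                    {
                        return Redirect(returnUrl);            // safe: stays on our own host
                    }
                    return RedirectToAction("Index", "Home");  // ignore absolute or protocol-relative URLs
                }
                ModelState.AddModelError("", "User name or password incorrect...");
            }
            return View(model);
        }

        // Minimal local-URL check; rejects "//evil.com" and "/\evil.com" style values.
        // MVC 3 and later ship Url.IsLocalUrl(returnUrl), which can be used instead.
        private static bool IsLocalUrl(string url)
        {
            return !String.IsNullOrEmpty(url)
                && url.StartsWith("/")
                && !url.StartsWith("//")
                && !url.StartsWith("/\\");
        }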


  • WPF FlowDocument: force calculation of height etc. "off screen"

    - by Lars
    My target: a DocumentPaginator which takes a FlowDocument with a table, which splits the table to fit the pagesize and repeat the header/footer (special tagged TableRowGroups) on every page. For splitting the table I have to know the heights of its rows. While building the FlowDocument-table by code, the height/width of the TableRows are 0 (of course). If I assign this document to a FlowDocumentScrollViewer (PageSize is set), the heights etc. are calculated. Is this possible without using an UI-bound object? Instantiating a FlowDocumentScrollViewer which is not bound to a window doesn't force the pagination/calculation of the heights. This is how I determine the height of a TableRow (which works perfectly for documents shown by a FlowDocumentScrollViewer): FlowDocument doc = BuildNewDocument(); // what is the scrollviewer doing with the FlowDocument? FlowDocumentScrollViewer dv = new FlowDocumentScrollViewer(); dv.Document = doc; dv.Arrange(new Rect(0, 0, 0, 0)); TableRowGroup dataRows = null; foreach (Block b in doc.Blocks) { if (b is Table) { Table t = b as Table; foreach (TableRowGroup g in t.RowGroups) { if ((g.Tag is String) && ((String)g.Tag == "dataRows")) { dataRows = g; break; } } } if (dataRows != null) break; } if (dataRows != null) { foreach (TableRow r in dataRows.Rows) { double maxCellHeight = 0.0; foreach (TableCell c in r.Cells) { Rect start = c.ElementStart.GetCharacterRect(LogicalDirection.Forward); Rect end = c.ElementEnd.GetNextInsertionPosition(LogicalDirection.Backward).GetCharacterRect(LogicalDirection.Forward); double cellHeight = end.Bottom - start.Top; if (cellHeight > maxCellHeight) maxCellHeight = cellHeight; } System.Diagnostics.Trace.WriteLine("row " + dataRows.Rows.IndexOf(r) + " = " + maxCellHeight); } } Edit: I added the FlowDocumentScrollViewer to my example. The call of "Arrange" forces the FlowDocument to calculate its heights etc. I would like to know, what the FlowDocumentScrollViewer is doing with the FlowDocument, so I can do it without the UIElement. Is it possible?
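    One hedged alternative to arranging an off-screen FlowDocumentScrollViewer is to ask the document's own paginator to lay itself out; this is a sketch of the API shape, not a verified fix for the exact scenario above.

        using System.Windows;
        using System.Windows.Documents;

        static class FlowDocumentLayoutHelper
        {
            // Forces layout/pagination of a FlowDocument without showing it in a viewer.
            public static void ForcePagination(FlowDocument doc, Size pageSize)
            {
                // Every FlowDocument exposes a paginator through IDocumentPaginatorSource.
                DocumentPaginator paginator = ((IDocumentPaginatorSource)doc).DocumentPaginator;
                paginator.PageSize = pageSize;
                paginator.ComputePageCount();
            }
        }

        // Usage sketch: roughly A4 in device-independent pixels (96 DPI).
        // FlowDocumentLayoutHelper.ForcePagination(doc, new Size(793.7, 1122.5));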


  • Windows Azure : Server Error , 404 - File or directory not found.

    - by veda
    I want to upload some files of size 35MB on to the blob container. I have coded for splitting the data into blocks and upload it on to the blob container and form a blob using PUT. I tested the code for some files of Size 2MB or something... It worked well. But When I tried it for a large MB file, its giving me this error Server Error 404 - File or directory not found. The resource you are looking for might have been removed, had its name changed, or is temporarily unavailable. when I tried it for files of size 6MB, it gives me this error.. Server Error in '/' Application. Runtime Error Description: An application error occurred on the server. The current custom error settings for this application prevent the details of the application error from being viewed remotely (for security reasons). It could, however, be viewed by browsers running on the local server machine. Details: To enable the details of this specific error message to be viewable on remote machines, please create a <customErrors> tag within a "web.config" configuration file located in the root directory of the current web application. This <customErrors> tag should then have its "mode" attribute set to "Off". <!-- Web.Config Configuration File --> <configuration> <system.web> <customErrors mode="Off"/> </system.web> </configuration> Notes: The current error page you are seeing can be replaced by a custom error page by modifying the "defaultRedirect" attribute of the application's <customErrors> configuration tag to point to a custom error page URL. <!-- Web.Config Configuration File --> <configuration> <system.web> <customErrors mode="RemoteOnly" defaultRedirect="mycustompage.htm"/> </system.web> </configuration> Can anyone tell me, How to solve this...


  • EJB3 Transaction Propogation

    - by Matt S.
    I have a stateless bean something like: @Stateless public class MyStatelessBean implements MyStatelessLocal, MyStatelessRemote { @PersistenceContext(unitName="myPC") private EntityManager mgr; @TransationAttribute(TransactionAttributeType.SUPPORTED) public void processObjects(List<Object> objs) { // this method just processes the data; no need for a transaction for(Object obj : objs) { this.process(obj); } } @TransationAttribute(TransactionAttributeType.REQUIRES_NEW) public void process(Object obj) { // do some work with obj that must be in the scope of a transaction this.mgr.merge(obj); // ... this.mgr.merge(obj); // ... this.mgr.flush(); } } The typically usage then is the client would call processObjects(...), which doesn't actually interact with the entity manager. It does what it needs to do and calls process(...) individually for each object to process. The duration of process(...) is relatively short, but processObjects(...) could take a very long time to run through everything. Therefore I don't want it to maintain an open transaction. I do need the individual process(...) operations to operate within their own transaction. This should be a new transaction for every call. Lastly I'd like to keep the option open for the client to call process(...) directly. I've tried a number of different transaction types: never, not supported, supported (on processObjects) and required, requires new (on process) but I get TransactionRequiredException every time merge() is called. I've been able to make it work by splitting up the methods into two different beans: @Stateless @TransationAttribute(TransactionAttributeType.NOT_SUPPORTED) public class MyStatelessBean1 implements MyStatelessLocal1, MyStatelessRemote1 { @EJB private MyStatelessBean2 myBean2; public void processObjects(List<Object> objs) { // this method just processes the data; no need for a transaction for(Object obj : objs) { this.myBean2.process(obj); } } } @Stateless public class MyStatelessBean2 implements MyStatelessLocal2, MyStatelessRemote2 { @PersistenceContext(unitName="myPC") private EntityManager mgr; @TransationAttribute(TransactionAttributeType.REQUIRES_NEW) public void process(Object obj) { // do some work with obj that must be in the scope of a transaction this.mgr.merge(obj); // ... this.mgr.merge(obj); // ... this.mgr.flush(); } } but I'm still curious if it's possible to accomplish this in one class. It looks to me like the transaction manager only operates at the bean level, even when individual methods are given more specific annotations. So if I mark one method in a way to prevent the transaction from starting calling other methods within that same instance will also not create a transaction, no matter how they're marked? I'm using JBoss Application Server 4.2.1.GA, but non-specific answers are welcome / preferred.


  • How to eager load sibling data using LINQ to SQL?

    - by Scott
    The goal is to issue the fewest queries to SQL Server using LINQ to SQL without using anonymous types. The return type for the method will need to be IList<Child1>. The entities are Parent, Child1, Child2 and Grandchild1, and the relationships are as follows:

    - Parent -> Child1 is a one-to-many relationship
    - Child1 -> Grandchild1 is a one-to-n relationship (where n is zero to infinity)
    - Parent -> Child2 is a one-to-n relationship (where n is zero to infinity)

    I am able to eager load the Parent, Child1 and Grandchild1 data, resulting in one query to SQL Server. This query with load options eager loads all of the data except the sibling data (Child2):

        DataLoadOptions loadOptions = new DataLoadOptions();
        loadOptions.LoadWith<Child1>(o => o.GrandChild1List);
        loadOptions.LoadWith<Child1>(o => o.Parent);
        dataContext.LoadOptions = loadOptions;

        IQueryable<Child1> children = from child in dataContext.Child1
                                      select child;

    I need to load the sibling data as well. One approach I have tried is splitting the query into two LINQ to SQL queries and merging the result sets together (not pretty); however, upon accessing the sibling data it is lazy loaded anyway. Adding the sibling load option will issue a query to SQL Server for each Grandchild1 and Child2 record (which is exactly what I am trying to avoid):

        DataLoadOptions loadOptions = new DataLoadOptions();
        loadOptions.LoadWith<Child1>(o => o.GrandChild1List);
        loadOptions.LoadWith<Child1>(o => o.Parent);
        loadOptions.LoadWith<Parent>(o => o.Child2List);
        dataContext.LoadOptions = loadOptions;

        IQueryable<Child1> children = from child in dataContext.Child1
                                      select child;

        exec sp_executesql N'SELECT * FROM [dbo].[Child2] AS [t0] WHERE [t0].[ForeignKeyToParent] = @p0',N'@p0 int',@p0=1
        exec sp_executesql N'SELECT * FROM [dbo].[Child2] AS [t0] WHERE [t0].[ForeignKeyToParent] = @p0',N'@p0 int',@p0=2
        exec sp_executesql N'SELECT * FROM [dbo].[Child2] AS [t0] WHERE [t0].[ForeignKeyToParent] = @p0',N'@p0 int',@p0=3
        exec sp_executesql N'SELECT * FROM [dbo].[Child2] AS [t0] WHERE [t0].[ForeignKeyToParent] = @p0',N'@p0 int',@p0=4

    I've also written LINQ to SQL queries to join in all of the data in the hope that it would eager load the data; however, when the LINQ to SQL EntitySet of Child2 or Grandchild1 is accessed, it lazy loads the data. The reason for returning the IList<Child1> is to hydrate business objects. My thoughts are that I am either:

    1. Approaching this problem the wrong way.
    2. Should be calling a stored procedure instead?
    3. My organization should not be using LINQ to SQL as an ORM?

    Any help is greatly appreciated. Thank you, -Scott
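    To make the "two queries merged in memory" idea above more concrete, here is a hedged sketch: one round trip eager-loads Child1 with its parent and grandchildren as before, a second pulls all relevant Child2 rows with a single IN query, and the two sets are matched up while hydrating the business objects. The ParentId/ForeignKeyToParent names are assumed from the generated SQL shown above.

        DataLoadOptions loadOptions = new DataLoadOptions();
        loadOptions.LoadWith<Child1>(o => o.GrandChild1List);
        loadOptions.LoadWith<Child1>(o => o.Parent);
        dataContext.LoadOptions = loadOptions;

        // Round trip 1: Child1 + Parent + Grandchild1.
        List<Child1> children = dataContext.Child1.ToList();

        // Round trip 2: every sibling for the parents just loaded
        // (Contains on a local collection translates to a single IN clause).
        List<int> parentIds = children.Select(c => c.ParentId).Distinct().ToList();
        var siblingsByParent = dataContext.Child2
            .Where(c2 => parentIds.Contains(c2.ForeignKeyToParent))
            .ToLookup(c2 => c2.ForeignKeyToParent);

        // While building each business object:
        // var siblings = siblingsByParent[child.ParentId];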

    Read the article

  • Faster Matrix Multiplication in C#

    - by Kyle Lahnakoski
    I have a small C# project that involves matrices. I am processing large amounts of data by splitting it into n-length chunks, treating the chunks as vectors, and multiplying by a Vandermonde** matrix. The problem is, depending on the conditions, the size of the chunks and corresponding Vandermonde** matrix can vary. I have a general solution which is easy to read, but way too slow: public byte[] addBlockRedundancy(byte[] data) { if (data.Length!=numGood) D.error("Expecting data to be just "+numGood+" bytes long"); aMatrix d=aMatrix.newColumnMatrix(this.mod, data); var r=vandermonde.multiplyBy(d); return r.ToByteArray(); }//method This can process about 1/4 megabyte per second on my i5 U470 @ 1.33GHz. I can make this faster by manually inlining the matrix multiplication: int o=0; int d=0; for (d=0; d<data.Length-numGood; d+=numGood) { for (int r=0; r<numGood+numRedundant; r++) { Byte value=0; for (int c=0; c<numGood; c++) { value=mod.Add(value, mod.Multiply(vandermonde.get(r, c), data[d+c])); }//for output[r][o]=value; }//for o++; }//for This can process about 1 meg a second. (Please note the "mod" is performing operations over GF(2^8) modulo my favorite irreducible polynomial.) I know this can get a lot faster: after all, the Vandermonde** matrix is mostly zeros. I should be able to make a routine, or find a routine, that can take my matrix and return an optimized method which will effectively multiply vectors by the given matrix, but faster. Then, when I give this routine a 5x5 Vandermonde matrix (the identity matrix), there is simply no arithmetic to perform, and the original data is just copied. ** Please note: when I use the term "Vandermonde", I actually mean an identity matrix with some number of rows from the Vandermonde matrix appended (see comments). This matrix is wonderful because of all the zeros, and because if you remove enough rows (of your choosing) to make it square, it is an invertible matrix. And, of course, I would like to use this same routine to convert any one of those inverted matrices into an optimized series of instructions. How can I make this matrix multiplication faster? Thanks! (edited to correct my mistake with Vandermonde matrix)
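
    One way to exploit the zeros, sketched against the fields already used in the question (numGood, numRedundant, vandermonde, mod, data, output, d, o are all assumed from the code above): precompute, once per matrix and outside the chunk loop, only the non-zero (column, coefficient) pairs for each output row, and collapse pure unit rows to a straight copy. The inner loop then does work proportional to the non-zero entries instead of to numGood.

        // Precompute once per matrix (outside the chunk loop): keep only the
        // non-zero coefficients of each output row.
        int rows = numGood + numRedundant;
        int[][] cols = new int[rows][];
        byte[][] coeffs = new byte[rows][];
        for (int r = 0; r < rows; r++)
        {
            var c = new List<int>();
            var k = new List<byte>();
            for (int col = 0; col < numGood; col++)
            {
                byte v = vandermonde.get(r, col);
                if (v != 0) { c.Add(col); k.Add(v); }
            }
            cols[r] = c.ToArray();
            coeffs[r] = k.ToArray();
        }

        // Replacement for the inner pair of loops, run per chunk:
        for (int r = 0; r < rows; r++)
        {
            byte value;
            if (coeffs[r].Length == 1 && coeffs[r][0] == 1)
            {
                value = data[d + cols[r][0]];        // unit row: plain copy, no arithmetic
            }
            else
            {
                value = 0;
                for (int i = 0; i < cols[r].Length; i++)
                    value = mod.Add(value, mod.Multiply(coeffs[r][i], data[d + cols[r][i]]));
            }
            output[r][o] = value;
        }

    A 256x256 lookup table (or log/antilog tables) for mod.Multiply is another cheap win, if it isn't implemented that way already.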

    Read the article

  • Create links programmatically inside an EmberJS view

    - by Michael Gallego
    I have a pretty complex view to render which involves some kind of recursion (the typical folder/file nested list). The fact that it contains heterogeneous objects (folders and files) makes it even harder to write Handlebars templates. Therefore, the only solution I've found is to create a view and manually fill the render buffer. I came up with the following solution: App.LibraryContentList = Ember.View.extend({ tagName: 'ol', classNames: ['project-list', 'dd-list'], nameChanged: function() { this.rerender(); }.observes('content.@each.name'), render: function(buffer) { // We only start with depth of zero var content = this.get('content').filterProperty('depth', 0); content.forEach(function(item) { this.renderItem(buffer, item); }, this); }, renderItem: function(buffer, item) { switch (item.constructor.toString()) { case 'Photo.Folder': this.renderFolder(buffer, item); break; case 'Photo.File': this.renderFile(buffer, item); break; } }, renderFolder: function(buffer, folder) { buffer.push('<li class="folder dd-item">'); buffer.push('<span class="dd-handle">' + folder.get('name') + '</span>'); // Merge sub folders and files, and sort them by sort order var content = this.mergeAndSort(); if (content.get('length') > 0) { buffer.push('<ol>'); content.forEach(function(item) { this.renderItem(buffer, item); }, this); buffer.push('</ol>'); } buffer.push('</li>'); }, renderFile: function(buffer, file) { buffer.push('<li class="album dd-item">'); buffer.push('<span class="dd-handle">' + file.get('name') + '</span>'); buffer.push('</li>'); } }); Now, what I'd like is to be able to add links so that each folder and each file is clickable and redirects to another route. But how am I supposed to do that, as I don't have access to the linkTo helper? I've tried to play with the LinkView view, but without any success. Should I register handlers manually for each item? I've also thought about breaking this up with a CollectionView instead, and splitting the content by depth so that I could render it using templates, but it seems more complicated. Any thoughts?
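
    One option that avoids linkTo entirely, roughly sketched against Ember 1.x APIs (the route names, the data attributes and the id property are all assumptions, not taken from the question): write an identifier into the buffer for each item and handle the view's click event, then ask the controller to transition.

        App.LibraryContentList = Ember.View.extend({
          // ...existing properties and render code, but emit something the click
          // handler can read back, e.g. in renderFolder/renderFile:
          //   buffer.push('<span class="dd-handle" data-id="' + folder.get('id') +
          //               '" data-type="folder">' + folder.get('name') + '</span>');

          click: function(event) {
            var $handle = Ember.$(event.target).closest('[data-id]');
            if (!$handle.length) { return; }

            var id    = $handle.attr('data-id');
            var route = $handle.attr('data-type') === 'folder' ? 'folder' : 'file';

            // Assumes routes named 'folder' and 'file' exist and take an id segment.
            this.get('controller').transitionToRoute(route, id);
          }
        });

    A single delegated handler like this scales to any depth of nesting, since the buffer-rendered spans all bubble up to the view element.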

    Read the article

  • SQLiteException and SQLite error near "(": syntax error with Subsonic ActiveRecord

    - by nvuono
    I ran into an interesting error with the following LiNQ query using LiNQPad and when using Subsonic 3.0.x w/ActiveRecord within my project and wanted to share the error and resolution for anyone else who runs into it. The linq statement below is meant to group entries in the tblSystemsValues collection into their appropriate system and then extract the system with the highest ID. from ksf in KeySafetyFunction where ksf.Unit == 2 && ksf.Condition_ID == 1 join sys in tblSystems on ksf.ID equals sys.KeySafetyFunction join xval in (from t in tblSystemsValues group t by t.tblSystems_ID into groupedT select new { sysId = groupedT.Key, MaxID = groupedT.Max(g=>g.ID), MaxText = groupedT.First(gt2 => gt2.ID == groupedT.Max(g=>g.ID)).TextValue, MaxChecked = groupedT.First(gt2 => gt2.ID == groupedT.Max(g=>g.ID)).Checked }) on sys.ID equals xval.sysId select new {KSFDesc=ksf.Description, sys.Description, xval.MaxText, xval.MaxChecked} On its own, the subquery for grouping into groupedT works perfectly and the query to match up KeySafetyFunctions with their System in tblSystems also works perfectly on its own. However, when trying to run the completed query in linqpad or within my project I kept running into a SQLiteException SQLite Error Near "(" First I tried splitting the queries up within my project because I knew that I could just run a foreach loop over the results if necessary. However, I continued to receive the same exception! I eventually separated the query into three separate parts before I realized that it was the lazy execution of the queries that was killing me. It then became clear that adding the .ToList() specifier after the myProtectedSystem query below was the key to avoiding the lazy execution after combining and optimizing the query and being able to get my results despite the problems I encountered with the SQLite driver. // determine the max Text/Checked values for each system in tblSystemsValue var myProtectedValue = from t in tblSystemsValue.All() group t by t.tblSystems_ID into groupedT select new { sysId = groupedT.Key, MaxID = groupedT.Max(g => g.ID), MaxText = groupedT.First(gt2 => gt2.ID ==groupedT.Max(g => g.ID)).TextValue, MaxChecked = groupedT.First(gt2 => gt2.ID ==groupedT.Max(g => g.ID)).Checked}; // get the system description information and filter by Unit/Condition ID var myProtectedSystem = (from ksf in KeySafetyFunction.All() where ksf.Unit == 2 && ksf.Condition_ID == 1 join sys in tblSystem.All() on ksf.ID equals sys.KeySafetyFunction select new {KSFDesc = ksf.Description, sys.Description, sys.ID}).ToList(); // finally join everything together AFTER forcing execution with .ToList() var joined = from protectedSys in myProtectedSystem join protectedVal in myProtectedValue on protectedSys.ID equals protectedVal.sysId select new {protectedSys.KSFDesc, protectedSys.Description, protectedVal.MaxChecked, protectedVal.MaxText}; // print the gratifying debug results foreach(var protectedItem in joined) { System.Diagnostics.Debug.WriteLine(protectedItem.Description + ", " + protectedItem.KSFDesc + ", " + protectedItem.MaxText + ", " + protectedItem.MaxChecked); }

    Read the article

  • GUI Application becomes unresponsive while http request is being done

    - by JW
    I have just made my first proper little desktop GUI application that basically wraps a GUI (php-gtk) interface around a SimpleTest Web Test case to make it act as a remote testing client. Each time the local The Web Test case runs, it sends an HTTP request to another SimpleTest case (that has an XHTML interface) sitting on my server. The application, allows me to run one local test that collates information from multiple remote tests. It just has a 'Start Test' button, 'Stop Test' button and a setting to increase/decrease the number of remote tests conducted in each HTTP Request. Each test-run takes about an hour to complete. The trouble is, most of the time the application is making http requests. Furthermore, whenever an HTTP Request is being made, the application's GUI is unresponsive. I have taken to making the application wait a few seconds (iterating through the Gtk::main_iteration ) between requests in order to give the user time to re-size the window, press the Stop button, etc. But, this makes the whole test run take a lot a longer than is necessary. <?php require_once('simpletest/web_tester.php'); class TestRemoteTestingClient extends WebTestCase { function testRunIterations() { ... $this->assertTrue($this->get($nextUrl), 'getting from pointer:'. $this->_remoteMementoPointer); $this->assertResponse(200, "checking response for " . $nextUrl ); $this->assertText('RemoteNodeGreen'); $this->doGtkIterationsForMinNSeconds($secs); ... } public function doGtkIterationsForMinNSeconds($secs) { $this->appendStatusMessage("Waiting " . $secs); $start = time(); $end = $start + $secs; while( (time() < $end) ) { $this->appendStatusMessage("Waiting " . ($end - time())); while(gtk::events_pending()) Gtk::main_iteration(); } } } Is there a way to keep the application responsive whilst, at the same time making an HTTP request? I am considering splitting the application into two, where: Test Controller Application - Acts as a settings-writer / report-reader and this writes to settings file and reads a report file. Test Runner Application - Acts as a settings-reader / report-writer and, for each iteration reads the settings file, Runs the test, then write a report. So to tell it to close down - I'd: Press the Stop Button on the 'Test Controller Application', which writes to the settings file, which is read by the 'Test Runner Application' which stops, then writes to the report file to say it stopped the 'Test Controller Application' reads the report and updates the status and so on... However, before I go ahead and split the application in two - I am wondering if there is any other obvious way to deal with, this issue. I suspect it is probably quite common and a well-trodden path. Also is there an easier way to send messages between two applications?
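
    Before splitting the application in two, one pattern to consider is making the HTTP request itself non-blocking with curl_multi and pumping GTK events while it is in flight. The sketch below shows the shape of that pattern only; it bypasses SimpleTest's own $this->get(), so the response would have to be fed back to the test case (or asserted on directly), and the function name is made up for illustration.

        <?php
        // Fire one request without blocking, keeping the GTK loop alive meanwhile.
        function fetchUrlKeepingGuiAlive($url)
        {
            $ch = curl_init($url);
            curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);

            $multi = curl_multi_init();
            curl_multi_add_handle($multi, $ch);

            $running = 0;
            do {
                curl_multi_exec($multi, $running);

                // Let the GUI breathe between polls.
                while (Gtk::events_pending()) {
                    Gtk::main_iteration();
                }
                usleep(20000);   // ~20 ms; avoids a busy loop
            } while ($running > 0);

            $body = curl_multi_getcontent($ch);
            curl_multi_remove_handle($multi, $ch);
            curl_multi_close($multi);
            curl_close($ch);
            return $body;
        }

    With this in place the fixed "wait N seconds between requests" loop can go away, since the window stays responsive for the whole duration of the request.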

    Read the article

  • I need help with creating a data structure in PHP

    - by alex
    What I need to do is have a data structure that shows jobs organised into 14 day periods, but only when an id is the same. I've implemented all sorts of stuff, but they have failed miserably. Ideally, maybe a SQL expert could handle all of this in the query. Here is some of my code. You can assume all library stuff works as expected. $query = 'SELECT date, rig_id, comments FROM dor ORDER BY date DESC'; $dors = Db::query(Database::SELECT, $query)->execute()->as_array(); This will return all jobs, but I need to have them organised by 14 day period with the same rig_id value. $hitches = array(); foreach($dors as $dor) { $rigId = $dor['rig_id']; $date = strtotime($dor['date']); if (empty($hitches)) { $hitches[] = array( 'rigId' => $rigId, 'startDate' => $date, 'dors' => array($dor) ); } else { $found = false; foreach($hitches as $key => $hitch) { $hitchStartDate = $hitch['startDate']; $dateDifference = abs($hitchStartDate - $date); $isSameHitchTimeFrame = $dateDifference < (Date::DAY * 14); if ($rigId == $hitch['rigId'] AND $isSameHitchTimeFrame) { $found = true; $hitches[$key]['dors'][] = $dor; } } if ($found === false) { $hitches[] = array( 'rigId' => $rigId, 'startDate' => $date, 'dors' => array($dor) ); } } } This seems to work OK splitting up by rig_id, but not by date. I also think I'm doing it wrong because I need to check the earliest date. Is it possible at all to do any of this in the database query? To recap, here is my problem I have a list of jobs with all have a rig_id (many jobs can have the same) and a date. I need the data to be organised into hitches. That is, the rig_id must be the same per hitch, and they must span a 14 day period, in which the next 14 days with the same rig_id will be a new hitch. Can someone please point me on the right track? Cheers
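
    One simplification worth sketching: push the grouping order into the query (ORDER BY rig_id, date ASC) and then build the hitches in a single pass, starting a new hitch whenever the rig changes or a job falls 14 or more days after the current hitch's start date. Assuming the same $dors array shape as above:

        <?php
        // Assumes: SELECT date, rig_id, comments FROM dor ORDER BY rig_id, date ASC
        $hitches = array();
        $current = null;

        foreach ($dors as $dor) {
            $rigId = $dor['rig_id'];
            $date  = strtotime($dor['date']);

            $startNew = $current === null
                || $current['rigId'] !== $rigId
                || ($date - $current['startDate']) >= 14 * 86400;   // 86400 s = 1 day

            if ($startNew) {
                if ($current !== null) {
                    $hitches[] = $current;
                }
                $current = array('rigId' => $rigId, 'startDate' => $date, 'dors' => array());
            }
            $current['dors'][] = $dor;
        }
        if ($current !== null) {
            $hitches[] = $current;   // flush the last hitch
        }

    Because the rows arrive grouped by rig and ordered by date, the nested "search all existing hitches" loop disappears and the 14-day comparison is always made against the earliest date in the hitch.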

    Read the article

  • How do I solve "405 Method Not Allowed" for our subversion setup?

    - by macke
    We're serving our source code using VisualSVN running on Windows Server 2003. Recently, we split a portion of a project into a new project in it's own repository, and then linked it back to the original project using svn:externals. Since then, we've been having issues when we try to commit files with Subclipse. The error we're getting is: svn: Commit failed (details follow): svn: PROPFIND of '/svn': 405 Method Not Allowed (https://svn.ourserver.com) Googling for a while didn't really help, our config seems to be correct. It should also be noted that we've been running this server for a while no without these problems and apart from splitting the project into two repositories, no changes have been made to the server (ie, config files are the same). It should also be noted that these errors only appear when we try to check in multiple files at once. If we check in one file at a time there are no errors. Also, it only appears in Subclipse as far as we know right now, Versions.app (OS X) seems to work fine so that is our current workaround. So, the questions is how do I analyze the error to find the cause and subsequently fix it? I'm by no means a svn guru and right now I'm clueless. EDIT: It seems we can check in multiple files in the same package, but not files from multiple packages. Also, when I "split" the project into two repositories, I imported the original repository with a new name. I did not do a dump and then import that dump. Could that be the source of our issues, and if so, how would I solve that? EDIT: After some jerking around it seems as though it is indeed related to when checking in files in different repositories. If I try to do a single commit in both Repo A and Repo B (referenced by svn:externals) at the same time, I get the error. Versions.app handles this correctly, but I guess it might just be doing two commits, not a single one. Subclipse fails miserably. For now, we simply do multiple commits, one for Repo A and one for Repo B, that works just fine. If anyone smarter than me could fill in the details why this is happening, whether or not this kind of setup is stupid etc, please go right ahead.
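
    For what it's worth, a Subversion commit is always scoped to a single repository, which seems consistent with the symptom that commits touching only one repository succeed while a commit spanning the main project and the external fails. The usual workaround is simply two commits from the same working copy, along these lines (paths are placeholders):

        cd my-working-copy
        svn commit path/to/external -m "Change in the external project (Repo B)"
        svn commit . -m "Change in the main project (Repo A)"
        svn update

    Clients such as Versions.app that appear to handle it in one step are most likely issuing separate per-repository commits under the hood.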

    Read the article

  • BULK INSERT from one table to another all on the server

    - by steve_d
    I have to copy a bunch of data from one database table into another. I can't use SELECT ... INTO because one of the columns is an identity column. Also, I have some changes to make to the schema. I was able to use the export data wizard to create an SSIS package, which I then edited in Visual Studio 2005 to make the changes desired and whatnot. It's certainly faster than an INSERT INTO, but it seems silly to me to download the data to a different computer just to upload it back again. (Assuming that I am correct that that's what the SSIS package is doing). Is there an equivalent to BULK INSERT that runs directly on the server, allows keeping identity values, and pulls data from a table? (as far as I can tell, BULK INSERT can only pull data from a file) Edit: I do know about IDENTITY_INSERT, but because there is a fair amount of data involved, INSERT INTO ... SELECT is kinda of slow. SSIS/BULK INSERT dumps the data into the table without regards to indexes and logging and whatnot, so it's faster. (Of course creating the clustered index on the table once it's populated is not fast, but it's still faster than the INSERT INTO...SELECT that I tried in my first attempt) Edit 2: The schema changes include (but are not limited to) the following: 1. Splitting one table into two new tables. In the future each will have its own IDENTITY column, but for the migration I think it will be simplest to use the identity from the original table as the identity for the both new tables. Once the migration is over one of the tables will have a one-to-many relationship to the other. 2. Moving columns from one table to another. 3. Deleting some cross reference tables that only cross referenced 1-to-1. Instead the reference will be a foreign key in one of the two tables. 4. Some new columns will be created with default values. 5. Some tables aren’t changing at all, but I have to copy them over due to the "put it all in a new DB" request.
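
    If the data never needs to leave the server, SET IDENTITY_INSERT plus a plain INSERT ... SELECT is the usual route; whether it is minimally logged depends on the SQL Server version, recovery model and hints, so treat the sketch below (placeholder table and column names) as a starting point rather than a guaranteed fast path.

        -- Keep the source identity values and copy directly on the server.
        SET IDENTITY_INSERT dbo.TargetTable ON;

        INSERT INTO dbo.TargetTable WITH (TABLOCK)   -- TABLOCK helps toward minimal logging
               (Id, ColA, ColB)                      -- columns with defaults can simply be omitted
        SELECT  s.Id, s.ColA, s.ColB
        FROM    dbo.SourceTable AS s;

        SET IDENTITY_INSERT dbo.TargetTable OFF;

    The note about building the clustered index only after the table is populated applies here just as it does with the SSIS/BULK INSERT approach.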

    Read the article

  • Split a binary file into chunks c++

    - by L4nce0
    I've been bashing my head against trying to first divide up a file into chunks, for the purpose of sending over sockets. I can read / write a file easily without splitting it into chunks. The code below runs, works, kinda. It will write a textfile and has a garbage character. Which if this was just for txt, no problem. Jpegs aren't working with said garbage. Been at it for a few days, so I've done my research, and it's time to get some help. I do want to stick strictly to binary readers, as this need to handle any file. I've seen a lot of slick examples out there. (none of them worked for me with jpgs) Mostly something along the lines of while(file)... I subscribe to the, if you know the size, use a for-loop, not a while-loop camp. Thank you for the help!! vector<char*> readFile(const char* fn){ vector<char*> v; ifstream::pos_type size; char * memblock; ifstream file; file.open(fn,ios::in|ios::binary|ios::ate); if (file.is_open()) { size = fileS(fn); file.seekg (0, ios::beg); int bs = size/3; // arbitrary. Actual program will use the socket send size int ws = 0; int i = 0; for(i = 0; i < size; i+=bs){ if(i+bs > size) ws = size%bs; else ws = bs; memblock = new char [ws]; file.read (memblock, ws); v.push_back(memblock); } } else{ exit(-4); } return v; } int main(int argc, char **argv) { vector<char*> v = readFile("foo.txt"); ofstream myFile ("bar.txt", ios::out | ios::binary); for(vector<char*>::iterator it = v.begin(); it!=v.end(); ++it ){ myFile.write(*it,strlen(*it)); } }
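
    The garbage character and the broken JPEGs point at the same bug: the write loop measures each chunk with strlen(), which stops at the first zero byte and can overrun buffers that are not null-terminated. Keeping each chunk's length together with its bytes fixes both; a sketch using std::vector<char> (the 64 KB chunk size is arbitrary, standing in for the socket send size):

        #include <algorithm>
        #include <cstddef>
        #include <cstdlib>
        #include <fstream>
        #include <vector>

        // Read a file into fixed-size chunks; each vector carries its own length,
        // so binary data containing '\0' bytes survives the round trip.
        std::vector<std::vector<char> > readChunks(const char* fn, std::size_t chunkSize)
        {
            std::ifstream file(fn, std::ios::in | std::ios::binary | std::ios::ate);
            if (!file.is_open()) std::exit(-4);

            std::size_t size = static_cast<std::size_t>(file.tellg());
            file.seekg(0, std::ios::beg);

            std::vector<std::vector<char> > chunks;
            for (std::size_t offset = 0; offset < size; offset += chunkSize)
            {
                std::size_t len = std::min(chunkSize, size - offset);
                std::vector<char> chunk(len);
                file.read(&chunk[0], static_cast<std::streamsize>(len));
                chunks.push_back(chunk);
            }
            return chunks;
        }

        int main()
        {
            std::ofstream out("bar.jpg", std::ios::out | std::ios::binary);
            std::vector<std::vector<char> > chunks = readChunks("foo.jpg", 64 * 1024);
            for (std::size_t i = 0; i < chunks.size(); ++i)
                out.write(&chunks[i][0], static_cast<std::streamsize>(chunks[i].size()));  // size, never strlen()
            return 0;
        }

    If raw char* buffers are required for the socket code, the same idea applies: carry a std::pair<char*, size_t> (or equivalent) so the length always travels with the pointer.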

    Read the article

  • Does this prove a network bandwidth bottleneck?

    - by Yuji Tomita
    I've incorrectly assumed that my internal AB testing means my server can handle 1k concurrency @3k hits per second. My theory at at the moment is that the network is the bottleneck. The server can't send enough data fast enough. External testing from blitz.io at 1k concurrency shows my hits/s capping off at 180, with pages taking longer and longer to respond as the server is only able to return 180 per second. I've served a blank file from nginx and benched it: it scales 1:1 with concurrency. Now to rule out IO / memcached bottlenecks (nginx normally pulls from memcached), I serve up a static version of the cached page from the filesystem. The results are very similar to my original test; I'm capped at around 180 RPS. Splitting the HTML page in half gives me double the RPS, so it's definitely limited by the size of the page. If I internally ApacheBench from the local server, I get consistent results of around 4k RPS on both the Full Page and the Half Page, at high transfer rates. Transfer rate: 62586.14 [Kbytes/sec] received If I AB from an external server, I get around 180RPS - same as the blitz.io results. How do I know it's not intentional throttling? If I benchmark from multiple external servers, all results become poor which leads me to believe the problem is in MY servers outbound traffic, not a download speed issue with my benchmarking servers / blitz.io. So I'm back to my conclusion that my server can't send data fast enough. Am I right? Are there other ways to interpret this data? Is the solution/optimization to set up multiple servers + load balancing that can each serve 180 hits per second? I'm quite new to server optimization, so I'd appreciate any confirmation interpreting this data. Outbound traffic Here's more information about the outbound bandwidth: The network graph shows a maximum output of 16 Mb/s: 16 megabits per second. Doesn't sound like much at all. Due to a suggestion about throttling, I looked into this and found that linode has a 50mbps cap (which I'm not even close to hitting, apparently). I had it raised to 100mbps. Since linode caps my traffic, and I'm not even hitting it, does this mean that my server should indeed be capable of outputting up to 100mbps but is limited by some other internal bottleneck? I just don't understand how networks at this large of a scale work; can they literally send data as fast as they can read from the HDD? Is the network pipe that big? In conclusion 1: Based on the above, I'm thinking I can definitely raise my 180RPS by adding an nginx load balancer on top of a multi nginx server setup at exactly 180RPS per server behind the LB. 2: If linode has a 50/100mbit limit that I'm not hitting at all, there must be something I can do to hit that limit with my single server setup. If I can read / transmit data fast enough locally, and linode even bothers to have a 50mbit/100mbit cap, there must be an internal bottleneck that's not allowing me to hit those caps that I'm not sure how to detect. Correct? I realize the question is huge and vague now, but I'm not sure how to condense it. Any input is appreciated on any conclusion I've made.
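
    A rough back-of-the-envelope check supports the bandwidth reading: 16 Mbit/s is about 2 MB/s, and 2 MB/s spread over 180 responses/s works out to roughly 11 KB per response, which is a plausible size for a single HTML page and consistent with the half-size page doubling the rate. By the same arithmetic, a genuine 100 Mbit/s ceiling would allow on the order of 1,100 such responses per second, so if the cap really is 100 Mbit/s, whatever holds the observed rate at ~16 Mbit/s has to sit closer to the server or the benchmark itself than to the provider's hard limit.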

    Read the article

  • CPU Utilization LAMP stack

    - by Max
    We've got an ec2 m2.4xlarge running Magento (centos 5.6, httpd 2.2, php 5.2.17 with eaccelerator 0.9.5.3, mysql 5.1.52). Right now we're getting a large traffic spike, and our top looks like this: top - 09:41:29 up 31 days, 1:12, 1 user, load average: 120.01, 129.03, 113.23 Tasks: 1190 total, 18 running, 1172 sleeping, 0 stopped, 0 zombie Cpu(s): 97.3%us, 1.8%sy, 0.0%ni, 0.5%id, 0.0%wa, 0.0%hi, 0.0%si, 0.4%st Mem: 71687720k total, 36898928k used, 34788792k free, 49692k buffers Swap: 880737784k total, 0k used, 880737784k free, 1586524k cached PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND 2433 mysql 15 0 23.6g 4.5g 7112 S 564.7 6.6 33607:34 mysqld 24046 apache 16 0 411m 65m 28m S 26.4 0.1 0:09.05 httpd 24360 apache 15 0 410m 60m 25m S 26.4 0.1 0:03.65 httpd 24993 apache 16 0 410m 57m 21m S 26.1 0.1 0:01.41 httpd 24838 apache 16 0 428m 74m 20m S 24.8 0.1 0:02.37 httpd 24359 apache 16 0 411m 62m 26m R 22.3 0.1 0:08.12 httpd 23850 apache 15 0 411m 64m 27m S 16.8 0.1 0:14.54 httpd 25229 apache 16 0 404m 46m 17m R 10.2 0.1 0:00.71 httpd 14594 apache 15 0 404m 63m 34m S 8.4 0.1 1:10.26 httpd 24955 apache 16 0 404m 50m 21m R 8.4 0.1 0:01.66 httpd 24313 apache 16 0 399m 46m 22m R 8.1 0.1 0:02.30 httpd 25119 apache 16 0 411m 59m 23m S 6.8 0.1 0:01.45 httpd Questions: Would giving msyqld more memory help it cache queries and react faster? If so, how? Other than splitting mysql and php to separate servers (which we're about to do) is there anything else we could/should be doing? Thanks! UPDATE: Here's our my.cnf along with the output of mysqltuner. It looks like a cache problem. Thanks again! # cat /etc/my.cnf [client] port = **** socket = /var/lib/mysql/mysql.sock [mysqld] datadir=/mnt/persistent/mysql port=**** socket=/var/lib/mysql/mysql.sock key_buffer = 512M max_allowed_packet = 64M table_cache = 1024 sort_buffer_size = 8M read_buffer_size = 4M read_rnd_buffer_size = 2M myisam_sort_buffer_size = 64M thread_cache_size = 128M tmp_table_size = 128M join_buffer_size = 1M query_cache_limit = 2M query_cache_size= 64M query_cache_type = 1 max_connections = 1000 thread_stack = 128K thread_concurrency = 48 log-bin=mysql-bin server-id = 1 wait_timeout = 300 innodb_data_home_dir = /mnt/persistent/mysql/ innodb_data_file_path = ibdata1:10M:autoextend innodb_buffer_pool_size = 20G innodb_additional_mem_pool_size = 20M innodb_log_file_size = 64M innodb_log_buffer_size = 8M innodb_flush_log_at_trx_commit = 1 innodb_lock_wait_timeout = 50 innodb_thread_concurrency = 48 ft_min_word_len=3 [myisamchk] ft_min_word_len=3 key_buffer = 128M sort_buffer_size = 128M read_buffer = 2M write_buffer = 2M # ./mysqltuner.pl >> MySQLTuner 1.2.0 - Major Hayden <[email protected]> >> Bug reports, feature requests, and downloads at http://mysqltuner.com/ >> Run with '--help' for additional options and output filtering -------- General Statistics -------------------------------------------------- [--] Skipped version check for MySQLTuner script [OK] Currently running supported MySQL version 5.1.52-log [OK] Operating on 64-bit architecture -------- Storage Engine Statistics ------------------------------------------- [--] Status: +Archive -BDB +Federated +InnoDB -ISAM -NDBCluster [--] Data in MyISAM tables: 2G (Tables: 26) [--] Data in InnoDB tables: 749M (Tables: 250) [!!] 
Total fragmented tables: 262 -------- Security Recommendations ------------------------------------------- -------- Performance Metrics ------------------------------------------------- [--] Up for: 31d 2h 30m 38s (680M q [253.371 qps], 2M conn, TX: 4825B, RX: 236B) [--] Reads / Writes: 89% / 11% [--] Total buffers: 20.6G global + 15.1M per thread (1000 max threads) [OK] Maximum possible memory usage: 35.4G (51% of installed RAM) [OK] Slow queries: 0% (35K/680M) [OK] Highest usage of available connections: 53% (537/1000) [OK] Key buffer size / total MyISAM indexes: 512.0M/457.2M [OK] Key buffer hit rate: 100.0% (9B cached / 264K reads) [OK] Query cache efficiency: 42.3% (260M cached / 615M selects) [!!] Query cache prunes per day: 4384652 [OK] Sorts requiring temporary tables: 0% (1K temp sorts / 38M sorts) [!!] Joins performed without indexes: 100404 [OK] Temporary tables created on disk: 17% (7M on disk / 45M total) [OK] Thread cache hit rate: 99% (537 created / 2M connections) [!!] Table cache hit rate: 0% (1K open / 946K opened) [OK] Open file limit used: 9% (453/5K) [OK] Table locks acquired immediately: 99% (758M immediate / 758M locks) [OK] InnoDB data size / buffer pool: 749.3M/20.0G -------- Recommendations ----------------------------------------------------- General recommendations: Run OPTIMIZE TABLE to defragment tables for better performance Enable the slow query log to troubleshoot bad queries Adjust your join queries to always utilize indexes Increase table_cache gradually to avoid file descriptor limits Variables to adjust: query_cache_size (> 64M) join_buffer_size (> 1.0M, or always use indexes with joins) table_cache (> 1024)
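
    Translating mysqltuner's own recommendations into my.cnf would look roughly like the lines below; the numbers are illustrative starting points rather than tuned values, and query_cache_size in particular is worth raising gradually, since a very large query cache can hurt a write-heavy workload. The existing thread_cache_size = 128M also looks unintended, because that variable is a thread count rather than a byte size.

        # Illustrative adjustments based on the mysqltuner output above
        query_cache_size  = 128M    # tuner flags ~4.4M prunes/day at 64M
        join_buffer_size  = 2M      # or, better, add indexes to the ~100k index-less joins
        table_cache       = 4096    # table cache hit rate is ~0% at 1024
        thread_cache_size = 128     # a count of threads, not bytes

    Beyond the config, splitting MySQL and PHP onto separate machines (as planned) and running the suggested OPTIMIZE TABLE pass on the 262 fragmented tables are the other items the report itself points at.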

    Read the article

< Previous Page | 18 19 20 21 22 23 24  | Next Page >