Search Results

Search found 17627 results on 706 pages for 'hierarchical query'.

Page 526/706

  • Using Phing's dbdeploy task with transactions

    - by Gordon
    I am using Phing's dbdeploy task to manage my database schema. This works fine as long as there are no errors in my delta files. However, if there is an error, dbdeploy will run the delta files only up to the query with the error and then abort. This causes me some frustration, because I then have to manually roll back the entry in the changelog table. If I don't, dbdeploy will assume the migration was successful on a subsequent try. So the question is: is there any way to get dbdeploy to use transactions?
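    A possible workaround (a sketch only, not a dbdeploy feature): wrap each delta file's statements in an explicit transaction so a failing statement rolls the whole delta back and leaves nothing to clean up in the changelog. Note this only helps on engines and statements that are actually transactional (most DDL on MySQL still auto-commits); the statements below are placeholders.

    -- inside a delta file (placeholder statements)
    START TRANSACTION;

    ALTER TABLE example_table ADD COLUMN example_column INT NULL;
    UPDATE example_table SET example_column = 0;

    COMMIT;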

    Read the article

  • SQL: Return "true" if list of records exists?

    - by User
    An alternative title might be: check for the existence of multiple rows? Using a combination of SQL and C# I want a method that returns true if all products in a list exist in a table. If it can be done entirely in SQL, that would be preferable. I have written a method that returns whether a single productID exists using the following SQL: SELECT productID FROM Products WHERE ProductID = @productID If this returns a row, the C# method returns true, otherwise false. Now suppose I have a list of product IDs (not a huge list, mind you, normally under 20). How can I write a query that returns a row if all the product IDs exist and no row if one or more of them does not? (Maybe something involving "IN", like: SELECT * FROM Products WHERE ProductID IN (1, 10, 100))
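    A minimal sketch of one common approach (untested): count how many of the requested IDs are present and compare against the number of IDs passed in, so the query returns a single true/false-style value instead of one row per product.

    -- 3 is the number of IDs in the list being checked
    SELECT CASE WHEN COUNT(DISTINCT ProductID) = 3 THEN 1 ELSE 0 END AS AllExist
    FROM Products
    WHERE ProductID IN (1, 10, 100);

    From C#, ExecuteScalar on this query would give 1 (all exist) or 0 (at least one missing), assuming the IDs in the IN list are distinct.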

    Read the article

  • Extracting "((Adj|Noun)+|((Adj|Noun)(Noun-Prep)?)(Adj|Noun))Noun" from Text (Justeson & Katz, 1995)

    - by ssuhan
    I would like to ask whether it is possible to extract the pattern ((Adj|Noun)+|((Adj|Noun)(Noun-Prep)?)(Adj|Noun))Noun proposed by Justeson and Katz (1995) with the R package openNLP. That is, I would like to use this linguistic filter to extract candidate noun phrases. I cannot quite understand its meaning. Could you do me a favor and explain it, or translate such a representation into R? Many thanks. Maybe we can start from this sample code:

    library("openNLP")
    acq <- "This paper describes a novel optical thread plug gauge (OTPG) for internal thread inspection using machine vision. The OTPG is composed of a rigid industrial endoscope, a charge-coupled device camera, and a two degree-of-freedom motion control unit. A sequence of partial wall images of an internal thread are retrieved and reconstructed into a 2D unwrapped image. Then, a digital image processing and classification procedure is used to normalize, segment, and determine the quality of the internal thread."
    acqTag <- tagPOS(acq)
    acqTagSplit = strsplit(acqTag, " ")

    Read the article

  • JPQL / SQL: How to select * from a table with group by on a single column?

    - by DavidD
    I would like to select every column of a table, but want to have distinct values on a single attribute of my rows (City in the example). I don't want extra columns like counts or anything, just a limited number of results, and it seems like it is not possible to directly LIMIT results in a JPQL query.

    Original table:

    ID | Name   | City
    ---|--------|-----
    1  | John   | NY
    2  | Maria  | LA
    3  | John   | LA
    4  | Albert | NY

    Wanted result, if I do the distinct on the City:

    ID | Name   | City
    ---|--------|-----
    1  | John   | NY
    2  | Maria  | LA

    What is the best way to do that? Thank you for your help.
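    In plain SQL (window functions are not available in JPQL), one sketch of this - untested, and the table name Person is assumed - is to rank the rows within each City and keep only the first:

    SELECT ID, Name, City
    FROM (
        SELECT ID, Name, City,
               ROW_NUMBER() OVER (PARTITION BY City ORDER BY ID) AS rn
        FROM Person
    ) ranked
    WHERE rn = 1;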

    Read the article

  • Using named pipes in Rails

    - by FancyDancy
    I need to read and write some data through named pipes. I have tested it in a simple Ruby app, and it works nicely. But I don't know where I should put it in my Rails app. I have heard about Rake tasks, but I'm not sure that's the right solution. I need to open a pipe file and listen for data. If there is any, I need to process it and make a DB query. Then, write some data to another pipe. I know how it works; the only problem is how to run it with Rails. Give me some examples, please.

    Read the article

  • ANTS CLR and Memory Profiler In Depth Review (Part 1 of 2 – CLR Profiler)

    - by ToStringTheory
    One of the things that people might not know about me, is my obsession to make my code as efficient as possible.  Many people might not realize how much of a task or undertaking that this might be, but it is surely a task as monumental as climbing Mount Everest, except this time it is a challenge for the mind…  In trying to make code efficient, there are many different factors that play a part – size of project or solution, tiers, language used, experience and training of the programmer, technologies used, maintainability of the code – the list can go on for quite some time. I spend quite a bit of time when developing trying to determine what is the best way to implement a feature to accomplish the efficiency that I look to achieve.  One program that I have recently come to learn about – Red Gate ANTS Performance (CLR) and Memory profiler gives me tools to accomplish that job more efficiently as well.  In this review, I am going to cover some of the features of the ANTS profiler set by compiling some hideous example code to test against. Notice As a member of the Geeks With Blogs Influencers program, one of the perks is the ability to review products, in exchange for a free license to the program.  I have not let this affect my opinions of the product in any way, and Red Gate nor Geeks With Blogs has tried to influence my opinion regarding this product in any way. Introduction The ANTS Profiler pack provided by Red Gate was something that I had not heard of before receiving an email regarding an offer to review it for a license.  Since I look to make my code efficient, it was a no brainer for me to try it out!  One thing that I have to say took me by surprise is that upon downloading the program and installing it you fill out a form for your usual contact information.  Sure enough within 2 hours, I received an email from a sales representative at Red Gate asking if she could help me to achieve the most out of my trial time so it wouldn’t go to waste.  After replying to her and explaining that I was looking to review its feature set, she put me in contact with someone that setup a demo session to give me a quick rundown of its features via an online meeting.  After having dealt with a massive ordeal with one of my utility companies and their complete lack of customer service, Red Gates friendly and helpful representatives were a breath of fresh air, and something I was thankful for. ANTS CLR Profiler The ANTS CLR profiler is the thing I want to focus on the most in this post, so I am going to dive right in now. Install was simple and took no time at all.  It installed both the profiler for the CLR and Memory, but also visual studio extensions to facilitate the usage of the profilers (click any images for full size images): The Visual Studio menu options (under ANTS menu) Starting the CLR Performance Profiler from the start menu yields this window If you follow the instructions after launching the program from the start menu (Click File > New Profiling Session to start a new project), you are given a dialog with plenty of options for profiling: The New Session dialog.  Lots of options.  One thing I noticed is that the buttons in the lower right were half-covered by the panel of the application.  If I had to guess, I would imagine that this is caused by my DPI settings being set to 125%.  This is a problem I have seen in other applications as well that don’t scale well to different dpi scales. 
The profiler options give you the ability to profile: .NET Executable ASP.NET web application (hosted in IIS) ASP.NET web application (hosted in IIS express) ASP.NET web application (hosted in Cassini Web Development Server) SharePoint web application (hosted in IIS) Silverlight 4+ application Windows Service COM+ server XBAP (local XAML browser application) Attach to an already running .NET 4 process Choosing each option provides a varying set of other variables/options that one can set including options such as application arguments, operating path, record I/O performance performance counters to record (43 counters in all!), etc…  All in all, they give you the ability to profile many different .Net project types, and make it simple to do so.  In most cases of my using this application, I would be using the built in Visual Studio extensions, as they automatically start a new profiling project in ANTS with the options setup, and start your program, however RedGate has made it easy enough to profile outside of Visual Studio as well. On the flip side of this, as someone who lives most of their work life in Visual Studio, one thing I do wish is that instead of opening an entirely separate application/gui to perform profiling after launching, that instead they would provide a Visual Studio panel with the information, and integrate more of the profiling project information into Visual Studio.  So, now that we have an idea of what options that the profiler gives us, its time to test its abilities and features. Horrendous Example Code – Prime Number Generator One of my interests besides development, is Physics and Math – what I went to college for.  I have especially always been interested in prime numbers, as they are something of a mystery…  So, I decided that I would go ahead and to test the abilities of the profiler, I would write a small program, website, and library to generate prime numbers in the quantity that you ask for.  I am going to start off with some terrible code, and show how I would see the profiler being used as a development tool. First off, the IPrimes interface (all code is downloadable at the end of the post): interface IPrimes { IEnumerable<int> GetPrimes(int retrieve); } Simple enough, right?  Anything that implements the interface will (hopefully) provide an IEnumerable of int, with the quantity specified in the parameter argument.  Next, I am going to implement this interface in the most basic way: public class DumbPrimes : IPrimes { public IEnumerable<int> GetPrimes(int retrieve) { //store a list of primes already found var _foundPrimes = new List<int>() { 2, 3 }; //if i ask for 1 or two primes, return what asked for if (retrieve <= _foundPrimes.Count()) return _foundPrimes.Take(retrieve); //the next number to look at int _analyzing = 4; //since I already determined I don't have enough //execute at least once, and until quantity is sufficed do { //assume prime until otherwise determined bool isPrime = true; //start dividing at 2 //divide until number is reached, or determined not prime for (int i = 2; i < _analyzing && isPrime; i++) { //if (i) goes into _analyzing without a remainder, //_analyzing is NOT prime if (_analyzing % i == 0) isPrime = false; } //if it is prime, add to found list if (isPrime) _foundPrimes.Add(_analyzing); //increment number to analyze next _analyzing++; } while (_foundPrimes.Count() < retrieve); return _foundPrimes; } } This is the simplest way to get primes in my opinion.  
Checking each number by the straight definition of a prime – is it divisible by anything besides 1 and itself. I have included this code in a base class library for my solution, as I am going to use it to demonstrate a couple of features of ANTS.  This class library is consumed by a simple non-MVVM WPF application, and a simple MVC4 website.  I will not post the WPF code here inline, as it is simply an ObservableCollection<int>, a label, two textbox’s, and a button. Starting a new Profiling Session So, in Visual Studio, I have just completed my first stint developing the GUI and DumbPrimes IPrimes class, so now I want to check my codes efficiency by profiling it.  All I have to do is build the solution (surprised initiating a profiling session doesn’t do this, but I suppose I can understand it), and then click the ANTS menu, followed by Profile Performance.  I am then greeted by the profiler starting up and already monitoring my program live: You are provided with a realtime graph at the top, and a pane at the bottom giving you information on how to proceed.  I am going to start by asking my program to show me the first 15000 primes: After the program finally began responding again (I did all the work on the main UI thread – how bad!), I stopped the profiler, which did kill the process of my program too.  One important thing to note, is that the profiler by default wants to give you a lot of detail about the operation – line hit counts, time per line, percent time per line, etc…  The important thing to remember is that this itself takes a lot of time.  When running my program without the profiler attached, it can generate the 15000 primes in 5.18 seconds, compared to 74.5 seconds – almost a 1500 percent increase.  While this may seem like a lot, remember that there is a trade off.  It may be WAY more inefficient, however, I am able to drill down and make improvements to specific problem areas, and then decrease execution time all around. Analyzing the Profiling Session After clicking ‘Stop Profiling’, the process running my application stopped, and the entire execution time was automatically selected by ANTS, and the results shown below: Now there are a number of interesting things going on here, I am going to cover each in a section of its own: Real Time Performance Counter Bar (top of screen) At the top of the screen, is the real time performance bar.  As your application is running, this will constantly update with the currently selected performance counters status.  A couple of cool things to note are the fact that you can drag a selection around specific time periods to drill down the detail views in the lower 2 panels to information pertaining to only that period. After selecting a time period, you can bookmark a section and name it, so that it is easy to find later, or after reloaded at a later time.  You can also zoom in, out, or fit the graph to the space provided – useful for drilling down. It may be hard to see, but at the top of the processor time graph below the time ticks, but above the red usage graph, there is a green bar. This bar shows at what times a method that is selected in the ‘Call tree’ panel is called. Very cool to be able to click on a method and see at what times it made an impact. As I said before, ANTS provides 43 different performance counters you can hook into.  
Click the arrow next to the Performance tab at the top will allow you to change between different counters if you have them selected: Method Call Tree, ADO.Net Database Calls, File IO – Detail Panel Red Gate really hit the mark here I think. When you select a section of the run with the graph, the call tree populates to fill a hierarchical tree of method calls, with information regarding each of the methods.   By default, methods are hidden where the source is not provided (framework type code), however, Red Gate has integrated Reflector into ANTS, so even if you don’t have source for something, you can select a method and get the source if you want.  Methods are also hidden where the impact is seen as insignificant – methods that are only executed for 1% of the time of the overall calling methods time; in other words, working on making them better is not where your efforts should be focused. – Smart! Source Panel – Detail Panel The source panel is where you can see line level information on your code, showing the code for the currently selected method from the Method Call Tree.  If the code is not available, Reflector takes care of it and shows the code anyways! As you can notice, there does seem to be a problem with how ANTS determines what line is the actual line that a call is completed on.  I have suspicions that this may be due to some of the inline code optimizations that the CLR applies upon compilation of the assembly.  In a method with comments, the problem is much more severe: As you can see here, apparently the most offending code in my base library was a comment – *gasp*!  Removing the comments does help quite a bit, however I hope that Red Gate works on their counter algorithm soon to improve the logic on positioning for statistics: I did a small test just to demonstrate the lines are correct without comments. For me, it isn’t a deal breaker, as I can usually determine the correct placements by looking at the application code in the region and determining what makes sense, but it is something that would probably build up some irritation with time. Feature – Suggest Method for Optimization A neat feature to really help those in need of a pointer, is the menu option under tools to automatically suggest methods to optimize/improve: Nice feature – clicking it filters the call tree and stars methods that it thinks are good candidates for optimization.  I do wish that they would have made it more visible for those of use who aren’t great on sight: Process Integration I do think that this could have a place in my process.  After experimenting with the profiler, I do think it would be a great benefit to do some development, testing, and then after all the bugs are worked out, use the profiler to check on things to make sure nothing seems like it is hogging more than its fair share.  For example, with this program, I would have developed it, ran it, tested it – it works, but slowly. After looking at the profiler, and seeing the massive amount of time spent in 1 method, I might go ahead and try to re-implement IPrimes (I actually would probably rewrite the offending code, but so that I can distribute both sets of code easily, I’m just going to make another implementation of IPrimes).  
Using two pieces of knowledge about prime numbers can make this method MUCH more efficient – prime numbers fall into two buckets 6k+/-1 , and a number is prime if it is not divisible by any other primes before it: public class SmartPrimes : IPrimes { public IEnumerable<int> GetPrimes(int retrieve) { //store a list of primes already found var _foundPrimes = new List<int>() { 2, 3 }; //if i ask for 1 or two primes, return what asked for if (retrieve <= _foundPrimes.Count()) return _foundPrimes.Take(retrieve); //the next number to look at int _k = 1; //since I already determined I don't have enough //execute at least once, and until quantity is sufficed do { //assume prime until otherwise determined bool isPrime = true; int potentialPrime; //analyze 6k-1 //assign the value to potential potentialPrime = 6 * _k - 1; //if there are any primes that divise this, it is NOT a prime number //using PLINQ for quick boost isPrime = !_foundPrimes.AsParallel() .Any(prime => potentialPrime % prime == 0); //if it is prime, add to found list if (isPrime) _foundPrimes.Add(potentialPrime); if (_foundPrimes.Count() == retrieve) break; //analyze 6k+1 //assign the value to potential potentialPrime = 6 * _k + 1; //if there are any primes that divise this, it is NOT a prime number //using PLINQ for quick boost isPrime = !_foundPrimes.AsParallel() .Any(prime => potentialPrime % prime == 0); //if it is prime, add to found list if (isPrime) _foundPrimes.Add(potentialPrime); //increment k to analyze next _k++; } while (_foundPrimes.Count() < retrieve); return _foundPrimes; } } Now there are definitely more things I can do to help make this more efficient, but for the scope of this example, I think this is fine (but still hideous)! Profiling this now yields a happy surprise 27 seconds to generate the 15000 primes with the profiler attached, and only 1.43 seconds without.  One important thing I wanted to call out though was the performance graph now: Notice anything odd?  The %Processor time is above 100%.  This is because there is now more than 1 core in the operation.  A better label for the chart in my mind would have been %Core time, but to each their own. Another odd thing I noticed was that the profiler seemed to be spot on this time in my DumbPrimes class with line details in source, even with comments..  Odd. Profiling Web Applications The last thing that I wanted to cover, that means a lot to me as a web developer, is the great amount of work that Red Gate put into the profiler when profiling web applications.  In my solution, I have a simple MVC4 application setup with 1 page, a single input form, that will output prime values as my WPF app did.  Launching the profiler from Visual Studio as before, nothing is really different in the profiler window, however I did receive a UAC prompt for a Red Gate helper app to integrate with the web server without notification. After requesting 500, 1000, 2000, and 5000 primes, and looking at the profiler session, things are slightly different from before: As you can see, there are 4 spikes of activity in the processor time graph, but there is also something new in the call tree: That’s right – ANTS will actually group method calls by get/post operations, so it is easier to find out what action/page is giving the largest problems…  Pretty cool in my mind! Overview Overall, I think that Red Gate ANTS CLR Profiler has a lot to offer, however I think it also has a long ways to go.  
3 Biggest Pros: Ability to easily drill down from time graph, to method calls, to source code Wide variety of counters to choose from when profiling your application Excellent integration/grouping of methods being called from web applications by request – BRILLIANT! 3 Biggest Cons: Issue regarding line details in source view Nit pick – Processor time vs. Core time Nit pick – Lack of full integration with Visual Studio Ratings Ease of Use (7/10) – I marked down here because of the problems with the line level details and the extra work that that entails, and the lack of better integration with Visual Studio. Effectiveness (10/10) – I believe that the profiler does EXACTLY what it purports to do.  Especially with its large variety of performance counters, a definite plus! Features (9/10) – Besides the real time performance monitoring, and the drill downs that I’ve shown here, ANTS also has great integration with ADO.Net, with the ability to show database queries run by your application in the profiler.  This, with the line level details, the web request grouping, reflector integration, and various options to customize your profiling session I think create a great set of features! Customer Service (10/10) – My entire experience with Red Gate personnel has been nothing but good.  their people are friendly, helpful, and happy! UI / UX (8/10) – The interface is very easy to get around, and all of the options are easy to find.  With a little bit of poking around, you’ll be optimizing Hello World in no time flat! Overall (8/10) – Overall, I am happy with the Performance Profiler and its features, as well as with the service I received when working with the Red Gate personnel.  I WOULD recommend you trying the application and seeing if it would fit into your process, BUT, remember there are still some kinks in it to hopefully be worked out. My next post will definitely be shorter (hopefully), but thank you for reading up to here, or skipping ahead!  Please, if you do try the product, drop me a message and let me know what you think!  I would love to hear any opinions you may have on the product. Code Feel free to download the code I used above – download via DropBox

    Read the article

  • Why does the TheyWorkForYou (TWFY) web API always return '{}'?

    - by Tartley
    I'm calling a web API exposed by TheyWorkForYou (TWFY). http://www.theyworkforyou.com/api/ I'm using the Python bindings provided by twfython: http://code.google.com/p/twfython/ I wrote some code to call this API a few months ago, and at that time it worked fine. But now that I've dug it out to run it again, no matter what query I send to the API, it always returns '{}' (an empty dictionary). For example the following code, which should return a list of all MPs: from twfy import TWFY API_KEY = 'XXXXXXXXXXXXXXXXXXXXXX' twfy = TWFY.TWFY(API_KEY) print twfy.api.getMPs(output='js') Am I being really dumb? What else should I check?

    Read the article

  • Nhibernate Criteria Group By clause and select other data that is not grouped

    - by Peter R
    Hi, I have a parent-child relationship, let's say classes and children. Each child belongs to a class and has a grade. I need to select the children (or the IDs of the children) with the lowest grade per class. session.CreateCriteria(typeof(Classs)) .CreateAlias("Children", "children") .SetProjection(Projections.ProjectionList() .Add(Projections.Min("children.Grade")) .Add(Projections.GroupProperty("Id")) ) .List<Object[]>(); This query returns the lowest grade per class, but I don't know which child got that grade. When I add the child's Id to the grouping, the grouping is wrong and every child gets returned. I was hoping to get the IDs of those children without grouping on them. If this is not possible, then maybe there is a way to solve this with subqueries?
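    For reference, the SQL this usually boils down to is a correlated subquery per class; a sketch (untested, table and column names assumed), which could then be rebuilt as a Criteria subquery with DetachedCriteria:

    SELECT c.ChildId, c.ClassId, c.Grade
    FROM Children c
    WHERE c.Grade = (SELECT MIN(c2.Grade)
                     FROM Children c2
                     WHERE c2.ClassId = c.ClassId);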

    Read the article

  • How to correctly filter a datatable (datatable.select)

    - by Phil
    Dim dt As New DataTable
    Dim da As New SqlDataAdapter(s, c)
    c.Open()
    If Not IsNothing(da) Then
        da.Fill(dt)
        dt.Select("GroupingID = 0")
    End If
    GridView1.DataSource = dt
    GridView1.DataBind()
    c.Close()

    When I call da.Fill I am inserting all records from my query. I was then hoping to filter them to display only those where the GroupingID is equal to 0. When I run the above code I am presented with all the data; the filter did not work. Please can you tell me how to get this working correctly. Thanks.

    Read the article

  • MySQL SELECT MAX multiple tables : foreach parent return eldest son's picture

    - by Guillermo
    **Table parent**
    parentId | name

    **Table children**
    childId | parentId | pictureId | age

    **Table childrenPictures**
    pictureId | imgUrl

    Now I would like to return all parent names with their eldest son's picture (only return parents that have children, and only consider children that have pictures), so I thought of something like:

    SELECT c.childId AS childId, p.name AS parentName, cp.imgUrl AS imgUrl, MAX(c.age) AS age
    FROM parent AS p
    RIGHT JOIN children AS c ON (p.parentId = c.parentId)
    RIGHT JOIN childrenPictures AS cp ON (c.pictureId = cp.pictureId)
    GROUP BY p.name

    This query will return each parent's eldest son's age, but the childId will not correspond to the eldest son's id, so the output does not show the right son's picture. Well, if anyone has a hint I'd appreciate it very much. Thank you very much, G
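    One possible shape for this (a sketch, untested; ties on age would return more than one row per parent): compute each parent's maximum child age over children that have pictures, then join back to pick that child's picture.

    SELECT p.name AS parentName, c.childId, cp.imgUrl, c.age
    FROM parent p
    JOIN children c ON c.parentId = p.parentId
    JOIN childrenPictures cp ON cp.pictureId = c.pictureId
    JOIN (
        SELECT ch.parentId, MAX(ch.age) AS maxAge
        FROM children ch
        JOIN childrenPictures chp ON chp.pictureId = ch.pictureId
        GROUP BY ch.parentId
    ) eldest ON eldest.parentId = c.parentId AND eldest.maxAge = c.age;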

    Read the article

  • using Spring JdbcTemplate for multiple database operations

    - by Joel Carranza
    I like the apparent simplicity of JdbcTemplate but am a little confused as to how it works. It appears that each operation (query() or update()) fetches a connection from a datasource and closes it. Beautiful, but how do you perform multiple SQL queries within the same connection? I might want to perform multiple operations in sequence (for example SELECT followed by an INSERT followed by a commit) or I might want to perform nested queries (SELECT and then perform a second SELECT based on result of each row). How do I do that with JdbcTemplate. Am I using the right class?

    Read the article

  • MySql timeouts - Should I set autoReconnect=true in Spring application?

    - by George
    After periods of inactivity on my website (Using Spring 2.5 and MySql), I get the following error: org.springframework.dao.RecoverableDataAccessException: The last packet sent successfully to the server was 52,847,830 milliseconds ago. is longer than the server configured value of 'wait_timeout'. You should consider either expiring and/or testing connection validity before use in your application, increasing the server configured values for client timeouts, or using the Connector/J connection property 'autoReconnect=true' to avoid this problem. According to this question, and the linked bug, I shouldn't just set autoReconnect=true. Does this mean I have to catch this exception on any queries I do and then retry the transaction? Should that logic be in the data access layer, or the model layer? Is there an easy way to handle this instead of wrapping every single query to catch this?

    Read the article

  • WPF absolute positioning in InkCanvas

    - by Nilu
    Hi, I'm trying to position a rectangle in an InkCanvas. I am using the following method. Unfortunately when I add the rectangle it gets displayed at (0,0), although when I query whether the left property is 0 I get a non-zero value. Does anyone know why this might be? Cheers, Nilu

    InkCanvas _parent = new InkCanvas();

    private void AddDisplayRect(Color annoColour, Rect bounds)
    {
        Rectangle displayRect = new Rectangle();
        Canvas.SetTop(displayRect, bounds.Y);
        Canvas.SetLeft(displayRect, bounds.X);
        // check to see if the property is set
        Trace.WriteLine(Canvas.GetLeft(displayRect));
        displayRect.Width = bounds.Width;
        displayRect.Height = bounds.Height;
        displayRect.Stroke = new SolidColorBrush(annoColour);
        displayRect.StrokeThickness = 1;
        _parent.Children.Add(displayRect);
    }

    Read the article

  • [ASP.NET] Odd HttpRequest behaviour

    - by barguast
    I have a web service which runs with an HttpHandler class. In this class, I inspect the request stream for form / query string parameters. In some circumstances, it seemed as though these parameters weren't getting through. After a bit of digging around, I came across some behaviour I don't quite understand. See below:

    // The request contains 'a=1&b=2&c=3'

    // TEST ONLY: Read the entire request
    string contents;
    using (StreamReader sr = new StreamReader(context.Request.InputStream))
    {
        contents = sr.ReadToEnd();
    }
    // Here 'contents' is usually correct - containing 'a=1&b=2&c=3'.
    // Sometimes it is empty.

    string a = context.Request["a"];
    // Here, a = null, regardless of whether the 'contents' variable above is correct

    Can anyone explain to me why this might be happening? I'm using a .NET WebClient and UploadDataAsync to perform the request on the client, if that makes any difference. If you need any more information, please let me know.

    Read the article

  • OpenDataSource fails - please help

    - by Vivek Chandraprakash
    I'm trying to export records from SQL Server 2008 to an mdb file using OpenDataSource. It works when I log in using Windows authentication, but it fails when I use SQL Server authentication. This is the error I get:

    OLE DB provider "Microsoft.Jet.OLEDB.4.0" for linked server "(null)" returned message "Could not delete from specified tables.".
    Msg 7320, Level 16, State 2, Procedure EXPORT_Employee, Line 110
    Cannot execute the query "DELETE FROM employee_export" against OLE DB provider "Microsoft.Jet.OLEDB.4.0" for linked server "(null)".

    Thanks -Vivek

    Read the article

  • Select all links from a Html table using XPath (and HtmlAgilityPack)

    - by Adam Asham
    What I am trying to achieve is to extract all links with a href attribute that starts with http://, https:// or /. These links lie within a table (tbody, tr, td etc.) with a certain class. I thought I could specify just the a element without the whole path to it, but it does not seem to work. I get a NullReferenceException at the line that selects the links: var table = doc.DocumentNode.SelectSingleNode("//table[@class='containerTable']"); if (table != null) { foreach (HtmlNode item in table.SelectNodes("a[starts-with(@href, 'https://')]")) { //not working I don't know of any recommendations or best practices when it comes to XPath. Do I create overhead when I query the document twice?

    Read the article

  • Returning to last viewed List page after insert/edit with ASP.NET Dynamic Data

    - by Pat James
    With a pretty standard Dynamic Data site, when the user edits or inserts a new item and saves, the page does a Response.Redirect(table.ListActionPath), which takes the user back to page 1 of the table. If they were editing an item on page 10 (or whatever) and want to edit the next item on that page, they have to remember the page number and navigate back to it. What's the best way to return the user to the list page they last viewed? I can conceive of some solutions using cookies, session state, or query string values to retain this state and making my own Page Template to incorporate it, but I can't help thinking this must be something that was considered when Dynamic Data was created, and there must be something simpler or built-in to the framework that I'm missing here.

    Read the article

  • SQL Server 2008 ContainsTable, CTE, and Paging

    - by David Murdoch
    I'd like to perform efficient paging using CONTAINSTABLE. The following query selects the top 10 ranked results from my database using CONTAINSTABLE when searching for a name (first or last) that begins with "Joh".

    DECLARE @Limit int;
    SET @Limit = 10;

    SELECT TOP (@Limit) c.ChildID, c.PersonID, c.DOB, c.Gender
    FROM [Person].[vFullName] AS v
    INNER JOIN CONTAINSTABLE(
        [Person].[vFullName],
        (FullName),
        'ISABOUT ("Joh*" WEIGHT (.4), "Joh" WEIGHT (.6))'
    ) AS k3 ON v.PersonID = k3.[KEY]
    JOIN [Child].[Details] c ON c.PersonID = v.PersonID
    JOIN [Person].[Details] p ON p.PersonID = c.PersonID
    ORDER BY k3.RANK DESC, FullName ASC, p.Active DESC, c.ChildID ASC

    I'd like to combine it with the following CTE, which returns the 10th-20th results ordered by ChildID (the primary key):

    DECLARE @Start int;
    DECLARE @Limit int;
    SET @Start = 10;
    SET @Limit = 10;

    WITH ChildEntities AS (
        SELECT ROW_NUMBER() OVER (ORDER BY ChildID) AS Row, ChildID
        FROM Child.Details
    )
    SELECT c.ChildID, c.PersonID, c.DOB, c.Gender
    FROM ChildEntities cte
    INNER JOIN Child.Details c ON cte.ChildID = c.ChildID
    WHERE cte.Row BETWEEN @Start+1 AND @Start+@Limit
    ORDER BY cte.Row ASC
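    A sketch of one way to combine the two (untested; it drops the Person.Details join for brevity): put the full-text join inside the CTE, rank it with ROW_NUMBER, and page on that row number instead of on ChildID.

    DECLARE @Start int = 10, @Limit int = 10;

    WITH Ranked AS (
        SELECT c.ChildID, c.PersonID, c.DOB, c.Gender,
               ROW_NUMBER() OVER (ORDER BY k3.[RANK] DESC, v.FullName ASC) AS Row
        FROM [Person].[vFullName] AS v
        INNER JOIN CONTAINSTABLE([Person].[vFullName], (FullName),
                                 'ISABOUT ("Joh*" WEIGHT (.4), "Joh" WEIGHT (.6))') AS k3
                ON v.PersonID = k3.[KEY]
        INNER JOIN [Child].[Details] c ON c.PersonID = v.PersonID
    )
    SELECT ChildID, PersonID, DOB, Gender
    FROM Ranked
    WHERE Row BETWEEN @Start + 1 AND @Start + @Limit
    ORDER BY Row;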

    Read the article

  • Hierarchy based aggregation

    - by Ganapathy Subramaniam
    I have a hierarchy table in SQL Server 2005 which contains employees - managers - department - location - state. Sample data for the hierarchy table:

    ID | Name       | ParentID | Type
    ---|------------|----------|--------------
    1  | PA         | NULL     | 0 (group)
    2  | Pittsburgh | 1        | 1 (subgroup)
    3  | Accounts   | 2        | 1
    4  | Alex       | 3        | 2 (employee)
    5  | Robin      | 3        | 2
    6  | HR         | 2        | 1
    7  | Robert     | 6        | 2

    The second one is a fact table which contains employee salary details (ID and Salary). Sample data for the fact table:

    ID | Salary
    ---|-------
    4  | 6000
    5  | 5000
    7  | 4000

    Is there any good way to display the hierarchy from the hierarchy table with the aggregated sum of salary based on employees? The expected result is like:

    Name       | Salary
    -----------|---------------------------------
    PA         | 15000 (Pittsburgh + others (if any))
    Pittsburgh | 15000 (Accounts + HR)
    Accounts   | 11000 (Alex + Robin)
    Alex       | 6000 (direct values)
    Robin      | 5000
    HR         | 4000
    Robert     | 4000

    In my production environment, the hierarchy table may contain 23,000+ rows and the fact table may contain 300,000+ rows. So I thought of providing any level of group id to the query to retrieve just its children and its corresponding aggregated value. Any better solution?
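    A sketch of one way to do this with a recursive CTE in SQL Server 2005 (untested; the table names Hierarchy and Fact are assumed): expand every node into the set of its descendants, then sum the salaries of the employee rows underneath each node.

    WITH Descendants AS (
        -- every node paired with itself...
        SELECT ID AS AncestorID, ID AS NodeID
        FROM Hierarchy
        UNION ALL
        -- ...and, recursively, with every node below it
        SELECT d.AncestorID, h.ID
        FROM Descendants d
        JOIN Hierarchy h ON h.ParentID = d.NodeID
    )
    SELECT h.Name, SUM(f.Salary) AS Salary
    FROM Descendants d
    JOIN Hierarchy h ON h.ID = d.AncestorID
    LEFT JOIN Fact f ON f.ID = d.NodeID
    GROUP BY d.AncestorID, h.Name
    ORDER BY d.AncestorID;

    Filtering the anchor member on a given ID (instead of selecting every node) would give the "just this group and its children" variant mentioned above.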

    Read the article

  • Trouble with MySQL: CONCAT_WS(' ', name_first, name_middle, name_last) like '%keyword%'

    - by AJB
    Hey folks, I'm setting up a keyword search across multiple fields: name_first, name_middle, name_last, but I'm not getting the results I'd like. Here's the query:

    SELECT accounts_users.user_ID, users.name_first, users.name_middle, users.name_last, users.company
    FROM accounts_users, users
    WHERE accounts_users.account_ID = '$account_ID'
    AND accounts_users.user_ID = users.id
    AND CONCAT_WS(' ', users.name_first, users.name_middle, users.name_last) LIKE '$user_keyword%'
    ORDER BY users.name_first ASC

    So, if I've got three names in the DB:

    Aaron J Ban
    Aaron J Can
    Bob L Lawblaw

    and user_keyword == "bob lawblaw", I get no result. If user_keyword == "bob L" then it returns Bob L Lawblaw. Obviously I can't force people to include the person's middle name in their keyword search, but I'm stuck for the proper way to do this. All help is greatly appreciated.
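    One sketch of a workaround (untested): also match the keyword against first name + last name only, so a search that skips the middle name still hits.

    AND (
        CONCAT_WS(' ', users.name_first, users.name_middle, users.name_last) LIKE '$user_keyword%'
        OR CONCAT_WS(' ', users.name_first, users.name_last) LIKE '$user_keyword%'
    )

    A fuller fix would be to split the keyword into terms and require each term to appear somewhere in the concatenated name (or use a FULLTEXT index), but the extra OR covers the "bob lawblaw" case.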

    Read the article

  • SQL Server 2008 DBNETLIB error

    - by Joseph
    Hi all, our ASP.NET applications are getting the error below: "[DBNETLIB][ConnectionOpen (Connect()).]SQL Server does not exist or access denied" But I can connect with Enterprise Manager, Management Studio and Query Analyzer without any issue. These applications had been running without any issue for a long time; in the last week we have been getting this error. If we restart the server it works, then the error comes back after 3 to 4 hours. We are running on Windows 2003 Server. I have been searching and haven't found a solution yet. If anybody knows anything about this error, please post the details to resolve it. Thank you in advance, Joseph

    Read the article

  • Switching from LinqToXYZ to LinqToObjects

    - by spender
    In answering this question, it got me thinking... I often use this pattern: collectionofsomestuff //here it's LinqToEntities .Select(something=>new{something.Name,something.SomeGuid}) .ToArray() //From here on it's LinqToObjects .Select(s=>new SelectListItem() { Text = s.Name, Value = s.SomeGuid.ToString(), Selected = false }) Perhaps I'd split it over a couple of lines, but essentially, at the ToArray point, I'm effectively enumerating my query and storing the resulting sequence so that I can further process it with all the goodness of a full CLR to hand. As I have no interest in any kind of manipulation of the intermediate list, I use ToArray over ToList as there's less overhead. I do this all the time, but I wonder if there is a better pattern for this kind of problem?

    Read the article

  • Read XML Files using LINQ to XML and Extension Methods

    - by psheriff
    In previous blog posts I have discussed how to use XML files to store data in your applications. I showed you how to read those XML files from your project and get XML from a WCF service. One of the problems with reading XML files is when elements or attributes are missing. If you try to read that missing data, then a null value is returned. This can cause a problem if you are trying to load that data into an object and a null is read. This blog post will show you how to create extension methods to detect null values and return valid values to load into your object. The XML Data An XML data file called Product.xml is located in the \Xml folder of the Silverlight sample project for this blog post. This XML file contains several rows of product data that will be used in each of the samples for this post. Each row has 4 attributes; namely ProductId, ProductName, IntroductionDate and Price. <Products>  <Product ProductId="1"           ProductName="Haystack Code Generator for .NET"           IntroductionDate="07/01/2010"  Price="799" />  <Product ProductId="2"           ProductName="ASP.Net Jumpstart Samples"           IntroductionDate="05/24/2005"  Price="0" />  ...  ...</Products> The Product Class Just as you create an Entity class to map each column in a table to a property in a class, you should do the same for an XML file too. In this case you will create a Product class with properties for each of the attributes in each element of product data. The following code listing shows the Product class. public class Product : CommonBase{  public const string XmlFile = @"Xml/Product.xml";   private string _ProductName;  private int _ProductId;  private DateTime _IntroductionDate;  private decimal _Price;   public string ProductName  {    get { return _ProductName; }    set {      if (_ProductName != value) {        _ProductName = value;        RaisePropertyChanged("ProductName");      }    }  }   public int ProductId  {    get { return _ProductId; }    set {      if (_ProductId != value) {        _ProductId = value;        RaisePropertyChanged("ProductId");      }    }  }   public DateTime IntroductionDate  {    get { return _IntroductionDate; }    set {      if (_IntroductionDate != value) {        _IntroductionDate = value;        RaisePropertyChanged("IntroductionDate");      }    }  }   public decimal Price  {    get { return _Price; }    set {      if (_Price != value) {        _Price = value;        RaisePropertyChanged("Price");      }    }  }} NOTE: The CommonBase class that the Product class inherits from simply implements the INotifyPropertyChanged event in order to inform your XAML UI of any property changes. You can see this class in the sample you download for this blog post. Reading Data When using LINQ to XML you call the Load method of the XElement class to load the XML file. Once the XML file has been loaded, you write a LINQ query to iterate over the “Product” Descendants in the XML file. The “select” portion of the LINQ query creates a new Product object for each row in the XML file. You retrieve each attribute by passing each attribute name to the Attribute() method and retrieving the data from the “Value” property. The Value property will return a null if there is no data, or will return the string value of the attribute. The Convert class is used to convert the value retrieved into the appropriate data type required by the Product class. 
private void LoadProducts(){  XElement xElem = null;   try  {    xElem = XElement.Load(Product.XmlFile);     // The following will NOT work if you have missing attributes    var products =         from elem in xElem.Descendants("Product")        orderby elem.Attribute("ProductName").Value        select new Product        {          ProductId = Convert.ToInt32(            elem.Attribute("ProductId").Value),          ProductName = Convert.ToString(            elem.Attribute("ProductName").Value),          IntroductionDate = Convert.ToDateTime(            elem.Attribute("IntroductionDate").Value),          Price = Convert.ToDecimal(elem.Attribute("Price").Value)        };     lstData.DataContext = products;  }  catch (Exception ex)  {    MessageBox.Show(ex.Message);  }} This is where the problem comes in. If you have any missing attributes in any of the rows in the XML file, or if the data in the ProductId or IntroductionDate is not of the appropriate type, then this code will fail! The reason? There is no built-in check to ensure that the correct type of data is contained in the XML file. This is where extension methods can come in real handy. Using Extension Methods Instead of using the Convert class to perform type conversions as you just saw, create a set of extension methods attached to the XAttribute class. These extension methods will perform null-checking and ensure that a valid value is passed back instead of an exception being thrown if there is invalid data in your XML file. private void LoadProducts(){  var xElem = XElement.Load(Product.XmlFile);   var products =       from elem in xElem.Descendants("Product")      orderby elem.Attribute("ProductName").Value      select new Product      {        ProductId = elem.Attribute("ProductId").GetAsInteger(),        ProductName = elem.Attribute("ProductName").GetAsString(),        IntroductionDate =            elem.Attribute("IntroductionDate").GetAsDateTime(),        Price = elem.Attribute("Price").GetAsDecimal()      };   lstData.DataContext = products;} Writing Extension Methods To create an extension method you will create a class with any name you like. In the code listing below is a class named XmlExtensionMethods. This listing just shows a couple of the available methods such as GetAsString and GetAsInteger. These methods are just like any other method you would write except when you pass in the parameter you prefix the type with the keyword “this”. This lets the compiler know that it should add this method to the class specified in the parameter. public static class XmlExtensionMethods{  public static string GetAsString(this XAttribute attr)  {    string ret = string.Empty;     if (attr != null && !string.IsNullOrEmpty(attr.Value))    {      ret = attr.Value;    }     return ret;  }   public static int GetAsInteger(this XAttribute attr)  {    int ret = 0;    int value = 0;     if (attr != null && !string.IsNullOrEmpty(attr.Value))    {      if(int.TryParse(attr.Value, out value))        ret = value;    }     return ret;  }   ...  ...} Each of the methods in the XmlExtensionMethods class should inspect the XAttribute to ensure it is not null and that the value in the attribute is not null. If the value is null, then a default value will be returned such as an empty string or a 0 for a numeric value. Summary Extension methods are a great way to simplify your code and provide protection to ensure problems do not occur when reading data. 
You will probably want to create more extension methods to handle XElement objects as well for when you use element-based XML. Feel free to extend these extension methods to accept a parameter which would be the default value if a null value is detected, or any other parameters you wish. NOTE: You can download the complete sample code at my website. http://www.pdsa.com/downloads. Choose “Tips & Tricks”, then "Read XML Files using LINQ to XML and Extension Methods" from the drop-down. Good Luck with your Coding,Paul D. Sheriff  

    Read the article

  • Retrieve a value inside a stored procedure and use it inside that stored procedure

    - by sai
    delimiter //
    CREATE DEFINER=root@localhost PROCEDURE getData(IN templateName VARCHAR(45), IN templateVersion VARCHAR(45), IN userId VARCHAR(45))
    BEGIN
      set @version = CONCAT("SELECT saveOEMsData_answersVersion FROM saveOEMsData WHERE saveOEMsData_templateName = '", templateName, "' AND saveOEMsData_templateVersion = ", templateVersion, " AND saveOEMsData_userId = ", userId);
      PREPARE s1 from @version;
      EXECUTE s1;
    END //
    delimiter ;

    I am retrieving saveOEMsData_answersVersion, but I have to use it in an IF condition, as in: if the version == 1, then I would use one query, else I would use something else. But I am not able to use the version. Could someone help with this? I am only able to print it but not able to use the version for data manipulation. The procedure works fine but I am unable to proceed to the next step, which is the IF condition. The IF condition would have something like the below:

    IF (ver == 1) THEN
      SELECT "1";
    END IF;
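    Since only the filter values are dynamic (not table or column names), the prepared statement isn't strictly needed; a sketch (untested) that selects the version into a local variable and branches on it:

    DELIMITER //
    CREATE PROCEDURE getData(IN templateName VARCHAR(45),
                             IN templateVersion VARCHAR(45),
                             IN userId VARCHAR(45))
    BEGIN
      DECLARE ver INT;

      SELECT saveOEMsData_answersVersion INTO ver
      FROM saveOEMsData
      WHERE saveOEMsData_templateName = templateName
        AND saveOEMsData_templateVersion = templateVersion
        AND saveOEMsData_userId = userId
      LIMIT 1;

      IF ver = 1 THEN
        SELECT '1';
      ELSE
        -- run the other query here
        SELECT 'other';
      END IF;
    END //
    DELIMITER ;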

    Read the article

  • Storing user info in Session using an Object vs. normal variables

    - by justinl
    I'm in the process of implementing a user authentication system for my website. I'm using an open source library that maintains user information by creating a User object and storing that object inside my php SESSION variable. Is this the best way to store and access that information? I find it a bit of a hassle to access the user variables because I have to create an object to access them first: $userObj = $_SESSION['userObject']; $userObj->userId; instead of just accessing the user id like this how I would usually store the user ID: $_SESSION['userId']; Is there an advantage to storing a bunch of user data as an object instead of just storing them as individual SESSION variables? ps - The library also seems to store a handful of variables inside the user object (id, username, date joined, email, last user db query) but I really don't care to have all that information stored in my session. I only really want to keep the user id and username.

    Read the article
