Search Results

Search found 6311 results on 253 pages for 'limit clause'.

  • Best way to detect integer overflow in C/C++

    - by Chris Johnson
    I was writing a program in C++ to find all solutions of a^b = c (a to the power of b), where a, b and c together use all the digits 0-9 exactly once. The program looped over values of a and b, and ran a digit-counting routine each time on a, b and a^b to check if the digits condition was satisfied. However, spurious solutions can be generated when a^b overflows the integer limit. I ended up checking for this using code like:

        unsigned long b, c, c_test;
        ...
        c_test = c * b;              // Possible overflow
        if (c_test / b != c) {       /* There has been an overflow */ }
        else c = c_test;             // No overflow

    Is there a better way of testing for overflow? I know that some chips have an internal flag that is set when overflow occurs, but I've never seen it accessed through C or C++.
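
    A related portable approach is to compare against the type's limits before multiplying, so no wrapped result is ever produced; this matters for signed types, where overflow is undefined behaviour and after-the-fact checks aren't safe. A minimal sketch (the helper name is mine, not a standard API):

        #include <climits>
        #include <cstdio>

        // True if a * b would exceed ULONG_MAX; checks before multiplying.
        static bool mul_overflows(unsigned long a, unsigned long b) {
            return b != 0 && a > ULONG_MAX / b;
        }

        int main() {
            unsigned long c = 4000000000UL, b = 3UL;
            if (mul_overflows(c, b))
                std::printf("overflow would occur\n");
            else
                std::printf("%lu\n", c * b);
            return 0;
        }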

  • Asp.net Mvc 2: Repository, Paging, and Filtering how to?

    - by Dr. Zim
    It makes sense to pass a filter object to the repository so it can limit what records return:

        var myFilterObject = myFilterFactory.GetBlank();
        myFilterObject.AddFilter(new Filter { "transmission", "eq", "Automatic" });
        var myCars = myRepository.GetCars(myFilterObject);

    Key question: how would you implement paging, and where? Any links on how to return a LazyList from a repository as it would apply here? Would this be part of the filter object? Something like:

        myFilterObject.AddFilter(new Filter { "StartAtRecord", "eq", "45" });
        myFilterObject.AddFilter(new Filter { "GetQuantity", "eq", "15" });
        var myCars = myRepository.GetCars(myFilterObject);

    I assume the repository must implement filtering; otherwise you would get all records.
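
    One common shape for this is to put the paging values on the filter object as plain properties and compose them with LINQ's Skip/Take inside the repository. A hedged sketch (Car, CarFilter, and CarDataContext are hypothetical names, not from the question):

        using System.Linq;

        public class CarFilter
        {
            public string Transmission { get; set; }
            public int Page { get; set; }       // zero-based page index
            public int PageSize { get; set; }
        }

        public class SqlCarRepository
        {
            private readonly CarDataContext db;
            public SqlCarRepository(CarDataContext db) { this.db = db; }

            public IQueryable<Car> GetCars(CarFilter filter)
            {
                IQueryable<Car> cars = db.Cars;
                if (filter.Transmission != null)
                    cars = cars.Where(c => c.Transmission == filter.Transmission);
                // Skip/Take compose into the generated SQL, so only one
                // page of rows ever crosses the wire.
                return cars.OrderBy(c => c.Id)
                           .Skip(filter.Page * filter.PageSize)
                           .Take(filter.PageSize);
            }
        }

    Because the result is an IQueryable, the query stays deferred until the view enumerates it, which is also how lazy-list-style repositories usually work.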

  • Embedded Glassfish - logging

    - by Walter White
    Hi all, I have migrated from log4j to logback and am also transitioning from Jetty to Glassfish. I haven't updated my logback configuration from what I had used with Jetty, and consequently am not seeing any logs being written. What logging provider should I use? Should I just do my configuration with the Glassfish loggers in domain.xml?

        <access-log rotation-interval-in-minutes="15" rotation-suffix="yyyy-MM-dd"/>
        <log-service file="${com.sun.aas.instanceRoot}/logs/server.log" log-rotation-limit-in-bytes="2000000">
            <module-log-levels/>
        </log-service>

    These are the defaults in domain.xml. I'd like to split the logs up into several files as well as control the log level for each package. I think I can figure out how to configure them, but should I use Glassfish logging, or can I use logback? Walter
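
    If logback stays in the picture, a minimal logback.xml along these lines splits output into separate files and sets per-package levels (appender names, file paths, and package names are illustrative):

        <configuration>
          <appender name="APP" class="ch.qos.logback.core.FileAppender">
            <file>logs/application.log</file>
            <encoder>
              <pattern>%d{HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n</pattern>
            </encoder>
          </appender>
          <appender name="SQL" class="ch.qos.logback.core.FileAppender">
            <file>logs/sql.log</file>
            <encoder>
              <pattern>%d{HH:mm:ss.SSS} %-5level %logger - %msg%n</pattern>
            </encoder>
          </appender>
          <!-- Per-package level; additivity="false" keeps entries out of the root log. -->
          <logger name="com.example.persistence" level="DEBUG" additivity="false">
            <appender-ref ref="SQL"/>
          </logger>
          <root level="INFO">
            <appender-ref ref="APP"/>
          </root>
        </configuration>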

  • Slow query with unexpected index scan

    - by zerkms
    Hello. I have this query:

        SELECT *
        FROM sample
        INNER JOIN test ON sample.sample_number = test.sample_number
        INNER JOIN result ON test.test_number = result.test_number
        WHERE sampled_date BETWEEN '2010-03-17 09:00' AND '2010-03-17 12:00'

    The biggest table here is RESULT, which contains 11.1M records; the other two tables hold about 1M each. This query runs slowly (more than 10 minutes) and returns about 800 records. The execution plan shows a clustered index scan (over its PRIMARY KEY, result.result_number, which actually doesn't take part in the query) across all 11M records. RESULT.TEST_NUMBER is a clustered primary key. If I change 2010-03-17 09:00 to 2010-03-17 10:00, I get about 40 records; it executes in 300ms, and the plan shows an index seek (over the result.test_number index). If I replace * in the SELECT clause with result.test_number (covered by an index), then everything becomes fast in the first case too. This points to disk I/O issues, but doesn't explain the change of plan. So, any ideas? UPDATE: sampled_date is in table sample and covered by an index; the other fields from this query, test.sample_number and result.test_number, are covered by indexes too. UPDATE 2: Apparently SQL Server for some reason doesn't want to use the index. I did a small experiment: I removed the INNER JOIN with result, selected all the test.test_number values, and then ran SELECT * FROM RESULT WHERE TEST_NUMBER IN (...). This, of course, works fast. But I cannot see what the difference is, or why the query optimizer chooses such an inappropriate way to select the data in the first case.
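
    One way to test whether the optimizer's row-count estimate is driving the scan is to force the seek with an index hint; if the hinted plan is fast, stale statistics are a likely culprit. A hedged sketch (the index name is a placeholder for whatever the result.test_number index is actually called):

        SELECT *
        FROM sample
        INNER JOIN test ON sample.sample_number = test.sample_number
        INNER JOIN result WITH (INDEX(IX_result_test_number))
            ON test.test_number = result.test_number
        WHERE sampled_date BETWEEN '2010-03-17 09:00' AND '2010-03-17 12:00';

        -- If the hinted plan is the fast one, refreshing statistics may fix
        -- the unhinted plan as well:
        UPDATE STATISTICS result;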

  • Tiny, very-basic outgoing MTA?

    - by Xeoncross
    I am looking for a very tiny Mail Transfer Agent (MTA) for my smaller VPSes, which only need to send emails for new accounts or notifications. Originally, I used PHP sockets and wrote a 4kb SMTP class to connect to Google's SMTP servers and send the emails - but I would like to be free of the 500-emails-a-day (or whatever) limit they have. On a larger VPS I can run Postfix - but it takes about 50MB, which makes it too big for a 128/256MB VPS. So are there any very tiny MTAs out there - or is there a rundown of the protocol for sending email (I guess it works by forwarding the mail through other servers till it reaches the correct end mail server, right?) so I could build a smaller bash or PHP script for this outgoing stuff?
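
    For reference, relay-free delivery means resolving the recipient domain's MX record and speaking SMTP to that host directly; nothing forwards the mail unless you configure a relay. A bare-bones PHP sketch of that conversation (no multiline-reply or error handling; note that many networks block outbound port 25, and receiving servers often reject senders without reverse DNS/SPF):

        <?php
        // Minimal direct-to-MX delivery sketch; assumes each server reply
        // fits on one line (real servers can send multiline replies).
        function send_direct($from, $to, $subject, $body) {
            list(, $domain) = explode('@', $to, 2);
            if (!getmxrr($domain, $hosts, $weights)) return false;
            array_multisort($weights, $hosts);   // lowest-preference MX first
            $fp = fsockopen($hosts[0], 25, $errno, $errstr, 30);
            if (!$fp) return false;
            fgets($fp, 512);                     // server greeting
            foreach (array("HELO myhost.example.com",
                           "MAIL FROM:<$from>",
                           "RCPT TO:<$to>",
                           "DATA") as $cmd) {
                fwrite($fp, $cmd . "\r\n");
                fgets($fp, 512);                 // read one reply line
            }
            fwrite($fp, "Subject: $subject\r\n\r\n$body\r\n.\r\n");
            $ok = strpos(fgets($fp, 512), '250') === 0;
            fwrite($fp, "QUIT\r\n");
            fclose($fp);
            return $ok;
        }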

  • We have multiple app servers running against a single database. How do I ensure that each row in a q

    - by Dave
    We have about 7 app servers running .NET Windows services that ping a single SQL Server 2005 queue table and fetch a fixed number of records to process at fixed intervals. The number of records to process and the amount of time between fetches are both configurable and are initially set to 100 records and 30 seconds. Currently, my queue table has an int status column which can be "Ready", "Processing", "Complete", or "Error". The proc that fetches the records wraps the following steps in a SQL transaction:

    1) Fetch x records into a temp table, where the status is "Ready". The select uses a HOLDLOCK hint.
    2) Update the status on those records in the queue table to "Processing".

    The .NET services do some processing that may take seconds or even minutes per record. Another proc is called per record that simply updates the status to "Complete". The update proc has no transaction, as I'm leaning on the implicit transaction of the UPDATE statement here. I don't know the traffic expectations for this, but figure it will be under 10k records per day. Is this the best way to handle this scenario? If so, are there any details that I've left out, such as a hint here or there? Thanks! Dave
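
    A common pattern for competing dequeuers on SQL Server 2005 is a single atomic UPDATE with READPAST, which claims rows and skips anything another server has already locked, avoiding both blocking and double-claims. A hedged sketch (table and column names are assumed, not from the question):

        DECLARE @claimed TABLE (QueueId int);

        -- READPAST skips rows locked by other app servers; the OUTPUT
        -- clause captures exactly the rows this server claimed.
        UPDATE TOP (100) dbo.WorkQueue WITH (ROWLOCK, READPAST)
        SET    Status = 'Processing'
        OUTPUT inserted.QueueId INTO @claimed
        WHERE  Status = 'Ready';

        SELECT q.*
        FROM   dbo.WorkQueue q
        JOIN   @claimed c ON q.QueueId = c.QueueId;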

  • cpptask ordering of static libraries in gcc command line

    - by AC
    How do I force cpptasks to move the static libraries to the end of the argument list passed to the compiler? Here is the clause I am using:

        <cpptasks:cc description="appname" subsystem="console" objdir="obj" outfile="dist/app_test">
            <compiler refid="testsslcc" />
            <linkerarg value="-L${libdir}" />
            <linkerarg value="-L/usr/local/devl/lib" />
            <linkerarg value="-Wl,-rpath,../lib" />
            <libset libs="unittest ${libs} dsg readline ncurses gcov" />
            <fileset dir="test/obj" includes="main.o" />
            <fileset dir="." includes="${TCFILES}" />
            <fileset dir="../lib" includes="libboost_thread.a libboost_date_time.a" />
        </cpptasks:cc>

    When this executes, libboost_thread.a and libboost_date_time.a are the first files in the argument list passed to the compiler:

        gcc -ggdb -Wl,-export-dynamic -Wshadow -Wno-format-y2k ../../lib/libboost_date_time.a ../../lib/libboost_thread.a x.cpp ...

    which causes a compiler error. By manually moving them to the end of the argument list, the application compiles without error:

        gcc -ggdb -Wl,-export-dynamic -Wshadow -Wno-format-y2k x.cpp ... ../../lib/libboost_date_time.a ../../lib/libboost_thread.a

    And yes, I have tried changing the order in the XML, and that of course didn't work. For now I am using an exec task to call gcc with the files in the correct order, but this of course is a hack.

  • Preserving Language across inline Calculated Members in SSAS

    - by Tullo
    Problem: I need to retrieve the language of a given cell from the cube. The cell is defined by code-generated MDX, which can have an arbitrary level of indirection as far as calculated members and sets go (defined in the WITH clause). SSAS appears to ignore the Language of the specified members when you declare a calculated member inline in the query. Example: the cube's default locale is 1033 (en-US); the cube contains a calculated measure called [Net Pounds], which is defined as [Net Amt] with language=2057 (en-GB); the query requests this measure alongside an inline calculated measure which is simply an alias for [Net Pounds]. When used directly, the measure is formatted in the en-GB locale, but when aliased, the measure falls back to the cube default of en-US. Here's what the query looks like:

        WITH MEMBER [Measures].[Pounds Indirect] AS [Measures].[Net Pounds]
        SELECT { [Measures].[Pounds Indirect], [Measures].[Net Pounds] } ON AXIS(0)
        FROM [Cube]
        CELL PROPERTIES language, value, formatted_value

    The query returns the expected two cells, but only uses the [Net Pounds] locale when the measure is used directly. Is there an option or switch somewhere in SSAS that will allow locale information to be visible in calculated members? I realise that it is possible to declare the inline calculated member in a particular locale, but this would involve extracting the locale from the tuple first, which (since the cube's member is isolated in the application's query schema) is unknown.
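
    For reference, the per-member locale declaration the question alludes to is the LANGUAGE cell property on the inline member, which would look like this if the target locale were known up front:

        WITH MEMBER [Measures].[Pounds Indirect] AS
            [Measures].[Net Pounds],
            LANGUAGE = 2057    -- en-GB; cell properties follow the expression
        SELECT { [Measures].[Pounds Indirect], [Measures].[Net Pounds] } ON AXIS(0)
        FROM [Cube]
        CELL PROPERTIES language, value, formatted_value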

  • WF 4.0: can't resume workflow on the staging/production environment

    - by Yasmine Atta Hajjaj
    I have developed various registration workflows using WF 4.0. Each workflow has various bookmarks. I am using the registration workflows from an ASP.NET application. I tested the ASP.NET application locally and it is working fine (starting workflows, persisting to the database, and resuming bookmarks). When I try to test it on the staging server, everything goes wrong: I can no longer resume workflows, and I get this error message:

        System.Runtime.DurableInstancing.InstancePersistenceCommandException was unhandled by user code
        Message=The execution of the InstancePersistenceCommand named {urn:schemas-microsoft-com:System.Activities.Persistence/command}LoadWorkflow was interrupted by an error.
        Source=System.Runtime.DurableInstancing
        StackTrace:
           at System.Runtime.AsyncResult.End[TAsyncResult](IAsyncResult result)
           at System.Runtime.DurableInstancing.InstancePersistenceContext.OuterExecute(InstanceHandle initialInstanceHandle, InstancePersistenceCommand command, Transaction transaction, TimeSpan timeout)
           at System.Runtime.DurableInstancing.InstanceStore.Execute(InstanceHandle handle, InstancePersistenceCommand command, TimeSpan timeout)
           at System.Activities.WorkflowApplication.PersistenceManager.Load(TimeSpan timeout)
           at System.Activities.WorkflowApplication.LoadCore(TimeSpan timeout, Boolean loadAny)
           at System.Activities.WorkflowApplication.Load(Guid instanceId, TimeSpan timeout)
           at System.Activities.WorkflowApplication.Load(Guid instanceId)
           at CEO_StartUpCEORegisterationTest.LoadInstance(Guid wfInstanceId) in c:\Users\Kunoichi\Documents\Visual Studio 2010\Projects\CMERegistrationSystem\RegistrationPortal\CEO\StartUpCEORegisterationTest.aspx.cs:line 64
           at CEO_StartUpCEORegisterationTest.Page_Load(Object sender, EventArgs e) in c:\Users\Kunoichi\Documents\Visual Studio 2010\Projects\CMERegistrationSystem\RegistrationPortal\CEO\StartUpCEORegisterationTest.aspx.cs:line 44
           at System.Web.Util.CalliHelper.EventArgFunctionCaller(IntPtr fp, Object o, Object t, EventArgs e)
           at System.Web.Util.CalliEventHandlerDelegateProxy.Callback(Object sender, EventArgs e)
           at System.Web.UI.Control.OnLoad(EventArgs e)
           at System.Web.UI.Control.LoadRecursive()
           at System.Web.UI.Page.ProcessRequestMain(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint)
        InnerException: System.Data.SqlClient.SqlException
        Message=Index 'NCIX_KeysTable_SurrogateInstanceId' on table 'KeysTable' (specified in the FROM clause) does not exist.
        Source=.Net SqlClient Data Provider
        ErrorCode=-2146232060
        Class=16
        LineNumber=211
        Number=308
        Procedure=LoadInstance
        Server=
        State=1
        StackTrace:
           at System.Runtime.AsyncResult.End[TAsyncResult](IAsyncResult result)
           at System.Activities.DurableInstancing.SqlWorkflowInstanceStoreAsyncResult.SqlCommandAsyncResultCallback(IAsyncResult result)

    I know that this is quite verbose, but I have been banging my head against the wall for more than a week. I did search, and all I came to know was to work on MS DTC. I enabled it on the staging server, I installed the Application Server role on the staging server, and I am still getting the same error. I would appreciate it if anyone could help with the problem. Thanks in advance :)
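
    The missing-index error suggests the staging persistence database was created from an incomplete or mismatched set of WF4 instance-store scripts. One thing worth ruling out is re-running the two scripts that ship with .NET 4.0 against the staging database (the paths shown are the framework defaults; server and database names are placeholders):

        sqlcmd -S STAGINGSERVER -d WorkflowInstanceStore ^
          -i "%WINDIR%\Microsoft.NET\Framework\v4.0.30319\SQL\en\SqlWorkflowInstanceStoreSchema.sql"
        sqlcmd -S STAGINGSERVER -d WorkflowInstanceStore ^
          -i "%WINDIR%\Microsoft.NET\Framework\v4.0.30319\SQL\en\SqlWorkflowInstanceStoreLogic.sql"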

  • How to run batched WCF service calls in Silverlight BackgroundWorker

    - by Simon
    Is there any existing plumbing to run WCF calls in batches in a BackgroundWorker? Obviously, since all Silverlight WCF calls are async, if I run them all in a BackgroundWorker they will all return instantly. I just don't want to implement a nasty hack if there's a nice way to run service calls and collect the results.

    - It doesn't matter what order they are completed in
    - All operations are independent
    - I'd like to have no more than 5 items running at once

    Edit: I've also noticed (when using Fiddler) that no more than about 7 calls are able to be sent at any one time. Even when running out-of-browser this limit applies. Is this due to my default browser settings, or is it configurable? Obviously it's a poor man's solution (and not suitable for what I want), but something I'll probably need to take account of to make sure the rest of my app remains responsive if I'm running this as a background task and don't want it using up all my connections.
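
    There's no built-in batching for async Silverlight proxies; a small hand-rolled throttle along these lines (all names are illustrative) keeps at most five calls in flight and fires a callback when everything has completed. It assumes all completion callbacks arrive on one thread, as Silverlight proxy events do by default:

        using System;
        using System.Collections.Generic;

        // Each queued delegate receives a "done" callback to invoke from
        // its async Completed handler.
        public class CallThrottler
        {
            private readonly Queue<Action<Action>> pending = new Queue<Action<Action>>();
            private readonly Action allDone;
            private readonly int maxInFlight;
            private int inFlight;

            public CallThrottler(int maxInFlight, Action allDone)
            {
                this.maxInFlight = maxInFlight;
                this.allDone = allDone;
            }

            public void Enqueue(Action<Action> call)
            {
                pending.Enqueue(call);
                Pump();
            }

            private void Pump()
            {
                while (inFlight < maxInFlight && pending.Count > 0)
                {
                    inFlight++;
                    pending.Dequeue()(OnCallCompleted);
                }
            }

            private void OnCallCompleted()
            {
                inFlight--;
                if (pending.Count > 0) Pump();
                else if (inFlight == 0) allDone();
            }
        }

    Usage would look something like: throttler.Enqueue(done => { client.GetDataCompleted += (s, e) => done(); client.GetDataAsync(); });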

  • Django JSON serializable error

    - by Hulk
    With the code below, there is an error saying:

        File "/home/user/web_pro/info/views.py", line 184, in headerview
          raise TypeError("%r is not JSON serializable" % (o,))
        TypeError: <lastname: jerry> is not JSON serializable

    In the models code:

        class header(models.Model):
            firstname = models.ForeignKey(Firstname)
            lastname = models.ForeignKey(Lastname)

    In the views code:

        def headerview(request):
            header = header.objects.filter(created_by=my_id).order_by(order_by)[offset:limit]
            l_array = []
            l_array_obj = []
            for obj in header:
                l_array_obj = [obj.title, obj.lastname, obj.firstname]
                l_array.append(l_array_obj)
            dictionary_l.update({'Data': l_array})
            return HttpResponse(simplejson.dumps(dictionary_l), mimetype='application/javascript')

    What is this error and how do I resolve it? Thanks.
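
    simplejson only knows how to serialize plain Python types, and obj.lastname here is a Lastname model instance (hence the <lastname: jerry> in the message). The usual fix is to dump string values instead of the related objects; a minimal sketch for the loop:

        # unicode() invokes the model's __unicode__ method (use str() on
        # Python 3 / modern Django), producing a JSON-serializable value.
        for obj in header:
            l_array.append([obj.title, unicode(obj.lastname), unicode(obj.firstname)])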

  • WinRT GridView scrolling setup work differently on mouse/kb and touch

    - by Jay Kannan
    I'm trying to mimic the functionality of the Netflix app, with a strip on the left that collapses on scrolling. I had to offset the tiles on the GridView a bit to the right so that they can accommodate that behavior. It seems to work all right with keyboard and mouse, scrolling completely to the right (although I noticed the scrollbar suddenly grows in size when I hit the left boundary). This totally changes when I use touch: I seem to hit a limit on the right, and the scrolling doesn't go past the last 100 pixels or so. How do I take care of this? I'm assuming it is related to the bug here, but the solution there didn't seem to solve the problem: "Sticky scrolling" issue in WinRT XAML GridView control

  • nServiceBus with large XML messages

    - by Sean
    Hello, I have read about true messaging and how, instead of sending the payload on the bus, it sends an identifier. In our case, we have a lot of legacy apps/services that were designed to receive the payload of messages (XML) that is close to 4MB (close to the MSMQ limit). Is there a way for NServiceBus to handle large payloads and persist messages automatically, or another workaround, so that the publisher/subscriber services don't have to worry about either the payload size or how to de/re-hydrate the payload? Thank you in advance.
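
    The identifier-on-the-bus idea is usually called the claim check pattern: the payload goes to shared storage and only a key travels in the message. A minimal hand-rolled sketch (the share path and message shape are assumptions; newer NServiceBus versions also ship a DataBus feature that automates this):

        using System;
        using System.IO;
        using NServiceBus;

        // The bus carries only a claim ticket, never the 4MB payload.
        public class DocumentReadyMessage : IMessage
        {
            public Guid ClaimId { get; set; }
        }

        public static class PayloadStore
        {
            private const string ShareRoot = @"\\fileserver\payloads";  // assumed location

            public static Guid Stash(string xml)       // publisher side
            {
                var id = Guid.NewGuid();
                File.WriteAllText(Path.Combine(ShareRoot, id + ".xml"), xml);
                return id;
            }

            public static string Fetch(Guid id)        // subscriber side
            {
                return File.ReadAllText(Path.Combine(ShareRoot, id + ".xml"));
            }
        }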

  • Is tcerl for Mnesia production ready? Are there any alternatives?

    - by Sanoj
    I would like to create a scalable web service using Mnesia as the database. However, Mnesia by default isn't scalable for persistent storage, since it uses Dets (which has a 2GB limit) as the backend. I have seen discussions about extending Mnesia with MnesiaEx and using tcerl as the backend. It sounds good and has shown good performance. However, I have seen in a talk about Tokyo Cabinet and CouchDB with Mnesia that there are some issues: issues with durability, issues with memory leaks, and issues with crashes. Is tcerl + Mnesia really production ready? And are there any alternatives? How do companies overcome these issues if they use Mnesia in bigger systems? Is there a working solution with Mnesia and Tokyo Tyrant that works better?

  • ASP.NET What's the best way to produce a trial version for customers to download?

    - by Craig Izard
    Hi all, I've written an ASP.NET app that I hope to sell to businesses. I could host the trial, but it's designed to connect to the customer's data, so customers will certainly want to install it to do a successful evaluation. I've never produced anything commercial before, so I'm looking for advice on how best to limit the trial. A 30-day trial seems most common; do you simply rely on the clock of the PC/server they install it on? Any other suggestions welcome; please keep in mind this is an ASP.NET app, so it will be installed on their web server. Thanks, Craig

  • Entity Framework query not returning correctly enumerated results.

    - by SkippyFire
    I have this really strange problem where my Entity Framework query isn't enumerating correctly. The SQL Server table I'm using has a Sku field, and the column is "distinct": it isn't a key, but it doesn't contain any duplicate values. Using actual SQL with WHERE, DISTINCT and GROUP BY clauses, I have confirmed this. However, when I do any of these:

        // Not good
        foreach (var product in dc.Products)
        // Not good
        foreach (var product in dc.Products.ToList())
        // Not good
        foreach (var product in dc.Products.OrderBy(p => p.Sku))

    the first two objects that are returned ARE THE SAME!!! The third item was technically the second item in the table, but then the fourth item was the first row from the table again!!! The only solution I have found is to use the Distinct extension method, which shouldn't really do anything in this situation:

        // Good
        foreach (var product in dc.Products.ToList().Distinct())

    Another weird thing about this is that the count of the resulting queries is the same!!! So whether or not the resulting enumerable has the correct results or duplicates, I always get the number of rows in the actual table! (No, I don't have a limit clause anywhere.) What could possibly cause this!?!?!?
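
    One classic cause of exactly this symptom is identity resolution: if the entity key configured in the model isn't actually unique in the table, EF hands back the already-materialized entity for every row that shares a key value, so the same object repeats while the row count stays correct. A quick hedged diagnostic, assuming an EF v1 ObjectContext where Products is an ObjectQuery:

        using System;
        using System.Data.Objects;  // MergeOption

        // With NoTracking, identity resolution is skipped and rows come
        // back exactly as stored; if the duplicates vanish, the model's
        // entity key doesn't match a genuinely unique column.
        var query = dc.Products;
        query.MergeOption = MergeOption.NoTracking;
        foreach (var product in query)
            Console.WriteLine(product.Sku);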

  • Multiplying char and int together in C

    - by teehoo
    Today I found the following:

        #include <stdio.h>

        int main() {
            char x = 255;
            int z = ((int)x) * 2;
            printf("%d\n", z);  // prints -2
            return 0;
        }

    So basically I'm getting an overflow because the size limit is determined by the operands on the right side of the = sign?? Why doesn't casting it to int before multiplying work? In this case I'm using a char and an int, but if I use long and long long int (C99), then I get similar behaviour. Is doing arithmetic with operands of different sizes generally advised against?
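
    Worth checking: this may not be multiplication overflow at all. On platforms where plain char is signed, char x = 255 already stores -1, and the cast merely widens that -1 to int, so (-1)*2 is -2. A small probe (results noted for signed-char platforms):

        #include <stdio.h>

        int main(void) {
            char c = 255;           /* implementation-defined; -1 where char is signed */
            unsigned char u = 255;  /* always 255 */
            printf("%d\n", (int)c * 2);   /* prints -2 on signed-char platforms */
            printf("%d\n", (int)u * 2);   /* prints 510 */
            return 0;
        }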

  • Extra white space in HTML output on PHP MVC

    - by user316841
    Hi, I'm getting extra white space in the view output (the HTML) that is not CSS or anything like it. I've checked for ?> closing tags (removed them where I could) and saved the files as UTF-8 without BOM. I've checked for existing white space at the beginning of each file, and even at the end. This is the structure:

        index.php   - this is the entry point
        MODEL/
        CONTROLLER/
        VIEW/

    Let's say that through the GET method, the var TPL is sent with some value; let's call it LIST. It pulls the LIST model with all the data and then shows the right template to the user, with the right data. I used and tested require_once, include_once, include, and even readfile (just to test). The LIST template opens header.tpl and footer.tpl; I also tried to remove both includes from the LIST template, but the extra white space was still generated. The controller activity runs here, and this is where the extra white space is coming from:

        $model_works->getRows();
        $rows = $model_works->rows;
        if (!require_once('views/list_works.tpl.php')) {
            echo "Error.";
        } // end if clause

    list_works.tpl.php is basically HTML with tags; I tested by changing the extension to something else, like html. Also, just to remember: at the top of this file we require_once the header.tpl, and at the bottom the footer.tpl. I've tested by removing both, and the extra white space was still generated. Thanks a lot for looking ;D
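
    Stray output like this usually comes from a UTF-8 BOM or from bytes after a file's closing ?> tag (PHP swallows exactly one newline there, anything more is emitted). A throwaway scanner along these lines (run from the project root; paths assumed) can find the culprit - and the usual permanent fix is to omit the closing ?> entirely in pure-PHP files:

        <?php
        // Flag files that start with a UTF-8 BOM or have bytes after ?>
        $files = new RecursiveIteratorIterator(
            new RecursiveDirectoryIterator('.'));
        foreach ($files as $file) {
            if (substr($file, -4) !== '.php') continue;
            $src = file_get_contents($file);
            if (substr($src, 0, 3) === "\xEF\xBB\xBF")
                echo "BOM: $file\n";
            $pos = strrpos($src, '?>');
            if ($pos === false) continue;
            $tail = substr($src, $pos + 2);
            // PHP eats a single newline after ?>; anything else leaks out.
            if ($tail !== '' && $tail !== "\n" && $tail !== "\r\n")
                echo "Trailing bytes after ?>: $file\n";
        }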

  • Best XPath tools

    - by Sayed Ibrahim Hashimi
    What tools are you using for XPath, and why? Right now I'm using:

    - SketchPath, because it's totally awesome - but it's a Windows app that needs to be installed
    - WhiteBeam's online XPath test bed, because you can test expressions from the website

    SketchPath seems to stand out the most to me because it actually helps you create the XPath, and it is very advanced. If you haven't tried it, you should. Cons of SketchPath: you have to install it on the machine; otherwise it is fantastic. Cons of WhiteBeam: you have to upload your file, which I don't always want to do for security reasons; the file size you can upload has some limit on it; and uploading a file is annoying anyway. Also, I think there might be some subtle differences between the XPath used by that tool and what you get when running a .NET app, but I don't remember any right now. Just keep it in mind.

  • Does it make sense to use a NSFetchedResultsController without an UITableViewController? How are the

    - by dontWatchMyProfile
    I mean... could I also just create a plain old UIViewController and then set up a UITableView myself, plus an NSFetchedResultsController? How much do UITableViewController and NSFetchedResultsController interact with each other? As far as I can see, UITableViewController does NOT by default adopt the NSFetchedResultsControllerDelegate protocol. It almost looks as if UITableViewController was developed without knowing about NSFetchedResultsController; probably they even did that before developing FRC. Anyway, that's just a rough guess, because UITableViewController doesn't mention FRC at all. So the only thing I see in UITableViewController is that it is already the delegate for a UITableView by adopting the protocol, and it sets up the UITableView instance for me and assigns it internally to its tableView property. Is that the whole magic of UITableViewController? (Note: the nsfetchedresultscontrolle tag is not a typo. SO has a limit on the number of chars... too bad for that missing r; that's why I avoided this tag in my other bunch of questions like the plague.)

  • OleDBDataAdapter UNPIVOT Query not working with Microsoft.ACE.OLEDB.12.0 DataSource

    - by JayT
    I am reading in an Excel file with an OleDbDataAdapter. I am using a SELECT statement to UNPIVOT the data and insert it into a DataSet. However, the following error is generated:

        {"Syntax error in FROM clause."}

    But the SQL statement is correct, as I have used it in other DBs. Here is the code:

        string strConn = "Provider=Microsoft.ACE.OLEDB.12.0;Data Source=" + FileName +
            ";Extended Properties=\"Excel 12.0 Xml;HDR=" + HDR + ";IMEX=1\"";
        OleDbConnection conn = new OleDbConnection(strConn);
        conn.Open();
        string SQL = "select Packhouse, Rm, Quantity, Product " +
            " FROM " +
            " ( " +
            "   SELECT Date, Packhouse, Rm, [FG XL], [FG L] " +
            "   FROM [" + xlSheet + "] " +
            " ) Main " +
            " UNPIVOT " +
            " ( " +
            "   Quantity FOR Product in ([FG XL], [FG L]) " +
            " ) Sub " +
            " WHERE (Date = '2010/03/08') and Quantity <> '0' and Packhouse = 'A' and Rm = '1' ";
        OleDbDataAdapter adapter = new OleDbDataAdapter();
        adapter.SelectCommand = new OleDbCommand(SQL, conn);
        ds[sequencecounter] = new DataSet();
        adapter.Fill(ds[sequencecounter], xlSheet);

    If I copy and paste the Excel data into a DB, then the SELECT query works, but the data presented to me is in Excel spreadsheets. If anyone could provide help on this, it will be much appreciated. Regards, J
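
    UNPIVOT is a T-SQL feature; the Jet/ACE SQL dialect used for Excel sources doesn't support it, which would explain the FROM-clause error even though the statement works against a real database. One hedged workaround is a plain SELECT that ACE understands, followed by unpivoting in memory (column names follow the question):

        // Flat read the ACE provider can parse; unpivot client-side.
        var flat = new DataTable();
        new OleDbDataAdapter(
            "SELECT [Date], Packhouse, Rm, [FG XL], [FG L] FROM [" + xlSheet + "]",
            conn).Fill(flat);

        var tall = new DataTable();
        tall.Columns.Add("Packhouse");
        tall.Columns.Add("Rm");
        tall.Columns.Add("Quantity");
        tall.Columns.Add("Product");

        foreach (DataRow row in flat.Rows)
            foreach (var product in new[] { "FG XL", "FG L" })
                tall.Rows.Add(row["Packhouse"], row["Rm"], row[product], product);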

  • Are there disadvantages to using VARCHAR(MAX) in a table?

    - by Meiscooldude
    Here is my predicament. Basically, I need a column in a table to hold an unknown length of characters. But I was curious whether performance problems could arise in SQL Server from using VARCHAR(MAX) or NVARCHAR(MAX) in a column. For example: this time I only need to store 3 characters, and most of the time I only need to store 10. But there is a small chance that it could be up to a couple thousand characters in that column, or even possibly a million; it is unpredictable. I can, however, guarantee that it will not go over the 2GB limit. I was just curious whether there are any performance issues, or possibly better ways of solving this problem where available.

  • WCF via Windows Service - Authenticating Clients

    - by Sean
    I am a WCF/security newb. I have created a WCF service which is hosted via a Windows service. The WCF service grabs data from a 3rd-party data source that is secured via Windows authentication. I need to either:

    1. Pass the client's privileges through the Windows service, through the WCF service, and into the 3rd-party data source, or...
    2. Limit who can call the Windows service / WCF service to members of a particular AD group.

    Any suggestions on how I can do either of these tasks?
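
    For reference, both options map to standard WCF mechanisms. A hedged sketch of each (domain, group, and type names are illustrative; flowing the caller's identity to a remote data source additionally requires Kerberos delegation, and the binding must use Windows credentials, as netTcpBinding does by default):

        using System.Security.Permissions;
        using System.ServiceModel;

        public class DataService : IDataService
        {
            // Option 1: execute under the caller's Windows token, so the
            // 3rd-party source sees the client's own credentials.
            [OperationBehavior(Impersonation = ImpersonationOption.Required)]
            public Data GetData(int id)
            {
                return ThirdPartySource.Fetch(id);   // runs as the caller
            }

            // Option 2: only members of the AD group may call at all.
            [PrincipalPermission(SecurityAction.Demand, Role = @"MYDOMAIN\AppUsers")]
            public Data GetRestrictedData(int id)
            {
                return ThirdPartySource.Fetch(id);
            }
        }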

  • zend_form display group inside foreach

    - by Mike
    I want to create a display group generated from foreach() output. I can't seem to get the syntax correct. Here's the business logic: for each category row, find the associated fees; output the category description as a label and the fees as radio buttons; then create a display group with the fees as the group elements and the category description as the legend. A style sheet then formats the elements on the page. And here's the code:

        foreach ($categoryData as $categoryRow) {
            $fees[$i] = new Zend_Form_Element_Radio("fees[$i]");
            $fees[$i]->setDescription(strval($categoryRow['description']));
            foreach ($feeData as $feeRow) {
                if ($feeRow['categories_idCategory'] == $categoryRow['idCategory']) {
                    $fees[$i]->addMultiOption(
                        $feeRow['idFees'] . '-' . $feeRow['categories_idCategory'],
                        $feeRow['amount'] . '-' . $feeRow['name']);
                }
            }
            $fees[$i]->setRequired(TRUE);
            $this->addElements(array($fees[$i]));
            $this->addDisplayGroup($feeRow['name'], 'feeGroup', array('legend' => strval($feeRow['description'])));
            $i++;
        }

    I tried placing the addDisplayGroup() code within the foreach(), but I get the error:

        Message: No valid elements specified for display group

    So my guess is that I'm making some kind of novice mistake that you experts will spot right away. I appreciate your time and attention to this matter.
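
    In Zend Framework 1, Zend_Form::addDisplayGroup() expects an array of element names as its first argument (the code above passes a fee name string), and each group needs a unique name or later iterations collide with earlier ones. A hedged correction for the call inside the loop:

        // First argument: array of element *names*; second: a unique group name.
        $this->addDisplayGroup(
            array("fees[$i]"),                 // the element added just above
            'feeGroup' . $i,                   // 'feeGroup' alone would repeat each pass
            array('legend' => strval($categoryRow['description'])));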

  • MySQL use certain columns, based on other columns

    - by Rabbott
    I have this query:

        SELECT COUNT(articles.id) AS count
        FROM articles, xml_documents, streams
        WHERE articles.xml_document_id = xml_documents.id
        AND xml_documents.stream_id = streams.id
        AND articles.published_at BETWEEN '2010-01-01' AND '2010-04-01'
        AND streams.brand_id = 7

    which just uses the default equijoin by listing three tables in the FROM clause. What I need to do is group this by a value found within articles.source (raw XML), so it could turn into this:

        SELECT COUNT(articles.id) AS count,
            ExtractValue(articles.source, "/article/media_type") AS media_type
        FROM articles, xml_documents, streams
        WHERE articles.xml_document_id = xml_documents.id
        AND xml_documents.stream_id = streams.id
        AND articles.published_at BETWEEN '2010-01-01' AND '2010-04-01'
        AND streams.brand_id = 7
        GROUP BY media_type

    which works fine. The problem is that I'm using Rails, with STI for the xml_documents table. The articles.source provided to the ExtractValue method will be of a couple of different formats, so what I need to be able to do is use "/article/media_type" if xml_documents.type = 'source one', and use "/article/source" if xml_documents.type = 'source two'. This is just because the two document types format their XML differently, but I don't want to have to run multiple queries to retrieve this information. It would be nice if one could use a ternary operator, but I don't think this is possible. EDIT: At this point I am looking at making a temp table, or simply using UNION to place multiple result sets together.
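
    SQL's equivalent of a ternary operator is the CASE expression, and it can wrap the whole ExtractValue call so each branch keeps a literal XPath. A hedged sketch (the type strings follow the question's wording):

        SELECT COUNT(articles.id) AS count,
            CASE xml_documents.type
                WHEN 'source one' THEN ExtractValue(articles.source, '/article/media_type')
                ELSE ExtractValue(articles.source, '/article/source')
            END AS media_type
        FROM articles, xml_documents, streams
        WHERE articles.xml_document_id = xml_documents.id
        AND xml_documents.stream_id = streams.id
        AND articles.published_at BETWEEN '2010-01-01' AND '2010-04-01'
        AND streams.brand_id = 7
        GROUP BY media_type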
