Search Results

Search found 2750 results on 110 pages for 'recursive subquery factor'.

  • C# managed dll call or unmanaged dll call?

    - by 5YrsLaterDBA
    I was asked to make two DLL calls from our application. The two DLLs come from another group and another company. I have read a little about managed versus unmanaged code, and I would prefer to make a managed call. But is the choice between a managed and an unmanaged call up to the caller alone, or does it also depend on the callee? Can every DLL be called from managed code? If the callee is a factor, how can I tell whether a given DLL can be called with managed code?
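
    For reference, unmanaged (native) DLLs are reached from C# through P/Invoke, while managed assemblies are simply referenced and called directly. A minimal P/Invoke sketch (the DLL name and exported function here are hypothetical placeholders):

        using System;
        using System.Runtime.InteropServices;

        class NativeCall
        {
            // Hypothetical export from a native (unmanaged) DLL.
            [DllImport("ThirdParty.dll", CallingConvention = CallingConvention.Cdecl)]
            static extern int Compute(int input);

            static void Main()
            {
                // The caller chooses P/Invoke only because the callee is native;
                // a managed assembly would instead be referenced and called directly.
                Console.WriteLine(Compute(42));
            }
        }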

  • Fastest C++ file compression library available?

    - by Fabien Hure
    I need to find the best library for compressing in-memory data. I am currently using zlib, but I am wondering if there is a better compression library: better in terms of performance and memory footprint. It should be able to handle multiple files in the same archive. I am looking for a C/C++ library. Performance is the key factor. The files being compressed are small to large XML files.
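
    For context, the zlib baseline being compared against looks roughly like this one-shot call (a sketch, with error handling reduced to a single check):

        #include <zlib.h>
        #include <vector>

        // Compress an in-memory buffer with zlib's one-shot API.
        std::vector<unsigned char> deflateBuffer(const unsigned char* src, uLong srcLen)
        {
            uLongf destLen = compressBound(srcLen);      // worst-case output size
            std::vector<unsigned char> dest(destLen);
            if (compress(dest.data(), &destLen, src, srcLen) != Z_OK)
                return {};                               // empty vector signals failure
            dest.resize(destLen);                        // shrink to actual size
            return dest;
        }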

  • What's the fastest way to get directory and subdirs size on unix using Perl?

    - by ivicas
    I am using the Perl stat() function to get the size of a directory and its subdirectories. I have a list of about 20 parent directories which have a few thousand recursive subdirs, and every subdir has a few hundred records. The main computing part of the script looks like this:

        sub getDirSize {
            my $dirSize = 0;
            my @dirContent = <*>;
            foreach my $dirContent (@dirContent) {
                if (-f $dirContent) {
                    my $size = (stat($dirContent))[7];   # element 7 is the size in bytes
                    $dirSize += $size;
                }
                elsif (-d $dirContent) {
                    $dirSize += getDirSize($dirContent); # recurse into subdirectories
                }
            }
            return $dirSize;
        }

    The script has been executing for more than one hour and I want to make it faster. I tried the shell du command, but the output of du (converted to bytes) is not accurate, and it is also quite time consuming. I am working on HP-UX 11i v1.
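
    For comparison, the same traversal can be written with the core File::Find module, which walks the tree in a single pass (a sketch, not from the original script):

        use File::Find;

        sub getDirSizeFF {
            my ($dir) = @_;
            my $total = 0;
            # find() visits every entry below $dir; -s returns the size from stat()
            find( sub { $total += -s $_ if -f $_ }, $dir );
            return $total;
        }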

  • How do I split the output from mysqldump into smaller files?

    - by lindelof
    I need to move entire tables from one MySQL database to another. I don't have full access to the second one, only phpMyAdmin access, and I can only upload (compressed) .sql files smaller than 2MB. But the compressed output from a mysqldump of the first database's tables is larger than 10MB. Is there a way to split the output from mysqldump into smaller files? I cannot use split(1), since I cannot cat(1) the files back together on the remote server. Or is there another solution I have missed?

    Edit: The --extended-insert=FALSE option to mysqldump suggested by the first poster yields a .sql file that can then be split into importable files, provided that split(1) is called with a suitable --lines option. By trial and error I found that bzip2 compresses the .sql files by a factor of 20, so I needed to figure out how many lines of SQL code correspond roughly to 40MB.
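
    A sketch of the resulting workflow (database and table names, and the line count to tune, are placeholders):

        # One INSERT per row, so the dump can be cut at any line boundary
        mysqldump --extended-insert=FALSE mydb mytable > dump.sql

        # Split into pieces small enough to stay under 2MB once compressed
        split --lines=50000 dump.sql dump_part_
        bzip2 dump_part_*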

  • python-wordmatching

    - by challarao
    Write two functions, called countSubStringMatch and countSubStringMatchRecursive, that take two arguments: a key string and a target string. These functions iteratively and recursively count the number of instances of the key in the target string. You should complete definitions for

        def countSubStringMatch(target, key):

    and

        def countSubStringMatchRecursive(target, key):

    For the remaining problems, we are going to explore other substring matching ideas. These problems can be solved with either an iterative function or a recursive one. You are welcome to use either approach, though you may find iterative approaches more intuitive in these cases of matching linear structures.
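
    A minimal sketch of the two counting styles (one possible reading of the assignment, counting overlapping matches):

        def countSubStringMatch(target, key):
            # Iterative: test every starting position in target.
            count = 0
            for i in range(len(target) - len(key) + 1):
                if target[i:i + len(key)] == key:
                    count += 1
            return count

        def countSubStringMatchRecursive(target, key):
            # Recursive: find the first match, then recurse past it.
            pos = target.find(key)
            if pos == -1:
                return 0
            return 1 + countSubStringMatchRecursive(target[pos + 1:], key)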

  • Delete files in C# that Windows does not want me to delete?

    - by user315881
    At my company, we are writing a script to take care of simple tasks that we usually do by hand. I am using C# to delete profiles in C:\Documents and Settings\, except for a few, which will simply be left alone. The problem is that even with code that sets the files' attributes to normal and makes the admin user the owner, they won't delete: I get "access denied" on the Quick Launch folder. I am using a recursive permissions-change method that I know works, and the same for file attributes. Why won't it work, and how do I fix it?
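
    For reference, the clear-attributes-then-delete pattern described above looks roughly like this (a sketch; note it does not by itself fix an NTFS ACL that denies access, which is what the Quick Launch error suggests):

        using System.IO;

        static void ForceDelete(DirectoryInfo dir)
        {
            foreach (FileInfo f in dir.GetFiles())
            {
                f.Attributes = FileAttributes.Normal;  // clear ReadOnly/Hidden/System
                f.Delete();
            }
            foreach (DirectoryInfo sub in dir.GetDirectories())
                ForceDelete(sub);                      // depth-first recursion
            dir.Attributes = FileAttributes.Normal;
            dir.Delete();
        }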

  • What's some simple F# code that generates the .tail IL instruction?

    - by kld2010
    I'd like to see the .tail IL instruction, but the simple recursive functions using tail calls that I've been writing are apparently optimized into loops. I'm actually guessing on this, as I'm not entirely sure what a loop looks like in Reflector, but I definitely don't see any .tail opcodes. I have "Generate tail calls" checked in my project's properties, and I've tried both Debug and Release builds in Reflector. The code I used is from Programming F# by Chris Smith, page 190:

        let factorial x =
            // Keep track of both x and an accumulator value (acc)
            let rec tailRecursiveFactorial x acc =
                if x <= 1 then
                    acc
                else
                    tailRecursiveFactorial (x - 1) (acc * x)
            tailRecursiveFactorial x 1

    Can anyone suggest some simple F# code which will indeed generate .tail?
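
    A direct self-call like the one above can be rewritten by the compiler as a branch, which is why no .tail appears. Mutual recursion is one case it cannot turn into a simple loop, so a sketch like this is a likely candidate for producing .tail (an assumption; the exact output depends on compiler version and settings):

        let rec isEven n =
            if n = 0 then true else isOdd (n - 1)   // tail call to isOdd
        and isOdd n =
            if n = 0 then false else isEven (n - 1) // tail call to isEven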

  • Ignoring specific differences in diff

    - by naumcho
    When doing recursive diffs I want to ignore expected differences/translations. Is there a way to do that with standard Unix tools? E.g.

    file1:

        1 ...
        2 /path/to/something/ver1/blah/blah
        3 /path/to/something/ver1/blah/blah
        4 ...

    file2:

        1 ...
        2 /path/to/something/ver2/blah/blah
        3 /path/to/something/ver3/blah/blah
        4 ...

    I want to be able to do something like:

        diff file1 file2 --ignore-translation "ver1>ver2"

    This should only show me that line 3 is different. Does anyone know of a good way to do that? I can easily write a Perl script to do it, but I would end up re-implementing most of the rest of the functionality of diff.

    Update: My goal is to run this on directories with different versions of the same files with "diff -r", so I can spot unexpected differences between versions.
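
    For the two-file case, one common workaround is to apply the expected translation on the fly with bash process substitution, so only unexpected differences remain (a sketch):

        # rewrite ver1 -> ver2 in file1 before comparing; line 3 (ver3) still shows up
        diff <(sed 's|/ver1/|/ver2/|g' file1) file2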

  • Delete from empty table taking forever

    - by Will
    Hello, I have an empty table that previously had a large amount of rows. The table has about 10 columns and indexes on many of them, as well as indexes on multiple columns.

        DELETE FROM item WHERE 1=1

    This takes approximately 40 seconds to complete.

        SELECT * FROM item

    This takes 4 seconds. The execution plan of the SELECT shows the following:

        SQL> select * from midas_item;

        no rows selected

        Elapsed: 00:00:04.29

        Execution Plan
        ----------------------------------------------------------
           0      SELECT STATEMENT Optimizer=CHOOSE (Cost=19 Card=123 Bytes=7380)
           1   0    TABLE ACCESS (FULL) OF 'MIDAS_ITEM' (Cost=19 Card=123 Bytes=7380)

        Statistics
        ----------------------------------------------------------
                  0  recursive calls
                  0  db block gets
               5263  consistent gets
               5252  physical reads
                  0  redo size
               1030  bytes sent via SQL*Net to client
                372  bytes received via SQL*Net from client
                  1  SQL*Net roundtrips to/from client
                  0  sorts (memory)
                  0  sorts (disk)
                  0  rows processed

    Any idea why these would be taking so long, and how to fix it, would be greatly appreciated!

  • Multiply by a negative integer just by shifting

    - by stex
    Hi, I'm trying to find a way to multiply an integer by a negative value using only bit shifting. Usually I do this by shifting by the power of 2 closest to my factor and just adding/subtracting the rest, e.g.

        x * 7 = ((x << 3) - x)

    Let's say I want to calculate x * -112. The only way I can imagine is

        -((x << 7) - (x << 4))

    that is, calculating x * 112 and negating it afterwards. Is there a "prettier" way to do this?
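
    One tidier identity within the same two-shift budget (an observation, not from the original post): swapping the operands of the subtraction folds the negation in, since 16x - 128x = -112x. A small C check:

        #include <stdio.h>

        int main(void)
        {
            int x = 3;
            int a = -((x << 7) - (x << 4)); // 112*x, negated afterwards
            int b = (x << 4) - (x << 7);    // 16*x - 128*x = -112*x directly
            printf("%d %d\n", a, b);        // both print -336
            return 0;
        }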

  • Can I use Perl to download files in a PHP script?

    - by jision
    I have a file-sharing website made in PHP which lets users download files. But as I am using a hosted server, there are timeout limits and other factors that come into play for a download script coded in PHP. This causes large files to get corrupted while downloading. I am looking for a solution to this. I have tried using symbolic links, but then the forced download does not take place. I am thinking of using Perl to download the files, but I don't have any clue whatsoever. Can anyone help me out with this problem?
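
    For context, the usual PHP-side mitigation is to stream the file in chunks instead of reading it whole (a sketch, assuming the host allows set_time_limit):

        <?php
        function streamFile($path)
        {
            header('Content-Type: application/octet-stream');
            header('Content-Disposition: attachment; filename="' . basename($path) . '"');
            header('Content-Length: ' . filesize($path));
            set_time_limit(0);             // lift the script time limit if permitted
            $fp = fopen($path, 'rb');
            while (!feof($fp)) {
                echo fread($fp, 8192);     // send 8KB at a time
                flush();                   // push it to the client immediately
            }
            fclose($fp);
        }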

  • Writing out to a file in scheme

    - by Ceelos
    The goal of this is to check whether the character taken into account is a number or an operand, and then output it into a list which will be written out to a txt file. I'm wondering which process would be more efficient: writing it to a list and then writing that list out to a file, or writing out to the txt file directly from the procedure. I'm new to Scheme, so I apologize if I am not using the correct terminology.

        (define input '("3" "+" "4"))

        (define check
          (if (number? (car input))
              (write this out to a list or directly to a file)
              (check the rest of file)))

    Another question I had in mind: how can I make the check process recursive? I know it's a lot to ask, but I've been getting a little frustrated with the methods I have found on other sites. I really appreciate the help!
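
    A sketch of one recursive shape for check (assuming string tokens, hence string->number for the numeric test, and a single write at the end):

        ;; Walk the token list recursively, keeping only numeric tokens,
        ;; then write the collected list out to a file in one go.
        (define (collect-numbers tokens)
          (cond ((null? tokens) '())
                ((string->number (car tokens))
                 (cons (car tokens) (collect-numbers (cdr tokens))))
                (else (collect-numbers (cdr tokens)))))

        (call-with-output-file "output.txt"
          (lambda (port)
            (write (collect-numbers input) port)))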

  • Extract rows for the first occurrence of a variable in a data frame

    - by user2614883
    I have a data frame with two variables, Date and Taxa, and want to get the date of the first time each taxon occurs. There are 9 different dates and 40 different taxa in the data frame, which consists of 172 rows, but my answer should only have 40 rows. Taxa is a factor and Date is a date. For example, my data frame (called 'species') is set up like this:

              Date Taxa
        2013-07-12    A
        2011-08-31    B
        2012-09-06    C
        2012-05-17    A
        2013-07-12    C
        2012-09-07    B

    and I would be looking for an answer like this:

              Date Taxa
        2012-05-17    A
        2011-08-31    B
        2012-09-06    C

    I tried using:

        t.first <- species[unique(species$Taxa),]

    and it gave me the correct number of rows, but there were Taxa repeated. If I just use unique(species$Taxa), it appears to give me the right answer, but then I don't know the date when each taxon first occurred. Thanks for any help.
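
    A sketch of two standard base-R approaches (assuming Date already has class Date): take the minimum date per taxon, or sort by date and keep the first row seen for each taxon.

        # earliest date for each taxon, one row per Taxa
        t.first <- aggregate(Date ~ Taxa, data = species, FUN = min)

        # equivalent: order by date, then drop later duplicates of each taxon
        s <- species[order(species$Date), ]
        t.first <- s[!duplicated(s$Taxa), ]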

  • Oracle SQL Developer Data Modeler: What Tables Aren’t In At Least One SubView?

    - by thatjeffsmith
    Organizing your data model makes the information easier to consume. One of the organizational tools provided by Oracle SQL Developer Data Modeler is the 'SubView.' In a nutshell, a SubView is a subset of your model.

    The Challenge: I've just created a model which represents my entire ____________ application. We'll call it 'residential lending.' Instead of having all 100+ tables in a single model diagram, I want to break out the tables by module, e.g. appraisals, credit reports, work histories, customers, etc. I've spent several hours breaking out the tables to one or more SubViews, but I think I may have missed a few. Is there an easy way to see what tables aren't in at least ONE SubView?

    The Answer: Yes, mostly. The 'mostly' comes about from the way I'm going to accomplish this task. It involves querying the SQL Developer Data Modeler Reporting Schema. So if you don't have the Reporting Schema set up, you'll need to do so. Got it? Good, let's proceed. Before you start querying your Reporting Schema, you might need a data model for the actual reporting schema: meta-meta data! You could reverse engineer the Data Modeler reporting schema to a new data model, or you could just reference the PDFs in the \datamodeler\reports\Reporting Schema diagrams directory.

    The Query: Well, it's actually going to be at least 2 queries. We need to get a list of distinct designs stored in your repository. For giggles, I'm going to get a listing including each version of the model, so I can query based on design and version, or in this case, the timestamp of when it was added to the repository. We'll get that from the DMRS_DESIGNS table:

        SELECT DISTINCT design_name, design_ovid, date_published
        FROM DMRS_designs

    Then I'm going to feed the design_ovid down to a subquery for my child report:

        select name, count(distinct diagram_id)
        from DMRS_DIAGRAM_ELEMENTS
        where design_ovid = :design_ovid
          and type = 'Table'
        group by name
        having count(distinct diagram_id) < 2
        order by count(distinct diagram_id) desc

    Each diagram element has an entry in this table, so I need to filter on type = 'Table'. Each design has AT LEAST one diagram, the master diagram, so any relational table having only one listing here means it's not in any SubViews. If you have overloaded object names, which is VERY possible, you'll want to do the report off of OBJECT_ID, but then you'll need to correlate that to the NAME, as I doubt you're so intimate with your designs that you recognize the GUIDs. So I'm going to cheat and just stick with names, but I think you get the gist.

    My Model: Of my almost 90 tables, how many have I not added to at least one SubView? Now let's run my report! Voila! My 'BEER2' table isn't in any SubView! It says '1' because the main model diagram counts as a view; if the count came back as '2', that would mean the table was in the main model diagram and in 1 SubView diagram. And I know what you're thinking: what kind of residential lending program would have a table called 'BEER2'? Let's just say that my business model has some kinks to work out!

  • MDX equivalent to SQL subqueries with aggregation

    - by James Lampe
    I'm new to MDX and trying to solve the following problem. I've investigated calculated members, subselects, scope statements, etc., but can't quite get it to do what I want. Let's say I'm trying to come up with the MDX equivalent of the following SQL query:

        SELECT SUM(netMarketValue) net,
               SUM(CASE WHEN netMarketValue > 0 THEN netMarketValue ELSE 0 END) assets,
               SUM(CASE WHEN netMarketValue < 0 THEN netMarketValue ELSE 0 END) liabilities,
               SUM(ABS(netMarketValue)) gross,
               someEntity1
        FROM (
            SELECT SUM(marketValue) netMarketValue, someEntity1, someEntity2
            FROM <some set of tables>
            GROUP BY someEntity1, someEntity2
        ) t
        GROUP BY someEntity1

    In other words, I have an account ledger where I hide internal offsetting transactions (within someEntity2), then calculate assets and liabilities after aggregating them by someEntity2. Then I want to see the grand total of those assets and liabilities aggregated by the bigger entity, someEntity1. In my MDX schema I'd presumably have a cube with dimensions for someEntity1 and someEntity2, and marketValue would be my fact table/measure. I suppose I could create another DSV that did what my subquery does (calculating net), and simply create a cube with that as my measure dimension, but I wonder if there is a better way. I'd rather not have 2 cubes (one for these net calculations and another to go to a lower level of granularity for other use cases), since it would mean a lot of duplicate info in my database. These will be very large cubes.
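
    For reference, the general shape of this in MDX is a calculated member that sums a signed expression over the someEntity2 members (a sketch only; all cube, dimension, and measure names here are hypothetical):

        WITH MEMBER Measures.Assets AS
            SUM(
                [SomeEntity2].[SomeEntity2].Members,
                IIF(Measures.[Market Value] > 0, Measures.[Market Value], NULL)
            )
        SELECT { Measures.Assets } ON COLUMNS,
               [SomeEntity1].[SomeEntity1].Members ON ROWS
        FROM [SomeCube]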

  • LEFT OUTER JOIN in Linq - How to Force

    - by dodegaard
    I have a LEFT OUTER join in LINQ that is combining with the outer join condition and not providing the desired results; it is basically limiting my LEFT-side results. Here are the LINQ and the resulting SQL. What I'd like is for the AND ([t2].[EligEnd] = @p0) in the generated query to not be part of the join condition, but rather a subquery that filters results BEFORE the join. Thanks in advance (samples pulled from LINQPad). - Doug

        (from l in Users
         join mr in (from mri in vwMETRemotes
                     where met.EligEnd == Convert.ToDateTime("2009-10-31")
                     select mri)
           on l.Mahcpid equals mr.Mahcpid into lo
         from g in lo.DefaultIfEmpty()
         orderby l.LastName, l.FirstName
         where l.LastName.StartsWith("smith") && l.DeletedDate == null
         select g)

    Here is the resulting SQL:

        -- Region Parameters
        DECLARE @p0 DateTime = '2009-10-31 00:00:00.000'
        DECLARE @p1 NVarChar(6) = 'smith%'
        -- EndRegion
        SELECT [t2].[test], [t2].[MAHCPID] AS [Mahcpid], [t2].[FirstName], [t2].[LastName],
               [t2].[Gender], [t2].[Address1], [t2].[Address2], [t2].[City], [t2].[State] AS [State],
               [t2].[ZipCode], [t2].[Email], [t2].[EligStart], [t2].[EligEnd], [t2].[Dependent],
               [t2].[DateOfBirth], [t2].[ID], [t2].[MiddleInit], [t2].[Age], [t2].[SSN] AS [Ssn],
               [t2].[County], [t2].[HomePhone], [t2].[EmpGroupID], [t2].[PopulationIdentifier]
        FROM [dbo].[User] AS [t0]
        LEFT OUTER JOIN (
            SELECT 1 AS [test], [t1].[MAHCPID], [t1].[FirstName], [t1].[LastName], [t1].[Gender],
                   [t1].[Address1], [t1].[Address2], [t1].[City], [t1].[State], [t1].[ZipCode],
                   [t1].[Email], [t1].[EligStart], [t1].[EligEnd], [t1].[Dependent],
                   [t1].[DateOfBirth], [t1].[ID], [t1].[MiddleInit], [t1].[Age], [t1].[SSN],
                   [t1].[County], [t1].[HomePhone], [t1].[EmpGroupID], [t1].[PopulationIdentifier]
            FROM [dbo].[vwMETRemote] AS [t1]
        ) AS [t2]
        ON ([t0].[MAHCPID] = [t2].[MAHCPID]) AND ([t2].[EligEnd] = @p0)
        WHERE ([t0].[LastName] LIKE @p1) AND ([t0].[DeletedDate] IS NULL)
        ORDER BY [t0].[LastName], [t0].[FirstName]

  • SQL: Speed Improvement - Cluttered union query

    - by vol7ron
        SELECT *
        FROM (
            SELECT a.user_id, a.f_name, a.l_name, b.user_id, b.f_name, b.l_name
            FROM current_tbl a
            INNER JOIN import_tbl b ON (a.user_id = b.user_id)
            UNION
            SELECT a.user_id, a.f_name, a.l_name, b.user_id, b.f_name, b.l_name
            FROM current_tbl a
            INNER JOIN import_tbl b ON (lower(a.f_name) = lower(b.f_name)
                                    AND lower(a.l_name) = lower(b.l_name))
        ) foo

        UNION

        SELECT a.user_id, a.f_name, a.l_name, '', '', ''
        FROM current_tbl a
        WHERE a.user_id NOT IN (
            SELECT user_id FROM (
                SELECT a.user_id, a.f_name, a.l_name, b.user_id, b.f_name, b.l_name
                FROM current_tbl a
                INNER JOIN import_tbl b ON (a.user_id = b.user_id)
                UNION
                SELECT a.user_id, a.f_name, a.l_name, b.user_id, b.f_name, b.l_name
                FROM current_tbl a
                INNER JOIN import_tbl b ON (lower(a.f_name) = lower(b.f_name)
                                        AND lower(a.l_name) = lower(b.l_name))
            ) bar
        )
        ORDER BY user_id

    Example of table population:

        current_tbl:
        -------------------------------
         user_id | f_name   | l_name
        ---------+----------+----------
         A1      | Adam     | Acorn
         A2      | Beth     | Berry
         A3      | Calv     | Chard

        import_tbl:
        -------------------------------
         user_id | f_name   | l_name
        ---------+----------+----------
         A1      | Adam     | Acorn
         A2      | Beth     | Butcher   <- last name different

        Expected output:
        ----------------------------------------------------------------
         user_id1 | f_name1 | l_name1 | user_id2 | f_name2 | l_name2
        ----------+---------+---------+----------+---------+-----------
         A1       | Adam    | Acorn   | A1       | Adam    | Acorn
         A2       | Beth    | Berry   | A2       | Beth    | Butcher
         A3       | Calv    | Chard   |          |         |

    Doing it this way gets rid of conditions where the row would be:

         A2       | Beth    | Berry   | A2       | Beth    | Butcher

    but it keeps the A3 row. I hope this makes sense and I haven't overly simplified it. This is a continuation question from my other question. The succession of these improvements has dropped the query from ~32000ms down to ~1200ms: quite an improvement. I suspect I can optimize by using UNION ALL in the subquery, and of course the usual index optimizations, but I'm looking for the best SQL optimization. FYI, this particular case is for PostgreSQL.

  • Require help in writing a query

    - by harigm
    The following image has been uploaded to show what I am trying to do and what I want out of it. Can anyone help me write the query to get the results I want? Please check the following:

        SELECT *
        FROM KPT
        WHERE PROPERTY_ID IN (SELECT PROPERTY_ID
                              FROM khata_header
                              WHERE DIV_ID = 3 AND RECORD_STATUS = 0)
          AND CHALLAN_NO > 42646

    The above is the query I have written, and I got the following result set:

           ID  CHALLAN_NO  PROPERTY_ID  SITE_NO  TOTAL_AMOUNT
        -----  ----------  -----------  -------  ------------
         1242       42757   3103010141      296           595
         1243       63743   3204190257      483           594
         1244       63743   3204190257      483           594
         1334       43395   3217010223     1088           576
         1421      524210   3320050416   (null)        (null)
         1422      524210   3320050416   (null)        (null)
         1560      564355   3320021408   (null)        (null)
         1870      516292   3320040420   (null)        (null)
         1940       68357   3217100104      139          1153
         1941       68357   3217100104      139          1153
         2002       56256   3320100733      511          4430
         2003       56256   3320100733      511          4430
         2004       66488   3217040869      293          3094
         2005       66488   3217040869      293          3094
         2016       64571   3217040374   (null)        (null)
         2036      523122   3320020352   (null)        (null)
         2039       65682   3217040021      273           919

    In my result set, the PROPERTY_ID is repeated, since there are multiple entries. How can I know how many have been repeated, and which PROPERTY_IDs are repeated more than 2 times? A little background about the tables: PROPERTY_ID is the FK in KPT, and PROPERTY_ID is the PK in KH. I am writing a subquery to get the results, but I am stuck and don't know how to get them. Please help.
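
    A sketch of the usual counting approach (same table and filter as above; the khata_header IN subquery can be added back unchanged): group by the repeated column and keep groups with more than one row.

        SELECT PROPERTY_ID, COUNT(*) AS occurrences
        FROM KPT
        WHERE CHALLAN_NO > 42646
        GROUP BY PROPERTY_ID
        HAVING COUNT(*) > 1
        ORDER BY occurrences DESC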

  • SQL Joins Excluding Data

    - by Andrew
    Say I have three tables:

        Fruit (Table 1)
        ---------------
        Apple
        Orange
        Pear
        Banana

        Produce Store A (Table 2 - 2 columns: Fruit for sale => Price)
        --------------------------------------------------------------
        Apple  => 1.00
        Orange => 1.50
        Pear   => 2.00

        Produce Store B (Table 3 - 2 columns: Fruit for sale => Price)
        --------------------------------------------------------------
        Apple  => 1.10
        Pear   => 2.50
        Banana => 1.00

    I would like to write a query with column 1: the set of fruit offered at Produce Store A UNION Produce Store B; column 2: the price of the fruit at Produce Store A (or null if that fruit is not offered); and column 3: the price of the fruit at Produce Store B (or null if that fruit is not offered). How would I go about joining the tables? I am facing a similar problem (with more complex tables), and no matter what I try, if the "fruit" is not at "produce store a" but is at "produce store b", it is excluded (since I am joining produce store a first). I have even written a subquery to generate a full list of fruits, then LEFT JOINed Produce Store A, but it still eliminates the fruits not offered at A. Any ideas?
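
    A sketch of the shape that avoids the elimination (assuming tables fruit, store_a, store_b with columns fruit and price): every join from the master list must be a LEFT JOIN, and no WHERE clause may filter on the nullable columns.

        SELECT f.fruit,
               a.price AS price_a,   -- null when store A doesn't carry it
               b.price AS price_b    -- null when store B doesn't carry it
        FROM fruit f
        LEFT JOIN store_a a ON a.fruit = f.fruit
        LEFT JOIN store_b b ON b.fruit = f.fruit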

  • Translate SQL to OCL?

    - by Roland Bengtsson
    I have a piece of SQL that I want to translate to OCL. I'm not good at SQL, so I want to increase maintainability this way. We are using Interbase 2009 and Delphi 2007 with Bold, and model-driven development. Now my hope is that someone here speaks both good SQL and good OCL. :-)

    The original SQL:

        Select Bold_Id, MessageId, ScaniaId, MessageType, MessageTime, Cancellation,
               ChassieNumber, UserFriendlyFormat, ReceivingOwner, Invalidated, InvalidationReason,
               (Select Parcel.MCurrentStates
                From Parcel
                Where ScaniaEdiSolMessage.ReceivingOwner = Parcel.Bold_Id) as ParcelState
        From ScaniaEdiSolMessage
        Where MessageType = 'IFTMBP'
          and not Exists (Select *
                          From ScaniaEdiSolMessage EdiSolMsg
                          Where EdiSolMsg.ChassieNumber = ScaniaEdiSolMessage.ChassieNumber
                            and EdiSolMsg.ShipFromFinland = ScaniaEdiSolMessage.ShipFromFinland
                            and EdiSolMsg.MessageType = 'IFTMBF')
          and invalidated = 0
        Order By MessageTime desc

    After a small simplification:

        Select Bold_Id,
               (Select Parcel.MCurrentStates
                From Parcel
                Where ScaniaEdiSolMessage.ReceivingOwner = Parcel.Bold_Id)
        From ScaniaEdiSolMessage
        Where MessageType = 'IFTMBP'
          and not Exists (Select *
                          From ScaniaEdiSolMessage EdiSolMsg
                          Where EdiSolMsg.ChassieNumber = ScaniaEdiSolMessage.ChassieNumber
                            and EdiSolMsg.ShipFromFinland = ScaniaEdiSolMessage.ShipFromFinland
                            and EdiSolMsg.MessageType = 'IFTMBF')
          and invalidated = 0

    NOTE: There are 2 cases for MessageType, 'IFTMBP' and 'IFTMBF'. So the table to be listed is ScaniaEdiSolMessage. It has attributes like:

        MessageType: String
        ChassiNumber: String
        ShipFromFinland: Boolean
        Invalidated: Boolean

    It also has a link to the table Parcel, named ReceivingOwner, with BoldId as key. So it seems like it lists all rows of ScaniaEdiSolMessage, then has a subquery that also lists all rows of ScaniaEdiSolMessage, names it EdiSolMsg, and then excludes almost all rows. In fact, the query above gives one hit from 28000 records. In OCL it is easy to list all instances:

        ScaniaEdiSolMessage.allinstances

    It is also easy to filter rows with select, for example:

        ScaniaEdiSolMessage.allinstances->select(shipFromFinland and not invalidated)

    But I do not understand how I should write the OCL to match the SQL above.
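
    A sketch of how the NOT EXISTS part might read in OCL (an assumption; the exact spelling of allinstances and the attribute names depends on the Bold OCL dialect):

        ScaniaEdiSolMessage.allinstances
          ->select(messageType = 'IFTMBP' and not invalidated)
          ->select(m |
              not ScaniaEdiSolMessage.allinstances->exists(e |
                  e.messageType = 'IFTMBF' and
                  e.chassieNumber = m.chassieNumber and
                  e.shipFromFinland = m.shipFromFinland))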

  • Django aggregate query generating SQL error

    - by meepmeep
    I'm using Django 1.1.1 on a SQL Server 2005 database, using the latest sqlserver_ado library. models.py includes:

        class Project(models.Model):
            name = models.CharField(max_length=50)

        class Thing(models.Model):
            project = models.ForeignKey(Project)
            reference = models.CharField(max_length=50)

        class ThingMonth(models.Model):
            thing = models.ForeignKey(Thing)
            timestamp = models.DateTimeField()
            ThingMonthValue = models.FloatField()

            class Meta:
                db_table = u'ThingMonthSummary'

    In a view, I have retrieved a queryset called 'things' which contains 25 Things:

        things = Thing.objects.select_related().filter(project=1).order_by('reference')

    I then want to do an aggregate query to get the average ThingMonthValue for the first 20 of those Things for a certain period, and the same value for the last 5. For the first 20 I do:

        averageThingMonthValue = ThingMonth.objects.filter(
            thing__in=things[:20],
            timestamp__range=("2009-01-01 00:00", "2010-03-01 00:00")
        ).aggregate(Avg('ThingMonthValue'))['ThingMonthValue__avg']

    This works fine and returns the desired value. For the last 5 I do:

        averageThingMonthValue = ThingMonth.objects.filter(
            thing__in=things[20:],
            timestamp__range=("2009-01-01 00:00", "2010-03-01 00:00")
        ).aggregate(Avg('ThingMonthValue'))['ThingMonthValue__avg']

    But for this I get an SQL error: 'Only one expression can be specified in the select list when the subquery is not introduced with EXISTS.' The SQL query being used by Django reads:

        SELECT AVG([ThingMonthSummary].[ThingMonthValue]) AS [ThingMonthValue__avg]
        FROM [ThingMonthSummary]
        WHERE ([ThingMonthSummary].[thing_id] IN (
                SELECT _row_num, [id] FROM (
                    SELECT ROW_NUMBER() OVER (ORDER BY [AAAA].[id] ASC) as _row_num, [AAAA].[id]
                    FROM (
                        SELECT U0.[id] FROM [Thing] U0 WHERE U0.[project_id] = 1
                    ) AS [AAAA]
                ) as QQQ where 20 < _row_num)
            AND [ThingMonthSummary].[timestamp] BETWEEN '01/01/09 00:00:00' and '03/01/10 00:00:00')

    Any idea why it works for one slice of the Things and not the second? I've checked, and the two slices do contain the desired Things correctly.
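
    If it helps, the subquery only appears because the slice is passed in unevaluated. A sketch of a workaround (an assumption, not a confirmed sqlserver_ado fix): materialize the slice so the IN clause receives literal ids.

        # evaluate the slice client-side; an explicit upper bound avoids the
        # open-ended slice, and list() removes the ROW_NUMBER subquery entirely
        last_five = list(things[20:25])
        averageThingMonthValue = ThingMonth.objects.filter(
            thing__in=last_five,
            timestamp__range=("2009-01-01 00:00", "2010-03-01 00:00")
        ).aggregate(Avg('ThingMonthValue'))['ThingMonthValue__avg']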

  • In WMI, can I use a join (or something similar) to acquire the IisWebServer object for a site, given

    - by Precipitous
    Given a server name and a physical path, I'd like to be able to hunt down the IISWebServer object and ApplicationPool. A website URL is also acceptable input. Our technologies are IIS 6, WMI, and access via C# or PowerShell 2. I'm certain this would be easier with IIS 7 and its managed API; we don't have that yet. Here's what I can do: get a list of IIS virtual directories from IISWebVirtualDirSetting and filter (offline) for the matching physical path:

        $theVirtualDir = gwmi -Namespace "root/MicrosoftIISv2" `
            -ComputerName $servername -authentication PacketPrivacy `
            -class "IISWebVirtualDirSetting" `
            | where-object {$_.Path -like $deployLocation}

    From the virtual directory object, I can get a name (like W3SVC/40565456/root). Given this name, I can get to other goodies, such as the IIS web server object:

        gwmi -Namespace "root/MicrosoftIISv2" `
            -ComputerName $servername `
            -authentication PacketPrivacy `
            -Query "SELECT * FROM IisWebServer WHERE Name='W3SVC/40589473'"

    The questions, restated: 1) This is a query language; can I join or subquery so that one WMI query statement gets web servers based on IISWebVirtualDir.Path? How? 2) In solving 1, you'll have to explain how to query on the Path property. Why is this an invalid query?

        "SELECT * FROM IISWebVirtualDirSetting WHERE Path='D:\sites\globaldominator'"
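
    On question 2, a likely culprit: WQL treats the backslash as an escape character, so literal backslashes in the path need to be doubled (a sketch of the corrected filter):

        gwmi -Namespace "root/MicrosoftIISv2" -ComputerName $servername `
            -authentication PacketPrivacy `
            -Query "SELECT * FROM IISWebVirtualDirSetting WHERE Path = 'D:\\sites\\globaldominator'"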

  • PostgreSQL - Error: SQL state: XX000.

    - by rob
    I have a table in Postgres that looks like this:

        CREATE TABLE "Population"
        (
          "Id" bigint NOT NULL DEFAULT nextval('"population_Id_seq"'::regclass),
          "Name" character varying(255) NOT NULL,
          "Description" character varying(1024),
          "IsVisible" boolean NOT NULL,
          CONSTRAINT "pk_Population" PRIMARY KEY ("Id")
        )
        WITH (
          OIDS=FALSE
        );

    And a select function that looks like this:

        CREATE OR REPLACE FUNCTION "Population_SelectAll"()
          RETURNS SETOF "Population" AS
        $BODY$
          select "Id", "Name", "Description", "IsVisible" from "Population";
        $BODY$
          LANGUAGE 'sql' STABLE
          COST 100;

    Calling the select function returns all the rows in the table, as expected. I need to add a couple of columns to the table (both of which are foreign keys to other tables in the database). This gives me a new table definition as follows:

        CREATE TABLE "Population"
        (
          "Id" bigint NOT NULL DEFAULT nextval('"population_Id_seq"'::regclass),
          "Name" character varying(255) NOT NULL,
          "Description" character varying(1024),
          "IsVisible" boolean NOT NULL,
          "DefaultSpeciesId" bigint NOT NULL,
          "DefaultEcotypeId" bigint NOT NULL,
          CONSTRAINT "pk_Population" PRIMARY KEY ("Id"),
          CONSTRAINT "fk_Population_DefaultEcotypeId" FOREIGN KEY ("DefaultEcotypeId")
              REFERENCES "Ecotype" ("Id") MATCH SIMPLE
              ON UPDATE NO ACTION ON DELETE NO ACTION,
          CONSTRAINT "fk_Population_DefaultSpeciesId" FOREIGN KEY ("DefaultSpeciesId")
              REFERENCES "Species" ("Id") MATCH SIMPLE
              ON UPDATE NO ACTION ON DELETE NO ACTION
        )
        WITH (
          OIDS=FALSE
        );

    and function:

        CREATE OR REPLACE FUNCTION "Population_SelectAll"()
          RETURNS SETOF "Population" AS
        $BODY$
          select "Id", "Name", "Description", "IsVisible", "DefaultSpeciesId", "DefaultEcotypeId" from "Population";
        $BODY$
          LANGUAGE 'sql' STABLE
          COST 100
          ROWS 1000;

    Calling the function after these changes results in the following error message:

        ERROR: could not find attribute 11 in subquery targetlist
        SQL state: XX000

    What is causing this error, and how do I fix it? I have tried to drop and recreate the columns and the function, but the same error occurs. The platform is PostgreSQL 8.4 running on Windows Server. Thanks.

  • SQL Query: How to determine "Seen during N hour" if given two DateTime time stamps?

    - by efess
    Hello all. I'm writing a statistics application based on a SQLite database. There is a table which records when users log in and log out (SessionStart and SessionEnd datetimes). What I'm looking for is a query that can show during which hours users have been logged in, in sort of a line-graph way: so between the hours of 12:00 and 1:00 AM there were 60 users logged in (at any point), between the hours of 1:00 and 2:00 AM there were 54 users logged in, etc. And I want to be able to run a SUM of this, which is why I can't bring the records into .NET and iterate through them that way. I've come up with a rather primitive approach, a subquery for each hour of the day, but it has proved to be slow, and I need to be able to calculate this for a couple hundred thousand records in a split second:

        SELECT
          case when (strftime('%s', datetime(date(sessionstart), '+0 hours')) > strftime('%s', sessionstart)
                 AND strftime('%s', datetime(date(sessionstart), '+0 hours')) < strftime('%s', sessionend))
                OR (strftime('%s', datetime(date(sessionstart), '+1 hours')) > strftime('%s', sessionstart)
                 AND strftime('%s', datetime(date(sessionstart), '+1 hours')) < strftime('%s', sessionend))
                OR (strftime('%s', datetime(date(sessionstart), '+0 hours')) < strftime('%s', sessionstart)
                 AND strftime('%s', datetime(date(sessionstart), '+1 hours')) > strftime('%s', sessionend))
          then 1 else 0 end as hour_zero,
          ... hour_one,
          ... hour_two,
          ........ hour_twentythree
        FROM UserSession

    I'm wondering what better way there is to determine whether two datetimes span a particular hour (best-case scenario, how many times a session has crossed an hour if the user was logged in over multiple days, but that's not necessary). The only other idea I had is an "hour" table specific to this, and just tallying up the hours the user has been seen at runtime, but I feel like that is more of a hack than the previous SQL. Any help would be greatly appreciated!
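
    For what it's worth, the standard interval-overlap test needs only two comparisons: a session touches an hour slot iff it starts before the slot ends and ends after the slot starts. A sketch using the "hour" helper table mentioned above (table and column names are hypothetical; assumes ISO-formatted datetimes, which compare correctly as strings):

        -- hours(hour_of_day, hour_start, hour_end) holds the 24 slots of one day
        SELECT h.hour_of_day, COUNT(*) AS sessions_seen
        FROM hours h
        JOIN UserSession s
          ON s.sessionstart < h.hour_end
         AND s.sessionend   > h.hour_start
        GROUP BY h.hour_of_day;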

  • Oracle query with inconsistent results

    - by Spencer Stejskal
    I'm having a very strange problem: I have a complicated view that returns incorrect data when I query on a particular column. Here's an example:

        select empname, has_garnishment
        from timecard_v2
        where empname = 'Testerson, Testy';

    This returns the single result 'Testerson, Testy', 'N'. However, if I use the query:

        select empname, has_garnishment
        from timecard_v2
        where empname = 'Testerson, Testy'
          and has_garnishment = 'Y';

    this returns the single result 'Testerson, Testy', 'Y'. The second query should return a subset of the first query, but it returns a different answer. I have dissected the view and determined that this section of the view definition is where the problem arises (note: I removed all of the select clause except the parts of interest, for clarity; in the full query all joined tables are required):

        SELECT e.fullname empname,
               NVL2(ded.has_garn, 'Y', 'N') has_garnishment
        FROM timecard tc,
             orderdetail od,
             orderassign oa,
             employee e,
             employee3 e3,
             customer10 c10,
             order_misc om,
             (SELECT COUNT(*) has_garn, v_ssn
              FROM deductions
              WHERE yymmdd_stop = 0
                 OR (LENGTH(yymmdd_stop) = 7
                     AND to_date(SUBSTR(yymmdd_stop, 2), 'YYMMDD') > sysdate)
              GROUP BY v_ssn) ded
        WHERE oa.lrn(+) = tc.lrn_order
          AND om.lrn(+) = od.lrn
          AND od.orderno = oa.orderno
          AND e.ssn = tc.ssn
          AND c10.custno = tc.custno
          AND e.lrn = e3.lrn
          AND e.ssn = ded.v_ssn(+)

    One thing of note about the definition of the 'ded' subquery: the v_ssn field is a virtual field on the deductions table. I am not a DBA, I'm a software developer, but we recently lost our DBA and the new one is still getting up to speed, so I'm trying to debug this issue. That being said, please explain things a little more thoroughly than you would for a fellow Oracle expert. Thanks.
