Search Results

Search found 17593 results on 704 pages for 'wmi query'.

  • SQL Server 2008 full-text search doesn't find word in words?

    - by Martijn
    In the database I have a field with a .mht file, and I want to use FTS to search within this document. I got this working, but I'm not satisfied with the result. For example (sorry, it's in Dutch, but I think you get my point) I will use two words: 'zieken' and 'ziekenhuis'. As you can see, the word 'zieken' is contained in the word 'ziekenhuis'. When I search on 'ziekenhuis' I get about 20 results; when I search on 'zieken' I get 7 results. How is this possible? I mean, why doesn't the FTS return at least the results I get for 'ziekenhuis'? Here's the query I use:

        SELECT DISTINCT d.DocID 'Id', d.Titel,
               (SELECT afbeeldinglokatie FROM tbl_Afbeelding WHERE soort = 'beleid') AS Pic,
               'belDoc' AS DocType
        FROM docs d
        JOIN kpl_Document_Lokatie dl ON d.DocID = dl.DocID
        JOIN HandboekLokaties hb ON dl.LokatieID = hb.LokatieID
        WHERE hb.InstellingID = @instellingId
          AND ( FREETEXT(d.Doel, @searchstring)
                OR FREETEXT(d.Toepassingsgebied, @searchstring)
                OR FREETEXT(d.HtmlDocument, @searchstring)
                OR FREETEXT(d.extraTabblad, @searchstring) )
          AND d.StatusID NOT IN (1, 5)
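
    FREETEXT matches whole words and their inflectional forms, never substrings, so 'zieken' will not pull in occurrences of 'ziekenhuis'. A minimal sketch of a prefix search with CONTAINS, reusing a column from the query above:

        -- Sketch, not from the original post: CONTAINS with a prefix wildcard
        -- matches any indexed word that starts with 'zieken', including 'ziekenhuis'.
        SELECT d.DocID
        FROM docs d
        WHERE CONTAINS(d.HtmlDocument, '"zieken*"');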

    Read the article

  • Can Oracle types be updated like tables?

    - by Omnipresent
    I am converting GTTs to Oracle types as explained in an excellent answer by APC. However, some GTTs are being updated based on a select query from another table. For example:

        UPDATE my_gtt_1 c
        SET (street, city, STATE, zip) =
            (SELECT src.unit_address, src.unit_city, src.unit_state, src.unit_zip_code
             FROM (SELECT mbr.ROWID row_id,
                          unit_address,
                          RTRIM(a.unit_city) unit_city,
                          RTRIM(a.unit_state) unit_state,
                          RTRIM(a.unit_zip_code) unit_zip_code
                   FROM table_1 b, table_2 a, my_gtt_1 mbr
                   WHERE type = 'ABC'
                     AND id = b.ssn_head
                     AND a.h_id = b.h_id
                     AND row_id >= v_start_row
                     AND row_id <= v_end_row) src
             WHERE c.ROWID = src.row_id)
        WHERE state IS NULL OR state = ' ';

    If my_gtt_1 were not a global temporary table but an Oracle collection type, would updates this complex still be possible? Or are we better off keeping the global temporary table in these cases?
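
    A collection variable cannot be the target of an UPDATE statement, so the collection equivalent is element-by-element assignment in PL/SQL. A hedged sketch with illustrative names, not the original schema:

        -- Hedged sketch (record/collection names are illustrative): a PL/SQL
        -- collection is corrected by assigning to its elements in a loop,
        -- not with an UPDATE statement.
        DECLARE
          TYPE member_rec IS RECORD (street VARCHAR2(100), state VARCHAR2(20));
          TYPE member_tab IS TABLE OF member_rec INDEX BY PLS_INTEGER;
          l_members member_tab;
        BEGIN
          l_members(1).street := '12 Main St';  l_members(1).state := NULL;
          l_members(2).street := '9 Elm St';    l_members(2).state := 'NY';

          FOR i IN 1 .. l_members.COUNT LOOP
            IF l_members(i).state IS NULL OR l_members(i).state = ' ' THEN
              -- in the real code this value would come from table_1/table_2
              l_members(i).state := 'CA';
            END IF;
          END LOOP;
        END;
        /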

    Read the article

  • Compare range of ip addresses with start and end ip address in MySQL

    - by Maarten
    I have a MySQL table where I store IP ranges. It is set up so that the start address is stored as a long, as is the end address (along with an id and some other data). Now users are adding ranges by entering a start and an end IP address, and I would like to check that the new range is not already (partially) in the database. I know I can do a BETWEEN query, but that doesn't seem to work across two different columns, and I also cannot figure out how to pass a whole range in to compare against. Doing it in a loop in PHP is a possibility, but with a range like 132.0.0.0-199.0.0.0 that would be quite a large number of queries.
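
    Two ranges overlap exactly when each one starts before the other ends, which turns the whole check into a single query. A sketch assuming a table ranges(start_ip, end_ip) with addresses stored as longs; the ? placeholders are the new range's endpoints passed as bind parameters:

        -- Sketch (assumed table/column names): any row returned overlaps
        -- the proposed new range [new_start, new_end].
        SELECT id
        FROM ranges
        WHERE start_ip <= ?   -- existing range starts before the new one ends (new_end)
          AND end_ip   >= ?;  -- and ends after the new one starts (new_start)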

    Read the article

  • SQL optional parameters through VB.net

    - by ScaryJones
    I've a document search page with three listboxes that allow multiple selections:

    - Category A
    - Year
    - Category B

    Only Category A is mandatory; the others are optional parameters and might be empty. Each document can belong to multiple options in Category A and multiple options in Category B, but each document has only one year associated with it. I've kind of got this working by building up a dynamic SQL string, but it's messy and I hate using it, so I thought I'd ask here if anyone could see an easier way of doing this. An example of the kind of dynamic SQL query I end up with follows:

        SELECT *
        FROM library
        WHERE libraryID IN (SELECT DISTINCT libraryID
                            FROM categoryAdocs
                            WHERE categoryAdocID IN (4))
           OR year IN (2004)
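
    One static, parameterized statement can cover every combination if each optional filter collapses to true when its parameter is empty. A hedged sketch; @categoryB and the categoryBdocs table are assumed names, and true multi-selects would need an IN list or table-valued parameter rather than a single value:

        -- Hedged sketch: optional filters short-circuit when their
        -- parameter is NULL, so no dynamic SQL is needed.
        SELECT *
        FROM library
        WHERE libraryID IN (SELECT libraryID FROM categoryAdocs
                            WHERE categoryAdocID = @categoryA)
          AND (@year IS NULL OR year = @year)
          AND (@categoryB IS NULL OR libraryID IN
                (SELECT libraryID FROM categoryBdocs
                 WHERE categoryBdocID = @categoryB));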

    Read the article

  • Doing a join across two databases with different collations on SQL Server and getting an error.

    - by Andrew G. Johnson
    I know, I know: with what I wrote in the question I shouldn't be surprised. My situation is that I'm slowly working through an inherited POS system, and my predecessor apparently wasn't aware of JOINs, so when I looked into one of the internal pages that takes 60 seconds to load, I found a fairly quick fix: rewrite these 8 queries as one query with JOINs. The problem is that besides not knowing about JOINs, he also seems to have had a fetish for multiple databases, and, surprise surprise, they use different collations. The fact of the matter is we use only the "normal" Latin characters that English-speaking people would consider the entire alphabet, and this whole thing will be out of use in a few months, so a band-aid is all I need. Long story short, I need some method of casting to a single collation so I can compare two fields from two databases. The exact error is:

        Cannot resolve the collation conflict between "SQL_Latin1_General_CP850_CI_AI"
        and "SQL_Latin1_General_CP1_CI_AS" in the equal to operation.
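
    Forcing one side of the comparison to a common collation in the JOIN predicate is the usual band-aid. A minimal sketch with illustrative database, table, and column names:

        -- Sketch (illustrative names): COLLATE on one side of the predicate
        -- resolves the conflict; COLLATE DATABASE_DEFAULT also works as the target.
        SELECT a.CustomerName
        FROM db1.dbo.Customers AS a
        JOIN db2.dbo.Orders AS b
          ON a.CustomerCode = b.CustomerCode COLLATE SQL_Latin1_General_CP1_CIAS;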

    Read the article

  • GAE transaction exceptions

    - by bach
    Hi. In this example, is the exception thrown if ANY of the Table elements are changed by another client, or only if the element that we changed has been changed by another client? And just to verify: the exception is thrown from the commit(), isn't it?

        PersistenceManager pm = PMF.get().getPersistenceManager();
        try {
            pm.currentTransaction().begin();
            List<Row> Table = (List<Row>) pm.newQuery(query).execute();
            Table.get(0).setReserved(true); // <----- we change only this element
            pm.currentTransaction().commit();
        } catch (JDOCanRetryException ex) {
            pm.currentTransaction().rollback(); // <----- if Table.get(1) was changed by another client, do we get to this point???
        }

    Read the article

  • Using RDL files in Web ReportViewer

    - by user54064
    I want to use an .rdl file with the query information stored in it; I don't want to have to convert it to an .rdlc file. I have an ASP.NET app in which I want to show the report. I thought I would put a ReportViewer on my page and have it use the .rdl file. However, I get an error, and my research suggests I would have to convert the file to an .rdlc file. I don't want to strip out the data contained in the report. How can I show the report to the user by running the .rdl report?

    Read the article

  • How do you verify the correct data is in a data mart?

    - by blockcipher
    I'm working on a data warehouse and I'm trying to figure out how best to verify that data from our data cleansing (normalized) database makes it into our data marts correctly. I've done some searches, but the results so far talk more about ensuring things like constraints are in place and doing data validation during the ETL process (e.g. dates are valid). The dimensions were pretty easy, as I could either leverage the primary key or write a very simple, verifiable query to get the data. The fact tables are more complex. Any thoughts? We're trying to make this very easy for a subject matter expert to run a couple of queries, see some data from both the data cleansing database and the data marts, and visually compare the two to ensure they are correct.
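
    For fact tables, a common approach is to reconcile row counts and control totals per grain between the source and the mart; anything the query returns is a discrepancy to investigate. A hedged sketch in which every database, table, and column name is illustrative:

        -- Hedged sketch (all names illustrative): compare daily row counts
        -- and amount totals between the cleansed source and the fact table.
        SELECT s.load_date, s.src_rows, f.fact_rows, s.src_amount, f.fact_amount
        FROM (SELECT CAST(created_at AS DATE) AS load_date,
                     COUNT(*) AS src_rows,
                     SUM(amount) AS src_amount
              FROM cleansing.dbo.transactions
              GROUP BY CAST(created_at AS DATE)) AS s
        FULL OUTER JOIN
             (SELECT d.calendar_date AS load_date,
                     COUNT(*) AS fact_rows,
                     SUM(fs.amount) AS fact_amount
              FROM mart.dbo.fact_sales AS fs
              JOIN mart.dbo.dim_date AS d ON fs.date_key = d.date_key
              GROUP BY d.calendar_date) AS f
          ON s.load_date = f.load_date
        WHERE s.src_rows <> f.fact_rows
           OR s.src_amount <> f.fact_amount;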

    Read the article

  • IN statement performance in PostgreSQL (and in general)

    - by Vasil
    I know this has probably been asked before, but I can't find it with SO's search. Let's say I have TABLE1 and TABLE2; how should I expect the performance of a query such as this to behave:

        SELECT * FROM TABLE1 WHERE id IN SUBQUERY_ON_TABLE2;

    as the number of rows in TABLE1 and TABLE2 grows, where id is a primary key on TABLE1? Yes, I know using IN is such a n00b mistake, but TABLE2 has a generic relation (django generic relation) to multiple other tables, so I can't think of another way to filter the data. At what (approximate) amount of rows in TABLE1 and TABLE2 should I expect to notice performance issues because of this? Will performance degrade linearly, exponentially, etc. depending on the number of rows?
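
    PostgreSQL generally plans IN (subquery) as a hashed semi-join rather than re-running the subquery per row, so an equivalent way to write (and sanity-check) the query is EXISTS. A minimal sketch; t2.object_id stands in for whatever column the generic relation actually stores:

        -- Sketch (object_id is an assumed column): a semi-join form that the
        -- planner treats much like IN (subquery) on modern PostgreSQL.
        SELECT t1.*
        FROM TABLE1 t1
        WHERE EXISTS (SELECT 1 FROM TABLE2 t2 WHERE t2.object_id = t1.id);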

    Read the article

  • performance of parameterized queries for different db's

    - by tuinstoel
    A lot of people know that it is important to use parameterized queries to prevent SQL injection attacks. Parameterized queries are also much faster in SQLite and Oracle when doing online transaction processing, because the query optimizer doesn't have to reparse every parameterized SQL statement before executing. I've seen SQLite become 3 times faster when you use parameterized queries, and Oracle can become 10 times faster in some extreme cases with a lot of concurrency. How about other DBs like MySQL, MS SQL, DB2 and PostgreSQL? Is there an equal difference in performance between parameterized queries and literal queries?
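
    For measuring this on MySQL, server-side prepared statements make the parse-once/execute-many behavior explicit. A minimal sketch (the users table is illustrative):

        -- Sketch: the statement is parsed once at PREPARE time and can be
        -- re-executed with new parameter values without reparsing the SQL text.
        PREPARE get_user FROM 'SELECT name FROM users WHERE id = ?';
        SET @id = 42;
        EXECUTE get_user USING @id;
        SET @id = 43;
        EXECUTE get_user USING @id;
        DEALLOCATE PREPARE get_user;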

    Read the article

  • How to create a view of table that contains a timestamp column?

    - by Matt Faus
    This question is an extension of a previous one I have asked. I have a table (2014_05_31_transformed.Video); I have put up the JSON returned by the BigQuery API describing its schema in this gist. I am trying to create a view against this table with an API call that looks like this:

        {
            'view': {
                'query': u'SELECT deleted_mod_time FROM [2014_05_31_transformed.Video]'
            },
            'tableReference': {
                'datasetId': 'latest_transformed',
                'tableId': u'Video',
                'projectId': 'redacted'
            }
        }

    But the BigQuery API is returning this error:

        HttpError: https://www.googleapis.com/bigquery/v2/projects/124072386181/datasets/latest_transformed/tables?alt=json
        returned "Invalid field name "deleted_mod_time.usec". Fields must contain only letters, numbers, and
        underscores, start with a letter or underscore, and be at most 128 characters long."

    The schema the BigQuery API returns makes no distinction between a TIMESTAMP data type and a regular nullable INTEGER data type, so I can't think of a way to programmatically correct this problem. Is there anything I can do, or is this a bug in BigQuery's view implementation?

    Read the article

  • PL/SQL bulk collect into associative array with sparse key

    - by Dan
    I want to execute a SQL query inside PL/SQL and populate the results into an associative array, where one of the columns in the SQL becomes the key in the associative array. For example, say I have a table Person with columns:

        PERSON_ID    INTEGER PRIMARY KEY
        PERSON_NAME  VARCHAR2(50)

    ...and values like:

        PERSON_ID | PERSON_NAME
        ------------------------
                6 | Alice
               15 | Bob
             1234 | Carol

    I want to bulk collect this table into a TABLE OF VARCHAR2(50) INDEX BY INTEGER such that the key 6 in this associative array has the value Alice, and so on. Can this be done in PL/SQL? If so, how?
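
    BULK COLLECT fills a collection densely from index 1, so it cannot key an associative array by a column value directly; the usual pattern is to fetch the rows in bulk and then build the sparse array in a loop. A hedged sketch against the Person table above:

        -- Hedged sketch: fetch with BULK COLLECT, then key the associative
        -- array by PERSON_ID one element at a time.
        DECLARE
          TYPE name_by_id IS TABLE OF VARCHAR2(50) INDEX BY PLS_INTEGER;
          TYPE person_tab IS TABLE OF person%ROWTYPE;
          l_rows  person_tab;
          l_names name_by_id;
        BEGIN
          SELECT * BULK COLLECT INTO l_rows FROM person;
          FOR i IN 1 .. l_rows.COUNT LOOP
            l_names(l_rows(i).person_id) := l_rows(i).person_name;
          END LOOP;
          -- l_names(6) is now 'Alice', l_names(15) is 'Bob', and so on
        END;
        /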

    Read the article

  • One executable with cmd-line params or just many satellite executables?

    - by Nikos Baxevanis
    I am designing an application back-end. For now, it is a .NET process (a console application) which hosts various communication frameworks such as Agatha and NServiceBus. I need to periodically update my datastore with values coming from the application while it's running. I found three possible ways:

    1. Accept command line arguments, so I can call my console app with -update.
    2. On startup, spawn a background thread that periodically invokes the update method.
    3. Create an updater.exe app which does the updates, but this means some code duplication, since in some way it will need to query the data from the source in order to save it to the datastore.

    Which one is better?

    Read the article

  • More XML Problems - Undeclared Entity 'nbsp'

    - by Gogster
    I'm getting the error at:

        Line 49: xml = r.ReadToEnd();
        Line 50:
        Line 51: System.Xml.Linq.XDocument xmlDoc = System.Xml.Linq.XDocument.Parse(xml);
        Line 52:
        Line 53: var query = from p in xmlDoc.Descendants("member")

    on my XML. When I run the code that generates the XML in an empty page, it runs without error; if I call the code within my webpage, it throws this error. The only 'nbsp' on the page is a doctype declaration at the top of the XSLT:

        <!DOCTYPE xsl:stylesheet [ <!ENTITY nbsp "&#x00A0;"> ]>

    I'm at a loss as to where this error is coming from, and I'm looking for suggestions please! Thanks.

    Read the article

  • Adding custom columns to Propel model?

    - by Hard-Boiled Wonderland
    At the moment I am using the query below:

        $claims = ClaimQuery::create('c')
            ->leftJoinUser()
            ->withColumn('CONCAT(User.Firstname, " ", User.Lastname)', 'name')
            ->withColumn('User.Email', 'email')
            ->filterByArray($conditions)
            ->paginate($page = $page, $maxPerPage = $top);

    However, I then want to add columns manually, so I thought this would simply work:

        foreach($claims as &$claim){
            $claim->actions = array('edit' => array(
                    'url' => $this->get('router')->generate('hera_claims_edit'),
                    'text' => 'Edit'
                )
            );
        }
        return array('claims' => $claims, 'count' => count($claims));

    However, when the data is returned, Propel or Symfony2 seems to be stripping the custom data, along with all of the superfluous model data, when it gets converted to JSON. What is the correct way of manually adding data this way?

    Read the article

  • A PHP object server for business logic

    - by Matthieu
    I wonder if such a thing is possible or even exists: in the MVC pattern, I would like the Model part to be persistent in memory, instead of reloading my instances on each execution, so that only the Controller and View parts are executed per request. Is there any kind of server that would provide PHP objects (just like a MySQL server provides data records) and keep these objects in memory? Another problem would be: how to build a query to get objects? Maybe PHP Linq?

    Read the article

  • Repeatedly querying xml using python

    - by Jack
    I have some XML documents I need to run queries on. I've created some Python scripts (using ElementTree) to do this, since I'm vaguely familiar with it. The way it works is that I run the scripts several times with different arguments, depending on what I want to find out. These files can be relatively large (10MB+), so parsing them takes rather a long time. On my system, just running:

        tree = ElementTree.parse(document)

    takes around 30 seconds, with a subsequent findall query adding only around a second to that. Seeing as the way I'm doing this requires me to repeatedly parse the file, I was wondering if there is some sort of caching mechanism I can use so that the ElementTree.parse computation can be reduced on subsequent queries. I realise the smart thing to do here may be to batch as many queries as possible together in the Python script, but I was hoping there might be another way. Thanks.

    Read the article

  • Retrieve list of students registered for a particular course

    - by Pankaj Khurana
    Hi, I have Moodle 1.9 installed on my system and am writing some custom reports. I want to retrieve the list of students registered for a particular course. I have a user profile field, usercategoryid, through which I figure out which category a user has enrolled in; a course belongs to a particular category, and through the mdl_user_info_data table I retrieve the category for a particular user. I have written a query to retrieve the users who have registered for a particular category, e.g. Financial Planning 2010. But the issue is that I now want the users who have registered for a particular course, e.g. Insurance Planning. There is a relation between course and course category, but I am unable to find the table where course and user information is linked. Please help me on this. Thanks.
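
    In a stock Moodle 1.9 schema there is no single user-course table; enrolments live in mdl_role_assignments against a course context. A hedged sketch, assuming the default mdl_ prefix and the default student role id of 5:

        -- Hedged sketch (standard Moodle 1.9 tables assumed): students are
        -- role assignments in the course's context (contextlevel 50 = course).
        SELECT u.id, u.firstname, u.lastname
        FROM mdl_user u
        JOIN mdl_role_assignments ra ON ra.userid = u.id
        JOIN mdl_context ctx ON ctx.id = ra.contextid
        JOIN mdl_course c ON c.id = ctx.instanceid
        WHERE ctx.contextlevel = 50          -- course context
          AND ra.roleid = 5                  -- default 'student' role id
          AND c.fullname = 'Insurance Planning';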

    Read the article

  • algorithm advice for finding maximum items within a time period

    - by darren
    Hi everyone. I have a database schema that is similar to the following:

        | User | Event       | Date
        |------|-------------|-----------
        | 111  | Walked dog  | 2009-10-1
        | 222  | Walked dog  | 2009-10-2
        | 333  | Fed Fish    | 2009-10-5
        | 222  | Did Laundry | 2009-10-6
        | 111  | Fed Fish    | 2009-10-7
        | 111  | Walked dog  | 2009-10-18
        | 222  | Walked dog  | 2009-10-19
        | 111  | Fed Fish    | 2009-10-21

    I would like to produce a query that returns the maximum number of times a user performs some action within a time period. For example, given a time period of 5 days, what is the maximum number of times user 111 walked the dog? The most obvious solution would be to start at some zero point and move forward one day at a time, summing up 5-day periods along the way, then taking the maximum total out of all the 5-day windows. That approach seems incredibly costly, however. I would appreciate any suggestions you may have.
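
    A set-based alternative is to treat each event row as the start of a candidate window (the best window can always be shifted to start at an event) and count the matching rows that fall inside it. A hedged sketch in MySQL syntax, assuming the schema above lives in a table called events:

        -- Hedged sketch (events is an assumed table name): each row anchors
        -- a 5-day window; count same-user/same-event rows inside the window
        -- and keep the largest count.
        SELECT a.user, a.event, a.date AS window_start, COUNT(*) AS times
        FROM events a
        JOIN events b
          ON  b.user  = a.user
          AND b.event = a.event
          AND b.date >= a.date
          AND b.date <  a.date + INTERVAL 5 DAY
        WHERE a.user = 111 AND a.event = 'Walked dog'
        GROUP BY a.user, a.event, a.date
        ORDER BY times DESC
        LIMIT 1;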

    Read the article

  • SQL Server 2008 log size management problems

    - by b0x0rz
    I'm trying to shrink the log of a database AND set the recovery model to simple, but there is always an error, whatever I try:

        USE 4_o5;
        GO
        ALTER DATABASE 4_o5 SET RECOVERY SIMPLE;
        GO
        DBCC SHRINKFILE (4_o5_log, 10);
        GO

    The output of sp_helpfile says that the log file is located at I:\dataroot\4_o5_log.LDF (hosted solution). Please help me perform this operation: the log file got large when importing a lot of data, that info is no longer needed, and I have multiple (lots of) backups since then. The exact error message when running the query above is:

        Incorrect syntax near '4'.
        RECOVERY is not a recognized SET option.
        Incorrect syntax near '_5_log'.

    I am using Visual Studio 2010 (I also have SQL Server Express installed locally, with SQL Server 2008 proper installed at the provider, shared). Thanks a lot.
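
    The syntax errors point at the identifiers themselves: names that start with a digit are not regular identifiers in T-SQL and must be bracket-quoted. A minimal sketch of the same batch with quoting added:

        -- Sketch: bracket-quote identifiers that begin with a digit.
        USE [4_o5];
        GO
        ALTER DATABASE [4_o5] SET RECOVERY SIMPLE;
        GO
        DBCC SHRINKFILE ([4_o5_log], 10);
        GO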

    Read the article

  • Sqlite3 activerecord :order => "time DESC" doesn't sort

    - by Ole Morten Amundsen
    Rails 2.3.4, SQLite3. I'm trying this:

        Production.find(:all, :conditions => ["time > ?", start_time.utc],
                        :order => "time DESC", :limit => 100)

    The condition works perfectly, but I'm having problems with the :order => "time DESC". By chance, I discovered that it worked on Heroku (testing with heroku console), which runs PostgreSQL. However, locally, using SQLite3, new entries are sorted after old ones, no matter what I set time to. Like this (output has been manually stripped; the second entry is new):

        Production id: 2053939460, time: "2010-04-24 23:00:04", created_at: "2010-04-24 23:00:05"
        Production id: 2053939532, time: "2010-04-25 10:00:00", created_at: "2010-04-27 05:58:30"
        Production id: 2053939461, time: "2010-04-25 00:00:04", created_at: "2010-04-25 00:00:04"
        Production id: 2053939463, time: "2010-04-25 01:00:04", created_at: "2010-04-25 01:00:04"

    It seems like it sorts on the primary key, id, not time. Note that the query works fine on Heroku, returning a correctly ordered list! I like SQLite, it's so KISS, I hope you can help me... Any suggestions?
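
    If the time values reach SQLite as anything other than uniformly formatted strings, lexical ordering breaks down; one way to check from the SQL side is to force datetime interpretation during the sort. A hedged sketch against the conventional Rails table name (productions):

        -- Hedged sketch (productions is the conventional table name):
        -- datetime() normalizes the column before ordering, so rows sort
        -- chronologically regardless of how the text was stored.
        SELECT id, time, created_at
        FROM productions
        ORDER BY datetime(time) DESC
        LIMIT 100;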

    Read the article

  • Dynamically append number to PDF or make submit button change its URL based on that number.

    - by jamone
    I'm serving up PDFs from a SQL db and presenting them in the browser. I'm trying to figure out a way to dynamically embed a number in the PDF that is the recordID of that PDF's SQL record, so that when the user hits the submit button and the form submits its XML data to my submission page, I can tell which PDF this is. If there is some way of changing the submission URL on the fly, then I could use a query string to pass myself the recordID. I'm not generating the PDF in code; it's being created by hand and then uploaded to my site.

    Read the article

  • CakePHP, Retrieve Data for HABTM Models using find

    - by user298079
    I am new to CakePHP and I'm trying to accomplish something that should be relatively easy. I have two models, Project and Category, bound by a HABTM relationship. I am trying to perform the following query - find all projects that belong to a category:

        $projects = $this->Project->find('all',
            array('conditions' => array('Category.slug' => $category)));

    However, when I do so it generates an SQL error:

        SQL Error: 1054: Unknown column 'Category.slug' in 'where clause'

    What am I doing wrong?

    Read the article

  • Is there any way to modify a column before it is ordered in MySQL?

    - by George Edison
    I have a table with a field value which is a varchar(255). The contents of the field can be quite varied:

        $1.20
        $2994
        $56 + tax   (this one can be ignored, or truncated to $56 if necessary)

    I have a query constructed:

        SELECT value FROM unnamed_table ORDER BY value

    However, this of course uses ASCII string comparison to order the results rather than any kind of numerical comparison. Is there a way to truly order by value without changing the field type to DECIMAL or something else? In other words, can the value field be modified ('$' removed, value converted to decimal) on the fly before the results are sorted?
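
    Ordering by an expression does exactly this. A minimal sketch in MySQL: REPLACE strips the '$', and CAST keeps the leading numeric portion, so '56 + tax' sorts as 56 (with a truncation warning):

        -- Sketch: sort on a cleaned-up numeric expression instead of the
        -- raw varchar; the stored values themselves are untouched.
        SELECT value
        FROM unnamed_table
        ORDER BY CAST(REPLACE(value, '$', '') AS DECIMAL(10, 2));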

    Read the article

  • Android insert into sqlite database

    - by Josh
    I know there is probably a simple thing I'm missing, but I've been beating my head against the wall for the past hour or two. I have a database for the Android application I'm currently working on (Android v1.6), and I just want to insert a single record into a database table. My code looks like the following:

        // Save information to my table
        sql = "INSERT INTO table1 (field1, field2, field3) " +
              "VALUES (" + field_one + ", " + field_two + ")";
        Log.v("Test Saving", sql);
        myDataBase.rawQuery(sql, null);

    The myDataBase variable is a SQLiteDatabase object that can select data fine from another table in the schema. The saving appears to work fine (no errors in LogCat), but when I copy the database from the device and open it in SQLite browser, the new record isn't there. I also tried manually running the query in SQLite browser, and that works fine. The table schema for table1 is _id, field1, field2, field3. Any help would be greatly appreciated. Thanks!

    Read the article
