Search Results

Search found 5233 results on 210 pages for 'a records'.


  • LINQ to SQL problem

    - by Ronnie Overby
    I have a local collection of record IDs (integers). I need to retrieve records that have every one of their child records' IDs in that local collection. Here is my query:

        public List<int> OwnerIds { get; private set; }
        ...
        filteredPatches = from p in filteredPatches
                          where OwnerIds.All(o => p.PatchesOwners.Select(x => x.OwnerId).Contains(o))
                          select p;

    I am getting this error: "Local sequence cannot be used in Linq to SQL implementation of query operators except the Contains() operator." I understand that .All() isn't supported by LINQ to SQL, but is there a way to do what I am trying to do?
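
    A sketch of one possible workaround (not the only one): keep the local list inside a Contains() call, which LINQ to SQL can translate, and let All() run over the server-side child collection instead. This expresses the requirement as stated above, i.e. every child record's OwnerId must appear in the local collection:

        // Only the supported Contains() touches the local sequence (OwnerIds);
        // All() now operates on the server-side PatchesOwners collection.
        filteredPatches = from p in filteredPatches
                          where p.PatchesOwners.All(x => OwnerIds.Contains(x.OwnerId))
                          select p;

    If the intent is the reverse check (every ID in OwnerIds must appear among the children), a count comparison such as p.PatchesOwners.Count(x => OwnerIds.Contains(x.OwnerId)) == OwnerIds.Count is a common alternative, again relying only on Contains().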

  • Any tutorial for Python PalmDB library?

    - by roddik
    Hello, I've downloaded the Python PalmDB library, but I can't find any info on how to use it. I've tried reading the docstrings, and so far I've come up with the following code:

        from pprint import pprint
        from PalmDB.PalmDatabase import PalmDatabase

        pdb = PalmDatabase()
        with open('testdb.pdb', 'rb') as data:
            pdb.fromByteArray(data.read())

        pprint(dir(pdb))
        pprint(pdb.attributes)
        print pdb.__doc__
        #print pdb.records
        print pdb.records[10].toXML()

    This gives me the XML representation of a record (?) with some nasty long payload attribute, which doesn't resemble any kind of human-readable text to me. I just want to read the contents of the pdb file. Is there a guide/tutorial for this library? How would you go about figuring out the proper way to get things done in my situation?

  • SQL Server 2000 and FOR XML EXPLICIT

    - by Marcin
    Hi everyone, I've got a problem using FOR XML EXPLICIT in SQL Server 2000 (so I can't use the newer path() syntax from SQL Server 2005/2008). Essentially I have two tables, and the XML structure I want is:

        <xml>
          <table_1 field1="foo" field2="foobar2" field3="foobar3">
            <a_row_from_table_2 field1="goo" field2="goobar2" field3="goobar3" />
            <a_row_from_table_2 field1="hoo" field2="hoobar2" field3="hoobar3" />
          </table_1>
        </xml>

    That is, table_1 has a one-to-many relationship with table_2, and I want to turn that into a hierarchy. So far I can't seem to get it; the closest I've managed is all the records from table_1, with all the records from table_2 appended to the very last element of table_1. Any help with setting up this kind of relationship would be greatly appreciated. -Marcin
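
    For reference, a minimal sketch of the usual FOR XML EXPLICIT pattern for this parent/child shape, assuming table_1(id, field1, ...) and table_2(table_1_id, field1, ...); none of those column names come from the question. The part that interleaves each parent's children under it, instead of appending them all after the last parent, is the ORDER BY over the shared parent key:

        SELECT  1         AS Tag,
                NULL      AS Parent,
                t1.id     AS [table_1!1!id],
                t1.field1 AS [table_1!1!field1],
                NULL      AS [a_row_from_table_2!2!field1]
        FROM    table_1 t1
        UNION ALL
        SELECT  2, 1, t1.id, NULL, t2.field1
        FROM    table_1 t1
        JOIN    table_2 t2 ON t2.table_1_id = t1.id
        -- further attributes follow the same [element!tag!attribute] naming pattern
        ORDER BY [table_1!1!id], Tag
        FOR XML EXPLICIT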

  • Silverlight -> WCF -> Database -> problem

    - by Billy
    Hi there, I have some Silverlight code that calls a WCF service, which then uses the Entity Framework to access the database and return records. Everything runs fine, but when I replace the Entity Framework code with classic ADO.NET code I get an error: "The remote server returned an error: NotFound". When I call the ADO.NET code directly from a unit test it returns records fine, so it's not a problem with the ADO.NET code itself. I used Fiddler, and it seems to say that the service cannot be found, with a "500" error. I don't think it's anything to do with the service, as the only thing I changed is the technology used to access the database. Does anyone know what I'm missing here?

  • Web Services: more frequent "small" calls, or less frequent "big" calls

    - by Klay
    In general, is it better to have a web application make lots of calls to a web service getting smaller chunks of data back, or to have the web app make fewer calls and get larger chunks of data? In particular, I'm building a Silverlight app that needs to get large amounts of data back from the server in response to a query created by a user. Each query could return anywhere from a few hundred records to a few thousand. Each record has around thirty fields of mostly decimal-type data. I've run into the situation before where the payload size of the response exceeded the maximum allowed by the service. I'm wondering whether it's better (more efficient for the server/client/web service) to cut this payload vertically--getting all values for a single field with each call--or horizontally--getting batches of complete records with each call. Or does it matter?

  • SQL DISTINCT Value Question

    - by CPOW
    How can I filter my results in a query? For example, I have 5 records:

        John,Smith,apple
        Jane,Doe,apple
        Fred,James,apple
        Bill,evans,orange
        Willma,Jones,grape

    Now I want a query that brings me back 3 records with the DISTINCT fruit, BUT (and here is the tricky part) I still want the columns for first name and last name. I do not care which of the 3 it returns, mind you, but I need it to return only 3 (or however many DISTINCT fruits there are). An example return would be:

        John,Smith,apple
        Bill,evans,orange
        Willma,Jones,grape

    Thanks in advance; I've been banging my head on this all day.
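
    A sketch of one common approach, assuming the table is called People and has an Id primary key alongside FirstName, LastName and Fruit columns (none of those names appear in the question): pick one arbitrary representative row per fruit by joining back on the smallest Id within each fruit group.

        SELECT p.FirstName, p.LastName, p.Fruit
        FROM   People p
        JOIN   ( SELECT Fruit, MIN(Id) AS Id
                 FROM   People
                 GROUP  BY Fruit ) x ON x.Id = p.Id

    On SQL Server 2005 and later the same result can be had with ROW_NUMBER() OVER (PARTITION BY Fruit ORDER BY Id), keeping only the rows numbered 1.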

  • Strange SQLAlchemy update behaviour

    - by Max
    I'm new to SQLAlchemy and Elixir, so I started from the tutorial and tried to create a table, insert a record, and then update it as follows:

        # elixir_test.py
        from elixir import *

        metadata.bind = "postgresql://myuser:mypwd@localhost:5432/dbname"
        metadata.bind.echo = True

        class Movie(Entity):
            title = Field(Unicode(30))
            year = Field(Integer)
            description = Field(UnicodeText)

            def __repr__(self):
                return '<Movie "%s" (%d)>' % (self.title, self.year)

    and in another file in the same directory:

        from elixir_test import *

        setup_all()
        create_all()    # create table

        Movie(title=u"Blade Runner", year=1982)    # add record
        session.commit()

        Movie.query.all()    # get records

        # trying to update the record and commit the changes, BUT...
        movie = Movie.query.first()
        movie.year = 1983
        session.commit()

        # now we have two records in our table, one
        # with year=1982 and one with year=1983
        Movie.query.all()

    What did I miss?

  • Isn't INT more efficient than UNIQUEIDENTIFIER?

    - by ck
    I have a parent table and a child table where the columns that join them together are of the UNIQUEIDENTIFIER type. The child table has a clustered index on the column that joins it to the parent table (its PK, which is also clustered). I have created a copy of both of these tables but changed the relationship columns to INTs instead, and rebuilt the indexes so that they are essentially the same structure and can be queried in the same way. When I query for a known 20 records from the parent table, pulling in all the related records from the child tables, I get identical query costs across both, i.e. a 50/50 cost split between the batches. If this is true, then my giant project to change all of the tables like this appears to be pointless, other than speeding up inserts. Can anyone shed any light on the situation?

  • Anyone using NoSQL databases for medical record storage?

    - by Brian Bay
    Electronic medical records are composed of different types of data. Visit information (date/location/insurance info) seems to lend itself to an RDBMS. Other types of medical information, such as lab reports, x-rays, photos, and electronic signatures, are document based and would seem to be good candidates for a 'document-oriented' database, such as MongoDB. Traditionally, binary data would be stored as a BLOB in an RDBMS. A hybrid approach using a traditional RDBMS along with a 'document-oriented' database would seem like a good alternative to this. Another alternative would be something like DB2 pureXML. The ultimate answer could be 'it depends', but I really just wanted to get some general feedback/ideas on this. Is anyone using the NoSQL approach for medical records?

  • Rspec-rails doesn't seem to find my models

    - by sa125
    Hi - I'm trying out RSpec, and immediately hit a wall when it doesn't seem to load database records I know exist. Here's my fairly simple spec (no tests yet):

        require File.expand_path(File.dirname(__FILE__) + '../spec_helper')

        describe SomeModel do
          before :each do
            @user1 = User.find(1)
            @user2 = User.find(2)
          end

          it "should do something fancy"
        end

    I get an ActiveRecord::RecordNotFound exception saying it couldn't find a User with ID=1 or ID=2, which I know for a fact exist. I set both the test and development databases to point to the same schema in database.yml, so this shouldn't be a database mix-up. I also ran script/generate rspec after installing the gems (rspec, rspec-rails), and added the config.gem entries to both environment.rb and test.rb. Any idea what I'm missing? Thanks.

    EDIT: It seems I was running the tests with rake spec:models, which emptied the database, and thus no records were found. When I used spec spec/models/some_model_spec.rb directly, everything worked as expected.

  • Postgresql 8.4 reading OID style BLOBs with Hibernate

    - by peter
    I am getting a weird case when querying Postgres 8.4 through Hibernate for some records with BLOBs (of type OID). The query returns fine, but when my code wants to read the content of the BLOB with the simple code below, it gets 0 bytes back:

        public static byte[] readBlob(Blob blob) throws Exception {
            InputStream is = null;
            try {
                is = blob.getBinaryStream();
                return org.apache.commons.io.IOUtils.toByteArray(is);
            } finally {
                if (is != null) try { is.close(); } catch (Exception e) {}
            }
        }

    The funny thing is that I only started getting this behaviour once I began adding more than one such record to the table. The underlying JDBC driver is type 3 (postgresql 8.4-701). Can someone give me a hint as to how to solve this issue? Thanks, Peter
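
    One thing worth checking, offered as a guess rather than a confirmed diagnosis: PostgreSQL large objects (OID columns) can only be streamed while the transaction that loaded the entity is still open, so reading the Blob after the session or transaction has ended can silently yield 0 bytes. A minimal sketch of reading the bytes inside the transaction (sessionFactory, Doc, getContent() and docId are made up for illustration; Session and Transaction are org.hibernate types):

        Session session = sessionFactory.openSession();
        Transaction tx = session.beginTransaction();
        try {
            Long docId = 1L;                                  // made-up id
            Doc doc = (Doc) session.get(Doc.class, docId);
            byte[] content = readBlob(doc.getContent());      // read while the tx is still open
            tx.commit();
            // `content` is already materialised and can be used after the transaction
        } catch (Exception e) {
            tx.rollback();
            throw e;
        } finally {
            session.close();
        }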

  • Selecting the most common value from relation - SQL statement

    - by Ronnie
    I have a table within my database that has many records; some records share the same value for one of the columns, e.g.:

        | id | name | software  |
        -------------------------
        | 1  | john | photoshop |
        | 2  | paul | photoshop |
        | 3  | gary | textmate  |
        | 4  | ade  | fireworks |
        | 5  | fred | textmate  |
        | 6  | bob  | photoshop |

    I would like to return the value of the most commonly occurring piece of software, using an SQL statement. So in the example above the required SQL statement would return 'photoshop', as it occurs more than any other piece of software. Is this possible? Thank you for your time.
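
    A minimal sketch, assuming SQL Server-style TOP and a table named installs (the question doesn't name the table); on MySQL or PostgreSQL the same idea is written with ORDER BY ... LIMIT 1 instead of TOP 1:

        SELECT   TOP 1 software
        FROM     installs
        GROUP BY software
        ORDER BY COUNT(*) DESC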

  • LINQ to remove duplicated property

    - by Shawn Mclean
    I have a LINQ statement like this:

        var media = (from p in postService.GetMedia(postId)
                     select new
                     {
                         PostId = postId,
                         SynthId = p.SynthId
                     });

    Many (possibly thousands) of the returned records have the same SynthId. I want to select just one of each, any random one. So when I'm finished, media should contain records with distinct SynthIds. SynthId can be null, and I want all of the nulls to remain in media (the distinct should not affect them). My DAL is Entity Framework, if that helps. How do I accomplish this in the most efficient way?
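
    A sketch of one way to do it, keeping the names from the question and pulling the rows into memory first with ToList() so the grouping runs client-side: keep every record whose SynthId is null, plus one arbitrary record per non-null SynthId.

        // rows is the materialised result of the query above
        var rows = media.ToList();

        var distinctMedia = rows
            .Where(m => m.SynthId == null)                 // keep all records with a null SynthId
            .Concat(rows
                .Where(m => m.SynthId != null)
                .GroupBy(m => m.SynthId)
                .Select(g => g.First()))                   // one arbitrary record per SynthId
            .ToList();

    If the result set is too large to materialise, the same GroupBy/Select shape can sometimes be pushed to the server instead, but whether it translates depends on the provider.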

  • PHP Doctrine: Filter Table?

    - by ropstah
    I'm still not convinced after my previous question and some experience. Requirements: I don't want to run an SQL query every time the filterBy() function is called, and I still want to be able to call filterBy() on the returned table. Please see the comment in the ObjectsTable class: "How to instantiate another table and add records which match the filter criteria?" Usage:

        $globaltable = new ObjectsTable();  // globally accessible variable
        $globaltable->findAll();            // this call is made once at the beginning of the request
        $globaltable->filterBy('somefield', $someValue);  // this function is used all over the place

    ObjectsTable class:

        class ObjectsTable extends Doctrine_Table
        {
            function filterBy($field, $value)
            {
                // How to instantiate another table and add records which match the criteria?
            }
        }

  • Word mergefield wildcard not correctly matching

    - by aZn137
    Hello, below is my merge field code:

        { IF { MERGEFIELD Subs_State } = "GA" "blah blah" "{ IF { MERGEFIELD CEOrgStates } = "GA" "blah blah" ""} "}

    I'm pulling records from an MS Access database. My goal is to check whether a record's Subs_State field matches "GA", or whether its CEOrgStates field contains the word "GA" (some records have values like "|FL|CA|GA|CT|KY|", without the quotes). When I merge the documents, Word doesn't seem to be able to match with the wildcards: if I compare against "*GA" (fields ending with GA), it works; however, the double wildcard "*GA*" doesn't seem to work at all. Here are the things I've tried:

        - Have the data in lowercase, then compare with lowercase
        - Have the data in lowercase, convert to and then compare with uppercase
        - Do the opposite of the above two with uppercase data
        - Use "*GA*" and "*ga*" (no pipe)
        - Use different delimiters

    Nothing seems to work with the double wildcard matching. What am I doing wrong? Thanks!

  • MsSQL 2005 query performance

    - by Max
    I have the following query:

        select .............
        from // one table and about 20 left joins //
        where
        (
            ( this_.driverName like 'blah*' or this_.renterName like 'blah*' )
            or exists
            (
                select this0__.id as y0_
                from ThirdParty this0__
                where this0__.name like 'blah*'
                  and this0__.claim_id = this_.id
            )
        )
        order by this_.id asc

    And I have two environments: one with 175,000 records in the "this_" table and a second with 25,000 records. This query runs fine on the 175k database, taking about 2 seconds, but on the 25k database it freezes. If I drop one of the following items from the where clause:

        ( this_.driverName like 'blah*' or this_.renterName like 'blah*' )
        or exists
        (
            select this0__.id as y0_
            from ThirdParty this0__
            where this0__.name like 'blah*'
              and this0__.claim_id = this_.id
        )

    the query runs normally. How can I increase the performance of this query?

  • SQL Server Process Queue Race Condition

    - by William Edmondson
    I have an order queue that is accessed by multiple order processors through a stored procedure. Each processor passes in a unique ID which is used to lock the next 20 orders for its own use. The stored procedure then returns these records to the order processor to be acted upon. There are cases where multiple processors are able to retrieve the same 'OrderTable' record, at which point they try to operate on it simultaneously. This ultimately results in errors being thrown later in the process. My next course of action is to allow each processor to grab all available orders and just round-robin the processors, but I was hoping to simply make this section of code thread safe and allow the processors to grab records whenever they like. So, explicitly: any idea why I am experiencing this race condition, and how can I solve the problem?

        BEGIN TRAN

        UPDATE OrderTable WITH ( ROWLOCK )
        SET    ProcessorID = @PROCID
        WHERE  OrderID IN ( SELECT TOP ( 20 ) OrderID
                            FROM   OrderTable WITH ( ROWLOCK )
                            WHERE  ProcessorID = 0 )

        COMMIT TRAN

        SELECT OrderID, ProcessorID, etc...
        FROM   OrderTable
        WHERE  ProcessorID = @PROCID
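
    For reference, a sketch of the usual fix, assuming the same OrderTable schema: with ROWLOCK alone, two processors can both read the same ProcessorID = 0 rows before either UPDATE commits. Adding UPDLOCK (take update locks while reading) and READPAST (skip rows another transaction already holds locked) to the inner SELECT lets each processor claim a disjoint batch.

        BEGIN TRAN

        UPDATE OrderTable
        SET    ProcessorID = @PROCID
        WHERE  OrderID IN ( SELECT TOP ( 20 ) OrderID
                            FROM   OrderTable WITH ( UPDLOCK, READPAST, ROWLOCK )
                            WHERE  ProcessorID = 0 )

        COMMIT TRAN

    The final SELECT that returns the claimed rows stays as it is.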

  • Data sync solution?

    - by user321088
    For security reasons I'm in an environment where third-party apps can't access my DB. For this reason I need some service/tool/script (I don't know what yet; I'm open to the best option and still reading to see what I'm going to do) which enables me to generate, on a regular basis (daily, weekly, monthly), a CSV file with all new/modified records for a certain application. I should be able to automate this process and also export a new file at any time. So it should keep track, for each application, of which records it still needs. Each application will need the data in some format (csv/xls/sql), and some fields will be needed by some applications and not by others. It should be fairly flexible. What is the best option for me? Creating some custom tables for each application, and extracting the modified data based on those?
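
    If the custom-tables route is taken, one minimal sketch of the bookkeeping (every name here is hypothetical, SQL Server syntax shown): stamp the source rows with a last-modified timestamp, keep a per-application watermark, export only the rows changed since that watermark, and advance it after a successful export.

        CREATE TABLE ExportWatermark (
            ApplicationName varchar(100) NOT NULL PRIMARY KEY,
            LastExportedAt  datetime     NOT NULL
        );

        -- rows changed since the last export for 'AppA'
        SELECT r.*
        FROM   Records r
        JOIN   ExportWatermark w ON w.ApplicationName = 'AppA'
        WHERE  r.ModifiedAt > w.LastExportedAt;

        -- after a successful export, move the watermark forward
        UPDATE ExportWatermark
        SET    LastExportedAt = GETDATE()
        WHERE  ApplicationName = 'AppA';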

  • Insert string between two markers

    - by user275074
    I have a requirement to insert a string between two markers. Initially I get a string (from a file stored on the server) between #DATA# and #END# using:

        function getStringBetweenStrings($string, $start, $end) {
            $startsAt = strpos($string, $start) + strlen($start);
            $endsAt = strpos($string, $end, $startsAt);
            return substr($string, $startsAt, $endsAt - $startsAt);
        }

    I do some processing and, based on the details of the string, query for some records. If there are records, I need to append them at the end of the string and then re-insert the string between #DATA# and #END# in the file on the server. How can I best achieve this? Is it possible to insert one record at a time into the file before #END#, or is it better to manipulate the string in memory and just write it back over the existing string in the file on the server?

  • Java Google App Engine inconsistent data loss after restarting dev server

    - by user259349
    Hello everyone, I am using Java GAE. So far I'm just scaffolding my data objects, and I'm seeing an interesting issue. The records that I am playing around with are updated properly as long as my dev server stays up. The second my dev server gets restarted, I lose all of my changes. That wouldn't be alarming if I lost all of my records, but there was a point in time where my data persisted through a server restart. I'm worried that I would lose production data if I launched without fixing this potential bug. Any idea where I should look?

  • Search and highlight search text in items in a ListView in WPF

    - by kiran-k
    Hi, I am using MVVM to show database records in a grid view (a ListView). I have a textbox where the user can enter the text to be searched for in the results listed in the grid. I have tried many ways to highlight the search text in the displayed records (not the entire row, only the matching text in the record), but I am unable to get the highlighting to work. I am able to find the starting and ending indexes. I tried to create rectangles over the TextBlock using those indexes, but could not get the text to highlight. Does anyone have a solution for this? Thanks in advance.
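
    One approach that tends to be simpler than overlaying rectangles, sketched here outside of any particular MVVM wiring (the helper class and method names are made up for illustration): rebuild the TextBlock's inlines so the matching substring becomes its own Run with a highlight brush.

        using System;
        using System.Windows.Controls;
        using System.Windows.Documents;
        using System.Windows.Media;

        public static class HighlightHelper
        {
            // Splits `text` around the first match of `searchText` and rebuilds the
            // TextBlock with a highlighted Run for the matching part.
            public static void Highlight(TextBlock block, string text, string searchText)
            {
                block.Inlines.Clear();

                int index = string.IsNullOrEmpty(searchText)
                    ? -1
                    : text.IndexOf(searchText, StringComparison.OrdinalIgnoreCase);

                if (index < 0)
                {
                    block.Inlines.Add(new Run(text));
                    return;
                }

                block.Inlines.Add(new Run(text.Substring(0, index)));
                block.Inlines.Add(new Run(text.Substring(index, searchText.Length))
                {
                    Background = Brushes.Yellow
                });
                block.Inlines.Add(new Run(text.Substring(index + searchText.Length)));
            }
        }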

  • Rails "Load more..." instead of pagination.

    - by Joseph Silvashy
    I have a list of elements, and I've been using will_paginate up until now, but I'd like to have something like "load more..." at the bottom of the list. Is there an easy way to accomplish this using will_paginate, or do I need to resort to some other method here? From what I know this is a better route anyhow, because then I don't need a SQL count of the records. And it really doesn't matter if there are like 9,847 pages; nobody would need the records beyond the first couple of pages anyhow.

  • Converting "is null" into a linq to sql statement

    - by Darryl Braaten
    I am having trouble replicating the following SQL as a LINQ statement:

        select TableA.*
        from TableA
        left outer join TableAinTableB on TableA.Id = TableAId
        where TableBId is null

    The following returns no rows from TableA:

        from TableA in db.TableA
        join AinB in db.TableAinTableB on TableA.Id equals TableAId
        where AinB.TableBId == null
        select TableA

    I also tried the following, and a few other things that didn't work:

        from TableA in db.TableA
        join AinB in db.TableAinTableB on TableA.Id equals TableAId
        where AinB == null
        select TableA

    TableAinTableB is a many-to-many table. The query I want should pull all the records from TableA that have no records in the middle table. My SQL does what I want, but I have no idea how to convert it to LINQ to SQL. I ended up working around it by just doing db.ExecuteQuery("working sql"), but I would like to know if this query is possible in LINQ and how to write it, or find a pointer to a document that covers this scenario. My searching did not uncover anything I found useful.
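
    For reference, a sketch of two equivalent LINQ to SQL translations (table and column names taken from the question; db is the DataContext). An inner join can never surface the missing rows, so either use a group join with DefaultIfEmpty() for a real left outer join, or express it as a NOT EXISTS-style predicate:

        // 1. left outer join, then keep the rows with no match
        var missing =
            from a in db.TableA
            join ab in db.TableAinTableB on a.Id equals ab.TableAId into joined
            from ab in joined.DefaultIfEmpty()
            where ab == null
            select a;

        // 2. the same result as a NOT EXISTS-style query
        var missing2 = db.TableA
            .Where(a => !db.TableAinTableB.Any(ab => ab.TableAId == a.Id));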
