Search Results

Search found 8161 results on 327 pages for 'django queries'.

Page 175/327 | < Previous Page | 171 172 173 174 175 176 177 178 179 180 181 182  | Next Page >

  • fql.multiquery new sdk

    - by Ronald Burris
    I cannot figure out what is wrong with this fql.multiquery, and I cannot seem to find any examples of fql.multiquery with the new SDK. Ultimately I want to get the page name and page id(s) of the pages that the visiting user both administers and is a fan of.

        $queries = '{ "page_admin_ids" : "SELECT page_id FROM page_admin WHERE uid = ' . $afid . ' LIMIT 5",
                      "page_fan_ids" : "SELECT page_id FROM page_fan WHERE page_id IN (SELECT page_id FROM #page_admin_ids)",
                      "page_name_and_id" : "SELECT name, page_id FROM page WHERE page_id IN (SELECT page_id FROM #page_fan_ids)" }';
        $attachment = array("method" => "fql.multiquery", "query" => $queries, 'access_token' => $access_token);
        $ret_code = $facebook->api($attachment);
        print_r($ret_code);
        die();

    Read the article

  • Best practices for combining Lucene.NET and a relational database?

    - by FlySwat
    I'm working on a project where I will have a LOT of data, and it will be searchable by several forms that are very efficiently expressed as SQL queries, but it also needs to be searched via natural language processing. My plan is to build an index using Lucene for this form of search. My question: if I do this and perform a search, Lucene will return the IDs of the matching documents in the index, and I then have to look up these entities in the relational database. This could be done in two ways (that I can think of so far): N queries (horrible), or passing all the IDs to a stored procedure at once (perhaps as a comma-delimited parameter), which has the downside of being limited by the max parameter size and the slow performance of a UDF splitting the string into a temporary table. I'm almost tempted to mirror everything into Lucene's index, so that I can periodically regenerate the index from the backing store but only need to access the index from the frontend. Advice?
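
    A middle ground between those two options is to batch the Lucene hits into fixed-size chunks and issue parameterized IN queries, avoiding both the per-ID round trips and the UDF string split. A minimal sketch under assumed names (a Documents table with Id/Title/Body columns):

        using System.Collections.Generic;
        using System.Data.SqlClient;
        using System.Linq;

        class Document { public int Id; public string Title; public string Body; }

        static List<Document> FetchByIds(SqlConnection conn, IList<int> ids)
        {
            const int batchSize = 500; // stay well under SQL Server's 2100-parameter cap
            var results = new List<Document>();
            for (int offset = 0; offset < ids.Count; offset += batchSize)
            {
                var batch = ids.Skip(offset).Take(batchSize).ToList();
                // Build "@p0, @p1, ..." so every value is a real parameter, not concatenated SQL.
                var names = string.Join(", ", batch.Select((_, i) => "@p" + i).ToArray());
                using (var cmd = new SqlCommand(
                    "SELECT Id, Title, Body FROM Documents WHERE Id IN (" + names + ")", conn))
                {
                    for (int i = 0; i < batch.Count; i++)
                        cmd.Parameters.AddWithValue("@p" + i, batch[i]);
                    using (var reader = cmd.ExecuteReader())
                        while (reader.Read())
                            results.Add(new Document { Id = reader.GetInt32(0),
                                Title = reader.GetString(1), Body = reader.GetString(2) });
                }
            }
            return results;
        }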

    Read the article

  • The subscription model behind CSS selectors?

    - by Martin Kristiansen
    With CSS selectors, a query string like body > h1.span subscribes to a specific type of node in the tree. Does anyone know how this is done? How does the browser select the result set for a selector, and is there a trick to making it efficient? I imagine there is some sort of hierarchical type-tree for the entire structure, to which the nodes subscribe and which is used when doing the selector queries, but this is only a guess. Does anyone know the real answer? Or, even more interesting: what would be the best way to do dynamic lookups on a tree based on jQuery/CSS search queries?
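
    For what it's worth, the common engine strategy is the reverse of a subscription model: selectors are matched right-to-left, with hash indexes from id/class/tag name to nodes supplying a small candidate set for the rightmost simple selector, and each candidate then walking its ancestors to verify the rest. A toy sketch of that idea over a generic node tree (all names illustrative):

        using System.Collections.Generic;
        using System.Linq;

        class Node
        {
            public string Tag;
            public Node Parent;
            public List<Node> Children = new List<Node>();
        }

        static class SelectorIndex
        {
            // Index every node by tag name once; update as the tree mutates.
            public static ILookup<string, Node> Build(IEnumerable<Node> allNodes)
            {
                return allNodes.ToLookup(n => n.Tag);
            }

            // Evaluate a "parent > child" selector: start from the small set of
            // candidate children, then verify the parent condition upward.
            public static IEnumerable<Node> MatchChildSelector(
                ILookup<string, Node> byTag, string parentTag, string childTag)
            {
                return byTag[childTag].Where(n => n.Parent != null && n.Parent.Tag == parentTag);
            }
        }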

    Read the article

  • Help to translate SQL query to Relational Algebra

    - by Mestika
    Hi everyone, I'm having some difficulty translating some queries into relational algebra. I have a great book about database design with a chapter on relational algebra, but I still seem to have trouble creating the right expressions. The queries I have the most difficulty with are these:

        SELECT COUNT( cs.student_id ) AS counter FROM course c, course_student cs WHERE c.id = cs.course_id AND c.course_name = 'Introduction to Database Design'

        SELECT COUNT( cs.student_id ) FROM Course c INNER JOIN course_student cs ON c.id = cs.course_id WHERE c.course_name = 'Introduction to Database Design'

    and

        SELECT COUNT( * ) FROM student JOIN grade ON student.f_name = "Andreas" AND student.l_name = "Pedersen" AND student.id = grade.student_id

    I know the notation can be hard to paste into an HTML forum, so feel free to use common names or spelled-out Greek letters. Thanks in advance, Mestika
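
    For reference, one possible translation of the first two (equivalent) queries, using the extended relational algebra where γ denotes grouping/aggregation; classical relational algebra has no COUNT, so check which aggregation notation your book uses:

        \gamma_{\,\mathrm{COUNT}(cs.student\_id)}\Big( \sigma_{\,course\_name = \text{'Introduction to Database Design'}} \big( course \bowtie_{\,course.id = cs.course\_id} course\_student \big) \Big)

    and a matching shape for the third query:

        \gamma_{\,\mathrm{COUNT}(*)}\Big( \sigma_{\,f\_name = \text{'Andreas'} \,\wedge\, l\_name = \text{'Pedersen'}}(student) \bowtie_{\,student.id = grade.student\_id} grade \Big)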

    Read the article

  • Problem using SQLDMO/Vb6 against SQL Server 2008

    - by E.J. Brennan
    I have a client that uses SQLDMO for a portion of a custom application that was written against SQL Server 2000, and they recently upgraded to SQL Server 2008. The majority of the app still runs fine (it doesn't use SQLDMO), but the admin functions, which rely on SQLDMO, stopped working. I installed the SQL Server 2005 backward compatibility pack, and now SQLDMO partially works: I can run "select"-type queries, but any "update" queries fail with the error message: "To connect to this server you must use SQL Server Management Studio or SQL Server Management Objects (SMO)." Any thoughts? Should the backward compatibility pack give me ALL the functionality back, or is this a known issue? BTW: I realize SQLDMO has been deprecated and will go away in the next release; nonetheless, I need to do what I can to solve the problem at hand.

    Read the article

  • Convert a MySQL database from latin1 to UTF-8

    - by Matthieu
    I am converting a website from ISO-8859-1 to UTF-8, so I need to convert the MySQL database too. On the internet I have read about various solutions, and I don't know which one to choose. Do I really need to convert my varchar columns to binary, then to utf8, like this:

        ALTER TABLE t MODIFY col BINARY(150);
        ALTER TABLE t MODIFY col CHAR(150) CHARACTER SET utf8;

    Doing that for each column, of each table, of each database takes a long time. I have 10 databases of about 20 tables each, each with around 2-3 varchar columns (2 queries per column), which gives me around 1000 queries to write! How can I avoid this?
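
    Rather than writing the ~1000 statements by hand, they can be generated from information_schema. A sketch, assuming the MySql.Data connector and credentials with metadata access; it only prints the statements (mirroring the BINARY/CHAR two-step above) so they can be reviewed before being run:

        using System;
        using MySql.Data.MySqlClient;

        class GenerateUtf8Alters
        {
            static void Main()
            {
                using (var conn = new MySqlConnection("server=localhost;uid=root;pwd=...;"))
                {
                    conn.Open();
                    var cmd = new MySqlCommand(
                        @"SELECT table_schema, table_name, column_name, character_maximum_length
                          FROM information_schema.columns
                          WHERE data_type = 'varchar'
                            AND table_schema NOT IN ('mysql', 'information_schema')", conn);
                    using (var r = cmd.ExecuteReader())
                    {
                        while (r.Read())
                        {
                            string db = r.GetString(0), tbl = r.GetString(1), col = r.GetString(2);
                            long len = Convert.ToInt64(r["character_maximum_length"]);
                            // Same two-step conversion as above, once per varchar column.
                            Console.WriteLine("ALTER TABLE `{0}`.`{1}` MODIFY `{2}` BINARY({3});", db, tbl, col, len);
                            Console.WriteLine("ALTER TABLE `{0}`.`{1}` MODIFY `{2}` CHAR({3}) CHARACTER SET utf8;", db, tbl, col, len);
                        }
                    }
                }
            }
        }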

    Read the article

  • How to eager load sibling data using LINQ to SQL?

    - by Scott
    The goal is to issue the fewest queries to SQL Server using LINQ to SQL, without using anonymous types. The return type for the method will need to be IList<Child1>. The relationships are as follows: Parent -> Child1 -> Grandchild1, and Parent -> Child2. Parent -> Child1 is a one-to-many relationship; Child1 -> Grandchild1 is a one-to-n relationship (where n is zero to infinity); Parent -> Child2 is a one-to-n relationship (where n is zero to infinity). I am able to eager load the Parent, Child1 and Grandchild1 data, resulting in one query to SQL Server. This query with load options eager loads all of the data except the sibling data (Child2):

        DataLoadOptions loadOptions = new DataLoadOptions();
        loadOptions.LoadWith<Child1>(o => o.GrandChild1List);
        loadOptions.LoadWith<Child1>(o => o.Parent);
        dataContext.LoadOptions = loadOptions;
        IQueryable<Child1> children = from child in dataContext.Child1 select child;

    I need to load the sibling data as well. One approach I have tried is splitting the query into two LINQ to SQL queries and merging the result sets together (not pretty); however, upon accessing the sibling data it is lazy loaded anyway. Adding the sibling load option will issue a query to SQL Server for each Grandchild1 and Child2 record (which is exactly what I am trying to avoid):

        DataLoadOptions loadOptions = new DataLoadOptions();
        loadOptions.LoadWith<Child1>(o => o.GrandChild1List);
        loadOptions.LoadWith<Child1>(o => o.Parent);
        loadOptions.LoadWith<Parent>(o => o.Child2List);
        dataContext.LoadOptions = loadOptions;
        IQueryable<Child1> children = from child in dataContext.Child1 select child;

        exec sp_executesql N'SELECT * FROM [dbo].[Child2] AS [t0] WHERE [t0].[ForeignKeyToParent] = @p0',N'@p0 int',@p0=1
        exec sp_executesql N'SELECT * FROM [dbo].[Child2] AS [t0] WHERE [t0].[ForeignKeyToParent] = @p0',N'@p0 int',@p0=2
        exec sp_executesql N'SELECT * FROM [dbo].[Child2] AS [t0] WHERE [t0].[ForeignKeyToParent] = @p0',N'@p0 int',@p0=3
        exec sp_executesql N'SELECT * FROM [dbo].[Child2] AS [t0] WHERE [t0].[ForeignKeyToParent] = @p0',N'@p0 int',@p0=4

    I've also written LINQ to SQL queries to join in all of the data, in the hope that it would eager load it; however, when the LINQ to SQL EntitySet of Child2 or Grandchild1 is accessed, the data is lazy loaded anyway. The reason for returning IList<Child1> is to hydrate business objects. My thoughts are that I am either approaching this problem the wrong way, have the option of calling a stored procedure, or my organization should not be using LINQ to SQL as an ORM. Any help is greatly appreciated. Thank you, -Scott
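
    Short of a stored procedure, one compromise worth sketching (this is manual correlation, not a LINQ to SQL eager-load feature): keep the first query for Child1/Grandchild1/Parent, then fetch all Child2 rows for the loaded parents in a single second round trip and correlate them in memory while hydrating the business objects. Assuming the Parent key property is Id:

        // Query 1: Child1 + Parent + Grandchild1 via the LoadWith options above.
        List<Child1> loaded = children.ToList();

        // Query 2: every sibling for those parents, in one round trip
        // (Contains translates to a single IN clause).
        var parentIds = loaded.Select(c => c.Parent.Id).Distinct().ToList();
        var siblingsByParent = dataContext.Child2
            .Where(s => parentIds.Contains(s.ForeignKeyToParent))
            .ToList()
            .ToLookup(s => s.ForeignKeyToParent);

        // Correlate in memory; never touch child.Parent.Child2List, which would lazy load.
        foreach (var child in loaded)
        {
            var siblings = siblingsByParent[child.Parent.Id]; // empty sequence when none
            // ... hydrate the business object from child, child.GrandChild1List and siblings
        }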

    Read the article

  • Authentication on an OData service

    - by Toad
    I want to add some authentication to my OData service. Depending on the calling user I want to filter rows and/or remove columns. I read in Scott Hanselman's fine blog post on OData ( http://www.hanselman.com/blog/CreatingAnODataAPIForStackOverflowIncludingXMLAndJSONIn30Minutes.aspx ) that it is possible to intercept the incoming queries. If this works, I could add some extra filtering. How exactly would this intercepting and altering of queries work? I cannot find any examples of where and how to do this. (I'm using Entity Framework and WCF Data Services, just like Scott's blog example.)
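
    The hook in WCF Data Services is a query interceptor: a method on the DataService class marked with [QueryInterceptor("EntitySetName")] that returns a predicate, which the runtime composes into every query against that set. A minimal sketch, assuming a hypothetical Orders set with an OwnerName property; note that interceptors filter rows, so removing columns per user would have to be handled differently (e.g. by exposing a trimmed-down entity):

        using System;
        using System.Linq.Expressions;
        using System.Data.Services;
        using System.Web;

        public class MyService : DataService<MyEntities> // MyEntities: the EF context
        {
            // Composed into every query against the Orders entity set.
            [QueryInterceptor("Orders")]
            public Expression<Func<Order, bool>> OnQueryOrders()
            {
                string user = HttpContext.Current.User.Identity.Name;
                return o => o.OwnerName == user;
            }
        }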

    Read the article

  • Linq Query Help Needed

    - by Randy Minder
    Say I have the following LINQ queries:

        var source = from workflow in sourceWorkflowList
                     select new { SubID = workflow.SubID, ReadTime = workflow.ReadTime, ProcessID = workflow.ProcessID, LineID = workflow.LineID };
        var target = from workflow in targetWorkflowList
                     select new { SubID = workflow.SubID, ReadTime = workflow.ReadTime, ProcessID = workflow.ProcessID, LineID = workflow.LineID };
        var difference = source.Except(target);

    sourceWorkflowList and targetWorkflowList have exactly the same column definitions, but they both contain more columns of data than are shown in the queries above; those are just the columns needed for this particular issue. difference contains all rows in sourceWorkflowList that are not contained in targetWorkflowList. Now what I would like to do is remove all rows from sourceWorkflowList that do not exist in difference. Could someone show me a query that would do this? Thanks very much - Randy
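
    A sketch of one way to do it, assuming sourceWorkflowList is a List<Workflow>: since anonymous types compare by value, the difference rows themselves can serve as a key set, and RemoveAll drops everything whose key projection is absent:

        // Helper on the same class: infers the anonymous key type, which cannot be named.
        static HashSet<T> ToSet<T>(IEnumerable<T> items) { return new HashSet<T>(items); }

        // Same property names, types and order as in `source`, so the anonymous
        // types are identical and compare by value.
        var diffKeys = ToSet(difference);
        sourceWorkflowList.RemoveAll(w =>
            !diffKeys.Contains(new { w.SubID, w.ReadTime, w.ProcessID, w.LineID }));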

    Read the article

  • The advantages and disadvantages of using an ORM

    - by JHarley1
    Good morning, I would like to discuss the advantages and disadvantages of using an ORM (such as the ADO.NET Entity Framework).

    Advantages:
    - Speeds up development: eliminates the need for repetitive SQL code.
    - Reduces development time.
    - Reduces development costs.
    - Overcomes vendor-specific SQL differences: the ORM knows how to write vendor-specific SQL so you don't have to.

    Disadvantages:
    - Loss of developer productivity while they learn to program with the ORM.
    - Developers lose understanding of what the code is actually doing; the developer is more in control using SQL.
    - ORMs have a tendency to be slow.
    - ORMs fail to compete with plain SQL for complex queries.

    In summary, I believe that the advantages of using an ORM (mainly the reduced time taken to perform repetitive tasks) are far outweighed by its disadvantages, e.g. the difficulty of getting to grips with it. Can people point out where I am going wrong and suggest any further advantages/disadvantages? Many thanks, J

    Read the article

  • Is there an extensible SQL like query language that is safe for exposing via a public API?

    - by Lokkju
    I want to expose some spatial (and a few non-spatial) datasets via a public API. The backend store will be either PostgreSQL/PostGIS, SQLite/SpatiaLite, or CouchDB/GeoCouch. My goal is to find some, preferably standard, way to allow people to make complex spatial queries against the data. I would like it to be a simple GET-based request. The idea is to allow safe SQL-type queries without allowing unsafe ones. I would rather modify something off the shelf than build the entire thing myself. I specifically want to support requesting specific fields from a table, joining results, and spatial functions that are already implemented by the underlying datastore. Ideas, anyone?

    Read the article

  • VBA - Create ADODB.Recordset from the contents of a spreadsheet

    - by robault
    Hello, I am working on an Excel application that queries a SQL database. The queries can take a long time to run (20-40 min). If I've mis-coded something, it can take a long time to error out or reach a breakpoint. I can save the results to a sheet fine; it's when I am working with the recordsets that things can blow up. Is there a way to load the data into an ADODB.Recordset while I'm debugging, to skip querying the database (after the first time)? Would I use something like this? http://stackoverflow.com/questions/2086234/query-excel-worksheet-in-ms-access-vba-using-adodb-recordset

    Read the article

  • Suggest Cassandra data model for an existing schema

    - by Andriy Bohdan
    Hello guys! I hope there's someone who can help me suggest a suitable data model to be implemented using the NoSQL database Apache Cassandra. More than that, I need it to work under high load and with large amounts of data. Simplified, I have 3 types of objects:

    Product: key - string key; name - string; ... - some other fields
    Tag: key - string key; name - unique tag words
    ProductTag: product_key - foreign key referring to a product; tag_key - foreign key referring to a tag; rating - the rating of this tag for this product

    Each product may have 0 or many tags. A tag may be assigned to 1 or many products, so the relation between products and tags is many-to-many in terms of relational databases. The value of "rating" is updated "very" often. I need to run the following queries: select objects by key; select tags for a product, ordered by rating; select products by tag, ordered by rating; update a rating by product_key and tag_key. The most important thing is to make these queries really fast on large amounts of data, considering that the rating is constantly updated.

    Read the article

  • geo-indexing: efficiently calculating proximity based on latitude/longitude

    - by AnC
    My simple web app (WSGI, Python) supports text queries to find items in the database. Now I'd like to extend this to allow queries like "find all items within 1 mile of {lat,long}". Of course that's a complex job if efficiency is a concern, so I'm thinking of a dedicated external module that does indexing for geo-coordinates - sort of like Lucene does for text. I assume a generic component like this already exists, but I haven't been able to find anything so far. Any help would be greatly appreciated.
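
    For reference, the math such a module ends up implementing: the great-circle distance between two points (latitudes φ, longitudes λ, in radians, Earth radius R) is the haversine formula, and a cheap bounding-box prefilter lets an index discard most candidates before doing the trigonometry:

        d = 2R \arcsin \sqrt{ \sin^2\frac{\varphi_2 - \varphi_1}{2} + \cos\varphi_1 \cos\varphi_2 \sin^2\frac{\lambda_2 - \lambda_1}{2} }

    with candidates for radius r around (φ₀, λ₀) prefiltered by:

        |\varphi - \varphi_0| \le \frac{r}{R}, \qquad |\lambda - \lambda_0| \le \frac{r}{R\cos\varphi_0}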

    Read the article

  • Thumbnails from HTML pages created and used automatically in a web application

    - by Jesper Rønn-Jensen
    I am working on a Ruby on Rails app that visualizes product trees. The tree is built of nodes and everything is rendered in HTML/CSS3. Some of the products trigger several hundred SQL queries as the tree is built up (up to 800 queries for the biggest tree). I'd like to have a small thumbnail of each tree to present on an index page. Rendering each tree once again and modifying the CSS to make a tiny representation is one option, but I think it's probably easier to generate thumbnails, then crop, cache, and show these on the index page. Any ideas on how to do this? Any links/articles/blog posts that could help me?

    Read the article

  • Fast search in XML files in .NET (or How to index XML files)

    - by codymanix
    I have to implement a search feature which is able to quickly perform arbitrarily complex queries on XML data. If the user makes a query, all XML files must be searched to find possible matches. The users will have lots of XML files (a few 10,000 or more) which are typically a few kilobytes in size. All the XML files have almost the same structure. I have already benchmarked XPath; it is too slow for my needs. How can it be done most efficiently? Is it possible to create indexes for the contents of the XML files (preserving content semantics, not just plain full-text search)? Would it be useful to put the XML data into an (embedded) SQL database and do the queries with SQL? What other possibilities do I have?
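
    To make the "index preserving content semantics" idea concrete, here is a rough sketch: one streaming pass per file with XmlReader, building an in-memory map from (element path, text value) to the files containing that pair, so path-qualified lookups never reopen the XML. Persisting the map and handling attributes and updates are left out:

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Xml;

        class XmlPathIndex
        {
            // "a/b/c=value" -> files containing that element path with that text.
            readonly Dictionary<string, HashSet<string>> index =
                new Dictionary<string, HashSet<string>>();

            public void AddFile(string file)
            {
                var path = new Stack<string>();
                using (var reader = XmlReader.Create(file))
                {
                    while (reader.Read())
                    {
                        if (reader.NodeType == XmlNodeType.Element && !reader.IsEmptyElement)
                            path.Push(reader.Name);
                        else if (reader.NodeType == XmlNodeType.EndElement)
                            path.Pop();
                        else if (reader.NodeType == XmlNodeType.Text)
                        {
                            // The key preserves where the value occurred, not just the raw text.
                            string key = string.Join("/", path.Reverse().ToArray())
                                         + "=" + reader.Value.Trim();
                            HashSet<string> files;
                            if (!index.TryGetValue(key, out files))
                                index[key] = files = new HashSet<string>();
                            files.Add(file);
                        }
                    }
                }
            }

            public IEnumerable<string> Lookup(string elementPath, string value)
            {
                HashSet<string> files;
                return index.TryGetValue(elementPath + "=" + value, out files)
                    ? files
                    : Enumerable.Empty<string>();
            }
        }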

    Read the article

  • Using non-primitive types in a ServiceOperation for WCF Data Services (3.5 SP1)

    - by Nix
    Is there any way at all to create a "mock" entity type for use in a WCF service operation? We have some queries that we need to optimize by exposing them as a ServiceOperation. The problem is that doing so results in a very long list of primitive types, e.g.:

        SomeoneHelpMe(int time, string name, string address, string i, string purple, string foo, int stillGoing, int tooMany, etc...)

    And we really need to reduce this to:

        SomeoneHelpedMe(CustomEntityNotMappedToAnything e)

    This would also help us when it comes time to write some complex queries, since there is a 3-parameter limitation... I saw that this will be possible in 4.0 using "complex types", but I am still in the 3.5 SP1 world. Let me know if anyone needs more information.

    Read the article

  • Linq to SQL Repository ~theory~ - Generic but now uses Linq to Objects?

    - by Matt Tolliday
    The project I am currently working on uses LINQ to SQL as its ORM data access technology. It's an MVC3 web app. The problem I faced was primarily due to the inability to mock (for testing) the DataContext, which gets autogenerated by the DBML designer. So to solve this issue (after much reading) I refactored the repository system which was in place - a single repository with separate and duplicated access methods for each table, which ended up with something like 300 methods, only 10 of which were unique - into a single repository with generic methods taking the table and returning more generic types to the upper reaches of the application. My question revolves around the design I've used to get this far and the differences I'm noticing in the structure of the app.

    1) I refactored the code from the dark ages, which used classic LINQ to SQL queries:

        public Billing GetBilling(int id)
        {
            var result = (
                from bil in _bicDc.Billings
                where bil.BillingId == id
                select bil).SingleOrDefault();
            return (result);
        }

    It now looks like:

        public T GetRecordWhere<T>(Expression<Func<T, bool>> predicate) where T : class
        {
            T result;
            try
            {
                result = _dataContext.GetTable<T>().Where(predicate).SingleOrDefault();
            }
            catch (Exception)
            {
                throw; // rethrow without resetting the stack trace ("throw ex" would)
            }
            return result;
        }

    and is used by the controller with a query along the lines of:

        _repository.GetRecordWhere<Billing>(x => x.BillingId == 1);

    which is fine, and precisely what I wanted to achieve. However, I'm also having to do the following to get precisely the result set I require in the controller class (the highest point of the app, in essence):

        viewModel.RecentRequests = _model.GetAllRecordsWhere<Billing>(x => x.BillingId == 1)
            .Where(x => x.BillingId == Convert.ToInt32(BillingType.Submitted))
            .OrderByDescending(x => x.DateCreated)
            .Take(5).ToList();

    This - as far as my understanding goes - is now using LINQ to Objects rather than the LINQ to SQL queries I was using previously. Is this okay practice? It feels wrong to me, but I don't know why; probably because the logic of the queries is in the very highest tier of the app rather than the lowest. I defer to you good people for advice. One of the issues I considered was bringing the entire table into memory, but I understand that with the IQueryable return type the where clause is taken to the database and evaluated there, thus returning only the result set I require... I may be wrong. And if you've made it this far, well done. Thank you, and any advice is very much appreciated!
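
    On the closing IQueryable point: whether that controller chain runs in SQL or in memory is decided entirely by the repository method's declared return type, not by where the code sits. A sketch of the two shapes, assuming the same generic repository (QueryRecordsWhere is an illustrative name):

        // IQueryable<T>: the controller's Where/OrderByDescending/Take compose into
        // the SQL statement, so only the final five rows cross the wire.
        public IQueryable<T> QueryRecordsWhere<T>(Expression<Func<T, bool>> predicate)
            where T : class
        {
            return _dataContext.GetTable<T>().Where(predicate);
        }

        // List<T>: the predicate still runs in SQL, but ToList materializes the rows,
        // and everything chained afterwards is LINQ to Objects in memory.
        public List<T> GetAllRecordsWhere<T>(Expression<Func<T, bool>> predicate)
            where T : class
        {
            return _dataContext.GetTable<T>().Where(predicate).ToList();
        }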

    Read the article

  • Help with a LINQ to SQL compiled query

    - by stackoverflowuser
    Hi, I am trying to use a compiled query for one of my LINQ to SQL queries. This query contains 5 or 6 joins. I was able to create the compiled query, but the issue I am facing is that my query needs to check whether a key is within a collection of keys passed as input, and compiled queries do not allow passing a collection (since a collection can have a varying number of items). For instance, the input to the function is a collection of keys, say:

        List<Guid> InputKeys;
        List<SomeClass> output = null;
        var compiledQuery = CompiledQuery.Compile<DataContext, IQueryable<SomeClass>>(
            (context) => from a in context.GetTable<A>()
                         where InputKeys.Contains(a.Key)
                         select a);
        using (var dataContext = new DataContext())
        {
            output = compiledQuery(dataContext).ToList();
        }
        return output;

    Is there any workaround or a better way to do the above?
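
    One known workaround, sketched below: since a compiled delegate needs a fixed parameter list, precompile a variant for a small fixed number of key slots and pad unused slots with a repeated key (batching, or falling back to an ordinary query, for larger inputs). CompiledQuery.Compile tops out at three parameters besides the DataContext, which bounds the slot count per variant:

        // Fixed-arity variant: three key slots; duplicated keys make the extra ORs harmless.
        static readonly Func<DataContext, Guid, Guid, Guid, IQueryable<SomeClass>> ByUpToThreeKeys =
            CompiledQuery.Compile((DataContext ctx, Guid k1, Guid k2, Guid k3) =>
                from a in ctx.GetTable<SomeClass>()
                where a.Key == k1 || a.Key == k2 || a.Key == k3
                select a);

        static List<SomeClass> Fetch(DataContext ctx, List<Guid> keys)
        {
            // Pad with the first key; batch calls to this method for more than three keys.
            Guid k1 = keys[0];
            Guid k2 = keys.Count > 1 ? keys[1] : k1;
            Guid k3 = keys.Count > 2 ? keys[2] : k1;
            return ByUpToThreeKeys(ctx, k1, k2, k3).ToList();
        }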

    Read the article

  • Calculations in a table of data

    - by Christian W
    I have a table of data with survey results, and I want to do certain calculations on this data. The data structure is somewhat like this:

        ____________________________________________________________________________________
        | group |individual |         key          |         key          |         key          |
        |       |           |subkey|subkey|subkey  |subkey|subkey|subkey  |subkey|subkey|subkey  |
        |       |           |q|q|q |q |q  |q|q|q   |q|q|q |q |q  |q|q|q   |q|q|q |q |q  |q|q|q   |
        |-------|-----------|-|-|--|--|---|-|-|--  |-|-|--|--|---|-|-|--  |-|-|--|--|---|-|-|--  |
        | 1     | 0001      |1|7|5 |1 |3  |1|4|1   |1|7|5 |1 |3  |1|4|1   |1|7|5 |1 |3  |1|4|1   |
        | 1     | 0002      |1|7|5 |1 |3  |1|4|1   |1|7|5 |1 |3  |1|4|1   |1|7|5 |1 |3  |1|4|1   |
        | 1     | 0003      |1|7|5 |1 |3  |1|4|1   |1|7|5 |1 |3  |1|4|1   |1|7|5 |1 |3  |1|4|1   |
        | 2     | 0004      |1|7|5 |1 |3  |1|4|1   |1|7|5 |1 |3  |1|4|1   |1|7|5 |1 |3  |1|4|1   |
        | 2     | 0005      |1|7|5 |1 |3  |1|4|1   |1|7|5 |1 |3  |1|4|1   |1|7|5 |1 |3  |1|4|1   |
        | 3     | 0006      |1|7|5 |1 |3  |1|4|1   |1|7|5 |1 |3  |1|4|1   |1|7|5 |1 |3  |1|4|1   |
        | 4     | 0007      |1|7|5 |1 |3  |1|4|1   |1|7|5 |1 |3  |1|4|1   |1|7|5 |1 |3  |1|4|1   |
        ------------------------------------------------------------------------------------

    Excuse my poor ASCII skills... So, every individual belongs to a group and has answered some questions. These questions are always grouped in keys and subkeys. Is there any simple method to calculate averages, deviations and the like based on the groupings? Something like:

        public float getAverage(int key, int individual);
        float avg = getAverage(5,7);

    I think what I'm asking is: what would be the best way to structure the data in C# to make it as easy as possible to work with? I have started making classes for every entity, but I got confused somewhere and something stopped working. So before I continue along this path, I was wondering if there are any other, better ways of doing this? (Every individual can also have describing variables, like age group, but that's not important for the base functionality.) Our current solution does all calculations inline in the queries when requesting the data from the database. This works, but it's slow, and the number of queries equals questions * individuals + keys * individuals, which can add up to a lot of individual queries. Any suggestions?
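
    A sketch of the flat shape that tends to make these calculations trivial: one record per answered question, with LINQ doing the grouping and aggregation in memory instead of one query per question/individual (all names illustrative):

        using System;
        using System.Collections.Generic;
        using System.Linq;

        class Answer
        {
            public int Group, Individual, Key, Subkey, Question;
            public float Value;
        }

        class Survey
        {
            readonly List<Answer> answers = new List<Answer>();

            // Mirrors getAverage(int key, int individual) above.
            public float GetAverage(int key, int individual)
            {
                return answers
                    .Where(a => a.Key == key && a.Individual == individual)
                    .Average(a => a.Value);
            }

            // Population standard deviation per group for one key.
            public Dictionary<int, double> GetDeviationByGroup(int key)
            {
                return answers
                    .Where(a => a.Key == key)
                    .GroupBy(a => a.Group)
                    .ToDictionary(
                        g => g.Key,
                        g =>
                        {
                            double mean = g.Average(a => a.Value);
                            return Math.Sqrt(g.Average(a => Math.Pow(a.Value - mean, 2)));
                        });
            }
        }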

    Read the article

  • Web service performance testing plan, Microsoft .NET WS, SQL

    - by zxed
    Trying to answer a question by coming up with a testing plan. It has to do with using a website and/or web service that queries a SQL Server to get data and display it to the user.

    - The solution must be able to handle an estimated 2000 users, approximately 700 concurrent users, and 10,000+ website hits a month. Database calls should handle 100,000 queries via the website/web service a month. The system is used at multiple times during a 24-hour period; however, networking and bandwidth traffic decrease after 5 pm.
    - Two Windows 2003 servers are used, one for the web, another for SQL. Both are located in the same room. User access is varied and users can be far or near (it's a centralized system); users access via the web.

    Read the article

  • Custom Lucene Sharding with Hibernate Search

    - by Timo Westkämper
    Does anyone have experience with custom Lucene sharding/partitioning using Hibernate Search? The Hibernate Search documentation says the following about Lucene sharding: "In some cases, it is necessary to split (shard) the indexing data of a given entity type into several Lucene indexes. This solution is not recommended unless there is a pressing need because by default, searches will be slower as all shards have to be opened for a single search." In other words, don't do it until you have problems :) Has anyone implemented sharding for Hibernate Search in such a way that queries can also be targeted at a single shard? In our case we have Lucene queries that should target only one shard per query.

    Read the article
