Search Results

Search found 17240 results on 690 pages for 'query'.

Page 620/690 | < Previous Page | 616 617 618 619 620 621 622 623 624 625 626 627  | Next Page >

  • Setting Connection Parameters via ADO for SQL Server

    - by taspeotis
    Is it possible to set a connection parameter on a connection to SQL Server and have that variable persist throughout the life of the connection? The parameter must be usable by subsequent queries. We have some old Access reports that use a handful of VBScript functions in the SQL queries (let's call them GetStartDate and GetEndDate) that return global variables. Our application would set these before invoking the query, and the queries can then return information between the date ranges specified in our application. We are looking at changing to a ReportViewer control running in local mode, but I don't see any convenient way to use these custom functions in straight T-SQL. I have two concept solutions (not tested yet), but I would like to know if there is a better way. Below is some pseudo code. Option 1: set all variables before running Recordset.OpenForward: Connection->Execute("SET @GetStartDate = ..."); Connection->Execute("SET @GetEndDate = ..."); // Repeat for all parameters. Will these variables persist across later calls to Recordset->OpenForward? Can anything reset the variables aside from another SET/SELECT @variable statement? Option 2: create an ADOCommand "factory" that automatically adds parameters to each ADOCommand object I will use to execute SQL: // Command has previously been created ADOParameter *Parameter1 = Command->CreateParameter("GetStartDate"); ADOParameter *Parameter2 = Command->CreateParameter("GetEndDate"); // Set values and attach etc... What I would like to know is whether there is something like: Connection->SetParameter("GetStartDate", "20090101"); Connection->SetParameter("GetEndDate", "20100101"); where these values persist for the lifetime of the connection and the SQL can use something like @GetStartDate to access them. This may be exactly solution #1, if the variables persist throughout the lifetime of the connection.
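    A hedged sketch of one SQL Server option that persists per connection without ADO parameters: ordinary @variables are batch-scoped and will not survive separate Execute calls, but CONTEXT_INFO is tied to the session, so it lives as long as the connection does. Packing both dates into one varbinary value, as below, is just an illustrative assumption:

        -- set once, right after opening the connection
        DECLARE @ctx varbinary(128);
        SET @ctx = CONVERT(varbinary(128), '20090101|20100101');
        SET CONTEXT_INFO @ctx;    -- persists for the life of this connection

        -- read back inside any later query on the same connection
        SELECT CONVERT(varchar(128), CONTEXT_INFO()) AS SessionDates;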

    Read the article

  • What algorithm should I follow to retrieve data in the prescribed format?

    - by Prateek
    I have to retrieve data from a database whose tables consist of fields named "ttc", "rm", "atc" and "lta". These values are stored daily at 15-minute intervals, like: From_time to_time ttc rm atc lta; 00:00 00:15 45 10 35 25; 00:15 00:30 35 10 25 25; and so on. The values are stored for every day of every month, and I want them presented in the prescribed format, so what algorithm should I follow? I am confused about how to do the comparisons for a format like the one mentioned below. The format is at this link: https://drive.google.com/a/itbhu.ac.in/file/d/0B_J0Ljq64i4Za2J1V0lvbDZ4eGc/edit?usp=sharing To be specific, my question is: I have to prepare a report from the retrieved data, which is stored in the database as explained above, but the report covers an entire month. There may be cases where, for two particular days, the value of "ttc" is the same for some time, and I want those intervals listed together (as shown in the format). The confusing part is that any of the values "ttc", "rm", "atc", "lta" can be the same for any particular interval. So what algorithm should I follow for such comparisons? If anything about the question is unclear, please ask. Thanks
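    Grouping consecutive intervals that share the same value is the classic "gaps and islands" problem; here is a hedged sketch in SQL (the table name readings is hypothetical, and the window-function syntax assumes the database supports ROW_NUMBER). The same pattern works for rm, atc and lta by changing the partitioning column:

        -- Consecutive rows with the same ttc fall into one "island": the two
        -- row numbers drift apart only when the ttc value changes.
        SELECT MIN(From_time) AS from_time, MAX(to_time) AS to_time, ttc
        FROM (
            SELECT From_time, to_time, ttc,
                   ROW_NUMBER() OVER (ORDER BY From_time)
                 - ROW_NUMBER() OVER (PARTITION BY ttc ORDER BY From_time) AS grp
            FROM readings
        ) AS islands
        GROUP BY ttc, grp
        ORDER BY from_time;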

    Read the article

  • Data Modeling of Entity with Attributes

    - by StackOverflowNewbie
    I'm storing some very basic information about "data sources" coming into my application. These data sources can be in the form of a document (e.g. PDF, etc.), audio (e.g. MP3, etc.) or video (e.g. AVI, etc.). Say, for example, I am only interested in the filename of the data source. Thus, I have the following table: DataSource Id (PK) Filename For each data source, I also need to store some of its attributes. An example for a PDF would be "number of pages." An example for audio would be "bit rate." An example for video would be "duration." Each DataSource will have different requirements for the attributes that need to be stored. So, I have modeled "data source attribute" this way: DataSourceAttribute Id (PK) DataSourceId (FK) Name Value Thus, I would have records like these: DataSource->Id = 1 DataSource->Filename = 'mydoc.pdf' DataSource->Id = 2 DataSource->Filename = 'mysong.mp3' DataSource->Id = 3 DataSource->Filename = 'myvideo.avi' DataSourceAttribute->Id = 1 DataSourceAttribute->DataSourceId = 1 DataSourceAttribute->Name = 'TotalPages' DataSourceAttribute->Value = '10' DataSourceAttribute->Id = 2 DataSourceAttribute->DataSourceId = 2 DataSourceAttribute->Name = 'BitRate' DataSourceAttribute->Value = '16' DataSourceAttribute->Id = 3 DataSourceAttribute->DataSourceId = 3 DataSourceAttribute->Name = 'Duration' DataSourceAttribute->Value = '1:32' My problem is that this doesn't seem to scale. For example, say I need to query for all the PDF documents along with their total number of pages: Filename, TotalPages 'mydoc.pdf', '10' 'myotherdoc.pdf', '23' ... The JOINs needed to produce the above result are just too costly. How should I address this problem?
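    For reference, a hedged sketch of the query the current model forces for that report, using the table names from the question; every additional attribute in the result means another join like this one (or a PIVOT over the attribute table), which is where the cost comes from:

        SELECT ds.Filename, attr.Value AS TotalPages
        FROM DataSource AS ds
        INNER JOIN DataSourceAttribute AS attr
            ON attr.DataSourceId = ds.Id
           AND attr.Name = 'TotalPages'
        WHERE ds.Filename LIKE '%.pdf';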

    Read the article

  • MS SQL Server high Resource Waits and Head Blocker

    - by MartinHN
    Hi, I have an MS SQL Server 2008 Standard installation running a database for a webshop. The current size of the database is 2.5 GB, running on Windows 2008 Standard, dual Intel Xeon X5355 @ 2.00 GHz, 4 GB RAM. When I open the Activity Monitor, I see that I have a Wait Time (ms/sec) of 5000 in the "Other" category. In the Processes list, for all connections from the webshop, the Head Blocker value is 1. I see every day that when I try to access the website, it can take 20-30 secs before it even starts to "work". I know that it is not network latency (I have a 301 redirect from the same server that is executed instantly). Once the first request has been served, it seems as if the server is no longer asleep, and every subsequent request is served instantly with the speed of light. The problem was worse two weeks ago, until I changed every query to include WITH (NOLOCK). But I still experience the problem, and the wait times in the Activity Monitor are about the same. The largest table (Images) has 32764 rows (448576 KB). Some tables exceed 300000 rows, though they're much smaller in size than the Images table. I only have the default clustered index on each primary key column. Any ideas?
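    A hedged first diagnostic step, assuming access to the dynamic management views: the aggregated wait statistics usually narrow that "Other" category down to a concrete wait type (I/O, memory, locking, ...) before any query tuning is attempted.

        -- Top waits since the last service restart (SQL Server 2008)
        SELECT TOP (10)
               wait_type, waiting_tasks_count, wait_time_ms,
               max_wait_time_ms, signal_wait_time_ms
        FROM sys.dm_os_wait_stats
        WHERE wait_type NOT IN ('SLEEP_TASK', 'LAZYWRITER_SLEEP',
                                'SQLTRACE_BUFFER_FLUSH', 'BROKER_TASK_STOP')
        ORDER BY wait_time_ms DESC;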

    Read the article

  • Inner join and outer join options in Entity Framework 4.0

    - by bigb
    I am using EF 4.0 and I need to implement a query with one inner join and with N outer joins. I started implementing this using different approaches but got into trouble at some point. Here are two examples of how I started doing this, using ObjectQuery<T> and LINQ to Entities. 1) Using ObjectQuery<T> I implement the flexible outer joins, but I don't know how to perform an inner join with the entity Rules in that case (by default Include("Rules") does an outer join, but I need an inner join by Id). public static IEnumerable<Race> GetRace(List<string> includes, DateTime date) { IRepository repository = new Repository(new BEntities()); ObjectQuery<Race> result = (ObjectQuery<Race>)repository.AsQueryable<Race>(); //perform outer joins with related entities if (includes != null) foreach (string include in includes) result = result.Include(include); //here I need an inner join instead of the default outer join result = result.Include("Rules"); return result.ToList(); } 2) Using LINQ to Entities I need the same kind of outer joins (something like in GetRace(), where I may pass a list of entities to include), and I also need to perform a correct inner join with the entity Rules. public static IEnumerable<Race> GetRace2(List<string> includes, DateTime date) { IRepository repository = new Repository(new BEntities()); IEnumerable<Race> result = from o in repository.AsQueryable<Race>() from b in o.RaceBetRules select new { o }); //I need here: // 1. to perform the same inner joins with related entities as with the ObjectQuery above //here I get a List<AnonymousType> which I can't cast to //IEnumerable<Race>; when I tried to cast like //(IEnumerable<Race>)result.ToList(); I got the error: //Unable to cast object of type //'System.Collections.Generic.List`1[<>f__AnonymousType0`1[BetsTipster.Entity.Tip.Types.Race]]' //to type //'System.Collections.Generic.IEnumerable`1[BetsTipster.Entity.Tip.Types.Race]'. return result.ToList(); } Maybe someone has some ideas about this.
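    A hedged sketch of one common workaround, assuming Rules is a collection navigation property on Race: keep Include() for the optional eager loads and express the inner join as a filter on the related entity, which LINQ to Entities translates into inner-join/EXISTS semantics so Races with no Rules are dropped.

        ObjectQuery<Race> result = (ObjectQuery<Race>)repository.AsQueryable<Race>();
        if (includes != null)
            foreach (string include in includes)
                result = result.Include(include);   // optional outer joins, as before
        result = result.Include("Rules");           // still eager-loads the rules

        // filtering on the related entity gives the inner-join behaviour
        return result.Where(r => r.Rules.Any()).ToList();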

    Read the article

  • Modify request querystring parameters to build a new link without resorting to string manipulation

    - by Andrew M
    Hi All, I want to dynamically populate a link with the URI of the current request, but set one specific query string parameter. All other querystring parameters (if there are any) should be left untouched, and I don't know in advance what they might be. E.g., imagine I want to build a link back to the current page, but with the querystring parameter "valueOfInterest" always set to "wibble" (I'm doing this from the code-behind of an aspx page, .NET 3.5 in C# FWIW). E.g., a request for either of these two: /somepage.aspx /somepage.aspx?valueOfInterest=sausages would become: /somepage.aspx?valueOfInterest=wibble And most importantly (perhaps) a request for: /somepage.aspx?boring=something /somepage.aspx?boring=something&valueOfInterest=sausages would preserve the boring params to become: /somepage.aspx?boring=something&valueOfInterest=wibble Caveats: I'd like to avoid string manipulation if there's something more elegant in ASP.NET that is more robust. However, if there isn't something more elegant, so be it. I've done (a little) homework: I found a blog post which suggested copying the request into a local HttpRequest object, but that still has a read-only collection for the querystring params. I've also had a look at using a URI object, but that doesn't seem to expose a modifiable querystring collection either.
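    A hedged sketch of the usual ASP.NET trick: HttpUtility.ParseQueryString returns a writable NameValueCollection (unlike Request.QueryString), so the existing parameters are preserved and only the one of interest is added or overwritten.

        // in the page's code-behind
        var query = HttpUtility.ParseQueryString(Request.Url.Query);
        query["valueOfInterest"] = "wibble";          // adds it, or replaces it
        string newLink = Request.Url.AbsolutePath + "?" + query.ToString();
        // ToString() on this collection re-encodes it as key=value&key=value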

    Read the article

  • Can you recommend a full-text search engine?

    - by Jen
    Can you recommend a full-text search engine? (Preferably open source) I have a database of many (though relatively short) HTML documents. I want users to be able to search this database by entering one or more search words in my C++ desktop application. Hence, I’m looking for a fast full-text search solution to integrate with my app. Ideally, it should: Skip common words, such as the, of, and, etc. Support stemming, i.e. search for run also finds documents containing runner, running and ran. Be able to update its index in the background as new documents are added to the database. Be able to provide search word suggestions (like Google Suggest) Have a well-documented API To illustrate, assume the database has just two documents: Document 1: This is a test of text search. Document 2: Testing is fun. The following words should be in the index: fun, search, test, testing, text. If the user types t in the search box, I want the application to be able to suggest test, testing and text (Ideally, the application should be able to query the search engine for the 10 most common search words starting with t). A search for testing should return both documents. Other points: I don't need multi-user support I don't need support for complex queries The database resides on the user's computer, so the indexing should be performed locally. Can you suggest a C or C++ based solution? (I’ve briefly reviewed CLucene and Xapian, but I’m not sure if either will address my needs, especially querying the search word indexes for the suggest feature).

    Read the article

  • Java many to many association map

    - by Raphael Jolivet
    Hi, I have two classes, ClassA and ClassB, and a "many to many" AssociationClass. I want to use a structure to hold the associations between A and B such that I can know, for each instance of A or B, which are their counterparts. I thought of using a HashMap with pair keys: HashMap<Pair<ClassA, ClassB>, AssociationClass> associations; This way, I can add and remove an association between two instances of ClassA and ClassB, and I can query the relation for two given instances. However, I miss the feature of getting all associations defined for a given instance of ClassA or ClassB. I could do it by brute force and loop over all keys of the map to search for associations involving a given instance, but this is inefficient and not elegant. Do you know of any data structure / free library that enables this? I don't want to reinvent the wheel. Thanks in advance for your help, Raphael NB: This is not a "database" question. These objects are pure POJOs used for live computation; I don't need persistence stuff.
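    A hedged sketch using Guava's Table, assuming a third-party library is acceptable: it is keyed by a row object and a column object, and its row/column views answer the "everything associated with this instance" query directly, without scanning all keys.

        import com.google.common.collect.HashBasedTable;
        import com.google.common.collect.Table;
        import java.util.Map;

        Table<ClassA, ClassB, AssociationClass> associations = HashBasedTable.create();
        associations.put(a1, b1, assoc);                      // add or replace a link
        AssociationClass forPair = associations.get(a1, b1);  // lookup for one pair
        Map<ClassB, AssociationClass> allOfA1 = associations.row(a1);    // all links of a1
        Map<ClassA, AssociationClass> allOfB1 = associations.column(b1); // all links of b1
        associations.remove(a1, b1);                          // remove one association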

    Read the article

  • Why does XPath.selectNodes(context) always use the whole document in JDOM

    - by Simeon
    Hi, I'm trying to run the same query on several different contexts, but I always get the same result. This is an example XML: <root> <p> <r> <t>text</t> </r> </p> <t>text2</t> </root> So this is what I'm doing: final XPath xpath = XPath.newInstance("//t"); List<Element> result = xpath.selectNodes(thisIsThePelement); // and I've debugged it, it really is the <p> element And I always get both <t> elements in the result list. I need just the <t> inside the <p> I'm passing to the XPath object. Any ideas would be of great help, thanks.
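    A hedged sketch of the usual explanation and fix: a path starting with // is absolute and is evaluated from the document root no matter which context node is passed to selectNodes, whereas a path starting with . is relative to that node.

        // only the <t> elements beneath the element passed as context
        final XPath xpath = XPath.newInstance(".//t");
        List<Element> result = xpath.selectNodes(thisIsThePelement);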

    Read the article

  • Parameterising SQL in SSIS

    - by Anonymouslemming
    Hi all, I'm trying to parameterise some queries in SSIS. After some reading, it sounds like my best option is to create one variable that contains my base SQL, another that contains my criteria, and a final variable that is evaluated as an expression combining both of these. I want to end up with an SQL query that is effectively UPDATE mytable set something='bar' where something_else='foo' So my first two variables have the scope of my package and are as follows: Name: BaseSQL Data Type: String Value: UPDATE mytable set something = 'bar' where something_else = Name: MyVariable Data Type: String Value: foo My third variable has the scope of the data flow task where I want to use this SQL and is as follows: Name: SQLQuery Data Type: String Value: @[User::BaseSQL] + "'" + @[User::MyVariable] + "'" EvaluateAsExpression: True In the OLE DB Source, I then choose my connection and 'SQL command from variable' and select User::SQLQuery from the dropdown box. The Variable Value window then displays the following: @[User::BaseSQL] + "'" + @[User::MyVariable] + "'" This is as desired, and would provide the output I want from my DB. The variable name dropdown also contains User::BaseSQL and User::MyVariable, so I believe that my namespaces are correct. However, when I then click preview, I get the following error when configuring an OLE DB Source (using SQL command from variable): TITLE: Microsoft Visual Studio Error at Set runtime in DB [Set runtime in myDb DB [1]]: SSIS Error Code DTS_E_OLEDBERROR. An OLE DB error has occurred. Error code: 0x80040E14. An OLE DB record is available. Source: "Microsoft SQL Server Native Client 10.0" Hresult: 0x80040E14 Description: "Statement(s) could not be prepared.". An OLE DB record is available. Source: "Microsoft SQL Server Native Client 10.0" Hresult: 0x80040E14 Description: "Must declare the scalar variable "@".". (Microsoft Visual Studio) Can anyone advise what I'm missing or how I can resolve this, please? Thanks in advance!

    Read the article

  • What are some good ways to store performance statistics in a database for querying later?

    - by Nathan
    Goal: Store arbitrary performance statistics of stuff that you care about (how many customers are currently logged on, how many widgets are being processed, etc.) in a database so that you can understand how your servers are doing over time. Assumptions: A database is already available, and you already know how to gather the information you want and are capable of putting it in the database however you like. Some ideal attributes of a solution: Causes no noticeable performance hit on the server being monitored Has a very high precision of measurement Does not store useless or redundant information Is easy to query (lends itself to gathering/displaying useful information) Lends itself to being graphed easily Is accurate Is elegant Primary questions: 1) What is a good design/method/scheme for triggering the storing of statistics? 2) What is a good database design for how to actually store the data? Example answers...that are sort of vague and lame... 1) I could, once per [fixed time interval], store a row of data with all the performance measurements I care about in each column of one big flat table indexed by timestamp and/or server. 2) I could have a daemon monitoring the performance stuff I care about, and add a row whenever something changes (instead of at fixed time intervals) to a flat table as in #1. 3) I could trigger as in #2, but store information about each aspect of performance that I'm measuring in separate tables, opening up the possibility of adding tons of rows for often-changing items, and few rows for seldom-changing items. Etc. In the end, I will implement something, even if it's some super-braindead approach I make up myself, but I'm betting there are some really smart people out there willing to share their experiences and bright ideas!
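    A hedged sketch of one narrow-table variant of idea #3 (the names and the exact SQL dialect are assumptions): one row per sample keeps seldom-changing metrics cheap to store, and graphing or summarising reduces to a GROUP BY over a time range.

        CREATE TABLE metric_sample (
            recorded_at DATETIME    NOT NULL,
            server      VARCHAR(64) NOT NULL,
            metric      VARCHAR(64) NOT NULL,   -- e.g. 'customers_logged_on'
            value       FLOAT       NOT NULL,
            PRIMARY KEY (server, metric, recorded_at)
        );

        -- e.g. summarise one metric per server over a window, ready to graph
        SELECT server, AVG(value) AS avg_value, MAX(value) AS peak_value
        FROM metric_sample
        WHERE metric = 'customers_logged_on'
          AND recorded_at >= '2010-05-01' AND recorded_at < '2010-05-02'
        GROUP BY server;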

    Read the article

  • JSON VIEW using GROUP_CONCAT question

    - by Dan Beam
    Hey DBAs and overall smart dudes. I have a question for you. We use MySQL VIEWs to format our data as JSON when it's returned (as a BLOB), which is convenient (though not particularly nice on performance, but we already know this). But I can't seem to get a particular query working right now: each row contains NULL when it should contain a JSON object built from the values of multiple JOINs. Here's the general idea: SELECT CONCAT( "{", "\"some_list\":[", GROUP_CONCAT( DISTINCT t1.id ), "],", "\"other_list\":[", GROUP_CONCAT( DISTINCT t2.id ), "],", "}" ) cool_json FROM table_name tn INNER JOIN ( some_table st ) ON st.some_id = tn.id LEFT JOIN ( another_table at, another_one ao, used_multiple_times t1 ) ON st.id = at.some_id AND at.different_id = ao.different_id AND ao.different_id = t1.id LEFT JOIN ( another_table2 at2, another_one2 ao2, used_multiple_times t2 ) ON st.id = at2.some_id AND at2.different_id = ao2.different_id AND ao2.different_id = t2.id GROUP BY tn.id ORDER BY tn.name Anybody know the problem here? Am I missing something I should be grouping by? It was working when I was only doing one LEFT JOIN and GROUP_CONCAT, but with multiple JOINs / GROUP_CONCATs it breaks. When I move the GROUP_CONCATs out of the "cool_json" field they work as expected, but I'd like my data formatted as JSON so I can decode it server-side or client-side in one step.
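    A hedged guess at the cause, with a sketch of the usual fix: CONCAT() returns NULL as soon as any argument is NULL, and a GROUP_CONCAT over a LEFT JOIN that matches no rows is NULL, so a single empty list nulls out the whole cool_json value. Wrapping each list in IFNULL keeps the rest intact:

        SELECT CONCAT(
            '{',
            '"some_list":[',  IFNULL(GROUP_CONCAT(DISTINCT t1.id), ''), '],',
            '"other_list":[', IFNULL(GROUP_CONCAT(DISTINCT t2.id), ''), ']',
            '}'
        ) AS cool_json
        -- FROM / JOIN / GROUP BY clauses as in the original view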

    Read the article

  • How can I solve this NHibernate Querying in an n-tier architecture?

    - by Tyler Wright
    I've hit a wall trying to decouple NHibernate from my services layer. My architecture looks like this: web - services - repositories - nhibernate - db I want to be able to spawn NHibernate queries from my services layer, and possibly my web layer, without those layers knowing which ORM they are dealing with. Currently, I have a Find method on all of my repositories that takes in IList<object[]> criteria. This allows me to pass in a list of criteria such as new object[] {"Username", usernameVariable}; from anywhere in my architecture. NHibernate takes this in, creates a new Criteria object and adds in the passed-in criteria. This works fine for basic searches from my service layer, but I would like to have the ability to pass in a query object that my repository translates into an NHibernate Criteria. Really, I would love to implement something like what is described in this question: Is there value in abstracting nhibernate criterion. I'm just not finding any good resources on how to implement something like this. Is the method described in that question a good approach? If so, could anyone provide some pointers on how to implement such a solution?
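    A hedged sketch of the query-object idea (the type and member names here are invented for illustration): the service layer builds a provider-neutral specification, and the repository is the only layer that translates it into NHibernate criteria.

        // ORM-agnostic query description, shared with the services layer
        public class FieldEquals
        {
            public string Property { get; set; }
            public object Value { get; set; }
        }

        // inside the NHibernate repository, the only place referencing NHibernate
        public IList<T> Find<T>(IEnumerable<FieldEquals> spec) where T : class
        {
            ICriteria criteria = _session.CreateCriteria<T>();
            foreach (FieldEquals term in spec)
                criteria.Add(Restrictions.Eq(term.Property, term.Value)); // NHibernate.Criterion
            return criteria.List<T>();
        }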

    Read the article

  • Uninterruptible Windows Process

    - by Nullstr1ng
    Hi guys, we're starting a new custom project right now for a client, and one of the requirements is that the process cannot be terminated unless the system is shutting down, restarting, or logging off. This application monitors the USB interface; we will be using WMI to query the device periodically. The client wants to run the application on the Windows XP operating system and doesn't like installing .NET, so we targeted Visual Basic 6 as our language. My main concern is that this application cannot be terminated. Our project adviser talks about antivirus software, and yes, some antivirus processes cannot be terminated. I was thinking about how to do the same in Visual Basic 6. I know there will be API calls involved in the project, and that's OK with me, but where should I look? I saw some articles about converting the EXE to a service, creating a Windows service in Visual Basic 6, etc. So please share your thoughts.

    Read the article

  • How to call Android contacts list?

    - by aZn137
    Hi, I'm making an Android app and need to call the phone's contact list. I need to open the contacts list, pick a contact, then return to my app with the contact's name. Here's the code I got on the internet, but it doesn't work. Please help: import android.app.ListActivity; import android.content.Intent; import android.database.Cursor; import android.os.Bundle; import android.provider.Contacts.People; import android.view.View; import android.widget.ListAdapter; import android.widget.ListView; import android.widget.SimpleCursorAdapter; import android.widget.TextView; public class Contacts extends ListActivity { private ListAdapter mAdapter; public TextView pbContact; public static String PBCONTACT; public static final int ACTIVITY_EDIT=1; private static final int ACTIVITY_CREATE=0; // Called when the activity is first created. @Override public void onCreate(Bundle icicle) { super.onCreate(icicle); Cursor C = getContentResolver().query(People.CONTENT_URI, null, null, null, null); startManagingCursor(C); String[] columns = new String[] {People.NAME}; int[] names = new int[] {R.id.row_entry}; mAdapter = new SimpleCursorAdapter(this, R.layout.mycontacts, C, columns, names); setListAdapter(mAdapter); } // end onCreate() // Called when contact is pressed @Override protected void onListItemClick(ListView l, View v, int position, long id) { super.onListItemClick(l, v, position, id); Cursor C = (Cursor) mAdapter.getItem(position); PBCONTACT = C.getString(C.getColumnIndex(People.NAME)); // RHS 05/06 //pbContact = (TextView) findViewById(R.id.myContact); //pbContact.setText(new StringBuilder().append("b")); Intent i = new Intent(this, NoteEdit.class); startActivityForResult(i, ACTIVITY_CREATE); } }
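    A hedged sketch of the usual approach for "pick a contact and come back with the name": fire an ACTION_PICK intent at the contacts provider and read the chosen contact in onActivityResult (this uses the same deprecated People provider as the question's code).

        private static final int PICK_CONTACT_REQUEST = 1;

        private void pickContact() {
            Intent intent = new Intent(Intent.ACTION_PICK, People.CONTENT_URI);
            startActivityForResult(intent, PICK_CONTACT_REQUEST);
        }

        @Override
        protected void onActivityResult(int requestCode, int resultCode, Intent data) {
            super.onActivityResult(requestCode, resultCode, data);
            if (requestCode == PICK_CONTACT_REQUEST && resultCode == RESULT_OK) {
                Cursor c = getContentResolver().query(data.getData(), null, null, null, null);
                if (c != null && c.moveToFirst()) {
                    String name = c.getString(c.getColumnIndex(People.NAME));
                    // the picked contact's name is now available to the app
                }
                if (c != null) c.close();
            }
        }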

    Read the article

  • Culture Sensitive GetHashCode

    - by user114928
    Hi, I'm writing a C# application that will process some text and provide basic query functions. In order to ensure the best possible support for other languages, I am allowing the users of the application to specify the System.Globalization.CultureInfo (via the "en-GB" style code) and also the full range of collation options using the System.Globalization.CompareOptions flags enum. For regular string comparison I'm then using a combination of: a) the String.Compare overload that accepts the culture and options, and b) for some bulk processes, caching the byte data (KeyData) from CompareInfo.GetSortKey (the overload that accepts the options) and using a byte-by-byte comparison of the KeyData. This seemed fine (although please comment if you think these two methods shouldn't be mixed), but then I had reason to use the HashSet<T> class, which only has a constructor overload for IEqualityComparer<T>. MS documentation seems to suggest that I should use StringComparer (which implements both IEqualityComparer<string> and IComparer<string>), but this only seems to support the "IgnoreCase" option from CompareOptions and not "IgnoreKanaType", "IgnoreSymbols", "IgnoreWidth" etc. I'm assuming that a StringComparer that ignores these other options could produce different hash codes for two strings that might be considered the same under my other comparison options. I'd therefore get incorrect results from my application. My only thought at the moment is to create my own IEqualityComparer<string> that generates a hash code from the SortKey.KeyData and compares equality by using the String.Compare overload. Any suggestions?
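    A hedged sketch of that last idea, so that equality and the hash code are both driven by the same CompareInfo and CompareOptions (the hash-mixing scheme below is an arbitrary choice, and null arguments are not handled):

        sealed class SortKeyComparer : IEqualityComparer<string>
        {
            private readonly CompareInfo _info;
            private readonly CompareOptions _options;

            public SortKeyComparer(CultureInfo culture, CompareOptions options)
            {
                _info = culture.CompareInfo;
                _options = options;
            }

            public bool Equals(string x, string y)
            {
                return _info.Compare(x, y, _options) == 0;
            }

            public int GetHashCode(string s)
            {
                // strings equal under the chosen options have identical sort keys,
                // so hashing the key bytes keeps GetHashCode consistent with Equals
                byte[] key = _info.GetSortKey(s, _options).KeyData;
                int hash = 17;
                foreach (byte b in key)
                    hash = hash * 31 + b;
                return hash;
            }
        }
        // usage: new HashSet<string>(new SortKeyComparer(culture, options))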

    Read the article

  • Generating Random Records Date Wise

    - by Julian
    I work for a non-profit organization where we send volunteers to aided schools every day. I am creating a site to display this info and am using SQL Server Express. I want some help regarding a query, so here's my first post. We currently have 15 volunteers who will go to 4 different schools to teach. Here are some conditions: We have to create a 'new' group consisting of 1 Leader and 4 Team Supporters every day except Sunday. If a person becomes a Leader in a week, he cannot become a Leader again in the same week; a Leader can, however, become a Team Supporter in the same week. Moving ahead, we may have more schools to target, so 4 is not a constant. Here's what the output should look like: School1 School2 School3 School4 Jun14 Leader V6 V6 V6 V6 Support1 V3 V3 V3 V3 Support2 V9 V9 V9 V9 Support3 V12 V12 V12 V12 Support4 V1 V1 V1 V1 Jun15 Leader V2 V2 V2 V2 Support1 V7 V7 V7 V7 Support2 V9 V9 V9 V9 Support3 V8 V8 V8 V8 Support4 V11 V11 V11 V11 Jun16 Leader V9 V9 V9 V9 Support1 V6 V6 V6 V6 Support2 V4 V4 V4 V4 Support3 V3 V3 V3 V3 Support4 V14 V14 V14 V14 and so on..
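    A hedged sketch of the random-selection part in T-SQL (the Volunteer and Assignment tables and their columns are hypothetical): ORDER BY NEWID() gives a random ordering, and the NOT IN subquery enforces the "not a Leader twice in the same week" rule; the four supporters for the day can be drawn the same way from the remaining volunteers.

        DECLARE @Day DATE = '2010-06-14';

        SELECT TOP (1) v.VolunteerId        -- today's randomly chosen Leader
        FROM Volunteer AS v
        WHERE v.VolunteerId NOT IN (
              SELECT a.LeaderId
              FROM Assignment AS a
              WHERE DATEPART(ISO_WEEK, a.AssignmentDate) = DATEPART(ISO_WEEK, @Day)
                AND YEAR(a.AssignmentDate) = YEAR(@Day))
        ORDER BY NEWID();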

    Read the article

  • Searching a column containing CSV data in a MySQL table for existence of input values

    - by Adarsh R
    Hi, I have a table, say ITEM, in MySQL that stores data as follows: ID FEATURES -------------------- 1 AB,CD,EF,XY 2 PQ,AC,A3,B3 3 AB,CDE 4 AB1,BC3 -------------------- As an input, I will get a CSV string, something like "AB,PQ". I want to get the records that contain AB or PQ. I realized that we have to write a MySQL function to achieve this. So, if we had this magical function MATCH_ANY defined in MySQL that does this, I would then simply execute an SQL statement as follows: select * from ITEM where MATCH_ANY(FEATURES, "AB,PQ") = 0 The above query would return the records 1, 2 and 3. But I'm running into all sorts of problems while implementing this function, as I realized that MySQL doesn't support arrays and there's no simple way to split strings based on a delimiter. Remodeling the table is the last option for me as it involves a lot of issues. I might also want to execute queries containing multiple MATCH_ANY functions such as: select * from ITEM where MATCH_ANY(FEATURES, "AB,PQ") = 0 and MATCH_ANY(FEATURES, "CDE") In the above case, we would get an intersection of records (1, 2, 3) and (3), which would be just 3. Any help is deeply appreciated. Thanks
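    A hedged sketch using MySQL's built-in FIND_IN_SET, which avoids writing MATCH_ANY at all (it assumes the FEATURES column has no spaces after the commas):

        -- rows whose FEATURES contain AB or PQ  -> ids 1, 2, 3
        SELECT * FROM ITEM
        WHERE FIND_IN_SET('AB', FEATURES) > 0
           OR FIND_IN_SET('PQ', FEATURES) > 0;

        -- AND-combining conditions works the same way  -> id 3 only
        SELECT * FROM ITEM
        WHERE (FIND_IN_SET('AB', FEATURES) > 0 OR FIND_IN_SET('PQ', FEATURES) > 0)
          AND FIND_IN_SET('CDE', FEATURES) > 0;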

    Read the article

  • Can I add a condition to CakePHP's update statement?

    - by Don Kirkby
    Since there doesn't seem to be any support for optimistic locking in CakePHP, I'm taking a stab at building a behaviour that implements it. After a little research into behaviours, I think I could run a query in the beforeSave event to check that the version field hasn't changed. However, I'd rather implement the check by changing the update statement's WHERE clause from WHERE id = ? to WHERE id = ? and version = ? This way I don't have to worry about other requests changing the database record between the time I read the version and the time I execute the update. It also means I can do one database call instead of two. I can see that the DboSource.update() method supports conditions, but Model.save() never passes any conditions to it. It seems like I have a couple of options: Do the check in beforeSave() and live with the fact that it's not bulletproof. Hack my local copy of CakePHP to check for a conditions key in the options array of Model.save() and pass it along to the DboSource.update() method. Right now, I'm leaning in favour of the second option, but that means I can't share my behaviour with other users unless they apply my hack to their framework. Have I missed an easier option?
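    For what it's worth, a hedged sketch of a third option that needs no framework hack: Model::updateAll() accepts conditions, so the version check and the update become one atomic statement (the model and field names are illustrative, and note that updateAll expects literal SQL expressions as values, hence the nested quotes):

        // inside a model or behaviour method, CakePHP 1.x-style
        $ok = $this->updateAll(
            array('title' => "'New title'", 'version' => $currentVersion + 1),
            array('id' => $id, 'version' => $currentVersion)
        );
        if ($this->getAffectedRows() == 0) {
            // someone else saved first: the optimistic lock failed
        }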

    Read the article

  • Is it possible to execute a function in Mongo that accepts any parameters?

    - by joshua.clayton
    I'm looking to write a function to do a custom query on a collection in Mongo. Problem is, I want to reuse that function. My thought was this (obviously contrived): var awesome = function(count) { return function() { return this.size == parseInt(count); }; } So then I could do something along the lines of: db.collection.find(awesome(5)); However, I get this error: error: { "$err" : "error on invocation of $where function: JS Error: ReferenceError: count is not defined nofile_b:1" } So, it looks like Mongo isn't honoring scope, but I'm really not sure why. Any insight would be appreciated. To go into more depth on what I'd like to do: A collection of documents has lat/lng values, and I want to find all documents within a concave or convex polygon. I have the function written but would ideally be able to reuse it, so I want to pass an array of points composing my polygon to the function I execute on Mongo's end. I've looked at Mongo's geospatial querying and it currently only supports circle and box queries - I need something more complex.
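    A hedged sketch of a common workaround: the function is serialized and run on the server, so it cannot see the client shell's closure scope; baking the parameter into a string form of the $where predicate keeps the function reusable.

        // build the server-side predicate with the value already substituted
        var awesome = function (count) {
            return "return this.size == " + parseInt(count, 10) + ";";
        };

        db.collection.find({ $where: awesome(5) });   // documents whose size is 5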

    Read the article

  • SQL Pivot table error - using a variable gives a syntax error

    - by Antoni
    Hi, my coworker came to me with this error and now I am hooked and trying to figure it out; I hope some of the experts can help us! Thanks so much! When I execute Step6 we get this error: Msg 102, Level 15, State 1, Line 4 Incorrect syntax near '@cols'. --Sample of pivot query --Creating Test Table Step1 CREATE TABLE Product(Cust VARCHAR(25), Product VARCHAR(20), QTY INT) GO -- Inserting Data into Table Step2 INSERT INTO Product(Cust, Product, QTY) VALUES('KATE','VEG',2) INSERT INTO Product(Cust, Product, QTY) VALUES('KATE','SODA',6) INSERT INTO Product(Cust, Product, QTY) VALUES('KATE','MILK',1) INSERT INTO Product(Cust, Product, QTY) VALUES('KATE','BEER',12) INSERT INTO Product(Cust, Product, QTY) VALUES('FRED','MILK',3) INSERT INTO Product(Cust, Product, QTY) VALUES('FRED','BEER',24) INSERT INTO Product(Cust, Product, QTY) VALUES('KATE','VEG',3) GO -- Selecting and checking entries in table Step3 SELECT * FROM Product GO -- Pivot Table ordered by PRODUCT Step4 select * FROM ( SELECT * FROM Product) up PIVOT (SUM(QTY) FOR CUST IN ([FRED], [KATE])) AS pvt ORDER BY PRODUCT GO --dynamic pivot???? Step5 DECLARE @cols NVARCHAR(2000) select @cols = STUFF(( SELECT DISTINCT TOP 100 PERCENT '],[' + b.Cust FROM (select top 100 Cust from Product)b ORDER BY '],[' + b.Cust FOR XML PATH('') ), 1, 2, '') + ']' --Show Step6 SELECT * FROM (SELECT * FROM Product) p PIVOT (SUM(QTY) FOR CUST IN (@cols)) as pvt Order by Product
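    The likely explanation, with a hedged sketch of the standard fix: the column list of PIVOT ... IN (...) must be literal identifiers, so a variable such as @cols can only be used by building the whole statement as a string and executing it dynamically.

        -- Step6, dynamic version (assumes @cols was built as in Step5, same batch)
        DECLARE @sql NVARCHAR(MAX);
        SET @sql = N'SELECT * FROM (SELECT * FROM Product) p
                     PIVOT (SUM(QTY) FOR Cust IN (' + @cols + N')) AS pvt
                     ORDER BY Product;';
        EXEC sp_executesql @sql;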

    Read the article

  • How to get the equivalent of the accuracy in Google Maps Geocoder V3

    - by Scorpi0
    Hi, I want to get geocodes from Google, and I used to do it with V2 of the API. Google sent a pretty useful piece of information in the JSON: the accuracy. Reference here: http://code.google.com/intl/fr-FR/apis/maps/documentation/javascript/v2/reference.html#GGeoAddressAccuracy In V3, Google doesn't seem to send me exactly the same information. There is the "address_components" array, which seems bigger when the accuracy is better, but not exactly. For example, one request is accurate to the street number and the array is of size 8. Another query is only accurate to the route, so less accurate, but the array is still of size 8, as there is a 'sublocality' entry which does not appear in the first case. OK, each result has a 'types' field, which holds the 'best' accuracy. These types are listed here: http://code.google.com/intl/fr-FR/apis/maps/documentation/geocoding/#Types But there is no real order, and if I only want results more accurate than postal_code, I have no clue how to do that. So, how can I get the equivalent of the V2 accuracy without some dumb and horrible code?
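    A hedged sketch of the closest V3 equivalent: each result carries geometry.location_type (ROOFTOP, RANGE_INTERPOLATED, GEOMETRIC_CENTER, APPROXIMATE, roughly from most to least precise), which can be checked alongside result.types, so there is no need to count address_components.

        var geocoder = new google.maps.Geocoder();
        geocoder.geocode({ address: someAddress }, function (results, status) {
            if (status === google.maps.GeocoderStatus.OK) {
                var best = results[0];
                var precision = best.geometry.location_type;   // e.g. "ROOFTOP"
                var preciseEnough =
                    precision === google.maps.GeocoderLocationType.ROOFTOP ||
                    precision === google.maps.GeocoderLocationType.RANGE_INTERPOLATED;
                // best.types, e.g. ["street_address"], plays the role of V2's accuracy class
            }
        });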

    Read the article

  • How Can I Create Reports in a Custom C#.NET Windows Application? - General Question

    - by user311509
    Assume I have a custom Windows application written in C#. This application has only the following functionality: add, edit, delete and view. For example, a user can add a sale, change a sales record, delete a sale record or view the whole sales record. I need to add some reporting functionality, e.g. I want a user to print the sales of a certain customer from 2008 to 2009 to PDF, see all the products a certain customer has purchased from us, and so on. I will only include the basic common report requests that are usually needed in the office. Any other kinds of reports that are requested infrequently I would run manually at the back end and send the results to the requester. What I would do is: if a user wants more info on a certain customer, a special window appears for that customer. This window has different controls that allow the user to request more info, such as printing the customer's purchases from ..... to ..... (the user chooses the dates), and the user will view the results as PDF or similar. Of course, behind the scenes I will write an appropriate SQL query with parameters for each function. Is this how it should be done? I have heard about SQL Reporting; I don't know anything about it yet, but I will check it out. Anyhow, your suggestions won't harm. I'm still a student, so I don't have practical experience yet. I hope my question is clear enough. Thank you.

    Read the article

  • What is the best database structure for this scenario?

    - by Ricketts
    I have a database that is holding real estate MLS (Multiple Listing Service) data. Currently, I have a single table that holds all the listing attributes (price, address, sqft, etc.). There are several different property types (residential, commercial, rental, income, land, etc.) and each property type share a majority of the attributes, but there are a few that are unique to that property type. My question is the shared attributes are in excess of 250 fields and this seems like too many fields to have in a single table. My thought is I could break them out into an EAV (Entity-Attribute-Value) format, but I've read many bad things about that and it would make running queries a real pain as any of the 250 fields could be searched on. If I were to go that route, I'd literally have to pull all the data out of the EAV table, grouped by listing id, merge it on the application side, then run my query against the in memory object collection. This also does not seem very efficient. I am looking for some ideas or recommendations on which way to proceed. Perhaps the 250+ field table is the only way to proceed. Just as a note, I'm using SQL Server 2012, .NET 4.5 w/ Entity Framework 5, C# and data is passed to asp.net web application via WCF service. Thanks in advance.
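    One alternative worth weighing against EAV, shown as a hedged sketch (table and column names are invented): a supertype/subtype split keeps the shared attributes in one wide table and puts the handful of type-specific fields in narrow one-to-one extension tables, so ordinary indexed queries still work.

        CREATE TABLE Listing (
            ListingId    INT PRIMARY KEY,
            PropertyType VARCHAR(20) NOT NULL,   -- residential, commercial, ...
            Price        MONEY,
            Address      NVARCHAR(200),
            SquareFeet   INT
            -- ... the other shared attributes
        );

        CREATE TABLE ResidentialListing (
            ListingId INT PRIMARY KEY REFERENCES Listing (ListingId),
            Bedrooms  INT,
            Bathrooms DECIMAL(3,1)
        );

        CREATE TABLE CommercialListing (
            ListingId  INT PRIMARY KEY REFERENCES Listing (ListingId),
            ZoningCode VARCHAR(20)
        );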

    Read the article

  • How to handle update events on an ASP.NET GridView?

    - by Bogdan M
    Hello, this may sound silly, but I need to find out how to handle an Update event from a GridView. First of all, I have a DataSet, in which there is a typed DataTable with a typed TableAdapter based on a "select all" query, with auto-generated Insert, Update, and Delete methods. Then, in my aspx page, I have an ObjectDataSource wired to my typed TableAdapter's Select, Insert, Update and Delete methods. Finally, I have a GridView bound to this ObjectDataSource, with the default Edit, Update and Cancel links. How should I implement the edit functionality? Should I have something like this? protected void GridView_RowEditing(object sender, GridViewEditEventArgs e) { using(MyTableAdapter ta = new MyTableAdapter()) { ta.Update(...); TypedDataTable dt = ta.GetRecords(); this.GridView.DataSource = dt; this.GridView.DataBind(); } } In this scenario, I have the feeling that I write some changes to the DB and then retrieve and bind all the data, not only the modified parts. Is there any way to update only the DataSet and have it in turn update the database and the GridView? I do not want to retrieve all the data after a CRUD operation is performed; I just want to retrieve the changes made. Thanks. PS: I'm using .NET 3.5 and VS 2008 with SP1.
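    A hedged sketch of the "send only the changes" idea (the type names follow the question's typed DataSet): DataTable.GetChanges() returns just the modified, inserted and deleted rows, so the adapter writes only those and no full re-select is needed on the database side.

        // dt is the typed DataTable the grid is working against
        TypedDataTable changes = (TypedDataTable)dt.GetChanges();
        if (changes != null)
        {
            using (MyTableAdapter ta = new MyTableAdapter())
            {
                ta.Update(changes);   // only the changed rows hit the database
            }
            dt.AcceptChanges();       // mark the in-memory table as clean
        }
        // rebinding the grid to dt then reflects the edits without a new query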

    Read the article
