Search Results

Search found 7116 results on 285 pages for 'nested queries'.

Page 191/285

  • SQL Table Setup Advice

    - by Ozzy
    Hi all. Basically I have an XML feed from an offsite server. The feed has one parameter, ?value=N, where N can only be between 1 and 30. Whatever value I pick, there will always be 4000 rows returned from the XML file. My script will call this feed once a day for each of the 30 values, so that's 120,000 rows. I will be doing quite complicated queries on these rows, but the main thing is that I will always filter by value first, so SELECT * WHERE value = 'N' etc. will ALWAYS be used. Now, is it better to have one table where all 120k rows are stored, or 30 tables with 4k rows each? EDIT: the SQL database in question will be MySQL
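
    A minimal sketch of the single-table option, assuming MySQL/InnoDB and illustrative table and column names; an index on value keeps the always-used filter cheap, and one indexed table is usually easier to query and maintain than 30 near-identical ones:

        -- Hypothetical schema: one table for all 30 feed values.
        CREATE TABLE feed_rows (
            id         INT UNSIGNED     NOT NULL AUTO_INCREMENT,
            value      TINYINT UNSIGNED NOT NULL,   -- the ?value=N parameter, 1..30
            payload    VARCHAR(255)     NOT NULL,   -- whatever each XML row carries
            fetched_at DATE             NOT NULL,
            PRIMARY KEY (id),
            KEY idx_value (value)                   -- makes "WHERE value = N" an index lookup
        ) ENGINE=InnoDB;

        -- The filter that "will ALWAYS be used" then hits the index:
        SELECT * FROM feed_rows WHERE value = 7;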

    Read the article

  • Rails flash hash violation of MVC?

    - by user94154
    I know Rails' flash hash is nothing new, but I keep running into the same problem with it. Controllers should be for business logic and DB queries, not for formatting strings for display to the user, yet the flash hash is always set in the controller. This means I have to work around Rails to reuse the helpers I wrote for formatting flash strings. Is this just a pragmatic compromise within MVC, or am I missing something here? How do you deal with this problem, or do you not even see it as one?

    Read the article

  • SPARQL UNION - Result set incomplete

    - by jplevac
    I have two queries: query 1: SELECT DISTINCT ?o COUNT(?o) WHERE { ?s1 ?somep1 <predicate_one-uri>. ?s1 ?p ?o} query 2: SELECT DISTINCT ?o COUNT(?o) WHERE {?s2 ?somep2 <predicate_two-uri>.?s2 ?p ?o.} Each query gives me a different result set (as expected). I need to make a union of these two sets. From what I understand, the query below should give me the set I want: SELECT DISTINCT ?o COUNT(?o) WHERE { { ?s1 ?somep1 <predicate_one-uri>.?s1 ?p1 ?o} UNION {?s2 ?somep2 <predicate_two-uri>.?s2 ?p2 ?o.} } The problem is that some results from query 1 are not in the union set, and vice versa for query 2. The union is not working properly, as it does not incorporate all results of query 1 and query 2. Please advise on the proper structure of the SPARQL query for achieving the desired result set. Thanks in advance! JP Levac
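
    A restructuring along SPARQL 1.1 lines, offered as a sketch rather than a verified fix for the missing rows: the aggregate is aliased, the grouping is explicit, and a single ?p is reused in both branches:

        SELECT ?o (COUNT(?o) AS ?cnt)
        WHERE {
          { ?s1 ?somep1 <predicate_one-uri> . ?s1 ?p ?o }
          UNION
          { ?s2 ?somep2 <predicate_two-uri> . ?s2 ?p ?o }
        }
        GROUP BY ?o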

    Read the article

  • Best practices for using memcached in Rails?

    - by Matt
    Hello everybody, as database transactions in our app are getting more and more time consuming, we have started to use memcached to reduce the amount of queries passed to MySQL. All in all, it works fine and really saves a lot of time. But as caching was "silently appearing" as a workaround to give the app more juice, a lot of our models now contain code like this: def self.all_cached Rails.cache.fetch('object_name') { find( :all, :include => [associations]) } end This is becoming more and more of a pain, as filling and flushing the cache happens in several classes across the application. Now, I was wondering if there was a better way to abstract the memcached logic to make it more powerful and easy to use across all needed models? I was thinking about having some kind of memcached module which is included in all needed models. But before playing around, I thought: Let's ask experts first :-) Thanks Matt
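
    A minimal sketch of the kind of shared module being considered, assuming Rails.cache is backed by memcached and a per-model cache key is acceptable; module, key and method names are illustrative:

        module Cacheable
          def self.included(base)
            base.extend ClassMethods
          end

          module ClassMethods
            # Fetch all records through the cache, keyed per model class.
            def all_cached
              Rails.cache.fetch(cache_key_for_all) { find(:all) }
            end

            # Drop the cached collection, e.g. from after_save / after_destroy callbacks.
            def flush_all_cached
              Rails.cache.delete(cache_key_for_all)
            end

            def cache_key_for_all
              "#{name.underscore}/all"
            end
          end
        end

        # class Post < ActiveRecord::Base
        #   include Cacheable
        # end
        # Post.all_cached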

    Read the article

  • How to achieve multiple grids like the SQL Server Results pane

    - by Skun
    Hi guys! I'm having problems with my project once again :( The front end is C#. I need to support multiline querying like MS SQL Server, and when these queries are executed there are naturally going to be multiple result sets. Getting the DataTables for the results is not a problem, but how do I make it appear the way it's done in MS SQL Server: one result set below the other, with a scroll bar? Should I bind them to a datagrid? If so, how can I bind multiple tables to a datagrid, and will it generate the scrollbars and columns automatically? If I am not clear, please let me know and I'll try to be clearer. PS: If anyone knows how this can be done with the XtraGridControl in DevExpress, that would be awesome! :D
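
    A rough WinForms sketch of the stacked-grids idea, assuming the result sets arrive as DataTables in a DataSet and that one read-only DataGridView per table inside a scrollable FlowLayoutPanel is acceptable; control names and sizes are illustrative:

        // resultsPanel is a FlowLayoutPanel dropped onto the form with
        // FlowDirection = TopDown, WrapContents = false, AutoScroll = true.
        private void ShowResultSets(DataSet results)
        {
            resultsPanel.Controls.Clear();
            foreach (DataTable table in results.Tables)
            {
                var grid = new DataGridView
                {
                    DataSource = table,           // columns are generated automatically
                    ReadOnly = true,
                    AllowUserToAddRows = false,
                    Width = resultsPanel.ClientSize.Width - 25,
                    Height = 200                  // fixed height; the panel supplies the outer scrollbar
                };
                resultsPanel.Controls.Add(grid);
            }
        }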

    Read the article

  • Building a subquery with ARel in Rails3

    - by Christopher
    I am trying to build this query in ARel: SELECT FLOOR(AVG(num)) FROM ( SELECT COUNT(attendees.id) AS num, meetings.club_id FROM `meetings` INNER JOIN `attendees` ON `attendees`.`meeting_id` = `meetings`.`id` WHERE (`meetings`.club_id = 1) GROUP BY meetings.id) tmp GROUP BY tmp.club_id It returns the average number of attendees per meeting, per club. (a club has many meetings and a meeting has many attendees) So far I have (declared in class Club < ActiveRecord::Base): num_attendees = meetings.select("COUNT(attendees.id) AS num").joins(:attendees).group('meetings.id') Arel::Table.new('tmp', self.class.arel_engine).from(num_attendees).project('FLOOR(AVG(num))').group('tmp.club_id').to_sql but, I am getting the error: undefined method `visit_ActiveRecord_Relation' for #<Arel::Visitors::MySQL:0x9b42180> The documentation for generating non trivial ARel queries is a bit hard to come by. I have been using http://rdoc.info/github/rails/arel/master/frames Am I approaching this incorrectly? Or am I a few methods away from a solution?
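
    One pragmatic fallback while the ARel composition is being sorted out, sketched as an instance method on Club: hand the already-working SQL to find_by_sql with a bound club id. It sidesteps ARel rather than fixing the visitor error, and the avg_attendees alias is illustrative:

        def average_attendees_per_meeting
          row = self.class.find_by_sql([<<-SQL, id]).first
            SELECT FLOOR(AVG(num)) AS avg_attendees FROM (
              SELECT COUNT(attendees.id) AS num, meetings.club_id
              FROM meetings
              INNER JOIN attendees ON attendees.meeting_id = meetings.id
              WHERE meetings.club_id = ?
              GROUP BY meetings.id
            ) tmp GROUP BY tmp.club_id
          SQL
          row && row.avg_attendees
        end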

    Read the article

  • I am not able to create a foreign key in MySQL (Error 150). Please help

    - by Shantanu Gupta
    I am trying to create a foreign key in my table, but when I execute my query it shows me error 150: Error Code : 1005 Can't create table '.\vts#sql-6ec_1.frm' (errno: 150) (0 ms taken) My queries are as follows. Query to create the foreign key: alter table `vts`.`tblguardian` add constraint `FK_tblguardian` FOREIGN KEY (`GuardianPickPointId`) REFERENCES `tblpickpoint` (`PickPointId`) Primary key table: CREATE TABLE `tblpickpoint` ( `PickPointId` int(4) NOT NULL auto_increment, `PickPointName` varchar(500) default NULL, `PickPointLabel` varchar(500) default NULL, `PickPointLatLong` varchar(100) NOT NULL, PRIMARY KEY (`PickPointId`) ) ENGINE=InnoDB DEFAULT CHARSET=latin1 CHECKSUM=1 DELAY_KEY_WRITE=1 ROW_FORMAT=DYNAMIC Foreign key table: CREATE TABLE `tblguardian` ( `GuardianId` int(4) NOT NULL auto_increment, `GuardianName` varchar(500) default NULL, `GuardianAddress` varchar(500) default NULL, `GuardianMobilePrimary` varchar(15) NOT NULL, `GuardianMobileSecondary` varchar(15) default NULL, `GuardianPickPointId` varchar(100) default NULL, PRIMARY KEY (`GuardianId`) ) ENGINE=InnoDB DEFAULT CHARSET=latin1
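
    Error 150 is InnoDB's generic refusal, and in the dumps above the referencing column GuardianPickPointId is varchar(100) while the referenced PickPointId is int(4); a sketch of aligning the types first, assuming that mismatch is the only problem and the existing values convert cleanly:

        ALTER TABLE `tblguardian`
          MODIFY `GuardianPickPointId` int(4) DEFAULT NULL;

        ALTER TABLE `tblguardian`
          ADD CONSTRAINT `FK_tblguardian`
          FOREIGN KEY (`GuardianPickPointId`) REFERENCES `tblpickpoint` (`PickPointId`);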

    Read the article

  • MS Analysis Services OLAP API for Python

    - by Kaloyan Todorov
    I am looking for a way to connect to an MS Analysis Services OLAP cube, run MDX queries, and pull the results into Python. In other words, exactly what Excel does. Is there a solution in Python that would let me do that? Someone asking a similar question got pointed to Django's ORM. As much as I like the framework, this is not what I am looking for. I am also not looking for a way to pull rows and aggregate them -- that's what Analysis Services is for in the first place. Ideas? Thanks.
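
    One possible route, offered with heavy hedging since it relies on Windows-only pieces: drive the classic ADO COM objects from pywin32 against the MSOLAP OLE DB provider, which flattens an MDX result into an ordinary rowset. Server, catalog, cube and hierarchy names below are illustrative, and the provider must be installed on the client:

        import win32com.client  # from the pywin32 package

        conn = win32com.client.Dispatch("ADODB.Connection")
        conn.Open("Provider=MSOLAP;Data Source=my-ssas-server;"
                  "Initial Catalog=MyCubeDatabase;Integrated Security=SSPI")

        mdx = ("SELECT {[Measures].[Sales]} ON COLUMNS, "
               "[Date].[Year].MEMBERS ON ROWS FROM [MyCube]")

        rs = win32com.client.Dispatch("ADODB.Recordset")
        rs.Open(mdx, conn)           # the provider returns a flattened rowset

        while not rs.EOF:
            print([field.Value for field in rs.Fields])
            rs.MoveNext()

        rs.Close()
        conn.Close()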

    Read the article

  • MySQL UNION query from one table + ORDER BY

    - by ilnur777
    I am combining two queries with UNION and need to sort the combined result in descending order using ORDER BY. Here is my MySQL query that does not work properly: (SELECT `text` FROM `comments` WHERE user_fr='".$user."' && archive='1' ORDER BY `is_new_fr` DESC) UNION (SELECT `text` FROM `message` WHERE user_to='".$user."' && archive='1' ORDER BY `is_new_to` DESC) Description: is_new_fr and is_new_to count the total new messages. Here is my table content:
    user_fr | user_to | archive | is_new_fr | is_new_to | text
    name1   | name2   | 1       | 2         | 0         | testing...
    name2   | name1   | 1       | 0         | 5         | testing...
    I want the row with more messages to be displayed first, in other words sorted in DESCending order. This is the display I want on the page: Open dialog with name2. Messages (5) Open dialog with name1. Messages (2) Thank you!
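
    A sketch of the usual fix, with the PHP-interpolated user value shown as a placeholder: select the count under a common alias in each branch and apply one ORDER BY to the whole UNION, since an ORDER BY inside the parenthesized branches does not order the combined result:

        (SELECT `text`, `is_new_fr` AS new_count
           FROM `comments` WHERE user_fr = '...user...' AND archive = '1')
        UNION
        (SELECT `text`, `is_new_to` AS new_count
           FROM `message`  WHERE user_to = '...user...' AND archive = '1')
        ORDER BY new_count DESC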

    Read the article

  • UITableView via NSFetchedResultsControllerDelegate, select first record by default?

    - by deafgreatdane
    I have a UITableView that gets populated from Core Data via a controller that implements NSFetchedResultsControllerDelegate. How can I have it automatically select the first row (and fire the tableView:didSelectRowAtIndexPath: message)? The table view is used for a variety of predicate queries, so I'm suspicious of solutions that work on the UIViewController lifecycle (viewDidLoad, etc.), but I'm new to the platform, so I'm open. I've tried a variety of things, but I'm not sure where in the call stack to put it. I've tried calling cell.selected = true inside the tableView:cellForRowAtIndexPath: method, but that just ends up turning the cell black (and doesn't fire the selection callback). A tangent question: with all the delegating and Core Data protocols, does it imply asynchronous data fetching (multiple threads)? Or is the NSFetchedResultsController calling all its related methods on the same thread? Maybe I'm just scared that if it is async, there would be race conditions that would be tough to troubleshoot later.
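
    A hedged Objective-C sketch, assuming this view controller is both the table's delegate and the NSFetchedResultsControllerDelegate: select the first row whenever the fetched content changes, and call the delegate method by hand, because selectRowAtIndexPath:animated:scrollPosition: does not fire it:

        - (void)selectFirstRowIfPossible {
            if ([self.tableView numberOfRowsInSection:0] == 0) return;
            NSIndexPath *first = [NSIndexPath indexPathForRow:0 inSection:0];
            [self.tableView selectRowAtIndexPath:first
                                        animated:NO
                                  scrollPosition:UITableViewScrollPositionTop];
            // Selecting programmatically does not invoke the delegate, so do it explicitly.
            [self tableView:self.tableView didSelectRowAtIndexPath:first];
        }

        - (void)controllerDidChangeContent:(NSFetchedResultsController *)controller {
            [self.tableView reloadData];
            [self selectFirstRowIfPossible];
        }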

    Read the article

  • Using django-haystack, how do I perform a search with only partial terms?

    - by Sri Raghavan
    I've got a Haystack/xapian search index for django.contrib.auth.models.User. The template is simply {{object.get_full_name}} as I intend for a user to type in a name and be able to search for it. My issue is this: if I search, say, Sri (my full first name) I come up with a result for the user object pertaining to my name. However, if I search Sri Ragh - that is, my full name, and part of my last name, I get no results. How can I set Haystack up so that I can get the appropriate results for partial queries? (I essentially want it to search *Sri Ragh*, but I don't know if wildcards would actually do the trick, or how to implement them). This is my search query: results = SearchQuerySet().filter(content='Sri Ragh')
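
    A sketch of the approach usually suggested for prefix matching: declare the indexed text as an EdgeNgramField so partial terms such as "Ragh" match at index time. The layout below follows the newer Haystack index API, the index needs rebuilding afterwards, and it is worth verifying that the xapian backend honours ngram fields:

        # search_indexes.py
        from haystack import indexes
        from django.contrib.auth.models import User

        class UserIndex(indexes.SearchIndex, indexes.Indexable):
            # use_template=True keeps the existing {{ object.get_full_name }} template
            text = indexes.EdgeNgramField(document=True, use_template=True)

            def get_model(self):
                return User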

    Read the article

  • CPU and Data alignment

    - by MS
    Dear all, pardon me if you feel this has been answered numerous times, but I need answers to the following queries. Why does data have to be aligned (on 4-byte / 8-byte / 2-byte boundaries)? My doubt is that when the CPU has address lines Ax Ax-1 Ax-2 ... A2 A1 A0, it is quite possible to address memory locations sequentially, so why is there a need to align data at specific boundaries? How do I find the alignment requirements when I am compiling my code and generating the executable? If, for example, the data alignment is a 4-byte boundary, does that mean each consecutive byte is located at a modulo-4 offset? In other words, if data is 4-byte aligned and a byte is at 1004, is the next byte at 1008 or at 1005? Your thoughts are much welcome. Thanks in advance! /MS
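
    A small C illustration of the last question: alignment constrains where an object may start, not the spacing of the bytes inside it, so a byte at 1004 is followed by one at 1005; padding appears only between members that need their own boundary. The printed offsets are typical for common ABIs rather than guaranteed:

        #include <stdio.h>
        #include <stddef.h>

        struct Sample {
            char  c;   /* 1 byte                          */
            int   i;   /* 4 bytes, wants 4-byte alignment */
            short s;   /* 2 bytes, wants 2-byte alignment */
        };

        int main(void) {
            /* Typical result: c at 0, 3 padding bytes, i at 4, s at 8, sizeof == 12. */
            printf("offset of c: %zu\n", offsetof(struct Sample, c));
            printf("offset of i: %zu\n", offsetof(struct Sample, i));
            printf("offset of s: %zu\n", offsetof(struct Sample, s));
            printf("sizeof(struct Sample): %zu\n", sizeof(struct Sample));
            return 0;
        }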

    Read the article

  • Anyone have any issues with using PLINQO and ASP.NET MVC 2.0?

    - by Chad
    I'm asking because I'm working on an ASP.NET MVC 1.0 site, thinking of upgrading to ASP.NET MVC 2.0. Then I read that PLINQO 5.0 was released (I had never heard of PLINQO before) and have been impressed with what PLINQO appears to be capable of. 1) Is PLINQO capable of building out an ASP.NET MVC 2.0 UI project when it's run? 2) Have you had any bad experiences using PLINQO (particularly in an ASP.NET MVC app)? Let me make sure I have the scenario right in my mind: Using PLINQO (assuming it supports ASP.NET MVC 2.0), I should be able to point it to my DB and it will create 3 projects: data, test, and mvc 2.0 UI? The data would contain LINQ to SQL queries, with the PLINQO extensions added in and the other projects setup to use the data project by default?

    Read the article

  • parallelizing code using openmp

    - by anubhav
    Hi, The function below contains nested for loops. There are 3 of them. I have given the whole function below for easy understanding. I want to parallelize the code in the innermost for loop as it takes maximum CPU time. Then i can think about outer 2 for loops. I can see dependencies and internal inline functions in the innermost for loop . Can the innermost for loop be rewritten to enable parallelization using openmp pragmas. Please tell how. I am writing just the loop which i am interested in first and then the full function where this loop exists for referance. Interested in parallelizing the loop mentioned below. //* LOOP WHICH I WANT TO PARALLELIZE *// for (y = 0; y < 4; y++) { refptr = PelYline_11 (ref_pic, abs_y++, abs_x, img_height, img_width); LineSadBlk0 += byte_abs [*refptr++ - *orgptr++]; LineSadBlk0 += byte_abs [*refptr++ - *orgptr++]; LineSadBlk0 += byte_abs [*refptr++ - *orgptr++]; LineSadBlk0 += byte_abs [*refptr++ - *orgptr++]; LineSadBlk1 += byte_abs [*refptr++ - *orgptr++]; LineSadBlk1 += byte_abs [*refptr++ - *orgptr++]; LineSadBlk1 += byte_abs [*refptr++ - *orgptr++]; LineSadBlk1 += byte_abs [*refptr++ - *orgptr++]; LineSadBlk2 += byte_abs [*refptr++ - *orgptr++]; LineSadBlk2 += byte_abs [*refptr++ - *orgptr++]; LineSadBlk2 += byte_abs [*refptr++ - *orgptr++]; LineSadBlk2 += byte_abs [*refptr++ - *orgptr++]; LineSadBlk3 += byte_abs [*refptr++ - *orgptr++]; LineSadBlk3 += byte_abs [*refptr++ - *orgptr++]; LineSadBlk3 += byte_abs [*refptr++ - *orgptr++]; LineSadBlk3 += byte_abs [*refptr++ - *orgptr++]; } The full function where this loop exists is below for referance. /*! *********************************************************************** * \brief * Setup the fast search for an macroblock *********************************************************************** */ void SetupFastFullPelSearch (short ref, int list) // <-- reference frame parameter, list0 or 1 { short pmv[2]; pel_t orig_blocks[256], *orgptr=orig_blocks, *refptr, *tem; // created pointer tem int offset_x, offset_y, x, y, range_partly_outside, ref_x, ref_y, pos, abs_x, abs_y, bindex, blky; int LineSadBlk0, LineSadBlk1, LineSadBlk2, LineSadBlk3; int max_width, max_height; int img_width, img_height; StorablePicture *ref_picture; pel_t *ref_pic; int** block_sad = BlockSAD[list][ref][7]; int search_range = max_search_range[list][ref]; int max_pos = (2*search_range+1) * (2*search_range+1); int list_offset = ((img->MbaffFrameFlag)&&(img->mb_data[img->current_mb_nr].mb_field))? img->current_mb_nr%2 ? 
4 : 2 : 0; int apply_weights = ( (active_pps->weighted_pred_flag && (img->type == P_SLICE || img->type == SP_SLICE)) || (active_pps->weighted_bipred_idc && (img->type == B_SLICE))); ref_picture = listX[list+list_offset][ref]; //===== Use weighted Reference for ME ==== if (apply_weights && input->UseWeightedReferenceME) ref_pic = ref_picture->imgY_11_w; else ref_pic = ref_picture->imgY_11; max_width = ref_picture->size_x - 17; max_height = ref_picture->size_y - 17; img_width = ref_picture->size_x; img_height = ref_picture->size_y; //===== get search center: predictor of 16x16 block ===== SetMotionVectorPredictor (pmv, enc_picture->ref_idx, enc_picture->mv, ref, list, 0, 0, 16, 16); search_center_x[list][ref] = pmv[0] / 4; search_center_y[list][ref] = pmv[1] / 4; if (!input->rdopt) { //--- correct center so that (0,0) vector is inside --- search_center_x[list][ref] = max(-search_range, min(search_range, search_center_x[list][ref])); search_center_y[list][ref] = max(-search_range, min(search_range, search_center_y[list][ref])); } search_center_x[list][ref] += img->opix_x; search_center_y[list][ref] += img->opix_y; offset_x = search_center_x[list][ref]; offset_y = search_center_y[list][ref]; //===== copy original block for fast access ===== for (y = img->opix_y; y < img->opix_y+16; y++) for (x = img->opix_x; x < img->opix_x+16; x++) *orgptr++ = imgY_org [y][x]; //===== check if whole search range is inside image ===== if (offset_x >= search_range && offset_x <= max_width - search_range && offset_y >= search_range && offset_y <= max_height - search_range ) { range_partly_outside = 0; PelYline_11 = FastLine16Y_11; } else { range_partly_outside = 1; } //===== determine position of (0,0)-vector ===== if (!input->rdopt) { ref_x = img->opix_x - offset_x; ref_y = img->opix_y - offset_y; for (pos = 0; pos < max_pos; pos++) { if (ref_x == spiral_search_x[pos] && ref_y == spiral_search_y[pos]) { pos_00[list][ref] = pos; break; } } } //===== loop over search range (spiral search): get blockwise SAD ===== **// =====THIS IS THE PART WHERE NESTED FOR STARTS=====** for (pos = 0; pos < max_pos; pos++) // OUTERMOST FOR LOOP { abs_y = offset_y + spiral_search_y[pos]; abs_x = offset_x + spiral_search_x[pos]; if (range_partly_outside) { if (abs_y >= 0 && abs_y <= max_height && abs_x >= 0 && abs_x <= max_width ) { PelYline_11 = FastLine16Y_11; } else { PelYline_11 = UMVLine16Y_11; } } orgptr = orig_blocks; bindex = 0; for (blky = 0; blky < 4; blky++) // SECOND FOR LOOP { LineSadBlk0 = LineSadBlk1 = LineSadBlk2 = LineSadBlk3 = 0; for (y = 0; y < 4; y++) //INNERMOST FOR LOOP WHICH I WANT TO PARALLELIZE { refptr = PelYline_11 (ref_pic, abs_y++, abs_x, img_height, img_width); LineSadBlk0 += byte_abs [*refptr++ - *orgptr++]; LineSadBlk0 += byte_abs [*refptr++ - *orgptr++]; LineSadBlk0 += byte_abs [*refptr++ - *orgptr++]; LineSadBlk0 += byte_abs [*refptr++ - *orgptr++]; LineSadBlk1 += byte_abs [*refptr++ - *orgptr++]; LineSadBlk1 += byte_abs [*refptr++ - *orgptr++]; LineSadBlk1 += byte_abs [*refptr++ - *orgptr++]; LineSadBlk1 += byte_abs [*refptr++ - *orgptr++]; LineSadBlk2 += byte_abs [*refptr++ - *orgptr++]; LineSadBlk2 += byte_abs [*refptr++ - *orgptr++]; LineSadBlk2 += byte_abs [*refptr++ - *orgptr++]; LineSadBlk2 += byte_abs [*refptr++ - *orgptr++]; LineSadBlk3 += byte_abs [*refptr++ - *orgptr++]; LineSadBlk3 += byte_abs [*refptr++ - *orgptr++]; LineSadBlk3 += byte_abs [*refptr++ - *orgptr++]; LineSadBlk3 += byte_abs [*refptr++ - *orgptr++]; } block_sad[bindex++][pos] = LineSadBlk0; block_sad[bindex++][pos] = 
LineSadBlk1; block_sad[bindex++][pos] = LineSadBlk2; block_sad[bindex++][pos] = LineSadBlk3; } } //===== combine SAD's for larger block types ===== SetupLargerBlocks (list, ref, max_pos); //===== set flag marking that search setup have been done ===== search_setup_done[list][ref] = 1; } #endif // _FAST_FULL_ME_
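
    A hedged sketch of one way the innermost loop could be opened up for OpenMP: the pointer post-increments are replaced with explicit indexing so the four y iterations become independent, the accumulators become reductions, and the side effects on abs_y and orgptr are reproduced after the loop. This assumes PelYline_11 (and UMVLine16Y_11 behind it) is safe to call concurrently, and with only four iterations the thread overhead may well outweigh any gain; parallelizing the outer pos loop is more likely to pay off:

        /* Sketch only: index-based rewrite of the innermost loop. */
        #pragma omp parallel for reduction(+:LineSadBlk0,LineSadBlk1,LineSadBlk2,LineSadBlk3)
        for (y = 0; y < 4; y++)
        {
          pel_t *refrow = PelYline_11 (ref_pic, abs_y + y, abs_x, img_height, img_width);
          pel_t *orgrow = orgptr + 16 * y;   /* orgptr is no longer advanced inside the loop */
          int k;
          for (k = 0; k < 4; k++)
          {
            LineSadBlk0 += byte_abs [refrow[k]      - orgrow[k]];
            LineSadBlk1 += byte_abs [refrow[4 + k]  - orgrow[4 + k]];
            LineSadBlk2 += byte_abs [refrow[8 + k]  - orgrow[8 + k]];
            LineSadBlk3 += byte_abs [refrow[12 + k] - orgrow[12 + k]];
          }
        }
        abs_y  += 4;    /* reproduce the side effects the post-increments used to have */
        orgptr += 64;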

    Read the article

  • In-memory Database in Excel

    - by user329174
    Hello, I am looking for a way to import a datatable from Access into an Excel variable and then run queries through this variable to speed up the process. I am trying to migrate from C# .NET where I read a data table from an access database into memory and then used LINQ to query this dataset. It is MUCH faster than how I have it currently coded in VBA where I must make lots of calls to the actual database, which is slow. I have seen the QueryTable mentioned, but it appears that this requires pasting the data into the excel sheet. I would like to keep everything in memory and minimize the interaction between the Excel Sheet and the VBA code as much as possible. I wish we didn't need to use Excel+VBA to do this, but we're kind of stuck with that for now. Thanks for the help!
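
    A hedged VBA sketch of the usual in-memory substitute: pull the Access table once into a client-side, disconnected ADO recordset and re-query it with .Filter, never touching the worksheet. The path, table and field names are illustrative, and ADO is late-bound here so no reference needs adding:

        Sub LoadAndQueryInMemory()
            Dim cn As Object, rs As Object
            Set cn = CreateObject("ADODB.Connection")
            Set rs = CreateObject("ADODB.Recordset")

            cn.Open "Provider=Microsoft.ACE.OLEDB.12.0;Data Source=C:\data\mydb.accdb"

            rs.CursorLocation = 3                       ' adUseClient: keep rows on the client
            rs.Open "SELECT * FROM MyTable", cn, 3, 1   ' adOpenStatic, adLockReadOnly
            Set rs.ActiveConnection = Nothing           ' disconnect: recordset now lives in memory
            cn.Close

            ' Re-query the in-memory rows as often as needed:
            rs.Filter = "Region = 'West' AND Amount > 100"
            Debug.Print "Matching rows: " & rs.RecordCount
            rs.Filter = ""                              ' clear for the next query
        End Sub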

    Read the article

  • subscript for a join (\bowtie) operation in LyX/LaTeX

    - by Amir Rachum
    I'm using LyX to write some Relational Algebra queries. I'm using the \bowtie symbol for the join operation but when I try to put a text in subscript directly under the symbol, I get the following error: ...a_{\t{pId}}\t{person}\right)\bowtie\limits {\t{pId}{1}=\t{pId}_{2}... I'm ignoring this misplaced \limits or \nolimits command. Anyone knows how to do what I want? Preferably in LyX, but ERT code snippets will also be appreciated. Thanks! Edit: \t is a macro for \text.
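
    Two forms that typically compile, assuming amsmath is available (LyX normally loads it) and writing \text in place of the \t macro: either promote \bowtie to an operator so \limits is legal, or use \underset:

        % as an operator, so the condition sits underneath in display style:
        A \mathop{\bowtie}\limits_{\text{pId}_{1}=\text{pId}_{2}} B

        % or, with amsmath:
        A \underset{\text{pId}_{1}=\text{pId}_{2}}{\bowtie} B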

    Read the article

  • Lexing partial SQL in C#

    - by Chris T
    I need to parse partial SQL queries (it's for a SQL injection auditing tool). For example, '1' AND 1=1-- should break down into tokens like [0] => [SQL_STRING, '1'] [1] => [SQL_AND] [2] => [SQL_INT, 1] [3] => [SQL_AND] [4] => [SQL_INT, 1] [5] => [SQL_COMMENT] [6] => [SQL_QUERY_END] Are there at least any existing lexers for SQL that I could base mine on, or any good tools like bison for C#? (I'd rather not write my own grammar, as I need to support most if not all of the MySQL 5 grammar.)
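
    A very small hand-rolled sketch of the tokenizing step in modern C#, assuming a regex-per-token-class approach is enough for an auditing tool; full MySQL 5 coverage would still argue for a generated lexer (for example via ANTLR's C# target). Token names and rules are illustrative:

        using System;
        using System.Collections.Generic;
        using System.Text.RegularExpressions;

        enum SqlToken { SqlString, SqlInt, SqlAnd, SqlComment, SqlOp, SqlIdent }

        static class MiniSqlLexer
        {
            // Order matters: more specific patterns come first. \G anchors at the scan position.
            static readonly (SqlToken Kind, Regex Pattern)[] Rules =
            {
                (SqlToken.SqlComment, new Regex(@"\G--.*")),
                (SqlToken.SqlString,  new Regex(@"\G'(?:[^']|'')*'")),
                (SqlToken.SqlAnd,     new Regex(@"\G(AND\b|&&)", RegexOptions.IgnoreCase)),
                (SqlToken.SqlInt,     new Regex(@"\G\d+")),
                (SqlToken.SqlOp,      new Regex(@"\G(=|<>|<|>|\(|\))")),
                (SqlToken.SqlIdent,   new Regex(@"\G[A-Za-z_][A-Za-z0-9_]*")),
            };

            public static IEnumerable<(SqlToken, string)> Tokenize(string input)
            {
                int pos = 0;
                while (pos < input.Length)
                {
                    if (char.IsWhiteSpace(input[pos])) { pos++; continue; }
                    bool matched = false;
                    foreach (var (kind, pattern) in Rules)
                    {
                        var m = pattern.Match(input, pos);
                        if (m.Success) { yield return (kind, m.Value); pos += m.Length; matched = true; break; }
                    }
                    if (!matched) throw new FormatException($"Unexpected character at {pos}");
                }
            }
        }
        // foreach (var t in MiniSqlLexer.Tokenize("'1' AND 1=1--")) Console.WriteLine(t);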

    Read the article

  • Looking for an elegant way to store one-to-many relationship in coredata when order is important

    - by Eric Schweichler
    I've been trying to come up with a way to solve my problem, but every solution I can think of is messy and makes me want to retch. I have a one-to-many relationship consisting of a Team object that can have many Member objects. When I built my data model using Xcode, I was given the default NSSet in which to store the member objects. Unfortunately, sets are not ordered, and I need to preserve the order of the Member objects and to know if there are empty spaces between Members. I thought of using an NSArray in place of the NSSet and creating a dummy Member object in my data store to mark a vacant spot between two Member objects, but that solution feels like too much of a hack, since I'd always have to filter out the dummy Member from any queries. An NSDictionary would be perfect, as I could store the Member object references and their positions as object-key pairs (taking care of both order and vacancies), but apparently Core Data does not support NSDictionary. Has anyone had a similar need and devised a simple solution?
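
    A common alternative, sketched with a hypothetical integer position attribute added to Member: store the slot index explicitly, sort fetches on it, and let gaps in the numbering stand for vacant spots, which avoids dummy records entirely:

        // Member gains an integer attribute "position" in the data model (illustrative).
        NSFetchRequest *request = [[NSFetchRequest alloc] init];
        request.entity = [NSEntityDescription entityForName:@"Member"
                                     inManagedObjectContext:context];
        request.predicate = [NSPredicate predicateWithFormat:@"team == %@", team];
        request.sortDescriptors =
            @[[NSSortDescriptor sortDescriptorWithKey:@"position" ascending:YES]];

        NSArray *orderedMembers = [context executeFetchRequest:request error:NULL];
        // A missing position value (e.g. 0, 1, 3, ...) marks the vacant slot at index 2.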

    Read the article

  • LINQ to SQL Translation

    - by Ben
    Hi, depending on how I map my LINQ queries to my domain objects, I get the following error: The member 'member' has no supported translation to SQL. This code causes the error: public IQueryable<ShippingMethod> ShippingMethods { get { return from sm in _db.ShippingMethods select new ShippingMethod( sm.ShippingMethodID, sm.Carrier, sm.ServiceName, sm.RatePerUnit, sm.EstimatedDelivery, sm.DaysToDeliver, sm.BaseRate, sm.Enabled ); } } This code works fine: public IQueryable<ShippingMethod> ShippingMethods { get { return from sm in _db.ShippingMethods select new ShippingMethod { Id = sm.ShippingMethodID, Carrier = sm.Carrier, ServiceName = sm.ServiceName, EstimatedDelivery = sm.EstimatedDelivery, DaysToDeliver = sm.DaysToDeliver, RatePerUnit = sm.RatePerUnit, IsEnabled = sm.Enabled, BaseRate = sm.BaseRate }; } } This is the test method I am using: [TestMethod] public void Test_Shipping_Methods() { IOrderRepository orderRepo = new SqlOrderRepository(); var items = orderRepo.ShippingMethods.Where(x => x.IsEnabled); Assert.IsTrue(items.Count() > 0); } How does the way in which I instantiate my object affect the LINQ to SQL translation? Thanks, Ben
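
    If the parameterized constructor has to stay, one hedged workaround is to switch to LINQ to Objects before projecting, so the provider never has to translate the constructed members; the property then returns IEnumerable, and later operators such as Where(x => x.IsEnabled) filter in memory at the cost of pulling all rows first. A sketch of the repository property under that assumption:

        public IEnumerable<ShippingMethod> ShippingMethods
        {
            get
            {
                return _db.ShippingMethods
                          .AsEnumerable()   // switch to LINQ to Objects from here on
                          .Select(sm => new ShippingMethod(
                              sm.ShippingMethodID, sm.Carrier, sm.ServiceName,
                              sm.RatePerUnit, sm.EstimatedDelivery, sm.DaysToDeliver,
                              sm.BaseRate, sm.Enabled));
            }
        }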

    Read the article

  • Custom Classes Passed from Service to a UI threads via AIDL

    - by Honza Pokorny
    I have a service that regularly queries a web server for new messages. The service stores the new messages in an ArrayList. These messages are implemented using a custom class, storing all kinds of metadata (strings and longs). An activity then connects to this service to retrieve those messages and display them to the user. I have an .aidl file that describes the interface that the service exposes. package com.example.package; interface MyInterface { List<Message> getMessages(); } The Message class implements the Parcelable interface, which should allow for the IPC transfer. The problem is this: Eclipse gives me an error saying that the type of List<Message> is unknown. Any imports are marked as invalid. Ideas? Thanks
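
    The piece that is usually missing in this situation, sketched below with the file layout assumed: the parcelable type needs its own .aidl declaration, and the interface file needs an explicit import before List<Message> will resolve:

        // Message.aidl -- same package as the Java class
        package com.example.package;
        parcelable Message;

        // MyInterface.aidl
        package com.example.package;
        import com.example.package.Message;

        interface MyInterface {
            List<Message> getMessages();
        }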

    Read the article

  • Query DNSBL or other block lists using PHP

    - by 55skidoo
    Is there any way to use PHP code to query a DNSBL (block list) provider and find out if a submitted IP address is a bad actor? I would like to take an existing IP address out of a registration database, check whether it's a known block-listed IP address by performing a lookup on it, and then, if it's blacklisted, take an action on it (such as deleting the entry from the registration database). Most of the instructions I have seen assume you are trying to query the blocklist via a mail server, which I can't do. I tried querying via a web browser by typing in queries such as "58.64.xx.xxx.dnsbl.sorbs.net", but that didn't work.
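
    A small PHP sketch of the usual mail-server-free lookup, using only built-in DNS functions: reverse the octets, append the DNSBL zone (copied from the question), and test whether an A record resolves. Listing semantics differ per provider, so the specific return codes may still need interpreting:

        <?php
        function is_listed($ip, $dnsbl = 'dnsbl.sorbs.net') {
            // 58.64.12.34 -> 34.12.64.58.dnsbl.sorbs.net
            $reversed = implode('.', array_reverse(explode('.', $ip)));
            return checkdnsrr($reversed . '.' . $dnsbl, 'A');   // true when the IP is listed
        }

        // $registration_ip comes from the registration table (illustrative variable name)
        if (is_listed($registration_ip)) {
            // e.g. delete the registration record here
        }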

    Read the article

  • Group by/count in LINQ against SQL Compact 3.5 SP2

    - by bash74
    Hello, I am using LINQ to Entities in C# and run queries against SQL Server Compact 3.5 SP2. What I am trying to achieve is a simple group-by with an additional where clause that includes a Count(). var baseIdent="expression"; var found=from o in ObservedElements where o.ObservedRoots.BaseIdent==baseIdent group o by o.ID into grouped where grouped.Count()==1 select new {key=grouped.Key, val=grouped}; foreach(var res in found){ //do something here } This query throws the famous exception "A parameter is not allowed in this location. Ensure that the '@' sign and all other parameters are in a valid location in the SQL statement." When I either omit the where clause OR directly enter the expression "expression" in the query (where o.ObservedRoots.BaseIdent=="expression") everything works just fine. Does anybody know how to solve this? A workaround would also be fine. Thanks in advance, Sebastian
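
    One hedged workaround for the provider limitation: run only the simple filter on SQL Server Compact and move the grouping and the Count() test into memory with AsEnumerable, which keeps the shape of the original query at the cost of materializing every matching row first:

        var found = ObservedElements
            .Where(o => o.ObservedRoots.BaseIdent == baseIdent)   // runs on SQL Server Compact
            .AsEnumerable()                                       // grouping + count run in memory
            .GroupBy(o => o.ID)
            .Where(g => g.Count() == 1)
            .Select(g => new { key = g.Key, val = g });

        foreach (var res in found)
        {
            // do something here
        }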

    Read the article

  • Given GPS coordinates, how do I find nearby landmarks or points-of-interest?

    - by stackoverflowuser2010
    I just bought a Google Nexus One smartphone, and I want to write a small Android application for fun. Suppose I can get my current GPS coordinates, so then how can I programmatically find nearby landmarks or points-of-interest within some radius? Is there an API to get GPS geo-tagged landmarks, like in Google Earth's database? For example, if I'm in downtown Chicago, my program would point me to all the "tourist" things to visit in that city. Ideally, it would all run on my smartphone, but if necessary, I can have the smartphone query a webserver, which would then run more queries.

    Read the article

  • .NET Database Apps: Your Preferred Setup

    - by mdvaldosta
    I'm struggling to settle into a pattern for developing typical database-driven apps in C# and Visual Studio. There are so many ways to set them up: drag-and-drop datasets and adapters, writing the queries manually in ADO.NET, LINQ to SQL, LINQ to Entities, to data-bind or not to data-bind, where to store the connection string (app.config, a method, or both), and so on. There are so many tutorials, and all of them are different. Every time I write something I start hating the way it looks and works, so I scrap it and start over. It's getting a bit tedious. Maybe it's a little of the OCD in me. Would any of you professional developers out there share your method of setting up and structuring your database logic, and maybe some sample code? It's really the organization of the code and the method(s) of interacting with SQL that I'm trying to get into a routine with: one that works and won't get me laughed at by someone reviewing it.

    Read the article

  • [PHP] building html tables from query data... faster?

    - by Andrew Heath
    With my limited experience/knowledge I am using the following structure to generate HTML tables on the fly from MySQL queries: $c = 0; $t = count($results); $table = '<table>'; while ($c < $t) { $table .= "<tr><td>$results[0]</td><td>$results[1]</td> (etc etc) </tr>"; ++$c; } $table .= '</table>'; this works, obviously. But for tables with 300+ rows there is a noticeable delay in pageload while the script builds the table. Currently the maximum results list is only about 1,100 rows, and the wait isn't long, but there's clearly a wait. Are there other methods for outputting an HTML table that are faster than my WHILE loop? (PHP only please...)
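
    Two common tweaks, sketched on the assumption that $results is an array of row arrays and that the delay really is in string building rather than in the query: collect the rows and implode once, or skip the big string entirely and echo as you go so output starts streaming sooner:

        $rows = array();
        foreach ($results as $r) {
            $rows[] = '<tr><td>' . htmlspecialchars($r[0]) . '</td><td>'
                    . htmlspecialchars($r[1]) . '</td></tr>';
        }
        $table = '<table>' . implode('', $rows) . '</table>';

        // or stream it instead of buffering the whole table in $table:
        // echo '<table>';
        // foreach ($results as $r) { echo '<tr><td>', $r[0], '</td><td>', $r[1], '</td></tr>'; }
        // echo '</table>';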

    Read the article
