Search Results

Search found 4815 results on 193 pages for 'parameterized queries'.

Page 168/193 | < Previous Page | 164 165 166 167 168 169 170 171 172 173 174 175  | Next Page >

  • Linq2Sql: query - subquery optimisation

    - by Budda
    I have the following query:

        IList<InfrStadium> stadiums = (from sector in DbContext.sectors
                                       where sector.Type == typeValue
                                       select new InfrStadium(sector.TeamId)
                                      ).ToList();

    and the InfrStadium class constructor:

        private InfrStadium(int teamId)
        {
            IList<Sector> teamSectors = (from sector in DbContext.sectors
                                         where sector.TeamId == teamId
                                         select sector).ToList();
            ... work with data
        }

    The current implementation performs 1+n queries, where n is the number of records fetched by the first query. I want to optimize that. I would love to do it using the 'group' operator, in a way like this:

        IList<InfrStadium> stadiums = (from sector in DbContext.sectors
                                       group sector by sector.TeamId into team_sectors
                                       select new InfrStadium(team_sectors.Key, team_sectors)
                                      ).ToList();

    with the appropriate constructor:

        private InfrStadium(int iTeamId, IEnumerable<InfrStadiumSector> eSectors)
        {
            IList<Sector> teamSectors = eSectors.ToList();
            ... work with data
        }

    But attempting to launch the query causes the following error:

        Expression of type 'System.Int32' cannot be used for constructor parameter of type 'System.Collections.Generic.IEnumerable`1[InfrStadiumSector]'

    Question 1: Could you please explain what is wrong here? I don't understand why 'team_sectors' is treated as 'System.Int32'. I've tried to change the query a little (replacing IEnumerable with IQueryable):

        IList<InfrStadium> stadiums = (from sector in DbContext.sectors
                                       group sector by sector.TeamId into team_sectors
                                       select new InfrStadium(team_sectors.Key, team_sectors.AsQueryable())
                                      ).ToList();

    with the appropriate constructor:

        private InfrStadium(int iTeamId, IQueryable<InfrStadiumSector> eSectors)
        {
            IList<Sector> teamSectors = eSectors.ToList();
            ... work with data
        }

    In this case I received another, similar error:

        Expression of type 'System.Int32' cannot be used for parameter of type 'System.Collections.Generic.IEnumerable`1[InfrStadiumSector]' of method 'System.Linq.IQueryable`1[InfrStadiumSector] AsQueryable[InfrStadiumSector]'

    Question 2: Actually, the same question: I can't understand at all what is going on here... P.S. I have another idea for optimizing the query (described here: Linq2Sql: query optimisation) but I would love to find a solution with 1 request to the DB.
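    A possible workaround (an editorial sketch, not from the thread): LINQ to SQL cannot translate a constructor call that consumes the grouping, but you can fetch all matching sectors in one query and group them in memory with ToLookup, keeping it to a single round-trip. This assumes the (int, IEnumerable<Sector>) constructor is accessible from the calling code:

        // One DB query for all matching sectors, then client-side grouping.
        var sectors = (from sector in DbContext.sectors
                       where sector.Type == typeValue
                       select sector).ToList();          // single round-trip

        IList<InfrStadium> stadiums = sectors
            .ToLookup(s => s.TeamId)                     // group in memory
            .Select(g => new InfrStadium(g.Key, g))      // IGrouping<int, Sector> is IEnumerable<Sector>
            .ToList();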

    Read the article

  • What technologies/tools do people use to implement live websites?

    - by MadSeb
    Hi, I have the following situation:

    - I have a server A hooked up to a piece of hardware that sends out values and information every second. Programs on the server machine can read these values. Server A is in a very remote location, so the Internet connection is very slow and not reliable, but the connection does exist. Let's say it's a weather station in the Arctic.
    - Users from the home location want to monitor the weather values somehow.

    Well, the users could use a remote desktop connection to server A, but that would be far too slow. My idea is to have a website on a web server (let's call the web server B; B is in the home location), make server A connect to server B and somehow send values, and have the web application read the values and display them... but how do you build such a system? I know I can use MySQL and have server A connect to a SQL server on server B and send INSERT queries, and have the web application running on server B constantly read from the SQL server, but I think that would be way too slow, and there has to be a better solution. Any ideas? BTW, the users should be able to send information to the weather station from the website as well (so an ADMIN user should be allowed to shut down the weather station from the website, or whatever). Best regards, MadSeb

    Read the article

  • Sql Server problems reading columns with a foreign key

    - by illdev
    I have a weird situation where simple queries seem to never finish. For instance:

        SELECT top 100 ArticleID FROM Article WHERE ProductGroupID=379114    -- returns immediately
        SELECT top 1000 ArticleID FROM Article WHERE ProductGroupID=379114   -- never returns
        SELECT ArticleID FROM Article WHERE ProductGroupID=379114            -- never returns
        SELECT top 1000 ArticleID FROM Article                               -- returns immediately

    By 'returning' I mean 'in Query Analyzer the green check mark appears and it says "Query executed successfully"'. I sometimes get the rows painted to the grid in QA, but the query still keeps waiting until my client times out - 'sometimes': this query shows the behavior.

        SELECT ProductGroupID AS Product23_1_, ArticleID AS ArticleID1_, ArticleID AS ArticleID18_0_,
               Inventory_Name AS Inventory3_18_0_, Inventory_UnitOfMeasure AS Inventory4_18_0_,
               BusinessKey AS Business5_18_0_, Name AS Name18_0_, ServesPeople AS ServesPe7_18_0_,
               InStock AS InStock18_0_, Description AS Descript9_18_0_, Description2 AS Descrip10_18_0_,
               TechnicalData AS Technic11_18_0_, IsDiscontinued AS IsDisco12_18_0_, Release AS Release18_0_,
               Classifications AS Classif14_18_0_, DistributorName AS Distrib15_18_0_,
               DistributorProductCode AS Distrib16_18_0_, Options AS Options18_0_, IsPromoted AS IsPromoted18_0_,
               IsBulkyFreight AS IsBulky19_18_0_, IsBackOrderOnly AS IsBackO20_18_0_, Price AS Price18_0_,
               Weight AS Weight18_0_, ProductGroupID AS Product23_18_0_, ConversationID AS Convers24_18_0_,
               DistributorID AS Distrib25_18_0_, type AS Type18_0_
        FROM Article AS articles0_
        WHERE (IsDiscontinued = '0') AND (ProductGroupID = 379121)

    I have no idea what is going on. Probably SELECT is broken ;) Can anyone tell me how to handle such a situation? More info available on request.

    Read the article

  • Passing a WHERE clause for a Linq-to-Sql query as a parameter

    - by Mantorok
    Hi all

    This is probably pushing the boundaries of Linq-to-Sql a bit, but given how versatile it has been so far I thought I'd ask. I have 3 queries that select identical information and differ only in the where clause. Now, I know I can pass a delegate in, but that only lets me filter the results already returned; I want to build up the query via a parameter to ensure efficiency. Here is the query:

        from row in DataContext.PublishedEvents
        join link in DataContext.PublishedEvent_EventDateTimes on row.guid equals link.container
        join time in DataContext.EventDateTimes on link.item equals time.guid
        where row.ApprovalStatus == "Approved"
            && row.EventType == "Event"
            && time.StartDate <= DateTime.Now.Date.AddDays(1)
            && (!time.EndDate.HasValue || time.EndDate.Value >= DateTime.Now.Date.AddDays(1))
        orderby time.StartDate
        select new EventDetails
        {
            Title = row.EventName,
            Description = TrimDescription(row.Description)
        };

    The code I want to apply via a parameter would be:

        time.StartDate <= DateTime.Now.Date.AddDays(1)
            && (!time.EndDate.HasValue || time.EndDate.Value >= DateTime.Now.Date.AddDays(1))

    Is this possible? I don't think it is, but thought I'd check first. Thanks
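    One way to do it (an editorial sketch, not from the thread): pass the filter as an Expression<Func<...>> and apply it to the EventDateTimes table before the join, so it is composed into the generated SQL rather than applied in memory. The EventDateTime entity name is an assumption here:

        // The predicate is part of the expression tree, so it executes in the database.
        IList<EventDetails> GetEvents(Expression<Func<EventDateTime, bool>> timeFilter)
        {
            var times = DataContext.EventDateTimes.Where(timeFilter);
            return (from row in DataContext.PublishedEvents
                    join link in DataContext.PublishedEvent_EventDateTimes on row.guid equals link.container
                    join time in times on link.item equals time.guid
                    where row.ApprovalStatus == "Approved" && row.EventType == "Event"
                    orderby time.StartDate
                    select new EventDetails
                    {
                        Title = row.EventName,
                        Description = TrimDescription(row.Description)
                    }).ToList();
        }

        // Usage, passing the clause from the question:
        var events = GetEvents(time => time.StartDate <= DateTime.Now.Date.AddDays(1)
            && (!time.EndDate.HasValue || time.EndDate.Value >= DateTime.Now.Date.AddDays(1)));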

    Read the article

  • Creating a MySQL view to SELECT columns from different tables

    - by user330429
    I need help in constructing a VIEW on 4 tables. The view should contain the columns ER.ID, ER.EMPID, ER.CUSTID, ER.STATUS, ER.DATEREPORTED, ER.REPORT, EB.NAME, CR.CUSTNAME, CR.LOCID, CL.LOCNAME, DI.DEPTNAME, with the aliases EMP_REPORT ER, EMP_BIO EB, CUST_RECORD CR, CUST_LOC CL, DEPT_ID DI. The data models are:

        describe EMP_REPORT;
        +--------------+-------------+------+-----+---------+----------------+
        | Field        | Type        | Null | Key | Default | Extra          |
        +--------------+-------------+------+-----+---------+----------------+
        | id           | int(11)     | NO   | PRI | NULL    | auto_increment |
        | empid        | int(11)     | NO   |     | NULL    |                |
        | custid       | int(11)     | NO   |     | NULL    |                |
        | status       | varchar(32) | NO   |     | NULL    |                |
        | datereported | bigint(20)  | NO   |     | NULL    |                |
        | report       | text        | YES  |     | NULL    |                |
        +--------------+-------------+------+-----+---------+----------------+

        describe EMP_BIO;
        +--------+-------------+------+-----+---------+-------+
        | Field  | Type        | Null | Key | Default | Extra |
        +--------+-------------+------+-----+---------+-------+
        | empid  | int(11)     | NO   | PRI | NULL    |       |
        | name   | varchar(56) | NO   |     | NULL    |       |
        | sex    | char(1)     | NO   |     | NULL    |       |
        | deptid | int(11)     | NO   |     | NULL    |       |
        | email  | varchar(32) | NO   |     | NULL    |       |
        | mobile | bigint(20)  | YES  |     | NULL    |       |
        | gtlk   | varchar(32) | YES  |     | NULL    |       |
        | skype  | varchar(32) | YES  |     | NULL    |       |
        | cvid   | int(11)     | YES  |     | NULL    |       |
        +--------+-------------+------+-----+---------+-------+

        describe CUST_RECORD;
        +----------+--------------+------+-----+---------+----------------+
        | Field    | Type         | Null | Key | Default | Extra          |
        +----------+--------------+------+-----+---------+----------------+
        | custid   | int(11)      | NO   | PRI | NULL    | auto_increment |
        | custname | varchar(32)  | NO   |     | NULL    |                |
        | address  | varchar(255) | YES  |     | NULL    |                |
        | contactp | varchar(32)  | YES  |     | NULL    |                |
        | mobile   | bigint(20)   | YES  |     | NULL    |                |
        | locid    | int(11)      | NO   |     | NULL    |                |
        | remarks  | text         | YES  |     | NULL    |                |
        | date     | int(11)      | YES  |     | NULL    |                |
        | addedby  | int(11)      | YES  |     | NULL    |                |
        +----------+--------------+------+-----+---------+----------------+

        describe CUST_LOC;
        +---------+-------------+------+-----+---------+-------+
        | Field   | Type        | Null | Key | Default | Extra |
        +---------+-------------+------+-----+---------+-------+
        | locid   | int(11)     | NO   | PRI | 0       |       |
        | locname | varchar(32) | NO   |     | NULL    |       |
        +---------+-------------+------+-----+---------+-------+

        describe DEPT_ID;
        +----------+-------------+------+-----+---------+-------+
        | Field    | Type        | Null | Key | Default | Extra |
        +----------+-------------+------+-----+---------+-------+
        | deptid   | int(11)     | NO   |     | NULL    |       |
        | deptname | varchar(32) | YES  |     | NULL    |       |
        +----------+-------------+------+-----+---------+-------+

    The table EMP_REPORT contains reports submitted by employees; all of its columns need to be fetched. The empid in this table should be used to fetch the corresponding name in EMP_BIO (employee biodata). The custid in EMP_REPORT should be used to fetch the corresponding locid in CUST_RECORD (customer record), which is used to fetch locname in the CUST_LOC (customer location) table. The empid in EMP_REPORT is also used to fetch the corresponding deptid in EMP_BIO, which is then used to fetch the corresponding deptname from the DEPT_ID (department) table. I tried constructing the view using a union of different select queries, but didn't get proper results. Please help me. Thanks in advance. PS: sorry for my poor English
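    For what it's worth, an editorial sketch following exactly the join path described above (JOINs rather than UNION; the view name is hypothetical):

        CREATE VIEW EMP_REPORT_VIEW AS
        SELECT ER.ID, ER.EMPID, ER.CUSTID, ER.STATUS, ER.DATEREPORTED, ER.REPORT,
               EB.NAME, CR.CUSTNAME, CR.LOCID, CL.LOCNAME, DI.DEPTNAME
        FROM EMP_REPORT ER
        JOIN EMP_BIO     EB ON EB.EMPID  = ER.EMPID    -- employee name (and deptid)
        JOIN CUST_RECORD CR ON CR.CUSTID = ER.CUSTID   -- customer record
        JOIN CUST_LOC    CL ON CL.LOCID  = CR.LOCID    -- location name
        JOIN DEPT_ID     DI ON DI.DEPTID = EB.DEPTID;  -- department name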

    Read the article

  • How should I name my SQL query files? Should I use some methodology?

    - by Mehper C. Palavuzlar
    We have an Oracle 10g database (a huge one) in our company, and I provide employees with data upon request. My problem is that I save almost every SQL query I write, and now my list has grown too large. I want to organize and rename these .sql files so that I can find the one I want easily.

    At the moment, I'm using folders named Sales Dept, Field Team, Planning Dept, Special, etc., and under those folders there are .sql files like:

        Delivery_sales_1, Delivery_sales_2, ...
        Sent_sold_lostsales_endpoints, ...
        Sales_provinces_period, Returnrates_regions_bymonths, ...
        Jack_1, Steve_1, Steve_2, ...

    I try to name the files according to their content, but this makes the file names longer and does not completely meet my needs. Sometimes someone comes and demands a special report, and I name the file after that person, but this is also not so good. I know duplicates or very similar files are accumulating over time, but I don't have control over them. Can you show me the right direction for renaming all these files and folders and organizing my queries for easy and better control? TIA.

    Read the article

  • MySQL LIMIT 1 but query 15 rows?

    - by Ian
    Basically what I'm trying to do is compare row IDs against 15 results in MySQL, eliminating all but 1 (using NOT IN) and then pulling that result. Normally this would be fine by itself; however, the order of the 15 rows I'm querying is constantly changing based on a ranking, so there is a possibility that between the time the ranking updates and the AJAX request (in which I submit the IDs for NOT IN), more than just one ID has changed - which would of course bring back more than one row, which I do not want. So in short: is there a way I can query 15 rows but only return one, without having to run two separate queries? Any help is appreciated, thank you.

    EXAMPLE: Say I have 7 items in my database, and I'm displaying 5 on the page to the user:

        Apple
        Orange
        Kiwi
        Banana
        Grape

    But in the database I also have:

        Peach
        Blackberry

    Now, if the user deletes an item from their list, I want to add another item (based on a ranking). To know what they have on their list at the moment, I send the remaining items to the database (say they deleted Kiwi; I would send Apple, Orange, Banana, and Grape). I then select the highest-ranked 5 items from the remaining six, make sure they are not the ones already displayed on the page, and add the new one to the list (either Peach or Blackberry). All good and well - except that if both Peach and Blackberry now outrank Grape, the query will return two results instead of just one, because it would have searched:

        Apple
        Orange
        Banana
        Peach
        Blackberry

    and excluded:

        Apple
        Orange
        Banana
        Grape

    which leaves us with both Peach and Blackberry, instead of just Peach or Blackberry.
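    For reference, an editorial sketch of the usual fix (table and column names hypothetical): let the database pick the single winner by ordering the non-excluded rows by rank and taking one, so it never matters how many IDs changed in the meantime.

        -- Exclude the items still displayed, then take only the
        -- single highest-ranked remaining row.
        SELECT id, name
        FROM items
        WHERE id NOT IN (1, 2, 4, 5)      -- IDs submitted by the AJAX request
        ORDER BY ranking DESC
        LIMIT 1;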

    Read the article

  • Using Memcached in Python/Django - questions.

    - by Thomas
    I am starting to use Memcached to make my website faster. For constant data in my database I use this:

        from django.core.cache import cache

        cache_key = 'regions'
        regions = cache.get(cache_key)
        if regions is None:
            # Not found in cache
            regions = Regions.objects.all()
            cache.set(cache_key, regions, 2592000)  # 2592000 seconds = 30 days
        return regions

    For seldom-changed data I use signals:

        from django.core.cache import cache
        from django.db.models import signals

        def nuke_social_network_cache(sender, instance, **kwargs):
            cache_key = 'networks_for_%s' % (instance.user_id,)
            cache.delete(cache_key)

        signals.post_save.connect(nuke_social_network_cache, sender=SocialNetworkProfile)
        signals.post_delete.connect(nuke_social_network_cache, sender=SocialNetworkProfile)

    Is this the correct way? I installed django-memcached-0.1.2, which shows me:

        Memcached Server Stats
        Server     Keys  Hits  Gets  Hit_Rate  Traffic_In  Traffic_Out  Usage    Uptime
        127.0.0.1  15    220   276   79%       83.1 KB     364.1 KB     18.4 KB  22:21:25

    Can somebody explain what the columns mean?

    And a last question: I have templates where I fetch many records from a few tables (relationships). In my view I get records from one table, and in the templates I show them plus related info from the other tables. Generating the page takes a few seconds even for very small tables (<100 records). Is there an easy way to cache queries issued from templates? Or do I have to build some big structure in my view (with all related tables), cache it, and send that to the template?
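    On the last question, two things that often help (editorial suggestions, not from the thread; the Item model and its related names are hypothetical): use select_related() in the view so related rows come back in one JOINed query instead of one query per template access, and/or wrap the expensive part of the template in Django's cache tag:

        # In the view: pull the related tables in a single query.
        items = Item.objects.select_related('category', 'owner')

        {# In the template: cache the rendered fragment for 10 minutes. #}
        {% load cache %}
        {% cache 600 item_sidebar request.user.id %}
            ... expensive related lookups ...
        {% endcache %}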

    Read the article

  • Access VBA question: Change the query being referenced by a function, depending on context

    - by Tara Amatista
    I have a custom function in Access 2007 that hinges on grabbing data out of a specific query. It opens Outlook, creates a new email, and populates the fields with specific addresses and data taken from the query ("DecisionEmail"). Now I want to make a different query ("RequestEmail") and have it populate the email with that data. So all I have to do is change this line:

        Set MailList = db.OpenRecordset("DecisionEmail")

    and that's where I get stumped. This is my desired result: if the user is on Form_Decision and clicks the button "Send email", "DecisionEmail" will get plugged into the function and that data will appear in the email. If the user is on Form_SendRequest and clicks the button "Send email", "RequestEmail" will get plugged in instead. The reason these are different queries is that they contain very different information that is smudged about in different ways. However, since it's just one little thing that needs to change in the function code, I don't think a brand new function is a good idea. My last resort would be to make a brand new function and use the Conditions field in the Macro interface to choose between them, but I have a feeling there's a more elegant solution. I have a vague notion of setting the query names as variables and using an If statement, but I just don't have the mental vocabulary for thinking through this.
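    The vague notion at the end is on the right track; a minimal editorial sketch, with the surrounding Outlook code and the db object assumed from the original function:

        ' Make the query name a parameter instead of a literal.
        Public Sub SendEmailFromQuery(queryName As String)
            Dim MailList As DAO.Recordset
            Set MailList = db.OpenRecordset(queryName)
            ' ... existing Outlook code ...
        End Sub

        ' Behind each form's "Send email" button:
        '   Form_Decision:     SendEmailFromQuery "DecisionEmail"
        '   Form_SendRequest:  SendEmailFromQuery "RequestEmail"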

    Read the article

  • "Most popular" GROUP BY in LINQ?

    - by tags2k
    Assuming a table of tags like the Stack Overflow question tags:

        TagID (bigint), QuestionID (bigint), Tag (varchar)

    What is the most efficient way to get the 25 most used tags using LINQ? In SQL, a simple GROUP BY will do:

        SELECT Tag, COUNT(Tag) FROM Tags GROUP BY Tag

    I've written some LINQ that works:

        var groups = from t in DataContext.Tags
                     group t by t.Tag into g
                     select new { Tag = g.Key, Frequency = g.Count() };

        return groups.OrderByDescending(g => g.Frequency).Take(25);

    Like, really? Isn't this mega-verbose? The sad thing is that I'm doing this to save a massive number of queries, as my Tag objects already contain a Frequency property that would otherwise need to check back with the database for every Tag if I actually used the property. So I then parse these anonymous types back into Tag objects:

        groups.OrderByDescending(g => g.Frequency).Take(25).ToList()
              .ForEach(t => tags.Add(new Tag() { Tag = t.Tag, Frequency = t.Frequency }));

    I'm a LINQ newbie, and this doesn't seem right. Please show me how it's really done.
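    An editorial note: the intermediate anonymous type can be skipped by projecting straight into Tag (assuming, as in the post, that Tag has settable Tag and Frequency properties):

        // Group, order, and materialize Tag objects in one statement.
        List<Tag> tags = (from t in DataContext.Tags
                          group t by t.Tag into g
                          orderby g.Count() descending
                          select new Tag { Tag = g.Key, Frequency = g.Count() })
                         .Take(25)
                         .ToList();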

    Read the article

  • Statistical analysis on large data set to be published on the web

    - by dassouki
    I have a non-computer-related data logger that collects data from the field. This data is stored as text files, and I manually lump the files together and organize them. The current format is one CSV file per year per logger. Each file is around 4,000,000 lines x 7 loggers x 5 years = a lot of data. Some of the data is organized as bins (item_type, item_class, item_dimension_class), and other data is more unique, such as item_weight, item_color, date_collected, and so on...

    Currently, I do statistical analysis on the data using a Python/NumPy/matplotlib program I wrote. It works fine, but the problem is that I'm the only one who can use it, since it and the data live on my computer. I'd like to publish the data on the web using a Postgres db; however, I need to find or implement a statistical tool that'll take a large Postgres table and return statistical results within an adequate time frame. I'm not familiar with Python for the web; however, I'm proficient with PHP on the web side, and with Python on the offline side.

    Users should be allowed to create their own histograms and data analyses. For example, one user can search for all items that are blue and shipped between week x and week y, while another user can sort the weight distribution of all items by hour for the whole year. I was thinking of creating and indexing my own statistical tools, or automating the process somehow to emulate most queries. This seemed inefficient. I'm looking forward to hearing your ideas. Thanks

    Read the article

  • Django Aggregation Across Reverse Relationship

    - by Tom
    Given these two models:

        class Profile(models.Model):
            user = models.ForeignKey(User, unique=True, verbose_name=_('user'))
            about = models.TextField(_('about'), blank=True)
            zip = models.CharField(max_length=10, verbose_name='zip code', blank=True)
            website = models.URLField(_('website'), blank=True, verify_exists=False)

        class ProfileView(models.Model):
            profile = models.ForeignKey(Profile)
            viewer = models.ForeignKey(User, blank=True, null=True)
            created = models.DateTimeField(auto_now_add=True)

    I want to get all profiles sorted by total views. I can get a list of profile ids sorted by total views with:

        ProfileView.objects.values('profile').annotate(Count('profile')).order_by('-profile__count')

    But that's just a dictionary of profile ids, which means I then have to loop over it and put together a list of profile objects. That costs a number of additional queries and still doesn't result in a QuerySet. At that point, I might as well drop to raw SQL. Before I do: is there a way to do this from the Profile model? ProfileViews are related via a ForeignKey field, but it's not as though the Profile model knows that, so I'm not sure how to tie the two together. As an aside, I realize I could just store views as a property on the Profile model, and that may turn out to be what I do here, but I'm still interested in learning how to better use the aggregation functions.
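    For the record, aggregation does work across the reverse relation (a sketch; "profileview" is the lowercased reverse name Django derives from the ProfileView model):

        from django.db.models import Count

        # One query; returns a real QuerySet of Profile objects,
        # each carrying a view_count attribute.
        profiles = (Profile.objects
                    .annotate(view_count=Count('profileview'))
                    .order_by('-view_count'))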

    Read the article

  • MySQL query in a Drupal database - groupwise maximum with duplicate data

    - by nselikoff
    I'm working on a MySQL query in a Drupal database that pulls together users and two different CCK content types. I know people ask for help with groupwise maximum queries all the time... I've done my best, but I need help. This is what I have so far:

        # the artists
        SELECT users.uid, users.name AS username, n1.title AS artist_name
        FROM users
        LEFT JOIN users_roles ur ON users.uid=ur.uid
        INNER JOIN role r ON ur.rid=r.rid AND r.name='artist'
        LEFT JOIN node n1 ON n1.uid = users.uid AND n1.type = 'submission'
        WHERE users.status = 1
        ORDER BY users.name;

    This gives me data that looks like:

        uid  username  artist_name
        1    foo       Joe the Plumber
        2    bar       Jane Doe
        3    baz       The Tooth Fairy

    Also, I've got this query:

        # artwork
        SELECT n.nid, n.uid, a.field_order_value
        FROM node n
        LEFT JOIN content_type_artwork a ON n.nid = a.nid
        WHERE n.type = 'artwork'
        ORDER BY n.uid, a.field_order_value;

    Which gives me data like this:

        nid  uid  field_order_value
        1    1    1
        2    1    3
        3    1    2
        4    2    NULL
        5    3    1
        6    3    1

    Additional relevant info:

    - nid is the primary key for an Artwork
    - every Artist has one or more Artworks
    - valid data for field_order_value is NULL, 1, 2, 3, or 4
    - field_order_value is not necessarily unique per Artist - an Artist could have 4 Artworks all with field_order_value = 1.

    What I want is the row with the minimum field_order_value from my second query, joined with the artist information from the first query. In cases where the field_order_value is not valuable information (either because the Artist has used duplicate values among their Artworks or left that field NULL), I would like the row with the minimum nid from the second query.
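    One way to express that tie-breaking rule (an editorial sketch, not from the thread; it sorts NULL last via COALESCE and can then be joined onto the artist query above):

        -- Per artist: smallest field_order_value, ties broken by smallest nid.
        SELECT n.uid, a.nid, a.field_order_value
        FROM node n
        JOIN content_type_artwork a ON a.nid = n.nid
        WHERE n.type = 'artwork'
          AND a.nid = (SELECT a2.nid
                       FROM node n2
                       JOIN content_type_artwork a2 ON a2.nid = n2.nid
                       WHERE n2.uid = n.uid AND n2.type = 'artwork'
                       ORDER BY COALESCE(a2.field_order_value, 99), a2.nid
                       LIMIT 1);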

    Read the article

  • LINQ query to find if items in a list are contained in another list

    - by cjohns
    I have the following code:

        List<string> test1 = new List<string> { "@bob.com", "@tom.com" };
        List<string> test2 = new List<string> { "[email protected]", "[email protected]" };

    I need to remove anyone in test2 that has @bob.com or @tom.com. What I have tried is this:

        bool bContained1 = test1.Contains(test2);
        bool bContained2 = test2.Contains(test1);

    bContained1 = false but bContained2 = true. I would prefer not to loop through each list but instead use a LINQ query to retrieve the data. bContained1 is the same condition as the LINQ query that I have created below:

        List<string> test3 = test1.Where(w => !test2.Contains(w)).ToList();

    The query above works on an exact match but not partial matches. I have looked at other queries, but I can't find a close comparison to this with LINQ. Any ideas or anywhere you can point me to would be a great help.
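    A sketch of one way to do the partial match (editorial; a single LINQ query, no explicit loop):

        // Keep only the addresses whose domain suffix is not in test1.
        List<string> filtered = test2
            .Where(address => !test1.Any(domain => address.EndsWith(domain)))
            .ToList();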

    Read the article

  • NHibernate: casting string to bool using NHibernate Criteria

    - by code-zoop
    I have an NHibernate query using Criteria, and I am trying to cast a string to bool in the query itself. I have done the same with casting a string to int, and that works well (the "DataField" property is "1" as a string):

        var result = Session
            .CreateCriteria<Car>()
            .Add(Restrictions.Eq(
                Projections.Cast(NHibernateUtil.Int32, Projections.Property("DataField")), 1))
            .List<Car>();
        tx.Commit();

    But when I try to do the same with bool, I do not get the expected result:

        var result = Session
            .CreateCriteria<Car>()
            .Add(Restrictions.Eq(
                Projections.Cast(NHibernateUtil.Boolean, Projections.Property("DataField")), true))
            .List<Car>();
        tx.Commit();

    "DataField" is the string "True", but the result is an empty list, where it should contain 100 elements with the "DataField" property string set to "True". I have tried with the strings "true" and "1", but the result is still an empty list.

    [EDIT] As commented below, I could check for the string "True" or "False", but I would say this is a more general question than just for the Boolean. Note that the idea is to have some sort of key-value representation of the data, where the value can be different data types. I need the value table to contain all data, so storing the data as strings seems like the cleanest solution! I have been able to use the method above to store both int and double as strings and do the cast in the query, but I have not succeeded using the same method for DateTime and Boolean. And for DateTime it is crucial to have the actual DateTime object. How can I make the cast from string to bool, and string to DateTime, work in the queries? Thanks
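    An editorial sketch of the string-comparison workaround mentioned in the edit (comparing against the stored representation rather than casting; Boolean.ToString() yields "True"/"False", matching the column contents described above):

        // Compare the raw string column against the bool's string form.
        bool wanted = true;
        var result = Session
            .CreateCriteria<Car>()
            .Add(Restrictions.Eq("DataField", wanted.ToString()))   // "True"
            .List<Car>();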

    Read the article

  • Performance comparison of shell scripts vs. high-level interpreted languages (C#/Java/etc.)

    - by dferraro
    Hi all,

    First - this is not meant to start an ignorant 'which is better' war thread... Rather, I generally need help in making an architecture decision/argument to put forward to my boss. Skipping the details - I simply would love to know and find the results from anyone who has done performance comparisons of shell scripts vs [insert general-purpose (interpreted) programming language here], such as C# or Java... Surprisingly, I have spent some time on Google and searching here and found none of this data. Has anyone ever done these comparisons, in different use cases: hitting a database in some number of loops doing different types of SQL queries (Oracle preferred, but MSSQL would do), such as any of the CRUD ops - and also not hitting the database, just a regular 50k-iteration loop comparison doing different types of calculations, and things of that nature? In particular - for right now, I need a comparison of hitting an Oracle DB from a shell script vs, let's say, C# (again, any GPPL that's interpreted would be fine, even higher-level ones like Python). But I also need to know about standard programming calculations/instructions, etc... Before you ask 'why not just write a quick test yourself?': the answer is that I've been a Windows developer my whole life/career and have very limited knowledge of shell scripting - not to mention *nix as a whole... So asking the question on here of the more experienced guys would be greatly beneficial, not to mention time-saving, as we are in near-perpetual deadline crunch as it is ;).

    Thanks so much in advance,

    Read the article

  • Most watched videos this week

    - by Jan Hancic
    I have a YouTube-like web page where users upload & watch videos. I would like to add a "most watched videos this week" list to my page, but this list should not contain just the videos that were uploaded in the previous week - it should consider all videos. I'm currently recording views in a single counter column, so I have no information on when a video was watched. So now I'm searching for a solution for how to record this data.

    The first option is the most obvious (and the correct one, as far as I know): have a separate table in which you insert a new line every time you want to record a new view (storing the ID of the video and the timestamp). I'm worried that I would quickly get huge amounts of data in this table, and queries using this table would be extremely slow (we get about 3 million views a month).

    The second solution isn't as flexible but is easier on the database. I would add 7 columns to the "videos" table (one for each day of the week): views_monday, views_tuesday, views_wednesday, ... and increment the value in the correct column based on the day. I would reset the current day's column to 0 at midnight. I could then easily get the most watched videos of the week by summing these 7 columns.

    What do you think - should I bother with the first solution, or will the second one suffice for my case? If you have a better solution, please share! Oh, I'm using MySQL.
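    For reference, a sketch of what option 1 looks like in MySQL (table and column names hypothetical); with the index on the timestamp, the weekly aggregation stays a range scan rather than a full-table pass:

        -- One row per view.
        CREATE TABLE video_views (
            video_id  INT NOT NULL,
            viewed_at TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,
            KEY idx_viewed_at (viewed_at, video_id)
        );

        -- Most watched videos over the last 7 days, all videos considered.
        SELECT video_id, COUNT(*) AS views_this_week
        FROM video_views
        WHERE viewed_at >= NOW() - INTERVAL 7 DAY
        GROUP BY video_id
        ORDER BY views_this_week DESC
        LIMIT 10;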

    Read the article

  • CodeIgniter: inserting a multidimensional array as rows in MySQL

    - by RisingSun
    Please refer to this question I asked: Codeigniter Insert Multiple Rows in SQL. To restate:

        <tr>
            <td><input type="text" name="user[0][name]" value=""></td>
            <td><input type="text" name="user[0][address]" value=""><br></td>
            <td><input type="text" name="user[0][age]" value=""></td>
            <td><input type="text" name="user[0][email]" value=""></td>
        </tr>
        <tr>
            <td><input type="text" name="user[1][name]" value=""></td>
            <td><input type="text" name="user[1][address]" value=""><br></td>
            <td><input type="text" name="user[1][age]" value=""></td>
            <td><input type="text" name="user[1][email]" value=""></td>
        </tr>
        ..........

    This can be inserted into MySQL like this:

        foreach ($_POST['user'] as $user)
        {
            $this->db->insert('mytable', $user);
        }

    This results in multiple MySQL queries. Is it possible to optimise it further, so that the insert occurs in one query? Something like "insert multiple rows via a php array into mysql", but taking advantage of CodeIgniter's simpler syntax. Thanks
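    If your CodeIgniter version provides it, the Active Record batch insert compiles a single multi-row INSERT (an editorial sketch; insert_batch() appears in the 2.x Active Record class):

        // One query: INSERT INTO mytable (name, address, age, email) VALUES (...), (...), ...
        $this->db->insert_batch('mytable', $_POST['user']);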

    Read the article

  • How do I order on a common attribute of two models in the DB?

    - by Will
    If I have two tables, Books and CDs, with corresponding models, I want to display to the user a combined list of books and CDs. I also want to be able to sort this list on common attributes (release date, genre, price, etc.). I also have basic filtering on the common attributes. The list will be large, so I will be using pagination to manage the load.

        items = []
        items << CD.all(:limit => 20, :page => params[:page], :order => "genre ASC")
        items << Book.all(:limit => 20, :page => params[:page], :order => "genre ASC")
        re_sort(items, "genre ASC")

    Right now I am doing two queries, concatenating them, and then sorting them. This is very inefficient. It also breaks down when I use paging and filtering: if I am on page 2, how do I know what page of each individual table I am really on? There is no way to determine this without getting all items from each table. I have thought that if I create a new class called Item that has a one-to-one relationship with either a Book or a CD, I could do something like:

        Item.all(:limit => 20, :page => params[:page], :include => [:books, :cds], :order => "genre ASC")

    However, this gives back an 'ambiguous' error, so it can only be written as:

        Item.all(:limit => 20, :page => params[:page], :include => [:books, :cds], :order => "books.genre ASC")

    and that does not interleave the books and CDs in the way I want. Any suggestions?

    Read the article

  • How can I get an iterable resultset from the database using PDO, instead of a large array?

    - by Tchalvak
    I'm using PDO inside a database abstraction library function, query(). I'm using fetchAll(), which, if you have a lot of results, can get memory intensive, so I want to provide an argument to toggle between a fetchAll associative array and a PDO result set that can be iterated over with foreach and requires less memory (somehow). I remember hearing about this, and I searched the PDO docs, but I couldn't find any useful way to do it. Does anyone know how to get an iterable resultset back from PDO instead of just a flat array? And am I right that using an iterable resultset will be easier on memory? I'm using Postgresql, if it matters in this case.

    The current query function is as follows, just for clarity:

        /**
         * Running bound queries on the database.
         *
         * Use: query('select all from players limit :count', array('count'=>10));
         * Or:  query('select all from players limit :count', array('count'=>array(10, PDO::PARAM_INT)));
         **/
        function query($sql_query, $bindings=array()){
            DatabaseConnection::getInstance();
            $statement = DatabaseConnection::$pdo->prepare($sql_query);
            foreach($bindings as $binding => $value){
                if(is_array($value)){
                    $statement->bindParam($binding, $value[0], $value[1]);
                } else {
                    $statement->bindValue($binding, $value);
                }
            }
            $statement->execute();
            // TODO: Return an iterable resultset here, and allow switching between array and iterable resultset.
            return $statement->fetchAll(PDO::FETCH_ASSOC);
        }
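    One detail worth noting (editorial): PDOStatement itself implements Traversable, so the executed statement can be returned and iterated directly, fetching one row at a time instead of materializing the whole array. A sketch of the toggle, replacing the TODO above (note the statement can only be iterated once):

        // Same function body, with an $iterable flag added to the signature.
        $statement->execute();
        if ($iterable) {
            $statement->setFetchMode(PDO::FETCH_ASSOC);
            return $statement;                    // foreach ($result as $row) { ... }
        }
        return $statement->fetchAll(PDO::FETCH_ASSOC);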

    Read the article

  • Querying a huge database table takes too much time in MySQL

    - by Vijay
    Hi all,

    I am running SQL queries on a MySQL db table that has 110M+ unique records for a whole day.

    Problem: whenever I run any query with a "where" clause, it takes at least 30-40 minutes. Since I want to generate most of the data on the next day, I need access to the whole db table. Could you please guide me to optimize / restructure the deployment model?

    Site description:

        mysql Ver 14.12 Distrib 5.0.24, for pc-linux-gnu (i686) using readline 5.0
        4 GB RAM, Dual Core dual CPU 3GHz
        RHEL 3

    my.cnf contents:

        [root@reports root]# cat /etc/my.cnf
        [mysqld]
        datadir=/data/mysql/data/
        socket=/tmp/mysql.sock
        sort_buffer_size = 2000000
        table_cache = 1024
        key_buffer = 128M
        myisam_sort_buffer_size = 64M
        # Default to using old password format for compatibility with mysql 3.x
        # clients (those using the mysqlclient10 compatibility package).
        old_passwords=1

        [mysql.server]
        user=mysql
        basedir=/data/mysql/data/

        [mysqld_safe]
        err-log=/data/mysql/data/mysqld.log
        pid-file=/data/mysql/data/mysqld.pid

    DB table details:

        CREATE TABLE `RAW_LOG_20100504` (
          `DT` date default NULL,
          `GATEWAY` varchar(15) default NULL,
          `USER` bigint(12) default NULL,
          `CACHE` varchar(12) default NULL,
          `TIMESTAMP` varchar(30) default NULL,
          `URL` varchar(60) default NULL,
          `VERSION` varchar(6) default NULL,
          `PROTOCOL` varchar(6) default NULL,
          `WEB_STATUS` int(5) default NULL,
          `BYTES_RETURNED` int(10) default NULL,
          `RTT` int(5) default NULL,
          `UA` varchar(100) default NULL,
          `REQ_SIZE` int(6) default NULL,
          `CONTENT_TYPE` varchar(50) default NULL,
          `CUST_TYPE` int(1) default NULL,
          `DEL_STATUS_DEVICE` int(1) default NULL,
          `IP` varchar(16) default NULL,
          `CP_FLAG` int(1) default NULL,
          `USER_LOCATE` bigint(15) default NULL
        ) ENGINE=MyISAM DEFAULT CHARSET=latin1 MAX_ROWS=200000000;

    Thanks in advance!

    Regards,
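    An editorial observation: the CREATE TABLE above declares no indexes at all, so every WHERE clause forces a full scan of the 110M rows. Adding an index on the columns most queries filter on is usually the first step; for example (assuming the queries filter on USER and DT - adjust to the real predicates):

        ALTER TABLE RAW_LOG_20100504 ADD INDEX idx_user_dt (`USER`, `DT`);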

    Read the article

  • PyGTK: Manually render an existing widget at a given Rectangle? (TextView in a custom CellRenderer)

    - by NicDumZ
    Hello! I am trying to draw a TextView into the cell of a TreeView. (Why? I would like to have custom tags on text, so pieces can be clickable and trigger different actions/popup menus depending on where the user clicks.) I have been trying to write a customized CellRenderer for this purpose, but so far I have failed, because I find it extremely difficult to find generic documentation on rendering design in GTK. More than an answer to the specific question (that might be hard or not doable, and I'm not expecting you to do everything for me), I am first looking for documentation on how a widget is rendered, to understand how one is supposed to implement a CellRenderer. Can you share any links that explain, either for GTK or for PyGTK, the rendering mechanism? More specifically:

    - The size allocation mechanism (should I say protocol?). I understand that a window has a defined size, and then queries its children, saying "my size is w x h, what would be your ideal size, buddy?", and then sometimes shrinks children when all children can't fit together at their ideal sizes. Is there any specific documentation on that, and in particular on when this happens during rendering?
    - How are "builtin" widgets rendered? What kind of methods do they call on the Widget base class? On the parent Window? When? Do they use pango.Layout?
    - Can you manually draw a TextView onto a pango.Layout object? This link gives an interesting example showing how you can draw content in a pango.Layout object and use it in a CellRenderer. I guess I could adapt it if only I understood how TextView widgets are rendered.

    Or perhaps, to put it more simply: given an existing widget instance, how does one render it at a specific gdk.Rectangle? Thanks a lot.

    Read the article

  • When to use @Singleton in a Jersey resource

    - by dexter
    I have a Jersey resource that accesses the database. Basically it opens a database connection in the initialization of the resource and performs queries in the resource's methods. I have observed that when I do not use @Singleton, the database is opened at each request. And we know opening a connection is really expensive, right? So my question is: should I specify that the resource be singleton, or is it really better to keep it per-request, especially when the resource connects to the database?

    My resource code looks like this:

        // Use @Singleton here or not?
        @Path("/myservice/")
        public class MyResource {

            private ResponseGenerator responser;
            private Log logger = LogFactory.getLog(MyResource.class);

            public MyResource() {
                responser = new ResponseGenerator();
            }

            @GET
            @Path("/clients")
            public String getClients() {
                logger.info("GETTING LIST OF CLIENTS");
                return responser.returnClients();
            }

            ... // some more methods
        }

    And I connect to the database using code similar to this:

        public class ResponseGenerator {
            private Connection conn;
            private PreparedStatement prepStmt;
            private ResultSet rs;

            public ResponseGenerator(){
                Class.forName("org.h2.Driver");
                conn = DriverManager.getConnection("jdbc:h2:testdb");
            }

            public String returnClients(){
                String result;
                try {
                    prepStmt = conn.prepareStatement("SELECT * FROM hosts");
                    rs = prepStmt.executeQuery();
                    ... // do some processing here
                } catch (SQLException se){
                    logger.warn("Some message");
                } finally {
                    rs.close();
                    prepStmt.close();
                    // should I also close the connection here (in every method) if I stick to per request
                    // and add getting of connection at the start of every method
                    // conn.close();
                }
                return result;
            }

            ... // some more methods
        }

    Some comments on best practices for the code would also be helpful.
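    A common middle ground (an editorial sketch, not from the post): keep the resource per-request, but borrow connections from a pooled DataSource, so no request pays for opening a physical connection and no connection outlives its request. The pool wiring (constructor injection here) and Java 7 try-with-resources are assumptions:

        import java.sql.Connection;
        import java.sql.PreparedStatement;
        import java.sql.ResultSet;
        import java.sql.SQLException;
        import javax.sql.DataSource;

        public class ResponseGenerator {
            private final DataSource ds;   // connection pool configured in the container

            public ResponseGenerator(DataSource ds) {
                this.ds = ds;
            }

            public String returnClients() {
                // try-with-resources closes rs, prepStmt and conn in every case;
                // "closing" a pooled connection just returns it to the pool.
                try (Connection conn = ds.getConnection();
                     PreparedStatement prepStmt = conn.prepareStatement("SELECT * FROM hosts");
                     ResultSet rs = prepStmt.executeQuery()) {
                    // ... do some processing here and build the result ...
                    return "";
                } catch (SQLException se) {
                    return "";   // log and handle as in the original
                }
            }
        }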

    Read the article

  • Incrementing value by one over a lot of rows

    - by Andy Gee
    Edit: I think the answer to my question lies in the ability to set user defined variables in MySQL through PHP - the answer by Multifarious has pointed me in this direction Currently I have a script to cycle over 10M records, it's very slow and it goes like this: I first get a block of 1000 results in an array similar to this: $matches[] = array('quality_rank'=>46732, 'db_id'=>5532); $matches[] = array('quality_rank'=>12324, 'db_id'=>1234); $matches[] = array('quality_rank'=>45235, 'db_id'=>8345); $matches[] = array('quality_rank'=>75543, 'db_id'=>2562); I then cycle through them one by one and update the record $mult = count($matches)*2; foreach($matches as $m) { $rank++; $score = (($m[quality_rank] + $rank)/($mult))*100; $s = "UPDATE `packages_sorted` SET `price_rank` = '".$rank."', `deal_score` = '".$score."' WHERE `db_id` = '".$m[db_id]."' LIMIT 1"; } It seems like this is a very slow way of doing it but I can't find another way to increment the field price_rank by one each time. Can anyone suggest a better method. Note: Although I wouldn't usually store this kind of value in a database I really do need on this occasion for comparison search queries later on in the project. Any help would be kindly appreciated :)

    Read the article

  • MySQL move data from one table to another, matching IDs

    - by Reveller
    I have (among others) two MySQL tables with (among others) the following columns:

        tweets:
        -------------------------------------------
        id   text         from_user_id  from_user
        -------------------------------------------
        1    Cool tweet!  13295354      tradeny
        2    Tweeeeeeeet  43232544      bolleke
        3    Yet another  13295354      tradeny
        4    Something..  53546443      janusz4

        users:
        -------------------------------------------
        id   from_user  num_tweets  from_user_id
        -------------------------------------------
        1    tradeny    2235
        2    bolleke    432
        3    janusz4    5354

    I now want to normalize the tweets table, replacing tweets.from_user with an integer that matches users.id. Secondly, I want to fill in the corresponding users.from_user_id. Finally, I want to delete tweets.from_user_id, so that the end result would look like:

        tweets:
        ---------------------------
        id   text         from_user
        ---------------------------
        1    Cool tweet!  1
        2    Tweeeeeeeet  2
        3    Yet another  1
        4    Something..  3

        users:
        -------------------------------------------
        id   from_user  num_tweets  from_user_id
        -------------------------------------------
        1    tradeny    2235        13295354
        2    bolleke    432         43232544
        3    janusz4    5354        53546443

    My question is whether one could help me form the proper queries for this. I have only come this far:

        UPDATE tweets SET from_user = (SELECT id FROM users WHERE from_user = tweets.from_user) WHERE...
        UPDATE users SET from_user_id = (SELECT from_user_id FROM tweets WHERE from_user = tweets.from_user) WHERE...
        ALTER TABLE tweets DROP from_user_id

    Any help would be greatly appreciated :-)
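    For the record, a sketch using MySQL's multi-table UPDATE syntax (order matters: copy the IDs into users while tweets.from_user still holds names, then overwrite it):

        -- 1. Fill users.from_user_id from the tweets still keyed by name.
        UPDATE users u
        JOIN tweets t ON t.from_user = u.from_user
        SET u.from_user_id = t.from_user_id;

        -- 2. Replace the name in tweets with the matching users.id.
        UPDATE tweets t
        JOIN users u ON u.from_user = t.from_user
        SET t.from_user = u.id;

        -- 3. Drop the now-redundant column (tweets.from_user can then be
        --    converted to INT with a further ALTER if desired).
        ALTER TABLE tweets DROP COLUMN from_user_id;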

    Read the article

< Previous Page | 164 165 166 167 168 169 170 171 172 173 174 175  | Next Page >