Search Results

Search found 5233 results on 210 pages for 'a records'.


  • Complex query making site extremely slow

    - by Basit
        select SQL_CALC_FOUND_ROWS DISTINCT media.*, username
        from album as album, album_permission as permission, user as user,
             media as media, word_tag as word_tag, tag as tag
        where ((media.album_id = album.album_id and album.private = 'yes'
                and album.album_id = permission.album_id
                and (permission.email = '' or permission.user_id = ''))
               or (media.album_id = album.album_id and album.private = 'no')
               or media.album_id = '0')
          and media.status = '1'
          and media.user_id = user.user_id
          and word_tag.media_id = media.media_id
          and word_tag.tag_id = tag.tag_id
          and tag.name in ('justin','bieber','malfunction','katherine','heigl','wardrobe','cinetube')
          and media.media_type = 'video'
          and media.media_id not in ('YHL6a5z8MV4')
        group by media.media_id
        order by RAND()
        -- there is a LIMIT too, of 20 rows

    I don't know where to begin explaining this query, so please forgive me and ask if you have any questions. Here is the explanation: SQL_CALC_FOUND_ROWS counts how many rows match in total, which I use for pagination, so it counts all records even though only 20 are shown. DISTINCT stops repeated rows from being displayed. username comes from the user table. album and album_permission check whether the album is private and, if it is, whether the user has permission, by user_id. I think the rest is easy to understand, but ask if you need to know more. I'm really frustrated by this query; the site is very slow, and sometimes it doesn't open at all because of it. Please help.
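
    Not from the original post, but a sketch of one common restructuring, assuming the schema implied by the query: replace the comma joins with explicit JOINs so the album/permission logic stops forcing a large OR over a cartesian product, and drop ORDER BY RAND(), which sorts every matching row on every request. Untested against the real data; indexes on media(user_id), word_tag(media_id), word_tag(tag_id) and tag(name) are what the joins would need.

        SELECT SQL_CALC_FOUND_ROWS DISTINCT media.*, user.username
        FROM media
        JOIN user     ON user.user_id      = media.user_id
        JOIN word_tag ON word_tag.media_id = media.media_id
        JOIN tag      ON tag.tag_id        = word_tag.tag_id
        LEFT JOIN album ON album.album_id = media.album_id
        LEFT JOIN album_permission AS permission
               ON permission.album_id = album.album_id
        WHERE media.status = '1'
          AND media.media_type = 'video'
          AND media.media_id NOT IN ('YHL6a5z8MV4')
          AND tag.name IN ('justin','bieber','malfunction','katherine','heigl','wardrobe','cinetube')
          AND (media.album_id = '0'              -- no album at all
               OR album.private = 'no'           -- public album
               OR (album.private = 'yes'         -- private album, but permitted
                   AND (permission.email = '' OR permission.user_id = '')))
        GROUP BY media.media_id
        LIMIT 20;    -- instead of ORDER BY RAND(); randomize the 20 ids in application code if needed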


  • How to improve Java performance on Informix for Windows

    - by Michal Niklas
    I have a problem with the performance of Java UDR functions on Informix on Windows. On this server I already have some functions in C and SPL. I chose one function, wrote it in all three languages, and measured its performance on a test table. The function calculates a kind of checksum, so it uses no db libraries etc., only string and math operations. I measured performance on 30k records with SQL like:

        select function(txt) from _tmp_perf_test

    changing function to function_c, function_spl or function_java. My performance tests showed that the C function is the fastest, the SPL function is about 5 times slower, and Java is 100 (one hundred!) times slower than C. I checked it a few times and the 1:100 ratio didn't improve. I changed the Java function to simply return the length of the string, but even this didn't help, so it looks like there is a general problem with Java function invocation: there was no difference in time between the Java function that calculates the checksum and the Java function that returns the length of the string. I increased JVM_MAX_HEAP_SIZE to 128 and that didn't help either. I use IBM Informix Dynamic Server Version 11.50.TC6DE. The same test on a Linux server (IBM Informix Dynamic Server Version 11.50.FC6) shows more "normal" results, i.e. Java is slower than C and SPL, but only 2 to 5 times. What can I do to improve Java performance on the Informix server on Windows? More info about Java on the servers:

        c:\Informix\extend\krakatoa\jre\bin>java -version
        java version "1.5.0"
        Java(TM) 2 Runtime Environment, Standard Edition (build pwi32dev-20081129a (SR9-0))
        IBM J9 VM (build 2.3, J2RE 1.5.0 IBM J9 2.3 Windows Server 2003 x86-32 j9vmwi3223-20081129 (JIT enabled)
        J9VM - 20081126_26240_lHdSMr
        JIT  - 20081112_1511ifx1_r8
        GC   - 200811_07)
        JCL  - 20081129

        [root@informix11 bin]# ./java -version
        java version "1.5.0"
        Java(TM) 2 Runtime Environment, Standard Edition (build pxa64devifx-20071025 (SR6b))
        IBM J9 VM (build 2.3, J2RE 1.5.0 IBM J9 2.3 Linux amd64-64 j9vmxa6423-20071005 (JIT enabled)
        J9VM - 20071004_14218_LHdSMr
        JIT  - 20070820_1846ifx1_r8
        GC   - 200708_10)
        JCL  - 20071025


  • How to parse a SOAP response from a Ruby client?

    - by Richard O'Neil
    Hi, I am learning Ruby and I have written the following code to find out how to consume SOAP services:

        require 'soap/wsdlDriver'
        wsdl = "http://www.abundanttech.com/webservices/deadoralive/deadoralive.wsdl"
        service = SOAP::WSDLDriverFactory.new(wsdl).create_rpc_driver
        weather = service.getTodaysBirthdays('1/26/2010')

    The response that I get back is:

        #<SOAP::Mapping::Object:0x80ac3714
          {http://www.abundanttech.com/webservices/deadoralive}getTodaysBirthdaysResult=#<SOAP::Mapping::Object:0x80ac34a8
            {http://www.w3.org/2001/XMLSchema}schema=#<SOAP::Mapping::Object:0x80ac3214
              {http://www.w3.org/2001/XMLSchema}element=#<SOAP::Mapping::Object:0x80ac2f6c
                {http://www.w3.org/2001/XMLSchema}complexType=#<SOAP::Mapping::Object:0x80ac2cc4
                  {http://www.w3.org/2001/XMLSchema}choice=#<SOAP::Mapping::Object:0x80ac2a1c
                    {http://www.w3.org/2001/XMLSchema}element=#<SOAP::Mapping::Object:0x80ac2774
                      {http://www.w3.org/2001/XMLSchema}complexType=#<SOAP::Mapping::Object:0x80ac24cc
                        {http://www.w3.org/2001/XMLSchema}sequence=#<SOAP::Mapping::Object:0x80ac2224
                          {http://www.w3.org/2001/XMLSchema}element=[#<SOAP::Mapping::Object:0x80ac1f7c>,
                            #<SOAP::Mapping::Object:0x80ac13ec>, #<SOAP::Mapping::Object:0x80ac0a28>,
                            #<SOAP::Mapping::Object:0x80ac0078>, #<SOAP::Mapping::Object:0x80abf6c8>,
                            #<SOAP::Mapping::Object:0x80abed18>]>>>>>>>
          {urn:schemas-microsoft-com:xml-diffgram-v1}diffgram=#<SOAP::Mapping::Object:0x80abe6c4
            {}NewDataSet=#<SOAP::Mapping::Object:0x80ac1220
              {}Table=[#<SOAP::Mapping::Object:0x80ac75e4 {}FullName="Cully, Zara" {}BirthDate="01/26/1892"
                {}DeathDate="02/28/1979" {}Age="(87)" {}KnownFor="The Jeffersons" {}DeadOrAlive="Dead">,
                #<SOAP::Mapping::Object:0x80b778f4 {}FullName="Feiffer, Jules" {}BirthDate="01/26/1929"
                {}DeathDate=#<SOAP::Mapping::Object:0x80c7eaf4> {}Age="81" {}KnownFor="Cartoonists"
                {}DeadOrAlive="Alive">]>>>>

    I am having a great deal of difficulty figuring out how to parse and show the returned information in a nice table, or even just how to loop through the records and have access to each element (i.e. FullName, Age, etc.). I went through the whole "getTodaysBirthdaysResult.methods - Object.new.methods" approach and kept working down to try to figure out how to access the elements, but then I got to the array and got lost. Any help that can be offered would be appreciated.


  • ProgrammingError when aggregating over an annotated & grouped Django ORM query

    - by ento
    I'm trying to construct a query to get the "average, maximum, minimum number of items purchased by a single user". The data source is this simple sales record table:

        class SalesRecord(models.Model):
            id           = models.IntegerField(primary_key=True)
            user_id      = models.IntegerField()
            product_code = models.CharField()
            price        = models.IntegerField()
            created_at   = models.DateTimeField()

    A new record is inserted into this table for every item purchased by a user. Here's my attempt at building the query:

        q = SalesRecord.objects.all()
        q = q.values('user_id').annotate(   # group by user and count the # of records
            count=Count('id'),              # (= # of items)
        ).order_by()
        result = q.aggregate(Max('count'), Min('count'), Avg('count'))

    When I try to execute the code, a ProgrammingError is raised at the last line:

        (1064, "You have an error in your SQL syntax; check the manual that corresponds to your
        MySQL server version for the right syntax to use near 'FROM (SELECT sales_records.user_id
        AS user_id, COUNT(sales_records.`' at line 1")

    Django's error screen shows that the SQL is:

        SELECT FROM (SELECT `sales_records`.`player_id` AS `player_id`,
        COUNT(`sales_records`.`id`) AS `count` FROM `sales_records`
        WHERE (`sales_records`.`created_at` >= %s AND `sales_records`.`created_at` <= %s )
        GROUP BY `sales_records`.`player_id` ORDER BY NULL) subquery

    It's not selecting anything! Can someone please show me the right way to do this?

    Hacking Django: I've found that clearing the cache of selected fields in django.db.models.sql.BaseQuery.get_aggregation() seems to solve the problem, though I'm not really sure whether this is a fix or a workaround.

        @@ -327,10 +327,13 @@
                 # Remove any aggregates marked for reduction from the subquery
                 # and move them to the outer AggregateQuery.
        +        self._aggregate_select_cache = None
        +        self.aggregate_select_mask = None
                 for alias, aggregate in self.aggregate_select.items():
                     if aggregate.is_summary:
                         query.aggregate_select[alias] = aggregate
        -                del obj.aggregate_select[alias]
        +                if alias in obj.aggregate_select:
        +                    del obj.aggregate_select[alias]

    ... which yields the result:

        {'count__max': 267, 'count__avg': 26.2563, 'count__min': 1}
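
    For reference, this is the SQL the ORM is apparently trying to build; the outer SELECT list is exactly what is missing from the generated query above. A sketch using the table and aliases from the error message, which can be run by hand to sanity-check the expected result:

        SELECT MAX(subquery.`count`) AS count__max,
               MIN(subquery.`count`) AS count__min,
               AVG(subquery.`count`) AS count__avg
        FROM (
            SELECT `sales_records`.`user_id` AS `user_id`,
                   COUNT(`sales_records`.`id`) AS `count`
            FROM `sales_records`
            GROUP BY `sales_records`.`user_id`
        ) subquery;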


  • Find Port Number and Domain Name to connect to Hive Table

    - by user1419563
    I am new to Hive, MapReduce and Hadoop. I am using PuTTY to connect to the Hive server and access records in the tables. What I did is: I opened PuTTY, typed ares-ingest.vip.host.com as the host name, and clicked Open. Then I entered my username and password, followed by a few commands to get to the Hive SQL prompt. Below is what I did:

        $ bash
        bash-3.00$ hive
        Hive history file=/tmp/rjamal/hive_job_log_rjamal_201207010451_1212680168.txt
        hive> set mapred.job.queue.name=hdmi-technology;
        hive> select * from table LIMIT 1;

    So my question is: I was trying to connect to the Hive tables using Squirrel SQL Client, with the connection URL

        jdbc:hive://ares-ingest.vip.host.com:10000/default

    but whenever I try to connect with these attributes, I always get:

        Hive: Could not establish connection to ares-ingest.vip.host.com:10000/default:
        java.net.ConnectException: Connection timed out: connect

    It might be that I am using the wrong port number or domain name here. Is there any way, from the command prompt, to find out these two things, i.e. the domain name and the port number (where the Hive server is running) that I should use to connect to the Hive table from Squirrel SQL Client? As far as I know, host and port are determined by where the Hive server is running.


  • Resources related to data-mining and gaming on social networks

    - by darren
    Hi all, I'm interested in the problem of pattern mining among players of social networking games, for example detecting cheaters in a game given a company's user database. So far I have been following the usual recipe for a data mining project:

        - construct a data warehouse that aggregates significant information
        - select a classifier, and train it with a subsection of records from the warehouse
        - validate the classifier with another test set
        - lather, rinse, repeat

    Surprisingly, I've found very little in this area regarding literature, best practices, etc. I am hoping to crowdsource the information gathering problem here. Specifically, what I'm looking for:

        - What classifiers have worked well for this type of pattern mining? It seems highly temporal: users playing games, users receiving rewards, users transferring prizes, etc.
        - Are there any widely agreed-upon attributes specific to social networking / gaming data?
        - What is a practical amount of information to consider? One problem I've run into is data overload, where queries and data cleansing may take days to complete.
        - Related to the point above, what hardware resources are required to produce results? I've found it difficult to estimate the amount of computing power I will require for production use. It has become apparent that a white box in the corner does not have enough horsepower for such a project. Are companies generally resorting to cloud solutions, or are they buying clusters?

    Basically, any resources (theoretical, academic, or practical) about implementing a social networking / gaming pattern-mining program would be very much appreciated. Thanks.


  • How can I write a clean Repository without exposing IQueryable to the rest of my application?

    - by Simucal
    So, I've read all the Q&As here on SO regarding the subject of whether or not to expose IQueryable to the rest of your project (see here, and here), and I've ultimately decided that I don't want to expose IQueryable to anything but my Model. Because IQueryable is tied to certain persistence implementations, I don't like the idea of locking myself into this. Similarly, I'm not sure how good I feel about classes further down the call chain, outside the repository, modifying the actual query. So, does anyone have any suggestions for how to write a clean and concise repository without doing this? One problem I see is that my repository will blow up with a ton of methods for the various things I need to filter my query on, having a bunch of (the element type here is presumably Product, stripped from the original by HTML escaping):

        IEnumerable<Product> GetProductsSinceDate(DateTime date);
        IEnumerable<Product> GetProductsByName(string name);
        IEnumerable<Product> GetProductsByID(int ID);

    If I were allowing IQueryable to be passed around, I could easily have a generic repository that looked like:

        public interface IRepository<T> where T : class
        {
            T GetById(int id);
            IQueryable<T> GetAll();
            void InsertOnSubmit(T entity);
            void DeleteOnSubmit(T entity);
            void SubmitChanges();
        }

    However, if you aren't using IQueryable, then methods like GetAll() aren't really practical, since lazy evaluation won't be taking place down the line. I don't want to return 10,000 records only to use 10 of them later. What is the answer here? In Conery's MVC Storefront he created another layer called the "Service" layer, which received IQueryable results from the repository and was responsible for applying various filters. Is this what I should do, or something similar? Have my repository return IQueryable but restrict access to it by hiding it behind a bunch of filter classes like GetProductByName, which return a concrete type like IList or IEnumerable?


  • What are best practices for collecting, maintaining and ensuring accuracy of a huge data set?

    - by Kyle West
    I am posing this question looking for practical advice on how to design a system. Sites like amazon.com and pandora have and maintain huge data sets to run their core business. For example, amazon (and every other major e-commerce site) has millions of products for sale, images of those products, pricing, specifications, etc. Ignoring the data coming in from 3rd-party sellers and the user-generated content, all that "stuff" had to come from somewhere and is maintained by someone. It's also incredibly detailed and accurate. How? How do they do it? Is there just an army of data-entry clerks, or have they devised systems to handle the grunt work? My company is in a similar situation. We maintain a huge (tens of millions of records) catalog of automotive parts and the cars they fit. We've been at it for a while now and have come up with a number of programs and processes to keep our catalog growing and accurate; however, it seems that to grow the catalog to x items we need to grow the team to y. I need to figure out some ways to increase the efficiency of the data team, and hopefully I can learn from the work of others. Any suggestions are appreciated, though even better would be links to content I could spend some serious time reading. THANKS! Kyle


  • Excel VBA to Update SQL Table

    - by user307655
    Hi All, I have a small Excel program that is used to upload data to a SQL server. This has been working well for a while. My problem now is that I would like to offer users a function to update an existing record in SQL. Each row in this table has a unique id column: a column called UID, which is the primary key. This is part of the code currently used to upload new data:

        Set Cn = New ADODB.Connection
        Cn.Open "Driver={SQL Server};Server=" & ServerName & ";Database=" & DatabaseName & _
            ";Uid=" & UserID & ";Pwd=" & Password & ";"
        rs.Open TableName, Cn, adOpenKeyset, adLockOptimistic
        For RowCounter = StartRow To EndRow
            rs.AddNew
            For ColCounter = 1 To NoOfFields
                rs(ColCounter - 1) = shtSheetToWork.Cells(RowCounter, ColCounter)
            Next ColCounter
        Next RowCounter
        rs.UpdateBatch
        ' Tidy up
        rs.Close
        Set rs = Nothing
        Cn.Close
        Set Cn = Nothing

    Is there any way I can modify this code to update a particular UID rather than importing new records? Thanks again for your help.
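
    Not from the thread, but a sketch of the SQL side of an update path; YourTable, Field1 and Field2 are hypothetical names standing in for the real schema. Either open the recordset filtered to the target UID and assign fields without calling AddNew, or skip the recordset and execute an UPDATE directly:

        -- Open only the row to change instead of the whole table,
        -- then assign fields and call rs.Update (no rs.AddNew):
        SELECT * FROM YourTable WHERE UID = 42;

        -- Or issue a direct update statement via Cn.Execute:
        UPDATE YourTable
        SET Field1 = 'new value', Field2 = 'other value'
        WHERE UID = 42;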


  • How to explain to users the advantages of a dumb primary key?

    - by Hao
    Primary key attractiveness: I have a boss (and also users) who want the primary key to be a sophisticated/smart/attractive control number (sort of like a Social Security number or a credit card number format). I just padded the primary key (in views) with zeroes to appease their desire to make the control number sophisticated, smart and attractive. But they wanted it as: the first 2 digits as the client code, then 4 digits for the year, then the last 4 digits as the transaction number for that client in a given year, resetting each client's transaction number to 1 when the next year rolls over. Each client's transactions start at 1, e.g.:

        WM20090001, WM20090002, BB20090001, WM20100001, BB20100001

    But as I wanted to make things as simple as possible, I chose not to embed their suggested smartness in the primary key; I just kept the primary key auto-incrementing regardless of client and year. To make it not dull-looking (they really are adamant about making the primary key a smart control number), I made the primary key appear smart to them: in a view query, I put the client code and the four-digit year in front of the eight-zero-padded auto-increment key, i.e. WM200900000001, sort of slug-like information on top of an auto-incremented primary key.

    By keeping the primary key auto-incrementing regardless of any other information, we avoid potential side effects when users edit a record. For example, if they entered a transaction under WM by mistake and then edit the client code to BB, then with a smart primary key the WM customer's control numbers would have gaps. Or worse yet, instead of letting the control numbers have gaps/holes, the users will request that subsequent records shift up into the gap and have their primary keys re-adjusted (decremented).

    How do you deal with these user requests (reasonable or otherwise)? Do you yield to them? Or do you continue using a dumb primary key, explain the repercussions of having a very smart/sophisticated primary key, and educate them on the significant advantages of a dumb one?

    P.S. A quotable quote (http://articles.techrepublic.com.com/5100-10878_11-1044961.html): "If you hold your tongue the first time users ask what is for them a reasonable request, things will work a lot better in the end."
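
    A MySQL-flavored sketch of the compromise described above, with hypothetical table and column names (transactions, clients, client_code, client_year_seq): keep the auto-increment id as the real key and derive the control number in a view. client_year_seq would be a separately maintained per-client, per-year counter, which is where the gap problem lives if that route is taken.

        -- The surrogate key stays dumb; the "smart" number is derived for display only.
        CREATE VIEW transaction_display AS
        SELECT t.id,
               CONCAT(c.client_code,                     -- 'WM', 'BB', ...
                      YEAR(t.created_at),                -- 2009, 2010, ...
                      LPAD(t.client_year_seq, 4, '0'))   -- per-client, per-year counter
                   AS control_number
        FROM transactions AS t
        JOIN clients AS c ON c.id = t.client_id;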


  • How to get decent MySQL driver performance in Ruby

    - by Zombies
    I notice that I am getting very poor performance for inserts, queries, or both. The queries themselves are basic and execute with no delay directly in mysql. The Ruby script that I wrote has only 1 thread, so only 1 connection is being used, and it is never closed unless the script is terminated. Pretty basic: I am just trying to insert a lot of rows. There is a look-up or two to get a surrogate key, or to check for duplicates, but the complexity is just O(n). Also, it isn't as if there are millions of records, so again the queries themselves take no time to run. I am using:

        Ruby 1.9.1
        Gem/driver: ruby-mysql 2.9.2
        MySQL 5.1.37-1ubuntu5.1
        (all 32-bit versions on a 32-bit Ubuntu distro)

    I am getting about 1-2 inserts per second, which is pretty slow. I know a lot of people will suggest changing drivers, but that means I have some refactoring and retesting to do. So I would really appreciate any help, but if you do recommend that, please at least say why (e.g. if you have used ruby-mysql x.x.x before and found another MySQL driver to be better). What I would like to know:

        - How can I improve performance with ruby-mysql 2.9.2?
        - If and only if I cannot do this with ruby-mysql 2.9.2, what should I do?
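
    One thing worth ruling out before blaming the driver: 1-2 inserts per second is the classic symptom of InnoDB flushing to disk on every autocommitted statement. A sketch with a hypothetical items table; grouping the rows into one transaction (or one multi-row INSERT) amortizes that flush:

        START TRANSACTION;
        INSERT INTO items (surrogate_id, name) VALUES (1, 'a');
        INSERT INTO items (surrogate_id, name) VALUES (1, 'b');
        -- ... many more rows ...
        COMMIT;

        -- Or as a single statement:
        INSERT INTO items (surrogate_id, name)
        VALUES (1, 'a'), (1, 'b'), (2, 'c');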


  • SQL statement with datetimepicker

    - by David Archer
    This should hopefully be a simple one. When using a date time picker in a Windows form, I want an SQL statement to be carried out, like so:

        string sql = "SELECT * FROM Jobs WHERE JobDate = '" + dtpJobDate.Text + "'";

    Unfortunately, this doesn't actually return any results, because the JobDate field is stored as a DateTime value. I'd like to be able to search for all records that are on this date, no matter what time is stored. Any help? New query:

        SqlDataAdapter da2 = new SqlDataAdapter();
        SqlCommand cmd = new SqlCommand();
        cmd.CommandText = "SELECT * FROM Jobs WHERE JobDate >= @p_StartDate AND JobDate < @p_EndDate";
        cmd.Parameters.Add("@p_StartDate", SqlDbType.DateTime).Value = dtpJobDate.Value.Date;
        cmd.Parameters.Add("@p_EndDate", SqlDbType.DateTime).Value = dtpJobDate.Value.Date.AddDays(1);
        cmd.Connection = conn;
        da2.SelectCommand = cmd;
        da2.Fill(dt);
        dgvJobDiary.DataSource = dt;

    Huge thanks for all the help!
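
    For reference, the shape of that fix in plain SQL is a half-open date range over the day (the literal dates here are illustrative only): it matches every time on the day, can use an index on JobDate, and avoids casting each stored value to a date.

        SELECT * FROM Jobs
        WHERE JobDate >= '20100501'
          AND JobDate <  '20100502';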


  • Problem with JSONResult

    - by vikitor
    Hi, I'm still a newbie at this, so I'll try to explain what I'm doing. Basically, what I want is to load a dropdownlist depending on the value of a previous one, and I want it to load the data and appear when the other one is changed. This is the code I've written in my controller:

        public ActionResult GetClassesSearch(bool ajax, string phylumID, string kingdom)
        {
            IList<TaxClass> lists = null;
            int _phylumID = int.Parse(phylumID);
            int _kingdom = int.Parse(kingdom);
            lists = _taxon.getClassByPhylumSearch(_phylumID, _kingdom);
            return Json(lists.Count);
        }

    and this is how I call the method from the javascript function:

        function loadClasses(_phylum) {
            var phylum = _phylum.value;
            $.getJSON("/Suspension/GetClassesSearch/",
                { ajax: true, phylumID: phylum, kingdom: kingdom },
                function(data) {
                    alert(data);
                    alert('no fallo');
                    document.getElementById("pClass").style.display = "block";
                    document.getElementById("sClass").options[0] = new Option("-select-", "0", true, true);
                    //for (i = 0; i < data.length; i++) {
                    //    $('#sClass').addOption(data[i].classID, data[i].className);
                    //}
                });
        }

    The thing is that like this it works: I pass the function the number of classes within a selected phylum, and it displays the pClass element. The problem comes when I try to populate the list with the data retrieved from the database, because when the database returns data, changing return Json(lists.Count) to return Json(lists) keeps giving me the same error:

        A circular reference was detected while serializing an object of type 'SubSonic.Schema.DatabaseColumn'.

    I've been going round and round debugging and making tests, but I can't make it work, and it is supposed to be a simple thing, so I'm missing something. I have commented out the for loop because I'm not quite sure that's the way to access the data; I've not been able to make it work when it finds records. Can anyone help me? Thanks in advance, Victor


  • Grouped UITableView shows blank space when section is empty

    - by christo16
    Hello, I have a grouped UITableView where not all sections may be displayed at once; the table is driven by data, and not every record has every section. My trouble is that the records that don't have certain sections show up as blank spaces in the table (see picture). There are no footers/headers. Anything I've missed?

        - (NSInteger)tableView:(UITableView *)tableView numberOfRowsInSection:(NSInteger)section {
            // Return the number of rows in the section.
            return [self getRowCount:section];
        }

        // Customize the appearance of table view cells.
        - (UITableViewCell *)tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath {
            static NSString *CellIdentifier = @"Cell";
            UITableViewCell *cell = [tableView dequeueReusableCellWithIdentifier:CellIdentifier];
            if (cell == nil) {
                cell = [[[UITableViewCell alloc] initWithStyle:UITableViewCellStyleDefault
                                               reuseIdentifier:CellIdentifier] autorelease];
            }
            cell.textLabel.text = [NSString stringWithFormat:@"section: %d row: %d", indexPath.section, indexPath.row];
            // Configure the cell...
            return cell;
        }

        - (CGFloat)tableView:(UITableView *)tableView heightForRowAtIndexPath:(NSIndexPath *)indexPath {
            float height = 0.0;
            if ([self getRowCount:indexPath.section] != 0) {
                height = kDefaultRowHeight;
            }
            return height;
        }


  • Invoice & Invoice lines: How do you store customer address information?

    - by elviejo
    Hi, I'm developing an invoicing application. The general idea is to have two tables:

        Invoice (ID, Date, CustomerAddress, CustomerState, CustomerCountry, VAT, Total);
        InvoiceLine (Invoice_ID, ID, Concept, Units, PricePerUnit, Total);

    As you can see, this basic design leads to a lot of repetition of records, since a client will have the same address, state and country on every invoice. So the alternative is to have an address table and then make a relationship Address<-Invoice. However, I think an invoice is an immutable document and should be stored just the way it was first made. Sometimes customers change their addresses or states, and if the address comes from an address catalog, changing it will change all the previously made invoices. So what is your experience? How is the customer address stored in an invoice? In the Invoice table? An Address table? Or something else? Can you provide pointers to a book, article or document where this is discussed in further detail?
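
    One common compromise, sketched here with hypothetical column types: keep a mutable address catalog for data entry, but copy ("snapshot") the address onto the invoice when it is issued, so later catalog edits never rewrite history.

        CREATE TABLE CustomerAddress (
            ID      INT PRIMARY KEY,
            Street  VARCHAR(200),
            State   VARCHAR(50),
            Country VARCHAR(50)
        );

        CREATE TABLE Invoice (
            ID              INT PRIMARY KEY,
            InvoiceDate     DATETIME,
            -- snapshot columns, copied from CustomerAddress at issue time;
            -- the invoice document stays immutable from here on
            CustomerAddress VARCHAR(200),
            CustomerState   VARCHAR(50),
            CustomerCountry VARCHAR(50),
            VAT             DECIMAL(10,2),
            Total           DECIMAL(10,2)
        );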


  • SQL 2000 (MSDE) Hangs When It Receives an Erroneous Query from a Classic ASP Web Application

    - by Jimbo
    I have a SQL interface page in my classic ASP web app that allows admin users to run queries against the app's database (MSDE 2000). It simply consists of a textarea that the user submits, and the app returns the resulting list of records, as below:

        Dim oRS
        Set oRS = Server.CreateObject("ADODB.Recordset")
        oRS.ActiveConnection = sConnectionString
        ' run the query - this is for the admin only, so it doesn't check for safe SQL commands etc.
        oRS.Open Request.Form("txtSQL")
        If Not oRS.EOF Then
            ' list the field names from the recordset
            For i = 0 To oRS.Fields.Count - 1
                Response.Write oRS.Fields(i).name & "&nbsp;"
            Next
            ' show the data for each record in the recordset
            While Not oRS.EOF
                For i = 0 To oRS.Fields.Count - 1
                    Response.Write oRS.Fields(i).value & "&nbsp;"
                Next
                Response.Write "<br />"
                oRS.Movenext()
            Wend
        End If

    The problem with this is that if you send it an invalid query (with a spelling mistake, an invalid join, etc.), instead of throwing back an error immediately, it hangs IIS (you can see this by trying to browse the app from another computer; it fails) for a number of minutes and THEN returns the error. I have NO idea why! Can anyone help?
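
    Not an explanation of the hang itself, but one way to pre-flight the submitted text before really executing it, assuming MSDE honors SET NOEXEC the same way full SQL Server 2000 does: with NOEXEC on, the batch is compiled but never run, so syntax errors (and usually bad object names) come back immediately.

        SET NOEXEC ON;    -- compile only, execute nothing
        SELECT * FROM Jobs WHERRE ID = 1;   -- deliberately broken example: fails fast here
        SET NOEXEC OFF;   -- SET statements are still honored, so this re-enables execution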


  • Is it possible to join tables in Doctrine ORM without using relations?

    - by piemesons
    Suppose there are two tables:

        Table X -- Columns: id, x_value
        Table Y -- Columns: id, x_id, y_value

    Now, I don't want to define a relationship in the Doctrine classes, and I want to retrieve records from these two tables using a query like this:

        select x_value from x, y where y.id = "variable_z" and x.id = y.x_id;

    I'm not able to figure out how to write a query like this in Doctrine ORM.

    EDIT: Table structures:

        -- Table 1:
        CREATE TABLE IF NOT EXISTS `image` (
            `id` int(11) NOT NULL AUTO_INCREMENT,
            `random_name` varchar(255) NOT NULL,
            `user_id` int(11) NOT NULL,
            `community_id` int(11) NOT NULL,
            `published` varchar(1) NOT NULL,
            PRIMARY KEY (`id`)
        ) ENGINE=InnoDB DEFAULT CHARSET=latin1 AUTO_INCREMENT=259;

        -- Table 2:
        CREATE TABLE IF NOT EXISTS `users` (
            `id` int(11) NOT NULL AUTO_INCREMENT,
            `city` varchar(20) DEFAULT NULL,
            `state` varchar(20) DEFAULT NULL,
            `school` varchar(50) DEFAULT NULL,
            PRIMARY KEY (`id`)
        ) ENGINE=InnoDB DEFAULT CHARSET=latin1 AUTO_INCREMENT=20;

    Query I am using:

        $q = new Doctrine_RawSql();
        $q->select('{u.*}, {img.*}')
          ->from('users u LEFT JOIN image img ON u.id = img.user_id')
          ->addComponent('u', 'Users u')
          ->addComponent('img', 'u.Image img')
          ->where("img.community_id='$community_id' AND img.published='y' AND u.state='$state' AND u.city='$city'")
          ->orderBy('img.id DESC')
          ->limit($count + 12)
          ->execute();

    Error I am getting:

        Fatal error: Uncaught exception 'Doctrine_Exception' with message 'Couldn't find class u' in
        C:\xampp\htdocs\fanyer\doctrine\lib\Doctrine\Table.php:290
        Stack trace:
        #0 C:\xampp\htdocs\fanyer\doctrine\lib\Doctrine\Table.php(240): Doctrine_Table->initDefinition()
        #1 C:\xampp\htdocs\fanyer\doctrine\lib\Doctrine\Connection.php(1127): Doctrine_Table->__construct('u', Object(Doctrine_Connection_Mysql), true)
        #2 C:\xampp\htdocs\fanyer\doctrine\lib\Doctrine\RawSql.php(425): Doctrine_Connection->getTable('u')
        #3 C:\xampp\htdocs\fanyer\doctrine\models\Image.php(33): Doctrine_RawSql->addComponent('img', 'u.Image imga')
        #4 C:\xampp\htdocs\fanyer\community_images.php(31): Image->get_community_images_gallery_filter(4, 0, 'AL', 'ALBERTVILLE')
        #5 {main}
        thrown in C:\xampp\htdocs\fanyer\doctrine\lib\Doctrine\Table.php on line 290
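
    For reference, the raw SQL that the RawSql object above is meant to wrap looks like this (bound values sketched as placeholders); running it directly against MySQL is one way to confirm the join itself is fine before fighting the addComponent() mapping, which is where the "Couldn't find class u" error comes from:

        SELECT u.*, img.*
        FROM users AS u
        LEFT JOIN image AS img ON img.user_id = u.id
        WHERE img.community_id = ?
          AND img.published = 'y'
          AND u.state = ?
          AND u.city = ?
        ORDER BY img.id DESC;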


  • Can AVAudioSession do full duplex?

    - by Eric Christensen
    It would seem like it should be able to, but the following breakout test code can't do both:

        // play a file:
        NSArray *pathsArray = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
        NSString *documentsDirectory = [pathsArray objectAtIndex:0];
        NSString *playFilePath = [documentsDirectory stringByAppendingPathComponent:@"testplayfile.wav"];
        AVAudioPlayer *tempplayer = [[AVAudioPlayer alloc] initWithContentsOfURL:[NSURL fileURLWithPath:playFilePath] error:nil];
        [tempplayer prepareToPlay];
        [tempplayer play];

        // and record a file (the declared type should be AVAudioRecorder;
        // the original listing had the nonexistent "AVAudioRecording"):
        NSString *recFilePath = [documentsDirectory stringByAppendingPathComponent:@"testrecordfile.wav"];
        AVAudioRecorder *soundrecording = [[AVAudioRecorder alloc] initWithURL:[NSURL fileURLWithPath:recFilePath] settings:nil error:nil];
        [soundrecording prepareToRecord];
        [soundrecording record];

    This is the minimum I can think of to individually play one file and record another, and it works just fine in the simulator: I can play back a file and record at the same time. But it doesn't work on the iPhone itself. If I comment out either function, the other performs fine. The playback plays fine either alone or with both, if it comes first. If I comment out the playback, the recording records fine. (There's additional code to stop the recording, not shown here.) So each works fine, but not together. I know AudioQueue has a setting to allow both, but I don't see an analogue for AVAudioSession. Any idea if it's possible, and if so, what I need to add? Thanks!


  • Primary Key Identity Value Increments On Unique Key Constraint Violation

    - by Jed
    I have a SqlServer 2008 table which has a primary key (IsIdentity=Yes) and three other fields that make up a unique key constraint. In addition, I have a stored procedure that inserts a record into the table, and I call the sproc via C# using a SqlConnection object. The C# sproc call works fine; however, I have noticed interesting results when the call violates the unique key constraint...

    When the sproc call violates the unique key constraint, a SqlException is thrown, which is no surprise and cool. However, I notice that the next record that is successfully added to the table has a PK value that is not exactly one more than the previous record. For example: say the table has five records where the PK values are 1, 2, 3, 4, and 5. The sproc attempts to insert a sixth record, but the unique key constraint is violated and the sixth record is not inserted. Then the sproc attempts to insert another record, and this time it is successful. This new record is given a PK value of 7 instead of 6.

    Is this normal behavior? If so, can you give me a reason why? (If a record fails to insert, why is the PK identity incremented?) If this is not normal behavior, can you give me any hints as to why I am seeing these symptoms?
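
    A minimal repro of the behavior, with a hypothetical table: identity values are allocated before the constraint is checked, and they are not handed back when the insert fails or rolls back. This is normal; IDENTITY guarantees uniqueness and monotonic growth, not density, so gaps are expected.

        CREATE TABLE Demo (
            Id   INT IDENTITY(1,1) PRIMARY KEY,
            Code VARCHAR(10) NOT NULL UNIQUE
        );
        INSERT INTO Demo (Code) VALUES ('A');   -- succeeds, Id = 1
        INSERT INTO Demo (Code) VALUES ('A');   -- violates UNIQUE; identity value 2 is burned
        INSERT INTO Demo (Code) VALUES ('B');   -- succeeds, Id = 3, not 2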


  • SQL querying, group relationships

    - by Jordan
    Hi, suppose I have two tables:

        Group (
            id integer primary key,
            someData1 text,
            someData2 text
        )

        GroupMember (
            id integer primary key,
            group_id foreign key to Group.id,
            someData text
        )

    I'm aware that my SQL syntax is not correct :) Hopefully it is clear enough. My problem is this: I want to load a Group record and all the GroupMember records associated with that group. As I see it, there are two options.

    A single query:

        SELECT Group.id, Group.someData1, Group.someData2,
               GroupMember.id, GroupMember.someData
        FROM Group
        INNER JOIN GroupMember ...
        WHERE Group.id = 4;

    Or two queries:

        SELECT id, someData1, someData2 FROM Group WHERE id = 4;
        SELECT id, someData FROM GroupMember WHERE group_id = 4;

    The first solution has the advantage of only one database round trip, but the disadvantage of returning redundant data (all group data is duplicated for every group member). The second solution returns no duplicate data but involves two round trips to the database. What is preferable here? I suppose there's some threshold such that if the group sizes become sufficiently large, the cost of returning all the redundant data becomes greater than the overhead of an additional database call. What other things should I be thinking about here? Thanks, Jordan


  • What should be the responsibility of a presenter here?

    - by Achu
    I have a 3-layer design (UI / BLL / DAL), where the UI is ASP.NET MVC. In my view I have a collection of products for a category, for example Product 1, Product 2, etc. A user is able to select or remove products (by selecting a checkbox) in the view, and finally saves the changes as a collection on submit. With this 3-layer design, whose responsibility is it to save this product collection, and where should the filtering of products (removals and additions) against the category object happen? Here are my options.

    (A) It is the responsibility of the controller; the pseudo code would be:

        Find the products the user selected or removed and compare them with existing records.
        Add or delete that collection on the category object.
        Call SaveCategory(category);   // BLL CALL

    Here the first 2 steps occur in the controller.

    (B) It is the responsibility of the BLL; the pseudo code would be:

        Collect whatever products the user selected.
        SaveCategory(category, products);   // BLL CALL

    Here it's up to SaveCategory (BLL) to decide which products should be removed from and added to the database. Thanks


  • Boto - How to delete a record set from route53 -Tried to delete resource record set but it was not found

    - by Tampa
    I am using the following to delete route53 records. I get no error messages.

        conn = Route53Connection(aws_access_key_id, aws_secret_access_key)
        changes = ResourceRecordSets(conn, zone_id)
        change = changes.add_change("DELETE", sub_domain, "A", 60,
                                    weight=weight, identifier=identifier)
        change.add_value(ip_old)
        changes.commit()

    All required fields are present and they match: weight, identifier, ttl=60, etc. For example:

        test.com.  A  111.111.111.111  60  1  id1
        test.com.  A  111.111.111.222  60  1  id2

    I want to delete 111.111.111.222 and its record set. For a record set, I will have multiple values that are distinguished by a unique identifier. When an IP address becomes inactive I want to remove it from route53; I am using a poor man's load balancing. So, what is the proper way to delete a record set? Here is the metadata of the record I want to delete:

        {'alias_dns_name': None,
         'alias_hosted_zone_id': None,
         'identifier': u'15754-1',
         'name': u'hui.com.',
         'resource_records': [u'103.4.xxx.xxx'],
         'ttl': u'60',
         'type': u'A',
         'weight': u'1'}

    And the traceback:

        Traceback (most recent call last):
          File "/home/ubuntu/workspace/rtbopsConfig/classes/redis_ha.py", line 353, in <module>
            deleteRedisSubDomains(aws_access_key_id, aws_secret_access_key, platform=platform, sub_domain=sub_domain, redis_domain=redis_domain, zone_id=zone_id, ip_address=ip_address, weight=1, identifier=identifier)
          File "/home/ubuntu/workspace/rtbopsConfig/classes/redis_ha.py", line 341, in deleteRedisSubDomains
            changes.commit()
          File "/usr/local/lib/python2.7/dist-packages/boto-2.3.0-py2.7.egg/boto/route53/record.py", line 131, in commit
            return self.connection.change_rrsets(self.hosted_zone_id, self.to_xml())
          File "/usr/local/lib/python2.7/dist-packages/boto-2.3.0-py2.7.egg/boto/route53/connection.py", line 291, in change_rrsets
            body)
        boto.route53.exception.DNSServerError: DNSServerError: 400 Bad Request
        <?xml version="1.0"?>
        <ErrorResponse xmlns="https://route53.amazonaws.com/doc/2011-05-05/"><Error><Type>Sender</Type><Code>InvalidChangeBatch</Code><Message>Tried to delete resource record set hui.com., type A, SetIdentifier 15754-1 but it was not found</Message></Error><RequestId>9972af89-cb69-11e1-803b-7bde5b9c457d</RequestId></ErrorResponse>

    Thanks


  • Problem displaying a slug with dashes

    - by Mac Taylor
    Hey guys, I made slugs with dashes for my story URLs, such as:

        http://stackoverflow.com/questions/482636/fetching-records-with-slug-instead-of-id

    This is my code to create a slug:

        function Slugit($title) {
            $title = strip_tags($title);
            // Preserve escaped octets.
            $title = preg_replace('|%([a-fA-F0-9][a-fA-F0-9])|', '---$1---', $title);
            // Remove percent signs that are not part of an octet.
            $title = str_replace('%', '', $title);
            // Restore octets.
            $title = preg_replace('|---([a-fA-F0-9][a-fA-F0-9])---|', '%$1', $title);
            $title = remove_accents($title);
            if (seems_utf8($title)) {
                if (function_exists('mb_strtolower')) {
                    $title = mb_strtolower($title, 'UTF-8');
                }
                $title = utf8_uri_encode($title, 500);
            }
            $title = strtolower($title);
            $title = preg_replace('/&.+?;/', '', $title);   // kill entities
            $title = str_replace('.', '-', $title);
            $title = preg_replace('/[^%a-z0-9 _-]/', '', $title);
            $title = preg_replace('/\s+/', '-', $title);
            $title = preg_replace('|-+|', '-', $title);
            $title = trim($title, '-');
            return $title;
        }

    Up to here everything is fine, but when I click on the link it cannot be found in my database, because the title is saved in its normal form, with no dashes. So I wrote something to remove the dashes:

        $string = str_replace('-', '&nbsp;', $string);

    But when there is a ? or . in the URL, it cannot be displayed! Any help retrieving the original title back from the URL?
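
    Since the slugging is lossy (dots, question marks and entities are stripped for good), reversing it at request time can't work in general. The usual fix is to store the generated slug in its own column when the story is saved and look rows up by the slug directly. A sketch with a hypothetical stories table:

        -- Persist the slug next to the title:
        ALTER TABLE stories ADD slug VARCHAR(255) NOT NULL DEFAULT '';
        CREATE UNIQUE INDEX idx_stories_slug ON stories (slug);

        -- Then the page query becomes a direct lookup:
        SELECT * FROM stories
        WHERE slug = 'fetching-records-with-slug-instead-of-id';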


  • How to keep character encoding with database queries.

    - by JasonS
    Hi, I am doing the following:

        1) I export a database and save it to a file called dump.sql.
        2) The file is then transferred to a different server via PHP ftp.
        3) When the file has been successfully transferred, the administrator has the option to run a 'dbtransfer' script on the new host.
        4) This script explodes the file and runs the queries line by line.

    This works great; however, there is a problem with foreign-language encoding. We are using UTF-8.

        Step 1: This works fine; the file is in UTF-8 format.
        Step 3: When I test the contents of the dump.sql file using mb_check_encoding(), the string comes back as UTF-8.
        Step 4: This creates tables with utf8_general_ci encoding and dumps the information in.

    When I check the table after the transfer, I get records like this: 'ç,Ç,ö,Ö,ü,Ü,ı,İ,ş,Ş,ğ,Ğ'. I don't understand how a UTF-8 string can lose its encoding when it goes into the database. Am I missing a step? Do I need to run some sort of function to ensure the string is parsed as UTF-8? Once the system is installed, I can save foreign-language queries; it is just the transfer that is messing up. Any ideas?
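
    Assuming MySQL on both ends, the usual culprit here is not the file but the import connection defaulting to latin1, so the UTF-8 bytes are reinterpreted on the way in. A hedged sketch of the two places to pin the charset (the example table is hypothetical):

        -- Issued on the importing connection before replaying dump.sql;
        -- sets the client, connection and results charsets in one go:
        SET NAMES 'utf8';

        -- And each table in the dump should carry its charset explicitly:
        CREATE TABLE example (
            id  INT PRIMARY KEY,
            txt VARCHAR(255)
        ) DEFAULT CHARSET = utf8;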


  • ASP.net MVC - Update Model on complex models

    - by ludicco
    Hi there, I'm struggling to get the contents of a form, which is a complex model, and then update the model with it. My Account model has many Individuals:

        [AcceptVerbs(HttpVerbs.Post)]
        public ActionResult OpenAnAccount(string area,
            [Bind(Exclude = "Id")] Account account,
            [Bind(Prefix = "Account.Individuals")] EntitySet<Individual> individuals)
        {
            var db = new DB();
            account.individuals = individuals;
            db.Accounts.InsertOnSubmit(account);
            db.SubmitChanges();
        }

    So it works nicely for adding new records, but not for updating them, like:

        [AcceptVerbs(HttpVerbs.Post)]
        public ActionResult OpenAnAccount(string area,
            [Bind(Exclude = "Id")] Account account,
            [Bind(Prefix = "Account.Individuals")] EntitySet<Individual> individuals)
        {
            var db = new DB();
            var record = db.Accounts.Single(a => a.Reference == area);
            account.individuals = individuals;
            try
            {
                UpdateModel(record, account);   // I can't convert account with ToValueProvider()
                db.SubmitChanges();
            }
            catch
            {
                return ...   // Error message
            }
        }

    My problem is how to use UpdateModel with the account model, since it's not a FormCollection. How can I convert it? How can I use ToValueProvider with a complex model? I hope I was clear enough. Thanks a lot :)

