Search Results

Search found 32970 results on 1319 pages for 'zend db select'.


  • Passing Results from SQL to Google Maps API in CodeIgniter

    - by Jason Shultz
    I'm hoping to use Google Maps on my site. My addresses are stored in a db. I'm pulling up a page where the information is all dynamic. For example: mysite.com/site/business/5 (where 5 is the id of the business). Let's say I do a query like this: function addressForMap($id) { $this->db->select('b.id, b.busaddress, b.buscity, b.buszip'); $this->db->from('business as b'); $this->db->where('b.id', $id); } How can I output the info to the Google Maps API correctly so that it displays the map appropriately? The API interface takes the results like this: $marker['address'] = 'Crescent Park, Palo Alto';
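
    One way to hand the API a single string (a sketch, not from the original question; it assumes MySQL and the column names used above) is to build the address in the query itself and then copy each row's value into $marker['address'] before rendering the map:

        -- hypothetical: return one ready-made address string per business
        SELECT CONCAT_WS(', ', busaddress, buscity, buszip) AS address
        FROM business
        WHERE id = 5;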

    Read the article

  • SQL Server problems reading columns with a foreign key

    - by illdev
    I have a weird situation, where simple queries seem to never finish for instance SELECT top 100 ArticleID FROM Article WHERE ProductGroupID=379114 returns immediately SELECT top 1000 ArticleID FROM Article WHERE ProductGroupID=379114 never returns SELECT ArticleID FROM Article WHERE ProductGroupID=379114 never returns SELECT top 1000 ArticleID FROM Article returns immediately By 'returning' I mean 'in query analyzer the green check mark appears and it says "Query executed successfully"'. I sometimes get the rows painted to the grid in qa, but still the query goes on waiting for my client to time out - 'sometimes': SELECT ProductGroupID AS Product23_1_, ArticleID AS ArticleID1_, ArticleID AS ArticleID18_0_, Inventory_Name AS Inventory3_18_0_, Inventory_UnitOfMeasure AS Inventory4_18_0_, BusinessKey AS Business5_18_0_, Name AS Name18_0_, ServesPeople AS ServesPe7_18_0_, InStock AS InStock18_0_, Description AS Descript9_18_0_, Description2 AS Descrip10_18_0_, TechnicalData AS Technic11_18_0_, IsDiscontinued AS IsDisco12_18_0_, Release AS Release18_0_, Classifications AS Classif14_18_0_, DistributorName AS Distrib15_18_0_, DistributorProductCode AS Distrib16_18_0_, Options AS Options18_0_, IsPromoted AS IsPromoted18_0_, IsBulkyFreight AS IsBulky19_18_0_, IsBackOrderOnly AS IsBackO20_18_0_, Price AS Price18_0_, Weight AS Weight18_0_, ProductGroupID AS Product23_18_0_, ConversationID AS Convers24_18_0_, DistributorID AS Distrib25_18_0_, type AS Type18_0_ FROM Article AS articles0_ WHERE (IsDiscontinued = '0') AND (ProductGroupID = 379121) shows this behavior. I have no idea what is going on. Probably select is broken ;) I got a foreign key on ProductGroups ALTER TABLE [dbo].[Article] WITH CHECK ADD CONSTRAINT [FK_ProductGroup_Articles] FOREIGN KEY([ProductGroupID]) REFERENCES [dbo].[ProductGroup] ([ProductGroupID]) GO ALTER TABLE [dbo].[Article] CHECK CONSTRAINT [FK_ProductGroup_Articles] there are some 6000 rows and IsDiscontinued is a bit, not null, but leaving this condition out does not change the outcome. Anyone can tell me how to handle such a situation? More info, anyone? Additional Info: this does not seem to be restricted to this Foreign Key, but all/some referencing this entity.
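
    A first thing worth checking (a suggestion, not something stated in the original post) is whether Article has any index on ProductGroupID; without one, the filtered queries have to scan the table while the unfiltered TOP 1000 can stop early, which would match the pattern above. A covering index along these lines is one possible sketch:

        -- hypothetical index; table and column names follow the schema shown above
        CREATE NONCLUSTERED INDEX IX_Article_ProductGroupID
            ON dbo.Article (ProductGroupID)
            INCLUDE (ArticleID, IsDiscontinued);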

    Read the article

  • Improving performance in this query

    - by Luiz Gustavo F. Gama
    I have 3 tables with user logins: sis_login = administrators, tb_rb_estrutura = coordinators, tb_usuario = clients. I created a VIEW to unite all these users, separating them by levels, as follows: create view `login_names` as select `n1`.`cod_login` as `id`, '1' as `level`, `n1`.`nom_user` as `name` from `dados`.`sis_login` `n1` union all select `n2`.`id` as `id`, '2' as `level`, `n2`.`nom_funcionario` as `name` from `tb_rb_estrutura` `n2` union all select `n3`.`cod_usuario` as `id`, '3' as `level`, `n3`.`dsc_nome` as `name` from `tb_usuario` `n3`; So up to three repeated ids can occur for different users, which is why I separated them by levels. This VIEW only returns a user name for a given id and level. Considering there are about 500,000 registered users, this view takes about 1 second to load. That is too much time, and it gets far worse when I need to return the latest posts on the forums of my website. The forum tables return the user id and level, then look up the name in this VIEW. I have 18 registered forums, so when I run the query it takes one second per forum = 18 seconds. OMG. This page loads every time somebody enters my website. This is my query: select `x`.`forum_id`, `x`.`topic_id`, `l`.`nome` from ( select `t`.`forum_id`, `t`.`topic_id`, `t`.`data`, `t`.`user_id`, `t`.`user_level` from `tb_forum_topics` `t` union all select `a`.`forum_id`, `a`.`topic_id`, `a`.`data`, `a`.`user_id`, `a`.`user_level` from `tb_forum_answers` `a` ) `x` left outer join `login_names` `l` on `l`.`id` = `x`.`user_id` and `l`.`level` = `x`.`user_level` group by `x`.`forum_id` asc
    Using EXPLAIN:
    id select_type table type possible_keys key key_len ref rows Extra
    1 PRIMARY <derived2> ALL NULL NULL NULL NULL 6 Using temporary; Using filesort
    1 PRIMARY <derived4> ALL NULL NULL NULL NULL 530415
    4 DERIVED n1 ALL NULL NULL NULL NULL 114
    5 UNION n2 ALL NULL NULL NULL NULL 2
    6 UNION n3 ALL NULL NULL NULL NULL 530299
    NULL UNION RESULT ALL NULL NULL NULL NULL NULL
    2 DERIVED t ALL NULL NULL NULL NULL 3
    3 UNION r ALL NULL NULL NULL NULL 3
    NULL UNION RESULT ALL NULL NULL NULL NULL NULL
    Can somebody help me or give a suggestion?
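
    A likely bottleneck here (an assumption based on the EXPLAIN, not something stated in the original post) is that MySQL materializes the UNION ALL view as an unindexed temporary table, so the join against login_names cannot use the primary keys of the three base tables. One possible rewrite is to skip the view in this query and join each base table directly; the GROUP BY from the original is left out for brevity:

        -- sketch only: assumes cod_login, id and cod_usuario are indexed (primary keys)
        SELECT x.forum_id,
               x.topic_id,
               COALESCE(n1.nom_user, n2.nom_funcionario, n3.dsc_nome) AS nome
        FROM (
            SELECT forum_id, topic_id, data, user_id, user_level FROM tb_forum_topics
            UNION ALL
            SELECT forum_id, topic_id, data, user_id, user_level FROM tb_forum_answers
        ) x
        LEFT JOIN dados.sis_login n1 ON x.user_level = 1 AND n1.cod_login   = x.user_id
        LEFT JOIN tb_rb_estrutura n2 ON x.user_level = 2 AND n2.id          = x.user_id
        LEFT JOIN tb_usuario      n3 ON x.user_level = 3 AND n3.cod_usuario = x.user_id;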

    Read the article

  • should i advocate migrating from access to (my)sql

    - by HotOil
    Hi: We have a windows MFC app that is written against an access database on a company server. The db is not that big: 19 MB. There are at most 2-3 users accessing it at any one time. It is used in a factory environment where access speed (or lack thereof) over the intranet becomes noticeable as it is part of the manufacturing time for our widgets. The scenario is this: as each widget is completed, it gets a record in the db.. by the end of the year, the db is larger and searching for a record takes longer and longer. The solution so far has been to manually move older records to an archival table about once a year. We are reworking other portions of this app right now, and it would be a good time to move to another db if we are going to do it. It is my understanding that if we were using sql, the search time would not go up as the table gets bigger because the entire .mdb does not have to be sent over the network each time. Is this correct? Does anyone have any insight about whether it could be worth it to go to the trouble (time and money) of migrating to a new db, or should I just add more functionality to the application we have now, and maybe automatically purge the older records from time to time, and add additional facilities to the app to get at the older records when needed? Thanks for any wisdom you can share..
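
    For what it's worth, the assumption in the question is essentially right: with a server database the filtering happens on the server, only the matching rows travel over the network, and an index keeps that lookup fast as the table grows. A small illustration with made-up names (Widget/SerialNumber are stand-ins for the real table):

        -- hypothetical table and column names
        CREATE INDEX IX_Widget_SerialNumber ON Widget (SerialNumber);

        SELECT WidgetID, SerialNumber, CompletedDate
        FROM Widget
        WHERE SerialNumber = 'ABC-1234';  -- only this row crosses the wire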

    Read the article

  • How to sum up values of an array in assembly?

    - by Pablo Fallas
    I have been trying to create a program which can sum up all the values of an "array" in assembly. I have done the following: ORG 1000H TABLE DB DUP(2,4,6,8,10,12,14,16,18,20) FIN DB ? TOTAL DB ? MAX DB 13 ORG 2000H MOV AL, 0 MOV CL, OFFSET FIN-OFFSET TABLE MOV BX, OFFSET TABLE LOOP: ADD AL, [BX] INC BX DEC CL JNZ LOOP HLT END BTW I am using msx88 to compile this code, but I get an error saying that the code 0 has not been recognized. Any advice?

    Read the article

  • (Oracle) How to get the total number of results when using a pagination query?

    - by BestPractices
    I am using Oracle 10g and the following paradigm to get a page of 15 results at a time (so that when the user is looking at page 2 of a search result, they see records 16-30). select * from ( select rownum rnum, a.* from (my_query) a where rownum <= 30 ) where rnum > 15; Right now I'm having to run a separate SQL statement to do a "select count" on "my_query" in order to get the total number of results for my_query (so that I can show it to the user and use it to figure out total number of pages, etc). Is there any way to get the total number of results without doing this via a second query, i.e. by getting it from the above query? I've tried adding "max(rownum)", but it doesn't seem to work (I get an error [ORA-01747] that seems to indicate it doesn't like me having the keyword rownum in the group by). My rationale for wanting to get this from the original query rather than doing it in a separate SQL statement is that "my_query" is an expensive query so I'd rather not run it twice (once to get the count, and once to get the page of data) if I don't have to; but whatever solution I can come up with to get the number of results from within a single query (and at the same time get the page of data I need) should not add much if any additional overhead, if possible. Please advise. Here is exactly what I'm trying to do for which I receive an ORA-01747 error because I believe it doesn't like me having ROWNUM in the group by. Note: if there is another solution that doesn't use max(ROWNUM), but something else, that is perfectly fine too. This solution was my first thought as to what might work. SELECT * FROM (SELECT r.*, ROWNUM RNUM, max(ROWNUM) FROM (SELECT t0.ABC_SEQ_ID AS c0, t0.FIRST_NAME, t0.LAST_NAME, t1.SCORE FROM ABC t0, XYZ t1 WHERE (t0.XYZ_ID = 751) AND t0.XYZ_ID = t1.XYZ_ID ORDER BY t0.RANK ASC) r WHERE ROWNUM <= 30 GROUP BY r.*, ROWNUM) WHERE RNUM > 15
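
    One commonly suggested alternative to max(ROWNUM) (not from the original post) is the analytic COUNT(*) OVER (), which attaches the total result count to every row without needing a GROUP BY, so the page and the count come back from a single execution of my_query. Oracle still has to read the full result set to compute it, so it is not free, but my_query itself runs only once. A sketch against the tables shown above:

        SELECT *
        FROM (SELECT r.*, ROWNUM rnum
              FROM (SELECT t0.ABC_SEQ_ID AS c0,
                           t0.FIRST_NAME,
                           t0.LAST_NAME,
                           t1.SCORE,
                           COUNT(*) OVER () AS total_results   -- total rows returned by my_query
                    FROM ABC t0, XYZ t1
                    WHERE t0.XYZ_ID = 751
                      AND t0.XYZ_ID = t1.XYZ_ID
                    ORDER BY t0.RANK ASC) r
              WHERE ROWNUM <= 30)
        WHERE rnum > 15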

    Read the article

  • Weird MySQL behavior, seems like a SQL bug

    - by Daniel Magliola
    I'm getting a very strange behavior in MySQL, which looks like some kind of weird bug. I know it's common to blame the tried and tested tool for one's mistakes, but I've been going around this for a while. I have 2 tables, I, with 2797 records, and C, with 1429. C references I. I want to delete all records in I that are not used by C, so i'm doing: select * from i where id not in (select id_i from c); That returns 0 records, which, given the record counts in each table, is physically impossible. I'm also pretty sure that the query is right, since it's the same type of query i've been using for the last 2 hours to clean up other tables with orphaned records. To make things even weirder... select * from i where id in (select id_i from c); DOES work, and brings me the 1297 records that I do NOT want to delete. So, IN works, but NOT IN doesn't. Even worse: select * from i where id not in ( select i.id from i inner join c ON i.id = c.id_i ); That DOES work, although it should be equivalent to the first query (i'm just trying mad stuff at this point). Alas, I can't use this query to delete, because I'm using the same table i'm deleting from in the subquery. I'm assuming something in my database is corrupt at this point. In case it matters, these are all MyISAM tables without any foreign keys, whatsoever, and I've run the same queries in my dev machine and in the production server with the same result, so whatever corruption there might be survived a mysqldump / source cycle, which sounds awfully strange. Any ideas on what could be going wrong, or, even more importantly, how I can fix/work around this? Thanks! Daniel
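
    The classic cause of exactly this symptom (an assumption here, since the data is not shown) is a NULL in c.id_i: NOT IN against a set that contains NULL can never evaluate to true, so zero rows come back, while IN and the JOIN form are unaffected. Two common rewrites that sidestep it:

        -- exclude NULLs from the subquery
        SELECT * FROM i WHERE id NOT IN (SELECT id_i FROM c WHERE id_i IS NOT NULL);

        -- or use NOT EXISTS, which handles NULLs the way most people expect
        SELECT * FROM i WHERE NOT EXISTS (SELECT 1 FROM c WHERE c.id_i = i.id);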

    Read the article

  • active record relations – who needs it?

    - by M2_
    Well, I'm confused about rails queries. For example: Affiche belongs_to :place Place has_many :affiches We can do this now: @affiches = Affiche.all( :joins => :place ) or @affiches = Affiche.all( :include => :place ) and we will get a lot of extra SELECTs, if there are many affiches: Place Load (0.2ms) SELECT "places".* FROM "places" WHERE "places"."id" = 3 LIMIT 1 Place Load (0.3ms) SELECT "places".* FROM "places" WHERE "places"."id" = 3 LIMIT 1 Place Load (0.8ms) SELECT "places".* FROM "places" WHERE "places"."id" = 444 LIMIT 1 Place Load (1.0ms) SELECT "places".* FROM "places" WHERE "places"."id" = 222 LIMIT 1 ...and so on... And (sic!) with :joins used every SELECT is doubled! Technically we could just write like this: @affiches = Affiche.all( ) and the result is totally the same! (Because we have relations declared). The way out of keeping all data in one query is removing the relations and writing a big string with "LEFT OUTER JOIN", but still there is a problem of grouping data in a multi-dimensional array and a problem of similar column names, such as id. What is done wrong? Or what am I doing wrong? UPDATE: Well, I have that string Place Load (2.5ms) SELECT "places".* FROM "places" WHERE ("places"."id" IN (3,444,222,57,663,32,154,20)) and a list of selects one by one id. Strange, but I get these separate selects when I'm doing this in each scope: <%= link_to a.place.name, **a.place**( :id => a.place.friendly_id ) %> the marked a.place is the spot that produces these extra queries.

    Read the article

  • Sqlite issues with HTC Desire HD

    - by Greg
    Recently I have been getting a lot of complaints about the HTC Desire series and it failing while invoking sql statements. I have received reports from users with log snapshots that contain the following. I/Database( 2348): sqlite returned: error code = 8, msg = statement aborts at 1: [pragma journal_mode = WAL;] E/Database( 2348): sqlite3_exec to set journal_mode of /data/data/my.app.package/files/localized_db_en_uk-1.sqlite to WAL failed followed by my app basically burning in flames because the call to open the database results in a serious runtime error that manifests itself as the cursor being left open. There shouldn't be a cursor at this point as we are trying to open it. This only occurs with the HTC Desire HD and Z. My code basically does the following (changed a little to isolate the problem area). SQLiteDatabase db; String dbName; public SQLiteDatabase loadDb(Context context) throws IOException{ //Close any old db handle if (db != null && db.isOpen()) { db.close(); } // The name of the database to use from the bundled assets. String dbAsset = "/asset_dir/"+dbName+".sqlite"; InputStream myInput = context.getAssets().open(dbAsset, Context.MODE_PRIVATE); // Create a file in the app's file directory since sqlite requires a path // Not ideal but we will copy the file out of our bundled assets and open it // it in another location. FileOutputStream myOutput = context.openFileOutput(dbName, Context.MODE_PRIVATE); byte[] buffer = new byte[1024]; int length; while ((length = myInput.read(buffer)) > 0) { myOutput.write(buffer, 0, length); } // Close the streams myOutput.flush(); // Guarantee Write! myOutput.getFD().sync(); myOutput.close(); myInput.close(); // Not grab the newly written file File fileObj = context.getFileStreamPath(dbName); // and open the database return db = SQLiteDatabase.openDatabase(fileObj.getAbsolutePath(), null, SQLiteDatabase.OPEN_READONLY | SQLiteDatabase.NO_LOCALIZED_COLLATORS); } Sadly this phone is only available in the UK and I don't have one in my inventory. I am only getting reports of this type from the HTC Desire series. I don't know what changed as this code has been working without any problem. Is there something I am missing?

    Read the article

  • mysql subquery strangely slow

    - by aviv
    I have a query that selects from another sub-query select. While the two queries look almost the same, the second query (in this sample) runs much slower: SELECT user.id ,user.first_name -- user.* FROM user WHERE user.id IN (SELECT ref_id FROM education WHERE ref_type='user' AND education.institute_id='58' AND education.institute_type='1' ); This query takes 1.2s. EXPLAIN on this query gives:
    id select_type table type possible_keys key key_len ref rows Extra
    1 PRIMARY user index first_name 152 141192 Using where; Using index
    2 DEPENDENT SUBQUERY education index_subquery ref_type,ref_id,institute_id,institute_type,ref_type_2 ref_id 4 func 1 Using where
    The second query: SELECT -- user.id -- user.first_name user.* FROM user WHERE user.id IN (SELECT ref_id FROM education WHERE ref_type='user' AND education.institute_id='58' AND education.institute_type='1' ); takes 45sec to run, with this EXPLAIN:
    id select_type table type possible_keys key key_len ref rows Extra
    1 PRIMARY user ALL 141192 Using where
    2 DEPENDENT SUBQUERY education index_subquery ref_type,ref_id,institute_id,institute_type,ref_type_2 ref_id 4 func 1 Using where
    Why is it slower when I filter only by indexed fields? Why do both queries scan the full user table? Any ideas how to improve? Thanks.
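
    In MySQL versions of that era, IN (subquery) is executed as a dependent subquery that runs once per row of user; the first query survives this because it can be answered entirely from the first_name index (Using index), while user.* forces a full scan of all 141192 table rows. A rewrite as a join against a derived table (not from the original post, but a common workaround) usually avoids it:

        SELECT u.*
        FROM user u
        JOIN (SELECT DISTINCT ref_id
              FROM education
              WHERE ref_type = 'user'
                AND institute_id = '58'
                AND institute_type = '1') e
          ON e.ref_id = u.id;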

    Read the article

  • DOMAIN REDIRECT PROBLEM WITH JQUERY / JAVASCRIPT

    - by GiovanniDema
    Hi guys, first time here. I got a strange problem. I have a fullscreen image scaler javascript (as on the GOTOCHINA website) that works very well on my website. Then I purchased a domain redirect pointing at my website, and when redirecting, Internet Explorer 7 and Internet Explorer 8 suddenly give me this error: Message: is not a valid argument. Line: 34 Char: 17 URI: http://*****/scaler.js The script is var db=document.body; var imag=document.getElementById('wallpaper'); var dbsize={}; var imgsrc=imag.src; var keyStop=function(e){ var e=window.event||e||{}; var tag=e.target.tagName.toLowerCase(); if(tag!='textarea'&&!(tag=='input'&&(e.target.type=='text'||e.target.type=='password'))){ if(e.keyCode==32||e.keyCode==39||e.keyCode==40){ if(e.preventDefault)e.preventDefault(); else e.returnValue=false; } } } if(this.addEventListener)window.addEventListener('keydown',keyStop,false); else window.attachEvent('onkeydown',keyStop); setInterval(function(){ window.scrollTo(0,0); if(imag.complete){ if(db.clientWidth!=dbsize.w||db.clientHeight!=dbsize.h||imag.src!=imgsrc){ imgsrc=imag.src; var dbsizew=db.clientWidth; var dbsizeh=db.clientHeight; var newwidth=Math.round(dbsizeh*(imag.offsetWidth/imag.offsetHeight)); var nextvar=dbsizew>newwidth?dbsizew:newwidth; imag.style.width=nextvar+'px'; } } },300); In other words, when I open the official website everything works correctly. When I open the redirect domain pointing at the official website... the previous error appears. The line is exactly this - imag.style.width=nextvar+'px'; Thanks in advance, Giovanni

    Read the article

  • Spring + iBatis + Hessian caching

    - by ILya
    Hi. I have a Hessian service on Spring + iBatis working on Tomcat. I'm wondering how to cache results... I've made the following config in my sqlmap file: <sqlMap namespace="Account"> <cacheModel id="accountCache" type="MEMORY" readOnly="true" serialize="false"> <flushInterval hours="24"/> <flushOnExecute statement="Account.addAccount"/> <flushOnExecute statement="Account.deleteAccount"/> <property name="reference-type" value="STRONG" /> </cacheModel> <typeAlias alias="Account" type="domain.Account" /> <select id="getAccounts" resultClass="Account" cacheModel="accountCache"> fix all; select id, name, pin from accounts; </select> <select id="getAccount" parameterClass="Long" resultClass="Account" cacheModel="accountCache"> fix all; select id, name, pin from accounts where id=#id#; </select> <insert id="addAccount" parameterClass="Account"> fix all; insert into accounts (id, name, pin) values (#id#, #name#, #pin#); </insert> <delete id="deleteAccount" parameterClass="Long"> fix all; delete from accounts where id = #id#; </delete> </sqlMap> Then i've done some tests... I have a hessian client application. I'm calling getAccounts several times and after each call it's a query to DBMS. How to make my service to query DBMS only a first time (after server restart) getAccounts called and for the following calls to use a cache?

    Read the article

  • How To Run Postgres locally

    - by Rohit Rayudu
    I read the Postgres docs for Flask and they said that to run Postgres you should have the following code: app = Flask(__name__) app.config['SQLALCHEMY_DATABASE_URI'] = 'postgresql://localhost/[YOUR_DB_NAME]' db = SQLAlchemy(app) How do I know my database name? I wrote db as the name, but I got an error: sqlalchemy.exc.OperationalError: (OperationalError) FATAL: database "[db]" does not exist I'm running Flask on Heroku, if that helps.
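
    Locally the database has to exist before SQLAlchemy can connect, and its name is simply whatever it was created as (a sketch, not from the quoted docs; my_flask_db is a made-up name):

        -- run in psql; my_flask_db is a hypothetical database name
        CREATE DATABASE my_flask_db;

    The URI would then be postgresql://localhost/my_flask_db. On Heroku the connection string is supplied in the DATABASE_URL config variable rather than being a name you pick yourself.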

    Read the article

  • SQL Server: Mitigating schema changes/upgrades

    - by bradhe
    I haven't spent a ton of time researching this yet, mostly looking for best practices on upgrading/changing DB schemas. We're actively developing a new product and as such we often have additions or changes to our DB schema. We also have many copies of the DB -- one for the test environment, one for the prod environment, dev environments, you name it. We don't really want to have to blow away test data every time we want to make a change to the DB. Are there good ways of automating this or handling this? None of us have really ever had to deal with this so...
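
    One low-tech way to keep several copies of a database in step without blowing away test data (a sketch of the general idea, not a specific product recommendation; dbo.Orders and ShippedDate are made-up names) is to ship every schema change as an idempotent script that checks the catalog before altering anything, and run the accumulated scripts against each environment:

        -- hypothetical migration step: add a column only if it is missing
        IF NOT EXISTS (SELECT 1
                       FROM sys.columns
                       WHERE object_id = OBJECT_ID('dbo.Orders')
                         AND name = 'ShippedDate')
        BEGIN
            ALTER TABLE dbo.Orders ADD ShippedDate DATETIME NULL;
        END

    Schema-comparison tools automate the same diff-and-script approach, but even hand-written scripts like this are easy to keep in source control alongside the application code.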

    Read the article

  • Trouble with Code First DatabaseGenerated Composite Primary Key

    - by Nick Fleetwood
    This is a tad complicated, and please, I know all the arguments against natural PK's, so we don't need to have that discussion. using VS2012/MVC4/C#/CodeFirst So, the PK is based on the date and a corresponding digit together. So, a few rows created today would be like this: 20131019 1 20131019 2 And one created tomorrow: 20131020 1 This has to be automatically generated using C# or as a trigger or whatever. The user wouldn't input this. I did come up with a solution, but I'm having problems with it, and I'm a little stuck, hence the question. So, I have a model: public class MainOne { //[Key] //public int ID { get; set; } [Key][Column(Order=1)] [DatabaseGenerated(DatabaseGeneratedOption.None)] public string DocketDate { get; set; } [Key][Column(Order=2)] [DatabaseGenerated(DatabaseGeneratedOption.None)] public string DocketNumber { get; set; } [StringLength(3, ErrorMessage = "Corp Code must be three letters")] public string CorpCode { get; set; } [StringLength(4, ErrorMessage = "Corp Code must be four letters")] public string DocketStatus { get; set; } } After I finish the model, I create a new controller and views using VS2012 scaffolding. Then, what I'm doing is debugging to create the database, then adding the following instead of trigger after Code First creates the DB [I don't know if this is correct procedure]: CREATE TRIGGER AutoIncrement_Trigger ON [dbo].[MainOnes] instead OF INSERT AS BEGIN DECLARE @number INT SELECT @number=COUNT(*) FROM [dbo].[MainOnes] WHERE [DocketDate] = CONVERT(DATE, GETDATE()) INSERT INTO [dbo].[MainOnes] (DocketDate,DocketNumber,CorpCode,DocketStatus) SELECT (CONVERT(DATE, GETDATE ())),(@number+1),inserted.CorpCode,inserted.DocketStatus FROM inserted END And when I try to create a record, this is the error I'm getting: The changes to the database were committed successfully, but an error occurred while updating the object context. The ObjectContext might be in an inconsistent state. Inner exception message: The object state cannot be changed. This exception may result from one or more of the primary key properties being set to null. Non-Added objects cannot have null primary key values. See inner exception for details. Now, what's interesting to me, is that after I stop debugging and I start again, everything is perfect. The trigger fired perfectly, so the composite PK is unique and perfect, and the data in other columns is intact. My guess is that EF is confused by the fact that there is seemingly no value for the PK until AFTER an insert command is given. Also, appearing to back this theory, is that when I try to edit on of the rows, in debug, I get the following error: The number of primary key values passed must match number of primary key values defined on the entity. Same error occurs if I try to pull the 'Details' or 'Delete' function. Any solution or ideas on how to pull this off? I'm pretty open to anything, even creating a hidden int PK. But it would seem redundant. EDIT 21OCT13 [HttpPost] public ActionResult Create(MainOne mainone) { if (ModelState.IsValid) { var countId = db.MainOnes.Count(d => d.DocketDate == mainone.DocketNumber); //assuming that the date field already has a value mainone.DocketNumber = countId + 1; //Cannot implicitly convert type int to string db.MainOnes.Add(mainone); db.SaveChanges(); return RedirectToAction("Index"); } return View(mainone); } EDIT 21OCT2013 FINAL CODE SOLUTION For anyone like me, who is constantly searching for clear and complete solutions. 
if (ModelState.IsValid) { String udate = DateTime.UtcNow.ToString("yyyy-MM-dd"); mainone.DocketDate = udate; var ddate = db.MainOnes.Count(d => d.DocketDate == mainone.DocketDate); //assuming that the date field already has a value mainone.DocketNumber = ddate + 1; db.MainOnes.Add(mainone); db.SaveChanges(); return RedirectToAction("Index"); }

    Read the article

  • Have to click twice to submit the form

    - by phil
    Intended function: require user to select an option from the drop down menu. After user clicks submit button, validate if an option is selected. Display error message and not submit the form if user fails to select. Otherwise submit the form. Problem: After select an option, button has to be clicked twice to submit the form. I have no clue at all.. <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <script src="jquery-1.4.2.min.js" type="text/javascript"></script> <head> <meta http-equiv="Content-Type" content="text/html; charset=utf-8" /> <style> p{display: none;} </style> </head> <script> $(function(){ // language as an array var language=['Arabic','Cantonese','Chinese','English','French','German','Greek','Hebrew','Hindi','Italian','Japanese','Korean','Malay','Polish','Portuguese','Russian','Spanish','Thai','Turkish','Urdu','Vietnamese']; $('#muyu').append('<option value=0>Select</option>'); //loop through array for (i in language) //js unique statement for iterate array { $('#muyu').append($('<option>',{id:'muyu'+i,val:language[i], html:language[i]})) } $('form').submit(function(){ alert('I am being called!'); // check if submit event is triggered if ( $('#muyu').val()==0 ) {$('#muyu_error').show(); } else {$('#muyu_error').hide(); return true;} return false; }) }) </script> <form method="post" action="match.php"> I am fluent in <select name='muyu' id='muyu'></select> <p id='muyu_error'>Tell us your native language</p> <input type="submit" value="Go"> </form>

    Read the article

  • Delete query in Linq

    - by Ani
    I have this simple code but it shows an error. I don't know where I am going wrong. It shows the error on the last line, "DeleteOnSubmit": linq_testDataContext db = new linq_testDataContext(); var remove = from aremove in db.logins where aremove.username == userNameString && aremove.Password == pwdString select aremove; db.logins.DeleteOnSubmit(remove); Thanks, Ani

    Read the article

  • getJSON for dropbox data

    - by gheil.apl
    Although I used getJSON in http://jsbin.com/dbJSON/edit I have not been able to connect with any of my own made-up data. I tried 4, and the example at Flickr for "cats". Only the latter worked... this is the output: {assoc: null,assoc.js: null,stub: null,stub.js: null,cat: [object Object]} I am at that "base", as I did get the image there, but db.tgu.ca/repsychal/poems/10/0512-g2g/assoc.json db.tgu.ca/repsychal/poems/10/0512-g2g/assoc.js db.tgu.ca/repsychal/poems/10/0512-g2g/stub.json db.tgu.ca/repsychal/poems/10/0512-g2g/stub.js were all invisible==null! (They are all URLs; just put h t t p //: in front ... there is a restriction on the number of URLs in a post.) How do I get "my" data into the page?

    Read the article

  • Unique Constraint At Data Level in GAE

    - by Ngu Soon Hui
    It seems that the unique constraint is not natively supported in GAE, although one can enforce unique check before putting an object to store. But that was in January 2009, what about now? Can I specify unique constraint on a column during schema creation? i.e. class Account(db.Model): name = db.StringProperty() email = db.StringProperty() as unique # something like this @classmethod def create(cls, name, email): a = Account(name=name, email=email) a.put() return a

    Read the article

  • convert xml document to comma delimited (CSV) file using xslt stylesheet.

    - by Brad H
    I need some assistance converting an xml document to a CSV file using an xslt stylesheet. I am trying to use the following xsl and I can't seem to get it right. I want my comma delimited file to include column headings, followed by the data. My biggest issues are removing the final comma after the last item and inserting a carriage return so each group of data appears on a separate line. I have been using XML Notepad. <xsl:template match="/"> <xsl:element name="table"> <xsl:apply-templates select="/*/*[1]" mode="header" /> <xsl:apply-templates select="/*/*" mode="row" /> </xsl:element> </xsl:template> <xsl:template match="*" mode="header"> <xsl:element name="tr"> <xsl:apply-templates select="./*" mode="column" /> </xsl:element> </xsl:template> <xsl:template match="*" mode="row"> <xsl:element name="tr"> <xsl:apply-templates select="./*" mode="node" /> </xsl:element> </xsl:template> <xsl:template match="*" mode="column"> <xsl:element name="th"> <xsl:value-of select="translate(name(.),'qwertyuiopasdfghjklzxcvbnm_','QWERTYUIOPASDFGHJKLZXCVBNM ')" /> </xsl:element>, </xsl:template> <xsl:template match="*" mode="node"> <xsl:element name="td"> <xsl:value-of select="." /> </xsl:element>, </xsl:template>

    Read the article

  • Echo mysql results in a loop?

    - by Roy D. Porter
    I am using turn.js to make a book. Every div within the 'deathnote' div becomes a new page. <div id="deathnote"> //starts book <div style="background-image:url(images/coverpage.jpg);"></div> //creates new page <div style="background-image:url(images/paper.jpg);"></div> //creates new page <div style="background-image:url(images/paper.jpg);"></div> //creates new page </div> //ends book What I am doing is trying to get 3 'content' (content being a name and cause of death) divs onto 1 page, and then generate a new page. So here is what i want: <div id="deathnote"> //starts book <div style="background-image:url(images/coverpage.jpg);"></div> //creates new page <div style="background-image:url(images/paper.jpg);"></div> //creates new page <div style="background-image:url(images/paper.jpg);"> //creates new page but leaves it open <div> CONTENT </div> <div> CONTENT </div> <div> CONTENT </div> </div> //ends the page </div> //ends book Seems simple enough, however the content is data from a MySQL DB, so i have to echo it in using PHP. Here is what i have so far <div id="deathnote"> <div style="background-image:url(images/coverpage.jpg);"></div> <div style="background-image:url(images/paper.jpg);"></div> <div style="background-image:url(images/paper.jpg);"></div> <div style="background-image:url(images/paper.jpg);"></div> <div style="background-image:url(images/paper.jpg);"></div> <div style="background-image:url(images/paper.jpg);"></div> <?php $pagecount = 0; $db = new mysqli('localhost', 'username', 'passw', 'DB'); if($db->connect_errno > 0){ die('Unable to connect to database [' . $db->connect_error . ']'); } $sql = <<<SQL SELECT * FROM `TABLE` SQL; if(!$result = $db->query($sql)){ die('There was an error running the query [' . $db->error . ']'); } //IGNORE ALL OF THE GARBAGE ABOVE. IT IS SIMPLE CONNECTING SCRIPT THAT I KNOW WORKS //THE METHOD I AM HAVING TROUBLE WITH IS BELOW $pagecount = 0; while($row = $result->fetch_assoc()){ //GETS THE VALUE (and makes sure it isn't nothing echo '<div style="background-image:url(images/paper.jpg);">'; //THIS OPENS A NEW PAGE while ($pagecount !== 3) { //KEEPS COUNT OF HOW MUCH CONTENT DIVS IS ON THE PAGE while($row = $result->fetch_assoc()){ //START A CONTENT DIV echo '<div class="content"><div class="name">' . $row['victim'] . '</div><div class="cod">' . $row['cod'] . '</div></div>'; //END A CONTENT DIV $pagecount++; //UP THE PAGE COUNT } } $pagecount=0; //PUT IT BACK TO 0 echo '</div>'; //END PAGE } $db->close(); ?> <div style="background-image:url(images/backpage.jpg);"></div> //BACK PAGE </div> At the moment i seem to be causing and infinite loop so the page won't load. The problem resides within the while loops. Any help is greatly appreciated. Thanks in advance guys. :)

    Read the article

  • is there a better way to write this frankenstein LINQ query that searches for values in a child table?

    - by MRV
    I have a table of Users and a one to many UserSkills table. I need to be able to search for users based on skills. This query takes a list of desired skills and searches for users who have those skills. I want to sort the users based on the number of desired skills they posses. So if a users only has 1 of 3 desired skills he will be further down the list than the user who has 3 of 3 desired skills. I start with my comma separated list of skill IDs that are being searched for: List<short> searchedSkillsRaw = skills.Value.Split(',').Select(i => short.Parse(i)).ToList(); I then filter out only the types of users that are searchable: List<User> users = (from u in db.Users where u.Verified == true && u.Level > 0 && u.Type == 1 && (u.UserDetail.City == city.SelectedValue || u.UserDetail.City == null) select u).ToList(); and then comes the crazy part: var fUsers = from u in users select new { u.Id, u.FirstName, u.LastName, u.UserName, UserPhone = u.UserDetail.Phone, UserSkills = (from uskills in u.UserSkills join skillsJoin in configSkills on uskills.SkillId equals skillsJoin.ValueIdInt into tempSkills from skillsJoin in tempSkills.DefaultIfEmpty() where uskills.UserId == u.Id select new { SkillId = uskills.SkillId, SkillName = skillsJoin.Name, SkillNameFound = searchedSkillsRaw.Contains(uskills.SkillId) }), UserSkillsFound = (from uskills in u.UserSkills where uskills.UserId == u.Id && searchedSkillsRaw.Contains(uskills.SkillId) select uskills.UserId).Count() } into userResults where userResults.UserSkillsFound > 0 orderby userResults.UserSkillsFound descending select userResults; and this works! But it seems super bloated and inefficient to me. Especially the secondary part that counts the number of skills found. Thanks for any advice you can give. --r
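
    For comparison, the set operation being built here is compact to state directly in SQL (a rough sketch; Users/UserSkills and the skill ids are stand-in names and values), which can be a useful sanity check on how much of the shaping really needs to happen inside the LINQ query versus afterwards:

        SELECT u.Id, u.FirstName, u.LastName,
               COUNT(us.SkillId) AS MatchedSkills
        FROM Users u
        JOIN UserSkills us ON us.UserId = u.Id
        WHERE us.SkillId IN (1, 2, 3)      -- the searched skill ids (hypothetical values)
        GROUP BY u.Id, u.FirstName, u.LastName
        ORDER BY MatchedSkills DESC;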

    Read the article

  • how can i pass parameter to linq query

    - by girish
    I want to pass a parameter to a LINQ query... public IEnumerable GetPhotos() { PhotoDBDataContext db = new PhotoDBDataContext(); var tProduct = db.Photos; var query = from p in db.Photos orderby p.PhotoId descending select new { p.Album, p.AlbumId, p.Description, p.Photographer, p.PhotographerId, p.PhotoId, p.Tags, p.Thumbnail, p.Url }; return query; } In the above example "orderby p.PhotoId descending" is used; I want to use a parameter in place of p.PhotoId. Is that possible?

    Read the article

  • Trouble using South with Django and Heroku

    - by Dan
    I had an existing Django project that I've just added South to. I ran syncdb locally. I ran manage.py schemamigration app_name locally I ran manage.py migrate app_name --fake locally I commit and pushed to heroku master I ran syncdb on heroku I ran manage.py schemamigration app_name on heroku I ran manage.py migrate app_name on heroku I then receive this: $ heroku run python notecard/manage.py migrate notecards Running python notecard/manage.py migrate notecards attached to terminal... up, run.1 Running migrations for notecards: - Migrating forwards to 0005_initial. > notecards:0003_initial Traceback (most recent call last): File "notecard/manage.py", line 14, in <module> execute_manager(settings) File "/app/lib/python2.7/site-packages/django/core/management/__init__.py", line 438, in execute_manager utility.execute() File "/app/lib/python2.7/site-packages/django/core/management/__init__.py", line 379, in execute self.fetch_command(subcommand).run_from_argv(self.argv) File "/app/lib/python2.7/site-packages/django/core/management/base.py", line 191, in run_from_argv self.execute(*args, **options.__dict__) File "/app/lib/python2.7/site-packages/django/core/management/base.py", line 220, in execute output = self.handle(*args, **options) File "/app/lib/python2.7/site-packages/south/management/commands/migrate.py", line 105, in handle ignore_ghosts = ignore_ghosts, File "/app/lib/python2.7/site-packages/south/migration/__init__.py", line 191, in migrate_app success = migrator.migrate_many(target, workplan, database) File "/app/lib/python2.7/site-packages/south/migration/migrators.py", line 221, in migrate_many result = migrator.__class__.migrate_many(migrator, target, migrations, database) File "/app/lib/python2.7/site-packages/south/migration/migrators.py", line 292, in migrate_many result = self.migrate(migration, database) File "/app/lib/python2.7/site-packages/south/migration/migrators.py", line 125, in migrate result = self.run(migration) File "/app/lib/python2.7/site-packages/south/migration/migrators.py", line 99, in run return self.run_migration(migration) File "/app/lib/python2.7/site-packages/south/migration/migrators.py", line 81, in run_migration migration_function() File "/app/lib/python2.7/site-packages/south/migration/migrators.py", line 57, in <lambda> return (lambda: direction(orm)) File "/app/notecard/notecards/migrations/0003_initial.py", line 15, in forwards ('user', self.gf('django.db.models.fields.related.ForeignKey')(to=orm['auth.User'])), File "/app/lib/python2.7/site-packages/south/db/generic.py", line 226, in create_table ', '.join([col for col in columns if col]), File "/app/lib/python2.7/site-packages/south/db/generic.py", line 150, in execute cursor.execute(sql, params) File "/app/lib/python2.7/site-packages/django/db/backends/util.py", line 34, in execute return self.cursor.execute(sql, params) File "/app/lib/python2.7/site-packages/django/db/backends/postgresql_psycopg2/base.py", line 44, in execute return self.cursor.execute(query, args) django.db.utils.DatabaseError: relation "notecards_semester" already exists I have 3 models. Section, Semester, and Notecards. I've added one field to the Notecards model and I cannot get it added on Heroku. Thank you.

    Read the article
