Search Results

Search found 32970 results on 1319 pages for 'zend db select'.

Page 444/1319

  • Iterate through every node in an XML document

    - by Rachel
    Hi, I am trying to iterate through every node in an XML document, be it an element node, text node or comment. With the XSL below, the very first copy-of prints the complete XML. How do I copy only the very first node in $nodes and then call the process-nodes template again with that first node removed for the next iteration?

        <?xml version='1.0'?>
        <xsl:stylesheet version="2.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
          <xsl:template match="/">
            <xsl:call-template name="process-nodes">
              <xsl:with-param name="nodes" select="//node()" as="node()*"/>
            </xsl:call-template>
          </xsl:template>
          <xsl:template name="process-nodes">
            <xsl:param name="nodes" as="node()*" />
            <xsl:copy-of select="$nodes[1]"/>
            <xsl:if test="$nodes">
              <xsl:call-template name="process-nodes">
                <xsl:with-param name="nodes" select="remove($nodes, 1)" />
              </xsl:call-template>
            </xsl:if>
          </xsl:template>
        </xsl:stylesheet>

    Note: I am looking to fix the issue in this kind of implementation rather than changing the template match to <xsl:template match="@* | node()">, as I need some processing which requires this approach. Thanks.
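
    For comparison only, and not the in-XSLT fix the asker wants: the same walk over every node (elements, text and comments) can be sketched in Python with lxml, where //node() behaves like the stylesheet's parameter. The input document here is invented.

        import lxml.etree as etree

        # Invented input standing in for the asker's XML.
        doc = etree.fromstring("<root>hello<!-- note --><child>world</child></root>")

        # //node() selects element, text and comment nodes, as in the stylesheet.
        nodes = doc.xpath("//node()")

        def process_nodes(nodes):
            # Handle the first node, then recurse on the rest, mirroring the
            # process-nodes template and remove($nodes, 1).
            if not nodes:
                return
            first = nodes[0]
            if isinstance(first, str):          # text node
                print("text:", first)
            else:                               # element or comment node
                print(type(first).__name__ + ":", etree.tostring(first))
            process_nodes(nodes[1:])

        process_nodes(nodes)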

    Read the article

  • Specify directory for tasks scanning in Netbeans

    - by hsz
    Hello ! Is it possible in Netbeans to specify a list of directories that it should scan for tasks (@todo) ? I want to be able to exclude some subdirectories from my project - for example /lib/Zend - and only allow to scan /lib/Cms /config /app ... Is it possible to do this ?

    Read the article

  • delphi "Invalid use of keyword" in TQuery

    - by Baldric
    I'm trying to populate a TDBGrid with the results of the following TQuery against the file Journal.db:

        select * from Journal where Journal.where = "RainPump"

    I've tried both Journal."Where" and Journal.[Where] to no avail. I've also tried select Journal.[Where] as "Location", with the same result. Journal.db is a file created by a third party and I am unable to change the field names. The problem is that the field I'm interested in is called 'where', which understandably causes the above error. How do I reference this field without causing the BDE (presumably) to explode?

    Read the article

  • In datastore, confused on how to pass a list of key_names as an argument to somemodel.get_or_insert()

    - by indiehacker
    Are there examples of how to pass a list of key_names to Model.get_or_insert()? My problem: with a method of ParentLayer I want to make the children. The key_names of the new (or editable) entities of class Child will come from a list such as:

        namesList = ["picture1", "picture2"]

    so I should be able to build a list of keys with a method of the parent class as follows:

        class ParentLayer(db.Model):
            def getOrMakeChildren(self, namesList):
                keyslist = [db.Key.from_path('Child', name, parent=self.key())
                            for name in namesList]

    The problem is the next step, where I simply want to get_or_insert entities based on the keyslist defined above:

        childrenEntitiesList = Child.get_or_insert(keyslist)  # doesn't work?

    None of the attempts below worked either:

        #childrenEntitiesList = Child.get_or_insert(keyslist, parent=u'TEST')
        #childrenEntitiesList = Child.get_or_insert(keyslist, parent=self.key().name())
        #childrenEntitiesList = Child.get_or_insert(keyslist, parent=self.key())
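
    A sketch of one workaround, assuming the classic App Engine db API where get_or_insert() accepts a single key_name per call (plus keyword arguments such as parent), so the list has to be walked in a loop. Child here is a stand-in for the asker's model.

        from google.appengine.ext import db

        class Child(db.Model):        # stand-in for the asker's Child model
            pass

        class ParentLayer(db.Model):
            def getOrMakeChildren(self, namesList):
                # get_or_insert() takes one key_name at a time, so collect the
                # entities name by name.
                children = []
                for name in namesList:
                    # parent= keeps each Child in this ParentLayer's entity group,
                    # matching db.Key.from_path('Child', name, parent=self.key()).
                    children.append(Child.get_or_insert(name, parent=self.key()))
                return children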

    Read the article

  • How to return a record from function executed by INSERT/UPDATE rule?

    - by seas
    I have the following schema for my database:

        create sequence data_sequence;

        create table data_table (
          id integer primary key,
          field varchar(100)
        );

        create view data_view as
          select id, field from data_table;

        create function data_insert(_new data_view) returns data_view as $$
        declare
          _id integer;
          _result data_view%rowtype;
        begin
          _id := nextval('data_sequence');
          insert into data_table(id, field) values(_id, _new.field);
          select * into _result from data_view where id = _id;
          return _result;
        end;
        $$ language plpgsql;

        create rule insert as on insert to data_view do instead select data_insert(new);

    Then I type in psql:

        insert into data_view(field) values('abc');

    I would like to see something like:

         id | field
        ----+-------
          1 | abc

    Instead I see:

         data_insert
        -------------
         (1, "abc")

    Is it possible to fix this somehow? Thanks for any ideas. The ultimate idea is to use this in other functions, so that I could obtain the id of the just-inserted record without selecting for it from scratch. Something like:

        insert into data_view(field) values('abc') returning id into my_variable

    would be nice, but doesn't work, with the error:

        ERROR:  cannot perform INSERT RETURNING on relation "data_view"
        HINT:  You need an unconditional ON INSERT DO INSTEAD rule with a RETURNING clause.

    I don't really understand that HINT. I use PostgreSQL 8.4.
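
    On the HINT, a sketch only (the connection string is made up, the rule name is the one from the question, and this assumes PostgreSQL 8.4 rule behaviour): replacing the SELECT-based rule with a single unconditional DO INSTEAD INSERT that carries its own RETURNING clause is what lets INSERT ... RETURNING work against the view.

        import psycopg2

        # Hypothetical connection parameters.
        conn = psycopg2.connect("dbname=test user=postgres")
        cur = conn.cursor()

        # Drop the SELECT-based rule first; only one unconditional DO INSTEAD
        # rule (the one carrying RETURNING) may remain.
        cur.execute('drop rule "insert" on data_view')

        # The rule itself is an INSERT with a RETURNING list matching the view.
        cur.execute("""
            create rule data_view_insert as on insert to data_view do instead
                insert into data_table(id, field)
                values (nextval('data_sequence'), new.field)
                returning data_table.id, data_table.field
        """)

        # Now the view hands back the generated id directly.
        cur.execute("insert into data_view(field) values (%s) returning id", ("abc",))
        new_id = cur.fetchone()[0]
        conn.commit()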

    Read the article

  • Cannot implicitly convert type ... to ... problem

    - by Younes
    I have this code:

        public static IEnumerable<dcCustomer> searchCustomer(string Companyname)
        {
            TestdbDataContext db = new TestdbDataContext();
            IEnumerable<dcCustomer> myCustomerList = (from Customer res in db.Customers
                                                      where res.CompanyName == Companyname
                                                      select res);
            return myCustomerList;
        }

    Whatever I try, I keep getting the convert error:

        Error 1  Cannot implicitly convert type 'System.Collections.Generic.IEnumerable<ConnectionWeb.Customer>' to 'System.Collections.Generic.IEnumerable<ConnectionWeb.DAL.dcCustomer>'. An explicit conversion exists (are you missing a cast?)  \\srv01\home$\Z****\Visual Studio 2008\Projects\ConnectionWeb\ConnectionWeb\DAL\dcCustomer.cs  63  20  ConnectionWeb

    I want myCustomerList to keep the values in an enumerable and return it.

    Read the article

  • How to add columns to sqlite3 in Python?

    - by user291071
    I know this is simple but I can't get it working! I have no problems with insert, update or select commands. Let's say I have a dictionary and I want to populate a table with the column names in the dictionary. What is wrong with my one line where I add a column?

        ## create
        con = sqlite3.connect('linksauthor.db')
        c = con.cursor()
        c.execute('''create table linksauthor (links text)''')
        con.commit()
        c.close()

        ## populate author columns
        allauthors = {'joe': 1, 'bla': 2, 'mo': 3}
        con = sqlite3.connect('linksauthor.db')
        c = con.cursor()
        for author in allauthors:
            print author
            print type(author)
            c.execute("alter table linksauthor add column '%s' 'float'")%author  ## what is wrong here?
        con.commit()
        c.close()
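
    One reading of the bug, sketched with the asker's names: the % formatting is applied to the return value of execute() instead of to the SQL string, and column names cannot be bound as ? parameters, so the identifier has to be interpolated into the statement before it is executed.

        import sqlite3

        con = sqlite3.connect('linksauthor.db')
        c = con.cursor()
        c.execute('create table if not exists linksauthor (links text)')

        allauthors = {'joe': 1, 'bla': 2, 'mo': 3}
        for author in allauthors:
            # Format the identifier into the SQL first, then execute it; FLOAT is
            # the column type and is not quoted.
            c.execute('alter table linksauthor add column "%s" float' % author)

        con.commit()
        c.close()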

    Read the article

  • PHP ORM library based on the Data Mapper pattern

    - by Matthieu
    For a PHP application with a complex domain model, I don't want to use the Active Record pattern; I need the Data Mapper pattern instead (as presented in Zend Framework). Do you know any library that could help me with the ORM part, or else a link to documentation on "how to do it right"? Thanks
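
    For illustration of the pattern itself (sketched in Python rather than PHP, with an invented customers schema): the domain object knows nothing about persistence, and a separate mapper owns all of the SQL.

        import sqlite3

        class Customer:
            """Plain domain object: no SQL, no knowledge of the database."""
            def __init__(self, customer_id, name):
                self.customer_id = customer_id
                self.name = name

        class CustomerMapper:
            """Data Mapper: translates between Customer objects and table rows."""
            def __init__(self, conn):
                self.conn = conn

            def find(self, customer_id):
                row = self.conn.execute(
                    "select id, name from customers where id = ?", (customer_id,)
                ).fetchone()
                return Customer(row[0], row[1]) if row else None

            def insert(self, customer):
                cur = self.conn.execute(
                    "insert into customers (name) values (?)", (customer.name,)
                )
                customer.customer_id = cur.lastrowid
                return customer

        if __name__ == "__main__":
            conn = sqlite3.connect(":memory:")
            conn.execute("create table customers (id integer primary key, name text)")
            mapper = CustomerMapper(conn)
            alice = mapper.insert(Customer(None, "Alice"))
            print(mapper.find(alice.customer_id).name)   # prints Alice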

    Read the article

  • How to get HTTP GET request params in a JSF 2.0 backing bean?

    - by Marko
    Hi all, I'm having trouble passing HTTP GET parameters to a JSF 2.0 backing bean. The user will invoke a URL with some params containing the id of an entity, which is later used to persist some other entity in the db. The whole process can be summarized as follows:

        1. user opens page http://www.somhost.com/JsfApp/step-one.xhtml?sid=1
        2. user fills in some data and goes to the next page
        3. user fills in some more data and then the entity is saved to the db with the sid param from step one.

    I have a session-scoped backing bean that holds data from all the pages (steps), but I can't pass the param to a bean property. Any ideas?

    Read the article

  • Which nonclustered index would be better to create on SQL Server?

    - by Junior Mayhé
    Here I am studying nonclustered indexes in SQL Server Management Studio. I've created a table with more than 1 million records. This table has a primary key.

        SELECT CustomerName FROM Customers

    The execution plan shows me: I/O cost = 3.45646, Operator cost = 4.57715. For the first attempt to improve performance, I've created a nonclustered index for this table:

        CREATE NONCLUSTERED INDEX [IX_CustomerID_CustomerName] ON [dbo].[Customers]
        (
            [CustomerId] ASC,
            [CustomerName] ASC
        ) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF,
                IGNORE_DUP_KEY = OFF, DROP_EXISTING = OFF, ONLINE = OFF,
                ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
        GO

    With this first try, I've executed the select statement and the execution plan shows me: I/O cost = 2.79942, Operator cost = 3.92001. For the second try, I've deleted this nonclustered index in order to create a new one.

        CREATE NONCLUSTERED INDEX [IX_CategoryName] ON [dbo].[Categories]
        (
            [CategoryId] ASC
        )
        INCLUDE ( [CategoryName]) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF,
                SORT_IN_TEMPDB = OFF, IGNORE_DUP_KEY = OFF, DROP_EXISTING = OFF, ONLINE = OFF,
                ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
        GO

    With this second try, I've executed the select statement and the execution plan shows me the same result: I/O cost = 2.79942, Operator cost = 3.92001. Am I doing something wrong or is this expected? Shall I use the first nonclustered index with two fields, or the second nonclustered index with one field (CategoryId), including the second field (CategoryName)?

    Read the article

  • PHP PDO fetch null

    - by Jacob
    How do you check if a column's value is null? Example code:

        $db = DBCxn::getCxn();
        $sql = "SELECT exercise_id, author_id, submission, result, submission_time,
                       total_rating_votes, total_rating_values
                FROM submissions
                LEFT OUTER JOIN submission_ratings ON submissions.exercise_id = submission_ratings.exercise_id
                WHERE id = :id";
        $st = $db->prepare($sql);
        $st->bindParam(":id", $this->id, PDO::PARAM_INT);
        $st->execute();
        $row = $st->fetch();
        if ($this->total_rating_votes == null) // this doesn't seem to work even though there is no record in submission_ratings????
        {
            ...
        }
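
    A guess at the bug, illustrated with Python/sqlite3 (the language used for the other sketches here) and an invented two-table schema: the NULL lives in the fetched row, not on the object the query was issued from, and a SQL NULL arrives as the language's null value, so it is the row's column that has to be tested.

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.row_factory = sqlite3.Row
        conn.executescript("""
            create table submissions (exercise_id integer, id integer);
            create table submission_ratings (exercise_id integer, total_rating_votes integer);
            insert into submissions values (1, 7);   -- no matching rating row
        """)

        row = conn.execute("""
            select submissions.exercise_id, total_rating_votes
            from submissions
            left outer join submission_ratings
              on submissions.exercise_id = submission_ratings.exercise_id
            where id = ?
        """, (7,)).fetchone()

        # The unmatched LEFT JOIN column comes back as SQL NULL, i.e. None here
        # (the PDO analogue is testing the fetched $row value against null).
        if row["total_rating_votes"] is None:
            print("no ratings yet")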

    Read the article

  • Unique serial number in a Java web application

    - by Zenzen
    I've been wondering what the correct practice is for generating unique ids. The thing is, in my web app I'll have a plugin system; when a user registers a plugin I want to generate a unique serial ID for it. I've been thinking about storing all numbers in a DB or a file on the server, generating a random number and checking whether it already exists in the DB/file, but that doesn't seem that good. Are there other ways to do it? Would using a UUID be the preferred way to go?
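
    A minimal sketch of the UUID route (shown in Python for consistency with the other sketches here; java.util.UUID.randomUUID() is the direct Java equivalent): a version-4 UUID is a random 128-bit value, so no registry of already-issued numbers needs to be kept or checked.

        import uuid

        # Each call produces an independent random 128-bit identifier; collisions
        # are improbable enough that no DB/file lookup is needed.
        serial = str(uuid.uuid4())
        print(serial)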

    Read the article

  • Safari Web Inspector is not updating when I update an element with ajax

    - by Ashley
    I have a checkout page http://www.oipolloi.com/oipolloi/shop/viewbasket.php with multiple ajax calls after certain items update (EG look up postage cost when country is changed, then update discount boxes etc). I've asked for help in the past about the best method of making sure ALL calls have returned before allowing the form to be submitted for payment processing: http://stackoverflow.com/questions/2290372/how-do-i-prevent-form-submission-until-multiple-ajax-calls-have-finished-jquery I was fairly happy that the logic in the finished solution was correct, but I have still been receiving reports that people using Safari are able to submit the form without the ajax calls returning properly. I have tried using the Safari Web Inspector to debug but it seems that when you Inspect Element, then update an element with an ajax call, the Inspector doesn't seem to update. I am updating hidden fields, so it's hard to be able to know whether the problem lies with the DOM not being updated properly, or the Inspector itself. I'm using Safari 4.0.5 on PC and you can reproduce the problem above by looking for a div id="countryFieldsBilling" with Web Inspector. It should contain three hidden fields that are initially empty. You can try to make it update (or not) by choosing a country from the select menu at the bottom of 'Shipping Address' box, and then clicking the 'click to use Shipping Address' link at the top of the 'Billing Address' just below. The behaviour I am seeing is that the country chosen in the shipping select gets copied correctly to the country in the billing select, but the hidden inputs in the Web Inspector do not get updated. When these hidden inputs do not get updated, this causes the problem that Mac Safari users report. If you can let me know either how to get Web Inspector to work properly, or something else I may have missed in the behaviour of Mac Safari that may cause these problems, that would be great. Thanks in advance

    Read the article

  • .Net Remote Log Querying

    - by jlafay
    I have a Win Service that I'm working on that consists of the service, WF Service (using WorkflowServiceHost), a Workflow (WorkflowApplication) that queries/processes data from a SQL Server DB, and a Comm Marshall class that handles data flow between the service and the WF. The WF does a lot of heavy data processing and the original app (early VB6) logged all the processing and displayed the results on the screen of the host machine. Critical events will be committed to eventlog because I strongly believe that should be common practice because admins naturally will look there and because it already has support for remote viewing. The workflow will also need to write logging events as it processes and iterates according to our business logic. Such as: records queried, records returned, processed records, etc. The data is very critical and we need to log actions as they occur. The logs are currently kept as text files on disk and I think that is ok. Ideally I would like to record log events in XML so it's easier to query and because it is less costly than a DB, especially since our DB servers do a lot of heavy processing anyways. Since we are replacing essentially a VB6 application with a robust windows service (taking advantage of WF 4.0), it has been requested that a remote client also be created. It receives callbacks from the service after subscribing to it and being added to a collection of subscribers. Basic statistics and summaries are updated client side after receiving basic monitoring data of what is going on with the service. We would like to also provide a way to provide details when we need to examine what is going on further because this is a long running data processing service and issues need to be addressed immediately. What is the best way to implement some type of query from the client that is sent to the service and returned to the client? Would it be efficient to implement another method to expose on the service and then have that pass that off to some querying class/object to examine the XML files by whichever specification and then return it to the client? That's the main concern. I don't want the service to processing to bottleneck much while this occurs. It seems that WF already auto-magically threads well for the most part but I want to make sure this is the right way to go about it. Any suggestions/recommendations on how to architect and implement a small log querying framework for a remote service would be awesome.

    Read the article

  • Index an array expression directly in PostgreSQL

    - by wich
    I'm trying to insert data into a table from a template table. I need to rewrite one of the columns, for which I wanted to use a directly indexed array expression, but I can't seem to find how to do this, if it is even possible. The scenario:

        create table template (
          id integer,
          index integer,
          foo integer);

        insert into template values
          (0, 1, 23), (0, 2, 18), (0, 3, 16), (0, 4, 7),
          (1, 1, 17), (1, 2, 26), (1, 3, 11), (1, 4, 3);

        create table data (
          data_id integer,
          foo integer);

    Now what I'd like to do is the following:

        insert into data
          select (array[3,7,5,2])[index], foo
          from template
          where id = 1;

    But this doesn't work; the (array[3,7,5,2])[index] syntax isn't valid. I tried a few variants, but was unable to get anything working, and wasn't able to find the correct syntax in the docs, nor even whether this is at all possible or not. As a current workaround I've devised the following, but it is less than ideal, from an elegance perspective at least, and it may also be a performance hit; I haven't looked into that yet.

        insert into data
          select arr[index], foo
          from template, (select array[3,7,5,2] as arr) as q
          where id = 1;

    If anyone could suggest a (better) alternative to accomplish this I'd like to hear that as well.

    Read the article

  • Exporting de-aggregated data

    - by Ben
    I'm currently working on a data export feature for a survey application. We are using SQL2k8. We store data in a normalized format: QuestionId, RespondentId, Answer. We have a couple other tables that define what the question text is for the QuestionId and demographics for the RespondentId... Currently I'm using some dynamic SQL to generate a pivot that joins the question table to the answer table and creates an export, its working... The problem is that it seems slow and we don't have that much data (less than 50k respondents). Right now I'm thinking "why am I 'paying' to de-aggregate the data for each query? Why don't I cache that?" The data being exported is based on dynamic criteria. It could be "give me respondents that completed on x date (or range)" or "people that like blue", etc. Because of that, I think I have to cache at the respondent level, find out what respondents are being exported and then select their combined cached de-aggregated data. To me the quick and dirty fix is a totally flat table, RespondentId, Question1, Question2, etc. The problem is, we have multiple clients and that doesn't scale AND I don't want to have to maintain the flattened table as the survey changes. So I'm thinking about putting an XML column on the respondent table and caching the results of a SELECT * FROM Data FOR XML AUTO WHERE RespondentId = x. With that in place, I would then be able to get my export with filtering and XML calls into the XML column. What are you doing to export aggregated data in a flattened format (CSV, Excel, etc)? Does this approach seem ok? I worry about the cost of XML functions on larger result sets (think SELECT RespondentId, XmlCol.value('//data/question_1', 'nvarchar(50)') AS [Why is there air?], XmlCol.RinseAndRepeat)... Is there a better technology/approach for this? Thanks!

    Read the article

  • SQLite3 or SQLite Manager make me crazy !!! Please help me !! I have a presentation next week

    - by ahmet732
    My friend added 90 rows to the database, and I tied it to my app. In my table view the names of my variables are shown properly, but when I tap one of them, the description shown in detailsViewController is wrong. It shows a very old description of the variable, not the new one in the database. Moreover, it displays the same description for different variables. What's the problem? What am I missing? My database is correct, yet the app shows the same descriptions for different values, which worries me. Additionally, when I add a new row to my db, the database accepts it but the app does not pick it up when I run it. The new row shows up in my table view if and only if I change the name of my db file. I do not want to use another SQL manager.

    Read the article

  • DDD and MVC: should models hold the ID of a separate entity or the entity itself?

    - by Dr. Zim
    If you have an Order that references a Customer, does the model include the ID of the customer or a copy of the customer object, like a value object (thinking DDD)? I would like to do this:

        public class Order
        {
            public int ID { get; set; }
            public Customer customer { get; set; }
            ...
        }

    Right now I do this:

        public class Order
        {
            public int ID { get; set; }
            public int customerID { get; set; }
            ...
        }

    It would be more convenient to include the complete customer object rather than an ID in the view model passed to the form. Otherwise, I need to figure out how to get the vendor information to the view that the order references by ID. This also implies that the repository understands how to deal with the customer object that it finds within the order object when they call save (if we select the first option). If we select the second option, we will need to know where in the view model to put it. It is certain they will select an existing customer. However, it is also certain they may want to change the information in place on the display form.

    Read the article

  • Ways of breaking down SQL transactional/call data into reports -- 'square data'?

    - by RizwanK
    I've got a large database of call-traffic information (although the question could be answered with any generic data set). For instance, a row contains:

        call endpoint server (endpoint_name)
        call endpoint status (sip_disconnect_reason)
        call destination (destination)
        call completed (duration) [duration 0 is completed]
        call account group (account_group)

    It's pretty easy to run SQL reports against the data, i.e.

        select count(*), endpoint_name from calls where duration0 group by endpoint_name
        select count(*), destination from calls where blah group by destination

    I've been calling these filtering or breakdown reports (I get the number of calls per carrier, etc.). Add another breakdown, and you've got two breakdowns, a la

        select count(*), endpoint_name, sip_disconnect_reason
        from calls
        where duration=0
        group by endpoint_name, sip_disconnect_reason

    Of course, if you keep adding breakdowns, you end up making super-large reports and slicing your data so thin that you can't extract any trends from it. So my question is this: is there a name for this sort of method of report writing? (I've heard words like squares, slicing and breakdown reports applied to them.) I'm looking for a Python/Reporting toolkit that I can use to make these easier to generate for my end users. An aside: are there other ways of representing transactional data that might be useful rather than the above method? Thanks,
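
    On the toolkit question, a hedged sketch: these one- and two-way breakdowns are what pandas calls groupby and crosstab (the column names are the question's; the sample rows are invented).

        import pandas as pd

        # Invented sample rows standing in for the calls table.
        calls = pd.DataFrame({
            "endpoint_name":         ["sbc1", "sbc1", "sbc2", "sbc2"],
            "sip_disconnect_reason": ["486",  "200",  "200",  "503"],
            "destination":           ["NYC",  "LON",  "NYC",  "NYC"],
            "duration":              [0,      42,     0,      0],
        })

        # One breakdown: call counts per endpoint (mirrors the first SQL query).
        per_endpoint = calls.groupby("endpoint_name").size()

        # Two breakdowns at once: endpoint versus disconnect reason as a
        # cross-tab, which is the 'square' the question gestures at.
        square = pd.crosstab(calls["endpoint_name"], calls["sip_disconnect_reason"])

        print(per_endpoint)
        print(square)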

    Read the article

  • Why is doing a TOP(1) on an indexed column in MSSQL slow?

    - by reinier
    I'm puzzled by the following. I have a DB with around 10 million rows, and (among other indices) one column has an index. 700k of the rows have campaignid 3835, and for all of these rows the connectionid is the same. I just want to find out this connectionid.

        use messaging_db;
        SELECT TOP (1) connectionid
        FROM outgoing_messages WITH (NOLOCK)
        WHERE (campaignid_int = 3835)

    Now this query takes approx 30 seconds to perform! I (with my small db knowledge) would expect it to take any of the rows and return me that connectionid. If I test this same query for a campaign which only has 1 entry, it goes really fast. So the index works. How would I tackle this, and why does this not work?

    Read the article

  • MySQL Partitioning performance

    - by Imran Pathan
    We measured performance on key-partitioned tables and normal tables separately, but we couldn't find any performance improvement with partitioning, even though queries are pruned. Using MySQL 5.1.47 on RHEL 4. Table details:

        UserUsage   - has entries for user mobile number and data usage for each date;
                      mobile number and Date as primary key.
        UserProfile - queries the previous table and stores a summary for each mobile
                      number; mobile number as primary key.

        CREATE TABLE `UserUsage` (
          `Msisdn` decimal(20,0) NOT NULL,
          `Date` date NOT NULL,
          .
          .
          PRIMARY KEY USING BTREE (`Msisdn`,`Date`)
        ) ENGINE=InnoDB DEFAULT CHARSET=latin1
        PARTITION BY KEY(Msisdn) PARTITIONS 50;

        CREATE TABLE `UserProfile` (
          `Msisdn` decimal(20,0) NOT NULL,
          .
          .
          PRIMARY KEY (`Msisdn`)
        ) ENGINE=InnoDB DEFAULT CHARSET=latin1
        PARTITION BY KEY(Msisdn) PARTITIONS 50;

    The second table is updated from a select on the first table, ordered by date, in a Perl program; the queries are

        select * from UserUsage where Msisdn=number order by Date desc limit 7
        [Process data in perl]
        update UserProfile values(....) where Msisdn=number

    EXPLAIN PARTITIONS for the select shows rows being scanned in a particular partition only. Is something wrong with the partition design or the queries, as partitioning is taking almost the same or more time compared to normal tables?

    Read the article

  • Securing files on iPhone

    - by clearbrian
    Hi, is there a way to decompile the binary of an iPhone app? I jailbroke my iPhone and was surprised to find other apps' dbs wide open to be copied. So I exported my most important table and hardcoded it into code. Instead of loading the table into an array from a db, I just generated code to fill the array, and kept only the most basic DB info so relationships still work. Took a while, but now works fine. I was just wondering, am I safe, or could someone decompile the binary for the app easily and extract the data? In Java it's easy to decompile *.class files, though that's bytecode, whereas I presume iPhone apps are more low level. I know iPhone SDK 4 can mark files as secure. Anyone know if this can be overridden by jailbreaks, or is this a Unix lock?

    Read the article

  • Slope requires a real as parameter 2?

    - by Dave Jarvis
    Question: How do you pass the correct value to udf_slope's second parameter type?

    Attempts:

        CAST(Y.YEAR AS FLOAT), but that failed (SQL error).
        Y.YEAR + 0.0, but that failed, too (see error message).
        slope(D.AMOUNT, 1.0), failed as well.

    Error message: using udf_slope fails due to:

        Can't initialize function 'slope'; slope() requires a real as parameter 2

    Code:

        SELECT
          D.AMOUNT,
          Y.YEAR,
          slope(D.AMOUNT, Y.YEAR + 0.0) as SLOPE,
          intercept(D.AMOUNT, Y.YEAR + 0.0) as INTERCEPT
        FROM YEAR_REF Y, DAILY D

    Here, D.AMOUNT is a FLOAT and Y.YEAR is an INTEGER. The slope function was created as follows:

        CREATE AGGREGATE FUNCTION slope RETURNS REAL SONAME 'udf_slope.so';

    Function signature from udf_slope.cc:

        double slope( UDF_INIT* initid, UDF_ARGS* args, char* is_null, char* is_error )

    Example usages, from the fine manual:

        UDF intercept(): calculates the intercept of the linear regression of two sets
        of variables. Input parameters: 2 (dependent variable: REAL, independent
        variable: REAL). Example: SELECT intercept(income,age) FROM customers

        UDF slope(): calculates the slope of the linear regression of two sets of
        variables. Input parameters: 2 (dependent variable: REAL, independent
        variable: REAL). Example: SELECT slope(income,age) FROM customers

    Thoughts? Thank you!

    Read the article

  • Writing to a file in NASM using system calls

    - by yurib
    As part of an assignment I'm supposed to write to a file using system calls. Everything works fine except that when I try to open the file in gedit (Linux), it says it can't identify the character encoding. Notepad (on Windows) opens the file just fine. Why doesn't it work on Linux? Here's the code:

        section .text
        global _start

        _start:
            mov EAX, 8
            mov EBX, filename
            mov ECX, 0700
            int 0x80

            mov EBX, EAX
            mov EAX, 4
            mov ECX, text
            mov EDX, textlen
            int 0x80

            mov EAX, 6
            int 0x80

            mov eax, 1
            int 0x80

        section .data
            filename db "./output.txt", 0
            text db "hello world", 0
            textlen equ $ - text

    thanks :)

    Read the article
