Search Results

Search found 6630 results on 266 pages for 'cname record'.

  • PHP - "Fat Free Framework" Find Methods and Showing Results in Template

    - by user1672808
    Just started trying the Fat-Free Framework. I'm building a site using a MySQL DB with 265 fields and 5000+ rows; I can load() a specific record easily, no problems. When using find(), afind(), and even select(), the template shows blank lines, or lines with "filler" text, with the correct number of rows for the query results, but no text/data from the DB itself. The same problem occurs whether I use objects or plain arrays from the result (afind() and find()). I've copied/pasted the code verbatim from examples and from the documentation, with only the DB-specific items changed. Still no luck.

    Code in the PHP file (a method of my class):

        static function home() {
            $featured = new Axon('boats');
            F3::set('boatlist', $featured->afind('D_CustomerID=173'));
            F3::set('content', TEMPLATE_DIR . '/home.html');
            echo Template::serve(TEMPLATE_DIR . '/layout.html');
        }

    Template home.html:

        <div class="span8">
            <h3> Featured Boats </h3>
            <F3:repeat group="{{@boatlist}}" value="{{@boat}}">
                <div style="margin-left: 2em" class="thumbnails">
                    <p>
                        <a href="boat/{{@boat['D_BoatNum']}}">{{trim(@boat['D_Description'])}}</a>
                        by {{@boat['D_CustomerID']}}
                    </p>
                    <p> {{@boat['D_Price']}} </p>
                </div>
            </F3:repeat>
        </div>

    The number of rows this produces coincides with the correct number of rows in the DB. However, the actual data from each field does not show. Any ideas?
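
    A minimal debugging sketch, assuming the Axon mapper and criteria from the question: it bypasses the template layer entirely to confirm what afind() actually returns, since a mismatch between the real array keys (spelling or casing) and the template tokens would produce exactly this "right row count, empty fields" symptom.

        <?php
        // Hypothetical debug snippet: dump the first result row so the
        // real array keys can be compared against the template tokens.
        $featured = new Axon('boats');
        $rows = $featured->afind('D_CustomerID=173');
        var_dump(count($rows), isset($rows[0]) ? $rows[0] : null);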

  • Android: Streaming audio over TCP Sockets

    - by user299988
    Hi, for my app I need to record audio from the MIC on an Android phone and send it over TCP to another Android phone, where it needs to be played. I am using the AudioRecord and AudioTrack classes. This works great with a file: write audio to the file using a DataOutputStream, and read from it using a DataInputStream. However, if I obtain the same stream from a socket instead of a file and try writing to it, I get an exception. I am at a loss to understand what could possibly be going wrong. Any help would be greatly appreciated.

    EDIT: The problem is the same even if I try with larger buffer sizes (65535 bytes, 160000 bytes). This is the recorder code:

        int bufferSize = AudioRecord.getMinBufferSize(11025,
                AudioFormat.CHANNEL_CONFIGURATION_MONO,
                AudioFormat.ENCODING_PCM_16BIT);
        AudioRecord recordInstance = new AudioRecord(MediaRecorder.AudioSource.MIC,
                11025, AudioFormat.CHANNEL_CONFIGURATION_MONO,
                AudioFormat.ENCODING_PCM_16BIT, bufferSize);
        byte[] tempBuffer = new byte[bufferSize];
        recordInstance.startRecording();
        while (/*isRecording*/) {
            bufferRead = recordInstance.read(tempBuffer, 0, bufferSize);
            dataOutputStreamInstance.write(tempBuffer);
        }

    The DataOutputStream above is obtained as:

        BufferedOutputStream buff = new BufferedOutputStream(out1); // out1 is the socket's OutputStream
        DataOutputStream dataOutputStreamInstance = new DataOutputStream(buff);

    Could you please have a look, and let me know what it is that I could be doing wrong here? Thanks.
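
    One thing worth checking, sketched below under the assumption that the surrounding fields exist as in the question: read() may return fewer bytes than bufferSize (or a negative error code), so writing the whole tempBuffer every pass sends stale bytes and corrupts the stream. Writing only the bytes actually read is the safer loop:

        // Hedged sketch: write exactly what was read, and stop on errors.
        int bufferRead;
        while (isRecording) {
            bufferRead = recordInstance.read(tempBuffer, 0, bufferSize);
            if (bufferRead < 0) break;                        // AudioRecord error code
            dataOutputStreamInstance.write(tempBuffer, 0, bufferRead);
        }
        dataOutputStreamInstance.flush();                     // push buffered bytes to the socket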

  • Find items with belongs_to associations in Rails?

    - by dannymcc
    Hi everyone, I have a model called Kase; each "case" is assigned to a contact person via the following code:

        class Kase < ActiveRecord::Base
          validates_presence_of :jobno
          has_many :notes, :order => "created_at DESC"
          belongs_to :company # foreign key: company_id
          belongs_to :person  # foreign key in join table
          belongs_to :surveyor, :class_name => "Company",
                     :foreign_key => "appointedsurveyor_id"
          belongs_to :surveyorperson, :class_name => "Person",
                     :foreign_key => "surveyorperson_id"

    I was wondering if it is possible to list on the contacts page all of the kases that person is associated with. I assume I need to use the find command within the Person model? Maybe something like the following?

        def index
          @kases = Person.Kase.find(:person_id)
        end

    Or am I completely misunderstanding everything again? Thanks, Danny

    EDIT: If I use:

        @kases = @person.kases

    I can successfully do the following:

        <% if @person.kases.empty? %>
          No Cases Found
        <% end %>
        <% if @person.kases %>
          This person has a case assigned to them
        <% end %>

    But how do I output the "jobref" field from the kase table for each record found?
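
    A minimal sketch of the iteration being asked for, assuming Person declares the inverse association (has_many :kases) and that the column really is named jobref as described:

        # Hypothetical inverse of belongs_to :person, in the model:
        class Person < ActiveRecord::Base
          has_many :kases
        end

        # In the view, one line per associated record:
        <% @person.kases.each do |kase| %>
          <p><%= kase.jobref %></p>
        <% end %>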

  • PHP: Aggregate Model Classes or Uber Model Classes?

    - by sunwukung
    In many of the discussions regarding the M in MVC (sidestepping ORM controversies for a moment), I commonly see Model classes described as object representations of table data (be that an Active Record, Table Gateway, Row Gateway or Domain Model/Mapper). Martin Fowler warns against the development of an anemic domain model, i.e. a class that is nothing more than a wrapper for CRUD functionality.

    I've been working on an MVC application for a couple of months now. The DBAL in the application started out simple (on account of my understanding - oh, the benefits of hindsight), and is organised so that Controllers invoke Business Logic classes, which in turn access the database via DAO/Transaction Scripts pertinent to the task at hand. There are a few "Entity" classes that aggregate these DAO objects to provide a convenient CRUD wrapper, but also embody some of the "behaviour" of that domain concept (for example, a user - since it's easy to isolate).

    Taking a look at some of the code, and thinking about refactoring some of it into a Rich Domain Model, it occurred to me that were I to try and wrap the CRUD routines and behaviour of, say, a Company into a single "Model" class, that would be a sizeable class. So, my question is this: do Models represent domain objects, business logic, service layers, or all of the above combined? How do you go about defining the responsibilities for these components?

  • MySQL GIS and Spatial Extensions - how to map regions and query against them

    - by chibineku
    I am trying to make a smartphone app which will return a list of users within a certain proximity, say 100m. It's easy to get the coordinates of my BlackBerry and write them to a database, but in order to return a list of other users within 100m, I need to pull every other record from the database and compare the distance between the two points, checking to see if it's within range, before outputting that user's information. This is going to be time-consuming if there are many users involved.

    So I would like to map areas (countries, cities - I'm not yet sure of the resolution I'll need) so that I can first target a smaller subset of all users. This will save on processing time. I have read the basics of GIS and spatial querying on the MySQL website, but to be honest the query is over my head and I hate copying and pasting code without understanding it. Plus it only checks for proximity - I want to first check if a coordinate falls within a certain area.

    Does anyone have any experience of such matters and feel like giving me some pointers? Resources such as any pre-existing databases of points describing countries as polygons would be really helpful too. Many thanks to anyone who takes the time :)
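
    A cheap pre-filter along these lines, sketched under the assumption of a users table with plain lat/lng decimal columns (hypothetical names): a bounding box narrows the candidate set with ordinary indexed comparisons before any exact distance math runs. One degree of latitude is roughly 111 km, so ~100 m is about 0.0009 degrees; the longitude window widens with latitude.

        -- Hedged sketch: indexable bounding-box pre-filter (not an exact circle).
        -- @lat/@lng hold the searcher's coordinates; 0.0009 deg ~ 100 m N-S.
        SELECT id, lat, lng
        FROM users
        WHERE lat BETWEEN @lat - 0.0009 AND @lat + 0.0009
          AND lng BETWEEN @lng - 0.0009 / COS(RADIANS(@lat))
                      AND @lng + 0.0009 / COS(RADIANS(@lat));

    Only the rows that survive this filter need the exact great-circle check.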

  • BeautifulSoup can't parse a webpage?

    - by JLTChiu
    I am using Beautiful Soup for parsing a webpage now. I've heard it's very famous and good, but it doesn't seem to work properly. Here's what I did:

        import urllib2
        from bs4 import BeautifulSoup
        page = urllib2.urlopen("http://www.cnn.com/2012/10/14/us/skydiver-record-attempt/index.html?hpt=hp_t1")
        soup = BeautifulSoup(page)
        print soup.prettify()

    I think this is kind of straightforward. I open the webpage and pass it to BeautifulSoup. But here's what I got:

        Warning (from warnings module):
          File "C:\Python27\lib\site-packages\bs4\builder\_htmlparser.py", line 149
        "Python's built-in HTMLParser cannot parse the given document. This is not a bug in Beautiful Soup. The best solution is to install an external parser (lxml or html5lib), and use Beautiful Soup with that parser. See http://www.crummy.com/software/BeautifulSoup/bs4/doc/#installing-a-parser for help."))
        ...
        HTMLParseError: bad end tag: u'</"+"script>', at line 634, column 94

    I thought the CNN website should be well designed, so I am not very sure what's going on. Does anyone have an idea about this?
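
    The warning text itself points at the fix: install an external parser and name it explicitly. A minimal sketch (after pip install lxml), keeping the Python 2 code from the question otherwise unchanged:

        import urllib2
        from bs4 import BeautifulSoup

        page = urllib2.urlopen("http://www.cnn.com/2012/10/14/us/skydiver-record-attempt/index.html?hpt=hp_t1")
        soup = BeautifulSoup(page, "lxml")   # lxml copes with the malformed </"+"script> tag
        print soup.prettify()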

  • "There is already an open DataReader associated with this Command" error in C#

    - by Celine
    I can't seem to find the reason behind this error. Can someone look into my code?

        private void button2_Click_1(object sender, EventArgs e)
        {
            con.Open();
            string check = "Select block, section, size from interment_location where block='"
                + textBox12.Text + "' and section='" + textBox13.Text + "' and size='"
                + textBox14.Text + "'";
            SqlCommand cmd1 = new SqlCommand(check, con);
            SqlDataReader rdr;
            rdr = cmd1.ExecuteReader();
            try
            {
                if (textBox1.Text == "" || textBox2.Text == "" || textBox3.Text == ""
                    || textBox4.Text == "" || textBox7.Text == "" || textBox8.Text == ""
                    || textBox9.Text == "" || textBox10.Text == ""
                    || dateTimePicker1.Value.ToString("yyyyMMdd HH:mm:ss") == ""
                    || dateTimePicker2.Value.ToString("yyyyMMdd HH:mm:ss") == ""
                    || textBox11.Text == ""
                    || dateTimePicker3.Value.ToString("yyyyMMdd HH:mm:ss") == ""
                    || textBox12.Text == "" || textBox13.Text == "" || textBox14.Text == "")
                {
                    MessageBox.Show("Please input a value!", "Error",
                        MessageBoxButtons.OK, MessageBoxIcon.Exclamation);
                }
                else if (rdr.HasRows == true)
                {
                    MessageBox.Show("Interment Location is already reserved.");
                    textBox12.Clear();
                    textBox13.Clear();
                    textBox14.Clear();
                    textBox12.Focus();
                }
                else if (MessageBox.Show("Are you sure you want to reserve this record?",
                    "Reserve", MessageBoxButtons.YesNo, MessageBoxIcon.Question) == DialogResult.Yes)
                {
                    SqlCommand cmd4 = new SqlCommand(
                        "insert into owner_info(ownerf_name, ownerm_name, ownerl_name, home_address, tel_no, office)"
                        + " values('" + textBox1.Text + "', '" + textBox2.Text + "', '" + textBox3.Text
                        + "', '" + textBox4.Text + "', '" + textBox7.Text + "', '" + textBox8.Text + "')", con);
                    cmd.ExecuteNonQuery();
                    SqlCommand cmd5 = new SqlCommand(
                        "insert into deceased_info(deceased_id, name_of_deceased, address, do_birth, do_death,"
                        + " place_of_death, date_of_interment, COD_id, place_of_vigil_id, service_id, owner_id)"
                        + " values('" + ownerid + "','" + textBox9.Text + "', '" + textBox10.Text
                        + "', '" + dateTimePicker1.Value.ToString("yyyyMMdd HH:mm:ss")
                        + "', '" + dateTimePicker2.Value.ToString("yyyyMMdd HH:mm:ss")
                        + "', '" + textBox11.Text
                        + "', '" + dateTimePicker3.Value.ToString("yyyyMMdd HH:mm:ss")
                        + "', '" + ownerid + "', '" + ownerid + "', '" + ownerid + "', '" + ownerid + "')", con);
                    cmd.ExecuteNonQuery();
                    SqlCommand cmd6 = new SqlCommand(
                        "insert into interment_location(lot_no, block, section, size, garden_id)"
                        + " values('" + ownerid + "','" + textBox12.Text + "', '" + textBox13.Text
                        + "', '" + textBox14.Text + "', '" + ownerid + "')", con);
                    cmd.ExecuteNonQuery();
                    MessageBox.Show("Your reservation has been made!", "Reserve",
                        MessageBoxButtons.OK, MessageBoxIcon.Information);
                }
            }
            catch (Exception x)
            {
                MessageBox.Show(x.Message);
            }
            con.Close();
        }

    This is the only code that I edited afterwards.
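
    Two things stand out. First, rdr is still open when ExecuteNonQuery() runs on the same connection, which is exactly what this error complains about; second, cmd4/cmd5/cmd6 are built but it's cmd that gets executed each time. A hedged sketch of the reader part - read what's needed, close it, then run the inserts (the string concatenation is also begging for parameters, but that's a separate issue):

        // Hedged sketch: capture HasRows, then dispose the reader so the
        // connection is free for the later ExecuteNonQuery() calls.
        bool alreadyReserved;
        using (SqlDataReader rdr = cmd1.ExecuteReader())
        {
            alreadyReserved = rdr.HasRows;
        }   // reader closed here

        // ... later, execute the command that was actually built:
        cmd4.ExecuteNonQuery();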

  • Using map/reduce for mapping the properties in a collection

    - by And
    Update: follow-up to "MongoDB: Get names of all keys in collection". As pointed out by Kristina, one can use MongoDB's map/reduce to list the keys in a collection:

        db.things.insert( { type : ['dog', 'cat'] } );
        db.things.insert( { egg : ['cat'] } );
        db.things.insert( { type : [] } );
        db.things.insert( { hello : [] } );

        mr = db.runCommand({
            "mapreduce" : "things",
            "map" : function() { for (var key in this) { emit(key, null); } },
            "reduce" : function(key, stuff) { return null; }
        });
        db[mr.result].distinct("_id")
        // output: [ "_id", "egg", "hello", "type" ]

    As long as we want to get only the keys located at the first level of depth, this works fine. However, it will fail to retrieve those keys that are located at deeper levels. If we add a new record:

        db.things.insert({foo: {bar: {baaar: true}}})

    and we run the map-reduce + distinct snippet above again, we will get:

        [ "_id", "egg", "foo", "hello", "type" ]

    But we will not get the bar and baaar keys, which are nested down in the data structure. The question is: how do I retrieve all keys, no matter their level of depth? Ideally, I would actually like the script to walk down to all levels of depth, producing an output such as:

        ["_id","egg","foo","foo.bar","foo.bar.baaar","hello","type"]

    Thank you in advance!
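
    A sketch of one way to do the walk, assuming plain sub-documents (arrays are skipped here so their numeric indices don't pollute the key list): recurse in the map function and emit dotted paths.

        // Hedged sketch: recursive map emitting "foo.bar.baaar"-style paths.
        map = function () {
            function walk(obj, prefix) {
                for (var key in obj) {
                    var path = prefix ? prefix + "." + key : key;
                    emit(path, null);
                    var val = obj[key];
                    if (val !== null && typeof val === "object" && !(val instanceof Array)) {
                        walk(val, path);   // descend into sub-documents only
                    }
                }
            }
            walk(this, "");
        };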

  • Nhibernate join on a table twice

    - by Zuber
    Consider the following class structure:

        public class ListViewControl
        {
            public int SystemId { get; set; }
            public List<ControlAction> Actions { get; set; }
            public List<ControlAction> ListViewActions { get; set; }
        }

        public class ControlAction
        {
            public string blahBlah { get; set; }
        }

    I want to load the class ListViewControl eagerly using NHibernate. The mapping, using Fluent, is shown below:

        public UIControlMap()
        {
            Id(x => x.SystemId);
            HasMany(x => x.Actions)
                .KeyColumn("ActionId")
                .Cascade.AllDeleteOrphan()
                .AsBag()
                .Cache.ReadWrite().IncludeAll();
            HasMany(x => x.ListViewActions)
                .KeyColumn("ListViewActionId")
                .Cascade.AllDeleteOrphan()
                .AsBag()
                .Cache.ReadWrite().IncludeAll();
        }

    This is how I am trying to load it eagerly:

        var baseActions = DetachedCriteria.For<ListViewControl>()
            .CreateCriteria("Actions", JoinType.InnerJoin)
            .SetFetchMode("BlahBlah", FetchMode.Eager)
            .SetResultTransformer(new DistinctRootEntityResultTransformer());
        var listViewActions = DetachedCriteria.For<ListViewControl>()
            .CreateCriteria("ListViewActions", JoinType.InnerJoin)
            .SetFetchMode("BlahBlah", FetchMode.Eager)
            .SetResultTransformer(new DistinctRootEntityResultTransformer());
        var listViews = DetachedCriteria.For<ListViewControl>()
            .SetFetchMode("Actions", FetchMode.Eager)
            .SetFetchMode("ListViewActions", FetchMode.Eager)
            .SetResultTransformer(new DistinctRootEntityResultTransformer());
        var result = _session.CreateMultiCriteria()
            .Add("listViewActions", listViewActions)
            .Add("baseActions", baseActions)
            .Add("listViews", listViews)
            .SetResultTransformer(new DistinctRootEntityResultTransformer())
            .GetResult("listViews");

    Now, my problem is that the class ListViewControl gets the correct records in both Actions and ListViewActions, but there are multiple entries of the same record. The number of copies equals the number of joins made to the ControlAction table - in this case, two. How can I avoid this? If I remove SetFetchMode from the listViews query, the actions are loaded lazily through a proxy, which I don't want.
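
    The duplication pattern matches the classic cartesian product you get when two bag collections are join-fetched at once. A hedged sketch of one commonly cited workaround - map the collections as sets so duplicate child rows collapse (this assumes the properties can be changed from List<ControlAction> to an ISet-compatible collection type):

        // Hedged sketch: sets de-duplicate children that the double join repeats.
        HasMany(x => x.Actions)
            .KeyColumn("ActionId")
            .Cascade.AllDeleteOrphan()
            .AsSet()   // instead of AsBag()
            .Cache.ReadWrite().IncludeAll();

    The other option usually suggested is to drop the two SetFetchMode calls from the root criteria and let the multi-criteria's per-collection queries (which are already there) populate the collections via the session cache.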

  • Can I use a specific model from within a behavior in CakePHP?

    - by Paul Willy
    I'm trying to write a behavior that will give my models access to a simple workflow engine I've devised. The workflow engine itself works as a CakePHP model, with workflow data stored in the database just as any other model data is stored. Basically what I want to do is have the behavior use the Workflow model whenever an action is called on the base model.

    For example, if the edit() action is executed for Posts, then the Post (with the behavior attached) will trigger the workflow behavior with its own model name, action, and id as arguments (e.g. [Post, edit, 1]). Then the behavior will invoke the functionality of the Workflow model, which has a record for what to do when edit is run on Posts (e.g. send e-mail to users who are subscribed to that post), and will carry that out.

    My question is: what is the proper way to invoke model/controller methods from within the behavior? The model to be used from within the behavior will always be Workflow, but the behavior should be usable from basically any model (aside from Workflow itself). I know I could run SQL queries directly from the behavior, but of course this is not the Cake way :-)

    Or am I going about this in the wrong way? I want to store a certain amount of logic in the database so that it is easily configurable by different users, and not have endless configuration checks within the model/controller logic itself, so that workflow steps can be easily added/changed/removed in the future.
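
    A minimal sketch of the conventional route, assuming CakePHP 1.x: load the other model inside the behavior through ClassRegistry rather than raw SQL. The behavior name and the Workflow model's run() method are hypothetical stand-ins.

        <?php
        // Hedged sketch: a behavior callback that hands off to the Workflow model.
        class WorkflowBehavior extends ModelBehavior {
            function afterSave(&$model, $created) {
                $workflow = ClassRegistry::init('Workflow'); // fetch/instantiate the model
                $workflow->run($model->name, $created ? 'add' : 'edit', $model->id);
                return true;
            }
        }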

  • error detection/correction/recovery in serial protocols

    - by Jason S
    I have some designing to do for a serial protocol and am running into some questions that I figure must have been considered elsewhere. So I'm wondering if there are some recommendations for best practices in designing serial protocols. (Please either state a fact that is easily verifiable, or cite a reputable source if you make a claim.) General recommendations for websites/books are also welcome. In particular I have to deal with issues like:

    - parsing a stream of bytes into packets
    - verifying a packet is correct (easy with a CRC, for instance)
    - identifying reasonable types of errors that can occur (e.g. in a point-to-point serial stream, sporadic single-bit errors and dropped series of bytes are both likely, but extra phantom bytes are unlikely; whereas with a record stored in flash memory or on a disk drive the types of errors that predominate are different)
    - error correction or recovery (if I detect an error in a packet, can I correct it? If not, can I resync to the boundary of the next packet? See the framing sketch below.)
    - how to make variable-length packets robust to error correction/recovery

    Any suggestions?
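
    On the resync question specifically, a common answer is byte-stuffed framing in the SLIP style: one reserved delimiter byte marks packet boundaries, occurrences of it inside the payload are escaped, and after any error the receiver simply discards bytes until the next delimiter. A hedged C sketch of the sender side, assuming a raw uart_write_byte() routine exists:

        /* Hedged sketch: SLIP-style byte stuffing; the END byte doubles as
           the resync point after a corrupted or truncated packet. */
        #define SLIP_END     0xC0
        #define SLIP_ESC     0xDB
        #define SLIP_ESC_END 0xDC
        #define SLIP_ESC_ESC 0xDD

        extern void uart_write_byte(unsigned char b);  /* assumed to exist */

        void send_frame(const unsigned char *p, int len)
        {
            for (int i = 0; i < len; i++) {
                if (p[i] == SLIP_END) {
                    uart_write_byte(SLIP_ESC); uart_write_byte(SLIP_ESC_END);
                } else if (p[i] == SLIP_ESC) {
                    uart_write_byte(SLIP_ESC); uart_write_byte(SLIP_ESC_ESC);
                } else {
                    uart_write_byte(p[i]);
                }
            }
            uart_write_byte(SLIP_END);  /* frame boundary */
        }

    A CRC appended before stuffing then covers the "is this packet correct" check; actual correction (as opposed to detection) needs redundancy such as a Hamming or Reed-Solomon code.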

  • In PHP... best way to turn string representation of a folder structure into nested array

    - by Greg Frommer
    Hi everyone, I looked through the related questions for a similar question but I wasn't seeing quite what I need; pardon me if this has already been answered. In my database I have a list of records that I want represented to the user as files inside of a folder structure. So for each record I have a VARCHAR column called "FolderStructure" that identifies that record's place in the folder structure. The series of those flat FolderStructure string columns will create my tree structure, with the folders being separated by backslashes (naturally). I didn't want to add another table just to represent a folder structure. The 'file' name is stored in a separate column, so that if the FolderStructure column is empty, the file is assumed to be at the root folder.

    What is the best way to turn a collection of these records into a series of HTML UL/LI tags, where each LI represents a file and each folder structure is a UL embedded inside its parent? So for example:

        file  -  folderStructure
        foo   -
        bar   -  firstDir
        blue  -  firstDir/subdir

    would produce the following HTML:

        <ul>
            <li>foo</li>
            <ul>
                <li> bar </li>
                <ul>
                    <li> blue </li>
                </ul>
            </ul>
        </ul>

    Thanks
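
    A sketch of the usual two-step approach, assuming rows shaped like the example (column names are hypothetical, and the separator follows the example's forward slash - swap in the backslash if that's what the data really uses): first fold the flat paths into a nested array, then render it recursively.

        <?php
        // Hedged sketch: build the tree, then print it as nested ULs.
        $rows = array(
            array('file' => 'foo',  'folderStructure' => ''),
            array('file' => 'bar',  'folderStructure' => 'firstDir'),
            array('file' => 'blue', 'folderStructure' => 'firstDir/subdir'),
        );

        $tree = array();
        foreach ($rows as $row) {
            $node =& $tree;
            if ($row['folderStructure'] !== '') {
                foreach (explode('/', $row['folderStructure']) as $dir) {
                    if (!isset($node[$dir])) { $node[$dir] = array(); }
                    $node =& $node[$dir];    // walk/create one level per segment
                }
            }
            $node[] = $row['file'];          // leaf: the file name itself
            unset($node);                    // break the reference before reuse
        }

        function render($tree) {
            echo '<ul>';
            foreach ($tree as $key => $value) {
                if (is_array($value)) { echo '<li>' . $key . '</li>'; render($value); }
                else                  { echo '<li>' . $value . '</li>'; }
            }
            echo '</ul>';
        }
        render($tree);

    Note this version also prints each folder's name as an LI, which the example output omits; drop that echo if the folder labels aren't wanted.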

  • Help with MySQL Query using CASE statement

    - by hairdresser-101
    I am trying to group a number of customers together based on their "Head Office" or "Parent" location. This works OK except for a flaw which I didn't foresee when I was developing my system. For customers that did not have a "Parent" (standalone businesses) I defaulted the parent_id to 0. Therefore, my data looks like this:

        id  parent_id  customer
        1   0          CustName#1
        2   4          CustName#2 - Melbourne
        3   4          CustName#2 - Sydney
        4   0          CustName#2 (Head Office)

    What I want to do is group my results together so that I have one row for CustName#1 and one row for CustName#2, BUT my problem is that there is no parent record for parent_id = 0, and these rows are being excluded when using an inner join. I've tried using a CASE statement, but that is not working either (parents are still being ignored). Any help would be greatly appreciated.

    Here is my query (my CASE is basically trying to get the business_name from the customer table based on the parent_id, EXCEPT when the parent_id = 0 - then just use the customer_name that is listed in the job_summary table):

        SELECT js.month_of_year,
               (CASE js.parent_id
                    WHEN 0 THEN js.customer_name
                    ELSE c.business_name
                END) AS customer,
               SUM(js.jobs), SUM(js.total_cost), SUM(js.total_sell)
        FROM JOB_SUMMARY js
        INNER JOIN customer c ON js.parent_id = c.id
        GROUP BY js.month_of_year,
                 (CASE c.parent_id
                      WHEN 0 THEN js.customer_name
                      ELSE c.business_name
                  END)
        ORDER BY `customer` ASC
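
    A hedged rewrite along the lines the question describes: the INNER JOIN itself is what drops the parent_id = 0 rows before any CASE can run, so make it a LEFT JOIN and let a fallback kick in when no customer row matches.

        -- Hedged sketch: LEFT JOIN keeps the standalone businesses;
        -- COALESCE falls back to the summary table's own name.
        SELECT js.month_of_year,
               COALESCE(c.business_name, js.customer_name) AS customer,
               SUM(js.jobs), SUM(js.total_cost), SUM(js.total_sell)
        FROM JOB_SUMMARY js
        LEFT JOIN customer c ON js.parent_id = c.id
        GROUP BY js.month_of_year, customer
        ORDER BY customer ASC;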

  • date comparisons in Rails

    - by aressidi
    Hi there, I'm having trouble with a date comparison in a named scope. I'm trying to determine if an event is current based on its start and end date. Here's the named scope I'm using, which kind of works, though not for events that have the same start and end date:

        named_scope :date_current,
          :conditions => ["Date(start_date) <= ? AND Date(end_date) >= ?",
                          Time.now, Time.now]

    This returns the following record, though it should return two records, not one:

        >> Event.date_current
        => [#<Event id: 2161, start_date: "2010-02-15 00:00:00", end_date: "2010-02-21 00:00:00", ...]

    What it's not returning is this as well:

        >> Event.find(:last)
        => #<Event id: 2671, start_date: "2010-02-16 00:00:00", end_date: "2010-02-16 00:00:00", ...>

    The server time seems to be in UTC and I presume that the entries are being stored in the DB in UTC. Any ideas as to what I'm doing wrong or what to try? Thanks!
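
    A hedged guess at the mechanism, with a sketch: Date(end_date) yields a bare date, which the database compares against the full Time.now timestamp as midnight of that day, so a single-day event stops matching the moment the clock passes 00:00. Comparing dates to dates sidesteps this (and a lambda keeps the value from being frozen at class-load time):

        # Hedged sketch: compare DATE to DATE, evaluated at query time.
        named_scope :date_current, lambda {
          today = Date.today   # or Time.zone.today, if timezone-aware
          { :conditions => ["Date(start_date) <= ? AND Date(end_date) >= ?",
                            today, today] }
        }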

  • Exporting a table as INSERT statements when a line exceeds 2500 characters in SQL*Plus

    - by The chicken in the kitchen
    Hello, I have to export an Oracle table as INSERT statements, but the INSERT statements so generated exceed 2500 characters. Since I am obliged to execute them in SQL*Plus, I receive an error message. This is my Oracle table:

        CREATE TABLE SAMPLE_TABLE (
            C01 VARCHAR2 (5 BYTE) NOT NULL,
            C02 NUMBER (10) NOT NULL,
            C03 NUMBER (5) NOT NULL,
            C04 NUMBER (5) NOT NULL,
            C05 VARCHAR2 (20 BYTE) NOT NULL,
            c06 VARCHAR2 (200 BYTE) NOT NULL,
            c07 VARCHAR2 (200 BYTE) NOT NULL,
            c08 NUMBER (5) NOT NULL,
            c09 NUMBER (10) NOT NULL,
            c10 VARCHAR2 (80 BYTE),
            c11 VARCHAR2 (200 BYTE),
            c12 VARCHAR2 (200 BYTE),
            c13 VARCHAR2 (4000 BYTE),
            c14 VARCHAR2 (1 BYTE) DEFAULT 'N' NOT NULL,
            c15 CHAR (1 BYTE),
            c16 CHAR (1 BYTE)
        );

    ASSUMPTIONS:
    a) I am OBLIGED to export table data as INSERT statements; I am allowed to use UPDATE statements, in order to avoid the SQL*Plus error "SP2-0027: input is too long (> 2499 characters)";
    b) I am OBLIGED to use SQL*Plus to execute the script so generated;
    c) please assume that every record can contain special characters: CHR(10), CHR(13), and so on;
    d) I CAN'T use SQL*Loader;
    e) I CAN'T export and then import the table: I can only add the "delta" using INSERT/UPDATE statements through SQL*Plus.
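
    A hedged sketch of the INSERT-then-append pattern that assumption (a) already allows: insert the row with the oversized c13 left out, then build it up with concatenating UPDATEs, each comfortably under the SQL*Plus line limit. Special characters ride along as CHR() concatenations. The key values below are made up for illustration.

        -- Hedged sketch: no single statement needs to exceed the limit.
        INSERT INTO sample_table (c01, c02, c03, c04, c05, c06, c07, c08, c09, c14)
        VALUES ('K0001', 1, 1, 1, 'KEY', 'X', 'Y', 1, 1, 'N');

        UPDATE sample_table
           SET c13 = 'first chunk of the long value' || CHR(10)
         WHERE c01 = 'K0001' AND c02 = 1;

        UPDATE sample_table
           SET c13 = c13 || 'second chunk of the long value'
         WHERE c01 = 'K0001' AND c02 = 1;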

  • database schema eligible for delta synchronization

    - by WilliamLou
    This is a question for discussion only. Right now, I need to re-design a MySQL database table. Basically, this table contains all the contract records I synchronized from another database. A contract record can be modified or deleted, and users can add new contract records via a GUI interface. At this stage, the table structure is exactly the same as the contract info (columns: serial number, expiry date, etc.). In that case, I can only synchronize the whole table (delete all old records, replace with new ones). If I want to delta-synchronize the table (only synchronize modified, new, and deleted records), how should I change the database schema?

    Here is the method I have come up with, but I need your suggestions because I think it's a common scenario in database applications:

    1) introduce a sequence-number concept/column: for each sync run, mark the newly added, modified, and deleted records with this sequence number. By recording the last synchronized sequence number, only pass those records with a higher sequence number (sketched below);

    2) because deleted contracts can be added back, and the original table has primary-key constraints, should I create another table for those deleted records, or add a flag column to indicate whether the contract has been deleted?

    I hope I have explained my question clearly. Anyway, if you know of any articles or have your own suggestions about this, please let me know. Thanks!
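
    A hedged schema sketch of exactly the two ideas in the question - a monotonically increasing sync sequence plus a soft-delete flag (column names hypothetical):

        -- Hedged sketch: every insert/update/delete bumps sync_seq;
        -- deletes become flag flips so they can be shipped as deltas.
        ALTER TABLE contract
            ADD COLUMN sync_seq   BIGINT NOT NULL DEFAULT 0,
            ADD COLUMN is_deleted TINYINT(1) NOT NULL DEFAULT 0,
            ADD INDEX idx_sync_seq (sync_seq);

        -- A sync client remembers the highest sequence it has seen and pulls:
        SELECT * FROM contract WHERE sync_seq > 12345 ORDER BY sync_seq;

    Re-adding a previously deleted contract then becomes an UPDATE that clears is_deleted, which also sidesteps the primary-key collision raised in point 2.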

  • Error in Postgres execute

    - by RAJA
    I'm using this function:

        -- Function: dbo.sp_acc_createaccount(character varying, integer, integer,
        --   character varying, character varying, character varying, character varying)
        -- DROP FUNCTION dbo.sp_acc_createaccount(character varying, integer, integer,
        --   character varying, character varying, character varying, character varying);

        CREATE OR REPLACE FUNCTION dbo.sp_acc_createaccount(
            IN    in_orgmgrtype character varying,
            INOUT in_parentid   integer,
            IN    in_levelid    integer,
            IN    in_name       character varying,
            IN    in_phone      character varying,
            IN    in_webpage    character varying,
            IN    in_owner      character varying,
            OUT   out_accountid integer)
          RETURNS record AS
        $BODY$
        DECLARE
            l_CoID     int;
            l_CurrID   int;
            l_OrgMgrId int;
            errmsg     varchar(250);
        BEGIN
            IF in_ParentID = -1 THEN
                errmsg := 'execute sp_Acc_GetCompanyIDForUser failed';
                l_CoID := dbo.sp_Acc_GetCompanyIDForUser(in_user);
                IF l_CoID = -2 THEN
                    RAISE EXCEPTION 'execute sp_Acc_GetCompanyIDForUser failed';
                END IF;
                errmsg := 'execute sp_Acc_GetOrgMgrIDForCompany failed';
                l_OrgMgrID := dbo.sp_Acc_GetOrgMgrIDForCompany(in_OrgMgrType, l_CoID);
                IF l_OrgMgrID = -2 THEN
                    RAISE EXCEPTION 'execute sp_Acc_GetOrgMgrIDForCompany failed';
                END IF;
                in_ParentID := l_OrgMgrID;
            ELSE
                errmsg := 'Select orgmgrid failed';
                SELECT OrgMgrID INTO l_CurrID FROM dbo.OrgMgr
                 WHERE Name = in_Name AND ParentID = in_ParentID;
            END IF;

            -- if not, add it
            IF l_CurrID IS NULL THEN
                errmsg := 'Insert into orgmgr (account creation) failed';
                INSERT INTO dbo.OrgMgr (ParentID, LevelID, Name, PrimaryPhone, WebPage, Owner)
                VALUES (in_ParentID, in_LevelID, in_Name, in_Phone, in_WebPage, in_Owner);
                out_AccountID := currval('dbo.OrgMgr_accountid_seq');
            ELSE
                out_AccountID := -1;
            END IF;

            COMMIT;

        EXCEPTION
            WHEN RAISE_EXCEPTION THEN
                out_AccountID := 99;
                RAISE NOTICE 'ERROR : %', errmsg;
            WHEN OTHERS THEN
                out_AccountID := 99;
                RAISE EXCEPTION 'ERROR : %', errmsg;
        END
        $BODY$
          LANGUAGE 'plpgsql' VOLATILE
          COST 100;
        ALTER FUNCTION dbo.sp_acc_createaccount(character varying, integer, integer,
            character varying, character varying, character varying, character varying)
          OWNER TO postgres;

    But it's showing an error at execution time:

        ERROR: SPI_execute_plan failed executing query "ROLLBACK": SPI_ERROR_TRANSACTION
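
    A hedged reading of that error: SPI_ERROR_TRANSACTION is what PostgreSQL reports when transaction-control statements run inside a function, and this function contains an explicit COMMIT (the EXCEPTION handling then triggers the ROLLBACK named in the message). A PL/pgSQL function always runs inside the caller's transaction, so the fix is to drop the COMMIT and let the caller commit. A tiny self-contained sketch of the pattern, assuming the remaining OrgMgr columns are nullable or defaulted:

        -- Hedged sketch: a minimal function with no COMMIT; callers control the tx.
        CREATE OR REPLACE FUNCTION dbo.demo_insert(in_name varchar)
        RETURNS integer AS
        $BODY$
        DECLARE
            new_id integer;
        BEGIN
            INSERT INTO dbo.OrgMgr (ParentID, LevelID, Name)
            VALUES (NULL, 1, in_name);
            new_id := currval('dbo.OrgMgr_accountid_seq');
            RETURN new_id;   -- no COMMIT/ROLLBACK anywhere in the body
        END
        $BODY$
        LANGUAGE plpgsql VOLATILE;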

  • DataReader-DataSet Hybrid solution

    - by G33kKahuna
    My solution architects and I have exhausted both pure DataSet and pure DataReader solutions. Basically we have a Microsoft .NET 2.0 Windows service application that pulls data based on a query and processes additional tasks per record - almost a poor man's workflow system. The recordsets are broad (in terms of columns) and deep (in terms of number of records).

    We observed that DataSet performs much better, but runs into constraints as the number of records increases, say 100K+: we start seeing System.OutOfMemoryException on a 4 GB machine with processModel configured to run at memoryLimit set to 85. Since this is a multi-threaded app, there can be multiple threads processing different queries and building different DataSets, so we run into the exception sooner in that case.

    DataReader, on the other hand, works but is a lot slower and hits other constraints: if there is some sort of disconnect it has to start over again, or it leaves open connections on the DB side, and in the worst case takes down the service completely, etc.

    So we decided the best option would be some sort of hybrid solution. I'm open to guidance and suggestions. Are there any hybrid solutions available? Any other suggestions?
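
    One hybrid worth sketching, under the assumption that per-record processing doesn't need the whole result at once: stream with a DataReader but buffer fixed-size batches into a DataTable, which bounds memory per thread while keeping the DataSet-style processing code. ProcessBatch() is a hypothetical stand-in for the existing per-record workflow.

        // Hedged sketch (.NET 2.0-compatible): reader for streaming,
        // small DataTable batches for processing.
        const int batchSize = 5000;
        using (SqlConnection conn = new SqlConnection(connectionString))
        using (SqlCommand cmd = new SqlCommand(query, conn))
        {
            conn.Open();
            using (SqlDataReader reader = cmd.ExecuteReader())
            {
                DataTable batch = new DataTable();
                for (int i = 0; i < reader.FieldCount; i++)
                    batch.Columns.Add(reader.GetName(i), reader.GetFieldType(i));

                while (reader.Read())
                {
                    object[] values = new object[reader.FieldCount];
                    reader.GetValues(values);
                    batch.Rows.Add(values);

                    if (batch.Rows.Count >= batchSize)
                    {
                        ProcessBatch(batch);   // hypothetical: existing per-record logic
                        batch.Clear();
                    }
                }
                if (batch.Rows.Count > 0)
                    ProcessBatch(batch);       // trailing partial batch
            }
        }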

  • Using Constraints on Hierarchical Data in a Self-Referential Table

    - by pbarney
    Suppose you have the following table, intended to represent hierarchical data:

        +--------+-------------+
        | Field  | Type        |
        +--------+-------------+
        | id     | int(10)     |
        | parent | int(10)     |
        | name   | varchar(45) |
        +--------+-------------+

    The table is self-referential in that parent refers to id. So you might have the following data:

        +----+--------+---------------+
        | id | parent | name          |
        +----+--------+---------------+
        |  1 |      0 | fruit         |
        |  2 |      0 | vegetable     |
        |  3 |      1 | apple         |
        |  4 |      1 | orange        |
        |  5 |      3 | red delicious |
        |  6 |      3 | granny smith  |
        |  7 |      3 | gala          |
        +----+--------+---------------+

    Using MySQL, I am trying to impose a (self-referential) foreign-key constraint upon the data, to update on cascade and prevent deletion of fruit that have "children." So I used the following:

        CREATE TABLE `idtlp_main`.`fruit` (
            `id` INT(10) UNSIGNED NOT NULL AUTO_INCREMENT,
            `parent` INT(10) UNSIGNED,
            `name` VARCHAR(45) NOT NULL,
            PRIMARY KEY (`id`),
            CONSTRAINT `fk_parent` FOREIGN KEY (`parent`)
                REFERENCES `fruit` (`id`)
                ON UPDATE CASCADE
                ON DELETE RESTRICT
        ) ENGINE = InnoDB;

    From what I understand, this should fit my requirements. (And parent must default to NULL to allow insertions, correct?) The problem is, if I change the id of a record, it will not cascade:

        Cannot delete or update a parent row: a foreign key constraint fails
        (`iddoc_main`.`fruit`, CONSTRAINT `fk_parent` FOREIGN KEY (`parent`)
        REFERENCES `fruit` (`id`) ON UPDATE CASCADE)

    What am I missing? Feel free to correct me if my terminology is screwed up - I'm new to constraints.
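
    For what it's worth, the InnoDB documentation notes that a cascading operation which recurses onto the table it is already updating is treated like RESTRICT - i.e. a self-referential ON UPDATE CASCADE does not actually cascade. A hedged sketch of the usual manual workaround for re-keying a row:

        -- Hedged sketch: re-key parent and children by hand inside one
        -- transaction, with FK checks suspended for the session.
        START TRANSACTION;
        SET foreign_key_checks = 0;
        UPDATE fruit SET id = 10     WHERE id = 3;       -- move the parent
        UPDATE fruit SET parent = 10 WHERE parent = 3;   -- repoint the children
        SET foreign_key_checks = 1;
        COMMIT;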

  • Please help optimize a long-running query (left outer join with 2 subqueries)

    - by 46and2
    Hi all. The query I need help with is:

        SELECT d.bn, d.4700, d.4500, ..., p.`Activity Description`
        FROM (
            SELECT temp.bn, temp.4700, temp.4500, ....
            FROM `tdata` temp
            GROUP BY temp.bn
            HAVING (COUNT(temp.bn) = 1)
        ) d
        LEFT OUTER JOIN (
            SELECT temp2.bn, MAX(temp2.FPE) AS max_fpe, temp2.`Activity Description`
            FROM `pdata` temp2
            GROUP BY temp2.bn
        ) p ON p.bn = d.bn;

    The ... represents other fields that aren't really important to solving this problem. The issue is in the second subquery - it is not using the index I have created, and I am not sure why; it seems to be because of the way TEXT fields are handled. The first subquery uses the index I have created and runs quite snappily, but an EXPLAIN on the second shows 'Using temporary; Using filesort'. Please see the indexes I have created in the table CREATE statements below.

    Can anyone help me optimize this? By way of quick explanation, the first subquery is meant to select only records that have unique bn's; the second, while it looks a bit wacky (with the MAX function there, which is not used in the result set), is making sure that only one record from the right part of the join is included in the result set. My table CREATE statements are:

        CREATE TABLE `tdata` (
            `BN` varchar(15) DEFAULT NULL,
            `4000` varchar(3) DEFAULT NULL,
            `5800` varchar(3) DEFAULT NULL,
            ....
            KEY `BN` (`BN`),
            KEY `idx_t3010` (`BN`,`4700`,`4500`,`4510`,`4520`,`4530`,`4570`,`4950`,`5000`,`5010`,`5020`,`5050`,`5060`,`5070`,`5100`)
        ) ENGINE=MyISAM DEFAULT CHARSET=utf8

        CREATE TABLE `pdata` (
            `BN` varchar(15) DEFAULT NULL,
            `FPE` datetime DEFAULT NULL,
            `Activity Description` text,
            ....
            KEY `BN` (`BN`),
            KEY `idx_programs_2009` (`BN`,`FPE`,`Activity Description`(100))
        ) ENGINE=MyISAM DEFAULT CHARSET=utf8

    Thanks!
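
    One hedged line of attack: grouping while selecting a TEXT column forces an on-disk temporary table (a prefix index on the TEXT column doesn't help the GROUP BY here), so compute the per-BN maximum first and join back for the description - the classic greatest-n-per-group shape:

        -- Hedged sketch: isolate MAX() over just (BN, FPE), then fetch the
        -- TEXT column only for the winning row per BN.
        SELECT p1.BN, p1.FPE, p1.`Activity Description`
        FROM `pdata` p1
        JOIN (
            SELECT BN, MAX(FPE) AS max_fpe
            FROM `pdata`
            GROUP BY BN
        ) p2 ON p2.BN = p1.BN AND p2.max_fpe = p1.FPE;

    The inner query can then be satisfied from the (BN, FPE) prefix of idx_programs_2009 without touching the TEXT data. (Ties on FPE would still return multiple rows per BN - something to decide on deliberately.)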

  • jQuery ajax delete script not actually deleting.

    - by werm
    I have a little personal webapp that I'm working on. I have a link that, when clicked, is supposed to make an ajax call to a PHP script that is supposed to delete that info from a database. For some unknown reason, it won't actually delete the row from the database. I've tried everything I know, but still nothing. I'm sure it's something incredibly easy... Here are the scripts involved.

    Database output:

        $sql = "SELECT * FROM bookmark_app";
        foreach ($dbh->query($sql) as $row) {
            echo '<div class="box" id="',$row['id'],'"><img src="images/avatar.jpg" width="75" height="75" border="0" class="avatar"/>
            <div class="text"><a href="',$row['url'],'">',$row['title'],'</a><br/>
            </div>
            /*** Click to delete ***/
            <a href="?delete=',$row['id'],'" class="delete">x</a></div>
            <div class="clear"></div>';
        }
        $dbh = null;

    Ajax script:

        $(document).ready(function() {
            $("a.delete").click(function(){
                var element = $(this);
                var noteid = element.attr("id");
                var info = 'id=' + noteid;
                $.ajax({
                    type: "GET",
                    url: "includes/delete.php",
                    data: info,
                    success: function(){
                        element.parent().eq(0).fadeOut("slow");
                    }
                });
                return false;
            });
        });

    Delete code:

        include('connect.php');
        //delete.php?id=IdOfPost
        if($_GET['id']){
            $id = $_GET['id'];
            //Delete the record of the post
            $delete = mysql_query("DELETE FROM `db` WHERE `id` = '$id'");
            //Redirect the user
            header("Location:xxxx.php");
        }
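
    A hedged diagnosis with a sketch: the id attribute lives on the outer div.box, not on the <a class="delete">, so element.attr("id") is undefined and delete.php receives id=undefined. Reading the id from the enclosing box (or from the link's ?delete= query string) fixes the request:

        // Hedged sketch: pull the record id from the surrounding .box div.
        $(document).ready(function () {
            $("a.delete").click(function () {
                var element = $(this);
                var noteid = element.closest("div.box").attr("id");
                $.ajax({
                    type: "GET",
                    url: "includes/delete.php",
                    data: { id: noteid },
                    success: function () {
                        element.closest("div.box").fadeOut("slow");
                    }
                });
                return false;
            });
        });

    Two side notes: interpolating $_GET['id'] straight into the DELETE is an SQL-injection hole (mysql_real_escape_string at minimum), and the header("Location: ...") redirect does nothing useful in an ajax response.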

  • Associate new Authlogic Model to existing Models

    - by BriteLite
    Hello, while playing around with Rails (since I am a newbie) while reading the Agile Rails book, I came across an issue using the Authlogic gem that I don't know how to address.

    I have a simple Business model. The table stores the following information: name, address, latitude, and longitude. This approach has been working fine because, using the console, I can enter the information and it shows up where I need it to. My issue now is that I want to add authentication to it - as in, assign the records in the table to individual accounts. Since Authlogic is an authentication gem, can this be done?

    What I am trying to get at here is that I enter a few records and leave it at that. A few days later, I want to assign those individual rows in the table to an Authlogic model, so the person to whom the record should belong can authenticate and make changes. Any code samples or blog posts that would help me better understand would be great! Thank you.
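
    A hedged sketch of the conventional shape: Authlogic handles authentication on a User model, and ordinary Rails associations handle ownership - nothing Authlogic-specific is needed for the linking. It assumes a user_id integer column is added to businesses via a migration.

        # Hedged sketch: Authlogic authenticates; a plain association owns.
        class User < ActiveRecord::Base
          acts_as_authentic        # Authlogic's one-liner
          has_many :businesses
        end

        class Business < ActiveRecord::Base
          belongs_to :user         # requires businesses.user_id
        end

        # Claiming existing rows later, e.g. from the console:
        Business.find(1).update_attribute(:user_id, some_user.id)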

  • Referencing object's identity before submitting changes in LINQ

    - by Axarydax
    Hi, is there a way of knowing the ID of the identity column of a record inserted via InsertOnSubmit beforehand, e.g. before calling the data context's SubmitChanges?

    Imagine I'm populating some kind of hierarchy in the database, but I wouldn't want to submit changes on each recursive call for each child node (e.g. if I had a Directories table and a Files table and were recreating my filesystem structure in the database). I'd like to do it this way: I create a Directory object, set its name and attributes, InsertOnSubmit it into the DataContext.Directories collection, then reference Directory.ID in its child Files. Currently I need to call SubmitChanges to insert the directory into the database, whereupon the database mapping fills its ID column. But this creates a lot of transactions and accesses to the database, and I imagine that if I did this inserting in a batch, the performance would be better.

    What I'd like to do is to somehow use Directory.ID before committing changes: create all my File and Directory objects in advance and then do one big submit that puts all the stuff into the database. I'm also open to solving this problem via a stored procedure; I assume the performance would be even better if all operations were done directly in the database.
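
    A hedged sketch of the usual LINQ to SQL answer: don't reference the ID at all - assign the association property, and let SubmitChanges assign identities and fix up the foreign keys in one batch. Entity and property names below are hypothetical stand-ins for the generated classes.

        // Hedged sketch: wire up objects, not IDs; one SubmitChanges at the end.
        var root = new Directory { Name = "root" };
        var sub  = new Directory { Name = "docs", ParentDirectory = root };
        var file = new File { Name = "readme.txt", Directory = sub };

        db.Directories.InsertOnSubmit(root);
        db.Directories.InsertOnSubmit(sub);
        db.Files.InsertOnSubmit(file);

        db.SubmitChanges();   // identities generated here; FK columns fixed up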

  • MS Access to SQL Server searching

    - by malou17
    How do I use this code if we are going to use a SQL Server database? In this code we used MS Access as the database.

        private void btnSearch_Click(object sender, System.EventArgs e)
        {
            String pcode = txtPcode.Text;
            int ctr = productsDS1.Tables[0].Rows.Count;
            int x;
            bool found = false;
            for (x = 0; x < ctr; x++)
            {
                if (productsDS1.Tables[0].Rows[x][0].ToString() == pcode)
                {
                    found = true;
                    break;
                }
            }
            if (found == true)
            {
                txtPcode.Text = productsDS1.Tables[0].Rows[x][0].ToString();
                txtDesc.Text = productsDS1.Tables[0].Rows[x][1].ToString();
                txtPrice.Text = productsDS1.Tables[0].Rows[x][2].ToString();
            }
            else
            {
                MessageBox.Show("Record Not Found");
            }
        }

        private void btnNew_Click(object sender, System.EventArgs e)
        {
            int cnt = productsDS1.Tables[0].Rows.Count;
            string lastrec = productsDS1.Tables[0].Rows[cnt][0].ToString();
            int newpcode = int.Parse(lastrec) + 1;
            txtPcode.Text = newpcode.ToString();
            txtDesc.Clear();
            txtPrice.Clear();
            txtDesc.Focus();
        }

    Here's the connection string:

        Jet OLEDB:Global Partial Bulk Ops=2;Jet OLEDB:Registry Path=;Jet OLEDB:Database Locking Mode=0;Data Source="J:\2009-2010\1st sem\VC#\Sample\WindowsApplication_Products\PointOfSales.mdb"
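
    Since the search/new logic only touches the in-memory DataSet, a hedged sketch of the only parts that actually change - the provider classes and the connection string (server, instance, and table names below are made up):

        // Hedged sketch: swap the Jet/OleDb provider for System.Data.SqlClient.
        using System.Data;
        using System.Data.SqlClient;

        string connStr = @"Data Source=.\SQLEXPRESS;Initial Catalog=PointOfSales;Integrated Security=True";

        using (SqlConnection conn = new SqlConnection(connStr))
        using (SqlDataAdapter da = new SqlDataAdapter("SELECT * FROM Products", conn))
        {
            da.Fill(productsDS1, "Products");   // fills the same DataSet as before
        }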

  • What do these C# auto-generated table adapter commands do? e.g. UPDATE/INSERT followed by a SELECT

    - by RickL
    I'm working with a legacy application which I'm trying to change so that it can work with SQL CE, whereas it was originally written against SQL Server. The problem I am getting now is that when I try dataAdapter.Update, SQL CE complains that it was not expecting the SELECT keyword in the command text. I believe this is because SQL CE does not support batched SELECT statements. The auto-generated table adapter command looks like this:

        this._adapter.InsertCommand.CommandText =
            @"INSERT INTO [Table] ([Field1], [Field2]) VALUES (@Value1, @Value2);
              SELECT Field1, Field2 FROM Table WHERE (Field1 = @Value1)";

    What is it doing? It looks like it is inserting new records from the DataTable into the database and then reading that record back from the database into the DataTable. What's the point of that?

    Can I just go through the code and remove all these SELECT statements? Or is there an easier way to solve my problem of wanting to use these data adapters with SQL CE? I cannot regenerate these table adapters, as the people who knew how have long since left.
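
    For context, hedged: that trailing SELECT exists to refresh the just-inserted row in the DataTable with anything the database computed (identity values, column defaults, trigger effects), so removing it is safe only if nothing downstream relies on those refreshed values. A sketch of the SQL CE-friendly split, with the refresh done as a second command when it is actually needed:

        // Hedged sketch: single-statement insert for SQL CE...
        this._adapter.InsertCommand.CommandText =
            @"INSERT INTO [Table] ([Field1], [Field2]) VALUES (@Value1, @Value2)";

        // ...and, only if the refreshed values are used, a follow-up query:
        using (SqlCeCommand idCmd = new SqlCeCommand("SELECT @@IDENTITY", connection))
        {
            object newId = idCmd.ExecuteScalar();   // identity of the last insert
        }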
