Search Results

Search found 7391 results on 296 pages for 'record locking'.

Page 94/296 | < Previous Page | 90 91 92 93 94 95 96 97 98 99 100 101  | Next Page >

  • How to save the view count of a question in memory?

    - by Freewind
    My website is like stackoverflow; there are many questions. I want to record how many times a question has been visited. I have a column called "view_count" in the question table to save it. When a user visits a question many times, the view_count should be increased by only 1. So I have to record which user has visited which question, and I think it is too expensive to save those records in the database, because there will be a huge number of them. So I want to keep them in memory and persist the number to the database every 10 minutes. I have searched for information about the Rails "cache", but I haven't found an example. I need a simple example of how to do this, thanks for your help~
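
    One way to sketch this with Rails.cache (the key names below are just illustrative, and the periodic flush job is assumed to exist elsewhere): keep a per-user "already seen" flag and a counter in the cache, and let a scheduled task copy the counters into view_count every 10 minutes.

    ```ruby
    # Sketch only: cache-backed view counting. Question, current_user and the
    # periodic flush task are assumed; adjust key names to taste.
    class QuestionsController < ApplicationController
      def show
        @question = Question.find(params[:id])

        seen_key  = "question:#{@question.id}:seen_by:#{current_user.id}"
        count_key = "question:#{@question.id}:views"

        unless Rails.cache.read(seen_key)
          Rails.cache.write(seen_key, true)          # remember this user's visit
          views = Rails.cache.read(count_key) || 0
          Rails.cache.write(count_key, views + 1)    # bump the in-memory counter
        end
      end
    end

    # A scheduled task (cron, delayed job, etc. -- assumed) would then read each
    # cached counter, add it to the view_count column, and reset the cached value.
    ```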

    Read the article

  • Nested queries in Arel

    - by Schrockwell
    I am attempting to nest SELECT queries in Arel and/or Active Record in Rails 3 to generate the following SQL statement. SELECT sorted.* FROM (SELECT * FROM points ORDER BY points.timestamp DESC) AS sorted GROUP BY sorted.client_id An alias for the subquery can be created by doing points = Table(:points) sorted = points.order('timestamp DESC').alias but then I'm stuck as how to pass it into the parent query (short of calling #to_sql, which sounds pretty ugly). How do you use a SELECT statement as a sub-query in Arel (or Active Record) to accomplish the above? Maybe there's an altogether different way to accomplish this query that doesn't use nested queries?
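
    If falling back to a SQL string for the derived table is tolerable after all, one workable sketch is to hand it to ActiveRecord's from (Point is assumed to be the model for the points table):

    ```ruby
    # Sketch: feed the ordered subquery to the outer relation via from().
    subquery = Point.order('timestamp DESC')

    latest_per_client = Point.select('sorted.*')
                             .from("(#{subquery.to_sql}) AS sorted")
                             .group('sorted.client_id')
    ```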

    Read the article

  • JavaScript form validation in Rails?

    - by Elliot
    Hey guys, I was wondering how to go about form validation in Rails. Specifically, here is what I'm trying to do: the form lets the user select an item from a drop-down menu and then enter a number along with it. Using RJS, the record instantly shows up in a table below the form. Resetting the form with AJAX isn't a problem. The issue is, I don't want the person to be able to select the same item from that drop-down menu twice (in one day, at least). Without AJAX this isn't a problem (as I currently have a function for the select statement), but now that the page isn't reloading, I need a way to make sure people can't add the same item twice in one day. That said, is there a way to use some JavaScript/AJAX validation to make sure the same record hasn't been submitted during that day, before a duplicate can be created? Thanks in advance! Elliot
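
    Client-side checks can be spoofed or go stale, so the usual pattern is to enforce the rule in the model and let the AJAX response report the error back to the form. A rough sketch (Selection, item_id and user_id are made-up names for whatever the form actually creates):

    ```ruby
    # Sketch: reject a second record for the same item, same user, same day.
    class Selection < ActiveRecord::Base
      validate :item_only_once_per_day, :on => :create

      def item_only_once_per_day
        today   = Time.zone.now.beginning_of_day..Time.zone.now.end_of_day
        already = Selection.where(:user_id => user_id, :item_id => item_id,
                                  :created_at => today).exists?
        errors.add(:item_id, "has already been added today") if already
      end
    end
    ```

    The create action can then return record.errors in its RJS/JSON response so the page shows the message without a reload.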

    Read the article

  • Connect to a web API from .NET

    - by Saif Khan
    How can I access and consume a web API from .NET? The API is not a .NET API. Here is sample code I have in Ruby require 'uri' require 'net/http' url = URI.parse("http://account.codebasehq.com/widgets/tickets") req = Net::HTTP::Post.new(url.path) req.basic_auth('dave', '6b2579a03c2e8825a5fd0a9b4390d15571f3674d') req.add_field('Content-type', 'application/xml') req.add_field('Accept', 'application/xml') xml = "<ticket><summary>My Example Ticket</summary><status-id>1234</status-id><priority-id>1234</priority-id><ticket-type>bug</ticket-type></ticket>" res = Net::HTTP.new(url.host, url.port).start {|http| http.request(req, xml)} case res when Net::HTTPCreated puts "Record was created successfully." else puts "An error occurred while adding this record" end Where can I find information on consuming API like this from .NET? I am aware how to use .NET webservices.
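
    A rough .NET translation of that Ruby snippet (a sketch, not tested against the Codebase API itself) is a plain HttpWebRequest with a Basic auth header and an XML body:

    ```csharp
    using System;
    using System.IO;
    using System.Net;
    using System.Text;

    class CodebaseTicketPost
    {
        static void Main()
        {
            string xml = "<ticket><summary>My Example Ticket</summary>" +
                         "<status-id>1234</status-id><priority-id>1234</priority-id>" +
                         "<ticket-type>bug</ticket-type></ticket>";

            var request = (HttpWebRequest)WebRequest.Create("http://account.codebasehq.com/widgets/tickets");
            request.Method = "POST";
            request.ContentType = "application/xml";
            request.Accept = "application/xml";

            // Basic auth, sent up front (equivalent of req.basic_auth in the Ruby code).
            string token = Convert.ToBase64String(
                Encoding.ASCII.GetBytes("dave:6b2579a03c2e8825a5fd0a9b4390d15571f3674d"));
            request.Headers["Authorization"] = "Basic " + token;

            byte[] body = Encoding.UTF8.GetBytes(xml);
            using (Stream stream = request.GetRequestStream())
                stream.Write(body, 0, body.Length);

            try
            {
                using (var response = (HttpWebResponse)request.GetResponse())
                    Console.WriteLine(response.StatusCode == HttpStatusCode.Created
                        ? "Record was created successfully."
                        : "An error occurred while adding this record");
            }
            catch (WebException)
            {
                Console.WriteLine("An error occurred while adding this record");
            }
        }
    }
    ```

    Third-party wrappers such as RestSharp exist too, but the raw request above maps most directly onto the Ruby version.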

    Read the article

  • Android: Voice Recording and saving audio

    - by user1320912
    I am working on application that will record the voice of the user and save the file on the SD card and then allow the user to listen to the audio again. I am able to allow the user to record his voice using the RecognizerIntent, but I cant figure out how to save the audio file and allow the user to hear the audio. I would appreciate it if someone could help me out. I have displayed my code below: // Setting up the onClickListener for Audio Button attachVoice = (Button) findViewById(R.id.AttachVoice_questionandanswer); attachVoice.setOnClickListener(new OnClickListener() { public void onClick(View v) { Intent voiceIntent = new Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH); voiceIntent.putExtra(RecognizerIntent.EXTRA_LANGUAGE_MODEL, RecognizerIntent.LANGUAGE_MODEL_FREE_FORM); voiceIntent.putExtra(RecognizerIntent.EXTRA_PROMPT, "Please Speak"); startActivityForResult(voiceIntent, VOICE_REQUEST); } }); protected void onActivityResult(int requestCode, int resultCode, Intent data) { if(requestCode == VOICE_REQUEST && resultCode == RESULT_OK){ }
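
    RecognizerIntent only hands back recognised text, not the audio itself, so recording and playback usually go through MediaRecorder and MediaPlayer instead. A minimal sketch (file name and error handling simplified; RECORD_AUDIO and WRITE_EXTERNAL_STORAGE permissions assumed in the manifest):

    ```java
    import java.io.IOException;
    import android.media.MediaPlayer;
    import android.media.MediaRecorder;
    import android.os.Environment;

    public class VoiceNoteHelper {
        private MediaRecorder recorder;
        private final String outputPath =
                Environment.getExternalStorageDirectory().getAbsolutePath() + "/voice_note.3gp";

        public void startRecording() throws IOException {
            recorder = new MediaRecorder();
            recorder.setAudioSource(MediaRecorder.AudioSource.MIC);
            recorder.setOutputFormat(MediaRecorder.OutputFormat.THREE_GPP);
            recorder.setAudioEncoder(MediaRecorder.AudioEncoder.AMR_NB);
            recorder.setOutputFile(outputPath);          // saved on the SD card
            recorder.prepare();
            recorder.start();
        }

        public void stopRecording() {
            recorder.stop();
            recorder.release();
            recorder = null;
        }

        public void playRecording() throws IOException {
            MediaPlayer player = new MediaPlayer();
            player.setDataSource(outputPath);
            player.prepare();
            player.start();                              // let the user hear it again
        }
    }
    ```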

    Read the article

  • ExtJS combo setting problem

    - by Hubidubi
    Hi I run into an interesting problem while was using combos in input form. My form contains combos that get data from json stores. It works fine when adding new record, but when the form is opened for editing an existing record, sometimes the id appears as selected not its value (eg: there's 5 instead of "apple"). I think it tries to set the value before it finishes loading the combo. Is there a way to solve this? I put the code down here that creates combos: function dictComboMaker( store, fieldLabel, hiddenName, name, allowBlank, myToolTipp ) { comboo = { xtype : 'combo', id: 'id-'+name, allowBlank: allowBlank, fieldLabel : fieldLabel, forceSelection : true, displayField : 'value', valueField : 'id', editable: false, name: name, hiddenName : hiddenName, minChars : 2, mode: 'remote', triggerAction : 'all', store : store }; function dictJsonMaker(url) { store = new Ext.data.JsonStore({ root : 'results', // 1 fields : [ 'id','value' ], url : url, autoLoad: true}); return store; } var comboKarStore = dictJsonMaker('/service/karok'); var comboKar= dictComboMaker(comboKarStore, 'Kar', 'karid', 'kar', false, ''); // then comboKar is added to the form Hubidubi
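
    That diagnosis is usually right for remote combos: setValue() runs before the store has any records, so the raw id is shown. One common workaround (a sketch; formPanel and record stand in for whatever loads your form) is to apply the record values only after the store's load event, or to preload the store and switch the combo to mode: 'local':

    ```javascript
    // Sketch: defer populating the form until the combo's store has loaded,
    // so the id can be resolved to its display text.
    comboKarStore.on('load', function () {
        formPanel.getForm().loadRecord(record);
        // or: Ext.getCmp('id-kar').setValue(record.get('karid'));
    }, this, { single: true });   // only needed for the first load
    ```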

    Read the article

  • Firing trigger for bulk insert

    - by Deepa
    ALTER TRIGGER [dbo].[TR_O_SALESMAN_INS] ON [dbo].[O_SALESMAN] AFTER INSERT AS BEGIN -- SET NOCOUNT ON added to prevent extra result sets from -- interfering with SELECT statements. SET NOCOUNT ON; -- Insert statements for trigger here DECLARE @SLSMAN_CD NVARCHAR(20) DECLARE @SLSMAN_NAME NVARCHAR(20) SELECT @SLSMAN_CD = SLSMAN_CD,@SLSMAN_NAME=SLSMAN_NAME FROM INSERTED IF NOT EXISTS(SELECT * FROM O_SALESMAN_USER WHERE SLSMAN_CD = @SLSMAN_CD) BEGIN INSERT INTO O_SALESMAN_USER(SLSMAN_CD, PASSWORD, USER_CD) VALUES(@SLSMAN_CD, @SLSMAN_CD,@SLSMAN_NAME ) END END This trigger is written on the O_SALESMAN table to fetch a few columns from it and insert them into another table (O_SALESMAN_USER). At present, bulk data is inserted into O_SALESMAN through a stored procedure, but the trigger fires only once, so O_SALESMAN_USER gets only one record inserted each time the stored procedure is executed. I want the trigger to handle each and every record that gets inserted into O_SALESMAN, so that both tables end up with the same count, which is not happening. Please let me know what can be modified in this trigger to achieve that.
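
    The usual cause: in SQL Server a trigger fires once per statement, not once per row, and the scalar variables only capture one row from inserted. A set-based rewrite (sketch, keeping the original column names) handles the whole batch:

    ```sql
    ALTER TRIGGER [dbo].[TR_O_SALESMAN_INS] ON [dbo].[O_SALESMAN]
    AFTER INSERT
    AS
    BEGIN
        SET NOCOUNT ON;

        -- Insert one O_SALESMAN_USER row for every inserted salesman
        -- that does not already have one, in a single statement.
        INSERT INTO O_SALESMAN_USER (SLSMAN_CD, PASSWORD, USER_CD)
        SELECT i.SLSMAN_CD, i.SLSMAN_CD, i.SLSMAN_NAME
        FROM inserted AS i
        WHERE NOT EXISTS (SELECT 1 FROM O_SALESMAN_USER AS u
                          WHERE u.SLSMAN_CD = i.SLSMAN_CD);
    END
    ```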

    Read the article

  • Doctrine CodeIgniter MySQL CRUD errors

    - by 01010011
    Hi, I am using CI + Doctrine + MySQL and I am getting the following CRUD errors: (1) When trying to create a new record in the database with this code: $book_title = 'The Peloponnesian War'; $b = new Book(); $b-title = $book_title; $b-price = 10.50; $b-save(); I get this error: Fatal error: Uncaught exeption 'Doctrine_Connection_Mysql_Exception' with message 'SQLSTATE[42S22]: Column not found: 1054 Unknown column 'title' in 'field list' in ... (2) When trying to fetch a record from the database and display on my view page with this code: $book_title = 'The Peloponnesian War'; $title = $book_title; $search_results = Doctrine::getTable('Book')-findOneByTitle($title); echo $search_results-title; (in view file) I get this error: Fatal error: Uncaught exception 'Doctrine_Connection_Mysql_Exception' with message 'SQLSTATE[45S22]: Column not found: 1054 Unknown column 'b.id' in 'field list" in ... And finally, when I try to update a record as follows: $book_title = 'The Peloponnesian War'; $title = $book_title; $u = Doctrine::getTable('Book')-find($title); $u-title = $title; $u-save(); I get a similar error: Fatal error: Uncaught exception 'Doctrine_Connection_Mysql_Exception' with message 'SQLSTATE[42S22]: Column not found: 1054 Unknown column 'b.id' in 'field list''in ... Here is my Doctrine_Record model: class Book extends Doctrine_Record{ public function setTableDefinition() { $this->hasColumn('book_id'); $this->hasColumn('isbn10','varchar',20); $this->hasColumn('isbn13','varchar',20); $this->hasColumn('title','varchar',100); $this->hasColumn('edition','varchar',20); $this->hasColumn('author_f_name','varchar',20); $this->hasColumn('author_m_name','varchar',20); $this->hasColumn('author_l_name','varchar',20); $this->hasColumn('cond','enum',null, array('values' => array('as new','very good','good','fair','poor'))); $this->hasColumn('price','decimal',8, array('scale' =>2)); $this->hasColumn('genre','varchar',20); } public function setUp() { $this->setTableName('Book'); //$this->actAs('Timestampable'); } Any assistance will be really appreciated. Thanks in advance.

    Read the article

  • CentOS Client - Unable to Establish iSCSI connection with multiple interfaces on the initiator

    - by slashdot
    So after upgrading to CentOS 6.2, I am seemingly no longer able to login into my iSCSI targets. I have multiple interfaces on different subnets on the system, and I first thought that it had to do with the fact that I may not be binding correct interfaces, which seems to be the case when looking at netstat, as this is clearly wrong: [root]? netstat -na|grep .90 tcp 0 1 10.10.100.60:42354 10.10.8.90:3260 SYN_SENT tcp 0 1 10.10.100.60:40777 10.10.9.90:3260 SYN_SENT I then went ahead and disabled all but one interface, and so as a result netstat appears to be correct, but the issue with login remains. I am positive that the target never sees a packet, because I see nothing by SYN_SENT. I know the problem is on my client, because the target is servicing multiple systems, none of which are CentOS 6.2. At this point I am pretty confident that some things changed between CentOS 6.0/6.1 and 6.2. So, if anyone have any thoughts, or ran into this, I would very much like to hear your thoughts. [root]? iscsiadm --mode node --targetname iqn.2011-12.dom.homer:01:lab-centos-servers-00001 --portal 10.10.8.90:3260,2 --interface=sw-iscsi-0 --login Logging in to [iface: sw-iscsi-0, target: iqn.2011-12.dom.homer:01:lab-centos-servers-00001, portal: 10.10.8.90,3260] (multiple) iscsiadm: Could not login to [iface: sw-iscsi-0, target: iqn.2011-12.dom.homer:01:lab-centos-servers-00001, portal: 10.10.8.90,3260]. iscsiadm: initiator reported error (8 - connection timed out) iscsiadm: Could not log into all portals [root]? netstat -rn Kernel IP routing table Destination Gateway Genmask Flags MSS Window irtt Iface 10.10.8.0 0.0.0.0 255.255.255.0 U 0 0 0 eth2.7 10.10.9.0 0.0.0.0 255.255.255.0 U 0 0 0 eth3.7 10.10.100.0 0.0.0.0 255.255.252.0 U 0 0 0 eth0 169.254.0.0 0.0.0.0 255.255.0.0 U 0 0 0 eth0 169.254.0.0 0.0.0.0 255.255.0.0 U 0 0 0 eth1 169.254.0.0 0.0.0.0 255.255.0.0 U 0 0 0 eth2 169.254.0.0 0.0.0.0 255.255.0.0 U 0 0 0 eth3 169.254.0.0 0.0.0.0 255.255.0.0 U 0 0 0 eth2.7 169.254.0.0 0.0.0.0 255.255.0.0 U 0 0 0 eth3.7 0.0.0.0 10.10.100.1 0.0.0.0 UG 0 0 0 eth0 Output of ip addr show for the two interfaces involved: [root]? for i in 2.7 3.7; do ip addr show eth$i; done 6: eth2.7@eth2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP link/ether 00:0c:29:94:5b:8d brd ff:ff:ff:ff:ff:ff inet 10.10.8.60/24 brd 10.10.8.255 scope global eth2.7 inet6 fe80::20c:29ff:fe94:5b8d/64 scope link valid_lft forever preferred_lft forever 7: eth3.7@eth3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP link/ether 00:0c:29:94:5b:97 brd ff:ff:ff:ff:ff:ff inet 10.10.9.60/24 brd 10.10.9.255 scope global eth3.7 inet6 fe80::20c:29ff:fe94:5b97/64 scope link valid_lft forever preferred_lft forever Update 01/06/2012: This issue is getting even more interesting by the day it seems. I went a few weeks back and grabbed a snapshot of this system from before upgrading to 6.2. I spun up a new system from the snapshot, and reconfigured interface info and host keys, as well as iSCSI initiator and iscsi interface info to match new MACs. Changed nothing else. Then, I attempted to connect to my targets, and no issues at all. I cannot say that this was unexpected. I then went ahead and compared sysctl settings from both systems and there were differences after the upgrade, but nothing seemingly relevant to iSCSI or IP that could contribute to this. I also noticed that by default now two sessions per connection were enabled after the upgrade, but I changed it back to 1 session in /etc/iscsi/iscsid.conf. 
On the problematic system we can see that source interface is seemingly wrong, but even when I disable the 10.10.100 interface, problems persist. So, while this may be relevant, I could not validate it for certain. Needless to say, further research is necessary. Something is clearly different between releases. Working system is on 6.1, and non-working is 6.2. ::Working System:: tcp 0 0 10.10.8.210:39566 10.10.8.90:3260 ESTABLISHED tcp 0 0 10.10.9.210:46518 10.10.9.90:3260 ESTABLISHED [root]? ip route show 10.10.8.0/24 dev eth2.6 proto kernel scope link src 10.10.8.210 10.10.9.0/24 dev eth3.7 proto kernel scope link src 10.10.9.210 10.10.100.0/22 dev eth0 proto kernel scope link src 10.10.100.210 169.254.0.0/16 dev eth0 scope link metric 1002 169.254.0.0/16 dev eth2.6 scope link metric 1006 169.254.0.0/16 dev eth3.7 scope link metric 1007 default via 10.10.100.1 dev eth0 ::Non-working System:: tcp 0 1 10.10.100.60:44737 10.10.9.90:3260 SYN_SENT tcp 0 1 10.10.100.60:55479 10.10.8.90:3260 SYN_SENT [root]? ip route show 10.10.8.0/24 dev eth2.6 proto kernel scope link src 10.10.8.60 10.10.9.0/24 dev eth3.7 proto kernel scope link src 10.10.9.60 10.10.100.0/22 dev eth0 proto kernel scope link src 10.10.100.60 169.254.0.0/16 dev eth0 scope link metric 1002 169.254.0.0/16 dev eth2.6 scope link metric 1006 169.254.0.0/16 dev eth3.7 scope link metric 1007 default via 10.10.100.1 dev eth0 And the result is still same: [root]? iscsiadm: Could not login to [iface: sw-iscsi-0, target: iqn.2011-12.dom.homer:01:lab-centos-servers-00001, portal: 10.10.8.90,3260]. iscsiadm: initiator reported error (8 - connection timed out) iscsiadm: Could not login to [iface: sw-iscsi-1, target: iqn.2011-12.dom.homer:02:lab-centos-servers-00001, portal: 10.10.9.90,3260]. iscsiadm: initiator reported error (8 - connection timed out) iscsiadm: Could not log into all portals Update 01/08/2012: I believe I have been able to figure out the answer to my issue. It is quite obscure and I doubt this will happen to anyone else any time soon. It turns out that setting iface.iscsi_ifacename and iface.hwaddress in the interfaces configuration file is not legal. When one manually adds an iscsi target, such as below, all settings from the interface config file are copied into the node config file, that gets created by the below command. Result is parameters iface.iscsi_ifacename and iface.hwaddress together in the same config file. These parameters are seemingly mutually exclusive, which does not exactly make sense, or there is perhaps an oversight in the codepath. Perhaps I will investigate further. # iscsiadm -m node --op new -T iqn.2011-12.dom.homer:01:lab-centos-servers-00001 -p 10.10.8.90,3260,2 -I sw-iscsi-0 # iscsiadm -m node --op new -T iqn.2011-12.dom.homer:02:lab-centos-servers-00001 -p 10.10.9.90,3260,2 -I sw-iscsi-1 Notice, below I commented out iface.hwaddress and iface.ipaddress, after which I re-added targets, with same command as above. All works just fine. [root]? 
cat * # BEGIN RECORD 2.0-872.33.el6 iface.iscsi_ifacename = sw-iscsi-0 iface.net_ifacename = eth2.6 #iface.hwaddress = XX:XX:XX:XX:XX:XX #iface.ipaddress = 10.10.8.60 iface.transport_name = tcp iface.vlan_id = 6 iface.vlan_priority = 0 iface.iface_num = 0 iface.mtu = 0 iface.port = 0 # END RECORD # BEGIN RECORD 2.0-872.33.el6 iface.iscsi_ifacename = sw-iscsi-1 iface.net_ifacename = eth3.7 #iface.hwaddress = XX:XX:XX:XX:XX:XX #iface.ipaddress = 10.10.9.60 iface.transport_name = tcp iface.vlan_id = 7 iface.vlan_priority = 0 iface.iface_num = 0 iface.mtu = 0 iface.port = 0 # END RECORD Again, chances of this happening to someone else are slim to none, so likely waste of time typing this up. But, if someone does encounter this issue, I hope this post will help.

    Read the article

  • Is it possible to use ContainsTable to get results for more than one column?

    - by LockeCJ
    Consider the following table: People FirstName nvarchar(50) LastName nvarchar(50) Let's assume for the moment that this table has a full-text index on it for both columns. Let's suppose that I wanted to find all of the people named "John Smith" in this table. The following query seems like a perfectly rational way to accomplish this: SELECT * from People p INNER JOIN CONTAINSTABLE(People,*,'"John*" AND "Smith*"') Unfortunately, this will return no results, assuming that there is no record in the People table that contains both "John" and "Smith" in either the FirstName or LastName columns. It will not match a record with "John" in the FirstName column, and "Smith" in the LastName column, or vice-versa. My question is this: How does one accomplish what I'm trying to do above? Please consider that the example above is simplified. The real table I'm working with has ten columns and the input I'm receiving is a single string which is split up based on standard word breakers (space, dash, etc.)
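
    One way to get the "either column can satisfy either word" behaviour (a sketch; it trades the single CONTAINSTABLE ranking for plain CONTAINS predicates) is to test each search term against the full-text columns separately and AND the terms together:

    ```sql
    -- Each term may match in FirstName OR LastName, but every term must match somewhere.
    SELECT p.*
    FROM People AS p
    WHERE (CONTAINS(p.FirstName, '"John*"')  OR CONTAINS(p.LastName, '"John*"'))
      AND (CONTAINS(p.FirstName, '"Smith*"') OR CONTAINS(p.LastName, '"Smith*"'));
    ```

    For the real ten-column case the predicate can be generated per term from the split input; another common approach is to full-text index a single computed column that concatenates the searchable fields.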

    Read the article

  • TVirtualStringTree - resetting non-visual nodes and memory consumption

    - by Remy Lebeau - TeamB
    I have an app that loads records from a binary log file and displays them in a virtual TListView. There are potentially millions of records in a file, and the display can be filtered by the user, so I do not load all of the records in memory at one time, and the ListView item indexes are not a 1-to-1 relation with the file record offsets (List item 1 may be file record 100, for instance). I use the ListView's OnDataHint event to load records for just the items the ListView is actually interested in. As the user scrolls around, the range specified by OnDataHint changes, allowing me to free records that are not in the new range, and allocate new records as needed. This works fine, speed is tolerable, and the memory footprint is very low. I am currently evaluating TVirtualStringTree as a replacement for the TListView, mainly because I want to add the ability to expand/collapse records that span multiple lines (I can fudge it with the TListView by incrementing/decrementing the item count dynamically, but this is not as straight forward as using a real tree). For the most part, I have been able to port the TListView logic and have everything work as I need. I notice that TVirtualStringTree's virtual paridigm is vastly different, though. It does not have the same kind of OnDataHint functionality that TListView does (I can use the OnScroll event to fake it, which allows my memory buffer logic to continue working), and I can use the OnInitializeNode event to associate nodes with records that are allocated. However, once a tree node is initialized, it sees that it remains initialized for the lifetime of the tree. That is not good for me. As the user scrolls around and I remove records from memory, I need to reset those non-visual nodes without removing them from the tree completely, or losing their expand/collapse states. When the user scrolls them back into view, I can re-allocate the records and re-initialize the nodes. Basically, I want to make TVirtualStringTree act as much like TListView as possible, as far as its virtualization is concerned. I have seen that TVirtualStringTree has a ResetNode() method, but I encounter various errors whenever I try to use it. I must be using it wrong. I also thought of just storing a data pointer inside each node to my record buffers, and I allocate and free memory, update those pointers accordingly. The end effect does not work so well, either. Worse, my largest test log file has ~5 million records in it. If I initialize the TVirtualStringTree with that many nodes at one time (when the log display is unfiltered), the tree's internal overhead for its nodes takes up a whopping 260MB of memory (without any records being allocated yet). Whereas with the TListView, loading the same log file and all the memory logic behind it, I can get away with using just a few MBs. Any ideas?

    Read the article

  • Making a DateTime field in a SQLExpress database?

    - by Mike
    I'm putting together a simple test database to learn MVC with. I want to add a DateTime field to show when the record was CREATED. ID = int Name = Char DateCreated = (dateTime, DateTime2..?) I have a feeling that this type of DateTime capture can be done automatically - but that's all I have, a feeling. Can it be done? And if so how? While we're on the subject: if I wanted to include another field that captured the DateTime of when the record was LAST UPDATED how would I do that. I'm hoping to not do this manually. Many thanks Mike
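
    The created timestamp can indeed be automatic: give the column a DEFAULT so SQL Server stamps it on INSERT. A "last updated" column needs either application code or an AFTER UPDATE trigger. A sketch (table and trigger names are made up):

    ```sql
    CREATE TABLE Items (
        ID           int IDENTITY(1,1) PRIMARY KEY,
        Name         nvarchar(50) NOT NULL,
        DateCreated  datetime     NOT NULL DEFAULT GETDATE(),  -- set automatically on insert
        DateModified datetime     NULL
    );
    GO

    -- Keep DateModified current whenever a row changes.
    CREATE TRIGGER trg_Items_Touch ON Items
    AFTER UPDATE
    AS
    BEGIN
        SET NOCOUNT ON;
        UPDATE i
        SET DateModified = GETDATE()
        FROM Items AS i
        JOIN inserted AS ins ON ins.ID = i.ID;
    END
    ```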

    Read the article

  • Want to load a jQuery dialog from a different web page

    - by Jake
    I'm a bit of a n00b with jQuery, so this one is probably an RTFM question: I'm writing an application to create a somewhat complex record for my client. Building the record requires doing a couple of server-side searches inside a dialog. Right now I have everything framed up in one file (ASP.NET) and it's OK, but I can see that as I add the business logic and the communication with the server, this is going to get really ugly. I'm already putting most of the JavaScript in external files, but I'd like to move the HTML for the dialogs out too. How do I get the jQuery dialog method to load the dialog body from the HTML files? Something like: getDialogHTML(dialogHolderDiv); <---magic goes here var dialogOptions = { ... }; $("#"+dialogHolderDiv).dialog(dialogOptions); $("#"+dialogHolderDiv).dialog('open'); Any help will be appreciated.
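
    jQuery's .load() covers the "magic goes here" part: fetch the dialog markup into the holder div, then initialise and open the dialog in the callback. A sketch (the URL and ids are placeholders):

    ```javascript
    // Pull dialog HTML from a separate file, then open it once it has arrived.
    function openRemoteDialog(dialogHolderDiv, url, dialogOptions) {
        $("#" + dialogHolderDiv).load(url, function () {
            $(this).dialog(dialogOptions);   // markup now exists, safe to initialise
            $(this).dialog("open");
        });
    }

    // Usage:
    openRemoteDialog("searchDialog", "dialogs/search-dialog.html",
                     { autoOpen: false, modal: true, width: 500 });
    ```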

    Read the article

  • Video packet capture over multiple IP cameras

    - by nimals1986
    Hello. We are working on a C language application which is a simple RTSP/RTP client to record video from a number of Axis cameras. We launch a pthread for each camera, which establishes the RTP session and begins to record the packets captured using the recvfrom() call. A single camera with a single pthread records fine for well over a day without issues, but when testing with more cameras, about 25 (so 25 pthreads), the recording to file goes fine for 15 to 20 minutes and then just stops. The application still keeps running. It's been over a month and a half that we have been trying varied implementations, but nothing seems to help. Please provide suggestions. We are using the CentOS 5 platform.

    Read the article

  • Writing a rails validator with integer

    - by user297008
    I was trying to write a validation for Rails to ensure that a price entered on a form was greater than zero. It works…sort of. The problem is that when I run it, val is turned into an integer, so it thinks that .99 is less than .1. What's going on, and how should I fix the code? class Product < ActiveRecord::Base protected def self.validates_greater_than_zero(*attr_names) validates_each(attr_names) do |record, attr, val| record.errors.add(attr, "should be at least 0.01 (current val = #{val.to_f})") if val.nil? || val < 0.01 end end public validates_presence_of :title, :description, :image_url validates_numericality_of :price validates_greater_than_zero :price end
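
    Two things are worth checking here: the built-in numericality validator already supports a lower bound, and if the price column itself is an integer type the value is truncated before validation ever runs, so the column should be decimal. A sketch of the validator route:

    ```ruby
    # Sketch: let Rails handle the bound; no custom class method needed.
    class Product < ActiveRecord::Base
      validates_presence_of :title, :description, :image_url
      validates_numericality_of :price,
        :greater_than_or_equal_to => 0.01,
        :message => "should be at least 0.01"
    end
    ```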

    Read the article

  • Ext RowEditor.js does not fire 'afteredit' event

    - by terrani
    Hi, I have a Ext grid with RowEditor plugin. I have the following code to add 'afteredit' event to the roweditor object. store.on('update',function(){ }); editor.on("afteredit",function(roweditor,changes,record,index){ $.ajax({ url: $("#web").val() + "/registration/client/address-save" ,type: 'post' ,data: record.json ,dataType: 'json' ,success: function(data){ if(data.success == true){ alert("Update Successfully"); } } }); }); when I click a row and edit a value, sometimes the grid fires 'afteredit' event, but sometimes it doesn't. Do I have a problem with my code above?
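
    RowEditor only raises afteredit when at least one field value actually changed; edits that leave the row untouched come back silent. If the goal is simply "save whenever a row is committed", hanging the handler on the store's update event (which the code above already subscribes to, empty) tends to be more reliable. A sketch:

    ```javascript
    // Sketch: react to every committed edit on the store instead of the editor.
    store.on('update', function (thisStore, record, operation) {
        if (operation !== Ext.data.Record.EDIT) { return; }   // ignore commit/reject
        $.ajax({
            url: $("#web").val() + "/registration/client/address-save",
            type: 'post',
            data: record.data,   // record.json is only present for JsonReader-loaded rows
            dataType: 'json',
            success: function (data) {
                if (data.success === true) { alert("Update Successfully"); }
            }
        });
    });
    ```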

    Read the article

  • Server database -> client database update based on version

    - by user296191
    Hi, what is the recommended method of collecting items in a server database, versioning the database, and then deploying only the version differences to a client? Should it be a field in the table (i.e. Version: 3.3.9876) against each record? Should it be a server-based DateTime in each record? And what's the best way to deploy just the changes to a client with an older version of the database? Is it a dump to a file with a bulk import of some description? Open to comments and suggestions. The database can be anything (Firebird, MySQL, SQL Server, SQLite). Any info greatly appreciated.
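
    A common pattern (sketch only, engine-neutral) is a monotonically increasing change number rather than timestamps, since server clocks can move and ties are ambiguous: every insert or update stamps the row with the next value from a global counter, deletes are soft, and a client that last saw version N just asks for everything newer.

    ```sql
    -- Sketch: per-row change counter plus soft deletes so removals replicate too.
    CREATE TABLE items (
        id          INTEGER PRIMARY KEY,
        payload     VARCHAR(255),
        row_version BIGINT   NOT NULL,           -- taken from a global counter on every write
        is_deleted  SMALLINT NOT NULL DEFAULT 0
    );

    -- Client sync request: "give me every change since the last version I saw".
    SELECT id, payload, is_deleted
    FROM items
    WHERE row_version > :client_last_version;
    ```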

    Read the article

  • jQuery: repeat a process once a reply is received from a PHP file

    - by air
    I have one PHP file which processes adding records to the database from an array. For example, the array has 5 items: an='abc','xyz','ert','wer','oiu'. I want to call that PHP file with the jQuery ajax method: um = an.split(','); var counter = 0; if(counter < uemail.length) { $("#sending_count").html("Processing Record "+ ecounter +" of " + an.length); var data = {uid: um[counter] $.ajax({ type: "POST", url: "save.php", data: data, success: function(html){ echo "added"; counter++; } What it does is run through the whole list while save.php is still working. What I want is for it to stop after one item until save.php completes, then wait 10 seconds and start adding the 2nd element. Thanks
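
    One way to make it strictly sequential (a sketch built on the snippet above): move to the next element only from the success callback, and put the 10-second pause in a setTimeout there, instead of looping up front.

    ```javascript
    var um = an.split(',');

    function processNext(counter) {
        if (counter >= um.length) { return; }                  // all done
        $("#sending_count").html("Processing Record " + (counter + 1) + " of " + um.length);
        $.ajax({
            type: "POST",
            url: "save.php",
            data: { uid: um[counter] },
            success: function () {
                // save.php has finished for this item; wait 10 s, then do the next one.
                setTimeout(function () { processNext(counter + 1); }, 10000);
            }
        });
    }

    processNext(0);
    ```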

    Read the article

  • Normalize or Denormalize in high traffic websites

    - by Inam Jameel
    What is the best practice for database design for high-traffic websites like stackoverflow? Should one use a normalized database for record keeping, a denormalized one, or a combination of both? Is it sensible to design a normalized database as the main database for record keeping, to reduce redundancy, and at the same time maintain another denormalized form of the database for fast searching? Or should the main database be denormalized, with normalized views built at the application level for fast database operations? Or is there an approach besides the ones mentioned above? What is the best practice for designing high-traffic websites?

    Read the article

  • Handling the pointer while updating a key value in RPGLE

    - by abhinav singh
    My code goes like this: femp uf e k disk dvar1 s 5p 0 c *loval setll emp c read emp c dow not %eof(emp) C eval ecode = ecode + 10 c eval var1=ecode c update recemp c var1 setgt emp c read emp c enddo c eval *inlr=*on There is a file named emp with record format recemp and ecode as the key. Now, when I read the file and then update ecode without using SETGT, the pointer does not move ahead; it updates the same ecode value many times. When I use SETGT, the pointer picks the next record, but it doesn't work when two ecode values are the same, and it also won't work with descending key values. Is there any solution so that I can set the pointer regardless of whether the values are the same, ascending, or descending? Thanks

    Read the article

  • How to save the view count of a question in memory?

    - by Freewind
    My website is like stackoverflow; there are many questions. I want to record how many times a question has been visited. I have a column called "view_count" in the question table to save it. If a user visits a question many times, the view_count should be increased by only 1. So I have to record which user has visited which question, and I think it is too expensive to save this information in the database because the records will be huge. So, I would like to keep the information in memory and only persist the number to the database every 10 minutes. I have searched for information about the "cache" in Rails, but I haven't found an example. I would like a simple example of how to do this, thanks for your help~

    Read the article

  • Using transactions with ADO.NET Data Adapters.

    - by Ergwun
    Scenario: I want to let multiple (2 to 20, probably) server applications use a single database using ADO.NET. I want individual applications to be able to take ownership of sets of records in the database, hold them in memory (for speed) in DataSets, respond to client requests on the data, perform updates, and prevent other applications from updating those records until ownership has been relinquished. I'm new to ADO.NET, but it seems like this should be possible using transactions with Data Adapters (ADO.NET disconnected layer). Question part 1: Is that the right way to try and do this? Question part 2: If that is the right way, can anyone point me at any tutorials or examples of this kind of approach (in C#)? Question part 3: If I want to be able to take ownership of individual records and release them independently, am I going to need a separate transaction for each record, and by extension a separate DataAdapter and DataSet to hold each record, or is there a better way to do that? Each application will likely hold ownership of thousands of records simultaneously.
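
    For part 2 of the question, a minimal sketch (SQL Server flavour, table and column names invented) of wiring one transaction through a data adapter's commands looks roughly like this; whether a long-lived transaction is the right way to "own" records, versus an explicit ownership column, is a separate design decision.

    ```csharp
    using System.Data;
    using System.Data.SqlClient;

    class RecordOwnershipSketch
    {
        public static DataSet LoadAndUpdate(string connectionString)
        {
            using (var conn = new SqlConnection(connectionString))
            {
                conn.Open();
                using (SqlTransaction tx = conn.BeginTransaction())
                {
                    var adapter = new SqlDataAdapter(
                        "SELECT * FROM Records WHERE OwnerId IS NULL", conn);
                    adapter.SelectCommand.Transaction = tx;

                    // Let the builder derive INSERT/UPDATE/DELETE, then attach the
                    // same transaction to the generated update command as well.
                    var builder = new SqlCommandBuilder(adapter);
                    adapter.UpdateCommand = builder.GetUpdateCommand();
                    adapter.UpdateCommand.Transaction = tx;

                    var ds = new DataSet();
                    adapter.Fill(ds, "Records");

                    // ... work on ds.Tables["Records"] in memory ...

                    adapter.Update(ds, "Records");   // pushes changes under the transaction
                    tx.Commit();
                    return ds;
                }
            }
        }
    }
    ```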

    Read the article

  • Historical / auditable database

    - by Mark
    Hi all, This question is related to the schema that can be found in one of my other questions here. Basically in my database I store users, locations, sensors amongst other things. All of these things are editable in the system by users, and deletable. However - when an item is edited or deleted I need to store the old data; I need to be able to see what the data was before the change. There are also non-editable items in the database, such as "readings". They are more of a log really. Readings are logged against sensors, because its the reading for a particular sensor. If I generate a report of readings, I need to be able to see what the attributes for a location or sensor was at the time of the reading. Basically I should be able to reconstruct the data for any point in time. Now, I've done this before and got it working well by adding the following columns to each editable table: valid_from valid_to edited_by If valid_to = 9999-12-31 23:59:59 then that's the current record. If valid_to equals valid_from, then the record is deleted. However, I was never happy with the triggers I needed to use to enforce foreign key consistency. I can possibly avoid triggers by using the extension to the "PostgreSQL" database. This provides a column type called "period" which allows you to store a period of time between two dates, and then allows you to do CHECK constraints to prevent overlapping periods. That might be an answer. I am wondering though if there is another way. I've seen people mention using special historical tables, but I don't really like the thought of maintainling 2 tables for almost every 1 table (though it still might be a possibility). Maybe I could cut down my initial implementation to not bother checking the consistency of records that aren't "current" - i.e. only bother to check constraints on records where the valid_to is 9999-12-31 23:59:59. Afterall, the people who use historical tables do not seem to have constraint checks on those tables (for the same reason, you'd need triggers). Does anyone have any thoughts about this? PS - the title also mentions auditable database. In the previous system I mentioned, there is always the edited_by field. This allowed all changes to be tracked so we could always see who changed a record. Not sure how much difference that might make. Thanks.
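
    For reference, the range-type route mentioned above can avoid the triggers without the separate extension: PostgreSQL 9.2+ ships tstzrange plus exclusion constraints (btree_gist is needed for the sensor_id = part). A sketch of one versioned table, with invented columns:

    ```sql
    CREATE EXTENSION IF NOT EXISTS btree_gist;

    -- One row per version of a sensor; versions of the same sensor may not overlap in time.
    CREATE TABLE sensor_versions (
        sensor_id  integer   NOT NULL,
        name       text      NOT NULL,
        location   text      NOT NULL,
        edited_by  integer   NOT NULL,
        valid      tstzrange NOT NULL,   -- [valid_from, valid_to); unbounded upper = current row
        EXCLUDE USING gist (sensor_id WITH =, valid WITH &&)
    );

    -- "What did sensor 42 look like when this reading was taken?"
    SELECT *
    FROM sensor_versions
    WHERE sensor_id = 42
      AND valid @> timestamptz '2012-01-05 10:30:00';
    ```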

    Read the article
