Search Results

Search found 6492 results on 260 pages for 'trigger io'.


  • jQuery trigger function on change

    - by Michael Pasqualone
    I have the following two slider functions, which work well and display as shown in my screenshot. I have diskAmount and transferAmount stored in global vars. What I am now trying to figure out is how to get the sum of the two to show initially as the monthly fee, and then update when either of the two sliders is changed. So in my screenshot the initial state would be $24.75, and the fee would update as the sliders change. Can anyone point me in the right direction? I can get the initial fee to show, but can't figure out how to get one function to call another function within jQuery.

        $(function() {
            $("#disk").slider({
                value: 3,
                min: 1,
                max: 80,
                step: 1,
                slide: function(event, ui) {
                    $("#diskamount").val(ui.value + ' Gb');
                    $("#diskamountUnit").val('$' + parseFloat(ui.value * diskCost).toFixed(2));
                }
            });
            // Set initial diskamount state
            $("#diskamount").val($("#disk").slider("value") + ' Gb');
            // Set initial diskamountUnit state
            diskAmount = $("#disk").slider("value") * diskCost;
            $("#diskamountUnit").val('$' + diskAmount.toFixed(2));
        });

        $(function() {
            $("#data").slider({
                value: 25,
                min: 1,
                max: 200,
                step: 1,
                slide: function(event, ui) {
                    $("#dataamount").val(ui.value + ' Gb');
                    $("#dataamountUnit").val('$' + parseFloat(ui.value * transferCost).toFixed(2));
                }
            });
            // Set initial dataamount state
            $("#dataamount").val($("#data").slider("value") + ' Gb');
            // Set initial dataamountUnit state
            transferAmount = $("#data").slider("value") * transferCost;
            $("#dataamountUnit").val('$' + transferAmount.toFixed(2));
        });
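
    A minimal sketch of one way to wire this up, assuming the diskAmount/transferAmount globals above and a hypothetical #monthlyfee field for the combined total: factor the sum into one helper and call it from both slide handlers and once on load.

        // hedged sketch: #monthlyfee is a hypothetical element for the total
        function updateMonthlyFee() {
            $("#monthlyfee").val('$' + (diskAmount + transferAmount).toFixed(2));
        }
        // in the disk slider's slide handler, after updating its fields:
        //     diskAmount = ui.value * diskCost;
        //     updateMonthlyFee();
        // in the data slider's slide handler:
        //     transferAmount = ui.value * transferCost;
        //     updateMonthlyFee();
        // and once, after both sliders have been initialised:
        updateMonthlyFee();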

    Read the article

  • How to close IOs?

    - by blackdog
    When managing I/O streams I ran into a problem. I used to close a stream like this:

        try {
            // my code
        } catch (Exception e) {
            // my code
        } finally {
            if (is != null) {
                is.close();
            }
        }

    But the close() method can itself throw an exception. If I have more than one stream, I have to close all of them, so the code ends up like this:

        try {
            // my code
        } catch (Exception e) {
            // my code
        } finally {
            if (is1 != null) {
                is1.close();
            }
            if (is2 != null) {
                is2.close();
            }
            // many more streams
        }

    If is1.close() throws an exception, is2, is3 and so on never get closed, so I would have to write many nested try-catch-finally blocks to control them. Is there another way to solve the problem?
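
    One way out, sketched here on the assumption that Java 7 or newer is available: try-with-resources closes every resource declared in its header, in reverse order, even when the body or an earlier close() throws (the extra exceptions are attached as suppressed exceptions rather than lost).

        import java.io.*;

        // hedged sketch: both streams are closed no matter which close() throws
        try (InputStream is1 = new FileInputStream("a.bin");
             InputStream is2 = new FileInputStream("b.bin")) {
            // my code
        } catch (IOException e) {
            // my code
        }

    On Java 6 and older, the equivalent is one try/finally per resource, nested, so that each close() runs in its own finally block.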

    Read the article

  • Qt request never triggers the finished() signal

    - by user63898
    I have something really strange. I have this code:

        m_url.setUrl("www.cnn.com");
        QNetworkRequest request;
        request.setRawHeader("User-Agent", USER_AGENT.toUtf8());
        request.setRawHeader("Accept-Charset", "ISO-8859-1,utf-8;q=0.7,*;q=0.7");
        request.setRawHeader("Accept", "text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8");
        request.setRawHeader("Accept-Language", "en-us,en;q=0.5");
        request.setRawHeader("Connection", "Keep-Alive");
        request.setUrl(m_url);
        reply = qnam.get(request); // QNetworkAccessManager qnam is a member of the class
        connect(reply, SIGNAL(finished()), this, SLOT(httpFinished()));

    The httpFinished() function, declared under private slots:, is never triggered. Why?
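
    One likely culprit, offered as a guess: "www.cnn.com" has no scheme, so QUrl treats it as a relative URL and the request cannot complete normally. A sketch of the fix, which also wires up the reply's error signal so failures become visible (httpError is a hypothetical slot):

        // hedged sketch: give the URL a scheme, and listen for errors too
        m_url.setUrl("http://www.cnn.com");
        request.setUrl(m_url);
        reply = qnam.get(request);
        connect(reply, SIGNAL(finished()), this, SLOT(httpFinished()));
        connect(reply, SIGNAL(error(QNetworkReply::NetworkError)),
                this, SLOT(httpError(QNetworkReply::NetworkError)));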

    Read the article

  • jQuery changing images with animation and waiting for it to trigger hyperlink

    - by user1476298
    I want to switch images on .click() using the url attr, and I can do so, but I can't figure out how to make it wait until the animation is done before redirecting to the new webpage. Here's the js:

        $(function() {
            $('.cv').click(function(){
                $("#cv img").fadeOut(2000, function() {
                    $(this).load(function() {
                        $(this).fadeIn();
                    });
                    $(this).attr("src", "images/cv2.png");
                    return true;
                });
            });
        });

    Here's the html:

        <div id="cv" class="img one-third column">
            <a class="cv" target="#" href="cv.joanlascano.com.ar">
                <img src="images/cv1.png" alt="Curriculum"/>
                <br />CV</a>
        </div>

    Thank you in advance!
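
    A minimal sketch of one approach: cancel the default navigation, run the animation, and redirect from the animation's completion callback, reusing the href from the markup above.

        // hedged sketch: navigate only once the fade has finished
        $(function() {
            $('.cv').click(function(e) {
                e.preventDefault();                  // hold the navigation
                var href = $(this).attr('href');
                $("#cv img").fadeOut(2000, function() {
                    window.location = href;          // redirect after the animation
                });
            });
        });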

    Read the article

  • Trigger file download on a page with content

    - by jeffreyveon
    I have seen many websites trigger a file save-as dialog on a page that still displays its HTML content. How do they do this? I know about setting the right headers such as Content-Disposition, but when I do that the content of the page does not load; the file download is triggered immediately...
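
    The usual trick, sketched below: serve the page itself as ordinary HTML, and point a hidden iframe at a second URL whose response carries Content-Disposition: attachment. The browser starts the download without leaving the page. The /download endpoint here is hypothetical.

        <!-- hedged sketch: the page renders normally while the iframe fetches the file -->
        <p>Your download should begin shortly.</p>
        <iframe src="/download?file=report.pdf" style="display:none"></iframe>
        <!-- the /download response sends:
             Content-Disposition: attachment; filename="report.pdf" -->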

    Read the article

  • Trigger alert when database entries are added, not when they are removed

    - by Jeremy
    I have a jQuery script running that makes a periodic AJAX call using the following code:

        var a = moment();
        var dayOfMonth = a.format("MMM Do");
        var timeSubmitted = a.format("h:mm a");
        var count_cases = -1;
        var count_claimed = -1;
        setInterval(function(){
            // check if a new lead was added to the db
            $.ajax({
                type: "POST",
                url: "inc/new_lead_alerts_process.php",
                dataType: 'json',
                cache: false,
                success: function(response){
                    $.getJSON("inc/new_lead_alerts_process.php", function(data) {
                        if (count_cases != -1 && count_cases != data.count) {
                            window.location = "new_lead_alerts.php?id=" + data.id;
                        }
                        count_cases = data.count;
                    });
                }
            });
        }, 5000); // interval value not shown in the original snippet; 5000 ms is a placeholder

    This is the PHP that runs with each call:

        $count = mysql_fetch_array(mysql_query("SELECT count(*) as count FROM leads"));
        $client_id = mysql_fetch_array(mysql_query("SELECT id, client_id FROM leads ORDER BY id DESC LIMIT 1"));
        echo json_encode(array("count" => $count['count'], "id" => $client_id['id'], "client_id" => $client_id['client_id']));

    I need to change the code so that the alert only triggers when a new entry is added to the database, not when an existing entry is removed. As it stands, the alert fires on both events. Any help is greatly appreciated.
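
    A minimal fix, as a sketch: compare the counts numerically and redirect only when the count strictly increases, so deletions (the count going down) no longer fire the alert. This assumes data.count is the row count produced by the PHP above.

        // hedged sketch: fire only when the row count goes up
        if (count_cases != -1 && Number(data.count) > count_cases) {
            window.location = "new_lead_alerts.php?id=" + data.id;
        }
        count_cases = Number(data.count);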

    Read the article

  • Oracle checking existence before deletion in a trigger

    I have analyzed a Hibernate-generated Oracle database and discovered that a delete of a row from a single table will spawn the firing of 1200+ triggers in order to delete the related rows in child tables. The triggers are all auto-generated the same way - an automatic delete of a child row without checking for existence first. As it is impossible to predict which child tables will actually have related rows, I think a viable solution to prevent the cascaded delete from firing down a deeply branched, completely empty limb would be to check for the existence of a related row before attempting to delete. In other DBMSs, I could simply state "if exists ..." before deleting. Is there a comparable way to do this in Oracle?
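
    PL/SQL has no "IF EXISTS" clause on DELETE, but the usual idiom is an explicit existence probe before the delete. A hedged sketch of a trigger body, with hypothetical table and column names:

        -- hedged sketch: child_table and parent_id are hypothetical names
        DECLARE
            l_found INTEGER;
        BEGIN
            SELECT COUNT(*) INTO l_found
              FROM child_table
             WHERE parent_id = :OLD.id
               AND ROWNUM = 1;       -- stop scanning at the first match
            IF l_found > 0 THEN
                DELETE FROM child_table WHERE parent_id = :OLD.id;
            END IF;
        END;

    Worth noting: a DELETE that matches no rows is already a cheap no-op in Oracle, so the probe only pays off if it stops the empty delete from firing the next layer of cascading triggers, which is exactly the situation described above.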

    Read the article

  • SDK 3.2 - Trigger video from UIScrollView Subview (Audio but no video)

    - by stalure
    I am having a hard time getting video to play in an application that I am working on, and I think the answer has to do with the views and view controllers. I have the flow depicted below. When the button is clicked, the audio from the video plays, but nothing is displayed on screen. Anyone have any ideas?

        -(IBAction) playMovie {
            // Play Movie Code
        }

    View hierarchy:

        ViewController
            UIScrollView
                UIView
                    UIButton - (playMovie)
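
    A guess at the cause, with a sketch: from SDK 3.2 on, MPMoviePlayerController no longer presents itself full-screen on its own; its view has to be added to a visible view hierarchy, otherwise only the audio track plays. movieURL below is a hypothetical NSURL for the movie file.

        // hedged sketch, assuming SDK 3.2's view-based MPMoviePlayerController
        - (IBAction)playMovie {
            MPMoviePlayerController *player =
                [[MPMoviePlayerController alloc] initWithContentURL:movieURL];
            player.view.frame = self.view.bounds;
            [self.view addSubview:player.view];  // without this: audio, but no video
            [player play];
        }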

    Read the article

  • I am using the following PHP code for trigger creation but always get an error, please help me to resolve it

    - by Parth
    I am using the following PHP code for trigger creation but always get an error; please help me to resolve it.

        $link = mysql_connect('localhost', 'root', 'rainserver');
        mysql_select_db('information_schema');
        echo $trgquery = "DELIMITER $$
        DROP TRIGGER `update_data` $$
        CREATE TRIGGER `update_data` AFTER UPDATE on `jos_menu`
        FOR EACH ROW
        BEGIN
            IF (NEW.menutype != OLD.menutype) THEN
                INSERT INTO jos_menuaudit SET menuid = OLD.id, oldvalue = OLD.menutype, newvalue = NEW.menutype, field = 'menutype';
            END IF;
            IF (NEW.name != OLD.name) THEN
                INSERT INTO jos_menuaudit SET menuid = OLD.id, oldvalue = OLD.name, newvalue = NEW.name, field = 'name';
            END IF;
            IF (NEW.alias != OLD.alias) THEN
                INSERT INTO jos_menuaudit SET menuid = OLD.id, oldvalue = OLD.alias, newvalue = NEW.alias, field = 'alias';
            END IF;
        END$$
        DELIMITER ;";
        echo "<br>";
        //$trig = mysqli_query($link, $trgquery) or die("Error Exist".mysqli_error($link));
        $trig = mysql_query($trgquery) or die("Error Exist".mysql_error());

    I get the error:

        Error ExistYou have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near '$$ DROP TRIGGER `update_data` $$ CREATE TRIGGER `update_data` AFTER UPDATE on `j' at line 1

    Please help me to create my trigger...
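
    The root cause, for what it is worth: DELIMITER is a command of the interactive mysql client, not server-side SQL, so the server rejects it - and mysql_query() sends exactly one statement per call anyway, so no delimiter is needed. A sketch of the same script issuing the DROP and the CREATE as two separate calls:

        // hedged sketch: one statement per mysql_query() call, no DELIMITER needed
        mysql_query("DROP TRIGGER IF EXISTS `update_data`") or die(mysql_error());

        $create = "CREATE TRIGGER `update_data` AFTER UPDATE ON `jos_menu`
                   FOR EACH ROW
                   BEGIN
                       IF (NEW.menutype != OLD.menutype) THEN
                           INSERT INTO jos_menuaudit
                              SET menuid = OLD.id, oldvalue = OLD.menutype,
                                  newvalue = NEW.menutype, field = 'menutype';
                       END IF;
                       -- ... the name and alias blocks go here unchanged ...
                   END";
        mysql_query($create) or die(mysql_error());

    The trigger should also be created while connected to the schema that owns jos_menu, not to information_schema as in the script above.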

    Read the article

  • Error in creating trigger using PHP script.. Please help!!!! Geeks

    - by Parth
    I've been successful in creating a simple trigger with a PHP script. The problem comes when I have to use "DELIMITER" in my trigger creation, because I have nested if/then statements. For example, the following DOES work:

        $sql = 'CREATE TRIGGER `database`.`detail` BEFORE INSERT on `database`.`vibez_detail`
                FOR EACH ROW set @a = new.realtime_value';
        $junk = mysqli_query($link, $sql);

    However, if I need to use "DELIMITER" I get errors galore. Here's what works if I go to the MySQL client at the server:

        DELIMITER $$;
        DROP TRIGGER `database`.`detail`$$
        CREATE TRIGGER `database`.`detail` BEFORE INSERT on `database`.`vibez_detail`
        FOR EACH ROW
        BEGIN
            set new.data_rate = 69;
            insert into alarm_trips_parent values (null, new.data_rate, 6969, now());
            set @tripped_time = now();
            set @tripped := true;
        end if;
        end if;
        END$$
        DELIMITER ;$$

    How can I construct the PHP code to get this to work? It seems mysqli_query($link, $sql) chokes as soon as I put "DELIMITER" in the $sql variable. I get the following error when I attempt to do so:

        Error: You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'DELIMITER //; CREATE TRIGGER `database`.`detail` BEFORE INS' at line 1

    Even the concept of using mysqli_multi_query() didn't work for me... :(
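
    Same underlying cause as in the previous question: DELIMITER never reaches the server from an API. Since mysqli_query() carries one complete statement per call, a BEGIN ... END body with nested IFs goes through without any delimiter lines. A hedged sketch - the conditions are hypothetical, since the paste above omitted its opening IFs:

        // hedged sketch: nested IF/END IF in a single CREATE TRIGGER, one call
        $sql = "CREATE TRIGGER `database`.`detail`
                BEFORE INSERT ON `database`.`vibez_detail`
                FOR EACH ROW
                BEGIN
                    IF (NEW.realtime_value > 100) THEN      -- hypothetical condition
                        IF (@tripped IS NULL) THEN          -- hypothetical condition
                            SET NEW.data_rate = 69;
                            INSERT INTO alarm_trips_parent
                                VALUES (NULL, NEW.data_rate, 6969, NOW());
                            SET @tripped_time = NOW(), @tripped = TRUE;
                        END IF;
                    END IF;
                END";
        mysqli_query($link, $sql) or die('Error: ' . mysqli_error($link));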

    Read the article

  • Is there a java library / package analogous to <stdio.h>?

    - by Roboprog
    I have been doing Java on and off for about 14 years, and almost nothing else the last 6 years or so. I really hate the java.io package -- its legion of subclasses and adapters. I do like exceptions, rather than having to always poll "errno" and the like, but I could surely live without declared exceptions. Is there anything that functions like the Unix/ANSI stdio.h routines in C?

    I know we will never be rid of java.io and its conventions until Java itself is retired, as they have metastasized throughout the many frameworks that have accreted to Java. That said, I would like something that works kind of like this (let's call it package javax.stdio):

    Have a main utility class, perhaps FileStar, that can read and write files (or pipes), either text or binary, either sequentially or random access, with constructors that mimic fopen() and popen(). This class should have a load of useful methods that do things like fread(), fwrite(), fgets(), fputs(), fseek(), and whatever else (fprintf()?). Methods that are incompatible with the open/construct mode simply throw up (just like some of the collections classes/methods do when restricted).

    Then, have a bunch of interfaces that suggest how you intend to use the stream once you have created it: Sequential, RandomAccess, ReadOnly, WriteOnly, Text, Binary, plus combinations of these that make sense. Perhaps even have methods to return the appropriate type-cast (interface), throwing up if you have asked for something incompatible. For extra flavor, skip the declared exceptions -- e.g. javax.stdio.IOException extends RuntimeException.

    Is there an open source project like this floating around?

    Read the article

  • Return value from Object match

    - by Hito_kun
    I'm by no means JS fluent, so forgive me if I'm asking for some really basic stuff, but I've not been able to find a proper answer to my question. I'm writing my first Node.js (plus Express framework and Socket.io) app, and I'm having some fun setting up the server side of an FB-like messenger (surprise!!!). So, let's say I have this data structure to store online users (this is a JSON array, but I'm not sure it is the best way to do it, or whether I should go with plain JavaScript objects):

        [
            {
                "site": 45,
                "users": [
                    { "idUser": 5, "idSocket": "qwe87r7w8qwe", "name": "Carlos Ray Norris" },
                    { "idUser": 6, "idSocket": "v8d9d0fgfs7d", "name": "John Connor" }
                ]
            },
            {
                "site": 48,
                "users": [
                    { "idUser": 22, "idSocket": "qwe87r7w8qwe", "name": "David Bowie" },
                    { "idUser": 23, "idSocket": "v8d9d0fgfs7d", "name": "Barack H. Obama" }
                ]
            }
        ]

    What I want to do is to search the array for value x given value y - in this case, retrieving the idSocket knowing the idUser, WITHOUT having to run through all the array values. So I have basically two questions: first, what would be the proper way to store online users? And secondly, how do I find values matching the values I already know (find the idSocket that has a given idUser)? I would like a pure JS approach (or using some of the tools given by Node, Socket.io or Express), but if that's not possible then I can look at jQuery.
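
    A sketch of both options in plain JS: a linear scan over the array structure above, and, to honor the "without running through the array" requirement, an index object keyed by idUser that gives direct lookups. The variable name sites stands for the array above.

        // hedged sketch: linear scan over the array-of-sites structure
        function findSocket(sites, idUser) {
            for (var i = 0; i < sites.length; i++) {
                var users = sites[i].users;
                for (var j = 0; j < users.length; j++) {
                    if (users[j].idUser === idUser) {
                        return users[j].idSocket;
                    }
                }
            }
            return null; // user is not online
        }

        // alternative: build an index once, then look up without scanning
        var byId = {};
        sites.forEach(function (site) {
            site.users.forEach(function (u) { byId[u.idUser] = u; });
        });
        // byId[22].idSocket  ->  "qwe87r7w8qwe"

    The index approach also suggests an answer to the first question: an object keyed by idUser (kept in sync as users connect and disconnect) is usually a better store than a nested array when lookups dominate.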

    Read the article

  • Disk performance below expectations

    - by paulH
    This is a follow-up to a previous question that I asked (Two servers with inconsistent disk speed). I have a PowerEdge R510 server with a PERC H700 integrated RAID controller (call this Server B) that was built using eight disks with 3Gb/s bandwidth, which I was comparing with an almost identical server (call this Server A) that was built using four disks with 6Gb/s bandwidth. Server A had much better I/O rates than Server B.

    Once I discovered the difference in the disks, I had Server A rebuilt with faster 6Gb/s disks. Unfortunately this resulted in no increase in the performance of the disks. Expecting that there must be some other configuration difference between the servers, we took the 6Gb/s disks out of Server A and put them in Server B. This also resulted in no increase in performance.

    We now have two identical servers built, with the exception that one has six 6Gb/s disks and the other eight 3Gb/s disks, and the I/O rates of the disks are pretty much identical. This suggests that there is some bottleneck other than the disks, but I cannot understand how Server B originally had better I/O that has subsequently been 'lost'.

    Comparative I/O information below, as measured by SQLIO. The same parameters were used for each test. It's not the actual numbers that are significant but rather the variations between systems. In each case D: is a 2-disk RAID 1 volume, and E: is a 4-disk RAID 10 volume (apart from the original Server A, where E: was a 2-disk RAID 0 volume).

        Server A (original setup with 6Gb/s disks)
            D: Read   63 MB/s    D: Write  170 MB/s
            E: Read   68 MB/s    E: Write  320 MB/s

        Server B (original setup with 3Gb/s disks)
            D: Read   52 MB/s    D: Write   88 MB/s
            E: Read  112 MB/s    E: Write  130 MB/s

        Server A (new setup with 3Gb/s disks)
            D: Read   55 MB/s    D: Write   85 MB/s
            E: Read   67 MB/s    E: Write  180 MB/s

        Server B (new setup with 6Gb/s disks)
            D: Read   61 MB/s    D: Write   95 MB/s
            E: Read   69 MB/s    E: Write  180 MB/s

    Can anybody suggest any ideas what is going on here? The drives in use are as follows:

        Dell Seagate F617N ST3300657SS 300GB 15K RPM SAS
        Dell Hitachi HUS156030VLS600 300GB 3.5 inch 15000rpm 6Gb SAS
        Hitachi HUS153030VLS300 300GB Server SAS
        Dell ST3146855SS Seagate 3.5 inch 146GB 15K SAS

    Read the article

  • Delay before download starts when serving files using nginx

    - by glumbo
    I am currently using nginx to serve downloads from my website. Users sometimes need to wait about 5 seconds before their download starts after clicking a download link. I'm not sure if I need to start using RAID 10 (I'm currently using RAID 50) or if this is a problem with my nginx configuration. I am also on a 1Gbit line, but downloads sometimes go as low as 10kB/s. My server: dual Xeon 5620 CPUs, 12x2TB drives, 8GB RAM. This is my nginx.conf:

        #user nobody;
        worker_processes 12;
        worker_rlimit_nofile 10240;
        worker_rlimit_sigpending 32768;
        error_log logs/error.log crit;
        #pid logs/nginx.pid;

        events {
            worker_connections 2048;
        }

        http {
            include mime.types;
            default_type application/octet-stream;
            access_log off;
            limit_conn_log_level info;
            log_format xfs '$arg_id|$arg_usr|$remote_addr|$body_bytes_sent|$status';
            #sendfile on;
            #tcp_nopush on;
            reset_timedout_connection on;
            server_tokens off;
            autoindex off;
            keepalive_timeout 0;
            #keepalive_timeout 65;
            limit_zone one $binary_remote_addr 10m;
            perl_modules perl;
            perl_require download.pm;
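
    One thing that stands out in the config above: sendfile is commented out. A hedged sketch of directives worth trying for large static downloads - all are standard nginx directives, but the values are starting points to tune, not a known fix for this particular delay.

        # hedged sketch: candidate tuning for large static downloads
        sendfile        on;       # serve files from kernel space
        tcp_nopush      on;       # fill packets before sending (pairs with sendfile)
        output_buffers  1 512k;   # bigger userspace buffers when sendfile is off
        directio        4m;       # bypass the page cache for files over 4 MB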

    Read the article

  • Measuring 'total bytes written' under Linux

    - by badnews
    We're quite interested in exploring the possibility of using SSD drives in a server environment. However, one thing that we need to establish is expected drive longevity. According to this article, manufacturers report drive endurance in terms of 'total bytes written' (TBW); e.g. from that article, a Crucial C400 SSD is rated at 72TB TBW. Do any scripts/tools exist in the Linux ecosystem to help us measure TBW (and then make a more educated decision on the feasibility of using SSD drives)?
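
    A hedged sketch of two places to look on a stock Linux box (the device name sda is an assumption): the kernel's per-device write counters since boot, and, on many SSDs, a lifetime SMART attribute.

        # sectors written since boot: field 10 of /proc/diskstats, 512-byte units
        awk '$3 == "sda" { printf "%.1f GiB written since boot\n", $10 * 512 / 2^30 }' /proc/diskstats

        # many SSDs expose a lifetime write counter via SMART
        # (often attribute 241, Total_LBAs_Written; needs the smartmontools package)
        smartctl -A /dev/sda

    The /proc/diskstats counter resets at reboot, so measuring TBW over time means sampling it periodically and accumulating the deltas somewhere persistent.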

    Read the article

  • Bad I/O scheduler?

    - by user62367
    OS: up-to-date Fedora 14, working as a "normal desktop". It's doing very well, but if I start VirtualBox and, e.g., install a guest on it, it just "freezes". I mean, if there is disk activity in a VirtualBox guest, the computer becomes unresponsive - even the mouse lags - for about 50 minutes. What could be the bottleneck? What could be the problem? If anyone has any tips/howtos to speed it up, please help! It has a normal 2.5" HDD at 5400 RPM. Would it be worth buying a 2.5" HDD with 7200 RPM? T7200 CPU, 4 GByte RAM, "vm.swappiness = 0". Thank you!
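
    Since the title suspects the I/O scheduler, a hedged sketch for inspecting and switching it per disk (sda is an assumption, and the VirtualBox process name may differ on your system):

        # show the available schedulers; the active one is in brackets
        cat /sys/block/sda/queue/scheduler

        # try the deadline scheduler for this boot (as root); CFQ is the usual
        # default, and deadline often behaves better under one heavy writer
        echo deadline > /sys/block/sda/queue/scheduler

        # or demote the guest's I/O priority instead (works with CFQ)
        ionice -c3 -p $(pidof VirtualBox)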

    Read the article

  • Different block sizes for partition and underlying logical disk on HP Raid Controller (Linux)

    - by Wawrzek
    Following links collected in this thread, I started to check blockdev and found the following output, indicating different block sizes for partition c0d9p1 and the underlying device (c0d9):

        [root@machine ~]# blockdev --report /dev/cciss/c0d9
        RO    RA   SSZ   BSZ   StartSec     Size         Device
        rw    256  512   4096  0            3906963632   /dev/cciss/c0d9
        [root@machine ~]# blockdev --report /dev/cciss/c0d9p1
        RO    RA   SSZ   BSZ   StartSec     Size         Device
        rw    256  512   2048  1            3906959039   /dev/cciss/c0d9p1

    We have a lot of small files, so yes, the block size is smaller than normal. The device is a logical drive on an HP P410 RAID controller - a simple disk without any RAID, or RAID 0 on one disk to be precise. (Please note that the above configuration is a feature, not a bug.) I therefore have the following questions: Can the above discrepancy in block size affect disk performance? Can I control the block size using hpacucli?

    Read the article

  • How to identify heavy write to disk?

    - by Darth
    I have a problem with a server running a CakePHP application. The server is insanely slow. I first thought it was an application problem, but then I found a constant 5-6MB/s write to disk. What is the easiest way to find the cause of such heavy writing? The server is running Gentoo.
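
    A hedged sketch of the usual first steps, assuming the iotop and sysstat packages can be installed:

        # per-process I/O, showing only processes actually doing I/O, with accumulated totals
        iotop -o -P -a

        # per-process disk read/write rates every 5 seconds (from sysstat)
        pidstat -d 5

    Whichever process tops those lists is the writer; from there, lsof -p <pid> shows which files it has open.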

    Read the article

  • I/O intensive MySql server on Amazon AWS

    - by rhossi
    We recently moved from a traditional data center to cloud computing on AWS. We are developing a product in partnership with another company, and we need to create a database server for the product we'll release. I have been using Amazon Web Services for the past 3 years, but this is the first time I have received a spec with this very specific hardware configuration. I know there are trade-offs, and that real hardware will always be faster than virtual machines; knowing that beforehand, what would you recommend?

        1) Amazon EC2?
        2) Amazon RDS?
        3) Something else?
        4) Forget it baby, stick to the real hardware

    Here are the hardware requirements. This server will be focused on I/O and MySQL for the statistics, and on memory size and disk space for the image hosting.

    Server 1
        I/O: The main load on this server will be I/O processing. FusionIO cards have proven themselves extremely efficient; this is currently the best you can have in this domain.
            o Fusion ioDrive2 MLC 365GB (http://www.fusionio.com/load/-media-/1m66wu/docsLibrary/FIO_ioDrive2_Datasheet.pdf)
        CPU: MySQL will use fewer CPU cores than Apache but will use them very hard; the E7 family's 30M L3 cache provides a performance boost.
            o 1x Intel E7-2870 will be OK.
        Storage: SAS will be good enough in terms of performance, especially considering the space required.
            o RAID 10 of 4 x SAS 10k or 15k, for a total available space of 512 GB.
        Memory:
            o 64 GB minimum is required on this server considering the size of the statistics database. Warning: the statistics database will grow quickly; if possible consider starting with 128 GB directly, it will help.

    Server 2
        The spec repeats the same configuration for the second server: the same FusionIO card, CPU, storage layout, and memory requirement.

    Thanks in advance. Best,

    Read the article

  • Will adding a SSD cache device to my ZFS storage improve performance?

    - by Sysadminicus
    The server has 4GB of RAM, and my zpool is made up of 15.5k SAS drives arranged like this:

        NAME          STATE     READ WRITE CKSUM
        tank          ONLINE       0     0     0
          raidz1-0    ONLINE       0     0     0
            c0t2d0    ONLINE       0     0     0
            c0t3d0    ONLINE       0     0     0
            c0t4d0    ONLINE       0     0     0
            c0t5d0    ONLINE       0     0     0
            c0t6d0    ONLINE       0     0     0
            c0t7d0    ONLINE       0     0     0
            c0t8d0    ONLINE       0     0     0
          raidz1-1    ONLINE       0     0     0
            c0t10d0   ONLINE       0     0     0
            c0t11d0   ONLINE       0     0     0
            c0t12d0   ONLINE       0     0     0
            c0t13d0   ONLINE       0     0     0
            c0t14d0   ONLINE       0     0     0
        spares
          c0t9d0      AVAIL
          c0t1d0      AVAIL

    The primary use is as an NFS store for a couple of VMware ESXi servers. I can't do any "true" benchmarks because this is a production system (no budget for test systems), but using dd and bonnie++ I can't get more than ~40-50MB/s writes and ~70-90MB/s reads. It seems I should be able to do much better, but I'm not sure where to optimize. Based on what I've read, I think dropping in an OCZ Vertex 2 Pro SSD as my L2ARC is going to be the best bang for the buck to improve throughput. Is there something else I should be looking into to help performance? If not, how do I know how big a cache device I need? Am I safe with only a single SSD as my cache device?
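
    For reference, attaching an L2ARC device is a one-line operation; a hedged sketch with a hypothetical device name:

        # add the SSD as a cache (L2ARC) device; c0t15d0 is hypothetical
        zpool add tank cache c0t15d0

        # watch it warm up and start serving reads
        zpool iostat -v tank 5

    Two caveats worth checking first: a single cache device is safe, since L2ARC holds only copies of pool data and losing it loses nothing; but L2ARC bookkeeping itself consumes ARC memory, so with only 4GB of RAM a large cache device can end up hurting more than it helps.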

    Read the article

  • Need data on disk drive management by OS: getting base I/O unit size, "sync" option, Direct Memory Access

    - by Richard T
    Hello all, I want to ensure I have done all I can to configure a system's disks for serious database use. The three areas I know of to be concerned about (are there others?) are:

        1. I/O size: the database engine's and the disk's native I/O sizes should either match, or the database's native I/O size should be a multiple of the disk's.
        2. Disks that are capable of Direct Memory Access (e.g. IDE) should be configured for it.
        3. When a disk says it has written data persistently, it must be so! No keeping it in cache and lying about it.

    I have been looking for information on how to ensure these are so for CentOS and Ubuntu, but can't seem to find anything at all! I want to be able to check these things and change them if needed. Any and all input appreciated.
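
    A hedged sketch of commands that probe points 1 and 3 on a typical Linux box (sda is an assumption; hdparm speaks to ATA/IDE drives, so SCSI/SAS disks need sdparm instead):

        # point 1: sector size and kernel block size for the device
        blockdev --getss --getbsz /dev/sda

        # point 3: is the drive's volatile write cache enabled?
        hdparm -W /dev/sda
        # disable it, so that a completed write really is on the platter
        hdparm -W0 /dev/sda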

    Read the article

  • RAID10 without write-back cache = horrible write performance?

    - by Harry Mexican
    I have just provisioned a dedicated server on SingleHop, and I'm running it through some tests to know what to expect performance-wise. On the I/O side (with four 1TB disks in RAID 10) I get:

        write-cache disabled:   200 MB/s read throughput,   30 MB/s write throughput

    I thought that was really low compared to my desktop HD, which gets 150/150 or so. So I had a chat with them and they suggested enabling the write cache. New results:

        write-cache enabled:    280 MB/s read,              260 MB/s write

    which is great and all, but means I'd have to add a BBU for an additional monthly cost. Is it normal for the write throughput to be 1/4 that of a regular drive on RAID 10 if you don't have a write cache? It almost feels like it's intentionally bad, to force you to pony up for the BBU. I'd be happy with normal non-RAID performance of 150/150.

    Read the article
