Search Results

Search found 1349 results on 54 pages for 'sec'.


  • Best practices, PHP, tracking millions of impressions per day.

    - by John
    What do I have to do to make 20k MySQL inserts per second possible (20k during peak hours, around 1k/sec during slower times)? I've been doing some research and I've seen the "INSERT DELAYED" suggestion, as well as writing to a flat file with fopen(file,'a') and then running a cron job to dump the "needed" data into MySQL, etc. I've also heard you need multiple servers and "load balancers", which I've never heard of, to make something like this work. I've also been looking at these "cloud server" thing-a-ma-jigs and their automatic scalability, but I'm not sure what's actually scalable. The application is just a tracker script, so if I have 100 websites that get 3 million page loads a day, there will be around 300 million inserts a day. The data will be run through a script every 15-30 minutes which will normalize it and insert it into another MySQL table. How do the big dogs do it? How do the little dogs do it? I can't afford a huge server anymore, so if there are multiple intuitive ways of going at it, you smart people, please let me know :)
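
    A minimal sketch of the flat-file spool approach mentioned above, in Python rather than PHP for illustration; the spool path, table name, and credentials are assumptions. The hot path only appends one line per hit, and a cron job drains the spool into MySQL in a single batch:

        import os
        import pymysql  # assumption: any MySQL client with executemany() works the same way

        SPOOL = "/var/spool/tracker/hits.log"  # hypothetical spool file

        def record_hit(site_id, url, ts):
            # hot path: one appended line per impression, no DB round trip
            with open(SPOOL, "a") as f:
                f.write("%s\t%s\t%s\n" % (site_id, url, ts))

        def flush_spool():
            # cron job, every 15-30 minutes: swap the file aside, then bulk-insert
            work = SPOOL + ".work"
            os.rename(SPOOL, work)  # new hits keep appending to a fresh spool file
            rows = [line.rstrip("\n").split("\t") for line in open(work)]
            db = pymysql.connect(host="localhost", user="tracker",
                                 password="secret", database="stats")  # assumed credentials
            with db.cursor() as cur:
                cur.executemany(
                    "INSERT INTO raw_hits (site_id, url, ts) VALUES (%s, %s, %s)",
                    rows)
            db.commit()
            db.close()
            os.remove(work)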


  • Is it possible to include a Sexpr before the expression has been evaluated in Sweave / R ?

    - by PaulHurleyuk
    Hello, I'm writing a Sweave document, and I want to include a small section that details the R and package versions, the platform, and how long it took to evaluate the document. However, I want to put this in the middle of the document! I was using a \Sexpr{elapsed} to do this (which didn't work), but thought that if I put the code printing elapsed in a chunk that evaluates at the end, I could then include the chunk halfway through; that also fails. My document looks something like this:

        \documentclass[a4paper]{article}
        \usepackage[OT1]{fontenc}
        \usepackage{longtable}
        \usepackage{geometry}
        \usepackage{Sweave}
        \geometry{left=1.25in, right=1.25in, top=1in, bottom=1in}
        \begin{document}

        <<label=start, echo=FALSE, include=FALSE>>=
        startt<-proc.time()[3]
        @

        Text and Sweave code in here

        This document was created on \today, with
        \Sexpr{print(version$version.string)} running on a
        \Sexpr{print(version$platform)} platform.
        It took approx  sec to process.

        <<>>=
        <<elapsed>>
        @

        More text and Sweave code in here

        <<label=bye, include=FALSE, echo=FALSE>>=
        odbcCloseAll()
        endt<-proc.time()[3]
        elapsedtime<-as.numeric(endt-startt)
        @

        <<label=elapsed, include=FALSE, echo=FALSE>>=
        print(elapsedtime)
        @

        \end{document}

    But this doesn't seem to work (amazingly!). Does anyone know how I could do this? Thanks, Paul.


  • generation of random numbers in java

    - by S.PRATHIBA
    Hi all, I want to create 30 tables which consist of the following fields. For example:

        Service_ID  Service_Type  consumer_feedback
                75  Computing                     1
                35  Printer                       0
                33  Printer                      -1
        3 rows in set (0.00 sec)

        mysql> select * from consumer2;
        Service_ID  Service_Type  consumer_feedback
                42  data                          0
                75  computing                     0

        mysql> select * from consumer3;
        Service_ID  Service_Type  consumer_feedback
                43  data                         -1
                41  data                          1
                72  computing                    -1

    As you can infer from the above tables, I am getting the feedback values. I have generated these consumer_feedback values, Service_ID and Service_Type using random numbers. I have used this function:

        int min1 = 31; // printer
        int max1 = 35; // these values are generated if the Service_Type is printer
        int provider1 = (int) (Math.random() * (max1 - min1 + 1)) + min1;

        int min2 = 41; // data
        int max2 = 45;
        int provider2 = (int) (Math.random() * (max2 - min2 + 1)) + min2;

        int min3 = 71; // computing
        int max3 = 75;
        int provider3 = (int) (Math.random() * (max3 - min3 + 1)) + min3;

        int min5 = -1; // feedback values
        int max5 = 1;
        int feedback = (int) (Math.random() * (max5 - min5 + 1)) + min5;

    I need the Service_Types to be distributed uniformly across all 30 tables. Similarly, I need the feedback value 1 to be generated more often than 0 and -1. Please help me.
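
    For the skew toward 1, a weighted draw does it. A sketch in Python (the 60/20/20 weights are an assumption to tune), with the equivalent one-liner idea for Java in a comment:

        import random

        FEEDBACK = [1, 0, -1]
        WEIGHTS  = [0.6, 0.2, 0.2]   # assumption: make 1 three times as likely as 0 or -1

        feedback = random.choices(FEEDBACK, weights=WEIGHTS, k=1)[0]

        # Same idea in Java: draw a uniform double and compare against
        # cumulative weights, e.g.
        #   double r = Math.random();
        #   int feedback = r < 0.6 ? 1 : (r < 0.8 ? 0 : -1);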


  • Changing Apache2.2.11 httpd.conf has no effect

    - by Adrian
    Hi, hopefully someone can help here. I recently installed WampServer 2.0 with Apache 2.2.11. My issue is that I have some large PHP scripts which time out at the default 5 min (300 sec) limit (I'm using IE8). It is critical I get this limit extended. I have tried changing the httpd.conf file to include the following:

        TimeOut 1200

    My objective was to set the timeout to 1200 seconds, or 20 min. I chose a random location to place this directive within the httpd.conf file, as I cannot locate any documentation to suggest it belongs in a specific place within the file. Regardless, the changes I make appear in the httpd.conf file that can be opened from the WampServer system tray, but they have no effect: the browser still times out after 5 minutes. I thought perhaps I had the capitals incorrect, so I changed it to:

        Timeout 1200

    This change had no effect either. Can someone please help? This is very frustrating. Maybe the directive can only be used within a specific module? If so, I have no idea which one, nor do I know the syntax to specify this. Regards, Adrian.
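
    A guess rather than a verified fix: Apache directives are case-insensitive, so TimeOut versus Timeout should not matter, and a hard 300-second cutoff looks more like PHP's max_execution_time than Apache's Timeout or an IE8 limit. A sketch of both settings, with paths and values assumed for a stock WampServer layout:

        # httpd.conf -- directive names are case-insensitive; 1200 s = 20 min
        Timeout 1200

        ; php.ini -- edit the copy actually loaded by the Apache PHP module
        max_execution_time = 1200

    Alternatively, calling set_time_limit(1200); at the top of the long-running PHP script raises the limit for that script only.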


  • MySQL query does not return any data

    - by Alex L
    Hi, I need to retrieve data from a specific time period. The query works fine until I specify the time period. Is there something wrong with the way I specify the time period? I know there are many entries within that time frame. This query returns empty:

        SELECT stop_times.stop_id,
               STR_TO_DATE(stop_times.arrival_time, '%H:%i:%s') AS stopTime,
               routes.route_short_name, routes.route_long_name, trips.trip_headsign
        FROM trips
        JOIN stop_times ON trips.trip_id = stop_times.trip_id
        JOIN routes ON routes.route_id = trips.route_id
        WHERE stop_times.stop_id = 5508
        HAVING stopTime BETWEEN DATE_SUB(stopTime, INTERVAL 1 MINUTE)
                            AND DATE_ADD(stopTime, INTERVAL 20 MINUTE);

    Here is its EXPLAIN:

        +----+-------------+------------+--------+------------------+---------+---------+-------------------------------+------+-------------+
        | id | select_type | table      | type   | possible_keys    | key     | key_len | ref                           | rows | Extra       |
        +----+-------------+------------+--------+------------------+---------+---------+-------------------------------+------+-------------+
        |  1 | SIMPLE      | stop_times | ref    | trip_id,stop_id  | stop_id | 5       | const                         |  605 | Using where |
        |  1 | SIMPLE      | trips      | eq_ref | PRIMARY,route_id | PRIMARY | 4       | wmata_gtfs.stop_times.trip_id |    1 |             |
        |  1 | SIMPLE      | routes     | eq_ref | PRIMARY          | PRIMARY | 4       | wmata_gtfs.trips.route_id     |    1 |             |
        +----+-------------+------------+--------+------------------+---------+---------+-------------------------------+------+-------------+
        3 rows in set (0.00 sec)

    The query works if I remove the HAVING clause (don't specify a time range). It returns:

        +---------+----------+------------------+-----------------+---------------+
        | stop_id | stopTime | route_short_name | route_long_name | trip_headsign |
        +---------+----------+------------------+-----------------+---------------+
        | 5508    | 06:31:00 | "80"             | ""              | "FORT TOTTEN" |
        | 5508    | 06:57:00 | "80"             | ""              | "FORT TOTTEN" |
        | 5508    | 07:23:00 | "80"             | ""              | "FORT TOTTEN" |
        | 5508    | 07:49:00 | "80"             | ""              | "FORT TOTTEN" |
        | 5508    | 08:15:00 | "80"             | ""              | "FORT TOTTEN" |
        | 5508    | 08:41:00 | "80"             | ""              | "FORT TOTTEN" |
        | 5508    | 09:08:00 | "80"             | ""              | "FORT TOTTEN" |

    I am using Google Transit format data loaded into MySQL. The query is supposed to provide stop times and bus routes for a given bus stop. For a bus stop, I am trying to get: route name, bus name, bus direction (headsign), and stop time. The results should be limited to bus times from 1 min ago to 20 min from now. Please let me know if you can help.
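
    A possible explanation, offered as a guess: DATE_SUB() and DATE_ADD() expect a date or datetime, so applied to the TIME value stopTime they return NULL, and stopTime BETWEEN NULL AND NULL rejects every row. Anchoring the window to the clock with the TIME functions SUBTIME()/ADDTIME() keeps everything in the same type; a sketch of the corrected query as a Python string, runnable through any MySQL client:

        # "from 1 min ago to 20 min from now", anchored to CURTIME() instead of
        # to stopTime itself (which would be trivially true for non-NULL rows)
        query = """
            SELECT stop_times.stop_id,
                   STR_TO_DATE(stop_times.arrival_time, '%H:%i:%s') AS stopTime,
                   routes.route_short_name, routes.route_long_name, trips.trip_headsign
            FROM trips
            JOIN stop_times ON trips.trip_id = stop_times.trip_id
            JOIN routes ON routes.route_id = trips.route_id
            WHERE stop_times.stop_id = 5508
            HAVING stopTime BETWEEN SUBTIME(CURTIME(), '00:01:00')
                                AND ADDTIME(CURTIME(), '00:20:00')
        """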


  • Efficiently Serving Dynamic Content in Google App Engine

    - by awegawef
    My app on Google App Engine returns content items (just text) and comments on them. It works like this (pseudo-ish code):

        query: get keys of latest content      # query to datastore
        for each item in content:
            if item_dict in memcache:
                use item_dict
            else:
                build_item_dict(item)          # by fetching from datastore
                store item_dict in memcache
        send all item_dicts to template

    Sorry if the code isn't understandable. I get all of the content dictionaries and send them to the template, which uses them to create the webpage. My problem is that if the memcache entries have expired, then for each item I want to display I have to (1) look the item up in memcache, (2) fetch the item from the datastore since it is not cached, and (3) store the item in memcache. These calls add up quickly. I don't set an expiry time for the memcache entries, so this really only happens once in the morning, but the webpage then takes long enough to load (~1 sec) that the browser reports it as not existing. Normally, my pages take about 50ms to load. This approach works decently for frequent visits, but it has its flaws, as shown above. How can I remedy this? The entries are dynamic enough that I don't think it would be in my best interest to cache my initial request. Thanks in advance.
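
    One way to cut the per-item round trips is the batch memcache API. A sketch, under the assumption that items are keyed by their datastore keys (build_item_dict_from_datastore is a hypothetical stand-in for the fetch above):

        from google.appengine.api import memcache

        def get_item_dicts(keys):
            found = memcache.get_multi(keys)          # one RPC for all cache hits
            missing = [k for k in keys if k not in found]
            if missing:
                fresh = dict((k, build_item_dict_from_datastore(k)) for k in missing)
                memcache.set_multi(fresh)             # one RPC to repopulate the cache
                found.update(fresh)
            return [found[k] for k in keys]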


  • C#: How to pass messages from an unsafe callback to managed code?

    - by maxima120
    Is there a simple example of how to pass messages from an unsafe callback to managed code? I have a proprietary DLL which receives some messages packed in structs, all arriving at a callback function. The example of usage is as follows, but it calls unsafe code too. I want to pass the messages into my application, which is all managed code. P.S. I have no experience in interop or unsafe code. I used to develop in C++ 8 years ago but remember very little from those nightmarish times :) P.P.S. The application is loaded as hell; the original devs claim it processes 2 million messages per sec. I need the most efficient solution.

        static unsafe int OnCoreCallback(IntPtr pSys, IntPtr pMsg)
        {
            // Alias structure pointers to the pointers passed in.
            CoreSystem* pCoreSys = (CoreSystem*)pSys;
            CoreMessage* pCoreMsg = (CoreMessage*)pMsg;

            // Message handler function.
            if (pCoreMsg->MessageType == Core.MSG_STATUS)
                OnCoreStatus(pCoreSys, pCoreMsg);

            // Continue running.
            return (int)Core.CALLBACKRETURN_CONTINUE;
        }

    Thank you.


  • Is it possible to have asynchronous processing

    - by prashant2361
    Hi, I have a requirement where I need to send continuous updates to my clients. The client is a browser in this case. We have some data which updates every second, so once a client connects to our server, we maintain a persistent connection and keep pushing data to the client. I am looking for suggestions for this implementation at the server end. Basically what I need is this:

    1. A client connects to the server. I maintain the socket and metadata about the socket. The metadata contains what updates need to be sent to this client.
    2. The server process now waits for new client connections.
    3. One other process has the list of all the opened sockets, goes through each of them, and sends the updates if required.

    Can we do something like this in an Apache module:

    1. The Apache process gets the new connection. It maintains the state for the connection, keeps that state in some global memory, and returns to the root process to signify that it is done, so that the root process can accept a new connection.
    2. The Apache process, though it has returned status to the root process, also keeps executing in parallel, going through its global store and sending updates to the client, if any.

    So can an Apache process do these things:

    1. Have more than one connection associated with it?
    2. Asynchronously wait for new connections and at the same time process the previous connections?

    Regards, Prashant
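
    For what it's worth, what the lists above describe is essentially an event-driven push server that owns every socket in one loop, which is awkward to express in Apache's process-per-connection model. A minimal sketch of that shape in Python's asyncio, purely to illustrate the architecture (port and payload are made up):

        import asyncio

        CLIENTS = set()   # writers for every connected client

        async def handle(reader, writer):
            CLIENTS.add(writer)          # keep the socket open; updates are pushed later

        async def broadcast():
            while True:
                await asyncio.sleep(1)   # the data updates every second
                for w in list(CLIENTS):
                    try:
                        w.write(b"update\n")
                        await w.drain()
                    except ConnectionError:
                        CLIENTS.discard(w)

        async def main():
            server = await asyncio.start_server(handle, "0.0.0.0", 8080)  # assumed port
            await asyncio.gather(server.serve_forever(), broadcast())

        asyncio.run(main())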


  • MySQLPython is ignoring my my.cnf file. Where does it get its information?

    - by ?????
    When I try to use MySQL-Python (via SQLAlchemy) I get the error:

        File "/opt/local/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/MySQL_python-1.2.3c1-py2.6-macosx-10.6-x86_64.egg/MySQLdb/connections.py", line 188, in __init__
          super(Connection, self).__init__(*args, **kwargs2)
        sqlalchemy.exc.OperationalError: (OperationalError) (2002, "Can't connect to local MySQL server through socket '/opt/local/var/run/mysql5/mysqld.sock' (2)") None None

    but every other MySQL client on my machine sees it fine! My my.cnf file states:

        [client]
        port   = 3306
        socket = /tmp/mysql/mysql.sock

        [safe_mysqld]
        socket = /tmp/mysql/mysql.sock

        [mysqld_safe]
        socket = /tmp/mysql/mysql.sock

        [mysqld]
        socket = /tmp/mysql/mysql.sock
        port   = 3306

    and the mysql.sock file is, indeed, located in /tmp/mysql. I verified that ~/.my.cnf and /var/lib/mysql/my.cnf aren't overriding it. The mysql5 client program has no trouble connecting, and neither does a Groovy/Grails installation on the same machine using JDBC/MySQL:

        thrilllap-2:~ swirsky$ mysql5
        Welcome to the MySQL monitor. Commands end with ; or \g.
        Your MySQL connection id is 6
        Server version: 5.1.47 Source distribution

        Copyright (c) 2000, 2010, Oracle and/or its affiliates. All rights reserved.
        This software comes with ABSOLUTELY NO WARRANTY. This is free software,
        and you are welcome to modify and redistribute it under the GPL v2 license

        Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

        mysql> show databases;
        +--------------------+
        | Database           |
        +--------------------+
        | information_schema |
        | test               |
        +--------------------+
        2 rows in set (0.00 sec)

    Why can't MySQLdb for Python figure this out? Where would it look, if not the my.cnf files?
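
    A sketch of the likely workaround: the MySQL C client library underneath MySQLdb only reads option files when asked to, so absent these arguments it falls back to the socket path compiled into the library. MySQLdb exposes that via connect() keywords (the paths below are assumptions):

        import MySQLdb

        # either point it at the option file your other clients use ...
        db = MySQLdb.connect(read_default_file="/etc/my.cnf",   # assumed path to the my.cnf above
                             read_default_group="client")

        # ... or bypass config files entirely and name the socket directly
        db = MySQLdb.connect(unix_socket="/tmp/mysql/mysql.sock", db="test")

    With SQLAlchemy, the same keywords can be passed through connect_args or the URL query string.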


  • How to play multiple simultaneous audio sources in Silverlight

    - by Shurup
    I want to play multiple simultaneous audio sources in Silverlight, so I've created a prototype in Silverlight 4 that should play two mp3 files containing the same tick sound at a 1-second interval. These files must sound as one if they are played together at any whole-second offsets (0 and 1, 0 and 2, 1 and 1 seconds, etc.). In my prototype I use two MediaElement objects (me and me2).

        DateTime startTime;

        private void Play_Clicked(object sender, RoutedEventArgs e)
        {
            me.SetSource(new FileStream(file1, FileMode.Open));
            me2.SetSource(new FileStream(file2, FileMode.Open));
            var timer = new DispatcherTimer { Interval = TimeSpan.FromMilliseconds(1) };
            timer.Tick += RefreshData;
            timer.Start();
        }

    The first file should be played at 00:00 sec and the second at 00:02 sec.

        void RefreshData(object sender, EventArgs e)
        {
            if (me.CurrentState != MediaElementState.Playing)
            {
                startTime = DateTime.Now;
                me.Play();
                return;
            }
            var elapsed = DateTime.Now - startTime;
            if (me2.CurrentState != MediaElementState.Playing
                && elapsed >= TimeSpan.FromSeconds(2))
            {
                me2.Play();
                ((DispatcherTimer)sender).Stop();
            }
        }

    The tracks play differently every time, and not simultaneously (as one sound) as they should.


  • Good way to identify similar images?

    - by Nick
    I've developed a simple and fast algorithm in PHP to compare images for similarity. It's fast (~40 per second for 800x600 images) to hash, and an unoptimised search algorithm can go through 3,000 images in 22 mins, comparing each one against the others (3/sec). The basic overview: you take an image, rescale it to 8x8, and then convert those pixels to HSV. The hue, saturation and value are then truncated to 4 bits each, and it all becomes one big hex string. Comparing images basically walks along the two strings and adds up the differences it finds. If the total is below 64, it's the same image; different images are usually around 600-800, and below 20 is extremely similar. Are there any improvements I can make to this model? I haven't looked at how relevant the different components (hue, saturation and value) are to the comparison. Hue is probably quite important, but the others? To speed up searches I could probably split the 4 bits from each part in half and put the most significant bits first, so if they fail the check then the LSBs don't need to be checked at all. I don't know an efficient way to store bits like that which would still allow them to be searched and compared easily. I've been using a dataset of 3,000 photos (mostly unique) and there haven't been any false positives. It's completely immune to resizes and fairly resistant to brightness and contrast changes.
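
    For reference, here is roughly that algorithm sketched in Python with Pillow (an assumption: the original is PHP/GD); hue wrap-around is ignored, as the description above seems to:

        from PIL import Image  # assumption: Pillow is installed

        def hsv_hash(path):
            # 8x8 thumbnail -> HSV -> top 4 bits of each channel -> 192-char hex string
            px = Image.open(path).convert("RGB").resize((8, 8)).convert("HSV").getdata()
            return "".join("%x%x%x" % (h >> 4, s >> 4, v >> 4) for h, s, v in px)

        def distance(a, b):
            # sum of per-nibble differences: < 64 "same", 600-800 for unrelated images
            return sum(abs(int(x, 16) - int(y, 16)) for x, y in zip(a, b))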


  • Problem getting an HTTP response in Chrome

    - by Bhaskasr
    I am trying to get an HTTP response from a PHP web service in JavaScript, but I am getting null in Firefox and Chrome. Please tell me where I am making a mistake. Here is my code:

        function fetch_details()
        {
            if (window.XMLHttpRequest) {
                xhttp = new XMLHttpRequest();
                alert("first");
            } else {
                xhttp = new ActiveXObject("Microsoft.XMLHTTP");
                alert("sec");
            }
            xhttp.open("GET", "url.com", false);
            xhttp.send("");
            xmlDoc = xhttp.responseXML;
            alert(xmlDoc.getElementsByTagName("Inbox")[0].childNodes[0].nodeValue);
        }

    I have tried with Ajax also, but I am still not getting an HTTP response. Here is that code; please guide me:

        var xmlhttp = null;
        var url = "url.com";
        if (window.XMLHttpRequest) {
            xmlhttp = new XMLHttpRequest();
            alert(xmlhttp);
            // make sure that the browser supports overrideMimeType
            if (typeof xmlhttp.overrideMimeType != 'undefined') {
                xmlhttp.overrideMimeType('text/xml');
            }
        } else if (window.ActiveXObject) {
            xmlhttp = new ActiveXObject("Microsoft.XMLHTTP");
        } else {
            alert('Perhaps your browser does not support xmlhttprequests?');
        }
        xmlhttp.open('GET', url, true);
        xmlhttp.onreadystatechange = function() {
            if (xmlhttp.readyState == 4) {
                alert(xmlhttp.responseXML);
            }
        };
        // Make the actual request
        xmlhttp.send(null);

    I am getting:

        xmlhttp.readyState   = 4
        xmlhttp.status       = 0
        xmlhttp.responseText = ""

    Please tell me where I am making a mistake.


  • Match subpatterns in any order

    - by Yaroslav
    I have a long regexp with two complicated subpatterns inside. How can I match those subpatterns in any order? A simplified example:

        /(apple)?\s?(banana)?\s?(orange)?\s?(kiwi)?/

    and I want to match both of:

        apple banana orange kiwi
        apple orange banana kiwi

    That is a very simplified example. In my case banana and orange are long, complicated subpatterns, and I don't want to do something like:

        /(apple)?\s?((banana)?\s?(orange)?|(orange)?\s?(banana)?)\s?(kiwi)?/

    Is it possible to group subpatterns like chars in a character class?

    UPD: Real data as requested:

        14:24 26,37 Mb
        108.53 01:19:02 06.07
        24.39 19:39
        46:00

    My strings are much longer, but this is the significant part. Here you can see four lines of the kind I need to match. The first has two values: length (14 min 24 sec) and size (26,37 Mb). The second one has three values, in a different order: size 108.53 Mb, length 01 h 19 m 02 s, and date June 07. The third has size and length; the fourth has only length. There are a couple more variations, and I need to parse all the values. I have a regexp that is pretty close, except I can't figure out how to match the patterns in a different order without writing them twice:

        (?<size>\d{1,3}[.,]\d{1,2}\s+(?:Mb)?)?\s?
        (?<length>(?:(?:01:)?\d{1,2}:\d{2}))?\s*
        (?<date>\d{2}\.\d{2})?

    NOTE: that is only part of a big regexp that works fine already.
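
    One way around the ordering problem, sketched in Python with simplified stand-in patterns (assumptions, not the full regexp above): match each field independently instead of in one ordered pattern, since the fields are mostly distinguishable by shape:

        import re

        # note: a size like "24.39" is shaped like a date, so truly ambiguous
        # tokens would still need a positional tie-break on top of this
        FIELDS = {
            "size":   re.compile(r"\d{1,3}[.,]\d{1,2}(?:\s+Mb)?"),
            "length": re.compile(r"\b(?:\d{1,2}:)?\d{1,2}:\d{2}\b"),
            "date":   re.compile(r"\b\d{2}\.\d{2}\b"),
        }

        def parse(line):
            # each field scans the line on its own, so order no longer matters
            out = {}
            for name, rx in FIELDS.items():
                m = rx.search(line)
                out[name] = m.group(0) if m else None
            return out

        print(parse("108.53 01:19:02 06.07"))
        # {'size': '108.53', 'length': '01:19:02', 'date': '06.07'}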


  • node.js UDP data loss at high packet rates

    - by koleto
    I am observing significant data loss on a UDP connection with node.js 0.6.18 and 0.8.0. It appears at high packet rates, about 1200 packets per second with frames around the 1500-byte limit. Each data packet has an incrementing number, so it is easy to track the number of lost packets.

        var server = dgram.createSocket("udp4");
        server.on("message", function (message, rinfo) {
            //~ processData(message);
            //~ writeData(message, null, 5000);
        }).bind(10001);

    In the receiving callback I tested two cases. First I saved 5000 packets to a file: the result was no dropped packets. Then I included a data-processing routine and got about a 50% drop rate. What I expected was that the processing routine would be completely asynchronous and would not introduce dead time into the system, since it is a simple parser that processes the binary data in the packet and emits events to a further processing routine. It seems that the parsing routine introduces dead time in which the event handler is unable to handle each packet. At low packet rates (< 1200 packets/sec) there is no data loss observed! Is this a bug, or am I doing something wrong?


  • Multi-reader IPC solution?

    - by gct
    I'm working on a framework in C++ (just for fun for now) that lets the user write plugins that use a standard API to stream data between each other. There are going to be three basic transport mechanisms for the data: files, sockets, and some kind of IPC piping system. The system is set up so that for the non-file transports, each stream can have multiple readers: i.e. once a server socket is set up, multiple computers can connect and stream the data. I'm a little stuck on the multi-reader IPC system, though. All my plugins run in threads, so they live in the same address space, so some kind of shared-memory system would work fine. I was thinking I'd write my own circular buffer with a write pointer and read pointers chasing it around the buffer, but I have my doubts that I can achieve the same performance as something like Linux pipes. I'm curious what people would suggest for a multi-reader solution to something like this. Is the overhead for pipes or domain sockets low enough that I could just open a connection to each reader and issue separate writes to each reader? This is intended for significant volumes of data (tens of mega-samples/sec), so performance is a must.
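
    For what it's worth, here is a minimal sketch of the circular-buffer idea described above, in Python for brevity; the structure maps directly to C++ with a mutex and condition variable. One shared write position, and each reader holds its own cursor that chases it:

        import threading

        class Broadcast:
            """Single-writer, multi-reader ring buffer; each reader keeps its own cursor."""
            def __init__(self, capacity=1024):
                self.buf = [None] * capacity
                self.capacity = capacity
                self.head = 0                   # total items ever written
                self.cond = threading.Condition()

            def write(self, item):
                with self.cond:
                    self.buf[self.head % self.capacity] = item
                    self.head += 1
                    self.cond.notify_all()

            def read(self, cursor):
                """Block until data past `cursor` exists; return (items, new_cursor)."""
                with self.cond:
                    while cursor >= self.head:
                        self.cond.wait()
                    # a reader that fell a full lap behind has lost the oldest data
                    cursor = max(cursor, self.head - self.capacity)
                    items = [self.buf[i % self.capacity]
                             for i in range(cursor, self.head)]
                    return items, self.head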


  • libcurl (C API) READFUNCTION for HTTP PUT blocking forever

    - by Duane
    I am using libcurl for a RESTful library, and I am having two problems with a PUT message. I am just trying to send a small content like "hello" via PUT.

    1. My READFUNCTION for PUTs blocks for a very long time (minutes) when I follow the manual at curl.haxx.se and return 0 to indicate I have finished the content (on OS X). When I return something > 0, this succeeds much faster (< 1 sec).
    2. When I run this on my Linux machine (Ubuntu 10.04), the blocking appears to NEVER return when I return 0. If I change the behavior to always return the size written, libcurl appends all the data in the HTTP body, sending way more data, and it fails with a "too much data" message from the server.

    My read function is below; any help would be greatly appreciated. I am using libcurl 7.20.1.

        typedef struct {
            void *data;
            int body_size;
            int bytes_remaining;
            int bytes_written;
        } postdata;

        size_t readfunc(void *ptr, size_t size, size_t nmemb, void *stream)
        {
            postdata *ud = (postdata *)stream;
            if (ud == NULL || ud->bytes_remaining == 0)
                return 0;                       /* end of body: nothing left to copy */

            size_t bufsize = size * nmemb;
            size_t tocopy = (size_t)ud->bytes_remaining < bufsize
                          ? (size_t)ud->bytes_remaining
                          : bufsize;
            memcpy(ptr, (char *)ud->data + ud->bytes_written, tocopy);
            ud->bytes_written   += (int)tocopy;
            ud->bytes_remaining -= (int)tocopy;
            return tocopy;                      /* report exactly how many bytes were copied */
        }


  • MySQL SELECT Statement not working when executed from PHP

    - by Andrew K
    Hello all, I have the following piece of code, executing a pretty simple MySQL query:

        $netnestquery = 'SELECT (`nested`+1) AS `nest` FROM `ipspace6`
                         WHERE `id`<='.$adaddr.' AND `subnet`<='.$postmask.'
                           AND `type`="net"
                           AND `addr` NOT IN(SELECT `id` FROM `ipspace6`
                                             WHERE `addr`<'.$adaddr.' AND `type`="broadcast")
                         ORDER BY `id`,`subnet` DESC LIMIT 1';
        $netnestresults = mysql_query($netnestquery);
        $netnestrow = mysql_fetch_array($netnestresults);
        $nestlvl = $netnestrow['nest'];
        echo '<br> NESTQ: '.$netnestquery;

    Now, when I execute this in PHP, I get no results; an empty query. However, when I copy and paste the query echoed (for debug purposes) into the mysql command line, I get a valid result:

        mysql> SELECT (`nested` + 1) AS `nest` FROM `ipspace6`
            -> WHERE `id`<=50552019054038629283648959286463168512 AND `subnet`<=36
            -> AND `type`='net' AND `addr` NOT IN (SELECT `id` FROM `ipspace6`
            -> WHERE `addr`<50552019054038629283648959286463168512 AND `type`='broadcast')
            -> ORDER BY `id`,`subnet` DESC LIMIT 1;
        +------+
        | nest |
        +------+
        |    1 |
        +------+
        1 row in set (0.00 sec)

    Can anybody tell me what I'm doing wrong? I can't put quotes around my variables, as then MySQL will try to evaluate the variable as a string, when it is, in fact, a very large decimal. I think I might just be making a stupid mistake somewhere, but I can't tell where.


  • IE8 removes background-color of header row of Asp:Gridview

    - by Hitesh Riziya
    I am using an ASP.NET 4.0 GridView control to display data from a database, and I have applied an inbuilt theme to the GridView:

        <asp:GridView ID="gv" runat="server" CellPadding="4"
            EmptyDataText="No records found." ForeColor="#333333"
            OnRowCommand="gv_RowCommand" Width="99%"
            OnPageIndexChanging="gv_PageIndexChanged" PageSize="50"
            AllowPaging="True" GridLines="None" AutoGenerateColumns="true">
            <AlternatingRowStyle BackColor="White" />
            <EditRowStyle BackColor="#7C6F57" />
            <FooterStyle BackColor="#1C5E55" Font-Bold="True" ForeColor="White" />
            <HeaderStyle CssClass="GridHeader" BackColor="#1C5E55" Font-Bold="True"
                ForeColor="White" HorizontalAlign="Left" />
            <PagerStyle BackColor="#666666" ForeColor="White" HorizontalAlign="Center" />
            <RowStyle BackColor="#E3EAEB" />
            <SelectedRowStyle BackColor="#C5BBAF" Font-Bold="True" ForeColor="#333333" />
            <SortedAscendingCellStyle BackColor="#F8FAFA" />
            <SortedAscendingHeaderStyle BackColor="#246B61" />
            <SortedDescendingCellStyle BackColor="#D4DFE1" />
            <SortedDescendingHeaderStyle BackColor="#15524A" />
        </asp:GridView>

    I tried setting the CSS forcefully in my master page:

        .GridHeader { background-color: #1C5E55 !important; }

    But I am still missing the background colour. I can see the background colour applied to the grid (for less than 1 sec) while the page is loading the JS/CSS content. NOTE: I have already tried clearing the IE cache, Ctrl+F5, Shift+reload, etc. Here is a sample page showing my issue: http://vd2.weenggs.com/Items.aspx (email: [email protected], pass: test). Thanks.


  • How can I save an ASP.NET IFRAME from a custom entity's OnSave event, reliably?

    - by Forgotten Semicolon
    I have a custom ASP.NET solution deployed to the ISV directory of an MS Dynamics CRM 4.0 application. We have created a custom entity whose data entry requires more dynamism than is possible through the form builder within CRM. To accomplish this, we have created an ASP.NET form surfaced through an IFRAME on the entity's main form. Here is how the saving functionality is currently laid out: there is an ASP.NET button control on the page that saves the entity when clicked (this button is hidden by CSS); the button click event is triggered from the CRM OnSave javascript event; the event fires and the entity is saved. Here are the issues. When the app pool is recycled:

    1. the first save (or first few) may:
       1. not save
       2. be delayed (i.e. the interface doesn't show an update until the page has been refreshed after a few sec)
    2. Save and Close/New may not save

    For issues 1.1 and 2, what seems to be happening is that while the save event is fired for the custom ASP.NET page, the browser has moved on/refreshed the page, invalidating the request to the server. This results in the save not actually completing. Right now, this is mitigated with a kludge javascript delay loop for a few seconds after calling the button save event, inside the entity OnSave event. Is there any way to have the OnSave event wait for a callback from the IFRAME?


  • A more efficient millisecond conversion method?

    - by cube
    I am currently using this method to convert a time to min:sec:1/10sec. However, it does not seem efficient at all. Would anyone know a faster, more optimized way of accomplishing the same?

        mills.prototype.formatTime = function(time) {
            var elapsedTime = (time * 1000);

            // Minutes
            var elapsedM = (elapsedTime / 60000) | 0;
            var remaining = elapsedTime - (elapsedM * 60000);
            // add a leading zero if it's a single-digit number
            if (elapsedM < 10) { elapsedM = "0" + elapsedM; }

            // Seconds
            var elapsedS = (remaining / 1000) | 0;
            remaining -= (elapsedS * 1000);
            // add leading zero
            if (elapsedS < 10) { elapsedS = "0" + elapsedS; }

            // Hundredths
            var elapsedFractions = (remaining / 10) | 0;
            if (elapsedFractions < 10) { elapsedFractions = "0" + elapsedFractions; }

            // display results nicely
            var time_data = elapsedM + ":" + elapsedS + ":" + elapsedFractions;
            return [time_data, elapsedM, elapsedS, elapsedFractions];
        };
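
    A sketch of the same conversion using divmod, shown in Python just for shape (the pattern ports directly to JavaScript): two divisions with remainders replace the subtract-and-pad bookkeeping, and the zero padding comes from the format string.

        def format_time(seconds):
            # divmod yields quotient and remainder in one step
            total_ms = int(seconds * 1000)       # input in seconds, as in the code above
            minutes, rem = divmod(total_ms, 60000)
            secs, rem = divmod(rem, 1000)
            return "%02d:%02d:%02d" % (minutes, secs, rem // 10)

        print(format_time(83.45))   # "01:23:45"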


  • Export large amount of data from Oracle 10G to SQL Server 2005

    - by uniball
    Dear all, I need to export 100 million data rows (average row length ~100 bytes) from an Oracle 10g database table into SQL Server 2005 (over a WAN/VLAN with 6 Mbit/s capacity) on a regular basis. So far, these are the options that I have tried, with a quick summary of each. Has anyone tried this before? Are there other, better options? Which option would be the best in terms of performance and reliability? The times below were calculated from tests on smaller amounts of data and extrapolated to estimate the full run.

    1. Using the data import wizard on the SQL Server, or SSIS packages, to import the data. It will take around 150 hours to complete the task.
    2. Using an Oracle batch job to spool the data into a comma-delimited flat file, then using an SSIS package to FTP this file to the SQL Server and load directly from the flat file. The issue here is the size of the flat file, which is expected to run into GBs.
    3. Although this option is drastically different, I am even considering using a Linked Server to query the Oracle data directly at run time, to avoid bringing the data in at all. Performance is a big problem, and I have limited control over the Oracle database in terms of creating table indexes.

    Regards, Uniball


  • Mysql slow query: INNER JOIN + ORDER BY causes filesort

    - by Alexander
    Hello! I'm trying to optimize this query:

        SELECT `posts`.*
        FROM `posts`
        INNER JOIN `posts_tags` ON `posts`.id = `posts_tags`.post_id
        WHERE (((`posts_tags`.tag_id = 1)))
        ORDER BY posts.created_at DESC;

    The tables hold 38k and 31k rows respectively, and MySQL uses "filesort", so the query gets pretty slow. I have tried different indexes, with no luck.

        CREATE TABLE `posts` (
          `id` int(11) NOT NULL auto_increment,
          `created_at` datetime default NULL,
          PRIMARY KEY (`id`),
          KEY `index_posts_on_created_at` (`created_at`),
          KEY `for_tags` (`trashed`,`published`,`clan_private`,`created_at`)
        ) ENGINE=InnoDB AUTO_INCREMENT=44390 DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci

        CREATE TABLE `posts_tags` (
          `id` int(11) NOT NULL auto_increment,
          `post_id` int(11) default NULL,
          `tag_id` int(11) default NULL,
          `created_at` datetime default NULL,
          `updated_at` datetime default NULL,
          PRIMARY KEY (`id`),
          KEY `index_posts_tags_on_post_id_and_tag_id` (`post_id`,`tag_id`)
        ) ENGINE=InnoDB AUTO_INCREMENT=63175 DEFAULT CHARSET=utf8

        +----+-------------+------------+--------+--------------------------+--------------------------+---------+---------------------+-------+------------------------------------------------------------+
        | id | select_type | table      | type   | possible_keys            | key                      | key_len | ref                 | rows  | Extra                                                      |
        +----+-------------+------------+--------+--------------------------+--------------------------+---------+---------------------+-------+------------------------------------------------------------+
        |  1 | SIMPLE      | posts_tags | index  | index_post_id_and_tag_id | index_post_id_and_tag_id | 10      | NULL                | 24159 | Using where; Using index; Using temporary; Using filesort  |
        |  1 | SIMPLE      | posts      | eq_ref | PRIMARY                  | PRIMARY                  | 4       | .posts_tags.post_id |     1 |                                                            |
        +----+-------------+------------+--------+--------------------------+--------------------------+---------+---------------------+-------+------------------------------------------------------------+
        2 rows in set (0.00 sec)

    What kind of index do I need to define to stop MySQL using filesort? Is it possible when the ORDER BY field is not in the WHERE clause?


  • Is it "right" to translate error messages?

    - by Iraklis
    This is somewhat subjective, depending on the target translation language, but bear with me for a sec. I was recently involved in a translation project whose goal was to translate the strings of an MVC framework into the Greek language. 70% of the language strings of the framework were translated; however, 30% were intentionally left out. The decision was that we would not translate error messages aimed at the developer of the application. The reasoning behind this (in short) was that they:

    1. are aimed at designers/programmers. Programmers (and even designers :) ) should have a basic understanding of English, at least enough to search for an error on Google if they do not know what it means. (racist?)
    2. are aimed at the developer, and in a perfect world should not be displayed to the end user of the application, as they concern the inner workings of the web application itself, e.g. "You must set the database name in your database config file."
    3. perhaps most importantly, make the life of the developer harder when he tries to get more information/help regarding the error. For example, the above error yields 8 results in Google (in quotes), whereas its Greek translation yields exactly 0.

    I know that this depends on the popularity of the target translation language and of the application itself. For example, I'm guessing there is a vast amount of documentation regarding German SAP error messages (I know, I know, SAP IS German, but you get the point), as opposed to Greek error-message documentation for random application X, which has about 500 installations worldwide. So, to summarize: when you develop language translation packs for your applications, do you translate error messages? Do you do so only for predominant languages like English/Spanish/German/French? Or do you leave them intact? I'm not looking for the "right" or "correct" answer; I'm looking for a "best-practices" answer, or whether this problem is addressed by any "official" standard/policy that you have had experience with.


  • Currently using a view. Should I use a hard table instead?

    - by 1001010101
    I am currently debating whether my table mapping_uGroups_uProducts, which is actually a view, should be materialized. The view is defined as follows:

        CREATE ALGORITHM=UNDEFINED DEFINER=`root`@`localhost` SQL SECURITY DEFINER
        VIEW `db`.`mapping_uGroups_uProducts` AS
        SELECT DISTINCT `X`.`upID` AS `upID`, `Z`.`ugID` AS `ugID`
        FROM ((`db`.`mapping_uProducts_Products` `X`
          JOIN `db`.`productsInfo` `Y` ON ((`X`.`pID` = `Y`.`pID`)))
          JOIN `db`.`mapping_uGroups_Groups` `Z` ON ((`Y`.`gID` = `Z`.`gID`)));

    My current query is:

        SELECT upID FROM uProductsInfo
        JOIN fs_uProducts USING (upID)
        JOIN mapping_uGroups_uProducts USING (upID)  -- could be faster with a hard table and index
        JOIN mapping_fs_key USING (fsKeyID)
        WHERE fsName="OVERALL"
          AND ugID=1
        ORDER BY score DESC
        LIMIT 0,30;

    which is pretty slow: about 10 seconds for 30 results. I think the reason my query is so slow is that it relies on a view which has no index to speed things up.

        +----+-------------+----------------+--------+----------------+---------+---------+--------------------------------+-------+---------------------------------+
        | id | select_type | table          | type   | possible_keys  | key     | key_len | ref                            | rows  | Extra                           |
        +----+-------------+----------------+--------+----------------+---------+---------+--------------------------------+-------+---------------------------------+
        |  1 | PRIMARY     | mapping_fs_key | const  | PRIMARY,fsName | fsName  | 386     | const                          |     1 | Using temporary; Using filesort |
        |  1 | PRIMARY     | <derived2>     | ALL    | NULL           | NULL    | NULL    | NULL                           | 19706 | Using where                     |
        |  1 | PRIMARY     | uProductsInfo  | eq_ref | PRIMARY        | PRIMARY | 4       | mapping_uGroups_uProducts.upID |     1 | Using index                     |
        |  1 | PRIMARY     | fs_uProducts   | ref    | upID           | upID    | 4       | db.uProductsInfo.upID          |   221 | Using where                     |
        |  2 | DERIVED     | X              | ALL    | PRIMARY        | NULL    | NULL    | NULL                           | 40772 | Using temporary                 |
        |  2 | DERIVED     | Y              | eq_ref | PRIMARY        | PRIMARY | 4       | db.X.pID                       |     1 | Distinct                        |
        |  2 | DERIVED     | Z              | ref    | PRIMARY        | PRIMARY | 4       | db.Y.gID                       |     2 | Using index; Distinct           |
        +----+-------------+----------------+--------+----------------+---------+---------+--------------------------------+-------+---------------------------------+
        7 rows in set (0.48 sec)

    The EXPLAIN here looks pretty cryptic, and I don't know whether I should drop the view and write a script that just inserts everything from the view into a hard table (obviously losing the flexibility of the view, since the mapping changes quite frequently). Does anyone have an idea how I can optimize my schema better?
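
    If you do go the hard-table route, the refresh script you describe can be as small as this sketch (table and column names follow the view definition above; the refresh schedule depends on how often the mapping changes):

        import pymysql  # assumption: any MySQL client will do; the statements are plain SQL

        REBUILD = [
            "DROP TABLE IF EXISTS mapping_hard",
            # same SELECT the view uses, but stored with a composite index
            """CREATE TABLE mapping_hard (PRIMARY KEY (ugID, upID))
               SELECT DISTINCT X.upID, Z.ugID
               FROM mapping_uProducts_Products X
               JOIN productsInfo Y ON X.pID = Y.pID
               JOIN mapping_uGroups_Groups Z ON Y.gID = Z.gID""",
        ]

        def refresh(conn):
            with conn.cursor() as cur:
                for stmt in REBUILD:
                    cur.execute(stmt)
            conn.commit()

    The big query would then join mapping_hard instead of the view, and WHERE ugID=1 could use the (ugID, upID) primary key directly.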


  • self join to select consecutive numbers

    - by shantanuo
        CREATE TABLE `mybug` (
          `SEAT_NO` decimal(2,0) NOT NULL,
          `ROW_NO` decimal(2,0) NOT NULL,
          `COL_NO` decimal(2,0) NOT NULL
        ) ENGINE=MyISAM DEFAULT CHARSET=latin1;

        INSERT INTO `mybug` VALUES
        ('1','1','1'),('26','7','2'),('31','8','2'),('32','8','1'),
        ('33','9','1'),('34','9','2'),('35','9','5'),('36','9','6'),
        ('37','10','6'),('38','10','5'),('39','10','2'),('40','10','1'),
        ('41','11','1'),('42','11','2'),('43','11','4'),('44','11','5');

        +---------+--------+--------+
        | SEAT_NO | ROW_NO | COL_NO |
        +---------+--------+--------+
        |       1 |      1 |      1 |
        |      26 |      7 |      2 |
        |      31 |      8 |      2 |
        |      32 |      8 |      1 |
        |      33 |      9 |      1 |
        |      34 |      9 |      2 |
        |      35 |      9 |      5 |
        |      36 |      9 |      6 |
        |      37 |     10 |      6 |
        |      38 |     10 |      5 |
        |      39 |     10 |      2 |
        |      40 |     10 |      1 |
        |      41 |     11 |      1 |
        |      42 |     11 |      2 |
        |      43 |     11 |      4 |
        |      44 |     11 |      5 |
        +---------+--------+--------+
        16 rows in set (0.00 sec)

    In the above chart, I need to select pairs of two seats that are in the same row AND whose column numbers are next to each other. For example, seats 38 & 39 cannot be issued together even though both are in the same row, because columns 2 & 5 are not adjacent. The expected results are as follows:

        31
        33
        35
        37
        39
        41
        43

    These are the starting seat numbers; the adjacent seat will be booked automatically as well, e.g. seats 31 & 32.
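
    A self-join on the row with a column-distance test seems to produce exactly the expected list. A runnable sketch (sqlite3 used here purely as a stand-in engine; the SQL is plain enough to run unchanged in MySQL):

        import sqlite3

        con = sqlite3.connect(":memory:")
        con.execute("CREATE TABLE mybug (SEAT_NO INT, ROW_NO INT, COL_NO INT)")
        con.executemany("INSERT INTO mybug VALUES (?, ?, ?)", [
            (1, 1, 1), (26, 7, 2), (31, 8, 2), (32, 8, 1),
            (33, 9, 1), (34, 9, 2), (35, 9, 5), (36, 9, 6),
            (37, 10, 6), (38, 10, 5), (39, 10, 2), (40, 10, 1),
            (41, 11, 1), (42, 11, 2), (43, 11, 4), (44, 11, 5)])

        # pair seats in the same row whose columns differ by exactly 1;
        # a.SEAT_NO < b.SEAT_NO keeps each pair once and makes a the starting seat
        rows = con.execute("""
            SELECT a.SEAT_NO
            FROM mybug a
            JOIN mybug b ON a.ROW_NO = b.ROW_NO
                        AND ABS(a.COL_NO - b.COL_NO) = 1
                        AND a.SEAT_NO < b.SEAT_NO
            ORDER BY a.SEAT_NO""").fetchall()

        print([r[0] for r in rows])   # [31, 33, 35, 37, 39, 41, 43]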

