Search Results

Search found 69968 results on 2799 pages for 'real time updates'.


  • How to use VC++ intrinsic functions w/o run-time library

    - by Adrian McCarthy
    I'm involved in one of those challenges where you try to produce the smallest possible binary, so I'm building my program without the C or C++ run-time libraries (RTL). I don't link to the DLL version or the static version. I don't even #include the header files. I have this working fine. For some code constructs, the compiler generates calls to memset(). For example:

        struct MyStruct { int foo; int bar; };
        MyStruct blah = {}; // calls memset()

    Since I don't include the RTL, this results in a missing symbol at link time. I've been getting around this by avoiding those constructs. For the given example, I'll explicitly initialize the struct:

        MyStruct blah;
        blah.foo = 0;
        blah.bar = 0;

    But memset() can be useful, so I tried adding my own implementation. It works fine in Debug builds, even for those places where the compiler generates an implicit call to memset(). But in Release builds, I get an error saying that I cannot define an intrinsic function. You see, in Release builds, intrinsic functions are enabled, and memset() is an intrinsic. I would love to use the intrinsic for memset() in my release builds, since it's probably inlined and smaller and faster than my implementation. But I seem to be in a catch-22. If I don't define memset(), the linker complains that it's undefined. If I do define it, the compiler complains that I cannot define an intrinsic function. I've tried adding #pragma intrinsic(memset) with and without declarations of memset, but no luck. Does anyone know the right combination of definition, declaration, #pragma, and compiler and linker flags to get an intrinsic function without pulling in RTL overhead? Visual Studio 2008, x86, Windows XP+.
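    A sketch of the usual MSVC workaround (not from the original question): #pragma function(memset) tells the compiler to emit a real call instead of treating the name as an intrinsic, which makes a custom definition legal even with /Oi enabled. You give up the inlined intrinsic form, but the linker error goes away without pulling in the RTL:

        // memset_stub.cpp -- sketch, assumes an x86 build with no RTL linked
        #include <stddef.h>           // only for size_t; header-only, no RTL code

        #pragma function(memset)      // force the function form of memset

        extern "C" void* memset(void* dest, int ch, size_t count)
        {
            unsigned char* p = static_cast<unsigned char*>(dest);
            while (count--)
                *p++ = static_cast<unsigned char>(ch);
            return dest;
        }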

    Read the article

  • MySQL: Get average of time differences?

    - by Nebs
    I have a table called Sessions with two datetime columns: start and end. For each day (YYYY-MM-DD) there can be many different start and end times (HH:ii:ss). I need to find a daily average of all the differences between these start and end times. An example of a few rows would be:

        start: 2010-04-10 12:30:00   end: 2010-04-10 12:30:50
        start: 2010-04-10 13:20:00   end: 2010-04-10 13:21:00
        start: 2010-04-10 14:10:00   end: 2010-04-10 14:15:00
        start: 2010-04-10 15:45:00   end: 2010-04-10 15:45:05
        start: 2010-05-10 09:12:00   end: 2010-05-10 09:13:12
        ...

    The time differences (in seconds) for 2010-04-10 would be: 50, 60, 300, 5. The average for 2010-04-10 would be 103.75 seconds. I would like my query to return something like:

        day: 2010-04-10   ave: 103.75
        day: 2010-05-10   ave: 72
        ...

    I can get the time difference grouped by start date but I'm not sure how to get the average. I tried using the AVG function but I think it only works directly on column values (rather than the result of another aggregate function). This is what I have:

        SELECT TIME_TO_SEC(TIMEDIFF(end, start)) AS timediff
        FROM Sessions
        GROUP BY DATE(start)

    Is there a way to get the average of timediff for each start date group? I'm new to aggregate functions so maybe I'm misunderstanding something. If you know of an alternate solution please share. I could always do it ad hoc and compute the average manually in PHP but I'm wondering if there's a way to do it in MySQL so I can avoid running a bunch of loops. Thanks.
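    For what it's worth, AVG() accepts any expression, not just a bare column name; TIMEDIFF() is a row-level function rather than an aggregate, so wrapping it is fine. A minimal sketch against the schema above:

        SELECT DATE(start) AS day,
               AVG(TIME_TO_SEC(TIMEDIFF(end, start))) AS ave
        FROM Sessions
        GROUP BY DATE(start);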

    Read the article

  • MySql Query lag time?

    - by Click Upvote
    When there are multiple PHP scripts running in parallel, each making an UPDATE query to the same record in the same table repeatedly, is it possible for there to be a 'lag time' before the table is updated with each query? I have basically 5-6 instances of a PHP script running in parallel, having been launched via cron. Each script gets all the records in the items table, and then loops through them and processes them. However, to avoid processing the same item more than once, I store the id of the last item being processed in a separate table. So this is how my code works:

        function getCurrentItem()
        {
            $sql = "SELECT currentItemId from settings";
            $result = $this->db->query($sql);
            return $result->get('currentItemId');
        }

        function setCurrentItem($id)
        {
            $sql = "UPDATE settings SET currentItemId='$id'";
            $this->db->query($sql);
        }

        $currentItem = $this->getCurrentItem();

        $sql = "SELECT * FROM items WHERE status='pending' AND id > $currentItem";
        $result = $this->db->query($sql);
        $items = $result->getAll();

        foreach ($items as $i) {
            // Check if $i has been processed by a different instance of the
            // script, and if so, leave it untouched.
            if ($this->getCurrentItem() > $i->id) continue;
            $this->setCurrentItem($i->id);
            // Process the item here
        }

    But despite all the precautions, most items are being processed more than once. Which makes me think that there is some lag time between the update queries being run by the PHP script and when the database actually updates the record. Is it true? And if so, what other mechanism should I use to ensure that the PHP scripts always get only the latest currentItemId even when there are multiple scripts running in parallel? Would using a text file instead of the db help?
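    The window between getCurrentItem() and setCurrentItem() is a classic check-then-act race: two scripts can both read the same currentItemId before either writes. One sketch of a fix is to make the claim itself atomic, so the database decides the winner; the affectedRows() helper below is hypothetical, standing in for however the question's db wrapper reports affected rows:

        function claimItem($id)
        {
            // Only one process can flip the row from 'pending'; the others
            // see zero affected rows and skip the item.
            $sql = "UPDATE items SET status='processing'
                    WHERE id='$id' AND status='pending'";
            $this->db->query($sql);
            return $this->db->affectedRows() == 1; // hypothetical helper
        }

        foreach ($items as $i) {
            if (!$this->claimItem($i->id)) continue;
            // Process the item here
        }

    A text file would have the same race unless it were locked; SELECT ... FOR UPDATE inside a transaction is the other standard route.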

    Read the article

  • Heavy Mysql operation & Time Constraints [closed]

    - by Rahul Jha
    There is a performance issue that I am stuck on in my application, which is based on PHP & MySQL. The application is for data migration, where data has to be uploaded and, after various processes (cleaning of foreign characters, duplicate check, id generation), inserted into one central table and then into 5 different tables. There, an id is generated and that id has to be updated to the central table. There are different sets of records and validation rules. The problem I am facing is that when I insert, say, a 4K-row file (containing 20 columns) it works fine: within 15 min it gets inserted everywhere. But when I insert the same records again, it takes one hour (ideally it should finish quickly, marking the earlier inserted data as duplicate). After going through the log file, I noticed that there is a MySQL select statement where I am checking the duplicates and getting the IDs which are duplicates. Then I am calling a function inside a for loop which is basically inserting records into 5 tables and updating the id to the central table. This called function takes the majority of the whole process's time. P.S. The records have to be inserted record by record. Kindly suggest a solution. This is a sample of the code:

        $query = mysql_query("SELECT DISTINCT p1.ID
            FROM table1 p1, table2 p2, table3 a
            WHERE p2.datatype = 0
              AND (p1.datatype = 1 || p1.datatype = 2)
              AND p2.ID = 0
              AND p1.ID = a.ID
              AND p1.coulmn1 = p2.column1
              AND p1.coulmn2 = p2.coulmn2
              AND a.coulmn3 = p2.column3");
        $num = mysql_num_rows($query);
        for ($i = 0; $i < $num; $i++) {
            $f = mysql_result($query, $i, "ID");
            // calling function
            RecordInsert($f);
        }
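    Two things commonly help in this shape of loop, even when rows must be inserted one at a time. First, make sure every column pair used in that three-table WHERE clause is indexed. Second, wrap the whole loop in one transaction so MySQL does not sync to disk after every single statement; a sketch (RecordInsert is the question's own function):

        mysql_query("SET autocommit = 0");
        mysql_query("START TRANSACTION");
        for ($i = 0; $i < $num; $i++) {
            $f = mysql_result($query, $i, "ID");
            RecordInsert($f); // still record by record, but one commit at the end
        }
        mysql_query("COMMIT");

    On InnoDB this alone can turn thousands of individual disk flushes into one.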

    Read the article

  • Working with datetime type in Quickbooks My Time files

    - by jakemcgraw
    I'm attempting to process Quickbooks My Time imt files using PHP. The imt file is a plaintext XML file. I've been able to use the PHP SimpleXML library with no issues but one: the numeric representation of datetime in the My Time XML files is something I've never seen before:

        <object type="TIMEPERIOD" id="z128">
            <attribute name="notes" type="string"></attribute>
            <attribute name="start" type="date">308073428.00000000000000000000</attribute>
            <attribute name="running" type="bool">0</attribute>
            <attribute name="duration" type="double">3600</attribute>
            <attribute name="datesubmitted" type="date">310526237.59616601467132568359</attribute>
            <relationship name="activity" type="1/1" destination="ACTIVITY" idrefs="z130"></relationship>
        </object>

    You can see that attribute[@name='start'] has a value of 308073428.00000000000000000000. This is not Excel's method of storage: 308,073,428 is too many days since 1900-01-00, and it isn't Unix epoch either. So, my question is, has anyone ever seen this type of datetime representation?
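    One plausible reading (an assumption, not confirmed by any file-format documentation): My Time is a Mac application, and Cocoa/Core Data store dates as seconds since the reference date 2001-01-01 00:00:00 UTC. On that reading, conversion to a Unix timestamp is just a fixed offset:

        <?php
        // Assumption: Core Data reference-date seconds (since 2001-01-01 UTC).
        // 978307200 = seconds between 1970-01-01 and 2001-01-01 UTC.
        $appleSeconds = 308073428.0;
        $unix = (int) floor($appleSeconds) + 978307200;
        echo gmdate('Y-m-d H:i:s', $unix); // lands in October 2010, a sane My Time date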

    Read the article

  • Android - Two different programs at the same time in an emulator

    - by Léa Massiot
    I'm new to Android development. My OS is WinXP. I'm trying to install two different applications on an Android device emulator from the command line. I have two Android projects, "ap1" and "ap2". In the "ap1" project directory, I ran "ant debug" and got an "ap1.apk" executable. In the "ap2" project directory, I ran "ant debug" and got an "ap2.apk" executable. I created an Android Virtual Device:

        android create avd -n avd1 -t 1 --abi x86

    I launched the emulator:

        emulator -avd avd1 -verbose

    The "adb devices" command returns:

        List of devices attached
        emulator-5554   device

    I installed the first program on the emulator and ran it; it worked:

        adb -s emulator-5554 install "ap1.apk"
        adb shell am start -a android.intent.action.MAIN -n my.pkg.android/.Activity1

    I installed the second program on the emulator and ran it; it worked too:

        adb -s emulator-5554 install "ap2.apk"
        adb shell am start -a android.intent.action.MAIN -n my.pkg2.android/.AnotherActivity1

    All this works except that the second executable "replaced" the first one. If I try to run the first executable, I get an error:

        adb shell am start -a android.intent.action.MAIN -n my.pkg.android/.Activity1
        Starting: Intent { act=android.intent.action.MAIN cmp=my.pkg.android/.Activity1 }
        Error type 3
        Error: Activity class {my.pkg.android/my.pkg.android.Activity1} does not exist.

    It looks like I can't have the two apps at the same time in the emulator. What do you think? What do I have to do to have the two apps available (at the same time) in the emulator? Thank you for helping. Best regards.
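    An install replaces the previous app whenever the two manifests share the same package attribute, regardless of the apk file names, so it is worth confirming that the package differs between the two AndroidManifest.xml files. A quick check (a diagnosis sketch, not a confirmed cause; aapt ships with the SDK build tools, and findstr is the WinXP-friendly grep):

        aapt dump badging ap1.apk | findstr package:
        aapt dump badging ap2.apk | findstr package:

    If both report the same package name, renaming one project's package should let the two apps coexist in the emulator.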

    Read the article

  • MySQL get point in time totals from related tables

    - by batfastad
    Hi everyone
    We have an order book and invoicing system and I've been tasked with trying to output monthly rolling totals from these tables. But I don't know really where to start with this. I think there's some SQL syntax that I don't even know about yet. I'm familiar with INNER/LEFT JOINs and GROUP BY etc, but grouping by date is confusing since I don't know how to limit the data to only the current date that's being grouped by at that point. I think this will involve joining the tables to themselves or possibly a sub-select. I always thought it best to avoid sub-selects apart from when absolutely necessary. Basically the system has 3 tables:

        orders:       order_id, currency, order_stamp
        orders_lines: order_line_id, invoice_id, order_id, price
        invoices:     invoice_id, invoice_stamp

    order_stamp and invoice_stamp are UTC unix timestamps stored as integers, not MySQL timestamps. I'm trying to get a listing by year/month showing the total of current unbilled orders (sum of price) at that point in time. Current orders are ones where order_stamp is less than or equal to 00:00 on the 1st of the month. Unbilled orders are ones where invoice_stamp is null or invoice_stamp is greater than 00:00 on the 1st of the month. At that point in time there may not be a related invoice yet and invoice_id might be null. Anyone got any suggestions on what I should join to what and what I need to group by? Cheers, B
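    A sketch for a single point in time first, since the rolling version is just this query repeated per month boundary (table and column names are the question's own; @m is the month-start unix timestamp):

        SET @m = UNIX_TIMESTAMP('2010-05-01 00:00:00');

        SELECT SUM(ol.price) AS unbilled_total
        FROM orders_lines ol
        JOIN orders o        ON o.order_id = ol.order_id
        LEFT JOIN invoices i ON i.invoice_id = ol.invoice_id
        WHERE o.order_stamp <= @m
          AND (ol.invoice_id IS NULL OR i.invoice_stamp > @m);

    To get all months in one result set, cross join this against a small derived (or calendar) table of month-start timestamps and GROUP BY that month column; each order line is then counted once for every month boundary at which it was still outstanding, which is exactly the point-in-time semantics described.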

    Read the article

  • Increasing time resolution of BOOST::progress timer

    - by feelfree
    BOOST::progress_timer is a very useful class to measure the running time of a function. However, the default implementation of progress_timer is not accurate enough, and a possible way of increasing time resolution is to reconstruct a new class as the following code shows:

        #include <boost/progress.hpp>
        #include <boost/static_assert.hpp>

        template<int N = 2>
        class new_progress_timer : public boost::timer
        {
        public:
            new_progress_timer(std::ostream &os = std::cout) : m_os(os)
            {
                BOOST_STATIC_ASSERT(N >= 0 && N <= 10);
            }
            ~new_progress_timer(void)
            {
                try
                {
                    std::istream::fmtflags old_flags =
                        m_os.setf(std::istream::fixed, std::istream::floatfield);
                    std::streamsize old_prec = m_os.precision(N);
                    m_os << elapsed() << "s\n" << std::endl;
                    m_os.flags(old_flags);
                    m_os.precison(old_prec);
                }
                catch(...)
                {
                }
            }
        private:
            std::ostream &m_os;
        };

    However, when I compile the code with VC10, the following error appears:

        'precison' : is not a member of 'std::basic_ostream<_Elem,_Traits>'

    Any ideas? Thanks.
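    The compiler is pointing straight at a one-letter typo in the last line of the try block: std::ostream has a member precision(), not precison(). The restoring call should read:

        m_os.precision(old_prec); // restore the saved precision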

    Read the article

  • Calling a jQuery method at declaration time, and then onEvent

    - by cesarsalazar
    I've been writing JS (mainly jQuery) for quite a few months now, but today I decided to make my first abstraction as a jQuery method. I already have working code but I feel/know that I'm not doing it the right way, so I come here for some enlightenment. Note: please do not reply that there's already something out there that does the trick, as I already know that. My interest in this matter is rather educational. What my code is intended to do (and does): limit the characters of a textfield and change the color of the counter when the user is approaching the end. And here's what I have:

        $(function(){
          $('#bio textarea').keyup(function(){
            $(this).char_length_validation({
              maxlength: 500,
              warning: 50,
              validationSelector: '#bio .note'
            })
          })
          $('#bio textarea').trigger('keyup');
        })

        jQuery.fn.char_length_validation = function(opts){
          chars_left = opts.maxlength - this.val().length;
          if(chars_left >= 0){
            $(opts.validationSelector + ' .value').text(chars_left);
            if(chars_left < opts.warning){
              $(opts.validationSelector).addClass('invalid');
            }
            else{
              $(opts.validationSelector).removeClass('invalid');
            }
          }
          else{
            this.value = this.value.substring(0, opts.maxlength);
          }
        }

    In the HTML:

        <div id="bio">
          <textarea>Some text</textarea>
          <p class="note">
            <span class="value">XX</span>
            <span> characters left</span>
          </p>
        </div>

    Particularly, I feel really uncomfortable binding the event on each keyup instead of binding once and calling a method later. Also (and hence the title), I need to call the method initially (when the page renders) and then every time the user inputs a character. Thanks in advance for your time :)
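    One common shape for this (a sketch, keeping the question's option names): let the plugin itself bind the handler exactly once, and run the same function immediately so the counter is correct at declaration time:

        jQuery.fn.char_length_validation = function(opts) {
          return this.each(function() {
            var $field = jQuery(this);
            function validate() {
              var chars_left = opts.maxlength - $field.val().length;
              // ... counting / class-toggling logic from the question ...
            }
            $field.keyup(validate); // bound once
            validate();             // and called once at declaration time
          });
        };

        // Usage: one call, no manual trigger needed
        $('#bio textarea').char_length_validation({
          maxlength: 500, warning: 50, validationSelector: '#bio .note'
        });

    Returning this.each(...) also keeps the plugin chainable, which is the usual jQuery plugin convention.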

    Read the article

  • Rails log shows unexpected data as to the time spent on a DB stuff

    - by Arhimed
    I'm running on WinXP + Ruby 1.8.6 + Rails 2.3.5 (frozen to the project) in the development environment. Looking at development.log, I observe inconsistent data as to the time spent on database stuff. Example #1 (good):

        Processing PagesController#index (for 127.0.0.1 at 2010-05-11 12:15:54) [GET]
          Parameters: {"action"=>"index", "controller"=>"pages"}
          City Columns (563.0ms)  SHOW FIELDS FROM `cities`
          City Load (15.0ms)  SELECT * FROM `cities` WHERE (`cities`.`short_name` = 'NY') LIMIT 1
        Redirected to http://xyz:3000/sightings
        Completed in 953ms (DB: 578) | 302 Found [http://xyz/]

    Example #2 (unexpected):

        Processing PagesController#index (for 127.0.0.1 at 2010-05-11 12:15:36) [GET]
          Parameters: {"action"=>"index", "controller"=>"pages"}
          City Columns (0.0ms)  SHOW FIELDS FROM `cities`
          City Load (0.0ms)  SELECT * FROM `cities` WHERE (`cities`.`short_name` = 'NY') LIMIT 1
        Redirected to http://xyz:3000/sightings
        Completed in 47ms (DB: 32) | 302 Found [http://xyz/]

    Example #2 shows 32ms spent on the DB while there were just 2 SQL queries, both of zero reported time. Example #3 (unexpected):

        Processing PagesController#index (for 127.0.0.1 at 2010-05-11 11:21:24) [GET]
          Parameters: {"action"=>"index", "controller"=>"pages"}
          City Columns (63.0ms)  SHOW FIELDS FROM `cities`
          City Load (62.0ms)  SELECT * FROM `cities` WHERE (`cities`.`short_name` = 'NY') LIMIT 1
        Redirected to http://xyz:3000/sightings
        Completed in 1187ms (DB: 297) | 302 Found [http://xyz/]

    Example #3 shows 297ms while the queries took 63ms and 62ms (125ms in total). I can't understand it. Could someone explain? Thanks in advance.

    Read the article

  • Summing Row in SQL query for time range

    - by user3703334
    I'm trying to group a large amount of data into smaller bundles. Currently the code for my query is as follows:

        SELECT [DateTime], [KW]
        FROM [POWER]
        WHERE datetime >= '2014-04-14 06:00:00' and datetime < '2014-04-21 06:00:00'
        ORDER BY datetime

    which gives me:

        DateTime               KW
        4/14/2014 6:00:02.0    1947
        4/14/2014 6:00:15.0    1946
        4/14/2014 6:00:23.0    1947
        4/14/2014 6:00:32.0    1011
        4/14/2014 6:00:43.0    601
        4/14/2014 6:00:52.0    585
        4/14/2014 6:01:02.0    582
        4/14/2014 6:01:12.0    580
        4/14/2014 6:01:21.0    579
        4/14/2014 6:01:32.0    579
        4/14/2014 6:01:44.0    578
        4/14/2014 6:01:53.0    578
        4/14/2014 6:02:01.0    577
        4/14/2014 6:02:12.0    577
        4/14/2014 6:02:22.0    577
        4/14/2014 6:02:32.0    576
        4/14/2014 6:02:42.0    578
        4/14/2014 6:02:52.0    577
        4/14/2014 6:03:02.0    577
        4/14/2014 6:03:12.0    577
        4/14/2014 6:03:22.0    578
        . . . .
        4/21/2014 5:59:55.0    11

    There is a reading every 10 seconds from a substation, and I want to group this data into hourly readings. Thus 00:00-01:00 = SUM([KW]) for datetime >= '^date^ 00:00:00' and datetime < '^date^ 01:00:00'. I've tried using a CONVERT to change the datetime into separate date and time fields and then add all the time fields together, with no success. Can someone please assist me? I'm not sure what the right way of doing this is. Thanks.

    ADDED: OK, so the split of the DateTime is working nicely, but as soon as I add a SUM([KW]) function SQL gives an error, and if I include any of the GROUP BY functions it also nags. Below is what works; I still need to sum the KW per the grouping of hours. I've tried using GROUP BY Hour and GROUP BY DATEPART(Hour,[DateTime]) and both didn't work.

        SELECT DATEPART(Hour,[DateTime]) Hour
              ,DATEPART(Day,[DateTime]) Day
              ,DATEPART(Month,[DateTime]) Month
              ,([KVAReal])
              ,([KVAr])
              ,([KW])
        FROM [POWER].[dbo].[IT10t_PAC3200]
        WHERE datetime >= '2014-04-14 06:00:00' and datetime < '2014-04-21 06:00:00'
        ORDER BY datetime
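    The usual T-SQL rule: once SUM() enters the query, every selected column must be either aggregated or listed in GROUP BY, which is why the version with bare [KVAReal]/[KVAr]/[KW] columns errors out. A sketch that buckets each reading to the top of its hour and sums per bucket:

        SELECT DATEADD(hour, DATEDIFF(hour, 0, [DateTime]), 0) AS HourStart,
               SUM([KW]) AS TotalKW
        FROM [POWER].[dbo].[IT10t_PAC3200]
        WHERE [DateTime] >= '2014-04-14 06:00:00'
          AND [DateTime] <  '2014-04-21 06:00:00'
        GROUP BY DATEADD(hour, DATEDIFF(hour, 0, [DateTime]), 0)
        ORDER BY HourStart;

    DATEDIFF(hour, 0, ...) counts whole hours since date 0 and DATEADD turns that count back into a truncated datetime, so readings from 06:00:00 through 06:59:59 all land in the 06:00 bucket. (If the goal is average power per hour rather than the sum of ~360 ten-second samples, swap SUM for AVG.)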

    Read the article

  • IIS doesn't send two responses to the same client at the same time (only for ASP)

    - by dr. evil
    I've got 2 ASP pages. I do a request to the first page from Firefox (which takes 30 seconds to process server-side), and during those 30 seconds I do another request from Firefox to the second page (which takes less than 1 second server-side), but the response comes after 31 seconds, because it waits for the first request to finish. When I request the first page from Firefox and then request the second page from IE, it's instant. So basically ASP on IIS 6 is somehow limiting every client to one (long processing) request at a time. I need to get around this problem in my .NET client application. This is tested on 3 different systems. If you want to test, you can try the ASP scripts at the end. This behaviour is the same for a long SQL execution or just a time-consuming ASP operation. Note:

      - It's not about HTTP keep-alive.
      - It's not about the persistent connection limit (we tried to increase this in Firefox and in .NET with Net.ServicePointManager.DefaultConnectionLimit).
      - It's not about the User-Agent.
      - This doesn't happen in ASP.NET, so I assume it's something to do with ASP.dll.

    I'm trying to solve this on the client, not the server. I don't have direct control over the server; it's a 3rd-party solution. Is there any way to get around this? Sample ASP code. First ASP:

        <%
        Set cnn = Server.CreateObject("Adodb.Connection")
        cnn.Open "Provider=sqloledb;Data Source=.;Initial Catalog=master;User Id=sa;Password=;"
        cnn.Execute("WAITFOR DELAY '0:0:30'")
        cnn.Close
        %>

    Second ASP:

        <%
        Response.Write "bla bla"
        %>
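    One hypothesis consistent with the Firefox-vs-IE observation: classic ASP serializes requests that belong to the same session, and the session is identified by the ASPSESSIONID cookie, which the two browsers do not share. If that is the cause, a .NET client can sidestep it by not presenting the same session cookie on concurrent requests; a sketch:

        // Sketch (assumption: per-session request serialization is the culprit).
        // Each concurrent request gets its own cookie container, so IIS sees
        // each one as a brand-new ASP session and does not queue them.
        var req = (HttpWebRequest)WebRequest.Create(url);
        req.CookieContainer = new CookieContainer(); // fresh, empty session

    If the client never sets a CookieContainer at all, HttpWebRequest sends no cookies anyway, which should have the same effect; in that case the serialization is happening for some other reason and this sketch won't help.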

    Read the article

  • Very simple Python function spends long time in function and not subfunctions

    - by John Salvatier
    I have spent many hours trying to figure out what is going on here. The function grad_logp in the code below is called many times in my program, and cProfile (with RunSnakeRun to visualize the results) reveals that grad_logp spends about .00004s 'locally' every call, not in any functions it calls, and the function n spends about .00006s locally every call. Together these two times make up about 30% of the program time that I care about. It doesn't seem like this is function overhead, as other Python functions spend far less time 'locally', and merging grad_logp and n does not make my program faster, but the operations these two functions do seem rather trivial. Does anyone have any suggestions on what might be happening? Have I done something obviously inefficient? Am I misunderstanding how cProfile works?

        def grad_logp(self, variable, calculation_set):
            p = params(self.p, self.parents)
            return self.n(variable, self.p)

        def n(self, variable, p):
            gradient = self.gg(variable, p)
            return np.reshape(gradient, np.shape(variable.value))

        def gg(self, variable, p):
            if variable is self:
                gradient = self._grad_logps['x'](x=self.value, **p)
            else:
                gradient = __builtin__.sum([self._pgradient(variable, parameter, value, p)
                                            for parameter, value in self.parents.iteritems()])
            return gradient
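    One concrete detail visible in the listing: grad_logp builds p = params(self.p, self.parents) and then never uses it, and that whole construction is charged to grad_logp's "local" time on every call. More generally, in CPython the bytecode a function runs itself (attribute lookups like self.p, argument packing, building the list inside gg) is exactly what cProfile attributes locally, so trivial-looking glue adds up when a function is called millions of times. A minimal sketch of the dead-call removal:

        def grad_logp(self, variable, calculation_set):
            # params(...) result was unused; dropping it removes one call
            # (and its attribute/dict work) from every invocation
            return self.n(variable, self.p)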

    Read the article

  • Is it getting to be time for C# to support compile-time macros?

    - by Robert Rossney
    Thus far, Microsoft's C# team has resisted adding formal compile-time macro capabilities to the language. There are aspects of programming with WPF that seem (to me, at least) to be creating some compelling use cases for macros. Dependency properties, for instance. It would be so nice to just be able to do something like this:

        [DependencyProperty]
        public string Foo { get; set; }

    and have the body of the Foo property and the static FooProperty property be generated automatically at compile time. Or, for another example, an attribute like this:

        [NotifyPropertyChanged]
        public string Foo { get; set; }

    that would make the currently-nonexistent preprocessor produce this:

        private string _Foo;
        public string Foo
        {
            get { return _Foo; }
            set
            {
                _Foo = value;
                OnPropertyChanged("Foo");
            }
        }

    You can implement change notification with PostSharp, and really, maybe PostSharp is a better answer to the question. I really don't know. Assuming that you've thought about this more than I have, which if you've thought about it at all you probably have, what do you think? (This is clearly a community wiki question and I've marked it accordingly.)

    Read the article

  • First Time Architecturing?

    - by cam
    I was recently given the task of rebuilding an existing RIA. The new RIA that I've designed is based on Silverlight, with a WCF service to connect to MS SQL Server. This is my first time doing something like this, so I'm not sure how to design the entire thing. Basically, the client can look through graphs of "stocks" (allowing the client to choose different time periods, settings, etc). I've written the whole application essentially, but I'm not sure how to put it together. The graphs are supposed to be directly based on the database, and to create the datapoints on the graph, some calculations need to be done (not very expensive ones). The problem I'm having is to decide where to put the calculations (client or serverside? Or half and half?) What factors should I look for to help me decide where the calculations should be done? And how can I go about optimizing this (caching, etc)? Obviously this is a very broad subject, so I'm not expecting an immediate answer, but any help/pointing in the right direction/resources would be appreciated.

    Read the article

  • Tricky SQL query - need to get time frames

    - by Andrew
    I stumbled upon a problem where I need a query that will produce a list of speeding time frames. Here is the data example:

        idgps_unit_location  dt                idgps_unit  lat  long  speed_kmh
        26                   10/18/2012 18:53  2           47   56    30
        27                   10/18/2012 18:53  2           49   58    31
        28                   10/18/2012 18:53  2           28   37    15
        29                   10/18/2012 18:54  2           56   65    33
        30                   10/18/2012 18:54  2           152  161   73
        31                   10/18/2012 18:55  2           134  143   64
        32                   10/18/2012 18:56  2           22   31    12
        36                   10/18/2012 18:59  2           98   107   47
        37                   10/18/2012 18:59  2           122  131   58
        38                   10/18/2012 18:59  2           91   100   44
        39                   10/18/2012 19:00  2           190  199   98
        40                   10/18/2012 19:01  2           194  203   101
        41                   10/18/2012 19:02  2           182  191   91
        42                   10/18/2012 19:03  2           162  171   78
        43                   10/18/2012 19:03  2           174  183   83
        44                   10/18/2012 19:04  2           170  179   81
        45                   10/18/2012 19:05  2           189  198   97
        46                   10/18/2012 19:06  2           20   29    10
        47                   10/18/2012 19:07  2           158  167   76
        48                   10/18/2012 19:08  2           135  144   64
        49                   10/18/2012 19:08  2           166  175   79
        50                   10/18/2012 19:09  2           9    18    5
        51                   10/18/2012 19:09  2           101  110   48
        52                   10/18/2012 19:09  2           10   19    7
        53                   10/18/2012 19:10  2           32   41    20
        54                   10/18/2012 19:10  1           54   63    85
        55                   10/19/2012 19:11  2           55   64    50

    I need a query that would convert this table into the following report, showing frames of time when speed was >= 80:

        idgps_unit  dt_start          lat_start  long_start  speed_start  dt_end            lat_end  long_end  speed_end  speed_average
        2           10/18/2012 19:00  190        199         98           10/18/2012 19:02  182      191       91         96.66666667
        2           10/18/2012 19:03  174        183         83           10/18/2012 19:05  189      198       97         87
        1           10/18/2012 19:10  54         63          85           10/18/2012 19:10  54       63        85         85

    Now, what have I tried? I tried putting this into separate tables and queries and doing some joins... Nothing works and I am very frustrated... I am not even sure this can be done via a query. Asking for expert help!
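    This is the classic gaps-and-islands problem: group consecutive rows (per unit, ordered by dt) that share the flag speed_kmh >= 80. A sketch using the row-number difference trick (requires window functions, so MySQL 8+ or similar; the table name locations is hypothetical):

        SELECT idgps_unit,
               MIN(dt)        AS dt_start,
               MAX(dt)        AS dt_end,
               AVG(speed_kmh) AS speed_average
        FROM (
            SELECT t.*,
                   ROW_NUMBER() OVER (PARTITION BY idgps_unit
                                      ORDER BY dt, idgps_unit_location)
                 - ROW_NUMBER() OVER (PARTITION BY idgps_unit, (speed_kmh >= 80)
                                      ORDER BY dt, idgps_unit_location) AS island
            FROM locations t
        ) x
        WHERE speed_kmh >= 80
        GROUP BY idgps_unit, island
        ORDER BY dt_start;

    The two row numbers drift apart every time the >= 80 flag flips, so their difference is constant exactly within each unbroken speeding run. The lat/long/speed values at the frame edges can then be joined back on (idgps_unit, dt_start) and (idgps_unit, dt_end).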

    Read the article

  • inserting time delay with cocos2d

    - by KDaker
    I am trying to add several labels that appear sequentially, with a time delay between each. The labels will display either 0 or 1, and the value is calculated randomly. I am running the following code:

        for (int i = 0; i < 6; i++) {
            NSString *cowryString;
            int prob = arc4random() % 10;
            if (prob > 4) {
                count++;
                cowryString = @"1";
            }
            else {
                cowryString = @"0";
            }
            [self runAction:[CCSequence actions:
                [CCDelayTime actionWithDuration:0.2],
                [CCCallFuncND actionWithTarget:self
                                      selector:@selector(cowryAppearWithString:data:)
                                          data:cowryString],
                nil]];
        }

    The method that makes the labels appear is this:

        -(void)cowryAppearWithString:(id)sender data:(NSString *)string {
            CCLabelTTF *clabel = [CCLabelTTF labelWithString:string
                                                    fontName:@"arial"
                                                    fontSize:70];
            CGSize screenSize = [[CCDirector sharedDirector] winSize];
            clabel.position = ccp(200.0 + ([cowries count] * 50), screenSize.height / 2);
            id fadeIn = [CCFadeIn actionWithDuration:0.5];
            [clabel runAction:fadeIn];
            [cowries addObject:clabel];
            [self addChild:clabel];
        }

    The problem with this code is that all the labels appear at the same moment, with the same delay. I understand that if I use [CCDelayTime actionWithDuration:0.2*i] the code will work. But the problem is that I might also need to iterate this entire for loop and have the labels appear again after they have appeared the first time. How is it possible to have the actions appear with a delay when the actions don't always follow the same order or number of iterations?
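    One sketch: keep a running offset instead of deriving the delay from i, so the staggering keeps working no matter how many times or in what order the loop is re-entered (delayOffset is an added instance variable, reset whenever a new round should start from "now"):

        // instance variable, reset to 0 before each new round: float delayOffset;
        delayOffset += 0.2f;
        [self runAction:[CCSequence actions:
            [CCDelayTime actionWithDuration:delayOffset],
            [CCCallFuncND actionWithTarget:self
                                  selector:@selector(cowryAppearWithString:data:)
                                      data:cowryString],
            nil]];

    Each queued sequence then waits 0.2s longer than the previous one, regardless of which loop iteration created it.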

    Read the article

  • cURL/PHP Request Executes 50% of the Time

    - by makavelli
    After searching all over, I can't understand why cURL requests issued to a remote SSL-enabled host are successful only 50% or so of the time in my case. Here's the situation: I have a sequence of cURL requests, all of them issued to an HTTPS remote host, within a single PHP script that I run using the PHP CLI. Occasionally when I run the script the requests execute successfully, but for some reason most of the times I run it I get the following error from cURL:

        * About to connect() to www.virginia.edu port 443 (#0)
        *   Trying 128.143.22.36... * connected
        * Connected to www.virginia.edu (128.143.22.36) port 443 (#0)
        * successfully set certificate verify locations:
        *   CAfile: none
            CApath: /etc/ssl/certs
        * error:140943FC:SSL routines:SSL3_READ_BYTES:sslv3 alert bad record mac
        * Closing connection #0

    If I try again a few times I get the same result, but then after a few tries the requests will go through successfully. Running the script after that again results in an error, and the pattern continues. Researching the error 'alert bad record mac' didn't give me anything helpful, and I hesitate to blame it on an SSL issue since the script still runs occasionally. I'm on Ubuntu Server 10.04, with php5 and php5-curl installed, as well as the latest version of openssl. In terms of cURL-specific options, CURLOPT_SSL_VERIFYPEER is set to false, and both CURLOPT_TIMEOUT and CURLOPT_CONNECTTIMEOUT are set to 4 seconds. Further illustrating this problem is the fact that the same exact situation occurs on my Mac OS X dev machine - the requests only go through ~50% of the time.
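    Intermittent 'bad record mac' failures are often a version-negotiation problem between OpenSSL and the server's SSL stack rather than anything in the PHP code; a common diagnostic step (a sketch, not a guaranteed fix) is to pin the SSL version and see whether the failure rate changes:

        // 3 = CURL_SSLVERSION_SSLv3, 1 = CURL_SSLVERSION_TLSv1 in libcurl
        curl_setopt($ch, CURLOPT_SSLVERSION, 3);

    Testing the same URL from the command-line client (curl -3 vs. curl -1) is a quick way to confirm whether the version pin matters before touching the script.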

    Read the article

  • Encode_JSON Errors in Lasso 8.6.2 After Period of Time

    - by ATP_JD
    We are in the process of converting apps from Lasso 8 to Lasso 9, and as an intermediate step have upgraded from 8.5.5 to 8.6.2 (which runs alongside 9 on our new box, in different virtual hosts). I am finding that with 8.6.2 we are getting a slew of errors on pages that call encode_json. The weird thing with these errors is that they don't start happening until some period of time after the site starts. Then, some hours later, all encode_json calls begin to fail with error messages like this:

        An error occurred while processing your request.
        Error Information
        Error Message: No tag, type or constant was defined under the name "?????????????????"
        with arguments: array: (pair: (-find)=([\x{0020}-\x{21}\x{23}-\x{5b}\x{5d}-\x{10fff}])), (r)
        at: onCompare with params: 'r'
        at: JSON with params: 'reload', -Options=array: (-Internal)
        at: JSON with params: @map: (reload)=(false), (tcstring)=(LZU), (timestring)=(10:42 AM&nbsp;&nbsp;&nbsp;1442Z)
        at: [...].lasso with params: 'pageloadtime'='1383038310' on line: 31 at position: 1
        Error Code: -9948

    (Yes, those Chinese(?) characters are in the error message.) I have removed the 8.5.5 encode_json tag from LassoStartup, so we are using the correct built-in method. The encode_json method fails for any and all parameters I throw at it, from simple strings to arrays of maps. Upon restarting the site, encode_json resumes working for an hour or two, seemingly depending on load. On 8.5.5 we don't have this problem. Does anyone have experience with this issue? Any advice on trying the 8.5.5 tag swap to override the built-in encode_json and see if it works better? Thanks in advance for your time and assistance. -Justin

    Read the article

  • Cake returned the time consumed in data lookup in JQuery Alert Box

    - by kwokwai
    Hi all, When I was doing some self-learning on jQuery Ajax in CakePHP, I found some strange behaviour in the jQuery alert box. Here are a few lines of the jQuery Ajax code I used:

        $(document).ready(function(){
          $(document).change(function(){
            var usr = $("#data\\[User\\]\\[name\\]").val();
            $.post("http://www.mywebsite.com/controllers/action/", usr,
                   function(msg){ alert(msg); });
          });
        });

    The alert box shows me a message returned from the action:

        Helloworld <!--0.656s-->

    I am not sure why the time consumed was displayed in the alert box, since it was not in my code, which is as follows:

        function action($data = null){
            $this->autoRender = false;
            $result2 = $this->__avail($data);
            if($result2 == 1) { return "OK"; }
            else { return "NOT"; }
        }

    CakePHP returned some extra information in the alert box. Later I altered a single line of code and tried this instead, and the time consumed was not displayed on screen then:

        $(document).ready(function(){
          $(document).change(function(){
            var usr = $("#data\\[User\\]\\[name\\]").val();
            $.post("http://www.mywebsite.com/controllers/action/", usr,
                   function(msg){ $("#username").append('<span>'+msg+'</span>'); });
          });
        });
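    The trailing <!--0.656s--> is almost certainly CakePHP's own debug output: with the debug level above 0, Cake appends the page render time as an HTML comment to every response. It shows up in alert() because the raw response string is displayed verbatim, while append() hides it because the browser treats it as an HTML comment. A sketch of the usual fix for Ajax-only actions (CakePHP 1.x style configuration):

        function action($data = null){
            Configure::write('debug', 0); // suppress the timing comment for this response
            $this->autoRender = false;
            // ... rest of the action as in the question
        }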

    Read the article

  • MATLAB, time match filter

    - by Paul
    OK, I am still getting the hang of MATLAB. I have two files in different formats. One is an Excel file, data1.xls, size = 86400 x 62. It looks like:

        Date/Time          par1  par2  par3  par4  par5    par6  par6  par7  par8  par9
        08/02/09 00:06:45  0     3     27    9.9   -133.2  0     0     0     1     0

    The other file, data2.csv, size = 144 x 27 (if nothing is missing). It looks like:

        date       time  P01  P02  P03  P04  P05  P06  P07  P08  P09  P10  P11
        8/16/2009  0:00  51   45   46   54   53   52   524  5    399  89   78

    Now I am using

        Data10minAvg = mean(reshape(Data, 300, 144, 62));

    to get the 10 min average of the first Excel file. Now I need to match up the file I am making above with the .csv file. The problem is that many timestamps are missing in the .csv file. How do I make data2.csv into a file of size 144 x 27, replacing the missing timestamps with rows of zeros? That will really help me to then compare the data1.xls file with the new data2.csv.
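    A sketch of one way to do the padding, assuming the csv's date/time columns have already been read into a datenum vector stamps (one entry per existing row) and the 25 value columns into a matrix vals; both names are placeholders for however the file is parsed:

        day  = datenum(2009, 8, 16);              % the day being processed
        full = zeros(144, 25);                    % 144 ten-minute slots, zero-filled
        slot = round((stamps - day) * 144) + 1;   % 10 min = 1/144 of a day
        keep = slot >= 1 & slot <= 144;           % guard against stray timestamps
        full(slot(keep), :) = vals(keep, :);      % existing rows land in their slots
        % missing timestamps simply remain as rows of zeros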

    Read the article

  • Big-Oh running time of code in Java (are my answers accurate?)

    - by Terry Frederick
    The method hasTwoTrueValues returns true if at least two values in an array of booleans are true. Provide the Big-Oh running time for all three implementations proposed.

        // Version 1
        public boolean hasTwoTrueValues(boolean[] arr) {
            int count = 0;
            for (int i = 0; i < arr.length; i++)
                if (arr[i])
                    count++;
            return count >= 2;
        }

        // Version 2
        public boolean hasTwoTrueValues(boolean[] arr) {
            for (int i = 0; i < arr.length; i++)
                for (int j = i + 1; j < arr.length; j++)
                    if (arr[i] && arr[j])
                        return true;
            return false;
        }

        // Version 3
        public boolean hasTwoTrueValues(boolean[] arr) {
            for (int i = 0; i < arr.length; i++)
                if (arr[i])
                    for (int j = i + 1; j < arr.length; j++)
                        if (arr[j])
                            return true;
            return false;
        }

    For Version 1 I say the running time is O(n). Version 2 I say O(n^2). Version 3 I say O(n^2). I am really new to this Big-Oh notation, so if my answers are incorrect could you please explain and help.
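    A worked check of those answers: Version 1 is O(n), and Version 2 is O(n^2) in the worst case, since an all-false array forces all n(n-1)/2 pairs to be examined. Version 3, though, is arguably O(n): its inner loop only runs for an index i where arr[i] is true, and the first such index either finds a second true value (returning immediately) or scans once to the end because no other true exists, after which every later arr[i] is false and the inner loop never runs again. So the total work is at most about 2n array reads, not n^2.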

    Read the article

  • Optimal (Time paradigm) solution to check variable within boundary

    - by kumar_m_kiran
    Hi All, Sorry if the question is very naive. I will have to check the below condition in my code:

        0 < x < y

    i.e. code similar to:

        if (x > 0 && x < y)

    The basic problem at the system level is: currently, for every call (telecom domain terminology), my existing code is hit many times, so performance is very, very critical. Now I need to add a boundary check at many locations, but with a different boundary comparison at each location. At a normal level of coding, the above comparison would look naive and without any issue. However, when added to my statistics module (which is dipped into many times), performance will go down. So I would like to know the best possible way to handle the above scenario (a kind of optimal limits-checking technique). For example, does bit comparison work better than a normal comparison, or can both comparisons be evaluated in a shorter time span? Other info:

      - x is an unsigned integer (which must be checked to be greater than 0 and less than y).
      - y is an unsigned integer; y is non-const and varies for every comparison.
      - Here time is the constraint, compared to space.
      - Language: C++.

    Now, if I later need to change the attribute of y to a float/double, would there be another way to optimize the check (i.e. will the suggested optimal technique for integers become a non-optimal solution when y is changed to float/double)? Thanks in advance for any input. PS: OS used is SUSE 10 64-bit x86_64, AIX 5.3 64-bit, HP-UX 11.1 A 64.
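    For unsigned integers there is a well-known trick that folds the two comparisons into one by exploiting wraparound; a sketch, valid only under the stated assumption that y > 0:

        // If x == 0, x - 1u wraps to UINT_MAX, which is never < y - 1u when y > 0,
        // so the single compare is exactly equivalent to (0 < x && x < y).
        inline bool in_open_range(unsigned x, unsigned y)
        {
            return (x - 1u) < (y - 1u);
        }

    Two caveats: modern compilers frequently perform this transformation themselves, so it is worth checking the generated assembly before rewriting call sites; and the trick depends on unsigned wraparound, so it does not carry over to the float/double variant of y, where the plain two-comparison form remains the sensible choice.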

    Read the article

  • Creation of AsyncTask taking too much time Android

    - by user2842342
    I am making a network call in an AsyncTask, but the problem I am facing is the amount of time it takes to start the doInBackground method. Here is a part of my code:

        button.setOnClickListener(new View.OnClickListener() {
            @Override
            public void onClick(View v) {
                Log.d("Temp:", System.currentTimeMillis() + "");
                new Move().execute();
                // some other logic
            }
        });

    And my AsyncTask is:

        private class Move extends AsyncTask<Void, Void, Void> {
            @Override
            protected Void doInBackground(Void... temp) {
                Log.d("start:", System.currentTimeMillis() + "");
                gson.fromJson(Web.Request.get(some_URL), Void.class);
                Log.d("end:", System.currentTimeMillis() + "");
                return null;
            }
        }

    These are the logs I got:

        32658-998/com.example.game D/temp:? 1408923006159
        32658-998/com.example.game D/start:? 1408923035163
        32658-998/com.example.game D/end:? 1408923035199

    So it actually took almost 29 secs to reach the first line in doInBackground, whereas the network call itself finished in just 36 ms. I tried it many times and the time taken is almost always of the same order. Is it possible to start the AsyncTask immediately? Or is there any other way to solve this problem (other than running a simple thread)? Thank you :)
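    Since API 11, AsyncTasks run on a single shared serial executor by default, so if any other AsyncTask in the app is busy (say, blocked on a slow request), every new task queues behind it; a 29-second wait before doInBackground is the signature of that queue. A sketch of the standard workaround, which runs the task on the shared thread pool instead:

        if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.HONEYCOMB) {
            new Move().executeOnExecutor(AsyncTask.THREAD_POOL_EXECUTOR);
        } else {
            new Move().execute(); // pre-11 execute() already used a thread pool
        }

    If the tasks must stay ordered relative to each other, the real fix is to find and shorten whichever long-running task is hogging the serial executor.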

    Read the article

  • How to pass a time interval in a ListPreference option

    - by user1748932
    <ListPreference
        android:entries="@array/listOptions2"
        android:entryValues="@array/listValues2"
        android:key="listprefrefresh"
        android:summary="Set refresh for the application"
        android:title="Set time interval" />

        <integer-array name="listOptions2">
            <item>10</item>
            <item>30</item>
        </integer-array>
        <integer-array name="listValues2">
            <item>10000</item>
            <item>30000</item>
        </integer-array>

        public static final String PREF_BEER_SIZE2 = "listprefrefresh";

        Preference beerPref2 = (Preference) findPreference(PREF_BEER_SIZE2);
        beerPref2.setOnPreferenceChangeListener(new Preference.OnPreferenceChangeListener() {
            public boolean onPreferenceChange(Preference preference, Object newValue) {
                final ListPreference listrefresh = (ListPreference) preference;
                final int idx = listrefresh.findIndexOfValue((String) newValue);
                if (idx == 0) {
                    handler.post(timedTask);
                } else if (idx == 1) {
                    System.out.println("2");
                }
                return true;
            }
        });

    This is my code. I want to pass the time interval; right now I am passing an integer value. How can I implement this? Please tell me.
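    Two notes, offered as a sketch rather than a drop-in fix. ListPreference persists its value as a String, so the entryValues array should be declared as a string-array (an integer-array here is a common source of crashes), and the chosen value can then be used directly as the delay instead of branching on the index:

        <string-array name="listValues2">
            <item>10000</item>
            <item>30000</item>
        </string-array>

        public boolean onPreferenceChange(Preference preference, Object newValue) {
            long interval = Long.parseLong((String) newValue); // 10000 or 30000 ms
            handler.removeCallbacks(timedTask);                // drop any pending run
            handler.postDelayed(timedTask, interval);          // reschedule with the new delay
            return true;
        }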

    Read the article
