Search Results

Search found 1364 results on 55 pages for 'sec goat'.

Page 46/55 | < Previous Page | 42 43 44 45 46 47 48 49 50 51 52 53  | Next Page >

  • Ruby script/console and Ruby script/server using two different DBs?

    - by aronchick
    Has anyone seen script/console and script/server load two different databases (though both report using the same)? Here's the first output:

        $ script/server
        => Booting WEBrick
        => Rails 2.3.5 application starting on http://0.0.0.0:3000
        => Call with -d to detach
        => Ctrl-C to shutdown server
        [2010-03-21 15:54:05] INFO  WEBrick 1.3.1
        [2010-03-21 15:54:05] INFO  ruby 1.8.7 (2010-01-10) [i386-mingw32]
        [2010-03-21 15:54:05] INFO  WEBrick::HTTPServer#start: pid=7148 port=3000

    No errors. I then run my standard code for entering a form - no problems. Checking the dev database (.yml at bottom):

        mysql> select * from books;
        [...]
        | 712 | Book | Book Name | 2010-03-21 22:29:22 | 2010-03-21 22:29:22 |
        [...]
        712 rows in set (0.00 sec)

    The code clearly saved it seconds ago. And now here's the output of script/console:

        $ script/console
        Loading development environment (Rails 2.3.5)
        >> Books.all
        => []

    Nothing. Upon further inspection, it's using the production database, but I can't figure out why. Any thoughts? All consoles have been closed and reopened.

    UPDATE: Requested .yml file (I can't see how it'd be helpful; the username and password are the same for each):

        development:
          adapter: mysql
          database: BooksDBdev
          username: <user name>
          password: <long string>
          timeout: 5000

        # Warning: The database defined as "test" will be erased and
        # re-generated from your development database when you run "rake".
        # Do not set this db to the same as development or production.
        test:
          adapter: mysql
          database: BooksDBtest
          username: <user name>
          password: <long string>
          timeout: 5000

        production:
          adapter: mysql
          database: BooksDB
          username: <user name>
          password: <long string>
          timeout: 5000

  • MySQL Cursor Issue

    - by James Inman
    I've got the following code - this is the first time I've really attempted using cursors.

        DELIMITER $$

        DROP PROCEDURE IF EXISTS demo$$
        DROP TABLE IF EXISTS temp$$

        CREATE TEMPORARY TABLE temp(
            id INTEGER NOT NULL AUTO_INCREMENT,
            start DATETIME NOT NULL,
            end DATETIME NOT NULL,
            PRIMARY KEY(id)
        ) $$

        CREATE PROCEDURE demo()
        BEGIN
            DECLARE done INT DEFAULT 0;
            DECLARE a, b DATETIME;
            DECLARE cur1 CURSOR FOR
                SELECT MAX(end) AS end
                FROM (
                    SELECT id, start, end,
                           @r := @r + (start > @edate) AS num,
                           @edate := GREATEST(@edate, end)
                    FROM ( SELECT @r := 0, @edate := '0001-01-01' ) vars,
                         student_lectures
                    WHERE ( student_id = 1
                            AND start >= '2010-04-26 00:00:00'
                            AND end <= '2010-04-30 23:59:59' )
                    ORDER BY start
                ) q
                GROUP BY num;
            DECLARE cur2 CURSOR FOR
                SELECT MIN(start) AS start
                FROM (
                    SELECT id, start, end,
                           @r := @r + (start > @edate) AS num,
                           @edate := GREATEST(@edate, end)
                    FROM ( SELECT @r := 0, @edate := '0001-01-01' ) vars,
                         student_lectures
                    WHERE ( student_id = 1
                            AND start >= '2010-04-26 00:00:00'
                            AND end <= '2010-04-30 23:59:59' )
                    ORDER BY start
                ) q
                GROUP BY num
                LIMIT 1, 18446744073709551615;
            DECLARE CONTINUE HANDLER FOR NOT FOUND SET done = 1;

            OPEN cur1;
            OPEN cur2;
            REPEAT
                FETCH cur1 INTO a;
                FETCH cur2 INTO b;
                IF NOT done THEN
                    INSERT INTO temp(start, end) VALUES(a, b);
                END IF;
            UNTIL done END REPEAT;
            CLOSE cur1;
            CLOSE cur2;
        END $$

        SELECT * FROM temp;

    I'm not getting anything output into the temp table. Running the following query gives me output, so I know there are rows it should be matching - but I imagine I've made some obvious mistake.

        SELECT MAX(end) AS end
        FROM (
            SELECT id, start, end,
                   @r := @r + (start > @edate) AS num,
                   @edate := GREATEST(@edate, end)
            FROM ( SELECT @r := 0, @edate := '0001-01-01' ) vars,
                 student_lectures
            WHERE ( student_id = 1
                    AND start >= '2010-04-26 00:00:00'
                    AND end <= '2010-04-30 23:59:59' )
            ORDER BY start
        ) q
        GROUP BY num;

    The output this query returns:

        +---------------------+
        | end                 |
        +---------------------+
        | 2010-04-26 13:00:00 |
        | 2010-04-26 18:15:00 |
        | 2010-04-27 11:00:00 |
        | 2010-04-27 13:00:00 |
        | 2010-04-27 18:15:00 |
        | 2010-04-28 13:00:00 |
        | 2010-04-29 13:00:00 |
        | 2010-04-29 18:15:00 |
        | 2010-04-30 13:00:00 |
        | 2010-04-30 15:15:00 |
        | 2010-04-30 17:15:00 |
        +---------------------+
        11 rows in set (0.02 sec)

  • Problem with fork/exec/kill when redirecting output in Perl

    - by Edu
    I created a script in Perl to run programs with a timeout. If the program being executed takes longer than the timeout, the script kills it and returns the message "TIMEOUT". The script worked quite well until I decided to redirect the output of the executed program. When stdout and stderr are being redirected, the program executed by the script is not killed, because it has a pid different from the one I got from fork. It seems Perl executes a shell that executes my program in the case of redirection. I would like to keep the output redirection but still be able to kill the program in the case of a timeout. Any ideas on how I could do that? A simplified version of my script is:

        #!/usr/bin/perl
        use strict;
        use warnings;
        use POSIX ":sys_wait_h";

        my $timeout = 5;
        my $cmd = "very_long_program 1>&2 > out.txt";

        my $pid = fork();
        if( $pid == 0 ) {
            exec($cmd) or print STDERR "Couldn't exec '$cmd': $!";
            exit(2);
        }

        my $time = 0;
        my $kid = waitpid($pid, WNOHANG);
        while ( $kid == 0 ) {
            sleep(1);
            $time++;
            $kid = waitpid($pid, WNOHANG);
            print "Waited $time sec, result $kid\n";
            if ($timeout > 0 && $time > $timeout) {
                print "TIMEOUT!\n";
                # Kill process
                kill 9, $pid;
                exit(3);
            }
        }
        if ( $kid == -1 ) {
            print "Process did not exist\n";
            exit(4);
        }
        print "Process exited with return code $?\n";
        exit($?);

    Thanks for any help.
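
    A common fix for this class of problem is to put the child in its own process group right after fork (setpgrp in Perl) and then signal the whole group, so the intermediate shell and the real program die together. A minimal sketch of the same idea in Python, assuming a POSIX system (command and timeout are placeholders, not from the original script):

        import os, signal, subprocess, time

        def run_with_timeout(cmd, timeout):
            # start_new_session=True puts the shell and its children
            # into their own process group / session.
            proc = subprocess.Popen(cmd, shell=True, start_new_session=True)
            deadline = time.time() + timeout
            while proc.poll() is None:
                if time.time() > deadline:
                    # Kill the whole group: the shell *and* the program it spawned.
                    os.killpg(os.getpgid(proc.pid), signal.SIGKILL)
                    return "TIMEOUT"
                time.sleep(1)
            return proc.returncode

        print(run_with_timeout("very_long_program > out.txt 2>&1", 5))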

  • UnknownHostException while redirecting queries to Google and getting results in a JSON object

    - by shilpa
        Loading classifier from D:\PROJECT\classifiers\NERDemo\classifiers\ner-eng-ie.crf-3-all2008.ser.gz ... done [2.0 sec].
        Original Query was riot in India.
        Parsing Queries and expanding tokens from the Ontologies..
        {locations=[India], events=[riot]}
        Search query is null
        Something went wrong...
        java.net.UnknownHostException: ajax.googleapis.com
            at java.net.PlainSocketImpl.connect(Unknown Source)
            at java.net.SocksSocketImpl.connect(Unknown Source)
            at java.net.Socket.connect(Unknown Source)
            at java.net.Socket.connect(Unknown Source)
            at sun.net.NetworkClient.doConnect(Unknown Source)
            at sun.net.www.http.HttpClient.openServer(Unknown Source)
            at sun.net.www.http.HttpClient.openServer(Unknown Source)
            at sun.net.www.http.HttpClient.<init>(Unknown Source)
            at sun.net.www.http.HttpClient.New(Unknown Source)
            at sun.net.www.http.HttpClient.New(Unknown Source)
            at sun.net.www.protocol.http.HttpURLConnection.getNewHttpClient(Unknown Source)
            at sun.net.www.protocol.http.HttpURLConnection.plainConnect(Unknown Source)
            at sun.net.www.protocol.http.HttpURLConnection.connect(Unknown Source)
            at sun.net.www.protocol.http.HttpURLConnection.getInputStream(Unknown Source)
            at org.girs2.SearchHandler.makeQuery(SearchHandler.java:35)
            at org.girs2.GIRS.search(GIRS.java:37)
            at org.girs2.GIRS.main(GIRS.java:62)
        Exception in thread "main" java.lang.NullPointerException
            at org.girs2.GIRS.search(GIRS.java:44)
            at org.girs2.GIRS.main(GIRS.java:62)

  • Insert many key-value pairs fast into Berkeley DB with hash access

    - by Kungi
    Hi, I'm trying to build a hash with Berkeley DB which will contain many tuples (approx. 18 GB of key-value pairs), but in all my tests the performance of the insert operations degrades drastically over time. I've written this script to test the performance:

        #include <iostream>
        #include <db_cxx.h>
        #include <ctime>

        #define MILLION 1000000

        int main()
        {
            long long a = 0;
            long long b = 0;
            int passes = 0;
            int i = 0;
            u_int32_t flags = DB_CREATE;

            Db* dbp = new Db(NULL, 0);
            dbp->set_cachesize(0, 1024 * 1024 * 1024, 1);
            int ret = dbp->open(NULL, "test.db", NULL, DB_HASH, flags, 0);

            time_t time1 = time(NULL);
            while (passes < 100) {
                while (i < MILLION) {
                    Dbt key(&a, sizeof(long long));
                    Dbt data(&b, sizeof(long long));
                    dbp->put(NULL, &key, &data, 0);
                    a++;
                    b++;
                    i++;
                }
                DbEnv* dbep = dbp->get_env();
                int tmp;
                dbep->memp_trickle(50, &tmp);
                i = 0;
                passes++;
                std::cout << "Inserted one million --> pass: " << passes
                          << " took: " << time(NULL) - time1 << " sec" << std::endl;
                time1 = time(NULL);
            }
        }

    Perhaps you can tell me why the "put" operation takes increasingly longer after some time, and maybe how to fix this. Thanks for your help, Andreas

  • MS SQL Server high Resource Waits and Head Blocker

    - by MartinHN
    Hi, I have a MS SQL Server 2008 Standard installation running a database for a webshop. The current size of the database is 2.5 GB, running on Windows 2008 Standard, dual Intel Xeon X5355 @ 2.00 GHz, 4 GB RAM.

    When I open the Activity Monitor, I see a Wait Time (ms/sec) of 5000 in the "Other" category, and in the Processes list, for all connections from the webshop, the Head Blocker value is 1. Every day I see that when I try to access the website, it can take 20-30 seconds before it even starts to work. I know that it is not network latency (I have a 301 redirect from the same server that is executed instantly). Once the first request has been served, it seems as if the server is no longer asleep, and every subsequent request is served instantly, with the speed of light.

    The problem was worse two weeks ago, until I changed every query to include WITH (NOLOCK). But I still experience the problem, and the wait times in the Activity Monitor are about the same. The largest table (Images) has 32764 rows (448576 KB). Some tables exceed 300000 rows, though they're much smaller in size than the Images table. I have the default clustered index for every primary key column, only. Any ideas?

  • Cocos2d shake/accelerometer issue

    - by Ryan Poolos
    So, a little backstory: I wanted to implement a particle effect and a sound effect, each lasting about 3 seconds, that play when the user shakes their iDevice. The first issue arrived when the built-in UIEvent for shakes refused to work, so I took the advice of a few Cocos2d veterans and just treated "violent" accelerometer inputs as shakes. That worked great until now. The problem is that if you keep shaking, it just stacks the particle and sound effects over and over, and this happens even if you are careful to try not to do so.

    What I'm hoping to do is disable the accelerometer when the particle/sound effects start and then re-enable it as soon as they finish. I don't know if I should do this by schedule, NSTimer, or some other function; I'm open to ALL suggestions. Here is my current "shake" code:

        - (void)accelerometer:(UIAccelerometer *)accelerometer didAccelerate:(UIAcceleration *)acceleration
        {
            const float violence = 1;
            static BOOL beenhere;
            BOOL shake = FALSE;
            if (beenhere) return;
            beenhere = TRUE;
            if (acceleration.x > violence * 1.5 || acceleration.x < (-1.5 * violence))
                shake = TRUE;
            if (acceleration.y > violence * 2 || acceleration.y < (-2 * violence))
                shake = TRUE;
            if (acceleration.z > violence * 3 || acceleration.z < (-3 * violence))
                shake = TRUE;
            if (shake) {
                id particleSystem = [CCParticleSystemQuad particleWithFile:@"particle.plist"];
                [self addChild: particleSystem];

                // Super simple audio playback for sound effects!
                [[SimpleAudioEngine sharedEngine] playEffect:@"Sound.mp3"];

                shake = FALSE;
            }
            beenhere = FALSE;
        }

  • How should I handle this Optimistic Concurrency error in this Entity Framework code I have?

    - by Pure.Krome
    Hi folks, I have the following pseudo code in a Repository Pattern project that uses EF4:

        public void Delete(int someId)
        {
            // 1. Load the entity for that Id. If there is none, then null.
            // 2. If entity != null, then DeleteObject(..);
        }

    Pretty simple, but I'm getting a run-time error:

        ConcurrencyException: Store, Update, Insert or Delete statement
        affected an unexpected number of rows (0).

    Now, this is what is happening: two instances of EF4 are running in the app at the same time. Instance A calls delete; Instance B calls delete a nanosecond later. Instance A loads the entity; Instance B also loads the entity. Instance A now deletes that entity - cool bananas. Instance B tries to delete the entity, but it's already gone. As such, the row count is 0 when it expected 1, or something like that. Basically, it figured out that the item it was supposed to delete didn't delete (because the delete happened a split second earlier). I'm not sure if this is a race condition or something.

    Anyway, are there any tricks I can do here so the second call doesn't crash? I could make it into a stored procedure, but I'm hoping to avoid that right now. Any ideas? I'm wondering if it's possible to lock that row (and that row only) when the select is called, forcing Instance B to wait until the row lock has been released. By that time the row is deleted, so when Instance B does its select the data is not there, so it will never delete.

  • Is there a way to increase performance on my simple text filter?

    - by djerry
    Hey guys, I'm writing a filter that will pick out items. I have a list of objects that contain a number, a name, and some other irrelevant items. At the moment the list contains 200 items. When the user types in a textbox, I check whether the string matches part of the number/name of the objects in the list; if so, I add them to the listbox. Here's the code for my textbox's TextChanged event:

        private void txtTelnumber_TextChanged(object sender, TextChangedEventArgs e)
        {
            lstOverview.Items.Clear();
            string data = "";
            foreach (ucTelListItem telList in _allUsers)
            {
                data = telList.User.H323 + telList.User.E164;
                if (data.Contains(txtTelnumber.Text))
                    lstOverview.Items.Add(telList);
            }
        }

    I sometimes see a little delay when entering a character, especially when I go from 4 records to 200 records (so when I had a filter and 4 records matched, and I press backspace and the whole list appears again). My list is a list of user controls, because I found it takes less time to load the user controls from a list than to initialize a new user control each time. Can I do something about the code, or is it just adding the user controls to the listbox that causes the small delay (small delay = < 1 sec)?

    Thanks in advance.
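
    One thing that usually helps in this situation is precomputing the concatenated search key once per item instead of rebuilding it on every keystroke, and only then testing each key against the query. A minimal sketch of the precompute-once idea, in Python for brevity (class and field names are hypothetical, not from the original code):

        class TelEntry:
            def __init__(self, h323, e164):
                self.h323 = h323
                self.e164 = e164
                # Built once at load time, not on every keystroke.
                self.search_key = (h323 + e164).lower()

        def filter_entries(entries, query):
            q = query.lower()
            return [e for e in entries if q in e.search_key]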

  • C++: Avoiding lots of boolean variables for multiple verification conditions in a trading app

    - by Naveen
    Hi, I am a junior dev on a trading app... we have an order refresh verification unit. It has to verify order confirmations from the exchange. We send a bunch of different requests in bulk (NEW, MODIFY, CANCEL) to the exchange. Verification has to happen at most N times, at intervals of T, for all orders. If verification succeeds for all the orders before N retries, fine; otherwise we need to mark verification as unsuccessful. I have done some basic coding, in a hurry, like below:

        for( N times )
        {
            for_each ( sent_request_order )   // SENT
            {
                1) get all the refreshed orders from DB or shared mem, i.e. REFRESHED
                2) find the current sent order in REFRESHED
                   if( not_found )  // not refreshed from exchange, continue to next order
                   if( found )
                       case NEW    : // check for new status, mark verification done
                       case MODIFY : // check for modified status..
                                     // if not, mark pending, go to next order,
                                     // revisit the same after T time
                       case CANCEL : // check for cancelled status..
                                     // if not, mark pending, go to next order,
                                     // revisit the same after T time
            }
            if( all_verified )
                exit from verification.
            wait ( T sec )
        }

    order_verification_pending, order_verification_done, order_visited, order_not_visited, all_verified, all_not_verified... lots of boolean flags used for indication. Is there a better approach for doing this, perhaps splitting responsibilities across classes? I know this is not a general question, but all these flags are making it tedious to handle.
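
    One common refactoring is to replace the pile of booleans with a single per-order state value (an enum) and let one loop drive the state machine. A rough sketch of that shape, in Python rather than C++, with all names hypothetical:

        import time
        from enum import Enum, auto

        class OrderState(Enum):
            PENDING = auto()
            VERIFIED = auto()
            FAILED = auto()

        def verify_orders(sent_orders, fetch_refreshed, max_retries, interval):
            states = {o.order_id: OrderState.PENDING for o in sent_orders}
            for _ in range(max_retries):
                refreshed = fetch_refreshed()  # dict: order_id -> exchange status
                for o in sent_orders:
                    if states[o.order_id] is OrderState.PENDING and o.order_id in refreshed:
                        # NEW/MODIFY/CANCEL each get their own status check.
                        if o.expected_status_matches(refreshed[o.order_id]):
                            states[o.order_id] = OrderState.VERIFIED
                if all(s is OrderState.VERIFIED for s in states.values()):
                    return True
                time.sleep(interval)
            # Anything still PENDING after N retries is unverified.
            return False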

  • MySQL FULLTEXT not working

    - by Ross
    I'm attempting to add search support to my PHP web app using MySQL's FULLTEXT indexes. I created a test table (using the MyISAM type, with a single text field a) and entered some sample data. Now, if I'm right, the following query should return both of those rows:

        SELECT * FROM test WHERE MATCH(a) AGAINST('databases')

    However, it returns none. I've done a bit of research, and as far as I can tell I'm doing everything right - the table is a MyISAM table, and the FULLTEXT indexes are set. I've tried running the query from the prompt and from phpMyAdmin, with no luck. Am I missing something crucial?

    UPDATE: Ok, while Cody's solution worked in my test case, it doesn't seem to work on my actual table:

        CREATE TABLE IF NOT EXISTS `uploads` (
          `id` int(11) NOT NULL AUTO_INCREMENT,
          `name` text NOT NULL,
          `size` int(11) NOT NULL,
          `type` text NOT NULL,
          `alias` text NOT NULL,
          `md5sum` text NOT NULL,
          `uploaded` datetime NOT NULL,
          PRIMARY KEY (`id`)
        ) ENGINE=MyISAM DEFAULT CHARSET=latin1 AUTO_INCREMENT=6 ;

    And the data I'm using:

        INSERT INTO `uploads` (`id`, `name`, `size`, `type`, `alias`, `md5sum`, `uploaded`) VALUES
        (1, '04 Sickman.mp3', 5261182, 'audio/mp3', '1', 'df2eb6a360fbfa8e0c9893aadc2289de', '2009-07-14 16:08:02'),
        (2, '07 Dirt.mp3', 5056435, 'audio/mp3', '2', 'edcb873a75c94b5d0368681e4bd9ca41', '2009-07-14 16:08:08'),
        (3, 'header_bg2.png', 16765, 'image/png', '3', '5bc5cb5c45c7fa329dc881a8476a2af6', '2009-07-14 16:08:30'),
        (4, 'page_top_right2.png', 5299, 'image/png', '4', '53ea39f826b7c7aeba11060c0d8f4e81', '2009-07-14 16:08:37'),
        (5, 'todo.txt', 392, 'text/plain', '5', '7ee46db77d1b98b145c9a95444d8dc67', '2009-07-14 16:08:46');

    The query I'm now running is:

        SELECT * FROM `uploads` WHERE MATCH(name) AGAINST ('header' IN BOOLEAN MODE)

    which should return row 3, header_bg2.png. Instead I get another empty result set. My options for boolean searching are below:

        mysql> show variables like 'ft_%';
        +--------------------------+----------------+
        | Variable_name            | Value          |
        +--------------------------+----------------+
        | ft_boolean_syntax        | + -><()~*:""&| |
        | ft_max_word_len          | 84             |
        | ft_min_word_len          | 4              |
        | ft_query_expansion_limit | 20             |
        | ft_stopword_file         | (built-in)     |
        +--------------------------+----------------+
        5 rows in set (0.02 sec)

    "header" is within the word length restrictions, and I doubt it's a stop word (I'm not sure how to get the list). Any ideas?

  • PostgreSQL, Foreign Keys, Insert speed & Django

    - by Miles
    A few days ago, I ran into an unexpected performance problem with a pretty standard Django setup. For an upcoming feature, we have to regenerate a table hourly, containing about 100k rows of data, 9 MB on disk, 10 MB of indexes according to pgAdmin. The problem is that inserting them by any method literally takes ages, up to 3 minutes of 100% disk-busy time. That's not something you want on a production site. It doesn't matter whether the inserts are in a transaction or issued via plain INSERT, multi-row INSERT, COPY FROM, or even INSERT INTO t1 SELECT * FROM t2.

    After noticing this wasn't Django's fault, I followed a trial-and-error route, and hey, the problem disappeared after dropping all foreign keys! Instead of 3 minutes, the INSERT INTO ... SELECT FROM took less than a second to execute, which isn't too surprising for a table <= 20 MB on disk. What is weird is that PostgreSQL manages to slow down inserts by 180x just by using 3 foreign keys.

    Oh, the disk activity was pure writing, as everything is cached in RAM; only writes go to the disks. It looks like PostgreSQL is working very hard to touch every row in the referred tables, as 3 MB/sec * 180 s is way more data than the 20 MB this new table takes on disk. There was no WAL for the 180 s case; I was testing in psql directly (in Django, add ~50% overhead for WAL logging). I tried @commit_on_success, same slowness; I had even implemented multi-row INSERT and COPY FROM with psycopg2. That's another weird thing - how can 10 MB worth of inserts generate 10 x 16 MB log segments?

    Table layout: id serial primary, a bunch of int32, 3 foreign keys to:

        small table, 198 rows, 16k on disk
        large table, 1.2M rows, 59 data + 89 index MB on disk
        large table, 2.2M rows, 198 + 210 MB

    So, am I doomed to either drop the foreign keys manually or use the table in a very un-Django way by defining saving bla_id x3 and skip using models.ForeignKey? I'd love to hear about some magical antidote / pg setting to fix this.

  • How to parse large XML files on Google App Engine?

    - by Alon Carmel
    Hey, I have a fairly large XML file, 1 MB in size, that I host on S3. I need to parse that XML file into my App Engine datastore entirely. I have written a simple DOM parser that works fine locally, but online it hits the 30 sec limit and stops. I tried to lighten the XML parsing by first downloading the XML file into a blob and then parsing the XML from the blob; the problem is that blobs are limited to 1 MB, so it fails. I also have multiple inserts to the datastore, which cause it to fail on the 30 sec limit. I saw somewhere a recommendation to use the Mapper class and save some exception state where the process stopped, but as I am a Python n00b I can't figure out how to implement it on a DOM parser or a SAX one (please provide an example?) or how to use it.

    Right now I'm doing a pretty bad thing: I parse the XML using PHP outside App Engine and push the data via HTTP POST to App Engine using a proprietary API, which works fine but is stupid and makes me maintain two code bases. Can you please help me out?
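
    Since a SAX example was requested: the standard-library xml.sax handler below streams the file element by element instead of building a DOM, so memory stays flat and work can be checkpointed between records. This is only a sketch; the "item" element name and the save_entity() datastore call are placeholders, not part of the original code:

        import xml.sax

        class ItemHandler(xml.sax.ContentHandler):
            """Collect one <item> at a time and hand it off as a dict."""
            def __init__(self):
                xml.sax.ContentHandler.__init__(self)
                self.current = None
                self.text = []

            def startElement(self, name, attrs):
                if name == "item":          # hypothetical record element
                    self.current = {}
                self.text = []

            def characters(self, content):
                self.text.append(content)

            def endElement(self, name):
                if self.current is None:
                    return
                if name == "item":
                    save_entity(self.current)   # hypothetical datastore put
                    self.current = None
                else:
                    self.current[name] = "".join(self.text).strip()

        xml.sax.parse("feed.xml", ItemHandler())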

  • Apache module: is it possible to have asynchronous processing?

    - by prashant2361
    Hi, I have a requirement where I need to send continuous updates to my clients. The client is a browser in this case. We have some data which updates every second, so once a client connects to our server, we maintain a persistent connection and keep pushing data to the client. I am looking for suggestions for this implementation at the server end. Basically what I need is this:

    1. A client connects to the server. I maintain the socket and metadata about the socket. The metadata contains what updates need to be sent to this client.
    2. The server process now waits for new client connections.
    3. One other process has the list of all the opened sockets, goes through each of them, and sends the updates if required.

    Can we do something like this in an Apache module?

    1. An Apache process gets the new connection. It maintains the state for the connection, keeps the state in some global memory, and returns to the root process to signify that it is done, so the root process can accept a new connection.
    2. The Apache process, though it has returned status to the root process, is also executing in parallel, going through its global store and sending updates to the clients, if any.

    So can an Apache process do these things:

    1. Have more than one connection associated with it?
    2. Asynchronously wait for new connections while at the same time processing the previous connections?

    Regards, Prashant

  • How can I convert seconds to minutes in jQuery while updating an element with the current time?

    - by pghtech
    So I see a number of ways to display a lot of seconds in a (static) hr/min/sec format. However, I am trying to produce a visual countdown timer:

        $('#someelement').html(minCounter + ' minutes ' + ((secCounter == 0) ? '' : (secCounter + ' seconds')));

    My counter is decremented inside a setInterval that triggers every 1 second:

        //.......
        var counter = redirectTimer;
        jQuery('#WarningDialogMsg').html(minCounter + ' minutes ' + ((secCounter == 0) ? '' : (secCounter + ' seconds')));
        //........
        setInterval( function() {
            counter -= 1;
            secCounter = Math.floor(counter % 60);
            minCounter = Math.floor(counter / 60);
            //.......
            $('#someelement').html(minCounter + ' minutes ' + ((secCounter == 0) ? '' : (secCounter + ' seconds')));
        }, 1000);

    It is a two-minute counter, but I don't want to display 120 seconds; I want to display 1 : 59 (and counting down). I have managed to get it to work using the above, but my main question is: is there a more elegant way to accomplish this? (Note: I am redirecting once counter == 0.)
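
    The minutes/seconds split here is just one integer division and one modulo, so the main room for elegance is factoring it into a small formatting helper. For what it's worth, a sketch of the same arithmetic in Python, where divmod does both operations at once (the helper name is hypothetical):

        def format_countdown(total_seconds):
            minutes, seconds = divmod(total_seconds, 60)
            return "%d : %02d" % (minutes, seconds)

        print(format_countdown(119))  # -> "1 : 59"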

  • Best practices, PHP, tracking millions of impressions per day

    - by John
    What do I have to do to make 20k MySQL inserts per second possible during peak hours (around 1k/sec during slower times)? I've been doing some research and I've seen the "INSERT DELAYED" suggestion, writing to a flat file with fopen(file, 'a') and then running a cron job to dump the "needed" data into MySQL, etc. I've also heard you need multiple servers and "load balancers", which I've never used, to make something like this work. I've also been looking at these "cloud server" thing-a-ma-jigs and their automatic scalability, but I'm not sure what's actually scalable.

    The application is just a tracker script, so if I have 100 websites that get 3 million page loads a day, there will be around 300 million inserts a day. The data will be run through a script every 15-30 minutes which will normalize the data and insert it into another MySQL table. How do the big dogs do it? How do the little dogs do it? I can't afford a huge server anymore, so any intuitive ways, if there are multiple ways of going at it, you smart people can think of... please let me know :)
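
    The flat-file/cron idea from the question is essentially write buffering, and the same effect can be had in memory: accumulate hits and flush them as one multi-row INSERT, which cuts round-trips dramatically. A sketch of that pattern, assuming a DB-API connection and a hypothetical hits table:

        import time

        class InsertBuffer:
            """Buffer tracker hits and flush them as one multi-row INSERT."""
            def __init__(self, conn, flush_size=500, flush_secs=2):
                self.conn = conn
                self.rows = []
                self.flush_size = flush_size
                self.flush_secs = flush_secs
                self.last_flush = time.time()

            def add(self, row):
                self.rows.append(row)
                if len(self.rows) >= self.flush_size or time.time() - self.last_flush >= self.flush_secs:
                    self.flush()

            def flush(self):
                if not self.rows:
                    return
                cur = self.conn.cursor()
                cur.executemany(
                    "INSERT INTO hits (site_id, url, ts) VALUES (%s, %s, %s)",
                    self.rows)
                self.conn.commit()
                self.rows = []
                self.last_flush = time.time()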

  • Is it possible to include a Sexpr before the expression has been evaluated in Sweave / R?

    - by PaulHurleyuk
    Hello, I'm writing a Sweave document, and I want to include a small section that details the R and package versions, the platform, and how long it took to evaluate the document - but I want to put this in the middle of the document! I was using a \Sexpr{elapsed} to do this (which didn't work), but thought that if I put the code printing elapsed in a chunk that evaluates at the end, I could then include the chunk halfway through. That also fails. My document looks something like this:

        %
        \documentclass[a4paper]{article}
        \usepackage[OT1]{fontenc}
        \usepackage{longtable}
        \usepackage{geometry}
        \usepackage{Sweave}
        \geometry{left=1.25in, right=1.25in, top=1in, bottom=1in}
        \begin{document}

        <<label=start, echo=FALSE, include=FALSE>>=
        startt<-proc.time()[3]
        @

        Text and Sweave Code in here

        %
        This document was created on \today, with
        \Sexpr{print(version$version.string)} running on a
        \Sexpr{print(version$platform)} platform. It took approx
        sec to process.

        <<>>=
        <<elapsed>>
        @

        More text and Sweave code in here

        <<label=bye, include=FALSE, echo=FALSE>>=
        odbcCloseAll()
        endt<-proc.time()[3]
        elapsedtime<-as.numeric(endt-startt)
        @

        <<label=elapsed, include=FALSE, echo=FALSE>>=
        print(elapsedtime)
        @

        \end{document}

    But this doesn't seem to work (amazingly!). Does anyone know how I could do this? Thanks, Paul.

  • Generation of random numbers in Java

    - by S.PRATHIBA
    Hi all, I want to create 30 tables which consist of the following fields. For example:

        Service_ID   Service_Type   consumer_feedback
        75           Computing       1
        35           Printer         0
        33           Printer        -1
        3 rows in set (0.00 sec)

        mysql> select * from consumer2;
        Service_ID   Service_Type   consumer_feedback
        42           data            0
        75           computing       0

        mysql> select * from consumer3;
        Service_ID   Service_Type   consumer_feedback
        43           data           -1
        41           data            1
        72           computing      -1

    As you can infer from the above tables, I am getting the feedback values. I have generated these consumer_feedback values, Service_ID, and Service_Type using the concept of random numbers. I have used:

        int min1 = 31; // printer
        int max1 = 35; // these values are generated if the Service_Type is printer
        int provider1 = (int) (Math.random() * (max1 - min1 + 1)) + min1;

        int min2 = 41; // data
        int max2 = 45;
        int provider2 = (int) (Math.random() * (max2 - min2 + 1)) + min2;

        int min3 = 71; // computing
        int max3 = 75;
        int provider3 = (int) (Math.random() * (max3 - min3 + 1)) + min3;

        int min5 = -1; // feedback values
        int max5 = 1;
        int feedback = (int) (Math.random() * (max5 - min5 + 1)) + min5;

    I need the Service_Types to be distributed uniformly in all 30 tables. Similarly, I need the feedback value of 1 to be generated many more times than 0 and -1. Please help me.
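
    To make 1 come up more often than 0 and -1, the usual trick is weighted sampling: draw against cumulative weights rather than a uniform range. A sketch in Python (the 3:1:1 weighting is an assumption; Java would do the same by comparing one Math.random() draw against cumulative thresholds):

        import random

        # Feedback of 1 is three times as likely as 0 or -1.
        feedback = random.choices([-1, 0, 1], weights=[1, 1, 3])[0]

        # A uniform pick keeps service types evenly distributed.
        service_type = random.choice(["Printer", "data", "Computing"])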

  • Changing Apache 2.2.11 httpd.conf has no effect

    - by Adrian
    Hi, hopefully someone can help here. I recently installed WampServer ver 2.0 with Apache ver 2.2.11. My issue is, I have some large PHP scripts which time out at the default 5 min (300 sec) browser limit (I'm using IE8). It is critical I get this limit extended. I have tried changing the httpd.conf file to include the following:

        TimeOut 1200

    My objective was to set the timeout to 1200 seconds, or 20 min. I had just chosen a random location to place this directive within the httpd.conf file, as I cannot locate any documentation to suggest it belongs in a specific place within the file. Regardless, the changes I make appear in the httpd.conf file that can be found in the system tray for WampServer; however, they have no effect - the browser still times out after 5 minutes. I thought perhaps I had the capitals incorrect, so I changed it to:

        Timeout 1200

    This change had no effect either. Can someone please help? This is very frustrating. Maybe the command can only be used within a specific module? If so, I have no idea which one, nor do I know the syntax to specify this. Regards, Adrian.

  • Efficiently Serving Dynamic Content in Google App Engine

    - by awegawef
    My app on Google App Engine returns content items (just text) and comments on them. It works like this (pseudo-ish code):

        query: get keys of latest content   # query to datastore
        for each item in content:
            if item_dict in memcache:
                use item_dict
            else:
                build_item_dict(item)       # by fetching from datastore
                store item_dict in memcache
        send all item_dicts to template

    Sorry if the code isn't understandable. I get all of the content dictionaries and send them to the template, which uses them to create the webpage. My problem is that if the memcache has expired, then for each item I want to display I have to (1) look up the item in memcache, (2) since no memcache entry exists, fetch the item from the datastore, and (3) store the item in memcache. These calls add up quickly. I don't set an expiry time for the entries in the memcache, so this really only happens once in the morning, but the webpage takes long enough to load (~1 sec) that the browser reports it as not existing. Normally, my webpages take about 50 ms to load. This approach works decently for frequent visits, but it has its flaws, as shown above. How can I remedy this? The entries are dynamic enough that I don't think it would be in my best interest to cache my initial request.

    Thanks in advance.
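
    One way to cut the per-item round-trips on a cold cache is to batch them: App Engine's memcache API has get_multi/set_multi, so all the lookups and all the stores collapse into one RPC each. A hedged sketch of that, where build_item_dict stands in for the datastore fetch from the question:

        from google.appengine.api import memcache

        def get_item_dicts(keys):
            # One RPC for every cached item ...
            item_dicts = memcache.get_multi(keys)
            # ... one datastore fetch per miss ...
            missing = [k for k in keys if k not in item_dicts]
            built = dict((k, build_item_dict(k)) for k in missing)
            # ... and one RPC to cache all the misses.
            if built:
                memcache.set_multi(built)
            item_dicts.update(built)
            return item_dicts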

  • MySQL query does not return any data

    - by Alex L
    Hi, I need to retrieve data from a specific time period. The query works fine until I specify the time period. Is there something wrong with the way I specify the time period? I know there are many entries within that time frame. This query returns empty:

        SELECT stop_times.stop_id,
               STR_TO_DATE(stop_times.arrival_time, '%H:%i:%s') as stopTime,
               routes.route_short_name, routes.route_long_name,
               trips.trip_headsign
        FROM trips
        JOIN stop_times ON trips.trip_id = stop_times.trip_id
        JOIN routes ON routes.route_id = trips.route_id
        WHERE stop_times.stop_id = 5508
        HAVING stopTime BETWEEN DATE_SUB(stopTime, INTERVAL 1 MINUTE)
                            AND DATE_ADD(stopTime, INTERVAL 20 MINUTE);

    Here is its EXPLAIN:

        +----+-------------+------------+--------+------------------+---------+---------+-------------------------------+------+-------------+
        | id | select_type | table      | type   | possible_keys    | key     | key_len | ref                           | rows | Extra       |
        +----+-------------+------------+--------+------------------+---------+---------+-------------------------------+------+-------------+
        |  1 | SIMPLE      | stop_times | ref    | trip_id,stop_id  | stop_id | 5       | const                         |  605 | Using where |
        |  1 | SIMPLE      | trips      | eq_ref | PRIMARY,route_id | PRIMARY | 4       | wmata_gtfs.stop_times.trip_id |    1 |             |
        |  1 | SIMPLE      | routes     | eq_ref | PRIMARY          | PRIMARY | 4       | wmata_gtfs.trips.route_id     |    1 |             |
        +----+-------------+------------+--------+------------------+---------+---------+-------------------------------+------+-------------+
        3 rows in set (0.00 sec)

    The query works if I remove the HAVING clause (i.e. don't specify a time range). It returns:

        +---------+----------+------------------+-----------------+---------------+
        | stop_id | stopTime | route_short_name | route_long_name | trip_headsign |
        +---------+----------+------------------+-----------------+---------------+
        | 5508    | 06:31:00 | "80"             | ""              | "FORT TOTTEN" |
        | 5508    | 06:57:00 | "80"             | ""              | "FORT TOTTEN" |
        | 5508    | 07:23:00 | "80"             | ""              | "FORT TOTTEN" |
        | 5508    | 07:49:00 | "80"             | ""              | "FORT TOTTEN" |
        | 5508    | 08:15:00 | "80"             | ""              | "FORT TOTTEN" |
        | 5508    | 08:41:00 | "80"             | ""              | "FORT TOTTEN" |
        | 5508    | 09:08:00 | "80"             | ""              | "FORT TOTTEN" |

    I am using Google Transit format data loaded into MySQL. The query is supposed to provide stop times and bus routes for a given bus stop. For a bus stop, I am trying to get:

        Route Name
        Bus Name
        Bus Direction (headsign)
        Stop time

    The results should be limited only to bus times from 1 min ago to 20 min from now. Please let me know if you can help.

  • MySQL-Python is ignoring my my.cnf file. Where does it get its information?

    - by ?????
    When I try to use MySQL-Python (via SQLAlchemy) I get the error:

          File "/opt/local/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/MySQL_python-1.2.3c1-py2.6-macosx-10.6-x86_64.egg/MySQLdb/connections.py", line 188, in __init__
            super(Connection, self).__init__(*args, **kwargs2)
        sqlalchemy.exc.OperationalError: (OperationalError) (2002, "Can't connect to local MySQL server through socket '/opt/local/var/run/mysql5/mysqld.sock' (2)") None None

    but no other MySQL client on my machine has this problem! My my.cnf file states:

        [client]
        port   = 3306
        socket = /tmp/mysql/mysql.sock

        [safe_mysqld]
        socket = /tmp/mysql/mysql.sock

        [mysqld_safe]
        socket = /tmp/mysql/mysql.sock

        [mysqld]
        socket = /tmp/mysql/mysql.sock
        port   = 3306

    and the mysql.sock file is, indeed, located in /tmp/mysql. I verified that ~/.my.cnf and /var/lib/mysql/my.cnf aren't overriding it. The mysql5 client program, etc., has no trouble connecting, and neither does a Groovy/Grails installation on the same machine using JDBC/MySQL:

        thrilllap-2:~ swirsky$ mysql5
        Welcome to the MySQL monitor.  Commands end with ; or \g.
        Your MySQL connection id is 6
        Server version: 5.1.47 Source distribution

        Copyright (c) 2000, 2010, Oracle and/or its affiliates. All rights reserved.
        This software comes with ABSOLUTELY NO WARRANTY. This is free software,
        and you are welcome to modify and redistribute it under the GPL v2 license

        Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

        mysql> show databases;
        +--------------------+
        | Database           |
        +--------------------+
        | information_schema |
        | test               |
        +--------------------+
        2 rows in set (0.00 sec)

        mysql>

    Why can't MySQLdb for Python figure this out? Where would it look, if not the my.cnf files?
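
    Worth noting when debugging this: MySQLdb does not read option files unless asked to; the socket path in the error is the default compiled into the client library. A hedged sketch of pointing it at the right config or socket explicitly (paths and credentials are assumptions from the question):

        import MySQLdb

        # Make MySQLdb read the same option file the other clients use ...
        conn = MySQLdb.connect(read_default_file="/etc/my.cnf", db="test")

        # ... or, going through SQLAlchemy, hand the socket to the driver:
        from sqlalchemy import create_engine
        engine = create_engine(
            "mysql://user:password@localhost/test",
            connect_args={"unix_socket": "/tmp/mysql/mysql.sock"},
        )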

  • C#: How to pass messages from an unsafe callback to managed code?

    - by maxima120
    Is there a simple example of how to pass messages from an unsafe callback to managed code? I have a proprietary DLL which receives some messages packed in structs, all of which arrive at a callback function. An example of usage is as follows, but it calls unsafe code too. I want to pass the messages into my application, which is all managed code.

    P.S. I have no experience with interop or unsafe code. I used to develop in C++ 8 years ago, but remember very little from those nightmarish times :)

    P.P.S. The application is loaded as hell; the original devs claim it processes 2 million messages per sec. I need the most efficient solution.

        static unsafe int OnCoreCallback(IntPtr pSys, IntPtr pMsg)
        {
            // Alias structure pointers to the pointers passed in.
            CoreSystem* pCoreSys = (CoreSystem*)pSys;
            CoreMessage* pCoreMsg = (CoreMessage*)pMsg;

            // Message handler function.
            if (pCoreMsg->MessageType == Core.MSG_STATUS)
                OnCoreStatus(pCoreSys, pCoreMsg);

            // Continue running
            return (int)Core.CALLBACKRETURN_CONTINUE;
        }

    Thank you.

  • Problem getting HTTP response in Chrome

    - by Bhaskasr
    I'm trying to get an HTTP response from a PHP web service in JavaScript, but I'm getting null in Firefox and Chrome. Please tell me where I'm making a mistake. Here is my code:

        function fetch_details() {
            if (window.XMLHttpRequest) {
                xhttp = new XMLHttpRequest();
                alert("first");
            } else {
                xhttp = new ActiveXObject("Microsoft.XMLHTTP");
                alert("sec");
            }
            xhttp.open("GET", "url.com", false);
            xhttp.send("");
            xmlDoc = xhttp.responseXML;
            alert(xmlDoc.getElementsByTagName("Inbox")[0].childNodes[0].nodeValue);
        }

    I have also tried it with Ajax, but again I'm not getting the HTTP response. Here is that code:

        var xmlhttp = null;
        var url = "url.com";
        if (window.XMLHttpRequest) {
            xmlhttp = new XMLHttpRequest();
            alert(xmlhttp);
            // Make sure that the browser supports overrideMimeType
            if (typeof xmlhttp.overrideMimeType != 'undefined') {
                xmlhttp.overrideMimeType('text/xml');
            }
        } else if (window.ActiveXObject) {
            xmlhttp = new ActiveXObject("Microsoft.XMLHTTP");
        } else {
            alert('Perhaps your browser does not support xmlhttprequests?');
        }
        xmlhttp.open('GET', url, true);
        xmlhttp.onreadystatechange = function() {
            if (xmlhttp.readyState == 4) {
                alert(xmlhttp.responseXML);
            }
        };

        // Make the actual request
        xmlhttp.send(null);

    I am getting:

        xmlhttp.readyState = 4
        xmlhttp.status = 0
        xmlhttp.responseText = ""

    Please tell me where I'm making a mistake.
