Search Results

Search found 36506 results on 1461 pages for 'unsigned long long int'.

Page 56 of 1461

  • Symfony: embedRelation() controlling options for nesting multiple levels of relations

    - by wulftone
    Hey all, I'm trying to set some conditional statements for nested embedRelation() instances, and can't find a way to get any kind of option through to the second embedRelation(). I've got a "Measure-Page-Question" table relationship, and I'd like to be able to choose whether or not to display the Question table. For example, say I have two "success" pages, page1Success.php and page2Success.php. On page1, I'd like to display "Measure-Page-Question", and on page2, I'd like to display "Measure-Page", but I need a way to pass an "option" to the PageForm.class.php file to make that kind of decision. My actions.class.php file has something like this, to pass an option to the "Page":

        // actions.class.php
        $this->form = new measureForm($measure, array('option' => $option));

    ...but passing that option through "Page" into "Question" doesn't work. My measureForm.class.php file has an embedRelation() in it that is dependent on the "option":

        // measureForm.class.php
        if ($this->getOption('option') == "page_1") {
            $this->embedRelation('Page');
        }

    ...and this is what I'd like to do in my pageForm.class.php file:

        // pageForm.class.php
        if ($this->getOption('option') == "page_1") { // or != "page_2", or whatever
            $this->embedRelation('Question');
        }

    I can't seem to find a way to do this. Any ideas? Is there a preferred Symfony way of doing this type of operation, perhaps without embedRelation()? Thanks, -Trevor

    As requested, here's my schema.yml:

        # schema.yml
        Measure:
          connection: doctrine
          tableName: measure
          columns:
            _kp_mid:
              type: integer(4)
              fixed: false
              unsigned: false
              primary: true
              autoincrement: true
            description:
              type: string()
              fixed: false
              unsigned: false
              primary: false
              notnull: false
              autoincrement: false
            frequency:
              type: integer(4)
              fixed: false
              unsigned: false
              primary: false
              notnull: false
              autoincrement: false
          relations:
            Page:
              local: _kp_mid
              foreign: _kf_mid
              type: many
        Page:
          connection: doctrine
          tableName: page
          columns:
            _kp_pid:
              type: integer(4)
              fixed: false
              unsigned: false
              primary: true
              autoincrement: true
            _kf_mid:
              type: integer(4)
              fixed: false
              unsigned: false
              primary: false
              notnull: false
              autoincrement: false
            next:
              type: string()
              fixed: false
              unsigned: false
              primary: false
              notnull: false
              autoincrement: false
            number:
              type: string()
              fixed: false
              unsigned: false
              primary: false
              notnull: false
              autoincrement: false
            previous:
              type: string()
              fixed: false
              unsigned: false
              primary: false
              notnull: false
              autoincrement: false
          relations:
            Measure:
              local: _kf_mid
              foreign: _kp_mid
              type: one
            Question:
              local: _kp_pid
              foreign: _kf_pid
              type: many
        Question:
          connection: doctrine
          tableName: question
          columns:
            _kp_qid:
              type: integer(4)
              fixed: false
              unsigned: false
              primary: true
              autoincrement: true
            _kf_pid:
              type: integer(4)
              fixed: false
              unsigned: false
              primary: false
              notnull: false
              autoincrement: false
            text:
              type: string()
              fixed: false
              unsigned: false
              primary: false
              notnull: false
              autoincrement: false
            type:
              type: integer(4)
              fixed: false
              unsigned: false
              primary: false
              notnull: false
              autoincrement: false
          relations:
            Page:
              local: _kf_pid
              foreign: _kp_pid
              type: one

    Read the article

  • What is considered a long execution time?

    - by stjowa
    I am trying to figure out just how "efficient" my server-side code is. Using start and end microtime(true) values, I am able to calculate the time it took my script to run. I am getting times from .3 - .5 seconds. These scripts do a number of database queries to return different values to the user. What is considered an efficient execution time for PHP scripts that will be run online for a website? Note: I know it depends on exactly what is being done, but just consider this a standard script that reads from a database and returns values to the user. Also, I look at Google and see them search the internet in .15 seconds and I feel like my script is crap. Thanks.
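
    To see where those 0.3-0.5 seconds actually go, timing each query separately is more telling than one end-to-end number. A minimal sketch of the idea (in Python for illustration; the PHP equivalent wraps each query in microtime(true) calls, and run_query below is a hypothetical stand-in for your own database helper):

        import time

        def timed(label, fn, *args):
            # Wall-clock one unit of work so the slow queries stand out.
            start = time.perf_counter()
            result = fn(*args)
            print(f"{label}: {time.perf_counter() - start:.4f}s")
            return result

        # Hypothetical usage around your own data-access helpers:
        # user = timed("load user", run_query, "SELECT ... FROM users ...")
        # rows = timed("load stats", run_query, "SELECT ... FROM stats ...")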

    Read the article

  • Lucene search taking TOO long.

    - by Josh Handel
    I'm using Lucene.net (2.9.2.2) on a (currently) 70 GB index. I can do a fairly complicated search and get all the document IDs back in 1-2 seconds, but actually loading up all the hits (about 700 thousand in my test queries) takes 5+ minutes. We aren't using Lucene for UI; this is a datastore between processes where we have hundreds of millions of pre-cached data elements, and the part I am working on exports a few specific fields from each found document. (Ergo, pagination doesn't make sense, as this is an export between processes.) My question is: what is the best way to get all of the documents in a search result? Currently I am using a custom collector that does a get on the document (with a MapFieldSelector) as it's collecting. I've also tried iterating through the list after the collector has finished, but that was even worse. I'm open to ideas :-). Thanks in advance.

    Read the article

  • Polling duplex does not scale... what's the alternative?

    - by user80855
    Our tests showed that the polling duplex binding simply does not scale and cannot be used on a service within a web farm or even a web garden. We have looked at TCP/IP sockets for a client push method, but a firewall issue does not allow us to use sockets. I was wondering what the alternative "free" solution to this problem is, allowing us to scale and allowing us to push data to the client. I have also tried the solution in this article http://tomasz.janczuk.org/2009/09/scale-out-of-silverlight-http-polling.html but in the end there was too much polling on the database, and performance was affected. Our Silverlight application needs a pub/sub design, but it needs to be reliable and scalable... any ideas?

    Read the article

  • Large margin caused by a long title

    - by Kevin Koenen
    I try to show a list of offers. When an advertiser inserts a long title, an item (offer) is shown with a margin that shouldn't be there. I've tried different layouts but that doesn't work. Link to image :) You see, some of the items have a large margin. Here's the code:

        <?xml version="1.0" encoding="utf-8"?>
        <RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android"
            android:id="@+id/gallerylayout"
            android:layout_width="fill_parent"
            android:layout_height="fill_parent">
            <LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
                android:layout_height="wrap_content"
                android:gravity="left|center"
                android:layout_width="wrap_content"
                android:paddingBottom="2dp"
                android:paddingTop="2dp"
                android:paddingLeft="5dp"
                android:background="#20000000"
                android:cacheColorHint="#00000000">
                <ImageView android:id="@+id/ad_image_small"
                    android:layout_width="wrap_content"
                    android:layout_height="fill_parent"
                    android:layout_marginRight="6dip"
                    android:src="@drawable/ic_launcher"/>
                <LinearLayout android:orientation="vertical"
                    android:layout_width="0dip"
                    android:layout_weight="1"
                    android:layout_height="fill_parent">
                    <TextView android:id="@+id/ad_id"
                        android:visibility="gone"
                        android:layout_width="wrap_content"
                        android:layout_height="wrap_content"/>
                    <TextView android:id="@+id/ad_img_url"
                        android:visibility="gone"
                        android:layout_width="wrap_content"
                        android:layout_height="wrap_content"/>
                    <LinearLayout android:orientation="horizontal"
                        android:layout_width="fill_parent"
                        android:layout_height="wrap_content">
                        <TextView android:id="@+id/ad_title"
                            android:layout_width="wrap_content"
                            android:layout_height="wrap_content"
                            android:padding="3dp"
                            android:gravity="left"
                            android:textColor="#000000"
                            android:ellipsize="end"
                            android:singleLine="true"/>
                        <TextView android:id="@+id/ad_km"
                            android:layout_width="match_parent"
                            android:layout_height="wrap_content"
                            android:layout_margin="3dp"
                            android:layout_marginBottom="1dp"
                            android:textColor="#000000"
                            android:gravity="right"/>
                    </LinearLayout>
                    <LinearLayout android:orientation="horizontal"
                        android:layout_width="fill_parent"
                        android:layout_height="wrap_content">
                        <TextView android:id="@+id/ad_text"
                            android:layout_width="wrap_content"
                            android:layout_height="wrap_content"
                            android:layout_marginLeft="3dp"
                            android:textColor="#0099CC"
                            android:ellipsize="end"
                            android:singleLine="true"/>
                    </LinearLayout>
                    <LinearLayout android:orientation="horizontal"
                        android:layout_width="fill_parent"
                        android:layout_height="wrap_content"
                        android:layout_margin="2dp"
                        android:padding="5dp">
                        <ImageView android:id="@+id/ad_clock"
                            android:layout_width="20dp"
                            android:layout_height="20dp"
                            android:layout_marginRight="5dip"
                            android:src="@drawable/icon_clock"/>
                        <ProgressBar android:id="@+id/progressbar"
                            android:layout_width="match_parent"
                            android:layout_height="8dp"
                            android:layout_margin="5dp"
                            android:paddingLeft="2dp"
                            android:max="100"
                            style="?android:attr/progressBarStyleHorizontal"
                            android:progressDrawable="@drawable/color_progressbar"/>
                    </LinearLayout>
                </LinearLayout>
            </LinearLayout>
            <ImageView android:id="@+id/overlay_image"
                android:background="#0000"
                android:src="@drawable/verlopen2x"
                android:layout_alignTop="@id/ad_image_small"
                android:layout_alignBottom="@id/ad_image_small"
                android:layout_width="fill_parent"
                android:layout_height="wrap_content"
                android:adjustViewBounds="true"
                android:visibility="visible"/>
        </RelativeLayout>

    I can't see what I'm doing wrong. Thanks in advance!

    Read the article

  • Google Code Jam 2010 Large DataSets Take Too Long to Submit

    - by Travis
    Hey guys, I'm participating in the 2010 Code Jam and I solved two of the problems for the small data sets, but I'm not even close to solving the large data sets in the 8-minute time frame. I'm wondering if anyone out there has solved the large data set: What hardware were you running on? What language were you running? What performance tuning techniques did you apply to your code to make it run as fast as possible? I'm writing the solutions in Ruby, which is not my day-to-day language, and executing them on my MacBook Pro. My solutions for problem A and problem C are on GitHub at http://github.com/tjboudreaux/codejam2010. I'd appreciate any suggestions that you may have. FWIW, I have a lot of experience in C++ from college, my primary language is PHP, and my "sandbox" language is Ruby. Was I just a bit too ambitious taking a shot at this in Ruby without knowing where the language struggles for performance, or does anyone see anything that's a red flag as to why I can't complete the large dataset in time to submit?

    Read the article

  • Trilateration using 3 latitude and longitude points, and 3 distances

    - by nohat
    There exists an unknown target location (latitude and longitude co-ordinates). I have 3 latitude and longitude co-ordinate pairs, and for each pair a distance in kilometers to the target location. How can I calculate the co-ordinates of the target location? For example, say I have the following data points:

        37.418436,-121.963477   0.265710701754 km
        37.417243,-121.961889   0.234592423446 km
        37.418692,-121.960194   0.0548954278262 km

    What I'd like to know is: what would the guts of a function look like that takes those as input and returns 37.417959,-121.961954 as output? I understand how to calculate the distance between two points, from http://www.movable-type.co.uk/scripts/latlong.html and I understand the general principle that with three circles you get exactly one point of overlap. What I'm hazy on is the math needed to calculate that point given this input.
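
    A hedged sketch of that math (Python, assuming a spherical Earth of radius 6371 km, so expect small errors against geodesic distances): convert each fix to 3-D coordinates, run the standard three-sphere intersection, and convert the result back to latitude/longitude:

        import math

        R = 6371.0  # mean Earth radius in km (a spherical approximation)

        def to_xyz(lat, lon):
            # Latitude/longitude in degrees -> 3-D point on a sphere of radius R.
            lat, lon = math.radians(lat), math.radians(lon)
            return (R * math.cos(lat) * math.cos(lon),
                    R * math.cos(lat) * math.sin(lon),
                    R * math.sin(lat))

        def trilaterate(p1, r1, p2, r2, p3, r3):
            # p1..p3 are (lat, lon) pairs; r1..r3 are distances in km.
            P1, P2, P3 = to_xyz(*p1), to_xyz(*p2), to_xyz(*p3)
            sub = lambda a, b: tuple(x - y for x, y in zip(a, b))
            dot = lambda a, b: sum(x * y for x, y in zip(a, b))
            norm = lambda a: math.sqrt(dot(a, a))
            # Build an orthonormal frame with P1 at the origin (the
            # standard three-sphere intersection construction).
            d = norm(sub(P2, P1))
            ex = tuple(x / d for x in sub(P2, P1))
            i = dot(ex, sub(P3, P1))
            t = sub(sub(P3, P1), tuple(i * x for x in ex))
            ey = tuple(x / norm(t) for x in t)
            ez = (ex[1] * ey[2] - ex[2] * ey[1],
                  ex[2] * ey[0] - ex[0] * ey[2],
                  ex[0] * ey[1] - ex[1] * ey[0])
            j = dot(ey, sub(P3, P1))
            x = (r1 * r1 - r2 * r2 + d * d) / (2 * d)
            y = (r1 * r1 - r3 * r3 + i * i + j * j) / (2 * j) - (i / j) * x
            z = math.sqrt(max(r1 * r1 - x * x - y * y, 0.0))  # clamp noise
            # Two mirror solutions exist (+z / -z); for points on the
            # surface z is tiny, so take +z.
            p = tuple(P1[k] + x * ex[k] + y * ey[k] + z * ez[k] for k in range(3))
            return (math.degrees(math.asin(p[2] / norm(p))),
                    math.degrees(math.atan2(p[1], p[0])))

        print(trilaterate((37.418436, -121.963477), 0.265710701754,
                          (37.417243, -121.961889), 0.234592423446,
                          (37.418692, -121.960194), 0.0548954278262))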

    Read the article

  • Bat file using WinRAR taking too long to run

    - by Jessie
    Hi guys, I have this script, which extracts all my folders and files from my c:\projects location, puts them into WinRAR and transfers them to c:\backup\project:

        for /f "delims==" %%D in ('DIR C:\projects /A /B /S') do (
            "C:\Program Files\WinRAR\WinRAR.EXE" m -r "c:\backup\projects.rar" "%%D"
        )

    I have also tried the script below, which uses the same source c:\projects but puts each folder into its own separate WinRAR archive, as in the source, then transfers the archives into my c:\backup:

        FOR /F "DELIMS==" %%D IN ('DIR C:\projects /AD /B') DO (
            "C:\Program Files\WinRAR\WinRAR.EXE" m -r "C:\Backup\%%D.rar" "%%D"
        )

    My question is: my second script only takes two hours to run, while my first script takes over 24 hours. Is there any way to make my first script faster? If anything, shouldn't my first script be faster?

    Read the article

  • Eclipse refresh taking too long

    - by Nash0
    I am doing TDD on a large Java project in Eclipse and am finding it frustrating, because every time I run a test I have to wait 30+ seconds for Eclipse to compile and refresh. I estimate that 80%+ of that time is spent refreshing. Is there a way I can drastically reduce the amount of refreshing it is doing? I have looked at several other similar questions but I could not see anything that helps. One way I reduced the compile/refresh time was to split the unit tests and code into separate projects. There are 4,700 classes in the src project and 300 in the tests. I am running Eclipse 3.5.1 on Java 1.6.0_17-b04 (eclipse.vm). My computer is running Windows XP with 3.1 GB of usable RAM. The only plugin I have installed is Subclipse.

    Read the article

  • Cleaning up temp folder after long-running subprocess exits

    - by dbr
    I have a Python script (running inside another application) which generates a bunch of temporary images. I then use subprocess to launch an application to view these. When the image-viewing process exits, I want to remove the temporary images. I can't do this from Python, as the Python process may have exited before the subprocess completes. I.e. I cannot do the following:

        p = subprocess.Popen(["imgviewer", "/example/image1.jpg", "/example/image2.jpg"])
        p.communicate()
        os.unlink("/example/image1.jpg")
        os.unlink("/example/image2.jpg")

    ...as this blocks the main thread, nor could I check for the pid exiting in a thread, etc. The only solution I can think of means I have to use shell=True, which I would rather avoid:

        cmd = ['imgviewer']
        cmd.append("/example/image2.jpg")
        for x in cleanup:
            cmd.extend(["&&", "rm", x])
        cmdstr = " ".join(cmd)
        subprocess.Popen(cmdstr, shell=True)

    This works, but is hardly elegant, and will fail with filenames containing spaces, etc. Basically, I have a background subprocess, and want to remove the temp files when it exits, even if the Python process no longer exists.
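
    One possible shape for this (a sketch, reusing the imgviewer command and filenames from above): spawn a tiny detached Python helper that runs the viewer itself, waits for it to exit, and then unlinks the temp files. The helper outlives the parent process, and nothing goes through the shell, so spaces in filenames are safe:

        import subprocess, sys, textwrap

        # The helper below is ordinary Python handed to "python -c": it
        # runs the viewer, waits for it, then deletes the listed files.
        helper_src = textwrap.dedent("""
            import os, subprocess, sys
            sep = sys.argv.index('--')
            viewer_cmd, temp_files = sys.argv[1:sep], sys.argv[sep + 1:]
            subprocess.call(viewer_cmd)      # blocks until the viewer exits
            for path in temp_files:
                try:
                    os.unlink(path)
                except OSError:
                    pass                     # already gone, nothing to do
        """)

        images = ["/example/image1.jpg", "/example/image2.jpg"]
        subprocess.Popen([sys.executable, "-c", helper_src,
                          "imgviewer"] + images + ["--"] + images)
        # The parent may exit immediately; the helper lives on and
        # cleans up once the viewer is closed.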

    Read the article

  • Bulk Insert takes 4x as long on first operation of the day

    - by patrick
    I do bulk inserts into a table with about 14 million rows at five-minute increments during a 7-hour period during the day. These inserts take somewhere between 9 and 14 seconds. However, the first insert always takes about 40 seconds. Anyone know what SQL Server 2005 would be doing differently on the first insert into a table for that day? From what I've read, I should probably use the SqlBulkCopy class instead of just using a bulk insert in a stored procedure. Is that the general consensus?

    Read the article

  • Getting really weird long Contact Group names

    - by Pentium10
    When looking at the Contact Groups on Google Contacts or in the People application of my HTC Legend phone, I get the group names fine, e.g. Friends, Family, VIP, Favorite, etc. But in my application I get really wrong names, such as:

        "Family" became "System Group: Family"
        "Friends" became "System Group: Friends"
        "Favorite" became "Favorite_5656100000000_3245664334564"

    I use the code below to read these values:

        public Cursor getFromSystem() {
            // Get the base URI for the People table in the Contacts
            // content provider.
            Uri contacts = ContactsContract.Groups.CONTENT_URI;

            // Make the query.
            ContentResolver cr = ctx.getContentResolver();

            // Form an array specifying which columns to return.
            String[] projection = new String[] {
                ContactsContract.Groups._ID,
                ContactsContract.Groups.TITLE,
                ContactsContract.Groups.NOTES };

            Cursor managedCursor = cr.query(contacts,
                projection,
                ContactsContract.Groups.DELETED + "=0",
                null,
                ContactsContract.Groups.TITLE + " COLLATE LOCALIZED ASC");
            return managedCursor;
        }

    What am I missing?

    Read the article

  • Google Maps API - Get points along route between lat/long

    - by user311374
    I have a web site that I am trying to get completed, and I need to have the user click points on a map and then work out the route on the roads between those points. So the user clicks the first point on 1st Street, then clicks another point on 4th Street, and the map will find the best way to get there and plot the route on the map. I am assuming this can be done by requesting directions and parsing the result, but I have been searching for an hour now and can't find what I am looking for (maybe bad search terms). I need to be able to plot the route manually (?) so I can calculate the distance, etc., of the route as the user continues to click. The site that is in beta is http://www.RunMyRoute.com/UserRoutes/Create and you can see I am trying to create running routes. I want the user to have the option for the route to follow the roads versus just a straight line between two points on the map. Any help on this would be great! Simon.

    Read the article

  • MySQL query to conditionally update records

    - by Shakti Singh
    Hi, is there any single MySQL query which can update customer DOBs? I want to update those customers which have a DOB greater than the current date. For example: if a customer has a DOB in 2034, update it to 1934; if 2068, update it to 1968. There was a bug in my system: if you entered a date earlier than 1970, it was stored 100 years late (e.g. 2070 instead of 1970). The bug is solved now, but what about the customers which already have a wrong DOB? I have to update their DOBs. All customers are stored in the customer_entity table, and the entity_id is the customer_id. Details are as follows:

        mysql> desc customer_entity;
        +------------------+----------------------+------+-----+---------------------+----------------+
        | Field            | Type                 | Null | Key | Default             | Extra          |
        +------------------+----------------------+------+-----+---------------------+----------------+
        | entity_id        | int(10) unsigned     | NO   | PRI | NULL                | auto_increment |
        | entity_type_id   | smallint(8) unsigned | NO   | MUL | 0                   |                |
        | attribute_set_id | smallint(5) unsigned | NO   |     | 0                   |                |
        | website_id       | smallint(5) unsigned | YES  | MUL | NULL                |                |
        | email            | varchar(255)         | NO   | MUL |                     |                |
        | group_id         | smallint(3) unsigned | NO   |     | 0                   |                |
        | increment_id     | varchar(50)          | NO   |     |                     |                |
        | store_id         | smallint(5) unsigned | YES  | MUL | 0                   |                |
        | created_at       | datetime             | NO   |     | 0000-00-00 00:00:00 |                |
        | updated_at       | datetime             | NO   |     | 0000-00-00 00:00:00 |                |
        | is_active        | tinyint(1) unsigned  | NO   |     | 1                   |                |
        +------------------+----------------------+------+-----+---------------------+----------------+
        11 rows in set (0.00 sec)

    The DOB is stored in the customer_entity_datetime table; the value column contains the DOB. This table also stores the values of all the other attributes, such as fname, lname, etc., and the attribute with attribute_id 11 is the DOB attribute.

        mysql> desc customer_entity_datetime;
        +----------------+----------------------+------+-----+---------------------+----------------+
        | Field          | Type                 | Null | Key | Default             | Extra          |
        +----------------+----------------------+------+-----+---------------------+----------------+
        | value_id       | int(11)              | NO   | PRI | NULL                | auto_increment |
        | entity_type_id | smallint(8) unsigned | NO   | MUL | 0                   |                |
        | attribute_id   | smallint(5) unsigned | NO   | MUL | 0                   |                |
        | entity_id      | int(10) unsigned     | NO   | MUL | 0                   |                |
        | value          | datetime             | NO   |     | 0000-00-00 00:00:00 |                |
        +----------------+----------------------+------+-----+---------------------+----------------+
        5 rows in set (0.01 sec)

    Thanks.
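
    If the bad dates really are stored exactly 100 years late, a single conditional UPDATE over the DOB rows should do it. A sketch (assuming, per the description above, that attribute_id 11 is the DOB attribute; shown through the pymysql driver with placeholder credentials, but the UPDATE statement is the whole point):

        import pymysql  # any MySQL client works; the SQL is the point

        conn = pymysql.connect(host="localhost", user="app",
                               password="secret", database="shop")
        with conn.cursor() as cur:
            # Only future DOBs can be victims of the 1970 bug, so the
            # WHERE clause leaves every correct row untouched.
            cur.execute("""
                UPDATE customer_entity_datetime
                   SET value = DATE_SUB(value, INTERVAL 100 YEAR)
                 WHERE attribute_id = 11
                   AND value > NOW()
            """)
        conn.commit()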

    Read the article

  • Redirect and parse in real time the stdout of a long-running process in VB.NET

    - by Richard
    Hello there, this code executes "handbrakecli" (a command-line application) and places the output into a string:

        Dim p As Process = New Process
        p.StartInfo.FileName = "handbrakecli"
        p.StartInfo.Arguments = "-i [source] -o [destination]"
        p.StartInfo.UseShellExecute = False
        p.StartInfo.RedirectStandardOutput = True
        p.Start()
        Dim output As String = p.StandardOutput.ReadToEnd()
        p.WaitForExit()

    The problem is that this can take up to 20 minutes to complete, during which nothing will be reported back to the user. Once it's completed, they'll see all the output from the application, which includes progress details. Not very useful. Therefore I'm trying to find a sample that shows the best way to:

    1. Start an external application (hidden)
    2. Monitor its output periodically as it displays information about its progress (so I can extract this and present a nice percentage bar to the user)
    3. Determine when the external application has finished (so I can continue with my own application's execution)
    4. Kill the external application if necessary, and detect when this has happened (so that if the user hits "cancel", I can take the appropriate steps)

    Does anyone have any recommended code snippets?
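
    In .NET the usual hooks for this are on the Process class itself: OutputDataReceived with BeginOutputReadLine for incremental output, the Exited event for completion, and Kill for cancellation. The overall pattern, sketched in Python with placeholder paths (one caveat: HandBrake tends to redraw its progress line with carriage returns, so reading raw chunks instead of lines may be necessary):

        import subprocess

        p = subprocess.Popen(
            ["handbrakecli", "-i", "in.mp4", "-o", "out.mp4"],  # placeholders
            stdout=subprocess.PIPE, stderr=subprocess.STDOUT, text=True)

        for line in p.stdout:          # delivered as the encoder prints it
            if "%" in line:
                print("progress:", line.strip())  # parse the percentage here

        p.wait()                       # returncode says how the run ended
        # For "cancel": call p.kill() from the UI side, then inspect
        # p.returncode to tell a kill apart from a normal exit.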

    Read the article

  • Strange: planner takes the decision with lower cost, but (very) long query runtime

    - by S38
    Facts:

    - PGSQL 8.4.2, Linux
    - I make use of table inheritance
    - Each table contains 3 million rows
    - Indexes on joining columns are set
    - Table statistics (analyze, vacuum analyze) are up-to-date
    - Only used table is "node" with various partitioned sub-tables
    - Recursive query (pg >= 8.4)

    Now here is the explained query:

        WITH RECURSIVE rows AS (
            SELECT * FROM (
                SELECT r.id, r.set, r.parent, r.masterid
                FROM d_storage.node_dataset r
                WHERE masterid = 3533933
            ) q
            UNION ALL
            SELECT * FROM (
                SELECT c.id, c.set, c.parent, r.masterid
                FROM rows r
                JOIN a_storage.node c ON c.parent = r.id
            ) q
        )
        SELECT r.masterid, r.id AS nodeid FROM rows r

        QUERY PLAN
        -------------------------------------------------------------------------------
        CTE Scan on rows r  (cost=2742105.92..2862119.94 rows=6000701 width=16) (actual time=0.033..172111.204 rows=4 loops=1)
          CTE rows
            ->  Recursive Union  (cost=0.00..2742105.92 rows=6000701 width=28) (actual time=0.029..172111.183 rows=4 loops=1)
                  ->  Index Scan using node_dataset_masterid on node_dataset r  (cost=0.00..8.60 rows=1 width=28) (actual time=0.025..0.027 rows=1 loops=1)
                        Index Cond: (masterid = 3533933)
                  ->  Hash Join  (cost=0.33..262208.33 rows=600070 width=28) (actual time=40628.371..57370.361 rows=1 loops=3)
                        Hash Cond: (c.parent = r.id)
                        ->  Append  (cost=0.00..211202.04 rows=12001404 width=20) (actual time=0.011..46365.669 rows=12000004 loops=3)
                              ->  Seq Scan on node c  (cost=0.00..24.00 rows=1400 width=20) (actual time=0.002..0.002 rows=0 loops=3)
                              ->  Seq Scan on node_dataset c  (cost=0.00..55001.01 rows=3000001 width=20) (actual time=0.007..3426.593 rows=3000001 loops=3)
                              ->  Seq Scan on node_stammdaten c  (cost=0.00..52059.01 rows=3000001 width=20) (actual time=0.008..9049.189 rows=3000001 loops=3)
                              ->  Seq Scan on node_stammdaten_adresse c  (cost=0.00..52059.01 rows=3000001 width=20) (actual time=3.455..8381.725 rows=3000001 loops=3)
                              ->  Seq Scan on node_testdaten c  (cost=0.00..52059.01 rows=3000001 width=20) (actual time=1.810..5259.178 rows=3000001 loops=3)
                        ->  Hash  (cost=0.20..0.20 rows=10 width=16) (actual time=0.010..0.010 rows=1 loops=3)
                              ->  WorkTable Scan on rows r  (cost=0.00..0.20 rows=10 width=16) (actual time=0.002..0.004 rows=1 loops=3)
        Total runtime: 172111.371 ms
        (16 rows)

    So far so bad: the planner decides to choose hash joins (good) but no indexes (bad). Now, after doing the following:

        SET enable_hashjoin TO false;

    ...the explained query looks like this:

        QUERY PLAN
        -------------------------------------------------------------------------------
        CTE Scan on rows r  (cost=15198247.00..15318261.02 rows=6000701 width=16) (actual time=0.038..49.221 rows=4 loops=1)
          CTE rows
            ->  Recursive Union  (cost=0.00..15198247.00 rows=6000701 width=28) (actual time=0.032..49.201 rows=4 loops=1)
                  ->  Index Scan using node_dataset_masterid on node_dataset r  (cost=0.00..8.60 rows=1 width=28) (actual time=0.028..0.031 rows=1 loops=1)
                        Index Cond: (masterid = 3533933)
                  ->  Nested Loop  (cost=0.00..1507822.44 rows=600070 width=28) (actual time=10.384..16.382 rows=1 loops=3)
                        Join Filter: (r.id = c.parent)
                        ->  WorkTable Scan on rows r  (cost=0.00..0.20 rows=10 width=16) (actual time=0.001..0.003 rows=1 loops=3)
                        ->  Append  (cost=0.00..113264.67 rows=3001404 width=20) (actual time=8.546..12.268 rows=1 loops=4)
                              ->  Seq Scan on node c  (cost=0.00..24.00 rows=1400 width=20) (actual time=0.001..0.001 rows=0 loops=4)
                              ->  Bitmap Heap Scan on node_dataset c  (cost=58213.87..113214.88 rows=3000001 width=20) (actual time=1.906..1.906 rows=0 loops=4)
                                    Recheck Cond: (c.parent = r.id)
                                    ->  Bitmap Index Scan on node_dataset_parent  (cost=0.00..57463.87 rows=3000001 width=0) (actual time=1.903..1.903 rows=0 loops=4)
                                          Index Cond: (c.parent = r.id)
                              ->  Index Scan using node_stammdaten_parent on node_stammdaten c  (cost=0.00..8.60 rows=1 width=20) (actual time=3.272..3.273 rows=0 loops=4)
                                    Index Cond: (c.parent = r.id)
                              ->  Index Scan using node_stammdaten_adresse_parent on node_stammdaten_adresse c  (cost=0.00..8.60 rows=1 width=20) (actual time=4.333..4.333 rows=0 loops=4)
                                    Index Cond: (c.parent = r.id)
                              ->  Index Scan using node_testdaten_parent on node_testdaten c  (cost=0.00..8.60 rows=1 width=20) (actual time=2.745..2.746 rows=0 loops=4)
                                    Index Cond: (c.parent = r.id)
        Total runtime: 49.349 ms
        (21 rows)

    ...incredibly faster, because indexes were used. Notice: the cost of the second query is somewhat higher than that of the first query. So the main question is: why does the planner make the first decision instead of the second? Also interesting: via SET enable_seqscan TO false; I temporarily disabled seq scans. Then the planner used indexes and hash joins, and the query was still slow. So the problem seems to be the hash join. Maybe someone can help in this confusing situation? thx, R.

    Read the article

  • Very long strings as primary keys in a database for caching

    - by Bill Zimmerman
    Hi, I am working on a web app that allows users to create dynamic PDF files based on what they enter into a form (it is not very structured data). The idea is that User 1 enters several words (an arbitrary number of words, practically capped of course), for example:

        A B C D E

    There is no such string in the database, so I was thinking:

    1. Store this string as a primary key in a MySQL database (it could be maybe around 50-100k of text, but usually probably less than 200 words)
    2. Generate the PDF file, and create a link to it in the database
    3. When the next user requests A B C D E, serve the existing file instead of recreating it each time (a simple cache)

    The PDF is CPU-intensive to generate, so I am trying to cache as much as I can. My questions are:

    - Does anyone have any alternative ideas to my approach?
    - What will the database performance be like?
    - Is there a better way to design the schema than using the input string as the primary key?
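
    On the schema question, one common refinement (a sketch of an alternative, not a verdict on the approach above): key the cache on a fixed-length digest of the input string rather than the string itself, so the primary key stays small and index-friendly however long the input grows:

        import hashlib

        def cache_key(input_text: str) -> str:
            # Always 64 hex characters, however large the input; SHA-256
            # collisions are practically impossible at this scale.
            return hashlib.sha256(input_text.encode("utf-8")).hexdigest()

        # Hypothetical table: pdf_cache(key CHAR(64) PRIMARY KEY, pdf_path ...)
        print(cache_key("A B C D E"))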

    Read the article

  • How to make a UIButton work like the SpringBoard launcher when pressed for a long time interval

    - by KayKay
    In my ViewController I am using a UIButton which triggers some action when touched up inside. Up to here, no problem, but I also want to add another action for the touch-down event. More precisely, I don't know which event to attach an action to so that it works the same as when an app icon is long-pressed on the SpringBoard, causing the SpringBoard to enter editing mode. Actually, what I want is: when this button is held down for, say, about 2 seconds, it should pop up an alertView without the touch being lifted (with the finger still on the button). I have tried using the difference between two NSDates, one allocated on touchDown and the other on touchUpInside. It works and pops the alert, but only after touchUpInside. I want it to show the alert without the touch being removed. Here is the code:

        - (IBAction)touchedDown:(id)sender {
            momentTouchedDown = [[NSDate alloc] init]; // class variable
            NSLog(@"touched down");
        }

        - (IBAction)touchUpInside:(id)sender {
            NSLog(@"touch lifted up\n");
            NSDate *momentLifted = [[NSDate alloc] init];
            double timeInterval = [momentLifted timeIntervalSinceDate:momentTouchedDown];
            NSLog(@"time lifted = %@, time down = %@, time diff = %@",
                  momentLifted, momentTouchedDown,
                  [NSString stringWithFormat:@"%g", timeInterval]);
            [momentLifted release];
            if (timeInterval > 2.0) {
                NSLog(@"AlertBox has been fired");
                UIAlertView *alert = [[UIAlertView alloc] initWithTitle:@"Yes"
                                                                message:@"" // msg
                                                               delegate:self
                                                      cancelButtonTitle:@"Cancel"
                                                      otherButtonTitles:@"OK", nil];
                [alert show];
                [alert release];
            }
        }

    Please provide me an insight.. thanks for help in advance.
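
    The missing piece here is a timer armed on touch-down rather than a timestamp comparison made on touch-up; on iOS that role is typically played by an NSTimer scheduled in touchedDown: and invalidated in touchUpInside:. The shape of the logic, sketched in Python for neutrality:

        import threading

        class LongPressDetector:
            # Fires on_long_press while the "finger" is still down, instead
            # of comparing timestamps after the touch has been lifted.
            def __init__(self, delay=2.0, on_long_press=lambda: print("alert!")):
                self.delay = delay
                self.on_long_press = on_long_press
                self._timer = None

            def touch_down(self):
                self._timer = threading.Timer(self.delay, self.on_long_press)
                self._timer.start()

            def touch_up(self):
                if self._timer is not None:
                    self._timer.cancel()  # lifted early: not a long press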

    Read the article

  • sudo taking a long time

    - by Sam
    On an Ubuntu 9 64-bit Linux machine, sudo takes a long time to start: "sudo echo hi" takes 2-3 minutes. strace on sudo shows that poll("/etc/pam.d/system-auth", POLLIN) times out after 5 seconds and that there are multiple calls (maybe a loop) to the same system call, which causes the 2-3 minute delay. Any idea why sudo has to wait for /etc/pam.d/system-auth? Any tunable to make sudo time out faster? Thanks, Samuel

    Read the article

  • C# : DBConnection.Open() timeout is too long.

    - by leo
    Hi, I'm trying to connect to a server that the user inputs. When the server doesn't exist, I'd like to give quick feedback to the end user so he can correct what he's typed. Is there any way to test whether a server exists before trying to connect? Thanks
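
    One way to fail fast (a sketch in Python; the C# analogue would be a short-timeout TcpClient connect attempt before calling Open()): probe the server's port with a small socket timeout instead of waiting out the full connection timeout. Port 1433 is an assumption here (the SQL Server default), so adjust it to your database:

        import socket

        def server_reachable(host: str, port: int = 1433,
                             timeout: float = 2.0) -> bool:
            # A quick TCP probe: cheap to try, quick to fail.
            try:
                with socket.create_connection((host, port), timeout=timeout):
                    return True
            except OSError:
                return False

        # if not server_reachable(user_input): ask the user to re-check it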

    Read the article
