Search Results

Search found 585 results on 24 pages for 'milliseconds'.

Page 2/24 | < Previous Page | 1 2 3 4 5 6 7 8 9 10 11 12  | Next Page >

  • Rounding up milliseconds when printing with Joda Time

    - by RoToRa
    I'm implementing a count-down using Joda Time. I only need a display accuracy of seconds. However, when printing the time, the seconds are displayed as full seconds, so when the count-down reaches, for example, 900ms, "0" seconds is printed, but as a count-down it would make more sense to display "1" second until the time actually reaches 0ms. Example:

      void printDuration(Duration d) {
          System.out.println(
              d.toPeriod(PeriodType.time()).toString(
                  new PeriodFormatterBuilder().printZeroAlways().appendSeconds().toFormatter()));
      }

      printDuration(new Duration(5000)); // Prints "5" => OK
      printDuration(new Duration(4900)); // Prints "4" => need "5"
      printDuration(new Duration(1000)); // Prints "1" => OK
      printDuration(new Duration(900));  // Prints "0" => need "1"
      printDuration(new Duration(0));    // Prints "0" => OK

    Basically I need the seconds to be displayed rounded up from the milliseconds, not rounded down. Is there a way to achieve this with Joda without writing my own formatter?
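
    One workaround, offered as a sketch rather than a built-in Joda-Time feature: round the millisecond count up to the next whole second before converting to a Period, so the stock formatter prints the ceiling.

      import org.joda.time.Duration;
      import org.joda.time.PeriodType;
      import org.joda.time.format.PeriodFormatterBuilder;

      void printDurationRoundedUp(Duration d) {
          // Integer ceiling: 4900ms -> 5000ms, 900ms -> 1000ms, 0ms stays 0ms.
          long ceilMillis = ((d.getMillis() + 999) / 1000) * 1000;
          System.out.println(
              new Duration(ceilMillis).toPeriod(PeriodType.time()).toString(
                  new PeriodFormatterBuilder().printZeroAlways().appendSeconds().toFormatter()));
      }

    With this, printDurationRoundedUp(new Duration(4900)) prints "5", and a zero duration still prints "0".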

    Read the article

  • NSDate Convert milliseconds to NSDate.

    - by user208041
    Hi guys, I have this: 1273636800000 (Wed Mar 03 2010 00:00:00 GMT-0500 (EST)). I need to convert these milliseconds to an NSDate. I tried NSDate *tr = [NSDate dateWithTimeIntervalSince1970:1273636800000]; and NSDate *tr = [NSDate dateWithTimeIntervalSinceNow:1273636800000]; but I can't get it to work. Has anyone had this problem and found a solution?
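
    A likely fix, assuming the value is a Java-style epoch timestamp: dateWithTimeIntervalSince1970: expects seconds, not milliseconds, so divide by 1000 first.

      // 1273636800000 is milliseconds since the Unix epoch; NSDate wants seconds.
      NSTimeInterval seconds = 1273636800000.0 / 1000.0;
      NSDate *tr = [NSDate dateWithTimeIntervalSince1970:seconds];
      NSLog(@"%@", tr); // prints the corresponding date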

    Read the article

  • iPhone NSDate - get time in milliseconds and store in coredata

    - by satyam
    I'm implementing an iPhone app with Core Data functionality. I have an NSDate attribute in an entity which stores the date and time when the record is added. I'm adding records at a fast rate, about 1 record per 25 milliseconds. I want to store the date along with the milliseconds so that I can retrieve the data in descending order. How can I do this?
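
    One approach, sketched under the assumption that the attribute is a Date type: NSDate already carries sub-millisecond precision (it wraps a double of seconds), so records 25 milliseconds apart stay distinct and a fetch can sort on the attribute directly. The names newRecord, fetchRequest and createdAt below are placeholders.

      // Store the full-precision timestamp when inserting...
      [newRecord setValue:[NSDate date] forKey:@"createdAt"];

      // ...and sort descending when fetching; sub-second parts are compared too.
      NSSortDescriptor *byDate = [[NSSortDescriptor alloc] initWithKey:@"createdAt" ascending:NO];
      [fetchRequest setSortDescriptors:[NSArray arrayWithObject:byDate]];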

    Read the article

  • Show milliseconds with Android Chronometer

    - by Matthew Steeples
    I'm looking for a way to make the Chronometer in Android (preferably 1.6 and upwards) show 10ths of a second while counting up. Is it possible to do this? If not, is there a free (and preferably open source) library that does the same? Failing that I'll write my own, but I'd rather use someone else's!
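
    The stock Chronometer only ticks once per second, so a common substitute (sketched here with hypothetical names rather than a drop-in library) is a plain TextView driven by a Handler every 100 ms.

      import android.os.Handler;
      import android.os.SystemClock;

      // Sketch: update a TextView (timeView, hypothetical) with tenths of a second.
      private long startTime = SystemClock.elapsedRealtime();
      private final Handler handler = new Handler();
      private final Runnable tick = new Runnable() {
          public void run() {
              long elapsed = SystemClock.elapsedRealtime() - startTime;
              timeView.setText(String.format("%d:%02d.%d",
                      elapsed / 60000, (elapsed / 1000) % 60, (elapsed % 1000) / 100));
              handler.postDelayed(this, 100); // re-arm every 100 ms
          }
      };
      // Start with handler.post(tick); stop with handler.removeCallbacks(tick).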

    Read the article

  • AudioQueue ate my buffer (first 15 milliseconds of it)

    - by iter
    I am generating audio programmatically. I hear gaps of silence between my buffers. When I hook my phone to a scope, I see that the first few samples of each buffer are missing, and in their place is silence. The length of this silence varies from almost nothing to as much as 20 ms. My first thought was that my original callback function takes too much time, so I replaced it with the shortest one possible: it re-enqueues the same buffer over and over. I observe the same behavior.

      AudioQueueRef aq;
      AudioQueueBufferRef aq_buffer;
      AudioStreamBasicDescription asbd;

      void aq_callback(void *aqData, AudioQueueRef inAQ, AudioQueueBufferRef inBuffer) {
          OSStatus s = AudioQueueEnqueueBuffer(aq, aq_buffer, 0, NULL);
      }

      void aq_init(void) {
          OSStatus s;
          asbd.mSampleRate = AUDIO_SAMPLES_PER_S;
          asbd.mFormatID = kAudioFormatLinearPCM;
          asbd.mFormatFlags = kLinearPCMFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;
          asbd.mBytesPerPacket = 1;
          asbd.mFramesPerPacket = 1;
          asbd.mBytesPerFrame = 1;
          asbd.mChannelsPerFrame = 1;
          asbd.mBitsPerChannel = 8;
          asbd.mReserved = 0;

          int PPM_PACKETS_PER_SECOND = 50;
          // one buffer is as long as one PPM frame
          int BUFFER_SIZE_BYTES = asbd.mSampleRate / PPM_PACKETS_PER_SECOND * asbd.mBytesPerFrame;

          s = AudioQueueNewOutput(&asbd, aq_callback, NULL, CFRunLoopGetCurrent(),
                                  kCFRunLoopCommonModes, 0, &aq);
          s = AudioQueueAllocateBuffer(aq, BUFFER_SIZE_BYTES, &aq_buffer);

          // put samples in the buffer
          buffer_data(my_data, aq_buffer);

          s = AudioQueueStart(aq, NULL);
          s = AudioQueueEnqueueBuffer(aq, aq_buffer, 0, NULL);
      }
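
    One thing worth trying, offered as an educated guess rather than a confirmed fix: prime the queue before starting it. Calling AudioQueueStart on a queue with nothing enqueued can play silence until the first enqueue lands, which would look exactly like a missing buffer head.

      /* Enqueue the primed buffer first, then start the queue. */
      buffer_data(my_data, aq_buffer);
      s = AudioQueueEnqueueBuffer(aq, aq_buffer, 0, NULL);
      s = AudioQueueStart(aq, NULL);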

    Read the article

  • [r] Converting unix seconds in milliseconds to POSIXct/POSIXlt

    - by signalseeker
    Why do I see a difference when I convert a unix timestamp to a datetime object in R?

      > as.POSIXlt(1268736919, origin="1970-01-01", tz="America/New_York")
      [1] "2010-03-16 06:55:19 EDT"
      > as.POSIXct(1268736919, origin="1970-01-01", tz="America/New_York")
      [1] "2010-03-16 11:55:19 EDT"

    The result from POSIXlt is actually correct. Also, is there a way to do this conversion without specifying the origin? Thanks
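
    For the second part of the question: numeric input needs an explicit origin in this era of R, but a small wrapper can hide it. A sketch, with a made-up helper name:

      # epoch2ct is a hypothetical helper; the origin is still supplied internally.
      epoch2ct <- function(secs, tz = "America/New_York") {
        as.POSIXct(secs, origin = "1970-01-01", tz = tz)
      }
      epoch2ct(1268736919)  # should give "2010-03-16 06:55:19 EDT"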

    Read the article

  • Converting Openfire IM datetime values in SQL Server to / from VARCHAR(15) and DATETIME data types

    - by Brian Biales
    A client is using Openfire IM for their users and would like some custom queries to audit user conversations (which are stored by Openfire in tables in the SQL Server database). Because Openfire supports multiple database servers and multiple platforms, the designers chose to store all date/time stamps in the database as 15-character strings, which get converted to Java Date objects in their code (Openfire is written in Java). I did some digging around, and, so I don't forget and in case someone else finds this useful, I will put the simple algorithms here for converting back and forth between SQL DATETIME and the Java string representation. The Java string representation is the number of milliseconds since 1/1/1970. SQL Server's DATETIME is actually represented as a float, the value being the number of days since 1/1/1900, with the portion after the decimal point representing the hours/minutes/seconds/milliseconds as a fractional part of a day. Try this and you will see it is true:

      SELECT CAST(0 AS DATETIME)

    It returns the date 1/1/1900. The difference in days between SQL Server's zero date of 1/1/1900 and the Java representation's zero date of 1/1/1970 is found easily using the following SQL:

      SELECT DATEDIFF(D, '1900-01-01', '1970-01-01')

    which returns 25567: there are 25567 days between these dates. So to convert from the Java string to SQL Server's DATETIME, we need to convert the number of milliseconds to a floating-point number of days since 1/1/1970, then add 25567 to change this to the number of days since 1/1/1900. To convert to days, divide the number by 1000 ms/s, then by 60 seconds/minute, then by 60 minutes/hour, then by 24 hours/day; or simply divide by 1000*60*60*24, which is 86400000. So, to summarize: cast the string as a float, divide by 86400000 milliseconds/day, add 25567 days, and cast the resulting value to a DATETIME. Here is an example:

      DECLARE @tmp as VARCHAR(15)
      SET @tmp = '1268231722123'
      SELECT @tmp as JavaTime,
             CAST((CAST(@tmp AS FLOAT) / 86400000) + 25567 AS DATETIME) as SQLTime

    Converting from SQL DATETIME back to the Java time format is not quite as simple, I found, because floats of that size do not convert nicely to strings; they end up in scientific notation with the CONVERT or CAST functions. But I found a couple of ways around that problem. You can convert a date to the number of seconds since 1/1/1970 very easily using the DATEDIFF function, as this value fits in an int. If you don't need to worry about the milliseconds, simply cast this integer as a string and concatenate '000' at the end, essentially multiplying the number by 1000 and making it milliseconds since 1/1/1970. If, however, you do care about the milliseconds, you will need to use DATEPART to get the milliseconds part of the date, cast this integer to a string, pad zeros on the left to make sure it is three digits, and concatenate these three digits to the number-of-seconds string above. And finally, I discovered that by casting to DECIMAL(15,0) and then to VARCHAR(15), I avoid the scientific-notation issue. So here are all my examples; pick the one you like best...
    First, here is the simple approach if you don't care about the milliseconds:

      DECLARE @tmp as VARCHAR(15)
      DECLARE @dt as DATETIME
      SET @dt = '2010-03-10 14:35:22.123'
      SET @tmp = CAST(DATEDIFF(s, '1970-01-01 00:00:00', @dt) AS VARCHAR(15)) + '000'
      SELECT @tmp as JavaTime, @dt as SQLTime

    If you want to keep the milliseconds:

      DECLARE @tmp as VARCHAR(15)
      DECLARE @dt as DATETIME
      DECLARE @ms as int
      SET @dt = '2010-03-10 14:35:22.123'
      SET @ms = DATEPART(ms, @dt)
      SET @tmp = CAST(DATEDIFF(s, '1970-01-01 00:00:00', @dt) AS VARCHAR(15))
               + RIGHT('000' + CAST(@ms AS VARCHAR(3)), 3)
      SELECT @tmp as JavaTime, @dt as SQLTime

    Or, in one fell swoop:

      DECLARE @dt as DATETIME
      SET @dt = '2010-03-10 14:35:22.123'
      SELECT @dt as SQLTime,
             CAST(DATEDIFF(s, '1970-01-01 00:00:00', @dt) AS VARCHAR(15))
             + RIGHT('000' + CAST(DATEPART(ms, @dt) AS VARCHAR(3)), 3) as JavaTime

    And finally, a way to simply reverse the math used when converting from the Java date to the SQL date. Note the parentheses; watch out for operator precedence, as you want to subtract first, then multiply:

      DECLARE @dt as DATETIME
      SET @dt = '2010-03-10 14:35:22.123'
      SELECT @dt as SQLTime,
             CAST(CAST((CAST(@dt as FLOAT) - 25567.0) * 86400000.0 as DECIMAL(15,0)) as VARCHAR(15)) as JavaTime

    Interestingly, I found that converting to SQL DATETIME can lose some accuracy: when I converted the time above to Java time and then back to a DATETIME, the number of milliseconds was 120, not 123. As I am not interested in the milliseconds, this is fine for me, but you may want to look into using DATETIME2 in SQL Server 2008 for more accuracy.

    Read the article

  • convert string of millisecond into datetime in python

    - by newbie
    I am a newbie in Python. I want to subtract an interval from times in my log file, but the problem is that I cannot convert the millisecond strings in the log file into a datetime format. For example, I have 15:55:05.12345 and I want to remove 5.12345 seconds from this string and show a result of 15:55:00.00000 in Python. How can I do that? Currently, I am using Python 2.5. Thank you in advance.
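
    A sketch for Python 2.5, where strptime has no directive for fractional seconds, so the fraction is split off by hand before parsing:

      from datetime import datetime

      ts = "15:55:05.12345"
      hms, frac = ts.split(".")              # strptime cannot parse the fraction
      t = datetime.strptime(hms, "%H:%M:%S")
      # Removing 5.12345 s from 15:55:05.12345 leaves 15:55:00.00000, i.e.
      # zero out the whole seconds (the fraction was never parsed in).
      t = t.replace(second=0)
      print t.strftime("%H:%M:%S") + ".00000"  # -> 15:55:00.00000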

    Read the article

  • How do i convert from milliseconds to seconds units to be shown in a Label?

    - by Doron Muzar
    I have this code:

      private void Form1_MouseWheel(object sender, MouseEventArgs e)
      {
          if (leave == true)
          {
              if (e.Delta > 0)
              {
                  if (timer1.Interval < 5000)
                  {
                      timer1.Interval += 1000;
                      label2.Text = (timer1.Interval / 1000).ToString();
                  }
              }
              else
              {
                  if (timer1.Interval == 1000)
                  {
                      timer1.Interval -= 100;
                      label2.Text = (timer1.Interval / 1000).ToString();
                  }
              }
          }
      }

    The original timer1 interval in the designer is set to 1000 milliseconds. In the mouse-wheel event I show the interval in label2 in seconds, and indeed when I move the mouse wheel up it slows the timer and shows the seconds: 1 2 3 4 5. The problem is with the second part. I wanted that when it gets down to 1 second (1000 milliseconds), if I keep wheeling down, it shows the units in steps of 100 and changes timer1.Interval in steps of 100. So if label2 was at 1 second, I should now see 900 800 700 600 500 down to 100, and timer1.Interval should likewise change to 900, 800, 700, 600 milliseconds, down to 100. When it gets to 100 it should just stop there and not go under 100. The problem is with this part:

      if (timer1.Interval == 1000)
      {
          timer1.Interval -= 100;
          label2.Text = (timer1.Interval / 1000).ToString();
      }

    It's not working at all. EDIT: My code now:

      if (leave == true)
      {
          if (e.Delta > 0)
          {
              if (timer1.Interval < 5000)
              {
                  timer1.Interval += 1000;
                  label2.Text = (timer1.Interval / 1000).ToString();
              }
          }
          else
          {
              if (timer1.Interval > 1000)
              {
                  timer1.Interval -= 1000;
                  label2.Text = (timer1.Interval / 1000).ToString();
              }
              else if (timer1.Interval <= 1000 && timer1.Interval > 100)
              {
                  timer1.Interval -= 100;
                  label2.Text = (timer1.Interval / (double)1000).ToString();
              }
          }
      }

    But now if I was at 5 seconds (5000 milliseconds) and I move the wheel back down, it counts 5 4 3 2 1 0 and stops on 0. It doesn't show values under 1 (0.9, 0.8, 0.7) as it did before.
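
    A hedged observation on the code as posted: timer1.Interval is an int, so timer1.Interval / 1000 is integer division and 900 / 1000 yields 0. Any branch that still uses the integer form will print 0, so dividing by 1000.0 everywhere the label is set keeps the sub-second values visible:

      // Floating-point division in every branch, so 900 ms shows as "0.9".
      label2.Text = (timer1.Interval / 1000.0).ToString();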

    Read the article

  • What sort of things can cause a whole system to appear to hang for 100s-1000s of milliseconds?

    - by Ogapo
    I am working on a Windows game, and while rendering, some computers will experience intermittent pauses ("hitches" for lack of a better term). When profiled, they appear in seemingly random places in the code. Eventually I noticed that it wasn't just my process that was affected, but (seemingly) every process on the system. All of the threads in my application hitch at once. The CPU utilization drops during these hitches and it appears as if most processes make no progress. This leads me to believe this may be an operating system or driver issue, but it only occurs while playing the game (and only on some systems). What sort of operations might the operating system be doing that would require the kernel to pause all user threads and block? Some kind of I/O? At first I thought of paging, but my impression is that would only affect a single process, no? Some systems in use: Windows, DirectX (3D), nVidia cards (unknown if it replicates on ATI), using overlapped I/O for streaming.

    Read the article

  • How do you convert date taken from a bash script to milliseconds in a Java program?

    - by Matt Pascoe
    I am writing a piece of code in Java that needs to take a time sent from a bash script and parse it to milliseconds. When I check the millisecond conversion of the date, everything is correct except for the month, which comes out as January instead of March. Here is the variable I create in the bash script, which later in the script I pass to the Java program:

      TIME=`date +%m%d%Y_%H:%M:%S`

    Here is the Java code which parses the time to milliseconds:

      String dt = "${scriptstart}";
      java.text.SimpleDateFormat scriptStart = new java.text.SimpleDateFormat("MMDDyyyy_HH:mm:ss");
      long start = scriptStart.parse(dt).getTime();

    The goal of this statement is to find the elapsed time between the start of the script and the current system time. To troubleshoot this I printed out the two:

      System Time = 1269898069496 (converted = Mon Mar 29 2010 16:27:49 GMT-0500 (Central Daylight Time))
      Script Start = 03292010_16:27:45
      Script Start in Milli = 1264804065000 (converted = Fri Jan 29 2010 16:27:45 GMT-0600 (Central Standard Time))
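
    A hedged diagnosis: in SimpleDateFormat patterns, uppercase DD means day of year, so "03292010" is parsed with day-of-year 29, which lands on January 29 and overrides the month. Lowercase dd (day of month) should fix it:

      // dd = day of month; the original DD (day of year) explains the January result.
      java.text.SimpleDateFormat scriptStart = new java.text.SimpleDateFormat("MMddyyyy_HH:mm:ss");
      long start = scriptStart.parse("03292010_16:27:45").getTime();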

    Read the article

  • How to delay program for a certain number of milliseconds, or until a key is pressed?

    - by Jack
    I need to delay my program's execution for a specified number of milliseconds, but also want the user to be able to escape the wait when a key is pressed. If no key is pressed, the program should wait for the specified number of milliseconds. I have been using Thread.Sleep to halt the program (which in the context of my program I think is OK, as the UI is set to minimise during the execution of the main method). I have thought about doing something like this:

      while (GetAsyncKeyState(System.Windows.Forms.Keys.Escape) == 0 || waitTime > totalWait)
      {
          Thread.Sleep(100);
          waitTime += 100;
      }

    As Thread.Sleep will wait until at least the time specified before waking the thread up, there will obviously be a large unwanted extra delay as it is scaled up in the while loop. Is there some sort of method that will sleep for a specified amount of time but only while a condition holds true? Or is the above example the "correct" way to do it, just with a more accurate Sleep method? If so, what method can I use? Thanks in advance for your help.
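
    One way to keep the wait responsive without oversleeping, sketched with a Stopwatch and the same GetAsyncKeyState import as above (waitMs is the assumed total delay):

      var sw = System.Diagnostics.Stopwatch.StartNew();
      while (sw.ElapsedMilliseconds < waitMs &&
             GetAsyncKeyState(System.Windows.Forms.Keys.Escape) == 0)
      {
          System.Threading.Thread.Sleep(10); // short slices keep escape latency low
      }

    Each pass re-checks both the elapsed time and the key state, so the loop exits within roughly 10 ms of either the key press or the deadline.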

    Read the article

  • mysql query execution time - can i get this in milliseconds?

    - by Max Williams
    I'm comparing a few different approaches to getting some data in MySQL, directly at the console, using the SQL_NO_CACHE option to make sure MySQL keeps running the full query every time. MySQL gives me the execution time back in seconds, to two decimal places. I'd really like to get the result back in milliseconds (ideally to one or two decimal places) to get a better idea of improvements (or the lack of them). Is there an option I can set in MySQL to achieve this? Thanks, Max
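
    One option worth a look, offered as a sketch (the table name is illustrative): the session profiler reports per-query durations with microsecond resolution.

      SET profiling = 1;
      SELECT SQL_NO_CACHE COUNT(*) FROM my_table;  -- query under test
      SHOW PROFILES;  -- the Duration column shows seconds to microsecond precision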

    Read the article

  • Why can't I create a Date from a string including milliseconds?

    - by KooiInc
    In JavaScript you can create a Date object from a string, like:

      var mydate = new Date('2008/05/10 12:08:20');
      console.log(mydate); //=> Sat May 10 2008 12:08:20 GMT+0200

    Now try this using milliseconds in the string:

      var mydate = new Date('2008/05/10 12:08:20:551'); // or '2008/05/10 12:08:20.551'
      console.log(mydate); //=> NaN

    Just out of curiosity: why is this?
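
    Date string parsing beyond a handful of formats is implementation-defined, which is likely why the millisecond variant fails. A hedged workaround is to split the string into numeric fields and use the multi-argument Date constructor:

      // Months are zero-based in the Date constructor.
      var parts = '2008/05/10 12:08:20.551'.match(/\d+/g); // ["2008","05",...,"551"]
      var mydate = new Date(parts[0], parts[1] - 1, parts[2],
                            parts[3], parts[4], parts[5], parts[6]);
      console.log(mydate); //=> Sat May 10 2008 12:08:20 (plus 551 ms) local time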

    Read the article

  • Milliseconds in DateTime.Now on .NET Compact Framework always zero?

    - by Marcel
    Hi all, I want to have a timestamp for logs on a Windows Mobile project. The accuracy must be within a hundred milliseconds at least. However, my call to DateTime.Now returns a DateTime object with the Millisecond property set to zero, and the Ticks property is rounded accordingly. How do I get better time accuracy? Remember that my code runs on the Compact Framework, version 3.5. I use an HTC Touch Pro 2 device.

    Read the article

  • How can I (reasonably) precisely perform an action every N milliseconds?

    - by Jon Cage
    I have a machine which uses an NTP client to sync up to internet time, so its system clock should be fairly accurate. I've got an application which I'm developing that logs data in real time, processes it and then passes it on. What I'd like to do now is output that data every N milliseconds, aligned with the system clock. So for example if I wanted 20ms intervals, my outputs ought to be something like this:

      13:15:05:0000
      13:15:05:0020
      13:15:05:0040
      13:15:05:0060

    I've seen suggestions for using the Stopwatch class, but that only measures time spans as opposed to looking for specific time stamps. The code to do this is running in its own thread, so it shouldn't be a problem if I need to make some relatively blocking calls. Any suggestions on how to achieve this to a reasonable precision (close to or better than 1ms would be nice) would be very gratefully received.
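
    A minimal alignment loop, sketched under the assumption that Thread.Sleep granularity (typically 1 to 15 ms) is acceptable for the coarse wait: sleep until just before the next N-ms boundary of the system clock, then spin briefly for the edge. DoWork and running are placeholders.

      const int periodMs = 20;
      long periodTicks = periodMs * TimeSpan.TicksPerMillisecond; // ticks are 100 ns
      while (running)
      {
          long now = DateTime.Now.Ticks;
          long next = (now / periodTicks + 1) * periodTicks;      // next 20 ms boundary
          int coarse = (int)((next - now) / TimeSpan.TicksPerMillisecond) - 2;
          if (coarse > 0) System.Threading.Thread.Sleep(coarse);  // bulk of the wait
          while (DateTime.Now.Ticks < next) { }                   // short spin to the edge
          DoWork();  // fires near hh:mm:ss.0000, .0020, .0040, ...
      }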

    Read the article

  • An observation on .NET loops – foreach, for, while, do-while

    It's very common for .NET programmers to use a "foreach" loop to iterate through collections. The following is my observation from testing a simple scenario with loops: a "for" loop is about 30% faster than "foreach", a "while" loop is about 50% faster than "foreach", and "do-while" is a bit faster than "while". Someone may wonder what difference it makes when iterating only 1000 times in a loop. This test case is only for simple iteration. According to data-structure concepts, best and worst cases depend entirely on the data we feed the algorithm, so we cannot conclude that "foreach" is a bad choice. All I want to say is that we need to be a little cautious even when choosing loops. Example: you might choose quicksort to sort a large set of numbers, while bubble sort may be more effective than quicksort on a small one. Take a simple scenario: a request in a simple web application fetches 10,000 (10K) rows and iterates over them for some business logic. Now suppose this application is accessed by 1,000 (1K) people simultaneously. In this simple scenario you end up with 10,000,000 (10 million, or 1 crore) iterations. Below is the test scenario, a simple console application iterating over 100 million records:

      using System;
      using System.Collections.Generic;
      using System.Diagnostics;

      namespace ConsoleApplication1
      {
          class Program
          {
              static void Main(string[] args)
              {
                  var sw = new Stopwatch();
                  var numbers = GetSomeNumbers();

                  sw.Start();
                  foreach (var item in numbers) { }
                  sw.Stop();
                  Console.WriteLine(String.Format("\"foreach\" took {0} milliseconds", sw.ElapsedMilliseconds));

                  sw.Reset();
                  sw.Start();
                  for (int i = 0; i < numbers.Count; i++) { }
                  sw.Stop();
                  Console.WriteLine(String.Format("\"for\" loop took {0} milliseconds", sw.ElapsedMilliseconds));

                  sw.Reset();
                  sw.Start();
                  var it = 0;
                  while (it++ < numbers.Count) { }
                  sw.Stop();
                  Console.WriteLine(String.Format("\"while\" loop took {0} milliseconds", sw.ElapsedMilliseconds));

                  sw.Reset();
                  sw.Start();
                  var it2 = 0;
                  do { } while (it2++ < numbers.Count);
                  sw.Stop();
                  Console.WriteLine(String.Format("\"do-while\" loop took {0} milliseconds", sw.ElapsedMilliseconds));
              }

              #region Get me 10 crore (100 million) numbers
              private static List<int> GetSomeNumbers()
              {
                  var lstNumbers = new List<int>();
                  var count = 100000000;
                  for (var i = 1; i <= count; i++)
                  {
                      lstNumbers.Add(i);
                  }
                  return lstNumbers;
              }
              #endregion
          }
      }

    In the above example, I was just iterating through 100 million numbers, and you can see the time each of the loops provided in .NET takes. Output:

      "foreach" took 1108 milliseconds
      "for" loop took 727 milliseconds
      "while" loop took 596 milliseconds
      "do-while" loop took 594 milliseconds

      Press any key to continue . . .

    So I feel we need to be careful when choosing a looping strategy. Please comment with your thoughts.

    Read the article

  • Milliseconds in DateTime.Now on .NET Compact Framework always zero? [SOLVED]

    - by Marcel
    Hi all, I want to have a timestamp for logs on a Windows Mobile project. The accuracy must be within a hundred milliseconds at least. However, my call to DateTime.Now returns a DateTime object with the Millisecond property set to zero, and the Ticks property is rounded accordingly. How do I get better time accuracy? Remember that my code runs on the Compact Framework, version 3.5. I use an HTC Touch Pro 2 device. Based on the answer from MusiGenesis I have created the following class, which solved this problem:

      /// <summary>
      /// A more precise implementation of some DateTime properties on mobile devices.
      /// </summary>
      /// <devdoc>Tested on an HTC Touch Pro2.</devdoc>
      public static class DateTimePrecisely
      {
          /// <summary>
          /// Remembers the start time when this model was created.
          /// </summary>
          private static DateTime _start = DateTime.Now;

          /// <summary>
          /// Remembers the system uptime ticks when this model was created. This
          /// serves as a more precise time provider than DateTime.Now can.
          /// </summary>
          private static int _startTick = Environment.TickCount;

          /// <summary>
          /// Gets a DateTime object that is set exactly to the current date and time
          /// on this computer, expressed as the local time.
          /// </summary>
          public static DateTime Now
          {
              get { return _start.AddMilliseconds(Environment.TickCount - _startTick); }
          }
      }

    Read the article

  • Generating the query plan takes 5 minutes, the query itself runs in milliseconds. What's up?

    - by TheImirOfGroofunkistan
    I have a fairly complex (or ugly depending on how you look at it) stored procedure running on SQL Server 2008. It bases a lot of the logic on a view that has a pk table and a fk table. The fk table is left joined to the pk table slightly more than 30 times (the fk table has a poor design - it uses name value pairs that I need to flatten out. Unfortunately, it's 3rd party and I cannot change it). Anyway, it had been running fine for weeks until I periodically noticed a run that would take 3-5 minutes. It turns out that this is the time it takes to generate the query plan. Once the query plan exists and is cached, the stored procedure itself runs very efficiently. Things run smoothly until there is a reason to regenerate and cache the query plan again. Has anyone seen this? Why does it take so long to generate the plan? Are there ways to make it come up with a plan faster?
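
    A couple of hedged levers aimed at compile time rather than run time: OPTION (FORCE ORDER) keeps the optimizer from exploring join permutations, which is usually what explodes on 30-plus left joins, and KEEPFIXED PLAN discourages recompiles. Table and column names below are illustrative.

      -- Illustrative shape of the flattening query with the hints applied:
      SELECT p.id, f1.propValue, f2.propValue
      FROM pkTable AS p
      LEFT JOIN fkTable AS f1 ON f1.pkId = p.id AND f1.name = 'attr1'
      LEFT JOIN fkTable AS f2 ON f2.pkId = p.id AND f2.name = 'attr2'
      OPTION (FORCE ORDER, KEEPFIXED PLAN);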

    Read the article

< Previous Page | 1 2 3 4 5 6 7 8 9 10 11 12  | Next Page >