Search Results

Search found 59278 results on 2372 pages for 'time estimation'.


  • WPF: Timers

    - by Ilya Verbitskiy
    I believe that sooner or later your WPF application will need to execute something periodically, and today I would like to discuss how to do that. There are two possible solutions: the classic System.Threading.Timer class, or the System.Windows.Threading.DispatcherTimer class, which is part of WPF. I have created an application to show you how to use the API. Let's take a look at how you can implement a timer using System.Threading.Timer. First of all, it has to be initialized.

        private Timer timer;

        public MainWindow()
        {
            // Form initialization code
            timer = new Timer(OnTimer, null, Timeout.InfiniteTimeSpan, Timeout.InfiniteTimeSpan);
        }

    The Timer constructor accepts four parameters. The first one is the callback method which is executed when the timer ticks; I will show it to you soon. The second parameter is a state object which is passed to the callback; it is null because there is nothing to pass this time. The third parameter is the amount of time to delay before the callback is invoked. I use the System.Threading.Timeout helper class to represent an infinite timeout, which simply means the timer is not going to start at the moment. The fourth and final parameter represents the time interval between invocations of the callback. An infinite timeout timespan means the callback method will be executed just once. Well, the timer has been created. Let's take a look at how you can start the timer.

        private void StartTimer(object sender, RoutedEventArgs e)
        {
            timer.Change(TimeSpan.Zero, new TimeSpan(0, 0, 1));
            // Disable the start buttons and enable the reset button.
        }

    The timer is started by calling its Change method. It accepts two arguments: the amount of time to delay before invoking the callback method, and the time interval between invocations of the callback. TimeSpan.Zero means we start the timer immediately, and TimeSpan(0, 0, 1) tells the timer to tick every second. There is one method that hasn't been shown yet: the callback method OnTimer, which does a simple task: it shows the current time in the center of the screen. Unfortunately you cannot simply write something like this:

        clock.Content = DateTime.Now.ToString("hh:mm:ss");

    The reason is that Timer runs the callback method on a separate thread, and it is not possible to access GUI controls from a non-GUI thread. You can avoid the problem using the System.Windows.Threading.Dispatcher class.

        private void OnTimer(object state)
        {
            Dispatcher.Invoke(() => ShowTime());
        }

        private void ShowTime()
        {
            clock.Content = DateTime.Now.ToString("hh:mm:ss");
        }

    You can build a similar application using the System.Windows.Threading.DispatcherTimer class. The class represents a timer which is integrated into the Dispatcher queue. This means that your callback method is executed on the GUI thread and you can write code which updates your GUI components directly.

        private DispatcherTimer dispatcherTimer;

        public MainWindow()
        {
            // Form initialization code
            dispatcherTimer = new DispatcherTimer { Interval = new TimeSpan(0, 0, 1) };
            dispatcherTimer.Tick += OnDispatcherTimer;
        }

    The DispatcherTimer has a nicer and cleaner API. All you need is to specify the tick interval and the Tick event handler. Then you just call the Start method to start the timer.

        private void StartDispatcher(object sender, RoutedEventArgs e)
        {
            dispatcherTimer.Start();
            // Disable the start buttons and enable the reset button.
        }

    And, since the Tick event handler is executed on the GUI thread, the code which sets the actual time is straightforward.

        private void OnDispatcherTimer(object sender, EventArgs e)
        {
            ShowTime();
        }

    We're almost done. Let's take a look at how to stop the timers. It is easy with the DispatcherTimer:

        dispatcherTimer.Stop();

    And slightly more complicated with the Timer. You should use the Change method again:

        timer.Change(Timeout.InfiniteTimeSpan, Timeout.InfiniteTimeSpan);

    What is the best way to add a timer to an application? The DispatcherTimer has the simpler interface, but its advantages are disadvantages at the same time. You should not use it if your Tick event handler executes time-consuming operations; it freezes your window while it is executing the event handler method. You should think about using System.Threading.Timer in that case. The code is available on GitHub.

    Read the article

  • How to store a shmup level?

    - by pek
    I am developing a 2D shmup (i.e. Aero Fighters) and I was wondering what are the various ways to store a level. Assuming that enemies are defined in their own xml file, how would you define when an enemy spawns in the level? Would it be based on time? Updates? Distance? Currently I do this based on "level time" (the amount of time the level is running - pausing doesn't update the time). Here is an example (the serialization was done by XNA): <?xml version="1.0" encoding="utf-8"?> <XnaContent xmlns:level="pekalicious.xanor.XanorContentShared.content.level"> <Asset Type="level:Level"> <Enemies> <Enemy> <EnemyType>data/enemies/smallenemy</EnemyType> <SpawnTime>PT0S</SpawnTime> <NumberOfSpawns>60</NumberOfSpawns> <SpawnOffset>PT0.2S</SpawnOffset> </Enemy> <Enemy> <EnemyType>data/enemies/secondenemy</EnemyType> <SpawnTime>PT0S</SpawnTime> <NumberOfSpawns>10</NumberOfSpawns> <SpawnOffset>PT0.5S</SpawnOffset> </Enemy> <Enemy> <EnemyType>data/enemies/secondenemy</EnemyType> <SpawnTime>PT20S</SpawnTime> <NumberOfSpawns>10</NumberOfSpawns> <SpawnOffset>PT0.5S</SpawnOffset> </Enemy> <Enemy> <EnemyType>data/enemies/boss1</EnemyType> <SpawnTime>PT30S</SpawnTime> <NumberOfSpawns>1</NumberOfSpawns> <SpawnOffset>PT0S</SpawnOffset> </Enemy> </Enemies> </Asset> </XnaContent> Each Enemy element is basically a wave of specific enemy types. The type is defined in EnemyType while SpawnTime is the "level time" this wave should appear. NumberOfSpawns and SpawnOffset is the number of enemies that will show up and the time it takes between each spawn respectively. This could be a good idea or there could be better ones out there. I'm not sure. I would like to see some opinions and ideas. I have two problems with this: spawning an enemy correctly and creating a level editor. The level editor thing is an entirely different problem (which I will probably post in the future :P). As for spawning correctly, the problem lies in the fact that I have a variable update time and so I need to make sure I don't miss an enemy spawn because the spawn offset is too small, or because the update took a little more time. I kinda fixed it for the most part, but it seems to me that the problem is with how I store the level. So, any ideas? Comments? Thank you in advance.
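    Since the spawn times in the XML above are absolute level times, one robust way to avoid missed spawns with a variable update step is to compare the accumulated level time against each wave's schedule in a loop, so a long frame simply catches up on every spawn that fell inside it. A minimal sketch of the idea (illustrative Python with made-up names, not the actual XNA code):

        # Sketch: time-based wave spawning that tolerates a variable update step.
        class Wave:
            def __init__(self, enemy_type, spawn_time, count, offset):
                self.enemy_type = enemy_type
                self.spawn_time = spawn_time   # level time of the first spawn, in seconds
                self.count = count             # total enemies in the wave
                self.offset = offset           # seconds between consecutive spawns
                self.spawned = 0               # how many have been emitted so far

        def update_waves(waves, level_time, spawn):
            # Call once per frame with the accumulated level time (paused time excluded).
            # Spawns every enemy whose scheduled time has passed, even if several
            # scheduled times fell inside one long frame.
            for w in waves:
                while w.spawned < w.count and level_time >= w.spawn_time + w.spawned * w.offset:
                    spawn(w.enemy_type)
                    w.spawned += 1

        # Example: two frames, the second one unusually long.
        waves = [Wave("data/enemies/smallenemy", 0.0, 60, 0.2)]
        update_waves(waves, 0.05, print)   # spawns the first enemy
        update_waves(waves, 1.30, print)   # long frame: catches up on the six spawns it covered

    Because the check is against absolute level time rather than per-frame deltas, a spawn offset smaller than the frame time only means several enemies appear in the same frame; none are skipped.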

    Read the article

  • JDeveloper 11g R1 (11.1.1.4.0) - New Features on ADF Desktop Integration Explained

    - by juan.ruiz
    One of the areas that introduced many new features in the latest release (11.1.1.4.0) of JDeveloper 11g R1 is ADF Desktop Integration - in this article I'll provide an overview of these new features. New ADF Desktop Integration Ribbon in Excel - After installing the ADF Desktop Integration add-in, and depending on the mode in which you open the desktop integration workbook, the ADF Desktop Integration ribbon for design time or runtime is displayed as a separate tab within Excel. In previous versions the ADF Desktop Integration environment used to be placed inside the Add-Ins tab. Above you can see both the design time ribbon and the runtime ribbon. On the design time ribbon you can manage the workbook and worksheet properties, worksheet component properties, diagnostics, execution and publication of the workbook. The runtime version of the ribbon is totally customizable and represents what used to be the runtime menu on the spreadsheet; in this ribbon you can include all the operations and actions that can be executed by the end user while working with the spreadsheet data. Diagnostics - A very important aspect for developers is how to debug or verify the interactions of the client with the server, and for that ADF Desktop Integration has provided a series of diagnostics tools since day one. In this release the diagnostics tools are more visible and are really easy to configure. You can access the client console while testing the workbook, or you can simply dump all the messages to a log file - with the ability to set the output level for both. Security - There are a number of enhancements on security, but the one with the most impact for developers is that security is now optional when using ADF Desktop Integration. Until this version, every time you wanted to work with ADFdi it was required that the application was previously secured. In this release security is optional, which means that if you have previously defined security on your application, then you must secure the ADFdi servlet as explained in one of my previous (ADD LINK) posts. On the other hand, if by the time you start working with ADFdi you have not defined security, you can test and publish your workbooks without adding security. Support for Continuous Integration - In this release we have added tooling for continuous integration builds. In the ADF Desktop Integration space, the concept translates to adding functionality that developers can use to publish ADFdi workbooks as part of their entire application build. For that purpose, we have a publish tool that can be easily invoked from an ANT task so that all the design time workbooks are re-published into the latest version as part of the application build process. Key Column - At runtime, on any worksheet containing editable tables you will notice a new additional column called the key column. The purpose of this column is to make the end user aware that all rows on the table need to be selected at the time of sorting. Users cannot alter the value of this column. From the developer's point of view there are no steps required in order to have the key column included in the worksheets. Installation and Creation of New Workbooks - Both use cases can now be executed directly from JDeveloper. As part of the Tools menu options the developer can install the ADF Desktop Integration designer. Also, creating new workbooks, which previously was done through the convert tool shipped with JDeveloper, is now done automatically from the New Gallery.
    Creating a new ADFdi workbook adds metadata information to the Excel workbook so you can work in design time. Other Enhancements - Support for Excel 2010; and ADF components that are read-only no longer allow their value to be changed – the cell in Excel is automatically protected, something that could cause confusion among customers of previous releases.

    Read the article

  • Sun Ray Hardware Last Order Dates & Extension of Premier Support for Desktop Virtualization Software

    - by Adam Hawley
    In light of the recent announcement to end new feature development for Oracle Virtual Desktop Infrastructure Software (VDI), Oracle Sun Ray Software (SRS), Oracle Virtual Desktop Client (OVDC) Software, and Oracle Sun Ray Client hardware (3, 3i, and 3 Plus), there have been questions and concerns regarding what this means for customers with new or existing deployments. The following updates clarify some of these commonly asked questions. Extension of Premier Support for Software - Though there will be no new feature additions to these products, customers will have access to maintenance update releases for Oracle Virtual Desktop Infrastructure and Sun Ray Software, including Oracle Virtual Desktop Client and Sun Ray Operating Software (SROS), until Premier Support ends. To ensure that customer investments in these products are protected, Oracle Premier Support for these products has been extended by 3 years to the following dates: Sun Ray Software - November 2017; Oracle Virtual Desktop Infrastructure - March 2017. Note that OVDC support is also extended to the above dates since OVDC is licensed by default as part of the SRS and VDI products. As a reminder, this only affects the products listed above. Oracle Secure Global Desktop and Oracle VM VirtualBox will continue to be enhanced with new features from time to time and, as a result, they are not affected by the changes detailed in this message. The extension of support means that customers under a support contract will still be able to file service requests through Oracle Support, and Oracle will continue to provide the utmost level of support to our customers as expected, until the published Premier Support end date. Following the end of Premier Support, Sustaining Support remains available for an indefinite period of time. Sun Ray 3 Series Clients - Last Order Dates - For Sun Ray Client hardware, customers can continue to purchase Sun Ray Client devices until the following last order dates:

        Product             | Marketing Part Number        | Last Order Date    | Last Ship Date
        Sun Ray 3 Plus      | TC3-P0Z-00, TC3-PTZ-00 (TAA) | September 13, 2013 | February 28, 2014
        Sun Ray 3 Client    | TC3-00Z-00                   | February 28, 2014  | August 31, 2014
        Sun Ray 3i Client   | TC3-I0Z-00                   | February 28, 2014  | August 31, 2014
        Payflex Smart Cards | X1403A-N, X1404A-N           | February 28, 2014  | August 31, 2014

    Note the difference in the Last Order Date for the Sun Ray 3 Plus (September 13, 2013) compared to the other products, which have a Last Order Date of February 28, 2014. The rapidly approaching date for the Sun Ray 3 Plus is due to a supplier phasing out production of a key component of the 3 Plus. Given September 13 is unfortunately quite soon, we strongly encourage you to place your last time buy as soon as possible to maximize Oracle's ability to fulfill your order. Keep in mind you can schedule shipments to be delivered as late as the end of February 2014, but the last day to order is September 13, 2013. Customers wishing to purchase other models - Sun Ray 3 Clients and/or Sun Ray 3i Clients - have additional time (until February 28, 2014) to assess their needs and to allow fulfillment of last time orders. Please note that availability of supply cannot be absolutely guaranteed up to the last order dates and we strongly recommend placing last time buys as early as possible. Warranty replacements for Sun Ray Client hardware for customers covered by Oracle Hardware Systems Support contracts will be available beyond last order dates, per Oracle's policy found on Oracle.com here.
Per that policy, Oracle intends to provide replacement hardware for up to 5 years beyond the last ship date, but hardware may not be available beyond the 5 year period after the last ship date for reasons beyond Oracle's control. In any case, by design, Sun Ray Clients have an extremely long lifespan  and mean time between failures (MTBF) - much longer than PCs, and over the years we have continued to see first- and second generations of Sun Rays still in daily use.  This is no different for the Sun Ray 3, 3i, and 3 Plus.   Because of this, and in addition to Oracle's continued support for SRS, VDI, and SROS, Sun Ray and Oracle VDI deployments can continue to expand and exist as a viable solution for some time in the future. Continued Availability of Product Licenses and Support Oracle will continue to offer all existing software licenses, and software and hardware support including: Product licenses and Premier Support for Sun Ray Software and Oracle Virtual Desktop Infrastructure Premier Support for Operating Systems (for Sun Ray Operating Software maintenance upgrades/support)  Premier Support for Systems (for Sun Ray Operating Software maintenance upgrades/support and hardware warranty) Support renewals For More Information For more information, please refer to the following documents for specific dates and policies associated with the support of these products: Document 1478170.1 - Oracle Desktop Virtualization Software and Hardware Lifetime Support Schedule Document 1450710.1 - Sun Ray Client Hardware Lifetime schedule Document 1568808.1 - Document Support Policies for Discontinued Oracle Virtual Desktop Infrastructure, Sun Ray Software and Hardware and Oracle Virtual Desktop Client Development For Sales Orders and Questions Please contact your Oracle Sales Representative or Saurabh Vijay ([email protected])

    Read the article

  • Organization & Architecture UNISA Studies – Chap 6

    - by MarkPearl
    Learning Outcomes
    - Discuss the physical characteristics of magnetic disks
    - Describe how data is organized and accessed on a magnetic disk
    - Discuss the parameters that play a role in the performance of magnetic disks
    - Describe different optical memory devices
    Magnetic Disk - The way data is stored on and retrieved from magnetic disks: data is recorded on and later retrieved from the disk via a conducting coil named the head (in many systems there are two heads). The write mechanism exploits the fact that electricity flowing through a coil produces a magnetic field. Electric pulses are sent to the write head, and the resulting magnetic patterns are recorded on the surface below, with different patterns for positive and negative currents. The physical characteristics of a magnetic disk - Summarize from book. The factors that play a role in the performance of a disk:
    - Seek time – the time it takes to position the head at the track
    - Rotational delay / latency – the time it takes for the beginning of the sector to reach the head
    - Access time – the sum of the seek time and rotational delay
    - Transfer time – the time it takes to transfer data
    RAID - The rate of improvement in secondary storage performance has been considerably less than the rate for processors and main memory. Thus secondary storage has become a bit of a bottleneck. RAID works on the concept that if one disk can only be pushed so far, additional gains in performance are to be had by using multiple parallel components. Points to note about RAID:
    - RAID is a set of physical disk drives viewed by the operating system as a single logical drive
    - Data is distributed across the physical drives of an array in a scheme known as striping
    - Redundant disk capacity is used to store parity information, which guarantees data recoverability in case of a disk failure (not supported by RAID 0 or RAID 1)
    It is interesting to note that increasing the number of drives increases the probability of failure. To compensate for this decreased reliability, RAID makes use of stored parity information that enables the recovery of data lost due to a disk failure.
    The RAID scheme consists of 7 levels:

        Category           | Level | Description                | Disks Required | Data Availability                          | Large I/O Data Transfer Capacity                                            | Small I/O Request Rate
        Striping           | 0     | Non-redundant              | N              | Lower than single disk                     | Very high                                                                   | Very high for both read and write
        Mirroring          | 1     | Mirrored                   | 2N             | Higher than RAID 2–5 but lower than RAID 6 | Higher than single disk                                                     | Up to twice that of a single disk for read
        Parallel Access    | 2     | Redundant via Hamming code | N + m          | Much higher than single disk               | Highest of all listed alternatives                                          | Approximately twice that of a single disk
        Parallel Access    | 3     | Bit-interleaved parity     | N + 1          | Much higher than single disk               | Highest of all listed alternatives                                          | Approximately twice that of a single disk
        Independent Access | 4     | Block-interleaved parity   | N + 1          | Much higher than single disk               | Similar to RAID 0 for read, significantly lower than single disk for write  | Similar to RAID 0 for read, significantly lower than single disk for write
        Independent Access | 5     | Block-interleaved parity   | N + 1          | Much higher than single disk               | Similar to RAID 0 for read, lower than single disk for write                | Similar to RAID 0 for read, generally lower than single disk for write
        Independent Access | 6     | Block-interleaved parity   | N + 2          | Highest of all listed alternatives         | Similar to RAID 0 for read; lower than RAID 5 for write                     | Similar to RAID 0 for read, significantly lower than RAID 5 for write

    Read page 215 – 221 for a detailed explanation of the RAID levels. Optical Memory - There are a variety of optical-disk systems available; read through the table on page 222 – 223. Some of the devices include: CD, CD-ROM, CD-R, CD-RW, DVD, DVD-R, DVD-RW, Blu-ray DVD. Magnetic Tape - Most modern systems use serial recording – data is laid out as a sequence of bits along each track. The typical recording used in serial is referred to as serpentine recording. In this technique, when data is being recorded, the first set of bits is recorded along the whole length of the tape. When the end of the tape is reached, the heads are repositioned to record a new track, and the tape is again recorded along its whole length, this time in the opposite direction. That process continues back and forth until the tape is full. To increase speed, the read-write head is capable of reading and writing a number of adjacent tracks simultaneously. Data is still recorded serially along individual tracks, but blocks in sequence are stored on adjacent tracks as suggested. A tape drive is a sequential access device. Magnetic tape was the first kind of secondary memory. It is still widely used as the lowest-cost, slowest-speed member of the memory hierarchy.
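    To make the access-time relationship above concrete, here is a small worked example; the drive parameters are made up purely for illustration (Python):

        # Rough access time for a hypothetical disk (illustrative numbers only).
        average_seek_ms = 4.0                        # assumed average seek time
        rpm = 7200
        rotational_delay_ms = 0.5 * (60_000 / rpm)   # average latency = half a revolution, ~4.17 ms
        transfer_rate_mb_s = 150.0                   # assumed sustained transfer rate
        block_kb = 4.0

        access_ms = average_seek_ms + rotational_delay_ms            # access time = seek + rotational delay
        transfer_ms = (block_kb / 1024) / transfer_rate_mb_s * 1000  # ~0.03 ms for a 4 KB block
        print(f"access time ~ {access_ms:.2f} ms, total for a 4 KB block ~ {access_ms + transfer_ms:.2f} ms")

    The point of the numbers is that seek time and rotational delay dominate small transfers, which is why spreading requests across multiple drives gives RAID its gains for small I/O.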

    Read the article

  • WebLogic Server Performance and Tuning: Part II - Thread Management

    - by Gokhan Gungor
    WebLogic Server, like any other Java application server, provides resources that your applications use to provide services. Unfortunately none of these resources are unlimited and they must be managed carefully. One of these resources is threads, which are pooled to provide better throughput and performance along with fast response times and to avoid deadlocks. Threads are the execution points through which WebLogic Server delivers its power and executes work. Managing threads is very important because it may affect the overall performance of the entire system. In releases prior to WebLogic Server 9.0 we had multiple execute queues and user-defined thread pools. There were different queues for different types of work, each with a fixed number of execute threads. Tuning these thread pools and finding the proper number of threads was time consuming and required many trials. WebLogic Server 9.0 and the following releases use a single thread pool and a single priority-based execute queue. All types of work are executed in this single thread pool. Its size (thread count) is automatically decreased or increased (self-tuned). The new "self-tuning" system simplifies getting the proper number of threads and utilizing them. A work manager allows your applications to run concurrently in multiple threads. A work manager is a mechanism that allows you to manage and utilize threads and create rules/guidelines to follow when assigning requests to threads. We can set a scheduling guideline or prioritize a request with a work manager and then associate this work manager with one or more applications. At run time, WebLogic Server uses these guidelines to assign pending work/requests to execution threads. The position of a request in the execute queue is determined by its priority. There is a default work manager that is provided, and it should be sufficient for most applications. However, there can be cases where you want to change this default configuration. Your application(s) may be providing services that need a mixture of fast response times and long-running processes like batch updates. However, a wrong configuration of work managers can lead to a performance penalty while you are expecting an improvement. We can define/configure work managers at:
    •    Domain level: config.xml
    •    Application level: weblogic-application.xml
    •    Component level: weblogic-ejb-jar.xml or weblogic.xml (for a specific web application use weblogic.xml)
    We can use the following predefined rules/constraints to manage the work:
    •    Fair Share Request Class: Specifies the average thread-use time required to process requests. The default is 50.
    •    Response Time Request Class: Specifies a response time goal in milliseconds.
    •    Context Request Class: Assigns request classes to requests based on context information.
    •    Min Threads Constraint: Guarantees the number of threads the server will allocate to requests.
    •    Max Threads Constraint: Limits the number of concurrent threads executing requests.
    •    Capacity Constraint: Causes the server to reject requests only when it has reached its capacity.
    Let's create a work manager for our application for long-running work. Go to the WebLogic console and select Environment | Work Managers from the domain structure tree. Click the New button, select Work Manager and click Next. Enter the name for the work manager and click Next. Then select the managed server instance(s) or clusters from the available targets (the ones where your long-running application is deployed) and finish.
    Click on MyWorkManager, open the Configuration tab, check Ignore Stuck Threads and save. This will prevent WebLogic from treating long-running processes (those taking more than a specified time) as stuck and allow the process to finish.

    Read the article

  • I am trying to create a Windows application watcher? [migrated]

    - by Broken_Code
    I recently started coding in c #(in may this year) and well I find it best to learn by working with code. this application http://www.c-sharpcorner.com/UploadFile/satisharveti/ActiveApplicationWatcher01252007024921AM/ActiveApplicationWatcher.aspx. I am trying to recreate it however mine will be saving the information into an sql database(new at this as well). I am having some coding problems though as it does not do what I expect it to do. THis is the main code I am using. private void GetTotalTimer() { DateTime now = DateTime.Now; IntPtr hwnd = APIFunc.getforegroundWindow(); Int32 pid = APIFunc.GetWindowProcessID(hwnd); Process p = Process.GetProcessById(pid); appName = p.ProcessName; const int nChars = 256; int handle = 0; StringBuilder Buff = new StringBuilder(nChars); handle = GetForegroundWindow(); appltitle = APIFunc.ActiveApplTitle().Trim().Replace("\0", ""); //if (GetWindowText(handle, Buff, nChars) > 0) //{ // string strbuff = Buff.ToString(); // StrWindow = strbuff; #region insert statement try { if (Conn.State == ConnectionState.Closed) { Conn.Open(); } if (Conn.State == ConnectionState.Open) { SqlCommand com = new SqlCommand("Select top 1 [Window Title] From TimerLogs ORDER BY [Time of Event] DESC", Conn); SqlDataReader reader = com.ExecuteReader(); startTime = DateTime.Now; string time = now.ToString(); if (!reader.HasRows) { reader.Close(); cmd = new SqlCommand("insert into [TimerLogs] values(@time,@appName,@appltitle,@Elapsed_Time,@userName)", Conn); cmd.Parameters.AddWithValue("@time", time); cmd.Parameters.AddWithValue("@appName", appName); cmd.Parameters.AddWithValue("@appltitle", appltitle); cmd.Parameters.AddWithValue("@Elapsed_Time", blank.ToString()); cmd.Parameters.AddWithValue("@userName", userName); cmd.ExecuteNonQuery(); Conn.Close(); } else if(reader.HasRows) { reader.Read(); if (appltitle != reader.ToString()) { reader.Close(); endTime = DateTime.Now; appduration = endTime.Subtract(startTime); cmd = new SqlCommand("insert into [TimerLogs] values (@time,@appName,@appltitle,@Elapsed_Time,@userName)", Conn); cmd.Parameters.AddWithValue("@time", time); cmd.Parameters.AddWithValue("@appName", appName); cmd.Parameters.AddWithValue("@appltitle", appltitle); cmd.Parameters.AddWithValue("@Elapsed_Time", appduration.ToString()); cmd.Parameters.AddWithValue("@userName", userName); cmd.ExecuteNonQuery(); reader.Close(); Conn.Close(); } } } } catch (Exception) { } //} #endregion ActivityTimer.Start(); Processing = "Working"; } Unfortunately this is the result. it is not saving the data as I expect it to. What am i doing wrong I had thought that with the sql reader it would first check for a value and only save if they do not match however it is saving whether there is a match or not.

    Read the article

  • Lessons learnt in implementing Scrum in a Large Organization that has traditional values

    - by MarkPearl
    I recently had the experience of being involved in a "test" scrum implementation in a large organization that was used to a traditional project management approach. Here are some lessons that I learnt from it. Don't let the Project Manager be the Product Owner - The first lesson learnt is to identify the correct product owner – in this instance the product manager assumed the role of the product owner, which was a mistake. The product owner is the one who has the most to lose if the project fails. With a methodology that advocates removing the role of the project manager from the process, it is not in the interests of the person who is employed as a project manager to be the product owner – in fact they have the most to gain should the project fail. Know the time commitments of team members to the project - The second lesson learnt is to get a firm time commitment from the members of the team for the sprint and to hold them to it. In this project instance many of the issues we faced were with team members having to double up on supporting existing projects/systems and the scrum project. In many situations they just didn't get round to doing any work on the scrum project for several days while they tried to meet other commitments. Initially this was not made transparent to the team – in stand up, team members would say that they had done some work but would be very vague on how much time they had actually spent, using the black hole of their other legacy projects as an excuse – putting up a time burn down chart made time allocations transparent and easy to hold the team to. In addition, how can you plan for a sprint without knowing the actual time available of the members – by actual time I mean the result of the exercise of getting them to go through all their appointments and lunch times and breaks and removing those from their time commitment, which helps get you to a realistic time that they can dedicate. Make sure you meet your minimum team sizes - In a recent post I wrote about the difference between a partnership and a team. If you are going to do scrum in a large organization, make sure you have a minimum team size of at least 3 developers. My experience with larger organizations is that people have a tendency to be sick more, take more leave and generally not be around – if you have a team size of two it is so easy to lose momentum on the project – the more people you have in the team (up to about 9), the more momentum the project will have when people are not around. Swapping from one methodology to another can seem like waste to the customer - It sounds bad, but most customers don't care what methodology you use. Often they have bought into the "big plan upfront". If you can, avoid taking a project on midstream from a traditional approach unless the customer has not bought into the process – with this particular project they had a detailed upfront planning breakaway with the customer using the traditional approach, and then before the project started we moved onto a scrum implementation – this seemed like waste to the customer. We should have managed the customer's expectations properly. Don't play the role of the scrum master if you can't be the scrum master - With this particular implementation I was the "scrum master". But all I did was go through the process of the formal meetings of scrum – I attended stand up, retrospectives and planning – but I was not hands-on on the ground. I was not performing the most important role of removing blockages – and by the end of the project there were a number of blockages "cropping up".
    What could have been a better approach was to take someone on the team, train them to be the scrum master, and be present to coach them. Alternatively, actually be on the team on a full-time basis and be the scrum master. Just going through the meetings of scrum didn't mean we were doing scrum. So we failed with this one; if you fail, look at it from an agile perspective - As this particular project drew to a close and it became more and more apparent that it was not going to succeed, the failure of it became depressing. Emotions were expressed by various people on the team that were not encouraging and reinforced the failure. Embracing the failure and looking at it for what it is, instead of taking it as the end of the world, can change how you grow from the experience. Acknowledging that it failed and then focussing on learning from why, and on how to avoid the failure in the future, can change how you feel emotionally about the team, the project and the organization.

    Read the article

  • Get More Value From Your Oracle Premier Support Investment

    - by Get Proactive Customer Adoption Team
    Untitled Document The Return on Investment in Support Training I’m a typical software user. I’ve been using spreadsheets almost daily for the past 10 years or so. I know how to enter simple formulas, format cells, import files, and I can sort and filter. Sometimes I even use a pivot table. I never attended training. I learnt everything I know on the fly. Sometimes it was intuitive and easy, other times I had to spend minutes and even hours searching for a solution. Yet when I see what some other people can do with their spreadsheets, I know I’m utilizing maybe 15% of the functionality. Pity, one day I really have to sign up for training. Why haven’t I done it yet? Ah, you know, I’m a busy person, I have work to do. And if I need to use a feature that I am unfamiliar with, I’ll spend time on it only when I really need it. Now wait. When I recall how much time I spent trying to figure how things work compared to time I spent doing the productive work, I realize it was not insignificant. I’m unable to sum up all the time I spent ‘learning’ on the fly, but I’m sure it’s been days or even weeks. And after all this time, I’ve mastered 15% of its features. If only I had attended training years ago. That investment would have paid back 10 times! Working with My Oracle Support is no different. Our customers typically use simple search, create service requests, and download patches. They think they know how to use My Oracle Support. And they’re right. They know something but often they’re utilizing only a fragment of My Oracle Support’s potential. For the investment that has been made, using only a small subset of the capabilities offered in My Oracle Support leaves value on the table. There is much more available in My Oracle Support. Dozens of diagnostic tools and proactive health checks will keep verifying your Oracle environments against best practices that Oracle gathers every day thanks to our comprehensive knowledge management process. Automated patch recommendations will help prevent known issues, and upgrade planning and more is included in My Oracle Support. Why are you not utilizing all of these best practices, capabilities and tools? Is it because you don’t have time to invest 2-3 hours of your time to learn about the features? Simply because you think you can learn on the fly like I thought I could? Does learning on the fly how to properly use the Service Request escalation process when you already have critical issue sound like a good idea? My advice is: Invest your time now to learn how My Oracle Support can help you prevent issues on your systems. Learn how to find answers faster and resolve problems more efficiently. Understand how to properly complete a service request. Invest in Support training, offered at no additional cost to Oracle Premier Support customers. It will pay back quicker than you think. It will bring you more value than you think. Discover your advantage with Oracle Premier Support's Proactive Portfolio.

    Read the article

  • How many bits for sequence number using Go-Back-N protocol.

    - by Mike
    Hi Everyone, I'm a regular over at Stack Overflow (Software developer) that is trying to get through a networking course. I got a homework problem I'd like to have a sanity check on. Here is what I got. Q: A 3000-km-long T1 trunk is used to transmit 64-byte frames using Go-Back-N protocol. If the propagation speed is 6 microseconds/km, how many bits should the sequence numbers be? My Answer: For this questions what we need to do is lay the base knowledge. What we are trying to find is the size of the largest sequence number we should us using Go-Back-N. To figure this out we need to figure out how many packets can fit into our link at a time and then subtract one from that number. This will ensure that we never have two packets with the same sequence number at the same time in the link. Length of link: 3,000km Speed: 6 microseconds / km Frame size: 64 bytes T1 transmission speed: 1544kb/s (http://ckp.made-it.com/t1234.html) Propagation time = 6 microseconds / km * 3000 km = 18,000 microseconds (18ms). Convert 1544kb to bytes = 1544 * 1024 = 1581056 bytes Transmission time = 64 bytes / 1581056bytes / second = 0.000040479 seconds (0.4ms) So then if we take the 18ms propagation time and divide it by the 0.4ms transmission time we will see that we are going to be able to stuff ( 18 / 0.4) 45 packets into the link at a time. That means that our sequence number should be 2 ^ 45 bits long! Am I going in the right direction with this? Thanks, Mike
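    A quick way to sanity-check the arithmetic (a rough sketch, assuming a T1 line rate of 1.544 Mbit/s, a 512-bit frame, and that the sender should be able to keep transmitting for one frame time plus a full round trip before the first ACK can return):

        import math

        # Go-Back-N sequence-number sizing, with the assumptions noted above.
        line_bps = 1.544e6                    # T1 line rate in bits per second
        frame_bits = 64 * 8                   # 64-byte frames
        prop_one_way_s = 3000 * 6e-6          # 3000 km at 6 us/km = 18 ms one way

        t_frame = frame_bits / line_bps                      # ~0.33 ms to clock one frame out
        window = (t_frame + 2 * prop_one_way_s) / t_frame    # frames in flight, roughly 110

        bits = math.ceil(math.log2(window + 1))   # Go-Back-N needs window size <= 2^k - 1
        print(f"window ~ {window:.1f} frames -> {bits} sequence-number bits")

    Note that the number of frames in flight is the window size, while the sequence-number field only needs enough bits to count past that window; whether you use the one-way or the round-trip delay, and bits versus bytes for the T1 rate, changes the window estimate considerably.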

    Read the article

  • nokia cell phone not accepting IP from dnsmasq dhcp server

    - by samix
    Hello, I having problem connecting a NOkia cell phone to my home wifi network. The wifi network is provided by a wireless card in a machine running Debian Testing and 2.6.26-2-686 kernel. The cars is D-Link DWL-G520 working in ap mode and has WPA encryption enabled. The wireless network is provided by hostapd using madwifi driver. Windows and Mac machines work properly with this wifi network. When I try to get the Nokia phone to connect to the wifi network, I get these lines in my dnsmasq log (to see lines without wrapping, here is the pastebin link for convenience - http://pastebin.com/m466c8fd2): Oct 27 13:25:21 red hostapd: ath0: STA 11:22:33:44:55:66 IEEE 802.11: disassociated Oct 27 13:25:21 red hostapd: ath0: STA 11:22:33:44:55:66 IEEE 802.11: associated Oct 27 13:25:21 red hostapd: ath0: STA 11:22:33:44:55:66 RADIUS: starting accounting session 4AE664FA-00000036 Oct 27 13:25:21 red hostapd: ath0: STA 11:22:33:44:55:66 WPA: pairwise key handshake completed (WPA) Oct 27 13:25:21 red hostapd: ath0: STA 11:22:33:44:55:66 WPA: group key handshake completed (WPA) Oct 27 13:25:21 red dnsmasq-dhcp[11451]: 3875439214 Available DHCP range: 192.168.5.150 -- 192.168.5.199 Oct 27 13:25:21 red dnsmasq-dhcp[11451]: 3875439214 DHCPDISCOVER(ath0) 0.0.0.0 11:22:33:44:55:66 Oct 27 13:25:21 red dnsmasq-dhcp[11451]: 3875439214 DHCPOFFER(ath0) 192.168.5.21 11:22:33:44:55:66 Oct 27 13:25:21 red dnsmasq-dhcp[11451]: 3875439214 requested options: 12:hostname, 6:dns-server, 15:domain-name, Oct 27 13:25:21 red dnsmasq-dhcp[11451]: 3875439214 requested options: 1:netmask, 3:router, 28:broadcast, 120:sip-server Oct 27 13:25:21 red dnsmasq-dhcp[11451]: 3875439214 tags: known, ath0 Oct 27 13:25:21 red dnsmasq-dhcp[11451]: 3875439214 next server: 192.168.5.1 Oct 27 13:25:21 red dnsmasq-dhcp[11451]: 3875439214 sent size: 1 option: 53:message-type 02 Oct 27 13:25:21 red dnsmasq-dhcp[11451]: 3875439214 sent size: 4 option: 54:server-identifier 192.168.5.1 Oct 27 13:25:21 red dnsmasq-dhcp[11451]: 3875439214 sent size: 4 option: 51:lease-time 00:00:46:50 Oct 27 13:25:21 red dnsmasq-dhcp[11451]: 3875439214 sent size: 4 option: 58:T1 00:00:23:28 Oct 27 13:25:21 red dnsmasq-dhcp[11451]: 3875439214 sent size: 4 option: 59:T2 00:00:3d:86 Oct 27 13:25:21 red dnsmasq-dhcp[11451]: 3875439214 sent size: 4 option: 1:netmask 255.255.255.0 Oct 27 13:25:21 red dnsmasq-dhcp[11451]: 3875439214 sent size: 4 option: 28:broadcast 192.168.5.255 Oct 27 13:25:21 red dnsmasq-dhcp[11451]: 3875439214 sent size: 4 option: 3:router 192.168.5.1 Oct 27 13:25:21 red dnsmasq-dhcp[11451]: 3875439214 sent size: 4 option: 6:dns-server 192.168.5.1 Oct 27 13:25:21 red dnsmasq-dhcp[11451]: 3875439214 sent size: 8 option: 15:domain-name home.pvt Oct 27 13:25:21 red dnsmasq-dhcp[11451]: 3875439214 sent size: 3 option: 12:hostname NokiaCellPhone Anybody know the problem might be? If I switch off dnsmasq dhcp queries logging, i.e. if I decrease the verbosity of the log, all I see are two lines of DHCPDISCOVER(ath0) and DHCPOFFER(ath0) repeatedly in the log with no acceptance by the cell phone. It appears as though the phone is not accepting the dhcp offer. However, if I give the phone a static IP address in its configuration, it works properly on the wifi network. So it appears as though the problem is dhcp related. Hints? Suggestions? 
Installed stuff: $ dpkg -l dnsmasq hostap* | grep ^i ii dnsmasq 2.50-1 A small caching DNS proxy and DHCP/TFTP server ii dnsmasq-base 2.50-1 A small caching DNS proxy and DHCP/TFTP server ii hostapd 1:0.6.9-3 user space IEEE 802.11 AP and IEEE 802.1X/WPA/ Thanks. PS: Here is the DHCP tcp dump for more information (with mac addresses changed): $ sudo dhcpdump -i ath0 -h ^11:22:33:44:55:66 TIME: 2009-10-30 12:15:32.916 IP: 0.0.0.0 (1:22:33:44:55:66) 255.255.255.255 (ff:ff:ff:ff:ff:ff) OP: 1 (BOOTPREQUEST) HTYPE: 1 (Ethernet) HLEN: 6 HOPS: 0 XID: c3f93d53 SECS: 0 FLAGS: 7f80 CIADDR: 0.0.0.0 YIADDR: 0.0.0.0 SIADDR: 0.0.0.0 GIADDR: 0.0.0.0 CHADDR: 11:22:33:44:55:66:00:00:00:00:00:00:00:00:00:00 SNAME: . FNAME: . OPTION: 53 ( 1) DHCP message type 1 (DHCPDISCOVER) OPTION: 50 ( 4) Request IP address 0.0.0.0 OPTION: 61 ( 7) Client-identifier 01:11:22:33:44:55:66 OPTION: 55 ( 7) Parameter Request List 12 (Host name) 6 (DNS server) 15 (Domainname) 1 (Subnet mask) 3 (Routers) 28 (Broadcast address) 120 (SIP Servers DHCP Option) OPTION: 57 ( 2) Maximum DHCP message size 576 TIME: 2009-10-30 12:15:32.918 IP: 0.0.0.0 (1:22:33:44:55:66) 255.255.255.255 (ff:ff:ff:ff:ff:ff) OP: 1 (BOOTPREQUEST) HTYPE: 1 (Ethernet) HLEN: 6 HOPS: 0 XID: c3f93d53 SECS: 0 FLAGS: 7f80 CIADDR: 0.0.0.0 YIADDR: 0.0.0.0 SIADDR: 0.0.0.0 GIADDR: 0.0.0.0 CHADDR: 11:22:33:44:55:66:00:00:00:00:00:00:00:00:00:00 SNAME: . FNAME: . OPTION: 53 ( 1) DHCP message type 1 (DHCPDISCOVER) OPTION: 50 ( 4) Request IP address 0.0.0.0 OPTION: 61 ( 7) Client-identifier 01:11:22:33:44:55:66 OPTION: 55 ( 7) Parameter Request List 12 (Host name) 6 (DNS server) 15 (Domainname) 1 (Subnet mask) 3 (Routers) 28 (Broadcast address) 120 (SIP Servers DHCP Option) OPTION: 57 ( 2) Maximum DHCP message size 576 TIME: 2009-10-30 12:15:32.918 IP: 192.168.5.1 (a:bb:cc:dd:ee:ff) 255.255.255.255 (ff:ff:ff:ff:ff:ff) OP: 2 (BOOTPREPLY) HTYPE: 1 (Ethernet) HLEN: 6 HOPS: 0 XID: c3f93d53 SECS: 0 FLAGS: 7f80 CIADDR: 0.0.0.0 YIADDR: 192.168.5.21 SIADDR: 192.168.5.1 GIADDR: 0.0.0.0 CHADDR: 11:22:33:44:55:66:00:00:00:00:00:00:00:00:00:00 SNAME: . FNAME: . OPTION: 53 ( 1) DHCP message type 2 (DHCPOFFER) OPTION: 54 ( 4) Server identifier 192.168.5.1 OPTION: 51 ( 4) IP address leasetime 18000 (5h) OPTION: 58 ( 4) T1 9000 (2h30m) OPTION: 59 ( 4) T2 15750 (4h22m30s) OPTION: 1 ( 4) Subnet mask 255.255.255.0 OPTION: 28 ( 4) Broadcast address 192.168.5.255 OPTION: 3 ( 4) Routers 192.168.5.1 OPTION: 6 ( 4) DNS server 192.168.5.1 OPTION: 15 ( 8) Domainname home.pvt OPTION: 12 ( 3) Host name Nokia_E63 TIME: 2009-10-30 12:15:34.922 IP: 0.0.0.0 (1:22:33:44:55:66) 255.255.255.255 (ff:ff:ff:ff:ff:ff) OP: 1 (BOOTPREQUEST) HTYPE: 1 (Ethernet) HLEN: 6 HOPS: 0 XID: c3f93d53 SECS: 2 FLAGS: 7f80 CIADDR: 0.0.0.0 YIADDR: 0.0.0.0 SIADDR: 0.0.0.0 GIADDR: 0.0.0.0 CHADDR: 11:22:33:44:55:66:00:00:00:00:00:00:00:00:00:00 SNAME: . FNAME: . OPTION: 53 ( 1) DHCP message type 1 (DHCPDISCOVER) OPTION: 50 ( 4) Request IP address 0.0.0.0 OPTION: 61 ( 7) Client-identifier 01:11:22:33:44:55:66 OPTION: 55 ( 7) Parameter Request List 12 (Host name) 6 (DNS server) 15 (Domainname) 1 (Subnet mask) 3 (Routers) 28 (Broadcast address) 120 (SIP Servers DHCP Option) OPTION: 57 ( 2) Maximum DHCP message size 576 TIME: 2009-10-30 12:15:34.922 IP: 0.0.0.0 (1:22:33:44:55:66) 255.255.255.255 (ff:ff:ff:ff:ff:ff) OP: 1 (BOOTPREQUEST) HTYPE: 1 (Ethernet) HLEN: 6 HOPS: 0 XID: c3f93d53 SECS: 2 FLAGS: 7f80 CIADDR: 0.0.0.0 YIADDR: 0.0.0.0 SIADDR: 0.0.0.0 GIADDR: 0.0.0.0 CHADDR: 11:22:33:44:55:66:00:00:00:00:00:00:00:00:00:00 SNAME: . 
FNAME: . OPTION: 53 ( 1) DHCP message type 1 (DHCPDISCOVER) OPTION: 50 ( 4) Request IP address 0.0.0.0 OPTION: 61 ( 7) Client-identifier 01:11:22:33:44:55:66 OPTION: 55 ( 7) Parameter Request List 12 (Host name) 6 (DNS server) 15 (Domainname) 1 (Subnet mask) 3 (Routers) 28 (Broadcast address) 120 (SIP Servers DHCP Option) OPTION: 57 ( 2) Maximum DHCP message size 576 TIME: 2009-10-30 12:15:34.923 IP: 192.168.5.1 (a:bb:cc:dd:ee:ff) 255.255.255.255 (ff:ff:ff:ff:ff:ff) OP: 2 (BOOTPREPLY) HTYPE: 1 (Ethernet) HLEN: 6 HOPS: 0 XID: c3f93d53 SECS: 2 FLAGS: 7f80 CIADDR: 0.0.0.0 YIADDR: 192.168.5.21 SIADDR: 192.168.5.1 GIADDR: 0.0.0.0 CHADDR: 11:22:33:44:55:66:00:00:00:00:00:00:00:00:00:00 SNAME: . FNAME: . OPTION: 53 ( 1) DHCP message type 2 (DHCPOFFER) OPTION: 54 ( 4) Server identifier 192.168.5.1 OPTION: 51 ( 4) IP address leasetime 18000 (5h) OPTION: 58 ( 4) T1 9000 (2h30m) OPTION: 59 ( 4) T2 15750 (4h22m30s) OPTION: 1 ( 4) Subnet mask 255.255.255.0 OPTION: 28 ( 4) Broadcast address 192.168.5.255 OPTION: 3 ( 4) Routers 192.168.5.1 OPTION: 6 ( 4) DNS server 192.168.5.1 OPTION: 15 ( 8) Domainname home.pvt OPTION: 12 ( 3) Host name Nokia_E63 TIME: 2009-10-30 12:15:38.919 IP: 0.0.0.0 (1:22:33:44:55:66) 255.255.255.255 (ff:ff:ff:ff:ff:ff) OP: 1 (BOOTPREQUEST) HTYPE: 1 (Ethernet) HLEN: 6 HOPS: 0 XID: c3f93d53 SECS: 6 FLAGS: 7f80 CIADDR: 0.0.0.0 YIADDR: 0.0.0.0 SIADDR: 0.0.0.0 GIADDR: 0.0.0.0 CHADDR: 11:22:33:44:55:66:00:00:00:00:00:00:00:00:00:00 SNAME: . FNAME: . OPTION: 53 ( 1) DHCP message type 1 (DHCPDISCOVER) OPTION: 50 ( 4) Request IP address 0.0.0.0 OPTION: 61 ( 7) Client-identifier 01:11:22:33:44:55:66 OPTION: 55 ( 7) Parameter Request List 12 (Host name) 6 (DNS server) 15 (Domainname) 1 (Subnet mask) 3 (Routers) 28 (Broadcast address) 120 (SIP Servers DHCP Option) OPTION: 57 ( 2) Maximum DHCP message size 576 TIME: 2009-10-30 12:15:38.920 IP: 0.0.0.0 (1:22:33:44:55:66) 255.255.255.255 (ff:ff:ff:ff:ff:ff) OP: 1 (BOOTPREQUEST) HTYPE: 1 (Ethernet) HLEN: 6 HOPS: 0 XID: c3f93d53 SECS: 6 FLAGS: 7f80 CIADDR: 0.0.0.0 YIADDR: 0.0.0.0 SIADDR: 0.0.0.0 GIADDR: 0.0.0.0 CHADDR: 11:22:33:44:55:66:00:00:00:00:00:00:00:00:00:00 SNAME: . FNAME: . OPTION: 53 ( 1) DHCP message type 1 (DHCPDISCOVER) OPTION: 50 ( 4) Request IP address 0.0.0.0 OPTION: 61 ( 7) Client-identifier 01:11:22:33:44:55:66 OPTION: 55 ( 7) Parameter Request List 12 (Host name) 6 (DNS server) 15 (Domainname) 1 (Subnet mask) 3 (Routers) 28 (Broadcast address) 120 (SIP Servers DHCP Option) OPTION: 57 ( 2) Maximum DHCP message size 576 TIME: 2009-10-30 12:15:38.921 IP: 192.168.5.1 (a:bb:cc:dd:ee:ff) 255.255.255.255 (ff:ff:ff:ff:ff:ff) OP: 2 (BOOTPREPLY) HTYPE: 1 (Ethernet) HLEN: 6 HOPS: 0 XID: c3f93d53 SECS: 6 FLAGS: 7f80 CIADDR: 0.0.0.0 YIADDR: 192.168.5.21 SIADDR: 192.168.5.1 GIADDR: 0.0.0.0 CHADDR: 11:22:33:44:55:66:00:00:00:00:00:00:00:00:00:00 SNAME: . FNAME: . 
OPTION: 53 ( 1) DHCP message type 2 (DHCPOFFER) OPTION: 54 ( 4) Server identifier 192.168.5.1 OPTION: 51 ( 4) IP address leasetime 18000 (5h) OPTION: 58 ( 4) T1 9000 (2h30m) OPTION: 59 ( 4) T2 15750 (4h22m30s) OPTION: 1 ( 4) Subnet mask 255.255.255.0 OPTION: 28 ( 4) Broadcast address 192.168.5.255 OPTION: 3 ( 4) Routers 192.168.5.1 OPTION: 6 ( 4) DNS server 192.168.5.1 OPTION: 15 ( 8) Domainname home.pvt OPTION: 12 ( 3) Host name Nokia_E63 TIME: 2009-10-30 12:15:46.944 IP: 0.0.0.0 (1:22:33:44:55:66) 255.255.255.255 (ff:ff:ff:ff:ff:ff) OP: 1 (BOOTPREQUEST) HTYPE: 1 (Ethernet) HLEN: 6 HOPS: 0 XID: ccafe769 SECS: 14 FLAGS: 7f80 CIADDR: 0.0.0.0 YIADDR: 0.0.0.0 SIADDR: 0.0.0.0 GIADDR: 0.0.0.0 CHADDR: 11:22:33:44:55:66:00:00:00:00:00:00:00:00:00:00 SNAME: . FNAME: . OPTION: 53 ( 1) DHCP message type 1 (DHCPDISCOVER) OPTION: 50 ( 4) Request IP address 0.0.0.0 OPTION: 61 ( 7) Client-identifier 01:11:22:33:44:55:66 OPTION: 55 ( 7) Parameter Request List 12 (Host name) 6 (DNS server) 15 (Domainname) 1 (Subnet mask) 3 (Routers) 28 (Broadcast address) 120 (SIP Servers DHCP Option) OPTION: 57 ( 2) Maximum DHCP message size 576 TIME: 2009-10-30 12:15:46.944 IP: 0.0.0.0 (1:22:33:44:55:66) 255.255.255.255 (ff:ff:ff:ff:ff:ff) OP: 1 (BOOTPREQUEST) HTYPE: 1 (Ethernet) HLEN: 6 HOPS: 0 XID: ccafe769 SECS: 14 FLAGS: 7f80 CIADDR: 0.0.0.0 YIADDR: 0.0.0.0 SIADDR: 0.0.0.0 GIADDR: 0.0.0.0 CHADDR: 11:22:33:44:55:66:00:00:00:00:00:00:00:00:00:00 SNAME: . FNAME: . OPTION: 53 ( 1) DHCP message type 1 (DHCPDISCOVER) OPTION: 50 ( 4) Request IP address 0.0.0.0 OPTION: 61 ( 7) Client-identifier 01:11:22:33:44:55:66 OPTION: 55 ( 7) Parameter Request List 12 (Host name) 6 (DNS server) 15 (Domainname) 1 (Subnet mask) 3 (Routers) 28 (Broadcast address) 120 (SIP Servers DHCP Option) OPTION: 57 ( 2) Maximum DHCP message size 576 TIME: 2009-10-30 12:15:46.945 IP: 192.168.5.1 (a:bb:cc:dd:ee:ff) 255.255.255.255 (ff:ff:ff:ff:ff:ff) OP: 2 (BOOTPREPLY) HTYPE: 1 (Ethernet) HLEN: 6 HOPS: 0 XID: ccafe769 SECS: 14 FLAGS: 7f80 CIADDR: 0.0.0.0 YIADDR: 192.168.5.21 SIADDR: 192.168.5.1 GIADDR: 0.0.0.0 CHADDR: 11:22:33:44:55:66:00:00:00:00:00:00:00:00:00:00 SNAME: . FNAME: . OPTION: 53 ( 1) DHCP message type 2 (DHCPOFFER) OPTION: 54 ( 4) Server identifier 192.168.5.1 OPTION: 51 ( 4) IP address leasetime 18000 (5h) OPTION: 58 ( 4) T1 9000 (2h30m) OPTION: 59 ( 4) T2 15750 (4h22m30s) OPTION: 1 ( 4) Subnet mask 255.255.255.0 OPTION: 28 ( 4) Broadcast address 192.168.5.255 OPTION: 3 ( 4) Routers 192.168.5.1 OPTION: 6 ( 4) DNS server 192.168.5.1 OPTION: 15 ( 8) Domainname home.pvt OPTION: 12 ( 3) Host name Nokia_E63 TIME: 2009-10-30 12:15:48.952 IP: 0.0.0.0 (1:22:33:44:55:66) 255.255.255.255 (ff:ff:ff:ff:ff:ff) OP: 1 (BOOTPREQUEST) HTYPE: 1 (Ethernet) HLEN: 6 ... and so on ...

    Read the article

  • iPod touch has extremely slow wifi, drops packets - only on my router

    - by mskfisher
    I just purchased an iPod Touch. I am having a lot of trouble with its speeds on my Tenda W311R, but it has no speed problems on my neighbor's Netgear router. It will connect and authenticate to my network, but the Speed Test app from speedtest.net shows rates near 20-50 kbps. If I run the speed test immediately after powering the iPod on, it will get speeds of 10-20 Mbps, like it should - but the speeds slow down to the kbps range abut 10-15 seconds afterward. I get the same behavior with encryption and without encryption, and regardless of N, G, or B compatibility settings in the router. I've tried rebooting the iPod and resetting the network settings, but it's still slow. I've tried pinging the iPod from another computer, and it shows about 40% packet loss: $ ping 192.168.0.111 PING 192.168.0.111 (192.168.0.111): 56 data bytes 64 bytes from 192.168.0.111: icmp_seq=0 ttl=64 time=14.188 ms 64 bytes from 192.168.0.111: icmp_seq=1 ttl=64 time=11.556 ms 64 bytes from 192.168.0.111: icmp_seq=2 ttl=64 time=5.675 ms 64 bytes from 192.168.0.111: icmp_seq=3 ttl=64 time=5.721 ms Request timeout for icmp_seq 4 64 bytes from 192.168.0.111: icmp_seq=5 ttl=64 time=6.491 ms Request timeout for icmp_seq 6 64 bytes from 192.168.0.111: icmp_seq=7 ttl=64 time=8.065 ms Request timeout for icmp_seq 8 Request timeout for icmp_seq 9 Request timeout for icmp_seq 10 64 bytes from 192.168.0.111: icmp_seq=11 ttl=64 time=9.605 ms Signal strength is good - I'm never more than 20 feet from my access point, and it exhibits the same behavior if I'm standing next to the router. It works just well enough to receive text, but videos don't work at all. App downloads are hit and miss. I've tweaked just about all of the settings I can see to tweak, and I'm at a loss. I have also been searching Google for the past three days, all to no avail. Any suggestions?

    Read the article

  • How to achieve the following RTO & RPO with logshipping only using SQL Server?

    - by Jimmy Chandra
    Trying to come up with a viable backup/restore and log shipping solution for achieving the following: a 15 minute Recovery Point Objective (no more than 15 minutes of data loss at any time) and a 5 minute Recovery Time Objective (must be able to get the db up and running again within 5 minutes). Considering using log shipping only (which I think is kind of pushing it, but I want to know if anyone else knows how to achieve this). Some other info for consideration: Using 40 Gbit / sec fiber channel between the primary and disaster recovery (DRC) sites. The sites are about 600 km apart. At close of business, the amount of data generated is predicted to be about 150 MB/sec. Log backup is planned for every 5 min. Doing some rough calculation I came up with the following numbers: 40 Gbit / sec = 5 MB / sec @ 100% network efficiency. 5 MB / sec = 300 MB / min. @ 300 MB / min, the total amount of data that can be transferred considering the 5 min RTO is about 1.5 GB, but that will leave no time for the actual backup and restore, so if we cut it down to 3 min log shipping time, which equals ~900 MB over 3 minutes at 100% network efficiency, that will leave about 1 min backup time and 1 minute restore time. Currently I don't have any information on whether the system being used is capable of restoring 900 MB in 1 min, but assume it can. For the COB scenario... 150 MB/sec, and considering the 3 min log shipping time, that should equal about 27 GB of data over 3 mins...??? I think this is where the SLA will break... since there is no way to transfer 27 GB of data over a 40 Gbit/sec line in 3 min. Can I get someone else's opinion? I am thinking database mirroring might be a better answer for this...
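    For what it's worth, the conclusion is sensitive to the unit conversions: 40 Gbit/s works out to roughly 5 GB/s (5,000 MB/s), whereas 5 MB/s corresponds to a 40 Mbit/s link, so the figures above are internally inconsistent. A quick sketch using the stated numbers (100% efficiency assumed):

        # Unit-conversion check for the figures in the question (illustrative only).
        link_bits_per_s = 40e9                        # stated link speed: 40 Gbit/s
        link_mb_per_s = link_bits_per_s / 8 / 1e6     # ~5000 MB/s, i.e. ~5 GB/s

        backlog_mb = 150 * 3 * 60                     # 150 MB/s at close of business for 3 minutes ~ 27 GB
        print(f"link ~ {link_mb_per_s:.0f} MB/s; 27 GB backlog moves in ~ {backlog_mb / link_mb_per_s:.1f} s")
        print(f"at 5 MB/s the same backlog would need ~ {backlog_mb / 5 / 60:.0f} minutes")

    Whether the SLA is feasible therefore hinges on which of those two link speeds is the real one.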

    Read the article

  • How to debug slow queries in Django+Postgres

    - by lacker
    My database queries from Django are starting to take 1-2 seconds and I'm having trouble figuring out why. Not too big a site, about 1-2 requests per second (that hit Django; static files are just served from nginx.) The thing that confuses me is, I can replicate the slowness in the Django shell using debug mode. But when I issue the exact same queries at an sql prompt they are fast. It takes about a second for a query to return, but when I check connection.queries it reports the time as under 10 ms. Here's an example (from the Django shell): >>> p = PlayerData.objects.get(uid="100000521952372") >>> a = time.time(); p.save(); print time.time() - a 1.96812295914 >>> for d in connection.queries: print d["time"] ... 0.002 0.000 0.000 How can I figure out where this extra time is being spent? I'm using Apache+mod_wsgi in daemon mode, but this happens with just the django shell as well, so I figure it is not apache-related.
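    One way to see where the missing ~2 seconds go is to profile the save() call itself from the Django shell; if the time shows up in socket or SSL calls rather than in the database driver's execute, the delay is outside query execution (connection setup, DNS resolution, per-request connection overhead, and so on). A minimal sketch (the app/model import path is hypothetical):

        import cProfile
        import pstats

        from myapp.models import PlayerData   # hypothetical import path for the model above

        p = PlayerData.objects.get(uid="100000521952372")

        profiler = cProfile.Profile()
        profiler.enable()
        p.save()                               # the call that takes ~2 s
        profiler.disable()

        # Show the 20 slowest call sites by cumulative time.
        pstats.Stats(profiler).sort_stats("cumulative").print_stats(20)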

    Read the article

  • AWS elastic load balancer basic issues

    - by Jones
    I have an array of EC2 t1.micro instances behind a load balancer and each node can manage ~100 concurrent users before it starts to get wonky. i would THINK if i have 2 such instances it would allow my network to manage 200 concurrent users... apparently not. When i really slam the server (blitz.io) with a full 275 concurrents, it behaves the same as if there is just one node. it goes from 400ms response time to 1.6 seconds (which for a single t1.micro is expected, but not 6). So the question is, am i simply not doing something right or is ELB effectively worthless? Anyone have some wisdom on this? AB logs: Loadbalancer (3x m1.medium) Document Path: /ping/index.html Document Length: 185 bytes Concurrency Level: 100 Time taken for tests: 11.668 seconds Complete requests: 50000 Failed requests: 0 Write errors: 0 Non-2xx responses: 50001 Total transferred: 19850397 bytes HTML transferred: 9250185 bytes Requests per second: 4285.10 [#/sec] (mean) Time per request: 23.337 [ms] (mean) Time per request: 0.233 [ms] (mean, across all concurrent requests) Transfer rate: 1661.35 [Kbytes/sec] received Connection Times (ms) min mean[+/-sd] median max Connect: 1 2 4.3 2 63 Processing: 2 21 15.1 19 302 Waiting: 2 21 15.0 19 261 Total: 3 23 15.7 21 304 Single instance (1x m1.medium direct connection) Document Path: /ping/index.html Document Length: 185 bytes Concurrency Level: 100 Time taken for tests: 9.597 seconds Complete requests: 50000 Failed requests: 0 Write errors: 0 Non-2xx responses: 50001 Total transferred: 19850397 bytes HTML transferred: 9250185 bytes Requests per second: 5210.19 [#/sec] (mean) Time per request: 19.193 [ms] (mean) Time per request: 0.192 [ms] (mean, across all concurrent requests) Transfer rate: 2020.01 [Kbytes/sec] received Connection Times (ms) min mean[+/-sd] median max Connect: 1 9 128.9 3 3010 Processing: 1 10 8.7 9 141 Waiting: 1 9 8.7 8 140 Total: 2 19 129.0 12 3020

    Read the article

  • How can I disable 'natural breaks' in Workrave?

    - by Pixelastic
    I've just discovered Workrave, and was trying to use it along the Pomodoro technique (5mn break every 25mn). But the concept of 'natural breaks' of Workrave seems to interfere with what I'm trying to achieve. Workrave tries to guess that I'm doing a natural break if I stop using my mouse and keyboard for longer than 5s. It then stops the work timer, and start counting time as if I was doing my break. Here is a typical example : I've configured a 5mn rest break every 25mn. I start working. 10mn later, I receive a phone call, or start talking with a colleague, or any work-related action that do not need either keyboard nor mouse. Workrave then stops counting my time as work time, and starts its rest timer. If my phone call is shorter than 5mn, then Workrave will resume its timer where it stopped it. Meaning that my time on the phone is not counted as work time, and so my break time is pushed a few minutes later than it should be. Even worse, if my phone call is longer than 5mn, then Workrave count it as a complete rest break, and when I'll resume working, it will restart its timer completly. I'm looking for either a way to disable the natural breaks, or increase the 'inactivity time' from 5s to maybe ~1mn. Or maybe an other angle to look at the natural breaks that might work with the Pomodoro technique (forced 5mn breaks every 25mn). I'm using Ubuntu 11.10.

    Read the article

  • Expire Files In A Folder: Delete Files After x Days

    - by Brett G
    I'm looking to make a "Drop Folder" in a windows shared drive that is accessible to everyone. I'd like files to be deleted automagically if they sit in the folder for more than X days. However, it seems like all methods I've found to do this, use the last modified date, last access time, or creation date of a file. I'm trying to make this a folder that a user can drop files in to share with somebody. If someone copies or moves files into here, I'd like the clock to start ticking at this point. However, the last modified date and creation date of a file will not be updated unless someone actually modifies the file. The last access time is updated too frequently... it seems that just opening a directory in windows explorer will update the last access time. Anyone know of a solution to this? I'm thinking that cataloging the hash of files on a daily basis and then expiring files based on hashes older than a certain date might be a solution.... but taking hashes of files can be time consuming. Any ideas would be greatly appreciated! Note: I've already looked at quite a lot of answers on here... looked into File Server Resource Monitor, powershell scripts, batch scripts, etc. They still use the last access time, last modified time or creation time... which, as described, do not fit the above needs.

    Read the article

  • How can I make an AdvancedDataGrid re-display its labels when the results of the labelFunction change?

    - by Chris R
    I have an AdvancedDataGrid with a custom label function whose value can change based on other form state (specifically, there's a drop-down to choose the time display format for some columns). Right now, I have this labelFunction:

    internal function formatColumnTime(item: Object, column: AdvancedDataGridColumn): String {
        var seconds: Number = item[column.dataField];
        return timeFormat.selectedItem.labelFunction(seconds);
    }

    internal function formatTimeAsInterval(time: Number): String {
        if (isNaN(time)) return "";
        var integerTime: int = Math.round(time);
        var seconds: int = integerTime % 60;
        integerTime = integerTime / 60;
        var minutes: int = integerTime % 60;
        var hours: int = integerTime / 60;
        return printf("%02d:%02d:%02d", hours, minutes, seconds);
    }

    internal function formatTimeAsFractions(time: Number): String {
        if (isNaN(time)) return "";
        var hours: Number = time / 3600.0;
        return new String(Math.round(hours * 100) / 100);
    }

    ... and the timeFormat object is a combo box with items whose labelFunction attributes are formatTimeAsFractions and formatTimeAsInterval. The columns that have time formats use formatColumnTime as their labelFunction value, because extracting the seconds in that function and passing it in to the formatters made for a more testable app (IMHO).

    So, when the timeFormat.selectedItem value changes, I want to force my grid to re-calculate the labels of these columns. What method must I call on it? invalidateProperties() didn't work, so that's out.

    Read the article

  • Search multiple datepicker on same grid

    - by DHF
    I'm using multiple datepicker on same grid and I face the problem to get a proper result. I used 3 datepicker in 1 grid. Only the first datepicker (Order Date)is able to output proper result while the other 2 datepicker (Start Date & End Date) are not able to generate proper result. There is no problem with the query, so could you find out what's going on here? Thanks in advance! php wrapper <?php ob_start(); require_once 'config.php'; // include the jqGrid Class require_once "php/jqGrid.php"; // include the PDO driver class require_once "php/jqGridPdo.php"; // include the datepicker require_once "php/jqCalendar.php"; // Connection to the server $conn = new PDO(DB_DSN,DB_USER,DB_PASSWORD); // Tell the db that we use utf-8 $conn->query("SET NAMES utf8"); // Create the jqGrid instance $grid = new jqGridRender($conn); // Write the SQL Query $grid->SelectCommand = "SELECT c.CompanyID, c.CompanyCode, c.CompanyName, c.Area, o.OrderCode, o.Date, m.maID ,m.System, m.Status, m.StartDate, m.EndDate, m.Type FROM company c, orders o, maintenance_agreement m WHERE c.CompanyID = o.CompanyID AND o.OrderID = m.OrderID "; // Set the table to where you update the data $grid->table = 'maintenance_agreement'; // set the ouput format to json $grid->dataType = 'json'; // Let the grid create the model $grid->setPrimaryKeyId('maID'); // Let the grid create the model $grid->setColModel(); // Set the url from where we obtain the data $grid->setUrl('grouping_ma_details.php'); // Set grid caption using the option caption $grid->setGridOptions(array( "sortable"=>true, "rownumbers"=>true, "caption"=>"Group by Maintenance Agreement", "rowNum"=>20, "height"=>'auto', "width"=>1300, "sortname"=>"maID", "hoverrows"=>true, "rowList"=>array(10,20,50), "footerrow"=>false, "userDataOnFooter"=>false, "grouping"=>true, "groupingView"=>array( "groupField" => array('CompanyName'), "groupColumnShow" => array(true), //show or hide area column "groupText" =>array('<b> Company Name: {0}</b>',), "groupDataSorted" => true, "groupSummary" => array(true) ) )); if(isset($_SESSION['login_admin'])) { $grid->addCol(array( "name"=>"Action", "formatter"=>"actions", "editable"=>false, "sortable"=>false, "resizable"=>false, "fixed"=>true, "width"=>60, "formatoptions"=>array("keys"=>true), "search"=>false ), "first"); } // Change some property of the field(s) $grid->setColProperty("CompanyID", array("label"=>"ID","hidden"=>true,"width"=>30,"editable"=>false,"editoptions"=>array("readonly"=>"readonly"))); $grid->setColProperty("CompanyName", array("label"=>"Company Name","hidden"=>true,"editable"=>false,"width"=>150,"align"=>"center","fixed"=>true)); $grid->setColProperty("CompanyCode", array("label"=>"Company Code","hidden"=>true,"width"=>50,"align"=>"center")); $grid->setColProperty("OrderCode", array("label"=>"Order Code","width"=>110,"editable"=>false,"align"=>"center","fixed"=>true)); $grid->setColProperty("maID", array("hidden"=>true)); $grid->setColProperty("System", array("width"=>150,"fixed"=>true,"align"=>"center")); $grid->setColProperty("Type", array("width"=>280,"fixed"=>true)); $grid->setColProperty("Status", array("width"=>70,"align"=>"center","edittype"=>"select","editoptions"=>array("value"=>"Yes:Yes;No:No"),"fixed"=>true)); $grid->setSelect('System', "SELECT DISTINCT System, System AS System FROM master_ma_system ORDER BY System", false, true, true, array(""=>"All")); $grid->setSelect('Type', "SELECT DISTINCT Type, Type AS Type FROM master_ma_type ORDER BY Type", false, true, true, array(""=>"All")); 
$grid->setColProperty("StartDate", array("label"=>"Start Date","width"=>120,"align"=>"center","fixed"=>true, "formatter"=>"date", "formatoptions"=>array("srcformat"=>"Y-m-d H:i:s","newformat"=>"d M Y") )); // this is only in this case since the orderdate is set as date time $grid->setUserTime("d M Y"); $grid->setUserDate("d M Y"); $grid->setDatepicker("StartDate",array("buttonOnly"=>false)); $grid->datearray = array('StartDate'); $grid->setColProperty("EndDate", array("label"=>"End Date","width"=>120,"align"=>"center","fixed"=>true, "formatter"=>"date", "formatoptions"=>array("srcformat"=>"Y-m-d H:i:s","newformat"=>"d M Y") )); // this is only in this case since the orderdate is set as date time $grid->setUserTime("d M Y"); $grid->setUserDate("d M Y"); $grid->setDatepicker("EndDate",array("buttonOnly"=>false)); $grid->datearray = array('EndDate'); $grid->setColProperty("Date", array("label"=>"Order Date","width"=>100,"editable"=>false,"align"=>"center","fixed"=>true, "formatter"=>"date", "formatoptions"=>array("srcformat"=>"Y-m-d H:i:s","newformat"=>"d M Y") )); // this is only in this case since the orderdate is set as date time $grid->setUserTime("d M Y"); $grid->setUserDate("d M Y"); $grid->setDatepicker("Date",array("buttonOnly"=>false)); $grid->datearray = array('Date'); // This command is executed after edit $maID = jqGridUtils::GetParam('maID'); $Status = jqGridUtils::GetParam('Status'); $StartDate = jqGridUtils::GetParam('StartDate'); $EndDate = jqGridUtils::GetParam('EndDate'); $Type = jqGridUtils::GetParam('Type'); // This command is executed immediatley after edit occur. $grid->setAfterCrudAction('edit', "UPDATE maintenance_agreement SET m.Status=?, m.StartDate=?, m.EndDate=?, m.Type=? WHERE m.maID=?", array($Status,$StartDate,$EndDate,$Type,$maID)); $selectorder = <<<ORDER function(rowid, selected) { if(rowid != null) { jQuery("#detail").jqGrid('setGridParam',{postData:{CompanyID:rowid}}); jQuery("#detail").trigger("reloadGrid"); // Enable CRUD buttons in navigator when a row is selected jQuery("#add_detail").removeClass("ui-state-disabled"); jQuery("#edit_detail").removeClass("ui-state-disabled"); jQuery("#del_detail").removeClass("ui-state-disabled"); } } ORDER; // We should clear the grid data on second grid on sorting, paging, etc. 
$cleargrid = <<<CLEAR function(rowid, selected) { // clear the grid data and footer data jQuery("#detail").jqGrid('clearGridData',true); // Disable CRUD buttons in navigator when a row is not selected jQuery("#add_detail").addClass("ui-state-disabled"); jQuery("#edit_detail").addClass("ui-state-disabled"); jQuery("#del_detail").addClass("ui-state-disabled"); } CLEAR; $grid->setGridEvent('onSelectRow', $selectorder); $grid->setGridEvent('onSortCol', $cleargrid); $grid->setGridEvent('onPaging', $cleargrid); $grid->setColProperty("Area", array("width"=>100,"hidden"=>false,"editable"=>false,"fixed"=>true)); $grid->setColProperty("HeadCount", array("label"=>"Head Count","align"=>"center", "width"=>100,"hidden"=>false,"fixed"=>true)); $grid->setSelect('Area', "SELECT DISTINCT AreaName, AreaName AS Area FROM master_area ORDER BY AreaName", false, true, true, array(""=>"All")); $grid->setSelect('CompanyName', "SELECT DISTINCT CompanyName, CompanyName AS CompanyName FROM company ORDER BY CompanyName", false, true, true, array(""=>"All")); $custom = <<<CUSTOM jQuery("#getselected").click(function(){ var selr = jQuery('#grid').jqGrid('getGridParam','selrow'); if(selr) { window.open('http://www.smartouch-cdms.com/order.php?CompanyID='+selr); } else alert("No selected row"); return false; }); CUSTOM; $grid->setJSCode($custom); // Enable toolbar searching $grid->toolbarfilter = true; $grid->setFilterOptions(array("stringResult"=>true,"searchOnEnter"=>false,"defaultSearch"=>"cn")); // Enable navigator $grid->navigator = true; // disable the delete operation programatically for that table $grid->del = false; // we need to write some custom code when we are in delete mode. // get the grid operation parameter to see if we are in delete mode // jqGrid sends the "oper" parameter to identify the needed action $deloper = $_POST['oper']; // det the company id $cid = $_POST['CompanyID']; // if the operation is del and the companyid is set if($deloper == 'del' && isset($cid) ) { // the two tables are linked via CompanyID, so let try to delete the records in both tables try { jqGridDB::beginTransaction($conn); $comp = jqGridDB::prepare($conn, "DELETE FROM company WHERE CompanyID= ?", array($cid)); $cont = jqGridDB::prepare($conn,"DELETE FROM contact WHERE CompanyID = ?", array($cid)); jqGridDB::execute($comp); jqGridDB::execute($cont); jqGridDB::commit($conn); } catch(Exception $e) { jqGridDB::rollBack($conn); echo $e->getMessage(); } } // Enable only deleting if(isset($_SESSION['login_admin'])) { $grid->setNavOptions('navigator', array("pdf"=>true, "excel"=>true,"add"=>false,"edit"=>true,"del"=>false,"view"=>true, "search"=>true)); } else $grid->setNavOptions('navigator', array("pdf"=>true, "excel"=>true,"add"=>false,"edit"=>false,"del"=>false,"view"=>true, "search"=>true)); // In order to enable the more complex search we should set multipleGroup option // Also we need show query roo $grid->setNavOptions('search', array( "multipleGroup"=>false, "showQuery"=>true )); // Set different filename $grid->exportfile = 'Company.xls'; // Close the dialog after editing $grid->setNavOptions('edit',array("closeAfterEdit"=>true,"editCaption"=>"Update Company","bSubmit"=>"Update","dataheight"=>"auto")); $grid->setNavOptions('add',array("closeAfterAdd"=>true,"addCaption"=>"Add New Company","bSubmit"=>"Update","dataheight"=>"auto")); $grid->setNavOptions('view',array("Caption"=>"View Company","dataheight"=>"auto","width"=>"1100")); ob_end_clean(); //solve TCPDF error // Enjoy $grid->renderGrid('#grid','#pager',true, null, null, 
true,true); $conn = null; ?> javascript code jQuery(document).ready(function ($) { jQuery('#grid').jqGrid({ "width": 1300, "hoverrows": true, "viewrecords": true, "jsonReader": { "repeatitems": false, "subgrid": { "repeatitems": false } }, "xmlReader": { "repeatitems": false, "subgrid": { "repeatitems": false } }, "gridview": true, "url": "session_ma_details.php", "editurl": "session_ma_details.php", "cellurl": "session_ma_details.php", "sortable": true, "rownumbers": true, "caption": "Group by Maintenance Agreement", "rowNum": 20, "height": "auto", "sortname": "maID", "rowList": [10, 20, 50], "footerrow": false, "userDataOnFooter": false, "grouping": true, "groupingView": { "groupField": ["CompanyName"], "groupColumnShow": [false], "groupText": ["<b> Company Name: {0}</b>"], "groupDataSorted": true, "groupSummary": [true] }, "onSelectRow": function (rowid, selected) { if (rowid != null) { jQuery("#detail").jqGrid('setGridParam', { postData: { CompanyID: rowid } }); jQuery("#detail").trigger("reloadGrid"); // Enable CRUD buttons in navigator when a row is selected jQuery("#add_detail").removeClass("ui-state-disabled"); jQuery("#edit_detail").removeClass("ui-state-disabled"); jQuery("#del_detail").removeClass("ui-state-disabled"); } }, "onSortCol": function (rowid, selected) { // clear the grid data and footer data jQuery("#detail").jqGrid('clearGridData', true); // Disable CRUD buttons in navigator when a row is not selected jQuery("#add_detail").addClass("ui-state-disabled"); jQuery("#edit_detail").addClass("ui-state-disabled"); jQuery("#del_detail").addClass("ui-state-disabled"); }, "onPaging": function (rowid, selected) { // clear the grid data and footer data jQuery("#detail").jqGrid('clearGridData', true); // Disable CRUD buttons in navigator when a row is not selected jQuery("#add_detail").addClass("ui-state-disabled"); jQuery("#edit_detail").addClass("ui-state-disabled"); jQuery("#del_detail").addClass("ui-state-disabled"); }, "datatype": "json", "colModel": [ { "name": "Action", "formatter": "actions", "editable": false, "sortable": false, "resizable": false, "fixed": true, "width": 60, "formatoptions": { "keys": true }, "search": false }, { "name": "CompanyID", "index": "CompanyID", "sorttype": "int", "label": "ID", "hidden": true, "width": 30, "editable": false, "editoptions": { "readonly": "readonly" } }, { "name": "CompanyCode", "index": "CompanyCode", "sorttype": "string", "label": "Company Code", "hidden": true, "width": 50, "align": "center", "editable": true }, { "name": "CompanyName", "index": "CompanyName", "sorttype": "string", "label": "Company Name", "hidden": true, "editable": false, "width": 150, "align": "center", "fixed": true, "edittype": "select", "editoptions": { "value": "Aquatex Industries:Aquatex Industries;Benithem Sdn Bhd:Benithem Sdn Bhd;Daily Bakery Sdn Bhd:Daily Bakery Sdn Bhd;Eurocor Asia Sdn Bhd:Eurocor Asia Sdn Bhd;Evergrown Technology:Evergrown Technology;Goldpar Precision:Goldpar Precision;MicroSun Technologies Asia:MicroSun Technologies Asia;NCI Industries Sdn Bhd:NCI Industries Sdn Bhd;PHHP Marketing:PHHP Marketing;Smart Touch Technology:Smart Touch Technology;THOSCO Treatech:THOSCO Treatech;YHL Trading (Johor) Sdn Bhd:YHL Trading (Johor) Sdn Bhd;Zenxin Agri-Organic Food:Zenxin Agri-Organic Food", "separator": ":", "delimiter": ";" }, "stype": "select", "searchoptions": { "value": ":All;Aquatex Industries:Aquatex Industries;Benithem Sdn Bhd:Benithem Sdn Bhd;Daily Bakery Sdn Bhd:Daily Bakery Sdn Bhd;Eurocor Asia Sdn Bhd:Eurocor Asia Sdn 
Bhd;Evergrown Technology:Evergrown Technology;Goldpar Precision:Goldpar Precision;MicroSun Technologies Asia:MicroSun Technologies Asia;NCI Industries Sdn Bhd:NCI Industries Sdn Bhd;PHHP Marketing:PHHP Marketing;Smart Touch Technology:Smart Touch Technology;THOSCO Treatech:THOSCO Treatech;YHL Trading (Johor) Sdn Bhd:YHL Trading (Johor) Sdn Bhd;Zenxin Agri-Organic Food:Zenxin Agri-Organic Food", "separator": ":", "delimiter": ";" } }, { "name": "Area", "index": "Area", "sorttype": "string", "width": 100, "hidden": true, "editable": false, "fixed": true, "edittype": "select", "editoptions": { "value": "Cemerlang:Cemerlang;Danga Bay:Danga Bay;Kulai:Kulai;Larkin:Larkin;Masai:Masai;Nusa Cemerlang:Nusa Cemerlang;Nusajaya:Nusajaya;Pasir Gudang:Pasir Gudang;Pekan Nenas:Pekan Nenas;Permas Jaya:Permas Jaya;Pontian:Pontian;Pulai:Pulai;Senai:Senai;Skudai:Skudai;Taman Gaya:Taman Gaya;Taman Johor Jaya:Taman Johor Jaya;Taman Molek:Taman Molek;Taman Pelangi:Taman Pelangi;Taman Sentosa:Taman Sentosa;Tebrau 4:Tebrau 4;Ulu Tiram:Ulu Tiram", "separator": ":", "delimiter": ";" }, "stype": "select", "searchoptions": { "value": ":All;Cemerlang:Cemerlang;Danga Bay:Danga Bay;Kulai:Kulai;Larkin:Larkin;Masai:Masai;Nusa Cemerlang:Nusa Cemerlang;Nusajaya:Nusajaya;Pasir Gudang:Pasir Gudang;Pekan Nenas:Pekan Nenas;Permas Jaya:Permas Jaya;Pontian:Pontian;Pulai:Pulai;Senai:Senai;Skudai:Skudai;Taman Gaya:Taman Gaya;Taman Johor Jaya:Taman Johor Jaya;Taman Molek:Taman Molek;Taman Pelangi:Taman Pelangi;Taman Sentosa:Taman Sentosa;Tebrau 4:Tebrau 4;Ulu Tiram:Ulu Tiram", "separator": ":", "delimiter": ";" } }, { "name": "OrderCode", "index": "OrderCode", "sorttype": "string", "label": "Order No.", "width": 110, "editable": false, "align": "center", "fixed": true }, { "name": "Date", "index": "Date", "sorttype": "date", "label": "Order Date", "width": 100, "editable": false, "align": "center", "fixed": true, "formatter": "date", "formatoptions": { "srcformat": "Y-m-d H:i:s", "newformat": "d M Y" }, "editoptions": { "dataInit": function(el) { setTimeout(function() { if (jQuery.ui) { if (jQuery.ui.datepicker) { jQuery(el).datepicker({ "disabled": false, "dateFormat": "dd M yy" }); jQuery('.ui-datepicker').css({ 'font-size': '75%' }); } } }, 100); } }, "searchoptions": { "dataInit": function(el) { setTimeout(function() { if (jQuery.ui) { if (jQuery.ui.datepicker) { jQuery(el).datepicker({ "disabled": false, "dateFormat": "dd M yy" }); jQuery('.ui-datepicker').css({ 'font-size': '75%' }); } } }, 100); } } }, { "name": "maID", "index": "maID", "sorttype": "int", "key": true, "hidden": true, "editable": true }, { "name": "System", "index": "System", "sorttype": "string", "width": 150, "fixed": true, "align": "center", "edittype": "select", "editoptions": { "value": "Payroll:Payroll;TMS:TMS;TMS & Payroll:TMS & Payroll", "separator": ":", "delimiter": ";" }, "stype": "select", "searchoptions": { "value": ":All;Payroll:Payroll;TMS:TMS;TMS & Payroll:TMS & Payroll", "separator": ":", "delimiter": ";" }, "editable": true }, { "name": "Status", "index": "Status", "sorttype": "string", "width": 70, "align": "center", "edittype": "select", "editoptions": { "value": "Yes:Yes;No:No" }, "fixed": true, "editable": true }, { "name": "StartDate", "index": "StartDate", "sorttype": "date", "label": "Start Date", "width": 120, "align": "center", "fixed": true, "formatter": "date", "formatoptions": { "srcformat": "Y-m-d H:i:s", "newformat": "d M Y" }, "editoptions": { "dataInit": function(el) { setTimeout(function() { if (jQuery.ui) { if 
(jQuery.ui.datepicker) { jQuery(el).datepicker({ "disabled": false, "dateFormat": "dd M yy" }); jQuery('.ui-datepicker').css({ 'font-size': '75%' }); } } }, 100); } }, "searchoptions": { "dataInit": function(el) { setTimeout(function() { if (jQuery.ui) { if (jQuery.ui.datepicker) { jQuery(el).datepicker({ "disabled": false, "dateFormat": "dd M yy" }); jQuery('.ui-datepicker').css({ 'font-size': '75%' }); } } }, 100); } }, "editable": true }, { "name": "EndDate", "index": "EndDate", "sorttype": "date", "label": "End Date", "width": 120, "align": "center", "fixed": true, "formatter": "date", "formatoptions": { "srcformat": "Y-m-d H:i:s", "newformat": "d M Y" }, "editoptions": { "dataInit": function(el) { setTimeout(function() { if (jQuery.ui) { if (jQuery.ui.datepicker) { jQuery(el).datepicker({ "disabled": false, "dateFormat": "dd M yy" }); jQuery('.ui-datepicker').css({ 'font-size': '75%' }); } } }, 100); } }, "searchoptions": { "dataInit": function(el) { setTimeout(function() { if (jQuery.ui) { if (jQuery.ui.datepicker) { jQuery(el).datepicker({ "disabled": false, "dateFormat": "dd M yy" }); jQuery('.ui-datepicker').css({ 'font-size': '75%' }); } } }, 100); } }, "editable": true }, { "name": "Type", "index": "Type", "sorttype": "string", "width": 530, "fixed": true, "edittype": "select", "editoptions": { "value": "Comprehensive MA:Comprehensive MA;FOC service, 20% spare part discount:FOC service, 20% spare part discount;Standard Package, FOC 1 time service, 20% spare part discount:Standard Package, FOC 1 time service, 20% spare part discount;Standard Package, FOC 2 time service, 20% spare part discount:Standard Package, FOC 2 time service, 20% spare part discount;Standard Package, FOC 3 time service, 20% spare part discount:Standard Package, FOC 3 time service, 20% spare part discount;Standard Package, FOC 4 time service, 20% spare part discount:Standard Package, FOC 4 time service, 20% spare part discount;Standard Package, FOC 6 time service, 20% spare part discount:Standard Package, FOC 6 time service, 20% spare part discount;Standard Package, no free:Standard Package, no free", "separator": ":", "delimiter": ";" }, "stype": "select", "searchoptions": { "value": ":All;Comprehensive MA:Comprehensive MA;FOC service, 20% spare part discount:FOC service, 20% spare part discount;Standard Package, FOC 1 time service, 20% spare part discount:Standard Package, FOC 1 time service, 20% spare part discount;Standard Package, FOC 2 time service, 20% spare part discount:Standard Package, FOC 2 time service, 20% spare part discount;Standard Package, FOC 3 time service, 20% spare part discount:Standard Package, FOC 3 time service, 20% spare part discount;Standard Package, FOC 4 time service, 20% spare part discount:Standard Package, FOC 4 time service, 20% spare part discount;Standard Package, FOC 6 time service, 20% spare part discount:Standard Package, FOC 6 time service, 20% spare part discount;Standard Package, no free:Standard Package, no free", "separator": ":", "delimiter": ";" }, "editable": true } ], "postData": { "oper": "grid" }, "prmNames": { "page": "page", "rows": "rows", "sort": "sidx", "order": "sord", "search": "_search", "nd": "nd", "id": "maID", "filter": "filters", "searchField": "searchField", "searchOper": "searchOper", "searchString": "searchString", "oper": "oper", "query": "grid", "addoper": "add", "editoper": "edit", "deloper": "del", "excel": "excel", "subgrid": "subgrid", "totalrows": "totalrows", "autocomplete": "autocmpl" }, "loadError": function(xhr, status, err) { try { 
jQuery.jgrid.info_dialog(jQuery.jgrid.errors.errcap, '<div class="ui-state-error">' + xhr.responseText + '</div>', jQuery.jgrid.edit.bClose, { buttonalign: 'right' } ); } catch(e) { alert(xhr.responseText); } }, "pager": "#pager" }); jQuery('#grid').jqGrid('navGrid', '#pager', { "edit": true, "add": false, "del": false, "search": true, "refresh": true, "view": true, "excel": true, "pdf": true, "csv": false, "columns": false }, { "drag": true, "resize": true, "closeOnEscape": true, "dataheight": "auto", "errorTextFormat": function (r) { return r.responseText; }, "closeAfterEdit": true, "editCaption": "Update Company", "bSubmit": "Update" }, { "drag": true, "resize": true, "closeOnEscape": true, "dataheight": "auto", "errorTextFormat": function (r) { return r.responseText; }, "closeAfterAdd": true, "addCaption": "Add New Company", "bSubmit": "Update" }, { "errorTextFormat": function (r) { return r.responseText; } }, { "drag": true, "closeAfterSearch": true, "multipleSearch": true }, { "drag": true, "resize": true, "closeOnEscape": true, "dataheight": "auto", "Caption": "View Company", "width": "1100" } ); jQuery('#grid').jqGrid('navButtonAdd', '#pager', { id: 'pager_excel', caption: '', title: 'Export To Excel', onClickButton: function (e) { try { jQuery("#grid").jqGrid('excelExport', { tag: 'excel', url: 'session_ma_details.php' }); } catch (e) { window.location = 'session_ma_details.php?oper=excel'; } }, buttonicon: 'ui-icon-newwin' }); jQuery('#grid').jqGrid('navButtonAdd', '#pager', { id: 'pager_pdf', caption: '', title: 'Export To Pdf', onClickButton: function (e) { try { jQuery("#grid").jqGrid('excelExport', { tag: 'pdf', url: 'session_ma_details.php' }); } catch (e) { window.location = 'session_ma_details.php?oper=pdf'; } }, buttonicon: 'ui-icon-print' }); jQuery('#grid').jqGrid('filterToolbar', { "stringResult": true, "searchOnEnter": false, "defaultSearch": "cn" }); jQuery("#getselected").click(function () { var selr = jQuery('#grid').jqGrid('getGridParam', 'selrow'); if (selr) { window.open('http://www.smartouch-cdms.com/order.php?CompanyID=' + selr); } else alert("No selected row"); return false; }); });

    Read the article

  • Google App Engine ClassNotPersistenceCapableException

    - by Frank
    I have the following class:

    import javax.jdo.annotations.IdGeneratorStrategy;
    import javax.jdo.annotations.IdentityType;
    import javax.jdo.annotations.PersistenceCapable;
    import javax.jdo.annotations.Persistent;
    import javax.jdo.annotations.PrimaryKey;
    import com.google.appengine.api.datastore.*;

    @PersistenceCapable(identityType=IdentityType.APPLICATION)
    public class PayPal_Message {
        @PrimaryKey
        @Persistent(valueStrategy=IdGeneratorStrategy.IDENTITY)
        private Long id;
        @Persistent
        private Text content;
        @Persistent
        private String time;

        public PayPal_Message(Text content, String time) {
            this.content = content;
            this.time = time;
        }

        public Long getId() { return id; }
        public Text getContent() { return content; }
        public String getTime() { return time; }
        public void setContent(Text content) { this.content = content; }
        public void setTime(String time) { this.time = time; }
    }

    It used to be in a package and worked fine. Now I have put all classes in the default package, which causes this error:

    org.datanucleus.jdo.exceptions.ClassNotPersistenceCapableException: The class "The class "PayPal_Message" is not persistable. This means that it either hasnt been enhanced, or that the enhanced version of the file is not in the CLASSPATH (or is hidden by an unenhanced version), or the Meta-Data/annotations for the class are not found." is not persistable. This means that it either hasnt been enhanced, or that the enhanced version of the file is not in the CLASSPATH (or is hidden by an unenhanced version), or the Meta-Data for the class is not found.
    NestedThrowables: org.datanucleus.exceptions.ClassNotPersistableException: The class "PayPal_Message" is not persistable. This means that it either hasnt been enhanced, or that the enhanced version of the file is not in the CLASSPATH (or is hidden by an unenhanced version), or the Meta-Data/annotations for the class are not found.

    What should I do to fix it?

    Read the article

  • How to set default date in date_select helper in Rails

    - by brad
    I'm trying to set up a date-of-birth helper in my Rails app (2.3.5). At present it is like so:

    <%= f.date_select :date_of_birth, :start_year => Time.now.year - 110, :end_year => Time.now.year %>

    This generates a perfectly functional set of date fields that work just fine, but... they default to today's date, which is not ideal for a date-of-birth field (I'm not sure what is, but unless you're running a neonatal unit, today's date seems less than ideal). I want it to read Jan 1 2010 instead (or 2011 or whatever year it happens to be). Using the :default option has proven unsuccessful. I've tried many possibilities, including:

    <%= f.date_select :date_of_birth, :default => {:year => Time.now.year, :month => 'Jan', :day => 1}, :start_year => Time.now.year - 110, :end_year => Time.now.year %>

    and

    <%= f.date_select :date_of_birth, :default => Time.local(2010,'Jan',1), :start_year => Time.now.year - 110, :end_year => Time.now.year %>

    Neither of these changes the behaviour of the first example. Does the :default option actually work as described? It seems that this should be a fairly straightforward thing to do. Ta.

    Read the article

  • Jquery: Display element relative to another

    - by namtax
    Hi, I am trying to display a drop-down list using jQuery every time a user clicks on a particular form field... At the moment the HTML stands as follows:

    // Form field to enter the time an event starts
    <div>
        <label for="eventTime"> Event Time </label>
        <input type = "text" name="eventTime" id="eventTime" class="time"> </label>
    </div>

    // Form field to enter the time an event finishes
    <div>
        <label for="eventEnd"> Event Ends</label>
        <input type = "text" name="eventEnds" id="eventEnds" class="time"> </label>
    </div>

    And the jQuery looks like so:

    // Display Time Picker
    $('.time').click(function(){
        var thisTime = $(this);
        var timePicker = '<ul class="timePicker">'
        timePicker += '<li>10.00am</li>'
        timePicker += '<li>11.00am</li>'
        timePicker += '<li>12.00am</li>'
        timePicker += '</ul>'
        $('.timePicker').hide().insertAfter(thisTime);
        //show the menu directly over the placeholder
        var pos = thisTime.offset();
        $(".timePicker").attr({
            top: pos.top
            left: pos.left
        });
        $(".timePicker").show();
    });

    However, when I click on one of the form fields with class "time", the timePicker drop-down appears, but it always appears in the same location, not next to the form field as I would like. Any ideas? Thanks

    Read the article

  • How to maintain the state of buttons in a custom ListView in Android

    - by Akshay
    I have custom ListView with three TextView three Button and three Chronometer. And the situation is I am loading the ListView properly.But while loading ListView I am disabling some button in the ListView by checking one parameter. Up to this point ListView is showing it's row properly. But when I am scrolling the ListView at that time previously enabled Button are getting disabled.What I am doing wrong I am not getting can one please point out my mistake Or any suggestion. Here is my Adapter class. public class OrderSmartKitchenAdapter extends BaseAdapter { private int flagDeliveryComplete = 0; private int flagPreparationComplete = 0; private int flagPreparationStarted = 0; private List<OrderitemdetailsBO> list = new ArrayList<OrderitemdetailsBO(); private int orderStatus; public OrderSmartKitchenAdapter() { // TODO Auto-generated constructor stub } public void setOrderList(List<OrderitemdetailsBO> orderList) { this.list = orderList; } @Override public int getCount() { // TODO Auto-generated method stub Log.i("OrderItemList Size :-", Integer.toString(list.size())); return list.size(); } @Override public Object getItem(int position) { // TODO Auto-generated method stub return null; } @Override public long getItemId(int position) { // TODO Auto-generated method stub return 0; } @Override public View getView(final int position, View convertView,ViewGroup parent) { // TODO Auto-generated method stub final ViewHolder viewHolder ; if (convertView == null) { layoutInflater = LayoutInflater.from(myContext); convertView = layoutInflater.inflate(R.layout.table_row_view,null); viewHolder = new ViewHolder(); viewHolder.txtTableNumber = (TextView) convertView.findViewById(R.id.txtTableNumber); viewHolder.txtMenuItem = (TextView) convertView.findViewById(R.id.txtMenuItem); viewHolder.txtQuantity = (TextView) convertView.findViewById(R.id.txtQuantity); viewHolder.txtOrderAcceptanceTime = (TextView) convertView.findViewById(R.id.txtOrderAcceptanceTime); viewHolder.txtElapsedTimeOfOrderAcceptance = (Chronometer) convertView.findViewById(R.id.txtElapsedTimeOfOrderAcceptance); viewHolder.btnPreparationStart = (Button) convertView.findViewById(R.id.btnPreparationStart); viewHolder.btnPreparationStart.setTag(position); viewHolder.txtElapsedTimeForPreparation = (Chronometer) convertView.findViewById(R.id.txtElapsedTimeForPrepatration); viewHolder.btnPreparationComplete = (Button) convertView.findViewById(R.id.btnPreparationCompleted); viewHolder.btnPreparationComplete.setTag(position); viewHolder.txtElapsedTimeForDeliveryComplete = (Chronometer) convertView.findViewById(R.id.txtElapsedTimeForCompleation); viewHolder.btnDeliveryComplete = (Button) convertView.findViewById(R.id.btnOrderComplete); viewHolder.btnDeliveryComplete.setTag(position); convertView.setTag(viewHolder); } else{ viewHolder = (ViewHolder)convertView.getTag(); viewHolder.btnDeliveryComplete.setTag(position); viewHolder.btnPreparationComplete.setTag(position); viewHolder.btnPreparationStart.setTag(position); } if (list.get(position) != null) { OrderitemdetailsBO orderitemdetailsBO = new OrderitemdetailsBO(); orderitemdetailsBO = list.get(position); viewHolder.txtTableNumber.setText(orderitemdetailsBO.getOrderitemid().toString()); viewHolder.txtMenuItem.setText(orderitemdetailsBO.getMenuitemname().toString()); viewHolder.txtQuantity.setText(orderitemdetailsBO.getQuantity().toString()); Log.i("Table Number :-", Long.toString(orderitemdetailsBO.getOrderitemid())); Log.i("Menu Name :-", orderitemdetailsBO.getMenuitemname().toString()); 
Log.i("Quantity", orderitemdetailsBO.getQuantity().toString()); Date acceptTime = new Date(); acceptTime = orderitemdetailsBO.getOrderdatetime(); viewHolder.txtOrderAcceptanceTime.setText(DateUtil.getDateAsString(acceptTime,"HH:mm")); Log.i("Order Accept Time :-", acceptTime.getMinutes() + ":"+ acceptTime.getSeconds()); orderStatus = orderitemdetailsBO.getOrderstatus(); Date preparationStartTime = new Date(); preparationStartTime = orderitemdetailsBO.getPreparationstarttime(); if(preparationStartTime != null) { Log.i("OrderSmartKitchenActivity", "2 Order Acceptance Time :-" + "Menu Item id "+ orderitemdetailsBO.getOrderitemid() + " Preparation Start time " + orderitemdetailsBO.getPreparationstarttime() ); viewHolder.txtElapsedTimeOfOrderAcceptance.stop(); Log.i("Preparation Start Time :-",preparationStartTime.getMinutes() + ":" + preparationStartTime.getSeconds()); viewHolder.txtElapsedTimeOfOrderAcceptance.setText(DateUtil.getDateAsString(preparationStartTime,"MM:ss")); viewHolder.txtElapsedTimeOfOrderAcceptance.stop(); viewHolder.btnPreparationStart.setEnabled(false); viewHolder.btnPreparationStart.setClickable(false); viewHolder.btnPreparationStart.setBackgroundColor(Color.LTGRAY); } else { Long n = acceptTime.getTime(); Log.i("OrderSmartKitchenActivity", "Order Acceptance Time :-" + "Menu Item id "+ orderitemdetailsBO.getOrderitemid() + " Acceptance time" + Long.toString(n) + " Preparation Start time " + orderitemdetailsBO.getPreparationstarttime() ); // Calculate Time difference viewHolder.txtElapsedTimeOfOrderAcceptance.setBase(SystemClock.elapsedRealtime() - System.currentTimeMillis() + n); viewHolder.txtElapsedTimeOfOrderAcceptance.getBase(); viewHolder.txtElapsedTimeOfOrderAcceptance.start(); viewHolder.txtElapsedTimeOfOrderAcceptance.setFormat("%s"); } viewHolder.btnPreparationStart.setOnClickListener(new OnClickListener() { @Override public void onClick(final View v) { // TODO Auto-generated method stub if (flagPreparationStarted == 0) { flagPreparationStarted++; v.startAnimation(playAnimation()); handler.postDelayed(new Runnable() { @Override public void run() { // TODO Auto-generated method stub v.clearAnimation(); Date currentTime = new Date(); // Set Preparation Start Time. viewHolder.txtElapsedTimeOfOrderAcceptance.stop(); Date setTime = new Date(currentTime.getTime() * 1000); OrderitemdetailsBO orderitemdetailsBO = list.get(position); orderitemdetailsBO.setPreparationstarttime(setTime); String orderDetails = "2"; String getPosition = Integer.toString(position); viewHolder.btnPreparationStart.setBackgroundColor(Color.LTGRAY); new sendOrderStatusToServer().execute(orderDetails,getPosition); } }, 5000); } else { handler.removeCallbacksAndMessages(null); v.clearAnimation(); flagPreparationStarted = 0; Log.i("Handler Removed. 
:-", "Here"); } } }); String preparationTime = orderitemdetailsBO.getOrderpreparationtime(); if(preparationTime != null && orderStatus == order_preparationComplete) { viewHolder.txtElapsedTimeForPreparation.setText(preparationTime); viewHolder.txtElapsedTimeForPreparation.stop(); viewHolder.btnPreparationComplete.getTag(position); viewHolder.btnPreparationComplete.setEnabled(false); viewHolder.btnPreparationComplete.setClickable(false); viewHolder.btnPreparationComplete.setBackgroundColor(Color.LTGRAY); } else if( orderStatus == order_preparationStart || orderStatus == orderReceived || orderStatus == order_delivered){ Long n = acceptTime.getTime(); Log.i("Preparation Start Time :-", Long.toString(n)); viewHolder.txtElapsedTimeForPreparation.setBase(SystemClock.elapsedRealtime() - System.currentTimeMillis() + n); viewHolder.txtElapsedTimeForPreparation.getBase(); viewHolder.txtElapsedTimeForPreparation.start(); viewHolder.txtElapsedTimeForPreparation.setFormat("%s"); } viewHolder.btnPreparationComplete.setOnClickListener(new OnClickListener() { @Override public void onClick(final View v) { // TODO Auto-generated method if (flagPreparationComplete == 0) { flagPreparationComplete++; v.startAnimation(playAnimation()); handler.postDelayed(new Runnable() { @Override public void run() { // TODO Auto-generated method stub v.clearAnimation(); OrderitemdetailsBO orderitemdetailsBO = list.get(position); Date date = orderitemdetailsBO.getPreparationstarttime(); if(date != null) { viewHolder.txtElapsedTimeForPreparation.stop(); Date currentTime = new Date(); Calendar calendar = Calendar.getInstance(); int minute = calendar.get(Calendar.MINUTE); int second = calendar.get(Calendar.SECOND); orderitemdetailsBO.setOrderpreparationtime(calendar.get(Calendar.MINUTE) +":" +calendar.get(Calendar.SECOND)); String orderDetails = "3"; String getPosition = Integer.toString(position); viewHolder.btnPreparationComplete.setBackgroundColor(Color.LTGRAY); new sendOrderStatusToServer().execute(orderDetails,getPosition); } else { Toast.makeText(myContext, "Please Enter Preparation Start Time.", Toast.LENGTH_LONG).show(); } } }, 5000); } else { handler.removeCallbacksAndMessages(null); v.clearAnimation(); flagPreparationComplete = 0; } } }); String deleveredTime = orderitemdetailsBO.getOrderdeliverytime(); if(deleveredTime != null && orderStatus == order_delivered) { Date delevered = new Date(Long.parseLong(deleveredTime)); viewHolder.txtElapsedTimeForPreparation.setText(DateUtil.getDateAsString(delevered,"MM:ss")); Log.i("Preparation Start Time :-", delevered.getMinutes()+":"+delevered.getSeconds()); viewHolder.txtElapsedTimeForPreparation.stop(); viewHolder.btnDeliveryComplete.getTag(position); viewHolder.btnDeliveryComplete.setEnabled(false); viewHolder.btnDeliveryComplete.setClickable(false); viewHolder.btnDeliveryComplete.setBackgroundColor(Color.LTGRAY); } else if(orderStatus == 3 || orderStatus == 2 || orderStatus == 1) { Long n = acceptTime.getTime(); Log.i("Preparation Start Time :-", Long.toString(n)); viewHolder.txtElapsedTimeForDeliveryComplete.setTag(list.get(position)); viewHolder.txtElapsedTimeForDeliveryComplete.setBase(SystemClock.elapsedRealtime() - System.currentTimeMillis() + n); viewHolder.txtElapsedTimeForDeliveryComplete.getBase(); viewHolder.txtElapsedTimeForDeliveryComplete.start(); viewHolder.txtElapsedTimeForDeliveryComplete.setFormat("%s"); } viewHolder.btnDeliveryComplete.setOnClickListener(new OnClickListener() { @Override public void onClick(final View v) { // TODO Auto-generated method stub 
if (flagDeliveryComplete == 0) { flagDeliveryComplete++; v.startAnimation(playAnimation()); handler.postDelayed(new Runnable() { @Override public void run() { // TODO Auto-generated method stub v.clearAnimation(); OrderitemdetailsBO orderitemdetailsBO = list.get(position); Date date = orderitemdetailsBO.getPreparationstarttime(); String preparationComplete = orderitemdetailsBO.getOrderpreparationtime(); if(date != null && preparationComplete != null ) { Date currentTime = new Date(); Calendar calendar = Calendar.getInstance(); viewHolder.txtElapsedTimeForDeliveryComplete.stop(); orderitemdetailsBO.setOrderdeliverytime(calendar.get(Calendar.MINUTE) +":"+calendar.get(Calendar.SECOND)); String orderDetails = Integer.toString(order_delivered); String getPosition = Integer.toString(position); viewHolder.btnDeliveryComplete.setBackgroundColor(Color.LTGRAY); new sendOrderStatusToServer().execute(orderDetails,getPosition); } else { Toast.makeText(myContext, "Please Enter Preparation Start Time & Preparation Complete Time.", Toast.LENGTH_LONG).show(); } } }, 5000); } else { handler.removeCallbacksAndMessages(null); v.clearAnimation(); flagDeliveryComplete = 0; } } }); } return convertView; } } private static class ViewHolder { protected TextView txtTableNumber; protected TextView txtMenuItem; protected TextView txtQuantity; protected TextView txtOrderAcceptanceTime; protected Chronometer txtElapsedTimeOfOrderAcceptance; protected Button btnPreparationStart; protected Chronometer txtElapsedTimeForPreparation; protected Button btnPreparationComplete; protected Chronometer txtElapsedTimeForDeliveryComplete; protected Button btnDeliveryComplete; }

    Read the article

  • Better way to summarize data about stop times?

    - by Vimvq1987
    This question is close to this: http://stackoverflow.com/questions/2947963/find-the-period-of-over-speed

    Here's my table:

    Longtitude  Latitude  Velocity  Time
    102         401       40        2010-06-01 10:22:34.000
    103         403       50        2010-06-01 10:40:00.000
    104         405       0         2010-06-01 11:00:03.000
    104         405       0         2010-06-01 11:10:05.000
    105         406       35        2010-06-01 11:15:30.000
    106         403       60        2010-06-01 11:20:00.000
    108         404       70        2010-06-01 11:30:05.000
    109         405       0         2010-06-01 11:35:00.000
    109         405       0         2010-06-01 11:40:00.000
    105         407       40        2010-06-01 11:50:00.000
    104         406       30        2010-06-01 12:00:00.000
    101         409       50        2010-06-01 12:05:30.000
    104         405       0         2010-06-01 11:05:30.000

    I want to summarize the times when the vehicle was stopped (velocity = 0): from when to when each stop lasted and for how many minutes, how many times it stopped, and how much time it spent stopped in total. I wrote this query to do it:

    select longtitude, latitude, MIN(time), MAX(time), DATEDIFF(minute, MIN(Time), MAX(time)) as Timespan
    from table_1
    where velocity = 0
    group by longtitude, latitude

    select DATEDIFF(minute, MIN(Time), MAX(time)) as minute
    into #temp3
    from table_1
    where velocity = 0
    group by longtitude, latitude

    select COUNT(*) as [number] from #temp
    select SUM(minute) as [totaltime] from #temp3
    drop table #temp

    This query returns:

    longtitude  latitude  (No column name)          (No column name)          Timespan
    104         405       2010-06-01 11:00:03.000   2010-06-01 11:10:05.000   10
    109         405       2010-06-01 11:35:00.000   2010-06-01 11:40:00.000   5

    number
    2

    totaltime
    15

    You can see it works fine, but I really don't like the #temp table. Is there any way to write this query without using a temp table? Thank you.
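    The grouping rule here is "consecutive velocity = 0 rows form one stop". As an illustration only (not the T-SQL answer), here is a minimal Python sketch of that rule over a subset of the sample rows above, assumed already sorted by time; the list and variable names are just for the example.

        from datetime import datetime
        from itertools import groupby

        # (velocity, time) samples, assumed already sorted by time
        rows = [
            (40, "2010-06-01 10:22:34"), (50, "2010-06-01 10:40:00"),
            (0,  "2010-06-01 11:00:03"), (0,  "2010-06-01 11:05:30"),
            (0,  "2010-06-01 11:10:05"), (35, "2010-06-01 11:15:30"),
            (70, "2010-06-01 11:30:05"), (0,  "2010-06-01 11:35:00"),
            (0,  "2010-06-01 11:40:00"), (40, "2010-06-01 11:50:00"),
        ]

        # Each maximal run of zero-velocity rows is one stop.
        stops = []
        for stopped, group in groupby(rows, key=lambda r: r[0] == 0):
            if not stopped:
                continue
            times = [datetime.strptime(t, "%Y-%m-%d %H:%M:%S") for _, t in group]
            minutes = (times[-1] - times[0]).total_seconds() / 60
            stops.append((times[0], times[-1], minutes))

        for start, end, minutes in stops:
            print(start, "->", end, "%.0f min" % minutes)
        print("stops:", len(stops), "total: %.0f min" % sum(m for _, _, m in stops))

    In T-SQL the same idea is normally expressed as a gaps-and-islands query (grouping on the difference of two ROW_NUMBER() sequences), which yields per-stop start/end times without a temp table; the sketch above is only meant to make the grouping rule explicit.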

    Read the article

  • Array data to display in table cells where col and row are given in array

    - by Saleem
    I have an array:

    Array (
        Array ( [id] => 1, [schedule_id] => 4, [subject] => Subject 1, [classroom] => 1, [time] => 08:00:00, [col] => 1, [row] => 1 ),
        Array ( [id] => 2, [schedule_id] => 4, [subject] => Subject 2, [classroom] => 1, [time] => 08:00:00, [col] => 2, [row] => 1 ),
        Array ( [id] => 3, [schedule_id] => 4, [subject] => Subject 3, [classroom] => 2, [time] => 09:00:00, [col] => 1, [row] => 2 ),
        Array ( [id] => 4, [schedule_id] => 4, [subject] => Subject 4, [classroom] => 2, [time] => 09:00:00, [col] => 2, [row] => 2 )
    )

    I want to display it in a table, placing each entry in the cell given by its col and row values:

    col 1, row 1              col 2, row 1
    1st array data            2nd array data
    Subject, room, time       Subject, room, time
    1, 1, 08:00               2, 1, 08:00

    col 1, row 2              col 2, row 2
    3rd array data            4th array data
    Subject, room, time       Subject, room, time
    3, 2, 09:00               4, 2, 08:00

    I am new to arrays and need your support to sort out this table. Thanks
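    The core step is indexing the records by their (row, col) pair and then walking the grid row by row. Here is a minimal sketch of that logic in Python rather than PHP; the field names follow the question, and everything else (including the HTML layout) is illustrative only.

        records = [
            {"id": 1, "subject": "Subject 1", "classroom": 1, "time": "08:00:00", "col": 1, "row": 1},
            {"id": 2, "subject": "Subject 2", "classroom": 1, "time": "08:00:00", "col": 2, "row": 1},
            {"id": 3, "subject": "Subject 3", "classroom": 2, "time": "09:00:00", "col": 1, "row": 2},
            {"id": 4, "subject": "Subject 4", "classroom": 2, "time": "09:00:00", "col": 2, "row": 2},
        ]

        # Index each record by its (row, col) position in the timetable.
        cells = {(r["row"], r["col"]): r for r in records}
        row_ids = sorted({r["row"] for r in records})
        col_ids = sorted({r["col"] for r in records})

        # Emit an HTML table with one <td> per (row, col) cell.
        print("<table>")
        for row in row_ids:
            print("  <tr>")
            for col in col_ids:
                cell = cells.get((row, col))
                text = "%s, %s, %s" % (cell["subject"], cell["classroom"], cell["time"][:5]) if cell else ""
                print("    <td>%s</td>" % text)
            print("  </tr>")
        print("</table>")

    The same two loops translate directly to PHP: build an associative array keyed by row and col first, then nest two foreach loops over the sorted row and column values to emit the <tr>/<td> markup.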

    Read the article
