Search Results

Search found 2329 results on 94 pages for 'minute'.

  • MRTG + RRDTool Hourly Graph

    - by SuperMicro321
    I am using MRTG + RRDtool to monitor the bandwidth on each switchport of a Cisco Catalyst 2950 via SNMP. Is MRTG capable of generating an hourly graph? With RRDtool I was able to set the interval to 1 minute in hopes of getting a more detailed graph, but the shortest timeframe offered is the 'Daily' graph (5-minute average) and the image is too small. What I am looking to get out of this: to visually monitor all of the switch ports and tell when a port begins to have unusually high traffic, in near real time (1-minute SNMP poll interval, graphs generated, and page refreshed).
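    A small sketch of how an hourly image could be pulled straight from the RRD files (file name hypothetical; ds0/ds1 are the data source names MRTG uses by default when it logs to RRDtool). Note that once MRTG logs to RRDtool it no longer draws its own PNGs, so either a front-end (14all.cgi, routers2) or a direct rrdtool graph call has to produce the image:

        rrdtool graph port01-hourly.png \
          --start -3600 --end now --step 60 \
          --title "Port 1 - last hour" --vertical-label "bits/s" \
          DEF:in=port01.rrd:ds0:AVERAGE DEF:out=port01.rrd:ds1:AVERAGE \
          CDEF:inbits=in,8,* CDEF:outbits=out,8,* \
          LINE1:inbits#00CC00:"In" LINE2:outbits#0000FF:"Out"

    Regenerating that image and refreshing the page every minute from cron would give the near real-time view described above.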

    Read the article

  • Queue'ing cron jobs up to 10 times

    - by webnoob
    Hi All, This kind of touches on another post I made but is different, so I have posted it as a new question. I have a script that may take just over 1 minute to process, and my cron is set to run every minute. I can stop another cron job from executing the script if the first one hasn't finished by using flock (PHP) in the file; however, this means that I would lose one iteration of the routine and have to wait nearly a minute before it is triggered again (as my understanding leads me to believe). What I would like to do is have the script wait rather than bomb out if it is locked. Over time, however, the queue could get quite long, so I would also like to limit the number of queued crons to 10. I am a real newbie with Linux (had a Linux VPS for 3 days now) so I am not sure if my solution is even practical. Thanks.
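    A minimal sketch of the waiting behaviour using util-linux flock directly in the crontab (paths hypothetical): -w caps how long a queued run will wait for the lock, so runs that have waited too long give up instead of piling up indefinitely. Note that flock does not guarantee first-come-first-served ordering among the waiters:

        * * * * * flock -w 540 /var/lock/myscript.lock php /path/to/script.php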

    Read the article

  • cron+pam heavily spamming my logs

    - by Lo'oris
    Twice every minute I get this in auth.log: May 12 15:21:01 ruptai CRON[25303]: pam_unix(cron:session): session opened for user root by (uid=0) May 12 15:21:01 ruptai CRON[25303]: pam_unix(cron:session): session closed for user root This never stops, twice every minute, every minute of every day. I've no idea what it is; I would just like to stop it from pointlessly logging this stuff. This has been going on for ages, so I can't recall when it started. The OS is Debian stable. By the way, I've found questions on Google but no answers.
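    One commonly suggested fix on Debian is to skip pam_unix session logging just for the cron service. A sketch, assuming the session stack lives in /etc/pam.d/common-session-noninteractive on this release (back the file up first):

        # inserted immediately above the existing pam_unix.so session line
        session [success=1 default=ignore] pam_succeed_if.so service in cron quiet use_uid
        session required pam_unix.so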

    Read the article

  • Felix Baumgartner Skydives from the Edge of Space [Video]

    - by Jason Fitzpatrick
    Yesterday Felix Baumgartner broke the record for highest skydive by leaping out of a capsule 128,100 feet above the Earth. Check out his jump in the following videos. After flying to an altitude of 39,045 meters (128,100 feet) in a helium-filled balloon, Felix Baumgartner completed a record-breaking jump for the ages from the edge of space, exactly 65 years after Chuck Yeager first broke the sound barrier flying in an experimental rocket-powered airplane. Felix reached a maximum speed of 1,342.8 km/h (833 mph) through the near vacuum of the stratosphere before being slowed by the atmosphere later during his 4 minute 20 second freefall. The 43-year-old Austrian skydiving expert also broke two other world records (highest freefall, highest manned balloon flight), leaving the one for the longest freefall to project mentor Col. Joe Kittinger. The above video is a 2 minute highlight reel of the ascent and jump; check out the full 15 minute descent video here. For an in-depth look at the technology used to keep Baumgartner safe during his record setting journey, hit up the link below. The Tech Behind Felix Baumgartner’s Stratospheric Skydive [ExtremeTech]

    Read the article

  • Platinum Services – The Highest Level of Service in the Industry

    - by cwarticki
    Oracle Platinum Services provides remote fault monitoring with faster response times and patch deployment services to qualified Oracle Premier Support customers – at no additional cost. We know that disruptions in IT systems availability can seriously impact business performance. That’s why we engineer our hardware and software to work together. Oracle engineered systems are pre-integrated to reduce the cost and complexity of IT infrastructures while increasing productivity and performance. And now, customers who choose the extreme performance of Oracle engineered systems have the power to access the added support they need – Oracle Platinum Services – to further optimize for high availability at no additional cost. In addition to receiving the complete support essentials with Oracle Premier Support, qualifying Oracle Platinum Services customers also receive:
    • 24/7 Oracle remote fault monitoring
    • Industry-leading response and restore times
      o 5-Minute Fault Notification
      o 15-Minute Restoration or Escalation to Development
      o 30-Minute Joint Debugging with Development
    • Update and patch deployment
    Visit us online to learn more about how to get Oracle Platinum Services.

    Read the article

  • How do I fix my ethernet card losing network connection every few minutes with kernels 3.8.x?

    - by igoryonya
    I'm using Ubuntu 13.04. My laptop is an Acer Aspire One 722-c58rr, and my ethernet card only works for a short time with kernels 3.8.x, whereas kernels 3.5.x and below worked fine. On kernels 3.8.x, it works fine after boot for about a minute, then it loses the network connection. When pinging some address, it says the network address is unreachable, but it can ping its own address. The address is statically configured. Everything was working fine before. I went on vacation, where I used WiFi and 3G connections, so I didn't notice when the problem started. I came back home and plugged into the ethernet. It worked for a minute, then stopped. Rebooting the switch fixed the problem. I tried connecting to a different switch, same problem. Unplugging and replugging the cable fixes the problem for another minute. Disconnecting eth in Network Manager and reconnecting it again does the same thing. WiFi has no such problem. I tried a different cable that works fine on another computer, same problem. I tried booting lower kernel versions; the problem kept happening until I got down to the 3.5 series. Everything works fine on kernel 3.5.x, but I don't want to miss out on the new kernel's features. Executing these commands when booted with a 3.8 series kernel gives the following results: lspci| grep -i eth: 06:00.0 Ethernet controller: Qualcomm Atheros AR8152 v2.0 Fast Ethernet (rev c1) dmesg| grep eth1: [ 89.548291] atl1c 0000:06:00.0: atl1c: eth1 NIC Link is Up How do I fix it, while staying on the new kernel version?

    Read the article

  • In MySQL, what is the most effective query design for joining large tables with many-to-many relationships?

    - by lighthouse65
    In our application, we collect data on automotive engine performance -- basically source data on engine performance based on the engine type, the vehicle running it and the engine design. Currently, the basis for new row inserts is an engine on-off period; we monitor performance variables based on a change in engine state from active to inactive and vice versa. The related engineState table looks like this: +---------+-----------+---------------+---------------------+---------------------+-----------------+ | vehicle | engine | engine_state | state_start_time | state_end_time | engine_variable | +---------+-----------+---------------+---------------------+---------------------+-----------------+ | 080025 | E01 | active | 2008-01-24 16:19:15 | 2008-01-24 16:24:45 | 720 | | 080028 | E02 | inactive | 2008-01-24 16:19:25 | 2008-01-24 16:22:17 | 304 | +---------+-----------+---------------+---------------------+---------------------+-----------------+ For a specific analysis, we would like to analyze table content based on a row granularity of minutes, rather than the current basis of active / inactive engine state. For this, we are thinking of creating a simple productionMinute table with a row for each minute in the period we are analyzing and joining the productionMinute and engineEvent tables on the date-time columns in each table. So if our period of analysis is from 2009-12-01 to 2010-02-28, we would create a new table with 129,600 rows, one for each minute of each day for that three-month period. The first few rows of the productionMinute table: +---------------------+ | production_minute | +---------------------+ | 2009-12-01 00:00 | | 2009-12-01 00:01 | | 2009-12-01 00:02 | | 2009-12-01 00:03 | +---------------------+ The join between the tables would be engineState AS es LEFT JOIN productionMinute AS pm ON es.state_start_time <= pm.production_minute AND pm.production_minute <= es.event_end_time. This join, however, brings up multiple environmental issues: The engineState table has 5 million rows and the productionMinute table has 130,000 rows When an engineState row spans more than one minute (i.e. the difference between es.state_start_time and es.state_end_time is greater than one minute), as is the case in the example above, there are multiple productionMinute table rows that join to a single engineState table row When there is more than one engine in operation during any given minute, also as per the example above, multiple engineState table rows join to a single productionMinute row In testing our logic and using only a small table extract (one day rather than 3 months, for the productionMinute table) the query takes over an hour to generate. In researching this item in order to improve performance so that it would be feasible to query three months of data, our thoughts were to create a temporary table from the engineEvent one, eliminating any table data that is not critical for the analysis, and joining the temporary table to the productionMinute table. We are also planning on experimenting with different joins -- specifically an inner join -- to see if that would improve performance. What is the best query design for joining tables with the many:many relationship between the join predicates as outlined above? What is the best join type (left / right, inner)?
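    One variant worth profiling is an inner join restricted to the analysis window, with an index that lets MySQL seek on the range predicates rather than scan all 5 million engineState rows (index name illustrative; a sketch only):

        ALTER TABLE engineState
          ADD INDEX idx_state_window (state_start_time, state_end_time);

        SELECT pm.production_minute, es.vehicle, es.engine, es.engine_variable
        FROM productionMinute AS pm
        INNER JOIN engineState AS es
          ON  es.state_start_time <= pm.production_minute
          AND es.state_end_time   >= pm.production_minute
        WHERE pm.production_minute >= '2009-12-01'
          AND pm.production_minute <  '2010-03-01';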

    Read the article

  • Can't configure frame relay T1 on Cisco 1760

    - by sonar
    For the past few days I've been trying to configure a data T1 via a Frame Relay. Now I've been pretty unsuccessful at it, and it's been a while, since I've done this so please bare with me. The ISP provided me the following information: 1. IP address 2. Gateway address 3. Encapsulation Frame Relay 4. DLCI 100 5. BZ8 ESF (I think the bz8 was supposed to be b8zs) 6. Time Slot (1 al 24). And what I have configured up until now is the following: interface Serial0/0 ip address <ip address> 255.255.255.252 encapsulation frame-relay service-module t1 timeslots 1-24 frame-relay interface-dlci 100 sh service-module s0/0 (outputs): Module type is T1/fractional Hardware revision is 0.128, Software revision is 0.2, Image checksum is 0x73D70058, Protocol revision is 0.1 Receiver has no alarms. Framing is **ESF**, Line Code is **B8ZS**, Current clock source is line, Fraction has **24 timeslots** (64 Kbits/sec each), Net bandwidth is 1536 Kbits/sec. Last module self-test (done at startup): Passed Last clearing of alarm counters 00:17:17 loss of signal : 0, loss of frame : 0, AIS alarm : 0, Remote alarm : 2, last occurred 00:10:10 Module access errors : 0, Total Data (last 1 15 minute intervals): 0 Line Code Violations, 0 Path Code Violations 0 Slip Secs, 0 Fr Loss Secs, 0 Line Err Secs, 0 Degraded Mins 0 Errored Secs, 0 Bursty Err Secs, 0 Severely Err Secs, 0 Unavail Secs Data in current interval (138 seconds elapsed): 0 Line Code Violations, 0 Path Code Violations 0 Slip Secs, 0 Fr Loss Secs, 0 Line Err Secs, 0 Degraded Mins 0 Errored Secs, 0 Bursty Err Secs, 0 Severely Err Secs, 0 Unavail Secs sh int: FastEthernet0/0 is up, line protocol is up Hardware is PQUICC_FEC, address is 000d.6516.e5aa (bia 000d.6516.e5aa) Internet address is 10.0.0.1/24 MTU 1500 bytes, BW 100000 Kbit, DLY 100 usec, reliability 255/255, txload 1/255, rxload 1/255 Encapsulation ARPA, loopback not set Keepalive set (10 sec) Full-duplex, 100Mb/s, 100BaseTX/FX ARP type: ARPA, ARP Timeout 04:00:00 Last input 00:20:00, output 00:00:00, output hang never Last clearing of "show interface" counters never Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 0 Queueing strategy: fifo Output queue: 0/40 (size/max) 5 minute input rate 0 bits/sec, 0 packets/sec 5 minute output rate 0 bits/sec, 0 packets/sec 0 packets input, 0 bytes Received 0 broadcasts, 0 runts, 0 giants, 0 throttles 0 input errors, 0 CRC, 0 frame, 0 overrun, 0 ignored 0 watchdog 0 input packets with dribble condition detected 191 packets output, 20676 bytes, 0 underruns 0 output errors, 0 collisions, 1 interface resets 0 babbles, 0 late collision, 0 deferred 0 lost carrier, 0 no carrier 0 output buffer failures, 0 output buffers swapped out Serial0/0 is up, line protocol is down Hardware is PQUICC with Fractional T1 CSU/DSU MTU 1500 bytes, BW 1536 Kbit, DLY 20000 usec, reliability 255/255, txload 1/255, rxload 1/255 Encapsulation FRAME-RELAY, loopback not set Keepalive set (10 sec) LMI enq sent 157, LMI stat recvd 0, LMI upd recvd 0, DTE LMI down LMI enq recvd 23, LMI stat sent 0, LMI upd sent 0 LMI DLCI 1023 LMI type is CISCO frame relay DTE FR SVC disabled, LAPF state down Broadcast queue 0/64, broadcasts sent/dropped 2/0, interface broadcasts 0 Last input 00:24:51, output 00:00:05, output hang never Last clearing of "show interface" counters 00:27:20 Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 0 Queueing strategy: weighted fair Output queue: 0/1000/64/0 (size/max total/threshold/drops) Conversations 0/1/256 (active/max 
active/max total) Reserved Conversations 0/0 (allocated/max allocated) Available Bandwidth 1152 kilobits/sec 5 minute input rate 0 bits/sec, 0 packets/sec 5 minute output rate 0 bits/sec, 0 packets/sec 23 packets input, 302 bytes, 0 no buffer Received 0 broadcasts, 0 runts, 0 giants, 0 throttles 1725 input errors, 595 CRC, 1099 frame, 0 overrun, 0 ignored, 30 abort 246 packets output, 3974 bytes, 0 underruns 0 output errors, 0 collisions, 48 interface resets 0 output buffer failures, 0 output buffers swapped out 4 carrier transitions DCD=up DSR=up DTR=up RTS=up CTS=up Serial0/0.1 is down, line protocol is down Hardware is PQUICC with Fractional T1 CSU/DSU MTU 1500 bytes, BW 1536 Kbit, DLY 20000 usec, reliability 255/255, txload 1/255, rxload 1/255 Encapsulation FRAME-RELAY Last clearing of "show interface" counters never Serial0/0.100 is down, line protocol is down Hardware is PQUICC with Fractional T1 CSU/DSU Internet address is <ip address>/30 MTU 1500 bytes, BW 1536 Kbit, DLY 20000 usec, reliability 255/255, txload 1/255, rxload 1/255 Encapsulation FRAME-RELAY Last clearing of "show interface" counters never And everything seems to be accounted for to me, but apparently I'm missing something. My issue is that I'm stuck on interface up, line protocol down, so the T1 doesn't go up. Any ideas? Thank you,
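    With LMI enquiries being sent but no LMI status ever received (LMI stat recvd 0, DTE LMI down), an LMI type mismatch with the carrier is a common culprit. A configuration sketch worth testing, with the DLCI and IP moved to a point-to-point sub-interface and the LMI type set to whatever the provider runs (ansi shown purely as an example):

        interface Serial0/0
         no ip address
         encapsulation frame-relay
         frame-relay lmi-type ansi
         service-module t1 timeslots 1-24
        !
        interface Serial0/0.100 point-to-point
         ip address <ip address> 255.255.255.252
         frame-relay interface-dlci 100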

    Read the article

  • Better way to summarize data about stop times?

    - by Vimvq1987
    This question is close to this: http://stackoverflow.com/questions/2947963/find-the-period-of-over-speed Here's my table: Longtitude Latitude Velocity Time 102 401 40 2010-06-01 10:22:34.000 103 403 50 2010-06-01 10:40:00.000 104 405 0 2010-06-01 11:00:03.000 104 405 0 2010-06-01 11:10:05.000 105 406 35 2010-06-01 11:15:30.000 106 403 60 2010-06-01 11:20:00.000 108 404 70 2010-06-01 11:30:05.000 109 405 0 2010-06-01 11:35:00.000 109 405 0 2010-06-01 11:40:00.000 105 407 40 2010-06-01 11:50:00.000 104 406 30 2010-06-01 12:00:00.000 101 409 50 2010-06-01 12:05:30.000 104 405 0 2010-06-01 11:05:30.000 I want to summarize times when vehicle had stopped (velocity = 0), include: it had stopped since "when" to "when" in how much minutes, how many times it stopped and how much time it stopped. I wrote this query to do it: select longtitude, latitude, MIN(time), MAX(time), DATEDIFF(minute, MIN(Time), MAX(time)) as Timespan from table_1 where velocity = 0 group by longtitude,latitude select DATEDIFF(minute, MIN(Time), MAX(time)) as minute into #temp3 from table_1 where velocity = 0 group by longtitude,latitude select COUNT(*) as [number]from #temp select SUM(minute) as [totaltime] from #temp3 drop table #temp This query return: longtitude latitude (No column name) (No column name) Timespan 104 405 2010-06-01 11:00:03.000 2010-06-01 11:10:05.000 10 109 405 2010-06-01 11:35:00.000 2010-06-01 11:40:00.000 5 number 2 totaltime 15 You can see, it works fine, but I really don't like the #temp table. Is there anyway to query this without use a temp table? Thank you.
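    One way to drop the temp table is to wrap the grouped stops in a derived table and aggregate over it, mirroring the DATEDIFF syntax used above (a sketch):

        SELECT COUNT(*)      AS [number],
               SUM(timespan) AS [totaltime]
        FROM (
            SELECT DATEDIFF(minute, MIN(time), MAX(time)) AS timespan
            FROM table_1
            WHERE velocity = 0
            GROUP BY longtitude, latitude
        ) AS stops;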

    Read the article

  • Creating a simple JQuery tooltip?

    - by Mike El Panther
    Has anybody here got any experience in creating simple tooltips using jQuery? I have created a table: <tr id="mon_Section"> <td id="day_Title">Monday</td> <td id="mon_Row" onmousemove="calculate_Time(event)"></td> </tr> The function "calculate_Time" will be called; this gets the X position of the mouse cursor and, from that value, uses IF and ELSE-IF statements to calculate the values of other variables I have created: var mon_Pos; var hour; var minute; var orig; var myxpos; function calculate_Time(event) { myxpos = event.pageX; myxpos = myxpos-194; if (myxpos<60) { orig = myxpos; minute = myxpos; hour = 07; } else if (myxpos>=60 && myxpos<120) { orig = myxpos; minute=myxpos-60; hour=08; } } How do I go about creating a stylized tooltip containing the Hour and Minute variables that I created? I need the tooltip to be updated every time the X position of the cursor changes. I know that the myxpos variable does change whenever the mouse is moved, because I tried it out using an alert(myxpos) and, as expected, if the mouse moves a new alert box pops up with a new value. I just can't work out how to place that value in a tooltip.
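    A minimal sketch of one approach, assuming jQuery 1.7+ for .on() (older versions would use .bind()) and assuming the inline onmousemove attribute is replaced by this handler; hour and minute are the globals from the snippet above:

        // one reusable tooltip div, repositioned and updated on every mousemove
        var $tip = $('<div class="time-tip"></div>')
            .css({ position: 'absolute', background: '#ffc',
                   border: '1px solid #999', padding: '2px 6px', display: 'none' })
            .appendTo('body');

        $('#mon_Row').on('mousemove', function (e) {
            calculate_Time(e);   // keeps the hour/minute globals up to date
            $tip.text(hour + ':' + (minute < 10 ? '0' + minute : minute))
                .css({ left: e.pageX + 12, top: e.pageY + 12 })
                .show();
        }).on('mouseleave', function () {
            $tip.hide();
        });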

    Read the article

  • Elegant solution to retrieve custom date and time?

    - by kefs
    I am currently using a date and time picker to retrieve a user-submitted date and time, and then I set a control's text to the date and time selected. I am using the following code: new DatePickerDialog(newlog3.this, d, calDT.get(Calendar.YEAR), calDT.get(Calendar.MONTH), calDT.get(Calendar.DAY_OF_MONTH)).show(); new TimePickerDialog(newlog3.this, t, calDT.get(Calendar.HOUR_OF_DAY), calDT.get(Calendar.MINUTE), true).show(); optCustom.setText(fmtDT.format(calDT.getTime())); Now, while the above code block does bring up the date and time widgets and sets the text, the code block is being executed in full before the user can select the date.. ie: It brings up the date box first, then the time box over that, and then updates the text, all without any user interaction. I would like the date widget to wait to execute the time selector until the date selection is done, and i would like the settext to execute only after the time widget is done. How is this possible? Or is there is a more elegant solution that is escaping me? Edit: This is the code for DatePickerDialog/TimePickerDialog which is located within the class: DatePickerDialog.OnDateSetListener d=new DatePickerDialog.OnDateSetListener() { public void onDateSet(DatePicker view, int year, int monthOfYear, int dayOfMonth) { calDT.set(Calendar.YEAR, year); calDT.set(Calendar.MONTH, monthOfYear); calDT.set(Calendar.DAY_OF_MONTH, dayOfMonth); //updateLabel(); } }; TimePickerDialog.OnTimeSetListener t=new TimePickerDialog.OnTimeSetListener() { public void onTimeSet(TimePicker view, int hourOfDay, int minute) { calDT.set(Calendar.HOUR_OF_DAY, hourOfDay); calDT.set(Calendar.MINUTE, minute); //updateLabel(); } }; Thanks in advance
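    A sketch of one way to chain the two dialogs so each step only fires after the previous one completes: launch the TimePickerDialog from inside onDateSet, and only call setText from inside onTimeSet (same fields as above, just reorganised):

        // declared first so the date listener below can hand off to it
        TimePickerDialog.OnTimeSetListener t = new TimePickerDialog.OnTimeSetListener() {
            public void onTimeSet(TimePicker view, int hourOfDay, int minute) {
                calDT.set(Calendar.HOUR_OF_DAY, hourOfDay);
                calDT.set(Calendar.MINUTE, minute);
                optCustom.setText(fmtDT.format(calDT.getTime()));  // only now update the label
            }
        };

        DatePickerDialog.OnDateSetListener d = new DatePickerDialog.OnDateSetListener() {
            public void onDateSet(DatePicker view, int year, int monthOfYear, int dayOfMonth) {
                calDT.set(Calendar.YEAR, year);
                calDT.set(Calendar.MONTH, monthOfYear);
                calDT.set(Calendar.DAY_OF_MONTH, dayOfMonth);
                // date is done - now ask for the time
                new TimePickerDialog(newlog3.this, t, calDT.get(Calendar.HOUR_OF_DAY),
                        calDT.get(Calendar.MINUTE), true).show();
            }
        };

        // only the date dialog is shown up front, in place of the three chained calls above
        new DatePickerDialog(newlog3.this, d, calDT.get(Calendar.YEAR),
                calDT.get(Calendar.MONTH), calDT.get(Calendar.DAY_OF_MONTH)).show();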

    Read the article

  • SPARC T3-1 Record Results Running JD Edwards EnterpriseOne Day in the Life Benchmark with Added Batch Component

    - by Brian
    Using Oracle's SPARC T3-1 server for the application tier and Oracle's SPARC Enterprise M3000 server for the database tier, a world record result was produced running the Oracle's JD Edwards EnterpriseOne applications Day in the Life benchmark run concurrently with a batch workload. The SPARC T3-1 server based result has 25% better performance than the IBM Power 750 POWER7 server even though the IBM result did not include running a batch component. The SPARC T3-1 server based result has 25% better space/performance than the IBM Power 750 POWER7 server as measured by the online component. The SPARC T3-1 server based result is 5x faster than the x86-based IBM x3650 M2 server system when executing the online component of the JD Edwards EnterpriseOne 9.0.1 Day in the Life benchmark. The IBM result did not include a batch component. The SPARC T3-1 server based result has 2.5x better space/performance than the x86-based IBM x3650 M2 server as measured by the online component. The combination of SPARC T3-1 and SPARC Enterprise M3000 servers delivered a Day in the Life benchmark result of 5000 online users with 0.875 seconds of average transaction response time running concurrently with 19 Universal Batch Engine (UBE) processes at 10 UBEs/minute. The solution exercises various JD Edwards EnterpriseOne applications while running Oracle WebLogic Server 11g Release 1 and Oracle Web Tier Utilities 11g HTTP server in Oracle Solaris Containers, together with the Oracle Database 11g Release 2. The SPARC T3-1 server showed that it could handle the additional workload of batch processing while maintaining the same number of online users for the JD Edwards EnterpriseOne Day in the Life benchmark. This was accomplished with minimal loss in response time. JD Edwards EnterpriseOne 9.0.1 takes advantage of the large number of compute threads available in the SPARC T3-1 server at the application tier and achieves excellent response times. The SPARC T3-1 server consolidates the application/web tier of the JD Edwards EnterpriseOne 9.0.1 application using Oracle Solaris Containers. Containers provide flexibility, easier maintenance and better CPU utilization of the server leaving processing capacity for additional growth. A number of Oracle advanced technology and features were used to obtain this result: Oracle Solaris 10, Oracle Solaris Containers, Oracle Java Hotspot Server VM, Oracle WebLogic Server 11g Release 1, Oracle Web Tier Utilities 11g, Oracle Database 11g Release 2, the SPARC T3 and SPARC64 VII+ based servers. This is the first published result running both online and batch workload concurrently on the JD Enterprise Application server. No published results are available from IBM running the online component together with a batch workload. The 9.0.1 version of the benchmark saw some minor performance improvements relative to 9.0. When comparing between 9.0.1 and 9.0 results, the reader should take this into account when the difference between results is small. Performance Landscape JD Edwards EnterpriseOne Day in the Life Benchmark Online with Batch Workload This is the first publication on the Day in the Life benchmark run concurrently with batch jobs. The batch workload was provided by Oracle's Universal Batch Engine. 
System RackUnits Online Users Resp Time (sec) BatchConcur(# of UBEs) BatchRate(UBEs/m) Version SPARC T3-1, 1xSPARC T3 (1.65 GHz), Solaris 10 M3000, 1xSPARC64 VII+ (2.86 GHz), Solaris 10 4 5000 0.88 19 10 9.0.1 Resp Time (sec) — Response time of online jobs reported in seconds Batch Concur (# of UBEs) — Batch concurrency presented in the number of UBEs Batch Rate (UBEs/m) — Batch transaction rate in UBEs/minute. JD Edwards EnterpriseOne Day in the Life Benchmark Online Workload Only These results are for the Day in the Life benchmark. They are run without any batch workload. System RackUnits Online Users ResponseTime (sec) Version SPARC T3-1, 1xSPARC T3 (1.65 GHz), Solaris 10 M3000, 1xSPARC64 VII (2.75 GHz), Solaris 10 4 5000 0.52 9.0.1 IBM Power 750, 1xPOWER7 (3.55 GHz), IBM i7.1 4 4000 0.61 9.0 IBM x3650M2, 2xIntel X5570 (2.93 GHz), OVM 2 1000 0.29 9.0 IBM result from http://www-03.ibm.com/systems/i/advantages/oracle/, IBM used WebSphere Configuration Summary Hardware Configuration: 1 x SPARC T3-1 server 1 x 1.65 GHz SPARC T3 128 GB memory 16 x 300 GB 10000 RPM SAS 1 x Sun Flash Accelerator F20 PCIe Card, 92 GB 1 x 10 GbE NIC 1 x SPARC Enterprise M3000 server 1 x 2.86 SPARC64 VII+ 64 GB memory 1 x 10 GbE NIC 2 x StorageTek 2540 + 2501 Software Configuration: JD Edwards EnterpriseOne 9.0.1 with Tools 8.98.3.3 Oracle Database 11g Release 2 Oracle 11g WebLogic server 11g Release 1 version 10.3.2 Oracle Web Tier Utilities 11g Oracle Solaris 10 9/10 Mercury LoadRunner 9.10 with Oracle Day in the Life kit for JD Edwards EnterpriseOne 9.0.1 Oracle’s Universal Batch Engine - Short UBEs and Long UBEs Benchmark Description JD Edwards EnterpriseOne is an integrated applications suite of Enterprise Resource Planning (ERP) software. Oracle offers 70 JD Edwards EnterpriseOne application modules to support a diverse set of business operations. Oracle's Day in the Life (DIL) kit is a suite of scripts that exercises most common transactions of JD Edwards EnterpriseOne applications, including business processes such as payroll, sales order, purchase order, work order, and other manufacturing processes, such as ship confirmation. These are labeled by industry acronyms such as SCM, CRM, HCM, SRM and FMS. The kit's scripts execute transactions typical of a mid-sized manufacturing company. The workload consists of online transactions and the UBE workload of 15 short and 4 long UBEs. LoadRunner runs the DIL workload, collects the user’s transactions response times and reports the key metric of Combined Weighted Average Transaction Response time. The UBE processes workload runs from the JD Enterprise Application server. Oracle's UBE processes come as three flavors: Short UBEs < 1 minute engage in Business Report and Summary Analysis, Mid UBEs > 1 minute create a large report of Account, Balance, and Full Address, Long UBEs > 2 minutes simulate Payroll, Sales Order, night only jobs. The UBE workload generates large numbers of PDF files reports and log files. The UBE Queues are categorized as the QBATCHD, a single threaded queue for large UBEs, and the QPROCESS queue for short UBEs run concurrently. One of the Oracle Solaris Containers ran 4 Long UBEs, while another Container ran 15 short UBEs concurrently. The mixed size UBEs ran concurrently from the SPARC T3-1 server with the 5000 online users driven by the LoadRunner. Oracle’s UBE process performance metric is Number of Maximum Concurrent UBE processes at transaction rate, UBEs/minute. 
Key Points and Best Practices Two JD Edwards EnterpriseOne Application Servers and two Oracle Fusion Middleware WebLogic Servers 11g R1 coupled with two Oracle Fusion Middleware 11g Web Tier HTTP Server instances on the SPARC T3-1 server were hosted in four separate Oracle Solaris Containers to demonstrate consolidation of multiple application and web servers. See Also SPARC T3-1 oracle.com SPARC Enterprise M3000 oracle.com Oracle Solaris oracle.com JD Edwards EnterpriseOne oracle.com Oracle Database 11g Release 2 Enterprise Edition oracle.com Disclosure Statement Copyright 2011, Oracle and/or its affiliates. All rights reserved. Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners. Results as of 6/27/2011.

    Read the article

  • Configuring service restart with 'restart service after' parameter

    - by Tim Brigham
    It appears that sc.exe isn't capable of setting the 'restart service after' parameter, and PowerShell isn't capable of setting up service restarts at all. My intended configuration is failure1/restart, failure2/restart, failure3/nothing, with a five-minute counter between each restart. The five-minute timer is extremely important. Is there anything I can look at other than some registry hackery to configure this?
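    For what it is worth, sc.exe does express the restart delay as the millisecond value attached to each action, so the layout described above may be reachable with something along these lines (service name hypothetical; the empty third action is intended as 'take no action' - verify the result with sc qfailure before relying on it):

        sc failure "MySearchService" reset= 86400 actions= restart/300000/restart/300000/""/0
        sc qfailure "MySearchService"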

    Read the article

  • Why does unpartitioned Hitachi HDS5C3020 drive start consuming 50% more power 15 minutes after boot?

    - by Pro Backup
    In a Debian 6.0.6 system there are 74 pieces of 2TB Toshiba DT01ABA200 drives. These drives are identified as Hitachi HDS5C3020BLE630 drives running firmware revision MZ4OAAB0. 64 Drives attached via HP SAS expander cards to an LSI 2008 SAS controller, another 5 drives are connected directly to the mainboard, 4 drives are connected to a Sil based PCI controller and last 1 drive is only powered and has no data cable connected. The controller LSI and Sil card's their onboard BIOS are both disabled and the mpt2sas and sata_sil modules are removed from the Linux debian 2.6.32-5-amd64 #1 SMP Sun Sep 23 10:07:46 UTC 2012 x86_64 GNU/Linux kernel. The mpt2sas module is loaded after boot using a modprobe command in /etc/rc.local. These 74 drives are not partitioned, neither formatted and also not mounted. The system consumes: with 0 drives: 70.6 - 70.9 Watt (also 15 minutes after boot); with 74 drives: 330 - 360 Watt, just after boot (is equivalent to 3.5 - 3.9W per drive in idle state); with 74 drives: 420 - 466 Watt, each time in the 15th minute of uptime (is equivalent to 4.7 - 5.3W per drive in idle state). The drive specification lists 4.7W as read/write, and 3.3W as idle power consumption. The increased power consumption is most likely on the 5V line, because after roughly 1 minute an "over current protection" (OCP) of the power supply (PSU) shuts down the power. The used PSU is a single rail model with an OCP of 122A on the 12V line and 55A on the 5V line. Regression: It doesn't matter whether the drive its APM value is set to disabled or 1 (maximum power saving). The operating system records no read/write activity in /proc/diskstats. The values there are identical (28 read, 0 write operations) as immediately after the modprobe operation. Can't test what happens when booting into the mainboard it's BIOS - to exclude any OS intervention - because the Super Micro X8SI6-F mainboard running firmware 06/27/12 has a bug that incorrectly reads a +74.0 C CPU sensor temperature as "High" in BIOS mode, and shuts down the power after 1 minute. What might be causing the drive read/write activity on all drives in the 15th minute after boot and how to prevent it from happening?

    Read the article

  • Monitoring tools that can take high rate and high volume?

    - by Jon Watte
    We're using Cacti with RRDTool to monitor and graph about 100,000 counters spread across about 1,000 Linux-based nodes. However, our current setup generally only gives us 5-minute graphs (with some data being minute-based); we often make changes where seeing feedback in "near real time" would be of value. I'd like approximately a week of 5- or 10-second data, a year of 1-minute data, and 5 years of 10-minute data. I have SSD disks and a dual-hexa-core server to spare. I tried setting up a Graphite/carbon/whisper server, and had about 15 nodes pipe to it, but it only has "average" for the retention function when promoting to older buckets. This is almost useless -- I'd like min, max, average, standard deviation, and perhaps "total sum" and "number of samples" or perhaps "95th percentile" available. The developer claims there's a new back-end "in beta" that allows you to write your own function, but this appears to still only do 1:1 retention (when saving older data, you really want the statistics calculated into many streams from a single input. Also, "in beta" seems a little risky for this installation. If I'm wrong about this assumption, I'd be happy to be shown my error! I've heard Zabbix recommended, but it puts data into MySQL or some other SQL database. 100,000 counters on a 5 second interval means 20,000 tps, and while I have an SSD, I don't have an 8-way RAID-6 with battery backup cache, which I think I'd need for that to work out :-) Again, if that's actually something that's not a problem, I'd be happy to be shown the error of my ways. Also, can Zabbix do the single data stream - promote with statistics thing? Finally, Munin claims to have a new 2.0 coming out "in beta" right now, and it boasts custom retention plans. However, again, it's that "in beta" part -- has anyone used that for real, and at scale? How did it perform, if so? I'm almost thinking about using a graphing front-end (such as Graphite) and rolling my own retention backend with a simple layer on top of mmap() and some stats. That wouldn't be particularly hard, and would probably perform very well, letting the kernel figure out the balance between frequency of flushing to disk and process operations. Any other suggestions I should look into? Note: it has to have shown itself able to sustain the kinds of data loads I'm suggesting above; if you can point at the specific implementation you're referencing, so much the better!
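    For reference, the retention schedule described above maps directly onto carbon's storage-schemas.conf, and storage-aggregation.conf picks the roll-up function per metric name; a sketch (section names and patterns illustrative, and assuming a carbon release new enough to accept the s/m/d/y shorthand). The caveat from above still applies: each whisper file uses a single aggregation method, so min/max/avg would have to arrive as separate metrics:

        # storage-schemas.conf
        [app_metrics]
        pattern = .*
        retentions = 10s:7d,1m:1y,10m:5y

        # storage-aggregation.conf
        [max_series]
        pattern = \.max$
        xFilesFactor = 0.1
        aggregationMethod = max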

    Read the article

  • screen behind rate limited iptables and connection disconnects

    - by Bond
    Take this scenario: I have rate limited the connections to 4 (i.e., if you attempt a 4th connection you won't be able to log in for some time). If in one minute I get disconnected 3 times while I was already logged in on the server with a screen session, will I be able to log in, or do I need to keep quiet for a minute? -A INPUT -i eth0 -p tcp -m tcp --dport 22 -m state --state NEW -m recent --update --seconds 60 --hitcount 4 --name DEFAULT --rsource -j DROP -A INPUT -i eth0 -p tcp -m tcp --dport 22 -m state --state NEW -m recent --set --name DEFAULT --rsource

    Read the article

  • How to schedule server jobs more intelligently than with cron?

    - by John
    I run a job every minute to reindex my site's content. Today, the search engine died, and when I logged in there were hundreds of orphan processes that had been started by cron. Is there another way using some kind of existing software that will let me execute a job every minute, but that won't launch another instance if that job doesn't return (i.e. because the search engine process has failed)?
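    A sketch of the simplest guard, assuming util-linux flock is installed (paths hypothetical): with -n the new run exits immediately if the previous one still holds the lock, so failed or hung runs never stack up:

        * * * * * flock -n /var/lock/reindex.lock /usr/local/bin/reindex_site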

    Read the article

  • Where to get 1440 free cron jobs a day?

    - by Nok Imchen
    Well, I'm making a program for my own use. In this program, I need to set up a cron job. The cron job should run every minute (24 hr * 60 min = 1440 times a day), so I'll need a cron job with a frequency of 1 minute. I think Google App Engine gives free cron jobs, but I'm very new to it. I downloaded the Java SDK and read the documentation but understood nothing, so I can't use Google App Engine. Is there any other free service like Google App Engine, but with an easier interface? All I want is a cron job with a 1 minute frequency. Please help/suggest. Thank you.

    Read the article

  • Using PHP and SQLite

    - by Barry Shittpeas
    I am going to use a small SQLite database to store some data that my application will use. However, I can't get the syntax for inserting data into the DB using PHP to work correctly; below is the code that I am trying to run: <?php $day = $_POST["form_Day"]; $hour = $_POST["form_Hour"]; $minute = $_POST["form_Minute"]; $type = $_POST["form_Type"]; $lane = $_POST["form_Lane"]; try { $db = new PDO('sqlite:EVENTS.sqlite'); $db->exec("INSERT INTO events (Day, Hour, Minute, Type, Lane) VALUES ($day, $hour, $minute, $type, $lane);"); $db = NULL; } catch(PDOException $e) { print 'Exception : '.$e->getMessage(); } ?> I have successfully created an SQLite database file using some code that I wrote, but I just can't seem to insert data into the database.
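    The usual fix is a prepared statement, which also quotes the values correctly and avoids injecting the POST data straight into the SQL string. A sketch of the same insert (still assuming the form fields above):

        <?php
        $db = new PDO('sqlite:EVENTS.sqlite');
        $db->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

        $stmt = $db->prepare(
            'INSERT INTO events (Day, Hour, Minute, Type, Lane) VALUES (?, ?, ?, ?, ?)'
        );
        $stmt->execute(array($day, $hour, $minute, $type, $lane));
        $db = null;
        ?>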

    Read the article

  • Grouping timestamps by interval between timestamps, then calculating from group MySQL

    - by Emile
    To put this question into context, I'm trying to calculate "time in app" based on an event log. Assume the following table: user_id event_time 2 2012-05-09 07:03:38 3 2012-05-09 07:03:42 4 2012-05-09 07:03:43 2 2012-05-09 07:03:44 2 2012-05-09 07:03:45 4 2012-05-09 07:03:52 2 2012-05-09 07:06:30 I'd like to get the difference between the highest and lowest event_time from a set of timestamps that are within 2 minutes of eachother (and grouped by user). If a timestamp is outside of a 2 minute interval from the set, it should be considered a part of another set. Desired output: user_id seconds_interval 2 7 (because 07:03:45 - 07:03:38 is 7 seconds) 3 0 (because 07:03:42) 4 9 (because 07:03:52 - 2012-05-09 07:03:43) 2 0 (because 07:06:30 is outside 2 min interval of 1st user_id=2 set) This is what I've tried, although I can't group on seconds_interval (even if I could, I'm not sure this is the right direction): SELECT (max(tr.event_time)-min(tr.event_time)) as seconds_interval FROM some_table tr INNER JOIN TrackingRaw tr2 ON (tr.event_time BETWEEN tr2.event_time - INTERVAL 2 MINUTE AND tr2.event_time + INTERVAL 2 MINUTE) GROUP BY seconds_interval
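    A sketch of one way to build the gap-based groups in MySQL with user variables (their evaluation order inside a SELECT is not formally guaranteed, so treat this as something to verify rather than a definitive query):

        SELECT user_id,
               TIMESTAMPDIFF(SECOND, MIN(event_time), MAX(event_time)) AS seconds_interval
        FROM (
            SELECT t.user_id, t.event_time,
                   @grp := IF(@prev_user = t.user_id
                              AND t.event_time <= @prev_time + INTERVAL 2 MINUTE,
                              @grp, @grp + 1) AS grp,
                   @prev_user := t.user_id,
                   @prev_time := t.event_time
            FROM (SELECT user_id, event_time FROM some_table
                  ORDER BY user_id, event_time) AS t
            CROSS JOIN (SELECT @grp := 0, @prev_user := NULL, @prev_time := NULL) AS vars
        ) AS grouped
        GROUP BY user_id, grp
        ORDER BY MIN(event_time);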

    Read the article

  • Displaying the Time in AM/PM format in android

    - by Rahul Varma
    Hi, I am getting the time using a time picker and displaying it in a text view using the following code... private TimePickerDialog.OnTimeSetListener mTimeSetListener = new TimePickerDialog.OnTimeSetListener() { public void onTimeSet(TimePicker view, int hourOfDay, int minute) { Toast.makeText(SendMail.this, "Your Appointment time is "+hourOfDay+":"+minute, Toast.LENGTH_SHORT).show(); TextView datehid = (TextView)findViewById(R.id.timehidden); datehid.setText(String.valueOf(hourOfDay)+ ":"+(String.valueOf(minute))); } }; Now, the issue is that when I set the time to 8:00 pm, the time is displayed as 20:0. I want to display the time as 8:00 pm. How can I do that?
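    A sketch of one way to format the picked values as 12-hour time with an AM/PM marker inside onTimeSet (uses java.util.Calendar, java.util.Locale and java.text.SimpleDateFormat):

        Calendar cal = Calendar.getInstance();
        cal.set(Calendar.HOUR_OF_DAY, hourOfDay);   // e.g. 20
        cal.set(Calendar.MINUTE, minute);           // e.g. 0

        String pretty = new SimpleDateFormat("h:mm a", Locale.getDefault())
                .format(cal.getTime());             // "8:00 PM"
        datehid.setText(pretty);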

    Read the article

  • ForceContext Queries method twice?

    - by azz0r
    Hello, I'm writing a per-minute script to output the XML that a server will read, and I am using ForceContext. <?php class My_Controller_Action_Helper_ForceContext extends Zend_Controller_Action_Helper_ContextSwitch { public function initContext($format = null) { $request = $this->getRequest(); $action = $request->getActionName(); $context = $this->getActionContexts($action); //check if this is the only context if(count($context) === 1) { $format = $context[0]; } return parent::initContext($format); } } class Video_PerMinuteController extends Zend_Controller_Action { function init() { $contextSwitch = $this->_helper->getHelper('ForceContext'); $contextSwitch->addActionContext('transaction', 'xml')->initContext(); In my method, it gets the current minute count, adds 1, then saves, so I can clearly see when it's accessed more than once in a minute. If I comment out the second contextSwitch line, it only goes up by 1; if it's not commented out, it displays the XML page but adds 2 minutes (so it is being called twice somehow). Any ideas?

    Read the article

  • How do I use timezones with a datetime object in python?

    - by jidar
    How do I properly represent a different timezone in my timezone? The below example only works because I know that EDT is one hour ahead of me, so I can uncomment the subtraction of myTimeZone() import datetime, re from datetime import tzinfo class myTimeZone(tzinfo): """docstring for myTimeZone""" def utfoffset(self, dt): return timedelta(hours=1) def myDateHandler(aDateString): """u'Sat, 6 Sep 2008 21:16:33 EDT'""" _my_date_pattern = re.compile(r'\w+\,\s+(\d+)\s+(\w+)\s+(\d+)\s+(\d+)\:(\d+)\:(\d+)') day, month, year, hour, minute, second = _my_date_pattern.search(aDateString).groups() month = [ 'JAN', 'FEB', 'MAR', 'APR', 'MAY', 'JUN', 'JUL', 'AUG', 'SEP', 'OCT', 'NOV', 'DEC' ].index(month.upper()) + 1 dt = datetime.datetime( int(year), int(month), int(day), int(hour), int(minute), int(second) ) # dt = dt - datetime.timedelta(hours=1) # dt = dt - dt.tzinfo.utfoffset(myTimeZone()) return (dt.year, dt.month, dt.day, dt.hour, dt.minute, dt.second, 0, 0, 0) def main(): print myDateHandler("Sat, 6 Sep 2008 21:16:33 EDT") if __name__ == '__main__': main()
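    Note that tzinfo subclasses need a utcoffset() method (the snippet above defines utfoffset, which datetime never calls). On Python 3 the fixed-offset case can be written without a custom class at all; a sketch, assuming the trailing zone is always EDT (UTC-4):

        from datetime import datetime, timedelta, timezone

        def parse_edt(stamp):
            """Parse e.g. 'Sat, 6 Sep 2008 21:16:33 EDT' as an aware datetime."""
            naive = datetime.strptime(stamp.rsplit(' ', 1)[0], '%a, %d %b %Y %H:%M:%S')
            return naive.replace(tzinfo=timezone(timedelta(hours=-4), 'EDT'))

        local = parse_edt('Sat, 6 Sep 2008 21:16:33 EDT').astimezone()  # shift to the local zone
        print(local.timetuple())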

    Read the article

  • Group MySQL Data into Arbitrarily Sized Time Buckets

    - by Eric J.
    How do I count the number of records in a MySQL table based on a timestamp column per unit of time, where the unit of time is arbitrary? Specifically, I want to count how many records' timestamps fell into 15-minute buckets during a given interval. I understand how to do this in buckets of 1 second, 1 minute, 1 hour, 1 day etc. using MySQL date functions, e.g. SELECT YEAR(datefield) Y, MONTH(datefield) M, DAY(datefield) D, COUNT(*) Cnt FROM mytable GROUP BY YEAR(datefield), MONTH(datefield), DAY(datefield) but how can I group by 15-minute buckets?
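    A sketch of one way: round each timestamp down to the start of its 900-second (15-minute) bucket and group on that (MySQL allows grouping on the alias):

        SELECT FROM_UNIXTIME(FLOOR(UNIX_TIMESTAMP(datefield) / 900) * 900) AS bucket_start,
               COUNT(*) AS cnt
        FROM mytable
        GROUP BY bucket_start;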

    Read the article
