Search Results

Search found 3489 results on 140 pages for 'clock rate'.

Page 26 of 140

  • recursive program

    - by wilson88
    I am trying to make a recursive program that calculates interest per year. It prompts the user for the startup amount (1000), the interest rate (10%) and the number of years (1) (sample values in brackets). Working it out manually, I realised that the interest comes from the formula YT(1 + R) for the first year, which gives 1100; for the 2nd year YT(1 + R/2 + R2/2) // R squared; for the 3rd year YT(1 + R/3 + R2/3 + R3/3) // R cubed. How do I write a recursive program that will calculate the interest? Below is the function which I tried:

        // Latest after editing
        double calculateInterest2(double start, double rate, int duration)
        {
            if (0 == duration) {
                return start;
            } else {
                return (1 + rate) * calculateInterest2(start, rate, duration - 1);
            }
        }
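    For what it's worth, the function as posted already computes the compound amount start*(1 + R)^n recursively; the interest is just that amount minus the principal. A minimal Python sketch of the same recursion (the function names and the separate interest() helper are mine, not from the original post):

        def compound_amount(start, rate, years):
            # Base case: after zero years the amount is just the principal.
            if years == 0:
                return start
            # Each year multiplies the previous year's amount by (1 + rate).
            return (1 + rate) * compound_amount(start, rate, years - 1)

        def interest(start, rate, years):
            # Interest earned is the compound amount minus the principal.
            return compound_amount(start, rate, years) - start

        print(compound_amount(1000, 0.10, 1))  # 1100.0, matching the sample
        print(interest(1000, 0.10, 1))         # 100.0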

    Read the article

  • Matching String.

    - by Harikrishna
    I have a string:

        String mainString = "///BUY/SELL///ORDERTIME///RT///QTY///BROKERAGE///NETRATE///AMOUNTRS///RATE///SCNM///";

    Now I have other strings:

        String str1 = "RT";

    str1 should be matched only with the RT which is a substring of mainString, but not with ORDERTIME, which is also a substring of mainString.

        String str2 = "RATE";

    And RATE (str2) should be matched with the RATE which is a substring of mainString, but not with NETRATE, which is also a substring of mainString. How can we do that?
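    Since every field in mainString is delimited by ///, one way to get exact-field matches is to require the delimiters around the search term, or simply to split on the delimiter. A sketch of both ideas in Python's re module; the question itself is written against a Java/C#-style String, so treat this as an illustration of the technique rather than a drop-in answer:

        import re

        main_string = ("///BUY/SELL///ORDERTIME///RT///QTY///BROKERAGE"
                       "///NETRATE///AMOUNTRS///RATE///SCNM///")

        def field_match(term, text):
            # Match only when the term is a complete ///-delimited field, so
            # "RT" cannot match inside "ORDERTIME", nor "RATE" inside "NETRATE".
            return re.search("///" + re.escape(term) + "///", text) is not None

        print(field_match("RT", main_string))    # True  (the ///RT/// field)
        print(field_match("RATE", main_string))  # True  (the ///RATE/// field)
        print(field_match("TIME", main_string))  # False (only inside ORDERTIME)

        # Equivalent without regular expressions:
        print("RT" in main_string.split("///"))  # True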

    Read the article

  • Select a record with highest amount by joining two tables

    - by user2516394
    I've 2 tables, Sales and Purchase. The Sales table has fields SaleId, Rate, Quantity, Date, CompanyId, UserId; the Purchase table has fields PurchaseId, Rate, Quantity, Date, CompanyId, UserId. I want to select the record from either table that has the highest Rate*Quantity, something like:

        SELECT id, amount
          FROM (SELECT SaleId AS id, Rate * Quantity AS amount
                  FROM Sales
                 WHERE UserId = 1 AND CompanyId = 1 AND "Date" = CURRENT_DATE
                UNION ALL
                SELECT PurchaseId, Rate * Quantity
                  FROM Purchase
                 WHERE UserId = 1 AND CompanyId = 1 AND "Date" = CURRENT_DATE) t
         ORDER BY amount DESC

    and then take the first row (e.g. with FETCH FIRST 1 ROW ONLY or LIMIT 1, depending on the database).

    Read the article

  • Solving Big Problems with Oracle R Enterprise, Part II

    - by dbayard
    Part II – Solving Big Problems with Oracle R Enterprise

    In the first post in this series (see https://blogs.oracle.com/R/entry/solving_big_problems_with_oracle), we showed how you can use R to perform historical rate of return calculations against investment data sourced from a spreadsheet. We demonstrated the calculations against sample data for a small set of accounts. While this worked fine, in the real world the problem is much bigger, because the amount of data is much bigger: so much bigger that our approach in the previous post won't scale to meet the real-world needs.

    From our previous post, here are the challenges we need to conquer:

    - The actual data that needs to be used lives in a database, not in a spreadsheet
    - The actual data is much, much bigger: too big to fit into the normal R memory space and too big to want to move across the network
    - The overall process needs to run fast, much faster than a single processor allows
    - The actual data needs to be kept secured, another reason not to move it from the database and across the network
    - The process of calculating the IRR needs to be integrated with other database ETL activities, so that IRRs can be calculated as part of the data warehouse refresh processes

    In this post, we will show how we moved from the sample data environment to working with full-scale data. This post is based on actual work we did for a financial services customer during a recent proof-of-concept.

    Getting started with the Database

    At this point, we have some sample data and our IRR function. We were at a similar point in our customer proof-of-concept exercise: we had sample data but we did not have the full customer data yet, so our database was empty. But this was easily rectified by leveraging the transparency features of Oracle R Enterprise (see https://blogs.oracle.com/R/entry/analyzing_big_data_using_the). The following code shows how we took our sample data SimpleMWRRData and easily turned it into a new Oracle database table called IRR_DATA via ore.create(). The code also shows how we can access the database table IRR_DATA as if it were a normal R data.frame named IRR_DATA. If we go to sql*plus, we can also check out our new IRR_DATA table.

    At this point, we have our sample data loaded in the database as a normal Oracle table called IRR_DATA, so we proceeded to test our R function against database data. As our first test, we retrieved the data for a single account from the IRR_DATA table, pulled it into local R memory, then called our IRR function. This worked. No SQL coding required!

    Going from Crawling to Walking

    Now that we have shown our R code working with database-resident data for a single account, we wanted to experiment with doing this for multiple accounts. In other words, we wanted to implement the split-apply-combine technique we discussed in the first post in this series. Fortunately, Oracle R Enterprise provides a very scalable way to do this with a function called ore.groupApply(). You can read more about ore.groupApply() here: https://blogs.oracle.com/R/entry/analyzing_big_data_using_the1

    Here is an example of how we ask ORE to take our IRR_DATA table in the database, split it by the ACCOUNT column, apply a function that calls our SimpleMWRR() calculation, and then combine the results. (If you are following along at home, be sure to have installed our myIRR package on your database server via "R CMD INSTALL myIRR".)
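    (The original post showed the actual ORE calls as screenshots, which are not reproduced here. As a generic illustration of the split-apply-combine pattern being described (split the rows by account, apply the rate-of-return function to each group, combine the results), here is a small pure-Python sketch; the toy data and the simple_mwrr() placeholder are invented for the example and are not the post's R code:)

        from itertools import groupby
        from operator import itemgetter

        # Toy stand-in for the IRR_DATA table: (account, cash_flow) rows.
        irr_data = [
            ("A1", -1000.0), ("A1", 1100.0),
            ("A2", -500.0), ("A2", 25.0), ("A2", 550.0),
        ]

        def simple_mwrr(cash_flows):
            # Placeholder for the real money-weighted rate-of-return logic.
            return sum(cash_flows) / abs(cash_flows[0])

        # Split by account, apply the function per group, combine the results.
        rows = sorted(irr_data, key=itemgetter(0))
        results = {
            account: simple_mwrr([cf for _, cf in group])
            for account, group in groupby(rows, key=itemgetter(0))
        }
        print(results)  # {'A1': 0.1, 'A2': 0.15}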
    The interesting thing about ore.groupApply is that the calculation is not actually performed in the desktop R environment from which I am running it. What actually happens is that ore.groupApply uses the Oracle database to perform the work. The Oracle database is what actually splits the IRR_DATA table by ACCOUNT; it then takes the data for each account and sends it to an embedded R engine running on the database server to apply our R function, and finally combines all the individual results from the calls to the R function.

    This is significant because the embedded R engine only needs to deal with the data for a single account at a time. Whether we have 20 accounts or 1 million accounts or more, the R engine that performs the calculation does not care. Given that normal R has a finite amount of memory to hold data, the ore.groupApply approach overcomes the R memory scalability problem, since we only need to fit the data for a single account in R memory (not the data for all of the accounts).

    Additionally, the IRR_DATA does not need to be sent from the database to my desktop R program. Even though I am invoking ore.groupApply from my desktop R program, the actual SimpleMWRR calculation is run by the embedded R engine on the database server, so the IRR_DATA never leaves the database server. This is both a performance benefit, because network transmission of large amounts of data takes time, and a security benefit, because it is harder to protect private data once you start shipping it around your intranet. Another benefit, which we will discuss in a few paragraphs, is the ability to leverage Oracle database parallelism to run these calculations for dozens of accounts at once.

    From Walking to Running

    ore.groupApply is rather nice, but it still has the drawback that I run it from a desktop R instance. This is not ideal for integrating into typical operational processes like nightly data warehouse refreshes or monthly statement generation. But this is not an issue for ORE: Oracle R Enterprise lets us run this from the database using regular SQL, which is easily integrated into standard operations. That is extremely exciting, and it is the way we actually did these calculations in the customer proof.

    Oracle R Enterprise provides a SQL equivalent to ore.groupApply, which it refers to as "rqGroupEval". To use rqGroupEval via SQL, there is a bit of simple setup needed: basically, the Oracle database needs to know the structure of the input table and the grouping column, which we are able to define using the database's pipelined table function mechanisms. Here is the setup script:

    At this point, our initial setup of rqGroupEval is done for the IRR_DATA table. The next step is to define our R function to the database, which we do via a call to ORE's rqScriptCreate.

    Now we can test it. The SQL you use to run rqGroupEval uses the Oracle database pipelined table function syntax. The first argument to irr_dataGroupEval is a cursor defining our input; you can add additional where clauses and subqueries to this cursor as appropriate. The second argument is any additional input to the R function. The third argument is the text of a dummy select statement, which the database uses to identify the columns and datatypes to expect the R function to return. The fourth argument is the column of the input table to split/group by.
    The final argument is the name of the R function as you defined it when you called rqScriptCreate().

    The Real-World Results

    In our real customer proof-of-concept, we had more sophisticated calculation requirements than shown in this simplified blog example. For instance, we had to perform the rate of return calculations for 5 separate time periods, so the R code was enhanced to do so. In addition, some accounts needed a time-weighted rate of return to be calculated, so we extended our approach and added an R function to do that. And finally, there were a few more real-world data irregularities that we needed to account for, so we added logic to our R functions to deal with those exceptions.

    For the full-scale customer test, we loaded the customer data onto a half-rack Exadata X2-2 Database Machine. As our half-rack had 48 physical cores (and 96 threads if you consider hyperthreading), we wanted to take advantage of that CPU horsepower to speed up our calculations. To do so with ORE, it is as simple as leveraging the Oracle Database Parallel Query features. Let's look at the SQL used in the customer proof: notice that we use a parallel hint on the cursor that is the input to our rqGroupEval function. That is all we need to do to enable Oracle to use parallel R engines.

    Here are a few screenshots of what this SQL looked like in the Real-Time SQL Monitor when we ran it during the proof of concept (you may need to right-click on these images and view them full-screen to see the entire image). From them, you can notice a few things (numbers 1 through 5 below correspond to the highlighted numbers on the images):

    - The SQL completed in 110 seconds (1.8 minutes)
    - We calculated rates of return for 5 time periods for each of 911k accounts (the number of actual rows returned by the IRRSTAGEGROUPEVAL operation)
    - We accessed 103m rows of detailed cash flow/market value data (the number of actual rows returned by the IRR_STAGE2 operation)
    - We ran with 72 degrees of parallelism spread across 4 database servers
    - Most of our 110 seconds was spent in the "External Procedure call" event

    On average:

    - We performed 8,200 executions of our R function per second (911k accounts / 110s)
    - Each execution was passed 110 rows of data (103m detail rows / 911k accounts)
    - We did 41,000 single-time-period rate of return calculations per second (each of the 8,200 executions of our R function did rate of return calculations for 5 time periods)
    - We processed over 900,000 rows of database data in R per second (103m detail rows / 110s)

    R + Oracle R Enterprise: Best of R + Best of Oracle Database

    This blog post series started by describing a real customer problem: how to perform a lot of calculations on a lot of data in a short period of time. While standard R proved to be a very good fit for writing the necessary calculations, the challenge of working with a lot of data in a short period of time remained. This blog post series showed how Oracle R Enterprise enables R to be used in conjunction with the Oracle database to overcome the data volume and performance issues (as well as simplifying the operations and security issues). It also showed that we could calculate 5 time periods of rates of return for almost a million individual accounts in less than 2 minutes.
    In a future post, we will take the same R function and show how Oracle R Connector for Hadoop can be used in the Hadoop world. In that post, instead of having our data in an Oracle database, our data will live in Hadoop, and we will show how to use the Oracle R Connector for Hadoop and other Oracle Big Data Connectors to move data easily between Hadoop, R, and the Oracle database.

    Read the article

  • How can I permanently fix my date synchronization problem in Linux?

    - by gr33d
    Ubuntu 7.10 server i386: the clock/date/time won't stay in sync. Are there log files I can view to tell when the clock changes? For a temporary fix, I created a file in /etc/cron.hourly:

        #!/bin/sh
        ntpdate time.nist.gov

    However, this still leaves a potential hour of unchecked time. Is there a cron.minutely? That would still leave a potential minute of unchecked time. I have read about CMOS battery problems, but what if replacing the battery does not fix it? I'd like to be able to troubleshoot this as a purely software problem. My squid logs show dates back in 2005 when the clock changes, and my time-sensitive access controls are skewed and end up allowing users to surf prohibited websites during business hours.
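    One way to get the audit trail the poster asks about is to log the offset between the local clock and an NTP server on a schedule, independently of whatever is resetting the clock. A minimal pure-Python SNTP check that could be run from cron; the server choice and output format are assumptions, not from the original post:

        import socket
        import struct
        import time

        NTP_EPOCH_OFFSET = 2208988800  # seconds between 1900-01-01 and 1970-01-01

        def ntp_offset(server="time.nist.gov", timeout=5):
            # Minimal SNTP client: send a version-3 client request and read the
            # server's transmit timestamp from bytes 40-43 of the 48-byte reply.
            packet = b"\x1b" + 47 * b"\x00"
            with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
                s.settimeout(timeout)
                s.sendto(packet, (server, 123))
                reply, _ = s.recvfrom(48)
            server_secs = struct.unpack("!I", reply[40:44])[0] - NTP_EPOCH_OFFSET
            return server_secs - time.time()

        # Append one line per run; a sudden large offset shows when the clock jumped.
        print("%s offset=%+.2fs" % (time.strftime("%Y-%m-%d %H:%M:%S"), ntp_offset()))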

    Read the article

  • How to keep time on resumed KVM guest with libvirt?

    - by Hristo Hristov
    On my host I am using libvirt and a KVM guest. When the host is shutting down, libvirt suspends the guest; when the host is starting up, libvirt resumes the guest. The problem is, if the guest is suspended and resumed 24 hours later, for example, then the guest time is 24 hours in the past. I thought that maybe the problem was with the clocksource, but it is set to kvm-clock already:

        $ cat /sys/devices/system/clocksource/clocksource0/available_clocksource
        kvm-clock tsc hpet acpi_pm
        $ cat /sys/devices/system/clocksource/clocksource0/current_clocksource
        kvm-clock

    Read the article

  • Game loop and time tracking

    - by David Brown
    Maybe I'm just an idiot, but I've been trying to implement a game loop all day and it's just not clicking. I've read literally every article I could find on Google, but the problem is that they all use different timing mechanisms, which makes them difficult to apply to my particular situation (some use milliseconds, others use ticks, etc.). Basically, I have a Clock object that updates each time the game loop executes.

        internal class Clock
        {
            public static long Timestamp { get { return Stopwatch.GetTimestamp(); } }
            public static long Frequency { get { return Stopwatch.Frequency; } }

            private long _startTime;
            private long _lastTime;
            private TimeSpan _totalTime;
            private TimeSpan _elapsedTime;

            /// <summary>
            /// The amount of time that has passed since the first step.
            /// </summary>
            public TimeSpan TotalTime { get { return _totalTime; } }

            /// <summary>
            /// The amount of time that has passed since the last step.
            /// </summary>
            public TimeSpan ElapsedTime { get { return _elapsedTime; } }

            public Clock()
            {
                Reset();
            }

            public void Reset()
            {
                _startTime = Timestamp;
                _lastTime = 0;
                _totalTime = TimeSpan.Zero;
                _elapsedTime = TimeSpan.Zero;
            }

            public void Tick()
            {
                long currentTime = Timestamp;
                if (_lastTime == 0)
                    _lastTime = currentTime;
                _totalTime = TimestampToTimeSpan(currentTime - _startTime);
                _elapsedTime = TimestampToTimeSpan(currentTime - _lastTime);
                _lastTime = currentTime;
            }

            public static TimeSpan TimestampToTimeSpan(long timestamp)
            {
                return TimeSpan.FromTicks(
                    (timestamp * TimeSpan.TicksPerSecond) / Frequency);
            }
        }

    I based most of that on the XNA GameClock, but it's greatly simplified. Then, I have a Time class which holds various times that the Update and Draw methods need to know.

        public class Time
        {
            public TimeSpan ElapsedVirtualTime { get; internal set; }
            public TimeSpan ElapsedRealTime { get; internal set; }
            public TimeSpan TotalVirtualTime { get; internal set; }
            public TimeSpan TotalRealTime { get; internal set; }

            internal Time() { }

            internal Time(TimeSpan elapsedVirtualTime, TimeSpan elapsedRealTime,
                          TimeSpan totalVirtualTime, TimeSpan totalRealTime)
            {
                ElapsedVirtualTime = elapsedVirtualTime;
                ElapsedRealTime = elapsedRealTime;
                TotalVirtualTime = totalVirtualTime;
                TotalRealTime = totalRealTime;
            }
        }

    My main class keeps a single instance of Time, which it should constantly update during the game loop. So far, I have this:

        private static void Loop()
        {
            do
            {
                Clock.Tick();
                Time.TotalRealTime = Clock.TotalTime;
                Time.ElapsedRealTime = Clock.ElapsedTime;
                InternalUpdate(Time);
                InternalDraw(Time);
            } while (!_exitRequested);
        }

    The real time properties of the Time class turn out great. Now I'd like to get a proper update/draw loop working so that the state is updated a variable number of times per frame, but at a fixed timestep. At the same time, Time.TotalVirtualTime and Time.ElapsedVirtualTime should be updated accordingly. In addition, I intend for this to support multiplayer in the future, in case that makes any difference to the design of the game loop. Any tips or examples on how I could go about implementing this (aside from links to articles)?
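    What the poster describes (a variable number of updates per frame, each at a fixed timestep) is the classic fixed-timestep accumulator loop: feed the measured real elapsed time into an accumulator, consume it in constant-size update steps, and render once per frame. A minimal sketch of the pattern in Python, since the idea is language-neutral; the update() and draw() stubs are placeholders, not code from the post:

        import time

        TIMESTEP = 1.0 / 60.0  # fixed virtual-time step for each update

        def update(virtual_total, dt):
            pass  # placeholder: advance the game state by exactly dt seconds

        def draw():
            pass  # placeholder: render the current state

        def loop(run_seconds=1.0):
            accumulator = 0.0     # real time not yet consumed by updates
            virtual_total = 0.0   # analogous to Time.TotalVirtualTime
            last = time.perf_counter()
            while virtual_total < run_seconds:
                now = time.perf_counter()
                accumulator += now - last  # real elapsed time since last frame
                last = now
                # Zero or more fixed-size updates to catch up with real time.
                while accumulator >= TIMESTEP:
                    update(virtual_total, TIMESTEP)
                    virtual_total += TIMESTEP
                    accumulator -= TIMESTEP
                draw()

        loop()

    In terms of the poster's classes, the accumulator would be fed from Clock.ElapsedTime, virtual_total maps to Time.TotalVirtualTime, and TIMESTEP to Time.ElapsedVirtualTime during updates.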

    Read the article

  • Pygame surfaces and their Rects

    - by Jaka Novak
    I am trying to understand how pygame surfaces work. I am confused about the Rect position of a Surface object. If I blit a surface onto the screen at some position, the Surface is drawn at the right position, but the Rect of the surface is still at position (0, 0)... I tried writing my own surface class with a new rect, but I am not sure if that is the right solution. My goal is to be able to move a surface like an image, with rect.move() or something like that. If there is any solution to do that I would be happy to read it. Thanks for the answer and for taking the time to read this awful English. If it helps, here is some code for better understanding of my problem (run it first, then uncomment the two lines of code and run again to see the difference):

        import pygame
        from pygame.locals import *

        class SurfaceR(pygame.Surface):
            def __init__(self, size, position):
                pygame.Surface.__init__(self, size)
                self.rect = pygame.Rect(position, size)
                self.position = position
                self.size = size

            def get_rect(self):
                return self.rect

        def main():
            pygame.init()
            screen = pygame.display.set_mode((640, 480))
            pygame.display.set_caption("Screen!?")
            clock = pygame.time.Clock()
            fps = 30

            white = (255, 255, 255)
            red = (255, 0, 0)
            green = (0, 255, 0)
            blue = (0, 0, 255)

            surface = pygame.Surface((70, 200))
            surface.fill(red)
            surface_re = SurfaceR((300, 50), (100, 300))
            surface_re.fill(blue)

            while True:
                for event in pygame.event.get():
                    if event.type == QUIT:
                        return 0

                screen.blit(surface, (100, 50))
                screen.blit(surface_re, surface_re.position)
                #pygame.draw.rect(screen, white, surface.get_rect())
                #pygame.draw.rect(screen, white, surface_re.get_rect())
                pygame.display.update()
                clock.tick(fps)

        if __name__ == "__main__":
            main()
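    For reference, the usual pygame idiom is close to what the poster is reaching for: a Surface has no position of its own, so you keep a separate Rect, blit at that Rect, and move the Rect. A small sketch of that pattern (a hypothetical moving box, not code from the post):

        import pygame

        pygame.init()
        screen = pygame.display.set_mode((640, 480))
        clock = pygame.time.Clock()

        box = pygame.Surface((70, 50))
        box.fill((255, 0, 0))
        # get_rect() always starts at (0, 0); place it with a keyword argument.
        box_rect = box.get_rect(topleft=(100, 50))

        running = True
        while running:
            for event in pygame.event.get():
                if event.type == pygame.QUIT:
                    running = False
            box_rect = box_rect.move(2, 0)  # the Rect carries the position
            screen.fill((0, 0, 0))
            screen.blit(box, box_rect)      # blit accepts a Rect as destination
            pygame.display.update()
            clock.tick(30)

        pygame.quit()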

    Read the article

  • Why are my videos playing speeded up with no audio, but work fine if I log in as a guest?

    - by Martins Kruze
    Since the start of this week I have been experiencing a glitch in multimedia on my Samsung R518 laptop. I have 2 problems: videos in every player are sped up around 2 or 4 times, including youtube.com (both the HTML5 and Flash variants), any other video on the web, and videos on my laptop played by Totem Media Player; the exception is the VLC player, but the 2nd problem concerns even that. The 2nd problem: there is no sound, simple as that (with or without headphones plugged in). These problems are new and were not seen before. I upgraded to Ubuntu 10.10 after it was possible, and from the start I didn't have any of this; it just started this week. I haven't even installed new software. I have more or less solved the question (kind of): I just logged in as a guest and it all works, but when I make a new user, it does not. Please help me. Some stats below:

        sudo lshw -c sound
          *-multimedia
               description: Audio device
               product: RV710/730
               vendor: ATI Technologies Inc
               physical id: 0.1
               bus info: pci@0000:01:00.1
               version: 00
               width: 32 bits
               clock: 33MHz
               capabilities: pm pciexpress msi bus_master cap_list
               configuration: driver=HDA Intel latency=0
               resources: irq:48 memory:cfeec000-cfeeffff
          *-multimedia
               description: Audio device
               product: 82801I (ICH9 Family) HD Audio Controller
               vendor: Intel Corporation
               physical id: 1b
               bus info: pci@0000:00:1b.0
               version: 03
               width: 64 bits
               clock: 33MHz
               capabilities: pm msi pciexpress bus_master cap_list
               configuration: driver=HDA Intel latency=0
               resources: irq:47 memory:fc200000-fc203fff

        sudo lshw -c video
          *-display
               description: VGA compatible controller
               product: M92 LP [Mobility Radeon HD 4300 Series]
               vendor: ATI Technologies Inc
               physical id: 0
               bus info: pci@0000:01:00.0
               version: 00
               width: 32 bits
               clock: 33MHz
               capabilities: pm pciexpress msi vga_controller bus_master cap_list rom
               configuration: driver=radeon latency=0
               resources: irq:46 memory:d0000000-dfffffff ioport:2000(size=256) memory:cfef0000-cfefffff memory:cfe00000-cfe1ffff

    Read the article

  • View the Time & Date in Chrome When Hiding Your Taskbar

    - by Asian Angel
    Do you prefer keeping your Taskbar hidden but still need to keep watch on what time it is? Now you can keep track of the time without the Taskbar using the Date Today extension for Google Chrome.

    A Look at Date Today with Different Themes

    This extension does one thing and does it well: it provides you with an "active icon" clock that will let you view the time and date in two fashions. The first is by hovering your mouse over the "Toolbar Clock Button", and the second is by clicking on the "Toolbar Clock Button" to view an enlarged version. Here you can see the extension in use with five different themes to get an idea of how it might look with the theme that you are currently using. It does stand out very nicely with brighter or darker colored themes.

    Conclusion

    While this extension is obviously not for everyone, it will make a nice (and useful) addition to Chrome for those who prefer keeping their Taskbar hidden.

    Links: Download the Date Today extension (Google Chrome Extensions)

    Read the article

  • Is the model to sell apps on the App Store better with a paid-only version?

    - by ????
    Rob Napier, the author of iOS 5 Programming: Pushing the Limits, mentioned there are several models for selling apps on the App Store:

    1. Write an app and sell it
    2. Publish a free and a full version
    3. Ad-supported, by a third party or by iAd
    4. In-app purchase

    Surprisingly, the author said that the most workable model in terms of sales is (1). I would think that (2), with a fairly limited free version, could bring more sales, as people might not plunk down $0.99 or $1.99 for something they haven't tried. I, for one, might not have purchased Angry Birds if I hadn't tried the free version first. I also think it depends on the situation: if the app is an alarm clock, and there are already 5 free alarm clocks in the App Store, then your $0.99 app might not be that eagerly purchased. But if yours is also free, and users really like it best out of all the other ones, they may think $0.99 is nothing for a good alarm clock and gladly pay you the $0.99 in exchange for the full version, getting something they can't have in the free one (for example, the ability to use a song from the Music Library as the alarm). Could (1) work only if users definitely want the app and have no substitute? How might it work best?

    Read the article

  • General monitoring for SQL Server Analysis Services using Performance Monitor

    - by Testas
    A recent customer engagement required the setup of a monitoring solution for SSAS. Due to the time restrictions placed upon this, the native Windows Performance Monitor (Perfmon) and SQL Server Profiler monitoring tools were used, as a third-party tool would have meant the customer providing an additional monitoring server that was not available. I wanted to outline the performance monitoring counters that were used to monitor the system on which SSAS was running. Due to the slow query performance that was occurring during certain scenarios, Perfmon was used to establish whether any pressure was being placed on the disk, CPU or memory subsystems when concurrent connections access the same query, and Profiler to pinpoint how the query was being managed within SSAS; Profiler I will leave for another blog.

    This guide is not designed to provide a definitive list of what should be used when monitoring SSAS; different situations may require the addition or removal of counters as presented by the situation. However, I hope that it serves as a good basis for starting your monitoring of SSAS. I would also like to acknowledge Chris Webb's awesome chapters from "Expert Cube Development" that also helped shape my monitoring strategy: http://cwebbbi.spaces.live.com/blog/cns!7B84B0F2C239489A!6657.entry

    Simulating Connections

    To simulate the additional connections to the SSAS server whilst monitoring, I used ascmd to simulate multiple connections to the typical and worst-performing queries that were identified by the customer. A similar script can be downloaded from CodePlex at http://www.codeplex.com/SQLSrvAnalysisSrvcs (file name: ASCMD_StressTestingScripts.zip).

    Performance Monitor

    Within Performance Monitor, a counter log was created that contained the list of counters below. The important point to note when running the counter log is that the RUN AS property within the counter log properties should be changed to an account that has rights to the SSAS instance when monitoring MSAS counters. Failure to do so means that the counter log runs under the system account; no errors or warnings are given while the counter log runs, and it is not until you need to view the MSAS counters that you find they are not displayed. If your connection simulation takes hours, this could prove quite frustrating if not done beforehand.

    The counters used, as Object \ Counter (Instance): Justification:

    - System \ Processor Queue Length (N/A): Indicates how many threads are waiting for execution against the processor. If this counter is consistently higher than around 5 when processor utilization approaches 100%, it is a good indication that there is more work (active threads ready for execution) than the machine's processors are able to handle.
    - System \ Context Switches/sec (N/A): Measures how frequently the processor has to switch from user- to kernel-mode to handle a request from a thread running in user mode. The heavier the workload running on your machine, the higher this counter will generally be, but over the long term the value of this counter should remain fairly constant. If this counter suddenly starts increasing, however, it may be an indication of a malfunctioning device, especially if the Processor \ Interrupts/sec (_Total) counter on your machine shows a similar unexplained increase.
    - Process \ % Processor Time (sqlservr): Should definitely be used if Processor \ % Processor Time (_Total) is maxing at 100%, to assess the effect of the SQL Server process on the processor.
    - Process \ % Processor Time (msmdsrv): As above, but to assess the effect of the Analysis Services process on the processor.
    - Process \ Working Set (sqlservr): If the Memory \ Available MBytes counter is decreasing, this counter can be run to indicate whether the process is consuming larger and larger amounts of RAM. Process(instance) \ Working Set measures the size of the working set for each process, which indicates the number of allocated pages the process can address without generating a page fault.
    - Process \ Working Set (msmdsrv): As above, for the Analysis Services process.
    - Processor \ % Processor Time (_Total and individual cores): Measures the total utilization of the processor by all running processes. If multi-processor, be mindful that only an average is provided.
    - Processor \ % Privileged Time (_Total): To see how the OS is handling basic IO requests. If kernel-mode utilization is high, your machine is likely underpowered, as it is too busy handling basic OS housekeeping functions to be able to effectively run other applications.
    - Processor \ % User Time (_Total): To see how the applications are behaving from a processor perspective; a high percentage utilisation suggests the server is dealing with too many applications and may require increased hardware or scaling out.
    - Processor \ Interrupts/sec (_Total): The average rate, in incidents per second, at which the processor received and serviced hardware interrupts. It should be consistent over time, but a sudden unexplained increase could indicate a device malfunction, which can be confirmed using the System \ Context Switches/sec counter.
    - Memory \ Pages/sec (N/A): Indicates the rate at which pages are read from or written to disk to resolve hard page faults. This counter is a primary indicator of the kinds of faults that cause system-wide delays, and the primary counter to watch for indications of possible insufficient RAM to meet your server's needs. A good idea here is to configure a Perfmon alert that triggers when the number of pages per second exceeds 50 per paging disk on your system. You may also want to check the configuration of the page file on the server.
    - Memory \ Available MBytes (N/A): The amount of physical memory available to processes running on the computer. If this counter is greater than 10% of the actual RAM in your machine, you probably have more than enough RAM. Monitor it regularly to see if any downward trend develops, and set an alert to trigger if it drops below 2% of the installed RAM.
    - Physical Disk \ Disk Transfers/sec (each physical disk): If it goes above 10 disk I/Os per second, you've got poor response time for your disk.
    - Physical Disk \ % Idle Time (_Total): If Disk Transfers/sec is above 25 disk I/Os per second, use this counter, which measures the percentage of time that your hard disk is idle during the measurement interval. If you see this counter fall below 20%, you've likely got read/write requests queuing up for a disk which is unable to service them in a timely fashion.
    - Physical Disk \ Disk Queue Length (the OLAP and SQL physical disks): A value that is consistently less than 2 means that the disk system is handling the IO requests against the physical disk.
    - Network Interface \ Bytes Total/sec (the NIC): Should be monitored over a period of time to see if there is an increase or decrease in network utilisation.
    - Network Interface \ Current Bandwidth (the NIC): An estimate of the current bandwidth of the network interface in bits per second (bps).
    - MSAS 2005: Memory \ Memory Limit High KB (N/A): Shows (as a percentage) the high memory limit configured for SSAS in C:\Program Files\Microsoft SQL Server\MSAS10.MSSQLSERVER\OLAP\Config\msmdsrv.ini.
    - MSAS 2005: Memory \ Memory Limit Low KB (N/A): Shows (as a percentage) the low memory limit configured for SSAS in the same msmdsrv.ini.
    - MSAS 2005: Memory \ Memory Usage KB (N/A): Displays the memory usage of the server process.
    - MSAS 2005: Memory \ File Store KB (N/A): Displays the amount of memory that is reserved for the cache. Note that if the total memory limit in msmdsrv.ini is set to 0, no memory is reserved for the cache.
    - MSAS 2005: Storage Engine Query \ Queries from Cache Direct/sec (N/A): The rate of queries answered directly from the cache.
    - MSAS 2005: Storage Engine Query \ Queries from Cache Filtered/sec (N/A): The rate of queries answered by filtering an existing cache entry.
    - MSAS 2005: Storage Engine Query \ Queries from File/sec (N/A): The rate of queries answered from files.
    - MSAS 2005: Storage Engine Query \ Average Time/query (N/A): The average time of a query.
    - MSAS 2005: Connection \ Current connections (N/A): The number of connections against the SSAS instance.
    - MSAS 2005: Connection \ Requests/sec (N/A): The rate of query requests per second.
    - MSAS 2005: Locks \ Current Lock Waits (N/A): The number of connections waiting on a lock.
    - MSAS 2005: Threads \ Query Pool Job Queue Length (N/A): The number of queries in the job queue.
    - MSAS 2005: Proc Aggregations \ Temp File Bytes Written/sec (N/A): Shows the number of bytes of data processed in a temporary file.
    - MSAS 2005: Proc Aggregations \ Temp File Rows Written/sec (N/A): Shows the number of rows of data processed in a temporary file.

    Read the article

  • Using PHP version 5.2 or 5.3 for commercial products?

    - by Ash
    I'm doing research on what version of PHP to use when creating commercial scripts that will be sold to the public. Although the available stats aren't great, PHP 5.3 shows an 18.5% adoption rate. I'd like to use Symfony to create these scripts, and it requires 5.3.2, which shows an even lower adoption rate (roughly 13% of that 18.5% use less than 5.3.2). Would I be risking much by jumping straight to PHP 5.3.2+, or should I ignore the stats and plough ahead?

    Read the article

  • Using PHP version 5.2 or 5.3 for end-user commercial products?

    - by Ash
    I'm doing research on what version of PHP to use when creating commercial scripts that will be sold to end users. Although the available stats aren't great, PHP 5.3 shows an 18.5% adoption rate. I'd like to use Symfony to create these scripts, and it requires 5.3.2, which shows an even lower adoption rate (roughly 13% of that 18.5% use less than 5.3.2). Would I be risking much by jumping straight to PHP 5.3.2+, or should I ignore the stats and plough ahead?

    Read the article

  • Registration Open for 2015 Oracle Value Chain Summit

    - by Terri Hiskey
    Registration has opened for the Oracle Value Chain Summit, taking place January 26-28, 2015 in San Jose, California. Register now and take advantage of the Super Saver rate of only $495 (a $400 savings from the regular registration rate), good through September 26. Click here to register today, or to check out further information about the Summit. Keynote speakers at the 2015 event include former 49ers quarterback Steve Young and Andrew Winston, leading green business expert and author of the best-selling Green to Gold.

    Read the article

  • Alfa AWUS036H USB wireless adapter not recognized

    - by GFiasco
    The Alfa AWUS036H USB wireless adapter will not be recognized by my netbook (Ubuntu 14.04, Asus X201E). As I understand it, the drivers should already be built into this version of Linux, but I tried a make/make install of the latest Realtek drivers (as mentioned on "How do I install drivers for the Alfa AWUS036H USB wireless adapter?") and it didn't work. I then followed the advice of another thread (ALFA AWUS036NH driver) and did a make/make install of the most up-to-date backport of the drivers, but that didn't work either. At this point I tried a series of commands from this thread (http://ubuntuforums.org/showthread.php?t=2187780) in an attempt to identify the problem, but at no point could I get the laptop to recognize the USB adapter. I have also troubleshot the USB cable itself, tried both the USB 2.0 and 3.0 ports on the laptop, have never received an error message about needing to update the firmware, and have seemingly successfully installed all manner of variations of the Realtek drivers which were supposed to make the adapter work. (I also tried to delete/clean up after each install, in the hope I wasn't making things worse.) Not sure what I should do next. Please let me know if I need to post any more information. Thanks very much for your help.

    EDIT: Before inserting the Alfa USB adapter:

        :~$ lsusb
        Bus 002 Device 002: ID 8087:0024 Intel Corp. Integrated Rate Matching Hub
        Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
        Bus 001 Device 004: ID 0bda:570c Realtek Semiconductor Corp.
        Bus 001 Device 026: ID 13d3:3393 IMC Networks
        Bus 001 Device 002: ID 8087:0024 Intel Corp. Integrated Rate Matching Hub
        Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
        Bus 004 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
        Bus 003 Device 002: ID 03f0:3112 Hewlett-Packard
        Bus 003 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub

    After inserting the Alfa USB adapter (USB 3.0 port, no change):

        :~$ lsusb
        Bus 002 Device 002: ID 8087:0024 Intel Corp. Integrated Rate Matching Hub
        Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
        Bus 001 Device 004: ID 0bda:570c Realtek Semiconductor Corp.
        Bus 001 Device 026: ID 13d3:3393 IMC Networks
        Bus 001 Device 002: ID 8087:0024 Intel Corp. Integrated Rate Matching Hub
        Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
        Bus 004 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
        Bus 003 Device 002: ID 03f0:3112 Hewlett-Packard
        Bus 003 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub

    I ran tail -f /var/log/syslog and inserted the device: no recognition (the last entry is dated 16:17:01, an hour ago). I'm going to check it on another Ubuntu 14.04 laptop and a Windows XP desktop; I'll update afterwards. Thanks for your help to this point.

    Read the article

  • PASS Summit 2010 BI Workshop Feedbacks

    - by Davide Mauri
    As many other speakers have already done, I'd like to share with the SQL community the feedback from my PASS Summit 2010 workshop. For those who were not there, my workshop was "BI From A-Z", and its main objective was to introduce people to the BI world not only from a technical point of view but also with a strong insistence on a methodological, "engineered" approach.

    The will to put more engineering into IT (and especially into the BI field) is something that has been growing stronger in me every day for the last 5 years, since I simply envy the fact that Airbus, Fincantieri, BMW (just to name a few) can create very complex machines "just" by putting people together and giving them some rules to follow. (Of course this is an oversimplification, but I think you get what I mean.) The key point of engineering is that, after having defined the project blueprint, you can give a huge number of people the rules to follow, the correct tools to implement the rules easily and semi-automatically, and a way to measure the quality of the results. Could this be done in IT? A very big question, so my scope here is limited to BI.

    So that's the main point of my workshop: an entry-level approach to BI (the level was 200) that allows attendees to learn the basics, to understand which tools they should use for which purpose and, above all, a set of rules and tools that make a BI solution scalable in terms of the people working on it, while still maintaining very good quality. All done not by focusing only on practice but by explaining the theory behind it, to show how much it helps to build a correct solution regardless of the technology used to implement it. The idea is to reach a point where more than 70% of the work done to create a BI solution can be reused even if the technology changes. This is a very demanding challenge nowadays with the coming of Denali and its column-aligned storage and the shiny new DAX language.

    As you may understand, I was looking forward to the feedback, since you may have noticed that there's a lot of "architectural" stuff in IT but really nothing on "engineering". How the session would be perceived by the attendees was really unknown to me. The feedback could also give a good indication of whether the need for more "engineering" is something only I feel, or something broader.

    I'm very happy to be able to say that the overall score of 4.75 put my workshop in the top 20 sessions (out of nearly 200)! Here are the detailed evaluations (answer: number of responses):

    How would you rate the usefulness of the information presented in your day-to-day environment? 4.75
    (3: 1, 4: 12, 5: 42)

    How would you rate the speaker's presentation skills? 4.80
    (3: 1, 4: 9, 5: 45)

    How would you rate the speaker's knowledge of the subject? 4.95
    (4: 3, 5: 52)

    How would you rate the accuracy of the session title, description and experience level to the actual session? 4.75
    (3: 2, 4: 10, 5: 43)

    How would you rate the amount of time allocated to cover the topic/session? 4.44
    (3: 7, 4: 17, 5: 31)

    How would you rate the quality of the presentation materials? 4.62
    (4: 21, 5: 34)

    The comments were all very positive. Many of them asked for more time on the subject (or to shorten the very last topics). I'll treasure these comments and review the content accordingly. We'll organize a two-day class on this topic, where more examples will be shown and some arguments will be explained more deeply.

    I'd just like to answer a comment asking how much of what I showed is "universally applicable". I can tell you that all of our BI projects follow these rules, and they've been applied to different markets (insurance, fashion, GDO) with different people and different teams, and they have allowed us to be "adaptive" towards the customer. The better the rules are defined, and the more tools there are to support their implementation, the easier it is to add new people to a project and to add or change solution features.

    Think of a car: how come almost any mechanic can help you fix a problem? Because they know what to expect. Because there are rules that allow them to identify the problem without having to discover each time how the car has been built. And this is of course also true for car upgrades and improvements.

    Last but not least: thanks a lot to everyone for coming!

    Read the article

  • Fingerprint driver issue on HP Pavilion dm4

    - by user108226
    I need a fingerprint scanner driver for the Pavilion dm4.

        Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
        Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
        Bus 001 Device 002: ID 8087:0024 Intel Corp. Integrated Rate Matching Hub
        Bus 002 Device 002: ID 8087:0024 Intel Corp. Integrated Rate Matching Hub
        Bus 001 Device 003: ID 0a5c:21e3 Broadcom Corp.
        Bus 001 Device 004: ID 138a:0018 Validity Sensors, Inc.
        Bus 002 Device 003: ID 10f1:1a2e Importek

    Read the article

  • What if globals make sense?

    - by Greg
    I've got a value that many objects need. For example: a financial application with different investments as objects, most of which need the current interest rate. I was hoping to encapsulate my "financial environment" as an object, with the interest rate as a property, but sibling objects that need that value can't get to it. So how do I share values among many objects without over-coupling my design? Obviously I'm thinking about this wrong.
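    One common alternative to a global here is to construct the shared "environment" once and hand the same instance to every object that needs it (constructor injection). A small Python sketch of that idea; the class names are invented for illustration:

        class FinancialEnvironment:
            """Shared state passed explicitly, instead of a global."""
            def __init__(self, interest_rate):
                self.interest_rate = interest_rate

        class Investment:
            def __init__(self, principal, env):
                self.principal = principal
                self.env = env  # every sibling receives the same environment

            def yearly_interest(self):
                return self.principal * self.env.interest_rate

        env = FinancialEnvironment(interest_rate=0.05)
        bond = Investment(10_000, env)
        fund = Investment(2_500, env)
        print(bond.yearly_interest(), fund.yearly_interest())  # 500.0 125.0

        env.interest_rate = 0.06  # one update is visible to every holder
        print(bond.yearly_interest())  # 600.0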

    Read the article

  • How Best to Use Your Forum For SEO

    Forums are no exception to this rule, by any means. When you create a forum, you obviously want to attract users who will get the most from your website and actually utilize what you have to offer. It does no good for you to market to the masses and find a 50% success rate when you can market to your specific niche and find a 75% or even 85% success rate.

    Read the article

  • Should I charge for travel time as a contractor?

    - by Keith
    Here's a question for all my fellow contractors. I'm paid quite handsomely for my normal contracted hours (any overtime is billed at the same rate), but do you think it fair to bill for travel time to the other end of the country (to a regional office) when this takes place outside of the normal working day (or overlaps into the evening), as well as for the actual time holed up in a hotel room when you get there, ready for a normal working day the next day, along with the return journey? Petrol is claimed normally (at a nominal rate) and the hotel is covered by the company I contract for.

    Read the article

  • WoW runs faster on GNOME Shell compared to Unity

    - by João Vinholi
    I have been trying to run WoW on Ubuntu 12.04. When I run it on Unity, the frame rate is very low and it is impossible to play, but when I launch it on GNOME Shell, for some reason, the frame rate gets very high and the playing experience is very comfortable. The problem is that I prefer running Unity to GNOME Shell, but I like to play WoW too. Is there a way to run WoW on Unity with no lag?

    Read the article

  • WoW is much faster on GNOME Shell compared to Unity

    - by João Vinholi
    I have been trying to run WoW on Ubuntu 12.04. When I run it on Unity, the frame rate is very low and it is impossible to play, but when I launch it on GNOME Shell, for some reason, the frame rate gets very high and the playing experience is very comfortable. The problem is that I prefer running Unity to GNOME Shell, but I like to play WoW too. Is there a way to run WoW on Unity with no lag?

    Read the article

  • Oracle Cloud Services Referral Program Now Available

    - by Cinzia Mascanzoni
    Partners can now take advantage of the five different Cloud Services programs. The Cloud Referral Partner program allows partners to get rewarded for referring Oracle Cloud opportunities to Oracle. The Cloud Services Partner Referral program is an extension of Oracle's existing referral program, but offers a standard 10% referral rate paid on guaranteed revenue, with a $50K cap. For a limited time, Oracle is offering a 20% referral rate for [offering still being finalized]. Contact your partner manager for more details and click here for more information.

    Read the article

  • How to find out console equivalents of Ubuntu System Settings GUI?

    - by user4514
    How do I find out, in general, what the very nice System Settings GUI in Ubuntu (say 12.04) does, and how do I replicate the changes on the command line? Many people ask questions like "how do I change the keyboard rate using the command line?", and often the answers do not help and are hard to find. What is the easiest way to find out what the GUI is actually changing, for the various types of settings (i.e. keyboard layout, rate, mouse, network, ...)?

    Read the article
