Search Results

Search found 2796 results on 112 pages for 'bounce rate'.

Page 19/112 | < Previous Page | 15 16 17 18 19 20 21 22 23 24 25 26  | Next Page >

  • recursive program

    - by wilson88
    I am trying to make a recursive program that calculates interest per year. It prompts the user for the startup amount (1000), the interest rate (10%) and the number of years (1); sample values are shown in brackets. Working it out manually, I realised that the interest comes from the formula YT(1 + R) for the first year, which gives 1100; for the 2nd year, YT(1 + R/2 + R²/2) (R squared); and for the 3rd year, YT(1 + R/3 + R²/3 + 3R³) (R cubed). How do I write a recursive program that will calculate the interest? Below is the function I tried (latest version, after editing):

        double calculateInterest2(double start, double rate, int duration) {
            if (0 == duration) {
                return start;
            } else {
                return (1 + rate) * calculateInterest2(start, rate, duration - 1);
            }
        }
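
    For comparison, here is a minimal sketch of the same recursion (added for illustration, in Python rather than the original C++) that returns the interest earned instead of the final balance, assuming plain annual compounding at a fixed rate:

        def compound_interest(start, rate, years):
            """Interest earned after `years` of annual compounding (a sketch, not the poster's code)."""
            def balance(n):
                # Base case: before any year has passed, the balance is the starting amount.
                if n == 0:
                    return start
                # Recursive case: last year's balance grown by one more year of interest.
                return (1 + rate) * balance(n - 1)
            return balance(years) - start

        print(round(compound_interest(1000, 0.10, 1), 2))  # 100.0  (balance 1100 after one year)
        print(round(compound_interest(1000, 0.10, 2), 2))  # 210.0  (balance 1210 after two years)

    The calculateInterest2 function above appears to compute the compounded balance correctly; subtracting the starting amount at the end is one way to turn that balance into the interest figure.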

    Read the article

  • Matching String.

    - by Harikrishna
    I have a string:

        String mainString = "///BUY/SELL///ORDERTIME///RT///QTY///BROKERAGE///NETRATE///AMOUNTRS///RATE///SCNM///";

    Now I have other strings. String str1 = "RT" should match only the standalone RT token in mainString, but not the RT inside ORDERTIME, which is also a substring of mainString. String str2 = "RATE" should match the standalone RATE token in mainString, but not NETRATE, which is also a substring of mainString. How can we do that?
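
    One common way to do this kind of whole-token matching is a regular expression anchored on word boundaries. A minimal sketch (shown in Python for brevity; the question itself is C#, where the Regex class supports the same \b anchor), assuming the tokens are separated by the "/" delimiters shown above:

        import re

        main_string = "///BUY/SELL///ORDERTIME///RT///QTY///BROKERAGE///NETRATE///AMOUNTRS///RATE///SCNM///"

        # \b anchors the pattern at word boundaries, so "RT" cannot match inside
        # "ORDERTIME" and "RATE" cannot match inside "NETRATE".
        for token in ("RT", "RATE"):
            match = re.search(r"\b" + re.escape(token) + r"\b", main_string)
            print(token, "matched as a whole token" if match else "not matched")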

    Read the article

  • Select a record with highest amount by joining two tables

    - by user2516394
    I have 2 tables, Sales and Purchase. The Sales table has fields SaleId, Rate, Quantity, Date, CompanyId, UserID; the Purchase table has fields PurchaseId, Rate, Quantity, Date, CompanyId, UserID. I want to select the record from either table that has the highest Rate*Quantity.

        SELECT SalesId Or PurchaseId
        FROM Sales, Purchase
        WHERE Sales.UserId = Purchase.UserId
          AND Sales.CompanyId = Purchase.CompanyId
          AND Sales.Date = Current date AND Purchase.Date = Current date
          AND Sales.UserId = 1 AND Purchase.UserId = 1
          AND Sales.CompanyId = 1 AND Purchase.CompanyId = 1
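
    The intent seems to be: across both tables, take the single row with the largest Rate*Quantity for the given user, company and date. A toy illustration of that logic (a sketch in Python with made-up in-memory rows, not the real schema or a working query):

        # Hypothetical rows standing in for today's Sales and Purchase records.
        sales = [{"SaleId": 1, "Rate": 10.0, "Quantity": 5}, {"SaleId": 2, "Rate": 8.0, "Quantity": 9}]
        purchases = [{"PurchaseId": 7, "Rate": 12.0, "Quantity": 3}]

        # Tag each row with its source table, then keep the row with the highest Rate*Quantity.
        candidates = [("Sales", r) for r in sales] + [("Purchase", r) for r in purchases]
        table, best = max(candidates, key=lambda c: c[1]["Rate"] * c[1]["Quantity"])
        print(table, best)  # Sales {'SaleId': 2, ...} because 8.0 * 9 = 72 is the largest amount

    In SQL this usually maps to combining the two tables (for example with UNION ALL), filtering each side on the date, user and company, and ordering by Rate*Quantity descending to take the top row, rather than joining Sales to Purchase directly.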

    Read the article

  • Solving Big Problems with Oracle R Enterprise, Part II

    - by dbayard
    Part II – Solving Big Problems with Oracle R Enterprise

    In the first post in this series (see https://blogs.oracle.com/R/entry/solving_big_problems_with_oracle), we showed how you can use R to perform historical rate of return calculations against investment data sourced from a spreadsheet. We demonstrated the calculations against sample data for a small set of accounts. While this worked fine, in the real world the problem is much bigger because the amount of data is much bigger: so much bigger that our approach in the previous post won't scale to meet the real-world needs.

    From our previous post, here are the challenges we need to conquer:

    - The actual data that needs to be used lives in a database, not in a spreadsheet
    - The actual data is much, much bigger: too big to fit into the normal R memory space and too big to want to move across the network
    - The overall process needs to run fast, much faster than a single processor
    - The actual data needs to be kept secured, another reason not to move it from the database and across the network
    - The process of calculating the IRR needs to be integrated with other database ETL activities, so that IRRs can be calculated as part of the data warehouse refresh processes

    In this post, we will show how we moved from the sample data environment to working with full-scale data. This post is based on actual work we did for a financial services customer during a recent proof-of-concept.

    Getting started with the Database

    At this point, we have some sample data and our IRR function. We were at a similar point in our customer proof-of-concept exercise: we had sample data but we did not have the full customer data yet, so our database was empty. But this was easily rectified by leveraging the transparency features of Oracle R Enterprise (see https://blogs.oracle.com/R/entry/analyzing_big_data_using_the). The following code shows how we took our sample data SimpleMWRRData and easily turned it into a new Oracle database table called IRR_DATA via ore.create(). The code also shows how we can access the database table IRR_DATA as if it were a normal R data.frame named IRR_DATA.

    If we go to sql*plus, we can also check out our new IRR_DATA table.

    At this point, we now have our sample data loaded in the database as a normal Oracle table called IRR_DATA, so we proceeded to test our R function against database data. As our first test, we retrieved the data for a single account from the IRR_DATA table, pulled it into local R memory, and then called our IRR function. This worked. No SQL coding required!

    Going from Crawling to Walking

    Now that we have shown our R code working with database-resident data for a single account, we wanted to experiment with doing this for multiple accounts. In other words, we wanted to implement the split-apply-combine technique we discussed in the first post in this series. Fortunately, Oracle R Enterprise provides a very scalable way to do this with a function called ore.groupApply(). You can read more about ore.groupApply() here: https://blogs.oracle.com/R/entry/analyzing_big_data_using_the1

    Here is an example of how we ask ORE to take our IRR_DATA table in the database, split it by the ACCOUNT column, apply a function that calls our SimpleMWRR() calculation, and then combine the results. (If you are following along at home, be sure to have installed our myIRR package on your database server via "R CMD INSTALL myIRR".)
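
    For readers who have not seen split-apply-combine before, here is a generic, stand-alone illustration of the pattern (this is plain Python/pandas, not ORE code; the toy data and the simplified per-account return calculation are invented for the example):

        import pandas as pd

        # Toy cash flows for two accounts; negative values are money invested.
        irr_data = pd.DataFrame({
            "ACCOUNT":   ["A", "A", "B", "B", "B"],
            "CASH_FLOW": [-1000.0, 1100.0, -500.0, 0.0, 560.0],
        })

        def simple_return(group):
            # Stand-in for the real SimpleMWRR() calculation: net gain over invested amount.
            invested = -group.loc[group["CASH_FLOW"] < 0, "CASH_FLOW"].sum()
            return group["CASH_FLOW"].sum() / invested

        # "Split" by ACCOUNT, "apply" the function per group, "combine" the per-account results.
        results = irr_data.groupby("ACCOUNT").apply(simple_return)
        print(results)  # A -> 0.10, B -> 0.12

    ore.groupApply() follows the same split/apply/combine shape, except that, as described below, the splitting and the per-group function calls happen inside the database rather than in the client R session.
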
    The interesting thing about ore.groupApply is that the calculation is not actually performed in the desktop R environment from which I am running it. What actually happens is that ore.groupApply uses the Oracle database to perform the work. The Oracle database is what actually splits the IRR_DATA table by ACCOUNT. The database then takes the data for each account and sends it to an embedded R engine running on the database server to apply our R function, and finally combines all the individual results from the calls to the R function.

    This is significant because the embedded R engine only needs to deal with the data for a single account at a time. Regardless of whether we have 20 accounts or 1 million accounts or more, the R engine that performs the calculation does not care. Given that normal R has a finite amount of memory to hold data, the ore.groupApply approach overcomes the R memory scalability problem, since we only need to fit the data from a single account in R memory (not all of the data for all of the accounts).

    Additionally, the IRR_DATA does not need to be sent from the database to my desktop R program. Even though I am invoking ore.groupApply from my desktop R program, because the actual SimpleMWRR calculation is run by the embedded R engine on the database server, the IRR_DATA never leaves the database server. This is both a performance benefit, because network transmission of large amounts of data takes time, and a security benefit, because it is harder to protect private data once you start shipping it around your intranet. Another benefit, which we will discuss in a few paragraphs, is the ability to leverage Oracle database parallelism to run these calculations for dozens of accounts at once.

    From Walking to Running

    ore.groupApply is rather nice, but it still has the drawback that I run it from a desktop R instance. This is not ideal for integrating into typical operational processes like nightly data warehouse refreshes or monthly statement generation. But this is not an issue for ORE: Oracle R Enterprise lets us run this from the database using regular SQL, which is easily integrated into standard operations. That is extremely exciting, and it is the way we actually did these calculations in the customer proof.

    Oracle R Enterprise provides a SQL equivalent to ore.groupApply, which it refers to as "rqGroupEval". To use rqGroupEval via SQL, there is a bit of simple setup needed. Basically, the Oracle Database needs to know the structure of the input table and the grouping column, which we are able to define using the database's pipelined table function mechanisms. Here is the setup script:

    At this point, our initial setup of rqGroupEval is done for the IRR_DATA table. The next step is to define our R function to the database. We do that via a call to ORE's rqScriptCreate.

    Now we can test it. The SQL you use to run rqGroupEval uses the Oracle database pipelined table function syntax. The first argument to irr_dataGroupEval is a cursor defining our input; you can add additional where clauses and subqueries to this cursor as appropriate. The second argument is any additional inputs to the R function. The third argument is the text of a dummy select statement, which is used by the database to identify the columns and datatypes to expect the R function to return. The fourth argument is the column of the input table to split/group by.
    The final argument is the name of the R function as you defined it when you called rqScriptCreate().

    The Real-World Results

    In our real customer proof-of-concept, we had more sophisticated calculation requirements than shown in this simplified blog example. For instance, we had to perform the rate of return calculations for 5 separate time periods, so the R code was enhanced to do so. In addition, some accounts needed a time-weighted rate of return to be calculated, so we extended our approach and added an R function to do that. And finally, there were also a few more real-world data irregularities that we needed to account for, so we added logic to our R functions to deal with those exceptions.

    For the full-scale customer test, we loaded the customer data onto a half-rack Exadata X2-2 Database Machine. As our half rack had 48 physical cores (and 96 threads if you consider hyperthreading), we wanted to take advantage of that CPU horsepower to speed up our calculations. To do so with ORE, it is as simple as leveraging the Oracle Database Parallel Query features. Let's look at the SQL used in the customer proof:

    Notice that we use a parallel hint on the cursor that is the input to our rqGroupEval function. That is all we need to do to enable Oracle to use parallel R engines.

    Here are a few screenshots of what this SQL looked like in the Real-Time SQL Monitor when we ran it during the proof of concept. From the screenshots you can notice a few things (numbers 1 through 5 below correspond with the highlighted numbers on the images; you may need to right-click on the images and view them full-screen to see everything):

    - The SQL completed in 110 seconds (1.8 minutes)
    - We calculated rates of return for 5 time periods for each of 911k accounts (the number of actual rows returned by the IRRSTAGEGROUPEVAL operation)
    - We accessed 103m rows of detailed cash flow/market value data (the number of actual rows returned by the IRR_STAGE2 operation)
    - We ran with 72 degrees of parallelism spread across 4 database servers
    - Most of our 110 seconds was spent in the "External Procedure call" event
    - On average, we performed 8,200 executions of our R function per second (911k accounts / 110s)
    - On average, each execution was passed 110 rows of data (103m detail rows / 911k accounts)
    - On average, we did 41,000 single time period rate of return calculations per second (each of the 8,200 executions of our R function did rate of return calculations for 5 time periods)
    - On average, we processed over 900,000 rows of database data in R per second (103m detail rows / 110s)

    R + Oracle R Enterprise: Best of R + Best of Oracle Database

    This blog post series started by describing a real customer problem: how to perform a lot of calculations on a lot of data in a short period of time. While standard R proved to be a very good fit for writing the necessary calculations, the challenge of working with a lot of data in a short period of time remained. This blog post series showed how Oracle R Enterprise enables R to be used in conjunction with the Oracle Database to overcome the data volume and performance issues (as well as simplifying the operations and security issues). It also showed that we could calculate 5 time periods of rates of return for almost a million individual accounts in less than 2 minutes.

    In a future post, we will take the same R function and show how the Oracle R Connector for Hadoop can be used in the Hadoop world. In that next post, instead of having our data in an Oracle database, our data will live in Hadoop, and we will show how to use the Oracle R Connector for Hadoop and the other Oracle Big Data Connectors to move data between Hadoop, R, and the Oracle Database easily.

    Read the article

  • General monitoring for SQL Server Analysis Services using Performance Monitor

    - by Testas
    A recent customer engagement required the setup of a monitoring solution for SSAS. Due to the time restrictions placed upon this, the native Windows Performance Monitor (Perfmon) and SQL Server Profiler monitoring tools were used, as using a third-party tool would have meant the customer providing an additional monitoring server that was not available.

    I wanted to outline the performance monitoring counters that were used to monitor the system on which SSAS was running. Due to the slow query performance that was occurring during certain scenarios, Perfmon was used to establish whether any pressure was being placed on the disk, CPU or memory subsystem when concurrent connections access the same query, and Profiler was used to pinpoint how the query was being managed within SSAS (Profiler I will leave for another blog).

    This guide is not designed to provide a definitive list of what should be used when monitoring SSAS; different situations may require the addition or removal of counters as presented by the situation. However, I hope that it serves as a good basis for starting your monitoring of SSAS. I would also like to acknowledge Chris Webb's awesome chapters from "Expert Cube Development" that also helped shape my monitoring strategy: http://cwebbbi.spaces.live.com/blog/cns!7B84B0F2C239489A!6657.entry

    Simulating Connections

    To simulate the additional connections to the SSAS server whilst monitoring, I used ascmd to simulate multiple connections to the typical and worst-performing queries that were identified by the customer. A similar script can be downloaded from CodePlex at http://www.codeplex.com/SQLSrvAnalysisSrvcs (file name: ASCMD_StressTestingScripts.zip).

    Performance Monitor

    Within Performance Monitor, a counter log was created that contained the list of counters below. The important point to note when running the counter log is that the RUN AS property within the counter log properties should be changed to an account that has rights to the SSAS instance when monitoring the MSAS counters. Failure to do so means that the counter log runs under the system account; no errors or warnings are given while running the counter log, and it is not until you need to view the MSAS counters that you find they are not displayed when run under the default account that has no rights to SSAS. If your connection simulation takes hours, this could prove quite frustrating if not done beforehand.

    The counters used (listed as Object \ Counter (Instance), followed by the justification):

    - System \ Processor Queue Length (N/A): Indicates how many threads are waiting for execution against the processor. If this counter is consistently higher than around 5 when processor utilization approaches 100%, then this is a good indication that there is more work (active threads) available (ready for execution) than the machine's processors are able to handle.
    - System \ Context Switches/sec (N/A): Measures how frequently the processor has to switch from user- to kernel-mode to handle a request from a thread running in user mode. The heavier the workload running on your machine, the higher this counter will generally be, but over the long term the value of this counter should remain fairly constant. If this counter suddenly starts increasing, however, it may be an indication of a malfunctioning device, especially if the Processor\Interrupts/sec\(_Total) counter on your machine shows a similar unexplained increase.
    - Process \ % Processor Time (sqlservr): Definitely should be used if Processor\% Processor Time\(_Total) is maxing out at 100%, to assess the effect of the SQL Server process on the processor.
    - Process \ % Processor Time (msmdsrv): Definitely should be used if Processor\% Processor Time\(_Total) is maxing out at 100%, to assess the effect of the Analysis Services (msmdsrv) process on the processor.
    - Process \ Working Set (sqlservr): If the Memory\Available MBytes counter is decreasing, this counter can be run to indicate whether the process is consuming larger and larger amounts of RAM. Process(instance)\Working Set measures the size of the working set for each process, which indicates the number of allocated pages the process can address without generating a page fault.
    - Process \ Working Set (msmdsrv): As above, for the Analysis Services process.
    - Processor \ % Processor Time (_Total and individual cores): Measures the total utilization of your processor by all running processes. If multi-processor, be mindful that only an average is provided.
    - Processor \ % Privileged Time (_Total): To see how the OS is handling basic IO requests. If kernel-mode utilization is high, your machine is likely underpowered, as it is too busy handling basic OS housekeeping functions to be able to effectively run other applications.
    - Processor \ % User Time (_Total): To see how applications are behaving from a processor perspective; a high percentage utilisation indicates that the server is dealing with too many applications and may require increasing the hardware or scaling out.
    - Processor \ Interrupts/sec (_Total): The average rate, in incidents per second, at which the processor received and serviced hardware interrupts. Should be consistent over time, but a sudden unexplained increase could indicate a device malfunction, which can be confirmed using the System\Context Switches/sec counter.
    - Memory \ Pages/sec (N/A): Indicates the rate at which pages are read from or written to disk to resolve hard page faults. This counter is a primary indicator of the kinds of faults that cause system-wide delays, and is the primary counter to watch for an indication of possible insufficient RAM to meet your server's needs. A good idea here is to configure a Perfmon alert that triggers when the number of pages per second exceeds 50 per paging disk on your system. You may also want to look at the configuration of the page file on the server.
    - Memory \ Available MBytes (N/A): The amount of physical memory available to processes running on the computer. If this counter is greater than 10% of the actual RAM in your machine then you probably have more than enough RAM. Monitor it regularly to see if any downward trend develops, and set an alert to trigger if it drops below 2% of the installed RAM.
    - Physical Disk \ Disk Transfers/sec (for each physical disk): If it goes above 10 disk I/Os per second then you have poor response time for your disk.
    - Physical Disk \ % Idle Time (_Total): If Disk Transfers/sec is above 25 disk I/Os per second, use this counter, which measures the percentage of time that your hard disk is idle during the measurement interval; if you see this counter fall below 20% then you have likely got read/write requests queuing up for a disk which is unable to service them in a timely fashion.
    - Physical Disk \ Disk Queue Length (for the OLAP and SQL physical disks): A value that is consistently less than 2 means that the disk system is handling the IO requests against the physical disk.
    - Network Interface \ Bytes Total/sec (for the NIC): Should be monitored over a period of time to see if there is an increase/decrease in network utilisation.
    - Network Interface \ Current Bandwidth (for the NIC): An estimate of the current bandwidth of the network interface in bits per second (bps).
    - MSAS 2005: Memory \ Memory Limit High KB (N/A): Shows (as a percentage) the high memory limit configured for SSAS in C:\Program Files\Microsoft SQL Server\MSAS10.MSSQLSERVER\OLAP\Config\msmdsrv.ini.
    - MSAS 2005: Memory \ Memory Limit Low KB (N/A): Shows (as a percentage) the low memory limit configured for SSAS in C:\Program Files\Microsoft SQL Server\MSAS10.MSSQLSERVER\OLAP\Config\msmdsrv.ini.
    - MSAS 2005: Memory \ Memory Usage KB (N/A): Displays the memory usage of the server process.
    - MSAS 2005: Memory \ File Store KB (N/A): Displays the amount of memory that is reserved for the cache. Note that if the total memory limit in msmdsrv.ini is set to 0, no memory is reserved for the cache.
    - MSAS 2005: Storage Engine Query \ Queries from Cache Direct / sec (N/A): Displays the rate of queries answered directly from the cache.
    - MSAS 2005: Storage Engine Query \ Queries from Cache Filtered / sec (N/A): Displays the rate of queries answered by filtering an existing cache entry.
    - MSAS 2005: Storage Engine Query \ Queries from File / sec (N/A): Displays the rate of queries answered from files.
    - MSAS 2005: Storage Engine Query \ Average time/query (N/A): Displays the average time of a query.
    - MSAS 2005: Connection \ Current connections (N/A): Displays the number of connections against the SSAS instance.
    - MSAS 2005: Connection \ Requests/sec (N/A): Displays the rate of query requests per second.
    - MSAS 2005: Locks \ Current Lock Waits (N/A): Displays the number of connections waiting on a lock.
    - MSAS 2005: Threads \ Query Pool Job Queue Length (N/A): The number of queries in the job queue.
    - MSAS 2005: Proc Aggregations \ Temp file bytes written/sec (N/A): Shows the number of bytes of data processed in a temporary file.
    - MSAS 2005: Proc Aggregations \ Temp file rows written/sec (N/A): Shows the number of rows of data processed in a temporary file.

    Read the article

  • Using PHP version 5.2 or 5.3 for commercial products?

    - by Ash
    I'm doing research on what version of PHP to use when creating commercial scripts that will be sold to the public. Although the available stats aren't great, PHP 5.3 shows an 18.5% adoption rate. I'd like to use Symfony to create these scripts, and it requires 5.3.2, which shows an even lower adoption rate (roughly 13% of that 18.5% use less than 5.3.2). Would I be risking much by jumping straight to PHP 5.3.2+, or should I ignore the stats and plough ahead?

    Read the article

  • Using PHP version 5.2 or 5.3 for end-user commercial products?

    - by Ash
    I'm doing research on what version of PHP to use when creating commercial scripts that will be sold to end users. Although the available stats aren't great, PHP 5.3 shows an 18.5% adoption rate. I'd like to use Symfony to create these scripts, and it requires 5.3.2, which shows an even lower adoption rate (roughly 13% of that 18.5% use less than 5.3.2). Would I be risking much by jumping straight to PHP 5.3.2+, or should I ignore the stats and plough ahead?

    Read the article

  • Registration Open for 2015 Oracle Value Chain Summit

    - by Terri Hiskey
    Registration has opened for the Oracle Value Chain Summit, taking place January 26-28 in San Jose, California. Register now and take advantage of the Super Saver rate of only $495 (a $400 savings from the regular registration rate), good through September 26. Click here to register today, or to check out further information about the Summit. Keynote speakers at the 2015 event include former 49ers quarterback Steve Young and Andrew Winston, a leading green business expert and author of the best-selling Green to Gold.

    Read the article

  • Alfa AWUS036H USB wireless adapter not recognized

    - by GFiasco
    The Alfa AWUS036H USB wireless adapter will not be recognized by my netbook (Ubuntu 14.04, Asus X201E). As I understand it, the drivers should already be built into this version of Linux, but I tried a make/make install of the latest Realtek drivers (as mentioned on How do I install drivers for the Alfa AWUS036H USB wireless adapter?) and it didn't work. I then followed the advice of this thread (ALFA AWUS036NH driver) and did a make/make install of the most up-to-date backport of the drivers, but that didn't work either. At this point I tried a series of commands from this thread (http://ubuntuforums.org/showthread.php?t=2187780) in an attempt to identify the problem, but at no point could I get the laptop to recognize the USB adapter. I have also troubleshot the USB cable itself, tried both the USB 2.0 and 3.0 ports on the laptop, have never received an error message regarding a need to update the firmware, and have seemingly successfully installed all manner of variations of Realtek drivers which were supposed to make the adapter work. (I also tried to delete/clean up after each install, in the hope I wasn't making things worse.) Not sure what I should do next. Please let me know if I need to post any more information. Thanks very much for your help.

    EDIT: Before inserting the Alfa USB adapter:

        :~$ lsusb
        Bus 002 Device 002: ID 8087:0024 Intel Corp. Integrated Rate Matching Hub
        Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
        Bus 001 Device 004: ID 0bda:570c Realtek Semiconductor Corp.
        Bus 001 Device 026: ID 13d3:3393 IMC Networks
        Bus 001 Device 002: ID 8087:0024 Intel Corp. Integrated Rate Matching Hub
        Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
        Bus 004 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
        Bus 003 Device 002: ID 03f0:3112 Hewlett-Packard
        Bus 003 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub

    After inserting the Alfa USB adapter (USB 3.0 port, no change):

        :~$ lsusb
        Bus 002 Device 002: ID 8087:0024 Intel Corp. Integrated Rate Matching Hub
        Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
        Bus 001 Device 004: ID 0bda:570c Realtek Semiconductor Corp.
        Bus 001 Device 026: ID 13d3:3393 IMC Networks
        Bus 001 Device 002: ID 8087:0024 Intel Corp. Integrated Rate Matching Hub
        Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
        Bus 004 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
        Bus 003 Device 002: ID 03f0:3112 Hewlett-Packard
        Bus 003 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub

    I ran tail -f /var/log/syslog, inserted the device, and saw no recognition (the last entry is dated 16:17:01, so an hour ago). Going to check on an Ubuntu 14.04 laptop and a Windows XP desktop; I'll update after. Thanks for your help to this point.

    Read the article

  • PASS Summit 2010 BI Workshop Feedbacks

    - by Davide Mauri
    As many other speakers have already done, I'd like to share with the SQL community the feedback from my PASS Summit 2010 workshop. For those who were not there, my workshop was "BI From A-Z", and its main objective was to introduce people to the BI world not only from a technical point of view but also to insist a lot on the methodological and "engineered" approach.

    The will to put more engineering into IT (and especially into the BI field) is something that has been growing stronger and stronger in me every day over the last 5 years, since I simply envy the fact that Airbus, Fincantieri, BMW (just to name a few) can create very complex machines "just" by putting people together and giving them some rules to follow. (Of course this is an oversimplification, but I think you get what I mean.) The key point of engineering is that, after having defined the project blueprint, you have the possibility to give a huge number of people the rules to follow, the correct tools to implement the rules easily and semi-automatically, and a way to measure the quality of the results. Could this be done in IT? Very big question, so my scope is for now limited to BI.

    So that's the main point of my workshop: an entry-level approach to BI (the level was 200) in order to allow attendees to know the basics, to understand what tools they should use for which purpose and, above all, a set of rules and tools that make a BI solution scalable in terms of the people working on it, while still maintaining a very good quality. All done not by focusing only on the practice but by explaining the theory behind it, to see how it can help *a lot* to build a correct solution regardless of the technology used to implement it. The idea is to reach a point where more than 70% of the work done to create a BI solution can be reused even if the technology changes. This is a very demanding challenge nowadays, with the coming of Denali and its column-aligned storage and the shiny new DAX language.

    As you may understand, I was looking forward to the feedback, since you may have noticed that there's a lot of "architectural" stuff in IT but really nothing on "engineering". So how the session would be perceived by the attendees was really unknown to me. The feedback could also give a good indication of whether the need for more "engineering" is something only I feel, or something more broadly shared.

    I'm very happy to be able to say that the overall score of 4.75 put my workshop in the top 20 sessions (out of nearly 200 sessions)! Here are the detailed evaluations (average score, followed by the number of responses for each rating):

    How would you rate the usefulness of the information presented in your day-to-day environment? 4.75 (3: 1, 4: 12, 5: 42)
    How would you rate the Speaker's presentation skills? 4.80 (3: 1, 4: 9, 5: 45)
    How would you rate the Speaker's knowledge of the subject? 4.95 (4: 3, 5: 52)
    How would you rate the accuracy of the session title, description and experience level to the actual session? 4.75 (3: 2, 4: 10, 5: 43)
    How would you rate the amount of time allocated to cover the topic/session? 4.44 (3: 7, 4: 17, 5: 31)
    How would you rate the quality of the presentation materials? 4.62 (4: 21, 5: 34)

    The comments were all very positive.
    Many of them asked for more time on the subject (or to shorten the very last topics). I'll treasure these comments and will review the content accordingly. We'll organize a two-day class on this topic, where more examples will be shown and some arguments will be explained more deeply.

    I'd just like to answer a comment that asked how much of what I showed is "universally applicable". I can tell you that all of our BI projects follow these rules, and they've been applied to different markets (insurance, fashion, GDO) with different people and different teams, and they allowed us to be "adaptive" towards the customer. The more the rules are well defined, and the more there are tools that support their implementation, the easier it is to add new people to the project and to add or change solution features.

    Think of a car. How come almost any mechanic can help you fix a problem? Because they know what to expect. Because there are rules that allow them to identify the problem without having to discover each time how the car has been built. And this is of course also true for car upgrades/improvements.

    Last but not least: thanks a lot to everyone for coming!
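
    As a quick sanity check on the scores above (a small sketch added here for illustration; it is not part of the original post), each average is simply the response counts weighted by their ratings, and every question had the same 55 respondents:

        # Weighted averages recomputed from the response counts quoted above.
        questions = {
            "usefulness":        {3: 1, 4: 12, 5: 42},
            "presentation":      {3: 1, 4: 9,  5: 45},
            "knowledge":         {4: 3, 5: 52},
            "accuracy":          {3: 2, 4: 10, 5: 43},
            "time allocated":    {3: 7, 4: 17, 5: 31},
            "materials quality": {4: 21, 5: 34},
        }

        for name, counts in questions.items():
            total = sum(counts.values())  # 55 respondents for every question
            avg = sum(score * n for score, n in counts.items()) / total
            print(f"{name}: {avg:.2f} over {total} responses")
        # usefulness: 4.75, presentation: 4.80, knowledge: 4.95,
        # accuracy: 4.75, time allocated: 4.44, materials quality: 4.62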

    Read the article

  • Fingerprint driver issue on HP Pavilion dm4

    - by user108226
    I need a fingerprint scan driver for the Pavilion dm4.

        Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
        Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
        Bus 001 Device 002: ID 8087:0024 Intel Corp. Integrated Rate Matching Hub
        Bus 002 Device 002: ID 8087:0024 Intel Corp. Integrated Rate Matching Hub
        Bus 001 Device 003: ID 0a5c:21e3 Broadcom Corp.
        Bus 001 Device 004: ID 138a:0018 Validity Sensors, Inc.
        Bus 002 Device 003: ID 10f1:1a2e Importek

    Read the article

  • How Best to Use Your Forum For SEO

    Forums are no exception to this rule, by any means. When you create a forum, you obviously want to attract users who will get the most from your website and actually utilize what you have to offer. It does no good for you to market to the masses and find a 50% success rate when you can market to your specific niche and find a 75% or even 85% success rate.

    Read the article

  • Should I charge for travel time as a contractor?

    - by Keith
    Here's a question for all my fellow contractors - I'm paid quite handsomely for my normal contracted hours (any overtime is billed at the same rate) but do you think it fair to bill for travel time to the other end of the country (regional office) when this takes place outside of the normal working day (or overlapping into the evening) as well as actual time holed up in a hotel room when you get there, ready for a normal working day the next day, along with the return journey? Petrol is claimed normally (nominal rate) and hotel is covered by the company I contract for.

    Read the article

  • What if globals make sense?

    - by Greg
    I've got a value that many objects need. For example, a financial application with different investments as objects, and most of them need the current interest rate. I was hoping to encapsulate my "financial environment" as an object, with the interest rate as a property. But, sibling objects that need that value can't get to it. So how do I share values among many objects without over-coupling my design? Obviously I'm thinking about this wrong.
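
    One common way out of this (a sketch of the idea, not taken from the post; the class and attribute names are invented) is to keep the shared value on a single environment object and hand that object to each investment when it is constructed, rather than reaching for a global:

        class FinancialEnvironment:
            """Single shared source of market-wide values."""
            def __init__(self, interest_rate):
                self.interest_rate = interest_rate

        class Bond:
            def __init__(self, env, face_value):
                self.env = env                # injected, not looked up globally
                self.face_value = face_value

            def yearly_interest(self):
                return self.face_value * self.env.interest_rate

        env = FinancialEnvironment(interest_rate=0.05)
        bond = Bond(env, face_value=1000.0)
        print(bond.yearly_interest())         # 50.0
        env.interest_rate = 0.06              # every object holding env sees the new rate
        print(bond.yearly_interest())         # 60.0

    Because every object holds a reference to the same environment, a rate change is visible everywhere without the coupling of a module-level global.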

    Read the article

  • WoW runs faster on GNOME Shell compared to Unity

    - by João Vinholi
    I have been trying to run WoW on Ubuntu 12.04. When I run it on Unity, the frame rate is very low and it is impossible to play. However, when I launch it on GNOME Shell, for some reason, the frame rate gets very high and the playing experience is very comfortable. The problem is that I prefer running Unity instead of GNOME Shell, but I like to play WoW too. Is there a way to run WoW on Unity with no lag?

    Read the article

  • WoW is much faster on GNOME Shell compared to Unity

    - by João Vinholi
    I have been trying to run WoW on Ubuntu 12.04. When I run it on Unity, the frame rate is very low and it is impossible to play. However, when I launch it on GNOME Shell, for some reason, the frame rate gets very high and the playing experience is very comfortable. The problem is that I prefer running Unity instead of GNOME Shell, but I like to play WoW too. Is there a way to run WoW on Unity with no lag?

    Read the article

  • How to find out console equivalents of Ubuntu System Settings GUI?

    - by user4514
    How do I, in general, find out what the very nice System Settings GUI in Ubuntu (say 12.04) does, and how do I replicate the changes on the command line? Many people ask questions like "how to change the keyboard rate using the command line", and often the answers do not help and are hard to find. What is the easiest way to find out what the GUI is actually changing (for various types of settings, i.e. keyboard layout, rate, mouse, network, ...)?

    Read the article

  • Oracle Cloud Services Referral Program Now Available

    - by Cinzia Mascanzoni
    Partners can now take advantage of the five different Cloud Services programs. The Cloud Referral Partner program allows partners to get rewarded for referring Oracle Cloud opportunities to Oracle. The Cloud Services Partner Referral program is an extension of Oracle's existing referral program but offers a standard 10% referral rate paid on guaranteed revenue with a $50K cap. For a limited time, Oracle is offering a 20% referral rate for [offering still being finalized]. Contact your partner manager for more details and click here for more information.

    Read the article

  • Should I charge for travel expenses as a contractor?

    - by Keith
    Here's a question for all my fellow contractors - I'm paid quite handsomely for my normal contracted hours (any overtime is billed at the same rate) but do you think it fair to bill for travel time to the other end of the country (regional office) when this takes place outside of the normal working day (or overlapping into the evening) as well as actual time holed up in a hotel room when you get there, ready for a normal working day the next day, along with the return journey? Petrol is claimed normally (nominal rate) and hotel is covered by the company I contract for.

    Read the article

  • Configure Postfix to Port other than 25

    - by bwheeler96
    I've done quite a bit of googling on how to reconfigure postfix to work on a different port, but I still can't find the line(s) people keep talking about in my master.cf. I'm using OS X Mountain Lion, and my ISP blocks traffic both ways on port 25. People have said to look for a line that says:

        smtp      inet  n       -       n       -       -       smtpd

    I can't find it. This is (what I believe to be) the unmodified file:

        # ==== Begin auto-generated section ========================================
        # This section of the master.cf file is auto-generated by the Server Admin
        # Mail backend plugin whenever mails settings are modified.
        smtp       inet  n - n - 1 postscreen
        smtpd      pass  - - n - - smtpd
        dnsblog    unix  - - n - 0 dnsblog
        tlsproxy   unix  - - n - 0 tlsproxy
        submission inet  n - n - - smtpd
          -o smtpd_tls_security_level=encrypt
        smtp       unix  - - n - - smtp
        # === End auto-generated section ===========================================
        # Modern SMTP clients communicate securely over port 25 using the STARTTLS command.
        # Some older clients, such as Outlook 2000 and its predecessors, do not properly
        # support this command and instead assume a preconfigured secure connection
        # on port 465. This was sometimes called "smtps", but such usage was never
        # approved by the IANA and therefore conflicts with another, legitimate assignment.
        # For more details about managing secure SMTP connections with postfix, please see:
        # http://www.postfix.org/TLS_README.html
        # To read more about configuring secure connections with Outlook 2000, please read:
        # http://support.microsoft.com/default.aspx?scid=kb;en-us;Q307772
        # Apple does not support the use of port 465 for this purpose.
        # After determining that connecting clients do require this behavior, you may choose
        # to manually enable support for these older clients by uncommenting the following
        # four lines.
        #465       inet  n - n - - smtpd
        #  -o smtpd_tls_wrappermode=yes
        #  -o smtpd_sasl_auth_enable=yes
        #  -o smtpd_client_restrictions=permit_sasl_authenticated,reject
        #  -o milter_macro_daemon_name=ORIGINATING
        #628       inet  n - n - - smtp
        pickup     fifo  n - n 60 1 pickup
        cleanup    unix  n - n - 0 cleanup
        qmgr       fifo  n - n 300 1 qmgr
        #qmgr      fifo  n - n 300 1 oqmgr
        tlsmgr     unix  - - n 1000? 1 tlsmgr
        rewrite    unix  - - n - - trivial-rewrite
        bounce     unix  - - n - 0 bounce
        defer      unix  - - n - 0 bounce
        trace      unix  - - n - 0 bounce
        verify     unix  - - n - 1 verify
        sacl-cache unix  - - n - 1 sacl-cache
        flush      unix  n - n 1000? 0 flush
        proxymap   unix  - - n - - proxymap
        proxywrite unix  - - n - 1 proxymap
        # When relaying mail as backup MX, disable fallback_relay to avoid MX loops
        relay      unix  - - n - - smtp
          -o smtp_fallback_relay=
        #  -o smtp_helo_timeout=5 -o smtp_connect_timeout=5
        showq      unix  n - n - - showq
        error      unix  - - n - - error
        retry      unix  - - n - - error
        discard    unix  - - n - - discard
        local      unix  - n n - - local
        virtual    unix  - n n - - virtual
        lmtp       unix  - - n - - lmtp
        anvil      unix  - - n - 1 anvil
        scache     unix  - - n - 1 scache
        #
        # ====================================================================
        # Interfaces to non-Postfix software. Be sure to examine the manual
        # pages of the non-Postfix software to find out what options it wants.

    Read the article

  • Bouncing a ball off a surface

    - by Sagekilla
    Hi all, I'm currently in the middle of writing a game like Breakout, and I was wondering how I could properly bounce a ball off a surface. I went with the naive way of rotating the velocity by 180 degrees, which was: [vx, vy] -> [-vy, vx]. Which (unsurprisingly) didn't work so well. If I know the position and velocity of the ball, as well as the point the ball would hit (but is going to bounce off of instead), how can I bounce it off that point? I don't need any language-specific code; if anyone could provide a small mathematical formula on how to properly do this, that would work fine for me. I also need this to work with integer positions and velocities (I can't use floating point anywhere). Thanks!
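
    For reference, the usual way to bounce a velocity off a flat surface is to reflect it about the surface normal, v' = v - 2(v·n)n. With axis-aligned walls and paddles (the typical Breakout case) the unit normal is one of (±1, 0) or (0, ±1), so this collapses to flipping one component and stays entirely in integer arithmetic. A small sketch (not from the post):

        def reflect(vx, vy, nx, ny):
            """Reflect integer velocity (vx, vy) off a surface with unit normal (nx, ny).

            For axis-aligned surfaces the normal is (0, 1), (0, -1), (1, 0) or (-1, 0),
            so everything stays in integers: v' = v - 2*(v.n)*n.
            """
            dot = vx * nx + vy * ny
            return vx - 2 * dot * nx, vy - 2 * dot * ny

        # Ball moving down-right hits the top of the paddle (normal pointing up, i.e. (0, -1)
        # when y grows downward, as in most 2D screen coordinates): only vy flips.
        print(reflect(3, 4, 0, -1))   # -> (3, -4)
        # Hitting a side wall (normal (-1, 0) or (1, 0)): only vx flips.
        print(reflect(3, 4, -1, 0))   # -> (-3, 4)

    For surfaces that are not axis-aligned the same formula applies, but the normal has to be normalised, which is where floating point (or fixed-point) arithmetic usually comes in.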

    Read the article

  • openvpn TCP/UDP slow SSH/SMB performance

    - by Petr Latal
    I have a question about the strange behavior of my OpenVPN configuration on Debian Lenny. I have 2 server configs (one proto tcp-server based and one proto udp based). The ISP bandwidth is 7Mbit/7Mbit. When I use proto tcp-server, my download rate from the server is fine, around 6.4 Mbit/s, but the upload rate is about 3 Mbit/s. When I use proto udp, my download rate is around 3 Mbit/s and the upload rate around 6.4 Mbit/s. I tried adjusting the MTU, mssfix and cipher settings on the server and client configs to synchronize the rates, but without success.

    Here is the TCP based SERVER config:

        mode server
        tls-server
        port 1194
        proto tcp-server
        dev tap0
        ifconfig 11.10.15.1 255.255.255.0
        ifconfig-pool 11.10.15.2 11.10.15.20 255.255.255.0
        push "route 192.168.1.0 255.255.255.0"
        push "dhcp-option DNS 192.168.1.200"
        push "route-gateway 11.10.15.1"
        push "dhcp-option WINS 192.168.1.200"
        route-up /etc/openvpn/routeup.sh
        duplicate-cn
        ca /etc/openvpn/ca.crt
        cert /etc/openvpn/server.crt
        key /etc/openvpn/server.key
        dh /etc/openvpn/dh2048.pem
        log-append /var/log/openvpn.log
        status /var/run/vpn.status 10
        user nobody
        group nogroup
        keepalive 10 120
        comp-lzo
        verb 3
        script-security 3
        plugin /usr/lib/openvpn/openvpn-auth-pam.so system-auth
        persist-tun
        persist-key
        mssfix
        cipher BF-CBC

    Here is the UDP based SERVER config:

        port 1194
        proto udp
        dev tun0
        local xx.xx.xx.xx
        server 11.10.15.0 255.255.255.0
        ca /etc/openvpn/ca.crt
        cert /etc/openvpn/server.crt
        key /etc/openvpn/server.key
        dh /etc/openvpn/dh2048.pem
        log-append /var/log/openvpn.log
        status /var/run/vpn.status 10
        user nobody
        group nogroup
        keepalive 10 120
        comp-lzo
        verb 3
        duplicate-cn
        script-security 3
        plugin /usr/lib/openvpn/openvpn-auth-pam.so system-auth
        persist-tun
        persist-key
        tun-mtu 1500
        mssfix 1212
        client-to-client
        ifconfig-pool-persist ipp.txt

    Here is the TCP/UDP based Windows CLIENT config:

        remote xx.xx.xx.xx
        --socket-flags TCP_NODELAY
        tls-client
        port 1194
        proto tcp-client
        #proto udp
        dev tap
        #dev tun
        pull
        ca ca.crt
        cert latis.crt
        key latis.key
        mute 0
        comp-lzo adaptive
        verb 3
        resolv-retry infinite
        nobind
        persist-key
        auth-user-pass
        auth-nocache
        script-security 2
        mssfix
        cipher BF-CBC

    Read the article

  • ADSL throughput loss from Reed-Solomon encoding

    - by javano
    I'm reading about ADSL, starting here, and I am confused by how the Reed-Solomon encoding used for ECC limits the available transfer rate as much as it does (nearly half). This pdf on the same subject contains the following:

        A maximum of 255 sub-carriers can be used to modulate data in the downstream direction. Sub-carrier 256, the downstream Nyquist frequency, and sub-carrier 64, the downstream pilot frequency, are not available for user data, thus limiting the total number of available downstream sub-carriers to 254. Each of these 254 sub-carriers can support the modulation of 0 to 15 bits. Since the ADSL DMT data frame rate is 4000 frames per second, the maximum theoretical downstream data rate of an ADSL system is 15.24Mbps. Due to limitations in system architecture, specifically the maximum allowable Reed-Solomon codeword size (255 bytes), the maximum achievable downstream data rate is 8.16Mbps.

    How is this nearly halving the throughput? Is all that extra bandwidth the overhead of the RS encoding? 15,240,000 bps (15.24 Mbps) - 8,160,000 bps (8.16 Mbps) = 7,080,000 bps (7.08 Mbps). Where has that 7 Mbps of throughput gone?

    EDIT: I tried to read the wiki page on Reed-Solomon, but it's all crazy maths and algebra, which I don't understand. I can understand that data is split into 255-byte codewords, because that may be the max codeword size whilst still maintaining accuracy during transmission; but I don't understand why that means less data is sent?
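
    A quick back-of-the-envelope check (a sketch using only the figures quoted above, nothing from the ADSL spec itself) shows where the two headline numbers come from; the 8.16 Mbps figure works out to exactly one 255-byte codeword per DMT frame, which suggests the cap comes from how many codewords fit in a frame rather than from parity bytes alone:

        # Derived purely from the numbers quoted in the question.
        frames_per_second = 4000        # DMT data frames per second
        subcarriers = 254               # usable downstream sub-carriers
        bits_per_subcarrier = 15        # maximum bits modulated per sub-carrier

        theoretical = subcarriers * bits_per_subcarrier * frames_per_second
        print(theoretical)              # 15240000 -> 15.24 Mbps, the sub-carrier limit

        codeword_bytes = 255            # maximum Reed-Solomon codeword size
        rs_limited = codeword_bytes * 8 * frames_per_second
        print(rs_limited)               # 8160000 -> 8.16 Mbps, one full codeword per frame

        print(theoretical - rs_limited) # 7080000 -> the "missing" 7.08 Mbps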

    Read the article

  • Traffic Shaping using tc

    - by Simon
    Hi guys, I have a 1.5 Mbit/s link that I want to share with 150 users. My setup is the following: a Linux box with 3 NICs:

        eth0 - public ip
        eth1 - subnet A - 50 users (static ips)
        eth2 - subnet B - 100 users (via dhcp)

    I am using squid as a transparent proxy on port 3128. The DHCP server is using ports 67 and 68. This is the script I was creating, but I think packets are not going to the right queues:

        #!/bin/bash
        DEV=eth0
        RATE_MAIN=2048kbit
        CEIL_MAIN=2048kbit
        BURST=1b
        CBURST=1b
        RATE_DEFAULT=1024kbit
        CEIL_DEFAULT=$CEIL_MAIN
        PRIO_DEFAULT=3
        RATE_P2P=1024Kbit
        CEIL_P2P=$CEIL_MAIN
        PRIO_P2P=4
        RATE_IND=32kbit
        CEIL_IND=$CEIL_DEFAULT

        tc qdisc del dev $DEV root
        tc qdisc add dev $DEV root handle 1: htb default 30
        tc class add dev $DEV parent 1: classid 1:1 htb rate $RATE_MAIN ceil $CEIL_MAIN
        tc class add dev $DEV parent 1:1 classid 1:10 htb rate $RATE_DEFAULT ceil $CEIL_MAIN burst $BURST cburst $CBURST prio $PRIO_WEB
        ## some other sub class for p2p other traffic
        tc class add dev $DEV parent 1:1 classid 1:20 htb rate $RATE_P2P ceil $CEIL_P2P burst $BURST cburst $CBURST prio $PRIO_P2P

        $IPS_NET1=50
        $IPS_NET2=100
        let $IPS=$IPS_NET1+$IPS_NET2

        for ((i=1; i<= $IPS; i++))
        do
            let CLASSID=($i+100)
            let HANDLE=($i+100)
            tc class add dev $DEV parent 1:10 classid 1:$CLASSID htb rate $RATE_IND ceil $CEIL_IND
            tc qdisc add dev $DEV parent 1:$CLASSID handle $HANDLE: sfq perturb 10
        done

        ## Generate IP addresses ##
        IP_ADDRESSES=""

        # Subnet A
        BASE_IP=10.10.10.
        for ((i=2; i<=$IPS_NET1+1; i++))
        do
            TEMP="$BASE_IP$i"
            IP=ADDRESSES="$IP_ADDRESSES $TEMP"
        done

        # Subnet B
        BASE_IP=192.168.0.
        for ((i=2; i<=$IPS_NET2+1; i++))
        do
            TEMP="$BASE_IP$i"
            IP_ADDRESSES="$IP_ADDRESSES $TEMP"
        done

        ## FILTERS ##
        j=1
        U32="tc filter add dev $DEV protocol ip parent 1:0 prio $PRIO_DEFAULT u32"

        for NET in $IP_ADDRESSES; do
            let CLASSID=($j+100)
            $U32_DEFAULT match ip src $NET/32 flowid 1:$CLASSID
            $U32_DEFAULT match ip dst $NET/32 flowid 1:$CLASSID
            let j=j+1
        done

    Can you guys help me figure out what's wrong with it? Basically I want my classes to be 1:1 (1.5 Mbit), 1:10 (1024 Kbit), 1:20 (1024 Kbit) (200 ips, each with 32 kbit).

    Read the article
