Search Results

Search found 4689 results on 188 pages for 'no average geek'.

Page 125/188 | < Previous Page | 121 122 123 124 125 126 127 128 129 130 131 132  | Next Page >

  • Why is git-svn useful?

    - by Wes
    I have read these related questions: I'm a Subversion geek, why should I consider or not consider Mercurial or Git or any other DVCS? git for personal (one-man) projects. Overkill? ...and I understand why git is useful. What I don't understand is why tools like git-svn that allow git to integrate with svn are useful. When, for example, a team is working with svn, or any other centralised SCM, why would a member of the team opt to use git-svn? Are there any practical advantages for a developer that has to synchronize with a centralized repository?

    Read the article

  • Is there a COMPLETE tutorial for upgrading for dummies?

    - by Windwood Trader
    I have tried upgrading in the past with zero success, backing up my stuff and then futilely attempting to bring it into the new version. By "stuff" I mean my email accounts and folders, my bookmarks and web browser info, and of course my photos. In the past I have received messages that, for example, the backup files were made with version XXX and cannot be read by the new system. I need a hand-holding tutorial to go from 11.04 to 12.10. What are the actual step-by-step mechanics? Frustrated Non-Geek

    Read the article

  • Getting weather information (from a thermometer and a hygrometer)

    - by EKS
    I have decided, as an arrogant geek, to build my own home ventilation and heating system, and will try to do this as my little project. I have always been annoyed with the lack of good ventilation systems at work, so I accept that building my own is arrogant. Does anyone know of a device that reports temperature and humidity and that I can interact with using C#? I cannot get the readings from the Internet because I need the humidity of my server "room" so I can control the dehumidifier there. The same goes for temperature; the outside temperature is not that important. It would be a huge plus if the sensors had some sort of wireless access.

    Read the article

  • Rethinking Oracle Optimizer Statistics for P6 Part 2

    - by Brian Diehl
    In the previous post (Part 1), I tried to draw some key insights about the relationship between P6 and Oracle Optimizer Statistics. The first is that average cardinality has the greatest impact on query optimization and that the particular queries generated by P6 are more likely to use this average during calculations. The second is that these statistics are unlikely to change greatly over the life of the application. Ultimately, our goal is to get the best query optimization possible. Or is it?

    Stability. No application administrator wants to get the call at 9am that their application users cannot get their work done because everything is running slow. This is a possibility with a regularly scheduled nightly collection of statistics. It may not just be slow performance, but a complete loss of service because one or more queries are optimized poorly. Ideally, this should not be the case: the database optimizer should make better decisions with more up-to-date data. Better statistics may give an incremental performance benefit, but this benefit must be balanced against the potential cost of system downtime. It is stability that we ultimately desire, not absolute optimal performance. We do want the benefit of more accurate statistics and better query plans, but not at the risk of an unusable system. As a result, I've developed the following methodology for managing database statistics for the P6 database.

    1. No automatic re-gathering. Gathering statistics daily, weekly, or on any other fixed interval is unlikely to be beneficial. Quite the opposite: it is more likely to cause problems.

    2. Smart re-gathering. The time to collect statistics is when things have changed significantly. For a new installation of P6, this happens more often because the data is growing from a few rows to thousands and more. But for a mature system, the data is not changing significantly from week to week. There are times to collect statistics: new releases of the application; changes in the underlying hardware or software versions (e.g., a new Oracle RDBMS version); when additional user groups are added, since the new groups may use the software in significantly different ways; and after significant changes in the data, which may happen monthly, quarterly or yearly.

    3. Always test. If you take away one thing from this post, it should be to always have a plan to test after changing statistics. In reality, statistics can be collected as often as you like, provided there are tests in place to verify that performance is the same or better. These might be automated tests or simply a manual script of application functions.

    4. Have a way out. Never change the statistics without a way to return to the previous set. Think of the statistics as one part of the overall application code, alongside the source code of both the application and the RDBMS. It would be foolish to switch to new code without a way to get back to the previous version.

    In the final post, I will talk about the actual script I created for the P6 PMDB and a possible future direction for managing query performance.

    Read the article

  • The 2013 PASS Summit - Day 1

    - by AllenMWhite
    It's SQL Server Geek Week once again! Every year at the PASS Summit the SQL Server faithful descend on the city of choice for the annual Summit, and this year it's Charlotte, North Carolina. Once again I've been given the privilege of sitting at the bloggers table, so my laptop is on a table! So far this week it's been great seeing people I get to see just once a year. I attended Red Gate's SQL in the City event on Monday, and saw some great sessions from Grant Fritchey, Steve Jones and Nigel Sammy....(read more)

    Read the article

  • Worse is better. Is there an example?

    - by J.F. Sebastian
    Is there a widely used algorithm that has time complexity worse than another known algorithm, yet is the better choice in all practical situations (worse complexity but better otherwise)? An acceptable answer might be of the form: there are algorithms A and B with O(N**2) and O(N) time complexity respectively, but B has such a big constant that it has no advantage over A for inputs smaller than the number of atoms in the Universe.

    Example highlights from the answers: the simplex algorithm (worst-case exponential time) vs. known polynomial-time algorithms for convex optimization problems; the naive median-of-medians algorithm (worst-case O(N**2)) vs. the known O(N) algorithm; backtracking regex engines (worst-case exponential) vs. O(N) Thompson NFA based engines. All of these examples exploit the gap between worst-case and average-case scenarios. Are there examples that do not rely on the difference between the worst case and the average case?

    Related: The Rise of ``Worse is Better'' (for the purpose of this question the "Worse is Better" phrase is used in a narrower sense, namely algorithmic time complexity, than in the article). Python's Design Philosophy: "The ABC group strived for perfection. For example, they used tree-based data structure algorithms that were proven to be optimal for asymptotically large collections (but were not so great for small collections)." This example would be the answer if there were no computers capable of storing such large collections (in other words, large is not large enough in this case). The Coppersmith-Winograd algorithm for square matrix multiplication is a good example: it is the fastest known (as of 2008), but it is inferior to worse algorithms in practice. From the Wikipedia article: "It is not used in practice because it only provides an advantage for matrices so large that they cannot be processed by modern hardware (Robinson 2005)."
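
    A minimal timing sketch (my own illustration, not taken from the question or its answers) of the constant-factor effect being asked about: on small inputs the asymptotically worse O(n**2) insertion sort tends to beat O(n log n) merge sort, and that advantage does not depend on worst-case versus average-case inputs.

        # Hedged sketch: compare a simple O(n**2) sort against an O(n log n) sort
        # on a tiny input, where constant factors dominate.
        import random
        import timeit

        def insertion_sort(a):
            a = list(a)
            for i in range(1, len(a)):
                key = a[i]
                j = i - 1
                while j >= 0 and a[j] > key:
                    a[j + 1] = a[j]
                    j -= 1
                a[j + 1] = key
            return a

        def merge_sort(a):
            if len(a) <= 1:
                return list(a)
            mid = len(a) // 2
            left, right = merge_sort(a[:mid]), merge_sort(a[mid:])
            out, i, j = [], 0, 0
            while i < len(left) and j < len(right):
                if left[i] <= right[j]:
                    out.append(left[i]); i += 1
                else:
                    out.append(right[j]); j += 1
            out.extend(left[i:]); out.extend(right[j:])
            return out

        if __name__ == "__main__":
            data = [random.random() for _ in range(16)]   # deliberately small input
            for fn in (insertion_sort, merge_sort):
                t = timeit.timeit(lambda: fn(data), number=20000)
                print(f"{fn.__name__:>14}: {t:.3f}s")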

    Read the article

  • SQL Server 2008 Running trigger after Insert, Update locks original table

    - by Polity
    Hi folks, I have a serious performance problem. I have a database with (related to this problem) 2 tables. One table contains strings along with some global information. The second table contains the strings stripped down to their individual words, so each string is effectively indexed in the second table, word by word. The validity of the data in the second table is of less importance than the validity of the data in the first table. Since the first table can grow towards 1*10^6 records, and the second table, with an average of about 10 words per string, can grow towards 1*10^7 records, I use a NOLOCK hint to read the second table; this leaves me free to insert new records without locking it (expect many reads on both tables). I have a script which keeps adding and updating rows in the first table via a MERGE statement. On average, about 20 strings are merged at a time and the script runs about once every 5 seconds. On the first table, I have a trigger which is invoked on an insert or update; it takes the newly inserted or updated data and calls a stored procedure on it which makes sure the data is indexed in the second table. (This takes some significant time.) The problem is that with the trigger disabled, reading the first table takes a few ms. However, with the trigger enabled, if you are unlucky enough to read the first table while it is being updated, our webserver gives you a timeout after 10 seconds (which is way too long anyway). I can guess from this that while the trigger runs, the first table is kept (at least partially) locked until the trigger completes. What do you think? If I'm right, is there an easy way around this? Thanks in advance! Cheers, Koen

    Read the article

  • Seeking reporting or templating tool to generate large formatted PDF reports from dataset

    - by Mr. Tacos
    Say I have some data in MySQL or a big ole CSV file. I also have a report. It's a PDF, call it 100 pages long. I need to generate variations on this PDF for slices of the data. More specific example: I have a CSV file with each StackOverflow user in a row and each column contains various statistics about that user. I have a report called "Your StackOverflow Performance". It's got lots of text, always the same, but each section contains something like "You vs. the Average StackOverflow Poster on this metric". I want a table there with the average data, which is the same in every run of the PDF, in one column. In the second column, I want your data, which is different for each PDF/row in the CSV file/user of StackOverflow. I'm pretty sure people use things like Crystal for this? Is there something in MS SQL Server that's good for this? An open source template language? I'm not even really sure if what I need is called a 'reporting' tool (since I don't really need to do any crunching; the data in this case is being crunched by a series of scripts and SPSS, and I don't need bands and subbands and so on) or 'templating'. Is there even such a thing as templating PDFs? Natch, I'd be fine with something that generates output easily scriptable to PDF, like EPS, but not something like HTML. The report formatting is fussy, already finalized, externally determined, and handed down from on high. It's print-oriented, not webby. Thanks in advance.
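
    As one hedged sketch of the "one PDF per data slice" loop (ReportLab is only an example library, and the file name, CSV columns, layout coordinates and AVERAGES values below are made-up placeholders, not anything from the question): read each row, print the fixed "average" figures next to that row's figures, and write a per-user PDF.

        # Sketch: per-row PDF generation from a CSV, with a constant "average" column.
        import csv
        from reportlab.lib.pagesizes import letter
        from reportlab.pdfgen import canvas

        AVERAGES = {"score": 42.0}          # same in every PDF (placeholder value)

        def render_report(row, path):
            c = canvas.Canvas(path, pagesize=letter)
            c.setFont("Helvetica-Bold", 14)
            c.drawString(72, 720, "Your StackOverflow Performance")
            c.setFont("Helvetica", 11)
            c.drawString(72, 690, "You vs. the average poster on this metric:")
            c.drawString(72, 670, f"Average score: {AVERAGES['score']}")
            c.drawString(72, 655, f"Your score:    {row['score']}")
            c.showPage()
            c.save()

        with open("users.csv", newline="") as f:        # hypothetical input file
            for row in csv.DictReader(f):               # expects 'user' and 'score' columns
                render_report(row, f"report_{row['user']}.pdf")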

    Read the article

  • Temperature anomaly calculation of time series data

    - by neel
    I have a time series like the following:

        Data <- structure(list(
            Year = c(1991L, 1991L, 1991L, 1991L, 1991L, 1991L, 1991L, 1991L, 1991L, 1991L, 1991L, 1991L, 1991L,
                     1992L, 1992L, 1992L, 1992L, 1992L, 1992L, 1992L, 1992L, 1992L, 1992L, 1992L, 1992L,
                     1992L, 1992L, 1992L, 1992L, 1992L, 1992L, 1992L, 1992L, 1992L, 1992L, 1992L, 1992L,
                     1992L, 1992L, 1992L, 1992L, 1992L, 1992L, 1992L, 1992L, 1992L, 1992L, 1992L, 1992L),
            Month = c(8L, 9L, 9L, 9L, 10L, 10L, 10L, 11L, 11L, 11L, 12L, 12L, 12L,
                      1L, 1L, 1L, 2L, 2L, 2L, 3L, 3L, 3L, 4L, 4L, 4L, 5L, 5L, 5L, 6L, 6L, 6L,
                      7L, 7L, 7L, 8L, 8L, 8L, 9L, 9L, 9L, 10L, 10L, 10L, 11L, 11L, 11L, 12L, 12L, 12L),
            Day = c(30L, 10L, 20L, 30L, 10L, 20L, 30L, 10L, 20L, 30L, 10L, 20L, 30L,
                    10L, 20L, 30L, 10L, 20L, 28L, 10L, 20L, 30L, 10L, 20L, 30L, 10L, 20L, 30L,
                    10L, 20L, 30L, 10L, 20L, 30L, 10L, 20L, 30L, 10L, 20L, 30L, 10L, 20L, 30L,
                    10L, 20L, 30L, 10L, 20L, 30L),
            Hour = c(0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L,
                     0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L,
                     0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L,
                     0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L),
            temperature = c(72.5, 64, 62.5, 64, 64, 53, 52, 52, 45.5, 49, 50, 50, 59,
                            63.5, 69.5, 61, 61, NaN, NaN, 39.5, 37, 45.5, 45, 39, 43.5, 52, 53, 56,
                            64, 66, 66.5, 73.5, 81, 85, 89.5, 87.5, 88.5, 83, 84.5, 74, 60.5, 59, 53,
                            60.5, 62.5, 64.5, 63, 62, 65.5)),
            .Names = c("Year", "Month", "Day", "Hour", "temperature"),
            class = "data.frame", row.names = c(NA, -49L))

    and I have to calculate the standardized anomaly. The steps to calculate the anomalies are as follows:

    1. Monthly temperature departures from the long-term (1991-2007) average are obtained.
    2. These are then standardized by dividing by the standard deviation of monthly temperature.
    3. The standardized monthly anomalies are then weighted by multiplying by the fraction of the average temperature for the given month.
    4. These weighted anomalies are then summed over a 3 month time period.

    Can you please help me?
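
    The data above is R; purely to illustrate the arithmetic of the four listed steps, here is a hedged Python/pandas sketch. My reading of the weighting in step 3 and of the 3-month sum in step 4 is an assumption (spelled out in the comments) and may need adjusting.

        import pandas as pd

        def standardized_weighted_anomaly(df):
            # df: columns Year, Month, temperature (NaN tolerated by pandas aggregations).
            grouped = df.groupby("Month")["temperature"]
            monthly_mean = grouped.transform("mean")   # long-term mean per calendar month
            monthly_std = grouped.transform("std")     # standard deviation per calendar month
            anom = (df["temperature"] - monthly_mean) / monthly_std        # steps 1-2
            weight = monthly_mean / df["temperature"].mean()               # step 3: one reading of "fraction of the average"
            weighted = anom * weight
            # Step 4: a 3-value running sum. With roughly three 10-day readings per
            # month you may instead want to resample to monthly values first and
            # then sum over 3 calendar months.
            return weighted.rolling(3, min_periods=1).sum()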

    Read the article

  • Pandas Dataframe add rows on top of dataframe

    - by yash.trojan.25
    I am trying to add rows on top of a pandas DataFrame: basically some blank cells plus summary rows that hold calculations (Average, etc.) for each column. Can someone please help me with how I can do this? From:

        A B D E F G H I J
        0 -8 10 532 533 533 532 534 532 532
        1 -8 12 520 521 523 523 521 521 521
        2 -8 14 520 523 522 523 522 521 522
        3 -4 2 526 527 527 528 528 527 529
        4 -4 4 516 518 517 519 518 516 518
        5 -4 6 528 529 530 531 530 528 530
        6 -4 8 518 521 521 521 522 519 521
        7 -4 10 524 525 525 525 525 524 524
        8 -4 12 522 523 524 525 525 522 523
        9 -2 2 525 526 527 527 527 525 527
        10 -2 4 518 519 519 521 520 519 520
        11 -2 6 520 522 522 522 522 520 523
        12 -2 8 551 551 552 552 552 550 552
        13 -2 10 533 534 535 536 535 534 535
        14 -2 12 537 539 539 539 538 537 539
        15 -2 14 528 530 530 531 530 529 530
        16 -1 2 518 519 519 521 520 518 520

    To:

        A B D E F G H I J
        Average 525.6 527.1 527.4 528.0 527.6 526.0 527.4
        Sigma 8.6 8.3 8.5 8.1 8.3 8.3 8.4
        Minimum 516 518 517 519 518 516 518
        Maximum 551 551 552 552 552 550 552
        0 -8 10 532 533 533 532 534 532 532
        1 -8 12 520 521 523 523 521 521 521
        2 -8 14 520 523 522 523 522 521 522
        3 -4 2 526 527 527 528 528 527 529
        4 -4 4 516 518 517 519 518 516 518
        5 -4 6 528 529 530 531 530 528 530
        6 -4 8 518 521 521 521 522 519 521
        7 -4 10 524 525 525 525 525 524 524
        8 -4 12 522 523 524 525 525 522 523
        9 -2 2 525 526 527 527 527 525 527
        10 -2 4 518 519 519 521 520 519 520
        11 -2 6 520 522 522 522 522 520 523
        12 -2 8 551 551 552 552 552 550 552
        13 -2 10 533 534 535 536 535 534 535
        14 -2 12 537 539 539 539 538 537 539
        15 -2 14 528 530 530 531 530 529 530
        16 -1 2 518 519 519 521 520 518 520
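
    A hedged sketch of one way to do this (the function name is mine; the set of summary rows and the column names come from the question): build the summary rows as their own DataFrame and concatenate them on top of the data.

        import pandas as pd

        def add_summary_rows(df, stat_cols):
            # Summary statistics for the chosen columns, e.g. ["D","E","F","G","H","I","J"].
            stats = pd.DataFrame(
                {c: [df[c].mean(), df[c].std(), df[c].min(), df[c].max()] for c in stat_cols},
                index=["Average", "Sigma", "Minimum", "Maximum"],
            )
            # Columns not in stat_cols (e.g. A and B) come out as NaN/blank in the
            # summary rows, which matches the desired layout above.
            return pd.concat([stats, df])

        # usage: add_summary_rows(df, ["D", "E", "F", "G", "H", "I", "J"])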

    Read the article

  • What are some programming techniques for converting SD images to HD images

    - by Dr Dork
    I'm taking a programming class and the instructor loves to work with images, so most of our assignments involve manipulating raw RGB image data. One of our assignments is to implement a standard image converter that converts SD images to HD images and vice versa. I always take advantage of these types of assignments to go a little beyond what we were asked to do, so I added a basic anti-aliasing step that uses the average color of the 3x3 surrounding pixels to improve the converted image. While it helps a bit, the resulting image still doesn't look good, which is OK because it isn't expected to for the assignment. I've learned that converting SD images to HD is much harder than downsampling HD to SD, simply because going from SD to HD effectively means increasing resolution that is not there. Obviously, it is hard to create pixels from nothing, but I'd like to improve my anti-aliasing to something that gives better results when upscaling an image. Most of the techniques I find and read about on the internet are far beyond my level of image processing and programming. Can anybody suggest better methods or processes to create good HD content from SD content that may be within my programming skill level? I know that's a difficult thing to gauge since you don't know me, but perhaps knowing that I can write C++ code to read in raw RGB data and upscale/downscale it with simple average anti-aliasing will give you an idea. Thanks in advance for all your help!
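
    One common step up from 3x3 averaging is bilinear interpolation: each target pixel is sampled from its four nearest source pixels with distance weights. A hedged sketch of the arithmetic follows (written in Python/NumPy for brevity even though the question's code is C++; the function name is mine, and this illustrates the math rather than production-quality upscaling).

        import numpy as np

        def upscale_bilinear(img, new_h, new_w):
            """img: (H, W, 3) array of RGB values; returns a float (new_h, new_w, 3) array
            (round/clip to 0..255 and cast back to uint8 for raw RGB output)."""
            h, w = img.shape[:2]
            # Fractional sample positions in the source image for every target pixel.
            ys = np.linspace(0, h - 1, new_h)
            xs = np.linspace(0, w - 1, new_w)
            y0 = np.floor(ys).astype(int); x0 = np.floor(xs).astype(int)
            y1 = np.minimum(y0 + 1, h - 1); x1 = np.minimum(x0 + 1, w - 1)
            wy = (ys - y0)[:, None, None]   # vertical weights, broadcast over x and RGB
            wx = (xs - x0)[None, :, None]   # horizontal weights
            top = img[y0][:, x0] * (1 - wx) + img[y0][:, x1] * wx
            bot = img[y1][:, x0] * (1 - wx) + img[y1][:, x1] * wx
            return top * (1 - wy) + bot * wy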

    Read the article

  • Would this method work to scale out SQL queries?

    - by David
    I have a database containing a single huge table. At the moment a query can take anything from 10 to 20 minutes and I need that to go down to 10 seconds. I have spent months trying different products like GridSQL. GridSQL works fine, but it uses its own parser, which does not have all the needed features. I have also optimized my database in various ways without getting the speedup I need. I have a theory on how one could scale out queries, meaning that I utilize several nodes to run a single query in parallel. The idea is to take an incoming SQL query and simply run it exactly as it is on all the nodes. When the results are returned to a coordinator node, the same query is run on the union of the result sets. I realize that an aggregate function like average needs to be rewritten into a count and a sum for the nodes, and that the coordinator divides the sum of the sums by the sum of the counts to get the average. What kinds of problems could not easily be solved using this model? I believe one issue would be the count distinct function. Edit: I am getting so many nice suggestions, but none have addressed the method.
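
    A toy sketch of the coordinator arithmetic described above (plain Python, no SQL engine, made-up shard data), including why COUNT(DISTINCT ...) is the awkward case:

        node_rows = [
            [10, 20, 30],          # rows held by node 1 (made-up values)
            [40, 50],              # rows held by node 2
        ]

        # Each node returns partial aggregates for its own shard.
        partials = [(sum(rows), len(rows)) for rows in node_rows]

        # Coordinator: AVG = sum of sums / sum of counts.
        total_sum = sum(s for s, _ in partials)
        total_count = sum(c for _, c in partials)
        print("global AVG:", total_sum / total_count)        # 30.0

        # COUNT(DISTINCT x) is the problem case: per-node distinct counts can
        # overlap, so the coordinator must merge the distinct values (or hashes)
        # themselves rather than just summing counts.
        node_values = [[1, 2, 2, 3], [3, 4]]
        per_node_distinct = [len(set(v)) for v in node_values]      # [3, 2] -> naive sum is 5
        true_distinct = len({x for vals in node_values for x in vals})   # 4
        print(per_node_distinct, true_distinct)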

    Read the article

  • Ruby 1.9 GarbageCollector, GC.disable/enable

    - by seb
    I'm developing a Rails 2.3, Ruby 1.9.1 web application that does quite a bunch of calculation before each request. For every request it has to calculate a graph with 300 nodes and ~1000 edges. The graph and all its nodes, edges and other objects are initialized for every request (~2000 objects); actually they are cloned from an uncalculated cached graph using Marshal.load(Marshal.dump()). Performance is quite an issue here. Right now the whole request takes on average 150ms. I then saw that during a request, parts of the calculation randomly take longer. Assuming that this might be the garbage collector kicking in, I wrapped the request in GC.disable and GC.enable, so that the request waits with garbage collecting until calculating and rendering have finished.

        def query
          GC.disable
          calculate
          respond_to do |format|
            format.html { render }
          end
          GC.enable
        end

    The average request now takes about 100ms (50ms less). But I'm unsure whether this is a good/stable solution; I assume there must be drawbacks to doing that. Does anybody have experience with a similar problem, or see problems with the above code?
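
    For what it's worth, the same pattern sketched in Python, purely as a cross-language illustration of the idea rather than Rails advice (handle_request, calculate and render are placeholders): disable collection around the latency-sensitive section, then re-enable and collect in a finally block so an exception cannot leave collection switched off.

        import gc

        def handle_request(calculate, render):
            gc.disable()
            try:
                result = calculate()
                return render(result)
            finally:
                gc.enable()
                gc.collect()   # pay the collection cost outside the timed section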

    Read the article

  • Pass a range into a custom function from within a cell

    - by Luis
    Hi, I'm using VBA in Excel and need to pass the values from two ranges into a custom function from within a cell's formula. The function looks like this:

        Public Function multByElement(range1 As String, range2 As String) As Variant
            Dim arr1() As Variant, arr2() As Variant
            arr1 = Range(range1).value
            arr2 = Range(range2).value
            If UBound(arr1) = UBound(arr2) Then
                Dim arrayA() As Variant
                ReDim arrayA(LBound(arr1) To UBound(arr1))
                For i = LBound(arr1) To UBound(arr1)
                    arrayA(i) = arr1(i) * arr2(i)
                Next i
                multByElement = arrayA
            End If
        End Function

    As you can see, I'm trying to pass the string representation of the ranges. In the debugger I can see that they are passed in properly, and the first visible problem occurs when it tries to read arr1(i), which shows up as "subscript out of range". I have also tried passing in the range itself (i.e. range1 As Range ...) but with no success. My best suspicion was that it has to do with the active sheet, since the function is called from a sheet different from the one the formula refers to (the sheet name is part of the string), but that was dispelled since I tried it both from within the same sheet and by specifying the sheet in the code. BTW, the formula in the cell looks like this:

        =AVERAGE(multByElement("A1:A3","B1:B3"))

    or

        =AVERAGE(multByElement("My Sheet1!A1:A3","My Sheet1!B1:B3"))

    for when I call it from a different sheet.

    Read the article

  • Count problem in SQL when I want results from different tables

    - by Nicklas
        ALTER PROCEDURE GetProducts
            @CategoryID INT
        AS
        SELECT COUNT(tblReview.GroupID) AS ReviewCount,
               COUNT(tblComment.GroupID) AS CommentCount,
               Product.GroupID,
               MAX(Product.ProductID) AS ProductID,
               AVG(Product.Price) AS Price,
               MAX(Product.Year) AS Year,
               MAX(Product.Name) AS Name,
               AVG(tblReview.Grade) AS Grade
        FROM tblReview, tblComment, Product
        WHERE (Product.CategoryID = @CategoryID)
        GROUP BY Product.GroupID
        HAVING COUNT(distinct Product.GroupID) = 1

    This is what the tables look like:

        Product   | tblReview   | tblComment
        ProductID | ReviewID    | CommentID
        Name      | Description | Description
        Year      | GroupID     | GroupID
        Price     | Grade       | GroupID

    GroupID is the name_year of a Product, e.g. Nike_2010. One product can come in different sizes, for example:

        ProductID | Name   | Year | Price | Size | GroupID
        1         | Nike   | 2010 | 50    | 8    | Nike_2010
        2         | Nike   | 2010 | 50    | 9    | Nike_2010
        3         | Nike   | 2010 | 50    | 10   | Nike_2010
        4         | Adidas | 2009 | 45    | 8    | Adidas_2009
        5         | Adidas | 2009 | 45    | 9    | Adidas_2009
        6         | Adidas | 2009 | 45    | 10   | Adidas_2009

    I don't get the right counts for tblReview and tblComment. If I add a review to Nike size 8 and one review to Nike size 10, I want a count of 2 when I list the products by distinct GroupID. Right now I get the same count for reviews and comments, and both are wrong. I use a DataList to show all the products with distinct/unique GroupIDs, and I want each item to look like this:

         ______________
        |              |
        | Name: Nike   |
        | Year: 2010   |
        | (All Sizes)  |
        | x Reviews    |
        | x Comments   |
        | x AVG Grade  |
        |______________|

    That is, all review counts, comment counts and the average grade of all products with the same GroupID; the average works great.

    Read the article

  • Text piped to PowerShell.exe isn't received when using [Console]::ReadLine()

    - by crtracy
    I'm getting intermittent data loss when calling .NET [Console]::ReadLine() to read piped input to PowerShell.exe:

        >ping localhost | powershell -NonInteractive -NoProfile -C "do {$line = [Console]::ReadLine(); ('' + (Get-Date -f 'HH:mm :ss') + $line) | Write-Host; } while ($line -ne $null)"
        23:56:45time<1ms
        23:56:45
        23:56:46time<1ms
        23:56:46
        23:56:47time<1ms
        23:56:47
        23:56:47

    Normally 'ping localhost' from Vista64 looks like this, so there is a lot of data missing from the output above:

        Pinging WORLNTEC02.bnysecurities.corp.local [::1] from ::1 with 32 bytes of data:
        Reply from ::1: time<1ms
        Reply from ::1: time<1ms
        Reply from ::1: time<1ms
        Reply from ::1: time<1ms
        Ping statistics for ::1:
            Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
        Approximate round trip times in milli-seconds:
            Minimum = 0ms, Maximum = 0ms, Average = 0ms

    But using the same API from C# receives all the data sent to the process (excluding some newline differences). Code:

        namespace ConOutTime {
            class Program {
                static void Main (string[] args) {
                    string s;
                    while ((s = Console.ReadLine ()) != null) {
                        if (s.Length > 0) // don't write time for empty lines
                            Console.WriteLine("{0:HH:mm:ss} {1}", DateTime.Now, s);
                    }
                }
            }
        }

    Output:

        00:44:30 Pinging WORLNTEC02.bnysecurities.corp.local [::1] from ::1 with 32 bytes of data:
        00:44:30 Reply from ::1: time<1ms
        00:44:31 Reply from ::1: time<1ms
        00:44:32 Reply from ::1: time<1ms
        00:44:33 Reply from ::1: time<1ms
        00:44:33 Ping statistics for ::1:
        00:44:33 Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
        00:44:33 Approximate round trip times in milli-seconds:
        00:44:33 Minimum = 0ms, Maximum = 0ms, Average = 0ms

    So, when calling the same API from PowerShell instead of C#, many parts of StdIn get 'eaten'. Is the PowerShell host reading strings from StdIn even though I didn't use 'PowerShell.exe -Command -'?

    Read the article

  • Designing small comparable objects

    - by Thomas Ahle
    Intro. Consider you have a list of key/value pairs: (0,a) (1,b) (2,c). You have a function that inserts a new value between two current pairs, and you need to give it a key that keeps the order: (0,a) (0.5,z) (1,b) (2,c). Here the new key was chosen as the average of the keys of the bounding pairs. The problem is that your list may see millions of inserts. If these inserts all land close to each other, you may end up with keys such as 2^(-1000000), which are not easily storable in any standard or special number class.

    The problem. How can you design a system for generating keys that (1) gives the correct result (larger/smaller than) when compared to all the rest of the keys, and (2) takes up only O(log n) memory (where n is the number of items in the list)?

    My tries. First I tried different number classes, like fractions and even polynomials, but I could always find examples where the key size would grow linearly with the number of inserts. Then I thought about saving pointers to a number of other keys and saving the lower/greater-than relationships, but that would always require at least O(sqrt(n)) memory and time for comparison. Extra info: ideally the algorithm shouldn't break when pairs are deleted from the list.

    Read the article

  • Lack of security in many PHP applications?

    - by John
    Over the past year of freelancing, I inherited two web projects, both of them built in PHP, both of them holding sensitive information like credit card info, bank info, etc. In one application, when I typed http://thecompany.com/admin/, and without being asked for a username and password, I saw every user's sensitive information, including credit card numbers, bank account numbers, etc. In the other application, I was able to bypass the login screen by simply typing http://the2ndcompany.com/customer.php?user_id=777, and again, without any prompt for a username and password, I was able to see user 777's credit card info. I cycled through a few more user_ids (any integer) and saw each person's credit card info. Is something wrong here? Or is this the quality of work that the "average" programmer produces? Because if this is what the average programmer produces, does that mean I'm an...gasp...elite programmer?? No..that can't be right....something doesn't make sense. So my question is: is it just coincidence that I inherited two applications, both of which are dangerously lacking in security? Or are there a lot of bad PHP programmers out there?

    Read the article

  • Calculate the retrieved rows in database Visual C#

    - by Tanya Lertwichaiworawit
    I am new to Visual C# and want to know how to do calculations on data retrieved from a database. Using the above GUI, when "Calculate" is clicked, the program should display the number of students in textBox1 and the average GPA of all students in textBox2. Here is my database table "Students". I was able to display the number of students, but I'm still confused about how to calculate the average GPA. Here's my code:

        private void button1_Click(object sender, EventArgs e)
        {
            string connection = @"Provider=Microsoft.ACE.OLEDB.12.0;Data Source=C:\Database1.accdb";
            OleDbConnection connect = new OleDbConnection(connection);
            string sql = "SELECT * FROM Students";
            connect.Open();
            OleDbCommand command = new OleDbCommand(sql, connect);
            DataSet data = new DataSet();
            OleDbDataAdapter adapter = new OleDbDataAdapter(command);
            adapter.Fill(data, "Students");
            textBox1.Text = data.Tables["Students"].Rows.Count.ToString();
            double gpa;
            for (int i = 0; i < data.Tables["Students"].Rows.Count; i++)
            {
                // Note: this overwrites gpa on every iteration instead of accumulating a sum.
                gpa = Convert.ToDouble(data.Tables["Students"].Rows[i][2]);
            }
            connect.Close();
        }

    Read the article

  • Pseudo-quicksort time complexity

    - by Ord
    I know that quicksort has O(n log n) average time complexity. A pseudo-quicksort (which is only a quicksort when you look at it from far enough away, with a suitably high level of abstraction) that is often used to demonstrate the conciseness of functional languages is as follows (given in Haskell):

        quicksort :: Ord a => [a] -> [a]
        quicksort [] = []
        quicksort (p:xs) = quicksort [y | y<-xs, y<p] ++ [p] ++ quicksort [y | y<-xs, y>=p]

    Okay, so I know this thing has problems. The biggest problem with this is that it does not sort in place, which is normally a big advantage of quicksort. Even if that didn't matter, it would still take longer than a typical quicksort because it has to do two passes of the list when it partitions it, and it does costly append operations to splice it back together afterwards. Further, the choice of the first element as the pivot is not the best choice. But even considering all of that, isn't the average time complexity of this quicksort the same as the standard quicksort? Namely, O(n log n)? Because the appends and the partition still have linear time complexity, even if they are inefficient.

    Read the article

  • Using EhCache for session.createCriteria(...).list()

    - by James Smith
    I'm benchmarking the performance gains from using a 2nd level cache in Hibernate (enabling EhCache), but it doesn't seem to improve performance. In fact, the time to perform the query slightly increases. The query is:

        session.createCriteria(MyEntity.class).list();

    The entity is:

        @Entity
        @Cache(usage = CacheConcurrencyStrategy.NONSTRICT_READ_WRITE)
        public class MyEntity {
            @Id
            @GeneratedValue
            private long id;

            @Column(length=5000)
            private String data;

            //---SNIP getters and setters---
        }

    My hibernate.cfg.xml is:

        <!-- all the normal stuff to get it to connect & map the entities plus:-->
        <property name="hibernate.cache.region.factory_class">
            net.sf.ehcache.hibernate.EhCacheRegionFactory
        </property>

    The MyEntity table contains about 2000 rows. The problem is that before adding in the cache, the query above to list all entities took an average of 65 ms. After the cache, they take an average of 74 ms. Is there something I'm missing? Is there something extra that needs to be done that will increase performance?

    Read the article

  • parsing the output of the 'w' command?

    - by Blackbinary
    I'm writing a program which requires knowledge of the current load on the system and the activity of any users (it's a load balancer). This is a university assignment, and I am required to use the w command. I'm having a hard time parsing this command because it is very verbose. Any suggestions on what I can do would be appreciated. This is a small part of the program, and I am free to use whatever method I like. The most condensed version of w which still has the information I require is 'w -u -s -f', which produces this:

         10:13:43 up 9:57, 2 users, load average: 0.00, 0.00, 0.00
        USER     TTY      IDLE   WHAT
        fsm      tty7     22:44m x-session-manager
        fsm      pts/0     0.00s w -u -s -f

    So out of that, I am interested in the first number after "load average" and the smallest idle time (so I will need to parse them all). My background process will call w, so the fact that w itself has the lowest idle time will not matter (all I will see is the tty time). Do you have any ideas? Thanks (I am allowed to use alternative unix commands, like grep, if that helps).
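
    If a scripting language is an option alongside grep, a hedged sketch of the parsing in Python might look like the following; the idle-time parser only covers the formats shown in the sample output above and would need hardening for anything else.

        import subprocess

        def parse_idle(field):
            """Convert w's IDLE column to seconds ('0.00s', 'MM:SS', 'HH:MMm', 'Ndays')."""
            if field.endswith("days"):
                return int(field[:-4]) * 86400
            if field.endswith("m"):                  # hours:minutes
                h, m = field[:-1].split(":")
                return int(h) * 3600 + int(m) * 60
            if field.endswith("s"):                  # seconds
                return float(field[:-1])
            m, s = field.split(":")                  # minutes:seconds
            return int(m) * 60 + float(s)

        out = subprocess.run(["w", "-u", "-s", "-f"], capture_output=True, text=True).stdout
        lines = out.splitlines()
        load_1min = float(lines[0].split("load average:")[1].split(",")[0])
        idle_times = [parse_idle(line.split()[2]) for line in lines[2:] if line.strip()]
        print(load_1min, min(idle_times, default=None))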

    Read the article

  • Using JMX classes to notify on events over time

    - by Cincinnati Joe
    I've been looking at JMX for monitoring application and system metrics (partially because MBeans can be accessed by various tools such as JConsole). It would seem like the classes included with JMX would be useful for things like notification when metrics have exceeded thresholds. But I'm not sure they fit the way I want to measure these over a specified time period. For example, let's say I want to notify an admin when the average CPU load is over 95% for more than 5 minutes. Is that something that can be done with a GaugeMonitor? From the docs, it doesn't seem quite suited for this, and I'm wondering if instead I should write my own MBean with the necessary logic. A more relevant example is when the login times for users exceed 10s over a period of 5 minutes. Slightly different would be the last 20 logins taking more than 10s on average. Another case would be a process crashing 4+ times in an hour. Or the request queue exceeding 15 for 5 minutes. Are the JMX Monitor classes useful for this kind of thing?
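
    A language-agnostic sketch (shown in Python, not JMX; class and method names are mine) of the kind of logic a custom MBean would carry for the "over threshold for N minutes" cases: track when the metric first went above the threshold and fire once the streak has lasted the whole window.

        import time

        class OverThresholdMonitor:
            """Fires when the observed metric has stayed above `threshold`
            continuously for at least `window_seconds` (e.g. CPU load > 0.95
            for 300 seconds). A count-in-window variant would cover the
            "crashed 4+ times in an hour" case."""

            def __init__(self, threshold, window_seconds):
                self.threshold = threshold
                self.window = window_seconds
                self.above_since = None

            def observe(self, value, now=None):
                now = time.time() if now is None else now
                if value <= self.threshold:
                    self.above_since = None      # streak broken, start over
                    return False
                if self.above_since is None:
                    self.above_since = now       # streak starts
                return now - self.above_since >= self.window

        # cpu = OverThresholdMonitor(0.95, 300)
        # if cpu.observe(current_load): notify_admin()   # notify_admin is hypothetical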

    Read the article

  • In Scrum, should a team remove points from (defect) stories that don't result in a code change?

    - by CanIgtAW00tW00t
    My work uses a Scrum-like process to manage projects. I say Scrum-like because we call it Scrum, but our project managers exclude aspects of Scrum that are inconvenient (most notably customer interaction). One of the stories in our current sprint was to correct a defect. After spending almost an entire day working on the issue, I determined it was the result of a permissions problem, so I didn't end up modifying any code. Our Scrum master / project manager decided that no code change equals zero points. I know that Scrum points are supposed to measure size / complexity and not time, but our Scrum master invests a lot of time in preparing graphs and statistical information from past sprints (average velocity, average points completed, etc.). I've always been of the opinion that for statistics to be meaningful in any way, the data must be as accurate as possible. All of our data is fuzzy to begin with, because from time to time we're encouraged by the Scrum master to "adjust" our size / complexity estimates, both increasing and decreasing them. I'd like to hear other developers' / Scrum team members' thoughts on the merits of statistics based on past sprints, and also whether they think it's appropriate to "adjust" size / complexity estimates in the middle of a sprint, or to remove all points from a story altogether in situations like the one I've just described.

    Read the article

  • Excel, Pivot Calculated formula: SUM(Field1)/AVG(Field2)

    - by Bas
    I've a simple table with amounts and intervals in seconds by date and product name.

        Month  | Product | Amount | Interval in sec
        ------------------------------------------
        05-'12 | Prod A  | 10     | 5
        05-'12 | Prod A  | 3      | 5
        05-'12 | Prod B  | 4      | 5
        05-'12 | Prod C  | 13     | 5
        05-'12 | Prod C  | 5      | 5

    From this table I've derived a pivot table with SUM(Amount) and AVERAGE(Interval in sec) by Month and Product.

        Month  | Product | SUM of Amount | AVG of Interval in sec
        --------------------------------------------------------
        05-'12 | Prod A  | 13            | 5
        05-'12 | Prod B  | 4             | 5
        05-'12 | Prod C  | 18            | 5

    So far so good. Now I want to add an extra column to my pivot table that gives me the outcome of SUM of Amount / AVG of Interval in sec. Adding a calculated field =SUM(Amount)/AVERAGE(Interval) is not giving me the right values. Excel gives me:

        Month  | Product | SUM of Amount | AVG of Interval in sec | Amount per sec
        -------------------------------------------------------------------------
        05-'12 | Prod A  | 13            | 5                      | 1.3
        05-'12 | Prod B  | 4             | 5                      | 0.8
        05-'12 | Prod C  | 18            | 5                      | 1.8

    What it is actually doing is =SUM(Amount)/SUM(Interval in sec) for every Month and Product, based on the values in the first table. But I'm looking for:

        Month  | Product | SUM of Amount | AVG of Interval in sec | Amount per sec
        -------------------------------------------------------------------------
        05-'12 | Prod A  | 13            | 5                      | 2.6
        05-'12 | Prod B  | 4             | 5                      | 0.8
        05-'12 | Prod C  | 18            | 5                      | 3.6

    So, literally, divide pivot field 'SUM of Amount' by pivot field 'AVG of Interval in sec'. How do I achieve this? Thank you in advance.
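
    As a hedged cross-check of the desired arithmetic outside Excel (a pandas sketch of the numbers, not an Excel formula): aggregate to SUM(Amount) and AVG(Interval) first, then divide the two aggregated columns, which is exactly what the pivot's calculated field is not doing.

        import pandas as pd

        df = pd.DataFrame({
            "Month":    ["05-'12"] * 5,
            "Product":  ["Prod A", "Prod A", "Prod B", "Prod C", "Prod C"],
            "Amount":   [10, 3, 4, 13, 5],
            "Interval in sec": [5, 5, 5, 5, 5],
        })

        pivot = df.groupby(["Month", "Product"]).agg(
            amount_sum=("Amount", "sum"),
            interval_avg=("Interval in sec", "mean"),
        )
        pivot["amount_per_sec"] = pivot["amount_sum"] / pivot["interval_avg"]
        print(pivot)   # Prod A -> 2.6, Prod B -> 0.8, Prod C -> 3.6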

    Read the article
