Search Results

Search found 3392 results on 136 pages for 'average joe'.


  • Simplifying for-if messes with better structure?

    - by HH
    # Description: you are given a bitwise pattern and a string
    # you need to find the number of times the pattern matches in the string
    # any one liner or simple pythonic solution?

        import random

        def matchIt(yourString, yourPattern):
            """find the number of times yourPattern occurs in yourString"""
            count = 0
            matchTimes = 0
            # How can you simplify the for-if structures?
            for coin in yourString:
                # return to base
                if count == len(pattern):
                    matchTimes = matchTimes + 1
                    count = 0
                # special case to return to 2; there could be more conditions of
                # this type, so if-conditionals like this are screaming for havoc
                if count == 2 and pattern[count] == 1:
                    count = count - 1
                # the work horse
                # it could be simpler by breaking the initial string of length 'l'
                # into blocks of pattern length; the number of them is 'l - len(pattern) - 1'
                if coin == pattern[count]:
                    count = count + 1
            average = len(yourString) / matchTimes
            return [average, matchTimes]

        # Generate the list
        myString = []
        for x in range(10000):
            myString = myString + [int(random.random() * 2)]

        pattern = [1, 0, 0]
        result = matchIt(myString, pattern)
        print("The sample had " + str(result[1]) + " matches and its size was " + str(len(myString)) + ".\n"
              + "So it took " + str(result[0]) + " steps in average.\n"
              + "RESULT: " + str([a for a in "FAILURE" if result[0] != 8]))

        # Sample Output
        #
        # The sample had 1656 matches and its size was 10000.
        # So it took 6 steps in average.
        # RESULT: ['F', 'A', 'I', 'L', 'U', 'R', 'E']
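
    One way to simplify the whole for-if structure (a sketch of a common answer, not the asker's code) is to drop the running state machine and compare list slices directly; for random bits the pattern [1, 0, 0] matches one position in eight on average, which is where the expected value of 8 in the test comes from:

        def match_count(coins, pattern):
            """Count (possibly overlapping) occurrences of pattern in coins."""
            n = len(pattern)
            return sum(1 for i in range(len(coins) - n + 1)
                       if coins[i:i + n] == pattern)

        # Usage sketch: average spacing between matches
        matches = match_count(myString, pattern)
        print(len(myString) / matches)  # ~8 for pattern [1, 0, 0] on random bits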

    Read the article

  • give feedback on this pointer program

    - by JohnWong
    This is a relatively simple program, but I want to get some feedback about how I can improve it (if any), for example unnecessary statements?

        #include<iostream>
        #include<fstream>
        using namespace std;

        double Average(double*, int);

        int main()
        {
            ifstream inFile("data2.txt");
            const int SIZE = 4;
            double *array = new double(SIZE);
            double *temp;
            temp = array;
            for (int i = 0; i < SIZE; i++)
            {
                inFile >> *array++;
            }
            cout << "Average is: " << Average(temp, SIZE) << endl;
        }

        double Average(double *pointer, int x)
        {
            double sum = 0;
            for (int i = 0; i < x; i++)
            {
                sum += *pointer++;
            }
            return (sum/x);
        }

    The code is valid and the program is working fine. But I just want to hear what you guys think, since most of you have more experience than I do (well, I am only a freshman... lol). Thanks.
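
    A hedged review note (the usual feedback for this snippet, stated as my reading of the intent): new double(SIZE) allocates a single double initialized to 4.0, not an array of four doubles, so the input loop writes past the allocation. new double[SIZE] was almost certainly intended, and indexing avoids losing the owning pointer:

        // Sketch of the corrected allocation (new[] instead of new, delete[] to match)
        const int SIZE = 4;
        double *array = new double[SIZE];   // array of 4 doubles, not one double == 4.0
        for (int i = 0; i < SIZE; i++)
        {
            inFile >> array[i];             // index instead of advancing the pointer
        }
        cout << "Average is: " << Average(array, SIZE) << endl;
        delete[] array;                     // release what new[] allocated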

    Read the article

  • Summary statistics in visual basic

    - by ben
    Below I am trying to write a script whose goal is to calculate some summary statistics for a few different columns of numbers. I have gotten some help on it up to the "Need help below" mark, but beyond that I am flabbergasted as to how to calculate the simple stats (sum, mean, standard deviation, coefficient of variation). I know VB has functions for these stats, which I have included in my code, but I guess I need to do some extra declaring or something. Advice much appreciated. Thanks.

        Sub TOAinput()
            Const n As Integer = 648
            Dim stratum(n), hybrid(n), acres(n), hhsz(n), offinc(n)
            Dim s1 As Integer
            Dim s2 As Integer
            Dim i As Integer

            For i = 1 To n
                stratum(i) = Worksheets("hhid level").Cells(i + 1, 2).Value
            Next i

            s1 = 0
            s2 = 0
            For i = 1 To n
                If stratum(i) = 1 Then
                    s1 = s1 + 1
                Else: s2 = s2 + 1
                End If
            Next i

            Dim acres1(), hhsz1(), offinc1(), acres2(), hhsz2(), offinc2()
            ReDim acres1(s1), hhsz1(s1), offinc1(s1), acres2(s2), hhsz2(s2), offinc2(s2)

            'data infiles: acres, hh size, off-farm income,
            For i = 1 To n
                acres(i) = Worksheets("hhid level").Cells(i + 1, 4).Value
                hhsz(i) = Worksheets("hhid level").Cells(i + 1, 5).Value
                offinc(i) = Worksheets("hhid level").Cells(i + 1, 6).Value
            Next i

            s1 = 0
            s2 = 0
            For i = 1 To n
                If stratum(i) = 1 Then
                    s1 = s1 + 1
                    acres1(s1) = acres(i)
                    hhsz1(s1) = hhsz(i)
                    offinc1(s1) = offinc(i)
                Else: s2 = s2 + 1
                    acres2(s2) = acres(i)
                    hhsz2(s2) = hhsz(i)
                    offinc2(s2) = offinc(i)
                End If
            Next i

            '****************************
            'Need help below
            '****************************
            Dim sumac1, sumac2, mhhsz1, mhhsz2, cvhhsz1, cvhhsz2
            sumac1 = Sum(acres1)
            sumac2 = Sum(acres2)
            mhhsz1 = Average(hhsz1)
            mhhsz2 = Average(hhsz2)
            cvhhsz1 = StDev(hhsz1) / Average(hhsz1)
            cvhhsz2 = StDev(hhsz2) / Average(hhsz2)
        End Sub
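
    A sketch of the likely fix (assuming this macro runs inside Excel, which the Worksheets calls suggest): Sum, Average and StDev are worksheet functions rather than VBA keywords, so they must be called through Application.WorksheetFunction:

        'Sketch: call Excel's worksheet functions on the collected arrays
        With Application.WorksheetFunction
            sumac1 = .Sum(acres1)
            sumac2 = .Sum(acres2)
            mhhsz1 = .Average(hhsz1)
            mhhsz2 = .Average(hhsz2)
            cvhhsz1 = .StDev(hhsz1) / mhhsz1    'coefficient of variation = SD / mean
            cvhhsz2 = .StDev(hhsz2) / mhhsz2
        End With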

    Read the article

  • HTG Explains: Should You Build Your Own PC?

    - by Chris Hoffman
    There was a time when every geek seemed to build their own PC. While the masses bought eMachines and Compaqs, geeks built their own more powerful and reliable desktop machines for less money. But does this still make sense? Building your own PC still offers as much flexibility in component choice as it ever did, but prebuilt computers are available at extremely competitive prices. Building your own PC will no longer save you money in most cases.

    The Rise of Laptops

    It’s impossible to look at the decline of geeks building their own PCs without considering the rise of laptops. There was a time when everyone seemed to use desktops — laptops were more expensive and significantly slower in day-to-day tasks. With the diminishing importance of computing power — nearly every modern computer has more than enough power to surf the web and use typical programs like Microsoft Office without any trouble — and the rise of laptop availability at nearly every price point, most people are buying laptops instead of desktops. And, if you’re buying a laptop, you can’t really build your own. You can’t just buy a laptop case and start plugging components into it — even if you could, you would end up with an extremely bulky device. Ultimately, to consider building your own desktop PC, you have to actually want a desktop PC. Most people are better served by laptops.

    Benefits to PC Building

    The two main reasons to build your own PC have been component choice and saving money. Building your own PC allows you to choose all the specific components you want rather than have them chosen for you. You get to choose everything, including the PC’s case and cooling system. Want a huge case with room for a fancy water-cooling system? You probably want to build your own PC. In the past, this often allowed you to save money — you could get better deals by buying the components yourself and combining them, avoiding the PC manufacturer markup. You’d often even end up with better components — you could pick up a more powerful CPU that was easier to overclock and choose more reliable components so you wouldn’t have to put up with an unstable eMachine that crashed every day. PCs you build yourself are also likely more upgradable — a prebuilt PC may have a sealed case and be constructed in such a way as to discourage you from tampering with the insides, while swapping components in and out is generally easier with a computer you’ve built on your own. If you want to upgrade your CPU or replace your graphics card, that’s a definite benefit.

    Downsides to Building Your Own PC

    It’s important to remember there are downsides to building your own PC, too. For one thing, it’s just more work — sure, if you know what you’re doing, building your own PC isn’t that hard. But even for a geek, researching the best components, price-matching, waiting for them all to arrive, and building the PC just takes longer. Warranty is a more pernicious problem. If you buy a prebuilt PC and it starts malfunctioning, you can contact the computer’s manufacturer and have them deal with it. You don’t need to worry about what’s wrong. If you build your own PC and it starts malfunctioning, you have to diagnose the problem yourself. What’s malfunctioning — the motherboard, CPU, RAM, graphics card, or power supply? Each component has a separate warranty through its manufacturer, so you’ll have to determine which component is malfunctioning before you can send it off for replacement.

    Should You Still Build Your Own PC?

    Let’s say you do want a desktop and are willing to consider building your own PC. First, bear in mind that PC manufacturers are buying in bulk and getting a better deal on each component. They also have to pay much less for a Windows license than the $120 or so it would cost you to buy your own Windows license. This is all going to wipe out the cost savings you’ll see — with everything all told, you’ll probably spend more money building your own average desktop PC than you would picking one up from Amazon or the local electronics store. If you’re an average PC user who uses your desktop for the typical things, there’s no money to be saved from building your own PC. But maybe you’re looking for something higher end. Perhaps you want a high-end gaming PC with the fastest graphics card and CPU available. Perhaps you want to pick out each individual component and choose the exact components for your gaming rig. In this case, building your own PC may be a good option. As you start to look at more expensive, high-end PCs, you may start to see a price gap — but you may not. Let’s say you wanted to blow thousands of dollars on a gaming PC. If you’re looking at spending this kind of money, it would be worth comparing the cost of individual components versus a prebuilt gaming system. Still, the actual prices may surprise you. For example, if you wanted to upgrade Dell’s $2293 Alienware Aurora to include a second NVIDIA GeForce GTX 780 graphics card, you’d pay an additional $600 on Alienware’s website. The same graphics card costs $650 on Amazon or Newegg, so you’d be spending more money building the system yourself. Why? Dell’s Alienware gets bulk discounts you can’t get — and this is Alienware, which was once regarded as selling ridiculously overpriced gaming PCs to people who wouldn’t build their own. Building your own PC still allows you to get the most freedom when choosing and combining components, but this is only valuable to a small niche of gamers and professional users — most people, even average gamers, would be fine going with a prebuilt system. If you’re an average person or even an average gamer, you’ll likely find that it’s cheaper to purchase a prebuilt PC rather than assemble your own. Even at the very high end, components may be more expensive separately than they are in a prebuilt PC. Enthusiasts who want to choose all the individual components for their dream gaming PC and want maximum flexibility may want to build their own PCs. Even then, building your own PC these days is more about flexibility and component choice than it is about saving money. In summary, you probably shouldn’t build your own PC. If you’re an enthusiast, you may want to — but only a small minority of people would actually benefit from building their own systems. Feel free to compare prices, but you may be surprised which is cheaper. Image Credit: Richard Jones on Flickr, elPadawan on Flickr, Richard Jones on Flickr

    Read the article

  • C#/.NET Little Wonders: Interlocked CompareExchange()

    - by James Michael Hare
    Once again, in this series of posts I look at the parts of the .NET Framework that may seem trivial, but can help improve your code by making it easier to write and maintain. The index of all my past little wonders posts can be found here.

    Two posts ago, I discussed the Interlocked Add(), Increment(), and Decrement() methods (here) for adding and subtracting values in a thread-safe, lightweight manner.  Then, last post I talked about the Interlocked Read() and Exchange() methods (here) for safely and efficiently reading and setting 32 or 64 bit values (or references).  This week, we’ll round out the discussion by talking about the Interlocked CompareExchange() method and how it can be put to use to exchange a value if the current value is what you expected it to be.

    Dirty reads can lead to bad results

    Many of the uses of Interlocked that we’ve explored so far have centered around either reading, setting, or adding values.  But what happens if you want to do something more complex, such as setting a value based on the previous value in some manner?

    Perhaps you were creating an application that reads a current balance, applies a deposit, and then saves the new modified balance, where of course you’d want that to happen atomically.  If you read the balance, then go to save the new balance and in between that time the previous balance has already changed, you’ll have an issue!  Think about it: if we read the current balance as $400, and we are applying a new deposit of $50.75, but meanwhile someone else deposits $200 and sets the total to $600, and then we write a total of $450.75, we’ve lost $200!

    Now, certainly for int and long values we can use Interlocked.Add() to handle these cases, and it works well for that.  But what if we want to work with doubles, for example?  Let’s say we wanted to add the numbers from 0 to 99,999 in parallel.  We could do this by spawning several parallel tasks to continuously add to a total:

        double total = 0;

        Parallel.For(0, 100000, next =>
        {
            total += next;
        });

    Were this run on one thread using a standard for loop, we’d expect an answer of 4,999,950,000 (the sum of all numbers from 0 to 99,999).  But when we run this in parallel as written above, we’ll likely get something far off.  The result of one of my runs, for example, was 1,281,880,740.  That is way off!  If this were banking software we’d be in big trouble with our clients.  So what happened?  The += operator is not atomic; it will read in the current value, add the result, then store it back into the total.  At any point in all of this, another thread could read a “dirty” current total and accidentally “skip” our add.

    So, to clean this up, we could use a lock to guarantee concurrency:

        double total = 0.0;
        object locker = new object();

        Parallel.For(0, 100000, next =>
        {
            lock (locker)
            {
                total += next;
            }
        });

    Which will give us the correct result of 4,999,950,000.  One thing to note is that locking can be heavy, especially if the operation being locked over is trivial, or the life of the lock is a high percentage of the work being performed concurrently.  In the case above, the lock consumes pretty much all of the time of each parallel task – and the task being locked on is relatively trivial.

    Now, let me put in a disclaimer here before we go further: For most uses, lock is more than sufficient for your needs, and is often the simplest solution!

    So, if lock is sufficient for most needs, why would we ever consider another solution?
    The problem with locking is that it can suspend execution of your thread while it waits for the signal that the lock is free.  Moreover, if the operation being locked over is trivial, the lock can add a very high level of overhead.  This is why things like Interlocked.Increment() perform so well: instead of locking just to perform an increment, we perform the increment with an atomic, lockless method.

    As with all things performance related, it’s important to profile before jumping to the conclusion that you should optimize everything in your path.  If your profiling shows that locking is causing a high level of waiting in your application, then it’s time to consider lighter alternatives such as Interlocked.

    CompareExchange() – exchange the existing value if it equals some value

    So let’s look at how we could use CompareExchange() to solve our problem above.  The general syntax of CompareExchange() is:

        T CompareExchange<T>(ref T location, T newValue, T expectedValue)

    If the value in location == expectedValue, then newValue is exchanged.  Either way, the value in location (before the exchange) is returned.

    Actually, CompareExchange() is not one method, but a family of overloaded methods that can take int, long, float, double, pointers, or references.  It cannot take other value types (that is, you can’t CompareExchange() two DateTime instances directly).  Also keep in mind that the version that takes any reference type (the generic overload) only checks for reference equality; it does not call any overridden Equals().

    So how does this help us?  Well, we can grab the current total, and exchange the new value if total hasn’t changed.  This would look like this:

        // grab the snapshot
        double current = total;

        // if the total hasn't changed since I grabbed the snapshot, then
        // set it to the new total
        Interlocked.CompareExchange(ref total, current + next, current);

    So what the code above says is: if the amount in total (1st arg) is the same as the amount in current (3rd arg), then set total to current + next (2nd arg).  This check and exchange pair is atomic (and thus thread-safe).

    This works if total is the same as our snapshot in current, but the problem is: what happens if they aren’t the same?  Well, we know that in either case we will get the previous value of total (before the exchange) back as a result.  Thus, we can test this against our snapshot to see if it was the value we expected:

        // if the value returned is != current, then our snapshot must be out of date
        // which means we didn't (and shouldn't) apply current + next
        if (Interlocked.CompareExchange(ref total, current + next, current) != current)
        {
            // ooops, total was not equal to our snapshot in current, what should we do???
        }

    So what do we do if we fail?  That’s up to you and the problem you are trying to solve.  It’s possible you would decide to abort the whole transaction, or perhaps do a lightweight spin and try again.  Let’s try that:

        double current = total;

        // make first attempt...
        if (Interlocked.CompareExchange(ref total, current + i, current) != current)
        {
            // if we fail, go into a spin wait, spin, and try again until we succeed
            var spinner = new SpinWait();

            do
            {
                spinner.SpinOnce();
                current = total;
            }
            while (Interlocked.CompareExchange(ref total, current + i, current) != current);
        }

    This is not trivial code, but it illustrates a possible use of CompareExchange().
    What we are doing is first checking to see if we succeed on the first try, and if so, great!  If not, we create a SpinWait and then repeat the process of SpinOnce(), grab a fresh snapshot, and retry until CompareExchange() succeeds.  You may wonder why we don’t use a simple do-while here; the reason is that it’s more efficient to create the SpinWait only once we absolutely know we need one.

    Though not as simple (or maintainable) as a plain lock, this will perform better in many situations.  Comparing an unlocked (and wrong) version, a version using lock, and the Interlocked version of the code, we get the following average times for multiple iterations of adding the sum of 100,000 numbers:

        Unlocked money average time: 2.1 ms
        Locked money average time: 5.1 ms
        Interlocked money average time: 3 ms

    So the Interlocked.CompareExchange(), while heavier to code, came in lighter than the lock, offering a good compromise of safety and performance when we need to reduce contention.

    CompareExchange() – it’s not just for adding stuff…

    So that was one simple use of CompareExchange() in the context of adding double values -- which meant we couldn’t have used the simpler Interlocked.Add() -- but it has other uses as well.

    If you think about it, this really works anytime you want to create something new based on a current value without using a full lock.  For example, you could use it to create a simple lazy instantiation implementation.  In this case, we want to set the lazy instance only if the previous value was null:

        public static class Lazy<T> where T : class, new()
        {
            private static T _instance;

            public static T Instance
            {
                get
                {
                    // if current is null, we need to create new instance
                    if (_instance == null)
                    {
                        // attempt create, it will only set if previous was null
                        Interlocked.CompareExchange(ref _instance, new T(), (T)null);
                    }

                    return _instance;
                }
            }
        }

    So, if _instance == null, this will create a new T() and attempt to exchange it with _instance.  If _instance is not null, then it does nothing and we discard the new T() we created.

    This is a way to create lazy instances of a type where we are more concerned about locking overhead than creating an accidental duplicate which is not used.  In fact, the BCL implementation of Lazy<T> offers a similar thread-safety choice for Publication thread safety, where it will not guarantee only one instance was created, but it will guarantee that all readers get the same instance.

    Another possible use would be in concurrent collections.  Let’s say, for example, that you are creating your own brand new super stack that uses a linked list paradigm and is “lock free”.  We could use Interlocked.CompareExchange() to be able to do a lockless Push() which could be more efficient in multi-threaded applications where several threads are pushing and popping on the stack concurrently.

    Yes, there are already concurrent collections in the BCL (in .NET 4.0 as part of the TPL), but it’s a fun exercise!
    So let’s assume we have a node like this:

        public sealed class Node<T>
        {
            // the data for this node
            public T Data { get; set; }

            // the link to the next instance
            internal Node<T> Next { get; set; }
        }

    Then, perhaps, our stack’s Push() operation might look something like:

        public sealed class SuperStack<T>
        {
            private volatile Node<T> _head;

            public void Push(T value)
            {
                var newNode = new Node<T> { Data = value, Next = _head };

                if (Interlocked.CompareExchange(ref _head, newNode, newNode.Next) != newNode.Next)
                {
                    var spinner = new SpinWait();

                    do
                    {
                        spinner.SpinOnce();
                        newNode.Next = _head;
                    }
                    while (Interlocked.CompareExchange(ref _head, newNode, newNode.Next) != newNode.Next);
                }
            }

            // ...
        }

    Notice a similar paradigm here as with adding our doubles before.  What we are doing is creating the new Node with the data to push, and with a Next value being the original node referenced by _head.  This will create our stack behavior (LIFO – Last In, First Out).  Now, we have to set _head to refer to the newNode, but we must first make sure it hasn’t changed!

    So we check to see if _head has the same value we saved in our snapshot as newNode.Next, and if so, we set _head to newNode.  This is all done atomically, and the result is _head’s original value; as long as the original value was what we assumed it was with newNode.Next, then we are good and we set it without a lock!  If not, we SpinWait and try again.

    Once again, this is much lighter than locking in highly parallelized code with lots of contention.  If I compare the method above with a similar class using lock, I get the following results for pushing 100,000 items:

        Locked SuperStack average time: 6 ms
        Interlocked SuperStack average time: 4.5 ms

    So, once again, we can get more efficient than a lock, though there is the cost of added code complexity.  Fortunately for you, most of the concurrent collections you’d ever need are already created for you in the System.Collections.Concurrent namespace (here) – for more information, see my Little Wonders – The Concurrent Collections Part 1 (here), Part 2 (here), and Part 3 (here).

    Summary

    We’ve seen before how the Interlocked class can be used to safely and efficiently add, increment, decrement, read, and exchange values in a multi-threaded environment.  In addition to these, Interlocked CompareExchange() can be used to perform more complex logic without the need of a lock when lock contention is a concern.

    The added efficiency, though, comes at the cost of more complex code.  As such, the standard lock is often sufficient for most thread-safety needs.  But if profiling indicates you spend a lot of time waiting for locks, or if you just need a lock for something simple such as an increment, decrement, read, exchange, etc., then consider using the Interlocked class’s methods to reduce wait. Technorati Tags: C#,CSharp,.NET,Little Wonders,Interlocked,CompareExchange,threading,concurrency

    Read the article

  • SQL SERVER – Why Do We Need Master Data Management – Importance and Significance of Master Data Management (MDM)

    - by pinaldave
    Let me paint a picture of everyday life for you.  Let’s say you and your wife both have address books for your groups of friends.  There is definitely overlap between them, so that you both have the addresses for your mutual friends, and there are addresses that only you know, and some only she knows.  They also might be organized differently.  You might list your friend under “J” for “Joe” or even under “W” for “Work,” while she might list him under “S” for “Joe Smith” or under your name because he is your friend.  If you happened to trade, neither of you would be able to find anything! This is where data management would be very important.  If you were to consolidate into one address book, you would have to set rules about how to organize the book, and both of you would have to follow them.  You would also make sure that poor Joe doesn’t get entered twice under “J” and under “S.” This might be a familiar situation to you, whether you are thinking about address books, record collections, books, or even shopping lists.  Wherever there is a lot of data to consolidate, you are going to run into problems unless everyone is following the same rules. I’m sure that my readers can figure out where I am going with this.  What is SQL Server but a computerized way to organize data?  And Microsoft is making it easier and easier to get all your “addresses” into one place.  In the  2008 version of SQL they introduced a new tool called Master Data Services (MDS) for Master Data Management, and they have improved it for the new 2012 version. MDM was hailed as a major improvement for business intelligence.  You might not think that an organizational system is terribly exciting, but think about the kind of “address books” a company might have.  Many companies have lots of important information, like addresses, credit card numbers, purchase history, and so much more.  To organize all this efficiently so that customers are well cared for and properly billed (only once, not never or multiple times!) is a major part of business intelligence. MDM comes into play because it will comb through these mountains of data and make sure that all the information is consistent, accurate, and all placed in one database so that employees don’t have to search high and low and waste their time. MDM also has operational MDM functions.  This is not a redundancy.  Operational MDM means that when one employee updates one bit of information in the database, for example – updating a new address for a customer, operational MDM ensures that this address is updated throughout the system so that all departments will have the correct information. Another cool thing about MDM is that it features Master Data Services Configuration Manager, which is exactly what it sounds like.  It has a built-in “helper” that lets you set up your database quickly, easily, and with the correct configurations.  While talking about cool features, I can’t skip over the add-in for Excel.  This allows you to link certain data to Excel files for easier sharing and uploading. In summary, I want to emphasize that the scariest part of the database is slowly disappearing.  Everyone knows that a database – one consolidated area for all your data – is a good idea, but the idea of setting one up is daunting.  But SQL Server is making data management easier and easier with features like Master Data Services (MDS). 
Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: Master Data Services, MDM

    Read the article

  • How to monitor the total number of SQL Server logins

    - by Shiraz Bhaiji
    We have a SQL Server 2005 instance that is the backend of a web application. The application is partly SharePoint and partly web services accessing the database via Entity Framework. In Performance Monitor I am seeing an average of ca. 60 SQL logins per second (max 170), but the average number of logouts is less than 1 per second. Where can I see the total number of SQL Server logins? Does anyone have an idea what could be causing this?
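
    For the counting part, a sketch (assuming you can query the instance directly): Performance Monitor's "SQL Server: General Statistics" object exposes the Logins/sec and User Connections counters you are already looking at, and on SQL Server 2005 the current sessions can be counted per login with a DMV. As for the cause, a common suspect for a high login rate is connection pooling being disabled or defeated (e.g., by varying connection strings), though that is a guess from the symptom alone:

        -- Current user sessions, grouped by login (SQL Server 2005+ DMV)
        SELECT login_name, COUNT(*) AS sessions
        FROM sys.dm_exec_sessions
        WHERE is_user_process = 1
        GROUP BY login_name
        ORDER BY sessions DESC;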

    Read the article

  • My internet speed became slow at night

    - by FrozenKing
    My internet plan is 512 kbps unlimited, and I get an average speed of 64 kbps, but at night I used to get speeds of around 112 kbps. Recently, though, my night speed has dropped back to the daytime level. As I understand it, there is usually less traffic at night, so I should be getting the good speed as before. Because of the good night speed, I do my downloading and uploading at night, and my average download + upload per month is 60 or 70 GB. Could it be that my ISP is putting a restriction on my downloads and uploads? I am confused.

    Read the article

  • BitTorrent Myth

    - by Moon .
    In BitTorrent's statistics there is a field, "Total Ratio", which is the ratio between total downloads and uploads. I have heard that this ratio affects BitTorrent's performance: if the ratio is good, the BitTorrent network serves you at a higher priority, and if the ratio is low (fewer uploads), it serves you at average or below-average priority. Is there anything to that?

    Read the article

  • custom video icon for a single video file in windows 7 file explorer

    - by MrBrody
    Recently I found a video on the net (an .mp4 file), and when I had it on my computer with Windows 7, I noticed its thumbnail was not the average Windows 7 video thumbnail (which looks like a piece of video film with a random picture from the movie), but a custom thumbnail! Looking in the file properties did not help me find the right option to change the thumbnail... so I just wonder how it was done! Here is a picture: left, the custom thumbnail; right, the average thumbnail...

    Read the article

  • Possible reasons for high CPU load of taskmgr.exe process on VM?

    - by mjn
    On a VMware virtual machine which has severe performance problems, I can see a constant average of 20+ percent CPU load for the TASKMGR.EXE (Task Manager) process. The apps running on this server show lower load, around 4 to 10 percent on average. The VM is running Windows Server 2003 Standard with 3.75 GB of assigned RAM. I suspect that the Task Manager CPU load has something to do with other VM instances on the VMware server, but I could not see a similar value on our internal ESXi systems (the problematic VM runs in the customer's IT environment).

    Read the article

  • Design pattern to keep track of UITableView rows' correspondence to underlying data in constant time.

    - by DenNukem
    When my model changes, I want to animate the changes in a UITableView by inserting/deleting rows. For that I need to know the ordinal of the given row (so I can construct an NSIndexPath), which I find hard to do in better-than-linear time. For example, consider that I have a list of addressbook entries which are manually sorted by the user, i.e. there is no ordering "key" that represents the sort order. There is also a corresponding UITableView that shows one row per addressbook entry. When the UITableView queries the datasource, I query the NSMutableArray populated with my entries and return the required data in constant time for each row.

    However, if there is a change in the underlying model, I am getting a notification like "Joe Smith, id#123 has been removed". Now I have a dilemma. A naive approach would be to scan the array, determine the index at which Joe Smith is, and then ask the UITableView to remove that precise row from the view, also removing it from the array. However, the scan will take linear time to finish. Now, I could have an NSDictionary which allows me to find Joe Smith in constant time, but that doesn't do me a lot of good, because I still need to find his ordinal index within the array in order to instruct the UITableView to remove that row, which is again a linear search. I could further decide to store each object's ordinal inside the object itself to make it constant, but it will become outdated after the first such update, as all subsequent index values will have changed due to the removal of an object.

    So what is the correct design pattern to accurately reflect model changes in the UITableView in constant (or at least logarithmic) time?
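
    One candidate structure worth sketching (my suggestion, hedged: NSMutableOrderedSet shipped after this question's era, its index lookup is documented as cheaper than NSArray's linear scan, and removal still pays an array-style shift; entries, joeSmith and tableView are placeholder names): it keeps the user's manual order like an array while backing membership and index lookup with hashing:

        // Sketch: ordered set = array-like ordering + hashed index lookup.
        NSMutableOrderedSet *entries =
            [NSMutableOrderedSet orderedSetWithArray:addressBookEntries];

        // "Joe Smith, id#123 has been removed"
        NSUInteger row = [entries indexOfObject:joeSmith];  // no linear scan
        if (row != NSNotFound) {
            [entries removeObjectAtIndex:row];
            NSIndexPath *path = [NSIndexPath indexPathForRow:row inSection:0];
            [tableView deleteRowsAtIndexPaths:@[path]
                             withRowAnimation:UITableViewRowAnimationAutomatic];
        }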

    Read the article

  • XML RPC c# Call returns array of strings and int

    - by chillconsulting
    I'm doing an XML-RPC call in ASP.NET with C#. The call returns an array called userinfo containing strings and integers, and I can't seem to figure out how to parse the data into string and int objects... The only thing that compiles is if I make it an Object in the struct. The int returns fine and is referenced easily. Any help would be appreciated - this is what I got:

        public void Page_Load(object sender, EventArgs e)
        {
            IValidateSSO proxy = XmlRpcProxyGen.Create<IValidateSSO>();
            UserInfoSSOValue ret2 = proxy.UserInfoSSO(ssoAuth, ssoValue);
            int i = ret2.isonid;
        }

        public struct UserInfoSSOValue
        {
            public Object userinfo;
            public int isonid;
        }

        [XmlRpcUrl("host")]
        public interface IValidateSSO : IXmlRpcProxy
        {
            [XmlRpcMethod("sso.session_userinfo")]
            UserInfoSSOValue UserInfoSSO(string ssoauth, string sid);
        }

    Here is what the XML-RPC call returns (I know it's documented in PHP... I'm trying to implement it in C#):

        sso.session_userinfo(string $ssoauth, string $sid)
        string $ssoauth - authentication string (assigned by ONID support)
        string $sid - sid to get info on

        Returns:
        Array
        (
            [userinfo] => Array
                (
                    [lastname] => College
                    [expire_time] => 1118688011
                    [osuuid] => 12345678901
                    [sid_length] => 3600
                    [ip] => 10.0.0.1
                    [sid] => WiT7ppAEUKS3rJ2lNz3Ue64sGPxnnLL0
                    [username] => collegej
                    [firstname] => Joe
                    [fullname] => College, Joe Student
                    [email] => joe[email protected]
                    [create_time] => 1118684410
                )
            [isonid] => 1
        )

        array userinfo - array containing basic user information
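
    A sketch of the usual answer for this library (the attributes indicate CookComputing's XML-RPC.NET, so this assumes that package): an XML-RPC <struct> maps to XmlRpcStruct, a dictionary you can index by key, casting each entry to the wire type you expect:

        using CookComputing.XmlRpc;

        public struct UserInfoSSOValue
        {
            public XmlRpcStruct userinfo;   // XML-RPC struct -> key/value map
            public int isonid;
        }

        // Usage sketch: pull out the fields you need, one cast per entry.
        UserInfoSSOValue ret2 = proxy.UserInfoSSO(ssoAuth, ssoValue);
        string username = (string)ret2.userinfo["username"];
        string fullname = (string)ret2.userinfo["fullname"];
        int createTime  = (int)ret2.userinfo["create_time"];   // cast depends on wire type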

    Read the article

  • Prefuse: Reloading of XML files

    - by John
    Hello all, I am new to the prefuse visualization toolkit and have a couple of general questions. For my purpose, I would like to perform an initial visualization using prefuse (graphview / graphml). Once rendered, upon a user click of a node, I would like to completely reload a new XML file for a new visualization. I want to do this to allow me to "pre-package" graphs for display.

    For example: if I search for Ted, I would like an XML file relating to Ted to load and render a display. Now in the display I see that Ted has associated nodes called Bill and Joe. When I click Joe, I would like to clear the display and load an XML file associated with Joe. And so on.

    I have looked into loading one very large XML file containing all node and node-relationship info and allowing prefuse to handle this using the hops from one level to another. However, I am sure that system performance issues would eventually arise due to the size of the data. Thanks in advance for any help, John
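
    A sketch of the reload (hedged heavily: the calls follow prefuse's bundled GraphView demo, and the group name "graph" and action name "layout" are assumptions about your setup): read the new GraphML file, remove the old data group, and register the new graph under the same name:

        import prefuse.Visualization;
        import prefuse.data.Graph;
        import prefuse.data.io.GraphMLReader;

        public final class GraphReloader {
            // Sketch: swap the displayed graph for a freshly loaded GraphML file.
            public static void reload(Visualization vis, String path) throws Exception {
                Graph g = new GraphMLReader().readGraph(path); // load the new file
                vis.removeGroup("graph");      // drop the old nodes/edges
                vis.addGraph("graph", g);      // register the replacement
                vis.run("layout");             // re-run whatever action list you use
            }
        }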

    Read the article

  • Linking session state between servlets and EJBs?

    - by wilth
    Hello, I have servlets (in a web module) that access stateless EJB beans (in an EJB module). The EJB module is built using SEAM. Users can have different roles and the EJB services check this using Seam's Identity. I also use a customized Authenticator (although this might not be relevant here). I noticed problems with this approach and I'm suspecting that the session context in the servlets is not "linked" with the session context in the EJB beans. What I think happens is something like: User Joe access servlet A and is assigned Session W1. Servlet A calls a login function on an EJB, using the EJB session E1. Later, user Mary accesses servlet A and is assigned Session W2. When calling the EJBs, however, the EJB session E1 is used and therefore Mary is authenticated as Joe. What also happens is that when Joe is calling the servlet twice in rapid succession, the same session W1 is used, but two different sessions E1 and E2 in the business layer, causing errors. I might be wrong in my suspicion, but maybe I'm actually expecting these "sessions" to be linked together while they in fact are not. If this is true, is there any way of achieving this? I could - of course - use stateful beans and save the authentication information in the beans, but this would break the "Identity" concept of Seam (and in general, it would be preferable to be able to use the Session context in my EJB beans). Any help and pointers are very welcome - thanks! Technology: EJB3, Seam 2.1.2. The servlets are actually the server-side of a GWT app, although I don't think this matters much. I'm using JBoss 5.

    Read the article

  • Incorrect value for sum of two NSIntegers

    - by Antonio
    Hi everybody: I'm sure I'm missing something and the answer is very simple, but I can't seem to understand why this is happening. I'm trying to make an average of dates:

        NSInteger runningSum = 0;
        NSInteger count = 0;
        for (EventoData *event in self.events) {
            NSDate *dateFromString = [[NSDate alloc] init];
            if (event.date != nil) {
                dateFromString = [dateFormatter dateFromString:event.date];
                runningSum += (NSInteger)[dateFromString timeIntervalSince1970];
                count += 1;
            }
        }
        if (count > 0) {
            NSLog(@"average is: %@", [NSDate dateWithTimeIntervalSince1970:(NSInteger)((CGFloat)runningSum / count)]);
        }

    Everything seems to work OK, except for runningSum += (NSInteger)[dateFromString timeIntervalSince1970], which gives an incorrect result. If I put a breakpoint when taking the average of two equal dates (2009-10-10, for example, which is a time interval of 1255125600), runningSum is -1784716096 instead of the expected 2510251200. I've tried using NSNumber and I get the same result. Can anybody point me in the right direction? Thanks! Antonio
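
    The numbers in the question pin this down as 32-bit overflow (my diagnosis, not from the post): on 32-bit iOS, NSInteger is a 32-bit int, and 2510251200 exceeds 2^31 - 1 = 2147483647, so the sum wraps around; 2510251200 - 2^32 = -1784716096, exactly the value observed. A sketch of the fix is to accumulate in a 64-bit (or floating-point) type:

        // Sketch: accumulate the time intervals in 64 bits (or as a double).
        long long runningSum = 0;   // or: NSTimeInterval (a double)
        NSInteger count = 0;
        for (EventoData *event in self.events) {
            if (event.date != nil) {
                NSDate *d = [dateFormatter dateFromString:event.date];
                runningSum += (long long)[d timeIntervalSince1970];
                count += 1;
            }
        }
        if (count > 0) {
            NSLog(@"average is: %@",
                  [NSDate dateWithTimeIntervalSince1970:(NSTimeInterval)runningSum / count]);
        }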

    Read the article

  • SQL View with Data from two tables

    - by Alex
    Hello! I can't seem to crack this - I have two tables (Persons and Companies), and I'm trying to create a view that:

    1) shows all persons
    2) also returns companies by themselves once, regardless of how many persons are related to them
    3) orders by name across both tables

    To clarify, some sample data:

        (Table: Companies)
        Id  Name
        1   Banana
        2   ABC Inc.
        3   Microsoft
        4   Bigwig

        (Table: Persons)
        Id  Name       RelatedCompanyId
        1   Joe Smith  3
        2   Justin
        3   Paul Rudd  4
        4   Anjolie
        5   Dustin     4

    The output I'm looking for is something like this:

        Name       PersonName  CompanyName  RelatedCompanyId
        ABC Inc.   NULL        ABC Inc.     NULL
        Anjolie    Anjolie     NULL         NULL
        Banana     NULL        Banana       NULL
        Bigwig     NULL        Bigwig       NULL
        Dustin     Dustin      Bigwig       4
        Joe Smith  Joe Smith   Microsoft    3
        Justin     Justin      NULL         NULL
        Microsoft  NULL        Microsoft    NULL
        Paul Rudd  Paul Rudd   Bigwig       4

    As you can see, the new "Name" column is ordered across both tables (the company names appear correctly in between the person names), and each company appears exactly once, regardless of how many people are related to it. Can this even be done in SQL?!

    P.S. I'm trying to create a view so I can use this later for easy data retrieval and fulltext indexing, and to make the programming side simpler by just querying the view.
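
    It can: a sketch (table and column names taken from the question; the ORDER BY belongs in the query, since a plain view does not guarantee row order) is to UNION ALL one row per company with one row per person, joining each person to their company:

        CREATE VIEW PersonsAndCompanies AS
            SELECT c.Name             AS Name,
                   NULL               AS PersonName,
                   c.Name             AS CompanyName,
                   NULL               AS RelatedCompanyId
            FROM Companies c
            UNION ALL
            SELECT p.Name, p.Name, co.Name, p.RelatedCompanyId
            FROM Persons p
            LEFT JOIN Companies co ON co.Id = p.RelatedCompanyId;

        -- Ordering happens at query time:
        SELECT * FROM PersonsAndCompanies ORDER BY Name;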

    Read the article

  • C# Memoization of functions with arbitrary number of arguments

    - by Lirik
    I'm trying to create a memoization interface for functions with an arbitrary number of arguments, but I'm failing miserably. The first thing I tried was to define an interface for a function which gets memoized automatically upon execution:

        class EMAFunction : IFunction
        {
            Dictionary<List<object>, List<object>> map;

            class EMAComparer : IEqualityComparer<List<object>>
            {
                private int _multiplier = 97;

                public bool Equals(List<object> a, List<object> b)
                {
                    List<object> aVals = (List<object>)a[0];
                    int aPeriod = (int)a[1];
                    List<object> bVals = (List<object>)b[0];
                    int bPeriod = (int)b[1];
                    return (aVals.Count == bVals.Count) && (aPeriod == bPeriod);
                }

                public int GetHashCode(List<object> obj)
                {
                    // Don't compute hash code on null object.
                    if (obj == null)
                    {
                        return 0;
                    }
                    // Get length.
                    int length = obj.Count;
                    List<object> vals = (List<object>)obj[0];
                    int period = (int)obj[1];
                    return (_multiplier * vals.GetHashCode() * period.GetHashCode()) + length;
                }
            }

            public EMAFunction()
            {
                NumParams = 2;
                Name = "EMA";
                map = new Dictionary<List<object>, List<object>>(new EMAComparer());
            }

            #region IFunction Members

            public int NumParams { get; set; }
            public string Name { get; set; }

            public object Execute(List<object> parameters)
            {
                if (parameters.Count != NumParams)
                    throw new ArgumentException("The num params doesn't match!");

                if (!map.ContainsKey(parameters))
                {
                    //map.Add(parameters,
                    List<double> values = new List<double>();
                    List<object> asObj = (List<object>)parameters[0];
                    foreach (object val in asObj)
                    {
                        values.Add((double)val);
                    }
                    int period = (int)parameters[1];
                    asObj.Clear();
                    List<double> ema = TechFunctions.ExponentialMovingAverage(values, period);
                    foreach (double val in ema)
                    {
                        asObj.Add(val);
                    }
                    map.Add(parameters, asObj);
                }
                return map[parameters];
            }

            public void ClearMap()
            {
                map.Clear();
            }

            #endregion
        }

    Here are my tests of the function:

        private void MemoizeTest()
        {
            DataSet dataSet = DataLoader.LoadData(DataLoader.DataSource.FROM_WEB, 1024);
            List<String> labels = dataSet.DataLabels;
            Stopwatch sw = new Stopwatch();
            IFunction emaFunc = new EMAFunction();
            List<object> parameters = new List<object>();
            int numRuns = 1000;
            long sumTicks = 0;

            parameters.Add(dataSet.GetValues("open"));
            parameters.Add(12);

            // First call
            for (int i = 0; i < numRuns; ++i)
            {
                emaFunc.ClearMap(); // remove any memoization mappings
                sw.Start();
                emaFunc.Execute(parameters);
                sw.Stop();
                sumTicks += sw.ElapsedTicks;
            }
            Console.WriteLine("Average ticks not-memoized " + (sumTicks / numRuns));

            sumTicks = 0;
            // Repeat call
            for (int i = 0; i < numRuns; ++i)
            {
                sw.Start();
                emaFunc.Execute(parameters);
                sw.Stop();
                sumTicks += sw.ElapsedTicks;
            }
            Console.WriteLine("Average ticks memoized " + (sumTicks / numRuns));
        }

    The performance is confusing me... I expected the memoized function to be faster, but it didn't work out that way:

        Average ticks not-memoized 106,182
        Average ticks memoized 198,854

    I tried doubling the data instances to 2048, but the results were about the same:

        Average ticks not-memoized 232,579
        Average ticks memoized 446,280

    I did notice that it was correctly finding the parameters in the map and going directly to the map, but the performance was still slow... I'm open either to troubleshooting help with this example or, if you have a better solution to the problem, please let me know what it is.
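
    Two things in the posted code would skew exactly these measurements (offered as a hedged review, since I can't run DataLoader here): Stopwatch.Start() resumes without resetting, so sumTicks accumulates ever-growing cumulative totals rather than per-call times, and Execute() mutates parameters[0] in place (asObj.Clear() rewrites the very list the key was hashed on). A minimal sketch of the conventional shape avoids both, keying on untouched inputs and resetting the watch per measurement:

        // Sketch: a generic memoizer that never mutates its inputs.
        static Func<TArg, TResult> Memoize<TArg, TResult>(Func<TArg, TResult> f)
        {
            var cache = new Dictionary<TArg, TResult>();
            return arg =>
            {
                TResult result;
                if (!cache.TryGetValue(arg, out result))
                {
                    result = f(arg);
                    cache.Add(arg, result);
                }
                return result;
            };
        }

        // Timing each call independently:
        sw.Reset();
        sw.Start();
        emaFunc.Execute(parameters);
        sw.Stop();
        sumTicks += sw.ElapsedTicks;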

    Read the article

  • add Constraint on database with trigger

    - by Am1rr3zA
    Hi, I have 3 tables (Student, Course, student_course_choose (which has a grade field)). I defined a view on these 3 tables that gets me the average grade of each student. I want to put a constraint (implemented with a trigger) on this view (or on the table that needs it) to limit the average of each student to between 13 and 18. I read somewhere that I must use a per-statement trigger (instead of per-row), because when I decrease a grade of a particular student, his/her average may momentarily drop below 13, and I shouldn't get an error if, later in the same statement, I increase the grade of another of his/her courses. How must I write this trigger? (I want to implement aprh for testing the trigger.) Note: I can write it in SQL Server, Oracle or MySQL; it makes no difference to me.
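
    A sketch in T-SQL (my pick of the three dialects, because SQL Server triggers fire once per statement, which is exactly the per-statement behavior described; table and column names are assumptions): after any change, reject the whole statement if an affected student's average has left the 13-18 range:

        CREATE TRIGGER trg_avg_grade_range
        ON student_course_choose
        AFTER INSERT, UPDATE, DELETE
        AS
        BEGIN
            IF EXISTS (
                SELECT scc.student_id
                FROM student_course_choose scc
                WHERE scc.student_id IN (SELECT student_id FROM inserted
                                         UNION
                                         SELECT student_id FROM deleted)
                GROUP BY scc.student_id
                HAVING AVG(CAST(scc.grade AS FLOAT)) NOT BETWEEN 13 AND 18
            )
            BEGIN
                RAISERROR('Student average grade must stay between 13 and 18.', 16, 1);
                ROLLBACK TRANSACTION;
            END
        END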

    Read the article

  • How do I handle a low job offer for an entry level position?

    - by user229269
    Hi guys! I recently graduated with an MS in CS, and I am excited because I just received a job offer from a company I really like for an entry-level SW engineer position. The thing is that, although salary is not my priority and I care much more about gaining experience, their offer is unfortunately way below what I expected. After I did some research, I realized that, compared to the average salary range for entry-level SW engineering positions in my area (one of the most expensive areas in the US), supposedly [X - Y]$ (where X is the lowest average and Y the highest), their offer is 20% below X! I wouldn't have a problem accepting an offer around X, but this one is even lower than the lowest. Can I counter-offer X, which is 20% more than what they offered me but at the same time is the minimum average? -- And mind you, I didn't even take into consideration the fact that I hold an MS degree, which in many cases yields 5-10% more pay.

    Read the article

  • how to solve nested list programs [closed]

    - by riya
    Write a function to get the most popular car. It accepts a list of car details as input and returns the most popular car's name along with its average rating. Each element of the car details list is a sublist that provides the following information about a car: (a) name of the car, (b) car price, (c) list of ratings the car obtained from various agencies. In case two cars have the same average rating, the car with the lesser price qualifies as the most popular car.

    Here's my solution:

        (define-struct cardetails ("name" price list of '(ratings)))
        (define car1 (make-cardetails "toyota" 123 '(1 2 3)))
        (define car2 (make-cardetails "santro" 321 '(2 2 3)))
        (define car3 (make-cardetails "toyota" 100 '(1 2 3)))
        (define cardetailslist (list (car1) (car2) (car 3)))
        (let loop ((count 0))
          (let (len (length cardetailslist))
            (if (< count len)
                (string-ref (string-ref n) 0)

    Now please tell me how to find the maximum average and display the car name. It's not a homework question; tomorrow is my test, and we have not been taught this concept in class, although it is very important from the test's point of view.
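
    A hedged sketch in Racket-style Scheme (the struct and helper names are mine, not the class's): average each car's ratings, then fold over the list keeping the better car, breaking ties on average by the lower price:

        ;; Sketch: most popular = highest average rating; ties go to the cheaper car.
        (define-struct cardetails (name price ratings))

        (define (average lst)
          (/ (apply + lst) (length lst)))

        (define (better? a b)
          (let ((ra (average (cardetails-ratings a)))
                (rb (average (cardetails-ratings b))))
            (or (> ra rb)
                (and (= ra rb)
                     (< (cardetails-price a) (cardetails-price b))))))

        (define (most-popular cars)
          (let ((winner (foldl (lambda (c best) (if (better? c best) c best))
                               (car cars)
                               (cdr cars))))
            (list (cardetails-name winner)
                  (average (cardetails-ratings winner)))))

        ;; (most-popular (list (make-cardetails "toyota" 123 '(1 2 3))
        ;;                     (make-cardetails "santro" 321 '(2 2 3))))
        ;; => ("santro" 7/3)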

    Read the article

  • Either .each do or .all isn't working how I think it should

    - by user1299656
    So whenever someone rates a shop, I want the Shop model to calculate its new average rating and store that in the database (instead of calculating the average every time someone looks at it). So I wrote the segment of code that follows, and it doesn't work. The loop always iterates exactly once, no matter how many shop_ratings exist in the database that have the shop's id as their shop_id. I played around with it a bit and found that every time a new rating is submitted, the function is called successfully, but it only runs the loop once and sets the average to what the first rating was. I don't know if the "query" that sets the ratings variable is wrong or if it's the loop that's wrong.

        class Shop < ActiveRecord::Base
          has_many :shop_ratings
          attr_accessible :name, :latitude, :longitude

          validates_presence_of :name
          validates_presence_of :latitude
          validates_presence_of :longitude

          def distance_to(lat, long)
            return (self.longitude - long) + (self.latitude - lat)
          end

          def find_average
            total = 0
            count = 0
            ratings = ShopRating.all(:conditions => {:shop_id => id})
            ratings.each do |submission|
              total = total + submission.rating
              count = count + 1
            end
            update_attribute :average_rating, total/count
          end
        end
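
    Two hedged observations (the first is the likely headline fix): Rails can compute the mean in SQL through the has_many association, which sidesteps both the manual loop and the integer division in total/count (Integer#/ truncates, so integer ratings always round the stored average down):

        # Sketch: let ActiveRecord compute the mean (SQL AVG) over the association.
        def find_average
          avg = shop_ratings.average(:rating)       # nil when there are no ratings
          update_attribute :average_rating, avg.to_f if avg
        end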

    Read the article

  • Handling a binary operation that makes sense only for part of a hierarchy.

    - by usersmarvin_
    I have a hierarchy, which I'll simplify greatly, of implementations of an interface Value. Assume that I have two implementations, NumberValue and StringValue. There is an average operation which only makes sense for NumberValue, with the signature NumberValue average(NumberValue numberValue){ ... } At some point after creating such variables and using them in various collections, I need to average a collection which I know contains only NumberValues. There are three possible ways of doing this, I think:

    1. Very complicated generic signatures which preserve the type info at compile time (what I'm doing now, and it results in hard-to-maintain code)
    2. Moving the operation to the Value level, throwing an UnsupportedOperationException for StringValue, and casting for NumberValue
    3. Casting at the point where I know for sure that I have a NumberValue, using slightly less complicated generics to ensure this

    Does anybody have any better ideas, or a recommendation on OOP best practices?
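
    A sketch of a fourth option in Java (type names follow the question; the static-method placement is my assumption): keep average() off the base interface entirely and let the collection's element type carry the proof, so only collections already typed to NumberValue can be averaged, with no cast and no runtime exception:

        import java.util.Collection;

        interface Value { }

        final class StringValue implements Value { /* ... */ }

        final class NumberValue implements Value {
            private final double value;
            NumberValue(double value) { this.value = value; }

            // Averaging is defined only where it makes sense.
            static NumberValue average(Collection<NumberValue> numbers) {
                double sum = 0;
                for (NumberValue n : numbers) sum += n.value;
                return new NumberValue(sum / numbers.size());
            }
        }

        // Usage: a Collection<NumberValue> compiles; a Collection<Value> is
        // rejected at compile time, so the StringValue case never arises.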

    Read the article

  • OOP PHP simple question

    - by Tristan
    Hello, I'm new to OOP in PHP. Does this seem correct?

        class whatever
        {
            Function Maths()
            {
                $this->sql->query($requete);
                $i = 0;
                while ($val = mysql_fetch_array($this)) {
                    $tab[i][average] = $val['average'];
                    $tab[i][randomData] = $val['sum'];
                    $i = $i + 1;
                }
                return $tab;
            }
        }

    I want to access the data contained in the array:

        $foo = new whatever();
        $foo->Maths();
        for ($i; $i <= endOfTheArray; i++) {
            echo Maths->tab[i][average];
            echo Maths->tab[i][randomData];
        }

    Thank you ;)
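
    For contrast, a hedged sketch of how this is usually written (class and column names kept from the question; the mysql_* API is what the post uses, though it was deprecated long ago): fetch rows from the query's result resource rather than from $this, use $-prefixed variables with quoted string keys, and iterate the returned array with foreach:

        <?php
        class Whatever
        {
            public function maths($requete)
            {
                $result = mysql_query($requete);   // the result resource, not $this
                $tab = array();
                while ($val = mysql_fetch_array($result)) {
                    $tab[] = array(                // append; no manual counter needed
                        'average'    => $val['average'],
                        'randomData' => $val['sum'],
                    );
                }
                return $tab;                       // the caller keeps the return value
            }
        }

        $foo = new Whatever();
        $rows = $foo->maths("SELECT ...");         // hypothetical query string
        foreach ($rows as $row) {
            echo $row['average'], ' ', $row['randomData'], "\n";
        }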

    Read the article
