Search Results

Search found 17045 results on 682 pages for 'high cpu usage'.

Page 20 of 682

  • Checking up remotely on Website Usage

    - by Raj More
    A client of mine has a pure HTML website that was built in the dark ages. They want me to find out where their users are coming from, how many unique users there are, and so on. They want to know whether the site is being used enough for them to invest the money into renovating it. I am remote from their site and do not have access to their web server. Is there something like ComScore for small sites that I can use to check their usage statistics?

    Read the article

  • Usage Tracking for Windows desktop applications ...

    - by sdaas
    Hi, I am looking for some frameworks that can be used to collect usage information for Windows desktop applications and analyze it. For example, I would like to be able to answer questions like (a) how many times do people use this application in a day, and (b) which are their favorite menu items? I looked briefly at Google Analytics and Omniture SiteCatalyst, but they seem to work only on web applications. Thanks SD
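
    As a rough illustration of what such a framework typically records, here is a minimal, hypothetical sketch in Python of a local event logger that counts application launches and menu clicks. The file path, event names, and JSON-lines format are assumptions made for the example, not part of any particular product.

        # Hypothetical sketch: append usage events (launches, menu clicks) as JSON lines
        # to a local file, which can later be aggregated or uploaded for analysis.
        import json
        import os
        import time

        LOG_PATH = os.path.expanduser("~/myapp_usage.log")  # assumed location

        def log_event(name, **details):
            event = {"ts": time.time(), "event": name, **details}
            with open(LOG_PATH, "a", encoding="utf-8") as f:
                f.write(json.dumps(event) + "\n")

        def daily_counts():
            # Aggregate events per day, e.g. to answer "how often is the app used per day?"
            counts = {}
            with open(LOG_PATH, encoding="utf-8") as f:
                for line in f:
                    event = json.loads(line)
                    day = time.strftime("%Y-%m-%d", time.localtime(event["ts"]))
                    counts[day] = counts.get(day, 0) + 1
            return counts

        log_event("app_start")
        log_event("menu_click", item="File/Open")
        print(daily_counts())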

    Read the article

  • Why is Python used for high-performance/scientific computing (but Ruby isn't)?

    - by Cyclops
    There's a quote from a PyCon 2011 talk that goes: At least in our shop (Argonne National Laboratory) we have three accepted languages for scientific computing. In this order they are C/C++, Fortran in all its dialects, and Python. You’ll notice the absolute and total lack of Ruby, Perl, Java. It was in the more general context of high-performance computing. Granted, the quote is only from one shop, but another question about languages for HPC also lists Python as one to learn (and not Ruby). Now, I can understand C/C++ and Fortran being used in that problem space (and Perl/Java not being used). But I'm surprised that there would be a major difference in Python and Ruby use for HPC, given that they are fairly similar. (Note - I'm a fan of Python, but have nothing against Ruby). Is there some specific reason why the one language took off? Is it about the libraries available? Some specific language features? The community? Or maybe just historical contingency, and it could have gone the other way?
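
    One concrete way to see the "libraries available" argument is the NumPy/SciPy stack, where whole-array expressions are dispatched to compiled code. The snippet below is a minimal, hedged illustration of that style (NumPy only, with arbitrary array sizes), not a claim about why any particular lab chose Python.

        # Minimal NumPy sketch: whole-array arithmetic runs in compiled loops,
        # which is often cited as a large part of Python's appeal for numeric work.
        import numpy as np

        x = np.linspace(0.0, 1.0, 1_000_000)   # a million evenly spaced samples
        y = np.sin(2 * np.pi * x)               # vectorized, no Python-level loop
        print(y.mean(), y.std())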

    Read the article

  • Approximate Number of CPU Cycles for Various Operations

    - by colordot
    I am trying to find a reference for approximately how many CPU cycles various operations require. I don't need exact numbers (as this is going to vary between CPUs), but I'd like something relatively credible that gives ballpark figures that I could cite in discussion with friends. As an example, we all know that floating point division takes more CPU cycles than, say, doing a bitshift. I'd guess that the difference is that the division is around 100 cycles, whereas a shift is 1, but I'm looking for something to cite to back that up. Can anyone recommend such a resource?

    Read the article

  • Quick CPU ring mode protection question

    - by b-gen-jack-o-neill
    Hi, me again :) I am very curious about messing with hardware, but my top-level "messing" so far has been linked or inline assembler in a C program. If my understanding of the CPU and ring modes is right, I cannot directly access some low level CPU features from a user mode app, like disabling interrupts or changing protected mode segments, so I must use system calls to do everything I want. But, if I am right, drivers can run in ring 0. I actually don't know much about drivers, but this is what I am asking about. I just want to know: is learning how to write your own drivers and then calling them the way I should go to do what I described? I know I could write a whole new OS (at least to some point), but what I exactly want to do is access some low level features of the hardware from a standard Windows application. So, is a driver the way to go?

    Read the article

  • SQL SERVER – SSMS: Disk Usage Report

    - by Pinal Dave
    Let us start with humor! I think with this series on various reports we have come to a logical point. We have covered all the reports at the server level, meaning the reports we saw were targeted towards activities related to instance level operations. These are mostly like how a doctor diagnoses a patient. At this point I am reminded of a dialog which I read somewhere:

    Patient: Doc, it hurts when I touch my head.
    Doc: Ok, go on. What else have you experienced?
    Patient: It hurts even when I touch my eye, it hurts when I touch my arms, it even hurts when I touch my feet, etc.
    Doc: Hmmm ...
    Patient: I feel it hurts when I touch anywhere in my body.
    Doc: Ahh ... now I get it. You need a plaster on your finger, John.

    Sometimes the server level gives an indicator of what is happening in the system, but we need to get to the root cause for a specific database. So this is the first blog in the series where we start discussing database level reports. To launch database level reports, expand the selected server in Object Explorer, expand the Databases folder, and then right-click any database for which we want to look at reports. From the menu, select Reports, then Standard Reports, and then any of the database level reports. In this blog we will talk about four "disk" reports because they are similar: Disk Usage, Disk Usage by Top Tables, Disk Usage by Table, and Disk Usage by Partition.

    Disk Usage

    This report shows several pieces of information about the database. Let us discuss them one by one. We have divided the output into 5 different sections.

    Section 1 shows the high level summary of the database. It shows the space used by the database files (mdf and ldf). Under the hood, the report uses various DMVs and DBCC commands, in particular sys.data_spaces and DBCC SHOWFILESTATS.

    Sections 2 and 3 are pie charts, one for data file allocation and another for the transaction log file. The pie chart for "Data Files Space Usage (%)" shows the space consumed by data and indexes within the space allocated to the SQL Server database, and the unallocated space which is allocated to the SQL Server database but not yet filled with anything. "Transaction Log Space Usage (%)" uses DBCC SQLPERF(LOGSPACE) and shows how much empty space we have in the physical transaction log file.

    Section 4 shows the data from the Default Trace and looks at Event IDs 92, 93, 94 and 95, which are for "Data File Auto Grow", "Log File Auto Grow", "Data File Auto Shrink" and "Log File Auto Shrink" respectively. Here is an expanded view of that section. If the default trace is not enabled, then this section is replaced by the message "Trace Log is disabled", as highlighted below.

    Section 5 of the report uses DBCC SHOWFILESTATS to get its information. Here is the enhanced version of that section. This shows the physical layout of the file.

    In case you have In-Memory objects in the database (from SQL Server 2014), the report would show information about those as well. Here is the screenshot taken for a different database, which has an In-Memory table. I have highlighted the new things which are only shown for an in-memory database. The new sections highlighted above use sys.dm_db_xtp_checkpoint_files, sys.database_files and sys.data_spaces. The new type for In-Memory OLTP is 'FX' in sys.data_spaces.

    The next set of reports is targeted at getting information about a table and its storage. These reports can answer questions like: Which is the biggest table in the database? How many rows do we have in a table? Is there any table which has a lot of reserved space but is unused? Which partition of the table has the most data?

    Disk Usage by Top Tables

    This report provides detailed data on the utilization of disk space by the top 1000 tables within the database. The report does not provide data for memory optimized tables.

    Disk Usage by Table

    This report is the same as the earlier report, with a few differences: the first report shows only 1000 rows, and the first report orders by values in the DMV sys.dm_db_partition_stats whereas the second orders by the name of the table. Both of the reports have an interactive sort facility. We can click on any column header and change the sorting order of the data.

    Disk Usage by Partition

    This report shows the distribution of the data in a table based on the partitions in the table. It is very similar to the previous output, now with the partition details. Here is the query taken from Profiler:

        SELECT row_number() OVER (ORDER BY a1.used_page_count DESC, a1.index_id) AS row_number,
               (dense_rank() OVER (ORDER BY a5.name, a2.name)) % 2 AS l1,
               a1.OBJECT_ID,
               a5.name AS [schema],
               a2.name,
               a1.index_id,
               a3.name AS index_name,
               a3.type_desc,
               a1.partition_number,
               a1.used_page_count * 8 AS total_used_pages,
               a1.reserved_page_count * 8 AS total_reserved_pages,
               a1.row_count
        FROM sys.dm_db_partition_stats a1
        INNER JOIN sys.all_objects a2
                ON (a1.OBJECT_ID = a2.OBJECT_ID)
               AND a1.OBJECT_ID NOT IN (SELECT OBJECT_ID FROM sys.tables WHERE is_memory_optimized = 1)
        INNER JOIN sys.schemas a5
                ON (a5.schema_id = a2.schema_id)
        LEFT OUTER JOIN sys.indexes a3
                ON ((a1.OBJECT_ID = a3.OBJECT_ID) AND (a1.index_id = a3.index_id))
        WHERE (SELECT MAX(DISTINCT partition_number)
               FROM sys.dm_db_partition_stats a4
               WHERE (a4.OBJECT_ID = a1.OBJECT_ID)) >= 1
          AND a2.TYPE <> N'S'
          AND a2.TYPE <> N'IT'
        ORDER BY a5.name ASC, a2.name ASC, a1.index_id, a1.used_page_count DESC, a1.partition_number

    Using all of the above reports, you should be able to get the usage of the database files and also the space used by tables. I think this is a lot of disk information for a single blog, and I hope you have used these reports in the past to get data. Do let me know if you found anything interesting using these reports in your environments.

    Reference: Pinal Dave (http://blog.sqlauthority.com)
    Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Server Management Studio, SQL Tips and Tricks, T SQL
    Tagged: SQL Reports

    Read the article

  • High PageIOLatch_SH Waits with High Drive Idle times

    - by Marty Trenouth
    We are experiencing a high volume of PageIOLatch_SH waits on our database (row counts in the billions). However, our drive idle time percentage hovers around 50-60 percent, and CPU usage is nil. The Database Tuning Advisor gives no suggestions for optimization. The actual query plan from the single stored procedure used on the database puts the majority of the expense on index seek operations (yeah, I know these should be optimal). Does anyone have suggestions for how to increase throughput?

    Read the article

  • jQuery CPU usage

    - by nharry
    I am using jQuery to make an image scroll across my page horizontally. The only problem is that it uses a serious amount of CPU, up to 100% on a single core laptop in Firefox. What could cause this?

    jQuery:

        <script>
        jQuery(document).ready(function() {
            $(".speech").animate({backgroundPosition: "-6000px 0px"}, 400000, null);
        });
        </script>

    CSS:

        .speech {
            /*position:fixed;*/
            top: 0;
            left: 0px;
            height: 400px;
            width: 100%;
            z-index: -1;
            background: url(/images/speech.png) -300px -500px repeat-x;
            margin-right: auto;
            margin-left: auto;
            position: fixed;
        }

    HTML:

        <div class="speech"></div>

    Read the article

  • Measuring CPU time per-thread on Windows

    - by Eli Courtwright
    I'm developing a long-running multi-threaded Python application for Windows, and I want the process to know the CPU time that each of its threads has taken. I can get the overall times for the entire process with os.times(), but I need to know the per-thread times. I know that there are external tools such as the Sysinternals Process Explorer, but my program itself needs to have this information. If I were on Linux, I would look in the /proc filesystem, as described here. If I were writing C code, I'd use the GetThreadTimes call, as described here. So how can I accomplish this on Windows using Python?
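
    Since the question already points at GetThreadTimes, one possible approach is to call it through ctypes from the thread whose usage you want to measure. The sketch below is a minimal illustration that only covers the calling thread (reading another thread's times would additionally need a real handle, e.g. from OpenThread with that thread's id), and the helper names are made up for the example.

        # Sketch: per-thread CPU time on Windows via kernel32.GetThreadTimes (ctypes).
        import ctypes
        from ctypes import wintypes

        kernel32 = ctypes.windll.kernel32

        class FILETIME(ctypes.Structure):
            # FILETIME holds a 64-bit count of 100-nanosecond intervals.
            _fields_ = [("dwLowDateTime", wintypes.DWORD),
                        ("dwHighDateTime", wintypes.DWORD)]

        def _to_seconds(ft):
            return ((ft.dwHighDateTime << 32) | ft.dwLowDateTime) * 1e-7

        def current_thread_cpu_times():
            """Return (kernel_seconds, user_seconds) for the calling thread."""
            creation, exited, kernel, user = FILETIME(), FILETIME(), FILETIME(), FILETIME()
            handle = kernel32.GetCurrentThread()  # pseudo-handle; no CloseHandle needed
            ok = kernel32.GetThreadTimes(handle,
                                         ctypes.byref(creation), ctypes.byref(exited),
                                         ctypes.byref(kernel), ctypes.byref(user))
            if not ok:
                raise ctypes.WinError()
            return _to_seconds(kernel), _to_seconds(user)

        if __name__ == "__main__":
            sum(i * i for i in range(10 ** 6))  # burn a little CPU first
            print(current_thread_cpu_times())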

    Read the article

  • C/C++ memory usage API in Linux/Windows

    - by minjang
    I'd like to obtain memory usage information, both per process and system wide. In Windows, it's pretty easy: GetProcessMemoryInfo and GlobalMemoryStatusEx do these jobs well and very easily. For example, GetProcessMemoryInfo gives the "PeakWorkingSetSize" of the given process, and GlobalMemoryStatusEx returns the system wide available memory. However, I need to do it on Linux. I'm trying to find Linux system APIs that are equivalent to GetProcessMemoryInfo and GlobalMemoryStatusEx. I found getrusage, but 'ru_maxrss' (maximum resident set size) in struct rusage is just zero, as it is not implemented. Also, I have no idea how to get the system-wide free memory. As a current workaround, I'm using system("ps -p %my_pid -o vsz,rsz") and manually logging to a file, but it's dirty and not convenient for processing the data. I'd like to know some fancy Linux APIs for this purpose.
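
    On Linux, the usual substitutes are the /proc pseudo-files: /proc/self/status for per-process figures (VmRSS, VmHWM, VmPeak, reported in kB) and /proc/meminfo for system-wide figures (MemTotal, MemFree). The sketch below shows the idea in Python purely for brevity; a C or C++ program would read and parse exactly the same files (or call sysinfo(2) for the system-wide part), and the choice of fields here is an assumption about what is needed.

        # Sketch: read per-process and system-wide memory figures from /proc (Linux only).
        def _read_kv_kb(path, wanted):
            """Parse 'Key:  12345 kB' lines from a /proc file into a dict of ints (kB)."""
            values = {}
            with open(path) as f:
                for line in f:
                    key, _, rest = line.partition(":")
                    if key in wanted:
                        values[key] = int(rest.split()[0])
            return values

        def process_memory_kb():
            # VmRSS ~ current resident set, VmHWM ~ peak resident set, VmPeak ~ peak virtual size
            return _read_kv_kb("/proc/self/status", {"VmRSS", "VmHWM", "VmPeak"})

        def system_memory_kb():
            return _read_kv_kb("/proc/meminfo", {"MemTotal", "MemFree"})

        print(process_memory_kb())
        print(system_memory_kb())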

    Read the article

  • The downsides of using nginx as a primary web server?

    - by FractalizeR
    Hello. I've seen millions of websites using nginx as a proxying web server working together with Apache, but I've seen very few servers running nginx alone as their main web server. What are the main downsides of such a config? I can see some: the inability to use per-directory config files like .htaccess, so every configuration change has to be made in the main server config file and requires a server reload (although PECL htscanner can compensate for PHP settings); and the unavailability of mod_php for nginx, which can be compensated for by php-fpm, for example. What are the others? Why don't people just drop Apache and move to nginx or another lightweight solution? Maybe there are some special reasons?

    Read the article

  • Insufficient channel capacity of 1GBit

    - by Roman S
    There is a caching server (Varnish): it receives data from Amazon S3 on request, saves it for some time and serves it to clients. We have encountered the problem of insufficient channel capacity of 1Gbit: peak load within 4 hours completely chokes the channel. Server performance is sufficient for now. Approximately 4.5TB of data are transmitted per day, and more than 100TB are accumulated per month. The first thought that comes to mind is simply to add one more 1Gbit port and sleep peacefully until 2Gbit is no longer enough (which may happen quite quickly) or until one server is not able to handle the load, and then we just add new caching servers. But then we need a load balancer which will always send requests for the same URL to the same server (to avoid multiple copies of the same cached objects). Here are the questions: Does the balancer need bandwidth equal to the sum of the bandwidth of all the caching servers? What should we do if there are not enough ports on the balancer: add more balancers, or solve the problem by means of round-robin DNS? What are the standard approaches to such problems? Can anyone recommend hosting companies which can solve this problem? We are interested in the American and European markets.
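
    The "same URL always goes to the same cache server" requirement is usually met by hashing the request URL, often with a consistent-hash ring so that adding or removing a cache node remaps only a small share of URLs. Below is a minimal, generic sketch of that mapping in Python; the node names and replica count are made up, and this only illustrates the routing decision, not an actual proxy or balancer.

        # Sketch: deterministic URL -> cache-node mapping with a consistent-hash ring.
        import bisect
        import hashlib

        class HashRing:
            def __init__(self, nodes, replicas=100):
                self.replicas = replicas
                self._keys = []   # sorted hash positions
                self._ring = {}   # hash position -> node
                for node in nodes:
                    self.add(node)

            @staticmethod
            def _hash(key):
                return int(hashlib.md5(key.encode("utf-8")).hexdigest(), 16)

            def add(self, node):
                for i in range(self.replicas):
                    h = self._hash("%s#%d" % (node, i))
                    self._ring[h] = node
                    bisect.insort(self._keys, h)

            def node_for(self, url):
                h = self._hash(url)
                idx = bisect.bisect(self._keys, h) % len(self._keys)
                return self._ring[self._keys[idx]]

        ring = HashRing(["cache-1.example", "cache-2.example"])   # hypothetical nodes
        print(ring.node_for("/files/video/12345.mp4"))            # always maps to the same node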

    Read the article

  • When modern computers boot, what initial setup of RAM do they execute, and how does it exactly work?

    - by user272840
    I know the title reeks of confusion, and some of you might assume I am just wondering how the computer boots in general, but I'm not. I'll sort this out now:

    1. Onboard firmware is how almost all modern computer devices work, whether or not they use EFI/UEFI (even without "onboard firmware", older computers still employed bank switching, or similar methods with snap-in firmware, cartridges, etc.).

    2. On startup there are no "programs" running in the traditional sense yet, i.e. no kernel, OS, or user applications; all of the instructions, especially the very first instruction, are specified by the instruction pointer, I am guessing. How is the IP/PC/etc. set to first point to an address for a BIOS/firmware instruction, and how do the BIOS instructions map themselves out in memory prior to startup?

    3. Aside from MMIO, the BIOS uses certain RAM addresses to hold instructions. The big question comes in when I ask this ... how does the BIOS do this?

    Conclusion: I am assuming that with the very first instruction there is an initial hardware setup for the BIOS prior to complete OS bootup. What I want to know is whether it's hardware engineered to always work this way, whether there's another step in this bootup method I am missing or a gap of information I am unaware of, and how this all works from the very first instruction, including the RAM data itself.

    Read the article

  • Normalize or Denormalize in high traffic websites

    - by Inam Jameel
    What is the best practice for database design for high traffic websites like this one, Stack Overflow? Should one use a normalized database for record keeping, a denormalized design, or a combination of both? Is it sensible to design a normalized database as the main database for record keeping, to reduce redundancy, and at the same time maintain another, denormalized form of the database for fast searching? Or should the main database be denormalized, with normalized views built at the application level for fast database operations? Or is there some approach besides those mentioned above? What is the best practice for designing high traffic websites?

    Read the article

  • Mysqli results memory usage

    - by Poe
    Why does the memory consumption in this query keep rising as the internal pointer progresses through the loop? How can I make this more efficient and lean?

        $link = mysqli_connect(...);
        $result = mysqli_query($link, $query); // 403,268 rows in result set
        while ($row = mysqli_fetch_row($result)) {
            // print time, (get memory usage), -- row number
        }
        mysqli_free_result($result);
        mysqli_close($link);

        /*
        06:55:25 (1240336) -- Run query
        06:55:26 (39958736) -- Query finished
        06:55:26 (39958784) -- Begin loop
        06:55:26 (39960688) -- Row 0
        06:55:26 (45240712) -- Row 10000
        06:55:26 (50520712) -- Row 20000
        06:55:26 (55800712) -- Row 30000
        06:55:26 (61080712) -- Row 40000
        06:55:26 (66360712) -- Row 50000
        06:55:26 (71640712) -- Row 60000
        06:55:26 (76920712) -- Row 70000
        06:55:26 (82200712) -- Row 80000
        06:55:26 (87480712) -- Row 90000
        06:55:26 (92760712) -- Row 100000
        06:55:26 (98040712) -- Row 110000
        06:55:26 (103320712) -- Row 120000
        06:55:26 (108600712) -- Row 130000
        06:55:26 (113880712) -- Row 140000
        06:55:26 (119160712) -- Row 150000
        06:55:26 (124440712) -- Row 160000
        06:55:26 (129720712) -- Row 170000
        06:55:27 (135000712) -- Row 180000
        06:55:27 (140280712) -- Row 190000
        06:55:27 (145560712) -- Row 200000
        06:55:27 (150840712) -- Row 210000
        06:55:27 (156120712) -- Row 220000
        06:55:27 (161400712) -- Row 230000
        06:55:27 (166680712) -- Row 240000
        06:55:27 (171960712) -- Row 250000
        06:55:27 (177240712) -- Row 260000
        06:55:27 (182520712) -- Row 270000
        06:55:27 (187800712) -- Row 280000
        06:55:27 (193080712) -- Row 290000
        06:55:27 (198360712) -- Row 300000
        06:55:27 (203640712) -- Row 310000
        06:55:27 (208920712) -- Row 320000
        06:55:27 (214200712) -- Row 330000
        06:55:27 (219480712) -- Row 340000
        06:55:27 (224760712) -- Row 350000
        06:55:27 (230040712) -- Row 360000
        06:55:27 (235320712) -- Row 370000
        06:55:27 (240600712) -- Row 380000
        06:55:27 (245880712) -- Row 390000
        06:55:27 (251160712) -- Row 400000
        06:55:27 (252884360) -- End loop
        06:55:27 (1241264) -- Free
        */

    Read the article

  • CPU usage from top command

    - by kairyu
    How can I get a result like the following example? Any command or scripts?

    Snapshot:

        u1234 3971  1.9 0.0     0     0 ? Z 20:00 0:00 [php]
        u1234 4243  3.8 0.2 64128 43064 ? D 20:00 0:00 /usr/bin/php /home/u1234/public_html/index.php
        u1234 4289  5.3 0.2 64128 43064 ? R 20:00 0:00 /usr/bin/php /home/u1234/public_html/index.php
        u1234 4312  9.8 0.2 64348 43124 ? D 20:01 0:00 /usr/bin/php /home/u1234/public_html/index.php
        u1235 4368  0.0 0.0 30416  6604 ? R 20:01 0:00 /usr/bin/php /home/u1235/public_html/index.php
        u1236 4350  2.0 0.0 34884 13284 ? D 20:01 0:00 /usr/bin/php /home/u1236/public_html/index.php
        u1237 4353 13.3 0.1 51296 30496 ? S 20:01 0:00 /usr/bin/php /home/u1237/public_html/index.php
        u1238 4362 63.0 0.0     0     0 ? Z 20:01 0:00 [php]
        u1238 4366  0.0 0.1 51352 30532 ? R 20:01 0:00 /usr/bin/php /home/u1238/public_html/index.php
        u1239 4082  3.0 0.0     0     0 ? Z 20:00 0:01 [php]
        u1239 4361 26.0 0.1 49104 28408 ? R 20:01 0:00 /usr/bin/php /home/u1239/public_html/index.php
        u1240 1980  0.4 0.0     0     0 ? Z 19:58 0:00 [php]
        CPU TIME = 8459.71999999992

    I got this result from HostGator support :) I used "top -c", but it does not show "/home/u1239/public_html/index.php". Thanks
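
    If a programmatic snapshot (rather than a one-off shell command such as ps aux) is acceptable, one hedged option is the third-party psutil package. The sketch below is a generic illustration of collecting per-process CPU and command-line data; the one-second sampling interval and the filter on "php" are assumptions chosen to mirror the example output, not part of the original question.

        # Sketch: a ps-like per-process CPU snapshot using the psutil package.
        import time
        import psutil

        # First pass primes the CPU counters; per-process percentages need two samples.
        for p in psutil.process_iter():
            try:
                p.cpu_percent(interval=None)
            except (psutil.NoSuchProcess, psutil.AccessDenied):
                pass

        time.sleep(1.0)  # assumed sampling window

        snapshot = []
        for p in psutil.process_iter(["pid", "username", "cmdline"]):
            try:
                cpu = p.cpu_percent(interval=None)
            except (psutil.NoSuchProcess, psutil.AccessDenied):
                continue
            cmd = " ".join(p.info["cmdline"] or [])
            if "php" in cmd:  # assumed filter, mirroring the example output
                snapshot.append((p.info["username"], p.info["pid"], cpu, cmd))

        for user, pid, cpu, cmd in sorted(snapshot, key=lambda r: r[2], reverse=True):
            print("%-8s %6d %5.1f %s" % (user, pid, cpu, cmd))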

    Read the article

  • Is your team a high-performing team?

    As a child I can remember looking out of the car window as my father drove along the Interstate in Florida and seeing prisoners wearing bright orange jump suits, with prison guards keeping a watchful eye on them. The prisoners were taking part in a prison road gang. These road gangs were formed to help the state maintain the state highway infrastructure; the prisoners' primary responsibility is to pick up trash and debris from the roadway. This is a prime example of a work group or working group, as used by most prison systems in the United States. Work groups or working groups can be defined as a collection of individuals or entities working together to achieve a specific goal or accomplish a specific set of tasks. Typically these groups are only established for a short period of time and are dissolved once the desired outcome has been achieved. More often than not, group members feel as though they are expendable to the group, and some even dread being in the group at all.

    "A team is a small number of people with complementary skills who are committed to a common purpose, performance goals, and approach for which they are mutually accountable." (Katzenbach and Smith, 1993)

    So how do you determine that a team is a high-performing team? This can be determined by three baseline criteria: consistently high quality output, the promotion of personal growth and well-being of all team members, and, most importantly, the ability to learn and grow as a unit. Initially, a team can produce high-performing output without meeting all three criteria, but this will erode over time: if team members feel detached from the group or feel that they are not growing, the quality of the output will decline.

    High-performing teams are similar to work groups because both utilize a collection of individuals or entities to accomplish tasks. What distinguishes a high-performing team from a work group are its characteristics. High-performing teams contain five core characteristics, and these characteristics are what separate a group from a team. The five characteristics of a high-performing team are: Purpose, Performance Measures, People with Tasks and Relationship Skills, Process, and Preparation and Practice.

    A high-performing team is much more than a work group, and typically has a life cycle that can vary from team to team. The standard team lifecycle consists of five states and is comparable to a human life cycle. The five states of a high-performing team lifecycle are: Formulating, Storming, Normalizing, Performing, and Adjourning.

    The Formulating State of a team is first realized when the team members are first defined and roles are assigned to all members. This initial stage is very important because it can set the tone for the team and can ultimately determine its success or failure. In addition, this stage requires the team to have a strong leader, because team members are normally unclear about specific roles, specific obstacles and the goals that may lie ahead of them. Finally, this stage is where most team members initially meet one another prior to working as a team, unless the team members already know each other.

    The Storming State normally arrives directly after the formation of a new team, because there are still a lot of unknowns amongst the newly formed assembly.
    As a general rule, most of the parties involved in the team are still getting used to the workload, pace of work, deadlines and the validity of various tasks that need to be performed by the group. In this state everything is questioned because there are so many unknowns. Items commonly questioned include the credentials of others on the team, the actual validity of a project, and the leadership abilities of the team leader. This can be exemplified by looking at the interactions between animals when they first meet. If two people walk directly toward each other with their dogs, the dogs will automatically enter the Storming State because they do not know each other. Typically in this situation they attempt to establish which is more dominant, via play or fighting, depending on how the dogs interact with each other. Once dominance has been defined and accepted by both dogs, they will either want to play or leave, depending on how the dogs interacted and other environmental variables.

    Once the Storming State has been worked through, the Normalizing State takes over. This state is entered by a team once all the questions of the Storming State have been answered and the team has been tested by a few tasks or projects. Typically, participants in the team are filled with energy and camaraderie, and a strong alliance with team goals and objectives. A high school football team at the start of its season is a perfect example of the Normalizing State: the player positions have been assigned, the depth chart has been filled, and everyone is focused on winning each game. All of the players encourage and expect each other to perform at the best of their abilities and are united by competition from other teams.

    The Performing State is achieved by a team when its history, working habits, and culture solidify the team as one working unit. In this state team members can anticipate specific behaviors, attitudes and reactions, and challenges are seen as opportunities rather than problems. Additionally, each team member knows their role in the team's success, and the roles of others. This is the most productive state of a group and is where all the time invested in working together really pays off. If you watch an Olympic figure skating pair skate, you can easily see how the time spent working together benefits their performance. They skate as one unit even though the pair is comprised of two skaters. Each skater has their routine completely memorized, as well as their partner's, which allows them to anticipate each other's moves on the ice and makes their skating look effortless.

    The final state of a team is the Adjourning State. This state is where accomplishments by the team and each individual team member are recognized. Additionally, this state also allows for reflection on the interactions between team members, the work accomplished and the challenges that were faced. Finally, the team celebrates the challenges they have faced and overcome as a unit.

    Currently in the workplace, teams are divided into two different types: co-located and distributed teams. Co-located teams are defined as the traditional group of people working together in an office, according to Andy Singleton of Assembla. This traditional type of team has dominated business in the past due to inadequate technology, which forced workers to interact with one another primarily via face to face meetings. Team meetings are primarily led by the person with the highest status in the company.
    Having personally participated in meetings of this type, I have found that a select few of the team members usually dominate the flow of communication, which reduces the input of others in group discussions. Since discussions are dominated by a select few individuals, group discussion is skewed in favor of those who communicate the most in meetings. In addition, team members might not give their full opinions on a topic of discussion, in part so as not to offend or create controversy amongst the team, and this can shift decisions made in meetings towards the opinions of the dominating team members.

    Distributed teams are by definition spread across an area or subdivided into separate sections, and that is exactly what distributed teams are when compared to a more traditional team. It is commonplace for distributed teams to have team members across town, in the next state, across the country and, with the advances in technology over the last 20 years, even across the world. These teams allow for more diversity compared to the other type of team because they allow for more flexibility regarding location. A team could consist of a 30 year old male Italian project manager from New York, a 50 year old female Hispanic from California and a collection of programmers from India, because technology allows them to communicate as if they were standing next to one another. In addition, distributed team members consult with more team members prior to making decisions compared to traditional teams, and take longer to come to decisions due to the differences in time zones and cultural events. However, team members feel more empowered to speak out when they do not agree with the team and to notify others of potential issues regarding the work that the team is doing.

    Virtual teams, which are a subset of the distributed team type, are changing organizational strategies, because a team can now in essence be working 24 hours a day by utilizing employees in various time zones and locations. A primary example of this is customer service departments: a company can have multiple call centers spread across multiple time zones, allowing it to appear to be open 24 hours a day while all of its employees work from 9 AM to 5 PM every day. Virtual teams also allow human resources departments to go after the best talent for the company regardless of where the potential employee works, because they will be part of a virtual team; all that is needed is the proper technology to be set up to allow everyone to communicate. In addition to allowing employees to work from home, the company can save space and resources by not having to provide a desk for every team member. In fact, those team members who only occasionally come into the office can share one desk amongst multiple people. This is definitely a cost cutting plus given the current state of the economy.

    One thing that can turn a team into a high-performing team is leadership. High-performing team leaders need to focus on investing in ongoing personal development, providing team members with the direction, structure, and resources needed to accomplish their work, making the right interventions at the right time, and helping the team manage boundaries between the team and the various external parties involved in the team's work. A team leader needs to invest in ongoing personal development in order to effectively manage their team. People have said that attitude is everything; this is very true of leaders and leadership.
    A team takes on the attitudes and behaviors of its leaders, and this can potentially harm the team and the team's output. Leaders must concentrate on self-awareness and on understanding their team's group dynamics in order to fully understand how to lead them. In addition, continually learning new leadership techniques from other effective leaders is also very beneficial.

    Providing team members with the direction, structure, and resources that they need to accomplish their work collectively sounds easy, but it is not. Leaders need to be able to effectively communicate to their team how their work helps the company reach its organizational vision. Conversely, the leader needs to allow the team to work autonomously within specific guidelines to turn the company's vision into a reality. That being said, the team must be appropriately staffed according to the size and complexity of the team's tasks. These tasks should be clear and meaningful to the company's objectives, and should allow for feedback to be exchanged between the leader and the team member and between the leader and upper management. Now, if the team is properly staffed and has a clear and full understanding of what is to be done, the company must also supply the workers with the proper tools to achieve the tasks that they are asked to do. No one should be asked to dig a hole without being given a shovel. Finally, leaders must reward their team members for the accomplishments they achieve. Awards could range from a simple congratulatory email, to a party to celebrate the completion of a large project, to other monetary rewards.

    Managing boundaries is very important for team leaders because it can alter the attitudes of team members and can add undue stress to the team, which will force them to lose focus on the tasks at hand for the group. Team leaders should promote communication between team members so that burdens are shared amongst the team and solutions can be derived from hearing the opinions of multiple sources. This also reinforces team camaraderie and working as a unit. Team leaders must manage the type and timing of interventions so as not to create an even bigger mess within the team. Poorly timed interventions can really deflate team members and make them question themselves, which could lead to further and undue interventions by the team leader. Typically, the best time for interventions is when the team is just starting to form, so that unproductive behaviors are removed from the team and it can retain focus on its agenda. If an intervention is effectively executed, the team will feel energized about the work that they are doing, will promote communication and interaction amongst the group, and will improve morale overall.

    High-performing teams are very important to organizations because they consistently produce high quality output and develop a collective purpose for their work. This drive to succeed allows team members to utilize specific talents, allowing for growth in these areas. In addition, these team members usually take on a sense of ownership of their projects and feel that the other team members are irreplaceable.

    References:
    http://blog.assembla.com/assemblablog/tabid/12618/bid/3127/Three-ways-to-organize-your-team-co-located-outsourced-or-global.aspx
    Katzenbach, J.R. & Smith, D.K. (1993). The Wisdom of Teams: Creating the High-performance Organization. Boston: Harvard Business School.

    Read the article

  • 12.04 Unity 3D 80% CPU load with Compiz

    - by user39288
    EDIT: I have been able to determine that the problem is not Compiz but is actually Xorg. I don't know why, but by quickly maximizing a terminal and taking a screenshot with top running before the problem went away, I can see Xorg taking up 72% of the CPU, with bamfdaemon taking up 18% and compiz taking up 14%. It seems the Nvidia drivers are to blame; I will play more with the settings and perhaps do a clean nvidia-current install to try to fix the problem.

    I am having a very annoying problem with high CPU usage. I am running 12.04 with the latest drivers and nvidia-current installed, and had not had any issues for days; now I have a strange problem. Unity 3D runs great most of the time, with 1-2% CPU usage with only Transmission running in the background, and windows open and close smoothly. However, no matter what programs are open, if I minimize all open programs to the Unity bar on the left, my CPU jumps to about 80% and slows down all maximize and minimize effects. Mouse movement stays smooth the whole time, but Unity becomes unresponsive for up to 30 seconds at times. Hitting Alt+Tab to bring up even a single window fixes the problem; the window I bring back up doesn't even have to be maximized to solve it. Hitting the Super key to bring up the Dash makes the CPU drop back to idle until I close it, then the high CPU usage resumes. I believe the problem is Compiz, but even with only a terminal running "top", I have to minimize it to the tray for the problem to show, so I can't see the problem process. I can only tell about the high CPU usage using indicator-sysmonitor. I even tried quitting the indicator, but I can still tell that performance is very poor in all applications when everything is minimized. I have reset Compiz back to defaults, tried the post-release update Nvidia drivers, and played with vsync settings in both the Nvidia settings and Compiz; I even forced the refresh rate, but cannot solve the problem. The problem does NOT occur in Unity 2D. Specs are a Core 2 Duo 2.0 GHz, 4GB DDR2 RAM, two 320GB HDDs in RAID 0, and an Nvidia GTX 260M graphics card.

    Read the article

  • PHP5 getrusage() returning incorrect information?

    - by Andrew
    I'm trying to determine CPU usage of my PHP scripts. I just found this article which details how to find system and user CPU usage time (Section 4). However, when I tried out the examples, I received completely different results.

    The first example:

        sleep(3);
        $data = getrusage();
        echo "User time: ". ($data['ru_utime.tv_sec'] + $data['ru_utime.tv_usec'] / 1000000);
        echo "System time: ". ($data['ru_stime.tv_sec'] + $data['ru_stime.tv_usec'] / 1000000);

    Results in:

        User time: 29.53
        System time: 2.71

    Example 2:

        for($i=0;$i<10000000;$i++) { }
        // Same echo statements

    Results:

        User time: 16.69
        System time: 2.1

    Example 3:

        $start = microtime(true);
        while(microtime(true) - $start < 3) { }
        // Same echo statements

    Results:

        User time: 34.94
        System time: 3.14

    Obviously, none of the information is correct except maybe the system time in the third example. So what am I doing wrong? I'd really like to be able to use this information, but it needs to be reliable. I'm using Ubuntu Server 8.04 LTS (32-bit) and this is the output of php -v:

        PHP 5.2.4-2ubuntu5.10 with Suhosin-Patch 0.9.6.2 (cli) (built: Jan 6 2010 22:01:14)
        Copyright (c) 1997-2007 The PHP Group
        Zend Engine v2.2.0, Copyright (c) 1998-2007 Zend Technologies

    Read the article

  • Can I reduce the CPU speed of my MacBook when on battery?

    - by Greg Hewgill
    I've got a MacBook with a Core 2 Duo CPU. I've got CoreDuoTemp installed, which can show the current speed of the CPU. It appears to always show:

        Mini : 1.0 GHz
        Maxi : 2.0 GHz
        Current : 2.0 GHz

    I believe my laptop would run longer on battery if it were to run at a maximum of 1 GHz. Is there a way to configure this, or is the CPU speed adjustment completely automatic?

    Read the article

  • How to see if turbo boost is working on I7 860 CPU?

    - by Jan Derk
    I just built myself a new system with an Intel i7 860 CPU. When loading it with a single threaded application like Super PI, CPU-Z shows 2.933 GHz as the speed. Now, I understood that the i7 goes into turbo boost mode, up to 3.46 GHz for a single core. How can I check that? Is there a utility to monitor CPU speed per core?

    Read the article

  • Sudden and frequent hangs on desktop computer: mobo or CPU fault?

    - by djechelon
    I have a desktop computer equipped with an ASUS Crosshair 2 Formula and a Phenom X6 3.2GHz CPU. My problem is that the computer will often hang all of a sudden, completely ceasing to respond. When that occurs, the reset key is inoperative and the power button turns the computer off but is unable to turn it back on; I have to physically disconnect the power cable. The problem can occur at any time: when I'm booting Windows, when I'm logging in, when I'm listening to a song, when I'm browsing the Internet, etc. It always occurs after a very few minutes of 3D gameplay.

    I thought it was a video card fault; I had three 8800GTX cards so I could try all combinations of them, but that didn't fix it. I thought it was a RAM problem; I tried running with only a subset of my DDR2 banks, but that didn't fix it either. Almost every time I have to reset and reconfigure the BIOS (without AHCI, Win7 won't boot, so I need to restore a few things). If I enable AMD Live, Cool'n'Quiet or other things from the CPU configuration menu, I can be sure that the computer won't reach the Windows desktop in 99% of cases (it randomly hangs somewhere in the boot process or even in the BIOS POST). Another interesting thing is that during the POST process the computer always takes an unusually long time detecting USB devices (the LCD POSTer shows USB INIT), and I've also tried disconnecting all USB devices, but it didn't take less time to POST. The BIOS revision is 2702, the latest.

    Today I found a different behaviour once: during the boot screen I got a BSOD with error Stop 0x00000101, "A clock interrupt was not received on a secondary processor within the allocated time interval", which is usually related to overclocking, but I never overclocked my CPU.

    Judging from the description of my problem, hoping someone has had the same one and fixed it, and since I don't have a spare CPU or motherboard for replacement, I'd like to ask whether you think this is a problem with a faulty CPU or a faulty motherboard, and whether I can perform additional tests (I mean software tests, given my lack of spare components) to identify the component to replace.

    Read the article
