Search Results

Search found 15670 results on 627 pages for 'multi level'.


  • Limit a process's relative (not absolute) processor consumption in Linux

    - by BobBanana
    What is the standard way in Linux to enforce a system policy that limits the relative CPU use of a single process? That is, on a quad-core machine, I never want a process to use more than 2 CPUs at once, even if the process creates more threads. I do not want an absolute time limit, just a relative limit so that one task cannot dominate the machine. This is also different from renice, which allows a process to use all the resources but politely step aside if others need them too. ulimit is the usual resource-limiting tool, but it does not allow such CPU restrictions: it can limit the number of processes per user, or absolute CPU time, but not the maximum number of active threads of a single process. I've found a couple of user-level tools, like cpulimit, but not a system-level tool or setting. Does such a standard resource controller exist in Linux (Red Hat Enterprise, if it matters)? If there is such a limit imposed, how would a user identify it?
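    For reference, a hedged sketch of the cpuset approach (paths and the <PID> placeholder are illustrative; on RHEL the cgroup mount point may differ or be managed by libcgroup):

        # Pin an existing process (and all of its threads) to CPUs 0-1 only
        taskset -cp 0,1 <PID>

        # Or enforce it via a cpuset cgroup so threads created later inherit the limit
        mkdir /sys/fs/cgroup/cpuset/twocpus
        echo 0-1   > /sys/fs/cgroup/cpuset/twocpus/cpuset.cpus
        echo 0     > /sys/fs/cgroup/cpuset/twocpus/cpuset.mems
        echo <PID> > /sys/fs/cgroup/cpuset/twocpus/tasks

        # A user can identify such a restriction from the process's allowed-CPU mask
        grep Cpus_allowed_list /proc/<PID>/status

    The last command also answers the identification question: Cpus_allowed_list shows which CPUs the process may run on.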

    Read the article

  • Remote Desktop Encryption

    - by Kumar
    My client is RDP 6.1 (on Windows XP SP3) and the server is Windows Server 2003. I have installed an SSL certificate on the server for RDP. In the RDP settings (General tab), the encryption method is set to SSL/TLS 1.0 and the encryption level is set to "Client Compatible". I have the following questions: In this case, is it guaranteed that all communication is encrypted when I remotely log in to the server? I mean, is the password encrypted? Does RDP always use some kind of encryption even if there is no SSL certificate installed on the server? In that case I do not see the security lock in the connection bar; when I set the encryption level to "High" I do see the security lock. I believe the communication in both cases is encrypted. Is that true? Please reply to my questions. Thanks in advance, Kumar

    Read the article

  • Recovering database files from a corrupted VHD

    - by Apocalypse9
    We have a SQL Server hosted on a virtual machine. Our hosting company updated/restarted the server and for some reason the virtual machines became unbootable. We've spoken to Microsoft and used a few higher-level tools to attempt to recover the virtual machines, but were unsuccessful. In browsing the file system, the database folder doesn't even appear. I'm wondering if there are any lower-level tools that might be able to find and copy the database files. As far as I know the physical hard drive is OK, so I'm hoping there may be some way to recover the files themselves even if the rest of the virtual machine's file system is a loss. Obviously we're in a bit of a bind, and any help/suggestions are very much appreciated.
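    For the lower-level route, a hedged sketch assuming you can copy the VHD file off the host and have the libguestfs tools installed (the file name broken.vhd and the paths are placeholders):

        # Inspect the VHD read-only without booting it
        guestfish --ro -a broken.vhd
        ><fs> run
        ><fs> list-filesystems              # list partitions/filesystems guestfish can see
        ><fs> mount /dev/sda2 /             # mount whichever filesystem holds the data
        ><fs> copy-out /Data/SQL /tmp/recovered

    If the guest filesystem is too damaged to mount at all, file-carving tools such as testdisk/photorec run against the raw image are the next step down.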

    Read the article

  • Which permissions need to be assigned to a normal user to allow him to change SHARE permissions?

    - by guest
    As in the subject - I am wondering which permissions (if it is possible at all) I need to assign to a regular user so that he can modify share permissions - e.g. to add another user with full control, or to deny someone read access to the folder - at the share level. I know this is possible through NTFS permissions, but I am wondering whether it is also possible at the share level. Any ideas how to do that? Or does only the administrator/creator/person who shared the folder have access to this? I am using Windows Server 2003. Any ideas? Thanks
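    For reference, as far as I know only members of groups such as Administrators, Power Users, or Server Operators can modify share-level permissions on Windows Server 2003; there is no granular right that grants just this ability to a regular user. An administrator can script the changes, though; a sketch using the Resource Kit's subinacl tool (server, share, and account names are placeholders):

        subinacl /share \\SERVER\MyShare /grant=DOMAIN\SomeUser=F
        subinacl /share \\SERVER\MyShare /display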

    Read the article

  • How can I see if apache is overloaded and dropping or not accepting connections?

    - by cat pants
    Basically I just want to see whether Apache is handling the current level of high traffic or whether I need to tune it to handle more connections. (I have found plenty of information on the actual tuning, so no help needed there.) I know it has been dropping or not accepting connections earlier today, but I'm not seeing anything in the error logs. Is the expected behavior to log a 503 in the error log if Apache cannot accept more connections? If so, what error-logging level do I need in order to see these? What is the correct terminology: dropping connections or not accepting connections? The MPM is prefork, the OS is Linux, and the Apache version is 2.2.15.
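    A hedged sketch of what I'd check first (log path and URL are placeholders; the server-status part assumes mod_status is loaded with ExtendedStatus On). With the prefork MPM, Apache 2.2 doesn't drop connections when it runs out of workers - the kernel queues them up to ListenBacklog - and it logs a MaxClients warning at the default error-log level, so no extra verbosity should be needed:

        # Look for the tell-tale "server reached MaxClients setting" message
        grep -i "MaxClients" /var/log/httpd/error_log

        # Watch the worker scoreboard live; a permanently full scoreboard means
        # new connections are queueing rather than being served
        watch -n1 'curl -s http://localhost/server-status?auto'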

    Read the article

  • What are the recognized ways to increase the size of the RAID array online/offline?

    - by user149509
    Is it possible, in theory, to increase the size of a RAID array of any level just by adding new drive(s)? The variant "back up all data - delete the old array - add/replace disks - create a new array - restore the data" is obvious, so what are the other options? Does it depend only on the RAID level, only on the implementation in the RAID controller, or on both? Does adding new disks to a striped array necessarily lead to rebuilding the array, with redistribution of the stripes onto the new drives? What steps should be done to increase the size of a RAID array in online/offline scenarios? RAID-5 and RAID-10 are especially interesting. I would like to see the big picture.
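    For the common Linux software-RAID case, a hedged sketch of an online RAID-5 grow with mdadm (device names are placeholders; hardware controllers each have their own equivalent, if they support online capacity expansion at all):

        mdadm --add  /dev/md0 /dev/sdd1                # add the new disk as a spare
        mdadm --grow /dev/md0 --raid-devices=4         # online reshape onto 4 devices
        watch cat /proc/mdstat                         # follow the reshape progress
        resize2fs /dev/md0                             # finally grow the filesystem (ext3/ext4)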

    Read the article

  • Is it safe to set MySQL isolation to "Read Uncommitted" (dirty reads) for typical Web usage? Even with replication?

    - by Continuation
    I'm working on a website with a typical CRUD web usage pattern: similar to blogs or forums, where users create/update content and other users read it. It seems like it's OK to set the database's isolation level to "Read Uncommitted" (dirty reads) in this case. My understanding of the general drawback of "Read Uncommitted" is that a reader may read uncommitted data that will later be rolled back. In a CRUD blog/forum usage pattern, will there ever be any rollback? And even if there is, is there any major problem with reading uncommitted data? Right now I'm not using any replication, but if in the future I want to use replication (row-based, not statement-based), will a "Read Uncommitted" isolation level prevent me from doing so? What do you think? Has anyone tried using "Read Uncommitted" on their RDBMS?
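    For what it's worth, the level itself is trivial to change if you decide to experiment; a minimal sketch (the mysql client invocation assumes suitable privileges):

        mysql -e "SET GLOBAL TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;"

        # or as the server-wide default in my.cnf, applied on restart:
        # [mysqld]
        # transaction-isolation = READ-UNCOMMITTED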

    Read the article

  • Galaxy Tab 3 producing continuous LightSensor error in LogCat

    - by Richard Tingle
    I am using a Galaxy Tab 3 as a test device for writing an Android app, so I'm interested in the output of LogCat, which is being filled with these error-level messages. The device itself appears to work correctly: apps that rely on the light sensor respond to it correctly, and the number in the error itself goes down if the light sensor is obscured. If I weren't using it to develop apps I wouldn't even be aware of the issue, but I believe it is an issue with the device itself, not my app: simply plugging the Tab 3 into the computer and using Eclipse (ADT) to look at the LogCat without any app running leads to these errors being shown. I know I could filter the LogCat to ignore these errors, but inconvenience aside, they concern me. A sample of the LogCat is below (it generates errors continuously). This is on verbose, so it includes some debug-level (D/) messages as well as the error-level (E/) messages. How can I correct the device so that it no longer generates these errors?

        06-11 10:08:45.789: E/LightSensor(377): LightSensor::readEvents mPendingEvent.light = 14
        06-11 10:08:45.992: E/LightSensor(377): LightSensor::readEvents mPendingEvent.light = 14
        06-11 10:08:46.195: E/LightSensor(377): LightSensor::readEvents mPendingEvent.light = 14
        06-11 10:08:46.398: E/LightSensor(377): LightSensor::readEvents mPendingEvent.light = 14
        06-11 10:08:46.601: E/LightSensor(377): LightSensor::readEvents mPendingEvent.light = 14
        06-11 10:08:46.804: E/LightSensor(377): LightSensor::readEvents mPendingEvent.light = 14
        06-11 10:08:47.007: E/LightSensor(377): LightSensor::readEvents mPendingEvent.light = 14
        06-11 10:08:47.210: E/LightSensor(377): LightSensor::readEvents mPendingEvent.light = 14
        06-11 10:08:47.414: E/LightSensor(377): LightSensor::readEvents mPendingEvent.light = 14
        06-11 10:08:47.617: E/LightSensor(377): LightSensor::readEvents mPendingEvent.light = 14
        06-11 10:08:47.820: E/LightSensor(377): LightSensor::readEvents mPendingEvent.light = 14
        06-11 10:08:48.023: E/LightSensor(377): LightSensor::readEvents mPendingEvent.light = 14
        06-11 10:08:48.039: D/dalvikvm(15201): GC_CONCURRENT freed 1947K, 17% free 16973K/20359K, paused 13ms+13ms, total 50ms
        06-11 10:08:48.226: E/LightSensor(377): LightSensor::readEvents mPendingEvent.light = 14
        06-11 10:08:48.429: E/LightSensor(377): LightSensor::readEvents mPendingEvent.light = 13
        06-11 10:08:48.632: E/LightSensor(377): LightSensor::readEvents mPendingEvent.light = 13
        06-11 10:08:48.632: D/STATUSBAR-NetworkController(472): refreshSignalCluster: data=0 bt=false
        06-11 10:08:48.835: E/LightSensor(377): LightSensor::readEvents mPendingEvent.light = 14
        06-11 10:08:49.039: E/LightSensor(377): LightSensor::readEvents mPendingEvent.light = 14
        06-11 10:08:49.242: E/LightSensor(377): LightSensor::readEvents mPendingEvent.light = 14
        06-11 10:08:49.445: E/LightSensor(377): LightSensor::readEvents mPendingEvent.light = 13
        06-11 10:08:49.632: D/STATUSBAR-NetworkController(472): refreshSignalCluster: data=0 bt=false
        06-11 10:08:49.648: E/LightSensor(377): LightSensor::readEvents mPendingEvent.light = 14
        06-11 10:08:49.851: E/LightSensor(377): LightSensor::readEvents mPendingEvent.light = 13

    Read the article

  • Kill child process when the parent exits

    - by kolypto
    I'm preparing a script for Docker, which allows only one top-level process; that process should receive the signals so we can stop it. Therefore, I have a script like this: one application writes to syslog (a bash script in this sample), and the other one just prints it.

        #! /usr/bin/env bash
        set -eu
        tail -f /var/log/syslog &
        exec bash -c 'while true ; do logger aaaaaaaaaaaaaaaaaaa ; sleep 1 ; done'

    Almost solved: when the top-level bash process gets SIGTERM it exits, but tail -f continues to run. How do I instruct tail -f to exit when the parent process exits? E.g. it should also get the signal. Note: I can't use bash traps, since the exec on the last line replaces the process completely.
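    One possible fix, a sketch assuming GNU tail: its --pid flag makes tail exit once the given process dies, and because exec keeps the same PID, $$ still identifies the top-level process after the exec:

        #! /usr/bin/env bash
        set -eu
        # tail exits on its own once the process with this PID is gone
        tail -f /var/log/syslog --pid=$$ &
        exec bash -c 'while true ; do logger aaaaaaaaaaaaaaaaaaa ; sleep 1 ; done'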

    Read the article

  • Tomcat directly serve static (css, js) files shared by multiple applications

    - by Josvic Zammit
    I'm using the ExtJS framework, which has a bulk of js and css files that are used by all apps. I intend to share these between a number of web applications (different war files). For this reason I would like to serve the ExtJS js and css directly from the web server, in my case Tomcat 6, which can be used to serve static files, as in this helpful link. Therefore I put my files under /var/lib/tomcat6/webapps/ROOT/extjs/. The static files that are directly under that directory are served correctly, e.g. /extjs/ext.js correctly serves the file at /var/lib/tomcat6/webapps/ROOT/extjs/ext.js. However, files in lower-level directories return a 404; for example /extjs/welcome/css/welcome.css, which should serve the file at /var/lib/tomcat6/webapps/ROOT/extjs/welcome/css/welcome.css. TL;DR: Tomcat serves static files only at the top-level directory; a 404 is returned for files deeper in the hierarchy. Config file contents: server.xml application's web.xml
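    A hedged diagnostic that may help narrow this down (URLs are illustrative): comparing the response headers for a top-level file and a nested one shows whether the 404 comes from Tomcat's DefaultServlet or from something else answering the deeper paths (for example, another context deployed under /extjs, or a servlet-mapping/security-constraint in ROOT's web.xml that matches them):

        curl -sI http://localhost:8080/extjs/ext.js | head -n 5
        curl -sI http://localhost:8080/extjs/welcome/css/welcome.css | head -n 5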

    Read the article

  • Accessing a broken mdadm raid

    - by CarstenCarsten
    Hi! I used a Western Digital MyBook World (a SOHO NAS running Linux) as backup for my Linux box. Suddenly, the MyBook World does not boot up any more. So I opened the box, removed the hard disk, put it into an external USB HDD case, and connected it to my Linux box.

        [  530.640301] usb 2-1: new high speed USB device using ehci_hcd and address 3
        [  530.797630] scsi7 : usb-storage 2-1:1.0
        [  531.794844] scsi 7:0:0:0: Direct-Access     WDC WD75 00AAKS-00RBA0 PQ: 0 ANSI: 2
        [  531.796490] sd 7:0:0:0: Attached scsi generic sg3 type 0
        [  531.797966] sd 7:0:0:0: [sdc] 1465149168 512-byte logical blocks: (750 GB/698 GiB)
        [  531.800317] sd 7:0:0:0: [sdc] Write Protect is off
        [  531.800327] sd 7:0:0:0: [sdc] Mode Sense: 38 00 00 00
        [  531.800333] sd 7:0:0:0: [sdc] Assuming drive cache: write through
        [  531.803821] sd 7:0:0:0: [sdc] Assuming drive cache: write through
        [  531.803836] sdc: sdc1 sdc2 sdc3 sdc4
        [  531.815831] sd 7:0:0:0: [sdc] Assuming drive cache: write through
        [  531.815842] sd 7:0:0:0: [sdc] Attached SCSI disk

    The dmesg output looks normal, but I was wondering why the hard disk was not mounted at all, and why there are 4 different partitions on it. fdisk showed the following:

        root@ubuntu:/home/ubuntu# fdisk /dev/sdc

        WARNING: DOS-compatible mode is deprecated. It's strongly recommended to
        switch off the mode (command 'c') and change display units to sectors (command 'u').

        Command (m for help): p

        Disk /dev/sdc: 750.2 GB, 750156374016 bytes
        255 heads, 63 sectors/track, 91201 cylinders
        Units = cylinders of 16065 * 512 = 8225280 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x00007c00

           Device Boot      Start         End      Blocks   Id  System
        /dev/sdc1               4         369     2939895   fd  Linux raid autodetect
        /dev/sdc2             370         382      104422+  fd  Linux raid autodetect
        /dev/sdc3             383         505      987997+  fd  Linux raid autodetect
        /dev/sdc4             506       91201   728515620   fd  Linux raid autodetect

    Oh no! Everything seems to be created as an mdadm software RAID. Calling mdadm --examine on the different partitions seems to confirm that. I think the only partition I am interested in is /dev/sdc4 (because it is the largest). But nevertheless I called mdadm --examine on every partition.
        root@ubuntu:/home/ubuntu# mdadm --examine /dev/sdc1
        /dev/sdc1:
                  Magic : a92b4efc
                Version : 00.90.00
                   UUID : 5626a2d8:070ad992:ef1c8d24:cd8e13e4
          Creation Time : Wed Feb 20 00:57:49 2002
             Raid Level : raid1
          Used Dev Size : 2939776 (2.80 GiB 3.01 GB)
             Array Size : 2939776 (2.80 GiB 3.01 GB)
           Raid Devices : 2
          Total Devices : 1
        Preferred Minor : 1
            Update Time : Sun Nov 21 11:05:27 2010
                  State : clean
         Active Devices : 1
        Working Devices : 1
         Failed Devices : 1
          Spare Devices : 0
               Checksum : 4c90bc55 - correct
                 Events : 16682

              Number   Major   Minor   RaidDevice State
        this     0       8        1        0      active sync   /dev/sda1
           0     0       8        1        0      active sync   /dev/sda1
           1     1       0        0        1      faulty removed

        root@ubuntu:/home/ubuntu# mdadm --examine /dev/sdc2
        /dev/sdc2:
                  Magic : a92b4efc
                Version : 00.90.00
                   UUID : 9734b3ee:2d5af206:05fe3413:585f7f26
          Creation Time : Wed Feb 20 00:57:54 2002
             Raid Level : raid1
          Used Dev Size : 104320 (101.89 MiB 106.82 MB)
             Array Size : 104320 (101.89 MiB 106.82 MB)
           Raid Devices : 2
          Total Devices : 1
        Preferred Minor : 2
            Update Time : Wed Oct 27 20:19:08 2010
                  State : clean
         Active Devices : 1
        Working Devices : 1
         Failed Devices : 1
          Spare Devices : 0
               Checksum : 55560b40 - correct
                 Events : 9884

              Number   Major   Minor   RaidDevice State
        this     0       8        2        0      active sync   /dev/sda2
           0     0       8        2        0      active sync   /dev/sda2
           1     1       0        0        1      faulty removed

        root@ubuntu:/home/ubuntu# mdadm --examine /dev/sdc3
        /dev/sdc3:
                  Magic : a92b4efc
                Version : 00.90.00
                   UUID : 08f30b4f:91cca15d:2332bfef:48e67824
          Creation Time : Wed Feb 20 00:57:54 2002
             Raid Level : raid1
          Used Dev Size : 987904 (964.91 MiB 1011.61 MB)
             Array Size : 987904 (964.91 MiB 1011.61 MB)
           Raid Devices : 2
          Total Devices : 1
        Preferred Minor : 3
            Update Time : Sun Nov 21 11:05:27 2010
                  State : clean
         Active Devices : 1
        Working Devices : 1
         Failed Devices : 1
          Spare Devices : 0
               Checksum : 39717874 - correct
                 Events : 73678

              Number   Major   Minor   RaidDevice State
        this     0       8        3        0      active sync
           0     0       8        3        0      active sync
           1     1       0        0        1      faulty removed

        root@ubuntu:/home/ubuntu# mdadm --examine /dev/sdc4
        /dev/sdc4:
                  Magic : a92b4efc
                Version : 00.90.00
                   UUID : febb75ca:e9d1ce18:f14cc006:f759419a
          Creation Time : Wed Feb 20 00:57:55 2002
             Raid Level : raid1
          Used Dev Size : 728515520 (694.77 GiB 746.00 GB)
             Array Size : 728515520 (694.77 GiB 746.00 GB)
           Raid Devices : 2
          Total Devices : 1
        Preferred Minor : 4
            Update Time : Sun Nov 21 11:05:27 2010
                  State : clean
         Active Devices : 1
        Working Devices : 1
         Failed Devices : 1
          Spare Devices : 0
               Checksum : 2f36a392 - correct
                 Events : 519320

              Number   Major   Minor   RaidDevice State
        this     0       8        4        0      active sync
           0     0       8        4        0      active sync
           1     1       0        0        1      faulty removed

    If I read the output correctly, the second device of each mirror was removed because it was faulty. Is there ANY way to see the contents of the largest partition? Or to somehow see which files are broken? I see that everything is RAID-1, which is only mirroring, so each half should behave like a normal partition. I am anxious about doing anything with mdadm, for fear that I destroy the data on the hard disk. I would be very thankful for any help.
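    For the record, one non-destructive way in (hedged, and assuming the disk itself is healthy): these are version 0.90 superblocks, which live at the end of the partition, so the filesystem inside /dev/sdc4 starts at offset 0; the surviving RAID-1 half can be assembled degraded and read-only, or often simply mounted directly:

        # Assemble the remaining mirror half as a degraded, read-only array
        mdadm --assemble --run --readonly /dev/md4 /dev/sdc4
        mount -o ro /dev/md4 /mnt

        # Because the 0.90 superblock sits at the end, this usually works too
        mount -o ro /dev/sdc4 /mnt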

    Read the article

  • Amazon S3 - Storage Class and Server Side Encryption

    - by Steven
    Ahhh! I am using Amazon S3 for some low-price storage to clear down our SAN. I created the bucket and created a root folder. I set the storage class to Standard and server-side encryption to AES. I started a copy job to move the files; some files copied over and I checked them: Reduced Redundancy, encryption set to none. WTF? So I deleted all files and folders, manually created the folder structure, and again set the storage class and encryption level. I copied some files and bam, still showing (at the file level) Reduced Redundancy and no encryption. So my question is this: (a) is it really stored redundantly and encrypted, just not showing it properly (as the root folder is, how can the file not be??), or (b) am I being a huge tool and missing something?
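    One relevant fact: storage class and server-side encryption are per-object attributes that must be sent with every PUT; they are not inherited from the bucket or from "folders" (folders are just key prefixes), so whatever the copy tool sends wins. A hedged sketch of making them explicit per upload, using the AWS CLI (bucket and paths are placeholders; other tools such as s3cmd have their own equivalent flags):

        aws s3 cp ./archive/ s3://my-bucket/archive/ --recursive \
            --storage-class STANDARD --sse AES256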

    Read the article

  • Nightly backups (and maybe other tasks) causing server alerts

    - by J. Pablo Fernández
    I have two independent alert notification systems for my servers. The server is a virtual machine on Linode, and one set of alerts comes from Linode; the other monitoring system we use is New Relic. They are both watching out for I/O utilization, and every night I get alerts from both of them saying the server is using too much I/O. I run quite a few tasks in the middle of the night, but the one I confirmed can cause I/O warnings is running the backups. The backup is done by s3cmd sync. I tried ionice, but it still generates the warnings. Getting warnings every night reduces the efficacy of warnings when they happen for real. For Linode I could raise the level at which a warning is issued, but that might make the whole thing useless because the level would be too high. What would be the proper solution for this?
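    A hedged sketch of the usual mitigations beyond plain ionice (paths and bucket are placeholders; note ionice -c3 only has an effect under the CFQ I/O scheduler, and trickle is a separate package):

        # Idle I/O class plus lowest CPU priority for the nightly sync
        ionice -c3 nice -n19 s3cmd sync /var/backups s3://my-bucket/backups/

        # Optionally cap upload bandwidth as well (here ~1000 KB/s)
        trickle -s -u 1000 s3cmd sync /var/backups s3://my-bucket/backups/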

    Read the article

  • Determining percentage of students between certain grades

    - by dunc
    I have an Excel spreadsheet with the following data:

        Student | KS2 Grade | Target | Expected 1 | Expected 2 | Expected 3 | FSM Status | Gifted & Talented
        User 1  |     4     |   6    |     7      |     5      |     6      |     Y      |        N
        User 2  |     3     |   5    |     5      |     4      |     4      |     N      |        N
        User 3  |     5     |   6    |     6      |     6      |     7      |     N      |        N
        User 4  |     4     |   6    |     5      |     6      |     6      |     N      |        Y
        User 5  |     5     |   7    |     7      |     6      |     7      |     N      |        N
        User 6  |     3     |   4    |     4      |     4      |     4      |     N      |        N
        User 7  |     3     |   4    |     5      |     3      |     4      |     Y      |        Y

    What I'd like to do is determine the percentage of students within certain levels, i.e. a range of levels. For instance, in the data above, I'd like to determine the % of all students that have a Target level of 5 - 7. I'd then like to also expand the formula to specify the % of Gifted & Talented students with a Target level of 5 - 7. Is this possible in Excel? If so, where do I start?
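    A hedged sketch of one way to start, using COUNTIFS (Excel 2007 and later) and assuming the Target column is C and Gifted & Talented is H, with the data in rows 2-8; format the result cells as percentages:

        =COUNTIFS(C2:C8,">=5",C2:C8,"<=7")/COUNTA(C2:C8)

        =COUNTIFS(C2:C8,">=5",C2:C8,"<=7",H2:H8,"Y")/COUNTIF(H2:H8,"Y")

    The first gives the share of all students with a Target of 5-7; the second restricts both the numerator and the denominator to Gifted & Talented students.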

    Read the article

  • SQL SERVER – Generate Report for Index Physical Statistics – SSMS

    - by pinaldave
    A few days ago, I wrote about SQL SERVER – Out of the Box – Activity and Performance Reports from SSMS (Link). A user asked me whether we can use similar reports to get details about indexes. Yes, it is possible to do the same. Similar reports are available at the database level, just like those available at the server instance level. You can right-click on the database name and click Reports. Under Standard Reports, you will find the following reports:

    Disk Usage
    Disk Usage by Top Tables
    Disk Usage by Table
    Disk Usage by Partition
    Backup and Restore Events
    All Transactions
    All Blocking Transactions
    Top Transactions by Age
    Top Transactions by Blocked Transactions Count
    Top Transactions by Locks Count
    Resource Locking Statistics by Objects
    Object Execute Statistics
    Database Consistency History
    Index Usage Statistics
    Index Physical Statistics
    Schema Change History
    User Statistics

    Select the report named Index Physical Statistics. Once clicked, a report containing all the index names along with other information related to the indexes will be visible, e.g. index type and number of partitions. One column that caught my interest was Operation Recommended. In some places, it suggested that the index needs to be rebuilt. It is also possible to click and expand the partitions column and see additional details about the index as well. DBAs and developers who just want to get an idea of the state of their indexes and their physical statistics can use this tool. Note: Please note that I will not rebuild my indexes just because this report recommends it; there are many other parameters you need to consider before rebuilding indexes. However, this tool gives you accurate stats about your indexes, and the report can be exported right away to Excel or PDF by clicking on it. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, SQL, SQL Authority, SQL Index, SQL Optimization, SQL Performance, SQL Query, SQL Server, SQL Server Management Studio, SQL Tips and Tricks, SQL Utility, T SQL, Technology
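    Addendum: for those who prefer T-SQL, a sketch of the DMV that, to my knowledge, underlies this report (run it in the database of interest):

        SELECT OBJECT_NAME(ips.object_id) AS table_name,
               i.name AS index_name,
               ips.index_type_desc,
               ips.avg_fragmentation_in_percent,
               ips.page_count
        FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
        JOIN sys.indexes AS i
          ON i.object_id = ips.object_id AND i.index_id = ips.index_id
        ORDER BY ips.avg_fragmentation_in_percent DESC;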

    Read the article

  • Why do programmers use or recommend Mac OS X?

    - by codingbear
    I've worked on both Mac and Windows for a while. However, I'm still having a hard time understanding why programmers enthusiastically choose Mac OS X over Windows and Linux. I know that there are programmers who prefer Windows and Linux, but I'm asking the programmers who would use only Mac OS X and nothing else, because they think Mac OS X is the greatest fit for programmers. Some might argue that Mac OS X has a beautiful UI and is *nix-based, but Linux can offer that too. Although Windows is not *nix-based, you can develop on pretty much any platform or language on it, except Cocoa/Objective-C. Is it the software that is offered only on Mac OS X? Is that really worth using a Mac for? Is it to develop iPhone apps? Is it because you need to buy a new version of Windows every 2 years (less backwards compatible)? I understand why people working in the multimedia/entertainment industry would use Mac OS X; however, I don't see strong merits of Mac OS X over Windows. If you develop daily on a Mac and prefer it over anything else, can you give me a merit that the Mac has over Windows/Linux? Maybe something you can do on a Mac that cannot be done on Windows/Linux with the same level of ease? I'm not trying to start another Mac vs. Windows debate here. I tried to find things that I do on a Mac that cannot be done on Windows with the same level of ease, but I couldn't. So, I'm asking for some help.

    Read the article

  • IE 11 Developer Tools - changing console target to a different frameset or iframe

    - by vladimirl
    Originally posted on: http://geekswithblogs.net/vladimirl/archive/2013/10/25/ie-11-developer-tools---changing-console-target-to-a.aspx

    To change the current console iframe/frameset, type this into the console command line, where contentIFrame is the iframe/frameset name (there should not be quotes around the iframe name):

        console.cd(contentIFrame);

    To return to the top-level window, use cd() with no argument:

        console.cd();

    It took me some time to find out that this was possible in the IE 11 developer tools. Everything is so much easier in Chrome. No drama. Sometimes I feel that I hate IE more and more. Reference (http://msdn.microsoft.com/en-us/library/ie/dn255006(v=vs.85).aspx#console_in): All script entered in the command line executes in the global scope of the currently selected window. If your webpage is built with a frameset or iframes, those frames load their own documents in their own windows. To target the window of a frameset frame or an iframe, use the cd() command, with the frame/iframe's name or ID attribute as the argument. For example, you have a frame with the name microsoftFrame and you're loading the Microsoft homepage in it:

        cd(microsoftFrame);
        Current window: www.microsoft.com/en-us/default.aspx

    Important: Note that there were no quotes around the name of the frame. Only pass the unquoted name or ID value as the parameter. To return to the top-level window, use cd() with no argument.

    Read the article

  • Visual Studio 2013 Static Code Analysis in depth: What? When and How?

    - by Hosam Kamel
    In this post I'll illustrate the following points in detail:

    What is static code analysis?
    When to use?
    Supported platforms
    Supported Visual Studio versions
    How to use:
      Run Code Analysis Manually
      Run Code Analysis Automatically
      Run Code Analysis while checking source code in to TFS version control (TFSVC)
      Run Code Analysis as part of Team Build
    Understand the Code Analysis results & learn how to fix them
    Create your custom rule set
    Q & A
    References

    What is static code analysis?

    The Static Code Analysis feature of Visual Studio performs static code analysis on code to help developers identify potential design, globalization, interoperability, performance, security, and many other categories of potential problems, according to Microsoft's rules that mainly target best practices in writing code. A large set of those rules is included with Visual Studio, grouped into different categories targeting specific coding issues like security, design, interoperability, globalization, and others. Static here means analyzing the source code without executing it; this type of analysis can be performed through automated tools (like the Visual Studio 2013 Code Analysis Tool) or manually through code review, which is already supported in Visual Studio 2012 and 2013 (check the Using Code Review to Improve Quality video on Channel9). There is also dynamic analysis, which is performed on executing programs using software testing techniques such as code coverage, for example.

    When to use?

    Running the code analysis tool at regular intervals during your development process can enhance the quality of your software; examining your code for a set of common defects and violations is always a good programming practice. Code analysis can also find defects in your code that are difficult to discover through testing, allowing you to achieve a first-level quality gate for your application during the development phase, before you release it to the testing team.

    Supported platforms

    .NET Framework, native (C and C++), database applications.

    Supported Visual Studio versions

    All editions of Visual Studio starting with Visual Studio 2013 (except Visual Studio Test Professional); check Feature comparisons. Creating and modifying a custom rule set requires Visual Studio Premium or Ultimate.

    How to use?

    Code analysis can be run manually at any time from within the Visual Studio IDE, or even set up to run automatically as part of a Team Build or a check-in policy for Team Foundation Server.

    Run Code Analysis Manually

    To run code analysis manually on a project, on the Analyze menu, click Run Code Analysis on your project, or simply right-click the project name in Solution Explorer and choose Run Code Analysis from the context menu.

    Run Code Analysis Automatically

    To run code analysis each time that you build a project, select Enable Code Analysis on Build on the project's property page.

    Run Code Analysis while checking source code in to TFS version control (TFSVC)

    Team Foundation Version Control (TFVC) provides a way for organizations to enforce practices that lead to better code and more efficient group development through check-in policies, which are rules that are set at the team project level and enforced on developer computers before code is allowed to be checked in.
    (This is available only if you're using Team Foundation Server.) Required permissions on Team Foundation Server: you must have the Edit project-level information permission set to Allow; typically your account must be part of Project Administrators or Project Collection Administrators. For more information about Team Foundation permissions, check http://msdn.microsoft.com/en-us/library/ms252587(v=vs.120).aspx

    In Team Explorer, right-click the team project name, point to Team Project Settings, and then click Source Control.
    In the Source Control dialog box, select the Check-in Policy tab.
    Click Add to create a new check-in policy, or double-click the existing Code Analysis item in the Policy Type list to change the policy.
    Check or uncheck the policy options based on the configuration you need, as illustrated below:

    Enforce check-in to only contain files that are part of current solution: code analysis can run only on files specified in solution and project configuration files. This policy guarantees that all code that is part of a solution is analyzed.
    Enforce C/C++ Code Analysis (/analyze): requires that all C or C++ projects be built with the /analyze compiler option to run code analysis before they can be checked in.
    Enforce Code Analysis for Managed Code: requires that all managed projects run code analysis and build before they can be checked in.

    Check the Code Analysis rule set reference on MSDN. What is a rule set? A rule set is a group of code analysis rules, like the example below, where Microsoft.Design is the rule set name and "Do not declare static members on generic types" is the code analysis rule.

    Once you have configured the analysis rule, the policy will be enabled for all the team members in this project; whenever a team member checks any source code in to TFSVC, the policy section will highlight the Code Analysis policy as below. TFS is a very extensible platform, so you can simply implement your own custom code analysis check-in policy; check this link for more details: http://msdn.microsoft.com/en-us/library/dd492668.aspx. But you also have to be aware of compatibility between different TFS versions; check http://msdn.microsoft.com/en-us/library/bb907157.aspx

    Run Code Analysis as part of Team Build

    With Team Foundation Build (TFBuild), you can create and manage build processes that automatically compile and test your applications and perform other important functions. Code analysis can be enabled in the build definition file by selecting the correct value for the build process parameter "Perform Code Analysis". Once configured, kick off your build definition to queue a new build; code analysis will run as part of the build workflow and you will be able to see code analysis warnings as part of the build report.

    Understand the Code Analysis results & learn how to fix them

    Now that we have gone through the code analysis configuration and the different ways of running it, let's go through the code analysis results: how to understand them and how to resolve them. The Code Analysis window in Visual Studio shows all the analysis results based on the rule sets you configured in the project file properties. Let's dig deep into what each result item contains:

    1. Check ID: the unique identifier for the rule. CheckId and Category are used for in-source suppression of a warning.
    2. Title: the title of the warning message.
    3. Description: a description of the problem or suggested fix.
    4. File Name: the file name and the line of code that violates the code analysis rule set.
    5. Category: the code analysis category for this error.
    6. Warning/Error: depends on how you configure it in the rule set; the default is the Warning level.
    7. Action:
       Copy: copies the warning information to the clipboard.
       Create Work Item: if you're connected to Team Foundation Server you can create a work item, most probably a Task or a Bug, and assign it to a developer to fix a certain code analysis warning.
       Suppress Message: there are times when you might decide not to fix a code analysis warning. You might decide that resolving the warning requires too much recoding in relation to the probability that the issue will arise in any real-world implementation of your code, or you might believe that the analysis used in the warning is inappropriate for the particular context. You can suppress individual warnings so that they no longer appear in the Code Analysis window. Two options are available:
       In Source inserts a SuppressMessage attribute in the source file above the method that generated the warning. This makes the suppression more discoverable.
       In Suppression File adds a SuppressMessage attribute to the GlobalSuppressions.cs file of the project. This can make the management of suppressions easier. Note that the SuppressMessage attribute added to GlobalSuppressions.cs also targets the method that generated the warning; it does not suppress the warning globally.

    Visual Studio makes it very easy to fix a code analysis warning: if you are not sure how to fix a warning, click the Check ID hyperlink and you'll be directed to MSDN online (or a local copy, based on the configuration you chose while installing Visual Studio), where you will find all the information about the warning, including how to fix it.

    Create a Custom Code Analysis Rule Set

    The Microsoft standard rule sets provide groups of rules that are organized by function and depth. For example, the Microsoft Basic Design Guidelines Rules and the Microsoft Extended Design Guidelines Rules contain rules that focus on usability and maintainability issues, with added emphasis on naming rules in the Extended rule set. You can create and modify a custom rule set to meet specific project needs associated with code analysis. To create a custom rule set, you open one or more standard rule sets in the rule set editor. Creating and modifying a custom rule set requires Visual Studio Premium or Ultimate. Check How to: Create a Custom Rule Set on MSDN for more details: http://msdn.microsoft.com/en-us/library/dd264974.aspx

    Q & A

    Visual Studio static code analysis vs. FxCop vs. StyleCop: http://www.excella.com/blog/stylecop-vs-fxcop-difference-between-code-analysis-tools/
    Code analysis for SharePoint apps and SPDisposeCheck? This post lists some of the rule sets you can run specifically for SharePoint applications and how to integrate SPDisposeCheck as well.
    Code analysis for SQL Server database projects? This post illustrates how to run static code analysis on T-SQL through SSDT.
    ReSharper 8 vs. Visual Studio 2013? This document lists some of the features that are provided by ReSharper 8 but are missing or not as fully implemented in Visual Studio 2013.
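    For concreteness, a minimal sketch of the in-source suppression described above, reusing the Microsoft.Design rule quoted earlier (the type and the Justification text are invented for illustration):

        using System.Diagnostics.CodeAnalysis;

        public class Widget<T>
        {
            // CheckId and Category identify the rule being suppressed
            [SuppressMessage("Microsoft.Design", "CA1000:DoNotDeclareStaticMembersOnGenericTypes",
                Justification = "The factory method is the documented entry point for this type.")]
            public static Widget<T> Create()
            {
                return new Widget<T>();
            }
        }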
    References

    A Few Billion Lines of Code Later: Using Static Analysis to Find Bugs in the Real World: http://cacm.acm.org/magazines/2010/2/69354-a-few-billion-lines-of-code-later/fulltext
    What is New in Code Analysis for Visual Studio 2013: http://blogs.msdn.com/b/visualstudioalm/archive/2013/07/03/what-is-new-in-code-analysis-for-visual-studio-2013.aspx
    Analyze the code quality of Windows Store apps using Visual Studio static code analysis: http://msdn.microsoft.com/en-us/library/windows/apps/hh441471.aspx
    [Hands-on-lab] Using Code Analysis with Visual Studio 2012 to Improve Code Quality: http://download.microsoft.com/download/A/9/2/A9253B14-5F23-4BC8-9C7E-F5199DB5F831/Using%20Code%20Analysis%20with%20Visual%20Studio%202012%20to%20Improve%20Code%20Quality.docx

    Originally posted at "Hosam Kamel | Developer & Platform Evangelist": http://blogs.msdn.com/hkamel

    Read the article

  • DevDays ‘10 The Netherlands day #1

    - by erwin21
    First day of DevDays 2010. I was looking forward to DevDays to see all the new things like VS2010, .NET 4.0, and MVC2. The lineup for this year is again better than the year before; there are 100+ sessions on all kinds of topics like Cloud, Database, Mobile, SharePoint, User Experience, Visual Studio, and Web. The first session of the day was a keynote by Anders Hejlsberg; he talked about the history and future of programming languages and gave his view of the trends and influences in programming languages today and in the future. The second talk that I followed was from the famous Scott Hanselman, who talked about the basics of ASP.NET MVC 2. Although it was billed as a 300-level session, it was more like a level-100 session, but Scott mentioned that at the beginning, and it was still interesting to see all the basic things about MVC like controllers, actions, routes, views, models, etc. After lunch, the third talk for me was about moving ASP.NET WebForms applications to MVC, from Fritz Onion. In this session he converted an example WebForms application part by part into an MVC application, giving some interesting tips and tricks and showing how to solve some issues that occur while converting. The fourth talk was about the difference between LINQ to SQL and the ADO.NET Entity Framework, from Kurt Claeys. He gave a good understanding of these two options; the demos were in LINQ to SQL and the Entity Framework, and the goal was a good understanding of when and where to use each option. The last talk of the day was also from Scott Hanselman; he went deeper into the features of ASP.NET MVC 2 and gave some interesting "ninja black belt" tips: tips about the tooling, the new MVC 2 HTML helper methods, other view engines (like NHaml and Spark), and T4 templating. With these tips we can be more productive and create web applications better and faster. It was a long and interesting day; I am looking forward to day #2.

    Read the article

  • Introducing SSIS Reporting Pack for SQL Server code-named Denali

    - by jamiet
    In recent blog posts I have introduced the new SSIS Catalog that is forthcoming in SQL Server code-named Denali:

    What's new in SSIS in Denali
    Introduction to SSIS Projects in Denali
    Parameters in SSIS in Denali
    SSIS Server, Catalogs, Environments and Environment Variables in SSIS in Denali

    The SSIS Catalog is responsible for executing SSIS packages and also for capturing the metadata from those executions. However, at the time of writing there is no mechanism provided to view, analyse and drill into that metadata, and that is the reason that I am, in this blog post, introducing a suite of SSIS Catalog reports called the SSIS Reporting Pack, which you can download from my SkyDrive at http://cid-550f681dad532637.office.live.com/self.aspx/Public/SSIS%20Reporting%20Pack/SSISReportingPack%20v0.1.zip. In this first release the SSIS Reporting Pack includes five reports:

    Catalog – A high-level summary of all activity in the Catalog
    Folders – A summary of activity in each Catalog Folder
    Folder – Project-level activity per single Folder
    Executions – A visualisation of all executions per Folder/Project/Package/Environment or subset thereof
    Execution – Information about an individual execution

    Here is a screenshot of the Executions report: Notice that the SSIS Reporting Pack provides a visual overview of all executions in the Catalog. Each execution is represented as a bar on the bar chart; the success or otherwise of each execution is indicated by the colour of the bar, and the execution time is indicated by the bar height. I have recorded a video that gives an overview of the SSIS Reporting Pack, which I have embedded below. If you are having any trouble viewing the video, go see it at http://vimeo.com/17617974. I must stress that this is a very early version of the SSIS Reporting Pack and I am expecting it to change a lot over the coming year. I am very keen to get some feedback about this, specifically:

    let me know if anything does not work as you expect
    give me your feature requests

    The easiest way to get hold of me for now is within the comments section of this blog post. That's all for now. I hope the SSIS Reporting Pack proves useful and I look forward to hearing your feedback. Lastly, that download link again: http://cid-550f681dad532637.office.live.com/self.aspx/Public/SSIS%20Reporting%20Pack/SSISReportingPack%20v0.1.zip. @jamiet

    Read the article

  • Web application development over C++ development

    - by learnerforever
    Hi, I am a CS undergrad and CS grad. In college I used to program in C/C++/Java, and I have pretty much stuck to the same skill set in industry, with 3 years of experience. I like thinking, reading, applying logic, and designing data structures, but I have little patience for debugging large C++ codebases and dealing with low-level problems like memory faults, memory corruption, and compilation/linking issues. My confidence in programming is getting lower because of this, but I like being in a technical field. Does web application development - the LAMP stack (Linux, Apache, MySQL, PHP), CSS, scripting, among other web-development-related skills - need less patience with debugging and understanding of low-level stuff, while still exercising your analytical/logical skills? The opportunities in web application development also look more plentiful. Things like scalability and most of the stuff that Google does fascinate me, except for the patience needed to deal with C++ debugging; I make blunders while coding. How does the field look outside C++? I am beginning to wonder if, as a female, by moving to web application development I can better manage work-life balance. I have seen relatively fewer women in C++ than in Java/.NET, though I'm not very sure about web-related work. Also, what are the other hot technologies being used in web application development? LAMP and CSS are things I know only vaguely; I'm not in touch with the keywords going around in this area. Please help!!

    Read the article

  • Chennai Metro Water [Indian Websites]

    - by samsudeen
    If you are living in Chennai you will definitely have this question in your mind: "Will Chennai survive this summer without any water crisis?" The Chennai Metro Water board maintains a website where you can find all the details about water storage and supply. This site has all the information about the water supply that matters to Chennai residents. Here are some of the uses of this site. Water Sewer Application Status: if you are a new resident and have applied for a water/sewer connection, you can check your application (Water Sewer Application Status) by entering your WA No (Water Application No); you can also download all the required application forms. Register Complaints: you can register complaints (Register Complaints) about water supply, defective water meters, and sewage blockages online. Pay Water & Sewage Tax: you can pay your water & sewage tax (Pay Tax) online, including any dues/arrears outstanding. Lake Level: this site also maintains the current (Lake Level) and historic water levels of the Chennai water reservoirs, including the daily inflow/outflow of water; you can also see the daily/monthly rainfall levels at these reservoirs from 1965 onwards. I hope this information is a little useful for those who are planning or building a new house in the Chennai corporation.

    Read the article

  • Revisiting ANTS Performance Profiler 7.4

    - by James Michael Hare
    Last year, I did a small review of the ANTS Performance Profiler 6.3; now that it's a year later and a major version number higher, I thought I'd revisit the review and revise my last post. This post will take the same examples as the original post and update them to show what's new in version 7.4 of the profiler.

    Background

    A performance profiler's main job is to keep track of how much time is typically spent in each unit of code. This helps when we have a program that is not running at the performance we expect and we want to know where the program is experiencing issues. There are many profilers out there of varying capabilities. Red Gate's typically seem to be very easy to jump in and get started with, with very little training required. So let's dig into the Performance Profiler. I've constructed a very crude program with some obvious inefficiencies. It's a simple program that generates random order numbers (or really, any unique identifier), adds each to a list, sorts the list, then finds the max and min number in the list. Ignore the fact that it's very contrived and obviously inefficient; we just want to use it as an example to show off the tool:

        // our test program
        public static class Program
        {
            // the number of iterations to perform
            private static int _iterations = 1000000;

            // The main method that controls it all
            public static void Main()
            {
                var list = new List<string>();

                for (int i = 0; i < _iterations; i++)
                {
                    var x = GetNextId();

                    AddToList(list, x);

                    var highLow = GetHighLow(list);

                    if ((i % 1000) == 0)
                    {
                        Console.WriteLine("{0} - High: {1}, Low: {2}", i, highLow.Item1, highLow.Item2);
                        Console.Out.Flush();
                    }
                }
            }

            // gets the next order id to process (random for us)
            public static string GetNextId()
            {
                var random = new Random();
                var num = random.Next(1000000, 9999999);
                return num.ToString();
            }

            // add it to our list - very inefficiently!
            public static void AddToList(List<string> list, string item)
            {
                list.Add(item);
                list.Sort();
            }

            // get high and low of order id range - very inefficiently!
            public static Tuple<int,int> GetHighLow(List<string> list)
            {
                return Tuple.Create(list.Max(s => Convert.ToInt32(s)), list.Min(s => Convert.ToInt32(s)));
            }
        }

    So let's run it through the profiler and see what happens!

    Visual Studio Integration

    First, let's look at how the ANTS profilers integrate with Visual Studio's menu system. Once you install the ANTS profilers, you will get an ANTS menu item with several options. Notice that you can either Profile Performance or Launch ANTS Performance Profiler. These sound similar but achieve two slightly different actions:

    Profile Performance: this immediately launches the profiler with all defaults selected to profile the active project in Visual Studio.
    Launch ANTS Performance Profiler: this launches the profiler much the same way as starting it from the Start Menu. The profiler will pre-populate the application and path information, but allow you to change the settings before beginning the profile run.

    So really, the main difference is that Profile Performance immediately begins profiling with the default selections, whereas Launch ANTS Performance Profiler allows you to change the defaults and attach to an already-running application.

    Let's Fire it Up!
    So when you fire up ANTS, either via the Start Menu or the Launch ANTS Performance Profiler menu in Visual Studio, you are presented with a very simple dialog to get you started. Notice you can choose from many different options for application type. You can profile executables, services, web applications, or just attach to a running process. In fact, in version 7.4 we see two new options added:

    ASP.NET Web Application (IIS Express)
    SharePoint web application (IIS)

    So this gives us an additional way to profile ASP.NET applications, and the ability to profile SharePoint applications as well. You can also choose your level of detail in the Profiling Mode drop-down. If you choose Line-level and method-level timings detail, you will get a lot more detail on the method durations, but this will also slow down profiling somewhat. If you really need the profiler to be as unintrusive as possible, you can change it to Sample method-level timings. This performs very light profiling, where basically the profiler collects timings of a method by examining the call stack at given intervals. Which mode you choose depends a lot on how much detail you need to find the issue and how sensitive your program's issues are to timing. So for our example, let's just go with the line and method timing detail. We check that all the options are correct (if you launch from VS2010, the executable and path are filled in already), and fire it up by clicking the [Start Profiling] button.

    Profiling the Application

    Once you start profiling the application, you will see a real-time graph of CPU usage that indicates how much of the CPU(s) on your system your application is using. During this time, you can select segments of the graph and bookmark them, giving them mnemonic names. This can be useful if you want to compare performance in one part of the run to another. Notice that once you select a block, it will give you the call-tree breakdown for that selection only, and the relative performance of those calls. Once you feel you have collected enough information, you can click [Stop Profiling] to stop the application run and information collection and begin a more thorough analysis.

    Analyzing Method Timings

    So now that we've halted the run, we can look around the GUI and see what we can see. By default, the times are shown as a percentage of the total run time of the application, though you can change this in the View menu to milliseconds, ticks, or seconds. This won't affect the percentages of methods; it only affects the units in which the times are shown. Notice also that the major hotspot seems to be in a method without source. ANTS Profiler filters these out by default, but you can right-click on the line and remove the filter to see more detail. This proves especially handy when a bottleneck is due to a method in the BCL. So now that we've removed the filter, we see a bit more detail. In addition, ANTS Performance Profiler gives you the ability to decompile the methods without source so that you can dive even deeper, though typically this isn't necessary for our purposes. When looking at timings, there are generally two types of timings for each method call:

    Time: the time spent ONLY in this method, not including calls this method makes to other methods.
    Time With Children: the total time spent in both this method AND the calls this method makes to other methods.
    In other words, Time tells you how much work is being done exclusively in this method, and Time With Children tells you how much work is being done inclusively in this method and everything it calls. You can also choose to display the methods in a tree or in a grid. The tree view is the default; it shows the method calls arranged as the tree of all method calls and the parent methods that called them. This is useful because when you find a hot-spot method, you can see who is calling it to determine whether the problem is the method itself or that it is being called too many times. The grid view represents each method only once, with its totals, and is useful for quickly seeing which method is the trouble spot. In addition, you can choose to display Methods with source, which are generally the methods you wrote (as opposed to native or BCL code), or Any Method, which shows not only your methods but also native calls, JIT overhead, synchronization waits, etc. So these are just two ways of viewing the same data, and you're free to choose the organization that best suits what information you are after.

    Analyzing Method Source

    If we look at the timings above, we see that our AddToList() method (and in particular, its call to the List<T>.Sort() method in the BCL) is the hot spot in this analysis. If ANTS sees a method that is consuming the most time, it will flag it as a hot spot to help call out potential areas of concern. This doesn't mean the other statistics aren't meaningful, but the hot spot is most likely going to be your biggest bang-for-the-buck to concentrate on. So let's select the AddToList() method and see what it shows in the source window below. Notice the source breakout in the bottom pane when you select a method (from either tree or grid view). This shows you the timings in this method per line of code, which gives you a major indicator of where the trouble spot in this method is. So in this case, we see that performing a Sort() on the List<T> after every Add() is killing our performance! Of course, this was a very contrived "duh" moment, but you'd be surprised how many performance issues become "duh" moments. Note that this one line is taking up 86% of the execution time of this application! If we eliminate this bottleneck, we should see a drastic improvement in performance. So to fix this, if we still wanted to maintain the List<T>, we'd have many options, including: delaying the Sort() until after all the Add() calls; using a SortedSet, SortedList, or SortedDictionary, depending on which is most appropriate; or forgoing the sorting altogether and using a Dictionary.

    Rinse, Repeat!

    So let's just change all instances of List<string> to SortedSet<string> and run this again through the profiler. Now we see the AddToList() method is no longer our hot spot, but the Max() and Min() calls are! This is good because we've eliminated one hot spot and now we can try to correct this one as well. As before, we can then optimize this part of the code, possibly by taking advantage of the fact that the collection is now sorted and returning the first and last elements (see the sketch below). We can then rinse and repeat this process until we have eliminated as many bottlenecks as possible.

    Calls by Web Request

    Another feature that was added recently is the ability to view .NET methods grouped by the HTTP requests that caused them to run. This can be helpful in determining which pages, web services, etc. are causing hot spots in your web applications.
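    For concreteness, a sketch (not from the original post) of that last optimization: SortedSet<T> exposes Min and Max properties directly, and since every generated id has the same digit count, lexicographic order matches numeric order here:

        // Hypothetical rewrite of GetHighLow for the SortedSet<string> version
        public static Tuple<int, int> GetHighLow(SortedSet<string> set)
        {
            // Min/Max are maintained by the underlying tree - no full scan or re-sort
            return Tuple.Create(Convert.ToInt32(set.Max), Convert.ToInt32(set.Min));
        }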
    Summary

    If you like the other ANTS tools, you'll like the ANTS Performance Profiler as well. It is extremely easy to use, with very little product knowledge required to get up and running. There are profilers built into the higher product lines of Visual Studio, of course, which are also powerful and easy to use. But for quickly jumping in and finding hot spots rapidly, Red Gate's Performance Profiler 7.4 is an excellent choice. Technorati Tags: Influencers, ANTS, Performance Profiler, Profiler

    Read the article

  • Friday Fun: Omega Crisis

    - by Mysticgeek
    Friday is here once again, and it's time to play a fun flash game on company time. Today we take a look at the space shooter Omega Crisis. At the start of the game you're given the basic story, defending the space outpost, and instructions on how to play. Controls are easy: just target the enemy and use the left mouse button to fire. After each level you're shown the results and how many Tech Points you've earned. The more Tech Points you earn, the better your chance of upgrading your weapons and base defense before the next level. You can also go into Manage Mode by hitting the space bar and select gunners and other types of weapons to help defend the outpost. Choose your mission from the timeline after successfully completing a mission. You can also use A, W, D, S to move around the map and see exactly where the enemy ships are coming from, which makes it easier to destroy them before they get too close to your base. This game is a lot of fun and is similar to the various "desktop defense" type games. If you're looking for a fun way to waste the afternoon and not look at TPS reports, Omega Crisis can get you through until the whistle blows. Play Omega Crisis

    Read the article
