Search Results

Search found 14719 results on 589 pages for 'optimization level'.

Page 193/589

  • Home Directory Folders

    - by George
    I am looking for a way to accomplish the following. Currently, users have home drives mapped via their AD profile as follows: \\fileserver\users\username. However, a user was once able to access \\fileserver\users and view everyone's folders, even though he had no access to their contents. This is not ideal, since we have people saving important stuff to their drives. How can I restrict users' permissions and views to THEIR home drives only? I also saw this solution, but I'm not sure whether it would apply to me:
    ================================================================================
    Share-level permissions: Everyone full permission, and remove all others.
    On the file/folder level, give Authenticated Users special permissions on the root of \\server\homeshare\ and check the boxes next to the following: Traverse folder / execute file, List folder / read data, Read attributes, Read extended attributes. Leave all other boxes unchecked and make sure you apply "This folder only".
    Give Domain Admins full rights and apply "This folder, subfolders, and files".
    This will block users from accessing other users' home directories. When you create a new user and set the home directory, the folder will be created for you with the correct permissions.
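    If you would rather script the quoted recommendation than click through the Security dialog, something along these lines should be close; this is a rough sketch only, with placeholder path and domain names, so test it on a copy of the share first:
        rem Authenticated Users: traverse/list/read-attribute rights on the root folder only
        icacls D:\HomeShare /grant "Authenticated Users:(X,RD,RA,REA)"
        rem Domain Admins: full control over this folder, subfolders and files
        icacls D:\HomeShare /grant "MYDOMAIN\Domain Admins:(OI)(CI)F"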

    Read the article

  • How do I add an Approver to SharePoint 2010?

    - by CompGeekess
    I am still new to SharePoint and am learning a lot, but I have run into a few hiccups, and here is one. I want to add an approver to SharePoint 2010 who has FULL CONTROL. My manager requested that I find out where the approval requests are going and redirect them to him (I have no idea where or how to find this out). Is this possible to do from Central Administration, or must I go into each site/subsite and set him as the approver that way? Googling only showed me how to approve workflows or how to create approvals, and my other resources didn't give much help either. So far I have gone into a few individual sites and set my manager and me up as approvers with full control, but I am uncertain whether this is the correct procedure or whether there is a better way to do it - for example, having the lower levels inherit from the higher level: set security at the highest level and cascade it to the child levels. Thank you.

    Read the article

  • Recovering database files from a corrupted VHD

    - by Apocalypse9
    We have a SQL Server hosted on a virtual machine. Our hosting company updated/restarted the server, and for some reason the virtual machines became unbootable. We've spoken to Microsoft and used a few higher-level tools to attempt to recover the virtual machines, but were unsuccessful. In browsing the file system, the database folder doesn't even appear. I'm wondering if there are any lower-level tools that might be able to find and copy the database files. As far as I know the physical hard drive is OK, so I'm hoping there may be some way to recover the files themselves even if the rest of the virtual machine's file system is a loss. Obviously we're in a bit of a bind, and any help/suggestions are very much appreciated.

    Read the article

  • Remote Desktop Encryption

    - by Kumar
    My client is RDP 6.1 (on Windows XP SP3) and the server is Windows Server 2003. I have installed an SSL certificate on the server for RDP. In the RDP settings (General tab), the encryption method is set to SSL/TLS 1.0 and the encryption level is set to "Client Compatible". I have the following questions: 1) In this case, is it guaranteed that all communication is encrypted even when I remotely log in to the server? I mean, is the password encrypted? 2) Does RDP always use some kind of encryption even if there is no SSL certificate installed on the server? In that case I do not see the security lock in the connection bar; when I set the encryption level to "High" I do see the security lock. I believe that communication in both cases will be encrypted. Is that true? Please reply to my questions. Thanks in advance, Kumar

    Read the article

  • Limit a process's relative (not absolute) processor consumption in Linux

    - by BobBanana
    What is the standard way in Linux to enforce a system policy that limits the relative CPU use of a single process? That is, on a quad-core machine, I never want a process to use more than 2 CPUs at once, even if the process creates more threads. I do not want an absolute time limit, just a relative limit so that one task cannot dominate the machine. This is also different from renice, which allows a process to use all the resources but just politely step aside if others need them too. ulimit is the usual resource-limiting tool, but it does not allow such CPU restrictions: it can limit the number of processes per user, or absolute CPU time, but it cannot restrict the maximum number of active threads of a single process. I've found a couple of user-level tools, like cpulimit, but not a system-level tool or setting. Does such a standard resource controller exist in Linux (Red Hat Enterprise Linux, if it matters)? If such a limit is imposed, how would a user identify it?
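    For what it's worth, a rough sketch of the two mechanisms most often suggested for this, CPU affinity and a cpuset cgroup (the cgroup mount point and the PID below are placeholders; older Red Hat releases may mount cgroups elsewhere or not at all):
        # pin a new process (and all threads it spawns) to CPUs 0 and 1
        taskset -c 0,1 ./long_running_task
        # or restrict an already running PID the same way
        taskset -cp 0,1 12345
        # a cpuset cgroup expresses the same limit as a system-level policy
        mkdir /sys/fs/cgroup/cpuset/limited
        echo 0-1   > /sys/fs/cgroup/cpuset/limited/cpuset.cpus
        echo 0     > /sys/fs/cgroup/cpuset/limited/cpuset.mems
        echo 12345 > /sys/fs/cgroup/cpuset/limited/tasks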

    Read the article

  • Which permissions need to be assigned to a normal user to allow him to change SHARE permissions?

    - by guest
    As in the subject, I am wondering which permissions (if it is possible at all) I need to assign to a regular user so that he can modify share permissions - e.g. to add another user with full control, or to deny someone read access to the folder - at the share level. I know that this is possible through NTFS permissions, but I am wondering whether it is also possible at the share level. Any ideas how to do that? Or does only the administrator/creator/person who shared the folder have this access? I am using Windows Server 2003. Thanks

    Read the article

  • How can I see if apache is overloaded and dropping or not accepting connections?

    - by cat pants
    Basically I just want to see whether Apache is handling the current level of high traffic or whether I need to tune it to handle more connections. (I have found plenty of information on the actual tuning, so no help needed there.) I know it has been dropping or not accepting connections earlier today, but I'm not seeing anything in the error logs. Is the expected behavior to log a 503 in the error log if Apache cannot accept more connections? If so, what error logging level do I need in order to see these? And what is the correct terminology: dropping connections or not accepting connections? The MPM is prefork, the OS is Linux, and the Apache version is 2.2.15.
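    One rough way to check this outside the logs is to compare the number of busy connections against the prefork limit Apache is configured with; a sketch, assuming a Red Hat style layout under /etc/httpd (mod_status gives a nicer live view if it is enabled):
        # established client connections on port 80 right now
        ss -tn state established '( sport = :80 )' | wc -l
        # the prefork ceiling Apache was started with
        grep -ri 'MaxClients' /etc/httpd/conf /etc/httpd/conf.d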

    Read the article

  • What are the recognized ways to increase the size of the RAID array online/offline?

    - by user149509
    Is it possible, in theory, to increase the size of a RAID array of any level just by adding new drive(s)? The variant "back up all the data - delete the old array - add/replace disks - create a new array - restore the data" is obvious, so what are the other options? Does it depend only on the RAID level, only on the implementation in the RAID controller, or on both? Does adding new disks to a striped array necessarily lead to a rebuild of the array, with the stripes redistributed onto the new drives? What steps should be taken to increase the size of a RAID array in online and offline scenarios? RAID-5 and RAID-10 are especially interesting. I would like to see the big picture.
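    As one concrete example, Linux md software RAID can grow several array levels online; a rough sketch for an md array (device names are placeholders, hardware controllers have their own vendor tools, and a backup beforehand is still strongly advised):
        # add the new disk as a spare, then reshape the array onto it
        mdadm --add /dev/md0 /dev/sdd1
        mdadm --grow /dev/md0 --raid-devices=4
        # watch the reshape finish, then grow the filesystem (assuming ext3/ext4 here)
        cat /proc/mdstat
        resize2fs /dev/md0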

    Read the article

  • Is it safe to set MySQL isolation to "Read Uncommitted" (dirty reads) for typical Web usage? Even with replication?

    - by Continuation
    I'm working on a website with a typical CRUD web usage pattern, similar to blogs or forums, where users create/update content and other users read it. It seems like it's OK to set the database's isolation level to "Read Uncommitted" (dirty reads) in this case. My understanding of the general drawback of "Read Uncommitted" is that a reader may read uncommitted data that will later be rolled back. In a CRUD blog/forum usage pattern, will there ever be any rollbacks? And even if there are, is there any major problem with reading uncommitted data? Right now I'm not using any replication, but in the future if I want to use replication (row-based, not statement-based), will a "Read Uncommitted" isolation level prevent me from doing so? What do you think? Has anyone tried using "Read Uncommitted" on their RDBMS?
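    For reference, this is roughly what switching the level looks like in MySQL, per session or server-wide (a sketch of the mechanics only, not a recommendation either way):
        # per session (SESSION scope only affects the connection that runs it)
        mysql -e "SET SESSION TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;"
        # server-wide default for new connections
        mysql -e "SET GLOBAL TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;"
        # or permanently in my.cnf under [mysqld]:
        #   transaction-isolation = READ-UNCOMMITTED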

    Read the article

  • Kill child process when the parent exits

    - by kolypto
    I'm preparing a script for Docker, which allows only one top-level process; that process should receive the signals so we can stop it. Therefore, I have a script like this: one application writes to syslog (a bash loop in this sample), and the other one just prints it.
        #! /usr/bin/env bash
        set -eu
        tail -f /var/log/syslog &
        exec bash -c 'while true ; do logger aaaaaaaaaaaaaaaaaaa ; sleep 1 ; done'
    Almost solved: when the top-level bash process gets SIGTERM it exits, but tail -f continues to run. How do I instruct tail -f to exit when the parent process exits? E.g. it should also get the signal. Note: I can't use bash traps, since exec on the last line replaces the process completely.
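    One hedged option, assuming GNU coreutils tail: because exec keeps the same PID, tail's --pid flag can watch the top-level process directly, so no trap is needed:
        #! /usr/bin/env bash
        set -eu
        # tail exits by itself once the process with this PID dies;
        # exec below replaces the shell but keeps the same PID ($$)
        tail --pid=$$ -f /var/log/syslog &
        exec bash -c 'while true ; do logger aaaaaaaaaaaaaaaaaaa ; sleep 1 ; done'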

    Read the article

  • Galaxy Tab 3 producing continuous LightSensor error in LogCat

    - by Richard Tingle
    I am using a Galaxy Tab 3 as a test device for writing an Android app. As such, I'm interested in the output of LogCat, which is being filled with these error-level messages. The device itself appears to work correctly: apps which rely on the light sensor respond to it correctly, and the number in the error itself goes down if the light sensor is obscured. If I wasn't using it to develop apps I wouldn't even be aware of the issue, but I believe it is an issue with the device itself, not my app: simply plugging the Tab 3 into the computer and using Eclipse (ADT) to look at the LogCat without any app running leads to these errors being shown. I know I could filter the LogCat to ignore these errors, but inconvenience aside, they concern me. A sample of the LogCat is below (it generates errors continuously). This is on verbose, so it includes some debug-level (D/) messages as well as the error-level (E/) messages. How can I correct the device so that it no longer generates these errors?
    06-11 10:08:45.789: E/LightSensor(377): LightSensor::readEvents mPendingEvent.light = 14
    06-11 10:08:45.992: E/LightSensor(377): LightSensor::readEvents mPendingEvent.light = 14
    06-11 10:08:46.195: E/LightSensor(377): LightSensor::readEvents mPendingEvent.light = 14
    06-11 10:08:46.398: E/LightSensor(377): LightSensor::readEvents mPendingEvent.light = 14
    06-11 10:08:46.601: E/LightSensor(377): LightSensor::readEvents mPendingEvent.light = 14
    06-11 10:08:46.804: E/LightSensor(377): LightSensor::readEvents mPendingEvent.light = 14
    06-11 10:08:47.007: E/LightSensor(377): LightSensor::readEvents mPendingEvent.light = 14
    06-11 10:08:47.210: E/LightSensor(377): LightSensor::readEvents mPendingEvent.light = 14
    06-11 10:08:47.414: E/LightSensor(377): LightSensor::readEvents mPendingEvent.light = 14
    06-11 10:08:47.617: E/LightSensor(377): LightSensor::readEvents mPendingEvent.light = 14
    06-11 10:08:47.820: E/LightSensor(377): LightSensor::readEvents mPendingEvent.light = 14
    06-11 10:08:48.023: E/LightSensor(377): LightSensor::readEvents mPendingEvent.light = 14
    06-11 10:08:48.039: D/dalvikvm(15201): GC_CONCURRENT freed 1947K, 17% free 16973K/20359K, paused 13ms+13ms, total 50ms
    06-11 10:08:48.226: E/LightSensor(377): LightSensor::readEvents mPendingEvent.light = 14
    06-11 10:08:48.429: E/LightSensor(377): LightSensor::readEvents mPendingEvent.light = 13
    06-11 10:08:48.632: E/LightSensor(377): LightSensor::readEvents mPendingEvent.light = 13
    06-11 10:08:48.632: D/STATUSBAR-NetworkController(472): refreshSignalCluster: data=0 bt=false
    06-11 10:08:48.835: E/LightSensor(377): LightSensor::readEvents mPendingEvent.light = 14
    06-11 10:08:49.039: E/LightSensor(377): LightSensor::readEvents mPendingEvent.light = 14
    06-11 10:08:49.242: E/LightSensor(377): LightSensor::readEvents mPendingEvent.light = 14
    06-11 10:08:49.445: E/LightSensor(377): LightSensor::readEvents mPendingEvent.light = 13
    06-11 10:08:49.632: D/STATUSBAR-NetworkController(472): refreshSignalCluster: data=0 bt=false
    06-11 10:08:49.648: E/LightSensor(377): LightSensor::readEvents mPendingEvent.light = 14
    06-11 10:08:49.851: E/LightSensor(377): LightSensor::readEvents mPendingEvent.light = 13

    Read the article

  • Tomcat directly serve static (css, js) files shared by multiple applications

    - by Josvic Zammit
    I'm using the ExtJS framework, which has a large set of js and css files that are used for all apps. I intend to share these between a number of web applications (different war files). For this reason I would like to serve the ExtJS js and css directly from the web server, in my case Tomcat 6, which can be used to serve static files, as in this helpful link. Therefore I put my files under /var/lib/tomcat6/webapps/ROOT/extjs/. The static files that are directly under that directory are served correctly, e.g. /extjs/ext.js correctly serves the file at /var/lib/tomcat6/webapps/ROOT/extjs/ext.js. However, files in lower-level directories, for example /extjs/welcome/css/welcome.css, which should serve the file at /var/lib/tomcat6/webapps/ROOT/extjs/welcome/css/welcome.css, return a 404. TL;DR: Tomcat serves static files only in the top-level directory; a 404 is returned for files deeper in the hierarchy. Config file contents: server.xml application's web.xml

    Read the article

  • Accessing a broken mdadm raid

    - by CarstenCarsten
    Hi! I used a Western Digital MyBookWorld (a SOHO NAS running Linux) as a backup for my Linux box. Suddenly, the MyBookWorld no longer boots up. So I opened the box, removed the hard disk, put it into an external USB HDD case, and connected it to my Linux box.
    [ 530.640301] usb 2-1: new high speed USB device using ehci_hcd and address 3
    [ 530.797630] scsi7 : usb-storage 2-1:1.0
    [ 531.794844] scsi 7:0:0:0: Direct-Access WDC WD75 00AAKS-00RBA0 PQ: 0 ANSI: 2
    [ 531.796490] sd 7:0:0:0: Attached scsi generic sg3 type 0
    [ 531.797966] sd 7:0:0:0: [sdc] 1465149168 512-byte logical blocks: (750 GB/698 GiB)
    [ 531.800317] sd 7:0:0:0: [sdc] Write Protect is off
    [ 531.800327] sd 7:0:0:0: [sdc] Mode Sense: 38 00 00 00
    [ 531.800333] sd 7:0:0:0: [sdc] Assuming drive cache: write through
    [ 531.803821] sd 7:0:0:0: [sdc] Assuming drive cache: write through
    [ 531.803836] sdc: sdc1 sdc2 sdc3 sdc4
    [ 531.815831] sd 7:0:0:0: [sdc] Assuming drive cache: write through
    [ 531.815842] sd 7:0:0:0: [sdc] Attached SCSI disk
    The dmesg output looks normal, but I was wondering why the hard disk was not mounted at all, and why there are 4 different partitions on it. fdisk showed the following:
    root@ubuntu:/home/ubuntu# fdisk /dev/sdc
    WARNING: DOS-compatible mode is deprecated. It's strongly recommended to switch off the mode (command 'c') and change display units to sectors (command 'u').
    Command (m for help): p
    Disk /dev/sdc: 750.2 GB, 750156374016 bytes
    255 heads, 63 sectors/track, 91201 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk identifier: 0x00007c00
    Device Boot Start End Blocks Id System
    /dev/sdc1 4 369 2939895 fd Linux raid autodetect
    /dev/sdc2 370 382 104422+ fd Linux raid autodetect
    /dev/sdc3 383 505 987997+ fd Linux raid autodetect
    /dev/sdc4 506 91201 728515620 fd Linux raid autodetect
    Oh no! Everything seems to be set up as mdadm software RAID. Calling mdadm --examine on the different partitions seems to confirm that. I think the only partition I am interested in is /dev/sdc4 (because it is the largest), but nevertheless I called mdadm --examine on every partition.
    root@ubuntu:/home/ubuntu# mdadm --examine /dev/sdc1
    /dev/sdc1:
    Magic : a92b4efc
    Version : 00.90.00
    UUID : 5626a2d8:070ad992:ef1c8d24:cd8e13e4
    Creation Time : Wed Feb 20 00:57:49 2002
    Raid Level : raid1
    Used Dev Size : 2939776 (2.80 GiB 3.01 GB)
    Array Size : 2939776 (2.80 GiB 3.01 GB)
    Raid Devices : 2
    Total Devices : 1
    Preferred Minor : 1
    Update Time : Sun Nov 21 11:05:27 2010
    State : clean
    Active Devices : 1
    Working Devices : 1
    Failed Devices : 1
    Spare Devices : 0
    Checksum : 4c90bc55 - correct
    Events : 16682
    Number Major Minor RaidDevice State
    this 0 8 1 0 active sync /dev/sda1
    0 0 8 1 0 active sync /dev/sda1
    1 1 0 0 1 faulty removed
    root@ubuntu:/home/ubuntu# mdadm --examine /dev/sdc2
    /dev/sdc2:
    Magic : a92b4efc
    Version : 00.90.00
    UUID : 9734b3ee:2d5af206:05fe3413:585f7f26
    Creation Time : Wed Feb 20 00:57:54 2002
    Raid Level : raid1
    Used Dev Size : 104320 (101.89 MiB 106.82 MB)
    Array Size : 104320 (101.89 MiB 106.82 MB)
    Raid Devices : 2
    Total Devices : 1
    Preferred Minor : 2
    Update Time : Wed Oct 27 20:19:08 2010
    State : clean
    Active Devices : 1
    Working Devices : 1
    Failed Devices : 1
    Spare Devices : 0
    Checksum : 55560b40 - correct
    Events : 9884
    Number Major Minor RaidDevice State
    this 0 8 2 0 active sync /dev/sda2
    0 0 8 2 0 active sync /dev/sda2
    1 1 0 0 1 faulty removed
    root@ubuntu:/home/ubuntu# mdadm --examine /dev/sdc3
    /dev/sdc3:
    Magic : a92b4efc
    Version : 00.90.00
    UUID : 08f30b4f:91cca15d:2332bfef:48e67824
    Creation Time : Wed Feb 20 00:57:54 2002
    Raid Level : raid1
    Used Dev Size : 987904 (964.91 MiB 1011.61 MB)
    Array Size : 987904 (964.91 MiB 1011.61 MB)
    Raid Devices : 2
    Total Devices : 1
    Preferred Minor : 3
    Update Time : Sun Nov 21 11:05:27 2010
    State : clean
    Active Devices : 1
    Working Devices : 1
    Failed Devices : 1
    Spare Devices : 0
    Checksum : 39717874 - correct
    Events : 73678
    Number Major Minor RaidDevice State
    this 0 8 3 0 active sync
    0 0 8 3 0 active sync
    1 1 0 0 1 faulty removed
    root@ubuntu:/home/ubuntu# mdadm --examine /dev/sdc4
    /dev/sdc4:
    Magic : a92b4efc
    Version : 00.90.00
    UUID : febb75ca:e9d1ce18:f14cc006:f759419a
    Creation Time : Wed Feb 20 00:57:55 2002
    Raid Level : raid1
    Used Dev Size : 728515520 (694.77 GiB 746.00 GB)
    Array Size : 728515520 (694.77 GiB 746.00 GB)
    Raid Devices : 2
    Total Devices : 1
    Preferred Minor : 4
    Update Time : Sun Nov 21 11:05:27 2010
    State : clean
    Active Devices : 1
    Working Devices : 1
    Failed Devices : 1
    Spare Devices : 0
    Checksum : 2f36a392 - correct
    Events : 519320
    Number Major Minor RaidDevice State
    this 0 8 4 0 active sync
    0 0 8 4 0 active sync
    1 1 0 0 1 faulty removed
    If I read the output correctly, the second device was removed from each array because it was faulty. Is there ANY way to see the contents of the largest partition, or to somehow see which files are broken? I see that everything is RAID 1, which is only mirroring, so this should essentially be a normal partition. I am anxious about doing anything with mdadm, for fear of destroying the data on the hard disk. I would be very thankful for any help.
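    Since each partition is half of a two-disk RAID 1 with old 0.90 metadata (stored at the end of the device), the data area of /dev/sdc4 is essentially a plain filesystem, so a read-only look is comparatively low-risk. A hedged sketch (take an image first if you have the space; ddrescue comes from the gddrescue package on Ubuntu):
        # safest: copy the partition to an image and work on that
        ddrescue /dev/sdc4 /big/disk/sdc4.img /big/disk/sdc4.log
        # assemble the degraded mirror read-only and mount it
        mdadm --assemble --run --readonly /dev/md4 /dev/sdc4
        mkdir -p /mnt/rescue
        mount -o ro /dev/md4 /mnt/rescue
        # with 0.90 superblocks a direct read-only mount of the member often works too
        mount -o ro /dev/sdc4 /mnt/rescue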

    Read the article

  • Nightly backups (and maybe other tasks) causing server alerts

    - by J. Pablo Fernández
    I have two independent alert notification systems for my servers. The server is a virtual machine on Linode, and one of the alerts comes from Linode. The other monitoring system we use is New Relic. They are both watching out for IO utilization. Every night I get alerts from both of them saying the server is using too much IO. I run quite a few tasks in the middle of the night, but the one I have confirmed can cause IO warnings is the backup, which is done by s3cmd sync. I tried ionice, but it still generates the warnings. Getting warnings every night reduces the efficacy of warnings when they happen for real. For Linode I could raise the level at which a warning is issued, but that might make the whole thing useless if the threshold ends up too high. What would be the proper solution for this?
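    For what it's worth, a sketch of the usual throttling approach before resorting to raising alert thresholds (device name and paths are placeholders; note that ionice's classes only take effect with the CFQ I/O scheduler, which may be why it appeared to do nothing):
        # ionice needs the cfq scheduler; check what the disk is actually using
        cat /sys/block/xvda/queue/scheduler
        # idle I/O class plus low CPU priority for the whole backup run
        ionice -c3 nice -n19 s3cmd sync /var/backups/ s3://my-backup-bucket/
        # newer s3cmd releases also accept --limit-rate, or wrap the command in
        # trickle (e.g. "trickle -s -u 2048 s3cmd sync ...") to cap upload bandwidth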

    Read the article

  • Amazon S3 - Storage Class and Server Side Encryption

    - by Steven
    Ahhh! I am using Amazon S3 for some low-priced storage to clear down our SAN. I created the bucket and created a root folder. I set the storage class to Standard and server-side encryption to AES. I started a copy job to move the files; some files copied over and I checked them: storage class showing Reduced Redundancy, encryption set to none. WTF? So I deleted all files and folders, manually created the folder structure, and again set the storage class and encryption level. I copied some files and bam, still showing (at the file level) Reduced Redundancy and no encryption. So my question is this: (a) is it really stored redundantly and encrypted, just not showing it properly (the root folder shows it, so how can the files not?), or (b) am I being a huge tool and missing something?

    Read the article

  • Determining percentage of students between certain grades

    - by dunc
    I have an Excel spreadsheet with the following data:
    # Student # KS2 Grade # Target # Expected 1 # Expected 2 # Expected 3 # FSM Status # Gifted & Talented #
    # User 1  #     4     #   6    #     7      #     5      #     6      #     Y      #         N         #
    # User 2  #     3     #   5    #     5      #     4      #     4      #     N      #         N         #
    # User 3  #     5     #   6    #     6      #     6      #     7      #     N      #         N         #
    # User 4  #     4     #   6    #     5      #     6      #     6      #     N      #         Y         #
    # User 5  #     5     #   7    #     7      #     6      #     7      #     N      #         N         #
    # User 6  #     3     #   4    #     4      #     4      #     4      #     N      #         N         #
    # User 7  #     3     #   4    #     5      #     3      #     4      #     Y      #         Y         #
    What I'd like to do is determine the percentage of students with certain levels, i.e. a range of levels. For instance, in the data above, I'd like to determine the % of all students that have a Target level of 5 - 7. I'd then like to also expand the formula to specify the % of Gifted & Talented students with a Target level of 5 - 7. Is this possible in Excel? If so, where do I start?
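    If it helps as a starting point, a COUNTIFS-based formula along these lines should work, assuming the Target values sit in C2:C8, the student names in A2:A8 and the Gifted & Talented flags in H2:H8 (adjust the ranges to your sheet and format the result cells as percentages):
        =COUNTIFS(C2:C8,">=5",C2:C8,"<=7")/COUNTA(A2:A8)
        =COUNTIFS(C2:C8,">=5",C2:C8,"<=7",H2:H8,"Y")/COUNTIFS(H2:H8,"Y")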

    Read the article

  • Why do programmers use or recommend Mac OS X?

    - by codingbear
    I've worked on both Mac and Windows for a while. However, I'm still having a hard time understanding why programmers enthusiastically choose Mac OS X over Windows and Linux. I know that there are programmers who prefer Windows and Linux, but I'm asking the programmers who would use only Mac OS X and nothing else, because they think Mac OS X is the greatest fit for programmers. Some might argue that Mac OS X has a beautiful UI and is *nix based, but Linux can offer that too. And although Windows is not *nix based, you can develop for pretty much any platform or language on it, except Cocoa/Objective-C. Is it the software that is offered only on Mac OS X? Is that really worth using a Mac for? Is it to develop iPhone apps? Is it because you need to buy a new version of Windows every 2 years (less backwards compatible)? I understand why people who work in the multimedia/entertainment industry would use Mac OS X; however, I don't see strong merits of Mac OS X over Windows. If you develop daily on a Mac and prefer Mac over anything else, can you give me a merit that Mac has over Windows/Linux - maybe something you can do on a Mac that cannot be done on Windows/Linux with the same level of ease? I'm not trying to do another Mac vs. Windows here. I tried to find things that I do on a Mac that cannot be done on Windows with the same level of ease, but I couldn't. So, I'm asking for some help.

    Read the article

  • IE 11 Developer Tools - changing console target to a different frameset or iframe

    - by vladimirl
    Originally posted on: http://geekswithblogs.net/vladimirl/archive/2013/10/25/ie-11-developer-tools---changing-console-target-to-a.aspx
    To change the current console iframe/frameset, type this into the console command line, where "contentIFrame" is the iframe/frameset name (there should not be quotes around the iframe name):
    console.cd(contentIFrame);
    To return to the top-level window, use cd() with no argument:
    console.cd();
    It took me some time to find out that this was possible in the IE 11 Developer tools. Everything is so much easier in Chrome. No drama. Sometimes I feel that I hate IE more and more.
    Reference (http://msdn.microsoft.com/en-us/library/ie/dn255006(v=vs.85).aspx#console_in): All script entered in the command line executes in the global scope of the currently selected window. If your webpage is built with a frameset or iframes, those frames load their own documents in their own windows. To target the window of a frameset frame or an iframe, use the cd() command, with the frame/iframe's name or ID attribute as the argument. For example, you have a frame with the name microsoftFrame and you're loading the Microsoft homepage in it.
    JavaScript: cd(microsoftFrame);
    Current window: www.microsoft.com/en-us/default.aspx
    Important: note that there were no quotes around the name of the frame. Only pass the unquoted name or ID value as the parameter. To return to the top-level window, use cd() with no argument.

    Read the article

  • Visual Studio 2013 Static Code Analysis in depth: What? When and How?

    - by Hosam Kamel
    In this post I'll illustrate in detail the following points:
    What is static code analysis?
    When to use it?
    Supported platforms
    Supported Visual Studio versions
    How to use it
    Run Code Analysis manually
    Run Code Analysis automatically
    Run Code Analysis while checking source code in to TFS version control (TFVC)
    Run Code Analysis as part of Team Build
    Understand the Code Analysis results & learn how to fix them
    Create your custom rule set
    Q & A
    References
    What is static code analysis?
    The Static Code Analysis feature of Visual Studio performs static analysis on code to help developers identify potential design, globalization, interoperability, performance, security, and a lot of other categories of potential problems, according to Microsoft's rules that mainly target best practices in writing code. There is a large set of those rules included with Visual Studio, grouped into different categories targeting specific coding issues like security, design, interoperability, globalization and others. Static here means analyzing the source code without executing it, and this type of analysis can be performed through automated tools (like the Visual Studio 2013 Code Analysis tool) or manually through code review, which is already supported in Visual Studio 2012 and 2013 (check the Using Code Review to Improve Quality video on Channel9). There is also dynamic analysis, which is performed on executing programs using software testing techniques such as code coverage, for example.
    When to use?
    Running the code analysis tool at regular intervals during your development process can enhance the quality of your software; examining your code for a set of common defects and violations is always a good programming practice. In addition, code analysis can find defects in your code that are difficult to discover through testing, allowing you to achieve a first-level quality gate for your application during the development phase, before you release it to the testing team.
    Supported platforms
    .NET Framework, native (C and C++), database applications.
    Supported Visual Studio versions
    All editions of Visual Studio starting with Visual Studio 2013 (except Visual Studio Test Professional); check Feature comparisons. Creating and modifying a custom rule set requires Visual Studio Premium or Ultimate.
    How to use?
    Code Analysis can be run manually at any time from within the Visual Studio IDE, or even set up to run automatically as part of a Team Build or a check-in policy for Team Foundation Server.
    Run Code Analysis manually
    To run code analysis manually on a project, on the Analyze menu, click Run Code Analysis on your project, or simply right-click the project name in Solution Explorer and choose Run Code Analysis from the context menu.
    Run Code Analysis automatically
    To run code analysis each time that you build a project, select Enable Code Analysis on Build on the project's property page.
    Run Code Analysis while checking source code in to TFS version control (TFVC)
    Team Foundation Version Control (TFVC) provides a way for organizations to enforce practices that lead to better code and more efficient group development through check-in policies, which are rules that are set at the team project level and enforced on developer computers before code is allowed to be checked in.
    (This is available only if you're using Team Foundation Server.) Required permissions on Team Foundation Server: you must have the Edit project-level information permission set to Allow; typically your account must be part of Project Administrators or Project Collection Administrators. For more information about Team Foundation permissions check http://msdn.microsoft.com/en-us/library/ms252587(v=vs.120).aspx
    In Team Explorer, right-click the team project name, point to Team Project Settings, and then click Source Control. In the Source Control dialog box, select the Check-in Policy tab. Click Add to create a new check-in policy, or double-click the existing Code Analysis item in the Policy Type list to change the policy. Check or uncheck the policy options based on the configuration you need, as illustrated below:
    Enforce check-in to only contain files that are part of current solution: code analysis can run only on files specified in solution and project configuration files. This policy guarantees that all code that is part of a solution is analyzed.
    Enforce C/C++ Code Analysis (/analyze): requires that all C or C++ projects be built with the /analyze compiler option to run code analysis before they can be checked in.
    Enforce Code Analysis for Managed Code: requires that all managed projects run code analysis and build before they can be checked in.
    Check the Code analysis rule set reference on MSDN.
    What is a rule set? A rule set is a group of code analysis rules, like the example below, where Microsoft.Design is the rule set name and "Do not declare static members on generic types" is the code analysis rule.
    Once you have configured the analysis rules, the policy is enabled for all the team members in this project; whenever a team member checks any source code in to TFVC, the policy section will highlight the Code Analysis policy as below.
    TFS is a very extensible platform, so you can simply implement your own custom Code Analysis check-in policy; check this link for more details: http://msdn.microsoft.com/en-us/library/dd492668.aspx. You also have to be aware of compatibility between different TFS versions; check http://msdn.microsoft.com/en-us/library/bb907157.aspx
    Run Code Analysis as part of Team Build
    With Team Foundation Build (TFBuild), you can create and manage build processes that automatically compile and test your applications, and perform other important functions. Code Analysis can be enabled in the build definition by selecting the correct value for the build process parameter "Perform Code Analysis". Once configured, kick off your build definition to queue a new build; Code Analysis will run as part of the build workflow and you will be able to see code analysis warnings as part of the build report.
    Understand the Code Analysis results & learn how to fix them
    Now that you have gone through the Code Analysis configuration and the different ways of running it, we will go through the Code Analysis results: how to understand them and how to resolve them. The Code Analysis window in Visual Studio shows all the analysis results, based on the rule sets you configured in the project properties. Let's dig into what each result item contains:
    1. Check ID - the unique identifier for the rule. CheckId and Category are used for in-source suppression of a warning.
    2. Title - the title of the warning message.
    3. Description - a description of the problem or a suggested fix.
    4. File Name - the file name and the line number of the code that violates the code analysis rule set.
    5. Category - the code analysis category for this error.
    6. Warning/Error - depends on how you configure it in the rule set; the default is the Warning level.
    7. Action - Copy: copies the warning information to the clipboard. Create Work Item: if you're connected to Team Foundation Server you can create a work item, most probably a Task or a Bug, and assign it to a developer to fix a certain code analysis warning. Suppress Message: there are times when you might decide not to fix a code analysis warning. You might decide that resolving the warning requires too much recoding in relation to the probability that the issue will arise in any real-world implementation of your code, or you might believe that the analysis that is used in the warning is inappropriate for the particular context. You can suppress individual warnings so that they no longer appear in the Code Analysis window. Two options are available: In Source inserts a SuppressMessage attribute in the source file above the method that generated the warning, which makes the suppression more discoverable; In Suppression File adds a SuppressMessage attribute to the GlobalSuppressions.cs file of the project, which can make the management of suppressions easier. Note that the SuppressMessage attribute added to GlobalSuppressions.cs also targets the method that generated the warning; it does not suppress the warning globally.
    Visual Studio makes it very easy to fix a code analysis warning: if you are not aware how to fix it, all you have to do is click the Check ID hyperlink and you'll be directed to MSDN (online or a local copy, depending on the configuration you chose while installing Visual Studio), where you will find all the information about the warning, including how to fix it.
    Create a Custom Code Analysis Rule Set
    The Microsoft standard rule sets provide groups of rules that are organized by function and depth. For example, the Microsoft Basic Design Guidelines Rules and the Microsoft Extended Design Guidelines Rules contain rules that focus on usability and maintainability issues, with added emphasis on naming rules in the Extended rule set. You can create and modify a custom rule set to meet specific project needs associated with code analysis. To create a custom rule set, you open one or more standard rule sets in the rule set editor. Creating and modifying a custom rule set requires Visual Studio Premium or Ultimate. You can check How to: Create a Custom Rule Set on MSDN for more details: http://msdn.microsoft.com/en-us/library/dd264974.aspx
    Q & A
    Visual Studio static code analysis vs. FxCop vs. StyleCop: http://www.excella.com/blog/stylecop-vs-fxcop-difference-between-code-analysis-tools/
    Code Analysis for SharePoint Apps and SPDisposeCheck? This post lists some of the rule sets you can run specifically for SharePoint applications, and how to integrate SPDisposeCheck as well.
    Code Analysis for SQL Server Database Projects? This post illustrates how to run static code analysis on T-SQL through SSDT.
    ReSharper 8 vs. Visual Studio 2013? This document lists some of the features that are provided by ReSharper 8 but are missing or not as fully implemented in Visual Studio 2013.
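    One scenario not covered above is kicking the same analysis off from a plain command-line build, which is useful on a build machine without the IDE. A hedged sketch using MSBuild's code analysis properties (the project and rule set file names are placeholders):
        rem build with code analysis enabled and a specific rule set
        msbuild MyProject.csproj /p:RunCodeAnalysis=true /p:CodeAnalysisRuleSet=MyRules.ruleset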
    References
    A Few Billion Lines of Code Later: Using Static Analysis to Find Bugs in the Real World - http://cacm.acm.org/magazines/2010/2/69354-a-few-billion-lines-of-code-later/fulltext
    What is New in Code Analysis for Visual Studio 2013 - http://blogs.msdn.com/b/visualstudioalm/archive/2013/07/03/what-is-new-in-code-analysis-for-visual-studio-2013.aspx
    Analyze the code quality of Windows Store apps using Visual Studio static code analysis - http://msdn.microsoft.com/en-us/library/windows/apps/hh441471.aspx
    [Hands-on-lab] Using Code Analysis with Visual Studio 2012 to Improve Code Quality - http://download.microsoft.com/download/A/9/2/A9253B14-5F23-4BC8-9C7E-F5199DB5F831/Using%20Code%20Analysis%20with%20Visual%20Studio%202012%20to%20Improve%20Code%20Quality.docx
    Originally posted at "Hosam Kamel | Developer & Platform Evangelist" http://blogs.msdn.com/hkamel

    Read the article

  • DevDays ‘10 The Netherlands day #1

    - by erwin21
    First day of DevDays 2010. I was looking forward to DevDays to see all the new things like VS2010, .NET 4.0 and MVC2. The lineup for this year is again better than the year before; there are 100+ sessions on all kinds of topics like Cloud, Database, Mobile, SharePoint, User Experience, Visual Studio and Web. The first session of the day was a keynote by Anders Hejlsberg; he talked about the history and future of programming languages and gave his view on trends and influences in programming languages today and in the future. The second talk that I followed was from the famous Scott Hanselman, who talked about the basics of ASP.NET MVC 2; although it was billed as a 300-level session, it was more like a level 100 session, but Scott mentioned that at the beginning. Still, it was interesting to see all the basic things about MVC like the controllers, actions, routes, views, models etc. After lunch, the third talk for me was about moving ASP.NET WebForms applications to MVC, given by Fritz Onion. In this session he converted an example WebForms application part by part into an MVC application, gave some interesting tips and tricks, and showed how to solve some issues that occur while converting. The fourth talk, from Kurt Claeys, was about the differences between LINQ to SQL and the ADO.NET Entity Framework. He gave a good understanding of both options; the demos covered both LINQ to SQL and the Entity Framework, and the goal was to get a good understanding of when and where to use each. The last talk of the day was also from Scott Hanselman; he went deeper into the features of ASP.NET MVC 2 and gave some interesting tips, the "ninja black belt" tips: tips about the tooling, the new MVC 2 HTML helper methods, other view engines (like NHaml and Spark), and T4 templating. With these tips we can be more productive and create web applications better and faster. It was a long and interesting day, and I am looking forward to day #2.

    Read the article

  • Introducing SSIS Reporting Pack for SQL Server code-named Denali

    - by jamiet
    In recent blog posts I have introduced the new SSIS Catalog that is forthcoming in SQL Server code-named Denali: What's new in SSIS in Denali, Introduction to SSIS Projects in Denali, Parameters in SSIS in Denali, SSIS Server, Catalogs, Environments and Environment Variables in SSIS in Denali. The SSIS Catalog is responsible for executing SSIS packages and also for capturing the metadata from those executions. However, at the time of writing there is no mechanism provided to view, analyse and drill into that metadata, and that is the reason that I am, in this blog post, introducing a suite of SSIS Catalog reports called the SSIS Reporting Pack, which you can download from my SkyDrive at http://cid-550f681dad532637.office.live.com/self.aspx/Public/SSIS%20Reporting%20Pack/SSISReportingPack%20v0.1.zip. In this first release the SSIS Reporting Pack includes five reports: Catalog - a high-level summary of all activity in the Catalog; Folders - a summary of activity in each Catalog folder; Folder - project-level activity for a single folder; Executions - a visualisation of all executions per Folder/Project/Package/Environment or subset thereof; Execution - information about an individual execution. Here is a screenshot of the Executions report: Notice that the SSIS Reporting Pack provides a visual overview of all executions in the Catalog. Each execution is represented as a bar on the bar chart, the success or otherwise of each execution is indicated by the colour of the bar, and the execution time is indicated by the bar height. I have recorded a video that gives an overview of the SSIS Reporting Pack, which I have embedded below. If you are having any trouble viewing the video, go see it at http://vimeo.com/17617974. I must stress that this is a very early version of the SSIS Reporting Pack and I am expecting it to change a lot over the coming year. I am very keen to get some feedback about this; specifically, let me know if anything does not work as you expect, and give me your feature requests. The easiest way to get hold of me for now is within the comments section of this blog post. That's all for now. I hope the SSIS Reporting Pack proves useful and I look forward to hearing your feedback. Lastly, that download link again: http://cid-550f681dad532637.office.live.com/self.aspx/Public/SSIS%20Reporting%20Pack/SSISReportingPack%20v0.1.zip. @jamiet

    Read the article

  • Web application development over C++ development

    - by learnerforever
    Hi, I am a CS undergrad and CS grad. In college I used to program in C/C++/Java and have pretty much stuck to the same skill set in industry, with 3 years of experience. I like thinking, reading, applying logic and designing data structures, but I have little patience with debugging large C++ code bases, and with having to deal with low-level issues like memory faults, memory corruption, and compilation/linking problems. My confidence in programming is dropping because of this, but I like being in a technical field. Does web application development - the LAMP stack (Linux, Apache, MySQL, PHP), CSS, scripting, among other web-development-related skills - require less patience with debugging and less understanding of low-level details, while still using your analysis and logic skills? The opportunities in web application development also look greater. Things like scalability, and most of the stuff that Google does, fascinate me, but not the patience needed for C++ debugging; I make blunders while coding. What does the field look like outside C++? I am beginning to wonder whether, as a woman, by moving to web application development I could better manage work-life balance. I have seen relatively fewer women in C++ than in Java/.NET, though I'm not very sure about web-related work. Also, what are the other hot technologies being used in web application development? LAMP and CSS are things I know only vaguely; I am not in touch with the current keywords in this area. Please help!

    Read the article

  • Heaps of Trouble?

    - by Paul White NZ
    If you're not already a regular reader of Brad Schulz's blog, you're missing out on some great material. In his latest entry, he is tasked with optimizing a query run against tables that have no indexes at all. The problem is, predictably, that performance is not very good. The catch is that we are not allowed to create any indexes (or even new statistics) as part of our optimization efforts. In this post, I'm going to look at the problem from a slightly different angle, and present an alternative solution to the one Brad found. Inevitably, there's going to be some overlap between our entries, and while you don't necessarily need to read Brad's post before this one, I do strongly recommend that you read it at some stage; he covers some important points that I won't cover again here.
    The Example
    We'll use data from the AdventureWorks database, copied to temporary unindexed tables. A script to create these structures is shown below:
        CREATE TABLE #Custs
        (
            CustomerID INTEGER NOT NULL,
            TerritoryID INTEGER NULL,
            CustomerType NCHAR(1) COLLATE SQL_Latin1_General_CP1_CI_AI NOT NULL,
        );
        GO
        CREATE TABLE #Prods
        (
            ProductMainID INTEGER NOT NULL,
            ProductSubID INTEGER NOT NULL,
            ProductSubSubID INTEGER NOT NULL,
            Name NVARCHAR(50) COLLATE SQL_Latin1_General_CP1_CI_AI NOT NULL,
        );
        GO
        CREATE TABLE #OrdHeader
        (
            SalesOrderID INTEGER NOT NULL,
            OrderDate DATETIME NOT NULL,
            SalesOrderNumber NVARCHAR(25) COLLATE SQL_Latin1_General_CP1_CI_AI NOT NULL,
            CustomerID INTEGER NOT NULL,
        );
        GO
        CREATE TABLE #OrdDetail
        (
            SalesOrderID INTEGER NOT NULL,
            OrderQty SMALLINT NOT NULL,
            LineTotal NUMERIC(38,6) NOT NULL,
            ProductMainID INTEGER NOT NULL,
            ProductSubID INTEGER NOT NULL,
            ProductSubSubID INTEGER NOT NULL,
        );
        GO
        INSERT #Custs (CustomerID, TerritoryID, CustomerType)
        SELECT C.CustomerID, C.TerritoryID, C.CustomerType
        FROM AdventureWorks.Sales.Customer C WITH (TABLOCK);
        GO
        INSERT #Prods (ProductMainID, ProductSubID, ProductSubSubID, Name)
        SELECT P.ProductID, P.ProductID, P.ProductID, P.Name
        FROM AdventureWorks.Production.Product P WITH (TABLOCK);
        GO
        INSERT #OrdHeader (SalesOrderID, OrderDate, SalesOrderNumber, CustomerID)
        SELECT H.SalesOrderID, H.OrderDate, H.SalesOrderNumber, H.CustomerID
        FROM AdventureWorks.Sales.SalesOrderHeader H WITH (TABLOCK);
        GO
        INSERT #OrdDetail (SalesOrderID, OrderQty, LineTotal, ProductMainID, ProductSubID, ProductSubSubID)
        SELECT D.SalesOrderID, D.OrderQty, D.LineTotal, D.ProductID, D.ProductID, D.ProductID
        FROM AdventureWorks.Sales.SalesOrderDetail D WITH (TABLOCK);
    The query itself is a simple join of the four tables:
        SELECT P.ProductMainID AS PID, P.Name, D.OrderQty, H.SalesOrderNumber, H.OrderDate, C.TerritoryID
        FROM #Prods P
        JOIN #OrdDetail D
            ON P.ProductMainID = D.ProductMainID
            AND P.ProductSubID = D.ProductSubID
            AND P.ProductSubSubID = D.ProductSubSubID
        JOIN #OrdHeader H
            ON D.SalesOrderID = H.SalesOrderID
        JOIN #Custs C
            ON H.CustomerID = C.CustomerID
        ORDER BY P.ProductMainID ASC
        OPTION (RECOMPILE, MAXDOP 1);
    Remember that these tables have no indexes at all, and only the single-column sampled statistics SQL Server automatically creates (assuming default settings). The estimated query plan produced for the test query looks like this (click to enlarge):
    The Problem
    The problem here is one of cardinality estimation - the number of rows SQL Server expects to find at each step of the plan. The lack of indexes and useful statistical information means that SQL Server does not have the information it needs to make a good estimate.
    Every join in the plan shown above estimates that it will produce just a single row as output. Brad covers the factors that lead to the low estimates in his post. In reality, the join between the #Prods and #OrdDetail tables will produce 121,317 rows. It should not surprise you that this has rather dire consequences for the remainder of the query plan. In particular, it makes a nonsense of the optimizer's decision to use Nested Loops to join to the two remaining tables. Instead of scanning the #OrdHeader and #Custs tables once (as it expected), it has to perform 121,317 full scans of each. The query takes somewhere in the region of twenty minutes to run to completion on my development machine.
    A Solution
    At this point, you may be thinking the same thing I was: if we really are stuck with no indexes, the best we can do is to use hash joins everywhere. We can force the exclusive use of hash joins in several ways, the two most common being join and query hints. A join hint means writing the query using the INNER HASH JOIN syntax; using a query hint involves adding OPTION (HASH JOIN) at the bottom of the query. The difference is that using join hints also forces the order of the joins, whereas the query hint gives the optimizer freedom to reorder the joins at its discretion. Adding the OPTION (HASH JOIN) hint results in this estimated plan:
    That produces the correct output in around seven seconds, which is quite an improvement! As a purely practical matter, and given the rigid rules of the environment we find ourselves in, we might leave things there. (We can improve the hashing solution a bit; I'll come back to that later on.)
    Faster Nested Loops
    It might surprise you to hear that we can beat the performance of the hash join solution shown above using nested loops joins exclusively, and without breaking the rules we have been set. The key to this part is to realize that a condition like (A = B) can be expressed as (A <= B) AND (A >= B). Armed with this tremendous new insight, we can rewrite the join predicates like so:
        SELECT P.ProductMainID AS PID, P.Name, D.OrderQty, H.SalesOrderNumber, H.OrderDate, C.TerritoryID
        FROM #OrdDetail D
        JOIN #OrdHeader H
            ON D.SalesOrderID >= H.SalesOrderID
            AND D.SalesOrderID <= H.SalesOrderID
        JOIN #Custs C
            ON H.CustomerID >= C.CustomerID
            AND H.CustomerID <= C.CustomerID
        JOIN #Prods P
            ON P.ProductMainID >= D.ProductMainID
            AND P.ProductMainID <= D.ProductMainID
            AND P.ProductSubID = D.ProductSubID
            AND P.ProductSubSubID = D.ProductSubSubID
        ORDER BY D.ProductMainID
        OPTION (RECOMPILE, LOOP JOIN, MAXDOP 1, FORCE ORDER);
    I've also added LOOP JOIN and FORCE ORDER query hints to ensure that only nested loops joins are used, and that the tables are joined in the order they appear. The new estimated execution plan is:
    This new query runs in under 2 seconds.
    Why Is It Faster?
    The main reason for the improvement is the appearance of the eager Index Spools, which are also known as index-on-the-fly spools. If you read my Inside The Optimiser series you might be interested to know that the rule responsible is called JoinToIndexOnTheFly. An eager index spool consumes all rows from the table it sits above, and builds an index suitable for the join to seek on. Taking the index spool above the #Custs table as an example, it reads all the CustomerID and TerritoryID values with a single scan of the table, and builds an index keyed on CustomerID. The term 'eager' means that the spool consumes all of its input rows when it starts up.
    The index is built in a work table in tempdb, has no associated statistics, and only exists until the query finishes executing. The result is that each unindexed table is only scanned once, and just for the columns necessary to build the temporary index. From that point on, every execution of the inner side of the join is answered by a seek on the temporary index, not the base table.
    A second optimization is that the sort on ProductMainID (required by the ORDER BY clause) is performed early, on just the rows coming from the #OrdDetail table. The optimizer has a good estimate for the number of rows it needs to sort at that stage: it is just the cardinality of the table itself. The accuracy of the estimate there is important because it helps determine the memory grant given to the sort operation. Nested loops join preserves the order of rows on its outer input, so sorting early is safe. (Hash joins do not preserve order in this way, of course.)
    The extra lazy spool on the #Prods branch is a further optimization that avoids executing the seek on the temporary index if the value being joined (the 'outer reference') hasn't changed from the last row received on the outer input. It takes advantage of the fact that rows are still sorted on ProductMainID, so if duplicates exist, they will arrive at the join operator one after the other.
    The optimizer is quite conservative about introducing index spools into a plan, because creating and dropping a temporary index is a relatively expensive operation. Its presence in a plan is often an indication that a useful index is missing.
    I want to stress that I rewrote the query in this way primarily as an educational exercise; I can't imagine having to do something so horrible to a production system.
    Improving the Hash Join
    I promised I would return to the solution that uses hash joins. You might be puzzled that SQL Server can create three new indexes (and perform all those nested loops iterations) faster than it can perform three hash joins. The answer, again, is down to the poor information available to the optimizer. Let's look at the hash join plan again:
    Two of the hash joins have single-row estimates on their build inputs. SQL Server fixes the amount of memory available for the hash table based on this cardinality estimate, so at run time the hash join very quickly runs out of memory. This results in the join spilling hash buckets to disk, and any rows from the probe input that hash to the spilled buckets also get written to disk. The join process then continues, and may again run out of memory. This is a recursive process, which may eventually result in SQL Server resorting to a bailout join algorithm, which is guaranteed to complete eventually, but may be very slow. The data sizes in the example tables are not large enough to force a hash bailout, but it does result in multiple levels of hash recursion. You can see this for yourself by tracing the Hash Warning event using the Profiler tool.
    The final sort in the plan also suffers from a similar problem: it receives very little memory and has to perform multiple sort passes, saving intermediate runs to disk (the Sort Warnings Profiler event can be used to confirm this). Notice also that because hash joins don't preserve sort order, the sort cannot be pushed down the plan toward the #OrdDetail table, as in the nested loops plan.
    OK, so now we understand the problems, what can we do to fix them?
    We can address the hash spilling by forcing a different order for the joins:
        SELECT P.ProductMainID AS PID, P.Name, D.OrderQty, H.SalesOrderNumber, H.OrderDate, C.TerritoryID
        FROM #Prods P
        JOIN #Custs C
            JOIN #OrdHeader H
                ON H.CustomerID = C.CustomerID
                JOIN #OrdDetail D
                    ON D.SalesOrderID = H.SalesOrderID
            ON P.ProductMainID = D.ProductMainID
            AND P.ProductSubID = D.ProductSubID
            AND P.ProductSubSubID = D.ProductSubSubID
        ORDER BY D.ProductMainID
        OPTION (MAXDOP 1, HASH JOIN, FORCE ORDER);
    With this plan, each of the inputs to the hash joins has a good estimate, and no hash recursion occurs. The final sort still suffers from the one-row estimate problem, and we get a single-pass sort warning as it writes rows to disk. Even so, the query runs to completion in three or four seconds. That's around half the time of the previous hashing solution, but still not as fast as the nested loops trickery.
    Final Thoughts
    SQL Server's optimizer makes cost-based decisions, so it is vital to provide it with accurate information. We can't really blame the performance problems highlighted here on anything other than the decision to use completely unindexed tables, and not to allow the creation of additional statistics.
    I should probably stress that the nested loops solution shown above is not one I would normally contemplate in the real world. It's there primarily for its educational and entertainment value. I might perhaps use it to demonstrate to the sceptical that SQL Server itself is crying out for an index.
    Be sure to read Brad's original post for more details. My grateful thanks to him for granting permission to reuse some of his material.
    Paul White
    Email: [email protected]
    Twitter: @PaulWhiteNZ

    Read the article

  • Chennai Metro Water [Indian Websites]

    - by samsudeen
    If you are living in Chennai, you definitely have this question in your mind: "Will Chennai survive this summer without any water crisis?" The Chennai Metro Water board maintains a website where you can find all the details about water storage and supply. This site has all the water supply information that matters to Chennai residents. I have listed some of the uses of this site below.
    Water Sewer Application Status: If you are a new resident and have applied for a water/sewer connection, you can check your application (Water Sewer Application Status) by entering your WA No (Water Application No). You can also download all the required application forms.
    Register Complaints: You can register complaints (Register Complaints) about water supply, defective water meters and sewage blockages online.
    Pay Water & Sewage Tax: You can pay your water and sewage tax (Pay Tax) online, including any dues/arrears left outstanding.
    Lake Level: The site also maintains the current (Lake Level) and historic water levels of the Chennai water reservoirs, including the daily inflow/outflow of water. You can also look up the daily and monthly rainfall levels at these reservoirs from 1965 onwards.
    Hope this information is a little useful for those who are planning or building a new house within the Chennai Corporation limits.

    Read the article
