Search Results

Search found 24117 results on 965 pages for 'write'.


  • How to format a FAT32 filesystem that is infected with a Windows virus and write protected

    - by explorex
    Hi, I have a pen drive with a FAT32 filesystem. It is infected with a virus (I don't know which one, but it creates an autorun.inf and an exe file inside each folder). I tried to format it with various filesystems and even tried to delete the partition with GParted, but couldn't, because the drive reports that it is write protected; I can't even delete files. How do I format it?

        user@explorerx:~$ sudo fdisk -l

        Disk /dev/sda: 160.0 GB, 160041885696 bytes
        255 heads, 63 sectors/track, 19457 cylinders
        Units = cylinders of 16065 * 512 = 8225280 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0xbd04bd04

           Device Boot      Start         End      Blocks   Id  System
        /dev/sda1   *           1         498     3998720   82  Linux swap / Solaris
        Partition 1 does not end on cylinder boundary.
        /dev/sda2             499       19457   152287585+   f  W95 Ext'd (LBA)
        /dev/sda5            5100       10198    40957686    7  HPFS/NTFS
        /dev/sda6           10199       14787    36861111    7  HPFS/NTFS
        /dev/sda7           14788       19457    37511743+   7  HPFS/NTFS
        /dev/sda8             499        5099    36956160   83  Linux

        Partition table entries are not in disk order

        Disk /dev/sdc: 160.0 GB, 160041885696 bytes
        255 heads, 63 sectors/track, 19457 cylinders
        Units = cylinders of 16065 * 512 = 8225280 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0xc13bc13b

           Device Boot      Start         End      Blocks   Id  System
        /dev/sdc1               1        9729    78143488    7  HPFS/NTFS
        /dev/sdc2            9729       19457    78143488    7  HPFS/NTFS

        Disk /dev/sdb: 4194 MB, 4194304000 bytes
        112 heads, 47 sectors/track, 1556 cylinders
        Units = cylinders of 5264 * 512 = 2695168 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x00000000

           Device Boot      Start         End      Blocks   Id  System
        /dev/sdb1               2        1557     4091904    b  W95 FAT32

    Read the article

  • how do I write a functional specification quickly and efficiently

    - by giddy
    So I just read some fabulous articles by Joel on specs here. (Written in 2000!) I read all 4 parts, but I'm looking for some methodical approaches to writing my specs. I'm the only (lonely) dev working on this fairly complicated app (or family of apps) for a very well known finance company. I've never made something this serious. I started out writing something like a bad spec, an overview of sorts, and it has wasted a LOT of my time. I've also made 3 mockup-kinda-thingies for my client, so I have a good understanding of what they want. I also released a preview (a throwaway working app with the most basic workflow), and I've only written and tested some of the very core/base systems. I think the mistake I've been making so far is not writing a detailed spec, so I'm getting to it now. The whole thing comprises: an MVC website (for admins and data viewing), 2 Silverlight modules (for 2 specific tasks), and 1 desktop application. I'm totally short on time and resources and need to get this done quickly; I also need to make sure these guys can read it equally quickly and painlessly. So how do I go about it? I'm looking for any tips, any real-world stuff. How do you guys usually do it? Do you make a mock screenshot of every dialog/form/page? I'm thinking of making a dummy ASP.NET Web Forms project, filling in HTML files in folders to mirror my MVC URL structure, then having a section in the spec for the website with a page (and a screenshot) for every URL I've got. For my WinForms app, I've made somewhat of a demo WinForms project; would I then put in a dialog, or structure everything as I would in the real app and then screenshot it?

    Read the article

  • Who should write the test plan?

    - by Cheng Kiang
    Hi, I am on the in-house development team of my company, and we develop our company's websites according to the requirements of the marketing team. Before releasing a site to them for acceptance testing, we were asked to give them a test plan to follow. However, the development team feels that since the requirements came from the requestors, the requestors have the best knowledge of what to test, what to look out for, how things should behave, etc., and that a test plan is thus not required. We are always arguing over this, and the developers find it a waste of time to write down things like:

    1. Click on button A.
    2. Key in XYZ in the form field and click button B.
    3. You should see behaviour C.

    which we have to repeat for each requested requirement/feature. This is basically rephrasing what's already in the requirements document. We are moving towards an Agile approach for managing our projects, and a test plan is also requested at the end of each iteration. Unit and integration testing aside, who should be the one to come up with the end-user acceptance test plan? Should it be the requestors or the developers? Many thanks in advance. Regards, CK

    Read the article

  • How to write constructors which might fail to properly instantiate an object

    - by whitman
    Sometimes you need to write a constructor which can fail. For instance, say I want to instantiate an object with a file path, something like:

        obj = new Object("/home/user/foo_file")

    As long as the path points to an appropriate file, everything's fine. But if the string is not a valid path, things should break. But how? You could:

    1. throw an exception
    2. return a null object (if your programming language allows constructors to return values)
    3. return a valid object but with a flag indicating that its path wasn't set properly (ugh)
    4. something else?

    I assume that the "best practices" of various programming languages handle this differently. For instance, I think Objective-C prefers (2). But (2) would be impossible in C++, where constructors have no return type at all; there I take it that (1) is used. In your programming language of choice, can you show how you'd handle this problem and explain why?
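
    For illustration, here is a minimal C# sketch of options (1) and (2), assuming a hypothetical DataFile class that wraps a file path (the class name and the File.Exists validation are mine, chosen only to mirror the example above):

        using System;
        using System.IO;

        public class DataFile
        {
            public string Path { get; private set; }

            // Option 1: throw from the constructor, so an instance can
            // never exist in a half-initialized state.
            public DataFile(string path)
            {
                if (!File.Exists(path))
                    throw new FileNotFoundException("Not a valid file path", path);
                Path = path;
            }

            // Option 2, approximated: C# constructors cannot return values,
            // so a static factory returns null on failure instead.
            public static DataFile TryCreate(string path)
            {
                return File.Exists(path) ? new DataFile(path) : null;
            }
        }

    Callers that treat a missing file as exceptional use the constructor; callers that expect failure as a normal outcome call TryCreate and test for null, which keeps exceptions for the genuinely unexpected cases.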

    Read the article

  • How to write a blog for SEO purposes

    - by Mathieu Imbert
    I have a photo-sharing website which provides very little textual content. Users can add tags and a description to photos, but this creates a lot of duplicate content, because most of the descriptions will be 'wow', 'lol', ... I don't think I should rely on users to build my SEO. I think it would be a great idea to write a blog and use it to describe the best photos, start contests, and explain themes; in short, to create original content that search engines will love. Our website's main URL is like www.domain.com, and our new blog is hosted on blog.domain.com. From an SEO perspective, is it a good idea to keep the blog separate from the main site? This has the advantage of leaving the original site unchanged, but will it add any PageRank to www.domain.com? If the blog ranks well, it will obviously pass some PageRank to the original site through links. What do you think is the best option from an SEO perspective? Include the blog in www.domain.com? Or leave it on blog.domain.com?

    Read the article

  • How to write PowerShell code part 3 (calling external script)

    - by ybbest
    In this post, I'd like to show you how to call an external script from a PowerShell script, using the site creation script as an example. You can download the script here.

    1. To call the external script, you first need to grab the current script's path. You can do so by calling $scriptPath = Split-Path $myInvocation.MyCommand.Path, then use that to build the path to your external script:

        $scriptPath = Split-Path $myInvocation.MyCommand.Path
        $ExternalScript = $scriptPath + "\CreateSiteCollection.ps1"
        $configurationXmlPath = $scriptPath + "\SiteCollection.xml"
        [xml] $configurationXml = Get-Content $configurationXmlPath
        & "$ExternalScript" $configurationXml
        Write-Host

    2. If you'd like to pass in any parameters, you need to declare them in param () at the top of the called script, separated by commas (,); when calling the script you do not use commas to separate the arguments.

        #Pass in the Parameters.
        param ([xml] $xmlinput)

    Read the article

  • How to write PowerShell code part 2 (Using function)

    - by ybbest
    In the last post, I showed you how to use an external configuration file in your PowerShell script. In this post, I will show you how to create a PowerShell function. You can download the script here.

    1. In the original script, I create the site directly using the New-SPSite command. I will refactor it into a new function that creates the site using New-SPSite. A PowerShell function is quite similar to a C# method: you put your function parameters in (), separated by commas (,), then put the function body in {}.

        function add ([int] $num1, [int] $num2) {
            $total = $num1 + $num2
            #Return $total
            $total
        }

    2. The differences are that you do not need a semicolon (;) at the end of each statement, and when calling the function you do not use commas (,) to separate the arguments.

        #Calling the function
        [int] $num1 = 3
        [int] $num2 = 4
        $d = add $num1 $num2
        Write-Host $d

    3. If you'd like to return anything from the function, you just write the object you want to return; there is no need to type return. E.g. $ObjectToReturn, not return $ObjectToReturn.

    Read the article

  • How to write a regex in ASP.NET MVC 3 Razor

    - by anirudha
    Here is a small trick for writing regexes and other troublesome code in MVC 3 Razor views. The first trick is to use @@ instead of @; it works, and don't worry, the output is @, not @@. For example, this regex will not work in a Razor view because it uses @:

        var pattern = new RegExp(/^(("[\w-\s]+")|([\w-]+(?:\.[\w-]+)*)|("[\w-\s]+")([\w-]+(?:\.[\w-]+)*))(@((?:[\w-]+\.)*\w[\w-]{0,66})\.([a-z]{2,6}(?:\.[a-z]{2})?)$)|(@\[?((25[0-5]\.|2[0-4][0-9]\.|1[0-9]{2}\.|[0-9]{1,2}\.))((25[0-5]|2[0-4][0-9]|1[0-9]{2}|[0-9]{1,2})\.){2}(25[0-5]|2[0-4][0-9]|1[0-9]{2}|[0-9]{1,2})\]?$)/i);

    Replace each @ with @@ and it will work:

        var pattern = new RegExp(/^(("[\w-\s]+")|([\w-]+(?:\.[\w-]+)*)|("[\w-\s]+")([\w-]+(?:\.[\w-]+)*))(@@((?:[\w-]+\.)*\w[\w-]{0,66})\.([a-z]{2,6}(?:\.[a-z]{2})?)$)|(@@\[?((25[0-5]\.|2[0-4][0-9]\.|1[0-9]{2}\.|[0-9]{1,2}\.))((25[0-5]|2[0-4][0-9]|1[0-9]{2}|[0-9]{1,2})\.){2}(25[0-5]|2[0-4][0-9]|1[0-9]{2}|[0-9]{1,2})\]?$)/i);

    The second trick is to wrap code that the view engine would otherwise reject in a <text></text> tag, which lets the original code work unchanged:

        <text>
        function isValidEmailAddress(emailAddress) {
            var pattern = new RegExp(/^(("[\w-\s]+")|([\w-]+(?:\.[\w-]+)*)|("[\w-\s]+")([\w-]+(?:\.[\w-]+)*))(@((?:[\w-]+\.)*\w[\w-]{0,66})\.([a-z]{2,6}(?:\.[a-z]{2})?)$)|(@\[?((25[0-5]\.|2[0-4][0-9]\.|1[0-9]{2}\.|[0-9]{1,2}\.))((25[0-5]|2[0-4][0-9]|1[0-9]{2}|[0-9]{1,2})\.){2}(25[0-5]|2[0-4][0-9]|1[0-9]{2}|[0-9]{1,2})\]?$)/i);
            return pattern.test(emailAddress);
        }
        </text>

    Read the article

  • Using Queries with Coherence Write-Behind Caches

    - by jpurdy
    Applications that use write-behind caching and wish to query the logical entity set have the option of querying the NamedCache itself or querying the database. In the former case, no particular restrictions exist beyond the limitations intrinsic to the Coherence query engine itself. In the latter case, queries may see partially committed transactions (e.g. with a parent-child relationship, the version of the parent may be different than the version of the child objects) and/or significant version skew (the query may see the current version of one object and a far older version of another object). This is consistent with "read committed" semantics, but the read skew may be far greater than would ever occur in a non-cached environment. As is usually the case, the application developer may choose to accept these limitations (with the hope that they are sufficiently infrequent), or they may choose to validate the reads (perhaps via a version flag on the objects). This also applies to situations where a third party application (such as a reporting tool) is querying the database. In many cases, the database may only be in a consistent state after the Coherence cluster has been halted.

    Read the article

  • How can I go about learning to write a shader

    - by Donutz
    So here's the background: I'm writing a game, just for my own amusement and education really. I've already come to the conclusion that XNA was the way to go for graphics, I've bought a couple of books, I've gotten basic game graphics going, and that's great. Now I'm starting to get a little in-depth and I'm starting to need to do stuff not covered in my (beginner) books. In particular, I need to display a sprite using a mask. Actually, what I need to do is display a generic sprite with a different color for each player. After banging around on the web, it seems the way to go is to have a color texture (one for each player) which I display using the mask, then display the generic part of the sprite. This has to be done dynamically, i.e. at runtime because there are too many sprites to keep in memory if I try to generate all the permutations at startup. So, I need to use a shader. Fine. I've downloaded a sample shader program, and managed to hit it with a hammer until it does something close enough to what I want so that I know I'm on the right track. And here, we come to my problem... I have no friggin' clue what I'm doing. While there are a lot of samples and such about shaders, no one ever actually explains what's going on. For instance, I can't find any real docs on Tex2D. I feel like the guys in Zoolander poking at the computer. So, my question (yes, I have a question) -- where is a good URL or what is a good book to take me from dumskie to reasonably competent to write a basic shader?

    Read the article

  • input / output error, drives randomly refusing to read / write

    - by ILMV
    I have an issue with one of our servers running Ubuntu 10.04. It runs BackupPC and collects backups from various machines/servers around the building. On the 8th minute (12:08, 12:18, 12:28, etc.) the backups are transferred to an external hard drive; we have three drives and rotate one for another every day. The problem is that we randomly experience input/output errors; when this happens you cannot read from or write to the drive. It hasn't unmounted, so I can still cd to the mount point /media/backup1. The drives are not faulty, as it happens on all of them, so I'm at a loss as to what the problem could be. Here is an example of the many errors we get:

        gzip: stdout: Input/output error
        /var/lib/backuppc/backuppc_offline: line 47: /media/backup1/Tue/offline.log: Input/output error
        ls: cannot access /media/backup1/Tue/incr_1083_host1.something.co.uk.tar.gz: Input/output error
        ls: cannot access /media/backup1/Tue/incr_1088_host1.something.co.uk.tar.gz: Input/output error
        ls: cannot access /media/backup1/Tue/incr_1089_host1.something.co.uk.tar.gz: Input/output error
        ls: cannot access /media/backup1/Tue/incr_1090_host1.something.co.uk.tar.gz: Input/output error
        /var/lib/backuppc/backuppc_offline: line 39: /media/backup1/Tue/offline.log: Input/output error
        /var/lib/backuppc/backuppc_offline: line 44: /media/backup1/Tue/offline.log: Input/output error
        /var/lib/backuppc/backuppc_offline: line 45: /media/backup1/Tue/incr_1090_host1.something.co.uk.tar.gz: Input/output error
        /var/lib/backuppc/backuppc_offline: line 47: /media/backup1/Tue/offline.log: Input/output error
        ls: cannot access /media/backup1/Tue/incr_591_tech2.something.co.uk.tar.gz: Input/output error
        /var/lib/backuppc/backuppc_offline: line 44: /media/backup1/Tue/offline.log: Input/output error
        /var/lib/backuppc/backuppc_offline: line 45: /media/backup1/Tue/incr_591_tech2.something.co.uk.tar.gz: Input/output error
        /var/lib/backuppc/backuppc_offline: line 47: /media/backup1/Tue/offline.log: Input/output error
        ls: cannot access /media/backup1/Tue/incr_592_tech3.something.co.uk.tar.gz: Input/output error
        ls: cannot access /media/backup1/Tue/incr_593_tech3.something.co.uk.tar.gz: Input/output error
        /var/lib/backuppc/backuppc_offline: line 44: /media/backup1/Tue/offline.log: Input/output error
        /var/lib/backuppc/backuppc_offline: line 45: /media/backup1/Tue/incr_593_tech3.something.co.uk.tar.gz: Input/output error
        /var/lib/backuppc/backuppc_offline: line 47: /media/backup1/Tue/offline.log: Input/output error

    EDIT » Resolved. It turns out Quamis was right: even though I didn't think it was possible, it was actually a problem with the drives. We have three drives, all formatted as ext2; on two of them we were getting I/O errors frequently. I came back to Quamis' answer and discovered the fsck command, so I ran it against the problem drives:

        fsck /dev/sdb1

    This found and fixed a load of problems on the drive, most probably caused by power outages / unsafe removal of the drives etc. As the drives are in ext2 format they aren't journalled and thus aren't protected against such issues. The drives are now working beautifully, thanks all! :D

    Read the article

  • Is Stream.Write thread-safe?

    - by Mike Spross
    I'm working on a client/server library for a legacy RPC implementation and was running into issues where the client would sometimes hang when waiting to receive a response message to an RPC request message. It turns out the real problem was in my message-framing code (I wasn't handling message boundaries correctly when reading data off the underlying NetworkStream), but it also made me suspicious of the code I was using to send data across the network, specifically in the case where the RPC server sends a large amount of data to a client as the result of a client RPC request. My send code uses a BinaryWriter to write a complete "message" to the underlying NetworkStream. The RPC protocol also implements a heartbeat algorithm, where the RPC server sends out PING messages every 15 seconds. The pings are sent out by a separate thread, so, at least in theory, a ping can be sent while the server is in the middle of streaming a large response back to a client. Suppose I have a Send method as follows, where stream is a NetworkStream:

        public void Send(Message message)
        {
            //Write the message to a temporary stream so we can send it all-at-once
            MemoryStream tempStream = new MemoryStream();
            message.WriteToStream(tempStream);

            //Write the serialized message to the stream.
            //The BinaryWriter is a little redundant in this
            //simplified example, but is here because
            //the production code uses it.
            byte[] data = tempStream.ToArray();
            BinaryWriter bw = new BinaryWriter(stream);
            bw.Write(data, 0, data.Length);
            bw.Flush();
        }

    So the question I have is: is the call to bw.Write (and by implication the call to the underlying Stream's Write method) atomic? That is, if a lengthy Write is still in progress on the sending thread and the heartbeat thread kicks in and sends a PING message, will that thread block until the original Write call finishes, or do I have to add explicit synchronization to the Send method to prevent the two Send calls from clobbering the stream?
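
    For what it's worth, the documented thread-safety guarantee for NetworkStream covers one concurrent reader plus one concurrent writer, not two concurrent writers, so a large response and a PING can interleave. A minimal sketch of the usual fix, serializing senders with a private lock object (the _sendLock field is my addition, not from the code above):

        private readonly object _sendLock = new object();

        public void Send(Message message)
        {
            MemoryStream tempStream = new MemoryStream();
            message.WriteToStream(tempStream);
            byte[] data = tempStream.ToArray();

            // Only one thread at a time may write a frame to the shared
            // stream, so a heartbeat PING can never be interleaved into
            // the middle of a large response.
            lock (_sendLock)
            {
                stream.Write(data, 0, data.Length);
                stream.Flush();
            }
        }

    The lock makes each framed message atomic with respect to other senders; the underlying stream never sees two half-written frames.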

    Read the article

  • C system calls open / read / write / close problem.

    - by Andrei Ciobanu
    Hello, given the following code (it's supposed to write "helloworld" to a "helloworld" file, and then read the text back):

        #include <fcntl.h>
        #include <unistd.h>     /* read, write, close */
        #include <sys/types.h>
        #include <sys/stat.h>

        #define FNAME "helloworld"

        int main(){
            int filedes, nbytes;
            char buf[128];

            /* Creates a file */
            if((filedes = open(FNAME, O_CREAT | O_EXCL | O_WRONLY | O_APPEND,
                               S_IRUSR | S_IWUSR)) == -1){
                write(2, "Error1\n", 7);
            }

            /* Writes hello world to file */
            if(write(filedes, FNAME, 10) != 10)
                write(2, "Error2\n", 7);

            /* Close file */
            close(filedes);

            if((filedes = open(FNAME, O_RDONLY)) == -1)
                write(2, "Error3\n", 7);

            /* Prints file contents on screen */
            if((nbytes = read(filedes, buf, 128)) == -1)
                write(2, "Error4\n", 7);

            if(write(1, buf, nbytes) != nbytes)
                write(2, "Error5\n", 7);

            /* Close file after read */
            close(filedes);

            return (0);
        }

    The first time I run the program, the output is:

        helloworld

    After that, every time I run the program the output is:

        Error1
        Error2
        helloworld

    I don't understand why the text isn't appended, as I've specified the O_APPEND flag. Is it because I've included O_CREAT? If the file already exists, shouldn't O_CREAT just be ignored?

    Read the article

  • Edit write-protected files by breaking hard links

    - by Taymon
    A directory which I own and can write to contains hard links to files that I don't own and don't have write permission for. I want to open and edit these files in Emacs. When I save my changes, Emacs should rename the existing hard link by appending ~, then write my new version of the file as a new file owned by me. I was under the impression that Emacs could just do this (because of the way it does backups), but it's not working; when I save, it attempts to change the file's permissions in order to write to it (and fails because I don't own the file). How do I make this happen?

    Read the article

  • Write TSQL, win a Kindle.

    - by Fatherjack
    So recently Red Gate launched sqlmonitormetrics.red-gate.com and showed the world how to embed your own scripts harmoniously in a third-party tool to get the details that you want about your SQL Server performance. The site has a way to submit your own metrics and take a copy of the ones that other people have submitted, to build a library of code to keep track of key metrics of your server's performance. There have been several submissions already, but they have now launched a competition to provide an incentive for you to get creative and show us what you can do with a bit of T-SQL and the SQL Monitor framework*.

    What's it worth? Well, if you are one of the 3 winners then you get to choose either a Kindle Fire or $199.

    How do you win? Simply write the T-SQL for a SQL Monitor custom metric and the relevant description and introduction for it, submit it via sqlmonitormetrics.red-gate.com before 14th Sept 2012, and then sit back and wait while the judges review your code and your aims in writing the metric.

    Who are the judges and how will they judge the metrics? There are two judges for this competition: Steve Jones (Microsoft SQL Server MVP, co-founder of SQLServerCentral.com, author, blogger etc.) and Jonathan Allen (um, yeah, Steve has done all the good stuff, I'm here by good fortune). We will be looking to rate the metrics on each of 3 criteria:

    - how the metric can help with performance tuning SQL Server.
    - how having the metric running enables DBAs to meet best practice.
    - how interesting/original the idea for the metric is.

    Our combined decision will be final etc. etc.**

    What happens to my metric? Any metrics submitted to the competition will be automatically entered into the site library and become available for sharing once the competition is over. You'll get full credit for metrics you submit regardless of the competition results. You can enter as many metrics as you like.

    How long does it take? Honestly? Once you have the T-SQL sorted, then so long as you can type your name and your email address you are done: http://sqlmonitormetrics.red-gate.com/share-a-metric/

    What can I monitor? If you really, really want a Kindle or $199 (and let's face it, who doesn't?) and are momentarily stuck for inspiration, take a look at these example custom metrics that have been written by Stuart Ainsworth, Fabiano Amorim, TJay Belt, Louis Davidson, Grant Fritchey, Brad McGehee and me to start the library off. There are some great pieces of T-SQL in those metrics, gathering important stats about how SQL Server is performing.

    * – framework may not be the best word here, but I was under pressure and couldn't think of a better one. If you prefer, try 'engine' or 'application'? I don't know, pick something that makes sense to you.

    ** – for the full (legal) version of the rules, check the details on sqlmonitormetrics.red-gate.com or send us an email if you want any point clarified.

    Disclaimer – Jonathan is a Friend of Red Gate and as such, whenever they are discussed, will have a generally positive disposition towards Red Gate tools. Other tools are often available and you should always try others before you come back and buy the Red Gate ones. All code in this blog is provided "as is" and no guarantee, warranty or accuracy is applicable or inferred; run the code on a test server and be sure to understand it before you run it on a server that means a lot to you or your manager.

    Read the article

  • How to write simple code using TDD [migrated]

    - by adeel41
    My colleagues and I do a small TDD kata practice every day for 30 minutes. For reference, this is the link to the exercise: http://osherove.com/tdd-kata-1/ The objective is to write better code using TDD. This is the code which I've written:

        public class Calculator
        {
            public int Add( string numbers )
            {
                const string commaSeparator = ",";
                int result = 0;
                if ( !String.IsNullOrEmpty( numbers ) )
                    result = numbers.Contains( commaSeparator )
                        ? AddMultipleNumbers( GetNumbers( commaSeparator, numbers ) )
                        : ConvertToNumber( numbers );
                return result;
            }

            private int AddMultipleNumbers( IEnumerable<int> getNumbers )
            {
                return getNumbers.Sum();
            }

            private IEnumerable<int> GetNumbers( string separator, string numbers )
            {
                var allNumbers = numbers
                    .Replace( "\n", separator )
                    .Split( new string[] { separator }, StringSplitOptions.RemoveEmptyEntries );
                return allNumbers.Select( ConvertToNumber );
            }

            private int ConvertToNumber( string number )
            {
                return Convert.ToInt32( number );
            }
        }

    and the tests for this class are:

        [TestFixture]
        public class CalculatorTests
        {
            private int ArrangeAct( string numbers )
            {
                var calculator = new Calculator();
                return calculator.Add( numbers );
            }

            [Test]
            public void Add_WhenEmptyString_Returns0()
            {
                Assert.AreEqual( 0, ArrangeAct( String.Empty ) );
            }

            [Test]
            [Sequential]
            public void Add_When1Number_ReturnNumber(
                [Values( "1", "56" )] string number,
                [Values( 1, 56 )] int expected )
            {
                Assert.AreEqual( expected, ArrangeAct( number ) );
            }

            [Test]
            public void Add_When2Numbers_AddThem()
            {
                Assert.AreEqual( 3, ArrangeAct( "1,2" ) );
            }

            [Test]
            public void Add_WhenMoreThan2Numbers_AddThemAll()
            {
                Assert.AreEqual( 6, ArrangeAct( "1,2,3" ) );
            }

            [Test]
            public void Add_SeparatorIsNewLine_AddThem()
            {
                Assert.AreEqual( 6, ArrangeAct( @"1
2,3" ) );
            }
        }

    Now I'll paste the code which they have written:

        public class StringCalculator
        {
            private const char Separator = ',';

            public int Add( string numbers )
            {
                const int defaultValue = 0;
                if ( ShouldReturnDefaultValue( numbers ) )
                    return defaultValue;
                return ConvertNumbers( numbers );
            }

            private int ConvertNumbers( string numbers )
            {
                var numberParts = GetNumberParts( numbers );
                return numberParts.Select( ConvertSingleNumber ).Sum();
            }

            private string[] GetNumberParts( string numbers )
            {
                return numbers.Split( Separator );
            }

            private int ConvertSingleNumber( string numbers )
            {
                return Convert.ToInt32( numbers );
            }

            private bool ShouldReturnDefaultValue( string numbers )
            {
                return String.IsNullOrEmpty( numbers );
            }
        }

    and the tests:

        [TestFixture]
        public class StringCalculatorTests
        {
            [Test]
            public void Add_EmptyString_Returns0()
            {
                ArrangeActAndAssert( String.Empty, 0 );
            }

            [Test]
            [TestCase( "1", 1 )]
            [TestCase( "2", 2 )]
            public void Add_WithOneNumber_ReturnsThatNumber( string numberText, int expected )
            {
                ArrangeActAndAssert( numberText, expected );
            }

            [Test]
            [TestCase( "1,2", 3 )]
            [TestCase( "3,4", 7 )]
            public void Add_WithTwoNumbers_ReturnsSum( string numbers, int expected )
            {
                ArrangeActAndAssert( numbers, expected );
            }

            [Test]
            public void Add_WithThreeNumbers_ReturnsSum()
            {
                ArrangeActAndAssert( "1,2,3", 6 );
            }

            private void ArrangeActAndAssert( string numbers, int expected )
            {
                var calculator = new StringCalculator();
                var result = calculator.Add( numbers );
                Assert.AreEqual( expected, result );
            }
        }

    Now the question is: which one is better? My point is that we do not need so many small methods initially, because StringCalculator has no subclasses, and secondly the code itself is so simple that we don't need to break it up to the point where the many small methods become confusing. Their points are that code should read like English, that it's better to break it up early rather than refactor later, and thirdly that when they do refactor, it will be much easier to move these small methods into separate classes. My counterpoint is that we never decided the code was difficult to understand, so why are we breaking it up so early? So I need a third opinion on which option is better.

    Read the article

  • Learnings from trying to write better software: Loud errors from the very start

    - by theo.spears
    Microsoft made a very small number of backwards-incompatible changes between .NET 1.1 and 2.0, because they wanted to make it as easy and safe as possible to port applications to the new runtime. (Here's a list.) However, one thing they did change was what happens when a background thread fails with an unhandled exception: in .NET 1.1 nothing happened, the thread terminated, and the application continued oblivious. Try the same trick in .NET 2.0 and the entire application, including all threads, will rudely terminate. There are three reasons for this. Firstly, if a background thread has crashed, it may have left the entire application in an inconsistent state, in a way that will affect other threads. It's better to terminate the entire application than continue and have the application perform actions based on a broken state, for example take customer orders, or write corrupt files to disk. Secondly, during software development, it is far better for errors to be loud and obtrusive. Even if you have unit tests and integration tests (and you should), a key part of ensuring software works properly is to actually try using it, both through systematic testing and through the casual use all software gets from its developers. Subtle errors are easy to miss if you are not actually doing real work with the application; loud errors are obvious. Thirdly, and most importantly, even if catching and swallowing exceptions indiscriminately doesn't cause any problems in your application, the presence of unexpected exceptions shows you do not fully understand the behavior of your code. The currently released version of your application may be absolutely correct. However, because your mental model of the behavior is wrong, any future change you make to the program could and probably will introduce critical errors.

    This applies to more than just exceptions causing threads to exit: any unexpected state should make the application blow up in an un-ignorable way. The worst thing you can do is silently swallow errors and continue. And let's be clear, writing to a log file does not count as blowing up in an un-ignorable way. This is all simple as long as the call stack only contains your code, but when your functions start to be called by third-party or .NET framework code, it's surprisingly easy for exceptions to start vanishing. Let's look at two examples.

    1. Windows Forms drag-drop events. Usually if you throw an exception from a WinForms event handler it will bring up the "application has crashed" dialog with abort and continue options. This is a good default behavior: the error is big and loud, but it is possible for the user to ignore the error and hopefully save their data, if somehow this bug makes it past testing. However, drag and drop are different: throw an exception from one of these and it will just be silently swallowed with no explanation. By the way, it's not just drag and drop events; timer events do it too. You can research how exceptions are treated in different handlers and code appropriately, but the safest and most user-friendly approach is to always catch exceptions in your event handlers and show your own error message. I'll talk about one good approach to handling these exceptions at the end of this post.

    2. SSMS integration for SQL Tab Magic. A while back I wrote an SSMS add-in called SQL Tab Magic (learn more about the process here). It works by listening to certain SSMS events and remembering what documents are opened and closed. I deployed it internally and it was used for a few months by a number of people without problems, so I was reasonably confident in its quality. Before releasing I made a few cleanups, including introducing error reporting. Bam. A few days later I was looking at over 1,000 error reports in my inbox. It turns out I wasn't handling table designers properly. The exceptions were there, but again SSMS was helpfully swallowing them all for me, so I was blissfully unaware. Had I made my errors loud from the start, I would have noticed these issues long before and fixed them.

    Handling exceptions. Now that you are systematically catching exceptions throughout your application, you need to do something with them. I've tried 3 options: log them, alert the user, and automatically send them home. There are a few good options for logging in .NET. The most widespread is Apache log4net, which provides a very capable and configurable logging framework. There is also NLog, which has a compatible interface, with a greater emphasis on fluent rather than XML configuration. Alerting the user serves two purposes. Firstly, it means they understand their action has failed, so they don't just assume it worked (silent file copy failure is a problem if you then delete the originals) or keep waiting for a background task to complete. Secondly, it means the users can report the bug to your support team, and then you can fix it. This means the message you show the user should contain the information you need as a developer to identify and fix the bug. And the user will probably just send you a screenshot of the dialog, so it shouldn't be hidden by scroll bars. This leads us to the third option: automatically sending error reports home. By automatic I mean with minimal effort on the part of the user, rather than doing it silently behind their backs. The advantage of this is you can send back far more detailed and precise information than you can expect a user to include in an email, and by making it easier to report errors, you make it more likely users will do so. We do this using a great tool called SmartAssembly (full disclosure: this is a product made by Red Gate). It captures complete stack traces, including the values of all local variables, and then allows the user to send all this information back with a single click. We also capture log files to help understand what led up to the error. We then use the free SmartAssembly Sync for Jira to dedupe these reports and raise them as bugs in our bug-tracking system. The combined effect of loud errors during development and then automatic error reporting once software is deployed allows us to find and fix more bugs, correct misunderstandings about how our software works, and overall is a key piece in delivering higher-quality software. However, it is no substitute for having motivated, cunning testers in the building, and we're looking to hire more of those too.

    If you found this post interesting you should follow me on twitter.
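
    To make the post's advice concrete, here is a minimal C# sketch of routing every unhandled exception in a WinForms app through one loud reporting point (MainForm and ReportLoudly are my names, not from the post):

        using System;
        using System.Windows.Forms;

        static class Program
        {
            [STAThread]
            static void Main()
            {
                // Exceptions thrown on the UI thread (most event handlers).
                Application.SetUnhandledExceptionMode(UnhandledExceptionMode.CatchException);
                Application.ThreadException += (s, e) => ReportLoudly(e.Exception);

                // Exceptions thrown on any other thread.
                AppDomain.CurrentDomain.UnhandledException +=
                    (s, e) => ReportLoudly(e.ExceptionObject as Exception);

                Application.Run(new MainForm());
            }

            static void ReportLoudly(Exception ex)
            {
                // Show full detail: a screenshot of this dialog should give
                // a developer enough information to identify the bug.
                MessageBox.Show(ex == null ? "Unknown error" : ex.ToString(),
                                "Unexpected error");
            }
        }

    Note that, as the post warns, drag-drop and timer handlers swallow exceptions before they ever reach these hooks, so those handlers still need their own try/catch that calls the same reporting method.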

    Read the article

  • Linux - Programmatically write to a proc file

    - by Zach
    I have found several examples online of creating a proc file and assigning read and write methods that are called every time the proc file is read from or written to. However, I can't seem to find any documentation on how to programmatically write to a proc file. Ideally, I would like to add a timestamp with other user details every time the proc file is opened for read or for write. Again, I've found how to add the read and write functions that are triggered when the proc file is accessed, but I can't find documentation on how to actually write to a proc file programmatically. This would be different from a regular I/O read/write, correct?

    Read the article

  • IRP_MJ_WRITE latency up to 15 seconds

    - by racitup
    We have written an application that performs small (22kB) writes to multiple files at once (one thread performing asynchronous queued writes to multiple locations on behalf of other threads) on the same local volume (RAID1). 99.9% of the writes are low-latency, but occasionally (maybe every minute or two) we get one or two huge-latency writes (I have seen 10 seconds and above) without any real explanation. Platform: Win2003 Server with NTFS. Monitoring: Sysinternals Process Monitor (see link below) and our own application logging. We have tried multiple things to solve this, gleaned from a few websites, e.g.:

    - Making the first part of file names unique to aid 8.3 name generation
    - Writing files to multiple directories
    - Changing Intel Disk Write Caching
    - Windows File/Printer Sharing: Minimize memory used / Balance / Maximize data throughput for file sharing / Maximize data throughput for network applications (System-Advanced-Performance-Advanced)
    - NtfsDisableLastAccessUpdate - use "fsutil behavior set disablelastaccess 1"
    - Disable 8.3 name generation - use "fsutil behavior set disable8dot3 1" + restart
    - Enable a large size file system cache
    - Disable paging of the kernel code
    - IO Page Lock Limit
    - Turn Off (or On) the Indexing Service

    But nothing seems to make much difference. There's a whole host of things we haven't tried yet, but we wondered if anyone had come across the same problem, a reason, and a solution (programmatic or not)? We can reproduce the problem using IOMeter and a simple setup:

    1. Start IOMeter and remove all but the first worker thread in 'Topology' using the disconnect button.
    2. Select the Worker thread, put a cross in the box next to the disk you want to use in the Disk Targets tab, and put '2000000' in Maximum Disk Size (NOTE: must have at least 1GB free space; sector size is 512 bytes).
    3. Next create a new access specification and add it to the worker thread: Transfer Request Size = 22kB, 100% Sequential, Percent of Access Spec = 100%, Percent Read/Write = 100% Write.
    4. Change Results Display Update Frequency to 5 seconds, Test Setup Run Time to 20 seconds, and both 'Number of Workers to Spawn Automatically' settings to zero.
    5. Select the Worker Thread in the Topology panel and hit the Duplicate Worker button 59 times to create 60 threads with identical settings.
    6. Hit the 'Go' button (green flag) and monitor the Results tab.

    The 'Maximum I/O Response Time (ms)' always hits at least 3500 on our machine. Our machine isn't exactly slow (Xeon 8-core rack server with 4GB and onboard RAID). I'd be interested to see what other people get. We have a feeling it might be something to do with the NTFS filesystem (ours is currently 75% full of fragmented files) and we are going to try a few things around this principle. But it is also related to disk performance, since we don't see it on a RAMDisk and it's not as severe on a RAID10 array. Any help is much appreciated. Richard

    Right-click and select 'Open Link in New Tab': ProcMon Result

    Read the article

  • Benefits of a RAID BBU in addition to a double UPS + PS system

    - by Wikser
    Today I roughly measured the benefits of enabling write-back caching on the RAID controller of a server at work. It has no RAID battery backup unit (BBU), so the write cache is currently disabled. As the server is not used to capacity (by far), the results in most tests were spectacular, e.g.:

    - Database CRUD: before 35s, after 4s
    - Saving a 1MB Excel file: before 20s (!), after 0.5s

    Of course having a BBU is always recommended, but what are the main benefits of installing a BBU in a system which has redundant power supplies and is attached to UPSs? Does this depend on the type of system (database, file, terminal)? What is a realistic failure scenario which could be prevented by a BBU? Thanks in advance!

    Read the article

  • writing becomes slow after a few writes

    - by user1566277
    I am running embedded Linux on ARM with an SD card. While writing huge amounts of data I see bizarre effects. E.g., when I dd a 15 MB file a few times, it writes the file (normally) in less than 2 seconds. But after, let's say, 3-4 times, it sometimes takes 15 to 30 seconds to write the same file. If I sync after writing the file this does not happen, but the sync takes a long time too. If there is enough of a gap between writing two files, then presumably the kernel syncs by itself. How can I optimize the overall performance so that a write always finishes inside 2 seconds? The filesystem I am using is ext3. Any pointers?

    Read the article

  • pthreads: reader/writer locks, upgrading read lock to write lock

    - by ScaryAardvark
    I'm using read/write locks on Linux and I've found that trying to upgrade a read-locked object to a write lock deadlocks, i.e.:

        // acquire the read lock in thread 1.
        pthread_rwlock_rdlock( &lock );

        // make a decision to upgrade the lock in thread 1.
        pthread_rwlock_wrlock( &lock );  // this deadlocks, as we already hold the read lock.

    I've read the man page and it's quite specific: "The calling thread may deadlock if at the time the call is made it holds the read-write lock (whether a read or write lock)." What is the best way to upgrade a read lock to a write lock in these circumstances? I don't want to introduce a race on the variable I'm protecting. Presumably I can create another mutex to encompass the releasing of the read lock and the acquiring of the write lock, but then I don't really see the point of read/write locks; I might as well simply use a normal mutex. Thx
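
    POSIX rwlocks have no upgrade operation, so the portable pattern is: release the read lock, acquire the write lock, and then re-validate the protected state before acting, since another writer may have slipped in between the two calls. For comparison only (this is .NET, not a pthreads answer), C#'s ReaderWriterLockSlim models the upgrade the asker wants as a distinct lock mode, avoiding the deadlock by allowing only one upgrader at a time; a minimal sketch with hypothetical names:

        using System.Threading;

        class Protected
        {
            private readonly ReaderWriterLockSlim _lock = new ReaderWriterLockSlim();
            private int _value;

            public void EnsureAtLeast(int minimum)
            {
                // Many plain readers may run concurrently with this, but
                // only one thread may hold the upgradeable lock at a time,
                // which is what removes the deadlock window.
                _lock.EnterUpgradeableReadLock();
                try
                {
                    if (_value < minimum)
                    {
                        _lock.EnterWriteLock();   // the actual upgrade
                        try { _value = minimum; }
                        finally { _lock.ExitWriteLock(); }
                    }
                }
                finally
                {
                    _lock.ExitUpgradeableReadLock();
                }
            }
        }

    The equivalent pthreads workaround has the same shape: drop the rdlock, take the wrlock, and re-check the condition under the write lock.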

    Read the article
