Search Results

Search found 24117 results on 965 pages for 'write through'.


  • Enum.HasFlag

    - by Scott Dorman
    An enumerated type, also called an enumeration (or just an enum for short), is simply a way to create a numeric type restricted to a predetermined set of valid values with meaningful names for those values. While most enumerations represent discrete values, or well-known combinations of those values, sometimes you want to combine values in an arbitrary fashion. These enumerations are known as flags enumerations because the values represent flags which can be set or unset. To combine multiple enumeration values, you use the logical OR operator. For example, consider the following:

        public enum FileAccess
        {
            None = 0,
            Read = 1,
            Write = 2,
        }

        class Program
        {
            static void Main(string[] args)
            {
                FileAccess access = FileAccess.Read | FileAccess.Write;
                Console.WriteLine(access);
            }
        }

    The output of this simple console application is 3, the numeric value associated with the combination of FileAccess.Read and FileAccess.Write. Clearly, this isn't the best representation. What you really want is for the output to read "Read, Write". To achieve this result, you simply add the Flags attribute to the enumeration. The Flags attribute changes how the string representation of the enumeration value is displayed when using the ToString() method. Although the .NET Framework does not require it, enumerations that will be used to represent flags should be decorated with the Flags attribute since it provides a clear indication of intent. One "problem" with flags enumerations is determining when a particular flag is set. The code to do this isn't particularly difficult, but unless you use it regularly it can be easy to forget. To test if the access variable has the FileAccess.Read flag set, you would use the following code:

        (access & FileAccess.Read) == FileAccess.Read

    Starting with .NET 4, a HasFlag method has been added to the Enum class which allows you to easily perform these tests:

        access.HasFlag(FileAccess.Read)

    This method follows one of the "themes" for the .NET Framework 4, which is to simplify and reduce the amount of boilerplate code like this you must write.
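    For completeness, here is a minimal, self-contained sketch (not from the original post) with the Flags attribute applied, showing both the classic bitwise test and the .NET 4 HasFlag call side by side:

        using System;

        [Flags]
        public enum FileAccess
        {
            None = 0,
            Read = 1,
            Write = 2,
        }

        class Program
        {
            static void Main()
            {
                FileAccess access = FileAccess.Read | FileAccess.Write;
                Console.WriteLine(access);                                        // prints "Read, Write"
                Console.WriteLine((access & FileAccess.Read) == FileAccess.Read); // True (pre-.NET 4 idiom)
                Console.WriteLine(access.HasFlag(FileAccess.Read));               // True (.NET 4 and later)
            }
        }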


  • What is ADO?

    - by Aamir Hasan
    What is ADO?

    - ADO is a Microsoft technology
    - ADO stands for ActiveX Data Objects
    - ADO is a Microsoft Active-X component
    - ADO is automatically installed with Microsoft IIS
    - ADO is a programming interface to access data in a database

    Accessing a Database from an ASP Page. The common way to access a database from inside an ASP page is to:

    1. Create an ADO connection to a database
    2. Open the database connection
    3. Create an ADO recordset
    4. Open the recordset
    5. Extract the data you need from the recordset
    6. Close the recordset
    7. Close the connection

    Example:

        <%
        set conn=Server.CreateObject("ADODB.Connection")
        conn.Provider="Microsoft.Jet.OLEDB.4.0"
        conn.Open(Server.Mappath("/db/northwind.mdb"))
        set rs = Server.CreateObject("ADODB.recordset")
        rs.Open "Select * from Customers", conn
        do until rs.EOF
            for each x in rs.Fields
                Response.Write(x.name)
                Response.Write(" = ")
                Response.Write(x.value & "<br />")
            next
            Response.Write("<br />")
            rs.MoveNext
        loop
        rs.close
        conn.close
        %>


  • Quantifying the effects of partition mis-alignment

    - by Matt
    I'm experiencing some significant performance issues on an NFS server. I've been reading up a bit on partition alignment, and I think I have my partitions mis-aligned. I can't find anything that tells me how to actually quantify the effects of mis-aligned partitions. Some of the general information I found suggests the performance penalty can be quite high (upwards of 60%) and others say it's negligible. What I want to do is determine if partition alignment is a factor in this server's performance problems or not; and if so, to what degree? So I'll put my info out here, and hopefully the community can confirm if my partitions are indeed mis-aligned, and if so, help me put a number to what the performance cost is. Server is a Dell R510 with dual E5620 CPUs and 8 GB RAM. There are eight 15k 2.5" 600 GB drives (Seagate ST3600057SS) configured in hardware RAID-6 with a single hot spare. RAID controller is a Dell PERC H700 w/512MB cache (Linux sees this as an LSI MegaSAS 9260). OS is CentOS 5.6, home directory partition is ext3, with options "rw,data=journal,usrquota". I have the HW RAID configured to present two virtual disks to the OS: /dev/sda for the OS (boot, root and swap partitions), and /dev/sdb for a big NFS share:

        [root@lnxutil1 ~]# parted -s /dev/sda unit s print
        Model: DELL PERC H700 (scsi)
        Disk /dev/sda: 134217599s
        Sector size (logical/physical): 512B/512B
        Partition Table: msdos

        Number  Start    End         Size        Type     File system  Flags
         1      63s      465884s     465822s     primary  ext2         boot
         2      465885s  134207009s  133741125s  primary               lvm

        [root@lnxutil1 ~]# parted -s /dev/sdb unit s print
        Model: DELL PERC H700 (scsi)
        Disk /dev/sdb: 5720768639s
        Sector size (logical/physical): 512B/512B
        Partition Table: gpt

        Number  Start  End          Size         File system  Name  Flags
         1      34s    5720768606s  5720768573s               lvm

    Edit 1: Using the cfq IO scheduler (default for CentOS 5.6):

        # cat /sys/block/sd{a,b}/queue/scheduler
        noop anticipatory deadline [cfq]
        noop anticipatory deadline [cfq]

    Chunk size is the same as stripe size, right? If so, then 64kB:

        # /opt/MegaCli -LDInfo -Lall -aALL -NoLog

        Adapter #0
        Number of Virtual Disks: 2

        Virtual Disk: 0 (target id: 0)
        Name:os
        RAID Level: Primary-6, Secondary-0, RAID Level Qualifier-3
        Size:65535MB
        State: Optimal
        Stripe Size: 64kB
        Number Of Drives:7
        Span Depth:1
        Default Cache Policy: WriteBack, ReadAdaptive, Direct, No Write Cache if Bad BBU
        Current Cache Policy: WriteThrough, ReadAdaptive, Direct, No Write Cache if Bad BBU
        Access Policy: Read/Write
        Disk Cache Policy: Disk's Default
        Number of Spans: 1
        Span: 0 - Number of PDs: 7

        ... physical disk info removed for brevity ...

        Virtual Disk: 1 (target id: 1)
        Name:share
        RAID Level: Primary-6, Secondary-0, RAID Level Qualifier-3
        Size:2793344MB
        State: Optimal
        Stripe Size: 64kB
        Number Of Drives:7
        Span Depth:1
        Default Cache Policy: WriteBack, ReadAdaptive, Direct, No Write Cache if Bad BBU
        Current Cache Policy: WriteThrough, ReadAdaptive, Direct, No Write Cache if Bad BBU
        Access Policy: Read/Write
        Disk Cache Policy: Disk's Default
        Number of Spans: 1
        Span: 0 - Number of PDs: 7

    If it's not obvious, virtual disk 0 corresponds to /dev/sda, for the OS; virtual disk 1 is /dev/sdb (the exported home directory tree).
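    Not an answer, but a quick arithmetic check of the suspicion above: with a 64 kB stripe and 512-byte sectors, a partition start is stripe-aligned only when its start sector is a multiple of 128. A sketch using the start sectors from the parted output (63, 465885, 34):

        # 64 kB stripe = 128 x 512-byte sectors
        for start in 63 465885 34; do
            if [ $(( start % 128 )) -eq 0 ]; then
                echo "sector $start: aligned"
            else
                echo "sector $start: misaligned ($(( start % 128 )) sectors past a stripe boundary)"
            fi
        done

    By that arithmetic, all three partition starts are off-stripe (63, 93 and 34 sectors past a boundary, respectively), so misalignment is at least plausible here.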


  • How to configure ldap on ubuntu 10.04 server

    - by user3215
    I am following a guide to configure LDAP on Ubuntu 10.04 server, but could not get it working. When I try to use

        sudo ldapadd -x -D cn=admin,dc=don,dc=com -W -f frontend.ldif

    I'm getting the following error:

        Enter LDAP Password: <entered 'secret' as password>
        adding new entry "dc=don,dc=com"
        ldap_add: Naming violation (64)
                additional info: value of single-valued naming attribute 'dc' conflicts with value present in entry

    Again when I try to do the same, I'm getting the following error:

        root@avy-desktop:/home/avy# sudo ldapadd -x -D cn=admin,dc=don,dc=com -W -f frontend.ldif
        Enter LDAP Password:
        ldap_bind: Invalid credentials (49)

    Here is the backend.ldif file:

        # Load dynamic backend modules
        dn: cn=module,cn=config
        objectClass: olcModuleList
        cn: module
        olcModulepath: /usr/lib/ldap
        olcModuleload: back_hdb

        # Database settings
        dn: olcDatabase=hdb,cn=config
        objectClass: olcDatabaseConfig
        objectClass: olcHdbConfig
        olcDatabase: {1}hdb
        olcSuffix: dc=don,dc=com
        olcDbDirectory: /var/lib/ldap
        olcRootDN: cn=admin,dc=don,dc=com
        olcRootPW: secret
        olcDbConfig: set_cachesize 0 2097152 0
        olcDbConfig: set_lk_max_objects 1500
        olcDbConfig: set_lk_max_locks 1500
        olcDbConfig: set_lk_max_lockers 1500
        olcDbIndex: objectClass eq
        olcLastMod: TRUE
        olcDbCheckpoint: 512 30
        olcAccess: to attrs=userPassword by dn="cn=admin,dc=don,dc=com" write by anonymous auth by self write by * none
        olcAccess: to attrs=shadowLastChange by self write by * read
        olcAccess: to dn.base="" by * read
        olcAccess: to * by dn="cn=admin,dc=don,dc=com" write by * read

    frontend.ldif file:

        # Create top-level object in domain
        dn: dc=don,dc=com
        objectClass: top
        objectClass: dcObject
        objectclass: organization
        o: Example Organization
        dc: Example
        description: LDAP Example

        # Admin user.
        dn: cn=admin,dc=don,dc=com
        objectClass: simpleSecurityObject
        objectClass: organizationalRole
        cn: admin
        description: LDAP administrator
        userPassword: secret

        dn: ou=people,dc=don,dc=com
        objectClass: organizationalUnit
        ou: people

        dn: ou=groups,dc=don,dc=com
        objectClass: organizationalUnit
        ou: groups

        dn: uid=john,ou=people,dc=don,dc=com
        objectClass: inetOrgPerson
        objectClass: posixAccount
        objectClass: shadowAccount
        uid: john
        sn: Doe
        givenName: John
        cn: John Doe
        displayName: John Doe
        uidNumber: 1000
        gidNumber: 10000
        userPassword: password
        gecos: John Doe
        loginShell: /bin/bash
        homeDirectory: /home/john
        shadowExpire: -1
        shadowFlag: 0
        shadowWarning: 7
        shadowMin: 8
        shadowMax: 999999
        shadowLastChange: 10877
        mail: [email protected]
        postalCode: 31000
        l: Toulouse
        o: Example
        mobile: +33 (0)6 xx xx xx xx
        homePhone: +33 (0)5 xx xx xx xx
        title: System Administrator
        postalAddress:
        initials: JD

        dn: cn=example,ou=groups,dc=don,dc=com
        objectClass: posixGroup
        cn: example
        gidNumber: 10000

    Could anybody help me?


  • How to remove iso 9660 from USB?

    - by a_m0d
    I have somehow managed to write an ISO 9660 image onto my USB drive, which makes every computer think that the device is actually a CD. I have tried various methods of removing this partition, but nothing seems to work. I have tried fdisk, which says:

        $ fdisk -l /dev/sdb
        Cannot open /dev/sdb

    parted crashes when I try to use it on this device. I have even tried

        $ dd if=/dev/zero of=/dev/sdb

    but it just hangs with no output (either on screen or on disk). However, when I plug the USB in, it does mount, and I can view (but not edit) the files on it. Edit: now the result is

        $ dd if=/dev/zero of=/dev/sdb
        dd: opening `/dev/sdb': Read-only file system

    I have also tried re-formatting it on Windows, but it gets to the end of the format process and then says "Couldn't format the drive". How can I remove this partition and get my whole USB drive back to normal again?

    EDIT 1: Trying a simple mkfs doesn't work:

        $ sudo mkfs -t vfat /dev/sdb
        mkfs.vfat 3.0.0 (28 Sep 2008)
        mkfs.vfat: Will not try to make filesystem on full-disk device '/dev/sdb' (use -I if wanted)

    I can't do mkfs on /dev/sdb1 because there is no such partition, as shown:

        $ ls /dev | grep sdb
        sdb

    EDIT 2: This is the information posted by dmesg when I plug the device in:

        $ dmesg
        .
        . (snip)
        .
        usb 2-1: New USB device found, idVendor=058f, idProduct=6387
        usb 2-1: New USB device strings: Mfr=1, Product=2, SerialNumber=3
        usb 2-1: Product: Mass Storage
        usb 2-1: Manufacturer: Generic
        usb 2-1: SerialNumber: G0905000000000010885
        usb-storage: device found at 4
        usb-storage: waiting for device to settle before scanning
        usb-storage: device scan complete
        scsi 6:0:0:0: Direct-Access FLASH Drive AU_USB20 8.07 PQ: 0 ANSI: 2
        sd 6:0:0:0: [sdb] 4069376 512-byte hardware sectors (2084 MB)
        sd 6:0:0:0: [sdb] Write Protect is off
        sd 6:0:0:0: [sdb] Mode Sense: 03 00 00 00
        sd 6:0:0:0: [sdb] Assuming drive cache: write through
        sd 6:0:0:0: [sdb] 4069376 512-byte hardware sectors (2084 MB)
        sd 6:0:0:0: [sdb] Write Protect is off
        sd 6:0:0:0: [sdb] Mode Sense: 03 00 00 00
        sd 6:0:0:0: [sdb] Assuming drive cache: write through
        sdb: unknown partition table
        sd 6:0:0:0: [sdb] Attached SCSI removable disk
        sd 6:0:0:0: Attached scsi generic sg2 type 0
        ISO 9660 Extensions: Microsoft Joliet Level 3
        ISO 9660 Extensions: RRIP_1991A
        SELinux: initialized (dev sdb, type iso9660), uses genfs_contexts
        CE: hpet increasing min_delta_ns to 15000 nsec

    This shows that the device is formatted as ISO 9660 and that it is /dev/sdb.

    EDIT 3: This is the message that I find at the bottom of dmesg after running cfdisk and writing a new partition table to the disk:

        SELinux: initialized (dev sdb, type iso9660), uses genfs_contexts
        sd 17:0:0:0: [sdb] Device not ready: Sense Key : Not Ready [current]
        sd 17:0:0:0: [sdb] Device not ready: < ASC=0xff ASCQ=0xffASC=0xff < ASCQ=0xff
        end_request: I/O error, dev sdb, sector 0
        Buffer I/O error on device sdb, logical block 0
        lost page write due to I/O error on sdb


  • please explain my fio results - is O_SYNC|O_DIRECT misbehaving on linux?

    - by Zoltan
    I'm going mad trying to figure out what the problem could be with one of our storage boxes. With a simple fio script I'm testing random writes using bs=1M and direct=1. The SSD is a Samsung 840pro attached to an LSI HBA (3Gbit/s ports). This is the result I'm getting under FreeBSD 9.1:

        WRITE: io=13169MB, aggrb=224743KB/s, minb=224743KB/s, maxb=224743KB/s, mint=60002msec, maxt=60002msec

    This is regardless of sync being set to 0 or 1. On Linux, this is the result with sync=0:

        WRITE: io=14828MB, aggrb=253060KB/s, minb=253060KB/s, maxb=253060KB/s, mint=60001msec, maxt=60001msec

    and with sync=1:

        WRITE: io=6360.0MB, aggrb=108542KB/s, minb=108542KB/s, maxb=108542KB/s, mint=60001msec, maxt=60001msec

    My understanding is that since I'm operating on the raw block device, O_SYNC should not make any difference - there's no filesystem, no barrier, nothing between the writes and the drive itself. Especially with O_DIRECT|O_SYNC set. Any ideas? For reference, here's the fio script I'm testing with:

        [global]
        bs=1M
        ioengine=sync
        iodepth=4
        size=16g
        direct=1
        runtime=60
        filename=/dev/sdh
        sync=1

        [rand-write]
        rw=randwrite
        stonewall


  • BizTalk Pipeline Component Error: "Object reference not set to an instance of an object"

    - by Stuart Brierley
    Yesterday I posted about my BizTalk Archiving Pipeline Component, which can be found on Codeplex if anyone is interested in taking a look. During testing of this component I began to encounter an error whereby the component would throw an "Object reference not set to an instance of an object" error when processing as a part of a custom pipeline. This was occurring when the component was reading a ReadOnlySeekableStream so that the data could be archived to file, but the actual code throwing the error was somewhere in the depths of the Microsoft.BizTalk.Streaming stack. It turns out that there is a known issue where this exception can be thrown because the garbage collector has disposed of the stream before execution of the custom pipeline has completed. To get around this you need to add the streams in your code to the pipeline context resource tracker. So a block of my code goes from:

        originalStrm = bodyPart.GetOriginalDataStream();
        if (!originalStrm.CanSeek)
        {
            ReadOnlySeekableStream seekableStream = new ReadOnlySeekableStream(originalStrm);
            inmsg.BodyPart.Data = seekableStream;
            originalStrm = inmsg.BodyPart.Data;
        }

        fileArchive = new FileStream(FullPath, FileMode.Create, FileAccess.Write);
        binWriter = new BinaryWriter(fileArchive);
        byte[] buffer = new byte[bufferSize];
        int sizeRead = 0;
        while ((sizeRead = originalStrm.Read(buffer, 0, bufferSize)) != 0)
        {
            binWriter.Write(buffer, 0, sizeRead);
        }

    to:

        originalStrm = bodyPart.GetOriginalDataStream();
        if (!originalStrm.CanSeek)
        {
            ReadOnlySeekableStream seekableStream = new ReadOnlySeekableStream(originalStrm);
            inmsg.BodyPart.Data = seekableStream;
            originalStrm = inmsg.BodyPart.Data;
        }
        pc.ResourceTracker.AddResource(originalStrm);

        fileArchive = new FileStream(FullPath, FileMode.Create, FileAccess.Write);
        binWriter = new BinaryWriter(fileArchive);
        byte[] buffer = new byte[bufferSize];
        int sizeRead = 0;
        while ((sizeRead = originalStrm.Read(buffer, 0, bufferSize)) != 0)
        {
            binWriter.Write(buffer, 0, sizeRead);
        }

    So far this seems to have solved the issue; the error is no more, and my archive component is continuing its way through testing.


  • Difference between Software Services & IT Consulting.

    - by Rohit
    I have been looking into the sites of IT companies, but I am confused by the terms they use for their offerings. Some write "Software Services, IT Consulting", some write "Technology, Consulting", some write "Product Engineering, Application Development". Can someone clarify the difference between: (1) Software Services and IT Consulting, and (2) Technology and Consulting?


  • Permissions on DVD folders in redhat10

    - by aryan
    I have written a data DVD with k3b. When I mount the DVD on my system I can't read and write its folders. I tried to set their permissions, but it's not possible. I mean that when I set file access to Read and Write and press the "Apply permissions to enclosed files" button, after a few seconds my new settings (Read and Write) revert to "---". Can anyone guide me, please?


  • Can't mount FAT32 drive under Ubuntu Linux

    - by Josh
    I have a 320GB USB drive with a single large FAT32 partition. The volume mounts perfectly fine on my Mac OS X 10.5.8 machine and Disk Utility on the Mac reports no issues with the volume. I can read/write all data on the drive. However, when I connect the drive to my Ubuntu 9.10 Karmic system, the partition does not mount. dmesg|tail says:

        [ 2752.334822] scsi3 : SCSI emulation for USB Mass Storage devices
        [ 2752.335040] usb-storage: device found at 3
        [ 2752.335044] usb-storage: waiting for device to settle before scanning
        [ 2757.330301] usb-storage: device scan complete
        [ 2757.331005] scsi 3:0:0:0: Direct-Access WD 3200AAK External 1.65 PQ: 0 ANSI: 0
        [ 2757.331772] sd 3:0:0:0: Attached scsi generic sg2 type 0
        [ 2757.355647] sd 3:0:0:0: [sdb] 625142448 512-byte logical blocks: (320 GB/298 GiB)
        [ 2757.360737] sd 3:0:0:0: [sdb] Write Protect is off
        [ 2757.360749] sd 3:0:0:0: [sdb] Mode Sense: 00 00 00 00
        [ 2757.360755] sd 3:0:0:0: [sdb] Assuming drive cache: write through
        [ 2757.367618] sd 3:0:0:0: [sdb] Assuming drive cache: write through
        [ 2757.367631] sdb: sdb1
        [ 2762.797622] sd 3:0:0:0: [sdb] Assuming drive cache: write through
        [ 2762.797636] sd 3:0:0:0: [sdb] Attached SCSI disk
        [ 2822.866228] FAT: bogus number of reserved sectors
        [ 2822.866237] VFS: Can't find a valid FAT filesystem on dev sdb1.

    When I run fsck.vfat -a /dev/sdb1 I get:

        root@cartman:~# fsck.vfat -a /dev/sdb1
        dosfsck 3.0.3, 18 May 2009, FAT32, LFN
        Logical sector size is zero.

    Googling "vfat Logical sector size is zero" produced no consensus as to the solution. I would prefer not to have to completely reformat the disk if possible, because it contains about 280GB of data I would rather not have to find a temporary home for. Any suggestions?
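    For what it's worth, the value dosfsck is complaining about can be inspected directly: in a FAT boot sector, bytes 11-12 (little-endian) hold the logical sector size, so a healthy FAT32 volume shows 00 02 (512) there. A sketch:

        # dump the start of the partition's boot sector; bytes 0x0b-0x0c are the sector size
        sudo dd if=/dev/sdb1 bs=512 count=1 2>/dev/null | hexdump -C | head -2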


  • .htaccess RewriteRule Problem

    - by Kunal Gautam
    Before asking my question, let me state some assumptions. There are 5 files on my webserver:

        index.php
        config.php
        read.php
        write.php
        .htaccess

    I've written the following URL rewriting rule in .htaccess:

        RewriteEngine on
        RewriteRule ^(\w+)$ read.php?id=$1

    Now when I type domain.com/xyz it fetches data from read.php?id=xyz - that's nice :) But when I type domain.com/index it fetches data from index.php, and when I type domain.com/write, domain.com/config or domain.com/read it fetches data from write.php, config.php and read.php respectively. I want data to be fetched from read.php?id=index, read.php?id=config, read.php?id=read or read.php?id=write. Can anyone help me with this? Sorry for my poor English.
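    One likely culprit (an assumption - it can't be confirmed from the post alone) is Apache's MultiViews content negotiation, which maps an extensionless request like /index onto index.php before mod_rewrite ever runs. If the host allows Options overrides in .htaccess, disabling it is a one-line sketch:

        Options -MultiViews
        RewriteEngine on
        RewriteRule ^(\w+)$ read.php?id=$1 [L]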


  • Improving the speed of writing code in C#

    - by Robert Harvey
    Laugh if you want, but I used to develop substantial line-of-business applications in VB6, long before the .NET framework came along. Why, when I was your age, we used to walk two miles in the snow, uphill. Both ways... Love it or hate it, VB6 had a REPL-like feel, and a very rapid development cycle. I would like to know how to come closer to that process in C#. In VB6, I could write a function, execute it, debug it and have it fully functional in a few minutes. I am told this is how the Lisp crowd works. It's a very rapid-fire style of programming. In C# I write a function, then I write a unit test for that function (which is OK, I understand the value of that), then I right-click, run test, wait for the project to compile (takes about 10 seconds right now, which would be an eternity for a REPL loop), and get an exception. Honestly, this feels more like my junior college days, when I used to feed punch cards into a hopper and wait for a printout (exaggerating only slightly for effect). Additionally, my tendency nowadays is to make everything public while I'm testing it. Unit testing with private accessors works fine, but you can't trace through the code (unless, of course, I'm doing something wrong) while you're using them. So what I'd like to know is, what adjustments have you made to your development process in C# to streamline it, and make it possible to write and verify your code very rapidly?


  • What are some of the benefits of a "Micro-ORM"?

    - by Wayne M
    I've been looking into the so-called "Micro ORMs" like Dapper and (to a lesser extent, as it relies on .NET 4.0) Massive, as these might be easier to implement at work than a full-blown ORM since our current system is highly reliant on stored procedures and would require significant refactoring to work with an ORM like NHibernate or EF. What is the benefit of using one of these over a full-featured ORM? It seems like just a thin layer around a database connection that still forces you to write raw SQL - perhaps I'm wrong, but I was always told the reason for ORMs in the first place is so you didn't have to write SQL; it could be automatically generated, especially for multi-table joins and mapping relationships between tables, which are a pain to do in pure SQL but trivial with an ORM. For instance, looking at an example of Dapper:

        var connection = new SqlConnection(); // setup here...
        var person = connection.Query<Person>("select * from people where PersonId = @personId", new { PersonId = 42 });

    How is that any different than using a hand-rolled ADO.NET data layer, except that you don't have to write the command, set the parameters and, I suppose, map the entity back using a Builder? It looks like you could even use a stored procedure call as the SQL string. Are there other tangible benefits that I'm missing here where a Micro ORM makes sense to use? I'm not really seeing how it's saving anything over the "old" way of using ADO.NET except maybe a few lines of code - you still have to figure out what SQL you need to execute (which can get hairy) and you still have to map relationships between tables (the part that IMHO ORMs help the most with).
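    For comparison, here is a sketch of the hand-rolled ADO.NET version of that exact query (the connection string and the Person columns are assumed); the boilerplate a micro-ORM removes is the command setup, the parameter binding and the reader-to-object mapping:

        // using System.Collections.Generic;
        // using System.Data.SqlClient;
        // assumed: a Person class with PersonId and Name properties
        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand(
            "select * from people where PersonId = @personId", connection))
        {
            command.Parameters.AddWithValue("@personId", 42);
            connection.Open();
            var people = new List<Person>();
            using (var reader = command.ExecuteReader())
            {
                while (reader.Read())
                {
                    people.Add(new Person
                    {
                        PersonId = (int)reader["PersonId"], // column names assumed
                        Name = (string)reader["Name"]
                    });
                }
            }
        }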


  • Powershell Remoting Handover Variables

    - by icnivad
    I've opened a remote session $s1 and would like to run a function with parameters I hand over in my scriptblock. Simplified example with Write-Host:

        $a = "aaa"
        $b = "bbb"
        $c = "ccc"
        $d = "ddd"
        Write-Host "AAAAA: $a $b $c $d"   # This works fine as it's local
        Invoke-Command -Session $s1 -ScriptBlock { Write-Host "BBBBB: $a $b $c $d" }   # These variables are empty

    What is the cleanest way to hand variables (which I normally read from a local CSV file) over to the scriptblock?
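    For reference, the standard mechanism in PowerShell 2.0 is a param() block inside the scriptblock plus -ArgumentList; a minimal sketch:

        Invoke-Command -Session $s1 -ScriptBlock {
            param($a, $b, $c, $d)
            Write-Host "BBBBB: $a $b $c $d"
        } -ArgumentList $a, $b, $c, $d

    On PowerShell 3.0 and later, the $using:a syntax inside the scriptblock is an alternative that avoids the explicit parameter plumbing.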


  • Log Blog

    - by PointsToShare
    © 2011 By: Dov Trietsch. All rights reserved.

    Logging – A log blog

    In another blog (Missing Fields and Defaults) I spoke about not doing a blog about log files, but then I looked at it again and realized that this is a nice opportunity to show a simple yet powerful tool and also deal with static variables and functions in C#. My log had to be able to answer a few simple logging rules:

    To log or not to log? That is the question – Always log! That is the answer.

    Do we share a log? Even when a file is opened with a minimal lock, it does not share well and performance greatly suffers. So sharing a log is not a good idea. Also, when sharing, it is harder to find your particular entries and you have to establish rules about retention. My recommendation – do not share!

    How verbose? Your log can be very verbose – a good thing when testing, very terse – a good thing in day-to-day runs, or somewhere in between. You must be the judge. In my blog, I elect to always report a run with start and end times, and always report errors. I normally use 5 levels of logging: 4 – write all, 3 – write more, 2 – write some, 1 – write errors and timing, 0 – write none. The code sample below is more general than that. It uses the config file to set the max log level, and each call to the log assigns a level to the call itself. If the level is above the .config highest level, the line will not be written. Programmers decide which log line belongs to which level, and thus we can set the .config differently for production and testing.

    Where do I keep the log? If your career is important to you, discuss this with the boss and with the system admin. We keep logs in the L: drive of our server and make sure that we have a directory for each app that needs a log. When adding a new app, add a new directory. The default location for the log is also found in the .config file.

    Print one or many? There are two options here:

    1. Print many, open but once – you start the stream and close it only when the program ends. This is what you can do when you perform in "batch" mode, like in a console app or a stsadm extension. The advantage to this is that starting and closing a stream is expensive and time consuming, and because we use a unique file, keeping it open for a long time does not cause contention problems.
    2. Print one entry at a time, or open many – every time you write a line, you start the stream, write to it and close it. This works for event receivers, feature receivers, and web parts. Here scalability requires us to create objects on the fly and get rid of them as soon as possible. A default value of oneOrMany resides in the .config.

    All of the above applies to any Windows or web application, not just SharePoint. So as usual, here is a routine that does it all, and a few simple functions that call it for a variety of purposes.

    So without further ado, here is app.config:

        <?xml version="1.0" encoding="utf-8" ?>
        <configuration>
            <configSections>
                <sectionGroup name="applicationSettings" type="System.Configuration.ApplicationSettingsGroup, System, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" >
                    <section name="statics.Properties.Settings" type="System.Configuration.ClientSettingsSection, System, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" requirePermission="false" />
                </sectionGroup>
            </configSections>
            <applicationSettings>
                <statics.Properties.Settings>
                    <setting name="oneOrMany" serializeAs="String">
                        <value>False</value>
                    </setting>
                    <setting name="logURI" serializeAs="String">
                        <value>C:\staticLog.txt</value>
                    </setting>
                    <setting name="highestLevel" serializeAs="String">
                        <value>2</value>
                    </setting>
                </statics.Properties.Settings>
            </applicationSettings>
        </configuration>

    And now the code. In order to persist the variables between calls and also to be able to persist (or not to persist) the log file itself, I created an EventLog class with static variables and functions. Static functions do not need an instance of the class in order to work. If you ever wondered why our Main function is static, the answer is that something needs to run before instantiation so that other objects may be instantiated, and this is what the "static" Main does. The various logging functions and variables are created as static because they do not need instantiation, and as a fringe benefit they remain un-destroyed between calls. The Main function here is just used for testing. Note that it does not instantiate anything, just uses the log functions. This is possible because the functions are static. Also note that the function calls are of the form: Class.Function.

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Text;
        using System.IO;

        namespace statics
        {
            class Program
            {
                static void Main(string[] args)
                {
                    //write a single line
                    EventLog.LogEvents("ha ha", 3, "C:\\hahafile.txt", 4, true, false);
                    //this single line will not be written because the msgLevel is too high
                    EventLog.LogEvents("baba", 3, "C:\\babafile.txt", 2, true, false);
                    //the next 4 lines will be written in succession - no closing
                    EventLog.LogLine("blah blah", 1);
                    EventLog.LogLine("da da", 1);
                    EventLog.LogLine("ma ma", 1);
                    EventLog.LogLine("lah lah", 1);
                    EventLog.CloseLog(); // log will close
                    //now with specific functions
                    EventLog.LogSingleLine("one line", 1);
                    //this is just a test, the log is already closed
                    EventLog.CloseLog();
                }
            }

            public class EventLog
            {
                public static string logURI = Properties.Settings.Default.logURI;
                public static bool isOneLine = Properties.Settings.Default.oneOrMany;
                public static bool isOpen = false;
                public static int highestLevel = Properties.Settings.Default.highestLevel;
                public static StreamWriter sw;

                /// <summary>
                /// The program will "print" the msg into the log
                /// unless msgLevel is > msgLimit.
                /// oneOrMany is true when once - the program will open the log,
                /// print the msg and close the log. False when many - the program
                /// will keep the log open until close = true.
                /// Normally all the arguments will come from the app.config.
                /// Called by the overloads of LogLine below.
                /// </summary>
                public static void LogEvents(string msg, int msgLevel, string logFileName, int msgLimit, bool oneOrMany, bool close)
                {
                    //to print or not to print
                    if (msgLevel <= msgLimit)
                    {
                        //open the file, from the argument (logFileName) or from the config (logURI)
                        if (!isOpen)
                        {
                            string logFile = logFileName;
                            if (logFileName == "")
                            {
                                logFile = logURI;
                            }
                            sw = new StreamWriter(logFile, true);
                            sw.WriteLine("Started At: " + DateTime.Now.ToString("yyyy-MM-dd HH:mm:ss"));
                            isOpen = true;
                        }
                        //print
                        sw.WriteLine(msg);
                    }
                    //close when instructed
                    if (close || oneOrMany)
                    {
                        if (isOpen)
                        {
                            sw.WriteLine("Ended At: " + DateTime.Now.ToString("yyyy-MM-dd HH:mm:ss"));
                            sw.Close();
                            isOpen = false;
                        }
                    }
                }

                /// <summary>
                /// The simplest, just msg and level
                /// </summary>
                public static void LogLine(string msg, int msgLevel)
                {
                    //use the given msg and msgLevel; all others are defaults
                    LogEvents(msg, msgLevel, "", highestLevel, isOneLine, false);
                }

                /// <summary>
                /// One line at a time - open, print, close
                /// </summary>
                public static void LogSingleLine(string msg, int msgLevel)
                {
                    LogEvents(msg, msgLevel, "", highestLevel, true, true);
                }

                /// <summary>
                /// Used to close: high level, low limit, once and close are set
                /// </summary>
                public static void CloseLog()
                {
                    LogEvents("", 15, "", 1, true, true);
                }
            }
        }

    That’s all folks!


  • What are the necessary periodic checks for server?

    - by Edmund
    Hi all, I have some servers which my team uses for hosting internal applications for development purposes. I'm thinking of setting up some periodic checks but do not know how to go about it. Can anyone advise on the following (preferably as a Windows bat file or a Linux script)?

    1. How to write a script that will check the content of a webpage to verify if the site is down.
    2. How to write a script that will check if the website is down by pinging it.
    3. How to write a script that will check if the server is running out of disk space.
    4. How to write a script that will email the system administrator if any of the above checks fail.
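    A rough Linux sketch covering all four checks; the URL, host, expected string, threshold and address are placeholders, and running it from cron supplies the periodic part:

        #!/bin/bash
        # placeholders - adjust for the real environment
        URL="http://intranet/app/"
        HOST="intranet"
        ADMIN="sysadmin@example.com"
        THRESHOLD=90    # % disk usage that triggers an alert

        FAILURES=""

        # 1. content check: does the page still contain an expected string?
        curl -s "$URL" | grep -q "Welcome" || FAILURES="$FAILURES content-check"

        # 2. reachability check by pinging the host
        ping -c 3 "$HOST" > /dev/null 2>&1 || FAILURES="$FAILURES ping"

        # 3. disk space check on the root filesystem
        USED=$(df -P / | awk 'NR==2 { sub(/%/, "", $5); print $5 }')
        [ "$USED" -ge "$THRESHOLD" ] && FAILURES="$FAILURES disk-at-${USED}%"

        # 4. mail the administrator if anything failed
        if [ -n "$FAILURES" ]; then
            echo "Failed checks:$FAILURES" | mail -s "Server check alert" "$ADMIN"
        fi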


  • One of the partion on a usb harddisk cannot automount

    - by holmescn
    It is a very strange problem. My USB hard disk has four partitions: one is primary, the other three are logical (contained within an extended partition). When I plug in the disk, three of the partitions are mounted automatically; the exception is the first logical partition in the extended partition. Initially I thought it was a problem with the system (at that time I used Mint), but after I changed to Ubuntu 12.04 the problem wasn't solved. I don't want to add a rule in fstab; I want to know what happened. The disk is fine, and the partition can be accessed in Windows and mounted manually. Result of dmesg | tail:

        [100933.557649] usb 2-1.2: new high-speed USB device number 4 using ehci_hcd
        [100933.651891] scsi8 : usb-storage 2-1.2:1.0
        [100934.649047] scsi 8:0:0:0: Direct-Access SAMSUNG PQ: 0 ANSI: 0
        [100934.650963] sd 8:0:0:0: Attached scsi generic sg2 type 0
        [100934.651342] sd 8:0:0:0: [sdb] 625142448 512-byte logical blocks: (320 GB/298 GiB)
        [100934.651977] sd 8:0:0:0: [sdb] Write Protect is off
        [100934.651989] sd 8:0:0:0: [sdb] Mode Sense: 03 00 00 00
        [100934.652836] sd 8:0:0:0: [sdb] No Caching mode page present
        [100934.652848] sd 8:0:0:0: [sdb] Assuming drive cache: write through
        [100934.655354] sd 8:0:0:0: [sdb] No Caching mode page present
        [100934.655367] sd 8:0:0:0: [sdb] Assuming drive cache: write through
        [100934.734652] sdb: sdb1 sdb3 < sdb5 sdb6 sdb7 >
        [100934.737706] sd 8:0:0:0: [sdb] No Caching mode page present
        [100934.737725] sd 8:0:0:0: [sdb] Assuming drive cache: write through
        [100934.737731] sd 8:0:0:0: [sdb] Attached SCSI disk

    Result of parted -l:

        Model: SAMSUNG (scsi)
        Disk /dev/sdb: 320GB
        Sector size (logical/physical): 512B/512B
        Partition Table: msdos

        Number  Start   End     Size    Type      File system  Flags
         1      32.3kB  21.5GB  21.5GB  primary   ntfs
         3      21.5GB  320GB   299GB   extended               lba
         5      21.5GB  129GB   107GB   logical   ntfs
         6      129GB   236GB   107GB   logical   ntfs
         7      236GB   320GB   83.8GB  logical   ntfs


  • Configuring Linux Network

    - by Reiler
    Hi, I'm working on some software that runs on a CentOS 5.xx installation. Our customers are not allowed to log in to Linux; everything is done from Windows applications developed by us. So we have built a frontend for the user to configure the network setup: static/DHCP, IP address, gateway, DNS, hostname. Right now I let the user enter the information in the Windows app, and then write it on the Linux server like this:

    - Write to /etc/resolv.conf: nameserver
    - Write to /etc/sysconfig/network: gateway and hostname
    - Write to /etc/sysconfig/network-scripts/ifcfg-eth0: IP address, netmask, bootproto (DHCP or static)

    I also (after some time) found out that I was unable to send mail unless I wrote in /etc/hosts: 127.0.0.1 hostname. All this seems to work, but is there a better/easier way to do this? Also, I read the network configuration nearly the same way, but if I use DHCP I miss some information, for instance the IP address. I know that I can get some information from the command line (ifconfig), but I don't get, for instance, the hostname, gateway and DNS. Is there a command-line tool that will display this?
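    For the read side, these stock commands cover the missing pieces; a sketch (the awk parsing may need adjusting per distribution):

        hostname                                      # current hostname
        /sbin/ip addr show eth0                       # current IP address, even when assigned by DHCP
        /sbin/ip route | awk '/^default/ {print $3}'  # default gateway
        grep '^nameserver' /etc/resolv.conf           # DNS servers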


  • Permission folders redhat10

    - by aryan
    Dear all, I have written a data DVD with k3b, and now when I mount the DVD on my system I am unable to read and write its folders. I tried to set their permissions, but it's not possible. I mean that when I set file access to Read and Write and press the "Apply permissions to enclosed files" button, after a few seconds my settings (Read and Write) disappear and revert to "---". Can anyone guide me, please? Thanks


  • MegaCli newly created disk doesn't appear under /dev/sdX

    - by Henry-Nicolas Tourneur
    After having successfully added 2 new disks in a new RAID virtual drive (background initialization done), I would have expected it to appear under /dev/sdh, but it's not there (so, unusable). The system is running CentOS 5.2 64 bits; the HAL and udev daemons are running; there is no record of any sdh appearing in the messages log file or in dmesg. Only MegaCli sees that virtual drive. Any idea? Some data:

        [root@server ~]# ./MegaCli -LDInfo -LALL -a0

        Adapter 0 -- Virtual Drive Information:

        Virtual Disk: 0 (target id: 0)
        Name:
        RAID Level: Primary-1, Secondary-0, RAID Level Qualifier-0
        Size:139392MB
        State: Optimal
        Stripe Size: 64kB
        Number Of Drives:2
        Span Depth:1
        Default Cache Policy: WriteBack, ReadAheadNone, Direct, No Write Cache if Bad BBU
        Current Cache Policy: WriteBack, ReadAheadNone, Direct, No Write Cache if Bad BBU
        Access Policy: Read/Write
        Disk Cache Policy: Disk's Default

        Virtual Disk: 1 (target id: 1)
        Name:
        RAID Level: Primary-1, Secondary-0, RAID Level Qualifier-0
        Size:285568MB
        State: Optimal
        Stripe Size: 64kB
        Number Of Drives:2
        Span Depth:1
        Default Cache Policy: WriteBack, ReadAheadNone, Direct, No Write Cache if Bad BBU
        Current Cache Policy: WriteBack, ReadAheadNone, Direct, No Write Cache if Bad BBU
        Access Policy: Read/Write
        Disk Cache Policy: Disk's Default

        [root@server ~]# ls -l /dev/disk/by-id/scsi-360*
        lrwxrwxrwx 1 root root  9 Nov 17 2010 /dev/disk/by-id/scsi-36001ec90f82fe100108ca0a704098d09 -> ../../sda
        lrwxrwxrwx 1 root root 10 Nov 17 2010 /dev/disk/by-id/scsi-36001ec90f82fe100108ca0a704098d09-part1 -> ../../sda1
        lrwxrwxrwx 1 root root 10 Nov 17 2010 /dev/disk/by-id/scsi-36001ec90f82fe100108ca0a704098d09-part2 -> ../../sda2
        lrwxrwxrwx 1 root root  9 Nov 17 2010 /dev/disk/by-id/scsi-36090a028e0fe07e78f94940c0000a0ee -> ../../sdf
        lrwxrwxrwx 1 root root 10 Nov 17 2010 /dev/disk/by-id/scsi-36090a028e0fe07e78f94940c0000a0ee-part1 -> ../../sdf1
        lrwxrwxrwx 1 root root  9 Nov 17 2010 /dev/disk/by-id/scsi-36090a028e0fe972a3f91240a0000005f -> ../../sdb
        lrwxrwxrwx 1 root root 10 Nov 17 2010 /dev/disk/by-id/scsi-36090a028e0fe972a3f91240a0000005f-part1 -> ../../sdb1
        lrwxrwxrwx 1 root root  9 Nov 17 2010 /dev/disk/by-id/scsi-36090a028e0fea7e18f94640c000020ec -> ../../sde
        lrwxrwxrwx 1 root root 10 Nov 17 2010 /dev/disk/by-id/scsi-36090a028e0fea7e18f94640c000020ec-part1 -> ../../sde1
        lrwxrwxrwx 1 root root  9 Nov 17 2010 /dev/disk/by-id/scsi-36090a028e0feb7da8f94340c0000203d -> ../../sdd
        lrwxrwxrwx 1 root root 10 Nov 17 2010 /dev/disk/by-id/scsi-36090a028e0feb7da8f94340c0000203d-part1 -> ../../sdd1
        lrwxrwxrwx 1 root root  9 Nov 17 2010 /dev/disk/by-id/scsi-36090a028e0fed7d78f94040c000080b7 -> ../../sdc
        lrwxrwxrwx 1 root root 10 Nov 17 2010 /dev/disk/by-id/scsi-36090a028e0fed7d78f94040c000080b7-part1 -> ../../sdc1
        lrwxrwxrwx 1 root root  9 Nov 17 2010 /dev/disk/by-id/scsi-36090a05830145e58e0b9c479000010a1 -> ../../sdg
        lrwxrwxrwx 1 root root 10 Nov 17 2010 /dev/disk/by-id/scsi-36090a05830145e58e0b9c479000010a1-part1 -> ../../sdg1
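    In this situation a host-side SCSI rescan is often the missing step; a hedged sketch (the controller may sit on any hostN, so the loop covers all of them):

        # rescan every SCSI host for new channels/targets/LUNs
        for scan in /sys/class/scsi_host/host*/scan; do
            echo "- - -" > "$scan"
        done
        dmesg | tail    # a new sdX attach line should appear if the LD was picked up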


  • Get to Know a Candidate (19-25 of 25): Independent Candidates

    - by Brian Lanham
    DISCLAIMER: This is not a post about "Romney" or "Obama". This is not a post about whom I am voting for. Information sourced from Wikipedia.

    The following independent candidates have gained access to at least one state ballot.

    - Richard Duncan, of Ohio; Vice-presidential nominee: Ricky Johnson. Candidate ballot access: Ohio (18 electoral). Write-in candidate access: Alaska, Florida, Indiana, Maryland.
    - Randall Terry, of West Virginia; Vice-presidential nominee: Missy Smith. Candidate ballot access: Kentucky, Nebraska, West Virginia (18 electoral). Write-in candidate access: Colorado, Indiana.
    - Sheila Tittle, of Texas; Vice-presidential nominee: Matthew Turner. Candidate ballot access: Colorado, Louisiana (17 electoral).
    - Jeff Boss, of New Jersey; Vice-presidential nominee: Bob Pasternak. Candidate ballot access: New Jersey (14 electoral).
    - Dean Morstad, of Minnesota; Vice-presidential nominee: Josh Franke-Hyland. Candidate ballot access: Minnesota (10 electoral). Write-in candidate access: Utah.
    - Jill Reed, of Wyoming; Vice-presidential nominee: Tom Cary. Candidate ballot access: Colorado (9 electoral). Write-in candidate access: Indiana, Florida.
    - Jerry Litzel, of Iowa; Vice-presidential nominee: Jim Litzel. Candidate ballot access: Iowa (6 electoral).

    That wraps it up, people. We have reviewed 25 presidential candidates in the 2012 U.S. election. Look for more blog posts about the election to come.


  • RAID 1 not performing as expected

    - by Faken
    I recently bought a new 320 GB hard drive for my computer to set up RAID 1 for some added security. Installation went as smoothly as could possibly be (plug in power, plug in data cable, start up computer, Intel software recognized the new drive, right click, create RAID 1, done!). However, for some inexplicable reason, I seem to get strange test results when using BENCH32. On my old configuration, a single 7200 rpm drive, I achieved about 60 MB/s write and 70 MB/s read. With the new RAID 1 configuration, I would expect the write to be slightly diminished but the read to be significantly improved (though not exactly double speed). However, with the new configuration I am getting 90 MB/s write and only about 80 MB/s read. I should NOT be getting improved write performance, especially NOT better than read! What's going on? My system setup is:

    - Q6600 2.4 GHz CPU
    - 4 GB DDR2 667 MHz RAM
    - onboard Intel ICH9R "RAID chip"
    - 2x Seagate 7200 RPM 320 GB drives in RAID 1
    - Windows 7 Home Premium 64-bit


  • On Windows Server 2003, permissions are not propagating to a group that is a member of a group

    - by Joshua K
    Windows Server 2003 on i386. FTP server is running as the SYSTEM user/group. Some files we want served (read and write) are owned by the group 'ftp.' ftp has full read/write/whatever permissions on those files and directories. SYSTEM can't read/write those directories. So, I added SYSTEM to the 'ftp' group. Windows happily complied, but even after restarting Filezilla, it still could not read/write those files. Is there any way to do what we want without "re-permissioning" all those files? Running the ftp server as 'ftp' isn't really an option because it also serves files that are owned by SYSTEM (And not ftp). Sigh... :) Any insights?


  • SearchServer2008Express Search Webservice

    - by Mike Koerner
    I was working on calling the Search Server 2008 Express search webservice from PowerShell. I kept getting:

        <ResponsePacket xmlns="urn:Microsoft.Search.Response"><Response domain=""><Status>ERROR_NO_RESPONSE</Status><DebugErrorMessage>The search request was unable to connect to the Search Service.</DebugErrorMessage></Response></ResponsePacket>

    I checked the user authorization, the webservice search status, even the WSDL. Turns out the URL for the Search Server 2008 search webservice was incorrect. I was calling

        $URI = "http://ss2008/_vti_bin/spsearch.asmx?WSDL"

    and it should have been

        $URI = "http://ss2008/_vti_bin/search.asmx?WSDL"

    Here is my sample PowerShell script:

        # WSS Documentation http://msdn.microsoft.com/en-us/library/bb862916.aspx
        $error.clear()
        # Bad SearchServer2008Express Search URL
        $URI = "http://ss2008/_vti_bin/spsearch.asmx?WSDL"
        # Good SearchServer2008Express Search URL
        $URI = "http://ss2008/_vti_bin/search.asmx?WSDL"
        $search = New-WebServiceProxy -uri $URI -namespace WSS -class Search -UseDefaultCredential

        $queryXml = "<QueryPacket Revision='1000'>
          <Query>
            <SupportedFormats>
              <Format revision='1'>urn:Microsoft.Search.Response.Document.Document</Format>
            </SupportedFormats>
            <Context>
              <QueryText language='en-US' type='MSSQLFT'>SELECT Title, Path, Description, Write, Rank, Size FROM Scope() WHERE CONTAINS('Microsoft')</QueryText>
              <!-- <QueryText language='en-US' type='TEXT'>Microsoft</QueryText> -->
            </Context>
          </Query>
        </QueryPacket>"

        $statusResponse = $search.Status()
        write-host '$statusResponse:' $statusResponse

        $GetPortalSearchInfo = $search.GetPortalSearchInfo()
        write-host '$GetPortalSearchInfo:' $GetPortalSearchInfo

        $queryResult = $search.Query($queryXml)
        write-host '$queryResult:' $queryResult


  • Hard Disk Not Counting Reallocated Sectors

    - by MetaNova
    I have a drive that is reporting that the current pending sector count is 45. I have used badblocks to identify the sectors, and I have been trying to write zeros to them with dd. From what I understand, when I attempt writing data directly to the bad sectors, it should trigger a reallocation, reducing current pending sectors by one and increasing the reallocated sector count. However, on this disk both the Reallocated_Sector_Ct and Reallocated_Event_Count raw values are 0, and dd fails with I/O errors when I attempt to write zeros to the bad sectors. dd works fine, however, when I write to a good sector.

        # dd if=/dev/zero of=/dev/sdb bs=512 count=1 seek=217152
        dd: error writing '/dev/sdb': Input/output error

    Does this mean that my drive, in some way, has no spare sectors to be used for reallocation? Is my drive just in general a terrible person? (The drive isn't actually mine; I'm helping a friend out. They might have just gotten a cheap drive or something.) In case it is relevant, here is the output of smartctl -i:

        Model Family:     Western Digital Caviar Green (AF)
        Device Model:     WDC WD15EARS-00Z5B1
        Serial Number:    WD-WMAVU3027748
        LU WWN Device Id: 5 0014ee 25998d213
        Firmware Version: 80.00A80
        User Capacity:    1,500,301,910,016 bytes [1.50 TB]
        Sector Size:      512 bytes logical/physical
        Device is:        In smartctl database [for details use: -P show]
        ATA Version is:   ATA8-ACS (minor revision not indicated)
        SATA Version is:  SATA 2.6, 3.0 Gb/s
        Local Time is:    Fri Oct 18 17:47:29 2013 CDT
        SMART support is: Available - device has SMART capability.
        SMART support is: Enabled

    UPDATE: I have run shred on the disk, which has caused Current_Pending_Sector to go to zero. However, Reallocated_Sector_Ct and Reallocated_Event_Count are still zero, and dd is now able to write data to the sectors it was previously unable to. This leaves me with several other questions:

    1. Why aren't the reallocations being recorded by the disk? I'm assuming the reallocation took place, as I can now write data directly to the sectors and couldn't before.
    2. Why did shred cause reallocation and not dd? Does the fact that shred writes random data instead of just zeros make a difference?
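    For reference, the counters discussed above can be watched directly between write attempts with standard smartctl usage:

        # raw values of the pending / reallocated attributes
        smartctl -A /dev/sdb | egrep -i 'Reallocated|Current_Pending|Offline_Uncorrectable'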

