Search Results

Search found 24117 results on 965 pages for 'write'.

  • Expected IOPS for log writing on PS6000X SAN?

    - by dssz
    Customer is experiencing poor Sybase ASE 15 performance on a PS6000X SAN with 16 x 450 GB 10K drives in RAID-50. The server is a Dell R710 running Windows Server 2003 R2 64-bit under ESX 4.0.0 (build 256968). I've used sqlio to benchmark the sequential write performance of 4KB blocks on the drive: sqlio -kW -t1 -s600 -dE -o1 -fsequential -b4 -BH -LS sqliotestfile.dat The result is 1900 IOPS. However, when Sybase is running a sustained workload of small inserts, SAN HQ shows a consistent 590 IOPS (and 100% 4K write activity). It also shows that the write latency increases from <1 ms to 1.2 ms. Monitoring and tests in Sybase demonstrate that the performance problem is I/O related; in particular, there is a lot of wait time writing to the log. The SAN indicates that write caching is enabled. What IOPS should the SAN be capable of for 4K sequential write activity? Also, with write caching enabled, shouldn't the controller be batching up the 4K writes into something more efficient? Any tips on Sybase on ESX would also be appreciated.

    Read the article

  • Matlab computations done over Apple Filing Protocol (AFP) depend on POSIX permissions, ignore ACLs

    - by flumignan
    I'm a system administrator and have never used Matlab, so forgive my general ignorance of the program. My users have encountered problems when executing scripted Matlab actions over AFP to a Mac OS X Server 10.6.7 where the access control list (ACL) should allow actions, but the POSIX-style permissions disallow the activity. It seems as if Matlab, run locally on the Mac workstations on datasets on the remote server, ignores the ACLs entirely. This is the only application I've ever seen behave this way. The server's filesystem is HFS+J and all other activity is performing as expected. These users cannot use CIFS because of our integration with external directory systems. In this example, the directory bxdata, the members of the group cibturner should be able to modify the files. Indeed, they can using any other method except via Matlab scripts. When the Matlab script hits these files, the POSIX permissions of 644 disallow modification. It's as if the ACLs are irrelevant. [root@cib 16:00:24 /14181.2_5sM]# ls -leh@ bxdata/ total 128 -rw-r--r--+ 1 kel32 staff 18K Feb 15 09:31 TS-5sMath030708-21073-1.edat 0: group:cibturner inherited allow read,write,execute,append,readattr,writeattr,readextattr,writeextattr,readsecurity,writesecurity,chown 1: group:cibsrlocaladmins inherited allow read,write,execute,append,readattr,writeattr,readextattr,writeextattr,readsecurity,writesecurity,chown 2: group:crcservergroup inherited allow read,write,execute,append,readattr,writeattr,readextattr,writeextattr,readsecurity,writesecurity,chown -rw-r--r--+ 1 kel32 staff 25K Feb 15 09:31 TS-5sMath030708-21073-1.txt 0: group:cibturner inherited allow read,write,execute,append,readattr,writeattr,readextattr,writeextattr,readsecurity,writesecurity,chown 1: group:cibsrlocaladmins inherited allow read,write,execute,append,readattr,writeattr,readextattr,writeextattr,readsecurity,writesecurity,chown 2: group:crcservergroup inherited allow read,write,execute,append,readattr,writeattr,readextattr,writeextattr,readsecurity,writesecurity,chown Because this server has HIPAA data, security is critical. We are not using networked home directories or SAN technology. The MatLab program is run on the user's hard drive; access is granted via Kerberized AFP.

    Read the article

  • LSI MegaRAID on Linux shows Optimal after degradation, but with a strange POST message

    - by kesrut
    A Linux server box with an LSI MegaRAID controller had its RAID array become degraded, but after some time the RAID status changed back to Optimal. Adapter 0 -- Virtual Drive Information: Virtual Drive: 0 (Target Id: 0) Name : RAID Level : Primary-1, Secondary-0, RAID Level Qualifier-0 Size : 2.727 TB Mirror Data : 2.727 TB State : Optimal Strip Size : 256 KB Number Of Drives per span:2 Span Depth : 3 Default Cache Policy: WriteBack, ReadAdaptive, Cached, No Write Cache if Bad BBU Current Cache Policy: WriteThrough, ReadAdaptive, Cached, No Write Cache if Bad BBU Default Access Policy: Read/Write Current Access Policy: Read/Write Disk Cache Policy : Disk's Default Encryption Type : None Is VD Cached: No But now I'm getting this RAID BIOS POST message: Your battery is either charging, bad or missing, and you have VDs configured for write-back mode. Because the battery is not currently usable, these VDs will actually run in write-through mode until the battery is fully charged or replaced if it is bad or missing. (Image: http://cl.ly/image/1h1O093b1i2d) So could a battery issue have caused the problem? Here is the battery information: BatteryType: iBBU Voltage: 4001 mV Current: 0 mA Temperature: 22 C Battery State : Operational BBU Firmware Status: Charging Status : None Voltage : OK Temperature : OK Learn Cycle Requested : No Learn Cycle Active : No Learn Cycle Status : OK Learn Cycle Timeout : No I2c Errors Detected : No Battery Pack Missing : No Battery Replacement required : No Remaining Capacity Low : No Periodic Learn Required : No Transparent Learn : No No space to cache offload : No Pack is about to fail & should be replaced : No Cache Offload premium feature required : No Module microcode update required : No Where could the problem be? I have disabled the alarms, but I get them if they are enabled. I don't know how to find the root of the problem.

    Read the article

  • recursive grep started at / hangs

    - by Martin
    I have used following grep search pattern on multiple platforms: grep -r -I -D skip 'string_to_match' / For example on FreeBSD 8.0, FreeBSD 6.4 and Debian 6.0(squeeze). Command does a recursive search starting from root directory, assumes that binary files do not have the 'string_to_match' and skips devices, sockets and named pipes. FreeBSD 8.0 and FreeBSD 6.4 use GNU grep version 2.5.1 and Debian 6.0 uses GNU grep version 2.6.3. On FreeBSD 6.4, last information printed to stderr was "grep: /dev/cuad0: Device busy". After this grep just idles as according to "top -m io -o total" the I/O usage of grep is nonexistent. Same behavior is true under FreeBSD 8.0, but last information sent to stderr is "grep: /tmp/.wine-0: Permission denied" on my installation. In case of Debian, last output to stderr is "grep: /proc/sysrq-trigger: Input/output error". If I check the I/O usage of grep process under Debian, it is following: root@Debian:~# iotop -bp 22439 Total DISK READ: 0.00 B/s | Total DISK WRITE: 0.00 B/s TID PRIO USER DISK READ DISK WRITE SWAPIN IO COMMAND 22439 be/4 root 0.00 B/s 0.00 B/s 0.00 % 0.00 % grep -r -I -D skip 10.10.10.99 / Total DISK READ: 0.00 B/s | Total DISK WRITE: 0.00 B/s TID PRIO USER DISK READ DISK WRITE SWAPIN IO COMMAND 22439 be/4 root 0.00 B/s 0.00 B/s 0.00 % 0.00 % grep -r -I -D skip 10.10.10.99 / Total DISK READ: 0.00 B/s | Total DISK WRITE: 0.00 B/s TID PRIO USER DISK READ DISK WRITE SWAPIN IO COMMAND 22439 be/4 root 0.00 B/s 0.00 B/s 0.00 % 0.00 % grep -r -I -D skip 10.10.10.99 / ^Croot@Debian:~# What might cause this? Is there a way to view which file grep is currently processing in case lsof is not present? I'm able to use lsof under Debian and looks like the problematic file name there is "0xc6b2c230 file struct, ty=0, op=0xc0d34120". I'm not sure what this is.. I'm not able to use lsof or fstat under FreeBSD. PS: I know I could use find utility, but this is not the question.

    Read the article

  • ExpressCache not working after Windows 8 reinstall on Samsung Series 7 Gamer

    - by Morven
    I have a Samsung Series 7 Gamer laptop which came with Windows 8. After doing a reinstall of Windows, the ExpressCache software is no longer caching. Running "eccmd -info" shows me that the software is present and it has the MSATA drive partition configured. However, it's not actually caching anything. These are the results after having the system booted for days: C:\windows\system32>eccmd -info ExpressCache Command Version 1.0.94.0 Copyright © 2010-2012 Condusiv Technologies. Date Time: 11/3/2013 12:26:20:263 (JAMETHIEL #36) EC Cache Info ================================================== Mounted : Yes Partition Size : 7.46 GB Reserved Size : 3.00 MB Volume Size : 7.46 GB Total Used Size : 86.50 MB Total Free Space : 7.38 GB Used Data Size : 16.63 MB Used Data Size on Disk : 84.38 MB Tiered Cache Stats ================================================== Memory in use : 32.00 MB Blocks in use : 136 Read Percent : 0.02% Cache Stats ================================================== Cache Volume Drive Number : 1 Total Read Count : 97242 Total Read Size : 4.13 GB Total Cache Read Count : 0 Total Cache Read Size : 595.50 KB Total Write Count : 161546 Total Write Size : 5.89 GB Total Cache Write Count : 0 Total Cache Write Size : 0 Bytes Cache Read Percent : 0.01% Cache Write Percent : 0.00% As you can see on the last two lines, cache read and write percent is nigh on zero. Anyone know where to look next? The only guides I can find deal with ExpressCache not being present or not having a configured drive.

    Read the article

  • Partition is missing in /dev

    - by haimg
    I'm having a strange problem since I moved from Centos5 to Centos6. I have three disks; the first two are used as a RAID1, and the third one is a stand-alone backup disk that is not listed in /etc/fstab (it is mounted when needed and then unmounted). My problem: After a boot, /dev/sdc exists but /dev/sdc1 does not. The links in /dev/disks are also absent for the first partition of sdc. The disk itself is fine, and if I hot-remove it and plug it back in, /dev/sdc1 appears ok and everything is working. My question: What subsystem manages auto-discovery of disks, partitions, etc. during the boot process (e.g. what creates /dev/disks/by-label)? How do I configure it to scan /dev/sdc too and create all relevant files and links in /dev? Edit: Here's the relevant part of dmesg output (the only place sdc appears). It does list sdc1, but it's not in /dev! sd 1:0:0:0: [sdb] 1953525168 512-byte logical blocks: (1.00 TB/931 GiB) sd 3:0:0:0: [sdc] 976773168 512-byte logical blocks: (500 GB/465 GiB) sd 1:0:0:0: [sdb] Write Protect is off sd 1:0:0:0: [sdb] Mode Sense: 00 3a 00 00 sd 1:0:0:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA sd 3:0:0:0: [sdc] Write Protect is off sd 3:0:0:0: [sdc] Mode Sense: 00 3a 00 00 sd 3:0:0:0: [sdc] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA sdb: sdc: sd 0:0:0:0: [sda] 1953525168 512-byte logical blocks: (1.00 TB/931 GiB) sd 0:0:0:0: [sda] Write Protect is off sd 0:0:0:0: [sda] Mode Sense: 00 3a 00 00 sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA sda: DMAR:[DMA Read] Request device [00:1e.0] fault addr 361bc000 DMAR:[fault reason 06] PTE Read access is not set sdb1 sdb2 sdb3 sdc1 sda1 sd 1:0:0:0: [sdb] Attached SCSI disk sd 3:0:0:0: [sdc] Attached SCSI disk sda2 sda3 sd 0:0:0:0: [sda] Attached SCSI disk

    Read the article

  • LVM mirror attempt results in "Insufficient free space"

    - by MattK
    Attempting to add a disk to mirror an LVM volume on CentOS 7 always fails with "Insufficient free space: 1 extents needed, but only 0 available". Having searched for a solution, I have tried specifying disks, multiple logging options, adding 3rd log partition, but have not found a solution Not sure if I am making a rookie mistake, or there is something more subtle wrong (I am more familiar with ZFS, new to using LVM): # lvconvert -m1 centos_bi/home Insufficient free space: 1 extents needed, but only 0 available # lvconvert -m1 --corelog centos_bi/home Insufficient free space: 1 extents needed, but only 0 available # lvconvert -m1 --corelog --alloc anywhere centos_bi/home Insufficient free space: 1 extents needed, but only 0 available # lvconvert -m1 --mirrorlog mirrored --alloc anywhere centos_bi/home /dev/sda2 Insufficient free space: 1 extents needed, but only 0 available # lvconvert -m1 --corelog --alloc anywhere centos_bi/home /dev/sdi2 /dev/sda2 Insufficient free space: 1 extents needed, but only 0 available The two disks are of the same size, and have identical partition layouts via "sfdisk -d /dev/sdi part_table; sfdisk /dev/sda < part_table". The current configuration is detailed below. # pvs PV VG Fmt Attr PSize PFree /dev/sda1 centos_bi lvm2 a-- 496.00m 496.00m /dev/sda2 centos_bi lvm2 a-- 465.27g 465.27g /dev/sdi2 centos_bi lvm2 a-- 465.27g 0 # vgs VG #PV #LV #SN Attr VSize VFree centos_bi 3 3 0 wz--n- 931.02g 465.75g # lvs -a -o +devices LV VG Attr LSize Pool Origin Data% Move Log Cpy%Sync Convert Devices home centos_bi -wi-ao---- 391.64g /dev/sdi2(6050) root centos_bi -wi-ao---- 50.00g /dev/sdi2(106309) swap centos_bi -wi-ao---- 23.63g /dev/sdi2(0) # pvdisplay --- Physical volume --- PV Name /dev/sdi2 VG Name centos_bi PV Size 465.27 GiB / not usable 3.00 MiB Allocatable yes (but full) PE Size 4.00 MiB Total PE 119109 Free PE 0 Allocated PE 119109 --- Physical volume --- PV Name /dev/sda2 VG Name centos_bi PV Size 465.27 GiB / not usable 3.00 MiB Allocatable yes PE Size 4.00 MiB Total PE 119109 Free PE 119109 Allocated PE 0 --- Physical volume --- PV Name /dev/sda1 VG Name centos_bi PV Size 500.00 MiB / not usable 4.00 MiB Allocatable yes PE Size 4.00 MiB Total PE 124 Free PE 124 Allocated PE 0 # vgdisplay --- Volume group --- VG Name centos_bi System ID Format lvm2 Metadata Areas 3 Metadata Sequence No 10 VG Access read/write VG Status resizable MAX LV 0 Cur LV 3 Open LV 3 Max PV 0 Cur PV 3 Act PV 3 VG Size 931.02 GiB PE Size 4.00 MiB Total PE 238342 Alloc PE / Size 119109 / 465.27 GiB Free PE / Size 119233 / 465.75 GiB # lvdisplay --- Logical volume --- LV Path /dev/centos_bi/swap LV Name swap VG Name centos_bi LV Write Access read/write LV Creation host, time localhost, 2014-08-07 16:34:34 -0400 LV Status available # open 2 LV Size 23.63 GiB Current LE 6050 Segments 1 Allocation inherit Read ahead sectors auto - currently set to 256 Block device 253:1 --- Logical volume --- LV Path /dev/centos_bi/home LV Name home VG Name centos_bi LV Write Access read/write LV Creation host, time localhost, 2014-08-07 16:34:35 -0400 LV Status available # open 1 LV Size 391.64 GiB Current LE 100259 Segments 1 Allocation inherit Read ahead sectors auto - currently set to 256 Block device 253:2 --- Logical volume --- LV Path /dev/centos_bi/root LV Name root VG Name centos_bi LV Write Access read/write LV Creation host, time localhost, 2014-08-07 16:34:37 -0400 LV Status available # open 1 LV Size 50.00 GiB Current LE 12800 Segments 1 Allocation inherit Read ahead sectors auto - currently 
set to 256 Block device 253:0

    Read the article

  • Adaptec 5805 not recognized after reboot

    - by Rakedko ShotGuns
    After rebooting the system, the controller is not recognized. It only works if the computer is shut down and turned off. I have recently updated the firmware to "Adaptec RAID 5805 Firmware Build 18948". How do I fix the problem? Configuration summary --------------------------- 1. Server name.....................raid_test Adaptec Storage Manager agent...7.31.00 (18856) Adaptec Storage Manager console.7.31.00 (18856) Number of controllers...........1 Operating system................Windows Configuration information for controller 1 ------------------------------------------------------- Type............................Controller Model...........................Adaptec 5805 Controller number...............1 Physical slot...................2 Installed memory size...........512 MB Serial number...................8C4510C6C9E Boot ROM........................5.2-0 (18948) Firmware........................5.2-0 (18948) Device driver...................5.2-0 (16119) Controller status...............Optimal Battery status..................Charging Battery temperature.............Normal Battery charge amount (%).......37 Estimated charge remaining......0 days, 16 hours, 12 minutes Background consistency check....Disabled Copy back.......................Disabled Controller temperature..........Normal (40C / 104F) Default logical drive task priority High Performance mode................Dynamic Number of logical devices.......1 Number of hot-spare drives......0 Number of ready drives..........0 Number of drive(s) assigned to MaxCache cache0 Maximum drives allowed for MaxCache cache8 MaxCache Read Cache Pool Size...0 GB NCQ status......................Enabled Stay awake status...............Disabled Internal drive spinup limit.....0 External drive spinup limit.....0 Phy 0...........................No device attached Phy 1...........................No device attached Phy 2...........................No device attached Phy 3...........................1.50 Gb/s Phy 4...........................No device attached Phy 5...........................No device attached Phy 6...........................No device attached Phy 7...........................No device attached Statistics version..............2.0 SSD Cache size..................0 Pages on fetch list.............0 Fetch list candidates...........0 Candidate replacements..........0 69319...........................31293 Logical device..................0 Logical device name............. 
RAID level......................Simple volume Data space......................148,916 GB Date created....................09/19/2012 Interface type..................Serial ATA State...........................Optimal Read-cache mode.................Enabled Preferred MaxCache read cache settingEnabled Actual MaxCache read cache setting Disabled Write-cache mode................Enabled (write-back) Write-cache setting.............Enabled (write-back) Partitioned.....................Yes Protected by hot spare..........No Bootable........................Yes Bad stripes.....................No Power Status....................Disabled Power State.....................Active Reduce RPM timer................Never Power off timer.................Never Verify timer....................Never Segment 0.......................Present: controller 1, connector 0, device 0, S/N 9RX3KZMT Overall host IOs................99075 Overall MB......................4411203 DRAM cache hits.................71929 SSD cache hits..................0 Uncached IOs....................29239 Overall disk failures...........0 DRAM cache full hits............71929 DRAM cache fetch / flush wait...0 DRAM cache hybrid reads.........3476 DRAM cache flushes..............-- Read hits.......................0 Write hits......................0 Valid Pages.....................0 Updates on writes...............0 Invalidations by large writes...0 Invalidations by R/W balance....0 Invalidations by replacement....0 Invalidations by other..........0 Page Fetches....................0 0...............................0 73..............................10822 8...............................3 46138...........................4916 27184...........................15226 20875...........................323 16982...........................1771 1563............................5317 1948............................2969 Serial attached SCSI ----------------------- Type............................Disk drive Vendor..........................Unknown Model...........................ST3160815AS Serial Number...................9RX3KZMT Firmware level..................3.AAD Reported channel................0 Reported SCSI device ID.........0 Interface type..................Serial ATA Size............................149,05 GB Negotiated transfer speed.......1.50 Gb/s State...........................Optimal S.M.A.R.T. error................No Write-cache mode................Write back Hardware errors.................0 Medium errors...................0 Parity errors...................0 Link failures...................0 Aborted commands................0 S.M.A.R.T. 
warnings.............0 Solid-state disk (non-spinning).false MaxCache cache capable..........false MaxCache cache assigned.........false NCQ status......................Enabled Phy 0...........................1.50 Gb/s Power State.....................Full rpm Supported power states..........Full rpm, Powered off 0x01............................113 0x03............................98 0x04............................99 0x05............................100 0x07............................83 0x09............................75 0x0A............................100 0x0C............................99 0xBB............................100 0xBD............................100 0xBE............................61 0xC2............................39 0xC3............................69 0xC5............................100 0xC6............................100 0xC7............................200 0xC8............................100 0xCA............................100 Aborted commands................0 Link failures...................0 Medium errors...................0 Parity errors...................0 Hardware errors.................0 SMART errors....................0 End of the configuration information for controller 1

    Read the article

  • Java Process "The pipe has been ended" problem

    - by Amit Kumar
    I am using Java Process API to write a class that receives binary input from the network (say via TCP port A), processes it and writes binary output to the network (say via TCP port B). I am using Windows XP. The code looks like this. There are two functions called run() and receive(): run is called once at the start, while receive is called whenever there is a new input received via the network. Run and receive are called from different threads. The run process starts an exe and receives the input and output stream of the exe. Run also starts a new thread to write output from the exe on to the port B. public void run() { try { Process prc = // some exe is `start`ed using ProcessBuilder OutputStream procStdIn = new BufferedOutputStream(prc.getOutputStream()); InputStream procStdOut = new BufferedInputStream(prc.getInputStream()); Thread t = new Thread(new ProcStdOutputToPort(procStdOut)); t.start(); prc.waitFor(); t.join(); procStdIn.close(); procStdOut.close(); } catch (Exception e) { e.printStackTrace(); printError("Error : " + e.getMessage()); } } The receive forwards the received input from the port A to the exe. public void receive(byte[] b) throws Exception { procStdIn.write(b); } class ProcStdOutputToPort implements Runnable { private BufferedInputStream bis; public ProcStdOutputToPort(BufferedInputStream bis) { this.bis = bis; } public void run() { try { int bytesRead; int bufLen = 1024; byte[] buffer = new byte[bufLen]; while ((bytesRead = bis.read(buffer)) != -1) { // write output to the network } } catch (IOException ex) { Logger.getLogger().log(Level.SEVERE, null, ex); } } } The problem is that I am getting the following stack inside receive() and the prc.waitfor() returns immediately afterwards. The line number shows that the stack is while writing to the exe. The pipe has been ended java.io.IOException: The pipe has been ended at java.io.FileOutputStream.writeBytes(Native Method) at java.io.FileOutputStream.write(FileOutputStream.java:260) at java.io.BufferedOutputStream.write(BufferedOutputStream.java:105) at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:65) at java.io.BufferedOutputStream.write(BufferedOutputStream.java:109) at java.io.FilterOutputStream.write(FilterOutputStream.java:80) at xxx.receive(xxx.java:86) Any advice about this will be appreciated.
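    A likely culprit: "The pipe has been ended" is thrown when the exe on the other side of the pipe has already exited or closed its stdin, which often happens because nothing drains the process's stderr. The sketch below is not the asker's actual code (class name and plumbing are hypothetical); it shows the usual pattern of merging stderr into stdout, draining the output on a dedicated thread, flushing after every write, and reporting the child's exit code when a write fails:

        import java.io.BufferedOutputStream;
        import java.io.IOException;
        import java.io.InputStream;
        import java.io.OutputStream;

        public class PipedExeRunner {
            private Process prc;
            private OutputStream procStdIn;

            // Start the exe; merge stderr into stdout so the child can never block
            // (or die) on an undrained error stream, and keep reading its output
            // on a dedicated thread.
            public void run(String... command) throws IOException {
                ProcessBuilder pb = new ProcessBuilder(command);
                pb.redirectErrorStream(true);
                prc = pb.start();
                procStdIn = new BufferedOutputStream(prc.getOutputStream());

                Thread drainer = new Thread(() -> {
                    byte[] buffer = new byte[1024];
                    int bytesRead;
                    try (InputStream procStdOut = prc.getInputStream()) {
                        while ((bytesRead = procStdOut.read(buffer)) != -1) {
                            // forward buffer[0..bytesRead) to TCP port B here
                        }
                    } catch (IOException ex) {
                        ex.printStackTrace();
                    }
                });
                drainer.start();
            }

            // Called whenever bytes arrive from TCP port A.
            public synchronized void receive(byte[] b) throws IOException {
                try {
                    procStdIn.write(b);
                    procStdIn.flush(); // don't let input sit in the buffer
                } catch (IOException ex) {
                    // "The pipe has been ended" usually means the exe already exited;
                    // its exit code is the first thing worth checking.
                    try {
                        System.err.println("exe exited with code " + prc.exitValue());
                    } catch (IllegalThreadStateException stillRunning) {
                        // the exe is still alive; the failure has another cause
                    }
                    throw ex;
                }
            }
        }

    If the exe turns out to quit as soon as it receives malformed input, the flush-per-message pattern above also makes it easier to see which message triggered the exit.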

    Read the article

  • Why does the default constructor not appear for value types?

    - by Arun
    The below snippet gives me a list of constructors and methods of a type. static void ReflectOnType(Type type) { Console.WriteLine(type.FullName); Console.WriteLine("------------"); List<ConstructorInfo> constructors = type.GetConstructors(BindingFlags.Public | BindingFlags.Static | BindingFlags.NonPublic |BindingFlags.Instance | BindingFlags.Default).ToList(); List<MethodInfo> methods = type.GetMethods().ToList(); Type baseType = type.BaseType; while (baseType != null) { constructors.AddRange(baseType.GetConstructors(BindingFlags.Public | BindingFlags.Static | BindingFlags.NonPublic | BindingFlags.Instance | BindingFlags.Default)); methods.AddRange(baseType.GetMethods()); baseType = baseType.BaseType; } Console.WriteLine("Reflection on {0} type", type.Name); for (int i = 0; i < constructors.Count; i++) { Console.Write("Constructor: {0}.{1}", constructors[i].DeclaringType.Name, constructors[i].Name); Console.Write("("); ParameterInfo[] parameterInfos = constructors[i].GetParameters(); if (parameterInfos.Length > 0) { for (int j = 0; j < parameterInfos.Length; j++) { if (j > 0) { Console.Write(", "); } Console.Write("{0} {1}", parameterInfos[j].ParameterType, parameterInfos[j].Name); } } Console.Write(")"); if (constructors[i].IsSpecialName) { Console.Write(" has 'SpecialName' attribute"); } Console.WriteLine(); } Console.WriteLine(); for (int i = 0; i < methods.Count; i++) { Console.Write("Method: {0}.{1}", methods[i].DeclaringType.Name, methods[i].Name); // Determine whether or not each field is a special name. if (methods[i].IsSpecialName) { Console.Write(" has 'SpecialName' attribute"); } Console.WriteLine(); } } But when I pass an ‘int’ type to this method, why don’t I see the implicit constructor in the output? Or, how do I modify the above code to list the default constructor as well (in case I’m missing something in my code).

    Read the article

  • Java - Error Message Help

    - by Brian
    In the Code, mem is a of Class Memory and getMDR and getMAR ruturn ints. When I try to compile the code I get the following errors.....how can I fix this? Computer.java:25: write(int,int) in Memory cannot be applied to (int) Input.getInt(mem.write(cpu.getMDR())); ^ Computer.java:28: write(int,int) in Memory cannot be applied to (int) mem.write(cpu.getMAR()); Here is the code for Computer: class Computer{ private Cpu cpu; private Input in; private OutPut out; private Memory mem; public Computer() { Memory mem = new Memory(100); Input in = new Input(); OutPut out = new OutPut(); Cpu cpu = new Cpu(); System.out.println(in.getInt()); } public void run() { cpu.reset(); cpu.setMDR(mem.read(cpu.getMAR())); cpu.fetch2(); while (!cpu.stop()) { cpu.decode(); if (cpu.OutFlag()) OutPut.display(mem.read(cpu.getMAR())); if (cpu.InFlag()) Input.getInt(mem.write(cpu.getMDR())); if (cpu.StoreFlag()) { mem.write(cpu.getMAR()); cpu.getMDR(); } else { cpu.setMDR(mem.read(cpu.getMAR())); cpu.execute(); cpu.fetch(); cpu.setMDR(mem.read(cpu.getMAR())); cpu.fetch2(); } } } Here is the code for Memory: class Memory{ private MemEl[] memArray; private int size; public Memory(int s) {size = s; memArray = new MemEl[s]; for(int i = 0; i < s; i++) memArray[i] = new MemEl(); } public void write (int loc, int val) {if (loc >=0 && loc < size) memArray[loc].write(val); else System.out.println("Index Not in Domain"); } public int read (int loc) {return memArray[loc].read(); } public void dump() { for(int i = 0; i < size; i++) if(i%1 == 0) System.out.println(memArray[i].read()); else System.out.print(memArray[i].read()); } } Here is the code for getMAR and getMDR: public int getMAR() { return ir.getOpcode(); } public int getMDR() { return mdr.read(); }
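    Both compiler errors come from calling Memory.write with a single argument: write(int loc, int val) needs a location and a value, and it returns void, so its result also cannot be passed to Input.getInt. A minimal sketch of what those two call sites in run() presumably intend, assuming Input.getInt() with no arguments returns the int that was read (as in the constructor's println call):

        // inside Computer.run()
        if (cpu.InFlag()) {
            // read a value from input and store it at the address held in MAR
            mem.write(cpu.getMAR(), in.getInt());
        }
        if (cpu.StoreFlag()) {
            // store the value held in MDR at the address held in MAR
            mem.write(cpu.getMAR(), cpu.getMDR());
        }

    Note that the constructor as posted also declares new local variables (Memory mem = new Memory(100); and so on) that shadow the fields, so cpu, mem, in and out would still be null when run() executes; dropping the type names there (mem = new Memory(100);) assigns the fields instead.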

    Read the article

  • How to stream XML data using XOM?

    - by Jonik
    Say I want to output a huge set of search results, as XML, into a PrintWriter or an OutputStream, using XOM. The resulting XML would look like this: <?xml version="1.0" encoding="UTF-8"?> <resultset> <result> [child elements and data] </result> ... ... [1000s of result elements more] </resultset> Because the resulting XML document could be big (tens or hundreds of megabytes, perhaps), I want to output it in a streaming fashion (instead of creating the whole Document in memory and then writing that). The granularity of outputting one <result> at a time is fine, so I want to generate one <result> after another, and write it into the stream. Assume there's already a method that helps with iterating the results and generating Element objects: public nu.xom.Element getNextResult(); So I'd simply like to do something like this pseudocode (automatic flushing enabled, so don't worry about that) : open stream/writer write declaration write start tag for <resultset> while more results: write next <result> element write end tag for <resultset> close stream/writer I've been looking at Serializer, but the necessary methods, writeStartTag(Element), writeEndTag(Element), write(DocType) are protected, not public! Is there no other way than to subclass Serializer to be able to use those methods, or to manually write the start and end tags directly into the stream as Strings, bypassing XOM altogether? (The latter wouldn't be too bad in this simple example, but in the general case it would get quite ugly.) Am I missing something or is XOM just not made for this? With dom4j I could do this easily using XMLWriter - it has constructors that take a Writer or OutputStream, and methods writeOpen(Element), writeClose(Element), writeDocType(DocumentType) etc. Compare to XOM's Serializer where the only public write method is the one that takes a whole Document. Please refrain from answering if you're not familiar with XOM! I specifically want to know if and how you can do this kind of streaming with that library. (This is related to my question about the best dom4j replacement where XOM is a strong contender.)
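    Since writeXMLDeclaration(), writeStartTag(Element) and writeEndTag(Element) are protected rather than public, the usual workaround is a thin Serializer subclass that simply re-exposes them. A sketch along those lines (method names as given in the question; the wrapper class name is made up):

        import java.io.IOException;
        import java.io.OutputStream;
        import nu.xom.Element;
        import nu.xom.Serializer;

        // Re-expose Serializer's protected streaming primitives so <result>
        // elements can be written one at a time instead of building a Document.
        class StreamingSerializer extends Serializer {

            public StreamingSerializer(OutputStream out) {
                super(out);
            }

            public void writeDeclaration() throws IOException {
                writeXMLDeclaration();
            }

            public void writeOpen(Element element) throws IOException {
                writeStartTag(element);
            }

            public void writeClose(Element element) throws IOException {
                writeEndTag(element);
            }

            public void writeChild(Element element) throws IOException {
                write(element); // write(Element) is protected as well
            }
        }

    With that in place the pseudocode maps almost one-to-one (outputStream and hasMoreResults() are assumed to exist alongside getNextResult()):

        StreamingSerializer serializer = new StreamingSerializer(outputStream);
        serializer.writeDeclaration();
        Element resultset = new Element("resultset");
        serializer.writeOpen(resultset);
        while (hasMoreResults()) {
            serializer.writeChild(getNextResult());
        }
        serializer.writeClose(resultset);
        serializer.flush();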

    Read the article

  • What are the limitations of assembler? (NASM)

    - by citronas
    Is there a technical limitation on what kind of programs I can write with assembler (NASM)? So far I've only seen some programs that do arithmetic operations, like adding two numbers. Is it possible to write complex assembler programs that provide a GUI, access the file system, play sounds, et cetera? I know I wouldn't write such programs, but I'm curious whether there are technical limitations on what kind of programs I can write with assembler.

    Read the article

  • Problem writing an Excel file with the Ruby spreadsheet gem

    - by winter sun
    I am trying to write an Excel file using Ruby 1.9 and spreadsheet version 0.6.4.1 on Windows. Everything goes OK until I get to the book.write statement. When I write book.write 'c:/spreadsheet/excel-file.xls' I keep getting the following error: No such file or directory - c:/spreadsheet/excel-file.xls Can anyone tell me what I should change in this path name?

    Read the article

  • Having difficulties adding a new XML file to my project

    - by yoavstr
    When I create a new xxx.xml file, my new variable doesn't get written to R.id. For example, when I write this in the XML: <Button android:text="New Game" android:id="@+id/newGameButton" android:layout_width="wrap_content" android:layout_height="wrap_content"></Button> then R should contain: public static final int newGameButton=0x7f050013; Note: this is an XML file I added to the project as a new file. Do I have to do something so that the Eclipse tool will automatically write it to the R file?

    Read the article

  • Should it be in a namespace?

    - by Knowing me knowing you
    Do I have to put the code from the .cpp file in the namespace from the corresponding .h, or is it enough to just write a using declaration? //file .h namespace a { /*interface*/ class my { }; } //file .cpp using a::my; // Can I just write this declaration in this file and // after that start to write the implementation, or // should I write: namespace a //everything in a namespace now { //Implementation goes here } Thanks.

    Read the article

  • this parameter modifier in C#?

    - by Ivan
    I'm curious about this code snippet: public static class XNAExtensions { /// <summary> /// Write a Point /// </summary> public static void Write(this NetOutgoingMessage message, Point value) { message.Write(value.X); message.Write(value.Y); } // ... }; What does the this keyword mean next to the parameter type? I can't seem to find any information about it anywhere, even in the C# specification.

    Read the article

  • Xen kernel can't see 2 of its 6 1TB disks; does it have a limitation?

    - by PartySoft
    Linux gentoo-xen 2.6.18-xen-r12 #3 SMP Tue Oct 5 09:28:53 PDT 2010 x86_64 Intel(R) Xeon(R) CPU E5506 @ 2.13GHz GenuineIntel GNU/Linux I have 6 disks of 1 TB and i can't see all of them only 4, can anyone give me an ideea what can i do ? Filesystem Size Used Avail Use% Mounted on rootfs 886G 4.4G 836G 1% / /dev/sda3 886G 4.4G 836G 1% / rc-svcdir 1.0M 44K 980K 5% /lib64/rc/init.d shm 7.9G 0 7.9G 0% /dev/shm /dev/sdb1 917G 200M 871G 1% /home2 /dev/sdc1 917G 200M 871G 1% /home3 /dev/sdd1 917G 200M 871G 1% /home4 The hardware is Dual xeon E5506 processors on a supermicro X8DTL mobo 4.346585] ata3.00: ATA-8, max UDMA/133, 1953525168 sectors: LBA48 NCQ (depth 0/32) [ 4.346588] ata3.00: ata3: dev 0 multi count 16 [ 4.352861] ata3.00: configured for UDMA/133 [ 4.352867] scsi3 : ata_piix [ 4.352875] PM: Adding info for No Bus:host3 [ 4.510584] ata4.00: ATA-8, max UDMA/133, 1953525168 sectors: LBA48 NCQ (depth 0/32) [ 4.510587] ata4.00: ata4: dev 0 multi count 16 [ 4.516848] ata4.00: configured for UDMA/133 [ 4.516861] PM: Adding info for No Bus:target2:0:0 [ 4.516905] Vendor: ATA Model: SAMSUNG HD103SJ Rev: 1AJ1 [ 4.516910] Type: Direct-Access ANSI SCSI revision: 05 [ 4.516920] PM: Adding info for scsi:2:0:0:0 [ 4.517452] SCSI device sde: 1953525168 512-byte hdwr sectors (1000205 MB) [ 4.517460] sde: Write Protect is off [ 4.517461] sde: Mode Sense: 00 3a 00 00 [ 4.517478] SCSI device sde: drive cache: write back [ 4.517514] SCSI device sde: 1953525168 512-byte hdwr sectors (1000205 MB) [ 4.517521] sde: Write Protect is off [ 4.517522] sde: Mode Sense: 00 3a 00 00 [ 4.517532] SCSI device sde: drive cache: write back [ 4.517534] sde: sde1 [ 4.524551] sd 2:0:0:0: Attached scsi disk sde [ 4.524855] sd 2:0:0:0: Attached scsi generic sg4 type 0 [ 4.524874] PM: Adding info for No Bus:target3:0:0 [ 4.524928] Vendor: ATA Model: SAMSUNG HD103SJ Rev: 1AJ1 [ 4.524933] Type: Direct-Access ANSI SCSI revision: 05 [ 4.524946] PM: Adding info for scsi:3:0:0:0 [ 4.525216] SCSI device sdf: 1953525168 512-byte hdwr sectors (1000205 MB) [ 4.525227] sdf: Write Protect is off [ 4.525228] sdf: Mode Sense: 00 3a 00 00 [ 4.525242] SCSI device sdf: drive cache: write back [ 4.525280] SCSI device sdf: 1953525168 512-byte hdwr sectors (1000205 MB) [ 4.525286] sdf: Write Protect is off [ 4.525289] sdf: Mode Sense: 00 3a 00 00 [ 4.525301] SCSI device sdf: drive cache: write back [ 4.525302] sdf: sdf1 [ 4.532691] sd 3:0:0:0: Attached scsi disk sdf [ 4.533010] sd 3:0:0:0: Attached scsi generic sg5 type 0 [ 4.977669] scsi: <fdomain> Detection failed (no card) [ 5.030479] GDT-HA: Storage RAID Controller Driver. 
Version: 3.05 [ 5.030635] GDT-HA: Found 0 PCI Storage RAID Controllers [ 5.372350] Fusion MPT base driver 3.04.01 [ 5.372358] Copyright (c) 1999-2005 LSI Logic Corporation [ 5.579176] Fusion MPT SPI Host driver 3.04.01 [ 5.881777] ieee1394: Initialized config rom entry `ip1394' [ 6.166745] ieee1394: sbp2: Driver forced to serialize I/O (serialize_io=1) [ 6.166748] ieee1394: sbp2: Try serialize_io=0 for better performance [ 6.428866] md: md driver 0.90.3 MAX_MD_DEVS=256, MD_SB_DISKS=27 [ 6.428872] md: bitmap version 4.39 [ 6.431518] md: raid0 personality registered for level 0 [ 6.495979] md: raid1 personality registered for level 1 [ 6.570270] raid5: automatically using best checksumming function: generic_sse [ 6.575523] generic_sse: 6608.000 MB/sec [ 6.575526] raid5: using function: generic_sse (6608.000 MB/sec) [ 6.596226] raid6: int64x1 1835 MB/s [ 6.613231] raid6: int64x2 1773 MB/s [ 6.630256] raid6: int64x4 1675 MB/s [ 6.647296] raid6: int64x8 1027 MB/s [ 6.664267] raid6: sse2x1 3578 MB/s [ 6.681268] raid6: sse2x2 4207 MB/s [ 6.698280] raid6: sse2x4 4625 MB/s [ 6.698281] raid6: using algorithm sse2x4 (4625 MB/s) [ 6.698285] md: raid6 personality registered for level 6 [ 6.698286] md: raid5 personality registered for level 5 [ 6.698288] md: raid4 personality registered for level 4 [ 6.781090] md: raid10 personality registered for level 10 [ 7.007043] Intel(R) PRO/1000 Network Driver - version 7.1.9-k4 [ 7.007046] Copyright (c) 1999-2006 Intel Corporation. [ 9.229465] kjournald starting. Commit interval 5 seconds [ 9.229476] EXT3-fs: mounted filesystem with ordered data mode.

    Read the article

  • [iPhone] Objective-C: how to write objects which do not conform to NSCoding to a file

    - by Noah
    Hi, I am using an Objective-C class, a subclass of NSObject. This class cannot be modified. I have an instance of this class that I wish to write to a file so that it can be retrieved and reinstated later. The object does not conform to NSCoding. To sum up, I need to save an instance of a class to a file, to be retrieved later, without using any of the NSCoding methods such as NSKeyedArchiver's encodeWithCoder: ... Using them returns this... NSInvalidArgumentException ...encodeWithCoder:] unrecognised selector sent to instance... Is there any other way I can store this object for later use? Thank you

    Read the article

  • Write a GreaseMonkey script that reacts to domain strings (for I18N, e.g. cn,en,fr,etc.)

    - by Shizhidi
    Hello. Suppose there is a website that supports multiple languages: cn.mydomain.com or mydomain.com/cn or mydomain.cn en.mydomain.com or mydomain.com/en or mydomain.com fr.mydomain.com or mydomain.com/fr or mydomain.fr I want to write a GreaseMonkey script that has variables assigned different strings/values according to the address the user is loading the page from. How do you do that? Thanks EDIT: I realize I can just use JavaScript to get the address. Does GreaseMonkey itself support this kind of function?

    Read the article

  • Commit in SQL

    - by PRajkumar
    SQL Transaction Control Language Commands (TCL)                                           (COMMIT) Commit Transaction As a SQL language we use transaction control language very frequently. Committing a transaction means making permanent the changes performed by the SQL statements within the transaction. A transaction is a sequence of SQL statements that Oracle Database treats as a single unit. This statement also erases all save points in the transaction and releases transaction locks. Oracle Database issues an implicit COMMIT before and after any data definition language (DDL) statement. Oracle recommends that you explicitly end every transaction in your application programs with a COMMIT or ROLLBACK statement, including the last transaction, before disconnecting from Oracle Database. If you do not explicitly commit the transaction and the program terminates abnormally, then the last uncommitted transaction is automatically rolled back.   Until you commit a transaction: ·         You can see any changes you have made during the transaction by querying the modified tables, but other users cannot see the changes. After you commit the transaction, the changes are visible to other users' statements that execute after the commit ·         You can roll back (undo) any changes made during the transaction with the ROLLBACK statement   Note: Most of the people think that when we type commit data or changes of what you have made has been written to data files, but this is wrong when you type commit it means that you are saying that your job has been completed and respective verification will be done by oracle engine that means it checks whether your transaction achieved consistency when it finds ok it sends a commit message to the user from log buffer but not from data buffer, so after writing data in log buffer it insists data buffer to write data in to data files, this is how it works.   Before a transaction that modifies data is committed, the following has occurred: ·         Oracle has generated undo information. The undo information contains the old data values changed by the SQL statements of the transaction ·         Oracle has generated redo log entries in the redo log buffer of the System Global Area (SGA). The redo log record contains the change to the data block and the change to the rollback block. These changes may go to disk before a transaction is committed ·         The changes have been made to the database buffers of the SGA. These changes may go to disk before a transaction is committed   Note:   The data changes for a committed transaction, stored in the database buffers of the SGA, are not necessarily written immediately to the data files by the database writer (DBWn) background process. This writing takes place when it is most efficient for the database to do so. It can happen before the transaction commits or, alternatively, it can happen some times after the transaction commits.   When a transaction is committed, the following occurs: 1.      The internal transaction table for the associated undo table space records that the transaction has committed, and the corresponding unique system change number (SCN) of the transaction is assigned and recorded in the table 2.      The log writer process (LGWR) writes redo log entries in the SGA's redo log buffers to the redo log file. It also writes the transaction's SCN to the redo log file. This atomic event constitutes the commit of the transaction 3.      Oracle releases locks held on rows and tables 4.      
Oracle marks the transaction complete   Note:   The default behavior is for LGWR to write redo to the online redo log files synchronously and for transactions to wait for the redo to go to disk before returning a commit to the user. However, for lower transaction commit latency application developers can specify that redo be written asynchronously and that transaction do not need to wait for the redo to be on disk.   The syntax of Commit Statement is   COMMIT [WORK] [COMMENT ‘your comment’]; ·         WORK is optional. The WORK keyword is supported for compliance with standard SQL. The statements COMMIT and COMMIT WORK are equivalent. Examples Committing an Insert INSERT INTO table_name VALUES (val1, val2); COMMIT WORK; ·         COMMENT Comment is also optional. This clause is supported for backward compatibility. Oracle recommends that you used named transactions instead of commit comments. Specify a comment to be associated with the current transaction. The 'text' is a quoted literal of up to 255 bytes that Oracle Database stores in the data dictionary view DBA_2PC_PENDING along with the transaction ID if a distributed transaction becomes in doubt. This comment can help you diagnose the failure of a distributed transaction. Examples The following statement commits the current transaction and associates a comment with it: COMMIT     COMMENT 'In-doubt transaction Code 36, Call (415) 555-2637'; ·         WRITE Clause Use this clause to specify the priority with which the redo information generated by the commit operation is written to the redo log. This clause can improve performance by reducing latency, thus eliminating the wait for an I/O to the redo log. Use this clause to improve response time in environments with stringent response time requirements where the following conditions apply: The volume of update transactions is large, requiring that the redo log be written to disk frequently. The application can tolerate the loss of an asynchronously committed transaction. The latency contributed by waiting for the redo log write to occur contributes significantly to overall response time. You can specify the WAIT | NOWAIT and IMMEDIATE | BATCH clauses in any order. Examples To commit the same insert operation and instruct the database to buffer the change to the redo log, without initiating disk I/O, use the following COMMIT statement: COMMIT WRITE BATCH; Note: If you omit this clause, then the behavior of the commit operation is controlled by the COMMIT_WRITE initialization parameter, if it has been set. The default value of the parameter is the same as the default for this clause. Therefore, if the parameter has not been set and you omit this clause, then commit records are written to disk before control is returned to the user. WAIT | NOWAIT Use these clauses to specify when control returns to the user. The WAIT parameter ensures that the commit will return only after the corresponding redo is persistent in the online redo log. Whether in BATCH or IMMEDIATE mode, when the client receives a successful return from this COMMIT statement, the transaction has been committed to durable media. A crash occurring after a successful write to the log can prevent the success message from returning to the client. In this case the client cannot tell whether or not the transaction committed. The NOWAIT parameter causes the commit to return to the client whether or not the write to the redo log has completed. This behavior can increase transaction throughput. 
With the WAIT parameter, if the commit message is received, then you can be sure that no data has been lost. Caution: With NOWAIT, a crash occurring after the commit message is received, but before the redo log record(s) are written, can falsely indicate to a transaction that its changes are persistent. If you omit this clause, then the transaction commits with the WAIT behavior. IMMEDIATE | BATCH Use these clauses to specify when the redo is written to the log. The IMMEDIATE parameter causes the log writer process (LGWR) to write the transaction's redo information to the log. This operation option forces a disk I/O, so it can reduce transaction throughput. The BATCH parameter causes the redo to be buffered to the redo log, along with other concurrently executing transactions. When sufficient redo information is collected, a disk write of the redo log is initiated. This behavior is called "group commit", as redo for multiple transactions is written to the log in a single I/O operation. If you omit this clause, then the transaction commits with the IMMEDIATE behavior. ·         FORCE Clause Use this clause to manually commit an in-doubt distributed transaction or a corrupt transaction. ·         In a distributed database system, the FORCE string [, integer] clause lets you manually commit an in-doubt distributed transaction. The transaction is identified by the 'string' containing its local or global transaction ID. To find the IDs of such transactions, query the data dictionary view DBA_2PC_PENDING. You can use integer to specifically assign the transaction a system change number (SCN). If you omit integer, then the transaction is committed using the current SCN. ·         The FORCE CORRUPT_XID 'string' clause lets you manually commit a single corrupt transaction, where string is the ID of the corrupt transaction. Query the V$CORRUPT_XID_LIST data dictionary view to find the transaction IDs of corrupt transactions. You must have DBA privileges to view the V$CORRUPT_XID_LIST and to specify this clause. ·         Specify FORCE CORRUPT_XID_ALL to manually commit all corrupt transactions. You must have DBA privileges to specify this clause. Examples Forcing an in doubt transaction. Example The following statement manually commits a hypothetical in-doubt distributed transaction. Query the V$CORRUPT_XID_LIST data dictionary view to find the transaction IDs of corrupt transactions. You must have DBA privileges to view the V$CORRUPT_XID_LIST and to issue this statement. COMMIT FORCE '22.57.53';
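    The same discipline applies from application code: end every transaction explicitly rather than relying on driver defaults. A minimal JDBC sketch (host, service name and credentials are hypothetical) that mirrors the INSERT/COMMIT example above:

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.PreparedStatement;
        import java.sql.SQLException;

        public class CommitExample {
            public static void main(String[] args) throws SQLException {
                // Hypothetical Oracle JDBC URL and credentials.
                try (Connection conn = DriverManager.getConnection(
                        "jdbc:oracle:thin:@//dbhost:1521/ORCL", "scott", "tiger")) {

                    conn.setAutoCommit(false); // take control of transaction boundaries

                    try (PreparedStatement ps = conn.prepareStatement(
                            "INSERT INTO table_name VALUES (?, ?)")) {
                        ps.setInt(1, 1);
                        ps.setString(2, "val2");
                        ps.executeUpdate();
                        conn.commit();   // equivalent to issuing COMMIT WORK
                    } catch (SQLException e) {
                        conn.rollback(); // undo the uncommitted changes
                        throw e;
                    }
                }
            }
        }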

    Read the article

  • So… What is a SharePoint Developer?

    - by Mark Rackley
    A few days ago Stacy Draper and I were chatting about what it means to be a SharePoint Developer. That actually turns about to be a conversation with lots of shades of grey. Stacy thought it would make a good blog post… well, I can’t promise this to be a GOOD blog post… So, anyway, I decided to let off a little bomb this morning by posting the following tweet on Twitter: @mrackley: Can someone be considered a SharePoint Developer if all they know how to do is work in SPD? Now, I knew this is a debate that has been going on since the first SharePoint Designer User put SharePoint Developer on their resume. There are probably several blogs out there on the subject, but with the wildfire that is jQuery and a few other new features out there I believe it is an important subject to tackle again. I got a lot of great feedback as well on Twitter. The entire twitter conversation is at the end of this blog posting. Thanks everyone for their opinions. Who cares? Why does it matter? Can’t we all just get along? Yes it matters… everything must be labeled and put in it’s proper place. Pigeon holing is the only way to go!  Just kidding.. I’m not near that anal, but yes! It is important to be able to properly identify the skill set of those people on your team and correctly identify the role you are wanting to hire. Saying you are a “SharePoint Developer” is just too vague and just barely begins to answer the question. Also, knowing who’s on your team and what they can do will ensure you give your clients the best people for the job. A Developer writes code right? So, a Developer uses Visual Studio! Whoa, hold on there Sparky. Even if I concede that to be a developer you have to write code then you still can’t say a SharePoint Developer has to use Visual Studio.  So, you can spell C#, how well can you write XSLT? How’s your jQuery? Sorry bud, that’s code whether you like it or not. There are many ways to write code in SharePoint that have nothing to do with cracking open Visual Studio. So, what are the different ways to develop in SharePoint then? How many different ways can you “develop” in SharePoint?? A lot… Out of the box features In SharePoint you can create a site, create a custom list on that site, do basic calculations in a calculated column, set up alerts, and add all sorts of web parts to a page. Let’s face it.. that IS development! javaScript/jQuery Perhaps you’ve heard by now about this thing called jQuery? It’s all over the place and the answer to a lot of people’s prayers. However be careful, with great power comes great responsibility. Remember, javaScript is executed on the client side and if you abuse it your performance could be affected. Also, Marc Anderson (@sympmarc) wrote a pretty awesome javaScript library called SPServices.  This allows you to access SharePoint’s Web Services using jQuery. How freakin cool is that? With these tools at your disposal the number of things you CAN’T do without Visual Studio grows smaller and smaller. This is definitely development no matter what anyone else says and there is no Visual Studio involved. SharePoint Designer Ahhh.. The cause of and the answer to all of your SharePoint development problems. With SharePoint Designer you can use DataView Web Parts, develop (there’s that word again) your branding, and even connect to external datasources.  There’s a lot you can do in SharePoint Designer. It’s got it’s shortcomings, but it is an invaluable tool in the SharePoint developers toolbox. 
    InfoPath
    So, can InfoPath development really be considered SharePoint development? I would say yes. You can connect to SharePoint lists, populate fields in a SharePoint list, and even write code in InfoPath. Sounds like SharePoint development to me.

    Visual Studio – Web Services/WCF
    So, get this. You can write code for SharePoint and not have a clue what the 12 hive is, what “site actions” means, or how to do ANYTHING in SharePoint. Poppycock, you say? SharePoint Web Services, I say… With SharePoint Web Services you can totally interact with SharePoint without knowing anything about SharePoint. I don’t recommend it, of course, but it’s possible. What can you write using SharePoint Web Services? How about a little application called SharePoint Designer? (There’s a quick sketch of a raw web service call just before the Twitter conversation below.)

    Visual Studio – Object Model
    And here we are finally: the SharePoint Object Model. When you hear “SharePoint Developer,” most people think of someone opening Visual Studio and creating a custom web part, workflow, event receiver, etc., etc. But I hope that by now I have made the point that this is NOT the only form of SharePoint development!

    Again… who cares? Just crack open Visual Studio for everything! Problem solved!
    Let’s ponder for a moment, shall we? The business comes to you with a requirement that involves some pretty fancy business calculations and a complicated view that they do NOT want to look like SharePoint. “No problem,” you proclaim, you mighty SharePoint Developer. You go back to your cube, chuckle at the latest Dilbert comic, and crack open Visual Studio. Then you build your custom web part… fight through all the deployment, migration, and UAT you must go through… and proclaim victory two weeks later!!! Well done, my good sir/ma’am! Oh wait… it turns out Sally, who is not a “developer,” did the exact same thing with a DataView web part and some jQuery, and it’s been in production for two weeks? #CockinessFail

    I know there are many ASP.NET developers out there who can create a custom control and wrap it to be a SharePoint web part. That does NOT mean they are SharePoint Developers as far as I’m concerned, and I personally would much rather have someone on my team who can manipulate the heck (yes, I said “heck”) out of SharePoint using DataView Web Parts, jQuery, and a roll of duct tape. Just because you know how to write code in Visual Studio does not mean you are a SharePoint Developer.

    What’s the conclusion here? How do we define ‘it’, and what is ‘it’ called?
    Fortunately, this is MY blog. I don’t have to give answers; I can stir the pot, laugh, and leave you to ponder what it means! There is obviously no right or wrong answer here (unless you disagree with me, then you are flat out wrong). Anyway, there are many opinions. Here’s mine. If you put SharePoint Developer on your resume, make sure to clearly specify HOW you develop in SharePoint and what tools you use. If we must label these gurus of jQuery and SPD, how about “SharePoint Client Developer” or “SharePoint Front End Developer”? Just throwing out an idea. Whatever we call them, to say they are not developers is short-sighted, arrogant, and unfair. Of course, then we need to figure out what to call all those other SharePoint development types.
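    One more hedged illustration of the web services point above before the Twitter back-and-forth: SharePoint’s Lists.asmx service can be driven with nothing more than an HTTP POST carrying a SOAP envelope. The sketch below uses jQuery’s $.ajax; the server URL and list name are placeholders, and in practice a wrapper like SPServices spares you from hand-building the envelope and from the cross-browser XML parsing.

    // Sketch only: call the GetListItems operation of Lists.asmx directly.
    // "http://yourserver/sites/yoursite" and "Announcements" are placeholders.
    var soapEnv =
      "<soap:Envelope xmlns:soap='http://schemas.xmlsoap.org/soap/envelope/'>" +
        "<soap:Body>" +
          "<GetListItems xmlns='http://schemas.microsoft.com/sharepoint/soap/'>" +
            "<listName>Announcements</listName>" +
            "<rowLimit>5</rowLimit>" +
          "</GetListItems>" +
        "</soap:Body>" +
      "</soap:Envelope>";

    $.ajax({
      url: "http://yourserver/sites/yoursite/_vti_bin/Lists.asmx",
      type: "POST",
      dataType: "xml",
      data: soapEnv,
      contentType: "text/xml; charset=\"utf-8\"",
      beforeSend: function (xhr) {
        xhr.setRequestHeader("SOAPAction",
          "http://schemas.microsoft.com/sharepoint/soap/GetListItems");
      },
      success: function (xml) {
        // The items come back as z:row elements; parsing them consistently across
        // browsers is exactly the fiddly part libraries like SPServices handle for you.
        console.log(xml);
      }
    });

    Notice the code never mentions the 12 hive, features, or the object model, which is the point: you can write against SharePoint’s web services while knowing very little about SharePoint itself.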
    Twitter Conversation
    @next_connect: RT @mrackley: Can someone be considered a SharePoint Developer if all they know how to do is work in SPD? | I say no....
    @mikegil: @mrackley re: yr Developer question: SPD expert <> SP Developer. Can be "sous-developer," though. #SharePoint #SPD
    @WonderLaura: Rt @mrackley Can someone be considered a SharePoint Dev if all they know how to do is work in SPD? -- My opinion is that devs write code.
    @exnav29: Rt @mrackley Can someone be considered a SharePoint Dev if all they know how to do is work in SPD? => I think devs would use VS as well
    @ssKevin: @WonderLaura @mrackley does that mean strictly vb and c# when it comes to #SharePoint ?
    @jimmywim: @exnav29 @mrackley nah, I'd say they were a power user. Devs know their way around the 12 hive ;)
    @sympmarc: RT @mrackley: Can someone be considered a SharePoint Developer if all they know how to do is work in SPD? -> Fighting words.
    @sympmarc: @next_connect @mrackley Besides, we prefer to be called "hacks". ;+)
    @next_connect: @sympmarc The important thing is that you don't have to develop code to solve problems and create solutions. @mrackley
    @mrackley: @sympmarc @next_connect not tryin to pick fight.. just try and find consensus on definition
    @usher: @mrackley I'd still argue that you have a DevLite title that's out there for the collaboration engineers (@sympmarc @next_connect)
    @next_connect: @usher I agree. I've called it Light Dev/ Configuration before. @sympmarc @mrackley
    @usher: @next_connect I like DevLite, low calorie but still same great taste :) @mrackley @sympmarc
    @mrackley: @next_connect @usher @sympmarc I don't think there's any "lite" to someone who can bend jQuery and XSLT to their will.
    @usher: @mrackley okay, so would you refer to someone that writes user controls and assemblies something different (@next_connect @sympmarc)
    @usher: @mrackley when looking for a developer that can write .net code, it's a bit different than an XSLT/jQuery designer. @sympmarc @next_connect
    @jimmywim: @mrackley @sympmarc @next_connect I reckon a "dev" does managed code and works in the 12 hive
    @sympmarc: @jimmywim @mrackley @next_connect We had a similar debate a few days ago @toddbleeker et al
    @sympmarc: @sympmarc @jimmywim @mrackley @next_connect @toddbleeker @stevenmfowler More abt my Middle Tier term, but still connected. Meet bus need.
    @toddbleeker: @sympmarc @jimmywim @mrackley @next_connect I used "No Assembly Required" in the past. I also suggested "Supplimenting the SharePoint DOM"
    @toddbleeker: @sympmarc @jimmywim @mrackley @next_connect Others suggested Information Worker Solutions/Enhancements
    @toddbleeker: @sympmarc @jimmywim @mrackley @next_connect @stevenmfowler I also like "SharePoint Scripting Solutions". All the technologies are script.
    @jimmywim: @toddbleeker @sympmarc @mrackley @next_connect I like the IW solutions one...
    @toddbleeker: @sympmarc @jimmywim @mrackley @next_connect @stevenmfowler This is like the debate that never ends: it is definitely not called Middle Tier.
    @jimmywim: @toddbleeker @sympmarc @mrackley @next_connect @stevenmfowler "Scripting" these days makes me think PowerShell...
    @sympmarc: @toddbleeker @jimmywim @mrackley @next_connect @stevenmfowler If it forces a debate on h2 best solve bus probs, I'll keep sayin Middle Tier.
    @usher: @sympmarc so we know what we're looking for, we just can't define a name? @toddbleeker @jimmywim @mrackley @next_connect @stevemfowler
    @sympmarc: @usher @sympmarc @toddbleeker @jimmywim @mrackley @next_connect @stevemfowler The naming seems to matter more than the substance. :-(
    @jimmywim: @sympmarc @usher @toddbleeker @mrackley @next_connect @stevemfowler work brkdn defines tasks, defines tools needed, can then b grp'd by user
    @WonderLaura: @mrackley @toddbleeker @jimmywim @sympmarc @usher @next_connect Funny you're asking. @johnrossjr and I spent hours this week on the subject.
    @stevenmfowler: RT @toddbleeker: @sympmarc @jimmywim @mrackley @next_connect @stevenmfowler it is definitely not called Middle Tier. < I'm with Todd

