Search Results

Search found 2284 results on 92 pages for 'smart zulu'.

  • server 2008 r2 - wbadmin systemstatebackup - system writer not found in the backup

    - by TWood
    I am trying to manually run a systemstatebackup command on my Server 2008 R2 box, and I get error code '2155347997' when I view the backup event log details. The command line tells me that log files are written to c:\windows\logs\windowsserverbackup\, but there are no .log files in that path. The command window tells me "System Writer is not found in the backup." However, when I run vssadmin list writers, I find System Writer in the list, and it shows normal status with no last errors stored. I am running this from an elevated command prompt, logged on with an administrator account. My backup target path grants Network Service full control and has plenty of free space.

    Looking in the event log, I have two VSS error 8194 entries that happen immediately before the Backup error 517 that carries error code 2155347997. All three of these errors are the result of trying to run the systemstatebackup command. It's my belief that some VSS-related permission check is failing and exiting the backup process before it ever gets started; because of this, the initial code that creates the log files never runs, and that is why I have no files. When running the systemstatebackup command and watching the windowsserverbackup directory, I do see a Wbadmin.0.etl file get created, but it is deleted when the backup errors out and stops.

    I have looked online, and there are numerous opinions as to the cause of this error. These are the things I have tried to correct before posting here:

    - The machine runs an HP 1410i Smart Array controller, but at one time it also used an LSI SCSI card. Using networkadminkb.com's kb# a467, I found one LSI_SCSI entry under HKLM\System\CurrentControlSet\Services whose Start value was set to 0x0, and I modified it to 0x3. No change.
    - Under HKLM\System\CurrentControlSet\Services\VSS\Diag I gave Network Service full control, where it previously had only "Special Permission". No change.
    - I followed KB2009272 to manually try to fix System Writer.

    Those are all of the things I have tried. What else should I look at to resolve this issue? It may be important to note that I run Mozy Pro on this server; it was known in the past to use VSS for copying operations and occasionally threw an error, but since an update last year those event log entries have stopped.
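
    For reference, a minimal sketch of the diagnostic loop from an elevated prompt (the E: backup target is illustrative; substitute the real volume):

        REM Check writer state immediately after a failed run; a writer that
        REM looked healthy earlier can show Failed/Timed out at this point
        vssadmin list writers

        REM List existing shadow copies, then re-run the backup
        vssadmin list shadows
        wbadmin start systemstatebackup -backupTarget:E: -quiet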

  • Intel RST accidentally selected wrong drive as system drive -- how to fix?

    - by Sean Killeen
    Question / TL;DR: if Intel RST has marked a drive other than my RAID set as the system drive, how can I get the RAID set to be seen as the system drive again, and catch it up to the drive I am running from now?

    What happened (note: some perhaps unwise decisions are ahead; this is the order of events as best I can recall it):

    - I had a 2x1TB RAID1 configuration. I bought the drives around the same time, and they started to die around the same time.
    - I replaced the first drive with a 2 TB drive before the other one's SMART errors got more serious. I waited for the RAID to replicate, then replaced the second drive with a manufacturer's replacement.
    - I got a second manufacturer's replacement drive and used it as a spare, so I then had a 1TB/2TB RAID1 and another 1TB as a spare.
    - The 1TB drive in the replacement set was bad from the manufacturer. Rather than mess with their refurbished stock, I bought another 2 TB drive and upped the configuration to a 2x2TB RAID1, with the other, functioning manufacturer's drive as a spare.
    - I made the mistake of trying to bring the other drive online to clean it out, and the signature clash killed my machine. When the machine rebooted, that drive was marked as the system drive.

    So I now have a 2x2TB RAID1 that is apparently offline, and one spare 1 TB refurbished drive that everything is being run from. Not a great idea.

    Options I'm considering:

    1. Bring the 2x2TB array back online, then unplug the spare until I can format it in another system. This would involve some data loss, but the more I think about it, I actually think I haven't modified any data that isn't backed up or synced somewhere (go me!). Anything that isn't is likely trivial, enough that I'm willing to take the risk. One downside: if the 2 TB array doesn't have the data on it for some reason, I could be screwed trying to put the other drive back in, no?
    2. Try to somehow get the RAID1 updated with the data from the current system drive.
    3. Option 3?
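
    One low-risk first step, sketched below with the stock diskpart tool (nothing RST-specific), is to confirm exactly which physical disk the running OS lives on before touching the array: list disk should distinguish the lone 1 TB spare from the 2 TB pair by size, list volume locates C:, and detail volume names the physical disk backing it (the volume number will vary):

        diskpart
        DISKPART> list disk
        DISKPART> list volume
        DISKPART> select volume 1
        DISKPART> detail volume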

  • file corruption on read/write 2.6.32-22-server (happens across many kernels)

    - by Jonathan
    Hi guys, I'm having an issue where, after the server has been up for a period of time (a few days to a week), it will start reading corrupt data. For instance, when I run sha1sum on a file after a fresh boot, the digest remains the same. However, after a while I will start to get segfaults, and from then on, whenever I read this file, I get a different sha1sum. I've checked S.M.A.R.T. with long tests, and I've run an extended memtest86+ (12 passes).

    My lspci is as follows:

        00:00.0 Host bridge: Advanced Micro Devices [AMD] RS780 Host Bridge
        00:01.0 PCI bridge: Advanced Micro Devices [AMD] RS780 PCI to PCI bridge (int gfx)
        00:06.0 PCI bridge: Advanced Micro Devices [AMD] RS780 PCI to PCI bridge (PCIE port 2)
        00:07.0 PCI bridge: Advanced Micro Devices [AMD] RS780 PCI to PCI bridge (PCIE port 3)
        00:11.0 SATA controller: ATI Technologies Inc SB700/SB800 SATA Controller [AHCI mode]
        00:12.0 USB Controller: ATI Technologies Inc SB700/SB800 USB OHCI0 Controller
        00:12.1 USB Controller: ATI Technologies Inc SB700 USB OHCI1 Controller
        00:12.2 USB Controller: ATI Technologies Inc SB700/SB800 USB EHCI Controller
        00:13.0 USB Controller: ATI Technologies Inc SB700/SB800 USB OHCI0 Controller
        00:13.1 USB Controller: ATI Technologies Inc SB700 USB OHCI1 Controller
        00:13.2 USB Controller: ATI Technologies Inc SB700/SB800 USB EHCI Controller
        00:14.0 SMBus: ATI Technologies Inc SBx00 SMBus Controller (rev 3c)
        00:14.1 IDE interface: ATI Technologies Inc SB700/SB800 IDE Controller
        00:14.3 ISA bridge: ATI Technologies Inc SB700/SB800 LPC host controller
        00:14.4 PCI bridge: ATI Technologies Inc SBx00 PCI to PCI Bridge
        00:14.5 USB Controller: ATI Technologies Inc SB700/SB800 USB OHCI2 Controller
        00:18.0 Host bridge: Advanced Micro Devices [AMD] K10 [Opteron, Athlon64, Sempron] HyperTransport Configuration
        00:18.1 Host bridge: Advanced Micro Devices [AMD] K10 [Opteron, Athlon64, Sempron] Address Map
        00:18.2 Host bridge: Advanced Micro Devices [AMD] K10 [Opteron, Athlon64, Sempron] DRAM Controller
        00:18.3 Host bridge: Advanced Micro Devices [AMD] K10 [Opteron, Athlon64, Sempron] Miscellaneous Control
        00:18.4 Host bridge: Advanced Micro Devices [AMD] K10 [Opteron, Athlon64, Sempron] Link Control
        01:05.0 VGA compatible controller: ATI Technologies Inc Radeon HD 3300 Graphics
        01:05.1 Audio device: ATI Technologies Inc RS780 Azalia controller
        02:00.0 Ethernet controller: Atheros Communications Atheros AR8121/AR8113/AR8114 PCI-E Ethernet Controller (rev b0)
        03:00.0 FireWire (IEEE 1394): VIA Technologies, Inc. Device 3403

    I could really use some help on this; do you have any idea what could cause it? It's really frustrating me, as it seems to trigger entirely randomly and will not go away until I reboot. I also use KVM for virtualization, as well as MD for software RAID on this server, and the processor is a Phenom II X4 965. I don't believe it's the software RAID, however, as this also affects files hosted on non-RAID partitions.
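
    One way to demonstrate the unstable reads (a sketch; the file path is illustrative) is to force each pass to come from disk rather than the page cache and hash the same file repeatedly. On healthy hardware the digest never changes between runs:

        # as root: flush the page cache so each pass re-reads from disk
        sync && echo 3 > /proc/sys/vm/drop_caches
        sha1sum /path/to/suspect.file
        sync && echo 3 > /proc/sys/vm/drop_caches
        sha1sum /path/to/suspect.file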

  • apache sendmail: trying to change user "from" address from apache to domain account

    - by Wes
    I apologize if I am asking a question already answered, but my problem isn't really that I haven't found an answer. I have, in fact, found a half-dozen different "solutions" to my problem, tried them all in various combinations, and have been consistently unsuccessful.

    The goal: all I want to do is change the envelope "from" address for all email sent from [email protected] to [email protected], always.

    What I've already done: I am running Apache, PHP, and sendmail on CentOS 5.5, as [email protected]. We have an SMTP server at 192.168.0.4, and the domain's email accounts are all at @domain.org. I have successfully set up a "smart host" using this line in the sendmail.mc file:

        define(`SMART_HOST', `192.168.0.4')dnl

    Then I set up masquerading, and was hopeful this would solve it. I have this in the .mc file:

        FEATURE(`masquerade_entire_domain')dnl
        FEATURE(`masquerade_envelope')dnl
        FEATURE(`allmasquerade')dnl
        MASQUERADE_AS(`domain.org')dnl
        MASQUERADE_DOMAIN(`domain.org.')dnl
        MASQUERADE_DOMAIN(`localhost.localdomain.')dnl

    This rewrites "to" addresses, but not "from" addresses. Testing from the command line:

        sendmail -v [email protected]

    always shows the mail as coming from the local user (in this case root, or my local user account). I had read that the sendmail command sometimes bypasses masquerading; nevertheless, using the mail command has the same result. After that, I explored several "solutions", including:

    - mailertable
    - virtusertable
    - FEATURE(`accept_unresolvable_domains')dnl
    - LOCAL_DOMAIN(`localhost.localdomain')dnl
    - FEATURE(`genericstable')dnl
    - the /etc/mail/access file
    - the /etc/mail/local-host-names file
    - the /etc/mail/trusted-users file

    all to no effect.

    The last thing I've tried: I decided to go in a different direction and set the envelope "from" address via PHP, using either the configuration in /etc/php.ini or the -f parameter to the mail() function or the sendmail command. If I run this command:

        sendmail -v -f [email protected] [email protected]

    I get this error in /var/log/maillog:

        Mar 30 08:56:16 localhost sendmail[24022]: p2UCuE8w024022: [email protected], size=5, class=0, nrcpts=1, msgid=<[email protected]>, relay=user@localhost
        Mar 30 08:56:19 localhost sendmail[24022]: p2UCuE8w024022: [email protected], [email protected] (500/502), delay=00:00:05, xdelay=00:00:03, mailer=relay, pri=30005, relay=[192.168.0.4] [192.168.0.4], dsn=5.1.1, stat=User unknown
        Mar 30 08:56:19 localhost sendmail[24022]: p2UCuE8w024022: p2UCuE8x024022: DSN: User unknown
        Mar 30 08:56:23 localhost sendmail[24022]: p2UCuE8x024022: [email protected], delay=00:00:04, xdelay=00:00:04, mailer=relay, pri=31029, relay=[192.168.0.4] [192.168.0.4], dsn=2.0.0, stat=Sent (Ok: queued as B5E2E40E0A2)

    which is basically a "User unknown" 550 error.

    Please help. What do I need to change? Should I just start over in the sendmail.mc file? It has a ton of config options stuffed into it after days of trying things. Why does changing the envelope "from" address via the command line generate a "User unknown" error?
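
    For what it's worth, the genericstable feature listed above is the mechanism sendmail documents for this exact case: masquerading rewrites only the domain part of local senders, while genericstable can remap the user part as well (e.g. apache to a real mailbox the relay will accept). A minimal sketch, with illustrative addresses since the real ones are redacted above: in sendmail.mc,

        FEATURE(`genericstable')dnl
        GENERICS_DOMAIN(`localhost.localdomain')dnl
        GENERICS_DOMAIN(`localhost')dnl

    then map the daemon users in /etc/mail/genericstable,

        apache    webmaster@example.org
        root      webmaster@example.org

    and rebuild the map and configuration:

        makemap hash /etc/mail/genericstable < /etc/mail/genericstable
        make -C /etc/mail && service sendmail restart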

  • filtering itunes library items by file location

    - by Cawas
    3 answers and unfortunately no solution yet.

    The problem: I've got way more than 1000 duplicated items in my iTunes library pointing to a non-existent place (the "Where" field under the Get Info window), along with other duplicated items and other MIAs (missing in action). Is there any simple way to delete all of them, and only them, from the library? By that I mean: some MIAs are pointing to /Volumes, while some are pointing to .../music/Music/... or just .../music/.... I want to delete all the ones pointing to /Volumes, since I'll recover the rest later.

    Some background: I tried searching for a specific keyword in the path and creating a smart playlist from it, but with no result. Being able to just sort the whole library by path would be a perfect solution! I believe old versions of iTunes could do that. PowerTunes can do it (sort by path), but I can't do anything with its list. I would also welcome any program able to handle this, then import and properly export back the iTunes library.

    Since this seems to just not be clear enough: AppleScript doesn't work. That's because AppleScript just can't gather the missing info anywhere in the iTunes library. Maybe we could use AppleScript by opening the XML file, but that's a whole other issue. Here's a quote from my conversation with Doug ("the man himself") Adams last December:

        I don't think you do understand. There is no way to get the path to the file of a dead track because iTunes has "forgotten" it. That is, by definition, what a dead track is.
        -- Doug

        On Dec 21, 2010, at 7:08 AM, Caue Rego wrote:
        yes I understand that and have seen the script. but I'm not looking for the file. just the old broken path reference to it.
        Sent from my iPhone

        On 21/12/2010, at 10:00, Doug Adams wrote:
        You cannot locate missing files of dead tracks because, by definition, a dead track is one that doesn't have any file information. If you look at "Super Remove Dead Tracks", you will notice it looks for tracks that have "missing value" for the location property.

  • Are random packets normal?

    - by TheLQ
    About a month ago, one of my servers started receiving random packets from IPs all over the world. So I did the smart thing and stopped putting off installing an IDS. This IDS is a ClearOS gateway, which comes with Snort and SnortSam, and I enabled it. There are a total of 4 ports open, two of which forward to the server I'm talking about. These ports are 3724 and 8085, so they aren't going to be easily detected in a port scan. However, checking some logs on this server, I found that the activity is resuming. I found this:

        Accepting connection from '75.166.155.122'
        [Auth] got unknown packet from '75.166.155.122'
        Accepting connection from '98.164.154.93'
        [Auth] got unknown packet from '98.164.154.93'
        Ping MySQL to keep connection alive
        Accepting connection from '70.241.195.129'
        [Auth] got unknown packet from '70.241.195.129'
        Accepting connection from '67.182.229.169'
        [Auth] got unknown packet from '67.182.229.169'
        Accepting connection from '69.137.140.38'
        [Auth] got unknown packet from '69.137.140.38'
        Accepting connection from '76.31.72.55'
        [Auth] got unknown packet from '76.31.72.55'
        Accepting connection from '97.88.139.39'
        [Auth] got unknown packet from '97.88.139.39'
        Accepting connection from '173.35.62.112'
        [Auth] got unknown packet from '173.35.62.112'
        Accepting connection from '187.15.10.73'
        [Auth] got unknown packet from '187.15.10.73'
        Accepting connection from '66.66.94.124'
        [Auth] got unknown packet from '66.66.94.124'
        Accepting connection from '75.159.219.124'
        [Auth] got unknown packet from '75.159.219.124'
        Accepting connection from '99.102.100.82'
        [Auth] got unknown packet from '99.102.100.82'
        Accepting connection from '24.128.240.45'
        [Auth] got unknown packet from '24.128.240.45'
        Accepting connection from '99.231.7.39'
        [Auth] got unknown packet from '99.231.7.39'
        Accepting connection from '206.255.79.56'
        [Auth] got unknown packet from '206.255.79.56'
        Accepting connection from '68.97.106.235'
        [Auth] got unknown packet from '68.97.106.235'
        Accepting connection from '69.134.67.251'
        [Auth] got unknown packet from '69.134.67.251'
        Accepting connection from '63.228.138.186'
        [Auth] got unknown packet from '63.228.138.186'
        Accepting connection from '184.39.146.193'
        [Auth] got unknown packet from '184.39.146.193'
        Accepting connection from '69.171.161.102'
        [Auth] got unknown packet from '69.171.161.102'
        Accepting connection from '76.0.47.228'
        [Auth] got unknown packet from '76.0.47.228'
        Ping MySQL to keep connection alive
        Accepting connection from '126.112.201.14'
        [Auth] got unknown packet from '126.112.201.14'
        Ping MySQL to keep connection alive

    Now that scares me. Why isn't Snort detecting this? How were they able to find this specific port? More importantly, what would these packets normally contain? Is this something I should be worried about? How can I stop it?
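
    If these turn out to be unwanted probes rather than legitimate clients, one blunt way to throttle them at the gateway is an iptables recent-match rate limit. This is a sketch assuming iptables is in play under ClearOS; adjust the port and thresholds to taste (established connections are unaffected):

        # track new connections to 3724; drop sources making >10 in 60 seconds
        iptables -A INPUT -p tcp --dport 3724 -m state --state NEW -m recent --set
        iptables -A INPUT -p tcp --dport 3724 -m state --state NEW -m recent \
          --update --seconds 60 --hitcount 10 -j DROP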

  • SSL support with Apache and Proxytunnel

    - by whuppy
    I'm inside a strict corporate environment. HTTPS traffic goes out via an internal proxy (for this example, 10.10.04.33:8443) that's smart enough to block ssh'ing directly to ssh.glakspod.org:443. I can get out via proxytunnel. I set up an apache2 VirtualHost at ssh.glakspod.org:443 thus:

        ServerAdmin [email protected]
        ServerName ssh.glakspod.org
        <!-- Proxy Section -->
        <!-- Used in conjunction with ProxyTunnel -->
        <!-- proxytunnel -q -p 10.10.04.33:8443 -r ssh.glakspod.org:443 -d %host:%port -->
        ProxyRequests on
        ProxyVia on
        AllowCONNECT 22
        <Proxy *>
            Order deny,allow
            Deny from all
            Allow from 74.101
        </Proxy>

    So far so good: I hit the Apache proxy with a CONNECT, then PuTTY and my SSH server shake hands, and I'm off to the races. There are, however, two problems with this setup:

    1. The internal proxy server can sniff my CONNECT request and also see that an SSH handshake is taking place. I want the entire connection between my desktop and ssh.glakspod.org:443 to look like HTTPS traffic, no matter how closely the internal proxy inspects it.
    2. I can't get the VirtualHost to serve a regular HTTPS site while proxying. I'd like the proxy to coexist with something like this:

        SSLEngine on
        SSLProxyEngine on
        SSLCertificateFile /path/to/ca/samapache.crt
        SSLCertificateKeyFile /path/to/ca/samapache.key
        SSLCACertificateFile /path/to/ca/ca.crt
        DocumentRoot /mnt/wallabee/www/html
        <Directory /mnt/wallabee/www/html/>
            Options Indexes FollowSymLinks MultiViews
            AllowOverride None
            Order allow,deny
            allow from all
        </Directory>
        <!-- Need a valid client cert to get into the sanctum -->
        <Directory /mnt/wallabee/www/html/sanctum>
            SSLVerifyClient require
            SSLOptions +FakeBasicAuth +ExportCertData
            SSLVerifyDepth 1
        </Directory>

    So my question is: how do I enable SSL support on the ssh.glakspod.org:443 VirtualHost in a way that will work with proxytunnel? I've tried various combinations of proxytunnel's -e, -E, and -X flags without any luck. The only lead I've found is Apache bug no. 29744, but I haven't been able to find a patch that installs cleanly on Ubuntu Jaunty's Apache version 2.2.11-2ubuntu2.6. Thanks in advance.
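
    For problem 1, the flags that would seem to matter are proxytunnel's encryption options; hedging here, since this presumes the vhost can be made to speak TLS at all, which is exactly what problem 2 blocks. The intended invocation would look something like:

        # -E: wrap the hop to the corporate proxy in SSL, so even the CONNECT
        #     request travels encrypted on the local network
        # -e: speak SSL to ssh.glakspod.org:443 after the CONNECT, so the
        #     inner traffic looks like ordinary HTTPS to any inspector
        proxytunnel -q -E -e -p 10.10.04.33:8443 -r ssh.glakspod.org:443 -d %host:%port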

  • How to enjoy DVD on Apple iPad

    - by user44251
    I believe many people spent a sleepless night yesterday waiting for the new Apple tablet to come. Just a few days ago, or perhaps longer, I noticed fierce debate about it: its name, size, capacity, processor, main features, price, etc. Now they can take a long breath, with the new Apple tablet, named iPad, officially released on 28 January 2010 (Beijing time). But I know a new battle just begins.

    iPad sounds somewhat like iPod, and it really shares some similarities in terms of shape: smart, light, and portable. It has a 9.7-inch, LED-backlit IPS display with a remarkably precise Multi-Touch screen. And yet, at just 1.5 lbs and 0.5 inches thin, it's easy to carry and use everywhere. It can greatly facilitate your experience with the web, email, photos, and videos. Right now, it can run almost 140,000 of the apps on the Apple store. It can even run the apps you have downloaded for your iPhone or iPod touch.

    But so far, I haven't seen any possibility that it can work with DVDs; probably there is no built-in DVD-ROM or DVD player which can play a DVD directly. As Apple states for the iPad, the video formats supported are MPEG-4 (MP4, M4V), H.264, MOV, etc., and the audio formats accepted are AAC, Protected AAC, MP3, AIFF, WAV, etc.; those are formats that are commonly used with the iMac. This could really be a hard nut to crack if you want to watch your favourite DVDs on this magic Apple iPad. But don't worry, there is still a way out: you just need a few steps for ripping and importing DVD movies to the Apple iPad with a simple application, a DVD to iPad converter.

    What's in DVD to iPad Converter for Mac: DVD to iPad Converter for Mac is an application designed for the newly released Apple iPad which can rip and convert your DVD contents to Apple iPad compatible MPEG-4 (MP4, M4V), H.264, MOV, etc. Other popular file formats like AVI, WMV, MPG, MKV, VOB, 3GP, FLV, etc. can also be converted, so that you can put them on portable devices like the iPod, iPhone, iRiver, BlackBerry, etc. Besides, it can also extract audio from DVD videos and save it as MP3, AIFF, AAC, WAV, etc. The Mac DVD to iPad converter has also been enhanced to run both on PowerPC and Intel (Snow Leopard included). It offers versatile editing features which allow you to make your own DVD videos. For example, you can cut your DVD to whatever length you like with Trim, crop off unwanted parts from DVD clips with Crop, and add special effects like Gray, Emboss, and Old Film to make your videos more artistic. Its built-in merging feature and batch mode allow you to join several DVD clips into a single one and do batch conversion. And more features can be expected if you spend a few minutes to try it.

  • Windows 7 Home hangs at "Welcome" screen

    - by White Phoenix
    I'm asking on behalf of a friend who's currently having problems with his machine, which runs Windows 7 Home 32-bit. He's too far away for me to help by going over to his house, so I'm helping him over the Internet.

    This is his current machine: http://www.newegg.com/Product/Product.aspx?Item=N82E16883227134. The only two changes he made to it were swapping out the graphics card for an EVGA GTX 460 and the PSU for a Corsair TX650.

    Here's what happened. He was playing a fairly CPU/GPU-intensive computer game with some music going in the background in foobar. Suddenly, he noticed the music had stopped playing, so he switched to foobar to try to close it, but it froze (the window wouldn't respond). He figured foobar was just having a bad day and force-quit it. Suddenly his game wouldn't respond either, so he force-quit that; at that point the entire computer just went to crap, so he hit the restart button on his machine.

    The computer POSTs fine, but now it gets stuck at the Windows "Welcome" screen (his account is set to auto-login). The HD activity light is solid yellow, but he doesn't hear HDD activity. He tried booting into Safe Mode: stuck at the Welcome screen. He tried a Startup Repair within Windows 7: it found a few problems, but the machine still gets stuck at Welcome. I advised him to boot off the DVD; sfc /scannow found nothing (we couldn't use the regular /scannow option, since it said a repair was pending, and had to use the offbootdir/offwindir command switches). He ran Startup Repair 3 more times: it found nothing. My friend runs virus/malware scans on a regular basis, so he's fairly sure it's not that either.

    Right now I'm having my friend run chkdsk /R on the computer while in this Startup Recovery mode; so far it's caught a few bad sectors. At this point, though, I'm wondering which way to go if chkdsk doesn't fix it. A quick Google search said some people had success booting Windows with boot logging on, and others had success with the aforementioned chkdsk, etc. The fact that Windows cannot even boot into Safe Mode concerns me.

    While we're waiting for chkdsk /R to finish: are there any other options I can give my friend short of reinstalling Windows 7? He has his data on a separate partition, so that's not a major problem (though it'll be an annoyance for him). I suspect his hard drive may be having some issues, but my main concern is getting him back up and running before we start diagnosing the hard drive (I may have him run some sort of SMART test utility later).
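
    For reference, the offline variants of the commands we used look like this (drive letters inside the recovery environment can differ from the installed system's, so verify them first):

        REM offline integrity scan from the Windows RE command prompt
        sfc /scannow /offbootdir=C:\ /offwindir=C:\Windows

        REM surface scan with bad-sector recovery
        chkdsk C: /R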

  • Windows Server 2008 Software Raid 5 - Data integrity issues

    - by Fopedush
    I've got a server running Windows Server 2008 R2 with a (Windows-native) software RAID-5 array. The array consists of 7 x 1TB Western Digital RE3 and RE4 drives. I have offline backups of this array.

    The problem is this: I noticed a few days ago, after copying a large file to the disk, that there was an integrity issue with that file. It was a ~12GB file that I had downloaded via uTorrent. After moving it to the RAID array, I used uTorrent to relocate the download location and performed a recheck so I could seed it from that location. The recheck found that only 6308/6310 chunks of the copied file were intact.

    My next step was to write a quick PowerShell script that would copy files to the array while performing a SHA1 hash of the original and resultant files and comparing them. Smaller files (100-1000MB) copied over just fine. When I started copying larger data (~15GB), I found that the hash check failed about two-thirds of the time. The corrupt files had very, very small inconsistencies: less than .01%. I further eliminated the possibility of networking or client issues by placing this large file on the server's C:\ drive and copying it repeatedly from there to the array, with similar results. Copying the data via Explorer, PowerShell, or the standard Windows command prompt yields the same results. None of the copies fail or report any problems, and the RAID array itself is listed as healthy in Disk Management.

    After a few experiments, I shut down the server and ran memtest overnight: no errors were detected. A basic run of chkdsk found no problems, but I did not use the /R flag, as I was unsure how that might affect a software RAID-5 volume. I next ran Crystal Disk Info to check the SMART data on the drives, but found that CDI only detected 5 out of the 7 disks in the array; I have no idea why. Nevertheless, CDI shows the following "caution" flags on a single one of the drives:

        05 199 199 140 000000000001 Reallocated Sectors Count
        C5 200 200 __0 000000000001 Current Pending Sector Count

    which is a little bit alarming, but I don't really know what to do with the information. I hardly feel like one reallocated sector could be causing this.

    At this point, I'm looking for some guidance on what to do next. I need to determine the cause of this issue, but I'm hesitant to run chkdsk /R or any bootable disk health checker because I'm afraid it might break the array. I've considered triggering a re-sync of the array, but I'm not actually sure how to do that without doing something silly like manually dropping a disk and then restoring it. Any advice that could help me ferret out the precise cause of this issue would be greatly appreciated.
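
    The verification script was along these lines; this is a reconstruction, not the original, with illustrative file names. The .NET SHA1 class is used because Get-FileHash does not exist in the PowerShell 2.0 that ships with 2008 R2:

        $src = 'C:\test\big.bin'   # staged on the system drive
        $dst = 'R:\big.bin'        # target on the RAID-5 volume

        function Get-Sha1Hex([string]$path) {
            $sha = [System.Security.Cryptography.SHA1]::Create()
            $fs  = [System.IO.File]::OpenRead($path)
            try     { [BitConverter]::ToString($sha.ComputeHash($fs)) }
            finally { $fs.Close() }
        }

        Copy-Item $src $dst
        if ((Get-Sha1Hex $src) -ne (Get-Sha1Hex $dst)) {
            Write-Output "MISMATCH: $dst does not match $src"
        } else {
            Write-Output "OK"
        }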

  • Is there a Distributed SAN/Storage System out there?

    - by Joel Coel
    Like many other places, we ask our users not to save files to their local machines. Instead, we encourage that they be put on a file server, so that others (with appropriate permissions) can use them and so that the files are backed up properly. The result of this is that most users have large hard drives that sit mostly empty. It's 2010 now; surely there is a system out there that lets you turn that empty space into a virtual SAN or document library?

    What I envision is a client program, pushed out to users' PCs, that coordinates with a central server. The server looks to users just like a normal file server, but instead of keeping entire file contents it merely keeps a record of where those files can be found among the various user PCs. It then coordinates with the right clients to serve up file requests. The client software would be able to respond to such requests directly, as well as be smart enough to cache recent files locally. For redundancy, the server could make sure files are copied to multiple PCs, perhaps allowing you to define groups in different locations so that an instance of the entire repository lives in each group, to protect against a disaster in one building taking down everything else.

    Obviously you wouldn't point your database server at this, but for simpler things I see several advantages:

    - Files can often be transferred from a nearer machine.
    - Disk space grows automatically as your company does.
    - It should ultimately be cheaper, as you don't need to keep a separate set of disks.

    I can see a few downsides as well:

    - Occasional degradation of user PC performance, if a machine has to serve or accept a large file transfer during a busy period.
    - Writes have to be propagated around the network several times (though I suspect this isn't really much of a problem, as in most places reading happens more than writing).
    - You still need a way to send a complete copy of the data offsite occasionally, and this would make differentials very hard to do.

    Think of this like a cloud storage system that lives entirely within your corporate LAN and makes use of your existing user equipment. Our old main file server is due for retirement in about 2 years, and I'm looking into replacing it with a small SAN; I'm thinking something like this would be a better fit. As a school, we have a couple of computer labs I can leave running that would be perfect for adding a little extra redundancy to the system.

    Unfortunately, the closest thing I can find is Dienst, and that's just a paper dating back to 1994. Am I just using the wrong buzzwords in my searches, or does this really not exist? If not, is there a big downside that I'm missing?

  • New hard drives failing within weeks

    - by Jason Kealey
    I've experienced 8 hard disk failures in 3 months and have tried many things to solve the issue permanently, but I have failed. I would like to know if you have any advice for me.

    The system was running Windows XP on an Asus P5W-DH Deluxe, with a RAID-1 array I had set up from 2 x 500 GB 7200RPM Western Digital drives. Then:

    - One died. I took it out to RMA it. On the same day, the router was fried. Assuming a power surge had occurred, I connected an older UPS to protect the system.
    - Once I got my hands on an identical disk, I installed it, and the RAID array was rebuilt. A few days later, the other one died. Assuming the rebuild had caused it to fail, I took it out for RMA.
    - Before the replacement arrived, the remaining drive died. I then discovered I could re-enable them using the Intel Matrix Storage Manager. I re-enabled both, and the system seemed fine for a week, until both died again.
    - I got two new 1.5 TB 7200RPM Seagate drives and re-installed Windows 7. I also replaced the UPS and the power supply. They both died again.

    The voltage at the plug is stable between 120 and 122V as per the UPS, and none of the other devices (monitors, etc.) have had any problems. At this point, I see two options: (a) an electrical issue in the house that was, for some reason, not blocked by the UPS; or (b) something else inside the system causing surges: the motherboard? the onboard RAID controller? The failures happen fairly quickly, between 2 and 14 days after I fix the previous issue.

    I have just gotten a new computer (Core i7) to replace it. If it is stable, I can determine that (b) was the problem; if it fries its hard drive again, I can determine that it is an electrical issue in the house. Do you have any other thoughts? Any tools I can run on the drives that failed to get more information about their original SMART event history?
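
    For pulling SMART history off the failed drives (assuming you can still get them detected in some machine), smartmontools works on both Windows and Linux; a typical pass looks like:

        smartctl -a /dev/sda            # full SMART record, incl. attribute table
        smartctl -l error /dev/sda      # the drive's own logged errors
        smartctl -l selftest /dev/sda   # history of short/long self-tests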

  • HP Proliant G7 hardware RAID configuration automation with ribcl

    - by karthik
    I have been trying to automate hardware RAID configuration of HP ProLiant machines before OS installation (so I cannot use hpacucli). SSHing into iLO 3 doesn't give an option for RAID configuration. I use RIBCL, but there is no command for RAID config there either; however, I do see this under the GET_EMBEDDED_HEALTH command:

        <STORAGE>
          <CONTROLLER>
            <LABEL VALUE="Controller on System Board"/>
            <STATUS VALUE="OK"/>
            <CONTROLLER_STATUS VALUE="OK"/>
            <SERIAL_NUMBER VALUE="50014380215F0070"/>
            <MODEL VALUE="HP Smart Array P420i Controller"/>
            <FW_VERSION VALUE="3.41"/>
            <DRIVE_ENCLOSURE>
              <LABEL VALUE="Port 1I Box 1"/>
              <STATUS VALUE="OK"/>
              <DRIVE_BAY VALUE="04"/>
            </DRIVE_ENCLOSURE>
            <DRIVE_ENCLOSURE>
              <LABEL VALUE="Port 2I Box 0"/>
              <STATUS VALUE="OK"/>
              <DRIVE_BAY VALUE="01"/>
            </DRIVE_ENCLOSURE>
            <LOGICAL_DRIVE>
              <LABEL VALUE="01"/>
              <STATUS VALUE="OK"/>
              <CAPACITY VALUE="68 GB"/>
              <FAULT_TOLERANCE VALUE="RAID 0"/>
              <PHYSICAL_DRIVE>
                <LABEL VALUE="Port 1I Box 1 Bay 3"/>
                <STATUS VALUE="OK"/>
                <SERIAL_NUMBER VALUE="6TA0N3SZ0000B231CYDT"/>
                <MODEL VALUE="EH0072FAWJA"/>
                <CAPACITY VALUE="68 GB"/>
                <LOCATION VALUE="Port 1I Box 1 Bay 3"/>
                <FW_VERSION VALUE="HPDH"/>
                <DRIVE_CONFIGURATION VALUE="Configured"/>
              </PHYSICAL_DRIVE>
            </LOGICAL_DRIVE>
          </CONTROLLER>
        </STORAGE>

    My question is: is there a way to modify or create this XML piece (say, two logical drives with one spare) so that it takes effect when I reboot the server? If this approach is not correct, are there any other ways to automate hardware RAID configuration?
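
    For reference, the read-side query that produces the block above is normally wrapped like this (credentials illustrative) and fed to the iLO with a tool such as locfg.pl or hponcfg:

        <RIBCL VERSION="2.0">
          <LOGIN USER_LOGIN="Administrator" PASSWORD="password">
            <SERVER_INFO MODE="read">
              <GET_EMBEDDED_HEALTH/>
            </SERVER_INFO>
          </LOGIN>
        </RIBCL>

    As far as I can tell, SERVER_INFO exposes no corresponding write command for the STORAGE section, which would explain why editing this XML and resubmitting it has no effect.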

  • Looking For iPhone 4S Alternatives? Here Are 3 Smartphones You Should Consider

    - by Gopinath
    If you are going to buy an iPhone 4S on a two-year contract in the USA, Europe, or Australia, you may not find it expensive. But if you are planning to buy it in other parts of the world, you will definitely feel the heat of its ridiculous price. In India, the iPhone 4S costs approximately $1000, which is 30% more than the price tag of an unlocked iPhone sold in the USA. Personally, I love iPhones, as there is no match for the user experience provided by Apple, as well as the wide range of really meaningful applications available for the iPhone. But it breaks my heart to spend $1000 on a phone, so I'm forced to look at the alternatives available in the market. Here are the four iPhone 4S alternatives available in almost all the countries where you can buy an iPhone 4S.

    Google Galaxy Nexus: The Galaxy Nexus is Google's own Android smartphone, manufactured by Samsung and sold under the Google Nexus brand. The Galaxy Nexus is the pure Android phone available in the market, without any bloatware or custom user interfaces like other Androids. It is also the first Android phone to ship with the latest version of Android OS, Ice Cream Sandwich, and it is the benchmark for the rest of the Android phones that are going to enter the market soon. In the words of Google, this smartphone is "Galaxy Nexus: Simple. Beautiful. Beyond Smart." BGR's review summarizes the phone as: "This is almost comical at this point, but the Samsung Galaxy Nexus is my favourite Android device in the world. Easily replacing the HTC Rezound, the Motorola DROID RAZR, and Samsung Galaxy S II, the Galaxy Nexus champions in a brand new version of Android that pushes itself further than almost any other mobile OS in the industry."

    Samsung Galaxy S II: The one company that is able to sell more smartphones than Apple is Samsung, which recently displaced Apple from the top smartphone seller spot and occupied it with pride. Samsung's Galaxy S II fits as one of the best alternatives to Apple's iPhone 4S, with its beautiful design and remarkable performance. Engadget summarizes its Galaxy S II review as: "It's the best Android smartphone yet, but more importantly, it might well be the best smartphone, period. Of course, a 4.3-inch screen size won't suit everyone, no matter how stupendously thin the device that carries it may be, and we also can't say for sure that the Galaxy S II would justify a long-term iOS user foresaking his investment into one ecosystem and making the leap to another. Nonetheless, if you're asking us what smartphone to buy today, unconstrained by such externalities, the Galaxy S II would be the clear choice. Sometimes it's just as simple as that."

    Nokia Lumia 800: Here comes an unexpected Windows Phone into the boxing ring. Maybe they are not as great as the Androids available in the market today, but they are picking up very quickly. The Nokia Lumia 800 in particular seems to be the first Windows Phone 7 device aimed at competing seriously with the Androids and iPhones available in the market. There are reports that the Nokia Lumia 800 is outselling all Androids in the UK, and a few high-profile tech blogs are calling it the king of Windows Phone. Considering this phone while evaluating iPhone 4S alternatives will not disappoint you. We assure you.

    Droid RAZR: Remember the Motorola Droid that swept the entire Android market a couple of years ago? The first two versions of the Motorola Droid were the best in the market, and they outperformed almost every other Android phone in those days. With the invasion of Samsung Androids, Motorola lost its charm. With the recent release of the Droid RAZR, Motorola seems to be headed in the right direction to reclaim that prestige. The Droid RAZR is the thinnest smartphone available in the market, and its beauty is not just skin deep. Here is a review of the phone from Engadget: "the RAZR's beauty is not only skin deep. The LTE radio, 1.2GHz dual-core processor and 1GB of RAM make sure this sleek number is ready to run with the big boys. It kept pace with, and in some cases clearly outclassed its high-end competition. Despite its deficiencies in the display department and underwhelming battery life, the RAZR looks to be a perfectly viable alternative when considering the similarly-pricey Rezound and Galaxy Nexus."

    Further reading: So we have seen the four alternatives to the iPhone 4S available in the market; personally, I would love to buy a Samsung smartphone if I didn't have the money to afford an iPhone 4S. If you are interested in diving deeper into the alternatives, here are a few links to help you do more research:

    - Apple iPhone 4S vs. Samsung Galaxy Nexus vs. Motorola Droid RAZR: How Their Specs Compare, by Huffington Post
    - Nokia Lumia 800 vs. iPhone 4S vs. Nexus Galaxy: Spec Smackdown, by PC World
    - Browser Speed Test: Nokia Lumia 800 vs. iPhone 4S vs. Samsung Galaxy S II, by Gizmodo
    - iPhone 4S vs Samsung Galaxy S II, by Pocket-lint
    - Apple iPhone 4S vs. Samsung Galaxy S II, by Techie Buzz

    This article, "Looking For iPhone 4S Alternatives? Here Are 3 Smartphones You Should Consider," was originally published at Tech Dreams.

  • LWJGL - Eclipse error [on hold]

    - by Zarkopafilis
    When I try to run my LWJGL project, an error pops up. Here is the log file:

        #
        # A fatal error has been detected by the Java Runtime Environment:
        #
        #  EXCEPTION_ACCESS_VIOLATION (0xc0000005) at pc=0x6d8fcc0a, pid=5612, tid=900
        #
        # JRE version: 6.0_16-b01
        # Java VM: Java HotSpot(TM) Client VM (14.2-b01 mixed mode windows-x86 )
        # Problematic frame:
        # V  [jvm.dll+0xfcc0a]
        #
        # If you would like to submit a bug report, please visit:
        #   http://java.sun.com/webapps/bugreport/crash.jsp
        #

        ---------------  T H R E A D  ---------------

        Current thread (0x016b9000):  JavaThread "main" [_thread_in_vm, id=900, stack(0x00160000,0x001b0000)]

        siginfo: ExceptionCode=0xc0000005, reading address 0x00000000

        Registers:
        EAX=0x00000000, EBX=0x00000000, ECX=0x00000006, EDX=0x00000000
        ESP=0x001af4d4, EBP=0x001af524, ESI=0x016b9000, EDI=0x016b9110
        EIP=0x6d8fcc0a, EFLAGS=0x00010246

        Top of Stack: (sp=0x001af4d4)
        0x001af4d4:   6da44bd8 016b9110 00000000 001af668
        0x001af4e4:   ffffffff 22200000 001af620 76ec39c2
        0x001af4f4:   001af524 6d801086 0000000b 001afd34
        0x001af504:   016b9000 016dd990 016b9000 00000000
        0x001af514:   001af5f4 6d9ee000 6d9ef2f0 ffffffff
        0x001af524:   001af58c 10008c85 016b9110 00000000
        0x001af534:   00000000 000a0554 00000000 00000024
        0x001af544:   00000000 00000000 001af6ac 00000000

        Instructions: (pc=0x6d8fcc0a)
        0x6d8fcbfa:   e8 e8 d0 1d 08 00 8b 45 10 c7 45 d8 0b 00 00 00
        0x6d8fcc0a:   8b 00 8b 48 08 0f b7 51 26 8b 40 0c 8b 4c 90 20

        Stack: [0x00160000,0x001b0000], sp=0x001af4d4, free space=317k
        Native frames: (J=compiled Java code, j=interpreted, Vv=VM code, C=native code)
        V  [jvm.dll+0xfcc0a]
        C  [lwjgl.dll+0x8c85]
        C  [USER32.dll+0x18876]
        C  [USER32.dll+0x170f4]
        C  [USER32.dll+0x1119e]
        C  [ntdll.dll+0x460ce]
        C  [USER32.dll+0x10e29]
        C  [USER32.dll+0x10e84]
        C  [lwjgl.dll+0x1cf0]
        j  org.lwjgl.opengl.WindowsDisplay.createWindow(Lorg/lwjgl/opengl/DrawableLWJGL;Lorg/lwjgl/opengl/DisplayMode;Ljava/awt/Canvas;II)V+102
        j  org.lwjgl.opengl.Display.createWindow()V+71
        j  org.lwjgl.opengl.Display.create(Lorg/lwjgl/opengl/PixelFormat;Lorg/lwjgl/opengl/Drawable;Lorg/lwjgl/opengl/ContextAttribs;)V+72
        j  org.lwjgl.opengl.Display.create(Lorg/lwjgl/opengl/PixelFormat;)V+12
        j  org.lwjgl.opengl.Display.create()V+7
        j  zarkopafilis.koding.io.javafx.Main.main([Ljava/lang/String;)V+16
        v  ~StubRoutines::call_stub
        V  [jvm.dll+0xecf9c]
        V  [jvm.dll+0x1741e1]
        V  [jvm.dll+0xed01d]
        V  [jvm.dll+0xf5be5]
        V  [jvm.dll+0xfd83d]
        C  [javaw.exe+0x2155]
        C  [javaw.exe+0x833e]
        C  [kernel32.dll+0x51154]
        C  [ntdll.dll+0x5b2b9]
        C  [ntdll.dll+0x5b28c]

        Java frames: (J=compiled Java code, j=interpreted, Vv=VM code)
        j  org.lwjgl.opengl.WindowsDisplay.nCreateWindow(IIIIZZJ)J+0
        j  org.lwjgl.opengl.WindowsDisplay.createWindow(Lorg/lwjgl/opengl/DrawableLWJGL;Lorg/lwjgl/opengl/DisplayMode;Ljava/awt/Canvas;II)V+102
        j  org.lwjgl.opengl.Display.createWindow()V+71
        j  org.lwjgl.opengl.Display.create(Lorg/lwjgl/opengl/PixelFormat;Lorg/lwjgl/opengl/Drawable;Lorg/lwjgl/opengl/ContextAttribs;)V+72
        j  org.lwjgl.opengl.Display.create(Lorg/lwjgl/opengl/PixelFormat;)V+12
        j  org.lwjgl.opengl.Display.create()V+7
        j  zarkopafilis.koding.io.javafx.Main.main([Ljava/lang/String;)V+16
        v  ~StubRoutines::call_stub

        ---------------  P R O C E S S  ---------------

        Java Threads: ( => current thread )
          0x0179a400 JavaThread "Low Memory Detector" daemon [_thread_blocked, id=4460, stack(0x0b900000,0x0b950000)]
          0x01795400 JavaThread "CompilerThread0" daemon [_thread_blocked, id=5264, stack(0x0b8b0000,0x0b900000)]
          0x01790c00 JavaThread "Attach Listener" daemon [_thread_blocked, id=6080, stack(0x0b860000,0x0b8b0000)]
          0x01786400 JavaThread "Signal Dispatcher" daemon [_thread_blocked, id=1204, stack(0x0b810000,0x0b860000)]
          0x01759c00 JavaThread "Finalizer" daemon [_thread_blocked, id=5772, stack(0x0b7c0000,0x0b810000)]
          0x01755000 JavaThread "Reference Handler" daemon [_thread_blocked, id=4696, stack(0x01640000,0x01690000)]
        =>0x016b9000 JavaThread "main" [_thread_in_vm, id=900, stack(0x00160000,0x001b0000)]

        Other Threads:
          0x01751c00 VMThread [stack: 0x015f0000,0x01640000] [id=4052]
          0x0179c800 WatcherThread [stack: 0x0b950000,0x0b9a0000] [id=3340]

        VM state: not at safepoint (normal execution)

        VM Mutex/Monitor currently owned by a thread: None

        Heap
         def new generation   total 960K, used 816K [0x037c0000, 0x038c0000, 0x03ca0000)
          eden space 896K,  91% used [0x037c0000, 0x0388c2c0, 0x038a0000)
          from space 64K,    0% used [0x038a0000, 0x038a0000, 0x038b0000)
          to   space 64K,    0% used [0x038b0000, 0x038b0000, 0x038c0000)
         tenured generation   total 4096K, used 0K [0x03ca0000, 0x040a0000, 0x077c0000)
           the space 4096K,   0% used [0x03ca0000, 0x03ca0000, 0x03ca0200, 0x040a0000)
         compacting perm gen  total 12288K, used 2143K [0x077c0000, 0x083c0000, 0x0b7c0000)
           the space 12288K, 17% used [0x077c0000, 0x079d7e38, 0x079d8000, 0x083c0000)
        No shared spaces configured.

        Dynamic libraries:
        0x00400000 - 0x00424000  C:\Program Files\Java\jre6\bin\javaw.exe
        0x77550000 - 0x7768e000  C:\Windows\SYSTEM32\ntdll.dll
        0x75a80000 - 0x75b54000  C:\Windows\system32\kernel32.dll
        0x758d0000 - 0x7591b000  C:\Windows\system32\KERNELBASE.dll
        0x759e0000 - 0x75a80000  C:\Windows\system32\ADVAPI32.dll
        0x76070000 - 0x7611c000  C:\Windows\system32\msvcrt.dll
        0x77250000 - 0x77269000  C:\Windows\SYSTEM32\sechost.dll
        0x771a0000 - 0x77241000  C:\Windows\system32\RPCRT4.dll
        0x76eb0000 - 0x76f79000  C:\Windows\system32\USER32.dll
        0x76e60000 - 0x76eae000  C:\Windows\system32\GDI32.dll
        0x77770000 - 0x7777a000  C:\Windows\system32\LPK.dll
        0x75fd0000 - 0x7606e000  C:\Windows\system32\USP10.dll
        0x770b0000 - 0x770cf000  C:\Windows\system32\IMM32.DLL
        0x770d0000 - 0x7719c000  C:\Windows\system32\MSCTF.dll
        0x7c340000 - 0x7c396000  C:\Program Files\Java\jre6\bin\msvcr71.dll
        0x6d800000 - 0x6da8b000  C:\Program Files\Java\jre6\bin\client\jvm.dll
        0x73a00000 - 0x73a32000  C:\Windows\system32\WINMM.dll
        0x75610000 - 0x7565b000  C:\Windows\system32\apphelp.dll
        0x6d7b0000 - 0x6d7bc000  C:\Program Files\Java\jre6\bin\verify.dll
        0x6d330000 - 0x6d34f000  C:\Program Files\Java\jre6\bin\java.dll
        0x6d290000 - 0x6d298000  C:\Program Files\Java\jre6\bin\hpi.dll
        0x776e0000 - 0x776e5000  C:\Windows\system32\PSAPI.DLL
        0x6d7f0000 - 0x6d7ff000  C:\Program Files\Java\jre6\bin\zip.dll
        0x10000000 - 0x1004c000  C:\Users\theo\Desktop\workspace\JavaFX1\lib\natives\windows\lwjgl.dll
        0x5d170000 - 0x5d238000  C:\Windows\system32\OPENGL32.dll
        0x6e7b0000 - 0x6e7d2000  C:\Windows\system32\GLU32.dll
        0x70620000 - 0x70707000  C:\Windows\system32\DDRAW.dll
        0x70610000 - 0x70616000  C:\Windows\system32\DCIMAN32.dll
        0x75b60000 - 0x75cfd000  C:\Windows\system32\SETUPAPI.dll
        0x759b0000 - 0x759d7000  C:\Windows\system32\CFGMGR32.dll
        0x76d70000 - 0x76dff000  C:\Windows\system32\OLEAUT32.dll
        0x75db0000 - 0x75f0c000  C:\Windows\system32\ole32.dll
        0x758b0000 - 0x758c2000  C:\Windows\system32\DEVOBJ.dll
        0x74060000 - 0x74073000  C:\Windows\system32\dwmapi.dll
        0x74b60000 - 0x74b69000  C:\Windows\system32\VERSION.dll
        0x745f0000 - 0x7478e000  C:\Windows\WinSxS\x86_microsoft.windows.common-controls_6595b64144ccf1df_6.0.7600.16661_none_420fe3fa2b8113bd\COMCTL32.dll
        0x75d50000 - 0x75da7000  C:\Windows\system32\SHLWAPI.dll
        0x74370000 - 0x743b0000  C:\Windows\system32\uxtheme.dll
        0x22200000 - 0x22206000  C:\Program Files\ESET\ESET Smart Security\eplgHooks.dll

        VM Arguments:
        jvm_args: -Djava.library.path=C:\Users\theo\Desktop\workspace\JavaFX1\lib\natives\windows -Dfile.encoding=Cp1253
        java_command: zarkopafilis.koding.io.javafx.Main
        Launcher Type: SUN_STANDARD

        Environment Variables:
        PATH=C:/Program Files/Java/jre6/bin/client;C:/Program Files/Java/jre6/bin;C:/Program Files/Java/jre6/lib/i386;C:\Perl\site\bin;C:\Perl\bin;C:\Ruby200\bin;C:\Program Files\Common Files\Microsoft Shared\Windows Live;C:\Windows\system32;C:\Windows;C:\Windows\System32\Wbem;C:\Windows\System32\WindowsPowerShell\v1.0\;C:\Program Files\Windows Live\Shared;C:\Users\theo\Desktop\eclipse;
        USERNAME=theo
        OS=Windows_NT
        PROCESSOR_IDENTIFIER=x86 Family 6 Model 37 Stepping 5, GenuineIntel

        ---------------  S Y S T E M  ---------------

        OS: Windows 7 Build 7600
        CPU: total 4 (8 cores per cpu, 2 threads per core) family 6 model 37 stepping 5, cmov, cx8, fxsr, mmx, sse, sse2, sse3, ssse3, sse4.1, sse4.2, ht
        Memory: 4k page, physical 2097151k(1257972k free), swap 4194303k(4194303k free)
        vm_info: Java HotSpot(TM) Client VM (14.2-b01) for windows-x86 JRE (1.6.0_16-b01), built on Jul 31 2009 11:26:58 by "java_re" with MS VC++ 7.1
        time: Wed Oct 23 22:00:12 2013
        elapsed time: 0 seconds

    The code:

        Display.setDisplayMode(new DisplayMode(800,600));
        Display.create(); // Error here

    I am using JDK 6.
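
    For isolating the crash, a stripped-down LWJGL 2 test case along these lines (standard LWJGL API, nothing project-specific) takes the rest of the project out of the picture; run it with -Djava.library.path pointing at natives that match the JVM's bitness, ideally on a newer JRE than the 1.6.0_16 build from 2009 shown in the log:

        import org.lwjgl.LWJGLException;
        import org.lwjgl.opengl.Display;
        import org.lwjgl.opengl.DisplayMode;

        public class DisplayTest {
            public static void main(String[] args) {
                try {
                    Display.setDisplayMode(new DisplayMode(800, 600));
                    Display.create();                 // the call that crashes above
                    while (!Display.isCloseRequested()) {
                        Display.update();             // swap buffers / pump events
                    }
                } catch (LWJGLException e) {
                    e.printStackTrace();
                } finally {
                    Display.destroy();
                }
            }
        }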

  • Developer’s Life – Summary of Superhero Articles

    - by Pinal Dave
    Earlier this year, I wrote an article series where I talked about a developer's life and compared it with superheroes. I have gotten an amazing response to this series, and I have been receiving quite a lot of emails suggesting that I should write more blog posts about them. I am not currently planning to write more, but I will soon continue with another series. In this blog post, I have summarized the entire series. Let me know if you want me to write about any superhero, and I will see what I can do about that hero.

    Developer's Life – Every Developer is a Captain America: Captain America was first created as a comic book character in the 1940's as a way to boost morale during World War II. Aimed at a children's audience, his legacy faded away when the war ended. However, he has recently had a major reboot to become a popular movie character that deals with modern issues.

    Developer's Life – Every Developer is the Incredible Hulk: The Incredible Hulk is possibly one of the scariest superheroes out there. All superheroes are meant to be "out of this world" and awe-inspiring, but I think most people will agree when I say the Hulk takes this to the next level. He is the result of an industrial accident, which is scary enough in its own right. Plus, when mild-mannered Bruce Banner is angered, he goes completely out of control and transforms into a destructive monster that he cannot control and has no memories of.

    Developer's Life – Every Developer is a Wonder Woman: We have focused a lot lately on this "superhero series." I love fantasy books and movies, and I feel there is a lot to be learned from them. As I am writing this series, though, I have noticed that every superhero I write about is a man. So today, I would like to talk about the major female superhero: Wonder Woman.

    Developer's Life – Every Developer is a Harry Potter: Harry Potter might not be a superhero in the traditional sense, but I believe he still has a lot to teach us and show us about life as a developer. If you have been living under a rock for the last 17 years, you might not know that Harry Potter is the main character in an extremely popular series of books and movies documenting the education and tribulations of a young wizard (and his friends).

    Developer's Life – Every Developer is Like Transformers: Transformers may not be superheroes – they don't wear capes, they don't have amazing powers outside of their size and folding ability, they're not even human (technically). Part of their enduring popularity is that while we are enjoying over-the-top movies, we are learning about good leadership and strong personal skills.

    Developer's Life – Every Developer is an Iron Man: Iron Man is another superhero who is not naturally "super," but relies on his brain (and money) to turn himself into a fighting machine. While traditional superheroes are still popular, a three-movie franchise and incorporation into the new Avengers series show that Iron Man is popular enough on his own.

    Developer's Life – Every Developer is a Sherlock Holmes: I have been thinking a lot about how developers are like superheroes, and I have written two blog posts now comparing them to Spiderman and Superman. I have a lot of love and respect for developers, and I hope they are enjoying these articles while others learn a little bit about the profession. There is another fictional character who, while not technically a superhero, is very powerful, and who I also think stands as a good example for developers: Sherlock Holmes. Sherlock Holmes is a British detective, first made popular at the turn of the 19th century by author Sir Arthur Conan Doyle. The original Sherlock Holmes was a brilliant detective who could solve the most mind-boggling crimes through simple observation and deduction.

    Developer's Life – Every Developer is a Chhota Bheem: Chhota Bheem is a cartoon character that is extremely popular where I live. He is my daughter's favorite character. I like to say that children love Chhota Bheem more than their parents - it is lucky for us he is not real! Children love Chhota Bheem because he is the absolute "good guy." He is smart, loyal, and strong. He and his friends live in Dholakpur and fight off their many enemies - and always win - in every episode. In each episode, they learn something about friendship, bravery, and being kind to others. Chhota Bheem is a good role model for children, and I think he is a good role model for developers as well.

    Developer's Life – Every Developer is a Batman: Batman is one of the darkest superheroes in the fantasy canon. He does not come to his powers through any sort of magical coincidence or radioactive insect, but through a lot of psychological scarring caused by witnessing the death of his parents. Despite his dark backstory, he possesses a lot of admirable abilities that I feel bear comparison to developers.

    Developer's Life – Every Developer is a Superman: I enjoyed comparing developers to Spiderman so much that I have decided to continue the trend and encourage some of my favorite people (developers) with another favorite superhero: Superman. Superman is probably the most famous superhero, and one of the most inspiring.

    Developer's Life – Every Developer is a Spiderman: I have to admit, Spiderman is my favorite superhero. The most recent movie was recently released in theaters, so it has been at the front of my mind for some time. Spiderman was my favorite superhero even before the latest movie came out, but of course I took my whole family to see the movie as soon as I could! Every one of us loved it, including my daughter. We all left the movie thinking how great it would be to be Spiderman. So, with that in mind, I started thinking about how we are like Spiderman in our everyday lives, especially developers.

    I would like to know which superhero is your favorite hero!

    Reference: Pinal Dave (http://blog.SQLAuthority.com)

  • Adobe Photoshop Vs Lightroom Vs Aperture

    - by Aditi
    Adobe Photoshop is the standard choice for photographers, graphic artists, and web designers. Adobe Photoshop Lightroom and Apple's Aperture are in the same league, but the usage is vastly different. Although Photoshop is the most popular and widely used tool among photographers, in many ways it's less relevant to photographers than ever before, as Lightroom and Aperture are aimed squarely at photographers for photo processing. With this write-up, we are going to help you choose what is right for you, and why.

    Adobe Photoshop: Adobe Photoshop is the most-liked tool for detailed photo editing and design work. Photoshop provides great features for rollovers and image slicing, and includes comprehensive optimization features for producing the highest-quality web graphics with the smallest possible file sizes. You can also create startling animations with it. Designers and editors know how important precise masking is; Photoshop lets you do that with various detailing tools. The art history brush, contact sheets, and the History palette are some of the smart features that add to its viability. Whether you're producing printed pages or moving images, you can work more efficiently and produce better results because of its smooth integration with other Adobe applications. By supporting layer effects, it allows you to quickly add drop shadows, inner and outer glows, bevels, and embossing to layers. It also provides a seamless web graphics workflow. Photoshop is hands-down the best for editing.

    Photoshop cons:
    - Slower, less precise editing features in Bridge.
    - Processing lots of images requires actions, and can be slower than exporting images from Lightroom.
    - Much slower when editing and processing a large number of images.

    Aperture: Apple Aperture is aimed at the professional photographer who shoots predominantly raw files. It helps them manage their workflow and perform their initial raw conversion in a better way. Aperture provides adjustment tools, such as the histogram, to modify color and white balance, but most of the editing of photos is left to Photoshop. It gives users the option of seeing their photographs laid out like slides, or like negatives on a light table, and it boasts stars, color-coding, and easy techniques for filtering and picking images. Aperture has moved a few steps beyond Photoshop in these respects, but most of the editing work is still left to Photoshop, and it features seamless Photoshop integration.

    Aperture pros:
    - Aperture is a step up from the iPhoto software that comes with every Mac, and fairly easy to learn.
    - Adjustments are made in a logical order, from top to bottom of the menu.
    - You can store the images in a library or in any folder you choose.
    - Aperture also works really well with direct Canon files.
    - It is just $79 if you buy it through Apple's App Store.
    - Moving forward, it will run on the iPad, and possibly the iPhone; Adobe products like Lightroom and Photoshop may never offer these options.
    - It has a much nicer and simpler user interface.

    Lightroom: Lightroom does a smashing job of basic fixing and editing. It is the more advanced tool for photographers, who can use it to achieve startling photographic effects. Lightroom has many advanced features, which make it one of the best tools for photographers and put it far ahead of the other two:

    - Nondestructive editing: nothing is actually changed in an image until the photo is exported.
    - Better controls for organizing your photos; Lightroom helps gather a group of photos to use in a slideshow.
    - Lightroom has larger Compare and Survey views of images.
    - A quickly customizable interface: simple keystrokes allow you to perform different tasks, and all Lightroom controls are kept available in panels right next to the photos.
    - An always-available History palette, which isn't cleared when you close Lightroom.
    - You gain more colors to work with compared to Photoshop, and with more precise control.
    - Local control, or adjusting small parts of a photo without affecting anything else, has long been an important part of photography. In Lightroom 2, you can darken, lighten, and affect color, sharpness, and other aspects of specific areas in the photo simply by brushing your cursor across those areas.
    - Photoshop has far more power in its Cloning and Healing Brush tools than Lightroom, but Lightroom offers simple cloning and healing that's nondestructive.
    - Lightroom supports the RAW formats of more cameras than Aperture.
    - Lightroom provides the option of storing images outside the application, in the file system.
    - It costs less than Photoshop.

    Why Photoshop is more advanced than Lightroom: there are countless image-processing plug-ins on the market for doing specialized processing in Photoshop. For example, if your image needs sophisticated noise reduction, you can use the Noiseware plug-in with Photoshop to do a much better job of noise removal than Lightroom can do.

    Lightroom's advantages over Aperture 3:
    - It will always have better integration with Photoshop.
    - Lightroom is backed by a bigger and more active user community (so tutorials, etc. are abundantly available).
    - A better noise reduction tool.
    - The lens-distortion correction tool is perfect, especially for photographers.

    Lightroom cons:
    - You have to import images to work on them.
    - It slows down with over 10,000 images in the catalog.
    - For processing just one or two images, this is a slower workflow.

    Photoshop pros:
    - ACR has the same RAW processing controls as Lightroom.
    - The ACR histogram is specialized to the chosen color space (Lightroom is locked into the ProPhoto RGB color space with an sRGB tone curve).
    - You don't have to import images to open them in Bridge or ACR.
    - You can customize the processing of RAW images with Photoshop actions.

    Pricing and availability: the latest version of Photoshop can be purchased from the Adobe store and Adobe-authorized resellers, and it costs US$999. The latest version of Aperture can be bought for US$199 from the Apple online store or the Mac App Store. You can buy the latest version of Lightroom from the Adobe store or an Adobe-authorized reseller for US$299.

    Related posts: Adobe Photoshop CS5 vs Photoshop CS5 Extended; Web-based Alternatives to Photoshop; 10 Free Alternatives for Adobe Photoshop Software
    Read the article

  • MySQL 5.5 brings in new ways to authenticate users

    - by Georgi Kodinov
Ever wanted to use your server's OS for authenticating MySQL users? Or the corporate LDAP repository? Unfortunately, options like these are plentiful nowadays, and providing hard-coded support for protocol X or service Y is not the best possible idea. MySQL 5.5 has taken a step in the right direction by providing an infrastructure that lets you make the server understand different authentication protocols by creating a set of simple plugins (one for the client and one for the server). So now you can easily extend MySQL to search for and authenticate users in your favorite user directory. In fact, the API supplied is so versatile that we took the opportunity to re-design the current "native" authentication mechanism into a built-in, always-on plugin! OK, let me give you an example. Imagine we have a bunch of users defined in the OS, e.g. a user joro with his respective password, and a MySQL instance running on the same computer. It would not be unexpected to need to let joro access and/or modify MySQL data. The first step is to define him as a MySQL user, and there's a problem right there: MySQL's CREATE USER joro@localhost IDENTIFIED BY 'joros_password' statement needs a password, and this is a password in no way related to the password joro has set up in the OS. What's worse: if joro changes his OS password, this will in no way be reflected in MySQL, so he'll need to change his MySQL password in a separate step. Not very convenient, especially when you have a lot of users. This is a laborious setup for joro's DBA as well: he'll have to disable access in both MySQL and the OS should he decide that joro is off the "nice" list. Now MySQL 5.5 comes to the rescue. Imagine that the smart DBA has created a MySQL server plugin that checks whether the name of the user logging in is a valid and enabled OS account and whether the password supplied to the mysql client matches the OS password, and has called this plugin 'auth_os'. Now all that's left to do is to define joro as a MySQL user that will be authenticated externally. This is done with the following command: CREATE USER 'joro'@'localhost' IDENTIFIED WITH 'auth_os'; Now joro can log in to MySQL using his current OS password. Note: joro is still a valid MySQL user, so you can grant privileges to him just like you would for all other users. What's better: you can have users that authenticate using different mechanisms in the same server, so you can, for example, safely experiment with external authentication for selected users while keeping your current user base operational. What happens under the hood when joro logs in? The server will find out from the user definition that it needs to use a non-default authentication method and will ask the client to "switch" to the appropriate client-side plugin (if the client is not already using it). If the client can't do this (e.g. because it's an old client or doesn't have the necessary plugin available), the server will reject the login. Otherwise the server will let the server-side plugin decide (while possibly talking to the client-side plugin and the OS user directory) whether this is a valid login. If it is, the login process continues as usual; if not, the login is rejected. There's a lot more that MySQL 5.5 can do for you than just the simple case above.
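To put the whole flow in one place, here is a minimal sketch of what the DBA would run, assuming the hypothetical 'auth_os' server plugin from the example has been built into a shared library. The library file name and the sample GRANT are illustrative, not shipped artifacts:

    -- Register the hypothetical server-side plugin (library name is illustrative)
    INSTALL PLUGIN auth_os SONAME 'auth_os.so';

    -- joro authenticates against the OS, so no MySQL password is stored
    CREATE USER 'joro'@'localhost' IDENTIFIED WITH 'auth_os';

    -- joro is still a regular MySQL account: privileges work exactly as before
    GRANT SELECT, INSERT ON test.* TO 'joro'@'localhost';

If joro later changes his OS password, nothing needs to change on the MySQL side: the server-side plugin simply validates whatever the OS considers correct at login time.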
Stay tuned for more advanced use cases like mapping groups of external users to a single MySQL user (so you won't have to have a 1-to-1 mapping between your external user directory and your MySQL user repository) or ways to control the process as a DBA. Or you can simply skip ahead and read the relevant topics in MySQL's excellent online documentation. Or take a look at the example plugins in plugin/auth. Or take a look at the test suite in mysql-test/t/plugin_auth.test.
Changelog entry: http://dev.mysql.com/doc/refman/5.5/en/news-5-5-7.html
Primary new sections:
• Pluggable authentication
• Proxy users
• Client plugin C API functions
Revised sections:
• New PROXY privilege
• New proxies_priv grant table
• Passwords might be external
• New external_user and proxy_user system variables
• New --default-auth and --plugin-dir mysql options
• New MYSQL_DEFAULT_AUTH and MYSQL_PLUGIN_DIR options for mysql_options()
• CREATE USER has an IDENTIFIED WITH clause to specify an auth plugin
• GRANT has a PROXY privilege and an IDENTIFIED WITH clause to specify an auth plugin
• The data structure for writing client plugins

    Read the article

  • jQuery 1.4 Opacity and IE Filters

    - by Rick Strahl
Ran into a small problem today with my client-side jQuery library after switching to jQuery 1.4: a shadow plugin that I use to provide drop shadows for absolutely positioned elements broke. For Mozilla and WebKit browsers the -moz-box-shadow and -webkit-box-shadow CSS attributes are used, but for IE a separate element is created to provide the shadow that underlays the original element, along with a Blur filter to provide the fuzziness in the shadow. Some of the key pieces are:

    var vis = el.is(":visible");
    if (!vis)
        el.show(); // must be visible to get .position
    var pos = el.position();

    if (typeof shEl.style.filter == "string")
        sh.css("filter", 'progid:DXImageTransform.Microsoft.Blur(makeShadow=true, pixelradius=3, shadowOpacity=' + opt.opacity.toString() + ')');

    sh.show()
      .css({ position: "absolute",
             width: el.outerWidth(), height: el.outerHeight(),
             opacity: opt.opacity, background: opt.color,
             left: pos.left + opt.offset, top: pos.top + opt.offset });

This has always worked in previous versions of jQuery, but with 1.4 the original filter no longer works. It appears that applying the opacity after the original filter wipes out the original filter. IOW, the opacity filter is not applied incrementally, but absolutely, which is a real bummer. Luckily the workaround is relatively easy: just switch the order in which the opacity and the filter are applied. If I apply the blur after the opacity, I get the correct behavior back, with both opacity and blur in place:

    sh.show()
      .css({ position: "absolute",
             width: el.outerWidth(), height: el.outerHeight(),
             opacity: opt.opacity, background: opt.color,
             left: pos.left + opt.offset, top: pos.top + opt.offset });

    if (typeof shEl.style.filter == "string")
        sh.css("filter", 'progid:DXImageTransform.Microsoft.Blur(makeShadow=true, pixelradius=3, shadowOpacity=' + opt.opacity.toString() + ')');

While this works, it still causes problems in other areas where opacity is set implicitly in code, such as fade operations or, in the case of my shadow component, the style/property watcher that keeps the shadow and main object linked. Both of these may set the opacity explicitly, and that is still broken, as it will effectively kill the blur filter. This seems like a really strange design decision by the jQuery team, since the jQuery css function clearly does the right thing when setting filters. Internally however, the opacity setter doesn't use .css() but instead hardcodes the filter, which, given jQuery's usual flexibility and smart code, seems really inappropriate. The following is from jQuery.js 1.4:

    var style = elem.style || elem, set = value !== undefined;

    // IE uses filters for opacity
    if ( !jQuery.support.opacity && name === "opacity" ) {
        if ( set ) {
            // IE has trouble with opacity if it does not have layout
            // Force it by setting the zoom level
            style.zoom = 1;

            // Set the alpha filter to set the opacity
            var opacity = parseInt( value, 10 ) + "" === "NaN" ? "" : "alpha(opacity=" + value * 100 + ")";
            var filter = style.filter || jQuery.curCSS( elem, "filter" ) || "";
            style.filter = ralpha.test(filter) ? filter.replace(ralpha, opacity) : opacity;
        }
        return style.filter && style.filter.indexOf("opacity=") >= 0 ?
            (parseFloat( ropacity.exec(style.filter)[1] ) / 100) + "" : "";
    }

You can see here that the style is explicitly set in code, rather than relying on $.css() to assign the value, resulting in the old filter getting wiped out.
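Before looking at how jQuery 1.3.2 handles this internally, here is the ordering workaround wrapped into a small reusable helper. This is only a sketch based on the plugin code above; the function name and parameter list are mine, not part of the published plugin:

    // Apply the shadow CSS so that opacity is set first and the IE Blur filter last.
    // Setting opacity last would replace the element's filter string and lose the blur.
    function applyShadowCss(sh, el, pos, opt) {
        sh.show().css({
            position: "absolute",
            width: el.outerWidth(),
            height: el.outerHeight(),
            opacity: opt.opacity,   // on old IE this writes an alpha() filter
            background: opt.color,
            left: pos.left + opt.offset,
            top: pos.top + opt.offset
        });
        // Only old IE exposes style.filter as a string; re-apply the blur afterwards
        if (typeof sh[0].style.filter == "string") {
            sh.css("filter", 'progid:DXImageTransform.Microsoft.Blur(makeShadow=true, pixelradius=3, shadowOpacity=' + opt.opacity.toString() + ')');
        }
    }

Any code path that later touches opacity (fades, the property watcher) needs to re-apply the blur filter in the same way.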
For comparison, the same code in jQuery 1.3.2 looks a little different:

    // IE uses filters for opacity
    if ( !jQuery.support.opacity && name == "opacity" ) {
        if ( set ) {
            // IE has trouble with opacity if it does not have layout
            // Force it by setting the zoom level
            elem.zoom = 1;

            // Set the alpha filter to set the opacity
            elem.filter = (elem.filter || "").replace( /alpha\([^)]*\)/, "" ) +
                (parseInt( value ) + '' == "NaN" ? "" : "alpha(opacity=" + value * 100 + ")");
        }
        return elem.filter && elem.filter.indexOf("opacity=") >= 0 ?
            (parseFloat( elem.filter.match(/opacity=([^)]*)/)[1] ) / 100) + '' : "";
    }

Offhand I'm not sure why the latter works better, since it too is assigning the filter. However, when checking with the IE script debugger I can see that there are actually a couple of filter tags assigned when using jQuery 1.3.2, but only one when I use jQuery 1.4. Note also that the jQuery 1.3 compatibility plugin for jQuery 1.4 doesn't address this issue either.
Resources: ww.jquery.js (shadow plug-in $.fn.shadow)
© Rick Strahl, West Wind Technologies, 2005-2010. Posted in jQuery

    Read the article

  • Dynamic Grouping and Columns

    - by Tim Dexter
Some good collaboration between myself and Kan Nishida (Oracle BIP Consulting) over at bipconsulting on a question that came in yesterday to an internal mailing list. Is there a way to allow columns to be placed into a template dynamically? This would be similar to the Answers column selector. A customer has said Crystal can do this and I am trying to see how BI Pub can do the same. Example: a report has Regions as a dimension in a table, and they want the user to select a parameter that will insert either Units or Dollars without having to create multiple templates. Now whether Crystal can actually do it or not is another question; can Publisher? Yes we can! Kan took the first stab. His approach was to swap out columns in a table in the report. Some quick steps:
1. Create a parameter from the BIP server UI.
2. Declare the parameter in the RTF template. You can check this post to see how to declare the parameter from the server: http://bipconsulting.blogspot.com/2010/02/how-to-pass-user-input-values-to-report.html
3. Use the parameter value to decide whether a particular column needs to be displayed or not. You can use the <?if@column:.....?> syntax for a column-level IF condition. The if@column syntax is covered in the user documentation.
This would allow a developer to create a report with one or more parameters that let the user pick a column to be included in the report. I took a slightly different tack. With the mention of the column selector in the Answers report, I took that to mean the user wanted to select more of a dimensional column and then have the report recalculate all its totals and subtotals based on that selected column. This is a bit more involved and needs some smart XSL and XPATH expressions, but it is still very doable. The user can select a column as a parameter, and that is passed to the template rather than the query. The parameter value that is actually passed is the element name that you want to regroup the data by. Inside the template we then reference that parameter value in our for-each-group loop. That's where we need the tricky XSL/XPATH code to get the regrouping to happen. At this juncture, I need to hat-tip Klaus for his article on dynamic sorting that he wrote back in 2006. I basically took his sorting code and applied it to the for-each loop. You can follow both of Kan's first two steps above, i.e.
Create a parameter from the BIP server UI - this just needs to be based on a 'list'-type list of values with name/value pairs, e.g. Department/DEPARTMENT_NAME, Job/JOB_TITLE, etc. The user picks the 'friendly' value and the server passes the element name to the template.
Declare the parameter in the RTF template - been here before lots of times, right? <?param@begin:group1;'"DEPARTMENT_NAME"'?> I have used a default value so that I can test the functionality inside the template builder (notice the single and double quotes).
The next step is to use the template builder to build a re-grouped report layout. It does not matter if it's hard-coded right now; we will add in the dynamic piece next. Once you have a functioning template that re-groups correctly, open up the for-each-group field and modify it to use the parameter: <?for-each-group:ROW;./*[name(.) = $group1]?> 'group1' is my grouping parameter, declared above. We need the XPATH expression to find the column in the XML structure that matches the one passed by the parameter. It is essentially looking through the data tree for a match.
We can show the actual grouping value in the report output with a similar XPATH expression: <?./*[name(.) = $group1]?> In my example, I took things a little further so that I could have a dynamic label for the parameter value. For instance, if I am using MANAGER as the parameter, I want to show: Manager: Tim Dexter. My XML elements are readable, e.g. DEPARTMENT_NAME, so it's a simple case of replacing the underscore with a space and then 'initcapping' the result: <?xdoxslt:init_cap(translate($group1,'_',' '))?> With this in place, the user can now select a grouping column in the BIP report viewer and the layout will re-group the data and any calculations based on that column. I built a group-above report, but you could equally build the group-left version to truly mimic the Answers column selector. If you are interested, you can get an example report, sample data and layout template here. Of course, you can combine Klaus' dynamic sorting, Kan's conditional column approach and this dynamic grouping to build a real kick-ass report for users that will keep them happy for hours.
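Pulling the pieces together, the relevant fields in the RTF template end up looking roughly like this. This is only a sketch: the closing tag placement and the placeholder row are illustrative, and your element names will differ:

    <?param@begin:group1;'"DEPARTMENT_NAME"'?>
    <?for-each-group:ROW;./*[name(.) = $group1]?>
        <?xdoxslt:init_cap(translate($group1,'_',' '))?>: <?./*[name(.) = $group1]?>
        ... detail rows, totals and subtotals for the group ...
    <?end for-each-group?>

Because the grouping expression resolves the element by name at runtime, the same template serves whichever column the user picks in the report viewer.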

    Read the article

  • Knockoutjs - stringify to handling observables and custom events

    - by Renso
Goal: Once your viewmodel has been built and populated with data, at some point the goal is to persist that data to the database (or some other medium). Regardless of where you want to save it, your client-side viewmodel needs to be converted to a JSON string and sent back to the server.
Environment considerations:
jQuery 1.4.3+
Knockoutjs version 1.1.2
How to: So let's set the stage: you are using Knockoutjs and you have a viewmodel with some Knockout dependencies. You want to make sure it is in the proper JSON format and, via an ajax post, send it to the server for persistence.
First order of business is to deal with the viewmodel (JSON) object. To most developers the JSON stringifier sounds familiar: it converts JavaScript data structures into JSON text. JSON does not support cyclic data structures, so be careful not to hand cyclical structures to the JSON stringifier. You may ask: is this the best way to do it? What about those observables and other Knockout properties whose underlying values I want persisted rather than their function wrappers? Not sure if you were aware, but KO already has a method, ko.utils.stringifyJson() - it's mostly just a wrapper around JSON.stringify (which is native in some browsers, and can be made available in others by referencing json2.js). What it does that the regular stringify does not: it automatically converts an observable, dependentObservable, or observableArray to its underlying value before serializing to JSON. Hold on! There is a new feature in this version of Knockout: ko.toJSON. It is part of the core library and it will clone the view model's object graph, so you don't mess the model up after you have stringified it and unwrapped all its observables. It is smart enough to avoid reference cycles, and since you are using the MVVM pattern, it assumes you are not trying to reference DOM nodes from your view model.
Wait a minute: I can already see this info on the http://knockoutjs.com/examples/contactsEditor.html website, so why mention it all here? First of all, this is a much nicer blog, no orange :-) At this time, you may want to have a look at that example and see what I am talking about. See the save event, and how they stringify only the view model's contacts? That's cool, but what if your view model itself is the representation of the object you want to persist, meaning it has no single property holding the JSON object to save; it is the view model itself? The example at http://knockoutjs.com/examples/contactsEditor.html assumes you have a list of contacts you may want to persist. In the example here, you want to persist the whole view model.
The viewmodel here looks something like this:

    var myViewmodel = {
        accountName: ko.observable(""),
        accountType: ko.observable("Active")
    };
    myViewmodel.isItActive = ko.dependentObservable(function () {
        return myViewmodel.accountType() == "Active";
    });
    myViewmodel.clickToSaveMe = function() {
        SaveTheAccount();
    };

Here is the function in charge of saving the account:

    function SaveTheAccount() {
        $.ajax({
            data: ko.toJSON(myViewmodel),
            url: $('#ajaxSaveAccountUrl').val(),
            type: "POST",
            dataType: "json",
            async: false,
            success: function (result) {
                if (result && result.Success == true) {
                    $('#accountMessage').html('<span class="fadeMyContainerSlowly">The account has been saved</span>').show();
                    FadeContainerAwaySlowly();
                }
            },
            error: function (xmlHttpRequest, textStatus, errorThrown) {
                alert('An error occurred: ' + errorThrown);
            }
        }); //ajax
    };

Try running this and your browser will eventually freeze up or crash. Firebug will tell you that the first function in your model keeps getting called, firing infinitely. What is happening is that Knockout serializes the view model to a JSON string by traversing the object graph and firing off the functions, again and again. Not sure why it does that, but it does. So what is the workaround? Nullify your function members and then post the model:

    myViewmodel.clickToSaveMe = null;   // strip the handler so KO's traversal doesn't fire it
    var lightweightModel = myViewmodel;
    ...
    data: ko.toJSON(lightweightModel),

So then I traced the JSON string on the server and found it having issues with primitive types (C#, by the way). I changed ko.toJSON(model) to ko.toJS(model), and that solved my problem. Of course you could just create a property on the viewmodel for the account itself, so you only have to serialize that property and not the entire viewmodel. If that is an option, it is the way to go. If your view model contains other properties that you also want to post, that would not be an option, and now you'll know what to watch out for. Hope this helps.
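To avoid repeating the nullify-then-serialize dance, the cleanup can be wrapped in a small helper. A minimal sketch, assuming Knockout is loaded; the function name is mine, and the stripping loop only handles top-level function members:

    // Clone the graph with observables unwrapped, drop function members, and stringify.
    function toPostableJson(viewmodel) {
        var plain = ko.toJS(viewmodel);   // plain object: every observable replaced by its value
        for (var key in plain) {
            if (typeof plain[key] === "function") {
                delete plain[key];        // e.g. click handlers such as clickToSaveMe
            }
        }
        return JSON.stringify(plain);
    }

It would then be used in the ajax call as data: toPostableJson(myViewmodel), leaving the live viewmodel untouched.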

    Read the article

  • My History with Agile

    - by Robert May
I'm going to write my history with Agile here. That way, in future posts, I can refer back to it instead of typing it out in a post that contains information you may actually want to read. Note that I'm actually a pretty senior developer, and do lots of technical interviews. I'm an Agile fan because of the difference it makes in people's lives and the improvement in quality it brings, and I'll sacrifice my own technological advancement to help teams.
Management History
I started management pretty early in my career, with the first job that I ever had. I actually do NOT have a CS or similar degree; I have a Bachelor's of Business Administration with an emphasis in Computer Information Systems. My first management gigs were around call center work and were very schedule oriented. I didn't understand the true value of teams, and I'm ashamed to admit that I actually installed a fingerprint scanner as a time clock in this job. I shudder to think of the impact that I had on the team spirit. I didn't even trust them enough to fill out their time cards correctly. How sad. I was managing nearly 100 people in this position, with the help of a great set of subordinates. I did try to come up with reward programs for the team, but again, I didn't understand the concept of team, so instead of letting the team determine how the rewards should work, I mandated from on high, which isn't a good thing. I was told by people whom I respected a lot that I wasn't the type who would be a good manager. They said it because I was a computer geek, since they didn't understand good management either, but in retrospect they were right about me then. I was too green. After my first job, I went on to other jobs, and with the exception of one, I've managed people at all of them. The rest of the management story is important for understanding Agile, so I'll save it for my next post.
Technical History
I've been in software development for many, many years. I technically started programming on a Commodore 64 in BASIC. I didn't know that I was programming, but I was sure having fun. That was followed by batch files, Gorilla hacking (I always had to win), WordPerfect macro programming and other things that taught me the basics. My first "real" job was with a telephone company, and that's where I made my first database application in DataEase, wrote my first VBA app and started using real programming tools, like Turbo Pascal and VB3 through VB5, and semi-real tools like RPG and VisualRPG. I wrote my first web page in 1994, and built my first data-driven web page in 1995 using perlDB. You really can do anything with Perl. At this time, I also started a Linux-based internet service provider that is still in operation today. One of the people I worked with is now a Microsoft employee building and designing frameworks you probably know well. Smart guy. I also built my first ASP applications connecting to SQL Server 6.5, set up Exchange 5.5 for the company, and did plenty of other system administration work. I'm a programmer by choice, mostly because I don't really like PC support.
From there, I went on to a large state agency, where I got to see and maintain true waterfall projects. Five years of maintaining the 200 VB COM+ (MTS, actually) DLLs that were used to calculate a single number is a long time. That was all Microsoft DNA technologies: SQL Server and VB6 were the tools of choice, although .NET started to be a factor near the end of my employment. I did some heavy XML work at this job and even wrote an XSD parser and validator in VB6 that was a shim until MSXML 3.0 came out. Prior to 3.0, XSDs weren't supported, and I didn't want to write DTDs. Ironically, jobs after this were more generic. I pretty much settled in on the .NET framework and its revisions. Lots of WPF, some Silverlight, lots of ASP.NET, some SQL Azure, lots of SQL Server, some Oracle, but I don't think I was as passionate about development and technologies anymore. I was more into the management of development. I like people.
Technorati Tags: Agile,history

    Read the article

  • BI&EPM in Focus June 2014

    - by Mike.Hallett(at)Oracle-BI&EPM
Applications Webcast Centre – A Library of Discussion and Research for Best Practice:
• Achieving Reliable Planning, Budgeting and Forecasting
• Talent Analytics and Big Data – Is HR ready for the challenge
• Enterprise Data – The cost of non-quality
Customers
• Josephine Niemiec from ADP talks about Oracle Hyperion Workforce Planning at Collaborate 2014 (link) Video
• Chris Nelms from Ameren talks about Oracle BI Spend and Procurement Analytics at Collaborate 2014 (link) Video
• Leggett & Platt Leverages Oracle Hyperion EPM and Demantra (link) Video
• Pella Corporation Accelerates Close Cycle by Cutting Time for Financial Consolidation from Three Days to Less Than One Day (link)
• Secretaría General de Administración de Justicia en España Enhances Citizen Services with Near-Real-Time Business Intelligence Gleaned from 500 Databases (link)
• Bellco Credit Union Speeds Budget Development by 30%—Gains Insight into Specific Branch and Financial Product Profitability (link) Video
• QDQ media Speeds up Financial Reporting by 24x, Gains Business Agility, and Integrates Seamlessly into Corporate Accounting System (link)
• Westfield Group Maximizes Shopping Mall Revenue, Shortens Year-End Financial Consolidation by 75% (link)
• IL&FS Transportation Networks Shortens Financial Consolidation and Reporting Cycle by Eight Days, Gains In-Depth Insight into Business Performance (link)
• Angel Trains Optimizes Rail Operations for Purchasing, Sourcing, and Project Management to Meet Challenges of Evolving Rail Industry (link)
Enterprise Performance Management
• June 11, Oracle Utrecht, NL: Morning session: Explore Planning and Budgeting in the Cloud (link)
• June 12, London: PureApps Presents: Best Practice Financial Consolidation and Reporting Workshop (link)
• July 3, Koln: Oracle Hyperion Business Analytics Roundtable (link)
• Blog: What's Your Tax Strategy? Automate the Operational Transfer Pricing Process (link)
• YouTube Video: Automate Tax Reporting with Oracle Hyperion Tax Provision (link)
• YouTube Video: Introducing Oracle Hyperion Planning's Tablet Optimized Interface (link)
• OracleEPMWebcasts @ YouTube (link)
Partner webcasts:
• Wednesday, 4 June, 5.00 GMT - Case Study: Lessons Learned from Edgewater Ranzal's Internal Implementation of Oracle Planning & Budgeting Cloud Service (PBCS) - Learn more and register here!
• Thursday, 5 June, 4.00 GMT - Achieving Accountable Care Using Oracle Technology - Learn more and register here!
• Tuesday, 17 June, 4.00 GMT - Optimizing Performance for Oracle EPM Systems - Learn more and register here!
• Oracle University Blog: The Coolest Features Available with Oracle Hyperion 11.1.2.3 – Training from OU to help you to best use them (link)
Support:
• Proactive Support: EPM Hyperion Planning 11.1.2.3.500 Using RMI Service [Blog]
• Proactive Support: Planning and Budgeting Cloud Service Videos (link)
• Planning and Budgeting Cloud Service (PBCS) 11.1.2.3.410 Patch Bundle [Doc ID 1670981.1]
• Hyperion Analytic Provider Services 11.1.2.2.106 Patch Set Update [Doc ID 1667350.1]
• Hyperion Essbase 11.1.2.2.106 Patch Set Update [Doc ID 1667346.1]
• Hyperion Essbase Administration Services 11.1.2.2.106 Patch Set Update [Doc ID 1667348.1]
• Hyperion Essbase Studio 11.1.2.2.106 Patch Set Update [Doc ID 1667329.1]
• Hyperion Smart View 11.1.2.5.210 Patch Set Update [Doc ID 1669427.1]
• Using HPCM, HSF or DRM Communities (link)
Business Intelligence
• June 12, Birmingham, UK: Oracle Big Data at Work - Use Cases and Architecture (link)
• June 17, London: Oracle at Cloud & Big Data World Forums (link)
• June 17, Partner Webcast: Transform your Planning Capabilities with Peloton's CloudAccelerator for Oracle PBCS (link)
• June 19, London: Oracle at the Whitehall Media Big Data Analytics Conference and Exhibition (link)
• June 19, London: Partner Event - Agile BI Conference by Peak Indicators [link]
• June 25, Munich: Oracle Special Day auf der TDWI 2014 Konferenz (link)
• July 15, London: Oracle Endeca Information Discovery Workshop (link)
• July 16, London: BI Applications Workshop – Financial Analytics & Procurement Analytics (link)
• July 17, London: BI Applications Workshop – HR Analytics (link)
• Milan, Italy: L'Osservatorio Big Data Analytics & Business Intelligence with Politecnico di Milano (link)
• OBIA 11.1.1.8.1 - Now Available [Blog]
• What's New in OBIA 11.1.1.8.1 [Blog]
• BI Blog: A closer look at Oracle BI Applications 11.1.1.8.1 release (link)
• Press Release: BI Applications Deliver Greater Insight into Talent and Procurement (link)
• Support Blog: OBIA 11.1.1.8.1 Upgrade Guide & Documentation (link)
• YouTube Video: Glenn Hoormann of Ludus talks to us about Oracle Business Intelligence and ERP at Collaborate 2014 (Link)
• YouTube Video: Performance Architects talks about key BI and Mobile trends, including Endeca at Collaborate 2014 (link)
• Big Data Blog: 3 Keys for Using Big Data Effectively for Enhanced Customer Experience (link)
• Big Data Lite Demo VM 3.0 Now Available on OTN
• BI Blog: Data Relationship Governance - Workflow in a Bottle (link)
• MDM Blog: Register for Product Data Management Weekly Cloudcasts (link)
• MDM Blog: Improve your Customer Experience with High Quality Information (link)
• MDM Blog: Big Data Challenges & Considerations (link)
• Oracle University: Oracle BI Applications 11g: Implementation using ODI (link)
• Proactive Support: Monthly Index [Blog]
• My Oracle Support: Partner Accreditation for Business Analytics Support [Blog]
• OBIEE 11g Test-to-Production (T2P) / Clone Procedures Guide [Blog]

    Read the article

  • The way I think about Diagnostic tools

    - by Daniel Moth
Every piece of software has issues, or as we like to call them, "bugs". That is not a discussion point, just a mere fact. It follows that an important skill for developers is to be able to diagnose issues in their code. Of course we need to advance our tools and techniques so we can prevent bugs getting into the code (e.g. unit testing), but beyond designing great software, diagnosing bugs is an equally important skill. To diagnose issues, the most important assets are good techniques, skill, experience, and maybe talent. What also helps is having good diagnostic tools, and what helps further is knowing all the features that they offer and how to use them. The following classification is how I like to think of diagnostics. Note that, like with any attempt to bucketize anything, you run into overlapping areas and blurry lines; nevertheless, I will continue sharing my generalizations ;-)
It is important to identify at the outset whether you are dealing with a performance issue or a correctness issue.
If you have a performance issue, use a profiler. I hear people saying "I am using the debugger to debug a performance issue", and that is fine, but do know that a dedicated profiler is the tool for that job. Just because you don't need one all the time, profilers typically cost more, and you are not as familiar with them as you are with the debugger, doesn't mean you shouldn't invest in one rather than exclusively using the wrong tool for the job. Visual Studio has a profiler and a concurrency visualizer (for profiling multi-threaded apps).
If you have a correctness issue, then you have several options - that's next :-)
This is how I think of identifying a correctness issue:
Do you want a tool to find the issue for you at design time? The compiler is such a tool - it gives you an exact list of errors. Compilers now also offer warnings, which is their way of saying "this may be an error, but I am not smart enough to know for sure". There are also static analysis tools, which go a step further than the compiler in identifying issues in your code, sometimes with the aid of code annotations and other times just by pointing them at your raw source. An example is FxCop, and there is much more in Visual Studio 11 Code Analysis.
Do you want a tool to find the issue for you with code execution? Just like static tools, there are also dynamic analysis tools that, instead of statically analyzing your code, analyze what your code does at runtime. Whether you have to set up some unit tests to invoke your code at runtime, or have to manually run your app (and interact with it) under the tool, or have to use a script to execute your binary under the tool… that varies. The result is still a list of issues for you to address after the analysis is complete, or a pause of the execution when the first issue is encountered. If a code path was not taken, no analysis for it will exist, obviously. An example is the GPU race detection tool that I'll be talking about on the C++ AMP team blog. Another example is the MSR concurrency CHESS tool.
Do you want you to find the issue at design time using a tool? Perform a code walkthrough on your own or with colleagues. There are code review tools that go beyond just diffing sources, and they help you with that aspect too. For example, there is a new one in Visual Studio 11, and searching with my favorite search engine yielded this article based on the Developer Preview.
Do you want you to find the issue with code execution? Use a debugger - let's break this down further next.
This is how I think of debugging:
There is post-mortem debugging. That means your code has executed and you did something in order to examine what happened during its execution. This can vary from manual printf and other tracing statements, to trace events (e.g. ETW), to taking dumps. In all cases, you are left with some artifact that you examine after the fact (after code execution) to discern what took place, hoping it will help you find the bug. Learn how to debug dump files in Visual Studio.
There is live debugging. I will elaborate on this in a separate post, but this is where you inspect the state of your program during its execution and try to find what the problem is. More from me in a separate post on live debugging.
There is a hybrid of live plus post-mortem debugging. This is, for example, what tools like IntelliTrace offer.
If you are a tools vendor interested in the diagnostics space, it helps to understand where in the above classification your tool excels and where its primary strength is, so you can market it as such. Then it helps to see which of the other areas above your tool touches on, and how you can make it even better there. Finally, see which areas your tool doesn't help with at all, and evaluate whether it should or whether it should continue to stay clear. Even though the classification helps us think about this space, the reality is that the best tools are either extremely excellent in only one of these areas, or, more often, very good across a number of them. Another approach is to offer a toolset covering all areas, with appropriate integration and hand-off points from one to the other. Anyway, with that brain dump out of the way, in follow-up posts I will dive into live debugging, and specifically live debugging in Visual Studio - stay tuned if that interests you. Comments about this post by Daniel Moth are welcome at the original blog.

    Read the article

  • Recent Innovations to ILOM

    - by B.Koch
by Josh Rosen
If you are wondering how Oracle can make some of the most advanced, reliable, and fault-tolerant servers on the market, look no further than Oracle Integrated Lights Out Manager, or ILOM. We build ILOM into every server we create, from Oracle x86 systems such as the X3-2 to the SPARC T-Series family. Oracle ILOM is an embedded service processor, but it's really more than that. It's a computer within a computer. It's smart, it's tightly integrated into all aspects of the server's operation, and it's a big reason why Oracle servers are used for some of the most mission-critical workloads out there.
To understand the value of ILOM, there is no better place to start than its fault management capability. We have taken the sophisticated fault management architecture from Solaris, developed and refined over a decade, and built it into each and every ILOM. ILOM detects a potential issue at its earliest stage by watching low-level sensors. If the root cause of a problem is not clear from a single error reading, ILOM will look for other clues and combine multiple pieces of information to correctly identify a failing component.
ILOM provides peace of mind. We tailor our fault management for each new server platform that we produce. You can rest assured that it is always actively keeping the server healthy, and if there is a problem, you can be confident it will let you know by sending a notification by e-mail or trap.
We also heard IT managers tell us they needed a Ph.D. in computer engineering to manage today's servers. It doesn't have to be that way. Thanks to the latest innovations in Oracle ILOM, we present hardware inventory and status in a way that makes sense to anyone. Green means everything is healthy and red means something is wrong. When a component needs to be replaced, a clear message indicates where the problem is and points you to a knowledge article about it. It's that simple. Simpler management and simple interfaces mean reduced complexity and lower costs to manage, and we know that's really important.
ILOM does all this while also providing the advanced service-processor features you depend on for managing enterprise-class systems. You can remotely control the server power, interact with a virtual video console for the server, and mount media on the server remotely. There is no need to spend money on a KVM switch to get this functionality.
And when people hear how advanced ILOM is, they can't believe it is free. All features are enabled and included with each Oracle server that you buy. There are no advanced licenses you need to purchase or features to unlock.
Configuring ILOM has also never been easier. It is now possible to configure almost all aspects of the server directly from ILOM, including changing BIOS settings, persistently modifying boot order, and optimizing power settings.
But Oracle's innovation does not stop with ILOM. Oracle has engineered Oracle Enterprise Manager Ops Center to integrate directly with ILOM, providing centralized management across all of our servers. Ops Center will discover each of your Oracle servers over the network by searching for ILOMs. When it finds one, it knows how to communicate with ILOM to monitor and configure that server from application to disk. Since every server that Oracle produces, from x86 systems to the SPARC T-Series, up and down the line, comes with Oracle ILOM, you can manage all Oracle servers in the same way.
And while all of our servers may have different components on the inside, each with their specialized functions, the way you integrate them and the way you monitor and manage them is exactly the same. Oracle ILOM is state-of-the-art. If you are looking for a server that makes systems management simple and is easy to integrate and maintain, check out the latest advances in Oracle ILOM.
Josh Rosen is a Principal Product Manager at Oracle and previously spent more than a decade as a developer and architect of system management software. Josh has worked on system management for many of Oracle's hardware products, ranging from the earliest blade systems to the latest Oracle x86 servers.

    Read the article

< Previous Page | 77 78 79 80 81 82 83 84 85 86 87 88  | Next Page >