Search Results

Search found 22627 results on 906 pages for 'program transformation'.


  • Is copy-paste programming bad?

    - by ring bearer
    With plain Google, as well as Google Code Search, it is easy to find out how to use some resource or solve a particular problem (a Java class, an FTP block in Perl, and so on), so developers are tempted to simply copy and paste the code (as a form of re-use). Is this incompetence? I have done this myself, though I think I am a better programmer than many others I have seen. Who has the time to RTFM? In this age of information abundance, I do not think copy-paste programming is bad. Isn't that what sites like Stack Overflow encourage anyway? People ask, "Here is my problem - how do I solve it?", someone posts complete code, and the person who asked simply copies and pastes the most-voted answer, no matter how small the problem is. I am working with a bunch of young coders who rely heavily on the internet to get their job done, and I can see the convenience: you may be quite good with algorithms and such, but if you do not know how to use a BufferedReader in Java, would you read the complete Javadoc for BufferedReader, or look up an example of using it somewhere? Copy-pasting and modifying code gets the job done. What are the real dangers of copy-paste coding that can impact their competency?

    Read the article

  • Experience your music in a whole new way with Zune for PC

    - by Matthew Guay
    Tired of the standard Media Player look and feel, and want something new and innovative?  Zune offers a fresh, new way to enjoy your music, videos, pictures, and podcasts, whether or not you own a Zune device. Microsoft started out on a new multimedia experience for PCs and mobile devices with the launch of the Zune several years ago.  The Zune devices have been well received and noted for their innovative UI, and the Zune HD’s fluid interface is the foundation for the widely anticipated Windows Phone 7.  But regardless of whether or not you have a Zune Device, you can still use the exciting new UI and services directly from your PC.  Zune for Windows is a very nice media player that offers a music and video store and wide support for multimedia formats including those used in Apple products.  And if you enjoy listening to a wide variety of music, it also offers the Zune Pass which lets you stream an unlimited number of songs to your computer and download 10 songs for keeps per month for $14.99/month. Or you can do a pre-paid music card as well.  It does all this using the new Metro UI which beautifully shows information using text in a whole new way.  Here’s a quick look at setting up and using Zune on your PC. Getting Started Download the installer (link below), and run it to begin setup.  Please note that Zune offers a separate version for computers running the 64 bit version of Windows Vista or 7, so choose it if your computer is running these. Once your download is finished, run the installer to setup Zune on your computer.  Accept the EULA when prompted. If there are any updates available, they will automatically download and install during the setup.  So, if you’re installing Zune from a disk (for example, one packaged with a Zune device), you don’t have to worry if you have the latest version.  Zune will proceed to install on your computer.   It may prompt you to restart your computer after installation; click Restart Now so you can proceed with your Zune setup.  The reboot appears to be for Zune device support, and the program ran fine otherwise without rebooting, so you could possibly skip this step if you’re not using a Zune device.  However, to be on the safe side, go ahead and reboot. After rebooting, launch Zune.  It will play a cute introduction video on first launch; press skip if you don’t want to watch it. Zune will now ask you if you want to keep the default settings or change them.  Choose Start to keep the defaults, or Settings to customize to your wishes.  Do note that the default settings will set Zune as your default media player, so click Settings if you wish to change this. If you choose to change the default settings, you can change how Zune finds and stores media on your computer.  In Windows 7, Zune will by default use your Windows 7 Libraries to manage your media, and will in fact add a new Podcasts library to Windows 7. If your media is stored on another location, such as on a server, then you can add this to the Library.  Please note that this adds the location to your system-wide library, not just the Zune player. There’s one last step.  Enter three of your favorite artists, and Zune will add Smart DJ mixes to your Quickplay list based on these.  Some less famous or popular artists may not be recognized, so you may have to try another if your choice isn’t available.  Or, you can click Skip if you don’t want to do this right now. Welcome to Zune!  This is the default first page, QuickPlay, where you can easily access your pinned and new items.   
If you have a Zune account, or would like to create a new one, click Sign In on the top. Creating a new account is quick and simple, and if you’re new to Zune, you can try out a 14 day trial of Zune Pass for free if you want. Zune allows you to share your listening habits and favorites with friends or the world, but you can turn this off or change it if you like. Using Zune for Windows To access your media, click the Collection link on the top left.  Zune will show all the media you already have stored on your computer, organized by artist and album. Right-click on any album, and you choose to have Zune find album art or do a variety of other tasks with the media.   When playing media, you can view it in several unique ways.  First, the default Mix view will show related tracks to the music you’re playing from Smart DJ.  You can either play these fully if you’re a Zune Pass subscriber, or otherwise you can play 30 second previews. Then, for many popular artists, Zune will change the player background to show pictures and information in a unique way while the music is playing.  The information may range from history about the artist to the popularity of the song being played.   Zune also works as a nice viewer for the pictures on your computer. Start a slideshow, and Zune will play your pictures with nice transition effects and music from your library. Zune Store The Zune Store offers a wide variety of music, TV shows, and videos for purchase.  If you’re a Zune Pass subscriber, you can listen to or download any song without purchasing it; otherwise, you can preview a 30 second clip first. Zune also offers a wide selection of Podcasts you can subscribe to for free. Using Zune for PC with a Zune Device If you have a Zune device attached to your computer, you can easily add media files to it by simply dragging them to the Zune device icon in the left corner.  In the future, this will also work with Windows Phone 7 devices. If you have a Zune HD, you can also download and add apps to your device. Here’s the detailed information window for the weather app.  Click Download to add it to your device.   Mini Mode The Zune player generally takes up a large portion of your screen, and is actually most impressive when run maximized.  However, if you’re simply wanting to enjoy your tunes while you’re using your computer, you can use the Mini mode to still view music info and control Zune in a smaller mode.  Click the Mini Player button near the window control buttons in the top right to activate it. Now Zune will take up much less of your desktop.  This window will stay on top of other windows so you can still easily view and control it. Zune will display an image of the artist if one is available, and this shows up in Mini mode more often than it does in the full mode. And, in Windows 7, you could simply minimize Zune as you can control it directly from the taskbar thumbnail preview.   Even more controls are available from Zune’s jumplist in Windows 7.  You can directly access your Quickplay links or choose to shuffle all music without leaving the taskbar. Settings Although Zune is designed to be used without confusing menus and settings, you can tweak the program to your liking from the settings panel.  Click Settings near the top left of the window. Here you can change file storage, types, burn, metadata, and many more settings.  You can also setup Zune to stream media to your XBOX 360 if you have one.   You can also customize Zune’s look with a variety of modern backgrounds and gradients. 
    Conclusion If you’re ready for a fresh way to enjoy your media, Zune is designed for you.  Its innovative UI definitely sets it apart from standard media players, and it is very pleasing to use.  Zune is especially nice if your computer is running XP, Vista Home Basic, or 7 Starter, as these versions of Windows don’t include Media Center.  Additionally, the mini player mode is a nice touch that brings a feature of Windows 7’s Media Player to XP and Vista.  Zune is definitely one of our favorite music apps.  Try it out, and get a fresh view of your music today! Link: Download Zune for Windows

    Read the article

  • The Earth at Night [Video]

    - by Jason Fitzpatrick
    This fresh video from NASA provides the clearest view of the Earth at night ever seen, thanks to the Suomi National Polar-orbiting Partnership Satellite. Check out the video and accompanying pics to see the stunning views. In daylight our big blue marble is all land, oceans and clouds. But the night – is electric. This view of Earth at night is a cloud-free view from space as acquired by the Suomi National Polar-orbiting Partnership Satellite (Suomi NPP). A joint program by NASA and NOAA, Suomi NPP captured this nighttime image by the satellite’s Visible Infrared Imaging Radiometer Suite (VIIRS). The day-night band on VIIRS detects light in a range of wavelengths from green to near infrared and uses filtering techniques to observe signals such as city lights, gas flares, and wildfires. This new image is a composite of data acquired over nine days in April and thirteen days in October 2012. It took 312 satellite orbits and 2.5 terabytes of data to get a clear shot of every parcel of land surface. This video uses the Earth at night view created by NASA’s Earth Observatory with data processed by NOAA’s National Geophysical Data Center and combined with a version of the Earth Observatory’s Blue Marble: Next Generation. Hit up the link below for the full NASA press release, including more videos and photos.

    Read the article

  • Podcast Show Notes: The Red Room Interview – Part 2

    - by Bob Rhubart
    Red Room bloggers Sean Boiling, Richard Ward, and Mervin Chiang bring their in-the-trenches perspective to the conversation once again in this week’s edition of the OTN ArchBeat Podcast. Listen. (Missed last week? No problemo: Listen to Part 1) In this segment the conversation turns to SOA governance and balancing the need for reuse against the need for speed.  It’s no mystery that many people react to the term “SOA Governance” in much the same way as they would to the sound of Darth Vader’s respirator. But Mervin explains how a simple change in terminology can go a long way toward lowering blood pressure. Those interested in connecting with Sean, Richard, or Mervin can do so via the links listed below: Sean Boiling - Sales Consulting Manager for Oracle Fusion Middleware LinkedIn | Twitter | Blog Richard Ward - SOA Channel Development Manager at Oracle LinkedIn | Blog Mervin Chiang - Consulting Principal at Leonardo Consulting LinkedIn | Twitter | Blog And you’ll find the complete list of the Red Room SOA Best Practice Posts in last week’s show notes. The third and final segment of the Red Room series runs next week.  I have enough material from the original interview for a fourth program, but it’ll have to wait. Also, as mentioned last week, the podcast name change is now complete, from Arch2Arch to ArchBeat. As WPBH-TV9 weatherman Phil Connors says, “Anything different is good.”

    Read the article

  • Bad DC: transferring FSMO roles to an ADC

    - by Suleman
    I have a DC (FQDN: server.icmcpk.local) and an ADC (FQDN: file-server.icmcpk.local). Recently my DC developed a bad-sector problem, so I transferred all five Operations Master (FSMO) roles to file-server. But whenever I turn off the old DC, file-server also stops working with AD and GPMC, and I am also unable to join any other computer to the domain. For test purposes I added another ADC (FQDN: wds-server.icmcpk.local), but had no success while the old DC was off; I had to turn the old DC back on and then join it. I am attaching the dcdiag output for all three servers. Kindly help me so that I can install a new HDD in the old DC and bring it back online. --------------------------------------- Server --------------------------------------- C:\Program Files\Support Tools>dcdiag Domain Controller Diagnosis Performing initial setup: Done gathering initial info. Doing initial required tests Testing server: Default-First-Site-Name\SERVER Starting test: Connectivity ......................... SERVER passed test Connectivity Doing primary tests Testing server: Default-First-Site-Name\SERVER Starting test: Replications [Replications Check,SERVER] A recent replication attempt failed: From FILE-SERVER to SERVER Naming Context: DC=ForestDnsZones,DC=icmcpk,DC=local The replication generated an error (1908): Could not find the domain controller for this domain. The failure occurred at 2012-05-04 14:07:13. The last success occurred at 2012-05-04 13:48:39. 1 failures have occurred since the last success. Kerberos Error. A KDC was not found to authenticate the call. Check that sufficient domain controllers are available. [Replications Check,SERVER] A recent replication attempt failed: From WDS-SERVER to SERVER Naming Context: DC=ForestDnsZones,DC=icmcpk,DC=local The replication generated an error (1908): Could not find the domain controller for this domain. The failure occurred at 2012-05-04 14:07:13. The last success occurred at 2012-05-04 13:48:39. 1 failures have occurred since the last success. Kerberos Error. A KDC was not found to authenticate the call. Check that sufficient domain controllers are available. [Replications Check,SERVER] A recent replication attempt failed: From FILE-SERVER to SERVER Naming Context: DC=DomainDnsZones,DC=icmcpk,DC=local The replication generated an error (1908): Could not find the domain controller for this domain. The failure occurred at 2012-05-04 14:07:13. The last success occurred at 2012-05-04 13:48:39. 1 failures have occurred since the last success. Kerberos Error. A KDC was not found to authenticate the call. Check that sufficient domain controllers are available. [Replications Check,SERVER] A recent replication attempt failed: From WDS-SERVER to SERVER Naming Context: DC=DomainDnsZones,DC=icmcpk,DC=local The replication generated an error (1908): Could not find the domain controller for this domain. The failure occurred at 2012-05-04 14:07:13. The last success occurred at 2012-05-04 13:48:39. 1 failures have occurred since the last success. Kerberos Error. A KDC was not found to authenticate the call. Check that sufficient domain controllers are available. [Replications Check,SERVER] A recent replication attempt failed: From FILE-SERVER to SERVER Naming Context: CN=Schema,CN=Configuration,DC=icmcpk,DC=local The replication generated an error (1908): Could not find the domain controller for this domain. The failure occurred at 2012-05-04 14:07:13. The last success occurred at 2012-05-04 13:48:39. 1 failures have occurred since the last success. Kerberos Error.
A KDC was not found to authenticate the call. Check that sufficient domain controllers are available. [Replications Check,SERVER] A recent replication attempt failed: From WDS-SERVER to SERVER Naming Context: CN=Schema,CN=Configuration,DC=icmcpk,DC=local The replication generated an error (1908): Could not find the domain controller for this domain. The failure occurred at 2012-05-04 14:07:13. The last success occurred at 2012-05-04 13:48:39. 1 failures have occurred since the last success. Kerberos Error. A KDC was not found to authenticate the call. Check that sufficient domain controllers are available. [Replications Check,SERVER] A recent replication attempt failed: From WDS-SERVER to SERVER Naming Context: DC=icmcpk,DC=local The replication generated an error (1908): Could not find the domain controller for this domain. The failure occurred at 2012-05-04 14:07:13. The last success occurred at 2012-05-04 13:48:39. 1 failures have occurred since the last success. Kerberos Error. A KDC was not found to authenticate the call. Check that sufficient domain controllers are available. ......................... SERVER passed test Replications Starting test: NCSecDesc ......................... SERVER passed test NCSecDesc Starting test: NetLogons ......................... SERVER passed test NetLogons Starting test: Advertising ......................... SERVER passed test Advertising Starting test: KnowsOfRoleHolders ......................... SERVER passed test KnowsOfRoleHolders Starting test: RidManager ......................... SERVER passed test RidManager Starting test: MachineAccount ......................... SERVER passed test MachineAccount Starting test: Services ......................... SERVER passed test Services Starting test: ObjectsReplicated ......................... SERVER passed test ObjectsReplicated Starting test: frssysvol ......................... SERVER passed test frssysvol Starting test: frsevent There are warning or error events within the last 24 hours after the SYSVOL has been shared. Failing SYSVOL replication problems may cause Group Policy problems. ......................... SERVER failed test frsevent Starting test: kccevent ......................... SERVER passed test kccevent Starting test: systemlog An Error Event occured. EventID: 0x80001778 Time Generated: 05/04/2012 14:05:39 Event String: The previous system shutdown at 1:26:31 PM on An Error Event occured. EventID: 0x825A0011 Time Generated: 05/04/2012 14:07:45 (Event String could not be retrieved) An Error Event occured. EventID: 0x00000457 Time Generated: 05/04/2012 14:13:40 (Event String could not be retrieved) An Error Event occured. EventID: 0x00000457 Time Generated: 05/04/2012 14:14:25 (Event String could not be retrieved) An Error Event occured. EventID: 0x00000457 Time Generated: 05/04/2012 14:14:25 (Event String could not be retrieved) An Error Event occured. EventID: 0x00000457 Time Generated: 05/04/2012 14:14:38 (Event String could not be retrieved) An Error Event occured. EventID: 0xC1010020 Time Generated: 05/04/2012 14:16:14 Event String: Dependent Assembly Microsoft.VC80.MFCLOC could An Error Event occured. EventID: 0xC101003B Time Generated: 05/04/2012 14:16:14 Event String: Resolve Partial Assembly failed for An Error Event occured. EventID: 0xC101003B Time Generated: 05/04/2012 14:16:14 Event String: Generate Activation Context failed for An Error Event occured. 
EventID: 0xC1010020 Time Generated: 05/04/2012 14:16:14 Event String: Dependent Assembly Microsoft.VC80.MFCLOC could An Error Event occured. EventID: 0xC101003B Time Generated: 05/04/2012 14:16:14 Event String: Resolve Partial Assembly failed for An Error Event occured. EventID: 0xC101003B Time Generated: 05/04/2012 14:16:14 Event String: Generate Activation Context failed for An Error Event occured. EventID: 0x825A0011 Time Generated: 05/04/2012 14:22:57 (Event String could not be retrieved) An Error Event occured. EventID: 0xC1010020 Time Generated: 05/04/2012 14:22:59 Event String: Dependent Assembly Microsoft.VC80.MFCLOC could An Error Event occured. EventID: 0xC101003B Time Generated: 05/04/2012 14:22:59 Event String: Resolve Partial Assembly failed for An Error Event occured. EventID: 0xC101003B Time Generated: 05/04/2012 14:22:59 Event String: Generate Activation Context failed for An Error Event occured. EventID: 0xC1010020 Time Generated: 05/04/2012 14:22:59 Event String: Dependent Assembly Microsoft.VC80.MFCLOC could An Error Event occured. EventID: 0xC101003B Time Generated: 05/04/2012 14:22:59 Event String: Resolve Partial Assembly failed for An Error Event occured. EventID: 0xC101003B Time Generated: 05/04/2012 14:22:59 Event String: Generate Activation Context failed for ......................... SERVER failed test systemlog Starting test: VerifyReferences ......................... SERVER passed test VerifyReferences Running partition tests on : ForestDnsZones Starting test: CrossRefValidation ......................... ForestDnsZones passed test CrossRefValidation Starting test: CheckSDRefDom ......................... ForestDnsZones passed test CheckSDRefDom Running partition tests on : DomainDnsZones Starting test: CrossRefValidation ......................... DomainDnsZones passed test CrossRefValidation Starting test: CheckSDRefDom ......................... DomainDnsZones passed test CheckSDRefDom Running partition tests on : Schema Starting test: CrossRefValidation ......................... Schema passed test CrossRefValidation Starting test: CheckSDRefDom ......................... Schema passed test CheckSDRefDom Running partition tests on : Configuration Starting test: CrossRefValidation ......................... Configuration passed test CrossRefValidation Starting test: CheckSDRefDom ......................... Configuration passed test CheckSDRefDom Running partition tests on : icmcpk Starting test: CrossRefValidation ......................... icmcpk passed test CrossRefValidation Starting test: CheckSDRefDom ......................... icmcpk passed test CheckSDRefDom Running enterprise tests on : icmcpk.local Starting test: Intersite ......................... icmcpk.local passed test Intersite Starting test: FsmoCheck ......................... icmcpk.local passed test FsmoCheck ---------------------- File-Server ---------------------- C:\Users\Administrator.ICMCPK>dcdiag Directory Server Diagnosis Performing initial setup: Trying to find home server... Home Server = FILE-SERVER * Identified AD Forest. Done gathering initial info. Doing initial required tests Testing server: Default-First-Site-Name\FILE-SERVER Starting test: Connectivity ......................... FILE-SERVER passed test Connectivity Doing primary tests Testing server: Default-First-Site-Name\FILE-SERVER Starting test: Advertising Warning: DsGetDcName returned information for \\Server.icmcpk.local, when we were trying to reach FILE-SERVER. SERVER IS NOT RESPONDING or IS NOT CONSIDERED SUITABLE. 
......................... FILE-SERVER failed test Advertising Starting test: FrsEvent ......................... FILE-SERVER passed test FrsEvent Starting test: DFSREvent ......................... FILE-SERVER passed test DFSREvent Starting test: SysVolCheck ......................... FILE-SERVER passed test SysVolCheck Starting test: KccEvent ......................... FILE-SERVER passed test KccEvent Starting test: KnowsOfRoleHolders ......................... FILE-SERVER passed test KnowsOfRoleHolders Starting test: MachineAccount ......................... FILE-SERVER passed test MachineAccount Starting test: NCSecDesc Error NT AUTHORITY\ENTERPRISE DOMAIN CONTROLLERS doesn't have Replicating Directory Changes In Filtered Set access rights for the naming context: DC=ForestDnsZones,DC=icmcpk,DC=local Error NT AUTHORITY\ENTERPRISE DOMAIN CONTROLLERS doesn't have Replicating Directory Changes In Filtered Set access rights for the naming context: DC=DomainDnsZones,DC=icmcpk,DC=local ......................... FILE-SERVER failed test NCSecDesc Starting test: NetLogons Unable to connect to the NETLOGON share! (\\FILE-SERVER\netlogon) [FILE-SERVER] An net use or LsaPolicy operation failed with error 67, The network name cannot be found.. ......................... FILE-SERVER failed test NetLogons Starting test: ObjectsReplicated ......................... FILE-SERVER passed test ObjectsReplicated Starting test: Replications ......................... FILE-SERVER passed test Replications Starting test: RidManager ......................... FILE-SERVER passed test RidManager Starting test: Services ......................... FILE-SERVER passed test Services Starting test: SystemLog An Error Event occurred. EventID: 0x00000469 Time Generated: 05/04/2012 14:01:10 Event String: The processing of Group Policy failed because of lack of network con nectivity to a domain controller. This may be a transient condition. A success m essage would be generated once the machine gets connected to the domain controll er and Group Policy has succesfully processed. If you do not see a success messa ge for several hours, then contact your administrator. An Warning Event occurred. EventID: 0x8000A001 Time Generated: 05/04/2012 14:07:11 Event String: The Security System could not establish a secured connection with th e server ldap/icmcpk.local/[email protected]. No authentication protocol was available. An Warning Event occurred. EventID: 0x00000BBC Time Generated: 05/04/2012 14:30:34 Event String: Windows Defender Real-Time Protection agent has detected changes. Mi crosoft recommends you analyze the software that made these changes for potentia l risks. You can use information about how these programs operate to choose whet her to allow them to run or remove them from your computer. Allow changes only if you trust the program or the software publisher. Windows Defender can't undo changes that you allow. An Warning Event occurred. EventID: 0x00000BBC Time Generated: 05/04/2012 14:30:36 Event String: Windows Defender Real-Time Protection agent has detected changes. Mi crosoft recommends you analyze the software that made these changes for potentia l risks. You can use information about how these programs operate to choose whet her to allow them to run or remove them from your computer. Allow changes only if you trust the program or the software publisher. Windows Defender can't undo changes that you allow. ......................... FILE-SERVER failed test SystemLog Starting test: VerifyReferences ......................... 
FILE-SERVER passed test VerifyReferences Running partition tests on : ForestDnsZones Starting test: CheckSDRefDom ......................... ForestDnsZones passed test CheckSDRefDom Starting test: CrossRefValidation ......................... ForestDnsZones passed test CrossRefValidation Running partition tests on : DomainDnsZones Starting test: CheckSDRefDom ......................... DomainDnsZones passed test CheckSDRefDom Starting test: CrossRefValidation ......................... DomainDnsZones passed test CrossRefValidation Running partition tests on : Schema Starting test: CheckSDRefDom ......................... Schema passed test CheckSDRefDom Starting test: CrossRefValidation ......................... Schema passed test CrossRefValidation Running partition tests on : Configuration Starting test: CheckSDRefDom ......................... Configuration passed test CheckSDRefDom Starting test: CrossRefValidation ......................... Configuration passed test CrossRefValidation Running partition tests on : icmcpk Starting test: CheckSDRefDom ......................... icmcpk passed test CheckSDRefDom Starting test: CrossRefValidation ......................... icmcpk passed test CrossRefValidation Running enterprise tests on : icmcpk.local Starting test: LocatorCheck ......................... icmcpk.local passed test LocatorCheck Starting test: Intersite ......................... icmcpk.local passed test Intersite --------------------- WDS-Server --------------------- C:\Users\Administrator.ICMCPK>dcdiag Directory Server Diagnosis Performing initial setup: Trying to find home server... Home Server = WDS-SERVER * Identified AD Forest. Done gathering initial info. Doing initial required tests Testing server: Default-First-Site-Name\WDS-SERVER Starting test: Connectivity ......................... WDS-SERVER passed test Connectivity Doing primary tests Testing server: Default-First-Site-Name\WDS-SERVER Starting test: Advertising Warning: DsGetDcName returned information for \\Server.icmcpk.local, when we were trying to reach WDS-SERVER. SERVER IS NOT RESPONDING or IS NOT CONSIDERED SUITABLE. ......................... WDS-SERVER failed test Advertising Starting test: FrsEvent There are warning or error events within the last 24 hours after the SYSVOL has been shared. Failing SYSVOL replication problems may cause Group Policy problems. ......................... WDS-SERVER passed test FrsEvent Starting test: DFSREvent ......................... WDS-SERVER passed test DFSREvent Starting test: SysVolCheck ......................... WDS-SERVER passed test SysVolCheck Starting test: KccEvent ......................... WDS-SERVER passed test KccEvent Starting test: KnowsOfRoleHolders ......................... WDS-SERVER passed test KnowsOfRoleHolders Starting test: MachineAccount ......................... WDS-SERVER passed test MachineAccount Starting test: NCSecDesc Error NT AUTHORITY\ENTERPRISE DOMAIN CONTROLLERS doesn't have Replicating Directory Changes In Filtered Set access rights for the naming context: DC=ForestDnsZones,DC=icmcpk,DC=local Error NT AUTHORITY\ENTERPRISE DOMAIN CONTROLLERS doesn't have Replicating Directory Changes In Filtered Set access rights for the naming context: DC=DomainDnsZones,DC=icmcpk,DC=local ......................... WDS-SERVER failed test NCSecDesc Starting test: NetLogons Unable to connect to the NETLOGON share! (\\WDS-SERVER\netlogon) [WDS-SERVER] An net use or LsaPolicy operation failed with error 67, The network name cannot be found.. ......................... 
WDS-SERVER failed test NetLogons Starting test: ObjectsReplicated ......................... WDS-SERVER passed test ObjectsReplicated Starting test: Replications ......................... WDS-SERVER passed test Replications Starting test: RidManager ......................... WDS-SERVER passed test RidManager Starting test: Services ......................... WDS-SERVER passed test Services Starting test: SystemLog An Error Event occurred. EventID: 0x0000041E Time Generated: 05/04/2012 14:02:55 Event String: The processing of Group Policy failed. Windows could not obtain the name of a domain controller. This could be caused by a name resolution failure. Verify your Domain Name Sysytem (DNS) is configured and working correctly. An Error Event occurred. EventID: 0x0000041E Time Generated: 05/04/2012 14:08:33 Event String: The processing of Group Policy failed. Windows could not obtain the name of a domain controller. This could be caused by a name resolution failure. Verify your Domain Name Sysytem (DNS) is configured and working correctly. ......................... WDS-SERVER failed test SystemLog Starting test: VerifyReferences ......................... WDS-SERVER passed test VerifyReferences Running partition tests on : ForestDnsZones Starting test: CheckSDRefDom ......................... ForestDnsZones passed test CheckSDRefDom Starting test: CrossRefValidation ......................... ForestDnsZones passed test CrossRefValidation Running partition tests on : DomainDnsZones Starting test: CheckSDRefDom ......................... DomainDnsZones passed test CheckSDRefDom Starting test: CrossRefValidation ......................... DomainDnsZones passed test CrossRefValidation Running partition tests on : Schema Starting test: CheckSDRefDom ......................... Schema passed test CheckSDRefDom Starting test: CrossRefValidation ......................... Schema passed test CrossRefValidation Running partition tests on : Configuration Starting test: CheckSDRefDom ......................... Configuration passed test CheckSDRefDom Starting test: CrossRefValidation ......................... Configuration passed test CrossRefValidation Running partition tests on : icmcpk Starting test: CheckSDRefDom ......................... icmcpk passed test CheckSDRefDom Starting test: CrossRefValidation ......................... icmcpk passed test CrossRefValidation Running enterprise tests on : icmcpk.local Starting test: LocatorCheck ......................... icmcpk.local passed test LocatorCheck Starting test: Intersite ......................... icmcpk.local passed test Intersite

    Read the article

  • unmet dependencies and broken count>0 problem

    - by Simon
    I tried installing fbreader, following all the steps, but ended up with unmet dependencies. I also think a file is referenced in two locations at once, which is what kills it. Any ideas how I can fix it? I've done a lot of research and tried: simon@simon-Studio-1558:~$ sudo apt-get -f install Reading package lists... Done Building dependency tree Reading state information... Done Correcting dependencies... Done The following packages were automatically installed and are no longer required: dkms patch Use 'apt-get autoremove' to remove them. The following extra packages will be installed: libzlcore0.12 The following NEW packages will be installed: libzlcore0.12 0 upgraded, 1 newly installed, 0 to remove and 61 not upgraded. 6 not fully installed or removed. Need to get 0 B/270 kB of archives. After this operation, 811 kB of additional disk space will be used. Do you want to continue [Y/n]? y (Reading database ... 179860 files and directories currently installed.) Unpacking libzlcore0.12 (from .../libzlcore0.12_0.12.10dfsg-4_i386.deb) ... dpkg: error processing /var/cache/apt/archives/libzlcore0.12_0.12.10dfsg-4_i386.deb (--unpack): trying to overwrite '/usr/lib/libzlcore.so.0.12.10', which is also in package libzlcore 0.12.10-1 No apport report written because MaxReports is reached already dpkg-deb: error: subprocess paste was killed by signal (Broken pipe) Errors were encountered while processing: /var/cache/apt/archives/libzlcore0.12_0.12.10dfsg-4_i386.deb E: Sub-process /usr/bin/dpkg returned an error code (1) Sorry for the formatting, but the part it doesn't like is: dpkg: error processing /var/cache/apt/archives/libzlcore0.12_0.12.10dfsg-4_i386.deb (--unpack): trying to overwrite '/usr/lib/libzlcore.so.0.12.10', which is also in package libzlcore 0.12.10-1 Any ideas? Also, I don't care about keeping the program, but the error stops sudo apt-get remove fbreader from working too.

    Read the article

  • Should I be worried about overengineering programming assignments given during interview process?

    - by DormoTheNord
    I recently had a phone interview with a company. After that phone interview, I was told to complete a short programming assignment (a small program; shouldn't take more than three hours). I'm only directly instructed to complete the assignment and turn in the code. I was given complete freedom to use any language I wished and was not told exactly how to turn in the code. Immediately I planned on throwing it on Github, writing a test suite for it, using Travis-CI (free continuous integration for public Github repositories) to run the test suites, and using CMake to build the Linux makefiles for Travis-CI. That way, not only can I demonstrate that I understand how to use Git, CMake, Travis-CI, and how to write tests, but I can also simply link to the Travis-CI page so they can see the output of the tests. I figured that'd make it a tiny bit more convenient for the interviewer. Since I know those technologies well, it would add essentially no time to the assignment. However, I'm a bit worried that doing all this for a relatively simple task would look bad. Although it wouldn't add much more time at all for me, I don't want them thinking I spend too much time on things that should be simple.

    Read the article

  • Mac OS X file recovery

    - by Daniel
    I thought that all operating systems would merge folder contents when a folder is moved to a location where one with the same name already exists. Imagine my surprise when that didn't happen, and I now have hundreds, if not thousands, of files that have gone missing and are nowhere to be found. Because they were not "deleted", they are not in the trash bin. I've tried to do some recovery using a program called Stellar Phoenix, but after about a 24-hour scan it didn't recognize any of the raw files (.dng, .arw) as image files, so I couldn't see whether they could be recovered. It also didn't show the directory structure, which would be handy. I tried a quick scan, but all it showed was files that were still on the HD; I'm not sure what the point of that is. I've used Recover 2000 on Windows and it does a good job. Does anyone know of anything that works quickly and reliably for this kind of file recovery? (I don't think I should have to do a sector-by-sector scan for this kind of file loss.)

    Read the article

  • Podcast Show Notes: Fear and Loathing in SOA

    - by Bob Rhubart
    The latest program (#47) in the Arch2Arch podcast series is the first of three segments from another virtual mini-meet-up with architects from the OTN community, recorded on March 9, 2010. In keeping with the meet-up format, I sent an invitation to my list of past participants in Arch2Arch panel discussions. The following people showed up to take seats at the virtual table and drive the conversation: Hajo Normann is a SOA architect and consultant at EDS in Frankfurt Blog | LinkedIn | Oracle Mix | Oracle ACE Profile | Books  Jeff Davies is a Senior Product Manager at Oracle, and is the primary author of The Definitive Guide to SOA: Oracle Service Bus Homepage | Blog | LinkedIn | Oracle Mix Pat Shepherd is an enterprise architect with the Oracle Enterprise Solutions Group. Oracle Mix | LinkedIn | Blog This first segment focuses on a discussion of the persistent fear of SOA the panelists have observed among many developers and architects. Listen to Part 1 The discussion continues in next week’s segment with a look at the misinformation and misunderstanding behind the fear of SOA, and a discussion of possible solutions. So stay tuned.

    Read the article

  • Sync custom AD properties to SharePoint Profile

    - by KunaalKapoor
    Here are some step-by-step instructions regarding configuring SharePoint to sync with custom AD attributes:
    1. Add the custom attribute in Active Directory. This part will have to be your doing; here is some documentation regarding creating custom attributes in AD:
       http://msdn.microsoft.com/en-us/library/ms675085(VS.85).aspx
       http://technet.microsoft.com/en-us/magazine/2008.05.schema.aspx
       http://blogs.technet.com/b/isingh/archive/2007/02/18/adding-custom-attributes-in-active-directory.aspx
    2. Open up miisclient.exe (C:\Program Files\Microsoft Office Servers\14.0\Synchronization Service\UIShell\miisclient.exe).
       a. This will have to be opened with the farm admin account.
    3. Click on "Management Agents" in the ribbon.
    4. Right-click the Active Directory Management Agent ("MOSS-<name of sync connection>") and click "Refresh Schema".
       a. When prompted, enter the credentials for the farm account.
    5. Once complete, close out of miisclient.exe.
    6. Go into Central Admin --> Application Management --> Manage Service Applications --> go into the User Profile Service Application.
    7. Click on "Manage User Properties".
    8. Click on "New Property".
    9. Put in the correct information regarding the attribute that was created.
    10. At the bottom of this page, under the "Source Data Connection" drop-down, select the AD synchronization connection you have already configured.
    11. For the "Attribute" drop-down, select the new attribute you have created.
    12. For the "Direction" drop-down, select "Import".
    13. Click "OK".
    14. Run a full synchronization for the User Profile Service Application and the custom property will get synced (as long as the attribute is set in Active Directory for the desired users).
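    As an optional sanity check (a rough sketch, not one of the steps above), C# along these lines run on the SharePoint server can read the property back after the full sync in step 14; the site URL, the account name, and the property name "CustomAttribute" are placeholders for whatever was created in steps 1 and 9.

    using System;
    using Microsoft.SharePoint;
    using Microsoft.Office.Server.UserProfiles;

    class VerifySyncedProperty
    {
        static void Main()
        {
            // Placeholder URL; any site collection in the farm will do.
            using (SPSite site = new SPSite("http://sharepoint"))
            {
                SPServiceContext context = SPServiceContext.GetContext(site);
                UserProfileManager profiles = new UserProfileManager(context);

                // Placeholder account; pick a user whose AD attribute is populated.
                UserProfile profile = profiles.GetUserProfile(@"DOMAIN\someuser");

                // "CustomAttribute" is the property name entered in step 9.
                Console.WriteLine("CustomAttribute = {0}", profile["CustomAttribute"].Value);
            }
        }
    }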

    Read the article

  • SQL SERVER – Import CSV into Database – Transferring File Content into a Database Table using CSVexpress

    - by pinaldave
    One of the most common data integration tasks I run into is a desire to move data from a file into a database table. Generally the user is familiar with his data, the structure of the file, and the database table, but is unfamiliar with data integration tools and therefore views this task as something that is difficult. What these users really need is a point and click approach that minimizes the learning curve for the data integration tool. This is what CSVexpress (www.CSVexpress.com) is all about! It is based on expressor Studio, a data integration tool I’ve been reviewing over the last several months. With CSVexpress, moving data between data sources can be as simple as providing the database connection details, describing the structure of the incoming and outgoing data and then connecting two pre-programmed operators. There’s no need to learn the intricacies of the data integration tool or to write code. Let’s look at an example. Suppose I have a comma separated value data file with data similar to the following, which is a listing of terminated employees that includes their hiring and termination date, department, job description, and final salary.
    EMP_ID,STRT_DATE,END_DATE,JOB_ID,DEPT_ID,SALARY
    102,13-JAN-93,24-JUL-98 17:00,Programmer,60,"$85,000"
    101,21-SEP-89,27-OCT-93 17:00,Account Representative,110,"$65,000"
    103,28-OCT-93,15-MAR-97 17:00,Account Manager,110,"$75,000"
    304,17-FEB-96,19-DEC-99 17:00,Marketing,20,"$45,000"
    333,24-MAR-98,31-DEC-99 17:00,Data Entry Clerk,50,"$35,000"
    100,17-SEP-87,17-JUN-93 17:00,Administrative Assistant,90,"$40,000"
    334,24-MAR-98,31-DEC-98 17:00,Sales Representative,80,"$40,000"
    400,01-JAN-99,31-DEC-99 17:00,Sales Manager,80,"$55,000"
    Notice the concise format used for the date values, the fact that the termination date includes both date and time information, and that the salary is clearly identified as money by the dollar sign and digit grouping. In moving this data to a database table I want to express the dates using a format that includes the century since it’s obvious that this listing could include employees who left the company in both the 20th and 21st centuries, and I want the salary to be stored as a decimal value without the currency symbol and grouping character. Most data integration tools would require coding within a transformation operation to effect these changes, but not expressor Studio. Directives for these modifications are included in the description of the incoming data. Besides starting the expressor Studio tool and opening a project, the first step is to create connection artifacts, which describe to expressor where data is stored. For this example, two connection artifacts are required: a file connection, which encapsulates the file system location of my file; and a database connection, which encapsulates the database connection information. With expressor Studio, I use wizards to create these artifacts. First click New Connection > File Connection in the Home tab of expressor Studio’s ribbon bar, which starts the File Connection wizard. In the first window, I enter the path to the directory that contains the input file. Note that the file connection artifact only specifies the file system location, not the name of the file. Then I click Next and enter a meaningful name for this connection artifact; clicking Finish closes the wizard and saves the artifact.
    To create the Database Connection artifact, I must know the location, or instance name, of the target database and have the credentials of an account with sufficient privileges to write to the target table. To use expressor Studio’s features to the fullest, this account should also have the authority to create a table. I click New Connection > Database Connection in the Home tab of expressor Studio’s ribbon bar, which starts the Database Connection wizard. expressor Studio includes high-performance drivers for many relational database management systems, so I can simply make a selection from the “Supplied database drivers” drop down control. If my desired RDBMS isn’t listed, I can optionally use an existing ODBC DSN by selecting the “Existing DSN” radio button. In the following window, I enter the connection details. With Microsoft SQL Server, I may choose to use Windows Authentication rather than account credentials. After clicking Next, I enter a meaningful name for this connection artifact, and clicking Finish closes the wizard and saves the artifact. Now I create a schema artifact, which describes the structure of the file data. When expressor reads a file, all data fields are typed as strings. In some use cases this may be exactly what is needed and there is no need to edit the schema artifact. But in this example, editing the schema artifact will be used to specify how the data should be transformed; that is, reformat the dates to include century designations, change the employee and department IDs to integers, and convert the salary to a decimal value. Again a wizard is used to create the schema artifact. I click New Schema > Delimited Schema in the Home tab of expressor Studio’s ribbon bar, which starts the Delimited Schema wizard. In the first window, I click Get Data from File, which then displays a listing of the file connections in the project. When I click on the file connection I previously created, a browse window opens to this file system location; I then select the file and click Open, which imports 10 lines from the file into the wizard. I now view the file’s content and confirm that the appropriate delimiter characters are selected in the “Field Delimiter” and “Record Delimiter” drop down controls; then I click Next. Since the input file includes a header row, I can easily indicate that fields in the file should be identified through the corresponding header value by clicking “Set All Names from Selected Row.” Alternatively, I could enter a different identifier into the Field Details > Name text box. I click Next and enter a meaningful name for this schema artifact; clicking Finish closes the wizard and saves the artifact. Now I open the schema artifact in the schema editor. When I first view the schema’s content, I note that the types of all attributes in the Semantic Type (the right-hand panel) are strings and that the attribute names are the same as the field names in the data file. To change an attribute’s name and type, I highlight the attribute and click Edit in the Attributes grouping on the Schema > Edit tab of the editor’s ribbon bar. This opens the Edit Attribute window; I can change the attribute name and select the desired type from the “Data type” drop down control. In this example, I change the name of each attribute to the name of the corresponding database table column (EmployeeID, StartingDate, TerminationDate, JobDescription, DepartmentID, and FinalSalary).
Then for the EmployeeID and DepartmentID attributes, I select Integer as the data type, for the StartingDate and TerminationDate attributes, I select Datetime as the data type, and for the FinalSalary attribute, I select the Decimal type. But I can do much more in the schema editor.  For the datetime attributes, I can set a constraint that ensures that the data adheres to some predetermined specifications; a starting date must be later than January 1, 1980 (the date on which the company began operations) and a termination date must be earlier than 11:59 PM on December 31, 1999.  I simply select the appropriate constraint and enter the value (1980-01-01 00:00 as the starting date and 1999-12-31 11:59 as the termination date). As a last step in setting up these datetime conversions, I edit the mapping, describing the format of each datetime type in the source file. I highlight the mapping line for the StartingDate attribute and click Edit Mapping in the Mappings grouping on the Schema > Edit tab of the editor’s ribbon bar.  This opens the Edit Mapping window in which I either enter, or select, a format that describes how the datetime values are represented in the file.  Note the use of Y01 as the syntax for the year.  This syntax is the indicator to expressor Studio to derive the century by setting any year later than 01 to the 20th century and any year before 01 to the 21st century.  As each datetime value is read from the file, the year values are transformed into century and year values. For the TerminationDate attribute, my format also indicates that the datetime value includes hours and minutes. And now to the Salary attribute. I open its mapping and in the Edit Mapping window select the Currency tab and the “Use currency” check box.  This indicates that the file data will include the dollar sign (or in Europe the Pound or Euro sign), which should be removed. And on the Grouping tab, I select the “Use grouping” checkbox and enter 3 into the “Group size” text box, a comma into the “Grouping character” text box, and a decimal point into the “Decimal separator” character text box. These entries allow the string to be properly converted into a decimal value. By making these entries into the schema that describes my input file, I’ve specified how I want the data transformed prior to writing to the database table and completely removed the requirement for coding within the data integration application itself. Assembling the data integration application is simple.  Onto the canvas I drag the Read File and Write Table operators, connecting the output of the Read File operator to the input of the Write Table operator. Next, I select the Read File operator and its Properties panel opens on the right-hand side of expressor Studio.  For each property, I can select an appropriate entry from the corresponding drop down control.  Clicking on the button to the right of the “File name” text box opens the file system location specified in the file connection artifact, allowing me to select the appropriate input file.  I indicate also that the first row in the file, the header row, should be skipped, and that any record that fails one of the datetime constraints should be skipped. I then select the Write Table operator and in its Properties panel specify the database connection, normal for the “Mode,” and the “Truncate” and “Create Missing Table” options.  
    If my target table does not yet exist, expressor will create the table using the information encapsulated in the schema artifact assigned to the operator. The last task needed to complete the application is to create the schema artifact used by the Write Table operator. This is extremely easy, as another wizard is capable of using the schema artifact assigned to the Read File operator to create a schema artifact for the Write Table operator. In the Write Table Properties panel, I click the drop down control to the right of the “Schema” property and select “New Table Schema from Upstream Output…” from the drop down menu. The wizard first displays the table description and in its second screen asks me to select the database connection artifact that specifies the RDBMS in which the target table will exist. The wizard then connects to the RDBMS and retrieves a list of database schemas from which I make a selection. The fourth screen gives me the opportunity to fine tune the table’s description. In this example, I set the width of the JobDescription column to a maximum of 40 characters and select money as the type of the FinalSalary column. I also provide the name for the table. This completes development of the application. The entire application was created through the use of wizards, and the required data transformations were specified through simple constraints and specifications rather than through coding. To develop this application, I only needed a basic understanding of expressor Studio, a level of expertise that can be gained by working through a few introductory tutorials. expressor Studio is as close to a point and click data integration tool as one could want, and I urge you to try this product if you have a need to move data between files or from files to database tables. Check out CSVexpress in more detail. It offers a few basic video tutorials and a preview of expressor Studio 3.5, which will support the reading and writing of data into Salesforce.com. Reference: Pinal Dave (http://blog.SQLAuthority.com)
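    For readers who would like to see the equivalent logic spelled out in code, here is a rough C# sketch (just an illustration; it is not part of expressor Studio or of this point-and-click workflow) of the two transformations the schema mappings perform: pivoting the two-digit year around 01 to choose the century, and stripping the currency symbol and digit grouping from the salary.

    using System;
    using System.Globalization;

    static class CsvConversions
    {
        // "13-JAN-93" -> 1993-01-13; "24-JUL-98 17:00" -> 1998-07-24 17:00.
        // Mirrors the "Y01" rule: two-digit years greater than 01 fall in the
        // 20th century (19xx), years 00 and 01 in the 21st (20xx).
        public static DateTime ParseEmploymentDate(string text)
        {
            string[] formats = { "dd-MMM-yy", "dd-MMM-yy HH:mm" };
            DateTime parsed = DateTime.ParseExact(text, formats,
                CultureInfo.InvariantCulture, DateTimeStyles.None);

            int twoDigitYear = int.Parse(text.Split('-')[2].Substring(0, 2));
            int fullYear = twoDigitYear > 1 ? 1900 + twoDigitYear : 2000 + twoDigitYear;
            return new DateTime(fullYear, parsed.Month, parsed.Day,
                                parsed.Hour, parsed.Minute, 0);
        }

        // "$85,000" -> 85000m: NumberStyles.Currency accepts the dollar sign and
        // the comma grouping character, leaving a plain decimal value.
        public static decimal ParseSalary(string text)
        {
            return decimal.Parse(text, NumberStyles.Currency, new CultureInfo("en-US"));
        }
    }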

    Read the article

  • Deferred vs Immediate execution in Linq

    - by Jalpesh P. Vadgama
    In this post, we are going to learn about deferred vs. immediate execution in LINQ. There are some interesting variations in how LINQ operators execute, and in this post we are going to look at both deferred execution and immediate execution. What is Deferred Execution? With deferred execution, the query is not run when it is defined; it is executed and evaluated only when the query variable is actually used (for example, when it is enumerated). Let’s take an example to understand deferred execution better. Example: Following is the code for that.

    using System;
    using System.Collections.Generic;
    using System.Linq;

    namespace ConsoleApplication1
    {
        class Program
        {
            static void Main(string[] args)
            {
                var customers = new List<Customer>(
                    new[]
                    {
                        new Customer { FirstName = "Jalpesh", LastName = "Vadgama" },
                        new Customer { FirstName = "Vishal", LastName = "Vadgama" },
                        new Customer { FirstName = "Tushar", LastName = "Maru" }
                    });

                var newCustomers = customers.Where(c => c.LastName == "Vadgama");

                customers.Add(new Customer { FirstName = "Teerth", LastName = "Vadgama" });

                foreach (var c in newCustomers)
                {
                    Console.WriteLine(c.FirstName);
                }
            }

            public class Customer
            {
                public string FirstName { get; set; }
                public string LastName { get; set; }
            }
        }
    }

    Because the Where query is not evaluated until the foreach loop enumerates it, the customer added after the query was defined ("Teerth") is included, and the output is Jalpesh, Vishal, Teerth. More on my personal blog @ www.dotnetjalps.com
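    To show the immediate-execution side of the comparison, here is a minimal sketch of my own (reusing the Customer class from the example above): forcing immediate execution with ToList() evaluates the query at the point it is defined, so a customer added afterwards does not show up in the results.

    var customers = new List<Customer>(
        new[]
        {
            new Customer { FirstName = "Jalpesh", LastName = "Vadgama" },
            new Customer { FirstName = "Vishal", LastName = "Vadgama" },
            new Customer { FirstName = "Tushar", LastName = "Maru" }
        });

    // ToList() forces immediate execution: the query runs right here.
    var newCustomers = customers.Where(c => c.LastName == "Vadgama").ToList();

    // Added after the query has already been evaluated...
    customers.Add(new Customer { FirstName = "Teerth", LastName = "Vadgama" });

    // ...so only Jalpesh and Vishal are printed; Teerth is not included.
    foreach (var c in newCustomers)
    {
        Console.WriteLine(c.FirstName);
    }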

    Read the article

  • Linux memory fragmentation

    - by Raghu
    Hi all, Is there a way to detect memory fragmentation on Linux? I ask because on some long-running servers I have noticed performance degradation, and only after I restart the process do I see better performance. I noticed it more when using Linux huge page support -- are huge pages in Linux more prone to fragmentation? I have looked at /proc/buddyinfo in particular. I want to know whether there are any better ways (not just CLI commands per se; any program or theoretical background would do) to look at it.

    Read the article

  • New Year's Resolutions and Keeping in Touch in 2011

    - by Brian Dayton
    The run-up to Oracle OpenWorld 2010 San Francisco--and the launch of Fusion Applications--was a busy time for many of us working on the applications business at Oracle. The great news was that the Oracle Applications general sessions, sessions, demogrounds and other programs were very well attended and well received. Unfortunately, for this blog, the work wasn't done there. Yes, there haven't been many additional blog entries since the previous one, about which one industry analyst told us, "That's a good post!" That being said, our New Year's resolution is to blog more frequently about what's been keeping us busy since Oracle OpenWorld San Francisco. A quick summary:
    - A 4-part webcast series covering major elements of Oracle's Applications strategy
    - Oracle OpenWorld Brazil
    - Oracle OpenWorld China
    - A stellar fiscal Q2 for Oracle and our applications business
    - Engagement with many Oracle Fusion Applications Early Adopter customers (more on this in the coming year)
    Objectives for the Coming Year Looking forward at 2011, there are many ways in which we hope to continue making connections with our valued customers and partners, sharing information about where Oracle Applications are headed, and answering questions about how to manage your Oracle Applications roadmap. Things to look for in 2011:
    - Stay connected with Oracle Applications on a daily basis via our Facebook page. You don't have to be a member of Facebook---but if you are and "like" the page you'll have daily insights and updates delivered to your account http://www.facebook.com/OracleApps
    - Coming soon, an Oracle Applications strategy update World Tour---a global program that takes key updates and information to cities around the globe
    - Save the date: On February 3rd, Oracle will be hosting a global, online conference for Oracle Applications customers, partners and interested parties
    Happy New Year and look for us in 2011.

    Read the article

  • Gauge network traffic for each Citrix session

    - by molecule
    Hi all, We are currently reviewing the bandwidth of our WAN links. How much bandwidth does a "typical" Citrix session utilize over a WAN link? JFYI - we are using Citrix Program Neighborhood V10 and each session is configured to use 256 colors. I have set up PRTG and it appears that for a server hosting 4 users, the traffic is approximately 100k to 300k per session. Is that about right? If you had to set a benchmark on a per-user basis, how much bandwidth would you assign to each user? Thanks in advance.

    Read the article

  • Start script on network connect

    - by Nate Mara
    I am trying to get a GNU/Linux Bash script to run as soon as a network connection is established on my Raspberry Pi. I tried following the instructions on several pages: I have tried adding my script to /etc/network/if-up.d and running sudo chmod ugo+x on the file, and I have tried adding the line post-up <path/to/script.sh> to /etc/network/interfaces. I am really quite clueless here. More info: the script runs fine when run manually; here it is: http://pastebin.com/UJvt5HYU (I did remove my personal info (email addresses, passwords), but other than that the script is unchanged). The script also uses the sendEmail program (which can be found at http://caspian.dotconf.net/menu/Software/SendEmail/).

    Read the article

  • How do I get a hold of the Reply-To header and remunge it in postfix

    - by Mikhail
    I have a legacy application that sends email via PHP. 5% of the emails aren't going through. The solution is to route all email through a fancy verified mail server like Amazon's SES. I am having some trouble implementing this functionality. It seems this guy had a similar problem. My question is: where in Postfix can I set a filter that takes the message headers as input, so that I can manually set the From field and the Reply-To field to [email protected] and [email protected], respectively? Here whatever_php_wants is dictated by the PHP program and the user's registration email. I know where to set the noreply portion, but I don't know the exact place in Postfix's configuration files where I can intercept complete emails and pass them to a script. Edit: So I want emails to look like: FROM: [email protected] REPLY-TO: the_users_address@their_email_service.com

    Read the article

  • Common mistakes which lead to corrupted invariants

    - by Dave B.
    My main source of income is web development and through this I have come to enjoy the wonders of programming as my knowledge of different languages has increased over the years through work and personal play. At some point I reached a decision that my college education was not enough and that I wanted to go back to school to get a university degree in either computer science or software engineering. I have tried a number of things in my life and it took me a while before I found something that I feel is a passion, and this is it. There is one aspect of this area of study that I find throws me off though. I find the formal methods of proving program correctness a challenge. It is not that I have trouble writing code correctly; I can look at an algorithm and see how it is correct or flawed, but I struggle sometimes to translate this into formal definitions. I have gotten perfect or near perfect marks on every programming assignment I have done at the college level, but I recently got a swath of textbooks from a guy from the University of Waterloo and found that I have had trouble when it comes to a few of the formalisms. Well, at this point it's really just one thing specifically: it would really help me if some of you could provide some good examples of common mistakes which lead to corrupted invariants, especially in loops. I have a few software engineering and computer science textbooks but they only show how things should be. I would like to know how things go wrong so that it is easier to recognize when it happens. It's almost embarrassing to broach this subject because formalisms are really basic foundations upon which matters of substance are built. I want to overcome this now so that it does not hinder me later.
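    As a small illustration (my addition, not taken from the question or any answer), here is a C# sketch of one of the most common ways a loop invariant gets corrupted: the loop body preserves the invariant just fine, but the initialization never establishes it in the first place.

        // Intended invariant before each iteration: max == maximum of a[0 .. i-1].
        static int BrokenMax(int[] a)
        {
            int max = 0;                      // bug: 0 is not an element of the array, so the
                                              // invariant is never established; correct versions
                                              // start from a[0] (with i = 1) or int.MinValue
            for (int i = 0; i < a.Length; i++)
            {
                if (a[i] > max) max = a[i];   // the body preserves whatever max already claims
            }
            return max;                       // returns 0 for { -3, -7 } instead of -3
        }

    The other two classic failure modes are breaking the invariant inside the body (for example, updating an index and an accumulator in the wrong order) and exiting the loop in a state where the invariant no longer implies the postcondition.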

    Read the article

  • Configure USB GSM Modem

    - by adisembiring
    I want to configure an EVDO USB modem in Ubuntu 10.10. I insert the USB modem into my laptop and check whether it is detected using $sudo lsusb, and the result is: Bus 003 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub Bus 005 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub Bus 002 Device 002: ID 201e:2009 Bus 002 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub Bus 004 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub Bus 001 Device 003: ID 14cd:6600 Super Top USB 2.0 IDE DEVICE Bus 002 Device 002: ID 201e:2009 is my USB device. Then I execute the command $dmesg | grep -e tty [ 0.000000] console [tty0] enabled [ 818.054660] usb 2-1.3: GSM modem (1-port) converter now attached to ttyUSB0 [ 818.055125] usb 2-1.3: GSM modem (1-port) converter now attached to ttyUSB1 [ 818.055647] usb 2-1.3: GSM modem (1-port) converter now attached to ttyUSB2 [ 818.330641] option1 ttyUSB0: GSM modem (1-port) converter now disconnected from ttyUSB0 [ 818.330743] option1 ttyUSB1: GSM modem (1-port) converter now disconnected from ttyUSB1 [ 818.330840] option1 ttyUSB2: GSM modem (1-port) converter now disconnected from ttyUSB2 [ 1054.917473] usb 2-1.2: GSM modem (1-port) converter now attached to ttyUSB0 [ 1054.917995] usb 2-1.2: GSM modem (1-port) converter now attached to ttyUSB1 [ 1054.918481] usb 2-1.2: GSM modem (1-port) converter now attached to ttyUSB2 [ 1055.214087] option1 ttyUSB0: GSM modem (1-port) converter now disconnected from ttyUSB0 [ 1055.214221] option1 ttyUSB1: GSM modem (1-port) converter now disconnected from ttyUSB1 [ 1055.214356] option1 ttyUSB2: GSM modem (1-port) converter now disconnected from ttyUSB2 Why is the converter disconnected from ttyUSB2? Then I try to execute the command $sudo wvdialconf and get: sorry, no modem was detected! Is it in use by another program ? did you configure it properly with setserials ? I swapped in the same type of USB modem from my friend, but I still get the error above.

    Read the article

  • Refactor This (Ugly Code)!

    - by Alois Kraus
    Ayende has put on his blog some ugly code to refactor. First and foremost it is nearly impossible to reason about other peoples code without knowing the driving forces behind the current code. It is certainly possible to make it much cleaner when potential sources of errors cannot happen in the first place due to good design. I can see what the intention of the code is but I do not know about every brittle detail if I am allowed to reorder things here and there to simplify things. So I decided to make it much simpler by identifying the different responsibilities of the methods and encapsulate it in different classes. The code we need to refactor seems to deal with a handler after a message has been sent to a message queue. The handler does complete the current transaction if there is any and does handle any errors happening there. If during the the completion of the transaction errors occur the transaction is at least disposed. We can enter the handler already in a faulty state where we try to deliver the complete event in any case and signal a failure event and try to resend the message again to the queue if it was not inside a transaction. All is decorated with many try/catch blocks, duplicated code and some state variables to route the program flow. It is hard to understand and difficult to reason about. In other words: This code is a mess and could be written by me if I was under pressure. Here comes to code we want to refactor:         private void HandleMessageCompletion(                                      Message message,                                      TransactionScope tx,                                      OpenedQueue messageQueue,                                      Exception exception,                                      Action<CurrentMessageInformation, Exception> messageCompleted,                                      Action<CurrentMessageInformation> beforeTransactionCommit)         {             var txDisposed = false;             if (exception == null)             {                 try                 {                     if (tx != null)                     {                         if (beforeTransactionCommit != null)                             beforeTransactionCommit(currentMessageInformation);                         tx.Complete();                         tx.Dispose();                         txDisposed = true;                     }                     try                     {                         if (messageCompleted != null)                             messageCompleted(currentMessageInformation, exception);                     }                     catch (Exception e)                     {                         Trace.TraceError("An error occured when raising the MessageCompleted event, the error will NOT affect the message processing"+ e);                     }                     return;                 }                 catch (Exception e)                 {                     Trace.TraceWarning("Failed to complete transaction, moving to error mode"+ e);                     exception = e;                 }             }             try             {                 if (txDisposed == false && tx != null)                 {                     Trace.TraceWarning("Disposing transaction in error mode");                     tx.Dispose();                 }             }             catch (Exception e)             {                 Trace.TraceWarning("Failed to dispose of transaction in error mode."+ e);             }             if (message == null)    
             return;                 try             {                 if (messageCompleted != null)                     messageCompleted(currentMessageInformation, exception);             }             catch (Exception e)             {                 Trace.TraceError("An error occured when raising the MessageCompleted event, the error will NOT affect the message processing"+ e);             }               try             {                 var copy = MessageProcessingFailure;                 if (copy != null)                     copy(currentMessageInformation, exception);             }             catch (Exception moduleException)             {                 Trace.TraceError("Module failed to process message failure: " + exception.Message+                                              moduleException);             }               if (messageQueue.IsTransactional == false)// put the item back in the queue             {                 messageQueue.Send(message);             }         }     You can see quite some processing and handling going on there. Yes this looks like real world code one did put together to make things work and he does not trust his callbacks. I guess these are event handlers which are optional and the delegates were extracted from an event to call them back later when necessary.  Lets see what the author of this code did intend:          private void HandleMessageCompletion(             TransactionHandler transactionHandler,             MessageCompletionHandler handler,             CurrentMessageInformation messageInfo,             ErrorCollector errors             )         {               // commit current pending transaction             transactionHandler.CallHandlerAndCommit(messageInfo, errors);               // We have an error for a null message do not send completion event             if (messageInfo.CurrentMessage == null)                 return;               // Send completion event in any case regardless of errors             handler.OnMessageCompleted(messageInfo, errors);               // put message back if queue is not transactional             transactionHandler.ResendMessageOnError(messageInfo.CurrentMessage, errors);         }   I did not bother to write the intention here again since the code should be pretty self explaining by now. I have used comments to explain the still nontrivial procedure step by step revealing the real intention about all this complex program flow. The original complexity of the problem domain does not go away but by applying the techniques of SRP (Single Responsibility Principle) and some functional style but we can abstract the necessary complexity away in useful abstractions which make it much easier to reason about it. Since most of the method seems to deal with errors I thought it was a good idea to encapsulate the error state of our current message in an ErrorCollector object which stores all exceptions in a list along with a description what the error all was about in the exception itself. We can log it later or not depending on the log level or whatever. It is really just a simple list that encapsulates the current error state.          
class ErrorCollector          {              List<Exception> _Errors = new List<Exception>();                public void Add(Exception ex, string description)              {                  ex.Data["Description"] = description;                  _Errors.Add(ex);              }                public Exception Last              {                  get                  {                      return _Errors.LastOrDefault();                  }              }                public bool HasError              {                  get                  {                      return _Errors.Count > 0;                  }              }          }   Since the error state is global we have two choices to store a reference in the other helper objects (TransactionHandler and MessageCompletionHandler)or pass it to the method calls when necessary. I did chose the latter one because a second argument does not hurt and makes it easier to reason about the overall state while the helper objects remain stateless and immutable which makes the helper objects much easier to understand and as a bonus thread safe as well. This does not mean that the stored member variables are stateless or thread safe as well but at least our helper classes are it. Most of the complexity is located the transaction handling I consider as a separate responsibility that I delegate to the TransactionHandler which does nothing if there is no transaction or Call the Before Commit Handler Commit Transaction Dispose Transaction if commit did throw In fact it has a second responsibility to resend the message if the transaction did fail. I did see a good fit there since it deals with transaction failures.          class TransactionHandler          {              TransactionScope _Tx;              Action<CurrentMessageInformation> _BeforeCommit;              OpenedQueue _MessageQueue;                public TransactionHandler(TransactionScope tx, Action<CurrentMessageInformation> beforeCommit, OpenedQueue messageQueue)              {                  _Tx = tx;                  _BeforeCommit = beforeCommit;                  _MessageQueue = messageQueue;              }                public void CallHandlerAndCommit(CurrentMessageInformation currentMessageInfo, ErrorCollector errors)              {                  if (_Tx != null && !errors.HasError)                  {                      try                      {                          if (_BeforeCommit != null)                          {                              _BeforeCommit(currentMessageInfo);                          }                            _Tx.Complete();                          _Tx.Dispose();                      }                      catch (Exception ex)                      {                          errors.Add(ex, "Failed to complete transaction, moving to error mode");                          Trace.TraceWarning("Disposing transaction in error mode");                          try                          {                              _Tx.Dispose();                          }                          catch (Exception ex2)                          {                              errors.Add(ex2, "Failed to dispose of transaction in error mode.");                          }                      }                  }              }                public void ResendMessageOnError(Message message, ErrorCollector errors)              {                  if (errors.HasError && !_MessageQueue.IsTransactional)                  {                      _MessageQueue.Send(message);         
         }              }          } If we need to change the handling in the future we have a much easier time to reason about our application flow than before. After we did complete our transaction and called our callback we can call the completion handler which is the main purpose of the HandleMessageCompletion method after all. The responsiblity o the MessageCompletionHandler is to call the completion callback and the failure callback when some error has occurred.            class MessageCompletionHandler          {              Action<CurrentMessageInformation, Exception> _MessageCompletedHandler;              Action<CurrentMessageInformation, Exception> _MessageProcessingFailure;                public MessageCompletionHandler(Action<CurrentMessageInformation, Exception> messageCompletedHandler,                                              Action<CurrentMessageInformation, Exception> messageProcessingFailure)              {                  _MessageCompletedHandler = messageCompletedHandler;                  _MessageProcessingFailure = messageProcessingFailure;              }                  public void OnMessageCompleted(CurrentMessageInformation currentMessageInfo, ErrorCollector errors)              {                  try                  {                      if (_MessageCompletedHandler != null)                      {                          _MessageCompletedHandler(currentMessageInfo, errors.Last);                      }                  }                  catch (Exception ex)                  {                      errors.Add(ex, "An error occured when raising the MessageCompleted event, the error will NOT affect the message processing");                  }                    if (errors.HasError)                  {                      SignalFailedMessage(currentMessageInfo, errors);                  }              }                void SignalFailedMessage(CurrentMessageInformation currentMessageInfo, ErrorCollector errors)              {                  try                  {                      if (_MessageProcessingFailure != null)                          _MessageProcessingFailure(currentMessageInfo, errors.Last);                  }                  catch (Exception moduleException)                  {                      errors.Add(moduleException, "Module failed to process message failure");                  }              }            }   If for some reason I did screw up the logic and we need to call the completion handler from our Transaction handler we can simple add to the CallHandlerAndCommit method a third argument to the MessageCompletionHandler and we are fine again. If the logic becomes even more complex and we need to ensure that the completed event is triggered only once we have now one place the completion handler to capture the state. During this refactoring I simple put things together that belong together and came up with useful abstractions. 
If you look at the original argument list of the HandleMessageCompletion method, I have put many things together:

    Message message is replaced by CurrentMessageInformation messageInfo, which encapsulates the Message message.
    TransactionScope tx, Action<CurrentMessageInformation> beforeTransactionCommit and OpenedQueue messageQueue are replaced by TransactionHandler transactionHandler, which encapsulates TransactionScope tx, OpenedQueue messageQueue and Action<CurrentMessageInformation> beforeTransactionCommit.
    Exception exception is replaced by ErrorCollector errors.
    Action<CurrentMessageInformation, Exception> messageCompleted is replaced by MessageCompletionHandler handler, which encapsulates Action<CurrentMessageInformation, Exception> messageCompleted and Action<CurrentMessageInformation, Exception> messageProcessingFailure.

The reason is simple: put the things that have relationships together and you will almost automatically find useful abstractions. I hope this makes sense to you. If you see a way to make it even simpler you can show Ayende your improved version as well.
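    To see how the pieces fit together, here is a short sketch (my addition, not from the post) of what the call site might look like, assuming the caller still has the original arguments in scope; note in particular that the incoming exception has to be recorded in the ErrorCollector before the handlers run:

        // Hypothetical wiring of the refactored helpers (names follow the classes above).
        var errors = new ErrorCollector();
        if (exception != null)
            errors.Add(exception, "Message handler failed before completion");

        var transactionHandler = new TransactionHandler(tx, beforeTransactionCommit, messageQueue);
        var completionHandler  = new MessageCompletionHandler(messageCompleted, MessageProcessingFailure);

        HandleMessageCompletion(transactionHandler, completionHandler, currentMessageInformation, errors);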

    Read the article

  • How to move 100mb hidden system reserved partition on Windows Server 2008 R2 to other drive?

    - by Artyom Krivokrisenko
    Hello! I have a server with two 1.5TB hard drives. I was going to install Windows Server 2008 R2 and create a software RAID1 using the Windows Disk Management utility. I installed Windows, opened the console, and this is what I see: http://i.imgur.com/KoC9a.png The setup program created a System Reserved Partition on my second HDD. I don't understand now how I can create the RAID1, because the space which was supposed to hold the copy of disk C is now used by this hidden partition. So is there any way to create a correct RAID1 now? Maybe it is possible to move this partition to Disk 0, where I have plenty of free space? Unfortunately I can't reinstall Windows and choose other options at the disk-management step of the installation, because the installation image is no longer connected to the server and I have no physical access to the server, only Remote Desktop.

    Read the article

  • How to make sure my GPO are applied in the correct order

    - by Florent
    I'm deploying VMware Player through a GPO, and I'd like to apply specific ACLs to the install folder and to the D:\VMWARE folder I'm creating during the Player install. I also have to grant the vmware user account the "can log on locally" right. To do so, I've created a GPO whose scope is the same as my VMware Player install GPO. This GPO works well, BUT when applied at the same time as my deployment GPO it seems to be applied before the deploy GPO, and then it: - Cannot find the vmware user account - Cannot find the c:\program files\vmware folder - Cannot find the D:\vmware folder because none of them have been created yet by the VMware Player install. And the only way for me to apply my security GPO is to run the gpupdate /force command manually, which I don't want to do (it's supposed to be an automatic install). I've checked the GPO processing order; my security GPO should be applied AFTER my install GPO (the security GPO is number 1, the deploy GPO is number two), but that doesn't seem to be the case. Does anyone have an idea how to solve this?

    Read the article

  • VSNewFile: A Visual Studio Addin to More Easily Add New Items to a Project

    - by InfinitiesLoop
    My first Visual Studio Add-in! Creating add-ins is pretty simple, once you get used to the CommandBar model it is using, which is apparently a general Office suite extensibility mechanism. Anyway, let me first explain my motivation for this. It started out as an academic exercise, as I have always wanted to dip my feet in a little VS extensibility. But I thought of a legitimate need for an add-in, at least in my personal experience, so it took on new life. But I figured I can’t be the only one who has felt this way, so I decided to publish the add-in, and host it on GitHub (VSNewFile on GitHub) hoping to spur contributions. Adding Files the Built-in Way Here’s the problem I wanted to solve. You’re working on a project, and it’s time to add a new file to the project. Whatever it is – a class, script, html page, aspx page, or what-have-you, you go through a menu or keyboard shortcut to get to the “Add New Item” dialog. Typically, you do it by right-clicking the location where you want the file (the project or a folder of it): This brings up a dialog the contains, well, every conceivable type of item you might want to add. It’s all the available item templates, which can result in anywhere from a ton to a veritable sea of choices. To be fair, this dialog has been revamped in Visual Studio 2010, which organizes it a little better than Visual Studio 2008, and adds a search box. It also loads noticeably faster.   To me, this dialog is just getting in my way. If I want to add a JavaScript script to my project, I don’t want to have to hunt for the script template item in this dialog. Yes, it is categorized, and yes, it now has a search box. But still, all this UI to swim through when all I need is a new file in the project. I will name it. I will provide the content, I don’t even need a ‘template’. VS kind of realizes this. In the add menu in a class library project, for example, there is a “Add Class…” choice. But all this really does is select that project item from the dialog by default. You still must wait for the dialog, see it, and type in a name for the file. How is that really any different than hitting F2 on an existing item? It isn’t. Adding Files the Hack Way What I often find myself doing, just to avoid going through this dialog, is to copy and paste an existing file, rename it, then “CTRL-A, DEL” the content. In a few short keystrokes I’ve got my new file. Even if the original file wasn’t the right type, it doesn’t matter – I will rename it anyway, including the extension. It works well enough if the place I am adding the file to doesn’t have much in it already. But if there are a lot of files at that level, it sucks, because the new file will have the name “Copy of xyz”, causing it to be moved into the ‘C’ section of the alphabetically sorted items, which might be far, far away from the original file (and so I tend to try and copy a file that starts with ‘C’ *evil grin*). Using ‘Export Template’ To be completely fair I should at least mention this feature. I’m not even sure if this is new in VS 2010 or not (I think so). But it allows you to export a project item or items, including potential project references required by it. Then it becomes a new item in the available ‘installed templates’. No doubt this is useful to help bootstrap new projects. But that still requires you to go through the ‘New Item’ dialog. 
Adding Files with VSNewFile So hopefully I have sufficiently defined the problem and got a few of you to think, “Yeah, me too!”… What VSNewFile does is let you skip the dialog entirely by adding project items directly to the context menu. But it does a bit more than that, so do read on. For example, to add a new class, you can right-click the location and pick that option. A new .cs file is instantly added to the project, and the new item is selected and put into the ‘rename’ mode immediately. The default items available are shown here. But you can customize them. You can also customize the content of each template. To do so, you create a directory in your documents folder, ‘VSNewFile Templates’. In there, you drop the templates you want to use, but you name them in a particular way. For example, here’s a template that will add a new item named “Add TITLE”. It will add a project item named “SOMEFILE.foo” (or ‘SOMEFILE1.foo’ if that exists, etc). The format of the file name is: <ORDER>_<KEY>_<BASE FILENAME>_<ICON ID>_<TITLE>.<EXTENTION> Where: <ORDER> is a number that lets you determine the order of the items in the menu (relative to each other). <KEY> is a case sensitive identifier different for each template item. More on that later. <BASE FILENAME> is the default name of the file, which doesn’t matter that much, since they will be renaming it anyway. <ICON ID> is a number the dictates the icon used for the menu item. There are a huge number of built-in choices. More on that later. <TITLE> is the string that will appear in the menu. And, the contents of the file are the default content for the item (the ‘template’). The content of the file can contain anything you want, of course. But it also supports two tokens: %NAMESPACE% and %FILENAME%, which will be replaced with the corresponding values. Here is the content of this sample: testing Namespace = %NAMESPACE% Filename = %FILENAME% I kind went back and forth on this. I could have made it so there’d be an XML or JSON file that defines the templates, instead of cramming all this data into the filename itself. I like the simplicity of this better. It makes it easy to customize since you can literally just throw these files around, copy them from someone else, etc, without worrying about merge data into a central description file, in whatever format. Here’s our new item showing up: Practical Use One immediate thing I am using this for is to make it easier to add very commonly used scripts to my web projects. For example, uh, say, jQuery? :) All I need to do is drop jQuery-1.4.2.js and jQuery-1.4.2.min.js into the templates folder, provide the order, title, etc, and then instantly, I can now add jQuery to any project I have without even thinking about “where is jQuery? Can I copy it from that other project?”   Using the KEY There are two reasons for the ‘key’ portion of the item. First, it allows you to turn off the built-in, default templates, which are: FILE = Add File (generic, empty file) VB = Add VB Class CS = Add C# Class (includes some basic usings) HTML = Add HTML page (includes basic structure, doctype, etc) JS = Add Script (includes an immediately-invoking function closure) To turn one off, just include a file with the name “_<KEY>”. For example, to turn off all the items except our custom one, you do this: The other reason for the key is that there are new Visual Studio Commands created for each one. This makes it possible to bind a keyboard shortcut to one of them. 
So you could, for example, have a keyboard combination that adds a new web page to your website, or a new CS class to your class library, etc. Here is our sample item showing up in the keyboard bindings option. Even though the contents of the template directory may change from one launch of Visual Studio to the next, the bindings will remain attached to any item with a particular key, thanks to it taking care not to lose keyboard bindings even though the commands are completely recreated each time. The Icon Face ID Visual Studio uses a Microsoft Office style add-in mechanism, I gather. There are a predetermined set of built-in icons available. You can use your own icons when developing add-ins, of course, but I’m no designer. I just wanted to find appropriate-ish icons for the built-in templates, and allow you to choose from an existing built-in icon for your own. Unfortunately, there isn’t a lot out there on the interwebs that helps you figure out what the built-in types are. There’s an MSDN article that describes at length a way to create a program that lists all the icons. But I don’t want to write a program to figure them out! Just show them to me! Sheesh :) Thankfully, someone out there felt the same way, and uses a novel hack to get the icons to show up in an outlook toolbar. He then painstakingly took screenshots of them, one group at a time. It isn’t complete though – there are tens of thousands of icons. But it’s good enough. If anyone has an exhaustive list, please let me, and the rest of the add-in community know. Icon Face ID Reference Installing the Add-in It will work with Visual Studio 2008 and Visual Studio 2010. Just unzip the release into your Documents\Visual Studio 20xx\Addins folder. It contains the binary and the Visual Studio “.addin” file. For example, the path to mine is: C:\Users\InfinitiesLoop\Documents\Visual Studio 2010\Addins Conclusion So that’s it! I hope you find it as useful as I have. It’s on GitHub, so if you’re into this kind of thing, please do fork it and improve it! Reference: VSNewFile on GitHub VSNewFile release on GitHub Icon Face ID Reference
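    Since the template file name carries all of the item's metadata, a natural question is how such a name gets pulled apart. The following is only a hedged sketch (my illustration, not VSNewFile's actual code) of parsing the <ORDER>_<KEY>_<BASE FILENAME>_<ICON ID>_<TITLE>.<EXTENTION> convention described above; the example file name, key and icon id are made up:

        // Parses e.g. "10_JQ_jQuery-1.4.2_582_Add jQuery.js" into its parts.
        static void ParseTemplateName(string fileName)
        {
            string extension = System.IO.Path.GetExtension(fileName);             // ".js"
            string stem      = System.IO.Path.GetFileNameWithoutExtension(fileName);
            string[] parts   = stem.Split(new[] { '_' }, 5);                       // at most 5 pieces

            int order       = int.Parse(parts[0]);   // relative position in the context menu
            string key      = parts[1];              // stable identifier used for the VS command / keyboard binding
            string baseName = parts[2];              // default file name offered in the project
            int iconFaceId  = int.Parse(parts[3]);   // built-in Office/VS icon face id
            string title    = parts[4];              // text shown in the context menu

            Console.WriteLine("{0}: '{1}' -> {2}{3} (icon {4}, key {5})",
                              order, title, baseName, extension, iconFaceId, key);
        }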

    Read the article

  • Oracle Launches Enterprise Manager Ops Center 12c at OpenWorld Japan

    - by Anand Akela
    Oracle Senior Vice President John Fowler and Oracle Vice President of Systems Management Steve Wilson unveiled Oracle Enterprise Manager Ops Center 12c at Oracle OpenWorld Tokyo, Japan, on the morning of April 4th. Oracle Enterprise Manager combines management of servers, operating systems, virtualization solutions for x86 and SPARC servers, firmware, storage, and network fabrics with Oracle Enterprise Manager Ops Center. Available at no additional cost as part of the Ops Center Anywhere Program, Oracle Enterprise Manager Ops Center 12c allows enterprises to accelerate mission-critical cloud deployment, unleash the power of Solaris 11, the first cloud OS, and simplify Oracle engineered systems management. Here are some of the resources for you to learn more about the new Oracle Enterprise Manager Ops Center 12c: Press Release: Introducing Oracle Enterprise Manager Ops Center 12c White paper: Oracle Enterprise Manager Ops Center 12c - Making Infrastructure-as-a-Service in the Enterprise a Reality Oracle Enterprise Manager Ops Center web page at Oracle Technology Network Join the Oracle Launch Webcast: Total Cloud Control for Systems on April 12th at 9 AM PST to learn more about Oracle Enterprise Manager Ops Center 12c from Oracle Senior Vice President John Fowler, Oracle Vice President of Systems Management Steve Wilson and a panel of Oracle executives. Stay connected with Oracle Enterprise Manager: Twitter | Facebook | YouTube | LinkedIn | Newsletter

    Read the article

  • SQL Server 2008 R2 SQL Server has encountered x occurrence(s) of I/O requests taking longer than 15 seconds to complete on file

    - by Natalia
    When I alter or create a stored procedure directly on the production or QA database, after a few seconds I start experiencing timeouts and the application becomes unavailable. The log file shows this error: SQL Server has encountered 3 occurrence(s) of I/O requests taking longer than 15 seconds to complete on file [C:\Program Files\Microsoft SQL Server\MSSQL10_50.MSSQLSERVER\MSSQL\DATA\QA_Database.ldf] in database [QA_Database] (9). The OS file handle is 0x0000000000000568. The offset of the latest long I/O is: 0x0000002821a200 We have SQL Server 2008 R2 installed, including the latest Service Pack. The production and staging environments are completely separate. I tried to reproduce it on QA, but to no avail. I have no clue what it could be. Appreciate your help.

    Read the article

< Previous Page | 514 515 516 517 518 519 520 521 522 523 524 525  | Next Page >