Search Results

Search found 3147 results on 126 pages for 'career guidance'.


  • Can you write files in Chrome 8?

    - by greggory.hz
    I'm wondering if, with the new File API exposed in Chrome (I'm not concerned with cross-browser support at this time), it would be possible to write back to files opened via a file input. You can see an example of what I'm trying to accomplish here: http://www.grehz.com/ide. I know I can use server-side scripts to dynamically create the files and allow the user to download them normally. I'm hoping that there's a way to accomplish this purely client side. I had read somewhere that you can write to files opened via a file input. I haven't been able to find any examples of this, though I have seen passing references to a FileWriter class. I wouldn't be at all surprised if this isn't possible, though (it seems likely that there are security issues with it). Just looking for some guidance or resources. UPDATE: I was reading here: http://dev.w3.org/2009/dap/file-system/file-writer.html As I was playing around in Chrome, it looks like FileSaver and FileWriter are not implemented, but BlobBuilder is. I can call getBlob() on the BB object; is there any way I can then save that without FileSaver or FileWriter?
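
    One client-side route that did exist in that era, sketched below: build the Blob with the vendor-prefixed BlobBuilder, mint an object URL for it, and open that URL so the user can save the result themselves. This is a minimal sketch, assuming a Chrome build that exposes WebKitBlobBuilder and createObjectURL (on window.URL or window.webkitURL, depending on version); the contents and MIME type are placeholders.

      // Assemble the file contents client side (BlobBuilder was vendor-prefixed in Chrome).
      var bb = new (window.BlobBuilder || window.WebKitBlobBuilder)();
      bb.append("contents generated in the browser\n");
      var blob = bb.getBlob("text/plain");

      // An object URL lets the browser address the Blob like a resource;
      // opening it in a tab lets the user save it with Ctrl+S.
      var urlApi = window.URL || window.webkitURL;
      window.open(urlApi.createObjectURL(blob));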

    Read the article

  • jquery.clone() and ASP.NET Forms

    - by Jeff
    So I have a page where I would like to be able to add multiple, dynamic users to a record in a database. Here's the rough starting page:

      <div id="records">
        <div id="userRecord">
          Name: <asp:TextBox runat="server" ID="objNameTextBox"></asp:TextBox> <br />
          Phone Number: <asp:TextBox runat="server" ID="objPhoneNumberTextBox"></asp:TextBox> <br />
        </div>
      </div>

    And the jQuery:

      $(function () {
        $(".button").button().click(function (event) {
          addnew();
          event.preventDefault();
        });
      });

      function addnew() {
        $('#userRecord').clone().appendTo('#records');
      }

    So my question is: what do I use within ASP.NET to be able to poll all of the data in the form and add a unique record for each #userRecord div within the #records div? Yes, I should change the userRecord to a class; I will deal with that. This is just simple testing here. Should I look into JSON for this type of function? I'm not familiar with it but could figure it out if that is indeed my best option. Thanks for the guidance!
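
    A sketch of one server-side approach, not from the original post: cloned asp:TextBox controls won't get usable server IDs, so one option is to switch the row to plain inputs that share a name attribute, then read the repeated values out of Request.Form on postback. The names userName/userPhone and the InsertUserRecord call are made up for illustration.

      <!-- each cloned row repeats the same input names -->
      <div class="userRecord">
        Name: <input type="text" name="userName" />
        Phone Number: <input type="text" name="userPhone" />
      </div>

    Then, in the postback handler in the code-behind:

      // GetValues returns one entry per cloned row, in document order,
      // so index i corresponds to one user record.
      string[] names  = Request.Form.GetValues("userName");
      string[] phones = Request.Form.GetValues("userPhone");
      for (int i = 0; names != null && i < names.Length; i++)
      {
          InsertUserRecord(names[i], phones[i]);   // hypothetical data-access call
      }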

    Read the article

  • How to arrange HTML5 web page elements?

    - by Argus9
    I'm trying to make a sample web page to get acquainted with HTML5, and I'd like to try replicating Facebook's page layout; that is, the header that spans the entire width of the screen, a small footer at the bottom, and a three-column main body, consisting of a list of links on the left, the main content in the middle, and an optional section on the right (for ads, frames, etc.). It's neat and displays well in multiple window sizes. So far, I've tried to accomplish this with a <header>, <footer> and a <nav> and <section> block, respectively. There are a few anomalies with the page, however. The footer (which contains a simple text block with copyright info) appears at the top-right of the page below the header when the window is maximized. On the other hand, when there isn't enough space to display everything in the window, it places the main body text below the section. In other words, it keeps moving elements around to fit the window. Could someone please tell me how I'd achieve the look I'm going for? I've tried playing around with a few CSS attributes I read about through Google, but I'm pretty sure I don't know what I'm doing, and could really use some guidance. Thank you!
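
    The moving-around symptom is what unstyled <header>/<nav>/<section> elements do by default: they are plain blocks with no assigned widths, so they reflow wherever the window allows. A sketch of the classic float-based version of this layout, assuming the aside comes before the section in the markup; all widths are placeholders:

      /* older browsers need the HTML5 elements forced to block */
      header, footer, nav, section, aside { display: block; }

      header, footer { width: 100%; clear: both; }
      nav            { float: left;  width: 180px; }
      aside          { float: right; width: 240px; }
      /* side margins at least as wide as the floated columns */
      section        { margin: 0 260px 0 200px; }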

    Read the article

  • Javascript mouseover image failure

    - by CaptainTeancum
    This is what I have so far. I have been watching a ton of JavaScript videos and I feel I mimicked them closely, but this is still not functioning as I want. That being: it should change from logo1 to logo2 on mouseover. This is homework, but homework that is important to me, so any help or guidance would be appreciated.

      </head>
      <body>
      <p>
      <div>
      <script type="text/javascript">
        // Pre load images for rollover
        function imgOver(id) {
          document.getElementById(id).src="logo1.jpg";
        }
        function imgOut(id) {
          document.getElementById(id).src="logo2.jpg";
        }
      </script>
      <a href="#" onmouseover="imgOver('logo1');" onmouseout="imgOut('logo2')">
        <img alt="logo" height="150" src="images/Logo1.jpeg" width="110" />
      </a>
      </div>
      </body>
      </html>
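
    Two bugs stand out: the handlers pass 'logo1'/'logo2' as element IDs, but the <img> has no id attribute at all, and the two functions set the src values the wrong way round for the stated goal (hover should show logo2). A corrected sketch; the id "logo" and the image paths are illustrative, so match them to the real files:

      <script type="text/javascript">
        function imgOver(id) {
          document.getElementById(id).src = "images/logo2.jpeg"; // hover state
        }
        function imgOut(id) {
          document.getElementById(id).src = "images/logo1.jpeg"; // normal state
        }
      </script>
      <a href="#" onmouseover="imgOver('logo');" onmouseout="imgOut('logo');">
        <img id="logo" alt="logo" height="150" src="images/logo1.jpeg" width="110" />
      </a>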

    Read the article

  • Outlook Anywhere inconsistencies with authentication methods

    - by gravyface
    So I've read this question and attempted just about every other workaround I've found online. Problem seems completely illogical to me, anyways: SBS 2011, vanilla install; haven't touched anything in IIS or Exchange outside of what's been done through the checklist (brand new domain, completely new customer) except to import an existing wildcard certificate for *.example.com (which is valid; Remote Web Workplace and Outlook Web Access work fine). On the two test machines and one production machine running a mixture of Windows XP Pro, Windows 7 and Outlook 2003 through to 2010, I've had no problem saving the password after configuring Outlook Anywhere using the wrong authentication method. I repeat, I have had no issues using the wrong authentication method on these test machines; password saves the first time, no problem, can verify it exists in the credentials manager (Start > Run > control userpasswords2), close Outlook, reboot, go make a sammie, come back, credentials are still saved. When I say wrong, it's because I was choosing NTLM and Exchange (under Exchange Console > Server Configuration > Client Access) was set by default to use Basic. On two completely different machines setup by a co-worker, they had (under my guidance) used NTLM as well... except that frustratingly, Outlook would always ask for a password. One machine was Windows XP with Outlook 2010, the other was Windows 7 with Outlook 2003. When these two machines were set to use Basic -- the correct setting -- the option to save was there and now works without issue. Puzzled by how my machines could possibly work with the wrong authentication, I then went into one of them and changed the authentication method to Basic. Now here's where it gets a little crazy: if I go under Outlook and change the authentication to use the correct setting (Basic) it fails to save the password and Outlook prompts every time (without a "remember me" checkbox). I have not had a chance to change it to Basic on the other two machines to see if this is just a fluke or not, but something just isn't right here. My two hunches are either a missing/installed KB update or perhaps a local security policy. I should add that none of the 5 test machines in the equation here have ever been joined to the domain.

    Read the article

  • How much abstraction is too much?

    - by Daniel Bingham
    In an Object Oriented Program: How much abstraction is too much? How much is just right? I have always been a nuts and bolts kind of guy. I understood the concept behind high levels of encapsulation and abstraction, but always felt instinctively that adding too much would just confuse the program. I always tried to shoot for an amount of abstraction that left no empty classes or layers. And where in doubt, instead of adding a new layer to the hierarchy, I would try and fit something into the existing layers. However, recently I've been encountering more highly abstracted systems. Systems where everything that could require a representation later in the hierarchy gets one up front. This leads to a lot of empty layers, which at first seems like bad design. However, on second thought I've come to realize that leaving those empty layers gives you more places to hook into in the future without much refactoring. It leaves you greater ability to add new functionality on top of the old without doing nearly as much work to adjust the old. The two risks of this seem to be that you could get the layers you need wrong. In this case one would wind up still needing to do substantial refactoring to extend the code and would still have a ton of never-used layers. But depending on how much time you spend coming up with the initial abstractions, the chance of screwing it up, and the time that could be saved later if you get it right, it may still be worth it to try. The other risk I can think of is the risk of overdoing it and never needing all the extra layers. But is that really so bad? Are extra class layers really so expensive that it is much of a loss if they are never used? The biggest expense and loss here would be time that is lost up front coming up with the layers. But much of that time still might be saved later when one can work with the abstracted code rather than more low-level code. So when is it too much? At what point do the empty layers and extra "might need" abstractions become overkill? How little is too little? Where's the sweet spot? Are there any dependable rules of thumb you've found in the course of your career that help you judge the amount of abstraction needed?

    Read the article

  • How to properly add .NET assemblies to Powershell session?

    - by amandion
    I have a .NET assembly (a DLL) which is an API to backup software we use here. It contains some properties and methods I would like to take advantage of in my PowerShell script(s). However, I am running into a lot of issues with first loading the assembly, then using any of the types once the assembly is loaded. The complete file path is C:\rnd\CloudBerry.Backup.API.dll. In PowerShell I use:

      $dllpath = "C:\rnd\CloudBerry.Backup.API.dll"
      Add-Type -Path $dllpath

    I get the error below:

      Add-Type : Unable to load one or more of the requested types. Retrieve the LoaderExceptions property for more information.
      At line:1 char:9
      + Add-Type <<<<  -Path $dllpath
          + CategoryInfo          : NotSpecified: (:) [Add-Type], ReflectionTypeLoadException
          + FullyQualifiedErrorId : System.Reflection.ReflectionTypeLoadException,Microsoft.PowerShell.Commands.AddTypeCommand

    Using the same cmdlet on another .NET assembly, DotNetZip, which has examples of using the same functionality on its site, also does not work for me. I eventually found that I am seemingly able to load the assembly using reflection:

      [System.Reflection.Assembly]::LoadFrom($dllpath)

    Although I don't understand the difference between the methods Load, LoadFrom, and LoadFile, that last method seems to work. However, I still seem to be unable to create instances or use objects. Each time I try, I get errors that describe that PowerShell is unable to find any of the public types. I know the classes are there:

      $asm = [System.Reflection.Assembly]::LoadFrom($dllpath)
      $cbbtypes = $asm.GetExportedTypes()
      $cbbtypes | Get-Member -Static

      ---- start of excerpt ----
         TypeName: CloudBerryLab.Backup.API.BackupProvider

      Name                MemberType Definition
      ----                ---------- ----------
      PlanChanged         Event      System.EventHandler`1[CloudBerryLab.Backup.API.Utils.ChangedEventArgs] PlanChanged(Sy...
      PlanRemoved         Event      System.EventHandler`1[CloudBerryLab.Backup.API.Utils.PlanRemoveEventArgs] PlanRemoved...
      CalculateFolderSize Method     static long CalculateFolderSize()
      Equals              Method     static bool Equals(System.Object objA, System.Object objB)
      GetAccounts         Method     static CloudBerryLab.Backup.API.Account[], CloudBerry.Backup.API, Version=1.0.0.1, Cu...
      GetBackupPlans      Method     static CloudBerryLab.Backup.API.BackupPlan[], CloudBerry.Backup.API, Version=1.0.0.1,...
      ReferenceEquals     Method     static bool ReferenceEquals(System.Object objA, System.Object objB)
      SetProfilePath      Method     static System.Void SetProfilePath(string profilePath)
      ---- end of excerpt ----

    Trying to use static methods fails, and I don't know why:

      [CloudBerryLab.Backup.API.BackupProvider]::GetAccounts()
      Unable to find type [CloudBerryLab.Backup.API.BackupProvider]: make sure that the assembly containing this type is loaded.
      At line:1 char:42
      + [CloudBerryLab.Backup.API.BackupProvider] <<<< ::GetAccounts()
          + CategoryInfo          : InvalidOperation: (CloudBerryLab.Backup.API.BackupProvider:String) [], RuntimeException
          + FullyQualifiedErrorId : TypeNotFound

    Any guidance appreciated!
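
    The first error message itself points at the next diagnostic step: catch the ReflectionTypeLoadException and print its LoaderExceptions, which usually name the missing dependency that is really at fault. A sketch:

      try {
          Add-Type -Path $dllpath
      }
      catch [System.Reflection.ReflectionTypeLoadException] {
          # Each entry names a type that failed to load and, crucially, why.
          $_.Exception.LoaderExceptions | ForEach-Object { Write-Host $_.Message }
      }

    If the messages name a dependency, load that assembly first, or point Add-Type at it with its -ReferencedAssemblies parameter, before loading the API DLL.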

    Read the article

  • Persistent Issues on small business network using Cisco 871W and Catalyst Express 500

    - by Ben Campbell
    Being the most qualified (read: still not qualified) to solve our persistent network issues, I've turned to serverfault for guidance. I've done some searching, reading related documentation on cisco.com, and tried a bit of troubleshooting. Here is the config:

    - 100mb synchronous connection from a business internet provider (tested multiple times at 100meg at the source)
    - Cisco 871W wireless point & router is where the WAN connection starts (this serves all our wireless). The only wired connection in the 871W is the Catalyst switch listed below.
    - Cisco Catalyst Express 500 (24TT) is where all the wired connections terminate.
    - About 20 Windows workstations and servers (AD/Webservers only).
    - Some services in EC2 including mail and other web servers/apps.
    - I've been TOLD cabling internally should be gigabit-ready.

    Here are the problems:

    - generally slow download rates from the internet to the desktop/laptop
    - frequent "page cannot be displayed" errors in browsers; sometimes 3 or 4 reloads are necessary, and often CSS won't load or other content requiring the browser to connect to a different server.
    - slow speed within the LAN from workstation to workstation copying files. I would expect extremely fast data transfer workstation to workstation / server to workstation in this simple network.

    Several things I need to admit: I'm not primarily a network guy. Funding is relatively low; I need to be the guy that finds the solution. I understand most of the terminology and most of the technology. Implementation is where I fail due to lack of experience. Getting to the point: I'm wondering whether experienced network admins think that our small network should be sufficiently served with our current hardware if configured properly... or if we should purchase new equipment and start fresh? If starting fresh is the plan, whatever that new equipment may be is a likely different question entirely. If I haven't provided enough information, I will happily do some troubleshooting and update with the results. I have experience using wireshark and some other tools. Please let me know what you think would be most helpful, and thanks in advance.

    EDIT: I forgot to add that the Cisco appliance will not finish loading the SDM Express console. It hangs every time at "populating modules... DHCP". It eventually crashes and closes. I've rebooted the hardware and this still happens.

    Read the article

  • How to improve this bash shell script for turning hardlinks into symlinks?

    - by MountainX
    This shell script is mostly the work of other people. It has gone through several iterations, and I have tweaked it slightly while also trying to fully understand how it works. I think I understand it now, but I don't have confidence to significantly alter it on my own and risk losing data when I run the altered version. So I would appreciate some expert guidance on how to improve this script. The changes I am seeking are:

    1. Make it even more robust to any strange file names, if possible. It currently handles spaces in file names, but not newlines. I can live with that (because I try to find any file names with newlines and get rid of them).
    2. Make it more intelligent about which file gets retained as the actual inode content and which file(s) become symlinks. I would like to be able to choose to retain the file that is either (a) the shortest path, (b) the longest path, or (c) has the filename with the most alpha characters (which will probably be the most descriptive name).
    3. Allow it to read the directories to process either from parameters passed in or from a file.
    4. Optionally, write a log of all changes and/or all files not processed.

    Of all of these, #2 is the most important for me right now. I need to process some files with it and I need to improve the way it chooses which files to turn into symlinks. (I tried using things like the find option -depth without success.) Here's the current script:

      #!/bin/bash

      # clean up known problematic files first.
      ## find /home -type f -wholename '*Icon*
      ## *' -exec rm '{}' \;

      # Configure script environment
      # ~~~~~~~~~~~~~~~~~~~~~~~~~~~~
      set -o nounset
      dir='/SOME/PATH/HERE/'

      # For each path which has multiple links
      # ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
      # (except ones containing newline)
      last_inode=
      while IFS= read -r path_info
      do
          #echo "DEBUG: path_info: '$path_info'"
          inode=${path_info%%:*}
          path=${path_info#*:}
          if [[ $last_inode != $inode ]]; then
              last_inode=$inode
              path_to_keep=$path
          else
              printf "ln -s\t'$path_to_keep'\t'$path'\n"
              rm "$path"
              ln -s "$path_to_keep" "$path"
          fi
      done < <( find "$dir" -type f -links +1 ! -wholename '*
      *' -printf '%i:%p\n' | sort --field-separator=: )

      # Warn about any excluded files
      # ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
      buf=$( find "$dir" -type f -links +1 -path '*
      *' )
      if [[ $buf != '' ]]; then
          echo 'Some files not processed because their paths contained newline(s):'$'\n'"$buf"
      fi

      exit 0
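
    For request #2, a sketch of one way to make the shortest path win: pad each path's length into the sort key, so that within an inode group the first record the loop sees is the shortest path. This keeps the script's existing assumptions that paths contain no colons and no newlines.

      # Emit inode:padded-length:path so that, after sorting, the first path in
      # each inode group is the shortest one.
      find "$dir" -type f -links +1 ! -wholename $'*\n*' -printf '%i:%p\n' \
        | awk -F: '{ printf "%s:%08d:%s\n", $1, length($2), $2 }' \
        | sort -t: -k1,1n -k2,2n

      # In the read loop, strip the extra field before using the path:
      #   inode=${path_info%%:*}
      #   rest=${path_info#*:}
      #   path=${rest#*:}

    Sorting by the padded length descending (sort -k2,2nr) would keep the longest path instead, covering option (b) the same way.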

    Read the article

  • !! 0xc01a00d !! aka Vista won't boot

    - by Chris
    One of my users has an HP DV5-1235dx laptop running Windows Vista Professional x64. Last night, our WSUS server pushed out a few updates including "Security Update for Windows Vista for x64-based Systems (KB960859)". When we try to boot the laptop today, a black screen with white text comes up displaying:

      xxx/169894 (something)

    where xxx increments rapidly and something is some DLL or registry key. Eventually that stops and the screen displays:

      !! 0xc01a00d !!
      35566/169894 (\Registry\Machine\COMPONENTS\DerivedDat...)

    No other computers that received this update are displaying the same error. So far I've tried running CHKDSK off of HBCD. It repaired a thing or two, but the computer still doesn't boot. I tried repairing the Windows install from the Vista CD, but I get a black screen with white text displaying something along the lines of:

      0 No Emulation System Type 00
      1 No Emulation System Type 00
      Select one of the above

    Booting in Last Known Good Configuration doesn't work. Booting in Safe Mode freezes at:

      Loading Windows Files [snip]
      Loaded: \windows\system32\drivers\crcdisk.sys
      Please wait...

    My next step is trying to boot Safe Mode with Command Prompt and try to run rstrui.exe. While I do that, does anybody have any guidance?

    Edit: Booting into Safe Mode with Command Prompt will not work. See Booting in Safe Mode above.

    Edit 2: I managed to boot from the Vista DVD. I ran the system repair, and now I get a black screen with white text saying:

      !! 0xc0000034 !!
      290/169894 (_0000000000000000.cdf-ms)

    Edit 3: I ran the system repair again, and it attempted to repair my hard drive. It failed.

      Problem Signature:
      Problem Event Name:    Startup Repair V2
      Problem Signature 01:  External Media
      Problem Signature 02:  6.0.6001.18000.6.0.6001.18000
      Problem Signature 03:  4
      Problem Signature 04:  196611
      Problem Signature 05:  CorruptVolume
      Problem Signature 06:  NoBootFailure
      Problem Signature 07:  0
      Problem Signature 08:  0
      Problem Signature 09:  unknown
      Problem Signature 10:  1168
      OS Version:            6.0.6002.2.2.0.256.1
      Locale ID:             1033

    Answer: Parts of the hard drive are corrupted. All of my user's code was checked in, so I'm just going to format the box.

    Read the article

  • Uninstall php and nginx or fix setup

    - by jreed121
    First off, I'm a huge linux noob - sorry... I'm trying to set up nginx with php-fpm on Debian and I'm pretty sure that I've completely screwed it up. nginx seems to be running fine because I can hit it from a web browser and it loads the stock "Welcome to nginx!" page. I'm not so sure about php-fpm though. When I try something like:

      # restart php-fpm

    I get:

      bash: restart: command not found

    First off, php-fpm somehow got installed as php5-fpm when I do root@server:/etc/init.d# ls, which seems to contradict every tutorial and help doc I've read (it's supposed to be 'php-fpm'). I can restart it with this:

      service php5-fpm restart

    And if I just enter the package name 'php5-fpm', I get this:

      root@server:~# php5-fpm
      [17-Nov-2012 23:15:36] NOTICE: PHP message: PHP Warning:  PHP Startup: Unable to load dynamic library '/usr/lib/php5/20100525/suhosin.so' - /usr/lib/php5/20100525/suhosin.so: cannot open shared object file: No such file or directory in Unknown on line 0
      [17-Nov-2012 23:15:36] ERROR: An another FPM instance seems to already listen on /var/run/php5-fpm.sock
      [17-Nov-2012 23:15:36] ERROR: FPM initialization failed

    The root for nginx is /usr/share/nginx/html. When I try to navigate to a .php file in there with my web browser, it tries to download the file instead of interpreting it. I would like this folder to be in my user's home directory, i.e. /home/administrator/www or /home/nginx/www. I know in order to do this I need to modify nginx.conf, but I find that configuration file difficult to understand. I suppose the fact that my .php scripts aren't being handled is my bigger problem anyways. When I try to see what's running on port 9000 (php-fpm's default port) with lsof -i :9000, it returns nothing; I guess that indicates it isn't listening. Then I head over to vim /etc/php5/fpm/php-fpm.conf and there is nowhere to designate a port number. So should I just uninstall everything and start from scratch? If so, how do I clean it all up? Any suggestions for a tutorial once I'm ready to try again? Should I attempt to troubleshoot this mess? If so, where should I start? Sorry guys, I'm feeling pretty stupid and lost right now. I'm not sure what my next steps in trying to resolve this issue are. I realize that this is a horrible question for this type of Q&A site, but I'd really appreciate any guidance.
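
    On the .php-files-download symptom specifically: stock nginx has no PHP handler, so requests for .php files fall through to the static file handler until a fastcgi location is added. A sketch of the relevant server block, assuming the socket path taken from the FPM error above and the desired home-directory root (adjust both to taste); this also explains the lsof mystery, since FPM is listening on a unix socket rather than port 9000:

      server {
          listen 80;
          root /home/administrator/www;   # instead of /usr/share/nginx/html
          index index.php index.html;

          location ~ \.php$ {
              include fastcgi_params;
              # php5-fpm listens on a unix socket here, not TCP port 9000
              fastcgi_pass unix:/var/run/php5-fpm.sock;
              fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
          }
      }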

    Read the article

  • Easiest Way To Get Started In Dot Net

    - by Avery Payne
    Ok, so the initial search in StackOverflow shows nothing related for this question. So here it goes: Let's pretend for a moment that you're just getting started in a career in computer programming. Let's say that, for whatever reason, you decide to use the .Net framework as a basis for your programming. Let's also say that you've been exposed to some programming background, but not one in .Net, so it seems foreign to you at first. And lastly, you don't have the benefit of 25 years of exposure to the Win32 API, which explains why it seems so foreign to you when you start looking at it. So the questions are:

    1. What is a comprehensive overview of what .Net is? It appears to be a combination of a runtime environment, a set of languages, a common set of libraries, and perhaps a few other things... so it's about as clear as mud. Specifically, what are the key components to .Net?
    2. What is the easiest way to understand .Net programming with regard to available APIs?
    3. Which language would best suit beginning programming out of the "stock" languages that Microsoft has to offer? (C++, C#, VB, etc.)
    4. What are some differences between .Net programming and programming in a procedural language (aka Pascal, Modula, etc.)?
    5. What are some differences between .Net programming and programming in a "traditional" object-oriented language? (aka Smalltalk, Java, Python, Ruby, etc.)
    6. As I currently understand it, the CLR provides a foundation for all of the other languages to run on. What are some of the inherent limitations of the CLR?
    7. Given the enormous amount of API to cover, would it even be worth learning a .Net language (using the Microsoft APIs) given that you would not have prior exposure to Win32 programming?
    8. Let's say you write a for-profit program with .Net. Can you resell the program without running afoul of licensing issues?
    9. Let's say you write a gratis (free) program with .Net. Can you offer the program to the public under a "free" license (GPL, BSD, Artistic, etc.) without running afoul of licensing issues?

    Thank you in advance for your patience.

    Read the article

  • Is Ogre's use of Exceptions a good way of using them?

    - by identitycrisisuk
    I've managed to get through my C++ game programming career so far virtually never touching exceptions, but recently I've been working on a project with the Ogre engine and I'm trying to learn properly. I've found a lot of good questions and answers here on the general usage of C++ exceptions, but I'd like to get some outside opinions from here on whether Ogre's usage is good and how best to work with them. To start with, quoting from Ogre's documentation of its own Exception class: "OGRE never uses return values to indicate errors. Instead, if an error occurs, an exception is thrown, and this is the object that encapsulates the detail of the problem. The application using OGRE should always ensure that the exceptions are caught, so all OGRE engine functions should occur within a try{} catch(Ogre::Exception& e) {} block." Really? Every single Ogre function could throw an exception and be wrapped in a try/catch block? At present this is handled in our usage of it by a try/catch in main that will show a message box with the exception description before exiting. This can be a bit awkward for debugging though, as you don't get a stack trace, just the function that threw the error; more important is the function from our code that called the Ogre function. If it was an assert in Ogre code then it would go straight to the code in the debugger and I'd be able to find out what's going on much more easily. I don't know if I'm missing something that would allow me to debug exceptions already? I'm starting to add a few more try/catch blocks in our code now, generally thinking about whether it matters if the Ogre function throws an exception. If it's something that will stop everything working, then let the main try/catch handle it and exit the program. If it's not of great importance, then catch it just after the function call and let the program continue. One recent example of this was building up a vector of the vertex/fragment program parameters for materials applied to an entity; if a material didn't have any parameters then it would throw an exception, which I caught and then ignored, as it didn't need to add to my list of parameters. Does this seem like a reasonable way of dealing with things? Any specific advice for working with Ogre is much appreciated.
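
    A sketch of the two-tier pattern described above, for concreteness. The app.go() call and the params variable are stand-ins; getFullDescription() and Pass::getVertexProgramParameters() are Ogre API names as I recall them, so verify them against your Ogre version.

      #include <OgreException.h>
      #include <iostream>

      // Tier 1, in main(): anything unrecoverable lands here, as the docs suggest.
      try {
          app.go();  // stand-in for your application object's run loop
      } catch (const Ogre::Exception& e) {
          // getFullDescription() includes the error code, function and source
          // location, which recovers part of the missing stack-trace context.
          std::cerr << e.getFullDescription() << std::endl;
          return 1;
      }

      // Tier 2, close to the call, when failure is survivable
      // (the material-parameter case from the post):
      try {
          params = pass->getVertexProgramParameters();
      } catch (const Ogre::Exception&) {
          // This pass has no program attached; skipping it is the desired behaviour.
      }

    On the debugging wish: most debuggers can be told to break at the point of throw (for example, first-chance exception settings in Visual Studio, or "catch throw" in gdb), which gets you to the offending call much like an assert would.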

    Read the article

  • My server is slower than the average user's computer, should I still offload Access queries to SQL Server? [closed]

    - by andrewb
    Possible Duplicate: How do you do Load Testing and Capacity Planning for Databases. I have a database set up with MS Access 2007 front ends and an SQL Server 2005 back end. At the moment, all the queries are saved in the front end, as I've only recently moved to an SQL Server backend. I'm wondering how many of those queries I should save as stored procedures/views on SQL Server. About the system: the number of concurrent users is only a handful, though it could be as high as 25 at one time (very unlikely). The average computer has an Intel i3-2120 CPU running at 3.3 GHz, which gets a PassMark score of 3,987, whilst the server has an Intel Xeon E5335 running at 2.0 GHz, which gets a PassMark score of 2,637. Always an awkward situation when an i3 outperforms a Xeon... though the i3 is from Q1 2011 and the Xeon is Q2 2009. There is potential for a server upgrade in the future, though it wouldn't come easily. I'm inclined to move the queries to the back end, as they are beginning to take noticeable time and I figure that is a better way of doing things. I like the idea of throwing everything at the server, then pushing for a server upgrade. It makes more sense in my mind to be upgrading one server rather than 30 PCs. Or am I being overzealous? Why my question isn't a duplicate: it seems that my question has been misinterpreted and labelled a duplicate of quite a different question, one about testing and capacity planning. I'll try to explain how my question is very different from the linked question. The crux of my question is something like "Even though my server is technically slower, is it better to have it doing more of the queries?" There are two ways that people could have answered this: (1) I agree the server is going to be slower, but the extra benefits of such and such (like the less Access the better) mean you should move most to the server anyway (or: no, it doesn't outweigh the benefit, keep them in Access). (2) Actually the server will be faster because of such and such. I'm hoping that people out there could provide some answers like this, and the question in the dupe link doesn't really provide either of these answers. Ok sure, I suppose I could do extensive performance testing to compare Access queries running on a local machine to SQL Server queries running on the server, but that sounds like a very hard task (particularly performance testing of Access) compared to someone giving some quick general guidance, and again, my question is looking for a lot more than immediate performance benefit.

    Read the article

  • RAID 50 24Port Fast Writes Slow Reads - Ubuntu

    - by James
    What is going on here?! I am baffled.

      serveradmin@FILESERVER:/Volumes/MercuryInternal/test$ sudo dd if=/dev/zero of=/Volumes/MercuryInternal/test/test.fs bs=4096k count=10000
      10000+0 records in
      10000+0 records out
      41943040000 bytes (42 GB) copied, 57.0948 s, 735 MB/s

      serveradmin@FILESERVER:/Volumes/MercuryInternal/test$ sudo dd if=/Volumes/MercuryInternal/test/test.fs of=/dev/null bs=4096k count=10000
      10000+0 records in
      10000+0 records out
      41943040000 bytes (42 GB) copied, 116.189 s, 361 MB/s

    OF NOTE: My RAID50 is 3 sets of 8 disks. This might not be the best config for SPEED. OS: Ubuntu 12.04.1 x64. Hardware RAID: RocketRaid 2782, a 24-port controller. Hard drive type: Seagate Barracuda ES.2 1TB. Drivers: v1.1 open-source Linux drivers. So 24 x 1TB drives, partitioned using parted. Filesystem is ext4. I/O scheduler WAS noop, but I have changed it to deadline with no seeming performance benefit/cost.

      serveradmin@FILESERVER:/Volumes/MercuryInternal/test$ sudo gdisk -l /dev/sdb
      GPT fdisk (gdisk) version 0.8.1

      Partition table scan:
        MBR: protective
        BSD: not present
        APM: not present
        GPT: present

      Found valid GPT with protective MBR; using GPT.
      Disk /dev/sdb: 41020686336 sectors, 19.1 TiB
      Logical sector size: 512 bytes
      Disk identifier (GUID): 95045EC6-6EAF-4072-9969-AC46A32E38C8
      Partition table holds up to 128 entries
      First usable sector is 34, last usable sector is 41020686302
      Partitions will be aligned on 2048-sector boundaries
      Total free space is 5062589 sectors (2.4 GiB)

      Number  Start (sector)    End (sector)  Size       Code  Name
         1            2048     41015625727   19.1 TiB    0700  primary

    To me this should be working fine. I can't think of anything that would be causing this other than fundamental driver errors? I can't seem to get much, if any, higher than the 361MB a second. Is this hitting the "SATA2" link speed, which it shouldn't, given it is a PCIe 2.0 card? Or maybe some caching quirk? I do have Write Back enabled. Does anyone have any suggestions? Tests for me to perform? Or if you require more information, I am happy to provide it! This is a video fileserver for editing machines, so we have a preference for FAST reads over writes. I was just expecting more from RAID 50 and 24 drives together...

    EDIT (hdparm results):

      serveradmin@FILESERVER:/Volumes/MercuryInternal$ sudo hdparm -Tt /dev/sdb
      /dev/sdb:
       Timing cached reads:   17458 MB in  2.00 seconds = 8735.50 MB/sec
       Timing buffered disk reads:  884 MB in  3.00 seconds = 294.32 MB/sec

    EDIT 2 (config details): I am also using a RAID block size of 256K. I was told a larger block size is better for larger files (in my case, large video files).

    EDIT 3: Bonnie++ results to follow. Would love some guidance with this!
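
    For the Bonnie++ run mentioned in the last edit, a sketch of an invocation sized for this box; the 64g test size (it should be at least twice RAM so the page cache cannot absorb the reads) and the serveradmin user are assumptions:

      # -s: test size; -n 0: skip the small-file creation tests;
      # -f: skip the slow per-character phases; -m: label for the report
      bonnie++ -d /Volumes/MercuryInternal/test -s 64g -n 0 -f -m FILESERVER -u serveradmin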

    Read the article

  • Providing reverse records for records that map to ISP IP

    - by thejartender
    I have been instructed to use my ISP IP (as a temporary fix) for mapping my name server and domain records, as my router dishes out RFC 1918 addresses to the devices in my network where I am running an Ubuntu server, my router, and my development laptop. So I have changed this fixed zone:

      $TTL 3H
      @               IN SOA   ns.thejarbar.org. email. (
                          13112012 28800 3600 604800 38400 );
      thejarbar.org.  IN A     10.0.0.42
      @               IN NS    ns.thejarbar.org.
      yuccalaptop     IN A     10.0.0.19
      ns              IN A     10.0.0.42
      gw              IN A     10.0.0.138
      www             IN CNAME thejarbar.org.

    to a temporary version of:

      $TTL 3H
      @               IN SOA   ns.thejarbar.org. email. (
                          13112012 28800 3600 604800 38400 );
      thejarbar.org.  IN A     88.89.190.171
      @               IN NS    ns.thejarbar.org.
      yuccalaptop     IN A     10.0.0.19
      ns              IN A     88.89.190.171
      gw              IN A     10.0.0.138
      www             IN CNAME thejarbar.org.

    I am using BIND, and when using named-checkzone on this file according to my zone configurations, this file has no errors. I then run dig thejarbar.org @88.89.190.171 and get an expected authoritative reply. My issue is creating my reverse DNS SOA zone, and I would greatly appreciate assistance and guidance. I am stuck on how to represent the reverse records correctly for the addresses that map to my ISP IP. I am trying:

      $TTL 3H
      0.0.10.in-addr.arpa.  IN SOA ns.thejarbar.org. email. (
                          13112012 28800 3600 604800 38400 );
      171.190.89.88.        IN PTR thejarbar.org.
      171.190.89.88.        IN NS  ns.thejarbar.org.
      19                    IN PTR yuccalaptop.thejarbar.org.
      138                   IN PTR gw.thejarbar.org.
      www                   IN PTR www.thejarbar.org.

    But running named-checkzone on this file returns an error that the zone has no NS records. I would greatly appreciate assistance.
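
    A sketch of what the checker is asking for: the NS record has to sit at the zone apex (@), and only the 10.0.0.x hosts belong in 0.0.10.in-addr.arpa. The reverse record for 88.89.190.171 lives in the 88.89.190.x reverse zone, which the ISP controls, so only the ISP (or a delegation from them) can publish that PTR. Assuming the hosts above:

      $TTL 3H
      @    IN SOA ns.thejarbar.org. email. (
               13112012 28800 3600 604800 38400 )
      @    IN NS  ns.thejarbar.org.
      19   IN PTR yuccalaptop.thejarbar.org.
      42   IN PTR ns.thejarbar.org.
      138  IN PTR gw.thejarbar.org.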

    Read the article

  • Is it normal for a programmer with 2 years experience to take a long time to code simple programs?

    - by ajax81
    Hi all, I'm a relatively new programmer (18 months on the scene), and I'm finally getting to the point where I'm comfortable accepting projects and developing solutions under minimal supervision. Unfortunately, this also means that I've become acutely aware of my performance shortfalls, the most prevalent of which is the amount of time it takes me to develop, test, and submit algorithms for review. A great example of what I'm talking about occurred this week when I was tasked with developing a simple XML web service (asp.net 3.5) callable via client-side JavaScript, that accepts a single parameter and returns a dataset output to a modal window (please note this is the first time I've had to develop a web service and have had ZERO experience creating/consuming them... let alone calling them from JS client side). Keeping a long story short -- I worked on it for 4 days straight, all day each day, for a grand total of 36 hours, not including the time I spent dwelling on the problem in the shower, the morning commute, and lying awake in bed at night. I learned a great deal about web services and xml/json/javascript... but was called in for a management review to discuss the length of time it took me to develop the solution. In the meeting, I was praised for the quality of my work and was in fact told that my effort was commendable. However, they (senior leads and PMs) weren't impressed with the amount of time it took me to develop the solution and expressed that they would have liked to see the solution in roughly 1/3 of the time it took me. I guess what concerns me the most is that I've identified this pattern as common for myself. Between online videos, book research, and trial/error coding... if it's something I haven't seen before, I can spend up to two weeks on a problem that seems to only take the pros in the videos moments to code up. And of course, knowing that management isn't happy with this pattern has shaken me up a bit. To sum up, I have some very specific questions I'd like to ask, and would greatly appreciate your objective professional feedback.

    1. Is my experience as a junior programmer common among new developers? Or is it possible that I'm just not cut out for the work?
    2. If you suspect that my experience is not common and that there may be an aptitude issue, do you have any suggestions/solutions that I could propose to management to help bring me up to speed?
    3. Do seasoned, professional programmers ever encounter knowledge barriers that considerably delay deliverables?
    4. When you started out in the industry, did you know how to "do it all"? If not, how long did it take you to be perceived as "proficient"? Was it a natural progression of trial and error, or was there a particular zen moment when you knew you had achieved super saiyan power level?

    Anyways, thanks for taking the time to read my question(s). I don't know if this is the right place to ask for professional career guidance, but I greatly appreciate your willingness to help me out. Cheers, Daniel

    Read the article

  • Linux Fiber Channel Host Setup Basic

    - by Jim
    I've been googling for about 4 hours now with no luck. I am trying to setup a Linux server running Oracle Server 6.3 as a Fiber Channel host. And then connect it to a Dell Compellent Fibre Channel Host contain a 500GB Volume. The Oracle server itself contains two Brocade 815 FC HBAs. I've discovered their WWN(I think) via cat /sys/class/fc_host/host1/port_name 0x100000051efc3d85 cat /sys/class/fc_host/host2/port_name 0x100000051efc3d9f The next part is where I am at a loss. I've used iSCSI before...is FC the same deal where you have an initiator and a target? If so where do I specific that in linux? I'm also new to Fiber Channel as a protocol, so i am unsure what is needed to make a transaction? WWN and port ID? Similar to IP:Port combination in the Ethernet world. I've read alot regarding using systool, multipath, fc_transport commands, however none of these is recognized as a valid command from Oracle Server 6.3 Appreciate the guidance and assistance. I installed sccsi-target-utils and can now run rescan-scsi-bus and sg_map -x. rescan-scsi-bus.sh -l -w -r Host adapter 0 (megaraid_sas) found. Host adapter 1 ((null)) found. Host adapter 2 ((null)) found. Host adapter 3 (ata_piix) found. Host adapter 4 (ata_piix) found. Scanning SCSI subsystem for new devices and remove devices that have disappeared Scanning host 0 for SCSI target IDs 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15, LUNs 0 1 2 3 4 5 6 7 Scanning for device 0 2 0 0 .... OLD: Host: scsi0 Channel: 02 Id: 00 Lun: 00 Vendor: DELL Model: PERC H700 Rev: 2.30 Type: Direct-Access ANSI SCSI revision: 05 Scanning for device 0 2 1 0 ... OLD: Host: scsi0 Channel: 02 Id: 01 Lun: 00 Vendor: DELL Model: PERC H700 Rev: 2.30 Type: Direct-Access ANSI SCSI revision: 05 Scanning host 1 for all SCSI target IDs, LUNs 0 1 2 3 4 5 6 7 Scanning for device 1 0 3 1 ... OLD: Host: scsi1 Channel: 00 Id: 03 Lun: 01 Vendor: COMPELNT Model: Compellent Vol Rev: 0505 Type: Direct-Access ANSI SCSI revision: 05 Scanning host 2 for all SCSI target IDs, LUNs 0 1 2 3 4 5 6 7 Scanning host 3 for all SCSI target IDs, LUNs 0 1 2 3 4 5 6 7 Scanning for device 3 0 0 0 ... REM: Host: scsi3 Channel: 00 Id: 00 Lun: 00 DEL: Vendor: TEAC Model: DVD-ROM DV-28SW Rev: R.2A Type: CD-ROM ANSI SCSI revision: 05 Scanning host 4 channels 0 for SCSI target IDs 0, LUNs 0 1 2 3 4 5 6 7 0 new device(s) found. 1 device(s) removed. and sg_map -x /dev/sg0 0 0 32 0 13 /dev/sg1 0 2 0 0 0 /dev/sda /dev/sg2 0 2 1 0 0 /dev/sdb /dev/sg4 1 0 3 1 0 /dev/sdc I'm not sure what this all means...

    Read the article

  • Apache + Tomcat error 120006 Using mod_proxy_ajp for Load Balance

    - by Wakaru44
    I have an Apache 2 frontend with two nodes, and a backend with two instances of Tomcat 6 balanced with mod_proxy_ajp. The database is on a separate machine. All machines run RHEL: 6.2 on the frontend, 5.5 on the backend. The infrastructure is virtualized using VMware.

      # This is the apache config in one of the virtualHosts.
      ProxyPreserveHost On
      ProxyPass / balancer://liferay/
      <Proxy balancer://liferay>
          BalancerMember ajp://lrab:8009 route=liferaya
          BalancerMember ajp://lrbb:8009 route=liferayb status=+H
          ProxySet lbmethod=byrequests nofailover=on
      </Proxy>

    The connector in Tomcat is now configured like this:

      <!-- Define an AJP 1.3 Connector on port 8009 -->
      <Connector port="8009" protocol="AJP/1.3" redirectPort="8443"
                 URIEncoding="UTF-8" enableLookups="false" allowTrace="true" />

    Do you think it could be useful to set a maxThreads parameter, like in this post? In that case, how can I determine a proper number of threads? From time to time, we get errors like this:

      [Tue Sep 18 17:57:02 2012] [error] ajp_read_header: ajp_ilink_receive failed
      [Tue Sep 18 17:57:02 2012] [error] (120006)APR does not understand this error code: proxy: read response failed from 192.168.1.104:8009 (lrab)

    and Apache switches to the passive node (if it's active) or fails with 503. Some things I have tried so far:

    - I think that I have some performance issues with one of the applications. Here you can see a thread dump, but I'm not quite sure about it.
    - I also started to monitor the network connection. I have noticed that some pings are lost when I run "ping -f", so maybe it could be a network issue, but the success rate is 100% (so the lost packets are only a few among the flood; but maybe, I don't know, enough to break the link between Apache and Tomcat). I wrote a python script to check connectivity with timestamps on the pings, so I can know when the network fails.
    - After sniffing the network, I can also see some RST packets, but I don't know if that is a normal behaviour (some applications do that to end a network communication).
    - I have also noticed that the applications have problems communicating with the database, but I'm not even sure if this could be related or not. If you think so, I can post more info about it.
    - I changed the connector on the Tomcats to use the native one, but still the same.

    I don't even need a solution to this, but maybe some guidance on how I can troubleshoot it better: analyze threads, monitor MySQL performance, sniff the traffic between the Apaches and Tomcats? Ultimately, all I need is to balance the Tomcat instances in active/passive mode, so if there is another way to do it, I could give it a try.
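
    On the maxThreads question, a sketch of the usual sizing rule rather than a verified fix for the 120006 errors: give the AJP connector at least as many threads as the Apache frontends can open connections against it (roughly the sum of their MaxClients), and set an explicit connectionTimeout so idle AJP connections are reaped predictably instead of never (the AJP connector defaults to no timeout). The numbers here are placeholders:

      <!-- maxThreads >= sum of Apache MaxClients talking to this instance;
           connectionTimeout is in milliseconds -->
      <Connector port="8009" protocol="AJP/1.3" redirectPort="8443"
                 URIEncoding="UTF-8" enableLookups="false" allowTrace="true"
                 maxThreads="400" connectionTimeout="600000" />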

    Read the article

  • PHP-FPM Pool, Child Processes and Memory Consumption

    - by Jhilke Dai
    In my PHP-FPM configuration I have 3 pools; e.g. the config is:

      ;;;;;;;;;;;;;;;;;;;;;;;
      ;       Pool 1        ;
      ;;;;;;;;;;;;;;;;;;;;;;;
      [www1]
      user = www
      group = www
      listen = /tmp/php-fpm1.sock;
      listen.backlog = -1
      listen.owner = www
      listen.group = www
      listen.mode = 0666
      pm = dynamic
      pm.max_children = 40
      pm.start_servers = 6
      pm.min_spare_servers = 6
      pm.max_spare_servers = 12
      pm.max_requests = 250
      slowlog = /var/log/php/$pool.log.slow
      request_slowlog_timeout = 5s
      request_terminate_timeout = 120s
      rlimit_files = 131072

      ;;;;;;;;;;;;;;;;;;;;;;;
      ;       Pool 2        ;
      ;;;;;;;;;;;;;;;;;;;;;;;
      [www2]
      user = www
      group = www
      listen = /tmp/php-fpm2.sock;
      listen.backlog = -1
      listen.owner = www
      listen.group = www
      listen.mode = 0666
      pm = dynamic
      pm.max_children = 40
      pm.start_servers = 6
      pm.min_spare_servers = 6
      pm.max_spare_servers = 12
      pm.max_requests = 250
      slowlog = /var/log/php/$pool.log.slow
      request_slowlog_timeout = 5s
      request_terminate_timeout = 120s
      rlimit_files = 131072

      ;;;;;;;;;;;;;;;;;;;;;;;
      ;       Pool 3        ;
      ;;;;;;;;;;;;;;;;;;;;;;;
      [www3]
      user = www
      group = www
      listen = /tmp/php-fpm3.sock;
      listen.backlog = -1
      listen.owner = www
      listen.group = www
      listen.mode = 0666
      pm = dynamic
      pm.max_children = 40
      pm.start_servers = 6
      pm.min_spare_servers = 6
      pm.max_spare_servers = 12
      pm.max_requests = 250
      slowlog = /var/log/php/$pool.log.slow
      request_slowlog_timeout = 5s
      request_terminate_timeout = 120s
      rlimit_files = 131072

    I calculated the pm.max_children processes according to some example calculations on the web, like 40 x 40 MB = 1600 MB. I have set aside 4 GB of RAM for PHP. Now, according to the calculations, there are 40 child processes via one socket, and I have a total of 3 sockets in my nginx and FPM configuration. My doubt is about the amount of memory consumed by those child processes. I tried to create high load on the server via httperf hog and siege, but I could not calculate the accurate memory usage by all the PHP processes (other processes like MySQL and nginx were also running), and all the sockets were in use. So I seek guidance from anyone who has done this before, or knows how exactly pm.max_children works in PHP. Since I have 3 pools/sockets with 40 child processes each, does that amount to 3 x 40 x 40 MB of memory usage? Or is it just 40 max child processes sharing the 3 sockets (and the total memory usage is just 40 x 40 MB)?
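
    pm.max_children is a per-pool ceiling, so the worst case here really is 3 x 40 children, and the honest per-child figure comes from measuring resident memory under load rather than the 40 MB rule of thumb. A sketch of a quick measurement (this assumes the master and children all carry the php5-fpm process name, and RSS slightly overstates reality because children share pages with the master via copy-on-write):

      # Average and total resident memory of all php5-fpm processes, in MB
      ps -C php5-fpm -o rss= | awk '{ n++; sum += $1 }
          END { printf "%d procs, avg %.1f MB, total %.1f MB\n", n, sum/n/1024, sum/1024 }'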

    Read the article

  • What PHP, Xdebug and Eclipse configurations work on Windows 7 64 bit?

    - by thaddeusmt
    I have been mucking around for days, trying to find the right combination that lets me debug with breakpoints and variable viewing, in Eclipse, without crashing Apache. PHP 5.3? PHP 5.2? Eclipse Helios? Eclipse Galileo? One or the other with certain versions of xdebug or php? Or do I really need to use NetBeans or something else? Is my 64 bit OS the problem? Do need specific 64bit versions of PHP, Eclipse or Xdebug to work on Windows 7 64? Any special xdebug config options and tricks that I need in php.ini? Like turning off xdebug.profiler_enable or not using quotes around my zend_extension path to the xdebug dll? A Vhosts issue? Scrap the whole thing and go back to Win XP or Ubuntu? Here's what I've already been reading: http://stackoverflow.com/questions/4509245/so-eclipse-and-xdebug-walk-into-a-bar-and-then-my-apache-server-dies/4602473 http://stackoverflow.com/questions/206788/why-does-xdebug-crash-apache-on-every-xampp-install-ive-tried http://bugs.xdebug.org/view.php?id=459 https://bugs.eclipse.org/bugs/show_bug.cgi?id=312951#c8 http://stackoverflow.com/questions/2799936/xdebug-for-php-5-2-on-windows-7-64bit and so and so on... SO, xdebug bug tracker, eclipse bugzilla, etc, etc Basically what would be great is if folks could post their working (i.e. debugging with breakpoints and local variable viewing in Eclipse) Win7 64bit configurations, including: PHP version (5.3.1, 5.2.11, etc) Xdebug dll (2.1.0-5.3-vc6, etc) Xdebug php.ini config (zend_extension = "C:\xampp\php\ext\php_xdebug.dll", etc) Apache version (2.2.14, etc) Eclipse version Anything else important? The "secret ingredient"? Thanks! I miss my debugger since I got a new laptop with Win 7! Sadly it looks like some of the drivers (switchable graphics, multi-touch pad, etc) on my lappy don't work right with Ubuntu yet, so I feel a bit trapped on Win :( I know I will figure something out eventually, but I've been at this trial-and-error game a while and am seeking some guidance. (Originally posted on StackOverflow here, but moved to SuperUser:) http://stackoverflow.com/questions/4628215/what-php-xdebug-and-eclipse-configurations-work-on-windows-7-64-bit

    Read the article

  • What was "The Next Big Thing" when you were just starting out in programming?

    - by Andrew
    I'm at the beginning of my career and there are lots of things which are being touted as "The Next Big Thing". For example: Dependency Injection (Spring, etc) MVC (Struts, ASP.NET MVC) ORMs (Linq To SQL, Hibernate) Agile Software Development These things have probably been around for some time, but I've only just started out. And don't get me wrong, I think these things are great! So, what was "The Next Big Thing" when you were starting out? When was it? Were people sceptical of it at first? Why? Did you think it would catch on? Did it pan out and become widely accepted/used? If not, why not? EDIT It's been nearly a week since I first posted this question and I can safely say that I did not expect such explosive interest. I asked the question so that I could gain a perspective of what kinds of innovations in programming people thought were most important when they were starting out. At the time of writing this I have read ~95% of all answers. To answer a few questions, the "Next Big Things" I listed are ones that I am currently really excited about and that I had not really been exposed to until I started working. I'm hoping to implement some or all of these in the near future at my current workplace. To many people they are probably old news. In regards to the "is this a real question" debate, I can see that obviously hasn't been settled yet. I feel bad whenever I read a comment saying that these kinds of questions take away from the real meaning of SO. I'm not wholly convinced that it doesn't. On the other hand, I have seen a lot of comments saying what a great question it is. Anyway, I have chosen "The Internet!" as my answer to this question. I don't think (in my very humble opinion, and, it seems many SOers opinions) that many things related to programming can compare. Nowadays every business and their dog has a website which can do anything from simply supplying information to purchasing goods halfway around the world to updating your blog. And of course, all these businesses need people like us. Thanks to everyone for all the great answers!

    Read the article

  • My mental block - struggling to learn Objective C

    - by iqessar
    Hello people, this would be my first question after signing up! Anyway, here's my question: I did Java at university and I was always told I am a good programmer. However, I never pursued it as a career; I went into support and management instead. I'm pretty much bored with my job, so I have started to learn Objective-C so that I can develop apps for the iPhone. I am currently watching several different videos and reading several books. My problem is that when I go through the Apple documentation, although I understand most of it, sometimes I stumble. I believe that because you/we have the Apple documentation (i.e. framework references), everything should be clear, and therefore you should have no need to refer to a book or video (in order to learn how to use a particular class). But I always do refer to a book and video, and subsequently feel guilty, as I believe the framework reference should be enough. (I therefore feel I am not up to being a programmer.) I also believe that you shouldn't need example code in order to learn how to use a particular class, because Apple provides documentation for each class, but AGAIN I find myself googling example code and I find my answer like that; again I feel guilty for doing this. Am I right in saying that the Apple documentation is simply not clear? And that it's OK to refer to a video/book or Google? Or forums, for that matter? I have professional programmers who tell me that I am worrying too much and that I should get on with it and use all the resources that I have. I just can't seem to get round this mental block that I have in my head. When I start a programming project, I am able to use the excellent search skills that I have to find the code I need, copy and paste it (yes, I do understand it), BUT then I feel guilty, telling myself: why didn't you think up the code yourself? Therefore you're not a real programmer, you're just good at googling. Currently I am going through 20+ books so that I can learn most of the frameworks, syntax, etc. to develop iPhone apps. I believe if I do this, then when I think of a project I can build it quickly. Should I read a few books, like 2-3, and then just start a project/app, and if I get stuck just Google it and get the code I need? Can anybody please answer my questions?

    Read the article

  • Upgrading php from php 5.3 to 5.4 .7

    - by Takingsides
    So, quickly so to speak, I have noticed this topic around. I have searched, and there are plenty of solutions. However, these solutions do not work for me; not only that, but I'm intending to learn more about Debian-based OSes.

    Questions: I would like to know how to upgrade PHP 5.3 to PHP 5.4.7, compiling it from source myself without using a third-party PPA. Is the way explained below the correct way of configuring PHP 5.4? I'm new to compiling from source.

    Set-up: I run Ubuntu Server 12.04 64-bit. I've currently got: PHP 5.3, MySQL Server, Apache2, Memcached.

    The problem: So I initially installed PHP 5.3 using apt-get. I now wish to upgrade to PHP 5.4 due to the advantage of traits in OOP, the changes to arrays, and all the other recent patches and such. I didn't just expect to be spoon-fed; I went out and did some manual reading, and at least started the ball rolling myself. This is how far I've got. The first thing I did was su into root (to save typing sudo all the darn time):

      $ sudo su

    The next thing I did was download the latest version of PHP (5.4.7) and extract its contents, ready to configure before installing it:

      $ mkdir php5-new && cd !$
      $ wget -O php-5.4.7.tar.bz2 http://php.net/get/php-5.4.7.tar.bz2/from/uk3.php.net/mirror
      $ bzip2 -d php-5.4.7.tar.bz2
      $ tar xvf php-5.4.7.tar.gz
      $ cd php-5.4.7
      $ ./configure --help

    Finally I decided to have a bash. I looked through the list of options and decided I needed to list ALL of the things I wanted to include in the configuration:

      $ ./configure --with-mysql --with-apache2 --with-libxml --with-openssl \
            --with-zlib --with-bz2 --with-curl --with-dom --with-gd --with-imap \
            --with-imap-ssl --with-mcrypt --with-mysqli --with-pdo-mysql --with-libxml \
            --enable-ftp --enable-mbstring --enable-soap

    Finally, the results... When the configuration process had finished, it threw an error:

      configure: error: xml2-config not found. Please check your libxml2 installation.
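
    The error at the end is the classic missing-header problem: ./configure needs the development packages, not just the runtime libraries. A sketch of an install line covering most of the flags above; the package names are from Ubuntu 12.04-era repositories and may differ on other releases, the IMAP flags additionally want the libc-client dev package, and note that PHP's Apache switch is actually --with-apxs2=/usr/bin/apxs2 rather than --with-apache2:

      $ sudo apt-get install build-essential apache2-prefork-dev libxml2-dev \
            libssl-dev zlib1g-dev libbz2-dev libcurl4-openssl-dev libjpeg-dev \
            libpng12-dev libmcrypt-dev libmysqlclient-dev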

    Read the article

  • Windows Server 2008 Software Raid 5 - Data integrity issues

    - by Fopedush
    I've got a server running Windows Server 2008 R2, with a (Windows native) software RAID 5 array. The array consists of 7x 1TB Western Digital RE3 and RE4 drives. I have offline backups of this array. The problem is this: I noticed a few days ago after copying a large file to the disk that there was an integrity issue with that file. It was a ~12GB file that I had downloaded via uTorrent. After moving it to the RAID array, I used uTorrent to relocate the download location, and performed a re-check so I could seed it from that location. The recheck found that only 6308/6310 chunks of the copied file were intact. My next step was to write a quick powershell script that would copy files to the array, while performing a SHA1 hash of the original and resultant files and comparing them. Smaller files (100-1000MB) copied over just fine. When I started copying larger data (~15GB), I found that the hash check failed about 2/3rds of the time. The corrupt files had very, very small inconsistencies: less than .01%. I further eliminated the possibility of networking or client issues by placing this large file on the C:\ of the server, and copying it repeatedly from there to the array, seeing similar results. Copying the data via explorer, powershell, or the standard windows command prompt yields the same results. None of the copies fail or report any problems. The RAID array itself is listed as healthy in disk management. After a few experiments, I shut down the server and ran memtest overnight. No errors were detected. A basic run of chkdsk found no problems, but I did not use the /R flag, as I was unsure how that might affect a software RAID 5 volume. I next ran Crystal Disk Info to check the SMART data on the drives, but found that CDI only detected 5 out of 7 of the disks in the array. I have no idea why. Nevertheless, CDI shows the following "caution" flags on a single one of the drives:

      05  199  199  140  000000000001  Reallocated Sectors Count
      C5  200  200  __0  000000000001  Current Pending Sector Count

    Which is a little bit alarming, but I don't really know what to do with the information. I hardly feel like one reallocated sector could be causing this. At this point, I'm looking for some guidance on what to do next. I need to determine the cause of this issue, but I'm hesitant to run chkdsk /R or any bootable disk health checkers because I'm afraid they might break the array. I've considered triggering a re-sync of the array, but I'm not actually sure how to do that without doing something silly like manually dropping a disk and then restoring it. Any advice that could help me ferret out the precise cause of this issue would be greatly appreciated.

    Read the article
