Search Results

Search found 94141 results on 3766 pages for 'server and storage systems'.


  • SQLAuthority News – SQL Server 2012 Upgrade Technical Guide – A Comprehensive Whitepaper – (454 pages – 9 MB)

    - by pinaldave
    Microsoft has just released the SQL Server 2012 Upgrade Technical Guide. The guide is very comprehensive and covers the subject of upgrading in depth; even a summary of this white paper would run over 100 pages, which further proves that SQL Server 2012 is quite an important release from Microsoft. The white paper discusses how to upgrade from SQL Server 2008/R2 to SQL Server 2012, and I love how it starts with the most interesting and basic discussion of upgrade strategies: 1) in-place upgrade, 2) side-by-side upgrade, 3) one-server, and 4) two-server.

    This whitepaper is not just pure theory; it is also an excellent source of tips and tricks. Here is an example of a good tip from the paper: "If you want to upgrade just one database from a legacy instance of SQL Server and not upgrade the other databases on the server, use the side-by-side upgrade method instead of the in-place method." There are so many such tips and tricks that compiling a list of them in a short period of time seems humanly impossible.

    My friend Vinod Kumar, an SQL Server expert, has written a very interesting article on the SQL Server 2012 upgrade. In that article, Vinod addresses the most interesting and practical questions related to upgrades. He starts with the fundamentals of taking backups before the upgrade and ends with fail-safe strategies for after the upgrade is over, covering the end-to-end concepts in simple words and extremely precise statements. A successful upgrade uses a cycle of planning, documenting the process, testing, refining the process, testing again, planning the upgrade window, execution, verifying the upgrade, and opening for business. If you visit Vinod's blog post, I suggest you go all the way down and collect the gold mine of most important links. I have bookmarked the blog by blogging about it, and I suggest you bookmark it as well in whatever way you prefer: Vinod Kumar's blog post on SQL Server 2012 Upgrade Technical Guide.

    The SQL Server 2012 Upgrade Technical Guide is a detailed resource that is also available online for free. Each chapter was carefully crafted and explained in detail. Before downloading the guide, be aware of its size: 9 MB and 454 pages. Here is the list of chapters:

    Chapter 1: Upgrade Planning and Deployment
    Chapter 2: Management Tools
    Chapter 3: Relational Databases
    Chapter 4: High Availability
    Chapter 5: Database Security
    Chapter 6: Full-Text Search
    Chapter 7: Service Broker
    Chapter 8: SQL Server Express
    Chapter 9: SQL Server Data Tools
    Chapter 10: Transact-SQL Queries
    Chapter 11: Spatial Data
    Chapter 12: XML and XQuery
    Chapter 13: CLR
    Chapter 14: SQL Server Management Objects
    Chapter 15: Business Intelligence Tools
    Chapter 16: Analysis Services
    Chapter 17: Integration Services
    Chapter 18: Reporting Services
    Chapter 19: Data Mining
    Chapter 20: Other Microsoft Applications and Platforms
    Appendix 1: Version and Edition Upgrade Paths
    Appendix 2: SQL Server 2012: Upgrade Planning Checklist

    Download SQL Server 2012 Upgrade Technical Guide [454 pages and 9 MB]

    Reference: Pinal Dave (http://blog.sqlauthority.com)

    Read the article

  • Is basing storage requirements on IOPS sufficient?

    - by Boden
    The current system in question is running SBS 2003 and is going to be migrated to SBS 2008 on new hardware. Currently I'm seeing on average 200-300 disk transfers per second total across all the arrays in the system. The array seeing the bulk of the activity is a 6-disk 7200 RPM RAID 6, and it struggles to keep up during high-traffic times (idle time often only 10-20%; response times peaking at 20-50+ ms). Based on some rough calculations this makes sense (avg ~245 IOPS on this array at a 70/30 read-to-write ratio).

    I'm considering a much simpler disk configuration using a single RAID 10 array of 10K disks. Using the same parameters as in my calculations above, I'm getting 583 average random IOPS/sec. Granted, SBS 2008 is not the same beast as 2003, but I'd like to assume that it'll be similar in terms of disk performance, if not better (Exchange 2007 is easier on the disk and there's no ISA server). Am I correct in believing that the proposed system will be sufficient in terms of performance, or am I missing something?

    I've read so much about recommended disk configurations for various products like Exchange, and they often mention things like dedicating spindles to logs, etc. I understand the reasoning behind this, but if I've got more than enough random I/O headroom, does it really matter? I've always at the very least had separate spindles for the OS, but I could really reduce cost and complexity if I just had a single, good-performing array. So as not to make you guys do my job for me, the generic version of this question is: if I have a projected IOPS figure for a new system, is it sufficient to use this value alone to spec the storage, ignoring "best practice" configurations? (Given similar technology, not going from DAS to SAN or anything.)
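
    For readers who want to reproduce the arithmetic behind figures like the ~245 and 583 IOPS above, here is a minimal sketch. The per-disk IOPS figures and write penalties are rule-of-thumb assumptions (not the poster's exact inputs or any vendor's specification), so the outputs land only in the same ballpark as the quoted numbers.

        # Back-of-the-envelope sketch of the effective-IOPS arithmetic discussed above.
        # Per-disk figures are generic rules of thumb (assumptions, not measurements).

        def effective_iops(spindles, iops_per_disk, read_ratio, write_penalty):
            """Approximate host-visible random IOPS for a RAID array at a given read/write mix.

            Each host write turns into `write_penalty` back-end I/Os
            (roughly 6 for RAID 6, 2 for RAID 10).
            """
            raw = spindles * iops_per_disk
            write_ratio = 1.0 - read_ratio
            return raw / (read_ratio + write_ratio * write_penalty)

        # Existing array: 6 x 7200 RPM disks (~100 IOPS each assumed), RAID 6, 70/30 mix.
        print(round(effective_iops(6, 100, 0.70, 6)))    # -> 240, close to the ~245 cited
        # Proposed array: 10 x 10K disks (~125 IOPS each assumed), RAID 10, 70/30 mix.
        print(round(effective_iops(10, 125, 0.70, 2)))   # -> ~960; more conservative per-disk
                                                         #    assumptions give the ~583 cited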

    Read the article

  • Enterprise class storage best practices

    - by churnd
    One thing that has always perplexed me is storage best practices. Filesystems brag about how they can be petabytes or exabytes in size, yet I do not know many sysadmins who are willing to let a single volume grow over several terabytes. I do know the primary reason behind this is how long it would take to rebuild the array should a drive fail: the more drives in a single LUN, the longer this takes and the greater your risk of losing another drive while the rebuild is taking place.

    Then there are usage reasons. Admins will carve out a LUN based on how much space they think needs to be allocated to the project. It seems more practical to me for the LUN to be one large array and to use quotas. I understand this wouldn't satisfy every requirement (iSCSI), but I see a lot of NAS systems (NFS) managed this way. I also understand that the underlying volumes can be grown/shrunk as needed quite easily, but wouldn't it be less "risky" to use quotas rather than manipulating volumes and bringing possible data loss into the equation?

    There may be some other reasons I'm missing, so please enlighten me. Can we not expect filesystems to ever be so large? Are we waiting for the hardware to get faster to cut down on rebuild times?
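
    To make the rebuild-time concern concrete, here is a rough sketch of the arithmetic; the capacity, rebuild rate, and annualized failure rate are illustrative assumptions, not measurements from any particular array.

        # Why large LUNs make admins nervous: rebuild time grows with disk capacity,
        # and so does the window during which a second failure can bite.
        # All figures below are assumptions for illustration only.

        def rebuild_hours(disk_tb, rebuild_mb_per_s):
            """Hours to rebuild one failed disk at a sustained rebuild rate."""
            return (disk_tb * 1e12) / (rebuild_mb_per_s * 1e6) / 3600

        def second_failure_risk(remaining_disks, afr, window_hours):
            """Rough probability that at least one more disk fails during the rebuild
            window, assuming independent failures and a constant annualized failure
            rate (AFR)."""
            per_disk = afr * window_hours / (365 * 24)
            return 1 - (1 - per_disk) ** remaining_disks

        hours = rebuild_hours(disk_tb=2.0, rebuild_mb_per_s=50)   # ~11 hours for a 2 TB disk
        print(round(hours, 1))
        print(round(second_failure_risk(remaining_disks=11, afr=0.05, window_hours=hours), 4))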

    Read the article

  • What route to take to become a systems developer?

    - by Ramin
    In the past I have done a lot of Java and Python coding. Mostly, I worked on web apps and some simple console or GUI apps. I also have a formal education in computer science. What route should I take to become a systems developer? I always did like C++, but never had a chance to use it for anything. Would mastering C++ be one of the steps? If so, what resources can you suggest? Also, I would like to know how different the work is between plain old development and systems development. There seems to be a lot of overlap between the two.

    Read the article

  • How to diagnose storage system scaling problems?

    - by Unknown
    We are currently testing the maximum sequential read throughput of a storage system (48 disks total behind two HP P2000 arrays) connected to an HP DL580 G7 running RHEL 5 with 128 GB of memory. Initial testing has mainly been done by running dd commands like this, in parallel for each disk: dd if=/dev/mapper/mpath1 of=/dev/null bs=1M count=3000. However, we have been unable to scale the results from one array (maximum throughput of 1.3 GB/s) to two (almost the same throughput). Each array is connected to a dedicated host bus adapter, so they should not be the bottleneck. The disks are currently in a JBOD configuration, so each disk can be addressed directly. I have two questions: 1) Is running multiple dd commands in parallel really a good way to test maximum read throughput? We have noticed very high SWAPIN% numbers in iotop, which I find hard to explain because the target is /dev/null. 2) How should we proceed in trying to find the reason for the scaling problem? Do you think the server itself is the bottleneck here, or could there be some Linux parameters that we have overlooked?
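
    As a rough illustration of the first question, a small wrapper like the sketch below automates the parallel dd runs and reports aggregate throughput; the device paths are placeholders, and iflag=direct is added so repeated runs are not served from the page cache. A purpose-built benchmark such as fio may still be a better fit for sustained testing.

        #!/usr/bin/env python
        # Sketch: run one sequential-read dd per multipath device in parallel and report
        # the aggregate throughput. Device names are placeholders; adjust for your system.
        import subprocess
        import time

        DEVICES = ["/dev/mapper/mpath1", "/dev/mapper/mpath2"]  # placeholders
        BLOCK_SIZE_MB = 1
        COUNT = 3000

        start = time.time()
        procs = [
            subprocess.Popen(
                ["dd", "if=%s" % dev, "of=/dev/null",
                 "bs=%dM" % BLOCK_SIZE_MB, "count=%d" % COUNT, "iflag=direct"],
                stderr=subprocess.PIPE)
            for dev in DEVICES
        ]
        for p in procs:
            p.wait()
        elapsed = time.time() - start

        total_mb = len(DEVICES) * BLOCK_SIZE_MB * COUNT
        print("Aggregate: %d MB in %.1f s = %.1f MB/s" % (total_mb, elapsed, total_mb / elapsed))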

    Read the article

  • WebLogic Server and the Java application server | WebLogic Channel

    - by ???02
    An interview article on the WebLogic Channel (conducted with the IT news site Publickey) tracing the history of WebLogic Server as a Java application server. Only fragments of the Japanese original survive in this copy: WebLogic Server dates back to the 1990s as an application server; version 4.5, released around 1998-1999, added the Java enterprise APIs that preceded J2EE (Java 2 Platform, Enterprise Edition), including Java Servlet, JSP, JDBC, RMI (Remote Method Invocation), and EJB 1.0; versions 5.1 and 6.0 established it as a Java application platform; and version 6.1, released in 2001, rounded out its J2EE support. The discussion also touches on earlier C/C++, COM, and CORBA-based systems, the transition from J2EE to Java EE, the 11g-era releases, and more recent workloads in the style of Twitter and Facebook.

    Read the article

  • SQL Server Express performance issue

    - by Developer IT
    Hi folks! I know my questions will sound silly and probably nobody will have a perfect answer, but since I am at a complete dead end with the situation, it will make me feel better to post it here. So... I have a SQL Server Express database that's 500 MB. It contains 5 tables and maybe 30 stored procedures. This database is used to store articles and serves the Developer IT web site. Normally the web pages load quickly, let's say 2 or 3 seconds, BUT the SQL Server process uses 100% of the processor for those 2 or 3 seconds. I tried to find which stored procedure was the problem and could not find one; it seems like it's every read from the table that contains the articles (there are about 155,000 of them, and 20 or so get added every 15 minutes). I added a few indexes but without luck. Is it because the table is full-text indexed? Should I order by the primary key instead of by date? I never had any problems with ordering by dates before. Should I use dynamic SQL? Should I add the primary key into the URL of the articles? Should I use multiple indexes for separate columns or one big index? If you want more details or code bits, just ask. Basically, every little hint is much appreciated. Thanks.
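
    One way to narrow this down, sketched below, is to ask the server itself which statements are consuming the CPU via the sys.dm_exec_query_stats DMV (available in SQL Server 2005 and later, including Express). The script assumes the pyodbc package is installed; the connection string and database name are placeholders.

        # Sketch: list the statements burning the most CPU since the last restart.
        import pyodbc

        conn = pyodbc.connect(
            "DRIVER={SQL Server};SERVER=.\\SQLEXPRESS;DATABASE=DeveloperIT;Trusted_Connection=yes;"
        )
        sql = """
        SELECT TOP 10
            qs.total_worker_time / qs.execution_count AS avg_cpu_microsec,
            qs.execution_count,
            SUBSTRING(st.text, qs.statement_start_offset / 2 + 1,
                (CASE WHEN qs.statement_end_offset = -1
                      THEN DATALENGTH(st.text)
                      ELSE qs.statement_end_offset END
                 - qs.statement_start_offset) / 2 + 1) AS statement_text
        FROM sys.dm_exec_query_stats AS qs
        CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
        ORDER BY qs.total_worker_time DESC;
        """
        cursor = conn.cursor()
        for row in cursor.execute(sql):
            # Average CPU per execution (microseconds), executions, and the statement itself.
            print(row.avg_cpu_microsec, row.execution_count, row.statement_text[:120])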

    Read the article

  • Can't upgrade Windows Server 2012 Essentials to Windows Server 2012 R2 Essentials

    - by Magnus
    So today the RTM of Windows Server 2012 R2 Essentials was released. I immediately tried to do an in-place upgrade of an existing RTM 2012 Essentials to RTM 2012 R2 Essentials, but I get: "Windows Server 2012 Essentials cannot be upgraded to Windows Server 2012 R2 Essentials. You can choose to install a new copy of Windows Server 2012 R2 Essentials instead, but this is different from an upgrade, and does not keep your files, settings, and programs. You’ll need to reinstall any programs using the original installation discs or files. To save your files before installing Windows, back them up to an external location such as a CD, DVD, or external hard drive. To install a new copy of Windows Server 2012 R2 Essentials, click the Back button in the upper left-hand corner, and select 'Custom (advanced)'." This seems a bit odd; is this really the case? I can't find anything in the official documentation stating that this upgrade isn't possible.

    Read the article

  • Windows Server 2012 & Exchange Server 2013 CAL's

    - by Joey Harris
    Trying to build an Exchange server solution for a company, and they want me to look into licensing for Windows. I'm going to be purchasing the necessary server licenses for Windows Server 2012 Standard and Microsoft Exchange 2013 Enterprise. I just have a few questions about CALs, which seem like a complete ripoff to me. How are CALs tracked for both Windows Server and Exchange? Are they tied to Active Directory profiles? What happens if I don't have the necessary CALs for Windows Server? Will a client be denied access by Active Directory? Is there any enforcement of this? The same questions apply to Exchange 2013 CALs. Thanks

    Read the article

  • Cloning the MAC address of a physical server converted into a VMware server

    - by user24981
    We've recently converted a physical Windows Server 2003 machine into a VMware VM using P2V. However, one of the pieces of software on the 2003 machine is still looking for the old server's network MAC address in order to run. I've read several articles discussing how you can modify the last part of the generated address and set it to static, but I need to clone the whole MAC address to mimic the one in the old server. We're running CentOS and VMware Server 2.0 as the host system. I was told that adding a second network card to the host and setting the virtual system's NIC to that card instead of "bridged" might allow me to edit the vmx file and clone the whole MAC address. I can't use the old network card from the physical server because it's ISA and our new bus is PCI. Any ideas? Thanks, Mike

    Read the article

  • VirtualBox PXE Boot Failing with a Windows Server 2008 R2 Server

    - by Vbitz
    Some fast help on this would be good; I have been on this problem for 14 hours. In a VirtualBox test environment I have 2 virtual machines networked together using an internal network (no traffic runs through the host's physical network; it is all handled at a software level). One is a fresh client with 512 MB of RAM and a dual-core setup; the other is the server, with 1.5 GB of RAM, running Server 2008 R2. The server is configured as a DNS server, DHCP server, and domain controller, and it also serves PXE booting through WDS (Windows Deployment Services). Both machines can see each other and I am able to start a network boot. The issue comes at the second-to-last stage before the Windows PE install: the TFTP download of boot.sdi starts but then stalls during the boot process.

    Read the article

  • MS Word reports files read-only on Win Server 2003 file server

    - by Larry Hamelin
    I'm not a sysadmin, but I play one on TV: I'm trying to fix a problem with my mom's tiny non-profit company's server. I set up a Windows Server 2003 machine as a domain controller and file server. Everything has been working well for a few months, but lately, when she tries to save changes to a Word (Office XP) document stored on the server, Word will intermittently report that the file is read-only. Saving to an alternate file in the same directory works, and when she closes Word and re-opens the original document, it will save changes just fine. No one else ever has these files open. I've checked security and share permissions, and everything's OK. We've tried rebooting the server, but the problem continues, still intermittently. I have no clue what's going on. Help!

    Read the article

  • How can I launch a RemoteApp on Windows Server from the server itself at startup

    - by Rusted
    I have Windows Server 2008 R2 with RDS and a custom desktop (GUI) application installed on the server. The app is started as a RemoteApp on the server by a user from his desktop computer (or sometimes he works from a notebook over VPN). Some details about the environment:

    - The server automatically shuts down every evening and automatically powers on every morning (this is a requirement).
    - The desktop application does some precalculation/precaching on startup, and this can take a lot of time.
    - The application has some memory leaks, so I can't use hibernation instead of shutdown.

    When the user launches this app from his computer, he can't start working with it until the app finishes this pre-initialization. Is there any way to start the RemoteApp session at server startup (without an actual user logon), so that the user could connect to this session from his computer later? I don't want to involve the user's computer to make it work. I have tried to do it with a Windows startup script, but had no luck - starting an RDP session requires an actual user session.

    Read the article

  • Get a file from a load balanced server in Windows Server

    - by Leandro
    I have a load-balanced server in the production environment for my application. The server runs Windows Server 2008 R2. I'm running a web application that creates and saves a file in a folder under the web path, so I need to create a job that copies this file to another server. The main idea is that a file watcher checks for the file and then copies it instantly. But how can I know on which server the file is? Please avoid "why don't you..." answers; a direct answer would be appreciated, if anyone has one.
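
    As a minimal sketch of the file-watcher idea: since the upload can land on either node behind the load balancer, the script below simply polls the same folder on both nodes over UNC paths and copies the file from whichever node it appears on. Server names, share paths, and the file name are placeholders, and it assumes the account running the job can read both shares.

        # Sketch: poll the upload folder on both web nodes and forward the file.
        import os
        import shutil
        import time

        NODES = [r"\\WEB01\wwwroot\uploads", r"\\WEB02\wwwroot\uploads"]  # placeholder shares
        DEST = r"\\FILESERVER\drop"                                       # placeholder destination
        FILENAME = "export.csv"                                           # placeholder file name

        while True:
            for node_path in NODES:
                src = os.path.join(node_path, FILENAME)
                if os.path.exists(src):
                    shutil.copy2(src, os.path.join(DEST, FILENAME))
                    os.remove(src)  # so the next poll does not copy it again
                    print("Copied from", node_path)
            time.sleep(10)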

    Read the article

  • Stored procedure to search the access rights given to users

    - by thevan
    Hi, I want to display the access rights given to the users for a particular module. I have seven tables: RoleAccess, Roles, Functions, Module, SubModule, Company and Unit. RoleAccess is the main table; the access rights given are stored in the RoleAccess table only. The columns are:

    - RoleAccess: RoleID, CompanyID, UnitID, FunctionID, ModuleID, SubModuleID, Create_f, Update_f, Delete_f, Read_f, Approve_f (the _f columns are flags)
    - Company: CompanyID, CompanyName
    - Unit: UnitID, UnitName, CompanyID
    - Roles: RoleID, RoleName, CompanyID, UnitID
    - Module: ModuleID, ModuleName
    - SubModule: ModuleID, SubModuleID, SubModuleName
    - Functions: FunctionID, FunctionName, ModuleID, SubModuleID

    At first, the RoleAccess table does not contain any records. I want to display the ModuleName, SubModuleName, FunctionName, CompanyID, RoleID, UnitID, FunctionID, ModuleID, SubModuleID, Create_f, Update_f, Delete_f, Read_f and Approve_f. If access rights are assigned to the particular RoleID, the flags in the search results should be 1; otherwise they should be 0. I have written one stored procedure, but it only displays records for the RoleIDs stored in the RoleAccess table; I also want to display the flags as 0 for the roles not stored in the RoleAccess table. I want the stored procedure for this. Can anyone please help me?
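
    A sketch of the core idea (not the complete stored procedure): build every Role x Function combination and LEFT JOIN it to RoleAccess, so rows with no access record come back with the flags defaulted to 0 via ISNULL. Table and column names follow the question; the exact join between Roles and Functions, the connection string, and the use of pyodbc to run the query are assumptions to adapt.

        # Sketch: every function for the requested role, with flags defaulting to 0 when
        # no RoleAccess row exists. The same SELECT can be wrapped in CREATE PROCEDURE.
        # Assumes pyodbc is installed; the connection string is a placeholder.
        import pyodbc

        ACCESS_RIGHTS_SQL = """
        SELECT  m.ModuleName, sm.SubModuleName, f.FunctionName,
                r.CompanyID, r.RoleID, r.UnitID,
                f.FunctionID, f.ModuleID, f.SubModuleID,
                ISNULL(ra.Create_f, 0)  AS Create_f,
                ISNULL(ra.Update_f, 0)  AS Update_f,
                ISNULL(ra.Delete_f, 0)  AS Delete_f,
                ISNULL(ra.Read_f, 0)    AS Read_f,
                ISNULL(ra.Approve_f, 0) AS Approve_f
        FROM    Roles AS r
        CROSS JOIN Functions AS f
        JOIN    Module    AS m  ON m.ModuleID = f.ModuleID
        JOIN    SubModule AS sm ON sm.ModuleID = f.ModuleID
                                AND sm.SubModuleID = f.SubModuleID
        LEFT JOIN RoleAccess AS ra ON ra.RoleID = r.RoleID
                                  AND ra.FunctionID = f.FunctionID
        WHERE   r.RoleID = ?
        """

        conn = pyodbc.connect("DRIVER={SQL Server};SERVER=.;DATABASE=MyDb;Trusted_Connection=yes;")
        for row in conn.cursor().execute(ACCESS_RIGHTS_SQL, 1):   # 1 = RoleID to inspect
            print(row.FunctionName, row.Create_f, row.Update_f,
                  row.Delete_f, row.Read_f, row.Approve_f)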

    Read the article

  • SQL Server 2005: Rename DB Server Instance Name?

    - by Code Sherpa
    Hi, can somebody tell me how to rename the DB server instance name and a DB name in SQL Server 2005? Right now I have SERVER/OLDNAME -- oldnameDB. I want to change the server instance and also change the DB name. I have tried: EXEC sp_renamedb 'oldName', 'newName' and that has changed the DB name as it appears in the tree directory. But when I do "select @@servername" it is still the old name. Also, the MDF and LDF files still have the old name. How do I change the instance and DB names in a clean sweep across the server? Thanks.
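
    A hedged sketch of the usual approach: sp_dropserver / sp_addserver update the name behind @@SERVERNAME (after a SQL Server service restart), while sp_renamedb changes only the database name, never the physical MDF/LDF files - those have to be relocated or renamed separately (for example by detaching, renaming the files, and re-attaching). The instance portion of a named instance generally cannot be renamed without reinstalling. Server and database names below are placeholders; pyodbc is assumed.

        # Sketch: update the name SQL Server reports for itself after the host is renamed,
        # and rename the database. A SQL Server service restart is required before
        # @@SERVERNAME reflects the change.
        import pyodbc

        conn = pyodbc.connect("DRIVER={SQL Server};SERVER=NEWSERVER\\OLDNAME;"
                              "DATABASE=master;Trusted_Connection=yes;", autocommit=True)
        cur = conn.cursor()

        cur.execute("EXEC sp_dropserver 'OLDSERVER\\OLDNAME';")
        cur.execute("EXEC sp_addserver 'NEWSERVER\\OLDNAME', 'local';")

        # Rename the database; note this does not rename the physical MDF/LDF files.
        cur.execute("EXEC sp_renamedb 'oldnameDB', 'newnameDB';")

        print("Restart the SQL Server service, then verify with: SELECT @@SERVERNAME;")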

    Read the article

  • Windows Server 2003 stops sharing a folder when the server reboots [closed]

    - by evolvd
    I have a file server that stops sharing a particular folder once the server is rebooted. This problem started after the server went through a P2V migration. There are no events that seem to correlate with this issue. I was reading http://support.microsoft.com/default.aspx/kb/870964 but I don't think it applies, and there are other shares on the same drive that are still there after a reboot. To be clear about the issue: when the server reboots, share X is no longer shared, but shares Y and Z are still there. This has nothing to do with users not being able to see the share - on the server itself, the folder that was shared is no longer shared. I wish I had more information to post, but after hours on this issue I can't find a way to move forward.

    Read the article

  • Find out how much storage a row is taking up in the database

    - by Vaccano
    Is there a way to find out how much space (on disk) a row in my database takes up? I would love to see it for SQL Server CE, but failing that, SQL Server 2008 works (I am storing about the same data in both). The reason I ask is that I have an Image column in my SQL Server CE DB (it is a varbinary(max) in the SQL 2008 DB) and I need to know how many rows I can store before I max out the memory on my device.
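
    A rough way to answer this for the SQL Server 2008 copy is sketched below: DATALENGTH gives the byte size of each column's data (the varbinary column will dominate, with a small amount of per-row overhead on top), and sp_spaceused reports the table-level totals. Table and column names are placeholders; pyodbc is assumed, and the same DATALENGTH query should also work against SQL Server CE through its own provider.

        # Sketch: approximate per-row size from the data bytes, plus table totals.
        import pyodbc

        conn = pyodbc.connect("DRIVER={SQL Server};SERVER=.;DATABASE=MyDb;Trusted_Connection=yes;")
        cur = conn.cursor()

        # Per-row data bytes for the largest rows; add a few dozen bytes of
        # row/page overhead per row for a conservative estimate.
        cur.execute("""
            SELECT TOP 10 Id, DATALENGTH(ImageData) AS image_bytes
            FROM   Pictures
            ORDER  BY image_bytes DESC;
        """)
        for row in cur.fetchall():
            print(row.Id, row.image_bytes)

        # Whole-table view: reserved/data/index sizes as reported by SQL Server.
        cur.execute("EXEC sp_spaceused 'Pictures';")
        print(cur.fetchone())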

    Read the article

  • Restoring a backup in SQL Server 2005: where is the data stored?

    - by sc_ray
    I have two SQL Server database instances on two different machines across the network. Let's call these servers A and B. Due to some infrastructure issues, I had to make a complete backup of the database on server A and robocopy the A.bak over to a shared drive accessible by both A and B. What I want is to restore the database on B. My first issue is that when restoring the backup on server B, the backup location dialog does not display my shared drive. My next issue is that server B's C: drive has barely any space left; there are some additional partitions with more space that could house my backup file, but I am not sure what happens to the data after I restore the database on B. Would the backup data fill up all the available space on C:? It would be great if somebody could explain how the data is laid out after a database restore is initiated on a target database server. Thanks
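
    A hedged sketch of how this is usually handled: the restored data and log files land wherever the RESTORE statement's MOVE clauses point (by default, the paths recorded inside the .bak), so they can be directed at the roomier partition rather than C:. The backup dialog not showing the share is typical because it is the SQL Server service account, not the logged-on user, that needs access to it; a UNC path can be typed or scripted directly. The paths, logical file names, and database name below are placeholders (check them with RESTORE FILELISTONLY); pyodbc is assumed.

        # Sketch: restore onto server B, relocating data and log files with WITH MOVE.
        import pyodbc

        conn = pyodbc.connect("DRIVER={SQL Server};SERVER=B;DATABASE=master;"
                              "Trusted_Connection=yes;", autocommit=True)
        cur = conn.cursor()

        # 1. Discover the logical file names recorded in the backup.
        cur.execute(r"RESTORE FILELISTONLY FROM DISK = '\\fileserver\backups\A.bak';")
        for row in cur.fetchall():
            print(row.LogicalName, row.PhysicalName, row.Type)

        # 2. Restore, placing data and log on the roomier D: partition instead of C:.
        cur.execute(r"""
            RESTORE DATABASE MyDb
            FROM DISK = '\\fileserver\backups\A.bak'
            WITH MOVE 'MyDb_Data' TO 'D:\SQLData\MyDb.mdf',
                 MOVE 'MyDb_Log'  TO 'D:\SQLData\MyDb_log.ldf';
        """)
        print("Restore submitted; the statement blocks until the restore completes.")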

    Read the article

  • Updating a staging server (from a CI server) in a Vagrant box with Chef

    - by Tomas Brambora
    I'm using Vagrant + Chef (the chef_client provisioner) to create and provision a staging environment for my server, and I have a Jenkins job set up that runs every time I push to my 'develop' branch. In the Jenkins job, I would like to update and rebuild the source code of the server in the staging box and restart it. I have already written the cookbooks that install the dependencies, configure the DB, etc., but I'm not sure how to run only the update & rebuild & restart steps from the cookbooks. I understand I could always tear down the whole box and rebuild it, but provisioning the box is a lengthy process, so I would like to do that as little as possible. I split my server cookbook into 3 recipes: dependencies, db_setup and server. What I want to run in my Jenkins job is the "server" recipe only, but I don't understand how I can do that... If I specify the run_list on my Chef server, then I lose the ability to provision the whole box from scratch. Basically, I would like to be able to tell Vagrant from the command line which recipes Chef should run. Is that possible somehow? Cheers!

    Read the article

  • Correct way to set up office network - 8 workstations, a file server and a staging server

    - by naunu
    Our office had this old-school Windows 2003 domain setup, our server caught fire, and now we are looking to do it right from scratch. Here is what we need: 5 PC and 3 Mac workstations for web development; they will each have a WAMP/MAMP setup on them, managed by their developers. We will have a file server for assets, and a LAMP server with an external IP for staging. Here is what we have to work with: 5 IP addresses, a brand new PC file server with Windows 2008 SE, a D-Link DSS-16+ 16-port switch, a Belkin 5-port wireless router, and a cable modem with 4 ports. How I have it set up now (this is a temporary, makeshift setup): cable modem = LAMP server, wireless router; wireless router = switch = all of the workstations and the file server (set up as a workgroup). We have noticed our internet is very slow with us all plugged into the switch and the switch plugged into the router. I am not positive, but I think it is because our router does not have NAT. We are also having problems with the Macs' connection to the network drive - it keeps disconnecting. I want this done right, and we have a ~$600 budget to buy anything else we need. Does anybody have any advice for me? Should I set up a domain or a workgroup?

    Read the article

  • Windows Server 2008 (Web Server) Replication

    - by justjoshingyou
    We have a load-balanced environment with Windows Server 2008. What are some best practices for setting up replication across the web servers? Do I only want to replicate the web folders? How about replicating IIS changes - or do I need to make IIS changes on every server? I've never set up replication, but I have worked with a web farm that used it before. Basically, I only know the basics of how it works, and I'm looking for any advice, guides, warnings, etc. on setting this up. In case it helps, here is how our environment stands for now: we have 1 prod server up and the second is nearly ready to go. We are using a cloud system and all machines are VMs. I am in the process of setting up the domain controller now (as I need one for DFS). Any ideas on the best way to go about setting up replication? Should we just stick the prod server in from the start, or set it up using a test VM and our second server and then switch it over later? I do not want to risk overwriting our prod server. Thanks!

    Read the article

  • How to get an email notification when a USB storage device is inserted?

    - by karthick87
    We are running more than 600 Ubuntu systems in our company. It is a data centre, so we have certain policies; we have disabled the use of storage devices on all the Ubuntu systems. However, we would like to configure email alerts: if someone inserts a storage device, we should get an email alert with a subject like the one below.

    Email Alert: STORAGE DEVICE FOUND on IP: 172.29.35.18

    Note: for Windows systems we have policies applied in our DC, so there is no problem on the Windows side; we need to receive alerts for the Ubuntu systems as well. Any way to accomplish the above task would be great.
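
    One possible approach, sketched below under clearly stated assumptions, is a small listener run on each Ubuntu machine (for example as an init/upstart service) that watches for udev "add" events on USB block devices and mails an alert through an internal SMTP relay. It assumes the pyudev package is available; the relay host, addresses, and the way the local IP is looked up are placeholders to adapt. A udev RUN rule calling a one-shot script is an alternative to the long-running listener.

        #!/usr/bin/env python
        # Sketch: watch udev "add" events for USB block devices and email an alert.
        import socket
        import smtplib
        import pyudev

        SMTP_HOST = "mail.example.local"          # placeholder relay
        MAIL_FROM = "usb-alert@example.local"     # placeholder sender
        MAIL_TO = "secops@example.local"          # placeholder recipient

        def local_ip():
            # May need adjusting if /etc/hosts maps the hostname to 127.0.0.1.
            return socket.gethostbyname(socket.gethostname())

        def send_alert(device_node):
            subject = "STORAGE DEVICE FOUND on IP: %s" % local_ip()
            body = "USB storage device %s was inserted on host %s." % (
                device_node, socket.gethostname())
            msg = "From: %s\r\nTo: %s\r\nSubject: %s\r\n\r\n%s" % (
                MAIL_FROM, MAIL_TO, subject, body)
            server = smtplib.SMTP(SMTP_HOST)
            server.sendmail(MAIL_FROM, [MAIL_TO], msg)
            server.quit()

        context = pyudev.Context()
        monitor = pyudev.Monitor.from_netlink(context)
        monitor.filter_by(subsystem="block")

        for device in iter(monitor.poll, None):
            if device.action == "add" and device.get("ID_BUS") == "usb":
                send_alert(device.device_node)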

    Read the article
