Search Results

Search found 90468 results on 3619 pages for 'hyper v server 2012'.


  • Folders in SQL Server Data Tools

    - by jamiet
    Recently I have begun a new project in which I am using SQL Server Data Tools (SSDT) and SQL Server Integration Services (SSIS) 2012. Although I have been using SSDT & SSIS fairly extensively while SQL Server 2012 was in the beta phase, I usually find that you don't learn about the capabilities and quirks of new products until you use them on a real project, hence I am hoping I'm going to have a lot of experiences to share on my blog over the coming few weeks. In this first such blog post I want to talk about file and folder organisation in SSDT.

    The predecessor to SSDT is Visual Studio Database Projects. When one created a new Visual Studio Database Project, a folder structure was provided with "Schema Objects" and "Scripts" in the root and a series of subfolders for each schema. Apparently a few customers were not too happy with the tool arbitrarily creating lots of folders in Solution Explorer, and hence SSDT has gone in completely the opposite direction; now no folders are created and new objects get created in the root – it is at your discretion where they get moved to.

    After using SSDT for a few weeks I can safely say that I preferred the older way, because I never used Solution Explorer to navigate my schema objects anyway, so it didn't bother me how many folders it created. Having said that, the thought of a single long list of files in Solution Explorer without any folders makes me shudder, so on this project I have been manually creating folders in which to organise files, and I have tried to mimic the old way as much as possible by creating two folders in the root: one for all schema objects and another for pre/post-deployment scripts.

    This works fine until different developers start to build their own different subfolder structures; if you are OCD-inclined like me this is going to grate on you eventually, and hence you are going to want to move stuff around so that you have consistent folder structures for each schema and (if you have multiple databases) each project. Moreover, new files get created with a filename of the object name + ".sql", and often people like to have an extra identifier in the filename to indicate the object type.

    The overall point is this – files and folders in your solution are going to change. Some version control systems (VCSs) don't take kindly to files being moved around or renamed, because they recognise the renamed/moved file simply as a new file, and when they do that you lose the revision history which, to my mind, is one of the key benefits of using a VCS in the first place. On this project we have been using Team Foundation Server (TFS), and while it pains me to say it (as I am no great fan of TFS's version control system) it has proved invaluable when dealing with the SSDT problems that I outlined above, because it is integrated right into the Visual Studio IDE.

    Thus the advice from this blog post is: if you are using SSDT, consider using a Visual-Studio-integrated VCS that can easily handle file renames and file moves.

    I suspect that fans of other VCSs will counter by saying that their VCS weapon of choice can handle renames/file moves quite satisfactorily, and if that's the case... great... let me know about them in the comments. This blog post is not an attempt to make people use one particular VCS, only to make people aware of this issue that might arise when using SSDT.

    More to come in the coming few weeks!
    @jamiet

    Read the article

  • Windows Server 2012 Branchcache vs. DFS-R

    - by TheCleaner
    Warning, subjective question ahead! But hopefully a good one that won't get closed.

    SCENARIO: I have a branch office that currently has no on-premise server. They access everything, including a DC, across a 12 Mbps WAN link (MPLS). The link isn't saturated, averaging around 20% utilization. The circuit is very stable and has a high SLA and excellent uptime. However, large file transfers (mainly reads, not writes) from the file server across the WAN can be slow. We don't currently utilize DFS.

    RESEARCH DONE: I'm aware of WAN acceleration, using either dedicated hardware (Riverbed) or a dedicated software VM (Silver Peak), for example. But the pricing is outside of our current budget and the need isn't quite there yet from our perspective (since the issue is mainly in a "pull" scenario, not necessarily push/pull). I'm mainly looking at deploying a Windows server at this branch office and utilizing either DFS-R or BranchCache. Looking at a table comparison, and assuming we are looking at a "hosted BranchCache server" and not simply distributed mode, it would appear there are benefits to both, even if both are "hosted" on a server.

    QUESTIONS I ACTUALLY HAVE:
    - In what scenarios does each of these techs shine, and where do you choose one over the other?
    - Looking at a hosted BranchCache server, can you set "pre-fetching" of certain folders/files on the central file server so that they are immediately accessible locally at the branch? Do you have to do this on a schedule (if it is possible at all)? (A sketch follows below.)
    - Looking at DFS-R, my concern (apparently solved with 3rd-party apps) is file locking and making sure the file gets updated properly during a write operation (i.e. if both copies are accessed and both are written to, which file takes precedence and what happens to the changes?). Ideally, it would seem, you would lock any alternate replicas of the data, but is it really that big of an issue?
    - Does BranchCache lock the central file for editing? Does BranchCache only transmit the deltas of what has changed back to the central file?
    - Would either technology be ill-advised if the branch office server was going to be utilized as a domain controller as well?
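    On the pre-fetching question: BranchCache itself has no built-in scheduler for this, but content can be pre-staged by hashing it on the central file server, exporting a cache package, and importing that package on the branch hosted cache server (a scheduled task can turn this into a nightly job). A rough sketch, assuming Server 2012's BranchCache PowerShell module, with paths and folder names as placeholders:

        # On the central file server: hash the folders that should be warm at the branch
        Publish-BCFileContent -Path 'D:\Shares\Engineering' -StageData -StagingPath 'D:\BCStage'

        # Bundle the staged data into a package, then copy it to the branch server
        Export-BCCachePackage -Destination 'D:\BCStage\Pkg' -StagingPath 'D:\BCStage'

        # On the branch hosted cache server: enable hosted cache mode and pre-load the package
        Enable-BCHostedServer
        Import-BCCachePackage -Path 'E:\BCPkg'   # path to the copied package file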

    Read the article

  • Oracle Tuxedo at Oracle Open World 2012

    - by Deepak Goel
    Oracle Open World is almost here. There is quite a bit of Tuxedo to talk about at this year's OOW. The primary focus will be on Tuxedo 12c, which was announced in August 2012 and is now generally available. Tuxedo 12c is a major release with many new and exciting features in almost all components of Tuxedo. You will not only hear about these features in various conference sessions, you will also have an opportunity to see them in action at the demo grounds or play with them yourself in hands-on labs. Following is a listing of Tuxedo-related activities at OOW 2012:

    Conference Sessions
    - Mon 1 Oct, 2012, 10:45 AM - 11:45 AM, Oracle Tuxedo: What's New in 12c, Strategy, and Roadmap, Moscone South - 309
    - Mon 1 Oct, 2012, 4:45 PM - 5:45 PM, Simplify Operations, Administration, and Management of Oracle Tuxedo Applications, Marriott Marquis - Golden Gate C3
    - Wed 3 Oct, 2012, 3:30 PM - 4:30 PM, The Art and Practice of Mainframe Migration and Modernization, Moscone South - 309
    - Thu 4 Oct, 2012, 2:15 PM - 3:15 PM, High-Performance, Scalable Enterprise Messaging for C/C++/COBOL Applications, Marriott Marquis - Salon 7

    HOL (Hands-on Lab)
    - Tue 2 Oct, 2012, 5:00 PM - 6:00 PM, Deploy, Manage, and Monitor Oracle Tuxedo Applications in the Enterprise Cloud, Marriott Marquis - Salon 3/4
    - Wed 3 Oct, 2012, 1:15 PM - 2:15 PM, Develop C/C++ Applications for the Cloud with Oracle Tuxedo and Oracle Solaris Studio, Marriott Marquis - Salon 5/6

    BOF (Birds-of-a-Feather)
    - Mon 1 Oct, 2012, 6:15 PM - 7:00 PM, Develop Scalable, Highly Available Enterprise Services in Java with Oracle Tuxedo, Marriott Marquis - Golden Gate C1

    Demos
    - Oracle Tuxedo: #1 Enterprise Cloud Platform for C/C++/COBOL Apps, Moscone South, Right - S-215
    - Mainframe Rehost with Oracle Tuxedo Runtimes for CICS, IMS, and Batch, Moscone South, Right - S-216

    Tuxedo Customer Appreciation Dinner
    - Monday 1 Oct, 2012, 7:30 PM - Please contact your Oracle Account Representative to attend. Limited seating.

    Deepak Goel
    Sr. Director, Software Development
    Oracle

    Read the article

  • Oracle Systems and Solutions at CloudExpo NY 2012

    - by ferhat
    Oracle's Larry Ellison and Mark Hurd just unveiled the industry's broadest cloud strategy on June 6, with services based on industry standards and 100+ enterprise applications live in the Cloud today! It is the broadest strategy to support your journey to the cloud on any path you choose, at any pace your business requires. This is great assurance for your journey into the clouds and, at the same time, quite a temptation, don't you think?

    We will be at the Cloud Expo Conference taking place June 11-14 in New York. Oracle is a Platinum Plus sponsor of the 10th International Cloud Computing Conference & Expo 2012 East. Oracle is also glad to offer complimentary VIP Gold Passes to the conference. We wish everyone a great and productive time with all the fellow cloudsters.

    We, the systems solutions group at Oracle, have prepared the Oracle Optimized Solution for Enterprise Cloud Infrastructure to help you start your Infrastructure-as-a-Service journey with ease, confidence, speed, and savings. In this solution we are now bringing together the power of Oracle Solaris and SPARC T4 servers. We will be at the Cloud Boot Camp on Wednesday, June 13th, discussing how this combination can maximize return on investment and help organizations manage costs for their existing infrastructures or for new enterprise cloud infrastructure designs. We will also be at booth #511 on the Expo floor throughout the Cloud Expo conference.

    Join us for the keynote, general session, and technical sessions with Oracle:
    - Keynote Session: A Pragmatic Journey to the Cloud, Tuesday, June 12, 2012
    - General Session: Oracle Cloud - An Enterprise Cloud for Business-Critical Applications, Monday, June 11, 2012
    - Conference Session: Accelerate Enterprise Cloud Deployment and Gain Total Cloud Control, Monday, June 11, 2012
    - Conference Session: The Java EE 7 Platform: Developing for the Cloud, Monday, June 11, 2012
    - Conference Session: Integrating Big Data into Your Data Center: A Big Data Reference Architecture, Monday, June 11, 2012
    - Conference Session: Borderless Applications in the Cloud with Oracle VM and Oracle Virtual Assembly Builder, Tuesday, June 12, 2012
    - Conference Session: Building a Private, Public, or Hybrid Cloud? Simplify Your Cloud with Oracle's Complete Cloud Solution, Tuesday, June 12, 2012
    - Cloud Boot Camp: Building Private IaaS with Oracle Solaris and SPARC, Wednesday, June 13, 2012

    Read the article

  • Oracle OpenWorld Tokyo 2012: #OOWJP

    - by Yusuke.Yamamoto
    Oracle OpenWorld Tokyo 2012 will be held over three days, 4-6 April 2012 (Wednesday to Friday), and registration is open. Related coverage (in Japanese):
    - Oracle ACE coverage of Oracle Open World 2012 Tokyo - EnterpriseZine
    - Oracle OpenWorld Tokyo 2012 - WebLogic Channel
    - Oracle OpenWorld Tokyo 2012 session previews (1) and (2) - oracledatabase.jp
    - Oracle OpenWorld preview - oracletech.jp
    - Oracle OpenWorld Unconference presented by JPOUG - oracletech.jp
    - Oracle Develop at Oracle OpenWorld Tokyo 2012 - oracletech.jp
    - Oracle OpenWorld Tokyo 2012 registration

    Read the article

  • SSIS Catalog, Windows updates and deployment failures due to System.Core mismatch

    - by jamiet
    This is a heads-up for anyone doing development on SSIS. On my current project, where we are implementing a SQL Server Integration Services (SSIS) 2012 solution, we recently encountered a situation where we were unable to deploy any of our projects even though we had successfully deployed in the past. Any attempt to use the deployment wizard resulted in an error dialog whose text (for all you search engine crawlers out there) was:

    A .NET Framework error occurred during execution of user-defined routine or aggregate "create_key_information": System.IO.FileLoadException: Could not load file or assembly 'System.Core, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089' or one of its dependencies. The located assembly's manifest definition does not match the assembly reference. (Exception from HRESULT: 0x80131040) ---> System.IO.FileLoadException: The located assembly's manifest definition does not match the assembly reference. (Exception from HRESULT: 0x80131040) System.IO.FileLoadException: at Microsoft.SqlServer.IntegrationServices.Server.Security.CryptoGraphy.CreateSymmetricKey(String algorithm) at Microsoft.SqlServer.IntegrationServices.Server.Security.CryptoGraphy.CreateKeyInformation(SqlString algorithmName, SqlBytes& key, SqlBytes& IV). (Microsoft SQL Server, Error: 6522)

    After some investigation, and a bit of back and forth with some very helpful members of the SSIS product team (hey Matt, Wee Hyong), it transpired that this was due to a .NET Framework fix that had been delivered via Windows Update. I took a look at the server update history and indeed there had been some recently applied .NET Framework updates. This fix had (in the words of Matt Masson) "somehow caused a mismatch on System.Core for SQLCLR" and, as you may know, SQLCLR is used heavily within the SSIS Catalog.

    The fix was pretty simple – restart SQL Server. This causes the assemblies to be upgraded automatically. If you are using Data Quality Services (DQS) you may have experienced similar problems, which are documented at "Upgrade SQLCLR Assemblies After .NET Framework Update". I am hoping the SSIS team will follow up with a more thorough explanation on their blog soon.

    You DBAs out there may be questioning why Windows Update is set to automatically apply updates on our production servers. We're checking that out with our hosting provider right now. You have been warned!
    @Jamiet
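    For anyone rolling this out across several servers, restarting the database engine service is what triggers the SQLCLR assembly upgrade described above; a minimal PowerShell sketch, assuming a default (MSSQLSERVER) instance name:

        # Restart the engine so the SQLCLR assemblies rebind against the patched .NET Framework
        Restart-Service -Name 'MSSQLSERVER' -Force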

    Read the article

  • Windows Server 2012 & Exchange Server 2013 CALs

    - by Joey Harris
    Trying to build an Exchange server solution for a company, and they want me to look into licensing for Windows. I'm going to be purchasing the necessary server licenses for Windows Server 2012 Standard and Microsoft Exchange 2013 Enterprise. I just have a few questions about the CAL thing, which seems like a complete ripoff to me.
    - How are CALs tracked for both Windows Server and Exchange? Are they tied to Active Directory profiles?
    - What happens if I don't have the necessary CALs for Windows Server? Will a client be denied access by Active Directory? Is there any enforcement of this?
    - The same questions apply to Exchange 2013 CALs.
    Thanks

    Read the article

  • Deleted printers keep coming back - and multiplying

    - by MojoDK
    My users are on 2012 R2 RDS Session Host servers. I've used "Deploy Printers" (from Print Management) to deploy 4 printers. Over the last week I've had a lot of problems where users can't print. If I deleted a printer and added it again, they could print just fine. Now I've removed all printer deployment from the GPO, and I have no printers in any login scripts. I did a gpupdate /force, but all 4 printers are now listed 3 times... If I delete the printers and log off and back on, all the printers pop up again. Sigh! This is driving me nuts.

    This script doesn't show any of the "SVFREJA" printers:

        Set objWMIService = GetObject("winmgmts:\\.\root\cimv2")
        Set colPrinters = objWMIService.ExecQuery("Select * From Win32_Printer")

        If colPrinters.Count <> 0 Then  ' If there are some network printers
            Dim s
            s = ""
            For Each objPrinterInstalled In colPrinters  ' For each network printer
                s = s + objPrinterInstalled.Name + chr(13)
            Next
            msgbox s
        End If

    It gives me this result (screenshot of the tripled printer list omitted). My problem is not with the "redirected" printers; my problem is that I have several printers with the same name (on SVFREJA) and I can't get rid of them. Any idea why I can't get rid of the orphaned printers?
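    A hedged diagnostic sketch (PowerShell on the 2012 R2 session hosts; the printer name below is a placeholder): Win32_Printer only returns printers visible to the session the script runs in, so per-user, GPO-deployed connections are easy to miss. Listing printers with the PrintManagement cmdlets and checking the per-user Connections registry key usually shows where the duplicates live:

        # Every printer visible in the current user's session, including per-user connections
        Get-Printer | Select-Object Name, Type, ComputerName

        # Per-user printer connections (the usual home of "ghost" deployed printers)
        Get-ChildItem 'HKCU:\Printers\Connections' | Select-Object PSChildName

        # Remove one stale connection (hypothetical printer name - adjust to match)
        # Remove-Printer -Name '\\SVFREJA\Reception'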

    Read the article

  • Hyper V Server 2012 Remote Management Using Workgroup

    - by Chris Kolenko
    I'm trying to remotely manage Hyper-V Server 2012 from a Windows 8 PC; both client and server are in a workgroup. I've spent about 3-4 hours trying to get this working, with no luck so far, trying the following:
    - Creating a new administrator on the server with the same details as the client, i.e. the same username/password.
    - Adding an entry to my hosts file so the server name resolves to the remote IP.
    - Using HVRemote.
    - Disabling both firewalls.
    The error that I'm getting is "RPC Service Unavailable". How can I accomplish what I'm trying to do?

    Update: Some of the operations in Hyper-V Manager work. For example, Virtual Switch Manager works, and I can open the New Virtual Machine wizard. I run into an error when creating a new virtual hard disk, though. I've tried creating a VM without a hard disk, which works, but using the New Virtual Hard Disk wizard does not work either. I still cannot see any virtual machines: "RPC server unavailable. Unable to establish communication between 'ServerName' and 'ClientName'."
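    For workgroup-to-workgroup management, the client usually needs to trust the server for WinRM and have the server's credentials cached; a minimal sketch to run on the Windows 8 client (elevated), with 'HYPERVHOST' and the password as placeholders:

        # Trust the workgroup server for WinRM/WSMan traffic
        Set-Item WSMan:\localhost\Client\TrustedHosts -Value 'HYPERVHOST' -Force

        # Cache the server's admin credentials so Hyper-V Manager can use them
        cmdkey /add:HYPERVHOST /user:HYPERVHOST\Administrator /pass:P@ssw0rd

    The leftover "RPC server unavailable" errors when enumerating VMs or virtual disks are commonly cleared by granting ANONYMOUS LOGON remote access in the client's DCOM security limits (dcomcnfg), which is what HVRemote's client-side anonymous-DCOM option automates.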

    Read the article

  • Windows 2012 RDS Temporary profile for Administrator

    - by Fabio
    I've configured a Windows 2012 RDS farm with two virtual servers (VMware, each one on a different ESX host). Both servers have the Licensing, Web Access, Gateway, Connection Broker and Session Host roles. High availability is set up and works fine. RemoteApps are working, and even Windows XP clients have access to the web interface. The user profile path is \\vmfiles1\UserProfileDisks\App\, and almost everyone has full access rights to it.

    The problem I have is that I would like to be able to access both servers at the same time with the Administrator account (console), but each time I try, the second server I log on to gives me a temporary profile. I tried enabling/disabling multiple sessions per user and forced admin logoff with the GPO, but nothing changed. Another thing is that the server pool is not saved, so each time I restart the RDS server or log off from it, I have to add the servers in Server Manager again. Do you have any idea? Sorry if my English is not perfect.

    Read the article

  • SQL Server 2012 memory usage steadily growing

    - by pgmo
    I am very worried about the SQL Server 2012 Express instance on which my database is running: the SQL Server process's memory usage is growing steadily (1.5 GB after only 2 days of operation). The database consists of seven tables, each having a bigint primary key (identity) and at least one non-unique index with some included columns to serve the majority of incoming queries. An external application calls some stored procedures via Microsoft OLE DB; each of these does some calculations using intermediate temporary tables and/or table variables and finally does an upsert (UPDATE....IF @@ROWCOUNT=0 INSERT.....) - I never DROP those temporary tables explicitly. The frequency of those calls is about 100 calls every 5 seconds (I saw that the DLL used by the external application opens a connection to SQL Server, makes the call and then closes the connection, for each and every call). The database files are organized in only one filegroup, and the recovery model is set to simple.

    Some questions to diagnose the problem:
    - Is that steadily growing memory usage normal?
    - Did I make any mistake in database design which could lead to this behaviour (no explicit temp-table drop, filegroup organization, etc.)?
    - Can SQL Server manage such a stored procedure call rate (100 calls every 5 seconds, i.e. 100 upserts every 5 seconds, plus the intermediate calculations)?
    - Does the continuous "open connection / call sp / close connection" pattern disturb SQL Server?
    - Is it possible to diagnose what is causing such memory usage? Perhaps queues of waiting requests? (I ran sp_who2, but I didn't see a large number of orphan connections from the external application.)
    - If I restrict the amount of memory which SQL Server is allowed to use, may I sooner or later get into trouble?
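    One point worth noting: by design, SQL Server keeps whatever memory it acquires for the buffer pool and only releases it under operating-system pressure or when it hits its "max server memory" setting, so steady growth on its own is not necessarily a leak. If you want to bound it anyway, a minimal sketch (assuming the sqlcmd utility is available and a default .\SQLEXPRESS instance name; 1024 MB is just an example value):

        # Cap the buffer pool so the process stops growing past roughly 1 GB
        sqlcmd -S .\SQLEXPRESS -Q "EXEC sp_configure 'show advanced options', 1; RECONFIGURE; EXEC sp_configure 'max server memory (MB)', 1024; RECONFIGURE;"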

    Read the article

  • Server 2012 AD-DS Setup Fails (Microsoft.Directory.Services.Deployment.DeepTasks.DeepTasks not found)

    - by Daniel Steiner
    Good morning everyone, I am currently trying to promote my 2012 server to a domain controller, but at the first step of the setup I get this error message (German original): "[Bereitstellungskonfiguration] Fehler bei der Bestimmung, ob der Zielserver bereits ein Domänencontroller ist: Der Typ [Microsoft.Directory.Services.Deployment.DeepTasks.DeepTasks] wurde nicht gefunden: Vergewissern Sie sich, dass die Assembly, die diesen Typ enthält, geladen ist." Translated to English: "[Deployment configuration] Error while determining whether the target server is already a domain controller: The type [Microsoft.Directory.Services.Deployment.DeepTasks.DeepTasks] was not found: Make sure that the assembly that contains this type is loaded." As a result I can neither configure AD DS nor uninstall it via Server Manager. Any help on how to fix this problem would be greatly appreciated.
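    One avenue worth trying, offered only as a sketch: the missing DeepTasks type appears to belong to the deployment assemblies behind the ADDSDeployment PowerShell module that the Server Manager wizard drives, so re-installing the AD DS bits and running the promotion from PowerShell sometimes sidesteps a broken wizard. The domain name and options below are placeholders for a new-forest scenario:

        # Re-install the AD DS role binaries and management tools
        Install-WindowsFeature AD-Domain-Services -IncludeManagementTools

        # Promote from PowerShell instead of the Server Manager wizard
        Import-Module ADDSDeployment
        Install-ADDSForest -DomainName 'corp.example.com' -InstallDns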

    Read the article

  • How to handle external and internal DNS on windows 2012

    - by ThePopcorn
    I'm trying to set up an Active Directory network on Server 2012 R2, and I want AD's DNS to be used only internally (e.g. domain-controller.company.com), plus some records that need both internal and external accessibility (e.g. mail.company.com) and should resolve to internal IPs on the internal network, and finally some records that only need external access. The only solutions I have been able to think of or look up are to use a subdomain that handles all internal records and use the plain company.com domain for all external records. These options seem to mean I have to manage two DNS servers separately. Is either of these the best approach, or am I messing something up somewhere?
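    The usual pattern here is split-brain DNS: the public records stay with the external/registrar DNS, while the internal AD DNS hosts its own copy of company.com containing the internal view. A hedged sketch using the Server 2012 R2 DnsServer cmdlets, with zone name and addresses as placeholders, and assuming the AD domain itself uses a different name (e.g. corp.company.com); if the AD domain is company.com, the zone already exists and only the records need adding. Note that any public record clients also need from inside (e.g. www) must be re-created in the internal copy:

        # On the internal DNS server: host an internal copy of the public zone
        Add-DnsServerPrimaryZone -Name 'company.com' -ReplicationScope 'Forest'

        # Records that should resolve to internal addresses on the LAN
        Add-DnsServerResourceRecordA -ZoneName 'company.com' -Name 'mail' -IPv4Address '10.0.0.25'
        Add-DnsServerResourceRecordA -ZoneName 'company.com' -Name 'www'  -IPv4Address '10.0.0.80'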

    Read the article

  • Windows Server 2012 Remote Desktop - Send messages between standard users

    - by Scott Kramer
    Does anyone know the policy, registry change, etc. for allowing messages (on the same server) between standard users? From an elevated cmd prompt or Task Manager it works, of course, but I need it to work for standard accounts.

    H:\>msg scott hi
    Error sending message to session RDP-Tcp#0 : Error 5
    Error [5]:Access is denied.

    (This is Windows Server 2012.) I also recall setting something on Server 2008 R2, but I just can't remember what it was, so it can be done. Thanks!
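    One commonly cited tweak, offered as an assumption to test rather than a confirmed fix for the standard-user case: msg.exe relies on RPC access to the Terminal Server service, which is disabled by default and controlled by the AllowRemoteRPC registry value. A sketch (run elevated once; a reboot or service restart may be needed):

        # Allow msg.exe to reach sessions via RPC on this session host
        Set-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Control\Terminal Server' `
            -Name 'AllowRemoteRPC' -Value 1 -Type DWord

    Even with this set, non-administrators may still need to be granted message permission on the session host before msg works without elevation.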

    Read the article

  • Microsoft VDI 2012 - VDI Personal collection vs Session-based deployment

    - by Vazgen
    I am a little confused about the differences between the two types of setup. When deploying using Add Roles and Features, the wizard asks you to choose one of two deployment scenarios:
    - Virtual machine-based desktop deployment: allows users to connect to virtual desktop collections that include published RemoteApp programs and virtual desktops.
    - Session-based desktop deployment: allows users to connect to session collections that include published RemoteApp programs and session-based desktops.
    Although this seems intuitive now, if I continue with "Virtual machine-based desktop deployment" I later have another two options when creating a collection:
    - Pooled virtual desktop collection
    - Personal virtual desktop collection
    This is where my confusion lies. What are the differences between a session-based deployment and a virtual machine-based deployment with personal virtual desktop collections? I'm mostly finding information pertaining to Windows Server 2008, but I know there are some core improvements in VDI 2012, so would someone please comment on that. Thank you

    Read the article

  • Windows Server 2012 VPN Server on AWS VPC EC2 Instance

    - by abran
    I'd like to use the Windows Server 2012 VPN role on an AWS VPC EC2 instance. The VPC has one public subnet and the EC2 instance has one network adapter. I've taken the following steps, but have been unsuccessful; am I missing a step or some configuration? Thanks.
    - Configured an Elastic IP for the VPC
    - Enabled protocols 47, 50 & 51
    - Added the RRAS role to the (EC2 instance) server
    - Configured RRAS for VPN only
    Note: I'm able to RDP to the EC2 instance, but not able to ping the external IP.

    Read the article

  • Troubleshooting Windows Server 2012 storage spaces

    - by Iravanchi
    I'm trying the new "Storage Pools" feature in Windows Server 2012, and I've created several virtual disks on the pool. When I restart the server, some of the disks (two out of four) do not attach automatically and don't show up in the list of disks. I can go to Server Manager > File and Storage Services > Storage Pools, and the faulty disks are listed with a yellow triangle beside them. The drive health in the properties is "Unknown". But if I right-click and choose Attach, the disk comes online with all its content intact. After another restart, though, it's the same story. I didn't find any relevant events in the event log. How can I find out why the drives are not attaching?
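    One property worth checking, offered as a sketch rather than a confirmed diagnosis: virtual disks in a 2012 storage pool carry an IsManualAttach flag, and any disk with that flag set sits detached after every reboot until someone attaches it by hand. The friendly name below is a placeholder:

        # See which virtual disks are flagged for manual attach
        Get-VirtualDisk | Select-Object FriendlyName, IsManualAttach, OperationalStatus

        # Clear the flag so the disk auto-attaches at boot, then attach it now
        Set-VirtualDisk -FriendlyName 'Data1' -IsManualAttach $false
        Connect-VirtualDisk -FriendlyName 'Data1'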

    Read the article

  • Server 2012 GPO: PowerShell Script on Computer Startup not running

    - by Alex
    I've got a couple of Server 2012 instances on Amazon EC2 and I'm in the process of setting up the GPOs. All of the settings in the GPOs are being applied fine, except that none of the PowerShell scripts specified to run at computer startup are actually being executed. The scripts sit on a UNC share which has Authenticated Users applied to it with full permissions. I'm assuming it probably has something to do with the execution policy, but I'm not sure how to bypass it automatically. I could just go into each instance and bypass the execution policy, but that's obviously not a good idea, plus I'm eventually going to connect Windows 7 computers that will be running the same scripts. How can I get the scripts to actually run? Google searches haven't yielded a whole lot...
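    If the execution policy is the blocker, it can be set centrally instead of per machine: the "Turn on Script Execution" policy under Computer Configuration > Administrative Templates > Windows Components > Windows PowerShell applies a machine-wide policy to the 2012 servers and the future Windows 7 clients alike. The local equivalent, shown only as a sketch of what that policy does:

        # Machine-wide execution policy; in a domain, prefer setting this via the GPO above
        Set-ExecutionPolicy -ExecutionPolicy RemoteSigned -Scope LocalMachine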

    Read the article

  • Assign PowerShell script to run at startup using PowerShell on Window Server 2012

    - by James Toyer
    I'm trying to write a PowerShell script that will run when a Windows 2012 instance is created on AWS using the configuration tools provided by AWS. My problem is that I want to change the name of the machine once it has started up, restart the machine, and carry on with the setup process afterwards. The main reason for this is that one of the applications installed during setup, Boundary, takes the name of the server when first installed, and it doesn't seem possible to change its name in their portal afterwards. Ideally I would have two PowerShell scripts: one to start the setup process, initiated through AWS, and another that runs the first time the machine restarts. This second script would ideally be queued to run on the next start by the initial setup script. So I guess my questions are: Is this possible? How would I go about doing this? My Google-fu is letting me down here, so any answers would be appreciated.
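    This is possible; one way to sketch it (assuming the first script runs via the AWS-provided configuration/user data, and with the computer name, paths and task name as placeholders) is to register a one-shot scheduled task for the second phase, rename, and reboot:

        # --- phase 1: run from the AWS-provided configuration/user data ---
        # Queue phase 2 to run at the next startup as SYSTEM
        $action  = New-ScheduledTaskAction -Execute 'powershell.exe' `
                       -Argument '-ExecutionPolicy Bypass -File C:\Setup\Phase2.ps1'
        $trigger = New-ScheduledTaskTrigger -AtStartup
        Register-ScheduledTask -TaskName 'FinishSetup' -Action $action -Trigger $trigger `
            -User 'SYSTEM' -RunLevel Highest

        # Rename and reboot; Phase2.ps1 then runs (and installs Boundary) under the new name
        Rename-Computer -NewName 'WEB01' -Force
        Restart-Computer -Force

    Phase2.ps1 should remove the task when it finishes (Unregister-ScheduledTask -TaskName 'FinishSetup' -Confirm:$false) so it only ever runs once.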

    Read the article

  • Migrating SBS 2003 to 2012 Standard

    - by AryaW
    My company is currently trying to migrate from Windows Small Business Server 2003 to Windows Server 2012. We know the general procedure, but we want to make sure we aren't going to mess anything up tremendously. Here are the steps we were planning on taking (a note on ordering follows below):
    - Uninstall Exchange
    - Remove legacy GPOs
    - Demote the domain controller
    - Promote the new server to be the primary domain controller
    We have no mail servers to worry about. My question is, will the above method work, or will we need to make a completely new domain? Thanks!
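    One general caution, offered as a note rather than a verdict on this plan: in most SBS migrations the new 2012 server is promoted as an additional DC and the FSMO roles are transferred to it before the SBS box is demoted, so the domain is never left without a domain controller. A hedged sketch of the role transfer and check, assuming the new DC is named NEWDC:

        # On the new 2012 DC, after it has been promoted into the existing domain
        Move-ADDirectoryServerOperationMasterRole -Identity 'NEWDC' `
            -OperationMasterRole SchemaMaster, DomainNamingMaster, PDCEmulator, RIDMaster, InfrastructureMaster

        # Confirm which server now holds each of the five roles
        netdom query fsmo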

    Read the article

  • Can connect to shared folder on Windows Server 2012, but access denied when accessing

    - by Cylindric
    I have a Windows Server 2012 machine (non-domain) with a folder that's shared out as TestShare. The share permissions give Everyone full access, and there is a local user TestUser that has full access to the actual folder. From GuestServer I can connect and/or map a drive to \\HostServer\TestShare, specifying the username and password for TestUser. (Screenshots of the NTFS permissions, share permissions and Effective Access report were included with the original question.)

    The problem is that when I try to access the folder, I get an "access denied" message. On the host server I can see the user connected to the share in the Sessions manager, so the password is correct and being recognised. If I use an incorrect password I don't get the "completed successfully" message, nor the open session. What else can be blocking access to the shared files, when the share seems to be set up, the folder permissions seem to be set, and the connection seems to be okay? The network is recognised as "Public", and the relevant firewall rules seem to be enabled - even disabling the firewall doesn't help.
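    A hedged diagnostic sketch to run on HostServer (the folder path is a placeholder for wherever TestShare actually points): comparing the share-level ACL with the NTFS ACL, and watching the live session, usually narrows down which of the two layers is returning the deny.

        Get-SmbShareAccess -Name 'TestShare'                 # share-level permissions
        (Get-Acl -Path 'D:\Shares\TestShare').Access         # NTFS permissions - look for Deny entries
        Get-SmbSession | Select-Object ClientComputerName, ClientUserName, Dialect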

    Read the article

  • Open Windows Server 2012/Windows 8 Start Menu

    - by bmccleary
    I realize that Windows Server 2012 (and Windows 8) removed the start menu button and replaced it with moving your mouse to the upper right corner of the screen. This works fine when the desktop is full screen. However, I access all my servers through windowed RDP connections (or through the Hyper-V console window) and in this case, the desktop is not full screen. Therefore, in order to open the new "start" menu, I have to slowly move my mouse very carefully within the window to just a few pixels within top right corner of the window in order to open the menu. Also, because the session is windowed, the default hot keys (Windows + D, etc.) won't work. There has got to be an easier way. Has anyone else experienced this frustration?

    Read the article

  • Windows Server 2012 - SSL Cipher Suite Order Not Long Enough

    - by Sam
    I want to re-order the cipher suites on our new Windows Server 2012 box to help mitigate the BEAST vulnerability for our clients. I went to Local Group Policy => Computer Configuration => Administrative Templates => Network => SSL Configuration Settings, opened SSL Cipher Suite Order, enabled it, and copied the values from the SSL Cipher Suites textbox. I pasted them into Notepad, re-ordered them, then copied and pasted them back into the SSL Cipher Suites textbox. However, the box isn't long enough to hold them all, despite the fact that the length didn't change. I would have to drop the last 3 suites (SSL_CK_DES_192_EDE3_CBC_WITH_MD5, TLS_RSA_WITH_NULL_SHA256, TLS_RSA_WITH_NULL_SHA) in order for it to fit. Should I just drop them? Other ideas?
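    For what it's worth, dropping those three is generally considered safe: the two TLS_RSA_WITH_NULL_* suites provide no encryption at all, and SSL_CK_DES_192_EDE3_CBC_WITH_MD5 is a legacy SSL 2.0 suite. The policy (and the Functions registry value it writes) is limited to roughly 1023 characters, so it can help to build and length-check the ordered list before pasting it in; a small sketch with a truncated placeholder list:

        # Build the preferred order as an array and check it fits the policy's length limit
        $suites = @(
            'TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384',
            'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256'
            # ...remaining suites in preferred order, NULL and SSL 2.0 suites omitted...
        ) -join ','
        $suites.Length   # anything past the limit would be silently truncated by the textbox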

    Read the article

  • Create a share on iscsi drive in Windows Server 2012

    - by Crash893
    I am brand new to server administration, but I don't believe I'm trying to do anything too exotic. I have a Windows Server 2012 (Standard) machine and a Drobo 800i, and my goal is to set up company shares on the iSCSI target I have created on the Drobo. So far I have:
    - initialized the iSCSI target and connected to it
    - made the iSCSI disk read/write (it initially came up as read-only)
    - formatted and mounted it (as drive E:)
    From the server's local desktop I can see and write files to the E:\ drive. However, in the New Share wizard I do not see it as a volume option; when I view the Volumes window the drive is listed (E:, Drobo, Fixed, 16 TB, 16 TB). I'm new to all of this, but I would think that since it's a mounted drive I should be able to share a folder on it, yet it appears it's not that straightforward. Suggestions?
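    As a sanity check, the share can be created outside the wizard; if this works, the problem is only the wizard's volume enumeration rather than the disk itself. A minimal sketch (folder, share and group names are examples):

        # Create a folder on the iSCSI-backed volume and share it
        New-Item -Path 'E:\Shares\Company' -ItemType Directory -Force
        New-SmbShare -Name 'Company' -Path 'E:\Shares\Company' -FullAccess 'BUILTIN\Administrators'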

    Read the article

  • 4TB HGST SATA drive only shows 1.62 TB in Windows Server 2012

    - by user136085
    I'm using a Supermicro X9SRE-3F motherboard with the latest BIOS and 2x 4TB drives connected to the on-board SATA controller. If I set the BIOS to RAID and create a RAID 1 array, the array shows up in the BIOS as 3.6TB. However when I boot Windows (on a separate RAID 1 array), the 4TB drives show up individually in disk manager as 2x 1.62TB drives. I could use Windows 2012 to set up software RAID 1, but when I set the BIOS back to 2x individual drives, they still show up in Windows as 2x 1.62TB drives. How do I access the full capacity of these drives? Thanks, Brian Bulaw

    Read the article
