Search Results

Search found 13808 results on 553 pages for 'remote storage'.

Page 8 of 553

  • remote desktop computer viewer?

    - by Josh
    I would like to install a quad-core computer in my dorm at college and use my much slower laptop to control the quad-core just as if I had a quad-core laptop (control as in I see the GUI, not command-line control). Both are on the same college network, though I'm also interested in what would be necessary if the computers were on different networks. What would be the best method for this? I'm looking for lag-free communication.

    Read the article

  • Cannot move/drag/drop windows/items in remote VNC session

    - by hansioux
    I find it a little hard to believe that no one here has asked this question; I tried searching for it but couldn't find it, so here goes: I set up an Ubuntu desktop computer with VNC to use as a server, and use another Ubuntu desktop computer to VNC into it. The rest of the VNC session works fine, but drag and drop with the mouse is gone, so I cannot move windows or drag and drop items via VNC. I am using the default Remote Desktop in System - Preferences to set up my server, and Remmina as my client. The same happens when using Windows VNC clients to connect to my Ubuntu desktop. I did a bit of searching on Google, and there are actually a lot of reports regarding this issue, but oddly there is no solution. There are even bug reports for this going back to Ubuntu 9.10, yet here it still is in Ubuntu 11.04. There have been suggestions that the bug is in GTK, as seen in the link below: http://ubuntuforums.org/showthread.php?t=1497635&page=2 (libgtk2.0-0 stable (lenny) -> DnD works; libgtk2.0-0 lenny-backport (libgtk2.0-0_2.18.6-1~bpo50+1_i386) -> DnD still works; libgtk2.0-0 testing (libgtk2.0-0_2.20.1-2_i386) -> DnD broken). Please don't give answers such as "use NX", "use ssh -X" or "use x11vnc". I am aware that some people don't have this problem with x11vnc, and I have set up x11vnc before, but I can't for this setup. I am setting this up so Windows-only friends and family can use it.

    Read the article

  • Oracle Database Machine and Exadata Storage Server

    - by jean-marc.gaudron(at)oracle.com
    Master Note for Oracle Database Machine and Exadata Storage Server (Doc ID 1187674.1). This Master Note is intended to provide an index and references to the most frequently used My Oracle Support Notes with respect to Oracle Exadata and Oracle Database Machine environments. This Master Note is subdivided into categories to allow for easy access and reference to notes that are applicable to your area of interest. This includes the following categories:
    • Database Machine and Exadata Storage Server Concepts and Overview
    • Database Machine and Exadata Storage Server Configuration and Administration
    • Database Machine and Exadata Storage Server Troubleshooting and Debugging
    • Database Machine and Exadata Storage Server Best Practices
    • Database Machine and Exadata Storage Server Patching
    • Database Machine and Exadata Storage Server Documentation and References
    • Database Machine and Exadata Storage Server Known Problems
    • ASM and RAC Documentation
    • Using My Oracle Support Effectively

    Read the article

  • Why is business-class storage so expensive?

    - by Mark Henderson
    This is a Canonical Question about the cost of enterprise storage. See also the following question: What's the best way to explain storage issues to developers and other users? It covers general questions like: Why do I have to pay 50 bucks a month per extra gigabyte of storage? Our file server is always running out of space, so why doesn't our sysadmin just throw an extra 1 TB drive in there? Why is SAN equipment so expensive? The answers here will attempt to provide a better understanding of how enterprise-level storage works and what influences the price. If you can expand on the question or provide insight into the answer, please post.

    Read the article

  • Azure Storage Explorer

    - by kaleidoscope
    Azure Storage Explorer – another way to work with your services in the cloud. Azure Storage Explorer is a useful GUI tool for inspecting and altering the data in your Azure cloud storage projects, including the logs of your cloud-hosted applications. All three types of cloud storage can be viewed: blobs, queues, and tables. You can also create or delete blob/queue/table containers and items. Text blobs can be edited, and all data types can be imported/exported between the cloud and local files. Table records can be imported/exported between the cloud and spreadsheet CSV files. Why Azure Storage Explorer? Azure Storage Explorer is a licensed CodePlex project provided by Neudesic, a Microsoft partner. It is a simple UI that requires you to input your blob storage name, access key and endpoints in the Storage Settings dialog. For more details please refer to the link: http://azurestorageexplorer.codeplex.com/Release/ProjectReleases.aspx?ReleaseId=35189 Anish, S
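
    The same account name and access key that the Storage Settings dialog asks for are enough to inspect the blob data programmatically as well. Below is a minimal sketch using the azure-storage-blob package for Python; the account name and key are placeholders, and the SDK choice is an assumption rather than anything the Explorer itself uses.

        # Sketch: list containers and blobs with only an account name and access key.
        # Assumes the azure-storage-blob package; ACCOUNT and KEY are placeholders.
        from azure.storage.blob import BlobServiceClient

        ACCOUNT = "mystorageaccount"            # placeholder storage account name
        KEY = "base64-account-key-goes-here"    # placeholder access key

        service = BlobServiceClient(
            account_url=f"https://{ACCOUNT}.blob.core.windows.net",
            credential=KEY,
        )

        # Walk every container, then the blobs inside each one.
        for container in service.list_containers():
            print("container:", container.name)
            container_client = service.get_container_client(container.name)
            for blob in container_client.list_blobs():
                print("  blob:", blob.name, blob.size)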

    Read the article

  • Why does my MySQL remote-connection fail (VLAN)?

    - by Johannes Nielsen
    Hello, Ubuntu community! Again I have a problem with my special friend MySQL. :D I have two servers, a database server and a web server, which are connected via VLAN. Now I want the web server to have remote access to the database server's MySQL, so I created the user user in mysql.user. user's Host is xxx.yyy.zzz.9, which is the internal IP address of the web server (xxx.yyy.zzz.0 is the network). I also created user with Host %. As long as I use MySQL on the database server itself, logging in as user works fine. But trying to log in as user from xxx.yyy.zzz.9 using mysql -h xxx.yyy.zzz.8 -u user -p (where xxx.yyy.zzz.8 is the database server's internal IP), I get: ERROR 2003 (HY000): Can't connect to MySQL server on 'xxx.yyy.zzz.8' (110). So I tried to set bind-address in the my.cnf file. If I use xxx.yyy.zzz.8, nothing changes; if I use xxx.yyy.zzz.9 and try to restart MySQL, I get: mysql stop/waiting start: Job failed to start. I checked the log files and found nothing; the database server's MySQL doesn't even register that the web server tries to connect remotely. My idea is that maybe I didn't configure the VLAN properly, even though I asked someone who actually knows such stuff and he told me I did everything right. What I wrote into /etc/network/interfaces is:

        # The VLAN
        auto eth1
        iface eth1 inet static
            address xxx.yyy.zzz.8
            netmask 255.255.255.0
            network xxx.yyy.zzz.0
            broadcast xxx.yyy.zzz.255
            mtu 1500

    ifconfig returns the following for eth1, which is what I configured (this is for the database server; the web server looks similar):

        eth1  Link encap:Ethernet  HWaddr xxxxxxxxxxxxxx
              inet addr:xxx.yyy.zzz.8  Bcast:xxx.yyy.zzz.255  Mask:255.255.255.0
              inet6 addr: xxxxxxxxxxxxxxx/64 Scope:Link
              UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
              RX packets:241146 errors:0 dropped:0 overruns:0 frame:0
              TX packets:9765 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:1000
              RX bytes:17825995 (17.8 MB)  TX bytes:566602 (566.6 KB)
              Memory:fb900000-fb920000

    ethtool eth1 returns (again for the database server; the web server looks similar):

        Settings for eth1:
            Supported ports: [ TP ]
            Supported link modes: 10baseT/Half 10baseT/Full 100baseT/Half 100baseT/Full 1000baseT/Full
            Supported pause frame use: No
            Supports auto-negotiation: Yes
            Advertised link modes: 10baseT/Half 10baseT/Full 100baseT/Half 100baseT/Full 1000baseT/Full
            Advertised pause frame use: No
            Advertised auto-negotiation: Yes
            Speed: 100Mb/s
            Duplex: Full
            Port: Twisted Pair
            PHYAD: 1
            Transceiver: internal
            Auto-negotiation: on
            MDI-X: Unknown
            Supports Wake-on: d
            Wake-on: d
            Current message level: 0x00000003 (3) drv probe
            Link detected: yes

    Actually I think everything is right, but it still doesn't work. Does anyone have an idea? EDIT: I commented out bind-address in my.cnf again after it didn't work.
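
    ERROR 2003 with code (110) is a TCP connection timeout: the connection attempt never reaches mysqld at all, which points at routing, a firewall, or bind-address rather than at user privileges. A quick way to separate the network layer from MySQL itself is to test whether the MySQL port is reachable from the web server. The short Python sketch below does that with only the standard library; the IP address is a placeholder for the database server's VLAN address.

        # Sketch: test plain TCP reachability of the MySQL port from the web server.
        # HOST is a placeholder for the database server's VLAN IP (xxx.yyy.zzz.8).
        import socket

        HOST = "192.0.2.8"   # placeholder IP
        PORT = 3306          # default MySQL port

        try:
            # No MySQL credentials are involved yet; this only checks the network path.
            with socket.create_connection((HOST, PORT), timeout=5):
                print("TCP connection succeeded: mysqld is reachable on this port.")
        except socket.timeout:
            print("Timed out (like errno 110): likely firewall, routing, or bind-address.")
        except ConnectionRefusedError:
            print("Connection refused: the host answered, but nothing listens on 3306.")
        except OSError as exc:
            print(f"Connection failed: {exc}")

    If this times out while ping between the two VLAN addresses works, look at iptables on the database server and at the bind-address line in my.cnf (it must be an address that exists on the database server, such as xxx.yyy.zzz.8 or 0.0.0.0, never the client's address); if the TCP test succeeds but the mysql client is still rejected, the problem is in the user's Host entry or grants instead.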

    Read the article

  • TP-Link storage server accessing from Mac OS

    - by coure2011
    I have a storage/print server by TP-Link: http://www.tp-link.com/en/products/details/?categoryid=232&model=TL-PS310U I just connect USB storage to the print server, and on Windows I can access that USB device from my Windows computers. But how can I access that device from a Mac? I found an article on adding a USB printer to the Mac through that device, but I was not able to find out how to access the storage device. Please help me out!

    Read the article

  • Win7 to Win7 Remote Desktop Not working, Xp to 7 working fine

    - by vlad b.
    Hello, I have a small home network and recently I tried to enable Remote Desktop for one of the PCs. I have a mix of Windows 7, Windows Vista and XP running alongside Ubuntu, CentOS and others (some virtual, some physical). I have a few Windows 7 PCs that can be connected to using Remote Desktop from inside and outside the network (port redirects on routers, etc.) and some XP ones. The trouble is that when I tried to do the same thing for a Windows 7 laptop, I discovered I can't connect to it from another Windows 7 PC inside the home network. To sum it up: working: XP -> Win7; not working: Win7 -> Win7. What I tried: disabling and re-enabling Remote Desktop (My Computer - Remote settings); removing and re-adding users in the remote settings window; adding a new user to the machine, both administrator and 'normal' user; checking the firewall settings on the machine and setting 'allow' for Remote Desktop on both 'home/work' and 'public' networks. Any tips on what I should do next? It displays '...secure connection' and after that a window with 'Your security credentials did not work', and it lets me try again with another user/password.

    Read the article

  • Windows API BackupRead failing with error 50 on Windows Storage Server 2008 R2

    - by Jason F
    We have a backup application that uses the Windows API BackupRead. It works correctly on Windows Server 2003, 2008, and 2008 R2, but it does not work on Storage Server 2008 R2: it always fails with error 50 ("The request is not supported"). The documentation for BackupRead gives no indication that it will not work with Storage Server 2008 R2. Does anyone else have experience using this API on Storage Server 2008 R2?

    Read the article

  • Remote Desktop app can't connect through VPN or through RDP load balancer

    - by nhinkle
    Using the regular Remote Desktop Client (in the desktop environment) I can connect just fine to remote servers when connected through Cisco VPN or when accessing a server behind a load balancer. When using the Remote Desktop app in the Modern UI, I can't do either of these things. Trying to connect to a remote server that's on a private network fails with: "Can't find server, make sure the name and domain are correct and try again". And connecting to a server that's behind an RDP load balancer fails with the following error, after accepting credentials: "Because of a protocol error, this session will be disconnected. Please try connecting to the remote PC again". Is there some way to use the Remote Desktop app in these situations, or am I just out of luck?

    Read the article

  • When is the default storage rule not really the default storage rule?

    - by Kevin Smith
    In 11g, WebCenter Content (WCC) introduced dispersion rules in the vault and weblayout directory paths to better distribute content across the directories. The dispersion rule was based on dRevClassID. The only problem with this is that dRevClassID did not remain the same when you copied content from one WCC instance to another using Archiver, as in a contribution-consumption scenario. This could cause problems because the web-viewable path would not be the same between the contribution and consumption instances. In the PS5 (11.1.1.6.0) release of WCC they addressed this by configuring the File Store Provider (FSP) so that all new content would use a storage rule with a dispersion rule based on dDocName, which stays the same when content is copied to another WCC instance. To support migration from older versions of WCC, they left the default storage rule unchanged, created a new storage rule called DispByContentId, and made that the default storage rule for all new content. I only stumbled upon this a while back when I was trying to change the FSP configuration so that all content used a webless storage rule. I changed the default storage rule, restarted WCC, and checked in a new content item. To my surprise, the new content was not created as webless. I struggled with this for a while until I noticed there were multiple storage rules defined in the FSP configuration. When I looked at the default value for the xStorageRule field in Configuration Manager, sure enough, it was no longer "default" but DispByContentId. Once I updated the DispByContentId storage rule to webless and restarted WCC, all my new content was created using the webless storage rule, just like I wanted. I noticed when I was creating this blog post that the default storage rule is also listed on the File Store Provider Information page, but I guess I didn't see that when I originally did this.

    Read the article

  • Remote access not working without connected monitor

    - by winSharp93
    I am trying to configure a Windows Server 2008 machine as a home server for my personal use (mainly for storing documents, hosting source control, etc.). The "server" consists of an Intel Atom 2700DC board and an Intel SSD. Configuring remote access to the server, I am confronted with a very strange problem: as long as a monitor is connected to the server, remote access works without any problems. However, when no monitor is connected at boot time, remote access simply won't work (when trying to connect, I keep getting errors saying the remote server was not found or that remote access is disabled). Windows definitely boots when no monitor is connected, as I receive a message asking whether to enter Safe Mode when booting after having powered the server down by unplugging the power cord. When I plug in a monitor after boot, it stays turned off and remote desktop connections still fail. Do you have any ideas about what I could try?

    Read the article

  • Shared storage solution for our sql server backups

    - by Gokhan
    We have 3 clustered SQL servers. We have 5+ multi-terabyte databases, and their backup files (compressed using Quest LiteSpeed) are hitting over 600 GB each. We are required to keep at least one or, if we can, two weeks of weekly full backups, then 6 days of differential backups, and a week or two weeks' worth of log backups locally. We are currently limited to 2 TB volumes from our SAN team; we can have multiple volumes, but they are expensive ($200 per raw TB per month), and having to deal with many backup volumes instead of a single big volume is difficult. I think it would be good if we could have shared network storage of 20 TB+ (RAID 10 or so) for all our servers to keep the backups on, and another department would copy them to tape from the network storage and delete files according to the retention period. If this box came with a built-in operating system (even a Unix-based complete file storage system), that would be fine. What do you think: does this make sense to you, and is there any manufacturer that sells a storage product like this that works in a clustered environment? Thank you.

    Read the article

  • Server 2012 Storage Pools, Raid Controller... can the Storage Pool deal with it?

    - by TomTom
    Before trying it out - I can't find any documentation. Given that Storage Pools have serious performance problems with parity, and do not rebalance data when you add disks, my preferred way to use them would be as thin-provisioned space and iSCSI targets, with every pool running against one RAID volume that comes from a RAID controller (which also brings SSD read and write caching, another thing missing from Storage Pools). The main question is: how does a Storage Pool handle changes in the underlying disk? I am mostly talking about OCE (Online Capacity Expansion), where a disk suddenly reports more space after an expansion. Standard Windows allows you to use this additional space (and expand the partitions). How does a Storage Pool handle it?

    Read the article

  • ADO and Two Way Storage Tiering

    - by Andy-Oracle
    We get asked the following question about Automatic Data Optimization (ADO) storage tiering quite a bit: can you tier back to the original location if the data gets hot again? The answer is yes, but not with standard Automatic Data Optimization policies, at least not reliably. That's not how ADO is meant to operate. ADO is meant to mirror a traditional view of Information Lifecycle Management (ILM), where data will be very volatile when first created, will become less active or cool, and then will eventually cease to be accessed at all (i.e. cold). I think the reason this question gets asked is because customers realize that many of their business processes are cyclical, and the thinking goes that those segments that only get used during month-end or year-end cycles could sit on lower cost storage when not being used. Unfortunately this doesn't fit very well with the ADO storage tiering model. ADO storage tiering is based on the amount of free and used space in the source tablespace. There are two parameters that control this behavior, TBS_PERCENT_USED and TBS_PERCENT_FREE. When the space used in the tablespace exceeds the TBS_PERCENT_USED value, segments specified in storage tiering clause(s) can be moved until the percentage of free space reaches the TBS_PERCENT_FREE value. It is worth mentioning that no checks are made for available space in the target tablespace. Now, it is certainly possible to create custom functions to control storage tiering, but this can get complicated. The biggest problem is ensuring that there is enough space to move the segment back to tier 1 storage, assuming that that's the goal. This isn't as much of a problem when moving from tier 1 to tier 2 storage, because there is typically more tier 2 storage available; at least that's the premise, since it is supposed to be less costly, lower performing and higher capacity storage. In either case though, if there isn't enough space then the operation fails. In the case of a customized function, the question becomes: do you attempt to free the space so the move can be made, or do you just stop and return false so that the move cannot take place? This is really the crux of the issue. Once you cross into this territory you're really going to have to implement two-way hierarchical storage, and the whole point of ADO was to provide automatic storage tiering. You're probably better off using heat map and/or business access requirements and building your own hierarchical storage management infrastructure if you really want two-way storage tiering.

    Read the article

  • Ping one remote server from another remote server

    - by user666254
    It's simple to ping a server in C#, but suppose I have servers A, B and C. A connects to B. A asks B to ping C, to check that B can talk to C. A needs to read the outcome. Now, first of all, is this possible without installing an application onto B? In other words, can I perform the entire check just by running a program on A? If so, can anyone suggest the route I would take to achieve this? I've looked at sockets, but from the examples I've seen these require both a client AND a server application to function.
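
    One route that avoids installing anything on B is to use a management interface Windows already exposes, such as WMI: A connects to B's WMI service and asks it to evaluate Win32_PingStatus against C, so the ICMP echo is sent by B itself. The sketch below shows the idea in Python using the third-party wmi package (host names and credentials are placeholders); the same pattern is available from C# through System.Management with a WQL query like SELECT * FROM Win32_PingStatus WHERE Address = 'C'.

        # Sketch: from server A, ask server B (via WMI/DCOM) to ping server C.
        # Assumes the third-party "wmi" package (plus pywin32) on A and that the
        # given account may use WMI on B. All names below are placeholders.
        import wmi

        SERVER_B = "serverB"             # the middle host that should do the pinging
        SERVER_C = "serverC"             # the target B must be able to reach
        USER = r"DOMAIN\svc_account"     # hypothetical account with rights on B
        PASSWORD = "secret"

        # Connect to B's WMI service from A; nothing is installed on B.
        conn = wmi.WMI(computer=SERVER_B, user=USER, password=PASSWORD)

        # Win32_PingStatus makes B itself send the ICMP echo request to C.
        for result in conn.Win32_PingStatus(Address=SERVER_C):
            if result.StatusCode == 0:
                print(f"{SERVER_B} can reach {SERVER_C}")
            else:
                print(f"Ping from {SERVER_B} to {SERVER_C} failed (StatusCode={result.StatusCode})")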

    Read the article

  • Why is hosted storage so expensive?

    - by Mark Henderson
    There are many questions on Server Fault asking why server storage is so expensive, e.g. "Why do I have to pay 50 bucks a month per extra gigabyte of storage?" or "Our file server is always running out of space, why doesn't our sysadmin just throw an extra 1TB drive in there?" These questions usually come from people who lack an understanding of how enterprise-level storage works and what influences the price. This question is designed to be the "question to end all questions" regarding the price of enterprise storage.

    Read the article

  • RDS, RDWeb, and RemoteApp: How to use public certificate for launching apps on session host?

    - by Bret Fisher
    Question: How do I tell RDWeb to launch apps from remote.domain.com rather than host.internaldomain.local? Environment: existing org with an AD forest, and a new single Server 2012 machine running all Remote Desktop Services roles for the session host. I used the new 2012 wizard to set up "QuickSessionCollection" with the roles RD Session Host, RD Connection Broker, RD Gateway, RD Web Access, and RD Licensing. Everything works with the self-signed cert, but we want to avoid that. The users are potentially on non-domain machines, so sticking a private root cert on their machines isn't an option; every part of the solution needs to use the public cert. I added the public remote.domain.com cert to all roles using the Server Manager GUI: RD Connection Broker - Enable Single Sign On, RD Connection Broker - Publishing, RD Web Access, and RD Gateway. So now everything works beautifully except the last step: the user logs into https://remote.domain.com and clicks an app icon, which in the background downloads a .rdp file that is signed by remote.domain.com. The .rdp file is set to use the RD Gateway, which is remote.domain.com, but it says the app is hosted on the internal host.internaldomain.local, which doesn't match the RDP-Tcp TLS cert of remote.domain.com, and that pops a warning. It's this last step that I'd like to fix. Is there a config option in PowerShell, WMI, or .config to tell RDWeb/RemoteApp to use remote.domain.com for all published apps so the TLS cert for RDP matches what the session host is using? NOTE: This question talks about this issue, and this answer mentions how you might fix it in 2008, but that GUI doesn't exist in 2012 for RemoteApp, and I can't find a PowerShell setting for it. NOTE: Here's a screenshot of the setting in 2008 R2 that I need to change. It tells RemoteApp what to use for the session host server name. How can I set that in 2012?

    Read the article

  • VPN Authentication Credentials (Local/Remote Identifiers) For Remote Access VPN

    - by thatidiotguy
    So I am trying to set up a remote access VPN using the free ShrewSoft VPN client (https://www.shrew.net/software). I want to use a PSK as the authentication mechanism, combined with XAuth so that a connection requires a valid username/password combo. Under the authentication tab, this particular VPN client is asking for a Local Identity and a Remote Identity. The options for Local Identity Type are: Fully Qualified Domain Name, User Fully Qualified Domain Name, IP Address, and Key Identifier. The options for Remote Identity are: Any, Fully Qualified Domain Name, User Fully Qualified Domain Name, IP Address, and Key Identifier. My current thinking is that I can use the fully qualified domain name provided by the remote firewall for the Remote Identity, but I do not know what it wants for the Local Identity. Just to stress: I am not trying to set up a site-to-site VPN. Can anybody shed any light on what I am missing here? A screenshot can be provided if that would be helpful. The current error I am getting during the connection is: IKE Responder: Proposed IKE ID mismatch.

    Read the article

  • How to troubleshoot Linksys E4200 Remote Management

    - by Jordan
    My Linksys E4200 is configured for Remote Management, but the router is not accepting the connections. Here's the configuration under Administration - Management - Remote Management Access: Remote Management: Enabled; Access via: HTTP; Remote Upgrade: Disabled; Allowed Remote IP Address: Any IP Address; Remote Management Port: 8080. The router is set up to use 192.168.10.41 as its static Internet IP address and 192.168.35.1 as its LAN IP address. I can access the router just fine via its LAN IP address, but I can't make a connection using http://192.168.10.41:8080. I've tried variations of the settings above (enabled HTTPS, enabled Remote Upgrade, set an IP range of 192.168.10.1-254) but nothing has worked yet. Hoping someone can at least point me in the right direction. Thanks. Update: To clarify, I have a wired router that connects straight to the T1 modem. It's configured to use 192.168.10.1-254 as its internal LAN range. The E4200 wireless router in question is on that LAN, using 192.168.10.41 as its WAN IP address. The E4200's internal LAN range is 192.168.35.1-254. I'm not trying to access the E4200 from the Internet; I'm just trying to access it from its WAN IP address. Thanks.

    Read the article

  • Create a Remote Git Repository from an Existing XCode Repository

    - by codeWithoutFear
    Introduction
    Distributed version control systems (VCS's), like Git, provide a rich set of features for managing source code. Many development tools, including XCode, provide built-in support for various VCS's. These tools provide simple configuration with limited customization to get you up and running quickly while still providing the safety net of basic version control. I hate losing (and re-doing) work. I have OCD when it comes to saving and versioning source code. Save early, save often, and commit to the VCS often. I also hate merging code. Smaller and more frequent commits enable me to minimize merge time and effort as well. The work flow I prefer even for personal exploratory projects is:
    1. Make small local changes to the codebase to create an incrementally improved (and working) system.
    2. Commit these changes to the local repository. Local repositories are quick to access, function even while offline, and provide the confidence to continue making bold changes to the system. After all, I can easily recover to a recent working state.
    3. Repeat 1 & 2 until the codebase contains "significant" functionality and I have connectivity to the remote repository.
    4. Push the accumulated changes to the remote repository. The smaller the change set, the less likely extensive merging will be required. Smaller is better, IMHO.
    The remote repository typically has a greater degree of fault tolerance and active management dedicated to it. This can be as simple as a network share that is backed up nightly or as complex as dedicated hardware with specialized server-side processing and significant administrative monitoring. XCode's out-of-the-box Git integration enables steps 1 and 2 above. Time Machine backups of the local repository add an additional degree of fault tolerance, but do not support collaboration or take advantage of managed infrastructure such as on-premises or cloud-based storage.

    Creating a Remote Repository
    These are the steps I use to enable the full workflow identified above. For simplicity the "remote" repository is created on the local file system. This location could easily be on a mounted network volume.

    Create a Test Project
    My project is called HelloGit and is located at /Users/Don/Dev/HelloGit. Be sure to commit all outstanding changes. XCode always leaves a single changed file for me after the project is created and the initial commit is submitted.

    Clone the Local Repository
    We want to clone the XCode-created Git repository to the location where the remote repository will reside. In this case it will be /Users/Don/Dev/RemoteHelloGit.
    1. Open the Terminal application.
    2. Clone the local repository to the remote repository location: git clone /Users/Don/Dev/HelloGit /Users/Don/Dev/RemoteHelloGit

    Convert the Remote Repository to a Bare Repository
    The remote repository only needs to contain the Git database. It does not need a checked out branch or local files.
    1. Go to the remote repository folder: cd /Users/Don/Dev/RemoteHelloGit
    2. Indicate the repository is "bare": git config --bool core.bare true
    3. Remove files, leaving the .git folder: rm -R *
    4. Remove the "origin" remote: git remote rm origin

    Configure the Local Repository
    The local repository should reference the remote repository. The remote name "origin" is used by convention to indicate the originating repository. This is set automatically when a repository is cloned. We will use the "origin" name here to reflect that relationship.
    1. Go to the local repository folder: cd /Users/Don/Dev/HelloGit
    2. Add the remote: git remote add origin /Users/Don/Dev/RemoteHelloGit

    Test Connectivity
    Any changes made to the local Git repository can be pushed to the remote repository subject to the merging rules Git enforces.
    1. Create a new local file: date > date.txt
    2. Add the new file to the local index: git add date.txt
    3. Commit the change to the local repository: git commit -m "New file: date.txt"
    4. Push the change to the remote repository: git push origin master

    Now you can save, commit, and push/pull to your OCD hearts' content! Code without fear! --Don

    Read the article

  • remote desktop client with panning of large desktops?

    - by mikewse
    When using the standard Windows remote desktop client in Full Screen to connect to a large remote desktop, the result is usually that the remote desktop is resized to a smaller size matching the local display. Are there any RDP clients that in Full Screen instead leave the remote desktop at its original size and pan this larger area when the mouse reaches the border edges of the local display? Thanks Mike

    Read the article

  • How to check mysql remote connectivity

    - by Mirage
    I have enabled the IPs in WHM for remote MySQL, but I am still not able to connect. Is there any shell command for MySQL that I can type to check remote connectivity? I mean either a command to execute on the server itself where MySQL is installed, or a command to execute on my local Linux box to check connectivity to the remote MySQL server. Remote OS: CentOS 5.4; local OS: Ubuntu.
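
    The quickest end-to-end check from the local box is to attempt a real login with the stock client, for example mysql -h <server-ip> -u <user> -p -e "SELECT 1": if the grants, the allowed client IPs and the network path are all in order, it returns a row, and otherwise the error it prints (2003 versus 1045) tells you whether the problem is the network or the account. The same check scripted as a minimal Python sketch, assuming the mysql-connector-python package and placeholder host and credentials:

        # Sketch: attempt a real MySQL login from the local machine.
        # Assumes mysql-connector-python; host and credentials are placeholders.
        import mysql.connector
        from mysql.connector import Error

        try:
            conn = mysql.connector.connect(
                host="203.0.113.10",       # placeholder: remote MySQL server IP
                user="remote_user",        # placeholder: account granted for this client IP
                password="secret",
                connection_timeout=5,
            )
            print("Connected, server version:", conn.get_server_info())
            conn.close()
        except Error as exc:
            # errno 2003: server unreachable (network, firewall, bind-address, skip-networking)
            # errno 1045: server reached, but access denied (user@host or password)
            print("Connection failed:", exc)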

    Read the article

  • networked storage for a research group, 10-100 TB

    - by Marc
    This is related to this post: http://serverfault.com/questions/80854/scalable-24-tb-nas-for-research-department but perhaps a little more general. Background: we're a research lab of around 10 people who do a lot of experiments that involve taking pictures at one of several lab setups and then analyzing them on one of several lab computers. Each experiment may produce 2 or 3 GB of data, and we are generating data at a rate of about 10 TB/year. Right now we are storing the data on a 6-bay Netgear ReadyNAS Pro, but even with 2 TB drives this only gives us 10 TB of storage. Also, right now we are not backing up at all. Our short-term backup plan is to get a second ReadyNAS, put it in a different building and mirror the one onto the other. Obviously, this is somewhat non-ideal. Our options: 1) We can pay our university $400/TB/year for "backed up" online storage. We trust them more than we trust us, but not a whole lot. 2) We can continue to buy small NASes and mirror them between offices. One limit, although stupid, is that we don't have an unlimited number of Ethernet jacks. 3) We can try to implement our own data storage solution, which is why I'm asking you guys. One thing to consider is that we're a very transient population and none of us are network administration experts. I will probably be here only another year or so, and graduate students, who are here the longest, have a 5-6 year time scale. So nothing can require expert oversight. Our data transfer rates are low - most of the data will just sit on the server waiting for someone to look at it once or twice - so we don't need a really high-speed system. Given these constraints, can someone recommend a fairly low-cost, scalable, more or less turnkey shared data storage system with backup in a separate physical location? Does such a thing exist, or should we just pay the university to take care of it for us? As a second question, our professor just got tenure and is putting together a budget. Here the goal is to ask for as much as you can and hope you get a fraction of it. So, the same question, minus the low cost: without budget constraints, can you recommend a scalable turnkey backed-up storage system? Thanks

    Read the article

  • SPARC T4-2 Produces World Record Oracle Essbase Aggregate Storage Benchmark Result

    - by Brian
    Significance of Results
    Oracle's SPARC T4-2 server configured with a Sun Storage F5100 Flash Array and running Oracle Solaris 10 with Oracle Database 11g has achieved exceptional performance for the Oracle Essbase Aggregate Storage Option benchmark. The benchmark has upwards of 1 billion records, 15 dimensions and millions of members. Oracle Essbase is a multi-dimensional online analytical processing (OLAP) server and is well suited to SPARC T4 servers.
    • The SPARC T4-2 server (2 CPUs) running Oracle Essbase 11.1.2.2.100 outperformed the previously published results on Oracle's SPARC Enterprise M5000 server (4 CPUs) with Oracle Essbase 11.1.1.3 on Oracle Solaris 10 by 80%, 32% and 2x on Data Loading, Default Aggregation and Usage Based Aggregation, respectively.
    • The SPARC T4-2 server with the Sun Storage F5100 Flash Array and Oracle Essbase running on Oracle Solaris 10 achieves sub-second query response times for 20,000 users in a 15-dimension database.
    • The SPARC T4-2 server configured with Oracle Essbase was able to aggregate and store values in the database for a 15-dimension cube in 398 minutes with 16 threads and in 484 minutes with 8 threads.
    • The Sun Storage F5100 Flash Array provides more than a 20% improvement out of the box compared to a mid-size Fibre Channel disk array for default aggregation and usage-based aggregation.
    • The Sun Storage F5100 Flash Array with Oracle Essbase provides the best combination for large Oracle Essbase databases, leveraging Oracle Solaris ZFS and taking advantage of high bandwidth for faster load and aggregation.
    Oracle Fusion Middleware provides a family of complete, integrated, hot-pluggable and best-of-breed products known for enabling enterprise customers to create and run agile and intelligent business applications. Oracle Essbase's performance demonstrates why so many customers rely on Oracle Fusion Middleware as their foundation for innovation.

    Performance Landscape
    System                                 Data Size (millions of items)   Database Load (min)   Default Aggregation (min)   Usage Based Aggregation (min)
    SPARC T4-2, 2 x SPARC T4 2.85 GHz      1000                            149                   398*                        55
    Sun M5000, 4 x SPARC64 VII 2.53 GHz    1000                            269                   526                         115
    Sun M5000, 4 x SPARC64 VII 2.4 GHz     400                             120                   448                         18
    * 398 minutes with CALCPARALLEL set to 16; 484 minutes with CALCPARALLEL set to 8

    Configuration Summary
    Hardware: 1 x SPARC T4-2, 2 x 2.85 GHz SPARC T4 processors, 128 GB memory, 2 x 300 GB 10000 RPM SAS internal disks.
    Storage: 1 x Sun Storage F5100 Flash Array, 40 x 24 GB flash modules, SAS HBA with 2 SAS channels, data storage scheme striped (RAID 0), Oracle Solaris ZFS.
    Software: Oracle Solaris 10 8/11, Installer V 11.1.2.2.100, Oracle Essbase Client v 11.1.2.2.100, Oracle Essbase v 11.1.2.2.100, Oracle Essbase Administration Services 64-bit, Oracle Database 11g Release 2 (11.2.0.3), HP's Mercury Interactive QuickTest Professional 9.5.0.

    Benchmark Description
    The objective of the Oracle Essbase Aggregate Storage Option benchmark is to showcase the ability of Oracle Essbase to scale in terms of user population and data volume for large enterprise deployments. Typical administrative and end-user operations for OLAP applications were simulated to produce benchmark results. The benchmark test results include: Database Load (time elapsed to build a database, including outline and data load), Default Aggregation (time elapsed to build the aggregation), and Usage Based Aggregation (time elapsed to build the aggregate views proposed as a result of tracked retrieval queries).
    Summary of the data used for this benchmark: 40 flat files, each of size 1.2 GB, 49.4 GB in total; 10 million rows per file, 1 billion rows total; 28 columns of data per row; a database outline with 15 dimensions (five of them attribute dimensions); a Customer dimension with 13.3 million members; 3 rule files.

    Key Points and Best Practices
    The Sun Storage F5100 Flash Array has been used to accelerate application performance. Setting data load threads (DLTHREADSPREPARE) to 64 and the load buffer to 6 improved data loading by about 9%. Factors influencing aggregation materialization performance are the Aggregate Storage Cache and the number of threads (CALCPARALLEL) for parallel view materialization. The optimal values for this workload on the SPARC T4-2 server were Aggregate Storage Cache: 32 GB and CALCPARALLEL: 16.

    See Also
    Oracle Essbase Aggregate Storage Option Benchmark on Oracle's SPARC T4-2 Server (oracle.com); Oracle Essbase (oracle.com, OTN); SPARC T4-2 Server (oracle.com, OTN); Oracle Solaris (oracle.com, OTN); Oracle Database 11g Release 2 Enterprise Edition (oracle.com, OTN).

    Disclosure Statement
    Copyright 2012, Oracle and/or its affiliates. All rights reserved. Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners. Results as of 28 August 2012.

    Read the article
