Search Results

Search found 13808 results on 553 pages for 'remote storage'.


  • UDISKS instead of HAL

    - by MeJ
    Does anybody have experience with udisks? HAL is no longer supported on most Linux distributions, so I am thinking of switching to udisks. My current HAL-based script looks like this:

        for UDI in $(hal-find-by-property --key storage.bus --string usb)
        do
            HAL_TMP=`hal-get-property --udi $UDI --key storage.removable.media_available`
            if [ "$HAL_TMP" = "true" ]; then
                HAL_DEV=$(hal-get-property --udi $UDI --key block.device)
                HAL_SIZE=$(hal-get-property --udi $UDI --key storage.removable.media_size)
                HAL_TYPE=$(hal-get-property --udi $UDI --key storage.drive_type)
            fi
        done

    How do I adapt the commands above to use udisks instead of HAL? Thanks!
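    A minimal sketch of the same loop using the udisks (v1) command-line tool is below. The option names (--enumerate-device-files, --show-info) and the property labels grepped for ("has media", "removable", "size") are assumptions based on a typical udisks 1.x install and may differ on your distribution, so treat this as a starting point rather than a drop-in replacement:

        # Hypothetical udisks-based equivalent of the HAL loop above.
        # Check `udisks --show-info <device>` output on your system first;
        # the exact property labels vary between udisks versions.
        for DEV in $(udisks --enumerate-device-files); do
            INFO=$(udisks --show-info "$DEV")
            if echo "$INFO" | grep -q 'has media:.*1'; then
                UD_DEV="$DEV"
                UD_SIZE=$(echo "$INFO" | awk '/^  size:/ {print $2; exit}')
                UD_REMOVABLE=$(echo "$INFO" | awk '/^  removable:/ {print $2; exit}')
                echo "$UD_DEV size=$UD_SIZE removable=$UD_REMOVABLE"
            fi
        done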


  • Under which circumstances can a *local* user account access a remote SQL Server with a trusted connection?

    - by Heinzi
    One of our customers has the following configuration: On the domain controller, there's an SQL Server. On his PC (WinXP), he logs on with LocalPC\LocalUser. In Windows Explorer, he opens DomainController\SomeShare and authenticates as Domain\Administrator. He starts our application, which opens a trusted connection (Windows authentication) to the SQL Server. It works. In SSMS, the connection shows up with the user Domain\Administrator. Firstly, I was surprised that this even works. (My first suspicion was that there is a user with the same name and password in the domain, but there is no user LocalUser in the domain.) Then we tried to reproduce the same behaviour on his new PC, but failed: On his new PC (Win7), he logs on with OtherLocalPC\OtherLocalUser. In Windows Explorer, he opens DomainController\SomeShare and authenticates as Domain\Administrator. He starts our application, which opens a trusted connection (Windows authentication) to the SQL Server. It fails with the error message Login failed for user ''. The user is not associated with a trusted SQL Server connection. Hence my question: Under which conditions can a non-domain user access a remote SQL Server using Windows Authentication with different credentials? Apparently, it's possible (it works on his old PC), but why? And how can I reproduce it?


  • Why is SSH finding remote keys for other accounts?

    - by Brian Pontarelli
    This is a strange issue I'm having with SSH from my MacBook Pro to a Linux (Ubuntu 11.10) server. I have a DSA key set up on the remote Linux server under my home directory, like this: /home/me/.ssh/authorized_keys. I also have the same DSA key set up for a few other accounts on the machine, named "foo" and "bar". I can log into all of the accounts without any password, so the DSA keys are all set up correctly. The strange behavior shows up when debugging the SSH connection. During the connection, the SSH debug output includes this:

        debug2: key: /Users/me/.ssh/id_dsa (0x7f91a1424220)
        debug2: key: /home/foo/.ssh/id_dsa (0x7f91a1425620)
        debug2: key: /home/bar/.ssh/id_rsa (0x7f91a1425c60)
        debug2: key: /Users/me/.ssh/id_rsa (0x0)

    This is strange for several reasons. Why is SSH listing keys that look like they live on the server (/home/foo/.ssh/id_dsa and /home/bar/.ssh/id_rsa)? Those files don't even exist on the server, so why are they listed? I'm not logging into the "foo" or "bar" accounts, so why does SSH mention them at all? On my MacBook Pro I only have a DSA key, yet SSH lists an RSA key as well. Another user on the server doesn't get any of these messages when they log in, even though they have the exact same DSA key setup and the exact same MacBook Pro setup as mine. Does anyone know what these messages are and why SSH is outputting them?
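    A couple of quick checks, assuming a stock OpenSSH client: those debug2 "key:" lines normally list the identities the client intends to offer, i.e. anything held by the local ssh-agent plus the IdentityFile entries from ssh_config, and agent keys carry as their comment the path they were originally added from (which is usually where unexpected /home/... paths come from). A hedged sketch:

        # List what your local ssh-agent is currently holding; unexpected
        # paths such as /home/foo/.ssh/id_dsa are typically agent identities,
        # and the "path" is just the comment the key was added with.
        ssh-add -l

        # Re-run the connection offering only the one key you mean to use,
        # ignoring agent-held identities:
        ssh -v -o IdentitiesOnly=yes -i ~/.ssh/id_dsa me@server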


  • git : The remote end hung up unexpectedly - too many simultaneous users?

    - by Pritam Barhate
    I asked this first on Stack Overflow and was told I should ask it here: We have a self-hosted Git server (Gitolite) on a VPS account (CPU: 2.68GHz, RAM: 1824MB). The same VPS is also used to publish our under-development web apps for client demos (very little traffic), so the main use of the server is as a Git server only. This Git server is accessed by a team of 30-40 people for various projects. Our problem is that during the day, when 6-7 people are trying to access the server (sometimes the same repo), we frequently get this error:

        ssh: connect to host xxx.xxx.xx.xx port 22: Bad file number
        fatal: The remote end hung up unexpectedly

    After trying for 10-15 minutes it generally succeeds. During early mornings and late nights, when there are only 1-2 people, git commands work with a 100% success rate. I would also like to note that if I access other files hosted on the server over HTTP, they work fine. I found a couple of questions on Stack Overflow and other sites regarding this, but most people point towards SSH key setup or conflicts between msysgit and Cygwin's SSH. I don't think that is the problem in our case, as we see this behavior on Windows (using msysgit only) as well as on Mac machines. Also, if it were an SSH configuration issue it shouldn't work at all, but in our case it works after 10-15 minutes. I think it might be too many simultaneous connections to the same server (or the same repo) or something like that. Is there a setting or a conf file that needs to be modified to solve this problem? Please help me solve this or point me in the right direction. Thanks in advance. Pritam.
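    One hedged thing worth checking on the server side (a guess, not a confirmed diagnosis): OpenSSH's MaxStartups setting caps concurrent unauthenticated connections and drops the rest, which can look exactly like intermittent "hung up unexpectedly" failures that clear up when fewer people are connecting:

        # On the Gitolite server: see whether MaxStartups is set (the default
        # allows only around 10 pending/unauthenticated connections at once).
        grep -i '^maxstartups' /etc/ssh/sshd_config

        # A more generous (hypothetical) value, then reload sshd:
        #   MaxStartups 30:50:100
        sudo /etc/init.d/ssh reload    # or: service sshd reload, depending on distro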


  • Issue with SSH on Ubuntu - Local connection ok, remote connection - Is it me or my ISP?

    - by Benjamin
    I have an issue with a server running Ubuntu 12.04. I am trying to set up a remote connection so I can access the server at my work from out of town. I have installed the SSH server and all that stuff, and I have reassigned the default port from 22 to 3399. A local connection from any OS can connect on the 192.168... address, but in no way can I get a connection on the actual public IP address. I believe my configuration is correct, and I have attached it below; if I have done something wrong in the config, please tell me and I will change it. I honestly think that the router my ISP provided is horrible, and although the port for SSH is forwarded, it might be stopping any inbound traffic. Is there anything I can try to verify this? /var/log/auth does not show any errors when I connect via our static IP. I have included all values not commented out below (sshd_config):

        Port 3399
        ListenAddress 0.0.0.0
        Protocol 2
        HostKey /etc/ssh/ssh_host_rsa_key
        HostKey /etc/ssh/ssh_host_dsa_key
        HostKey /etc/ssh/ssh_host_ecdsa_key
        UsePrivilegeSeparation yes
        KeyRegenerationInterval 3600
        ServerKeyBits 768
        SyslogFacility AUTH
        LogLevel INFO
        LoginGraceTime 120
        PermitRootLogin yes
        StrictModes yes
        UseDNS no
        RSAAuthentication yes
        IgnoreRhosts yes
        RhostsRSAAuthentication no
        HostbasedAuthentication no
        PermitEmptyPasswords no
        ChallengeResponseAuthentication no
        PasswordAuthentication yes
        GSSAPIAuthentication no
        X11Forwarding yes
        X11DisplayOffset 10
        PrintMotd no
        PrintLastLog yes
        TCPKeepAlive yes
        AcceptEnv LANG LC_*
        Subsystem sftp /usr/lib/openssh/sftp-server
        UsePAM yes

    Am I doing this wrong? (Port-forwarding screenshot from the original post omitted.)
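    A few quick ways to verify whether the ISP router is the problem (sketch only; substitute your real static IP):

        # From a machine OUTSIDE the network (e.g. a phone tether), see if the
        # forwarded port is reachable at all:
        nc -zv your.static.ip.here 3399

        # On the server, confirm sshd really listens on 3399 on all interfaces:
        sudo netstat -tlnp | grep 3399

        # And watch whether the outside connection attempt ever arrives; if
        # nothing shows up here while the external test runs, the router (or
        # the ISP) is dropping the traffic before it reaches the server:
        sudo tcpdump -ni any tcp port 3399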


  • Does Xenapp require Windows Terminal Services (Remote Desktop) licenses?

    - by John Virgolino
    We have a XenApp 5.x server that has been running for over a year now. It does not have any purchased Terminal Services (Remote Desktop) licenses installed. It is running on a Windows Server 2008 box. I am aware that Terminal Services runs fine for about 3 months and then supposedly stops issuing licenses. On occasion, XenApp stops working and we see lots of licensing errors in the event log, although not necessarily every time. In most cases a reboot or two resolves the problem. We figured it was because of the lack of TS licenses. I spoke with Citrix and they said we had to have the licenses, which begs the question: if we have to have the licenses, how does it work the majority of the time without them? I have not received a straight answer yet, and before I tell my client to shell out more money, I need to understand the technical reasoning for how this is actually working if we are breaking the rules here. We will buy the licenses if necessary, but there has to be an explanation for this. I am hoping the community can help where Citrix apparently cannot. Thanks much!


  • Is it possible to ack nagios alerts from the terminal on a remote workstation?

    - by cat pants
    I have Nagios alerts set up to come through Jabber with an HTTP link to ack. Is it possible to write a script I can run from a terminal on a remote workstation that takes the hostname as a parameter and acks the alert?

        ./ack hostname

    The benefit, while seemingly mundane, is threefold. First, it takes HTTP load off Nagios. Second, the Nagios web pages can take 10-20 seconds to load, so I want to save time there. Third, it avoids the slower mouse + web interface + browser round trip. Ideally, I would like a script bound to a keyboard shortcut that simply acks the most recent alert. Finally, I want to take the inputs from a joystick, buttons and whatnot, and connect one to a big red button bound to the script so I can ack the most recent Nagios alert by hitting the button lol. (It would be rad too if the button had a screen on the enclosure that showed the text of the alert getting acked lol.) Make fun of me all you want, but this is actually something that would be useful to me. If I can save five seconds per alert, and I get 200 alerts per day that I need to ack, that's over 16 minutes a day. And isn't the whole point of being a sysadmin to automate what can be automated? Thanks!
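    A minimal sketch of such a script, assuming shell access to the Nagios host and write permission on the external command pipe (the pipe's location varies by install; the command_file setting in nagios.cfg has the real path, and the one below is just a common default):

        #!/bin/sh
        # ack: acknowledge a Nagios host problem from the command line.
        # Usage: ./ack hostname
        CMDFILE=/usr/local/nagios/var/rw/nagios.cmd   # adjust to your install
        HOST="$1"
        NOW=$(date +%s)
        # External command format:
        # ACKNOWLEDGE_HOST_PROBLEM;<host>;<sticky>;<notify>;<persistent>;<author>;<comment>
        printf '[%s] ACKNOWLEDGE_HOST_PROBLEM;%s;1;1;1;%s;acked from terminal\n' \
            "$NOW" "$HOST" "$(whoami)" > "$CMDFILE"

    From a remote workstation the same thing can be wrapped in ssh (e.g. ssh nagioshost '/usr/local/bin/ack somehost') and bound to a keyboard shortcut, or to the big red button.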


  • Why do compiled Haskell libraries see invalid static FFI storage?

    - by John Millikin
    I am using GHC 6.12.1, on Ubuntu 10.04. When I try to use the FFI syntax for static storage, only modules running in interpreted mode (i.e. GHCi) work properly. Compiled modules have invalid pointers and do not work. I'd like to know whether anybody can reproduce the problem, whether this is an error in my code or in GHC, and (if the latter) whether it's a known issue. Given the following three modules:

        -- A.hs
        {-# LANGUAGE ForeignFunctionInterface #-}
        module A where

        import Foreign
        import Foreign.C

        foreign import ccall "&sys_siglist" siglist_a :: Ptr CString

        -- B.hs
        {-# LANGUAGE ForeignFunctionInterface #-}
        module B where

        import Foreign
        import Foreign.C

        foreign import ccall "&sys_siglist" siglist_b :: Ptr CString

        -- Main.hs
        {-# LANGUAGE ForeignFunctionInterface #-}
        module Main where

        import Foreign
        import Foreign.C

        import A
        import B

        foreign import ccall "&sys_siglist" siglist_main :: Ptr CString

        main = do
            putStrLn $ "siglist_a    = " ++ show siglist_a
            putStrLn $ "siglist_b    = " ++ show siglist_b
            putStrLn $ "siglist_main = " ++ show siglist_main
            peekSiglist "a   " siglist_a
            peekSiglist "b   " siglist_b
            peekSiglist "main" siglist_main

        peekSiglist name siglist = do
            ptr <- peekElemOff siglist 2
            str <- maybePeek peekCString ptr
            putStrLn $ "siglist_" ++ name ++ "[2] = " ++ show str

    I would expect output something like this, where all pointer values are identical and valid:

        $ runhaskell Main.hs
        siglist_a    = 0x00007f53a948fe00
        siglist_b    = 0x00007f53a948fe00
        siglist_main = 0x00007f53a948fe00
        siglist_a   [2] = Just "Interrupt"
        siglist_b   [2] = Just "Interrupt"
        siglist_main[2] = Just "Interrupt"

    However, if I compile A.hs (with ghc -c A.hs), the output changes to:

        $ runhaskell Main.hs
        siglist_a    = 0x0000000040378918
        siglist_b    = 0x00007fe7c029ce00
        siglist_main = 0x00007fe7c029ce00
        siglist_a   [2] = Nothing
        siglist_b   [2] = Just "Interrupt"
        siglist_main[2] = Just "Interrupt"


  • How to build a windows/web application that stores data locally and syncs with a remote database?

    - by Jason
    Hello, I need to build an application that stores data locally and then synchronizes with a remote MS SQL database. I am not sure how to go about doing this. The requirements: enter data offline on a form and store it locally, then synchronize the data with a remote MS SQL database when online. There will be many users who enter data offline; the local database on each PC needs to update when online and grab the 30 most recent records for use offline. Example: each day users will enter data on their "form" while offline. The users will return to the office and need to sync with the "online" database. The next morning the users will need to sync with the online database before going offline, and they will need offline access to the 30 most recent records (which they will use for charting/graphing while offline). I am very new to building apps. I have VS 2010. I am wondering where to start. What language should I use? Is there a framework for doing this type of app? Any info or suggestions would be greatly appreciated. Thanks!


  • can JockerSoft.Media read/get video file from remote location?

    - by Lynx
    Here is the code I am using with JockerSoft.Media:

        // Path of the video and path for storing the frame
        string _videopath = "http://www.test.com/Video/test.avi"; //"C:\\test.avi";
        string _imagepath = "C:\\test.jpg";

        Bitmap bmp = FrameGrabber.GetFrameFromVideo(_videopath, 0.1d);
        bmp.Save(_imagepath, System.Drawing.Imaging.ImageFormat.Gif);

        // Save the frame directly to the specified location
        FrameGrabber.SaveFrameFromVideo(_videopath, 0.1d, _imagepath);

    It works perfectly if the video file is on my own computer, but when I try to use a video file from a remote location it does not grab the frame. All the examples are for Windows Forms apps, and I am trying to use this in a web application. Is there additional code that would let me use JockerSoft to grab a video frame from a remote location? Here is the error I get:

        Attempted to access an unloaded appdomain. (Exception from HRESULT: 0x80131014)
        Description: An unhandled exception occurred during the execution of the current web request.
        Please review the stack trace for more information about the error and where it originated in the code.
        Exception Details: System.AppDomainUnloadedException: Attempted to access an unloaded appdomain.
        (Exception from HRESULT: 0x80131014)

    New learner, please guide me.


  • Whether to put method code in a VB.Net data storage class, or put it in a separate class?

    - by Alan K
    TLDR summary: (a) Should I include (lengthy) method code in classes which may spawn multiple objects at runtime, (b) does doing so cause memory usage bloat, (c) if so should I "outsource" the code to a class that is loaded only once and have the class methods call that, or alternatively (d) does the code get loaded only once with the object definition anyway and I'm worrying about nothing? ........ I don't know whether there's a good answer to this but if there is I haven't found it yet by searching in the usual places. In my VB.Net (2010 if it matters) WinForms project I have about a dozen or so class objects in an object model. Some of these are pretty simple and do little more than act as data storage repositories. The ones further up the object model, however, have an increasing number of methods. There can be a significant number of higher level objects in use though the exact number will be runtime dependent so I can't be more precise than that. As I was writing the method code for one of the top level ones I noticed that it was starting to get quite lengthy. Memory optimisation is something of a lost art given how much memory the average PC has these days but I don't want to make my application a resource hog. So my questions for anyone who knows .Net way better than I do (of which there will be many) are: Is the code loaded into memory with each instance of the class that's created? Alternatively is it loaded only once with the definition of the class, and all derived objects just refer to that definition? (I'm not really sure how that could be possible given that, for example, event handlers can be assigned dynamically, but no harm asking.) If the answer to the first one is yes, would it be more efficient to write the code in a "utility" object which is loaded only once and called from the real class' methods? Any thoughts appreciated.


  • What is the best way to create a running integer id on the AppEngine data storage?

    - by Freed
    For various reasons, I need a unique running integer id for my entities stored on Google AppEngine. The automatically generated key sort of has this behaviour, but it doesn't start from 1 (or 0) and doesn't guarantee that the generated integer part will come from a continuous sequence. What would be the best way to efficiently implement this on AppEngine? Is there any support from the storage system? To add to the complexity, I might need to do this over entities from different entity groups, meaning I can't just get the highest id right now and save an entity with the next id in a transaction. Might memcache be the way to go..? Edit: I haven't implemented this yet, but to clarify the memcache idea: I know memcache is unreliable, but in practice it probably won't lose data often enough to hurt performance. Basically, I would have a memcache entry for the last used id, update it (somehow atomically) whenever I create a new entity, and use that id. If memcache has no value for this entry, I'd get the highest id so far by doing a query on my entities sorted by the id and update memcache (unless someone else had already done so). The only problem I can see with this right now is the atomicity of the operation as a whole if the save of my new entity is also part of a transaction. Thoughts..?


  • Is it possible to reference remote content from chrome.manifest? (XULRunner)

    - by siemaa
    Hi, I have a XULRunner application and I've been trying to reference remote content from the chrome.manifest file. It's an application for the company I work in; it runs on a number of computers (most of them used by other employees as well) as a kind of internet monitoring service. The problem I'd like to solve is this: updating the code of such an application usually requires me to manually copy the modified files to every computer it runs on (I've had no luck trying to make automatic updates work via the XULRunner platform), and this process has become very tedious. What I'd like to have is a web server where all of the XUL and JS files would be accessible, so that every application could reference them from there. That would require me only to update the code on that server, and the applications (when restarted) would automatically get the latest code. What I managed to do: I can reference JS scripts from a XUL file using http-based URLs and everything works fine (I can use local, binary components etc.), although the XUL file has to be local - that is what I'd like to change. But when I write in chrome.manifest a line like:

        content my_app http://path/to/app/files/

    and then use this line in default/preferences/pref.js:

        pref("toolkit.defaultChromeURI", "chrome://my_app/content/my_app.xul");

    it just opens a console window (to test, I manually run the application with the -console option) and no code gets executed. The files can be downloaded remotely using wget, so I guess this isn't a web server issue. The applications run on Windows machines. Is there some kind of security restriction causing this behavior, or am I doing something wrong? Is it even possible to register remote, http-based content as chrome?


  • VPN iptables Forwarding: Net-to-net

    - by Mike Holler
    I've tried to look elsewhere on this site but I couldn't find anything matching this problem. Right now I have an IPsec tunnel open between our local network and a remote network. Currently, the local box running Openswan IPsec with the tunnel open can ping the remote IPsec box and any of the other computers in the remote network. When logged into one of the remote computers, I can ping any box in our local network. That's what works; this is what doesn't: I can't ping any of the remote computers from a local machine that is not the IPsec box. Here's a diagram of our network:

        [local ipsec box] ----------\
                                     \
        [arbitrary local computer] --[local gateway/router] -- [internet] -- [remote ipsec box] -- [arbitrary remote computer]

    The local IPsec box and the arbitrary local computer have no direct contact; instead they communicate through the gateway/router. The router has been set up to forward requests from local computers for the remote subnet to the IPsec box. This works. The problem is that the IPsec box doesn't forward anything. Whenever an arbitrary local computer pings something on the remote subnet, this is the response:

        [user@localhost ~]# ping 172.16.53.12
        PING 172.16.53.12 (172.16.53.12) 56(84) bytes of data.
        From 10.31.14.16 icmp_seq=1 Destination Host Prohibited
        From 10.31.14.16 icmp_seq=2 Destination Host Prohibited
        From 10.31.14.16 icmp_seq=3 Destination Host Prohibited

    Here's the traceroute:

        [root@localhost ~]# traceroute 172.16.53.12
        traceroute to 172.16.53.12 (172.16.53.12), 30 hops max, 60 byte packets
         1  router.address.net (10.31.14.1)  0.374 ms  0.566 ms  0.651 ms
         2  10.31.14.16 (10.31.14.16)  2.068 ms  2.081 ms  2.100 ms
         3  10.31.14.16 (10.31.14.16)  2.132 ms !X  2.272 ms !X  2.312 ms !X

    That's the IP of our IPsec box it's reaching, but nothing is being forwarded. On the IPsec box I have enabled IP forwarding in /etc/sysctl.conf:

        net.ipv4.ip_forward = 1

    And I have tried to set up iptables to forward:

        *filter
        :INPUT ACCEPT [0:0]
        :FORWARD ACCEPT [0:0]
        :OUTPUT ACCEPT [759:71213]
        -A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
        -A INPUT -p icmp -j ACCEPT
        -A INPUT -i lo -j ACCEPT
        -A INPUT -p tcp -m state --state NEW -m tcp --dport 22 -j ACCEPT
        -A INPUT -p tcp -m state --state NEW -m tcp --dport 25 -j ACCEPT
        -A INPUT -p udp -m state --state NEW -m udp --dport 500 -j ACCEPT
        -A INPUT -p udp -m state --state NEW -m udp --dport 4500 -j ACCEPT
        -A INPUT -m policy --dir in --pol ipsec -j ACCEPT
        -A INPUT -p esp -j ACCEPT
        -A INPUT -j REJECT --reject-with icmp-host-prohibited
        -A FORWARD -s 10.31.14.0/24 -d 172.16.53.0/24 -j ACCEPT
        -A FORWARD -m policy --dir in --pol ipsec -j ACCEPT
        -A FORWARD -j REJECT --reject-with icmp-host-prohibited
        COMMIT

    Am I missing a rule in iptables? Is there something I forgot? Note: all the machines are running CentOS 6.x. Edit - Note 2: eth1 is the only network interface on the local IPsec box.
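    A hedged debugging sketch (run on the IPsec box): check which FORWARD rule the LAN-originated pings actually hit, and, if reply traffic from the remote subnet is falling through to the final REJECT, try a return-path rule above it. The inserted rule below is an illustration to test with, not a confirmed fix:

        # Show FORWARD rules with packet counters and rule numbers, then ping
        # from the arbitrary local computer and watch which counter increments:
        iptables -L FORWARD -v -n --line-numbers

        # Hypothetical return-path rule, inserted above the REJECT:
        iptables -I FORWARD 2 -s 172.16.53.0/24 -d 10.31.14.0/24 \
            -m state --state ESTABLISHED,RELATED -j ACCEPT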


  • pfsense multi-site VPN VOIP deployment

    - by sysconfig
    I have our main office pfSense firewall configured with these local networks:

        WAN  - internet
        LAN  - local network
        VOIP - IP phones

    I need to connect remote offices (multiple users) and single remote users (from home), using IPsec or OpenVPN to build "permanent", automatically connecting tunnels from each remote location to the main location. In the remote offices, the network will look like this:

        WAN  - internet
        LAN  - local network, multiple users
        VOIP - multiple IP phones

    In order for the IP phones to work, they have to be able to "see" the VOIP network and the VOIP server back at the main office. For single remote users (like from home) the setup will be similar, but with only one phone and one computer. So, questions:

        - What is the best way to tie the networks together: IPsec or OpenVPN?
        - Can this be set up to connect automatically?
        - Any issues or suggestions with that design/topology?
        - QoS or other issues with running the VOIP traffic over a VPN? Throughput and quality will obviously depend on the remote locations' connections to some degree.



  • .htaccess working on remote server but does not work on localhost. Getting 404 errors on localhost

    - by Afsheen Khosravian
    MY PROBLEM: When I visit localhost the site does not work. It shows some text from the site, but the server cannot locate any other files. Here is a snippet of the errors from Firebug:

        "NetworkError: 404 Not Found - localhost/css/popup.css"
        "NetworkError: 404 Not Found - localhost/css/style.css"
        "NetworkError: 404 Not Found - localhost/css/player.css"
        "NetworkError: 404 Not Found - localhost/css/ui-lightness/jquery-ui-1.8.11.custom.css"
        "NetworkError: 404 Not Found - localhost/js/jquery.js"

    It seems my server is looking for the files in the wrong places. For example, localhost/css/popup.css is actually located at localhost/app/webroot/css/popup.css. I have the site set up on a remote server with exactly the same configuration and it works perfectly fine; I am only having this issue trying to run the site on my laptop at localhost. I edited my VirtualHosts file, changing DocumentRoot and Directory to /home/user/public_html/site.com/public/app/webroot/, and this reduces some errors, but I feel that this is wrong and sort of a hack, since I didn't use these settings on the production server, which works. The last note I want to make is that the website uses dynamic URLs; I don't know if that has anything to do with it. For example, on the production server the URLs are: site.com/#hello/12321. HERE IS WHAT I AM WORKING WITH: I have a LAMP server set up on my laptop, which runs Ubuntu 11.10. I have enabled mod_rewrite:

        sudo a2enmod rewrite

    Then I edited my Virtual Hosts file:

        <VirtualHost *:80>
            ServerName localhost
            DirectoryIndex index.php
            DocumentRoot /home/user/public_html/site.com/public
            <Directory /home/user/public_html/site.com/public/>
                Options Indexes FollowSymLinks MultiViews
                AllowOverride All
                Order allow,deny
                allow from all
            </Directory>
        </VirtualHost>

    Then I restarted Apache. The website uses CakePHP. This is the directory structure of the website: "/home/user/public_html/site.com/public" contains:

        index.php  app  cake  plugins  vendors

    These are my .htaccess files:

    /home/user/public_html/site.com/public/app/.htaccess:

        <IfModule mod_rewrite.c>
            RewriteEngine on
            RewriteRule ^$ webroot/ [L]
            RewriteRule (.*) webroot/$1 [L]
        </IfModule>

    /home/user/public_html/site.com/public/app/webroot/.htaccess:

        <IfModule mod_rewrite.c>
            RewriteEngine On
            RewriteCond %{REQUEST_FILENAME} !-d
            RewriteCond %{REQUEST_FILENAME} !-f
            RewriteRule ^(.*)$ index.php?url=$1 [QSA,L]
        </IfModule>
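    A couple of quick checks on the laptop (a sketch only; paths assume the layout above): confirm mod_rewrite is really loaded in this Apache instance and see whether the assets are reachable when addressed directly, which narrows the problem down to the .htaccess rewrites versus the vhost itself:

        # Is mod_rewrite loaded?
        apache2ctl -M 2>/dev/null | grep rewrite   # or: apachectl -M

        # Compare the rewritten path with the direct one:
        curl -sI http://localhost/css/popup.css | head -n 1
        curl -sI http://localhost/app/webroot/css/popup.css | head -n 1
        # If the second URL returns 200 while the first returns 404, Apache is
        # serving the files but the webroot .htaccess rewrite isn't being
        # applied -- re-check AllowOverride for the directory actually in use.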


  • Plugin 'InnoDB' registration as a STORAGE ENGINE failed. On win 7

    - by NimChimpsky
    I have had to reinstall MySQL; however, the service is failing to start, with the above error listed as the cause in Event Viewer. One suggested solution is to delete the files prefixed with 'ib_logfile', which are InnoDB log files left over from the old installation. However, I do not have these files, and my service is still failing to start. When I say I don't have these files: I did a search using Windows Search with zero results, and they are definitely not present in my MySQL install directory. I also don't have the "Documents and Settings\Application Data" folder referenced in the link. In fact, I have only one MySQL install directory, and I know where that is. What do I need to delete or change? The instance is configured OK (I ran the configuration as administrator) and it is listed in services, but the service itself fails to start. Any tips, other than going over to PostgreSQL?


  • Will adding a SSD cache device to my ZFS storage improve performance?

    - by Sysadminicus
    The server has 4GB of RAM and my zpool is made up of 15.5k SAS drives arranged like this:

        NAME          STATE     READ WRITE CKSUM
        tank          ONLINE       0     0     0
          raidz1-0    ONLINE       0     0     0
            c0t2d0    ONLINE       0     0     0
            c0t3d0    ONLINE       0     0     0
            c0t4d0    ONLINE       0     0     0
            c0t5d0    ONLINE       0     0     0
            c0t6d0    ONLINE       0     0     0
            c0t7d0    ONLINE       0     0     0
            c0t8d0    ONLINE       0     0     0
          raidz1-1    ONLINE       0     0     0
            c0t10d0   ONLINE       0     0     0
            c0t11d0   ONLINE       0     0     0
            c0t12d0   ONLINE       0     0     0
            c0t13d0   ONLINE       0     0     0
            c0t14d0   ONLINE       0     0     0
        spares
          c0t9d0      AVAIL
          c0t1d0      AVAIL

    The primary use is as an NFS store for a couple of VMware ESXi servers. I can't do any "true" benchmarks because this is a production system (no budget for test systems), but using dd and bonnie++ I can't get more than ~40-50MB/s writes and ~70-90MB/s reads. It seems I should be able to do much better, but I'm not sure where to optimize. Based on what I've read, I think dropping in an OCZ Vertex 2 Pro SSD as my L2ARC is going to be the best bang for the buck to improve throughput. Is there something else I should be looking into to help performance? If not, how do I know how big a cache device I need? Am I safe with only a single SSD as my cache device?
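    For reference, attaching cache (L2ARC) or log (slog) devices is a live operation on an existing pool; the device names below are placeholders, and whether a log device helps depends on how much of the ESXi/NFS traffic is synchronous writes:

        # Add a single SSD as an L2ARC read cache to the existing pool:
        zpool add tank cache c0t15d0

        # Optionally add a mirrored pair of SSDs as a dedicated log device,
        # which mainly helps synchronous (NFS/ESXi) write latency:
        zpool add tank log mirror c0t16d0 c0t17d0

        # A single cache device is safe to lose: ZFS simply stops caching and
        # falls back to the pool. Log devices are the ones worth mirroring.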


  • SSH tunnel RDP through gateway server outside the network?

    - by Mike
    I need to access a PC via RDP that is behind a firewall. There's no way to connect to it directly that I know of. What I'd like to do is SSH from that remote PC to my home Ubuntu server, then connect to the remote PC using my home PC with the Ubuntu server as a gateway. I've tried SSH from remote PC to Ubuntu server, tunneling remote port 3389 to 127.0.0.1:3389, then SSH from home PC to Ubuntu server, tunneling local port 13389 to remote port 3389. At that point I try to RDP into: 127.0.0.1:13389, 127.0.0.2:13389, :3389 - no dice. I suppose I could simply set up an SSH server on my home PC and SSH from remote PC into home PC and then establish the tunnel that way, but I'd rather not go through the hassle of installing and configuring an ssh server on my home PC. I know LogMeIn would work here, but I don't want to go that route for various reasons. Any ideas? Thanks!
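    A sketch of the double tunnel, assuming the Ubuntu box is reachable from both sides and runs sshd (hostnames below are placeholders). One gotcha: the reverse tunnel's listener binds to the server's loopback by default, so the second command must forward to 127.0.0.1 on the server, and the RDP client then connects to 127.0.0.1:13389 on the home PC:

        # On the remote (work) PC, e.g. from a Cygwin shell: expose its own
        # RDP port on the Ubuntu server's loopback as port 13389.
        ssh -N -R 13389:127.0.0.1:3389 user@home-ubuntu-server

        # On the home PC: pull that listener back down to a local port.
        ssh -N -L 13389:127.0.0.1:13389 user@home-ubuntu-server

        # Then point the RDP client at 127.0.0.1:13389 on the home PC.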


  • Should I partition a 1TB Hard Disk whose primary use is media storage?

    - by Senthil
    I am going to get a 1TB hard disk. I will be storing 1080p or 720p movies, high-bitrate music and pictures on it. I use my PC 90% of the time only to play, listen to or view those. I am running out of space on my current drive, so I am getting another one. My specs are a 2.7GHz dual core, a 512MB GeForce 9400GT, 2GB DDR2 RAM and all the proper Matroska codecs/players. I guess that is enough to play 1080p movies without a glitch, given an ideal hard disk. I've read about proper partitioning giving performance improvements etc. I don't want my hard disk to be the bottleneck. Can someone tell me whether I should partition my 1TB hard disk into many drives? If I should, what is the ideal size of each partition? Smooth playing of movies is very important to me. Once I start filling up the disk there is no turning back, so I want to get it right before I start. Thanks.


  • rsync -c -i flags identical files as different

    - by Scott
    My goal: given a list of files on the local server, show any differences from the files with the same absolute path on a remote server; e.g. compare the local /etc/init.d/apache to the same file on the remote server. "Difference" for me means a different checksum. I don't care about file modification times, and I do not want to sync the files (yet); I only want to show the diffs. I have rsync 3.0.6 on both the local and remote servers, which should be able to do what I want. However, it is claiming that local and remote files, even with identical checksums, are still different. Here's the command line:

        $ rsync --dry-run -avi --checksum --files-from=/home/me/test.txt --rsync-path="cd / && rsync" / me@remote:/

    where "me" is my username, "remote" is the remote server hostname, the current working directory is '/', and test.txt contains one line reading "/etc/init.d/apache". OS: Linux 2.6.9. Running cksum on /etc/init.d/apache on both servers yields the same result; the files are the same. However, the rsync output is:

        me@remote's password:
        building file list ... done
        .d..t...... etc/
        cd+++++++++ etc/init.d/
        <f+++++++++ etc/init.d/apache

        sent 93 bytes  received 21 bytes  20.73 bytes/sec
        total size is 2374  speedup is 20.82 (DRY RUN)

    The output codes (see http://www.samba.org/ftp/rsync/rsync.html) mean that rsync thinks /etc is identical except for mod time, /etc/init.d needs to be changed, and /etc/init.d/apache will be sent to the remote server. I don't understand how, with the --checksum option and the files having identical checksums, rsync could think they're different. (I've tried with other files having identical mod times, and those files are not flagged as different.) I ran this in /, and made sure (AFAIK) that it runs remotely in /, so even relative pathnames should still be correct. I ran rsync with -avvvi for more debug info, but saw nothing remarkable. I'm wondering: is rsync still looking at file mod times, even with --checksum? Am I somehow not setting up the path(s) right? What am I doing wrong?
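    As a cross-check that is independent of rsync's itemize codes, the same "which listed files differ by checksum" question can be answered with a small loop over the file list (a sketch; it assumes md5sum exists on both ends and that ssh to the remote host works):

        # Compare checksums of each file named in test.txt on both servers.
        while read -r f; do
            local_sum=$(md5sum "$f" | awk '{print $1}')
            remote_sum=$(ssh -n me@remote md5sum "$f" | awk '{print $1}')
            if [ "$local_sum" = "$remote_sum" ]; then
                echo "same: $f"
            else
                echo "DIFF: $f"
            fi
        done < /home/me/test.txt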


  • MySQL storage: how to manage a grow to infinite ?

    - by Dario
    Hi, I'm just thinking about how famous internet services like Facebook or Twitter manage fast-growing databases. What could be a solution for this kind of problem? And what about ids? I read there is a limit in MySQL - 18446744073709551615 - for an unsigned BIGINT... how would you generate and manage a bigger value? Just a theoretical problem, but I'm curious about a possible solution. Thank you!
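    For a sense of scale, that unsigned BIGINT ceiling is far beyond what an auto-increment id will ever reach in practice; a quick back-of-the-envelope check (bc arithmetic, assuming a sustained one million inserts per second):

        # Years needed to exhaust an unsigned BIGINT at 1,000,000 inserts/second:
        echo '18446744073709551615 / (1000000 * 60 * 60 * 24 * 365)' | bc
        # => 584942  (roughly 585,000 years)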


  • Ad Agency storage/file server +backup needed (NAS or something else?)

    - by Rob
    Looking for a "this is all you need" recommendation. We're a small ad agency with both mac & pcs that access and share files from a 3 yr old Windows 2000 box (no server software). We currently have 1TB on the "server" and back it up to 2 different Seagate Free Agent Pro 1TB external drives. But we're low on space and are looking for something that's bigger, that we can still access from Mac & PC, EASY backup system, secure from viruses, firewall enabled. Not sure if a NAS will work or if we should have a real server. We don't really get on that box except to restore files, or run Norton on it. I hope I've provided enough for a general recommendation. Thanks. Rob Phx

