Search Results

Search found 103782 results on 4152 pages for 'tath am'.


  • How do I deny access to everybody but me in Windows 7?

    - by GregH
    I am trying to set up a file server on my Windows 7 Pro system at home. I set up one common "Share" folder that I have shared/published. Within the share folder I want to have individual folders for me and my wife: only I can read/write my folder, only my wife can read/write her folder, and neither of us can read the contents of the other person's folder. Then I want a "public" folder where we can both read/write the contents of the folder and any sub-folders created, but my "kids" account can only read from this folder and its sub-folders. It seems really confusing to set up something like this, and it really shouldn't be. I am really confused by the "allow", "deny", and dimmed check boxes in the security tab. It seems that if I "Deny" access to "Everyone" on my private folder, then I don't even have access to it myself. Windows security seems backwards from the rest of the world's security models: if I am in two groups and I deny access to one group but allow access to the other, Windows denies me access, because I am in one of the groups that has access disallowed. Very confusing.
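    A minimal sketch of the usual icacls approach, assuming the share lives at D:\Share and the accounts are named Greg, Wife, and Kids (all hypothetical). The trick is to break inheritance and grant only what each account needs, never using Deny, since an explicit Deny beats any Allow granted through another group:

        :: Break inheritance so only the explicit grants below apply
        icacls "D:\Share\Greg"   /inheritance:r /grant "Greg:(OI)(CI)M" "Administrators:(OI)(CI)F"
        icacls "D:\Share\Wife"   /inheritance:r /grant "Wife:(OI)(CI)M" "Administrators:(OI)(CI)F"
        :: Public: both adults get Modify, the kids' account gets read/execute only
        icacls "D:\Share\Public" /inheritance:r /grant "Greg:(OI)(CI)M" "Wife:(OI)(CI)M" "Kids:(OI)(CI)RX"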

    Read the article

  • EC2 Configuration

    - by user123683
    I am trying to create a server structure for my EC2 account. The design I have chosen consists of 2 instances running in different availability zones, an elastic load balancer, an auto-scaling group with CloudWatch monitoring configured, and a security group defining rules for access to the instances. This setup is to support an online web application written in PHP. My questions:

    1. Which is the better policy: store the MySQL DB on a separate instance, or store it on an attached EBS volume? (From what I know, auto-scaling will not replicate an attached EBS volume but will generate new instances from a chosen AMI - is this view correct?)
    2. Regarding the AMI, I plan to use a basic Amazon Linux 64-bit AMI and install Bastille (maybe OSSEC), but I am also looking to use an encrypted file system. Are there any issues with an encrypted file system and communication between the DB and the web app that I need to be aware of? Are there any comms issues using the encrypted filesystem on the instance housing the web app?
    3. I was going to launch a second instance, or attach a second volume, in the second availability zone to act as a standby for the database. I'm just looking for some suggestions about how to get the two DBs to talk - will this be a big task?
    4. Regarding security updates, is it best to create a recent snapshot and just relaunch, letting Amazon install updates on launch, or is the yum update mechanism a suitable alternative? Is it better practice to relaunch instead of installing updates that force a restart?
    5. I plan to create two AMI snapshots, one for the app server and one for the DB, each with the same security measures in place. Is this reasonable? I figure it is a better policy than including unnecessary applications in an AMI I intend to keep using.
    6. My plan for backup is to create periodic snapshots of the web app and DB instances (if I use an additional EBS volume instead of separate instances, my understanding is that the EBS volume will persist in S3 storage in the event of an unexpected termination, and I can create snapshots of the volume for backup purposes).

    Thanks in advance for suggestions and advice. I am new to EC2 and may have described unnecessary overkill, but I want to implement what can be considered a best-practice solution, so all advice is appreciated.
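    For the periodic-snapshot part of the plan, a minimal sketch using the AWS CLI from cron on a management host (the volume ID and schedule are hypothetical; snapshots of a live MySQL volume should be taken with tables flushed/locked or the instance briefly quiesced to be consistent):

        # m h dom mon dow  command: nightly EBS snapshot of the DB volume at 02:00
        0 2 * * * aws ec2 create-snapshot --volume-id vol-0abc1234 --description "nightly DB backup"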

    Read the article

  • Windows 8 & Hyper-V Can't Bridge Wifi Connection

    - by xinunix
    So I have an odd issue that I can't quite figure out. I am running Windows 8 Enterprise on a Dell 6420 laptop with a Broadcom 802.11n wireless adapter. I am connected to a home router (Netgear WNDR3700) that is connected to the internet; it is a very simple home network setup. I am trying to stand up a few VMs in Hyper-V and want the VMs to be able to access the internet over my wireless connection. I have found numerous examples of how to set this up using both external and internal virtual switches, but have yet to get it to work on my machine. I have narrowed the issue down to the fact that my host machine always loses its internet connection when I bridge my wifi connection, both when it is bridged automatically by Windows when I set up an external virtual switch bound to the wifi adapter, and when I do it manually by creating an internal virtual switch, right-clicking it and my wifi network, and selecting "Bridge Connections". In both cases, after the bridge is established my host machine can no longer connect to the internet. I am not sure where to start troubleshooting this problem. After the bridge is set up, ipconfig shows all network devices on the machine as "Media Disconnected". I do know that the wireless adapter is connected to the router, because it shows the connection as active and full-strength. The only thing I can possibly think of is that this machine also has the Cisco VPN client installed, which installs a Cisco virtual network adapter. Is it possible that this Cisco virtual adapter is causing issues when I try to bridge? I saw that some people had a similar issue with a VirtualBox virtual adapter when trying to share via Hyper-V. Any thoughts or suggestions on how to troubleshoot?
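    One workaround that is commonly suggested when bridging kills the host's wifi is to avoid the bridge entirely: give the VMs an internal switch and share the wifi connection into it. A hedged PowerShell sketch (the switch name is arbitrary):

        # Create an internal virtual switch; the host's wifi adapter is left untouched
        New-VMSwitch -Name "InternalNAT" -SwitchType Internal
        # Then enable Internet Connection Sharing on the Wi-Fi adapter, targeting the
        # new "vEthernet (InternalNAT)" adapter (ICS has no built-in cmdlet; it is set
        # in the adapter's Sharing tab), and attach the VMs to InternalNAT.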

    Read the article

  • apache-user & root access

    - by ahmedshaikhm
    I want to develop a few PHP scripts that will invoke the following commands using the exec() function:

        service network restart
        crontab -u root /xyz/abc/fjs/crontab

    The issue is that Apache executes the script as the apache user (I am on CentOS 5), and regardless of adding apache to wheel or trying every good, bad, and ugly combination of group assignment, the commands above do not run. These are my configurations.

    My /etc/sudoers:

        root ALL=(ALL) ALL
        apache ALL=(ALL) NOPASSWD: ALL
        %wheel ALL=(ALL) ALL
        %wheel ALL=(ALL) NOPASSWD: ALL

    As I've tried a couple of combinations of sudoers and httpd.conf, the recent httpd.conf looks something like this:

        User apache
        Group wheel

    My PHP script:

        exec("service network start", $a);
        print_r($a);
        exec("sudo -u root service network start", $a);
        print_r($a);

    Output:

        Array ( [0] => Bringing up loopback interface: [FAILED] [1] => Bringing up interface eth0: [FAILED] [2] => Bringing up interface eth0_1: [FAILED] [3] => Bringing up interface eth1: [FAILED] )
        Array ( [0] => Bringing up loopback interface: [FAILED] [1] => Bringing up interface eth0: [FAILED] [2] => Bringing up interface eth0_1: [FAILED] [3] => Bringing up interface eth1: [FAILED] )

    Unsurprisingly, when I invoke the network restart via ssh, using a similar user to apache, the command executes successfully. It is all about accessing such commands via the HTTP protocol. I am sure cPanel/Plesk-type software uses something like sudoers, so what I am trying to do is basically possible. But I need your help to understand which piece I am missing. Thanks a lot!
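    One CentOS 5 detail worth checking before anything else: the stock /etc/sudoers ships with "Defaults requiretty", which makes sudo refuse to run from a process that has no terminal (such as Apache), regardless of any NOPASSWD rule. A hedged sketch of the usual adjustment (edit with visudo; the command list is an example and should stay as narrow as possible):

        Defaults:apache !requiretty
        apache ALL=(root) NOPASSWD: /sbin/service network restart, /usr/bin/crontab

    and then call the commands through sudo explicitly in PHP:

        exec("sudo /sbin/service network restart", $a);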

    Read the article

  • DHCP Server on local machine

    - by EralpB
    Hello, I am trying to set up dhcp3-server on Ubuntu, but my question is more generic. If the DHCP server is in a black box and all clients are connected to it, I think I get what is going on; but when the DHCP server is installed on one of the "clients", that confuses me. When I send a DHCP packet from that client to the DHCP server, will my ethernet card read and write at the same time, or will it be handled internally without writing any data to the ethernet cable? This is the first time I am encountering these networking things, so I am a little bit confused. I also wonder: if I am in a big network with a LAN IP of, say, 192.168.0.100 and I install a DHCP server on my computer, can other computers accidentally get an IP from my DHCP server? Every computer has one ethernet card (if that matters), and every computer is connected to one router. I guess the answer is no, because the broadcast message won't reach my computer: when the router receives a DHCP discover packet it will answer itself, and it won't let other computers know about it because they don't need to; and without the router forwarding that packet, it cannot travel further. I'd be glad if someone enlightened me. Thank you very much.
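    For reference, a minimal /etc/dhcp3/dhcpd.conf sketch for the dhcp3-server package, with example addresses matching the 192.168.0.x network mentioned above:

        # Hand out leases on the local subnet only
        subnet 192.168.0.0 netmask 255.255.255.0 {
          range 192.168.0.150 192.168.0.200;
          option routers 192.168.0.1;
          option domain-name-servers 192.168.0.1;
        }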

    Read the article

  • GlusterFS on VMWare ESXi 5

    - by Dharmavir
    I want to build a network file system on top of my VMware ESXi based virtual nodes, which are running Ubuntu 12.04 LTS. I am evaluating options and found that GlusterFS (http://www.gluster.org/) could turn out to be a good choice.

    Purpose: I have about 2 dozen VM nodes with different configurations on 2 physical nodes, each with the following configuration:

        16-core Intel Xeon
        1 TB HDD
        48 GB RAM

    Now, as I said, each physical server has about 1 TB of disk (and I can add more if I want), so for now I have 2 TB of disk space available; this space is distributed among the couple of dozen VM nodes living on them. Some of them, being application and management servers, have plenty of free disk space, which I want to utilize for some heavy storage that I could not design on a single VM node individually. This way, with my storage distributed between dozens of VM nodes and 2 or more physical nodes, I also get some sort of backup. I do not mind if data gets stored redundantly, but to my knowledge it might happen that individual VM nodes will not be able to store all of the data: a complete data size of, for example, 100 GB would exceed a VM disk size of 70 GB, and the VM also has system and program files on it. I need a suggestion: will GlusterFS be the solution I am looking for, or should I go with something like Hadoop? I am not too sure. But yes, I would like to utilize the free space on each VM node, and if doing so stores the data redundantly, I am okay with that because it gives me data security.
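    If GlusterFS is chosen, a replicated volume across two of the Ubuntu VMs would look roughly like this (hostnames and brick paths are hypothetical):

        # On one node, after installing glusterfs-server on both
        gluster peer probe vm-node2
        # replica 2: every file is stored on both bricks, giving the redundancy described above
        gluster volume create shared replica 2 vm-node1:/data/brick vm-node2:/data/brick
        gluster volume start shared
        # Any node (or another VM) can then mount it with the native FUSE client
        mount -t glusterfs vm-node1:/shared /mnt/shared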

    Read the article

  • ssh timeout issue connecting to an EC2 instance on OS X

    - by mamusr
    I am new to AWS and not a networking expert, but curious to know more about it. I created a VPC with a public subnet only. Then I created an EC2 instance using an Ubuntu 14.04 64-bit PV AMI image (ami-e84d8480), as well as generating the key pair needed to connect to it through ssh. I followed Amazon's instructions to connect to an EC2 instance via ssh, which did not work. Here is my attempted input and debug log, running on OS X 10.9.4:

        user$ ssh -vvv -i key.pem [email protected]
        OpenSSH_6.2p2, OSSLShim 0.9.8r 8 Dec 2011
        debug1: Reading configuration data /etc/ssh_config
        debug1: /etc/ssh_config line 20: Applying options for *
        debug1: /etc/ssh_config line 102: Applying options for *
        debug2: ssh_connect: needpriv 0
        debug1: Connecting to xxx.xxx.xxx.xxx [xxx.xxx.xxx.xxx] port 22.
        debug1: connect to address xxx.xxx.xxx.xxx port 22: Operation timed out
        ssh: connect to host xxx.xxx.xxx.xxx port 22: Operation timed out

    To attempt to resolve the issue, I enabled the SSH port, tried usernames other than ubuntu (like ec2-user and root), and initially set an inbound ssh rule in the security group allowing only my IP address; when that did not work, I changed it to allow any IP to connect. Those actions did not fix the problem. Here are my guesses as to what I am missing:

        1. My /etc/ssh_config file may be preventing the connection from taking place.
        2. I may have missed an important networking detail when setting up the VPC.
        3. I do not have a public IP address specified for the instance; I am connecting through the private IP address.

    My question for the community: am I going about it the wrong way by connecting to the instance through the private IP address? If so, do I need to specify a public IP address for it to connect, or is there some other method?
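    If guess 3 is right and the instance has no public IP, that alone explains the timeout: a private VPC address is unreachable from a home machine. A sketch of attaching an Elastic IP with the AWS CLI (the IDs are hypothetical; the VPC also needs an internet gateway and a route to it):

        # Allocate an Elastic IP in the VPC and bind it to the instance
        aws ec2 allocate-address --domain vpc
        aws ec2 associate-address --instance-id i-0abc1234 --allocation-id eipalloc-0def5678
        # Then: ssh -i key.pem ubuntu@<elastic-ip>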

    Read the article

  • Connecting to server from remote machine

    - by Jannat Arora
    I wish to connect my machine to a server in another city. To do so I am using the following command:

        mstsc -v ip_address_of_server

    and I get:

        Remote Desktop can't connect to the remote computer for one of these reasons:
        1) Remote access to the server is not enabled.
        2) The remote computer is turned off.
        3) The remote computer is not available on the network.
        Make sure the remote computer is turned on and connected to the network, and that remote access is enabled.

    As per previous posts I need to turn off my client computer's firewall, which I have, but it still gives me the same message. Can someone please help me out as to how I may resolve this? I am really new to networking. Also, when I ping the server:

        ping ip_address_of_server

    I get the following response:

        Reply from ip_address_of_server: destination host unreachable

    I also tried rdesktop on Ubuntu, and it was still not able to connect. I know there are other people who are able to connect their machines to the server remotely, so I guess it's not working for me only. Also, when I accessed the same machine through the LAN, I was able to do so.
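    Two quick client-side checks that separate "no route to the host" from "RDP blocked" (commands as run from a Windows command prompt):

        :: "destination host unreachable" usually points at routing, not RDP itself
        tracert ip_address_of_server
        :: if the route is fine, test whether the RDP port answers at all
        telnet ip_address_of_server 3389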

    Read the article

  • Windows Server 2008 R2 DNS - One IP, multiple servers

    - by Blu Dragon
    I need opinions and examples on how best to accomplish the setup I am looking for. I have a public-facing AD domain server with one public IP address. I have set up an external zone for example.com, and I successfully have my own name servers pointing to it at ns0.example.com and ns1.example.com. I also have an internal zone for my private network at home.example.com. I am behind a router, with the domain server in the DMZ. I want dev.example.com to be accessible from the outside world over https and to point to internal IP address 192.168.1.78. Likewise, I want www.example.com to be accessible from the outside world and point to internal IP address 192.168.1.79. Both the dev and www servers are CentOS 5.6 VMs running inside Hyper-V on the domain server (a bad idea, I know, but I am limited on hardware at the moment). What is the best way to achieve this? From what I have read and researched on Google, I may need to set up a reverse proxy, but I am not sure how well that will work with SSL.
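    A name-based reverse proxy is indeed the standard answer when one public IP must front several internal hosts. Purely as an illustration (nginx-style configuration; any reverse proxy that can terminate SSL would do), using the internal addresses above:

        server {
            listen 443 ssl;
            server_name dev.example.com;
            # the certificate for dev.example.com is terminated here
            location / { proxy_pass https://192.168.1.78; }
        }
        server {
            listen 80;
            server_name www.example.com;
            location / { proxy_pass http://192.168.1.79; }
        }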

    Read the article

  • Setting up SSL for phpMyAdmin

    - by Ubuntu User
    I would like to run phpMyAdmin using my SSL certificate. I read that if I placed the following within the file /etc/phpmyadmin/config.inc.php, it would force it to use SSL, and now it does:

        $cfg['ForceSSL'] = true;

    However, when I did this I started getting an error stating "cannot connect to server". I did a port scan, and port 443 is closed, for one; but I can connect via https:// to my secure web-based email admin panel, which tells me this may not be the issue. Second, I have an SSL certificate I purchased, but I am not sure how to apply it: mydomain.com.crt is sitting on my desktop, so how should I be utilizing it? I remember creating a self-signed cert for my web email access. Do I have to do this for phpMyAdmin as well? At least that way, since I am the only one who will ever access the DB, it will never expire. Also, phpMyAdmin used to come up at http://mydomain.com/phpmyadmin/, and of course I am now trying to get to https://mydomain.com/phpmyadmin/; however, I do not have any pages on my website that require https:// currently. In the future I may add this, but for now I only want to access phpMyAdmin via SSL. I can use my own self-signed cert, but if this causes problems with future ecommerce apps under mydomain.com, I would rather use the SSL cert I already purchased. Thank you!
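    The closed port 443 is the real clue: ForceSSL only redirects phpMyAdmin to https, it does not make the web server listen for it. A sketch of the missing piece for Apache (paths are hypothetical; a purchased .crt normally comes with a matching private key, and often a CA chain file):

        <VirtualHost *:443>
            ServerName mydomain.com
            SSLEngine on
            SSLCertificateFile    /etc/ssl/certs/mydomain.com.crt
            SSLCertificateKeyFile /etc/ssl/private/mydomain.com.key
        </VirtualHost>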

    Read the article

  • Connection established to google DNS, can't resolve any hosts

    - by Tar
    I am connected to Google DNS but am unable to resolve any hostnames. When I try to ping sites like google.com or yahoo.com, I get 'ping: unknown host'. I am able to ping localhost and hostname.domain.com, but not domain.com. I can't ping my nameservers. I can ping all hosts by IP address, and that works. The contents of my /etc/resolv.conf:

        nameserver 8.8.8.8
        nameserver 8.8.4.4

    Anyone know what the problem could be? A tcpdump capture:

        23:30:04.304955 IP my_server.44457 > 8.8.8.8.domain: 28349+ A? google.com. (28)
        23:30:06.137985 IP 112.100.0.78.19781 > my_server.domain: 18717 [1au] A? www.my_domain.com. (46)
        23:30:06.138286 IP my_server.domain > 112.100.0.78.19781: 18717*- 2/0/1 CNAME my_domain.com., A my_server (76)
        23:30:06.686582 IP 112.100.0.74.19181 > my_server.domain: 65046 [1au] A? my_domain.com. (42)
        23:30:06.686811 IP my_server.domain > 112.100.0.74.19181: 65046*- 1/0/1 A my_server (58)
        23:30:07.043764 IP my_server.50465 > 4.2.2.1.domain: 13865+ PTR? 142.254.22.67.in-addr.arpa. (44)
        23:30:09.065904 IP my_server.45242 > 8.8.4.4.domain: 29011+ PTR? 123.72.117.130.in-addr.arpa. (45)
        23:30:09.310021 IP my_server.45440 > 8.8.4.4.domain: 28349+ A? google.com. (28)
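    The capture shows A queries leaving for 8.8.8.8 with no replies coming back (the same query ID, 28349, is retried five seconds later), so testing the resolver path directly and checking for a firewall that drops inbound UDP 53 replies are reasonable next steps; for example:

        # Query Google DNS directly, bypassing the system resolver
        dig @8.8.8.8 google.com
        # Look for rules that could drop DNS traffic or its replies
        iptables -L -n | grep -iE 'udp|53'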

    Read the article

  • How to verify a self-signed certificate from another server using openssl?

    - by ntsue
    I am new to openssl and I am having some trouble verifying, from a client machine, an FTP server using SSL with a self-signed certificate. I generated the .cer file by going to my server in IIS and exporting the certificate without the private key. I believe this is all that I should need on the client side, right? I use the following command to verify the certificate:

        openssl verify ftp.cer

    and the error that I get back is:

        error 20 at 0 depth lookup:unable to get local issuer certificate

    I tried this as well:

        openssl verify -CAfile ftp.cer ftp.cer

    but received the same error. From what I understand about SSL, this is happening because I have no chain of trust that connects to this server. By default, openssl did not install any trusted CAs, and this is fine; I would just like to tell it to trust this server. I tried various tutorials telling me how to add a certificate authority, including this one here, but the instructions are for Linux and include adding a symlink, and I am trying to do this on Windows. If anyone could provide any guidance on how to do this, or enlighten me if I am not understanding something correctly, I would greatly appreciate it. Thanks!
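    One thing worth ruling out first: a .cer exported from IIS is often DER-encoded, while openssl expects PEM by default. Converting it and checking whether the certificate is genuinely self-signed (issuer equals subject) looks like this:

        # Convert DER to PEM (fails immediately if it is already PEM)
        openssl x509 -inform der -in ftp.cer -out ftp.pem
        # For a true self-signed cert, subject and issuer should match
        openssl x509 -in ftp.pem -noout -subject -issuer
        # Verifying a self-signed cert against itself should then succeed
        openssl verify -CAfile ftp.pem ftp.pem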

    Read the article

  • 403 Forbidden when Deploying asp.net 4.0 site to IIS 7

    - by Jordan
    So I have an EC2 instance running at the URL NoWeatherSurprises.com. I have the DNS pointing there, and I set up a new site in IIS 7 and pointed it to a folder. I used Visual Studio Web Developer 2010 Express to publish to this folder, and it now has the binaries and such. However, if I go to NoWeatherSurprises.com I get the "Welcome to IIS 7" screen, where I'd expect to reach my application. If I navigate to http://noweathersurprises.com/weather/ (weather was the folder I published to under wwwroot), I get a 403 Forbidden. I have no idea why; I am guessing that it is trying to do a directory listing or something instead of launching my MVC application. So, 2 problems in summary:

        1. The domain is not pointing to the folder directly; I need to add /weather.
        2. I am getting a 403 Forbidden instead of the results of my home controller with the index action.

    I am new to IIS 7. I had been using IIS 6 and had a lot less trouble setting it up, but I suspect that's my own fault and I am just missing something. Thanks in advance for any help.
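    One likely cause of both symptoms: the /weather folder was never converted to an IIS application, so ASP.NET MVC routing never runs and IIS falls back to a (forbidden) directory listing. A hedged sketch with appcmd (the site name and physical path are assumptions):

        :: Mark the published folder as an application under the site
        %windir%\system32\inetsrv\appcmd add app /site.name:"NoWeatherSurprises" /path:/weather /physicalPath:C:\inetpub\wwwroot\weather
        :: Or point the site root straight at the publish folder so /weather is not needed
        %windir%\system32\inetsrv\appcmd set vdir "NoWeatherSurprises/" /physicalPath:C:\inetpub\wwwroot\weather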

    Read the article

  • Zero sized tar.gz file found inside a tar.gz file

    - by PavanM
    My current directory contains a single file, like this:

        $ ls -l
        -rw-r--r-- 1 root staff 8 May 28 09:10 pavan

    Now I want to tar and gzip this file:

        $ tar -cvf - * 2>/dev/null | gzip -vf9 > pavan.tar.gz 2>/dev/null

    (I am aware I am creating the zipped file in the same directory as the original file.) When I run the above tar/gzip commands around 20 times, a few times I observe that the final tarred and zipped pavan.tar.gz contains a ZERO-sized pavan.tar.gz file. I am not sure where this zero-sized file inside the archive is coming from.

    Note: I am NOT running the tar/gzip commands on an already existing tar.gz file; I always make sure the directory has only one file before running them. On googling, as described here, I suspected that the tar.gz being created was also becoming part of the file being archived. But in my case, gzip is the one creating the final file, and by the time gzip runs, tar should be done tarring. This is happening on AIX, but I've used the Linux tag too to draw more attention, as I guess the problem is platform independent.
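    This looks like a race between the shell creating pavan.tar.gz for gzip's redirection and the shell expanding tar's *: the two halves of the pipeline start concurrently, so in a few runs out of twenty the output file already exists (at size zero) when the glob is expanded, and tar dutifully archives it. Two hedged ways to take the race out of play (note that --exclude needs GNU tar; stock AIX tar lacks it, so writing the archive elsewhere is the portable fix):

        # Exclude the output explicitly (GNU tar only)...
        tar -cvf - --exclude='pavan.tar.gz' . 2>/dev/null | gzip -9 > pavan.tar.gz
        # ...or write the archive outside the directory being archived
        tar -cvf - * 2>/dev/null | gzip -9 > /tmp/pavan.tar.gz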

    Read the article

  • Cannot connect to SQL Server running on an Amazon EC2 machine

    - by njj56
    I am using SQL Server Management Studio 2008 running on an Amazon EC2 machine, and I am unable to connect to the database from my ASP.NET application. The EC2 instance has been set to accept connections over the SQL port, and I am able to remote into the machine as well as view websites hosted on the server. Listed below is the part of the connection string relating to this instance. When the program runs and this connection string is used, it returns TCP error 0 (no response); it just times out.

        <add name="ProjectServer" connectionString="Data Source=*IP ADDRESS HERE*,1433;Initial Catalog=*Catalog Name*;User ID=IP-0A6ED514\Administrator;"/>

    I removed the IP and the catalog name for the example, but I am sure they are correct. The only thing I can think of that may cause an error is the difference between the user ID and the server name: when I log into the SQL Server manager on the EC2 instance, the server name is ip-0A6ED514\sharepoint but the user name is ip-0A6ED514\administrator. A password is not used. I'm not sure if I need to leave a blank string for the password, or whether the difference between the server name and the login user ID makes a difference. Any help is appreciated, thank you. Update: when this connection string is used without the port, I get TCP provider error 40; when the port is in there, I get error 0. Edit: the SQL server is using Windows authentication; does this make a difference? Usually I always use SQL Server authentication.
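    The Windows authentication is a likely sticking point: IP-0A6ED514\Administrator is a machine-local account on the EC2 box, and a remote ASP.NET process cannot authenticate as it without a shared domain. A hedged sketch of a SQL-authentication connection string instead (the login name is hypothetical and must first be created in Management Studio, with mixed-mode authentication enabled on the server):

        <add name="ProjectServer"
             connectionString="Data Source=*IP ADDRESS HERE*,1433;Initial Catalog=*Catalog Name*;User ID=webapp_login;Password=*password*;"/>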

    Read the article

  • Laser vs Deskjet Printer

    - by Mike
    I am about to buy a color printer. I had a B&W LaserJet printer in the past, but since then I have used DeskJets for decades. I need a printer that can deliver high quality like these photo DeskJet printers, but I am tired of paying for ink that costs $9,000 per gallon (1 gallon = 3.785 liters = 300 cartridges = $9,000). So I was pondering buying a color laser, but I am not sure these printers can deliver the same quality, or whether they are worth the investment in terms of toner consumption. I remember that my old LaserJet printer was able to print 1100 pages per toner cartridge, while the DeskJet printers I have can print 500 pages per cartridge. Price for price, 2 DeskJet cartridges cost more or less the same as one toner cartridge, and in theory print almost the same number of pages. I am not sure whether this holds for color lasers. What can you guys tell me about quality, toner cost, and cost per page using laser or DeskJet? Is the change worth it? Remember that a DeskJet printer costs $50 and a laser printer costs $200. Thanks.

    Read the article

  • WBAdmin script to back up System State into another partition failed?

    - by Albert Widjaja
    Hi everyone, I have tried numerous times, but the script still fails like the following at the cmd prompt. Is there any way to create an automated script to back up the system state on my Windows Server 2008, using WBAdmin, into a directory on the D drive? Any help would be greatly appreciated.

        C:\Users\Administrator>wbadmin START SYSTEMSTATEBACKUP -backuptarget:D:\Admin\Backup
        wbadmin 1.0 - Backup command-line tool
        (C) Copyright 2004 Microsoft Corp.
        Starting System State Backup [12/02/2011 4:22 AM]
        Retrieving volume information...
        This would backup the system state from volume(s) SYS(C:) to D:\Admin\Backup.
        Do you want to start the backup operation?
        [Y] Yes [N] No Y
        ERROR - Specified backup location could not be found.

        C:\Users\Administrator>wbadmin START SYSTEMSTATEBACKUP -backuptarget:D:\Admin
        wbadmin 1.0 - Backup command-line tool
        (C) Copyright 2004 Microsoft Corp.
        Starting System State Backup [12/02/2011 4:22 AM]
        Retrieving volume information...
        This would backup the system state from volume(s) SYS(C:) to D:\Admin.
        Do you want to start the backup operation?
        [Y] Yes [N] No Y
        ERROR - Specified backup location could not be found.

        C:\Users\Administrator>wbadmin START SYSTEMSTATEBACKUP -backuptarget:"D:\Admin\Backup" -quiet
        wbadmin 1.0 - Backup command-line tool
        (C) Copyright 2004 Microsoft Corp.
        Starting System State Backup [12/02/2011 4:23 AM]
        Retrieving volume information...
        This would backup the system state from volume(s) SYS(C:) to D:\Admin\Backup.
        ERROR - Specified backup location could not be found.

        C:\Users\Administrator>wbadmin START SYSTEMSTATEBACKUP -backuptarget:D:\ -quiet
        wbadmin 1.0 - Backup command-line tool
        (C) Copyright 2004 Microsoft Corp.
        Starting System State Backup [12/02/2011 4:23 AM]
        Retrieving volume information...
        This would backup the system state from volume(s) SYS(C:) to D:.
        ERROR - Specified backup location could not be found.
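    For what it is worth, two things often bite here: on Server 2008, wbadmin only accepts a volume (not a folder) as the system state backup target, and backing up to certain volumes requires a documented registry override. A hedged sketch of both pieces plus scheduling (the task name and time are examples):

        :: Allow system state backup to arbitrary target volumes (per Microsoft KB944530)
        reg add HKLM\SYSTEM\CurrentControlSet\Services\wbengine\SystemStateBackup ^
            /v AllowSSBToAnyVolume /t REG_DWORD /d 1 /f
        :: Unattended nightly run; note the target is a volume root, not a folder
        schtasks /create /tn "NightlySystemState" /sc daily /st 04:00 ^
            /tr "wbadmin start systemstatebackup -backuptarget:D: -quiet"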

    Read the article

  • How to rewrite these URLs?

    - by Evik James
    I am brand new to URL rewriting. I am using an Apache rewriting module on IIS 7.5 (I think); either way, I am able to do rewrites successfully but am having trouble with a few key things. I want this pretty URL to rewrite to this ugly URL:

        mydomain.com/bike/1234 (pretty)
        mydomain.com/index.cfm?Section=Bike&BikeID=1234 (ugly)

    This works great with this rule:

        RewriteRule ^bike/([0-9]+)$ /index.cfm?Section=Bike&BikeID=$1

    Issue #1: I want to be able to add a description and have it go to exactly the same place, so that the useful info is completely ignored by my application:

        mydomain.com/bike/1234/a-really-great-bike (pretty and useful)
        mydomain.com/index.cfm?Section=Bike&BikeID=1234

    Issue #2: I need to be able to add a second or third parameter and value to the URL to get extra info for the DB, like this:

        mydomain.com/bike/1234/5678
        mydomain.com/index.cfm?Section=Bike&BikeID=1234&FeatureID=5678

    This works using this rule:

        RewriteRule ^bike/([0-9]+)/([0-9]+)$ /index.cfm?Section=Bike&BikeID=$1&FeatureID=$2

    Again, I need to add some extra info, like in the first example:

        mydomain.com/bike/1234/5678/a-really-great-bike (pretty and useful)
        mydomain.com/index.cfm?Section=Bike&BikeID=1234&FeatureID=5678

    So, how can I combine these rules so that I can have one, two, or three parameters and any of the "useful words" are completely ignored?
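    For what it is worth, a single rule can cover one, two, or three segments if the numeric parts are captured and the trailing slug is left optional and uncaptured; a hedged, untested sketch:

        RewriteRule ^bike/([0-9]+)(?:/([0-9]+))?(?:/[^/]+)?/?$ /index.cfm?Section=Bike&BikeID=$1&FeatureID=$2 [L]

    When no FeatureID segment is present, $2 expands to an empty string, so index.cfm has to tolerate FeatureID= with no value.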

    Read the article

  • Apache start failing after apache config modifications, showing syntax error, cannot load php5apache2_2.dll into server

    - by Sandeepan Nath
    I am stuck again with Apache setup, guys. I am working on a Windows 7 system. I copied the working PHP 5 installation directory from teammates, and copied the necessary .dll files from inside the php5 installation folder (as they were in my teammates' working setup) to my windows/system32/. The Apache server started successfully with the default Apache config file, and I was able to access localhost in the browser, but PHP code was not being parsed. I noticed the config file had no line like the following:

        # PHP5 module
        LoadModule php5_module D:/php5/php5apache2_2.dll

    If I add this line, Apache fails to start. Running the configuration test gives the following error:

        httpd.exe: Syntax error on line 60 of C:/Program Files (x86)/Apache Software Foundation/Apache2.2/conf/httpd.conf: Cannot load D:/php5/php5apache2_2.dll into server: The specified procedure could not be found.

    But the dll file is there in the specified location, and I have given the current system user all permissions on the php5 installation directory. The same line also appears in the Apache error log, though I am not sure exactly when entries are written to the log file. I am confused: are log entries not made while I have the log file open for reading? lol... I could not observe a pattern in when entries are made; I saw some log entries being made and some not. Oh, why is Apache setup always such a headache?
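    For reference, a typical trio of httpd.conf lines for PHP 5 on Apache 2.2 (paths are assumptions). A frequent culprit for "The specified procedure could not be found" is a stale or mismatched php5ts.dll being loaded from windows/system32 instead of the PHP directory, since system32 is searched first; the usual advice is to keep the dlls in D:/php5 and add that directory to PATH rather than copying them into system32.

        # PHP5 module (dll path and ini dir are assumptions)
        LoadModule php5_module "D:/php5/php5apache2_2.dll"
        AddHandler application/x-httpd-php .php
        PHPIniDir "D:/php5"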

    Read the article

  • WD My Cloud 4TB is Super Slow

    - by Saduser
    I am using a WD My Cloud 4TB, and I have read other posts about users complaining about getting only 10MB per second. My problem is that I am getting about 100KB/s transferring a 125GB iPhoto library; the estimated time to transfer this file is 11 days, which is unacceptable. On the back of the WD Cloud I am getting a solid green light, and from what I read this means I am on a gigabit network. I have a MacBook Pro running OS X Mavericks. I have tried 4 different cables and turned off my router's firewall; I don't run anti-virus or any firewall on the Mac. Other things I have checked: a direct connection to both the router and the WD Cloud device, and wireless, which is even slower. Previously I was able to transfer a 55GB iPhoto library in 14 hours, which I felt was acceptable; I figured it would take approximately double the time to transfer the 125GB file, but 11 days is ridiculous. Any other suggestions? Is there anything else I can check (and how do I check it) to find the bottleneck?
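    To separate a raw network problem from a protocol or Finder problem, a throughput test against the mounted share is worth a try (the share path is an assumption):

        # Write 1 GB of zeros to the share and time it (BSD/macOS dd syntax)
        dd if=/dev/zero of=/Volumes/Public/testfile bs=1m count=1024
        # Compare with a plain file copy while watching the transfer rate
        rsync --progress ~/Pictures/some-large-file /Volumes/Public/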

    Read the article

  • SSH client and Command Prompt replacements with Windows look-and-feel

    - by Oddthinking
    The Problem

    I've worked exclusively in Windows. I can handle that. I've worked exclusively in DOS (a long time ago!). I can handle that. I've worked exclusively in Unix. I can handle that. Right now, I am developing a command-line (Python) application on a Windows machine, testing it in a DOS box (i.e. Windows' Command Prompt), and then deploying it to Linux and running it with PuTTY. I cannot handle that. My productivity drops dramatically when CTRL-C copies in one window (Windows) and kills the process in another (DOS, Linux). My productivity drops dramatically when Enter copies the selection in one window (DOS), deletes the selection in another (Windows), and runs the current half-edited command in the third (PuTTY). My productivity drops dramatically when I cannot hit Undo, Home or End.

    The Solution I am Seeking

    An SSH/Bash command-line client that runs on Windows and, to the extent possible, makes all the standard Windows shortcuts (Cut, Copy, Paste, Undo, Home, End, Insert, Shift-Arrows, etc.) work on a bash command line. Bonus points if it puts the cursor between letters, rather than on them. Plus, an equivalent DOS command-line drop-in that runs on Windows and provides the same interface. I appreciate there may need to be special buttons to actually pass CTRL codes (like CTRL-C) through in the cases I need them. I suspect the SSH client will need to be specific to a shell (so it knows when it is at the command prompt and when it is inside a running app). I know there are many SSH clients, but I am looking for advice for a particular need. PuTTY feels like an escape route for Unix programmers stuck on Windows; I am the opposite. Can anyone recommend one (or maybe a combination of an SSH client and a Command-Line replacement)?

    Read the article

  • Linux Sound Drivers - Recompiling the kernel with the ehci-hcd module

    - by jhome
    I just installed Linux Mint and I am trying to get my USB sound card to work. I am completely new to Linux, so I have only been learning about the important parts of the OS this afternoon, and no one I know has ever compiled a kernel, so I can't ask anyone. I am following the instructions on this site: http://wiki.ubuntuusers.de/Benutzer/BigMc#Final-setup-of-US-122L-US-144

        "Recompile your kernel with ehci-hcd as a module: Instructions for Ubuntu (German). Change "Device Drivers - USB support - EHCI HCD (USB 2.0) support" to "M" when configuring. Make sure you also install the kernel headers."

    I don't know how to do any of these processes. Someone mentioned a package manager. Is recompiling basically, in layman's terms, reinstallation? Where do I go to change the device drivers? Cheers.

    Edit: I am attempting to use my Tascam US-144 on Linux Mint. Apparently, to work properly the unit has to regress to the functionality of a former unit (the US-122), and according to the instructions the USB port has to behave as USB 1.0 rather than USB 2.0. I've tried using the ndiswrapper program for wireless drivers, but with no success.
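    The broad shape of that recipe on an Ubuntu-based system, as a hedged sketch (package names from that era; details vary by release):

        # Build prerequisites plus the kernel source tree
        sudo apt-get install build-essential libncurses5-dev linux-source
        cd /usr/src && tar xjf linux-source-*.tar.bz2 && cd linux-source-*
        # "Changing device drivers" happens here: an ncurses menu opens;
        # go to Device Drivers -> USB support -> EHCI HCD (USB 2.0) support and press M
        make menuconfig
        make && sudo make modules_install && sudo make install

    Recompiling is not reinstallation: it rebuilds the kernel itself from source with different options, while the rest of the installed system stays as it is.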

    Read the article

  • fedora apache/nginx pylons

    - by microchasm
    I'm trying to wrap my head around Pylons and how it works. So far it's been confusing. I'm using EC2 with Fedora 8. Everything is working so far; i.e., I have Pylons/Python et al. installed, and after creating a test app and running paster serve I can access the default page via my domain name. As the Pylons docs explain, and as I understand, the built-in paster serve server is not suited for a production environment. What I am not clear on is what to do next. It seems like nginx is a good option, but I am more familiar with Apache (like .0002%). I plan on having virtual hosts (which nginx says it can accommodate). However, I am totally unclear on how the big picture is supposed to work:

        In order to serve an app, does paster serve need to be running?
        Does nginx/Apache then basically just act as a proxy, shuttling connections to the paster server?
        How do I start it so it doesn't terminate after I close the ssh connection?
        If running multiple apps, what do I set as the host/port in development.ini to differentiate the apps? Or, if this is not the right way, how do I differentiate between apps?
        I am more familiar with MySQL, but willing to negotiate PostgreSQL if it's a better fit. Is it?
        Is virtualenv a prerequisite to running multiple apps on the same machine?

    Thanks in advance for any tips.
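    The usual big picture is exactly the proxy arrangement guessed at above: each app's paster (or another WSGI server) listens on its own localhost port, and nginx or Apache maps each virtual host to the matching port. A sketch for one app (names and ports are assumptions):

        server {
            listen 80;
            server_name app1.example.com;
            # development.ini for this app sets host = 127.0.0.1, port = 5000
            location / { proxy_pass http://127.0.0.1:5000; }
        }

    Keeping it alive past the ssh session is typically "paster serve --daemon development.ini", or running it under screen or a process supervisor.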

    Read the article

  • Java applet class never found

    - by Andrew
    I am having a problem with the Java plugins of all three major browsers: Chromium, Firefox, and Internet Explorer. The issue is that applets always fail to load and yield errors, no matter what website or what applet I am trying to use. I cannot see any of the example applets on Oracle's website; I cannot launch the Minecraft browser applet; nor can I launch any other applet across the web, using any browser. I always receive a "class not found: AppletClassHere.class" error. I know that my browser plugin must be working correctly, because I can launch the Cisco NAC applet each time I connect to my network, and I can launch my own simple applets offline. I am connecting to the internet through a VPN, in case that is important. It also seems like a lot of other people I know are having this issue, and re-installing the plugin does not appear to fix it. I am asking this question to find out the cause of this problem and any possible solutions. Thank you.

    Read the article

  • online backup plan for a home office with servers

    - by TiernanO
    So, I am in the process of tweaking my spending and I need to change my backup plan. I am currently using a mix of JungleDisk and Zmanda ZCB to back up files on my MacBook Pro, my main Windows Server workstation, a dedicated Windows Server in a datacenter, and various other machines and file sources. The problem is the cost: this month it has cost me about $90 to back up a little over 500GB. This amount of data will increase over time too, since I am backing up photos (24MB RAW images plus 4-8MB JPEGs), videos (various cameras shooting 720p and 1080p), music, movies, TV shows, and apps from iTunes (though with iTunes in the cloud, these might not need to be backed up again), and source code. I have looked at the likes of Mozy, CrashPlan+ and Pro, Backblaze, and Carbonite, but each has its problems:

        Mozy seems overly expensive per gig at 50c.
        CrashPlan won't sell to me since I am outside the US (they hide it on their site... hidden in the FAQ section!).
        Backblaze doesn't support Windows Server.
        Carbonite's business pricing is $600 up front for 500GB of storage; for $229, they will not back up Windows Servers.

    So, other than those, JungleDisk (at 15c per gig), or Zmanda (also at 15c per gig), what other options are there? What are other people using?

    Read the article
