Search Results

Search found 3203 results on 129 pages for 'transfer'.

Page 75/129

  • Looking for a Mini-ITX build capable of running Photoshop + Illustrator (CS4 or higher)

    - by drozzy
    Ok, so I want to build a small PC for my girlfriend that she can take with her instead of the crappy laptop she has. Overall I think the complete system should not weigh more than 10 lbs. The requirements are to run at least 4 applications simultaneously and to switch between them with no problems: Photoshop, Illustrator, a word processor and a browser. It should also be able to handle 1920x1200 resolution. I am currently looking at an LGA775 socket board, as I can just transfer my desktop CPU (a Q6600) to it. Currently deciding between the DQ45EK and the DG41TX, but any other suggestions are welcome. So I am thinking something along the lines of: a MINI-BOX M350 case with a 90 watt PSU, the Q6600 (my desktop CPU), 2x2GB Kingston RAM (or similar), and for video either an external card or the built-in G45 on the DQ45EK, if that will do. My primary concern is whether the 90 watt PSU is sufficient for the Q6600? Thanks

    Read the article

  • Need to set mailx variable to specify the From address

    - by user256817
    Running Oracle Linux 5.8 (which is just re-branded Red Hat EL 5.8), I need to change the From address. But we have scripts that use mailx which cannot be rewritten to use any extra flags, so I'd like to use internal variables instead, which the linux.die.net manpage on mailx describes as an alternative to the -r flag:

        -r address  Sets the From address. Overrides any from variable specified in environment or
                    startup files. Tilde escapes are disabled. The -r address options are passed to
                    the mail transfer agent unless SMTP is used. This option exists for compatibility
                    only; it is recommended to set the from variable directly instead.

    (Source: http://linux.die.net/man/1/mailx) How can we use these mailx variables? I tried adding this to /root/.mailrc, no go: set [email protected] I also added that to /etc/mail.rc, with no luck either. So I am turning to you, SuperUsers...
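
    A hedged diagnostic first step: the linux.die.net page documents Heirloom mailx, while RHEL/OEL 5 ships an older BSD-derived mailx by default that silently ignores the from variable, which would explain why the .mailrc entries had no effect. Package names below are assumptions worth checking:

        # which implementation is actually installed? (nail is the Heirloom mailx package
        # name on many EL5 systems; availability on OEL 5.8 is an assumption)
        rpm -q mailx nail
        mailx -V 2>/dev/null || echo "old BSD mailx: no -V flag, no 'from' variable support"

    If the Heirloom implementation is present (or can be installed), the set from line already tried in ~/.mailrc should start working; with the stock old mailx the variable simply is not supported, and masquerading at the MTA level (e.g. Sendmail/Postfix generic maps) is the usual fallback.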

    Read the article

  • Very high Network Out on an EC2 instance

    - by Jatin
    I launched an ubuntu-14.04-64bit instance in Amazon EC2 two days ago, started Tomcat 7.0.54 on it, and deployed my application WAR files. It has no other software installed other than Tomcat and the defaults. In the past 2 days it shows 858 GB of data transfer (Network Out) from that instance; I have attached a graph of the Amazon CloudWatch "Network Out" metric. My application does not do any data download/upload. It's a Java Spring application and the front end is HTML and JavaScript. My application traffic was very low (fewer than 20 hits) in those 2 days. Is there a way to find out why these data transfers happened, and also to find out what data has been transferred? As you can see in the graph, Network Out was around 20 GB per minute. Some more info: Network In was negligible, CPU utilization was very high, and everything else was low.
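
    A few hedged checks worth running on the instance to see which process owns the outbound traffic (huge Network Out plus high CPU on a box that should be idle often points at a compromised or abused service; package and interface names below assume Ubuntu 14.04):

        sudo apt-get install -y nethogs iftop
        sudo nethogs eth0                  # per-process bandwidth: is the Tomcat JVM the sender?
        sudo iftop -i eth0 -P              # per-connection rates and the remote endpoints involved
        sudo netstat -tnp | grep java      # established connections owned by the JVM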

    Read the article

  • Help building a PC that can image a dozen hard drives simultaneously

    - by Bigbio2002
    Not sure if this belongs on here or SuperUser, but here goes... I'm trying to figure out how to build a mass hard drive imaging PC out of COTS parts. A dedicated imaging device can do 10 drives at a time, but costs several thousand dollars. So far, I'm thinking of using several 3-port PCI-E FireWire cards and some kind of FireWire-to-IDE adapter to connect the drives themselves. The "software" would consist of scripting diskpart, or some other imaging utility. The problem is that I can't seem to find any such adapter. I could use standard external hard drive bays, but then I'd have a dozen power cables to plug in: ugly, messy, and inefficient. I picked FireWire over USB not only for better transfer speeds, but also because FireWire can deliver power over the bus (and could theoretically power a hard drive). Does anyone have any input on this?
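
    If the imaging side ends up on a Linux boot environment instead of scripted diskpart, a minimal sketch of driving several targets in parallel looks like the loop below (device names and the image path are placeholders; swap if/of to capture rather than deploy):

        # write one master image to several attached drives at once
        for dev in /dev/sdb /dev/sdc /dev/sdd; do
            dd if=/path/to/master.img of="$dev" bs=4M conv=fsync &
        done
        wait    # block until every background dd has finished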

    Read the article

  • Large file copy from NFS to local disk performance drop

    - by Bernhard
    I'm trying to copy a 200GB file from an NFS mount to a local disk. The local disk is an XFS filesystem on LVM on top of a RAID 5 array (hardware RAID controller). I'm using rsync to monitor the transfer speed. At the beginning the I/O speed is about 200MB/s, stable for the first 18GB. But then the performance drops by a factor of 10-20 and never recovers to the initial rate. Sometimes it reaches about 50-100MB/s, but just for a few seconds, and then the process seems to hang for a bit. At the same time, all file-stat operations on the target filesystem block for a long time (minutes). Interrupting the copy process also blocks for several minutes, and a subsequent delete of the partly copied file takes several minutes as well. Any ideas what could be causing this?
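
    The shape of this (fast for the first ~18 GB, then a collapse with stalling stat calls) is consistent with the page cache absorbing writes until dirty-page writeback kicks in and the RAID 5 volume becomes the bottleneck. A few hedged checks to run while the copy is going:

        grep -i dirty /proc/meminfo                         # how much dirty data is queued for writeback
        iostat -xm 5                                        # utilisation and await on the RAID volume
        sysctl vm.dirty_ratio vm.dirty_background_ratio     # thresholds that trigger blocking writeback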

    Read the article

  • One NIC going to sleep on a CentOS system

    - by sbleon
    I have two Dell boxes with two ethernet ports apiece. I have a cable directly connecting two of these ports, creating a tiny LAN with 10.3.3.x addresses. The other port on each box is hooked up to a switch and has a DHCP-supplied address to talk to the outside world. I've noticed that when scp'ing large files from one box to the other over the private LAN, the transfers sometimes stall. It appears that any other network activity on either box will cause the transfer to resume. Wake-on-LAN is disabled on all interfaces according to ethtool. What else could be causing these stalled transfers?
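
    A few hedged things to look at on both boxes while a stall is happening (the interface name is a placeholder for whichever port carries the 10.3.3.x link):

        ethtool eth1                                   # link speed, duplex, and whether the link is flapping
        ethtool -S eth1 | grep -iE 'err|drop|pause'    # error, drop, and pause counters
        dmesg | grep -i eth1                           # driver messages logged around the time of a stall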

    Read the article

  • vsFTPd and iptables - how to configure them in CentOS 5.5?

    - by Vincenzo
    I've installed vsFTPd under CentOS 5.5 on TWO servers, and added this rule to their iptables:

        -A RH-Firewall-1-INPUT -m state --state NEW -m tcp -p tcp --dport 21 -j ACCEPT

    It looks like this is not enough, since when I try to upload a file from one server to the other, I get this result (the IP address is masked):

        # ftp 99.99.99.99
        Connected to …com (99.99.99.99).
        220 (vsFTPd 2.0.5)
        Name (99.99.99.99:root): vinny
        331 Please specify the password.
        Password:
        230 Login successful.
        Remote system type is UNIX.
        Using binary mode to transfer files.
        ftp> ls
        227 Entering Passive Mode (99,99,99,99,107,74)
        ftp: connect: No route to host

    I've found a few articles on the net about the second rule I have to add to iptables, but I didn't find the right syntax for it. Could you please help?
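
    A hedged sketch of the two usual ways to let passive-mode data connections through on CentOS 5 (the passive port range is an arbitrary example; the -A lines belong in /etc/sysconfig/iptables above the chain's final REJECT rule, in the same place the port 21 rule went):

        # option 1: let connection tracking handle the data channel
        modprobe ip_conntrack_ftp
        #   persist the helper via IPTABLES_MODULES="ip_conntrack_ftp" in /etc/sysconfig/iptables-config,
        #   and make sure an ESTABLISHED,RELATED accept rule is present in the chain:
        -A RH-Firewall-1-INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT

        # option 2: pin vsftpd to a fixed passive range in /etc/vsftpd/vsftpd.conf
        #   pasv_min_port=50000
        #   pasv_max_port=50100
        # and open exactly that range:
        -A RH-Firewall-1-INPUT -m state --state NEW -m tcp -p tcp --dport 50000:50100 -j ACCEPT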

    Read the article

  • How to tell if you are connected to Wireless B, G or N?

    - by Raheel Khan
    I am using Windows 7 on all the wired desktops and wireless laptops in my home network. I recently upgraded my Ethernet switch to Gigabit and instantly noticed an increase in throughput on the wired devices. I also bought a Wireless-N WAP, but saw a degradation in wireless file transfer speeds. I have been told that a number of things can affect wireless speeds, including which WAP is used, how many wireless devices are connected, which security mode is used, etc. However, that remains irrelevant to my question. Each of my laptops claims to support Wireless-N, but I cannot figure out how to determine whether the laptops are truly running Wireless-N or are connected to the WAP through some sort of mixed mode. I do not have control of the WAP device, so I cannot tell what mode it is running in. Is there a way to tell which mode is being used, and what the throughput is for each connected device, without having access to the WAP interface?
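
    On Windows 7 itself there is a hedged per-laptop check from a command prompt: the output of the command below includes a "Radio type" line (e.g. 802.11n) and the current Receive/Transmit rates in Mbps for the active connection:

        C:\> netsh wlan show interfaces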

    Read the article

  • AXFR problem using Gradwell secondary DNS

    - by Roaders
    Hi all. I use Gradwell.com to provide secondary DNS, but I keep getting e-mails along the following lines saying that it's not working:

        You have asked us to provide a secondary DNS service for the following domain(s).
        Unfortunately, the primary DNS server(s) you specified are not permitting the necessary
        zone transfers from our servers, or they are not answering "SOA" queries for your domain
        correctly.

    I have gone through the support procedure and they weren't that helpful. They suggested the following:

        Our secondline team have suggested setting the AXFR to use another machine. This will
        ensure that the transfer is not locked down to one machine and should allow any machine
        to make the request.

    I don't really know what AXFR is, and I only have one production machine, so I can't set the AXFR to use another one! In previous support correspondence we confirmed that I am allowing transfers to the correct IP and that I have the correct ports open on the firewall. I am running Windows Server 2003. What can I do to try and get these zone transfers working? Thanks
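
    For what it's worth, AXFR is simply the DNS query type used for a full zone transfer, and on Windows Server 2003 the list of servers allowed to request it lives on the Zone Transfers tab of the zone's Properties. A hedged way to test from any outside Linux/Mac box whether the primary answers the two things the secondary checks (substitute the real zone and server):

        dig @primary.example.com example.com SOA     # must return the zone's SOA record
        dig @primary.example.com example.com AXFR    # must return the full zone, but only from allowed IPs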

    Read the article

  • Server 2003 answers ping, but won't serve HTTP, FTP, SMTP or POP3

    - by Manfred
    After a reboot, my server won't respond to any incoming request until it is rebooted again. Then, about 5-6 hours later, any website on it will return a ping, but it will not serve the page, nor will it serve FTP, POP3 or SMTP requests. The System log shows W3SVC errors 1014 and 1074, which relate to an application pool not replying. I have one phpAdmin app pool which I have stopped; it shows a solitary website as the default app, but the server no longer serves PHP extensions, and I can't transfer the default website to another pool in order to kill the whole app pool. I would appreciate your help.

    Read the article

  • Nginx Cache-Control

    - by optixx
    I am serving my static content with nginx:

        location /static {
            alias /opt/static/blog/;
            access_log off;
            etags on;
            etag_hash on;
            etag_hash_method md5;
            expires 1d;
            add_header Pragma "public";
            add_header Cache-Control "public, must-revalidate, proxy-revalidate";
        }

    The resulting header looks like this:

        Cache-Control:public, must-revalidate, proxy-revalidate
        Cache-Control:max-age=86400
        Connection:close
        Content-Encoding:gzip
        Content-Type:application/x-javascript; charset=utf-8
        Date:Tue, 11 Sep 2012 08:39:05 GMT
        Etag:e2266fb151337fc1996218fafcf3bcee
        Expires:Wed, 12 Sep 2012 08:39:05 GMT
        Last-Modified:Tue, 11 Sep 2012 06:22:41 GMT
        Pragma:public
        Server:nginx/1.2.2
        Transfer-Encoding:chunked
        Vary:Accept-Encoding

    Why is nginx sending two Cache-Control entries, and could this be a problem for clients?
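
    The duplication comes from combining two directives that each emit Cache-Control: expires 1d adds its own Cache-Control: max-age=86400 header alongside the one set by add_header. A hedged sketch of one way to end up with a single header is to drop expires and fold the max-age into add_header:

        location /static {
            alias /opt/static/blog/;
            access_log off;
            # third-party etag directives omitted here for brevity
            add_header Pragma "public";
            # one combined header instead of expires + add_header
            add_header Cache-Control "public, max-age=86400, must-revalidate, proxy-revalidate";
        }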

    Read the article

  • Why would http & https be blocked, even in safe mode with firewall disabled?

    - by Cogwheel
    I have a Windows 7 machine (a Dell Studio XPS). Everything on it seems to be in working order. The network device says it has Internet connectivity, and indeed I can ping websites, transfer files via FTP, and connect to VPNs and Remote Desktop, but the web won't work. I've disabled the Windows firewall and it's still no go. There are no other firewalls installed. The computer came with a trial of Norton 360, so I also used the Norton Removal Tool (which solved a similar problem on another computer for me previously). Any thoughts?
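
    One hedged thing to try, given the Norton history: leftover Winsock LSP entries from a partially removed security suite can break HTTP/HTTPS while ping, FTP and VPNs still work. Resetting the Winsock catalog and IP stack from an elevated command prompt is low-risk (reboot afterwards):

        C:\> netsh winsock reset
        C:\> netsh int ip reset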

    Read the article

  • SPF record doesn't work (not sure which DNS server to tweak)

    - by Ion
    Problem: Google (and perhaps others) marks our emails as SPF neutral. Let me give you some background on the setup: we initially got a dedicated server (Hetzner) with Plesk installed to host a domain/web application, let's say bigjaws.com. Plesk automatically creates a DNS zone for it with records for the various services it provides out of the box, e.g. webmail.bigjaws.com as a CNAME to bigjaws.com to provide Horde/whatever, etc. Let me point out four relevant records (where XXX.XXX.XXX.158 is our dedicated IP):

        bigjaws.com.        A     XXX.XXX.XXX.158
        mail.bigjaws.com.   A     XXX.XXX.XXX.158
        bigjaws.com         MX    (10) mail.bigjaws.com.
        bigjaws.com.        TXT   v=spf1 +a +mx -all

    The above records are not(?) valid anymore though, because after using this dedicated server for a while our site got bigger and bigger, so we decided to move our operations over to AWS (EC2, RDS, ELB, etc.), but we retained the mail functionality as is, i.e. emails from [email protected] are sent by connecting to our dedicated server, where Plesk takes care of things. This was decided in order not to set anything up from scratch. Of course for all DNS-related things we now use Route53. In Route53 I have the following records:

        mail.schoox.com.    A     XXX.XXX.XXX.158
        bigjaws.com.        MX    (10) mail.bigjaws.com
        bigjaws.com.        SPF   "v=spf1 +ip4:XXX.XXX.XXX.158 +mx ~all"

    From my understanding of SPF, the SPF status should have been "pass": I designate that all email sent by bigjaws.com from XXX.XXX.XXX.158 is valid/not spam (I added +mx there but I'm not sure if it's needed). When a mail server receives an email, doesn't it look up the SPF record of the domain and check it against the IP it got the email from? Checking with spfquery:

        root@box:~# spfquery -ip XXX.XXX.XXX.158 -sender [email protected] -rcpt-to [email protected]
        StartError
        Context: Failed to query MAIL-FROM
        ErrorCode: (2) Could not find a valid SPF record
        Error: No DNS data for 'bigjaws.com'.
        EndError
        noneneutral
        Please see http://www.openspf.org/Why?id=employee1%40bigjaws.com&ip=XXX.XXX.XXX.158&receiver=spfquery : Reason: default
        spfquery: XXX.XXX.XXX.158 is neither permitted nor denied by domain of bigjaws.com
        Received-SPF: neutral (spfquery: XXX.XXX.XXX.158 is neither permitted nor denied by domain of bigjaws.com) client-ip=XXX.XXX.XXX.158; [email protected];

    If I go to the address listed above (openspf.org), it tells me that the message should have been accepted(!):

        spfquery rejected a message that claimed an envelope sender address of [email protected].
        spfquery received a message from static.158.XXX.XXX.XXX.clients.your-server.de (XXX.XXX.XXX.158) that
        claimed an envelope sender address of [email protected]. The domain bigjaws.com has authorized
        static.158.XXX.XXX.XXX.clients.your-server.de (XXX.XXX.XXX.158) to send mail on its behalf, so the
        message should have been accepted. It is impossible for us to say why it was rejected. What should
        you do? If the problem persists, contact the bigjaws.com postmaster.

    Also, here are some headers from an email sent by one of our [email protected] addresses to a gmail.com address (by the way, bigjaws.de, listed in the "Received: from" field, was the initial domain hosted on the dedicated server before adding the .com one -- both are still listed as separate subscriptions under Plesk).
        Delivered-To: [email protected]
        Received: by 10.14.177.70 with SMTP id c46csp289656eem; Wed, 23 Oct 2013 01:11:00 -0700 (PDT)
        X-Received: by 10.14.102.66 with SMTP id c42mr306186eeg.47.1382515860386; Wed, 23 Oct 2013 01:11:00 -0700 (PDT)
        Return-Path: <[email protected]>
        Received: from bigjaws.de (static.158.XXX.XXX.XXX.clients.your-server.de. [XXX.XXX.XXX.158])
                by mx.google.com with ESMTPS id l4si19438578eew.161.2013.10.23.01.10.59
                for <[email protected]> (version=TLSv1 cipher=RC4-SHA bits=128/128);
                Wed, 23 Oct 2013 01:10:59 -0700 (PDT)
        Received-SPF: neutral (google.com: XXX.XXX.XXX.158 is neither permitted nor denied by best guess record
                for domain of [email protected]) client-ip=XXX.XXX.XXX.158;
        Authentication-Results: mx.google.com; spf=neutral (google.com: XXX.XXX.XXX.158 is neither permitted nor
                denied by best guess record for domain of [email protected]) [email protected]
        DomainKey-Signature: a=rsa-sha1; q=dns; c=nofws; s=default; d=bigjaws.com;
                b=WwRAS0WKjp9lO17iMluYPXOHzqRcOueiQT4rPdvy3WFf0QzoXiy6rLfxU/Ra53jL1vlPbwlLNa5gjoJBi7ZwKfUcvs3s02hJI7b3ozl0fEgJtTPKoCfnwl4bLPbtXNFu;
                h=Received:Received:Message-ID:Date:From:User-Agent:MIME-Version:To:Subject:Content-Type:Content-Transfer-Encoding;
        Received: (qmail 22722 invoked from network); 23 Oct 2013 10:10:59 +0200
        Received: from hostname.static.ISP.com (HELO ?192.168.1.60?) (YYY.YYY.ISP.IP)
                by static.158.XXX.XXX.XXX.clients.your-server.de. with ESMTPSA (DHE-RSA-AES256-SHA encrypted, authenticated);
                23 Oct 2013 10:10:59 +0200
        Message-ID: <[email protected]>
        Date: Wed, 23 Oct 2013 11:11:00 +0300
        From: BigJaws Employee <[email protected]>
        User-Agent: Mozilla/5.0 (Windows NT 6.2; WOW64; rv:24.0) Gecko/20100101 Thunderbird/24.0.1
        MIME-Version: 1.0
        To: [email protected]
        Subject: test SPF
        Content-Type: text/plain; charset=ISO-8859-1
        Content-Transfer-Encoding: 7bit

        test SPF

    Any ideas why SPF is not working correctly? Also, are there any DNS settings that are no longer needed and might be causing a problem?
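
    A hedged observation and check: most receivers, Gmail included, evaluate SPF from a TXT record, and the standalone SPF record type was deprecated by RFC 7208, so a Route53 entry of type SPF on its own is widely ignored (which would also explain spfquery reporting "No DNS data"). Publishing the same policy string as a TXT record in Route53 and re-testing is a low-risk experiment:

        # what the world currently sees (substitute the real domain)
        dig +short bigjaws.com TXT
        dig +short bigjaws.com SPF    # having only this record type is usually not enough

        # after adding the TXT record, re-run the earlier check
        spfquery -ip XXX.XXX.XXX.158 -sender [email protected] -rcpt-to [email protected]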

    Read the article

  • Linux QoS: bulk data transmission during idle times

    - by syneticon-dj
    How would I do a QoS setup where a certain low-priority data stream would get up to X Mbps of bandwidth, but only if the current total bandwidth (of all streams/classes) on this interface does not exceed X? At the same time, other data streams/classes must not be limited to X. The use case is an ISP billing the traffic by calculating the bandwidth average over 5-minute intervals and billing on the maximum. I would like to keep the maximum usage to a minimum (i.e. quench the bulk transfer while the interface is busy) but get the data through during idle/low-traffic times. Looking at the frequently used classful schedulers CBQ, HTB and HFSC, I cannot see a straightforward way to accomplish this.

    Read the article

  • YSlow says certain CSS files are not gzipped

    - by rhand
    YSlow keeps telling me that files like http://www.example.com/wp-content/plugins/q-and-a/css/q-a-plus.css?ver=1.0.6.2 are not gzipped, while the gzip test tool at Feed the Bot says I am all good:

        Compressed?               Yes
        Compression type          gzip
        Page size (Bytes)         32,493
        Compressed size (Bytes)   -1
        Saving (Bytes)            32,494
        Compression %             100%

    I added this to my .htaccess:

        # Gzip
        <ifModule mod_gzip.c>
            mod_gzip_on Yes
            mod_gzip_dechunk Yes
            mod_gzip_item_include file .(html?|txt|css|js|php|pl)$
            mod_gzip_item_include handler ^cgi-script$
            mod_gzip_item_include mime ^text/.*
            mod_gzip_item_include mime ^application/x-javascript.*
            mod_gzip_item_exclude mime ^image/.*
            mod_gzip_item_exclude rspheader ^Content-Encoding:.*gzip.*
        </ifModule>

        # Deflate
        <ifmodule mod_deflate.c>
            AddOutputFilterByType DEFLATE text/text text/html text/plain text/xml text/css application/x-javascript application/javascript
        </ifmodule>

    The headers for the file mentioned are:

        CF-Cache-Status     MISS
        CF-RAY              13945df90a9a0c1d-AMS
        Cache-Control       public, max-age=2592000
        Connection          keep-alive
        Content-Encoding    gzip
        Content-Type        application/javascript
        Date                Thu, 12 Jun 2014 07:34:38 GMT
        Expires             Sat, 12 Jul 2014 07:34:38 GMT
        Last-Modified       Thu, 21 Feb 2013 01:29:18 GMT
        Server              cloudflare-nginx
        Transfer-Encoding   chunked
        Vary                Accept-Encoding

    Any ideas what I am missing here?
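
    A hedged way to see exactly what is served for that URL (query string included): this response is coming through CloudFlare, and different testers send different Accept-Encoding headers, so comparing the two requests below can show whether the uncompressed variant is what YSlow is seeing:

        # request with gzip accepted (roughly what browsers and the Feed the Bot tool send)
        curl -s -o /dev/null -D - -H 'Accept-Encoding: gzip,deflate' \
            'http://www.example.com/wp-content/plugins/q-and-a/css/q-a-plus.css?ver=1.0.6.2'

        # the same request with no Accept-Encoding header at all
        curl -s -o /dev/null -D - \
            'http://www.example.com/wp-content/plugins/q-and-a/css/q-a-plus.css?ver=1.0.6.2'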

    Read the article

  • How to connect a remote IP Phone to our VOIP Network?

    - by Mistiry
    We have an IP phone system in our office, and about 8 VoIP phones running on the system. We have a remote worker, who is literally states away. We'd like to connect his phone to our VoIP network, so that he has a business phone and an extension to which we could transfer calls. I was thinking, although I don't know for sure, that a pair of Cisco routers could be used in some way to make this work. I imagine a VPN solution, where I have one router connected to the phone network and the other router connected to the remote phone. Then have a site-to-site VPN set up so that the remote router... And that's where I'm stuck. I know the remote router will need to use the DHCP server of the phone system. I've never set up something like this, so I am seeking the help of the community here. What is the best way to get this remote VoIP phone RELIABLY connected to our internal VoIP network?

    Read the article

  • What's the easiest way to allow Exchange 2003 remote (no MSO client) users to check their mailbox size?

    - by Myrddin Emrys
    We are migrating from Exchange 2003 with no quota settings to Exchange 2010 with limited mailbox sizes. We are trying to get users to clean their mailboxes prior to the move, both to reduce the transfer load and to comply with the new quotas on the 2010 system. But many users access their mail through webmail only, and I cannot see a way for those users to check their mail store size. Has anyone else run into this problem? Is there a good way to easily let users check their own mailbox size? The only workaround I've come up with is a report that IT generates and mail-merges out to users daily with their current mailbox size. That is cumbersome and time-consuming compared to letting them check their own mailbox size, however.

    Read the article

  • Amazon EC2: many micro-instances vs. a single small/medium instance

    - by shashankaholic
    I have a chat application using a stack of Openfire, Tomcat 6 and MySQL. Currently, I have installed all these servers on a single Linux micro-instance (613 MB memory). Even with a low user base of 10-20 I am encountering CPU overload, which is quite obvious here. As I am new to Amazon EC2, can somebody suggest how to scale up my architecture according to traffic? Should I use separate micro instances for every app server (Openfire, MySQL, Tomcat 6), or should I use a single small or medium instance for the whole server stack? Some factors in context: high reliance on MySQL, high memory usage due to file transfer, and a web application interacting with other Amazon services like S3 and SES.

    Read the article

  • TFTP PUT Failing Across Hosts

    - by Jason
    I have a TFTP server installed on a CentOS host. /etc/xinetd.d/tftp:

        service tftp
        {
            disable      = no
            socket_type  = dgram
            protocol     = udp
            wait         = yes
            user         = root
            server       = /usr/sbin/in.tftpd
            server_args  = -c -s /var/lib/tftpboot
            per_source   = 11
            cps          = 100 2
            flags        = IPv4
        }

    If I try to PUT a file from a remote host to the host running the TFTP server, I get "Transfer timed out" - however, it does create the file in /var/lib/tftpboot, but the file is empty. If I tftp from the TFTP server to itself (localhost) and PUT a file, it works fine. I have verified that SELinux is disabled and iptables is turned off. I can connect from the remote hosts with no issue - it just seems to be the PUT that I have an issue with:

        [root@SVR01 TEST]# tftp 10.100.2.15
        tftp> status
        Connected to 10.100.2.15.
        Mode: netascii Verbose: off Tracing: off Literal: off
        Rexmt-interval: 5 seconds, Max-timeout: 25 seconds
        tftp>
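
    A hedged check while a remote PUT is running: after the initial request to port 69, the server replies from a fresh ephemeral UDP port, and anything in the path that only allows port 69 (or NATs the client) will silently drop those replies, which matches "file created but empty". Watching the exchange on the server shows whether the replies go out and whether retransmitted data blocks ever arrive (CLIENT_IP and the interface are placeholders):

        tcpdump -ni eth0 udp and host CLIENT_IP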

    Read the article

  • Amazon EC2: micro-instance vs. a single small/medium instance

    - by shashankaholic
    I have a chat application using a stack of Openfire, Tomcat 6 and MySQL. Currently, I have installed all these servers on a single Linux micro-instance (613 MB memory). Even with a low user base of 10-20 I am encountering CPU overload, which is quite obvious here. As I am new to Amazon EC2, can somebody suggest how to scale up my architecture according to traffic? Should I use separate micro instances for every app server (Openfire, MySQL, Tomcat 6), or should I use a single small or medium instance for the whole server stack? Some factors in context: high reliance on MySQL, high memory usage due to file transfer, and a web application interacting with other Amazon services like S3 and SES.

    Read the article

  • How can I copy the output from a remote command into the local clipboard?

    - by cwd
    I use iTerm2 as my terminal client on Mac OS X. On the local system I can use pbcopy and pbpaste to transfer data between the system clipboard and the terminal, but of course this doesn't work when you're ssh'ed to another machine. Is there some way I can take the result of a command and copy it to the clipboard automatically? Perhaps an AppleScript to grab the text in the iTerm window, then get the next-to-last line? For instance, if I wanted to copy the current working directory: I run pwd, then use the mouse to select the text, and then press command + c. Is there any better / faster / automatic way of doing this? I'm not looking for a bulletproof solution that would work for every command (e.g. it might not work when there is a huge scrollback) - I'm just looking for something to make this task that I do quite often a little less tedious. Update: I'm looking into using screen to do this, but I'm still not sure if it is possible.
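
    One hedged shortcut for the cases where the command can be re-run non-interactively: run it over ssh from the Mac side, so its output lands straight on the local clipboard without touching the mouse:

        # run a command on the remote host and put its output on the local clipboard
        ssh user@remotehost 'uname -a' | pbcopy    # any non-interactive command works here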

    Read the article

  • Accidentally ejected my Verbatim drive and can't get the icon back

    - by Erin
    Hi, I have Time Machine running on my iMac (OS X v10.5.8) and also have a 1TB Verbatim drive attached that I use as a workspace/scratch disk so I can manipulate large music files before I transfer them. However, when cleaning behind my computer the other day I think I dislodged the connection (or maybe one of the kids hit the eject button, I don't know). I've rebooted many times and it hasn't reconnected. It doesn't appear in my Disk Utility window and I don't know how to get the icon back! I've looked in Time Machine but it doesn't appear there at all (because it's not supposed to, I think - it's not connected - my mate hooked it up for me and he won't return my calls!). Help. I don't know how to get it back! Sorry for being a plank.
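
    A hedged first check from Terminal: if the disk shows up in the listing below but not in Finder, it is a mount/partition problem; if it is absent entirely, the cable, the hub port, or the drive itself is the more likely culprit (trying the drive on the iMac's USB port directly, without the hub, is another low-risk test):

        diskutil list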

    Read the article

  • How To Create An FTP User That Has Permission To EVERYTHING

    - by Serg
    I've spent the last two hours trying to create an FTP user so I can transfer some files over to my WordPress blog folder, /var/www/sergiotapia.me. I'm using vsftpd on Ubuntu 12.04 for my FTP server and I've read tons of documentation, none of which seems to work. I still cannot log in with the FTP user, let alone test whether I even have read/write file permissions. Can a Linux guru here help me out with a small step-by-step? I'm comfortable with the terminal and nano, so that's not an issue - I'll SSH into my box. Just tell me what to do and what commands to run. Specifically, this user needs to have read and write access to the /var/ folder and anything within it. I want to have one user that can do whatever the heck he wants on my Ubuntu 12.04 VPS machine.
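
    A minimal hedged sketch for a local vsftpd user on Ubuntu 12.04 (the user name is a placeholder, and the sketch scopes write access to the blog folder rather than all of /var, which is the safer variant of what the question asks for):

        sudo adduser ftpadmin                              # create the local account, sets its password interactively
        sudo usermod -d /var/www ftpadmin                  # land the user in the web root on login
        sudo chown -R ftpadmin /var/www/sergiotapia.me     # give it ownership of the blog folder

        # in /etc/vsftpd.conf, make sure local logins and writes are allowed:
        #   local_enable=YES
        #   write_enable=YES
        sudo service vsftpd restart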

    Read the article

  • Can I take an HDD in RAID 1 and plug it straight into a different machine?

    - by jacko
    I would assume that I can just take my HDD out of my NAS (a RAID 1 mirror), plug it into another enclosure and have it work right off the bat, but I'd like to make sure... Any ideas? Edit: My current setup is a Netgear ReadyNAS using (hardware) RAID 1. I'm hoping to replace this with a home-theatre-type PC (possibly running Ubuntu), and would like to migrate my data without having to do a bulk transfer over my network between the two machines. Can anyone confirm whether this works for the Netgear ReadyNAS?

    Read the article

  • Corrupted file, hard drive test?

    - by all-R
    Hi guys, I'm currently on a MacBook with a 1TB external hard drive connected through a USB hub which is connected to my MacBook. The problem is that my disk, which is partitioned in two (one HFS+ and one NTFS), keeps getting corrupted. Recently it was my HFS+ partition; I could not repair it using Apple's Disk Utility, but I was able to back up my files. Is this a sign that my hard drive is failing? Is it because of my USB hub? I also keep my entire iTunes library on the external HD (the HFS+ partition), and did a lot of transfers lately, adding files, removing them, etc.; the last time, the partition got corrupted after a lot of deleted items. If anybody has an idea of what to check first, or what could be causing the problem, I would appreciate it :) Thanks!

    Read the article
