Search Results

Search found 5709 results on 229 pages for 'behind the scenes'.

Page 166/229 | < Previous Page | 162 163 164 165 166 167 168 169 170 171 172 173  | Next Page >

  • How to get rid of "Maxback Engine" for good?

    - by Jonik
    I used to have a Maxtor Shared Storage II network drive; it broke down long ago. (Later I tried to recover some data from it, and partially succeeded, but haven't yet fully documented it on that question.) Anyway, I just noticed there are still some lingering bits remaining of the (thoroughly crappy) software that came with the Maxtor device: a background process called "MaxBack Engine". I googled around a bit and found something related but not very useful: http://www.straitmac.com/jforum/posts/list/600.page http://discussions.apple.com/thread.jspa?threadID=725692 Under /Applications I found "Maxtor EasyManage.app", which I used to use for controlling the drive, and showed it some "rm -rf". Before deleting, I noted that the bundle did contain "MaxBack Engine.app" under Contents/Resources. But still, after a reboot, the "MaxBack Engine" process is back. I did notice, though, that it only appears when logging in with my usual user account; with another account it isn't launched. So, dear Mac gurus, what can I do about this pest? I guess I could fall back to some Unix hackery and write a cron job that kills any process with that name, but obviously it would be nicer to clean everything left behind by Maxtor's software off my computer.
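    On a Mac, a process that reappears at login for one particular user is usually started by a launchd agent or a per-user login item. A minimal sketch of where to look (the paths are the standard launchd locations; the .plist name at the end is hypothetical, not taken from the question):

        # look for Maxtor/MaxBack launchd jobs, per-user and system-wide
        ls ~/Library/LaunchAgents /Library/LaunchAgents /Library/LaunchDaemons | grep -i -e maxtor -e maxback
        launchctl list | grep -i max
        # if a matching .plist turns up, unload it and move it aside instead of deleting it
        launchctl unload ~/Library/LaunchAgents/com.maxtor.maxback.plist   # hypothetical file name
        mv ~/Library/LaunchAgents/com.maxtor.maxback.plist ~/Desktop/
        # also check System Preferences > Accounts > Login Items for a "MaxBack Engine" entry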

    Read the article

  • HP Proliant DL380 G4 - Can this server still perform in 2011?

    - by BSchriver
    Can the HP ProLiant DL380 G4 series server still perform at a high level in the 2011 IT world? This may sound like a weird question, but we are a very small company whose primary business is NOT IT related, so my IT dollars have to stretch a long way. I am in need of a good web and database server. The load and demand for a while will be fairly low, so I am not looking for, nor do I have the money to buy, a brand new HP DL380 G7 series box for $6K. While searching around today I found a company in ATL that buys servers off business leases and then strips them down for parts. They clean, check and test each part and then custom "rebuild" the server based on whatever specs you request. The interesting thing is they also provide a 3-year warranty on all the servers they sell. I am contemplating buying two of the following:

        HP ProLiant DL380 G4
        Dual (2) Intel Xeon 3.6 GHz 800 MHz 1MB cache processors
        8GB PC3200R ECC memory
        6 x 73GB U320 15K rpm SCSI drives
        Smart Array 6i card
        Dual power supplies
        Plus the usual CD-ROM, dual NICs, etc.

    All this for $750 each, or $1500 for two pretty nicely equipped servers. The price then jumps up on the next model up, which is the G5 series: it goes from $750 to about $2000 for a comparable server. I just do not have $4000 to buy two servers right now. So back to my original question: if I load Windows Server 2008 R2 and IIS 7 on one of the machines, and Windows Server 2008 R2 and MS SQL Server 2008 R2 on the other, what kind of performance might I expect to see from these machines? The fact is this series is now three generations behind the G7s, and this series of server was built when Windows 2000 Server was the dominant OS and Windows 2003 Server was just coming out. If you are running Windows Server 2008 R2 on a G4 with similar or lesser specs, I would love to hear what your performance is like.

    Read the article

  • SSH multi-hop connections with netcat mode proxy

    - by aef
    Since OpenSSH 5.4 there is a new feature called netcat mode, which allows you to bind the STDIN and STDOUT of the local SSH client to a TCP port accessible through the remote SSH server. This mode is enabled by simply calling

        ssh -W [HOST]:[PORT]

    Theoretically this should be ideal for use in the ProxyCommand setting in per-host SSH configurations, which was previously often used with the nc (netcat) command. ProxyCommand allows you to configure a machine as a proxy between your local machine and the target SSH server, for example if the target SSH server is hidden behind a firewall. The problem is that instead of working, it throws a cryptic error message in my face:

        Bad packet length 1397966893.
        Disconnecting: Packet corrupt

    Here is an excerpt from my ~/.ssh/config:

        Host *
            Protocol 2
            ControlMaster auto
            ControlPath ~/.ssh/cm_socket/%r@%h:%p
            ControlPersist 4h

        Host proxy-host proxy-host.my-domain.tld
            HostName proxy-host.my-domain.tld
            ForwardAgent yes

        Host target-server target-server.my-domain.tld
            HostName target-server.my-domain.tld
            ProxyCommand ssh -W %h:%p proxy-host
            ForwardAgent yes

    As you can see, I'm using the ControlMaster feature so I don't have to open more than one SSH connection per host. The client machine I tested this with is an Ubuntu 11.10 (x86_64) box, and both proxy-host and target-server are Debian Wheezy Beta 3 (x86_64) machines. The error happens when I call ssh target-server. When I call it with the -v flag, here is what I get additionally:

        OpenSSH_5.8p1 Debian-7ubuntu1, OpenSSL 1.0.0e 6 Sep 2011
        debug1: Reading configuration data /home/aef/.ssh/config
        debug1: Applying options for *
        debug1: Applying options for target-server.my-domain.tld
        debug1: Reading configuration data /etc/ssh/ssh_config
        debug1: Applying options for *
        debug1: auto-mux: Trying existing master
        debug1: Control socket "/home/aef/.ssh/cm_socket/[email protected]:22" does not exist
        debug1: Executing proxy command: exec ssh -W target-server.my-domain.tld:22 proxy-host.my-domain.tld
        debug1: identity file /home/aef/.ssh/id_rsa type -1
        debug1: identity file /home/aef/.ssh/id_rsa-cert type -1
        debug1: identity file /home/aef/.ssh/id_dsa type -1
        debug1: identity file /home/aef/.ssh/id_dsa-cert type -1
        debug1: identity file /home/aef/.ssh/id_ecdsa type -1
        debug1: identity file /home/aef/.ssh/id_ecdsa-cert type -1
        debug1: permanently_drop_suid: 1000
        debug1: Remote protocol version 2.0, remote software version OpenSSH_6.0p1 Debian-3
        debug1: match: OpenSSH_6.0p1 Debian-3 pat OpenSSH*
        debug1: Enabling compatibility mode for protocol 2.0
        debug1: Local version string SSH-2.0-OpenSSH_5.8p1 Debian-7ubuntu1
        debug1: SSH2_MSG_KEXINIT sent
        Bad packet length 1397966893.
        Disconnecting: Packet corrupt
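    One way to narrow this down (a sketch, reusing the host names from the config above): add a test stanza that bypasses connection multiplexing for the proxied hop, so the ControlMaster socket can be ruled out as the source of the corrupted stream, then compare ssh -v output for the two stanzas.

        Host target-server-test
            HostName target-server.my-domain.tld
            ControlMaster no
            ControlPath none
            ProxyCommand ssh -W %h:%p proxy-host
            ForwardAgent yes

    If ssh target-server-test works, the multiplexing options are interfering with the -W proxy; if it fails the same way, the problem lies between the OpenSSH 5.8 client and the 6.0 servers rather than in the ControlMaster setup.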

    Read the article

  • Motherboard running rather hot while gaming

    - by I take Drukqs
        Case: Antec 1200
        Mobo: Gigabyte GA-X58A-UD3R
        CPU: Intel i7 950 (stock cooler)
        GPU: EVGA GeForce GTX 570
        RAM: 2x 2 GB (4 GB total) DDR3 dual-channel Corsair
        OS: Windows 7 Home Premium 64-bit

    This is my first build and it's brand new. I had no problems putting it all together in a few hours one evening, and I consider myself to be pretty good with computers. Not to brag or anything like that! Just saying I've been fiddling with them since I was in diapers and I have a good amount of experience under my belt, just not with certain things yet. Recently, while playing many of the latest games maxed out without a hitch, my motherboard has been running hot, and like anyone who's ever built a computer, that scares the life out of me. I checked HWMonitor and saw that my motherboard sometimes reached temperatures of around 52-78 °C (the 78 obviously being what's scaring me). I was wondering if such a temperature is normal, and if not, what the problem could be. Airflow in my case is phenomenal, and besides having to ship back a faulty GPU and reseat my CPU, my first build has been a very large success which I am enjoying tremendously. There is literally almost no dust in my case due to it being very new, as previously mentioned, and my RAM sticks are in the correct slots for dual-channel mode. My cable management is pretty great in my opinion, with only cables from my PSU lingering in the bottom of the case. At any given opportunity I ran my cables behind my mobo. Airflow should definitely not be a problem, because my CPU only goes up to about 60 °C and my GPU only goes up to about 80 °C. Thank you very much in advance.

    Read the article

  • How do I load balance between two Linux machines?

    - by William Hilsum
    Inspired by the Stack Overflow network, I am now obsessed with HAProxy and trying to use it myself. At the moment, each HAProxy box has two network cards (well, two configured; I can have a maximum of 4 and wasn't sure if they needed their own one for management between the boxes). On both machines, the back-end one (eth1) has a private IP that goes to a switch connected to the webservers, and the front-facing one (eth0) has a public internet IP that is routed straight through. In addition, I have created an additional virtual IP for eth0, called eth0:0, which has a third public IP address. I just about get how to use it for load balancing between multiple web servers that are behind it, but I am failing to load balance between the two HAProxy boxes - they appear to fight for the virtual IP, which does not appear to be a smart solution. Now, by using the virtual shared IP address, this setup appears to work and does seem to give me maximum uptime, but is this the correct way to do it, or is there a smarter way? I have been looking at other Linux packages such as keepalived, but I have only been using Linux (server) for a week now and am at the limits of my understanding. Is there anyone who has done this before who can advise anything for maximum uptime?
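    A minimal sketch of the usual keepalived approach (the shared address 203.0.113.10 and the password are placeholders, not values from the question): keepalived runs on both HAProxy boxes and moves the shared virtual IP to whichever box is currently alive, so the two boxes cooperate instead of fighting over it.

        # /etc/keepalived/keepalived.conf on the first box
        # (the second box uses state BACKUP and a lower priority, e.g. 100)
        vrrp_instance VI_1 {
            state MASTER
            interface eth0
            virtual_router_id 51
            priority 101
            advert_int 1
            authentication {
                auth_type PASS
                auth_pass examplepw
            }
            virtual_ipaddress {
                203.0.113.10
            }
        }

    Clients and DNS only ever point at 203.0.113.10; if the master box dies, the backup takes over the address within a few seconds.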

    Read the article

  • How to go about scaling a web application?

    - by phoenix24
    I've been primarily a web-application developer and know very little about scaling/scalability techniques. I'll start by stating that my application is written in Python, using Django; a fairly standard setup. I currently use Apache 2.2 for my webserver and MySQL for my database server, both running on the same VPS. Up until now it was basically a prototype with merely 15-30 concurrent users at any given time, so I had no issues, but now that we'll be adding more users we'll have severe performance issues. So my question is: how do I go about scaling my web application? My plan is as follows.

        1. Right now I have just one VPS running Apache + MySQL.
        2. Next, I plan to add another VPS to run only MySQL, so I'll have one web server and one DB server.
        3. Next, I'll add memcached to the web server for caching data and taking some load off MySQL.
        4. Next, another web server for serving all the static content.
        5. Finally, a VPS for load balancing (nginx/varnish), behind which would be my two web servers and then the DB server.

    Does that sound like a workable strategy? Please guide me here.
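    As a sketch of what step 5 might look like with nginx (the upstream addresses are placeholders, not from the question), the front VPS simply proxies to a pool of backend web servers:

        # /etc/nginx/conf.d/app.conf on the load-balancer VPS
        upstream django_app {
            server 10.0.0.11:80;    # web server 1 (placeholder)
            server 10.0.0.12:80;    # web server 2 (placeholder)
        }

        server {
            listen 80;
            server_name example.com;

            location / {
                proxy_pass http://django_app;
                proxy_set_header Host $host;
                proxy_set_header X-Real-IP $remote_addr;
            }
        }

    The earlier steps (separate DB box, memcached) can be done one at a time, measuring after each, before this piece is needed.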

    Read the article

  • Can two mocha for After Effects X-Spline layers be merged?

    - by George Profenza
    Hello, I'm new to mocha for After Effects, but I like the auto-tracking feature. There are a few markers I'm adding, but there's one which is giving me a bit of a headache. I'm tracking a circle that moves around a larger object, so it moves in front of it initially, then behind, being occluded by the larger object, then in front again. I tried to track the circle in the first part (when it's in front, before being occluded) and in the second part (when it's in front again, coming out on the other side of the large object) using a single X-Spline. The problem is the second part starts in a different location, and I don't know how to 'move' the X-Spline to the new position and track from there without affecting the previous keyframes. As a workaround I use two X-Spline layers, export the data to .txt files, then manually merge the two files into a new one containing keyframes from both layers. Is there an easier way to do this (either merging two X-Spline layers, or using a single X-Spline layer that can move to a new location without affecting previous keyframes)? Any suggestion would help.

    Read the article

  • Is emptying the Windows temp folder a good idea?

    - by Siva Charan
    I'm using a Dell Inspiron with Windows 7. As far as I know, emptying the Windows temp folder should be a good thing. But I ran into some strange behaviour around 8 months back when I cleared my Windows temp folder: from the next day onwards my laptop started displaying one error or another every day, and eventually the OS crashed. To this day I am not sure whether the OS crashed because I cleared the Windows temp folder or because of something else. (By Windows temp folder I mean C:\Windows\Temp.) That is the back story. Today this temp folder contains 102 GB, and most of the space is occupied by files whose names start with etilqs_*.*. I came to know that these files are generated by WD SmartWare. Now my problem: I want to clean up this folder, since it occupies a lot of space. If I clean up C:\Windows\Temp, will my laptop face the same kind of problem I faced earlier, or will any new problems occur? Please suggest a good solution.
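    A cautious cleanup sketch (run from an elevated Command Prompt; the seven-day cutoff is arbitrary): delete only temp files that have not been modified recently, so anything still held open by a running program such as WD SmartWare is left alone, and consider stopping that program first so it isn't writing new etilqs_ files while you clean.

        :: delete files under C:\Windows\Temp not modified in the last 7 days
        forfiles /p "C:\Windows\Temp" /s /m *.* /d -7 /c "cmd /c del /q @path"

    Files that are locked or in use are simply skipped with an access-denied message, which is the behaviour you want here.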

    Read the article

  • FFmpeg overlay two videos, one input with transparency

    - by Gian B.
    I am trying to create a karaoke video from a CD+G file (converted to AVI using FFmpeg) and add a video as a background to the lyrics. Here's a screenshot of the output from the CD+G conversion; for simplicity let's call this lyrics.avi: http://imgur.com/wUwHUhV Now I have this video.mp4 file that I'd like to put behind lyrics.avi. Here's a sample of what I'm trying to achieve: http://imgur.com/8GuWXtQ I'm sure most of you are familiar with karaoke. I haven't used FFmpeg much and I'm not really sure if what I want to achieve is possible with FFmpeg. Is it possible to overlay two videos and add transparency to one of them - in this case for the colour black? How can I set the offset time of lyrics.avi? Here's the command I've tried so far:

        ffmpeg -i video.mp4 -i lyrics.avi \
          -filter_complex "nullsrc=size=854x480 [base]; [0:v] setpts=PTS-STARTPTS, scale=854x480 [upperleft]; [1:v] setpts=PTS-STARTPTS, scale=854x200 [bottomleft]; [base][upperleft] overlay=shortest=1 [tmp1]; [tmp1][bottomleft] overlay=shortest=1:y=280" \
          -c:v libx264 -y karaoke.mp4
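    A sketch of one way to do the transparency and the delay in a single pass, using the colorkey filter to knock out black in lyrics.avi and -itsoffset to delay it (the 10-second offset and the 0.3/0.1 similarity/blend values are guesses to be tuned, not values from the question):

        ffmpeg -i video.mp4 -itsoffset 10 -i lyrics.avi \
          -filter_complex "[1:v]scale=854:480,colorkey=0x000000:0.3:0.1[lyrics]; [0:v]scale=854:480[bg]; [bg][lyrics]overlay=shortest=1" \
          -c:v libx264 -y karaoke.mp4

    Raising the similarity value keys out more near-black pixels; if the lyrics text itself has a black outline, it will need a lower value or a different key colour.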

    Read the article

  • Spring MVC project can't select from a particular MySQL table

    - by Dan Ray
    I'm building a Spring MVC project (using JPA and Hibernate for DB access) that is running just great locally, on my dev box, with a local MySQL database. Now I'm trying to put a snapshot up on a staging server for my client to play with, and I'm having trouble. Tomcat (after some wrestling) deploys my war file without complaint, and I can get some response from the application over the browser. When I hit my main page, which is behind Spring Security authentication, it redirects me to the login page, which works perfectly. I have Security configured to query the database for user details, and that works fine. In fact, a change to a password in the database is reflected in the behavior of the login form, so I'm confident it IS reaching the database and querying the user table. Once authenticated, we go to the first "real" page of the app, and I get a "data access failure" error. The server's console log gets this line (redacted):

        ERROR org.hibernate.util.JDBCExceptionReporter - SELECT command denied to user 'myDbUser'@'localhost' for table 'asset'

    However, if I go to MySQL from the shell using exactly the same credentials, I have no problem at all selecting from the asset table:

        [development@tomcat01stg]$ mysql -u myDbUser -pmyDbPwd dbName
        ...
        mysql> \s
        --------------
        mysql  Ver 14.12 Distrib 5.0.77, for redhat-linux-gnu (i686) using readline 5.1
        Connection id:          199
        Current database:       dbName
        Current user:           myDbUser@localhost
        ...
        UNIX socket:            /var/lib/mysql/mysql.sock
        --------------
        mysql> select count(*) from asset;
        +----------+
        | count(*) |
        +----------+
        |       19 |
        +----------+
        1 row in set (0.00 sec)

    I've broken down my MySQL access settings, cleaned out the user and re-run the grant commands, and set up a version of the user from 'localhost' and another from '%', making sure to flush privileges... Nothing is changing the behavior of this thing. What gives?
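    A quick check worth running from the mysql shell (a sketch using the redacted names from the question): 'myDbUser'@'localhost' and 'myDbUser'@'%' are separate accounts with separate privileges, so compare what was actually granted to the account the application resolves to.

        -- see which grants each variant of the user really has
        SHOW GRANTS FOR 'myDbUser'@'localhost';
        SHOW GRANTS FOR 'myDbUser'@'%';

        -- if table rights are missing for the account Hibernate connects as, re-grant and flush
        GRANT SELECT, INSERT, UPDATE, DELETE ON dbName.* TO 'myDbUser'@'localhost';
        FLUSH PRIVILEGES;

    It is also worth confirming (e.g. with SELECT USER(), CURRENT_USER();) that the app's JDBC connection really comes in as 'myDbUser'@'localhost' and not via some other host string.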

    Read the article

  • AWS Elastic Load Balancer basic issues

    - by Jones
    I have an array of EC2 t1.micro instances behind a load balancer, and each node can manage ~100 concurrent users before it starts to get wonky. I would think that if I have two such instances, my network could manage 200 concurrent users... apparently not. When I really slam the server (blitz.io) with a full 275 concurrent users, it behaves the same as if there were just one node: it goes from a 400 ms response time to 1.6 seconds (which is expected for a single t1.micro, but not for 6). So the question is, am I simply not doing something right, or is ELB effectively worthless? Anyone have some wisdom on this? AB logs:

        Load balancer (3x m1.medium)

        Document Path:          /ping/index.html
        Document Length:        185 bytes
        Concurrency Level:      100
        Time taken for tests:   11.668 seconds
        Complete requests:      50000
        Failed requests:        0
        Write errors:           0
        Non-2xx responses:      50001
        Total transferred:      19850397 bytes
        HTML transferred:       9250185 bytes
        Requests per second:    4285.10 [#/sec] (mean)
        Time per request:       23.337 [ms] (mean)
        Time per request:       0.233 [ms] (mean, across all concurrent requests)
        Transfer rate:          1661.35 [Kbytes/sec] received

        Connection Times (ms)
                      min  mean[+/-sd] median   max
        Connect:        1    2   4.3      2      63
        Processing:     2   21  15.1     19     302
        Waiting:        2   21  15.0     19     261
        Total:          3   23  15.7     21     304

        Single instance (1x m1.medium direct connection)

        Document Path:          /ping/index.html
        Document Length:        185 bytes
        Concurrency Level:      100
        Time taken for tests:   9.597 seconds
        Complete requests:      50000
        Failed requests:        0
        Write errors:           0
        Non-2xx responses:      50001
        Total transferred:      19850397 bytes
        HTML transferred:       9250185 bytes
        Requests per second:    5210.19 [#/sec] (mean)
        Time per request:       19.193 [ms] (mean)
        Time per request:       0.192 [ms] (mean, across all concurrent requests)
        Transfer rate:          2020.01 [Kbytes/sec] received

        Connection Times (ms)
                      min  mean[+/-sd] median   max
        Connect:        1    9 128.9      3    3010
        Processing:     1   10   8.7      9     141
        Waiting:        1    9   8.7      8     140
        Total:          2   19 129.0     12    3020
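    One sanity check worth doing before blaming ELB (a sketch; the ELB hostname is a placeholder): make sure the load generator is actually spreading requests across the ELB's addresses rather than resolving the DNS name once and hammering a single endpoint, and confirm on the backends that both nodes see traffic during the test.

        # an ELB name normally resolves to one or more A records that change over time
        dig +short my-elb-1234567890.us-east-1.elb.amazonaws.com

        # during the test, tail the access log on each instance to confirm both are being hit
        tail -f /var/log/nginx/access.log    # or the equivalent path for your web server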

    Read the article

  • Why does my DD-WRT not accept SSH connections from my laptop?

    - by Vlad Seghete
    So, here is my system: I have a 2Wire AT&T modem/router which I use for wireless, and a Buffalo router flashed with DD-WRT which is physically attached to the 2Wire and set in the DMZ. I set everything up on the DD-WRT so that I can connect to it using SSH, and also so that it forwards SSH requests on a different port to one of the servers behind it. Now, when I am physically connected to the DD-WRT, all this works great, just as I would want it to: I SSH into the two different ports using the WAN IP of my network, and I get where I expect to land. If, however, I am connected over wi-fi to the 2Wire, the same commands do not work. I do not get an error, simply a timeout. I have trouble understanding this, since the DD-WRT is set in the DMZ and everything should pass to it. To further complicate the problem, I tried connecting to the same IP using my phone (wireless disabled, so really from the WAN) and, surprise, it works! If I go back on the local network by enabling the wifi, the SSH connection times out. To make this even stranger, my WAN IP address always responds to pings (in all the above situations). What could be going on here? I know what I should do: completely disable the 2Wire as a router, use it strictly as a modem, and then use all the routing capabilities of the DD-WRT. That's what I will probably end up doing anyway, but my question remains, because I really want to know what is happening here.
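    This looks like the 2Wire not hairpinning (NAT loopback) connections that originate on its own LAN and are aimed at the WAN IP. A cheap way to test that theory from the laptop while on wifi (a sketch; the host name and address are placeholders, and the commands assume a Mac or Linux laptop):

        # temporarily point the public name at the DD-WRT's LAN-side address
        echo "192.168.1.50  myhome.example.com" | sudo tee -a /etc/hosts   # placeholder IP and name
        ssh myhome.example.com      # if this works where the WAN IP times out, it's a loopback issue
        # remove the /etc/hosts line again afterwards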

    Read the article

  • NTLM, Kerberos and F5 switch issues

    - by G33kKahuna
    I'm supporting an IIS-based application that is scaled out into web and application servers; both tiers run behind IIS. The application is NTLM capable when IIS is configured to authenticate via Kerberos, and it has been working so far without a glitch. Now I'm trying to bring in two F5 switches, one in front of the web servers and another in front of the application servers. The two F5 instances (say IPs .185 and .186) are sitting on a Linux host. F5 to F5 looks for a NAT IP (say IPs .194, .195 and .196). I created DNS entries for all IPs, including the NAT addresses, and ran SETSPN commands to register the IIS service account to be trusted at the HTTP, HOST and domain level. With the web F5 turned on and each web server connecting to a designated app server, when the user connects to the web F5 domain name, trust works and the user authenticates without a problem. However, when the app load balancer is turned on and the web servers are pointed to the new F5 app domain name, the user gets a 401. The IIS log shows no authenticated username and a 401 status. Wireshark does show a Negotiate ticket header being passed into the system. Any ideas or suggestions are much appreciated. Please advise.
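    A sketch of the SPN check that usually matters in this situation (the account and host names are placeholders, not from the question): Kerberos needs an HTTP SPN for the new app-tier name the web servers now call, registered to the account the IIS application pool runs as, and a duplicate or missing SPN makes the negotiation silently fail with a 401.

        :: list what is already registered to the IIS service account
        setspn -L DOMAIN\svc-iis

        :: check for duplicate registrations of the app-tier VIP name
        setspn -Q HTTP/app-vip.example.com

        :: register the new name against the service account if it is missing
        setspn -S HTTP/app-vip.example.com DOMAIN\svc-iis

    Delegation settings (if the web tier forwards the user's identity to the app tier) would also need to include the new SPN.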

    Read the article

  • CloudFront for dynamic content CDN

    - by Elad Lachmi
    I would like to use CF as a CDN for my entire site, including static and dynamic content. I have been using CF for static content for a while and I am very happy with the results. I am now doing a POC of putting the web server completely behind CF. For the dynamic content I created a new distribution and set the origin to be my web server. Right now I'm looking to test the solution, so I have the web server on the original domain and the CF distribution on the Amazon domain. This works, with the exception of HTTPS URLs and POST requests. For HTTPS requests, I see the requests are forwarded to the original site domain for now, but how will CF handle them when I move the distribution to the www CNAME? What configuration changes should I make so that CF forwards HTTPS requests to the origin? For POST requests, I want the POST to be made to the origin server. Can I set this up in CF? Finally, the site has membership. Can I configure CF to pull all content from the origin if the user is logged in? Sorry for the long question. I'm a little lost, and documentation for dynamic CF is still kind of scarce. Thank you!

    Read the article

  • IIS6 Multiple SSL websites to a single HTTP website?

    - by docflabby
    Running an IIS6 server on Windows 2003. All the websites use ASP.NET. I have a number of separate HTTP websites: www.domain1.com, www.domain2.com, www.domain3.com. I also have a separate HTTPS website, www.secure.com. These websites all run on the same server. I now wish to integrate the content of www.secure.com into each of the domains in a transparent way, such that each domain, despite having its own SSL connection, displays the same site. The complication is that www.secure.com needs to know which website the connection has come from, to apply the appropriate branding. The idea behind this is to have only one website, and one location, while keeping the core website brand. https://domain1.com looks a lot better from a marketing point of view (and avoids users getting confused about what our secure website is).

        SSL www.domain1.com/secure - displays www.secure.com (branded domain1)
        SSL www.domain2.com/secure - displays www.secure.com (branded domain2)
        SSL www.domain3.com/secure - displays www.secure.com (branded domain3)

    What would be the best way of achieving this? I'm open to using additional software if necessary. Would a reverse proxy be suitable for this situation?

    Read the article

  • Change Windows 7 Explorer's Details Pane limits

    - by Paul
    For some reason, MS decided to completely kill the status bar's functionality in Win7 (and maybe Vista, but I don't know for sure). I have tried all possible options such as Classic Shell and so on. Basically, the one thing I miss most is seeing at a glance the total size of my selected files. I know I can press Alt+Enter or whatever, but that's not the point. The point is that the so-called 'details' pane stops providing details if more than 15 files are selected! WTH? I cannot understand the reason behind such an arbitrary limit, which doesn't seem to be user-configurable at all. Anyway, what I'm looking for is a way to change that limit, either via the registry or otherwise. If changing the limit isn't possible, would it be possible for some programming genius to create a very small, optimized, light-weight, non-resource-hungry (you get the idea!) program whose sole purpose is to automatically click the "Show more details..." link in the details pane after every file selection? For bonus points, the program should wait for a second or two after the file selection is complete before doing this, so that selecting multiple files in quick succession doesn't hit the HDD repeatedly. Any takers for the challenge? :)

    Read the article

  • Port forwarding problem

    - by Claudiu
    I want to set up an SVN server on my computer, so it's available from anywhere. I think I set up the repository correctly, using CollabSVN. If I go to Repo-Browser with TortoiseSVN and point it to svn://localhost:3690, it shows the proper repository. The problem now is that I'm behind a router. My local IP is 192.168.1.45. Doing svn://192.168.1.45:3690 also works. My global IP is, say, x.x.x.x. Just doing svn://x.x.x.x:3690 doesn't work, which makes sense, since I have to set up port forwarding. I'm using a Verizon router. Using their web interface (on 192.168.1.1) I added the following port forwarding rule:

        IP Address forward to: 192.168.1.45
        Source Ports: Any
        Dest Ports: 3690
        Forward to: 3690
        Protocol: TCP

    However, even after applying this rule, going to svn://x.x.x.x:3690 doesn't work. It takes a few seconds to fail, then says that the connection couldn't be established because the server connected to didn't respond properly after a period of time. What's interesting is that a random port, like svn://x.x.x.x:36904, fails immediately, saying that the target machine actively refused the connection. So I figure that the forwarding rule did something, but not fully what was necessary. Any ideas on how to get this working? The router model is MI424-WR and the firmware version is 4.0.16.1.56.0.10.12.3.

    UPDATE: I also tried setting the destination port to 45000 while still forwarding to 3690, in case something was wrong with the lower-numbered ports, but to no avail. I also tried port 80 to port 3690, still all in vain.
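    Two classic causes are worth ruling out first (a sketch; the command assumes a Windows command prompt, since TortoiseSVN is in play): svnserve listening only on localhost, and testing the public address from inside the LAN on a router that does not do NAT loopback.

        :: confirm svnserve is listening on 0.0.0.0:3690, not 127.0.0.1:3690
        netstat -an | findstr 3690

    Then test svn://x.x.x.x:3690 from a connection that is genuinely outside your LAN (a phone on mobile data, a friend's machine, or an online port checker) rather than from behind the same router.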

    Read the article

  • Enterprise class storage best practices

    - by churnd
    One thing that has always perplexed me is storage best practices. Filesystems brag about how they can be petabytes or exabytes in size. Yet I do not know many sysadmins who are willing to let a single volume grow over several terabytes. I do know the primary reason behind this is how long it would take to rebuild the array should a drive fail: the more drives in a single LUN, the longer this takes, and the greater your risk of losing another drive while the rebuild is taking place. Then there are usage reasons. Admins will carve out a LUN based on how much space they think needs to be allocated to the project. It seems more practical to me for the LUN to be one large array and to use quotas. I understand this wouldn't satisfy every requirement (iSCSI), but I see a lot of NAS systems (NFS) managed this way. I also understand that the underlying volumes can be grown/shrunk as needed quite easily, but wouldn't it be less "risky" to use quotas rather than manipulating volumes and bringing possible data loss into the equation? There may be some other reasons I'm missing, so please enlighten me. Can we not expect filesystems to ever be so large? Are we waiting for the hardware to get faster to cut down on rebuild times?
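    As a sketch of the quota-based approach (ZFS is used purely as an example here; the pool and dataset names are made up): one large pool, a dataset per project, and quotas/reservations instead of hand-carved LUNs.

        # one big pool, one dataset per project, each with a hard cap
        zfs create tank/projects/alpha
        zfs set quota=2T tank/projects/alpha
        zfs set reservation=500G tank/projects/alpha   # optional guaranteed floor

        # growing a project later is a one-line change, with no volume resize involved
        zfs set quota=4T tank/projects/alpha

    The rebuild-time concern is separate: it is driven by the size and layout of the underlying vdevs/RAID groups, not by how the space is sliced up above them.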

    Read the article

  • Help with Pure-FTPd

    - by hai
    I set up Pure-FTPd on FreeBSD behind a firewall. Pure-FTPd is configured for passive-mode FTP (port range 50400-50600), and the firewall has ports 50400-50600 open (both inbound and outbound). But when I try to connect with an FTP client, it does not connect. The client reports:

        Status: Connecting to 210.245.89.95:21...
        Status: Connection established, waiting for welcome message...
        Response: 220---------- Welcome to Pure-FTPd [privsep] ----------
        Response: 220-You are user number 1 of 50 allowed.
        Response: 220-Local time is now 13:20. Server port: 21.
        Response: 220-IPv6 connections are also welcome on this server.
        Response: 220 You will be disconnected after 15 minutes of inactivity.
        Command: USER bk
        Response: 331 User bk OK. Password required
        Command: PASS
        Response: 230 OK. Current directory is /
        Command: SYST
        Response: 215 UNIX Type: L8
        Command: FEAT
        Response: 211-Extensions supported:
        Response: EPRT
        Response: IDLE
        Response: MDTM
        Response: SIZE
        Response: REST STREAM
        Response: MLST type;size*;sizd*;modify*;UNIX.mode*;UNIX.uid*;UNIX.gid*;unique*;
        Response: MLSD
        Response: ESTA
        Response: PASV
        Response: EPSV
        Response: SPSV
        Response: ESTP
        Response: 211 End.
        Status: Connected
        Status: Retrieving directory listing...
        Command: PWD
        Response: 257 "/" is your current location
        Command: TYPE I
        Response: 200 TYPE is now 8-bit binary
        Command: PASV
        Response: 227 Entering Passive Mode (210,245,88,98,138,1)
        Command: MLSD
        Error: Connection timed out
        Error: Failed to retrieve directory listing
        Status: Connecting to 210.245.88.98:21...
        Status: Connection established, waiting for welcome message...

    Please help.
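    Note that the 227 reply above tells the client to open its data connection to 210.245.88.98, while the control connection went to 210.245.89.95; if the advertised address is not reachable from the client, passive data connections will time out exactly like this. A sketch of the usual Pure-FTPd knobs for that (the directive names come from pure-ftpd.conf; the file location depends on how it was installed on FreeBSD):

        # pure-ftpd.conf
        PassivePortRange    50400 50600
        ForcePassiveIP      210.245.89.95   # the public address clients actually connect to

        # or, if Pure-FTPd is started with command-line flags instead of a config file
        pure-ftpd -p 50400:50600 -P 210.245.89.95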

    Read the article

  • Can't install MySQL 5.1 on a Windows machine because the last install left artifacts.

    - by Zombies
    After uninstalling MySQL 5.1 (the 64-bit version) I cannot install the Win32 version! Apparently the devs felt it necessary to leave helpful artifacts behind? I have rebooted my machine, but to no effect. Running this:

        C:\Users\User1>net start mysql
        The MySQL service is starting.
        The MySQL service could not be started.
        A system error has occurred.
        System error 1067 has occurred.
        The process terminated unexpectedly.

    And running this:

        C:\Program Files (x86)\MySQL\MySQL Server 5.1\bin>mysqld --console
        100213 10:52:58 [Note] Plugin 'FEDERATED' is disabled.
        InnoDB: Error: log file .\ib_logfile0 is of different size 0 10485760 bytes
        InnoDB: than specified in the .cnf file 0 25165824 bytes!
        100213 10:52:59 [ERROR] Plugin 'InnoDB' init function returned error.
        100213 10:52:59 [ERROR] Plugin 'InnoDB' registration as a STORAGE ENGINE failed.
        100213 10:52:59 [ERROR] Unknown/unsupported table type: INNODB
        100213 10:52:59 [ERROR] Aborting
        100213 10:52:59 [Note] mysqld: Shutdown complete

    Update: for some reason it looks like it is installing the 32-bit DB into the old 64-bit directory... will look into this... (the bin directory is going into the 32-bit Program Files directory).
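    The InnoDB lines point at leftover ib_logfile files from the old install whose size no longer matches the new configuration. A sketch of the usual cleanup (the data directory shown is the common default and may differ; the files are moved rather than deleted so nothing is lost):

        :: stop any half-started service, then move the stale InnoDB log files aside
        net stop mysql
        cd "C:\ProgramData\MySQL\MySQL Server 5.1\data"
        move ib_logfile0 ib_logfile0.old
        move ib_logfile1 ib_logfile1.old
        net start mysql

    Alternatively, setting innodb_log_file_size in my.ini to match the old 10485760-byte (10 MB) files should also let the server come up.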

    Read the article

  • Torrent upload ratio not updated on Synology DS212+

    - by user179271
    I have a Synology DS212+ NAS running DSM 4.2-3211 (the current version). I use it for several purposes, including torrent downloads using Download Station on a tracker that requires authentication. My problem is that my download/upload ratio isn't updated, so it constantly falls. My NAS is behind a router, and I configured NAT to forward ports 6890 to 6999 to the internal IP address of the NAS. Here are the Download Station settings: TCP port: 6990, sharing ratio: 900%, sharing time: infinite, max download speed: 0 (no limit), max upload speed: 0 (no limit), BT protocol encryption: checked, max number of peers allowed per torrent file: 4000, DHT: checked, with port 6889. When the DHT option is not checked, the NAS doesn't upload any files; I don't know what this option is for. Can someone help me solve this problem? Did I miss a step, or does it come from the NAT? How is authentication managed by Download Station? (Sorry for my English.) Thanks.

    Read the article

  • Configuring network route between two routers on home network

    - by Paul
    I have a home network. The main router connected to the internet (and providing wifi) is a Netopia box; connected to it is a Linksys router. Everything currently works: I can connect via the wireless network and get to the internet, and machines connected to the Linksys can talk to each other and reach the internet. Both routers are configured to serve addresses via DHCP (Netopia: 192.168.1.1-192.168.1.99; Linksys: 192.168.0.1-192.168.0.100). Here's how they are connected:

        Internet <-> Netopia w/wifi (192.168.1.254) <-> Linksys (192.168.0.1)

    I decided I really need to allow wireless connections to also communicate with machines behind the Linksys router. Currently the Linksys is configured to obtain an IP address via DHCP. I thought this would be straightforward. I configured the Linksys to have a static IP address:

        IP: 192.168.1.100
        Mask: 255.255.255.0
        GW: 192.168.1.254

    Then I configured a static route on the Netopia:

        Network: 192.168.0.0
        Mask: 255.255.255.0
        GW: 192.168.1.100

    So it should now look like this:

        Internet <-> Netopia w/wifi (192.168.1.254) <-> (192.168.1.100) Linksys (192.168.0.1)

    I reset both routers. I cannot ping the Netopia (192.168.1.254) from inside the Linksys network, and if I attempt to ping 192.168.0.1 from a wifi connection I get a "Destination host not available" error. Obviously I'm missing something, but I'm not sure where. Any ideas on what I'm missing?

    Read the article

  • DDoS: nulling all but some IPs, and other options?

    - by Prix
    I am looking for some information regarding DDoS in the following scenario: I have a server that is behind a Cisco Guard and it will be DDoSed. I only care about a set list of IPs, which are by no means the attackers. Is it possible to null all other IPs but those on this list and still get any response from my server, or in the long run, no matter what I do, will I just go down like a fly if they have enough DDoS power? Is there any recommended company out there that can actually cope with a DDoS? My server will mainly run several clients that connect to an external server, and all it needs access to is my local MySQL and the private network so I can access it. There will be no other services running, such as web or FTP, at least not on the external IPs of the server; if I ever need any of those services, they will be on the private network. MySQL will be available externally only to one safe IP not known by anyone but me, and internally on localhost and the private network. Are there any solutions?
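    Host-level filtering is easy to express (a sketch with iptables; the client addresses are placeholders), but it only helps with traffic that actually reaches the server - a volumetric flood that saturates the uplink has to be absorbed upstream by the Cisco Guard or the provider, which is why the allow-list is usually applied there as well.

        # allow loopback and established flows first, then only the known-good clients
        iptables -A INPUT -i lo -j ACCEPT
        iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
        iptables -A INPUT -s 198.51.100.10 -j ACCEPT   # placeholder client IP
        iptables -A INPUT -s 198.51.100.11 -j ACCEPT   # placeholder client IP
        iptables -P INPUT DROP                         # everything else is dropped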

    Read the article

  • Under FreeBSD, can a VLAN interface have a smaller MTU than the primary interface?

    - by larsks
    I have a system with two physical interfaces combined into a LACP aggregation group. That LACP channel carries two VLANs: one untagged (the "native VLAN") and one using VLAN tagging. This gives us:

        lagg0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500
                options=19b<RXCSUM,TXCSUM,VLAN_MTU,VLAN_HWTAGGING,VLAN_HWCSUM,TSO4>
                ether 00:25:90:1d:fe:8e
                inet 10.243.24.23 netmask 0xffffff00 broadcast 10.243.24.255
                media: Ethernet autoselect
                status: active
                laggproto lacp
                laggport: em1 flags=1c<ACTIVE,COLLECTING,DISTRIBUTING>
                laggport: em0 flags=1c<ACTIVE,COLLECTING,DISTRIBUTING>

        vlan0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500
                options=3<RXCSUM,TXCSUM>
                ether 00:25:90:1d:fe:8e
                inet 10.243.16.23 netmask 0xffffff80 broadcast 10.243.16.127
                media: Ethernet autoselect
                status: active
                vlan: 610 parent interface: lagg0

    Is it possible to set a 9K MTU on lagg0 while preserving the 1500-byte MTU on vlan0? Normally I would simply try this out, but this is actually a vendor-supported platform and I am loath to make changes "behind the back" of their administration interface.
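    For reference, a sketch of how one would test this on a lab box rather than on the vendor-managed system (FreeBSD generally allows a vlan child to carry an MTU lower than or equal to its parent's, but that is exactly the assumption to verify):

        # on a test system only: raise the parent, keep the tagged vlan at 1500
        ifconfig lagg0 mtu 9000
        ifconfig vlan0 mtu 1500
        ifconfig lagg0        # confirm the parent reports mtu 9000
        ifconfig vlan0        # confirm the child still reports mtu 1500

    Note that untagged ("native VLAN") traffic rides directly on lagg0, so it would be subject to the 9000-byte MTU.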

    Read the article

  • Debian, 2 NICs: load-balancing or aggregating with the same gateway

    - by pouney
    Hi, I have one server with two NICs connected to one switch, using the same gateway. Behind the switch we have the internet.

        |Debian| - eth0 - switch - internet
                 - eth1 -

    I don't understand how to load-balance between eth0 and eth1; the inbound/outbound traffic always uses eth1. This is the config:

        # The primary network interface
        allow-hotplug eth0
        auto eth0
        iface eth0 inet static
            address 192.168.248.82
            netmask 255.255.255.240
            network 192.168.248.80
            broadcast 192.168.248.95
            gateway 192.168.248.81

        allow-hotplug eth1
        auto eth1
        iface eth1 inet static
            address 192.168.248.83
            netmask 255.255.255.240
            network 192.168.248.80
            broadcast 192.168.248.95
            gateway 192.168.248.81

        Kernel IP routing table
        Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
        192.168.248.80  0.0.0.0         255.255.255.240 U     0      0        0 eth1
        192.168.248.80  0.0.0.0         255.255.255.240 U     0      0        0 eth0
        0.0.0.0         192.168.248.81  0.0.0.0         UG    0      0        0 eth1
        0.0.0.0         192.168.248.81  0.0.0.0         UG    0      0        0 eth0

    The IPs aren't real; they're just for the example. Does anybody have an idea of the correct routing so that 192.168.248.82 uses eth0 and 192.168.248.83 uses eth1? I have found many examples for multiple gateways, but here the gateway is the same. Thanks all. Regards
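    A sketch of the usual answer with iproute2 policy routing (same addresses as above; not persistent across reboots as written): give each source address its own routing table and default route, so traffic from each IP leaves through its own NIC, and optionally add a multipath default route to spread outbound connections.

        # per-source routing: .82 goes out via eth0, .83 via eth1
        ip route add default via 192.168.248.81 dev eth0 table 10
        ip route add default via 192.168.248.81 dev eth1 table 20
        ip rule add from 192.168.248.82 table 10
        ip rule add from 192.168.248.83 table 20

        # optional: balance new outbound connections across both NICs
        ip route replace default scope global \
            nexthop via 192.168.248.81 dev eth0 weight 1 \
            nexthop via 192.168.248.81 dev eth1 weight 1

    For true aggregation (a single logical link), Linux bonding with a matching switch configuration would be the other route, but that is a different setup from the dual-IP one shown above.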

    Read the article
