To set a proxy in the Chromium browser, one needs to go to
Settings → Under the Hood → Change Proxy Settings → Network Proxy.
That's too complicated. How do I set http_proxy in the shell? I've tried
export http_proxy=http://127.0.0.1:8080/
but it doesn't seem to work.
Also, if you only want to set the proxy for the Chromium browser (not your entire network), the command line is the only way to set it just for the browser. How can one set the proxy on Chromium, using the command line, to solve this problem?
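For what it's worth, Chromium also accepts a proxy switch directly on the command line, which scopes the proxy to that one browser session (`--proxy-server` is a documented Chromium switch; the port mirrors the export above):

```shell
# launch Chromium with an HTTP proxy for this session only
chromium-browser --proxy-server="http://127.0.0.1:8080"
```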
<b>the linux experience:</b> "So I recently decided I wanted to find out more about Windows 7, have the opportunity to form an opinion about it. Having mostly heard good things, I wanted to give it a try and find out if the guys at Redmond finally got it right."
<b>HowtoForge: </b>"This document describes how to install a mail server based on Postfix that is based on virtual users and domains, i.e. users and domains that are in a MySQL database."
I set up an OpenVPN server on my VPS, using this guide:
http://vpsnoc.com/blog/how-to-install-openvpn-on-a-debianubuntu-vps-instantly/
And I can connect to it without problems.
Connect, that is, but no traffic is redirected: when I try to load a web page while connected to the VPN, I just get an error.
This is the config file it generated:
dev tun
server 10.8.0.0 255.255.255.0
ifconfig-pool-persist ipp.txt
ca ca.crt
cert server.crt
key server.key
dh dh1024.pem
push "route 10.8.0.0 255.255.255.0"
push "redirect-gateway"
comp-lzo
keepalive 10 60
ping-timer-rem
persist-tun
persist-key
group daemon
daemon
This is my iptables.conf
# Generated by iptables-save v1.4.4 on Sat May 7 13:09:44 2011
*raw
:PREROUTING ACCEPT [37938267:10998335127]
:OUTPUT ACCEPT [35616847:14165347907]
COMMIT
# Completed on Sat May 7 13:09:44 2011
# Generated by iptables-save v1.4.4 on Sat May 7 13:09:44 2011
*nat
:PREROUTING ACCEPT [794948:91051460]
:POSTROUTING ACCEPT [1603974:108147033]
:OUTPUT ACCEPT [1603974:108147033]
-A POSTROUTING -s 10.8.0.0/24 -o eth0 -j MASQUERADE
-A POSTROUTING -s 10.8.0.0/24 -o eth1 -j MASQUERADE
-A POSTROUTING -s 10.8.0.0/24 -o venet0 -j MASQUERADE
COMMIT
# Completed on Sat May 7 13:09:44 2011
# Generated by iptables-save v1.4.4 on Sat May 7 13:09:44 2011
*mangle
:PREROUTING ACCEPT [37938267:10998335127]
:INPUT ACCEPT [37677226:10960834925]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [35616847:14165347907]
:POSTROUTING ACCEPT [35680187:14169930490]
COMMIT
# Completed on Sat May 7 13:09:44 2011
# Generated by iptables-save v1.4.4 on Sat May 7 13:09:44 2011
*filter
:INPUT ACCEPT [37677226:10960834925]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [35616848:14165347947]
-A INPUT -i eth0 -j LOG --log-prefix "BANDWIDTH_IN:" --log-level 7
-A FORWARD -o eth0 -j LOG --log-prefix "BANDWIDTH_OUT:" --log-level 7
-A FORWARD -i eth0 -j LOG --log-prefix "BANDWIDTH_IN:" --log-level 7
-A OUTPUT -o eth0 -j LOG --log-prefix "BANDWIDTH_OUT:" --log-level 7
COMMIT
# Completed on Sat May 7 13:09:44 2011
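One thing worth double-checking alongside the MASQUERADE rules above: a `redirect-gateway` setup only works if the kernel forwards packets at all, and IP forwarding is off by default on most VPS images (a quick check, assuming a standard Debian/Ubuntu sysctl layout):

```shell
# 1 = the kernel forwards packets between tun0 and eth0/venet0; 0 = it doesn't
cat /proc/sys/net/ipv4/ip_forward
# enable at runtime (persist it via net.ipv4.ip_forward=1 in /etc/sysctl.conf)
sysctl -w net.ipv4.ip_forward=1
```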
I have a collection of Word files with lots of formula objects made in MathType.
I know there's a way to mass-convert .doc files to the OpenOffice format, but it doesn't guarantee that the formulas will be transferred smoothly.
I was wondering if someone has already figured out how to do that.
Thanks in advance
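For the mass-conversion half at least, the office suite can be driven from the shell; whether the MathType objects survive is exactly the open question (the headless switches below are from recent LibreOffice builds, so treat this as a sketch):

```shell
# convert every .doc in the current directory to ODF Text, into ./converted
soffice --headless --convert-to odt --outdir converted *.doc
```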
I have 100+ GB of files per day incoming on one machine. (File size is arbitrary and can be adjusted as needed.)
I have several other machines that do some work on these files.
I need to reliably deliver each incoming file to the worker machines. A worker machine should be able to free its HDD from a file once it is done working with it.
It is preferable that a file would be uploaded to the worker only once and then processed in place, and then deleted, without copying somewhere else — to minimize already high HDD load. (Worker itself requires quite a bit of bandwidth.)
Please advise a solution that is not based on Java. None of the existing replication solutions I've seen can do the "free the HDD from the file once processed" part, but maybe I'm missing something...
A preferable solution should work with files (from the POV of our business logic code), not require the business logic to connect to some queue or other. (Internally the solution may use whatever technology it needs to — except Java.)
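To make the requirements concrete, the simplest shape I can imagine (the host name is a placeholder, and rsync availability is an assumption) is a push loop that removes the local copy only after the transfer is confirmed, leaving the worker to unlink the file itself when it is done:

```shell
#!/bin/sh
# push each incoming file to a worker; --remove-source-files deletes the
# local copy only after rsync has verified the transfer (worker1 is made up)
for f in /data/incoming/*; do
    [ -f "$f" ] || continue
    rsync --remove-source-files "$f" worker1:/data/inbox/ \
        && echo "delivered: $f"
done
```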
I found this in my server's log:
sm-mta[11410]: r9BKb6YY021119: to=<[email protected]>, ctladdr=<[email protected]> (33/33), delay=2+07:24:18, xdelay=00:00:01, mailer=esmtp, pri=29911032, relay=mail1.mkuku.com. [58.22.50.83], dsn=4.0.0, stat=Deferred: Connection refused by mail1.mkuku.com.
This message is repeated every 10-30 seconds with a different "to" address.
What is this? Is my server being used to send spam?
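A quick way to judge that is to look at the queue itself; a queue full of deferred messages to strangers usually means something local is injecting them (standard sendmail commands):

```shell
mailq | head -n 40       # same as `sendmail -bp`: list the queued messages
mailq | grep -c Deferred # rough count of stuck messages
```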
I'm using Celery 2.5.1 with Django on a micro EC2 instance with 613 MB of memory, and as such have to keep memory consumption down.
Currently I'm using it only for the scheduler "celery beat" as a web interface to cron, though I hope to use it for more in the future. I've noticed it is the biggest consumer of memory on my micro machine even though I have configured the number of workers to one. I don't have many other options set in settings.py:
import djcelery
djcelery.setup_loader()
BROKER_BACKEND = 'djkombu.transport.DatabaseTransport'
CELERYBEAT_SCHEDULER = 'djcelery.schedulers.DatabaseScheduler'
CELERY_RESULT_BACKEND = 'database'
BROKER_POOL_LIMIT = 2
CELERYD_CONCURRENCY = 1
CELERY_DISABLE_RATE_LIMITS = True
CELERYD_MAX_TASKS_PER_CHILD = 20
CELERYD_SOFT_TASK_TIME_LIMIT = 5 * 60
CELERYD_TASK_TIME_LIMIT = 6 * 60
Here are the details via top:
PID USER NI CPU% VIRT SHR RES MEM% Command
1065 wuser 10 0.0 283M 4548 85m 14.3 python manage_prod.py celeryd --beat
1025 wuser 10 1.0 577M 6368 67m 11.2 python manage_prod.py celeryd --beat
1071 wuser 10 0.0 578M 2384 62m 10.6 python manage_prod.py celeryd --beat
That's about 214mb of memory (and not much shared) to run a cron job occasionally. Have I done anything wrong, or can this be reduced about ten-fold somehow? ;)
Update: here's my upstart config:
description "Celery Daemon"
start on (net-device-up and local-filesystems)
stop on runlevel [016]
nice 10
respawn
respawn limit 5 10
chdir /home/wuser/wuser/
env CELERYD_OPTS=--concurrency=1
exec sudo -u wuser -H /usr/bin/python manage_prod.py celeryd --beat --concurrency=1 --loglevel info --logfile /var/tmp/celeryd.log
Update 2:
I notice there is one root process, one user child process, and two grandchildren from that. So I think it isn't a matter of duplicate startup.
root 34580 1556 sudo -u wuser -H /usr/bin/python manage_prod.py celeryd
wuser 577M 67548 +- python manage_prod.py celeryd --beat --concurrency=1
wuser 578M 63784 +- python manage_prod.py celeryd --beat --concurrency=1
wuser 271M 76260 +- python manage_prod.py celeryd --beat --concurrency=1
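For reference, one layout I have not tried yet is to stop embedding beat in a worker and run only the scheduler process, since cron-style dispatch does not need a worker pool of its own (`celerybeat` is the djcelery management command; whether it alone covers my use case is an assumption):

```shell
# run only the scheduler; no worker pool, so only one python process
python manage_prod.py celerybeat --loglevel=info
```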
I have a project hosted with gitolite on my own server, and I would like to deploy the whole project from the gitolite bare repository to an Apache-accessible place via a post-receive hook.
I have the following hook content:
echo "starting deploy..."
WWW_ROOT="/var/www_virt.hosting/domain_name/htdocs/"
GIT_WORK_TREE=$WWW_ROOT git checkout -f
# note: no `exec` before the next two commands; exec replaces the shell,
# so everything after the first exec'd command would never run
chmod -R 750 "$WWW_ROOT"
chown -R www-data:www-data "$WWW_ROOT"
echo "finished"
The hook does not finish without an error message:
chmod: changing permissions of `/var/www_virt.hosting/domain_name/file_name': Operation not permitted
which means that git does not have enough rights to do it.
The git source path is /var/lib/gitolite/project.git/, which is owned by gitolite:gitolite
And with these permissions Redmine (which runs under the www-data user) can't access the git repository to fetch changes.
The whole project should be placed here: /var/www_virt.hosting/domain_name/htdocs/, which is owned by www-data:www-data.
What changes should I make so that the post-receive hook works properly and Redmine can work with the repository?
What I did is:
# id www-data
uid=33(www-data) gid=33(www-data) groups=33(www-data),119(gitolite)
# id gitolite
uid=110(gitolite) gid=119(gitolite) groups=119(gitolite),33(www-data)
This did not help.
I want everything to work without problems: Apache (viewing the project), Redmine (reading the project's source files under git), and git (deploying to the www-data-accessible path).
What should I do?
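To make the idea concrete, the usual fix is shared-group write access on the deploy tree rather than relying on supplementary groups alone (paths from above; a sketch to run as root, not a tested recipe):

```shell
# let the gitolite user write via group permissions, keep www-data as owner
chown -R www-data:gitolite /var/www_virt.hosting/domain_name/htdocs/
chmod -R g+w /var/www_virt.hosting/domain_name/htdocs/
# setgid on directories so new files inherit the gitolite group
find /var/www_virt.hosting/domain_name/htdocs/ -type d -exec chmod g+s {} +
```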
I just installed a fresh Kubuntu 12.10 on a machine beside Windows 7. After a successful installation, I rebooted and wanted to log in. But when I type my password and hit Enter, some command-line screen shows up for a split second and then it throws me back to the login screen without any error message. It's hard to spot what the command-line text says, but I couldn't see any error or anything like that. Anyway, when I log in as guest (without a password), everything works fine. Also, when going to a system command line (using Ctrl+Alt+F1), I can log in with my account without any problems.
Does anyone have a clue what is going on and how to fix it?
Good day,
I am having problems manually extracting domains from a Plesk 9.5 backup that was FTPed onto my backup server. I have followed the article http://kb.parallels.com/en/1757 using method 2. The problem is here:
zcat DUMP_FILE.gz > DUMP_FILE
My backup file CP_1204131759.tar is a tar archive and zcat does not work with it. So I proceeded to run the command: cat CP_1204131759.tar > CP_1204131759.
But when I try # cat CP_1204131759 | munpack
I get an error that munpack did not find anything to read from standard input.
I went on to extract the tar backup file using the xvf flags and got a lot of files (20) similar to these ones:
CP_sapp-distrib.7686-0_1204131759.tgz CP_sapp-distrib.7686-35_1204131759.tgz CP_sapp-distrib.7686-6_1204131759.tgz
How best can I extract the httpdocs of a domain from this server wide Plesk 9.5.4 backup?
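The layout seems to be a tar of tgz archives, so before touching the real backup I replayed the nesting with dummy data to confirm the mechanics (every name below is made up):

```shell
# build a fake nested backup: a tar containing a tgz containing httpdocs
mkdir -p demo/domains/example.com/httpdocs
echo 'hello' > demo/domains/example.com/httpdocs/index.html
tar -czf inner.tgz -C demo domains
tar -cf CP_demo.tar inner.tgz

# recover just the httpdocs tree from the outer archive
tar -xf CP_demo.tar inner.tgz
tar -tzf inner.tgz | grep httpdocs            # locate the path first
tar -xzf inner.tgz domains/example.com/httpdocs
cat domains/example.com/httpdocs/index.html   # prints "hello"
```

On the real backup the inner archives would be the CP_sapp-distrib.*.tgz files, so listing each with `tar -tzf` should show which one holds a given domain's httpdocs.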
What I need is a program that, given (as a command-line argument) a directory with more directories inside, each containing 4 pics, makes a thumbnail of the 4 files, glues them together (2 rows, 2 columns), and names that image after the directory.
I think it could be done with a combination of a program and shell scripting (I'm experienced in M$, but new to Linux).
Some real examples would be great.
Thanks in advance
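As a sketch of the shell-scripting half (assuming ImageMagick is installed and the pictures are JPEGs; both are assumptions):

```shell
#!/bin/sh
# For each subdirectory of the directory given as $1, tile its four
# pictures into a 2x2 sheet named after the subdirectory.
for d in "$1"/*/; do
    name=$(basename "$d")
    montage "$d"*.jpg -tile 2x2 -geometry 200x200+2+2 "$1/$name.jpg"
done
```

`montage` resizes each input to the `-geometry` size and glues them into the `-tile` grid, which is exactly the 2x2 contact sheet described.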
I'm trying to run mysqld inside a chroot environment. Here's the situation:
when I run mysqld as root, I can connect to my databases. But when I start mysqld via the init.d scripts, mysql gives me an error.
$ mysql --user=root --password=password
ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (111)
So I guess, I need to change file permissions of some files. But which ones? Oh and in case you are wondering '/var/run/mysqld/mysqld.sock' is owned by 'mysql' user.
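While sorting out the permissions, one way to confirm the server itself is healthy inside the chroot is to bypass the socket entirely and connect over TCP (this assumes networking is not disabled in my.cnf):

```shell
mysql --protocol=tcp -h 127.0.0.1 --user=root --password=password
```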
EDIT: strace output looks something like this
[pid 20599] <... select resumed> ) = 0 (Timeout)
[pid 20599] time (NULL) = 12982215237
[pid 20599] select(0, NULL, NULL, NULL, {1, 0} <unfinished ...>
<b>Tech Drive-in:</b> "But apart from these eye candy, in a more subtle way, a number of new applications are also in the pipeline. Let's explore these new comers."
Hello all,
How can I totally disable the prompts that appear while installing a Debian package? I've used all the options that I've found, but some packages are still prompting.
I'm using this command:
apt-get -y --allow-unauthenticated --force-yes -o DPkg::Options::="--force-overwrite" -o DPkg::Options::="--force-confdef" install x11-common
Why is the x11-common package still prompting? How can I get rid of these prompts?
Thanks in advance
--Victor
Edit: just to clarify, the prompts are not yes/no prompts; they are open questions in a coloured screen (the typical two-colour screen), and I want the default option of these questions to be used.
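For reference, this is the variant I am about to try, on the theory that the coloured screens are debconf questions and therefore need the debconf frontend set rather than more apt options (the theory is an assumption):

```shell
# noninteractive makes debconf take the default answer for every question
DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes \
  -o DPkg::Options::="--force-overwrite" \
  -o DPkg::Options::="--force-confdef" \
  install x11-common
```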
For reasons too long to explain, I reluctantly removed Ubuntu from my computer. After completely removing it and deleting the partition that it was installed onto, I discovered that I still had two Ubuntu entries in the boot order in my BIOS menu. I deleted them by following the instructions in this answer:
http://askubuntu.com/a/63613/54934
As I was doing it, everything appeared to go smoothly. However, upon reboot one of them came back. What's going on here? How do I delete it permanently?
I'll gladly provide any other information that may be needed to diagnose the problem.
Thanks.
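In case it helps with diagnosis: the same entries the BIOS menu shows can be listed and deleted from a Linux live USB with efibootmgr, which is what the linked answer automates (the entry number below is a placeholder):

```shell
efibootmgr                  # list firmware boot entries, e.g. "Boot0003* ubuntu"
sudo efibootmgr -b 0003 -B  # delete entry Boot0003
```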
I have an FTP server that's on a low bandwidth connection. We want to set it up with a second IP address on a much higher bandwidth connection. I set up the second interface with a static IP address on the faster connection. This unfortunately does not work. I can verify that the second IP address works perfectly when I disable the first IP address.
What do I need to do to get two separate interface IP addresses on different subnets working on the same server?
My Ubuntu is 12.04.
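For context, the standard approach to this is source-based policy routing, so replies leave through the interface that owns the address they were sent to; a minimal iproute2 sketch with made-up addresses:

```shell
# second NIC 203.0.113.10/24 with gateway 203.0.113.1 (example addresses)
ip route add default via 203.0.113.1 dev eth1 table 100
ip rule add from 203.0.113.10 table 100
ip route flush cache
```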
I have just started learning Linux, and Ubuntu in particular.
To memorise commands quicker, I'd like to avoid the GUI.
But there are some problems: I don't know where installed programs are in order to launch them.
For example, I have a PDF file. I know that there is a program to view such files.
If I were using the GUI, I would just click on the PDF file and could see that I'm using Document Viewer 3.4.0.
Likewise, I'd like to launch Firefox. Even if I know it is installed, how to find the file to launch using just the CLI is a mystery to me.
Could you suggest anything?
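A couple of commands answer exactly this question of locating programs (that `evince` is the binary behind GNOME's Document Viewer is the one assumption here):

```shell
# `command -v` prints where a command on $PATH lives, e.g. for the shell itself:
command -v sh
# the same trick works for any program once you know its binary name:
command -v firefox || echo "firefox is not on PATH"
# and a file can be opened with the desktop's default application:
# xdg-open Document.pdf
```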
I connect to a VPN using openVPN. Now, after the connection is established, all my traffic goes through tun0.
My LAN gateway is 10.100.98.4...
So, for apps to use my direct internet connection I did
sudo route add default gw 10.100.98.4
But I can't use tun0 now. I know this because
curl --interface tun0 google.com
doesn't give me anything.
How do I go about using both connections simultaneously?
ROUTING TABLES:-
Without VPN running:-
Destination Gateway Genmask Flags Metric Ref Use Iface
10.100.98.0 * 255.255.255.0 U 1 0 0 eth0
default 10.100.98.4 0.0.0.0 UG 0 0 0 eth0
With VPN:-
Destination Gateway Genmask Flags Metric Ref Use Iface
10.10.0.1 10.10.54.230 255.255.255.255 UGH 0 0 0 tun0
10.10.54.230 * 255.255.255.255 UH 0 0 0 tun0
free-vpn.torvpn 10.100.98.4 255.255.255.255 UGH 0 0 0 eth0
10.100.98.0 * 255.255.255.0 U 1 0 0 eth0
default 10.10.54.230 0.0.0.0 UG 0 0 0 tun0
After the route command-
Destination Gateway Genmask Flags Metric Ref Use Iface
10.10.0.1 10.10.54.230 255.255.255.255 UGH 0 0 0 tun0
10.10.54.230 * 255.255.255.255 UH 0 0 0 tun0
free-vpn.torvpn 10.100.98.4 255.255.255.255 UGH 0 0 0 eth0
10.100.98.0 * 255.255.255.0 U 1 0 0 eth0
default 10.100.98.4 0.0.0.0 UG 0 0 0 eth0
default 10.10.54.230 0.0.0.0 UG 0 0 0 tun0
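For completeness, the opposite arrangement also works: leave the VPN as the default route and add host routes through eth0 only for destinations that should bypass the tunnel (the address below is a placeholder):

```shell
# send traffic for one specific host via the LAN gateway, everything else via tun0
sudo route add -host 198.51.100.7 gw 10.100.98.4
```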
So I ran apt-get install httperf on my system and I can now run httperf. But how can I run autobench? I downloaded the file and unarchived it, and if I go into the directory and run autobench it says -bash: command not found.
I think it's a Perl script, but if I run perl autobench, it says:
root@example:/tmp/autobench-2.1.2# perl autobench
Autobench configuration file not found
- installing new copy in /root/.autobench.conf
cp: cannot stat `/etc/autobench.conf': No such file or directory
Installation complete - please rerun autobench
Even if I run it again it says the same thing.
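Reading that output, autobench tries to copy a template from /etc/autobench.conf, which nothing has installed yet; so an untested guess is that running the tarball's installer first breaks the loop (assuming the source tree provides the usual Makefile targets):

```shell
cd /tmp/autobench-2.1.2
make               # build the bundled helpers
sudo make install  # should install the autobench script and /etc/autobench.conf
autobench --help
```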
I am running Kubuntu Hardy Heron with a dual-monitor setup, and have VirtualBox on it running Windows XP in seamless mode.
My problem is that I can't get VirtualBox to extend to the second monitor. Has anyone been able to achieve this, or does anyone know if it can be achieved?
I have a quick question. How do I set up Postfix to send email to another server (an Exchange server) when sending to an email address on a sub-domain of our main server? For example, say our main server is mail.example.com and we have an Exchange server set up to receive email for exchange.example.com. We have the MX records set up in our DNS, and it receives correctly if we send from a GMail account. However, when we try to send an email from an @example.com account we get the following error:
Host or domain name not found. Name service error for name=exchange.example.com type=A: Host not found
I believe Postfix checks for local mailboxes first, and if it's set up with the domain it delivers to the local account, but in this case the sub-domain's accounts are located on another server. Does anyone have any thoughts on what I need to do within Postfix so it doesn't look locally for the exchange.example.com mailboxes?
I found the relay_domains directive in Postfix, but adding the sub-domain there doesn't seem to fix it.
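For context, the usual recipe for handing a sub-domain to another mail host is a transport map entry pointing straight at that host, with the sub-domain kept out of mydestination; a sketch (the bracketed IP is a placeholder for the Exchange server's address, which also sidesteps the failing DNS lookup):

```
# /etc/postfix/main.cf
relay_domains = exchange.example.com
transport_maps = hash:/etc/postfix/transport

# /etc/postfix/transport  (then run: postmap /etc/postfix/transport)
exchange.example.com    smtp:[192.0.2.25]
```

The square brackets tell Postfix to deliver to that host directly instead of doing an MX lookup on the name.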
Thanks for your help.
There is a file available on the Intel web site with the file name "cdv-gfx-drivers-1.0.1_bee.tar.bz2" and a date of July 6, 2012. It can be found by searching the Intel Download Center for the file name or the string "Linux* PowerVR Graphics/Media Drivers". The download page links to the file, the release notes, and a link, "Enabling hardware accelerated playback", that takes one to a page containing links to two PDF documents titled "Enabling Hardware Accelerated Playback for Intel® Atom™ Processor N2000/D2000 Series", one for Ubuntu and one for Fedora.
The instructions and release notes speak to working with kernel 3.1.0, and since I do not feel I have the skills, knowledge or training to do anything but follow the instructions to the letter, I am very reluctant to try anything on my freshly updated 3.2.0 kernel. I would much rather use an Ubuntu-supported kernel that includes these drivers and doesn't break anything in the process. Is this a case where the drivers are so new that Canonical has not yet included them but soon will?