Search Results

Search found 5900 results on 236 pages for 'rest'.


  • 500 internal server error on certain page after a few hours

    - by Brian Leach
    I am getting a 500 Internal Server Error on a certain page of my site after a few hours of being up. I restart the uWSGI instance with uwsgi --ini /home/metheuser/webapps/ers_portal/ers_portal_uwsgi.ini and it works again for a few hours. The rest of the site seems to be working. When I navigate to my_table, I am directed to the login page. But I get the 500 error on my table page on login.
    I followed the instructions here to set up my nginx and uWSGI configs. That is, I have ers_portal_nginx.conf located in my app folder, symlinked into /etc/nginx/conf.d/. I start my uWSGI "instance" (not sure what exactly to call it) in a screen session as mentioned above, with the .ini file located in my app folder.
    My ers_portal_nginx.conf:
        server {
            listen 80;
            server_name www.mydomain.com;
            location / { try_files $uri @app; }
            location @app {
                include uwsgi_params;
                uwsgi_pass unix:/home/metheuser/webapps/ers_portal/run_web_uwsgi.sock;
            }
        }
    My ers_portal_uwsgi.ini:
        [uwsgi]
        #user info
        uid = metheuser
        gid = ers_group
        #application's base folder
        base = /home/metheuser/webapps/ers_portal
        #python module to import
        app = run_web
        module = %(app)
        home = %(base)/ers_portal_venv
        pythonpath = %(base)
        #socket file's location
        socket = /home/metheuser/webapps/ers_portal/%n.sock
        #permissions for the socket file
        chmod-socket = 666
        #uwsgi variable only, does not relate to your flask application
        callable = app
        #location of log files
        logto = /home/metheuser/webapps/ers_portal/logs/%n.log
    Relevant parts of my views.py:
        data_modification_time = None
        data = None

        def reload_data():
            global data_modification_time, data, sites, column_names
            filename = '/home/metheuser/webapps/ers_portal/app/static/' + ec.dd_filename
            mtime = os.stat(filename).st_mtime
            if data_modification_time != mtime:
                data_modification_time = mtime
                with open(filename) as f:
                    data = pickle.load(f)
            return data

        # a bunch of authentication stuff...

        @app.route('/')
        @app.route('/index')
        def index():
            return render_template("index.html", title = 'Main',)

        @app.route('/login', methods = ['GET', 'POST'])
        def login():
            # login stuff...

        @app.route('/my_table')
        @login_required
        def my_table():
            print 'trying to access data table...'
            data = reload_data()
            return render_template("my_table.html", title = "Rundata Viewer",
                                   sts = sites, cn = column_names, data = data)  # dictionary of data
    I installed nginx via yum as described here (yesterday). I am using uWSGI installed in my venv via pip. I am on CentOS 6. My uwsgi log shows:
        Wed Jun 11 17:20:01 2014 - uwsgi_response_writev_headers_and_body_do(): Broken pipe [core/writer.c line 287] during GET /whm-server-status (127.0.0.1)
        IOError: write error
        [pid: 9586|app: 0|req: 135/135] 127.0.0.1 () {24 vars in 292 bytes} [Wed Jun 11 17:20:01 2014] GET /whm-server-status => generated 0 bytes in 3 msecs (HTTP/1.0 404) 2 headers in 0 bytes (0 switches on core 0)
    When it's working, the print statement in the views "my_table" route prints into the log file. But not once it stops working. Any ideas?
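    A hedged mitigation sketch (these are standard uWSGI options, not settings taken from the question, and the values are illustrative): a per-request timeout plus periodic worker recycling often keeps a single hung route from taking the app down until the next manual restart.
        [uwsgi]
        # kill and respawn any worker stuck on a request for more than 60 seconds
        harakiri = 60
        # recycle each worker after 1000 requests to limit slow leaks in long-running processes
        max-requests = 1000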


  • git post-receive hook throws "command not found" error but seems to run properly and no errors when run manually

    - by Ben
    I have a post-receive hook that runs on a central git repository set up with gitolite to trigger a git pull on a staging server. It seems to work properly, but throws a "command not found" error when it is run. I am trying to track down the source of the error, but have not had any luck. Running the same commands manually does not produce an error.
    The error changes depending on what was done in the commit that is being pushed to the central repository. For instance, if 'git rm' was committed and pushed to the central repo the error message will be "remote: hooks/post-receive: line 16: Removed: command not found", and if 'git add' was committed and pushed to the central repo the error message will be "remote: hooks/post-receive: line 16: Merge: command not found". In either case the 'git pull' run on the staging server works correctly despite the error message.
    Here is the post-receive script:
        #!/bin/bash
        #
        # This script is triggered by a push to the local git repository. It will
        # ssh into a remote server and perform a git pull.
        #
        # The SSH_USER must be able to log into the remote server with a
        # passphrase-less SSH key *AND* be able to do a git pull without a passphrase.
        #
        # The command to actually perform the pull request on the remost server comes
        # from the ~/.ssh/authorized_keys file on the REMOTE_HOST and is triggered
        # by the ssh login.

        SSH_USER="remoteuser"
        REMOTE_HOST="staging.server.com"

        `ssh $SSH_USER@$REMOTE_HOST` # This is line 16

        echo "Done!"
    The command that does the git pull on the staging server is in the ssh user's ~/.ssh/authorized_keys file and is:
        command="cd /var/www/staging_site; git pull",no-port-forwarding,no-X11-forwarding,no-agent-forwarding, ssh-rsa AAAAB3NzaC1yc2EAAAABIwAA... (the rest of the public key)
    This is the actual output from removing a file from my local repo, committing it locally, and pushing it to the central git repo:
        ben@tamarack:~/thejibe/testing/web$ git rm ./testing
        rm 'testing'
        ben@tamarack:~/thejibe/testing/web$ git commit -a -m "Remove testing file"
        [master bb96e13] Remove testing file
         1 files changed, 0 insertions(+), 5 deletions(-)
         delete mode 100644 testing
        ben@tamarack:~/thejibe/testing/web$ git push
        Counting objects: 3, done.
        Delta compression using up to 2 threads.
        Compressing objects: 100% (2/2), done.
        Writing objects: 100% (2/2), 221 bytes, done.
        Total 2 (delta 1), reused 0 (delta 0)
        remote: From [email protected]:testing
        remote:    aa72ad9..bb96e13  master -> origin/master
        remote: hooks/post-receive: line 16: Removed: command not found   # The error msg
        remote: Done!
        To [email protected]:testing
           aa72ad9..bb96e13  master -> master
        ben@tamarack:~/thejibe/testing/web$
    As you can see the post-receive script gets to the echo "Done!" line, and when I look on the staging server the git pull has been successfully run, but there's still that nagging error message. Any suggestions on where to look for the source of the error message would be greatly appreciated. I'm tempted to redirect stderr to /dev/null but would prefer to know what the problem is.
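    A hedged reading of the error (this diagnosis is not stated in the question itself): the backticks on line 16 are command substitution, so the shell takes whatever the remote git pull prints ("Removed ...", "Merge made by ...") and tries to execute its first word as a command. A minimal sketch of the corrected lines, under that assumption:
        # run the remote pull directly; do not capture its output with backticks
        ssh "$SSH_USER@$REMOTE_HOST"

        echo "Done!"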


  • Allow anonymous upload for Vsftpd?

    - by user15318
    I need a basic FTP server on Linux (CentOS 5.5) without any security measure, since the server and the clients are located on a test LAN, not connected to the rest of the network, which itself uses non-routable IPs behind a NAT firewall with no incoming access to FTP. Some people recommend Vsftpd over PureFTPd or ProFTPd. No matter what I try, I can't get it to allow an anonymous user (i.e. logging in as "ftp" or "anonymous" and typing any string as password) to upload a file:
        # yum install vsftpd
        # mkdir /var/ftp/pub/upload
        # cat vsftpd.conf
        listen=YES
        anonymous_enable=YES
        local_enable=YES
        write_enable=YES
        xferlog_file=YES
        #anonymous users are restricted (chrooted) to anon_root
        #directory was created by root, hence owned by root.root
        anon_root=/var/ftp/pub/incoming
        anon_upload_enable=YES
        anon_mkdir_write_enable=YES
        #chroot_local_user=NO
        #chroot_list_enable=YES
        #chroot_list_file=/etc/vsftpd.chroot_list
        chown_uploads=YES
    When I log on from a client, here's what I get:
        500 OOPS: cannot change directory:/var/ftp/pub/incoming
    I also tried "# chmod 777 /var/ftp/incoming/", but get the same error. Does someone know how to configure Vsftpd with minimum security? Thank you.
    Edit: SELinux is disabled and here are the file permissions:
        # cat /etc/sysconfig/selinux
        SELINUX=disabled
        SELINUXTYPE=targeted
        SETLOCALDEFS=0
        # sestatus
        SELinux status: disabled
        # getenforce
        Disabled
        # grep ftp /etc/passwd
        ftp:x:14:50:FTP User:/var/ftp:/sbin/nologin
        # ll /var/
        drwxr-xr-x 4 root root 4096 Mar 14 10:53 ftp
        # ll /var/ftp/
        drwxrwxrwx 2 ftp ftp 4096 Mar 14 10:53 incoming
        drwxr-xr-x 3 ftp ftp 4096 Mar 14 11:29 pub
    Edit: latest vsftpd.conf:
        listen=YES
        local_enable=YES
        write_enable=YES
        xferlog_file=YES
        #anonymous users are restricted (chrooted) to anon_root
        anonymous_enable=YES
        anon_root=/var/ftp/pub/incoming
        anon_upload_enable=YES
        anon_mkdir_write_enable=YES
        #500 OOPS: bad bool value in config file for: chown_uploads
        chown_uploads=YES
        chown_username=ftp
    Edit: with the trailing space removed from "chown_uploads", error 500 is solved, but anonymous still doesn't work:
        client> ./ftp server
        Connected to server.
        220 (vsFTPd 2.0.5)
        Name (server:root): ftp
        331 Please specify the password.
        Password:
        500 OOPS: cannot change directory:/var/ftp/pub/incoming
        Login failed.
        ftp> bye
    With user "ftp" listed in /etc/passwd with home directory set to "/var/ftp", access rights to /var/ftp set to "drwxr-xr-x" and /var/ftp/incoming to "drwxrwxrwx"... could it be due to PAM maybe? I don't find any FTP log file in /var/log to investigate.
    Edit: Here's a working configuration to let ftp/anonymous connect and upload files to /var/ftp:
        listen=YES
        anonymous_enable=YES
        write_enable=YES
        anon_upload_enable=YES
        anon_mkdir_write_enable=YES
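    A hedged observation (paths assumed from the listings above, not confirmed by the question): anon_root points at /var/ftp/pub/incoming, but the directories actually shown are /var/ftp/incoming and /var/ftp/pub, so the configured path may simply not exist, which would explain "cannot change directory". A sketch of a layout vsftpd generally accepts:
        mkdir -p /var/ftp/pub/incoming           # the directory named by anon_root must exist
        chown ftp:ftp /var/ftp/pub/incoming      # owned by the ftp user so anonymous uploads can be written
        chmod 755 /var/ftp/pub/incoming          # uploads still require anon_upload_enable=YES in vsftpd.conf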


  • uploading via http post (multipart/form-data) silently fails with big files

    - by matteo
    When uploading multipart/form-data forms via an HTTP POST request to my Apache web server, very big files (i.e. 30MB) are silently discarded. On the server side all looks as if the attached file was received with 0 bytes size. On the client side all looks like it had been uploaded successfully (it takes the expected long time to upload and the browser gives no error message). On the server, nothing is logged into the error log. An entry is logged into the access log as if everything was ok (a POST request and a 200 OK response).
    These uploads are being posted to a PHP script. In the PHP script, if I print_r $_FILES, I see the following information for the relevant file:
        [file5] => Array
        (
            [name] => MOV023.3gp
            [type] => video/3gpp
            [tmp_name] => /tmp/phpgOdvYQ
            [error] => 0
            [size] => 0
        )
    Note both [error] = 0 (which should mean no error) and [size] = 0 (as if the file was empty). My PHP script runs fine and receives all the rest of the data except these files. move_uploaded_file succeeds on these files and actually copies them as 0-byte files.
    I've already changed the PHP directives max_upload_size to 50M and post_max_size to 200M, so neither the single file nor the request exceeds any size limit. max_execution_time is not relevant, because the time to transfer the data does not count; and I've increased max_input_time to 1000 seconds, though this shouldn't be necessary since this is the time taken to parse the input data, not the time taken to upload it.
    Is there any Apache configuration, prior to PHP, that could be causing these files to be discarded even before PHP execution? Some limit in size or in upload time? I've read about a default 300 seconds timeout limit, but this should apply to the time the connection is idle, not the time it takes while actually transferring data, right?
    Needless to say, uploads with all exactly identical conditions (including file format, client and everything) except smaller file size work seamlessly, so the issue is clearly related to the file or request size, or to the time it takes to send it.
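    A hedged checklist of the limits usually involved (directive names are the standard PHP/Apache ones; values are illustrative). One thing worth noting: the PHP upload limit is spelled upload_max_filesize, not max_upload_size, so the intended 50M value may never have taken effect.
        ; php.ini
        upload_max_filesize = 50M
        post_max_size = 200M
        max_input_time = 1000

        # Apache (httpd.conf or vhost): a LimitRequestBody (or mod_security's SecRequestBodyLimit)
        # smaller than the upload can truncate the body before PHP ever sees it; 0 means unlimited
        LimitRequestBody 0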


  • Ubuntu 12.04: apt-get "failed to fetch"; apt is trying to fetch via old static IP

    - by gabe
    Sample error:
        W: Failed to fetch http://security.ubuntu.com/ubuntu/dists/precise-security/universe/i18n/Translation-en  Unable to connect to 192.168.1.70:8118:
    Now this was working just fine until I changed the IP this morning. I have the server set to a static IP of 10.0.1.70, and for years it has been 192.168.1.70 - the IP apt-get is trying to use right now. I use Privoxy and Tor, thus the 8118 port. Like I said, it all worked until I changed the static IP from 192.168.1.70 to 10.0.1.70. I was forced to do so because of router issues. (Long and involved story; I didn't really want to change the IP because I knew something like this would happen.)
    The setup for Tor/Privoxy requires that you point Privoxy at Tor via 127.0.0.1:9050, then point curl, etc. to Privoxy via $HOME/.bashrc. Typically you would set the listen IP for Privoxy to 127.0.0.1, but if you want it accessible to the rest of the LAN you set the IP to the server's LAN IP. Which I did a long time ago, and it was working fine until this morning. I have changed all instances of 192.168.1.70 to 10.0.1.70 in both /etc/privoxy/config and $HOME/.bashrc.
    What makes this really strange for me is that curl is working fine. I curl icanhazip.com and voila, I get a new IP every 10 minutes or so. I curl CNN.com and I get the short but sweet "permanently moved to www.cnn.com" message I expect. Firefox works fine. Ping works fine. And I've tested all of this via Remote Desktop over my LAN. So the connection appears to be fine for everything except apt. I've also rebooted, hoping that would clear 192.168.1.70 from apt. So the connection to the internet and DNS aren't an issue for these programs, and they are, as far as I can tell, using Privoxy/Tor just fine.
    The real irony here is that I've tried to open up Privoxy to go to Ubuntu's servers directly without going through Tor, to speed up the downloads from Ubuntu (did this months ago). So somewhere that I have not been able to find, apt has stored the IP 192.168.1.70. And 192.168.1.70 is no longer valid. Thanks for the help
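    A hedged place to look (not stated in the question): apt does not read the proxy variables set in $HOME/.bashrc when run through sudo; it usually takes its proxy from an Acquire::http::Proxy line in /etc/apt/apt.conf or /etc/apt/apt.conf.d/, or from /etc/environment. A quick sketch for finding a stale entry:
        grep -ri "192.168.1.70" /etc/apt/apt.conf /etc/apt/apt.conf.d/ /etc/environment 2>/dev/null
        # a leftover entry would look something like:
        #   Acquire::http::Proxy "http://192.168.1.70:8118/";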


  • OpenVPN, Great on Windows, VERY slow on Mac...

    - by Phsion
    Hello, I'm not really an IT pro, but this seemed like the best place to ask this question... I have set up VPN networks in the past, for fun, and everything was great, but now I've set one up for my boss, and while my computers all work great, his Mac machines are almost too slow to work with. It's pretty much vanilla configs all around; anyone have any ideas? It's a TUN routing setup over UDP.
    Back story: My boss travels a lot and wants to be able to access all his files from the road, and is also pretty paranoid about security (even though he knows almost nothing about computers). So I figured a VPN would be the answer. I went with OpenVPN, but there are some other issues. The only ISP we can get in our area besides dial-up is a crappy satellite provider that doesn't offer public IPs unless you're willing to pay, so while the computers and VPN setup are pretty vanilla, the routing and structure is strange to get around this limitation.
    Specs: It's OpenVPN 2, and there are six machines using it (only three actually use it, the rest are my test machines): one Windows 7 laptop, two XP desktops, one OS X 10.5 desktop, one 10.6 desktop, and one 10.6 laptop. One XP desktop sits at my house and acts as the server (6Mbps/2Mbps FIOS connection). One XP desktop sits at the office and hosts a webpage that will wake up the main Mac desktop from sleep, and also ping all the machines on the VPN and show their status. The main office Mac (10.6) stays in sleep mode until it gets the Wake-On-LAN packet from the office XP, and then it auto connects to the VPN and opens itself up.
    The reason for all this is that the satellite private-IP crap means I can't directly access the office machines from outside the LAN, so everyone connects to my house first, then they talk to each other from there. The Wake-On-LAN weirdness is because my boss doesn't want to leave the main Mac on all the time, and making a quick and dirty webpage was the easiest way to send a magic packet from inside the LAN without confusing my boss. The VPN uses client config files to make static IPs for the clients.
    The only thing I found on Google was some changes to the VPN MTU settings (down to 1400) but no real help. Oh, and I forgot... all the Windows machines just have OpenVPN start as a service. The Mac laptop uses Tunnelblick (an OpenVPN GUI) and the Mac desktops use OpenVPN in normal command line mode.
    Server config:
        tun-mtu 1500
        fragment 1450
        mssfix 1450
        management localhost ####
        port ####
        proto udp
        dev tun
        ca #######
        cert #######
        key ######
        dh ######
        server 10.8.0.0 255.255.255.0
        ifconfig-pool-persist ipp.txt
        client-config-dir ccd
        route 10.8.0.0 255.255.255.252
        client-to-client
        keepalive 10 120
        comp-lzo
        persist-key
        persist-tun
        status openvpn-status
        log
    Client configs (all are simple variations on this):
        tun-mtu 1500
        fragment 1450
        mssfix 1450
        client
        dev tun
        proto udp
        remote ######## ####
        resolv-retry infinite
        nobind
        persist-key
        presist-tun
        ca #####
        cert #####
        key #####
        ns-cert-type server
        comp-lzo
        verb 3
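    A hedged tuning sketch (the satellite link's usable MTU is an assumption here, not something the question establishes): finding the largest packet that crosses the link unfragmented, then lowering fragment/mssfix to match, is a common first step when one platform is drastically slower than the others.
        # from the Mac, send don't-fragment pings of decreasing payload size until they get through
        # (payload + 28 bytes of IP/ICMP headers = packet size); BSD/macOS ping uses -D for the DF bit
        ping -D -s 1400 <vpn-server-address>
        # then, in both server and client configs, try something like:
        #   tun-mtu 1500
        #   fragment 1300
        #   mssfix 1300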


  • Profiles and using the local profile for a domain user

    - by Harry
    I'm having some trouble with profiles and would like to reach out for some help. I've tried to do some research to help myself along, but I'm not making much progress on my own. I've pretty much taken over the sysadmin duties for my small lab; I don't have much experience to justify it besides being the only one with the time and dedication to go at it (the environment was in a state of disrepair). The network and domain I look over are extremely small by most standards, about 10 users at a time. They are pretty intensive in their activity on the network, and we do work with fairly large files. None of the network is online, which is nice at the moment because it allows me not to have another headache.
    On to my profile problem. I have set up roaming profiles for the users in the network. After a little research, I think I will be switching this to a hybrid of folder redirection and roaming profiles, as this seems to be best practice. I also don't want the users having to wait a long time if they have a bloated profile.
    Now I've finally got a build working using MDT. We have Mac Pros, and it wasn't fun getting everything to play nice. The way I did this was by setting up a reference computer, installing all the software and tools that each user would need, and editing the settings and preferences to how we would need them. I then used MDT to do a sysprep and capture to create the image of my reference computer. Using the reference image I can push out my images to the rest of the desktops in my environment.
    The issue I'm having is when we join the computer to the domain. The user can log in and operate fine on the computer, but I'd like a bit more. When the user is logged on with their domain user name they lose a lot of the icons I had on my reference image, as well as the desktop background and some other miscellaneous settings. I would love to have the user log on using their domain user name and see the icons and desktop environment as I had them set up on the reference computer. I'm not sure if it is possible, or something simple that I'm missing, but any help would be greatly appreciated!
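    A hedged pointer (this mechanism is standard Windows deployment practice, and it is an assumption that it was not already used here): the reference machine's customizations only carry over to new domain users if the customized local profile is copied into the Default profile during sysprep, which is what the CopyProfile setting in the unattend file does. A minimal sketch of the relevant fragment, inside the Microsoft-Windows-Shell-Setup component of the specialize pass:
        <CopyProfile>true</CopyProfile>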


  • Port forwarding problem

    - by Steve
    I have a modem connecting to an ADSL2 network and a router connecting to the modem. The rest of the machines all connect to the router. The modem's IP is 192.168.1.1 and the router's IP is 192.168.0.1. From the modem configuration, I can see that the modem thinks the router's IP is 192.168.1.2. I can visit the router by using either 192.168.0.1 or 192.168.1.2.
    Now I forward a port from the router to a private machine. It works: I can test it by typing 192.168.1.2 and I am redirected to the private machine. But if I use 192.168.0.1, it is still the router's configuration page. I also do a port forward on my modem. Since the modem sees only the router, I can only forward the port to the router's specific port. I am thinking that by doing this, I can reach the private machine after two port forwards, once on the modem and once on the router.
    I also have a static public IP. My goal is that when someone types the public IP, he will be redirected to the private machine. But when I use some online port forwarding tester, the result always says that the port is closed on the public IP. I have these questions:
        1. Why does my router have two IPs, and why does the port forwarding work when visiting from one IP but not the other?
        2. I think the port forwarding only works when visiting from outside, rather than from both outside and inside. Otherwise, if I set port forwarding on my router/modem on port 80, I will never be able to see its original configuration page again - everything would be forwarded. Am I right?
        3. How can I achieve my goal described above? By achieving this, I will have a dedicated server of my own and users can visit it via the public IP.
    Can anyone correct me on any mistakes I made? I am using a Netconn modem and a D-Link DIR-300 router. Thank you very much for any help.
    Edit: Consider that I have correctly set up the whole thing. Now I want to test my website by using the public IP to visit it, but the port forwarding doesn't work. Does it consider that I am inside the local network and not using the port forwarding? If so, how can I do it? I ask my friends (outside my local network) to have a try and they can see the website. What should I do so that I can do the testing from the inside? Thank you very much.


  • ZFS/Btrfs/LVM2-like storage with advanced features on Linux?

    - by Easter Sunshine
    I have 3 identical internal 7200 RPM SATA hard disk drives on a Linux machine. I'm looking for a storage set-up that will give me all of this:
        - Different data sets (filesystems or subtrees) can have different RAID levels, so I can choose performance, space overhead, and risk trade-offs differently for different data sets while having a small number of physical disks (very important data can be 3xRAID1, important data can be 3xRAID5, unimportant reproducible data can be 3xRAID0).
        - If each data set has an explicit size or size limit, then the ability to grow and shrink the size limit (offline if need be).
        - Avoid out-of-kernel modules.
        - R/W or read-only COW snapshots. If it's a block-level snapshot, the filesystem should be synced and quiesced during the snapshot.
        - Ability to add physical disks and then grow/redistribute RAID1, RAID5, and RAID0 volumes to take advantage of the new spindle and make sure no spindle is hotter than the rest (e.g., in NetApp, growing a RAID-DP raid group by a few disks will not balance the I/O across them without an explicit redistribution).
    Not required but nice-to-haves:
        - Transparent compression, per-file or subtree. Even better if, like NetApp, it analyzes the data first for compressibility and only compresses compressible data.
        - Deduplication that doesn't have huge performance penalties or require obscene amounts of memory (NetApp does scheduled deduplication on weekends, which is good).
        - Resistance to silent data corruption like ZFS (this is not required because I have never seen ZFS report any data corruption on these specific disks).
        - Storage tiering, either automatic (based on caching rules) or user-defined rules (yes, I have all-identical disks now, but this will let me add a read/write SSD cache in the future). If it's user-defined rules, these rules should have the ability to promote to SSD on a file level and not a block level.
        - Space-efficient packing of small files.
    I tried ZFS on Linux, but the limitations were:
        - Upgrading is additional work because the package is in an external repository and is tied to specific kernel versions; it is not integrated with the package manager.
        - Write IOPS does not scale with the number of devices in a raidz vdev.
        - Cannot add disks to raidz vdevs.
        - Cannot have select data on RAID0 to reduce overhead and improve performance without additional physical disks or giving ZFS a single partition of the disks.
    ext4 on LVM2 looks like an option, except I can't tell whether I can shrink, extend, and redistribute RAID-type logical volumes onto new spindles (of course, I can experiment with LVM on a bunch of files). As far as I can tell, it doesn't have any of the nice-to-haves, so I was wondering if there is something better out there. I did look at LVM dangers and caveats, but then again, no system is perfect.
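    A hedged sketch of the LVM side of this (lvcreate's raid segment types are standard LVM2 features on reasonably recent kernels; the sizes and names below are made up for illustration): mixed RAID levels per logical volume on the same three physical disks look roughly like this, with lvextend/lvconvert covering the grow and redistribute cases to varying degrees.
        pvcreate /dev/sda /dev/sdb /dev/sdc
        vgcreate vg0 /dev/sda /dev/sdb /dev/sdc
        lvcreate --type raid1 -m 2 -L 50G  -n critical  vg0   # 3-way mirror for very important data
        lvcreate --type raid5 -i 2 -L 200G -n important vg0   # RAID5 across the three disks
        lvcreate --type striped -i 3 -L 100G -n scratch vg0   # RAID0 for reproducible data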


  • Share 3G connection over WiFi-LAN network

    - by kush.impetus
    This is how I have established a network between my PC and my laptop at home (being a novice in networking, it took me a few days to achieve the feat), and it is working perfectly. I can easily share files between them.
        Laptop: IP address 192.168.1.4, subnet mask 255.255.255.0, default gateway 192.168.1.2
        Desktop: IP address 192.168.1.5, subnet mask 255.255.255.0, default gateway 192.168.1.2
        ASUS RT-N10+ router: IP address 192.168.1.4, default gateway 192.168.1.2
    I have connected the desktop PC to the router using a LAN cable, and the laptop to the router over WiFi. Both PC and laptop are running Windows 7, are in the same HomeGroup, and have the same username/password. Also, I have connected the Ethernet cable to LAN port 1 of the router. (I would include a graphical representation of the network, but I can't post images here because I don't have 10 reputation points.)
    Now, what I want is to connect to the Internet using a 3G USB modem on one device and share it over the network with the other. I tried Huawei and Micromax 3G USB modems. Both obtain a new IP address whenever I connect to the Internet (meaning they have dynamic IPs). Otherwise, both have subnet mask 255.255.255.255 and default gateway 0.0.0.0. In that case, I cannot directly share Internet from the modem. Preferred DNS is blank for now on both laptop and PC.
    What I am planning to do is connect to the Internet on the laptop using the 3G modem and share the Internet connection over the laptop's Wi-Fi (as a hotspot) using Connectify, which I have done already. That, I suppose, will broadcast a static IP to connect to. Now what I can't figure out is what changes I should make to the network settings of the router and the PC so that the PC connects to the Internet broadcast by Connectify. Is that possible in the first place?
    Please note that I am trying to implement the network without spending anything extra (for purchasing a USB WiFi adapter for the PC, of course, which would have made life a lot easier for me). Thanks in advance


  • What "pieces" are needed in order to set up a cluster of physical servers?

    - by Chris Dutrow
    Background: Currently, we use Rackspace cloud servers. We have no intention of stopping using them, but would like to look into setting up a cluster of physical servers (probably desktop computers in the $400 range with 8GB memory each) to offset some of our load and work as a secondary, more powerful, less reliable system. To put things in perspective, we can buy comparable desktop computers for the same price as we pay in one month to rent them on Rackspace Cloud.
    I understand that this is generally a dumb idea. However, in this particular instance, the server cluster is needed for its computation power. It is not mission-critical, it does not host a consumer-facing website, and if it goes down for a day or two, it's not really a problem. Currently, we have access to business-class Verizon FiOS. If I understand correctly, we can get at least 25 dedicated IP addresses with this service; this should be enough.
    Requirements:
        - Each server runs Linux CentOS 6.3
        - Some of the servers run Python and execute processes from a task queue (Redis or RabbitMQ)
        - Some of the servers are capable of serving static files and Python-driven REST APIs
        - Some of the servers host a Cassandra database cluster
        - One or more of the servers are Redis database servers
        - One or more of the servers are PostgreSQL servers
    Questions:
        - What kind of router or switch is needed? We would like the computers to be able to communicate effectively with each other via internal IP addresses. This is especially important for communicating with servers hosting Redis that need to be able to respond to requests very quickly. Are there special switches or routers that need to be used to connect the servers together?
        - Are desktop computers OK for this? We have found that we are mostly RAM-bottlenecked. I understand that some servers have highly superior CPUs, but I'm not sure we need CPU power as much as we need RAM, which is cheap in desktop computers. Will we have problems with the WiFi cards in the desktops or any other unexpected hardware limitation?
        - What tools should be used to "image" the servers? For example, when we get an installation right for a Redis server or Cassandra node, are there tools that come with Linux CentOS 6.3 to image the server to a USB drive or something like that? Or do we need to use some other software for this? (A sketch of one common approach follows below.)
        - What other things are we missing that we should be concerned about?
    Thanks so much!
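    A hedged sketch for the imaging question only (kickstart is the stock CentOS mechanism; the file below is an illustrative minimal example, not a tested profile): rather than imaging a USB drive, CentOS installs are usually reproduced with a kickstart file based on the one anaconda writes to /root/anaconda-ks.cfg on the reference machine, then fed to the installer on the next box.
        # ks.cfg - minimal sketch only
        install
        lang en_US.UTF-8
        rootpw --plaintext changeme
        clearpart --all --initlabel
        autopart
        %packages
        @core
        %end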


  • MySQL Execution Time Spikes

    - by Brett
    I am having issues with MySQL all of a sudden today. Details:
        OS: CentOS release 5.7
        Server type: Parallels Virtuozzo container running on a Media Temple DV 4.0 package
        Average total memory usage: <500MB
        Total memory usage allowed: 1GB (part of a shared pool for emergency only; users are only guaranteed 500MB)
        Processor: 1GHz
        Main database sizes with most usage: 275MB & 107MB
        Server stack: nginx 1.0.10, MySQL 5.1.54, PHP 5.3.8 with php-fpm
        innodb_buffer_pool_size=100M
        php-fpm max children: 5
        Webapps: custom PHP-based sites, Magento & Drupal
        Slow query timeout is set to 1 second
    Steps I completed towards diagnosis:
        - Cannot restart the container yet - I will try later tonight when our domestic traffic has dropped.
        - Enabled the MySQL and php-fpm slow logs. Functions that did DB queries in the php-fpm slowlog were taking over 1s to complete at times.
        - Found some simple queries in the MySQL slowlog taking well over 1s to complete that should take less than 1s.
        - Most interesting: execution time seems to spike at times. A query will take .2s a couple of times, then one time it will take 8s to run the same query. These results were verified by running raw SQL queries through the MySQL command line.
        - Top does not reveal anything too interesting. The only resource-related thing I can see is load averages much higher than normal.
        - Up until today, MySQL has been fine; there have been no major changes to the DB since yesterday.
        - Sometimes things are so bad that I am seeing bad gateway errors after 60s of execution time.
        - InnoDB is doing on average 300-1400 reads/sec. MySQL is doing 3-10 queries/sec.
        - Slow query count in 2 hours of uptime is 171 (with the slow timeout at 1 second).
        - Tried restarting mysql, nginx, and php-fpm multiple times.
    For example:
        UPDATE `catalogsearch_query`
        SET `query_text` = 'EW 90', `num_results` = '7532', `popularity` = '99180',
            `redirect` = NULL, `synonym_for` = NULL, `store_id` = '1',
            `display_in_terms` = '1', `is_active` = '1', `is_processed` = '1',
            `updated_at` = '2012-05-08 21:38:31'
        WHERE (query_id='31');
    This query took 17 sec to complete one time, the rest of the time around .079 sec. But it varies: sometimes 1 sec, sometimes .004 sec. This is running the same query over and over with a couple of seconds in between each. Most tables are InnoDB, and sometimes I noticed the lock time taking 90% of the query execution time, but most of the time lock time is insignificant. Any idea what's going on here?
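    A hedged first set of checks (standard MySQL/InnoDB commands; the 100M figure comes from the details above, everything else is illustrative): with a 275MB working database and only a 100MB buffer pool inside a 1GB container, spikes often line up with buffer-pool misses, pending disk I/O, or row-lock waits, all of which these commands surface.
        -- buffer pool hit rate, pending I/O, and lock waits
        SHOW ENGINE INNODB STATUS\G
        SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_read%';
        SHOW GLOBAL STATUS LIKE 'Innodb_row_lock%';
        SHOW PROCESSLIST;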


  • Suggestions for sharing and using data between Ubuntu and Windows 7 dual boot

    - by wizjany
    Note: TL;DR, scroll to bottom for summary. I recently set up my computer for a dual boot between Ubuntu 9.10 and Windows 7. My current drive setup is as follows:
        | A1 | A2 |
        | B1 | B2 | B3 |
        A1: 100 MB, Windows 7 "System Reserved" boot partition
        A2: 230 GB, data section - this one needs to be shared between the operating systems
        B1: 125 GB, Windows 7 OS
        B2: 123 GB, Ubuntu OS
        B3: 2 GB, Linux swap space
    Pretty much I want to have my documents, music, pictures, videos, etc. accessible from both operating systems. My first attempt involved making the data (A2) partition NTFS and moving my home folder from Ubuntu to the data partition. However, as I read, NTFS does not play nice with permissions, and it messed up my home folder. My next idea is one of the following:
    1) Format the data partition to ext2/3/4, move my home folder from Linux there, and get a driver to read ext partitions in Windows 7. The problem with that is that most of the ext drivers/software are not compatible with Windows 7 or do not integrate with Windows Explorer (I really don't want to open a separate software window just to access my data, plus it's probably not compatible with other software). http://www.fs-driver.org/ looks promising, but I'm not sure how it works with ext4 and Windows 7 (not officially supported; when trying it in Vista compatibility mode, it tells me I need to format the ext drive to use it).
    2) Keep the home folder in Ubuntu where it is, but create symlinks for the Documents, Music, etc. folders to an NTFS-formatted data (A2) partition, and add those locations to the Windows 7 libraries. I'm not totally sure how the permissions would work out, but it should be fine since it's only the documents, music, etc. and not the important config files in the rest of /home/user/. Correct me if I'm wrong.
    Currently, symlinks are my best idea, although I'm not sure how they will work. Any suggestions, additions to my ideas, links, pointers, whatever would be greatly appreciated. Even if it means I should reformat both my drives and repartition (2 250GB drives if you want to suggest a setup for that), I won't be too opposed if that's the best suggestion (I've gone through the format/install/format/reinstall process 5 times over the past 3 days, once more won't hurt me).
    TL;DR, summary: I have two hard drives. One is partitioned for Ubuntu and Windows 7; the second one I want accessible to both operating systems to store documents, music, pictures, videos, etc. Suggestions on how to set up the data drive please =)
    P.S. Bonus if I can get an apache server document root folder working between the two OSes as well (permissions could become very complicated, so don't worry too much about that).
    P.P.S. Related question, but data viewing is one way: http://superuser.com/questions/84586/partition-scheme-and-size-for-dual-boot-windows-7-and-ubuntu-9-10-with-separate-p
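    A hedged sketch of option 2 on the Ubuntu side (the device name, mount point, and folder names are assumptions, not taken from the question): mount the NTFS data partition at boot via ntfs-3g, then replace each home sub-folder with a symlink into it.
        # /etc/fstab entry - /dev/sdaX stands in for whatever device A2 actually is
        /dev/sdaX  /data  ntfs-3g  defaults,uid=1000,gid=1000,umask=022  0  0

        # move an existing folder onto the shared partition, then link it back
        mkdir -p /data
        mv ~/Documents /data/Documents
        ln -s /data/Documents ~/Documents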


  • WinXP - Having trouble sharing internet with 3G USB modem via ICS

    - by Carlos Nunez
    All! I've been banging my head against a wall with this issue for a few days now and am hoping someone can help out. I recently signed up for T-Mobile's webConnect 3G/4G service to replace the faltering (and slow) DSL connection in my apartment. The goal was to put the SIM in one of my old phones and use its built-in WLAN tethering feature to share Internet out to the rest of my computers. I quickly found out that webConnect-provisioned SIMs do not work with regular smartphones, so I was forced to either buy a 4G-compatible router or tether one of my old laptops to my wireless router and share out that way. I chose the latter, and it's sharpening my inner masochistic self by the day.
    Here's the setup: GSM USB modem (via hub); ICS host - 10/100 Mbps Ethernet NIC; ICS "guest" - WAN port of my SMC WGBR14N wireless router in bridged mode (i.e. wireless access point). Ideally, this would make my laptop the DHCP server and internet gateway, with the WAP giving everyone wireless coverage.
    I can browse the internet on the host laptop fine. However, when clients try to connect, they get a DHCP-assigned IP from the laptop and are able to use the Internet for a few minutes before completely dying. After that happens, they are able to re-associate with the WAP and get IP addresses, but are unable to use the Internet or resolve IP addresses until the laptop and router are restarted. If they do get access, it's very, very slow.
    After running Wireshark on the host machine, it turns out that this is because every TCP connection keeps getting RST. DNS seems to work. I would normally think the firewall is the culprit here, but when it drops packets, it drops them completely; the fact that TCP connections are being ACK'ed by the destination rules that out. Of course, the event log isn't saying anything about what's going on.
    I also tried disabling power management on the NIC, since that's caused problems in the past; that didn't help either. I finally disabled receive-side scaling as per a Microsoft KB (that applied to Windows Server 2003 SP2), to no avail. I'm thinking of trying it with a different NIC (which will be tough; I don't have a spare Ethernet NIC around for the laptop), but I'm getting the impression that this simply doesn't work. Can anyone please advise? I apologise for the length of this post; all contributions are much appreciated! -Carlos.


  • Specifying network settings during SLES 11 auto installation

    - by banjer
    I'm setting up an autoinst.xml file for auto-installing SLES 11. I get prompted for the various interface settings per below, but they don't seem to stick once the server reboots. I don't think I have the XML defined correctly. I'm hoping someone has experience with this.
        <ask-list>
          <ask>
            <path>networking,dns,hostname</path>
            <question>Enter Hostname (server name)</question>
            <stage>initial</stage>
            <default>merkin</default>
          </ask>
          <ask>
            <path>networking,interfaces,interface,0,device</path>
            <question>Enter the primary ethernet device:</question>
            <stage>initial</stage>
            <default>eth0</default>
          </ask>
          <ask>
            <path>networking,interfaces,interface,0,ipaddr</path>
            <question>Enter the primary IP Address:</question>
            <stage>initial</stage>
          </ask>
          <ask>
            <path>networking,interfaces,interface,0,netmask</path>
            <question>Enter the Netmask Address:</question>
            <stage>initial</stage>
          </ask>
          <ask>
            <path>networking,routing,routes,route,0,gateway</path>
            <question>Enter the primary Gateway Address:</question>
            <stage>initial</stage>
          </ask>
        </ask-list>
    The first one for the hostname seems to be sticking just fine, but the rest do not. As an alternative, is there a way to stop the autoinstall at the section where you configure the network devices so that the user can take over? I was able to show the partition proposal, but I'm not sure how to do the same with the networking setup.


  • Optimal setup for ASUS P6X58D Premium BIOS (no OC)?

    - by rumtscho
    Normally, I'd trust the mainboard manufacturer to choose the best options as defaults. But I had trouble with the board, because even with Quick Boot enabled, it booted twice as slowly as a Pentium 4 Celeron. Then I changed lots of options at once (most of them weren't explained in the manual, just mentioned with a single sentence) and the boot time is only marginally worse than the Pentium 4 (54 sec against 46 sec from button to password-entering screen). Now I don't know if I have turned something off which should have stayed on. I guess I won't even be able to boot from a CD now, because even though it is present in the boot sequence, I took off a timeout I think it needs to check whether there is a disk in the drive.
    The second reason is that I don't have an internal HDD, only an SSD. I forget my sources (blush), but I am under the impression that today's BIOS and OS options are geared toward booting from a HDD, which is often less than optimal when one boots from an SSD, especially when there are functions which cause avoidable writing cycles, as an SSD wears out after too many writing cycles. Most of the things I've read concern the OS, but there are some BIOS-relevant options too.
    I am especially confused about the disk mode. The board supports AHCI, IDE simulation and RAID, but in the different articles I've read there is a proponent for each and no clear argument for any. So can someone tell me which options are important in general and which are important for an SSD-only system? I don't want to overclock the CPU, so you don't have to say anything about that (yes, I know the board is meant for OC :)). I am thinking of overclocking the RAM, since they sold me 1600er heatsinked modules which are running at 1066 now, but I'm not sure yet about that.
    The rest of the system: i7-930, Intel X25-M G2, 6 GB RAM, GTS 250, some no-name Blu-ray ROM, 2 external HDDs over USB 2.0, lots of other USB-connected hardware (12 devices I think), no SATA 3 drives (will disabling the controller have an impact on performance?), no LAN, only WiFi. Lucid Lynx 64-bit, no dual boot, no virtual installations. The main uses of the system are: managing and playing/showing all the media stored on the external disks, lots of image manipulation, some video editing, a bit of (non-demanding) gaming, and rarely development. Lots of Internet surfing too, but this shouldn't have much impact on performance.


  • Installing Cygwin C and C++ compilers for NetBeans IDE 7.2

    - by user1294663
    I am very new to Cygwin, C, C++ and NetBeans IDE 7.2. My PC is running Microsoft Windows 7. I have read the documentation on how to install the Cygwin C and C++ compilers: http://netbeans.org/community/releases/72/cpp-setup-instructions.html#compilers
    I have tried to run the Cygwin setup.exe whose most recent version of the Cygwin DLL is 1.7.16-1. I am not very sure which exact packages to install when the Cygwin setup.exe installer prompts for the selection of packages to download and install. I want to install the Cygwin C and C++ compilers so that I can create C and C++ projects using NetBeans 7.2. I selected those packages that contain the following names: gcc, g++, gdb and make. Then I proceeded to install the selected packages.
    The installation took a long time, so I stopped after about 45 minutes or so. I browsed the installation folder and saw that some packages I selected were installed. I noticed that some packages came in some sort of "zip" file with a tar.gz extension. I added the folder path to the PATH variable in the Windows 7 environment variables window. I think this command works:
        C:\> cygcheck -c cygwin
    but the rest don't work, I think:
        C:\> gcc --version
        C:\> g++ --version
        C:\> make --version
        C:\> gdb --version
    I tried to create a C/C++ project using NetBeans IDE 7.2 and the IDE popped up a dialog message saying that no C/C++ compilers were found. Have I made some mistake here, like installing the wrong packages or something else? Are there packages shown in the Cygwin setup.exe installer that have the exact names and exact versions compatible with NetBeans IDE 7.2? Of this I am not too sure, because I don't think I saw some of the required packages with exact names and versions.
    My question is: which exact packages do I install using the Cygwin setup.exe installer so that I can create C & C++ projects using NetBeans IDE 7.2? And what other steps do I have to take to ensure a complete, successful installation? Do I have to wait for all the selected required packages to be installed? I would like to know the exact names and versions of the required packages (names and versions displayed in the Cygwin setup.exe installer when prompted) needed for C & C++ programming using NetBeans IDE 7.2.
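    A hedged sketch of a non-interactive install (the -q and -P flags are standard Cygwin setup.exe options; the package names are the usual Cygwin names for the tools NetBeans looks for, and the exact list required by a given NetBeans release is an assumption):
        setup.exe -q -P gcc-core,gcc-g++,gdb,make
        :: older Cygwin releases named the compiler packages gcc4-core and gcc4-g++ instead
    After the install, the compilers live under C:\cygwin\bin, which is the directory that needs to be on PATH before NetBeans scans for tool collections.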


  • Linux not picking up new partition correctly on emc pseudo device

    - by James
    Hi. We have a database server running Oracle RAC. We were recently running out of space on the main LUN that it is attached to. I created a new 100GB LUN and concatenated this onto the existing LUN, creating a new MetaLUN. After some messing I managed to get Linux to recognise the new space. I then created a new partition on the pseudo device to use the new space.
    Previously when I have done this on other systems, the next step is to create an ASM disk on the new partition and add this disk to the Oracle disk group. This, however, fails. I am aware of various issues with ASM and PowerPath, but I don't think this is the issue here, as while investigating I discovered that one of the underlying logical devices is not reflecting the size change. See below.
    powermt displays all of the underlying logical units:
        [root@XXXXX~]# powermt display dev=emcpowerd
        Pseudo name=emcpowerd
        CLARiiON ID=CKM00091500009 [VFRAC2]
        Logical device ID=6006016030312200787502866C65DE11 [LUN 30]
        state=alive; policy=CLAROpt; priority=0; queued-IOs=0
        Owner: default=SP A, current=SP A
        Array failover mode: 1
        ==============================================================================
        ---------------- Host ---------------   - Stor -  -- I/O Path -  -- Stats ---
        ###  HW Path              I/O Paths     Interf.   Mode    State  Q-IOs Errors
        ==============================================================================
          3 qla2xxx                  sde        SP A0     active  alive      0      0
          3 qla2xxx                  sdj        SP B0     active  alive      0      0
          4 qla2xxx                  sdo        SP A1     active  alive      0      0
          4 qla2xxx                  sdt        SP B1     active  alive      0      0
    fdisk on the pseudo device shows the correct space:
        [root@XXXXX ~]# fdisk -l /dev/emcpowerd
        Disk /dev/emcpowerd: 429.4 GB, 429496729600 bytes
        255 heads, 63 sectors/track, 52216 cylinders
        Units = cylinders of 16065 * 512 = 8225280 bytes
             Device Boot      Start         End      Blocks   Id  System
        /dev/emcpowerd1           1       39162   314568733+  83  Linux
        /dev/emcpowerd2       39163       52216   104856255   83  Linux
    fdisk on one of the logical units is wrong:
        [root@XXXXX~]# fdisk -l /dev/sde
        Disk /dev/sde: 322.1 GB, 322122547200 bytes
        255 heads, 63 sectors/track, 39162 cylinders
        Units = cylinders of 16065 * 512 = 8225280 bytes
             Device Boot      Start         End      Blocks   Id  System
        /dev/sde1                 1       39162   314568733+  83  Linux
        /dev/sde2             39163       52216   104856255   83  Linux
    fdisk on the rest of the units is fine:
        [root@XXXXX ~]# fdisk -l /dev/sdj
        Disk /dev/sdj: 429.4 GB, 429496729600 bytes
        255 heads, 63 sectors/track, 52216 cylinders
        Units = cylinders of 16065 * 512 = 8225280 bytes
             Device Boot      Start         End      Blocks   Id  System
        /dev/sdj1                 1       39162   314568733+  83  Linux
        /dev/sdj2             39163       52216   104856255   83  Linux
    Also, when I created the partition, Linux did not create any entries in the /dev directory for the second partition, so I created these manually:
        [root@XXXXX dev]# mknod sde2 b 8 66
        [root@XXXXX dev]# ls -al sd[ejot]?
        brw-r----- 1 root disk  8,  65 Dec 29 14:20 sde1
        brw-r--r-- 1 root disk  8,  66 Apr  8 20:31 sde2
        brw-r----- 1 root disk  8, 145 Dec 29 14:19 sdj1
        brw-r--r-- 1 root disk  8, 146 Apr  8 20:33 sdj2
        brw-r----- 1 root disk  8, 225 Apr  6 23:12 sdo1
        brw-r--r-- 1 root disk  8, 226 Apr  8 20:33 sdo2
        brw-r----- 1 root disk 65,  49 Dec 29 14:19 sdt1
        brw-r--r-- 1 root disk 65,  50 Apr  8 20:33 sdt2
    This is a production server that we cannot easily reboot. Any ideas would be much appreciated. J
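    A hedged sketch of how a stale device size is usually refreshed without a reboot (these are standard Linux commands; whether they are safe to run mid-production against this particular host is a judgment call the question doesn't settle, and they would need to be repeated for each affected sdX path):
        echo 1 > /sys/block/sde/device/rescan   # ask the SCSI layer to re-read the device capacity
        blockdev --rereadpt /dev/sde            # re-read the partition table (partprobe /dev/sde is the alternative)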


  • Filtering Security Logs by User and Logon Type

    - by Trido
    I have been asked to find out when a user has logged on to the system in the last week. Now, the audit logs in Windows should contain all the info I need. I think if I search for Event ID 4624 (Logon Success) with a specific AD user and Logon Type 2 (Interactive Logon), it should give me the information I need, but for the life of me I cannot figure out how to actually filter the Event Log to get this information. Is it possible inside of the Event Viewer, or do you need to use an external tool to parse it to this level?
    I found http://nerdsknowbest.blogspot.com.au/2013/03/filter-security-event-logs-by-user-in.html which seemed to be part of what I needed. I modified it slightly to only give me the last 7 days' worth. Below is the XML I tried.
        <QueryList>
          <Query Id="0" Path="Security">
            <Select Path="Security">*[System[(EventID=4624) and TimeCreated[timediff(@SystemTime) &lt;= 604800000]]]</Select>
            <Select Path="Security">*[EventData[Data[@Name='Logon Type']='2']]</Select>
            <Select Path="Security">*[EventData[Data[@Name='subjectUsername']='Domain\Username']]</Select>
          </Query>
        </QueryList>
    It only gave me the last 7 days, but the rest of it did not work. Can anyone assist me with this?
    EDIT: Thanks to the suggestions of Lucky Luke I have been making progress. Below is my current query, although, as I will explain, it isn't returning any results.
        <QueryList>
          <Query Id="0" Path="Security">
            <Select Path="Security">
              *[System[(EventID='4624')] and
                System[TimeCreated[timediff(@SystemTime) &lt;= 604800000]] and
                EventData[Data[@Name='TargetUserName']='john.doe'] and
                EventData[Data[@Name='LogonType']='2']
              ]
            </Select>
          </Query>
        </QueryList>
    As I mentioned, it wasn't returning any results, so I have been messing with it a bit. I can get it to produce the results correctly until I add in the LogonType line. After that, it returns no results. Any idea why this might be?
    EDIT 2: I updated the LogonType line to the following:
        EventData[Data[@Name='LogonType'] and (Data='2' or Data='7')]
    This should capture workstation logons as well as workstation unlocks, but I still get nothing. I then modify it to search for other logon types, like 3 or 8, which it finds plenty of. This leads me to believe that the query works correctly, but for some reason there are no entries in the Event Logs with Logon Type equalling 2, and this makes no sense to me. Is it possible to turn this off?
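    A hedged alternative that side-steps hand-written XPath (Get-WinEvent and its FilterHashtable parameter are standard PowerShell; the account name is a placeholder, and the property indexes assume the usual 4624 event schema where index 5 is TargetUserName and index 8 is LogonType):
        Get-WinEvent -FilterHashtable @{ LogName='Security'; Id=4624; StartTime=(Get-Date).AddDays(-7) } |
          Where-Object { $_.Properties[8].Value -eq 2 -and $_.Properties[5].Value -eq 'john.doe' } |
          Select-Object TimeCreated
    Dropping the Where-Object filter and listing Properties[8] for a few events also shows which logon types are actually being recorded, which would confirm or rule out the "no type 2 events at all" theory.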


  • Remove DRM from *.pdb e-book that I own - while maintaining footnotes, etc.?

    - by ziesemer
    Background: I've already reviewed "Remove DRM from ePub Files?" and "How can I remove DRM from Kindle books?" - the answers to which have already brought me partial progress. The challenge is that I have a few purchased *.pdb e-books that I bought in years past, e.g. 2006. In particular, they were purchased from the Palm eBook Store (ebooks.palm.com - now defunct, possibly part of http://www.ereader.com / Barnes & Noble?) - originally for use on a Palm Treo that has since died. Of particular note is that I have a revision/publication of a book that is no longer published and not available as an e-book from anywhere else that I've been able to find. (I feel fortunate to have even found the *.pdb files on backup.) I have a copy of the electronic invoice for it, which includes the details necessary for unlocking: the "Purchaser's Name" and the "Unlock Code", which is the digits of the credit card # that I had used to purchase it.
    Given the above information, I was surprised to be able to open the book using the Windows eReader software and unlock it. Here I am able to view the complete contents and functionality of the book as I had done on the Palm Treo, including viewing of linked annotations/footnotes, etc. Following the full spirit of "Remove DRM from ePub Files?", I want to ensure that I can access this on any device of my choosing, especially now and in the future, as new technologies arrive and disappear. Ideally, I'm just looking to accomplish the minimum necessary to allow import into calibre.
    Outstanding issue: I've found a few solutions that have given me "90%" success, all based on various versions of some Python scripts, including versions 0.21 and 0.11 of "erdr2pml.py" (based on "ereader2html"). Unfortunately, unless I'm missing something, these programs are attempting to also "convert" instead of just "decrypting". As such, the outputs are missing embedded images and/or footnotes. I.e., there is a linked, underlined, and superscripted "a" after some text, but the content of the footnote no longer exists. I can validate this by inspecting the generated *.pmlz file; nowhere does it contain the original footnotes that are still visible in the original *.pdb file.
    I'm hoping to find a process that focuses on the decryption only, instead of attempting any type of content conversion - or, if a content conversion is required/involved, that it maintains all of the features and content of the original. (Again, I'm confident that if/once a version is obtained that calibre can import, I'll be able to fulfill the rest of my requirements.)


  • Cisco IPSec, nat, and port forwarding don't play well together

    - by Alan
    I have two Cisco ADSL modems configured conventionally to NAT the inside traffic to the ISP. That works. I have two port forwards on one of them for SMTP and IMAP from the outside to the inside; this provides external access to the mail server. This works. The modem doing the port forwarding also terminates PPTP VPN traffic. There are two DNS servers: one inside the office which resolves mail to the local address, and one outside the office which resolves mail for the rest of the world to the external interface. That all works.
    I recently added an IPSec VPN between the two modems, and that works for everything EXCEPT connections over the IPSec VPN to the mail server on port 25 or 143 from workstations on the remote LAN. It would seem that the modem with the port forwards is confusing traffic from the mail server destined for a machine on the other side of the IPSec VPN with traffic that should go back to a port-forwarded connection. PPTP VPN traffic to the mail server is fine. Is this a scenario anybody is familiar with, and are there any suggestions on how to work around it? Many thanks, Alan.
    But wait, there is more... These are the strategic parts of the NAT config. A route map is used to exclude the LANs that are reachable via IPSec tunnels from being NATed.
        int ethernet0
         ip nat inside
        int dialer1
         ip nat outside
        ip nat inside source route-map nonat interface Dialer1 overload
        route-map nonat permit 10
         match ip address 105
        access-list 105 remark *** Traffic to NAT
        access-list 105 deny ip 192.168.1.0 0.0.0.255 192.168.9.0 0.0.0.255
        access-list 105 deny ip 192.168.1.0 0.0.0.255 192.168.48.0 0.0.0.255
        access-list 105 permit ip 192.168.1.0 0.0.0.255 any
        ip nat inside source static tcp 192.168.1.241 25 interface Dialer1 25
        ip nat inside source static tcp 192.168.1.241 143 interface Dialer1 143
    At the risk of answering my own question, I resolved this outside the Cisco realm. I bound a secondary IP address to the mail server, 192.168.1.244, and changed the port forwards to use it while leaving all the local and IPSec traffic on 192.168.1.241, and the problem was solved. New port forwards:
        ip nat inside source static tcp 192.168.1.244 25 interface Dialer1 25
        ip nat inside source static tcp 192.168.1.244 143 interface Dialer1 143
    Obviously this is a messy solution, and being able to fix this in the Cisco would be preferable.

    Read the article

  • Creating a USB stick for installing centos 6.x using DVD1 and DVD2 iso files

    - by user250563
    First, we create two partitions on the USB stick, which is, let's say, 16GB. The first partition is only 1GB and the second partition is the rest of the available space. After we "w" (write) the changes, the USB stick has two partitions - one of 1GB and one of more than 14GB - so we now have sdb1 and sdb2. Next we need to turn these partitions into filesystems. Some say I should run these commands after the partitioning:

        mkfs.vfat -F 32 /dev/sdb1
        mkfs.ext3 /dev/sdb2

    but some web pages recommend using:

        mkfs.vfat -n BOOT /dev/sdb1
        mkfs.ext2 -m 0 -b 4096 -L DATA /dev/sdb2

    Which is it? Let's say the DVDs are called CentOS-6.4-x86_64-bin-DVD1.iso and CentOS-6.4-x86_64-bin-DVD2.iso. We make a directory and then mount the first ISO on it:

        mkdir -p /mnt/dvd1
        mount -o loop CentOS-6.4-x86_64-bin-DVD1.iso /mnt/dvd1

    and I suppose we don't make a directory for DVD2 and don't have to mount it? At this point I do not know what should be done, but I think this step might be next: we make the USB stick bootable by finding the file named mbr.bin and writing it to the device with these commands:

        dd conv=notrunc bs=440 count=1 if=/usr/lib/syslinux/mbr.bin of=/dev/sdb
        parted /dev/sdb set 1 boot on

    In other words, we are dd-ing it to 'sdb', not 'sdb1' or 'sdb2', and then we use parted to set the boot flag on for partition 1 of sdb. So far everything looks good? Here is the confusing part: how exactly do I move the ISO files to the USB drive? EVERYTHING BELOW IS A GUESS. At this point should I copy the folder /mnt/dvd1/isolinux to the USB's sdb1 or sdb2, and rename it to syslinux? And inside this syslinux folder there will be a file called isolinux.cfg, which should be renamed to syslinux.cfg? Then copy the contents of /mnt/dvd1/images/* to the USB's sdb2? But I think I am also supposed to copy both CentOS-6.4-x86_64-bin-DVD1.iso and CentOS-6.4-x86_64-bin-DVD2.iso somewhere onto the USB's sdb2 partition, correct? Almost like a drag-and-drop kind of thing - or do they go into particular folders? CentOS's own website has some instructions, but those instructions do not work: http://wiki.centos.org/HowTos/InstallFromUSBkey I once got this working, but things got ruined; I have to do it again and this time take notes.
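    To make the copy step concrete for myself, here is what I currently plan to try - a sketch of the guesses above rather than a verified recipe; the mount-point names are made up, the placement of the ISOs on the DATA partition is my assumption from the hard-drive install method, and the final syslinux step is my own addition based on how syslinux normally boots from a FAT partition:

        # mount everything (DVD2 never needs to be mounted, it is only copied)
        mkdir -p /mnt/usb1 /mnt/usb2 /mnt/dvd1
        mount /dev/sdb1 /mnt/usb1
        mount /dev/sdb2 /mnt/usb2
        mount -o loop CentOS-6.4-x86_64-bin-DVD1.iso /mnt/dvd1

        # syslinux looks for syslinux.cfg, so copy the isolinux files and rename the config
        cp -r /mnt/dvd1/isolinux/* /mnt/usb1/
        mv /mnt/usb1/isolinux.cfg /mnt/usb1/syslinux.cfg

        # the installer wants images/ plus the two ISOs themselves on the DATA
        # partition, not unpacked into any particular folder (my understanding)
        cp -r /mnt/dvd1/images /mnt/usb2/
        cp CentOS-6.4-x86_64-bin-DVD1.iso CentOS-6.4-x86_64-bin-DVD2.iso /mnt/usb2/

        sync
        umount /mnt/usb1 /mnt/usb2 /mnt/dvd1

        # the FAT partition also needs the syslinux boot code itself
        # (the mbr.bin dd above only writes the master boot record);
        # run this while /dev/sdb1 is unmounted
        syslinux /dev/sdb1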

    Read the article

  • List repositories from multiple projects in Trac using mod_python

    - by Steffen Eriksen
    I am currently working on a customized web page that shows the available projects I have in Trac (1.0.1), using mod_python to serve the Trac interface. I found a standard page for this, but it didn't show a listing of repositories. The page exposes some variables for linking to the different projects, but I can't find variables for the repositories inside each project. I have set up the web page by following this (under Site Appearance): http://trac.edgewall.org/wiki/TracInterfaceCustomization Short summary - editing ../conf.d/trac.conf:

        PythonOption TracEnvParentDir /parent/dir/of/projects
        PythonOption TracEnvIndexTemplate /path/to/template

    and making a template file I can edit at /path/to/template:

        <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN"
          "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
        <html xmlns="http://www.w3.org/1999/xhtml"
              xmlns:py="http://genshi.edgewall.org/"
              xmlns:xi="http://www.w3.org/2001/XInclude">
          <head>
            <title>Available Projects</title>
          </head>
          <body>
            <h1>Available Projects</h1>
            <ul>
              <dl>
                <li py:for="project in projects" py:choose="">
                  <a py:when="project.href" href="$project.href"
                     title="$project.description">$project.name</a>
                  ## <dd> WANT TO ADD CODE HERE! </dd>
                  <py:otherwise>
                    <small>$project.name: <em>Error</em> <br />
                    ($project.description)</small>
                  </py:otherwise>
                </li>
              </dl>
            </ul>
          </body>
        </html>

    So... the code I want to add is something like:

        <dd py:for="repos in project.repository" py:choose="">
          <a py:when="repos.href" href="$repos.href">$repos.name</a>
        </dd>

    I can't figure out where to add the variables, or whether variables I can use already exist. After searching through the files, it seemed like main.py (/usr/local/Trac-1.0.1/trac/web/main.py) had something to do with the variables, but at first look it didn't seem easy to just add more. Is there a simple way to find the rest of the variables? And how hard is it to add more? Or would it perhaps be easier to do this in an alternative way? All I need is to link to the repositories dynamically.
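    In the meantime I have been poking at the API directly, just to confirm the data is reachable at all. A rough sketch, assuming the Trac 1.0 version-control API (RepositoryManager.get_all_repositories()) - the parent path is the placeholder from trac.conf above:

        # sketch: list every repository of every project under TracEnvParentDir
        # (assumes Trac 1.0; an empty repository name means the default repository)
        import os
        from trac.env import open_environment
        from trac.versioncontrol.api import RepositoryManager

        parent_dir = '/parent/dir/of/projects'

        for proj in sorted(os.listdir(parent_dir)):
            env_path = os.path.join(parent_dir, proj)
            if not os.path.isdir(env_path):
                continue
            try:
                env = open_environment(env_path, use_cache=True)
            except Exception:
                continue  # skip directories that are not Trac environments
            for name, info in RepositoryManager(env).get_all_repositories().items():
                # /<project>/browser/<reponame> is the usual link target
                print('%s: %s -> /%s/browser/%s' % (proj, name or '(default)', proj, name))

    If that works, the remaining question is just where to attach the result so the index template can see it - for example as an extra key on each project dictionary - which is what I'm hoping someone can point me to.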

    Read the article

  • Secure openVPN using IPTABLES

    - by bob franklin smith harriet
    Hey, I set up an openVPN server and it works OK. The next step is to secure it. I opted to use iptables to only allow certain connections through, but so far it is not working. I want to enable access to the network behind my openVPN server and allow other services (web access). When iptables is disabled or set to allow everything, this works fine; with my rules below, it does not. Also note that I have already configured openVPN itself to do what I want, and it works fine - it only fails when iptables is started. Any help explaining why this isn't working would be appreciated. These are the lines that I added in accordance with openVPN's recommendations; unfortunately, testing shows that they are required. They seem incredibly insecure though - is there any way to get around using them?

        # Allow TUN interface connections to OpenVPN server
        -A INPUT -i tun+ -j ACCEPT
        # Allow TUN interface connections to be forwarded through other interfaces
        -A FORWARD -i tun+ -j ACCEPT
        # Allow TAP interface connections to OpenVPN server
        -A INPUT -i tap+ -j ACCEPT
        # Allow TAP interface connections to be forwarded through other interfaces
        -A FORWARD -i tap+ -j ACCEPT

    These are the new chains and commands I added to restrict access as much as possible. Unfortunately, with these enabled all that happens is that the openVPN connection establishes fine and then there is no access to the rest of the network behind the openVPN server. Note that I am configuring the main iptables rules file, and I am paranoid, so all ports and IP addresses have been altered; the -N lines etc. appear before this, so ignore that they don't appear here. I added some explanations of what I 'intended' these rules to do, so you don't waste time figuring out where I went wrong:

        # accepts the VPN over port 1192
        -A INPUT -p udp -m udp --dport 1192 -j ACCEPT
        -A INPUT -j INPUT-FIREWALL
        -A OUTPUT -j ACCEPT
        # packets to be forwarded from the 10.10.1.0 network (all openVPN clients)
        # to the internal network (192.168.5.0) jump to the FOWARD-FIREWALL [sic] chain
        -A FORWARD -s 10.10.1.0/24 -d 192.168.5.0/24 -j FOWARD-FIREWALL
        # same as above, except for a different internal network
        -A FORWARD -s 10.10.1.0/24 -d 10.100.5.0/24 -j FOWARD-FIREWALL
        # reject anything not from either of those two ranges
        -A FORWARD -j REJECT
        -A INPUT-FIREWALL -m state --state RELATED,ESTABLISHED -j ACCEPT
        -A INPUT-FIREWALL -p tcp -m tcp --dport 22 -j ACCEPT
        -A INPUT-FIREWALL -j REJECT
        -A FOWARD-FIREWALL -m state --state RELATED,ESTABLISHED -j ACCEPT
        # 80, 443 and 53 are accepted
        -A FOWARD-FIREWALL -m tcp -p tcp --dport 80 -j ACCEPT
        -A FOWARD-FIREWALL -m tcp -p tcp --dport 443 -j ACCEPT
        # 192.168.5.150 = openVPN server
        -A FOWARD-FIREWALL -m tcp -p tcp -d 192.168.5.150 --dport 53 -j ACCEPT
        -A FOWARD-FIREWALL -m udp -p udp -d 192.168.5.150 --dport 53 -j ACCEPT
        -A FOWARD-FIREWALL -j REJECT
        COMMIT

    Now I wait :D
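    While I wait, one gap I think I can see on re-reading the rules (an untested guess): reply packets coming back from 192.168.5.0/24 to the VPN clients enter FORWARD with a 192.168.5.0/24 source, so they never match the two jumps into FOWARD-FIREWALL (which only match a 10.10.1.0/24 source) and fall through to the final REJECT. Something along these lines, placed before that REJECT and in the same rules-file format, might be what is missing:

        # untested guess: allow replies to connections the VPN clients opened,
        # matched on connection state rather than opening the reverse direction outright
        -A FORWARD -d 10.10.1.0/24 -m state --state RELATED,ESTABLISHED -j ACCEPT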

    Read the article
