Search Results

Search found 13451 results on 539 pages for 'physical environment'.


  • Installing and configuring Zend Framework 2 server-wide [Ubuntu] and test driving ZendSkeletonApplication

    - by kinologik
    I'm trying to have ZF2 installed for all my subdomains at once (Ubuntu 12.04). ZF2 just launched its first stable version, so I wanted to install it on my development server and finally get my hands dirty with it. I downloaded ZF2 and unzipped the files into /var/ZF2/ (which now contains Zend/[all components]). I then edited /etc/php5/apache2/php.ini and added the path to the ZF2 files: include_path = ".:/var/ZF2". I then downloaded the ZendSkeletonApplication and unzipped it into /var/www/skeleton.

    I know it is suggested to use composer.phar to install a ZF2 application, but: I don't want to make a local installation of ZF2... I want to make a server-wide installation and be able to use my Zend components on all my domains/subdomains on my development server. Also, before using any automatic installation process, I'd really like to understand that process by doing it manually first.

    Obviously, something goes wrong when I fire up the ZendSkeletonApplication, and I get the following when I hit this URL: http://www.myDevServer.com/skeleton/public/

    Fatal error: Uncaught exception 'RuntimeException' with message 'Unable to load ZF2. Run `php composer.phar install` or define a ZF2_PATH environment variable.' in /var/www/skeleton/init_autoloader.php:48 Stack trace: #0 /var/www/skeleton/public/index.php(9): include() #1 {main} thrown in /var/www/skeleton/init_autoloader.php on line 48

    I have skimmed through the docs, tutorials and the like, but there is no straightforward answer for this kind of configuration. In the official doc, in the (very short) installation chapter, I see a reference to adding an include path in PHP, but no example... http://zf2.readthedocs.org/en/latest/ref/installation.html

    "Once you have a copy of Zend Framework available, your application needs to be able to access the framework classes found in the library folder. Though there are several ways to achieve this, your PHP include_path needs to contain the path to Zend Framework’s library."

    But then, when I get to the "Getting Started" chapter, it's all composer.phar and nothing else... http://zf2.readthedocs.org/en/latest/user-guide/skeleton-application.html

    I'm no sysadmin, just a Zend enthusiast. I'm pretty sure this PEBKAC problem is obvious to those who already worked with the previous ZF2 betas. Thanks for helping me out.

    EDIT: The problem was resolved, thanks to Daniel M. Setting ZF2_PATH in httpd.conf was all that was needed: SetEnv ZF2_PATH /var/ZF2. I also removed the include_path reference in php.ini and everything works just fine, so I have no idea why Zend suggests adding it there in their official docs.
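
    For reference, the fix described in the EDIT boils down to exporting ZF2_PATH from the Apache configuration so that init_autoloader.php can find the framework. A minimal sketch, assuming the /var/ZF2 path from the question and a stock Ubuntu Apache layout (the vhost name below is only an example):

        # Server-wide (e.g. in httpd.conf or a conf.d include); SetEnv comes from mod_env,
        # which can be enabled with "a2enmod env" if it is not already active:
        SetEnv ZF2_PATH /var/ZF2

        # Or scoped to a single site, if only some vhosts should see ZF2:
        <VirtualHost *:80>
            ServerName skeleton.myDevServer.com   # example hostname
            DocumentRoot /var/www/skeleton/public
            SetEnv ZF2_PATH /var/ZF2
        </VirtualHost>

    With ZF2_PATH set, the php.ini include_path entry is not needed by the skeleton application, which is consistent with the include_path-only setup failing above: its autoloader looks for the environment variable (or a Composer-installed copy) rather than searching the include path.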

    Read the article

  • SSH: Port Forwarding, Firewalls, & Plesk

    - by Kian Mayne
    I edited my SSH configuration to accept connections on Port 213, as it was one of the few ports that my work firewall allows through. I then restarted sshd and everything was going well. I tested the ssh server locally, and checked the sshd service was listening on port 213; however, I still cannot get it to work outside of localhost. PuTTY gives a connection refused message, and some of the sites that allow check of ports I tried said the port was closed. To me, this is either firewall or port forwarding. But I've already added inbound and outbound exceptions for it. Is this a problem with my server host, or is there something I've missed? My full SSH config file, as requested: # $OpenBSD: sshd_config,v 1.73 2005/12/06 22:38:28 reyk Exp $ # This is the sshd server system-wide configuration file. See # sshd_config(5) for more information. # This sshd was compiled with PATH=/usr/local/bin:/bin:/usr/bin # The strategy used for options in the default sshd_config shipped with # OpenSSH is to specify options with their default value where # possible, but leave them commented. Uncommented options change a # default value. Port 22 Port 213 #Protocol 2,1 Protocol 2 #AddressFamily any #ListenAddress 0.0.0.0 #ListenAddress :: # HostKey for protocol version 1 #HostKey /etc/ssh/ssh_host_key # HostKeys for protocol version 2 #HostKey /etc/ssh/ssh_host_rsa_key #HostKey /etc/ssh/ssh_host_dsa_key # Lifetime and size of ephemeral version 1 server key #KeyRegenerationInterval 1h #ServerKeyBits 768 # Logging # obsoletes QuietMode and FascistLogging #SyslogFacility AUTH SyslogFacility AUTHPRIV #LogLevel INFO # Authentication: #LoginGraceTime 2m #PermitRootLogin yes #StrictModes yes #MaxAuthTries 6 #RSAAuthentication yes #PubkeyAuthentication yes #AuthorizedKeysFile .ssh/authorized_keys # For this to work you will also need host keys in /etc/ssh/ssh_known_hosts #RhostsRSAAuthentication no # similar for protocol version 2 #HostbasedAuthentication no # Change to yes if you don't trust ~/.ssh/known_hosts for # RhostsRSAAuthentication and HostbasedAuthentication #IgnoreUserKnownHosts no # Don't read the user's ~/.rhosts and ~/.shosts files #IgnoreRhosts yes # To disable tunneled clear text passwords, change to no here! #PasswordAuthentication yes #PermitEmptyPasswords no PasswordAuthentication yes # Change to no to disable s/key passwords #ChallengeResponseAuthentication yes ChallengeResponseAuthentication no # Kerberos options #KerberosAuthentication no #KerberosOrLocalPasswd yes #KerberosTicketCleanup yes #KerberosGetAFSToken no # GSSAPI options #GSSAPIAuthentication no GSSAPIAuthentication yes #GSSAPICleanupCredentials yes GSSAPICleanupCredentials yes # Set this to 'yes' to enable PAM authentication, account processing, # and session processing. If this is enabled, PAM authentication will # be allowed through the ChallengeResponseAuthentication mechanism. # Depending on your PAM configuration, this may bypass the setting of # PasswordAuthentication, PermitEmptyPasswords, and # "PermitRootLogin without-password". 
If you just want the PAM account and # session checks to run without PAM authentication, then enable this but set # ChallengeResponseAuthentication=no #UsePAM no UsePAM yes # Accept locale-related environment variables AcceptEnv LANG LC_CTYPE LC_NUMERIC LC_TIME LC_COLLATE LC_MONETARY LC_MESSAGES AcceptEnv LC_PAPER LC_NAME LC_ADDRESS LC_TELEPHONE LC_MEASUREMENT AcceptEnv LC_IDENTIFICATION LC_ALL #AllowTcpForwarding yes #GatewayPorts no #X11Forwarding no X11Forwarding yes #X11DisplayOffset 10 #X11UseLocalhost yes #PrintMotd yes #PrintLastLog yes #TCPKeepAlive yes #UseLogin no #UsePrivilegeSeparation yes #PermitUserEnvironment no #Compression delayed #ClientAliveInterval 0 #ClientAliveCountMax 3 #ShowPatchLevel no #UseDNS yes #PidFile /var/run/sshd.pid #MaxStartups 10 #PermitTunnel no #ChrootDirectory none # no default banner path #Banner /some/path # override default of no subsystems Subsystem sftp /usr/libexec/openssh/sftp-server
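
    A few diagnostics can separate a listen/bind problem from a firewall problem here; this is only a sketch and assumes iptables is the host firewall (typical on a Plesk/CentOS box) and eth0 is the public interface:

        # Is sshd really listening on 213 on all interfaces (0.0.0.0), not just 127.0.0.1?
        netstat -tlnp | grep :213        # or: ss -tlnp | grep :213

        # While connecting from outside, watch whether the SYN even reaches the box;
        # if nothing shows up, the block is upstream (provider filtering or the work firewall):
        tcpdump -ni eth0 tcp port 213

        # If packets arrive but the connection is refused or dropped, open the port locally:
        iptables -I INPUT -p tcp --dport 213 -j ACCEPT
        service iptables save            # persistence varies by distro/panel

    A "connection refused" in PuTTY means something answered with a reset rather than silently dropping the packet, which often points at the hosting provider or an intermediate device rejecting the port rather than at sshd itself.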

    Read the article

  • How to configure DD-WRT routing table when creating an isolated network segment for PCI C-VT compliance

    - by tetranz
    I'm the volunteer support and system admin person at a small private school. We need to setup a PCI compliant Windows PC as a virtual terminal for credit card processing. I've read questionnaire SAQ C-VT and, to quote, this computer needs to be accessed: "via a computer that is isolated in a single location, and is not connected to other locations or systems within your environment (this can be achieved via a firewall or network segmentation to isolate the computer from other systems)" Our setup is as follows: DSL modem from ISP is setup to be a "transparent pipe" with no extra services. That goes into the WAN port of Linksys WRT54-GL running a DD-WRT. The LAN is 192.168.1.x. There are a couple of other WRT54-GL / DD-WRT devices. One is used as a wireless AP and another is a client bridge. To isolate the VT (virtual terminal) machine, I have another DD-WRT device. Its WAN is connected to a port on the 192.168.1.x LAN. The virtual terminal machine is connected to its LAN which is at 192.168.10.x. The SPI Firewall etc is turned on. It's basically the default DD-WRT gateway setup where the "ISP" is our own LAN. That's working. All incoming traffic to the VT machine is blocked, including from our own LAN. The VT can access the internet BUT, and here's the problem, it can also ping any of the computers on the 192.168.1.x LAN. I think I need to stop that. I'm guessing that I could do something with the Static Routing table in the VT machine's DD-WRT device. I need to route anything going to 192.168.1.x other than the gateway which is 192.168.1.1 to 0.0.0.0 or something like that. That's where I'm stuck at the end of my knowledge. Or ... do I need to get yet another DD-WRT so the network is "balanced". Maybe I need to have the internet from the DSL going into a DD-WRT which has only two devices on its LAN i.e., two other DD-WRTs, one for the main LAN and one for the VT. I think that would do but I'd like to avoid the extra cost and complexity if I don't need it. Thanks
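
    Rather than a static route, a firewall rule on the VT-side DD-WRT is the usual way to cut the 192.168.10.x segment off from the main LAN while leaving internet access alone; a sketch (entered under Administration -> Commands -> Save Firewall), with the DNS exception only needed if the VT segment actually uses 192.168.1.1 for lookups:

        # Drop anything the VT segment tries to start toward the main LAN:
        iptables -I FORWARD -s 192.168.10.0/24 -d 192.168.1.0/24 -j DROP
        # Optional: still allow DNS against the upstream router
        # (-I prepends, so this exception ends up above the DROP rule):
        iptables -I FORWARD -s 192.168.10.0/24 -d 192.168.1.1 -p udp --dport 53 -j ACCEPT

    Internet-bound traffic is unaffected because its destination addresses fall outside 192.168.1.0/24 even though it is routed via 192.168.1.1, so a third DD-WRT shouldn't be necessary just for the isolation requirement.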

    Read the article

  • $PATH is driving me nuts

    - by Chris4d
    OK, apologies if this is something dumb, but I'm running out of ideas. Goal: prepend /usr/local/bin to $PATH Problem: $PATH won't do what I want or expect How I got here: I want to start learning to program, so I'm getting comfortable messing around under the hood, but don't have a lot of experience. I installed the fish shell (because it's friendly!) using homebrew and set it as my default shell (under system prefs>users & groups>advanced). At some point, I ran brew doctor to see if my installs were all kosher, and it suggested I move /usr/local/bin to the front of $PATH so that I could use my installation of git rather than the system copy. Fine - but between path_helper and fish, something was happening to $PATH that was out of my control, and I could never get the paths arranged in the right way. Environment: OSX 10.8.2, upgraded from 10.7ish, with xcode and devtools installed, plus x11, homebrew, and fish More info: I've set my user's default shell back to bash, and tried a variety of shells thru terminal.app - bash, fish, sh. I moved /usr/local/bin to the top of /etc/paths but it didn't change anything. I looked thru the various config.fish files and commented out stuff that might mess with $PATH, didn't help. I have the following files in /etc/paths.d/: ./10-homebrew containing /usr/local/bin ./20-fish containing /usr/local/Cellar/fish/1.23.1/bin ./40-XQuartz containing /opt/X11/bin I added set +x to my profile and when I start terminal.app I get: Last login: Mon Oct 1 13:31:06 on ttys000 + '[' -x /usr/libexec/path_helper ']' + eval '/usr/libexec/path_helper -s' ++ /usr/libexec/path_helper -s PATH="/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin:/usr/local/Cellar/fish/1.23.1/bin:/opt/X11/bin"; export PATH; + '[' /bin/bash '!=' no ']' + '[' -r /etc/bashrc ']' + . /etc/bashrc ++ '[' -z '\s-\v\$ ' ']' ++ PS1='\h:\W \u\$ ' ++ shopt -s checkwinsize ++ '[' Apple_Terminal == Apple_Terminal ']' ++ '[' -z '' ']' ++ PROMPT_COMMAND='update_terminal_cwd; ' ++ update_terminal_cwd ++ local 'SEARCH= ' ++ local REPLACE=%20 ++ local PWD_URL=file://Chriss-iMac.local/Users/c4 ++ printf '\e]7;%s\a' file://Chriss-iMac.local/Users/c4 Chriss-iMac:~ c4$ So it looks like path_helper runs, but then running echo $PATH nets me /usr/bin:/bin:/usr/sbin:/sbin. So, it looks like path_helper isn't even doing what it's supposed to anymore? I'm sure there is some well-defined behavior here that I don't understand, or I borked something while trying to fix it. Please help!
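
    One way to take path_helper and /etc/paths ordering out of the equation is to prepend the directory explicitly in the shell's own startup file, which runs after path_helper; a sketch covering both shells mentioned above:

        # bash: ~/.bash_profile (or ~/.profile)
        export PATH="/usr/local/bin:$PATH"

        # fish: ~/.config/fish/config.fish
        set -gx PATH /usr/local/bin $PATH

    Afterwards, "which git" (or "type git") in a fresh terminal should report /usr/local/bin/git; if it still doesn't, something later in the startup chain is resetting PATH.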

    Read the article

  • Why do I sometimes get 'sh: $'\302\211 ... ': command not found' in xterm/sh?

    - by amn
    Sometimes when I simply type a valid command like 'find ...', or anything really, I get back the following, which is completely unexpected and confusing (... is command name I type): sh: $'\302\211...': command not found There is some corruption going on I think. I don't use color in my prompt, I am using the Bash shell in POSIX mode as sh (chsh to /bin/sh and so on - $SHELL is sh). What is going on and why does this keep happening? Anything I can debug? I think this is more of an xterm issue than sh, or at least a combination of the two. Files, for context: My /etc/profile, as distributed with Arch Linux x86-64: # /etc/profile #Set our umask umask 022 # Set our default path PATH="/usr/local/sbin:/usr/local/bin:/usr/bin" export PATH # Load profiles from /etc/profile.d if test -d /etc/profile.d/; then for profile in /etc/profile.d/*.sh; do test -r "$profile" && . "$profile" done unset profile fi # Source global bash config if test "$PS1" && test "$BASH" && test -r /etc/bash.bashrc; then . /etc/bash.bashrc fi # Termcap is outdated, old, and crusty, kill it. unset TERMCAP # Man is much better than us at figuring this out unset MANPATH My /etc/shrc, which I created as a way to have sh parse some file on startup, when non-login shell. This is achieved using ENV variable set in /etc/environment with the line ENV=/etc/shrc: PS1='\u@\H \w \$ ' alias ls='ls -F --color' alias grep='grep -i --color' [ -f ~/.shrc ] && . ~/.shrc My ~/.profile, I am launching X when logging in through first virtual tty: [[ -z $DISPLAY && $XDG_VTNR -eq 1 ]] && exec xinit -- -dpi 111 My ~/.xinitc, as you can see I am using the system as a Virtual Box guest: xrdb -merge ~/.Xresources VBoxClient-all awesome & exec xterm And finally, my ~/.Xresources, no fancy stuff here I guess: *faceName: Inconsolata *faceSize: 10 xterm*VT100*translations: #override <Btn1Up>: select-end(PRIMARY, CLIPBOARD, CUT_BUFFER0) xterm*colorBDMode: true xterm*colorBD: #ff8000 xterm*cursorColor: S_red Since ~/.profile references among other things /etc/bash.bashrc, here is its content: # # /etc/bash.bashrc # # If not running interactively, don't do anything [[ $- != *i* ]] && return PS1='[\u@\h \W]\$ ' PS2='> ' PS3='> ' PS4='+ ' case ${TERM} in xterm*|rxvt*|Eterm|aterm|kterm|gnome*) PROMPT_COMMAND=${PROMPT_COMMAND:+$PROMPT_COMMAND; }'printf "\033]0;%s@%s:%s\007" "${USER}" "${HOSTNAME%%.*}" "${PWD/#$HOME/~}"' ;; screen) PROMPT_COMMAND=${PROMPT_COMMAND:+$PROMPT_COMMAND; }'printf "\033_%s@%s:%s\033\\" "${USER}" "${HOSTNAME%%.*}" "${PWD/#$HOME/~}"' ;; esac [ -r /usr/share/bash-completion/bash_completion ] && . /usr/share/bash-completion/bash_completion I have no idea what that case statement does, by the way, it does look a bit suspicious though, but then again, who am I to know.
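
    The escaped bytes are a useful clue: \302\211 is octal for 0xC2 0x89, the UTF-8 encoding of U+0089, i.e. 0x09 (Tab) with the high bit set. One common source of such bytes is xterm's 8-bit Meta handling, where Alt/Meta plus a key sends key+0x80; a sketch for confirming the bytes and, if that is the cause, turning the behaviour off (standard xterm resources, added alongside the existing ~/.Xresources entries):

        # Decode what sh is actually seeing:
        printf '\302\211' | od -An -tx1     # -> c2 89

        # ~/.Xresources: make Meta send ESC-prefixed sequences instead of 8-bit characters
        xterm*metaSendsEscape: true
        xterm*eightBitInput:   false

        # reload the resources for new xterms:
        xrdb -merge ~/.Xresources

    This is only a hypothesis about where the byte comes from; if the problem persists with those settings, running "cat -v" and retyping the command will show any invisible control characters and whether they are injected by the terminal or by something in the shell startup files.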

    Read the article

  • Specifying a Postfix Instance to send outbound email

    - by Catherine Jefferson
    I have a CentOS 6.5 server running Postfix 2.6x (the default distribution) with five public IPv4 IPs bound to it. Each IP has DNS and rDNS set separately. Each uses a different hostname at a different domain. I have five Postfix instances, one bound to each IP, like this example: 192.168.34.104 red.example.com /etc/postfix 192.168.36.48 green.example.net /etc/postfix-green 192.168.36.49 pink.example.org /etc/postfix-pink 192.168.36.50 orange.example.info /etc/postfix-orange 192.168.36.51 blue.example.us /etc/postfix-blue I've tested each IP by telneting to port 25. Postfix answers and banners properly with the correct hostname. Email is received on all of these instances with no problems and is routed to the correct place. This setup, minus the final instance, has existed for a couple of years and works. I never bothered to set up outbound email to go through any but the main instance, however; there was no need. Now I need to send email from blue.example.us that actually leaves from that interface and IP, such that the Received headers show blue.example.us as the sending mailhost, so that SPF and DKIM validate, etc etc. The email that will be sent from blue.example.com is a feedback loop sent by a single shell account on the server (account5), an account that is dedicated to sending this email. The account receives the feedback loop emails from servers on other networks, saves the bodies of those emails, and then generates a new outbound email header, appends the saved body, and sends the email. It's sending by piping each email to sendmail -oi -t. We're doing it this way to mask the identities of the initial servers. The procmail script that processes these emails works correctly. However, I cannot configure this account to send email through the proper Postfix instance/IP/interface. The exact same account and script sends email through the main Postfix instance /etc/postfix without any issues. When I change MAIL_CONFIG to point to /etc/postfix-blue in either .bash_profile or the Procmail script that handles this email, though, I get this error: sendmail: fatal: User account5(###) is not allowed to submit mail I've read the manuals on Postfix.org, searched Google, and tried the suggestions in three previous answers here on ServerFault.com: Postfix - specify interface to deliver outbound mail on Postfix user is not allowed to submit mail Postfix rejects php mails I have been careful to stop and restart Postfix after each configuration change, and tested the results. Nothing has worked. The main postfix instance happily accepts outbound email from account5. The postfix-blue instance continues to reject email from account5 with the sendmail error above. As tempting as it is to blame machine hostility, I know that I must be missing something or doing something wrong. Does anybody have any suggestions as to what it might be? Please feel free to ask for further information about my setup if you need it. 
=-=-=-=-=-=-=-=-=-= At the request of the responder, here are main.cf and master.cf for a) the main postfix instance ("red.example.com") and b) the FBL instance ("blue.example.us") [NOTE: All parameters not specified below were left at the default Postfix 2.6 settings] MAIN: master.cf smtp inet n - n - - smtpd main.cf myhostname = red.example.com mydomain = example.com inet_interfaces = $myhostname, localhost inet_protocols = all lmtp_host_lookup = native smtp_host_lookup = native ignore_mx_lookup_error = yes mydestination = $myhostname, localhost.$mydomain, localhost local_recipient_maps = mynetworks = 192.168.34.104/32 relay_domains = example.com, example.info, example.net, example.org, example.us relayhost = [192.168.34.102] # Separate physical server, main mailserver. relay_recipient_maps = hash:/etc/postfix/relay_recipients alias_maps = hash:/etc/aliases alias_database = hash:/etc/aliases smtpd_banner = $myhostname ESMTP $mail_name multi_instance_wrapper = ${command_directory}/postmulti -p -- multi_instance_enable = yes multi_instance_directories = /etc/postfix-green /etc/postfix-pink /etc/postfix-orange /etc/postfix-blue FBL: master.cf 184.173.119.103:25 inet n - n - - smtpd main.cf myhostname = blue.example.us mydomain = blue.example.us <= Deliberately set to subdomain only. myorigin = $mydomain inet_interfaces = $myhostname lmtp_host_lookup = native smtp_host_lookup = native ignore_mx_lookup_error = yes mydestination = $myhostname local_recipient_maps = unix:passwd.byname $alias_maps $virtual_alias_maps mynetworks = 192.168.36.51/32, 192.168.35.20/31 <= Second IP is backup MX servers relay_domains = $mydestination recipient_canonical_maps = hash:/etc/postfix-blue/canonical virtual_alias_maps = hash:/etc/postfix-fbl/virtual alias_maps = hash:/etc/aliases, hash:/etc/postfix-blue/canonical alias_maps = hash:/etc/aliases, hash:/etc/postfix-blue/canonical mailbox_command = /usr/bin/procmail -a "$EXTENSION" DEFAULT=$HOME/Mail/ MAILDIR=$HOME/Mail smtpd_banner = $myhostname ESMTP $mail_name authorized_submit_users = multi_instance_name = postfix-blue multi_instance_enable = yes
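
    One detail in the posted FBL main.cf stands out: "authorized_submit_users =" is set to an empty value, and an empty list tells that instance's sendmail/postdrop pair to refuse submissions from every local account, which matches the "User account5 is not allowed to submit mail" error exactly. A sketch of the change, using the names and paths from the question:

        # /etc/postfix-blue/main.cf
        # either delete the line to fall back to the default (static:anyone),
        # or name the accounts that may submit through this instance:
        authorized_submit_users = root, account5

        # reload just this instance:
        postfix -c /etc/postfix-blue reload

        # and submit against it explicitly from the script or account:
        MAIL_CONFIG=/etc/postfix-blue /usr/sbin/sendmail -oi -t < message.txt

    If sendmail then complains about the non-default configuration directory instead, the main instance's main.cf may also need the directory authorized (e.g. via alternate_config_directories), but that is a secondary guess; the empty authorized_submit_users is the primary suspect.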

    Read the article

  • volume group disappeared after xfs_check run

    - by John P
    EDIT** I have a volume group consisting of 5 RAID1 devices grouped together into a lvm and formatted with xfs. The 5th RAID device lost its RAID config (cat /proc/mdstat does not show anything). The two drives are still present (sdj and sdk), but they have no partitions. The LVM appeared to be happily using sdj up until recently. (doing a pvscan showed the first 4 RAID1 devices + /dev/sdj) I removed the LVM from the fstab, rebooted, then ran xfs_check on the LV. It ran for about half an hour, then stopped with an error. I tried rebooting again, and this time when it came up, the logical volume was no longer there. It is now looking for /dev/md5, which is gone (though it had been using /dev/sdj earlier). /dev/sdj was having read errors, but after replacing the SATA cable, those went away, so the drive appears to be fine for now. Can I modify the /etc/lvm/backup/dedvol, change the device to /dev/sdj and do a vgcfgrestore? I could try doing a pvcreate --uuid KZron2-pPTr-ZYeQ-PKXX-4Woq-6aNc-AG4rRJ /dev/sdj to make it recognize it, but I'm afraid that would erase the data on the drive UPDATE: just changing the pv to point to /dev/sdj did not work vgcfgrestore --file /etc/lvm/backup/dedvol dedvol Couldn't find device with uuid 'KZron2-pPTr-ZYeQ-PKXX-4Woq-6aNc-AG4rRJ'. Cannot restore Volume Group dedvol with 1 PVs marked as missing. Restore failed. pvscan /dev/sdj: read failed after 0 of 4096 at 0: Input/output error Couldn't find device with uuid 'KZron2-pPTr-ZYeQ-PKXX-4Woq-6aNc-AG4rRJ'. Couldn't find device with uuid 'KZron2-pPTr-ZYeQ-PKXX-4Woq-6aNc-AG4rRJ'. Couldn't find device with uuid 'KZron2-pPTr-ZYeQ-PKXX-4Woq-6aNc-AG4rRJ'. Couldn't find device with uuid 'KZron2-pPTr-ZYeQ-PKXX-4Woq-6aNc-AG4rRJ'. PV /dev/sdd2 VG VolGroup00 lvm2 [74.41 GB / 0 free] PV /dev/md2 VG dedvol lvm2 [931.51 GB / 0 free] PV /dev/md3 VG dedvol lvm2 [931.51 GB / 0 free] PV /dev/md0 VG dedvol lvm2 [931.51 GB / 0 free] PV /dev/md4 VG dedvol lvm2 [931.51 GB / 0 free] PV unknown device VG dedvol lvm2 [1.82 TB / 63.05 GB free] Total: 6 [5.53 TB] / in use: 6 [5.53 TB] / in no VG: 0 [0 ] vgscan Reading all physical volumes. This may take a while... 
/dev/sdj: read failed after 0 of 4096 at 0: Input/output error /dev/sdj: read failed after 0 of 4096 at 2000398843904: Input/output error Found volume group "VolGroup00" using metadata type lvm2 Found volume group "dedvol" using metadata type lvm2 vgdisplay dedvol --- Volume group --- VG Name dedvol System ID Format lvm2 Metadata Areas 5 Metadata Sequence No 10 VG Access read/write VG Status resizable MAX LV 0 Cur LV 1 Open LV 0 Max PV 0 Cur PV 5 Act PV 5 VG Size 5.46 TB PE Size 4.00 MB Total PE 1430796 Alloc PE / Size 1414656 / 5.40 TB Free PE / Size 16140 / 63.05 GB VG UUID o1U6Ll-5WH8-Pv7Z-Rtc4-1qYp-oiWA-cPD246 dedvol { id = "o1U6Ll-5WH8-Pv7Z-Rtc4-1qYp-oiWA-cPD246" seqno = 10 status = ["RESIZEABLE", "READ", "WRITE"] flags = [] extent_size = 8192 # 4 Megabytes max_lv = 0 max_pv = 0 physical_volumes { pv0 { id = "Msiee7-Zovu-VSJ3-Y2hR-uBVd-6PaT-Ho9v95" device = "/dev/md2" # Hint only status = ["ALLOCATABLE"] flags = [] dev_size = 1953519872 # 931.511 Gigabytes pe_start = 384 pe_count = 238466 # 931.508 Gigabytes } pv1 { id = "ZittCN-0x6L-cOsW-v1v4-atVN-fEWF-e3lqUe" device = "/dev/md3" # Hint only status = ["ALLOCATABLE"] flags = [] dev_size = 1953519872 # 931.511 Gigabytes pe_start = 384 pe_count = 238466 # 931.508 Gigabytes } pv2 { id = "NRNo0w-kgGr-dUxA-mWnl-bU5v-Wld0-XeKVLD" device = "/dev/md0" # Hint only status = ["ALLOCATABLE"] flags = [] dev_size = 1953519872 # 931.511 Gigabytes pe_start = 384 pe_count = 238466 # 931.508 Gigabytes } pv3 { id = "2EfLFr-JcRe-MusW-mfAs-WCct-u4iV-W0pmG3" device = "/dev/md4" # Hint only status = ["ALLOCATABLE"] flags = [] dev_size = 1953519872 # 931.511 Gigabytes pe_start = 384 pe_count = 238466 # 931.508 Gigabytes } pv4 { id = "KZron2-pPTr-ZYeQ-PKXX-4Woq-6aNc-AG4rRJ" device = "/dev/md5" # Hint only status = ["ALLOCATABLE"] flags = [] dev_size = 3907028992 # 1.81935 Terabytes pe_start = 384 pe_count = 476932 # 1.81935 Terabytes } }
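
    For what it's worth, the sequence usually suggested for this situation is pvcreate with both the old UUID and the metadata backup file, which rewrites only the PV label/metadata area and leaves the data extents alone, followed by vgcfgrestore; a sketch using the values from the question, only worth attempting once it is certain /dev/sdj is the surviving member LVM had been using (and ideally after imaging the disk):

        pvcreate --uuid KZron2-pPTr-ZYeQ-PKXX-4Woq-6aNc-AG4rRJ \
                 --restorefile /etc/lvm/backup/dedvol /dev/sdj
        vgcfgrestore --file /etc/lvm/backup/dedvol dedvol
        vgchange -ay dedvol
        xfs_repair -n /dev/dedvol/<lv-name>    # read-only check first; <lv-name> from lvdisplay

    The reason this can work at all is that the old /dev/md5 was a RAID1 whose member data starts at the same offset as the bare disk (which is why pvscan could see the PV on /dev/sdj earlier). The earlier read errors on sdj are the bigger risk, so checking the drive with smartctl before writing anything to it would be prudent.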

    Read the article

  • I am starting to think that Prevx.com isn't a legit site... but here's my long-winded question

    - by cop1152
    I apologize in advance for the long-winded post. I posted it all because I believe it's informative and may be useful. Also, I posted my question at the end.

    Moments ago I was connected via RDC to a file server in my home (from inside my home). I had opened Firefox and Googled for a manufacturer's website. Immediately after clicking the link, Firefox abruptly closed. This seemed odd to me, so I checked the running processes and discovered d.exe, e.exe, and f.exe running. I Googled these processes on a different machine and found them belonging to a key-logger/screen-capturer/trojan called defender.exe, which according to Prevx lives in c:\documents and settings\user\local settings\temp. (Prevx link http://www.prevx.com/filenames/147352809685142526-X1/DEFENDER32.EXE.html) Simultaneously, an obviously-spoofed Windows Firewall popup appeared on the server asking me to click ‘yes’ to update Windows Firewall. At this point I ended all rogue processes, emptied the temp folder, removed defender.exe from startup, and checked my registry and a few other locations. Before deleting defender.exe I noted that it had been created moments earlier, just before Firefox crashed.

    I believe that I was ‘almost’ infected with this malware. I believe that it needed me to click the phony popup in order to complete infection, because it wasn't allowed to execute processes from the temp folder. After cleaning the machine, I restarted it and have been monitoring it for over an hour. I am debating whether to restore the Windows partition (a separate physical drive from the data) or to just watch it for a while.

    I should mention that, because of the specs on this machine, I do not run antivirus software, but I know it well and inspect it regularly. It is a very old Compaq with a 400MHz processor and 512MB of RAM. I have a static IP and the server is in the DMZ running an FTP client and some HTTP server software. All files transferred to and stored on this machine are scanned for malware before transferring. Usually the machine only runs 19 processes and performs pretty well for its intended purpose.

    I posted the story so that you could be aware of a possible new piece of malware and how it acts, but I also have a question or two. First, over the last few months I have noticed that Prevx is listed at the top of most of my Google searches when researching malware, especially new or obscure malware… and they always want you to purchase something. I don't think they are one of the top AV companies, so it seems odd that they are always the top Google result. Does anyone have any experience with any of their products? Also, what sites do you rely on for malware research? Recently, I have found it difficult to find good info because of HijackThis logs and other dead-end info cluttering up my searches. And lastly, besides antivirus, a third-party firewall, etc., what settings would you use to lock down a machine to make it more secure in instances where a stubborn admin like myself refuses to run AV? Thanks.

    Read the article

  • Lighttpd with FastCGI configuration running ViewVC - rewrite problems

    - by 0xC0000022L
    At the moment I am struggling with the configuration of lighttpd together with ViewVC. The configuration was ported from Apache 2.2.x, which is still running on the machine, serving the WebDAV/SVN stuff, being proxied through. Now, the problem I am having appears to be with the rewrite rules and I'm not really sure what I am missing here. Here's my configuration (slightly condensed to keep it concise): var.hgwebfcgi = "/var/www/vcs/bin/hgweb.fcgi" var.viewvcfcgi = "/var/www/vcs/bin/wsgi/viewvc.fcgi" var.viewvcstatic = "/var/www/vcs/templates/docroot" var.vcs_errorlog = "/var/log/lighttpd/error.log" var.vcs_accesslog = "/var/log/lighttpd/access.log" $HTTP["host"] =~ "domain.tld" { $SERVER["socket"] == ":443" { protocol = "https://" ssl.engine = "enable" ssl.pemfile = "/etc/lighttpd/ssl/..." ssl.ca-file = "/etc/lighttpd/ssl/..." ssl.use-sslv2 = "disable" setenv.add-environment = ( "HTTPS" => "on" ) url.rewrite-once += ("^/mercurial$" => "/mercurial/" ) url.rewrite-once += ("^/$" => "/viewvc.fcgi" ) alias.url += ( "/viewvc-static" => var.viewvcstatic ) alias.url += ( "/robots.txt" => var.robots ) alias.url += ( "/favicon.ico" => var.favicon ) alias.url += ( "/mercurial" => var.hgwebfcgi ) alias.url += ( "/viewvc.fcgi" => var.viewvcfcgi ) $HTTP["url"] =~ "^/mercurial" { fastcgi.server += ( ".fcgi" => ( ( "bin-path" => var.hgwebfcgi, "socket" => "/tmp/hgwebdir.sock", "min-procs" => 1, "max-procs" => 5 ) ) ) } else $HTTP["url"] =~ "^/viewvc\.fcgi" { fastcgi.server += ( ".fcgi" => ( ( "bin-path" => var.viewvcfcgi, "socket" => "/tmp/viewvc.sock", "min-procs" => 1, "max-procs" => 5 ) ) ) } expire.url = ( "/viewvc-static" => "access plus 60 days" ) server.errorlog = var.vcs_errorlog accesslog.filename = var.vcs_accesslog } } Now, when I access the domain.tld, I correctly see the index of the repositories. However, when I look at the links for each respective repository (or click them, for that matter), it's of the form https://domain.tld/viewvc.fcgi/reponame instead of the intended https://domain.tld/reponame. What do I have to change/add to achieve this? Do I have to "abuse" the index file mechanism somehow? Goal is to keep the /mercurial alias functional. So far I've tried sifting through the lighttpd book from Packt again, also through the lighttpd documentation, but found nothing that seemed to match the problem.
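
    The repository links come out as /viewvc.fcgi/reponame because that is the SCRIPT_NAME ViewVC sees, so the clean-URL setup has two halves: rewrite bare /reponame requests onto the FastCGI handler, and make the application generate links without the /viewvc.fcgi prefix. A sketch of the lighttpd half, added alongside the existing rewrites (the alternation in the first rule just protects the aliases already defined above):

        url.rewrite-once += (
            "^/(viewvc-static.*|viewvc\.fcgi.*|mercurial.*|favicon\.ico|robots\.txt)$" => "/$1",
            "^/(.*)$" => "/viewvc.fcgi/$1"
        )

    For the second half, the usual trick with a WSGI-style wrapper such as viewvc.fcgi is to blank out SCRIPT_NAME in the request environment before the application runs, so generated URLs are rooted at /; that part depends on what the wrapper script looks like, so treat it as a pointer rather than a recipe.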

    Read the article

  • WNDR3700 Router + Cisco SG200-08 + LACP + Dual Uplink

    - by kobaltz
    Background I have a storage server that has several virtual machine images stored on them. I would store them locally, but I have limited space on my desktop (using SSD storage). I would like to increase the bandwidth between the desktop and the storage server by using two NICs on each computer. My original configuration allowed about 55MBps between the desktop and storage server. This storage server also has several TBs of documents, pictures, movies, vms, and ISO/programs. The storage server has 8 1.5TB hard drives in a RAID 10 configuration with a hardware RAID controller. The benchmarks on the RAID 10 are about 300MBps. Configuration In short, I am trying to bridge my switch and router. The switch is a small 8 port Cisco smart switch that supports 802.3ad LACP. I have two computers plugged into the switch, each with 2 Intel Gigabit NICs. The first computer is a Windows 7 machine that has the Intel ANS software installed. I have LACP configured with the computer and now show 3 NICs (2 Physical + 1 TEAM Virtual @ 2Gbps). It looks like this computer is configured correctly. I trunked the two ports that this computer is plugged into with the switch's web interface. The second computer is a homebrew storage box running debian. I also have the bonding enabled on this machine and the switch configured with LACP. Without having the WNDR3700 router in the picture yet, I am able to communicate between the Windows 7 machine and the debian box since they both have static IP addresses. With LACP enabled on both machines I am getting about 106-108MBps speeds. Issue I plug in a network cable from the switch into the router and enable DHCP on the desktop. I saw no need to have a static address on the desktop. My transfer rates are still from 106MBps-108MBps. While this is still a boost, I am trying to figure out how to get about 140-180MBps. I am thinking that I need to increase the bandwidth from the router to the switch. My switch allows 4 groups for port trunking. I plugged in a second network cable from the router to the switch. My question is, what is the proper way to fix this issue. Should I port trunk the two ports that are going from the switch to the router? Keep in mind that the router is a WNDR3700 and is unsure whether or not it supports LACP. I do have OpenWRT installed on the router, but it still wasn't clear in any documentation that I found if it supported 802.3ad LACP standards. I am also wondering if there needs to be anything changed within the Cisco settings. [Edit] - Corrected some numbers, wasn't really paying attention. It looks like the speeds though at least two NICs are bonded with LACP is still reaching the max bandwidth of one port. Is there a way to configure the switch so that I can increase this bandwidth? Also, on the storage server, I had a couple of extra NICs laying around and threw them on there as well. Another EDIT and More Findings I happened to look at the traffic of each individual NIC and think that I see the problem. I tested with a simple transfer for a 4GB file. I noticed that only one of the NICs was taking the load of the traffic. I then copied the file back to the Storage Server and noticed that the other NIC was sending out the traffic. I have 802.3ad LACP enabled on the two NICs and I see that it gets enabled dynamically on the switch's interface. Should I be using Static Link Aggregation?
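
    Worth keeping in mind: 802.3ad/LACP balances per flow (the hash is computed from MAC/IP/port), so a single file copy is one flow and can never exceed one member link, which matches the ~106-108MBps (one saturated gigabit port) observed above; bonding still helps when several transfers run at once. On the Debian side, a hedged sketch of the usual ifenslave configuration with a layer3+4 hash so different connections can land on different links (interface names and the address are illustrative):

        # /etc/network/interfaces  (package "ifenslave" installed)
        auto bond0
        iface bond0 inet static
            address 192.168.1.50
            netmask 255.255.255.0
            bond-slaves eth0 eth1
            bond-mode 802.3ad
            bond-miimon 100
            bond-lacp-rate fast
            bond-xmit-hash-policy layer3+4   # hash on IP:port instead of MAC only

    The same per-flow rule applies on the switch and on the Intel ANS team, so pushing a single transfer past roughly 110MBps realistically needs multiple parallel streams (or multichannel file protocols on newer Windows versions) rather than a different trunking mode; static link aggregation only changes how the trunk is negotiated, not how traffic is hashed across it.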

    Read the article

  • Managing hosts and iptables in scalable architecture

    - by hakunin
    Let's say I have a load balancer in front of 3 app servers. Let's say I also have these services available at certain IPs: Postgres server Redis server ElasticSearch server Memcached server 1 Memcached server 2 Memcached server 3 So that's 6 nodes at 6 different IP addresses. Naturally, every one of my 3 app servers needs to talk to these 6 servers above. Then, to make it a bit funkier, I also have 3 worker servers. And each worker also talks to the above 6 servers, but thankfully workers and apps never need to talk to each other. Now's the kicker. Everything is on Digital Ocean VPS. What that means is: you have no private network, no private IPs. You only have separate, random IP address on each machine. You can't mask them or anything. So in order to build a secure environment I would have to configure some iptables. For example: Open app servers be accessed by load balancer server Open redis, ES, PG, and each memcached servers to be accessed by each app's IP and each worker's IP This means that every time I add an app or worker I have to also reconfigure iptables in those above 6 servers to welcome the new app or worker. Is there a way to simplify this type of setup? I was thinking — what if there was a gateway machine between apps/workers and the above 6 machines. This way all the interaction would always happen via the gateway server, and when I add a new app or worker I wouldn't need to teach the 6 servers to let it in. If I went this route, then I'd hope a small 512mb server could handle that perhaps, and there wouldn't be almost any overhead. Or would there? Please help with best way to handle this situation. I would appreciate an answer as concrete as possible. I don't think this is too specific, because this general architecture is very common, and Digital Ocean is becoming increasingly popular. A concrete solution here would be much appreciated by many.
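
    One way to stop editing six rule sets every time an app or worker is added is to have each backend filter against a named ipset instead of hard-coded addresses; provisioning a new droplet then becomes a single "ipset add" on each backend, which a config-management tool or a small SSH loop can push. A sketch for one service port, assuming reasonably recent iptables/ipset packages (the set name and example IP are arbitrary):

        # once, on each backend (Postgres/Redis/ES/memcached box):
        ipset create app_servers hash:ip
        iptables -I INPUT -p tcp --dport 6379 -m set --match-set app_servers src -j ACCEPT
        iptables -A INPUT -p tcp --dport 6379 -j DROP
        # repeat the pair of rules per service port (5432, 9200, 11211, ...)

        # whenever a new app or worker server comes up:
        ipset add app_servers 203.0.113.42

    The other common answer on a provider without private networking is a small mesh VPN (tinc, OpenVPN, or similar) so the backends only listen on the VPN interface; that trades the iptables bookkeeping for key distribution, and a single 512MB gateway would indeed become a bandwidth choke point if all traffic were funneled through it, so a mesh is usually preferable to a single gateway box.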

    Read the article

  • Rails /tmp/cache/assets permissions issue using Debian virtual machine hosted on OS X Lion

    - by Jim
    I am running Parallels Desktop 7 on OS X Lion. I have a VM with Debian installed, and inside that VM I setup a Rails development environment. I am using Parallels Tools to share out my OS X home directory to the VM - the goal here is to run the Rails server on the VM, but host the files on OS X (so they are automatically backed up, and so I can use tools like Textmate to develop with). Everything seems to work with the shared directory - my Debian user can read, write, and execute files. However, when I cloned a recent Rails project from Git, I got an error message when it tried to compile the CSS assets. My symptoms are exactly the same as in the question: http://stackoverflow.com/questions/7556774/rails-sprocket-error-compiling-css-assest-chown-issue I believe this is permissions-based, but it is really weird. My entire Rails project directory has permissions set to 777 and my Debian user owns it. If I navigate into /tmp/cache/assets, those permissions are the same. However, the three-character directories Rails is creating (DCE, DA1, D05, etc...) are being created without write permissions! If I refresh the Rails page a few times, about 4 or 5 (with Rails creating new three-character directories every time), eventually it will create one of the directories with the proper 777 permissions and everything will work! This will persist until I make a change to the CSS files and it has to recompile. Does anyone have any idea what might be going on here? I can't fathom why it is creating temp directories with incorrect permissions, or why after a few refreshes the good permissions kick in and it works... It definitely seems to be an issue with the share, since if I move the project into a different directory on the VM, it seems to work fine. On the OS X side, I've given the shared folder 777 permissions as well, but no dice...any ideas? Update I've found that the number of times I need to refresh before it works is not random - it has to do with how many assets are being compiled. For example, if I edit one of my CSS files, and there are four CSS files in the app/assets/stylesheets directory, I have to refresh four times before the app will finally work without the operation not permitted error...
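
    A workaround that is often enough here is to keep the project itself on the OS X share but move tmp/ (where Sprockets writes those three-character cache directories) onto the VM's native filesystem, since the shared-folder driver appears to be what mangles the permissions on freshly created directories. A sketch, with the local path purely illustrative and assuming the shared-folder filesystem allows creating symlinks from the guest:

        # on the Debian VM, from the project root:
        mkdir -p ~/rails-tmp/myproject
        mv tmp ~/rails-tmp/myproject-tmp-backup    # keep the old contents just in case
        ln -s ~/rails-tmp/myproject tmp

    The symlinked tmp/ never needs to be backed up or edited from OS X, so nothing is lost by taking it off the share; if the share refuses symlinks, the equivalent is to point the app's temp/cache paths at a local directory instead.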

    Read the article

  • IT merger - self-sufficient site with domain controller VS thin clients outpost with access to termi

    - by imagodei
    SITUATION: A larger company acquires a smaller one. IT infrastructure has to be merged. There are no immediate plans to change the current size or role of the smaller company - the offices and production remain. It has a Win 2003 SBS domain server, a Win 2000 file server, a Linux server for SVN and an internal wiki, 2 or 3 production machines, and an LTO backup solution. The servers are approx. 5 years old. Cisco network equipment (switches, wireless, ASA). The mail solution is a hosted Exchange. There are approx. 35 desktops and laptops in the company.

    IT infrastructure unification: There are 2 IT merging proposals.

    1.) Replacing the old servers, installing a Win Server 2008 domain controller, and setting up either a subdomain or a domain trust to the larger company. The file server and other servers remain local, and synchronization should be set up to a centralized location at the larger company. Similarly with the backup - it remains local and, if needed, is replicated to a centralized location. Licensing is managed by the smaller company.

    2.) All servers are moved to a centralized location at the larger company. As many desktop machines as possible are replaced by thin clients. The actual machines are virtualized and hosted by a terminal server at the same central location. Citrix solutions will be used. Only a router and a site-to-site VPN connection remain at the smaller company. A backup internet line is needed to ensure near-100% availability. Licensing is mainly managed by the larger company; only specialized software for PCs that will not be virtualized is managed by the smaller company.

    I'd like to ask you to discuss both solutions a bit. In your opinion, which is better from the operational point of view? Which is more reliable and cheaper in the long run? Easier to manage from the system administrator's point of view? Easier on the budget and easier to maintain from the IT department's point of view? Does anybody have experience with the second option, and how does it perform in a production environment? Pros and cons of both? Your input will be of great significance to me. Thank you very much!

    Read the article

  • Distributed and/or Parallel SSIS processing

    - by Jeff
    Background: Our company hosts SaaS DSS applications, where clients provide us data Daily and/or Weekly, which we process & merge into their existing database. During business hours, load in the servers are pretty minimal as it's mostly users running simple pre-defined queries via the website, or running drill-through reports that mostly hit the SSAS OLAP cube. I manage the IT Operations Team, and so far this has presented an interesting "scaling" issue for us. For our daily-refreshed clients, the server is only "busy" for about 4-6 hrs at night. For our weekly-refresh clients, the server is only "busy" for maybe 8-10 hrs per week! We've done our best to use some simple methods of distributing the load by spreading the daily clients evenly among the servers such that we're not trying to process daily clients back-to-back over night. But long-term this scaling strategy creates two notable issues. First, it's going to consume a pretty immense amount of hardware that sits idle for large periods of time. Second, it takes significant Production Support over-head to basically "schedule" the ETL such that they don't over-lap, and move clients/schedules around if they out-grow the resources on a particular server or allocated time-slot. As the title would imply, one option we've tried is running multiple SSIS packages in parallel, but in most cases this has yielded VERY inconsistent results. The most common failures are DTExec, SQL, and SSAS fighting for physical memory and throwing out-of-memory errors, and ETLs running 3,4,5x longer than expected. So from my practical experience thus far, it seems like running multiple ETL packages on the same hardware isn't a good idea, but I can't be the first person that doesn't want to scale multiple ETLs around manual scheduling, and sequential processing. One option we've considered is virtualizing the servers, which obviously doesn't give you any additional resources, but moves the resource contention onto the hypervisor, which (from my experience) seems to manage simultaneous CPU/RAM/Disk I/O a little more gracefully than letting DTExec, SQL, and SSAS battle it out within Windows. Question to the forum: So my question to the forum is, are we missing something obvious here? Are there tools out there that can help manage running multiple SSIS packages on the same hardware? Would it be more "efficient" in terms of parallel execution if instead of running DTExec, SQL, and SSAS same machine (with every machine running that configuration), we run in pairs of three machines with SSIS running on one machine, SQL on another, and SSAS on a third? Obviously that would only make sense if we could process more than the three ETL we were able to process on the machine independently. Another option we've considered is completely re-architecting our SSIS package to have one "master" package for all clients that attempts to intelligently chose a server based off how "busy" it already is in terms of CPU/Memory/Disk utilization, but that would be a herculean effort, and seems like we're trying to reinvent something that you would think someone would sell (although I haven't had any luck finding it). So in summary, are we missing an obvious solution for this, and does anyone know if any tools (for free or for purchase, doesn't matter) that facilitate running multiple SSIS ETL packages in parallel and on multiple servers? (What I would call a "queue & node based" system, but that's not an official term). 
Ultimately VMWare's Distributed Resource Scheduler addresses this as you simply run a consistent number of clients per VM that you know will never conflict scheduleing-wise, then leave it up to VMWare to move the VMs around to balance out hardware usage. I'm definitely not against using VMWare to do this, but since we're a 100% Microsoft app stack, it seems like -someone- out there would have solved this problem at the application layer instead of the hypervisor layer by checking on resource utilization at the OS, SQL, SSAS levels. I'm open to ANY discussion on this, and remember no suggestion is too crazy or radical! :-) Right now, VMWare is the only option we've found to get away from "manually" balancing our resources, so any suggestions that leave us on a pure Microsoft stack would be great. Thanks guys, Jeff

    Read the article

  • How to install Delorme StreetAtlas (any version) + GPS inside VirtualBox VM?

    - by hotei
    When I try to run the install program I get a popup message that says the installer program is not a valid executable. Background: I want a GPS with maps on my laptop running Ubuntu 10.4LTS. Unfortunately I can't find a decent native Linux GPS solution with 50 state US street level coverage. I have VirtualBox VMs available for WinXP and Win7 (among others). The VMs work fine with MicroSoft Streets and Trips (2010) and MapNGo 5 (a very! old Delorme product), but while both these products support GPS, they don't support the Earthmate LT-40 USB GPS I already have. I've got pretty much every Delorme Street Atlas they've released in the last decade and none of them will install in a VM. Any help would be much appreciated. Clarification: I've installed the Delorme products from these CDs before and the disks are fine - as long as installation is done on a "physical" machine. Added: I've tried install from an iso as well as the real CD. No difference in result (setup.exe is not a valid executable) The WinXP is SP-2 (held back on purpose at this point - I'll snapshot and fork a later SP to test). The Win2K is SP-6a. Win7(32) VM is whatever updates came out last week. The USB setup is working at least to the point where the GPS device is active in the device list (has an x in the box). At this point its not relevant because the program that needs to read it can't even be installed. Added 9-19: Added wine as harrymc suggested. Initial result was no change. Here's wines error message. The file '/media/Disk1/setup.exe' is not marked as executable. If this was downloaded or copied form an untrusted source, it may be dangerous to run. For more details, read about the executable bit. At first I thought the execute bit was the problem, but looking at several other windows CDs I see that the execute bit is not set on their exe files (which install to VM without error). Still it was worth a shot so I copied the StreetAtlas 9 DVD to my hard disk, changed the on-disk exe files to have the execute bit set and tried to install again. This time the install via wine got me through the installation process. When I start the program it bombs immediately, so we haven't made much real progress so far. I very much prefer the VM solution to wine, so I'm going back to that for now. To recap the VM situation, using an updated XP with SP3 and all recommended hotfixes: StreetAtlas 2009 USA fails with "not marked as executable". StreetAtlas 2007 USA fails with "not marked as executable". StreetAtlas 9 (copyright 2001) fails with "not marked as executable". SteeetAtlas (copyright 1991) fails with "not marked as executable" Delorme Topo 4 (copyright 2002) fails with "not marked as executable". Just about ready to give up. So I switched from XP VM to Win7 VM and tried StreetAtlas 2009 again. This time it installs. Earthmate USB GPS works. WTH? I feel like the monkey who just wrote a line of Shakespear. I'm smiling because it worked, but I have no clue why. I'm awarding the bounty to harrymc because wine did give some useful insight into the problem and a +1 to goyiux as thanks for helping.
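
    One more variable worth eliminating on the wine side is how the disc or ISO is mounted: ISO9660 mounts frequently come up without execute permission on the .exe files, which produces exactly the "not marked as executable" warning, and mounting with explicit options avoids the copy-and-chmod step. A sketch with illustrative paths:

        sudo mkdir -p /mnt/delorme
        sudo mount -o loop,ro,exec,unhide,mode=0555 /path/to/StreetAtlas.iso /mnt/delorme
        wine /mnt/delorme/setup.exe

    This is only a guess at why the copied-and-chmodded install got further than the straight CD run; it does not explain the XP-vs-Win7 difference inside VirtualBox, which looks more like an installer compatibility quirk than a permissions issue.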

    Read the article

  • If Nvidia Shield can stream a game via wifi, why can I not do the same via ethernet to any other PC?

    - by Enigma
    I think it absurd that a wireless game streaming solution is the *first to hit the market when a 1000mbps+ Ethernet connection would accomplish the same feat with roughly 6x the available bandwidth. I can only assume that there must be some reason behind this or a limitation preventing this, but what? 150mbps wifi is in no way superior to a 1000mbps LAN connection aside from well wireless mobility. Not only that but I have a secondary laptop and desktop which should by hardware comparison completely outperform anything the Tegra in the Nvidia Shield can do. Is this all just a marketing scheme to force people to buy the shield for the streaming benefit? Chief among these is that NVIDIA’s Shield handheld game console will be getting a microconsole-like mode, dubbed “Shield Console Mode”, that will allow the handheld to be converted into a more traditional TV-connected console. In console mode Shield can be controlled with a Bluetooth controller, and in accordance with the higher resolution of TVs will accept 1080p game streaming from a suitably equipped PC, versus 720p in handheld mode. With that said 1080p streaming will require additional bandwidth, and while 720p can be done over WiFi NVIDIA will be requiring a hardline GigE connection for 1080p streaming (note that Shield doesn’t have Ethernet, so this is presumably being done over USB). Streaming aside, in console mode Shield will also support its traditional local gaming/application functionality. - http://www.anandtech.com/show/7435/nvidia-consolidates-game-streaming-tech-under-gamestream-brand-announces-shield-console-mode ^ This is not acceptable for me for a number of reasons not to mention the ridiculousness of having a little screen+controller unit sitting there while using a secondary controller and screen instead. That kind of redundant absurdity exemplifies how wrong of a solution that is. They need a second product for this solution without the screen or controller for it to make sense... at which point your just buying a little computer that does what most other larger computers do better. All that is required, by my understanding, is the ability to decode H.264 video compression and transmit control/feedback so by any logical comparison, one (Nvidia especially) should have no difficulty in creating an application for PC's (win32/64 environment) that does the exact same thing their android app does. I have 2 video cards capable of streaming (encoding) H.264 so by right they must be capable of decoding it I would think. I haven't found anything stating plans to allow non-shield owners to do this. Can a third party create this software or does it hinge on some limitation that only Nvidia can overcome? (*) - perhaps this isn't the first but afaik it is the first complete package.

    Read the article

  • How to use iptables to forward all data from an IP to a Virtual Machine

    - by jro
    OK, in an attempt to get some response, a TL;DR version. I know that the following command: iptables -A PREROUTING -t nat -i eth0 --dport 80 --source 1.1.1.1 -j REDIRECT --to-port 8080 ... will redirect all traffic from port 80 to port 8080. The problem is that I have to do this for every port that is to be redirected. To be future-proof, I want all ports for an IP to be redirected to a different (internal) IP, so that if one might decide to enable SSH, they can directly connect without worrying about iptables. What is needed to reliable forward all traffic from an external IP, to an internal IP, and vice versa? Extended version I've scoured the internet for this, but I never got a solid answer. What I have is one physical server (HOST), with several virtual machines (VM) that need traffic redirected to them. Just getting it to work with a single machine is enough for now. The VM's run under VirtualBox, and are set to use a host-only adapter (vboxnet0). Everything seems to work, but it is greatly lagging. Both the host (CentOS 5.6) and the guest (Ubuntu 10.04) machine are running Linux. What I did was the following: Configure the VM to have a static IP in the network of the vboxnet0 adapter. Add an IP alias to the host, registering to the dedicated (outside) IP. Setup iptables to allow traffic to come through (via sysctl). Configure iptables to DNAT and SNAT data from a given IP address to the internal address. iptables commands: sudo iptables -A FORWARD -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT sudo iptables -A POSTROUTING -t nat -j MASQUERADE iptables -t nat -I PREROUTING -d $OUT_IP -I eth0 -j DNAT --to-destination $IN_IP iptables -t nat -I POSTROUTING -s $IN_IP -o eth0 -j SNAT --to-source $OUT_IP Now the site works, but is really, really slow. I'm hoping I missed something simple, but I'm out of ideas for now. Some background info: before this, the site was working with basic port forwarding. E.g. port 80 was mapped to port 8080 using iptables. In VirtualBox (having the network adapter configured as NAT), a port forwarding the other way around made things work beautifully. The problem was twofold: first, multiple ports needed to be forwarded (for admin interfaces, https, ssh, etc). Second, it only allowed one IP address to use port 80. To resolve things, multiple external IP addresses are used for different (sub)domains. Likewise, the "VirtualBox" network will contain the virtual machines: DNS Ext. IP Adapter VM "VirtalBox" IP ------------------------------------------------------------------ a.example.com 1.1.1.1 eth0:1 vm_guest_1 192.168.56.1 b.example.com 2.2.2.2 eth0:2 vm_guest_2 192.168.56.2 c.example.com 3.3.3.3 eth0:3 vm_guest_3 192.168.56.3 And so on. Put simply, the goal is to channel all traffic from a.example.com to vm_guest_1 (of put differently, from 1.1.1.1 to 192.168.56.1). And achieve this with an acceptable speed :).
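
    A consolidated sketch of the 1:1 NAT for one guest may help, since the rules quoted above have accumulated from several experiments (note also that the PREROUTING line as posted uses a capital -I in front of eth0, where the interface match is lower-case -i). This assumes the addressing from the question and eth0 as the public interface:

        OUT_IP=1.1.1.1          # public alias, e.g. eth0:1
        IN_IP=192.168.56.1      # the VM on vboxnet0

        sysctl -w net.ipv4.ip_forward=1

        # start from a clean nat table so leftover REDIRECT/MASQUERADE rules from the
        # earlier port-forwarding setup cannot interfere (only flush if nothing else needs them):
        iptables -t nat -F

        iptables -t nat -A PREROUTING  -d $OUT_IP -i eth0 -j DNAT --to-destination $IN_IP
        iptables -t nat -A POSTROUTING -s $IN_IP  -o eth0 -j SNAT --to-source $OUT_IP

        iptables -A FORWARD -d $IN_IP -m conntrack --ctstate NEW,ESTABLISHED,RELATED -j ACCEPT
        iptables -A FORWARD -s $IN_IP -j ACCEPT

    Two hedged caveats: forwarding every port means SSH and anything else aimed at 1.1.1.1 now lands on the guest rather than the host, and the lag described is often a DNS or MTU symptom rather than a NAT one; a quick test is fetching a page by IP from outside, and MSS clamping (-j TCPMSS --clamp-mss-to-pmtu on forwarded SYNs) is the usual fix for the MTU case.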

    Read the article

  • How to make XAMPP virtual hosts accessible to VM's and other computers on LAN?

    - by martin's
    XAMPP running on Vista 64 Ultimate dev machine (don't think it matters). Machine / Browser configuration Safari, Firefox, Chrome and IE9 on dev machine IE7 and IE8 on separate XP Pro VM's (VMWare on dev machine) IE10 and Chrome on Windows 8 VM (VMware on dev machine) Safari, Firefox and Chrome running on a iMac (same network as dev) Safari, Firefox and Chrome running on a couple of Mac Pro's (same network as dev) IE7, IE8, IE9 running on other PC's on the same network as dev machine Development Configuration Multiple virtual hosts for different projects .local fake TLD for development No firewall restrictions on dev machine for Apache Some sites have .htaccess mapping www to non-www Port 80 is open in the dev machine's firewall Problem XAMPP local home page (http://192.168.1.98/xampp/) can be accessed from everywhere, real or virtual, by IP All .local sites can be accessed from the browsers on the dev machine. All .local sites can be accessed form the browsers in the XP VM's. Some .local sites cannot be accessed from IE10 or Chrome on the W8 VM Sites that cannot be accessed from W8 VM have a minimal .htaccess file No .local sites can be accessed from ANY machine (PC or Mac) on the LAN hosts on dev machine (relevant excerpt) 127.0.0.1 site1.local 127.0.0.1 site2.local 127.0.0.1 site3.local 127.0.0.1 site4.local 127.0.0.1 site5.local 127.0.0.1 site6.local 127.0.0.1 site7.local 127.0.0.1 site8.local 127.0.0.1 site9.local 192.168.1.98 site1.local 192.168.1.98 site2.local 192.168.1.98 site3.local 192.168.1.98 site4.local 192.168.1.98 site5.local 192.168.1.98 site6.local 192.168.1.98 site7.local 192.168.1.98 site8.local 192.168.1.98 site9.local httpd-vhosts.conf on dev machine (relevant excerpt) NameVirtualHost *:80 <VirtualHost *:80> ServerName localhost ServerAlias localhost *.localhost.* DocumentRoot D:/xampp/htdocs </VirtualHost> # ======================================== site1.local <VirtualHost *:80> ServerName site1.local ServerAlias site1.local *.site1.local DocumentRoot D:/xampp-sites/site1/public_html ErrorLog D:/xampp-sites/site1/logs/access.log CustomLog D:/xampp-sites/site1/logs/error.log combined <Directory D:/xampp-sites/site1> Options Indexes FollowSymLinks AllowOverride All Require all granted </Directory> </VirtualHost> NOTE: The above <VirtualHost *:80> block is repeated for each of the nine virtual hosts in the file, no sense in posting it here. hosts on all VM's and physical machines on the network (relevant excerpt) 127.0.0.1 localhost ::1 localhost 192.168.1.98 site1.local 192.168.1.98 site2.local 192.168.1.98 site3.local 192.168.1.98 site4.local 192.168.1.98 site5.local 192.168.1.98 site6.local 192.168.1.98 site7.local 192.168.1.98 site8.local 192.168.1.98 site9.local None of the VM's have any firewall blocks on http traffic. They can reach any site on the real Internet. The same is true of the real machines on the network. The biggest puzzle perhaps is that the W8 VM actually DOES reach some of the virtual hosts. It does NOT reach site2, site6 and site 9, all of which have this minimal .htaccess file. .htaccess file <IfModule mod_rewrite.c> RewriteEngine On RewriteCond %{HTTP_HOST} !^www\. RewriteRule ^(.*)$ http://www.%{HTTP_HOST}/$1 [R=301,L] </IfModule> Adding this file to any of the virtual hosts that do work on the W8 VM will break the site (only for W8 VM, not the XP VM's) and require a cache flush on the W8 VM before it will see the site again after deleting the file. 
    Regardless of whether a .htaccess file exists or not, no machine on the LAN can access anything other than the XAMPP home page via IP, even with hosts files on all machines. I can ping any virtual host from any machine on the network and get a response from the correct IP address. I can't see anything in our Netgear router that might prevent one machine from reaching another; besides, once the local hosts file resolves the name to an IP address, that's all that goes out onto the local network. I've gone through an extensive number of posts, both on SO and via Google searches, but I can't say that I have found anything definitive anywhere.
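
    A quick way to separate name resolution on the clients from vhost matching on the dev machine is to force the Host header from one of the LAN machines; if this returns the right site, Apache and the firewall are fine and the problem is purely lookup on the client side:

        # from any Mac/PC/VM on the LAN:
        curl -v -H "Host: site1.local" http://192.168.1.98/

        # compare with what the dev machine itself gets:
        curl -v http://site1.local/

    Two lookup-related suspects, offered as guesses rather than a diagnosis: on OS X the .local suffix is reserved for Bonjour/mDNS and lookups can be answered (or time out) before the hosts file is consulted, so a different development TLD such as .test often behaves better on the Macs; and the minimal .htaccess above 301-redirects site2.local to www.site2.local, a name none of the listed hosts files contain, which by itself would break those three sites for any client that follows the redirect.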

    Read the article

  • Need help recovering a corrupt SQL database

    - by user570079
    I have a very special case that I have been working on for several days. I have a very large SQL Server 2008 database (about 2 TB) that contains 500 filegroups to support very large partitioned tables. Recently we had a catastrophic failure on one of the drives, lost several filegroups, and the database became inaccessible. We have been doing filegroup backups on a daily basis but, due to other issues, we lost our most recent backup of the log and the primary filegroup. We have all the data backed up, but the primary filegroup backup is old. There have been no schema changes since the primary filegroup backup, but the LSNs are now all out of sync and we cannot recover the data.
    I have tried everything I could think of (and have tried just about every trick and hack I could google), but I still end up at the same point, with messages saying that the files for filegroup x do not match the primary filegroup. I am now at the point of trying to edit the system tables (we have a separate temporary environment to do this, so we are not worried about corrupting any production databases). I have tried updating sys.sysdbreg, sys.sysbrickfiles, and sys.sysprufiles to try to trick SQL into thinking all the files are online, but a "Select * From OPENROWSET(TABLE DBPROP, 5)" shows a different database state from what I see in sys.sysdbreg. I am now thinking I need to somehow edit the headers of the actual data files to try to line up the LSNs with the primary.
    I appreciate any help anyone can give me here, but please do not respond with things like "you are not supposed to edit mdf, ndf files...." or "see msdn article....", etc. This is an advanced emergency case and I need a real hack so we can just get to the data in this corrupt database and export it to a fresh new database. I know there is a way to do this, but not knowing what the DBPROP system function does (i.e. does it look at system tables or does it actually open the file) is keeping me from figuring out how to fool SQL into allowing me to read these files. Thanks for any help.
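    For readers hitting a less extreme version of this, the commonly cited last-resort sequence, sketched here with a hypothetical database name and with no claim that it can reconcile filegroups whose LSNs genuinely diverge, is to force the damaged copy into EMERGENCY mode and bulk-export whatever remains readable:

    # Run against the throwaway environment only; "BigDb" and "dbo.SomeTable" are placeholders.
    sqlcmd -S . -E -Q "ALTER DATABASE BigDb SET EMERGENCY"
    sqlcmd -S . -E -Q "ALTER DATABASE BigDb SET SINGLE_USER"
    # REPAIR_ALLOW_DATA_LOSS rebuilds what it can and discards what it cannot; expect partial results.
    sqlcmd -S . -E -Q "DBCC CHECKDB ('BigDb', REPAIR_ALLOW_DATA_LOSS)"
    # Export any table that is still readable into files, then load them into a fresh database.
    bcp BigDb.dbo.SomeTable out SomeTable.dat -S . -T -n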

    Read the article

  • How does the Cloud compare to Colocation? And development too

    - by David
    Currently I/we run a SaaS web application where each subscriber has their own physical instance of the application in addition to their own database. The setup has each web application instance deployed on two different IIS boxes, both for load balancing and redundancy (the machines have their Windows Update install times 12 hours apart, for example). Databases are mirrored on two different SQL Server 2012 machines with AlwaysOn for uptime. I don't make use of SQL Server clustering, as it doesn't provide storage-level failover and we don't have a shared storage box. Because it's a Windows setup, there are two Domain Controllers (we cheat: they're both Mac Minis, 17 W each, which keeps our colo power costs low). Finally there's also an Exchange server (Mailbox, Hub Transport and Client Access); one of the SQL Servers also doubles up as an Exchange Hub Transport.
    Running costs are about $700 a month for our quarter-rack colocation (which includes power and peering/transfer), then there's about $150 a month for SPLA licensing, so $850 a month in total. Then there's the hard-to-quantify cost of administration, but I reckon I spend a couple of hours a week checking in on the servers: reviewing event logs, etc.
    I keep getting bombarded by ads and manufactured news stories about how great "the cloud" is. Back in 2008, when the cloud was taking off, I was reading up on the proper "cloud" services like Google AppEngine, where you write in Python against Google's API, they scale your application across servers, and you use their database provider for scaling storage. Simple enough to understand. Then along came Amazon; I understand how Amazon Storage works, but I'm not sure how Amazon Compute works: web application pages don't take much CPU time to compute, so how do you even quantify usage?
    Finally, RackSpace gets in on the act and now I'm really confused. RackSpace advertise "Cloud" SQL Server 2012 for about "$0.70 per hour". Going by how they advertise it, I thought the "hour" meant the sum of CPU time, IO blocking time, and maybe time spent transferring data, so for a low-intensity application it would work out pretty cheap. Nope. I went onto a Sales Chat window and spoke to one of their advisors, who told me the $0.70/hour was actually for every hour the SQL Server is running... but who wants a SQL Server for only a few hours? You're going to need it available 24 hours a day for months on end. $0.70 * 24 * 31 works out at about $520 a month, which is ridiculously expensive for SQL Server; an SPLA license for SQL Server is only $50 a month or so. That $520 a month does not include "fanatical support", and you also need to stack the cost of the host Windows server instance on top. From what I can tell, Rackspace's "Cloud" products seem like a cynical rebranding of an overpriced VPS service, priced by the hour. I have the same confusion about Windows Azure, which uses similar terms to describe its products, but I think that's because Azure offers traditional shared webhosting in addition to its own APIs you can target for scalable applications.
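    The per-hour arithmetic above is easy to sanity-check; a throwaway shell calculation (the figures are the ones quoted in the question, not from any current price list) gives:

    # $0.70/hour * 24 hours * 31 days = $520.80/month for the SQL Server instance alone,
    # before adding the host Windows instance; compare with the ~$850/month all-in colo figure.
    awk 'BEGIN { printf "%.2f\n", 0.70 * 24 * 31 }'   # prints 520.80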

    Read the article

  • cakephp & nginx config/rewrite rules

    - by seanl
    Hi, somebody please help me out. I've asked this at Stack Overflow as well but haven't got much of a response, and was debating whether it was programming or server related. I'm trying to set up a CakePHP environment on a CentOS server running Nginx with FastCGI. I already have a WordPress site and a phpMyAdmin site running on the server, so I have PHP configured correctly. My problem is that I cannot get the rewrite rules set up correctly in my vhost so that Cake renders pages properly, i.e. with styling and so on. I've googled as much as possible, and the main consensus from sites like the one listed below is that I need to have the following rewrite rule in place:
    location / {
        root /var/www/sites/somedomain.com/current;
        index index.php index.html;
        # If the file exists as a static file serve it
        # directly without running all
        # the other rewrite tests on it
        if (-f $request_filename) {
            break;
        }
        if (!-f $request_filename) {
            rewrite ^/(.+)$ /index.php?url=$1 last;
            break;
        }
    }
    http://blog.getintheloop.eu/2008/4/17/nginx-engine-x-rewrite-rules-for-cakephp
    The problem is that these rewrites assume you run Cake directly out of the webroot, which is not what I want to do. I have a standard setup for each site, i.e. one folder per site containing the following folders: log, backup, private and public. Public is where Nginx looks for the files to serve, but I have Cake installed in private with a symlink in public linking back to /private/cake/. This is my vhost:
    server {
        listen 80;
        server_name app.domain.com;
        access_log /home/public_html/app.domain.com/log/access.log;
        error_log /home/public_html/app.domain.com/log/error.log;
        # configure Cake app to run in a sub-directory
        # Cake install is not in root, but elsewhere and configured
        # in APP/webroot/index.php
        location /home/public_html/app.domain.com/private/cake {
            index index.php;
            if (!-e $request_filename) {
                rewrite ^/(.+)$ /home/public_html/app.domain.com/private/cake/$1 last;
                break;
            }
        }
        location /home/public_html/app.domain.com/private/cake/ {
            index index.php;
            if (!-e $request_filename) {
                rewrite ^/(.+)$ /home/public_html/app.domain.com/public/index.php?url=$1 last;
                break;
            }
        }
        # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
        location ~ \.php$ {
            fastcgi_pass 127.0.0.1:9000;
            fastcgi_index index.php;
            fastcgi_param SCRIPT_FILENAME /home/public_html/app.domain.com/private/cake$fastcgi_script_name;
            include /etc/nginx/fastcgi_params;
        }
    }
    Now, like I said, I can see Cake's main index.php and have connected it to my DB, but the page is without styling, so before I proceed any further I would like to configure this correctly. What am I doing wrong? Thanks, seanl
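    One quick check that narrows down the "no styling" symptom, a hedged sketch assuming a default CakePHP layout where the stylesheet lives at webroot/css/cake.generic.css, is to request the page and one of its static assets directly from the server:

    # If the HTML comes back but the CSS request returns PHP output, a 404, or the wrong
    # content type, static files under the webroot are not being served by this vhost
    # (note that nginx "location" blocks match request URIs, not filesystem paths).
    curl -I http://app.domain.com/
    curl -I http://app.domain.com/css/cake.generic.css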

    Read the article

  • Local dns for testing websites using mobile devices

    - by Morpheu5
    Hi. I have no idea where to start, so sorry in advance if this topic has already been discussed. I usually develop web sites using my laptop as a development server, and recently I needed to test a web site using various mobile devices that connect via wifi. Having no real AP, I set up an ad-hoc network using my laptop's wireless card, and the devices can correctly browse the Internet and access the laptop's web server. The setup is as follows:
    subnet: 192.168.1.0/24
    gateway to the Internet (wired ADSL router/modem): 192.168.1.1
    laptop: 192.168.1.64 (eth0, wired interface connected to the gateway) and 192.168.1.32 (eth1, wifi interface somewhat bridged to eth0)
    mobile devices (same for all, I only use one of them at any time for simplicity): 192.168.1.11 with default gw 192.168.1.1
    Now, if I open either 192.168.1.32 or 192.168.1.64 from the mobile devices, I correctly get the default host of my Apache configuration. However, I usually work with virtual hosts for many practical reasons, one of which being Drupal's peculiar implementation of multi-sites. For those who don't know how this works, Drupal takes the request's hostname and searches its sites/ subdirectories for an appropriate configuration file. So, for example, suppose I request www.example.com; Drupal would search for a config file in the following directories:
    sites/www.example.com/
    sites/example.com/
    sites/com/
    sites/default/
    So I decided to adopt the following style of virtual hosts: if the website I'm working on will be accessible via www.example.com, I set up a sites/www.example.com/ directory and create a virtual host for local.www.example.com so Drupal has no trouble finding it. I've been told this is suboptimal from a DNS point of view, since I'd have to create an authoritative entry for example.com and turn Bind on only when I'm supposed to access the local copy, which is weird. However, if this is the only path I can follow, I still have some problems with Bind's configuration, as I couldn't find any guide that tells me, in a clear, noob-friendly way, how to set up such an entry. On the other hand, I was wondering if I could set up an authoritative entry for local, so I could access www.example.com.local and somehow (I don't even know if this is possible) tell Apache to put www.example.com instead of www.example.com.local in the relevant environment variable. Anyway, I have one last problem, sort of: when I launch Bind in debug mode with high verbosity and make 192.168.1.32 the primary DNS for the devices, the output doesn't say anything about requests being made from the devices to Bind, so I'm not even sure it comes into play. As you can see, I'm a complete noob at these matters, but I'm eager to learn, so any help/pointer will be appreciated.
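    For the "authoritative entry for the fake suffix" approach, a lighter-weight alternative to a full BIND zone, sketched here on the assumption that dnsmasq is acceptable and reusing the 192.168.1.32 wifi address from the setup above, is to let dnsmasq answer for the whole development suffix and forward everything else to the router:

    # On the laptop: answer *.example.com.local with the laptop's wifi address,
    # forward all other queries to the real gateway.
    # (Be aware that a .local suffix can clash with mDNS/Bonjour on Apple and iOS devices.)
    sudo apt-get install dnsmasq
    echo 'address=/example.com.local/192.168.1.32' | sudo tee /etc/dnsmasq.d/local-dev.conf
    echo 'server=192.168.1.1' | sudo tee -a /etc/dnsmasq.d/local-dev.conf
    sudo service dnsmasq restart
    # Point the mobile devices' wifi DNS at 192.168.1.32, then verify from any machine:
    dig @192.168.1.32 www.example.com.local

    Whether Apache/Drupal then sees www.example.com or www.example.com.local is a separate ServerAlias/sites-directory question rather than a DNS one.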

    Read the article

  • vmdk to live cd - VMware vmxnet virtual NIC driver Kernel panic

    - by ronalchn
    Task
    I am trying to convert a virtual machine to a live CD. Specifically, the virtual machine I am trying to convert is the IOI 2013 Competition Environment. In this task I am aided by a guide, "Converting a virtual disk image: VDI or VMDK to an ISO you can distribute".
    Symptoms
    However, after getting through all the instructions, the live CD causes a kernel panic on boot on bare metal. In particular, the screen shows:
    [0.737348] cdrom: Uniform CD-ROM driver Revision: 3.20
    [0.737503] sr 3:0:0:0: >Attached scsi CD-ROM sr0
    [0.737638] sr 3:0:0:0: >Attached scsi generic sg2 type 5
    [0.737771] Freeing unused kernel memory: 756k freed
    [0.738093] Write protecting the kernel text: 5960k
    [0.738155] Write protecting the kernel read-only data: 2424k
    [0.738224] NX-protecting the kernel data: 4280k
    Loading, please wait...
    [0.752252] udevd[100]: starting version 175
    [0.768708] VMware vmxnet3 virtual NIC driver - version 1.1.29.0-k-NAPI
    [0.781204] VMware PVSCSI driver - version 1.0.2.0-k
    [0.789555] VMware vmxnet virtual NIC driver
    [0.799356] Kernel panic - not syncing: Attempted to kill init! exitcode=0x00000200
    [0.799356]
    [0.799472] Pid: 1, comm: init Tainted: G 0 3.5.0-17-generic #28-Ubuntu
    [0.799549] Call Trace:
    [0.799603] [<c15bf0ec>] panic+0x81/0x17b
    [0.799654] [<c104a6a5>] do_exit+0x745/0x7a0
    [0.799707] [<c104a9a4>] do_group_exit+0x34/0xa0
    [0.799760] [<c104aa28>] sys_exit_group+0x18/0x20
    [0.799813] [<c15cff5f>] sysenter_do_call+0x12/0x28
    Possible problem
    I suspect that the problem is the VMware vmxnet virtual NIC driver; however, I do not know how to uninstall it, and possibly install one for a bare-metal machine. If anyone knows which packages need installing/uninstalling at the .rootfs/ chroot directory stage, please let me know.
    Details on procedure
    Do note that after importing the .ova file into VirtualBox, the virtual machine is stored as a .vmdk file already, not a .vdi file. I would like to point out some results of the procedure followed, in case of any questions. This is after extracting the filesystem from the .raw file to the .rootfs/ directory mentioned in the blog. I changed the filesystem table as mentioned in the blog, then looked for a possible "kernel optimized for virtualization"; however, I found that linux-image-generic was already installed. Also, when running the command dpkg-query --showformat='${Package}\n' -W 'vmware-tools*' (or dpkg-query --showformat='${Package}\n' -W '*-virtual'), no packages were found. Thus, I did not find any virtualization-specific packages. I proceeded to generate the ISO following the steps in the blog, and burned it to a DVD.
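    As a side note for anyone retracing these steps: "Attempted to kill init!" usually means PID 1 exited (often an initramfs or root-filesystem problem) rather than a driver crash, but if the goal is simply to strip the VMware guest bits out of the image before rebuilding the ISO, a sketch along these lines may help (run against the extracted .rootfs/ tree; the package and module names are assumptions based on stock Ubuntu, not taken from the guide):

    # Enter the extracted filesystem (bind-mount /proc, /sys and /dev first if update-initramfs complains).
    sudo chroot .rootfs /bin/bash

    # See which guest packages are actually present before removing anything.
    dpkg -l | grep -iE 'vmware|open-vm'
    apt-get purge --yes open-vm-tools open-vm-dkms || true

    # Alternatively, keep the initramfs from loading the VMware NIC/SCSI modules at boot.
    cat > /etc/modprobe.d/blacklist-vmware.conf <<'EOF'
    blacklist vmxnet
    blacklist vmxnet3
    blacklist vmw_pvscsi
    EOF
    update-initramfs -u -k all
    exit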

    Read the article

  • OpenSSH (Windows) does not forward X11

    - by Shulhi Sapli
    I'm running Ubuntu 13.04 in a VM and I wanted to do X11 forwarding to my host (Windows 8); so far it works fine using PuTTY and the XMing X server for Windows. But I am curious why it doesn't work if I use the OpenSSH binaries (they come with Git for Windows). This is what I've done so far: ssh -X [email protected] (also tried with -Y), then gedit, but received an error of "Cannot open display". echo $DISPLAY came out empty. So I tried export DISPLAY=localhost:0.0, but it still won't work. The DISPLAY environment variable that I set is exactly as when it runs with PuTTY. I also tried changing the DISPLAY to 192.168.2.3:0.0 and other display numbers as well, but still it won't work. Of course I could just use PuTTY to make it work, but I was wondering why the OpenSSH binaries do not. I have enabled all the settings required in both /etc/ssh/ssh_config and /etc/ssh/sshd_config. If I run with the -v option, this is what I get:
    F:\SkyDrive\Projects> ssh -X -v [email protected]
    OpenSSH_4.6p1, OpenSSL 0.9.8e 23 Feb 2007
    debug1: Connecting to 192.168.2.3 [192.168.2.3] port 22.
    debug1: Connection established.
    debug1: identity file /c/Users/Shulhi/.ssh/identity type -1
    debug1: identity file /c/Users/Shulhi/.ssh/id_rsa type -1
    debug1: identity file /c/Users/Shulhi/.ssh/id_dsa type -1
    debug1: Remote protocol version 2.0, remote software version OpenSSH_6.1p1 Debian-4
    debug1: match: OpenSSH_6.1p1 Debian-4 pat OpenSSH*
    debug1: Enabling compatibility mode for protocol 2.0
    debug1: Local version string SSH-2.0-OpenSSH_4.6
    debug1: SSH2_MSG_KEXINIT sent
    debug1: SSH2_MSG_KEXINIT received
    debug1: kex: server->client aes128-cbc hmac-md5 none
    debug1: kex: client->server aes128-cbc hmac-md5 none
    debug1: SSH2_MSG_KEX_DH_GEX_REQUEST(1024<1024<8192) sent
    debug1: expecting SSH2_MSG_KEX_DH_GEX_GROUP
    debug1: SSH2_MSG_KEX_DH_GEX_INIT sent
    debug1: expecting SSH2_MSG_KEX_DH_GEX_REPLY
    debug1: Host '192.168.2.3' is known and matches the RSA host key.
    debug1: Found key in /c/Users/Shulhi/.ssh/known_hosts:2
    debug1: ssh_rsa_verify: signature correct
    debug1: SSH2_MSG_NEWKEYS sent
    debug1: expecting SSH2_MSG_NEWKEYS
    debug1: SSH2_MSG_NEWKEYS received
    debug1: SSH2_MSG_SERVICE_REQUEST sent
    debug1: SSH2_MSG_SERVICE_ACCEPT received
    debug1: Authentications that can continue: publickey,password
    debug1: Next authentication method: publickey
    debug1: Trying private key: /c/Users/Shulhi/.ssh/identity
    debug1: Trying private key: /c/Users/Shulhi/.ssh/id_rsa
    debug1: Next authentication method: password
    [email protected]'s password:
    It seems that there is no request for X11 (I'm not sure whether there should be one at this point). Any pointers why it doesn't work?
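    One behaviour worth knowing here (offered as a likely explanation based on the -v output, not a confirmed diagnosis): the OpenSSH client only asks the server for X11 forwarding when it can see a local X display itself, so with the Git for Windows binaries DISPLAY has to be set in the same shell before ssh is invoked, for example:

    # In the Git Bash / MSYS shell on the Windows host, with XMing already listening on display :0
    export DISPLAY=localhost:0.0      # tells the *client* where the local X server is
    ssh -v -o ForwardX11=yes -o ForwardX11Trusted=yes [email protected]
    # A successful negotiation prints a debug1 line such as
    # "Requesting X11 forwarding with authentication spoofing"; its absence in the output
    # above suggests the client never requested forwarding at all. The client may also need
    # an xauth binary on its PATH to generate the X11 cookie.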

    Read the article

  • Connecting PC to TV via HDMI/DVI: Windows XP doesn't allow the appropriate screen resolution

    - by Jørgen
    I have a computer that is connected to the living room TV (a Panasonic) via HDMI. There is no other monitor connected. My problem is that the computer, which is running Windows XP, does not allow me to set the proper resolution for the TV. Both the graphics adapter and the TV should support the 1280x720 resolution, but it cannot be selected - the only available options are 1280x600 and 800x600, both in the "native" Windows dialog box and in the custom Intel graphics options dialog box. Does anyone have a suggestion for a solution to this?
    Things I've thought of:
    Setting the resolution directly in the registry (where?)
    Installing some "custom" monitor driver (the TV manufacturer does not appear to provide any; currently the "generic" one is used)
    Details on the setup:
    Connection: DVI output on the computer via a passive DVI-HDMI adapter to the HDMI input on the TV. Audio is run over a separate link; the TV is able to combine video and audio without any problem, and the problem is there regardless of whether or not the audio is connected. The connection is several meters long through some walls, so using a VGA cable instead is not an option.
    Note that the report explicitly says that the TV supports 1280x720. Still, I am not allowed to select it in Graphics Options; only 1280x600 and 800x600 are available. For 800x600 there's a lot of black around the edges; for 1280x600 the screen is "zoomed" so the edges of the monitor image (like the taskbar) are not visible.
    Other: The computer is running Windows XP. More recent versions of Windows are not an option (I have no licence). Linux is probably not an option (some of the video streaming sites I plan to use do not support it, I think). I wrote the rest of the details below. Thanks for any help!!
    TV: Panasonic TX-L32X10Y, European version; a 720p 32" quite "regular" LCD TV. Allowed resolutions according to the manual:
    Signal name: 640x480 @60Hz
    Horizontal frequency: 31.47 kHz
    Vertical frequency: 60 Hz
    Signal name: 750 (720) / 60p
    Horizontal frequency: 45.00 kHz
    Vertical frequency: 60 Hz
    Signal name: 1,125 (1,080) / 60p
    Horizontal frequency: 67.50 kHz
    Vertical frequency: 60 Hz
    (This is exactly how the manual presents it. PC via D-SUB (VGA cable) and "regular" HDMI have more alternatives.)
    Messing with the "zoom" settings on the TV does not affect the available resolution options on the computer.
    Computer: The following is a printout from one of the graphics adapter option pages. I think it covers most of it. The computer is a Dell.
INTEL(R) EXTREME GRAPHICS 2 REPORT
Report Date: 04/17/2011
Report Time [hr:mm:ss]: 20:18:02
Driver Version: 6.14.10.4396
Operating System: Windows XP* Professional, Service Pack 3 (5.1.2600)
Default Language: English
DirectX* Version: 9.0
Physical Memory: 1021 MB
Minimum Graphics Memory: 1 MB
Maximum Graphics Memory: 96 MB
Graphics Memory in Use: 6 MB
Processor: x86
Processor Speed: 2593 MHz
Vendor ID: 8086
Device ID: 2572
Device Revision: 02
* Accelerator Information *
Accelerator in Use: Intel(R) 82865G Graphics Controller
Video BIOS: 2972
Current Graphics Mode: 1280 by 600 True Color (60 Hz)
* Devices Connected to the Graphics Accelerator *
Active Digital Displays: 1
* Digital Display *
Monitor Name: Plug and Play Monitor
Display Type: Digital
Gamma Value: 2.20
DDC2 Protocol: Supported
Maximum Image Size: Horizontal: Not Available, Vertical: Not Available
Monitor Supported Modes: 1280 by 720 (50 Hz), 1280 by 720 (60 Hz)
Display Power Management Support: Standby Mode: Not Supported, Suspend Mode: Not Supported, Active Off Mode: Not Supported
(Disclaimer: this question was also asked at the Wikipedia Reference Desk some time ago and might show up in a Google search. I got no useful answers there.)

    Read the article
