Search Results

Search found 5435 results on 218 pages for 'ones and zeroes'.


  • fwbuilder/iptables manually scripted + autogenerated rules at startup?

    - by Jakobud
    Fedora 11. Our previous IT guy set up the iptables rules on our firewall in a way that is confusing me, and he didn't document any of it. I was hoping someone could help me make some sense of it. The iptables service is obviously starting at startup, but the /etc/sysconfig/iptables file was untouched (default values). I found that in /etc/rc.local he was doing this:

        # We have multiple ISP connections on our network.
        # The following is about 50+ rules to route incoming and outgoing
        # information. For example, certain internal hosts are specified here
        # to use the ISP A connection while everyone else on the network uses
        # the ISP B connection when accessing the internet.
        ip rule add from 99.99.99.99 table Whatever_0
        ip rule add from 99.99.99.98 table Whatever_0
        ip rule add from 99.99.99.97 table Whatever_0
        ip rule add from 99.99.99.96 table Whatever_0
        ip rule add from 99.99.99.95 table Whatever_0
        ip rule add from 192.168.1.103 table ISB_A
        ip rule add from 192.168.1.105 table ISB_A
        ip route add 192.168.0.0/24 dev eth0 table ISB_B
        # etc...

    Then near the end of the file, AFTER all the ip rules he just declared, he has this:

        /root/fw/firewall-rules.fw

    He's executing the firewall rules file that was auto-generated by fwbuilder. Some questions:

    Why is he declaring all these ip rules in rc.local instead of declaring them in fwbuilder like all the other rules? Is there any advantage or necessity to this, or is it just a poorly organized way to implement firewall rules?

    Why is he declaring the ip rules BEFORE executing the fwbuilder script? I would assume that one of the first things the fwbuilder script does is get rid of any existing rules before declaring all the new ones. Am I wrong about this? If that were the case, the fwbuilder script would basically just delete all the ip rules that were defined in rc.local. Does this make any sense?

    Why is he executing all this stuff at startup in rc.local instead of just using iptables-save to keep the firewall settings in /etc/sysconfig/iptables, which get applied when the service starts?
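    One note that may help untangle it: fwbuilder-generated scripts flush the iptables chains, but iptables -F does not touch the policy-routing layer at all, so the ip rule / ip route entries from rc.local survive the fwbuilder script regardless of ordering. The two layers can be inspected separately (a sketch; the table names are taken from the example above):

        # Show the policy-routing rules and one of the custom tables
        ip rule show
        ip route show table ISB_A

        # Show the iptables rules the fwbuilder script installed
        iptables -L -n -v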

    Read the article

  • Internet Explorer / Windows 7 does not want to show HTML file from local network drive

    - by Jaanus
    Setup: I have Windows 7 running inside VirtualBox on a Mac OS X host. I have a shared drive with some HTML files that I am mounting as a local drive W: in Windows, from the VirtualBox server \\VBOXSVR. I want to look at them with a browser in Windows. Chrome in Windows 7 opens and shows those HTML files just fine (file:///W:/welcome.html). But Internet Explorer does not, and shows this error instead of the files:

        Internet Explorer cannot display the web page
        What you can try: [button: Diagnose Connection Problems]
        More information
        This problem can be caused by a variety of issues, including:
        * Internet connectivity has been lost.
        * The website is temporarily unavailable.
        * The Domain Name Server (DNS) is not reachable.
        * The Domain Name Server (DNS) does not have a listing for the website's domain.
        * If this is an HTTPS (secure) address, click Tools, click Internet Options, click Advanced, and check to be sure the SSL and TLS protocols are enabled under the security section.

    For the internet zone, the status bar shows: Internet | Protected Mode: On. IE settings are a mystery to me, and I could possibly get it to work by tweaking them, but I don't know which ones. How do I make IE show the same files that Chrome is happy to show? (Chrome showing them means that the files themselves are fine; there is something about the setup that just makes IE be a diva.)

    Read the article

  • External storage for 2TB of backups and 4TB of data: what RAID level? Hardware vs. software?

    - by Jerry Mayers
    I have a Mac Mini set up as a media center/file server. Currently I just have a hodgepodge mess of external drives for storage. I'm maxed out, I have some new laptops on the way with much larger drives, and I need to work out a good storage solution for backing them up as well as storing media on the server. I need around 2 TB of storage for the Time Machine backups from my various systems and around 2 TB more for media. I would like to build this to handle around 6 TB total so I have some growing room. Since I'm using a Mac Mini as the server, I need to use external enclosure(s) that support USB 2, FireWire 800 (preferred), or gigabit Ethernet. Performance of the system isn't a huge concern, since the majority of the access from other computers is done over 802.11n. I plan on using 2 TB drives for the final version, but initially I'll try to use my existing two 1 TB drives plus some new 2 TB drives, and swap the 1 TB ones out as I fill up. As to the actual questions:

    Should I use hardware RAID in some enclosure? Because if the enclosure dies, I have to find an identical one to get to my data, right? Wouldn't software RAID be better, as I can use any method of connecting the drives to the system? Remember, OS X Server is my OS. What if I had to reinstall OS X, can I restore the software RAID easily?

    What RAID level should I use? For the 2 TB used for the Time Machine disk I don't see why I need RAID; just a single 2 TB drive, since it's already the backup. But the remaining 4 TB would be the only copy of the data, so I should build in some redundancy. I had a RAID 5 setup using a cheap RAID PCI card years ago, running a 2 TB array, and when a drive died it wanted 48 hours to rebuild. Is that crazy slow for a setup of this size, or is it to be expected?

    Any suggestions as to drive enclosures?
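    On the software-RAID side, OS X's built-in AppleRAID can be driven from the command line, and a mirror created this way is recognized by any OS X install that can see the member disks, which covers the reinstall concern. A minimal sketch (the disk identifiers are placeholders; check diskutil list first, and note this erases the member disks):

        # Find the identifiers of the drives to mirror
        diskutil list

        # Create a two-disk mirror named "Media", formatted as journaled HFS+
        diskutil appleRAID create mirror Media JHFS+ disk2 disk3

        # Check member and rebuild status later
        diskutil appleRAID list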

    Read the article

  • How do I run multiple MVC apps within a subdomain on IIS7?

    - by Matthew Patrick Cashatt
    Hello and thanks for looking.

    Background: I am currently wrapping up a development contract, and the client would like me to push a build of the application to their IIS 7-based server, on which they would like to run multiple MVC apps. One of the issues I have right off the bat is that this server is already a subdomain on their larger network. So, if I enter SERVERNAME in my browser, it automatically directs to SERVERNAME.COMPANYNAME.COM. Now, this is just fine if I place my application in the default website/root. In this scenario, clicking a link that requests admin.html directs to SERVERNAME.COMPANYNAME.COM/admin.html as usual. BUT they want me to place the app in a subdomain on this server so that they can also run other apps on the same server. So I assume that I need MYAPP.SERVERNAME.COMPANYNAME.COM, but I have no idea how to do that. Complicating matters is that my app and the future ones they wish to install are all MVC based, which intercepts and rewrites URLs. I assume that this takes care of itself if I can just successfully get my app into a subdomain to begin with.

    What I have tried:

    Creating a new site on the server in its own app pool
    Setting the binding for that site to MYAPP.SERVERNAME.COMPANYNAME.COM
    Setting the binding for that site to MYAPP
    Setting the binding for that site to MYAPP.SERVERNAME
    Setting the binding for that site to MYAPP.SERVERNAME.COM
    Setting the binding for that site to MYAPP.COMPANYNAME.COM

    Nothing is working. Am I missing something simple here? Thanks, Matt
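    One piece that is easy to miss: a host-header binding only takes effect if the name actually resolves to the server, so a DNS A or CNAME record for MYAPP.SERVERNAME.COMPANYNAME.COM (pointing at the same address as SERVERNAME) generally has to exist before any of the bindings above can work. For the IIS side, a hedged sketch using appcmd (the site name "MyApp" is a placeholder):

        REM Add a host-header binding on port 80 for the new site
        %windir%\system32\inetsrv\appcmd set site "MyApp" /+bindings.[protocol='http',bindingInformation='*:80:myapp.servername.companyname.com']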

    Read the article

  • How can I remotely display images on my computer?

    - by Jakob
    What I have: a laptop booted with Ubuntu and a stationary computer dual-booted with Ubuntu and Vista, both connected through a wireless ad-hoc network.

    What I want: a way to display images in fullscreen on my stationary computer, using my laptop as a "remote control". I want to be able to choose another picture at any time and have the stationary computer remain in fullscreen mode at all times. Preferably, I should also be able to display just an empty (black) screen. How can I arrange for this?

    What I have tried: I have tried simply SSHing into my stationary computer and opening the image files using an image viewer, but all of the viewers that I have tried (Eye of GNOME, Mirage, Gwenview, and others) open a new window for every new image, and I don't know how to force them into using a single instance. I have tried using the VLC remote control command-line interface, but apart from seeming somewhat unreliable (exiting with segmentation faults at one point), it also displays some images with a green border and forces me to pause playback in order for the image to remain on screen.

    Bonus question: in my final setup, I also need to play music through my stationary computer's speakers and have the ability to switch to another track at any point, like with the images. Preferably, I would like to control the images and the audio through the same interface. How can I best achieve this?
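    For the single-window image viewer, feh is worth a try: in slideshow mode it keeps one fullscreen window and changes images on signals, which maps neatly onto the "remote control" idea. A sketch assuming X runs on display :0 on the stationary machine and the images live in ~/pics (see feh's man page for the signal behaviour):

        # Started on the stationary machine, e.g. over ssh:
        DISPLAY=:0 feh --fullscreen ~/pics &

        # From the laptop, advance to the next image (SIGUSR2 goes back):
        ssh stationary 'kill -USR1 $(pgrep feh)'

    A plain black image added to the directory would cover the empty-screen case, and mpd with any of its many clients is a common answer for the remotely controlled music half.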

    Read the article

  • Having trouble keeping a 1GB RAM CentOS server running

    - by Josh
    This is my first time configuring a VPS server and I'm having a few issues. We're running WordPress on a 1GB CentOS server configured per online research. No custom queries or anything crazy, but we're closing in on 8K posts. At arbitrary intervals, the server just goes down. From the client side, it just says "Loading..." and will spin more or less indefinitely. On the server side, the shell locks up completely. We have to do a hard reboot from the control panel and then everything is fine. Watching top, I see it hovering between 35-55% memory usage generally, with occasional spikes up to around 80%. When I saw it go down, there were about 30-40 Apache processes showing, which pushed memory over the edge. error_log tells me that MaxClients was reached right before each reboot instance. I've tried tinkering with that, but to no avail. I think we'll probably need to bump the server up to the next RAM level, but with ~120K pageviews per month, that seems like overkill, since it was running fairly well on a shared server before. Any ideas? httpd.conf and my.cnf values to add? I'll update this with the current ones if that helps. Thanks in advance! This has been a fun and important learning experience but, overall, quite frustrating!

    Edit: quick top snapshot:

        top - 15:18:15 up 2 days, 13:04, 1 user, load average: 0.56, 0.44, 0.38
        Tasks: 85 total, 2 running, 83 sleeping, 0 stopped, 0 zombie
        Cpu(s): 6.7%us, 3.5%sy, 0.0%ni, 89.6%id, 0.0%wa, 0.0%hi, 0.1%si, 0.0%st
        Mem: 2051088k total, 736708k used, 1314380k free, 199576k buffers
        Swap: 4194300k total, 0k used, 4194300k free, 287688k cached
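    The usual back-of-the-envelope fix for exactly this failure mode: measure roughly how much memory one Apache child uses (the RES column in top), divide the RAM you can spare by that, and pin MaxClients there so Apache can never spawn itself into swap and lock the box. A hedged prefork sketch for a 1GB box where each child weighs in around 40-50 MB (the numbers are assumptions; derive your own from top output):

        # httpd.conf, prefork MPM
        <IfModule prefork.c>
            StartServers           4
            MinSpareServers        4
            MaxSpareServers        8
            ServerLimit           15
            MaxClients            15
            MaxRequestsPerChild 1000
        </IfModule>

    Capping MaxClients trades "server locks up" for "some requests queue briefly under load", which is almost always the better failure mode.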

    Read the article

  • Few basic questions on webhosting (nameservers & DNS records)

    - by claws
    I bought a domain name on name.com and I want to use the free webhosting on 110mb.com. By default, name.com integrates the services of Google Apps. The nameserver entries are:

        ns1.name.com
        ns2.name.com
        ns3.name.com
        ns4.name.com

    When I registered on 110mb.com, it gave me two addresses:

        ns1.110mb.com
        ns2.110mb.com

    This is where I'm lost. The concept is that "the domain name should point to the address of the server where the website is hosted", right? Then why are there these four entries by default, and how exactly is it working? Should I remove these four and then add the 110mb.com servers, or just append the 110mb.com server addresses to the name.com ones? I would like to keep using Google Apps. If I change these nameserver addresses, would that remove Google Apps? I especially want to use the email service from Google. And I really don't understand what CNAME, MX, and the rest are. I want to learn about these things and how they actually work, but when I search for webhosting tutorials I'm unable to find any fruitful results.
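    The short version of the concept: the NS records (those four name.com entries) say which servers answer DNS questions for the domain, while the A, CNAME, and MX records served by those servers say where the website and mail actually live. Keeping name.com's nameservers and adding an A record that points at 110mb's server would leave the Google Apps MX records (and so Gmail) intact; replacing the NS entries with 110mb's hands all of that to 110mb instead. A few dig commands for inspecting the current state (a sketch; substitute the real domain):

        dig NS example.com      # which nameservers are authoritative for the domain
        dig A www.example.com   # where the website points
        dig MX example.com      # where mail is delivered (Google, if Apps is set up)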

    Read the article

  • iptables -- OK, **now** am I doing it right?

    - by Agvorth
    This is a follow-up to a previous question where I asked whether my iptables config is correct. CentOS 5.3 system. Intended result: block everything except ping, ssh, Apache, and SSL. Based on xenoterracide's advice and the other responses to the question (thanks guys), I created this script:

        # Establish a clean slate
        iptables -P INPUT ACCEPT
        iptables -P FORWARD ACCEPT
        iptables -P OUTPUT ACCEPT
        iptables -F  # Flush all rules
        iptables -X  # Delete all chains

        # Disable routing. Drop packets if they reach the end of the chain.
        iptables -P FORWARD DROP

        # Drop all packets with a bad state
        iptables -A INPUT -m state --state INVALID -j DROP
        # Accept any packets that have something to do with ones we've sent on outbound
        iptables -A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
        # Accept any packets coming or going on localhost (this can be very important)
        iptables -A INPUT -i lo -j ACCEPT
        # Accept ICMP
        iptables -A INPUT -p icmp -j ACCEPT
        # Allow ssh
        iptables -A INPUT -p tcp --dport 22 -j ACCEPT
        # Allow httpd
        iptables -A INPUT -p tcp --dport 80 -j ACCEPT
        # Allow SSL
        iptables -A INPUT -p tcp --dport 443 -j ACCEPT
        # Block all other traffic
        iptables -A INPUT -j DROP

    Now when I list the rules I get:

        # iptables -L -v
        Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
         pkts bytes target  prot opt in  out source   destination
            0     0 DROP    all  --  any any anywhere anywhere    state INVALID
            9   612 ACCEPT  all  --  any any anywhere anywhere    state RELATED,ESTABLISHED
            0     0 ACCEPT  all  --  lo  any anywhere anywhere
            0     0 ACCEPT  icmp --  any any anywhere anywhere
            0     0 ACCEPT  tcp  --  any any anywhere anywhere    tcp dpt:ssh
            0     0 ACCEPT  tcp  --  any any anywhere anywhere    tcp dpt:http
            0     0 ACCEPT  tcp  --  any any anywhere anywhere    tcp dpt:https
            0     0 DROP    all  --  any any anywhere anywhere

        Chain FORWARD (policy DROP 0 packets, 0 bytes)
         pkts bytes target  prot opt in  out source   destination

        Chain OUTPUT (policy ACCEPT 5 packets, 644 bytes)
         pkts bytes target  prot opt in  out source   destination

    I ran it and I can still log in, so that's good. Does anyone notice anything major out of whack?
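    One operational note: run as a script, these rules evaporate at reboot unless the script is re-run or the rule set is saved. A sketch of persisting it the CentOS way (assuming the stock iptables init script):

        # Write the currently loaded rules to /etc/sysconfig/iptables
        service iptables save

        # Make sure the iptables service restores them at boot
        chkconfig iptables on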

    Read the article

  • USB question (how durable is it, how should I work around this?)

    - by Shiki
    The plot is quite simple. I got a Razer mouse. If I plug it in, it works. After a shutdown/hibernation, I have to replug it entirely at the back of the PC. (It works in my laptop even after several shutdowns, etc., so yes, I guess it's my motherboard. But it still has 2 years of warranty and it comes with quad SLI; it's not an old motherboard at all. It's an MSI P7N SLI FI, bought after a Hungarian guy's recommendation.)

    So I could only come up with one "solution": get 3 USB extension cables (USB to USB), the shortest ones if possible (I don't know if responsiveness/anything will worsen), AND replug only at the junction in the middle, closest to the USB port, since those are replaceable. What do you think? Any other ideas? (The BIOS is updated. The mouse driver doesn't really matter, since the mouse won't even blink a bit after this happens. It lights up and then goes totally dead.)

    Read the article

  • Debian 6: setting up FTP just for website editing

    - by David Oliver
    I have a VPS using Debian 6.0. Currently, SSH is set to not accept password logins, only key-based ones. A person who needs to work on one particular website (a vhost) wishes to use FTP. He doesn't need/want SSH. How can I set up FTP access for him, giving him write permissions for all files in the relevant directory, and only the relevant directory? The directory is /srv/www/domainname.com/public_html. Currently, all directories and files under it belong to www-data:www-data and are 644/755. I've installed vsftpd and have been reading some guides, but they all seem to deal with allowing multiple users to have their own user-named directories, which isn't what I'm after. I can't seem to work out how to simply define one FTP user with a password that has access to one directory of my choosing. This is my first experience setting up an FTP server. Thanks.

    Edit: I have also found this; maybe I should be using ProFTPd, or can vsftpd also do what I want?
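    A minimal sketch of the single-user setup, in case it helps (the username is a placeholder; note that vsftpd authenticates through PAM, so a nologin shell may need to be listed in /etc/shells depending on the PAM configuration):

        # Create a user whose home is the vhost docroot, with no shell login
        useradd -d /srv/www/domainname.com/public_html -s /usr/sbin/nologin siteeditor
        passwd siteeditor

        # /etc/vsftpd.conf, the relevant lines
        local_enable=YES
        write_enable=YES
        chroot_local_user=YES

    The files would also need to be writable by that user (for example via group membership with www-data plus group-writable permissions), otherwise the login will succeed but uploads will fail.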

    Read the article

  • I deployed Flash Player via a Software Installation policy. How to upgrade?

    - by eleven81
    I have a Windows Server 2008 machine as my DC. Earlier this year I created a Software Installation GPO to deploy Adobe Flash Player plugin MSI. I assigned the policy to the computers, about half run Windows XP x86 and the other half Windows 7 x64. That all works like clockwork. When I created the Software Installation Policy, I disabled the Flash Player plugin's automatic update feature by editing the MSI in Orca. I did this because I wanted all of my machines to run the exact same version of the plugin. Now, some time has passed and a newer version of the Flash Player plugin has been released. It is time for me to push out the updated version of the plugin. I already have the new MSI, but I am lost on what to do next. I see the upgrades tab in the Software Installation GPO, but everything there reads like that would be used for add-ons to a larger master program and not for updates that are released over time. I have read that it is best to create a new Software Installation policy with the new MSI, revoke the old GPO, and assign the new GPO. I feel as though, over time, I will wind up with more revoked policies than active ones. I have also read that some people have had success by replacing the old MSI with the new MSI and simply telling the GPO to redeploy. This seems like a backdoor method that will only get me in to trouble. In short, what is the correct, best-practice, or preferred way to roll out the new version via Group Policy?

    Read the article

  • Why does the partition start on sector 2048 instead of 63?

    - by gcb
    I had two drives partitioned the same, running two RAID partitions on each. One died, and I replaced it under warranty with the same model. While trying to partition the new one, the first partition can only start on sector 2048, instead of 63 as before. The drive reports a different geometry from the previous and remaining ones (fewer heads, more cylinders).

    Old drive:

        $ sudo fdisk -c -u -l /dev/sdb
        Disk /dev/sdb: 2000.4 GB, 2000398934016 bytes
        255 heads, 63 sectors/track, 243201 cylinders, total 3907029168 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x000aa189

           Device Boot      Start         End      Blocks   Id  System
        /dev/sdb1   *          63   174080339    87040138+  83  Linux
        /dev/sdb2       174080340   182482334     4200997+  82  Linux swap / Solaris
        /dev/sdb3       182482335  3907024064  1862270865   fd  Linux raid autodetect

    Remanufactured drive received from warranty:

        $ sudo fdisk -c -u -l /dev/sda
        Disk /dev/sda: 2000.4 GB, 2000398934016 bytes
        81 heads, 63 sectors/track, 765633 cylinders, total 3907029168 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x000d0b5d

           Device Boot      Start         End      Blocks   Id  System
        /dev/sda1            2048  ...

    Why is that?
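    For context, the change is in the partitioning tools rather than the drive: newer fdisk/distribution defaults align the first partition to 1 MiB (sector 2048) for the benefit of 4K-sector disks and SSDs, instead of the old CHS-derived sector 63, and the reported geometry is synthetic on modern drives anyway. If the old layout must be reproduced exactly (so the RAID members line up), two hedged options:

        # Re-enable DOS-compatible alignment in util-linux fdisk
        # (flag spelling varies by version; older builds toggle it with plain -c)
        fdisk -c=dos -u=sectors /dev/sda

        # Or simply clone the surviving disk's partition table
        sfdisk -d /dev/sdb | sfdisk /dev/sda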

    Read the article

  • What are the replacement options for an IDE hard disk for a DOS based system?

    - by dummzeuch
    I have got a few "embedded" systems running MS-DOS 6.2 which boot from and store data to IDE hard disks. Since these drives are nearing their end of life, the question arises how we can replace them. The requirements are:

    DOS must be able to install and boot from these drives.
    They must be able to sustain heavy (mostly write) access.
    If possible, they should be able to survive moderate vibrations (not too bad, since the current hard disks have survived several years of that).

    I have considered the following options so far:

    Other IDE hard drives: Unfortunately, modern IDE drives are too large, so DOS cannot boot from them even if I create small partitions. Older IDE drives are just that: old, so they are probably not the most reliable ones anymore.

    SSDs: There are a few SSDs with an IDE interface available. I have not yet tried them. Does anybody have any experience with them? They look like the ideal replacement, provided that DOS can boot from them and that write speed does not deteriorate too much (the old hard disks are no race cars either).

    Compact Flash: There are adapters for using CF with IDE controllers and they work fine. DOS can boot from them and they have no problems at all with vibrations. What I am not sure about is their durability: DOS uses FAT, so the same few sectors are written every time the medium is written to.

    IDE to SATA converters: I have no idea whether they are any good. Has anybody tried them? It might be an option to use one of these to connect a SATA SSD to the system.

    Are there any alternatives that I have missed? (We are working on replacing these systems, but it will still take a few years.)

    Read the article

  • Removing extended partition without deleting logical in it

    - by HisDudeness
    I'm running a Linux-based laptop, and in order to multi-boot several distros on it, I created an extended partition with GParted which contains a bunch of logical ones. Now, after quite a long time with this setup, I've changed my mind, because of the consequent lack of storage space for my data partition. Now I want to keep one distro alone, as is normal, and eventually have some other operating systems stored on external media to plug in and use if I want. Obviously, the one partition I want to keep (and to enlarge a little, too) is itself just a logical inside the extended partition I want to remove. For what concerns the partition count I'm OK: I currently have the big distro-dedicated extended partition, plus the swap and data partitions, so there's room for another primary before I delete the extended one. But I don't know how to delete it without touching the logical inside it; I don't want to reinstall the system and lose all my changes and settings, and I don't want to keep an extended partition for a single logical one.

    How can I do this? Do I have to create a new primary, copy the logical partition's contents into it, and then delete everything else? Will the system boot and keep exactly all the features it has now? Or is there a way to convert an extended partition into a primary once it contains just one logical? Or can I directly move a logical out of an extended, turning it into a primary? Or, again, am I screwed?
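    For what it's worth, fdisk and GParted have no direct "convert logical to primary", but the partition table can be rewritten around the data, since a partition is just a recorded start and length. A hedged sketch of the usual approach (back up the table and the data first; a single mistyped number here destroys the layout):

        # Dump the current partition table to an editable text file
        sfdisk -d /dev/sda > table.txt

        # In table.txt: note the exact start and size of the logical partition
        # to keep, remove the extended entry and the other logicals, and re-add
        # the kept start/size as a primary entry (same Id, e.g. 83)

        # Write the edited table back
        sfdisk /dev/sda < table.txt

    As long as the start sector and size are copied exactly, the filesystem itself is untouched and the installed system keeps its settings; only /etc/fstab and the bootloader may need updating if they refer to the old partition number rather than a UUID.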

    Read the article

  • CentOS iptables configuration for WordPress and Gmail SMTP

    - by Fabrizio
    Let me start off by saying that I'm a CentOS newbie, so all info, links, and suggestions are very welcome! I recently set up a hosted server with CentOS 6 and configured it as a webserver. The websites running on it are nothing special, just some low-traffic projects. I tried to keep the server configuration as close to the defaults as possible, but I like it to be secure as well (no FTP, custom SSH port). Getting my WordPress to run as desired, I'm running into some connection problems. Two things are not working:

    installing plugins and updates through ssh2 (failed to connect to localhost:sshportnumber)
    sending emails from my site using the Gmail SMTP server (Failed to connect to server: Permission denied (13))

    I have the feeling that these are both related to the iptables configuration, because I've tried everything else (I think). I tried opening up the firewall to accept traffic on ports 465 (Gmail SMTP) and the SSH port (let's say this port is 8000), but both issues remain. SSH connections from the terminal are working fine, though. After each change I tried implementing, I restarted the iptables service. This is my iptables configuration (using vim):

        # Generated by iptables-save v1.4.7 on Sun Jun 1 13:20:20 2014
        *filter
        :INPUT ACCEPT [0:0]
        :FORWARD ACCEPT [0:0]
        :OUTPUT ACCEPT [0:0]
        -A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
        -A INPUT -p icmp -j ACCEPT
        -A INPUT -i lo -j ACCEPT
        -A INPUT -p tcp -m tcp --dport 8000 -j ACCEPT
        -A INPUT -p tcp -m tcp --dport 80 -j ACCEPT
        -A INPUT -p tcp -m tcp --dport 465 -j ACCEPT
        -A INPUT -j REJECT --reject-with icmp-host-prohibited
        -A FORWARD -j REJECT --reject-with icmp-host-prohibited
        -A OUTPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
        -A OUTPUT -o lo -j ACCEPT
        -A OUTPUT -p tcp -m tcp --dport 8000 -j ACCEPT
        -A OUTPUT -p tcp -m tcp --dport 465 -j ACCEPT
        COMMIT
        # Completed on Sun Jun 1 13:20:20 2014

    Are there any (obvious) issues with my iptables setup, considering the above-mentioned problems? Saying that the firewall is doing exactly nothing in this state is also an answer... And again, if you have any other suggestions for increasing security (considering the basic things I do with this box), I would love to hear them, even the obvious ones! Thanks!
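    One thing worth ruling out before more firewall changes: "Permission denied (13)" on an outbound connection from PHP is the classic SELinux symptom on CentOS (an iptables block usually surfaces as a timeout or "connection refused" instead, and the OUTPUT policy here is ACCEPT anyway). A sketch of the check and the usual fix:

        # See whether SELinux is enforcing
        getenforce

        # Allow Apache/PHP to open outbound network connections (persistent)
        setsebool -P httpd_can_network_connect 1

        # There is also a narrower boolean aimed at mail
        setsebool -P httpd_can_sendmail 1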

    Read the article

  • USB seems to pause system

    - by Marco van de Voort
    I've got an application that does some simple measuring, for which it polls a few hundred KBs several times a second (8-25 times). The behaviour is not really dependent on chipset (it happens on several motherboards, from Intel 965 to P55) or OS (XP SP3 and Win7). The make of the USB keyboard doesn't seem to matter either. I notice that sometimes when a USB keyboard is plugged in, the system pauses for, say, 500-1000 ms (about 900-1000 ms on disconnect, and 400-500 ms on the subsequent connect). It also happens for other USB devices (most notably mice and mass-storage devices), but only the first time such a device is connected to an installation. This disrupts the measurement, and I would really like to get rid of it. I have already tried to disable as much as possible (power saving, teletubby mode (*), etc.), and while this helped with the non-USB-related disruptions of the measurement, it doesn't help with the USB-related ones.

    (*) FYI, turning off themes (to classic/non-Aero) and turning off effects in the system settings solved problems that occurred when minimizing/maximizing the app.

    Any pointers to look into? I'm a bit stuck with this.

    Read the article

  • Best way to build / implement a corporate developer Linux distro with multiple kernels?

    - by Garen
    At work we have Linux users who understandably prefer using Ubuntu. The problem is, we also have developer tools that only work with 'officially' supported Linux distributions that use much older 2.6.18-based kernels. (And even if they worked with newer ones, the vendors could always say they won't "support" the software unless it's on one of their 'officially' supported platforms.) We could of course just tell them to use CentOS or something else 2.6.18-based, and I'm sure their response would be something like: "you can take Ubuntu from our cold, dead hands." :)

    Which brings me to some questions: is there any good/easy/recommended way to run something like Ubuntu as a host VM and CentOS 5.x as a guest OS (and with which system: Xen, KVM, VMware, ...?), and then roll that into our own custom internal distribution that could be easily installed? KVM looks like a good high-performance option just recently included in RHEL 5.4, but if hardware support for virtualization like Intel VT or AMD-V is necessary, then I'd guess only those folks with fairly new PCs will be able to do it. I would be very interested to hear how anyone else has addressed this kind of issue.

    EDIT: The target audience / users of this kind of system would be developers, each of whom needs to run locally licensed commercial software, so building out some separate beefy central machines isn't an option, unfortunately, due to license restrictions. Even if that weren't the case, a couple of developers could quickly eat up the resources with parallel builds. :) Ideally, I was hoping there was some step-by-step guide out there for building your own pre-built distribution that had e.g. CentOS 5.x and Ubuntu Desktop as a guest.
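    On the hardware-virtualization point, whether a given developer machine has Intel VT or AMD-V is a one-line check, so the fleet can be surveyed before committing to KVM (which does require one of the two; Xen can fall back to paravirtualized guests, and VMware to binary translation, without it):

        # Non-empty output means VT-x (vmx) or AMD-V (svm) is present
        egrep '(vmx|svm)' /proc/cpuinfo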

    Read the article

  • What is the max supported number of SATA devices (using cable adapters) on a Dell SAS 6/iR adapter?

    - by Zac B
    I've got a Dell SAS 6/iR PCI-E adapter. I don't have a multiplier backplane. I'm planning on connecting SATA (non SAS) drives. If I buy cable adapters only (ones that split a SAS connector on the card to a certain number of SATA cables), how many drives can I connect to this card? The way I see it, there are two limitations: a limitation imposed by the theoretical max number of devices supported on the card (which I've dug through the specs to find, but haven't seen yet), and a limitation imposed by the number of SAS plugs on the card multiplied by the number of SATA cables that come out of the highest-multiplying splitter I can buy. The answer to my question would be the minimum of those two limitations. I've seen 4x SATA coming out of some splitters; are there any that have more? Alternatively, if this is an RTFM question, does anyone have a good link to a "this is how SAS works, this is how you figure out the max number of devices, and this is how the concepts of 'ports', 'lanes', 'endpoint devices', and 'connectors' all relate in SAS-land" document? I've looked around on the Dell docs, but haven't found anything that explains this to someone at my level of understanding of SAN/enterprise storage technologies. Cheers!

    Read the article

  • Can an IP address transfer from one person to another after he disconnects from the ISP, or in any other way?

    - by learner
    I have been checking out a website that sells a (health-related) product, trying to find out if it is a scam site. The site is something.blogspot.in (and not something.blogspot.com, which happens to be a different site altogether). So is it an Indian site? It has a CBox chat box where the owner communicates with customers (or potential ones). The owner shows that his product has worked for people by providing links to a forum (created by him at network54.com) where people have posted positively. One doesn't have to be registered to post there, but the IP address of the poster gets shown along with the post. According to the owner, the IP address is the basis of authenticity. I found that many people had different IP addresses on their different posts. The owner has declared the nationalities of the people who posted. When I traced their IP addresses with this site, I found that the nationalities provided by the owner were wrong. Is it possible that when a person disconnects from an ISP, another person from another country gets his old IP address?

    Read the article

  • How to install/configure ffmpeg to compress mp4 videos for flash player delivery?

    - by Andrew Fulton
    We have a Flash web app that creates interactive video, and we are using ffmpeg to do some compression/resizing when a user "publishes" their project. The user can upload FLV files and MP4 files, both of which play fine in the Flash UI before publishing. After publishing, the FLV files work fine, but the MP4 files will not play in the Flash player: audio plays but video won't. The MP4 files play fine if I download them and open them in the QuickTime player, but if I open them in the Adobe Media Player it reports "The media file does not contain a supported video track". If I open the movie inspector in QuickTime, it tells me that the original file is an "h264" video and the ffmpeg-processed ones are "mpeg-4".

    I have tried forcing H.264 by adding flags like -f h264 and -vcodec h264, but I get a screenful of errors (no frame, illegal POC type, sps_id out of range), ending with:

        Could not find codec parameters (Video: h264)

    h264 shows up if I run ffmpeg -formats and ffmpeg -codecs, and as I said, the files play fine in QuickTime. Is there anything else I need to do to convince the Flash player to play them? Is there anything else I need to tell you about the server that would help?
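    Part of the trouble is that -vcodec h264 names ffmpeg's H.264 decoder, not an encoder, which is why it ends in codec-parameter errors; encoding H.264 goes through libx264, and the ffmpeg build must include it (ffmpeg -codecs should list libx264). A hedged sketch of a Flash-compatible encode, assuming the build has libx264 and libfaac (bitrates are placeholders, and flag spellings vary by ffmpeg version; newer builds write -c:v libx264 -b:v 800k):

        ffmpeg -i input.mp4 -vcodec libx264 -b 800k -acodec libfaac -ab 128k output.mp4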

    Read the article

  • Anonymous logon attempts from IPs all over Asia, how do I stop them from being able to do this?

    - by Ryan
    We had a successful hack attempt from Russia, and one of our servers was used as a staging ground for further attacks; somehow they managed to get access to a Windows account called 'services'. I took that server offline, as it was our SMTP server and is no longer needed (a 3rd-party system is in place now). Now some of our other servers are logging ANONYMOUS LOGON attempts in the Event Viewer, with IP addresses coming from China, Romania, Italy (I guess there's some Europe in there too)... I don't know what these people want, but they just keep hitting the server. How can I prevent this? I don't want our servers compromised again. Last time, our host took our entire hardware node off the network because it was attacking other systems, causing our services to go down, which is really bad. How can I prevent these strange IP addresses from trying to access my servers? They are Windows Server 2003 R2 Enterprise 'containers' (virtual machines) running on a Parallels Virtuozzo HW node, if that makes a difference. I can configure each machine individually as if it were its own server, of course...

    UPDATE: New login attempts are still happening; now they trace back to Ukraine... WTF. Here is the event:

        Successful Network Logon:
        User Name:
        Domain:
        Logon ID: (0x0,0xB4FEB30C)
        Logon Type: 3
        Logon Process: NtLmSsp
        Authentication Package: NTLM
        Workstation Name: REANIMAT-328817
        Logon GUID: -
        Caller User Name: -
        Caller Domain: -
        Caller Logon ID: -
        Caller Process ID: -
        Transited Services: -
        Source Network Address: 94.179.189.117
        Source Port: 0
        For more information, see Help and Support Center at http://go.microsoft.com/fwlink/events.asp.

    Here is one from France I found too:

        Event Type: Success Audit
        Event Source: Security
        Event Category: Logon/Logoff
        Event ID: 540
        Date: 1/20/2011
        Time: 11:09:50 AM
        User: NT AUTHORITY\ANONYMOUS LOGON
        Computer: QA
        Description:
        Successful Network Logon:
        User Name:
        Domain:
        Logon ID: (0x0,0xB35D8539)
        Logon Type: 3
        Logon Process: NtLmSsp
        Authentication Package: NTLM
        Workstation Name: COMPUTER
        Logon GUID: -
        Caller User Name: -
        Caller Domain: -
        Caller Logon ID: -
        Caller Process ID: -
        Transited Services: -
        Source Network Address: 82.238.39.154
        Source Port: 0
        For more information, see Help and Support Center at http://go.microsoft.com/fwlink/events.asp.

    Read the article

  • What's wrong with my .htaccess? Trying to simplify actual code

    - by AlexV
    This is my actual .htaccess:

        #If the requested URI does not end with an extension
        RewriteCond %{REQUEST_URI} !\.(.*)
        #If the requested URI is not in an excluded location
        RewriteCond %{REQUEST_URI} !^/(excluded1|excluded2)/
        #Then serve the URI via the mapper
        RewriteRule .* /seo-urls/seo-urls-mapper.php?uri=%{REQUEST_URI} [L,QSA]

        #If the requested URI ends with .php*
        RewriteCond %{REQUEST_URI} \.php.*$ [NC]
        #If the requested file is not seo-urls-mapper.php (avoid .htaccess loop)
        RewriteCond %{REQUEST_FILENAME} (?<!seo-urls-mapper)\.php.*$
        #Then serve the URI via the mapper
        RewriteRule .* /seo-urls/seo-urls-mapper.php?uri=%{REQUEST_URI} [L,QSA]

    Since all the conditions are compatible except the first ones (the no-extension and *.php* matches), all I should have to do is add the [OR] flag to those two lines, but when I add it, it doesn't work (my no-extension rule doesn't work anymore). This is my new (not working) code:

        #If the requested URI does not end with an extension OR if the URI ends with .php*
        RewriteCond %{REQUEST_URI} !\.(.*) [OR]
        RewriteCond %{REQUEST_URI} \.php.*$ [NC]
        #If the requested file is not seo-urls-mapper.php (avoid .htaccess loop)
        RewriteCond %{REQUEST_FILENAME} (?<!seo-urls-mapper)\.php.*$
        #If the requested URI is not in an excluded location
        RewriteCond %{REQUEST_URI} !^/(excluded1|excluded2)/
        #Then serve the URI via the mapper
        RewriteRule .* /seo-urls/seo-urls-mapper.php?uri=%{REQUEST_URI} [L,QSA]

    Hopefully someone will be able to clarify this issue... I guess I don't fully understand the use of [OR]. Thanks!
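    For reference on how [OR] groups: mod_rewrite has no parentheses, so conditions combine in the order written, with [OR] binding only the two conditions it sits between; the block above evaluates as (no-extension OR .php*) AND mapper-lookbehind AND not-excluded. That middle condition is what kills the no-extension case: for an extensionless URI, %{REQUEST_FILENAME} contains no ".php" at all, so (?<!seo-urls-mapper)\.php.*$ cannot match and the whole AND chain fails. One hedged way out is to state the mapper exclusion as a negated condition that also holds for extensionless URIs (a sketch):

        #Never re-map the mapper itself (avoid .htaccess loop)
        RewriteCond %{REQUEST_FILENAME} !seo-urls-mapper\.php [NC]
        #If the requested URI does not end with an extension OR ends with .php*
        RewriteCond %{REQUEST_URI} !\.(.*) [OR]
        RewriteCond %{REQUEST_URI} \.php.*$ [NC]
        #If the requested URI is not in an excluded location
        RewriteCond %{REQUEST_URI} !^/(excluded1|excluded2)/
        #Then serve the URI via the mapper
        RewriteRule .* /seo-urls/seo-urls-mapper.php?uri=%{REQUEST_URI} [L,QSA]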

    Read the article

  • Managing rolling deployments in the cloud

    - by Josh Nankin
    Recently I've been experimenting with various cloud management tools like RightScale, Scalr, and custom scripts for managing a variety of servers, each hosting several roles (app, db, load balancer, job queues, etc.). The one thing I find lacking in most solutions is a way to do rolling deployments, i.e. running deployments sequentially across a number of servers with the same role. For instance, I don't want to rebuild all of my webservers at the same time, as that will almost definitely result in some downtime or 500s for my customers. I'd rather have one or two servers build at a time, while the other servers are still available to handle requests. The other alternative is obviously to launch new servers that automatically update themselves on boot, but this isn't as cost-effective, and most likely requires more time for the build to complete (it's faster to build on an existing server than to launch a new server and kill old ones).

    We've all heard of the big companies having the famous "push to build" button (companies like Twilio, Etsy, etc.), but it seems that they all have custom implementations of this. I'm not talking about a simple ssh-loop, clusterssh, or even MCollective; I preferably want something with a nice simple interface that allows me to specify something like a RightScript or a Scalr script to run on a set of servers with a specific role, and have it build them sequentially. Does anyone know of easy ways to get this done, or is this a candidate for a new open source project?
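    For what it's worth, the core of the pattern is small enough to sketch in shell, which is presumably why so many shops roll their own: deploy to one server at a time and only move on once a health check passes (the hostnames, deploy command, and check URL are all placeholders):

        #!/bin/bash
        # Rolling deploy: one web server at a time, gated on a health check
        set -e
        for host in web1 web2 web3; do
            ssh "$host" '/opt/app/deploy.sh'            # rebuild this one server
            until curl -sf "http://$host/health" > /dev/null; do
                sleep 5                                 # wait until it serves again
            done
        done

    The interesting parts the hosted tools add on top are the role lookup (which servers count as "webserver" right now) and halting the rollout when a deploy leaves a server unhealthy.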

    Read the article

  • 4.4.1 Timeout at 10-minute intervals with SMTP on batch email jobs

    - by TEEKAY
    I am running a job that uses SMTP and can run in excess of an hour, emailing the entire time. It's not my code but a workflow-based app, so I just get a form to configure the mail server, subject, message, etc., and can't see its implementation. I know it is .NET and SmtpClient. I have been seeing 4.4.1 timeouts every 10 minutes, reported by the application as the response from the server. The number of emails in those 10-minute sessions varies, between 100 and 150, which leads me to ask about the 10-minute timeout specifically. I have found that there are several Exchange properties (though I don't know what version they are running) that set timeout limits (http://technet.microsoft.com/en-us/library/bb232205%28v=exchg.150%29.aspx).

    Would the ConnectionInactivityTimeOut and ConnectionTimeout values be what is controlling these timeouts? And finally, would Exchange consider the consistent connection(s) it keeps receiving from the same source as one continuous connection, causing the timeout every 10 minutes?

    I am using a static IP for the mail server. Thanks if anyone can shed any light on my problem.

    EDIT: It is my belief that the library is just keeping the connections around, not wrapped in any cleanup code or using statement. That said, I still haven't made any progress on this issue in the last year and just requeue the failed ones as I see them.

    Read the article

  • Which revision control system for single user

    - by G. Bach
    I'm looking to set up a revision control system with me as a single user. I'd like to have access (read and write) protected using SSL, little overhead, and preferably a simple setup. I'm looking to do this on my own server, so I don't want the option of registering with some professional provider of such a service (I like having direct control over my data; also, I'd like to know how to set up something like that). As far as I'm aware, the kind of project I want to put under revision control doesn't really matter, but just for completeness' sake: I'm planning on using this for Java projects, some HTML/CSS/PHP stuff, and in the future possibly as a synchronizing tool for small databases (ignore that latter one if it doesn't fit the paradigm of revision control).

    My questions primarily arise from the fact that I have only ever used Subversion from Eclipse, so I don't have thorough knowledge of what's out there, what fits better for which needs, etc. So far I've heard of Subversion, Git, and Mercurial, but I'm open to any system that's widely used and well supported. My server is running Ubuntu 11.10. Which system should I choose, what are the advantages of the respective systems, and if you know of any particularly useful ones, are there tutorials regarding the setup of the system I should choose that you could recommend?
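    Given the single-user, own-server constraints, one low-overhead option worth noting: a bare Git repository reached over the SSH access that is already locked down to keys gives encrypted reads and writes with no extra daemon to run and no certificate to manage. A sketch (the paths and hostname are placeholders):

        # On the server: create an empty bare repository
        git init --bare ~/repos/project.git

        # On the client: clone over SSH and work normally
        git clone user@server:repos/project.git
        cd project
        git commit -am "change" && git push origin master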

    Read the article
