Search Results

Search found 7570 results on 303 pages for 'doug hope'.


  • Reading graph inputs for a programming puzzle and then solving it

    - by Vrashabh
    I just took a programming competition question and absolutely bombed it. I had trouble right from the start, with reading the input set. The question was a variant of this puzzle http://codercharts.com/puzzle/evacuation-plan but also had an hour component in the first line (say, 3 hours after the start of the evacuation). It reads like this:

    "This puzzle is a tribute to all the people who suffered from the earthquake in Japan. The goal of this puzzle is, given a network of roads and locations, to determine the maximum number of people that can be evacuated. The people must be evacuated from evacuation points to rescue points. The list of roads and the number of people they can carry per hour is provided.

    Input Specifications: Your program must accept one and only one command line argument: the input file. The input file is formatted as follows: the first line contains 4 integers n r s t. n is the number of locations (each location is given by a number from 0 to n-1), r is the number of roads, s is the number of locations to be evacuated from (evacuation points), and t is the number of locations where people must be evacuated to (rescue points). The second line contains s integers giving the locations of the evacuation points. The third line contains t integers giving the locations of the rescue points. The r following lines contain the road definitions. Each road is defined by 3 integers l1 l2 width, where l1 and l2 are the locations connected by the road (roads are one-way) and width is the number of people per hour that can fit on the road."

    Now look at the sample input set:

        5 5 1 2 3
        0
        3 4
        0 1 10
        0 2 5
        1 2 4
        1 3 5
        2 4 10

    The 3 in the first line is the additional component, defined as the number of hours since the rescue started (3 in this case). My solution was to use Dijkstra's algorithm to find the shortest path between each of the rescue and evacuation nodes. My problems started with how to read the input set: I read the first line in Python and stored the values in variables, but then I did not know how to store the distances between the nodes, what data structure to use, or how to feed it to, say, a standard implementation of Dijkstra's algorithm. So my question is twofold:

    1.) How do I take the input for such problems? I have faced this in quite a few competitions recently, and I hope I can get a simple code snippet or an explanation in Java or Python for reading such a data set into a graph suitable for algorithms like Dijkstra or Floyd-Warshall. A solution to the above problem would also help.

    2.) How do I solve this puzzle? My algorithm was: find the shortest path between evacuation points (in the above example it is 14 from 0 to 3), then multiply it by the number of hours to get the maximal number of saves. Also, the answer given for this variant of the input set was 24, which I don't understand. Can someone explain that as well?

    UPDATE: I get how the answer is 14 in the given problem link - it seems to be just the shortest path between node 0 and 3. But with the 3-hour component, how is the answer 24?

    UPDATE: I get how it is 24 - it is a complete graph traversal at every hour, and this is how I solve it:

        Hour 1: Node 0 to Node 1 = 10 people, Node 0 to Node 2 = 5 people. TotalRescueCount = 0; Node 1 = 10, Node 2 = 5.
        Hour 2: Node 1 to Node 3 = 5 (rescued), Node 2 to Node 4 = 5 (rescued), Node 0 to Node 1 = 10, Node 0 to Node 2 = 5, Node 1 to Node 2 = 4. TotalRescueCount = 10; Node 1 = 10, Node 2 = 5 + 4 = 9.
        Hour 3: Node 1 to Node 3 = 5 (rescued), Node 2 to Node 4 = 5 + 4 = 9 (rescued). TotalRescueCount = 9 + 5 + 10 = 24.

    It is hard enough for this case; for multiple evacuation and rescue points, how in the world would I write a program for this?
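    As a starting point for question 1, here is a minimal Python sketch of reading this input into an adjacency list; the function and variable names are just illustrative, and it assumes the exact file layout described above (the 4 integers plus the hour component on the first line). Note that the hour-by-hour traversal worked out in the update is essentially computing a flow from evacuation to rescue points, so a maximum-flow algorithm over this structure (rather than Dijkstra) seems to be the natural direction for question 2.

        import sys
        from collections import defaultdict

        def read_graph(path):
            # Parse: first line "n r s t hours", then s evacuation points,
            # then t rescue points, then r one-way roads "l1 l2 width".
            # Reading whitespace-separated tokens makes line breaks irrelevant.
            tokens = iter(open(path).read().split())
            n, r, s, t, hours = (int(next(tokens)) for _ in range(5))
            evac = [int(next(tokens)) for _ in range(s)]
            rescue = [int(next(tokens)) for _ in range(t)]
            graph = defaultdict(list)          # node -> [(neighbour, width)]
            for _ in range(r):
                l1, l2, width = (int(next(tokens)) for _ in range(3))
                graph[l1].append((l2, width))
            return n, hours, evac, rescue, graph

        if __name__ == "__main__":
            n, hours, evac, rescue, graph = read_graph(sys.argv[1])
            print(n, hours, evac, rescue, dict(graph))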

    Read the article

  • Best way to process a queue in C# (PDF treatment)

    - by Bartdude
    First of all, let me explain what I would like to do: I already have a long-running webapp developed in ASP.NET (C#) 2.0. In this app, users can upload standard PDF files (text + pictures). The app runs in production on Windows Server 2003 and has a dedicated database server (SQL Server 2008), also running Windows Server 2003. I am myself a quite experienced web developer, but I have never actually programmed anything non-web (or at least nothing serious).

    I plan on adding a feature to the webapp for which I will need a JPG snapshot of each page of the PDF. Creating these "thumbnails" isn't the big deal as such; I already do it inside my webapp using Ghostscript. I've only done it on one-page documents so far, though, and the new feature will need to process bigger documents. For this process to be transparent both for the admins and the end users, I would like to implement some kind of queue to defer the thumbnail processing. There again, no problem creating the queue: it will consist of records in a table, with enough info to find the PDF document back. Then I will need to process this queue, and that's where my questions start. Obviously the best way to process it isn't an ASP script, so I will have to step out of my known environment. No problem, but I have no idea which direction to take. Therefore, a few questions:

    1. What should I develop? I presumably need something that stands by on the server, runs when needed, then returns to an idle state until further notice. Should I be looking into a Windows service? Is there another, more appropriate type of project?
    2. Depending on the first answer, what should the approach be? Should SQL Server somehow "tell" the program/service to process the queue, or should that program/service periodically check the state of the queue and treat new items? In both cases, which functionality can I use? We're not talking about hundreds of PDFs a day (50 max, maybe), so I can totally afford to treat the queue one item at a time.
    3. Can you confirm I don't have to look much further into threads and such? (I found a lot of answers talking about threads in queue processing, but it looks quite overkill for my needs.)
    4. Maybe linked to the previous question: what about concurrent calls to the program, whatever it is? Suppose it is currently running and a new record arrives in the queue - what should the behaviour be?

    I don't need very detailed answers and would already be happy with answers like "You can do the processing with a service, and yes it's possible to have SQL Server on machine A trigger a service start on machine B" or "You have to develop xxx and then use the scheduler to run it every xxx minutes". I don't mind reading articles and such, but I can hardly afford to spend too much time learning things only to realize I went the wrong way for this project, so basically I'm trying to narrow down the scope of what I need to investigate. Thanks for reading; I hope I'll find some helping hands on here :-)
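    For the polling approach in question 2, the core pattern is a worker that atomically claims a queue row before processing it, which also answers the concurrency worry in question 4. The sketch below only illustrates that pattern - the real thing would be a C# Windows service against SQL Server, and the table and column names here are made up - using Python and SQLite so it is self-contained:

        import sqlite3
        import subprocess
        import time

        def render_thumbnails(pdf_path):
            # Placeholder for the real work, e.g. shelling out to Ghostscript
            # to produce one JPG per page of the uploaded PDF.
            subprocess.run(["gs", "-sDEVICE=jpeg", "-r100",
                            "-o", "page-%03d.jpg", pdf_path], check=True)

        def claim_next(conn):
            # One transaction: select the oldest 'new' item and mark it
            # 'processing', so two workers can never grab the same row.
            with conn:
                row = conn.execute(
                    "SELECT id, pdf_path FROM pdf_queue "
                    "WHERE status = 'new' ORDER BY id LIMIT 1").fetchone()
                if row is not None:
                    conn.execute("UPDATE pdf_queue SET status = 'processing' "
                                 "WHERE id = ?", (row[0],))
            return row

        def run_worker(db_path, poll_seconds=60):
            conn = sqlite3.connect(db_path)
            while True:
                item = claim_next(conn)
                if item is None:
                    time.sleep(poll_seconds)   # queue empty: idle until next poll
                    continue
                item_id, pdf_path = item
                render_thumbnails(pdf_path)
                with conn:
                    conn.execute("UPDATE pdf_queue SET status = 'done' "
                                 "WHERE id = ?", (item_id,))

    At this volume, a Windows service wrapping a loop like this, or a scheduled task running one pass every few minutes, are both common and perfectly adequate choices.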

    Read the article

  • Xenserver 5.6 SR_BACKEND_FAILURE_47 no such volume group, but it is there

    - by Juan Carlos
    I've looked everywhere (Google, here, a bunch of other sites), and while I have found people with similar problems, I couldn't find a single one with a solution to this. Last night our XenServer 5.6 box corrupted /var/xapi/state.db, and I couldn't fix the XML no matter what I did. After a good hour fiddling with the file, I figured it would be faster to just reinstall. The server had one 2 TB hard drive running Xen and its VMs, and since Xen's installer said it would erase the hard drive it was installed on, I plugged in a new hard drive and installed Xen on it, without selecting any hard drives for storage. I figured I could set it up after the install, using the partition on the old hard drive with all my VMs on it. After the installation finished and the system booted, I did:

        # fdisk -l
        (found the old partition at /dev/sda3)
        # ll /dev/disk/by-id
        (found the partition at /dev/disk/by-id/scsi-3600188b04c02f100181ab3a48417e490-part3)
        # xe host-list
        uuid ( RO)             : a019d93e-4d84-4a4b-91e3-23572b5bd8a4
        name-label ( RW)       : xenserver-scribfourteen
        name-description ( RW) : Default install of XenServer
        # pvscan
        PV /dev/sda3   VG VG_XenStorage-405a2ece-d10e-d6c5-ede2-e1ad2c29c68d   lvm2 [1.81 TB / 204.85 GB free]
        Total: 1 [1.81 TB] / in use: 1 [1.81 TB] / in no VG: 0 [0 ]
        # vgscan
        Reading all physical volumes. This may take a while...
        Found volume group "VG_XenStorage-405a2ece-d10e-d6c5-ede2-e1ad2c29c68d" using metadata type lvm2
        # pvdisplay
        --- Physical volume ---
        PV Name           /dev/sda3
        VG Name           VG_XenStorage-405a2ece-d10e-d6c5-ede2-e1ad2c29c68d
        PV Size           1.81 TB / not usable 6.97 MB
        Allocatable       yes
        PE Size (KByte)   4096
        Total PE          474747
        Free PE           52441
        Allocated PE      422306
        PV UUID           U03Gt9-WtHi-8Nnu-QB2Q-c7BV-CO9A-cFpYWW
        # xe sr-introduce name-label="VMs" type=lvm uuid=U03Gt9-WtHi-8Nnu-QB2Q-c7BV-CO9A-cFpYWW name-description="VMs Local HD Storage" content-type=user shared=false device-config=:device=/dev/disk/by-id/scsi-3600188b04c02f100181ab3a483f9f0ae-part3
        U03Gt9-WtHi-8Nnu-QB2Q-c7BV-CO9A-cFpYWW
        # xe pbd-create host-uuid=a019d93e-4d84-4a4b-91e3-23572b5bd8a4 sr-uuid=U03Gt9-WtHi-8Nnu-QB2Q-c7BV-CO9A-cFpYWW device-config:device=/dev/disk/by-id/scsi-3600188b04c02f100181ab3a483f9f0ae-part3
        adf92b7f-ad40-828f-0728-caf94d2a0ba1
        # xe pbd-plug uuid=adf92b7f-ad40-828f-0728-caf94d2a0ba1
        Error code: SR_BACKEND_FAILURE_47
        Error parameters: , The SR is not available [opterr=no such volume group: VG_XenStorage-U03Gt9-WtHi-8Nnu-QB2Q-c7BV-CO9A-cFpYWW]

    At this point I did a

        # vgrename VG_XenStorage-405a2ece-d10e-d6c5-ede2-e1ad2c29c68d VG_XenStorage-U03Gt9-WtHi-8Nnu-QB2Q-c7BV-CO9A-cFpYWW

    because the VG name was different, but pbd-plug still gives me the same error. So now I'm somewhat lost about what to do; I'm not used to Xen, and most sites I've found are really unhelpful. I hope someone can guide me in the right direction to fix this. I can't lose those VMs (I have backups, but from inside the guests, not of the VMs themselves).
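    For what it's worth, the volume group's name suffix (405a2ece-d10e-d6c5-ede2-e1ad2c29c68d) is the uuid of the original SR, and the sr-introduce above was given the PV UUID instead, so XAPI ends up looking for a volume group that was never created. Note also that the device path passed to sr-introduce/pbd-create (...83f9f0ae-part3) is not the one found under /dev/disk/by-id (...8417e490-part3). A commonly reported recovery sequence - untested here, so treat it as a sketch - is to undo the vgrename and reintroduce the SR under its original uuid:

        # undo the rename so the VG matches the original SR uuid again
        vgrename VG_XenStorage-U03Gt9-WtHi-8Nnu-QB2Q-c7BV-CO9A-cFpYWW \
                 VG_XenStorage-405a2ece-d10e-d6c5-ede2-e1ad2c29c68d
        # reintroduce the SR under that uuid
        xe sr-introduce uuid=405a2ece-d10e-d6c5-ede2-e1ad2c29c68d \
            name-label="VMs" type=lvm content-type=user shared=false
        # recreate and plug the PBD, double-checking which device path is right
        xe pbd-create host-uuid=a019d93e-4d84-4a4b-91e3-23572b5bd8a4 \
            sr-uuid=405a2ece-d10e-d6c5-ede2-e1ad2c29c68d \
            device-config:device=/dev/disk/by-id/scsi-3600188b04c02f100181ab3a48417e490-part3
        xe pbd-plug uuid=<uuid printed by pbd-create>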

    Read the article

  • Mounting NAS share: Bad Address

    - by Korben
    I've run into a problem that I can't solve; hope you can help me with it. I have a QNAP TS-459U storage unit, with its own Linux, and a 'massive1' folder shared, which I need to mount on my Debian server. They are connected by a regular patch cord. The Debian server has two network interfaces - eth0 and eth1. eth0 is for the Internet, eth1 is for the QNAP. So I'm running this (where 169.254.100.100 is the IP of the QNAP's interface):

        mount -t cifs //169.254.100.100/massive1/ /mnt/storage -o user=admin

    The result I get (after entering the password):

        mount error(14): Bad address
        Refer to the mount.cifs(8) manual page (e.g. man mount.cifs)

    I've tried mount.cifs, smbmount, with '/' at the end of the network share and without it, and many other variations of that command, and it's always:

        mount error(14): Bad address

    The funny thing is, when I was in the data center I connected my netbook (running Fedora 16) to the QNAP using the same scheme, and it connected without any problems - I could read/write files on the QNAP's NAS share! So I'm really stuck with Debian; I can't understand what difference with Fedora causes this error. Yes, I've used Google, and couldn't find any useful info. Ping to the QNAP's IP works, I can log into the QNAP's Linux over ssh, and telnet to port 139 works. This is the network interface configuration I use on Debian:

        IP: 169.254.100.1
        Netmask: 255.255.0.0

    The only difference between connecting from Fedora and from Debian is that on Fedora I had added a gateway, 169.254.100.129 - but ping to that IP does not work, so I think it's not relevant at all.

    P.S.

        ~# cat /etc/debian_version
        wheezy/sid
        ~# uname -a
        Linux host 2.6.32-5-openvz-amd64 #1 SMP Mon Mar 7 22:25:57 UTC 2011 x86_64 GNU/Linux
        ~# smbtree
        WORKGROUP
        \\HOST                             host server
        \\HOST\IPC$                        IPC Service (host server)
        \\HOST\print$                      Printer Drivers
        NAS
        \\MASSIVE1                         NAS Server
        \\MASSIVE1\IPC$                    IPC Service (NAS Server)
        \\MASSIVE1\massive1
        \\MASSIVE1\Network Recycle Bin 1   [RAID5 Disk Volume: Drive 1 2 3 4]
        \\MASSIVE1\Public                  System default share
        \\MASSIVE1\Usb                     System default share
        \\MASSIVE1\Web                     System default share
        \\MASSIVE1\Recordings              System default share
        \\MASSIVE1\Download                System default share
        \\MASSIVE1\Multimedia              System default share

    Please help me solve this strange issue. Thanks in advance.
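    A couple of hedged things to try - the option names below are standard mount.cifs options, but whether they fix this particular box is untested. mount error(14) is EFAULT from the kernel CIFS client, and on 2.6.32-era kernels it is often a negotiation or option-parsing problem rather than a networking one, so spelling out the security mode and the server IP explicitly sometimes helps, and dmesg usually holds a more specific CIFS error right after the failure:

        # force the older NTLM security mode and pass the server IP explicitly
        mount -t cifs //169.254.100.100/massive1 /mnt/storage \
            -o user=admin,sec=ntlm,ip=169.254.100.100
        # then check the kernel's view of what went wrong
        dmesg | tail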

    Read the article

  • Programmatically add an ISAPI extension dll in IIS 7 using ADSI?

    - by fretje
    I apologize beforehand; this is a cross post of this SO question. I thought I'd ask it there first, but apparently it isn't attracting any answers there, so I hope it will get more attention here. When I have an answer somewhere, I'll delete the other one.

    I'm trying to programmatically add an ISAPI extension dll in IIS using ADSI. This has been working for ages on previous versions of IIS, but it seems to fail on IIS 7. I am using code similar to the one shown in this question (note: the original snippet assigned to a variable named "map" that was never declared; it should be "maps"):

        var web = GetObject("IIS://localhost/W3SVC/1/ROOT/specificVirtualDirectory");
        var maps = web.ScriptMaps.toArray();
        maps[maps.length] = ".aaa,c:\\path\\to\\isapi\\extension.dll,1,GET,POST";
        web.ScriptMaps = maps.asDictionary();
        web.SetInfo();

    After executing that code, I do see an "AboMapperCustom-12345678" entry for that dll in the "Handler mappings" of the specific virtual directory in which I added the script map. But when I try to use the extension in a browser, I always get:

        HTTP Error 404.2 Not Found
        The page you are requesting cannot be served because of the ISAPI and CGI Restriction list settings on the Web server.

    Even after adding an entry to allow that specific dll in the "ISAPI and CGI restrictions", I keep getting that error. To make it actually work, I first have to undo these steps (encountering the same issue as the OP of the question mentioned above: after deleting the script map entry from the IIS manager GUI, I also have to programmatically delete it using ADSI before it's actually gone from the metabase), and then manually add an entry like this:

        inetmgr - webserver - website - virtual directory - handler mappings - add script map...
        path = *.dll, executable = <path to dll>, name = <doesn't matter, but it's mandatory>
        click "yes" on the question "do you want to allow this ISAPI extension?"

    When I compare the two entries, they are exactly the same, except for the "Entry Type", which is "Inherited" for the programmatically added one and "Local" for the one added manually. The strange thing is, even though it says "Inherited", I don't see it anywhere in IIS on a higher level. Where is it inheriting from? In my code I add the script map to the specific virtual directory, so it should be "Local" as well. Maybe that is where the problem lies, but I don't know how to add a "Local" script map using ADSI. I really would like to keep using the ADSI method, as otherwise I will have to use different methods in our setup depending on whether we're working with IIS 7 or a previous version, and I would like to avoid that.

    To recap: how can I programmatically add a script map entry and its companion CGI and ISAPI restrictions entry to IIS 7 using ADSI? Anybody who can shed some light on this? Any help appreciated.
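    While it doesn't answer the ADSI half of the recap, the restrictions half can be scripted on IIS 7 with appcmd; a hedged sketch (the dll path is of course yours to substitute):

        rem add an "Allowed" entry to the server-wide ISAPI and CGI Restrictions
        %windir%\system32\inetsrv\appcmd set config /section:isapiCgiRestriction ^
            /+"[path='c:\path\to\isapi\extension.dll',allowed='True',description='extension.dll']" /commit:apphost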

    Read the article

  • Networking issues with Linux server (CentOS 5.3)

    - by sxanness
    I have a Linux server hosting our bug tracking software (CentOS 5.2, kernel 2.6.18-128.4.1.el5) that I am having some strange network problems with. The machine is configured with two NICs, one for the public interface and the other for our server back-end network. The problem is that after doing a service network restart, I can ping the public interface for anywhere from 200-500 ICMP packets, and then all of a sudden I start getting a "request timed out" error. Strangely, as soon as I connect to the private interface, pings to the public interface start working again. I clearly have a routing issue somewhere. I have a Juniper router with the following configuration:

        Interface 0/0 -- Connects our subnet to the ISP at our co-location
        Interface 0/2 -- For our DRAC network
        Interface 0/3 -- The server back-end network (plugs directly into a switch that feeds all the NICs on the 10.3.20.x network)
        Interface 0/4 -- Plugs directly into another switch that feeds our public interfaces; that interface has all the gateways of our public IP ranges as secondary IP addresses

    I hope that someone can ask the right questions to lead me to check things and figure out what is going on. Has anyone had similar problems, and what kinds of things should I be checking? A routing issue, or something even more complicated?

        [root@fogbugz ~]# cat /etc/sysconfig/network-scripts/ifcfg-eth0
        # Realtek Semiconductor Co., Ltd. RTL-8139/8139C/8139C+
        DEVICE=eth0
        BOOTPROTO=static
        IPADDR=72.249.134.98
        NETMASK=255.255.255.248
        BROADCAST=72.249.134.103
        HWADDR=00:16:3E:AA:BB:EE
        ONBOOT=yes
        [root@fogbugz ~]# cat /etc/sysconfig/network-scripts/ifcfg-eth1
        # Realtek Semiconductor Co., Ltd. RTL-8139/8139C/8139C+
        DEVICE=eth1
        BOOTPROTO=static
        BROADCAST=10.3.20.255
        HWADDR=00:17:3E:AA:BB:EE
        IPADDR=10.3.20.25
        NETMASK=255.255.255.0
        NETWORK=10.3.20.0
        ONBOOT=yes
        [root@fogbugz ~]# cat /etc/sysconfig/network
        NETWORKING=yes
        NETWORKING_IPV6=no
        HOSTNAME=fogbugz.dfw.hisg-it.net
        GATEWAY=72.249.134.97
        [root@fogbugz ~]# route -n
        Kernel IP routing table
        Destination     Gateway         Genmask          Flags Metric Ref  Use Iface
        72.249.134.96   0.0.0.0         255.255.255.248  U     0      0    0   eth0
        10.3.20.0       0.0.0.0         255.255.255.0    U     0      0    0   eth1
        169.254.0.0     0.0.0.0         255.255.0.0      U     0      0    0   eth1
        10.0.0.0        10.3.20.1       255.0.0.0        UG    0      0    0   eth1
        0.0.0.0         72.249.134.97   0.0.0.0          UG    0      0    0   eth0
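    One standard thing to rule out on a multihomed box like this is asymmetric replies: with two NICs, traffic arriving on the public interface can end up answered through the other one, and source-based routing pins each interface's traffic to its own gateway. A hedged iproute2 sketch (the table name and number are arbitrary):

        # give replies from the public address their own routing table
        echo "100 public" >> /etc/iproute2/rt_tables
        ip route add default via 72.249.134.97 dev eth0 table public
        ip rule add from 72.249.134.98 table public
        ip route flush cache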

    Read the article

  • Vista install works on one computer, but bluescreens another (on which Vista is known to work)

    - by Ken
    I hope my explanations make some sense -- please ask for clarification if they don't.

    I had a computer running Windows Vista (Ultimate, 64-bit). All was well! Then one day there was a nasty power surge at the office, and it died. (We didn't have surge protectors at the office, unfortunately; I assumed our lines were conditioned elsewhere, or that it was not an issue here. Oops.) After some testing, it was determined that the PSU, motherboard, and RAM were bad. While waiting for new hardware to arrive, I put my hard disk in a spare PC which had identical parts (mobo/CPU/RAM/PSU/video). Everything worked perfectly. The only way I could even tell it wasn't my computer is that Vista asked to re-activate itself with the new hardware, which worked fine too. So the hard disk seems OK.

    Then the new parts arrived. The old motherboard model is no longer manufactured, so it's a new one with the same CPU/RAM/video-card/etc. slots. The PSU is also new, while the RAM I'm using is from the spare PC mentioned above. When I put it together and tried booting with my old hard disk, it starts to boot Windows and then (fairly early in the process) gives a bluescreen and immediately reboots (so I can't see whatever the bluescreen is trying to tell me). I tried "safe mode", which also bluescreened. I tried booting the Vista DVD and running the repair utility, which found a Vista install, confirmed that it would not boot, and eventually declared that it was unable to repair it.

    I installed Vista fresh on a new hard disk, with the new mobo/etc., and it works perfectly. (That's what I'm running now.) I've also booted a Linux CD here, which ran great, and I've run Memtest86+ for a while, which found no errors. So all the hardware apart from the old hard disk seems OK too. I don't think the problem is with my old Vista hard disk, since I used it with another mobo/CPU just fine. I don't think it's any other part of the new hardware, since I'm able to use it (and test it) with no trouble. It's just the combination of my old Vista install plus the new PC hardware that's not happy.

    I can get my data off my old hard disk and onto my new hard disk, and reinstall my apps, but it would be nice if I could fix things so I could continue to use my old hard disk as before. The latest hypothesis I've heard is that Vista has trouble with the new hardware (i.e., the motherboard), but we have no idea what to do about that (except Safe Mode, which didn't work). Suggestions? Hypotheses for what's not right about this combination of Vista install and motherboard? Thanks!
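    One small, hedged diagnostic step: the reboot-on-bluescreen loop can be switched off so the STOP code stays on screen - the F8 boot menu has a "Disable automatic restart on system failure" entry on Vista, or the equivalent registry value can be flipped from the repair environment's command prompt against the offline hive. A STOP 0x0000007B that early in boot would point at the storage-controller driver, which is the classic symptom after a motherboard swap. A sketch of the registry change, assuming the old install is mounted as D: in the repair environment:

        rem load the offline SYSTEM hive, disable AutoReboot, unload again
        reg load HKLM\OldSys D:\Windows\System32\config\SYSTEM
        reg add "HKLM\OldSys\ControlSet001\Control\CrashControl" /v AutoReboot /t REG_DWORD /d 0 /f
        reg unload HKLM\OldSys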

    Read the article

  • Migrate from MySQL to PostgreSQL on Linux (Kubuntu)

    - by Dave Jarvis
    Storyline: trying to migrate a database from MySQL to PostgreSQL. All the documentation I have read covers, in great detail, how to migrate the structure; I have found very little documentation on migrating the data. The schema has 13 tables (which have been migrated successfully) and 9 GB of data. MySQL version: 5.1.x. PostgreSQL version: 8.4.x. I want to use the R programming language to analyze the data using SQL select statements; PostgreSQL has PL/R, but MySQL has nothing (as far as I can tell).

    A long time ago in a galaxy far, far away... Create the database location (/var has insufficient space; I also dislike having the PostgreSQL version number everywhere -- upgrading would break scripts!):

        sudo mkdir -p /home/postgres/main
        sudo cp -Rp /var/lib/postgresql/8.4/main /home/postgres
        sudo chown -R postgres.postgres /home/postgres
        sudo chmod -R 700 /home/postgres
        sudo usermod -d /home/postgres/ postgres

    All good to here. Next, restart the server and configure the database using these installation instructions:

        sudo apt-get install postgresql pgadmin3
        sudo /etc/init.d/postgresql-8.4 stop
        sudo vi /etc/postgresql/8.4/main/postgresql.conf
        (change data_directory to /home/postgres/main)
        sudo /etc/init.d/postgresql-8.4 start
        sudo -u postgres psql postgres
        \password postgres
        sudo -u postgres createdb climate
        pgadmin3

    Use pgadmin3 to configure the database and create a schema.

    A New Hope: the episode began in a remote shell known as bash, with both databases running, and the installation of a command with a most unusual logo: SQL Fairy.

        perl Makefile.PL
        sudo make install
        sudo apt-get install perl-doc   (strangely, it is not called perldoc)
        perldoc SQL::Translator::Manual

    Extract a PostgreSQL-friendly DDL and all the MySQL data:

        sqlt -f DBI --dsn dbi:mysql:climate --db-user user --db-password password -t PostgreSQL > climate-pg-ddl.sql
        mysqldump --skip-add-locks --complete-insert --no-create-db --no-create-info --quick --result-file="climate-my.sql" --databases climate --skip-comments -u root -p

    The Database Strikes Back: recreate the structure in PostgreSQL as follows:

        pgadmin3 (switch to it)
        Click the "Execute arbitrary SQL queries" icon
        Open climate-pg-ddl.sql
        Search for: TABLE "   replace with: TABLE climate."   (insert the schema name climate)
        Search for: on "      replace with: on climate."      (insert the schema name climate)
        Press F5 to execute

    This results in: "Query returned successfully with no result in 122 ms."

    Replies of the Jedi: at this point I am stumped.

    1. Where do I go from here (what are the steps) to convert climate-my.sql to climate-pg.sql so that it can be executed against PostgreSQL?
    2. How do I make sure the indexes are copied over correctly (to maintain referential integrity; I don't have constraints at the moment, to ease the transition)?
    3. How do I ensure that adding new rows in PostgreSQL will start enumerating from the index of the last row inserted (and not conflict with an existing primary key from the sequence)?

    Resources: a fair bit of information was needed to get this far:

        https://help.ubuntu.com/community/PostgreSQL
        http://articles.sitepoint.com/article/site-mysql-postgresql-1
        http://wiki.postgresql.org/wiki/Converting_from_other_Databases_to_PostgreSQL#MySQL
        http://pgfoundry.org/frs/shownotes.php?release_id=810
        http://sqlfairy.sourceforge.net/

    Thank you!
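    On question 3, the usual recipe after a bulk load is to realign each serial column's sequence with the table's current maximum; a hedged sketch (the table and column names here are invented - substitute your own, and repeat per table with a serial primary key):

        -- point the sequence behind the serial column at the current MAX(id)
        SELECT setval(pg_get_serial_sequence('climate.station', 'id'),
                      (SELECT COALESCE(MAX(id), 1) FROM climate.station));

    For question 1, mysqldump's --compatible=postgresql flag may also be worth a look, though by most accounts it only gets part of the way there.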

    Read the article

  • windows firewall broken on server 2008

    - by Chloraphil
    This evening I tried to RDP into my Server 2008 box and was unable to. After poking around some, I discovered that something is awry with my Windows Firewall. I did install 5 Windows updates remotely earlier today, but rolled those back in an attempt to see if that fixed the problem - no luck. Symptoms:

        - cannot RDP to the machine (including from itself)
        - cannot ping the machine
        - cannot connect to a file share on the machine
        - error message when attempting to open the "Windows Firewall with Advanced Security" snap-in: "There was an error opening the windows firewall with advanced security snap-in ... The Windows Firewall with Advanced Security snap-in failed to load. Restart the windows firewall service on the computer that you are managing. Error code: 0x6D9."
        - when I opened the "user-friendly" Windows Firewall, it failed to load most of the GUI elements; the title bar with close, minimize, and maximize buttons is present, while the rest of the window is a white background with a yellow rounded rectangle and a yellow triangle with an exclamation point in the upper right (hope that made sense)
        - "Windows Firewall" does not appear in the list of services

    I ran a virus scan that found nothing. How do I fix the firewall and hopefully restore the ability to RDP?

    EDIT: Added at fission's request:

        c:\>sc query mpsdrv
        SERVICE_NAME: mpsdrv
        TYPE              : 1  KERNEL_DRIVER
        STATE             : 4  RUNNING (STOPPABLE, NOT_PAUSABLE, IGNORES_SHUTDOWN)
        WIN32_EXIT_CODE   : 0  (0x0)
        SERVICE_EXIT_CODE : 0  (0x0)
        CHECKPOINT        : 0x0
        WAIT_HINT         : 0x0

        c:\>sc query mpssvc
        SERVICE_NAME: mpssvc
        TYPE              : 20  WIN32_SHARE_PROCESS
        STATE             : 1  STOPPED
        WIN32_EXIT_CODE   : 1068  (0x42c)
        SERVICE_EXIT_CODE : 0  (0x0)
        CHECKPOINT        : 0x0
        WAIT_HINT         : 0x0

    Those two registry keys do exist: HKLM\SYSTEM\CurrentControlSet\Services\mpsdrv & HKLM\SYSTEM\CurrentControlSet\Services\MpsSvc! The problem seems to be with the Base Filtering Engine; when I try to start it I get the following error: "Windows could not start the Base Filtering Engine service on MYCOMPUTER. Error 15100: The resource loader failed to find MUI file."

    EDIT 2: I ran sfc /scannow and found about 100 occurrences of "[SR] Cannot repair member file"... including several related to the firewall (e.g. [l:32{16}]"Firewall.cpl.mui" of Networking-MPSSVC.Resources...). One of them mentioned wordpad.exe, which I tried to open, and it failed. I found here mentions of mounting the install.wim on the install media to copy the affected files over. I am downloading the appropriate AIK and will continue tomorrow evening.
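    For the install.wim route, the mount step looks roughly like this with imagex from the WAIK (an untested sketch; the image index and paths depend on the media and edition, and copying over protected system files will additionally need takeown/icacls):

        md C:\wim_mount
        imagex /mount D:\sources\install.wim 1 C:\wim_mount
        rem ...copy the files sfc flagged out of C:\wim_mount, then:
        imagex /unmount C:\wim_mount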

    Read the article

  • Accidentally Removed Permissions for the XP SP3 Registry key HKEY_CLASSES_ROOT! Workaround Please!

    - by Praveen Kumar
    This is Praveen, and I am using Microsoft Windows XP SP3 Build 2600. I had problems using Microsoft Office 2010: it kept saying "Please wait while windows configures Microsoft Office Professional Plus 2010". After seeing this link: http://social.answers.microsoft.com/Forums/en-US/outlookcontact/thread/6e9c3f2e-010a-4b74-b433-0c41548ee468?prof=required I thought of giving the registry key HKEY_CLASSES_ROOT full access permissions for Everyone.

    I went to the registry, right-clicked on HKEY_CLASSES_ROOT and clicked on Permissions. I added Everyone, gave Full Control in the ACL, and clicked Apply. I also checked "Replace permission entries on all child objects with entries shown here that apply to child objects." After a long time it failed with an error saying it could not replace the permissions on a few entries. Now the key HKEY_CLASSES_ROOT has no access for any user, and my system is not starting up.

    Some of my friends asked me to try the Last Known Good Configuration; even that did not work out. When I tried to open in Safe Mode, only one driver loaded, and even that didn't load well. Another friend suggested I put in the setup disk and reinstall the OS. I tried that, and after completing the "Installing Device Drivers" part, when it started "Installing Network", the system restarted. Now the setup is also half way through, and even if I open in Safe Mode to try System Restore, it pops up a message saying "Setup cannot run under Safe Mode. Setup will restart now." and the system restarts.

    All my official files, and the software I developed for registry security, reside on that system now. I am unable to access the system, and I need it working to submit the project, as the deadline is this week. I have had no better solutions from elsewhere. Can anyone please help me out with this issue? If it is possible to open the registry's stored file from another system and restore the access permissions, I hope that would solve the problem. Please do help me.

    Thanking you, Praveen.
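    Restoring the ACL from outside is indeed the usual approach: HKEY_CLASSES_ROOT is backed by the machine's SOFTWARE hive (HKLM\Software\Classes, stored in %windir%\system32\config\software), which can be loaded into another machine's registry, after which a tool such as Microsoft's SubInACL can rewrite permissions in bulk. A hedged, untested sketch, run on the other machine with the damaged disk slaved as E: and the hive loaded under a temporary name:

        rem load the damaged SOFTWARE hive, repair the ACLs on Classes, unload
        reg load HKLM\BrokenSoftware E:\Windows\System32\config\software
        subinacl /subkeyreg HKEY_LOCAL_MACHINE\BrokenSoftware\Classes /grant=administrators=f /grant=system=f /grant=users=r
        reg unload HKLM\BrokenSoftware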

    Read the article

  • MariaDB doesn't upgrade, 2 versions are installed

    - by zahorak
    I have a server running Debian wheezy with MariaDB and ownCloud. A few days ago I wanted to update the packages because of the ownCloud updates, but something went wrong. Usually in this case I'd probably try to remove and reinstall the problematic packages, but on a server used by different people that doesn't seem like a valid solution anymore. Here you can see my console output:

        user@server:~$ sudo apt-get upgrade
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        You might want to run 'apt-get -f install' to correct these.
        The following packages have unmet dependencies:
         libmariadbclient18 : Depends: libmysqlclient18 (= 10.0.4+maria-1~wheezy) but 10.0.5+maria-1~wheezy is installed
         libmysqlclient18 : Depends: libmariadbclient18 (= 10.0.5+maria-1~wheezy) but 10.0.4+maria-1~wheezy is installed
         mariadb-client-10.0 : Depends: libmariadbclient18 (>= 10.0.5+maria-1~wheezy) but 10.0.4+maria-1~wheezy is installed
         mariadb-client-core-10.0 : Depends: libmariadbclient18 (>= 10.0.5+maria-1~wheezy) but 10.0.4+maria-1~wheezy is installed
         mariadb-server : Depends: mariadb-server-10.0 (= 10.0.5+maria-1~wheezy) but 10.0.4+maria-1~wheezy is installed
         mariadb-server-core-10.0 : Depends: libmariadbclient18 (>= 10.0.5+maria-1~wheezy) but 10.0.4+maria-1~wheezy is installed
        E: Unmet dependencies. Try using -f.
        user@server:~$ sudo apt-get upgrade -f
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        Correcting dependencies... Done
        The following packages will be upgraded:
         libmariadbclient18 mariadb-server-10.0 owncloud
        3 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
        7 not fully installed or removed.
        Need to get 0 B/37.2 MB of archives.
        After this operation, 3,565 kB of additional disk space will be used.
        Do you want to continue [Y/n]? Y
        Preconfiguring packages ...
        (Reading database ... 35901 files and directories currently installed.)
        Preparing to replace libmariadbclient18 10.0.4+maria-1~wheezy (using .../libmariadbclient18_10.0.5+maria-1~wheezy_amd64.deb) ...
        Unpacking replacement libmariadbclient18 ...
        dpkg: error processing /var/cache/apt/archives/libmariadbclient18_10.0.5+maria-1~wheezy_amd64.deb (--unpack):
         trying to overwrite '/usr/lib/mysql/plugin/dialog.so', which is also in package mariadb-server-10.0 10.0.4+maria-1~wheezy
        Errors were encountered while processing:
         /var/cache/apt/archives/libmariadbclient18_10.0.5+maria-1~wheezy_amd64.deb
        E: Sub-process /usr/bin/dpkg returned an error code (1)

    I tried removing the 10.0.4 version of libmariadbclient18, but I wasn't really successful at that. So my last hope is here: do you have any ideas how exactly I could fix this issue? Thanks very much.
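    The dpkg error is a file conflict: both package versions ship /usr/lib/mysql/plugin/dialog.so. The commonly suggested way out - hedged, and worth a database backup first - is to let dpkg overwrite the file once and then let apt finish the upgrade:

        sudo dpkg -i --force-overwrite \
            /var/cache/apt/archives/libmariadbclient18_10.0.5+maria-1~wheezy_amd64.deb
        sudo apt-get -f install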

    Read the article

  • An alternative to Google Talk, AIM, MSN, et al [closed]

    - by mkaito
    I'm not entirely sure whether this part of Stack Exchange is the most adequate place for my question, but it seems to me that people sharing this kind of concern would converge either here or possibly on a more Unix-specific sub-site. Either way, here goes.

    Background (feel free to skip to The Question, below; this should, however, help those interested understand where I'm coming from and where I expect to get, messaging-wise). My online talking place-to-go has been IRC for the last fifteen years. I think it's a great protocol, and the clients out there are very good. I still use, and will always continue to use, IRC for most of my chat needs. But then there is private instant messaging. While IRC can solve this with queries and DCC chats, the protocol just isn't meant to work well on intermittent connections, such as a mobile device, where you often walk around places with low signal.

    I used MSN for a while but didn't like it. The concept was awesome, but I think Microsoft didn't get the implementation quite right. When they started adding all that eye candy, and my buddies started flooding me with custom icons and buzzing my screen to its knees, I shut my account and told folks that missed me to just email or call me. Much whining happened, and I got called many weird things for not using MSN, but folks eventually got over it. Next, Google Talk came along and seemed to be a lot better than MSN ever was. The protocol was open, so I could use whatever client I fancied. With the advent of smartphones, I just got myself a Gtalk client on the phone and have had a really decent, integrated, mostly-universal IM solution. But over the last few months, all Google services have been feeling flaky: IMs often arrive anywhere between twenty minutes and one hour after being sent, clients randomly disconnect, client priorities seem to work only sometimes, and sometimes just a random device among those connected gets an IM. I think the time has come to look for greener grass.

    The Question: it's rather hard to put what I'm looking for into precise words. I guess I just want something that is kind of like MSN/Gtalk, but that doesn't let me down when I need it. IRC is pretty much perfect, but the protocol just isn't designed to work well on mobile devices. Really, at this point I'm considering sticking to IRC for desktop messaging and SMS/email on the phone, but I hope that in this day and age there is something better out there.

    Read the article

  • How to shutdown VMware Fusion virtual machine on host shutdown

    - by Nikksno
    I have a Mac mini running Mavericks Server. I installed the Atmail server + webmail VM (a Linux CentOS distribution) in VMware Fusion Professional 6 with the VMware Tools add-on. It works flawlessly, and I've set it to start on boot, which works very reliably. However, I've been looking for a way to also safely and gracefully shut it down whenever OS X shuts down, for whatever reason. The Mac is connected to a UPS and configured to perform an automatic shutdown when the battery runs low, so that's no additional problem.

    The first thing I did was go into Fusion's preferences and select "Power off the vm" when closing it. However, I noticed that for some arcane reason closing the VM window would forcibly power off the VM; then I found a post that showed me how to change the default power options, and I managed to have the VM cleanly shut down when closing its window or quitting Fusion altogether. At this point I was hoping to have solved the problem, but as it turns out, upon invoking system shutdown OS X doesn't wait for the VM to shut down and terminates Fusion before it has a chance to do so. I then started looking for a way to automate shutting down the guest OS via some advanced setting, but had no luck. That's when I found a command to shut the VM down - vmrun - and it worked.

    The only thing left was to find a way to execute this script on OS X shutdown, giving it a little time to power off completely. This turned out to be a nightmare: I spent hours looking through several ways to do this with Startup Items, rc.shutdown, cron, launchd, etc., but none of them worked the way I had configured them. I have to say I found very limited information on using launchd for shutdown-script execution, and I know it's the latest thing in the OS X world, so I'm hoping someone out there will be able to help me with this. I still think this is an extremely basic feature to ask for, and I was really surprised to find so little documentation on so many different aspects of this problem. Is Fusion too basic an application for this? I really hope someone can help. Thank you very much in advance.
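    The launchd idiom for "run something at shutdown" is usually a LaunchDaemon whose job stays alive and traps the SIGTERM that launchd sends when the system goes down. An untested sketch of the script half (the .vmx path is hypothetical, and the daemon's plist should set an ExitTimeOut long enough for the guest to power off, since the default grace period is short):

        #!/bin/bash
        # started at boot by a LaunchDaemon; launchd sends SIGTERM at shutdown
        VMX="/Users/admin/Documents/Virtual Machines/atmail.vmx"   # hypothetical path
        VMRUN="/Applications/VMware Fusion.app/Contents/Library/vmrun"

        on_shutdown() {
            # ask VMware Tools inside the guest for a clean shutdown
            "$VMRUN" stop "$VMX" soft
            exit 0
        }
        trap on_shutdown TERM

        # idle forever; sleeping in the background and waiting (rather than a
        # plain sleep) lets the TERM trap fire promptly
        while true; do
            sleep 86400 &
            wait $!
        done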

    Read the article

  • chkdsk, SeaTools, and "does not have enough space to replace bad clusters"

    - by Zian Choy
    When I tried to do a Windows Vista Complete PC Backup, I received an error message that blathered about bad sectors. Then, when I ran chkdsk /r on the destination drive, this is what I got:

        C:\Windows\system32>chkdsk /R E:
        The type of the file system is NTFS.
        Volume label is Desktop Backup.
        CHKDSK is verifying files (stage 1 of 5)...
        822016 file records processed.
        File verification completed.
        1 large file records processed.
        0 bad file records processed.
        0 EA records processed.
        0 reparse records processed.
        CHKDSK is verifying indexes (stage 2 of 5)...
        848938 index entries processed.
        Index verification completed.
        0 unindexed files processed.
        CHKDSK is verifying security descriptors (stage 3 of 5)...
        822016 security descriptors processed.
        Security descriptor verification completed.
        13461 data files processed.
        CHKDSK is verifying file data (stage 4 of 5)...
        The disk does not have enough space to replace bad clusters detected in file 239649 of name .
        The disk does not have enough space to replace bad clusters detected in file 239650 of name .
        The disk does not have enough space to replace bad clusters detected in file 239651 of name .
        An unspecified error occurred.
        (822000 files processed)

    Yet when I ran the SeaTools short & long generic tests on the Seagate disk, I didn't receive any errors. I know I could reformat the disk and try running chkdsk /r again, but I'd prefer to avoid waiting 4 hours in the hope that the problem magically fixed itself. On the other hand, if I RMA the drive to Seagate, I have no SeaTools error number to use, and they may claim the drive is just fine. What should I try next?

    Side frustration: there is plenty of free hard drive space - the E: partition has 182 GB free.

    Read the article

  • Passive FTP on Windows Server 2008 R2 using the IIS7 FTP-Server

    - by ntor
    Hello ServerFault community! Over the last few days I have been setting up a Windows Server 2008 R2 in VMware. I installed the standard FTP server on it through the Web Server (IIS) role. Everything works fine when accessing my FTP site via ftp://localhost in Firefox, and I can also get access via the local IP of my server. Actually, everything works fine on my LAN.

    But here's my problem: I want access "from outside", using the external IP address or a DynDNS URL. I have a LinkSys router in front of my server, so I'm forwarding all the important ports. If you are now thinking "this idiot has probably forgotten some ports", I must disappoint you: I can even reach my server's website and mess around in some web interfaces. The problem is passive FTP (active works for me). I always get a timeout when, e.g., FileZilla waits for a response to the LIST command. The one big thing I don't get is why my server responds to the PASV command naming a port like 40918, even though I have restricted the data port range for passive FTP (in the IIS Manager) to, e.g., [5000-5009]. I simply don't want to open and forward every possible data port! Another thing is that I can't specify a static external IP address for my server, since I don't own any.

    I hope I have explained my problem in a comprehensible way. If not, simply ask by posting a comment!

    Regards, ntor

    PS: I have already mainly tried the following articles: "Out Of Band FTP 7 shows 'Operation timed out'", "How to Configure Windows Firewall for a Passive Mode FTP Server", and "ServerFault --- Passive ftp on Server 2008".

    EDIT: There is one idea rising up in my mind. When I use FileZilla to connect in passive mode, I always get something like this:

        227 Entering Passive Mode (192,168,1,102,160,86)

    According to a Rhinosof article, FileZilla then tries to connect on port 160*256+86 = 41046, although I have restricted the data ports (as mentioned above). Could this be caused by the router, which doesn't forward outgoing ports directly but uses different ones? (The IP address given is the local one, since I'm not able to define a static external one in the IIS Manager.)
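    Two hedged observations on that PASV reply: in IIS 7.x the passive data-channel range is a server-level setting (FTP Firewall Support at the server node, not on the individual site), and it reportedly only takes effect after the Microsoft FTP Service is restarted - a frequently missed step that would leave the server handing out ports like 40918. The same feature is also where the external IP to advertise in the 227 reply is configured. Roughly, from an elevated prompt:

        rem set the passive port range server-wide, then restart the FTP service
        %windir%\system32\inetsrv\appcmd set config /section:system.ftpServer/firewallSupport /lowDataChannelPort:5000 /highDataChannelPort:5009
        net stop ftpsvc && net start ftpsvc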

    Read the article

  • EC2 Filesystem / Files stored on the wrong partiton after launching new instance from AMI

    - by Philip Isaacs
    Today I set up a new EC2 instance from an AMI I created from an older EC2 instance. The AMI came from a small instance, and I launched the new one as a medium instance. From what I can tell this is pretty standard stuff. But here's the strange part. According to AWS, these are the specs:

        Small Instance (Default): 1.7 GB of memory, 1 EC2 Compute Unit (1 virtual core with 1 EC2 Compute Unit), 160 GB of local instance storage, 32-bit or 64-bit platform
        Medium Instance: 3.75 GB of memory, 2 EC2 Compute Units (1 virtual core with 2 EC2 Compute Units each), 410 GB of local instance storage, 32-bit or 64-bit platform

    Okay, now here's where I'm having an issue: when I log into the new, bigger instance, it still reports having only 1.7 GB of RAM. The other strange part is that all my old partitions are still there in the same configuration, plus a new, larger partition at /mnt which is essentially empty:

        Filesystem   Size  Used  Avail  Use%  Mounted on
        /dev/sda1    7.9G  5.9G  1.6G   79%   /
        none         846M  120K  846M   1%    /dev
        none         879M  0     879M   0%    /dev/shm
        none         879M  76K   878M   1%    /var/run
        none         879M  0     879M   0%    /var/lock
        none         879M  0     879M   0%    /lib/init/rw
        /dev/sda2    335G  195M  318G   1%    /mnt
        /dev/sdf     16G   9.9G  5.1G   67%   /var2

    This EC2 instance is a web server, and I was serving files from the /var2 directory, but for some reason the instance is storing everything on /. Okay, here's what I'd like to do: move all my website files to /mnt and have the web server point there. Any suggestions? If it helps, here is the current mount table as well:

        root@myserver:/var# mount -l
        /dev/sda1 on / type ext3 (rw) [cloudimg-rootfs]
        proc on /proc type proc (rw,noexec,nosuid,nodev)
        none on /sys type sysfs (rw,noexec,nosuid,nodev)
        none on /sys/kernel/debug type debugfs (rw)
        none on /sys/kernel/security type securityfs (rw)
        none on /dev type devtmpfs (rw,mode=0755)
        none on /dev/pts type devpts (rw,noexec,nosuid,gid=5,mode=0620)
        none on /dev/shm type tmpfs (rw,nosuid,nodev)
        none on /var/run type tmpfs (rw,nosuid,mode=0755)
        none on /var/lock type tmpfs (rw,noexec,nosuid,nodev)
        none on /lib/init/rw type tmpfs (rw,nosuid,mode=0755)
        /dev/sda2 on /mnt type ext3 (rw)
        /dev/sdf on /var2 type ext4 (rw,noatime)

    I hope this question makes sense. Basically, I want my old files on this new partition. Thanks in advance!
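    A word of caution before any file shuffling, plus a sketch: on EC2, the large /dev/sda2 mounted at /mnt is the instance-store ("ephemeral") volume, so anything placed there is lost when the instance stops or fails - an EBS volume like the existing /dev/sdf survives, which may matter a great deal for a web root. If /mnt is still the goal, the move itself is just a copy plus a repoint (the paths here are illustrative):

        # copy the site, preserving ownership and permissions
        rsync -a /var2/www/ /mnt/www/
        # repoint the web server's document root, or bind-mount over the old path
        mount --bind /mnt/www /var2/www
        # to make the bind mount permanent, append to /etc/fstab:
        echo "/mnt/www /var2/www none bind 0 0" >> /etc/fstab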

    Read the article

  • Completely remove user account and create another with same name in Windows 7

    - by TeaJay
    Here's my question, simply, and then the details in case they help to get me an appropriate answer.

    Question: how can I completely and permanently delete a user account in Windows 7, so that I can create another one with the same user name without the computer-name extension added - e.g. "Jane Smith", not "Jane Smith.computer name"?

    The details: I just did a clean install of Windows 7 Professional 32-bit. (My laptop crashed; I reinstalled Vista and restored backup files, but things weren't working, so I decided to just get Windows 7 since I had to start over anyway.) I used Windows Easy Transfer to save just about everything, even customizing it to include a user's appdata from Windows.old, which was created when I reinstalled Vista - not knowing that another Windows.old would be created by the installation of Windows 7. After installing Windows 7, I used Windows Easy Transfer to transfer the user file, appdata, to the new user account, which I gave the same name (Jane Smith) in case having a different name would cause problems with reading files or something.

    Afterwards, I realized that I did not want ALL of that junk. So I thought: no problem, I'll just delete the user account I just created - nothing lost - and create another one, this time transferring only the files I want (using the customize option in Windows Easy Transfer). I wanted to keep the same user name (Jane Smith), so after I deleted the user account I checked the files and didn't see it. It was late, so I went to bed, and the next morning I created a new user with that same name. The files looked fine, if I remember correctly. Meanwhile, I updated the computer and it restarted a couple of times. As I was moving files to the "Jane Smith" user folder, things weren't working as they should: I was actually moving files to the deleted user's folder, while the current user account's folder was named "Jane Smith.computer name", and that's where the files needed to go.

    I don't like this. It's too confusing. I want just "Jane Smith". How can I do this without just changing the user name (which doesn't change it in the file path etc.)? I want the first one GONE. If I can't do this, is it a problem to create an account with another name and still transfer files to it without path or other problems? I hope this question makes sense and that someone can help me. Thank you in advance!
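    For background - hedged, but this is the usual explanation - Windows appends the computer name when a profile folder C:\Users\Jane Smith still exists, or when the old account's entry still lingers under the ProfileList registry key, so the cleanup is to delete the profile itself (not just the account) and confirm both leftovers are gone before recreating the user. Roughly:

        rem 1. System Properties > Advanced > User Profiles > Settings:
        rem    delete the old "Jane Smith" profile there, then verify:
        dir "C:\Users"
        rem 2. check for a stale SID entry still pointing at the old profile
        reg query "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\ProfileList" /s | findstr /i "Jane"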

    Read the article

  • SSL_CLIENT_CERT_CHAIN not being passed to backend server

    - by nidkil
    I have client certificates configured and working in Apache, and I want to pass the PEM-encoded X.509 certificates of the client to the backend server. I tried with SSLOptions +ExportCertData. This does nothing at all, while the documentation states it should add SSL_SERVER_CERT, SSL_CLIENT_CERT and SSL_CLIENT_CERT_CHAINn (with n = 0, 1, 2, ...) as headers. Any ideas why this option is not working? I then tried setting the headers myself using RequestHeader. This works fine for all variables except SSL_CLIENT_CERT_CHAIN, which shows null in the header. Any ideas why the certificate chain is not being filled?

    This is my first Apache configuration:

        <VirtualHost 192.168.56.100:443>
            ServerName www.test.org
            ServerAdmin webmaster@localhost
            DocumentRoot /var/www
            ErrorLog ${APACHE_LOG_DIR}/error.log
            LogLevel warn
            CustomLog ${APACHE_LOG_DIR}/ssl_access.log combined
            SSLEngine on
            SSLProxyEngine on
            SSLCertificateFile /etc/apache2/ssl/certs/www.test.org.crt
            SSLCertificateKeyFile /etc/apache2/ssl/private/www.test.org.key
            SSLCACertificateFile /etc/apache2/ssl/ca/ca.crt
            <Proxy *>
                AddDefaultCharset Off
                Order deny,allow
                Allow from all
            </Proxy>
            <Location /carbon>
                ProxyPass http://www.test.org:9763/carbon
                ProxyPassReverse http://www.test.org:9763/carbon
            </Location>
            <Location /services/GbTestProxy>
                SSLVerifyClient require
                SSLVerifyDepth 5
                SSLOptions +ExportCertData
                ProxyPass http://www.test.org:8888/services/GbTestProxy
                ProxyPassReverse http://www.test.org:8888/services/GbTestProxy
            </Location>
        </VirtualHost>

    This is my second Apache configuration (identical to the first, except for the GbTestProxy location):

        <Location /services/GbTestProxy>
            SSLVerifyClient require
            SSLVerifyDepth 5
            RequestHeader set SSL_CLIENT_S_DN "%{SSL_CLIENT_S_DN}s"
            RequestHeader set SSL_CLIENT_I_DN "%{SSL_CLIENT_I_DN}s"
            RequestHeader set SSL_CLIENT_S_DN_CN "%{SSL_SERVER_S_DN_CN}s"
            RequestHeader set SSL_SERVER_S_DN_OU "%{SSL_SERVER_S_DN_OU}s"
            RequestHeader set SSL_CLIENT_CERT "%{SSL_CLIENT_CERT}s"
            RequestHeader set SSL_CLIENT_CERT_CHAIN0 "%{SSL_CLIENT_CERT_CHAIN0}s"
            RequestHeader set SSL_CLIENT_CERT_CHAIN1 "%{SSL_CLIENT_CERT_CHAIN1}s"
            RequestHeader set SSL_CLIENT_VERIFY "%{SSL_CLIENT_VERIFY}s"
            ProxyPass http://www.test.org:8888/services/GbTestProxy
            ProxyPassReverse http://www.test.org:8888/services/GbTestProxy
        </Location>

    Hope someone can help. Regards, nidkil
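    Two hedged observations. First, the second configuration dropped SSLOptions +ExportCertData, and the mod_ssl documentation ties the CERT/CHAIN variables to that option, so it may be worth keeping it alongside the RequestHeader lines. Second, SSL_CLIENT_CERT_CHAINn only ever contains the intermediate certificates the client actually sent during the handshake, so it stays empty when the client presents a certificate issued directly by a CA the server already trusts. Roughly:

        <Location /services/GbTestProxy>
            SSLVerifyClient require
            SSLVerifyDepth 5
            # keep ExportCertData so mod_ssl fills the CERT/CHAIN variables
            SSLOptions +ExportCertData
            RequestHeader set SSL_CLIENT_CERT "%{SSL_CLIENT_CERT}s"
            RequestHeader set SSL_CLIENT_CERT_CHAIN0 "%{SSL_CLIENT_CERT_CHAIN0}s"
            ProxyPass http://www.test.org:8888/services/GbTestProxy
            ProxyPassReverse http://www.test.org:8888/services/GbTestProxy
        </Location>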

    Read the article

  • OpenVPN Clients using server's connection (with no default gateway)

    - by Branden Martin
    I wanted an OpenVPN server so that I could create a private VPN network for staff to connect to. However, not as planned, when clients connect to the VPN they use the VPN server's internet connection (e.g., when going to whatsmyip.com, the IP shown is the server's and not the client's home connection).

    server.conf:

        local <serverip>
        port 1194
        proto udp
        dev tun
        ca ca.crt
        cert x.crt
        key x.key
        dh dh1024.pem
        server 10.8.0.0 255.255.255.0
        ifconfig-pool-persist ipp.txt
        client-to-client
        keepalive 10 120
        comp-lzo
        persist-key
        persist-tun
        status openvpn-status.log
        verb 9

    client.conf:

        client
        dev tun
        proto udp
        remote <server> 1194
        resolv-retry infinite
        nobind
        persist-key
        persist-tun
        ca ca.crt
        cert x.crt
        key x.key
        ns-cert-type server
        comp-lzo
        verb 3

    Server's routes:

        Kernel IP routing table
        Destination   Gateway          Genmask          Flags Metric Ref  Use Iface
        10.8.0.2      *                255.255.255.255  UH    0      0    0   tun0
        10.8.0.0      10.8.0.2         255.255.255.0    UG    0      0    0   tun0
        69.64.48.0    *                255.255.252.0    U     0      0    0   eth0
        default       static-ip-69-64  0.0.0.0          UG    0      0    0   eth0
        default       static-ip-69-64  0.0.0.0          UG    0      0    0   eth0
        default       static-ip-69-64  0.0.0.0          UG    0      0    0   eth0

    Server's iptables:

        Chain INPUT (policy ACCEPT)
        target            prot opt source    destination
        fail2ban-proftpd  tcp  --  anywhere  anywhere  multiport dports ftp,ftp-data,ftps,ftps-data
        fail2ban-ssh      tcp  --  anywhere  anywhere  multiport dports ssh
        ACCEPT            udp  --  anywhere  anywhere  udp dpt:domain
        ACCEPT            tcp  --  anywhere  anywhere  tcp dpt:20000
        ACCEPT            tcp  --  anywhere  anywhere  tcp dpt:webmin
        ACCEPT            tcp  --  anywhere  anywhere  tcp dpt:https
        ACCEPT            tcp  --  anywhere  anywhere  tcp dpt:www
        ACCEPT            tcp  --  anywhere  anywhere  tcp dpt:imaps
        ACCEPT            tcp  --  anywhere  anywhere  tcp dpt:imap2
        ACCEPT            tcp  --  anywhere  anywhere  tcp dpt:pop3s
        ACCEPT            tcp  --  anywhere  anywhere  tcp dpt:pop3
        ACCEPT            tcp  --  anywhere  anywhere  tcp dpt:ftp-data
        ACCEPT            tcp  --  anywhere  anywhere  tcp dpt:ftp
        ACCEPT            tcp  --  anywhere  anywhere  tcp dpt:domain
        ACCEPT            tcp  --  anywhere  anywhere  tcp dpt:smtp
        ACCEPT            tcp  --  anywhere  anywhere  tcp dpt:ssh
        ACCEPT            all  --  anywhere  anywhere

        Chain FORWARD (policy ACCEPT)
        target  prot opt source       destination
        ACCEPT  all  --  anywhere     anywhere     state RELATED,ESTABLISHED
        ACCEPT  all  --  10.8.0.0/24  anywhere
        REJECT  all  --  anywhere     anywhere     reject-with icmp-port-unreachable

        Chain OUTPUT (policy ACCEPT)
        target  prot opt source  destination

        Chain fail2ban-proftpd (1 references)
        target  prot opt source    destination
        RETURN  all  --  anywhere  anywhere

        Chain fail2ban-ssh (1 references)
        target  prot opt source    destination
        RETURN  all  --  anywhere  anywhere

    My goal is that clients can only talk to the server and to other connected clients. Hope I made sense. Thanks for the help!
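    Given that neither config contains redirect-gateway, two hedged places to look: whether the client side is adding a default route on its own (some client GUIs have a "use this connection as default gateway" option or extra directives in the .ovpn), and whether the server is NATing VPN clients out to the internet - the filter table above even forwards 10.8.0.0/24 to anywhere, and the nat table isn't shown. Roughly:

        # on the server: any MASQUERADE/SNAT rule covering 10.8.0.0/24 would
        # let clients surf through the box - if present, delete it
        iptables -t nat -L POSTROUTING -n -v
        # and tighten forwarding so clients reach only the VPN subnet itself
        iptables -D FORWARD -s 10.8.0.0/24 -j ACCEPT
        iptables -A FORWARD -s 10.8.0.0/24 -d 10.8.0.0/24 -j ACCEPT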

    Read the article

  • Cannot install passenger with Nginx

    - by Luc
    Hello, I have a Rack application that I want to migrate from Ruby 1.8.7 + Apache + Passenger to Ruby 1.9.1 + Nginx + Passenger. I have put together the following all-in-one install script, and it raises an error. Here is the installation script (a basic one with all the steps I need to install everything on a fresh Ubuntu 10.04 Lucid Lynx box; note the original was missing the "install" in the zlib1g-dev line, fixed below):

        # Nginx sources
        cd /tmp
        wget http://nginx.org/download/nginx-0.7.66.tar.gz
        tar xzf nginx-0.7.66.tar.gz
        cd nginx-0.7.66

        # openssl, required for SSL/TLS
        sudo apt-get install openssl
        sudo apt-get install libssl-dev

        # compilation stuff
        sudo apt-get install zlib1g-dev

        # Ruby interpreter 1.9.1
        sudo apt-get install ruby1.9.1 ruby1.9.1-dev rubygems1.9.1 irb1.9.1 ri1.9.1 rdoc1.9.1 build-essential nginx libopenssl-ruby1.9.1

        # make sure the default ruby uses version 1.9.1
        sudo update-alternatives --install /usr/bin/ruby ruby /usr/bin/ruby1.9.1 400 --slave /usr/share/man/man1/ruby.1.gz ruby.1.gz /usr/share/man/man1/ruby1.9.1.1.gz --slave /usr/bin/ri ri /usr/bin/ri1.9.1 --slave /usr/bin/irb irb /usr/bin/irb1.9.1 --slave /usr/bin/rdoc rdoc /usr/bin/rdoc1.9.1
        sudo update-alternatives --config ruby

        # Passenger (rake-0.8.7, fastthread-1.0.7, rack-1.1.0, passenger-2.2.14)
        sudo gem install passenger

        # activate Passenger in nginx, selecting option 2 to use the nginx sources downloaded above
        cd /var/lib/gems/1.9.1/gems/passenger-2.2.14/bin
        sudo ./passenger-install-nginx-module

    And this is the error message I got:

        /var/lib/gems/1.9.1/gems/passenger-2.2.14/ext/nginx/ContentHandler.c
        gcc -c -pipe -O -W -Wall -Wpointer-arith -Wno-unused-parameter -Wunused-function -Wunused-variable -Wunused-value -Werror -g -I src/core -I src/event -I src/event/modules -I src/os/unix -I /tmp/pcre-8.00 -I objs -I src/http -I src/http/modules -I src/mail \
            -o objs/addon/nginx/StaticContentHandler.o \
            /var/lib/gems/1.9.1/gems/passenger-2.2.14/ext/nginx/StaticContentHandler.c
        /var/lib/gems/1.9.1/gems/passenger-2.2.14/ext/nginx/StaticContentHandler.c: In function 'passenger_static_content_handler':
        /var/lib/gems/1.9.1/gems/passenger-2.2.14/ext/nginx/StaticContentHandler.c:71: error: 'ngx_http_request_t' has no member named 'zero_in_uri'
        make[1]: *** [objs/addon/nginx/StaticContentHandler.o] Error 1
        make[1]: Leaving directory `/tmp/nginx-0.7.66'
        make: *** [build] Error 2
        --------------------------------------------
        It looks like something went wrong
        Please read our Users guide for troubleshooting tips:
        /var/lib/gems/1.9.1/gems/passenger-2.2.14/doc/Users guide Nginx.html

    I do not understand the reason for this error. Is this a compatibility problem? Hope you have some clues :) Thanks a lot, Luc
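    It does look like a source-compatibility problem: the zero_in_uri field was removed from nginx's ngx_http_request_t in later releases, so Passenger 2.2.14's nginx module no longer compiles against nginx 0.7.66. Two hedged ways out - build against an older nginx tarball from before the removal, or move to a Passenger release newer than 2.2.14 that accounts for the change:

        # option 1: feed the installer an older nginx source tree, e.g.
        wget http://nginx.org/download/nginx-0.7.61.tar.gz
        # option 2: upgrade Passenger and re-run its nginx installer
        sudo gem update passenger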

    Read the article

  • OpenBSD pf 'match in all scrub (no-df)' causes HTTPS to be unreachable on mobile network

    - by Frank ter V.
    First of all: excuse me for my poor usage of the English language. For several years I have been experiencing problems with the 'match in all scrub (no-df)' rule in pf, and I can't find out what's happening here. I'll try to be clear and simple; the pf.conf has been extremely shortened for this forum posting:

        set skip on lo0
        match in all scrub (no-df)
        block all
        block in quick from urpf-failed
        pass in on em0 proto tcp from any to 213.125.xxx.xxx port 80 synproxy state
        pass in on em0 proto tcp from any to 213.125.xxx.xxx port 443 synproxy state
        pass out on em0 from 213.125.xxx.xxx to any modulate state

    HTTP and HTTPS were working fine - until the moment a customer in France (Wanadoo DSL) couldn't view HTTPS pages! I blamed his provider and did not investigate the problem. But then I bought an Android Samsung Galaxy SII (Vodafone) to monitor my servers. Hours after I walked out of the telephone store: no HTTPS connections to my server! I thought my servers were down and drove back to the office very fast, but they were up. I discovered that disabling the rule 'match in all scrub (no-df)' solves the problem: the Android phone (Vodafone NL) and Wanadoo DSL FR are now OK over HTTPS. But now I don't have any scrubbing anymore, and that is not what I want.

    Does anyone here understand what is going on? I don't. Enabling scrubbing causes HTTPS pages not to load on SOME ISPs, but not all. And the strange thing is, in systat I DO see a state created and packets received from those ISPs... Still confused. I'm using OpenBSD 5.1/amd64 and OpenBSD 5.0/i386. I have two ISPs at my office (one DSL and one cable); it affects both. This can be reproduced quite easily. I hope someone has experience with this problem. Greetings, Frank
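    A hedged guess at the mechanism, since mobile and DSL carriers often sit behind tunnels with a reduced MTU: fiddling with the don't-fragment bit interacts badly with path-MTU discovery, and the TLS handshake - with its large certificate records - is exactly the traffic that dies when full-sized segments get dropped along the path. The usual companion to no-df is therefore MSS clamping, which keeps the scrub while capping segment size; worth trying:

        # keep the scrub, but clamp TCP MSS so full-sized segments fit the path
        match in all scrub (no-df max-mss 1440)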


  • Moving windows-2003 hdd into virtual machine - with HDD shrink

    - by jm666
    Before you vote to close as an exact duplicate, please read the full question. I have already read:

        Can I make a virtual machine out of a Windows XP physical machine?
        Disk2vhd, convert my PC to Hyper-V Virtual Machine
        Creating a Windows Virtual PC image from a Physical machine
        physical machine to virtual machine and place into VirtualBox
        BSOD trying to migrate Windows XP from a physical to a virtual machine
        http://en.wikipedia.org/wiki/Physical-to-Virtual

    and all other similar questions here, plus several external sites, and unfortunately I did not find an answer to my problem. I have a physical machine with a 500GB HDD, on which an old Windows 2003 server is installed with one server application. The application, like the Windows installation itself, is too old: there is no support for it today, I don't have the installation media, and so on ;( Of the 500GB, only approx. 100MB is used (maybe less once I delete all unnecessary files). I want to convert the machine into a VirtualBox VM, and VirtualBox should run on the same machine. Is it possible to do this with the following steps?

        1. Attach another HDD (via USB or internally).
        2. Boot a live Linux from CD and mount the HDDs.
        3. Run "something" on the Linux (the above Wikipedia article has many pointers to the software) to do the conversion and store the image on the USB HDD. Unfortunately, many of the tools rely on features that exist only in Windows XP and above, with no information about Windows 2003 server; so what is a working solution for Windows 2003?
        4. Try to boot the virtual image with VirtualBox.
        5. When it runs OK, remove the old installation, install Linux on the old 500GB HDD, copy the image over, and run it.

    The above should work (I hope), but here are the problems: I currently have only a 320GB external USB HDD (of course, I can remove it from its box and attach it as an internal HDD too). So for the conversion I am looking for an on-the-fly HDD shrink: while moving the physical 500GB HDD I need to shrink it onto the smaller disk, since, as I said above, only 100MB is used. Does something (free) exist for this? Or is the only way to buy a larger 1TB HDD and use it for the conversion? Other questions: does anybody have real experience converting Windows 2003 into VirtualBox? I am looking for an answer from someone who has really done it and can point out the real pitfalls (I can google myself). Is there a better approach to this?
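    For step 3, a hedged sketch of one shrink-and-copy approach from a live Linux. All device names, sizes, and paths below are illustrative assumptions, ntfsresize and dd are destructive if misused, and this is not a tested Windows 2003 recipe:

        # Check how far the NTFS filesystem can shrink (read-only, safe)
        ntfsresize --info /dev/sda1

        # Shrink the filesystem itself well below the 320GB target
        # (destructive: verify backups first)
        ntfsresize --size 40G /dev/sda1

        # NOTE: the partition table still claims the old size; the partition
        # entry must be shrunk to match (e.g. with fdisk) before copying.

        # Copy only the shrunken region, with some slack, to the external disk
        dd if=/dev/sda of=/mnt/usb/win2003.raw bs=1M count=42000

        # Convert the raw image into a VirtualBox disk
        VBoxManage convertfromraw /mnt/usb/win2003.raw /mnt/usb/win2003.vdi --format VDI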


  • Debian Lenny syslog kernel messages

    - by andre
    Hello everyone. I recently got a disk swap on my colo'd box, and since then I've been getting these messages from the kernel (seen on the terminal and later in /var/log/messages):

        Mar 22 09:04:29 seedbox kernel: [72710.442831] Pid: 6527, comm: sshd Not tainted 2.6.26-2-amd64 #1
        Mar 22 09:04:29 seedbox kernel: [72710.442831] RIP: 0010:[<ffffffff802876b9>] [<ffffffff802876b9>] page_remove_rmap+0xff/0x11a
        Mar 22 09:04:29 seedbox kernel: [72710.442831] RSP: 0018:ffff8100b75d1da8 EFLAGS: 00010246
        Mar 22 09:04:29 seedbox kernel: [72710.442831] RAX: 0000000000000000 RBX: ffffe2000185e3e8 RCX: 0000000000008e53
        Mar 22 09:04:29 seedbox kernel: [72710.442831] RDX: ffff810080a4c000 RSI: 0000000000000046 RDI: 0000000000000282
        Mar 22 09:04:29 seedbox kernel: [72710.442831] RBP: ffff8100379838c8 R08: 00007f6d11ba4000 R09: ffff8100b75d1800
        Mar 22 09:04:29 seedbox kernel: [72710.442831] R10: 0000000000000000 R11: 0000010000000010 R12: ffff8100bb446b00
        Mar 22 09:04:29 seedbox kernel: [72710.442831] R13: 00007f6d11ba4000 R14: ffffe2000185e3e8 R15: ffff810001023b80
        Mar 22 09:04:29 seedbox kernel: [72710.442831] FS: 0000000000000000(0000) GS:ffffffff8053d000(0000) knlGS:0000000000000000
        Mar 22 09:04:29 seedbox kernel: [72710.442831] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
        Mar 22 09:04:29 seedbox kernel: [72710.442831] CR2: 00007f6d1195b480 CR3: 0000000037904000 CR4: 00000000000006e0
        Mar 22 09:04:29 seedbox kernel: [72710.442831] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
        Mar 22 09:04:29 seedbox kernel: [72710.442831] DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
        Mar 22 09:04:29 seedbox kernel: [72710.442831] Process sshd (pid: 6527, threadinfo ffff8100b75d0000, task ffff8100bd0a0990)
        Mar 22 09:04:29 seedbox kernel: [72710.442831] Stack: 800000006f65b045 800000006f65b045 ffff8100bc013d20 ffffffff8027f69a
        Mar 22 09:04:29 seedbox kernel: [72710.442831] ffff810100000000 0000000000000000 ffff8100b75d1eb8 ffffffffffffffff
        Mar 22 09:04:29 seedbox kernel: [72710.442831] 0000000000000000 ffff8100379838c8 ffff8100b75d1ec0 0000000000296460
        Mar 22 09:04:29 seedbox kernel: [72710.442831] Call Trace:
        Mar 22 09:04:29 seedbox kernel: [72710.442831] [<ffffffff8027f69a>] ? unmap_vmas+0x4c9/0x885
        Mar 22 09:04:29 seedbox kernel: [72710.442831] [<ffffffff80283ac8>] ? exit_mmap+0x7c/0xf0
        Mar 22 09:04:29 seedbox kernel: [72710.442831] [<ffffffff80232538>] ? mmput+0x2c/0xa2
        Mar 22 09:04:29 seedbox kernel: [72710.442831] [<ffffffff802378ad>] ? do_exit+0x25a/0x6a6
        Mar 22 09:04:29 seedbox kernel: [72710.442831] [<ffffffff802afa45>] ? mntput_no_expire+0x20/0x117
        Mar 22 09:04:29 seedbox kernel: [72710.442831] [<ffffffff80237d66>] ? do_group_exit+0x6d/0x9d
        Mar 22 09:04:29 seedbox kernel: [72710.442831] [<ffffffff80237da8>] ? sys_exit_group+0x12/0x16
        Mar 22 09:04:29 seedbox kernel: [72710.442831] [<ffffffff8020beda>] ? system_call_after_swapgs+0x8a/0x8f
        Mar 22 09:04:29 seedbox kernel: [72710.442831]
        Mar 22 09:04:29 seedbox kernel: [72710.442831]
        Mar 22 09:04:29 seedbox kernel: [72710.442831] RSP <ffff8100b75d1da8>
        Mar 22 09:04:29 seedbox kernel: [72710.442831] ---[ end trace e8a2f3b263482c6e ]---

    I don't know what the problem is or how to debug or track it; I hope you guys can help. Let me know if you need more information.
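    A hedged first step toward decoding an oops like this: resolve the faulting RIP (page_remove_rmap+0xff) to a source line against a debug build of the same kernel. The package name below is a guess to verify against the Debian archive for this exact kernel version; the vmlinux path is the usual location for Debian debug symbols:

        # Hypothetical package name; confirm against 'uname -r' and the archive
        apt-get install linux-image-2.6.26-2-amd64-dbg

        # Map the faulting instruction to a file and line in the kernel source
        gdb /usr/lib/debug/lib/modules/2.6.26-2-amd64/vmlinux \
            -batch -ex 'info line *page_remove_rmap+0xff'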


  • Forward all traffic through an ssh tunnel

    - by Eamorr
    I hope someone can follow this; I'll explain as best I can. I'm trying to forward all traffic from port 6999 on x.x.x.224, through an ssh tunnel, and on to port 7000 on x.x.x.218. Here is some ASCII art:

        |browser|-----|Squid on x.x.x.224|------|ssh tunnel|------<satellite link>------|Squid on x.x.x.218|-----|www|
                 3128                     6999                                           7000                    80

    When I remove the ssh tunnel, everything works fine. The idea is to turn off encryption on the ssh tunnel (to save bandwidth) and turn on maximum compression (to save more bandwidth), because it's a satellite link. Here's the ssh tunnel I've been using:

        ssh -C -f -C -o CompressionLevel=9 -o Cipher=none [email protected] -L 7000:172.16.1.224:6999 -N

    The trouble is, I don't know how to get data from Squid on x.x.x.224 into the ssh tunnel. Am I going about this the wrong way? Should I create an ssh tunnel on x.x.x.218? I use iptables to stop squid on x.x.x.224 from reading port 80 and to feed from port 6999 instead (i.e. via the ssh tunnel). Do I need another iptables rule? Any comments greatly appreciated. Many thanks in advance.

    Update: regarding Eduardo Ivanec's question, here is a tcpdump -i any port 7000 -nn dump from x.x.x.218:

        14:42:15.386462 IP 172.16.1.224.40006 > 172.16.1.218.7000: Flags [S], seq 2804513708, win 14600, options [mss 1460,sackOK,TS val 86702647 ecr 0,nop,wscale 4], length 0
        14:42:15.386690 IP 172.16.1.218.7000 > 172.16.1.224.40006: Flags [R.], seq 0, ack 2804513709, win 0, length 0

    Update 2: when I run the second command, I get the following error in my browser:

        ERROR
        The requested URL could not be retrieved
        The following error was encountered while trying to retrieve the URL: http://109.123.109.205/index.php
        Zero Sized Reply
        Squid did not receive any data for this request.
        Your cache administrator is webmaster.
        Generated Fri, 01 Jul 2011 16:06:06 GMT by remote-site (squid/2.7.STABLE9)

    (remote-site is 172.16.1.224.) When I do a tcpdump -i any port 7000 -nn I get the following:

        root@remote-site:~# tcpdump -i any port 7000 -nn
        tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
        listening on any, link-type LINUX_SLL (Linux cooked), capture size 65535 bytes
        channel 2: open failed: connect failed: Connection refused
        channel 2: open failed: connect failed: Connection refused
        channel 2: open failed: connect failed: Connection refused
        channel 2: open failed: connect failed: Connection refused
        channel 2: open failed: connect failed: Connection refused
        channel 2: open failed: connect failed: Connection refused
        channel 2: open failed: connect failed: Connection refused
        channel 2: open failed: connect failed: Connection refused
        channel 2: open failed: connect failed: Connection refused
        channel 2: open failed: connect failed: Connection refused
        channel 2: open failed: connect failed: Connection refused
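    One hedged way to wire this so the local squid feeds the tunnel, for comparison with the setup above. The username, the 127.0.0.1 binding, and the squid.conf lines are illustrative assumptions, not tested config; note also that stock OpenSSH does not accept Cipher=none, so some cipher will remain in use regardless:

        # On x.x.x.224: listen on local port 6999 and relay, compressed,
        # to the far squid at x.x.x.218 port 7000
        ssh -N -f -C -o CompressionLevel=9 -L 6999:127.0.0.1:7000 user@x.x.x.218

        # Then in squid.conf on x.x.x.224, push requests into the tunnel
        # instead of fetching directly (sketch only):
        #   cache_peer 127.0.0.1 parent 6999 0 no-query default
        #   never_direct allow all

    With this shape, the "connect failed: Connection refused" messages would point at the tunnel's target (here 127.0.0.1:7000 as resolved on x.x.x.218) not having a listener, which is the first thing to verify.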

