Search Results

Search found 19539 results on 782 pages for 'pretty print'.


  • Cross-platform distributed fault-tolerant (disconnected operation/local cache) filesystem

    - by Adrian Frühwirth
    We are facing a design "challenge" where we are required to set up a storage solution with the following properties.

    What we need:

    - HA
    - a scalable storage backend
    - offline/disconnected operation on the client, to account for network outages
    - cross-platform access: client-side access certainly from Windows (probably XP upwards), possibly Linux
    - backend integrates with AD/LDAP (permission management (user/group management, ...))
    - should work reasonably well over slow WAN links

    Another problem is that we don't really know all possible use cases here: whether people need to be able to have concurrent access to shared files, or whether they will only be accessing their own files. So a possible solution needs to account for concurrent access and for how conflict management would look in this case from a user's point of view.

    This two-year-old blog post sums up the impression that I have been getting during the last couple of days of research: there are lots of current übercool projects implementing (non-Windows) clustered petabyte-capable blob-storage solutions, but there is none that supports disconnected operation nicely and natively. I am hoping that we have missed an obvious solution.

    What we have tried:

    OpenAFS. We figured that we want a distributed network filesystem with a local cache and tested OpenAFS (which, as the only currently "stable" DFS supporting disconnected operation, seemed the way to go) for a week, but there are several problems with it:

    - It's a real pain to set up.
    - There are no official RHEL/CentOS packages, and the package of the current stable version 1.6.5.1 from elrepo randomly kernel panics on fresh installs; this is an absolute no-go.
    - Windows support (including the required Kerberos packages) is mystical. The current client for the 1.6 branch does not run on Windows 8; the current client for the 1.7 branch does, but it just randomly crashes. After that experience we didn't even bother testing on XP and Windows 7.

    Suffice to say, we couldn't get it working, and the whole setup has been so unstable and so complicated to set up that it's just not an option for production.

    Samba + Unison. Since OpenAFS was a complete disaster and no other DFS seems to support disconnected operation, we went for a simpler idea that would sync files against a Samba server using Unison. Samba integrates with AD (it's a pain but can be done) and solves the problem of remotely accessing the storage from Windows, but it introduces another SPOF and does not address the actual storage problem. We could probably stick any clustered FS underneath Samba, but that means we need a HA Samba setup on top of it to maintain HA, which probably adds a lot of additional complexity. (I vaguely remember trying to implement redundancy with Samba before, and I could not silently fail over between servers.) There are further drawbacks:

    - Even when online, you are working with local files, which will result in more conflicts than would be necessary if a local cache were only touched when disconnected.
    - It's not automatic. We cannot expect users to manually sync their files using the (functional, but not-so-pretty) GTK GUI on a regular basis. I attempted to semi-automate the process using the Windows task scheduler, but you cannot really do it in a satisfactory way.
    - On top of that, the way Unison works makes syncing against Samba a costly operation, so I am afraid that it just doesn't scale very well, or even at all.

    Samba + "Offline Files". After that we became a little desperate and gave Windows "offline files" a chance. We figured that having something that is built into the OS would reduce administrative effort, would help with blaming someone else when it's not working properly, and should just work, since people have been using it for years. Right? Wrong. We really wanted it to work, but it just doesn't. Thirty minutes of copying files around and unplugging network cables/disabling network interfaces left us with silent failures (there is only a tiny notification in the Windows Explorer status bar, which doesn't even open Sync Center if you click on it!), undeletable files on the server, and conflicts that should not even be conflicts. In the end, we had one successful sync of a tiny text file; everything else just exploded horribly. Beyond that, there are other problems:

    - Microsoft admits that "offline files" in Windows XP cannot cope with "large files" and therefore does not cache/sync them at all, which would mean those files become unavailable if the connection drops.
    - In Windows 7 the feature is only available in the Professional/Ultimate/Enterprise editions.

    Summary: Unless there is another fault-tolerant DFS that supports Windows natively, I assume that stacking a HA Samba cluster on top of something like GlusterFS/Lustre/whatnot is the only option, but I hope that I am wrong here. How do other companies allow fault-tolerant network access to redundant storage in a heterogeneous environment with Windows?
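    For what it's worth, the semi-automated sync idea above can also be driven from Unison's command-line batch mode instead of the GTK GUI. A minimal sketch, assuming a Linux client and an SSH-reachable file server (the paths, host name, and schedule are hypothetical, and a real profile would need per-user tuning):

        #!/bin/sh
        # Two-way sync without prompting; -batch skips conflicting paths instead
        # of asking, so conflicts are left in place for manual review.
        unison /home/alice/documents ssh://fileserver//srv/samba/alice \
            -batch -auto -silent -times -logfile /var/log/unison-alice.log

        # Example crontab entry running the script above every 15 minutes:
        # */15 * * * * /usr/local/bin/sync-documents.sh

    On Windows the same binary can run against a mounted share path from the task scheduler, which hits exactly the scaling caveat described above, so this is only a stopgap, not the missing DFS.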

    Read the article

  • My web server with 16GB RAM shows all RAM as used, but is it really? (see the 'top' output)

    - by Alex
    I have some questions about my web server. It's a LAMP web server running CentOS 5.5, PHP 5, and MySQL 5. The server gets hundreds (maybe thousands) of concurrent users during peak hours. I'm trying to optimize a little and understand "top". From what I can see:

    - All 16GB of my RAM have been used up? Does that mean that my server needs more memory?
    - My swap is only 2GB; should it be increased?
    - Usually during peak hours the first number of my server's load average is about 2.5-3. What could I do to optimize the server so that the load average doesn't go above 1, even during peak? In the past I was told a good working server should stay under a load of 1; is this still true? Although even during a load of 2.5-3, server pages and applications seem to load with pretty good speed.
    - What should the memory size in php.ini be set to?

        top - 14:30:18 up 2 days, 12:41, 5 users, load average: 1.25, 1.74, 2.92
        Tasks: 305 total, 2 running, 302 sleeping, 0 stopped, 1 zombie
        Cpu(s): 6.3%us, 0.9%sy, 0.0%ni, 92.5%id, 0.2%wa, 0.0%hi, 0.1%si, 0.0%st
        Mem: 16427200k total, 16111472k used, 315728k free, 3120316k buffers
        Swap: 2104496k total, 268k used, 2104228k free, 6216756k cached

          PID USER   PR  NI VIRT  RES  SHR  S %CPU %MEM TIME+   COMMAND
        29080 apache 15  0  358m  36m  5192 S 20.2  0.2 2:08.40 httpd
        29093 apache 18  0  357m  36m  5192 S 18.2  0.2 2:02.52 httpd
        29079 apache 15  0  370m  49m  5832 S 10.0  0.3 2:32.14 httpd
         1812 apache 15  0  370m  49m  5196 S  7.3  0.3 2:25.30 httpd
         5204 apache 15  0  358m  36m  5168 S  5.3  0.2 0:59.28 httpd
        29075 apache 15  0  370m  48m  5184 S  3.3  0.3 2:15.93 httpd
         9712 apache 15  0  360m  38m  5180 S  3.0  0.2 0:54.81 httpd
        29072 apache 16  0  358m  36m  5192 S  2.7  0.2 2:24.43 httpd
         6310 apache 17  0  388m  67m  5180 S  2.3  0.4 0:58.85 httpd
         8674 apache 15  0  343m  21m  4980 S  2.0  0.1 0:07.91 httpd
        29085 apache 15  0  371m  49m  5224 S  2.0  0.3 2:16.86 httpd
        29083 apache 15  0  370m  48m  5196 S  1.7  0.3 2:10.64 httpd
         5575 apache 15  0  357m  36m  5228 S  1.3  0.2 0:53.78 httpd
        29066 apache 15  0  379m  59m  5860 R  1.3  0.4 2:11.93 httpd
        29078 apache 15  0  370m  48m  5188 S  1.3  0.3 2:14.52 httpd
        29084 apache 15  0  370m  48m  5208 S  1.0  0.3 2:02.49 httpd
        29089 apache 15  0  370m  48m  5188 S  1.0  0.3 2:27.61 httpd
        29082 apache 15  0  390m  68m  5188 S  0.7  0.4 2:32.48 httpd
        29984 apache 15  0  358m  36m  5228 S  0.7  0.2 2:08.32 httpd
         3571 root   16  0  13400 1792 848  S  0.3  0.0 2:37.89 top
         4419 mysql  15  0  668m  175m 7204 S  0.3  1.1 3:32.25 mysqld
        28181 root   15  0  90460 3624 2680 S  0.3  0.0 0:17.60 sshd
        29091 apache 15  0  390m  69m  5196 S  0.3  0.4 2:29.99 httpd
        32476 root   15  0  12900 1320 848  R  0.3  0.0 0:06.46 top
            1 root   15  0  10372 680  572  S  0.0  0.0 0:02.01 init
            2 root   RT -5  0     0    0    S  0.0  0.0 0:00.51 migration/0
            3 root   34 19  0     0    0    S  0.0  0.0 0:00.07 ksoftirqd/0
            4 root   RT -5  0     0    0    S  0.0  0.0 0:00.00 watchdog/0
            5 root   RT -5  0     0    0    S  0.0  0.0 0:00.12 migration/1
            6 root   34 19  0     0    0    S  0.0  0.0 0:00.03 ksoftirqd/1
            7 root   RT -5  0     0    0    S  0.0  0.0 0:00.00 watchdog/1
            8 root   RT -5  0     0    0    S  0.0  0.0 0:00.06 migration/2
            9 root   34 19  0     0    0    S  0.0  0.0 0:00.03 ksoftirqd/2
           10 root   RT -5  0     0    0    S  0.0  0.0 0:00.00 watchdog/2
           11 root   RT -5  0     0    0    S  0.0  0.0 0:00.06 migration/3
           12 root   34 19  0     0    0    S  0.0  0.0 0:00.04 ksoftirqd/3
           13 root   RT -5  0     0    0    S  0.0  0.0 0:00.00 watchdog/3
           14 root   RT -5  0     0    0    S  0.0  0.0 0:01.45 migration/4
           15 root   34 19  0     0    0    S  0.0  0.0 0:00.01 ksoftirqd/4
           16 root   RT -5  0     0    0    S  0.0  0.0 0:00.00 watchdog/4
           17 root   RT -5  0     0    0    S  0.0  0.0 0:00.22 migration/5
           18 root   34 19  0     0    0    S  0.0  0.0 0:00.01 ksoftirqd/5
           19 root   RT -5  0     0    0    S  0.0  0.0 0:00.00 watchdog/5
           20 root   RT -5  0     0    0    S  0.0  0.0 0:00.15 migration/6
           21 root   34 19  0     0    0    S  0.0  0.0 0:00.02 ksoftirqd/6
           22 root   RT -5  0     0    0    S  0.0  0.0 0:00.00 watchdog/6
           23 root   RT -5  0     0    0    S  0.0  0.0 0:00.15 migration/7
           24 root   34 19  0     0    0    S  0.0  0.0 0:00.01 ksoftirqd/7
           25 root   RT -5  0     0    0    S  0.0  0.0 0:00.00 watchdog/7
           26 root   RT -5  0     0    0    S  0.0  0.0 0:00.19 migration/8
           27 root   34 19  0     0    0    S  0.0  0.0 0:00.04 ksoftirqd/8
           28 root   RT -5  0     0    0    S  0.0  0.0 0:00.00 watchdog/8
           29 root   RT -5  0     0    0    S  0.0  0.0 0:00.34 migration/9
           30 root   34 19  0     0    0    S  0.0  0.0 0:00.03 ksoftirqd/9
           31 root   RT -5  0     0    0    S  0.0  0.0 0:00.00 watchdog/9
           32 root   RT -5  0     0    0    S  0.0  0.0 0:00.16 migration/10
           33 root   34 19  0     0    0    S  0.0  0.0 0:00.04 ksoftirqd/10
           34 root   RT -5  0     0    0    S  0.0  0.0 0:00.00 watchdog/10
           35 root   RT -5  0     0    0    S  0.0  0.0 0:00.12 migration/11
           36 root   34 19  0     0    0    S  0.0  0.0 0:00.05 ksoftirqd/11
           37 root   RT -5  0     0    0    S  0.0  0.0 0:00.00 watchdog/11
           38 root   RT -5  0     0    0    S  0.0  0.0 0:00.35 migration/12
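    As a sanity check on the "all RAM used" worry: Linux counts reclaimable buffers and page cache as "used", so the memory effectively available here is free + buffers + cached, i.e. 315728k + 3120316k + 6216756k, roughly 9.2 GiB. A small sketch that computes this from /proc/meminfo (field names as found on a 2.6-era kernel):

        #!/bin/sh
        # Buffers and Cached are dropped automatically under memory pressure,
        # so "effectively free" = MemFree + Buffers + Cached (values in kB).
        awk '/^(MemFree|Buffers|Cached):/ {sum += $2}
             END {printf "effectively free: %.1f GiB\n", sum/1024/1024}' /proc/meminfo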

    Read the article

  • Set up routing and iptables for a new VPN connection to redirect **only** ports 80 and 443

    - by Steve
    I have a new VPN connection (using OpenVPN) to allow me to route around some ISP restrictions. While it is working fine, it is taking all the traffic over the VPN. This is causing me issues for downloading (my internet connection is a lot faster than the VPN allows) and for remote access. I run an SSH server, and I have a daemon running that allows me to schedule downloads via my phone.

    I have my existing ethernet connection on eth0 and the new VPN connection on tun0. I believe I need to set up the default route to use my existing eth0 connection on the 192.168.0.0/24 network, and set the default gateway to 192.168.0.1 (my knowledge is shaky, as I haven't done this for a number of years). If that is correct, then I'm not exactly sure how to do it! My current routing table is:

        Kernel IP routing table
        Destination   Gateway      Genmask          Flags Metric Ref Use Iface MSS Window irtt
        0.0.0.0       10.51.0.169  0.0.0.0          UG    0      0   0   tun0  0   0      0
        10.51.0.1     10.51.0.169  255.255.255.255  UGH   0      0   0   tun0  0   0      0
        10.51.0.169   0.0.0.0      255.255.255.255  UH    0      0   0   tun0  0   0      0
        85.25.147.49  192.168.0.1  255.255.255.255  UGH   0      0   0   eth0  0   0      0
        169.254.0.0   0.0.0.0      255.255.0.0      U     1000   0   0   eth0  0   0      0
        192.168.0.0   0.0.0.0      255.255.255.0    U     1      0   0   eth0  0   0      0

    After fixing the routing, I believe I need to use iptables to configure prerouting or masquerading to force everything with destination port 80 or 443 over tun0. Again, I'm not exactly sure how to do this! Everything I've found on the internet is trying to do something far more complicated, and trying to sort the wood from the trees is proving difficult. Any help would be much appreciated.

    UPDATE: So far, from the various sources, I've cobbled together the following:

        #!/bin/sh
        DEV1=eth0
        IP1=`ifconfig|perl -nE'/dr:(\S+)/&&say$1'|grep 192.`
        GW1=192.168.0.1
        TABLE1=internet
        TABLE2=vpn
        DEV2=tun0
        IP2=`ifconfig|perl -nE'/dr:(\S+)/&&say$1'|grep 10.`
        GW2=`route -n | grep 'UG[ \t]' | awk '{print $2}'`

        ip route flush table $TABLE1
        ip route flush table $TABLE2

        ip route show table main | grep -Ev ^default | while read ROUTE ; do
            ip route add table $TABLE1 $ROUTE
            ip route add table $TABLE2 $ROUTE
        done

        ip route add table $TABLE1 $GW1 dev $DEV1 src $IP1
        ip route add table $TABLE2 $GW2 dev $DEV2 src $IP2
        ip route add table $TABLE1 default via $GW1
        ip route add table $TABLE2 default via $GW2

        echo "1" > /proc/sys/net/ipv4/ip_forward
        echo "1" > /proc/sys/net/ipv4/ip_dynaddr

        ip rule add from $IP1 lookup $TABLE1
        ip rule add from $IP2 lookup $TABLE2
        ip rule add fwmark 1 lookup $TABLE1
        ip rule add fwmark 2 lookup $TABLE2

        iptables -t nat -A POSTROUTING -o $DEV1 -j SNAT --to-source $IP1
        iptables -t nat -A POSTROUTING -o $DEV2 -j SNAT --to-source $IP2
        iptables -t nat -A PREROUTING -m state --state ESTABLISHED,RELATED -j CONNMARK --restore-mark
        iptables -A OUTPUT -m state --state ESTABLISHED,RELATED -j CONNMARK --restore-mark
        iptables -t nat -A PREROUTING -i $DEV1 -m state --state NEW -j CONNMARK --set-mark 1
        iptables -t nat -A PREROUTING -i $DEV2 -m state --state NEW -j CONNMARK --set-mark 2
        iptables -t nat -A PREROUTING -m connmark --mark 1 -j MARK --set-mark 1
        iptables -t nat -A PREROUTING -m connmark --mark 2 -j MARK --set-mark 2
        iptables -t nat -A PREROUTING -m state --state NEW -m connmark ! --mark 0 -j CONNMARK --save-mark
        iptables -t mangle -A PREROUTING -i $DEV2 -m state --state NEW -p tcp --dport 80 -j CONNMARK --set-mark 2
        iptables -t mangle -A PREROUTING -i $DEV2 -m state --state NEW -p tcp --dport 443 -j CONNMARK --set-mark 2

        route del default
        route add default gw 192.168.0.1 eth0

    Now this seems to be working. Except it isn't! Connections to the blocked websites are going through, and connections not on ports 80 and 443 are using the non-VPN connection. However, port 80 and 443 connections that aren't to the blocked websites are using the non-VPN connection too! As the general goal has been reached I'm relatively happy, but it would be nice to know why it isn't working exactly right. Any ideas?

    For reference, I now have 3 routing tables: main, internet, and vpn. The listing of them is as follows...

    Main:

        default via 192.168.0.1 dev eth0
        10.38.0.1 via 10.38.0.205 dev tun0
        10.38.0.205 dev tun0 proto kernel scope link src 10.38.0.206
        85.removed via 192.168.0.1 dev eth0
        169.254.0.0/16 dev eth0 scope link metric 1000
        192.168.0.0/24 dev eth0 proto kernel scope link src 192.168.0.73 metric 1

    Internet:

        default via 192.168.0.1 dev eth0
        10.38.0.1 via 10.38.0.205 dev tun0
        10.38.0.205 dev tun0 proto kernel scope link src 10.38.0.206
        85.removed via 192.168.0.1 dev eth0
        169.254.0.0/16 dev eth0 scope link metric 1000
        192.168.0.0/24 dev eth0 proto kernel scope link src 192.168.0.73 metric 1
        192.168.0.1 dev eth0 scope link src 192.168.0.73

    VPN:

        default via 10.38.0.205 dev tun0
        10.38.0.1 via 10.38.0.205 dev tun0
        10.38.0.205 dev tun0 proto kernel scope link src 10.38.0.206
        85.removed via 192.168.0.1 dev eth0
        169.254.0.0/16 dev eth0 scope link metric 1000
        192.168.0.0/24 dev eth0 proto kernel scope link src 192.168.0.73 metric 1
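    One plausible reason the port 80/443 marking never takes effect: the two mangle rules above match on -i $DEV2, i.e. packets arriving from the tunnel in PREROUTING, but traffic generated on this box traverses the mangle OUTPUT chain instead, so locally originated web traffic never receives mark 2. A minimal sketch of the usual approach for steering only locally generated 80/443 traffic into the vpn table (the gateway value is taken from the VPN table listing above; this is a sketch to test against, not a verified drop-in fix):

        #!/bin/sh
        VPN_GW=10.38.0.205   # assumed from the "VPN" table listing above

        # Mark locally generated web traffic; PREROUTING never sees these packets
        iptables -t mangle -A OUTPUT -p tcp -m multiport --dports 80,443 \
            -j MARK --set-mark 2

        # Route marked packets via the vpn table (the fwmark rule already exists
        # in the script above; repeated here so the sketch is self-contained)
        ip rule add fwmark 2 lookup vpn
        ip route replace default via $VPN_GW dev tun0 table vpn
        ip route flush cache

        # Ensure replies for tunnelled connections come back over the tunnel
        iptables -t nat -A POSTROUTING -o tun0 -j MASQUERADE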

    Read the article

  • Managing BES Software Configurations

    - by DaveJohnston
    Hi, I am having problems with OTA deployment of a bespoke application that we have written. I have read loads of threads elsewhere and have got mixed help, but for my particular case none of it has really helped. So I thought I would explain my exact situation and try to get some help here.

    I am running BES version 4.1.5 (Bundle 79) for Microsoft Exchange. The application we have written is split into 5 modules, which we control, and another 4 modules which are 3rd-party libraries that we require. For our modules the version numbers are regularly changing, but for the others they are pretty much always going to remain the same. We have an alx file set up that identifies all of the files required, and in fact I am able to create a software configuration and deploy the application with no problems.

    What I am trying to do, however, is maintain multiple versions of our application on the BES and be able to select which version I want to deploy to each user. I have tried this a number of ways (as I said, I have read lots of other threads with solutions to this problem), but each comes with its own problem.

    First of all I tried just creating different configurations for each version of the application, but because they each had the same application ID, the BES informed me that I couldn't do this.

    I read somewhere that the solution was to create a second shared folder (e.g. \Program Files\Common Files\RIM) and add the apploader stuff and the new version of the app to this folder. I could then create a second software configuration that would have the same application ID. The result of this seemed promising to start with: when I changed the config that was assigned to a user, the new version was pushed out fine. But afterwards the BES reported that the device state was invalid, which meant I couldn't push anything else until I reactivated the device. I guess this is because the first config was never set to disallowed, so the old version wasn't removed, and the device essentially reported that it had multiple versions of the same application installed.

    The next suggestion I got was to change the application ID for each version, e.g. to include the version number. This meant that each version of the application could be included in a single configuration, and I could set one to disallowed and the other to required. Initially this worked and the first version was deployed. But when I switched (i.e. the old version became disallowed and the new version required), the BES reported "upgrade required" and removed the old version. The device restarts, the old version is gone, but the new version is not pushed out. I checked the BES and it still said Upgrade Required. I checked the log files and found:

        [40000] (11/12 09:50:27.397):{0xEB8} {[email protected], PIN=1234, UserId=2}SCS::PollDBQueueNewRequests - Queuing POLL_FOR_MISSING_APPS request
        [40000] (11/12 09:50:28.241):{0xE9C} RequestHandler::PollForMissingApps: Starting Poll For Missing Apps.
        [40304] (11/12 09:50:28.241):{0xE90} WorkerThreadPool:: ThreadProc(): Thread released with empty queue
        [40000] (11/12 09:50:28.241):{0xE9C} SCS::RemoveAppDeliveryRequests - No App Delivery Requests purged for User id 2
        [30000] (11/12 09:50:28.960):{0xE9C} Discard duplicate module group "name" on device
        [30000] (11/12 09:50:28.960):{0xE9C} Discard duplicate module group "name" on device
        [40000] (11/12 09:50:29.163):{0xE9C} RequestHandler::PollForMissingApps: Completed Poll For Missing Apps, elapsed time 0.922 seconds.

    (You will notice I have removed actual names and email addresses etc. for privacy reasons. But one question: where does the name of the module group come from? In my case it is close to the application ID, but doesn't include the version number that I added at the end in order to get it to work. Is that information embedded in a COD file or something??)

    So it is reporting a duplicate module group on the device? What does this mean? I checked the device properties (as reported on the BES), and it confirms that the modules with the old version numbers are still present on the device. So the application has been removed but not the modules?? I checked the device and the modules are gone, so the BES is just reporting that they are still there?? I checked the database, and it has the modules in question in the SyncDeviceMgmt table. If I delete these from the DB, the BES changes to report Install Required, and lo and behold the new version of the app is pushed out.

    So at the end of all that, my question is: does anyone have any other suggestions for how to handle upgrading our bespoke application OTA from the BES? Or can anyone point out something I am doing wrong in what I described above that might solve the problems I am having? I guess the question is: why does the database maintain that the modules are on the device after they are removed? Thanks for any help you can provide.

    Read the article

  • Bad disks in ancient server

    - by Joel Coel
    I have a 1998-era Netware 3.12 server that runs everything on our campus: general ledger, purchasing, payroll, student information, grades, you name it. The server has an Adaptec RAID controller with two volumes:

    - RAID 1: 2 x 17GB SCSI disks, Seagate ST318417W
    - RAID 5: 3 x 4GB SCSI disks, 2 Seagate ST34573W and 1 ST34572W

    We are currently in the early stages of a project to replace this system, but you don't just jump into a new system like that, and so I need to keep this server running until at least November 2011.

    This week we had not one but two hard drives fail. Thankfully they are from different volumes and we're able to keep running for the moment, but given the close timing of these failures I have serious doubts that I'll be able to avoid catastrophic failure on this server through the November target as is, without restoring the RAID redundancy; it'll only take one more drive failure anywhere and I'm completely hosed.

    We are fortunate enough to have exact-match "spares" lying around for both drives, but the spares are in unknown condition. I tried swapping just them in, but the RAID controller isn't smart enough to handle this, and it renders the system unbootable.

    As for the RAID controller itself, there is a utility I can get into during POST via a Ctrl-A shortcut, but I can't do much useful from there. To actually manage volumes I must first boot into Netware, at which point I can use CI/O Array Management Software Version 2.0 to actually look at volume information. I suspect that the normal way to manage things is to boot from a special floppy with the controller software on it, but that floppy is long gone.

    Going through the options in the RAID software, I think the only supported way to replace a disk in an existing RAID volume is to: physically add the disk; boot up and configure it as a "spare" for a volume; force the volume to use the spare to replace an existing down disk; (and at this point I'm only guessing) the down disk becomes the spare; repair the volume; remove the spare from the volume; and then shut down and remove the disk. Then start all over for the other failed disk. All this amounts to a lot of downtime, assuming I can even make it work and that my spares are any good.

    As for finding reliable spares, I have no clue where to even begin looking for a new 4GB SCSI drive, or even which exact SCSI system I'm looking for, as it's gone through a few different iterations over time.

    Another option is to migrate this to a virtual machine (Hyper-V), but all previous attempts we've made in this area have failed to get very far. When this machine was installed I was just graduating from high school, and so it requires lower-level knowledge of Netware and DOS than I ever developed, or that I have since forgotten if I did (I'm not exactly a DOS neophyte, either). Part of my problem is that this is a high-use server, and taking it down for a few days to figure things out isn't gonna fly very well.

    As for the question, I'm looking for anything that might be helpful in this situation: a recommendation on a place to find good spares from this era, personal experience repairing RAID volumes using a similar controller or building a Hyper-V VM from an old Netware server, a line on a floppy with better software for the RAID controller, a recommendation for a good Novell consultant in Nebraska who would be able to put things right, a whole other option I haven't considered yet, etc.

    Update: For backups, we have good (recently verified via restore) backups of the data only: nothing for the software that actually runs things.

    Update 2: Just a progress report: I currently have a working Netware 3.12 install in VMWare Virtual Server 2.0, thanks largely to the guide I found here: http://cerbulescubogdan.blogspot.com/2010/11/novell-netware-312-on-vmware.html The next steps are preparing empty Netware volumes to match the additional volumes on my existing server, taking a dump of everything on the C:\ drive and Netware volumes on my existing server, figuring out from that information what modules need to be added to Netware, installing my licenses (we do still have that disk, if it's any good), and moving data over. I have approval to bring the server down for a week after the first of the year (sadly not before), so, aside from creating empty volumes, the rest of the work will have to wait until then.

    Final Update (Jan 5, 2011): I was able to get spares working in both RAID arrays without data loss this week. Both are now listed by the controller as "FAULT TOLLERANT" (yay!). I was also able to build on the progress from my last update and now have a functional "spare" server in VMWare Server 2.0. The spare can run and use our ERP software, but I can't put it into production because I can't (yet) print from that box (and I have no idea why). Even so, this VM will do in a pinch if I have no other choice, and between it and the repaired RAID arrays I'm comfortable pushing on until I can junk the machine in November.

    Read the article

  • SVN Server not responding

    - by Rob Forrest
    I've been bashing my head against a wall with this one all day, and I would greatly appreciate a few more eyes on the problem at hand.

    We have an in-house SVN server that contains all live and development code for our website. Our live server can connect to this and get updates from the repository. This was all working fine until we migrated the SVN server from a physical machine to a vSphere VM. Now, for some reason I cannot fathom, we can no longer connect to the SVN server.

    The SVN server runs CentOS 6.2, Apache, and SVN 1.7.2. SELinux is well and truly disabled, and the problem remains when iptables is stopped. Our production server does run an older version of CentOS and SVN, but the same system worked previously, so I don't think that this is the issue.

    Of note: if I have iptables enabled, using service iptables status, I can see a single packet coming in and being accepted, but the production server simply hangs on any svn command. If I give up waiting and do a CTRL-C to break the process, I get a "could not connect to server". To me it appears to be something to do with the SVN server rejecting external connections, but I have no idea how this would happen. Any thoughts on what I can try from here?

    Thanks, Rob

    Edit: Network topology. The production server sits externally to our in-house SVN server. Our IPCop (?) firewall allows connections from it (and it alone) on port 80 and passes the connection to the SVN server. The hardware is all pretty decent and I don't doubt that it's doing its job correctly, especially as iptables is seeing the new connections.

    subversion.conf (in /etc/httpd/conf.d):

        LoadModule dav_svn_module modules/mod_dav_svn.so
        <Location /repos>
            DAV svn
            SVNPath /var/svn/repos
            <LimitExcept PROPFIND OPTIONS REPORT>
                AuthType Basic
                AuthName "SVN Server"
                AuthUserFile /var/svn/svn-auth
                Require valid-user
            </LimitExcept>
        </Location>

    ifconfig:

        eth0  Link encap:Ethernet HWaddr 00:0C:29:5F:C8:3A
              inet addr:172.16.0.14 Bcast:172.16.0.255 Mask:255.255.255.0
              inet6 addr: fe80::20c:29ff:fe5f:c83a/64 Scope:Link
              UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
              RX packets:32317 errors:0 dropped:0 overruns:0 frame:0
              TX packets:632 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:1000
              RX bytes:2544036 (2.4 MiB) TX bytes:143207 (139.8 KiB)

    netstat -lntp:

        Active Internet connections (only servers)
        Proto Recv-Q Send-Q Local Address  Foreign Address  State   PID/Program name
        tcp   0      0      0.0.0.0:3306   0.0.0.0:*        LISTEN  1484/mysqld
        tcp   0      0      0.0.0.0:111    0.0.0.0:*        LISTEN  1135/rpcbind
        tcp   0      0      0.0.0.0:22     0.0.0.0:*        LISTEN  1351/sshd
        tcp   0      0      127.0.0.1:631  0.0.0.0:*        LISTEN  1230/cupsd
        tcp   0      0      127.0.0.1:25   0.0.0.0:*        LISTEN  1575/master
        tcp   0      0      0.0.0.0:58401  0.0.0.0:*        LISTEN  1153/rpc.statd
        tcp   0      0      0.0.0.0:5672   0.0.0.0:*        LISTEN  1626/qpidd
        tcp   0      0      :::139         :::*             LISTEN  1678/smbd
        tcp   0      0      :::111         :::*             LISTEN  1135/rpcbind
        tcp   0      0      :::80          :::*             LISTEN  1615/httpd
        tcp   0      0      :::22          :::*             LISTEN  1351/sshd
        tcp   0      0      ::1:631        :::*             LISTEN  1230/cupsd
        tcp   0      0      ::1:25         :::*             LISTEN  1575/master
        tcp   0      0      :::445         :::*             LISTEN  1678/smbd
        tcp   0      0      :::56799       :::*             LISTEN  1153/rpc.statd

    iptables --list -v -n (when iptables is stopped):

        Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
         pkts bytes target prot opt in out source destination
        Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
         pkts bytes target prot opt in out source destination
        Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
         pkts bytes target prot opt in out source destination

    iptables --list -v -n (when iptables is running, after one attempted svn connection):

        Chain INPUT (policy ACCEPT 68 packets, 6561 bytes)
         pkts bytes target prot opt in  out source    destination
           19  1304 ACCEPT all  --  *   *   0.0.0.0/0 0.0.0.0/0  state RELATED,ESTABLISHED
            0     0 ACCEPT icmp --  *   *   0.0.0.0/0 0.0.0.0/0
            0     0 ACCEPT all  --  lo  *   0.0.0.0/0 0.0.0.0/0
            0     0 ACCEPT tcp  --  *   *   0.0.0.0/0 0.0.0.0/0  state NEW tcp dpt:22
            1    60 ACCEPT tcp  --  *   *   0.0.0.0/0 0.0.0.0/0  state NEW tcp dpt:80
            0     0 ACCEPT tcp  --  *   *   0.0.0.0/0 0.0.0.0/0  state NEW tcp dpt:80
            0     0 ACCEPT udp  --  *   *   0.0.0.0/0 0.0.0.0/0  state NEW udp dpt:80
        Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
         pkts bytes target prot opt in out source destination
        Chain OUTPUT (policy ACCEPT 17 packets, 1612 bytes)
         pkts bytes target prot opt in out source destination

    tcpdump:

        17:08:18.455114 IP 'production server'.43255 > 'svn server'.local.http: Flags [S], seq 3200354543, win 5840, options [mss 1380,sackOK,TS val 2011458346 ecr 0,nop,wscale 7], length 0
        17:08:18.455169 IP 'svn server'.local.http > 'production server'.43255: Flags [S.], seq 629885453, ack 3200354544, win 14480, options [mss 1460,sackOK,TS val 816478 ecr 2011449346,nop,wscale 7], length 0
        17:08:19.655317 IP 'svn server'.local.http > 'production server'.43255: Flags [S.], seq 629885453, ack 3200354544, win 14480, options [mss 1460,sackOK,TS val 817679 ecr 2011449346,nop,wscale 7], length 0
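    The capture above shows the SYN-ACK leaving the SVN server and then being retransmitted without ever being acknowledged, which points at the return path rather than at Apache or SVN itself. A few hedged checks to run on the SVN server; after a physical-to-VM migration, the default gateway and reverse-path filtering are the usual suspects (the client address below is a placeholder):

        # Watch the handshake from the server's side while retrying the checkout
        tcpdump -ni eth0 'tcp port 80 and host 203.0.113.10'

        # rp_filter silently drops traffic when routing is asymmetric
        sysctl net.ipv4.conf.all.rp_filter net.ipv4.conf.eth0.rp_filter

        # Confirm which route (and gateway) the reply to the client would take
        ip route get 203.0.113.10
        ip route show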

    Read the article

  • Improving TCP performance over a gigabit network with lots of connections and high traffic, for storage and streaming services

    - by Linux Guy
    I have two servers. Both servers' hardware specifications are:

    - Processor: dual processor
    - RAM: over 128 GB
    - Hard disk: SSD
    - Outgoing traffic bandwidth: 3 Gbps
    - Network card speed: 10 Gbps

    Server A is for encoding videos. Server B is for storing videos and streaming them over a web interface, YouTube-style. The bandwidth between the two servers is 10 Gbps; the outbound internet bandwidth is 500 Mbps. Both servers use public IP addresses on the public and private networks, and both transfer and connect on the nginx port.

    Both servers are in the same network, but when I ping from server A to server B I get high latency, above 1.0 ms, in the range time=52.7 ms to time=215.7 ms.

    This is the output of the iftop utility:

        353Mb   707Mb   1.04Gb   1.38Gb   1.73Gb
        server.example.com => ip.address  6.36Mb  4.31Mb  1.66Mb
                           <=             158Kb   94.8Kb  35.1Kb
        server.example.com => ip.address  1.23Mb  4.28Mb  1.12Mb
                           <=             17.1Kb  83.5Kb  21.9Kb
        server.example.com => ip.address  395Kb   3.89Mb  1.07Mb
                           <=             6.09Kb  109Kb   28.6Kb
        server.example.com => ip.address  4.55Mb  3.83Mb  1.04Mb
                           <=             55.6Kb  45.4Kb  13.0Kb
        server.example.com => ip.address  649Kb   3.38Mb  1.47Mb
                           <=             9.00Kb  38.7Kb  16.7Kb
        server.example.com => ip.address  5.00Mb  3.32Mb  1.80Mb
                           <=             65.7Kb  55.1Kb  29.4Kb
        server.example.com => ip.address  387Kb   3.13Mb  1.06Mb
                           <=             18.4Kb  39.9Kb  15.0Kb
        server.example.com => ip.address  3.27Mb  3.11Mb  1.01Mb
                           <=             81.2Kb  64.5Kb  20.9Kb
        server.example.com => ip.address  1.75Mb  3.08Mb  2.72Mb
                           <=             16.6Kb  35.6Kb  32.5Kb
        server.example.com => ip.address  1.75Mb  2.90Mb  2.79Mb
                           <=             22.4Kb  32.6Kb  35.6Kb
        server.example.com => ip.address  3.03Mb  2.78Mb  1.82Mb
                           <=             26.6Kb  27.4Kb  20.2Kb
        server.example.com => ip.address  2.26Mb  2.66Mb  1.36Mb
                           <=             51.7Kb  49.1Kb  24.4Kb
        server.example.com => ip.address  586Kb   2.50Mb  1.03Mb
                           <=             4.17Kb  26.1Kb  10.7Kb
        server.example.com => ip.address  2.42Mb  2.49Mb  2.44Mb
                           <=             31.6Kb  29.7Kb  29.9Kb
        server.example.com => ip.address  2.41Mb  2.46Mb  2.41Mb
                           <=             26.4Kb  24.5Kb  23.8Kb
        server.example.com => ip.address  2.37Mb  2.39Mb  2.40Mb
                           <=             28.9Kb  27.0Kb  28.5Kb
        server.example.com => ip.address  525Kb   2.20Mb  1.05Mb
                           <=             7.03Kb  26.0Kb  12.8Kb
        TX:    cum: 102GB  peak: 1.65Gb  rates: 1.46Gb 1.44Gb 1.48Gb
        RX:         1.31GB       24.3Mb         19.5Mb 18.9Mb 20.0Mb
        TOTAL:      103GB        1.67Gb         1.48Gb 1.46Gb 1.50Gb

    I checked the transfer speed using the iperf utility, from server A to server B:

        # iperf -c 0.0.0.2 -p 8777
        ------------------------------------------------------------
        Client connecting to 0.0.0.2, TCP port 8777
        TCP window size: 85.3 KByte (default)
        ------------------------------------------------------------
        [  3] local 0.0.0.1 port 38895 connected with 0.0.0.2 port 8777
        [ ID] Interval       Transfer     Bandwidth
        [  3]  0.0-10.8 sec  528 KBytes   399 Kbits/sec

    My current connections on server B:

        # netstat -an|grep ":8777"|awk '/tcp/ {print $6}'|sort -nr| uniq -c
        2072 TIME_WAIT
          28 SYN_RECV
           1 LISTEN
         189 LAST_ACK
         139 FIN_WAIT2
         373 FIN_WAIT1
        3381 ESTABLISHED
          34 CLOSING

    Server A network card information:

        Settings for eth0:
            Supported ports: [ TP ]
            Supported link modes: 100baseT/Full 1000baseT/Full 10000baseT/Full
            Supported pause frame use: No
            Supports auto-negotiation: Yes
            Advertised link modes: 10000baseT/Full
            Advertised pause frame use: No
            Advertised auto-negotiation: Yes
            Speed: 10000Mb/s
            Duplex: Full
            Port: Twisted Pair
            PHYAD: 0
            Transceiver: external
            Auto-negotiation: on
            MDI-X: Unknown
            Supports Wake-on: d
            Wake-on: d
            Current message level: 0x00000007 (7) drv probe link
            Link detected: yes

    Server B network card information:

        Settings for eth2:
            Supported ports: [ FIBRE ]
            Supported link modes: 10000baseT/Full
            Supported pause frame use: No
            Supports auto-negotiation: No
            Advertised link modes: 10000baseT/Full
            Advertised pause frame use: No
            Advertised auto-negotiation: No
            Speed: 10000Mb/s
            Duplex: Full
            Port: Direct Attach Copper
            PHYAD: 0
            Transceiver: external
            Auto-negotiation: off
            Supports Wake-on: d
            Wake-on: d
            Current message level: 0x00000007 (7) drv probe link
            Link detected: yes

    ifconfig on server A:

        eth0  Link encap:Ethernet HWaddr 00:25:90:ED:9E:AA
              inet addr:0.0.0.1 Bcast:0.0.0.255 Mask:255.255.255.0
              UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
              RX packets:1202795665 errors:0 dropped:64334 overruns:0 frame:0
              TX packets:2313161968 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:1000
              RX bytes:893413096188 (832.0 GiB) TX bytes:3360949570454 (3.0 TiB)

        lo    Link encap:Local Loopback
              inet addr:127.0.0.1 Mask:255.0.0.0
              inet6 addr: ::1/128 Scope:Host
              UP LOOPBACK RUNNING MTU:65536 Metric:1
              RX packets:2207544 errors:0 dropped:0 overruns:0 frame:0
              TX packets:2207544 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:0
              RX bytes:247769175 (236.2 MiB) TX bytes:247769175 (236.2 MiB)

    ifconfig on server B:

        eth2  Link encap:Ethernet HWaddr 00:25:90:82:C4:FE
              inet addr:0.0.0.2 Bcast:0.0.0.2 Mask:255.255.255.0
              UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
              RX packets:39973046980 errors:0 dropped:1828387600 overruns:0 frame:0
              TX packets:69618752480 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:1000
              RX bytes:3013976063688 (2.7 TiB) TX bytes:102250230803933 (92.9 TiB)

        lo    Link encap:Local Loopback
              inet addr:127.0.0.1 Mask:255.0.0.0
              inet6 addr: ::1/128 Scope:Host
              UP LOOPBACK RUNNING MTU:65536 Metric:1
              RX packets:1049495 errors:0 dropped:0 overruns:0 frame:0
              TX packets:1049495 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:0
              RX bytes:129012422 (123.0 MiB) TX bytes:129012422 (123.0 MiB)

    netstat -i on server B:

        # netstat -i
        Kernel Interface table
        Iface MTU   Met RX-OK        RX-ERR RX-DRP      RX-OVR TX-OK        TX-ERR TX-DRP TX-OVR Flg
        eth2  9000  0   42098629968  0      2131223717  0      73698797854  0      0      0      BMRU
        lo    65536 0   1077908      0      0           0      1077908      0      0      0      LRU

    I turned up the send/receive buffers on the network card to 2048, and the problem still persists. I increased the MTU on server A, and the problem still persists. I also increased the MTU on server B for better connectivity and transfer speed, but then it couldn't transfer at all.

    The problem is, as you can see from the iperf output, that the transfer speed from server A to server B is slow. When I restart the network service on server B, transfers from server A run at full speed; after 2 minutes, they get slow again. How can I troubleshoot this slow speed issue and fix it on server B?

    Notice: if there are any other commands I should execute on the servers for more information that might help resolve the problem, let me know in the comments.
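    A hedged starting point: eth2 on server B reports about 1.8 billion RX drops, and with 50-200 ms of latency the default 85.3 KByte TCP window shown by iperf cannot come close to filling a 10 Gbps path. A sketch of the usual first knobs (the values are illustrative, not tuned for this hardware):

        #!/bin/sh
        # Raise socket buffer ceilings so TCP can open a window that covers the
        # bandwidth-delay product (10 Gbps at 100 ms is on the order of 125 MB).
        sysctl -w net.core.rmem_max=134217728
        sysctl -w net.core.wmem_max=134217728
        sysctl -w net.ipv4.tcp_rmem="4096 87380 67108864"
        sysctl -w net.ipv4.tcp_wmem="4096 65536 67108864"

        # Check, then grow, the NIC ring buffers on the interface that is dropping
        ethtool -g eth2
        ethtool -G eth2 rx 4096

        # Re-test with an explicit window and parallel streams
        iperf -c 0.0.0.2 -p 8777 -w 4M -P 4 -t 30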

    Read the article

  • Built and migrated to software RAID (mdadm) on a GPT disk, now can't assemble the array

    - by John H
    mdadm, GPT issues, unrecognized partitions. Simplified question: how do I get mdadm to recognize GPT partitions?

    I have been attempting to convert/copy my Ubuntu 11.10 OS from a single drive to software RAID 1. I have done similar in the past, but in this case I was adding in a drive that had been configured for GPT, and I tried to work with that without fully looking into the implications.

    Currently, I have a non-booting mdadm RAID 1 array at /dev/md127 (the OS assigned that name and it keeps picking it up). I am booting off of live USB keys, currently System Rescue CD from sysresccd. While gdisk and parted can see all the partitions, most of the OS utilities do not, including mdadm. My main goal is just to make the RAID array accessible so I can pull the data off and start fresh (without using GPT).

        /dev/md127
        /dev/sda
            /dev/sda1 <- GPT type partition
                /dev/sda1 <- exists within the GPT part, member of md127
                /dev/sda2 <- exists within the GPT part, empty
        /dev/sdb
            /dev/sdb1 <- GPT type partition
                /dev/sdb1 <- exists within the GPT part, member of md127

    History:

    POINT A: The original OS was installed on sda (actually /dev/sda6). I used the Ubuntu live USB to add sdb. I got a warning from fdisk about GPT, so I used gdisk to create a raid partition (sdb1) and mdadm to create a RAID 1 mirror with a missing drive. I had many issues getting this working (including being unable to get grub to install), but I eventually got it to boot using grub on sda and /dev/md127 off of sdb. So at point A, I had copied my OS from sda6 to md127 on sdb. I then booted into a rescue mode and attempted to get a bootloader onto sdb, which failed. I then discovered my mistake: I had installed the raid onto sdb instead of sdb1, essentially overwriting the sdb1 partition.

    POINT B: I now had two copies of my data: one on md127/sdb, and one on sda. I destroyed the data on sda and created a new GPT table on sda. I then created sda1 for the raid array and sda2 for a scratch partition. I added sda1 into the raid array and let it rebuild. md127 now covered /dev/sdb and /dev/sda1, fully active and synced.

    POINT C: I rebooted onto linux rescue again and was still able to access the raid array. I then removed /dev/sdb from the array and created /dev/sdb1 for the raid. I added sdb1 to the array and let it sync. I was able to mount and access /dev/md127 without issues. Once it completed, both /dev/sda1 and /dev/sdb1 were GPT partitions and actively syncing.

    POINT D (current): I rebooted again to test whether the array would boot, and grub failed to load. I booted off of my live thumb drive and found that I can no longer assemble the raid array. mdadm doesn't see the required partitions.

        root@freshdesk /root % uname -a
        Linux freshdesk 3.0.24-std251-amd64 #2 SMP Sat Mar 17 12:08:55 UTC 2012 x86_64 AMD Athlon(tm) II X4 645 Processor AuthenticAMD GNU/Linux

    /proc/partitions and parted look good:

        root@freshdesk /root % cat /proc/partitions
        major minor  #blocks    name
           7     0      301788  loop0
           8     0   976762584  sda
           8     1   732579840  sda1
           8     2   244181703  sda2
           8    16   732574584  sdb
           8    17   732573543  sdb1
           8    32     7876607  sdc
           8    33     7873349  sdc1

        (parted) print all
        Model: ATA ST31000528AS (scsi)
        Disk /dev/sda: 1000GB
        Sector size (logical/physical): 512B/512B
        Partition Table: gpt
        Number  Start   End     Size   File system  Name                Flags
         1      1049kB  750GB   750GB  ext4
         2      750GB   1000GB  250GB               Linux/Windows data

        Model: ATA SAMSUNG HD753LJ (scsi)
        Disk /dev/sdb: 750GB
        Sector size (logical/physical): 512B/512B
        Partition Table: gpt
        Number  Start   End    Size   File system  Name        Flags
         1      1049kB  750GB  750GB  ext4         Linux RAID  raid

        Model: SanDisk SanDisk Cruzer (scsi)
        Disk /dev/sdc: 8066MB
        Sector size (logical/physical): 512B/512B
        Partition Table: msdos
        Number  Start   End     Size    Type     File system  Flags
         1      31.7kB  8062MB  8062MB  primary  fat32        boot, lba

    No sda2 below, and I double-checked that sdb1 is the one shown in parted:

        root@freshdesk /root % blkid
        /dev/loop0: TYPE="squashfs"
        /dev/sda1: UUID="75dd6c2d-f0a8-4302-9da4-792cc7d72355" TYPE="ext4"
        /dev/sdc1: LABEL="PENDRIVE" UUID="1102-3720" TYPE="vfat"
        /dev/sdb1: UUID="2dd89f15-65bb-ff88-e368-bf24bd0fce41" TYPE="linux_raid_member"

        root@freshdesk /root % mdadm -E /dev/sda1
        mdadm: No md superblock detected on /dev/sda1.

    (This is probably a result of me attempting to force the array up, putting superblocks on the GPT partition.)

        root@freshdesk /root % mdadm -E /dev/sdb1
        /dev/sdb1:
                  Magic : a92b4efc
                Version : 0.90.00
                   UUID : 2dd89f15:65bbff88:e368bf24:bd0fce41
          Creation Time : Fri Mar 30 19:25:30 2012
             Raid Level : raid1
          Used Dev Size : 732568320 (698.63 GiB 750.15 GB)
             Array Size : 732568320 (698.63 GiB 750.15 GB)
           Raid Devices : 2
          Total Devices : 2
        Preferred Minor : 127
            Update Time : Sat Mar 31 12:39:38 2012
                  State : clean
         Active Devices : 1
        Working Devices : 2
         Failed Devices : 1
          Spare Devices : 1
               Checksum : a7d038b3 - correct
                 Events : 20195

              Number   Major   Minor   RaidDevice State
        this     2       8       17        2      spare   /dev/sdb1
           0     0       8        1        0      active sync   /dev/sda1
           1     1       0        0        1      faulty removed
           2     2       8       17        2      spare   /dev/sdb1

        root@freshdesk /root % mdadm -A /dev/md127 /dev/sda1 /dev/sdb1
        mdadm: no recogniseable superblock on /dev/sda1
        mdadm: /dev/sda1 has no superblock - assembly aborted

        root@freshdesk /root % mdadm -A /dev/md127 /dev/sdb1
        mdadm: cannot open device /dev/sdb1: Device or resource busy
        mdadm: /dev/sdb1 has no superblock - assembly aborted
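    Not a sure answer, but "Device or resource busy" on sdb1 usually means a stale, partially assembled array is still holding the device, and the superblock dump above records sdb1 as a spare, so even a degraded assembly may refuse to start. A hedged sketch of the usual first moves, keeping everything read-only where possible:

        # Release the device from whatever half-assembled array grabbed it at boot
        mdadm --stop /dev/md127

        # List every md superblock mdadm can actually see, verbosely
        mdadm --examine --scan -v

        # Attempt a degraded assembly from the surviving member, then mount read-only
        mdadm --assemble --run /dev/md127 /dev/sdb1
        mount -o ro /dev/md127 /mnt

    If assembly still refuses because the member is recorded as a spare, the remaining route is recreating the array over the same partition with identical parameters and --assume-clean; that is a risky last resort, best attempted only against an image of the disk.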

    Read the article

  • CSC folder data access AND roaming profile issues (Vista with Server 2003, then 2008)

    - by Alex Jones
    I'm a junior sysadmin for an IT contractor that helps small, local government agencies, like little towns and the like. One of our clients, a public library with ~50 staff users, was recently migrated from Server 2003 Standard to Server 2008 R2 Standard in a very short timeframe; our senior employee, the only network engineer, had suddenly put in his two weeks' notice, so management pushed him to do this project before quitting. A bit hasty on management's part? Perhaps. Could we do anything about that? Nope. Do I have to fix this all by myself? Pretty much.

    The network is set up like this:

    a) 50ish staff workstations, all running Vista Business SP2. All staff use MS Outlook, which uses RPC-over-HTTPS ("Outlook Anywhere") for cached Exchange access to an offsite location.
    b) One new (virtualized) Server 2008 R2 Standard instance, running atop a Server 2008 R2 host via Hyper-V. The VM is the domain's DC, and also the site's one and only file server. Let's call that VM "NEWBOX".
    c) One old physical Server 2003 Standard server, running the same roles. Let's call it "OLDBOX". It's still on the network and accessible, but it's been demoted and its shares have been disabled. No data has been deleted.
    d) Gigabit Ethernet everywhere.

    The organization has only one domain, and it did not change during the migration. Most users were set up for a combo of redirected folders + offline files, but some older employees who had been with the organization a long time are still on roaming profiles. To sum up: the servers in question handle user accounts and files, nothing else (e.g., no TS, no mail, no IIS, etc.).

    I have two major problems I'm hoping you can help me with:

    1) Even though all domain users have had their redirected folders moved to the new server, and logging in to their workstations and testing confirms that the Documents/Music/Whatever folders point to the new paths, it appears some users (not laptops or anything either!) had been working offline from OLDBOX for a long time, and nobody realized it. Here's the ugly implication: a bunch of their data now lives only in their CSC folders, because they can't access the share on OLDBOX and finally sync with it. How do I get this data out of those CSC folders and onto NEWBOX?

    2) What's the best way to migrate roaming profile users to non-roaming ones, without losing vital data like documents, any lingering PSTs, etc.?

    Things I've thought about trying:

    For problem 1:

    a) Re-enable the documents share on OLDBOX, force an Offline Files sync for ALL domain users, then copy OLDBOX's share's data to the equivalent share on NEWBOX. Reinitialize the Offline Files cache for every user. With this: How do I safely force a domain-wide Offline Files sync? Could I lose data by re-enabling the share on OLDBOX and forcing the sync? Afterwards, how can I reinitialize the Offline Files cache for every user, without doing it manually, workstation by workstation?

    b) Determine which users have unsynced changes to OLDBOX (again, how?), search each user's CSC folder domain-wide via workstation admin shares, and grab the unsynced data. Reinitialize the Offline Files cache for every user. With this: How can I detect which users have unsynced changes with a script? How can I search each user's CSC folder, when the ownership and permissions set for CSC folders are so restrictive? Again, afterwards, how can I reinitialize the Offline Files cache for every user, without doing it manually, workstation by workstation?

    c) Manually visit each workstation, copy the contents of the CSC folder, and manually copy that data onto NEWBOX. Reinitialize the Offline Files cache for every user. With this: Again, how do I 'break into' the CSC folder and get to its data? As an experiment, I took one workstation's HD offsite, imaged it for safety, and then tried the following with one of our shop PCs after attaching the drive: grant myself full control of the folder (failed), grant myself ownership of the folder (failed), run chkdsk on the whole drive to make sure nothing's messed up (all OK), try to take full control of the entire drive (failed), try to take ownership of the entire drive (failed). MS KB articles and Googling around suggest there's a utility called CSCCMD that's meant for this exact scenario... but it looks like it's available for XP, not Vista, no? Again, afterwards, how can I reinitialize the Offline Files cache for every user, without doing it manually, workstation by workstation?

    For problem 2:

    a) Figure out which users are on roaming profiles, and where their profiles 'live' on the server. Create new folders for them in the redirected folders repository, migrate existing data, and disable the roaming. With this: Finding out who's roaming isn't hard. But what's the best way to disable the roaming itself: in AD Users and Computers, or on each user's workstation? Doing it centrally on the server seems more efficient; that said, all of the KB research I've done turns up articles on how to go from local to roaming, not the other way around, so I don't have good documentation on this.

    In closing: we have good backups of NEWBOX and OLDBOX, but not of the workstations themselves, so anything drastic on the client side would need imaging and testing for safety. Thanks for reading along this far! Hopefully you can help me dig us out of this mess.

    Read the article

  • Connecting via ShrewSoft VPN client means no LAN internet access (Windows 7 64 bit) - any advice please?

    - by iwishiknewmoreaboutnetworking
    I have a Windows 7 64-bit desktop machine which is connected to a LAN. I recently installed ShrewSoft VPN client v2.1.7 on my machine so that I can connect to a license server hosted by my customer. They are running a Cisco VPN server, and I originally tried (unsuccessfully!) to use the Cisco VPN client for Windows 64-bit, but the default gateway wasn't being configured correctly after loading in my pcf file. Using ShrewSoft I am able to import the same pcf file and successfully connect to the machine I need to using the VPN client software. The client machine I need to connect to has IP address 1.52.90.33.

    The problem is that when I am connected to the customer network using the VPN client application (and after a few minutes), I lose my LAN internet connection. I can only presume that this is because, by default, the ShrewSoft VPN client application automatically tunnels all traffic through the VPN connection. I know there is an option to switch off the "Tunnel All" option on the Policy tab of the application and enter a remote network resource (to "Include" or "Exclude") as "Address" and "Netmask" IP addresses; however, I am not sure what I need to enter here.

    Here is my ipconfig output before connecting to the VPN (with suffixes blanked out):

        Windows IP Configuration

        Ethernet adapter Local Area Connection:
           Connection-specific DNS Suffix . : ***.***
           Link-local IPv6 Address . . . . . : fe80::8de3:9dbe:393a:33ba%11
           IPv4 Address. . . . . . . . . . . : 150.237.13.17
           Subnet Mask . . . . . . . . . . . : 255.255.255.0
           Default Gateway . . . . . . . . . : 150.237.13.1

        Tunnel adapter 6TO4 Adapter:
           Connection-specific DNS Suffix . : ***.***
           IPv6 Address. . . . . . . . . . . : 2002:96ed:d11::96ed:d11
           Default Gateway . . . . . . . . . : 2002:c058:6301::c058:6301

        Tunnel adapter Local Area Connection* 9:
           Connection-specific DNS Suffix . :
           IPv6 Address. . . . . . . . . . . : 2001:0:4137:9e76:2cf9:38c4:6912:f2ee
           Link-local IPv6 Address . . . . . : fe80::2cf9:38c4:6912:f2ee%12
           Default Gateway . . . . . . . . . :

        Tunnel adapter isatap.***.***:
           Media State . . . . . . . . . . . : Media disconnected
           Connection-specific DNS Suffix . : ***.***

    Here is my route print output before connecting to the VPN:

        ===========================================================================
        Interface List
         11...20 cf 30 9d ec 2a ......Realtek RTL8168D/8111D Family PCI-E Gigabit Ethernet NIC (NDIS 6.20)
          1...........................Software Loopback Interface 1
         14...00 00 00 00 00 00 00 e0 Microsoft 6to4 Adapter
         12...00 00 00 00 00 00 00 e0 Teredo Tunneling Pseudo-Interface
         13...00 00 00 00 00 00 00 e0 Microsoft ISATAP Adapter #2
        ===========================================================================

        IPv4 Route Table
        ===========================================================================
        Active Routes:
        Network Destination        Netmask          Gateway       Interface  Metric
                  0.0.0.0          0.0.0.0     150.237.13.1   150.237.13.17       2
                127.0.0.0        255.0.0.0          On-link        127.0.0.1     306
                127.0.0.1  255.255.255.255          On-link        127.0.0.1     306
          127.255.255.255  255.255.255.255          On-link        127.0.0.1     306
             150.237.13.0    255.255.255.0          On-link    150.237.13.17     257
            150.237.13.17  255.255.255.255          On-link    150.237.13.17     257
           150.237.13.255  255.255.255.255          On-link    150.237.13.17     257
                224.0.0.0        240.0.0.0          On-link        127.0.0.1     306
                224.0.0.0        240.0.0.0          On-link    150.237.13.17     257
          255.255.255.255  255.255.255.255          On-link  12

    Read the article

  • Sporadic unspecific kernel panic

    - by koma
    I'm experiencing occasional (so far about once a month) hard crashes on our Ubuntu Server 10.04 LTS box. The box itself is quite old (Dell PowerEdge 750 from 2004, Pentium 4 2.8 GHz). I set up netconsole after it crashed twice last Thursday and was able to extract the following output:

        [ 9354.062473] invalid opcode: 0000 [#1] SMP
        [ 9354.062516] last sysfs file: /sys/devices/pci0000:00/0000:00:1d.0/usb2/2-2/2-2:1.0/uevent
        [ 9354.062555] Modules linked in: ppdev adm1026 hwmon_vid i2c_i801 bridge stp dcdbas psmouse serio_raw netconsole configfs shpchp lp parport usbhid hid e1000
        [ 9354.062685]
        [ 9354.062704] Pid: 3988, comm: rsync Not tainted 2.6.38-12-generic-pae #51~lucid1-Ubuntu Dell Computer Corporation PowerEdge 750 /0R1479
        [ 9354.062773] EIP: 0060:[<c104fef1>] EFLAGS: 00010046 CPU: 1
        [ 9354.062802] EIP is at check_preempt_wakeup+0x181/0x250
        [ 9354.062826] EAX: 00000002 EBX: f2a10ccc ECX: 00000000 EDX: 00000002
        [ 9354.062850] ESI: f1db71cc EDI: f1db71a0 EBP: f1dbdea8 ESP: f1dbde8c
        [ 9354.062875] DS: 007b ES: 007b FS: 00d8 GS: 00e0 SS: 0068
        [ 9354.062900] Process rsync (pid: 3988, ti=f1dbc000 task=f1db71a0 task.ti=f1dbc000)
        [ 9354.062933] Stack:
        [ 9354.062951] 0053ea60 f7907680 f28da840 f2a10ca0 c153ea60 f7907680 c153ea60 f1dbdebc
        [ 9354.063019] c103f98a f2a10ca0 f7907680 00000001 f1dbdef8 c104f97f 00000000 f2f0bacc
        [ 9354.063088] f7904338 00000001 00000003 00000000 f2f0bacc 00000001 00000001 00000086
        [ 9354.063157] Call Trace:
        [ 9354.063183] [<c103f98a>] check_preempt_curr+0x6a/0x80
        [ 9354.063210] [<c104f97f>] try_to_wake_up+0x5f/0x3f0
        [ 9354.063236] [<c1077a00>] ? hrtimer_wakeup+0x0/0x30
        [ 9354.063261] [<c104fd64>] wake_up_process+0x14/0x20
        [ 9354.063286] [<c1077a1d>] hrtimer_wakeup+0x1d/0x30
        [ 9354.063310] [<c1077f4a>] __run_hrtimer+0x7a/0x1c0
        [ 9354.063336] [<c107dbad>] ? ktime_get+0x6d/0x110
        [ 9354.063360] [<c1078310>] hrtimer_interrupt+0x120/0x2b0
        [ 9354.063390] [<c1535c36>] smp_apic_timer_interrupt+0x56/0x8a
        [ 9354.063418] [<c152f459>] apic_timer_interrupt+0x31/0x38
        [ 9354.063446] [<c1520000>] ? mca_attach_bus+0x5/0xc0
        [ 9354.063469] Code: 8b 9b 20 01 00 00 8b 86 24 01 00 00 3b 83 24 01 00 00 75 e6 85 db 0f 84 a3 00 00 00 89 da 89 f0 e8 75 f6 fe ff 83 f8 01 0f 85 00 <fe> ff ff 89 f8 e8 95 f9 fe ff 8b 5e 1c 85 db 0f 84 e4 fe ff ff
        [ 9354.063804] EIP: [<c104fef1>] check_preempt_wakeup+0x181/0x250 SS:ESP 0068:f1dbde8c
        [ 9354.064231] ---[ end trace 290689cea65aea7f ]---
        [ 9354.064290] Kernel panic - not syncing: Fatal exception in interrupt
        [ 9354.064352] Pid: 3988, comm: rsync Tainted: G D 2.6.38-12-generic-pae #51~lucid1-Ubuntu
        [ 9354.064424] Call Trace:
        [ 9354.064481] [<c152c057>] ? panic+0x5c/0x15b
        [ 9354.064539] [<c15302bd>] ? oops_end+0xcd/0xd0
        [ 9354.064539] [<c100d9e4>] ? die+0x54/0x80
        [ 9354.064539] [<c152f926>] ? do_trap+0x96/0xc0
        [ 9354.064539] [<c100ba00>] ? do_invalid_op+0x0/0xa0
        [ 9354.064539] [<c100ba8b>] ? do_invalid_op+0x8b/0xa0
        [ 9354.064539] [<c104fef1>] ? check_preempt_wakeup+0x181/0x250
        [ 9354.064539] [<c144884d>] ? __kfree_skb+0x3d/0x90
        [ 9354.064539] [<c1042ae7>] ? update_curr+0x247/0x2a0
        [ 9354.064539] [<c10447bb>] ? update_cfs_load+0x11b/0x2d0
        [ 9354.064539] [<c1042a25>] ? update_curr+0x185/0x2a0
        [ 9354.064539] [<c152f6bf>] ? error_code+0x67/0x6c
        [ 9354.064539] [<c104fef1>] ? check_preempt_wakeup+0x181/0x250
        [ 9354.064539] [<c103f98a>] ? check_preempt_curr+0x6a/0x80
        [ 9354.064539] [<c104f97f>] ? try_to_wake_up+0x5f/0x3f0
        [ 9354.064539] [<c1077a00>] ? hrtimer_wakeup+0x0/0x30
        [ 9354.064539] [<c104fd64>] ? wake_up_process+0x14/0x20
        [ 9354.064539] [<c1077a1d>] ? hrtimer_wakeup+0x1d/0x30
        [ 9354.064539] [<c1077f4a>] ? __run_hrtimer+0x7a/0x1c0
        [ 9354.064539] [<c107dbad>] ? ktime_get+0x6d/0x110
        [ 9354.064539] [<c1078310>] ? hrtimer_interrupt+0x120/0x2b0
        [ 9354.064539] [<c1535c36>] ? smp_apic_timer_interrupt+0x56/0x8a
        [ 9354.064539] [<c152f459>] ? apic_timer_interrupt+0x31/0x38
        [ 9354.064539] [<c1520000>] ? mca_attach_bus+0x5/0xc0

    Googling for this issue didn't really turn up anything useful (most stuff I found was related to btrfs, but I don't use that, although the module exists and is sometimes loaded). From experience it might have to do with relatively heavy I/O, as two of the panics happened during a backup procedure. The kernel is 2.6.38-12-generic-pae, but I'm pretty sure I also saw panics on 2.6.32. I have meanwhile upgraded to 3.0.0-17-generic-pae and am waiting for the next crash ;-) I'm at a loss here, so any pointers as to where to look for the cause, or what it could be, would be great :-) Thanks!
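    For anyone reproducing the netconsole capture used above, a minimal configuration looks roughly like this (the addresses, interface, and MAC are placeholders):

        # On the crashing box: stream kernel messages over UDP to a log host.
        # Parameter format: netconsole=src-port@src-ip/dev,dst-port@dst-ip/dst-mac
        modprobe netconsole netconsole=6665@192.168.1.10/eth0,6666@192.168.1.20/00:11:22:33:44:55

        # On the log host: capture whatever arrives, including panic traces
        # (traditional netcat wants "nc -u -l -p 6666" instead)
        nc -u -l 6666 | tee netconsole.log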

    Read the article

  • Linux/Apache performance very slow even on local network

    - by klausch
    I have an Ubuntu server machine running Apache and MYSQL. System and version info is as follows: Linux kernel 3.0.0.-12 Apache/2.2.20 MySQL Ver 14.14.Distrib 5.1.58 I am running a few websites on this server, some HTML only, some PHP/MySQL. THe [problem is that response time is very slow, both on static as well as the dynamic sites. Sometimes it takes more than 10 seconds before a response is given, this makes the sites very slow and almost unusable. The problem occurs even when requesting from the local network. I have added the involved subdomains to my /etc/hosts file, and abolve all the problem is not solved by using IP numbers instead of URL's. So there is no DNS lookup issue. I have modified the log format by showing the response times and sometimes a files takes 12 seconds to be served, see the jquery~.js file in the example screenshot. I have no explanation for this extremely long response time, but is is not even the only issue here, some other files takes a long time to be served too, but do not show a long response time in the log file. So probably different tissues are involved here. I cannot find a solution until now, any suggestions??? THanx in advance, Klaas link to screenshot picture from access logfile Some extra configuration info: apache2.conf (comment is removed) LockFile ${APACHE_LOCK_DIR}/accept.lock PidFile ${APACHE_PID_FILE} Timeout 300 KeepAlive On MaxKeepAliveRequests 100 KeepAliveTimeout 5 <IfModule mpm_prefork_module> StartServers 5 MinSpareServers 5 MaxSpareServers 10 MaxClients 150 MaxRequestsPerChild 0 </IfModule> <IfModule mpm_worker_module> StartServers 2 MinSpareThreads 25 MaxSpareThreads 75 ThreadLimit 64 ThreadsPerChild 25 MaxClients 150 MaxRequestsPerChild 0 </IfModule> <IfModule mpm_event_module> StartServers 2 MinSpareThreads 25 MaxSpareThreads 75 ThreadLimit 64 ThreadsPerChild 25 MaxClients 150 MaxRequestsPerChild 0 </IfModule> User ${APACHE_RUN_USER} Group ${APACHE_RUN_GROUP} AccessFileName .htaccess <Files ~ "^\.ht"> Order allow,deny Deny from all Satisfy all </Files> DefaultType text/plain HostnameLookups Off ErrorLog ${APACHE_LOG_DIR}/error.log LogLevel warn Include mods-enabled/*.load Include mods-enabled/*.conf Include httpd.conf Include ports.conf LogFormat "%v:%p %h %l %u %t \"%r\" %>s %O \"%{Referer}i\" \"%{User-Agent}i\"" vhost_combined LogFormat "%h %l %u %t \"%r\" %>s %O \"%{Referer}i\" \"%{User-Agent}i\" %T/%D" combined LogFormat "%h %l %u %t \"%r\" %>s %O" common LogFormat "%{Referer}i -> %U" referer LogFormat "%{User-agent}i" agent Include conf.d/ Include sites-enabled/ And the virtual hostfile for one of the slow sites, in fact it is pretty straightforward... <VirtualHost *:80> ServerAdmin [email protected] ServerSignature EMail ServerName toenjoy.drsklaus.nl DocumentRoot /var/www/toenjoy.drsklaus.nl <Directory /> Options FollowSymLinks AllowOverride None </Directory> <Directory /var/www/toenjoy.drsklaus.nl/> Options Indexes FollowSymLinks MultiViews AllowOverride AuthConfig AuthType Basic AuthName "To Enjoy" AuthUserFile /etc/.htpasswd Require user petraaa Order allow,deny allow from all </Directory> ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/ <Directory "/usr/lib/cgi-bin"> AllowOverride None Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch Order allow,deny Allow from all </Directory> ErrorLog /var/log/apache2/error.log # Possible values include: debug, info, notice, warn, error, crit, # alert, emerg. 
LogLevel warn CustomLog /var/log/apache2/access.log combined Alias /doc/ "/usr/share/doc/" <Directory "/usr/share/doc/"> Options Indexes MultiViews FollowSymLinks AllowOverride None Order deny,allow Deny from all Allow from 127.0.0.0/255.0.0.0 ::1/128 </Directory> </VirtualHost> And the output of free -m: klaas@ubuntu-server:/etc/apache2$ free -m total used free shared buffers cached Mem: 1997 1401 595 0 144 1017 -/+ buffers/cache: 238 1758 Swap: 2035 0 2035 and I have no indication that swapping occurs at the moments the site is slow. I have run top and it does not appear to be a CPU issue. I have the impression that the spawning of an Apache process or thread could be the bottleneck, but that is just a guess. Maybe this gives some extra information! EDIT: The problem seemed to be gone for some time but has occurred again! And not only with Apache: connecting over SSH also takes a tremendous time, sometimes up to 15 seconds before the passphrase prompt appears. scp also works very slowly. The behaviour is really unpredictable and makes the server very hard to use. Any ideas...?
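    One way to narrow down where those 12 seconds go is to time a request yourself from a LAN client, separating time-to-first-byte from total transfer time: a slow first byte with a fast body points at the server side (a busy Apache child, or a reverse lookup it performs), while a slow body points at the network path. Below is a minimal sketch of such a probe in C#; the URL is just the vhost from the config above, and the approach, not the exact code, is the point:

        // Minimal TTFB probe: separates "time until headers arrive" from
        // "time to stream the body". Run it repeatedly from a LAN client.
        using System;
        using System.Diagnostics;
        using System.IO;
        using System.Net;

        class TtfbProbe
        {
            static void Main()
            {
                var url = "http://toenjoy.drsklaus.nl/";  // vhost from the question
                var sw = Stopwatch.StartNew();
                var request = (HttpWebRequest)WebRequest.Create(url);
                request.KeepAlive = false;  // force a fresh TCP connection per run
                using (var response = (HttpWebResponse)request.GetResponse())
                {
                    long ttfb = sw.ElapsedMilliseconds;  // headers have arrived here
                    using (var reader = new StreamReader(response.GetResponseStream()))
                        reader.ReadToEnd();  // drain the body
                    Console.WriteLine("TTFB: {0} ms, total: {1} ms, status: {2}",
                        ttfb, sw.ElapsedMilliseconds, (int)response.StatusCode);
                }
            }
        }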

    Read the article

  • Why would Linux VM in vSphere ESXi 5.5 show dramatically increased disk i/o latency?

    - by mhucka
    I'm stumped and I hope someone else will recognize the symptoms of this problem. Hardware: new Dell T110 II, dual-core Pentium G860 2.9 GHz, onboard SATA controller, one new 500 GB 7200 RPM cabled hard drive inside the box, other drives inside but not mounted yet. No RAID. Software: fresh CentOS 6.5 virtual machine under VMware ESXi 5.5.0 (build 174 + vSphere Client). 2.5 GB RAM allocated. The disk is how CentOS offered to set it up, namely as a volume inside an LVM Volume Group, except that I skipped having a separate /home and simply have / and /boot. CentOS is patched up, ESXi patched up, latest VMware tools installed in the VM. No users on the system, no services running, no files on the disk but the OS installation. I'm interacting with the VM via the VM virtual console in vSphere Client. Before going further, I wanted to check that I configured things more or less reasonably. I ran the following command as root in a shell on the VM: for i in 1 2 3 4 5 6 7 8 9 10; do dd if=/dev/zero of=/test.img bs=8k count=256k conv=fdatasync; done I.e., just repeat the dd command 10 times, which results in printing the transfer rate each time. The results are disturbing. It starts off well: 262144+0 records in 262144+0 records out 2147483648 bytes (2.1 GB) copied, 20.451 s, 105 MB/s 262144+0 records in 262144+0 records out 2147483648 bytes (2.1 GB) copied, 20.4202 s, 105 MB/s ... but after 7-8 of these, it then prints 262144+0 records in 262144+0 records out 2147483648 bytes (2.1 GB) copied, 82.9779 s, 25.9 MB/s 262144+0 records in 262144+0 records out 2147483648 bytes (2.1 GB) copied, 84.0396 s, 25.6 MB/s 262144+0 records in 262144+0 records out 2147483648 bytes (2.1 GB) copied, 103.42 s, 20.8 MB/s If I wait a significant amount of time, say 30-45 minutes, and run it again, it again goes back to 105 MB/s, and after several rounds (sometimes a few, sometimes 10+), it drops to ~20-25 MB/s again. Plotting the disk latency in vSphere's interface, it shows periods of high disk latency hitting 1.2-1.5 seconds during the times that dd reports the low throughput. (And yes, things get pretty unresponsive while that's happening.) What could be causing this? I'm comfortable that it is not due to the disk failing, because I had also configured two other disks as an additional volume in the same system. At first I thought I did something wrong with that volume, but after commenting the volume out from /etc/fstab and rebooting, and trying the tests on / as shown above, it became clear that the problem is elsewhere. It is probably an ESXi configuration problem, but I'm not very experienced with ESXi. It's probably something stupid, but after trying to figure this out for many hours over multiple days, I can't find the problem, so I hope someone can point me in the right direction. (P.S.: yes, I know this hardware combo won't win any speed awards as a server, and I have reasons for using this low-end hardware and running a single VM, but I think that's beside the point for this question [unless it's actually a hardware problem].) ADDENDUM #1: Reading other answers such as this one made me try adding oflag=direct to dd. However, it makes no difference in the pattern of results: initially the numbers are higher for many rounds, then they drop to 20-25 MB/s. (The initial absolute numbers are in the 50 MB/s range.) ADDENDUM #2: Adding sync ; echo 3 > /proc/sys/vm/drop_caches into the loop does not make a difference at all. 
ADDENDUM #3: To take out further variables, I now run dd such that the file it creates is larger than the amount of RAM on the system. The new command is dd if=/dev/zero of=/test.img bs=16k count=256k conv=fdatasync oflag=direct. Initial throughput numbers with this version of the command are ~50 MB/s. They drop to 20-25 MB/s when things go south. ADDENDUM #4: Here is the output of iostat -d -m -x 1 running in another terminal window while performance is "good" and then again when it's "bad". (While this is going on, I'm running dd if=/dev/zero of=/test.img bs=16k count=256k conv=fdatasync oflag=direct.) First, when things are "good", it shows this: [iostat capture was posted as a screenshot, not reproduced here] When things go "bad", iostat -d -m -x 1 shows this: [iostat capture was posted as a screenshot, not reproduced here]
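    For readers who want to reproduce the dd pattern from managed code (say, from a different guest for comparison), a rough equivalent is sketched below; it assumes .NET 4 or a current Mono for FileStream.Flush(true), which flushes through to the device in the same spirit as conv=fdatasync:

        // Writes 2 GiB of zeroes in 8 KiB blocks, flushes to disk at the end,
        // and prints the throughput; run it in a loop like the dd test above.
        using System;
        using System.Diagnostics;
        using System.IO;

        class WriteProbe
        {
            static void Main()
            {
                const int block = 8 * 1024;
                const long blocks = 256 * 1024;        // 2 GiB total
                var buffer = new byte[block];          // zeroes, like /dev/zero
                var sw = Stopwatch.StartNew();
                using (var fs = new FileStream("/test.img", FileMode.Create,
                                               FileAccess.Write, FileShare.None, block))
                {
                    for (long i = 0; i < blocks; i++)
                        fs.Write(buffer, 0, block);
                    fs.Flush(true);                    // flush through to the device
                }
                double mb = block * (double)blocks / (1024 * 1024);
                Console.WriteLine("{0:F1} MB/s", mb / sw.Elapsed.TotalSeconds);
            }
        }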

    Read the article

  • Why do my ping command (Windows) results alternate between "timeout" and "network is not reachable"?

    - by Sopalajo de Arrierez
    My Windows is in Spanish, so I will have to paste console outputs in that language (I think that translating without knowing the exact terms used in the English versions could give worse results than leaving it as it appears on screen). For reference: "Tiempo de espera agotado para esta solicitud" means "Request timed out", and "Respuesta desde 80.58.67.86: Red de destino inaccesible" means "Reply from 80.58.67.86: Destination net unreachable". This is the issue: when pinging a non-existent IP from a WinXP-SP3 machine (clean Windows install, just formatted), I sometimes get a "Timeout" result, and sometimes a "network is not reachable" message. This is the result of: ping 192.168.210.1 Haciendo ping a 192.168.210.1 con 32 bytes de datos: Tiempo de espera agotado para esta solicitud. Respuesta desde 80.58.67.86: Red de destino inaccesible. Respuesta desde 80.58.67.86: Red de destino inaccesible. Tiempo de espera agotado para esta solicitud. Estadísticas de ping para 192.168.210.1: Paquetes: enviados = 4, recibidos = 2, perdidos = 2 (50% perdidos), Tiempos aproximados de ida y vuelta en milisegundos: Mínimo = 0ms, Máximo = 0ms, Media = 0ms 192.168.210.1 does not exist on the network. DHCP client is enabled, and the computer is assigned its network configuration by the router. My IP: 192.168.11.2 Netmask: 255.255.255.0 Gateway: 192.168.11.1 DNS: 80.58.0.33/194.224.52.36 This is the output of the "route print" command: =========================================================================== Rutas activas: Destino de red Máscara de red Puerta de acceso Interfaz Métrica 0.0.0.0 0.0.0.0 192.168.11.1 192.168.11.2 20 127.0.0.0 255.0.0.0 127.0.0.1 127.0.0.1 1 192.168.11.0 255.255.255.0 192.168.11.2 192.168.11.2 20 192.168.11.2 255.255.255.255 127.0.0.1 127.0.0.1 20 192.168.11.255 255.255.255.255 192.168.11.2 192.168.11.2 20 224.0.0.0 240.0.0.0 192.168.11.2 192.168.11.2 20 255.255.255.255 255.255.255.255 192.168.11.2 192.168.11.2 1 255.255.255.255 255.255.255.255 192.168.11.2 3 1 Puerta de enlace predeterminada: 192.168.11.1 =========================================================================== Rutas persistentes: ninguno The output of: ping 1.1.1.1 Haciendo ping a 1.1.1.1 con 32 bytes de datos: Tiempo de espera agotado para esta solicitud. Tiempo de espera agotado para esta solicitud. Tiempo de espera agotado para esta solicitud. Tiempo de espera agotado para esta solicitud. Estadísticas de ping para 1.1.1.1: Paquetes: enviados = 4, recibidos = 0, perdidos = 4 1.1.1.1 does not exist on the network. And the output of: ping 10.1.1.1 Haciendo ping a 10.1.1.1 con 32 bytes de datos: Respuesta desde 80.58.67.86: Red de destino inaccesible. Tiempo de espera agotado para esta solicitud. Tiempo de espera agotado para esta solicitud. Respuesta desde 80.58.67.86: Red de destino inaccesible. Estadísticas de ping para 10.1.1.1: Paquetes: enviados = 4, recibidos = 2, perdidos = 2 (50% perdidos), 10.1.1.1 does not exist on the network. I can provide an approximate translation of anything else if necessary. I have other computers on the same network (WinXP-SP3 and Win7-SP1), and they have this problem too. Gateway (Router): Buffalo WHR-HP-GN (official Buffalo firmware, not DD-WRT). I have a Linux (Debian/Kali) machine on my network, so I tested things on it: ping 192.168.210.1 PING 192.168.210.1 (192.168.210.1) 56(84) bytes of data. From 80.58.67.86 icmp_seq=1 Packet filtered From 80.58.67.86 icmp_seq=2 Packet filtered From 80.58.67.86 icmp_seq=3 Packet filtered From 80.58.67.86 icmp_seq=4 Packet filtered To the non-existent 1.1.1.1: ping 1.1.1.1 PING 1.1.1.1 (1.1.1.1) 56(84) bytes of data. 
^C --- 1.1.1.1 ping statistics --- 153 packets transmitted, 0 received, 100% packet loss, time 153215ms (no response after waiting a few minutes). And the non-existent 10.1.1.1: ping 10.1.1.1 PING 10.1.1.1 (10.1.1.1) 56(84) bytes of data. From 80.58.67.86 icmp_seq=20 Packet filtered From 80.58.67.86 icmp_seq=22 Packet filtered From 80.58.67.86 icmp_seq=23 Packet filtered From 80.58.67.86 icmp_seq=24 Packet filtered From 80.58.67.86 icmp_seq=25 Packet filtered What is going on here? I am asking this question mainly for learning purposes, but there is another reason: when all pings return "timeout", %ERRORLEVEL% is set to 1, but if any reply is of the "network is not reachable" type, %ERRORLEVEL% goes to 0 (no error), and this can be inappropriate for a shell script (we cannot use ping's exit code to detect, for example, whether the network is down due to loss of contact with the gateway).
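    As for the mechanism: the "Respuesta desde 80.58.67.86: Red de destino inaccesible" lines are an upstream router actively answering with an ICMP destination-unreachable error, and Windows ping counts any reply, even an error reply, as "received", which is why %ERRORLEVEL% ends up 0. If the practical goal is a script that treats both cases as failure, one workaround is to bypass ping.exe's exit code entirely; a sketch in C# that queries the ICMP status directly (the host argument and timeout are placeholders):

        // usage: reachable.exe <host-or-ip>
        // Exits 0 only on a genuine echo reply; TimedOut and
        // DestinationNetworkUnreachable both exit 1, unlike ping.exe.
        using System;
        using System.Net.NetworkInformation;

        class Reachable
        {
            static int Main(string[] args)
            {
                using (var ping = new Ping())
                {
                    PingReply reply = ping.Send(args[0], 2000);  // 2 s timeout
                    Console.WriteLine(reply.Status);             // Success, TimedOut, ...
                    return reply.Status == IPStatus.Success ? 0 : 1;
                }
            }
        }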

    Read the article

  • How to decrypt an encrypted Apple iTunes iPhone backup?

    - by afit
    I've been asked by a number of unfortunate iPhone users to help them restore data from their iTunes backups. This is easy when they are unencrypted, but not when they are encrypted, whether or not the password is known. As such, I'm trying to figure out the encryption scheme used on mddata and mdinfo files when encrypted. I have no problems reading these files otherwise, and have built some robust C# libraries for doing so. (If you're able to help, I don't care which language you use. It's the principle I'm after here!) The Apple "iPhone OS Enterprise Deployment Guide" states that "Device backups can be stored in encrypted format by selecting the Encrypt iPhone Backup option in the device summary pane of iTunes. Files are encrypted using AES128 with a 256-bit key. The key is stored securely in the iPhone keychain." That's a pretty good clue, and there's some good info here on Stack Overflow on iPhone AES/Rijndael interoperability suggesting a key size of 128 and CBC mode may be used. Aside from any other obfuscation, a key and initialisation vector (IV)/salt are required. One might assume that the key is a manipulation of the "backup password" that users are prompted to enter by iTunes and passed to "AppleMobileBackup.exe", padded in a fashion dictated by CBC. However, given the reference to the iPhone keychain, I wonder whether the "backup password" might not be used as a password on an X509 certificate or symmetric private key, and that the certificate or private key itself might be used as the key. (AES and the iTunes encrypt/decrypt process is symmetric.) The IV is another matter, and it could be a few things. Perhaps it's one of the keys hard-coded into iTunes, or into the devices themselves. Although Apple's comment above suggests the key is present on the device's keychain, I think this isn't that important. One can restore an encrypted backup to a different device, which suggests all information relevant to the decryption is present in the backup and iTunes configuration, and that anything solely on the device is irrelevant and replaceable in this context. So where might the key be? I've listed paths below from a Windows machine but it's much of a muchness whichever OS we use. The "\appdata\Roaming\Apple Computer\iTunes\itunesprefs.xml" contains a PList with a "Keychain" dict entry in it. The "\programdata\apple\Lockdown\09037027da8f4bdefdea97d706703ca034c88bab.plist" contains a PList with "DeviceCertificate", "HostCertificate", and "RootCertificate", all of which appear to be valid X509 certs. The same file also appears to contain asymmetric keys "RootPrivateKey" and "HostPrivateKey" (my reading suggests these might be PKCS #7-enveloped). Also, within each backup there are "AuthSignature" and "AuthData" values in the Manifest.plist file, although these appear to be rotated as each file gets incrementally backed up, suggesting they're not that useful as a key, unless something really quite involved is being done. There's a lot of misleading stuff out there suggesting getting data from encrypted backups is easy. It's not, and to my knowledge it hasn't been done. Bypassing or disabling the backup encryption is another matter entirely, and is not what I'm looking to do. This isn't about hacking apart the iPhone or anything like that. All I'm after here is a means to extract data (photos, contacts, etc.) from encrypted iTunes backups as I can with unencrypted ones. I've tried all sorts of permutations with the information I've put down above but got nowhere. 
I'd appreciate any thoughts or techniques I might have missed.
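    For what it's worth, the mechanical half of the problem is small once the key and IV are known; deriving them is exactly the open question above. Here is a C# sketch of the hypothesized AES/CBC step, in which key, iv and the padding mode are assumptions rather than recovered values:

        // Decrypts one .mddata blob, ASSUMING key and IV have already been
        // recovered somehow (the unsolved part). Follows the "256-bit key"
        // reading of Apple's "AES128 with a 256-bit key" wording.
        using System.IO;
        using System.Security.Cryptography;

        static class BackupFileDecryptor
        {
            public static byte[] DecryptMddata(byte[] cipherText, byte[] key, byte[] iv)
            {
                using (var aes = new AesManaged())
                {
                    aes.KeySize = 256;                 // assumption, see above
                    aes.Mode = CipherMode.CBC;
                    aes.Padding = PaddingMode.PKCS7;   // assumption; could be none
                    using (var decryptor = aes.CreateDecryptor(key, iv))
                    using (var ms = new MemoryStream())
                    using (var cs = new CryptoStream(ms, decryptor, CryptoStreamMode.Write))
                    {
                        cs.Write(cipherText, 0, cipherText.Length);
                        cs.FlushFinalBlock();
                        return ms.ToArray();
                    }
                }
            }
        }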

    Read the article

  • EF4 POCO WCF Serialization problems (no lazy loading, proxy/no proxy, circular references, etc)

    - by kdawg
    OK, I want to make sure I cover my situation and everything I've tried thoroughly. I'm pretty sure what I need/want can be done, but I haven't quite found the perfect combination for success. I'm utilizing Entity Framework 4 RTM and its POCO support. I'm looking to query for an entity (Config) that contains a many-to-many relationship with another entity (App). I turn off lazy loading and disable proxy creation for the context and explicitly load the navigation property (either through .Include() or .LoadProperty()). However, when the navigation property is loaded (that is, Apps is loaded for a given Config), the App objects that were loaded already contain references to the Configs that have been brought to memory. This creates a circular reference. Now I know the DataContractSerializer that WCF uses can handle circular references, by setting the preserveObjectReferences parameter to true. I've tried this with a couple of different attribute implementations I've found online. It is needed to prevent the "the object graph contains circular references and cannot be serialized" error. However, it doesn't prevent the serialization of the entire graph, back and forth between Config and App. If I invoke it via WcfTestClient.exe, I get a stackoverflow (ha!) exception from the client and I'm hosed. I get different results from different invocation environments (C# unit test with a local reference to the web service appears to work ok though I still can drill back and forth between Configs and Apps endlessly, but calling it from a coldfusion environment only returns the first Config in the list and errors out on the others.) My main goal is to have a serialized representation of the graph I explicitly load from EF (ie: list of Configs, each with their Apps, but no App back to Config navigation.) NOTE: I've also tried using the ProxyDataContractResolver technique and keeping the proxy creation enabled from my context. This blows up complaining about unknown types encountered. I read that the ProxyDataContractResolver didn't fully work in Beta2, but should work in RTM. For some reference, here is roughly how I'm querying the data in the service: var repo = BootStrapper.AppCtx["AppMeta.ConfigRepository"] as IRepository<Config>; repo.DisableLazyLoading(); repo.DisableProxyCreation(); //var temp2 = repo.Include(cfg => cfg.Apps).Where(cfg => cfg.Environment.Equals(environment)).ToArray(); var temp2 = repo.FindAll(cfg => cfg.Environment.Equals(environment)).ToArray(); foreach (var cfg in temp2) { repo.LoadProperty(cfg, c => c.Apps); } return temp2; I think the crux of my problem is when loading up navigation properties for POCO objects from Entity Framework 4, it prepopulates navigation properties for objects already in memory. This in turn hoses up the WCF serialization, despite every effort made to properly handle circular references. I know it's a lot of information, but it's really standing in my way of going forward with EF4/POCO in our system. I've found several articles and blogs touching upon these subjects, but for the life of me, I cannot resolve this issue. Feel free to simply ask questions and help me brainstorm this situation. PS: For the sake of being thorough, I am injecting the WCF services using the HEAD build of Spring.NET for the fix to Spring.ServiceModel.Activation.ServiceHostFactory. However I don't think this is the source of the problem.
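    For reference, the preserveObjectReferences hook mentioned above usually takes the following shape: a DataContractSerializerOperationBehavior whose serializer is constructed with preserveObjectReferences set to true. This is a sketch of that known pattern, not a fix for the graph blow-up itself; as described, the Config/App back-references still need to be trimmed (or mapped to DTOs) before serialization if the client shouldn't see them:

        // Swaps the operation's serializer for one that emits z:Ref/z:Id
        // references instead of failing on (or inlining) cycles.
        using System;
        using System.Collections.Generic;
        using System.Runtime.Serialization;
        using System.ServiceModel.Description;
        using System.Xml;

        public class ReferencePreservingDataContractSerializerOperationBehavior
            : DataContractSerializerOperationBehavior
        {
            public ReferencePreservingDataContractSerializerOperationBehavior(
                OperationDescription operation) : base(operation) { }

            public override XmlObjectSerializer CreateSerializer(
                Type type, string name, string ns, IList<Type> knownTypes)
            {
                return new DataContractSerializer(type, name, ns, knownTypes,
                    int.MaxValue,  // maxItemsInObjectGraph
                    false,         // ignoreExtensionDataObject
                    true,          // preserveObjectReferences
                    null);         // dataContractSurrogate
            }

            public override XmlObjectSerializer CreateSerializer(
                Type type, XmlDictionaryString name, XmlDictionaryString ns,
                IList<Type> knownTypes)
            {
                return new DataContractSerializer(type, name, ns, knownTypes,
                    int.MaxValue, false, true, null);
            }
        }

    It is typically swapped in for the default behavior on each OperationDescription via a custom attribute or contract behavior.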

    Read the article

  • Failed to Install Xdebug

    - by burnt1ce
    I've registered xdebug in php.ini (as per http://xdebug.org/docs/install) but it's not showing up when I run "php -m" or when I get a test page to run "phpinfo()". I've just installed the latest version of XAMPP. I've used both "zend_extention" and "zend_extention_ts" to specify the path of the xdebug dll. I ensured that my Apache server restarted and used the latest change of my php.ini by executing "httpd -k restart". Can anyone provide any suggestions for getting xdebug to show up? Here are the contents of my php.ini file. [PHP] ;;;;;;;;;;;;;;;;;;; ; About php.ini ; ;;;;;;;;;;;;;;;;;;; ; PHP's initialization file, generally called php.ini, is responsible for ; configuring many of the aspects of PHP's behavior. ; PHP attempts to find and load this configuration from a number of locations. ; The following is a summary of its search order: ; 1. SAPI module specific location. ; 2. The PHPRC environment variable. (As of PHP 5.2.0) ; 3. A number of predefined registry keys on Windows (As of PHP 5.2.0) ; 4. Current working directory (except CLI) ; 5. The web server's directory (for SAPI modules), or directory of PHP ; (otherwise in Windows) ; 6. The directory from the --with-config-file-path compile time option, or the ; Windows directory (C:\windows or C:\winnt) ; See the PHP docs for more specific information. ; http://php.net/configuration.file ; The syntax of the file is extremely simple. Whitespace and Lines ; beginning with a semicolon are silently ignored (as you probably guessed). ; Section headers (e.g. [Foo]) are also silently ignored, even though ; they might mean something in the future. ; Directives following the section heading [PATH=/www/mysite] only ; apply to PHP files in the /www/mysite directory. Directives ; following the section heading [HOST=www.example.com] only apply to ; PHP files served from www.example.com. Directives set in these ; special sections cannot be overridden by user-defined INI files or ; at runtime. Currently, [PATH=] and [HOST=] sections only work under ; CGI/FastCGI. ; http://php.net/ini.sections ; Directives are specified using the following syntax: ; directive = value ; Directive names are *case sensitive* - foo=bar is different from FOO=bar. ; Directives are variables used to configure PHP or PHP extensions. ; There is no name validation. If PHP can't find an expected ; directive because it is not set or is mistyped, a default value will be used. ; The value can be a string, a number, a PHP constant (e.g. E_ALL or M_PI), one ; of the INI constants (On, Off, True, False, Yes, No and None) or an expression ; (e.g. E_ALL & ~E_NOTICE), a quoted string ("bar"), or a reference to a ; previously set variable or directive (e.g. ${foo}) ; Expressions in the INI file are limited to bitwise operators and parentheses: ; | bitwise OR ; ^ bitwise XOR ; & bitwise AND ; ~ bitwise NOT ; ! boolean NOT ; Boolean flags can be turned on using the values 1, On, True or Yes. ; They can be turned off using the values 0, Off, False or No. ; An empty string can be denoted by simply not writing anything after the equal ; sign, or by using the None keyword: ; foo = ; sets foo to an empty string ; foo = None ; sets foo to an empty string ; foo = "None" ; sets foo to the string 'None' ; If you use constants in your value, and these constants belong to a ; dynamically loaded extension (either a PHP extension or a Zend extension), ; you may only use these constants *after* the line that loads the extension. 
;;;;;;;;;;;;;;;;;;; ; About this file ; ;;;;;;;;;;;;;;;;;;; ; PHP comes packaged with two INI files. One that is recommended to be used ; in production environments and one that is recommended to be used in ; development environments. ; php.ini-production contains settings which hold security, performance and ; best practices at its core. But please be aware, these settings may break ; compatibility with older or less security conscience applications. We ; recommending using the production ini in production and testing environments. ; php.ini-development is very similar to its production variant, except it's ; much more verbose when it comes to errors. We recommending using the ; development version only in development environments as errors shown to ; application users can inadvertently leak otherwise secure information. ;;;;;;;;;;;;;;;;;;; ; Quick Reference ; ;;;;;;;;;;;;;;;;;;; ; The following are all the settings which are different in either the production ; or development versions of the INIs with respect to PHP's default behavior. ; Please see the actual settings later in the document for more details as to why ; we recommend these changes in PHP's behavior. ; allow_call_time_pass_reference ; Default Value: On ; Development Value: Off ; Production Value: Off ; display_errors ; Default Value: On ; Development Value: On ; Production Value: Off ; display_startup_errors ; Default Value: Off ; Development Value: On ; Production Value: Off ; error_reporting ; Default Value: E_ALL & ~E_NOTICE ; Development Value: E_ALL | E_STRICT ; Production Value: E_ALL & ~E_DEPRECATED ; html_errors ; Default Value: On ; Development Value: On ; Production value: Off ; log_errors ; Default Value: Off ; Development Value: On ; Production Value: On ; magic_quotes_gpc ; Default Value: On ; Development Value: Off ; Production Value: Off ; max_input_time ; Default Value: -1 (Unlimited) ; Development Value: 60 (60 seconds) ; Production Value: 60 (60 seconds) ; output_buffering ; Default Value: Off ; Development Value: 4096 ; Production Value: 4096 ; register_argc_argv ; Default Value: On ; Development Value: Off ; Production Value: Off ; register_long_arrays ; Default Value: On ; Development Value: Off ; Production Value: Off ; request_order ; Default Value: None ; Development Value: "GP" ; Production Value: "GP" ; session.bug_compat_42 ; Default Value: On ; Development Value: On ; Production Value: Off ; session.bug_compat_warn ; Default Value: On ; Development Value: On ; Production Value: Off ; session.gc_divisor ; Default Value: 100 ; Development Value: 1000 ; Production Value: 1000 ; session.hash_bits_per_character ; Default Value: 4 ; Development Value: 5 ; Production Value: 5 ; short_open_tag ; Default Value: On ; Development Value: Off ; Production Value: Off ; track_errors ; Default Value: Off ; Development Value: On ; Production Value: Off ; url_rewriter.tags ; Default Value: "a=href,area=href,frame=src,form=,fieldset=" ; Development Value: "a=href,area=href,frame=src,input=src,form=fakeentry" ; Production Value: "a=href,area=href,frame=src,input=src,form=fakeentry" ; variables_order ; Default Value: "EGPCS" ; Development Value: "GPCS" ; Production Value: "GPCS" ;;;;;;;;;;;;;;;;;;;; ; php.ini Options ; ;;;;;;;;;;;;;;;;;;;; ; Name for user-defined php.ini (.htaccess) files. Default is ".user.ini" ;user_ini.filename = ".user.ini" ; To disable this feature set this option to empty value ;user_ini.filename = ; TTL for user-defined php.ini files (time-to-live) in seconds. 
Default is 300 seconds (5 minutes) ;user_ini.cache_ttl = 300 ;;;;;;;;;;;;;;;;;;;; ; Language Options ; ;;;;;;;;;;;;;;;;;;;; ; Enable the PHP scripting language engine under Apache. ; http://php.net/engine engine = On ; This directive determines whether or not PHP will recognize code between ; <? and ?> tags as PHP source which should be processed as such. It's been ; recommended for several years that you not use the short tag "short cut" and ; instead to use the full <?php and ?> tag combination. With the wide spread use ; of XML and use of these tags by other languages, the server can become easily ; confused and end up parsing the wrong code in the wrong context. But because ; this short cut has been a feature for such a long time, it's currently still ; supported for backwards compatibility, but we recommend you don't use them. ; Default Value: On ; Development Value: Off ; Production Value: Off ; http://php.net/short-open-tag short_open_tag = Off ; Allow ASP-style <% %> tags. ; http://php.net/asp-tags asp_tags = Off ; The number of significant digits displayed in floating point numbers. ; http://php.net/precision precision = 14 ; Enforce year 2000 compliance (will cause problems with non-compliant browsers) ; http://php.net/y2k-compliance y2k_compliance = On ; Output buffering is a mechanism for controlling how much output data ; (excluding headers and cookies) PHP should keep internally before pushing that ; data to the client. If your application's output exceeds this setting, PHP ; will send that data in chunks of roughly the size you specify. ; Turning on this setting and managing its maximum buffer size can yield some ; interesting side-effects depending on your application and web server. ; You may be able to send headers and cookies after you've already sent output ; through print or echo. You also may see performance benefits if your server is ; emitting less packets due to buffered output versus PHP streaming the output ; as it gets it. On production servers, 4096 bytes is a good setting for performance ; reasons. ; Note: Output buffering can also be controlled via Output Buffering Control ; functions. ; Possible Values: ; On = Enabled and buffer is unlimited. (Use with caution) ; Off = Disabled ; Integer = Enables the buffer and sets its maximum size in bytes. ; Note: This directive is hardcoded to Off for the CLI SAPI ; Default Value: Off ; Development Value: 4096 ; Production Value: 4096 ; http://php.net/output-buffering output_buffering = Off ; You can redirect all of the output of your scripts to a function. For ; example, if you set output_handler to "mb_output_handler", character ; encoding will be transparently converted to the specified encoding. ; Setting any output handler automatically turns on output buffering. ; Note: People who wrote portable scripts should not depend on this ini ; directive. Instead, explicitly set the output handler using ob_start(). ; Using this ini directive may cause problems unless you know what script ; is doing. ; Note: You cannot use both "mb_output_handler" with "ob_iconv_handler" ; and you cannot use both "ob_gzhandler" and "zlib.output_compression". ; Note: output_handler must be empty if this is set 'On' !!!! ; Instead you must use zlib.output_handler. 
; http://php.net/output-handler ;output_handler = ; Transparent output compression using the zlib library ; Valid values for this option are 'off', 'on', or a specific buffer size ; to be used for compression (default is 4KB) ; Note: Resulting chunk size may vary due to nature of compression. PHP ; outputs chunks that are few hundreds bytes each as a result of ; compression. If you prefer a larger chunk size for better ; performance, enable output_buffering in addition. ; Note: You need to use zlib.output_handler instead of the standard ; output_handler, or otherwise the output will be corrupted. ; http://php.net/zlib.output-compression zlib.output_compression = Off ; http://php.net/zlib.output-compression-level ;zlib.output_compression_level = -1 ; You cannot specify additional output handlers if zlib.output_compression ; is activated here. This setting does the same as output_handler but in ; a different order. ; http://php.net/zlib.output-handler ;zlib.output_handler = ; Implicit flush tells PHP to tell the output layer to flush itself ; automatically after every output block. This is equivalent to calling the ; PHP function flush() after each and every call to print() or echo() and each ; and every HTML block. Turning this option on has serious performance ; implications and is generally recommended for debugging purposes only. ; http://php.net/implicit-flush ; Note: This directive is hardcoded to On for the CLI SAPI implicit_flush = Off ; The unserialize callback function will be called (with the undefined class' ; name as parameter), if the unserializer finds an undefined class ; which should be instantiated. A warning appears if the specified function is ; not defined, or if the function doesn't include/implement the missing class. ; So only set this entry, if you really want to implement such a ; callback-function. unserialize_callback_func = ; When floats & doubles are serialized store serialize_precision significant ; digits after the floating point. The default value ensures that when floats ; are decoded with unserialize, the data will remain the same. serialize_precision = 100 ; This directive allows you to enable and disable warnings which PHP will issue ; if you pass a value by reference at function call time. Passing values by ; reference at function call time is a deprecated feature which will be removed ; from PHP at some point in the near future. The acceptable method for passing a ; value by reference to a function is by declaring the reference in the functions ; definition, not at call time. This directive does not disable this feature, it ; only determines whether PHP will warn you about it or not. These warnings ; should enabled in development environments only. ; Default Value: On (Suppress warnings) ; Development Value: Off (Issue warnings) ; Production Value: Off (Issue warnings) ; http://php.net/allow-call-time-pass-reference allow_call_time_pass_reference = On ; Safe Mode ; http://php.net/safe-mode safe_mode = Off ; By default, Safe Mode does a UID compare check when ; opening files. If you want to relax this to a GID compare, ; then turn on safe_mode_gid. ; http://php.net/safe-mode-gid safe_mode_gid = Off ; When safe_mode is on, UID/GID checks are bypassed when ; including files from this directory and its subdirectories. 
; (directory must also be in include_path or full path must ; be used when including) ; http://php.net/safe-mode-include-dir safe_mode_include_dir = ; When safe_mode is on, only executables located in the safe_mode_exec_dir ; will be allowed to be executed via the exec family of functions. ; http://php.net/safe-mode-exec-dir safe_mode_exec_dir = ; Setting certain environment variables may be a potential security breach. ; This directive contains a comma-delimited list of prefixes. In Safe Mode, ; the user may only alter environment variables whose names begin with the ; prefixes supplied here. By default, users will only be able to set ; environment variables that begin with PHP_ (e.g. PHP_FOO=BAR). ; Note: If this directive is empty, PHP will let the user modify ANY ; environment variable! ; http://php.net/safe-mode-allowed-env-vars safe_mode_allowed_env_vars = PHP_ ; This directive contains a comma-delimited list of environment variables that ; the end user won't be able to change using putenv(). These variables will be ; protected even if safe_mode_allowed_env_vars is set to allow to change them. ; http://php.net/safe-mode-protected-env-vars safe_mode_protected_env_vars = LD_LIBRARY_PATH ; open_basedir, if set, limits all file operations to the defined directory ; and below. This directive makes most sense if used in a per-directory ; or per-virtualhost web server configuration file. This directive is ; *NOT* affected by whether Safe Mode is turned On or Off. ; http://php.net/open-basedir ;open_basedir = ; This directive allows you to disable certain functions for security reasons. ; It receives a comma-delimited list of function names. This directive is ; *NOT* affected by whether Safe Mode is turned On or Off. ; http://php.net/disable-functions disable_functions = ; This directive allows you to disable certain classes for security reasons. ; It receives a comma-delimited list of class names. This directive is ; *NOT* affected by whether Safe Mode is turned On or Off. ; http://php.net/disable-classes disable_classes = ; Colors for Syntax Highlighting mode. Anything that's acceptable in ; <span style="color: ???????"> would work. ; http://php.net/syntax-highlighting ;highlight.string = #DD0000 ;highlight.comment = #FF9900 ;highlight.keyword = #007700 ;highlight.bg = #FFFFFF ;highlight.default = #0000BB ;highlight.html = #000000 ; If enabled, the request will be allowed to complete even if the user aborts ; the request. Consider enabling it if executing long requests, which may end up ; being interrupted by the user or a browser timing out. PHP's default behavior ; is to disable this feature. ; http://php.net/ignore-user-abort ;ignore_user_abort = On ; Determines the size of the realpath cache to be used by PHP. This value should ; be increased on systems where PHP opens many files to reflect the quantity of ; the file operations performed. ; http://php.net/realpath-cache-size ;realpath_cache_size = 16k ; Duration of time, in seconds for which to cache realpath information for a given ; file or directory. For systems with rarely changing files, consider increasing this ; value. ; http://php.net/realpath-cache-ttl ;realpath_cache_ttl = 120 ;;;;;;;;;;;;;;;;; ; Miscellaneous ; ;;;;;;;;;;;;;;;;; ; Decides whether PHP may expose the fact that it is installed on the server ; (e.g. by adding its signature to the Web server header). It is no security ; threat in any way, but it makes it possible to determine whether you use PHP ; on your server or not. 
; http://php.net/expose-php expose_php = On ;;;;;;;;;;;;;;;;;;; ; Resource Limits ; ;;;;;;;;;;;;;;;;;;; ; Maximum execution time of each script, in seconds ; http://php.net/max-execution-time ; Note: This directive is hardcoded to 0 for the CLI SAPI max_execution_time = 60 ; Maximum amount of time each script may spend parsing request data. It's a good ; idea to limit this time on productions servers in order to eliminate unexpectedly ; long running scripts. ; Note: This directive is hardcoded to -1 for the CLI SAPI ; Default Value: -1 (Unlimited) ; Development Value: 60 (60 seconds) ; Production Value: 60 (60 seconds) ; http://php.net/max-input-time max_input_time = 60 ; Maximum input variable nesting level ; http://php.net/max-input-nesting-level ;max_input_nesting_level = 64 ; Maximum amount of memory a script may consume (128MB) ; http://php.net/memory-limit memory_limit = 128M ;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; ; Error handling and logging ; ;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; ; This directive informs PHP of which errors, warnings and notices you would like ; it to take action for. The recommended way of setting values for this ; directive is through the use of the error level constants and bitwise ; operators. The error level constants are below here for convenience as well as ; some common settings and their meanings. ; By default, PHP is set to take action on all errors, notices and warnings EXCEPT ; those related to E_NOTICE and E_STRICT, which together cover best practices and ; recommended coding standards in PHP. For performance reasons, this is the ; recommend error reporting setting. Your production server shouldn't be wasting ; resources complaining about best practices and coding standards. That's what ; development servers and development settings are for. ; Note: The php.ini-development file has this setting as E_ALL | E_STRICT. This ; means it pretty much reports everything which is exactly what you want during ; development and early testing. ; ; Error Level Constants: ; E_ALL - All errors and warnings (includes E_STRICT as of PHP 6.0.0) ; E_ERROR - fatal run-time errors ; E_RECOVERABLE_ERROR - almost fatal run-time errors ; E_WARNING - run-time warnings (non-fatal errors) ; E_PARSE - compile-time parse errors ; E_NOTICE - run-time notices (these are warnings which often result ; from a bug in your code, but it's possible that it was ; intentional (e.g., using an uninitialized variable and ; relying on the fact it's automatically initialized to an ; empty string) ; E_STRICT - run-time notices, enable to have PHP suggest changes ; to your code which will ensure the best interoperability ; and forward compatibility of your code ; E_CORE_ERROR - fatal errors that occur during PHP's initial startup ; E_CORE_WARNING - warnings (non-fatal errors) that occur during PHP's ; initial startup ; E_COMPILE_ERROR - fatal compile-time errors ; E_COMPILE_WARNING - compile-time warnings (non-fatal errors) ; E_USER_ERROR - user-generated error message ; E_USER_WARNING - user-generated warning message ; E_USER_NOTICE - user-generated notice message ; E_DEPRECATED - warn about code that will not work in future versions ; of PHP ; E_USER_DEPRECATED - user-generated deprecation warnings ; ; Common Values: ; E_ALL & ~E_NOTICE (Show all errors, except for notices and coding standards warnings.) 
; E_ALL & ~E_NOTICE | E_STRICT (Show all errors, except for notices) ; E_COMPILE_ERROR|E_RECOVERABLE_ERROR|E_ERROR|E_CORE_ERROR (Show only errors) ; E_ALL | E_STRICT (Show all errors, warnings and notices including coding standards.) ; Default Value: E_ALL & ~E_NOTICE ; Development Value: E_ALL | E_STRICT ; Production Value: E_ALL & ~E_DEPRECATED ; http://php.net/error-reporting error_reporting = E_ALL & ~E_NOTICE & ~E_DEPRECATED ; This directive controls whether or not and where PHP will output errors, ; notices and warnings too. Error output is very useful during development, but ; it could be very dangerous in production environments. Depending on the code ; which is triggering the error, sensitive information could potentially leak ; out of your application such as database usernames and passwords or worse. ; It's recommended that errors be logged on production servers rather than ; having the errors sent to STDOUT. ; Possible Values: ; Off = Do not display any errors ; stderr = Display errors to STDERR (affects only CGI/CLI binaries!) ; On or stdout = Display errors to STDOUT ; Default Value: On ; Development Value: On ; Production Value: Off ; http://php.net/display-errors display_errors = On ; The display of errors which occur during PHP's startup sequence are handled ; separately from display_errors. PHP's default behavior is to suppress those ; errors from clients. Turning the display of startup errors on can be useful in ; debugging configuration problems. But, it's strongly recommended that you ; leave this setting off on production servers. ; Default Value: Off ; Development Value: On ; Production Value: Off ; http://php.net/display-startup-errors display_startup_errors = On ; Besides displaying errors, PHP can also log errors to locations such as a ; server-specific log, STDERR, or a location specified by the error_log ; directive found below. While errors should not be displayed on productions ; servers they should still be monitored and logging is a great way to do that. ; Default Value: Off ; Development Value: On ; Production Value: On ; http://php.net/log-errors log_errors = Off ; Set maximum length of log_errors. In error_log information about the source is ; added. The default is 1024 and 0 allows to not apply any maximum length at all. ; http://php.net/log-errors-max-len log_errors_max_len = 1024 ; Do not log repeated messages. Repeated errors must occur in same file on same ; line unless ignore_repeated_source is set true. ; http://php.net/ignore-repeated-errors ignore_repeated_errors = Off ; Ignore source of message when ignoring repeated messages. When this setting ; is On you will not log errors with repeated messages from different files or ; source lines. ; http://php.net/ignore-repeated-source ignore_repeated_source = Off ; If this parameter is set to Off, then memory leaks will not be shown (on ; stdout or in the log). This has only effect in a debug compile, and if ; error reporting includes E_WARNING in the allowed list ; http://php.net/report-memleaks report_memleaks = On ; This setting is on by default. ;report_zend_debug = 0 ; Store the last error/warning message in $php_errormsg (boolean). Setting this value ; to On can assist in debugging and is appropriate for development servers. It should ; however be disabled on production servers. 
; Default Value: Off ; Development Value: On ; Production Value: Off ; http://php.net/track-errors track_errors = Off ; Turn off normal error reporting and emit XML-RPC error XML ; http://php.net/xmlrpc-errors ;xmlrpc_errors = 0 ; An XML-RPC faultCode ;xmlrpc_error_number = 0 ; When PHP displays or logs an error, it has the capability of inserting html ; links to documentation related to that error. This directive controls whether ; those HTML links appear in error messages or not. For performance and security ; reasons, it's recommended you disable this on production servers. ; Note: This directive is hardcoded to Off for the CLI SAPI ; Default Value: On ; Development Value: On ; Production value: Off ; http://php.net/html-errors html_errors = On ; If html_errors is set On PHP produces clickable error messages that direct ; to a page describing the error or function causing the error in detail. ; You can download a copy of the PHP manual from http://php.net/docs ; and change docref_root to the base URL of your local copy including the ; leading '/'. You must also specify the file extension being used including ; the dot. PHP's default behavior is to leave these settings empty. ; Note: Never use this feature for production boxes. ; http://php.net/docref-root ; Examples ;docref_root = "/phpmanual/" ; http://php.net/docref-ext ;docref_ext = .html ; String to output before an error message. PHP's default behavior is to leave ; this setting blank. ; http://php.net/error-prepend-string ; Example: ;error_prepend_string = "<font color=#ff0000>" ; String to output after an error message. PHP's default behavior is to leave ; this setting blank. ; http://php.net/error-append-string ; Example: ;error_append_string = "</font>" ; Log errors to specified file. PHP's default behavior is to leave this value ; empty. ; http://php.net/error-log ; Example: ;error_log = php_errors.log ; Log errors to syslog (Event Log on NT, not valid in Windows 95). ;error_log = syslog ;error_log = "C:\xampp\apache\logs\php_error.log" ;;;;;;;;;;;;;;;;; ; Data Handling ; ;;;;;;;;;;;;;;;;; ; Note - track_vars is ALWAYS enabled ; The separator used in PHP generated URLs to separate arguments. ; PHP's default setting is "&". ; http://php.net/arg-separator.output ; Example: arg_separator.output = "&amp;" ; List of separator(s) used by PHP to parse input URLs into variables. ; PHP's default setting is "&

    Read the article

  • Creating a multi-tenant application using PostgreSQL's schemas and Rails

    - by ramon.tayag
    Stuff I've already figured out I'm learning how to create a multi-tenant application in Rails that serves data from different schemas based on what domain or subdomain is used to view the application. I already have a few concerns answered: How can you get subdomain-fu to work with domains as well? Here's someone who asked the same question, which leads you to this blog. What database, and how will it be structured? Here's an excellent talk by Guy Naor, and a good question about PostgreSQL and schemas. I already know my schemas will all have the same structure. They will differ in the data they hold. So, how can you run migrations for all schemas? Here's an answer. Those three points cover a lot of the general stuff I need to know. However, in the next steps I seem to have many ways of implementing things. I'm hoping that there's a better, easier way. Finally, to my question When a new user signs up, I can easily create the schema. However, what would be the best and easiest way to load the structure that the rest of the schemas already have? Here are some questions/scenarios that might give you a better idea. Should I pass it on to a shell script that dumps the public schema into a temporary one, and imports it back to my main database (pretty much like what Guy Naor says in his video)? Here's a quick summary/script I got from the helpful #postgres on freenode. While this will probably work, I'm gonna have to do a lot of stuff outside of Rails, which makes me a bit uncomfortable, which also brings me to the next question. Is there a way to do this straight from Ruby on Rails? Like create a PostgreSQL schema, then just load the Rails database schema (schema.rb - I know, it's confusing) into that PostgreSQL schema. Is there a gem/plugin that has these things already? Methods like "create_pg_schema_and_load_rails_schema(the_new_schema_name)". If there's none, I'll probably work on making one, but I'm doubtful about how well tested it'll be with all the moving parts (especially if I end up using a shell script to create and manage new PostgreSQL schemas). Thanks, and I hope that wasn't too long! UPDATE May 11, 2010 11:26 GMT+8 Since last night I've been able to get a method to work that creates a new schema and loads schema.rb into it. Not sure if what I'm doing is correct (seems to work fine, so far) but it's a step closer at least. If there's a better way, please let me know. module SchemaUtils def self.add_schema_to_path(schema) conn = ActiveRecord::Base.connection conn.execute "SET search_path TO #{schema}, #{conn.schema_search_path}" end def self.reset_search_path conn = ActiveRecord::Base.connection conn.execute "SET search_path TO #{conn.schema_search_path}" end def self.create_and_migrate_schema(schema_name) conn = ActiveRecord::Base.connection schemas = conn.select_values("select * from pg_namespace where nspname != 'information_schema' AND nspname NOT LIKE 'pg%'") if schemas.include?(schema_name) tables = conn.tables Rails.logger.info "#{schema_name} exists already with these tables #{tables.inspect}" else Rails.logger.info "About to create #{schema_name}" conn.execute "create schema #{schema_name}" end # Save the old search path so we can set it back at the end of this method old_search_path = conn.schema_search_path # Tried to set the search path like in the methods above (from Guy Naor) # conn.execute "SET search_path TO #{schema_name}" # But the connection itself seems to remember the old search path. # If set this way, it works. 
conn.schema_search_path = schema_name # Directly from databases.rake. # In Rails 2.3.5 databases.rake can be found in railties/lib/tasks/databases.rake file = "#{Rails.root}/db/schema.rb" if File.exists?(file) Rails.logger.info "About to load the schema #{file}" load(file) else abort %{#{file} doesn't exist yet. It's possible that you just ran a migration!} end Rails.logger.info "About to set search path back to #{old_search_path}." conn.schema_search_path = old_search_path end end

    Read the article

  • IE8 losing session cookies in popup windows.

    - by HackedByChinese
    We have an ASP.NET application that uses Forms Auth. When users log in, a session ID cookie and a Forms Auth ticket (stored as a cookie) are generated. These are session cookies, not permanent cookies. It is intentional and desirable that when the browser closes, the user is effectively logged out. Once a user logs in, a new window is popped up using window.open('location here');. The page that is opened is effectively the workspace the user works in throughout the rest of their session. From this page, other pop-ups are also used. Lately, we've had a number of customers (all using the latest version of IE8) complaining that when they log in, the initial pop-up takes them back to the login screen rather than their homepage. Alternatively, users can sometimes log in, get to the homepage (which again, is in a new pop-up window), and it all seems fine, until any additional pop-ups are created, at which point it starts redirecting them to the login screen again. In attempting to troubleshoot the issue, I've used good old Fiddler. When the problem starts manifesting, I've noticed that the browser is not sending up the ASP.NET session ID session cookie OR the Forms Auth ticket session cookie, even though the response to the login POST clearly pushes down those cookies. What's stranger is that if I press CTRL+N to open a new window from the popped-up window that is missing the session cookies, then manually type in the URL to the home page, those cookies magically appear again. However, subsequent window.open(); calls will continue to be broken, not sending the session cookies and taking the user to the login screen. It's important to note that sometimes, for seemingly no good reason, those same users can suddenly log in and work normally for a while, then it goes back to broken. Now, I've ensured that there are no browser add-ons, plug-ins, toolbars, etc. running. I've added our site as a trusted site and dropped the security settings to Low, I've modified the Cookie Privacy policy to "accept all" and even disabled automatic policy settings, manually forcing it to accept everything and include session cookies. Nothing appears to affect it. Also note the web application resides on a single server. There is no load balancing, web gardens, server farms, clusters, etc. The server does reside behind an ISA server, but other than that it's pretty straightforward. I've been searching around for days and haven't found anything actionable. Heck, sometimes I can't even reproduce it reliably. I have found a few references to people having this same problem, but they seem to be referencing an issue that was allegedly fixed in a beta or RC release (example: http://stackoverflow.com/questions/179260/ie8-loses-cookies-when-opening-a-new-window-after-a-redirect). These are release versions of IE, with up-to-date patches. I'm aware that I can try to set permanent cookies instead of session cookies. However, this has drastic security implications for our application. Update It seems that the problem automagically goes away when the user is added as a Local Administrator on the machine. Only time will tell if this change permanently (and positively) affects this problem. Time to bust out ProcMon and see if there is a resource access problem.

    Read the article

  • Implementing EAN13 and UPC-A barcodes in PDF using fpdf in classic ASP

    - by Jeremy N
    /* FPDF library for ASP can be downloaded from: http://www.aspxnet.it/public/default.asp INFORMATION: Translated by: Jeremy Author: Olivier License: Freeware DESCRIPTION: This script implements EAN13 and UPC-A barcodes (the second being a particular case of the first one). Bars are drawn directly in the PDF (no image is generated) function EAN13(x,y,barcode,h,w) -x = x coordinate to start drawing the barcode -y = y coordinate to start drawing the barcode -barcode = code to write (must be all numeric) -h = height of the bar -w = the minimum width of individual bar function UPC_A(x,y,barcode,h,w) Same parameters An EAN13 barcode is made up of 13 digits, UPC-A of 12 (leading zeroes are added if necessary). The last digit is a check digit; if it's not supplied or if it is incorrect, it will be automatically computed. USAGE: Copy all of this text and save it in a file called barcode.ext under the fpdf/extends folder EXAMPLE: Set pdf=CreateJsObject("FPDF") pdf.CreatePDF "P","mm","letter" pdf.SetPath("fpdf/") pdf.LoadExtension("barcode") pdf.Open() pdf.AddPage() 'set the fill color to black pdf.setfillcolor 0,0,0 pdf.UPC_A 80,40,"123456789012",16,0.35 pdf.Close() pdf.NewOutput "" , true, "test.pdf" */ this.EAN13=function (x,y,barcode,h,w) { return this.Barcode(x,y,barcode,h,w,13); }; this.UPC_A=function (x,y,barcode,h,w) { return this.Barcode(x,y,barcode,h,w,12); }; function GetCheckDigit(barCode) { var bc = barCode.replace(/[^0-9]+/g,''); var total = 0; //Get Odd Numbers for (var i=bc.length-1; i>=0; i=i-2) { total = total + parseInt(bc.substr(i,1)); } //Get Even Numbers for (var i=bc.length-2; i>=0; i=i-2) { var temp = parseInt(bc.substr(i,1)) * 2; if (temp > 9) { var tens = Math.floor(temp/10); var ones = temp - (tens*10); temp = tens + ones; } total = total + temp; } //Determine the checksum var modDigit = (10 - total % 10) % 10; return modDigit.toString(); } //Test validity of check digit function TestCheckDigit(barcode) { var cd=GetCheckDigit(barcode.substring(0,barcode.length-1)); return cd==barcode.substr(barcode.length-1,1); } this.Barcode=function Barcode(x,y,barcode,h,w,len) { //Padding while(barcode.length < len-1) { barcode = '0' + barcode; } if(len==12) {barcode='0' + barcode;} //Add or control the check digit if(barcode.length==12) { barcode += GetCheckDigit(barcode); } else { //if the check digit is incorrect, fix the check digit. if(!TestCheckDigit(barcode)) { barcode = barcode.substring(0,barcode.length-1) + GetCheckDigit(barcode.substring(0,barcode.length-1)); } } //Convert digits to bars var codes=[['0001101','0011001','0010011','0111101','0100011','0110001','0101111','0111011','0110111','0001011'], ['0100111','0110011','0011011','0100001','0011101','0111001','0000101','0010001','0001001','0010111'], ['1110010','1100110','1101100','1000010','1011100','1001110','1010000','1000100','1001000','1110100'] ]; var parities=[[0,0,0,0,0,0], [0,0,1,0,1,1], [0,0,1,1,0,1], [0,0,1,1,1,0], [0,1,0,0,1,1], [0,1,1,0,0,1], [0,1,1,1,0,0], [0,1,0,1,0,1], [0,1,0,1,1,0], [0,1,1,0,1,0] ]; var code='101'; var p=parities[parseInt(barcode.substr(0,1))]; var i; for(i=1;i<=6;i++) { code+= codes[p[i-1]][parseInt(barcode.substr(i,1))]; } code+='01010'; for(i=7;i<=12;i++) { code+= codes[2][parseInt(barcode.substr(i,1))]; } code+='101'; //Draw bars for(i=0;i<code.length;i++) { if(code.substr(i,1)=='1') { this.Rect(x+i*w,y,w,h,'F'); } } //Print text under barcode. 
this.SetFont('Arial','',12); //Set the x so that the font is centered under the barcode this.Text(x+parseInt(0.5*barcode.length)*w,y+h+11/this.k,barcode.substr(barcode.length-len,len)); }

    Read the article

  • Parse JSON in C#

    - by Ender
    I'm trying to parse some JSON data from the Google AJAX Search API. I have this URL and I'd like to break it down so that the results are displayed. I've currently written this code, but I'm pretty lost in regards of what to do next, although there are a number of examples out there with simplified JSON strings. Being new to C# and .NET in general I've struggled to get a genuine text output for my ASP.NET page so I've been recommended to give JSON.NET a try. Could anyone point me in the right direction to just simply writing some code that'll take in JSON from the Google AJAX Search API and print it out to the screen? EDIT: I think I've made some progress in regards to getting some code working using DataContractJsonSerializer. Here is the code I have so far. Any advice on whether this is close to working and/or how I would output my results in a clean format? EDIT 2: I've followed the advice from Dreas Grech and the StackOverflowException has gone. However, now I am getting no output. Any ideas on where to go next? EDIT 3: ALL FIXED! All results are working fine. Thank you again Dreas Grech! using System; using System.Data; using System.Configuration; using System.Web; using System.Web.Security; using System.Web.UI; using System.Web.UI.WebControls; using System.Web.UI.WebControls.WebParts; using System.Web.UI.HtmlControls; using System.ServiceModel.Web; using System.Runtime.Serialization; using System.Runtime.Serialization.Json; using System.IO; using System.Text; public partial class _Default : System.Web.UI.Page { protected void Page_Load(object sender, EventArgs e) { GoogleSearchResults g1 = new GoogleSearchResults(); const string json = @"{""responseData"": {""results"":[{""GsearchResultClass"":""GwebSearch"",""unescapedUrl"":""http://www.cheese.com/"",""url"":""http://www.cheese.com/"",""visibleUrl"":""www.cheese.com"",""cacheUrl"":""http://www.google.com/search?q\u003dcache:bkg1gwNt8u4J:www.cheese.com"",""title"":""\u003cb\u003eCHEESE\u003c/b\u003e.COM - All about \u003cb\u003echeese\u003c/b\u003e!."",""titleNoFormatting"":""CHEESE.COM - All about cheese!."",""content"":""\u003cb\u003eCheese\u003c/b\u003e - everything you want to know about it. Search \u003cb\u003echeese\u003c/b\u003e by name, by types of milk, by textures and by countries.""},{""GsearchResultClass"":""GwebSearch"",""unescapedUrl"":""http://en.wikipedia.org/wiki/Cheese"",""url"":""http://en.wikipedia.org/wiki/Cheese"",""visibleUrl"":""en.wikipedia.org"",""cacheUrl"":""http://www.google.com/search?q\u003dcache:n9icdgMlCXIJ:en.wikipedia.org"",""title"":""\u003cb\u003eCheese\u003c/b\u003e - Wikipedia, the free encyclopedia"",""titleNoFormatting"":""Cheese - Wikipedia, the free encyclopedia"",""content"":""\u003cb\u003eCheese\u003c/b\u003e is a food consisting of proteins and fat from milk, usually the milk of cows, buffalo, goats, or sheep. 
It is produced by coagulation of the milk \u003cb\u003e...\u003c/b\u003e""},{""GsearchResultClass"":""GwebSearch"",""unescapedUrl"":""http://www.ilovecheese.com/"",""url"":""http://www.ilovecheese.com/"",""visibleUrl"":""www.ilovecheese.com"",""cacheUrl"":""http://www.google.com/search?q\u003dcache:GBhRR8ytMhQJ:www.ilovecheese.com"",""title"":""I Love \u003cb\u003eCheese\u003c/b\u003e!, Homepage"",""titleNoFormatting"":""I Love Cheese!, Homepage"",""content"":""The American Dairy Association\u0026#39;s official site includes recipes and information on nutrition and storage of \u003cb\u003echeese\u003c/b\u003e.""},{""GsearchResultClass"":""GwebSearch"",""unescapedUrl"":""http://www.gnome.org/projects/cheese/"",""url"":""http://www.g
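
    For context, a minimal sketch of the DataContractJsonSerializer approach described in the edits might look like the following; apart from GoogleSearchResults (named in the post), the contract and helper names here are illustrative assumptions modelled on the JSON fields above, not code from the post:

    using System;
    using System.Collections.Generic;
    using System.IO;
    using System.Runtime.Serialization;
    using System.Runtime.Serialization.Json;
    using System.Text;

    [DataContract]
    public class SearchResult
    {
        [DataMember(Name = "titleNoFormatting")]
        public string Title { get; set; }

        [DataMember(Name = "url")]
        public string Url { get; set; }

        [DataMember(Name = "content")]
        public string Content { get; set; }
    }

    [DataContract]
    public class ResponseData
    {
        [DataMember(Name = "results")]
        public List<SearchResult> Results { get; set; }
    }

    [DataContract]
    public class GoogleSearchResults
    {
        [DataMember(Name = "responseData")]
        public ResponseData ResponseData { get; set; }
    }

    public static class JsonHelper
    {
        // Deserialize a JSON string into an instance of T using
        // DataContractJsonSerializer (no external libraries needed).
        public static T Deserialize<T>(string json)
        {
            var serializer = new DataContractJsonSerializer(typeof(T));
            using (var stream = new MemoryStream(Encoding.UTF8.GetBytes(json)))
            {
                return (T)serializer.ReadObject(stream);
            }
        }
    }

    With contracts like these, JsonHelper.Deserialize<GoogleSearchResults>(json).ResponseData.Results yields the result list, and each item's Title, Url, and Content can then be written to the page with Response.Write.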


  • jQuery Validate - require at least one field in a group to be filled

    - by Nathan Long
    I'm using the excellent jQuery Validate Plugin to validate some forms. On one form, I need to ensure that the user fills in at least one of a group of fields. I think I've got a pretty good solution, and wanted to share it. Please suggest any improvements you can think of.

    Finding no built-in way to do this, I searched and found Rebecca Murphey's custom validation method, which was very helpful. I improved it in three ways:

    1. To let you pass in a selector for the group of fields
    2. To let you specify how many of that group must be filled for validation to pass
    3. To show all inputs in the group as passing validation as soon as one of them passes validation

    So you can say "at least X inputs that match selector Y must be filled." The end result is a rule like this:

    partnumber: {
        require_from_group: [2, ".productinfo"]
    }
    // The partnumber input will validate if
    // at least 2 `.productinfo` inputs are filled

    For best results, put this rule AFTER any formatting rules for that field (like "must contain only numbers", etc.). This way, if the user gets an error from this rule and starts filling out one of the fields, they will get immediate feedback about the formatting required without having to fill another field first.

    Item #3 assumes that you're adding a class of .checked to your error messages upon successful validation. You can do this as follows, as demonstrated here.

    success: function(label) {
        label.html(" ").addClass("checked");
    }

    As in the demo linked above, I use CSS to give each span.error an X image as its background, unless it has the class .checked, in which case it gets a check mark image.

    Here's my code so far (a usage sketch follows below):

    jQuery.validator.addMethod("require_from_group", function (value, element, options) {
        // From the options array, find out what selector matches
        // our group of inputs and how many of them should be filled.
        var numberRequired = options[0];
        var selector = options[1];
        var commonParent = $(element).parents('form');
        var numberFilled = 0;
        commonParent.find(selector).each(function () {
            // Look through fields matching our selector and total up
            // how many of them have been filled
            if ($(this).val()) {
                numberFilled++;
            }
        });
        if (numberFilled >= numberRequired) {
            // This part is a bit of a hack - we make these fields look
            // like they've passed validation by hiding their error
            // messages, etc. Strictly speaking, they haven't been
            // re-validated, though, so it's possible that we're hiding
            // another validation problem. But there's no way (that I know
            // of) to trigger actual re-validation, and in any case, any
            // other errors will pop back up when the user tries to submit
            // the form. If anyone knows a way to re-validate, please comment.
            //
            // For inputs matching our selector, remove the error class.
            commonParent.find(selector).removeClass('error');
            // Also look for inserted error messages and mark them
            // with class 'checked'
            var remainingErrors = commonParent.find(selector)
                .next('label.error').not('.checked');
            remainingErrors.text("").addClass('checked');
            // Tell the Validate plugin that this test passed
            return true;
        }
        // The {0} in the message below is the 0th item in the options array
    }, jQuery.format("Please fill out at least {0} of these fields."));

    Questions? Comments?
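
    For reference, a minimal sketch of how this rule might be wired into a validate() call; the form id here is a hypothetical placeholder, not something from the post:

    $(document).ready(function () {
        $("#productForm").validate({
            rules: {
                partnumber: {
                    // at least 2 of the .productinfo inputs must be filled
                    require_from_group: [2, ".productinfo"]
                }
            },
            success: function (label) {
                // mark the message label as passing (see the CSS note above)
                label.html(" ").addClass("checked");
            }
        });
    });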


  • How to intercept raw soap request/response (data) from WCF client

    - by JohnIdol
    This question seems to be pretty close to what I am looking for - I was able to set up tracing and I am looking at the log entries for my calls to the service. However, I need to see the raw SOAP request with the data I am sending to the service, and I see no way of doing that from SvcTraceViewer (only log entries are shown, with no data sent to the service) - am I just missing some configuration? Here's what I've got in my web.config:

    <system.diagnostics>
      <sources>
        <source name="System.ServiceModel" switchValue="Verbose" propagateActivity="true">
          <listeners>
            <add name="sdt"
                 type="System.Diagnostics.XmlWriterTraceListener"
                 initializeData="App_Data/Logs/WCFTrace.svclog" />
          </listeners>
        </source>
      </sources>
    </system.diagnostics>

    Any help appreciated!

    UPDATE: this is all I see in my trace:

    <E2ETraceEvent xmlns="http://schemas.microsoft.com/2004/06/E2ETraceEvent">
      <System xmlns="http://schemas.microsoft.com/2004/06/windows/eventlog/system">
        <EventID>262163</EventID>
        <Type>3</Type>
        <SubType Name="Information">0</SubType>
        <Level>8</Level>
        <TimeCreated SystemTime="2010-05-10T13:10:46.6713553Z" />
        <Source Name="System.ServiceModel" />
        <Correlation ActivityID="{00000000-0000-0000-1501-0080000000f6}" />
        <Execution ProcessName="w3wp" ProcessID="3492" ThreadID="23" />
        <Channel />
        <Computer>MY_COMPUTER_NAME</Computer>
      </System>
      <ApplicationData>
        <TraceData>
          <DataItem>
            <TraceRecord xmlns="http://schemas.microsoft.com/2004/10/E2ETraceEvent/TraceRecord" Severity="Information">
              <TraceIdentifier>http://msdn.microsoft.com/en-US/library/System.ServiceModel.Channels.MessageSent.aspx</TraceIdentifier>
              <Description>Sent a message over a channel.</Description>
              <AppDomain>MY_DOMAIN</AppDomain>
              <Source>System.ServiceModel.Channels.HttpOutput+WebRequestHttpOutput/50416815</Source>
              <ExtendedData xmlns="http://schemas.microsoft.com/2006/08/ServiceModel/MessageTraceRecord">
                <MessageProperties>
                  <Encoder>text/xml; charset=utf-8</Encoder>
                  <AllowOutputBatching>False</AllowOutputBatching>
                  <Via>http://xxx.xx.xxx.xxx:9080/MyWebService/myService</Via>
                </MessageProperties>
                <MessageHeaders></MessageHeaders>
              </ExtendedData>
            </TraceRecord>
          </DataItem>
        </TraceData>
      </ApplicationData>
    </E2ETraceEvent>
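
    For what it's worth, the System.ServiceModel trace source used above records events but not message bodies; raw message content comes from the separate System.ServiceModel.MessageLogging source together with a messageLogging element. A sketch of the extra web.config configuration that is typically needed (the listener name and log path are placeholder values):

    <system.diagnostics>
      <sources>
        <source name="System.ServiceModel.MessageLogging" switchValue="Verbose">
          <listeners>
            <add name="messages"
                 type="System.Diagnostics.XmlWriterTraceListener"
                 initializeData="App_Data/Logs/WCFMessages.svclog" />
          </listeners>
        </source>
      </sources>
    </system.diagnostics>

    <system.serviceModel>
      <diagnostics>
        <messageLogging logEntireMessage="true"
                        logMessagesAtServiceLevel="true"
                        logMessagesAtTransportLevel="true"
                        logMalformedMessages="true" />
      </diagnostics>
    </system.serviceModel>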


  • Help With Database Layout

    - by three3
    Hello everyone, I am working on a site similar to Craigslist where users can make postings and sell items in different cities. One difference between my site and Craigslist is that you will be able to search by ZIP Code instead of having all of the cities listed on the page. I already have the ZIP Code database, which has the city, state, latitude, longitude, and ZIP Code info for each city.

    Okay, so to dive into what I need done and what I need help with:

    1.) Although I have the ZIP Code database, it is not set up perfectly for my use. (I downloaded it off the internet for free from http://zips.sourceforge.net/)
    2.) I need help setting up my database structure (e.g., how many different tables I should use and how I should link them). I will be using PHP and MySQL.

    These are my thoughts so far on how the database could be set up (I am not sure if this will work, though):

    Scenario: Someone goes to the homepage and it tells them, "Please enter your ZIP Code.". If they enter "17241", for example, this ZIP Code is for a city named Newville located in Pennsylvania. The query would look like this with the current database setup:

    SELECT city FROM zip_codes WHERE zip = 17241;

    The result of the query would be "Newville". The problem I see here is that when someone wants to post something in the Newville section of the site, I would have to have an entire table set up just for the Newville city postings. There are over 42,000 cities, which means I would have to have over 42,000 tables (one for each city), so it would be insane to do it that way.

    One way I was thinking of doing it was to add a column to the ZIP Code database called "city_id", a unique number assigned to each city. So, for example, the city Newville would have a city_id of 83. Now if someone comes and posts a listing in the city of Newville, I would only need one other table. That one other table would be set up like this:

    CREATE TABLE postings (
      posting_id INT NOT NULL AUTO_INCREMENT,
      for_sale LONGTEXT NULL,
      for_sale_date DATETIME NULL,
      for_sale_city_id INT NULL,
      jobs LONGTEXT NULL,
      jobs_date DATETIME NULL,
      jobs_city_id INT NULL,
      PRIMARY KEY (posting_id)
    );

    (The for_sale and jobs column names are categories of the types of postings users will be able to list under. There will be many more categories than just those two, but this is just an example.)

    So now when someone comes to the website and they are looking for something to buy rather than sell, they can enter their ZIP Code, 17241 for example, and this is the query that will run:

    SELECT city, city_id FROM zip_codes WHERE zip = 17241;
    -- Result: Newville, 83

    (Please note that I will be using PHP to store the ZIP Code the user enters in sessions and cookies to remember them throughout the site.) Now it will tell them, "Please choose your category.". If they choose the category "Items For Sale", then this is the query that runs and sorts the results:

    SELECT posting_id, for_sale, for_sale_date FROM postings WHERE for_sale_city_id = $_SESSION['zip_code'];

    So now my question to everyone is: will this work? I am pretty sure it will, but I do not want to set this thing up and then realize I overlooked something and have to start over from scratch. Any opinions and ideas are welcome, and I will listen to anyone who has some thoughts. I really appreciate the help in advance :D
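
    One detail worth pinning down in the flow above is that the postings query filters on for_sale_city_id, so it needs the looked-up city_id (83), not the raw ZIP Code held in the session. A minimal sketch of the two-step lookup, using the post's own table and column names:

    -- Step 1: resolve the visitor's ZIP Code to a city_id
    SELECT city, city_id
    FROM zip_codes
    WHERE zip = 17241;   -- e.g. returns Newville, 83

    -- Step 2: fetch "Items For Sale" postings for that city
    SELECT posting_id, for_sale, for_sale_date
    FROM postings
    WHERE for_sale_city_id = 83;   -- the city_id from step 1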


  • Multi-level navigation controller on left-hand side of UISplitView with a small twist.

    - by user141146
    Hi. I'm trying to make something similar to (but not exactly like) the email app found on the iPad. Specifically, I'd like to create a tab-based app, but each tab would present the user with a different UISplitView. Each UISplitView contains a Master and a Detail view (obviously). In each UISplitView, I would like the Master to be a multi-level navigation controller where new UIViewControllers are pushed onto (or popped off of) the stack. This type of navigation within the UISplitView is where the application resembles the native email app.

    To the best of my knowledge, the only place that has described a decent "splitviewcontroller inside of a uitabbarcontroller" is here: http://stackoverflow.com/questions/2475139/uisplitviewcontroller-in-a-tabbar-uitabbarcontroller and I've tried to follow the accepted answer. The accepted solution seems to work for me (i.e., I get a tab bar controller that allows me to switch between different UISplitViews). The problem is that I don't know how to make the left-hand side of the UISplitView a multi-level navigation controller.

    Here is the code I used within my app delegate to create the initial "split view inside of a tab bar controller" (it's pretty much as suggested in the aforementioned link):

    - (BOOL)application:(UIApplication *)application didFinishLaunchingWithOptions:(NSDictionary *)launchOptions {
        NSMutableArray *tabArray = [NSMutableArray array];
        NSMutableArray *array = [NSMutableArray array];

        UISplitViewController *splitViewController = [[UISplitViewController alloc] init];
        MainViewController *viewCont = [[MainViewController alloc] initWithNibName:@"MainViewController" bundle:nil];
        [array addObject:viewCont];
        [viewCont release];

        viewCont = [[DetailViewController alloc] initWithNibName:@"DetailViewController" bundle:nil];
        [array addObject:viewCont];
        [viewCont release];

        [splitViewController setViewControllers:array];
        [tabArray addObject:splitViewController];
        [splitViewController release];

        array = [NSMutableArray array];
        splitViewController = [[UISplitViewController alloc] init];
        viewCont = [[Master2 alloc] initWithNibName:@"Master2" bundle:nil];
        [array addObject:viewCont];
        [viewCont release];

        viewCont = [[Slave2 alloc] initWithNibName:@"Slave2" bundle:nil];
        [array addObject:viewCont];
        [viewCont release];

        [splitViewController setViewControllers:array];
        [tabArray addObject:splitViewController];
        [splitViewController release];

        // Add the tab bar controller's current view as a subview of the window
        [tabBarController setViewControllers:tabArray];
        [window addSubview:tabBarController.view];
        [window makeKeyAndVisible];
        return YES;
    }

    The class MainViewController is a UIViewController that contains the following method:

    - (IBAction)push_me:(id)sender {
        M2 *m2 = [[[M2 alloc] initWithNibName:@"M2" bundle:nil] autorelease];
        [self.navigationController pushViewController:m2 animated:YES];
    }

    This method is attached (via Interface Builder) to a UIButton found within MainViewController.xib. Obviously, the method above (push_me) is supposed to create a second UIViewController (called m2) and push it into view on the left side of the split view when the UIButton is pressed. And yet it does nothing when the button is pressed (even though I can tell that the method is called). Thoughts on where I'm going wrong? TIA!
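
    For context, one common cause of a silent pushViewController: call in a setup like this is that the master view controller is not embedded in a UINavigationController, so self.navigationController is nil and the message is silently ignored. A sketch of the master side of the first split view with that wrapper added (class names are those from the post; this is an assumption about the fix, not code from it):

    MainViewController *masterVC = [[MainViewController alloc] initWithNibName:@"MainViewController" bundle:nil];
    // Wrap the master in a navigation controller so that
    // self.navigationController is non-nil inside MainViewController
    UINavigationController *masterNav = [[UINavigationController alloc] initWithRootViewController:masterVC];
    [masterVC release];

    DetailViewController *detailVC = [[DetailViewController alloc] initWithNibName:@"DetailViewController" bundle:nil];

    UISplitViewController *splitViewController = [[UISplitViewController alloc] init];
    [splitViewController setViewControllers:[NSArray arrayWithObjects:masterNav, detailVC, nil]];
    [masterNav release];
    [detailVC release];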

