Search Results

Search found 14974 results on 599 pages for 'old games'.


  • Resolve Wrong IP from Domain Name only on certain networks

    - by Godric Seer
    I host a personal website on an old desktop that is LAMP based. There are several strange things about this problem, so I will break it down into steps.

    Since I have a dynamic IP, I use No-IP to make sure I have a working domain name at all times. I use the automatic update client, but I logged in and checked, and my No-IP domain has the proper IP tied to it. Here is a link to the homepage through the No-IP domain for reference. A ping and a traceroute on the No-IP domain give:

        [eckertzs@localhost ~]$ ping -c 1 endradil.noip.me
        PING endradil.noip.me (65.24.215.99) 56(84) bytes of data.
        64 bytes from endradil.noip.me (65.24.215.99): icmp_seq=1 ttl=64 time=2.23 ms

        --- endradil.noip.me ping statistics ---
        1 packets transmitted, 1 received, 0% packet loss, time 104ms
        rtt min/avg/max/mdev = 2.233/2.233/2.233/0.000 ms

        [eckertzs@localhost ~]$ traceroute endradil.noip.me
        traceroute to endradil.noip.me (65.24.215.99), 30 hops max, 60 byte packets
         1  . (192.168.2.1)  1.755 ms  5.409 ms  5.380 ms
         2  endradil.noip.me (65.24.215.99)  6.297 ms  9.543 ms  10.324 ms

    Using this domain, I can connect to my webserver without issue or interruption (the https is required to avoid a server-side redirect, but it works). I also have a domain I bought on GoDaddy, with a CNAME record forwarding the www subdomain to my No-IP domain:

        CNAME Record
        Host: www
        Points to: endradil.noip.me
        TTL: 1 hour

    For the past several weeks I never had an issue using the GoDaddy domain to connect (ssh or https). As of the past few days, however, the GoDaddy domain has only worked intermittently: it works for a few minutes at a time, then goes down for hours. I get "server not found" errors most of the time, and if I happen to be using the GoDaddy domain for an ssh connection, the connection freezes.

    I have run online DNS tests, which show the website is visible to external servers and resolves to the correct IP. I also contacted GoDaddy support, but they had no trouble connecting to the website and therefore saw no issue. My personal computers (Windows desktop, Linux laptop, Android phone) all fail to connect when on my personal wifi. If I disconnect my phone from the wifi and use my AT&T wireless data, it can connect with both domains without issue. When I attempt to use Google Webmaster Tools to crawl the site using the GoDaddy domain, Google cannot find the site.

    From my Linux laptop, I found some interesting results when I ping or traceroute the domain:

        [eckertzs@localhost ~]$ ping -c 1 www.endradil.com
        PING www.endradil.com.Belkin (198.105.244.228) 56(84) bytes of data.

        --- www.endradil.com.Belkin ping statistics ---
        1 packets transmitted, 0 received, 100% packet loss, time 10000ms

        [eckertzs@localhost ~]$ traceroute www.endradil.com
        traceroute to www.endradil.com (198.105.244.228), 30 hops max, 60 byte packets
         1  . (192.168.2.1)  1.918 ms  2.806 ms  2.772 ms
         2  cpe-65-24-208-1.insight.res.rr.com (65.24.208.1)  29.247 ms  29.654 ms  30.094 ms
         3  cpe-69-23-24-117.new.res.rr.com (69.23.24.117)  15.597 ms  23.218 ms  23.581 ms
         4  agg24.clmcohib01r.midwest.rr.com (65.29.1.52)  30.581 ms  30.556 ms  31.192 ms
         5  be27.clevohek01r.midwest.rr.com (65.29.1.38)  30.580 ms  31.062 ms  31.038 ms
         6  bu-ether25.atlngamq47w-bcr01.tbone.rr.com (107.14.19.38)  37.863 ms  68.844 ms  43.773 ms
         7  107.14.17.178 (107.14.17.178)  51.866 ms  51.019 ms  50.989 ms
         8  ae0.pr1.dca10.tbone.rr.com (107.14.17.200)  48.467 ms  ae-4-0.a0.lax91.tbone.rr.com (66.109.1.113)  49.912 ms  *
         9  v413.core1.ash1.he.net (209.51.175.33)  60.270 ms  50.842 ms  50.819 ms
        10  100ge5-1.core1.nyc4.he.net (184.105.223.166)  55.597 ms  56.045 ms  56.020 ms
        11  xerocole-inc.10gigabitethernet12-4.core1.nyc4.he.net (216.66.41.242)  56.001 ms  55.969 ms  55.992 ms
        12  * * *

    Both show the incorrect IP, and the traceroute times out on hops 12 through 255 (output truncated above). A traceroute run from site24x7's California server works and shows reasonable results. From another Linux box on a different network, but in the same city as me (10 miles away), traceroute still times out, yet the domain resolves to the correct IP.

    From this I believe the DNS result is incorrectly cached, either in my router/modem or perhaps at my ISP's level. My questions are: first, how do I find out exactly what is wrong, and second, how do I resolve it?
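    A practical way to localize the stale record is to query each resolver in the chain directly and compare the answers. This is a minimal diagnostic sketch with dig (the router address 192.168.2.1 is taken from the traceroute above; 8.8.8.8 is just one example of a public resolver, and the GoDaddy nameserver name shown is hypothetical, so look up the real ones first):

        # What does the router's DNS forwarder say?
        dig +short www.endradil.com @192.168.2.1

        # What does a public resolver say, bypassing the router and the ISP cache?
        dig +short www.endradil.com @8.8.8.8

        # Find the authoritative nameservers, then ask one of them directly
        dig +short NS endradil.com
        dig +short www.endradil.com @ns1.domaincontrol.com

    If the public and authoritative answers are correct but the router hands back 198.105.244.228, the stale (or hijacked) record lives in the router or in whatever upstream resolver it forwards to, and rebooting the router or pointing it at different DNS servers is the place to start.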


  • Mac won't boot into safe mode

    - by Stephen
    Mac boots fine normally; it's only safe mode that fails. Holding down Shift while booting gets me to the progress bar on the grey screen; the progress bar gets about halfway before the Mac reboots. I modified the nvram boot-args to get a better look:

        sudo nvram boot-args="-x -v"

    It definitely gets through fsck, skips loading kernel extensions (since it's in safe mode), does something with the network interfaces, and then this is the last thing it whips through...

        Aug 22 11:56:21 Crockpot com.apple.SecurityServer[15]: Succeeded authorizing right 'com.apple.ServiceManagement.daemons.modify' by client '/usr/libexec/UserEventAgent' [10] for authorization created by '/usr/libexec/UserEventAgent' [10] (100012,0)
        Aug 22 11:56:22 Crockpot fseventsd[37]: event logs in /.fseventsd out of sync with volume. destroying old logs. (1 174 330)
        Aug 22 11:56:22 Crockpot fseventsd[37]: log dir: /.fseventsd getting new uuid: 5C379650-26FA-428F-B81F-4FE4349D50B3
        Aug 22 11:56:23 Crockpot mDNSResponder[39]: mDNSResponder mDNSResponder-379.27 (Jun 20 2012 15:40:55) starting OSXVers 12
        Aug 22 11:56:23 Crockpot systemkeychain[35]: done file: /var/run/systemkeychaincheck.done
        Aug 22 11:56:23 Crockpot configd[17]: network changed: DNS*
        Aug 22 11:56:24 --- last message repeated 1 time ---
        Aug 22 11:56:24 Crockpot mDNSResponder[39]: D2D_IPC: Loaded
        Aug 22 11:56:24 Crockpot mDNSResponder[39]: D2DInitialize succeeded
        Aug 22 11:56:24 Crockpot mDNSResponder[39]: Adding registration domain 273025955.members.btmm.icloud.com.
        Aug 22 11:56:24 Crockpot kernel[0]: MacAuthEvent en1 Auth result for: 00:23:69:35:dc:fe MAC AUTH succeeded
        Aug 22 11:56:24 Crockpot kernel[0]: MacAuthEvent en1 Auth result for: 00:23:69:35:dc:fe Unsolicited Auth
        Aug 22 11:56:24 Crockpot kernel[0]: wlEvent: en1 en1 Link UP virtIf = 0
        Aug 22 11:56:24 Crockpot kernel[0]: AirPort: Link Up on en1
        Aug 22 11:56:24 Crockpot kernel[0]: en1: BSSID changed to 00:23:69:35:dc:fe
        Aug 22 11:56:24 Crockpot kernel[0]: en1::IO80211Interface::postMessage bssid changed
        Aug 22 11:56:24 Crockpot kernel[0]: AirPort: RSN handshake complete on en1
        Aug 22 11:56:25 Crockpot cfprefsd[19]: CFPreferences failed to read preferences data. Errno was 21
        Aug 22 11:56:25 --- last message repeated 1 time ---
        Aug 22 11:56:25 Crockpot airportd[30]: _doAutoJoin: Already associated to “burnum”. Bailing on auto-join.
        Aug 22 11:56:25 Crockpot com.apple.kextd[11]: Can't load IOBluetoothSerialManager.kext - ineligible during safe boot.
        Aug 22 11:56:25 Crockpot com.apple.kextd[11]: Load com.apple.iokit.IOBluetoothSerialManager failed; removing personalities from kernel.
        Aug 22 11:56:25 Crockpot cfprefsd[19]: CFPreferences: error renaming file blued.plist.HXuEmQn to blued.plist.
        Aug 22 11:56:27 Crockpot awacsd[52]: Starting awacsd connectivity-77 (Jun 20 2012 15:40:49)
        Aug 22 11:56:27 Crockpot com.apple.SecurityServer[15]: Succeeded authorizing right 'system.services.systemconfiguration.network' by client '/System/Library/Frameworks/SystemConfiguration.framework/Versions/A/Resources/SCHelper' [54] for authorization created by '/usr/sbin/awacsd' [52] (100003,0)
        Aug 22 11:56:27 --- last message repeated 1 time ---
        Aug 22 11:56:27 Crockpot awacsd[52]: Configuring lazy AWACS client: 273025955.p04.members.btmm.icloud.com.
        Aug 22 11:56:28 Crockpot apsd[55]: CGSLookupServerRootPort: Failed to look up the port for "com.apple.windowserver.active" (1102)
        Aug 22 11:56:32 --- last message repeated 1 time ---
        Aug 22 11:56:32 Crockpot awacsd[52]: KV HTTP 0
        Aug 22 11:56:38 --- last message repeated 1 time ---
        Aug 22 11:56:38 Crockpot apsd[55]: CGSLookupServerRootPort: Failed to look up the port for "com.apple.windowserver.active" (1102)
        Aug 22 11:56:47 Crockpot awacsd[52]: KV HTTP 0
        Aug 22 11:56:49 Crockpot configd[17]: subnet_route: write routing socket failed, Network is unreachable
        Aug 22 11:56:51 Crockpot configd[17]: network changed: v4(en1+:169.254.80.161) DNS* Proxy+ SMB
        Aug 22 11:56:51 Crockpot UserEventAgent[10]: Captive: en1: Not probing 'burnum' (protected network)
        Aug 22 11:56:51 Crockpot configd[17]: network changed: v4(en1:169.254.80.161) DNS Proxy SMB
        Aug 22 11:57:07 Crockpot awacsd[52]: KV HTTP 0
        Aug 22 11:57:23 Crockpot fseventsd[37]: Logging disabled completely for device:1: /Volumes/Recovery HD
        Aug 22 11:57:25 Crockpot kernel[0]: Kext loading now disabled.
        Aug 22 11:57:25 Crockpot kernel[0]: Kext unloading now disabled.
        Aug 22 11:57:25 Crockpot mDNSResponder[39]: mDNSResponder mDNSResponder-379.27 (Jun 20 2012 15:40:55) stopping
        Aug 22 11:57:25 Crockpot com.apple.SecurityServer[15]: Killing auth hosts
        Aug 22 11:57:25 Crockpot UserEventAgent[10]: dnssd_clientstub DNSServiceProcessResult called with DNSServiceRef with no ProcessReply function
        Aug 22 11:57:25 Crockpot configd[17]: dnssd_clientstub read_all(26) failed 0/28 0
        Aug 22 11:57:25 Crockpot configd[17]: [0x7fb025119ff0] SCNetworkReachability _llq_callback w/error=-65563
        Aug 22 11:57:25 Crockpot UserEventAgent[10]: dnssd_clientstub DNSServiceProcessResult called with DNSServiceRef with no ProcessReply function
        Aug 22 11:57:25 Crockpot mDNSResponder[39]: D2D_IPC: Terminated
        Aug 22 11:57:25 Crockpot mDNSResponder[39]: D2DTerminate succeeded
        Aug 22 11:57:25 Crockpot awacsd[52]: dnssd_clientstub read_all(4) failed 0/28 0
        Aug 22 11:57:25 Crockpot UserEventAgent[10]: dnssd_clientstub DNSServiceProcessResult called with DNSServiceRef with no ProcessReply function
        Aug 22 11:57:25 --- last message repeated 2 times ---
        Aug 22 11:57:25 Crockpot apsd[55]: dnssd_clientstub read_all(4) failed 0/28 0
        Aug 22 11:57:25 Crockpot configd[17]: SCNC: stop, triggered by configd, type PPPSerial, reason Terminated All
        Aug 22 11:57:25 Crockpot configd[17]: _d2dCallback: D2D connection to mDNSResponder lost
        Aug 22 11:57:25 Crockpot UserEventAgent[10]: dnssd_clientstub DNSServiceProcessResult called with DNSServiceRef with no ProcessReply function
        Aug 22 11:57:25 --- last message repeated 4 times ---
        Aug 22 11:57:25 Crockpot kernel[0]: Kext autounloading now disabled.
        Aug 22 11:57:25 Crockpot kernel[0]: Kernel requests now disabled.

    ...before rebooting in the middle of the safe-mode startup sequence:

        Aug 22 12:01:10 localhost bootlog[0]: BOOT_TIME 1345662070 0
        Aug 22 12:01:32 localhost kernel[0]: PMAP: PCID enabled
        Aug 22 12:01:32 localhost kernel[0]: Darwin Kernel Version 12.0.0: Sun Jun 24 23:00:16 PDT 2012; root:xnu-2050.7.9~1/RELEASE_X86_64

    Any ideas what's causing the safe mode boot to fail?

    System info: MacBook Pro 8,2, 2.2 GHz Core i7, 4 GB RAM, Mountain Lion 10.8, 500 GB TOSHIBA MK5065GSXF Serial-ATA rotational disk.
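    As a side note, the verbose flags set above persist across reboots, so once they are no longer needed they can be cleared from a normal boot. A small sketch of the standard nvram usage (run from Terminal; nothing else about the boot configuration is touched):

        # Remove the custom boot arguments set earlier
        sudo nvram -d boot-args

        # Or keep forcing safe mode on every boot while testing,
        # without having to hold Shift
        sudo nvram boot-args="-x"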


  • Apache on Win32: Slow Transfers of single, static files in HTTP, fast in HTTPS

    - by Michael Lackner
    I have a weird problem with Apache 2.2.15 on Windows 2000 Server SP4. Basically, I am trying to serve larger static files: images, videos, etc. The download seems to be capped at around 550 kB/s, even over 100 Mbit LAN. I tried other protocols (FTP/FTPS/FTP+ES/SCP/SMB), and they are all in the multi-megabyte range. The strangest thing is that when using Apache with HTTPS instead of HTTP, it serves very fast, around 2.7 MByte/s! I also tried the AnalogX SimpleWWW server just to test plain HTTP speed, and it gave me a healthy 3.3 MByte/s.

    I am at a total loss here. I searched the web and tried changing the following Apache configuration directives in httpd.conf, one at a time, mostly to no avail at all:

        SendBufferSize 1048576    (tried multiples of that too, up to 100 MBytes)
        EnableSendfile Off        (minor performance boost)
        EnableMMAP Off
        Win32DisableAcceptEx
        HostnameLookups Off       (default)

    I also tried to tune the following registry parameters, setting their values to 4194304 in decimal (they are REG_DWORD) and rebooting afterwards:

        HKLM\SYSTEM\CurrentControlSet\Services\AFD\Parameters\DefaultReceiveWindow
        HKLM\SYSTEM\CurrentControlSet\Services\AFD\Parameters\DefaultSendWindow

    Additionally, I tried to install mod_bw, which sets the event timer precision to 1 ms and allows for bandwidth throttling; according to some people it boosts static file serving performance when set to unlimited bandwidth for everybody. Unfortunately, it did nothing for me.

    So, the measured transfer speeds:

        AnalogX HTTP:                                    3300 kB/s
        Gene6 FTPD, plain:                               3500 kB/s
        Gene6 FTPD, Implicit/Explicit SSL, AES256:       1800-2000 kB/s
        freeSSHD:                                        1100 kB/s
        SMB shared folder:                               about 3000 kB/s
        Apache HTTP, plain:                               550 kB/s
        Apache HTTPS:                                    2700 kB/s

    Clients used in the bandwidth testing: Internet Explorer 8 (HTTP, HTTPS), Firefox 8 (HTTP, HTTPS), Chrome 13 (HTTP, HTTPS), Opera 11.60 (HTTP, HTTPS), wget under CygWin (HTTP, HTTPS), FileZilla (FTP, FTPS, FTP+ES, SFTP), and Windows Explorer (SMB).

    Generally, transfer speeds are not too high, but that's because the server machine is an old quad Pentium Pro 200 MHz machine with 2 GB RAM. However, I would like Apache to serve at at least 2 MByte/s instead of 550 kB/s, and that already works easily with HTTPS, so I fail to see why plain HTTP is so crippled. I am using a Kerio Winroute Firewall, but with no throttling and no special filters peeking into HTTP traffic, just the plain firewall functionality for blocking/allowing connections. The Apache error.log (LogLevel info) shows no warnings and no errors, and there is nothing strange in access.log either. I have already stripped my httpd.conf down to the bare minimum just to make sure nothing is interfering, but that didn't help either. If you have any idea, help would be greatly appreciated, since I am totally out of ideas! Thanks!

    Edit: I have now tried a newer Apache 2.2.21 to see if it makes any difference. However, the behaviour is exactly the same.

    Edit 2: KM01 has requested a sniff of the HTTP headers, so here comes the LiveHTTPHeaders output (an extension to Firefox). The output was generated by downloading a single file called "elephantsdream_source.264", which is an H.264/AVC elementary video stream under an Open Source license. I have taken the liberty of editing the URL, removing folders and changing the actual server's domain name to www.mydomain.com. Here it is:

    LiveHTTPHeaders, plain HTTP:

        http://www.mydomain.com/elephantsdream_source.264

        GET /elephantsdream_source.264 HTTP/1.1
        Host: www.mydomain.com
        User-Agent: Mozilla/5.0 (Windows NT 5.2; WOW64; rv:6.0.2) Gecko/20100101 Firefox/6.0.2
        Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
        Accept-Language: de-de,de;q=0.8,en-us;q=0.5,en;q=0.3
        Accept-Encoding: gzip, deflate
        Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
        Connection: keep-alive

        HTTP/1.1 200 OK
        Date: Wed, 21 Dec 2011 20:55:16 GMT
        Server: Apache/2.2.21 (Win32) mod_ssl/2.2.21 OpenSSL/0.9.8r PHP/5.2.17
        Last-Modified: Thu, 28 Oct 2010 20:20:09 GMT
        Etag: "c000000013fa5-29cf10e9-493b311889d3c"
        Accept-Ranges: bytes
        Content-Length: 701436137
        Keep-Alive: timeout=15, max=100
        Connection: Keep-Alive
        Content-Type: text/plain

    LiveHTTPHeaders, HTTPS:

        https://www.mydomain.com/elephantsdream_source.264

        GET /elephantsdream_source.264 HTTP/1.1
        Host: www.mydomain.com
        User-Agent: Mozilla/5.0 (Windows NT 5.2; WOW64; rv:6.0.2) Gecko/20100101 Firefox/6.0.2
        Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
        Accept-Language: de-de,de;q=0.8,en-us;q=0.5,en;q=0.3
        Accept-Encoding: gzip, deflate
        Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
        Connection: keep-alive

        HTTP/1.1 200 OK
        Date: Wed, 21 Dec 2011 20:56:57 GMT
        Server: Apache/2.2.21 (Win32) mod_ssl/2.2.21 OpenSSL/0.9.8r PHP/5.2.17
        Last-Modified: Thu, 28 Oct 2010 20:20:09 GMT
        Etag: "c000000013fa5-29cf10e9-493b311889d3c"
        Accept-Ranges: bytes
        Content-Length: 701436137
        Keep-Alive: timeout=15, max=100
        Connection: Keep-Alive
        Content-Type: text/plain
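    For anyone trying to reproduce the gap, the plain-vs-TLS throughput can be compared without a browser. A minimal sketch using curl against the same test file as in the header dumps above (-k only matters if the certificate is self-signed; on a Windows client use -o NUL instead of -o /dev/null):

        # Plain HTTP: print the average download speed in bytes/s
        curl -s -o /dev/null -w "%{speed_download}\n" http://www.mydomain.com/elephantsdream_source.264

        # Same file over HTTPS
        curl -k -s -o /dev/null -w "%{speed_download}\n" https://www.mydomain.com/elephantsdream_source.264

    Running both from the same client takes the browsers out of the equation and gives a single number to watch while toggling directives like EnableSendfile or Win32DisableAcceptEx.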


  • Why are my USB 2.0 devices crashing Windows XP?

    - by BenAlabaster
    Background on the machine I'm having a problem with: the machine was inherited and appears to be circa 2003 (there's a date stamp on the power supply which leads me to this conclusion). I've got it set up as a Skype terminal for my 2-year-old to keep in touch with her grandparents and other members of the family, which everyone loves.

    It has a generic ATX motherboard with no identifying markings other than one stamp that says "Rev.B". CPU-Z identifies the motherboard model as VT8601 but doesn't provide any manufacturer name. On board it has 1 x 10/100 LAN, 2 x USB 1.0, VGA, PS/2 for keyboard and mouse, a parallel port, 2 x serial ports, 2 x IDE, 1 x floppy, 2 x SDRAM slots, 1 x CPU housing seating a 1.3 GHz Intel Celeron CPU, 3 x PCI, and 1 x AGP, although you can only use 2 of the PCI slots if you use the AGP slot, due to the physical layout of the board. It's got 768 MB PC133 SDRAM (1 x 512 MB and 1 x 256 MB) installed, as well as a D-Link WDA-2320 54G Wi-Fi network card and a generic USB 2.0 expansion board containing 3 external + 1 internal USB connectors. It has a DVD+/-RW running as master on IDE1 and a 1.44 MB 3.5" floppy drive connected to the floppy connector. It has an 80 GB Western Digital hard drive running as master on IDE0. All this is sitting in a slimline case. I don't know the wattage of the PSU, but I can post it later if that proves helpful. The motherboard runs a version of Award BIOS whose version number I don't have to hand, but again I can post it later if it would be helpful.

    The hard disk is freshly formatted and built with Windows XP Professional/Service Pack 3 and is up to date with all current patches. In addition to Windows XP, the only other software it's running is Skype 4.1 (4.2 hangs the whole machine as soon as it starts up, requiring a hard boot to recover). It's got a Daytek MV150 15" touch screen hooked up to the on-board VGA and COM1 sockets with the most current drivers from the Daytek website and the most current version of the ELO Touchsystems drivers for the touch component. The webcam is a Logitech Webcam C200 with the latest drivers from the Logitech website.

    The problem: if I hook any devices to the USB 2.0 sockets, the whole machine hangs and I have to hard boot it to get it back up. If I have any devices attached to the USB 2.0 sockets when I boot up, it hangs before Windows gets to the login prompt, and I have to hard boot it to recover.

    Workarounds found: I can plug the same devices into the on-board USB 1.0 sockets and everything works fine, albeit at reduced performance. I've tried 3 different kinds of USB thumb drives, 3 different makes/models of webcams, and my iPhone, all with the same effect. They're recognized and don't hang the machine when I hook them to USB 1.0, but if I hook them to the USB 2.0 ports, the machine hangs within a couple of seconds of recognizing that the devices were connected.

    Attempted solutions: I've seen suggestions that this could be a power problem, that the PSU just doesn't have the wattage to drive these ports. While I'm doubtful this is the problem (after all, the motherboard has the same standard connector regardless of the PSU wattage), I tried disabling all the on-board devices I'm not using (on-board LAN, the second COM port, the AGP connector, etc.) through the BIOS, in what I'm sure was a futile attempt to reduce power consumption. I also modified the ACPI and power-management settings. None of it had any noticeable effect, although it didn't do any harm either.

    Could the wattage of the PSU really cause this problem? If it can, is there anything I need to be aware of when replacing it, or do I just need to make sure it's got a higher wattage than the current one? My interpretation was that the wattage only affects the number of drives you can hook up to the power connectors; is that right? I've installed the USB card in another machine and it works without issue, so it's not a problem with the USB card itself, and Windows says the card is installed and working correctly... right up until I connect a device to it. The only thing I haven't done, which I only just thought of while writing this essay, is trying the USB 2.0 card in a different PCI slot, or re-ordering the Wi-Fi and USB cards in the slots... although I'm not sure this will make any difference. Does anyone have any experience that suggests it might?

    Other thoughts/questions: perhaps this is an incompatibility between the USB 2.0 card and the BIOS. Would re-flashing the BIOS with a newer version help? Do I need to identify the manufacturer of the motherboard in order to find a BIOS edition specific to it, or will any version of Award BIOS function in its place?

    Question: does anyone have any ideas that could help me get my USB 2.0 devices hooked up to this machine?


  • Why are my USB 2.0 devices hanging Windows XP?

    - by BenAlabaster
    Background on the machine I'm having a problem with: the machine was inherited and appears to be circa 2003 (there's a date stamp on the power supply which leads me to this conclusion). I've got it set up as a Skype terminal for my 2-year-old to keep in touch with her grandparents and other members of the family, which everyone loves.

    It has a generic ATX motherboard with no identifying markings other than one stamp that says "Rev.B". CPU-Z identifies the motherboard model as VT8601 but doesn't provide any manufacturer name. On board it has 1 x 10/100 LAN, 2 x USB 1.0, VGA, PS/2 for keyboard and mouse, a parallel port, 2 x serial ports, 2 x IDE, 1 x floppy, 2 x SDRAM slots, 1 x CPU housing seating a 1.3 GHz Intel Celeron CPU, 3 x PCI, and 1 x AGP, although you can only use 2 of the PCI slots if you use the AGP slot, due to the physical layout of the board. It's got 768 MB PC133 SDRAM (1 x 512 MB and 1 x 256 MB) installed, as well as a D-Link WDA-2320 54G Wi-Fi network card and a generic USB 2.0 expansion board containing 3 external + 1 internal USB connectors; it has a NEC uPD720102 chipset. It has a DVD+/-RW running as master on IDE1 and a 1.44 MB 3.5" floppy drive connected to the floppy connector. It has an 80 GB Western Digital hard drive running as master on IDE0. All this is sitting in a slimline case. I don't know the wattage of the PSU, but I can post it later if that proves helpful. The motherboard runs a version of Award BIOS whose version number I don't have to hand, but again I can post it later if it would be helpful.

    The hard disk is freshly formatted and built with Windows XP Professional/Service Pack 3 and is up to date with all current patches. In addition to Windows XP, the only other software it's running is Skype 4.1 (4.2 hangs the whole machine as soon as it starts up, requiring a hard boot to recover). It's got a Daytek MV150 15" touch screen hooked up to the on-board VGA and COM1 sockets with the most current drivers from the Daytek website and the most current version of the ELO Touchsystems drivers for the touch component. The webcam is a Logitech Webcam C200 with the latest drivers from the Logitech website.

    The problem: if I hook any devices to the USB 2.0 sockets, the whole machine hangs and I have to hard boot it to get it back up. If I have any devices attached to the USB 2.0 sockets when I boot up, it hangs before Windows gets to the login prompt, and I have to hard boot it to recover.

    Workarounds found: I can plug the same devices into the on-board USB 1.0 sockets and everything works fine, albeit at reduced performance. I've tried 3 different kinds of USB thumb drives, 3 different makes/models of webcams, and my iPhone, all with the same effect. They're recognized and don't hang the machine when I hook them to USB 1.0, but if I hook them to the USB 2.0 ports, the machine hangs within a couple of seconds of recognizing that the devices were connected.

    Attempted solutions: I've seen suggestions that this could be a power problem, that the PSU just doesn't have the wattage to drive these ports. While I'm doubtful this is the problem (after all, the motherboard has the same standard connector regardless of the PSU wattage), I tried disabling all the on-board devices I'm not using (on-board LAN, the second COM port, the AGP connector, etc.) through the BIOS, in what I'm sure was a futile attempt to reduce power consumption. I also modified the ACPI and power-management settings. None of it had any noticeable effect, although it didn't do any harm either.

    Could the wattage of the PSU really cause this problem? If it can, is there anything I need to be aware of when replacing it, or do I just need to make sure it's got a higher wattage than the current one? My interpretation was that the wattage only affects the number of drives you can hook up to the power connectors; is that right? I've installed the USB card in another machine and it works without issue, so it's not a problem with the USB card itself, and Windows says the card is installed and working correctly... right up until I connect a device to it. The only thing I haven't done, which I only just thought of while writing this essay, is trying the USB 2.0 card in a different PCI slot, or re-ordering the Wi-Fi and USB cards in the slots... although I'm not sure this will make any difference. Does anyone have any experience that suggests it might?

    Other thoughts/questions: perhaps this is an incompatibility between the USB 2.0 card and the BIOS. Would re-flashing the BIOS with a newer version help? Do I need to identify the manufacturer of the motherboard in order to find a BIOS edition specific to it, or will any version of Award BIOS function in its place?

    Question: does anyone have any ideas that could help me get my USB 2.0 devices hooked up to this machine?

    Edit: Updated the USB 2.0 info with a reference to the actual card - http://www.xpcgear.com/lpnec4u.html


  • System user authentication via web interface [closed]

    - by donodarazao
    Background: We have one pretty slow and expensive satellite Internet connection that is shared in a network with 5-50 users. To limit traffic, users pay a certain sum of money per hour. Routing and traffic accounting on a per-user basis is done by an openSUSE 10.3 server. Login is done via PPPoE, and for each connection, username, bytes_sent, bytes_rcvd, start_time, end_time, etc. are written into a MySQL database.

    Now it has been decided that we want to change from time-based to volume-based pricing. As the original developer who installed the system a couple of years ago isn't available, I'm trying to make the changes. Although I'm absolutely new to all this, there is some progress. However, there's one point where I'm absolutely stuck. Up to now, only administrators can access connection details and billing information via a web interface. But as volume-based prices are less transparent to users than time-based prices, it is essential that users themselves can check their connections, and how much they cost, via the web interface. For this we need some kind of user authentication.

    Actual question: how do I develop such a user authentication? Every user has a Linux system user account, and the client machines use this username and password to connect to the PPPoE server. I thought about two possible ways to authenticate users.

    First possibility: users type their username and password into a form, which is then somehow checked. We already have the ability to change passwords via the web interface. Here are parts of the code.

    Part of the Perl script the homepage is linked to:

        #!/usr/bin/perl
        use CGI;
        use CGI::Carp qw(fatalsToBrowser);
        use lib '../lib';
        use own_perl_module;

        my @error;
        my $data;

        $query = new CGI;
        $username = $query->param('username') || '';
        $oldpasswd = $query->param('oldpasswd') || '';
        $passwd = $query->param('passwd') || '';
        $passwd2 = $query->param('passwd2') || '';

        own_perl_module::connect();

        if ($query->param('submit')) {
            my $benutzer = own_perl_module::select_benutzer(username => $username)
                or push @error, "user not exists";
            push @error, "your password?!?" unless $passwd;
            unless (@error) {
                own_perl_module::update_benutzer($benutzer->{id},
                    { oldpasswd => $oldpasswd, passwd => $passwd, passwd2 => $passwd2 },
                    error => \@error)
                    and push @error, "Password changed.";
            }
        }

    Here's part of the sub update_benutzer in the own_perl_module (the "->" arrows were mangled in my first post; restored here, and the backticks around $system are needed so that $? gets set):

        if ($dat->{passwd} ne '') {
            my $username = $dat->{username} || $select->{username};
            my $system = "./chpasswd.pl '$username' '$dat->{passwd}'"
                . (defined($dat->{oldpasswd}) ? " '$dat->{oldpasswd}'" : undef);
            my $answer = `$system`;    # run chpasswd.pl and capture its output
            if ($? != 0) {
                chomp($answer);
                push @$error, $answer || "error changing password ($?)";

    Here's chpasswd.pl:

        #!/usr/bin/perl
        use FileHandle;
        use IPC::Open3;

        local $username = shift;
        local $passwd = shift;
        local $oldpasswd = shift;

        local $chat = {
            'Old Password: $' => sub { print POUT "$oldpasswd\n"; },
            'New password: $' => sub { print POUT "$passwd\n"; },
            'Re-enter new password: $' => sub { print POUT "$passwd\n"; },
            '(.*)\n$' => sub { print "$1\n"; exit 1; }
        };

        local $/ = \1;

        my $command;
        if (defined($oldpasswd)) {
            $command = "sudo -u '$username' /usr/bin/passwd";
        } else {
            $command = "sudo /usr/bin/passwd '$username'";
        }

        $pid = open3(\*POUT, \*PIN, \*PERR, $command) or die;

        my $buffer;
        LOOP: while ($_ = <PERR>) {
            $buffer .= $_;
            foreach (keys(%$chat)) {
                if ($buffer =~ /$_/i) {
                    $buffer = undef;
                    &{$chat->{$_}};
                }
            }
        }
        exit;

    Could this somehow be adjusted to verify users, but without changing user passwords?

    The second possibility I see: all PPPoE connections are logged in the MySQL database. If I could somehow retrieve the username (or uid) of the user connected via PPPoE, this could be used to authenticate users. Users could then only check their Internet connections and costs while they are online (and thus paying money), but this could be tolerated. Here's a line of the script that inserts connections into the database:

        my $username = $ENV{PEERNAME};

    I thought it would be easy to use this variable, but $username seems to be always empty in test scripts (print $username). Any idea how to retrieve the user connected to the PPPoE server?

    Sorry for the long question! Any help would be very much appreciated. :)
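    A hedged note on the second approach: pppd only places PEERNAME in the environment of the hook scripts it runs itself (/etc/ppp/ip-up and friends), so $ENV{PEERNAME} will always be empty in an ordinary test script or a CGI script started by the web server. A minimal sketch of capturing it from the hook instead; the log path is arbitrary, and the exact hook file name depends on how the distribution wires ip-up:

        #!/bin/sh
        # /etc/ppp/ip-up.local -- invoked by pppd once a peer is authenticated.
        # pppd exports PEERNAME, IFNAME and IPREMOTE to this script.
        echo "$(date '+%F %T') user=$PEERNAME if=$IFNAME ip=$IPREMOTE up" \
            >> /var/log/pppoe-sessions.log

    The web interface could then match the requesting client's source IP against the most recent entry for that IP (or against the existing MySQL connection table, which presumably gets its username the same way) to decide which user is asking.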


  • CSC folder data access AND roaming profiles issues (Vista with Server 2003, then 2008)

    - by Alex Jones
    I'm a junior sysadmin for an IT contractor that helps small, local government agencies, like little towns and the like. One of our clients, a public library with ~50 staff users, was recently migrated from Server 2003 Standard to Server 2008 R2 Standard in a very short timeframe; our senior employee, the only network engineer, had suddenly put in his two weeks notice, so management pushed him to do this project before quitting. A bit hasty on management's part? Perhaps. Could we do anything about that? Nope. Do I have to fix this all by myself? Pretty much.

    The network is set up like this:

    a) 50ish staff workstations, all running Vista Business SP2. All staff use MS Outlook, which uses RPC-over-HTTPS ("Outlook Anywhere") for cached Exchange access to an offsite location.
    b) One new (virtualized) Server 2008 R2 Standard instance, running atop a Server 2008 R2 host via Hyper-V. The VM is the domain's DC, and also the site's one and only file server. Let's call that VM "NEWBOX".
    c) One old physical Server 2003 Standard server, running the same roles. Let's call it "OLDBOX". It's still on the network and accessible, but it's been demoted, and its shares have been disabled. No data has been deleted.
    d) Gigabit Ethernet everywhere. The organization has only one domain, and it did not change during the migration.
    e) Most users were set up for a combo of redirected folders + offline files, but some older employees who had been with the organization a long time are still on roaming profiles.

    To sum up: the servers in question handle user accounts and files, nothing else (e.g., no TS, no mail, no IIS, etc.).

    I have two major problems I'm hoping you can help me with:

    1) Even though all domain users have had their redirected folders moved to the new server, and logging in to their workstations and testing confirms that the Documents/Music/Whatever folders point to the new paths, it appears some users (not laptops or anything either!) had been working offline from OLDBOX for a long time, and nobody realized it. Here's the ugly implication: a bunch of their data now lives only in their CSC folders, because they can't access the share on OLDBOX and finally sync with it. How do I get this data out of those CSC folders and onto NEWBOX?

    2) What's the best way to migrate roaming-profile users to non-roaming ones, without losing vital data like documents, any lingering PSTs, etc.?

    Things I've thought about trying:

    For problem 1:

    a) Re-enable the documents share on OLDBOX, force an Offline Files sync for ALL domain users, then copy OLDBOX's share's data to the equivalent share on NEWBOX, and reinitialize the Offline Files cache for every user. With this: how do I safely force a domain-wide Offline Files sync? Could I lose data by re-enabling the share on OLDBOX and forcing the sync? Afterwards, how can I reinitialize the Offline Files cache for every user without doing it manually, workstation by workstation?

    b) Determine which users have unsynced changes to OLDBOX (again, how?), search each user's CSC folder domain-wide via workstation admin shares, and grab the unsynced data; then reinitialize the Offline Files cache for every user. With this: how can I detect which users have unsynced changes with a script? How can I search each user's CSC folder, when the ownership and permissions set for CSC folders are so restrictive? Again, afterwards, how can I reinitialize the Offline Files cache for every user without doing it manually, workstation by workstation?

    c) Manually visit each workstation, copy the contents of the CSC folder, and manually copy that data onto NEWBOX; then reinitialize the Offline Files cache for every user. With this: again, how do I 'break into' the CSC folder and get to its data? As an experiment, I took one workstation's HD offsite, imaged it for safety, and then tried the following with one of our shop PCs after attaching the drive: grant myself full control of the folder (failed), grant myself ownership of the folder (failed), run chkdsk on the whole drive to make sure nothing's messed up (all OK), try to take full control of the entire drive (failed), try to take ownership of the entire drive (failed). MS KB articles and Googling around suggest there's a utility called CSCCMD that's meant for this exact scenario... but it looks like it's available for XP, not Vista, no? Again, afterwards, how can I reinitialize the Offline Files cache for every user, without doing it manually, workstation by workstation?

    For problem 2:

    a) Figure out which users are on roaming profiles, and where their profiles 'live' on the server. Create new folders for them in the redirected-folders repository, migrate the existing data, and disable the roaming. With this: finding out who's roaming isn't hard. But what's the best way to disable the roaming itself, in AD Users and Computers, or on each user's workstation? Doing it centrally on the server seems more efficient; that said, all of the KB research I've done turns up articles on how to go from local to roaming, not the other way around, so I don't have good documentation on this.

    In closing: we have good backups of NEWBOX and OLDBOX, but not of the workstations themselves, so anything drastic on the client side would need imaging and testing for safety. Thanks for reading along this far! Hopefully you can help me dig us out of this mess.


  • Updating using custom class collection not working

    - by Risho
    I posted this yesterday on an ASP forum but no one replied, so perhaps I'll have better luck here. For some reason the OnUpdating method does not pull the new values from the grid row that is in edit mode. I've searched and come across several blogs and sites, some suggesting that an ObjectDataSource is required in order to use the "e.NewValue" construct, while others provide code to the contrary. I don't get any errors; the variables in the code file contain the old values rather than the new ones. I don't want to use the ODS way of manipulating the data. My delete method works, but not the update one. Can you suggest what is wrong with the code? Here is what I've got.

    aspx file:

        <asp:GridView ID="gvBlack" runat="server" AutoGenerateColumns="False"
            OnRowUpdating="gvBlack_OnUpdating" OnRowEditing="gvBlack_RowEditing">
          <Columns>
            <%--<asp:BoundField DataField="Ident_Black" ReadOnly="True" visible="false" />--%>
            <asp:TemplateField ItemStyle-Width="1px">
              <EditItemTemplate>
                <asp:Label ID="lblIdent_Black" runat="server" Text='<%# Bind("Ident_Black") %>' Visible="false" />
              </EditItemTemplate>
            </asp:TemplateField>
            <asp:TemplateField HeaderText="Model">
              <ItemTemplate>
                <asp:Label ID="lblModel_Black" runat="server" Text='<%# Bind("Model_Black") %>' width="130px" />
              </ItemTemplate>
              <EditItemTemplate>
                <asp:TextBox ID="txtModel_Black" runat="server" Text='<%# Eval("Model_Black") %>' width="100px" />
                <asp:RequiredFieldValidator ID="rfvModel_Black" runat="server" ControlToValidate="txtModel_Black"
                    SetFocusOnError="true" ErrorMessage="*" ValidationGroup="CurrentMfg" ForeColor="Red" Font-Bold="true" />
              </EditItemTemplate>
            </asp:TemplateField>
            <asp:TemplateField HeaderText="Description">
              <ItemTemplate>
                <asp:Label ID="lblDesc_Black" runat="server" Text='<%# Bind("Desc_Black") %>' width="200px" />
              </ItemTemplate>
              <EditItemTemplate>
                <asp:TextBox ID="txtDesc_Black" runat="server" Text='<%# Eval("Desc_Black") %>' width="170px" />
                <span></span>
              </EditItemTemplate>
            </asp:TemplateField>
            <asp:TemplateField HeaderText="Qty">
              <ItemTemplate>
                <asp:Label ID="lblQty_Black" runat="server" Text='<%# Bind("Qty_Black") %>' width="35px" />
              </ItemTemplate>
              <EditItemTemplate>
                <asp:TextBox ID="txtQty_Black" runat="server" Text='<%# Eval("Qty_Black") %>' width="35px" />
                <asp:RequiredFieldValidator ID="rfvQty_Black" runat="server" ControlToValidate="txtQty_Black"
                    SetFocusOnError="true" ErrorMessage="*" ValidationGroup="CurrentMfg" ForeColor="Red" Font-Bold="true" />
              </EditItemTemplate>
            </asp:TemplateField>
            <asp:TemplateField HeaderText="Reorder<br />Limit">
              <ItemTemplate>
                <asp:Label ID="lblBlack_Reorder_Limit" runat="server" Text='<%# Bind("Black_Reorder_Limit") %>' width="35px" />
              </ItemTemplate>
              <EditItemTemplate>
                <asp:TextBox ID="txtBlack_Reorder_Limit" runat="server" Text='<%# Eval("Black_Reorder_Limit") %>' width="35px" />
                <asp:RequiredFieldValidator ID="rfvBlack_Reorder_Limit" runat="server" ControlToValidate="txtBlack_Reorder_Limit"
                    SetFocusOnError="true" ErrorMessage="*" ValidationGroup="CurrentMfg" ForeColor="Red" Font-Bold="true" />
              </EditItemTemplate>
            </asp:TemplateField>
            <asp:TemplateField HeaderText="Notes">
              <ItemTemplate>
                <asp:Label ID="lblNotes" runat="server" Text='<%# Bind("Notes") %>' width="200px" />
              </ItemTemplate>
              <EditItemTemplate>
                <asp:TextBox ID="txtNotes" runat="server" Text='<%# Eval("Notes") %>' width="170px" />
                <span></span>
              </EditItemTemplate>
            </asp:TemplateField>
            <asp:CommandField ShowEditButton="True" ShowDeleteButton="false" ValidationGroup="CurrentToner" />
          </Columns>
        </asp:GridView>

    aspx.cs file:

        protected void Page_Load(object sender, EventArgs e)
        {
            LoadData_TonerBlack();
        }

        private void LoadData_TonerBlack()
        {
            dalConsumables_TonerBlack drTonerBlack = new dalConsumables_TonerBlack();
            gvBlack.DataSource = drTonerBlack.GetListTonersBlack();
            gvBlack.DataBind();
        }

        protected void gvBlack_OnUpdating(object sender, GridViewUpdateEventArgs e)
        {
            //GridView gvBlack = (GridView)sender;
            //GridViewRow gvBlackRow = (GridViewRow)gvBlack.Rows[e.RowIndex];
            int _Ident_Black = Convert.ToInt32(gvBlack.DataKeys[e.RowIndex].Values[0].ToString());
            TextBox _txtModel_Black = (TextBox)gvBlack.Rows[e.RowIndex].FindControl("txtModel_Black");
            TextBox _txtDesc_Black = (TextBox)gvBlack.Rows[e.RowIndex].FindControl("txtDesc_Black");
            TextBox _txtQty_Black = (TextBox)gvBlack.Rows[e.RowIndex].FindControl("txtQty_Black");
            TextBox _txtBlack_Reorder_Limit = (TextBox)gvBlack.Rows[e.RowIndex].FindControl("txtBlack_Reorder_Limit");
            TextBox _txtNotes = (TextBox)gvBlack.Rows[e.RowIndex].FindControl("txtNotes");
            string _updatedBy = Request.ServerVariables["AUTH_USER"].ToString();

            dalConsumables_TonerBlack updateTonerBlack = new dalConsumables_TonerBlack();
            updateTonerBlack.UpdateTonerBlack(_Ident_Black, _txtModel_Black.Text, _txtDesc_Black.Text,
                Convert.ToInt32(_txtQty_Black.Text), Convert.ToInt32(_txtBlack_Reorder_Limit.Text),
                _txtNotes.Text, _updatedBy);

            gvBlack.EditIndex = -1;
            LoadData_TonerBlack();
        }

        protected void gvBlack_RowEditing(object sender, GridViewEditEventArgs e)
        {
            gvBlack.EditIndex = e.NewEditIndex;
            LoadData_TonerBlack();
        }

    Thanks in advance!
    Risho


  • Sporadic unspecific kernel panic

    - by koma
    I'm experiencing seldom (so far about once a month) hard crashes on our Ubuntu Server 10.04 LTS box. The box itself is quite old (a Dell PowerEdge 750 from 2004, Pentium 4 2.8 GHz). I set up netconsole after it crashed twice last Thursday and was able to extract the following output:

        [ 9354.062473] invalid opcode: 0000 [#1] SMP
        [ 9354.062516] last sysfs file: /sys/devices/pci0000:00/0000:00:1d.0/usb2/2-2/2-2:1.0/uevent
        [ 9354.062555] Modules linked in: ppdev adm1026 hwmon_vid i2c_i801 bridge stp dcdbas psmouse serio_raw netconsole configfs shpchp lp parport usbhid hid e1000
        [ 9354.062685]
        [ 9354.062704] Pid: 3988, comm: rsync Not tainted 2.6.38-12-generic-pae #51~lucid1-Ubuntu Dell Computer Corporation PowerEdge 750 /0R1479
        [ 9354.062773] EIP: 0060:[<c104fef1>] EFLAGS: 00010046 CPU: 1
        [ 9354.062802] EIP is at check_preempt_wakeup+0x181/0x250
        [ 9354.062826] EAX: 00000002 EBX: f2a10ccc ECX: 00000000 EDX: 00000002
        [ 9354.062850] ESI: f1db71cc EDI: f1db71a0 EBP: f1dbdea8 ESP: f1dbde8c
        [ 9354.062875] DS: 007b ES: 007b FS: 00d8 GS: 00e0 SS: 0068
        [ 9354.062900] Process rsync (pid: 3988, ti=f1dbc000 task=f1db71a0 task.ti=f1dbc000)
        [ 9354.062933] Stack:
        [ 9354.062951] 0053ea60 f7907680 f28da840 f2a10ca0 c153ea60 f7907680 c153ea60 f1dbdebc
        [ 9354.063019] c103f98a f2a10ca0 f7907680 00000001 f1dbdef8 c104f97f 00000000 f2f0bacc
        [ 9354.063088] f7904338 00000001 00000003 00000000 f2f0bacc 00000001 00000001 00000086
        [ 9354.063157] Call Trace:
        [ 9354.063183] [<c103f98a>] check_preempt_curr+0x6a/0x80
        [ 9354.063210] [<c104f97f>] try_to_wake_up+0x5f/0x3f0
        [ 9354.063236] [<c1077a00>] ? hrtimer_wakeup+0x0/0x30
        [ 9354.063261] [<c104fd64>] wake_up_process+0x14/0x20
        [ 9354.063286] [<c1077a1d>] hrtimer_wakeup+0x1d/0x30
        [ 9354.063310] [<c1077f4a>] __run_hrtimer+0x7a/0x1c0
        [ 9354.063336] [<c107dbad>] ? ktime_get+0x6d/0x110
        [ 9354.063360] [<c1078310>] hrtimer_interrupt+0x120/0x2b0
        [ 9354.063390] [<c1535c36>] smp_apic_timer_interrupt+0x56/0x8a
        [ 9354.063418] [<c152f459>] apic_timer_interrupt+0x31/0x38
        [ 9354.063446] [<c1520000>] ? mca_attach_bus+0x5/0xc0
        [ 9354.063469] Code: 8b 9b 20 01 00 00 8b 86 24 01 00 00 3b 83 24 01 00 00 75 e6 85 db 0f 84 a3 00 00 00 89 da 89 f0 e8 75 f6 fe ff 83 f8 01 0f 85 00 <fe> ff ff 89 f8 e8 95 f9 fe ff 8b 5e 1c 85 db 0f 84 e4 fe ff ff
        [ 9354.063804] EIP: [<c104fef1>] check_preempt_wakeup+0x181/0x250 SS:ESP 0068:f1dbde8c
        [ 9354.064231] ---[ end trace 290689cea65aea7f ]---
        [ 9354.064290] Kernel panic - not syncing: Fatal exception in interrupt
        [ 9354.064352] Pid: 3988, comm: rsync Tainted: G D 2.6.38-12-generic-pae #51~lucid1-Ubuntu
        [ 9354.064424] Call Trace:
        [ 9354.064481] [<c152c057>] ? panic+0x5c/0x15b
        [ 9354.064539] [<c15302bd>] ? oops_end+0xcd/0xd0
        [ 9354.064539] [<c100d9e4>] ? die+0x54/0x80
        [ 9354.064539] [<c152f926>] ? do_trap+0x96/0xc0
        [ 9354.064539] [<c100ba00>] ? do_invalid_op+0x0/0xa0
        [ 9354.064539] [<c100ba8b>] ? do_invalid_op+0x8b/0xa0
        [ 9354.064539] [<c104fef1>] ? check_preempt_wakeup+0x181/0x250
        [ 9354.064539] [<c144884d>] ? __kfree_skb+0x3d/0x90
        [ 9354.064539] [<c1042ae7>] ? update_curr+0x247/0x2a0
        [ 9354.064539] [<c10447bb>] ? update_cfs_load+0x11b/0x2d0
        [ 9354.064539] [<c1042a25>] ? update_curr+0x185/0x2a0
        [ 9354.064539] [<c152f6bf>] ? error_code+0x67/0x6c
        [ 9354.064539] [<c104fef1>] ? check_preempt_wakeup+0x181/0x250
        [ 9354.064539] [<c103f98a>] ? check_preempt_curr+0x6a/0x80
        [ 9354.064539] [<c104f97f>] ? try_to_wake_up+0x5f/0x3f0
        [ 9354.064539] [<c1077a00>] ? hrtimer_wakeup+0x0/0x30
        [ 9354.064539] [<c104fd64>] ? wake_up_process+0x14/0x20
        [ 9354.064539] [<c1077a1d>] ? hrtimer_wakeup+0x1d/0x30
        [ 9354.064539] [<c1077f4a>] ? __run_hrtimer+0x7a/0x1c0
        [ 9354.064539] [<c107dbad>] ? ktime_get+0x6d/0x110
        [ 9354.064539] [<c1078310>] ? hrtimer_interrupt+0x120/0x2b0
        [ 9354.064539] [<c1535c36>] ? smp_apic_timer_interrupt+0x56/0x8a
        [ 9354.064539] [<c152f459>] ? apic_timer_interrupt+0x31/0x38
        [ 9354.064539] [<c1520000>] ? mca_attach_bus+0x5/0xc0

    Googling for this issue didn't really turn up anything useful (most of what I found related to btrfs, but I don't use that, although the module exists and is sometimes loaded). From experience it might have to do with relatively heavy I/O, as two of the panics happened during a backup procedure. The kernel is 2.6.38-12-generic-pae, but I'm pretty sure I also saw panics on 2.6.32. I have meanwhile upgraded to 3.0.0-17-generic-pae and am waiting for the next crash ;-)

    I'm at a loss here, so any pointers as to where to look for the cause, or what it could be, would be great :-) Thanks!
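    Since two of the panics coincided with the backup job, one cheap way to test the heavy-I/O theory on the new kernel is to generate a comparable sustained load on purpose and watch netconsole. A crude sketch (the file path and size are arbitrary, and oflag=direct bypasses the page cache so the disk actually does the work):

        # Write ~4 GB sequentially, read it back, clean up
        dd if=/dev/zero of=/tmp/ioload bs=1M count=4096 oflag=direct
        dd if=/tmp/ioload of=/dev/null bs=1M iflag=direct
        rm -f /tmp/ioload

    If the box panics under this but is otherwise stable, that at least narrows the trigger to the I/O path rather than, say, the NIC or rsync itself.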


  • Should I go along with my choice of web hosting company or still search?

    - by Devner
    Hi all,

    I have been searching for a good website hosting company that can offer all the services I need to host my PHP & MySQL based website. This is a community-based website, and users will be able to upload pictures, etc. The hosting company I have in mind currently lets me do everything: use mail(), run CRON jobs, and so on, for about $6/month.

    The only problem with this company is that they have a limit of 50,000 files that can exist within the hosting account at any time, which rather contradicts the "UNLIMITED SPACE" ad on the front page of their website. Apart from this, I know of no reason not to go with them. But the 50,000-file limit is something I cannot live with once the number of users grows and the files they upload exceed 50,000.

    Since this is a dynamic website that also handles sensitive matters like payments, I am not sure whether I should start with this company and later switch over to a better hosting company that does not limit me to 50,000 files. If I need to switch hosts later, I will have to take backups of all the files in my account (jpg, zip, etc.) and upload them to the new host, and I am not aware of any tools that can help me in this process. Can you please mention any that you know of?

    I could go with the other companies right now, but their cost is double or triple the current price, and they all offer fewer features than my current choice. If I pay more, they are ready to accommodate my higher demands. Unfortunately, the company I am willing to go with now does NOT have any higher or better plans that I could switch to later, so that's the really, really bad part.

    So my question(s):

    1) Since I am starting out with my website and the initial user base will be small, should I go ahead with the current choice and then switch to a better provider once demand increases? If yes, how can I transfer my database, and especially the jpg files, to the new provider? I don't even know the tools required to back up and restore to another host.

    2) (I don't like this idea, but still...) Should I pay more right now and go with the better providers, without knowing whether the website is going to do that well, just to save myself the trouble of backing up the 50,000 files and uploading them from the old host to the new one, and start paying double or triple the price without even knowing if the returns will be what I expected?

    Backup and restore in such bulky numbers is something I have never done before, and hence I am stuck trying to decide what to do. The price per month is also a considerable factor in my decision. All these web hosting companies say one common thing: it is the customer's responsibility to back up and restore data, and they are not liable for any loss. So no matter which hosting company I go with, they ask me to take backups via FTP so that I can restore them whenever I want (and it seems safer to have the files locally with me anyway). Some provide tools for backup and some do not, and I am not sure how much their backup tools can be trusted, considering the disclaimers they come with.

    I have never backed up and restored 50,000 files from one web host to another, so please, all you experienced people out there, leave your comments and let me know your suggestions so that I can decide. I have spent two days fighting with myself trying to decide what to do, and finally concluded that this is a double-edged sword and I can't arrive at a satisfactory final decision without other people's suggestions. I believe someone out there must have faced such a troublesome decision. All your suggestions to help me decide are appreciated. Thank you all.
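    On the mechanics of moving the files themselves: if both the old and the new host allow SSH logins (an assumption; many shared hosts do, some don't), the 50,000 files never need to make a local round trip. A minimal sketch with hypothetical host names and paths:

        # Run on the NEW host: pull the whole web root from the old one.
        # Re-runnable; subsequent runs only copy what changed.
        rsync -az --progress olduser@old-host.example.com:~/public_html/ ~/public_html/

        # Database: dump on the old host, load on the new one
        mysqldump -u dbuser -p mydb > mydb.sql     # on the old host
        mysql -u dbuser -p mydb < mydb.sql         # on the new host

    If only FTP access is available, an FTP mirroring client such as lftp (its mirror command) can do the same job, just more slowly.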


  • T60 Screen/LCD goes black after some minutes with a high-pitched sound rising and fading

    - by edelwater
    Just now my T60 screen went "black" (no display). On my second monitor there are no problems, so the VGA output works.

    Symptom: the laptop screen blanks (no display), but it works on the second monitor.

    Steps to reproduce: boot, then wait. It does not matter what you do; you do not even have to log in. The laptop's monitor slowly begins to make a ssssssssHHHHHHHHHHHHHHHHHWOEOEssssssss noise lasting about 10 seconds, and right after the sound ends, the monitor goes black. It seems to behave the same way each time.

    Software: installed no new software before or after; running ZoneAlarm and antivirus.

    Other: nothing feels hot anywhere, and there don't seem to be any running processes with strange behaviour.

    Warranty: out of warranty.

    What was I doing: typing text on a website and doing some PHP coding in a text editor.

    What can I do here other than buy a new laptop? Does it sound familiar from known cases?

    Update 1: Exactly the same problem: http://forums.lenovo.com/t5/T61-and-prior-T-series-ThinkPad/T60-Screen-Blackout/m-p/288772, and the second poster (garyj) in http://forums.lenovo.com/t5/T61-and-prior-T-series-ThinkPad/Black-Screen-on-T60/m-p/235053#M48627. And here: "I have that same problem. I replaced the CCRL on mine and it works fine when the screen is not screwed in. Once the frame of the LCD screen (metal portion) touches the metal on the laptop which holds the screen the screen goes black. If the metal is touching the screen when you boot up it boots up with it being very dimmly lit." from http://forums.lenovo.com/t5/T61-and-prior-T-series-ThinkPad/T60-screen-problems/m-p/205047#M44995 (it seems replacing the LCD display is no use; he tried it three times). Same problem: http://forums.lenovo.com/t5/T61-and-prior-T-series-ThinkPad/T60-black-screen/m-p/80604#M25914.

    Hmmm... not handy: 3 or 4 months ago I ordered and installed a new fan, and now the LCD. This does not seem to be an LCD issue at its core but some electrical issue, so it seems replacing the LCD is not the thing to do here. If it is not the LCD that needs to be replaced (see the other threads), which parts can I order to fix this? Is there any information that could lead me to identify the issue? I have read about replacing the "inverter" AND the "backlight"; would that make sense?

    Update 2: I replaced the inverter with another inverter, but I have the same problem. I DID notice that the inverter is the component that makes the sssssssssssssHHHHHHHHHH sound, AND it becomes very hot within a few seconds (both the old one and the test one). The question is, hmm, what makes the inverter so hot that (my assumption) it shuts itself down afterwards? Is it the input or the output? The output seems unlikely to me, because the screen seems to function, so it must be the electricity coming in. But what would cause that? The VGA card outputting some unusually high voltage seems unlikely. I am looking for the component to order and replace.

    Update 3: Great news. Ewendish gave me the hint to look in the BIOS. While I was in the BIOS, I noticed that the screen did not switch off and there was no high-pitched sound. So I lowered some settings in the BIOS. I then noticed that with brightness turned to 0 (via Fn+End) there is no high-pitched sound and the screen does not turn off, while with brightness turned up just three "bars" it starts making the sound. So I could work in the lowest brightness mode from now on, or... see where the problem lies.

    So, as stated below, it could lie with either power management or the display drivers / ATI Catalyst settings / Windows display settings. I'm trying to see where it lies, but I will Google some first.

    Update 4: I wiped the Windows XP installation clean and installed Windows 7 on it. Unfortunately the problem remains: as soon as the brightness goes up, the screen starts hissing. This means... back to the original thought: it probably IS a hardware problem. Although... again, if it is NOT the inverter, what is it? Could it be the backlight component itself? I could try to swap that with one from another T60... but this is quite tricky.


  • WCF GZip Compression Request/Response Processing

    - by IanT8
    How do I get a WCF client to process server responses which have been gzipped or deflated by IIS?

    On IIS, I've followed the instructions here on how to make IIS 6 gzip all responses (where the request contained "Accept-Encoding: gzip, deflate") emitted by .svc WCF services. On the client, I've followed the instructions here and here on how to inject the header "Accept-Encoding: gzip, deflate" into the web request. Fiddler2 shows the response is binary and not plain old XML, and the client crashes with an exception which basically says there's no XML header, which of course is true. In my IClientMessageInspector, the app crashes before AfterReceiveReply is called.

    Some further notes: (1) I can't change the WCF service or client, as they are supplied by a third party. I can, however, attach behaviors and/or message inspectors via configuration, if this is the right direction to take. (2) I don't want to compress/uncompress just the SOAP body, but the entire message.

    Any ideas/solutions?

    * SOLVED *

    It was not possible to write a WCF extension to achieve these goals. Instead I followed this CodeProject article, which advocates a helper class:

        public class CompressibleHttpRequestCreator : IWebRequestCreate
        {
            public CompressibleHttpRequestCreator() { }

            WebRequest IWebRequestCreate.Create(Uri uri)
            {
                HttpWebRequest httpWebRequest =
                    Activator.CreateInstance(typeof(HttpWebRequest),
                        BindingFlags.CreateInstance | BindingFlags.Public |
                        BindingFlags.NonPublic | BindingFlags.Instance,
                        null, new object[] { uri, null }, null) as HttpWebRequest;

                if (httpWebRequest == null)
                {
                    return null;
                }

                httpWebRequest.AutomaticDecompression =
                    DecompressionMethods.GZip | DecompressionMethods.Deflate;

                return httpWebRequest;
            }
        }

    and also an addition to the application configuration file:

        <configuration>
          <system.net>
            <webRequestModules>
              <remove prefix="http:"/>
              <add prefix="http:" type="Pajocomo.Net.CompressibleHttpRequestCreator, Pajocomo" />
            </webRequestModules>
          </system.net>
        </configuration>

    What seems to be happening is that WCF eventually asks some factory or other, deep down in System.Net, to provide an HttpWebRequest instance, and we provide the helper that will be asked to create the required instance. In the WCF client configuration file, a simple basicHttpBinding is all that is required, without the need for any custom extensions. When the application runs, the client HTTP request contains the header "Accept-Encoding: gzip, deflate", the server returns a gzipped web response, and the client transparently decompresses the HTTP response before handing it over to WCF.

    When I tried to apply this technique to web services, I found that it did NOT work. Although the helper class was executed in the same way as when used by the WCF client, the HTTP request did not contain the "Accept-Encoding: ..." header. To make this work for web services, I had to edit the web service proxy class and add this method:

        protected override System.Net.WebRequest GetWebRequest(Uri uri)
        {
            System.Net.HttpWebRequest rq = (System.Net.HttpWebRequest)base.GetWebRequest(uri);
            rq.AutomaticDecompression = DecompressionMethods.GZip | DecompressionMethods.Deflate;
            return rq;
        }

    Note that it did not matter whether the CompressibleHttpRequestCreator and the webRequestModules block in the application config file were present or not. For web services, only overriding GetWebRequest in the web service proxy worked.

    Read the article

  • IE8 losing session cookies in popup windows.

    - by HackedByChinese
    We have an ASP.NET application that uses Forms Auth. When users log in, a session ID cookie and a Forms Auth ticket (stored as a cookie) are generated. These are session cookies, not permanent cookies. It is intentional and desirable that when the browser closes, the user is effectively logged out. Once a user logs in, a new window is popped up using window.open('location here');. The page that is opened is effectively the workspace the user works in throughout the rest of their session. From this page, other pop-ups are also used. Lately, we've had a number of customers (all using the latest version of IE8) complaining that when they log in, the initial pop-up takes them back to the log-in screen rather than their homepage. Alternately, users can sometimes log in, get to the homepage (which again, is in a new pop-up window), and it all seems fine, until any additional pop-ups are created, at which point it starts redirecting them to the log-in screen again.

    In attempting to troubleshoot the issue, I've used good old Fiddler. When the problem starts manifesting, I've noticed that the browser is not sending up the ASP.NET session ID session cookie OR the Forms Auth ticket session cookie, even though the response to the log-in POST clearly pushes down those cookies. What's more strange is that if I CTRL+N to open a new window from the popped-up window that is missing the session cookies, then manually type in the URL to the home page, those cookies magically appear again. However, subsequent window.open() calls will continue to be broken, not sending the session cookies and taking the user to the log-in screen. It's important to note that sometimes, for seemingly no good reason, those same users can suddenly log in and work normally for a while, then it goes back to being broken.

    Now, I've ensured that there are no browser add-ons, plug-ins, toolbars, etc. running. I've added our site as a trusted site and dropped the security settings to Low, and I've modified the cookie privacy policy to "accept all" and even disabled automatic policy settings, manually forcing it to accept everything, including session cookies. Nothing appears to affect it. Also note the web application resides on a single server. There is no load balancing, web garden, server farm, cluster, etc. The server does reside behind an ISA server, but other than that it's pretty straightforward. I've been searching around for days and haven't found anything actionable. Heck, sometimes I can't even reproduce it reliably. I have found a few references to people having this same problem, but they seem to be referencing an issue that was allegedly fixed in a beta or RC release (example: http://stackoverflow.com/questions/179260/ie8-loses-cookies-when-opening-a-new-window-after-a-redirect). These are release versions of IE, with up-to-date patches. I'm aware that I can try to set permanent cookies instead of session cookies. However, this has drastic security implications for our application.

    Update: It seems that the problem automagically goes away when the user is added as a Local Administrator on the machine. Only time will tell if this change permanently (and positively) affects this problem. Time to bust out ProcMon and see if there is a resource access problem.

    Read the article

  • C# Bind DataTable to Existing DataGridView Column Definitions

    - by Timothy
    I've been struggling with a NullReferenceException and hope someone here will be able to point me in the right direction. I'm trying to create and populate a DataTable and then show the results in a DataGridView control. The basic code follows; execution stops with a NullReferenceException at the point where I invoke the new UpdateResults_Delegate. Oddly enough, I can trace entries.Rows.Count successfully before I return it from QueryEventEntries, so I can at least show 1) entries is not a null reference, and 2) the DataTable contains rows of data. I know I have to be doing something wrong, but I just don't know what.

        private void UpdateResults(DataTable entries)
        {
            dataGridView.DataSource = entries;
        }

        private void button_Click(object sender, EventArgs e)
        {
            PerformQuery();
        }

        private void PerformQuery()
        {
            DateTime start = new DateTime(dateTimePicker1.Value.Year, dateTimePicker1.Value.Month, dateTimePicker1.Value.Day, 0, 0, 0);
            DateTime stop = new DateTime(dateTimePicker2.Value.Year, dateTimePicker2.Value.Month, dateTimePicker2.Value.Day, 0, 0, 0);
            DataTable entries = QueryEventEntries(start, stop);
            UpdateResults(entries);
        }

        private DataTable QueryEventEntries(DateTime start, DateTime stop)
        {
            DataTable entries = new DataTable();
            entries.Columns.AddRange(new DataColumn[] {
                new DataColumn("event_type", typeof(Int32)),
                new DataColumn("event_time", typeof(DateTime)),
                new DataColumn("event_detail", typeof(String))});
            using (SqlConnection conn = new SqlConnection(DSN))
            {
                using (SqlDataAdapter adapter = new SqlDataAdapter(
                    "SELECT event_type, event_time, event_detail FROM event_log " +
                    "WHERE event_time >= @start AND event_time <= @stop", conn))
                {
                    adapter.SelectCommand.Parameters.AddRange(new Object[] {
                        new SqlParameter("@start", start),
                        new SqlParameter("@stop", stop)});
                    adapter.Fill(entries);
                }
            }
            return entries;
        }

    Update: I'd like to summarize and provide some additional information I've learned from the discussion here and from debugging efforts since I originally posted this question. I am refactoring old code that retrieved records from a database, collected those records as an array, and then later iterated through the array to populate a DataGridView row by row. Threading was originally implemented to compensate and keep the UI responsive during the unnecessary looping. I have since stripped out Thread/Invoke; everything now occurs on the same execution thread (thank you, Sam). I am attempting to replace the slow, unwieldy approach with a DataTable which I can fill with a DataAdapter and assign to the DataGridView through its DataSource property (above code updated). I've iterated through the entries DataTable's rows to verify the table contains the expected data before assigning it as the DataGridView's DataSource.

        foreach (DataRow row in entries.Rows)
        {
            System.Diagnostics.Trace.WriteLine(
                String.Format("{0} {1} {2}", row[0], row[1], row[2]));
        }

    One of the columns of the DataGridView is a custom DataGridViewColumn that stylizes the event_type value. I apologize I didn't mention this before in the original post, but I wasn't aware it was important to my problem. I have converted this column temporarily to a standard DataGridViewTextBoxColumn control and am no longer experiencing the exception. The fields in the DataTable are appended to the list of fields that were pre-specified in the Design view of the DataGridView, and the records' values are being populated in these appended fields.
    When the runtime attempts to render the cell, a null value is provided (the value that should be rendered actually appears a couple of columns over). In light of this, I am re-titling and re-tagging the question. I would still appreciate it if others who have experienced this could instruct me on how to bind the DataTable to the existing column definitions of the DataGridView.
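
    For what it's worth, the usual fix for this binding pattern is to turn off column auto-generation and point each predefined column's DataPropertyName at the matching DataTable field before assigning the DataSource. This is a minimal sketch, assuming the designer-defined columns should map to the DataTable fields by name; the column names below are hypothetical placeholders, not taken from the post above:

        // A minimal sketch: keep only the designer-defined columns instead of
        // letting the grid append auto-generated ones.
        dataGridView.AutoGenerateColumns = false;

        // Map each predefined column to its DataTable field by name.
        // (These column names are hypothetical placeholders.)
        dataGridView.Columns["colEventType"].DataPropertyName = "event_type";
        dataGridView.Columns["colEventTime"].DataPropertyName = "event_time";
        dataGridView.Columns["colEventDetail"].DataPropertyName = "event_detail";

        // Bind only after the mapping is in place.
        dataGridView.DataSource = entries;

    With the mapping set, the grid fills the existing columns instead of appending new ones, so the custom column should no longer receive null cell values.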

    Read the article

  • Installing Paperclip - "undefined method `has_attached_file` for" - Ruby on Rails

    - by bgadoci
    I just installed the plugin for Paperclip and I am getting the error message "undefined method `has_attached_file'". Not sure why I am getting this. Here is the full error message:

        NoMethodError (undefined method `has_attached_file' for #<Class:0x10338acd0>):
          /Users/bgadoci/.gem/ruby/1.8/gems/will_paginate-2.3.12/lib/will_paginate/finder.rb:170:in `method_missing'
          app/models/post.rb:2
          app/controllers/posts_controller.rb:50:in `show'

    For some reason it is referencing the will_paginate gem. From what I can find, it seems that either there is something wrong with my PostsController#index, or perhaps a previous attempt at installing the gem instead of the plugin is interfering (in which case I have read I should be able to remedy it through the /config/environments.rb file somehow). I didn't think that previous gem installation would matter, as I did it in an old version of the site that I trashed before installing the plugin. In the current version of the site I can see that the table has been updated with the Paperclip columns after migration. Here is my code:

    PostsController#index

        def index
          @tag_counts = Tag.count(:group => :tag_name, :order => 'count_all DESC', :limit => 20)
          conditions, joins = {}, :votes
          @vote_counts = Vote.count(:group => :post_title, :order => 'count_all DESC', :limit => 20)
          conditions, joins = {}, :votes
          unless (params[:tag_name] || "").empty?
            conditions = ["tags.tag_name = ? ", params[:tag_name]]
            joins = [:tags, :votes]
          end
          @posts = Post.paginate(
            :select => "posts.*, count(*) as vote_total",
            :joins => joins,
            :conditions => conditions,
            :group => "votes.post_id",
            :order => "created_at DESC",
            :page => params[:page],
            :per_page => 5)
          @popular_posts = Post.paginate(
            :select => "posts.*, count(*) as vote_total",
            :joins => joins,
            :conditions => conditions,
            :group => "votes.post_id",
            :order => "vote_total DESC",
            :page => params[:page],
            :per_page => 3)
          respond_to do |format|
            format.html # index.html.erb
            format.xml  { render :xml => @posts }
            format.json { render :json => @posts }
            format.atom
          end
        end

    Post model

        class Post < ActiveRecord::Base
          has_attached_file :photo
          validates_presence_of :body, :title
          has_many :comments, :dependent => :destroy
          has_many :tags, :dependent => :destroy
          has_many :votes, :dependent => :destroy
          belongs_to :user
          after_create :self_vote

          def self_vote
            # I am assuming you have a user_id field in `posts` and `votes` table.
            self.votes.create(:user => self.user)
          end

          cattr_reader :per_page
          @@per_page = 10
        end

    /views/posts/new.html.erb

        <h1>New post</h1>
        <%= link_to 'Back', posts_path %>
        <% form_for(@post, :html => { :multipart => true }) do |f| %>
          <%= f.error_messages %>
          <p>
            <%= f.label :title %><br />
            <%= f.text_field :title %>
          </p>
          <p>
            <%= f.label :body %><br />
            <%= f.text_area :body %>
          </p>
          <p>
            <%= f.file_field :photo %>
          </p>
          <p>
            <%= f.submit 'Create' %>
          </p>
        <% end %>

    Read the article

  • Delayed job problem in Rails

    - by krunal shah
    My controller, data_files_controller.rb:

        def upload_balances
          DataFile.load_balances(params)
        end

    My model, data_file.rb:

        def self.load_balances(params)
          # Pull the file out of the http request, write it to file system
          name = params['Filename']
          directory = "public/uploads"
          errors_table_name = "snapshot_errors"
          upload_file = File.join(directory, name)
          File.open(upload_file, "wb") { |f| f.write(params['Filedata'].read) }
          # Remove the old data from the table
          Balance.destroy_all
          ------ more code -----
        end

    It's working fine. Now I want to use delayed job so that my controller calls the model method like this:

        def upload_balances
          DataFile.send_later(:load_balances, params)
        end

    Is this possible? What's another way to do it? Does it create any problem? With this send_later I am getting this error in the last_error column of the delayed_jobs table:

        uninitialized stream
        C:/cyncabc/app/models/data_file.rb:12:in `read'
        C:/cyncabc/app/models/data_file.rb:12:in `load_balances'
        C:/cyncabc/app/models/data_file.rb:12:in `open'
        C:/cyncabc/app/models/data_file.rb:12:in `load_balances'
        c:/ruby/lib/ruby/gems/1.8/gems/delayed_job-2.0.3/lib/delayed/performable_method.rb:35:in `send'
        c:/ruby/lib/ruby/gems/1.8/gems/delayed_job-2.0.3/lib/delayed/performable_method.rb:35:in `perform'
        c:/ruby/lib/ruby/gems/1.8/gems/delayed_job-2.0.3/lib/delayed/backend/base.rb:66:in `invoke_job'
        c:/ruby/lib/ruby/gems/1.8/gems/delayed_job-2.0.3/lib/delayed/worker.rb:120:in `run'
        c:/ruby/lib/ruby/1.8/timeout.rb:62:in `timeout'
        c:/ruby/lib/ruby/gems/1.8/gems/delayed_job-2.0.3/lib/delayed/worker.rb:120:in `run'
        c:/ruby/lib/ruby/gems/1.8/gems/activesupport-2.3.8/lib/active_support/core_ext/benchmark.rb:10:in `realtime'
        c:/ruby/lib/ruby/gems/1.8/gems/delayed_job-2.0.3/lib/delayed/worker.rb:119:in `run'
        c:/ruby/lib/ruby/gems/1.8/gems/delayed_job-2.0.3/lib/delayed/worker.rb:180:in `reserve_and_run_one_job'
        c:/ruby/lib/ruby/gems/1.8/gems/delayed_job-2.0.3/lib/delayed/worker.rb:104:in `work_off'
        c:/ruby/lib/ruby/gems/1.8/gems/delayed_job-2.0.3/lib/delayed/worker.rb:103:in `times'
        c:/ruby/lib/ruby/gems/1.8/gems/delayed_job-2.0.3/lib/delayed/worker.rb:103:in `work_off'
        c:/ruby/lib/ruby/gems/1.8/gems/delayed_job-2.0.3/lib/delayed/worker.rb:78:in `start'
        c:/ruby/lib/ruby/gems/1.8/gems/activesupport-2.3.8/lib/active_support/core_ext/benchmark.rb:10:in `realtime'
        c:/ruby/lib/ruby/gems/1.8/gems/delayed_job-2.0.3/lib/delayed/worker.rb:77:in `start'
        c:/ruby/lib/ruby/gems/1.8/gems/delayed_job-2.0.3/lib/delayed/worker.rb:74:in `loop'
        c:/ruby/lib/ruby/gems/1.8/gems/delayed_job-2.0.3/lib/delayed/worker.rb:74:in `start'
        c:/ruby/lib/ruby/gems/1.8/gems/delayed_job-2.0.3/lib/delayed/command.rb:93:in `run'
        c:/ruby/lib/ruby/gems/1.8/gems/delayed_job-2.0.3/lib/delayed/command.rb:72:in `run_process'
        c:/ruby/lib/ruby/gems/1.8/gems/daemons-1.0.10/lib/daemons/application.rb:215:in `call'
        c:/ruby/lib/ruby/gems/1.8/gems/daemons-1.0.10/lib/daemons/application.rb:215:in `start_proc'
        c:/ruby/lib/ruby/gems/1.8/gems/daemons-1.0.10/lib/daemons/application.rb:225:in `call'
        c:/ruby/lib/ruby/gems/1.8/gems/daemons-1.0.10/lib/daemons/application.rb:225:in `start_proc'
        c:/ruby/lib/ruby/gems/1.8/gems/daemons-1.0.10/lib/daemons/application.rb:255:in `start'
        c:/ruby/lib/ruby/gems/1.8/gems/daemons-1.0.10/lib/daemons/controller.rb:72:in `run'
        c:/ruby/lib/ruby/gems/1.8/gems/daemons-1.0.10/lib/daemons.rb:188:in `run_proc'
        c:/ruby/lib/ruby/gems/1.8/gems/daemons-1.0.10/lib/daemons/cmdline.rb:105:in `call'
        c:/ruby/lib/ruby/gems/1.8/gems/daemons-1.0.10/lib/daemons/cmdline.rb:105:in `catch_exceptions'
        c:/ruby/lib/ruby/gems/1.8/gems/daemons-1.0.10/lib/daemons.rb:187:in `run_proc'
        c:/ruby/lib/ruby/gems/1.8/gems/delayed_job-2.0.3/lib/delayed/command.rb:71:in `run_process'
        c:/ruby/lib/ruby/gems/1.8/gems/delayed_job-2.0.3/lib/delayed/command.rb:65:in `daemonize'
        c:/ruby/lib/ruby/gems/1.8/gems/delayed_job-2.0.3/lib/delayed/command.rb:63:in `times'
        c:/ruby/lib/ruby/gems/1.8/gems/delayed_job-2.0.3/lib/delayed/command.rb:63:in `daemonize'
        script/delayed_job:5

    Without send_later it's working fine... Is there any solution?

    Read the article

  • The Lua stack overflow, is this a bug?

    - by xiayong
    Some days ago, our program crashed. I found the crash was in the Lua code. When I checked the Lua code, I found a stack overflow. Please look at this code in function luaD_precall (the line numbers are referenced below):

        1   if (!cl->isC) {  /* Lua function? prepare its call */
        2     CallInfo *ci;
        3     StkId st, base;
        4     Proto *p = cl->p;
        5     luaD_checkstack(L, p->maxstacksize);
        6     func = restorestack(L, funcr);
        7     if (!p->is_vararg) {  /* no varargs? */
        8       base = func + 1;
        9       if (L->top > base + p->numparams)
        10        L->top = base + p->numparams;
        11    }
        12    else {  /* vararg function */
        13      int nargs = cast_int(L->top - func) - 1;
        14      base = adjust_varargs(L, p, nargs);
        15      func = restorestack(L, funcr);  /* previous call may change the stack */
        16    }
        17    ci = inc_ci(L);  /* now `enter' new function */
        18    ci->func = func;
        19    L->base = ci->base = base;
        20    ci->top = L->base + p->maxstacksize;
        21    lua_assert(ci->top <= L->stack_last);
        22    L->savedpc = p->code;  /* starting point */
        23    ci->tailcalls = 0;
        24    ci->nresults = nresults;
        25    for (st = L->top; st < ci->top; st++)
        26      setnilvalue(st);
        27    L->top = ci->top;

    In my program, p->maxstacksize is 79 before line 5, and the current stack size is 51; after the call to luaD_checkstack, the stack size grows to 130. The Lua function uses varargs, so execution reaches line 14 and adjust_varargs is called:

        static StkId adjust_varargs (lua_State *L, Proto *p, int actual) {
          int i;
          int nfixargs = p->numparams;
          Table *htab = NULL;
          StkId base, fixed;
          for (; actual < nfixargs; ++actual)
            setnilvalue(L->top++);
        #if defined(LUA_COMPAT_VARARG)
          if (p->is_vararg & VARARG_NEEDSARG) {  /* compat. with old-style vararg? */
            int nvar = actual - nfixargs;  /* number of extra arguments */
            lua_assert(p->is_vararg & VARARG_HASARG);
            luaC_checkGC(L);
            htab = luaH_new(L, nvar, 1);  /* create `arg' table */

    The Lua function uses "arg", so luaC_checkGC is called in adjust_varargs. In luaC_checkGC, the current Lua stack size is reduced to 65! The call stack looks like this:

        luaC_step()
          singlestep()
            propagatemark()
              traversestack()
                checkstacksizes()
                  luaD_reallocstack()

    But p->maxstacksize is 79, so the stack size is no longer sufficient... When the program reaches line 27, L->top is bigger than L->stack_last, and the next operation causes the crash!

    Read the article

  • Debugging matchit plugin in vim (under Cygwin)

    - by system PAUSE
    The "matchit" plugin for vim is supposed to allow you to use the % key to jump between matching start/end tags when editing HTML, as well as /* and */ comment delimiters when editing other kinds of code. I've followed the exact instructions in ":help matchit", but % still doesn't work for me. It seems silly to ask "Why doesn't this work?" so instead I'm asking How can I diagnose the problem? Pointers to references are welcome, but specific vim-plugin-debugging techniques are preferred. Here is the ~/.vim directory: $ ls -ltaGR ~/.vim /cygdrive/y/.vim: total 0 drwxr-xr-x 1 spause 0 Sep 17 13:20 .. drwxr-xr-x 1 spause 0 Sep 16 13:59 doc drwxr-xr-x 1 spause 0 Sep 16 13:58 . drwxr-xr-x 1 spause 0 Sep 16 13:58 plugin /cygdrive/y/.vim/doc: total 24 -rw-r--r-- 1 spause 1961 Sep 16 13:59 tags drwxr-xr-x 1 spause 0 Sep 16 13:59 . -rw-r--r-- 1 spause 19303 Sep 16 13:58 matchit.txt drwxr-xr-x 1 spause 0 Sep 16 13:58 .. /cygdrive/y/.vim/plugin: total 32 drwxr-xr-x 1 spause 0 Sep 16 13:58 .. -rw-r--r-- 1 spause 30714 Sep 16 13:58 matchit.vim drwxr-xr-x 1 spause 0 Sep 16 13:58 . I am running vim 7.2 under Cygwin (installed Fall 2008). cygcheck shows: 1829k 2008/06/12 C:\cygwin\bin\cygwin1.dll Cygwin DLL version info: DLL version: 1.5.25 DLL epoch: 19 DLL bad signal mask: 19005 DLL old termios: 5 DLL malloc env: 28 API major: 0 API minor: 156 Shared data: 4 DLL identifier: cygwin1 Mount registry: 2 Cygnus registry name: Cygnus Solutions Cygwin registry name: Cygwin Program options name: Program Options Cygwin mount registry name: mounts v2 Cygdrive flags: cygdrive flags Cygdrive prefix: cygdrive prefix Cygdrive default prefix: Build date: Thu Jun 12 19:34:46 CEST 2008 CVS tag: cr-0x5f1 Shared id: cygwin1S4 In vim, :set shows: --- Options --- autoindent fileformat=dos shiftwidth=3 background=dark filetype=html syntax=html cedit=^F scroll=24 tabstop=3 expandtab shelltemp textmode viminfo='20,<50,s10,h Notably, the syntax and filetype are both recognized as HTML. (The syntax colouring is just fine.) If additional info is needed, please comment. UPDATE: Per answer by too much php: After trying vim -V1, I had changed my .vimrc to include a line set nocp so the compatible option is not on. :let loadad_matchit loaded_matchit #1 :set runtimepath? runtimepath=~/.vim,/usr/share/vim/vimfiles,/usr/share/vim/vim72,/usr/share/vim/vimfiles/after,~/.vim/after (~ is /cygdrive/y) Per answer by michael: :scriptnames 1: /cygdrive/y/.vimrc 2: /usr/share/vim/vim72/syntax/syntax.vim 3: /usr/share/vim/vim72/syntax/synload.vim 4: /usr/share/vim/vim72/syntax/syncolor.vim 5: /usr/share/vim/vim72/filetype.vim 6: /usr/share/vim/vim72/colors/evening.vim 7: /cygdrive/y/.vim/plugin/matchit.vim 8: /cygdrive/y/.vim/plugin/python_match.vim 9: /usr/share/vim/vim72/plugin/getscriptPlugin.vim 10: /usr/share/vim/vim72/plugin/gzip.vim 11: /usr/share/vim/vim72/plugin/matchparen.vim 12: /usr/share/vim/vim72/plugin/netrwPlugin.vim 13: /usr/share/vim/vim72/plugin/rrhelper.vim 14: /usr/share/vim/vim72/plugin/spellfile.vim 15: /usr/share/vim/vim72/plugin/tarPlugin.vim 16: /usr/share/vim/vim72/plugin/tohtml.vim 17: /usr/share/vim/vim72/plugin/vimballPlugin.vim 18: /usr/share/vim/vim72/plugin/zipPlugin.vim 19: /usr/share/vim/vim72/syntax/html.vim 20: /usr/share/vim/vim72/syntax/javascript.vim 21: /usr/share/vim/vim72/syntax/vb.vim 22: /usr/share/vim/vim72/syntax/css.vim Note that matchit.vim, html.vim, tohtml.vim, css.vim, and javascript.vim are all present. 
        :echo b:match_words
        E121: Undefined variable: b:match_words
        E15: Invalid expression: b:match_words

    Hm, this looks highly relevant. I'm now looking through :help matchit-debug to find out how to fix b:match_words.

    Read the article

  • How to decrypt a string in C# that was encrypted in Delphi

    - by Simon Linder
    Hi all, we have a project written in Delphi that we want to convert to C#. The problem is that we have some passwords and settings that are encrypted and written into the registry. When we need a specified password, we get it from the registry and decrypt it so we can use it. For the conversion into C# we have to do it the same way, so that the application can also be used by users that have the old version and want to upgrade it. Here is the code we use to encrypt/decrypt strings in Delphi:

        unit uCrypt;

        interface

        function EncryptString(strPlaintext, strPassword : String) : String;
        function DecryptString(strEncryptedText, strPassword : String) : String;

        implementation

        uses
          DCPcrypt2, DCPblockciphers, DCPdes, DCPmd5;

        const
          CRYPT_KEY = '1q2w3e4r5t6z7u8';

        function EncryptString(strPlaintext) : String;
        var
          cipher : TDCP_3des;
          strEncryptedText : String;
        begin
          if strPlaintext <> '' then
          begin
            try
              cipher := TDCP_3des.Create(nil);
              try
                cipher.InitStr(CRYPT_KEY, TDCP_md5);
                strEncryptedText := cipher.EncryptString(strPlaintext);
              finally
                cipher.Free;
              end;
            except
              strEncryptedText := '';
            end;
          end;
          Result := strEncryptedText;
        end;

        function DecryptString(strEncryptedText) : String;
        var
          cipher : TDCP_3des;
          strDecryptedText : String;
        begin
          if strEncryptedText <> '' then
          begin
            try
              cipher := TDCP_3des.Create(nil);
              try
                cipher.InitStr(CRYPT_KEY, TDCP_md5);
                strDecryptedText := cipher.DecryptString(strEncryptedText);
              finally
                cipher.Free;
              end;
            except
              strDecryptedText := '';
            end;
          end;
          Result := strDecryptedText;
        end;

        end.

    So for example when we want to encrypt the string asdf1234 we get the result WcOb/iKo4g8=. We now want to decrypt that string in C#. Here is what we tried to do:

        public static void Main(string[] args)
        {
            string Encrypted = "WcOb/iKo4g8=";
            string Password = "1q2w3e4r5t6z7u8";
            string DecryptedString = DecryptString(Encrypted, Password);
        }

        public static string DecryptString(string Message, string Passphrase)
        {
            byte[] Results;
            System.Text.UTF8Encoding UTF8 = new System.Text.UTF8Encoding();

            // Step 1. We hash the passphrase using MD5.
            // We use the MD5 hash generator as the result is a 128 bit byte array
            // which is a valid length for the TripleDES encoder we use below.
            MD5CryptoServiceProvider HashProvider = new MD5CryptoServiceProvider();
            byte[] TDESKey = HashProvider.ComputeHash(UTF8.GetBytes(Passphrase));

            // Step 2. Create a new TripleDESCryptoServiceProvider object.
            TripleDESCryptoServiceProvider TDESAlgorithm = new TripleDESCryptoServiceProvider();

            // Step 3. Setup the decoder.
            TDESAlgorithm.Key = TDESKey;
            TDESAlgorithm.Mode = CipherMode.ECB;
            TDESAlgorithm.Padding = PaddingMode.None;

            // Step 4. Convert the input string to a byte[].
            byte[] DataToDecrypt = Convert.FromBase64String(Message);

            // Step 5. Attempt to decrypt the string.
            try
            {
                ICryptoTransform Decryptor = TDESAlgorithm.CreateDecryptor();
                Results = Decryptor.TransformFinalBlock(DataToDecrypt, 0, DataToDecrypt.Length);
            }
            finally
            {
                // Clear the TripleDES and hash provider services of any sensitive information.
                TDESAlgorithm.Clear();
                HashProvider.Clear();
            }

            // Step 6. Return the decrypted string in UTF8 format.
            return UTF8.GetString(Results);
        }

    Well, the result differs from the expected result. After we call DecryptString() we expect to get asdf1234, but we get something else. Does anyone have an idea of how to decrypt that correctly? Thanks in advance, Simon
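
    A note for anyone comparing the two implementations: DCPcrypt's EncryptString is commonly described as running the block cipher in CFB mode with 8-bit feedback, starting from an all-zero IV, and zero-padding the 16-byte MD5 digest out to the 24-byte 3DES key. That is an assumption here, not something verified against the poster's data, but under it a matching C# setup would change steps 2 and 3 roughly like this:

        // Assumed DCPcrypt-compatible setup (CFB-8, all-zero IV, zero-padded key).
        byte[] key24 = new byte[24];
        Array.Copy(TDESKey, key24, TDESKey.Length);   // MD5 digest is 16 bytes; pad to 24 with zeros

        TripleDESCryptoServiceProvider TDESAlgorithm = new TripleDESCryptoServiceProvider();
        TDESAlgorithm.Key = key24;
        TDESAlgorithm.IV = new byte[8];               // DCPcrypt is assumed to start from a zero IV
        TDESAlgorithm.Mode = CipherMode.CFB;          // CFB, not ECB
        TDESAlgorithm.FeedbackSize = 8;               // 8-bit feedback
        TDESAlgorithm.Padding = PaddingMode.None;     // stream-like mode, so no padding

    If that still doesn't round-trip, checking a known plaintext/ciphertext pair against each candidate mode is the quickest way to pin down what the Delphi side actually does.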

    Read the article

  • How do I use C# and ADO.NET to query an Oracle table with a spatial column of type SDO_GEOMETRY?

    - by John Donahue
    My development machine is running Windows 7 Enterprise, 64-bit version. I am using Visual Studio 2010 Release Candidate. I am connecting to an Oracle 11g Enterprise server version 11.1.0.7.0. I had a difficult time locating Oracle client software that is made for 64-bit Windows systems and eventually landed here to download what I assume is the proper client connectivity software. I added a reference to "Oracle.DataAccess" which is version 2.111.6.0 (Runtime Version is v2.0.50727). I am targeting .NET CLR version 4.0 since all properties of my VS Solution are defaults and this is 2010 RC. I was then able to write a console application in C# that established connectivity, executed a SELECT statement, and properly returned data when the table in question does NOT contain a spatial column. My problem is that this no longer works when the table I query has a column of type SDO_GEOMETRY in it. Below is the simple console application I am trying to run that reproduces the problem. When the code gets to the line with the "ExecuteReader" command, an exception is raised and the message is "Unsupported column datatype".

        using System;
        using System.Data;
        using Oracle.DataAccess.Client;

        namespace ConsoleTestOracle
        {
            class Program
            {
                static void Main(string[] args)
                {
                    string oradb = string.Format("Data Source={0};User Id={1};Password={2};",
                        "hostname/servicename", "login", "password");
                    try
                    {
                        using (OracleConnection conn = new OracleConnection(oradb))
                        {
                            conn.Open();
                            OracleCommand cmd = new OracleCommand();
                            cmd.Connection = conn;
                            cmd.CommandText = "select * from SDO_8307_2D_POINTS";
                            cmd.CommandType = CommandType.Text;
                            OracleDataReader dr = cmd.ExecuteReader();
                        }
                    }
                    catch (Exception e)
                    {
                        string error = e.Message;
                    }
                }
            }
        }

    The fact that this code works when used against a table that does not contain a spatial column of type SDO_GEOMETRY makes me think I have my Windows 7 machine properly configured, so I am surprised that I get this exception when the table contains different kinds of columns. I don't know if there is some configuration on my machine or the Oracle machine that needs to be done, or if the Oracle client software I have installed is wrong or old and needs to be updated. Here is the SQL I used to create the table, populate it with some rows containing points in the spatial column, etc. if you want to try to reproduce this exactly.

    SQL Create Commands:

        create table SDO_8307_2D_Points (
          ObjectID number(38) not null unique,
          TestID   number,
          shape    SDO_GEOMETRY);

        Insert into SDO_8307_2D_Points values (1, 1,
          SDO_GEOMETRY(2001, 8307, null, SDO_ELEM_INFO_ARRAY(1, 1, 1), SDO_ORDINATE_ARRAY(10.0, 10.0)));

        Insert into SDO_8307_2D_Points values (2, 2,
          SDO_GEOMETRY(2001, 8307, null, SDO_ELEM_INFO_ARRAY(1, 1, 1), SDO_ORDINATE_ARRAY(10.0, 20.0)));

        insert into user_sdo_geom_metadata values ('SDO_8307_2D_Points', 'SHAPE',
          SDO_DIM_ARRAY(SDO_DIM_ELEMENT('Lat', -180, 180, 0.05), SDO_DIM_ELEMENT('Long', -90, 90, 0.05)), 8307);

        create index SDO_8307_2D_Point_indx on SDO_8307_2D_Points(shape)
          indextype is mdsys.spatial_index PARAMETERS ('sdo_indx_dims=2');

    Any advice or insights would be greatly appreciated. Thank you.
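
    One common workaround, offered as an assumption since it sidesteps rather than explains the "Unsupported column datatype" error: plain ODP.NET without Oracle's object/UDT mapping generally cannot materialize SDO_GEOMETRY columns, so have the server convert the geometry to a supported type in the SELECT list, for example well-known text via SDO_UTIL.TO_WKTGEOMETRY:

        // Sketch: avoid selecting the raw SDO_GEOMETRY column; convert it server-side.
        cmd.CommandText =
            "select ObjectID, TestID, SDO_UTIL.TO_WKTGEOMETRY(shape) as shape_wkt " +
            "from SDO_8307_2D_POINTS";
        using (OracleDataReader dr = cmd.ExecuteReader())
        {
            while (dr.Read())
            {
                // shape_wkt arrives as text, e.g. "POINT (10.0 10.0)"
                string wkt = dr.GetString(2);
            }
        }

    Parsing the WKT (or selecting t.shape.SDO_POINT.X and t.shape.SDO_POINT.Y for simple points) then happens on the client.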

    Read the article

  • Is there a C pre-processor which eliminates #ifdef blocks based on values defined/undefined?

    - by Jonathan Leffler
    Original Question

    What I'd like is not a standard C pre-processor, but a variation on it which would accept from somewhere - probably the command line via -DNAME1 and -UNAME2 options - a specification of which macros are defined, and would then eliminate dead code. It may be easier to understand what I'm after with some examples:

        #ifdef NAME1
        #define ALBUQUERQUE "ambidextrous"
        #else
        #define PHANTASMAGORIA "ghostly"
        #endif

    If the command were run with '-DNAME1', the output would be:

        #define ALBUQUERQUE "ambidextrous"

    If the command were run with '-UNAME1', the output would be:

        #define PHANTASMAGORIA "ghostly"

    If the command were run with neither option, the output would be the same as the input. This is a simple case - I'd be hoping that the code could handle more complex cases too. To illustrate with a real-world but still simple example:

        #ifdef USE_VOID
        #ifdef PLATFORM1
        #define VOID void
        #else
        #undef VOID
        typedef void VOID;
        #endif /* PLATFORM1 */
        typedef void * VOIDPTR;
        #else
        typedef mint VOID;
        typedef char * VOIDPTR;
        #endif /* USE_VOID */

    I'd like to run the command with -DUSE_VOID -UPLATFORM1 and get the output:

        #undef VOID
        typedef void VOID;
        typedef void * VOIDPTR;

    Another example:

        #ifndef DOUBLEPAD
        #if (defined NT) || (defined OLDUNIX)
        #define DOUBLEPAD 8
        #else
        #define DOUBLEPAD 0
        #endif /* NT */
        #endif /* !DOUBLEPAD */

    Ideally, I'd like to run with -UOLDUNIX and get the output:

        #ifndef DOUBLEPAD
        #if (defined NT)
        #define DOUBLEPAD 8
        #else
        #define DOUBLEPAD 0
        #endif /* NT */
        #endif /* !DOUBLEPAD */

    This may be pushing my luck! Motivation: large, ancient code base with lots of conditional code. Many of the conditions no longer apply - the OLDUNIX platform, for example, is no longer made and no longer supported, so there is no need to have references to it in the code. Other conditions are always true. For example, features are added with conditional compilation so that a single version of the code can be used for both older versions of the software where the feature is not available and newer versions where it is available (more or less). Eventually, the old versions without the feature are no longer supported - everything uses the feature - so the condition on whether the feature is present or not should be removed, and the 'when feature is absent' code should be removed too. I'd like to have a tool to do the job automatically because it will be faster and more reliable than doing it manually (which is rather critical when the code base includes 21,500 source files). (A really clever version of the tool might read #include'd files to determine whether the control macros - those specified by -D or -U on the command line - are defined in those files. I'm not sure whether that's truly helpful except as a backup diagnostic. Whatever else it does, though, the pseudo-pre-processor must not expand macros or include files verbatim. The output must be source similar to, but usually simpler than, the input code.)

    Status Report (one year later)

    After a year of use, I am very happy with 'sunifdef' recommended by the selected answer. It hasn't made a mistake yet, and I don't expect it to. The only quibble I have with it is stylistic. Given an input such as:

        #if (defined(A) && defined(B)) || defined(C) || (defined(D) && defined(E))

    and run with '-UC' (C is never defined), the output is:

        #if defined(A) && defined(B) || defined(D) && defined(E)

    This is technically correct because '&&' binds tighter than '||', but it is an open invitation to confusion.
    I would much prefer it to include parentheses around the sets of '&&' conditions, as in the original:

        #if (defined(A) && defined(B)) || (defined(D) && defined(E))

    However, given the obscurity of some of the code I have to work with, for that to be the biggest nit-pick is a strong compliment; it is a valuable tool to me.

    The New Kid on the Block

    Having checked the URL for inclusion in the information above, I see that (as predicted) there is a new program called Coan that is the successor to 'sunifdef'. It is available on SourceForge and has been since January 2010. I'll be checking it out... further reports later this year, or maybe next year, or sometime, or never.

    Read the article

  • Perl CGI that sends a temporary loading page to client then later sends the actual results page

    - by Kurt W. Leucht
    I've wasted at least a half day of my company's time searching the Internet for an answer and I'm getting wrapped around the axle here. I can't figure out the difference between all the different technology choices (long polling, ajax streaming, comet, XMPP, etc.) and I can't get a simple hello world example working on my PC. I am running Apache 2.2 and ActivePerl 5.10.0. JavaScript is completely acceptable for this solution. All I want to do is write a simple Perl CGI script such that, when it is accessed, it immediately returns some HTML that tells the user to wait, or maybe sends an animated GIF. Then, without any user intervention (no mouse clicks or anything), I want the CGI script to at some later time replace the wait message or the animated GIF with the actual HTML results from their query. I know this is simple stuff and websites do it all the time, but I can't find a single working example that I can cut and paste onto my machine that will work. Here is my simple hello world example that I've compiled from various Internet sources, but it doesn't seem to work. When I refresh this CGI URL in my web browser it prints nothing for 5 seconds, then it prints the PLEASE BE PATIENT web page, but not the results web page. What am I doing wrong?

        #!C:\Perl\bin\perl.exe

        use CGI;
        use CGI::Carp qw/fatalsToBrowser warningsToBrowser/;

        sub Create_HTML {
            my $html = <<EOHTML;
        <html>
        <head>
        <meta http-equiv="pragma" content="no-cache" />
        <meta http-equiv="expires" content="-1" />
        <script type="text/javascript" >
        var xmlhttp=false;
        /*@cc_on @*/
        /*@if (@_jscript_version >= 5)
        // JScript gives us Conditional compilation, we can cope with old IE versions.
        // and security blocked creation of the objects.
        try {
          xmlhttp = new ActiveXObject("Msxml2.XMLHTTP");
        } catch (e) {
          try {
            xmlhttp = new ActiveXObject("Microsoft.XMLHTTP");
          } catch (E) {
            xmlhttp = false;
          }
        }
        @end @*/

        if (!xmlhttp && typeof XMLHttpRequest!='undefined') {
          try {
            xmlhttp = new XMLHttpRequest();
          } catch (e) {
            xmlhttp=false;
          }
        }
        if (!xmlhttp && window.createRequest) {
          try {
            xmlhttp = window.createRequest();
          } catch (e) {
            xmlhttp=false;
          }
        }
        </script>
        <title>Ajax Streaming Connection Demo</title>
        </head>
        <body>
        Some header text.
        <p>
        <div id="response">PLEASE BE PATIENT</div>
        <p>
        Some footer text.
        </body>
        </html>
        EOHTML
            return $html;
        }

        my $cgi = new CGI;
        print $cgi->header;
        print Create_HTML();
        sleep(5);
        print "<script type=\"text/javascript\">\n";
        print "\$('response').innerHTML = 'Here are your results!';\n";
        print "</script>\n";

    Read the article

  • Ninject 3.0 MVC kernel.bind error Auto Registration

    - by user295734
    Getting an error on kernel.Bind(scanner => ...): "scanner" has the little error squiggle under it in VS 2010, with the message:

        Cannot convert lambda expression to type 'System.Type[]' because it is not a delegate type

    I'm trying to auto-register like the old kernel.Scan in 2.0. I cannot figure out what I am doing wrong. I've added and removed so many Ninject packages; I'm completely lost, and it's getting to be a big waste of time.

        using System;
        using System.Web;
        using Microsoft.Web.Infrastructure.DynamicModuleHelper;
        using Ninject;
        using Ninject.Web.Common;
        //using Ninject.Extensions.Conventions;
        using Ninject.Web.WebApi;
        using Ninject.Web.Mvc;
        using CommonServiceLocator.NinjectAdapter;
        using System.Reflection;
        using System.IO;
        using LR.Repository;
        using LR.Repository.Interfaces;
        using LR.Service.Interfaces;
        using System.Web.Http;

        public static class NinjectWebCommon
        {
            private static readonly Bootstrapper bootstrapper = new Bootstrapper();

            /// <summary>
            /// Starts the application
            /// </summary>
            public static void Start()
            {
                DynamicModuleUtility.RegisterModule(typeof(OnePerRequestHttpModule));
                DynamicModuleUtility.RegisterModule(typeof(NinjectHttpModule));
                bootstrapper.Initialize(CreateKernel);
            }

            /// <summary>
            /// Stops the application.
            /// </summary>
            public static void Stop()
            {
                bootstrapper.ShutDown();
            }

            /// <summary>
            /// Creates the kernel that will manage your application.
            /// </summary>
            /// <returns>The created kernel.</returns>
            private static IKernel CreateKernel()
            {
                var kernel = new StandardKernel();
                kernel.Bind<Func<IKernel>>().ToMethod(ctx => () => new Bootstrapper().Kernel);
                kernel.Bind<IHttpModule>().To<HttpApplicationInitializationHttpModule>();
                RegisterServices(kernel);
                return kernel;
            }

            /// <summary>
            /// Load your modules or register your services here!
            /// </summary>
            /// <param name="kernel">The kernel.</param>
            private static void RegisterServices(IKernel kernel)
            {
                kernel.Bind(scanner => scanner
                    .FromAssembliesInPath(Path.GetDirectoryName(Assembly.GetExecutingAssembly().Location))
                    .Select(IsServiceType)
                    .BindToDefaultInterface()
                    .Configure(binding => binding.InSingletonScope())
                );
            }

            private static bool IsServiceType(Type type)
            {
                // temp
                return true;
                // .Any() is not recognized either.
                // return type.IsClass && type.GetInterfaces().Any(intface => intface.Name == "I" + type.Name);
            }
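
    One hedged observation (an assumption based on the commented-out using line above, not something verifiable from the post): the fluent Bind(scanner => ...) overload is an extension method that only becomes visible when the Ninject.Extensions.Conventions namespace is imported. Without it, the compiler falls back to Bind(params Type[]) and produces exactly this lambda-conversion error, and .Any() being "not recognized" similarly suggests a missing using System.Linq;. A minimal sketch of the fix, assuming the Ninject.Extensions.Conventions 3.0 package is installed:

        using System.Linq;                      // for .Any()
        using Ninject.Extensions.Conventions;   // restore this using directive

        private static void RegisterServices(IKernel kernel)
        {
            // With the namespace imported, the scanner overload of Bind resolves.
            kernel.Bind(scanner => scanner
                .FromAssembliesInPath(Path.GetDirectoryName(Assembly.GetExecutingAssembly().Location))
                .SelectAllClasses()             // method names here follow the 3.0 conventions API
                .BindAllInterfaces()
                .Configure(binding => binding.InSingletonScope())
            );
        }

    The exact selector/binder method names should be checked against the installed conventions version; the essential point is the using directive.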

    Read the article

  • Delphi: how to create Firebird database programmatically

    - by Brad
    I'm using D2K9, Zeos 7 Alpha, and Firebird 2.1. I had this working before I added the autoinc field, although I'm not sure I was doing it 100% correctly. I don't know what order to run the SQL code in, with the triggers, generators, etc. I've tried several combinations; I'm guessing I'm doing something wrong beyond just that for this not to work.

    SQL file from IBExpert:

        /* Generated by IBExpert 5/4/2010 3:59:48 PM */

        /* Following SET SQL DIALECT is just for the Database Comparer */
        SET SQL DIALECT 3;

        /* Tables */
        CREATE GENERATOR GEN_EMAIL_ACCOUNTS_ID;

        CREATE TABLE EMAIL_ACCOUNTS (
            ID          INTEGER NOT NULL,
            FNAME       VARCHAR(35),
            LNAME       VARCHAR(35),
            ADDRESS     VARCHAR(100),
            CITY        VARCHAR(35),
            STATE       VARCHAR(35),
            ZIPCODE     VARCHAR(20),
            BDAY        DATE,
            PHONE       VARCHAR(20),
            UNAME       VARCHAR(255),
            PASS        VARCHAR(20),
            EMAIL       VARCHAR(255),
            CREATEDDATE DATE,
            "ACTIVE"    BOOLEAN DEFAULT 0 NOT NULL /* BOOLEAN = SMALLINT CHECK (value is null or value in (0, 1)) */,
            BANNED      BOOLEAN DEFAULT 0 NOT NULL /* BOOLEAN = SMALLINT CHECK (value is null or value in (0, 1)) */,
            "PUBLIC"    BOOLEAN DEFAULT 0 NOT NULL /* BOOLEAN = SMALLINT CHECK (value is null or value in (0, 1)) */,
            NOTES       BLOB SUB_TYPE 0 SEGMENT SIZE 1024
        );

        /* Primary Keys */
        ALTER TABLE EMAIL_ACCOUNTS ADD PRIMARY KEY (ID);

        /* Triggers */
        SET TERM ^ ;

        /* Trigger: EMAIL_ACCOUNTS_BI */
        CREATE OR ALTER TRIGGER EMAIL_ACCOUNTS_BI FOR EMAIL_ACCOUNTS
        ACTIVE BEFORE INSERT POSITION 0
        AS
        BEGIN
          IF (NEW.ID IS NULL) THEN
            NEW.ID = GEN_ID(GEN_EMAIL_ACCOUNTS_ID,1);
        END
        ^
        SET TERM ; ^

    Triggers:

        /* Following SET SQL DIALECT is just for the Database Comparer */
        SET SQL DIALECT 3;
        CREATE GENERATOR GEN_EMAIL_ACCOUNTS_ID;
        SET TERM ^ ;
        CREATE OR ALTER TRIGGER EMAIL_ACCOUNTS_BI FOR EMAIL_ACCOUNTS
        ACTIVE BEFORE INSERT POSITION 0
        AS
        BEGIN
          IF (NEW.ID IS NULL) THEN
            NEW.ID = GEN_ID(GEN_EMAIL_ACCOUNTS_ID,1);
        END
        ^
        SET TERM ; ^

    Generators:

        CREATE SEQUENCE GEN_EMAIL_ACCOUNTS_ID;
        ALTER SEQUENCE GEN_EMAIL_ACCOUNTS_ID RESTART WITH 2;
        /* Old syntax is:
        CREATE GENERATOR GEN_EMAIL_ACCOUNTS_ID;
        SET GENERATOR GEN_EMAIL_ACCOUNTS_ID TO 2;
        */

    My code:

        procedure TForm2.New1Click(Sender: TObject);
        var
          query: string;
        begin
          if JvOpenDialog1.Execute then
          begin
            ZConnection1.Disconnect;
            ZConnection1.Database := jvOpenDialog1.FileName;
            if not FileExists(ZConnection1.database) then
            begin
              ZConnection1.Properties.Add('createnewdatabase=create database ''' +
                ZConnection1.Database + ''' user ''sysdba'' password ''masterkey'' ' +
                'page_size 4096 default character set iso8859_2;');
              try
                ZConnection1.Connect;
              except
                ShowMessage('Error Connection To Database File');
                application.Terminate;
              end;
            end
            else
            begin
              ShowMessage('Database File Already Exists.');
              exit;
            end;
          end;

          query := 'CREATE DOMAIN BOOLEAN AS SMALLINT CHECK (value is null or value in (0, 1))';
          Zconnection1.ExecuteDirect(query);

          query := 'CREATE TABLE EMAIL_ACCOUNTS (ID INTEGER NOT NULL,FNAME VARCHAR(35),LNAME VARCHAR(35),' +
            'ADDRESS VARCHAR(100), CITY VARCHAR(35), STATE VARCHAR(35), ZIPCODE VARCHAR(20),' +
            'BDAY DATE, PHONE VARCHAR(20), UNAME VARCHAR(255), PASS VARCHAR(20),' +
            'EMAIL VARCHAR(255),CREATEDDATE DATE , ' +
            '"ACTIVE" BOOLEAN DEFAULT 0 NOT NULL,' +
            'BANNED BOOLEAN DEFAULT 0 NOT NULL,' +
            '"PUBLIC" BOOLEAN DEFAULT 0 NOT NULL,' +
            'NOTES BLOB SUB_TYPE 0 SEGMENT SIZE 1024)';
          //ZConnection.ExecuteDirect('CREATE TABLE NOTES (noteTitle TEXT PRIMARY KEY,noteDate DATE,noteNote TEXT)');
          Zconnection1.ExecuteDirect(query);

          query := 'CREATE SEQUENCE GEN_EMAIL_ACCOUNTS_ID;' +
            'ALTER SEQUENCE GEN_EMAIL_ACCOUNTS_ID RESTART WITH 1';
          Zconnection1.ExecuteDirect(query);

          query := 'ALTER TABLE EMAIL_ACCOUNTS ADD PRIMARY KEY (ID)';
          Zconnection1.ExecuteDirect(query);

          query := 'SET TERM ^';
          Zconnection1.ExecuteDirect(query);

          query := 'CREATE OR ALTER TRIGGER EMAIL_ACCOUNTS_BI FOR EMAIL_ACCOUNTS' +
            'ACTIVE BEFORE INSERT POSITION 0' +
            'AS' +
            'BEGIN' +
            'IF (NEW.ID IS NULL) THEN' +
            'NEW.ID = GEN_ID(GEN_EMAIL_ACCOUNTS_ID,1);' +
            'END' +
            '^' +
            'SET TERM ; ^';
          Zconnection1.ExecuteDirect(query);

          ZTable1.Active := true;
        end;

    Read the article

  • Rails HABTM accepts_nested_attributes_for mapping table

    - by Rabbott
    Currently I have a HABTM relationship between a datastream (stream) and a chart. I want to be able to use the 'typical' jQuery "add new stream" method to add a new entry into a mapping table that holds the chart_id and stream_id. But I think that most examples out there are made to handle a one-to-many relationship. The functionality provided here by Ryan Bates is what I'm looking for, but I don't want to use checkboxes; I want to use a select (drop-down) to add new items to the mapping table. I need THIS and THIS combined.

    chart.rb

        class Chart < ActiveRecord::Base
          has_and_belongs_to_many :streams, :readonly => false,
            :join_table => 'charts_streams'
          accepts_nested_attributes_for :streams,
            :reject_if => lambda { |a| a.values.all?(&:blank?) },
            :allow_destroy => true

    stream.rb

        class Stream < ActiveRecord::Base
          has_and_belongs_to_many :charts, :readonly => false,
            :join_table => 'charts_streams'

    application.js

        /*
         * Method used to add new child form partials
         */
        $('form a.add_child').click(function() {
          var assoc = $(this).attr('data-association');
          var content = $('#' + assoc + '_fields_template').html();
          var regexp = new RegExp('new_' + assoc, 'g');
          var new_id = new Date().getTime();
          var newElements = jQuery(content.replace(regexp, new_id)).hide();
          $(this).parent().before(newElements).prev().slideFadeToggle();
          return false;
        });

        /*
         * Method used to remove child form partials
         */
        $('form a.remove_child').live('click', function() {
          if (confirm('Are you sure?')) {
            var hidden_field = $(this).prev('input[type=hidden]')[0];
            if (hidden_field) {
              hidden_field.value = '1';
            }
            $(this).parents('.fields').slideFadeToggle();
          }
          return false;
        });

    _form.html.erb

        <div id="streams">
          <h4>Streams</h4>
          <% form.fields_for :streams do |stream_form| %>
            <%= render :partial => "stream", :locals => {:f => stream_form} %>
          <% end %>
        </div>
        <p><%= add_child_link 'Add a stream', :streams %></p>
        <%= new_child_fields_template(form, :streams) %>

    _stream.html.erb

        <div class="fields">
          <p>
            <%= f.label :stream_id %><br />
            <%= select_tag "chart[stream_ids][]", options_for_select(@streams.map {|s| [s.label, s.id]}, f.object.id) %>
          </p>
          <p><%= remove_child_link 'Remove stream', f %></p>
        </div>

    This may be overkill, and what I am looking to do could be much easier. Basically, when I create a chart I want to be able to add as many streams to it as I want, using the JavaScript for adding a new entry and removing an old one. Thanks for the help! This is 'working' but gives me an ActiveRecord::ReadOnlyRecord error when it hits the chart.update_attributes method. Am I adding the :readonly => false in the wrong spot? Or am I just doing it completely wrong?

    Read the article
