Search Results

Search found 34595 results on 1384 pages for 'vendor id'.


  • Rails app deployment challenge, not finding database table in production.log

    - by Stefan M
    I'm trying to setup PasswordPusher as my first ruby app ever. Building and running the webrick server as instructed in README works fine. It was only when I tried to add Apache ProxyPass and ProxyPassReverse that the page load slowed down to several minutes. So I gave mod_passenger a whirl but now it's unable to find the password table. Here's what I get in log/production.log. Started GET "/" for 10.10.2.13 at Sun Jun 10 08:07:19 +0200 2012 Processing by PasswordsController#new as HTML Completed 500 Internal Server Error in 1ms ActiveRecord::StatementInvalid (Could not find table 'passwords'): app/controllers/passwords_controller.rb:77:in `new' app/controllers/passwords_controller.rb:77:in `new' While in log/private.log I get a lot more output so here's just a snippet but it looks to me like it's working with the database. Edit: This was actually old log output, maybe from db:create. Migrating to AddUserToPassword (20120220172426) (0.3ms) ALTER TABLE "passwords" ADD "user_id" integer (0.0ms) PRAGMA index_list("passwords") (0.2ms) CREATE INDEX "index_passwords_on_user_id" ON "passwords" ("user_id") (0.7ms) INSERT INTO "schema_migrations" ("version") VALUES ('20120220172426') (0.1ms) select sqlite_version(*) (0.1ms) SELECT "schema_migrations"."version" FROM "schema_migrations" (0.0ms) PRAGMA index_list("passwords") (0.0ms) PRAGMA index_info('index_passwords_on_user_id') (4.6ms) PRAGMA index_list("rails_admin_histories") (0.0ms) PRAGMA index_info('index_rails_admin_histories') (0.0ms) PRAGMA index_list("users") (4.8ms) PRAGMA index_info('index_users_on_unlock_token') (0.0ms) PRAGMA index_info('index_users_on_reset_password_token') (0.0ms) PRAGMA index_info('index_users_on_email') (0.0ms) PRAGMA index_list("views") In my vhost I have it set to use RailsEnv private. <VirtualHost *:80> # ProxyPreserveHost on # # ProxyPass / http://10.220.100.209:180/ # ProxyPassReverse / http://10.220.100.209:180/ DocumentRoot /var/www/pwpusher/public <Directory /var/www/pwpusher/public> allow from all Options -MultiViews </Directory> RailsEnv private ServerName pwpush.intranet ErrorLog /var/log/apache2/error.log LogLevel debug CustomLog /var/log/apache2/access.log combined </VirtualHost> My passenger.conf in mods-enabled is default for Debian. <IfModule mod_passenger.c> PassengerRoot /usr PassengerRuby /usr/bin/ruby </IfModule> In the apache error.log I get something more cryptic to me. [Sun Jun 10 06:25:07 2012] [notice] Apache/2.2.16 (Debian) Phusion_Passenger/2.2.11 PHP/5.3.3-7+squeeze9 with Suhosin-Patch mod_ssl/2.2.16 OpenSSL/0.9.8o configured -- resuming normal operations /var/www/pwpusher/vendor/bundle/ruby/1.8/bundler/gems/modernizr-rails-09e9e6a92d67/lib/modernizr/rails/version.rb:3: warning: already initialized constant VERSION cache: [GET /] miss [Sun Jun 10 08:07:19 2012] [debug] mod_deflate.c(615): [client 10.10.2.13] Zlib: Compressed 728 to 423 : URL / /var/www/pwpusher/vendor/bundle/ruby/1.8/bundler/gems/modernizr-rails-09e9e6a92d67/lib/modernizr/rails/version.rb:3: warning: already initialized constant VERSION cache: [GET /] miss [Sun Jun 10 10:17:16 2012] [debug] mod_deflate.c(615): [client 10.10.2.13] Zlib: Compressed 728 to 423 : URL / Maybe that's routine stuff. I can see the rake command create files in the relative app root db/. I have private.sqlite3, production.sqlite3 among others. And here's my config/database.yml. 
base: &base
  adapter: sqlite3
  timeout: 5000

development:
  database: db/development.sqlite3
  <<: *base

test:
  database: db/test.sqlite3
  <<: *base

private:
  database: db/private.sqlite3
  <<: *base

production:
  database: db/production.sqlite3
  <<: *base

I've tried setting absolute paths in it, but that did not help.
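
    One detail stands out: the request is logged to log/production.log even though the vhost sets RailsEnv private, which suggests Passenger is booting the production environment against a production.sqlite3 that was never migrated. A plausible next step (a sketch assuming the standard Rails/Bundler/Passenger toolchain; adjust paths to your install) is to migrate whichever environment Passenger actually uses, then restart the app:

        cd /var/www/pwpusher
        # migrate the environment the log says is actually running...
        RAILS_ENV=production bundle exec rake db:migrate
        # ...or, if private is the intended one, migrate it and re-check the vhost
        RAILS_ENV=private bundle exec rake db:migrate
        touch tmp/restart.txt   # tells Passenger to reload the application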

    Read the article

  • Setting up VPN client: L2TP with IPsec

    - by zachar
    I need to connect to a VPN server. The connection works on Windows, but not in Ubuntu 10.04, and the number of options is confusing me. Here is the input I have: the VPN server's IP address, a pre-shared key for authentication, the information that MS-CHAPv2 is used, and a login and password for the VPN. I tried to set this up with Network Manager and with L2TP IPsec VPN Manager 1.0.9, but both failed. Here is some logged information from L2TP IPsec VPN Manager 1.0.9: Nov 09 15:21:46.854 ipsec_setup: Stopping Openswan IPsec... Nov 09 15:21:48.088 Stopping xl2tpd: xl2tpd. Nov 09 15:21:48.132 ipsec_setup: Starting Openswan IPsec U2.6.23/K2.6.32-49-generic... Nov 09 15:21:48.308 ipsec__plutorun: Starting Pluto subsystem... Nov 09 15:21:48.318 ipsec__plutorun: adjusting ipsec.d to /etc/ipsec.d Nov 09 15:21:48.338 ipsec__plutorun: 002 added connection description "my_vpn_name" Nov 09 15:21:48.348 ipsec__plutorun: 003 NAT-Traversal: Trying new style NAT-T Nov 09 15:21:48.348 ipsec__plutorun: 003 NAT-Traversal: ESPINUDP(1) setup failed for new style NAT-T family IPv4 (errno=19) Nov 09 15:21:48.349 ipsec__plutorun: 003 NAT-Traversal: Trying old style NAT-T Nov 09 15:21:48.994 104 "my_vpn_name" #1: STATE_MAIN_I1: initiate Nov 09 15:21:48.994 003 "my_vpn_name" #1: received Vendor ID payload [RFC 3947] method set to=109 Nov 09 15:21:48.994 003 "my_vpn_name" #1: received Vendor ID payload [Dead Peer Detection] Nov 09 15:21:48.994 106 "my_vpn_name" #1: STATE_MAIN_I2: sent MI2, expecting MR2 Nov 09 15:21:48.994 003 "my_vpn_name" #1: NAT-Traversal: Result using RFC 3947 (NAT-Traversal): i am NATed Nov 09 15:21:48.994 108 "my_vpn_name" #1: STATE_MAIN_I3: sent MI3, expecting MR3 Nov 09 15:21:48.994 004 "my_vpn_name" #1: STATE_MAIN_I4: ISAKMP SA established {auth=OAKLEY_PRESHARED_KEY cipher=oakley_3des_cbc_192 prf=oakley_sha group=modp1024} Nov 09 15:21:48.995 117 "my_vpn_name" #2: STATE_QUICK_I1: initiate Nov 09 15:21:48.995 004 "my_vpn_name" #2: STATE_QUICK_I2: sent QI2, IPsec SA established transport mode {ESP=>0x0c96795d <0x483e1a42 xfrm=AES_128-HMAC_SHA1 NATOA=none NATD=none DPD=none} Nov 09 15:21:49.996 [ERROR 210] Failed to open l2tp control file 'c my_vpn_name' and from syslog: Nov 9 15:21:46 o99 L2tpIPsecVpnControlDaemon: Opening client connection Nov 9 15:21:46 o99 L2tpIPsecVpnControlDaemon: Executing command ipsec setup stop Nov 9 15:21:46 o99 ipsec_setup: Stopping Openswan IPsec... Nov 9 15:21:48 o99 kernel: [ 4350.245171] NET: Unregistered protocol family 15 Nov 9 15:21:48 o99 ipsec_setup: ...Openswan IPsec stopped Nov 9 15:21:48 o99 L2tpIPsecVpnControlDaemon: Command ipsec setup stop finished with exit code 0 Nov 9 15:21:48 o99 L2tpIPsecVpnControlDaemon: Executing command invoke-rc.d xl2tpd stop Nov 9 15:21:48 o99 L2tpIPsecVpnControlDaemon: Command invoke-rc.d xl2tpd stop finished with exit code 0 Nov 9 15:21:48 o99 L2tpIPsecVpnControlDaemon: Opening client connection Nov 9 15:21:48 o99 L2tpIPsecVpnControlDaemon: Closing client connection Nov 9 15:21:48 o99 L2tpIPsecVpnControlDaemon: Executing command ipsec setup start Nov 9 15:21:48 o99 kernel: [ 4350.312483] NET: Registered protocol family 15 Nov 9 15:21:48 o99 ipsec_setup: Starting Openswan IPsec U2.6.23/K2.6.32-49-generic... Nov 9 15:21:48 o99 ipsec_setup: Using NETKEY(XFRM) stack Nov 9 15:21:48 o99 kernel: [ 4350.410774] Initializing XFRM netlink socket Nov 9 15:21:48 o99 kernel: [ 4350.413601] padlock: VIA PadLock not detected. Nov 9 15:21:48 o99 kernel: [ 4350.427311] padlock: VIA PadLock Hash Engine not detected.
Nov 9 15:21:48 o99 kernel: [ 4350.441533] padlock: VIA PadLock not detected. Nov 9 15:21:48 o99 ipsec_setup: ...Openswan IPsec started Nov 9 15:21:48 o99 L2tpIPsecVpnControlDaemon: Command ipsec setup start finished with exit code 0 Nov 9 15:21:48 o99 L2tpIPsecVpnControlDaemon: Executing command invoke-rc.d xl2tpd start Nov 9 15:21:48 o99 ipsec__plutorun: adjusting ipsec.d to /etc/ipsec.d Nov 9 15:21:48 o99 pluto: adjusting ipsec.d to /etc/ipsec.d Nov 9 15:21:48 o99 L2tpIPsecVpnControlDaemon: Command invoke-rc.d xl2tpd start finished with exit code 0 Nov 9 15:21:48 o99 ipsec__plutorun: 002 added connection description "my_vpn_name" Nov 9 15:21:48 o99 L2tpIPsecVpnControlDaemon: Executing command ipsec auto --ready Nov 9 15:21:48 o99 ipsec__plutorun: 003 NAT-Traversal: Trying new style NAT-T Nov 9 15:21:48 o99 ipsec__plutorun: 003 NAT-Traversal: ESPINUDP(1) setup failed for new style NAT-T family IPv4 (errno=19) Nov 9 15:21:48 o99 ipsec__plutorun: 003 NAT-Traversal: Trying old style NAT-T Nov 9 15:21:48 o99 L2tpIPsecVpnControlDaemon: Command ipsec auto --ready finished with exit code 0 Nov 9 15:21:48 o99 L2tpIPsecVpnControlDaemon: Executing command ipsec auto --up my_vpn_name Nov 9 15:21:48 o99 L2tpIPsecVpnControlDaemon: Command ipsec auto --up my_vpn_name finished with exit code 0 Nov 9 15:21:49 o99 L2tpIPsecVpnControlDaemon: Closing client connection Can anyone tell me something more about that? Where is the mistake?
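
    The IPsec half actually succeeds (the SA is established); the failure is at the L2TP layer. "[ERROR 210] Failed to open l2tp control file" usually means the GUI could not write its connect command ("c my_vpn_name") into xl2tpd's control FIFO, typically because xl2tpd is not running or created the FIFO somewhere else. A few checks worth trying (a sketch; paths assume Debian/Ubuntu defaults):

        # is xl2tpd running, and did it create its control FIFO?
        ps aux | grep [x]l2tpd
        ls -l /var/run/xl2tpd/l2tp-control
        # if the FIFO is missing, run xl2tpd in the foreground to see why
        sudo /etc/init.d/xl2tpd stop
        sudo xl2tpd -D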

    Read the article

  • SNTP, why do you mock me?!

    - by Matthew
    --- SOLVED SEE EDIT 5 --- My w2k3 pdc is configured as an authoritative time server. Other servers on the domain are able to sync with it if I manually specify it in the peer list. By if I try to sync from flags 'domhier', it wont resync; I get the error message The computer did not resync because no time data was available. I can only think that it is not querying the pdc. I also tried setting the registry as shown here (http://support.microsoft.com/kb/193825). But no luck (I have not restarted the server, I am hoping I wont have to since it is the pdc) If you would like any further information on my config, please let me know. Edit 1: I have set the w32time service config AnnouceFlags to 0x05 as documented here www.krr.org/microsoft/authoritative_time_servers.php and a number of other places. The PDC syncs to an external time source (ntp). I can get the stripchart on the client from the pdc no problems. The loginserver for the host I am trying to configure is shown as the pdc. Edit 2: The packet capture has revealed something interesting. The client is contacting the correct server, and getting a valid response but I still get the same error message. Here is the NTP excerpt from the client to the server Flags: 11.. .... = Leap Indicator: alarm condition (clock not synchronized) (3) ..01 1... = Version number: NTP Version 3 (3) .... .011 = Mode: client (3) Peer Clock Stratum: unspecified or unavailable (0) Peer Polling Interval: 10 (1024 sec) Peer Clock Precision: 0.015625 sec Root Delay: 0.0000 sec Root Dispersion: 1.0156 sec Reference Clock ID: NULL Reference Clock Update Time: Sep 1, 2010 05:29:39.8170 UTC Originate Time Stamp: NULL Receive Time Stamp: NULL Transmit Time Stamp: Nov 8, 2010 01:44:44.1450 UTC Key ID: DC080000 Here is the reply NTP excerpt from the server to the client Flags: 0x1c 00.. .... = Leap Indicator: no warning (0) ..01 1... = Version number: NTP Version 3 (3) .... .100 = Mode: server (4) Peer Clock Stratum: secondary reference (3) Peer Polling Interval: 10 (1024 sec) Peer Clock Precision: 0.00001 sec Root Delay: 0.1484 sec Root Dispersion: 0.1060 sec Reference Clock ID: 192.189.54.17 Reference Clock Update Time: Nov 8,2010 01:18:04.6223 UTC Originate Time Stamp: Nov 8, 2010 01:44:44.1450 UTC Receive Time Stamp: Nov 8, 2010 01:46:44.1975 UTC Transmit Time Stamp: Nov 8, 2010 01:46:44.1975 UTC Key ID: 00000000 Edit 3: dumpreg for paramters on pdc Value Name Value Type Value Data ------------------------------------------------------------------------ ServiceMain REG_SZ SvchostEntry_W32Time ServiceDll REG_EXPAND_SZ C:\WINDOWS\system32\w32time.dll NtpServer REG_SZ bhvmmgt01.domain.com,0x1 Type REG_SZ AllSync and config Value Name Value Type Value Data -------------------------------------------------------------------------- LastClockRate REG_DWORD 156249 MinClockRate REG_DWORD 155860 MaxClockRate REG_DWORD 156640 FrequencyCorrectRate REG_DWORD 4 PollAdjustFactor REG_DWORD 5 LargePhaseOffset REG_DWORD 50000000 SpikeWatchPeriod REG_DWORD 900 HoldPeriod REG_DWORD 5 LocalClockDispersion REG_DWORD 10 EventLogFlags REG_DWORD 2 PhaseCorrectRate REG_DWORD 7 MinPollInterval REG_DWORD 6 MaxPollInterval REG_DWORD 10 UpdateInterval REG_DWORD 100 MaxNegPhaseCorrection REG_DWORD -1 MaxPosPhaseCorrection REG_DWORD -1 AnnounceFlags REG_DWORD 5 MaxAllowedPhaseOffset REG_DWORD 300 FileLogSize REG_DWORD 10000000 FileLogName REG_SZ C:\Windows\Temp\w32time.log FileLogEntries REG_SZ 0-300 Edit 4: Here are some notables from the ntp log file on the pdc. ReadConfig: failed. 
Use default one 'TimeJumpAuditOffset'=0x00007080 DomainHierachy: we are now the domain root. ClockDispln: we're a reliable time service with no time source: LS: 0, TN: 864000000000, WAIT: 86400000 Edit 5: F&^%ING SOLVED! OK, so I was reading about people with similar problems: some mentioned w32time server settings applied by GPO, but I tested this early on and there were no settings applied to this service by GPO. Others said that the reporting software may not be picking up some old GPO settings. So I searched the registry for all w32time instances and came across an interesting key that indicated there might be some other NTP software running on the server. Sure enough, I looked through the installed software list and there the little F*&%ER was. Uninstalled and now working like a dream. FFFFFFFUUUUUUUUUUUU
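
    For anyone hitting the same "no time data was available" symptom: after ruling out a second NTP package like the one found above, the usual sequence for re-pointing a domain member at the domain hierarchy is the standard w32tm dance:

        w32tm /config /syncfromflags:domhier /update
        net stop w32time && net start w32time
        w32tm /resync /rediscover
        rem verify which source each machine is actually using
        w32tm /monitor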

    Read the article

  • Hanging of host network connections when starting KVM guest on bridge

    - by Chris Phillips
    Hi, I've a KVM system upon which I'm running a network bridge directly between all VM's and a bond0 (eth0, eth1) on the host OS. As such, all machines are presented on the same subnet, available outside of the box. The bond is doing mode 1 active / passive, with an arp_ip_target set to the default gateway, which has caused some issues in itself, but I can't see the bond configs mattering here myself. I'm seeing odd things most times when I stop and start a guest on the platform, in that on the host I lose network connectivity (icmp, ssh) for about 30 seconds. I don't lose connectivity on the other already running VM's though... they can always ping the default GW, but the host can't. I say "about 30 seconds" but from some tests it actually seems to be 28 seconds usually (or at least, I lose 28 pings...) and I'm wondering if this somehow relates to the bridge config. I'm not running STP on the bridge at all, and the forwarding delay is set to 1 second, path cost on the bond0 lowered to 10 and port priority of bond0 also lowered to 1. As such I don't think that the bridge should ever be able to think that bond0 is not connected just fine (as continued guest connectivity implies) yet the IP of the host, which is on the bridge device (... could that matter?? ) becomes unreachable. I'm fairly sure it's about the bridged networking, but at the same time as this happens when a VM is started there are clearly loads of other things also happening so maybe I'm way off the mark. Lack of connectivity: # ping 10.20.11.254 PING 10.20.11.254 (10.20.11.254) 56(84) bytes of data. 64 bytes from 10.20.11.254: icmp_seq=1 ttl=255 time=0.921 ms 64 bytes from 10.20.11.254: icmp_seq=2 ttl=255 time=0.541 ms type=1700 audit(1293462808.589:325): dev=vnet6 prom=256 old_prom=0 auid=42949672 95 ses=4294967295 type=1700 audit(1293462808.604:326): dev=vnet7 prom=256 old_prom=0 auid=42949672 95 ses=4294967295 type=1700 audit(1293462808.618:327): dev=vnet8 prom=256 old_prom=0 auid=42949672 95 ses=4294967295 kvm: 14116: cpu0 unimplemented perfctr wrmsr: 0x186 data 0x130079 kvm: 14116: cpu0 unimplemented perfctr wrmsr: 0xc1 data 0xffdd694a kvm: 14116: cpu0 unimplemented perfctr wrmsr: 0x186 data 0x530079 64 bytes from 10.20.11.254: icmp_seq=30 ttl=255 time=0.514 ms 64 bytes from 10.20.11.254: icmp_seq=31 ttl=255 time=0.551 ms 64 bytes from 10.20.11.254: icmp_seq=32 ttl=255 time=0.437 ms 64 bytes from 10.20.11.254: icmp_seq=33 ttl=255 time=0.392 ms brctl output of relevant bridge: # brctl showstp brdev brdev bridge id 8000.b2e1378d1396 designated root 8000.b2e1378d1396 root port 0 path cost 0 max age 19.99 bridge max age 19.99 hello time 1.99 bridge hello time 1.99 forward delay 0.99 bridge forward delay 0.99 ageing time 299.95 hello timer 0.50 tcn timer 0.00 topology change timer 0.00 gc timer 0.04 flags vnet5 (3) port id 8003 state forwarding designated root 8000.b2e1378d1396 path cost 100 designated bridge 8000.b2e1378d1396 message age timer 0.00 designated port 8003 forward delay timer 0.00 designated cost 0 hold timer 0.00 flags vnet0 (2) port id 8002 state forwarding designated root 8000.b2e1378d1396 path cost 100 designated bridge 8000.b2e1378d1396 message age timer 0.00 designated port 8002 forward delay timer 0.00 designated cost 0 hold timer 0.00 flags bond0 (1) port id 0001 state forwarding designated root 8000.b2e1378d1396 path cost 10 designated bridge 8000.b2e1378d1396 message age timer 0.00 designated port 0001 forward delay timer 0.00 designated cost 0 hold timer 0.00 flags I do see the new port listed as 
learning, but in line with the forward delay, only for 1 or 2 seconds when polling the brctl output on a loop. All pointers, tips or stabs in the dark appreciated.
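
    One classic cause of exactly this symptom: a Linux bridge adopts the lowest MAC address among its ports, so if a newly created vnet tap comes up with a lower MAC than bond0, the bridge's own MAC (and with it the host's IP) effectively moves, and upstream ARP caches can take roughly 30 seconds to notice. A cheap experiment (a sketch using the interface names above):

        # pin the bridge's MAC to the bond's so adding a tap can't change it
        ip link set brdev address $(cat /sys/class/net/bond0/address)
        # with STP off, the forward delay can also drop to zero
        brctl setfd brdev 0

    If pinning the MAC makes the outage disappear, the MAC/ARP flap was the culprit rather than anything in the guest startup path.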

    Read the article

  • Bridging a non-persistent PPP connection to wireless (or wired) in Windows XP

    - by phooze
    I have a 3G modem-like device (eMobile's D01NX, PC card style, for any Japan nerds out there) that I use to connect my PC to the Internet. I'd like to bridge this connection with another computer either via an ad-hoc wireless network, or a simple cross-over cable (either are options). However, when I open "Network Connections", I do not see the PPP connection (otherwise I could click both and bridge). I believe this is because there is software (provided by the vendor) that is handling the card directly and registering a PPP connection dynamically. When connected, an ipconfig at the command line yields: Ethernet adapter wireless: Connection-specific DNS Suffix . : Autoconfiguration IP Address. . . : 169.254.5.169 Subnet Mask . . . . . . . . . . . : 255.255.0.0 Default Gateway . . . . . . . . . : Ethernet adapter lan: Media State . . . . . . . . . . . : Media disconnected PPP adapter {B59EEDDE-A22B-48DF-93E5-04842B641257}: Connection-specific DNS Suffix . : IP Address. . . . . . . . . . . . : 114.xx.xxx.xx Subnet Mask . . . . . . . . . . . : 255.255.255.255 Default Gateway . . . . . . . . . : 114.xx.xxx.xx (I've commented out my IP address for privacy reasons, but what does appear there is a functional Internet IP address.) When I disconnect the adapter with the vendor software, the PPP connection disappears completely from the ipconfig list. Any ideas on how to do this?
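
    For what it's worth, XP's Network Bridge only accepts Ethernet-like adapters, so a PPP/WAN connection generally cannot be bridged even when it is visible in Network Connections; the usual workarounds are Internet Connection Sharing (if the vendor's connection ever shows up there) or plain IP routing. A rough sketch of the routing half (addresses are made up, and note that without ICS/NAT the second PC's private address will not be translated onto the 3G link):

        :: on the PC with the 3G card: let Windows forward between interfaces
        reg add HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters /v IPEnableRouter /t REG_DWORD /d 1 /f
        :: reboot, then give the crossover NIC a static address, e.g. 192.168.137.1

        :: on the second PC: same subnet, gateway pointed at the first PC
        netsh interface ip set address "Local Area Connection" static 192.168.137.2 255.255.255.0 192.168.137.1 1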

    Read the article

  • Where's my memory?! Nginx + PHP-FPM front end webserver slows to a crawl...

    - by incredimike
    I'm not sure if I have a problem with a memory leak (as my hosting company suggests), or if we both need to read http://linuxatemyram.com. Maybe you clever people can help us out? This is a front-end webserver VM running essentially only nginx & php-fpm on RHEL 5.5. This server is powering Magento, a PHP eCommerce thinggy. The server is running in a shared environment, but we're changing that soon. Anyway.. after a reboot the server runs just fine, but within a day it will grind itself into nothingness. Pages will take literally 2 minutes to load, CPU spikes like crazy, etc.. The console is even sluggish when I SSH in. It's like my whole server is being brought to its knees. I've also been monitoring the DB server via top and tcpdumping incoming traffic. The DB stays idle for a good portion of that "slow" load time. When i start seeing queries coming from the front-end server, the page loads soon afterward. Here are some stats after me logging in during a slow-down, after restarting php-fpm: [mike@front01 ~]$ free -m total used free shared buffers cached Mem: 5963 5217 745 0 192 314 -/+ buffers/cache: 4711 1252 Swap: 4047 4 4042 [mike@front01 ~]$ top top - 11:38:55 up 2 days, 1:01, 3 users, load average: 0.06, 0.17, 0.21 Tasks: 131 total, 1 running, 130 sleeping, 0 stopped, 0 zombie Cpu0 : 0.0%us, 0.3%sy, 0.0%ni, 99.3%id, 0.3%wa, 0.0%hi, 0.0%si, 0.0%st Cpu1 : 0.3%us, 0.0%sy, 0.0%ni, 99.7%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st Cpu2 : 0.0%us, 0.0%sy, 0.0%ni,100.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st Cpu3 : 0.0%us, 0.0%sy, 0.0%ni,100.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st Mem: 6106800k total, 5361288k used, 745512k free, 199960k buffers Swap: 4144728k total, 4976k used, 4139752k free, 328480k cached PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND 31806 apache 15 0 601m 120m 37m S 0.0 2.0 0:22.23 php-fpm 31805 apache 15 0 549m 66m 31m S 0.0 1.1 0:14.54 php-fpm 31809 apache 16 0 547m 65m 32m S 0.0 1.1 0:12.84 php-fpm 32285 apache 15 0 546m 63m 33m S 0.0 1.1 0:09.22 php-fpm 32373 apache 15 0 546m 62m 32m S 0.0 1.1 0:09.66 php-fpm 31808 apache 16 0 543m 60m 35m S 0.0 1.0 0:18.93 php-fpm 31807 apache 16 0 533m 49m 30m S 0.0 0.8 0:08.93 php-fpm 32092 apache 15 0 535m 48m 27m S 0.0 0.8 0:06.67 php-fpm 4392 root 18 0 194m 10m 7184 S 0.0 0.2 0:06.96 cvd 4064 root 15 0 154m 8304 4220 S 0.0 0.1 3:55.57 snmpd 4394 root 15 0 119m 5660 2944 S 0.0 0.1 0:02.84 EvMgrC 31804 root 15 0 519m 5180 932 S 0.0 0.1 0:00.46 php-fpm 4138 ntp 15 0 23396 5032 3904 S 0.0 0.1 0:02.38 ntpd 643 nginx 15 0 95276 4408 1524 S 0.0 0.1 0:01.15 nginx 5131 root 16 0 90128 3340 2600 S 0.0 0.1 0:01.41 sshd 28467 root 15 0 90128 3340 2600 S 0.0 0.1 0:00.35 sshd 32602 root 16 0 90128 3332 2600 S 0.0 0.1 0:00.36 sshd 1614 root 16 0 90128 3308 2588 S 0.0 0.1 0:00.02 sshd 2817 root 5 -10 7216 3140 1724 S 0.0 0.1 0:03.80 iscsid 4161 root 15 0 66948 2340 800 S 0.0 0.0 0:10.35 sendmail 1617 nicole 17 0 53876 2000 1516 S 0.0 0.0 0:00.02 sftp-server ... Is there anything else I should be looking at, or any more information that might be useful? I'm just a developer, but the slowdowns on this system worry me and make it hard to do my work.. Help me out, ServerFault!
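
    Whether or not there is a true leak, the top output shows each php-fpm worker holding 50-120 MB resident (over 500 MB virtual), so an unbounded worker count will eventually eat the box. Capping the pool and recycling workers bounds the damage; a sketch of the relevant pool settings (directive names per the php-fpm bundled with PHP 5.3+, the older standalone php-fpm patch spells them differently, and the values here are starting points to tune):

        pm = dynamic
        pm.max_children = 8        ; worst case, 8 workers must fit in RAM
        pm.start_servers = 2
        pm.min_spare_servers = 2
        pm.max_spare_servers = 4
        pm.max_requests = 500      ; recycle each worker after 500 requests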

    Read the article

  • "Service Unavailable" when browsing to static HTML page in non-application IIS website on Windows 2003 (possibly SharePoint WSS 2.0 related?)

    - by Jordan Rieger
    Background: My client has an old Pentium III Windows 2003 server whose 16/36 GB disks are dying. On it he has a database-driven web site and email application that needs further customization by a developer (me). First we need to get it working on the new server. The original developer is no longer available to provide a system setup guide. So my client got a tech who imaged the old drives over to the new server and managed to get it booting. But the IIS-driven site no longer works. In fact it seems that IIS itself does not work. Problem: Service Unavailable when attempting to browse from the server itself to the URL for a local Web Site called test which I setup in IIS to serve a single static index.htm file. This I did to isolate the problem, and eliminate the client's application from the equation. The site is setup on port 80 with the host header "test.myclientsdomain.com", and I used the etc\hosts file to point that host at the local IP. I know the host entry took effect because I can ping it. When doing an iisreset, I get: Attempting start... Restart attempt failed. IIS Admin Service or a service dependent on IIS Admin is not active. It most likely failed to start, which may mean that it's disabled. Despite this message, the services all stay in the Started state. The only relevant System event logs I found are: Event Type: Error Event Source: W3SVC Event Category: None Event ID: 1002 Date: 11/4/2012 Time: 11:04:47 PM User: N/A Computer: ALPHA1 Description: Application pool 'DefaultAppPool' is being automatically disabled due to a series of failures in the process(es) serving that application pool. Event Type: Error Event Source: W3SVC Event Category: None Event ID: 1039 Date: 11/4/2012 Time: 11:13:12 PM User: N/A Computer: ALPHA1 Description: A process serving application pool 'DefaultAppPool' reported a failure. The process id was '5636'. The data field contains the error number. Data: 0000: 7e 00 07 80 ~.. And one Application event log: Event Type: Error Event Source: Windows SharePoint Services 2.0 Event Category: None Event ID: 1000 Date: 11/4/2012 Time: 11:34:04 PM User: N/A Computer: ALPHA1 Description: #50070: Unable to connect to the database STS_Config on ALPHA2\SharePoint. Check the database connection information and make sure that the database server is running. That last log tells me that the tech may have initially tried to have both the old and the new server running, by renaming the new server from ALPHA1 to ALPHA2. And perhaps SharePoint grabbed onto that change, and now can't tell that the machine name has been switched back to the old ALPHA1. But why would SharePoint interfere with a static IIS web site serving a single HTML file? The test site is not even within an Application pool (I clicked the Remove button.) What I have tried/eliminated: No relevant services seem to be disabled: IIS Admin, WWW Publishing, Sharepoint Timer Giving Full Control to All Users/Everyone on the c:\inetpub\test folder serving my test site. I can connect to and query the local SharePoint config database (ALPHA1\SHAREPOINT\STS_CONFIG) from SSMS. But when I try to do stsadm -o setconfigdb -connect -databaseserver ALPHA1\SHAREPOINT it tells me The SharePoint admininstration port does not exist. Please use stsadm.exe to create it. And when I do that, using the port 9487 specified in the IIS SharePoint Admin site config, it tells me the port is already in use. Needless to say, simply browsing to the admin site gives me a similar error about being unable to reach the config database. 
I didn't want to go further down the SharePoint path as it may be completed unrelated to my IIS issue, and I don't even know yet if SharePoint is required for this application to work. The app itself is ASP.Net/C#/Silverlight and a little MS Word integration (maybe that's where the SharePoint stuff comes in.)
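
    Since "Service Unavailable" on IIS 6 is the signature of a stopped or disabled application pool, and the W3SVC 1002 event says DefaultAppPool is being auto-disabled, a useful isolation step is to move the static test site into a brand-new pool with no SharePoint wiring. A sketch using the stock IIS 6 admin scripts (the site ID placeholder is hypothetical; look it up in IIS Manager first, and create "CleanPool" there too):

        cd /d C:\Inetpub\AdminScripts
        rem which pool does the test site run in?
        cscript adsutil.vbs GET W3SVC/<siteID>/ROOT/AppPoolId
        rem point it at the freshly created, empty pool
        cscript adsutil.vbs SET W3SVC/<siteID>/ROOT/AppPoolId "CleanPool"

    If static content then serves, the problem is whatever SharePoint/WSS injected into the failing pool rather than IIS itself.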

    Read the article

  • jqgrid with asp.net webmethod and json working with sorting, paging, searching and LINQ

    - by aimlessWonderer
    THIS WORKS! Most topics covering jqgrid and asp.net seem to relate to just receiving JSON, or working in the MVC framework, or utilizing other handlers or web services... but not many dealt with actually passing parameters back to an actual webmethod in the codebehind. Furthermore, scarce are the examples that contain successful implementation the AJAX paging, sorting, or searching along with LINQ to SQL for asp.net jqGrid. Below is a working example that may help others who need help to pass parameters to jqGrid in order to have correct paging, sorting, filtering.. it uses pieces from here and there... ================================================== First, THE JAVASCRIPT <script type="text/javascript"> $(document).ready(function() { var grid = $("#list"); $("#list").jqGrid({ // setup custom parameter names to pass to server prmNames: { search: "isSearch", nd: null, rows: "numRows", page: "page", sort: "sortField", order: "sortOrder" }, // add by default to avoid webmethod parameter conflicts postData: { searchString: '', searchField: '', searchOper: '' }, // setup ajax call to webmethod datatype: function(postdata) { mtype: "GET", $.ajax({ url: 'PageName.aspx/getGridData', type: "POST", contentType: "application/json; charset=utf-8", data: JSON.stringify(postdata), dataType: "json", success: function(data, st) { if (st == "success") { var grid = jQuery("#list")[0]; grid.addJSONData(JSON.parse(data.d)); } }, error: function() { alert("Error with AJAX callback"); } }); }, // this is what jqGrid is looking for in json callback jsonReader: { root: "rows", page: "page", total: "totalpages", records: "totalrecords", cell: "cell", id: "id", //index of the column with the PK in it userdata: "userdata", repeatitems: true }, colNames: ['Id', 'First Name', 'Last Name'], colModel: [ { name: 'id', index: 'id', width: 55, search: false }, { name: 'fname', index: 'fname', width: 200, searchoptions: { sopt: ['eq', 'ne', 'cn']} }, { name: 'lname', index: 'lname', width: 200, searchoptions: { sopt: ['eq', 'ne', 'cn']} } ], rowNum: 10, rowList: [10, 20, 30], pager: jQuery("#pager"), sortname: "fname", sortorder: "asc", viewrecords: true, caption: "Grid Title Here" }).jqGrid('navGrid', '#pager', { edit: false, add: false, del: false }, {}, // default settings for edit {}, // add {}, // delete { closeOnEscape: true, closeAfterSearch: true}, //search {} ) }); </script> ================================================== Second, THE C# WEBMETHOD [WebMethod] public static string getGridData(int? numRows, int? page, string sortField, string sortOrder, bool isSearch, string searchField, string searchString, string searchOper) { string result = null; MyDataContext db = null; try { //--- retrieve the data db = new MyDataContext("my connection string path"); var query = from u in db.TBL_USERs select u; //--- determine if this is a search filter if (isSearch) { searchOper = getOperator(searchOper); // need to associate correct operator to value sent from jqGrid string whereClause = String.Format("{0} {1} {2}", searchField, searchOper, "@" + searchField); //--- associate value to field parameter Dictionary<string, object> param = new Dictionary<string, object>(); param.Add("@" + searchField, searchString); query = query.Where(whereClause, new object[1] { param }); } //--- setup calculations int pageIndex = page ?? 1; //--- current page int pageSize = numRows ?? 
10; //--- number of rows to show per page int totalRecords = query.Count(); //--- number of total items from query int totalPages = (int)Math.Ceiling((decimal)totalRecords / (decimal)pageSize); //--- number of pages //--- filter dataset for paging and sorting IQueryable<TBL_USER> orderedRecords = query.OrderBy(sortField); IEnumerable<TBL_USER> sortedRecords = orderedRecords.ToList(); if (sortOrder == "desc") sortedRecords = sortedRecords.Reverse(); sortedRecords = sortedRecords .Skip((pageIndex - 1) * pageSize) //--- page the data .Take(pageSize); //--- format json var jsonData = new { totalpages = totalPages, //--- number of pages page = pageIndex, //--- current page totalrecords = totalRecords, //--- total items rows = ( from row in sortedRecords select new { i = row.USER_ID, cell = new string[] { row.USER_ID.ToString(), row.FNAME.ToString(), row.LNAME } } ).ToArray() }; result = Newtonsoft.Json.JsonConvert.SerializeObject(jsonData); } catch (Exception ex) { Debug.WriteLine(ex); } finally { if (db != null) db.Dispose(); } return result; } ================================================== Third, NECESSITIES In order to have dynamic OrderBy clauses in the LINQ, I had to pull in a class to my AppCode folder called 'Dynamic.cs'. You can retrieve the file by downloading here. You will find the file in the "DynamicQuery" folder. That file will give you the ability to utilize dynamic ORDERBY clauses, since we don't know what column we're filtering by except on the initial load. To serialize the JSON back from the C# to the JS, I incorporated the James Newton-King JSON.net DLL found here: http://json.codeplex.com/releases/view/37810. After downloading, there is a "Newtonsoft.Json.Compact.dll" which you can add in your Bin folder as a reference. Here's my usings block: using System; using System.Collections; using System.Collections.Generic; using System.Linq; using System.Web.UI.WebControls; using System.Web.Services; using System.Linq.Dynamic; For the Javascript references, I'm using the following scripts in respective order in case that helps some folks: 1) jquery-1.3.2.min.js ... 2) jquery-ui-1.7.2.custom.min.js ... 3) json.min.js ... 4) i18n/grid.locale-en.js ... 5) jquery.jqGrid.min.js For the CSS, I'm using jqGrid's necessities as well as the jQuery UI Theme: 1) jquery_theme/jquery-ui-1.7.2.custom.css ... 2) ui.jqgrid.css The key to getting the parameters from the JS to the WebMethod, without having to parse an unserialized string on the backend or set up some JS logic to switch methods for different numbers of parameters, was this block: postData: { searchString: '', searchField: '', searchOper: '' }, Those parameters will still be set correctly when you actually do a search, and then reset to empty when you "reset" or want the grid to not do any filtering. Hope this helps some others!!! Please reply if you find major issues or ways of refactoring or doing better that I haven't considered.
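
    One piece the snippets above assume but never show: the page needs the empty table and pager elements in place before the document-ready script runs, with ids matching the $("#list") and "#pager" selectors. Something like:

        <table id="list"></table>
        <div id="pager"></div>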

    Read the article

  • Using jQuery and OData to Insert a Database Record

    - by Stephen Walther
    In my previous blog entry, I explored two ways of inserting a database record using jQuery. We added a new Movie to the Movie database table by using a generic handler and by using a WCF service. In this blog entry, I want to take a brief look at how you can insert a database record using OData. Introduction to OData The Open Data Protocol (OData) was developed by Microsoft to be an open standard for communicating data across the Internet. Because the protocol is compatible with standards such as REST and JSON, the protocol is particularly well suited for Ajax. OData has undergone several name changes. It was previously referred to as Astoria and ADO.NET Data Services. OData is used by Sharepoint Server 2010, Azure Storage Services, Excel 2010, SQL Server 2008, and project code name “Dallas.” Because OData is being adopted as the public interface of so many important Microsoft technologies, it is a good protocol to learn. You can learn more about OData by visiting the following websites: http://www.odata.org http://msdn.microsoft.com/en-us/data/bb931106.aspx When using the .NET framework, you can easily expose database data through the OData protocol by creating a WCF Data Service. In this blog entry, I will create a WCF Data Service that exposes the Movie database table. Create the Database and Data Model The MoviesDB database is a simple database that contains the following Movies table: You need to create a data model to represent the MoviesDB database. In this blog entry, I use the ADO.NET Entity Framework to create my data model. However, WCF Data Services and OData are not tied to any particular OR/M framework such as the ADO.NET Entity Framework. For details on creating the Entity Framework data model for the MoviesDB database, see the previous blog entry. Create a WCF Data Service You create a new WCF Service by selecting the menu option Project, Add New Item and selecting the WCF Data Service item template (see Figure 1). Name the new WCF Data Service MovieService.svc. Figure 1 – Adding a WCF Data Service Listing 1 contains the default code that you get when you create a new WCF Data Service. There are two things that you need to modify. Listing 1 – New WCF Data Service File using System; using System.Collections.Generic; using System.Data.Services; using System.Data.Services.Common; using System.Linq; using System.ServiceModel.Web; using System.Web; namespace WebApplication1 { public class MovieService : DataService< /* TODO: put your data source class name here */ > { // This method is called only once to initialize service-wide policies. public static void InitializeService(DataServiceConfiguration config) { // TODO: set rules to indicate which entity sets and service operations are visible, updatable, etc. // Examples: // config.SetEntitySetAccessRule("MyEntityset", EntitySetRights.AllRead); // config.SetServiceOperationAccessRule("MyServiceOperation", ServiceOperationRights.All); config.DataServiceBehavior.MaxProtocolVersion = DataServiceProtocolVersion.V2; } } } First, you need to replace the comment /* TODO: put your data source class name here */ with a class that represents the data that you want to expose from the service. In our case, we need to replace the comment with a reference to the MoviesDBEntities class generated by the Entity Framework. Next, you need to configure the security for the WCF Data Service. By default, you cannot query or modify the movie data. We need to update the Entity Set Access Rule to enable us to insert a new database record. 
The updated MovieService.svc is contained in Listing 2: Listing 2 – MovieService.svc using System.Data.Services; using System.Data.Services.Common; namespace WebApplication1 { public class MovieService : DataService<MoviesDBEntities> { public static void InitializeService(DataServiceConfiguration config) { config.SetEntitySetAccessRule("Movies", EntitySetRights.AllWrite); config.DataServiceBehavior.MaxProtocolVersion = DataServiceProtocolVersion.V2; } } } That’s all we have to do. We can now insert a new Movie into the Movies database table by posting a new Movie to the following URL: /MovieService.svc/Movies The request must be a POST request. The Movie must be represented as JSON. Using jQuery with OData The HTML page in Listing 3 illustrates how you can use jQuery to insert a new Movie into the Movies database table using the OData protocol. Listing 3 – Default.htm <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html xmlns="http://www.w3.org/1999/xhtml"> <head> <title>jQuery OData Insert</title> <script src="http://ajax.microsoft.com/ajax/jquery/jquery-1.4.2.js" type="text/javascript"></script> <script src="Scripts/json2.js" type="text/javascript"></script> </head> <body> <form> <label>Title:</label> <input id="title" /> <br /> <label>Director:</label> <input id="director" /> </form> <button id="btnAdd">Add Movie</button> <script type="text/javascript"> $("#btnAdd").click(function () { // Convert the form into an object var data = { Title: $("#title").val(), Director: $("#director").val() }; // JSONify the data var data = JSON.stringify(data); // Post it $.ajax({ type: "POST", contentType: "application/json; charset=utf-8", url: "MovieService.svc/Movies", data: data, dataType: "json", success: insertCallback }); }); function insertCallback(result) { // unwrap result var newMovie = result["d"]; // Show primary key alert("Movie added with primary key " + newMovie.Id); } </script> </body> </html> jQuery does not include a JSON serializer. Therefore, we need to include the JSON2 library to serialize the new Movie that we wish to create. The Movie is serialized by calling the JSON.stringify() method: var data = JSON.stringify(data); You can download the JSON2 library from the following website: http://www.json.org/js.html The jQuery ajax() method is called to insert the new Movie. Notice that both the contentType and dataType are set to use JSON. The jQuery ajax() method is used to perform a POST operation against the URL MovieService.svc/Movies. Because the POST payload contains a JSON representation of a new Movie, a new Movie is added to the database table of Movies. When the POST completes successfully, the insertCallback() method is called. The new Movie is passed to this method. The method simply displays the primary key of the new Movie: Summary The OData protocol (and its enabling technology named WCF Data Services) works very nicely with Ajax. By creating a WCF Data Service, you can quickly expose your database data to an Ajax application by taking advantage of open standards such as REST, JSON, and OData. In the next blog entry, I want to take a closer look at how the OData protocol supports different methods of querying data.
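
    One caveat worth flagging: EntitySetRights.AllWrite grants write access only, so GET queries against /MovieService.svc/Movies will be rejected. EntitySetRights is a flags enum, so if the same service should also answer queries, the rights can be combined (a variation on the article's configuration, not taken from it):

        config.SetEntitySetAccessRule("Movies",
            EntitySetRights.AllRead | EntitySetRights.AllWrite);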

    Read the article

  • GZip/Deflate Compression in ASP.NET MVC

    - by Rick Strahl
    A long while back I wrote about GZip compression in ASP.NET. In that article I describe two generic helper methods that I've used in all sorts of ASP.NET application from WebForms apps to HttpModules and HttpHandlers that require gzip or deflate compression. The same static methods also work in ASP.NET MVC. Here are the two routines:/// <summary> /// Determines if GZip is supported /// </summary> /// <returns></returns> public static bool IsGZipSupported() { string AcceptEncoding = HttpContext.Current.Request.Headers["Accept-Encoding"]; if (!string.IsNullOrEmpty(AcceptEncoding) && (AcceptEncoding.Contains("gzip") || AcceptEncoding.Contains("deflate"))) return true; return false; } /// <summary> /// Sets up the current page or handler to use GZip through a Response.Filter /// IMPORTANT: /// You have to call this method before any output is generated! /// </summary> public static void GZipEncodePage() { HttpResponse Response = HttpContext.Current.Response; if (IsGZipSupported()) { string AcceptEncoding = HttpContext.Current.Request.Headers["Accept-Encoding"]; if (AcceptEncoding.Contains("gzip")) { Response.Filter = new System.IO.Compression.GZipStream(Response.Filter, System.IO.Compression.CompressionMode.Compress); Response.Headers.Remove("Content-Encoding"); Response.AppendHeader("Content-Encoding", "gzip"); } else { Response.Filter = new System.IO.Compression.DeflateStream(Response.Filter, System.IO.Compression.CompressionMode.Compress); Response.Headers.Remove("Content-Encoding"); Response.AppendHeader("Content-Encoding", "deflate"); } } // Allow proxy servers to cache encoded and unencoded versions separately Response.AppendHeader("Vary", "Content-Encoding"); } The first method checks whether the client sending the request includes the accept-encoding for either gzip or deflate, and if if it does it returns true. The second function uses IsGzipSupported() to decide whether it should encode content and uses an Response Filter to do its job. Basically response filters look at the Response output stream as it's written and convert the data flowing through it. Filters are a bit tricky to work with but the two .NET filter streams for GZip and Deflate Compression make this a snap to implement. In my old code and even now in MVC I can always do:public ActionResult List(string keyword=null, int category=0) { WebUtils.GZipEncodePage(); …} to encode my content. And that works just fine. The proper way: Create an ActionFilterAttribute However in MVC this sort of thing is typically better handled by an ActionFilter which can be applied with an attribute. So to be all prim and proper I created an CompressContentAttribute ActionFilter that incorporates those two helper methods and which looks like this:/// <summary> /// Attribute that can be added to controller methods to force content /// to be GZip encoded if the client supports it /// </summary> public class CompressContentAttribute : ActionFilterAttribute { /// <summary> /// Override to compress the content that is generated by /// an action method. 
/// </summary> /// <param name="filterContext"></param> public override void OnActionExecuting(ActionExecutingContext filterContext) { GZipEncodePage(); } /// <summary> /// Determines if GZip is supported /// </summary> /// <returns></returns> public static bool IsGZipSupported() { string AcceptEncoding = HttpContext.Current.Request.Headers["Accept-Encoding"]; if (!string.IsNullOrEmpty(AcceptEncoding) && (AcceptEncoding.Contains("gzip") || AcceptEncoding.Contains("deflate"))) return true; return false; } /// <summary> /// Sets up the current page or handler to use GZip through a Response.Filter /// IMPORTANT: /// You have to call this method before any output is generated! /// </summary> public static void GZipEncodePage() { HttpResponse Response = HttpContext.Current.Response; if (IsGZipSupported()) { string AcceptEncoding = HttpContext.Current.Request.Headers["Accept-Encoding"]; if (AcceptEncoding.Contains("gzip")) { Response.Filter = new System.IO.Compression.GZipStream(Response.Filter, System.IO.Compression.CompressionMode.Compress); Response.Headers.Remove("Content-Encoding"); Response.AppendHeader("Content-Encoding", "gzip"); } else { Response.Filter = new System.IO.Compression.DeflateStream(Response.Filter, System.IO.Compression.CompressionMode.Compress); Response.Headers.Remove("Content-Encoding"); Response.AppendHeader("Content-Encoding", "deflate"); } } // Allow proxy servers to cache encoded and unencoded versions separately Response.AppendHeader("Vary", "Content-Encoding"); } } It's basically the same code wrapped into an ActionFilter attribute, which intercepts requests MVC requests to Controller methods and lets you hook up logic before and after the methods have executed. Here I want to override OnActionExecuting() which fires before the Controller action is fired. With the CompressContentAttribute created, it can now be applied to either the controller as a whole:[CompressContent] public class ClassifiedsController : ClassifiedsBaseController { … } or to one of the Action methods:[CompressContent] public ActionResult List(string keyword=null, int category=0) { … } The former applies compression to every action method, while the latter is selective and only applies it to the individual action method. Is the attribute better than the static utility function? Not really, but it is the standard MVC way to hook up 'filter' content and that's where others are likely to expect to set options like this. In fact,  you have a bit more control with the utility function because you can conditionally apply it in code, but this is actually much less likely in MVC applications than old WebForms apps since controller methods tend to be more focused. Compression Caveats Http compression is very cool and pretty easy to implement in ASP.NET but you have to be careful with it - especially if your content might get transformed or redirected inside of ASP.NET. A good example, is if an error occurs and a compression filter is applied. ASP.NET errors don't clear the filter, but clear the Response headers which results in some nasty garbage because the compressed content now no longer matches the headers. Another issue is Caching, which has to account for all possible ways of compression and non-compression that the content is served. Basically compressed content and caching don't mix well. I wrote about several of these issues in an old blog post and I recommend you take a quick peek before diving into making every bit of output Gzip encoded. 
None of these are show stoppers, but you have to be aware of the issues. Related Posts: GZip Compression with ASP.NET Content; ASP.NET GZip Encoding Caveats. © Rick Strahl, West Wind Technologies, 2005-2012. Posted in ASP.NET, MVC.
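
    As a stopgap for the error caveat above, the compression filter can be dropped when an unhandled exception fires, so the error page's body and headers agree again. A minimal sketch (it assumes the IIS7 integrated pipeline for Response.Headers, and some runtimes reject a null filter, in which case substitute a pass-through stream):

        protected void Application_Error(object sender, EventArgs e)
        {
            var response = HttpContext.Current.Response;
            try
            {
                response.Filter = null; // discard the GZip/Deflate filter
                response.Headers.Remove("Content-Encoding");
            }
            catch (HttpException) { /* headers already sent; nothing to undo */ }
        }

    As an aside, proxies key their cache on the request's Accept-Encoding header, so "Vary: Accept-Encoding" is the value usually recommended rather than "Vary: Content-Encoding".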

    Read the article

  • IIS 7 Authentication: Certain users can't authenticate, while almost all others can.

    - by user35335
    I'm using IIS 7 Digest authentication to control access to a certain directory containing files. Users access the files through a department website from inside our network and outside. I've set NTFS permissions on the directory to allow a certain AD group to view the files. When I click a link to one of those files on the website I get prompted for a username and password. With most users everything works fine, but with a few of them it prompts for a password 3 times and then get: 401 - Unauthorized: Access is denied due to invalid credentials. But other users that are in the group can get in without a problem. If I switch it over to Windows Authentication, then the trouble users can log in fine. That directory is also shared, and users that can't log in through the website are able to browse to the share and view files in it, so I know that the permissions are ok. Here's the portion of the IIS log where I tried to download the file (/assets/files/secure/WWGNL.pdf): 2010-02-19 19:47:20 xxx.xxx.xxx.xxx GET /assets/images/bullet.gif - 80 - 10.5.16.138 Mozilla/5.0+(Windows;+U;+Windows+NT+6.1;+en-US)+AppleWebKit/532.5+(KHTML,+like+Gecko)+Chrome/4.0.249.89+Safari/532.5 200 0 0 218 2010-02-19 19:47:20 xxx.xxx.xxx.xxx GET /assets/images/bgOFF.gif - 80 - 10.5.16.138 Mozilla/5.0+(Windows;+U;+Windows+NT+6.1;+en-US)+AppleWebKit/532.5+(KHTML,+like+Gecko)+Chrome/4.0.249.89+Safari/532.5 200 0 0 218 2010-02-19 19:47:21 xxx.xxx.xxx.xxx GET /assets/files/secure/WWGNL.pdf - 80 - 10.5.16.138 Mozilla/5.0+(Windows;+U;+Windows+NT+6.1;+en-US)+AppleWebKit/532.5+(KHTML,+like+Gecko)+Chrome/4.0.249.89+Safari/532.5 401 2 5 0 2010-02-19 19:47:36 xxx.xxx.xxx.xxx GET /assets/files/secure/WWGNL.pdf - 80 - 10.5.16.138 Mozilla/5.0+(Windows;+U;+Windows+NT+6.1;+en-US)+AppleWebKit/532.5+(KHTML,+like+Gecko)+Chrome/4.0.249.89+Safari/532.5 401 1 2148074252 0 2010-02-19 19:47:43 xxx.xxx.xxx.xxx GET /assets/files/secure/WWGNL.pdf - 80 - 10.5.16.138 Mozilla/5.0+(Windows;+U;+Windows+NT+6.1;+en-US)+AppleWebKit/532.5+(KHTML,+like+Gecko)+Chrome/4.0.249.89+Safari/532.5 401 1 2148074252 15 2010-02-19 19:47:46 xxx.xxx.xxx.xxx GET /manager/media/script/_session.gif 0.19665693119168282 80 - 10.5.16.138 Mozilla/5.0+(Windows;+U;+Windows+NT+6.1;+en-US)+AppleWebKit/532.5+(KHTML,+like+Gecko)+Chrome/4.0.249.89+Safari/532.5 200 0 0 203 2010-02-19 19:47:46 xxx.xxx.xxx.xxx POST /manager/index.php - 80 - 10.5.16.138 Mozilla/5.0+(Windows;+U;+Windows+NT+6.1;+en-US)+AppleWebKit/532.5+(KHTML,+like+Gecko)+Chrome/4.0.249.89+Safari/532.5 200 0 0 296 2010-02-19 19:47:56 xxx.xxx.xxx.xxx GET /assets/files/secure/WWGNL.pdf - 80 - 10.5.16.138 Mozilla/5.0+(Windows;+U;+Windows+NT+6.1;+en-US)+AppleWebKit/532.5+(KHTML,+like+Gecko)+Chrome/4.0.249.89+Safari/532.5 401 1 2148074252 15 2010-02-19 19:47:59 xxx.xxx.xxx.xxx GET /favicon.ico - 80 - 10.5.16.138 Mozilla/5.0+(Windows;+U;+Windows+NT+6.1;+en-US)+AppleWebKit/532.5+(KHTML,+like+Gecko)+Chrome/4.0.249.89+Safari/532.5 404 0 2 0 Here's the Failed Logon attempt in the Security Log: Log Name: Security Source: Microsoft-Windows-Security-Auditing Date: 2/19/2010 11:47:43 AM Event ID: 4625 Task Category: Logon Level: Information Keywords: Audit Failure User: N/A Computer: WEB4.net.domain.org Description: An account failed to log on. Subject: Security ID: NULL SID Account Name: - Account Domain: - Logon ID: 0x0 Logon Type: 3 Account For Which Logon Failed: Security ID: NULL SID Account Name: jim.lastname Account Domain: net.domain.org Failure Information: Failure Reason: Unknown user name or bad password. 
Status: 0xc000006d Sub Status: 0xc000006a Process Information: Caller Process ID: 0x0 Caller Process Name: - Network Information: Workstation Name: - Source Network Address: 10.5.16.138 Source Port: 50065 Detailed Authentication Information: Logon Process: WDIGEST Authentication Package: WDigest Transited Services: - Package Name (NTLM only): - Key Length: 0 This event is generated when a logon request fails. It is generated on the computer where access was attempted. The Subject fields indicate the account on the local system which requested the logon. This is most commonly a service such as the Server service, or a local process such as Winlogon.exe or Services.exe. The Logon Type field indicates the kind of logon that was requested. The most common types are 2 (interactive) and 3 (network). The Process Information fields indicate which account and process on the system requested the logon. The Network Information fields indicate where a remote logon request originated. Workstation name is not always available and may be left blank in some cases. The authentication information fields provide detailed information about this specific logon request. - Transited services indicate which intermediate services have participated in this logon request. - Package name indicates which sub-protocol was used among the NTLM protocols. - Key length indicates the length of the generated session key. This will be 0 if no session key was requested. Event Xml: <Event xmlns="http://schemas.microsoft.com/win/2004/08/events/event"> <System> <Provider Name="Microsoft-Windows-Security-Auditing" Guid="{54849625-5478-4994-a5ba-3e3b0328c30d}" /> <EventID>4625</EventID> <Version>0</Version> <Level>0</Level> <Task>12544</Task> <Opcode>0</Opcode> <Keywords>0x8010000000000000</Keywords> <TimeCreated SystemTime="2010-02-19T19:47:43.890Z" /> <EventRecordID>2276316</EventRecordID> <Correlation /> <Execution ProcessID="612" ThreadID="692" /> <Channel>Security</Channel> <Computer>WEB4.net.domain.org</Computer> <Security /> </System> <EventData> <Data Name="SubjectUserSid">S-1-0-0</Data> <Data Name="SubjectUserName">-</Data> <Data Name="SubjectDomainName">-</Data> <Data Name="SubjectLogonId">0x0</Data> <Data Name="TargetUserSid">S-1-0-0</Data> <Data Name="TargetUserName">jim.lastname</Data> <Data Name="TargetDomainName">net.domain.org</Data> <Data Name="Status">0xc000006d</Data> <Data Name="FailureReason">%%2313</Data> <Data Name="SubStatus">0xc000006a</Data> <Data Name="LogonType">3</Data> <Data Name="LogonProcessName">WDIGEST</Data> <Data Name="AuthenticationPackageName">WDigest</Data> <Data Name="WorkstationName">-</Data> <Data Name="TransmittedServices">-</Data> <Data Name="LmPackageName">-</Data> <Data Name="KeyLength">0</Data> <Data Name="ProcessId">0x0</Data> <Data Name="ProcessName">-</Data> <Data Name="IpAddress">10.5.16.138</Data> <Data Name="IpPort">50065</Data> </EventData> </Event>
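
    The status/sub-status pair (0xc000006d / 0xc000006a) decodes to plain "bad user name or password", even though the same credentials pass Windows authentication. Two Digest-specific quirks are worth ruling out (hedged suggestions, not a confirmed diagnosis): Digest treats the user name and realm case-sensitively, so affected users should try the exact sAMAccountName casing stored in AD; and the DC can only validate Digest against hashes computed at the last password change, so a password reset for one affected account is a cheap test:

        rem confirm the account and when its password was last set
        net user jim.lastname /domain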

    Read the article

  • SUPER CSV write bean to CSV.

    - by ButtersB
    Here is my class, public class FreebasePeopleResults { public String intendedSearch; public String weight; public Double heightMeters; public Integer age; public String type; public String parents; public String profession; public String alias; public String children; public String siblings; public String spouse; public String degree; public String institution; public String wikipediaId; public String guid; public String id; public String gender; public String name; public String ethnicity; public String articleText; public String dob; public String getWeight() { return weight; } public void setWeight(String weight) { this.weight = weight; } public Double getHeightMeters() { return heightMeters; } public void setHeightMeters(Double heightMeters) { this.heightMeters = heightMeters; } public String getParents() { return parents; } public void setParents(String parents) { this.parents = parents; } public Integer getAge() { return age; } public void setAge(Integer age) { this.age = age; } public String getProfession() { return profession; } public void setProfession(String profession) { this.profession = profession; } public String getAlias() { return alias; } public void setAlias(String alias) { this.alias = alias; } public String getChildren() { return children; } public void setChildren(String children) { this.children = children; } public String getSpouse() { return spouse; } public void setSpouse(String spouse) { this.spouse = spouse; } public String getDegree() { return degree; } public void setDegree(String degree) { this.degree = degree; } public String getInstitution() { return institution; } public void setInstitution(String institution) { this.institution = institution; } public String getWikipediaId() { return wikipediaId; } public void setWikipediaId(String wikipediaId) { this.wikipediaId = wikipediaId; } public String getGuid() { return guid; } public void setGuid(String guid) { this.guid = guid; } public String getId() { return id; } public void setId(String id) { this.id = id; } public String getGender() { return gender; } public void setGender(String gender) { this.gender = gender; } public String getName() { return name; } public void setName(String name) { this.name = name; } public String getEthnicity() { return ethnicity; } public void setEthnicity(String ethnicity) { this.ethnicity = ethnicity; } public String getArticleText() { return articleText; } public void setArticleText(String articleText) { this.articleText = articleText; } public String getDob() { return dob; } public void setDob(String dob) { this.dob = dob; } public String getType() { return type; } public void setType(String type) { this.type = type; } public String getSiblings() { return siblings; } public void setSiblings(String siblings) { this.siblings = siblings; } public String getIntendedSearch() { return intendedSearch; } public void setIntendedSearch(String intendedSearch) { this.intendedSearch = intendedSearch; } } Here is my CSV writer method import java.io.FileWriter; import java.io.IOException; import java.util.ArrayList; import org.supercsv.io.CsvBeanWriter; import org.supercsv.prefs.CsvPreference; public class CSVUtils { public static void writeCSVFromList(ArrayList<FreebasePeopleResults> people, boolean writeHeader) throws IOException{ //String[] header = new String []{"title","acronym","globalId","interfaceId","developer","description","publisher","genre","subGenre","platform","esrb","reviewScore","releaseDate","price","cheatArticleId"}; FileWriter file = new 
FileWriter("/brian/brian/Documents/people-freebase.csv", true); // write the partial data CsvBeanWriter writer = new CsvBeanWriter(file, CsvPreference.EXCEL_PREFERENCE); for(FreebasePeopleResults person:people){ writer.write(person); } writer.close(); // show output } } I keep getting output errors. Here is the error: There is no content to write for line 2 context: Line: 2 Column: 0 Raw line: null Now, I know it is now totally null, so I am confused.

    Read the article

  • MvcExtensions – Bootstrapping

    - by kazimanzurrashid
    When you create a new ASP.NET MVC application you will find that the global.asax contains the following lines: namespace MvcApplication1 { // Note: For instructions on enabling IIS6 or IIS7 classic mode, // visit http://go.microsoft.com/?LinkId=9394801 public class MvcApplication : System.Web.HttpApplication { public static void RegisterRoutes(RouteCollection routes) { routes.IgnoreRoute("{resource}.axd/{*pathInfo}"); routes.MapRoute( "Default", // Route name "{controller}/{action}/{id}", // URL with parameters new { controller = "Home", action = "Index", id = UrlParameter.Optional } // Parameter defaults ); } protected void Application_Start() { AreaRegistration.RegisterAllAreas(); RegisterRoutes(RouteTable.Routes); } } } As the application grows, quite a lot of plumbing code gets into the global.asax, which quickly becomes a design smell. Let's take a quick look at the code of one of the open source projects that I recently visited: public static void RegisterRoutes(RouteCollection routes) { routes.IgnoreRoute("{resource}.axd/{*pathInfo}"); routes.MapRoute("Default","{controller}/{action}/{id}", new { controller = "Home", action = "Index", id = "" }); } protected override void OnApplicationStarted() { Error += OnError; EndRequest += OnEndRequest; var settings = new SparkSettings() .AddNamespace("System") .AddNamespace("System.Collections.Generic") .AddNamespace("System.Web.Mvc") .AddNamespace("System.Web.Mvc.Html") .AddNamespace("MvcContrib.FluentHtml") .AddNamespace("********") .AddNamespace("********.Web") .SetPageBaseType("ApplicationViewPage") .SetAutomaticEncoding(true); #if DEBUG settings.SetDebug(true); #endif var viewFactory = new SparkViewFactory(settings); ViewEngines.Engines.Add(viewFactory); #if !DEBUG PrecompileViews(viewFactory); #endif RegisterAllControllersIn("********.Web"); log4net.Config.XmlConfigurator.Configure(); RegisterRoutes(RouteTable.Routes); Factory.Load(new Components.WebDependencies()); ModelBinders.Binders.DefaultBinder = new Binders.GenericBinderResolver(Factory.TryGet<IModelBinder>); ValidatorConfiguration.Initialize("********"); HtmlValidationExtensions.Initialize(ValidatorConfiguration.Rules); } private void OnEndRequest(object sender, System.EventArgs e) { if (((HttpApplication)sender).Context.Handler is MvcHandler) { CreateKernel().Get<ISessionSource>().Close(); } } private void OnError(object sender, System.EventArgs e) { CreateKernel().Get<ISessionSource>().Close(); } protected override IKernel CreateKernel() { return Factory.Kernel; } private static void PrecompileViews(SparkViewFactory viewFactory) { var batch = new SparkBatchDescriptor(); batch.For<HomeController>().For<ManageController>(); viewFactory.Precompile(batch); } As you can see, there are quite a few things going on in the above code: registering the view engine, compiling the views, registering the routes/controllers/model binders, setting up the logger and validations; and, as you can imagine, the more complex the application becomes, the more things get added to the application start. One of the goals of MvcExtensions is to reduce this design smell. Instead of writing all the plumbing code in the application start, it contains BootstrapperTasks to register individual services. 
    Out of the box, it contains BootstrapperTasks to register Controllers, Controller Factory, Action Invoker, Action Filters, Model Binders, Model Metadata/Validation Providers, ValueProviderFactory, ViewEngines etc., and it is intelligent enough to automatically detect the above types and register them with the ASP.NET MVC framework. Other than the built-in tasks, you can create your own custom task which will be executed automatically when the application starts. When the BootstrapperTasks are in action you will find the global.asax pretty much clean, like the following: public class MvcApplication : UnityMvcApplication { public void ErrorLog_Filtering(object sender, ExceptionFilterEventArgs e) { Check.Argument.IsNotNull(e, "e"); HttpException exception = e.Exception.GetBaseException() as HttpException; if ((exception != null) && (exception.GetHttpCode() == (int)HttpStatusCode.NotFound)) { e.Dismiss(); } } } The above code is taken from another of my open source projects, Shrinkr; as you can see, the global.asax is no longer cluttered with any plumbing code. One special thing you may have noticed is that it inherits from UnityMvcApplication rather than the regular HttpApplication. There are separate versions of this class for each IoC container, like NinjectMvcApplication, StructureMapMvcApplication etc. Other than executing the built-in tasks, Shrinkr also has a few custom tasks which get executed when the application starts. For example, when the application starts, we want to ensure that the default users (which are specified in the web.config) are created. The following is the custom task that is used to create those default users: public class CreateDefaultUsers : BootstrapperTask { protected override TaskContinuation ExecuteCore(IServiceLocator serviceLocator) { IUserRepository userRepository = serviceLocator.GetInstance<IUserRepository>(); IUnitOfWork unitOfWork = serviceLocator.GetInstance<IUnitOfWork>(); IEnumerable<User> users = serviceLocator.GetInstance<Settings>().DefaultUsers; bool shouldCommit = false; foreach (User user in users) { if (userRepository.GetByName(user.Name) == null) { user.AllowApiAccess(ApiSetting.InfiniteLimit); userRepository.Add(user); shouldCommit = true; } } if (shouldCommit) { unitOfWork.Commit(); } return TaskContinuation.Continue; } } There are several other tasks in Shrinkr that we are also using, which you will find in that project. To create a custom bootstrapping task you have to create a new class which either implements the IBootstrapperTask interface or inherits from the abstract BootstrapperTask class; I would recommend starting with BootstrapperTask, as it already has the required code that you would otherwise have to write if you chose the IBootstrapperTask interface. As you can see in the above code, we are overriding ExecuteCore to create the default users; MvcExtensions is responsible for populating the ServiceLocator prior to calling this method, and in this method we are using the service locator to get the dependencies that are required to create the users (I will cover custom dependency registration in the next post). Once the users are created, we return a special enum, TaskContinuation, as the return value. TaskContinuation can have three values: Continue (the default), Skip and Break. The reason for having this enum is that in some special cases you might want to skip the next task in the chain, or break the complete chain, depending upon the currently running task; in those cases you will use the other two values instead of Continue. 
    The last thing I want to cover in the bootstrapping task is the Order. By default the order of all the built-in tasks, as well as of newly created tasks, is set to DefaultOrder (a static property); in some special cases you might want a task to execute before/after all the other tasks, and in those cases you will assign the Order in the task constructor. For example, in Shrinkr, we want to run a few background services once all the other tasks have executed, so we assigned the order as DefaultOrder + 1. Here is the code of that task: public class ConfigureBackgroundServices : BootstrapperTask { private IEnumerable<IBackgroundService> backgroundServices; public ConfigureBackgroundServices() { Order = DefaultOrder + 1; } protected override TaskContinuation ExecuteCore(IServiceLocator serviceLocator) { backgroundServices = serviceLocator.GetAllInstances<IBackgroundService>().ToList(); backgroundServices.Each(service => service.Start()); return TaskContinuation.Continue; } protected override void DisposeCore() { backgroundServices.Each(service => service.Stop()); } } That's it for today; in the next post I will cover custom service registration, so stay tuned.

    Read the article

  • SCCM 2012 unable to update boot images with pxe enabled

    - by Adam
    We are fighting an error in SCCM 2012. When we attempt to distribute boot images (after selecting the PXE option) we receive an error that the PXE image cannot be expanded (distmgr log). Can you give us any direction on what to try or attempt in this scenario? We only have one DP in our environment at the moment; however, we have found that by creating another DP on a different server we don't have this problem. However, we really need the primary site to be a DP. We have tried: removing and reinstalling the DP; removing and reinstalling WDS; reinstalling the OS ... ouch; reinstalling SQL. We even attempted to manually mount these WIMs in the remote install folder, no luck... And we have been working on this for days... Any and all help is appreciated! Our log is below. Thank you very much, Small Town IT guy. Attempting to add or update a package on a distribution point. SMS_DISTRIBUTION_MANAGER 6/22/2012 2:49:41 PM 6924 (0x1B0C) The distribution point is on the siteserver and the package is a content type package. There is nothing to be copied over. SMS_DISTRIBUTION_MANAGER 6/22/2012 2:49:41 PM 6924 (0x1B0C) STATMSG: ID=2342 SEV=I LEV=M SOURCE="SMS Server" COMP="SMS_DISTRIBUTION_MANAGER" SYS=OURSERVER.ourdomain.cc.ia.us SITE=IVC PID=3600 TID=6924 GMTDATE=Fri Jun 22 19:49:41.559 2012 ISTR0="Boot image (x86)" ISTR1="["Display=\OURSERVER.ourdomain.cc.ia.us\"]MSWNET:["SMS_SITE=IVC"]\OURSERVER.ourdomain.cc.ia.us\" ISTR2="" ISTR3="" ISTR4="" ISTR5="" ISTR6="" ISTR7="" ISTR8="" ISTR9="" NUMATTRS=2 AID0=400 AVAL0="IVC00001" AID1=404 AVAL1="["Display=\OURSERVER.ourdomain.cc.ia.us\"]MSWNET:["SMS_SITE=IVC"]\OURSERVER.ourdomain.cc.ia.us\" SMS_DISTRIBUTION_MANAGER 6/22/2012 2:49:41 PM 6924 (0x1B0C) The current user context will be used for connecting to ["Display=\OURSERVER.ourdomain.cc.ia.us\"]MSWNET:["SMS_SITE=IVC"]\OURSERVER.ourdomain.cc.ia.us. SMS_DISTRIBUTION_MANAGER 6/22/2012 2:49:41 PM 6924 (0x1B0C) No network connection is needed to ["Display=\OURSERVER.ourdomain.cc.ia.us\"]MSWNET:["SMS_SITE=IVC"]\OURSERVER.ourdomain.cc.ia.us\ as this is the local machine. SMS_DISTRIBUTION_MANAGER 6/22/2012 2:49:41 PM 6924 (0x1B0C) Signature share exists on distribution point path \OURSERVER.ourdomain.cc.ia.us\SMSSIG$ SMS_DISTRIBUTION_MANAGER 6/22/2012 2:49:41 PM 6924 (0x1B0C) Ignoring drive C:. File C:\NO_SMS_ON_DRIVE.SMS exists. SMS_DISTRIBUTION_MANAGER 6/22/2012 2:49:41 PM 6924 (0x1B0C) user(NT AUTHORITY\SYSTEM) runing application(SMS_DISTRIBUTION_MANAGER) from machine (OURSERVER.ourdomain.cc.ia.us) is submitting SDK changes from site(IVC) SMS_DISTRIBUTION_MANAGER 6/22/2012 2:49:41 PM 6924 (0x1B0C) Share SMSPKGD$ exists on distribution point \OURSERVER.ourdomain.cc.ia.us\SMSPKGD$ SMS_DISTRIBUTION_MANAGER 6/22/2012 2:49:41 PM 6924 (0x1B0C) Creating, reading and or updating Operations Management server role registry keys for a Distribution Point ... SMS_DISTRIBUTION_MANAGER 6/22/2012 2:49:41 PM 6924 (0x1B0C) user(NT AUTHORITY\SYSTEM) runing application(SMS_DISTRIBUTION_MANAGER) from machine (OURSERVER.ourdomain.cc.ia.us) is submitting SDK changes from site(IVC) SMS_DISTRIBUTION_MANAGER 6/22/2012 2:49:41 PM 6924 (0x1B0C) Creating, reading or updating IIS registry key for a distribution point. SMS_DISTRIBUTION_MANAGER 6/22/2012 2:49:41 PM 6924 (0x1B0C) IISPortsList in the SCF is "80". SMS_DISTRIBUTION_MANAGER 6/22/2012 2:49:41 PM 6924 (0x1B0C) IISSSLPortsList in the SCF is "443". SMS_DISTRIBUTION_MANAGER 6/22/2012 2:49:41 PM 6924 (0x1B0C) IISWebSiteName in the SCF is "". 
SMS_DISTRIBUTION_MANAGER 6/22/2012 2:49:41 PM 6924 (0x1B0C) IISSSLState in the SCF is 224. SMS_DISTRIBUTION_MANAGER 6/22/2012 2:49:41 PM 6924 (0x1B0C) Virtual Directory SMS_DP_SMSPKG$ for the physical path F:\SCCMContentLib already exists. SMS_DISTRIBUTION_MANAGER 6/22/2012 2:49:42 PM 6924 (0x1B0C) STATMSG: ID=2375 SEV=I LEV=M SOURCE="SMS Server" COMP="SMS_DISTRIBUTION_MANAGER" SYS=OURSERVER.ourdomain.cc.ia.us SITE=IVC PID=3600 TID=6924 GMTDATE=Fri Jun 22 19:49:42.058 2012 ISTR0="["Display=\OURSERVER.ourdomain.cc.ia.us\"]MSWNET:["SMS_SITE=IVC"]\OURSERVER.ourdomain.cc.ia.us\" ISTR1="" ISTR2="" ISTR3="" ISTR4="" ISTR5="" ISTR6="" ISTR7="" ISTR8="" ISTR9="" NUMATTRS=1 AID0=404 AVAL0="["Display=\OURSERVER.ourdomain.cc.ia.us\"]MSWNET:["SMS_SITE=IVC"]\OURSERVER.ourdomain.cc.ia.us\" SMS_DISTRIBUTION_MANAGER 6/22/2012 2:49:42 PM 6924 (0x1B0C) Creating, reading or updating IIS registry key for a distribution point. SMS_DISTRIBUTION_MANAGER 6/22/2012 2:49:42 PM 6924 (0x1B0C) IISPortsList in the SCF is "80". SMS_DISTRIBUTION_MANAGER 6/22/2012 2:49:42 PM 6924 (0x1B0C) IISSSLPortsList in the SCF is "443". SMS_DISTRIBUTION_MANAGER 6/22/2012 2:49:42 PM 6924 (0x1B0C) IISWebSiteName in the SCF is "". SMS_DISTRIBUTION_MANAGER 6/22/2012 2:49:42 PM 6924 (0x1B0C) IISSSLState in the SCF is 224. SMS_DISTRIBUTION_MANAGER 6/22/2012 2:49:42 PM 6924 (0x1B0C) Virtual Directory SMS_DP_SMSSIG$ for the physical path D:\SMSSIG$ already exists. SMS_DISTRIBUTION_MANAGER 6/22/2012 2:49:42 PM 6924 (0x1B0C) STATMSG: ID=2375 SEV=I LEV=M SOURCE="SMS Server" COMP="SMS_DISTRIBUTION_MANAGER" SYS=OURSERVER.ourdomain.cc.ia.us SITE=IVC PID=3600 TID=6924 GMTDATE=Fri Jun 22 19:49:42.105 2012 ISTR0="["Display=\OURSERVER.ourdomain.cc.ia.us\"]MSWNET:["SMS_SITE=IVC"]\OURSERVER.ourdomain.cc.ia.us\" ISTR1="" ISTR2="" ISTR3="" ISTR4="" ISTR5="" ISTR6="" ISTR7="" ISTR8="" ISTR9="" NUMATTRS=1 AID0=404 AVAL0="["Display=\OURSERVER.ourdomain.cc.ia.us\"]MSWNET:["SMS_SITE=IVC"]\OURSERVER.ourdomain.cc.ia.us\" SMS_DISTRIBUTION_MANAGER 6/22/2012 2:49:42 PM 6924 (0x1B0C) user(NT AUTHORITY\SYSTEM) runing application(SMS_DISTRIBUTION_MANAGER) from machine (OURSERVER.ourdomain.cc.ia.us) is submitting SDK changes from site(IVC) SMS_DISTRIBUTION_MANAGER 6/22/2012 2:49:42 PM 6924 (0x1B0C) RDC:Successfully created package signature file from \?\F:\SMSPKGSIG\IVC00001.3 to \OURSERVER.ourdomain.cc.ia.us\SMSSIG$\IVC00001.3.tar SMS_DISTRIBUTION_MANAGER 6/22/2012 2:49:42 PM 6924 (0x1B0C) Setting permissions on file MSWNET:["SMS_SITE=IVC"]\OURSERVER.ourdomain.cc.ia.us\SMSSIG$\IVC00001.3.tar. SMS_DISTRIBUTION_MANAGER 6/22/2012 2:49:42 PM 6924 (0x1B0C) ExpandPXEImage: IVC00001, 1024 SMS_DISTRIBUTION_MANAGER 6/22/2012 2:49:42 PM 6924 (0x1B0C) CContentDefinition::GetFileProperties failed; 0x80070003 SMS_DISTRIBUTION_MANAGER 6/22/2012 2:49:42 PM 6924 (0x1B0C) CContentDefinition::TotalFileSizes failed; 0x80070003 SMS_DISTRIBUTION_MANAGER 6/22/2012 2:49:42 PM 6924 (0x1B0C) ExpandPXEImage failed; 0x80070003 SMS_DISTRIBUTION_MANAGER 6/22/2012 2:49:42 PM 6924 (0x1B0C) Error occurred. Performing error cleanup prior to returning. SMS_DISTRIBUTION_MANAGER 6/22/2012 2:49:42 PM 6924 (0x1B0C) DP thread with array index 0 ended. SMS_DISTRIBUTION_MANAGER 6/22/2012 2:49:42 PM 4492 (0x118C) DP thread with thread handle 00000000000013A4 and thread ID 6924 ended. SMS_DISTRIBUTION_MANAGER 6/22/2012 2:49:42 PM 4492 (0x118C) Updating package info for package IVC00001 SMS_DISTRIBUTION_MANAGER 6/22/2012 2:49:42 PM 4492 (0x118C) Package IVC00001 does not have a preferred sender. 
SMS_DISTRIBUTION_MANAGER 6/22/2012 2:49:42 PM 4492 (0x118C) The package and/or program properties for package IVC00001 have not changed, need to determine which site(s) need updated package info. SMS_DISTRIBUTION_MANAGER 6/22/2012 2:49:42 PM 4492 (0x118C) StoredPkgVersion (3) of package IVC00001. StoredPkgVersion in database is 3. SMS_DISTRIBUTION_MANAGER 6/22/2012 2:49:42 PM 4492 (0x118C) SourceVersion (3) of package IVC00001. SourceVersion in database is 3. SMS_DISTRIBUTION_MANAGER 6/22/2012 2:49:42 PM 4492 (0x118C) STATMSG: ID=2302 SEV=E LEV=M SOURCE="SMS Server" COMP="SMS_DISTRIBUTION_MANAGER" SYS=OURSERVER.ourdomain.cc.ia.us SITE=IVC PID=3600 TID=4492 GMTDATE=Fri Jun 22 19:49:42.292 2012 ISTR0="Boot image (x86)" ISTR1="IVC00001" ISTR2="" ISTR3="" ISTR4="" ISTR5="" ISTR6="" ISTR7="" ISTR8="" ISTR9="" NUMATTRS=1 AID0=400 AVAL0="IVC00001" SMS_DISTRIBUTION_MANAGER 6/22/2012 2:49:42 PM 4492 (0x118C) Failed to process package IVC00001 after 0 retries, will retry 100 more times SMS_DISTRIBUTION_MANAGER 6/22/2012 2:49:42 PM 4492 (0x118C) Exiting package processing thread. SMS_DISTRIBUTION_MANAGER 6/22/2012 2:49:42 PM 4492 (0x118C)

    Read the article

  • Wishful Thinking: Why can't HTML fix Script Attacks at the Source?

    - by Rick Strahl
    The Web can be an evil place, especially if you're a Web Developer blissfully unaware of Cross Site Script Attacks (XSS). Even if you are aware of XSS in all of its insidious forms, it's extremely complex to deal with all the issues if you're taking user input and you're actually allowing users to post raw HTML into an application. I'm dealing with this again today in a Web application where legacy data contains raw HTML that has to be displayed and users ask for the ability to use raw HTML as input for listings. The first line of defense of course is: Just say no to HTML input from users. If you don't allow HTML input directly and use HTML encoding (HttpUtility.HtmlEncode() in .NET, or standard ASP.NET MVC output via @Model.Content) you're fairly safe, at least from the HTML input provided. Both WebForms and Razor support HtmlEncoded content, although Razor makes it the default. In Razor the default @ expression syntax:@Model.UserContent automatically produces HTML encoded content - you actually have to go out of your way to create raw HTML content (safe by default) using @Html.Raw() or the HtmlString class. In Web Forms (V4) you can use:<%: Model.UserContent %> or if you're using a version prior to 4.0:<%= HttpUtility.HtmlEncode(Model.UserContent) %> This works great as a hedge against embedded <script> tags and HTML markup, as any HTML is turned into text that displays the markup but doesn't render it. But it turns any embedded HTML markup tags into plain text. If you need to display HTML in raw form with the markup tags rendering based on user input, this approach is worthless. If you do accept HTML input and need to echo the rendered HTML input back, the task of cleaning up that HTML is a complex one. In the projects I work on, customers ask for the ability to post raw HTML quite frequently. In almost every app I've built where there's document content from users we start out with text-only input - possibly using something like Markdown - but inevitably users want to just post plain old HTML they created in some other rich editing application. I see this a lot with realtors especially, who often want to reuse their postings easily in multiple places. In my work this is a common problem I need to deal with, and I've tried dozens of different methods, from sanitizing and simple rejection of input to custom markup schemes, none of which has ever felt comfortable to me. They work in a half assed, hacked together sort of way but I always live in fear of missing something vital which is *really easy to do*. My Wishlist Item: A <restricted> tag in HTML Let me dream here for a second on how to address this problem. It seems to me the easiest place where this can be fixed is: in the browser. Browsers are actually executing script code so they have a lot of control over the script code that resides in a page. What if there was a way to specify that you want to turn off script code for a block of HTML? The main issue when dealing with raw HTML input isn't that we as developers are unaware of the implications of user input, but the fact that we sometimes have to display raw HTML input the user provides. So the problem markup is usually isolated in only a very specific part of the document. So, what if we had a way to specify that in any given HTML block, no script code could execute, by wrapping it into a tag that disables all script functionality in the browser? This would include <script> tags and any document script attributes like onclick, onfocus etc. 
    and potentially also disallow things like iFrames that can be scripted from within the iFrame's target. I'd like to see something along these lines:<article> <restricted allowscripts="no" allowiframes="no"> <div>Some content</div> <script>alert('go ahead make my day, punk!');</script> <div onfocus="$.getJson('http://evilsite.com/')">more content</div> </restricted> </article> A tag like this would basically disallow all script code from firing from any HTML that's rendered within it. You'd use this only on content that you actually render from your data, and only if you are dealing with custom data. So something like this:<article> <restricted> @Html.Raw(Model.UserContent) </restricted> </article> For browsers this would actually be easy to intercept. They render the DOM and control loading and execution of scripts that are loaded through it. All the browser would have to do is suspend execution of <script> tags and not hook up any event handlers defined via markup in this block. Given all the crazy XSS attacks that exist and the prevalence of this problem, this would go a long way towards preventing at least coded script attacks in the DOM. And it seems like a totally doable solution that wouldn't be very difficult for vendors to implement. There would also need to be some logic in the parser to not allow an </restricted> or <restricted> tag into the content, so as to short-circuit the restricted section (per James Hart's comment). I'm sure there are other issues to consider as well that I didn't think of in my off-the-back-of-a-napkin concept here, but the idea overall seems worth consideration. Without code running in a user-supplied HTML block it'd be pretty hard to compromise a local HTML document and pass information like cookies to a server - or even send data to a server, period. Short of an iFrame that can access the parent frame (which is another restriction that should be available on this <restricted> tag) and could potentially communicate back, there's not a lot a malicious site could do. The HTML could still 'phone home' via image links and href links and basically say this site was accessed, but without the ability to run script code it would be pretty tough to pass along critical information to the server beyond that. Ahhhh… one can dream… Not holding my breath of course. The design-by-committee that is the W3C can't agree on anything in timeframes measured in less than decades, but maybe this is one place where browser vendors can actually step up the pressure. It is in their best interest to significantly reduce the attack surface for vulnerabilities on their browser platforms. Several people commented on Twitter today that there isn't enough discussion on issues like this that address serious needs in the web browser space. Realistically, security has to be a number one concern with Web applications in general - there isn't a Web app out there that is not vulnerable. And yet nothing has been done to address these security issues, even though there might be relatively easy solutions to make this happen. 
    It'll take time, and it's probably not going to happen in our lifetime, but maybe this rambling thought sparks some ideas on how this sort of restriction can get into browsers in some way in the future. © Rick Strahl, West Wind Technologies, 2005-2012. Posted in ASP.NET, HTML5, HTML, Security.

    Read the article

  • Make your CHM Help Files show HTML5 and CSS3 content

    - by Rick Strahl
    The HTML Help 1.0 specification, aka CHM files, is pretty old. In fact, it's practically ancient, as it was introduced in 1997 when Internet Explorer 4 was introduced. Html Help 1.0 is basically a completely HTML based Help system that uses a Help Viewer that internally uses Internet Explorer to render the HTML Help content. Because of its use of the Internet Explorer shell for rendering there were many security issues in the past, which resulted in the locking down of the Web Browser control in Windows and also of the Help Engine, which caused some unfortunate side effects. Even so, CHM continues to be a popular help format because it is very easy to produce content for it, using plain HTML, and because it works with many Windows application platforms out of the box. While there have been various attempts to replace CHM help files, CHM still seems to be a popular choice for many applications to display their help systems. The biggest alternative these days is no system based help at all, but links to online documentation. For Windows apps though it's still very common to see CHM help files, and there are still a ton of CHM help files out there and lots of tools (including our own West Wind Html Help Builder) that produce output for CHM files as well as Web output. Image is Everything and you ain't got it! One problem with the CHM engine is that it's stuck with an ancient Internet Explorer version for rendering. For example, say you have help content that uses HTML5 or CSS3, with an HTML Help topic like the following shown here in a full Web Browser instance of Internet Explorer: The page clearly uses some CSS3 features like rounded corners and box shadows that are rendered using plain CSS3. Note that I used Internet Explorer on purpose here to demonstrate that IE9 on Windows 7 can properly render this content using some of the new features of CSS, but the same is true for all other recent versions of the major browsers (FireFox 3.1+, Safari 4.5+, WebKit 9+ etc.). Unfortunately, if you take this nice and simple CSS3 content and run it through the HTML Help compiler to produce a CHM file, the resulting output on the same machine looks a bit less flashy: all the CSS3 styling is gone, and although the page display and functionality still work, all the extra styling features are lost. This even though I am running this on a Windows 7 machine that has IE9, which should be able to render these CSS features. Bummer. Web Browser Control - perpetually stuck in IE 7 Mode The problem is the Web Browser/Shell Components in Windows. This component is and has been part of Windows for as long as Internet Explorer has been around, but the Web Browser control hasn't kept up with the latest versions of IE. In a nutshell, the control is stuck in IE7 rendering mode for engine compatibility reasons by default. However, there is at least one way to fix this explicitly using Registry keys on a per application basis. The key point from that blog article is that you can override the IE rendering engine for a particular executable by setting one (or more) registry flags that tell the Windows Shell which version of the Internet Explorer rendering engine to load. An application that wishes to use a more recent version of Internet Explorer can then register itself during installation for the specific IE version desired, and from then on the application will use that version of the Web Browser component. 
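    As a concrete illustration, here is a minimal sketch (not from the original article) of performing that registration in C# at install time; it assumes rights to write to HKEY_LOCAL_MACHINE and uses the FEATURE_BROWSER_EMULATION keys and the decimal value 9000 detailed below:

    using Microsoft.Win32;

    public static class BrowserEmulationSetup
    {
        // Registers hh.exe (the CHM viewer) for IE9 rendering.
        // 9000 decimal = use the IE9 engine, falling back based on !DOCTYPE.
        public static void RegisterHelpViewer()
        {
            const string key = @"HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Internet Explorer\MAIN\FeatureControl\FEATURE_BROWSER_EMULATION";
            const string keyWow = @"HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\Microsoft\Internet Explorer\MAIN\FeatureControl\FEATURE_BROWSER_EMULATION";

            // set both hives so the setting applies on 32 and 64 bit machines
            Registry.SetValue(key, "hh.exe", 9000, RegistryValueKind.DWord);
            Registry.SetValue(keyWow, "hh.exe", 9000, RegistryValueKind.DWord);
        }
    }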
    If the application is older than the specified version it falls back to the default version (IE 7 rendering). Forcing CHM files to display with IE9 (or later) Rendering Knowing that we can force the IE usage for a given process, it's also possible to affect the CHM rendering by setting the same keys on the executable that's hosting the CHM file. What that executable file is depends on the type of application, as there are a number of ways that can launch the help engine. hh.exe: The standalone Windows CHM Help Viewer that launches when you launch a CHM from Windows Explorer. You can manually add hh.exe to the registry keys. YourApplication.exe: If you're using .NET or any tool that internally uses the hhControl ActiveX control to launch help content, your application is your host. You should add your application's exe to the registry during application startup. foxhhelp9.exe: If you're building a FoxPro application that uses the built-in help features, foxhhelp9.exe is used to actually host the help controls. Make sure to add this executable to the registry. What to set You can configure the Internet Explorer version used for an application in the registry by specifying the executable file name and a value that specifies the IE version desired. There are two different sets of keys for 32 bit and 64 bit applications. 32 bit only or 64 bit: HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Internet Explorer\MAIN\FeatureControl\FEATURE_BROWSER_EMULATION Value Key: hh.exe 32 bit on 64 bit machine: HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\Microsoft\Internet Explorer\MAIN\FeatureControl\FEATURE_BROWSER_EMULATION Value Key: hh.exe Note that it's best to always set both values when you install your application so it works regardless of which platform you run on. The value specified is a DWORD value, and the interesting values are decimal 9000 for IE9 rendering mode depending on !DOCTYPE settings, or 9999 for IE9 standards mode always. You can use the same logic for 8000 and 8888 for IE8, and the final value of 7000 for IE7 (one has to wonder what they're going to do for version 10 to perpetuate that pattern). I think 9000 is the value you'd most likely want to use. 9000 means that IE9 will be used for rendering, but unless the right doctypes are used (XHTML and HTML5 specifically) IE will still fall back into quirks mode as needed. This should allow existing pages to continue to use the fallback engine while new pages that have the proper HTML doctype set can take advantage of the newest features. Here's an example of how I set the registry keys in my Tarma Installmate registry configuration: Note that I set all three values both under the Software and Wow6432Node keys so that this works regardless of where these EXEs are launched from. Even though all apps are 32 bit apps, the 64 bit (the default one shown selected) key is often used. So, once I've set the registry key for hh.exe I can now launch my CHM help file from Explorer and see the following CSS3 IE9 rendered display: Summary It sucks that we have to go through all these hoops to get what should be natural behavior for an application to support the latest features available on a system. But it shouldn't be a surprise - the Windows Help team (if there even is such a thing) has not been known for forward looking technologies. It's a pretty big hassle that we have to resort to setting registry keys in order to get the Web Browser control and the internal CHM engine to render itself properly, but at least it's possible to make it work after all. 
    Using this technique it's possible to ship an application with a help file and allow your CHM help to display with richer CSS markup and correct rendering using the stricter and more consistent XHTML or HTML5 doctypes. If you provide both Web help and in-application help (and why not, if you're building from a single source) you can now sidestep the issue of your customers asking: Why does my help file look so much shittier than the online help… No more! © Rick Strahl, West Wind Technologies, 2005-2012. Posted in HTML5, Help, Html Help Builder, Internet Explorer, Windows.

    Read the article

  • Dynamic Types and DynamicObject References in C#

    - by Rick Strahl
    I've been working a bit with C# custom dynamic types for several customers recently, and I've seen some confusion in understanding how dynamic types are referenced. This discussion specifically centers around types that implement IDynamicMetaObjectProvider or subclass DynamicObject, as opposed to arbitrary type casts of standard .NET types. IDynamicMetaObjectProvider types are treated specially when they are cast to the dynamic type. Assume for a second that I've created my own implementation of a custom dynamic type called DynamicFoo, which is about as simple a dynamic class as I can think of:public class DynamicFoo : DynamicObject { Dictionary<string, object> properties = new Dictionary<string, object>(); public string Bar { get; set; } public DateTime Entered { get; set; } public override bool TryGetMember(GetMemberBinder binder, out object result) { result = null; if (!properties.ContainsKey(binder.Name)) return false; result = properties[binder.Name]; return true; } public override bool TrySetMember(SetMemberBinder binder, object value) { properties[binder.Name] = value; return true; } } This class has an internal dictionary member, and I'm exposing this dictionary through dynamic by implementing DynamicObject. This implementation exposes the properties dictionary so the dictionary keys can be referenced like properties (foo.NewProperty = "Cool!"). I override TryGetMember() and TrySetMember(), which are fired at runtime every time you access a 'property' on a dynamic instance of this DynamicFoo type. Strong Typing and Dynamic Casting I now can instantiate and use DynamicFoo in a couple of different ways: Strong Typing: DynamicFoo fooExplicit = new DynamicFoo(); var fooVar = new DynamicFoo(); These two commands are essentially identical and use strong typing. The compiler generates identical code for both of them. The var statement is merely a compiler directive to infer the type of fooVar at compile time, and so the type of fooVar is DynamicFoo, just like fooExplicit. This is very static - nothing dynamic about it - and it completely ignores the IDynamicMetaObjectProvider implementation of my class above, as it's never used. Using either of these I can access the native properties: DynamicFoo fooExplicit = new DynamicFoo(); // static typing assignments fooVar.Bar = "Barred!"; fooExplicit.Entered = DateTime.Now; // echo back static values Console.WriteLine(fooVar.Bar); Console.WriteLine(fooExplicit.Entered); but I have no access whatsoever to the properties dictionary. Basically this creates a strongly typed instance of the type with access only to the strongly typed interface. You get no dynamic behavior at all. The IDynamicMetaObjectProvider features don't kick in until you cast the type to dynamic. If I try to access a non-existing property on fooExplicit I get a compilation error that tells me that the property doesn't exist. Again, it's clearly and utterly non-dynamic. Dynamic: dynamic fooDynamic = new DynamicFoo(); fooDynamic, on the other hand, is created as a dynamic type and it's a completely different beast. I can also create a dynamic by simply casting any type to dynamic, like this: DynamicFoo fooExplicit = new DynamicFoo(); dynamic fooDynamic = fooExplicit; Note that dynamic typically doesn't require an explicit cast, as the compiler automatically performs the cast, so there's no need to use as dynamic. Dynamic functionality works at runtime and allows the dynamic wrapper to look up and call members dynamically. 
    A dynamic type will look for members to access or call in two places: using the strongly typed members of the object, and using the IDynamicMetaObjectProvider interface methods to access members. So rather than statically linking and calling a method or retrieving a property, the dynamic type looks up - at runtime - where the value actually comes from. It's essentially late binding, which allows runtime determination of what action to take when a member is accessed at runtime *if* the member you are accessing does not exist on the object. Class members are checked first, before the IDynamicMetaObjectProvider interface methods kick in. All of the following works with the dynamic type: dynamic fooDynamic = new DynamicFoo(); // dynamic typing assignments fooDynamic.NewProperty = "Something new!"; fooDynamic.LastAccess = DateTime.Now; // dynamic assigning static properties fooDynamic.Bar = "dynamic barred"; fooDynamic.Entered = DateTime.Now; // echo back dynamic values Console.WriteLine(fooDynamic.NewProperty); Console.WriteLine(fooDynamic.LastAccess); Console.WriteLine(fooDynamic.Bar); Console.WriteLine(fooDynamic.Entered); The dynamic type can access the native class properties (Bar and Entered) and create and read new ones (NewProperty, LastAccess), all using a single type instance, which is pretty cool. As you can see, it's pretty easy to create an extensible type this way that can add members at runtime. The Alter Ego of IDynamicObject The key point here is that all three statements - explicit, var and dynamic - declare a new DynamicFoo(), but the dynamic declaration results in completely different behavior than the first two, simply because the type has been cast to dynamic. Dynamic binding means that the type loses its typical strong-typing, compile-time features. You can see this easily in the Visual Studio code editor: as soon as you assign a value to a dynamic you lose IntelliSense, which means there's no compiler type checking on any members you apply to this instance. If you're new to the dynamic type it might seem really confusing that a single type can behave differently depending on how it is cast, but that's exactly what happens when you use a type that implements IDynamicMetaObjectProvider. Declare the type as its strong type name and you only get to access the native instance members of the type. Declare or cast it to dynamic and you get dynamic behavior, which accesses native members plus uses the IDynamicMetaObjectProvider implementation to handle any missing member definitions by running custom code. You can easily cast objects back and forth between dynamic and the original type: dynamic fooDynamic = new DynamicFoo(); fooDynamic.NewProperty = "New Property Value"; DynamicFoo foo = fooDynamic; foo.Bar = "Barred"; Here the code starts out with a dynamic cast and a dynamic assignment. The code then casts the value back to DynamicFoo. Notice that when casting from dynamic to DynamicFoo and back we typically do not have to specify the cast explicitly - the compiler can infer the type, so I don't need to specify as dynamic or as DynamicFoo. Moral of the Story This easy interchange between dynamic and the underlying type is actually super useful, because it allows you to create extensible objects that expose non-member data stores through an object interface. 
    You can create an object that hosts a number of strongly typed properties and then cast the object to dynamic and add additional dynamic properties to the same type at runtime. You can easily switch back and forth between the strongly typed instance, to access the well-known strongly typed properties, and dynamic, for the dynamic properties added at runtime. Keep in mind that dynamic object access has quite a bit of overhead and is definitely slower than strongly typed binding, so if you're accessing the strongly typed parts of your objects you definitely want to use a strongly typed reference. Reserve dynamic for the dynamic members to optimize your code. The real beauty of dynamic is that with very little effort you can build expandable objects, or objects that expose different data stores through an object interface. I'll have more on this in my next post when I create a customized and extensible Expando object based on DynamicObject. © Rick Strahl, West Wind Technologies, 2005-2012. Posted in CSharp, .NET.

    Read the article

  • Sending an Activation Email when a New User Registers

    - by John
    Hello, The code below is a login system that I am using. It is supposed to allow a new user to register and then send the new user an activation email. It is inserting the new user into the MySQL database, but it is not sending the activation email. Any ideas why it's not sending the activation email? Thanks in advance, John header.php: <?php //error_reporting(0); session_start(); require_once ('db_connect.inc.php'); require_once ("function.inc.php"); $seed="0dAfghRqSTgx"; $domain = "...com"; ?> <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd"> <html> <head> <title>The Sandbox - <?php echo $domain; ?></title> <meta http-equiv="Content-Type" content="text/html; charset=iso-8859-1"> <link rel="stylesheet" type="text/css" href="sandbox.css"> <div class="hslogo"><a href="http://www...com/sandbox/"><img src="images/hslogo.png" alt="Example" border="0"/></a></div> </head> <body> login.php: <?php if (!isLoggedIn()) { // user is not logged in. if (isset($_POST['cmdlogin'])) { // retrieve the username and password sent from login form & check the login. if (checkLogin($_POST['username'], $_POST['password'])) { show_userbox(); } else { echo "Incorrect Login information !"; show_loginform(); } } else { // User is not logged in and has not pressed the login button // so we show him the loginform show_loginform(); } } else { // The user is already loggedin, so we show the userbox. show_userbox(); } ?> function show_loginform($disabled = false) { echo '<form name="login-form" id="login-form" method="post" action="./index.php?'.$_SERVER['QUERY_STRING'].'"> <div class="usernameformtext"><label title="Username">Username: </label></div> <div class="usernameformfield"><input tabindex="1" accesskey="u" name="username" type="text" maxlength="30" id="username" /></div> <div class="passwordformtext"><label title="Password">Password: </label></div> <div class="passwordformfield"><input tabindex="2" accesskey="p" name="password" type="password" maxlength="15" id="password" /></div> <div class="registertext"><a href="http://www...com/sandbox/register.php" title="Register">Register</a></div> <div class="lostpasswordtext"><a href="http://www...com/sandbox/lostpassword.php" title="Lost Password">Lost password?</a></div> <p class="loginbutton"><input tabindex="3" accesskey="l" type="submit" name="cmdlogin" value="Login" '; if ($disabled == true) { echo 'disabled="disabled"'; } echo ' /></p></form>'; } register.php: <?php require_once "header.php"; if (isset($_POST['register'])){ if (registerNewUser($_POST['username'], $_POST['password'], $_POST['password2'], $_POST['email'])){ echo "<div class='registration'>Thank you for registering, an email has been sent to your inbox, Please activate your account. <a href='http://www...com/sandbox/index.php'>Click here to login.</a> </div>"; }else { echo "Registration failed! Please try again."; show_registration_form(); } } else { // has not pressed the register button show_registration_form(); } ?> New User Function: function registerNewUser($username, $password, $password2, $email) { global $seed; if (!valid_username($username) || !valid_password($password) || !valid_email($email) || $password != $password2 || user_exists($username)) { return false; } $code = generate_code(20); $sql = sprintf("insert into login (username,password,email,actcode) value ('%s','%s','%s','%s')", mysql_real_escape_string($username), mysql_real_escape_string(sha1($password . 
$seed)) , mysql_real_escape_string($email), mysql_real_escape_string($code)); if (mysql_query($sql)) { $id = mysql_insert_id(); if (sendActivationEmail($username, $password, $id, $email, $code)) { return true; } else { return false; } } else { return false; } return false; } Send Activation Email function: function sendActivationEmail($username, $password, $uid, $email, $actcode) { global $domain; $link = "http://www.$domain/sandbox/activate.php?uid=$uid&actcode=$actcode"; $message = " Thank you for registering on http://www.$domain/, Your account information: username: $username password: $password Please click the link below to activate your account. $link Regards $domain Administration "; if (sendMail($email, "Please activate your account.", $message, "no-reply@$domain")) { return true; } else { return false; } }
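    One thing to check: the post never shows the sendMail() helper (presumably defined in function.inc.php), and that is exactly the piece that fails when the database insert succeeds but no mail arrives. A minimal sketch of what it might look like, assuming PHP's built-in mail(); if mail() returns false, the server's MTA/SMTP configuration is the first thing to verify:

    function sendMail($to, $subject, $message, $from) {
        // From/Reply-To headers so replies and spam filters behave sensibly
        $headers = "From: " . $from . "\r\n" .
                   "Reply-To: " . $from . "\r\n" .
                   "X-Mailer: PHP/" . phpversion();
        // mail() returns false when the message could not be handed to the MTA
        return mail($to, $subject, $message, $headers);
    }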

    Read the article

  • South Florida Code Camp 2010 &ndash; VI &ndash; 2010-02-27

    - by Dave Noderer
    Catching up after our sixth code camp here in the Ft Lauderdale, FL area. Website at: http://www.fladotnet.com/codecamp. For the 5th time, DeVry University hosted the event, which makes everything else really easy! Statistics from 2010 South Florida Code Camp: 848 registered (we use Microsoft Group Events) ~ 600 attended (516 took name badges) 64 speakers (including speaker idol) 72 sessions 12 parallel tracks Food 400 waters 600 sodas 900 cups of coffee (it was cold!) 200 pounds of ice 200 pizzas 10 large salad trays 900 mouse pads Photos on facebook Dave Noderer: http://www.facebook.com/home.php#!/album.php?aid=190812&id=693530361 Joe Healy: http://www.facebook.com/devfish?ref=mf#!/album.php?aid=202787&id=720054950 Will Strohl: http://www.facebook.com/home.php#!/album.php?aid=2045553&id=1046966128&ref=mf Veronica Gonzalez: http://www.facebook.com/home.php#!/album.php?aid=150954&id=672439484 Florida Speaker Idol One of the sessions at code camp was the South Florida Regional speaker idol competition. After user group level competitions there were five competitors. I acted as MC and score keeper while Ed Hill, Bob O'Connell, John Dunagan and Shervin Shakibi were judges. This statewide competition is being run by Roy Lawsen in Lakeland, and the winner, Jeff Truman from Naples, will move on to the state finals to be held at the Orlando Code Camp on 3/27/2010: http://www.orlandocodecamp.com/. Each speaker has 10 minutes. The participants were: Alex Koval Jeff Truman Jared Nielsen Chris Catto Venkat Narayanasamy They all did a great job, and I'm working with each to make sure they don't stop there and start speaking at meetings. Thanks to everyone involved! Volunteers As always, events like this don't happen without a lot of help! The key people were: Ed Hill, Bob O'Connell – DeVry For the months leading up to the event, Ed collects all of the swag, books, etc. and stores them. He holds meetings with various DeVry departments to coordinate the day, he works with the students in the days before code camp to stuff bags, print signs, arrange tables and visit BJ's for our supplies (I go and pay but have a small car!). And of course the day of the event he is there at 5:30 am!! We took two SUVs to BJ's; I was really worried that the 36 cases of water were going to break his rear axle! He also helps with the students and works very hard before and after the event. Rainer Haberman – Speakers and Volunteer of the Year Rainer has helped over the past couple of years, but this time he took full control of arranging the tracks. I did some preliminary work soliciting speakers, but he took over all communications after that. We have tried various organizations around speakers - chair per track, central team - but having someone paying attention to the details is definitely the way to go! This was the first year I did not have to jump in at the last minute and re-arrange everything. There were lots of kudos from the speakers too, saying they felt it was more organized than they have experienced in the past at any code camp. Thanks Rainer! Ray Alamonte – Book Swap We saw the idea of a book swap from the Alabama Code Camp and thought we would give it a try. Ray jumped in and took control. The idea was to get people to bring their old technical books to swap or for others to buy. You got a ticket for each book you brought that you could then turn in to buy another book. If you did not have a ticket you could buy a book for $1. Net proceeds were $153, which I rounded up and donated to the Red Cross. 
    There is plenty going on in Haiti and Chile! I don't think we really got a count of how many books came in. In many cases the books barely hit the table before being picked up again. At the end we were left with a dozen books, which we donated to the DeVry library. A great success we will definitely repeat! Jace Weiss / Ratchelen Hut – Coffee and Snacks Wow, this was an eye opener. In past years a few of us would struggle to give some attention to coffee, snacks, etc. But it was always tenuous, and we always ended up running out of coffee. In the past we have tried buying Dunkin Donuts coffee, renting urns, borrowing urns, etc. This year I actually purchased two 100-cup Westbend commercial brewers plus a couple of small urns (30 and 60 cup, which we used for decaf). We got them both started early (although I forgot to push the on button on one!) and primed them with 10 boxes of Joe from Dunkin. Then Jace and Rachelen took over: once a batch was brewed they would refill the boxes, keep the area clean, and at one point were filling cups. We never ran out of coffee and served a few hundred more cups than last year. We did look, but next year I'll get a large insulated (like Gatorade) dispensing container. It all went very smoothly, and having help focused on that one area was a big win. Thanks Jace and Rachelen! Ken & Shirley Golding / Roberta Barbosa – Registration Ken & Shirley showed up and took over registration. This year we printed small name tags for everyone registered, which was great because it is much easier to remember someone's name when they are labeled! In any case it went the smoothest it has ever gone. All three were actively pulling people through the registration, answering questions, and directing them to bags and information very quickly. I did not see too big a line at any time. Thanks!! Scott Katarincic / Vishal Shukla – Website For the 3rd?? year in a row, Scott was in charge of the website, starting in August or September when I start on code camp. He handles all the requests and makes changes to the site and admin. I think he wrote all the backend administration two years ago, and he tunes it and the website a bit, but things are pretty stable. The only thing I do is put up the sponsors. It is a big pressure off of me!! Thanks Scott! Vishal jumped into the web end this year and created a new Silverlight agenda page to replace the old Ajax page. We will continue to enhance this, but it is definitely a good step forward! Thanks! Alex Funkhouser – T-shirts/Mouse pads/tables/sponsors Alex helps in many areas. He helps me bring in sponsors and handles all the logistics for t-shirts, sponsor tables and, this year, the mouse pads. He is also a key person in helping promote the event, not to mention the after-after party, which I did not attend and don't want to know much about! Students There were a number of student volunteers, but I don't have all of their names. Thanks to them: they stuffed bags, patrolled pizza and helped with moving things around. Sponsors We had a bunch of great sponsors, which allowed us to feed people and give away a lot of great swag. Our major sponsors - DeVry, Microsoft (both DPE and UGSS), Infragistics, Telerik, SQL Share (End to End, SQL Saturdays), and Interclick - are very much appreciated. 
    The other sponsors: Applied Innovations (who also supply code camp hosting), Ultimate Software (a great local SW company), Linxter (reliable cloud messaging we are lucky to have here!), Mediascend (a media startup), SoftwareFX (another local SW company we are happy to have back participating in CC), CozyRoc (if you do SSIS, check them out), Arrow Design (local DNN and Silverlight experts), Boxes and Arrows (a local SW consulting company) and Robert Half. One thing we did this year besides a t-shirt was a mouse pad. I like it because it will be around for a long time on many desks. After much investigation and years of using mouse pads, I've determined that the 1/8" fabric top is the best, and that is what we got! So now I get a break for a few months before starting again!

    Read the article

  • Cloud MBaaS : The Next Big Thing in Enterprise Mobility

    - by shiju
    In this blog post, I will take a look at Cloud Mobile Backend as a Service (MBaaS) and how we can leverage it for building enterprise mobile apps. Today, mobile apps are incredibly significant in both the consumer and enterprise space, and the demand for mobile apps is increasing unbelievably in day to day business. An enterprise can't survive in business without a proper mobility strategy. A better mobility strategy and faster delivery of your mobile apps will give you extra mileage for your business and IT strategy. So organizations and mobile developers are looking for different strategies for meeting this demand and adopting different development strategies for their mobile apps. Some developers are adopting hybrid mobile app development platforms for delivering their products on multiple platforms, for fast time-to-market. Others are adopting a Mobile enterprise application platform (MEAP) such as Kony for their enterprise mobile apps, for fast time-to-market and better business integration. The Challenges of Enterprise Mobility The real challenge of enterprise mobile apps is not about creating the front-end environment or developing the front-end for multiple platforms. The most important thing in enterprise mobile apps is exposing your enterprise data to mobile devices, and the real pain is that your business data might be residing in a lot of different systems, including legacy systems, ERP systems etc., and these systems will be deployed with a lot of security restrictions. Exposing your data from on-premises servers is not an easy thing for most business organizations. Many organizations are spending too much time on their front-end development strategy, but they are really lacking a strategy on their back-end for exposing the business data to mobile apps. So building a REST services layer and mobile back-end services, on top of legacy systems and existing middleware systems, is the key part of most enterprise mobile apps, where multiple mobile platforms can easily consume these REST services and other mobile back-end services for building mobile apps. For some mobile apps, we can't predict the user base, especially for products where customers can increase gradually at any time. And for today's mobile apps, faster time-to-market is very critical, so spending too much time on a mobile app's scalability will not be worth it. The real power of Cloud is the agility and on-demand scalability, where we can scale up and scale down our applications very easily. It would be great if we could bring the power of Cloud to mobile apps. So using Cloud for mobile apps is a natural fit, where we can use Cloud as the storage for mobile apps and as the hosting mechanism for mobile back-end services, and where we can enjoy the full power of Cloud with a greater level of on-demand scalability and operational agility. So Cloud based Mobile Backend as a Service is a great choice for building enterprise mobile apps, where enterprises can enjoy the massive scalability power of their mobile apps, provided by public cloud vendors such as Microsoft Windows Azure. Mobile Backend as a Service (MBaaS) We have discussed the key challenges of enterprise mobile apps and how we can leverage Cloud for hosting mobile backend services. MBaaS is a set of cloud-based, server-side mobile services for multiple mobile platforms and the HTML5 platform, which can be used as a backend for your mobile apps with the scalability power of Cloud. 
    The information below provides the key features of a typical MBaaS platform: Cloud based storage for your application data. Automatic REST API services on the application data, for CRUD operations. Native push notification services with massive scalability power. User management services for authenticating users. User authentication via social accounts such as Facebook, Google, Microsoft, and Twitter. Scheduler services for periodically sending data to mobile devices. Native SDKs for multiple mobile platforms such as Windows Phone and Windows Store, Android, Apple iOS, and HTML5, for easily accessing the mobile services from mobile apps, with better security. Typically, an MBaaS platform will provide native SDKs for multiple mobile platforms so that we can easily consume the server-side mobile services. MBaaS based REST APIs can be used for integrating with enterprise backend systems. We can use the same mobile services for multiple platforms, so that we can reuse the application logic across mobile platforms. Public cloud vendors are building mobile services on top of their PaaS offerings. Windows Azure Mobile Services is a great platform for an MBaaS offering that leverages the Windows Azure Cloud platform's PaaS capabilities. Hybrid mobile development platform Titanium provides its own MBaaS services. LoopBack is a new MBaaS service provided by Node.js consulting firm StrongLoop, which can be hosted on multiple cloud platforms and also on on-premises servers. The Challenges of MBaaS Solutions If you are building your mobile apps with new data storage, it will be very easy, since there are no integration challenges to face. But in most use cases, you have to extract your application data which is stored on on-premises servers, and these might be behind VPNs and firewalls. So exposing this data to your MBaaS solution with proper security would be a big challenge. The capability of your MBaaS vendor is very important, as you have to interact with your legacy systems for many enterprise mobile apps. So you should be very careful about choosing your MBaaS vendor. At the same time, you should have a proper strategy for mobilizing your application data stored in on-premises legacy systems, where your solution architecture and strategy are more important than platforms and tools. Windows Azure Mobile Services Windows Azure Mobile Services is an MBaaS offering from the Windows Azure cloud platform. IMHO, Microsoft Windows Azure is the best PaaS platform in the Cloud space. Windows Azure Mobile Services extends the PaaS capabilities of Windows Azure to mobile devices, so it can be used as a cloud backend for your mobile apps, providing global availability and reach. Windows Azure Mobile Services provides storage services, user management with social network integration, push notification services and scheduler services, and provides native SDKs for all major mobile platforms and HTML5. In Windows Azure Mobile Services, you can write server-side scripts in Node.js, where you can enjoy the full power of Node.js, including the use of NPM modules, for your server-side scripts. In the previous section, we discussed some challenges of MBaaS solutions. You can leverage the Windows Azure Cloud platform for solving many challenges regarding enterprise mobility. The entire Windows Azure platform can play a key role as the backend for your mobile apps. 
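    To make the server-side scripting model concrete, here is a minimal sketch (not from the original post) of a Windows Azure Mobile Services table insert script, using the insert(item, user, request) signature those table scripts follow; the createdAt column is purely illustrative:

    function insert(item, user, request) {
        // stamp the record on the server before it is written
        item.createdAt = new Date();
        request.execute({
            success: function () {
                // return the inserted item (including server-set fields) to the client
                request.respond();
            }
        });
    }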
With Windows Azure you can easily connect to your on-premises systems, which is a key requirement for mobile backend solutions. Another key point is that Windows Azure provides better integration with services like Active Directory, which makes it the de facto platform for enterprise mobility for enterprises that have been leveraging the Microsoft ecosystem for their applications and IT infrastructure. Windows Azure Mobile Services is moving to its next evolution, and you can expect some exciting features in the near future. One area where Windows Azure Mobile Services definitely needs improvement is the default storage mechanism, which currently depends on SQL Server. IMHO, developers should be able to choose from multiple storage options when creating a new mobile service instance - say, different storage providers such as a SQL Server storage provider and a Table storage provider, chosen per mobile services project. I have used Windows Azure and Windows Azure Mobile Services as the backend for production mobile apps, where they performed very well.

MBaaS Over MEAP Recently, many larger enterprises have adopted a Mobile Enterprise Application Platform (MEAP) for their mobile apps. I haven't worked on any production MEAP solution, but I hear that developers really struggle with MEAP in various ways, and the learning curve for a proprietary MEAP platform is very high. I am completely against using a large proprietary ecosystem for mobile apps. For enterprise mobile apps, I highly recommend using native iOS/Android/Windows Phone or HTML5 for the front end, with a cloud-hosted MBaaS solution as the middleware. An MBaaS service can be consumed from multiple mobile apps, with REST APIs used to integrate with enterprise backend systems. Enterprise mobility should start with exposing REST APIs on the enterprise backend systems; these REST APIs can be hosted on the Cloud, where we can enjoy the power of the Cloud for our services. If you have REST APIs for your enterprise data, you can easily build mobile front ends for multiple platforms.

You can follow me on Twitter @shijucv

    Read the article

  • .NET 4.5 is an in-place replacement for .NET 4.0

    - by Rick Strahl
    With the betas for .NET 4.5, Visual Studio 11, and Windows 8 shipping, many people will be installing .NET 4.5 and hacking away on it. There are a number of great enhancements that are fairly transparent, but it's important to understand what .NET 4.5 actually is in terms of the CLR running on your machine. When .NET 4.5 is installed it effectively replaces .NET 4.0 on the machine: .NET 4.0 gets overwritten by a new version of .NET 4.5 which - according to Microsoft - is supposed to be 100% backwards compatible. While 100% backwards compatible sounds great, we all know that 100% is a hard number to hit, and even the aforementioned blog post at the Microsoft site acknowledges this. But there's so much more than backwards compatibility that makes this awkward at best and confusing at worst.

What does 'Replacement' mean? When you install .NET 4.5, your .NET 4.0 assemblies in \Windows\Microsoft.NET\Framework\v4.0.30319 are overwritten with a new set of assemblies. You end up with overwritten assemblies as well as a bunch of new ones (like the new System.Net.Http assemblies, for example). The following screen shot demonstrates system.dll on my test machine running .NET 4.5 (left) and on my production laptop running stock .NET 4.0 (right). Clearly they are different files with a difference in file sizes (interesting that the 4.5 version is actually smaller). That's not all. If you query the runtime version when .NET 4.5 is installed with Environment.Version, you still get 4.0.30319. If you open the properties of the System.dll assembly in .NET 4.5, you'll see that the file version is also left at 4.0.xxx. There are differences in build numbers: .NET 4.0 shows 261 and the current .NET 4.5 beta build is 17379. I suppose you can assume a build number greater than 17000 is .NET 4.5, but that's pretty hokey to say the least. There's no easy or obvious way to tell whether you are running on 4.0 or 4.5 - to the application they appear to be the same runtime version. And that is what Microsoft intends here: .NET 4.5 is intended as an in-place upgrade.
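Here is a minimal C# sketch of that hokey build-number check. The 17000 cutoff is taken from the beta build numbers above and is an assumption that may not hold for later builds:

    using System;
    using System.Diagnostics;

    public class RuntimeCheck
    {
        public static void Main()
        {
            // Reports 4.0.30319 on both .NET 4.0 and .NET 4.5 - no help here.
            Console.WriteLine("Environment.Version: {0}", Environment.Version);

            // Inspect the file version of a core framework assembly instead.
            // 4.0 RTM shows build 261; the 4.5 beta shows build 17379.
            var info = FileVersionInfo.GetVersionInfo(
                typeof(Uri).Assembly.Location);   // typeof(Uri) lives in System.dll
            Console.WriteLine("System.dll file version: {0}", info.FileVersion);

            Console.WriteLine(info.FileBuildPart > 17000
                ? "Probably .NET 4.5"
                : "Probably .NET 4.0");
        }
    }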
Compile to 4.5, run on 4.0 - not quite! You can compile an application for .NET 4.5 and run it on the 4.0 runtime - that is, until you hit a new feature that doesn't exist in 4.0, at which point the app bombs at runtime. Say you write some code that is mostly .NET 4.0 but has a few of the new .NET 4.5 features like async/await buried deep in the bowels of the application, where they only fire occasionally. .NET will happily start your application and run all the 4.0 code fine until it hits that 4.5 code - and then crash unceremoniously at runtime. Oh joy! You can run .NET 4.0 applications on .NET 4.5, of course, and that should work without much fanfare.

Different than .NET 3.0/3.5 Note that this in-place replacement is very different from the side-by-side installs of .NET 2.0 and 3.0/3.5, which all ran on the 2.0 version of the CLR. The two 3.x versions were basically library enhancements on top of the core .NET 2.0 runtime. Both versions ran under the .NET 2.0 runtime, which wasn't changed (other than for security patches and bug fixes) for the whole 3.x cycle. The 4.5 update instead completely replaces the .NET 4.0 runtime and leaves the actual version number set at v4.0.30319. When you build a new project with Visual Studio 2011, you can still target .NET 4.0 or .NET 4.5, but you are in effect referencing the same set of assemblies for both regardless of which version you use. What's different is the compiler used to compile and link your code: compiling with .NET 4.0 gives you just the subset of the functionality available in .NET 4.0, while the 4.5 compiler gives you the full functionality of what's actually available in the assemblies and extra libraries. It doesn't look like you will be able to use Visual Studio 2010 to develop .NET 4.5 applications.

Good news - Bad news Microsoft is apparently trying hard to experiment with every possible permutation of releasing new versions of the .NET framework; no two updates have been the same. Clearly updating to a full new version of .NET (i.e. the .NET 2.0, 4.0, and at some point 5.0 runtimes) has its own set of challenges, but doing an in-place update of the runtime and then not even providing a good way to tell which version is installed is pretty whacky even by Microsoft's standards - especially given that .NET 4.5 includes a fairly significant update, with all the async functionality baked into the runtime. Most of the IO APIs have been updated to support task-based async operation, which significantly affects many existing APIs. To make things worse, .NET 4.5 will be the initial version of .NET that ships with Windows 8, so it will be with us for a long time to come unless Microsoft finally decides to push .NET versions onto Windows machines as part of system upgrades (which currently doesn't happen). This is the same story we had when Vista launched with .NET 3.0, a minor version that was quickly replaced by 3.5, which was more long-lived and practical. People had enough problems dealing with the confusing versioning of the 3.x versions, which ran on .NET 2.0. I can't count the number of support calls and questions I've fielded because people couldn't find a .NET 3.5 entry in the IIS version dialog. The same is likely to happen with .NET 4.5. It's all well and good when we know that .NET 4.5 is an in-place replacement, but administrators and IT folks not intimately familiar with .NET are unlikely to understand this nuance and will end up thoroughly confused about which version is installed. It's hard for me to see any upside to an in-place update, and I haven't really seen a good explanation of why this approach was decided on. Sure, if the version stays the same, existing assembly bindings don't break, so applications can stay running through an update. I suppose this is useful for some component vendors and strongly signed assemblies in corporate environments. But seriously, if you are going to throw .NET 4.5 into the mix, who won't be recompiling all code and thoroughly testing that it works on .NET 4.5? A recompile requirement doesn't seem that serious in light of a major version upgrade.

Resources http://blogs.msdn.com/b/dotnet/archive/2011/09/26/compatibility-of-net-framework-4-5.aspx http://www.devproconnections.com/article/net-framework/net-framework-45-versioning-faces-problems-141160

© Rick Strahl, West Wind Technologies, 2005-2012. Posted in .NET

    Read the article

  • How to manually patch Blogger template to use Disqus

    - by user317944
    I'm trying to add disqus to my blog and I tried following this guide to do so: http://disqus.com/docs/patch-blogger/ However their instructions are completely off with what I have on my custom template. Here is the template: <b:skin><![CDATA[/*----------------------------------------------- Blogger Template Style Name: Picture Window Designer: Josh Peterson URL: www.noaesthetic.com ----------------------------------------------- */ /* Variable definitions ==================== */ /* Content ----------------------------------------------- */ body { font: $(body.font); color: $(body.text.color); } html body .region-inner { min-width: 0; max-width: 100%; width: auto; } .content-outer { font-size: 90%; } a:link { text-decoration:none; color: $(link.color); } a:visited { text-decoration:none; color: $(link.visited.color); } a:hover { text-decoration:underline; color: $(link.hover.color); } .body-fauxcolumn-outer { background: $(body.background); } .content-outer { background: $(content.background); -moz-border-radius: $(content.border.radius); -webkit-border-radius: $(content.border.radius); -goog-ms-border-radius: $(content.border.radius); border-radius: $(content.border.radius); -moz-box-shadow: 0 0 $(content.shadow.spread) rgba(0, 0, 0, .15); -webkit-box-shadow: 0 0 $(content.shadow.spread) rgba(0, 0, 0, .15); -goog-ms-box-shadow: 0 0 $(content.shadow.spread) rgba(0, 0, 0, .15); box-shadow: 0 0 $(content.shadow.spread) rgba(0, 0, 0, .15); margin: $(content.margin) auto; } .content-inner { padding: $(content.padding); } /* Header ----------------------------------------------- */ .header-outer { background: $(header.background.color) $(header.background.gradient) repeat-x scroll top left; _background-image: none; color: $(header.text.color); -moz-border-radius: $(header.border.radius); -webkit-border-radius: $(header.border.radius); -goog-ms-border-radius: $(header.border.radius); border-radius: $(header.border.radius); } .Header img, .Header #header-inner { -moz-border-radius: $(header.border.radius); -webkit-border-radius: $(header.border.radius); -goog-ms-border-radius: $(header.border.radius); border-radius: $(header.border.radius); } .header-inner .Header .titlewrapper, .header-inner .Header .descriptionwrapper { padding-left: $(header.padding); padding-right: $(header.padding); } .Header h1 { font: $(header.font); text-shadow: 1px 1px 3px rgba(0, 0, 0, 0.3); } .Header h1 a { color: $(header.text.color); } .Header .description { font-size: 130%; } /* Tabs ----------------------------------------------- */ .tabs-inner { margin: .5em $(tabs.margin.sides) $(tabs.margin.bottom); padding: 0; } .tabs-inner .section { margin: 0; } .tabs-inner .widget ul { padding: 0; background: $(tabs.background.color) $(tabs.background.gradient) repeat scroll bottom; -moz-border-radius: $(tabs.border.radius); -webkit-border-radius: $(tabs.border.radius); -goog-ms-border-radius: $(tabs.border.radius); border-radius: $(tabs.border.radius); } .tabs-inner .widget li { border: none; } .tabs-inner .widget li a { display: block; padding: .5em 1em; margin-$endSide: $(tabs.spacing); color: $(tabs.text.color); font: $(tabs.font); -moz-border-radius: $(tab.border.radius) $(tab.border.radius) 0 0; -webkit-border-top-left-radius: $(tab.border.radius); -webkit-border-top-right-radius: $(tab.border.radius); -goog-ms-border-radius: $(tab.border.radius) $(tab.border.radius) 0 0; border-radius: $(tab.border.radius) $(tab.border.radius) 0 0; background: $(tab.background); border-$endSide: 1px solid $(tabs.separator.color); } 
.tabs-inner .widget li:first-child a { padding-$startSide: 1.25em; -moz-border-radius-top$startSide: $(tab.first.border.radius); -moz-border-radius-bottom$startSide: $(tabs.border.radius); -webkit-border-top-$startSide-radius: $(tab.first.border.radius); -webkit-border-bottom-$startSide-radius: $(tabs.border.radius); -goog-ms-border-top-$startSide-radius: $(tab.first.border.radius); -goog-ms-border-bottom-$startSide-radius: $(tabs.border.radius); border-top-$startSide-radius: $(tab.first.border.radius); border-bottom-$startSide-radius: $(tabs.border.radius); } .tabs-inner .widget li.selected a, .tabs-inner .widget li a:hover { position: relative; z-index: 1; background: $(tabs.selected.background.color) $(tab.selected.background.gradient) repeat scroll bottom; color: $(tabs.selected.text.color); -moz-box-shadow: 0 0 $(region.shadow.spread) rgba(0, 0, 0, .15); -webkit-box-shadow: 0 0 $(region.shadow.spread) rgba(0, 0, 0, .15); -goog-ms-box-shadow: 0 0 $(region.shadow.spread) rgba(0, 0, 0, .15); box-shadow: 0 0 $(region.shadow.spread) rgba(0, 0, 0, .15); } /* Headings ----------------------------------------------- */ h2 { font: $(widget.title.font); text-transform: $(widget.title.text.transform); color: $(widget.title.text.color); margin: .5em 0; } /* Main ----------------------------------------------- */ .main-outer { background: $(main.background); -moz-border-radius: $(main.border.radius.top) $(main.border.radius.top) 0 0; -webkit-border-top-left-radius: $(main.border.radius.top); -webkit-border-top-right-radius: $(main.border.radius.top); -webkit-border-bottom-left-radius: 0; -webkit-border-bottom-right-radius: 0; -goog-ms-border-radius: $(main.border.radius.top) $(main.border.radius.top) 0 0; border-radius: $(main.border.radius.top) $(main.border.radius.top) 0 0; -moz-box-shadow: 0 $(region.shadow.offset) $(region.shadow.spread) rgba(0, 0, 0, .15); -webkit-box-shadow: 0 $(region.shadow.offset) $(region.shadow.spread) rgba(0, 0, 0, .15); -goog-ms-box-shadow: 0 $(region.shadow.offset) $(region.shadow.spread) rgba(0, 0, 0, .15); box-shadow: 0 $(region.shadow.offset) $(region.shadow.spread) rgba(0, 0, 0, .15); } .main-inner { padding: 15px $(main.padding.sides) 20px; } .main-inner .column-center-inner { padding: 0 0; } .main-inner .column-left-inner { padding-left: 0; } .main-inner .column-right-inner { padding-right: 0; } /* Posts ----------------------------------------------- */ h3.post-title { margin: 0; font: $(post.title.font); } .comments h4 { margin: 1em 0 0; font: $(post.title.font); } .post-outer { background-color: $(post.background.color); border: solid 1px $(post.border.color); -moz-border-radius: $(post.border.radius); -webkit-border-radius: $(post.border.radius); border-radius: $(post.border.radius); -goog-ms-border-radius: $(post.border.radius); padding: 15px 20px; margin: 0 $(post.margin.sides) 20px; } .post-body { line-height: 1.4; font-size: 110%; } .post-header { margin: 0 0 1.5em; color: $(post.footer.text.color); line-height: 1.6; } .post-footer { margin: .5em 0 0; color: $(post.footer.text.color); line-height: 1.6; } blog-pager { font-size: 140% } comments .comment-author { padding-top: 1.5em; border-top: dashed 1px #ccc; border-top: dashed 1px rgba(128, 128, 128, .5); background-position: 0 1.5em; } comments .comment-author:first-child { padding-top: 0; border-top: none; } .avatar-image-container { margin: .2em 0 0; } /* Widgets ----------------------------------------------- */ .widget ul, .widget #ArchiveList ul.flat { padding: 0; list-style: none; } .widget ul 
li, .widget #ArchiveList ul.flat li { border-top: dashed 1px #ccc; border-top: dashed 1px rgba(128, 128, 128, .5); } .widget ul li:first-child, .widget #ArchiveList ul.flat li:first-child { border-top: none; } .widget .post-body ul { list-style: disc; } .widget .post-body ul li { border: none; } /* Footer ----------------------------------------------- */ .footer-outer { color:$(footer.text.color); background: $(footer.background); -moz-border-radius: $(footer.border.radius.top) $(footer.border.radius.top) $(footer.border.radius.bottom) $(footer.border.radius.bottom); -webkit-border-top-left-radius: $(footer.border.radius.top); -webkit-border-top-right-radius: $(footer.border.radius.top); -webkit-border-bottom-left-radius: $(footer.border.radius.bottom); -webkit-border-bottom-right-radius: $(footer.border.radius.bottom); -goog-ms-border-radius: $(footer.border.radius.top) $(footer.border.radius.top) $(footer.border.radius.bottom) $(footer.border.radius.bottom); border-radius: $(footer.border.radius.top) $(footer.border.radius.top) $(footer.border.radius.bottom) $(footer.border.radius.bottom); -moz-box-shadow: 0 $(region.shadow.offset) $(region.shadow.spread) rgba(0, 0, 0, .15); -webkit-box-shadow: 0 $(region.shadow.offset) $(region.shadow.spread) rgba(0, 0, 0, .15); -goog-ms-box-shadow: 0 $(region.shadow.offset) $(region.shadow.spread) rgba(0, 0, 0, .15); box-shadow: 0 $(region.shadow.offset) $(region.shadow.spread) rgba(0, 0, 0, .15); } .footer-inner { padding: 10px $(main.padding.sides) 20px; } .footer-outer a { color: $(footer.link.color); } .footer-outer a:visited { color: $(footer.link.visited.color); } .footer-outer a:hover { color: $(footer.link.hover.color); } .footer-outer .widget h2 { color: $(footer.widget.title.text.color); } ]] <b:template-skin> <b:variable default='930px' name='content.width' type='length' value='930px'/> <b:variable default='0' name='main.column.left.width' type='length' value='180px'/> <b:variable default='360px' name='main.column.right.width' type='length' value='180px'/> <![CDATA[ body { min-width: $(content.width); } .content-outer, .region-inner { min-width: $(content.width); max-width: $(content.width); _width: $(content.width); } .main-inner .columns { padding-left: $(main.column.left.width); padding-right: $(main.column.right.width); } .main-inner .fauxcolumn-center-outer { left: $(main.column.left.width); right: $(main.column.right.width); /* IE6 does not respect left and right together */ _width: expression(this.parentNode.offsetWidth - parseInt("$(main.column.left.width)") - parseInt("$(main.column.right.width)") + 'px'); } .main-inner .fauxcolumn-left-outer { width: $(main.column.left.width); } .main-inner .fauxcolumn-right-outer { width: $(main.column.right.width); } .main-inner .column-left-outer { width: $(main.column.left.width); right: $(main.column.left.width); margin-right: -$(main.column.left.width); } .main-inner .column-right-outer { width: $(main.column.right.width); margin-right: -$(main.column.right.width); } #layout { min-width: 0; } #layout .content-outer { min-width: 0; width: 800px; } #layout .region-inner { min-width: 0; width: auto; } ]]> </b:template-skin> <div class='main-cap-bottom cap-bottom'> <div class='cap-left'/> <div class='cap-right'/> </div> </div> <footer> <div class='footer-outer'> <div class='footer-cap-top cap-top'> <div class='cap-left'/> <div class='cap-right'/> </div> <div class='fauxborder-left footer-fauxborder-left'> <div class='fauxborder-right footer-fauxborder-right'/> <div class='region-inner 
footer-inner'> <macro:include id='footer-sections' name='sections'> <macro:param default='2' name='num' value='3'/> <macro:param default='footer' name='idPrefix'/> <macro:param default='foot' name='class'/> <macro:param default='false' name='includeBottom'/> </macro:include> <!-- outside of the include in order to lock Attribution widget --> <b:section class='foot' id='footer-3' showaddelement='no'> document.body.className = document.body.className.replace('loading', ''); <macro:if cond='data:col.num &gt;= 2'> <table border='0' cellpadding='0' cellspacing='0' mexpr:class='&quot;section-columns columns-&quot; + data:col.num'> <tbody> <tr> <td class='first columns-cell'> <b:section mexpr:class='data:col.class' mexpr:id='data:col.idPrefix + &quot;-2-1&quot;'/> </td> <td class='columns-cell'> <b:section mexpr:class='data:col.class' mexpr:id='data:col.idPrefix + &quot;-2-2&quot;'/> </td> <macro:if cond='data:col.num &gt;= 3'> <td class='columns-cell'> <b:section mexpr:class='data:col.class' mexpr:id='data:col.idPrefix + &quot;-2-3&quot;'/> </td> </macro:if> <macro:if cond='data:col.num &gt;= 4'> <td class='columns-cell'> <b:section mexpr:class='data:col.class' mexpr:id='data:col.idPrefix + &quot;-2-4&quot;'/> </td> </macro:if> </tr> </tbody> </table> <macro:if cond='data:col.includeBottom'> <b:section mexpr:class='data:col.class' mexpr:id='data:col.idPrefix + &quot;-3&quot;' showaddelement='no'/> </macro:if> </macro

    Read the article

  • SQL SERVER – Introduction to SQL Server 2014 In-Memory OLTP

    - by Pinal Dave
    In SQL Server 2014, Microsoft has introduced a new database engine component called In-Memory OLTP, aka project "Hekaton", which is fully integrated into the SQL Server Database Engine and optimized for OLTP workloads accessing memory-resident data. In-Memory OLTP helps us create memory-optimized tables, which in turn offer a significant performance improvement for typical OLTP workloads. The main objective of a memory-optimized table is to ensure that highly transactional tables can live in memory, and remain in memory, without losing a single record. Most significantly, it still supports the majority of Transact-SQL statements, and Transact-SQL stored procedures can be compiled to machine code for further performance improvement on memory-optimized tables. The engine is designed for higher concurrency and minimal blocking: In-Memory OLTP alleviates locking by using a new type of multi-version optimistic concurrency control, and it substantially reduces waiting for log writes by generating far less log data and needing fewer log writes.

Points to remember
- Memory-optimized tables refer to tables using the new data structures and keywords added as part of In-Memory OLTP.
- Disk-based tables refer to the normal tables we have been creating in SQL Server since its inception. These tables use fixed-size 8 KB pages that are read from and written to disk as a unit.
- Natively compiled stored procedures are a new object type, supported by the In-Memory OLTP engine, that is converted into machine code to further improve data access performance for memory-optimized tables. Natively compiled stored procedures can only reference memory-optimized tables; they can't reference any disk-based table.
- Interpreted Transact-SQL stored procedures are what SQL Server has always used.
- Cross-container transactions refer to transactions that reference both memory-optimized tables and disk-based tables.
- Interop refers to interpreted Transact-SQL that references memory-optimized tables.

Using In-Memory OLTP The In-Memory OLTP engine has been available as part of the SQL Server 2014 CTPs since June 2013. Installation of In-Memory OLTP is part of the SQL Server setup application. The In-Memory OLTP components can only be installed with a 64-bit edition of SQL Server 2014; they are not available with 32-bit editions.

Creating Databases Any database that will store memory-optimized tables must have a MEMORY_OPTIMIZED_DATA filegroup. This filegroup is specifically designed to store the checkpoint files needed by SQL Server to recover the memory-optimized tables, and although the syntax for creating the filegroup is almost the same as for creating a regular filestream filegroup, it must also specify the option CONTAINS MEMORY_OPTIMIZED_DATA.
Here is an example of a CREATE DATABASE statement for a database that can support memory-optimized tables (note that each container needs a unique logical name):

    CREATE DATABASE InMemoryDB
    ON PRIMARY (NAME = [InMemoryDB_data], FILENAME = 'D:\data\InMemoryDB_data.mdf', SIZE = 500MB),
    FILEGROUP [SampleDB_mod_fg] CONTAINS MEMORY_OPTIMIZED_DATA
        (NAME = [InMemoryDB_mod_dir1], FILENAME = 'S:\data\InMemoryDB_mod_dir'),
        (NAME = [InMemoryDB_mod_dir2], FILENAME = 'R:\data\InMemoryDB_mod_dir')
    LOG ON (NAME = [SampleDB_log], FILENAME = 'L:\log\InMemoryDB_log.ldf', SIZE = 500MB)
    COLLATE Latin1_General_100_BIN2;

The example code above creates files on three different drives (D:, S:, and R:) for the data files and in-memory storage, so if you would like to run this code, kindly change the drive and folder locations to suit your environment. Also notice that a Windows (non-SQL) binary collation was specified: a BIN2 collation is the only collation supported at this point for indexes on memory-optimized tables. It is also possible to add a MEMORY_OPTIMIZED_DATA filegroup to an existing database; use the commands below to achieve the same:

    ALTER DATABASE AdventureWorks2012
    ADD FILEGROUP hekaton_mod CONTAINS MEMORY_OPTIMIZED_DATA;
    GO
    ALTER DATABASE AdventureWorks2012
    ADD FILE (NAME = 'hekaton_mod', FILENAME = 'S:\data\hekaton_mod') TO FILEGROUP hekaton_mod;
    GO

Creating Tables There is no major syntactic difference between creating a disk-based table and a memory-optimized table, but there are a few restrictions and a few new essential extensions. Essentially, any memory-optimized table should use the MEMORY_OPTIMIZED = ON clause, as shown in the CREATE TABLE example below.

DURABILITY clause (SCHEMA_AND_DATA or SCHEMA_ONLY) A memory-optimized table should always be defined with a DURABILITY value, which can be either SCHEMA_AND_DATA or SCHEMA_ONLY, the former being the default. A memory-optimized table defined with DURABILITY = SCHEMA_ONLY will not persist its data to disk, which means data durability is compromised, whereas DURABILITY = SCHEMA_AND_DATA ensures that data is persisted along with the schema.

Indexing Memory-Optimized Tables A memory-optimized table must always have an index if it is created with DURABILITY = SCHEMA_AND_DATA, and this can be achieved by declaring a PRIMARY KEY constraint when creating the table. The following example shows a PRIMARY KEY index created as a HASH index, for which a bucket count must also be specified:

    CREATE TABLE Mem_Table
    (
        [Name] VARCHAR(32) NOT NULL PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 100000),
        [City] VARCHAR(32) NULL,
        [State_Province] VARCHAR(32) NULL,
        [LastModified] DATETIME NOT NULL
    ) WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);

As you can see in the example above, we used the MEMORY_OPTIMIZED = ON clause to make sure this is treated as a memory-optimized table rather than a normal table, and the DURABILITY = SCHEMA_AND_DATA clause so that data is persisted along with the metadata. You can also notice that the table has a PRIMARY KEY declared up front, which is mandatory for memory-optimized tables with SCHEMA_AND_DATA durability. We will talk more about HASH indexes and BUCKET_COUNT in later articles on this topic, which will focus on row and index storage for memory-optimized tables, so stay tuned for that as well. Now that we have covered the basics of memory-optimized tables and the key things to remember when using them, let's explore some examples to understand the performance gains.
I will use the InMemoryDB database created earlier in this article for the demo exercise below.

    USE InMemoryDB
    GO

    -- Creating a disk-based table
    CREATE TABLE dbo.Disktable
    (
        Id INT IDENTITY,
        Name CHAR(40)
    )
    GO

    CREATE NONCLUSTERED INDEX IX_ID ON dbo.Disktable (Id)
    GO

    -- Creating a memory-optimized table with a similar structure and DURABILITY = SCHEMA_AND_DATA
    CREATE TABLE dbo.Memorytable_durable
    (
        Id INT NOT NULL PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
        Name CHAR(40)
    ) WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA)
    GO

    -- Creating another memory-optimized table with a similar structure but DURABILITY = SCHEMA_ONLY
    CREATE TABLE dbo.Memorytable_nondurable
    (
        Id INT NOT NULL PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
        Name CHAR(40)
    ) WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_ONLY)
    GO

    -- Now insert 100000 records into dbo.Disktable and observe the time taken
    DECLARE @i_t BIGINT
    SET @i_t = 1
    WHILE @i_t <= 100000
    BEGIN
        INSERT INTO dbo.Disktable(Name) VALUES('sachin' + CONVERT(VARCHAR, @i_t))
        SET @i_t += 1
    END
    GO

    -- Do the same inserts for the memory-optimized table dbo.Memorytable_durable and observe the time taken
    DECLARE @i_t BIGINT
    SET @i_t = 1
    WHILE @i_t <= 100000
    BEGIN
        INSERT INTO dbo.Memorytable_durable VALUES(@i_t, 'sachin' + CONVERT(VARCHAR, @i_t))
        SET @i_t += 1
    END
    GO

    -- Finally, do the same inserts for the memory-optimized table dbo.Memorytable_nondurable and observe the time taken
    DECLARE @i_t BIGINT
    SET @i_t = 1
    WHILE @i_t <= 100000
    BEGIN
        INSERT INTO dbo.Memorytable_nondurable VALUES(@i_t, 'sachin' + CONVERT(VARCHAR, @i_t))
        SET @i_t += 1
    END
    GO

The three inserts above took 1.20 minutes, 54 seconds, and 2 seconds respectively to insert 100000 records on my machine with 8 GB of RAM. This proves the point that memory-optimized tables can definitely help businesses achieve better performance for their highly transactional tables, and a memory-optimized table with DURABILITY = SCHEMA_ONLY is even faster still, as it does not persist its data to disk at all. Koenig Solutions is one of the few organizations which offer IT training on SQL Server 2014 and all its updates. Now, I leave the decision on using memory-optimized tables to you. I hope you like this article and that it helped you understand the fundamentals of In-Memory OLTP. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Performance, SQL Query, SQL Server, SQL Tips and Tricks, T SQL Tagged: Koenig

    Read the article

  • The Proper Use of the VM Role in Windows Azure

    - by BuckWoody
    At the Professional Developer's Conference (PDC) in 2010 we announced an addition to the computational roles in Windows Azure, called the VM Role. This new feature allows a great deal of control over the applications you write, but some have confused it with our full infrastructure offering in Windows Hyper-V. There is a proper architecture pattern for both of them.

Virtualization Virtualization is the process of taking all of the hardware of a physical computer and replicating it in software alone. This means that a single computer can "host" or run several "virtual" computers. These virtual computers can run anywhere - including at a vendor's location. Some companies refer to this as Cloud Computing, since the hardware is operated and maintained elsewhere.

IaaS The more detailed definition of this type of computing is called Infrastructure as a Service (IaaS), since it removes the need for you to maintain hardware at your organization. The operating system, drivers, and all the other software required to run an application are still under your control and your responsibility to license, patch, and scale. Microsoft has an offering in this space called Hyper-V, which runs on the Windows operating system. Combined with a hardware hosting vendor and the System Center software to create and deploy Virtual Machines (a process referred to as provisioning), you can create a Cloud environment with full control over all aspects of the machine, including multiple operating systems if you like. Hosting machines and provisioning them at your own buildings is sometimes called a Private Cloud, and hosting them somewhere else is often called a Public Cloud.

State-ful and Stateless Programming This paradigm does not create a new, scalable way of computing. It simply moves the hardware away. The reason is that when you limit the Cloud efforts to a Virtual Machine, you are in effect limiting the computing resources to what that single system can provide, because much of the software developed in this environment maintains "state" - and that requires a little explanation. "State-ful programming" means that all parts of the computing environment stay connected to each other throughout a compute cycle. The system expects the memory, CPU, storage, and network to remain in the same state from the beginning of the process to the end. You can think of this as a telephone conversation: you expect that the other person picks up the phone, listens to you, and talks back, all in a single unit of time. In "Stateless" computing the system is designed to allow the different parts of the code to run independently of each other. You can think of this like an e-mail exchange. You compose an e-mail from your system (it has the state when you're doing that) and then you walk away for a bit to make some coffee. A few minutes later you click the "send" button (the network has the state) and you go to a meeting. The server receives the message, stores it in a mail program's database (the mail server has the state now), and continues working on other mail. Finally, the other party logs on to their mail client and reads the mail (the other user has the state), responds to it, and so on. These events might be separated by milliseconds or even days, but the system continues to operate. The entire process doesn't maintain the state; each component does. This is the exact concept behind coding for Windows Azure.
The stateless programming model allows amazing rates of scale, since the message (think of the e-mail) can be broken apart by multiple programs and worked on in parallel (like when the e-mail goes to hundreds of users), and only the order of re-assembling the work is important to consider. For the exact same reason, if the system makes copies of those running programs, as Windows Azure does, you have built-in redundancy and recovery. It's just built into the design.

The Difference Between Infrastructure Designs and Platform Designs When you simply take a physical server running software and virtualize it, either privately or publicly, you haven't done anything to allow the code to scale or have recovery. That all has to be handled by adding more code and more Virtual Machines, which have a slight lag in maintaining the running state of the system. Add more machines and you get more lag, so the scale is limited. This is the primary limitation with IaaS. It's also not as easy to deploy these VMs, and more importantly, you're often charged on a longer billing basis to remove them. Your agility in IaaS is more limited.

Windows Azure is a Platform - meaning that you get objects you can code against. The code you write runs on multiple nodes with multiple copies, and it all works because of the magic of stateless programming. You don't worry, or even care, about what is running underneath. It could be Windows (and it is in fact a type of Windows Server), Linux, or anything else - but that isn't what you want to manage, monitor, maintain, or license. You don't want to deploy an operating system - you want to deploy an application. You want your code to run, and you don't care how it does that. Another benefit of PaaS is that you can ask for hundreds or thousands of new nodes of computing power - there's no provisioning, it just happens. And you can stop using them more quickly - and the base code for your application does not have to change to make this happen.

Windows Azure Roles and Their Use If you need your code to have a user interface, in Visual Studio you add a Web Role to your project, and if the code needs to do work that doesn't involve a user interface you can add a Worker Role. They are just containers that act a certain way. I'll provide more detail on those later. Note: That's a general description, so it's not entirely accurate, but it's accurate enough for this discussion.

So now we're back to that VM Role. Because of the name, some have mistakenly thought that you can take a Virtual Machine running, say, Linux, and deploy it to Windows Azure using this Role. But you can't. That's not what it is designed for at all. If you do need that kind of deployment, you should look into Hyper-V and System Center to create a Private or Public Infrastructure as a Service. What the VM Role is actually designed to do is to allow you a great deal of control over the system where your code will run. Let's take an example. You've heard about Windows Azure and Platform programming. You're convinced it's the right way to code. But you have a lot of things you've written in another way at your company, and re-writing all of your code to take advantage of Windows Azure will take a long time. Or perhaps you have a certain version of Apache Web Server that you need for your code to work. In both cases, you think you can make (or already have made) the software "Stateless"; you just need more control over the place where the code runs. That's the place where a VM Role makes sense.
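As a rough illustration of the stateless container idea, here is a minimal Worker Role sketch in C#. It assumes the Windows Azure SDK's Microsoft.WindowsAzure.ServiceRuntime assembly, and ProcessNextMessage is a hypothetical stand-in for whatever queue-driven work your application does:

    using System.Threading;
    using Microsoft.WindowsAzure.ServiceRuntime;

    // Windows Azure calls into this entry point on every role instance it
    // provisions, so the same loop runs on as many copies as you ask for.
    public class WorkerRole : RoleEntryPoint
    {
        public override void Run()
        {
            while (true)
            {
                // Pull a message, process it, forget it - no state is kept
                // on this node between iterations, which is what lets the
                // platform add or remove instances freely.
                ProcessNextMessage();
                Thread.Sleep(1000);
            }
        }

        private void ProcessNextMessage()
        {
            // Hypothetical message processing goes here.
        }
    }

Because each iteration is self-contained, scaling out is just a matter of asking for more instances in the service configuration.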
Recap Virtualizing servers alone has limitations of scale, availability, and recovery. Microsoft's offering in this area is Hyper-V and System Center, not the VM Role. The VM Role is still used for running stateless code, just like the Web and Worker Roles, with the exception that it allows you more control over the environment where that code runs.

    Read the article

  • Help! cant login to my forum after .htaccess changes

    - by MrRioku
    I'm running a phpBB forum and I installed an SEO-friendly URL mod. After I installed it I wasn't able to log in as the admin or as any other user... Well, I've been messing around with this problem for a bit and I found out that it was the .htaccess file that was preventing me from logging in. If I comment out the canonical part in the .htaccess file, my website lets me log in and the mod works just fine, but I get "http://phone7forum.com/" instead of what I want, which is "http://www.phone7forum.com/":

    # HERE IS A GOOD PLACE TO FORCE CANONICAL DOMAIN
    # RewriteCond %{HTTP_HOST} !^www\.phone7forum\.com$ [NC]
    # RewriteRule ^(.*)$ http://www.phone7forum.com/$1 [QSA,L,R=301]

I definitely want the "http://www." since it's the best method and holds the best weight. But when I un-comment it, it will not allow me to log in as the admin or any user... Is there anything I can do to fix this? I'd like to add that before I found this website I installed the mod "Canonical URL" found here: http://www.phpbb.com/customise/db/mod/canonical_url/ Possibly this might be part of my issue? Maybe I should undo everything that mod told me to do and then try un-commenting the canonical section in the .htaccess file? Here is my complete .htaccess file as of now, which produces "http://phone7forum.com/" and allows me to log in:

[code]
# Lines That should already be in your .htaccess
<Files "config.php">
Order Allow,Deny
Deny from All
</Files>
<Files "common.php">
Order Allow,Deny
Deny from All
</Files>
# You may need to un-comment the following lines
# Options +FollowSymlinks
# To make sure that rewritten dir or file (/|.html) will not load dir.php in case it exist
# Options -MultiViews
# REMEMBER YOU ONLY NEED TO START MOD REWRITE ONCE
RewriteEngine On
# Uncomment the statement below if you want to make use of
# HTTP authentication and it does not already work.
# This could be required if you are for example using PHP via Apache CGI.
# RewriteRule .* - [E=HTTP_AUTHORIZATION:%{HTTP:Authorization},L]
# REWRITE BASE
RewriteBase /
# HERE IS A GOOD PLACE TO FORCE CANONICAL DOMAIN
# RewriteCond %{HTTP_HOST} !^www\.phone7forum\.com$ [NC]
# RewriteRule ^(.*)$ http://www.phone7forum.com/$1 [QSA,L,R=301]
# DO NOT GO FURTHER IF THE REQUESTED FILE / DIR DOES EXISTS
RewriteCond %{REQUEST_FILENAME} -f
RewriteCond %{REQUEST_FILENAME} -d
RewriteRule . - [L]
#####################################################
# PHPBB SEO REWRITE RULES ALL MODES
#####################################################
# AUTHOR : dcz www.phpbb-seo.com
# STARTED : 01/2006
#################################
# FORUMS PAGES
###############
# FORUM INDEX REWRITERULE WOULD STAND HERE IF USED.
# "forum" REQUIRES TO BE SET AS FORUM INDEX
# RewriteRule ^forum\.html$ index.php [QSA,L,NC]
# FORUM ALL MODES
RewriteRule ^(forum|[a-z0-9_-]*-f)([0-9]+)/?(page([0-9]+)\.html)?$ viewforum.php?f=$2&start=$4 [QSA,L,NC]
# TOPIC WITH VIRTUAL FOLDER ALL MODES
RewriteRule ^(forum|[a-z0-9_-]*-f)([0-9]+)/(topic|[a-z0-9_-]*-t)([0-9]+)(-([0-9]+))?\.html$ viewtopic.php?f=$2&t=$4&start=$6 [QSA,L,NC]
# TOPIC WITHOUT FORUM ID & DELIM ALL MODES
RewriteRule ^([a-z0-9_-]*)/?(topic|[a-z0-9_-]*-t)([0-9]+)(-([0-9]+))?\.html$ viewtopic.php?forum_uri=$1&t=$3&start=$5 [QSA,L,NC]
# PHPBB FILES ALL MODES
RewriteRule ^resources/[a-z0-9_-]+/(thumb/)?([0-9]+)$ download/file.php?id=$2&t=$1 [QSA,L,NC]
# PROFILES ALL MODES WITH ID
RewriteRule ^(member|[a-z0-9_-]*-u)([0-9]+)\.html$ memberlist.php?mode=viewprofile&u=$2 [QSA,L,NC]
# USER MESSAGES ALL MODES WITH ID
RewriteRule ^(member|[a-z0-9_-]*-u)([0-9]+)-(topics|posts)(-([0-9]+))?\.html$ search.php?author_id=$2&sr=$3&start=$5 [QSA,L,NC]
# GROUPS ALL MODES
RewriteRule ^(group|[a-z0-9_-]*-g)([0-9]+)(-([0-9]+))?\.html$ memberlist.php?mode=group&g=$2&start=$4 [QSA,L,NC]
# POST
RewriteRule ^post([0-9]+)\.html$ viewtopic.php?p=$1 [QSA,L,NC]
# ACTIVE TOPICS
RewriteRule ^active-topics(-([0-9]+))?\.html$ search.php?search_id=active_topics&start=$2&sr=topics [QSA,L,NC]
# UNANSWERED TOPICS
RewriteRule ^unanswered(-([0-9]+))?\.html$ search.php?search_id=unanswered&start=$2&sr=topics [QSA,L,NC]
# NEW POSTS
RewriteRule ^newposts(-([0-9]+))?\.html$ search.php?search_id=newposts&start=$2&sr=topics [QSA,L,NC]
# UNREAD POSTS
RewriteRule ^unreadposts(-([0-9]+))?\.html$ search.php?search_id=unreadposts&start=$2 [QSA,L,NC]
# THE TEAM
RewriteRule ^the-team\.html$ memberlist.php?mode=leaders [QSA,L,NC]
# HERE IS A GOOD PLACE TO ADD OTHER PHPBB RELATED REWRITERULES
# FORUM WITHOUT ID & DELIM ALL MODES
# THESE THREE LINES MUST BE LOCATED AT THE END OF YOUR HTACCESS TO WORK PROPERLY
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule ^([a-z0-9_-]+)/?(page([0-9]+)\.html)?$ viewforum.php?forum_uri=$1&start=$3 [QSA,L,NC]
# FIX RELATIVE PATHS : FILES
RewriteRule ^.+/(style\.php|ucp\.php|mcp\.php|faq\.php|download/file.php)$ $1 [QSA,L,NC,R=301]
# FIX RELATIVE PATHS : IMAGES
RewriteRule ^.+/(styles/.*|images/.*)/$ $1 [QSA,L,NC,R=301]
# END PHPBB PAGES
#####################################################
[/code]

Does anyone have any clues as to what I need to do? Any help will be greatly appreciated! Thanks, Justin

    Read the article

< Previous Page | 472 473 474 475 476 477 478 479 480 481 482 483  | Next Page >