Search Results

Search found 16473 results on 659 pages for 'game logic'.

  • How to add IP range to a server?

    - by Efe Cakinberk
    Hello, first of all I must say that I'm a very inexperienced server and network user, but I rented an unmanaged dedicated server. Well, I didn't know what unmanaged really means; I learned it when I needed support. I must do everything by myself.

    I have a problem. I already had 4 IPs on my server when I rented it, but then I needed more IPs and my provider assigned me 32 IPs, of which I can only use 27. 85.25.230.0 - 85.25.230.31 is my IP range, and they say the following IPs must not be configured on the server:

        85.25.230.0 - network address
        85.25.230.1 - gateway address
        85.25.230.2 - router redundancy
        85.25.230.3 - router redundancy
        85.25.230.31 - broadcast address

    The problem is that the IPs are assigned to me but they are not set up on my server. How will I set up the IPs to work on my server? After my research on Google, I ran this command at the command prompt: route add 85.25.230.0 mask 255.255.255.224 85.25.230.1 metric 1 if, and it said OK!, so I thought they should be working. (By the way, the mask was given to me by my ISP; I don't know what "metric 1" and "if" mean, I just saw them on the net and wrote them here.)

    I set up my domains using the Plesk control panel, so I added one domain and assigned one of my new IPs, 85.25.230.5, to it. But no, it is not working. When I visit the domain in a browser, a Plesk page comes up and says this domain is not configured on the server. Then I changed the domain's IP to one of my old IPs, which were given to me with the server and which I have been using for my other domains for a long time. In a second the domain started working. I set it back to my new IP and the domain did not work.

    As I said, I'm not an expert and do not know the logic, but I'm eager to learn. Can you tell me what might cause the problem? Did I do something wrong while setting up the IP range on my server, and if so, how can I set it up correctly? Thank you, Efe
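    A hedged note on the likely gap here: route add only creates a routing table entry; it does not bind the extra addresses to the network card, which is what Plesk needs before a domain can answer on one of them. On Windows the additional addresses are normally added to the NIC itself, for example with netsh (the interface name "Local Area Connection" below is an assumption - check yours with ipconfig /all):

        netsh interface ipv4 add address "Local Area Connection" 85.25.230.5 255.255.255.224
        netsh interface ipv4 add address "Local Area Connection" 85.25.230.6 255.255.255.224
        rem ...repeat for each usable address, skipping .0, .1, .2, .3 and .31 as the provider instructed

    After that, ipconfig /all should list the new addresses, and in most Plesk versions the new IP also has to show up in the panel's own IP address list (there is usually a re-read/refresh option) before it can be assigned to a domain.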

    Read the article

  • RHEL 6.x on Rackspace Cloud and Dedicated hardware experiencing Redis Timeouts

    - by zhallett
    I just recently set up a mixture of RHEL 6.1 Rackspace cloud hosts and RHEL 6.2 dedicated hosts using RackConnect. I am experiencing intermittent Redis timeouts from within our Rails 3.2.8 app, with Redis 2.4.16 running on the RHEL 6.2 dedicated hosts. There is no network latency or packet loss, and there are no errors on any interfaces on our cloud or dedicated servers or on the managed firewall from Rackspace. When Redis times out, nothing is logged within Redis even though it is set up to do debug logging. The only error we receive is from Airbrake saying there was a Redis timeout.

    Network topology: RHEL 6.1 cloud hosts (web nodes) <--> Alert Logic IDS <--> Cisco ASA 5510 (two-way NAT) <--> RHEL 6.2 dedicated hosts (db hosts running Redis)

    Ping from db host to web host:

        64 bytes from 10.181.230.180: icmp_seq=998 ttl=64 time=0.520 ms
        64 bytes from 10.181.230.180: icmp_seq=999 ttl=64 time=0.579 ms
        64 bytes from 10.181.230.180: icmp_seq=1000 ttl=64 time=0.482 ms
        --- web1.xxxxxx.com ping statistics ---
        1000 packets transmitted, 1000 received, 0% packet loss, time 999007ms
        rtt min/avg/max/mdev = 0.359/0.535/5.684/0.200 ms

    Ping from web host to db host:

        64 bytes from 192.168.100.26: icmp_seq=998 ttl=64 time=0.544 ms
        64 bytes from 192.168.100.26: icmp_seq=999 ttl=64 time=0.452 ms
        64 bytes from 192.168.100.26: icmp_seq=1000 ttl=64 time=0.529 ms
        --- data1.xxxxxx.com ping statistics ---
        1000 packets transmitted, 1000 received, 0% packet loss, time 999017ms
        rtt min/avg/max/mdev = 0.358/0.499/6.120/0.201 ms

    Redis config:

        daemonize yes
        pidfile /var/run/redis/6379/redis_6379.pid
        port 6379
        timeout 0
        loglevel debug
        logfile /var/lib/redis/log
        syslog-enabled yes
        syslog-ident redis-6379
        syslog-facility local0
        databases 16
        save 900 1
        save 300 10
        save 60 10000
        rdbcompression yes
        dbfilename dump-6379.rdb
        dir /var/lib/redis
        maxclients 10000
        maxmemory-policy volatile-lru
        maxmemory-samples 3
        appendfilename appendonly-6379.aof
        appendfsync everysec
        no-appendfsync-on-rewrite no
        auto-aof-rewrite-percentage 100
        auto-aof-rewrite-min-size 64mb
        slowlog-log-slower-than 10000
        slowlog-max-len 1024
        vm-enabled no
        vm-swap-file /tmp/redis.swap
        vm-max-memory 0
        vm-page-size 32
        vm-pages 134217728
        vm-max-threads 4
        hash-max-zipmap-entries 512
        hash-max-zipmap-value 64
        list-max-ziplist-entries 512
        list-max-ziplist-value 64
        set-max-intset-entries 512
        zset-max-ziplist-entries 128
        zset-max-ziplist-value 64
        activerehashing yes

    redis-cli info output:

        redis_version:2.4.16
        redis_git_sha1:00000000
        redis_git_dirty:0
        arch_bits:64
        multiplexing_api:epoll
        gcc_version:4.4.6
        process_id:4174
        uptime_in_seconds:79346
        uptime_in_days:0
        lru_clock:1064644
        used_cpu_sys:13.08
        used_cpu_user:19.81
        used_cpu_sys_children:1.56
        used_cpu_user_children:7.69
        connected_clients:167
        connected_slaves:0
        client_longest_output_list:0
        client_biggest_input_buf:0
        blocked_clients:6
        used_memory:15060312
        used_memory_human:14.36M
        used_memory_rss:22061056
        used_memory_peak:15265928
        used_memory_peak_human:14.56M
        mem_fragmentation_ratio:1.46
        mem_allocator:jemalloc-3.0.0
        loading:0
        aof_enabled:0
        changes_since_last_save:166
        bgsave_in_progress:0
        last_save_time:1352823542
        bgrewriteaof_in_progress:0
        total_connections_received:286
        total_commands_processed:507254
        expired_keys:0
        evicted_keys:0
        keyspace_hits:1509
        keyspace_misses:65167
        pubsub_channels:0
        pubsub_patterns:0
        latest_fork_usec:690
        vm_enabled:0
        role:master
        db0:keys=6,expires=0

    Edit 1: added redis-cli info output
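    A couple of hedged checks that might narrow this down, using only tools already mentioned in the post (host and port taken from the ping output and config above; adjust as needed). Run a latency probe from a web node against the Redis host while the Airbrake timeouts are occurring, and look at the slow log that the posted config already enables:

        redis-cli -h 192.168.100.26 -p 6379 --latency
        redis-cli -h 192.168.100.26 -p 6379 slowlog get 10

    If --latency (supported by reasonably recent redis-cli builds) shows occasional multi-second spikes that line up with the timeouts while ping stays flat, the stall is somewhere between the TCP connection and Redis itself - firewalls such as the ASA silently dropping long-idle connections is a common culprit in setups like this. If instead the slow log shows long-running commands or fork/AOF stalls, the problem is inside Redis after all.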

    Read the article

  • Where to get laptop replacement parts?

    - by wai
    I am sure some of us have older laptops that still work, but where just one or two parts have stopped working properly. Sending one back to the manufacturer for repair might not be worth the money (some of them even charge consultation fees just to have it checked), as these old laptops are most probably well beyond warranty. Replacing those one or two parts could give your laptop a new lease of life.

    For example, I have a Dell Inspiron 640m whose fan recently failed. Everything else still works. I know enough to open up the laptop (after reading the service manual) and put it back together again, hence a replacement fan could save it. However, finding these parts turned out to be difficult. I guess I wouldn't be able to get it from Dell, since I have been searching around the Dell forums and concluded that if I were going to get Dell to at least talk to me, I would have to pay some money (my warranty has expired). I found this online: http://www.parts-people.com/index.php?action=item&id=3725 However, as I don't live in the United States, shipping alone would cost me 3 times more than the part in question. In addition, if there were a problem, it would be difficult because I would probably have to mail the parts back (costing more money).

    One of my friends spilled water on her MacBook. She sent it back to Apple and they told her the cost of repairing it exceeded that of buying a new MacBook; they would probably have had to change everything except the LCD screen. I opened it up and found that most of the parts were still working - only the RAM had fried. Replaced it and it's working again. For some time, before discovering that only the RAM had fried, I was toying with the idea of replacing the logic board myself, but it turned out I didn't have to. =) Now I know there's a site for used MacBook replacement parts: http://www.ifixit.com/

    The question is, does anyone know where to get replacement parts in their own locality? It might benefit the community as a whole. Someone might know a local computer store that sells precisely the things we need (for now, I hope to find a shop that sells just the fan). If you own a laptop for which you had an easy experience finding replacement parts, do post that as well.

    PS: I wish laptop manufacturers had outlets where they sell these replacement parts (or at least let you place an order).

    Read the article

  • Planning trunk capacity for multiple GbE switches

    - by wuckachucka
    Without measuring throughput (it's at the top of the list; this is just theoretical), I want to know the most standard method for trunking VLANs on multiple Gigabit (GbE) switches to a core Layer 3 GbE switch. Say you have three VLANs:

    VLAN10 (10.0.0.0/24) Servers: your typical Windows DC/file server, Exchange, and an Accounting/SQL server.
    VLAN20 (10.0.1.0/24) Sales: needs access to everything on VLAN10; doesn't need access to VLAN30 and vice-versa.
    VLAN30 (10.0.2.0/24) Support: needs access to everything on VLAN10; doesn't need access to VLAN20 and vice-versa.

    Here's how I think this should work in my head:

    Switch #1: Ports 2-20 are assigned to VLAN20; all the Sales workstations and printers are connected here. Optional 10 GbE combo port #1 is trunked to the L3 switch's 10 GbE combo port #1.
    Switch #2: Ports 2-20 are assigned to VLAN30; all the Support workstations and printers are connected here. Optional 10 GbE combo port #1 is trunked to the L3 switch's 10 GbE combo port #2.
    Core L3 switch: Ports 2-10 are assigned to VLAN10; all three servers are connected here.

    A standard 10/100 x 24 switch usually comes with one or two 1 GbE uplink ports; carrying that logic over to a 10/100/1000 x 24 switch, the "optional" 10 GbE combo ports that most higher-end switches can take shouldn't really be optional. Keep in mind I haven't tested anything yet; I'm primarily moving in this direction for growth (I don't want to buy 10/100 switches and have to replace them within a couple of years) and security (being able to control access between VLANs with L3 routing/packet-filtering ACLs).

    Does this sound right? Do I really need the 10 GbE ports? It seems very non-standard and expensive, but it "feels" right when you think about 40 or 50 workstations trunking up to the L3 switch over standard 1 GbE ports. If, say, 20 workstations want to download a 10 GB image from the servers concurrently, wouldn't the trunk be the bottleneck? At least with a 10 GbE trunk you'd have 10 x 1 GbE nodes able to reach their theoretical max.

    What about switch stacking? Some of the D-Links I've been looking at have HDMI interfaces for stacking. As far as I know, stacking two switches creates one logical switch, but is this just for management I/O, or do the switches use the (assuming it's HDMI 1.3) 10.2 Gbps for carrying data back and forth?
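    For a rough sense of the numbers behind the bottleneck worry (illustrative arithmetic only, ignoring disk speed, TCP and protocol overhead):

        1 GbE trunk, 20 concurrent clients:  ~1,000 Mb/s / 20 ≈ 50 Mb/s ≈ 6 MB/s per client,
                                             so a 10 GB (10,240 MB) image takes ≈ 1,700 s, roughly 28 minutes.
        10 GbE trunk, 20 concurrent clients: ~10,000 Mb/s / 20 ≈ 500 Mb/s ≈ 60 MB/s per client,
                                             so the same image takes ≈ 170 s, roughly 3 minutes.

    Either way the uplink is the shared choke point; a 10 GbE trunk simply raises that ceiling to roughly the aggregate of ten access ports, at which point the servers' own 1 GbE NICs and disks usually become the next limit.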

    Read the article

  • Windows DHCP Server - get notification when a non-AD joined device gets an IP address

    - by TheCleaner
    SCENARIO

    To simplify this down to its easiest example: I have a Windows 2008 R2 Standard DC with the DHCP server role. It hands out IPs via various IPv4 scopes, no problem there.

    WHAT I'D LIKE

    I would like a way to create a notification/event log entry/similar whenever a device gets a DHCP address lease and that device IS NOT a domain-joined computer in Active Directory. It doesn't matter to me whether it is custom PowerShell, etc. Bottom line = I'd like a way to know when non-domain devices are on the network, without using 802.1X at the moment. I know this won't account for static IP devices. I do have monitoring software that will scan the network and find devices, but it isn't quite this granular in detail.

    RESEARCH DONE/OPTIONS CONSIDERED

    I don't see any such possibilities with the built-in logging. Yes, I'm aware of 802.1X and have the ability to implement it long-term at this location, but we are some time away from a project like that, and while that would solve network authentication issues, this is still helpful to me outside of 802.1X goals. I've looked around for some script bits, etc. that might prove useful, but the things I'm finding lead me to believe that my google-fu is failing me at the moment.

    I believe the below logic is sound (assuming there isn't some existing solution):

    1. Device receives a DHCP address.
    2. Event log entry is recorded (event ID 10 in the DHCP audit log should work, since a new lease is what I'd be most interested in, not renewals: http://technet.microsoft.com/en-us/library/dd759178.aspx). At this point a script of some kind would probably have to take over for the remaining steps below.
    3. Somehow query this DHCP log for these event ID 10s (I would love push, but I'm guessing pull is the only recourse here).
    4. Parse the query for the name of the device being assigned the new lease.
    5. Query AD for the device's name.
    6. IF not found in AD, send a notification email.

    If anyone has any ideas on how to properly do this, I'd really appreciate it. I'm not looking for a "gimme the codez" but would love to know if there are alternatives to the above list or if I'm not thinking clearly and another method exists for gathering this information. If you have code snippets/PS commands you'd like to share to help accomplish this, all the better.
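    A rough PowerShell sketch of that pull-style approach (untested; the audit log path, the comma-separated field order of the audit log, and the mail settings are assumptions to verify against your own server):

        # Scan today's DHCP audit log for new leases (ID 10) and mail about hosts not found in AD.
        Import-Module ActiveDirectory

        # Default audit log location and naming (DhcpSrvLog-Mon.log, -Tue.log, ...); 'ddd' assumes an English locale.
        $day = (Get-Date).ToString('ddd')
        $log = "C:\Windows\System32\dhcp\DhcpSrvLog-$day.log"

        Get-Content $log |
            Where-Object { $_ -match '^10,' } |           # ID 10 = a new lease was granted
            ForEach-Object {
                $fields   = $_ -split ','                 # ID,Date,Time,Description,IP Address,Host Name,MAC Address
                $hostName = ($fields[5] -split '\.')[0]   # drop any DNS suffix before the AD lookup
                if ($hostName -and -not (Get-ADComputer -Filter "Name -eq '$hostName'")) {
                    Send-MailMessage -To 'admin@example.com' -From 'dhcp@example.com' `
                        -SmtpServer 'smtp.example.com' `
                        -Subject "Non-AD device leased an IP: $hostName" `
                        -Body $_
                }
            }

    A production version would want to run on a schedule, remember which leases it has already reported, and cope with hostnames that devices self-report inconsistently, but the skeleton above maps onto steps 3-6 of the list.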

    Read the article

  • Why is Windows Task Scheduler trying to launch multiple instances?

    - by Paul H
    We have a number of Windows scheduled tasks that run on one Server 2008 web server (not R2) which is in a cluster. We recently moved from the original web server cluster to a new web server cluster (also Server 2008, not R2). The new web server (in the cluster) running the Windows tasks is set up the same as the original, we believe. BUT we now find that on the new Windows server the Task Scheduler seems to want to instantly start each task three times.

    If we set the option to queue up a new task we get:

        Event ID 324: Task Scheduler queued instance "{9a1a8411-b042-45ff-8e6b-89874df230d7}" of task "\Client Reporting" and will launch it as soon as instance "{2bcc3df6-ea3b-4453-90c2-75b8b1946388}" completes.

    If we set the option to stop an existing task we get:

        Event ID 323: Task Scheduler stopped instance "{e685a910-b32b-414e-85fd-96bbe54314a2}" of task "\Client Reporting" in order to launch new instance "{4db66265-1f51-4ede-8535-ac7c3cb5c4c1}".

    Ticked settings:

    - Allow task to be run on demand.
    - Run task as soon as possible after a scheduled start is missed.
    - Stop the task if running for longer than 1 hour.
    - If the running task does not end when requested, force it to stop.
    - Start the task only if the computer is on AC power.
    - Stop the task if the computer switches to battery power.

    Selected option: If the task is already running - stop the existing instance.

    Note: We moved the tasks from one server to another in the cluster to see if it was the Task Scheduler on the particular server we'd picked that was causing the problem. Same behaviour.

    Could it be something to do with the build of the new servers? We have very similar tasks set up on another server cluster that work fine without all this multiple starting. Comparing those tasks to the ones here, there does not seem to be anything obviously different in terms of the settings available to us through the options within the Task Scheduler.

    Trigger: The task is scheduled to be triggered daily, once an hour, and to be stopped if it exceeds this time.
    Action: Runs a .bat file.

    What could be causing this, and where can we look to see what logic is causing the tasks to start multiple times in this way?
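    One hedged way to confirm whether the two builds really define the task identically is to export the task definition from each server and compare the XML (task name taken from the event text above; run from an elevated prompt, file paths are placeholders):

        schtasks /query /tn "\Client Reporting" /xml > C:\temp\ClientReporting-new.xml
        rem run the same export on the old, working server, copy it over, then:
        fc C:\temp\ClientReporting-new.xml C:\temp\ClientReporting-old.xml

    Differences in the <MultipleInstancesPolicy> element or in the trigger's <Repetition> block would explain the queued/stopped-instance events, since those are exactly the settings that control what happens when a new start overlaps a running instance.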

    Read the article

  • Why does Photoshop CS5's Photomerge result immediately disappear?

    - by koiyu
    I have a bunch of JPG files which I want to stitch together with Photoshop's Photomerge function. I choose File → Automate → Photomerge... and browse for the files. Photoshop opens the files and starts analyzing. I see the progress bar filling and different phases mentioned on it. Nothing weird there.

    When the merging is done (and if I don't blink my eyes), I can see the Layers palette populated with the chosen files and, judging quickly from the layer thumbnails, they're properly aligned. Sometimes the image window itself can be seen, but not always. The problem is that the layers and the image disappear in a flash. There is no error message. Everything is like prior to starting the Photomerge. No file has been changed. I can continue to use Photoshop normally.

    This is what I've tried so far:

    - Loaded a folder which has 38 JPG images, 4272 x 2848 and ~5 megabytes per file
    - Loaded the same files, but chose Use Files instead of Use Folder in the Photomerge window
    - Loaded 19 JPG images, 4272 x 2848 and ~5 megabytes per file
    - Loaded 10 JPG images, see above
    - Loaded 5 JPG images, see above
    - Loaded 3 JPG images, see above
    - Scaled the images to 2256 x 1504 and under ~1 megabyte per file; loaded them in sets of 38, 19, 10, 5 and 3

    The following steps were tested with these smaller files and with a set of 5 images:

    - Read Adobe's forums and gradually reduced the amount of RAM Photoshop uses from ~80% to 50% (though I didn't understand the logic behind this)
    - Would have reduced the cache tile size to 128K, but it was already set to that
    - Disabled OpenGL
    - Scaled the images to 800 x 533 and ~100 kilobytes per file, loaded a set of 5
    - Read more unanswered threads around the internet

    In between each test I closed and reopened Photoshop. This is the first time I've even tried using Photomerge. Am I doing something wrong? How can I locate what the problem is? How do I fix this?

    Photoshop is the 64-bit Extended CS5 version. I'm on a mid-2010 quad-core (i5) iMac with up-to-date Mac OS X 10.6.6.

    Edit: Weird. First loading the images into one file via File → Scripts → Load Files into Stack… and then using Edit → Auto-Align Layers…, which, effectively, is the same as Photomerge (even the dialog looks kind of the same), works! Even with the original JPGs, without any issues. This doesn't fix Photomerge, though.

    Read the article

  • Spotlight Infinite Indexing issue (external data drive)

    - by Manca Weeks
    This is an external drive, formerly a boot drive, which is now in use only to access music files (Sibelius, audio, MIDI, Live, Logic, etc.) without transferring the data into a new boot system, partly because of the issue I am about to describe, but mostly because the majority of the data is mainly there for archival purposes. The user is a composer and prominent musician and needs to be able to rehash the data at will. I have tried several things - here is a list:

    - make a complete filesystem clone with Antonio Diaz's ddrescue
    - run Disk Warrior on the copy, repair whatever errors occurred
    - wipe out all ACLs on the entire drive
    - set all permissions to the same value - wide open 777
    - remove any system data (applications, system files, including hidden files to the best of my knowledge) by selecting only non-system/app data and using Carbon Copy Cloner to put only the data of interest onto a newly formatted drive
    - transfer data to a newly formatted drive folder by folder, resetting the Spotlight index in between adding each folder to watch for issues (interestingly, no issues occurred except with the Documents folder - when I transferred only the Documents folder to a newly formatted drive on its own, no trouble; it appears almost as though it may not be the content but the quantity or specific combination of data that results in problems)
    - use DataRescue to transfer the data to yet another newly formatted drive to expose any missed hidden files

    Between each of the above steps I stopped Spotlight (searched for anything beginning with md in Activity Monitor - All Processes - and quit it) and deleted the .Spotlight-V100 directory from the affected drive, then restarted Spotlight indexing by adding the drive to the Spotlight privacy list and removing it again.

    In each case the same issue occurs: Spotlight begins indexing normally (or so it seems), then the estimated time increases, usually to 4 hours remaining. This is where it gets stuck; it continues to predict 4 hours remaining but never finishes. Sometimes I can't eject the drive and have to quit the md.. processes from Activity Monitor to be able to eject it without Force Eject. Once I disconnect the drive after the "4 hours remaining" situation, if I reattach it, Spotlight forever estimates the remaining time and never gets going again.

    So there it is. It is apparently not a filesystem issue, not a permissions issue, and not tied to any particular piece of hardware or protocol (I used USB and FW drives). I have tried this on several machines (3 to be precise) and on 10.5.8 and 10.6.5. Simply disabling Spotlight on this volume is not an option because the owner has no clue where things are, as the data on the volume dates back to music projects and compositions from 2003 and before. He needs to be able to query for results.

    Anyone got any ideas? Thanks, M
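    For what it's worth, the command-line way to force a completely fresh index of just that volume (which sometimes behaves differently from the privacy-list trick) is mdutil; the volume name below is a placeholder:

        sudo mdutil -i off /Volumes/MusicArchive    # stop indexing the volume
        sudo mdutil -E /Volumes/MusicArchive        # erase its existing Spotlight index
        sudo mdutil -i on /Volumes/MusicArchive     # re-enable indexing, starting a clean pass
        mdutil -s /Volumes/MusicArchive             # check indexing status

    While the re-index is running, watching Console.app (or /var/log/system.log) for mds/mdworker messages at the moment the estimate jumps to 4 hours might point at the specific file or importer it is choking on.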

    Read the article

  • How to make Nginx fire 504 immediately if a server is not available?

    - by Georgiy Ivankin
    I have Nginx set up as a load balancer with cookie-based stickiness. The logic is:

    If the cookie is NOT there, use round-robin to choose a server from the cluster.
    If the cookie is there, go to the server that is associated with the cookie value. The server is then responsible for setting the cookie.

    What I want to add is this: if the cookie is there but the server is down, fall back to the round-robin step to choose the next available server. So I already have load balancing and want to add failover support on top of it. I have managed to do that with the help of the error_page directive, but it doesn't work as I expected it to.

    The problem: the 504 (and the fallback associated with it) fires only after a 30s timeout, even if the server is not physically available. So what I want Nginx to do is fire a 504 (or any other error, it doesn't matter) immediately (I suppose this means: when the TCP connection fails). This is the behavior we can see in browsers: if we go directly to the server when it is down, the browser immediately tells us that it can't connect. Moreover, Nginx seems to be doing this for the 502 error: if I intentionally misconfigure my servers, Nginx fires 502 immediately.

    Configuration (stripped down to basics):

        http {
            upstream my_cluster {
                server 192.168.73.210:1337;
                server 192.168.73.210:1338;
            }

            map $cookie_myCookie $http_sticky_backend {
                default 0;
                value1  192.168.73.210:1337;
                value2  192.168.73.210:1338;
            }

            server {
                listen 8080;

                location @fallback {
                    proxy_pass http://my_cluster;
                }

                location / {
                    error_page 504 = @fallback;

                    # Create a map of choices
                    # see https://gist.github.com/jrom/1760790
                    set $test HTTP;

                    if ($http_sticky_backend) {
                        set $test "${test}-STICKY";
                    }

                    if ($test = HTTP-STICKY) {
                        proxy_pass http://$http_sticky_backend$uri?$args;
                        break;
                    }

                    if ($test = HTTP) {
                        proxy_pass http://my_cluster;
                        break;
                    }

                    return 500 "Misconfiguration";
                }
            }
        }

    Disclaimer: I am pretty far from systems administration of any kind, so there may be some basics that I miss here.

    EDIT: I'm interested in a solution with the standard free version of Nginx, not Nginx Plus. Thanks.
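    For reference, a hedged sketch of the approach that usually makes the "backend unreachable" case fail fast with stock free nginx: shorten the proxy connect timeout (the default is much longer, which is why the error only shows up after a long wait) and send both 502 and 504 to the fallback. The values below are illustrative:

        location / {
            # fall back on bad gateway as well as gateway timeout
            error_page 502 504 = @fallback;

            # give up quickly when the sticky backend cannot be reached at the TCP level
            proxy_connect_timeout 2s;
            proxy_send_timeout    10s;
            proxy_read_timeout    10s;

            # ...sticky-routing logic from the configuration above, unchanged...
        }

    The long wait described in the question is consistent with a connect attempt timing out (for example, the address is filtered or the host is gone) rather than being actively refused; a refused connection already produces an immediate 502, which matches the observed behaviour for misconfigured servers.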

    Read the article

  • mod_rewrite all but two files causing loop

    - by mpounsett
    I'm trying to set up a web site to allow the creation of a semaphore file to close the site. The logic I want to follow is:

    1. when the semaphore file exists
    2. and the request is not for /style.css or /favicon.ico
    3. show the content of /closed.html

    I have 1 and 3 working, but my exceptions for 2 result in a processing loop when style.css or favicon.ico are requested. This is my most recent attempt:

        RewriteEngine on
        RewriteCond %{REQUEST_URI} !^/style.css
        RewriteCond %{REQUEST_URI} !^/favicon.ico
        RewriteCond /usr/local/etc/site/closed -f
        RewriteRule ^.*$ /closed.html [L]

    This is in a VirtualHost block, not in a Directory. There is no .htaccess file in play. I have also recently tried this, based on an answer I found elsewhere, but with the same (looping) result:

        RewriteCond %{REQUEST_URI} ^/style.css [OR]
        RewriteCond %{REQUEST_URI} ^/favicon.ico
        RewriteRule ^.*$ - [L]
        RewriteCond /usr/local/etc/site/closed -f
        RewriteRule ^.*$ /closed.html [L]

    I expect a request for /style.css or /favicon.ico to fail to match one of the first two rewrite conditions, which should prevent the URI from being rewritten, which should stop the mod_rewrite iteration. However, mod_rewrite seems to think the URI has been rewritten in those cases, and iterates over the rules again (and again, and again). The above works properly in all cases except for style.css and favicon.ico. In those cases I exceed the loop limits.

    What am I missing here to make the rewrite iteration stop when someone requests style.css or favicon.ico?

    EDIT: Here's a loglevel 9 example of what happens using the first ruleset when a request arrives for /style.css. This is just the first two iterations; it continues to loop identically until the limit is reached.

        2001:4900:1044:0:145f:826e:6436:dc1 - - [29/May/2014:15:29:26 +0000] [host.example/sid#80c1c48b0][rid#80c1db0a0/initial] (2) init rewrite engine with requested uri /style.css
        2001:4900:1044:0:145f:826e:6436:dc1 - - [29/May/2014:15:29:26 +0000] [host.example/sid#80c1c48b0][rid#80c1db0a0/initial] (3) applying pattern '^.*$' to uri '/style.css'
        2001:4900:1044:0:145f:826e:6436:dc1 - - [29/May/2014:15:29:26 +0000] [host.example/sid#80c1c48b0][rid#80c1db0a0/initial] (4) RewriteCond: input='/style.css' pattern='!^/style.css' => not-matched
        2001:4900:1044:0:145f:826e:6436:dc1 - - [29/May/2014:15:29:26 +0000] [host.example/sid#80c1c48b0][rid#80c1db0a0/initial] (1) pass through /style.css
        2001:4900:1044:0:145f:826e:6436:dc1 - - [29/May/2014:15:29:26 +0000] [host.example/sid#80c1c48b0][rid#80c1dd0a0/initial] (2) init rewrite engine with requested uri /style.css
        2001:4900:1044:0:145f:826e:6436:dc1 - - [29/May/2014:15:29:26 +0000] [host.example/sid#80c1c48b0][rid#80c1dd0a0/initial] (3) applying pattern '^.*$' to uri '/style.css'
        2001:4900:1044:0:145f:826e:6436:dc1 - - [29/May/2014:15:29:26 +0000] [host.example/sid#80c1c48b0][rid#80c1dd0a0/initial] (4) RewriteCond: input='/style.css' pattern='!^/style.css' => not-matched
        2001:4900:1044:0:145f:826e:6436:dc1 - - [29/May/2014:15:29:26 +0000] [host.example/sid#80c1c48b0][rid#80c1dd0a0/initial] (1) pass through /style.css
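    A hedged sketch of one common way around this kind of loop (untested against this exact setup): short-circuit the excepted files with their own [L] rule before the closure rule, and exclude /closed.html itself so the internally redirected request cannot match the rule again:

        RewriteEngine on

        # serve the stylesheet and favicon untouched and stop rewriting for them
        RewriteRule ^/(style\.css|favicon\.ico)$ - [L]

        # everything else shows the closure page while the semaphore file exists
        RewriteCond /usr/local/etc/site/closed -f
        RewriteCond %{REQUEST_URI} !^/closed\.html$
        RewriteRule ^ /closed.html [L]

    On Apache 2.4 the END flag (RewriteRule ... [END]) is another option: unlike [L], it also stops the rule set from being re-run on the subsequent internal redirect, which is the kind of re-entry the loglevel 9 trace above shows.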

    Read the article

  • ASP.NET Podcast Show #148 - ASP.NET WebForms to build a Mobile Web Application

    - by Wallym
    Check the podcast site for the original url. This is the video and source code for an ASP.NET WebForms app that I wrote that is optimized for the iPhone and mobile environments.  Subscribe to everything. Subscribe to WMV. Subscribe to M4V for iPhone/iPad. Subscribe to MP3. Download WMV. Download M4V for iPhone/iPad. Download MP3. Link to iWebKit. Source Code: <%@ Page Title="MapSplore" Language="C#" MasterPageFile="iPhoneMaster.master" AutoEventWireup="true" CodeFile="Default.aspx.cs" Inherits="AT_iPhone_Default" %> <asp:Content ID="Content1" ContentPlaceHolderID="head" Runat="Server"></asp:Content><asp:Content ID="Content2" ContentPlaceHolderID="Content" Runat="Server" ClientIDMode="Static">    <asp:ScriptManager ID="sm" runat="server"         EnablePartialRendering="true" EnableHistory="false" EnableCdn="true" />    <script type="text/javascript" src="http://maps.google.com/maps/api/js?sensor=true"></script>    <script  language="javascript"  type="text/javascript">    <!--    Sys.WebForms.PageRequestManager.getInstance().add_endRequest(endRequestHandle);    function endRequestHandle(sender, Args) {        setupMapDiv();        setupPlaceIveBeen();    }    function setupPlaceIveBeen() {        var mapPlaceIveBeen = document.getElementById('divPlaceIveBeen');        if (mapPlaceIveBeen != null) {            var PlaceLat = document.getElementById('<%=hdPlaceIveBeenLatitude.ClientID %>').value;            var PlaceLon = document.getElementById('<%=hdPlaceIveBeenLongitude.ClientID %>').value;            var PlaceTitle = document.getElementById('<%=lblPlaceIveBeenName.ClientID %>').innerHTML;            var latlng = new google.maps.LatLng(PlaceLat, PlaceLon);            var myOptions = {                zoom: 14,                center: latlng,                mapTypeId: google.maps.MapTypeId.ROADMAP            };            var map = new google.maps.Map(mapPlaceIveBeen, myOptions);            var marker = new google.maps.Marker({                position: new google.maps.LatLng(PlaceLat, PlaceLon),                map: map,                title: PlaceTitle,                clickable: false            });        }    }    function setupMapDiv() {        var mapdiv = document.getElementById('divImHere');        if (mapdiv != null) {            var PlaceLat = document.getElementById('<%=hdPlaceLat.ClientID %>').value;            var PlaceLon = document.getElementById('<%=hdPlaceLon.ClientID %>').value;            var PlaceTitle = document.getElementById('<%=hdPlaceTitle.ClientID %>').value;            var latlng = new google.maps.LatLng(PlaceLat, PlaceLon);            var myOptions = {                zoom: 14,                center: latlng,                mapTypeId: google.maps.MapTypeId.ROADMAP            };            var map = new google.maps.Map(mapdiv, myOptions);            var marker = new google.maps.Marker({                position: new google.maps.LatLng(PlaceLat, PlaceLon),                map: map,                title: PlaceTitle,                clickable: false            });        }     }    -->    </script>    <asp:HiddenField ID="Latitude" runat="server" />    <asp:HiddenField ID="Longitude" runat="server" />    <script type="text/javascript" src="http://ajax.googleapis.com/ajax/libs/jquery/1/jquery.min.js%22%3E%3C/script>    <script language="javascript" type="text/javascript">        $(document).ready(function () {            GetLocation();            setupMapDiv();            setupPlaceIveBeen();        });        function GetLocation() {            if (navigator.geolocation 
!= null) {                navigator.geolocation.getCurrentPosition(getData);            }            else {                var mess = document.getElementById('<%=Message.ClientID %>');                mess.innerHTML = "Sorry, your browser does not support geolocation. " +                    "Try the latest version of Safari on the iPhone, Android browser, or the latest version of FireFox.";            }        }        function UpdateLocation_Click() {            GetLocation();        }        function getData(position) {            var latitude = position.coords.latitude;            var longitude = position.coords.longitude;            var hdLat = document.getElementById('<%=Latitude.ClientID %>');            var hdLon = document.getElementById('<%=Longitude.ClientID %>');            hdLat.value = latitude;            hdLon.value = longitude;        }    </script>    <asp:Label ID="Message" runat="server" />    <asp:UpdatePanel ID="upl" runat="server">        <ContentTemplate>    <asp:Panel ID="pnlStart" runat="server" Visible="true">    <div id="topbar">        <div id="title">MapSplore</div>    </div>    <div id="content">        <ul class="pageitem">            <li class="menu">                <asp:LinkButton ID="lbLocalDeals" runat="server" onclick="lbLocalDeals_Click">                <asp:Image ID="imLocalDeals" runat="server" ImageUrl="~/Images/ArtFavor_Money_Bag_Icon.png" Height="30" />                <span class="name">Local Deals.</span>                <span class="arrow"></span>                </asp:LinkButton>                </li>            <li class="menu">                <asp:LinkButton ID="lbLocalPlaces" runat="server" onclick="lbLocalPlaces_Click">                <asp:Image ID="imLocalPlaces" runat="server" ImageUrl="~/Images/Andy_Houses_on_the_horizon_-_Starburst_remix.png" Height="30" />                <span class="name">Local Places.</span>                <span class="arrow"></span>                </asp:LinkButton>                </li>            <li class="menu">                <asp:LinkButton ID="lbWhereIveBeen" runat="server" onclick="lbWhereIveBeen_Click">                <asp:Image ID="imImHere" runat="server" ImageUrl="~/Images/ryanlerch_flagpole.png" Height="30" />                <span class="name">I've been here.</span>                <span class="arrow"></span>                </asp:LinkButton>                </li>            <li class="menu">                <asp:LinkButton ID="lbMyStats" runat="server">                <asp:Image ID="imMyStats" runat="server" ImageUrl="~/Images/Anonymous_Spreadsheet.png" Height="30" />                <span class="name">My Stats.</span>                <span class="arrow"></span>                </asp:LinkButton>                </li>            <li class="menu">                <asp:LinkButton ID="lbAddAPlace" runat="server" onclick="lbAddAPlace_Click">                <asp:Image ID="imAddAPlace" runat="server" ImageUrl="~/Images/jean_victor_balin_add.png" Height="30" />                <span class="name">Add a Place.</span>                <span class="arrow"></span>                </asp:LinkButton>                </li>            <li class="button">                <input type="button" value="Update Your Current Location" onclick="UpdateLocation_Click()">                </li>        </ul>    </div>    </asp:Panel>    <div>    <asp:Panel ID="pnlCoupons" runat="server" Visible="false">        <div id="topbar">        <div id="title">MapSplore</div>        <div id="leftbutton">            <asp:LinkButton runat="server" Text="Return"        
         ID="ReturnFromDeals" OnClick="ReturnFromDeals_Click" /></div></div>    <div class="content">    <asp:ListView ID="lvCoupons" runat="server">        <LayoutTemplate>            <ul class="pageitem" runat="server">                <asp:PlaceHolder ID="itemPlaceholder" runat="server" />            </ul>        </LayoutTemplate>        <ItemTemplate>            <li class="menu">                <asp:LinkButton ID="lbBusiness" runat="server" Text='<%#Eval("Place.Name") %>' OnClick="lbBusiness_Click">                    <span class="comment">                    <asp:Label ID="lblAddress" runat="server" Text='<%#Eval("Place.Address1") %>' />                    <asp:Label ID="lblDis" runat="server" Text='<%# Convert.ToString(Convert.ToInt32(Eval("Place.Distance"))) + " meters" %>' CssClass="smallText" />                    <asp:HiddenField ID="hdPlaceId" runat="server" Value='<%#Eval("PlaceId") %>' />                    <asp:HiddenField ID="hdGeoPromotionId" runat="server" Value='<%#Eval("GeoPromotionId") %>' />                    </span>                    <span class="arrow"></span>                </asp:LinkButton></li></ItemTemplate></asp:ListView><asp:GridView ID="gvCoupons" runat="server" AutoGenerateColumns="false">            <HeaderStyle BackColor="Silver" />            <AlternatingRowStyle BackColor="Wheat" />            <Columns>                <asp:TemplateField AccessibleHeaderText="Business" HeaderText="Business">                    <ItemTemplate>                        <asp:Image ID="imPlaceType" runat="server" Text='<%#Eval("Type") %>' ImageUrl='<%#Eval("Image") %>' />                        <asp:LinkButton ID="lbBusiness" runat="server" Text='<%#Eval("Name") %>' OnClick="lbBusiness_Click" />                        <asp:LinkButton ID="lblAddress" runat="server" Text='<%#Eval("Address1") %>' CssClass="smallText" />                        <asp:Label ID="lblDis" runat="server" Text='<%# Convert.ToString(Convert.ToInt32(Eval("Distance"))) + " meters" %>' CssClass="smallText" />                        <asp:HiddenField ID="hdPlaceId" runat="server" Value='<%#Eval("PlaceId") %>' />                        <asp:HiddenField ID="hdGeoPromotionId" runat="server" Value='<%#Eval("GeoPromotionId") %>' />                        <asp:Label ID="lblInfo" runat="server" Visible="false" />                    </ItemTemplate>                </asp:TemplateField>            </Columns>        </asp:GridView>    </div>    </asp:Panel>    <asp:Panel ID="pnlPlaces" runat="server" Visible="false">    <div id="topbar">        <div id="title">            MapSplore</div><div id="leftbutton">            <asp:LinkButton runat="server" Text="Return"                 ID="ReturnFromPlaces" OnClick="ReturnFromPlaces_Click" /></div></div>        <div id="content">        <asp:ListView ID="lvPlaces" runat="server">            <LayoutTemplate>                <ul id="ulPlaces" class="pageitem" runat="server">                    <asp:PlaceHolder ID="itemPlaceholder" runat="server" />                    <li class="menu">                        <asp:LinkButton ID="lbNotListed" runat="server" CssClass="name"                            OnClick="lbNotListed_Click">                            Place not listed                            <span class="arrow"></span>                            </asp:LinkButton>                    </li>                </ul>            </LayoutTemplate>            <ItemTemplate>            <li class="menu">                <asp:LinkButton ID="lbImHere" runat="server" CssClass="name"                  
   OnClick="lbImHere_Click">                <%#DisplayName(Eval("Name")) %>&nbsp;                <%# Convert.ToString(Convert.ToInt32(Eval("Distance"))) + " meters" %>                <asp:HiddenField ID="hdPlaceId" runat="server" Value='<%#Eval("PlaceId") %>' />                <span class="arrow"></span>                </asp:LinkButton></li></ItemTemplate></asp:ListView>    </div>    </asp:Panel>    <asp:Panel ID="pnlImHereNow" runat="server" Visible="false">        <div id="topbar">        <div id="title">            MapSplore</div><div id="leftbutton">            <asp:LinkButton runat="server" Text="Places"                 ID="lbImHereNowReturn" OnClick="lbImHereNowReturn_Click" /></div></div>            <div id="rightbutton">            <asp:LinkButton runat="server" Text="Beginning"                ID="lbBackToBeginning" OnClick="lbBackToBeginning_Click" />            </div>        <div id="content">        <ul class="pageitem">        <asp:HiddenField ID="hdPlaceId" runat="server" />        <asp:HiddenField ID="hdPlaceLat" runat="server" />        <asp:HiddenField ID="hdPlaceLon" runat="server" />        <asp:HiddenField ID="hdPlaceTitle" runat="server" />        <asp:Button ID="btnImHereNow" runat="server"             Text="I'm here" OnClick="btnImHereNow_Click" />             <asp:Label ID="lblPlaceTitle" runat="server" /><br />        <asp:TextBox ID="txtWhatsHappening" runat="server" TextMode="MultiLine" Rows="2" style="width:300px" /><br />        <div id="divImHere" style="width:300px; height:300px"></div>        </div>        </ul>    </asp:Panel>    <asp:Panel runat="server" ID="pnlIveBeenHere" Visible="false">        <div id="topbar">        <div id="title">            Where I've been</div><div id="leftbutton">            <asp:LinkButton ID="lbIveBeenHereBack" runat="server" Text="Back" OnClick="lbIveBeenHereBack_Click" /></div></div>        <div id="content">        <asp:ListView ID="lvWhereIveBeen" runat="server">            <LayoutTemplate>                <ul id="ulWhereIveBeen" class="pageitem" runat="server">                    <asp:PlaceHolder ID="itemPlaceholder" runat="server" />                </ul>            </LayoutTemplate>            <ItemTemplate>            <li class="menu" runat="server">                <asp:LinkButton ID="lbPlaceIveBeen" runat="server" OnClick="lbPlaceIveBeen_Click" CssClass="name">                    <asp:Label ID="lblPlace" runat="server" Text='<%#Eval("PlaceName") %>' /> at                    <asp:Label ID="lblTime" runat="server" Text='<%#Eval("ATTime") %>' CssClass="content" />                    <asp:HiddenField ID="hdATID" runat="server" Value='<%#Eval("ATID") %>' />                    <span class="arrow"></span>                </asp:LinkButton>            </li>            </ItemTemplate>        </asp:ListView>        </div>        </asp:Panel>    <asp:Panel runat="server" ID="pnlPlaceIveBeen" Visible="false">        <div id="topbar">        <div id="title">            I've been here        </div>        <div id="leftbutton">            <asp:LinkButton ID="lbPlaceIveBeenBack" runat="server" Text="Back" OnClick="lbPlaceIveBeenBack_Click" />        </div>        <div id="rightbutton">            <asp:LinkButton ID="lbPlaceIveBeenBeginning" runat="server" Text="Beginning" OnClick="lbPlaceIveBeenBeginning_Click" />        </div>        </div>        <div id="content">            <ul class="pageitem">            <li>            <asp:HiddenField ID="hdPlaceIveBeenPlaceId" runat="server" />            <asp:HiddenField 
ID="hdPlaceIveBeenLatitude" runat="server" />            <asp:HiddenField ID="hdPlaceIveBeenLongitude" runat="server" />            <asp:Label ID="lblPlaceIveBeenName" runat="server" /><br />            <asp:Label ID="lblPlaceIveBeenAddress" runat="server" /><br />            <asp:Label ID="lblPlaceIveBeenCity" runat="server" />,             <asp:Label ID="lblPlaceIveBeenState" runat="server" />            <asp:Label ID="lblPlaceIveBeenZipCode" runat="server" /><br />            <asp:Label ID="lblPlaceIveBeenCountry" runat="server" /><br />            <div id="divPlaceIveBeen" style="width:300px; height:300px"></div>            </li>            </ul>        </div>                </asp:Panel>         <asp:Panel ID="pnlAddPlace" runat="server" Visible="false">                <div id="topbar"><div id="title">MapSplore</div><div id="leftbutton"><asp:LinkButton ID="lbAddPlaceReturn" runat="server" Text="Back" OnClick="lbAddPlaceReturn_Click" /></div><div id="rightnav"></div></div><div id="content">    <ul class="pageitem">        <li id="liPlaceAddMessage" runat="server" visible="false">        <asp:Label ID="PlaceAddMessage" runat="server" />        </li>        <li class="bigfield">        <asp:TextBox ID="txtPlaceName" runat="server" placeholder="Name of Establishment" />        </li>        <li class="bigfield">        <asp:TextBox ID="txtAddress1" runat="server" placeholder="Address 1" />        </li>        <li class="bigfield">        <asp:TextBox ID="txtCity" runat="server" placeholder="City" />        </li>        <li class="select">        <asp:DropDownList ID="ddlProvince" runat="server" placeholder="Select State" />          <span class="arrow"></span>              </li>        <li class="bigfield">        <asp:TextBox ID="txtZipCode" runat="server" placeholder="Zip Code" />        </li>        <li class="select">        <asp:DropDownList ID="ddlCountry" runat="server"             onselectedindexchanged="ddlCountry_SelectedIndexChanged" />        <span class="arrow"></span>        </li>        <li class="bigfield">        <asp:TextBox ID="txtPhoneNumber" runat="server" placeholder="Phone Number" />        </li>        <li class="checkbox">            <span class="name">You Here Now:</span> <asp:CheckBox ID="cbYouHereNow" runat="server" Checked="true" />        </li>        <li class="button">        <asp:Button ID="btnAdd" runat="server" Text="Add Place"             onclick="btnAdd_Click" />        </li>    </ul></div>        </asp:Panel>        <asp:Panel ID="pnlImHere" runat="server" Visible="false">            <asp:TextBox ID="txtImHere" runat="server"                 TextMode="MultiLine" Rows="3" Columns="40" /><br />            <asp:DropDownList ID="ddlPlace" runat="server" /><br />            <asp:Button ID="btnHere" runat="server" Text="Tell Everyone I'm Here"                 onclick="btnHere_Click" /><br />        </asp:Panel>     </div>    </ContentTemplate>    </asp:UpdatePanel> </asp:Content> Code Behind .cs file: using System;using System.Collections.Generic;using System.Linq;using System.Web;using System.Web.Security;using System.Web.UI;using System.Web.UI.HtmlControls;using System.Web.UI.WebControls;using LocationDataModel; public partial class AT_iPhone_Default : ViewStatePage{    private iPhoneDevice ipd;     protected void Page_Load(object sender, EventArgs e)    {        LocationDataEntities lde = new LocationDataEntities();        if (!Page.IsPostBack)        {            var Countries = from c in lde.Countries select c;            foreach (Country co in 
Countries)            {                ddlCountry.Items.Add(new ListItem(co.Name, co.CountryId.ToString()));            }            ddlCountry_SelectedIndexChanged(ddlCountry, null);            if (AppleIPhone.IsIPad())                ipd = iPhoneDevice.iPad;            if (AppleIPhone.IsIPhone())                ipd = iPhoneDevice.iPhone;            if (AppleIPhone.IsIPodTouch())                ipd = iPhoneDevice.iPodTouch;        }    }    protected void btnPlaces_Click(object sender, EventArgs e)    {    }    protected void btnAdd_Click(object sender, EventArgs e)    {        bool blImHere = cbYouHereNow.Checked;        string Place = txtPlaceName.Text,            Address1 = txtAddress1.Text,            City = txtCity.Text,            ZipCode = txtZipCode.Text,            PhoneNumber = txtPhoneNumber.Text,            ProvinceId = ddlProvince.SelectedItem.Value,            CountryId = ddlCountry.SelectedItem.Value;        int iProvinceId, iCountryId;        double dLatitude, dLongitude;        DataAccess da = new DataAccess();        if ((!String.IsNullOrEmpty(ProvinceId)) &&            (!String.IsNullOrEmpty(CountryId)))        {            iProvinceId = Convert.ToInt32(ProvinceId);            iCountryId = Convert.ToInt32(CountryId);            if (blImHere)            {                dLatitude = Convert.ToDouble(Latitude.Value);                dLongitude = Convert.ToDouble(Longitude.Value);                da.StorePlace(Place, Address1, String.Empty, City,                    iProvinceId, ZipCode, iCountryId, PhoneNumber,                    dLatitude, dLongitude);            }            else            {                da.StorePlace(Place, Address1, String.Empty, City,                    iProvinceId, ZipCode, iCountryId, PhoneNumber);            }            liPlaceAddMessage.Visible = true;            PlaceAddMessage.Text = "Awesome, your place has been added. 
Add Another!";            txtPlaceName.Text = String.Empty;            txtAddress1.Text = String.Empty;            txtCity.Text = String.Empty;            ddlProvince.SelectedIndex = -1;            txtZipCode.Text = String.Empty;            txtPhoneNumber.Text = String.Empty;        }        else        {            liPlaceAddMessage.Visible = true;            PlaceAddMessage.Text = "Please select a State and a Country.";        }    }    protected void ddlCountry_SelectedIndexChanged(object sender, EventArgs e)    {        string CountryId = ddlCountry.SelectedItem.Value;        if (!String.IsNullOrEmpty(CountryId))        {            int iCountryId = Convert.ToInt32(CountryId);            LocationDataModel.LocationDataEntities lde = new LocationDataModel.LocationDataEntities();            var prov = from p in lde.Provinces where p.CountryId == iCountryId                        orderby p.ProvinceName select p;                        ddlProvince.Items.Add(String.Empty);            foreach (Province pr in prov)            {                ddlProvince.Items.Add(new ListItem(pr.ProvinceName, pr.ProvinceId.ToString()));            }        }        else        {            ddlProvince.Items.Clear();        }    }    protected void btnImHere_Click(object sender, EventArgs e)    {        int i = 0;        DataAccess da = new DataAccess();        double Lat = Convert.ToDouble(Latitude.Value),            Lon = Convert.ToDouble(Longitude.Value);        List<Place> lp = da.NearByLocations(Lat, Lon);        foreach (Place p in lp)        {            ListItem li = new ListItem(p.Name, p.PlaceId.ToString());            if (i == 0)            {                li.Selected = true;            }            ddlPlace.Items.Add(li);            i++;        }        pnlAddPlace.Visible = false;        pnlImHere.Visible = true;    }    protected void lbImHere_Click(object sender, EventArgs e)    {        string UserName = Membership.GetUser().UserName;        ListViewItem lvi = (ListViewItem)(((LinkButton)sender).Parent);        HiddenField hd = (HiddenField)lvi.FindControl("hdPlaceId");        long PlaceId = Convert.ToInt64(hd.Value);        double dLatitude = Convert.ToDouble(Latitude.Value);        double dLongitude = Convert.ToDouble(Longitude.Value);        DataAccess da = new DataAccess();        Place pl = da.GetPlace(PlaceId);        pnlImHereNow.Visible = true;        pnlPlaces.Visible = false;        hdPlaceId.Value = PlaceId.ToString();        hdPlaceLat.Value = pl.Latitude.ToString();        hdPlaceLon.Value = pl.Longitude.ToString();        hdPlaceTitle.Value = pl.Name;        lblPlaceTitle.Text = pl.Name;    }    protected void btnHere_Click(object sender, EventArgs e)    {        string UserName = Membership.GetUser().UserName;        string WhatsH = txtImHere.Text;        long PlaceId = Convert.ToInt64(ddlPlace.SelectedValue);        double dLatitude = Convert.ToDouble(Latitude.Value);        double dLongitude = Convert.ToDouble(Longitude.Value);        DataAccess da = new DataAccess();        da.StoreUserAT(UserName, PlaceId, WhatsH,            dLatitude, dLongitude);    }    protected void btnLocalCoupons_Click(object sender, EventArgs e)    {        double dLatitude = Convert.ToDouble(Latitude.Value);        double dLongitude = Convert.ToDouble(Longitude.Value);        DataAccess da = new DataAccess();     }    protected void lbBusiness_Click(object sender, EventArgs e)    {        string UserName = Membership.GetUser().UserName;        GridViewRow gvr = 
(GridViewRow)(((LinkButton)sender).Parent.Parent);        HiddenField hd = (HiddenField)gvr.FindControl("hdPlaceId");        string sPlaceId = hd.Value;        Int64 PlaceId;        if (!String.IsNullOrEmpty(sPlaceId))        {            PlaceId = Convert.ToInt64(sPlaceId);        }    }    protected void lbLocalDeals_Click(object sender, EventArgs e)    {        double dLatitude = Convert.ToDouble(Latitude.Value);        double dLongitude = Convert.ToDouble(Longitude.Value);        DataAccess da = new DataAccess();        pnlCoupons.Visible = true;        pnlStart.Visible = false;        List<GeoPromotion> lgp = da.NearByDeals(dLatitude, dLongitude);        lvCoupons.DataSource = lgp;        lvCoupons.DataBind();    }    protected void lbLocalPlaces_Click(object sender, EventArgs e)    {        DataAccess da = new DataAccess();        double Lat = Convert.ToDouble(Latitude.Value);        double Lon = Convert.ToDouble(Longitude.Value);        List<LocationDataModel.Place> places = da.NearByLocations(Lat, Lon);        lvPlaces.DataSource = places;        lvPlaces.SelectedIndex = -1;        lvPlaces.DataBind();        pnlPlaces.Visible = true;        pnlStart.Visible = false;    }    protected void ReturnFromPlaces_Click(object sender, EventArgs e)    {        pnlPlaces.Visible = false;        pnlStart.Visible = true;    }    protected void ReturnFromDeals_Click(object sender, EventArgs e)    {        pnlCoupons.Visible = false;        pnlStart.Visible = true;    }    protected void btnImHereNow_Click(object sender, EventArgs e)    {        long PlaceId = Convert.ToInt32(hdPlaceId.Value);        string UserName = Membership.GetUser().UserName;        string WhatsHappening = txtWhatsHappening.Text;        double UserLat = Convert.ToDouble(Latitude.Value);        double UserLon = Convert.ToDouble(Longitude.Value);        DataAccess da = new DataAccess();        da.StoreUserAT(UserName, PlaceId, WhatsHappening,             UserLat, UserLon);    }    protected void lbImHereNowReturn_Click(object sender, EventArgs e)    {        pnlImHereNow.Visible = false;        pnlPlaces.Visible = true;    }    protected void lbBackToBeginning_Click(object sender, EventArgs e)    {        pnlStart.Visible = true;        pnlImHereNow.Visible = false;    }    protected void lbWhereIveBeen_Click(object sender, EventArgs e)    {        string UserName = Membership.GetUser().UserName;        pnlStart.Visible = false;        pnlIveBeenHere.Visible = true;        DataAccess da = new DataAccess();        lvWhereIveBeen.DataSource = da.UserATs(UserName, 0, 15);        lvWhereIveBeen.DataBind();    }    protected void lbIveBeenHereBack_Click(object sender, EventArgs e)    {        pnlIveBeenHere.Visible = false;        pnlStart.Visible = true;    }     protected void lbPlaceIveBeen_Click(object sender, EventArgs e)    {        LinkButton lb = (LinkButton)sender;        ListViewItem lvi = (ListViewItem)lb.Parent.Parent;        HiddenField hdATID = (HiddenField)lvi.FindControl("hdATID");        Int64 ATID = Convert.ToInt64(hdATID.Value);        DataAccess da = new DataAccess();        pnlIveBeenHere.Visible = false;        pnlPlaceIveBeen.Visible = true;        var plac = da.GetPlaceViaATID(ATID);        hdPlaceIveBeenPlaceId.Value = plac.PlaceId.ToString();        hdPlaceIveBeenLatitude.Value = plac.Latitude.ToString();        hdPlaceIveBeenLongitude.Value = plac.Longitude.ToString();        lblPlaceIveBeenName.Text = plac.Name;        lblPlaceIveBeenAddress.Text = plac.Address1;        lblPlaceIveBeenCity.Text = 
plac.City;        lblPlaceIveBeenState.Text = plac.Province.ProvinceName;        lblPlaceIveBeenZipCode.Text = plac.ZipCode;        lblPlaceIveBeenCountry.Text = plac.Country.Name;    }     protected void lbNotListed_Click(object sender, EventArgs e)    {        SetupAddPoint();        pnlPlaces.Visible = false;    }     protected void lbAddAPlace_Click(object sender, EventArgs e)    {        SetupAddPoint();    }     private void SetupAddPoint()    {        double lat = Convert.ToDouble(Latitude.Value);        double lon = Convert.ToDouble(Longitude.Value);        DataAccess da = new DataAccess();        var zip = da.WhereAmIAt(lat, lon);        if (zip.Count > 0)        {            var z0 = zip[0];            txtCity.Text = z0.City;            txtZipCode.Text = z0.ZipCode;            ddlProvince.ClearSelection();            if (z0.ProvinceId.HasValue == true)            {                foreach (ListItem li in ddlProvince.Items)                {                    if (li.Value == z0.ProvinceId.Value.ToString())                    {                        li.Selected = true;                        break;                    }                }            }        }        pnlAddPlace.Visible = true;        pnlStart.Visible = false;    }    protected void lbAddPlaceReturn_Click(object sender, EventArgs e)    {        pnlAddPlace.Visible = false;        pnlStart.Visible = true;        liPlaceAddMessage.Visible = false;        PlaceAddMessage.Text = String.Empty;    }    protected void lbPlaceIveBeenBack_Click(object sender, EventArgs e)    {        pnlIveBeenHere.Visible = true;        pnlPlaceIveBeen.Visible = false;            }    protected void lbPlaceIveBeenBeginning_Click(object sender, EventArgs e)    {        pnlPlaceIveBeen.Visible = false;        pnlStart.Visible = true;    }    protected string DisplayName(object val)    {        string strVal = Convert.ToString(val);         if (AppleIPhone.IsIPad())        {            ipd = iPhoneDevice.iPad;        }        if (AppleIPhone.IsIPhone())        {            ipd = iPhoneDevice.iPhone;        }        if (AppleIPhone.IsIPodTouch())        {            ipd = iPhoneDevice.iPodTouch;        }        return (iPhoneHelper.DisplayContentOnMenu(strVal, ipd));    }} iPhoneHelper.cs file: using System;using System.Collections.Generic;using System.Linq;using System.Web; public enum iPhoneDevice{    iPhone, iPodTouch, iPad}/// <summary>/// Summary description for iPhoneHelper/// </summary>/// public class iPhoneHelper{ public iPhoneHelper() {  //  // TODO: Add constructor logic here  // } // This code is stupid in retrospect. 
Use css to solve this problem      public static string DisplayContentOnMenu(string val, iPhoneDevice ipd)    {        string Return = val;        string Elipsis = "...";        int iPadMaxLength = 30;        int iPhoneMaxLength = 15;        if (ipd == iPhoneDevice.iPad)        {            if (Return.Length > iPadMaxLength)            {                Return = Return.Substring(0, iPadMaxLength - Elipsis.Length) + Elipsis;            }        }        else        {            if (Return.Length > iPhoneMaxLength)            {                Return = Return.Substring(0, iPhoneMaxLength - Elipsis.Length) + Elipsis;            }        }        return (Return);    }}  Source code for the ViewStatePage: using System;using System.Data;using System.Data.SqlClient;using System.Configuration;using System.Web;using System.Web.Security;using System.Web.UI;using System.Web.UI.WebControls;using System.Web.UI.WebControls.WebParts;using System.Web.UI.HtmlControls; /// <summary>/// Summary description for BasePage/// </summary>#region Base class for a page.public class ViewStatePage : System.Web.UI.Page{     PageStatePersisterToDatabase myPageStatePersister;        public ViewStatePage()        : base()    {        myPageStatePersister = new PageStatePersisterToDatabase(this);    }     protected override PageStatePersister PageStatePersister    {        get        {            return myPageStatePersister;        }    } }#endregion #region This class will override the page persistence to store page state in a database.public class PageStatePersisterToDatabase : PageStatePersister{    private string ViewStateKeyField = "__VIEWSTATE_KEY";    private string _exNoConnectionStringFound = "No Database Configuration information is in the web.config.";     public PageStatePersisterToDatabase(Page page)        : base(page)    {    }     public override void Load()    {         // Get the cache key from the web form data        System.Int64 key = Convert.ToInt64(Page.Request.Params[ViewStateKeyField]);         Pair state = this.LoadState(key);         // Abort if cache object is not of type Pair        if (state == null)            throw new ApplicationException("Missing valid " + ViewStateKeyField);         // Set view state and control state        ViewState = state.First;        ControlState = state.Second;    }     public override void Save()    {         // No processing needed if no states available        if (ViewState == null && ControlState != null)            return;         System.Int64 key;        IStateFormatter formatter = this.StateFormatter;        Pair statePair = new Pair(ViewState, ControlState);         // Serialize the statePair object to a string.        string serializedState = formatter.Serialize(statePair);         // Save the ViewState and get a unique identifier back.        key = SaveState(serializedState);         // Register hidden field to store cache key in        // Page.ClientScript does not work properly with Atlas.        
//Page.ClientScript.RegisterHiddenField(ViewStateKeyField, key.ToString());        ScriptManager.RegisterHiddenField(this.Page, ViewStateKeyField, key.ToString());    }     private System.Int64 SaveState(string PageState)    {        System.Int64 i64Key = 0;        string strConn = String.Empty,            strProvider = String.Empty;         string strSql = "insert into tblPageState ( SerializedState ) values ( '" + SqlEscape(PageState) + "');select scope_identity();";        SqlConnection sqlCn;        SqlCommand sqlCm;        try        {            GetDBConnectionString(ref strConn, ref strProvider);            sqlCn = new SqlConnection(strConn);            sqlCm = new SqlCommand(strSql, sqlCn);            sqlCn.Open();            i64Key = Convert.ToInt64(sqlCm.ExecuteScalar());            if (sqlCn.State != ConnectionState.Closed)            {                sqlCn.Close();            }            sqlCn.Dispose();            sqlCm.Dispose();        }        finally        {            sqlCn = null;            sqlCm = null;        }        return i64Key;    }     private Pair LoadState(System.Int64 iKey)    {        string strConn = String.Empty,            strProvider = String.Empty,            SerializedState = String.Empty,            strMinutesInPast = GetMinutesInPastToDelete();        Pair PageState;        string strSql = "select SerializedState from tblPageState where tblPageStateID=" + iKey.ToString() + ";" +            "delete from tblPageState where DateUpdated<DateAdd(mi, " + strMinutesInPast + ", getdate());";        SqlConnection sqlCn;        SqlCommand sqlCm;        try        {            GetDBConnectionString(ref strConn, ref strProvider);            sqlCn = new SqlConnection(strConn);            sqlCm = new SqlCommand(strSql, sqlCn);             sqlCn.Open();            SerializedState = Convert.ToString(sqlCm.ExecuteScalar());            IStateFormatter formatter = this.StateFormatter;             if ((null == SerializedState) ||                (String.Empty == SerializedState))            {                throw (new ApplicationException("No ViewState records were returned."));            }             // Deserilize returns the Pair object that is serialized in            // the Save method.            
PageState = (Pair)formatter.Deserialize(SerializedState);             if (sqlCn.State != ConnectionState.Closed)            {                sqlCn.Close();            }            sqlCn.Dispose();            sqlCm.Dispose();        }        finally        {            sqlCn = null;            sqlCm = null;        }        return PageState;    }     private string SqlEscape(string Val)    {        string ReturnVal = String.Empty;        if (null != Val)        {            ReturnVal = Val.Replace("'", "''");        }        return (ReturnVal);    }    private void GetDBConnectionString(ref string ConnectionStringValue, ref string ProviderNameValue)    {        if (System.Configuration.ConfigurationManager.ConnectionStrings.Count > 0)        {            ConnectionStringValue = System.Configuration.ConfigurationManager.ConnectionStrings["ApplicationServices"].ConnectionString;            ProviderNameValue = System.Configuration.ConfigurationManager.ConnectionStrings["ApplicationServices"].ProviderName;        }        else        {            throw new ConfigurationErrorsException(_exNoConnectionStringFound);        }    }    private string GetMinutesInPastToDelete()    {        string strReturn = "-60";        if (null != System.Configuration.ConfigurationManager.AppSettings["MinutesInPastToDeletePageState"])        {            strReturn = System.Configuration.ConfigurationManager.AppSettings["MinutesInPastToDeletePageState"].ToString();        }        return (strReturn);    }}#endregion AppleiPhone.cs file: using System;using System.Collections.Generic;using System.Linq;using System.Web; /// <summary>/// Summary description for AppleIPhone/// </summary>public class AppleIPhone{ public AppleIPhone() {  //  // TODO: Add constructor logic here  // }     static public bool IsIPhoneOS()    {        return (IsIPad() || IsIPhone() || IsIPodTouch());    }     static public bool IsIPhone()    {        return IsTest("iPhone");    }     static public bool IsIPodTouch()    {        return IsTest("iPod");    }     static public bool IsIPad()    {        return IsTest("iPad");    }     static private bool IsTest(string Agent)    {        bool bl = false;        string ua = HttpContext.Current.Request.UserAgent.ToLower();        try        {            bl = ua.Contains(Agent.ToLower());        }        catch { }        return (bl);        }} Master page .cs: using System;using System.Collections.Generic;using System.Linq;using System.Web;using System.Web.UI;using System.Web.UI.HtmlControls;using System.Web.UI.WebControls; public partial class MasterPages_iPhoneMaster : System.Web.UI.MasterPage{    protected void Page_Load(object sender, EventArgs e)    {            HtmlHead head = Page.Header;            HtmlMeta meta = new HtmlMeta();            if (AppleIPhone.IsIPad() == true)            {                meta.Content = "width=400,user-scalable=no";                head.Controls.Add(meta);             }            else            {                meta.Content = "width=device-width, user-scalable=no";                meta.Attributes.Add("name", "viewport");            }            meta.Attributes.Add("name", "viewport");            head.Controls.Add(meta);            HtmlLink cssLink = new HtmlLink();            HtmlGenericControl script = new HtmlGenericControl("script");            script.Attributes.Add("type", "text/javascript");            script.Attributes.Add("src", ResolveUrl("~/Scripts/iWebKit/javascript/functions.js"));            head.Controls.Add(script);            
cssLink.Attributes.Add("rel", "stylesheet");            cssLink.Attributes.Add("href", ResolveUrl("~/Scripts/iWebKit/css/style.css") );            cssLink.Attributes.Add("type", "text/css");            head.Controls.Add(cssLink);            HtmlGenericControl jsLink = new HtmlGenericControl("script");            //jsLink.Attributes.Add("type", "text/javascript");            //jsLink.Attributes.Add("src", ResolveUrl("~/Scripts/jquery-1.4.1.min.js") );            //head.Controls.Add(jsLink);            HtmlLink appleIcon = new HtmlLink();            appleIcon.Attributes.Add("rel", "apple-touch-icon");            appleIcon.Attributes.Add("href", ResolveUrl("~/apple-touch-icon.png"));            HtmlMeta appleMobileWebAppStatusBarStyle = new HtmlMeta();            appleMobileWebAppStatusBarStyle.Attributes.Add("name", "apple-mobile-web-app-status-bar-style");            appleMobileWebAppStatusBarStyle.Attributes.Add("content", "black");            head.Controls.Add(appleMobileWebAppStatusBarStyle);    }     internal string FindPath(string Location)    {        string Url = Server.MapPath(Location);        return (Url);    }}
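A side note on the PageStatePersisterToDatabase class above: SaveState builds its INSERT statement by concatenating the serialized state into the SQL text and relies on SqlEscape to double up quotes. A parameterized command avoids manual escaping altogether. The following is only a minimal sketch of how SaveState could be reworked, assuming the same tblPageState table and ApplicationServices connection string used above (System.Data, System.Data.SqlClient and System.Configuration are already imported in the original file):

private long SaveState(string pageState)
{
    // Same tblPageState table as above; the state travels as a parameter, so no manual escaping is needed.
    string connectionString = ConfigurationManager.ConnectionStrings["ApplicationServices"].ConnectionString;
    const string sql = "insert into tblPageState (SerializedState) values (@state); select scope_identity();";

    using (SqlConnection connection = new SqlConnection(connectionString))
    using (SqlCommand command = new SqlCommand(sql, connection))
    {
        // -1 maps the parameter to nvarchar(max), which comfortably holds large serialized state.
        command.Parameters.Add("@state", SqlDbType.NVarChar, -1).Value = pageState;
        connection.Open();
        // scope_identity() comes back as a decimal, so convert explicitly.
        return Convert.ToInt64(command.ExecuteScalar());
    }
}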

    Read the article

  • MVC Portable Areas Enhancement – Embedded Resource Controller

    - by Steve Michelotti
    MvcContrib contains a feature called Portable Areas which I’ve recently blogged about. In short, portable areas provide a way to distribute MVC binary components as simple .NET assemblies where the aspx/ascx files are actually compiled into the assembly as embedded resources. This is an extremely cool feature but once you start building robust portable areas, you’ll also want to be able to access other external files like css and javascript.  After my recent post suggesting portable areas be expanded to include other embedded resources, Eric Hexter asked me if I’d like to contribute the code to MvcContrib (which of course I did!). Embedded resources are stored in a case-sensitive way in .NET assemblies and the existing embedded view engine inside MvcContrib already took this into account. Obviously, we’d want the same case sensitivity handling to be taken into account for any embedded resource so my job consisted of 1) adding the Embedded Resource Controller, and 2) a little refactor to extract the logic that deals with embedded resources so that the embedded view engine and the embedded resource controller could both leverage it and, therefore, keep the code DRY. The embedded resource controller targets these scenarios: External image files that are referenced in an <img> tag External files referenced like css or JavaScript files Image files referenced inside css files Embedded Resources Walkthrough This post will describe a walkthrough of using the embedded resource controller in your portable areas to include the scenarios outlined above. I will build a trivial “Quick Links” widget to illustrate the concepts. The portable area registration is the starting point for all portable areas. The MvcContrib.PortableAreas.EmbeddedResourceController is optional functionality – you must opt-in if you want to use it.  To do this, you simply “register” it by providing a route in your area registration that uses it like this: 1: context.MapRoute("ResourceRoute", "quicklinks/resource/{resourceName}", 2: new { controller = "EmbeddedResource", action = "Index" }, 3: new string[] { "MvcContrib.PortableAreas" }); First, notice that I can specify any route I want (e.g., “quicklinks/resources/…”).  Second, notice that I need to include the “MvcContrib.PortableAreas” namespace as the fourth parameter so that the framework is able to find the EmbeddedResourceController at runtime. The handling of embedded views and embedded resources have now been merged.  Therefore, the call to: 1: RegisterTheViewsInTheEmmeddedViewEngine(GetType()); has now been removed (breaking change).  It has been replaced with: 1: RegisterAreaEmbeddedResources(); Other than that, the portable area registration remains unchanged. The solution structure for the static files in my portable area looks like this: I’ve got a css file in a folder called “Content” as well as a couple of image files in a folder called “images”. To reference these in my aspx/ascx code, all of have to do is this: 1: <link href="<%= Url.Resource("Content.QuickLinks.css") %>" rel="stylesheet" type="text/css" /> 2: <img src="<%= Url.Resource("images.globe.png") %>" /> This results in the following HTML mark up: 1: <link href="/quicklinks/resource/Content.QuickLinks.css" rel="stylesheet" type="text/css" /> 2: <img src="/quicklinks/resource/images.globe.png" /> The Url.Resource() method is now included in MvcContrib as well. Make sure you import the “MvcContrib” namespace in your views. 
Next, I have the following html to render the quick links: 1: <ul class="links"> 2: <li><a href="http://www.google.com">Google</a></li> 3: <li><a href="http://www.bing.com">Bing</a></li> 4: <li><a href="http://www.yahoo.com">Yahoo</a></li> 5: </ul> Notice the <ul> tag has a class called “links”. This is defined inside my QuickLinks.css file and looks like this: 1: ul.links li 2: { 3: background: url(/quicklinks/resource/images.navigation.png) left 4px no-repeat; 4: padding-left: 20px; 5: margin-bottom: 4px; 6: } On line 3 we’re able to refer to the url for the background property. As a final note, although we already have complete control over the location of the embedded resources inside the assembly, what if we also want control over the physical URL routes? This point was raised by John Nelson in this post. This has been taken into account as well. For example, suppose you want your physical url to look like this: 1: <img src="/quicklinks/images/globe.png" /> instead of the corresponding URL shown above (i.e., “/quicklinks/resources/images.globe.png”). You can do this easily by specifying another route for it which includes a “resourcePath” parameter that is pre-pended. Here is the complete code for the area registration with the custom route for the images shown on lines 9-11: 1: public class QuickLinksRegistration : PortableAreaRegistration 2: { 3: public override void RegisterArea(System.Web.Mvc.AreaRegistrationContext context, IApplicationBus bus) 4: { 5: context.MapRoute("ResourceRoute", "quicklinks/resource/{resourceName}", 6: new { controller = "EmbeddedResource", action = "Index" }, 7: new string[] { "MvcContrib.PortableAreas" }); 8:   9: context.MapRoute("ResourceImageRoute", "quicklinks/images/{resourceName}", 10: new { controller = "EmbeddedResource", action = "Index", resourcePath = "images" }, 11: new string[] { "MvcContrib.PortableAreas" }); 12:   13: context.MapRoute("quicklink", "quicklinks/{controller}/{action}", 14: new {controller = "links", action = "index"}); 15:   16: this.RegisterAreaEmbeddedResources(); 17: } 18:   19: public override string AreaName 20: { 21: get 22: { 23: return "QuickLinks"; 24: } 25: } 26: } The Quick Links portable area results in the following requests (including custom route formats): The complete code for this post is now included in the Portable Areas sample solution in the latest MvcContrib source code. You can get the latest code now. Portable Areas open up exciting new possibilities for MVC development!
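For a sense of what serving an embedded resource involves under the covers, a simplified controller might look like the sketch below. This is illustrative only and not the actual MvcContrib EmbeddedResourceController; in particular, the resource naming convention and the content-type mapping are assumptions:

using System;
using System.IO;
using System.Web;
using System.Web.Mvc;

public class SimpleEmbeddedResourceController : Controller
{
    // Maps a request like "Content.QuickLinks.css" to the embedded resource
    // "<AssemblyName>.Content.QuickLinks.css" in this controller's assembly.
    public ActionResult Index(string resourceName)
    {
        var assembly = GetType().Assembly;
        string fullName = assembly.GetName().Name + "." + resourceName;

        Stream stream = assembly.GetManifestResourceStream(fullName);
        if (stream == null)
            throw new HttpException(404, "Embedded resource not found: " + resourceName);

        return File(stream, GetContentType(resourceName));
    }

    private static string GetContentType(string resourceName)
    {
        // Minimal mapping for the file types used in the walkthrough.
        if (resourceName.EndsWith(".css", StringComparison.OrdinalIgnoreCase)) return "text/css";
        if (resourceName.EndsWith(".js", StringComparison.OrdinalIgnoreCase)) return "text/javascript";
        if (resourceName.EndsWith(".png", StringComparison.OrdinalIgnoreCase)) return "image/png";
        return "application/octet-stream";
    }
}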

    Read the article

  • New MySQL Cluster 7.3 Previews: Foreign Keys, NoSQL Node.js API and Auto-Tuned Clusters

    - by Mat Keep
At this week's MySQL Connect conference, Oracle previewed an exciting new wave of developments for MySQL Cluster, further extending its simplicity and flexibility by expanding the range of use-cases, adding new NoSQL options, and automating configuration. What’s new: Development Release 1: MySQL Cluster 7.3 with Foreign Keys Early Access “Labs” Preview: MySQL Cluster NoSQL API for Node.js Early Access “Labs” Preview: MySQL Cluster GUI-Based Auto-Installer In this blog, I'll introduce you to the features being previewed. Review the blogs listed below for more detail on each of the specific features discussed. Save the date!: A live webinar is scheduled for Thursday 25th October at 0900 Pacific Time / 1600UTC where we will discuss each of these enhancements in more detail. Registration will be open soon and published to the MySQL webinars page. MySQL Cluster 7.3: Development Release 1 The first MySQL Cluster 7.3 Development Milestone Release (DMR) previews Foreign Keys, bringing powerful new functionality to MySQL Cluster while eliminating development complexity. Foreign Key support has been one of the most requested enhancements to MySQL Cluster – enabling users to simplify their data models and application logic – while extending the range of use-cases for both custom projects requiring referential integrity and packaged applications, such as eCommerce, CRM, CMS, etc. Implementation The Foreign Key functionality is implemented directly within the MySQL Cluster data nodes, allowing any client API accessing the cluster to benefit from them – whether they are SQL or one of the NoSQL interfaces (Memcached, C++, Java, JPA, HTTP/REST or the new Node.js API - discussed later.) The core referential actions defined in the SQL:2003 standard are implemented: CASCADE, RESTRICT, NO ACTION and SET NULL. In addition, the MySQL Cluster implementation supports the online adding and dropping of Foreign Keys, ensuring the Cluster continues to serve both read and write requests during the operation.  This represents a further enhancement to MySQL Cluster's support for online schema changes, i.e. adding and dropping indexes, adding columns, etc.  Read this blog for a demonstration of using Foreign Keys with MySQL Cluster.  Getting Started with MySQL Cluster 7.3 DMR1: Users can download either the source or binary and evaluate the MySQL Cluster 7.3 DMR with Foreign Keys now! (Select the Development Release tab). MySQL Cluster NoSQL API for Node.js Node.js is hot! In a little over 3 years, it has become one of the most popular environments for developing next generation web, cloud, mobile and social applications. Bringing JavaScript from the browser to the server, the design goal of Node.js is to build new real-time applications supporting millions of client connections, serviced by a single CPU core. Making it simple to further extend the flexibility and power of Node.js to the database layer, we are previewing the Node.js JavaScript API for MySQL Cluster as an Early Access release, available for download now from http://labs.mysql.com/. Select the following build: MySQL-Cluster-NoSQL-Connector-for-Node-js. Alternatively, you can clone the project at the MySQL GitHub page.  Implemented as a module for the V8 engine, the new API provides Node.js with a native, asynchronous JavaScript interface that can be used to both query and receive result sets directly from MySQL Cluster, without transformations to SQL. 
Figure 1: MySQL Cluster NoSQL API for Node.js enables end-to-end JavaScript development Rather than just presenting a simple interface to the database, the Node.js module integrates the MySQL Cluster native API library directly within the web application itself, enabling developers to seamlessly couple their high performance, distributed applications with a high performance, distributed, persistence layer delivering 99.999% availability. The new Node.js API joins a rich array of NoSQL interfaces available for MySQL Cluster. Whichever API is chosen for an application, SQL and NoSQL can be used concurrently across the same data set, providing the ultimate in developer flexibility.  Get started with MySQL Cluster NoSQL API for Node.js tutorial MySQL Cluster GUI-Based Auto-Installer Compatible with both MySQL Cluster 7.2 and 7.3, the Auto-Installer makes it simple for DevOps teams to quickly configure and provision highly optimized MySQL Cluster deployments – whether on-premise or in the cloud. Implemented with a standard HTML GUI and Python-based web server back-end, the Auto-Installer intelligently configures MySQL Cluster based on application requirements and auto-discovered hardware resources Figure 2: Automated Tuning and Configuration of MySQL Cluster Developed by the same engineering team responsible for the MySQL Cluster database, the installer provides standardized configurations that make it simple, quick and easy to build stable and high performance clustered environments. The auto-installer is previewed as an Early Access release, available for download now from http://labs.mysql.com/, by selecting the MySQL-Cluster-Auto-Installer build. You can read more about getting started with the MySQL Cluster auto-installer here. Watch the YouTube video for a demonstration of using the MySQL Cluster auto-installer Getting Started with MySQL Cluster If you are new to MySQL Cluster, the Getting Started guide will walk you through installing an evaluation cluster on a singe host (these guides reflect MySQL Cluster 7.2, but apply equally well to 7.3 and the Early Access previews). Or use the new MySQL Cluster Auto-Installer! Download the Guide to Scaling Web Databases with MySQL Cluster (to learn more about its architecture, design and ideal use-cases). Post any questions to the MySQL Cluster forum where our Engineering team and the MySQL Cluster community will attempt to assist you. Post any bugs you find to the MySQL bug tracking system (select MySQL Cluster from the Category drop-down menu) And if you have any feedback, please post them to the Comments section here or in the blogs referenced in this article. Summary MySQL Cluster 7.2 is the GA, production-ready release of MySQL Cluster. The first Development Release of MySQL Cluster 7.3 and the Early Access previews give you the opportunity to preview and evaluate future developments in the MySQL Cluster database, and we are very excited to be able to share that with you. Let us know how you get along with MySQL Cluster 7.3, and other features that you want to see in future releases, by using the comments of this blog.
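To make the Foreign Key feature concrete, the sketch below creates a pair of NDB tables with an ON DELETE CASCADE constraint from C# through the MySql.Data ADO.NET connector. The connector choice, connection string and table names are illustrative assumptions; the equivalent DDL can of course be run from any MySQL client:

using MySql.Data.MySqlClient;

class ClusterForeignKeyDemo
{
    static void Main()
    {
        // Point the connection string at a MySQL Server (SQL node) attached to the cluster.
        using (var connection = new MySqlConnection("server=127.0.0.1;uid=demo;pwd=demo;database=test"))
        {
            connection.Open();
            Execute(connection, "CREATE TABLE parent (id INT PRIMARY KEY) ENGINE=NDB");
            Execute(connection, "CREATE TABLE child (id INT PRIMARY KEY, parent_id INT, " +
                                "FOREIGN KEY (parent_id) REFERENCES parent(id) ON DELETE CASCADE) ENGINE=NDB");
            // Deleting a parent row now cascades to its children, enforced inside the data nodes.
        }
    }

    static void Execute(MySqlConnection connection, string sql)
    {
        using (var command = new MySqlCommand(sql, connection))
        {
            command.ExecuteNonQuery();
        }
    }
}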

    Read the article

  • Non-Dom Element Event Binding with jQuery

    - by Rick Strahl
    Yesterday I had a short discussion with Dave Reed on Twitter regarding setting up fake ‘events’ on objects that are hookable. jQuery makes it real easy to bind events on DOM elements and with a little bit of extra work (that I didn’t know about) you can also set up binding to non-DOM element ‘event’ bindings. Assume for a second that you have a simple JavaScript object like this: var item = { sku: "wwhelp" , foo: function() { alert('orginal foo function'); } }; and you want to be notified when the foo function is called. You can use jQuery to bind the handler like this: $(item).bind("foo", function () { alert('foo Hook called'); } ); Binding alone won’t actually cause the handler to be triggered so when you call: item.foo(); you only get the ‘original’ message. In order to fire both the original handler and the bound event hook you have to use the .trigger() function: $(item).trigger("foo"); Now if you do the following complete sequence: var item = { sku: "wwhelp" , foo: function() { alert('orginal foo function'); } }; $(item).bind("foo", function () { alert('foo hook called'); } ); $(item).trigger("foo"); You’ll see the ‘hook’ message first followed by the ‘original’ message fired in succession. In other words, using this mechanism you can hook standard object functions and chain events to them in a way similar to the way you can do with DOM elements. The main difference is that the ‘event’ has to be explicitly triggered in order for this to happen rather than just calling the method directly. .trigger() relies on some internal logic that checks for event bindings on the object (attached via an expando property) which .trigger() searches for in its bound event list. Once the ‘event’ is found it’s called prior to execution of the original function. This is pretty useful as it allows you to create standard JavaScript objects that can act as event handlers and are effectively hookable without having to explicitly override event definitions with JavaScript function handlers. You get all the benefits of jQuery’s event methods including the ability to hook up multiple events to the same handler function and the ability to uniquely identify each specific event instance with post fix string names (ie. .bind("MyEvent.MyName") and .unbind("MyEvent.MyName") to bind MyEvent). Watch out for an .unbind() Bug Note that there appears to be a bug with .unbind() in jQuery that doesn’t reliably unbind an event and results in a elem.removeEventListener is not a function error. The following code demonstrates: var item = { sku: "wwhelp", foo: function () { alert('orginal foo function'); } }; $(item).bind("foo.first", function () { alert('foo hook called'); }); $(item).bind("foo.second", function () { alert('foo hook2 called'); }); $(item).trigger("foo"); setTimeout(function () { $(item).unbind("foo"); // $(item).unbind("foo.first"); // $(item).unbind("foo.second"); $(item).trigger("foo"); }, 3000); The setTimeout call delays the unbinding and is supposed to remove the event binding on the foo function. It fails both with the foo only value (both if assigned only as “foo” or “foo.first/second” as well as when removing both of the postfixed event handlers explicitly. Oddly the following that removes only one of the two handlers works: setTimeout(function () { //$(item).unbind("foo"); $(item).unbind("foo.first"); // $(item).unbind("foo.second"); $(item).trigger("foo"); }, 3000); this actually works which is weird as the code in unbind tries to unbind using a DOM method that doesn’t exist. 
<shrug> A partial workaround for unbinding all ‘foo’ events is the following: setTimeout(function () { $.event.special.foo = { teardown: function () { alert('teardown'); return true; } }; $(item).unbind("foo"); $(item).trigger("foo"); }, 3000); which is a bit cryptic to say the least but it seems to work more reliably. I can’t take credit for any of this – thanks to Dave Reed and Damien Edwards who pointed out some of these behaviors. I didn’t find any good descriptions of the process so thought it’d be good to write it down here. Hope some of you find this helpful.© Rick Strahl, West Wind Technologies, 2005-2010Posted in jQuery  

    Read the article

  • Tweaking a few URL validation settings on ASP.NET v4.0

    - by Carlyle Dacosta
ASP.NET has a few default settings for URLs out of the box. These can be configured quite easily in the web.config file within the <system.web>/<httpRuntime> configuration section. Some of these are: <httpRuntime maxUrlLength=”<number here>”. This number should be an integer value (defaults to 260 characters). The value must be greater than or equal to zero, though obviously small values will lead to an unusable website. This attribute gates the length of the Url without query string. <httpRuntime maxQueryStringLength=”<number here>”. This number should be an integer value (defaults to 2048 characters). The value must be greater than or equal to zero, though obviously small values will lead to an unusable website. <httpRuntime requestPathInvalidCharacters=”List of characters you need included in ASP.NET's validation checks”. By default the characters are “<,>,*,%,&,:,\,?”. However, one can easily change this by modifying web.config. Remember, these characters can be specified in a variety of formats. For example, I want the character ‘!’ to be included in ASP.NET's URL validation logic, so I set the following: <httpRuntime requestPathInvalidCharacters=”<,>,*,%,&,:,\,?,!”. A character could also be specified in its xml encoded form: ‘&lt;’ would mean the ‘<’ sign. I could specify the ‘!’ in its xml encoded unicode format such as requestPathInvalidCharacters=”<,>,*,%,&,:,\,?,&#x0021;” or I could specify it in its unicode encoded form, the “<,>,*,%,&,:,\,?,%u0021” format. The following settings can be applied at Root Web.Config level, App Web.config level, Folder level or within a location tag: <location path="some path here"> <system.web> <httpRuntime maxUrlLength="" maxQueryStringLength="" requestPathInvalidChars="" If any of the above settings fail request validation, an Http 400 “Bad Request” HttpException is thrown. These can be easily handled in the Application_Error handler in Global.asax.   Also, a new attribute in <httpRuntime /> called “relaxedUrlToFileSystemMapping” has been added with a default of false. <httpRuntime … relaxedUrlToFileSystemMapping="true|false" /> When the relaxedUrlToFileSystemMapping attribute is set to false, inbound Urls still need to be valid NTFS file paths. For example, Urls (sans query string) need to be less than 260 characters; no path segment within a Url can use old-style DOS device names (LPT1, COM1, etc…); Urls must be valid Windows file paths. A url like “http://digg.com/http://cnn.com” should work with this attribute set to true (of course a few characters will need to be unblocked by removing them from requestPathInvalidCharacters="" above). Managed configuration for non-NTFS-compliant Urls is determined from the first valid configuration path found when walking up the path segments of the Url. 
For example, if the request Url is "/foo/bar/baz/<blah>data</blah>", and there is a web.config in the "/foo/bar" directory, then the managed configuration for the request comes from merging the configuration hierarchy to include the web.config from "/foo/bar". The value of the public property HttpRequest.PhysicalPath is set to [physical file path of the application root] + "REQUEST_URL_IS_NOT_A_VALID_FILESYSTEM_PATH". For example, given a request Url like "/foo/bar/baz/<blah>data</blah>", where the application root is "/foo/bar" and the physical file path for that root is "c:\inetpub\wwwroot\foo\bar", then PhysicalPath would be "c:\inetpub\wwwroot\foo\bar\ REQUEST_URL_IS_NOT_A_VALID_FILESYSTEM_PATH". Carl Dacosta ASP.NET QA Team
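As a footnote to the above: the post notes that the resulting Http 400 can be handled in the Application_Error handler in Global.asax. Here is a minimal sketch of what such a handler might look like; the redirect target is a made-up page, so adapt it to your application:

// In Global.asax.cs
protected void Application_Error(object sender, EventArgs e)
{
    Exception ex = Server.GetLastError();
    HttpException httpEx = ex as HttpException;

    if (httpEx != null && httpEx.GetHttpCode() == 400)
    {
        // The request failed ASP.NET's URL/query string validation.
        Server.ClearError();
        Response.Redirect("~/BadRequest.aspx");
    }
}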

    Read the article

  • Form, function and complexity in rule processing

    - by Charles Young
    Tim Bass posted on ‘Orwellian Event Processing’. I was involved in a heated exchange in the comments, and he has more recently published a post entitled ‘Disadvantages of Rule-Based Systems (Part 1)’. Whatever the rights and wrongs of our exchange, it clearly failed to generate any agreement or understanding of our different positions. I don't particularly want to promote further argument of that kind, but I do want to take the opportunity of offering a different perspective on rule-processing and an explanation of my comments. For me, the ‘red rag’ lay in Tim’s claim that “...rules alone are highly inefficient for most classes of (not simple) problems” and a later paragraph that appears to equate the simplicity of form (‘IF-THEN-ELSE’) with simplicity of function.   It is not the first time Tim has expressed these views and not the first time I have responded to his assertions.   Indeed, Tim has a long history of commenting on the subject of complex event processing (CEP) and, less often, rule processing in ‘robust’ terms, often asserting that very many other people’s opinions on this subject are mistaken.   In turn, I am of the opinion that, certainly in terms of rule processing, which is an area in which I have a specific interest and knowledge, he is often mistaken. There is no simple answer to the fundamental question ‘what is a rule?’ We use the word in a very fluid fashion in English. Likewise, the term ‘rule processing’, as used widely in IT, is equally difficult to define simplistically. The best way to envisage the term is as a ‘centre of gravity’ within a wider domain. That domain contains many other ‘centres of gravity’, including CEP, statistical analytics, neural networks, natural language processing and so much more. Whole communities tend to gravitate towards and build themselves around some of these centres. The term 'rule processing' is associated with many different technology types, various software products, different architectural patterns, the functional capability of many applications and services, etc. There is considerable variation amongst these different technologies, techniques and products. Very broadly, a common theme is their ability to manage certain types of processing and problem solving through declarative, or semi-declarative, statements of propositional logic bound to action-based consequences. It is generally important to be able to decouple these statements from other parts of an overall system or architecture so that they can be managed and deployed independently.  As a centre of gravity, ‘rule processing’ is no island. It exists in the context of a domain of discourse that is, itself, highly interconnected and continuous.   Rule processing does not, for example, exist in splendid isolation to natural language processing.   On the contrary, an on-going theme of rule processing is to find better ways to express rules in natural language and map these to executable forms.   Rule processing does not exist in splendid isolation to CEP.   On the contrary, an event processing agent can reasonably be considered as a rule engine (a theme in ‘Power of Events’ by David Luckham).   Rule processing does not live in splendid isolation to statistical approaches such as Bayesian analytics. On the contrary, rule processing and statistical analytics are highly synergistic.   Rule processing does not even live in splendid isolation to neural networks. 
For example, significant research has centred on finding ways to translate trained nets into explicit rule sets in order to support forms of validation and facilitate insight into the knowledge stored in those nets. What about simplicity of form?   Many rule processing technologies do indeed use a very simple form (‘If...Then’, ‘When...Do’, etc.)   However, it is a fundamental mistake to equate simplicity of form with simplicity of function.   It is absolutely mistaken to suggest that simplicity of form is a barrier to the efficient handling of complexity.   There are countless real-world examples which serve to disprove that notion.   Indeed, simplicity of form is often the key to handling complexity. Does rule processing offer a ‘one size fits all’. No, of course not.   No serious commentator suggests it does.   Does the design and management of large knowledge bases, expressed as rules, become difficult?   Yes, it can do, but that is true of any large knowledge base, regardless of the form in which knowledge is expressed.   The measure of complexity is not a function of rule set size or rule form.  It tends to be correlated more strongly with the size of the ‘problem space’ (‘search space’) which is something quite different.   Analysis of the problem space and the algorithms we use to search through that space are, of course, the very things we use to derive objective measures of the complexity of a given problem. This is basic computer science and common practice. Sailing a Dreadnaught through the sea of information technology and lobbing shells at some of the islands we encounter along the way does no one any good.   Building bridges and causeways between islands so that the inhabitants can collaborate in open discourse offers hope of real progress.
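To make the point about form versus function concrete, a rule reduced to its simplest shape is nothing more than a condition over some facts bound to a consequence, as in the hypothetical sketch below (the Order type and the sample rules are invented for illustration). Nothing about this simple form constrains how rich the fact model, the conditions or the engine that evaluates them can become:

using System;
using System.Collections.Generic;

public class Rule<TFact>
{
    public Func<TFact, bool> When { get; set; } // propositional condition
    public Action<TFact> Then { get; set; }     // action-based consequence
}

public class Order
{
    public decimal Total { get; set; }
    public bool IsFirstOrder { get; set; }
    public decimal Discount { get; set; }
}

public static class RuleDemo
{
    public static void Apply(Order order)
    {
        var rules = new List<Rule<Order>>
        {
            new Rule<Order> { When = o => o.Total > 100m,  Then = o => o.Discount += 5m },
            new Rule<Order> { When = o => o.IsFirstOrder, Then = o => o.Discount += 10m }
        };

        // A trivial forward pass; a production engine adds conflict resolution, Rete-style matching, etc.
        foreach (var rule in rules)
        {
            if (rule.When(order))
            {
                rule.Then(order);
            }
        }
    }
}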

    Read the article

  • HSDPA modem only working on certain USB ports

    - by nabulke
    Depending on which USB port I use to connect my HSDPA modem, the network manager will connect to the internet or not. I used to work (i.e. established a internet connection automatically) on all ports, but over time it simply stopped on some ports. lsusb output in all cases looks like that (Device ID varies depending on USB port): Bus 001 Device 009: ID 12d1:1003 Huawei Technologies Co., Ltd. E220 HSDPA Modem / E270 HSDPA/HSUPA Modem Any ideas what could cause this behaviour? What can I do to fix this? ADDED One additional information about the modem: if connected via USB it will be available as as harddrive AND as a HSDPA modem (kind of a duality...). In the error case, it will only be shown as a harddrive. ADDITIONAL INFO AS REQUESTED MODEM NOT WORKING Bus 004 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub Bus 003 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub Bus 002 Device 002: ID 413c:8000 Dell Computer Corp. BC02 Bluetooth Adapter Bus 002 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub Bus 001 Device 007: ID 12d1:1003 Huawei Technologies Co., Ltd. E220 HSDPA Modem / E270 HSDPA/HSUPA Modem Bus 001 Device 005: ID 046d:c00c Logitech, Inc. Optical Wheel Mouse Bus 001 Device 004: ID 05e3:0608 Genesys Logic, Inc. USB-2.0 4-Port HUB Bus 001 Device 003: ID 413c:0058 Dell Computer Corp. Port Replicator Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub laptop:~$ dmesg | grep 'usb' [ 0.225371] usbcore: registered new interface driver usbfs [ 0.225387] usbcore: registered new interface driver hub [ 0.225418] usbcore: registered new device driver usb [ 0.504291] usb usb1: configuration #1 chosen from 1 choice [ 0.504767] usb usb2: configuration #1 chosen from 1 choice [ 0.505046] usb usb3: configuration #1 chosen from 1 choice [ 0.505601] usb usb4: configuration #1 chosen from 1 choice [ 1.061064] usb 1-6: new high speed USB device using ehci_hcd and address 3 [ 1.192636] usb 1-6: configuration #1 chosen from 1 choice [ 1.447006] usb 2-2: new full speed USB device using uhci_hcd and address 2 [ 1.634908] usb 2-2: configuration #1 chosen from 1 choice [ 1.708164] usb 1-6.1: new high speed USB device using ehci_hcd and address 4 [ 1.801668] usb 1-6.1: configuration #1 chosen from 1 choice [ 2.076279] usb 1-6.1.1: new low speed USB device using ehci_hcd and address 5 [ 2.174932] usb 1-6.1.1: configuration #1 chosen from 1 choice [ 6.580315] usb 1-6.1.2: new high speed USB device using ehci_hcd and address6 [ 6.683479] usb 1-6.1.2: configuration #1 chosen from 1 choice [ 20.018671] usbcore: registered new interface driver btusb [ 20.131703] usbcore: registered new interface driver usb-storage [ 20.131988] usb-storage: device found at 6 [ 20.131991] usb-storage: waiting for device to settle before scanning [ 20.207981] usb 1-6.1.2: USB disconnect, address 6 [ 20.291499] usbcore: registered new interface driver hiddev [ 20.297052] input: Logitech USB Mouse as /devices/pci0000:00/0000:00:1d.7/usb1/1-6/1-6.1/1-6.1.1/1-6.1.1:1.0/input/input6 [ 20.297465] generic-usb 0003:046D:C00C.0001: input,hidraw0: USB HID v1.10 Mouse [Logitech USB Mouse] on usb-0000:00:1d.7-6.1.1/input0 [ 20.297534] usbcore: registered new interface driver usbhid [ 20.297803] usbhid: v2.6:USB HID core driver [ 26.552360] usb 1-6.1.2: new high speed USB device using ehci_hcd and address 7 [ 26.663506] usb 1-6.1.2: configuration #1 chosen from 1 choice [ 26.709628] usb-storage: device found at 7 [ 26.709631] usb-storage: waiting for device to settle before scanning [ 26.732387] usb-storage: device 
found at 7 [ 26.732390] usb-storage: waiting for device to settle before scanning [ 31.709568] usb-storage: device scan complete [ 31.733676] usb-storage: device scan complete MODEM WORKING Bus 004 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub Bus 003 Device 002: ID 046d:c00c Logitech, Inc. Optical Wheel Mouse Bus 003 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub Bus 002 Device 002: ID 413c:8000 Dell Computer Corp. BC02 Bluetooth Adapter Bus 002 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub Bus 001 Device 004: ID 12d1:1003 Huawei Technologies Co., Ltd. E220 HSDPA Modem / E270 HSDPA/HSUPA Modem Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub dmesg | grep 'usb' [ 0.134811] usbcore: registered new interface driver usbfs [ 0.134826] usbcore: registered new interface driver hub [ 0.134858] usbcore: registered new device driver usb [ 0.360327] usb usb1: configuration #1 chosen from 1 choice [ 0.360783] usb usb2: configuration #1 chosen from 1 choice [ 0.361061] usb usb3: configuration #1 chosen from 1 choice [ 0.361611] usb usb4: configuration #1 chosen from 1 choice [ 1.144122] usb 2-2: new full speed USB device using uhci_hcd and address 2 [ 1.346896] usb 2-2: configuration #1 chosen from 1 choice [ 1.588072] usb 3-1: new low speed USB device using uhci_hcd and address 2 [ 1.761204] usb 3-1: configuration #1 chosen from 1 choice [ 5.972042] usb 1-1: new high speed USB device using ehci_hcd and address 4 [ 6.115438] usb 1-1: configuration #1 chosen from 1 choice [ 19.990565] usbcore: registered new interface driver usbserial [ 19.991429] usb-storage: device found at 4 [ 19.991432] usb-storage: waiting for device to settle before scanning [ 20.017260] usbcore: registered new interface driver usb-storage [ 20.017305] usbcore: registered new interface driver usbserial_generic [ 20.017308] usbserial: USB Serial Driver core [ 20.017817] usb-storage: device found at 4 [ 20.017820] usb-storage: waiting for device to settle before scanning [ 20.070796] usbcore: registered new interface driver btusb [ 20.229525] usb 1-1: GSM modem (1-port) converter now attached to ttyUSB0 [ 20.229776] usb 1-1: GSM modem (1-port) converter now attached to ttyUSB1 [ 20.229843] usbcore: registered new interface driver option [ 20.230396] usbcore: registered new interface driver hiddev [ 20.246280] input: Logitech USB Mouse as /devices/pci0000:00/0000:00:1d.1/usb3/3-1/3-1:1.0/input/input6 [ 20.246438] generic-usb 0003:046D:C00C.0001: input,hidraw0: USB HID v1.10 Mouse [Logitech USB Mouse] on usb-0000:00:1d.1-1/input0 [ 20.246479] usbcore: registered new interface driver usbhid [ 20.246483] usbhid: v2.6:USB HID core driver [ 25.436579] usb-storage: device scan complete [ 25.437674] usb-storage: device scan complete

    Read the article

  • ASP.NET MVC HandleError Attribute

    - by Ben Griswold
    Last Wednesday, I took a whopping 15 minutes out of my day and added ELMAH (Error Logging Modules and Handlers) to my ASP.NET MVC application.  If you haven’t heard the news (I hadn’t until recently), ELMAH does a killer job of logging and reporting nearly all unhandled exceptions.  As for handled exceptions, I’ve been using NLog but since I was already playing with the ELMAH bits I thought I’d see if I couldn’t replace it. Atif Aziz provided a quick solution in his answer to a Stack Overflow question.  I’ll let you consult his answer to see how one can subclass the HandleErrorAttribute and override the OnException method in order to get the bits working.  I pretty much took rolled the recommended logic into my application and it worked like a charm.  Along the way, I did uncover a few HandleError fact to which I wasn’t already privy.  Most of my learning came from Steven Sanderson’s book, Pro ASP.NET MVC Framework.  I’ve flipped through a bunch of the book and spent time on specific sections.  It’s a really good read if you’re looking to pick up an ASP.NET MVC reference. Anyway, my notes are found a comments in the following code snippet.  I hope my notes clarify a few things for you too. public class LogAndHandleErrorAttribute : HandleErrorAttribute {     public override void OnException(ExceptionContext context)     {         // A word from our sponsors:         //      http://stackoverflow.com/questions/766610/how-to-get-elmah-to-work-with-asp-net-mvc-handleerror-attribute         //      and Pro ASP.NET MVC Framework by Steven Sanderson         //         // Invoke the base implementation first. This should mark context.ExceptionHandled = true         // which stops ASP.NET from producing a "yellow screen of death." This also sets the         // Http StatusCode to 500 (internal server error.)         //         // Assuming Custom Errors aren't off, the base implementation will trigger the application         // to ultimately render the "Error" view from one of the following locations:         //         //      1. ~/Views/Controller/Error.aspx         //      2. ~/Views/Controller/Error.ascx         //      3. ~/Views/Shared/Error.aspx         //      4. ~/Views/Shared/Error.ascx         //         // "Error" is the default view, however, a specific view may be provided as an Attribute property.         // A notable point is the Custom Errors defaultRedirect is not considered in the redirection plan.         base.OnException(context);           var e = context.Exception;                  // If the exception is unhandled, simply return and let Elmah handle the unhandled exception.         // Otherwise, try to use error signaling which involves the fully configured pipeline like logging,         // mailing, filtering and what have you). Failing that, see if the error should be filtered.         // If not, the error simply logged the exception.         if (!context.ExceptionHandled                || RaiseErrorSignal(e)                   || IsFiltered(context))                  return;           LogException(e); // FYI. Simple Elmah logging doesn't handle mail notifications.     }
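To actually put the subclassed attribute to work, it needs to be registered. Here is a quick sketch, assuming the LogAndHandleErrorAttribute class above; note that global filters require ASP.NET MVC 3 or later, while the per-controller form works on earlier versions too:

// Option 1: register globally in Global.asax.cs (ASP.NET MVC 3+).
public static void RegisterGlobalFilters(GlobalFilterCollection filters)
{
    filters.Add(new LogAndHandleErrorAttribute());
}

// Option 2: apply to an individual controller or action.
[LogAndHandleError]
public class HomeController : Controller
{
    public ActionResult Index()
    {
        return View();
    }
}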

    Read the article

  • Windows Azure: Import/Export Hard Drives, VM ACLs, Web Sockets, Remote Debugging, Continuous Delivery, New Relic, Billing Alerts and More

    - by ScottGu
    Two weeks ago we released a giant set of improvements to Windows Azure, as well as a significant update of the Windows Azure SDK. This morning we released another massive set of enhancements to Windows Azure.  Today’s new capabilities include: Storage: Import/Export Hard Disk Drives to your Storage Accounts HDInsight: General Availability of our Hadoop Service in the cloud Virtual Machines: New VM Gallery, ACL support for VIPs Web Sites: WebSocket and Remote Debugging Support Notification Hubs: Segmented customer push notification support with tag expressions TFS & GIT: Continuous Delivery Support for Web Sites + Cloud Services Developer Analytics: New Relic support for Web Sites + Mobile Services Service Bus: Support for partitioned queues and topics Billing: New Billing Alert Service that sends emails notifications when your bill hits a threshold you define All of these improvements are now available to use immediately (note that some features are still in preview).  Below are more details about them. Storage: Import/Export Hard Disk Drives to Windows Azure I am excited to announce the preview of our new Windows Azure Import/Export Service! The Windows Azure Import/Export Service enables you to move large amounts of on-premises data into and out of your Windows Azure Storage accounts. It does this by enabling you to securely ship hard disk drives directly to our Windows Azure data centers. Once we receive the drives we’ll automatically transfer the data to or from your Windows Azure Storage account.  This enables you to import or export massive amounts of data more quickly and cost effectively (and not be constrained by available network bandwidth). Encrypted Transport Our Import/Export service provides built-in support for BitLocker disk encryption – which enables you to securely encrypt data on the hard drives before you send it, and not have to worry about it being compromised even if the disk is lost/stolen in transit (since the content on the transported hard drives is completely encrypted and you are the only one who has the key to it).  The drive preparation tool we are shipping today makes setting up bitlocker encryption on these hard drives easy. How to Import/Export your first Hard Drive of Data You can read our Getting Started Guide to learn more about how to begin using the import/export service.  You can create import and export jobs via the Windows Azure Management Portal as well as programmatically using our Server Management APIs. It is really easy to create a new import or export job using the Windows Azure Management Portal.  Simply navigate to a Windows Azure storage account, and then click the new Import/Export tab now available within it (note: if you don’t have this tab make sure to sign-up for the Import/Export preview): Then click the “Create Import Job” or “Create Export Job” commands at the bottom of it.  This will launch a wizard that easily walks you through the steps required: For more comprehensive information about Import/Export, refer to Windows Azure Storage team blog.  You can also send questions and comments to the [email protected] email address. We think you’ll find this new service makes it much easier to move data into and out of Windows Azure, and it will dramatically cut down the network bandwidth required when working on large data migration projects.  We hope you like it. HDInsight: 100% Compatible Hadoop Service in the Cloud Last week we announced the general availability release of Windows Azure HDInsight. 
HDInsight is a 100% compatible Hadoop service that allows you to easily provision and manage Hadoop clusters for big data processing in Windows Azure.  This release is now live in production, backed by an enterprise SLA, supported 24x7 by Microsoft Support, and is ready to use for production scenarios. HDInsight allows you to use Apache Hadoop tools, such as Pig and Hive, to process large amounts of data in Windows Azure Blob Storage. Because data is stored in Windows Azure Blob Storage, you can choose to dynamically create Hadoop clusters only when you need them, and then shut them down when they are no longer required (since you pay only for the time the Hadoop cluster instances are running this provides a super cost effective way to use them).  You can create Hadoop clusters using either the Windows Azure Management Portal (see below) or using our PowerShell and Cross Platform Command line tools: The import/export hard drive support that came out today is a perfect companion service to use with HDInsight – the combination allows you to easily ingest, process and optionally export a limitless amount of data.  We’ve also integrated HDInsight with our Business Intelligence tools, so users can leverage familiar tools like Excel in order to analyze the output of jobs.  You can find out more about how to get started with HDInsight here. Virtual Machines: VM Gallery Enhancements Today’s update of Windows Azure brings with it a new Virtual Machine gallery that you can use to create new VMs in the cloud.  You can launch the gallery by doing New->Compute->Virtual Machine->From Gallery within the Windows Azure Management Portal: The new Virtual Machine Gallery includes some nice enhancements that make it even easier to use: Search: You can now easily search and filter images using the search box in the top-right of the dialog.  For example, simply type “SQL” and we’ll filter to show those images in the gallery that contain that substring. Category Tree-view: Each month we add more built-in VM images to the gallery.  You can continue to browse these using the “All” view within the VM Gallery – or now quickly filter them using the category tree-view on the left-hand side of the dialog.  For example, by selecting “Oracle” in the tree-view you can now quickly filter to see the official Oracle supplied images. MSDN and Supported checkboxes: With today’s update we are also introducing filters that makes it easy to filter out types of images that you may not be interested in. The first checkbox is MSDN: using this filter you can exclude any image that is not part of the Windows Azure benefits for MSDN subscribers (which have highly discounted pricing - you can learn more about the MSDN pricing here). The second checkbox is Supported: this filter will exclude any image that contains prerelease software, so you can feel confident that the software you choose to deploy is fully supported by Windows Azure and our partners. Sort options: We sort gallery images by what we think customers are most interested in, but sometimes you might want to sort using different views. So we’re providing some additional sort options, like “Newest,” to customize the image list for what suits you best. Pricing information: We now provide additional pricing information about images and options on how to cost effectively run them directly within the VM Gallery. The above improvements make it even easier to use the VM Gallery and quickly create launch and run Virtual Machines in the cloud. 
Virtual Machines: ACL Support for VIPs A few months ago we exposed the ability to configure Access Control Lists (ACLs) for Virtual Machines using Windows PowerShell cmdlets and our Service Management API. With today’s release, you can now configure VM ACLs using the Windows Azure Management Portal as well. You can now do this by clicking the new Manage ACL command in the Endpoints tab of a virtual machine instance: This will enable you to configure an ordered list of permit and deny rules to scope the traffic that can access your VM’s network endpoints. For example, if you were on a virtual network, you could limit RDP access to a Windows Azure virtual machine to only a few computers attached to your enterprise. Or if you weren’t on a virtual network you could alternatively limit traffic from public IPs that can access your workloads: Here is the default behaviors for ACLs in Windows Azure: By default (i.e. no rules specified), all traffic is permitted. When using only Permit rules, all other traffic is denied. When using only Deny rules, all other traffic is permitted. When there is a combination of Permit and Deny rules, all other traffic is denied. Lastly, remember that configuring endpoints does not automatically configure them within the VM if it also has firewall rules enabled at the OS level.  So if you create an endpoint using the Windows Azure Management Portal, Windows PowerShell, or REST API, be sure to also configure your guest VM firewall appropriately as well. Web Sites: Web Sockets Support With today’s release you can now use Web Sockets with Windows Azure Web Sites.  This feature enables you to easily integrate real-time communication scenarios within your web based applications, and is available at no extra charge (it even works with the free tier).  Higher level programming libraries like SignalR and socket.io are also now supported with it. You can enable Web Sockets support on a web site by navigating to the Configure tab of a Web Site, and by toggling Web Sockets support to “on”: Once Web Sockets is enabled you can start to integrate some really cool scenarios into your web applications.  Check out the new SignalR documentation hub on www.asp.net to learn more about some of the awesome scenarios you can do with it. Web Sites: Remote Debugging Support The Windows Azure SDK 2.2 we released two weeks ago introduced remote debugging support for Windows Azure Cloud Services. With today’s Windows Azure release we are extending this remote debugging support to also work with Windows Azure Web Sites. With live, remote debugging support inside of Visual Studio, you are able to have more visibility than ever before into how your code is operating live in Windows Azure. It is now super easy to attach the debugger and quickly see what is going on with your application in the cloud. Remote Debugging of a Windows Azure Web Site using VS 2013 Enabling the remote debugging of a Windows Azure Web Site using VS 2013 is really easy.  Start by opening up your web application’s project within Visual Studio. Then navigate to the “Server Explorer” tab within Visual Studio, and click on the deployed web-site you want to debug that is running within Windows Azure using the Windows Azure->Web Sites node in the Server Explorer.  Then right-click and choose the “Attach Debugger” option on it: When you do this Visual Studio will remotely attach the debugger to the Web Site running within Windows Azure.  
The debugger will then stop the web site’s execution when it hits any breakpoints that you have set within your web application’s project inside Visual Studio. For example, below I set a breakpoint on the “ViewBag.Message” assignment statement within the HomeController of the standard ASP.NET MVC project template. When I hit refresh on the “About” page of the web site within the browser, the breakpoint was triggered and I am now able to debug the app remotely using Visual Studio: Note above how we can debug variables (including autos/watchlist/etc), as well as use the Immediate and Command Windows. In the debug session above I used the Immediate Window to explore some of the request object state, as well as to dynamically change the ViewBag.Message property. When we click the “Continue” button (or press F5) the app will continue execution and the Web Site will render the content back to the browser. This makes it super easy to debug web apps remotely. Tips for Better Debugging To get the best experience while debugging, we recommend publishing your site using the Debug configuration within Visual Studio’s Web Publish dialog. This will ensure that debug symbol information is uploaded to the Web Site, which will enable a richer debug experience within Visual Studio. You can find this option on the Web Publish dialog on the Settings tab: When you ultimately deploy/run the application in production we recommend using the “Release” configuration setting – the release configuration is memory optimized and will provide the best production performance. To learn more about diagnosing and debugging Windows Azure Web Sites read our new Troubleshooting Windows Azure Web Sites in Visual Studio guide. Notification Hubs: Segmented Push Notification support with tag expressions In August we announced the General Availability of Windows Azure Notification Hubs - a powerful Mobile Push Notifications service that makes it easy to send high-volume push notifications with low latency from any mobile app back-end. Notification hubs can be used with any mobile app back-end (including ones built using our Mobile Services capability) and can also be used with back-ends that run in the cloud as well as on-premises. Beginning with the initial release, Notification Hubs allowed developers to send personalized push notifications to both individual users as well as groups of users by interest, by associating their devices with tags representing the logical target of the notification. For example, by registering all devices of customers interested in a favorite MLB team with a corresponding tag, it is possible to broadcast one message to millions of Boston Red Sox fans and another message to millions of St. Louis Cardinals fans, each with a single API call. New support for using tag expressions to enable advanced customer segmentation With today’s release we are adding support for even more advanced customer targeting. You can now identify customers that you want to send push notifications to by defining rich tag expressions. With tag expressions, you can now not only broadcast notifications to Boston Red Sox fans, but take that segmenting a step further and reach more granular segments. This opens up a variety of scenarios, for example: Offers based on multiple preferences—e.g. send a game day vegetarian special to users tagged as both a Boston Red Sox fan AND a vegetarian Push content to multiple segments in a single message—e.g.
rain delay information only to users who are tagged as either a Boston Red Sox fan OR a St. Louis Cardinal fan Avoid presenting subsets of a segment with irrelevant content—e.g. season ticket availability reminder to users who are tagged as a Boston Red Sox fan but NOT also a season ticket holder To illustrate with code, consider a restaurant chain app that sends an offer related to a Red Sox vs Cardinals game for users in Boston. Devices can be tagged by your app with location tags (e.g. “Loc:Boston”) and interest tags (e.g. “Follows:RedSox”, “Follows:Cardinals”), and then a notification can be sent by your back-end to “(Follows:RedSox || Follows:Cardinals) && Loc:Boston” in order to deliver an offer to all devices in Boston that follow either the RedSox or the Cardinals. This can be done directly in your server backend send logic using the code below: var notification = new WindowsNotification(messagePayload); hub.SendNotificationAsync(notification, "(Follows:RedSox || Follows:Cardinals) && Loc:Boston"); In your expressions you can use all Boolean operators: AND (&&), OR (||), and NOT (!).  Some other cool use cases for tag expressions that are now supported include: Social: To “all my group except me” - group:id && !user:id Events: Touchdown event is sent to everybody following either team or any of the players involved in the action: Followteam:A || Followteam:B || followplayer:1 || followplayer:2 … Hours: Send notifications at specific times. E.g. Tag devices with time zone and when it is 12pm in Seattle send to: GMT8 && follows:thaifood Versions and platforms: Send a reminder to people still using your first version for Android - version:1.0 && platform:Android For help on getting started with Notification Hubs, visit the Notification Hub documentation center.  Then download the latest NuGet package (or use the Notification Hubs REST APIs directly) to start sending push notifications using tag expressions.  They are really powerful and enable a bunch of great new scenarios. TFS & GIT: Continuous Delivery Support for Web Sites + Cloud Services With today’s Windows Azure release we are making it really easy to enable continuous delivery support with Windows Azure and Team Foundation Services.  Team Foundation Services is a cloud based offering from Microsoft that provides integrated source control (with both TFS and Git support), build server, test execution, collaboration tools, and agile planning support.  It makes it really easy to setup a team project (complete with automated builds and test runners) in the cloud, and it has really rich integration with Visual Studio. With today’s Windows Azure release it is now really easy to enable continuous delivery support with both TFS and Git based repositories hosted using Team Foundation Services.  This enables a workflow where when code is checked in, built successfully on an automated build server, and all tests pass on it – I can automatically have the app deployed on Windows Azure with zero manual intervention or work required. The below screen-shots demonstrate how to quickly setup a continuous delivery workflow to Windows Azure with a Git-based ASP.NET MVC project hosted using Team Foundation Services. Enabling Continuous Delivery to Windows Azure with Team Foundation Services The project I’m going to enable continuous delivery with is a simple ASP.NET MVC project whose source code I’m hosting using Team Foundation Services.  
I did this by creating a “SimpleContinuousDeploymentTest” repository there using Git – and then used the new built-in Git tooling support within Visual Studio 2013 to push the source code to it. Below is a screenshot of the Git repository hosted within Team Foundation Services: I can access the repository within Visual Studio 2013 and easily make commits with it (as well as branch, merge and do other tasks). Using VS 2013 I can also set up automated builds to take place in the cloud using Team Foundation Services every time someone checks in code to the repository: The cool thing about this is that I don’t have to buy or rent my own build server – Team Foundation Services automatically maintains its own build server farm and can automatically queue up a build for me (for free) every time someone checks in code using the above settings. This build server (and automated testing) support now works with both TFS and Git based source control repositories. Connecting a Team Foundation Services project to Windows Azure Once I have a source repository hosted in Team Foundation Services with Automated Builds and Testing set up, I can then go even further and set it up so that it will be automatically deployed to Windows Azure when a source code commit is made to the repository (assuming the Build + Tests pass). Enabling this is now really easy. To set this up with a Windows Azure Web Site simply use the New->Compute->Web Site->Custom Create command inside the Windows Azure Management Portal. This will create a dialog like below. I gave the web site a name and then made sure the “Publish from source control” checkbox was selected: When we click next we’ll be prompted for the location of the source repository. We’ll select “Team Foundation Services”: Once we do this we’ll be prompted for our Team Foundation Services account that our source repository is hosted under (in this case my TFS account is “scottguthrie”): When we click the “Authorize Now” button we’ll be prompted to give Windows Azure permissions to connect to the Team Foundation Services account. Once we do this we’ll be prompted to pick the source repository we want to connect to. Starting with today’s Windows Azure release you can now connect to both TFS and Git based source repositories. This new support allows me to connect to the “SimpleContinuousDeploymentTest” repository we created earlier: Clicking the finish button will then create the Web Site with the continuous delivery hooks set up with Team Foundation Services. Now every time someone pushes source code to the repository in Team Foundation Services, it will kick off an automated build, run all of the unit tests in the solution, and if they pass, the app will be automatically deployed to our Web Site in Windows Azure. You can monitor the history and status of these automated deployments using the Deployments tab within the Web Site: This enables a really slick continuous delivery workflow, and enables you to build and deploy apps in a really nice way. Developer Analytics: New Relic support for Web Sites + Mobile Services With today’s Windows Azure release we are making it really easy to enable Developer Analytics and Monitoring support with both Windows Azure Web Sites and Windows Azure Mobile Services. We are partnering with New Relic, who provide a great dev analytics and app performance monitoring offering, to enable this - and we have updated the Windows Azure Management Portal to make it really easy to configure.
Enabling New Relic with a Windows Azure Web Site Enabling New Relic support with a Windows Azure Web Site is now really easy. Simply navigate to the Configure tab of a Web Site and scroll down to the “developer analytics” section that is now within it: Clicking the “add-on” button will display some additional UI. If you don’t already have a New Relic subscription, you can click the “view windows azure store” button to obtain a subscription (note: New Relic has a perpetually free tier so you can enable it even without paying anything): Clicking the “view windows azure store” button will launch the integrated Windows Azure Store experience we have within the Windows Azure Management Portal. You can use this to browse from a variety of great add-on services – including New Relic: Select “New Relic” within the dialog above, then click the next button, and you’ll be able to choose which type of New Relic subscription you wish to purchase. For this demo we’ll simply select the “Free Standard Version” – which does not cost anything and can be used forever: Once we’ve signed up for our New Relic subscription and added it to our Windows Azure account, we can go back to the Web Site’s configuration tab and choose to use the New Relic add-on with our Windows Azure Web Site. We can do this by simply selecting it from the “add-on” dropdown (it is automatically populated within it once we have a New Relic subscription in our account): Clicking the “Save” button will then cause the Windows Azure Management Portal to automatically populate all of the needed New Relic configuration settings to our Web Site: Deploying the New Relic Agent as part of a Web Site The final step to enable developer analytics using New Relic is to add the New Relic runtime agent to our web app. We can do this within Visual Studio by right-clicking on our web project and selecting the “Manage NuGet Packages” context menu: This will bring up the NuGet package manager. You can search for “New Relic” within it to find the New Relic agent. Note that there is both a 32-bit and 64-bit edition of it – make sure to install the version that matches how your Web Site is running within Windows Azure (note: you can configure your Web Site to run in either 32-bit or 64-bit mode using the Web Site’s “Configuration” tab within the Windows Azure Management Portal): Once we install the NuGet package we are all set to go. We’ll simply re-publish the web site to Windows Azure and New Relic will then automatically start monitoring the application. Monitoring a Web Site using New Relic Now that the application has developer analytics support with New Relic enabled, we can launch the New Relic monitoring portal to start monitoring the health of it. We can do this by clicking on the “Add Ons” tab on the left-hand side of the Windows Azure Management Portal. Then select the New Relic add-on we signed up for within it. The Windows Azure Management Portal will provide some default information about the add-on when we do this. Clicking the “Manage” button in the tray at the bottom will launch a new browser tab and single-sign us into the New Relic monitoring portal associated with our account: When we do this a new browser tab will launch with the New Relic admin tool loaded within it: We can now see insights into how our app is performing – without having to have written a single line of monitoring code.
The New Relic service provides a ton of great built-in monitoring features allowing us to quickly see: Performance times (including browser rendering speed) for the overall site and individual pages. You can optionally set alert thresholds to trigger if the speed does not meet a threshold you specify. Information about where in the world your customers are hitting the site from (and how performance varies by region) Details on the latency performance of external services your web apps are using (for example: SQL, Storage, Twitter, etc) Error information including call stack details for exceptions that have occurred at runtime SQL Server profiling information – including which queries executed against your database and what their performance was And a whole bunch more… The cool thing about New Relic is that you don’t need to write monitoring code within your application to get all of the above reports (plus a lot more). The New Relic agent automatically enables the CLR profiler within applications and automatically captures the information necessary to identify these. This makes it super easy to get started and immediately have a rich developer analytics view for your solutions with very little effort. If you haven’t tried New Relic out yet with Windows Azure I recommend you do so – I think you’ll find it helps you build even better cloud applications. Following the above steps will help you get started and deliver you a really good application monitoring solution in only minutes. Service Bus: Support for partitioned queues and topics With today’s release, we are enabling support within Service Bus for partitioned queues and topics. Enabling partitioning allows you to achieve a higher message throughput and better availability from your queues and topics. Higher message throughput is achieved by implementing multiple message brokers for each partitioned queue and topic. The multiple messaging stores also provide higher availability. You can create a partitioned queue or topic by simply checking the Enable Partitioning option in the custom create wizard for a Queue or Topic (a short code sketch of doing the same programmatically appears just before the summary below): Read this article to learn more about partitioned queues and topics and how to take advantage of them today. Billing: New Billing Alert Service Today’s Windows Azure update enables a new Billing Alert Service Preview that enables you to get proactive email notifications when your Windows Azure bill goes above a certain monetary threshold that you configure. This makes it easier to manage your bill and avoid potential surprises at the end of the month. With the Billing Alert Service Preview, you can now create email alerts to monitor and manage your monetary credits or your current bill total. To set up an alert, first sign up for the free Billing Alert Service Preview. Then visit the account management page, click on a subscription you have set up, and then navigate to the new Alerts tab that is available: The alerts tab allows you to set up email alerts that will be sent automatically once a certain threshold is hit. For example, by clicking the “add alert” button above I can set up a rule to send myself email anytime my Windows Azure bill goes above $100 for the month: The Billing Alert Service will evolve to support additional aspects of your bill as well as support multiple forms of alerts such as SMS. Try out the new Billing Alert Service Preview today and give us feedback.
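As referenced in the Service Bus section above, here is a minimal sketch of creating a partitioned queue from code. It assumes the Microsoft.ServiceBus NuGet package (version 2.2 or later); the connection string and queue name shown are placeholders, not values from the original post:

using System;
using Microsoft.ServiceBus;
using Microsoft.ServiceBus.Messaging;

class CreatePartitionedQueue
{
    static void Main()
    {
        // Placeholder connection string - use the one from your own Service Bus namespace.
        string connectionString = "Endpoint=sb://yournamespace.servicebus.windows.net/;SharedAccessKeyName=RootManageSharedAccessKey;SharedAccessKey=yourkey";
        NamespaceManager namespaceManager = NamespaceManager.CreateFromConnectionString(connectionString);

        // EnablePartitioning spreads the queue across multiple message brokers and
        // messaging stores, which is what provides the higher throughput and availability.
        QueueDescription description = new QueueDescription("ordersqueue") { EnablePartitioning = true };

        if (!namespaceManager.QueueExists(description.Path))
        {
            namespaceManager.CreateQueue(description);
        }

        Console.WriteLine("Partitioned queue created: " + description.Path);
    }
}

The same EnablePartitioning flag also exists on TopicDescription if you want a partitioned topic instead of a queue.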
Summary Today’s Windows Azure release enables a ton of great new scenarios, and makes building applications hosted in the cloud even easier. If you don’t already have a Windows Azure account, you can sign-up for a free trial and start using all of the above features today.  Then visit the Windows Azure Developer Center to learn more about how to build apps with it. Hope this helps, Scott P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

    Read the article

  • What’s New for Oracle Commerce? Executive QA with John Andrews, VP Product Management, Oracle Commerce

    - by Katrina Gosek
Oracle Commerce was for the fifth time positioned as a leader by Gartner in the Magic Quadrant for E-Commerce. This inspired me to sit down with Oracle Commerce VP of Product Management, John Andrews, to get his perspective on what continues to make Oracle a leader in the industry and what’s new for Oracle Commerce in 2013. Q: Why do you believe Oracle Commerce continues to be a leader in the industry? John: Oracle has a great acquisition strategy – it brings best-of-breed technologies into the product fold and then continues to grow and innovate them. This is particularly true with products unified into the Oracle Commerce brand. Oracle acquired ATG in late 2010 – and then Endeca in late 2011. This means that under the hood of Oracle Commerce you have market-leading technologies for cross-channel commerce and customer experience, both designed and developed in direct response to the unique challenges online businesses face. And we continue to innovate on capabilities core to what our customers need to be successful – contextual and personalized experience delivery, merchant-inspired tools, and architecture for performance and scalability. Q: It’s not a slow-moving industry. What are you doing to keep the pace of innovation at Oracle Commerce? John: Oracle owes our customers the most innovative commerce capabilities. By unifying the core components of ATG and Endeca we are delivering on this promise. Oracle Commerce is continuing to innovate and redefine how commerce is done, in a way that drives business results and keeps customers coming back for experiences tailored just for them. Our January and May 2013 releases not only marked the seventh significant release for the solution since the acquisitions of ATG and Endeca, but also demonstrated rapid and significant progress on the unification of the commerce and customer experience capabilities of the two commerce technologies. Q: Can you tell us what was notable about these latest releases under the Oracle Commerce umbrella? John: Specifically, our latest product innovations give businesses selling online the ability to get to market faster with more personalized commerce experiences in the following ways: Mobile: the latest Commerce Reference Application in this release offers a wider range of examples for online businesses to leverage for iOS development and specifically new iPad reference capabilities. This release marks the first release of the iOS Universal application that serves both iPhone and iPad devices from a single download or binary. Business users can now drive page content management and layout of search results and category pages, as well as create additional storefront elements such as categories, facets / dimensions, and breadcrumbs through Experience Manager tools. Cross-Channel Commerce: key commerce platform capabilities have been added to support cross-channel commerce, including an expanded inventory model to maintain inventory for stores, pickup in stores and Web-based returns. Online businesses with in-store operations can now offer advanced shipping options on the web and make returns and exchange logic easily available on the web. Multi-Site Capabilities: significant enhancements to the Commerce Platform multi-site architecture allow business users to quickly launch and manage multiple sites on the same cluster and share data, carts, and other components.
First introduced in 2010, with this latest release business users can now partition or share customer profiles, control users’ site-based access, and manage personalization assets using site groups. Internationalization: continued language support and enhancements for business user tools as well as search and navigation. Guided Search now supports 35 total languages, with 11 new languages (including Danish, Arabic, Norwegian, Serbian Cyrillic) added in this release. Commerce Platform tools now include localized support for 17 locales with 4 new languages (Danish, Portuguese (European), Finnish, and Thai). No development or customization is required in order for business users to use the applications in any of these supported languages. Business Tool Experience: valuable new Commerce Merchandising features include a new workflow for making emergency changes quickly and increased visibility into promotion rules and qualifications in preview mode. Oracle Commerce business tools continue to become more and more feature-rich to provide intuitive, easy-to-use (yet powerful) capabilities that allow business users to manage content and the shopping experience. Commerce & Experience Unification: demonstrable unification of commerce and customer experience capabilities includes productized cartridges that provide supported integration between the Commerce Platform and Experience Management tools, cross-channel returns, Oracle Service Cloud integration, and an integrated iPad application. The mission guiding our product development is to deliver differentiated, personalized user experiences across any device in a contextual manner – and to give the business the best tools to tune and optimize those user experiences to meet their business objectives. We also need to do this in a way that makes it operationally efficient for the business, keeping the overall total cost of ownership low – yet also allows the business to expand, whether it be to new business models, geographies or brands. To learn more about the latest Oracle Commerce releases and mission, visit the links below: • Hear more from John about the Oracle Commerce mission • Hear from Oracle Commerce customers • Documentation on the new releases • Listen to the Oracle ATG Commerce 10.2 Webcast • Listen to the Oracle Endeca Commerce 3.1.2 Webcast

    Read the article

  • Taking advantage of Windows Azure CDN and Dynamic Pages in ASP.NET - Caching content from hosted services

    - by Shawn Cicoria
With the updates to Windows Azure CDN announced this week [1] I wanted to help illustrate the capability with a working sample that will serve up dynamic content from an ASP.NET site hosted in a WebRole. First, to get a good overview of the capability you can read the Overview of the Windows Azure CDN [2] content on MSDN. When you set up the ability to cache content from a hosted service, the requirement is to provide a path to your role’s DNS endpoint that ends in the path “/cdn”. Additionally, you then map CDN to that service. What WAZ CDN does is allow you to then map that through the CDN to your host. The CDN will then make a request to your host on your client’s behalf. The requirement is still that your client, and any Urls that are to be serviced through the CDN and this capability, have to use the CDN DNS name and not your host – no different than what CDN does for Blob storage. The following 2 URLs are samples of how the client needs to issue the requests. Windows Azure hosted service URL: http://myHostedService.cloudapp.net/cdn/music.aspx - for regular “dynamic” content Windows Azure CDN URL: http://<identifier>.vo.msecnd.net/music.aspx - for CDN “cachable” content. The first URL paths the request directly to your host in the Azure datacenter. The 2nd URL paths the request through the CDN infrastructure, where the CDN will make the determination to request the content on behalf of the client from the Azure datacenter and your host on the /cdn path. The big advantage here is you can apply logic to your content creation. What’s important is emitting the CDN-friendly headers that allow the CDN to request and re-request only when you designate, based upon its rules of “staleness” as described in the overview page. With IIS 7.5 there is an underlying issue when the Managed Module “OutputCache” is enabled: in order to emit a good header for your content you’ll need to remove that module, which in my sample helps provide CDN-friendly headers. You get IIS 7.5 when running under OS Family “2” in your service configuration. By default, when the OutputCache managed module is loaded, if you use the HttpResponse.CachePolicy to set the Http Headers for “max-age” when the HttpCacheability is “Public”, you will NOT get the “max-age” emitted as part of the “Cache-control:” header. Instead, the OutputCache module will remove “max-age” and just emit “public”. It works OK when Cacheability is set to “private”. To work around the issue and ensure the code below emits the full max-age along with the public option, you need to remove the module as follows: <system.webServer>   <modules runAllManagedModulesForAllRequests="true">     <remove name="OutputCache"/>   </modules> </system.webServer>   Response.Cache.SetCacheability(HttpCacheability.Public); Response.Cache.SetMaxAge(TimeSpan.FromMinutes(rv));   In the attached solution, the way I approached it was to have a VirtualApplication under the root site that has its own web.config - this VirtualApplication is the /cdn of the site and when deployed to Azure as a Web Role will surface as a distinct IIS Application – along with a separate AppDomain. The CDN Sample is a simple Web Forms site whose /default landing page contains 3 IFrames to host: 1. Content direct from the host @ http://xxxx.cloudapp.net/cdn 2. Content via the CDN @ http://azxxx.vo.msecnd.net 3. A simple list of recent requests – showing where the request came from.
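To tie the configuration and code fragments above together, here is a minimal sketch of what the code-behind for a page such as /cdn/music.aspx might look like. The class name, page name, and 5-minute max-age value are illustrative assumptions, not the exact contents of the attached solution:

using System;
using System.Web;
using System.Web.UI;

// Hypothetical code-behind for the /cdn/music.aspx page in the VirtualApplication.
public partial class Music : Page
{
    protected void Page_Load(object sender, EventArgs e)
    {
        // Emit CDN-friendly headers: public cacheability plus an explicit max-age,
        // so the CDN only re-requests the content from the host once it is considered stale.
        Response.Cache.SetCacheability(HttpCacheability.Public);
        Response.Cache.SetMaxAge(TimeSpan.FromMinutes(5));

        // Dynamic content - for example, show when the host generated this response.
        Response.Write("Generated by the host at " + DateTime.UtcNow.ToString("o"));
    }
}

Remember that this only produces the intended Cache-control header once the OutputCache managed module has been removed in the /cdn application’s web.config as shown above.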
When you run the sample and hit the page the first time, both the Host and the CDN will cause 2 initial requests to hit the host. You won’t see the first requests in the list because of timing – but if you refresh, you’ll see that the list shows you have 2 requests initially: 1. sourced directly from the browser to the HOST 2. sourced via the CDN The picture above shows the call-outs of each of those requests – green rows showing requests coming directly to the HOST, yellow showing the CDN request. The IP addresses of the green items come directly from the client, whereas the CDN request comes from the CDN data center. As you refresh the page (hit Ctrl+F5 to force a full refresh and avoid “304 – not changed”), you’ll see that the request to the HOST gets processed directly, but the request to the CDN endpoint is served directly from the CDN and doesn’t incur any additional request back to the HOST. The following are the headers from the CDN response (Status-Line) HTTP/1.1 200 OK Age 13 Cache-Control public, max-age=300 Connection keep-alive Content-Length 6212 Content-Type image/jpeg; charset=utf-8 Date Fri, 11 Mar 2011 20:47:14 GMT Expires Fri, 11 Mar 2011 20:52:01 GMT Last-Modified Fri, 11 Mar 2011 20:47:02 GMT Server Microsoft-IIS/7.5 X-AspNet-Version 4.0.30319 X-Powered-By ASP.NET The following are the headers from the HOST response (Status-Line) HTTP/1.1 200 OK Cache-Control public, max-age=300 Content-Length 6189 Content-Type image/jpeg; charset=utf-8 Date Fri, 11 Mar 2011 20:47:15 GMT Last-Modified Fri, 11 Mar 2011 20:47:02 GMT Server Microsoft-IIS/7.5 X-AspNet-Version 4.0.30319 X-Powered-By ASP.NET You can see that with the CDN request, the countdown (Age) starts for aging the content. The full sample is located here: CDNSampleSite.zip [1] http://blogs.msdn.com/b/windowsazure/archive/2011/03/09/now-available-updated-windows-azure-sdk-and-windows-azure-management-portal.aspx [2] http://msdn.microsoft.com/en-us/library/ff919703.aspx

    Read the article

  • Tweaking a few URL validation settings on ASP.NET v4.0

    - by Carlyle Dacosta
ASP.NET has a few default settings for URLs out of the box. These can be configured quite easily in the web.config file within the <system.web>/<httpRuntime> configuration section. Some of these are: <httpRuntime maxUrlLength=”<number here>” This number should be an integer value (defaults to 260 characters). The value must be greater than or equal to zero, though obviously small values will lead to an unusable website. This attribute gates the length of the Url without the query string. <httpRuntime maxQueryStringLength=”<number here>”. This number should be an integer value (defaults to 2048 characters). The value must be greater than or equal to zero, though obviously small values will lead to an unusable website. <httpRuntime requestPathInvalidCharacters=”List of characters you need included in ASP.NET’s validation checks” /> By default the characters are “<,>,*,%,&,:,\,?”. However, one can easily change this by modifying web.config. Remember, these characters can be specified in a variety of formats. For example, I want the character ‘!’ to be included in ASP.NET’s URL validation logic. So I set the following: <httpRuntime requestPathInvalidCharacters=”<,>,*,%,&,:,\,?,!”. A character could also be specified in its xml encoded form (‘&lt;’ would mean the ‘<’ sign). I could specify the ‘!’ in its xml encoded unicode format such as requestPathInvalidCharacters=”<,>,*,%,&,:,\,?,&#x0021;” or I could specify it in its unicode encoded form, in the “<,>,*,%,&,:,\,?,%u0021” format. The following settings can be applied at the Root Web.Config level, App Web.config level, Folder level or within a location tag: <location path="some path here"> <system.web> <httpRuntime maxUrlLength="" maxQueryStringLength="" requestPathInvalidCharacters="" /> If a request violates any of the above settings, an Http 400 “Bad Request” HttpException is thrown. These can be easily handled in the Application_Error handler in Global.asax (a minimal handler sketch appears at the end of this post). Also, a new attribute in <httpRuntime /> called “relaxedUrlToFileSystemMapping” has been added with a default of false. <httpRuntime … relaxedUrlToFileSystemMapping="true|false" /> When the relaxedUrlToFileSystemMapping attribute is set to false, inbound Urls still need to be valid NTFS file paths. For example, Urls (sans query string) need to be less than 260 characters; no path segment within a Url can use old-style DOS device names (LPT1, COM1, etc…); Urls must be valid Windows file paths. A url like “http://digg.com/http://cnn.com” should work with this attribute set to true (of course a few characters will need to be unblocked by removing them from requestPathInvalidCharacters="" above). Managed configuration for non-NTFS-compliant Urls is determined from the first valid configuration path found when walking up the path segments of the Url.
For example, if the request Url is "/foo/bar/baz/<blah>data</blah>", and there is a web.config in the "/foo/bar" directory, then the managed configuration for the request comes from merging the configuration hierarchy to include the web.config from "/foo/bar". The value of the public property HttpRequest.PhysicalPath is set to [physical file path of the application root] + "REQUEST_URL_IS_NOT_A_VALID_FILESYSTEM_PATH". For example, given a request Url like "/foo/bar/baz/<blah>data</blah>", where the application root is "/foo/bar" and the physical file path for that root is "c:\inetpub\wwwroot\foo\bar", then PhysicalPath would be "c:\inetpub\wwwroot\foo\bar\ REQUEST_URL_IS_NOT_A_VALID_FILESYSTEM_PATH".
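As mentioned above, the Http 400 “Bad Request” HttpException raised when a request violates these URL settings can be handled centrally in Global.asax. Here is a minimal, hypothetical Global.asax.cs sketch; the response text and handling policy are illustrative choices, not something prescribed by ASP.NET:

using System;
using System.Web;

public class Global : HttpApplication
{
    protected void Application_Error(object sender, EventArgs e)
    {
        // Violations of maxUrlLength, maxQueryStringLength or requestPathInvalidCharacters
        // surface as an HttpException whose status code is 400 (Bad Request).
        HttpException httpException = Server.GetLastError() as HttpException;
        if (httpException != null && httpException.GetHttpCode() == 400)
        {
            Server.ClearError();
            Response.StatusCode = 400;
            Response.Write("The requested URL was rejected by ASP.NET request validation.");
            // Optionally log httpException.Message here for diagnostics.
        }
    }
}

Any other unhandled exception simply falls through to the default error handling, so this only customizes the request-validation case.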

    Read the article

  • Book Review: Oracle ADF Real World Developer’s Guide

    - by Frank Nimphius
Recently PACKT Publishing published "Oracle ADF Real World Developer’s Guide" by Jobinesh Purushothaman, a product manager in our team. Though already the sixth book dedicated to Oracle ADF, it has a lot of great information in it that none of the previous books covered, making it a safe buy even for those who own the other books published by Oracle Press (McGraw-Hill) and PACKT Publishing. More than half of the "Oracle ADF Real World Developer’s Guide" book is dedicated to Oracle ADF Business Components, in a depth and clarity that lets you feel the expertise Jobinesh gained in this area. If you enjoy Jobinesh's blog (http://jobinesh.blogspot.co.uk/) about Oracle ADF, then, no matter how much of an ADF expert you are, this book will make you happy, as it provides you with the detailed information you always wished to have. If you are new to Oracle ADF, then this book alone doesn't get you flying, but, if you have some Java background, it greatly accelerates your learning. Chapter 1 is an introduction to Oracle ADF and not only explains the layers but also how it compares to plain Java EE solutions (page 13). If you are new to Oracle JDeveloper and ADF, then at the end of this chapter you know how to start JDeveloper and begin your ADF development. Chapter 2 starts with what Jobinesh really is good at: ADF Business Components. In this chapter you learn about the architecture ingredients of ADF Business Components: View Objects, View Links, Associations, Entities, Row Sets, Query Collections and Application Modules. This chapter also provides an introduction to ADFBC SDO services, as well as sequence diagrams for what happens when you execute queries or commit updates. Chapter 3 is dedicated to entity objects and is one of many chapters in this book you will enjoy and never want to miss. Jobinesh explains the artifacts that make up an entity object, how to work with entities and resource bundles, and many advanced topics, including inheritance, change history tracking, custom properties, validation and cursor handling. Chapter 4 - you guessed it - is all about View objects. Comparable to entities, you learn about the XML files and classes that make up a view object, as well as how to define and work with queries. List-of-values, inheritance, polymorphism, bind variables and data filtering are interesting - and important - topics that follow. Again the chapter provides helpful sequence diagrams for you to understand what happens internally within a view object. Chapter 5 focuses on advanced view object and entity object topics, like lifecycle callback methods and when you want to override them. This chapter is a good digest of Jobinesh's blog entries (which most ADF developers have in their bookmark list). Really worth reading! Chapter 6 then is about Application Modules. Besides explaining what application modules are, this chapter covers important topics like properties, passivation, activation, application module pooling, and how and where to write custom logic. In addition you learn about the AM lifecycle and request sequence. Chapter 7 is about the ADF binding layer. If you are new to Oracle ADF and got lost in the more advanced ADF Business Components chapters, then this chapter is where you get back into the game. In very easy terms, Jobinesh explains what the ADF binding is, how it fits into the JSF request lifecycle and what metadata files are involved. Chapter 8 then goes into building data-bound web user interfaces. In this chapter you get the basics of JavaServer Faces (e.g.
managed beans) and learn about the interaction between the JSF UI and the ADF binding layer. Later this chapter provides advanced solutions for working with tree components and lists of values. Chapter 9 introduces bounded task flows and the ADF controller. This is a chapter you want to read if you are new to ADF or have just started. Experts won't find much that is new here, which doesn't mean it is not worth reading (I, for example, enjoyed the controller discussion very much). Chapter 10 is an advanced coverage of bounded task flows and talks about contextual events. Chapter 11 is another highlight and explains error handling, trains, transactions and more. I can only recommend you read this chapter. I am aware of many documents that cover exception handling in Oracle ADF (and my Oracle Magazine article for January/February 2013 does the same), but none that covers it in such great depth. Chapter 12 covers ADF best practices, which is a great round-up of all the tips provided in this book (without Jobinesh repeating himself). It's all cool stuff that helps you with your ADF projects. In summary, "Oracle ADF Real World Developer’s Guide" by Jobinesh Purushothaman is a great book and a great addition for all Oracle ADF developers and those who want to become one. Frank

    Read the article

  • SQL SERVER – SSMS: Backup and Restore Events Report

    - by Pinal Dave
A DBA wears multiple hats and in fact does more than what meets the eye. One of the core tasks of a DBA is to take backups. This looks so trivial that most developers shrug it off as the only activity a DBA might be doing. I have huge respect for DBAs all around the world because, beyond all the scripting, automation and maintenance work that runs round the clock to keep the business working almost 365 days 24×7, their real worth shows on the day a system or hard disk crashes and you have an important delivery to make. That is when these backup tasks and maintenance jobs come in handy, and they are far less trivial than many consider them to be. So important questions like “When was the last backup taken?”, “How much time did the last backup take?” and “What type of backup was taken last?” are tricky to answer quickly – and this report lands the answers in a jiffy. The SSMS report we are talking about can be used to find backup and restore operations done for the selected database. Whenever we perform any backup or restore operation, the information is stored in the msdb database. This report uses that information to show the size, time taken and also the file location for those operations. Here is how this report can be launched. Once we launch this report, we can see 4 major sections shown as listed below. Average Time Taken For Backup Operations Successful Backup Operations Backup Operation Errors Successful Restore Operations Let us look at each section next. Average Time Taken For Backup Operations Information shown in the “Average Time Taken For Backup Operations” section is taken from the backupset table in the msdb database. Here is the query and the expanded version of that particular section: USE msdb; SELECT (ROW_NUMBER() OVER (ORDER BY t1.TYPE))%2 AS l1 , 1 AS l2 , 1 AS l3 , t1.TYPE AS [type] , (AVG(DATEDIFF(ss,backup_start_date, backup_finish_date)))/60.0 AS AverageBackupDuration FROM backupset t1 INNER JOIN sys.databases t3 ON ( t1.database_name = t3.name) WHERE t3.name = N'AdventureWorks2014' GROUP BY t1.TYPE ORDER BY t1.TYPE On my small database the time taken for the differential backup was less than a minute, hence the value of zero is displayed. This is an important piece of backup information which might help you in planning maintenance windows. Successful Backup Operations Here is the expanded version of this section. This information is derived from various backup tracking tables in the msdb database. Here is the simplified version of the query, which can be used separately as well: SELECT * FROM sys.databases t1 INNER JOIN backupset t3 ON (t3.database_name = t1.name) LEFT OUTER JOIN backupmediaset t5 ON ( t3.media_set_id = t5.media_set_id) LEFT OUTER JOIN backupmediafamily t6 ON ( t6.media_set_id = t5.media_set_id) WHERE (t1.name = N'AdventureWorks2014') ORDER BY backup_start_date DESC,t3.backup_set_id,t6.physical_device_name; The report does some calculations to show the data in a more readable format. For example, the backup size is shown in KB, MB or GB. I have expanded the first row by clicking the (+) on the “Device type” column, which shows the path of the physical backup file. Personally, looking at this section, the Backup Size, Device Type and Backup Name are the critical columns worth noting. As mentioned in the previous section, this section also has the Duration embedded inside it. Backup Operation Errors This section of the report gets data from the default trace. You might wonder how.
One of the events tracked by the default trace is “ErrorLog”. This means that whatever message is written to the errorlog gets written to the default trace file as well. Interestingly, whenever there is a backup failure, an error message is written to the ERRORLOG and hence to the default trace. This section takes advantage of that and shows the information. We can read the message below under this section, which confirms the above logic: No backup operations errors occurred for (AdventureWorks2014) database in the recent past or default trace is not enabled. Successful Restore Operations This section may not be very useful on a production server (how often do you restore a database there?) but might be useful in development and log shipping secondary environments, where we might be interested in seeing restore operations for a particular database. Here is the expanded version of the section. To fill this section of the report, I have restored the same backups which were taken to populate the earlier sections. Here is the simplified version of the query used to populate this output: USE msdb; SELECT * FROM restorehistory t1 LEFT OUTER JOIN restorefile t2 ON ( t1.restore_history_id = t2.restore_history_id) LEFT OUTER JOIN backupset t3 ON ( t1.backup_set_id = t3.backup_set_id) WHERE t1.destination_database_name = N'AdventureWorks2014' ORDER BY restore_date DESC, t1.restore_history_id,t2.destination_phys_name Have you ever looked at the backup strategy of your key databases? Are they in sync, and is there scope for improvement? Then this is the report to analyze after a week or month of maintenance plans running in your database. Do chime in with the strategies you are using in your environments. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Backup and Restore, SQL Query, SQL Server, SQL Server Management Studio, SQL Tips and Tricks, T SQL Tagged: SQL Reports

    Read the article
