Search Results

Search found 3601 results on 145 pages for 'variadic templates'.

  • Where do I define a group policy that will set a user's desktop background color to green the first time they log in?

    - by Tyler
    Servers: W2k8 R2 x64. Desktops: Win7 Pro x64. Our current group policy uses a custom ADM file to define certain properties of the desktop (Background Image (centered), Background Color is green (00 74 00)). This policy works for us, but the downside is that policies defined in our custom ADM are only applied after a GPUpdate /Force is applied. We would like these desktop theme settings to be applied the first time the user logs onto the computer. I've been working on a new policy that forces the computer to wait for the network when the user logs on, to handle folder redirection. The reason for writing the new policy was to resolve the issue that a user needs to run GPUpdate /Force the first time they log in, so it doesn't make sense for me to implement the new policy if there is still something that requires GPUpdate /Force to get the user into the state that we want. I've moved the setting for the background image out into Administrative Templates > Desktop > Desktop > "Desktop Wallpaper", so this is now being set properly when the user first logs in. Now I'm left with a black background until I force a group policy update. I have tried to play around with setting a default "Theme" and had limited success; this was not reliable enough to call a solution. I suppose I could set the background color with a script? Any thoughts? It feels like I'm missing something obvious, or that this should be much easier than it is.
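
    A logon script is one workable fallback for the color. Below is a minimal PowerShell sketch, an assumption rather than a known-good policy: it writes the green from the ADM (hex 00 74 00, i.e. "0 116 0" in the decimal R G B form the registry stores) to the standard Background value under HKCU\Control Panel\Colors. Windows reads this value at logon, so the very first session may still paint the old color until the next logon or an explicit SetSysColors call.

        # Hypothetical logon script: set the desktop background color to green (00 74 00 hex = "0 116 0" decimal)
        Set-ItemProperty -Path 'HKCU:\Control Panel\Colors' -Name 'Background' -Value '0 116 0'
        # Optionally clear any wallpaper override so the solid color shows through
        # Set-ItemProperty -Path 'HKCU:\Control Panel\Desktop' -Name 'Wallpaper' -Value ''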

  • User-friendly program to create editable & searchable PDF files, like tax & application forms and such

    - by Nick Gorbikoff
    Hello. Can somebody recommend a user-friendly program that will create (or convert from Excel, Word, or OpenOffice) editable PDF forms? You know, like the tax forms some of us have filled out, where you can create a form with a predefined format and stationery, but let the user edit/fill out the fields. I need something user-friendly that a regular person can use. I'm NOT looking for a PDF library (I already use wkhtmltopdf for generating PDFs programmatically). The reason is that we have about 400 documents (internal expense forms, training forms, etc.) in .doc and .xls format that we want to convert to editable PDFs (so that people don't have to fill them out by hand). Coding 400 templates and then converting them using some lib or command-line tool is not my idea of fun, especially since those forms change all the time. I'd like to just give HR and the Quality department the tool, so that they can maintain those documents. I looked at everything listed on this page ( http://www.cogniview.com/convert-pdf-to-excel/post/pdf-editing-creation-50-open-sourcefree-alternatives-to-adobe-acrobat/ ), but can't find what I need. Thank you!

  • How to get multipath working for Ubuntu Server 12.04

    - by mlampi
    I'm working on a project which aims to make use of Ubuntu servers running on enterprise-class hardware. In our case that means IBM HS23E blade servers, QLogic 4GB fibre channel extension cards and a quite old IBM DS4500 disk array with two controllers. At the moment fibre channel is our only boot option, and Ubuntu Server 12.04 installed just fine and is able to boot without multipath. I'm not a Linux professional myself, but in our team we have people who will understand the technical stuff, so don't let my post confuse you :) The current situation is that we have only one fibre channel connection to a single disk array controller. The real-life case would of course be quite different: at minimum we should have two fibre channel ports connected to two different switches and two different controllers. However, we have no idea how to set up the multipath tool. Is DM-MPIO the right software? At minimum we should be able to boot when multiple connections are available and achieve fault tolerance when any of them goes down. Since the disk array is not the latest hardware, I managed to find RDAC driver sources only for the 2.6.x kernel, and we have 3.2.x. Another issue is building a multipath.conf. The said driver sources are from IBM support and the QLogic drivers provided to the Ubuntu installer are from the Ubuntu site. It seems that RHEL and SLES would have near out-of-the-box support, but that is not an option for our project. Actual questions: - What is the recommended multipath tool for Ubuntu Server 12.04? - Are there pre-made configurations or templates available? Does it require disk array / controller specific settings, or does a more generic config work? - Do you have experience with a similar setup and would you like to share the knowledge? I'll provide you with any additional information you might require. Thanks in advance.
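
    For what it's worth, DM-MPIO is the usual answer on Ubuntu 12.04: install the multipath-tools and multipath-tools-boot packages and drive everything through /etc/multipath.conf. The 3.2 kernel already ships the scsi_dh_rdac device handler, so the old out-of-tree RDAC driver should not be needed. The sketch below is only a starting point under stated assumptions - the vendor/product strings and the rdac settings for a DS4500 must be verified against the output of multipath -v3 on your own hardware.

        # /etc/multipath.conf - minimal sketch; values are assumptions to verify
        defaults {
            user_friendly_names yes
        }
        devices {
            device {
                vendor                "IBM"
                product               "1742*"     # DS4500 is expected to report a 1742-xxx product id; confirm with multipath -v3
                hardware_handler      "1 rdac"
                path_checker          rdac
                prio                  rdac
                path_grouping_policy  group_by_prio
                failback              immediate
            }
        }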

  • rsyslog - template - regex data for insertion into db

    - by Mike Purcell
    I've been googling around the last few days looking for a solid example of how to regex a log entry for desired data, which is then to be inserted into a database, but apparently my google-fu is lacking. What I am trying to do is track when an email is sent, and then track the remote mta response, specifically the dsn code. At this point I have two templates setup for each situation: # /etc/rsyslog.conf ... $Template tpl_custom_header, "MPurcell: CUSTOM HEADER Template: %msg%\n" $Template tpl_response_dsn, "MPurcell: RESPONSE DSN Template: %msg%\n" # /etc/rsyslog.d/mail if $programname == 'mail-myapp' then /var/log/mail/myapp.log if ($programname == 'mail-myapp') and ($msg contains 'X-custom_header') then /var/log/mail/test.log;tpl_custom_header if ($programname == 'mail-myapp') and ($msg contains 'dsn=') then /var/log/mail/test.log;tpl_response_dsn & ~ Example log entries: MPurcell: CUSTOM HEADER Template: D921940A1A: prepend: header X-custom_header: 101 from localhost[127.0.0.1]; from=<[email protected]> to=<[email protected]> proto=ESMTP helo=<localhost>: headername: message-id MPurcell: RESPONSE DSN Template: D921940A1A: to=<[email protected]>, relay=gmail-smtp-in.l.google.com[2607:f8b0:400e:c02::1a]:25, delay=2, delays=0.12/0.01/0.82/1.1, dsn=2.0.0, status=sent (250 2.0.0 OK 1372378600 o4si2828280pac.279 - gsmtp) From the CUSTOM HEADER Template I would like to extract: D921940A1A, and X-custom_header value; 101 From the RESPONSE DSN Template I would like to extract: D921940A1A, and "dsn=2.0.0"
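
    One way to pull individual fields out of %msg% is rsyslog's regex form of the property replacer, which can then be embedded in an SQL template for ommysql/ompgsql. The sketch below is an assumption-level example, not tested against this exact log format: the template name, column names and the precise property-replacer options (ERE, submatch index, FLD on no-match) should be checked against the property replacer documentation for your rsyslog version.

        # Sketch: extract "2.0.0" from "... dsn=2.0.0, status=sent ..." with the regex property replacer
        $template tpl_dsn_only,"%msg:R,ERE,1,FLD:dsn=([0-9.]+)--end%\n"
        # The same expressions can be embedded in an SQL template for database insertion, e.g.:
        # $template tpl_dsn_sql,"INSERT INTO mail_dsn (queue_id, dsn) VALUES ('%msg:R,ERE,1,FLD:^ ?([0-9A-F]+):--end%','%msg:R,ERE,1,FLD:dsn=([0-9.]+)--end%')",SQL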

  • APC uptime 0 because of FastCGI

    - by demlasjr
    I have a VPS using Parallels/Plesk (11.0.9 Update #22, last updated at Oct 31, 2012 03:33 AM, CentOS 6.3 (Final) x86_64). I have Apache (CGI/FastCGI) installed and nginx as a reverse proxy. Everything is working just fine. I installed APC for caching, but the issue is that the uptime is always 0; it's restarting every 15 seconds or so. I checked everywhere and can't find a solution to fix it. The server has graceful restart enabled, but only every 6 hours, which shouldn't influence the APC uptime. Searching Google I found that this could be related to Apache running with FCGId instead of FastCGI. Plesk/Apache is using this config file: usr/local/psa/admin/conf/templates/default/service/php_over_fastcgi.php whose content is: <IfModule mod_fcgid.c> <Files ~ (\.php)> SetHandler fcgid-script FCGIWrapper <?php echo $VAR->server->webserver->apache->phpCgiBin ?> .p$ Options +ExecCGI allow from all </Files> Is the issue here or elsewhere? How can I fix this to work with FastCGI and make APC work properly? I forgot to specify that even if the uptime is below one minute, APC is doing a pretty good job caching (92% are hits).
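
    For context: under mod_fcgid each spawned PHP process carries its own APC cache, and whenever mod_fcgid retires a process (idle timeout, process lifetime, request limit) that cache starts again from zero, which looks exactly like an uptime of a few seconds. The directives below are a hedged sketch of how to keep the PHP processes alive longer; the values are arbitrary examples, and since Plesk regenerates its Apache config from the template above, the edits would have to be made in the template rather than in the generated file.

        <IfModule mod_fcgid.c>
            # Keep fcgid-spawned PHP processes around longer so each per-process APC cache survives
            FcgidIdleTimeout            3600
            FcgidProcessLifeTime        0      # example: 0 disables recycling by age
            FcgidMaxRequestsPerProcess  0      # example: 0 means no per-process request limit
            FcgidMinProcessesPerClass   2
        </IfModule>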

  • Munin graphing by CGI

    - by Vaughn Hawk
    I have Munin working just fine, but any time I try to do cgi graphing - it just stops graphing... no errors in the log, nothing. I've followed the instructions here: http://munin-monitoring.org/wiki/CgiHowto - and it should be working - here's my munin.conf setup, at least the parts that matter: dbdir /var/lib/munin htmldir /var/www/munin logdir /var/log/munin rundir /var/run/munin tmpldir /etc/munin/templates graph_strategy cgi cgiurl /usr/lib/cgi-bin cgiurl_graph /cgi-bin/munin-cgi-graph And then the host info yada yada - graph_strategy cgi and cgrurl are commented out in munin.conf - that's because if I uncomment them, graphing stops working. Again, I get no errors in logs, just blank images where the graphs used to be. Comment out cgi? As soon as munin html runs again, everything is back to normal. I'm running the latest version of munin and munin-node - I've tried fastcgi and regular cgi - permissions for all of the directories involved are munin:www-data - and my httpd.conf file looks like this: ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/ <Directory /usr/lib/cgi-bin/> AllowOverride None SetHandler fastcgi-script Options ExecCGI -MultiViews +SymLinksIfOwnerMatch Order allow,deny Allow from all </Directory> <Location /cgi-bin/munin-cgi-graph> SetHandler fastcgi-script </Location> Does anyone have any ideas? Without this working, at least from what I understand, Munin just graphs stuff, even if no one is looking at them - you add 100 servers to graph, and this starts to become a problem. Hope someone has ran into this and can help me out. Thanks!
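
    One way to get an actual error out of a silent munin-cgi-graph is to run it by hand as the web server user and to check that everything it writes is owned by that user. The commands below are assumptions based on a stock Debian/Ubuntu layout; the paths, user name and the example graph URL may all differ on your install.

        # Exercise the CGI directly as www-data to surface errors the web server swallows
        sudo -u www-data env PATH_INFO=/localdomain/localhost.localdomain/cpu-day.png /usr/lib/cgi-bin/munin-cgi-graph
        # Typical ownership fixes for files the CGI needs to write
        sudo chown www-data:www-data /var/log/munin/munin-cgi-graph.log
        sudo chown -R www-data /var/lib/munin/cgi-tmp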

  • OpenVZ Can't initialize containers after install

    - by Tonino Jankov
    I have installed OpenVZ on CentOS 6 on a dedicated server, following the quick installation guide on the OpenVZ wiki. After installing through yum, I don't know why, but grub.conf wasn't automatically updated to accommodate the new kernel, so I had to do it manually. I edited grub.conf, added the OpenVZ kernel and rebooted - it went fine. The server came up on the OpenVZ kernel and it worked; it started the OpenVZ service by itself. But after I created a container, added an IP to it and attempted to start it, I couldn't. Here is the output from the shell: [root@cloud2 ~]# vzctl start 86 Starting container ... Container is mounted Container start failed (try to check kernel messages, e.g. "dmesg | tail") Container is unmounted [root@cloud2 ~]# dmesg | tail [ 1973.401596] CT: 86: failed to start with err=-105 [ 2107.113850] Failed to initialize the ICMP6 control socket (err -105). [ 2107.155523] CT: 86: stopped [ 2107.155543] CT: 86: failed to start with err=-105 [ 6348.282184] Failed to initialize the ICMP6 control socket (err -105). [ 6348.330348] CT: 86: stopped [ 6348.330361] CT: 86: failed to start with err=-105 [45184.024002] Failed to initialize the ICMP6 control socket (err -105). [45184.072086] CT: 86: stopped [45184.072099] CT: 86: failed to start with err=-105 [root@cloud2 ~]# I don't know what is wrong. I tried different templates - Debian 6, CentOS 6, i386, amd64 - but the issue is the same. What is the problem?

  • Choosing the right e-mail client

    - by CFP
    Hi all, I'm currently using Outlook 2007 (under Windows 7), but I much prefer free software (open source being the best, of course), so I thought I'd ask for expert advice here. I thought it might be easier if I included a small "wanted list": I receive about 15 to 30 e-mails every day, but I have large archives (10'000 emails) which I frequently need to access. I usually open and close my mail program many times, so I'd like it to start pretty fast. I cannot use an online mailbox, because I have too many email addresses (about 5: 1 for work, 1 for home, 1 semi-private, 1 for specific emails, and 1 for newsletters). In order of importance, the things I'd like my mail client to be able to do: Efficiently categorize e-mails. Until now I've mostly been using Outlook folders, because filtering by tags was not easy, but I'd rather have one large list of mails, neatly tagged so I can easily filter. I'd love being able to select mails by tags (e.g. in a click or two (could be a tab) show all mails tagged with "software"). Create "tagging rules", such as "if the mail was sent to this address, add this tag", or "if the body contains ..., add that tag". Sync contacts with Gmail, handle tasks (syncing with Toodledo would be awesome), possibly provide a calendar. Create e-mail templates, signatures... Other ideas: a timeline, scripting support, being able to import MS Outlook emails, provide a nice backup format... Thanks for sharing ideas and suggestions!

  • Standards for documenting/designing infrastructure

    - by Paul
    We have a moderately complex solution for which we need to construct a production environment. There are around a dozen components (and here I'm using a definition of "component" which means "can fail independently of other components" - e.g. an Apache server, a Weblogic web app, an ftp server, an ejabberd server, etc). There are a number of Weblogic web apps - and one thing we need to decide is how many Weblogic containers to run these web apps in. The system needs to be highly available, and communications in and out of the system are typically secured by SSL. Our datacentre team will handle things like VLAN design, racking, server specification and build. So the kinds of decisions we still need to make are: How to map components to physical servers (and Weblogic containers). Identify all communication paths, ensure all are either resilient or there's an "upstream" comms path that is resilient, and failover of that depends on all single points of failure "downstream". Decide where to terminate SSL (on load balancers, or on Apache servers, for instance). My question isn't really about how to make the decisions, but whether there are any standards for documenting (especially in diagrams) the design questions and the design decisions. It seems odd, for instance, that Visio doesn't have a template for something like this - it has templates for more physical layout, and for more logical/software architecture diagrams. So right now I'm using a basic Visio diagram to represent each component and the comms between them, with plans to augment this with hostnames, ports, whether each comms link is resilient, etc. This all feels like something that must have been done many times before. Are there standards for documenting this?

  • Give access to specific services on Windows 7 Professional machines?

    - by Chad Cook
    We have some machines running Windows 7 Professional at our office. The typical user needs to have access to stop and start a service for a local program they run. These machines have a local web server and database installed and we need to restrict access to certain folders and services related to the web server and database for these users. The setup I have tried so far is to add the typical user as a Power User. I have been able to successfully restrict them from accessing certain folders (as far as I can tell) but now they do not have access to the service needed for starting and stopping the local program. My thought was to give them access to the specific service but I have not had any luck yet. In searching the web for solutions the only results I have found relate to Windows Server 2000 and 2003 and involve creating security templates and databases through the Microsoft Management Console. I am hesitant to try an approach like this as these articles are typically older and I worry this method is outdated. Is there a better way to accomplish the end goal of giving the user permission to run the service and restrict their access to certain folders? If any clarification is needed on the setup or what we are trying to achieve, please let me know. Thanks in advance.
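
    A narrower alternative to making them Power Users is to edit that one service's security descriptor so the user (or a group) can start and stop just that service. The sketch below uses sc.exe from an elevated prompt; MyAppService and the SID are placeholders, and the safe workflow is to dump the existing SDDL with sdshow and re-apply it with one extra ACE rather than typing a descriptor from scratch.

        # Show the service's current security descriptor (SDDL)
        sc.exe sdshow MyAppService
        # Re-apply it with one additional ACE granting start (RP), stop (WP) and query rights to the user's SID.
        # Everything except the added ACE should be copied verbatim from the sdshow output for your service.
        sc.exe sdset MyAppService "D:(A;;CCLCSWRPWPDTLOCRRC;;;SY)(A;;CCDCLCSWRPWPDTLOCRSDRCWDWO;;;BA)(A;;CCLCSWLOCRRC;;;IU)(A;;RPWPCCLCSWLOCRRC;;;S-1-5-21-xxxxxxxxxx-xxxxxxxxxx-xxxxxxxxxx-1001)S:(AU;FA;CCDCLCSWRPWPDTLOCRSDRCWDWO;;;WD)"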

  • Group Policy for DNS Server Addresses

    - by John
    This question deals strictly with the methodology of using Active Directory to publish DNS settings and controlling the DNS server address order to domain attached clients. This is not asking about if this is the right function, role, or best use of group policy; but rather, how to make it work. I would like to be able to publish in group policy DNS addresses and possibly control the order in which the client consumes the addresses. I have tried following the information from this site without success. I set the following setting within group policy, but the client never shows the settings within the TCP/IP properties. Computer Configuration/Administrative Templates/Network/DNS Client/DNS Servers I did list them as a single space separated list. For example, the following: 192.168.0.1 192.168.10.3 192.168.34.2 192.168.2.67 192.168.56.99 192.168.99.23 This would be for Windows 7, Windows Server 2003, and Windows Server 2008 clients. I am not sure what I am doing wrong or how to get this to work. Am I missing a setting? Do I need to set something differently?
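
    Since the Administrative Templates "DNS Servers" policy only applies to Windows XP Professional, the usual fallbacks are Group Policy Preferences (where available) or a computer startup script that sets the list with netsh. The sketch below is only illustrative: the interface name is an assumption (it varies per machine and language), the addresses are the ones from the question, and the index parameter controls the order the servers are tried in.

        # Hypothetical GPO startup script: set the DNS server list and its order with netsh
        netsh interface ip set dns name="Local Area Connection" source=static addr=192.168.0.1 register=primary
        netsh interface ip add dns name="Local Area Connection" addr=192.168.10.3 index=2
        netsh interface ip add dns name="Local Area Connection" addr=192.168.34.2 index=3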

  • Is there a standard method for assigning nameservers to servers in a Windows domain?

    - by HopelessN00b
    As the title asks, I'm wondering if there's any standard or "best practice" for how to actually assign nameservers (DNS) and manage the nameserver configuration for client servers on a Windows domain. I'm talking about the DNS server entries in the network adapter's TCP/IP properties (the setting circled in the original screenshot), in case the language of the question is not clear enough. This is for a large, multi-site environment, where ideally/hopefully all servers point at their site's Domain Controller as the primary DNS server, and a DNS server at a different site as the secondary DNS server. For simplicity's sake, we can say that the secondary server would be the Domain Controller at the home site for everyone, and there are no tertiary DNS servers (even though that's not actually the case). Try as I might, I can't seem to find a GPO setting for this (at least on FL 2003 R2, the Computer Configuration - Administrative Templates - Network - DNS Client - DNS Servers GPO is Supported on: Windows XP Professional only), and I find it rather hard to believe that the "best"/"standard" solution would therefore be either scripting up something to apply the DNS settings per site, or using a DHCP server to push those configurations out to the other servers via the DHCP Scope Options. So, is there a standard way of managing this configuration? (That's hopefully not "a script" or "DHCP Scope Options".)
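
    In the absence of a server-class GPO for this, the two mechanisms that actually exist are DHCP options or configuration from a script. For the script route, here is a per-site PowerShell sketch using WMI (which works back to Server 2003, unlike the newer Set-DnsClientServerAddress cmdlet); the addresses are placeholders, and the assumption is that each site's copy of the script, or a site lookup inside it, supplies its own ordered list.

        # Sketch: apply an ordered DNS server list to every IP-enabled adapter via WMI
        $dnsServers = '10.1.0.10', '10.9.0.10'   # placeholders: local-site DC first, remote DC second
        Get-WmiObject Win32_NetworkAdapterConfiguration -Filter "IPEnabled = true" |
            ForEach-Object { [void]$_.SetDNSServerSearchOrder($dnsServers) }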

  • How can I make grub2 boot into Windows 7?

    - by Grzenio
    I had Windows 7 installed on my system, then I installed Debian testing with grub2 as its boot manager. Initially I couldn't see windows entry in grub at all, so I ran: aptitude install os-prober kcpuload update-grub Now I can see the entry, but when I select it I get only Win7 system restore, instead of the the real thing. Any ides how to make it work? EDIT: I tried the suggested approach to add a new file to /etc/grub.d, which generated an entry in grub.cfg, but it does not appear in the grub menu on boot :( I have this: grzes:/home/ga# cat /etc/grub.d/11_Windows #! /bin/sh -e echo Adding Windows >&2 cat << EOF menuentry “Windows 7? { set root=(hd0,2) chainloader +1 } And I have the following grub.cfg file: grzes:/home/ga# cat /boot/grub/grub.cfg # # DO NOT EDIT THIS FILE # # It is automatically generated by /usr/sbin/grub-mkconfig using templates # from /etc/grub.d and settings from /etc/default/grub # ### BEGIN /etc/grub.d/00_header ### if [ -s $prefix/grubenv ]; then load_env fi set default="0" if [ ${prev_saved_entry} ]; then set saved_entry=${prev_saved_entry} save_env saved_entry set prev_saved_entry= save_env prev_saved_entry set boot_once=true fi function savedefault { if [ -z ${boot_once} ]; then saved_entry=${chosen} save_env saved_entry fi } insmod ext2 set root=(hd0,3) search --no-floppy --fs-uuid --set 6ce3ff31-0ef7-41df-a6f5-b6b886db3a94 if loadfont /usr/share/grub/unicode.pf2 ; then set gfxmode=640x480 insmod gfxterm insmod vbe if terminal_output gfxterm ; then true ; else # For backward compatibility with versions of terminal.mod that don't # understand terminal_output terminal gfxterm fi fi set locale_dir=/boot/grub/locale set lang=en insmod gettext set timeout=5 ### END /etc/grub.d/00_header ###
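
    For comparison, here is a cleaned-up version of that custom entry, offered as a sketch rather than a guaranteed fix. The differences from the snippet above are plain ASCII quotes around "Windows 7" (the curly quote alone can break parsing), a terminating EOF for the here-document with its closing brace, and marking the file executable so update-grub actually runs it. set root=(hd0,2) is kept from the original, but grub2 counts partitions from 1, so double-check that it really points at the Windows partition.

        #!/bin/sh -e
        echo "Adding Windows 7" >&2
        cat << EOF
        menuentry "Windows 7" {
                insmod ntfs
                set root=(hd0,2)
                chainloader +1
        }
        EOF

        # then make it executable and regenerate grub.cfg:
        chmod +x /etc/grub.d/11_Windows
        update-grub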

  • Router that allows custom Dynamic DNS server [closed]

    - by Thuy
    I've made my own DDNS service and it works fine using an application running on clients to update the IP. But if for some reason I don't have the choice of using my software and instead I need to use a router to update the IP, it becomes troublesome. For example, I needed to setup IPsec from a customer to me and the customers router/firewall (netgear srx5308) has a dynamic IP which is given from the ISP which can't offer static IPs. So it needs to use dynamic dns for it to work. In this case there really isn't a client to run the software on since it's a router/firewall. Unfortunately it seems that most routers are rather unfriendly towards custom DDNS solutions and most offer only dyndns.com or similar templates. Which was the case with this router too. Leaving me with no way to use my own dynamic dns server IP. I have the option of switching out the customers router and I've been looking around for alternatives and other routers/solutions and I was wondering if anyone on this great site might have been in a similar situation or might just know about some router/firewall that is more friendly towards custom ddns solutions that I might be able to use. Thanks in advance for any help or guidance!
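
    One practical middle ground is to make the custom DDNS server speak the de-facto dyndns2 update protocol, because that is what most routers' built-in "DynDNS"/custom client (and add-on clients like ddclient or inadyn) actually send. Roughly, an update is a single authenticated HTTP GET; the hostname, address and credentials below are placeholders.

        GET /nic/update?hostname=office.example.com&myip=203.0.113.10 HTTP/1.1
        Host: ddns.example.com
        Authorization: Basic dXNlcjpwYXNz        (base64 of "user:pass")

        Typical responses: "good 203.0.113.10", "nochg 203.0.113.10", "badauth"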

  • powershell vs GPO for installation, configuration, maintenance

    - by user52874
    My question is about using powershell scripts to install, configure, update and maintain Windows 7 Pro/Ent workstations in a 2008R2 domain, versus using GPO/ADMX/msi. Here's the situation: Because of a comedy of cumulative corporate bumpfuggery we suddenly found ourselves having to design, configure and deploy a full Windows Server 2008R2 and Windows 7 Pro/Enterprise on very short notice and delivery schedule. Of course, I'm not a windows expert by any means, and we're so understaffed that our buzzword bingo includes 'automate' and 'one-button' and 'it needs to Just Work'. (FWIW, I started with DEC, then on to solaris and cisco, then linux of various flavors with a smattering of BSD nowadays. I use Windows for email and to fill out forms). So we decided to bring in a contractor to do this for us. and they met the deadline. The system is up and mostly usable, and this is good. We would not have been able to do this. But it's the 'mostly' part that is proving to be the PIMA now, and I'm having to learn Microsoft stuff anyway until/if we can get a new contract with these guys for ongoing operations. Here's my question. The contractor used powershell almost exclusively for deployment, configuration and updating. My intensive reading over the last week leads me to think that the generally accepted practices for deployment, configuration and updating microsoft stuff uses elements of GPOs and ADMX templates, along with maybe some third party stuff like PolicyPak. Are there solid reasons that I've not found yet that powershell scripts would be preferred over the GPO methods? I'm going to discuss this with the contractor lead when he gets back from his vacation, and he'll be straight with me (nor do I think they set us up). But I can also see this might be a religious issue, so I would still like some background on this. Thoughts? or weblinks? Thanks!

  • How to start MSSQL Server with corrupt model db

    - by Jordan McGuigan
    After moving some databases around (restoring, deleting, etc.) we experienced an issue creating new databases. Specifically, when trying to create a new database, MSSQL Server failed because "The database 'model' is marked RESTORING and is in a state that does not allow recovery to be run". As some online solutions suggested, we tried to stop and start the MSSQL service. The service would not restart because "Could not create tempdb. You may not have enough disk space available. Free additional disk space by deleting other files on the tempdb drive" (FYI: the drive has 100 GB of free space). We tried restarting the machine the MSSQL Server is running on; when the server came back online, we received the same error. We have tried deleting tempdb.mdf and restoring the model db from the templates folder, but neither of these solved the issue. We are unable to connect to the database, even in single user mode. Many of the online solutions have us running SQL commands against the server, but we are unable to connect (even in single user mode) to run them. Specific error messages: Database 'model' cannot be opened. It is in the middle of a restore. (Microsoft SQL Server, Error: 927) The SQL Server (MSSQLSERVER) service is starting. The SQL Server (MSSQLSERVER) service could not be started. A service specific error occurred: 1814. We need the server up and running again ASAP.
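
    A hedged sketch of the usual way out of this state, assuming a default instance and paths: trace flag 3608 makes SQL Server start while recovering only master, so the broken model and the missing tempdb no longer stop the service; model can then be brought out of the RESTORING state (or restored from backup) and the instance restarted normally, which rebuilds tempdb from the repaired model. Take file-level backups of the data directory before trying it.

        # Start the instance with minimal configuration, recovering only master (trace flag 3608)
        net stop MSSQLSERVER
        net start MSSQLSERVER /f /T3608
        # Complete model's interrupted restore (or: RESTORE DATABASE model FROM DISK = '...' if a backup exists)
        sqlcmd -E -S . -Q "RESTORE DATABASE model WITH RECOVERY"
        # Restart normally so tempdb is rebuilt from the repaired model
        net stop MSSQLSERVER
        net start MSSQLSERVER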

  • Create new vms with a template with a csv. Possible?

    - by EdConde
    I am new to PowerShell and PowerCLI... but I manage a few ESX environments and really would like to do as much as possible via PowerShell. On with the help I need: I used this one-liner to create VMs from templates, but the problem is there has to be some user input after each new VM is created. New-VM name -Template template -VMHost VMHost -Datastore Datastore What I would like to do is import via CSV the name of the new VM, the template to use, the host to put the new VM on and the datastore, all from a CSV. I don't know if it is as easy as below, but I kept getting errors. Import-Csv "C:\powershell\Data\VM2Create.csv" | Foreach-Object { New-VM $_.name -Template $_.template -VMHost $_.VMHost -Datastore $_.Datastore } I know there are some () or {} or possibly | that are needed... just don't know where to put them... The CSV I think would look like this: name, template, vmhost, datastore Any help or thoughts would be much appreciated...
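
    For what it's worth, the missing pieces are usually just the $_ automatic variable inside the ForEach-Object block and resolving the CSV's strings into the objects New-VM expects. Below is a sketch against a CSV with the header name,template,vmhost,datastore; the file path is the one from the question, and adding -RunAsync would queue the clones instead of waiting on each one.

        # Sketch: create one VM per CSV row (header: name,template,vmhost,datastore)
        Import-Csv "C:\powershell\Data\VM2Create.csv" | ForEach-Object {
            New-VM -Name $_.name `
                   -Template  (Get-Template  -Name $_.template) `
                   -VMHost    (Get-VMHost    -Name $_.vmhost) `
                   -Datastore (Get-Datastore -Name $_.datastore)
        }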

  • Scaling a node.js application, nginx as a base server, but varnish or redis for caching?

    - by AntelopeSalad
    I'm not close to being well versed in using nginx or varnish but this is my setup at the moment. I have a node.js server running which is serving either json, html templates, or socket.io events. Then I have nginx running in front of node which is serving all static content (css, js, etc.). At this point I would like to cache both static content and dynamic content to memory. It's to my understanding that varnish can cache static content quite well and it wouldn't require touching my application code. I also think it's capable of caching dynamic content too but there cannot be any cookie headers? I do use redis at the moment for holding session data and planned to use it for other things in the future like keeping track of non-crucial but fun stats. I just have no idea how I should handle caching everything on the site. I think it comes down to these options but there might be more: Throw varnish in front of nginx and let varnish cache static pages, no app code changes. Redis would cache dynamic db calls which would require modifying my app code. Ignore using varnish completely and let redis handle caching everything, then use one of the nginx-redis modules. I'm not sure if this would require a lot of app code changes (for the static files). I'm not having any luck finding benchmarks that compare nginx+varnish vs nginx+redis and I'm too inexperienced to bench it myself (high chances of my configs being awful). I'm basically looking for the solution that would be the most efficient in terms of req/sec and scalable in the future (throw new hardware at the problem + maybe adjust some values in a config = new servers up and running semi-painlessly).
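
    Whichever way the comparison comes out, option 1 usually hinges on making the static assets cacheable at all, which mostly means stripping cookies on them. Below is a small Varnish 3 VCL sketch of that idea; the extension list and TTL are arbitrary examples, and roughly the same effect is available from nginx alone with expires/proxy_cache if Varnish is dropped.

        sub vcl_recv {
            # Static assets: drop request cookies so Varnish will cache them
            if (req.url ~ "\.(css|js|png|jpg|gif|ico|woff)$") {
                unset req.http.Cookie;
            }
        }
        sub vcl_fetch {
            if (req.url ~ "\.(css|js|png|jpg|gif|ico|woff)$") {
                unset beresp.http.Set-Cookie;
                set beresp.ttl = 1h;
            }
        }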

  • Varnish going sick

    - by junke1990
    I'm having trouble with Varnish, it works for a couple of views and then just goes sick... The weird thing is that it does work for about 20 or 30 requests. If I call apache directly it works fine. I'm running Varnish Version: 3.0.3-1 on Debian Squeeze and, for now, Apache on port 80 and Varnish on port 8080 on the same server.. I'm using https://github.com/mattiasgeniar/varnish-3.0-configuration-templates as base for my VCLs and modified the VCLs to support Concrete5. Anyone any clue on how I should debug this? backend default { .host = "127.0.0.1"; .port = "80"; .connect_timeout = 1.5s; .first_byte_timeout = 45s; .between_bytes_timeout = 30s; .probe = { .url = "/"; .timeout = 1s; .interval = 10s; .window = 10; .threshold = 8; } } LOG 0 CLI - Rd ping 0 CLI - Wr 200 19 PONG 1353791312 1.0 0 CLI - Rd ping 0 CLI - Wr 200 19 PONG 1353791315 1.0 0 Backend_health - default Still sick 4--X-R- 0 8 10 0.000689 0.000000 HTTP/1.1 301 Moved Permanently (the 301 is because I check for www.)
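
    One likely culprit is visible in the last log line: the health probe requests "/" and the backend answers with a 301, but a probe only counts a response as healthy when it matches .expected_response (200 by default), so the backend gets marked sick even though it serves pages fine. A sketch of the probe section, either accepting the redirect or probing a URL that returns 200 outright:

        .probe = {
            .url = "/";
            .expected_response = 301;   # accept the www. redirect as healthy
            # ...or instead probe a URL that answers 200, e.g. .url = "/robots.txt";
            .timeout = 1s;
            .interval = 10s;
            .window = 10;
            .threshold = 8;
        }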

  • Why is /dev/urandom only readable by root since Ubuntu 12.04 and how can I "fix" it?

    - by Joe Hopfgartner
    I used to work with Ubuntu 10.04 templates on a lot of servers. Since changing to 12.04 I have problems that I've now isolated: the /dev/urandom device is only accessible to root. This caused SSL engines, at least in PHP, for example file_get_contents(https://... to fail. It also broke Redmine. After a chmod 644 it works fine, but that doesn't persist across reboots. So my question: why is this? I see no security risk because... I mean... wanna steal some random data? How can I "fix" it? The servers are isolated and used by only one application; that's why I use OpenVZ. I'm thinking about something like a runlevel script or so... but how do I do it efficiently? Maybe with dpkg or apt? The same goes for /dev/shm. In this case I totally understand why it's not accessible, but I assume I can "fix" it the same way as /dev/urandom.
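
    Inside an OpenVZ container there is normally no udev run that would (re)set device-node permissions, so a boot-time chmod is the pragmatic fix; on a stock system /dev/urandom and /dev/random are mode 0666 and /dev/shm is 1777. A sketch for /etc/rc.local in the container template (an equivalent upstart job would also work); treat the modes as assumptions to check against your own policy.

        # /etc/rc.local - restore usable permissions on every boot (keep "exit 0" last)
        chmod 0666 /dev/urandom /dev/random
        chmod 1777 /dev/shm
        exit 0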

  • VMware ESX Linux Guest Customization

    - by andyh_ky
    Hello, I am interested in deploying several RHEL 4 Update 8 virtual machines for creation of a test environment. Here are the steps I am taking: In off hours, P2V/V2V the production machines and convert them to templates Deploy the virtual machines with a customization specification that changes hostname, IP address I am interested in how these processes are done and if there are any options for further customization. Are the machines brought on the network when they are powered on, before they are reconfigured? Is there a potential IP address conflict? Is there an option to run additional scripts which reside on the guest as a part of the reconfiguration? For example, restoring an Oracle Database. This is an option with Windows guests and sysprep, but I have been unable to locate anything showing a RHEL equivalent. I am dealing with a multi tier application. The main issue I am attempting to mitigate is that the application servers reference database servers by hostname and in tnsnames files. I am interested in scripting the reconfiguration of the application in the deployment so that the app/db servers are pointing to the test environment. I am OK with placing the 'cleanup' script on the source and executing it after the machine has been brought up. I am interested in the automation of the script's execution post clone/boot, as well as if there could be an IP address conflict. (cross posted to VMTN's ESX 4 community)
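
    On the script question: as far as I know the Linux customization spec only covers hostname and network settings (applied through VMware Tools on first boot, so the clone does come up briefly with the source identity unless it lands in an isolated port group), and it has no equivalent of the Windows run-once hook. A common workaround is to keep the cleanup script inside the template and fire it from PowerCLI once the clone is up; in the sketch below the VM name, credentials and script path are placeholders.

        # Sketch: run the in-guest cleanup script after the clone has booted (requires VMware Tools in the guest)
        $vm = Get-VM -Name 'test-app01'
        Invoke-VMScript -VM $vm -ScriptType Bash `
            -ScriptText '/usr/local/bin/post_clone_fixup.sh' `
            -GuestUser 'root' -GuestPassword 'placeholder'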

  • xslt cookbook example not working

    - by Liza dawson
    Hi I am working on this from xslt cookbook type my.xml <?xml version="1.0" encoding="UTF-8"?> <people> <person name="Al Zehtooney" age="33" sex="m" smoker="no"/> <person name="Brad York" age="38" sex="m" smoker="yes"/> <person name="Charles Xavier" age="32" sex="m" smoker="no"/> <person name="David Williams" age="33" sex="m" smoker="no"/> <person name="Edward Ulster" age="33" sex="m" smoker="yes"/> <person name="Frank Townsend" age="35" sex="m" smoker="no"/> <person name="Greg Sutter" age="40" sex="m" smoker="no"/> <person name="Harry Rogers" age="37" sex="m" smoker="no"/> <person name="John Quincy" age="43" sex="m" smoker="yes"/> <person name="Kent Peterson" age="31" sex="m" smoker="no"/> <person name="Larry Newell" age="23" sex="m" smoker="no"/> <person name="Max Milton" age="22" sex="m" smoker="no"/> <person name="Norman Lamagna" age="30" sex="m" smoker="no"/> <person name="Ollie Kensington" age="44" sex="m" smoker="no"/> <person name="John Frank" age="24" sex="m" smoker="no"/> <person name="Mary Williams" age="33" sex="f" smoker="no"/> <person name="Jane Frank" age="38" sex="f" smoker="yes"/> <person name="Jo Peterson" age="32" sex="f" smoker="no"/> <person name="Angie Frost" age="33" sex="f" smoker="no"/> <person name="Betty Bates" age="33" sex="f" smoker="no"/> <person name="Connie Date" age="35" sex="f" smoker="no"/> <person name="Donna Finster" age="20" sex="f" smoker="no"/> <person name="Esther Gates" age="37" sex="f" smoker="no"/> <person name="Fanny Hill" age="33" sex="f" smoker="yes"/> <person name="Geta Iota" age="27" sex="f" smoker="no"/> <person name="Hillary Johnson" age="22" sex="f" smoker="no"/> <person name="Ingrid Kent" age="21" sex="f" smoker="no"/> <person name="Jill Larson" age="20" sex="f" smoker="no"/> <person name="Kim Mulrooney" age="41" sex="f" smoker="no"/> <person name="Lisa Nevins" age="21" sex="f" smoker="no"/> </people> type generic-attr-to-csv.xslt <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform" xmlns:csv="http://www.ora.com/XSLTCookbook/namespaces/csv"> <xsl:param name="delimiter" select=" ',' "/> <xsl:output method="text" /> <xsl:strip-space elements="*"/> <xsl:template match="/"> <xsl:for-each select="$columns"> <xsl:value-of select="@name"/> <xsl:if test="position( ) != last( )"> <xsl:value-of select="$delimiter/> </xsl:if> </xsl:for-each> <xsl:text>&#xa;</xsl:text> <xsl:apply-templates/> </xsl:template> <xsl:template match="/*/*"> <xsl:variable name="row" select="."/> <xsl:for-each select="$columns"> <xsl:apply-templates select="$row/@*[local-name(.)=current( )/@attr]" mode="csv:map-value"/> <xsl:if test="position( ) != last( )"> <xsl:value-of select="$delimiter"/> </xsl:if> </xsl:for-each> <xsl:text>&#xa;</xsl:text> </xsl:template> <xsl:template match="@*" mode="map-value"> <xsl:value-of select="."/> </xsl:template> </xsl:stylesheet> type my.xsl <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform" xmlns:csv="http://www.ora.com/XSLTCookbook/namespaces/csv"> <xsl:import href="generic-attr-to-csv.xslt"/> <!--Defines the mapping from attributes to columns --> <xsl:variable name="columns" select="document('')/*/csv:column"/> <csv:column name="Name" attr="name"/> <csv:column name="Age" attr="age"/> <csv:column name="Gender" attr="sex"/> <csv:column name="Smoker" attr="smoker"/> <!-- Handle custom attribute mappings --> <xsl:template match="@sex" mode="csv:map-value"> <xsl:choose> <xsl:when test=".='m'">male</xsl:when> <xsl:when test=".='f'">female</xsl:when> <xsl:otherwise>error</xsl:otherwise> 
</xsl:choose> </xsl:template> </xsl:stylesheet> using the apache xalan parser D:\Test>java org.apache.xalan.xslt.Process -in my.xml -xsl my.xsl -out my.csv [Fatal Error] generic-attr-to-csv.xslt:15:6: The value of attribute "select" associated with an element type "xsl:v alue-of" must not contain the '<' character. file:///D:/Test/generic-attr-to-csv.xslt; Line #15; Column #6; org.xml.sax.SAXParseException: The value of attribut e "select" associated with an element type "xsl:value-of" must not contain the '<' character. java.lang.NullPointerException at org.apache.xalan.transformer.TransformerImpl.createSerializationHandler(TransformerImpl.java:1171) at org.apache.xalan.transformer.TransformerImpl.createSerializationHandler(TransformerImpl.java:1060) at org.apache.xalan.transformer.TransformerImpl.transform(TransformerImpl.java:1268) at org.apache.xalan.transformer.TransformerImpl.transform(TransformerImpl.java:1251) at org.apache.xalan.xslt.Process.main(Process.java:1048) Exception in thread "main" java.lang.RuntimeException at org.apache.xalan.xslt.Process.doExit(Process.java:1155) at org.apache.xalan.xslt.Process.main(Process.java:1128) Any ideas what am i doing wrong
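
    The fatal error points at the immediate problem: in generic-attr-to-csv.xslt the header loop's <xsl:value-of select="$delimiter/> is missing its closing quote, so the stylesheet never parses. Once that is fixed, the other mismatch worth checking is the mode name: apply-templates uses mode="csv:map-value" while the last template declares mode="map-value". A sketch of the corrected lines, assuming the cookbook intends the csv-prefixed mode:

        <!-- delimiter between column headings: note the closing quote -->
        <xsl:value-of select="$delimiter"/>

        <!-- the mode must match the apply-templates call, including the csv: prefix -->
        <xsl:template match="@*" mode="csv:map-value">
            <xsl:value-of select="."/>
        </xsl:template>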

  • $_GET['page'] loading content incorrectly

    - by s32ialx
    OK so here is my previous post PHP Templated Site w/ file_get_content links now i got that issue resolved BUT the problem is now that the content loads it displays UNDER the div i placed #CONTENT# inside so the styles are being ignored and it's posting #CONTENT# outside the divs at positions 0,0 any suggestions? Found out whats happening by using "View Source" seems that it's putting all of the #CONTENT#, content that's being loaded in front of the tag. Like this <doctype...> <div class="home"> blah blah </div> <head> <script src=""></script> </head> <body> <div class="header"></div> <div class="contents"> #CONTENT# < where content SHOULD load </div> <div class="footer"></div> </body> so anyone got a fix? OK so a better description I'll add relevant screen-shots Whats happening is /* file.class.php */ <?php $file = new file(); class file{ var $path = "templates/clean"; var $ext = "tpl"; function loadfile($filename){ return file_get_contents($this->path . "/" . $filename . "." . $this->ext); } function css($val,$content='',$contentvar='#CSS#') { if(is_array($val)) { $css = 'style="'; foreach($val as $p) { $css .= $p . ";"; } $css .= '"'; } else { $css = 'style="' . $val . '"'; } if($content!='') { return str_replace($contentvar,' ' . $css,$content); } else { return $css; } } function setsize($content,$width='-1',$height='-1',$border='-1'){ $css = ''; if($width!='-1') { $css = $css . "width=\"".$width."\""; } if($height!='-1') { $css = $css . "height=\"".$height."\""; } if($border!='-1') { $css = $css . "border=\"" . $border . "\""; } return str_replace('#SIZE#',' ' . $css,$content); } function setcontent($content,$newcontent,$vartoreplace='#CONTENT#'){ $val = str_replace($vartoreplace,$newcontent,$content); return $val; } function p($content) { $v = $content; $v = str_replace('#CONTENT#','',$v); $v = str_replace('#SIZE#','',$v); print $v; } } if (isset($_GET['page'])) { $content = $_GET['page'].'.php'; } else { $content = 'main.php'; } ?> is calling for a file_get_contents at the bottom which I use in /* index.php */ <?php include('classes/file.class.php'); // load the templates $header = $file->loadfile('header'); $body = $file->loadfile('body'); $footer = $file->loadfile('footer'); // fill body.tpl #CONTENT# slot with $content $body = $file->setcontent($body, $content); // cleanup and output the full page $file->p($header . $body . $footer); ?> and loads into /* body.tpl */ <div id="bodys"> <div id="bodt"></div> <div id="bodm"> <div id="contents"> #CONTENT# </div> </div> <div id="bodb"></div> </div> but the issue is as follows the $content loads properly img tags etc <h2> tags etc but CSS styling is TOTALY ignored for position width z-index etc. 
and as follows here's the screen-shot JUST incase you require the css for where $content is being loaded #bodys { top:91px; position:absolute; width:100%; } #bodt { margin-left:auto; margin-right:auto; top:3px; position:relative; width:820px; height:42px; background-image:url('images/pagetop.png'); background-repeat:no-repeat; z-index: 0; } #bodm { margin-left:auto; margin-right:auto; top:3px; position:relative; width:820px; background-image:url('images/pagemid.png'); background-repeat:repeat-y; z-index: 0; } #bodb { margin-left:auto; margin-right:auto; bottom:-42px; position:relative; width:820px; height:42px; background-image:url('images/pagebot.png'); background-repeat:no-repeat; z-index:-1; } #menuo { position:absolute; bottom:-2px; z-index:199; } #contents { position:relative; top:5px; left:25px; width:770px; z-index:10; overflow: auto; color: #000000; line-height: 1.3em; font-size: 12px; } #content { position:absolute; top:5px; left:25px; width:760px; z-index:1; color: #000000; line-height: 1.3em; font-size: 12px; } #contents p{ margin-bottom: 0.7em; } #contents a{ font-weight:bold; color: #6fa5fd; border-bottom: 1px dotted #6fa5fd; }
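
    The symptom (the page's markup appearing before the doctype) is what you get when the page file is executed and echoes straight to the browser instead of being captured into $content before the str_replace. Below is a hedged sketch of the capture step using output buffering, with a basename() guard since $_GET['page'] flows straight into an include:

        // Sketch: buffer the page's output into $content instead of letting it echo immediately
        if (isset($_GET['page'])) {
            $page = basename($_GET['page']) . '.php';   // basename() blocks ../ path tricks
        } else {
            $page = 'main.php';
        }
        ob_start();
        include $page;                 // whatever the page prints goes into the buffer, not the browser
        $content = ob_get_clean();     // now index.php can do $body = $file->setcontent($body, $content);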

  • change password code error

    - by ejah85
    I've created a code to change a password. Now it seem contain an error. When I fill in the form to change password, and click save the error message: Warning: mysql_real_escape_string() expects parameter 2 to be resource, null given in C:\Program Files\xampp\htdocs\e-Complaint(FYP)\userChangePass.php on line 103 Warning: mysql_real_escape_string() expects parameter 2 to be resource, null given in C:\Program Files\xampp\htdocs\e-Complaint(FYP)\userChangePass.php on line 103 I really don’t know what the error message means. Please guys. Help me fix it. Here's is the code: <?php session_start(); ?> <?php # change password.php //set the page title and include the html header. $page_title = 'Change Your Password'; //include('templates/header.inc'); if(isset($_POST['submit'])){//handle the form require_once('connectioncomplaint.php');//connect to the db. //include "connectioncomplaint.php"; //create a function for escaping the data. function escape_data($data){ global $dbc;//need the connection. if(ini_get('magic_quotes_gpc')){ $data=stripslashes($data); } return mysql_real_escape_string($data, $dbc); }//end function $message=NULL;//create the empty new variable. //check for a username if(empty($_POST['userid'])){ $u=FALSE; $message .='<p> You forgot enter your userid!</p>'; }else{ $u=escape_data($_POST['userid']); } //check for existing password if(empty($_POST['password'])){ $p=FALSE; $message .='<p>You forgot to enter your existing password!</p>'; }else{ $p=escape_data($_POST['password']); } //check for a password and match againts the comfirmed password. if(empty($_POST['password1'])) { $np=FALSE; $message .='<p> you forgot to enter your new password!</p>'; }else{ if($_POST['password1'] == $_POST['password2']){ $np=escape_data($_POST['password1']); }else{ $np=FALSE; $message .='<p> your new password did not match the confirmed new password!</p>'; } } if($u && $p && $np){//if everything's ok. $query="SELECT userid FROM access WHERE (userid='$u' AND password=PASSWORD('$p'))"; $result=@mysql_query($query); $num=mysql_num_rows($result); if($num == 1){ $row=mysql_fetch_array($result, MYSQL_NUM); //make the query $query="UPDATE access SET password=PASSWORD('$np') WHERE userid=$row[0]"; $result=@mysql_query($query);//run the query. if(mysql_affected_rows() == 1) {//if it run ok. //send an email,if desired. echo '<p><b>your password has been changed.</b></p>'; include('templates/footer.inc');//include the HTML footer. exit();//quit the script. }else{//if it did not run OK. $message= '<p>Your password could not be change due to a system error.We apolpgize for any inconvenience.</p><p>' .mysql_error() .'</p>'; } }else{ $message= '<p> Your username and password do not match our records.</p>'; } mysql_close();//close the database connection. }else{ $message .='<p>Please try again.</p>'; } }//end oh=f the submit conditional. //print the error message if there is one. 
if(isset($message)){ echo'<font color="red">' , $message, '</font>'; } ?> <form action="<?php echo $_SERVER['PHP_SELF']; ?>" method="post"> <body> <script language="JavaScript1.2">mmLoadMenus();</script> <table width="604" height="599" border="0" align="center" cellpadding="0" cellspacing="0"> <tr> <td height="130" colspan="7"><img src="images/banner(E-Complaint)-.jpg" width="759" height="130" /></td> </tr> <tr> <td width="100" height="30" bgcolor="#ABD519"></td> <td width="100" bgcolor="#ABD519"></td> <td width="100" bgcolor="#ABD519"></td> <td width="100" bgcolor="#ABD519"></td> <td width="100" bgcolor="#ABD519"></td> <td width="160" bgcolor="#ABD519"> <?php include "header.php"; ?>&nbsp;</td> </tr> <tr> <td colspan="7" bgcolor="#FFFFFF"> <fieldset><legend> Enter your information in the form below:</legend> <p><b>User ID:</b> <input type="text" name="username" size="10" maxlength="20" value="<?php if(isset($_POST['userid'])) echo $_POST['userid']; ?>" /></p> <p><b>Current Password:</b> <input type="password" name="password" size="20" maxlength="20" /></p> <p><b>New Password:</b> <input type="password" name="password1" size="20" maxlength="20" /></p> <p><b>Confirm New Password:</b> <input type="password" name="password2" size="20" maxlength="20" /></p> </fieldset> <div align="center"> <input type="submit" name="submit" value="Change My Password" /></div> </form><!--End Form--> </td> </tr> </table> </body> </html>
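
    Those warnings mean the second argument to mysql_real_escape_string() - the $dbc link - is NULL inside escape_data(), i.e. connectioncomplaint.php never puts the connection into a variable named $dbc (or opens it after the function has already run). A sketch of the shape that include would need is below; hostname, credentials and database name are placeholders. A second mismatch worth fixing while you're there: the form field is named "username" but the script reads $_POST['userid'].

        <?php
        // connectioncomplaint.php - sketch: the link must land in $dbc, the variable escape_data() expects
        $dbc = mysql_connect('localhost', 'db_user', 'db_pass')
            or die('Could not connect: ' . mysql_error());
        mysql_select_db('ecomplaint', $dbc)
            or die('Could not select database: ' . mysql_error());
        ?>

        <!-- and in the form, make the field name match $_POST['userid']: -->
        <input type="text" name="userid" size="10" maxlength="20" />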

  • What's New in ASP.NET 4.0 Part Two: WebForms and Visual Studio Enhancements

    - by Rick Strahl
    In the last installment I talked about the core changes in the ASP.NET runtime that I’ve been taking advantage of. In this column, I’ll cover the changes to the Web Forms engine and some of the cool improvements in Visual Studio that make Web and general development easier. WebForms The WebForms engine is the area that has received most significant changes in ASP.NET 4.0. Probably the most widely anticipated features are related to managing page client ids and of ViewState on WebForm pages. Take Control of Your ClientIDs Unique ClientID generation in ASP.NET has been one of the most complained about “features” in ASP.NET. Although there’s a very good technical reason for these unique generated ids - they guarantee unique ids for each and every server control on a page - these unique and generated ids often get in the way of client-side JavaScript development and CSS styling as it’s often inconvenient and fragile to work with the long, generated ClientIDs. In ASP.NET 4.0 you can now specify an explicit client id mode on each control or each naming container parent control to control how client ids are generated. By default, ASP.NET generates mangled client ids for any control contained in a naming container (like a Master Page, or a User Control for example). The key to ClientID management in ASP.NET 4.0 are the new ClientIDMode and ClientIDRowSuffix properties. ClientIDMode supports four different ClientID generation settings shown below. For the following examples, imagine that you have a Textbox control named txtName inside of a master page control container on a WebForms page. <%@Page Language="C#"      MasterPageFile="~/Site.Master"     CodeBehind="WebForm2.aspx.cs"     Inherits="WebApplication1.WebForm2"  %> <asp:Content ID="content"  ContentPlaceHolderID="content"               runat="server"               ClientIDMode="Static" >       <asp:TextBox runat="server" ID="txtName" /> </asp:Content> The four available ClientIDMode values are: AutoID This is the existing behavior in ASP.NET 1.x-3.x where full naming container munging takes place. <input name="ctl00$content$txtName" type="text"        id="ctl00_content_txtName" /> This should be familiar to any ASP.NET developer and results in fairly unpredictable client ids that can easily change if the containership hierarchy changes. For example, removing the master page changes the name in this case, so if you were to move a block of script code that works against the control to a non-Master page, the script code immediately breaks. Static This option is the most deterministic setting that forces the control’s ClientID to use its ID value directly. No naming container naming at all is applied and you end up with clean client ids: <input name="ctl00$content$txtName"         type="text" id="txtName" /> Note that the name property which is used for postback variables to the server still is munged, but the ClientID property is displayed simply as the ID value that you have assigned to the control. This option is what most of us want to use, but you have to be clear on that because it can potentially cause conflicts with other controls on the page. If there are several instances of the same naming container (several instances of the same user control for example) there can easily be a client id naming conflict. 
Note that if you assign Static to a data-bound control, like a list child control in templates, you do not get unique ids either, so for list controls where you rely on unique id for child controls, you’ll probably want to use Predictable rather than Static. I’ll write more on this a little later when I discuss ClientIDRowSuffix. Predictable The previous two values are pretty self-explanatory. Predictable however, requires some explanation. To me at least it’s not in the least bit predictable. MSDN defines this value as follows: This algorithm is used for controls that are in data-bound controls. The ClientID value is generated by concatenating the ClientID value of the parent naming container with the ID value of the control. If the control is a data-bound control that generates multiple rows, the value of the data field specified in the ClientIDRowSuffix property is added at the end. For the GridView control, multiple data fields can be specified. If the ClientIDRowSuffix property is blank, a sequential number is added at the end instead of a data-field value. Each segment is separated by an underscore character (_). The key that makes this value a bit confusing is that it relies on the parent NamingContainer’s ClientID to build its own ClientID value. This effectively means that the value is not predictable at all but rather very tightly coupled to the parent naming container’s ClientIDMode setting. For my simple textbox example, if the ClientIDMode property of the parent naming container (Page in this case) is set to “Predictable” you’ll get this: <input name="ctl00$content$txtName" type="text"         id="content_txtName" /> which gives an id that based on walking up to the currently active naming container (the MasterPage content container) and starting the id formatting from there downward. Think of this as a semi unique name that’s guaranteed unique only for the naming container. If, on the other hand, the Page is set to “AutoID” you get the following with Predictable on txtName: <input name="ctl00$content$txtName" type="text"         id="ctl00_content_txtName" /> The latter is effectively the same as if you specified AutoID because it inherits the AutoID naming from the Page and Content Master Page control of the page. But again - predictable behavior always depends on the parent naming container and how it generates its id, so the id may not always be exactly the same as the AutoID generated value because somewhere in the NamingContainer chain the ClientIDMode setting may be set to a different value. For example, if you had another naming container in the middle that was set to Static you’d end up effectively with an id that starts with the NamingContainers id rather than the whole ctl000_content munging. The most common use for Predictable is likely to be for data-bound controls, which results in each data bound item getting a unique ClientID. Unfortunately, even here the behavior can be very unpredictable depending on which data-bound control you use - I found significant differences in how template controls in a GridView behave from those that are used in a ListView control. For example, GridView creates clean child ClientIDs, while ListView still has a naming container in the ClientID, presumably because of the template container on which you can’t set ClientIDMode. Predictable is useful, but only if all naming containers down the chain use this setting. Otherwise you’re right back to the munged ids that are pretty unpredictable. 
Another property, ClientIDRowSuffix, can be used in combination with ClientIDMode of Predictable to force a suffix onto list client controls. For example: <asp:GridView runat="server" ID="gvItems"              AutoGenerateColumns="false"             ClientIDMode="Static"              ClientIDRowSuffix="Id">     <Columns>     <asp:TemplateField>         <ItemTemplate>             <asp:Label runat="server" id="txtName"                        Text='<%# Eval("Name") %>'                   ClientIDMode="Predictable"/>         </ItemTemplate>     </asp:TemplateField>     <asp:TemplateField>         <ItemTemplate>         <asp:Label runat="server" id="txtId"                     Text='<%# Eval("Id") %>'                     ClientIDMode="Predictable" />         </ItemTemplate>     </asp:TemplateField>     </Columns>  </asp:GridView> generates client Ids inside of a column in the master page described earlier: <td>     <span id="txtName_0">Rick</span> </td> where the value after the underscore is the ClientIDRowSuffix field - in this case “Id” of the item data bound to the control. Note that all of the child controls require ClientIDMode=”Predictable” in order for the ClientIDRowSuffix to be applied, and the parent GridView controls need to be set to Static either explicitly or via Naming Container inheritance to give these simple names. It’s a bummer that ClientIDRowSuffix doesn’t work with Static to produce this automatically. Another real problem is that other controls process the ClientIDMode differently. For example, a ListView control processes the Predictable ClientIDMode differently and produces the following with the Static ListView and Predictable child controls: <span id="ctrl0_txtName_0">Rick</span> I couldn’t even figure out a way using ClientIDMode to get a simple ID that also uses a suffix short of falling back to manually generated ids using <%= %> expressions instead. Given the inconsistencies inside of list controls using <%= %>, ids for the ListView might not be a bad idea anyway. Inherit The final setting is Inherit, which is the default for all controls except Page. This means that controls by default inherit the parent naming container’s ClientIDMode setting. For more detailed information on ClientID behavior and different scenarios you can check out a blog post of mine on this subject: http://www.west-wind.com/weblog/posts/54760.aspx. ClientID Enhancements Summary The ClientIDMode property is a welcome addition to ASP.NET 4.0. To me this is probably the most useful WebForms feature as it allows me to generate clean IDs simply by setting ClientIDMode="Static" on either the page or inside of Web.config (in the Pages section) which applies the setting down to the entire page which is my 95% scenario. For the few cases when it matters - for list controls and inside of multi-use user controls or custom server controls) - I can use Predictable or even AutoID to force controls to unique names. For application-level page development, this is easy to accomplish and provides maximum usability for working with client script code against page controls. ViewStateMode Another area of large criticism for WebForms is ViewState. ViewState is used internally by ASP.NET to persist page-level changes to non-postback properties on controls as pages post back to the server. 
It’s a useful mechanism that works great for the overall mechanics of WebForms, but it can also cause all sorts of overhead for page operation as ViewState can very quickly get out of control and consume huge amounts of bandwidth in your page content. ViewState can also wreak havoc with client-side scripting applications that modify control properties that are tracked by ViewState, which can produce very unpredictable results on a Postback after client-side updates. Over the years in my own development, I’ve often turned off ViewState on pages to reduce overhead. Yes, you lose some functionality, but you can easily implement most of the common functionality in non-ViewState workarounds. Relying less on heavy ViewState controls and sticking with simpler controls or raw HTML constructs avoids getting around ViewState problems. In ASP.NET 3.x and prior, it wasn’t easy to control ViewState - you could turn it on or off and if you turned it off at the page or web.config level, you couldn’t turn it back on for specific controls. In short, it was an all or nothing approach. With ASP.NET 4.0, the new ViewStateMode property gives you more control. It allows you to disable ViewState globally either on the page or web.config level and then turn it back on for specific controls that might need it. ViewStateMode only works when EnableViewState="true" on the page or web.config level (which is the default). You can then use ViewStateMode of Disabled, Enabled or Inherit to control the ViewState settings on the page. If you’re shooting for minimal ViewState usage, the ideal situation is to set ViewStateMode to disabled on the Page or web.config level and only turn it back on particular controls: <%@Page Language="C#"      CodeBehind="WebForm2.aspx.cs"     Inherits="Westwind.WebStore.WebForm2"        ClientIDMode="Static"                ViewStateMode="Disabled"     EnableViewState="true"  %> <!-- this control has viewstate  --> <asp:TextBox runat="server" ID="txtName"  ViewStateMode="Enabled" />       <!-- this control has no viewstate - it inherits  from parent container --> <asp:TextBox runat="server" ID="txtAddress" /> Note that the EnableViewState="true" at the Page level isn’t required since it’s the default, but it’s important that the value is true. ViewStateMode has no effect if EnableViewState="false" at the page level. The main benefit of ViewStateMode is that it allows you to more easily turn off ViewState for most of the page and enable only a few key controls that might need it. For me personally, this is a perfect combination as most of my WebForm apps can get away without any ViewState at all. But some controls - especially third party controls - often don’t work well without ViewState enabled, and now it’s much easier to selectively enable controls rather than the old way, which required you to pretty much turn off ViewState for all controls that you didn’t want ViewState on. Inline HTML Encoding HTML encoding is an important feature to prevent cross-site scripting attacks in data entered by users on your site. In order to make it easier to create HTML encoded content, ASP.NET 4.0 introduces a new Expression syntax using <%: %> to encode string values. 
The encoding expression syntax looks like this:

<%: "<script type='text/javascript'>" +
    "alert('Really?');</script>" %>

which produces properly encoded HTML:

&lt;script type=&#39;text/javascript&#39;&gt;alert(&#39;Really?&#39;);&lt;/script&gt;

Effectively this is a shortcut to:

<%= HttpUtility.HtmlEncode(
    "<script type='text/javascript'>" +
    "alert('Really?');</script>") %>

Of course the <%: %> syntax can also evaluate expressions just like <%= %>, so the more common scenario applies this expression syntax against data your application is displaying. Here’s an example displaying some data model values:

<%: Model.Address.Street %>

This snippet shows displaying data from your application’s data store or, more importantly, from data entered by users. Anything that makes it easier and less verbose to HtmlEncode text is a welcome addition to help avoid potential cross-site scripting attacks. Although I listed inline HTML encoding here under WebForms, anything that uses the WebForms rendering engine, including ASP.NET MVC, benefits from this feature.

ScriptManager Enhancements

The ASP.NET ScriptManager control has in the past introduced some nice ways to take programmatic and markup control over script loading, but there were a number of shortcomings in this control. The ASP.NET 4.0 ScriptManager has a number of improvements that make it easier to control script loading, and it addresses a few of the shortcomings that have often kept me from using the control in favor of manual script loading.

The first is the AjaxFrameworkMode property, which finally lets you suppress loading the ASP.NET AJAX runtime. Disabled doesn’t load any ASP.NET AJAX libraries, but there’s also an Explicit mode that lets you pick and choose the library pieces individually and reduce the footprint of the ASP.NET AJAX script included if you are using the library.

There’s also a new EnableCdn property. Any script resource whose WebResource attribute sets the new CdnPath property to a non-null/empty value will be served from that CDN-supplied URL when EnableCdn is enabled on the ScriptManager:

[assembly: WebResource(
   "Westwind.Web.Resources.ww.jquery.js",
   "application/x-javascript",
   CdnPath = "http://mysite.com/scripts/ww.jquery.min.js")]

Cool, but a little too static for my taste, since this value can’t be changed at runtime to point at a debug script as needed, for example.

Assembly names for loading scripts from resources can now be simple names rather than fully qualified assembly names, which makes it less verbose to reference scripts from assemblies loaded from your bin folder or the assembly reference area in web.config:

<asp:ScriptManager runat="server" id="Id"
        EnableCdn="true"
        AjaxFrameworkMode="disabled">
    <Scripts>
        <asp:ScriptReference
            Name="Westwind.Web.Resources.ww.jquery.js"
            Assembly="Westwind.Web" />
    </Scripts>
</asp:ScriptManager>

The ScriptManager in 4.0 also supports script combining via the CompositeScript tag, which allows you to very easily combine scripts into a single script resource served via ASP.NET. Even nicer: you can specify the URL that the combined script is served with. 
Check out the following ScriptManager markup that combines several static file scripts and a script resource into a single ASP.NET-served resource with a static URL (allscripts.js):

<asp:ScriptManager runat="server" id="Id"
        EnableCdn="true"
        AjaxFrameworkMode="disabled">
    <CompositeScript
        Path="~/scripts/allscripts.js">
        <Scripts>
            <asp:ScriptReference
                Path="~/scripts/jquery.js" />
            <asp:ScriptReference
                Path="~/scripts/ww.jquery.js" />
            <asp:ScriptReference
                Name="Westwind.Web.Resources.editors.js"
                Assembly="Westwind.Web" />
        </Scripts>
    </CompositeScript>
</asp:ScriptManager>

When you render this into HTML, you’ll see a single script reference in the page:

<script src="scripts/allscripts.debug.js"
        type="text/javascript"></script>

All you need to do to make this work is ensure that allscripts.js and allscripts.debug.js exist in the scripts folder of your application - they can be empty, but the files have to be there. This is pretty cool, but you want to be really careful to use a unique URL for each combination of scripts you combine, or else browser and server caching will easily screw you up royally.

The ScriptManager also allows you to override native ASP.NET AJAX scripts now, as any script references defined in the Scripts section of the ScriptManager trump internal references. So if you want custom behavior, or you want to fix a possible bug in the core libraries that normally are loaded from resources, you can now do this simply by referencing the script resource name in the Name property and pointing at System.Web for the assembly. Not a common scenario, but when you need it, it can come in real handy.

Still, there are a number of shortcomings in this control. For one, the ScriptManager and ClientScript APIs still have no common entry point, so control developers are still faced with having to check and support both APIs to load scripts so that controls can work on pages that do or don’t have a ScriptManager present. The CdnPath is static and compiled in, which is very restrictive. And finally, there’s still no control over where scripts get loaded on the page - ScriptManager still injects scripts into the middle of the HTML markup rather than into the header or, optionally, the footer. This, in turn, means there is little control over script loading order, which can be problematic for control developers.

MetaDescription, MetaKeywords Page Properties

There are also a number of additional Page properties that correspond to some of the other features discussed in this column: ClientIDMode, ClientTarget and ViewStateMode. Another minor but useful feature is that you can now directly access the MetaDescription and MetaKeywords properties on the Page object to set the corresponding meta tags programmatically. Updating these values programmatically previously required either <%= %> expressions in the page markup or dynamic insertion of literal controls into the page. 
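For comparison, here is a rough sketch of that older dynamic-insertion approach, adding HtmlMeta controls to the page header from code-behind. The class name and content strings are illustrative, and the sketch assumes the page’s head element has runat="server":

using System;
using System.Web.UI;
using System.Web.UI.HtmlControls;

namespace Westwind.WebStore
{
    public partial class WebForm2 : Page
    {
        protected void Page_Load(object sender, EventArgs e)
        {
            // Pre-4.0 style: build the meta tags by hand and add them
            // to the <head runat="server"> section of the page
            var description = new HtmlMeta();
            description.Name = "description";
            description.Content = "This article discusses the new features in ASP.NET 4.0";
            Page.Header.Controls.Add(description);

            var keywords = new HtmlMeta();
            keywords.Name = "keywords";
            keywords.Content = "ASP.NET,4.0,New Features";
            Page.Header.Controls.Add(keywords);
        }
    }
}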
You can now just set these properties programmatically from any Control-derived class on the page, or from the Page itself:

Page.MetaKeywords = "ASP.NET,4.0,New Features";
Page.MetaDescription = "This article discusses the new features in ASP.NET 4.0";

Note that there’s no corresponding ASP.NET tag for the HTML meta element, so the only way to specify these values in markup and access them is via the @Page tag:

<%@Page Language="C#"
    CodeBehind="WebForm2.aspx.cs"
    Inherits="Westwind.WebStore.WebForm2"
    ClientIDMode="Static"
    MetaDescription="Article that discusses what's new in ASP.NET 4.0"
    MetaKeywords="ASP.NET,4.0,New Features" %>

Nothing earth-shattering, but quite convenient.

Visual Studio 2010 Enhancements for Web Development

For Web development there is also a host of editor enhancements in Visual Studio 2010. Some of these are not Web-specific, but they are useful for Web developers in general.

Text Editors

Throughout Visual Studio 2010, the text editors have all been updated to a new core engine based on WPF, which provides some interesting new features for the various code editors, including the nice ability to zoom in and out with Ctrl-MouseWheel to quickly change the size of the text. There are many more API options to control the editor, and although Visual Studio 2010 doesn’t yet use many of these features itself, we can look forward to add-ins and future editor updates from the various language teams that take advantage of the visual richness WPF brings to editing.

On the negative side, I’ve noticed that occasionally the code editor - and especially the HTML and JavaScript editors - will lose the ability to use various navigation keys like the arrow, backspace and delete keys, which requires closing and reopening the documents at times. This issue seems to be well documented, so I suspect it will be addressed soon with a hotfix or within the first service pack. Overall though, the code editors work very well, especially given that they were rewritten completely using WPF, which was one of my big worries when I first heard about the complete redesign of the editors.

Multi-Targeting

Visual Studio now targets all versions of the .NET Framework from 2.0 forward. You can use Visual Studio 2010 to work on your ASP.NET 2.0, 3.0 and 3.5 applications, which is a nice way to get your feet wet with the new development environment without having to make changes to existing applications. It’s nice to have one tool to work in for all the different versions.

Multi-Monitor Support

One cool feature of Visual Studio 2010 is the ability to easily drag windows out of the Visual Studio environment and onto the desktop, including onto another monitor. Since Web development often involves working with a host of designers and editors at the same time - visual designer, HTML markup window, code-behind and JavaScript editor - it’s really nice to have a little more screen real estate to work with each of them. This is a welcome change in the environment.

IntelliSense Snippets for HTML and JavaScript Editors

The HTML and JavaScript editors now finally support IntelliSense snippets for creating macro-based template expansions, a feature that has been in the core C# and Visual Basic code editors since Visual Studio 2005. Snippets allow you to create short XML-based template definitions that can act as static macros or as real templates with replaceable values that are embedded into the expanded text. 
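To give a rough idea of what a definition looks like, here is a minimal sketch of a custom HTML snippet. The title, shortcut and markup are made up, and the exact schema details - in particular the Language value - should be checked against the built-in snippet files:

<?xml version="1.0" encoding="utf-8"?>
<CodeSnippets xmlns="http://schemas.microsoft.com/VisualStudio/2005/CodeSnippet">
  <CodeSnippet Format="1.0.0">
    <Header>
      <!-- Title shows up in the snippet picker; the shortcut is what you type before expanding -->
      <Title>Static Label</Title>
      <Shortcut>lblstatic</Shortcut>
      <Description>Inserts an ASP.NET Label with a static client ID</Description>
    </Header>
    <Snippet>
      <Declarations>
        <!-- Replaceable value that gets highlighted when the snippet expands -->
        <Literal>
          <ID>id</ID>
          <Default>txtName</Default>
        </Literal>
      </Declarations>
      <Code Language="html">
        <![CDATA[<asp:Label runat="server" ID="$id$" ClientIDMode="Static" />$end$]]>
      </Code>
    </Snippet>
  </CodeSnippet>
</CodeSnippets>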
The XML syntax for these snippets is straightforward and it’s pretty easy to create custom snippets manually. You can easily create snippets using XML and store them in your custom snippets folder (C:\Users\rstrahl\Documents\Visual Studio 2010\Code Snippets\Visual Web Developer\My HTML Snippets and My JScript Snippets), but it helps to use one of the third-party tools that exist to simplify the process for you. I use SnippetEditor, by Bill McCarthy, which makes short work of creating snippets interactively (http://snippeteditor.codeplex.com/). Note: you may have to manually add the Visual Studio 2010 user-specific snippet folders to this tool to see the ones you’ve already created.

Code snippets are some of the biggest time savers, and HTML editing more than anything involves lots of repetitive markup that lends itself to text expansion. Visual Studio 2010 includes a slew of built-in snippets (that you can also customize!) and you can create your own very easily. If you haven’t done so already, I encourage you to spend a little time examining your coding patterns, finding the repetitive code that you write, and converting it into snippets. I’ve been using CodeRush for this for years, but now you can do much of the basic expansion natively for HTML and JavaScript snippets.

jQuery Integration Is Now Native

jQuery is a popular JavaScript library, and Microsoft has recently stated that it will become the primary client-side scripting technology to drive higher-level script functionality in the various ASP.NET Web projects that Microsoft provides. In Visual Studio 2010, the default full project template includes jQuery as part of a new project, including the support files that provide IntelliSense (-vsdoc files). IntelliSense support for jQuery is now also baked into Visual Studio 2010, so unlike Visual Studio 2008, which required a separate download, no further installs are required for a rich IntelliSense experience with jQuery.

Summary

ASP.NET 4.0 brings many useful improvements to the platform, but thankfully most of the changes are incremental, don’t compromise backwards compatibility, and allow developers to ease into the new features one feature at a time. None of the changes in ASP.NET 4.0 or Visual Studio 2010 are monumental or game changers. The bigger features are language and .NET Framework changes that are also optional. This ASP.NET and tools release feels more like fine-tuning and getting some long-standing kinks worked out of the platform. It shows that the ASP.NET team is dedicated to paying attention to community feedback and responding with changes to the platform and development environment based on this feedback.

If you haven’t gotten your feet wet with ASP.NET 4.0 and Visual Studio 2010, there’s no reason not to give it a shot now - the ASP.NET 4.0 platform is solid and Visual Studio 2010 works very well for a brand new release. Check it out.

© Rick Strahl, West Wind Technologies, 2005-2010. Posted in ASP.NET
