Search Results

Search found 2210 results on 89 pages for 'stupid phil'.

  • <%: %> brackets for HTML Encoding in ASP.NET 4.0

    - by Slauma
    I accidentally found a post about a new feature in ASP.NET 4.0: expressions enclosed in the new brackets <%: Content %> should be rendered HTML encoded. I've tried this within a databound label in a FormView like so: <asp:Label ID="MyLabel" runat="server" Text='<%: Eval("MyTextProperty") %>' /> But it doesn't work: the text property contains script tags (for testing), but the output is blank. Using the traditional way works: <asp:Label ID="MyLabel" runat="server" Text='<%# HttpUtility.HtmlEncode(Eval("MyTextProperty")) %>' /> What am I doing wrong? (On a side note: I am too stupid to find any information: Google refuses to search for that thing, the VS2010 online help on MSDN offers a lot of hits but nothing related to my search, and Stack Overflow search is no better. And I don't know what these "things" (the brackets, I mean) are officially called, so I can't come up with a better search term.) Any info and additional links and resources are welcome! Thanks in advance!

    Read the article

  • Lighttpd 403 Errors on HTML and PHP pages

    - by Brian
    I installed lighttpd on CentOS 5.5 64-bit. Everything seems fine and running except I cannot get past 403 errors on both HTML and PHP pages. I have used CHMOD and CHOWN, changed ownership in the config file, done everything possible and have been stuck for 2 days. Appreciate any help, and here's hoping to a stupid error on my part. Here is the log file with debug options on: 2011-02-21 11:23:13: (request.c.304) fd: 7 request-len: 408 GET /index.html HTTP/1.1 Host: 10.0.1.8 User-Agent: Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10.6; en-US; rv:1.9.2.13) Gecko/20101203 Firefox/3.6.13 Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8 Accept-Language: en-us,en;q=0.5 Accept-Encoding: gzip,deflate Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7 Keep-Alive: 115 Connection: keep-alive Cache-Control: max-age=0 2011-02-21 11:23:13: (response.c.241) run condition 2011-02-21 11:23:13: (response.c.300) -- splitting Request-URI 2011-02-21 11:23:13: (response.c.301) Request-URI : /index.html 2011-02-21 11:23:13: (response.c.302) URI-scheme : http 2011-02-21 11:23:13: (response.c.303) URI-authority: 10.0.1.8 2011-02-21 11:23:13: (response.c.304) URI-path : /index.html 2011-02-21 11:23:13: (response.c.305) URI-query : 2011-02-21 11:23:13: (response.c.349) -- sanatising URI 2011-02-21 11:23:13: (response.c.350) URI-path : /index.html 2011-02-21 11:23:13: (response.c.470) -- before doc_root 2011-02-21 11:23:13: (response.c.471) Doc-Root : /srv/www/lighttpd 2011-02-21 11:23:13: (response.c.472) Rel-Path : /index.html 2011-02-21 11:23:13: (response.c.473) Path : 2011-02-21 11:23:13: (response.c.521) -- after doc_root 2011-02-21 11:23:13: (response.c.522) Doc-Root : /srv/www/lighttpd 2011-02-21 11:23:13: (response.c.523) Rel-Path : /index.html 2011-02-21 11:23:13: (response.c.524) Path : /srv/www/lighttpd/index.html 2011-02-21 11:23:13: (response.c.541) -- logical -> physical 2011-02-21 11:23:13: (response.c.542) Doc-Root : /srv/www/lighttpd 2011-02-21 11:23:13: (response.c.543) Rel-Path : /index.html 2011-02-21 11:23:13: (response.c.544) Path : /srv/www/lighttpd/index.html 2011-02-21 11:23:13: (response.c.561) -- handling physical path 2011-02-21 11:23:13: (response.c.562) Path : /srv/www/lighttpd/index.html 2011-02-21 11:23:13: (response.c.608) -- access denied 2011-02-21 11:23:13: (response.c.609) Path : /srv/www/lighttpd/index.html 2011-02-21 11:23:13: (response.c.128) Response-Header: HTTP/1.1 403 Forbidden Content-Type: text/html Content-Length: 345 Date: Mon, 21 Feb 2011 16:23:13 GMT Server: lighttpd/1.4.28 Here is the directory listing. I used CHOWN to set to lighttpd:lighttpd [root@localhost lighttpd]# ls -al total 40 drwxrwxrwx 2 lighttpd lighttpd 4096 Feb 21 10:48 . drwxrwxrwx 3 lighttpd lighttpd 4096 Feb 21 10:57 .. -rwxrwxrwx 1 lighttpd lighttpd 10 Feb 20 08:32 index.html -rwxrwxrwx 1 lighttpd lighttpd 20 Feb 21 10:48 index.php -rwxrwxrwx 1 lighttpd lighttpd 20 Feb 21 10:39 info.php [root@localhost lighttpd]# Requested Commands: [root@localhost lighttpd]# ls -ld / /srv /srv/www drwxr-xr-x 22 root root 4096 Feb 21 04:39 / drwxrwxrwx 3 lighttpd lighttpd 4096 Feb 20 07:38 /srv drwxrwxrwx 3 lighttpd lighttpd 4096 Feb 21 10:57 /srv/www [root@localhost lighttpd]# ps auxZ | grep lighttpd root:system_r:httpd_t lighttpd 3842 0.0 0.2 48368 896 ? S 12:24 0:00 /usr/sbin/lighttpd -f /etc/lighttpd/lighttpd.conf root:system_r:unconfined_t:SystemLow-SystemHigh root 3845 0.0 0.2 61152 764 pts/0 R+ 12:24 0:00 grep lighttpd
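
    Given the "root:system_r:httpd_t" label in the ps auxZ output, a likely culprit besides plain file permissions is SELinux, which CentOS 5.5 runs in enforcing mode by default. A minimal check-and-relabel sketch, assuming the default targeted policy and the document root shown above:

        # is SELinux enforcing, and what context do the files carry?
        getenforce
        ls -lZ /srv/www/lighttpd
        # give the web root the type the web-server domain (httpd_t) is allowed to read
        chcon -R -t httpd_sys_content_t /srv/www/lighttpd
        # or temporarily switch to permissive mode to confirm SELinux is the cause at all
        setenforce 0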

    Read the article

  • Error codes 80070490 and 8024200D in Windows Update

    - by Sammy
    How do I get past these stupid errors? The way I have set things up is that Windows Update tells me when there are new updates available and then I review them before installing them. Yesterday it told me that there were 11 new updates. So I reviewed them and saw that about half of them were security updates for Vista x64 and .NET Framework 2.0 SP2, and half of them were just regular updates for Vista x64. I checked them all and hit the Install button. It seemed to work at first - updates were being downloaded and installed - but then at update 11 of 11 total it got stuck and gave me the two error codes you see in the title. Here are some screenshots to give you an idea of what it looks like. This is what it looks like when it presents the updates to me. This is what it looks like when the installation fails. I'm not sure if you're going to see this very well, but these are the updates it's trying to install. Update: This is on Windows Vista Ultimate 64-bit with integrated SP2, installed only two weeks ago on 2012-10-02. Aside from this, the install is working flawlessly. I have not made any major changes to the system like installing new devices or drivers. What I have tried so far: - I tried installing the System Update Readiness Tool (the correct one for Vista x64) from Microsoft. This did not solve the issue. Microsoft resource links: Solutions to 80070490: Windows Update error 80070490; System Update Readiness Tool fixes Windows Update errors in Windows 7, Windows Vista, Windows Server 2008 R2, and Windows Server 2008. Solutions to 8024200D: Windows Update error 8024200d. Essentially both solutions tell you to install the System Update Readiness Tool for your system. As I have done so and it didn't solve the problem, the next step would be to try to repair Windows. Before I do that, is there anything else I can try? Microsoft automatic troubleshooter: if I click the automatic troubleshooter link available on the solution web page above, it directs me to download a file called windowsupdate.diagcab. But after downloading, this file is not associated with any Windows program. Is this the so-called Microsoft Fix It program? It doesn't have its icon; it's just a blank file. Does it need to be associated? And with what Windows program?

    Read the article

  • Is dual-booting an OS more or less secure than running a virtual machine?

    - by Mark
    I run two operating systems on two separate disk partitions on the same physical machine (a modern MacBook Pro). In order to isolate them from each other, I've taken the following steps: Configured /etc/fstab with ro,noauto (read-only, no auto-mount) Fully encrypted each partition with a separate encryption key (committed to memory) Let's assume that a virus infects my first partition unbeknownst to me. I log out of the first partition (which encrypts the volume), and then turn off the machine to clear the RAM. I then un-encrypt and boot into the second partition. Can I be reasonably confident that the virus has not / cannot infect both partitions, or am I playing with fire here? I realize that MBPs don't ship with a TPM, so a boot-loader infection going unnoticed is still a theoretical possibility. However, this risk seems about equal to the risk of the VMWare/VirtualBox Hypervisor being exploited when running a guest OS, especially since the MBP line uses UEFI instead of BIOS. This leads to my question: is the dual-partitioning approach outlined above more or less secure than using a Virtual Machine for isolation of services? Would that change if my computer had a TPM installed? Background: Note that I am of course taking all the usual additional precautions, such as checking for OS software updates daily, not logging in as an Admin user unless absolutely necessary, running real-time antivirus programs on both partitions, running a host-based firewall, monitoring outgoing network connections, etc. My question is really a public check to see if I'm overlooking anything here and try to figure out if my dual-boot scheme actually is more secure than the Virtual Machine route. Most importantly, I'm just looking to learn more about security issues. EDIT #1: As pointed out in the comments, the scenario is a bit on the paranoid side for my particular use-case. But think about people who may be in corporate or government settings and are considering using a Virtual Machine to run services or applications that are considered "high risk". Are they better off using a VM or a dual-boot scenario as I outlined? An answer that effectively weighs any pros/cons to that trade-off is what I'm really looking for in an answer to this post. EDIT #2: This question was partially fueled by debate about whether a Virtual Machine actually protects a host OS at all. Personally, I think it does, but consider this quote from Theo de Raadt on the OpenBSD mailing list: x86 virtualization is about basically placing another nearly full kernel, full of new bugs, on top of a nasty x86 architecture which barely has correct page protection. Then running your operating system on the other side of this brand new pile of shit. You are absolutely deluded, if not stupid, if you think that a worldwide collection of software engineers who can't write operating systems or applications without security holes, can then turn around and suddenly write virtualization layers without security holes. -http://kerneltrap.org/OpenBSD/Virtualization_Security By quoting Theo's argument, I'm not endorsing it. I'm simply pointing out that there are multiple perspectives here, so I'm trying to find out more about the issue.

    Read the article

  • Uninstall php and nginx or fix setup

    - by jreed121
    First off, I'm a huge Linux noob - sorry... I'm trying to set up nginx with php-fpm on Debian and I'm pretty sure that I've completely screwed it up. nginx seems to be running fine because I can hit it from a web browser and it loads the stock "Welcome to nginx!" page. I'm not so sure about php-fpm though. When I try something like # restart php-fpm I get: bash: restart: command not found For one thing, php-fpm somehow got installed as php5-fpm when I do root@server:/etc/init.d# ls, which seems to contradict every tutorial and help doc I've read (supposed to be 'php-fpm'). I can restart it with this: service php5-fpm restart And if I just enter the package name 'php5-fpm' I get this: root@server:~# php5-fpm [17-Nov-2012 23:15:36] NOTICE: PHP message: PHP Warning: PHP Startup: Unable to load dynamic library '/usr/lib/php5/20100525/suhosin.so' - /usr/lib/php5/20100525/suhosin.so: cannot open shared object file: No such file or directory in Unknown on line 0 [17-Nov-2012 23:15:36] ERROR: An another FPM instance seems to already listen on /var/run/php5-fpm.sock [17-Nov-2012 23:15:36] ERROR: FPM initialization failed The root for nginx is /usr/share/nginx/html; when I try to navigate to a .php file in there with my web browser, it tries to download the file instead of interpreting it. I would like this folder to be in my user's home directory, i.e. /home/administrator/www or /home/nginx/www. I know that in order to do this I need to modify nginx.conf, but I find that configuration file difficult to understand. I suppose the fact that my .php scripts aren't being handled is my bigger problem anyway. When I try to see what's running on port 9000 (php-fpm's default port) with lsof -i :9000 it returns nothing - I guess indicating that it isn't listening. Then I head over to vim /etc/php5/fpm/php-fpm.conf and there is nowhere to designate a port number. So should I just uninstall everything and start from scratch? If so, how do I clean it all up? Any suggestions for a tutorial once I'm ready to try again? Should I attempt to troubleshoot this mess? If so, where should I start? Sorry guys, I'm feeling pretty stupid and lost right now, and I'm not sure what my next steps in trying to resolve this issue are. I realize that this is a horrible question for this type of Q&A site, but I'd really appreciate any guidance.
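
    The "tries to download the file instead of interpreting it" symptom usually just means nginx has no fastcgi handler wired up for .php requests, and on a stock Debian package layout the listening address is normally defined in the pool file rather than in php-fpm.conf itself. A short diagnostic sketch (the pool-file path is the usual Debian default and may differ on a given box):

        # is php5-fpm running, and is its unix socket there?
        pgrep -l php5-fpm
        ls -l /var/run/php5-fpm.sock
        # the listen address normally lives in the pool config, not php-fpm.conf
        grep -n "listen" /etc/php5/fpm/pool.d/www.conf
        # does any nginx server block actually hand .php requests to that socket?
        grep -rn "fastcgi_pass" /etc/nginx/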

    Read the article

  • Tutorial for configuring OpenVPN [on hold]

    - by user2699451
    I have been through 10+ tutorials on setting up OpenVPN, and each tutorial gives me a different problem... Does anyone know of a decent and helpful website/tutorial I could go to in order to get it set up? I have been battling through it for almost 2 months now. Yes, I have also bugged forums.openvpn, but I think I have "reached my post limit" with them. I have to configure it remotely via ssh. UPDATE: okay, I have been asked to be more clear on the topic. I followed this tutorial (as an example) - http://www.servermom.com/how-to-build-openvpn-server-on-centos-6-x/732/ - and had no issues setting up, etc., except that when I boot into Windows and run the OpenVPN GUI client, it connects and gives this error: WARNING: Bad encapsulated packet length from peer (21331), which must be 0 and <= 1576 -- please ensure that --tun-mtu or --link-mtu is equal on both peers -- this condition could also indicate a possible active attack on the TCP link -- [Attemping restart...] Here is my server config:

        port 1194 #- port
        proto udp #- protocol
        dev tun
        tun-mtu 1500
        tun-mtu-extra 32
        mssfix 1450
        reneg-sec 0
        ca /etc/openvpn/easy-rsa/2.0/keys/ca.crt
        cert /etc/openvpn/easy-rsa/2.0/keys/server.crt
        key /etc/openvpn/easy-rsa/2.0/keys/server.key
        dh /etc/openvpn/easy-rsa/2.0/keys/dh1024.pem
        plugin /usr/lib64/openvpn/plugin/lib/openvpn-auth-pam.so /etc/pam.d/login #- Co$
        #plugin /etc/openvpn/radiusplugin.so /etc/openvpn/radiusplugin.cnf #- Uncomment$
        client-cert-not-required
        username-as-common-name
        server 10.8.0.0 255.255.255.0
        push "redirect-gateway def1"
        push "dhcp-option DNS 8.8.8.8"
        push "dhcp-option DNS 8.8.4.4"
        keepalive 5 30
        comp-lzo
        persist-key
        persist-tun
        status 1194.log
        verb 3

    and my client config:

        client
        dev tun
        proto udp
        remote [server ip] 1194 # - Your server IP and OpenVPN Port
        resolv-retry infinite
        nobind
        tun-mtu 1500
        tun-mtu-extra 32
        mssfix 1450
        persist-key
        persist-tun
        ca ca.crt
        auth-user-pass
        comp-lzo
        reneg-sec 0
        verb 3

    OpenVPN client log:

        Thu Oct 31 11:51:29 2013 OpenVPN 2.0.9 Win32-MinGW [SSL] [LZO] built on Oct 1 2006
        Thu Oct 31 11:51:44 2013 IMPORTANT: OpenVPN's default port number is now 1194, based on an official port number assignment by IANA. OpenVPN 2.0-beta16 and earlier used 5000 as the default port.
        Thu Oct 31 11:51:44 2013 WARNING: No server certificate verification method has been enabled. See http://openvpn.net/howto.html#mitm for more info.
        Thu Oct 31 11:51:44 2013 LZO compression initialized
        Thu Oct 31 11:51:44 2013 Control Channel MTU parms [ L:1576 D:140 EF:40 EB:0 ET:0 EL:0 ]
        Thu Oct 31 11:51:44 2013 Data Channel MTU parms [ L:1576 D:1450 EF:44 EB:135 ET:32 EL:0 AF:3/1 ]
        Thu Oct 31 11:51:44 2013 Local Options hash (VER=V4): '2547efd2'
        Thu Oct 31 11:51:44 2013 Expected Remote Options hash (VER=V4): '77cf0943'
        Thu Oct 31 11:51:44 2013 Attempting to establish TCP connection with x.x.x.x:1194
        Thu Oct 31 11:51:44 2013 TCP connection established with x.x.x.x:1194
        Thu Oct 31 11:51:44 2013 TCPv4_CLIENT link local: [undef]
        Thu Oct 31 11:51:44 2013 TCPv4_CLIENT link remote: x.x.x.x:1194
        // after this it just hangs, nothing happens

    So I don't know what I am doing wrong, but I am getting a bit impatient, and on each forum I post this, I get stupid/unrelated/unhelpful answers...
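
    One detail worth checking before anything else: both configs above say proto udp, yet the client log reports "Attempting to establish TCP connection", and the client binary is OpenVPN 2.0.9 from 2006, so the Windows GUI may well be reading a different (or much older) client config than the one shown. A small server-side sketch to confirm what is really listening and what the daemon sees when the client connects (log location assumes the CentOS default of logging via syslog):

        # what is actually bound to 1194 - it should be a UDP listener, and nothing on TCP
        netstat -anup | grep 1194
        netstat -antp | grep 1194
        # watch the server's view while the Windows client tries to connect
        tail -f /var/log/messages | grep -i openvpn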

    Read the article

  • shell script to start multiple Java programs from a directory at boot

    - by zcourts
    I'm not sure if this is the best approach to this; it's my first time doing all of this (including writing shell scripts). OS: CentOS. My problem: I want to start multiple shell scripts at boot. One of the shell scripts is to start my own services and 3 others are for third party services. The shell script to start my own services will be looking for jar files. I currently have two services (will change), written in Java. All services are named under the convention prefix-service-servicename. What I've done: I created the following directory structure:

        /home/username/scripts
            init.sh
            boot/
            boot/startthirdprtyservice1.sh
            boot/startthirdprtyservice2.sh
            boot/startthirdprtyservice3.sh
            boot/startmyservices.sh
        /home/username/services
            prefix-lib-libraryname.jar
            prefix-lib-libraryname.jar
            prefix-service-servicename.jar
            prefix-service-servicename.jar
            prefix-service-servicename.jar

    In init.sh I have the following:

        #!/bin/sh
        #This scripts run all executable scripts in the boot directory at boot
        #done by adding this script to the file /etc/rc.d/rc.local
        #nohup
        #run-parts /home/username/scripts/boot/*
        #for each file in the boot dir...
        # ignore the HUP (hangup) signal
        for s in ./boot/*;do
            if [ -x $s ]; then
                echo "Starting $s"
                nohup $s &
            fi
        done
        echo "Done starting bootup scripts "
        echo "\n"

    In the script boot/startmyservices.sh I have:

        #!/bin/sh
        fnmatch () { case "$2" in $1) return 0 ;; esac ; return 1 ; }
        ##sub strin to match for
        SUBSTRING="prefix-service"
        for s in /home/username/services/*;do
            if [ -x $s ]; then
                #match service in the filename , i.e. only services are started
                if fnmatch "$SUBSTRING" "$s" ; then
                    echo "Starting $s "
                    nohup $s &
                fi
            fi
        done
        echo "Done starting Services"
        echo "\n"

    Finally: usually you can stick a program in /etc/rc.d/rc.local for it to be run at boot, but I don't think this works in this case, or rather I don't know what to put in there. I've just learnt how to do this by reading up a bit, so I'm not sure it's particularly the best thing to do, so any advice is appreciated. When I run init.sh, nohup.out contains "Starting the thirdparty daemon... thirdparty started... ...." but nothing from myservices.sh, and my Java services aren't running. I'm not sure where to start debugging or what could be going wrong. Edit: Found some issues and got it to work: used -x instead of -n to check if the string is non-zero, needed the substring check to also be if [[ $s = $SUBSTRING ]] ; then and this last one was just stupid - missing java -jar in front of $s. Still unsure of how to get init.sh to run at boot though.
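
    Pulling the fixes from the edit together, a minimal sketch of what boot/startmyservices.sh could look like (paths and the prefix-service naming are taken from the question; the log redirect is only illustrative):

        #!/bin/sh
        SUBSTRING="prefix-service"
        for s in /home/username/services/*.jar; do
            case "$s" in
                *"$SUBSTRING"*)
                    # only jars whose name contains prefix-service, i.e. skip the prefix-lib ones
                    echo "Starting $s"
                    nohup java -jar "$s" > /dev/null 2>&1 &
                    ;;
            esac
        done

    And since init.sh loops over ./boot/* with a relative path, the rc.local entry needs to change into the scripts directory first, for example:

        chmod +x /home/username/scripts/init.sh /home/username/scripts/boot/*.sh
        echo "cd /home/username/scripts && ./init.sh &" >> /etc/rc.d/rc.local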

    Read the article

  • Critique My Backup and Storage Plan

    - by MetaHyperBolic
    My current storage (RAID-1 off of a hardware RAID card) and backup (a spare drive) solutions for my home network are inadequate. I have too much data scattered on various one-off drives. It is time to evolve. Backups seem simple enough, at least: lots of big drives. However, I am bewildered by the number of choices for small home storage. The Drobo S looks appealing. So does the ReadyNAS. I am not looking for bunches of shiny features, I'm mostly interested in reliability. I am not interested in building Yet Another PC to create a file server or doing something in the cloud, or whatever. I'm stupid, so I am keeping it simple. Requirements for Main Volume: Starting working space roughly 2TB, with options for growth up to 5TB RAID or something RAID-like with at least one parity drive eSATA II for speed during backups Ability to shut down gracefully when alerted of low power by a UPS Optional but Desirable: Will take 2TB drives now with options for the larger 3TB drives coming in 2010-2011 Optional but Desirable: : RAID-6 or something similar, with two parity drives Optional but Desirable: : Hot spare Ethernet connection not required, as the volume will be shared via the same machines which runs my home print server Backups: Backup performed via ROBOCOPY in mirror mode to an external hard drive via a eSATA II connection. Start with rotating between two external 2TB hard drives, will go up to six external 2TB drives. Start with a weekly backup, move to a bi-weekly backup as more drives are added. Move to 3TB drives as the size of my main volume increases. Backup drives will be stored on an off-site location. Hard drives: I plan on buying all of the same model, but different batches from different vendors. I found a "burn-in" utility with which I can pound away on the drives for a couple of weeks before adding them to the backup pool or the main volume. I estimate that I am looking at roughly $1,500 to start, once I start throwing in two TB drives for backup and four for storage. So, are there any obvious flaws in my plan? What have I overlooked? Any suggestions for the storage device for my main volume that fits my requirements? Or do I just keep it simple, 2 drives in RAID-1, then perform due diligence with my backups, accepting that I will have to buy a whole new unit when my data grows past 2TB?

    Read the article

  • Malware - Technical analysis

    - by nullptr
    Note: Please do not mod down or close. Im not a stupid PC user asking to fix my pc problem. I am intrigued and am having a deep technical look at whats going on. I have come across a Windows XP machine that is sending unwanted p2p traffic. I have done a 'netstat -b' command and explorer.exe is sending out the traffic. When I kill this process the traffic stops and obviously Windows Explorer dies. Here is the header of the stream from the Wireshark dump (x.x.x.x) is the machines IP. GNUTELLA CONNECT/0.6 Listen-IP: x.x.x.x:8059 Remote-IP: 76.164.224.103 User-Agent: LimeWire/5.3.6 X-Requeries: false X-Ultrapeer: True X-Degree: 32 X-Query-Routing: 0.1 X-Ultrapeer-Query-Routing: 0.1 X-Max-TTL: 3 X-Dynamic-Querying: 0.1 X-Locale-Pref: en GGEP: 0.5 Bye-Packet: 0.1 GNUTELLA/0.6 200 OK Pong-Caching: 0.1 X-Ultrapeer-Needed: false Accept-Encoding: deflate X-Requeries: false X-Locale-Pref: en X-Guess: 0.1 X-Max-TTL: 3 Vendor-Message: 0.2 X-Ultrapeer-Query-Routing: 0.1 X-Query-Routing: 0.1 Listen-IP: 76.164.224.103:15649 X-Ext-Probes: 0.1 Remote-IP: x.x.x.x GGEP: 0.5 X-Dynamic-Querying: 0.1 X-Degree: 32 User-Agent: LimeWire/4.18.7 X-Ultrapeer: True X-Try-Ultrapeers: 121.54.32.36:3279,173.19.233.80:3714,65.182.97.15:5807,115.147.231.81:9751,72.134.30.181:15810,71.59.97.180:24295,74.76.84.250:25497,96.234.62.221:32344,69.44.246.38:42254,98.199.75.23:51230 GNUTELLA/0.6 200 OK So it seems that the malware has hooked into explorer.exe and hidden its self quite well as a Norton Scan doesn't pick anything up. I have looked in Windows firewall and it shouldn't be letting this traffic through. I have had a look into the messages explorer.exe is sending in Spy++ and the only related ones I can see are socket connections etc... My question is what can I do to look into this deeper? What does malware achieve by sending p2p traffic? I know to fix the problem the easiest way is to reinstall Windows but I want to get to the bottom of it first, just out of interest. Edit: Had a look at Deoendency Walker and Process Explorer. Both great tools. Here is a image of the TCP connections for explorer.exe in Process Explorer http://img210.imageshack.us/img210/3563/61930284.gif

    Read the article

  • Amusing or Sad? Network Solutions

    - by dbasnett
    When I got sick my email ended up in every drug sellers email list. Some days I get over 200 emails selling everything from Viagra to Xanax. Either they don't know what my condition is or they are telling me you are a goner, might as well chill-ax and have a good time. In order to cut down on the mail being downloaded I thought I would add all of the Junk email senders from Outlook to my Network Solution mail server. Much to my amazement I could not find that import Spammers button, so I submitted a tech support request. Here is the response: Thank you for contacting Network Solutions Customer Service Department. We are committed to creating the best Customer experience possible. One of the first ways we can demonstrate our commitment to this goal is to quickly and efficiently handle your recent request. We apologize for any inconvenience this might have caused you. With regard to your concern, please be advised that we cannot import blocked senders in to you e-mail servers. An alternative option is for you to create a Custom Filter that filters unwanted e-mails. To create a Custom Filter: Open a Web browser (e.g., Netscape, Microsoft Internet Explorer, etc.). Type mail.[domain name].[ext] in the address line. Login to your Network Solutions email account. Click on the Configuration left menu tab. Click on the Custom Filter link. Type the rule name. blah, blah, blah Basically add them one at a time. "We are committed to creating the best Customer experience possible." No you are not. You are trying to squeeze every nickle you can out of me. "With regard to your concern, please be advised that we cannot import blocked senders in to you e-mail servers." Maybe I should apply for a job to write those ten complicated lines of code... Maybe I should question my choice of vendors, because if they truly "cannot" then they are to stupid to have my business. It is both amusing and sad. I'll be posting this in every forum I am a member of.

    Read the article

  • AWS EC2 instance not pingable or available in browser

    - by Slimmons
    I've seen this question asked in other places, but now I've run through every fix proposed in those other questions, so I'm re-asking it here in hopes that someone will have a different solution. Problem: I have an EC2 instance, I can ssh into it and work on it, and I have an Elastic IP assigned to it. I am unable to ping this machine or reach it in a browser. Solutions mentioned and tried: service httpd start i. the response I get is "unrecognized service" ii. when I run apache2ctl -k start, it shows "httpd already running", so I'm assuming httpd is not the problem; it's just possibly named something else because of apache2, or for whatever reason. I went into EC2 - Security Group - Default (which is the one I used) - Inbound, and everything there is set up correctly (I'm assuming). It shows 80 (HTTP) 0.0.0.0/0, 443 (HTTPS) 0.0.0.0/0, and various other services with their ports and 0.0.0.0/0 next to them. I also enabled a rule allowing ICMP Request All on 0.0.0.0/0 temporarily for testing purposes. I've tried disabling iptables with "service ufw stop". Just in case I'm doing something really stupid, because I'm not all that used to connecting to web servers that I've spun up, I'm typing the machine's address into the URL bar like this (assuming my IP address was ip.address): i. http:/(slash)ip.address/ ii. ip.address iii. https:/(slash)ip.address/ iv. ip.address/webFolderName/ v. http:/(slash)ip.address/webFolderName/ None of the attempts worked, and the only thing I haven't tried that I've seen is to start Wireshark on the machine and see if the requests are reaching it and it's just ignoring them. I'm not sure I want to do that yet, since A) I'm not 100% positive how to use Wireshark without the GUI, since that's the only way I've ever used it (I really should get used to it in the terminal, but I didn't even know you could), and B) it really seems like I'm missing something simple in getting this to work. Thanks in advance for any help.
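
    Since the security group already allows 80/443 and SSH works, a quick way to narrow this down is to check on the instance itself whether anything is listening on port 80 and whether the local firewall is in the way (the "unrecognized service" reply is consistent with an Ubuntu/Debian-style install, where the Apache service is called apache2 rather than httpd):

        # is anything listening on port 80, and on which address?
        sudo netstat -plnt | grep ':80'
        # does the site answer locally? if this works but the public IP does not, the blocker is outside the instance
        curl -I http://localhost/
        # current firewall rules
        sudo iptables -L -n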

    Read the article

  • Rsync: windows 7, synology: login error and permission denied error

    - by loonboon
    Good day to all of you all, I'm running into strange/stupid errors, and I hope anybody would be so kind to help me out. I have to admit, I am by no means a guru, so please bear with me :-) Situation: -Synology NAS (runs Linux), and Windows 7 desktop (1 normal/restricted user Lisa and 1 admin user). - Data from W7 desktop to be rsynced to synology: /volume1/home/Lisa/Backup - Rsync command: c:\cygwin\bin\rsync -avz /cygdrive/e/Lisa/ [email protected]:/volume1/homes/Lisa/Backup - I've set up ssh per these two threads: a. http://www.cesareriva.com/archives/102 b. http://www.cesareriva.com/archives/112 Now the horrors begin: - root is allowed to run the rsync succesfully, however, he doesn't login automatically (so I can not use rsync in W7 batchscripts, which is of course required). - Lisa is allowed to login automatically but he can not succesfully finish the rsync command because of permission errors: rsync change dir /volume1/homes/Lisa/Backup failed: permissions denied. This happens for each and every file and subdir rsync tries to create. However, the main directory (Backup) is created. When I try to copy files from windows explorer to the directory 'Backup' using the very same user Lisa everything goes smoothly. So, obviously, there is a permission problem somewhere; either my rsync-command isn't correct, or the folder permissions for homes/Lisa aren't correct (but, then again, Windows 7 copies files to that folder without any problems, so that does make me believe the homes/Lisa-permissions don't appear to be the problem). I also tried adding: --chmod=Dugo+x --chmod=ugo+r which I found somewhere on the web, to the rsync-command, but this didn't solve any problem and gave the exact errors. Would anybody please please help me on how to fix this? I am utterly frustrated about this, because I have been trying for 1 month to get everything to work and it simply doesn't work. I bought the big Synology to end the horrors of 20 external USB-disks for once and for all (we have many pictures and home vids of our deceased dogs and want to watch these, the horrors being 'what material is on what disk'). I'll gladly return the favour of somebody helping me out by buying you a nice beer (paypal), if you could end my misery. I am not extremely skilled on Linux (not at all :-( ) so if you could give an extra word when possible so I understand what to do, I'd be very grateful. I really hope somebody can help me out, Thank you in advance, Lisa
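
    Since Windows Explorer can write to the same folder as Lisa, one likely culprit is rsync's -a flag trying to preserve Windows/NTFS ownership and permission bits that the Synology side then rejects. A sketch of the same command with permission and ownership preservation switched off (paths and the user@NAS address as given above; the options are available in rsync 3.x):

        c:\cygwin\bin\rsync -rtvz --no-perms --no-owner --no-group --chmod=ugo=rwX /cygdrive/e/Lisa/ [email protected]:/volume1/homes/Lisa/Backup

    It may also be worth confirming, as root on the Synology, that Lisa actually owns the target directory:

        chown -R Lisa /volume1/homes/Lisa/Backup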

    Read the article

  • Changing Physical Path gives blank homepage

    - by Julie
    I have two websites ASP Classic - www.company.com and www.companytesting.com. At this time of year, company.com is pointed to a folder called website2012 and companytesting.com is pointing to a folder called website2013. The contents of those two folders are almost identical, just minor changes for our season change (which I was supposed to do today - lol). Up until a couple of weeks ago, I was running Windows Server 2003. To update the "live" website, I'd make a copy of the test site folder, and rename it website2013R1, and point the test site there, then point the live site at website2012. We now have Windows Server 2008 R2 64. (I had someone migrate the websites to the new server for me.) The companytesting.com site, when I pointed it to website2013R1, worked fine. The company.com site, when I pointed it to website2013 (which worked just before, for the companytesting.com site) gives an empty page. (i.e. view source = nothing there.) There is nothing in the failed request log when this happens. I can use the Explore button/link (upper right) in IIS7.5 and see all of the files there. If I use the browse button (either in general or on the index.asp page) I get the blank page again. One weirdness about how these are set up is that companytesting.com uses a login (which I think is windows authentication - it's simply a single username and password for staff, and to keep the GoogleBots out of it). Obviously, company.com does not. But redirecting the to website2013r1 kept the login in place. (So I'm not absolutely clear whether that's attached to the folder or to the site. Hitting the company.com site after changing the path did not yield a password request.) The permissions on the folders all seem to be the same, but obviously, I'm missing something. Why isn't changing the physical path working? As is probably obvious, I'm not knowledgeable about servers. I did OK in 2003, but since it's not my main task and I'm buried right now, I have barely looked at 2008. So I may have really stupid questions when you ask me to check something.

    Read the article

  • PTR Record Troubles

    - by Physikal
    I am having a hell of a time getting our PTR record right. Our current PTR zone looks like this:

        $ttl 38400
        @  IN  SOA  ns1.domain.com. admin.domain.com. (
               1268669139 10800 3600 604800 38400 )
        xxx.xxx.xxx.in-addr.arpa.      IN  NS   ns2.domain.com.
        xxx.xxx.xxx.in-addr.arpa.      IN  NS   ns1.domain.com.
        97                             IN  PTR  mail.domain.com.
        xxx.xxx.xxx.xxx.in-addr.arpa.  IN  PTR  mail.domain.com.
        97.96/28.                      IN  PTR  mail.domain.com

    For some reason the only thing that works is the 97.96/28. When this line is in there it actually says I have a PTR record when reporting from intodns.com. If I remove that line, it says I have no PTR. I have followed instructions from http://www.philchen.com/2007/04/04/configuring-reverse-dns and when I follow those instructions intodns.com says I have no PTR. When it does work with the line 97.96/28., the PTR kicks back as (from intodns.com): 97.xxx.xxx.xxx.in-addr.arpa -> mail.domain.com.xxx.xxx.xxx.in-addr.arpa Which is, to my knowledge, an incorrect PTR. I want it to just kick back as mail.domain.com, without the xxx.xxx.xxx.in-addr.arpa extension. I have tried everything I can think of but I can't fix it. I can't help but think it's one of those things that is so stupid and simple I'm going to do the ol'facepalm. Any help is greatly appreciated. Thanks! In the event that the domain zone is needed, here it is:

        $ttl 38400
        @  IN  SOA  domain.com. [email protected]. (
               1265221037 10800 3600 604800 38400 )
        domain.com.            IN  A    xxx.xxx.xxx.xxx
        www.domain.com.        IN  A    xxx.xxx.xxx.xxx
        ftp.domain.com.        IN  A    xxx.xxx.xxx.xxx
        m.domain.com.          IN  A    xxx.xxx.xxx.xxx
        localhost.domain.com.  IN  A    127.0.0.1
        webmail.domain.com.    IN  A    xxx.xxx.xxx.xxx
        admin.domain.com.      IN  A    xxx.xxx.xxx.xxx
        mail.domain.com.       IN  A    xxx.xxx.xxx.xxx
        domain.com.            IN  MX   5 mail.domain.com.
        domain.com.            IN  TXT  "v=spf1 a mx a:domain.com ip4:xxx.xxx.xxx.xxx ?all"
        domain.com.            IN  NS   ns1
        domain.com.            IN  NS   ns2
        ns1                    IN  A    xxx.xxx.xxx.xxx
        ns2                    IN  A    xxx.xxx.xxx.xxx

    Any double entries in different formats were part of my troubleshooting process.
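
    One thing that stands out in the line that "works" is that its right-hand side, mail.domain.com, has no trailing dot, so the zone origin gets appended - which is exactly why intodns reports mail.domain.com.xxx.xxx.xxx.in-addr.arpa. With only the relative record "97 IN PTR mail.domain.com." (trailing dot included) in place and the zone reloaded cleanly, the lookup can be verified from any host; a small sketch, with the xxx placeholders standing in for the real octets:

        # both should print mail.domain.com. with nothing appended after it
        dig -x xxx.xxx.xxx.97 +short
        host xxx.xxx.xxx.97 ns1.domain.com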

    Read the article

  • networked storage for a research group, 10-100 TB

    - by Marc
    this is related to this post: http://serverfault.com/questions/80854/scalable-24-tb-nas-for-research-department but perhaps a little more general. Background: We're a research lab of around 10 people who do a lot of experiments that involve taking pictures at one of several lab setups and then analyzing it an one of several lab computers. Each experiment may produce 2 or 3 GB of data, and we are generating data at the rate of about 10 TB/year. Right now, we are storing the data on a 6-bay netgear readynas pro, but even with 2 TB drive, this only gives us 10 TB of storage. Also, right now we are not backing up at all. Our short term backup plan is to get a second readynas, put it in a different building and mirror the one drive onto the other. Obviously, this is somewhat non-ideal. Our options: 1) We can pay our university $400/ TB /year for "backed up" online storage. We trust them more than we trust us, but not a whole lot. 2) We can continue to buy small NASs and mirror them between offices. One limit, although stupid, is that we don't have an unlimited number of ethernet jacks. 3) We can try to implement our own data storage solution, which is why I'm asking you guys. One thing to consider is that we're a very transient population and none of us are network administration experts. I will probably be here only another year or so, and graduate students, who are here the longest, have a 5-6 year time scale. So nothing can require expert oversight. Our data transfer rates are low - most of the data will just sit on the server waiting for someone to look at it once or twice - so we don't need a really high speed system. Given these contraints, can someone recommend a fairly low-cost, scalable, more or less turn key shared data storage system with backup in a separate physical location. Does such a thing exist or should we just pay the university to take care of it for us? As a second question, our professor just got tenure and is putting together a budget. Here the goal is to ask for as much as you can and hope you get a fraction of it. So the same question, minus the low-cost. Without budget constraints, can you recommend a scalable turn-key backed up storage system. Thanks

    Read the article

  • batch file infinite loop when parsing file

    - by Bart
    Okay, this should be a really simple task but its proving to be more complicated than I think it should be. I'm clearly doing something wrong, and would like someone else's input. What I would like to do is parse through a file containing paths to directories and set permissions on those directories. An example line of the input file. There are several lines, all formatted the same way, with a different path to a directory. E:\stuff\Things\something else (X)\ (The file in question is generated under Cygwin using find to list all directories with "(X)" in the name. The file is then passed through unix2win to make it windows compatible. I've also tried manually creating the input file from within windows to rule out the file's creation method as the problem.) Here's where I'm stuck... I wrote the following quick and dirty batch file in Windows XP and it worked without any issues at all, but it will not work in server 2k8. Batch file code to run through the file and set permissions: FOR /F "tokens=*" %%A IN (dirlist.txt) DO echo y| cacls "%%A" /T /C /G "Domain Admins":f "Some Group":f "some-security-group":f What this is SUPPOSED to do (and does in XP) is loop through the specified file (dirlist.txt) and run cacls.exe on each directory it pulls from the file. The "echo y|" is in there to automagically confirm when cacls helpfully asks "are you sure?" for every directory in the list. Unfortunately, however, what it DOES is fall into an infinite loop. I've tried surrounding everything after "DO" with quotes, which prevents the endless loop but confuses cacls so it throws an error. Interestingly, I've tried running the code from after "DO" manually (obviously replacing the variable with the full path, copied straight from the file) at a command prompt and it runs as expected. I don't think it's the file or the loop, as adding quotes to the command to be executed prevents the loop from continuing past where it's supposed to... I really have no idea at this point. Any help would be appreciated. I have a feeling it's going to be something increadibly stupid... but I'm pulling my hair out so I thought I'd ask.

    Read the article

  • Can't authenticate as my own user, sudo, or root in Debian "Jessie" GNOME anymore?

    - by Janar
    I'm a Debian beginner and a GUI guy, and I'm in a bit of trouble. History of actions (probably irrelevant, though): I installed Debian "Jessie" GNU/Linux with the Xfce GUI (en-US) as the only OS; the hardware is a ThinkPad W510. I skipped setting a root password during setup so that my own user would easily get sudo as the superuser. I have only ever logged in with GNOME (3.4.x), never with Xfce - I installed Xfce only to get more control (easier management) over packages and to set up GNOME more to my liking. I added more Jessie repositories (the same ones Wheezy stable has by default, but for Jessie, since Jessie only had the security-update repositories by default), installed lots of GTK(3)- and GNOME(3)-based software (restarted again after this), installed the proprietary graphics driver for my Nvidia Quadro (restarted once again after that), and installed more things related to my work/school/development. The actual problem: I had planned to restart again, but wanted to set up auto-login first. Instead I set my user password to none via Gnome-user-settings (don't ask why - perhaps caused by being awake for a looooong time), noticed it, and also enabled auto-login, but I couldn't undo my earlier mistake and create a new password for myself. Since my password is set to none, I would have expected that simply pressing Return at the password prompt (an empty password field) would do, but it won't authenticate. I tried Alt+F2 "gksu gedit" as well as sudo wget "https://www.some-page.eu/file.ext" and "su" in terminals; none of them worked (quite logical, actually, as I'm the sudoer and highest-ranked superuser, and the only user on the computer). Current state: everything worked and still works nicely after this accident, apart from the password prompts. I'm too spooked to log out or restart. The Synaptic package manager is still open with root rights (the only one left open prior to the issue and not closed since, just in case). I googled for help and read some manuals/FAQs/how-tos - they mostly lead to sudoers file management, but I found nothing specific to my issue, so I'm still none the wiser. I really hope I don't have to redo the OS install all over again because of one stupid mistake. Thanks for your reply :-)
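
    Since the machine still boots, the usual way back in without reinstalling is a one-off root shell from the boot loader; a minimal sketch, where "janar" stands in for the actual user name:

        # from the GRUB menu, edit the boot entry and append init=/bin/bash to the linux line,
        # then at the resulting root shell:
        mount -o remount,rw /
        passwd janar        # set a real password for the user again
        passwd root         # optionally give root its own password as a safety net
        sync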

    Read the article

  • Both nginx and php5-fpm init.d startup scripts are non-functional and returning no errors..? But they used to work perfectly

    - by Ollie Treend
    I have been using nginx and php5-fpm on my Ubuntu box for a while now. Everything has been configured and setup correctly, and it ran like a charm. I have been keeping the packages updated & upgraded as usual, but haven't touched the nginx OR php5-fpm config files at all (thus I'm pretty sure this isn't my fault... ) Basically, I noticed nginx wasn't running as it should be. I ran the command sudo service nginx start, and the script did nothing. The same thing happens when trying to do anything - start, stop, restart or reload. This also happens for the "php5-fpm" init script - although all other init scripts seem to be functioning correctly. When trying to start nginx OR php5-fpm, this is what happens: root@HAL:/etc# service php5-fpm start root@HAL:/etc# I can't understand what is going wrong. The script isn't returning errors, but similarly it isn't starting the daemon or reporting success as usual. For reference, both installations are from the official nginx and php5-fpm PPAs. The fact that both started doing this at the same time has thrown me - since they are both unrelated packages. I have since purged both sets of packages from my system with apt-get purge ... and also apt-get remove --purge ... both of which have successfully removed the packages, their config files, and their init.d startup scripts. After having reinstalled nginx, I now have a functioning startup script again - I can start the web server as usual. However, php5-fpm is still experiencing the strange premature exiting of the startup script.. and I really can't figure out what's causing it. I have no idea what caused this to occur initially, but have managed to fix nginx. I now need to fix the php5-fpm startup script. If anybody could shed some light on this situation, I would be very grateful! The chances are both these issues are related - and they were caused by me doing something stupid. But now I need to fix it. This time I was lucky - because these problems are just on my development server. But I have 2 other live servers which are configured in a similar way, and I am worried the same thing will happen to these two as well! Has anybody else come across this? Do you have any words of advice? Thank you
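
    Since the scripts exit silently, tracing one of them usually shows exactly which test makes it bail out, and an apt reinstall can put back an init script or conffile that the earlier purge removed; a short sketch of both steps:

        # show every command the init script runs and where it stops
        sh -x /etc/init.d/php5-fpm start
        ls -l /etc/init.d/php5-fpm /etc/init.d/nginx
        # restore any missing conffiles (including init scripts) that a purge deleted
        sudo apt-get install --reinstall -o Dpkg::Options::="--force-confmiss" php5-fpm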

    Read the article

  • Synchronize the same set of files to 2 different locations with 2 different programs for 2 different purposes

    - by Hedgetrimmer
    Because of stupid questionable IT policies at my not-to-be-named place of occupation, I have been (and will be, for the forseeable future) carrying on an external hard drive a unison-synchronized copy of all of my documents and code, including code which resides in some of my "dotfiles" and other code which resides in ~/bin (things I've made are there because ~/bin is in my $PATH) along with some cruft generated (and to be generated) by conscript and its related "giter8" templating system for Scala project boilerplates. Despite this, I do use a symlinking program to store all of my important dotfiles in a subdirectory. Thanks to that somewhat complicated setup, I have resorted to making a directory full of symlinks to every directory (or file, as is the case with stuff under ~/bin) that I want synchronized, and then follow = True is in my unison profile. It happens to be that this collection of odds and ends—plus an automatically-generated text file containing every package installed on my system—is everything under ~ that needs to be backed up to a remote (rsync-over-ssh) host with client-side encryption and signing from GPG. I already believe that duplicity is the most appropriate program to do that. What isn't as clear-cut is how to make duplicity use the exact same set of files when it runs a backup; it would be simple if duplicity would follow symlinks, but it does not and the manpage lists no option for enabling any such behavior. Comparing unison's file selection algorithm to duplicity's, I don't think I can write a program that could compute a ruleset for one program given one for the other. For the record, I would rather not keep the symlinks manually synchronized with duplicity file-selection rules, as they can change thanks to the above-mentioned complications regarding ~/bin. I don't think running duplicity on the external hard disk is such a good idea either; I usually keep that hard disk unmounted and unplugged in case of a power failure or other physical problem with the computer, plus I'm not sure about duplicity's performance given that: the hard disk is NTFS-formatted in order to be useable at my Windows-imprisoned place of occupation. despite being a USB 3.0 disk, my computer has no USB 3.0 ports so it acts as a USB 2.0 disk. How can I have duplicity (or is there a better program that I have overlooked?) back up the exact same set of files that is bidirectionally synchronized with my external hard disk?
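
    Given that duplicity will not follow the symlinks, one workaround that keeps unison's file selection as the single source of truth is to materialize the symlink farm into a real directory tree first and point duplicity at that; a sketch under the assumptions that the symlink directory is ~/sync-links and that the GPG key ID and target URL are placeholders:

        # rebuild a staging tree in which each symlink is replaced by what it points to
        rsync -a --copy-links --delete ~/sync-links/ ~/.backup-staging/
        # back up the staging tree with client-side encryption and signing
        duplicity --encrypt-key MYKEYID --sign-key MYKEYID ~/.backup-staging sftp://user@backuphost/backups/home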

    Read the article

  • SQL Transactional Replication snapshot not applying

    - by dmch2
    Hi, I'm using SQL Transactional Replication with pull subscriptions to replicate databases (hosting their own distribution database) from several servers across a VPN to a central server. I've got the first 2 databases working fine but the 3rd one is causing me problems. My subscription server is SQL 2008, the source systems are all SQL 2005. The source databases are a few 100Mb in size and contain audit data so are simply growing slowly by adding new records at approx 1kb a second. As far as the replication monitor, Agent logs and event logs show everything is working fine - except that no data appears in my subscription database. The distribution agent doesn't seem to want to read the snapshot (and hence the initial state and schema) from the publisher. New transactions aren't applied although they do seem to be arriving OK as the replication monitor shows things like '5 transactions with 10 commands were delivered'. I would expect (as in previous times) to see statements about data being BCPed in the replication monitor. The snapshot is on the publisher on a shared folder. The subscriber can view the snapshot OK (\\repldata) and the alt snapshot folder is pointing at it. But the distribution agent doesn't seem to be making an attempt to do read it. I tried changing the snapshot path to something that's incorrect and didn't even get an error saying that it couldn't access it. After lots of googling etc I found that sp_MSget_repl_commands is called by the subscriber on the distribution database on the publisher. Running a profiler I can see that it's only called for one agent Id. After a reinit it's called for sequence number 0x0 as expected so I thought that would mean it's would look for the snapshot. However, looking on the publisher I see that there's data for two agents - the snapshot agent and the log reader agent (which is being queries). So I guess I need to tell the distribution agent to get the data for both. But how? and more importantly - why? It worked fine on the other two servers I've replicated. I'm not an SQL novice but this is pretty much my first go at replication so don't be afraid to accuse me of missing something obvious/stupid! I can get log files (eg from the distribution agent) if you want but they don't seem to have any errors in them - it just starts up and starts applying log reader agent changes. Cheers Dave

    Read the article

  • Recovering from backup without original install media

    - by KGendron
    A machine from my old job had a complete hard drive failure. I have backups but I'm running into severe problems restoring from them. The only install media was a secondary restore partition on the system's hard drive. I hate whoever came up with that idea more than i can possibly express with words. I spent several days trying to recover the disk - it is pretty well shot and none of my best tricks could even get it to show up in the bios/ The machine that broke is an hp with xp media center edition on it (I don't know why either). The backups were created using the default windows backup tool - I have .bfk file on an external hardrive that i am trying to restore from. I've replaced the hard drive. My home machine is running windows 7 64bit and i'm trying to use it as a platform to restore to the other disk. I downloaded the window 7 nt-restore utility, however no matter what i do it restores to my C drive rather than the specified drive. Fortunately win7 security settings prevented it from being a complete disaster - but still not a happy thing. I tried firing up the xp virtual machine. I can browse to the backups but it says they are invalid and refuse to let me view/ continue with the restore. I tried installing XP to an extra harddrive on my machine - however it bluescreens on me during the install process and I cry. I tried installing xp pro to the new drive and attempted to restore over it, it of course blackscreened on me as that was a stupid idea. I made two partitions on the new hard drive (Apparently the bios on this accursed piece of junk doesn't allow hd partitions larger than 200G anyways and thus fails 40 minutes into the install with an ever-descriptive "Disk Read Error". Guess how i spent last weekend? My last idea was to install xp pro to the second partition and then use it to restore from backup to the first. After the first restart it gives me the error "Windows could not start because of a computer disk hardware configuration problem. Could not read from the selected boot disk. Check boot path and disk hardware". My brain made one of those bad hard drive clicky noises. I've tried several boot disks but they don't seem to work. If anyone has a link to a good one it would be greatly appreciated. Anyone have any more ideas? - I really hate asking on what seems like such a simple issue but i am quite literally at my wit's end. Thanks - and sorry for the really long post.

    Read the article

  • Router behind Router--second router (and its clients) cannot be "seen" even after both routers are D

    - by Trioke
    Couple of terminology I guess I should get out of the way for consistency's sake throughout the post: External Router/Modem - SMC 8014WG - External IP 173.32.144.134 - Internal IP 192.168.0.1 Internal Router - LinkSys WRT120N - "External" IP of 192.168.0.175 - Internal IP 192.168.1.1 - Connected via Ethernet Cable (a really long one, from the basement to the second floor) PC - IP 192.168.200 - Connected Wirelessly via WAP2 Personal. Laptop - Used to try and diagnose the problem, a 4th machine to the setup which won't be part of the final setup once everything works. The actual problem: I've tried setting the LinkySys router as a DMZ'd client on the SMC router, and then DMZ'd the actual PC on the LinkSys. So the DMZ looks like this: On the SMZ, client with IP 192.168.0.175 is DMZ'd. On the LinkSys, client with IP 192.168.1.200 is DMZ'd. No dice. I then tried port forwarding the necessary port on the SMC to the LinkSys (lets just say, port 80). Then port forwarded Port 80 on the LinkSys to the PC. Same as the DMZ scenario above, but change DMZ with port forwarding. No dice, still :(. Now here's where I went stupid--and tell me if one should never do this--I enabled both DMZ and port forwarding at the same time. I fired up Opera--my browser of choice ;)--typed in 173.32.144.134:6333 and... ... Third time is the charm they say? Well, clearly not. Otherwise I wouldn't be here ;). To diagnose the problem, I enabled "Allow remote access to the Admin panel" on the LinkSys router, and specified port 6333 as the port to use. I port forwarded port 6333 on the SMC to 192.168.0.175, and access my external IP of 173.32.144.134:6333 in hopes of seeing the Admin panel... No dice (I think I've ran out of dice by now ;)). So to see where the problem was, I connected a laptop to the SMC via LAN cable, and typed in 192.168.0.175:6333, and viola, Admin Panel access! So the problem looks like it lies with the SMC--But that's as far as I've got, I've done the port forwarding, the DMZ'ing, and I've even disabled the built-in firewall for safe measures, but nothing worked. So, here I am. Unable to connect to the PC behind the Internal router externally, and without anything to go on other than to come here and ask for the wisdom of the the superuser folks :). If any more detail is required, just ask. (Apologies in advance, if questions should never be this long winded!)

    Read the article

  • T4 Template error - Assembly Directive cannot locate referenced assembly in Visual Studio 2010 project

    - by CodeSniper
    I ran into the following error recently in Visual Studio 2010 while trying to port Phil Haack’s excellent T4CSS template, which was originally built for Visual Studio 2008.

    The Problem

        Error Compiling transformation: Metadata file 'dotless.Core' could not be found

    In “T4 speak”, this simply means that you have an Assembly directive in your T4 template but the T4 engine was not able to locate or load the referenced assembly. In the case of the T4CSS template, this was a showstopper for making it work in Visual Studio 2010.

    On a side note: the T4CSS template is a sweet little wrapper that lets you use DotLessCss to generate static .css files from .less files rather than using their default HttpHandler or command-line tool. If you haven't tried DotLessCss yet, go check it out now! In short, it is a tool that allows you to templatize and program your CSS files so that you can use variables, expressions, and mixins within your CSS, which enables rapid changes and a lot of developer flexibility as you evolve your CSS and UI.

    Back to our regularly scheduled program… Anyhow, this post isn't about DotLessCss; it's about T4 templates and the errors I ran into when converting them from Visual Studio 2008 to Visual Studio 2010. In VS2010 there were quite a few changes to the T4 template engine; most were excellent changes, but this one bit me with T4CSS: “Project assemblies are no longer used to resolve template assembly directives.”

    In VS2008, if you wanted to reference a custom assembly in your T4 template (.tt file) you would simply right-click on your project, choose Add Reference, and select that assembly. Afterwards you could use the following syntax in your T4 template to tell it to look at the local references:

        <#@ assembly name="dotless.Core.dll" #>

    This told the engine to look in the “usual place” for the assembly, which is your project references. However, this is exactly what they changed in VS2010. They now basically sandbox the T4 engine to keep your T4 assemblies separate from your project assemblies. This can come in handy if you want to support different versions of an assembly referenced both by your T4 templates and your project.

    Who broke the build? Oh, Microsoft Did!

    In our case, this change causes a problem since the templates are no longer compatible when upgrading to VS2010 – in other words, it's a breaking change. So, how do we make this work in VS2010? Luckily, Microsoft now offers several options for referencing assemblies from T4 templates:

    1. GAC your assemblies and use a namespace reference or fully qualified type name.
    2. Use a hard-coded, fully qualified UNC path.
    3. Copy the assembly to the Visual Studio “Public Assemblies” folder and use a namespace reference or fully qualified type name.
    4. Use or define a Windows environment variable to build a fully qualified UNC path.
    5. Use a Visual Studio macro to build a fully qualified UNC path.

    Options #1 and #2 were already supported in Visual Studio 2008, so if you want to keep your templates compatible with both Visual Studio versions, you will have to adopt one of those two approaches.

    Yakkety Yak, use the GAC!

    Option #1 requires an additional pre-build step to GAC the referenced assembly, which could be a pain. But if you go that route, then after you GAC, all you need is a simple type name or namespace reference such as:

        <#@ assembly name="dotless.Core" #>

    Hard Coding ain't that hard!

    The other option of using hard-coded paths in Option #2 is pretty impractical in most situations, since each developer would have to use the same local project folder paths, or modify this setting each time for their local machines as well as for production deployment. However, if you want to go that route, simply use the following assembly directive style:

        <#@ assembly name="C:\Code\Lib\dotless.Core.dll" #>

    Let's go Public!

    Option #3, the Visual Studio Public Assemblies folder, is the recommended place to put commonly used tools and libraries that are only needed for Visual Studio. Think of it like a VS-only GAC. This is likely the best place for something like DotLessCss and is my preferred solution. However, you will need either an installer or a pre-build action to copy the assembly to the right folder location. Normally this is located at:

        C:\Program Files (x86)\Microsoft Visual Studio 10.0\Common7\IDE\PublicAssemblies

    Once you have copied your assembly there, you use the type name or namespace syntax again:

        <#@ assembly name="dotless.Core" #>

    Save the Environment!

    Option #4, using a Windows environment variable, is interesting for enterprise use where you may have standard locations for files, but less useful for demo code, frameworks, and products where you don't have control over the local system. The syntax for including an environment variable in your assembly directive looks like the following, just as you would expect:

        <#@ assembly name="%mypath%\dotless.Core.dll" #>

    Here “mypath” is a Windows environment variable you set up that points to some fully qualified UNC path on your system. In the right situation this can be a great solution, such as one where you use an MSI installer for deployment, or where you have a pre-existing environment variable you can re-use.

    OMG Macros!

    Finally, Option #5 is a very nice option if you want to keep your T4 template's assembly reference local and relative to the project or solution without muddying up your dev environment or GAC with extra deployments. An example looks like this:

        <#@ assembly name="$(SolutionDir)lib\dotless.Core.dll" #>

    In this example, I'm using the “SolutionDir” VS macro so I can reference an assembly in a “/lib” folder at the root of the solution. This is just one of the many macros you can use. If you are familiar with creating pre/post-build event scripts, you can use that dialog to look at all of the different VS macros available. This option gives the best solution for local assemblies without the hassle of extra installers or other setup before the build. However, it's still not compatible with Visual Studio 2008, so if you have a T4 template you want to use with both, then you may have to create multiple .tt files, one for each IDE version, or require the developer to set a value in the .tt file manually. I'm not sure if T4 templates support any form of compiler switches like “#if (VS2010)” statements, but it would definitely be nice in this case to switch between this option and one of the ones more compatible with VS2008.

    Conclusion

    As you can see, we went from 3 options with Visual Studio 2008 to 5 options (plus one problem) with Visual Studio 2010. As a whole, I think the changes are great, but the short-term growing pains during the migration may be annoying until we get used to our new-found power. Hopefully this all made sense and was helpful to you. If nothing else, I'll just use it as a reference the next time I need to port a T4 template to Visual Studio 2010.
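    As a parting sketch, here is roughly what the top of a VS2010-friendly .tt file could look like when using the SolutionDir macro approach. The lib\ folder, the Site.less file name, and the Less.Parse call are assumptions for illustration only – verify them against the version of dotless you actually reference.

        <#@ template debug="false" hostspecific="true" language="C#" #>
        <#@ assembly name="$(SolutionDir)lib\dotless.Core.dll" #>
        <#@ import namespace="dotless.Core" #>
        <#@ output extension=".css" #>
        <#
            // hostspecific="true" gives us this.Host, so we can resolve paths
            // relative to the template rather than the IDE's working directory.
            var lessFile = this.Host.ResolvePath("Site.less");      // hypothetical .less file next to the .tt
            var lessSource = System.IO.File.ReadAllText(lessFile);

            // Less.Parse is dotless's simple string-in/string-out entry point;
            // if your version exposes a different API, swap in the equivalent call.
            Write(Less.Parse(lessSource));
        #>

    Because the assembly directive is solution-relative, the same template keeps working for every developer who pulls the solution, with no GAC or Public Assemblies deployment step.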
Happy T4 templating, and “May the fourth be with you!”

    Read the article

  • The last MVVM you'll ever need?

    - by Nuri Halperin
    As my MVC projects mature and grow, the need for some omnipresent, ambient model properties quickly emerges. The application no longer has only one dynamic piece of data on the page: a sidebar with a shopping cart, some news flash on the side – pretty common stuff. The rub is that a controller is invoked in the context of a single intended request. The rest of the data, even though it could be just as dynamic, is expected to appear on its own.

    There are many solutions to this scenario. MVVM prescribes creating elaborate objects which expose your new data as a property on some uber-object, with more properties exposing the “side show” ambient data. The reason I don't love this approach is that it forces fairly acute awareness of the view, and soon enough you have many MVVM objects lying around, and views have to start doing null-checks to ensure you really supplied all the values before binding to them. Ick. Just as unattractive is the ViewData dictionary. It's not strongly typed, and in both this and the MVVM approach someone has to populate these properties – n'est-ce pas? Where does that live?

    With MVC2, we get the formerly-Futures feature Html.RenderAction(). The feature allows you to plant a line in a view, of the format:

        <% Html.RenderAction("SessionInterest", "Session"); %>

    While this syntax looks very clean, I can't help being bothered by it. MVC was touting a very strong separation of concerns: the Model takes on the role of the business logic, the controller handles routing and performs minimal view-choosing operations, and the views are strictly focused on rendering out angled-bracket tags. The RenderAction() syntax has the view calling some controller and invoking it inline with its runtime rendering. This – to my taste – embeds too much knowledge of controllers into the view's code, which was allegedly forbidden. The one-way flow “Controller receives data –> Controller invokes Model –> Controller selects view –> Controller hands data to view” now gains a “View calls controller and gets its own data” step, which is not so one-way anymore. Ick.

    I toyed with some other solutions a bit, including base controllers, special view classes, etc. My current favorite, though, is making use of the ExpandoObject and dynamic features of C# 4.0. If you follow Phil Haack or read a bit from David Heyden you can see the general picture emerging. The game changer is that using the new dynamic syntax, one can sprout properties on an object and make use of them in the view. Well, that beats having a bunch of uni-purpose MVVMs any day! Rather than statically exposed properties, we'll just use the capability of adding members at runtime.
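    For anyone who hasn't played with ExpandoObject yet, here is a throwaway console sketch (not from the original post) of what “sprouting” a member at runtime looks like:

        using System;
        using System.Dynamic;

        class SproutDemo
        {
            static void Main()
            {
                dynamic vm = new ExpandoObject();
                vm.Greeting = "Hello, ambient world";   // member created on the fly
                Console.WriteLine(vm.Greeting);          // resolved by the runtime binder
            }
        }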
    Armed with new ideas and syntax, I went to work. First, I created a factory method to enrich the focus object:

        public static class ModelExtension
        {
            public static dynamic Decorate(this Controller controller, object mainValue)
            {
                dynamic result = new ExpandoObject(); // requires using System.Dynamic;
                result.Value = mainValue;
                result.SessionInterest = CodeCampBL.SessoinInterest();
                result.TagUsage = CodeCampBL.TagUsage();
                return result;
            }
        }

    This gives me a nice fluent way to have the controller add the rest of the ambient “side show” items (SessionInterest and TagUsage in this demo) and expose them all as the Model:

        public ActionResult Index()
        {
            var data = SyndicationBL.Refresh(TWEET_SOURCE_URL);
            dynamic result = this.Decorate(data);
            return View(result);
        }

    So now what remains is that my view knows to expect a dynamic object (rather than a statically typed one) so that the ASP.NET page compiler won't barf:

        <%@ Page Language="C#" Title="Ambient Demo" MasterPageFile="~/Views/Shared/Ambient.Master" Inherits="System.Web.Mvc.ViewPage<dynamic>" %>

    Notice the generic ViewPage<dynamic>; it doesn't work otherwise. In the page itself, the Model.Value property contains the main data returned from the controller. The nice thing about this is that the master page (Ambient.Master) also inherits from the generic ViewMasterPage<dynamic>. So rather than the page worrying about all this ambient stuff, the sidebars and panels for ambient data all reside in a master page, and can be rendered using the RenderPartial() syntax:

        <% Html.RenderPartial("TagCloud", Model.SessionInterest as Dictionary<string, int>); %>

    Note here that a cast is necessary. This is because although dynamic is magic, it can't figure out what type this property is, and wants you to give it a type so its binder can figure out the right property to bind to at runtime. I use as; you can cast if you like.

    So there we go – no violation of MVC, no explosion of MVVM models and voila – right? Well, I could not let this go without a tweak or two more. The first thing to improve is that some views may not need all the properties, in which case it would be a waste of resources to populate every property. The solution is simple: rather than exposing properties, I changed the factory method to expose lambdas – Func<T>, really. So only if and when a view accesses a member of the dynamic object does it load the data.

        public static class ModelExtension
        {
            // take two... lazy loading!
            public static dynamic LazyDecorate(this Controller c, object mainValue)
            {
                dynamic result = new ExpandoObject();
                result.Value = mainValue;
                result.SessionInterest = new Func<Dictionary<string, int>>(() => CodeCampBL.SessoinInterest());
                result.TagUsage = new Func<Dictionary<string, int>>(() => CodeCampBL.TagUsage());
                return result;
            }
        }

    Now that lazy loading is in place, there's really no reason not to hook up any and all possible ambient properties. Go nuts! Add them all in – they won't get invoked unless used. This does require changing how the ambient property members are used – adding some parentheses in the master view:

        <% Html.RenderPartial("TagCloud", Model.SessionInterest() as Dictionary<string, int>); %>

    And, of course, the controller needs to call LazyDecorate() rather than the old Decorate(). The final touch is to introduce a convenience method on my Controller class, so that the tedium of calling Decorate() everywhere goes away. This is done quite simply by adding a bunch of methods matching the View(object) and View(string, object) signatures of the Controller class:

        public ActionResult Index()
        {
            var data = SyndicationBL.Refresh(TWEET_SOURCE_URL);
            return AmbientView(data);
        }

        // these methods can reside in a base controller for the solution:
        public ViewResult AmbientView(dynamic data)
        {
            dynamic result = ModelExtension.LazyDecorate(this, data);
            return View(result);
        }

        public ViewResult AmbientView(string viewName, dynamic data)
        {
            dynamic result = ModelExtension.LazyDecorate(this, data);
            return View(viewName, result);
        }

    The call to AmbientView now replaces any call to View() that requires the ambient data. DRY satisfied, lazy loading in place, and no need to replace core pieces of the MVC pipeline. I call this a good MVC day. Enjoy!
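    If the cast-plus-parentheses dance in the view looks odd, this small stand-alone sketch (not from the original post; the dictionary contents are made up) shows what the lazy ExpandoObject members are doing:

        using System;
        using System.Collections.Generic;
        using System.Dynamic;

        class AmbientSketch
        {
            static void Main()
            {
                dynamic model = new ExpandoObject();

                // Members added at runtime are only known to the compiler as dynamic,
                // which is why the view later needs an "as Dictionary<string, int>" cast.
                model.SessionInterest = new Func<Dictionary<string, int>>(() =>
                {
                    Console.WriteLine("loading session interest...");   // runs only when accessed
                    return new Dictionary<string, int> { { "MVC", 12 }, { "T4", 5 } };
                });

                // Nothing has been loaded yet; the delegate fires on first use.
                var interest = model.SessionInterest() as Dictionary<string, int>;
                Console.WriteLine(interest["MVC"]);   // 12
            }
        }

    Note that every access invokes the delegate again, so if a master page renders the same ambient member more than once you may want to memoize inside the lambda.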

    Read the article

  • Agile Like Jazz

    - by Jeff Certain
    (I’ve been sitting on this for a week or so now, thinking that it needed to be tightened up a bit to make it less rambling. Since that’s clearly not going to happen, reader beware!)

    I had the privilege of spending around 90 minutes last night sitting and listening to Sonny Rollins play a concert at the Disney Center in LA. If you don’t know who Sonny Rollins is, I don’t know how to explain the experience; if you know who he is, I don’t need to. Suffice it to say that he has been recording professionally for over 50 years, and helped create an entire genre of music. A true master by any definition.

    One of the most intriguing aspects of a concert like this, however, is watching the master step aside and let the rest of the musicians play. Not just play their parts, but really play… letting them take over the spotlight, strut their stuff, and soak up enthusiastic applause from the crowd. Maybe a lot of it has to do with the fact that Sonny Rollins has been doing this for more than a half-century. Maybe it has something to do with a kind of patience you learn when you’re on the far side of 80 – and the man can still blow a mean sax for 90 minutes without stopping! Maybe it has to do with the fact that he was out there for the love of the music and the love of the show, not because he had anything to prove to anyone and, I like to think, not for the money. Perhaps it had more to do with the fact that, when you’re at that level of mastery, the other musicians are going to be good. Really good.

    Whatever the reasons, there was an incredible freedom on that stage – the ability to improvise, for each musician to showcase their own specialization and skills, and then come back to the common theme, back to being on the same page, as it were. All this took place in the same venue that is home to the L.A. Phil. Somehow, I can’t ever see the same kind of free-wheeling improvisation happening in that context. And, since I’m a geek, I started thinking about agility.

    Rollins has put together a quintet that reflects his own particular style and past. No upright bass or piano for Rollins – drums, bongos, electric guitar and bass guitar along with his sax. It’s not about the mix of instruments. Other trios, quartets, and sextets use different mixes of instruments. New Orleans jazz tends towards trombones instead of sax; some prefer cornet or trumpet. But no matter what the choice of instruments, size matters.

    Team sizes are something I’ve been thinking about for a while. We’re on a quest to rethink how our teams are organized. They just feel too big, too unwieldy. In fact, they really don’t feel like teams at all. Most of the time, they feel more like collections of people who happen to report to the same manager. I attribute this to a couple of factors. One is over-specialization; we have a tendency to have people work in silos. Although the teams are product-focused, within them our developers are both generalists and specialists. On the one hand, we expect them to be able to build an entire vertical slice of the application; on the other hand, each developer tends to be responsible for their own vertical slice. As a result, developers often work on their own piece of the puzzle, in isolation.

    This sort of feels like working on a jigsaw in a group – each person taking a set of colors and piecing them together to reveal a portion of the overall picture. But what inevitably happens when you go to meld all those pieces together? Inevitably, you have some sections that are too big to move easily. These sections end up falling apart under their own weight as you try to move them. Not only that, but there are other challenges – figuring out where that section fits, and how to tie it into the rest of the puzzle. Often, this is when you find a few pieces need to be added – these pieces are “glue,” if you will.

    The other issue that arises is the overhead of maintaining communications in a team. My mother, who worked in IT for around 30 years, once told me that 20% per team member is a good rule of thumb for maintaining communication. While this is only a rule of thumb, it seems to imply that any team over about 6 people is going to become less agile simply because of the communications burden.

    Teams of ten or twelve seem like they fall into the philharmonic organizational model. Complicated pieces of music requiring dozens of players to all be on the same page require a much different model than the jazz quintet. There’s much less room for improvisation, originality or freedom. (There are probably orchestral musicians who will take exception to this characterization; I’m calling it like I see it from the cheap seats.) And there’s one guy up front who is running the show, whose job is to keep all of those dozens of players on the same page, to facilitate communications.

    Somehow, the orchestral model doesn’t feel much like a self-organizing team, either. The first violin may be the best violinist in the orchestra, but they don’t get to perform free-wheeling solos. I’ve never heard of an orchestra getting together for a jam session. But I have heard of teams that organize their work based on the developers available, rather than organizing the developers based on the work required. I have heard of teams where desired functionality is deferred – or worse yet, schedules are missed – because one critical person doesn’t have any bandwidth available. I’ve heard of teams where people simply don’t have the big picture, because there is too much communication overhead for everyone to be aware of everything that is happening on a project.

    I once heard Paul Rayner say something to the effect of “you have a process that is perfectly designed to give you exactly the results you have.” Given a choice, I want a process that’s much more like jazz than orchestral music. I want a process that doesn’t burden me with lots of forms and checkboxes and stuff. Give me the simplest, most lightweight process that will work – and a smaller team of the best developers I can find. This seems like the kind of process that will get the kind of result I want to be part of.

    Read the article

< Previous Page | 77 78 79 80 81 82 83 84 85 86 87 88  | Next Page >