Search Results

Search found 14142 results on 566 pages for 'missing symbols'.


  • Looking for advice on Hyper-v storage replication

    - by Notre1
    I am designing a 2-host Hyper-V R2 cluster with 6-10 guests stored on an SMB iSCSI SAN device (probably Promise VessRAID). I will be getting at least two of the SAN devices and need to eliminate the storage as a single point of failure. Ideally, that would involve real-time failover for the storage, like Windows failover clustering does for the hosts. This design will be used at around six of our sites, and I would like to allow for us to eventually set up a cluster at a colocation site and replicate each site's VMs there for DR. (Ideally a live multi-site cluster, but a manual import of the VMs would be fine for this sort of DR.)

    The tools that come with enterprise SANs, like EMC and NetApp, seem to be the most commonly used items for a Hyper-V cluster, but I can't afford their prices with my budget. Outside of them, the two tools that seem to be most common for Hyper-V storage replication are SteelEye (now SIOS) DataKeeper Cluster Edition and Double-Take Availability. Originally, I was planning on using Cluster Shared Volumes (CSV), but it seems like replication support for these is either not available or brand new in both of these products. It looks like CSVs are supported in Double-Take 5.22 (see this discussion), but I don't think I want to run something that new in production. Right now, it seems like the best option for me is not to implement CSVs, implement some sort of storage replication, and upgrade to CSVs at a later date once replicating them is more mature. I would love to have live migration, and CSVs are not required for live migration if you are using one LUN per VM, so I guess this is what I'll do.

    I would prefer to stick to using the Microsoft Windows Server and Hyper-V tools and features as much as possible. From that standpoint, SteelEye looks more appealing than Double-Take because they make the DataKeeper volume(s) available to the Failover Cluster Manager, and then failover clustering is all configured and managed through the native Microsoft tools. Double-Take says that "clustered Hyper-V hosts are not supported," and Double-Take Availability itself seems to be what is used for the actual clustering and failover.

    Does anyone know if any of these replication tools work with more than two hosts in the cluster? All the information I can find on the web only uses two hosts in the examples. Are there any better tools than SteelEye and Double-Take for doing what I am trying to do, which is to eliminate the storage as a single point of failure? Neverfail, AppAssure, and DataCore all seem to offer similar functionality, but they don't seem to be as popular as SteelEye and Double-Take.

    I have seen a number of people suggest using StarWind iSCSI SAN software for the shared storage, which includes replication (and CSV replication at that). There are a couple of reasons I have not seriously considered this route: 1) the company I work for is exclusively a Dell shop, and Dell does not have any servers that I can pack with more than six 3.5" SATA drives; 2) in the future, it could be advantageous for us not to be locked into a particular brand or type of storage, and third-party replication software all allows replication to heterogeneous storage devices.

    I am pretty new to iSCSI and clustering, so please let me know if it looks like I am planning something that goes against best practices or overlooking/missing something.


  • volume group disappeared after xfs_check run

    - by John P
    I have a volume group consisting of 5 RAID1 devices grouped together into an LVM and formatted with xfs. The 5th RAID device lost its RAID config (cat /proc/mdstat does not show anything). The two drives are still present (sdj and sdk), but they have no partitions. The LVM appeared to be happily using sdj up until recently (doing a pvscan showed the first 4 RAID1 devices + /dev/sdj).

    I removed the LVM from fstab, rebooted, then ran xfs_check on the LV. It ran for about half an hour, then stopped with an error. I tried rebooting again, and this time when it came up, the logical volume was no longer there. It is now looking for /dev/md5, which is gone (though it had been using /dev/sdj earlier). /dev/sdj was having read errors, but after replacing the SATA cable those went away, so the drive appears to be fine for now.

    Can I modify /etc/lvm/backup/dedvol, change the device to /dev/sdj, and do a vgcfgrestore? I could try doing

        pvcreate --uuid KZron2-pPTr-ZYeQ-PKXX-4Woq-6aNc-AG4rRJ /dev/sdj

    to make it recognize it, but I'm afraid that would erase the data on the drive.

    UPDATE: just changing the PV to point to /dev/sdj did not work:

        vgcfgrestore --file /etc/lvm/backup/dedvol dedvol
          Couldn't find device with uuid 'KZron2-pPTr-ZYeQ-PKXX-4Woq-6aNc-AG4rRJ'.
          Cannot restore Volume Group dedvol with 1 PVs marked as missing.
          Restore failed.

        pvscan
          /dev/sdj: read failed after 0 of 4096 at 0: Input/output error
          Couldn't find device with uuid 'KZron2-pPTr-ZYeQ-PKXX-4Woq-6aNc-AG4rRJ'.
          Couldn't find device with uuid 'KZron2-pPTr-ZYeQ-PKXX-4Woq-6aNc-AG4rRJ'.
          Couldn't find device with uuid 'KZron2-pPTr-ZYeQ-PKXX-4Woq-6aNc-AG4rRJ'.
          Couldn't find device with uuid 'KZron2-pPTr-ZYeQ-PKXX-4Woq-6aNc-AG4rRJ'.
          PV /dev/sdd2        VG VolGroup00   lvm2 [74.41 GB / 0 free]
          PV /dev/md2         VG dedvol       lvm2 [931.51 GB / 0 free]
          PV /dev/md3         VG dedvol       lvm2 [931.51 GB / 0 free]
          PV /dev/md0         VG dedvol       lvm2 [931.51 GB / 0 free]
          PV /dev/md4         VG dedvol       lvm2 [931.51 GB / 0 free]
          PV unknown device   VG dedvol       lvm2 [1.82 TB / 63.05 GB free]
          Total: 6 [5.53 TB] / in use: 6 [5.53 TB] / in no VG: 0 [0 ]

        vgscan
          Reading all physical volumes. This may take a while...
          /dev/sdj: read failed after 0 of 4096 at 0: Input/output error
          /dev/sdj: read failed after 0 of 4096 at 2000398843904: Input/output error
          Found volume group "VolGroup00" using metadata type lvm2
          Found volume group "dedvol" using metadata type lvm2

        vgdisplay dedvol
          --- Volume group ---
          VG Name               dedvol
          System ID
          Format                lvm2
          Metadata Areas        5
          Metadata Sequence No  10
          VG Access             read/write
          VG Status             resizable
          MAX LV                0
          Cur LV                1
          Open LV               0
          Max PV                0
          Cur PV                5
          Act PV                5
          VG Size               5.46 TB
          PE Size               4.00 MB
          Total PE              1430796
          Alloc PE / Size       1414656 / 5.40 TB
          Free  PE / Size       16140 / 63.05 GB
          VG UUID               o1U6Ll-5WH8-Pv7Z-Rtc4-1qYp-oiWA-cPD246

    From /etc/lvm/backup/dedvol:

        dedvol {
            id = "o1U6Ll-5WH8-Pv7Z-Rtc4-1qYp-oiWA-cPD246"
            seqno = 10
            status = ["RESIZEABLE", "READ", "WRITE"]
            flags = []
            extent_size = 8192    # 4 Megabytes
            max_lv = 0
            max_pv = 0
            physical_volumes {
                pv0 {
                    id = "Msiee7-Zovu-VSJ3-Y2hR-uBVd-6PaT-Ho9v95"
                    device = "/dev/md2"    # Hint only
                    status = ["ALLOCATABLE"]
                    flags = []
                    dev_size = 1953519872    # 931.511 Gigabytes
                    pe_start = 384
                    pe_count = 238466    # 931.508 Gigabytes
                }
                pv1 {
                    id = "ZittCN-0x6L-cOsW-v1v4-atVN-fEWF-e3lqUe"
                    device = "/dev/md3"    # Hint only
                    status = ["ALLOCATABLE"]
                    flags = []
                    dev_size = 1953519872    # 931.511 Gigabytes
                    pe_start = 384
                    pe_count = 238466    # 931.508 Gigabytes
                }
                pv2 {
                    id = "NRNo0w-kgGr-dUxA-mWnl-bU5v-Wld0-XeKVLD"
                    device = "/dev/md0"    # Hint only
                    status = ["ALLOCATABLE"]
                    flags = []
                    dev_size = 1953519872    # 931.511 Gigabytes
                    pe_start = 384
                    pe_count = 238466    # 931.508 Gigabytes
                }
                pv3 {
                    id = "2EfLFr-JcRe-MusW-mfAs-WCct-u4iV-W0pmG3"
                    device = "/dev/md4"    # Hint only
                    status = ["ALLOCATABLE"]
                    flags = []
                    dev_size = 1953519872    # 931.511 Gigabytes
                    pe_start = 384
                    pe_count = 238466    # 931.508 Gigabytes
                }
                pv4 {
                    id = "KZron2-pPTr-ZYeQ-PKXX-4Woq-6aNc-AG4rRJ"
                    device = "/dev/md5"    # Hint only
                    status = ["ALLOCATABLE"]
                    flags = []
                    dev_size = 3907028992    # 1.81935 Terabytes
                    pe_start = 384
                    pe_count = 476932    # 1.81935 Terabytes
                }
            }
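    For reference, the standard recovery path for a PV whose device has vanished is to recreate the physical volume label in place with the old UUID and then restore the VG metadata. A hedged sketch, assuming /dev/sdj really is the old pv4, using the UUID and backup file from the output above (pvcreate with --restorefile rewrites only the LVM label, but imaging the drive first is still prudent):

        # Image the start of the disk first, in case this needs to be undone.
        dd if=/dev/sdj of=/root/sdj-head.img bs=1M count=8

        # Recreate the PV in place, reusing the UUID that dedvol expects and
        # aligning extents against the saved metadata.
        pvcreate --uuid KZron2-pPTr-ZYeQ-PKXX-4Woq-6aNc-AG4rRJ \
                 --restorefile /etc/lvm/backup/dedvol /dev/sdj

        # Restore the volume group description, then activate and re-scan.
        vgcfgrestore --file /etc/lvm/backup/dedvol dedvol
        vgchange -ay dedvol
        pvscan

    Given the earlier read errors on /dev/sdj, running this against a dd-copied clone of the drive rather than the original is the safer variant of the same procedure.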


  • Identifying the cause of my DNS failure (domain not propagating)

    - by thejartender
    I have set up a DNS server with the help of two tutorials:

        http://linuxconfig.org/linux-dns-server-bind-configuration
        http://ulyssesonline.com/2007/11/07/how-to-setup-a-dns-server-in-ubuntu/

    I am using Ubuntu and Bind9, and had issues that I tried to sort out on my own, thanks to a question I posted here earlier that pointed out my mistake of using RFC 1918 addresses in my previous SOA record:

        $TTL 3D
        @               IN SOA ns.thejarbar.org. email. (
                        13112012
                        28800
                        3600
                        604800
                        38400 );
        thejarbar.org.  IN A     10.0.0.42
        @               IN NS    ns.thejarbar,org.
        yuccalaptop     IN A     10.0.0.19
        ns              IN A     10.0.0.42
        gw              IN A     10.0.0.138
        www             IN CNAME thejarbar.org.

        $TTL 600
        0.0.10.in-addr.arpa.  IN SOA ns.thejarbar.org. email. (
                        13112012
                        28800
                        3600
                        604800
                        38400 );
        0.0.10.in-addr.arpa.  IN NS  ns.thejarbar.org.
        42   IN PTR thejarbar.org.
        19   IN PTR yuccalaptop.thejarbar.org.
        138  IN PTR gw.thejarbar.org.

    I read the ranges that are used under RFC 1918, modified my router's resource pool to assign LAN devices IPs within the 30.0.0.0 range, and modified my SOA to:

        $TTL 600
        @               IN SOA ns.thejarbar.org. email. (
                        13112012
                        28800
                        3600
                        604800
                        38400 );
        thejarbar.org.  IN A     30.0.0.42
        @               IN NS    ns.thejarbar,org.
        yuccalaptop     IN A     10.0.0.19
        ns              IN A     30.0.0.42
        gw              IN A     30.0.0.138
        www             IN CNAME thejarbar.org.

        $TTL600
        0.0.10.in-addr.arpa.  IN SOA ns.thejarbar.org. email. (
                        13112012
                        28800
                        3600
                        604800
                        38400 );
        0.0.30.in-addr.arpa.  IN NS  ns.thejarbar.org.
        42   IN PTR thejarbar.org.
        19   IN PTR yuccalaptop.thejarbar.org.
        138  IN PTR gw.thejarbar.org.

    I can ping my nameserver ns.thejarbar.org and it gives me the correct ISP IP address, but my domain never seems to propagate to my nameserver. I have searched for a concise tutorial that covers setting up a DNS server with a nameserver that hosts (my) site. I am fully aware that this is not recommended, and I am using this for my learning purposes. Getting to the question: due to the lack of information in the tutorials I looked at (nothing about RFC 1918, and no example of swapping these with the ISP IP), is my router modification going to help me? It does not seem to be. I have also tried, as recommended, using my ISP IP instead of the values I posted. My site never propagated to my nameserver. What could be causing this? I have run dig thejarbar.org @88.89.190.171 and get an authoritative response. Can anyone assist me with the final steps I may be missing here?
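    One way to separate "zone file problem" from "delegation problem" is to walk the chain with dig. A hedged sketch, using the nameserver IP from the post; the Afilias hostname is one of the public .org TLD servers and can be swapped for any of them:

        # What the parent (.org) servers believe is authoritative for the zone:
        dig +short NS thejarbar.org @a0.org.afilias-nst.info.

        # What the nameserver itself answers, bypassing any caching resolver:
        dig A thejarbar.org @88.89.190.171 +norecurse

        # What the rest of the world sees through a public resolver:
        dig A thejarbar.org @8.8.8.8

    If the first query returns nothing, or different NS names, then the registrar's delegation (not the zone files) is what needs fixing.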


  • Issue in nginx proxying to apache

    - by Luis Masuelli
    My current nginx configuration is as follows.

    Specific configuration for (currently two) domains:

        server {
            listen 443 ssl;
            server_name studiotv.service.tebusco.lan phpmyadmin.service.tebusco.lan;

            ssl_certificate     /home/administrador/nginx-confs/ssl/service.tebusco.lan.crt;
            ssl_certificate_key /home/administrador/nginx-confs/ssl/service.tebusco.lan.key;

            location / {
                proxy_pass http://127.0.0.1:8180;
                proxy_set_header Host $http_host:8180;
            }
        }

    Default configuration for unmatched ssl connections:

        server {
            listen 443 default ssl;
            ssl_certificate     /home/administrador/nginx-confs/ssl/service.tebusco.lan.crt;
            ssl_certificate_key /home/administrador/nginx-confs/ssl/service.tebusco.lan.key;

            location / {
                return 403;
            }
        }

    http configuration:

        server {
            listen 80;
            rewrite ^ https://$host$request_uri? permanent;
        }

    The intention is clear:

        1. Redirect http traffic to https.
        2. Proxy each https:// call from phpmyadmin.service.tebusco.lan and studiotv.service.tebusco.lan to apache2. This includes passing a host header, which is detected.
        3. Each unmatched ssl connection must return a 403 in nginx and not even reach apache2.

    On the apache2 side of life, I have a default site, and a non-default site which should match studiotv.service.tebusco.lan.

    000-default.conf file (available and enabled):

        <VirtualHost 127.0.0.1:8180>
            # The ServerName directive sets the request scheme, hostname and port that
            # the server uses to identify itself. This is used when creating
            # redirection URLs. In the context of virtual hosts, the ServerName
            # specifies what hostname must appear in the request's Host: header to
            # match this virtual host. For the default virtual host (this file) this
            # value is not decisive as it is used as a last resort host regardless.
            # However, you must set it for any further virtual host explicitly.
            ServerName localhost

            ServerAdmin webmaster@localhost
            DocumentRoot /var/www/html

            <Directory /var/www/html>
                Order deny,allow
                Require all granted
            </Directory>
        </VirtualHost>
        # vim: syntax=apache ts=4 sw=4 sts=4 sr noet

    studiotv.conf file (available and enabled):

        <VirtualHost *:8180>
            ServerName studiotv.service.tebusco.lan
            ServerAdmin [email protected]
            DocumentRoot /var/www/studiotv

            <Directory /var/www/studiotv/>
                Options -Indexes +FollowSymLinks
                AllowOverride None
                Order deny,allow
                Allow from all
                Require all granted
            </Directory>

            # Available loglevels: trace8, ..., trace1, debug, info, notice, warn,
            # error, crit, alert, emerg.
            # It is also possible to configure the loglevel for particular
            # modules, e.g.
            #LogLevel info ssl:warn

            # We don't use ${APACHE_LOG_DIR}; we use /var/log/<host> instead.
            ErrorLog  /var/log/apache2/studiotv/error.log
            CustomLog /var/log/apache2/studiotv/access.log combined
        </VirtualHost>
        # vim: syntax=apache ts=4 sw=4 sts=4 sr noet

    However, when I hit the browser with http://studiotv.service.tebusco.lan, the default php page is shown instead.

    Question: What am I missing? (apache 2.4.7, nginx 1.6.0, Ubuntu Server 14.04.)
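    Two details in the configuration above are worth a second look. First, Apache matches a request to a <VirtualHost> by IP address and port before it ever compares ServerName values, and <VirtualHost 127.0.0.1:8180> is a more specific address match than <VirtualHost *:8180>, so proxied connections arriving on 127.0.0.1:8180 can be claimed by the default site no matter what Host header is sent. Second, the forwarded Host header is conventionally passed without a port appended. A minimal sketch of the usual proxy block, where the X-Forwarded-* lines are optional additions and not part of the original config:

        location / {
            proxy_pass http://127.0.0.1:8180;
            # Forward the hostname the client asked for; Apache's name-based
            # virtual hosts compare their ServerName against this value.
            proxy_set_header Host $host;
            # Optional: let the backend log the real client address and scheme.
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-Proto https;
        }

    Changing 000-default.conf to <VirtualHost *:8180> (or giving studiotv.conf the same 127.0.0.1:8180 address) would put both vhosts in the same address-match set, so that name-based selection can actually happen.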


  • PHP, Apache and curl: Differences between Windows and Linux?

    - by beginner_
    I'm trying to run my PHP app on Ubuntu Server 11.10. This app works fine under Apache + PHP on Windows. I have other applications that I can simply copy and paste between the two OSes, and they work on both (these don't use cURL). However, this one uses the PHP library Tonic (RESTful web services) and makes use of PHP's cURL module. The issue is that I'm not getting an error message, which makes it impossible to find the problem.

    I (must) use NTLM authentication, and this is done with the AuthenNTLM Apache module:

        Order allow,deny
        Allow from all
        PerlAuthenHandler Apache2::AuthenNTLM
        AuthType ntlm
        AuthName "Protected Access"
        require valid-user
        PerlAddVar ntdomain "domainName server"
        PerlSetVar defaultdomain domainName
        PerlSetVar ntlmsemtimeout 2
        PerlSetVar ntlmdebug 1
        PerlSetVar splitdomainprefix 0

    All files that cURL needs to fetch override AuthenNTLM authentication:

        order deny,allow
        deny from all
        allow from 127.0.0.1
        Satisfy any

    Since these files are only fetched by cURL from the same server, access can be limited to localhost. Possible issues are:

        1. NTLM auth isn't overridden for files requested through cURL (even though AllowOverride All is set)
        2. curl works differently on Linux
        3. other?

    The cURL code:

        $ch = curl_init();
        curl_setopt($ch, CURLOPT_COOKIE, $strCookie);
        curl_setopt($ch, CURLOPT_URL, $baseUrl . $queryString);
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
        $html = curl_exec($ch);
        curl_close($ch);

    The Apache log says:

        [error] Bad/Missing NTLM/Basic Authorization Header for /myApp/webservice/local/viewList.php

    But this directory should override NTLM authentication. Using the curl command line from Windows to access the same resource, I get:

        <!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
        <html>
        <head><title>406 Not Acceptable</title></head>
        <body>
        <h1>Not Acceptable</h1>
        <p>An appropriate representation of the requested resource /myApp/webservice/myResource could not be found on this server.</p>
        Available variants:
        <ul>
        <li><a href="myResource.php">myResource.php</a> , type application/x-httpd-php</li>
        </ul>
        <hr>
        <address>Apache/2.2.20 (Ubuntu) Server at localhost Port 80</address>
        </body>
        </html>

    Note: this is a duplicate of http://stackoverflow.com/questions/9821979/php-curl-on-linux-what-is-the-difference-to-curl-on-windows as it was suggested I post it here.

    EDIT: Please see "Ubuntu Server: Apache2 seems to attach .php to URI", where I discovered why it does not work, but I need help so the issue does not occur anymore.

    ANSWER: The issue is the default Apache configuration on Ubuntu:

        Options Indexes FollowSymLinks MultiViews

    MultiViews changes the request URI from myResource to myResource.php. Solutions:

        1. disable MultiViews in .htaccess: Options -MultiViews
        2. remove MultiViews from the default config
        3. rename the file, for example to myResourceClass

    I chose the last option because it should work regardless of configuration, and I only have 3 such files, so the change took about 30 seconds...
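    When curl_exec() fails without a message, the quickest way to surface the transport-level story is usually the command-line client on the server itself. A hedged sketch, reusing the paths from the post (domain, user, and password are placeholders):

        # Show the full request/response exchange, including any MultiViews
        # variant negotiation (the 406 above is its signature).
        curl -v 'http://127.0.0.1/myApp/webservice/local/viewList.php'

        # Exercise the same NTLM handshake the Windows side performs.
        curl -v --ntlm -u 'domainName\user:password' 'http://localhost/myApp/webservice/myResource'

    Inside the application, checking curl_errno($ch) and curl_error($ch) after curl_exec() gives the same visibility from PHP.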


  • My PC suddenly doesn't detect the primary drive (SSD)

    - by smoth190
    My computer has been working fine for months, and it worked today, but tonight I went to start it up to find that my OCZ Vertex 2 isn't being found. When I turn on my computer, the loading screen gets stuck at "Detecting IDE drives...". After a while it keeps going and lists the drives it finds. The first one in the list should be my Vertex 2, but it just says "None". The computer proceeds to get stuck on "Loading operating system...", which is understandable because the drive with the OS is "gone".

    My first thought was drive failure, but every time drives have crashed on me, they're still detected; they just don't work. This drive is an SSD, it's pretty new, and I had no problems beforehand. I find it hard to believe it failed. I'm sure it's possible, but I hope this isn't the case. There has been nothing strange going on at all with my PC; it's been running perfectly until now. I was just about to do my monthly chkdsk and defrag today.

    I popped in my Windows 7 Home Premium disc and booted from it. When I launched the repair tool, it didn't list any operating systems (because the drive is 100% missing...). When I've had disks crash before, it still listed the OS; you just couldn't do anything with it. I tried to restore from an image, but I don't have any of those, either. I opened the command console and listed the drives with wmic logicaldisk get name. Only C: and D: came up. C: was my 1TB storage drive (luckily, all my stuff is there; only the OS is on the SSD!) and D: was the disc drive. So I still had an MIA drive...

    The SSD didn't come with any driver disks, so I can't install drivers. If there's a way to do this from a CD I can burn with my other PC, please let me know.

    What the heck do I do? Although only the OS is on my SSD, a new SSD is expensive. I'll probably also have to buy a new copy of Windows (an upgrade would be nice, though...) because I've found it eats my registration key when my PC crashes (and my thousands of dollars of Adobe programs; I'll be on the phone with tech support for a week to get those keys back). And I'll lose my registry, all my settings, and all sorts of other stuff that I'll spend weeks restoring.

    My computer is a pain in the butt to take out and open up, so if I can't fix it, I'll try fiddling with the plug or putting it into a new computer, but not right now. Any help is greatly appreciated! The day they make crash-proof drives will be the day I live without worry.
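    Before pulling the machine apart, the same Windows 7 repair console can show whether the disk is visible to Windows at all. A hedged sketch using diskpart, which lists physical disks rather than mounted volumes (wmic logicaldisk only reports the latter):

        X:\> diskpart
        DISKPART> rem Physical disks the controller reports to Windows:
        DISKPART> list disk
        DISKPART> rem Volumes and their drive letters, if any:
        DISKPART> list volume
        DISKPART> exit

    If the Vertex 2 is absent from "list disk" as well as from the BIOS drive list, the problem sits below Windows (drive, cable, port, or firmware), and no driver CD will change that.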


  • mySQL Optimization Suggestions

    - by Brian Schroeter
    I'm trying to optimize our MySQL configuration for our large Magento website. The reason I believe that MySQL needs to be configured further is that New Relic has shown that our SELECT queries are taking a long time (20,000+ ms) in some categories. I ran MySQLTuner 1.3.0 and got the following results. (Disclaimer: I restarted MySQL earlier after tweaking some settings, so the results here may not be 100% accurate.)

        >> MySQLTuner 1.3.0 - Major Hayden <[email protected]>
        >> Bug reports, feature requests, and downloads at http://mysqltuner.com/
        >> Run with '--help' for additional options and output filtering
        [OK] Currently running supported MySQL version 5.5.37-35.0
        [OK] Operating on 64-bit architecture

        -------- Storage Engine Statistics -------------------------------------------
        [--] Status: +ARCHIVE +BLACKHOLE +CSV -FEDERATED +InnoDB +MRG_MYISAM
        [--] Data in MyISAM tables: 7G (Tables: 332)
        [--] Data in InnoDB tables: 213G (Tables: 8714)
        [--] Data in PERFORMANCE_SCHEMA tables: 0B (Tables: 17)
        [--] Data in MEMORY tables: 0B (Tables: 353)
        [!!] Total fragmented tables: 5492

        -------- Security Recommendations -------------------------------------------
        [!!] User '@host5.server1.autopartsnetwork.com' has no password set.
        [!!] User '@localhost' has no password set.
        [!!] User 'root@%' has no password set.

        -------- Performance Metrics -------------------------------------------------
        [--] Up for: 5h 3m 4s (5M q [317.443 qps], 42K conn, TX: 18B, RX: 2B)
        [--] Reads / Writes: 95% / 5%
        [--] Total buffers: 35.5G global + 184.5M per thread (1024 max threads)
        [!!] Maximum possible memory usage: 220.0G (174% of installed RAM)
        [OK] Slow queries: 0% (6K/5M)
        [OK] Highest usage of available connections: 5% (61/1024)
        [OK] Key buffer size / total MyISAM indexes: 512.0M/3.1G
        [OK] Key buffer hit rate: 100.0% (102M cached / 45K reads)
        [OK] Query cache efficiency: 66.9% (3M cached / 5M selects)
        [!!] Query cache prunes per day: 3486361
        [OK] Sorts requiring temporary tables: 0% (0 temp sorts / 812K sorts)
        [!!] Joins performed without indexes: 1328
        [OK] Temporary tables created on disk: 11% (126K on disk / 1M total)
        [OK] Thread cache hit rate: 99% (61 created / 42K connections)
        [!!] Table cache hit rate: 19% (9K open / 49K opened)
        [OK] Open file limit used: 2% (712/25K)
        [OK] Table locks acquired immediately: 100% (5M immediate / 5M locks)
        [!!] InnoDB buffer pool / data size: 32.0G/213.4G
        [OK] InnoDB log waits: 0

        -------- Recommendations -----------------------------------------------------
        General recommendations:
            Run OPTIMIZE TABLE to defragment tables for better performance
            MySQL started within last 24 hours - recommendations may be inaccurate
            Reduce your overall MySQL memory footprint for system stability
            Enable the slow query log to troubleshoot bad queries
            Increasing the query_cache size over 128M may reduce performance
            Adjust your join queries to always utilize indexes
            Increase table_cache gradually to avoid file descriptor limits
            Read this before increasing table_cache over 64: http://bit.ly/1mi7c4C
        Variables to adjust:
            *** MySQL's maximum memory usage is dangerously high ***
            *** Add RAM before increasing MySQL buffer variables ***
            query_cache_size (> 512M) [see warning above]
            join_buffer_size (> 128.0M, or always use indexes with joins)
            table_cache (> 12288)
            innodb_buffer_pool_size (>= 213G)

    My my.cnf configuration is as follows:

        [client]
        port = 3306

        [mysqld_safe]
        nice = 0

        [mysqld]
        tmpdir = /var/lib/mysql/tmp
        user = mysql
        port = 3306
        skip-external-locking
        character-set-server = utf8
        collation-server = utf8_general_ci
        event_scheduler = 0
        key_buffer = 512M
        max_allowed_packet = 64M
        thread_stack = 512K
        thread_cache_size = 512
        sort_buffer_size = 24M
        read_buffer_size = 8M
        read_rnd_buffer_size = 24M
        join_buffer_size = 128M
        # for some nightly processes client sessions set the join buffer to 8 GB
        auto-increment-increment = 1
        auto-increment-offset = 1
        myisam-recover = BACKUP
        max_connections = 1024
        # max connect errors artificially high to support behaviors of NetScaler monitors
        max_connect_errors = 999999
        concurrent_insert = 2
        connect_timeout = 5
        wait_timeout = 180
        net_read_timeout = 120
        net_write_timeout = 120
        back_log = 128
        # this table_open_cache might be too low because of MySQL bugs #16244691 and #65384
        table_open_cache = 12288
        tmp_table_size = 512M
        max_heap_table_size = 512M
        bulk_insert_buffer_size = 512M
        open-files-limit = 8192
        open-files = 1024
        query_cache_type = 1
        # large query limit supports SOAP and REST API integrations
        query_cache_limit = 4M
        # larger than 512 MB query cache size is problematic; this is typically ~60% full
        query_cache_size = 512M
        # set to true on read slaves
        read_only = false
        slow_query_log_file = /var/log/mysql/slow.log
        slow_query_log = 0
        long_query_time = 0.2
        expire_logs_days = 10
        max_binlog_size = 1024M
        binlog_cache_size = 32K
        sync_binlog = 0
        # SSD RAID10 technically has a write capacity of 10000 IOPS
        innodb_io_capacity = 400
        innodb_file_per_table
        innodb_table_locks = true
        innodb_lock_wait_timeout = 30
        # These servers have 80 CPU threads; match 1:1
        innodb_thread_concurrency = 48
        innodb_commit_concurrency = 2
        innodb_support_xa = true
        innodb_buffer_pool_size = 32G
        innodb_file_per_table
        innodb_flush_log_at_trx_commit = 1
        innodb_log_buffer_size = 2G
        skip-federated

        [mysqldump]
        quick
        quote-names
        single-transaction
        max_allowed_packet = 64M

    I have a monster of a server here to power our site because our catalog is very large (300,000 simple SKUs), and I'm just wondering if I'm missing anything that I can configure further. :-) Thanks!
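    Of the tuner's recommendations, enabling the slow query log is likely the fastest route to the 20-second SELECTs, and on MySQL 5.5 it can be switched on at runtime without a restart. A hedged sketch, reusing the log path already configured in my.cnf above:

        mysql> SET GLOBAL slow_query_log = 1;
        mysql> SET GLOBAL long_query_time = 0.2;
        mysql> SET GLOBAL log_queries_not_using_indexes = 1;  -- catches the 1328 index-less joins

        $ mysqldumpslow -s t /var/log/mysql/slow.log | head -40   # worst offenders first

    Note that SET GLOBAL values revert at the next restart unless the corresponding my.cnf line (slow_query_log = 1) is updated as well.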


  • Exchange Server 2007 Send and Receive Connectors

    - by Mistiry
    I have gotten awesome advice from users on here for getting Exchange on Windows SBS 2008 set up. I think this is the final piece, and I'm ready for roll-out!

    I need to set up Exchange so that it RECEIVES mail from our existing mail server as a forward (aliases on the existing mail server forward mail from [email protected] to [email protected]; not using the POP3 Connector), and SENDS mail through that server as well (sends from [email protected] to [email protected] and then out to the world, showing in the headers as from [email protected], or at the absolute least having the reply-to set to this). Alternatively, as long as the .net email address doesn't show in the From and replies are directed to the .com account, email can go from Exchange to the outside world without routing through the existing mail server.

        External domain: domain.com
        Internal domain: domain.local
        Internet domain name set in SBS Console: domain.net

    When I go to http://remote.domain.net I get the Remote Web Workspace and can log in to both SharePoint and OWA. I can send an email from OWA to a Gmail account. I receive it from [email protected], which is an alias of [email protected]. I cannot, however, send an email from OWA to ANY domain.com email addresses. I am also not receiving any email to this Exchange account (except for NDRs). When I try sending an email to a domain.com account, here is the error (I had to replace all < and > with { and }):

        Delivery has failed to these recipients or distribution lists:

        [email protected]
        The recipient's e-mail address was not found in the recipient's e-mail system.
        Microsoft Exchange will not try to redeliver this message for you. Please check
        the e-mail address and try resending this message, or provide the following
        diagnostic text to your system administrator.

        Generating server: IFEXCHANGE.domain.local

        [email protected]
        #550 5.1.1 RESOLVER.ADR.RecipNotFound; not found ##

        Original message headers:

        Received: from IFEXCHANGE.domain.local ([fe80::4d34:abc5:f7fd:e51a]) by
         IFEXCHANGE.domain.local ([fe80::4d34:abc5:f7fd:e51a%10]) with mapi; Tue, 17
         Aug 2010 14:14:14 -0400
        Content-Type: application/ms-tnef; name="winmail.dat"
        Content-Transfer-Encoding: binary
        From: John Doe {[email protected]}
        To: "[email protected]" {[email protected]}
        Date: Tue, 17 Aug 2010 14:14:12 -0400
        Subject: asdf
        Thread-Topic: asdf
        Thread-Index: AQHLPjf+h6hA5MJ1JUu1WS4I4CiWeA==
        Message-ID: {E4E10393768D784D8760A51938BA456A029934BA30@IFEXCHANGE.domain.local}
        Accept-Language: en-US
        Content-Language: en-US
        X-MS-Has-Attach:
        X-MS-TNEF-Correlator: {E4E10393768D784D8760A51938BA456A029934BA30@IFEXCHANGE.domain.local}
        MIME-Version: 1.0

    I hope I explained the situation well enough for someone to be able to explain to me what I'm missing. If I could, I'd be putting a 10K bounty on this, but unfortunately I've got only 74 reputation (hey, I'm a newbie here!). I'm pretty sure the obvious "RecipNotFound" error is why it's not working; my question is how to resolve it. The email account exists and receives mail just fine, yet when I send to it from the Exchange server it fails.

    EDIT: In Organization Configuration > Hub Transport, the Email Address Policies list has two entries. "Windows SBS Email Address Policy" is set up to include all recipient types, no conditions, and SMTP %[email protected]. "Default Policy" is set to include all recipient types, no conditions, and SMTP @domain.net.

    There are three authoritative accepted domains:

        domain.com
        domain.local (Default)
        domain.net

    The Remote Domains tab has two entries: Default, with domain *, and Windows SBS Company Web Domain, with domain companyweb.
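    For what it's worth, this RecipNotFound symptom usually traces back to the accepted-domain type: with domain.com marked Authoritative, Exchange believes it owns every domain.com mailbox and rejects any it cannot resolve locally, instead of handing them back to the existing mail server. A hedged Exchange Management Shell sketch of the internal-relay variant (the connector name and smart host below are placeholders, not values from the post):

        # Stop treating domain.com as fully local; unresolved recipients get relayed onward.
        Set-AcceptedDomain -Identity "domain.com" -DomainType InternalRelay

        # A send connector must then route domain.com mail to the existing server.
        New-SendConnector -Name "Relay to legacy mail server" -Usage Custom `
            -AddressSpaces "domain.com" -SmartHosts "mail.domain.com" `
            -DNSRoutingEnabled $false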


  • Can't configure frame relay T1 on Cisco 1760

    - by sonar
    For the past few days I've been trying to configure a data T1 via frame relay. I've been pretty unsuccessful at it so far, and it's been a while since I've done this, so please bear with me. The ISP provided me the following information:

        1. IP address
        2. Gateway address
        3. Encapsulation: Frame Relay
        4. DLCI 100
        5. BZ8 ESF (I think the BZ8 was supposed to be B8ZS)
        6. Time slots 1-24

    And what I have configured up until now is the following:

        interface Serial0/0
         ip address <ip address> 255.255.255.252
         encapsulation frame-relay
         service-module t1 timeslots 1-24
         frame-relay interface-dlci 100

    sh service-module s0/0 outputs:

        Module type is T1/fractional
        Hardware revision is 0.128, Software revision is 0.2,
        Image checksum is 0x73D70058, Protocol revision is 0.1
        Receiver has no alarms.
        Framing is ESF, Line Code is B8ZS, Current clock source is line,
        Fraction has 24 timeslots (64 Kbits/sec each), Net bandwidth is 1536 Kbits/sec.
        Last module self-test (done at startup): Passed
        Last clearing of alarm counters 00:17:17
            loss of signal        : 0,
            loss of frame         : 0,
            AIS alarm             : 0,
            Remote alarm          : 2, last occurred 00:10:10
            Module access errors  : 0,
        Total Data (last 1 15 minute intervals):
            0 Line Code Violations, 0 Path Code Violations
            0 Slip Secs, 0 Fr Loss Secs, 0 Line Err Secs, 0 Degraded Mins
            0 Errored Secs, 0 Bursty Err Secs, 0 Severely Err Secs, 0 Unavail Secs
        Data in current interval (138 seconds elapsed):
            0 Line Code Violations, 0 Path Code Violations
            0 Slip Secs, 0 Fr Loss Secs, 0 Line Err Secs, 0 Degraded Mins
            0 Errored Secs, 0 Bursty Err Secs, 0 Severely Err Secs, 0 Unavail Secs

    sh int:

        FastEthernet0/0 is up, line protocol is up
          Hardware is PQUICC_FEC, address is 000d.6516.e5aa (bia 000d.6516.e5aa)
          Internet address is 10.0.0.1/24
          MTU 1500 bytes, BW 100000 Kbit, DLY 100 usec,
             reliability 255/255, txload 1/255, rxload 1/255
          Encapsulation ARPA, loopback not set
          Keepalive set (10 sec)
          Full-duplex, 100Mb/s, 100BaseTX/FX
          ARP type: ARPA, ARP Timeout 04:00:00
          Last input 00:20:00, output 00:00:00, output hang never
          Last clearing of "show interface" counters never
          Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 0
          Queueing strategy: fifo
          Output queue: 0/40 (size/max)
          5 minute input rate 0 bits/sec, 0 packets/sec
          5 minute output rate 0 bits/sec, 0 packets/sec
             0 packets input, 0 bytes
             Received 0 broadcasts, 0 runts, 0 giants, 0 throttles
             0 input errors, 0 CRC, 0 frame, 0 overrun, 0 ignored
             0 watchdog
             0 input packets with dribble condition detected
             191 packets output, 20676 bytes, 0 underruns
             0 output errors, 0 collisions, 1 interface resets
             0 babbles, 0 late collision, 0 deferred
             0 lost carrier, 0 no carrier
             0 output buffer failures, 0 output buffers swapped out

        Serial0/0 is up, line protocol is down
          Hardware is PQUICC with Fractional T1 CSU/DSU
          MTU 1500 bytes, BW 1536 Kbit, DLY 20000 usec,
             reliability 255/255, txload 1/255, rxload 1/255
          Encapsulation FRAME-RELAY, loopback not set
          Keepalive set (10 sec)
          LMI enq sent 157, LMI stat recvd 0, LMI upd recvd 0, DTE LMI down
          LMI enq recvd 23, LMI stat sent 0, LMI upd sent 0
          LMI DLCI 1023  LMI type is CISCO  frame relay DTE
          FR SVC disabled, LAPF state down
          Broadcast queue 0/64, broadcasts sent/dropped 2/0, interface broadcasts 0
          Last input 00:24:51, output 00:00:05, output hang never
          Last clearing of "show interface" counters 00:27:20
          Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 0
          Queueing strategy: weighted fair
          Output queue: 0/1000/64/0 (size/max total/threshold/drops)
             Conversations 0/1/256 (active/max active/max total)
             Reserved Conversations 0/0 (allocated/max allocated)
             Available Bandwidth 1152 kilobits/sec
          5 minute input rate 0 bits/sec, 0 packets/sec
          5 minute output rate 0 bits/sec, 0 packets/sec
             23 packets input, 302 bytes, 0 no buffer
             Received 0 broadcasts, 0 runts, 0 giants, 0 throttles
             1725 input errors, 595 CRC, 1099 frame, 0 overrun, 0 ignored, 30 abort
             246 packets output, 3974 bytes, 0 underruns
             0 output errors, 0 collisions, 48 interface resets
             0 output buffer failures, 0 output buffers swapped out
             4 carrier transitions
          DCD=up  DSR=up  DTR=up  RTS=up  CTS=up

        Serial0/0.1 is down, line protocol is down
          Hardware is PQUICC with Fractional T1 CSU/DSU
          MTU 1500 bytes, BW 1536 Kbit, DLY 20000 usec,
             reliability 255/255, txload 1/255, rxload 1/255
          Encapsulation FRAME-RELAY
          Last clearing of "show interface" counters never

        Serial0/0.100 is down, line protocol is down
          Hardware is PQUICC with Fractional T1 CSU/DSU
          Internet address is <ip address>/30
          MTU 1500 bytes, BW 1536 Kbit, DLY 20000 usec,
             reliability 255/255, txload 1/255, rxload 1/255
          Encapsulation FRAME-RELAY
          Last clearing of "show interface" counters never

    Everything seems to be accounted for as far as I can tell, but apparently I'm missing something. My issue is that I'm stuck on "interface up, line protocol down," so the T1 doesn't come up. Any ideas? Thank you.
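    The line that stands out in the output above is "LMI enq sent 157, LMI stat recvd 0": the router is sending LMI status enquiries and hearing nothing back, which is the classic signature of an LMI type mismatch with the telco's switch (the interface shows "LMI type is CISCO", while carriers frequently run ANSI Annex D). A hedged sketch of what to try, since the correct type ultimately has to come from the ISP:

        interface Serial0/0
         ! Try ANSI Annex D; the other candidates are cisco (the default) and q933a.
         frame-relay lmi-type ansi

        ! Then watch from exec mode whether status messages start arriving:
        show frame-relay lmi
        debug frame-relay lmi

    Also note that Serial0/0.100 exists as a subinterface while the DLCI is configured on the main interface; with a point-to-point subinterface, the frame-relay interface-dlci 100 statement and the IP address normally live under Serial0/0.100 itself.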


  • Can not open port 3306 on Ubuntu using iptables

    - by user94626
    I am trying to open port 3306 (for remote MySQL connections) on my Ubuntu 12.04 server machine, but for the life of me I can't get the damned thing to work! Here is what I did.

    1) List the current firewall rules:

        $> sudo iptables -nL -v

    Output:

        Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
         pkts bytes target        prot opt in   out  source     destination
          225 16984 fail2ban-ssh  tcp  --  *    *    0.0.0.0/0  0.0.0.0/0    multiport dports 22
          220 69605 ACCEPT        all  --  lo   *    0.0.0.0/0  0.0.0.0/0
            0     0 REJECT        all  --  lo   *    0.0.0.0/0  127.0.0.0/8  reject-with icmp-port-unreachable
          486 54824 ACCEPT        all  --  *    *    0.0.0.0/0  0.0.0.0/0    state RELATED,ESTABLISHED
            1    60 ACCEPT        tcp  --  *    *    0.0.0.0/0  0.0.0.0/0    tcp dpt:80
           19   988 ACCEPT        tcp  --  *    *    0.0.0.0/0  0.0.0.0/0    tcp dpt:443
            1    52 ACCEPT        tcp  --  *    *    0.0.0.0/0  0.0.0.0/0    state NEW tcp dpt:22
            0     0 ACCEPT        icmp --  *    *    0.0.0.0/0  0.0.0.0/0    icmptype 8
            4   208 LOG           all  --  *    *    0.0.0.0/0  0.0.0.0/0    limit: avg 5/min burst 5 LOG flags 0 level 7 prefix "iptables denied: "
            4   208 REJECT        all  --  *    *    0.0.0.0/0  0.0.0.0/0    reject-with icmp-port-unreachable

        Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
         pkts bytes target  prot opt in  out  source     destination
            0     0 REJECT  all  --  *   *    0.0.0.0/0  0.0.0.0/0  reject-with icmp-port-unreachable

        Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
         pkts bytes target  prot opt in  out  source     destination
          735  182K ACCEPT  all  --  *   *    0.0.0.0/0  0.0.0.0/0

        Chain fail2ban-ssh (1 references)
         pkts bytes target  prot opt in  out  source     destination
          225 16984 RETURN  all  --  *   *    0.0.0.0/0  0.0.0.0/0

    2) Try to connect from a remote machine:

        $> mysql -u root -p -h x.x.x.x

    Output: timeout... failed to connect.

    3) Try to add a new rule to iptables:

        iptables -A INPUT -i eth0 -p tcp -m tcp --dport 3306 -j ACCEPT

    4) Make sure the new rule was added:

        $> sudo iptables -nL -v

    Output:

        Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
         pkts bytes target        prot opt in    out  source     destination
          359 25972 fail2ban-ssh  tcp  --  *     *    0.0.0.0/0  0.0.0.0/0    multiport dports 22
          251 78665 ACCEPT        all  --  lo    *    0.0.0.0/0  0.0.0.0/0
            0     0 REJECT        all  --  lo    *    0.0.0.0/0  127.0.0.0/8  reject-with icmp-port-unreachable
          628 64420 ACCEPT        all  --  *     *    0.0.0.0/0  0.0.0.0/0    state RELATED,ESTABLISHED
            1    60 ACCEPT        tcp  --  *     *    0.0.0.0/0  0.0.0.0/0    tcp dpt:80
           19   988 ACCEPT        tcp  --  *     *    0.0.0.0/0  0.0.0.0/0    tcp dpt:443
            1    52 ACCEPT        tcp  --  *     *    0.0.0.0/0  0.0.0.0/0    state NEW tcp dpt:22
            0     0 ACCEPT        icmp --  *     *    0.0.0.0/0  0.0.0.0/0    icmptype 8
            5   260 LOG           all  --  *     *    0.0.0.0/0  0.0.0.0/0    limit: avg 5/min burst 5 LOG flags 0 level 7 prefix "iptables denied: "
            5   260 REJECT        all  --  *     *    0.0.0.0/0  0.0.0.0/0    reject-with icmp-port-unreachable
            0     0 ACCEPT        tcp  --  eth0  *    0.0.0.0/0  0.0.0.0/0    tcp dpt:3306

        Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
         pkts bytes target  prot opt in  out  source     destination
            0     0 REJECT  all  --  *   *    0.0.0.0/0  0.0.0.0/0  reject-with icmp-port-unreachable

        Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
         pkts bytes target  prot opt in  out  source     destination
          919  213K ACCEPT  all  --  *   *    0.0.0.0/0  0.0.0.0/0

        Chain fail2ban-ssh (1 references)
         pkts bytes target  prot opt in  out  source     destination
          359 25972 RETURN  all  --  *   *    0.0.0.0/0  0.0.0.0/0

    ...which appears to be the case (last line in the Chain INPUT section).

    5) Try to connect again from the remote machine:

        $> mysql -u root -p -h x.x.x.x

    Output: timeout... failed to connect. Failing again.

    6) Try flushing all rules:

        $> sudo iptables -F

    7) This time I CAN CONNECT.

    8) Reboot the server and try to connect: FAILURE.
    I suspect that since the new rule is being appended at the end, it has no effect, as there appears to be a "reject all" sort of rule before it. If this is the case, how do I make sure the new rule is added in the right order? Otherwise, what am I missing? Please help.
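    That suspicion matches how iptables evaluates chains: the first matching rule wins, and -A appends the new ACCEPT after the LOG/REJECT pair, where it can never be reached (its packet counter staying at 0 is the telltale). A hedged sketch of inserting the rule above the reject instead, and persisting it across reboots; iptables-persistent is one common approach on 12.04:

        # Insert at position 5: after the lo/ESTABLISHED rules, before LOG/REJECT.
        sudo iptables -I INPUT 5 -p tcp --dport 3306 -j ACCEPT

        # Persist across reboots (plain iptables rules are lost otherwise).
        sudo apt-get install iptables-persistent
        sudo sh -c 'iptables-save > /etc/iptables/rules.v4'

    Separately, MySQL itself must listen on more than 127.0.0.1 for a remote connection to work; the bind-address line in /etc/mysql/my.cnf is the other usual blocker.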


  • WiX 3 Tutorial: Generating file/directory fragments with Heat.exe

    - by Mladen Prajdic
    In previous posts I've shown you our SuperForm test application solution structure and what the main wxs and wxi include files look like. In this post I'll show you how to automate the inclusion of files to install into your build process.

    For our SuperForm application we have a single exe to install. But in the real world we have tens or hundreds of different files, from dlls to resource files like pictures. It all depends on what kind of application you're building. Writing a directory structure for so many files by hand is out of the question. What we need is an automated way to create this structure. Enter Heat.exe.

    Heat is a command line utility to harvest a file, directory, Visual Studio project, IIS website, or performance counters. You might ask what harvesting means. Harvesting is converting a source (file, directory, ...) into a component structure saved in a WiX fragment (a wxs) file. There are two options you can use:

        1. Create a static wxs fragment with Heat and include it in your project. The pro of this is that you can add or remove components by hand. The con is that you have to do the pro part by hand. Automation always beats manual labor.
        2. Run the Heat command line utility in a pre-build event of your WiX project. I prefer this way. By always recreating the whole fragment you don't have to worry about missing any new files you add. The con of this is that you'll include files that you otherwise might not want to. There is no perfect solution, so pick one and deal with it. I prefer the second way.

    A neat way of overcoming the con of the second option is to have a post-build event on your main application project (SuperForm.MainApp in our case) to copy the files that need to be installed to a special location, and have Heat.exe read them from there. I haven't set this up for this tutorial, and I'm simply including all files from the default SuperForm.MainApp\bin directory (a sketch of such a post-build event appears at the end of this post).

    Remember how we created a system environment variable called SuperFormFilesDir? This is where we'll use it for the first time. The command line text that you have to put into the pre-build event of your WiX project looks like this:

        "$(WIX)bin\heat.exe" dir "$(SuperFormFilesDir)" -cg SuperFormFiles -gg -scom -sreg -sfrag -srd -dr INSTALLLOCATION -var env.SuperFormFilesDir -out "$(ProjectDir)Fragments\FilesFragment.wxs"

    After you install WiX you'll get the WIX environment variable. In pre/post-build events, environment variables are referenced like this: $(WIX). By using this you don't have to think about the installation path of WiX. Remember: for 32-bit applications, the Program Files folder is named differently between 32-bit and 64-bit systems. $(ProjectDir) is obviously the path to your project and is a Visual Studio built-in variable.

    You can view all Heat.exe options by running it without parameters, but I'll explain some that stick out the most:

        dir "$(SuperFormFilesDir)": tells Heat to harvest the whole directory at the given location. That is the location we've set in our system environment variable.
        -cg SuperFormFiles: the name of the component group that will be created. This name is included in our Feature tag, as seen in the previous post.
        -dr INSTALLLOCATION: the directory reference this fragment will fall under. You can see the top-level directory structure in the previous post.
        -var env.SuperFormFilesDir: the name of the variable that will replace the SourceDir text that would otherwise appear in the fragment file.
        -out "$(ProjectDir)Fragments\FilesFragment.wxs": the full path and name under which the fragment file will be saved.

    If you have source control, you have to include the FilesFragment.wxs in your project but remove its source control binding. The auto-generated FilesFragment.wxs for our test app looks like this:

        <?xml version="1.0" encoding="utf-8"?>
        <Wix xmlns="http://schemas.microsoft.com/wix/2006/wi">
          <Fragment>
            <ComponentGroup Id="SuperFormFiles">
              <ComponentRef Id="cmp5BB40DB822CAA7C5295227894A07502E" />
              <ComponentRef Id="cmpCFD331F5E0E471FC42A1334A1098E144" />
              <ComponentRef Id="cmp4614DD03D8974B7C1FC39E7B82F19574" />
              <ComponentRef Id="cmpDF166522884E2454382277128BD866EC" />
            </ComponentGroup>
          </Fragment>
          <Fragment>
            <DirectoryRef Id="INSTALLLOCATION">
              <Component Id="cmp5BB40DB822CAA7C5295227894A07502E" Guid="{117E3352-2F0C-4E19-AD96-03D354751B8D}">
                <File Id="filDCA561ABF8964292B6BC0D0726E8EFAD" KeyPath="yes" Source="$(env.SuperFormFilesDir)\SuperForm.MainApp.exe" />
              </Component>
              <Component Id="cmpCFD331F5E0E471FC42A1334A1098E144" Guid="{369A2347-97DD-45CA-A4D1-62BB706EA329}">
                <File Id="filA9BE65B2AB60F3CE41105364EDE33D27" KeyPath="yes" Source="$(env.SuperFormFilesDir)\SuperForm.MainApp.pdb" />
              </Component>
              <Component Id="cmp4614DD03D8974B7C1FC39E7B82F19574" Guid="{3443EBE2-168F-4380-BC41-26D71A0DB1C7}">
                <File Id="fil5102E75B91F3DAFA6F70DA57F4C126ED" KeyPath="yes" Source="$(env.SuperFormFilesDir)\SuperForm.MainApp.vshost.exe" />
              </Component>
              <Component Id="cmpDF166522884E2454382277128BD866EC" Guid="{0C0F3D18-56EB-41FE-B0BD-FD2C131572DB}">
                <File Id="filF7CA5083B4997E1DEC435554423E675C" KeyPath="yes" Source="$(env.SuperFormFilesDir)\SuperForm.MainApp.vshost.exe.manifest" />
              </Component>
            </DirectoryRef>
          </Fragment>
        </Wix>

    The $(env.SuperFormFilesDir) will be replaced at build time with the directory where the files to be installed are located. There is nothing too complicated about this. In the end it turns out that this sort of automation is great! There are a few other ways that Heat.exe can compose the wxs file, but this is the one I prefer. It just seems the clearest. Play with its options to see what it can do. It's one awesome little tool.

    WiX 3 tutorial by Mladen Prajdic, navigation:
        WiX 3 Tutorial: Solution/Project structure and Dev resources
        WiX 3 Tutorial: Understanding main wxs and wxi file
        WiX 3 Tutorial: Generating file/directory fragments with Heat.exe
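    As promised above, here is what the staging-copy idea could look like as a post-build event on SuperForm.MainApp. This is a hedged sketch: the file selection is an assumption (the exe plus dlls, skipping the .pdb and .vshost.* files that Heat would otherwise sweep up), and $(TargetDir) is the standard Visual Studio macro:

        rem Stage only the distributable files for Heat to harvest.
        if not exist "$(SuperFormFilesDir)" mkdir "$(SuperFormFilesDir)"
        xcopy /y "$(TargetDir)SuperForm.MainApp.exe" "$(SuperFormFilesDir)"
        xcopy /y "$(TargetDir)*.dll" "$(SuperFormFilesDir)"

    With this in place, the Heat pre-build event on the WiX project keeps harvesting the same folder, but the folder now contains exactly what should ship.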


  • Error on 64 Bit Install of IIS – LoadLibraryEx failed on aspnet_filter.dll

    - by Rick Strahl
    I've been having a few problems with my Windows 7 install and trying to get IIS applications to run properly in 64 bit. After installing IIS and creating virtual directories for several of my applications and firing them up, I was left with the following error message from IIS:

        Calling LoadLibraryEx on ISAPI filter "c:\windows\Microsoft.NET\Framework\v4.0.30319\aspnet_filter.dll" failed

    This is on Windows 7 64 bit, running an ASP.NET 4.0 application configured for 64 bit (32 bit disabled). It's also on what is essentially a brand new installation of IIS and Windows 7. So it failed right out of the box.

    The problem here is that IIS is trying to load this ISAPI filter from the 32 bit folder; it should be loading from the Framework64 folder, not the Framework folder. The aspnet_filter.dll component is a small Win32 ISAPI filter used to back up the cookieless session state for ASP.NET on IIS 7 applications. It's not terribly important because of this focus, but it's a default loaded component.

    After a lot of fiddling I ended up with two solutions (with the help and support of some Twitter folks):

        1. Switch IIS to run in 32 bit mode
        2. Fix the filter listing in ApplicationHost.config

    Switching IIS to allow 32 Bit Code

    This is a quick fix for the problem above which enables 32 bit code in the Application Pool. The problem above is that IIS is trying to load a 32 bit ISAPI filter, and enabling 32 bit code gets you around the problem. To configure your Application Pool, open the Application Pool in IIS Manager, bring up Advanced Options, and enable 32 Bit Applications. And voila, the error message above goes away.

    Fix Filters

    Enabling 32 bit code is a quick-fix solution to this problem, but not an ideal one. If you're running a pure .NET application that doesn't need to do COM or P/Invoke interop with 32 bit apps, there's usually no need for enabling 32 bit code in an Application Pool, as you can run in native 64 bit code. So trying to get 64 bit working natively is a pretty key feature in my opinion :-)

    So what's the problem? Why is IIS trying to load a 32 bit DLL in a 64 bit install, especially if the application pool is configured to not allow 32 bit code at all? The problem lies in the server configuration and the fact that 32 bit and 64 bit configuration settings exist side by side in IIS. If I open my Default Web Site (or any other root Web Site) and go to the ISAPI filter list, I see three entries for ASP.NET 4.0. Only two of them, however, are specifically scoped to 32 bit or 64 bit. The 64 bit filter correctly points at the Framework64 folder to load the dll, while both the 32 bit and the 'generic' entry point at the plain Framework 32 bit folder. Aha! Herein lies our problem.

    You can edit ApplicationHost.config manually, but I ran into the nasty issue of not being able to easily edit that file with a 32 bit editor (whoever thought that was a good idea???? WTF). You have to open ApplicationHost.config in a native 64 bit text editor, which Visual Studio is not. Nor is my favorite editor, EditPad Pro. Since I don't have a native 64 bit editor handy, Notepad was my only choice.

    As an alternative, you can use the IIS 7.5 Configuration Editor, which lets you interactively browse and edit most ApplicationHost settings. You can drill into the configuration hierarchy visually to find your keys and edit attributes and sub-values in a property-editor type interface.
    I had no idea this tool existed prior to today, and it's pretty cool, as it gives you some visual clues to the options available, especially in the absence of the Intellisense scheme you'd get in Visual Studio (which doesn't work). To use the Configuration Editor, go to the Web Site root and use the Configuration Editor option in the Management group. Drill into system.webServer/isapiFilters, then click on the collection's ... button on the right. You should see all the same attributes you'd see in ApplicationHost.config (cool!). These entries correspond to these raw ApplicationHost.config entries:

        <filter name="ASP.Net_4.0" path="C:\Windows\Microsoft.NET\Framework\v4.0.30319\aspnet_filter.dll" enableCache="true" preCondition="runtimeVersionv4.0" />
        <filter name="ASP.Net_4.0_64bit" path="C:\Windows\Microsoft.NET\Framework64\v4.0.30319\aspnet_filter.dll" enableCache="true" preCondition="runtimeVersionv4.0,bitness64" />
        <filter name="ASP.Net_4.0_32bit" path="C:\Windows\Microsoft.NET\Framework\v4.0.30319\aspnet_filter.dll" enableCache="true" preCondition="runtimeVersionv4.0,bitness32" />

    The key attribute we're concerned with here is preCondition and its bitness sub-value. Notice that the 'generic' version, which comes first in the filter list, has no bitness assigned to it, so it defaults to 32 bit and the 32 bit dll path. And this is where our problem comes from. The simple solution to fix the startup problem is to remove the generic entry from this list, here or in the filters list shown earlier, and leave only the bitness-specific versions active.

    The preCondition attribute acts as a filter, and as you can see here it filters the list by runtime version and bitness value. This is something to keep an eye out for in general: if bitness values are missing, it's easy to run into conflicts like this with any settings that are global, especially those that load modules, handlers, and other executable code. On 64 bit systems it's a good idea to explicitly set the bitness of all entries, or remove the non-specific versions and add bit-specific entries.

    So how did this get misconfigured? I installed IIS before everything else was installed on this machine and then went ahead and installed Visual Studio. I suspect the Visual Studio install munged this up, as I never saw a similar problem on my live server, where everything just worked right out of the box. In searching about this problem, a lot of solutions pointed at using aspnet_regiis -r from the Framework64 directory, but that did not fix this extra entry in the filters list: it adds the required 32 bit and 64 bit entries, but it doesn't remove the errant entry with no bitness set.

    Hopefully this post will help out anybody who runs into a similar situation without having to troubleshoot all the way down into the configuration settings before noticing the bitness settings. It's a good lesson learned for me: this is my first desktop install of a 64 bit OS, and things like this are what I was reluctant to find. Now that I ran into this, I have a good idea what to look for with 32/64 bit misconfigurations in IIS, at least.

    © Rick Strahl, West Wind Technologies, 2005-2011
    Posted in IIS7   ASP.NET
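    If you prefer scripting the same fix instead of clicking through the Configuration Editor, appcmd can edit the collection directly. A hedged sketch, run from an elevated console; the filter name matches the listing above:

        rem Show the current isapiFilters section.
        %windir%\system32\inetsrv\appcmd.exe list config -section:system.webServer/isapiFilters

        rem Remove the bitness-less entry that points at the 32 bit aspnet_filter.dll.
        %windir%\system32\inetsrv\appcmd.exe set config -section:system.webServer/isapiFilters /-"[name='ASP.Net_4.0']" /commit:apphost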


  • Week in Geek: LastPass Rescues Xmarks Edition

    - by Asian Angel
    This week we learned how to breathe new life into an aging Windows Mobile 6.x device, use filters in Photoshop, back up and move VirtualBox machines, use the BitDefender Rescue CD to clean an infected PC, and had fun setting up a pirates theme on our computers. (Photo by _nash.)

    Weekly Feature

    Do you love using the Faenza icon set on your Ubuntu system but feel that there are a few much-needed icons missing (or you desire a different version of a particular icon)? Then you may want to take a look at the Faenza Variants icon pack. The icons are available in the following sizes: 16px, 22px, 32px, 48px, and scalable. (Photo by Asian Angel.)

        Faenza Variants

    Random Geek Links

    Another week with extra link goodness to help keep you on top of the news. (Photo by Asian Angel.)

        LastPass acquires Xmarks, premium service announced: Xmarks announced that it has been acquired by LastPass, a cross-platform password management service. This also means that Xmarks is now in transition from a "free" to a "freemium" business model.
        WikiLeaks reappears on European Net domains: WikiLeaks has re-emerged on a Swiss Internet domain, followed by domains in Germany, Finland, and the Netherlands, sidestepping a move that had in effect taken the controversial site off the Internet.
        Iran: Yes, Stuxnet hurt our nuclear program: The Stuxnet worm got some big play from Iranian President Mahmoud Ahmadinejad, who acknowledged that the malware dinged his nuclear program.
        More Windows Rogues than Just AV - Fake Defragmenter Check Disk: Don't think for a second that rogues are limited to scareware, because as so-called products such as "System Defragmenter", "Scan Disk", and "Check Disk" prove, they're not.
        Internet Explorer's Protected Mode can be bypassed: Researchers from Verizon Business have now described a way of bypassing Protected Mode in IE 7 and 8 in order to gain access to user accounts.
        Can you really see who viewed your Facebook profile? Rogue application spreads virally: Once again, a rogue application is spreading virally between Facebook users, pretending to offer you a way of seeing who has viewed your profile.
        More holes in Palm's WebOS: Researchers Orlando Barrera and Daniel Herrera, who both work for security firm SecTheory, have discovered a gaping security hole in Palm's WebOS smartphone operating system.
        Next-gen banking Trojans hit APAC: With the proliferation of banking Trojans, Web and smartphone users of online banking services have to be on constant alert to avoid falling prey to fraud schemes, warned Etay Maor, project manager for RSA Fraud Action.
        AVG update cripples 64-bit computers: A signature update automatically deployed by the AVG virus scanner Thursday crippled numerous computers. The article includes a link to forums with fixes for computers affected after a restart.
        Congress moves to outlaw 'mystery charges' for Web shoppers: Legislation that makes it illegal for Web merchants and so-called post-transaction marketers to charge credit cards without the card owners' say-so came closer to becoming law this week.
        Ballmer Set to "Look Into" Windows Home Server Drive Extender Fiasco: Tuesday's announcement from Microsoft regarding the removal of Drive Extender from Windows Home Server has sent shock waves across the web.
        Google tweaks search recipe to ding scam artists: Google has changed its search algorithm to penalize sites deemed to provide an "extremely poor user experience", following a New York Times story on a merchant who justified abusive behavior towards customers as a search-engine-optimization tactic.

    Geek Video of the Week

    Watch as our two friends debate back and forth about the early adoption of new technology through multiple time periods (Stone Age to the far future). Will our reluctant friend finally succumb to the temptation? (Photo by CollegeHumor.)

        Early Adopters Through History

    Random TinyHacker Links

        Fix Issues in Windows 7 Using Reliability Monitor: Learn how to analyze Windows 7 errors and then fix them using the built-in Reliability Monitor.
        Learn About IE Tab Groups: Tab groups is a useful feature in IE 8. Here's a detailed guide to what it is all about.
        Google's Book Helps You Learn About Browsers and Web: A cool new online book by the Google Chrome team on browsers and the web.
        TrustPort Internet Security 2011 - Good Security from a Less Known Provider: TrustPort is not exactly a well-known provider of security solutions, at least not in the consumer space. This review tests their latest offering in detail.
        How the World is Using Cell phones: An infographic showing the shocking demographics of cell phone use.

    Super User Questions

    See the great answers to these questions from Super User:

        I am unable to access my C drive. It says it is unable to display current owner.
        List of Windows special directories/shortcuts like '%TEMP%'
        Is using multiple passes for wiping a disk really necessary?
        How can I view two files side by side in Notepad++
        Is there any tool that automatically puts screenshots to my Dropbox?

    How-To Geek Weekly Article Recap

    Look through our hottest articles from this past week:

        How to Create a Software RAID Array in Windows 7
        9 Alternatives for Windows Home Server's Drive Extender
        Why Doesn't Disk Cleanup Delete Everything from the Temp Folder?
        Ask the Readers: How Much Do You Customize Your Operating System?
        How to Upload Really Large Files to SkyDrive, Dropbox, or Email

    One Year Ago on How-To Geek

    Enjoy reading through these awesome articles from one year ago:

        How To Upgrade from Vista to Windows 7 Home Premium Edition
        How To Fix No Aero Transparency in Windows 7
        Troubleshoot Startup Problems with Startup Repair Tool in Windows 7 & Vista
        Rename the Guest Account in Windows 7 for Enhanced Security
        Disable Error Reporting in XP, Vista, and Windows 7

    The Geek Note

    That wraps things up here for this week. Regardless of the weather wherever you may be, we hope that you have an opportunity to get outside and have some fun! Remember to keep sending those great tips in to us at [email protected]. (Photo by Tony the Misfit.)


  • Android - creating a custom preferences activity screen

    - by Bill Osuch
    Android applications can maintain their own internal preferences (and allow them to be modified by users) with very little coding. In fact, you don't even need to write an code to explicitly save these preferences, it's all handled automatically! Create a new Android project, with an intial activity title Main. Create two more activities: ShowPrefs, which extends Activity Set Prefs, which extends PreferenceActivity Add these two to your AndroidManifest.xml file: <activity android:name=".SetPrefs"></activity> <activity android:name=".ShowPrefs"></activity> Now we'll work on fleshing out each activity. First, open up the main.xml layout file and add a couple of buttons to it: <?xml version="1.0" encoding="utf-8"?> <LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"    android:orientation="vertical"    android:layout_width="fill_parent"    android:layout_height="fill_parent"> <Button android:text="Edit Preferences"    android:id="@+id/prefButton"    android:layout_width="wrap_content"    android:layout_height="wrap_content"    android:layout_gravity="center_horizontal"/> <Button android:text="Show Preferences"    android:id="@+id/showButton"    android:layout_width="wrap_content"    android:layout_height="wrap_content"    android:layout_gravity="center_horizontal"/> </LinearLayout> Next, create a couple button listeners in Main.java to handle the clicks and start the other activities: Button editPrefs = (Button) findViewById(R.id.prefButton);       editPrefs.setOnClickListener(new View.OnClickListener() {              public void onClick(View view) {                  Intent myIntent = new Intent(view.getContext(), SetPrefs.class);                  startActivityForResult(myIntent, 0);              }      });           Button showPrefs = (Button) findViewById(R.id.showButton);      showPrefs.setOnClickListener(new View.OnClickListener() {              public void onClick(View view) {                  Intent myIntent = new Intent(view.getContext(), ShowPrefs.class);                  startActivityForResult(myIntent, 0);              }      }); Now, we'll create the actual preferences layout. You'll need to create a file called preferences.xml inside res/xml, and you'll likely have to create the xml directory as well. Add the following xml: <?xml version="1.0" encoding="utf-8"?> <PreferenceScreen xmlns:android="http://schemas.android.com/apk/res/android"> </PreferenceScreen> First we'll add a category, which is just a way to group similar preferences... sort of a horizontal bar. Add this inside the PreferenceScreen tags: <PreferenceCategory android:title="First Category"> </PreferenceCategory> Now add a Checkbox and an Edittext box (inside the PreferenceCategory tags): <CheckBoxPreference    android:key="checkboxPref"    android:title="Checkbox Preference"    android:summary="This preference can be true or false"    android:defaultValue="false"/> <EditTextPreference    android:key="editTextPref"    android:title="EditText Preference"    android:summary="This allows you to enter a string"    android:defaultValue="Nothing"/> The key is how you will refer to the preference in code, the title is the large text that will be displayed, and the summary is the smaller text (this will make sense when you see it). Let's say we've got a second group of preferences that apply to a different part of the app. 
Add a new category just below the first one:

  <PreferenceCategory android:title="Second Category">
  </PreferenceCategory>

In there we'll add a list with radio buttons, so add:

  <ListPreference
     android:key="listPref"
     android:title="List Preference"
     android:summary="This preference lets you select an item in an array"
     android:entries="@array/listArray"
     android:entryValues="@array/listValues" />

When complete, your full xml file should look like this:

  <?xml version="1.0" encoding="utf-8"?>
  <PreferenceScreen xmlns:android="http://schemas.android.com/apk/res/android">
   <PreferenceCategory android:title="First Category">
    <CheckBoxPreference
       android:key="checkboxPref"
       android:title="Checkbox Preference"
       android:summary="This preference can be true or false"
       android:defaultValue="false"/>
    <EditTextPreference
       android:key="editTextPref"
       android:title="EditText Preference"
       android:summary="This allows you to enter a string"
       android:defaultValue="Nothing"/>
   </PreferenceCategory>
   <PreferenceCategory android:title="Second Category">
    <ListPreference
       android:key="listPref"
       android:title="List Preference"
       android:summary="This preference lets you select an item in an array"
       android:entries="@array/listArray"
       android:entryValues="@array/listValues" />
   </PreferenceCategory>
  </PreferenceScreen>

However, when you try to save it, you'll get an error because you're missing your array definition. To fix this, add a file called arrays.xml in res/values, and paste in the following:

  <?xml version="1.0" encoding="utf-8"?>
  <resources>
   <string-array name="listArray">
       <item>Value 1</item>
       <item>Value 2</item>
       <item>Value 3</item>
   </string-array>
   <string-array name="listValues">
       <item>1</item>
       <item>2</item>
       <item>3</item>
   </string-array>
  </resources>

Finally (for the preferences screen at least...) add the code that will display the preferences layout to the SetPrefs.java file:

  @Override
  public void onCreate(Bundle savedInstanceState) {
      super.onCreate(savedInstanceState);
      addPreferencesFromResource(R.xml.preferences);
  }

OK, so now we've got an activity that will set preferences, and save them without the need to write custom save code. Let's throw together an activity to work with the saved preferences.
Create a new layout called showpreferences.xml and give it three TextViews:

  <?xml version="1.0" encoding="utf-8"?>
  <LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
      android:orientation="vertical"
      android:layout_width="fill_parent"
      android:layout_height="fill_parent">
  <TextView
      android:id="@+id/textview1"
      android:layout_width="fill_parent"
      android:layout_height="wrap_content"
      android:text="textview1"/>
  <TextView
      android:id="@+id/textview2"
      android:layout_width="fill_parent"
      android:layout_height="wrap_content"
      android:text="textview2"/>
  <TextView
      android:id="@+id/textview3"
      android:layout_width="fill_parent"
      android:layout_height="wrap_content"
      android:text="textview3"/>
  </LinearLayout>

Open up the ShowPrefs.java file and have it use that layout:

  setContentView(R.layout.showpreferences);

Then add the following code to load the default SharedPreferences and display them:

  SharedPreferences prefs = PreferenceManager.getDefaultSharedPreferences(this);

  TextView text1 = (TextView) findViewById(R.id.textview1);
  TextView text2 = (TextView) findViewById(R.id.textview2);
  TextView text3 = (TextView) findViewById(R.id.textview3);

  text1.setText(Boolean.toString(prefs.getBoolean("checkboxPref", false)));
  text2.setText(prefs.getString("editTextPref", "<unset>"));
  text3.setText(prefs.getString("listPref", "<unset>"));

Fire up the application in the emulator and click the Edit Preferences button. Set various things, click the back button, then the Edit Preferences button again. Notice that your choices have been saved. Now click the Show Preferences button, and you should see the results of what you set. There are two more preference types that I did not include here: RingtonePreference, which shows a radio group that lists your ringtones, and PreferenceScreen, which allows you to embed a second preference screen inside the first - it opens up a new set of preferences when clicked.
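For reference, those last two preference types can be declared in the same preferences.xml. This is just a rough sketch of my own - the keys, titles and summaries are made up for illustration and do not come from the article:

  <RingtonePreference
     android:key="ringtonePref"
     android:title="Ringtone Preference"
     android:summary="Shows a radio group listing your ringtones"
     android:ringtoneType="notification"/>
  <PreferenceScreen
     android:key="nestedScreenPref"
     android:title="Nested Screen Preference"
     android:summary="Opens a second preference screen when clicked">
     <CheckBoxPreference
        android:key="nestedCheckboxPref"
        android:title="A nested checkbox"
        android:defaultValue="true"/>
  </PreferenceScreen>

Both are read back through the same SharedPreferences keys as the other preference types.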

    Read the article

  • Why can't I reinstall MySQL?

    - by Johannes Nielsen
I've been looking all around the Internet for an answer but didn't find anything. I hope you can help me now. I have a server with MySQL. From one day to another, MySQL didn't let me enter with my root password anymore (Access denied for user 'root'@'localhost' (using password: YES)). So I tried two ways to reset the password. No. 1: I typed

  shell> /etc/init.d/mysqld stop

to stop MySQL. Then I restarted it, skipping the grant tables:

  shell> mysqld_safe --skip-grant-tables

So I was able to log in as root and change the password using:

  mysql> UPDATE mysql.user SET Password = PASSWORD('MyNewPassword') WHERE User = 'root';
  mysql> FLUSH PRIVILEGES;

I restarted MySQL and tried to log in as root with my new password - didn't work. So I tried the solution that's described here: http://dev.mysql.com/doc/refman/5.0/en/resetting-permissions.html (I don't want to post it here because this post is already pretty long). Didn't work either. Actually it made it worse, because since that day, every time I try to start MySQL, it doesn't even ask me for my password; instead I get:

  ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (111)

Well, I've looked up what it means and found that my mysqld.sock is missing. I tried to create it using touch, but MySQL can't start with that socket. Now I'm trying to reinstall MySQL, but every time I type in

  shell> apt-get --purge remove mysql-server mysql-common mysql-client

in that or any other order, or with any one of those three alone, I get:

  Reading package lists... Done
  Building dependency tree
  Reading state information... Done
  Package mysql-client is not installed, so not removed
  Package mysql-server is not installed, so not removed
  You might want to run 'apt-get -f install' to correct these:
  The following packages have unmet dependencies:
   libmysqlclient18 : Depends: mysql-common (>= 5.5.28-0ubuntu0.12.04.2) but it is not going to be installed
   libmysqlclient18:i386 : Depends: mysql-common:i386 (>= 5.5.28-0ubuntu0.12.04.2)
   mysql-client-5.5 : Depends: mysql-common (>= 5.5.28-0ubuntu0.12.04.2) but it is not going to be installed
   mysql-server-5.5 : PreDepends: mysql-common (>= 5.5.28-0ubuntu0.12.04.2) but it is not going to be installed
   psa-firewall : Depends: plesk-core (>= 11.0.9) but it is not installable
                  Depends: mysql-server but it is not going to be installed
   psa-spamassassin : Depends: plesk-core (>= 11.0.9) but it is not installable
   psa-vpn : Depends: plesk-core (>= 11.0.9) but it is not installable
             Depends: plesk-base (>= 11.0.9) but it is not installable
             Depends: mysql-server but it is not going to be installed
  E: Unmet dependencies. Try 'apt-get -f install' with no packages (or specify a solution).

So I said to myself "let's just remove those packages with dependencies, too" (that psa-stuff, since Plesk is virtual and can't be uninstalled)... Guess what happened:

  Reading package lists... Done
  Building dependency tree
  Reading state information... Done
  Package mysql-client is not installed, so not removed
  Package mysql-server is not installed, so not removed
  You might want to run 'apt-get -f install' to correct these:
  The following packages have unmet dependencies:
   libmysqlclient18 : Depends: mysql-common (>= 5.5.28-0ubuntu0.12.04.2) but it is not going to be installed
   libmysqlclient18:i386 : Depends: mysql-common:i386 (>= 5.5.28-0ubuntu0.12.04.2)
   mysql-client-5.5 : Depends: mysql-common (>= 5.5.28-0ubuntu0.12.04.2) but it is not going to be installed
   mysql-server-5.5 : PreDepends: mysql-common (>= 5.5.28-0ubuntu0.12.04.2) but it is not going to be installed
  E: Unmet dependencies. Try 'apt-get -f install' with no packages (or specify a solution).

Of course I tried apt-get -f install, too many times even. What am I doing wrong? No matter which other packages I include in apt-get --purge remove, I always get new dependencies. Do I have to delete every MySQL-related directory and file manually? Hope there's someone out there who can help me! Cheers!

EDIT: After trying

  shell> apt-get purge mysql-server mysql-common mysql-client libmysqlclient18 libmysqlclient18:i386 mysql-client-5.5 mysql-server-5.5 psa-firewall psa-spamassassin psa-vpn

I got:

  Reading package lists... Done
  Building dependency tree
  Reading state information... Done
  Package mysql-client is not installed, so not removed
  Package mysql-server is not installed, so not removed
  You might want to run 'apt-get -f install' to correct these:
  The following packages have unmet dependencies:
   libdbd-mysql-perl : Depends: libmysqlclient18 (>= 5.5.13-1) but it is not going to be installed
   libmyodbc : Depends: libmysqlclient18 (>= 5.5.13-1) but it is not going to be installed
   libqt4-sql-mysql:i386 : Depends: libmysqlclient18:i386 (>= 5.5.13-1) but it is not going to be installed
   php5-mysql : Depends: libmysqlclient18 (>= 5.5.13-1) but it is not going to be installed
   ruby-mysql : Depends: libmysqlclient18 (>= 5.5.13-1) but it is not going to be installed
  E: Unmet dependencies. Try 'apt-get -f install' with no packages (or specify a solution).

So I tried to remove all these and got:

  Building dependency tree
  Reading state information... Done
  Package mysql-client is not installed, so not removed
  Package mysql-server is not installed, so not removed
  You might want to run 'apt-get -f install' to correct these:
  The following packages have unmet dependencies:
   libmysql-ruby1.8 : Depends: ruby-mysql but it is not going to be installed
  E: Unmet dependencies. Try 'apt-get -f install' with no packages (or specify a solution).

And actually I think removing that package, too, solved my problem :-S Next time I'll try everything before asking :D Thank you Eric for keeping me encouraged to just go on removing :D
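To sum the fix up in one place, here is a sketch reconstructed from the steps above - not an official recipe. Package names and the init script path may differ on your release, and removing /var/lib/mysql deletes your databases, so back them up first:

  # stop MySQL if it is still running (may be /etc/init.d/mysql on Ubuntu)
  sudo /etc/init.d/mysqld stop

  # purge MySQL plus everything that depends on its client library
  sudo apt-get purge mysql-server mysql-server-5.5 mysql-client mysql-client-5.5 \
      mysql-common libmysqlclient18 libmysqlclient18:i386 \
      libdbd-mysql-perl libmyodbc libqt4-sql-mysql:i386 php5-mysql \
      ruby-mysql libmysql-ruby1.8

  # optionally clear leftover config and data (destructive - back up first!)
  sudo rm -rf /etc/mysql /var/lib/mysql

  # reinstall from scratch
  sudo apt-get update
  sudo apt-get install mysql-server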

    Read the article

  • Upgrading Windows 8 boot to VHD to Windows 8.1 – Step by step guide

    - by Liam Westley
Originally posted on: http://geekswithblogs.net/twickers/archive/2013/10/19/upgrading-windows-8-boot-to-vhd-to-windows-8.1ndashstep-by.aspx

Boot to VHD – dual booting Windows 7 and Windows 8 became easy
When Windows 8 arrived, quite a few people decided that they would still dual boot their machines, and instead of mucking about with resizing disk partitions to free up space for Windows 8 they decided to use the boot from VHD feature to create a huge hard disc image into which Windows 8 could be installed. Scott Hanselman wrote this installation guide, while I myself used the installation guide from Ed Bott of ZDNet fame. Boot to VHD is a great solution: it achieves a dual boot, can be backed up easily and has virtually no effect on the original Windows 7 partition. As a developer who has dual booted Windows operating systems for years, hacking boot.ini files, boot to VHD was a much easier solution.

Upgrade to Windows 8.1 – ah, you can't do that on a virtual disk installation (boot to VHD)
Last week the final version of Windows 8.1 arrived, and I went into the Windows Store to upgrade. Luckily I'm on a fast download service, and use an SSD, because once the upgrade was downloaded and prepared, Windows informed me that This PC can't run Windows 8.1, and provided the reason: You can't install Windows on a virtual drive. You can see an image of the message and the discussion that sparked my search for a solution in this Microsoft Technet forum post. I was determined not to have to resize partitions yet again and fiddle with VHD-to-disk utilities and back again, and in the end I did succeed in upgrading to a Windows 8.1 boot to VHD partition. It takes quite a bit of effort though …

tl;dr – simple steps of how you upgrade
1. Boot into Windows 7 – make a copy of your Windows 8 VHD, to become Windows 8.1
2. Enable Hyper-V in your Windows 8 (the original boot to VHD partition)
3. Create a new virtual machine, attaching the copy of your Windows 8 VHD
4. Start the virtual machine, upgrade it via the Windows Store to Windows 8.1
5. Shut down the virtual machine
6. Boot into Windows 7 – use the bcdedit tool to create a new Windows 8.1 boot to VHD option (pointing at the copy)
7. Boot into the new Windows 8.1 option
8. Reactivate Windows 8.1 (it will have become deactivated by running under Hyper-V)
9. Remove the original Windows 8 VHD, and in Windows 7 use bcdedit to remove it from the boot menu

Things you'll need
- A system that can run Hyper-V under Windows 8 (Intel i5, i7 class CPU)
- Enough space to have your original Windows 8 boot to VHD and a copy at the same time
- An ISO or DVD for Windows 8 to create a bootable Windows 8 partition

Step by step guide
1. Boot to your base o/s, the real one, Windows 7.
2. Make a copy of the Windows 8 VHD file that you use to boot Windows 8 (via boot from VHD) – I copied it from a folder on C: called VHD-Win8 to VHD-Win8.1 on my N: drive.
3. Reboot your system into Windows 8, and enable Hyper-V if not already present (this may require a reboot).
4. Use the Hyper-V manager to create a new Hyper-V machine, using half your system memory, and use the option to attach an existing VHD on the main IDE controller – this will be the new copy you made in step 2.
5. Start the virtual machine, use Connect to view it, and you'll probably discover it cannot boot as there is no boot record.
6. If this is the case, go to Hyper-V manager and edit the Settings for the virtual machine to attach an ISO of a Windows 8 DVD to the second IDE controller.
7. Start the virtual machine, use Connect to view it, and it should now attempt a fresh installation of Windows 8. You should select Advanced Options and choose Repair – this will make the VHD bootable.
8. When the setup reboots your virtual machine, turn off the virtual machine, and remove the ISO of the Windows 8 DVD from the virtual machine settings.
9. Start the virtual machine and use Connect to view it. You will see the devices being re-discovered (including your quad CPU becoming a single CPU). Eventually you should see the Windows login screen.
10. You may notice that your desktop background (Win+D) has turned black, as your Windows installation has become deactivated due to the hardware changes between your real PC and Hyper-V.
11. Fortunately becoming deactivated does not stop you using the Windows Store, where you can select the update to Windows 8.1.

You can now watch the progress joy of the Windows 8 update: downloading, preparing to update, checking compatibility, gathering info, preparing to restart, and finally, confirm restart – remember that you are restarting your virtual machine sitting on the copy of the VHD, not the Windows 8 boot to VHD you are currently using to run Hyper-V (confused yet?). After the reboot you get the real upgrade messages: setting up x%, xx% (quite slow); after a while, Getting ready; Applying PC Settings x%, xx% (really slow); Updating your system (fast); Setting up a few more things x% (quite slow); Getting ready, again. Then: accept license terms; express settings; confirm your previous password. Next, I had to set up a Microsoft account – which is possibly now required, and not optional. Using the Microsoft account required a two-factor authorization; via text message, a 7 digit code for me. Then: finalising settings; a blank screen; HI .. We're setting up things for you (similar to the original Windows 8 install); 'You can get new apps from the Store', below which is 'Installing your apps' – I had Windows Media Center, which counts as an app from the Store; 'Taking care of a few things', below which is 'Installing your apps'; 'Taking care of a few things', below 'Don't turn off your PC'; 'Getting your apps ready', below 'Don't turn off your PC'; 'Almost ready', below 'Don't turn off your PC' … finally, we get the Windows 8.1 start menu, and a quick Win+D to check the desktop confirmed all the application icons I expected, pinned items on the taskbar, and one app moaning about a missing drive.

At this point the upgrade is complete – you can shut down the virtual machine. Reboot from the original Windows 8 and return to Windows 7 to configure booting to the Windows 8.1 copy of the VHD. In an administrator command prompt, use the bcdedit tool as follows (from an MSDN blog about configuring VHD to boot in Windows 7):
1. Type bcdedit to list the current boot options, so you can copy the GUID (complete with brackets/braces) for the original Windows 8 boot to VHD.
2. Create a new menu option as a copy of the Windows 8 option: bcdedit /copy {originalguid} /d "Windows 8.1"
3. Point the new Windows 8.1 option's device at the copy of the VHD: bcdedit /set {newguid} device vhd=[D:]\Image.vhd
4. Point the new Windows 8.1 option's osdevice at the copy of the VHD: bcdedit /set {newguid} osdevice vhd=[D:]\Image.vhd
5. Set autodetection of the HAL (may already be set): bcdedit /set {newguid} detecthal on

Reboot from Windows 7 and select the new option 'Windows 8.1' on the boot menu, and you'll have some messages to look at as your hardware is redetected (as you are back from 1 CPU to 4 CPUs): 'Getting devices ready', blank, then xx%, with an occasional blank screen
for the graphics driver (fast-ish), then a Getting Ready message (fast). You will have to suffer one final reboot, choose 'Windows 8.1', and you can now log in to a lovely Windows 8.1 start screen running on non-virtualized hardware via boot to VHD. After checking everything is running fine, you can now choose to Activate Windows, which for me was a toll-free phone call to the automated system where you type in lots of numbers to be given a whole bunch of new activation codes. Once you're happy with your new Windows 8.1 boot to VHD, and no longer need the Windows 8 boot to VHD, feel free to delete the old one. I do believe once you upgrade, you are no longer licensed to use it anyway. There, that was simple wasn't it? Looking at the huge list of steps it took to perform this upgrade, you may wonder whether I think this is worth it. Well, I think it is worth booting to VHD. It makes backups a snap (go to Windows 7, copy the VHD, and you've backed up the o/s) and helps with disk management – want to move the o/s? You can move the VHD and repoint the boot menu to the new location. The downside is that Microsoft has completely neglected to support boot to VHD as an upgradable option. Quite a poor decision in my opinion, and if you read Twitter and the forums quite a few people agree with that view. It's a shame this got missed in the work on creating the upgrade packages for Windows 8.1.
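To recap the boot-menu surgery in one place, the command prompt session might look like the sketch below. The GUID and the VHD path are placeholders for whatever bcdedit prints and wherever your copied VHD actually lives; N:\VHD-Win8.1\Win8.vhd is just an example path, not taken from the article:

  rem list current boot entries and note the GUID of the Windows 8 VHD entry
  bcdedit /v

  rem clone that entry under a new name (bcdedit prints the new GUID)
  bcdedit /copy {original-guid} /d "Windows 8.1"

  rem point the clone at the copied VHD
  bcdedit /set {new-guid} device vhd=[N:]\VHD-Win8.1\Win8.vhd
  bcdedit /set {new-guid} osdevice vhd=[N:]\VHD-Win8.1\Win8.vhd

  rem let Windows redetect the hardware abstraction layer on first boot
  bcdedit /set {new-guid} detecthal on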

    Read the article

  • Tools to Help Post Content On Your WordPress Blog

    - by Matthew Guay
Now that you've got a nice blog, you want to do more with it and start posting content. Here we look at some tools that will allow you to post directly to your WordPress blog. Writing a new blog post is easy with WordPress, as we saw in our previous post about starting your own WordPress blog. The web editor gives you a lot of features, and even lets you edit your post's source code if you enjoy hacking HTML. There are other tools that will allow you to post content; here we look at how you can post with dedicated apps, browser plugins, and even by email.

Windows Live Writer
Windows Live Writer (part of the Windows Live Essentials Suite) is a great app for posting content to your blog. This free program from Microsoft lets you post content to a variety of blogging services, including Blogger, Typepad, LiveJournal, and of course WordPress. You can write blog posts directly from its Word-like editor, complete with pictures and advanced formatting. Even if you're offline, you can still write posts and save them for when you're online again. For more information about installing Live Writer, check out our article on how to Install Windows Live Essentials In Windows 7. Once Live Writer is installed, open it to add your blog. If you already had Live Writer installed and configured for a blog, you can add your new blog, too. Just click your blog's name in the top right corner, and select "Add blog account". Select "Other blog service" to add your WordPress blog to Writer, and click Next. Enter your blog's web address, and your username and password. Check Remember my password so you don't have to enter it every time you write something. Writer will analyze your blog and set up your account. During the setup process it may ask to post a temporary post. This will let you preview blog posts using your blog's real theme, which is helpful, so click Yes. Finally, add your blog's name, and click Finish. You can now use the rich editor to write and add content to a new blog post. Select the Preview tab to see how your post will look on your blog… Or, if you're an HTML geek, select the Source tab to edit the code of your blog post. From the bottom of the window, you can choose categories, insert tags, and even schedule the post to publish on a different day. Live Writer is fully integrated with WordPress; you're not missing anything by using the desktop editor. If you want to edit a post you've already published, click the Open button and select the post. You can choose and edit any post, including ones you published via the web interface or other editors.

Add Multimedia Content to your Posts with Live Writer
Back in the Edit tab, you can add pictures, videos and more from the sidebar. Select what you want to insert.

Pictures
If you insert a picture, you can add many nice borders and designs to it. Or, you can even add artistic effects from the Effects tab in the sidebar.

Photo Gallery
If you want to post several pictures, say some of your vacation shots, then inserting a photo gallery may be the best option. Select Insert Photo Gallery in the sidebar, and then choose the pictures you want in the gallery. Once the gallery is inserted, you can choose from several styles to showcase your pictures. When you post the blog, you will be asked to sign in with your Windows Live ID, as the gallery pictures will be stored in the free SkyDrive storage service.
Your blog readers can see the preview of your pictures directly on your blog, and can then view each individual picture, download them, or see a slideshow online via the link.

Video
If you want to add a video to your blog post, select Video from the sidebar as above. You can select a video that's already online, or you can choose a new video from file and upload it via YouTube directly from Windows Live Writer. Note that you will have to sign in with your YouTube account to upload videos to YouTube, so if you're not logged in you'll be prompted to do so when you click Insert. Geek Tip: If you ever want to copy your Live Writer settings to another computer, check out our article on how to Backup Your Windows Live Writer Settings.

Microsoft Office Word
Word 2007 and 2010 also let you post content directly to your blog. This is especially nice if you've already typed up a document and think it would be good on your blog as well. Check out our in-depth tutorial on posting blog posts via Word 2007 using Word 2007 as a blogging tool. This works in Word 2010 too, except the Office Orb has been replaced by the new Backstage view. So, in Word 2010, to start a new blog post, click File \ New, then select Blog post. Proceed as you would in Word 2007 to add your blog settings and post the content you want. Or, if you've already written a document and want to post it, select File \ Share (or Save and Send in the final version of Word 2010), and then click Publish as Blog Post. If you haven't set up your blog account yet, set it up as shown in the Word 2007 article.

Post Via Email
Most of us use email daily, and already have our favorite email app or service. Whether on your desktop or mobile phone, it's easy to create rich emails and add content. WordPress lets you generate a unique email address that you can use to easily post content and email to your blog. Just compose your email with the subject as the title of your post, and send it to this unique address. Your new post will be up in minutes. To activate this feature, click the My Account button in the top menu bar in your WordPress.com account, and select My Blogs. Click the Enable button under Post by Email beside your blog's name. Now you'll have a private email address you can use to post to your blog. Anything you send to this address will be posted as a new post. If you think your email may be compromised, click Regenerate to get a new publishing email address. Any email program or webapp is now a blog post editor. Feel free to use rich formatting or insert pictures; it all comes through great. This is also a great way to post to your blog from your mobile device. Whether you're using webmail or a dedicated email client on your phone, you can now blog from anywhere.

Mobile Applications
WordPress also offers dedicated applications for blogging directly from your mobile device. You can write new posts, edit existing ones, and manage comments, all from your smartphone. Currently they offer apps for iPhone, Android, and Blackberry. Check them out at the link below.

Conclusion
Whether you want to write from your browser or email a post to your blog, WordPress is flexible enough to work right along with your preferences. However you post, you can be sure that it will look professional and be easily accessible with your WordPress blog.

Download Windows Live Writer
Download WordPress apps for your mobile device

    Read the article

  • Wishful Thinking: Why can't HTML fix Script Attacks at the Source?

    - by Rick Strahl
The Web can be an evil place, especially if you're a Web Developer blissfully unaware of Cross Site Script Attacks (XSS). Even if you are aware of XSS in all of its insidious forms, it's extremely complex to deal with all the issues if you're taking user input and you're actually allowing users to post raw HTML into an application. I'm dealing with this again today in a Web application where legacy data contains raw HTML that has to be displayed, and users ask for the ability to use raw HTML as input for listings. The first line of defense of course is: just say no to HTML input from users. If you don't allow HTML input directly and use HTML encoding (HttpUtility.HtmlEncode() in .NET, or standard ASP.NET MVC output like @Model.Content) you're fairly safe, at least from the HTML input provided. Both WebForms and Razor support HTML-encoded content, although Razor makes it the default. In Razor the default @ expression syntax:

  @Model.UserContent

automatically produces HTML-encoded content - you actually have to go out of your way to create raw HTML content (safe by default) using @Html.Raw() or the HtmlString class. In Web Forms (V4) you can use:

  <%: Model.UserContent %>

or, if you're using a version prior to 4.0:

  <%= HttpUtility.HtmlEncode(Model.UserContent) %>

This works great as a hedge against embedded <script> tags and HTML markup, as any HTML is turned into text that displays the markup but doesn't render it. But it turns any embedded HTML markup tags into plain text. If you need to display HTML in raw form, with the markup tags rendering based on user input, this approach is worthless. If you do accept HTML input and need to echo the rendered HTML input back, the task of cleaning up that HTML is a complex one. In the projects I work on, customers ask for the ability to post raw HTML quite frequently. Almost every app I've built where there's document content from users starts out with text-only input - possibly using something like Markdown - but inevitably users want to just post plain old HTML they created in some other rich editing application. I see this a lot with realtors especially, who often want to reuse their postings easily in multiple places. In my work this is a common problem I need to deal with, and I've tried dozens of different methods, from sanitizing and simple rejection of input to custom markup schemes, none of which have ever felt comfortable to me. They work in a half assed, hacked together sort of way, but I always live in fear of missing something vital, which is *really easy to do*.

My Wishlist Item: A <restricted> tag in HTML
Let me dream here for a second on how to address this problem. It seems to me the easiest place where this can be fixed is: in the browser. Browsers are actually executing script code, so they have a lot of control over the script code that resides in a page. What if there was a way to specify that you want to turn off script code for a block of HTML? The main issue when dealing with raw HTML input isn't that we as developers are unaware of the implications of user input, but the fact that we sometimes have to display raw HTML input the user provides. So the problem markup is usually isolated in only a very specific part of the document. So, what if we had a way to specify that in any given HTML block, no script code could execute, by wrapping it into a tag that disables all script functionality in the browser? This would include <script> tags and any document script attributes like onclick, onfocus etc.,
and potentially also disallow things like iFrames that can potentially be scripted from within the iFrame's target. I'd like to see something along these lines:

  <article>
    <restricted allowscripts="no" allowiframes="no">
      <div>Some content</div>
      <script>alert("go ahead make my day, punk!");</script>
      <div onfocus="$.getJson('http://evilsite.com/')">more content</div>
    </restricted>
  </article>

A tag like this would basically disallow all script code from firing from any HTML that's rendered within it. You'd use this only on code that you actually render from your data, and only if you are dealing with custom data. So something like this:

  <article>
    <restricted>
      @Html.Raw(Model.UserContent)
    </restricted>
  </article>

For browsers this would actually be easy to intercept. They render the DOM and control loading and execution of scripts that are loaded through it. All the browser would have to do is suspend execution of <script> tags and not hook up any event handlers defined via markup in this block. Given all the crazy XSS attacks that exist and the prevalence of this problem, this would go a long way towards preventing at least coded script attacks in the DOM. And it seems like a totally doable solution that wouldn't be very difficult for vendors to implement. There would also need to be some logic in the parser to not allow a </restricted> or <restricted> tag into the content, so as to short-circuit the restricted section (per James Hart's comment). I'm sure there are other issues to consider as well that I didn't think of in my off-the-back-of-a-napkin concept here, but overall the idea seems worth considering. Without code running in a user-supplied HTML block it'd be pretty hard to compromise a local HTML document and pass information like cookies to a server. Or even send data to a server, period. Short of an iFrame that can access the parent frame (which is another restriction that should be available on this <restricted> tag) and could potentially communicate back, there's not a lot a malicious site could do. The HTML could still 'phone home' via image links and href links, potentially, and basically say this site was accessed, but without the ability to run script code it would be pretty tough to pass along critical information to the server beyond that. Ahhhh… one can dream… Not holding my breath of course. The design-by-committee that is the W3C can't agree on anything in timeframes measured in less than decades, but maybe this is one place where browser vendors can actually step up the pressure. This is something in their best interest: to reduce the attack surface for vulnerabilities on their browser platforms significantly. Several people commented on Twitter today that there isn't enough discussion on issues like this that address serious needs in the web browser space. Realistically, security has to be a number one concern with Web applications in general - there isn't a Web app out there that is not vulnerable. And yet nothing has been done to address these security issues, even though there might be relatively easy solutions to make this happen.
It'll take time, and it's probably not going to happen in our lifetime, but maybe this rambling thought sparks some ideas on how this sort of restriction can get into browsers in some way in the future.

© Rick Strahl, West Wind Technologies, 2005-2012. Posted in ASP.NET, HTML5, HTML, Security.
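One footnote, not part of Rick's original post: the closest thing browsers actually shipped is the HTML5 sandbox attribute on iframes, which approximates the proposed <restricted> block for untrusted fragments. A rough, illustrative sketch:

  <!-- with a bare sandbox attribute, scripts, forms, plugins and
       top-level navigation are all disabled inside the frame -->
  <iframe sandbox
          srcdoc="&lt;div&gt;user content&lt;/div&gt;
                  &lt;script&gt;alert('this will not run');&lt;/script&gt;">
  </iframe>

It only covers framed content rather than inline blocks, but the "off by default, opt back in with tokens like allow-scripts" model is very close to the spirit of the wishlist item above.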

    Read the article

  • Dynamic Types and DynamicObject References in C#

    - by Rick Strahl
I've been working a bit with C# custom dynamic types for several customers recently, and I've seen some confusion in understanding how dynamic types are referenced. This discussion specifically centers around types that implement IDynamicMetaObjectProvider or subclass from DynamicObject, as opposed to arbitrary type casts of standard .NET types. IDynamicMetaObjectProvider types are treated specially when they are cast to the dynamic type. Assume for a second that I've created my own implementation of a custom dynamic type called DynamicFoo, which is about as simple a dynamic class as I can think of:

  public class DynamicFoo : DynamicObject
  {
      Dictionary<string, object> properties = new Dictionary<string, object>();

      public string Bar { get; set; }
      public DateTime Entered { get; set; }

      public override bool TryGetMember(GetMemberBinder binder, out object result)
      {
          result = null;
          if (!properties.ContainsKey(binder.Name))
              return false;
          result = properties[binder.Name];
          return true;
      }

      public override bool TrySetMember(SetMemberBinder binder, object value)
      {
          properties[binder.Name] = value;
          return true;
      }
  }

This class has an internal dictionary member, and I'm exposing this dictionary through a dynamic by implementing DynamicObject. This implementation exposes the properties dictionary so the dictionary keys can be referenced like properties (foo.NewProperty = "Cool!"). I override TryGetMember() and TrySetMember(), which are fired at runtime every time you access a 'property' on a dynamic instance of this DynamicFoo type.

Strong Typing and Dynamic Casting
I can now instantiate and use DynamicFoo in a couple of different ways:

Strong typing:

  DynamicFoo fooExplicit = new DynamicFoo();
  var fooVar = new DynamicFoo();

These two commands are essentially identical and use strong typing. The compiler generates identical code for both of them. The var statement is merely a compiler directive to infer the type of fooVar at compile time, and so the type of fooVar is DynamicFoo, just like fooExplicit. This is very static - nothing dynamic about it - and it completely ignores the IDynamicMetaObjectProvider implementation of my class above, as it's never used. Using either of these I can access the native properties:

  DynamicFoo fooExplicit = new DynamicFoo();

  // static typing assignments
  fooVar.Bar = "Barred!";
  fooExplicit.Entered = DateTime.Now;

  // echo back static values
  Console.WriteLine(fooVar.Bar);
  Console.WriteLine(fooExplicit.Entered);

but I have no access whatsoever to the properties dictionary. Basically this creates a strongly typed instance of the type, with access only to the strongly typed interface. You get no dynamic behavior at all. The IDynamicMetaObjectProvider features don't kick in until you cast the type to dynamic. If I try to access a non-existing property on fooExplicit I get a compilation error that tells me that the property doesn't exist. Again, it's clearly and utterly non-dynamic.

Dynamic:

  dynamic fooDynamic = new DynamicFoo();

fooDynamic, on the other hand, is created as a dynamic type, and it's a completely different beast. I can also create a dynamic by simply casting any type to dynamic like this:

  DynamicFoo fooExplicit = new DynamicFoo();
  dynamic fooDynamic = fooExplicit;

Note that dynamic typically doesn't require an explicit cast, as the compiler automatically performs the cast, so there's no need to write 'as dynamic'. Dynamic functionality works at runtime and allows the dynamic wrapper to look up and call members dynamically.
A dynamic type will look for members to access or call in two places:
1. Using the strongly typed members of the object
2. Using the IDynamicMetaObjectProvider interface methods to access members

So rather than statically linking and calling a method or retrieving a property, the dynamic type looks up - at runtime - where the value actually comes from. It's essentially late binding, which allows runtime determination of what action to take when a member is accessed, *if* the member you are accessing does not exist on the object. Class members are checked first, before the IDynamicMetaObjectProvider interface methods kick in. All of the following works with the dynamic type:

  dynamic fooDynamic = new DynamicFoo();

  // dynamic typing assignments
  fooDynamic.NewProperty = "Something new!";
  fooDynamic.LastAccess = DateTime.Now;

  // dynamically assigning static properties
  fooDynamic.Bar = "dynamic barred";
  fooDynamic.Entered = DateTime.Now;

  // echo back dynamic values
  Console.WriteLine(fooDynamic.NewProperty);
  Console.WriteLine(fooDynamic.LastAccess);
  Console.WriteLine(fooDynamic.Bar);
  Console.WriteLine(fooDynamic.Entered);

The dynamic type can access the native class properties (Bar and Entered) and create and read new ones (NewProperty, LastAccess), all using a single type instance, which is pretty cool. As you can see, it's pretty easy to create an extensible type this way that can dynamically add members at runtime.

The Alter Ego of IDynamicObject
The key point here is that all three statements - explicit, var and dynamic - declare a new DynamicFoo(), but the dynamic declaration results in completely different behavior than the first two, simply because the type has been cast to dynamic. Dynamic binding means that the type loses its typical strongly typed, compile-time features. You can see this easily in the Visual Studio code editor: as soon as you assign a value to a dynamic you lose Intellisense, which means there's no Intellisense and no compiler type checking on any members you apply to this instance. If you're new to the dynamic type it might seem really confusing that a single type can behave differently depending on how it is cast, but that's exactly what happens when you use a type that implements IDynamicMetaObjectProvider. Declare the type as its strong type name and you only get to access the native instance members of the type. Declare or cast it to dynamic and you get dynamic behavior, which accesses native members plus the IDynamicMetaObjectProvider implementation to handle any missing member definitions by running custom code. You can easily cast objects back and forth between dynamic and the original type:

  dynamic fooDynamic = new DynamicFoo();
  fooDynamic.NewProperty = "New Property Value";

  DynamicFoo foo = fooDynamic;
  foo.Bar = "Barred";

Here the code starts out with a dynamic cast and a dynamic assignment. The code then casts the value back to DynamicFoo. Notice that when casting from dynamic to DynamicFoo and back we typically do not have to specify the cast explicitly - the compiler can infer the type, so I don't need to write 'as dynamic' or 'as DynamicFoo'.

Moral of the Story
This easy interchange between dynamic and the underlying type is actually super useful, because it allows you to create extensible objects that can wrap non-member data stores and expose them as an object interface.
You can create an object that hosts a number of strongly typed properties, then cast the object to dynamic and add additional dynamic properties to the same instance at runtime. You can easily switch back and forth between the strongly typed instance, to access the well-known strongly typed properties, and dynamic, for the properties added at runtime. Keep in mind that dynamic member access has quite a bit of overhead and is definitely slower than strongly typed binding, so if you're accessing the strongly typed parts of your objects you definitely want to use a strongly typed reference. Reserve dynamic for the dynamic members to optimize your code. The real beauty of dynamic is that with very little effort you can build expandable objects, or objects that expose different data stores through an object interface. I'll have more on this in my next post, when I create a customized and extensible Expando object based on DynamicObject.

© Rick Strahl, West Wind Technologies, 2005-2012. Posted in CSharp, .NET.
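As a quick illustration of the "reserve dynamic for the dynamic members" advice - a small usage sketch of my own, built on the DynamicFoo class from the post, keeping two references to the same instance and using the cheap one whenever the member is known at compile time:

  // both references point at the same DynamicFoo instance
  DynamicFoo foo = new DynamicFoo(); // compile-time binding for known members
  dynamic dfoo = foo;                // late binding for runtime-defined members

  foo.Bar = "known member";          // fast: statically bound property
  dfoo.Extra = 42;                   // routed through TrySetMember into the dictionary

  Console.WriteLine(foo.Bar);        // static access again
  Console.WriteLine(dfoo.Extra);     // dynamic lookup via TryGetMember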

    Read the article

  • Customize your icons in Windows 7 and Vista

    - by Matthew Guay
Want to change out the icons on your desktop and more? Personalizing your icons is a great way to make your PC uniquely yours, and today we show you how to grab unique icons and change the default Windows ones to be your own.

Change the icon for Computer, Recycle Bin, Network, and your User folder
Right-click on the desktop, and select Personalize. Now, click the "Change desktop icons" link on the left sidebar in the Personalization window. The window looks slightly different in Windows Vista, but the link is the same. Select the icon you wish to change, and click the Change Icon button. In Windows 7, you will also notice a box to choose whether or not to allow themes to change icons, and you can uncheck it if you don't want themes to change your icon settings. You can select one of the other included icons, or click Browse to find the icon you want. Click Ok when you are finished.

Change Folder icons
You can easily change the icon on most folders in Windows Vista and 7. Simply right-click on the folder and select Properties. Click the Customize tab, and then click the Change Icon button. This will open the standard dialog to change your icon, so proceed as normal. This basically just creates a hidden desktop.ini file in the folder containing the following or similar data:

  [.ShellClassInfo]
  IconFile=%SystemRoot%\system32\SHELL32.dll
  IconIndex=20

You could manually create or edit the file if you choose, instead of using the dialogs. Simply create a new text file named desktop.ini with this same information, or edit the existing one. Change the IconFile line to the location of your icon. If you are pointing to a .ico file you should change the IconIndex line to 0 instead. Note that this isn't available for all folders; for instance, you can't use this to change the icon for the Windows folder. In Windows 7, please note that you cannot change the icon of a folder inside a library. So if you are browsing your Documents library and would like to change an icon in that folder, right-click on it and select Open folder location. Now you can change the icon as above. And if you would like to change a library's icon itself, then check out this tutorial: Change Your Windows 7 Library Icons the Easy Way.

Change the icon of any file type
Want to make your files easier to tell apart? Check out our tutorial on how to simply do this: Change a File Type's Icon in Windows 7.

Change the icon of any Application Shortcut
To change the icon of a shortcut on your desktop, start menu, or in Explorer, simply right-click on the icon and select Properties. In the Shortcut tab, click the Change Icon button. Now choose one of the other available icons, or click Browse to find the icon you want.

Change Icons of Running Programs in the Windows 7 taskbar
If your computer is running Windows 7, you can customize the icon of any program running in the taskbar! This only works on applications that are running but not pinned to the taskbar, so if you want to customize a pinned icon you may want to unpin it before customizing it. But the interesting thing about this trick is that it can customize the icon of anything running in the taskbar, including things like Control Panel! Right-click, or click and push up, to open the jumplist on the icon, and then right-click on the program's name and select Properties. Here we are customizing Control Panel, but you can do this on any application icon. Now, click Change Icon as usual.
Select an icon you want (we switched the Control Panel icon to the Security Shield), or click Browse to find another icon. Click Ok when finished, and then close the application window. The next time you open the program (or Control Panel in our example), you will notice your new icon on its taskbar button. Please note that this only works on applications that are currently running and are not pinned to the taskbar. Strangely, if the application is pinned to the taskbar, you can still click Properties and change the icon, but the change will not show up.

Change the icon on any Drive on your Computer
You can easily change the icon on your internal hard drives and portable drives with the free Drive Icon Changer application. Simply download and unzip the file (link below), and then run the application as administrator by right-clicking on the icon and selecting "Run as administrator". Now, select the drive that you want to change the icon of, and select your desired icon file. Click Save, and Drive Icon Changer will let you know that the icon has been changed successfully. You will then need to reboot your computer to complete the changes. Simply click Yes to reboot. Now, our drive icon is changed from the default image to a laptop icon we chose! You can do this to any drive in your computer, or to removable drives such as USB flash drives. When you change these drives' icons, the new icon will appear on any computer you insert the drive into. Also, if you wish to remove the icon change, simply run the Drive Icon Changer again and remove the icon path. Download Drive Icon Changer. This application actually simply creates or edits a hidden autorun.inf file at the top level of your drive. You can edit or create the file yourself by hand if you'd like (a filled-in example appears at the end of this article); simply include the following information in the file, and save it in the top directory of your drive:

  [autorun]
  ICON=[path of your icon]

Remove Arrow from shortcut icons
Many people don't like the arrow on the shortcut icon, and there are two easy ways to remove it. If you're running the 32-bit version of Windows Vista or 7, simply use the Vista Shortcut Overlay Remover. If your computer is running the 64-bit version of Windows Vista or 7, use the Ultimate Windows Tweaker instead. Simply select the Additional Tweaks section, and check "Remove arrows from Shortcut Icons." For more info and download links check out this article: Disable Shortcut Icon Arrow Overlay in Windows 7 or Vista.

Closing
This gives you a lot of ways to customize almost any icon on your computer, so you can make it look just like you want it to. Stay tuned for more great desktop customization articles from How-To Geek!
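As promised, here is a filled-in example of the hand-written autorun.inf for the drive icon trick above; the icon path is hypothetical - point it at an .ico file that actually lives on the drive:

  [autorun]
  ; hypothetical icon path, relative to the root of the drive
  ICON=icons\laptop.ico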

    Read the article

  • Solaris 11 Launch Blog Carnival Roundup

    - by constant
Solaris 11 is here! And together with the official launch activities, a lot of Oracle and non-Oracle bloggers contributed helpful and informative blog articles to help your datacenter go to eleven. Here are some notable blog postings, sorted by category for your Solaris 11 blog-reading pleasure:

Getting Started/Overview
A lot of people speculated that the official launch of Solaris 11 would be on 11/11 (whichever way you want to turn it), but it actually happened two days earlier. Larry Wake himself offers 11 Reasons Why Oracle Solaris 11 11/11 Isn't Being Released on 11/11/11. Then, Larry goes on with a summary: Oracle Solaris 11: The First Cloud OS gives you a short and sweet rundown of what the major new features of Solaris 11 are. Jeff Victor has his own list of What's New in Oracle Solaris 11. A popular Solaris 11 meme is to write a blog post about 11 favourite features: Jim Laurent's 11 Reasons to Love Solaris 11, Darren Moffat's 11 Favourite Solaris 11 Features, and Mike Gerdt's 11 of My Favourite Things! are just three examples of "11 Favourite Things..."-type blog posts; I'm sure many more will follow... More official overview content for Solaris 11 is available from the Oracle Tech Network Solaris 11 Portal. Also, check out Rick Ramsey's blog post Solaris 11 Resources for System Administrators on the OTN Blog and his secret 5 Commands That Make Solaris Administration Easier post from the OTN Garage.

(Automatic) Installation and the Image Packaging System (IPS)
The brand new Image Packaging System (IPS) and the Automated Installer (AI), together with numerous other install/packaging/boot/patching features, are among the most significant improvements in Solaris 11. But before installing, you may wonder whether Solaris 11 will support your particular set of hardware devices. Again, the OTN Garage comes to the rescue with Rick Ramsey's post How to Find Out Which Devices Are Supported By Solaris 11. Included is a useful guide to all the first steps to get your Solaris 11 system up and running. Tim Foster had a whole handful of blog posts lined up for the launch, teaching you everything you wanted to know about IPS but didn't dare to ask: The IPS System Repository, IPS Self-assembly - Part 1: Overlays and Part 2: Multiple Packages Delivering Configuration. Watch out for more IPS posts from Tim! If installing packages or upgrading your system from the net makes you uneasy, then you're not alone: Jim Laurent will teach you how Building a Solaris 11 Repository Without Network Connection will make your life easier. Many of you have already peeked into the future by installing Solaris 11 Express. If you're now wondering whether you can upgrade or whether a fresh install is necessary, then check out Alan Hargreaves's post Upgrading Solaris 11 Express b151a with support to Solaris 11. The trick is in upgrading your pkg(1M) first.

Networking
One of the first things to do after installing Solaris 11 (or any operating system, for that matter) is to set it up for networking. Solaris 11 comes with the brand new "Network Auto-Magic" feature, which can figure out everything by itself. For those cases where you want to exercise a little more control, Solaris 11 left a few people scratching their heads. Fortunately, Tschokko wrote up this cool blog post, Solaris 11 manual IPv4 & IPv6 configuration, right after the launch ceremony. Thanks, Tschokko!
And Milek points out a long-awaited networking feature in Solaris 11 called hostmodel, which I know for a fact many customers have looked forward to: how to "bind" a Solaris 11 system to a specific gateway for a specific IP address it is using. Steffen Weiberle teaches us how to tune the Solaris 11 networking stack the proper way: ipadm(1M). No more fiddling with ndd(1M)! Check out his tutorial on Solaris 11 Network Tunables. And if you want to get even deeper into the networking stack, there's nothing better than DTrace. Alan Maguire teaches you, in DTracing TCP Congestion Control, how to probe deeply into the Solaris 11 TCP/IP stack, the TCP congestion control part in particular. Don't miss his other DTrace and TCP related blog posts!

DTrace
And there we are: DTrace, the king of all observability tools. Long-time DTrace veteran and co-author of The DTrace book*, Brendan Gregg blogged about Solaris 11 DTrace syscall provider changes. BTW, after you install Solaris 11, check out the DTrace toolkit, which is installed by default in /usr/dtrace/DTT. It is chock full of handy DTrace scripts, many of which were contributed by Brendan himself!

Security
Another big theme in Solaris 11, and one that is crucial for the success of any operating system in the cloud, is security. Here are some notable posts in this category: Darren Moffat starts by showing us how to completely get rid of root: Completely Disabling Root Logins on Solaris 11. With no root user, there's one less major entry point to worry about. But that's only the start. In Immutable Zones on Encrypted ZFS, Darren shows us how to double the security of your services: first by locking them into the new Immutable Zones feature, then by encrypting their data using the new ZFS encryption feature. And if you're still missing sudo from your Linux days, Darren again has a solution: Password (PAM) caching for Solaris su - "a la sudo". If you're wondering how much compute power all this encryption will cost you, you're in luck: the Solaris X86 AESNI OpenSSL Engine will make sure you use your Intel CPU's embedded crypto support to its fullest. And if you own a brand new SPARC T4 machine you're even luckier: it comes with its own SPARC T4 OpenSSL Engine. Dan Anderson's posts show how there really is no excuse not to encrypt any more...

Developers
Solaris 11 has a lot to offer to developers as well. Ali Bahrami has a series of blog posts that cover diverse developer topics: elffile: ELF Specific File Identification Utility, Using Stub Objects and The Stub Proto: Not Just For Stub Objects Anymore, to name a few. BTW, if you're a developer and want to shape the future of Solaris 11, then Vijay Tatkar has a hint for you: Oracle (Sun Systems Group) is hiring!

Desktop and Graphics
Yes, Solaris 11 is a 100% server OS, but it can also offer a decent desktop environment, especially if you are a developer. Alan Coopersmith starts by discussing S11 X11: ye olde window system in today's new operating system, then Calum Benson shows us around What's new on the Solaris 11 Desktop. Even accessibility is a first-class citizen in the Solaris 11 user interface. Peter Korn celebrates: Accessible Oracle Solaris 11 - released!

Performance
Gone are the days of "Slowaris", when Solaris was among the few OSes that "did the right thing" while others cut corners just to win benchmarks. Today, Solaris continues doing the right thing, and it delivers the right performance at the same time. Need proof?
Check out Brian's BestPerf blog with continuous updates from the benchmarking lab, including Recent Benchmarks Using Oracle Solaris 11!

Send Me More Solaris 11 Launch Articles!
These are just a few of the more interesting blog articles that came out around the Solaris 11 launch; I'm sure there are many more! Feel free to post a comment below if you find a particularly interesting blog post that hasn't been listed so far, and share your enthusiasm for Solaris 11!

*Affiliate link: Buy cool stuff and support this blog at no extra cost. We both win!

    Read the article

  • Package Dependencies Error in almost every install

    - by Betaxpression
    New to Ubuntu. In the Other Software sources I have "Debian 4.0 etch" (officially supported), "non-us.debian.org/", etc., and "ppa.launchpad.net", and installing applications has stopped working. I think I first came across this problem after installing Blender 2.58. Update Manager prompts me for a partial upgrade, and almost every package I try to install fails with the same Package Dependencies Error or a missing GPG public key; I tried fixing them, but no luck. Output of sudo apt-get update && sudo apt-get upgrade (links disabled, http:// shortened to http:/, since as a new user I can't post more hyperlinks):

    Ign http:/non-us.debian.org stable/non-US InRelease
    Ign http:/non-us.debian.org stable/non-US Release.gpg
    Ign http:/non-us.debian.org stable/non-US Release
    Ign http:/non-us.debian.org stable/non-US/contrib TranslationIndex
    Ign http:/non-us.debian.org stable/non-US/main TranslationIndex
    Ign http:/non-us.debian.org stable/non-US/non-free TranslationIndex
    Err http:/non-us.debian.org stable/non-US/main Sources 503 Service Unavailable
    Err http:/non-us.debian.org stable/non-US/contrib Sources 503 Service Unavailable
    Err http:/non-us.debian.org stable/non-US/non-free Sources 503 Service Unavailable
    Err http:/non-us.debian.org stable/non-US/main amd64 Packages 503 Service Unavailable
    Err http:/non-us.debian.org stable/non-US/contrib amd64 Packages 503 Service Unavailable
    Err http:/non-us.debian.org stable/non-US/non-free amd64 Packages 503 Service Unavailable
    Ign http:/non-us.debian.org stable/non-US/contrib Translation-en_IN
    Ign http:/non-us.debian.org stable/non-US/contrib Translation-en
    Ign http:/non-us.debian.org stable/non-US/main Translation-en_IN
    Ign http:/non-us.debian.org stable/non-US/main Translation-en
    Ign http:/non-us.debian.org stable/non-US/non-free Translation-en_IN
    Ign http:/non-us.debian.org stable/non-US/non-free Translation-en
    Ign http:/archive.ubuntu.com natty InRelease
    Ign http:/archive.canonical.com natty InRelease
    Ign http:/extras.ubuntu.com natty InRelease
    Ign http:/http.us.debian.org stable InRelease
    Ign http:/ftp.us.debian.org etch InRelease
    Ign http:/archive.ubuntu.com natty-updates InRelease
    Hit http:/archive.canonical.com natty Release.gpg
    Get:1 http:/extras.ubuntu.com natty Release.gpg [72 B]
    Ign http:/ppa.launchpad.net natty InRelease
    Get:2 http:/http.us.debian.org stable Release.gpg [1,672 B]
    Ign http:/linux.dropbox.com natty InRelease
    Ign http:/ftp.us.debian.org etch Release.gpg
    Ign http:/archive.ubuntu.com natty-security InRelease
    Hit http:/archive.canonical.com natty Release
    Hit http:/extras.ubuntu.com natty Release
    Ign http:/ppa.launchpad.net natty InRelease
    Get:3 http:/linux.dropbox.com natty Release.gpg [489 B]
    Ign http:/ftp.us.debian.org etch Release
    Ign http:/dl.google.com stable InRelease
    Get:4 http:/archive.ubuntu.com natty Release.gpg [198 B]
    Ign http:/ppa.launchpad.net natty InRelease
    Hit http:/archive.canonical.com natty/partner Sources
    Hit http:/extras.ubuntu.com natty/main Sources
    Get:5 http:/linux.dropbox.com natty Release [2,599 B]
    Get:6 http:/archive.ubuntu.com natty-updates Release.gpg [198 B]
    Ign http:/ppa.launchpad.net natty InRelease
    Hit http:/archive.canonical.com natty/partner amd64 Packages
    Hit http:/extras.ubuntu.com natty/main amd64 Packages
    Get:7 http:/linux.dropbox.com natty/main amd64 Packages [784 B]
    Get:8 http:/archive.ubuntu.com natty-security Release.gpg [198 B]
    Ign http:/ppa.launchpad.net natty InRelease
    Ign http:/archive.canonical.com natty/partner TranslationIndex
    Ign http:/extras.ubuntu.com natty/main TranslationIndex
    Get:9 http:/http.us.debian.org stable Release [104 kB]
    Ign http:/linux.dropbox.com natty/main TranslationIndex
    Hit http:/archive.ubuntu.com natty Release
    Ign http:/ppa.launchpad.net natty InRelease
    Ign http:/http.us.debian.org stable Release
    Hit http:/archive.ubuntu.com natty-updates Release
    Get:10 http:/ppa.launchpad.net natty InRelease [316 B]
    Ign http:/ppa.launchpad.net natty InRelease
    Hit http:/archive.ubuntu.com natty-security Release
    Get:11 http:/ppa.launchpad.net natty InRelease [316 B]
    Ign http:/ppa.launchpad.net natty InRelease
    Hit http:/archive.ubuntu.com natty/restricted Sources
    Get:12 http:/ppa.launchpad.net natty Release.gpg [316 B]
    Ign http:/http.us.debian.org stable/main Sources/DiffIndex
    Get:13 http:/ppa.launchpad.net natty Release.gpg [316 B]
    Hit http:/archive.ubuntu.com natty/main Sources
    Ign http:/ftp.us.debian.org etch/contrib TranslationIndex
    Ign http:/http.us.debian.org stable/contrib Sources/DiffIndex
    Get:14 http:/ppa.launchpad.net natty Release.gpg [1,502 B]
    Ign http:/http.us.debian.org stable/non-free Sources/DiffIndex
    Ign http:/ftp.us.debian.org etch/main TranslationIndex
    Get:15 http:/ppa.launchpad.net natty Release.gpg [1,928 B]
    Ign http:/http.us.debian.org stable/main amd64 Packages/DiffIndex
    Ign http:/ftp.us.debian.org etch/non-free TranslationIndex
    Ign http:/ppa.launchpad.net natty Release.gpg
    Hit http:/http.us.debian.org stable/contrib amd64 Packages/DiffIndex
    W: GPG error: http:/http.us.debian.org stable Release: The following signatures couldn't be verified because the public key is not available: NO_PUBKEY AED4B06F473041FA NO_PUBKEY 64481591B98321F9
    W: GPG error: http:/ppa.launchpad.net natty InRelease: File /var/lib/apt/lists/partial/ppa.launchpad.net_sunab_kdenlive-release_ubuntu_dists_natty_InRelease doesn't start with a clearsigned message
    W: GPG error: http:/ppa.launchpad.net natty InRelease: File /var/lib/apt/lists/partial/ppa.launchpad.net_ubuntu-wine_ppa_ubuntu_dists_natty_InRelease doesn't start with a clearsigned message
    E: Could not open file /var/lib/apt/lists/http.us.debian.org_debian_dists_stable_contrib_binary-amd64_Packages.IndexDiff - open (2: No such file or directory)

Output of sudo cat /etc/apt/sources.list:

    # deb cdrom:[Ubuntu 11.04 _Natty Narwhal_ - Release amd64 (20110427.1)]/ natty main restricted
    # See http:/help.ubuntu.com/community/UpgradeNotes for how to upgrade to newer versions of the distribution.
    deb http:/archive.ubuntu.com/ubuntu natty main restricted
    deb-src http:/archive.ubuntu.com/ubuntu natty restricted main multiverse universe #Added by software-properties
    ## Major bug fix updates produced after the final release of the distribution.
    deb http:/archive.ubuntu.com/ubuntu natty-updates main restricted
    deb-src http:/archive.ubuntu.com/ubuntu natty-updates restricted main multiverse universe #Added by software-properties
    ## N.B. software from this repository is ENTIRELY UNSUPPORTED by the Ubuntu team. Also, please note that software in universe WILL NOT receive any review or updates from the Ubuntu security team.
    deb http:/archive.ubuntu.com/ubuntu natty universe
    deb http:/archive.ubuntu.com/ubuntu natty-updates universe
    ## N.B. software from this repository is ENTIRELY UNSUPPORTED by the Ubuntu team, and may not be under a free licence. Please satisfy yourself as to your rights to use the software. Also, please note that software in multiverse WILL NOT receive any review or updates from the Ubuntu security team.
    deb http:/archive.ubuntu.com/ubuntu natty multiverse
    deb http:/archive.ubuntu.com/ubuntu natty-updates multiverse
    ## Uncomment the following two lines to add software from the 'backports' repository.
    ## N.B. software from this repository may not have been tested as extensively as that contained in the main release, although it includes newer versions of some applications which may provide useful features. Also, please note that software in backports WILL NOT receive any review or updates from the Ubuntu security team.
    # deb http:/us.archive.ubuntu.com/ubuntu/ natty-backports main restricted universe multiverse
    # deb-src http:/us.archive.ubuntu.com/ubuntu/ natty-backports main restricted universe multiverse
    deb http:/archive.ubuntu.com/ubuntu natty-security main restricted
    deb-src http:/archive.ubuntu.com/ubuntu natty-security restricted main multiverse universe #Added by software-properties
    deb http:/archive.ubuntu.com/ubuntu natty-security universe
    deb http:/archive.ubuntu.com/ubuntu natty-security multiverse
    ## Uncomment the following two lines to add software from Canonical's 'partner' repository.
    ## This software is not part of Ubuntu, but is offered by Canonical and the respective vendors as a service to Ubuntu users.
    deb http:/archive.canonical.com/ubuntu natty partner
    deb-src http:/archive.canonical.com/ubuntu natty partner
    ## This software is not part of Ubuntu, but is offered by third-party developers who want to ship their latest software.
    deb http:/extras.ubuntu.com/ubuntu natty main
    deb-src http:/extras.ubuntu.com/ubuntu natty main
    deb http:/ftp.us.debian.org/debian/ etch main contrib non-free
    deb-src http:/ftp.us.debian.org/debian/ etch main contrib non-free
    deb http:/http.us.debian.org/debian stable main contrib non-free
    deb-src http:/http.us.debian.org/debian stable main contrib non-free
    deb http:/non-us.debian.org/debian-non-US stable/non-US main contrib non-free
    deb-src http:/non-us.debian.org/debian-non-US stable/non-US main contrib non-free

Thanks.

But after removing the Debian repositories I am still getting these errors:

    W: GPG error: http://ppa.launchpad.net natty Release: The following signatures couldn't be verified because the public key is not available: NO_PUBKEY 9BDB3D89CE49EC21
    W: GPG error: http://ppa.launchpad.net natty Release: The following signatures couldn't be verified because the public key is not available: NO_PUBKEY 80E7349A06ED541C
    W: GPG error: http://ppa.launchpad.net natty Release: The following signatures couldn't be verified because the public key is not available: NO_PUBKEY 8C851674F96FD737
    W: GPG error: http://ppa.launchpad.net natty Release: The following signatures couldn't be verified because the public key is not available: NO_PUBKEY 94E58C34A8670E8C
    E: Unable to parse package file /var/lib/apt/lists/partial/archive.ubuntu.com_ubuntu_dists_natty-updates_multiverse_i18n_Index (1)

I actually tried importing the keys before, but I always get this error:

    Executing: gpg --ignore-time-conflict --no-options --no-default-keyring --secret-keyring /etc/apt/secring.gpg --trustdb-name /etc/apt/trustdb.gpg --keyring /etc/apt/trusted.gpg --primary-keyring /etc/apt/trusted.gpg --keyserver keyserver.ubuntu.com --recv-keys 8C851674F96FD737
    gpg: requesting key F96FD737 from hkp server keyserver.ubuntu.com
    ?: keyserver.ubuntu.com: Connection refused
    gpgkeys: HTTP fetch error 7: couldn't connect: Connection refused
    gpg: no valid OpenPGP data found.
    gpg: Total number processed: 0
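    A plausible fix (a sketch, not from the original thread): the Debian etch and non-US entries do not belong on an Ubuntu natty system and are what produce the 503 errors and unverifiable Release files, and the remaining NO_PUBKEY errors can usually be cured by fetching the PPA keys over port 80, since the default keyserver port (11371) is evidently blocked ("Connection refused"):

        # Back up sources.list and strip the Debian lines (adjust the pattern
        # if you deliberately keep other debian.org mirrors).
        sudo sed -i.bak '/debian\.org/d' /etc/apt/sources.list

        # Import the four missing PPA signing keys from the apt output; HKP on
        # port 80 usually passes through firewalls that block port 11371.
        for key in 9BDB3D89CE49EC21 80E7349A06ED541C 8C851674F96FD737 94E58C34A8670E8C; do
            sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv-keys "$key"
        done

        # Clear the half-downloaded index files and refresh.
        sudo rm -rf /var/lib/apt/lists/partial/*
        sudo apt-get update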

    Read the article

  • The Incremental Architect's Napkin - #1 - It's about the money, stupid

    - by Ralf Westphal
    Originally posted on: http://geekswithblogs.net/theArchitectsNapkin/archive/2014/05/24/the-incremental-architectacutes-napkin---1---itacutes-about-the.aspx

Software development is an economic endeavor. A customer is only willing to pay for value. Whatever makes a software valuable is required to become a trait of the software. We as software developers thus need to understand and then find a way to implement requirements. Whether, or to what extent, a customer really can know beforehand what's going to be valuable for him/her in the end is a topic of constant debate. Some aspects of the requirements might be less foggy than others. Sometimes the customer does not know what he/she wants. Sometimes he/she's certain to want something - but then is not happy when it's delivered. Nevertheless, requirements exist. And developers will only be paid if they deliver value. So we had better focus on doing that. Although it might sound trivial, I think it's important to state the corollary: we need to be able to trace anything we do as developers back to some requirement. You decide to use Go as the implementation language? Well, what's the customer's requirement this decision is linked to? You decide to use WPF as the GUI technology? What's the customer's requirement? You decide in favor of a layered architecture? What's the customer's requirement? You decide to put code in three classes instead of just one? What's the customer's requirement behind that? You decide to use MongoDB over MySQL? What's the customer's requirement behind that? Etc. I'm not saying any of these decisions are wrong. I'm just saying: whatever you decide, be clear about the requirement that's driving your decision. You have to be able to answer the question: why do you think X will deliver more value to the customer than the alternatives? Customers are not interested in romantic ideals of hard-working, well-meaning, quality-focused craftsmen. They don't care how and why you work - as long as what you deliver fulfills their needs. They want to trust you to recognize this as your top priority - and then deliver. That's all.

Fundamental aspects of requirements

If you're like me, you're probably not used to such scrutiny. You want to be trusted as a professional developer - and decide quite a few things following your gut feeling. Or by relying on "established practices". That's OK in general and most of the time - but still… I think we should be more conscious about our decisions. That would make us more responsible, even more professional. But without further guidance it's hard to reason about the myriad decisions we have to make over the course of a software project. What I found helpful in this situation is structuring requirements into fundamental aspects. Instead of one large heap of requirements there are then smaller blobs. With them it's easier to check whether a decision falls within their scope. Sure, every project has its very own requirements. But all of them belong to just three different major categories, I think. Any requirement pertains either to functionality, non-functional aspects, or sustainability. For short, I call those aspects:

Functionality, because such requirements describe which transformations a software should offer. For example: a calculator should be able to add and multiply real numbers. An auction website should enable you to set up an auction anytime or to find auctions to bid for.

Quality, because such requirements describe how functionality is supposed to work, e.g. fast or secure. For example: a calculator should be able to calculate the sine of a value much faster than you could in your head. An auction website should accept bids from millions of users.

Security of Investment, because functionality and quality need not just be delivered in any way. It's important to the customer to get them quickly - and not only today but over the course of several years. This aspect introduces time into the "requirements equation".

Security of Investment (SoI) sure is a non-functional requirement. But I think it's important not to subsume it under the Quality (Q) aspect. That's because SoI has quite special properties. For one, SoI for software means something completely different from what it means for hardware. If you buy hardware (a car, a hair dryer) you find it a worthwhile investment if the hardware does not change its functionality or quality over time. A car still running smoothly with hardly any rust spots after 10 years of daily usage would be a very secure investment. So for hardware (or material products, if you like), "unchangeability" (in the face of usage) is desirable. With software you want the contrary. Software that cannot be changed is a waste. SoI for software means "changeability". You want to be sure that the software you buy/order today can be changed, adapted, and improved over an unforeseeable number of years so as to fit changes in its usage environment. But that's not the only reason why the SoI aspect is special. On top of changeability[1] (or evolvability) comes immeasurability. Evolvability cannot readily be measured by counting something. Whether the changeability is as high as the customer wants it cannot be determined by looking at metrics like Lines of Code, Cyclomatic Complexity, or Afferent Coupling. They may give a hint… but they are far, far from precise. That's because of the nature of changeability. It's different from performance or scalability. Also, it's because a customer cannot tell upfront "how much" evolvability he/she wants. Whether requirements regarding Functionality (F) and Q have been met, a customer can tell you very quickly and very precisely: a calculation is missing, the calculation takes too long, the calculation time degrades with increased load, the calculation is accessible to the wrong users, etc. That's all very, or at least comparatively, easy to determine. But changeability… that's a whole different thing. Nevertheless, over time the customer will develop a feeling for whether changeability is good enough or degrading. He/she just has to check the development of the frequency of "WTF"s from developers ;-) F and Q are "timeless" requirement categories. Customers want us to deliver on them now. Just focusing on the now, though, is rarely beneficial in the long run. So SoI adds a counterweight to the requirements picture. Customers want SoI - whether they know it or not, whether they state it explicitly or not.

In closing

A customer's requirements are not monolithic. They are not all made the same. Rather, they fall into different categories. We as developers need to recognize these categories when confronted with a requirement - and take them into account. Only then can we make true professional decisions, i.e. conscious and responsible ones.

[1] I call this fundamental trait of software "changeability" and not "flexibility" to distinguish to whom it's a concern. "Flexibility" to me means that software, as it is, can easily be adapted to a change in its environment, e.g. by tweaking some config data or adding a library which gets picked up by a plug-in engine. "Flexibility" thus is a matter of some user. "Changeability", on the other hand, to me means that software can easily be changed in its structure to adapt it to new requirements. That's a matter of the software developer.

    Read the article

  • SharePoint – The Most Important Feature

    - by Bil Simser
    Watch Twitter and do a search for SharePoint and you'll see a lot of tweets (almost one every few minutes) about the top 10 new features in SharePoint. But what answer do you get when you ask the question, "What's the most important feature in SharePoint?" Chances are the answer will vary. Some will say it's the collaboration aspect; others might say it's the new ribbon interface, multi-item editing, external content types, faceted search, large list support, document versioning, Silverlight, etc. The list goes on. However, I think most people might be missing the most important feature, the one that's been sitting right under their noses all this time. The most important feature of SharePoint? It's called User Empowerment. Huh? What? Is that something I find in the Site Actions menu? Nope. It's something that's always been there in SharePoint; you just need to get the word out and support it. How many times have you had a team ask you for a team site (assuming you had SharePoint up and running)? Or to create them a contact list? Or how long have you employed that guy in the corner who's been copying and pasting content from Corporate Communications into the web from a Word document? Let's stop the insanity. It doesn't have to be this way. SharePoint's strongest feature isn't anything you can find in the Site Settings screen or Central Admin. It's all about empowering your users and letting them take control of their content. After all, SharePoint really is a bunch of tools to let users collaborate on content, isn't it? So why are you, as IT, stepping in and helping the user every moment along the way? It's like making users fill out a help desk ticket or call up the Windows team to create a folder on their desktop or rearrange their Start menu. This isn't something IT should be spending its time doing, nor is it something users should be burdened with - having to wait until their friendly neighborhood tech guy (or gal) shows up to help them sort the icons on their desktop. SharePoint IS all about empowerment. Site owners can create whatever lists and libraries they need for their team, and if the template isn't there they can always turn to my friend and yours, the Custom List. From that can spew forth approval tracking systems, new-hire checklists, and server inventories. You're only limited by your imagination and needs. Users should be able to create new sites as they need them. Want a blog to let everyone know what your team is up to? Go create one; here's how. What's a blog, you ask? Here's what it is and why you would use one. SharePoint is the shift in the balance of power, and you need, as an IT group, to let go of certain responsibilities and let your users run with the tools. A power user who knows how to create sites and what features are available to them can help a team go from the forming stage to the storming stage overnight. Again, this all hinges on you as an IT organization and what features you can, and do, empower your users with. Running with tools is great if you know how to use them; running with scissors is not recommended unless you enjoy trips to the hospital. With Great Power comes Great Responsibility, so don't go out on Monday and send a memo to the organization saying "This Bil guy says you peeps can do anything, so here it is, knock yourself out" (for one, they'll have *no* idea who this Bil guy is). This advice comes with the task of getting your users ready for empowerment.

Whether it's through some kind of internal training sessions; in-house documentation, videos, and blog posts on how to accomplish things in SharePoint; or full-blown one-on-one sit-downs with teams or individuals to help them through their problems - the work is up to you. Helping them along should also be part of your governance (you do have one, don't you?). Just because you have the InfoPath client deployed with your Office suite doesn't mean users should just start publishing forms all over your SharePoint farm. There should be some governance behind that, covering what you'll support and what is possible. The other caveat to all this is that SharePoint is not everything for everyone. It can't cook you breakfast, impregnate your cat, or solve world hunger. It also isn't suited for every IT solution out there. It's a horrible source control system (even though some people try to use it as such) and really can't do financials worth a darn. Again, governance is key here, and part of that governance - and your responsibility in setting up and unleashing SharePoint into your organization - is to give users guidance on what should be in SharePoint and (more importantly) what should not be in SharePoint. There are boundaries you have to set where you don't want your end users going, as they might be treading into trouble. Again, it's up to you to set these constraints and help users understand why these pylons are there. If someone understands why they can't do something, they might have a better understanding of, and respect for, those who put the constraints there in the first place. Of course, you'll always have the power users who want to go skiing down dead man's curve, so this doesn't work for everyone, but you can catch the majority of the newbies who don't wander aimlessly off the beaten path. At the end of the day, when all things are going swimmingly, your end users should be empowered to solve the needs they have on a day-to-day basis without having to keep bugging the IT department to help them create a view to show only approved documents. I wouldn't go as far as having business users build out full-blown solutions - handing the keys to SharePoint Designer or (worse) Visual Studio to power users might not be a path you want to go down - but you also don't have to lock the SharePoint system up in a tight box where users can't use what's there. So stop focusing on the shiny things in SharePoint and maybe consider making a shift to what's really important: making your day job easier and letting users get the most out of your technology investment.

    Read the article

  • Monitor your Hard Drive’s Health with Acronis Drive Monitor

    - by Matthew Guay
    Are you worried that your computer's hard drive could die without any warning? Here's how you can keep tabs on it and get the first warning signs of potential problems before you actually lose your critical data. Hard drive failures are one of the most common ways people lose important data from their computers. As more of our memories and important documents are stored digitally, a hard drive failure can mean the loss of years of work. Acronis Drive Monitor helps you avert these disasters by warning you at the first signs your hard drive may be having trouble. It monitors many indicators, including heat, read/write errors, total lifespan, and more. It then notifies you via a taskbar popup or email that problems have been detected. This early warning lets you know ahead of time that you may need to purchase a new hard drive and migrate your data before it's too late.

Getting Started

Head over to the Acronis site to download Drive Monitor (link below). You'll need to enter your name and email, and then you can download this free tool. Also, note that the download page may ask if you want to include a trial of their for-pay backup program. If you wish to install just the Drive Monitor utility, click Continue without adding. Run the installer when the download is finished, and follow the prompts to install as normal. Once it's installed, you can quickly get an overview of your hard drives' health. Note that it shows three categories: Disk problems, Acronis backup, and Critical Events. On our computer, we had Seagate DiscWizard, an image backup utility based on Acronis Backup, installed, and Acronis detected it. Drive Monitor stays running in your tray even when the application window is closed. It will keep monitoring your hard drives and will alert you if there's a problem.

Find Detailed Information About Your Hard Drives

Acronis' simple interface lets you quickly see an overview of how the drives on your computer are performing. If you'd like more information, click the link under the description. Here we see that one of our drives has overheated, so click Show disks to get more information. Now you can select each of your drives and see more information about them. From the Disk overview tab that opens by default, we see that our drive is being monitored, has been running for a total of 368 days, and that its health is good. However, it is running at 113°F, which is over the recommended maximum of 107°F. The S.M.A.R.T. parameters tab gives us more detailed information about our drive. Most users wouldn't know what an accepted value would be, so it also shows the status: if the value is within the accepted parameters, it will report OK; otherwise, it will show that it has a problem in this area. One very interesting piece of information we can see is the total number of Power-On Hours, the Start/Stop Count, and the Power Cycle Count. These could be useful indicators to check if you're considering purchasing a second-hand computer. Simply load this program, and you'll get a better view of how long it's been in use. Finally, the Events tab shows each time the program gave a warning. We can see that our drive, which had already been acting flaky, routinely overheats even when our other hard drive runs in normal temperature ranges.

Monitor Acronis Backups And Critical Errors

In addition to monitoring critical stats of your hard drives, Acronis Drive Monitor also keeps up with the status of your backup software and critical events reported by Windows.
You can access these from the front page, or via the links on the left-hand sidebar. If you have any edition of any Acronis Backup product installed, it will show that it was detected. Note that it can only monitor the backup status of the newest versions of Acronis Backup and True Image. If no Acronis backup software is installed, it will show a warning that the drive may be unprotected and will give you a link to download Acronis backup software. If you have another backup utility installed that you wish to monitor yourself, click Configure backup monitoring, and then disable monitoring on the drives you're handling yourself. Finally, you can view any detected critical events from the Critical events tab on the left.

Get Emailed When There's a Problem

One of Drive Monitor's best features is the ability to send you an email whenever there's a problem. Since this program can run on any version of Windows, including the Server and Home Server editions, you can use this feature to stay on top of your hard drives' health even when you're not nearby. To set this up, click Options in the top left corner. Select Alerts on the left, and then click the Change settings link to set up your email account. Enter the email address at which you wish to receive alerts and a name for the program. Then, enter the outgoing mail server settings for your email. If you have a Gmail account, enter the following information:

    Outgoing mail server (SMTP): smtp.gmail.com
    Port: 587
    Username and Password: your Gmail address and password

Check the Use encryption box, and then select TLS from the encryption options. It will now send a test message to your email account, so check and make sure it sent OK. Now you can choose to have the program automatically email you when warnings and critical alerts appear, and also have it send regular disk status reports.

Conclusion

Whether you've got a brand new hard drive or one that's seen better days, knowing its real health is one of the best ways to be prepared before disaster strikes. It's no substitute for regular backups, but it can help you avert problems. Acronis Drive Monitor is a nice tool for this, and although we wish it weren't so centered around Acronis' backup offerings, we still found it useful.

Link: Download Acronis Drive Monitor (registration required)
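If you'd like to double-check Drive Monitor's numbers, or watch the same counters on a machine where you can't install it, the open-source smartmontools package reads the identical S.M.A.R.T. attributes. A minimal sketch (our own illustration, not part of Acronis; it assumes smartmontools is installed and your drive shows up as /dev/sda - on Windows builds of smartctl, /dev/sda maps to the first physical drive):

    # Overall health self-assessment reported by the drive (PASSED/FAILED).
    smartctl -H /dev/sda

    # Dump the raw S.M.A.R.T. attributes; Power_On_Hours (ID 9),
    # Start_Stop_Count (ID 4), Power_Cycle_Count (ID 12), and
    # Temperature_Celsius (ID 194) are the counters discussed above.
    smartctl -A /dev/sda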

    Read the article
