Search Results

Search found 7216 results on 289 pages for 'low cost'.


  • DNS failover in a two datacenter scenario

    - by wanson
    I'm trying to implement a low-cost solution for website high availability, and I'm looking for the downsides of the following scenario. I have two servers with the same configuration, content and MySQL replication (dual-master). They are in different datacenters - let's call them serverA and serverB. Users use serverA; serverB is more like a backup. Now I want to use DNS failover to switch users from serverA to serverB when serverA goes down. My idea is that I set up DNS servers (BIND/PowerDNS) on serverA and serverB - let's call them ns1.website.com and ns2.website.com (assuming I own website.com) - and configure my domain to use them as its nameservers. Both DNS servers will return serverA's IP as my website's IP. If serverA goes down, I can (either manually or automatically from serverB) change the configuration of serverB's DNS to return serverB's IP as the website's IP. Of course the TTL will be low, as it's supposed to be in DNS failover.

    I know it may take some time to switch to serverB (DNS TTL, time to detect serverA's failure, serverB DNS reconfiguration, etc.), and that some small part of users won't reach serverB anyway. I'm OK with that. But what are the other downsides of such an approach?

    An alternative scenario is that ns1.website.com returns serverA's IP as the website's IP and ns2.website.com returns serverB's IP. But AFAIK clients don't always use the primary nameserver and sometimes use the secondary one, so some small part of users would hit serverB instead of serverA, which is not quite what I'd like. Can you confirm that DNS clients behave like that, and can you tell what percentage of clients would statistically use serverB instead of serverA? This option also has the downside that when serverA comes back up it will automatically be used as the website's primary server again, which is also a bad situation (cold cache, MySQL replication could have failed in the meantime, etc.), so I'm adding it only as a theoretical alternative.

    I was also thinking about using professional DNS failover companies, but they charge by the number of DNS requests and the fees are very high (why?)
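    A minimal sketch of the failover watchdog this scenario needs on serverB, assuming serverB runs BIND with rndc available; the URLs, IP addresses, zone file path and the "www" record name below are placeholders, not values from the setup above. It polls serverA over HTTP and, after a few consecutive failures, rewrites the zone's A record to point at serverB and reloads the zone:

    #!/usr/bin/env python
    # Hypothetical failover watchdog running on serverB. All paths, IPs and
    # the record name are placeholders; assumes BIND with rndc available.
    import re
    import subprocess
    import time
    import urllib.request

    SERVER_A_URL = "http://203.0.113.10/"    # health-check URL on serverA (placeholder)
    SERVER_B_IP = "198.51.100.20"            # serverB's public IP (placeholder)
    ZONE = "website.com"
    ZONE_FILE = "/etc/bind/db.website.com"   # zone file served by serverB (placeholder)
    CHECK_INTERVAL = 30                      # seconds between health checks
    FAILURES_BEFORE_SWITCH = 3               # avoid flapping on a single timeout

    def server_a_is_up():
        try:
            urllib.request.urlopen(SERVER_A_URL, timeout=5)
            return True
        except Exception:
            return False

    def point_zone_at_server_b():
        with open(ZONE_FILE) as f:
            zone = f.read()
        # Swap the A record for the web host over to serverB's address.
        # (A production script would also bump the zone serial.)
        zone = re.sub(r"^(www\s+IN\s+A\s+)\S+", r"\g<1>" + SERVER_B_IP,
                      zone, flags=re.MULTILINE)
        with open(ZONE_FILE, "w") as f:
            f.write(zone)
        subprocess.run(["rndc", "reload", ZONE], check=True)

    failures = 0
    while True:
        failures = 0 if server_a_is_up() else failures + 1
        if failures == FAILURES_BEFORE_SWITCH:
            point_zone_at_server_b()
        time.sleep(CHECK_INTERVAL)

    The detection-plus-TTL delay mentioned in the question is visible here: users keep hitting serverA for FAILURES_BEFORE_SWITCH * CHECK_INTERVAL seconds, plus however long cached DNS answers live.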

    Read the article

  • Recover data from a physically damaged hard drive. What are my options?

    - by Michael Kniskern
    I was trying to replace the power supply in my desktop PC and ended up physically damaging the data connection from the hard drive to the motherboard. The plastic shelf for the copper prongs on the hard drive broke into the cable. Here is a picture of my handiwork: I went to the Best Buy Geek Squad to discuss my options, and they said they would need to send it to the recovery center and that it could cost anywhere between $250 and $1,600 USD to recover the data off the hard drive. Is this reasonable for data recovery from a physically damaged hard drive? Are there any other options I can explore? I am going to talk to Data Doctors to see what my options are.

    Update: I took the HD to Data Doctors, and they told me that the SATA connection was broken, so they would need to replace the data connector and then copy the data to a brand new hard drive. So, with the initial analysis, the cost of replacement parts, and the data recovery fee, it came out to $865.00 USD. The technician specifically stated that if this were an older hard drive they would just need to replace the data connector, but because there is information specific to the individual hard drive in the flash ROM, they need to transfer the data to a brand new hard drive.

    Read the article

  • Corliss Expert Group Home Security: How to Secure Your Home without Spending Too Much?

    - by Mika Esmond
    Imagine if there were no burglars or criminals threatening the safety of our homes; we would be surprised how much we would save on all the things we do to secure ourselves and our loved ones. We would not need fences, gates with locks, door locks, window grilles, CCTV cameras, perimeter lighting, shotguns or baseball bats. The cost of maintaining these things can run up to the entire cost of building another room or, in some cases, a whole new house. The rationale for home security is the same as for national security: a nation maintains an army whether it has enemies or not, so whether burglars come or not, we have to prepare for the eventuality. Hence we end up spending on something we might never put to the actual use it was intended for. You buy a pistol, and when a burglar breaks in you fire the gun either to scare or to disable the intruder. We hope we will never have to use these things, but we still buy them for the peace of mind that comes from knowing we can secure or protect our family and home.

    Read the article

  • What is the sysadmin's dream network printer? 6-8k pg/mo. Xerox, OkiData, Lexmark and HP all fail

    - by Jacob
    How do I find out which printer brand and/or type doesn't suck? This information is hard to find, and manufacturers' websites won't reveal any issues with certain printers. After 10 years of dealing with shared network printers, I can't say that I have been impressed with any of the printer brands I've seen. Brother's little laser MFPs have been close to ideal for low volume, but that is it, period. OkiData, Lexmark, HP, Xerox solid ink printers - they all sucked in one way or another. Currently I'm looking to replace a Xerox ColorQube 8570 because it fails to print on a regular basis. Sometimes it doesn't even boot VxWorks fully - it just hangs at 2% or whatever. I've used Xerox 8860MFPs and they sucked just as badly. I won't talk about inkjets here; that's most likely not what I'm looking for. We currently spend about $4k per year on paper and ink for this printer, at up to 6-8k pages per month, letter size, mostly black and white with low color usage. I want a printer that feeds paper correctly, doesn't crash and burn when a PDF isn't to its taste (my favorite Xerox problem here), and comes with decent drivers for Windows and OS X. Print quality is not of the utmost importance, but paper does get sent to customers.

    Read the article

  • Slow filesystem access

    - by danneh3826
    I'm trying to diagnose a slow filesystem issue on a server I look after. It's been ongoing for quite some time, and I've run out of ideas as to what I can try. Here's the thick of it. The server itself is a Dell Poweredge T310. It has 4 SAS hard drives in it, configured at RAID5, and is running Citrix XenServer 5.6. The VM is a (relatively) old Debian 5.0.6 installation. It's given 4 cores, and 4Gb's of RAM. It has 3 volumes. A 10Gb volume (ext3) for the system, 980Gb volume (xfs) for data (~94% full), and another 200Gb volume (xfs) for data (~13% full). Now here's the weird thing. Read/write access to the 980Gb volume is really slow. I might get 5Mb/s out of it if I'm lucky. At first I figured it was actually disk access in the system, or at a hypervisor level, but ruled that out entirely as other VMs on the same host are running perfectly fine (a good couple hundred Mb/s disk r/w access). So then I started to target this particular VM. I started thinking it was XFS, but to prove it I wasn't going to attempt to change the filesystem on the 980Gb drive, with years and years of billions of files on there. So I provisioned the 200Gb drive, and did the same read/write test (basically dd), and got a good couple hundred Mb/s r/w access to it. So that ruled out the VM, the hardware, and the filesystem type. There's also a lot of these in /var/log/kern.log; (sorry, this is quite long) Sep 4 10:16:59 uriel kernel: [32571790.564689] httpd: page allocation failure. order:5, mode:0x4020 Sep 4 10:16:59 uriel kernel: [32571790.564693] Pid: 7318, comm: httpd Not tainted 2.6.32-4-686-bigmem #1 Sep 4 10:16:59 uriel kernel: [32571790.564696] Call Trace: Sep 4 10:16:59 uriel kernel: [32571790.564705] [<c1092a4d>] ? __alloc_pages_nodemask+0x476/0x4e0 Sep 4 10:16:59 uriel kernel: [32571790.564711] [<c1092ac3>] ? __get_free_pages+0xc/0x17 Sep 4 10:16:59 uriel kernel: [32571790.564716] [<c10b632e>] ? __kmalloc+0x30/0x128 Sep 4 10:16:59 uriel kernel: [32571790.564722] [<c11dd774>] ? pskb_expand_head+0x4f/0x157 Sep 4 10:16:59 uriel kernel: [32571790.564727] [<c11ddbbf>] ? __pskb_pull_tail+0x41/0x1fb Sep 4 10:16:59 uriel kernel: [32571790.564732] [<c11e4882>] ? dev_queue_xmit+0xe4/0x38e Sep 4 10:16:59 uriel kernel: [32571790.564738] [<c1205902>] ? ip_finish_output+0x0/0x5c Sep 4 10:16:59 uriel kernel: [32571790.564742] [<c12058c7>] ? ip_finish_output2+0x187/0x1c2 Sep 4 10:16:59 uriel kernel: [32571790.564747] [<c1204dc8>] ? ip_local_out+0x15/0x17 Sep 4 10:16:59 uriel kernel: [32571790.564751] [<c12055a9>] ? ip_queue_xmit+0x31e/0x379 Sep 4 10:16:59 uriel kernel: [32571790.564758] [<c1279a90>] ? _spin_lock_bh+0x8/0x1e Sep 4 10:16:59 uriel kernel: [32571790.564767] [<eda15a8d>] ? __nf_ct_refresh_acct+0x66/0xa4 [nf_conntrack] Sep 4 10:16:59 uriel kernel: [32571790.564773] [<c103bf42>] ? _local_bh_enable_ip+0x16/0x6e Sep 4 10:16:59 uriel kernel: [32571790.564779] [<c1214593>] ? tcp_transmit_skb+0x595/0x5cc Sep 4 10:16:59 uriel kernel: [32571790.564785] [<c1005c4f>] ? xen_restore_fl_direct_end+0x0/0x1 Sep 4 10:16:59 uriel kernel: [32571790.564791] [<c12165ea>] ? tcp_write_xmit+0x7a3/0x874 Sep 4 10:16:59 uriel kernel: [32571790.564796] [<c121203a>] ? tcp_ack+0x1611/0x1802 Sep 4 10:16:59 uriel kernel: [32571790.564801] [<c10055ec>] ? xen_force_evtchn_callback+0xc/0x10 Sep 4 10:16:59 uriel kernel: [32571790.564806] [<c121392f>] ? tcp_established_options+0x1d/0x8b Sep 4 10:16:59 uriel kernel: [32571790.564811] [<c1213be4>] ? tcp_current_mss+0x38/0x53 Sep 4 10:16:59 uriel kernel: [32571790.564816] [<c1216701>] ? 
__tcp_push_pending_frames+0x1e/0x50 Sep 4 10:16:59 uriel kernel: [32571790.564821] [<c1212246>] ? tcp_data_snd_check+0x1b/0xd2 Sep 4 10:16:59 uriel kernel: [32571790.564825] [<c1212de3>] ? tcp_rcv_established+0x5d0/0x626 Sep 4 10:16:59 uriel kernel: [32571790.564831] [<c121902c>] ? tcp_v4_do_rcv+0x15f/0x2cf Sep 4 10:16:59 uriel kernel: [32571790.564835] [<c1219561>] ? tcp_v4_rcv+0x3c5/0x5c0 Sep 4 10:16:59 uriel kernel: [32571790.564841] [<c120197e>] ? ip_local_deliver_finish+0x10c/0x18c Sep 4 10:16:59 uriel kernel: [32571790.564846] [<c12015a4>] ? ip_rcv_finish+0x2c4/0x2d8 Sep 4 10:16:59 uriel kernel: [32571790.564852] [<c11e3b71>] ? netif_receive_skb+0x3bb/0x3d6 Sep 4 10:16:59 uriel kernel: [32571790.564864] [<ed823efc>] ? xennet_poll+0x9b8/0xafc [xen_netfront] Sep 4 10:16:59 uriel kernel: [32571790.564869] [<c11e40ee>] ? net_rx_action+0x96/0x194 Sep 4 10:16:59 uriel kernel: [32571790.564874] [<c103bd4c>] ? __do_softirq+0xaa/0x151 Sep 4 10:16:59 uriel kernel: [32571790.564878] [<c103be24>] ? do_softirq+0x31/0x3c Sep 4 10:16:59 uriel kernel: [32571790.564883] [<c103befa>] ? irq_exit+0x26/0x58 Sep 4 10:16:59 uriel kernel: [32571790.564890] [<c118ff9f>] ? xen_evtchn_do_upcall+0x12c/0x13e Sep 4 10:16:59 uriel kernel: [32571790.564896] [<c1008c3f>] ? xen_do_upcall+0x7/0xc Sep 4 10:16:59 uriel kernel: [32571790.564899] Mem-Info: Sep 4 10:16:59 uriel kernel: [32571790.564902] DMA per-cpu: Sep 4 10:16:59 uriel kernel: [32571790.564905] CPU 0: hi: 0, btch: 1 usd: 0 Sep 4 10:16:59 uriel kernel: [32571790.564908] CPU 1: hi: 0, btch: 1 usd: 0 Sep 4 10:16:59 uriel kernel: [32571790.564911] CPU 2: hi: 0, btch: 1 usd: 0 Sep 4 10:16:59 uriel kernel: [32571790.564914] CPU 3: hi: 0, btch: 1 usd: 0 Sep 4 10:16:59 uriel kernel: [32571790.564916] Normal per-cpu: Sep 4 10:16:59 uriel kernel: [32571790.564919] CPU 0: hi: 186, btch: 31 usd: 175 Sep 4 10:16:59 uriel kernel: [32571790.564922] CPU 1: hi: 186, btch: 31 usd: 165 Sep 4 10:16:59 uriel kernel: [32571790.564925] CPU 2: hi: 186, btch: 31 usd: 30 Sep 4 10:16:59 uriel kernel: [32571790.564928] CPU 3: hi: 186, btch: 31 usd: 140 Sep 4 10:16:59 uriel kernel: [32571790.564931] HighMem per-cpu: Sep 4 10:16:59 uriel kernel: [32571790.564933] CPU 0: hi: 186, btch: 31 usd: 159 Sep 4 10:16:59 uriel kernel: [32571790.564936] CPU 1: hi: 186, btch: 31 usd: 22 Sep 4 10:16:59 uriel kernel: [32571790.564939] CPU 2: hi: 186, btch: 31 usd: 24 Sep 4 10:16:59 uriel kernel: [32571790.564942] CPU 3: hi: 186, btch: 31 usd: 13 Sep 4 10:16:59 uriel kernel: [32571790.564947] active_anon:485974 inactive_anon:121138 isolated_anon:0 Sep 4 10:16:59 uriel kernel: [32571790.564948] active_file:75215 inactive_file:79510 isolated_file:0 Sep 4 10:16:59 uriel kernel: [32571790.564949] unevictable:0 dirty:516 writeback:15 unstable:0 Sep 4 10:16:59 uriel kernel: [32571790.564950] free:230770 slab_reclaimable:36661 slab_unreclaimable:21249 Sep 4 10:16:59 uriel kernel: [32571790.564952] mapped:20016 shmem:29450 pagetables:5600 bounce:0 Sep 4 10:16:59 uriel kernel: [32571790.564958] DMA free:2884kB min:72kB low:88kB high:108kB active_anon:0kB inactive_anon:0kB active_file:5692kB inactive_file:724kB unevictable:0kB isolated(anon):0kB isolated(file):0kB present:15872kB mlocked:0kB dirty:8kB writeback:0kB mapped:0kB shmem:0kB slab_reclaimable:5112kB slab_unreclaimable:156kB kernel_stack:56kB pagetables:0kB unstable:0kB bounce:0kB writeback_tmp:0kB pages_scanned:0 all_unreclaimable? 
no Sep 4 10:16:59 uriel kernel: [32571790.564964] lowmem_reserve[]: 0 698 4143 4143 Sep 4 10:16:59 uriel kernel: [32571790.564977] Normal free:143468kB min:3344kB low:4180kB high:5016kB active_anon:56kB inactive_anon:2068kB active_file:131812kB inactive_file:131728kB unevictable:0kB isolated(anon):0kB isolated(file):0kB present:715256kB mlocked:0kB dirty:156kB writeback:0kB mapped:308kB shmem:4kB slab_reclaimable:141532kB slab_unreclaimable:84840kB kernel_stack:1928kB pagetables:22400kB unstable:0kB bounce:0kB writeback_tmp:0kB pages_scanned:0 all_unreclaimable? no Sep 4 10:16:59 uriel kernel: [32571790.564983] lowmem_reserve[]: 0 0 27559 27559 Sep 4 10:16:59 uriel kernel: [32571790.564995] HighMem free:776728kB min:512kB low:4636kB high:8760kB active_anon:1943840kB inactive_anon:482484kB active_file:163356kB inactive_file:185588kB unevictable:0kB isolated(anon):0kB isolated(file):0kB present:3527556kB mlocked:0kB dirty:1900kB writeback:60kB mapped:79756kB shmem:117796kB slab_reclaimable:0kB slab_unreclaimable:0kB kernel_stack:0kB pagetables:0kB unstable:0kB bounce:0kB writeback_tmp:0kB pages_scanned:0 all_unreclaimable? no Sep 4 10:16:59 uriel kernel: [32571790.565001] lowmem_reserve[]: 0 0 0 0 Sep 4 10:16:59 uriel kernel: [32571790.565011] DMA: 385*4kB 16*8kB 3*16kB 9*32kB 6*64kB 2*128kB 1*256kB 0*512kB 0*1024kB 0*2048kB 0*4096kB = 2900kB Sep 4 10:16:59 uriel kernel: [32571790.565032] Normal: 21505*4kB 6508*8kB 273*16kB 24*32kB 3*64kB 0*128kB 0*256kB 0*512kB 0*1024kB 0*2048kB 0*4096kB = 143412kB Sep 4 10:16:59 uriel kernel: [32571790.565054] HighMem: 949*4kB 8859*8kB 7063*16kB 6186*32kB 4631*64kB 727*128kB 6*256kB 0*512kB 0*1024kB 0*2048kB 0*4096kB = 776604kB Sep 4 10:16:59 uriel kernel: [32571790.565076] 198980 total pagecache pages Sep 4 10:16:59 uriel kernel: [32571790.565079] 14850 pages in swap cache Sep 4 10:16:59 uriel kernel: [32571790.565082] Swap cache stats: add 2556273, delete 2541423, find 82961339/83153719 Sep 4 10:16:59 uriel kernel: [32571790.565085] Free swap = 250592kB Sep 4 10:16:59 uriel kernel: [32571790.565087] Total swap = 385520kB Sep 4 10:16:59 uriel kernel: [32571790.575454] 1073152 pages RAM Sep 4 10:16:59 uriel kernel: [32571790.575458] 888834 pages HighMem Sep 4 10:16:59 uriel kernel: [32571790.575461] 11344 pages reserved Sep 4 10:16:59 uriel kernel: [32571790.575463] 1090481 pages shared Sep 4 10:16:59 uriel kernel: [32571790.575465] 737188 pages non-shared Now, I've no idea what this means. There's plenty of free memory; total used free shared buffers cached Mem: 4247232 3455904 791328 0 5348 736412 -/+ buffers/cache: 2714144 1533088 Swap: 385520 131004 254516 Though now I see the swap is relatively low in size, but would that matter? I've been starting to think about fragmentation, or inode usage on that large partition, but a recent fsck on it showed is as only like 0.5% fragmented. Which leaves me with inode usage, but how much of an effect really would a large inode table or filesystem TOC have? I've love to hear people's opinions on this. It's driving me potty! df -h output; Filesystem Size Used Avail Use% Mounted on /dev/xvda1 9.5G 6.6G 2.4G 74% / tmpfs 2.1G 0 2.1G 0% /lib/init/rw udev 10M 520K 9.5M 6% /dev tmpfs 2.1G 0 2.1G 0% /dev/shm /dev/xvdb 980G 921G 59G 94% /data
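    For anyone who wants to reproduce the "basically dd" comparison between the two volumes, here is a rough sketch of that sequential read/write test; the file path and size are placeholders, it assumes Linux, and the cache drop needs root:

    #!/usr/bin/env python
    # Rough sequential write/read throughput test, similar in spirit to the
    # dd runs described above. TEST_FILE and SIZE_MB are placeholders; run it
    # once against the slow volume and once against the fast one.
    import os
    import time

    TEST_FILE = "/data/throughput.test"   # place this on the volume under test
    SIZE_MB = 512
    CHUNK = b"\0" * (1024 * 1024)         # 1 MiB per write

    def write_test():
        start = time.time()
        with open(TEST_FILE, "wb") as f:
            for _ in range(SIZE_MB):
                f.write(CHUNK)
            f.flush()
            os.fsync(f.fileno())          # make sure data actually hit the disk
        return SIZE_MB / (time.time() - start)

    def read_test():
        # Drop the page cache first (Linux only, needs root), otherwise the
        # read pass mostly measures RAM speed.
        with open("/proc/sys/vm/drop_caches", "w") as f:
            f.write("3\n")
        start = time.time()
        with open(TEST_FILE, "rb") as f:
            while f.read(1024 * 1024):
                pass
        return SIZE_MB / (time.time() - start)

    print("write: %.1f MB/s" % write_test())
    print("read:  %.1f MB/s" % read_test())
    os.remove(TEST_FILE)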

    Read the article

  • Replacing Buffalo LinkStations with FreeNAS, overall backup strategy, am I on the right path?

    - by Shreko
    We've been using two Buffalo LinkStations of 320 GB each for a shared directory and employees' server storage (around 20 employees) - only documents (Word, Excel, CAD drawings, etc.) and the database backup of the main application server (ERP, accounting). One Buffalo box serves as the main one, located in the server room next to the main application server, and the other Buffalo box is located on the opposite side of the building (for fire protection) in a secure storage room and backs up the first one. We also have several external HDs that back up everything from the Buffalo box for an offsite backup. After 3.5 years of using these, capacity is the main limitation, so I'm planning a replacement and would like to use FreeNAS (we already use m0n0wall with great success). I would like to keep it simple and continue a similar setup, building two low-power boxes with one 2 TB HD each. Is a low-power Atom mobo OK? I'm not sure about the HDs - I've read somebody on this site mentioning the Seagate ES.2 as more reliable and better performing. How would those eco/green drives compare? We've been pretty happy with the speed of the Buffalo boxes and I don't want my users to notice any slowdown. Any suggestions?

    Read the article

  • ASP.NET Session State SQL Server 2008 R2 Freezes with High CPU Usage

    - by jtseng
    Our ASP.NET website uses SQL Server as the session state provider. We currently host the database on SQL Server 2005, since it does not play well on 2008 R2. We would like to know why, and how to fix it.

    Hardware setup: Our current session state server runs SQL Server 2005 with the files hosted on a single local disk. It is one of our oldest servers; it has served us well and we never felt the need to upgrade it. The database is about 2 GB, holding 6,000 sessions. (The sessions are a little big, but we need it.) We have another server with SQL Server 2008 R2, a much faster CPU, much more RAM, and a much faster hard disk.

    Situation: One day we had a huge surge in traffic. The transaction log growth on SQL Server froze the server for tens of seconds at a time, allowing only a few requests through per minute. So we loaded up the new server with ASPState with very large data and log files and pointed all of our applications to the new server. It chugged along fine for about 5 minutes, then CPU usage jumped to 50% of the 16 cores that Standard Edition can use and it froze for tens of seconds at a time. The files do not record any autogrowth events, the disk queue is nice and low, and RAM usage is low. CPU usage on our old server has never been higher than 5%. What happened on the new server?

    Alternatively, I would like to hear success stories with an ASP.NET session state server running on SQL Server 2008 R2 with an average write load of 30 MB/sec and bursts up to 200 MB/sec.

    Read the article

  • Splunk is fantastically expensive: What are the alternatives?

    - by samsmith
    This has been discussed before, but it has been several months, so it may be time to revisit it: Earlier discussion RE Splunk alternatives. For the record, Splunk rocks, but the pricing is simply beyond what we can consider (when I spoke with Splunk today, the cost for a system to index 5 GB/day of data is over $30,000). That is more than we spend on SQL Server (by a large multiple), more than we spend on a rack of servers (by a multiple), etc. The Splunk sales team is correct that for $30K we get more value and functionality than if we spent the same building our own system, but it doesn't matter - the Splunk cost is simply too high (by a multiple). So, we are looking around! Is anyone out there building a Splunk-like system? Our basic needs:
    - Able to listen for syslog messages on multiple UDP ports
    - Able to index the incoming data asynchronously
    - Some kind of search engine
    - Some kind of UI
    - An API to the search engine (to embed in our console)
    We currently need to index 3-5 GB/day, but need to be able to scale to 10 GB/day or more. We do not need a lot of history (30 days is fine). We use Windows 2008 and 2003 servers. Thanks for your thoughts!
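    For a sense of scale, the first two requirements in the list above are the easy part; the indexing and search are where the real work is. A minimal sketch of a multi-port UDP syslog listener (the port numbers are placeholders); a real collector would queue each datagram for asynchronous indexing rather than print it:

    #!/usr/bin/env python
    # Minimal multi-port UDP syslog listener; the ports are placeholders.
    # A real collector would hand each message to an async indexer instead
    # of printing it.
    import select
    import socket

    PORTS = [514, 1514, 5140]

    sockets = []
    for port in PORTS:
        s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        s.bind(("0.0.0.0", port))
        sockets.append(s)

    while True:
        readable, _, _ = select.select(sockets, [], [])
        for s in readable:
            data, (src_ip, src_port) = s.recvfrom(8192)
            # Syslog over UDP arrives as one datagram per event.
            print("%s:%d %s" % (src_ip, src_port,
                                data.decode("utf-8", "replace").rstrip()))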

    Read the article

  • Hibernate between OS X and Bootcamp Win 7

    - by Willem
    Wouldn't it be great if someone wrote a guide or an app that allowed you to switch instantly between OS X and Windows using hibernate in both OSes? Windows 7 already has a "Hibernate" option that lets you boot back into your OS X partition, but OS X does not offer exactly the same thing. However, there are possibilities here. It seems that recent Macs have three different kinds of sleep mode:
    - Sleep: low power consumption, RAM still active.
    - Legacy Safe Sleep: no power consumption(?), writes RAM to disk and shuts down (is this the same as hibernate?)
    - Safe Sleep: writes RAM to disk and enters sleep mode; if the battery level drops too low it goes into hibernate (is this hibernate the same as #2 in this list? This is the hibernate I will be referring to in the rest of this post.)
    It seems that I am unable to force my MacBook Pro (Late 2011) running OS X 10.7.3 into a true hibernate using either the command line or the apps that are supposed to do this. I believe the Mac should show that white loading bar while waking up if it was truly put into hibernate (which it does not). But I can get this white bar to show by letting my battery level drop to 0%, so there is obviously a system function for it (obviously, duh! :). When Win 7 goes into hibernate it shuts down completely and you can then boot into OS X on startup. On OS X, however, hibernate forces you to wake up into OS X. Can you hack this so that you're allowed to select the boot partition after OS X hibernates? Would it be possible to use the true hibernate functionality of Win 7 and OS X to create a kind of instant switching between the two? Imagine this on a quick SATA-3 SSD like my 180 GB Intel 520. Thanks / Willem

    Read the article

  • Looking to get a small server – need web, PHP, PostgreSQL.

    - by Javawag
    Hi all! I'm looking to get a cheap (low-end) server to serve web pages (XHTML/PHP), but I also need to be able to set up PostgreSQL on the system too. Ideally the server would have low power consumption, run Linux (I prefer Mac OS X, but a Mac Mini, although it's the size I'm looking for, is too much money!) and be around £100 (~$160 US). EDIT: Just to make it clearer, I'm looking to purchase the server hardware myself – but I want something about Mac Mini sized. I don't want to pay for hosting! Also, a quick question – if it's to serve web pages from my home (standard ISP connection, no static IP!), what do I need in place to get this working? I'm guessing I would sign up with some service like No-IP and register a domain to point to my No-IP address (then install the No-IP software on the server to update that with the current IP). I know the idea of running a server behind a normal ISP connection isn't very elegant, but I'd prefer to have the server where I can see it than pay over the odds for a hosting service where I have little to no control over what happens. Also, I could write my own server software for apps etc. to connect to as well. Anyway, I'm rambling! What do you guys think?! Javawag

    Read the article

  • High fan speed with no reason

    - by Klaus
    For a few weeks, the fans of my Lenovo B590 laptop, running Xubuntu 14, have been spinning up to high speed a few minutes after it is turned on. The fans won't slow down until I turn the computer off. This is quite strange, since:
    - This didn't happen before.
    - The temperatures are quite low (are they?):
    $ sensors
    Adapter: Virtual device
    temp1: +36.0°C (crit = +88.0°C)
    temp2: +30.0°C (crit = +126.0°C)
    coretemp-isa-0000
    Adapter: ISA adapter
    Physical id 0: +37.0°C (high = +72.0°C, crit = +90.0°C)
    Core 0: +34.0°C (high = +72.0°C, crit = +90.0°C)
    Core 1: +31.0°C (high = +72.0°C, crit = +90.0°C)
    thinkpad-isa-0000
    Adapter: ISA adapter
    fan1: 0 RPM
    pkg-temp-0-virtual-0
    Adapter: Virtual device
    temp1: +37.0°C
    $ sudo hddtemp /dev/sda
    /dev/sda: ST500LT012-9WS142: 33°C
    - The computer is under low load:
    top - 08:30:15 up 16 min, 2 users, load average: 0.28, 0.23, 0.23
    Tasks: 197 total, 1 running, 196 sleeping, 0 stopped, 0 zombie
    %Cpu(s): 0.8 us, 0.5 sy, 0.0 ni, 98.7 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
    KiB Mem: 3607944 total, 1973956 used, 1633988 free, 99660 buffers
    KiB Swap: 3744764 total, 0 used, 3744764 free. 789936 cached Mem
    - The BIOS is up to date (and there are no fan settings in it).
    - The fan is clean and dust-free.
    Why would the BIOS turn the fans to high speed when there seems to be no reason for it? It seems that we cannot control the fan manually on this model, so I guess the only solution is to understand why this happens.

    Read the article

  • Disk Activity Alert Windows SBS 2003 on Dell PowerEdge 830 with Raid

    - by Ron Whites
    Background: I have a Dell PowerEdge 830 server running Windows SBS 2003. It has 4 GB of RAM and an ATA CERC SATA 6CH controller with three 160 GB drives in a RAID 5 configuration.

    The problem: I am seeing Admin "Disk Activity Alert on Server" emails. These often occur when disk backups, defrag or high disk usage are going on; generally the server isn't overstressed. The Disk Alert emails say, in part:

    The following disk has low idle time, which may cause slow response time when reading or writing files to the disk. Disk: 0 C: F: D: Review the Disk Transfers/sec and % Idle Time counters for the PhysicalDisk performance object. If the Disk Transfers/sec counter is consistently below 150 while the % Idle Time counter remains very low (close to 0), there may be a problem with the disk driver or hardware.

    The questions I have: With what utility can I review the Disk Transfers/sec and Idle Time? It appears there is no utility for that on the server! I think I may need to download a very large (two-DVD) Dell "OpenManage" utility to be able to monitor the RAID system and see what the problem is - is that true?
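    On the first question: the two counters named in the alert are standard PhysicalDisk counters in the Performance console (perfmon.msc), which ships with Windows Server 2003 under Administrative Tools, so no extra download is needed just to view them. If a scriptable check is handier, below is a small sketch using the third-party psutil package (assuming Python and psutil can be installed on the box); it approximates Disk Transfers/sec but does not report % Idle Time:

    #!/usr/bin/env python
    # Samples per-disk transfers/sec (reads + writes) once a second, roughly
    # the PhysicalDisk "Disk Transfers/sec" counter from the alert. Requires
    # the third-party psutil package; the 10-sample run length is arbitrary.
    import time
    import psutil

    prev = psutil.disk_io_counters(perdisk=True)
    for _ in range(10):
        time.sleep(1)
        cur = psutil.disk_io_counters(perdisk=True)
        for disk, io in cur.items():
            delta = (io.read_count + io.write_count) - \
                    (prev[disk].read_count + prev[disk].write_count)
            print("%-14s %6d transfers/sec" % (disk, delta))
        prev = cur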

    Read the article

  • Steps to deploy a custom routing protocol

    - by user134589
    I'm a Ph.D. student researching a service-centric networking architecture with resource allocation on a large scale. What I'm looking to do is extend an existing routing protocol like OSPF with extra fields and some new message types that I need for communication between nodes. I want to manipulate the cost of a network link and have paths calculated as in OSPFv2/v3, but using the cost that my algorithms have calculated.

    What I have: the source code of OSPF from Quagga. I am assuming I can edit this code however I want, including packet structures and creating new types. Yes, I am aware it won't be easy, but this is a six-year research project and I am eager to develop something new, to move forward.

    What I need: I would like to know how I can deploy the edited OSPF source files I have (written in C) on any type of server. I have a large testbed environment available with hundreds of virtual nodes and pretty much any OS out there. So if I want to test my extended protocol, how do I make all the nodes in a network use it to communicate? I do not understand what parts of the kernel I need to edit here. I have been searching for days now and I am unable to find out how to deploy a non-existing routing protocol without the use of an application-level framework. If somebody could push me in the right direction that'd be awesome.

    Note: I need this to be a routing protocol and not an application, since I want it to work on top of the network layer for performance reasons. Thanks!

    Read the article

  • Why does Apache httpd tell me that my name-based VirtualHosts only work with SNI-enabled browsers (RFC 4366)?

    - by Arlukin
    Why does Apache give me this error message in my logs? Is it a false positive?

    [warn] Init: Name-based SSL virtual hosts only work for clients with TLS server name indication support (RFC 4366)

    I recently upgraded from CentOS 5.7 to 6.3, and with that to a newer httpd version. I have always written my SSL VirtualHost configurations as below, where all domains that share the same certificate (mostly/always wildcard certs) share the same IP. But I never got this error message before (or have I? Maybe I haven't looked closely enough in my logs). From what I have learned, this should work without SNI (Server Name Indication).

    Here are the relevant parts of my httpd.conf file. Without this VirtualHost I don't get the error message.

    NameVirtualHost 10.101.0.135:443

    <VirtualHost 10.101.0.135:443>
        ServerName sub1.domain.com
        SSLEngine on
        SSLProtocol -all +SSLv3 +TLSv1
        SSLCipherSuite ALL:!aNull:!EDH:!DH:!ADH:!eNull:!LOW:!EXP:RC4+RSA+SHA1:+HIGH:+MEDIUM
        SSLCertificateFile /opt/RootLive/etc/ssl/ssl.crt/wild.fareoffice.com.crt
        SSLCertificateKeyFile /opt/RootLive/etc/ssl/ssl.key/wild.fareoffice.com.key
        SSLCertificateChainFile /opt/RootLive/etc/ssl/ca/geotrust-ca.pem
    </VirtualHost>

    <VirtualHost 10.101.0.135:443>
        ServerName sub2.domain.com
        SSLEngine on
        SSLProtocol -all +SSLv3 +TLSv1
        SSLCipherSuite ALL:!aNull:!EDH:!DH:!ADH:!eNull:!LOW:!EXP:RC4+RSA+SHA1:+HIGH:+MEDIUM
        SSLCertificateFile /opt/RootLive/etc/ssl/ssl.crt/wild.fareoffice.com.crt
        SSLCertificateKeyFile /opt/RootLive/etc/ssl/ssl.key/wild.fareoffice.com.key
        SSLCertificateChainFile /opt/RootLive/etc/ssl/ca/geotrust-ca.pem
    </VirtualHost>

    Read the article

  • Apache suddenly very slow on http and faster on https

    - by hsnm
    Background: I have Apache 2 running on Ubuntu. There is low usage on it; it is mostly accessed for a web service URL from mobile apps. It was working fine until I installed SSL certificates. I now have both HTTP and HTTPS. When I access the server using HTTPS, I get a fairly quick response (but probably not as fast as before). When I use HTTP, it's very slow.

    What I tried: Following this post, I curled localhost from the host and it takes some time, so it is not a routing issue. The server runs on an Amazon EC2 instance and is managed by me only. Also: I see that Apache, once running, creates the maximum number of processes it is allowed to, which was not the case before. I lowered MaxClients to 20 and I think I'm getting faster responses, but it still takes over a minute and I always have MaxClients Apache processes. dmesg returns many lines like: [ 1953.655703] TCP: Possible SYN flooding on port 80. Sending cookies. When I run netstat I get many entries in SYN_RECV. Possibly a DDoS attack? From EC2's monitoring diagrams I see a pattern of high "Maximum Network In (Bytes)" starting two days ago. By the way, the server is still being tested; the actual traffic is very low and not consistent. I tried this solution to limit incoming connections using iptables - still no luck, but I'm trying.

    Question: What could be the problem? Is this a DDoS attack?
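    One cheap way to check the SYN-flood theory is to count half-open connections directly from /proc/net/tcp, where state code 03 is SYN_RECV. A minimal sketch (Linux only; IPv6 sockets in /proc/net/tcp6 are ignored here):

    #!/usr/bin/env python
    # Counts TCP connections in SYN_RECV (half-open) per local port by reading
    # /proc/net/tcp; state code 03 is SYN_RECV. Linux only.
    from collections import Counter

    SYN_RECV = "03"

    counts = Counter()
    with open("/proc/net/tcp") as f:
        next(f)                                            # skip the header line
        for line in f:
            fields = line.split()
            local_addr, state = fields[1], fields[3]
            if state == SYN_RECV:
                port = int(local_addr.split(":")[1], 16)   # port is hex-encoded
                counts[port] += 1

    for port, n in counts.most_common():
        print("port %d: %d half-open connections" % (port, n))

    A persistently large count on port 80 from many distinct sources would support the flood suspicion, while a small count points back at Apache or backend latency.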

    Read the article

  • How to fix high Load_Cycle_Count laptop drive (TOSHIBA MK6006GAH in Vaio TX1XP)?

    - by Sam Brightman
    Hoping someone knows exactly what's going on here. It seems this drive has some combination of aggressive power-saving settings and Ubuntu defaults that has massively increased the Load_Cycle_Count for the drive: https://wiki.ubuntu.com/DanielHahler/Bug59695. The drive is now so slow that it cannot boot, because it takes long enough to access the data that the kernel will not recognise it properly. I'm not worried about the data on the drive, but would really like to keep the laptop functioning. There is some indication that this is possible, because the figure is still in the low 200,000s and most drives supposedly go to 600,000. Additionally, SMART tests pass and consider the drive healthy and without errors.

    But the really surprising thing was when I ran MHDD: every single read came up red (slow) until I pressed 'R' for "reset drive". I noticed the next read was normal speed, so I held down 'R'. Magically the drive read perfectly for as long as I held the key, BUT resumed slow (and noisy) seeking/reading after I released it. I don't think the source code to MHDD is available, so I'm not exactly sure what this means (besides, I don't know enough low-level HDD stuff either). It seems like the drive should be able to work, but is stuck trying to power save or something. There are no BIOS options on the laptop.

    Does anyone know how I can stop the drive from doing extremely slow/noisy operations like this? Or is constantly resetting the drive also damaging, and only causing it to work well by luck (i.e. not a suggestion that it's fixable)?
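    If the aggressive power saving is the drive's APM level parking the heads (which is what the linked Ubuntu wiki page discusses), the usual workaround is to relax that setting with hdparm. A hedged sketch, assuming hdparm and smartctl are installed and the drive is reachable from a live system; /dev/sda is a placeholder, and the -B value does not survive a power cycle, so it would need to be reapplied at boot:

    #!/usr/bin/env python
    # Reads the current Load_Cycle_Count with smartctl, then relaxes the APM
    # level so the drive stops aggressively parking its heads. DEVICE is a
    # placeholder; hdparm -B settings are lost on the next power cycle.
    import subprocess

    DEVICE = "/dev/sda"

    def load_cycle_count():
        out = subprocess.run(["smartctl", "-A", DEVICE],
                             capture_output=True, text=True).stdout
        for line in out.splitlines():
            if "Load_Cycle_Count" in line:
                return int(line.split()[-1])   # RAW_VALUE is the last column
        return None

    print("Load_Cycle_Count before:", load_cycle_count())

    # 254 is the highest APM level short of disabling APM entirely (255);
    # either stops the frequent head parking that inflates Load_Cycle_Count.
    subprocess.run(["hdparm", "-B", "254", DEVICE], check=True)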

    Read the article

  • Can applications use all of the memory in Windows 8?

    - by Barleyman
    Windows 7 (and Windows Vista) have a built-in limit of not being able to use the last 25% of RAM. You will get a low memory warning when you get close to the limit. Even if you disable that warning, applications will run out of memory and crash since the OS will refuse to allocate memory from that last 25%. That was fine when Vista was designed, when machines had 1 GB of total memory, but is pretty daft for today's 8 GB machines. Yes, the system will run cache, etc. on that extra 2 GB, but running out of memory when you have "merely" 2 GB left.... NB: this has nothing to do with the page file. If you limit the page file to a sensible size like 2 GB, you will still see this behavior. The system will cram the page file to the last byte while refusing to touch that 1/4th of the RAM. Does Windows 8 change this behavior? Is there now some fixed minimum free RAM requirement, like 512 MB, or is it still 25%? Can you actually adjust the low memory limit?

    Read the article

  • Tuning up a MySQL server

    - by NinjaCat
    I inherited a MySQL server, so I've started by running the MySQLTuner.pl script. I am not a MySQL expert, but I can see that there is definitely a mess here. I'm not looking to go after every single thing that needs fixing and tuning, but I do want to grab the major, low-hanging fruit. Total memory on the system is 512 MB. Yes, I know it's low, but it's what we have for the time being. Here's what the script had to say:

    General recommendations:
    - Run OPTIMIZE TABLE to defragment tables for better performance
    - MySQL started within last 24 hours - recommendations may be inaccurate
    - Enable the slow query log to troubleshoot bad queries
    - When making adjustments, make tmp_table_size/max_heap_table_size equal
    - Reduce your SELECT DISTINCT queries without LIMIT clauses
    - Increase table_cache gradually to avoid file descriptor limits
    - Your applications are not closing MySQL connections properly

    Variables to adjust:
    - query_cache_limit (> 1M, or use smaller result sets)
    - tmp_table_size (> 16M)
    - max_heap_table_size (> 16M)
    - table_cache (> 64)
    - innodb_buffer_pool_size (>= 326M)

    For the variables that it recommends I adjust, I don't even see most of them in the mysql.cnf file:

    [client]
    port = 3306
    socket = /var/run/mysqld/mysqld.sock

    [mysqld_safe]
    socket = /var/run/mysqld/mysqld.sock
    nice = 0

    [mysqld]
    innodb_buffer_pool_size = 220M
    innodb_flush_log_at_trx_commit = 2
    innodb_file_per_table = 1
    innodb_thread_concurrency = 32
    skip-locking
    big-tables
    max_connections = 50
    innodb_lock_wait_timeout = 600
    slave_transaction_retries = 10
    innodb_table_locks = 0
    innodb_additional_mem_pool_size = 20M
    user = mysql
    socket = /var/run/mysqld/mysqld.sock
    port = 3306
    basedir = /usr
    datadir = /var/lib/mysql
    tmpdir = /tmp
    skip-external-locking
    bind-address = localhost
    key_buffer = 16M
    max_allowed_packet = 16M
    thread_stack = 192K
    thread_cache_size = 4
    myisam-recover = BACKUP
    query_cache_limit = 1M
    query_cache_size = 16M
    log_error = /var/log/mysql/error.log
    expire_logs_days = 10
    max_binlog_size = 100M
    skip-locking
    innodb_file_per_table = 1
    big-tables

    [mysqldump]
    quick
    quote-names
    max_allowed_packet = 16M

    [mysql]

    [isamchk]
    key_buffer = 16M

    !includedir /etc/mysql/conf.d/

    Read the article

  • YouTube has no voice but the music continues just fine?

    - by Prix
    PC config: Gigabyte EP45C UD3R with the onboard Realtek HD audio, 4 GB dual-channel RAM, quad-core 2.83 GHz.

    When I watch videos on YouTube now, the voice is sometimes static and sometimes so low that you can barely hear it, while the rest of the sound continues just fine. For example, I can hear things like a guitar or a train, but the voice of whoever is speaking is gone, very low, or pure static when watching the videos. I know some videos have really great quality and some are HD 1080p, so this was not something I expected to happen. I can play videos in WMP11 just fine. I have CCCP installed and also tried K-Lite, both on the latest stable available. I haven't tried anything else related to Flash, but something is wrong with either my drivers or YouTube. I have installed the latest drivers to make sure they are up to date, but this didn't help either.

    What I have tried so far:
    - removed the audio drivers and re-installed
    - removed any codec pack I had and re-installed K-Lite; tested, didn't work
    - removed any codec pack I had and re-installed CCCP; tested, didn't work
    - checked the Control Panel sound configuration; tried changing to phone stereo and to 5.1, which is what my headphone is
    - checked the Realtek manager; tried changing the sound channels from 2CH to 6CH to reflect my headphone - didn't work
    - rebooted after every change of the above tries
    - tried Chrome, Firefox and Internet Explorer with the same results

    Read the article

  • Replicated filesystem and EC2 MySQL

    - by El Yobo
    I'm currently investigating migrating our infrastructure over to Amazon's EC2 and am trying to figure out the best way to set up a MySQL service. I'm leaning towards running our own MySQL instances rather than going with Amazon's RDS, but am still considering the best approach for performance and cost on the instance itself. In order to have persistent data, the MySQL data needs to be on an EBS volume (with some form of striped RAID, e.g. RAID0 or RAID10) to improve persistence. However, EBS IO is limited by the network interface (gigabit, so a theoretical maximum of 128 MB/s), while the ephemeral volumes have no such problem. I did see a suggestion for running two MySQL servers on an instance, with a master running on the ephemeral disk (which we would also RAID) and a slave storing changes to an EBS volume, but this has some additional overhead and complexity (two servers).

    What I was imagining is using some form of replicated file system such that I could have:
    - a filesystem on top of a RAID0 of ephemeral volumes to maximise performance
    - all changes from the above immediately replicated to another RAID1 volume backed by multiple EBS volumes to ensure no data loss

    The advantages of this would be:
    - best possible IO performance for the DB server, with no network delay in IO
    - decreased IO on EBS volumes (as all read IO will be done on the ephemeral volumes) and so decreased cost
    - good data security, as it's backed onto redundant EBS volumes

    However, I haven't seen an appropriate system to replicate all changes from one volume to the other; is there a filesystem, or any other approach, which will do this? The distributed file systems, e.g. GlusterFS, DRBD, etc., seem to focus on replicating disks between servers - can they be set up to do what I'm interested in here? I also haven't seen anything about others taking this approach. Do I have a solution in need of a problem here (i.e. is performance good enough, so this whole idea is redundant)? Is there some flaw in the plan?

    Read the article

  • Linux CentOS 6 becomes unavailable from time to time - OS & network issue

    - by adoado0
    I am encountering the following problem. There is one server (DL160 G5) running CentOS 6.3 with the default kernel 2.6.32-220.2.1.el6.x86_64 - at this point I'd like to add that the issue also appeared on an older version, 6.1, with an older kernel (I do not remember exactly which version). cPanel is installed, and from time to time the server becomes unavailable over the network. What I've checked (via KVMoIP):
    - load average is completely normal
    - it does not lack memory or disk space
    - when the problem occurs there are no console notifications
    - checked all access logs and there is no sign that it could be caused by a client script
    - cannot even access the local interface (127.0.0.1) or the main IP address
    - running tcpdump I can only see packets arriving at the server - no responses
    - all services seem to be running properly (mail, SQL, HTTP, SSH)
    - checked crontab and all clients' crontabs too
    - network port utilisation is low (up to several Mbit/s)
    - arriving packet rate is low - hundreds per second (according to tcpdump)
    - the console (via KVMoIP) works fine, no lags
    - there is no conntrack on this server
    - there is no IPv6 on this server
    - flushing iptables and unloading modules does not resolve the problem
    - restarting the network does not resolve the problem, and no errors appear
    - it occurs both when two separate networks (and multiple gateways) are configured and when just one IP, one default gateway and one network are configured - so it seems independent of network configuration
    - it seems to repeat randomly (independent of load, packet rate or bandwidth usage)
    - checked the server with different rootkit detection tools - it seems to be clean
    - the server has been rebooted; it did not change anything
    - there are no interface errors
    - it appears randomly, anywhere from once a week to several times per day
    It usually works fine again after 1-15 minutes. What else can I check? It is definitely an OS issue - when the problem occurs there is traffic on the interface in only one direction, and I cannot even ping loopback. Any ideas? Recommended checks? Anything I did not check above?

    Read the article

  • Good Hosting Providers With Zend Framework Support [closed]

    - by manyxcxi
    I currently use ixwebhosting for my hosting services. They're cheap and work (most of the time). The databases are horribly slow, the servers are horribly slow, and their support (though usually prompt) is tough to deal with. That being said, they're cheap, I've got around 20 domains hosted in my account, none of them are high volume, and they work just well enough - until today. This isn't meant to be a condemnation of ixwebhosting, though. Their prices are very low for what they offer, and most things work just fine, most of the time. I need to be able to host web apps written with Zend Framework in a fairly easy fashion. The server performance can't be worse than what I've already had (a pretty low hurdle to clear), and I don't want to spend $30/mo. These are not money-making websites - they're projects. My requirements are PHP 5.3, ZF support, MySQL databases, and multiple domains - not much. Who should I look at, and who should I look out for?

    Read the article

  • SQL Server: Network pauses after installing cheap SATA card: Is there a solution?

    - by samsmith
    At the risk of being assigned to the "bad DBA" club... I did something desperate, and may have to undo it.

    Problem: After installing a low-cost eSATA board, my SQL Server is intermittently unresponsive (seemingly when there is a lot of IO to the eSATA drive).

    Questions: 1) Is there a solution to the intermittent unresponsiveness that allows me to keep the eSATA in place? 2) Whether or not (1 == true): what is a decent, low-cost way to add 1-3 TB of storage to SQL Server for non-critical DBs?

    Detail: Our SAN is full, and expanding it is costly and will take a month. I have a pressing need to add 1-3 TB for some development DBs (i.e. not mission critical; data loss is OK). As a band-aid, I threw a $20 eSATA PCI board into the Dell 1950 server and attached an external 2 TB eSATA drive. This seemed to work fine, but I notice that our production SQL DBs, and even remote desktop, now experience network "pauses" that they never did before (with both SQL client apps and remote desktop throwing "networking problem" errors). This SQL Server has lots of memory and runs an instance of SQL 2005 (where all line-of-business apps reside) and an instance of SQL 2008 (for development DBs). SQL Server RAM has been appropriately configured, and this setup has run great for years. The server is:
    - Dell 1950
    - Win2003 x64
    - 14 GB RAM
    - PERC controller, 2 mirrored internal HDs
    - Dell SAN over gigabit Ethernet, dual homed
    - 2 PCIx slots (1 used by the NIC for the SAN, 1 now in use for the eSATA board)
    Thank you for suggestions!

    Read the article

  • Python 2.7 and Ubuntu 10.10: X11 fails

    - by c.a.p.
    Against every recommendation, I apt-get removed python on Ubuntu 10.10 (build 2.6.24-29-server x86_64) and apt-get installed python2.7. Some built-in software with python2.6 dependencies (like Firefox and other minor stuff) was fixed just by apt-get reinstalling or reinstalling from source, and the system was stable for one day, until I rebooted. Upon booting I got the message "ubuntu is running in low-graphics mode". I have an NVIDIA Quadro NVS 295 256 running in an HP Z600 workstation, so I sudoed:
    apt-get --purged nvidia*
    apt-get --purged xserver
    apt-get install linux-headers-generic
    apt-get install nvidia-*
    apt-get install xserver-xorg
    dpkg-reconfigure xserver-xorg
    nvidia-xconfig
    Upon rebooting I get the same "ubuntu is running in low-graphics mode". If I tell the boot menu to restart X, the Ubuntu 10.10 "load screen" shows up and does not do anything for hours. If I run "X", the screen remains black for hours; Ctrl-C shows that /etc/X11/xorg.conf did not produce any errors, just a warning "Type "ONE_LEVEL"...". /var/log/Xorg.0.log issues the following warnings (no errors):
    (WW) AllowEmptyInput is on
    (WW) NVIDIA(0): UBB is incompatible
    If I run "startX" I get:
    /usr/bin/python: can't find '__main__.py' in '/usr/share/command-not-found'
    Before re-installing xserver-xorg, however, "startX" ran, just by changing the resolution of the tty1 console. Any hints?

    Read the article

  • Windows Azure: Import/Export Hard Drives, VM ACLs, Web Sockets, Remote Debugging, Continuous Delivery, New Relic, Billing Alerts and More

    - by ScottGu
    Two weeks ago we released a giant set of improvements to Windows Azure, as well as a significant update of the Windows Azure SDK. This morning we released another massive set of enhancements to Windows Azure.  Today’s new capabilities include: Storage: Import/Export Hard Disk Drives to your Storage Accounts HDInsight: General Availability of our Hadoop Service in the cloud Virtual Machines: New VM Gallery, ACL support for VIPs Web Sites: WebSocket and Remote Debugging Support Notification Hubs: Segmented customer push notification support with tag expressions TFS & GIT: Continuous Delivery Support for Web Sites + Cloud Services Developer Analytics: New Relic support for Web Sites + Mobile Services Service Bus: Support for partitioned queues and topics Billing: New Billing Alert Service that sends emails notifications when your bill hits a threshold you define All of these improvements are now available to use immediately (note that some features are still in preview).  Below are more details about them. Storage: Import/Export Hard Disk Drives to Windows Azure I am excited to announce the preview of our new Windows Azure Import/Export Service! The Windows Azure Import/Export Service enables you to move large amounts of on-premises data into and out of your Windows Azure Storage accounts. It does this by enabling you to securely ship hard disk drives directly to our Windows Azure data centers. Once we receive the drives we’ll automatically transfer the data to or from your Windows Azure Storage account.  This enables you to import or export massive amounts of data more quickly and cost effectively (and not be constrained by available network bandwidth). Encrypted Transport Our Import/Export service provides built-in support for BitLocker disk encryption – which enables you to securely encrypt data on the hard drives before you send it, and not have to worry about it being compromised even if the disk is lost/stolen in transit (since the content on the transported hard drives is completely encrypted and you are the only one who has the key to it).  The drive preparation tool we are shipping today makes setting up bitlocker encryption on these hard drives easy. How to Import/Export your first Hard Drive of Data You can read our Getting Started Guide to learn more about how to begin using the import/export service.  You can create import and export jobs via the Windows Azure Management Portal as well as programmatically using our Server Management APIs. It is really easy to create a new import or export job using the Windows Azure Management Portal.  Simply navigate to a Windows Azure storage account, and then click the new Import/Export tab now available within it (note: if you don’t have this tab make sure to sign-up for the Import/Export preview): Then click the “Create Import Job” or “Create Export Job” commands at the bottom of it.  This will launch a wizard that easily walks you through the steps required: For more comprehensive information about Import/Export, refer to Windows Azure Storage team blog.  You can also send questions and comments to the [email protected] email address. We think you’ll find this new service makes it much easier to move data into and out of Windows Azure, and it will dramatically cut down the network bandwidth required when working on large data migration projects.  We hope you like it. HDInsight: 100% Compatible Hadoop Service in the Cloud Last week we announced the general availability release of Windows Azure HDInsight. 
HDInsight is a 100% compatible Hadoop service that allows you to easily provision and manage Hadoop clusters for big data processing in Windows Azure.  This release is now live in production, backed by an enterprise SLA, supported 24x7 by Microsoft Support, and is ready to use for production scenarios. HDInsight allows you to use Apache Hadoop tools, such as Pig and Hive, to process large amounts of data in Windows Azure Blob Storage. Because data is stored in Windows Azure Blob Storage, you can choose to dynamically create Hadoop clusters only when you need them, and then shut them down when they are no longer required (since you pay only for the time the Hadoop cluster instances are running this provides a super cost effective way to use them).  You can create Hadoop clusters using either the Windows Azure Management Portal (see below) or using our PowerShell and Cross Platform Command line tools: The import/export hard drive support that came out today is a perfect companion service to use with HDInsight – the combination allows you to easily ingest, process and optionally export a limitless amount of data.  We’ve also integrated HDInsight with our Business Intelligence tools, so users can leverage familiar tools like Excel in order to analyze the output of jobs.  You can find out more about how to get started with HDInsight here. Virtual Machines: VM Gallery Enhancements Today’s update of Windows Azure brings with it a new Virtual Machine gallery that you can use to create new VMs in the cloud.  You can launch the gallery by doing New->Compute->Virtual Machine->From Gallery within the Windows Azure Management Portal: The new Virtual Machine Gallery includes some nice enhancements that make it even easier to use: Search: You can now easily search and filter images using the search box in the top-right of the dialog.  For example, simply type “SQL” and we’ll filter to show those images in the gallery that contain that substring. Category Tree-view: Each month we add more built-in VM images to the gallery.  You can continue to browse these using the “All” view within the VM Gallery – or now quickly filter them using the category tree-view on the left-hand side of the dialog.  For example, by selecting “Oracle” in the tree-view you can now quickly filter to see the official Oracle supplied images. MSDN and Supported checkboxes: With today’s update we are also introducing filters that makes it easy to filter out types of images that you may not be interested in. The first checkbox is MSDN: using this filter you can exclude any image that is not part of the Windows Azure benefits for MSDN subscribers (which have highly discounted pricing - you can learn more about the MSDN pricing here). The second checkbox is Supported: this filter will exclude any image that contains prerelease software, so you can feel confident that the software you choose to deploy is fully supported by Windows Azure and our partners. Sort options: We sort gallery images by what we think customers are most interested in, but sometimes you might want to sort using different views. So we’re providing some additional sort options, like “Newest,” to customize the image list for what suits you best. Pricing information: We now provide additional pricing information about images and options on how to cost effectively run them directly within the VM Gallery. The above improvements make it even easier to use the VM Gallery and quickly create launch and run Virtual Machines in the cloud. 
Virtual Machines: ACL Support for VIPs A few months ago we exposed the ability to configure Access Control Lists (ACLs) for Virtual Machines using Windows PowerShell cmdlets and our Service Management API. With today’s release, you can now configure VM ACLs using the Windows Azure Management Portal as well. You can now do this by clicking the new Manage ACL command in the Endpoints tab of a virtual machine instance: This will enable you to configure an ordered list of permit and deny rules to scope the traffic that can access your VM’s network endpoints. For example, if you were on a virtual network, you could limit RDP access to a Windows Azure virtual machine to only a few computers attached to your enterprise. Or if you weren’t on a virtual network you could alternatively limit traffic from public IPs that can access your workloads: Here is the default behaviors for ACLs in Windows Azure: By default (i.e. no rules specified), all traffic is permitted. When using only Permit rules, all other traffic is denied. When using only Deny rules, all other traffic is permitted. When there is a combination of Permit and Deny rules, all other traffic is denied. Lastly, remember that configuring endpoints does not automatically configure them within the VM if it also has firewall rules enabled at the OS level.  So if you create an endpoint using the Windows Azure Management Portal, Windows PowerShell, or REST API, be sure to also configure your guest VM firewall appropriately as well. Web Sites: Web Sockets Support With today’s release you can now use Web Sockets with Windows Azure Web Sites.  This feature enables you to easily integrate real-time communication scenarios within your web based applications, and is available at no extra charge (it even works with the free tier).  Higher level programming libraries like SignalR and socket.io are also now supported with it. You can enable Web Sockets support on a web site by navigating to the Configure tab of a Web Site, and by toggling Web Sockets support to “on”: Once Web Sockets is enabled you can start to integrate some really cool scenarios into your web applications.  Check out the new SignalR documentation hub on www.asp.net to learn more about some of the awesome scenarios you can do with it. Web Sites: Remote Debugging Support The Windows Azure SDK 2.2 we released two weeks ago introduced remote debugging support for Windows Azure Cloud Services. With today’s Windows Azure release we are extending this remote debugging support to also work with Windows Azure Web Sites. With live, remote debugging support inside of Visual Studio, you are able to have more visibility than ever before into how your code is operating live in Windows Azure. It is now super easy to attach the debugger and quickly see what is going on with your application in the cloud. Remote Debugging of a Windows Azure Web Site using VS 2013 Enabling the remote debugging of a Windows Azure Web Site using VS 2013 is really easy.  Start by opening up your web application’s project within Visual Studio. Then navigate to the “Server Explorer” tab within Visual Studio, and click on the deployed web-site you want to debug that is running within Windows Azure using the Windows Azure->Web Sites node in the Server Explorer.  Then right-click and choose the “Attach Debugger” option on it: When you do this Visual Studio will remotely attach the debugger to the Web Site running within Windows Azure.  
The debugger will then stop the web site’s execution when it hits any breakpoints that you have set within your web application’s project inside Visual Studio. For example, below I set a breakpoint on the “ViewBag.Message” assignment statement within the HomeController of the standard ASP.NET MVC project template. When I hit refresh on the “About” page of the web site within the browser, the breakpoint was triggered and I am now able to debug the app remotely using Visual Studio.

Note above how we can inspect variables (including the Autos and Watch windows), as well as use the Immediate and Command windows. In the debug session above I used the Immediate window to explore some of the request object state, as well as to dynamically change the ViewBag.Message property. When we click the “Continue” button (or press F5) the app will continue execution and the Web Site will render the content back to the browser. This makes it super easy to debug web apps remotely.

Tips for Better Debugging

To get the best experience while debugging, we recommend publishing your site using the Debug configuration within Visual Studio’s Web Publish dialog. This ensures that debug symbol information is uploaded to the Web Site, which enables a richer debugging experience within Visual Studio. You can find this option on the Settings tab of the Web Publish dialog.

When you ultimately deploy/run the application in production we recommend using the “Release” configuration setting – the release build is optimized and will provide the best production performance. To learn more about diagnosing and debugging Windows Azure Web Sites, read our new Troubleshooting Windows Azure Web Sites in Visual Studio guide.

Notification Hubs: Segmented Push Notification support with tag expressions

In August we announced the General Availability of Windows Azure Notification Hubs – a powerful mobile push notification service that makes it easy to send high-volume push notifications with low latency from any mobile app back-end. Notification Hubs can be used with any mobile app back-end (including ones built using our Mobile Services capability) and can also be used with back-ends that run in the cloud as well as on-premises.

Beginning with the initial release, Notification Hubs allowed developers to send personalized push notifications to both individual users and groups of users by interest, by associating their devices with tags representing the logical target of the notification. For example, by registering all devices of customers interested in a favorite MLB team with a corresponding tag, it is possible to broadcast one message to millions of Boston Red Sox fans and another message to millions of St. Louis Cardinals fans, each with a single API call.

New support for using tag expressions to enable advanced customer segmentation

With today’s release we are adding support for even more advanced customer targeting. You can now identify customers that you want to send push notifications to by defining rich tag expressions. With tag expressions, you can not only broadcast notifications to Boston Red Sox fans, but take that segmentation a step further and reach more granular segments. This opens up a variety of scenarios, for example:

- Offers based on multiple preferences: e.g. send a game-day vegetarian special to users tagged as both a Boston Red Sox fan AND a vegetarian
- Push content to multiple segments in a single message: e.g.
rain delay information only to users who are tagged as either a Boston Red Sox fan OR a St. Louis Cardinals fan
- Avoid presenting subsets of a segment with irrelevant content: e.g. send a season ticket availability reminder to users who are tagged as a Boston Red Sox fan but NOT also a season ticket holder

To illustrate with code, consider a restaurant chain app that sends an offer related to a Red Sox vs Cardinals game to users in Boston. Devices can be tagged by your app with location tags (e.g. “Loc:Boston”) and interest tags (e.g. “Follows:RedSox”, “Follows:Cardinals”), and then a notification can be sent by your back-end to “(Follows:RedSox || Follows:Cardinals) && Loc:Boston” in order to deliver an offer to all devices in Boston that follow either the Red Sox or the Cardinals. This can be done directly in your server back-end send logic using the code below:

var notification = new WindowsNotification(messagePayload);
hub.SendNotificationAsync(notification, "(Follows:RedSox || Follows:Cardinals) && Loc:Boston");

In your expressions you can use all Boolean operators: AND (&&), OR (||), and NOT (!). Some other cool use cases for tag expressions that are now supported include:

- Social: to “all my group except me” – group:id && !user:id
- Events: a touchdown event is sent to everybody following either team or any of the players involved in the action: Followteam:A || Followteam:B || followplayer:1 || followplayer:2 …
- Hours: send notifications at specific times. E.g. tag devices with time zone and, when it is 12pm in Seattle, send to: GMT8 && follows:thaifood
- Versions and platforms: send a reminder to people still using your first version for Android – version:1.0 && platform:Android

For help getting started with Notification Hubs, visit the Notification Hubs documentation center. Then download the latest NuGet package (or use the Notification Hubs REST APIs directly) to start sending push notifications using tag expressions. They are really powerful and enable a bunch of great new scenarios.

TFS & GIT: Continuous Delivery Support for Web Sites + Cloud Services

With today’s Windows Azure release we are making it really easy to enable continuous delivery with Windows Azure and Team Foundation Services. Team Foundation Services is a cloud-based offering from Microsoft that provides integrated source control (with both TFS and Git support), a build server, test execution, collaboration tools, and agile planning support. It makes it really easy to set up a team project (complete with automated builds and test runners) in the cloud, and it has really rich integration with Visual Studio.

With today’s Windows Azure release it is now really easy to enable continuous delivery with both TFS and Git based repositories hosted using Team Foundation Services. This enables a workflow where, when code is checked in, builds successfully on an automated build server, and passes all of its tests, the app is automatically deployed to Windows Azure with zero manual intervention or work required.

The screenshots below demonstrate how to quickly set up a continuous delivery workflow to Windows Azure with a Git-based ASP.NET MVC project hosted using Team Foundation Services.

Enabling Continuous Delivery to Windows Azure with Team Foundation Services

The project I’m going to enable continuous delivery with is a simple ASP.NET MVC project whose source code I’m hosting using Team Foundation Services.
I did this by creating a “SimpleContinuousDeploymentTest” repository there using Git, and then used the new built-in Git tooling support within Visual Studio 2013 to push the source code to it. Below is a screenshot of the Git repository hosted within Team Foundation Services.

I can access the repository within Visual Studio 2013 and easily make commits with it (as well as branch, merge, and do other tasks). Using VS 2013 I can also set up automated builds to take place in the cloud using Team Foundation Services every time someone checks in code to the repository.

The cool thing about this is that I don’t have to buy or rent my own build server – Team Foundation Services automatically maintains its own build server farm and can automatically queue up a build for me (for free) every time someone checks in code using the above settings. This build server (and automated testing) support now works with both TFS and Git based source control repositories.

Connecting a Team Foundation Services project to Windows Azure

Once I have a source repository hosted in Team Foundation Services with automated builds and testing set up, I can go even further and set it up so that the app is automatically deployed to Windows Azure when a source code commit is made to the repository (assuming the build and tests pass). Enabling this is now really easy.

To set this up with a Windows Azure Web Site, simply use the New->Compute->Web Site->Custom Create command inside the Windows Azure Management Portal. This will bring up a dialog like the one below. I gave the web site a name and then made sure the “Publish from source control” checkbox was selected.

When we click next we’ll be prompted for the location of the source repository. We’ll select “Team Foundation Services”.

Once we do this we’ll be prompted for the Team Foundation Services account that our source repository is hosted under (in this case my TFS account is “scottguthrie”).

When we click the “Authorize Now” button we’ll be prompted to give Windows Azure permission to connect to the Team Foundation Services account. Once we do this we’ll be prompted to pick the source repository we want to connect to. Starting with today’s Windows Azure release you can now connect to both TFS and Git based source repositories. This new support allows me to connect to the “SimpleContinuousDeploymentTest” repository we created earlier.

Clicking the finish button will then create the Web Site with the continuous delivery hooks set up with Team Foundation Services. Now every time someone pushes code to the repository in Team Foundation Services, it will kick off an automated build, run all of the unit tests in the solution, and, if they pass, automatically deploy the app to our Web Site in Windows Azure. You can monitor the history and status of these automated deployments using the Deployments tab within the Web Site.

This enables a really slick continuous delivery workflow that lets you build and deploy apps with very little friction.

Developer Analytics: New Relic support for Web Sites + Mobile Services

With today’s Windows Azure release we are making it really easy to enable developer analytics and monitoring support with both Windows Azure Web Sites and Windows Azure Mobile Services. We are partnering with New Relic, who provide a great developer analytics and app performance monitoring offering, to enable this – and we have updated the Windows Azure Management Portal to make it really easy to configure.
Enabling New Relic with a Windows Azure Web Site

Enabling New Relic support with a Windows Azure Web Site is now really easy. Simply navigate to the Configure tab of a Web Site and scroll down to the “developer analytics” section that is now within it.

Clicking the “add-on” button will display some additional UI. If you don’t already have a New Relic subscription, you can click the “view windows azure store” button to obtain one (note: New Relic has a perpetually free tier, so you can enable it without paying anything).

Clicking the “view windows azure store” button will launch the integrated Windows Azure Store experience we have within the Windows Azure Management Portal. You can use this to browse a variety of great add-on services – including New Relic.

Select “New Relic” within the dialog, then click the next button, and you’ll be able to choose which type of New Relic subscription you wish to purchase. For this demo we’ll simply select the “Free Standard Version” – which does not cost anything and can be used forever.

Once we’ve signed up for our New Relic subscription and added it to our Windows Azure account, we can go back to the Web Site’s Configure tab and choose to use the New Relic add-on with our Windows Azure Web Site. We can do this by simply selecting it from the “add-on” dropdown (it is automatically populated once we have a New Relic subscription in our account). Clicking the “Save” button will then cause the Windows Azure Management Portal to automatically populate all of the needed New Relic configuration settings for our Web Site.

Deploying the New Relic Agent as part of a Web Site

The final step to enable developer analytics using New Relic is to add the New Relic runtime agent to our web app. We can do this within Visual Studio by right-clicking on our web project and selecting the “Manage NuGet Packages” context menu command.

This will bring up the NuGet package manager. You can search for “New Relic” within it to find the New Relic agent. Note that there are both 32-bit and 64-bit editions of it – make sure to install the version that matches how your Web Site is running within Windows Azure (note: you can configure your Web Site to run in either 32-bit or 64-bit mode using the Web Site’s Configure tab within the Windows Azure Management Portal).

Once we install the NuGet package we are all set to go. We simply re-publish the web site to Windows Azure and New Relic will automatically start monitoring the application.

Monitoring a Web Site using New Relic

Now that the application has developer analytics support with New Relic enabled, we can launch the New Relic monitoring portal to start monitoring its health. We can do this by clicking on the “Add Ons” tab on the left-hand side of the Windows Azure Management Portal, and then selecting the New Relic add-on we signed up for. The Windows Azure Management Portal will provide some default information about the add-on when we do this. Clicking the “Manage” button in the tray at the bottom will then launch a new browser tab, single sign us into the New Relic monitoring portal associated with our account, and load the New Relic admin tool within it.

We can now see insights into how our app is performing – without having written a single line of monitoring code.
The New Relic service provides a ton of great built-in monitoring features, allowing us to quickly see:

- Performance times (including browser rendering speed) for the overall site and individual pages. You can optionally set alert thresholds that trigger if the speed does not meet a level you specify.
- Information about where in the world your customers are hitting the site from (and how performance varies by region).
- Details on the latency of external services your web apps are using (for example: SQL, Storage, Twitter, etc.).
- Error information, including call stack details for exceptions that have occurred at runtime.
- SQL Server profiling information – including which queries executed against your database and what their performance was.
- And a whole bunch more…

The cool thing about New Relic is that you don’t need to write monitoring code within your application to get all of the above reports (plus a lot more). The New Relic agent automatically enables the CLR profiler within your application and captures the information needed to produce these reports. This makes it super easy to get started and immediately have a rich developer analytics view of your solutions with very little effort.

If you haven’t tried New Relic with Windows Azure yet, I recommend you do so – I think you’ll find it helps you build even better cloud applications. Following the above steps will get you started and give you a really good application monitoring solution in only minutes.

Service Bus: Support for partitioned queues and topics

With today’s release, we are enabling support within Service Bus for partitioned queues and topics. Partitioning allows you to achieve higher message throughput and better availability from your queues and topics. Higher throughput is achieved by backing each partitioned queue or topic with multiple message brokers, and the multiple messaging stores behind them also provide higher availability.

You can create a partitioned queue or topic by simply checking the Enable Partitioning option in the custom create wizard for a Queue or Topic (a short code sketch of doing the same thing with the Service Bus SDK appears just before the Summary below). Read this article to learn more about partitioned queues and topics and how to take advantage of them today.

Billing: New Billing Alert Service

Today’s Windows Azure update introduces a new Billing Alert Service Preview that sends you proactive email notifications when your Windows Azure bill goes above a monetary threshold that you configure. This makes it easier to manage your bill and avoid potential surprises at the end of the month.

With the Billing Alert Service Preview, you can now create email alerts to monitor and manage your monetary credits or your current bill total. To set up an alert, first sign up for the free Billing Alert Service Preview. Then visit the account management page, click on a subscription you have set up, and navigate to the new Alerts tab that is available.

The Alerts tab allows you to set up email alerts that will be sent automatically once a certain threshold is hit. For example, by clicking the “add alert” button I can set up a rule to send myself email any time my Windows Azure bill goes above $100 for the month.

The Billing Alert Service will evolve to support additional aspects of your bill, as well as multiple forms of alerts such as SMS. Try out the new Billing Alert Service Preview today and give us feedback.
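Here is the code sketch promised in the Service Bus section above: a minimal example of creating and using a partitioned queue with the Service Bus SDK (the WindowsAzure.ServiceBus NuGet package) instead of the portal wizard. The connection string and queue name are placeholders.

using Microsoft.ServiceBus;
using Microsoft.ServiceBus.Messaging;

class PartitionedQueueSample
{
    static void Main()
    {
        // Placeholder connection string - use the one from your Service Bus namespace.
        string connectionString =
            "Endpoint=sb://mynamespace.servicebus.windows.net/;SharedAccessKeyName=RootManageSharedAccessKey;SharedAccessKey=<your-key>";

        NamespaceManager namespaceManager = NamespaceManager.CreateFromConnectionString(connectionString);

        // EnablePartitioning is the SDK equivalent of the "Enable Partitioning" checkbox in
        // the portal's custom create wizard: the queue is spread across multiple message
        // brokers and messaging stores.
        QueueDescription description = new QueueDescription("orders")
        {
            EnablePartitioning = true
        };

        if (!namespaceManager.QueueExists(description.Path))
        {
            namespaceManager.CreateQueue(description);
        }

        // Sending and receiving work the same way as with a non-partitioned queue.
        QueueClient client = QueueClient.CreateFromConnectionString(connectionString, description.Path);
        client.Send(new BrokeredMessage("hello"));
        client.Close();
    }
}

Partitioning is chosen when the queue or topic is created, so it is worth deciding up front whether an entity needs the extra throughput and availability.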
Summary

Today’s Windows Azure release enables a ton of great new scenarios, and makes building applications hosted in the cloud even easier.

If you don’t already have a Windows Azure account, you can sign up for a free trial and start using all of the above features today. Then visit the Windows Azure Developer Center to learn more about how to build apps with it.

Hope this helps,

Scott

P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu
