Search Results

Search found 1429 results on 58 pages for 'fan'.

Page 20/58

  • xen + debian network after upgrade squeeze to wheezy

    - by rush
    I've got a Debian + Xen server. After a system upgrade to the stable version the network doesn't come up after boot; every time after a reboot I need to bring it up manually. The network configuration was not changed during the upgrade. Here is /etc/network/interfaces:

        auto lo
        iface lo inet loopback

        auto eth0
        iface eth0 inet static
            address 11.22.33.44
            netmask 255.255.255.0
            gateway 11.22.33.1
            nameserver 8.8.8.8

    After boot, ip r shows no route and eth0 has no IP address. Setting the IP and route manually works fine and the network starts working. These are the network-related messages I've found in dmesg (nothing looks interesting):

        [ 3.894401] ACPI: Fan [FAN3] (off)
        [ 3.894444] ACPI: Fan [FAN4] (off)
        [ 4.178348] e1000e 0000:00:19.0: eth0: (PCI Express:2.5GT/s:Width x1) 00:1e:67:14:66:c9
        [ 4.178351] e1000e 0000:00:19.0: eth0: Intel(R) PRO/1000 Network Connection
        [ 4.178392] e1000e 0000:00:19.0: eth0: MAC: 10, PHY: 11, PBA No: 0100FF-0FF
        [ 4.178413] e1000e 0000:02:00.0: Disabling ASPM L0s L1
        [ 4.178432] xen: registering gsi 16 triggering 0 polarity 1
        --
        [ 4.223667] ata5: DUMMY
        [ 4.223668] ata6: DUMMY
        [ 4.289153] e1000e 0000:02:00.0: eth1: (PCI Express:2.5GT/s:Width x1) 00:1e:67:14:66:c8
        [ 4.289155] e1000e 0000:02:00.0: eth1: Intel(R) PRO/1000 Network Connection
        [ 4.289245] e1000e 0000:02:00.0: eth1: MAC: 3, PHY: 8, PBA No: 1000FF-0FF
        [ 4.506908] usb 1-1: new high-speed USB device number 2 using ehci_hcd
        [ 4.542920] ata2: SATA link up 3.0 Gbps (SStatus 123 SControl 300)
        --
        [ 10.362999] EXT4-fs (dm-23): mounted filesystem with ordered data mode. Opts: (null)
        [ 10.419103] EXT4-fs (dm-3): mounted filesystem with ordered data mode. Opts: (null)
        [ 10.988255] ADDRCONF(NETDEV_UP): eth1: link is not ready
        [ 13.175533] Event-channel device installed.
        [ 13.287555] XENBUS: Unable to read cpu state
        --
        [ 13.288670] XENBUS: Unable to read cpu state
        [ 13.965939] Bridge firewalling registered
        [ 14.134048] e1000e: eth1 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: Rx/Tx
        [ 14.283862] ADDRCONF(NETDEV_UP): peth0: link is not ready
        [ 14.284543] ADDRCONF(NETDEV_CHANGE): eth1: link becomes ready
        [ 17.800627] e1000e: peth0 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: Rx/Tx
        [ 17.801377] ADDRCONF(NETDEV_CHANGE): peth0: link becomes ready
        [ 18.307278] device peth0 entered promiscuous mode
        [ 24.538899] eth1: no IPv6 routers present
        [ 28.570902] peth0: no IPv6 routers present

    I've upgraded two servers and I see this behaviour on both of them. How can I fix this so that the network comes up automatically at boot?
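    For what it's worth, the peth0 lines in that log are what xend's legacy network-bridge script typically produces when it renames the physical eth0 and builds its own bridge, and that script can race with /etc/network/interfaces at boot. Below is a sketch of one commonly suggested arrangement on wheezy: define the bridge statically instead. The bridge name (xenbr0), the addresses, and the assumption that bridge-utils is installed are placeholders and assumptions, not part of the original question.

        # /etc/network/interfaces - sketch only, addresses and bridge name are placeholders
        auto lo
        iface lo inet loopback

        auto xenbr0
        iface xenbr0 inet static
            bridge_ports eth0          # requires the bridge-utils package
            address 11.22.33.44
            netmask 255.255.255.0
            gateway 11.22.33.1

    With the bridge defined here, the (network-script network-bridge) line can usually be commented out of /etc/xen/xend-config.sxp so xend no longer renames eth0 to peth0 at boot, and guests simply attach to xenbr0.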

    Read the article

  • Gaming blew fuse: how to overcome?

    - by George Tomlinson
    I've been gaming for a while now. When playing certain games this PC goes into overdrive: the fan (or fans) gets so busy that it starts to sound like a jet engine, and I have also smelt burning when this has happened. Recently the fuse blew on the 4-socket adapter I was using. On the following thread, about what seems like it could be a related issue, someone said this could be due to the PSU not being strong enough to handle the load (although the person who posted that question did say that blowing a fan on their PC stopped it crashing in their case): http://www.tomshardware.co.uk/answers/id-2047543/gtx-650-overheating-issue.html. This is exactly what they said: Your GPU isn't overheating. 70+ before it would shutdown and cause a restart. Make sure your PSU is strong enough to handle your new system at load and possibly run Memtest to check your RAM (although not BSOD'ing and just shutting down points to the PSU). This (the PSU part) makes more sense to me than it being to do with dust etc, since it seems a more plausible explanation of why the fuse blew. The PC has no problems except when playing certain games, i.e. TERA Rising and WoW with add-ons (I think WoW is ok as long as I don't have more than 1 add-on (Healers Have To Die)). I'm just wondering if anyone can suggest what I might do to play these games without this problem occurring. The PC's spec is this: Display: NVIDIA GeForce GTX 650; 8GB RAM (6 available); Processor: AMD FX(tm)-8120 Eight-Core Processor - 3.1 GHz, 4 Cores, 8 Logical Processors. I have read on another post that forcing vsync in the Nvidia Control Panel helped with what seems like it could be a similar problem, so I plan to see if that solves it, God permitting.

    Read the article

  • Gateway laptop module bay light repeating 12 flashes - what error is that?

    - by Simurr
    I have a Gateway M465-E laptop currently running fine with a T2300E Core Duo installed. I wanted to upgrade it to a Core 2 Duo. My brother has the same model laptop and that took a Core 2 Duo (T7200) just fine. Picked up a T7200 on ebay and installed it. Normally when booting all the indicator lights flash once and the fan spins up before the machine actually starts to POST. With the T7200 installed all the lights flash and the fan spins up, but the module bay activity light flashes 12 times repeatedly. I'm assuming this is an error code, but can find no information about it. There are no beep codes. I've removed the ram, HD, Bay module and no change. Switched back to the T2300E and everything works fine. Anyone know what that error code is? The motherboard was actually manufactured by Foxconn if that helps. Update 1 Returned the CPU as defective. I tested it in 3 M465-E's and all of them did exactly the same thing. I still have no idea what the error code is. I'd still like to know for future reference. Perhaps I should try removing the CPU from one of them and see what happens.

    Read the article

  • Windows Azure: Import/Export Hard Drives, VM ACLs, Web Sockets, Remote Debugging, Continuous Delivery, New Relic, Billing Alerts and More

    - by ScottGu
    Two weeks ago we released a giant set of improvements to Windows Azure, as well as a significant update of the Windows Azure SDK. This morning we released another massive set of enhancements to Windows Azure.  Today’s new capabilities include: Storage: Import/Export Hard Disk Drives to your Storage Accounts HDInsight: General Availability of our Hadoop Service in the cloud Virtual Machines: New VM Gallery, ACL support for VIPs Web Sites: WebSocket and Remote Debugging Support Notification Hubs: Segmented customer push notification support with tag expressions TFS & GIT: Continuous Delivery Support for Web Sites + Cloud Services Developer Analytics: New Relic support for Web Sites + Mobile Services Service Bus: Support for partitioned queues and topics Billing: New Billing Alert Service that sends emails notifications when your bill hits a threshold you define All of these improvements are now available to use immediately (note that some features are still in preview).  Below are more details about them. Storage: Import/Export Hard Disk Drives to Windows Azure I am excited to announce the preview of our new Windows Azure Import/Export Service! The Windows Azure Import/Export Service enables you to move large amounts of on-premises data into and out of your Windows Azure Storage accounts. It does this by enabling you to securely ship hard disk drives directly to our Windows Azure data centers. Once we receive the drives we’ll automatically transfer the data to or from your Windows Azure Storage account.  This enables you to import or export massive amounts of data more quickly and cost effectively (and not be constrained by available network bandwidth). Encrypted Transport Our Import/Export service provides built-in support for BitLocker disk encryption – which enables you to securely encrypt data on the hard drives before you send it, and not have to worry about it being compromised even if the disk is lost/stolen in transit (since the content on the transported hard drives is completely encrypted and you are the only one who has the key to it).  The drive preparation tool we are shipping today makes setting up bitlocker encryption on these hard drives easy. How to Import/Export your first Hard Drive of Data You can read our Getting Started Guide to learn more about how to begin using the import/export service.  You can create import and export jobs via the Windows Azure Management Portal as well as programmatically using our Server Management APIs. It is really easy to create a new import or export job using the Windows Azure Management Portal.  Simply navigate to a Windows Azure storage account, and then click the new Import/Export tab now available within it (note: if you don’t have this tab make sure to sign-up for the Import/Export preview): Then click the “Create Import Job” or “Create Export Job” commands at the bottom of it.  This will launch a wizard that easily walks you through the steps required: For more comprehensive information about Import/Export, refer to Windows Azure Storage team blog.  You can also send questions and comments to the [email protected] email address. We think you’ll find this new service makes it much easier to move data into and out of Windows Azure, and it will dramatically cut down the network bandwidth required when working on large data migration projects.  We hope you like it. HDInsight: 100% Compatible Hadoop Service in the Cloud Last week we announced the general availability release of Windows Azure HDInsight. 
HDInsight is a 100% compatible Hadoop service that allows you to easily provision and manage Hadoop clusters for big data processing in Windows Azure.  This release is now live in production, backed by an enterprise SLA, supported 24x7 by Microsoft Support, and is ready to use for production scenarios. HDInsight allows you to use Apache Hadoop tools, such as Pig and Hive, to process large amounts of data in Windows Azure Blob Storage. Because data is stored in Windows Azure Blob Storage, you can choose to dynamically create Hadoop clusters only when you need them, and then shut them down when they are no longer required (since you pay only for the time the Hadoop cluster instances are running this provides a super cost effective way to use them).  You can create Hadoop clusters using either the Windows Azure Management Portal (see below) or using our PowerShell and Cross Platform Command line tools: The import/export hard drive support that came out today is a perfect companion service to use with HDInsight – the combination allows you to easily ingest, process and optionally export a limitless amount of data.  We’ve also integrated HDInsight with our Business Intelligence tools, so users can leverage familiar tools like Excel in order to analyze the output of jobs.  You can find out more about how to get started with HDInsight here. Virtual Machines: VM Gallery Enhancements Today’s update of Windows Azure brings with it a new Virtual Machine gallery that you can use to create new VMs in the cloud.  You can launch the gallery by doing New->Compute->Virtual Machine->From Gallery within the Windows Azure Management Portal: The new Virtual Machine Gallery includes some nice enhancements that make it even easier to use: Search: You can now easily search and filter images using the search box in the top-right of the dialog.  For example, simply type “SQL” and we’ll filter to show those images in the gallery that contain that substring. Category Tree-view: Each month we add more built-in VM images to the gallery.  You can continue to browse these using the “All” view within the VM Gallery – or now quickly filter them using the category tree-view on the left-hand side of the dialog.  For example, by selecting “Oracle” in the tree-view you can now quickly filter to see the official Oracle supplied images. MSDN and Supported checkboxes: With today’s update we are also introducing filters that makes it easy to filter out types of images that you may not be interested in. The first checkbox is MSDN: using this filter you can exclude any image that is not part of the Windows Azure benefits for MSDN subscribers (which have highly discounted pricing - you can learn more about the MSDN pricing here). The second checkbox is Supported: this filter will exclude any image that contains prerelease software, so you can feel confident that the software you choose to deploy is fully supported by Windows Azure and our partners. Sort options: We sort gallery images by what we think customers are most interested in, but sometimes you might want to sort using different views. So we’re providing some additional sort options, like “Newest,” to customize the image list for what suits you best. Pricing information: We now provide additional pricing information about images and options on how to cost effectively run them directly within the VM Gallery. The above improvements make it even easier to use the VM Gallery and quickly create launch and run Virtual Machines in the cloud. 
Virtual Machines: ACL Support for VIPs A few months ago we exposed the ability to configure Access Control Lists (ACLs) for Virtual Machines using Windows PowerShell cmdlets and our Service Management API. With today’s release, you can now configure VM ACLs using the Windows Azure Management Portal as well. You can now do this by clicking the new Manage ACL command in the Endpoints tab of a virtual machine instance: This will enable you to configure an ordered list of permit and deny rules to scope the traffic that can access your VM’s network endpoints. For example, if you were on a virtual network, you could limit RDP access to a Windows Azure virtual machine to only a few computers attached to your enterprise. Or if you weren’t on a virtual network you could alternatively limit traffic from public IPs that can access your workloads: Here is the default behaviors for ACLs in Windows Azure: By default (i.e. no rules specified), all traffic is permitted. When using only Permit rules, all other traffic is denied. When using only Deny rules, all other traffic is permitted. When there is a combination of Permit and Deny rules, all other traffic is denied. Lastly, remember that configuring endpoints does not automatically configure them within the VM if it also has firewall rules enabled at the OS level.  So if you create an endpoint using the Windows Azure Management Portal, Windows PowerShell, or REST API, be sure to also configure your guest VM firewall appropriately as well. Web Sites: Web Sockets Support With today’s release you can now use Web Sockets with Windows Azure Web Sites.  This feature enables you to easily integrate real-time communication scenarios within your web based applications, and is available at no extra charge (it even works with the free tier).  Higher level programming libraries like SignalR and socket.io are also now supported with it. You can enable Web Sockets support on a web site by navigating to the Configure tab of a Web Site, and by toggling Web Sockets support to “on”: Once Web Sockets is enabled you can start to integrate some really cool scenarios into your web applications.  Check out the new SignalR documentation hub on www.asp.net to learn more about some of the awesome scenarios you can do with it. Web Sites: Remote Debugging Support The Windows Azure SDK 2.2 we released two weeks ago introduced remote debugging support for Windows Azure Cloud Services. With today’s Windows Azure release we are extending this remote debugging support to also work with Windows Azure Web Sites. With live, remote debugging support inside of Visual Studio, you are able to have more visibility than ever before into how your code is operating live in Windows Azure. It is now super easy to attach the debugger and quickly see what is going on with your application in the cloud. Remote Debugging of a Windows Azure Web Site using VS 2013 Enabling the remote debugging of a Windows Azure Web Site using VS 2013 is really easy.  Start by opening up your web application’s project within Visual Studio. Then navigate to the “Server Explorer” tab within Visual Studio, and click on the deployed web-site you want to debug that is running within Windows Azure using the Windows Azure->Web Sites node in the Server Explorer.  Then right-click and choose the “Attach Debugger” option on it: When you do this Visual Studio will remotely attach the debugger to the Web Site running within Windows Azure.  
The debugger will then stop the web site’s execution when it hits any break points that you have set within your web application’s project inside Visual Studio.  For example, below I set a breakpoint on the “ViewBag.Message” assignment statement within the HomeController of the standard ASP.NET MVC project template.  When I hit refresh on the “About” page of the web site within the browser, the breakpoint was triggered and I am now able to debug the app remotely using Visual Studio: Note above how we can debug variables (including autos/watchlist/etc), as well as use the Immediate and Command Windows. In the debug session above I used the Immediate Window to explore some of the request object state, as well as to dynamically change the ViewBag.Message property.  When we click the the “Continue” button (or press F5) the app will continue execution and the Web Site will render the content back to the browser.  This makes it super easy to debug web apps remotely. Tips for Better Debugging To get the best experience while debugging, we recommend publishing your site using the Debug configuration within Visual Studio’s Web Publish dialog. This will ensure that debug symbol information is uploaded to the Web Site which will enable a richer debug experience within Visual Studio.  You can find this option on the Web Publish dialog on the Settings tab: When you ultimately deploy/run the application in production we recommend using the “Release” configuration setting – the release configuration is memory optimized and will provide the best production performance.  To learn more about diagnosing and debugging Windows Azure Web Sites read our new Troubleshooting Windows Azure Web Sites in Visual Studio guide. Notification Hubs: Segmented Push Notification support with tag expressions In August we announced the General Availability of Windows Azure Notification Hubs - a powerful Mobile Push Notifications service that makes it easy to send high volume push notifications with low latency from any mobile app back-end.  Notification hubs can be used with any mobile app back-end (including ones built using our Mobile Services capability) and can also be used with back-ends that run in the cloud as well as on-premises. Beginning with the initial release, Notification Hubs allowed developers to send personalized push notifications to both individual users as well as groups of users by interest, by associating their devices with tags representing the logical target of the notification. For example, by registering all devices of customers interested in a favorite MLB team with a corresponding tag, it is possible to broadcast one message to millions of Boston Red Sox fans and another message to millions of St. Louis Cardinals fans with a single API call respectively. New support for using tag expressions to enable advanced customer segmentation With today’s release we are adding support for even more advanced customer targeting.  You can now identify customers that you want to send push notifications to by defining rich tag expressions. With tag expressions, you can now not only broadcast notifications to Boston Red Sox fans, but take that segmenting a step farther and reach more granular segments. This opens up a variety of scenarios, for example: Offers based on multiple preferences—e.g. send a game day vegetarian special to users tagged as both a Boston Red Sox fan AND a vegetarian Push content to multiple segments in a single message—e.g. 
rain delay information only to users who are tagged as either a Boston Red Sox fan OR a St. Louis Cardinal fan Avoid presenting subsets of a segment with irrelevant content—e.g. season ticket availability reminder to users who are tagged as a Boston Red Sox fan but NOT also a season ticket holder To illustrate with code, consider a restaurant chain app that sends an offer related to a Red Sox vs Cardinals game for users in Boston. Devices can be tagged by your app with location tags (e.g. “Loc:Boston”) and interest tags (e.g. “Follows:RedSox”, “Follows:Cardinals”), and then a notification can be sent by your back-end to “(Follows:RedSox || Follows:Cardinals) && Loc:Boston” in order to deliver an offer to all devices in Boston that follow either the RedSox or the Cardinals. This can be done directly in your server backend send logic using the code below: var notification = new WindowsNotification(messagePayload); hub.SendNotificationAsync(notification, "(Follows:RedSox || Follows:Cardinals) && Loc:Boston"); In your expressions you can use all Boolean operators: AND (&&), OR (||), and NOT (!).  Some other cool use cases for tag expressions that are now supported include: Social: To “all my group except me” - group:id && !user:id Events: Touchdown event is sent to everybody following either team or any of the players involved in the action: Followteam:A || Followteam:B || followplayer:1 || followplayer:2 … Hours: Send notifications at specific times. E.g. Tag devices with time zone and when it is 12pm in Seattle send to: GMT8 && follows:thaifood Versions and platforms: Send a reminder to people still using your first version for Android - version:1.0 && platform:Android For help on getting started with Notification Hubs, visit the Notification Hub documentation center.  Then download the latest NuGet package (or use the Notification Hubs REST APIs directly) to start sending push notifications using tag expressions.  They are really powerful and enable a bunch of great new scenarios. TFS & GIT: Continuous Delivery Support for Web Sites + Cloud Services With today’s Windows Azure release we are making it really easy to enable continuous delivery support with Windows Azure and Team Foundation Services.  Team Foundation Services is a cloud based offering from Microsoft that provides integrated source control (with both TFS and Git support), build server, test execution, collaboration tools, and agile planning support.  It makes it really easy to setup a team project (complete with automated builds and test runners) in the cloud, and it has really rich integration with Visual Studio. With today’s Windows Azure release it is now really easy to enable continuous delivery support with both TFS and Git based repositories hosted using Team Foundation Services.  This enables a workflow where when code is checked in, built successfully on an automated build server, and all tests pass on it – I can automatically have the app deployed on Windows Azure with zero manual intervention or work required. The below screen-shots demonstrate how to quickly setup a continuous delivery workflow to Windows Azure with a Git-based ASP.NET MVC project hosted using Team Foundation Services. Enabling Continuous Delivery to Windows Azure with Team Foundation Services The project I’m going to enable continuous delivery with is a simple ASP.NET MVC project whose source code I’m hosting using Team Foundation Services.  
I did this by creating a “SimpleContinuousDeploymentTest” repository there using Git – and then used the new built-in Git tooling support within Visual Studio 2013 to push the source code to it.  Below is a screen-shot of the Git repository hosted within Team Foundation Services: I can access the repository within Visual Studio 2013 and easily make commits with it (as well as branch, merge and do other tasks).  Using VS 2013 I can also setup automated builds to take place in the cloud using Team Foundation Services every time someone checks in code to the repository: The cool thing about this is that I don’t have to buy or rent my own build server – Team Foundation Services automatically maintains its own build server farm and can automatically queue up a build for me (for free) every time someone checks in code using the above settings.  This build server (and automated testing) support now works with both TFS and Git based source control repositories. Connecting a Team Foundation Services project to Windows Azure Once I have a source repository hosted in Team Foundation Services with Automated Builds and Testing set up, I can then go even further and set it up so that it will be automatically deployed to Windows Azure when a source code commit is made to the repository (assuming the Build + Tests pass).  Enabling this is now really easy.  To set this up with a Windows Azure Web Site simply use the New->Compute->Web Site->Custom Create command inside the Windows Azure Management Portal.  This will create a dialog like below.  I gave the web site a name and then made sure the “Publish from source control” checkbox was selected: When we click next we’ll be prompted for the location of the source repository.  We’ll select “Team Foundation Services”: Once we do this we’ll be prompted for our Team Foundation Services account that our source repository is hosted under (in this case my TFS account is “scottguthrie”): When we click the “Authorize Now” button we’ll be prompted to give Windows Azure permissions to connect to the Team Foundation Services account.  Once we do this we’ll be prompted to pick the source repository we want to connect to.  Starting with today’s Windows Azure release you can now connect to both TFS and Git based source repositories.  This new support allows me to connect to the “SimpleContinuousDeploymentTest” respository we created earlier: Clicking the finish button will then create the Web Site with the continuous delivery hooks setup with Team Foundation Services.  Now every time someone pushes source control to the repository in Team Foundation Services, it will kick off an automated build, run all of the unit tests in the solution , and if they pass the app will be automatically deployed to our Web Site in Windows Azure.  You can monitor the history and status of these automated deployments using the Deployments tab within the Web Site: This enables a really slick continuous delivery workflow, and enables you to build and deploy apps in a really nice way. Developer Analytics: New Relic support for Web Sites + Mobile Services With today’s Windows Azure release we are making it really easy to enable Developer Analytics and Monitoring support with both Windows Azure Web Site and Windows Azure Mobile Services.  We are partnering with New Relic, who provide a great dev analytics and app performance monitoring offering, to enable this - and we have updated the Windows Azure Management Portal to make it really easy to configure. 
Enabling New Relic with a Windows Azure Web Site Enabling New Relic support with a Windows Azure Web Site is now really easy.  Simply navigate to the Configure tab of a Web Site and scroll down to the “developer analytics” section that is now within it: Clicking the “add-on” button will display some additional UI.  If you don’t already have a New Relic subscription, you can click the “view windows azure store” button to obtain a subscription (note: New Relic has a perpetually free tier so you can enable it even without paying anything): Clicking the “view windows azure store” button will launch the integrated Windows Azure Store experience we have within the Windows Azure Management Portal.  You can use this to browse from a variety of great add-on services – including New Relic: Select “New Relic” within the dialog above, then click the next button, and you’ll be able to choose which type of New Relic subscription you wish to purchase.  For this demo we’ll simply select the “Free Standard Version” – which does not cost anything and can be used forever:  Once we’ve signed-up for our New Relic subscription and added it to our Windows Azure account, we can go back to the Web Site’s configuration tab and choose to use the New Relic add-on with our Windows Azure Web Site.  We can do this by simply selecting it from the “add-on” dropdown (it is automatically populated within it once we have a New Relic subscription in our account): Clicking the “Save” button will then cause the Windows Azure Management Portal to automatically populate all of the needed New Relic configuration settings to our Web Site: Deploying the New Relic Agent as part of a Web Site The final step to enable developer analytics using New Relic is to add the New Relic runtime agent to our web app.  We can do this within Visual Studio by right-clicking on our web project and selecting the “Manage NuGet Packages” context menu: This will bring up the NuGet package manager.  You can search for “New Relic” within it to find the New Relic agent.  Note that there is both a 32-bit and 64-bit edition of it – make sure to install the version that matches how your Web Site is running within Windows Azure (note: you can configure your Web Site to run in either 32-bit or 64-bit mode using the Web Site’s “Configuration” tab within the Windows Azure Management Portal): Once we install the NuGet package we are all set to go.  We’ll simply re-publish the web site again to Windows Azure and New Relic will now automatically start monitoring the application Monitoring a Web Site using New Relic Now that the application has developer analytics support with New Relic enabled, we can launch the New Relic monitoring portal to start monitoring the health of it.  We can do this by clicking on the “Add Ons” tab in the left-hand side of the Windows Azure Management Portal.  Then select the New Relic add-on we signed-up for within it.  The Windows Azure Management Portal will provide some default information about the add-on when we do this.  Clicking the “Manage” button in the tray at the bottom will launch a new browser tab and single-sign us into the New Relic monitoring portal associated with our account: When we do this a new browser tab will launch with the New Relic admin tool loaded within it: We can now see insights into how our app is performing – without having to have written a single line of monitoring code.  
The New Relic service provides a ton of great built-in monitoring features allowing us to quickly see: Performance times (including browser rendering speed) for the overall site and individual pages.  You can optionally set alert thresholds to trigger if the speed does not meet a threshold you specify. Information about where in the world your customers are hitting the site from (and how performance varies by region) Details on the latency performance of external services your web apps are using (for example: SQL, Storage, Twitter, etc) Error information including call stack details for exceptions that have occurred at runtime SQL Server profiling information – including which queries executed against your database and what their performance was And a whole bunch more… The cool thing about New Relic is that you don’t need to write monitoring code within your application to get all of the above reports (plus a lot more).  The New Relic agent automatically enables the CLR profiler within applications and automatically captures the information necessary to identify these.  This makes it super easy to get started and immediately have a rich developer analytics view for your solutions with very little effort. If you haven’t tried New Relic out yet with Windows Azure I recommend you do so – I think you’ll find it helps you build even better cloud applications.  Following the above steps will help you get started and deliver you a really good application monitoring solution in only minutes. Service Bus: Support for partitioned queues and topics With today’s release, we are enabling support within Service Bus for partitioned queues and topics. Enabling partitioning enables you to achieve a higher message throughput and better availability from your queues and topics. Higher message throughput is achieved by implementing multiple message brokers for each partitioned queue and topic.  The  multiple messaging stores will also provide higher availability. You can create a partitioned queue or topic by simply checking the Enable Partitioning option in the custom create wizard for a Queue or Topic: Read this article to learn more about partitioned queues and topics and how to take advantage of them today. Billing: New Billing Alert Service Today’s Windows Azure update enables a new Billing Alert Service Preview that enables you to get proactive email notifications when your Windows Azure bill goes above a certain monetary threshold that you configure.  This makes it easier to manage your bill and avoid potential surprises at the end of the month. With the Billing Alert Service Preview, you can now create email alerts to monitor and manage your monetary credits or your current bill total.  To set up an alert first sign-up for the free Billing Alert Service Preview.  Then visit the account management page, click on a subscription you have setup, and then navigate to the new Alerts tab that is available: The alerts tab allows you to setup email alerts that will be sent automatically once a certain threshold is hit.  For example, by clicking the “add alert” button above I can setup a rule to send myself email anytime my Windows Azure bill goes above $100 for the month: The Billing Alert Service will evolve to support additional aspects of your bill as well as support multiple forms of alerts such as SMS.  Try out the new Billing Alert Service Preview today and give us feedback. 
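For the partitioned queues and topics mentioned above, enabling partitioning from code rather than the portal is a one-property change on the entity description. Here is a minimal sketch (not from the original post; the connection string and queue name are placeholders) using the Microsoft.ServiceBus client library:

    using Microsoft.ServiceBus;
    using Microsoft.ServiceBus.Messaging;

    class PartitionedQueueSetup
    {
        static void Main()
        {
            // Placeholder connection string - substitute your own namespace credentials.
            var connectionString = "Endpoint=sb://<yournamespace>.servicebus.windows.net/;SharedAccessKeyName=<key-name>;SharedAccessKey=<key>";
            var namespaceManager = NamespaceManager.CreateFromConnectionString(connectionString);

            var description = new QueueDescription("orders")
            {
                EnablePartitioning = true   // spreads the queue across multiple message brokers/stores
            };

            if (!namespaceManager.QueueExists(description.Path))
                namespaceManager.CreateQueue(description);
        }
    }

TopicDescription exposes the same EnablePartitioning flag if you want a partitioned topic instead of a queue.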
Summary Today’s Windows Azure release enables a ton of great new scenarios, and makes building applications hosted in the cloud even easier. If you don’t already have a Windows Azure account, you can sign-up for a free trial and start using all of the above features today.  Then visit the Windows Azure Developer Center to learn more about how to build apps with it. Hope this helps, Scott P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

    Read the article

  • Fluent NHibernate and Polymorphism and a Newbie!

    - by Andy Baker
    I'm a fluent nhibernate newbie and I'm struggling to map a hierarchy of polymorphic objects. I've produced the following Model that recreates the essence of what I'm doing in my real application. I have a ProductList and several specialised types of products: public class MyProductList { public virtual int Id { get; set; } public virtual string Name {get;set;} public virtual IList<Product> Products { get; set; } public MyProductList() { Products = new List<Product>(); } } public class Product { public virtual int Id { get; set; } public virtual string ProductDescription {get;set;} } public class SizedProduct : Product { public virtual decimal Size {get;set;} } public class BundleProduct : Product { public virtual Product BundleItem1 {get;set;} public virtual Product BundleItem2 {get;set;} } Note that I have a specialised type of Product called BundleProduct that has two products attached. I can add any of the specialised types of product to MyProductList, and a bundle Product can be made up of any of the specialised types of product too. Here is the fluent nhibernate mapping that I'm using: public class MyListMap : ClassMap<MyList> { public MyListMap() { Id(ml => ml.Id); Map(ml => ml.Name); HasManyToMany(ml => ml.Products).Cascade.All(); } } public class ProductMap : ClassMap<Product> { public ProductMap() { Id(prod => prod.Id); Map(prod => prod.ProductDescription); } } public class SizedProductMap : SubclassMap<SizedProduct> { public SizedProductMap() { Map(sp => sp.Size); } } public class BundleProductMap : SubclassMap<BundleProduct> { public BundleProductMap() { References(bp => bp.BundleItem1).Cascade.All(); References(bp => bp.BundleItem2).Cascade.All(); } } I haven't configured any reverse mappings, so a product doesn't know which Lists it belongs to or which bundles it is part of. Next I add some products to my list: MyList ml = new MyList() { Name = "Example" }; ml.Products.Add(new Product() { ProductDescription = "PSU" }); ml.Products.Add(new SizedProduct() { ProductDescription = "Extension Cable", Size = 2.0M }); ml.Products.Add(new BundleProduct() { ProductDescription = "Fan & Cable", BundleItem1 = new Product() { ProductDescription = "Fan Power Cable" }, BundleItem2 = new SizedProduct() { ProductDescription = "80mm Fan", Size = 80M } }); When I persist my list to the database and reload it, the list itself contains the items I expect, i.e. MyList[0] has a type of Product, MyList[1] has a type of SizedProduct, and MyList[2] has a type of BundleProduct - great! If I navigate to the BundleProduct, I'm not able to see the types of Product attached to BundleItem1 or BundleItem2; instead they are always proxies to Product - in this example BundleItem2 should be a SizedProduct. Is there anything I can do to resolve this either in my model or the mapping? Thanks in advance for your help.
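    By default NHibernate maps a many-to-one as a lazy reference, and a lazy reference is materialised as a runtime proxy of the declared class (Product here), so the concrete subclass is never visible through a cast or type check. One workaround that is often suggested is to switch those two references to non-lazy loading; the sketch below applies that to the mappings above, so treat it as an illustration rather than a verified fix:

        public class BundleProductMap : SubclassMap<BundleProduct>
        {
            public BundleProductMap()
            {
                // Not.LazyLoad() makes NHibernate fetch the real row up front instead of
                // handing back a lazy proxy typed as Product, so the run-time type
                // (e.g. SizedProduct) becomes visible again.
                References(bp => bp.BundleItem1).Not.LazyLoad().Cascade.All();
                References(bp => bp.BundleItem2).Not.LazyLoad().Cascade.All();
            }
        }

    The trade-off is that both bundle items are then fetched eagerly; alternatives people use include unproxying the loaded instance by hand or pushing the subclass-specific behaviour onto Product so the proxy type stops mattering.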

    Read the article

  • Backup and Transfer Foobar2000 to a New Computer

    - by Mysticgeek
    If you are a fan of Foobar2000 you undoubtedly have tweaked it to the point where you don’t want to set it all up again on a new machine. Here we look at how to transfer Foobar2000 settings to a new Windows 7 machine. Note: For this article we are transferring Foobar2000 settings from one Windows 7 machine to another over a network running Windows Home Server.  Foobar2000 Foobar2000 is an awesome music player which is highly customizable and which we’ve previously covered. Here we take a look at how it’s set up on the current machine. It’s nothing flashy, but it is set up for our needs and includes a lot of components and playlists.   Backup Files Rather than wasting time setting everything up again on a new machine, we can back up the important files and replace them on the new machine. First type or copy the following into the Explorer address bar. %appdata%\foobar2000 Now copy all of the files in the folder and store them on a network drive or some type of removable media or device. New Machine Now you can install the latest version of Foobar2000 on your new machine. You can go with a Standard install as we will be replacing our backed up configuration files anyway. When it launches, it will be set with all the defaults…and we want what we had back. Browse to the following on the new machine… %appdata%\foobar2000 Delete all of the files in this directory… Then replace them with the ones we backed up from the other machine. You’ll also want to navigate to C:\Program Files\Foobar2000 and replace the existing Components folder with the backed up one. When you get the screen telling you there are already files of the same name, select Move and Replace, and check the box Do this for the next 6 conflicts. Now we’re back in business! Everything is exactly as it was on the old machine. In this example, we were moving the Foobar2000 files from a computer on the same home network. All the music is coming from a directory on our Windows Home Server so those paths hadn’t changed. If you’re moving these files to a computer on another network… say your work computer, you’ll need to adjust where the music folders point to. Windows XP If you’re setting up Foobar2000 on an XP machine, you can enter the following into the Run line. %appdata%\foobar2000 Then copy your backed up files into the Foobar2000 folder, and remember to swap out the Components folder in C:\Program Files\Foobar2000. Confirm to replace the files and folders by clicking Yes to All… Conclusion This method worked perfectly for us on our home network setup. There might be some other things that will need a bit of tweaking, but overall the process is quick and easy. There are a lot of cool things you can do with Foobar2000, like rip an audio CD to FLAC. If you’re a fan of Foobar2000 or considering switching to it, we will be covering more awesome features in future articles. 
Download Foobar2000 – Windows Only

    Read the article

  • TiVo Follow-up…Training Opportunities

    - by MightyZot
    A few posts ago I talked about my experience with TiVo Customer Service. While I didn’t receive bad service per se, I felt like the reps could have communicated better. I made the argument that it should be just as easy to leave a company as it is to engage with a company, even though my intention is to remain a TiVo fan. I worked for DataStorm Technologies in the early 90s. I pointed out to another developer that we were leaving files behind in our installations. My opinion was that, if the customer is uninstalling our application, there should be no trace of it left after uninstall except for the customer’s data. He replied with, “screw ‘em. They’re leaving us. Why do we care if we left anything behind?” Wow. Surely there is a lot of arrogance in that statement. Think about this…how often do you change your services, devices, or whatever?  Personally, I change things up about once every two or three years. If I don’t change things up, I at least think about it. So, every two or three years there is an opportunity for you (as a vendor or business) to sell me something. (That opportunity actually exists all the time, because there are many of these two or three year periods overlapping.) Likewise, you have the opportunity to win back my business every two or three years as well. Customer service on exit is just as important as customer service during engagement because, every so often, you have another chance to gain back my loyalty. If you screw that up on exit, your chances are close to zero. In addition, you need to consider all of the potential or existing customers that are part of or affected by my social organizations. “Melissa” at TiVo gave me a call last week and set up some time to talk about my experience. We talked yesterday and she gave me a few moments to pontificate about my thoughts on the importance of a complete customer experience. She had listened to my customer support calls and agreed that I had made it clear that I intended to remain a TiVo customer even though suddenLink is handling my subscription. She said that suddenLink is a very important partner for them and, of course, they want to do everything they can to support TiVo / suddenLink customers.  “Melissa” also said that they had turned this experience into a training opportunity for the reps involved. I hope that is true, because that “programmer arrogance” that I mentioned above (which was somewhat pervasive back then) may be part of the reason why that company is no longer around. Good job “Melissa”!  And, like I said, I am still a TiVo fan. In fact, we love our new TiVo and many of the great new features. In addition, if you’re one of the two people that read these posts, please remember that these are just opinions. Your experiences may be, and likely will be, completely unique to you.

    Read the article

  • St. Louis ALT.NET

    - by Brian Schroer
    I’m a huge fan of the St. Louis .NET User Group and a regular attendee of their meetings, but always wished there was a local group that discussed more advanced .NET topics. (That’s not a criticism of the group - I appreciate that they want to server developers with a broad range of skill levels). That’s why I was thrilled when Nicholas Cloud started a St. Louis ALT.NET group in 2010. Here’s the “about us” statement from the group’s web site: The ALT.NET community is a loosely coupled, highly cohesive group of like-minded individuals who believe that the best developers do not align themselves with platforms and languages, but with principles and ideas. In 2007, David Laribee created the term "ALT.NET" to explain this "alternative" view of the Microsoft development universe--a view that challenged the "Microsoft-only" approach to software development. He distilled his thoughts into four key developer characteristics which form the basis of the ALT.NET philosophy: You're the type of developer who uses what works while keeping an eye out for a better way. You reach outside the mainstream to adopt the best of any community: Open Source, Agile, Java, Ruby, etc. You're not content with the status quo. Things can always be better expressed, more elegant and simple, more mutable, higher quality, etc. You know tools are great, but they only take you so far. It's the principles and knowledge that really matter. The best tools are those that embed the knowledge and encourage the principles (e.g. Resharper.) The St. Louis ALT.NET meetup group is a place where .NET developers can learn, share, and critique approaches to software development on the .NET stack. We cater to the highest common denominator, not the lowest, and want to help all St. Louis .NET developers achieve a superior level of software craftsmanship. I don’t see a lot of ALT.NET talk in blogs these days. The movement was harmed early on by the negative attitudes of some of its early leaders, including jerk moves like the Entity Framework “vote of no confidence”, but I do see occasional mentions of local groups like the St. Louis one. I think ALT.NET has been successful at bringing some of its ideas into the .NET world, including heavily influencing ASP.NET MVC and raising the general level of software craftsmanship for developers working on the Microsoft stack. The ideas and ideals live on, they’re just not branded as “this is ALT.NET!” In the past 18 months, St. Louis ALT.NET meetups have discussed topics like: NHibernate F# and other functional languages AOP CoffeeScript “How Ruby Is Making Me a Stronger C# Developer” Using rake for builds CQRS .NET dynamic programming micro web frameworks – Nancy & Jessica Git ALT.NET doesn’t mean (to me, anyway) “alternatives to .NET”, but “alternatives for .NET”. We look at how things are done in Ruby and other languages/platforms, but always with the idea “What can I learn from this to take back to my “day job” with .NET?”. Meetings are held at 7PM on the fourth Wednesday of each month at the offices of Professional Employment Group. PEG is located at 999 Executive Parkway (Suite 100 – lower level) in Creve Coeur (South of Olive off of Mason Road - Here's a map). Food is not supplied (sorry if you’re a big fan of the Papa John’s Crust-Lovers’ Pizza that’s a staple of user group meetings), but attendees are encouraged to come early and bring/share beer, so that’s cool. Thanks to Nick for organizing, and to Professional Employment Group for lending their offices. 
Please visit the meetup site for more information.

    Read the article

  • Merging Social Accounts: What We Learned This Weekend

    - by Mike Stiles
    Guest Post by Erika BrookesWe learned that it’s not always as easy as you think it’s going to be. While it’s widely accepted that merging multiple owned Facebook Pages that are duplicating communities and putting out the same type of content is a best practice, actually pulling it off without rattling fans is a trickier proposition. Facebook is nice and clear about how to merge Facebook Pages. Although content is not carried over, Likes from the pages you’re merging are. So you can imagine the surprise when such fans start seeing posts in their News Feed from a page they don’t believe they ever Liked. One community member accurately likened it to having your bank come under another bank’s brand name. The Facebook Page changes to the new brand, just like your debit card, emails, signs and other communication. This weekend we did our merge. The Facebook communities of Vitrue, Involver and Collective Intellect were pulled into one community, Oracle Social. Could we have handled it better? Oh yeah. Our intent was to make sure, to the fullest extent possible, that the fans of the Vitrue, Involver, and Collective Intellect brand pages were well-informed about the pending page merges in ADVANCE of the merge. While many were aware that Oracle acquired the three companies, many were not. We learned from fan feedback that we should have sent notifications MUCH earlier to make the brand Page merge crystal clear and to answer any questions. That was our bad, our responsibility and we apologize for Oracle Social showing up in your News Feed if you were not aware that it was a result of your fandom of Vitrue, Involver or Collective Intellect. It was our job to make you aware well in advance. Some felt they had never Liked the fan Pages of Vitrue, Involver or Collective Intellect, so they were understandably upset (some cultures may call it “fit to be tied”) when they found themselves fans of Oracle Social. One thing to consider is that since 2009, brands and developers have used and enjoyed free Involver tab apps like Twitter, RSS and YouTube (1.2 million of which are currently active), which included an opt-in Liking the Involver Page. Often, when Liking happens in a manner outside of the traditional clicking of a Like button on a brand Page, it’s easy to forget a Page was indeed Liked. Lastly, a few felt that their Like of the Page had been “bought.” It was not. No fans or Likes were separately purchased. Yes, the companies and the social properties of Vitrue, Involver and Collective Intellect were acquired by Oracle. Those brands are now being coordinated into the larger Oracle brand. In social media, that means those brands are being integrated into the Oracle Social community. So what now? We apologize and apply lessons learned. We learned that you not only have to communicate thoroughly and clearly, but you have to communicate well in advance of any actionable items that will affect fans. We’re more than willing to walk straight to the woodshed when we deserve it. Going forward, the social team here is dedicated to facilitating content, discussion and sharing around social for marketers, agencies, IT stakeholders and social staffs, including community managers. We anticipate Oracle Social being the premier gathering place for true social innovators as we move into social’s exciting next phase of development. Inevitably, some will still feel they are fans of the Page in error. While we hate to see you go, you may unlike the Page if it’s not relevant or useful to you. 
Let’s continue to contribute, participate, foster our desire to learn, and move forward together positively and constructively - both for current fans of the community and the many fans to come.

    Read the article

  • External JS usage in FBML - Cannot access external script

    - by santhakr
    Hi, I am trying to create a small facebook app and associate it with a fan page in a tab. I am trying to include an external javascript file in my page and call a method on a button click event. Below is a part of the code: <script language="Javascript" src="http://mysite.com/fb.js"></script> <input type="button" value="Click....." onClick="javascript:showDialog();" /> The content of fb.js is as below: function showDialog() { new Dialog().showMessage('Dialog', 'Button clickeed'); } When I load the tab in my fan page, it shows an error "Cannot allow external script", whereas when I load the canvas url [http://apps.facebook.com/...] directly and click on the button, it works [shows the dialog]. Does script include work only on the canvas and not on the profile page? I have another question though: initially I had the script src as a relative path, but it errored out with the same error - "Cannot allow external script". Can't I use a relative path for the external scripts?

    Read the article

  • Extending Programming Languages

    - by chpwn
    (Since I just posted this in another question, but my browser had to be annoying and submit it without content first, here it is again:) I'm a fan of clean code. I like my languages to be able to express what I'm trying to do, but I like the syntax to mirror that too. For example, I work on a lot of programs in Objective-C for jailbroken iPhones, which patch other code using the method_setImplementation() function of the runtime. Or, in pyobjc, I have to use the syntax UIView.initWithFrame_(), which is also pretty awful and unreadable with the way the method names are structured. In both cases, the language does not support this in syntax. I've found three basic ways that this is done: Insane macros. Take a look at this "CaptainHook", it does what I'm looking for in a usable way, but it isn't quite clean and is a major hack. There's also "Logos", which implements a very nice syntax, but is written in Perl parsing my code with a ton of regular expressions. This scares me. I like the idea of adding a %hook ClassName, but not by using regular expressions to parse C or Objective-C. Finally, there is Cycript. This is an extension to JavaScript which interfaces with the Objective-C runtime and allows you to use Objective-C style code in your JavaScript, and inject that into other processes. This is likely the cleanest as it actually uses a parser for the JavaScript, but I'm not a huge fan of that language in general. Basically, this is a two part question. Should, and how should, I create an extension to Python and Objective-C to allow me to do this? Is it worth writing a parser for my language to transform the syntax into something nicer, if it is only in a very specialized niche like this? Should I just live with the horrible syntax of the default Objective-C hooking or pyobjc?

    Read the article

  • Extending Python and Objective-C

    - by chpwn
    I'm a fan of clean code. I like my languages to be able to express what I'm trying to do, but I like the syntax to mirror that too. For example, I work on a lot of programs in Objective-C for jailbroken iPhones, which patch other code using the method_setImplementation() function of the runtime. Or, in PyObjC, I have to use the syntax UIView.initWithFrame_(), which is also pretty awful and unreadable with the way the method names are structured. In both cases, the language does not support this in syntax. I've found three basic ways that this is done: Insane macros. Take a look at this "CaptainHook", it does what I'm looking for in a usable way, but it isn't quite clean and is a major hack. There's also "Logos", which implements a very nice syntax, but is written in Perl parsing my code with a ton of regular expressions. This scares me. I like the idea of adding a %hook ClassName, but not by using regular expressions to parse C or Objective-C. Finally, there is Cycript. This is an extension to JavaScript which interfaces with the Objective-C runtime and allows you to use Objective-C style code in your JavaScript, and inject that into other processes. This is likely the cleanest as it actually uses a parser for the JavaScript, but I'm not a huge fan of that language in general. Should, and how should, I create an extension to Python and Objective-C to allow me to do this? Is it worth writing a parser for my language to transform the syntax into something nicer, if it is only in a very specialized niche like this? Should I just live with the horrible syntax of the default Objective-C hooking or PyObjC?

    Read the article

  • In C# should I reuse a function / property parameter to compute a temp result or create a temporary variable?

    - by Hamish Grubijan
    The example below may not be problematic as is, but it should be enough to illustrate a point. Imagine that there is a lot more work than trimming going on.

        public string Thingy
        {
            set
            {
                // I guess we can throw a null reference exception here on null.
                value = value.Trim(); // Well, imagine that there is so much processing to do
                this.thingy = value;  // That this.thingy = value.Trim() would not fit on one line ...
            }
        }

    So, if the assignment has to take two lines, then I either have to abuse/reuse the parameter, or create a temporary variable. I am not a big fan of temporary variables. On the other hand, I am not a fan of convoluted code. I did not include an example where a function is involved, but I am sure you can imagine it. One concern I have is if a function accepted a string and the parameter was "abused", and then someone changed the signature to ref in both places - this ought to mess things up, but ... who would knowingly make such a change if it already worked without a ref? Seems like it is their responsibility in this case. If I mess with the value of value, am I doing something non-trivial under the hood? If you think that both approaches are acceptable, then which do you prefer and why? Thanks.
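    For comparison, this is roughly what the temporary-variable version of the same setter could look like (a sketch only; the null check and the "lots of processing" step are placeholders):

        private string thingy;

        public string Thingy
        {
            set
            {
                if (value == null) throw new ArgumentNullException("value");

                // A local keeps the incoming argument untouched, so later changes to how
                // the parameter is passed or used elsewhere can't silently alter behaviour.
                string cleaned = value.Trim();
                // ...imagine much more processing on 'cleaned' here...
                this.thingy = cleaned;
            }
        }

    Functionally the two are equivalent here; the local mostly buys readability and a little insulation against later signature changes.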

    Read the article

  • How to obtain dependency metrics from Java source code?

    - by Bram Schoenmakers
    For an assignment we have to extract some software metrics from the Hibernate project. We have to extract the afferent and efferent coupling metrics (dependency fan-in and fan-out) for each revision of each package in Hibernate. Some tools that can extract these metrics were provided, such as ckjm and JDepend. Other tools I checked were Sonar, javancss and AOP. There is also the Metrics Eclipse plugin, which I didn't get to work either. What these tools have in common, as far as I can see, is that they all operate on bytecode (*.class files). This is a problem, because I have to build every revision from source in order to run, say, JDepend on it. Older revisions won't build because my development stack is too recent. What I would like to do is run this kind of analysis on source files so that I don't have to build each revision. Is this possible? Or is there a good reason why all these tools only operate on bytecode?
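
    If an approximation is acceptable, package-level coupling can be estimated directly from the source tree by reading package and import declarations, with no build step at all. A rough Python sketch under that assumption (the hibernate/src path is a placeholder; static and wildcard imports are handled only crudely, so treat the numbers as estimates rather than exact Ca/Ce values):

        import re
        from collections import defaultdict
        from pathlib import Path

        PACKAGE_RE = re.compile(r'^\s*package\s+([\w.]+)\s*;', re.MULTILINE)
        IMPORT_RE = re.compile(r'^\s*import\s+(?:static\s+)?([\w.]+)\.(?:\w+|\*)\s*;', re.MULTILINE)

        def coupling_metrics(source_root):
            """Return {package: (afferent, efferent)} estimated from import statements."""
            depends_on = defaultdict(set)   # package -> packages it imports (efferent, Ce)
            for path in Path(source_root).rglob('*.java'):
                text = path.read_text(encoding='utf-8', errors='ignore')
                pkg_match = PACKAGE_RE.search(text)
                pkg = pkg_match.group(1) if pkg_match else '(default)'
                for m in IMPORT_RE.finditer(text):
                    target = m.group(1)
                    if target != pkg:
                        depends_on[pkg].add(target)

            afferent = defaultdict(set)     # package -> packages that import it (Ca)
            for pkg, targets in depends_on.items():
                for target in targets:
                    afferent[target].add(pkg)

            packages = set(depends_on) | set(afferent)
            return {p: (len(afferent[p]), len(depends_on[p])) for p in packages}

        if __name__ == '__main__':
            # Placeholder path; run this once per checked-out revision.
            for pkg, (ca, ce) in sorted(coupling_metrics('hibernate/src').items()):
                print(f'{pkg}: Ca={ca} Ce={ce}')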

    Read the article

  • Excel VBA Userform Combobox problem

    - by Marc
    I'm having difficulties with a combobox in a userform in an Excel document. The combobox either doesn't appear in the userform, or it remains blank; when I enter any character in it, the list of items appears, but two or three times instead of just once. When I select an item, the chosen item doesn't appear in the box. It seems as if Excel picks one at random, and whichever item I choose from the list, it's always the same one that ends up being displayed in the box. Can anyone help me on this one? Thanks a lot!!! This is the code I used: Private Sub ComboBox1_Change() Select Case ComboBox1.Text Case "Een nieuwe start" Case "Alles heeft zijn tijd" Case "De wereld aan je voeten" Case "Een levend boek" Case "Drempels" Case "Kerstmis" Case "Confituur of choco" Case "Hoe groot is de hemel?" Case "Ongelovige Thomas" Case "Feesten" Case "Er is er één jarig!" Case "Eén van hart" Case "Ervoor gaan" Case "Groen gras" Case "RELatie" Case "Vele plaatjes" Case "Iedereen fan" Case "Schattenjacht" Case "Lichtbakens" Case "Rijke Luis" Case "Hemel op aarde" Case "Op bezoek" Case Else End Select End Sub Private Sub UserForm1_Initialize() ComboBox1.Clear ComboBox1.AddItem "Een nieuwe start" ComboBox1.AddItem "Alles heeft zijn tijd" ComboBox1.AddItem "De wereld aan je voeten" ComboBox1.AddItem "Een levend boek" ComboBox1.AddItem "Drempels" ComboBox1.AddItem "Kerstmis" ComboBox1.AddItem "Confituur of choco" ComboBox1.AddItem "Hoe groot is de hemel?" ComboBox1.AddItem "Ongelovige Thomas" ComboBox1.AddItem "Feesten" ComboBox1.AddItem "Er is er één jarig!" ComboBox1.AddItem "Eén van hart" ComboBox1.AddItem "Ervoor gaan" ComboBox1.AddItem "Groen gras" ComboBox1.AddItem "RELatie" ComboBox1.AddItem "Vele plaatjes" ComboBox1.AddItem "Iedereen fan" ComboBox1.AddItem "Schattenjacht" ComboBox1.AddItem "Lichtbakens" ComboBox1.AddItem "Rijke Luis" ComboBox1.AddItem "Hemel op aarde" ComboBox1.AddItem "Op bezoek" ComboBox1.Text = ComboBox1.List(0) End Sub

    Read the article

  • The IOC "child" container / Service Locator

    - by Mystagogue
    DISCLAIMER: I know there is debate between DI and service locator patterns. I have a question that is intended to avoid the debate. This question is for the service locator fans, who happen to think like Fowler "DI...is hard to understand...on the whole I prefer to avoid it unless I need it." For the purposes of my question, I must avoid DI (reasons intentionally not given), so I'm not trying to spark a debate unrelated to my question. QUESTION: The only issue I might see with keeping my IOC container in a singleton (remember my disclaimer above), is with the use of child containers. Presumably the child containers would not themselves be singletons. At first I thought that poses a real problem. But as I thought about it, I began to think that is precisely the behavior I want (the child containers are not singletons, and can be Disposed() at will). Then my thoughts went further into a philosophical realm. Because I'm a service locator fan, I'm wondering just how necessary the notion of a child container is in the first place. In a small set of cases where I've seen the usefulness, it has either been to satisfy DI (which I'm mostly avoiding anyway), or the issue was solvable without recourse to the IOC container. My thoughts were partly inspired by the IServiceLocator interface which doesn't even bother to list a "GetChildContainer" method. So my question is just that: if you are a service locator fan, have you found that child containers are usually moot? Otherwise, when have they been essential? extra credit: If there are other philosophical issues with service locator in a singleton (aside from those posed by DI advocates), what are they?
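
    As a sketch of why the child idea can stay cheap even with a singleton root: a child container only needs its own registrations, a fallback to its parent on lookup, and a Dispose() of its own. Minimal Python for illustration, with hypothetical names rather than any particular container library:

        class ServiceLocator:
            _root = None

            def __init__(self, parent=None):
                self._parent = parent
                self._factories = {}

            @classmethod
            def root(cls):
                # The root locator is a process-wide singleton.
                if cls._root is None:
                    cls._root = cls()
                return cls._root

            def register(self, key, factory):
                self._factories[key] = factory

            def resolve(self, key):
                # Look locally first, then walk up to the parent.
                if key in self._factories:
                    return self._factories[key]()
                if self._parent is not None:
                    return self._parent.resolve(key)
                raise KeyError(f'No registration for {key!r}')

            def create_child(self):
                # Children are ordinary, disposable objects -- not singletons.
                return ServiceLocator(parent=self)

            def dispose(self):
                self._factories.clear()

        # Usage: scoped overrides without touching the root.
        ServiceLocator.root().register('logger', lambda: 'global logger')
        child = ServiceLocator.root().create_child()
        child.register('logger', lambda: 'scoped logger')
        print(child.resolve('logger'))                   # scoped logger
        print(ServiceLocator.root().resolve('logger'))   # global logger
        child.dispose()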

    Read the article

  • Creating a Facebook session for getting page info

    - by Marty Haught
    I am trying to get info on a page that my user is admin for. This user has granted my fb_connect app offline access. I have saved the session_key that allows offline access (it has the user's id in it). I am able to publish to this fan page with this session key. But when I try to access the page's info I get a SessionExpired error. This doesn't make sense. Look at the code and output below: p is a 'profile' object that holds the three pieces of relevant fb data (user_id, session_key and page id) fb_session = Facebooker::Session.create => # fb_session.secure_with!(p.fb_session_key, p.fb_user_id, 0) => nil fb_session.user.has_permission?("offline_access") => true fb_session.user.has_permission?("publish_stream") => true fb_session.user.has_permission?("read_stream") => true pages = fb_session.fql_query("select fan_count from page where page_id = #{p.fb_page_id}") Facebooker::Session::SessionExpired: Session key invalid or no longer valid ... pages = fb_session.pages(:fields => {:page_ids => p.fb_page_id}) Facebooker::Session::SessionExpired: Session key invalid or no longer valid ... pages = Facebooker::Session.create.fql_query("select fan_count from page where page_id = #{p.fb_page_id}") => [#] Perhaps I'm not creating the session right, or maybe offline access doesn't give me access to the user's page even though I have permission to push to it. As you can see, when I just use an anonymous session I'm able to get the fan count, which I'm guessing is publicly available. Does anyone have an idea on this?

    Read the article

  • In C# should I reuse a function / property parameter to compute cleaner temporary value or create a

    - by Hamish Grubijan
    The example below may not be problematic as is, but it should be enough to illustrate a point. Imagine that there is a lot more work than trimming going on. public string Thingy { set { // I guess we can throw a null reference exception here on null. value = value.Trim(); // Well, imagine that there is so much processing to do this.thingy = value; // That this.thingy = value.Trim() would not fit on one line ... So, if the assignment has to take two lines, then I either have to abuse/reuse the parameter, or create a temporary variable. I am not a big fan of temporary variables. On the other hand, I am not a fan of convoluted code. I did not include an example where a function is involved, but I am sure you can imagine it. One concern I have is if a function accepted a string and the parameter was "abused", and then someone changed the signature to ref in both places - this ought to mess things up, but ... who would knowingly make such a change if it already worked without a ref? Seems like it is their responsibility in this case. If I mess with the value of value, am I doing something non-trivial under the hood? If you think that both approaches are acceptable, then which do you prefer and why? Thanks.

    Read the article

  • Gigabyte Onboard graphics card heating up and then crashing

    - by this is text comes from a db
    Five minutes after I turn on my PC, the onboard graphics card is usually over 80 °C and then it crashes (random colors on screen; the only way out is to pull the plug on the PC). I haven't installed any new drivers or added/changed hardware recently. Everything was fine until yesterday. What should I do next? Do I have to buy a new mainboard right now? There is no fan on the onboard graphics card, only a heatsink.

    Read the article

  • Powering-down USB Powered Devices on laptop sleep

    - by Carl
    I recently purchased a Targus Lap Chill Mat, which is a USB-powered device but doesn't have any interface, storage, or similar function. I'd like it to power down when my (MacBook) laptop sleeps (since the fan obviously doesn't need to be running when the laptop is idle) without having to unplug it. Any suggestions?

    Read the article

  • Acer Aspire One AOA 150 netbook health monitoring software

    - by iceman
    I have an Acer Aspire One AOA 150 with Windows XP Home. I want to monitor the voltages from the power supply, the temperature of the system, and the CPU and fan speeds. Much like the functionality of GKrellM (though I haven't used this on the netbook yet), a GTK applet designed to make an impressive panel of monitors, or xsensors in Linux. Is this possible with the netbook? I want software for Windows 7 and Windows XP.

    Read the article

  • Windows 7 search not finding files

    - by Rob Nicholson
    Can anyone please explain this quirk in Windows 7 search (not a big fan of it - I preferred the XP method, or at least having both)? With Outlook, you sometimes have to find and delete your OST file. It resides in the user's profile folder. How come searching the entire C: drive for *.ost files works - they are somewhere under c:\Users\rob.nicholson\appdata - but starting the search from c:\Users\rob.nicholson fails to find the files? Cheers, Rob.

    Read the article

  • SMPS stops when I plug in a SATA drive?

    - by claws
    Hello. Part 1: my first question is whether all the 4-wire power connectors (intended for hard disks/DVD drives, not the motherboard) are the same. Right? I've been using them all the same way and had no problem for years. Yesterday I borrowed a SATA disk from my friend and connected it to my computer using a SATA power adapter (4-wire), and when I switched on the computer there were fumes coming out of the connector. I immediately turned it off (within a second). I tested the voltages on the 4-wire power connector of my SMPS: they were 5.3 V and 12.2 V. I couldn't measure the current. But my SMPS label reads: DC Output: 3.3v (25A) +5v (32A) -5v (0.3A) +12V (17A) -12V (0.8A) And the SATA hard disk label reads: Input: +5v (0.72A) +12V (0.52A) I'm shocked! I never noticed this. Does the "SATA power adapter" scale the current down to what is required? If it doesn't, I've been connecting drives this way for years and never had any problem. This is the first time I'm encountering it. Part 2: I wanted to return the drive to my friend. He has two hard disks, SATA and PATA; it's the SATA one that I borrowed. When he switches his machine on, the CPU fan starts, stops for a second, then starts again and keeps running. That was the earlier situation; I don't know why it stops and starts. Now, when I connect this SATA disk and switch on the computer, the CPU fan starts (just for an instant, not even half a second) and stops. It doesn't start again - I mean the power from the SMPS has stopped. But if I disconnect this SATA disk, it works fine. What seems to be the problem? I have no idea why there were fumes or why his SMPS starts and stops giving power. What is its relation to the SATA disk connection?

    Read the article

< Previous Page | 16 17 18 19 20 21 22 23 24 25 26 27  | Next Page >