Search Results

Search found 9608 results on 385 pages for 'flash drives'.

  • My thoughts on the future of the web with respect to flash, plugins, etc…

    - by joelvarty
    More than 10 years ago I was coding Java applets. They were great at the time because I could reasonably expect them to run the same way in Netscape and Internet Explorer, and I could also reliably do asynchronous networking back to the server. But then Microsoft pulled their native Java runtime from Windows and Internet Explorer, and it got a lot harder to get applets running in people's browsers. So I started writing ActiveX controls for IE and Java applets for Netscape.

    Then I switched to Flash, not for too long, but it was enough for me to see that it was a capable and curious implementation of animation, multimedia and script. I even wrote a few Silverlight controls, but then I stopped.

    I stepped back from all of the "richness" and "interactivity" and I thought about things like accessibility and SEO. I wondered how my apps and sites might appear to the greater world. I wondered how the developers I am working with, or who might be inheriting my code down the road, might interact with it. And I thought to myself: what the hell was I thinking?

    Those embedded controls are not what the web is about, and they run contrary to nearly all of the things that make the web exciting and foster innovation within and around it. Those plugins or controls, or whatever you want to refer to them as, are only stop-gaps that fill a hole in the basic HTML/Script/CSS specifications, and that's all they should ever be used for. Full stop. Period.

    For instance, I still make use of a nifty little Flash control called SWFUpload, because it lets me check file size before an upload starts. I can do the same thing from a Silverlight control. But rest assured, if I could do this from native JavaScript, I would in a second. In fact, the only reason I chose SWFUpload over a ton of other alternatives is that it has a great JavaScript API, so I can do (nearly) all of the UI in regular HTML. And I ALWAYS provide a non-Flash alternative for uploading, and for the rest of any website where the designer has insisted on some piece of creativity that requires Flash (usually because the designer is also the Flash developer, but that's an aside...).

    The web is about openness, and about exposing that openness in such a way that it can be taken advantage of as a small part of a greater whole. Sure we need security and authentication and SSL and all that stuff, but for me it's something more profound. For me, the majority of what the web is about is exposing something that delivers meaning. What meaning can we derive from an <object> tag?

    more later - joel
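
    For the record, here is the kind of native check I keep wishing for: a minimal sketch using the emerging HTML5 File API, written as TypeScript. The element id and the size cap are made up for illustration; this is exactly the stop-gap SWFUpload fills today.

        // Minimal sketch: reject oversized files before any upload starts,
        // using the HTML5 File API instead of a Flash/Silverlight control.
        const MAX_BYTES = 5 * 1024 * 1024; // illustrative 5 MB cap

        const input = document.querySelector<HTMLInputElement>("#upload");
        if (input) {
          input.addEventListener("change", () => {
            const file = input.files?.[0];
            if (!file) return;
            if (file.size > MAX_BYTES) {
              alert("File is " + file.size + " bytes; the limit is " + MAX_BYTES + ".");
              input.value = ""; // clear the selection; nothing ever hits the network
              return;
            }
            // Size is acceptable: hand the file to the real upload code here.
          });
        }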

  • problems mounting an external IDE drive via USB in ubuntu

    - by Roy Rico
    I am having a problem connecting a specific IDE drive to my Linux box. It's an old drive which I just want to get about 3 GB of files off of.

    INFO: I am trying to connect a 200GB IDE Maxtor drive, internally and externally.

    Externally: I am using a self-powered USB IDE external drive enclosure which I have used to connect various drives, under Ubuntu and Windows, in the past. Other posts stated it could be a problem that I formatted the whole /dev/sdc device instead of the /dev/sdc1 partition, which I think I may have done when I originally formatted the drive.

    Internally: I only have one machine left that has an internal IDE interface, and it's got XP on it. I plugged this drive internally into this Windows XP machine and used the ext2/ext3 drivers to mount it, but some files have question marks (?) in the file names, which is messing up my copy process in Windows, and I can't delete those files under Windows. Ubuntu Linux will not install on my only remaining machine that has an IDE controller.

    I have tried the suggestions in the questions below:

    http://superuser.com/questions/88182/mount-an-external-drive-in-ubuntu
    http://superuser.com/questions/23210/ubuntu-fails-to-mount-usb-drive

    It looks like I can see the drive in /proc/partitions:

        $ cat /proc/partitions
        major minor  #blocks   name
        8     0      78125000  sda
        8     1      74894998  sda1
        8     2      1         sda2
        8     5      3229033   sda5
        8     16     199148544 sdb    <-- could be my drive?

    But it's not listed under fdisk -l:

        $ fdisk -l
        Disk /dev/sda: 80.0 GB, 80000000000 bytes
        255 heads, 63 sectors/track, 9726 cylinders
        Units = cylinders of 16065 * 512 = 8225280 bytes
        Disk identifier: 0xd0f4738c

        Device Boot    Start    End     Blocks     Id  System
        /dev/sda1  *   1        9324    74894998+  83  Linux
        /dev/sda2      9325     9726    3229065    5   Extended
        /dev/sda5      9325     9726    3229033+   82  Linux swap / Solaris

    And here is my log from /var/log/messages, with a bunch of weird output; can someone let me know what that weird output is?

        Mar 3 19:49:40 mala kernel: [687455.112029] usb 1-7: new high speed USB device using ehci_hcd and address 3
        Mar 3 19:49:41 mala kernel: [687455.248576] usb 1-7: configuration #1 chosen from 1 choice
        Mar 3 19:49:41 mala kernel: [687455.267450] Initializing USB Mass Storage driver...
        Mar 3 19:49:41 mala kernel: [687455.269180] scsi4 : SCSI emulation for USB Mass Storage devices
        Mar 3 19:49:41 mala kernel: [687455.269410] usbcore: registered new interface driver usb-storage
        Mar 3 19:49:41 mala kernel: [687455.269416] USB Mass Storage support registered.
        Mar 3 19:49:46 mala kernel: [687460.270917] scsi 4:0:0:0: Direct-Access Maxtor 6 Y200P0 YAR4 PQ: 0 ANSI: 2
        Mar 3 19:49:46 mala kernel: [687460.271485] sd 4:0:0:0: Attached scsi generic sg2 type 0
        Mar 3 19:49:46 mala kernel: [687460.278858] sd 4:0:0:0: [sdb] 398297088 512-byte logical blocks: (203 GB/189 GiB)
        Mar 3 19:49:46 mala kernel: [687460.280866] sd 4:0:0:0: [sdb] Write Protect is off
        Mar 3 19:50:16 mala kernel: [687460.283784] sdb:
        Mar 3 19:50:16 mala kernel: [687491.112020] usb 1-7: reset high speed USB device using ehci_hcd and address 3
        Mar 3 19:50:47 mala kernel: [687522.120030] usb 1-7: reset high speed USB device using ehci_hcd and address 3
        Mar 3 19:51:18 mala kernel: [687553.112034] usb 1-7: reset high speed USB device using ehci_hcd and address 3
        Mar 3 19:51:49 mala kernel: [687584.116025] usb 1-7: reset high speed USB device using ehci_hcd and address 3
        Mar 3 19:52:02 mala kernel: [687596.170632] type=1505 audit(1267671122.035:31): operation="profile_replace" pid=8426 name=/usr/lib/cups/backend/cups-pdf
        Mar 3 19:52:02 mala kernel: [687596.171551] type=1505 audit(1267671122.035:32): operation="profile_replace" pid=8426 name=/usr/sbin/cupsd
        Mar 3 19:52:06 mala kernel: [687600.908056] async/0 D c08145c0 0 7655 2 0x00000000
        Mar 3 19:52:06 mala kernel: [687600.908062] e5601d38 00000046 e5774000 c08145c0 e4c2a848 c08145c0 d203973a 0002713d
        Mar 3 19:52:06 mala kernel: [687600.908072] c08145c0 c08145c0 e4c2a848 c08145c0 00000000 0002713d c08145c0 f0a98c00
        Mar 3 19:52:06 mala kernel: [687600.908079] e4c2a5b0 c20125c0 00000002 e5601d80 e5601d44 c056f3be e5601d78 e5601d4c
        Mar 3 19:52:06 mala kernel: [687600.908087] Call Trace:
        Mar 3 19:52:06 mala kernel: [687600.908099] [<c056f3be>] io_schedule+0x1e/0x30
        Mar 3 19:52:06 mala kernel: [687600.908107] [<c01b2cf5>] sync_page+0x35/0x40
        Mar 3 19:52:06 mala kernel: [687600.908111] [<c056f8f7>] __wait_on_bit_lock+0x47/0x90
        Mar 3 19:52:06 mala kernel: [687600.908115] [<c01b2cc0>] ? sync_page+0x0/0x40
        Mar 3 19:52:06 mala kernel: [687600.908121] [<c020f390>] ? blkdev_readpage+0x0/0x20
        Mar 3 19:52:06 mala kernel: [687600.908125] [<c01b2ca9>] __lock_page+0x79/0x80
        Mar 3 19:52:06 mala kernel: [687600.908130] [<c015c130>] ? wake_bit_function+0x0/0x50
        Mar 3 19:52:06 mala kernel: [687600.908135] [<c01b459f>] read_cache_page_async+0xbf/0xd0
        Mar 3 19:52:06 mala kernel: [687600.908139] [<c01b45c2>] read_cache_page+0x12/0x60
        Mar 3 19:52:06 mala kernel: [687600.908144] [<c0232dca>] read_dev_sector+0x3a/0x80
        Mar 3 19:52:06 mala kernel: [687600.908148] [<c0233d3e>] adfspart_check_ICS+0x1e/0x160
        Mar 3 19:52:06 mala kernel: [687600.908152] [<c023339f>] ? disk_name+0xaf/0xc0
        Mar 3 19:52:06 mala kernel: [687600.908157] [<c0233d20>] ? adfspart_check_ICS+0x0/0x160
        Mar 3 19:52:06 mala kernel: [687600.908161] [<c02334de>] check_partition+0x10e/0x180
        Mar 3 19:52:06 mala kernel: [687600.908165] [<c02335f6>] rescan_partitions+0xa6/0x330
        Mar 3 19:52:06 mala kernel: [687600.908171] [<c0312472>] ? kobject_get+0x12/0x20
        Mar 3 19:52:06 mala kernel: [687600.908175] [<c0312472>] ? kobject_get+0x12/0x20
        Mar 3 19:52:06 mala kernel: [687600.908180] [<c039fc43>] ? get_device+0x13/0x20
        Mar 3 19:52:06 mala kernel: [687600.908185] [<c03c263f>] ? sd_open+0x5f/0x1b0
        Mar 3 19:52:06 mala kernel: [687600.908189] [<c020fda0>] __blkdev_get+0x140/0x310
        Mar 3 19:52:06 mala kernel: [687600.908194] [<c020f0ac>] ? bdget+0xec/0x100
        Mar 3 19:52:06 mala kernel: [687600.908198] [<c020ff7a>] blkdev_get+0xa/0x10
        Mar 3 19:52:06 mala kernel: [687600.908202] [<c0232f30>] register_disk+0x120/0x140
        Mar 3 19:52:06 mala kernel: [687600.908207] [<c0308b4d>] ? blk_register_region+0x2d/0x40
        Mar 3 19:52:06 mala kernel: [687600.908211] [<c03084f0>] ? exact_match+0x0/0x10
        Mar 3 19:52:06 mala kernel: [687600.908216] [<c0308cf0>] add_disk+0x80/0x140
        Mar 3 19:52:06 mala kernel: [687600.908221] [<c03084f0>] ? exact_match+0x0/0x10
        Mar 3 19:52:06 mala kernel: [687600.908225] [<c0308860>] ? exact_lock+0x0/0x20
        Mar 3 19:52:06 mala kernel: [687600.908230] [<c03c53df>] sd_probe_async+0xff/0x1c0

  • Windows Azure: Import/Export Hard Drives, VM ACLs, Web Sockets, Remote Debugging, Continuous Delivery, New Relic, Billing Alerts and More

    - by ScottGu
    Two weeks ago we released a giant set of improvements to Windows Azure, as well as a significant update of the Windows Azure SDK. This morning we released another massive set of enhancements to Windows Azure. Today's new capabilities include:

    - Storage: Import/Export Hard Disk Drives to your Storage Accounts
    - HDInsight: General Availability of our Hadoop Service in the cloud
    - Virtual Machines: New VM Gallery, ACL support for VIPs
    - Web Sites: WebSocket and Remote Debugging Support
    - Notification Hubs: Segmented customer push notification support with tag expressions
    - TFS & GIT: Continuous Delivery Support for Web Sites + Cloud Services
    - Developer Analytics: New Relic support for Web Sites + Mobile Services
    - Service Bus: Support for partitioned queues and topics
    - Billing: New Billing Alert Service that sends email notifications when your bill hits a threshold you define

    All of these improvements are now available to use immediately (note that some features are still in preview). Below are more details about them.

    Storage: Import/Export Hard Disk Drives to Windows Azure

    I am excited to announce the preview of our new Windows Azure Import/Export Service! The Windows Azure Import/Export Service enables you to move large amounts of on-premises data into and out of your Windows Azure Storage accounts. It does this by enabling you to securely ship hard disk drives directly to our Windows Azure data centers. Once we receive the drives we'll automatically transfer the data to or from your Windows Azure Storage account. This enables you to import or export massive amounts of data more quickly and cost effectively (and not be constrained by available network bandwidth).

    Encrypted Transport

    Our Import/Export service provides built-in support for BitLocker disk encryption, which enables you to securely encrypt data on the hard drives before you send it, and not have to worry about it being compromised even if the disk is lost/stolen in transit (since the content on the transported hard drives is completely encrypted and you are the only one who has the key to it). The drive preparation tool we are shipping today makes setting up BitLocker encryption on these hard drives easy.

    How to Import/Export your first Hard Drive of Data

    You can read our Getting Started Guide to learn more about how to begin using the import/export service. You can create import and export jobs via the Windows Azure Management Portal as well as programmatically using our Server Management APIs.

    It is really easy to create a new import or export job using the Windows Azure Management Portal. Simply navigate to a Windows Azure storage account, and then click the new Import/Export tab now available within it (note: if you don't have this tab make sure to sign-up for the Import/Export preview). Then click the "Create Import Job" or "Create Export Job" commands at the bottom of it. This will launch a wizard that easily walks you through the steps required.

    For more comprehensive information about Import/Export, refer to the Windows Azure Storage team blog. You can also send questions and comments to the [email protected] email address.

    We think you'll find this new service makes it much easier to move data into and out of Windows Azure, and it will dramatically cut down the network bandwidth required when working on large data migration projects. We hope you like it.

    HDInsight: 100% Compatible Hadoop Service in the Cloud

    Last week we announced the general availability release of Windows Azure HDInsight.
    HDInsight is a 100% compatible Hadoop service that allows you to easily provision and manage Hadoop clusters for big data processing in Windows Azure. This release is now live in production, backed by an enterprise SLA, supported 24x7 by Microsoft Support, and is ready to use for production scenarios.

    HDInsight allows you to use Apache Hadoop tools, such as Pig and Hive, to process large amounts of data in Windows Azure Blob Storage. Because data is stored in Windows Azure Blob Storage, you can choose to dynamically create Hadoop clusters only when you need them, and then shut them down when they are no longer required (since you pay only for the time the Hadoop cluster instances are running, this provides a super cost effective way to use them). You can create Hadoop clusters using either the Windows Azure Management Portal or using our PowerShell and Cross Platform Command line tools.

    The import/export hard drive support that came out today is a perfect companion service to use with HDInsight: the combination allows you to easily ingest, process and optionally export a limitless amount of data. We've also integrated HDInsight with our Business Intelligence tools, so users can leverage familiar tools like Excel in order to analyze the output of jobs. You can find out more about how to get started with HDInsight here.

    Virtual Machines: VM Gallery Enhancements

    Today's update of Windows Azure brings with it a new Virtual Machine gallery that you can use to create new VMs in the cloud. You can launch the gallery by doing New->Compute->Virtual Machine->From Gallery within the Windows Azure Management Portal. The new Virtual Machine Gallery includes some nice enhancements that make it even easier to use:

    - Search: You can now easily search and filter images using the search box in the top-right of the dialog. For example, simply type "SQL" and we'll filter to show those images in the gallery that contain that substring.
    - Category Tree-view: Each month we add more built-in VM images to the gallery. You can continue to browse these using the "All" view within the VM Gallery, or now quickly filter them using the category tree-view on the left-hand side of the dialog. For example, by selecting "Oracle" in the tree-view you can quickly filter to see the official Oracle supplied images.
    - MSDN and Supported checkboxes: With today's update we are also introducing filters that make it easy to filter out types of images that you may not be interested in. The first checkbox is MSDN: using this filter you can exclude any image that is not part of the Windows Azure benefits for MSDN subscribers (which have highly discounted pricing; you can learn more about the MSDN pricing here). The second checkbox is Supported: this filter will exclude any image that contains prerelease software, so you can feel confident that the software you choose to deploy is fully supported by Windows Azure and our partners.
    - Sort options: We sort gallery images by what we think customers are most interested in, but sometimes you might want to sort using different views. So we're providing some additional sort options, like "Newest," to customize the image list for what suits you best.
    - Pricing information: We now provide additional pricing information about images and options on how to cost effectively run them directly within the VM Gallery.

    The above improvements make it even easier to use the VM Gallery and quickly create, launch and run Virtual Machines in the cloud.

    Virtual Machines: ACL Support for VIPs

    A few months ago we exposed the ability to configure Access Control Lists (ACLs) for Virtual Machines using Windows PowerShell cmdlets and our Service Management API. With today's release, you can now configure VM ACLs using the Windows Azure Management Portal as well. You can do this by clicking the new Manage ACL command in the Endpoints tab of a virtual machine instance. This will enable you to configure an ordered list of permit and deny rules to scope the traffic that can access your VM's network endpoints. For example, if you were on a virtual network, you could limit RDP access to a Windows Azure virtual machine to only a few computers attached to your enterprise. Or if you weren't on a virtual network, you could alternatively limit traffic from public IPs that can access your workloads.

    Here are the default behaviors for ACLs in Windows Azure:

    - By default (i.e. no rules specified), all traffic is permitted.
    - When using only Permit rules, all other traffic is denied.
    - When using only Deny rules, all other traffic is permitted.
    - When there is a combination of Permit and Deny rules, all other traffic is denied.

    Lastly, remember that configuring endpoints does not automatically configure them within the VM if it also has firewall rules enabled at the OS level. So if you create an endpoint using the Windows Azure Management Portal, Windows PowerShell, or REST API, be sure to also configure your guest VM firewall appropriately as well.
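
    To make those defaults concrete, here is a small TypeScript sketch of the evaluation logic they describe. It is only an illustration of the documented behavior (the rule shape and matching are simplified), not how Windows Azure implements it.

        // Illustrative sketch of the ACL default behaviors listed above.
        type Action = "permit" | "deny";
        interface AclRule {
          action: Action;
          matches(sourceIp: string): boolean; // e.g. a remote-subnet test
        }

        function evaluateAcl(rules: AclRule[], sourceIp: string): Action {
          for (const rule of rules) {
            if (rule.matches(sourceIp)) return rule.action; // ordered list: first match wins
          }
          // No rule matched, so the documented defaults apply:
          if (rules.length === 0) return "permit";      // no rules: all traffic permitted
          const hasPermit = rules.some(r => r.action === "permit");
          return hasPermit ? "deny" : "permit";         // any Permit rule => default deny
        }
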
    Web Sites: Web Sockets Support

    With today's release you can now use Web Sockets with Windows Azure Web Sites. This feature enables you to easily integrate real-time communication scenarios within your web based applications, and is available at no extra charge (it even works with the free tier). Higher level programming libraries like SignalR and socket.io are also now supported with it.

    You can enable Web Sockets support on a web site by navigating to the Configure tab of a Web Site and toggling Web Sockets support to "on". Once Web Sockets is enabled you can start to integrate some really cool scenarios into your web applications. Check out the new SignalR documentation hub on www.asp.net to learn more about some of the awesome scenarios you can do with it.

    Web Sites: Remote Debugging Support

    The Windows Azure SDK 2.2 we released two weeks ago introduced remote debugging support for Windows Azure Cloud Services. With today's Windows Azure release we are extending this remote debugging support to also work with Windows Azure Web Sites. With live, remote debugging support inside of Visual Studio, you are able to have more visibility than ever before into how your code is operating live in Windows Azure. It is now super easy to attach the debugger and quickly see what is going on with your application in the cloud.

    Remote Debugging of a Windows Azure Web Site using VS 2013

    Enabling the remote debugging of a Windows Azure Web Site using VS 2013 is really easy. Start by opening up your web application's project within Visual Studio. Then navigate to the "Server Explorer" tab within Visual Studio, and click on the deployed web site you want to debug that is running within Windows Azure, using the Windows Azure->Web Sites node in the Server Explorer. Then right-click and choose the "Attach Debugger" option on it. When you do this Visual Studio will remotely attach the debugger to the Web Site running within Windows Azure. The debugger will then stop the web site's execution when it hits any break points that you have set within your web application's project inside Visual Studio.

    For example, I set a breakpoint on the "ViewBag.Message" assignment statement within the HomeController of the standard ASP.NET MVC project template. When I hit refresh on the "About" page of the web site within the browser, the breakpoint was triggered and I was able to debug the app remotely using Visual Studio. Note that we can debug variables (including autos/watchlist/etc), as well as use the Immediate and Command Windows. In that debug session I used the Immediate Window to explore some of the request object state, as well as to dynamically change the ViewBag.Message property. When we click the "Continue" button (or press F5) the app will continue execution and the Web Site will render the content back to the browser. This makes it super easy to debug web apps remotely.

    Tips for Better Debugging

    To get the best experience while debugging, we recommend publishing your site using the Debug configuration within Visual Studio's Web Publish dialog. This will ensure that debug symbol information is uploaded to the Web Site, which will enable a richer debug experience within Visual Studio. You can find this option on the Web Publish dialog on the Settings tab. When you ultimately deploy/run the application in production we recommend using the "Release" configuration setting: the release configuration is memory optimized and will provide the best production performance. To learn more about diagnosing and debugging Windows Azure Web Sites, read our new Troubleshooting Windows Azure Web Sites in Visual Studio guide.

    Notification Hubs: Segmented Push Notification support with tag expressions

    In August we announced the General Availability of Windows Azure Notification Hubs, a powerful Mobile Push Notifications service that makes it easy to send high volume push notifications with low latency from any mobile app back-end. Notification hubs can be used with any mobile app back-end (including ones built using our Mobile Services capability) and can also be used with back-ends that run in the cloud as well as on-premises.

    Beginning with the initial release, Notification Hubs allowed developers to send personalized push notifications to both individual users as well as groups of users by interest, by associating their devices with tags representing the logical target of the notification. For example, by registering all devices of customers interested in a favorite MLB team with a corresponding tag, it is possible to broadcast one message to millions of Boston Red Sox fans and another message to millions of St. Louis Cardinals fans with a single API call respectively.

    New support for using tag expressions to enable advanced customer segmentation

    With today's release we are adding support for even more advanced customer targeting. You can now identify customers that you want to send push notifications to by defining rich tag expressions. With tag expressions, you can now not only broadcast notifications to Boston Red Sox fans, but take that segmenting a step farther and reach more granular segments. This opens up a variety of scenarios, for example:

    - Offers based on multiple preferences, e.g. send a game day vegetarian special to users tagged as both a Boston Red Sox fan AND a vegetarian.
    - Push content to multiple segments in a single message, e.g. rain delay information only to users who are tagged as either a Boston Red Sox fan OR a St. Louis Cardinals fan.
    - Avoid presenting subsets of a segment with irrelevant content, e.g. a season ticket availability reminder to users who are tagged as a Boston Red Sox fan but NOT also a season ticket holder.

    To illustrate with code, consider a restaurant chain app that sends an offer related to a Red Sox vs Cardinals game for users in Boston. Devices can be tagged by your app with location tags (e.g. "Loc:Boston") and interest tags (e.g. "Follows:RedSox", "Follows:Cardinals"), and then a notification can be sent by your back-end to "(Follows:RedSox || Follows:Cardinals) && Loc:Boston" in order to deliver an offer to all devices in Boston that follow either the RedSox or the Cardinals. This can be done directly in your server backend send logic using the code below:

        var notification = new WindowsNotification(messagePayload);
        hub.SendNotificationAsync(notification, "(Follows:RedSox || Follows:Cardinals) && Loc:Boston");

    In your expressions you can use all Boolean operators: AND (&&), OR (||), and NOT (!). Some other cool use cases for tag expressions that are now supported include:

    - Social: to "all my group except me" - group:id && !user:id
    - Events: a touchdown event is sent to everybody following either team or any of the players involved in the action: Followteam:A || Followteam:B || followplayer:1 || followplayer:2 ...
    - Hours: send notifications at specific times, e.g. tag devices with time zone and when it is 12pm in Seattle send to: GMT8 && follows:thaifood
    - Versions and platforms: send a reminder to people still using your first version for Android - version:1.0 && platform:Android

    For help on getting started with Notification Hubs, visit the Notification Hub documentation center. Then download the latest NuGet package (or use the Notification Hubs REST APIs directly) to start sending push notifications using tag expressions. They are really powerful and enable a bunch of great new scenarios.
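
    To illustrate the matching semantics, here is a TypeScript sketch showing how such an expression corresponds to a Boolean predicate over a device's tag set. This sketches the logic only, not how the service evaluates expressions internally.

        // A tag expression is just a Boolean predicate over the set of tags
        // a device registered with.
        type TagExpr = (tags: Set<string>) => boolean;

        const tag = (name: string): TagExpr => tags => tags.has(name);
        const and = (a: TagExpr, b: TagExpr): TagExpr => tags => a(tags) && b(tags);
        const or  = (a: TagExpr, b: TagExpr): TagExpr => tags => a(tags) || b(tags);
        const not = (a: TagExpr): TagExpr => tags => !a(tags);

        // "(Follows:RedSox || Follows:Cardinals) && Loc:Boston"
        const expr = and(or(tag("Follows:RedSox"), tag("Follows:Cardinals")), tag("Loc:Boston"));

        console.log(expr(new Set(["Follows:RedSox", "Loc:Boston"])));  // true: device is notified
        console.log(expr(new Set(["Follows:RedSox", "Loc:Seattle"]))); // false: device is skipped
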
    TFS & GIT: Continuous Delivery Support for Web Sites + Cloud Services

    With today's Windows Azure release we are making it really easy to enable continuous delivery support with Windows Azure and Team Foundation Services. Team Foundation Services is a cloud based offering from Microsoft that provides integrated source control (with both TFS and Git support), build server, test execution, collaboration tools, and agile planning support. It makes it really easy to setup a team project (complete with automated builds and test runners) in the cloud, and it has really rich integration with Visual Studio.

    With today's Windows Azure release it is now really easy to enable continuous delivery support with both TFS and Git based repositories hosted using Team Foundation Services. This enables a workflow where, when code is checked in, built successfully on an automated build server, and all tests pass on it, I can automatically have the app deployed on Windows Azure with zero manual intervention or work required. The walkthrough below demonstrates how to quickly setup a continuous delivery workflow to Windows Azure with a Git-based ASP.NET MVC project hosted using Team Foundation Services.

    Enabling Continuous Delivery to Windows Azure with Team Foundation Services

    The project I'm going to enable continuous delivery with is a simple ASP.NET MVC project whose source code I'm hosting using Team Foundation Services.
    I did this by creating a "SimpleContinuousDeploymentTest" repository there using Git, and then used the new built-in Git tooling support within Visual Studio 2013 to push the source code to it. I can access the repository within Visual Studio 2013 and easily make commits with it (as well as branch, merge and do other tasks). Using VS 2013 I can also setup automated builds to take place in the cloud using Team Foundation Services every time someone checks in code to the repository.

    The cool thing about this is that I don't have to buy or rent my own build server: Team Foundation Services automatically maintains its own build server farm and can automatically queue up a build for me (for free) every time someone checks in code using the above settings. This build server (and automated testing) support now works with both TFS and Git based source control repositories.

    Connecting a Team Foundation Services project to Windows Azure

    Once I have a source repository hosted in Team Foundation Services with Automated Builds and Testing set up, I can then go even further and set it up so that it will be automatically deployed to Windows Azure when a source code commit is made to the repository (assuming the Build + Tests pass). Enabling this is now really easy.

    To set this up with a Windows Azure Web Site, simply use the New->Compute->Web Site->Custom Create command inside the Windows Azure Management Portal. I gave the web site a name and then made sure the "Publish from source control" checkbox was selected. When we click next we'll be prompted for the location of the source repository; we'll select "Team Foundation Services". Once we do this we'll be prompted for the Team Foundation Services account that our source repository is hosted under (in this case my TFS account is "scottguthrie"). When we click the "Authorize Now" button we'll be prompted to give Windows Azure permissions to connect to the Team Foundation Services account. Once we do this we'll be prompted to pick the source repository we want to connect to. Starting with today's Windows Azure release you can now connect to both TFS and Git based source repositories; this new support allows me to connect to the "SimpleContinuousDeploymentTest" repository we created earlier.

    Clicking the finish button will then create the Web Site with the continuous delivery hooks setup with Team Foundation Services. Now every time someone pushes source control to the repository in Team Foundation Services, it will kick off an automated build, run all of the unit tests in the solution, and if they pass the app will be automatically deployed to our Web Site in Windows Azure. You can monitor the history and status of these automated deployments using the Deployments tab within the Web Site. This enables a really slick continuous delivery workflow, and enables you to build and deploy apps in a really nice way.

    Developer Analytics: New Relic support for Web Sites + Mobile Services

    With today's Windows Azure release we are making it really easy to enable Developer Analytics and Monitoring support with both Windows Azure Web Sites and Windows Azure Mobile Services. We are partnering with New Relic, who provide a great dev analytics and app performance monitoring offering, to enable this, and we have updated the Windows Azure Management Portal to make it really easy to configure.

    Enabling New Relic with a Windows Azure Web Site

    Enabling New Relic support with a Windows Azure Web Site is now really easy. Simply navigate to the Configure tab of a Web Site and scroll down to the "developer analytics" section that is now within it. Clicking the "add-on" button will display some additional UI. If you don't already have a New Relic subscription, you can click the "view windows azure store" button to obtain one (note: New Relic has a perpetually free tier, so you can enable it even without paying anything).

    Clicking the "view windows azure store" button will launch the integrated Windows Azure Store experience we have within the Windows Azure Management Portal. You can use this to browse a variety of great add-on services, including New Relic. Select "New Relic" within the dialog, then click the next button, and you'll be able to choose which type of New Relic subscription you wish to purchase. For this demo we'll simply select the "Free Standard Version", which does not cost anything and can be used forever.

    Once we've signed-up for our New Relic subscription and added it to our Windows Azure account, we can go back to the Web Site's configuration tab and choose to use the New Relic add-on with our Windows Azure Web Site. We can do this by simply selecting it from the "add-on" dropdown (it is automatically populated within it once we have a New Relic subscription in our account). Clicking the "Save" button will then cause the Windows Azure Management Portal to automatically populate all of the needed New Relic configuration settings to our Web Site.

    Deploying the New Relic Agent as part of a Web Site

    The final step to enable developer analytics using New Relic is to add the New Relic runtime agent to our web app. We can do this within Visual Studio by right-clicking on our web project and selecting the "Manage NuGet Packages" context menu. This will bring up the NuGet package manager. You can search for "New Relic" within it to find the New Relic agent. Note that there is both a 32-bit and a 64-bit edition of it; make sure to install the version that matches how your Web Site is running within Windows Azure (note: you can configure your Web Site to run in either 32-bit or 64-bit mode using the Web Site's "Configuration" tab within the Windows Azure Management Portal). Once we install the NuGet package we are all set to go. We'll simply re-publish the web site again to Windows Azure and New Relic will now automatically start monitoring the application.

    Monitoring a Web Site using New Relic

    Now that the application has developer analytics support with New Relic enabled, we can launch the New Relic monitoring portal to start monitoring its health. We can do this by clicking on the "Add Ons" tab in the left-hand side of the Windows Azure Management Portal and then selecting the New Relic add-on we signed-up for within it. The Windows Azure Management Portal will provide some default information about the add-on when we do this. Clicking the "Manage" button in the tray at the bottom will launch a new browser tab and single-sign us into the New Relic monitoring portal associated with our account. We can now see insights into how our app is performing, without having to have written a single line of monitoring code.

    The New Relic service provides a ton of great built-in monitoring features allowing us to quickly see:

    - Performance times (including browser rendering speed) for the overall site and individual pages. You can optionally set alert thresholds to trigger if the speed does not meet a threshold you specify.
    - Information about where in the world your customers are hitting the site from (and how performance varies by region).
    - Details on the latency performance of external services your web apps are using (for example: SQL, Storage, Twitter, etc).
    - Error information, including call stack details for exceptions that have occurred at runtime.
    - SQL Server profiling information, including which queries executed against your database and what their performance was.
    - And a whole bunch more...

    The cool thing about New Relic is that you don't need to write monitoring code within your application to get all of the above reports (plus a lot more). The New Relic agent automatically enables the CLR profiler within applications and automatically captures the information necessary to identify these. This makes it super easy to get started and immediately have a rich developer analytics view for your solutions with very little effort. If you haven't tried New Relic out yet with Windows Azure I recommend you do so; I think you'll find it helps you build even better cloud applications. Following the above steps will help you get started and deliver you a really good application monitoring solution in only minutes.

    Service Bus: Support for partitioned queues and topics

    With today's release, we are enabling support within Service Bus for partitioned queues and topics. Enabling partitioning enables you to achieve a higher message throughput and better availability from your queues and topics. Higher message throughput is achieved by implementing multiple message brokers for each partitioned queue and topic. The multiple messaging stores will also provide higher availability. You can create a partitioned queue or topic by simply checking the Enable Partitioning option in the custom create wizard for a Queue or Topic. Read this article to learn more about partitioned queues and topics and how to take advantage of them today.

    Billing: New Billing Alert Service

    Today's Windows Azure update enables a new Billing Alert Service Preview that enables you to get proactive email notifications when your Windows Azure bill goes above a certain monetary threshold that you configure. This makes it easier to manage your bill and avoid potential surprises at the end of the month.

    With the Billing Alert Service Preview, you can now create email alerts to monitor and manage your monetary credits or your current bill total. To set up an alert, first sign-up for the free Billing Alert Service Preview. Then visit the account management page, click on a subscription you have setup, and then navigate to the new Alerts tab that is available. The alerts tab allows you to setup email alerts that will be sent automatically once a certain threshold is hit. For example, by clicking the "add alert" button I can setup a rule to send myself email anytime my Windows Azure bill goes above $100 for the month.

    The Billing Alert Service will evolve to support additional aspects of your bill as well as support multiple forms of alerts such as SMS. Try out the new Billing Alert Service Preview today and give us feedback.

    Summary

    Today's Windows Azure release enables a ton of great new scenarios, and makes building applications hosted in the cloud even easier. If you don't already have a Windows Azure account, you can sign-up for a free trial and start using all of the above features today. Then visit the Windows Azure Developer Center to learn more about how to build apps with it.

    Hope this helps,

    Scott

    P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

  • Simple Remote Shared Object with Red5 Flash Server

    - by John Russell
    Hello, I am trying to create a simple chat client using the Red5 media server, but I seem to be having a slight hiccup. I am creating a shared object on the server side, and it seems to be created successfully. However, when I make changes to the object via the client (type a message), the SYNC event fires, but the content within the shared object remains empty. I suspect I am doing something wrong on the Java end. Any advice?

    Console results:

        Success!
        Server Message: clear
        Server Message: [object Object]
        Local message: asdf
        Server Message: change
        Server Message: [object Object]
        Local message: fdsa
        Server Message: change
        Server Message: [object Object]
        Local message: fewa
        Server Message: change
        Server Message: [object Object]

    Server side:

        package org.red5.core;

        import java.util.List;

        import org.red5.server.adapter.ApplicationAdapter;
        import org.red5.server.api.IConnection;
        import org.red5.server.api.IScope;
        import org.red5.server.api.service.ServiceUtils;
        import org.red5.server.api.so.ISharedObject;

        // import org.apache.commons.logging.Log;
        // import org.apache.commons.logging.LogFactory;

        public class Application extends ApplicationAdapter {

            private IScope appScope;
            // private static final Log log = LogFactory.getLog(Application.class);

            /** {@inheritDoc} */
            @Override
            public boolean connect(IConnection conn, IScope scope, Object[] params) {
                appScope = scope;
                // Creates the general chat shared object
                createSharedObject(appScope, "generalChat", false);
                return true;
            }

            /** {@inheritDoc} */
            @Override
            public void disconnect(IConnection conn, IScope scope) {
                super.disconnect(conn, scope);
            }

            public void updateChat(Object[] params) {
                // Declares and stores general chat data in the general chat shared object
                ISharedObject so = getSharedObject(appScope, "generalChat");
                so.setAttribute("point", params[0].toString());
            }
        }

    Client side:

        package {
            import flash.display.MovieClip;
            import flash.events.*;
            import flash.net.*;

            // This class is going to handle all data to and from the media server
            public class SOConnect extends MovieClip {

                var nc:NetConnection = null;
                var so:SharedObject;

                public function SOConnect():void {
                }

                public function connect():void {
                    // Create a NetConnection and connect to red5
                    nc = new NetConnection();
                    nc.addEventListener(NetStatusEvent.NET_STATUS, netStatusHandler);
                    nc.connect("rtmp://localhost/testChat");

                    // Create a StoredObject for general chat
                    so = SharedObject.getRemote("generalChat", nc.uri, false);
                    so.connect(nc);
                    so.addEventListener(SyncEvent.SYNC, receiveChat);
                }

                public function sendChat(msg:String):void {
                    trace("Local message: " + msg);
                    nc.call("updateChat", null, msg);
                }

                public function receiveChat(e:SyncEvent):void {
                    for (var i in e.changeList) {
                        trace("Server Message: " + e.changeList[i].code);
                        trace("Server Message: " + e.changeList[i]);
                    }
                }

                // Given the result, determine whether the connection succeeded
                private function netStatusHandler(e:NetStatusEvent):void {
                    if (e.info.code == "NetConnection.Connect.Success") {
                        trace("Success!");
                    } else {
                        trace("Failure!\n");
                        trace(e.info.code);
                    }
                }
            }
        }

  • Flash -> ByteArray -> AMFPHP -> Invalid Image !??

    - by undefined
    Hi, I'm loading images into Flash and using JPGEncoder to encode each image to a ByteArray and send it to AMFPHP, which writes the ByteArray out to a file. This all appears to work correctly, and I can open the resulting file in Photoshop CS4 absolutely fine. But when I try to open it from the desktop, or open it back in Flash, it doesn't work; Picasa, my default image browser, says "Invalid". Here is the code I use to write the ByteArray to a file:

        $jpg = $GLOBALS["HTTP_RAW_POST_DATA"];
        file_put_contents($filename, $jpg);

    That's it. I use the NetConnection class to connect and call the service. Do I need to say I'm sending JPG data? I assumed that JPGEncoder took care of that. How can I validate the ByteArray before writing the file? Do I need to set a MIME type or something? Excuse the slightly noob questions; a little knowledge can be a dangerous thing. Thanks.

    --- Part II ---

    Here is some code. 1) Load the image into the Flash player:

        item.load();

        function _onImageDataLoaded(evt:Event):void {
            var tmpFileRef:FileReference = FileReference(evt.target);
            image_loader = new Loader();
            image_loader.contentLoaderInfo.addEventListener(Event.COMPLETE, _onImageLoaded);
            image_loader.loadBytes(tmpFileRef.data);
        }

        function _onImageLoaded(evt:Event):void {
            bitmap = Bitmap(evt.target.content);
            bitmap.smoothing = true;
            if (bitmap.width > MAX_WIDTH || bitmap.height > MAX_HEIGHT) {
                resizeBitmap(bitmap);
            }
            uploadResizedImage(bitmap);
        }

        function resizeBitmap(target:Bitmap):void {
            if (target.height > target.width) {
                target.width = MAX_WIDTH;
                target.scaleY = target.scaleX;
            } else if (target.width >= target.height) {
                target.height = MAX_HEIGHT;
                target.scaleX = target.scaleY;
            }
        }

        function uploadResizedImage(target:Bitmap):void {
            var _bmd:BitmapData = new BitmapData(target.width, target.height);
            _bmd.draw(target, new Matrix(target.scaleX, 0, 0, target.scaleY));
            var encoded_jpg:JPGEncoder = new JPGEncoder(90);
            var jpg_binary:ByteArray = encoded_jpg.encode(_bmd);
            _uploadService = new NetConnection();
            _uploadService.objectEncoding = ObjectEncoding.AMF3;
            _uploadService.connect("http://.../amfphp/gateway.php");
            _uploadService.call("UploadService.receiveByteArray", new Responder(success, error), jpg_binary, currentImageFilename);
        }

    Many thanks for your help.
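
    One cheap sanity check before writing the file is the JPEG magic bytes: a valid JPEG starts with the SOI marker FF D8 and ends with the EOI marker FF D9. Below is a sketch of that check (TypeScript/Node for illustration; the same two byte comparisons port directly to the PHP side before the file_put_contents call). A failed check suggests the raw POST body was altered in transit rather than a bad encode.

        // Sketch: verify a received byte buffer at least looks like a JPEG.
        import { readFileSync } from "node:fs";

        function looksLikeJpeg(bytes: Buffer): boolean {
          if (bytes.length < 4) return false;
          const hasSoi = bytes[0] === 0xff && bytes[1] === 0xd8;            // start-of-image marker
          const hasEoi = bytes[bytes.length - 2] === 0xff &&
                         bytes[bytes.length - 1] === 0xd9;                  // end-of-image marker
          return hasSoi && hasEoi;
        }

        console.log(looksLikeJpeg(readFileSync("upload.jpg"))); // true for an intact JPEG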

  • Integrate flex 3.5 projects in flash builder 4 beta 2

    - by Cyrill Zadra
    Hi, I'm currently using Flex Builder 3 and Flex SDK 3.5 for my projects, but I'd like to try out the new Flash Builder 4. So I downloaded and installed the new software, configured all the additional software like Subversion and the server adapter, and finally imported my two projects:

    1) Main project (includes a .swc generated by the library project) (Flex SDK 3.5)
    2) Library project (Flex SDK 3.4)

    After the import and a project cleanup, the project runs perfectly. But as soon as I replace the existing LibraryProject.swc with a new one (compiled with Flash Builder 4 beta 2, SDK 3.4) I get:

        VerifyError: Error #1014: class mx.containers::Canvas not found.
        VerifyError: Error #1014: class mx.containers::HBox not found.
        VerifyError: Error #1014: class IWatcherSetupUtil not found.

    ...and several other "not found" errors. Has anyone seen the same error? How can I get my project running again?

    thanks & regards, cyrill

  • Zend Framework: Flash Messenger, add a message from the model

    - by Dan
    Any idea on how best to add a message to flash messenger from the model? As FlashMessenger is an action helper, this seems not to be possible, so the obvious solution is to create an internal message object in the model, and return that to the controller from where you can use addMessage(). But if you want to return something else as well, this falls down. Another idea is an additional session namespace for these internal messages, which is then merged in with the Flash Messenger namespace messages at output time? Anyone have any thoughts or experience on this? Cheers.

  • adding a flash object to FancyBox using jquery swfobject

    - by Freeman
    Fellows, I don't seem to be getting a solution. I have a video gallery which I create dynamically from a database. I have managed to create a Flash object which plays well without FancyBox; however, I want a person to click on the gallery, and then an swfobject is created which is then shown in a FancyBox. Here is part of my JavaScript, which is not working (I've added the closing braces for the $.ajax call that were cut off):

        $.ajax({
            data: "search=vids&id=" + art,
            type: "POST",
            url: "doyit.php",
            success: function(msg) {
                $(".videogallery").html('<h3>Click to view videos of ' + name + '</h3>' + msg);
                $('.video').bind('click', function(ev) {
                    var vid = $(this).attr("href");
                    var img = $(this).find("img").attr("src");
                    $(this).flash({
                        swf: "mediaplayer.swf",
                        flashvars: { file: vid, image: img }
                    });
                    $(this).fancybox({
                        ajax: { type: "POST" },
                        'padding': 0,
                        'autoScale': false,
                        'transitionIn': 'none',
                        'transitionOut': 'none',
                        'title': this.title,
                        'width': 680,
                        'height': 495
                    });
                    $(this).unbind(ev);
                    return false;
                });
            }
        });

    Any suggestions?

  • How to prevent Autoplay and run my own app when inserting a USB flash drive

    - by Thomas
    When inserting a USB flash drive, Windows normally opens the Autoplay dialog, which offers to browse the drive or, if there are multimedia files, offers to choose an app to open them. We developed a media player device that connects over USB and registers itself as a Mass Storage Device. What I need is that when the player is inserted, this dialog is not shown and my own application is launched instead. Ideally the application would be on the flash drive itself, but as I understand it, Autorun is disabled for USB drives. It would be enough if a preinstalled application were launched. I already tried to catch the WM_DEVICECHANGE message, but this only works if my application is the topmost window; otherwise the Autoplay dialog is displayed. Best, Tom

  • Can Flash player play .m3u audio files from a remote location

    - by undefined
    Is it possible to play a .m3u file streamed from a remote location through Flash Player in a browser? I have a player that loads and plays .mp3 files, but I also want to be able to play .m3u files. I have looked at the as3plsreader on Google Code, but I think this is only for AIR and desktop files. Has anyone tried this, or know where I should start looking for an answer? If I weren't to use Flash, what other ways could I get remote .m3u files to play in a browser?
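
    For what it's worth, an .m3u file is just a plain-text playlist: one entry per line, with comment/metadata lines starting with "#". So one approach is to fetch the playlist, parse it yourself, and feed the .mp3 entries to the player you already have. A sketch of the parsing step (TypeScript for illustration; loading it from another domain is still subject to the usual cross-domain policy):

        // Sketch: download an .m3u playlist and extract the track URLs.
        async function loadPlaylist(url: string): Promise<string[]> {
          const res = await fetch(url);
          const text = await res.text();
          return text
            .split(/\r?\n/)                // one entry per line
            .map(line => line.trim())
            .filter(line => line.length > 0 && !line.startsWith("#")); // drop #EXTM3U/#EXTINF metadata
        }

        loadPlaylist("http://example.com/stations.m3u")
          .then(tracks => console.log(tracks)); // hand these to the existing mp3 player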

  • clock and date showing on a live site but not on localhost

    - by grumpypanda
    I've got clock.swf and date.swf working fine on a live site, and now I am using the same code to set up a local development environment. Everything is working well except that clock.swf and date.swf stopped working on localhost. I get two of the same yellow errors, "You need to update your Flash plugin. Click here if you want to continue.", but of course my Flash player is up to date, since the live site is working fine. I'll post the code below which I think has caused the error. I've been searching online for the last couple of hours but no luck; has anyone run into an issue like this before? What could be the possible cause? Any help is appreciated.

    This is on the index.php (I can post more code here if needed):

        <?php embed_flash("swf/clock.swf", CLOCK_WIDTH, CLOCK_HEIGHT, "8", '', "flashcontent"); ?>
        <?php embed_flash("swf/date.swf", DATE_WIDTH, DATE_HEIGHT, "8", '', "flashcontent_date"); ?>

    configure.php:

        define('CLOCK_WIDTH', '450');
        define('CLOCK_HEIGHT', '');
        define('DATE_WIDTH', '440');
        define('DATE_HEIGHT', '');

    flash_function.php:

        <?php
        function embed_flash($name, $w, $h, $version, $bgcolor, $id) {
            $cacheBuster = rand();
            $padTop = $h / 3;
        ?>
        <style>
            a.noflash:link, a.noflash:visited, a.noflash:active { color: #1860C2; text-decoration: none; background: #FFFFFF; }
            a.noflash:hover { color: #000; text-decoration: none; background: #EEEEEE; }
            .message { width: <?=$w;?>px; font-size: 12px; font-weight: normal; margin-bottom: 10px; padding: 5px; color: #EEE; background: orange; }
        </style>
        <div id="<?=$id;?>" align="center">
            <noscript>
                <div class="message">
                    Please enable <a href="https://www.google.com/support/adsense/bin/answer.py?answer=12654" target="_blank" class="noflash">&nbsp;JavaScript&nbsp;</a> to view this page properly.
                </div>
            </noscript>
            <div class="message">
                You need to update your Flash plugin. Click <a href="http://www.adobe.com/shockwave/download/download.cgi?P1_Prod_Version=ShockwaveFlash&promoid=BIOW" target="_blank" class="noflash">&nbsp;here&nbsp;</a> if you want to continue.
            </div>
        </div>
        <script type="text/javascript">
            // <![CDATA[
            var so = new SWFObject("<?=$name;?>", "", "<?=$w;?>", "<?=$h;?>", "<?=$version;?>", "<?=$bgcolor;?>");
            so.addParam("quality", "high");
            so.addParam("allowScriptAccess", "sameDomain");
            so.addParam("scale", "showall");
            so.addParam("loop", "false");
            so.addParam("wmode", "transparent");
            so.write("<?=$id;?>");
            // ]]>
        </script>
        <?php } ?>

  • Why does OS X insist on spinning up all external drives when loading a file from the local drive?

    - by Phillip Oldham
    Why does OS X insist on spinning up all the attached external drives (FireWire, USB) when loading a file from the local (internal) drive? It's driving me insane that I have to wait for 3 attached drives (1 backup, 2 media) to spin up, a total of 20s, to access a file that is located only on my local/internal drive. There is no obvious need to access the other drives: nothing is being read from them and nothing need be written.

    Examples:
    - QuickTime X opening a file from the local HDD.
    - Starting Caffeine, an app which doesn't access any other files at all.

    Can I tell OS X to only spin those drives up when actually accessing them?

  • Where do SATA drives get their power when using a PCI SATA controller on older PCs?

    - by Sukima
    I've been looking at PCI SATA controller cards online for an older PC and noticed that the ports only have the SATA data connector, and the power supply does not have SATA power connectors. I also had a few external eSATA drives which don't power up unless I also plug in the USB cable. I therefore realize that SATA and eSATA cables do not carry power, which needs to come from elsewhere.

    When converting older PCs to use a PCI SATA controller, how do you provide power to the SATA drives? Anticipating that the answer is some kind of adapter cable (which I was unable to search for): can older power supplies handle the added drives? (Assuming a 4-port SATA controller means 4 more drives the power supply has to endure.) Or do you have to get a second power supply and kind of jerry-rig it into an old case?

  • Can the Intel C600-series SAS controller RAID SATA drives?

    - by CyberShadow
    (Sorry if this is off-topic; technically this is a home workstation setting, but it concerns mostly-server technology.) I have a GA-X79S-UP5-WIFI motherboard and four ST3000DM001 drives, and the IRST manager isn't letting me put the drives in a RAID (it lists them, but the option to select the C600-series controller is simply grayed out). Have I made a mistake? Do I need SAS drives to RAID them on the SAS controller?

  • Can I set up a VM Host's drives to appear as if they belong to the VM OS?

    - by Sinaesthetic
    A couple of things. I added two of the host drives to the shared drives area of the VM, but neither of them shows up in the VM's OS. What I actually would like to do is configure two of these drives that are on the host OS to appear as if they are on the VM, so that I can share them through the VM. Sounds wonky, I know. The VM is Windows Server 2008 and the host is Mac OS X Lion. I would like to host my media drives through Windows Server rather than over OS X, as I have nothing but problems with the latter. I'm not sure if this is possible. Any input?

  • How to make network drives appear even if disconnected?

    - by Jake
    I have the same problem as many others: network and home drives set by Group Policy and AD are not connected on Windows startup. The prime suspect is that the LAN or wireless does not connect until after user log-in; I have already given up on that. Now I just want the disconnected drives to continue to be listed in My Computer, so that if the user goes in and double-clicks a drive, it connects again. However, on some machines the drive is completely missing from My Computer. If I right-click My Computer -> Map Network Drive again, it does work, but it's very troublesome to do that all the time. And I don't want to use a script to map the drives, because I don't want to appear to be using a hacky solution to the users; drives listed as disconnected look more like a "built-in feature" and give users more confidence. How can I keep the disconnected drives in My Computer? I am using Windows 7 Professional and Win2k8.

  • Is it possible that solid state drives (or any faster drive) will make common applications faster even if they are cached?

    - by leladax
    I assumed that solid state drives make no difference after, say, Firefox is fully loaded and no significant disk activity is going on after that. However, I wonder if some kind of 'cached from the disk to the CPU' activity is going on that may make solid state drives (or any faster drive) better even then. Then again, I suspect that may depend only on the bus (or some kind of cache memory the drives have). Hrm..

  • Why is it bad to map network drives in Windows?

    - by Beeblebrox
    There has been some spirited discussion within our IT department about mapping network drives. In particular, it has been said that mapping network drives is A Bad Thing and that adding DFS paths or network shares to your (Windows Explorer/Libraries) Favourites is a far better solution. Why is this the case?

    Personally I find the convenience of z:\folder to be better than \\server\path\folder, particularly with the cmd line and scripting (of course I'm not talking about hard-coded links, naturally!). I have tried searching for the pros and cons of mapped network drives, but I haven't seen anything other than "should the network go down, the drive will be unavailable". But this is a limitation of any network-accessed storage...

    I have also been told that mapped network drives poll the network when the network resource is unavailable; however, I haven't found more information on this. Wouldn't this still be an issue with other network access mechanisms (that is, mapped Favourites) whenever Windows tries to enumerate the file system (for example, when a file/folder picker dialog is opened)? Do network drives poll the network any more than a Windows Explorer library/favourite does?

  • SAS instead of SATA 2 for my hard drives?

    - by jasondavis
    I am building a new system soon, and I will have multiple 1-2 TB hard drives for storage in it. I only have experience using SATA II drives, but I saw somewhere that I should be using something like SAS. I read that if I were going to have 20 drives, I could use 4 SAS cables versus 20 SATA cables. Can someone help me understand this better? If it were only 4 cables, then how would 20 drives hook up? Also, can a regular SATA II drive hook up to that?

  • How do you passthrough native SATA drives to a guest on ESXi?

    - by John
    I have ESXi 4.0 running on an Intel DX58S0 motherboard with an Intel Core i7 930 processor. VT-d is also enabled. I have three drives in the system; drive 0 is used for ESXi. Drives 1 and 2 contain data from an older machine and show up under the "Storage Adapters" section in configuration. I would like to allow a guest machine to access the data on these drives (as natively as possible). I have enabled passthrough of the motherboard's built-in SATA controller (Intel/Marvell 88SE6121). This controller shows up in my guest OS, but the guest shows no drives aside from the normal virtual drive. I have tried a Linux guest and Windows 7. I have also configured the host machine to try IDE/RAID/AHCI modes for the SATA controller. Any ideas how I can configure one of my guests to get at the raw data on these drives?
