Search Results

Search found 21597 results on 864 pages for 'timer service'.

Page 610/864

  • Webcast - Set Your Sights on Enterprise 2.0 in the Cloud

    - by [email protected]
    To gain a competitive edge in your market, you need your business processes to be more collaborative, agile, and flexible to meet growing business demands. How can you make that happen? One way is to deploy portal, content management, and Enterprise 2.0 capabilities on a cloud infrastructure. According to top industry analysts, Enterprise 2.0 and cloud computing are two of the top three CIO initiatives in 2010.

    What are some of the advantages associated with deploying your Enterprise 2.0 initiatives in a cloud environment? Learn about the security, performance, and flexibility benefits that are available to you. Watch our complimentary live Webcast, Cloud Computing and Enterprise 2.0 -- Gain a Competitive Advantage, to get the answers you're looking for. Find out how Oracle pioneered the highly scalable and highly secure solutions that will enable you to:

    - Quickly deploy on a cloud computing infrastructure that can scale as projects go viral
    - Accelerate business processes, such as new product introduction, customer service, and new employee on-boarding
    - Take advantage of best practices in cloud computing and Enterprise 2.0 implementations

    Join us for this LIVE webcast tomorrow as we show you how to achieve a higher level of performance and flexibility with Enterprise 2.0 and cloud computing. Register today for the live Webcast.

    Read the article

  • Installing LBP 2900 printer -> libs folders wrong?

    - by Peter Smit
    I am trying to get my Canon LBP2900 printer to work on Ubuntu 11.10 64-bit. I tried to follow the steps on https://help.ubuntu.com/community/CanonCaptDrv190 So I downloaded the version 2.3 driver, converted the rpm files to debian packages, and installed them:

        sudo alien cndrvcups-capt-2.30-1.x86_64.rpm cndrvcups-common-2.30-1.x86_64.rpm
        sudo dpkg -i cndrvcups-capt-2.30-1.x86_64.deb cndrvcups-common-2.30-1.x86_64.deb

    Then I restarted cups and tried to install the printer with lpadmin:

        sudo service cups restart
        sudo /usr/sbin/lpadmin -p LBP2900 -m /usr/share/cups/model/CNCUPSLBP2900CAPTK.ppd -v ccp://localhost:59787 -E

    What I noticed, however, is that the lpadmin step fails with the error:

        lpadmin: Bad device-uri scheme "ccp"

    Tracing what went wrong, I think I nailed it down to the fact that dpkg installed a file /usr/lib64/cups/backend/ccp instead of /usr/lib/cups/backend/ccp. Checking the original rpm with archive manager shows that indeed /usr/lib and /usr/lib64 are both used, with the backend/ccp file only installed in lib64. If I understand correctly, Ubuntu 11.10 uses /usr/lib32 and /usr/lib instead, so the file is installed in the wrong place. Is there an automated method of converting rpm/deb files with the wrong lib structure to one with the right lib structure for Ubuntu 11.10? Or am I completely on the wrong track for getting my printer installed?
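    As a stopgap, a minimal sketch of a manual workaround (an assumption, not a verified fix: it presumes the only misplaced file is the ccp backend, uses the paths from the question, and must run as root):

        # Hypothetical workaround sketch: copy the CUPS 'ccp' backend that the
        # converted .deb installed under /usr/lib64 to the path Ubuntu's CUPS
        # actually scans, then restart cups and retry the lpadmin command above.
        import os
        import shutil
        import subprocess

        SRC = "/usr/lib64/cups/backend/ccp"  # where alien/dpkg placed it
        DST = "/usr/lib/cups/backend/ccp"    # where Ubuntu 11.10 CUPS looks

        if os.path.exists(SRC) and not os.path.exists(DST):
            shutil.copy2(SRC, DST)           # a symlink would also work
            os.chmod(DST, 0o755)             # backends must be executable
            subprocess.run(["service", "cups", "restart"], check=True)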

    Read the article

  • Webcast Tomorrow: Securing the Cloud for Public Sector

    - by Darin Pendergraft
    Click here to register for the live webcast. Cloud computing offers government organizations tremendous potential to enhance public value by helping organizations increase operational efficiency and improve service delivery. However, as organizations pursue cloud adoption to achieve the anticipated benefits, a common set of questions has surfaced: "Is the cloud secure? Are all clouds equal with respect to security and compliance? Is our data safe in the cloud?"

    Join us December 12th for a webcast, part of the "Secure Government Training Series," to get answers to your pressing cloud security questions and learn how best to secure your cloud environments. You will learn about a comprehensive set of security tools designed to protect every layer of an organization's cloud architecture, from application to disk, while ensuring high levels of compliance, risk avoidance, and lower costs. Discover how to control and monitor access, secure sensitive data, and address regulatory compliance across cloud environments by:

    - providing strong authentication, data encryption, and (privileged) user access control to ensure that information is only accessible to those who need it
    - mitigating threats across your databases and applications
    - protecting applications and information -- no matter where they are -- at rest, in use, and in transit

    For more information, access the Secure Government Resource Center or call 1.800.ORACLE1 to speak with an Oracle representative.

    LIVE Webcast: Securing the Cloud for Public Sector
    Date: Wednesday, December 12, 2012
    Time: 2:00 p.m. ET

    Visit the Secure Government Resource Center for information on enterprise security solutions that help government safeguard information, resources, and networks.

    Read the article

  • Uninstall Dell Wave

    - by Onion-Knight
    The image we put on our company laptops includes the Dell Wave interface for biometric login. The Wave UI increases boot time by about 5 minutes because it loads the fingerprint database (a feature I don't use), so I'm trying to uninstall it, but with little success. There is no line item in the Add/Remove Programs menu to formally delete it, nor is there a service I can stop or remove to disable the Wave UI. I've tried looking online, but all I find are hits for Google Wave and virus-removal forums with HijackThis dumps that include Dell Wave records. Any ideas?

    Read the article

  • Reflecting on 2010 and Looking into 2011

    - by Sam Abraham
    In early 2010, I blogged and shared my excitement as I was about to embark on a new journey, relocating to South Florida. As I settled down and adjusted to my new life, I was presented with an opportunity to get actively involved and volunteer in the local Florida .Net and Project Management communities. I have since devoted a significant portion of my time to community initiatives: coordinating the West Palm Beach .Net User Group, volunteering as a member of the INETA Speakers Bureau, and traveling to attend and speak at .Net code camps and user groups throughout Florida and New York. I have also taken on various volunteer roles at the South Florida Chapter of the Project Management Institute, starting as a core team member on the chapter's mentoring initiative and ending the year as Project Manager of the chapter's mentoring program and as Director of Electronic Communications on the chapter's IT team. I am also serving a one-year term (2010-2011) as secretary and founding board member of Florida's first official chapter of the International Association for Software Architects (IASA).

    A big thank you is due to those who afforded me the opportunity and privilege to take part in these initiatives, and to those who provided guidance and encouragement when I needed them most.

    Looking ahead into 2011, I hope to continue my community involvement and volunteer activities. I will start by dedicating the first 5 weekends of the new year to teaching a free comprehensive Microsoft PowerPoint class at church. My goal is to start from scratch and gradually cover the various PowerPoint features that can be leveraged to create captivating presentations. Starting in February, I will resume my user group and code camp speaking engagements at our South Florida .Net Code Camp and the West Palm Beach .Net User Group.

    I look forward to continuing to meet, chat, and share with our technical community members, and to another active year in community service.

    All the best,
    --Sam Abraham

    Read the article

  • Windows host MIA on network

    - by andrewbadera
    I've had a machine effectively disappear off my home office network:

        192.168.1.100 - Windows 7 laptop (on domain) - problem machine
        192.168.1.42  - Windows 2008 server (domain controller)
        192.168.1.101 - Windows 7 laptop (guest; not on domain)

    For some reason I am unable to ping, tracert, or remote desktop to 192.168.1.100 from .42 or .101. I can remote between .42 and .101 with no problem, however. .100 can neither ping nor remote desktop to .42 or .101. Remote Desktop access is enabled on .100. I've opened the firewall rules, I've disabled the firewall domain profile, and I've turned the firewall service off entirely. No matter what I do, the .100 host is unreachable by any other host on the network. I'm at my wit's end. Thanks in advance for any debugging advice!

    Read the article

  • Reduce Bookmarks in Chrome to Toolbar Icons

    - by Asian Angel
    Do you want to make the most efficient use of the space in Chrome's Bookmarks Toolbar? You can reduce the bookmarks to icons with just a few minutes' work. Note: You may or may not wish to do some reorganizing of your bookmarks beforehand.

    Condensing the Bookmarks: If your browser is anything like ours, it has not taken long to fill up your Bookmarks Toolbar, and accessing the drop-down overflow section throughout the day is not much fun. The bookmarks are the easiest part of your collection to condense. Right-click on each bookmark and select "Edit..." to open the Edit Bookmark window. Delete the name text, click OK, and you are finished. You still have a usable bookmark that looks nice and takes up very little room. These are our bookmarks from the first screenshot above -- no problems with accessing all of them now.

    With just a few minutes' work you can have a beautiful and compact Bookmarks Toolbar. If you have been looking for a more efficient and compact Bookmarks Toolbar in Chrome, this little hack will certainly be useful for you.

    Read the article

  • Help in (re)designing my Swing application

    - by Harihar Das
    I have developed a Swing application that controls the execution of several script-like jobs and needs to display the interim output of the jobs concurrently. I followed MVC while writing the application, and it is working as expected. But of late I have the following requirements in hand:

    - A few of the script jobs need special user privileges to execute, in order to access specialized resources. There seems to be no way in Java to impersonate a different user while running an application [examined in this question], and trying to run the Swing application as a scheduled task in Windows does not help either.
    - Once started, the jobs should keep running even if the user logs off after starting them.

    I am thinking of separating the execution logic from the UI and running it as a service, and introducing JMS between the two layers to store and retrieve the interim output (a sketch of this decoupling follows below). Note: I need to run this application on Windows. Any ideas on meeting my requirements will be highly appreciated.
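    Purely to illustrate the shape of the proposed decoupling (the real design would be Java with JMS; every name below is hypothetical), a minimal sketch of a headless worker that runs jobs and publishes interim output to a queue a separate UI layer can drain:

        # Sketch of the service/UI split: a worker runs script-like jobs and
        # pushes each interim output line onto a queue (standing in for JMS),
        # so jobs keep running regardless of who is logged on to view them.
        import queue
        import subprocess
        import sys
        import threading

        interim_output = queue.Queue()  # stand-in for the JMS queue/topic

        def run_job(job_id, command):
            proc = subprocess.Popen(command, stdout=subprocess.PIPE, text=True)
            for line in proc.stdout:
                interim_output.put((job_id, line.rstrip()))
            interim_output.put((job_id, "<job finished>"))

        # The service layer starts jobs in the background...
        cmd = [sys.executable, "-c", "print('interim line')"]
        threading.Thread(target=run_job, args=("job1", cmd), daemon=True).start()

        # ...while the UI layer (Swing, via JMS, in the real design) drains the queue.
        print(interim_output.get())
        print(interim_output.get())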

    Read the article

  • Annoying Search Behavior - Search Companion

    - by David Stein
    I'm running Windows XP Professional, and ever since the last service pack I've had a searching problem. When I want to search a network drive, I get the following message: "This folder is not indexed. To search this directory please use Search Companion or add this directory to your index via options." Basically, I have two questions. First, is there some way I can use the indexed search where appropriate and have it switch over to Search Companion automatically? Second, how does a programmer look at this behavior and think it is a good idea? I realize that this question is rhetorical. As it stands, I must enter my search string into one search, receive the error, and then click "Search Companion" to bring up the new search window. This window doesn't even take the defaults from the previous one, so I have to specify the search string and drive again.

    Read the article

  • Webcast: Sun2Oracle: Upgrading from DSEE to the next generation Oracle Unified Directory

    - by Etienne Remillon
    Interested in upgrading from DSEE to OUD? Register to learn from one customer. Oracle Unified Directory (OUD) is the world's first unified directory solution with highly integrated storage, synchronization, and proxy capabilities. These capabilities help meet the evolving needs of enterprise architectures. OUD customers can lower the cost of administration and ownership by maintaining a single directory for all of their enterprise needs, while also simplifying their enterprise architecture. OUD is optimized for mobile and cloud computing environments, where elastic scalability becomes critical as service providers need a solution that can scale by dynamically adding more directory instances without re-architecting their solutions to support exponential business growth.

    Join us for this webcast and you will:

    - Learn from one customer that has successfully upgraded to the new platform
    - See what technology and business drivers influenced the upgrade
    - Hear about the benefits of OUD's elastic scalability and unparalleled performance
    - Get additional information and resources for planning an upgrade

    Register now for this complimentary webcast: Sun2Oracle: Upgrading from DSEE to the next generation Oracle Unified Directory, Thursday, September 13, 2012, 10:00 a.m. PT / 1:00 p.m. ET.

    Read the article

  • WCF Communication Problem

    - by vincpa
    Two separate servers: an IIS7 web application on one is trying to connect to a WCF service on the other. The initial connection attempt always fails, sometimes the second as well; after that, everything works normally. What could be the cause of this problem? The exception that gets thrown is EndpointNotFoundException: Could not connect to net.tcp://192.168.0.83/MgrService/Manager.svc. The connection attempt lasted for a time span of 00:00:21.0289348. TCP error code 10060: A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond 192.168.0.83:808.
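    Whatever the root cause turns out to be (a cold DNS/ARP cache or service warm-up is a common suspect for connect-once-then-fine behavior), a usual mitigation is to retry the initial call with a short backoff. A language-agnostic sketch of that pattern (the real client would be a WCF proxy in C#; this Python version is illustrative only):

        # Generic retry-with-backoff sketch for a connection that fails only
        # on the first attempt. Endpoint taken from the question's error text.
        import socket
        import time

        def connect_with_retry(host, port, attempts=3, base_delay=1.0):
            for attempt in range(attempts):
                try:
                    return socket.create_connection((host, port), timeout=5)
                except OSError:
                    if attempt == attempts - 1:
                        raise
                    time.sleep(base_delay * (2 ** attempt))  # 1s, 2s, ...

        # Usage against the failing endpoint from the question:
        # sock = connect_with_retry("192.168.0.83", 808)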

    Read the article

  • Why is the size of an antivirus greater than that of an anti-malware tool? [on hold]

    - by Mistu4u
    Recently my computer was attacked by different kinds of worms and slowed down, so I tried to remove them by installing Avast Free Antivirus. The worms were copying themselves rapidly. After installing Avast, I observed that it only blocked new copies of the worms from being created; it could not delete the already created worms, and it could not even detect a good number of them. Then I downloaded Malwarebytes Anti-Malware and, to my surprise, found its service to be far better than Avast's: it detected and deleted almost 2065 worms and other malware from my computer, and now my computer is doing fine. As far as I know, anti-malware functionality is also included in antivirus products, yet here the antivirus performed worse. Now my question is: if antivirus products can perform worse than anti-malware tools, why is Avast 179 MB in size while Malwarebytes is only 9.81 MB?

    Read the article

  • Samba share not on network after upgrading to Ubuntu 12.04 LTS

    - by Sylvain Huard
    I just upgraded an old Ubuntu box to 12.04 LTS (machine named A-Ubuntu). This is an upgrade, not a format and re-install; all the accounts and config were preserved. The basic setup is a local network with two Ubuntu machines (let's say A-Ubuntu and B-Ubuntu) and a Mac (C-MAC). Before the upgrade, all of them could see each other by name, not only by IP address. The local network has a D-Link router where everybody is connected with RJ-45 wired Ethernet (not wi-fi). Since the A-Ubuntu upgrade, we can't see this machine's name on the network and its name is no longer on the machine list in the D-Link router; we can see its IP address only. I can't access A-Ubuntu from the other two by its name, but I can ping it by its address (192.168.0.109). From A-Ubuntu, I can connect to and see the shared Samba folders on B-Ubuntu and C-MAC, but from B-Ubuntu and C-MAC I can't connect to A-Ubuntu. Correct me if I'm wrong, but this tells me that Samba itself should be fine and the real problem is that A-Ubuntu does not advertise its name on the network, so the D-Link does not have it in its table and nobody else finds it. After a lot of googling, I see that it is the job of avahi and mDNS to do this. Those packages are running, and I have checked multiple config files for samba, avahi, and mdns against the examples on the web and against the working B-Ubuntu machine; they are the same. I did multiple service restarts of samba and avahi, and removed the firewall to make sure it does not block the hostname broadcast. I rebooted multiple times to make sure the changes I was making were effective. Still, I can't see the A-Ubuntu name on the network. Any idea what it could be, or where to look next?
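    One quick check from B-Ubuntu or C-MAC (an assumption about the setup, not a confirmed diagnosis): run avahi-resolve -n A-Ubuntu.local to see whether the name is being announced at all. The same test can be scripted:

        # Hedged diagnostic sketch: check whether the upgraded box's mDNS name
        # resolves from another host (assumes avahi-daemon/libnss-mdns on the
        # querying machine; the .local hostname below is an assumption).
        import socket

        HOST = "A-Ubuntu.local"
        try:
            addr = socket.getaddrinfo(HOST, None)[0][4][0]
            print(HOST, "resolves to", addr, "- the avahi announcement works")
        except socket.gaierror as err:
            print(HOST, "does not resolve:", err, "- the name is not advertised")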

    Read the article

  • RPC Server Unavailable When Trying to Join W2003 Server to W2003 Active Directory Domain

    - by Roel Vlemmings
    I have an Active Directory domain with a Windows 2003 Standard SP2 server as the DC. When trying to join an additional Windows 2003 Standard SP2 server to the domain, I get the message: "The following error occurred attempting to join the domain 'My Domain'. The RPC Server is unavailable." The computer is actually added to Active Directory Computers; I can even right-click and Manage it. I can access file shares on the DC from the other server and vice versa, and I can ping the DC from this server and ping the server from the DC using the computer name. The time on both servers is the same, more or less to the second. The RPC service is running on both servers, Windows Firewall is disabled on both computers, and I can join other computers to the domain; there are no other issues with the domain. NetSetup.LOG shows:

        NetpSetNetloginDomainCache: DSEnumerateDomainTrustsW failed 0x6ba

    I looked up this Win32 error code: it is RPC_S_SERVER_UNAVAILABLE.

    Read the article

  • DNSHost.exe trojan found, now after fix, no one can print

    - by Matt Dawdy
    What started today as an inability to get out to the internet (though people could get in just fine) morphed into the realization that the DNS server wasn't working, and then into the discovery that we had a trojan called DNSHost.exe (spybot.rl, I think). We disabled its service entry and deleted the offending file and all the registry keys listed on the Trend Micro site. Now we can get on the internet, but the printer served by this machine (called server2) cannot be printed to from any client machine on the network; we get the error "The RPC Server is unavailable". I'm assuming this is related to the DNS issue we had earlier, as we were able to print just fine until this fun started this morning. Anyone have any solid suggestions? The server is Windows Server 2003 R2 SP2, and the client machines are all Windows XP SP2.

    Read the article

  • Getting rid of a trojan. SVCHOST question

    - by MasterPeter
    My antivirus keeps notifying me of a trojan: svchost.exe keeps creating 'drivers' (.sys files in the drivers directory under system32 of my Windows XP installation), each of which is flagged as the Bubnix.AB trojan. The antivirus fails to remove many of the files, as they are immediately in use by svchost (I presume). How do I find out which service is the culprit, and why can't the antivirus effectively rid me of this plague? Also, how many svchost processes is it normal to have running at any one time? I am using Windows XP SP3 with ESET NOD32 antivirus.
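    To see which services each svchost instance hosts, tasklist /svc at a command prompt lists them per PID, which helps narrow down the culprit. The same mapping can be scripted; a rough sketch using the third-party psutil package (an assumption: it must be installed separately and run with admin rights):

        # Rough sketch: map each running svchost.exe PID to the Windows
        # services it hosts, using psutil (pip install psutil; run as admin).
        import psutil

        hosted = {}
        for svc in psutil.win_service_iter():
            try:
                info = svc.as_dict()
                if info["pid"]:
                    hosted.setdefault(info["pid"], []).append(info["name"])
            except psutil.Error:
                pass  # service disappeared or access denied; skip it

        for proc in psutil.process_iter(["pid", "name"]):
            if (proc.info["name"] or "").lower() == "svchost.exe":
                print(proc.info["pid"], hosted.get(proc.info["pid"], []))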

    Read the article

  • Domains with similar names and issues

    - by abel
    I recently purchased one of those domain names like del.ico.us. While registering, I found that delicious.com was already in use.

    Argument: I found that delicious.com belonged to the same category as my to-be website. It served premium delicious dishes.

    Counter-argument: My to-be domain, though belonging to the same category, specializes in serving free but delicious dishes, or in giving out (affiliate) links to other sites serving premium delicious dishes.

    Additional counter-arguments:
    1. delicious.com is not in English.
    2. The del.icio.us in my domain name, though having the same spelling, is not going to be used in the same fashion. For example (this may not make sense, because the names have been changed), the d in delicious on my website actually stands for the Greek letter Delta (Δ/δ), and since internationalized domains are still not easily typable, I am going for the English equivalent. The prefix holds importance for the theme of the service my website intends to offer.

    My question: Can I use the domain name del.icio.us for my website? How are these kinds of matters dealt with? (The domain names used are fictitious, and I have already registered the domain but have not started using it. I chanced upon this domain name because it was short, easy to remember, and suited the theme of my website.)

    Read the article

  • Book Review: Professional WCF 4

    - by Sam Abraham
    My investigation of WCF internals set the right stage to revisit Professional WCF 4 by Pablo Cibraro, Kurt Claeys, Fabio Cozzolino, and Johann Grabner. In this book, the authors dive deep into all aspects of the WCF API in a text targeted at intermediate and advanced developers. The book's quality in presentation, code completeness, content clarity, and organization is superb. The authors take a hands-on approach to thoroughly covering the WCF 4.0 API, with three chapters totaling 100+ pages dedicated entirely to business cases, with downloadable source code readily available. Chapter 1 outlines SOA best-practice considerations. The next three chapters take a top-down approach to the WCF API, covering service and data contracts, bindings, clients, instancing, and Workflow Services, followed by another carefully thought-out three chapters covering the security options available via the WCF API. In conclusion, Professional WCF 4 provides thorough coverage of the WCF API and is a recommended read for anybody looking to reinforce their understanding of the various features available in the WCF framework. Many thanks to the Wiley/Wrox User Group Program for their support of our West Palm Beach Developers' Group.

    All the best,
    --Sam

    Read the article

  • Portal and Content - Components, part 3 – Applied Customization Framework (4 of 7)

    - by Stefan Krantz
    Have you ever been challenged with a situation where your work task asks you to implement functionality in WebCenter Portal, and you browse through the Resource Catalog (Business Dictionary) and find the functionality you need -- yet when you get started there are small shortcomings, and you ask yourself:

    - How can I re-use what is out of the box?
    - What code do I need to use to produce similar functions and include my new requirements?
    - Must I write a new taskflow?

    The answer to the above questions is, in many cases, simply that you can do a taskflow customization of the out-of-the-box taskflows. In this post I will help you understand how to do such a customization. It is best described as a 4-step process; see the image flow below for illustration.

    Just to clarify a few naming confusions that might occur as you go through the above process:

    - Customization Role is a function within JDeveloper that allows you to implement view and flow customizations to existing taskflows.
    - WebCenter Portal -- Spaces Taskflow Customization Framework: this technology scope does not refer only to WebCenter Spaces; it also includes WebCenter Portal/Framework.
    - A taskflow customization does not overwrite or replace any code; it just creates an additional tip view of the taskflow in the MDS for the current application (WebCenter Portal or WebCenter Spaces).

    To sum up this simple procedure, I would also like to help you find your way around the main topic of this post series, which focuses primarily on content integration with WebCenter Portal. So where can you find content-related taskflows in the WebCenter libraries? The list below mentions some useful locations for taskflows and their page fragments.

    Library Reference - WebCenter Document Library Service View:

    - Content Presenter -- Path: oracle.webcenter.doclib.view.jsf.taskflows.presenter -- Taskflows: contentPresenter.xml (the Content Presenter taskflow) and contentPresenterWizard.xml (the publishing wizard to select content, select a template, and preview, including contribution)
    - Document Manager -- Path: oracle.webcenter.doclib.view.jsf.taskflows.docManager -- Taskflow: documentManager.xml (the Document Manager taskflow, which includes references to document management features including browsing, download, uploading, and viewing)

    For more information on taskflow customizations, please see the documentation: http://docs.oracle.com/cd/E23943_01/webcenter.1111/e10148/jpsdg_taskflows.htm#BACIEGJD

    Read the article

  • AnyConnect SSL VPN split tunneling for a single website?

    - by Daniel Lucas
    We have a Cisco ASA 5510 and use split tunneling for AnyConnect SSL VPN clients: all internal addresses are tunneled, and everything else is routed through the client's own internet connection. We use a SaaS service that only responds to requests coming from one of our own public IP addresses; because of this, VPN users are currently unable to access it. Is there a way to specify that one specific website should be tunneled while all others are not? NOTE: Worst case, we will use a web bookmark on the clientless portal to translate through our network, but I'd like to see if the above is possible first.

    Read the article

  • Simple monitoring utility with up/down statuses of the host's network connectivity and services

    - by Beaming Mel-Bin
    We've looked at many monitoring tools (SolarWinds, Zabbix, Nagios) throughout the last 10 years, but they never took hold because they are overly complicated. I am willing to try them again, or something new, but with a much simpler goal:

    - ping to check up/down status of hosts
    - TCP probes to test up/down status of services
    - notifications via e-mail
    - web GUI
    - prefer an OSS solution

    I wanted to know if someone has any recommendations on this (a bare-bones sketch of the first three items follows below). This could be a Windows or Linux application, preferably without the requirement of agents. I don't even need SNMP support, but that may be nice for expanding once we have the above-mentioned bare minimum in place.
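    For a sense of scale, the bare minimum asked for here (ping, TCP probe, e-mail alert) fits in a short script run from cron or Task Scheduler. A minimal sketch, in which every host, port, and mail setting is a placeholder and a reachable SMTP relay plus the system ping binary are assumed:

        # Minimal up/down monitor sketch: ICMP ping per host, TCP probe per
        # service, e-mail on failure. All names below are placeholders.
        import smtplib
        import socket
        import subprocess
        from email.message import EmailMessage

        HOSTS = ["192.168.1.10", "fileserver.example.local"]
        SERVICES = [("192.168.1.10", 80), ("192.168.1.10", 3389)]
        SMTP_RELAY, ALERT_ADDR = "mail.example.local", "ops@example.local"

        def ping(host):
            # "-c"/"-W" are the Linux flags; Windows ping uses "-n"/"-w"
            return subprocess.run(["ping", "-c", "1", "-W", "2", host],
                                  capture_output=True).returncode == 0

        def tcp_probe(host, port):
            try:
                with socket.create_connection((host, port), timeout=3):
                    return True
            except OSError:
                return False

        def alert(subject):
            msg = EmailMessage()
            msg["Subject"], msg["From"], msg["To"] = subject, ALERT_ADDR, ALERT_ADDR
            with smtplib.SMTP(SMTP_RELAY) as conn:
                conn.send_message(msg)

        for host in HOSTS:
            if not ping(host):
                alert("DOWN: host %s not answering ping" % host)
        for host, port in SERVICES:
            if not tcp_probe(host, port):
                alert("DOWN: service %s:%s refused TCP probe" % (host, port))

    A web GUI and notification history are exactly where the packaged tools earn their complexity, so a script like this covers only the bare-minimum list above.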

    Read the article

  • External File Upload Optimizations for Windows Azure

    - by rgillen
    [Cross-posted from here: http://rob.gillenfamily.net/post/External-File-Upload-Optimizations-for-Windows-Azure.aspx]

    I'm wrapping up a bit of the work we've been doing on data movement optimizations for cloud computing, and the latest set of data yielded some interesting points I thought I'd share. The work done here is not really rocket science, but it may in some ways be slightly counter-intuitive and therefore seemed worthy of posting.

    Summary: for those who don't like to read detailed posts or don't have time, the synopsis is that if you are uploading data to Azure, block your data (even down to 1MB) and upload in parallel. Set your block size based on your source file size, but if you must choose a fixed value, use 1MB. Following the above will result in significant performance gains... upwards of 10x-24x, and a reduction in overall file transfer time of upwards of 90% (e.g., uploading a 1GB file averaged 46.37 minutes prior to optimizations and 1.86 minutes afterwards).

    Detail: For those of you who want more detail, or think that the claims at the end of the preceding paragraph are over-reaching, what follows is information and code supporting those claims. As the title indicates, these tests were run from our research facility pointing to the Azure cloud (specifically US North Central, as it is physically closest to us) and do not represent intra-cloud results (we have performed intra-cloud tests; the overall results are similar in notion, but the data rates and the tipping points for the various block sizes are significantly different... that will be detailed separately).

    We started by building a very simple console application that would loop through a directory and upload each file to Azure storage. This application used the shipping storage client library from the 1.1 version of the Azure tools. The only real variation from the client library is that we added code to collect and record the duration (in ms) and size (in bytes) for each file transferred. The code is available here.

    We then created a directory that had a collection of files of the following sizes: 2KB, 32KB, 64KB, 128KB, 512KB, 1MB, 5MB, 10MB, 25MB, 50MB, 100MB, 250MB, 500MB, 750MB, and 1GB (50 files for each size listed). These files contained randomly-generated binary data and do not benefit from compression (a separate discussion topic). Our file generation tool is available here.

    The baseline was established by running the application described above against the directory containing all of the data files. This application uploads the files in a random order so as to avoid transferring all of the files of a given size sequentially, thereby spreading the effects of periodic Internet delays across the collection of results. We then ran some scripts to split the resulting data and generate some reports. The raw data collected for our non-optimized tests is available via the links in the Related Resources section at the bottom of this post. For each file size, we calculated the average upload time (and standard deviation) and the average transfer rate (and standard deviation). As you are likely aware, transferring data across the Internet is susceptible to many transient delays which can cause anomalies in the resulting data. It is for this reason that we randomized the order of source file processing and executed the tests 50x for each file size. We expect that these steps yield a sufficiently balanced set of results.

    Once the baseline was collected and analyzed, we updated the test harness application with some methods to split the source file into user-defined block sizes and then upload those blocks in parallel (using the PutBlock() method of Azure storage). The parallelization was handled by simply relying on the Parallel Extensions to .NET to provide a Parallel.For loop (see the linked source for specific implementation details in Program.cs, line 173 and following... less than 100 lines total). Once all of the blocks were uploaded, we called PutBlockList() to assemble/commit the file in Azure storage. For each block transferred, the MD5 was calculated and sent, ensuring that the bits that arrived matched what was intended. The timer for the blocked/parallelized transfer method wraps the entire process (source file splitting, block transfer, MD5 validation, file committal). A diagram of the process is as follows:

    We then tested the effects of blocking and parallelizing the transfers by running the updated application against the same source set, doing a parameter sweep on the block size including 256KB, 512KB, 1MB, 2MB, and 4MB (our assumption was that anything lower than 256KB wasn't worth the trouble, and 4MB is the maximum size of a block supported by Azure). The raw data for the parallel tests is available via the links in the Related Resources section at the bottom of this post. This data was processed and then compared against the single-threaded/non-optimized transfer numbers, and the results were encouraging. The Excel version of the results is available here.

    Two semi-obvious points need to be made prior to reviewing the data. The first is that if the block size is larger than the source file size, you will end up with a "negative optimization" due to the overhead of attempting to block and parallelize. The second is that as the files get smaller, the clock-time cost of blocking and parallelizing (overhead) is more apparent and can tend towards negative optimizations. For this reason (as supported by the raw data provided in the linked worksheet), the charts and discussion below ignore source file sizes less than 1MB.

    (click chart for full size image)

    The chart above illustrates some interesting points about the results:

    - When the block size is smaller than the source file, performance increases, but as the block size approaches and then passes the source file size, you see decreasing benefit to the point of negative gains (see the values for the 1MB file size).
    - For some of the moderately-sized source files, small blocks (256KB) are best.
    - As the size of the source file gets larger (see values for 50MB and up), the smallest block size is not the most efficient (presumably due, at least in part, to the increased number of blocks, increased number of individual transfer requests, and reassembly/committal costs).
    - Once you pass the 250MB source file size, the difference in rate for 1MB to 4MB blocks is more-or-less constant.
    - The 1MB block size gives the best average improvement (~16x), but the optimal approach would be to vary the block size based on the size of the source file.

    (click chart for full size image)

    The above is another view of the same data as the prior chart, just with the axes changed (the x-axis represents file size and the plotted data shows improvement by block size). It again highlights the fact that the 1MB block size is probably the best overall size, while also highlighting the benefits of some of the other block sizes at different source file sizes.

    The last chart shows the change in total duration of the file uploads based on different block sizes for the source file sizes. Nothing really new here, other than that this view of the data highlights the negative effects of poorly choosing a block size for smaller files.

    Summary: What we have found so far is that blocking your file uploads and uploading them in parallel results in significant performance improvements. Further, utilizing extension methods and the Task Parallel Library (.NET 4.0) makes short work of altering the shipping client library to provide this functionality while minimizing the amount of change to existing applications that might be using the client library for other interactions.

    Related Resources:

    - Source code for upload test application
    - Source code for random file generator
    - OData feeds of raw data from non-optimized transfer tests: experiment metadata; experiment datasets (2KB, 32KB, 64KB, 128KB, 256KB, 512KB, 1MB, 5MB, 10MB, 25MB, 50MB, 100MB, 250MB, 500MB, 750MB, and 1GB uploads); raw data
    - OData feeds of raw data from blocked/parallelized transfer tests: experiment metadata; experiment datasets; raw data (256KB, 512KB, 1MB, 2MB, and 4MB blocks)
    - Excel worksheet showing summarizations and comparisons
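    The post's implementation is .NET (PutBlock()/PutBlockList() driven by a Parallel.For loop). Purely as an illustration of the same blocked, parallel pattern outside that stack, here is a hedged sketch against the much newer Python SDK, azure-storage-blob; the connection string, container, and file names are placeholders, and block size and concurrency should be tuned per the findings above:

        # Sketch of the blocked/parallel upload pattern using azure-storage-blob
        # (not the .NET 1.1 client the post used). Placeholders throughout.
        import uuid
        from concurrent.futures import ThreadPoolExecutor
        from azure.storage.blob import BlobBlock, BlobClient

        BLOCK_SIZE = 1024 * 1024  # 1MB: best average improvement in the tests above
        blob = BlobClient.from_connection_string(
            "<placeholder-connection-string>", "uploads", "bigfile.bin")

        def read_blocks(path):
            with open(path, "rb") as f:
                while chunk := f.read(BLOCK_SIZE):
                    yield str(uuid.uuid4()), chunk  # uniform-length block IDs

        blocks = list(read_blocks("bigfile.bin"))
        with ThreadPoolExecutor(max_workers=8) as pool:
            # Stage all blocks in parallel; validate_content sends a per-block
            # MD5, mirroring the integrity check described in the post.
            list(pool.map(lambda b: blob.stage_block(b[0], b[1],
                                                     validate_content=True),
                          blocks))

        # Commit in file order to assemble the blob (the PutBlockList step).
        blob.commit_block_list([BlobBlock(block_id=bid) for bid, _ in blocks])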

    Read the article

  • Experiences using VLC for video-on-demand streaming? (VLM)

    - by StackedCrooked
    I'm considering my options for implementing a VOD service. Until recently my choices seemed to be either Wowza or Darwin, but now I discovered VLM, which looks really cool. I am going to stream MPEG4 H.264 video with AAC audio. I'm probably going to use the RTSP protocol, but I'm willing to use HTTP as well (after reading this article). Can anyone comment on his or her experiences with VLM? How does it compare to Darwin or Wowza? Is it stable and worthy of production use? Are there any limitations or performance problems?

    Read the article

  • My search for what the Cloud will mean for my Work

    - by Kay Sellenrode
    Since I finished my MCM Exchange 2007 training back in April 2009, I have been struggling with the Cloud. I know it will change the way we do things today, but how will it affect my work? My work is Exchange consultancy, mostly in the Netherlands, but more and more across the globe.

    In my job as a consultant, I noticed last year that a large percentage of my customers showed interest in the cloud services available today, but in most situations it seemed it wasn't the right time for them to switch to a cloud service. Right now I'm helping one of my customers explore Exchange Online, and it looks like they will switch over from their on-premise Exchange solution. This made me realize more than ever that I need to do something not to miss the boat.

    With Office 365 coming this year, my expectation is that cloud services will take off from now on. I'm also sure that quite a few customers will expect me to help them with their decision between the cloud and the on-premise solution. So in the next months I will explore all the possibilities of Office 365, but also some of the competition in this field.

    In my search for what the cloud will mean for me and my customers, I will go over all the aspects of the offered solutions. Any help in my search is always welcome; I'm looking forward to the ideas people have around the cloud and how it will change the IT environment, especially in the unified communications field. Next week I will post my first article about my experiences with the cloud so far.

    Read the article

  • Middleware Oracle Excellence Awards 2012 & HAPPY NEW YEAR!

    - by JuergenKress
    Thanks for the FY12 middleware business! Make sure you become our WebLogic partner of the year! The Oracle Excellence Awards 2012 are open for nominations. Middleware Specialized Partners: submit your nominations for the Middleware Specialized Partner of the Year by 29 June! The Specialized Partner of the Year Award celebrates OPN Specialized partners in EMEA who have demonstrated success with specialization, delivering customer value, and outstanding solution or service innovation in categories that complement OPN Specialization investments. Nominate now to receive the recognition you deserve!

    Winners of the Specialized Partner of the Year - EMEA Awards will each receive:

    - $5k MDF for market expansion and promotion of their winning solutions/services
    - extensive visibility across the extended Oracle community through interviews, advertising, and video
    - prestige and recognition, being awarded in a ceremony at Oracle OpenWorld

    In addition, winners from all the Oracle Excellence Awards categories will receive a free registration to Oracle OpenWorld 2012 in San Francisco, California, be showcased at the conference in October, be given an opportunity to mingle with Oracle executives and their peers, and be featured in Oracle Magazine.

    Nomination tips:

    - Build your nomination with Oracle
    - Provide evidence of your success
    - Send supporting documents here
    - Get a quote from Oracle product management or myself!

    Closing date: 29 June. Full details of all Oracle Awards offered this year are available on the Oracle Excellence Awards website. For regular information on Oracle SOA Suite, become a member of the SOA & BPM Partner Community; for registration please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account, please contact the Oracle Partner Business Center.

    Read the article
