Search Results


  • Introdução ao NHibernate on TechDays 2010

    - by Ricardo Peres
    I’ve been working on the agenda for my presentation titled Introdução ao NHibernate that I’ll be giving at TechDays 2010, and I would like to request your assistance. If there is any subject you’d like me to talk about, you can suggest it to me. For now, I’m thinking of the following topics:
    - Domain Driven Design with NHibernate
    - Inheritance Mapping Strategies (Table Per Class Hierarchy, Table Per Type, Table Per Concrete Type, Mixed)
    - Mappings (hbm.xml, NHibernate Attributes, Fluent NHibernate, ConfORM)
    - Supported querying types (ID, HQL, LINQ, Criteria API, QueryOver, SQL)
    - Entity Relationships
    - Custom Types
    - Caching
    - Interceptors and Listeners
    - Advanced Usage (Duck Typing, EntityMode Map, …)
    - Other projects (NHibernate Validator, NHibernate Search, NHibernate Shards, …)
    - ASP.NET Integration
    - ASP.NET Dynamic Data Integration
    - WCF Data Services Integration
    Comments?
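
    To give a quick taste of the Mappings item above, here is a minimal Fluent NHibernate class map; the Product entity and its properties are purely illustrative and not taken from the talk:

        // Illustrative Fluent NHibernate mapping; the Product entity is hypothetical.
        using FluentNHibernate.Mapping;

        public class Product
        {
            // NHibernate needs virtual members so it can generate lazy-loading proxies.
            public virtual int Id { get; protected set; }
            public virtual string Name { get; set; }
            public virtual decimal Price { get; set; }
        }

        public class ProductMap : ClassMap<Product>
        {
            public ProductMap()
            {
                Table("Products");
                Id(x => x.Id).GeneratedBy.Identity();
                Map(x => x.Name).Not.Nullable();
                Map(x => x.Price);
            }
        }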

  • How to get paid and figure out if I want to keep this client [migrated]

    - by Heiner Fawkes
    I have a client who is not paying on time, but it looks like the specifics don't match similar questions on this SE site.

    I got a call from a client I did website work for years ago. I had not done this kind of work for many years and frankly I'm not sure I want to now, but nevertheless, about a month ago I agreed to bring his website, SEO, social media, and overall marketing for his small business up to speed. Why? He has told me many times how I'm the most honest, most well-informed contractor he's had experience with. And I personally kind of like him too.

    So I started working on an hourly basis. I sent one very small invoice and got paid. Then we talked a whole lot about all sorts of features he would like me to implement. I started that work, and sent a second invoice on the first of the month (one of my two stated billing days). I didn't get paid. Every invoice states that I charge a whopping ten percent per week late. I sent many voicemails and emails asking him to please let me know what's going on with payment, and didn't get replies.

    Then the 15th of the month rolled around (which I stated initially as one of my invoicing dates). Since I hadn't been paid for the last invoice, I simply didn't send him an invoice at that time, but emailed him and said that I would combine it with the next scheduled invoice for this reason (probably a bad idea, I realize). Eventually he sent a portion of the invoice payment. I emailed back to let him know that he's three weeks late and what the remaining balance is.

    Finally we got in touch via phone. He basically told me that he thought I hadn't done all of the work I said I did. He looked at the page source code and it didn't look complete to him. I explained why his perception would be different and what work I had done as specified. He accepted this and said that part of the reason he didn't pay in full is that he's been swamped with personal family stuff, and part of the reason is that he didn't think I did all the work. That struck me as pretty weird. He also expressed concern that he has no idea now how much all the changes he has asked for are going to cost. And once again, he told me how honest and high-quality my services are compared to others he has dealt with. He also said he would pay me more (but not all) of the now three-weeks-overdue invoice that day. I didn't receive any payment.

    Basically this is how the client relationship strikes me: He's not good at communication. He's very busy, and English isn't his first language. He almost never replies to emails, but phone calls are fine. He's asked me to avoid emails for communication, and I've asked him to please use email. He might not have enough money to afford all the things he has asked for. But so far I have been working for an hourly fee (which is quite high). He has also started paying monthly for hosting and social media services from me.

    What seems very abnormal is for a client to be so overdue on payments and to actually withhold payment of an invoice, without any communication, because he didn't think the work was done. I told him that I will send dollar estimates of each module of remaining work so that we can decide which ones are the highest priority if he cannot afford them all. I also reiterated that in the future, if he has doubts about the work or an inability to pay, he must contact me immediately to say so.

    I basically plan to state the following to him:
    - I would like to work for him and help his business. I also have sympathy for his recent family difficulties.
    - I am happy to figure out payment plans that would work better for him, but first I need to be paid in full for all outstanding invoices, especially given that I skipped one of them just to be nice.
    - The most crucial thing I need is communication about any problems with my work or his ability to pay.
    - Once again, he needs to pay in full immediately before we negotiate anything else.

    Does the above seem like an appropriate communication? Is anything missing from it? Is anything I'm doing here really abnormal?

  • Selling On Demand

    - by andrea.mulder
    In May 2010, eSilicon management began evaluating providers for a new CRM system, vetting a variety of CRM offerings. Using a rating system that scored vendors according to marketing, sales, services, features, usability, implementation time, and cost, the team chose Oracle CRM On Demand for the project. "Overall, Oracle CRM On Demand was the best system that was able to address all our pain points," says Janet Ang, senior applications developer and project manager of the CRM implementation at eSilicon. Read Selling On Demand, a feature article in the February 2011 issue of Profit Magazine, and find out how eSilicon achieved:
    - Easy Implementation and Adoption
    - Sales and Management Benefits
    - High Productivity for Tech

  • What to do when projects are slow and you are being held up by others?

    - by antonpug
    Where I work, projects take a significant amount of time because the teams are large, there is a lot of "design and analysis", a lot of documentation, and work always gets pushed off. I work in the middle tier and I always have to wait for the services and client folks to get their work done. Oftentimes there are weeks at a time when I can't get any work done. I feel bored and weird just sitting here scrambling to at least appear like I am busy. Management seems to do little when asked for more work. What do you do in such cases?

  • Are there any risks if your DNS SOA or admin contact uses the same domain as the DNS?

    - by Yoga
    For example, Google.com [1]. The SOA email is: dns-admin.google.com. The contact is:

    Administrative Contact:
    DNS Admin
    Google Inc.
    dns-admin.google.com

    As you can see, both use google.com. I am wondering whether it is safe to use the same domain; i.e., consider the case where you lose control of the domain: would you still be able to receive email? (Of course Google is a public company, so the chance is low, but it might occur for a smaller company whose domain could be stolen.) So, do you recommend using the same domain as the contact, or other free services such as Gmail?

    [1] http://whois.domaintools.com/google.com

  • How to make creating viewmodels at runtime less painful

    - by Mr Happy
    I apologize for the long question; it reads a bit as a rant, but I promise it's not! I've summarized my question(s) below.

    In the MVC world, things are straightforward. The Model has state, the View shows the Model, and the Controller does stuff to/with the Model (basically); a Controller has no state. To do stuff, the Controller has some dependencies on web services, repositories, the lot. When you instantiate a Controller you care about supplying those dependencies, nothing else. When you execute an action (a method on the Controller), you use those dependencies to retrieve or update the Model or call some other domain service. If there's any context, say some user wants to see the details of a particular item, you pass the Id of that item as a parameter to the Action. Nowhere in the Controller is there any reference to any state. So far so good.

    Enter MVVM. I love WPF, I love data binding. I love frameworks that make data binding to ViewModels even easier (using Caliburn Micro a.t.m.). I feel things are less straightforward in this world though. Let's do the exercise again: the Model has state, the View shows the ViewModel, and the ViewModel does stuff to/with the Model (basically), but a ViewModel does have state! (To clarify: maybe it delegates all its properties to one or more Models, but that means it must have a reference to the Model one way or another, which is state in itself.) To do stuff, the ViewModel has some dependencies on web services, repositories, the lot. When you instantiate a ViewModel you care about supplying those dependencies, but also the state. And this, ladies and gentlemen, annoys me to no end.

    Whenever you need to instantiate a ProductDetailsViewModel from the ProductSearchViewModel (from which you called the ProductSearchWebService, which in turn returned IEnumerable<ProductDTO>, everybody still with me?), you can do one of these things:
    - Call new ProductDetailsViewModel(productDTO, _shoppingCartWebService /* dependency */);. This is bad: imagine 3 more dependencies; this means the ProductSearchViewModel needs to take on those dependencies as well. Also, changing the constructor is painful.
    - Call _myInjectedProductDetailsViewModelFactory.Create().Initialize(productDTO);. The factory is just a Func; they are easily generated by most IoC frameworks. I think this is bad because Init methods are a leaky abstraction. You also can't use the readonly keyword for fields that are set in the Init method. I'm sure there are a few more reasons.
    - Call _myInjectedProductDetailsViewModelAbstractFactory.Create(productDTO);. So... this is the pattern (abstract factory) that is usually recommended for this type of problem (sketched below). I thought it was genius, since it satisfies my craving for static typing, until I actually started using it. The amount of boilerplate code is, I think, too much (you know, apart from the ridiculous variable names I get to use). For each ViewModel that needs runtime parameters you get two extra files (factory interface and implementation), and you need to type the non-runtime dependencies about 4 extra times. And each time the dependencies change, you get to change the factory as well. It feels like I don't even use a DI container anymore. (I think Castle Windsor has some kind of solution for this [with its own drawbacks; correct me if I'm wrong].)
    - Do something with anonymous types or a dictionary. I like my static typing.

    So, yeah: mixing state and behavior in this way creates a problem which doesn't exist at all in MVC.
    And I feel like there currently isn't a really adequate solution for this problem. Now I'd like to observe some things:
    - People actually use MVVM. So they either don't care about all of the above, or they have some brilliant other solution.
    - I haven't found an in-depth example of MVVM with WPF. For example, the NDDD-sample project immensely helped me understand some DDD concepts. I'd really like it if someone could point me in the direction of something similar for MVVM/WPF.
    - Maybe I'm doing MVVM all wrong and I should turn my design upside down.
    - Maybe I shouldn't have this problem at all. Well, I know other people have asked the same question, so I think I'm not the only one.

    To summarize:
    - Am I correct to conclude that having the ViewModel be an integration point for both state and behavior is the reason for some difficulties with the MVVM pattern as a whole?
    - Is using the abstract factory pattern the only/best way to instantiate a ViewModel in a statically typed way?
    - Is there something like an in-depth reference implementation available?
    - Is having a lot of ViewModels with both state/behavior a design smell?
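
    For concreteness, here is a minimal sketch of the abstract-factory approach from the third option above. It reuses the question's own ProductDetailsViewModel/ProductDTO names; IShoppingCartWebService and the factory names are illustrative assumptions, not from any real codebase:

        // Sketch of the abstract-factory approach (option 3 above).
        // ProductDTO/ProductDetailsViewModel mirror the question's names;
        // IShoppingCartWebService and the factory are hypothetical.
        public class ProductDTO { /* product fields elided */ }

        public interface IShoppingCartWebService { /* hypothetical dependency */ }

        public class ProductDetailsViewModel
        {
            private readonly ProductDTO _product;                   // runtime state
            private readonly IShoppingCartWebService _shoppingCart; // dependency

            public ProductDetailsViewModel(ProductDTO product, IShoppingCartWebService shoppingCart)
            {
                _product = product;
                _shoppingCart = shoppingCart;
            }
        }

        // The factory owns the non-runtime dependencies; callers supply only state.
        public interface IProductDetailsViewModelFactory
        {
            ProductDetailsViewModel Create(ProductDTO product);
        }

        public class ProductDetailsViewModelFactory : IProductDetailsViewModelFactory
        {
            private readonly IShoppingCartWebService _shoppingCart;

            // Injected once by the IoC container.
            public ProductDetailsViewModelFactory(IShoppingCartWebService shoppingCart)
            {
                _shoppingCart = shoppingCart;
            }

            // Runtime state is supplied per call, so the ViewModel's fields
            // can stay readonly and no Initialize() method is needed.
            public ProductDetailsViewModel Create(ProductDTO product)
            {
                return new ProductDetailsViewModel(product, _shoppingCart);
            }
        }

    The boilerplate cost the question describes is visible even in this tiny sketch: each new dependency must be added to the ViewModel's constructor, the factory's constructor, its field, and the Create call.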

  • Project planning and customer tracking system

    - by Daniel Hollands
    First off, sorry if this is the wrong 'stack' site, but it seemed like a good place to start. I'm happy to report that my services as a web developer are starting to be in quite a lot of demand, and I have a few existing and potentially new customers all lining up, but I'm finding it very hard to keep track of everything. What I'm hoping for is some (preferably web-based) system which I can use to keep track of who my customers are, the various projects that I've got going on for them, and (if possible) the individual sub-tasks that make up each project. What would be even better is if the relevant customer were able to log into the site and see the progress of their projects. I do hope you know what I'm talking about, and that you'll be able to offer some suggestions, either of web-based sites that offer something along these lines, or of some open source solution. Thank you

  • OBIEE 11g 11.1.1.7.1 is Available For BI Enterprise

    - by p.anda
    (in via Ian) The Business Intelligence Enterprise Edition (OBIEE) 11.1.1.7.1 patch set has been released. This patch set is available for all customers who are using Oracle Business Intelligence Enterprise Edition 11.1.1.7.0. It is now available to download from My Oracle Support: Patch 16556157: OBIEE BUNDLE PATCH 11.1.1.7.1.

    This single OBIEE 11.1.1.7.1 patch set download comprises the following:
    1 of 6: Oracle Business Intelligence Installer (BIINST)
    2 of 6: Oracle Business Intelligence Publisher (BIP)
    3 of 6: EPM Components Installed from BI Installer 11.1.1.7.0 (BIFNDNEPM)
    4 of 6: Oracle Business Intelligence Server (BIS)
    5 of 6: Oracle Business Intelligence Presentation Services (BIPS)
    6 of 6: Oracle Business Intelligence Platform Client Installers and MapViewer

    Be sure to review the readme file on the Installer download for important installation instructions. The following is also required to be downloaded and applied: Patch 16569379: Dynamic Monitoring Service patch. Additional important notes are available in the following document: Document 1566124.1: OBIEE 11g 11.1.1.7.1 is Available for Oracle Business Intelligence Enterprise Edition.

  • recurring billing / profiles management system

    - by Karl Cassar
    As a company, we have various recurring fees which our clients pay; these can include:
    - hosting plans
    - maintenance agreements
    - SLAs
    - ...

    I would like to know if anyone knows of a good, web-based recurring billing / payments management system which we could use to help us get more organised regarding this aspect of our business. Basically, we would need to:
    - Create recurring profiles, e.g.: Hosting, email / domain services @ 200 EUR / year
    - Be able to give it for free / extend the subscription period, for any reason.

    Also, we don't have specific products which we would like to choose and charge for; all these recurring fees are discussed with the clients and are created on a per-client basis. I'm not sure if this is the best place to ask, but since I think most webmasters require such a system to keep track of payments, I thought this would be the place to go. Thanks in advance!

  • Render Ruby object to interactive html

    - by AvImd
    I am developing a tool that discovers network services enabled on a host and writes a short summary of them, like this:

        init,1
         +-- login,1560
             +-- bash,1629
                 +-- nc,12137 -lup 50505
        {
            :net => [
                [0] "*:50505 IPv4 UDP "
            ],
            :fds => [
                [0] "/root (cwd)",
                [1] "/",
                [2] "/bin/nc.traditional",
                [3] "/xochikit/ld_poison.so (stat: No such file or directory)",
                [4] "/dev/tty2",
                [5] "*:50505"
            ]
        }

    It proved to be very nicely formatted and useful for quick discovery, thanks to the colors provided by the awesome_print gem. However, its output is just text. One issue is that if I want to share it, I lose the colors. I'd also like to fold and unfold parts of objects, quickly jump to specific processes, and whatnot. Adding comments, for example. Thus I want something web-based. What is the best approach to implement features like these? I haven't worked with web interfaces before, and I don't have much experience with Ruby.

  • Computacenter first partner to offer Oracle Exadata proof-of-concept environment for real-world testing

    - by kimberly.billings
    Computacenter (http://www.computacenter.com/), Europe's leading independent provider of IT infrastructure services, recently announced that it is the first partner to offer an Oracle Exadata 'proof-of-concept' environment for real-world testing. This new center, combined with Computacenter's extensive database storage skills, will enable organisations to accurately test Oracle Exadata with their own workloads, clearly demonstrating the case for migration. For more information, read the press release. Are you planning to migrate to Oracle Exadata? Tell us about it!

  • Using an SMTP Service for email

    - by Josh S.
    This may be a horribly obvious question, but I'm learning and just need someone to confirm it for me. I'm putting together a private social network that needs to email its members (through the social network software, Elgg) regularly. I'm hosting it on a shared HostGator plan (because they won't receive much traffic), and they'll send 10-1000 emails a few times a week. HostGator restricts you to 500 per hour. I'm also worried about deliverability. I've been searching up and down about how to throttle the emails so they will all send reliably... but then I came across the idea of an outside SMTP relay service. Would using an SMTP service resolve this issue? If so, any opinions on quality SMTP services?

  • Google's password management system may have been compromised following the Chinese attacks

    Update of 21.04.2010 by Katleen: Google's password management system may have been compromised following the Chinese attacks of late 2009. During last December's attacks on Google originating from China, the firm's password management system was reportedly compromised (the level of intrusion into the system had not been disclosed). At Google, a program named Gaia manages users' passwords for access to all of the firm's web services, including those dedicated to professionals (Google Apps). This application is highly confidential and rarely discussed. The Asian hackers thus managed to break into the infra...

  • Does the Ubuntu One sync work?

    - by bisi
    I have been at this for several hours now, trying to get a simple second folder to sync with my (paid) account. I cannot tell you how many times I have removed all devices, removed stored passwords, killed all u1 processes, and logged out and back in online... and still, the tick in the file browser ("Synchronize this folder") is loading and loading and loading. Also, I have logged out and rebooted countless times. And this is after me somehow managing to get the u1 preferences to finally "connect" again. I have also checked the status of your services, and none are close to what I am experiencing. And I have checked the suggested related questions above! So please, just confirm whether this is a problem on my side or a problem on your side.

  • Nagios suddenly stops working

    - by pankaj sharma
    I have configured passive checks on one of my host systems; for this I am using NSCA. It was running fine, but suddenly the host shows as down in the monitoring, even though the host itself was fine and running. When I check the logs on the host, they show:

    [1347941895] Warning: Attempting to execute the command "/submit_check_result host.example.com 'Current Load' OK 'OK - load average: 0.69, 0.53, 0.42'" resulted in a return code of 127. Make sure the script or binary you are trying to execute actually exists...

    I restarted the Nagios services many times, but it still shows the same error. Can anyone help me with this? Thanks in advance.

  • Twitter is moving further and further away from its external developers; what future for their applications?

    Update of 14.04.2010 by Katleen: Twitter is moving further and further away from its external developers; what future do their applications have against the official tools? In its early days, the micro-blogging site Twitter did not have the financial means to match its ambitions. It therefore relied on the help of external developers, who built services and tools for it free of charge in exchange for advertising revenue. It is thanks to these programmers that users of the social network can today shorten their URLs, post twitpics, manage several accounts at the same time, etc. The work of outside developers has therefore contributed greatly to the dizzying rise of the site...

  • Epsilon : An Oracle Customer Profile

    - by Anand Akela
    ZDNet published an article today based on an interview with Jeff White, vice president, technology, strategic database services at Epsilon. Jeff discussed Oracle Exadata Database Machine and Oracle Enterprise Manager with ZDNet writer Dan Kusnetzky. Read the article Epsilon: An Oracle Customer Profile. Jeff White, Epsilon VP, was honored as Oracle's Data Warehouse Leader of the Year for Innovative Data Warehouse Deployment of Oracle Exadata and Oracle Enterprise Manager earlier this year. In one of the videos earlier this year, Jeff mentioned that Epsilon has streamlined IT administration, monitoring, and engineered systems maintenance with Oracle Enterprise Manager. Having gained in operational efficiencies, Epsilon is now providing greater efficiencies to its customers. For more information, please go to the Oracle Enterprise Manager web page or follow us at: Twitter | Facebook | YouTube | Linkedin | Newsletter

  • What Is Disk Fragmentation and Do I Still Need to Defragment?

    - by Jason Fitzpatrick
    Do modern computers still need the kind of routine defragmentation procedures that older computers called for? Read on to learn about fragmentation and what modern operating systems and file systems do to minimize performance impacts. Today's Question & Answer session comes to us courtesy of SuperUser, a subdivision of Stack Exchange, a community-driven grouping of Q&A web sites.

  • OPN SPECIALIZED Webcasts

    - by Claudia Costa
    OPN Specialized Webcast Series for Partners

    For the EMEA region the webcasts start at 11:00 CET / 10:00 GMT. Each training session will run for approximately one hour and include live Q&A.

    - "How to become Specialized in the Applications products portfolio," 25th May 2010, 11:00 CET / 10:00 GMT. Click here for more information & registration.
    - "How to become an OPN Specialized Reseller of Oracle's Sun SPARC Servers, Storage, Software and Services," 1st June 2010, 11:00 CET / 10:00 GMT. Click here for more information & registration.

  • Is there an 'off the shelf' platform for making a new website similar to Elance?

    - by user17747
    I am interested in developing a website that is similar to 'elance' but for a particular vertical. Is there an 'off the shelf' platform you can recommend for getting started with this, or would I need to develop this web service from scratch? When I write 'platform' I am referring to things such as 'shopify' for e-commerce sites, or 'ning' for social websites. I want to create a multi-merchant professional services site. The site would need to support functionality such as:
    - allowing merchants to open and manage their own profiles
    - merchants accepting payments from customers through the site
    - file transfers between merchants and customers
    - merchant ratings by customers

  • DNS query re website Status: inactive

    - by Matthew Brookes
    There is a website that I am assisting with which, when you do a DNS lookup on Who.is, returns a Website Status of "inactive". I also noticed the server type is incorrectly reported. This is not a service I generally use for DNS queries, so I am unsure if it is reliable. Using other DNS checking services reports what I would expect, and the site is functioning correctly. Research I have done with regard to "Website Status: inactive" suggests an issue with the DNS configuration. I am looking for help understanding whether this is something to be concerned about and, if possible, how to update this value or how it gets set in the first place.

  • Wikileaks affair: OVH publishes an open letter and takes the matter to a judge after Éric Besson's request to expel the site from its servers

    Wikileaks affair: OVH responds with an open letter and takes the matter to a judge, after Eric Besson's request to expel the controversial site from its servers. Update of 03/12/10: The Wikileaks affair keeps taking new turns. After weathering two denial-of-service attacks, escaping them by using cloud technologies (Amazon Web Services), and then being expelled from the American giant's servers, the increasingly sulphurous and visibly hunted site has decided to set down its "suitcases" (of documents) in France and Switzerland. So don't look for Wikileaks.org any more. You will find nothing. The site...

  • External File Upload Optimizations for Windows Azure

    - by rgillen
    [Cross posted from here: http://rob.gillenfamily.net/post/External-File-Upload-Optimizations-for-Windows-Azure.aspx]

    I'm wrapping up a bit of the work we've been doing on data movement optimizations for cloud computing, and the latest set of data yielded some interesting points I thought I'd share. The work done here is not really rocket science but may, in some ways, be slightly counter-intuitive and therefore seemed worthy of posting.

    Summary: for those who don't like to read detailed posts or don't have time, the synopsis is that if you are uploading data to Azure, block your data (even down to 1MB) and upload in parallel. Set your block size based on your source file size, but if you must choose a fixed value, use 1MB. Following the above will result in significant performance gains... upwards of 10x-24x, and a reduction in overall file transfer time of upwards of 90% (e.g., uploading a 1GB file averaged 46.37 minutes prior to optimizations and averaged 1.86 minutes afterwards).

    Detail: For those of you who want more detail, or think that the claims at the end of the preceding paragraph are over-reaching, what follows is information and code supporting these claims. As the title would indicate, these tests were run from our research facility pointing to the Azure cloud (specifically US North Central, as it is physically closest to us) and do not represent intra-cloud results... we have performed intra-cloud tests and the overall results are similar in notion, but the data rates are significantly different, as are the tipping points for the various block sizes... this will be detailed separately.

    We started by building a very simple console application that would loop through a directory and upload each file to Azure storage. This application used the shipping storage client library from the 1.1 version of the Azure tools. The only real variation from the client library is that we added code to collect and record the duration (in ms) and size (in bytes) for each file transferred. The code is available here. We then created a directory that had a collection of files of the following sizes: 2KB, 32KB, 64KB, 128KB, 512KB, 1MB, 5MB, 10MB, 25MB, 50MB, 100MB, 250MB, 500MB, 750MB, and 1GB (50 files for each size listed). These files contained randomly-generated binary data and do not benefit from compression (a separate discussion topic). Our file generation tool is available here.

    The baseline was established by running the application described above against the directory containing all of the data files. This application uploads the files in a random order so as to avoid transferring all of the files of a given size sequentially, thereby spreading the effects of periodic Internet delays across the collection of results. We then ran some scripts to split the resulting data and generate some reports. The raw data collected for our non-optimized tests is available via the links in the Related Resources section at the bottom of this post. For each file size, we calculated the average upload time (and standard deviation) and the average transfer rate (and standard deviation). As you likely are aware, transferring data across the Internet is susceptible to many transient delays which can cause anomalies in the resulting data. It is for this reason that we randomized the order of source file processing as well as executed the tests 50x for each file size. We expect that these steps yielded a sufficiently balanced set of results.

    Once the baseline was collected and analyzed, we updated the test harness application with some methods to split the source file into user-defined block sizes and then to upload those blocks in parallel (using the PutBlock() method of Azure storage). The parallelization was handled by simply relying on the Parallel Extensions to .NET to provide a Parallel.For loop (see linked source for specific implementation details in Program.cs, line 173 and following... less than 100 lines total; a minimal sketch also follows at the end of this post). Once all of the blocks were uploaded, we called PutBlockList() to assemble/commit the file in Azure storage. For each block transferred, the MD5 was calculated and sent, ensuring that the bits that arrived matched what was intended. The timer for the blocked/parallelized transfer method wraps the entire process (source file splitting, block transfer, MD5 validation, file committal). [Diagram: source file splitting, parallel block transfer, MD5 validation, file committal]

    We then tested the effects of blocking & parallelizing the transfers by running the updated application against the same source set and did a parameter sweep on the block size, including 256KB, 512KB, 1MB, 2MB, and 4MB (our assumption was that anything lower than 256KB wasn't worth the trouble, and 4MB is the maximum size of a block supported by Azure). The raw data for the parallel tests is available via the links in the Related Resources section at the bottom of this post. This data was processed and then compared against the single-threaded / non-optimized transfer numbers, and the results were encouraging. The Excel version of the results is available here.

    Two semi-obvious points need to be made prior to reviewing the data. The first is that if the block size is larger than the source file size, you will end up with a "negative optimization" due to the overhead of attempting to block and parallelize. The second is that as the files get smaller, the clock-time cost of blocking and parallelizing (overhead) is more apparent and can tend towards negative optimizations. For this reason (and as supported by the raw data provided in the linked worksheet), the charts and dialog below ignore source file sizes less than 1MB.

    [Chart: transfer-rate improvement plotted per block size]

    The chart above illustrates some interesting points about the results:
    - When the block size is smaller than the source file, performance increases, but as the block size approaches and then passes the source file size, you see decreasing benefit to the point of negative gains (see the values for the 1MB file size).
    - For some of the moderately-sized source files, small blocks (256KB) are best.
    - As the size of the source file gets larger (see values for 50MB and up), the smallest block size is not the most efficient (presumably due, at least in part, to the increased number of blocks, the increased number of individual transfer requests, and reassembly/committal costs).
    - Once you pass the 250MB source file size, the difference in rate for 1MB to 4MB blocks is more-or-less constant.
    - The 1MB block size gives the best average improvement (~16x), but the optimal approach would be to vary the block size based on the size of the source file.

    [Chart: improvement by block size, with source file size on the x-axis]

    The above is another view of the same data as the prior chart, just with the axes changed (the x-axis represents file size and the plotted data shows improvement by block size). It again highlights the fact that the 1MB block size is probably the best overall size, but also highlights the benefits of some of the other block sizes at different source file sizes.

    This last chart shows the change in total duration of the file uploads based on different block sizes for the source file sizes. Nothing really new here, other than that this view of the data highlights the negative effects of poorly choosing a block size for smaller files.

    Summary: What we have found so far is that blocking your file uploads and uploading them in parallel results in significant performance improvements. Further, utilizing extension methods and the Task Parallel Library (.NET 4.0) makes short work of altering the shipping client library to provide this functionality while minimizing the amount of change to existing applications that might be using the client library for other interactions.

    Related Resources:
    - Source code for upload test application
    - Source code for random file generator
    - OData feed of raw data from non-optimized transfer tests: Experiment Metadata; Experiment Datasets (2KB Uploads, 32KB Uploads, 64KB Uploads, 128KB Uploads, 256KB Uploads, 512KB Uploads, 1MB Uploads, 5MB Uploads, 10MB Uploads, 25MB Uploads, 50MB Uploads, 100MB Uploads, 250MB Uploads, 500MB Uploads, 750MB Uploads, 1GB Uploads); Raw Data
    - OData feeds of raw data from blocked/parallelized transfer tests: Experiment Metadata; Experiment Datasets; Raw Data (256KB Blocks, 512KB Blocks, 1MB Blocks, 2MB Blocks, 4MB Blocks)
    - Excel worksheet showing summarizations and comparisons
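
    For readers who don't want to pull the linked source, below is a minimal sketch of the block-and-parallelize pattern described above. It assumes the CloudBlockBlob type from the v1.x storage client library; the blob and sourcePath parameters are placeholders, and MD5 validation, retries, and streaming are omitted, so treat the post's linked Program.cs as the authoritative version:

        // Minimal sketch of the blocked/parallelized upload described above,
        // assuming CloudBlockBlob from the v1.x Windows Azure storage client
        // library. MD5 validation and retry logic are omitted for brevity.
        using System;
        using System.IO;
        using System.Threading.Tasks;
        using Microsoft.WindowsAzure.StorageClient;

        static class BlockedUploader
        {
            const int BlockSize = 1024 * 1024; // 1MB: best fixed size per the results above

            public static void Upload(CloudBlockBlob blob, string sourcePath)
            {
                // For simplicity the whole file is read into memory; a production
                // version would stream each block from disk instead.
                byte[] data = File.ReadAllBytes(sourcePath);
                int blockCount = (data.Length + BlockSize - 1) / BlockSize;
                var blockIds = new string[blockCount];

                // Upload the blocks in parallel (Parallel Extensions to .NET).
                Parallel.For(0, blockCount, i =>
                {
                    int offset = i * BlockSize;
                    int size = Math.Min(BlockSize, data.Length - offset);

                    // Block IDs must be base64 strings of identical length.
                    string blockId = Convert.ToBase64String(BitConverter.GetBytes(i));
                    using (var block = new MemoryStream(data, offset, size))
                    {
                        blob.PutBlock(blockId, block, null); // null = no MD5 check in this sketch
                    }
                    blockIds[i] = blockId;
                });

                // Assemble/commit the uploaded blocks into the final blob.
                blob.PutBlockList(blockIds);
            }
        }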

  • CloudFlare DNS: Downtime failover host

    - by Dr. McKay
    My company uses CloudFlare for its DNS, but as our site is HTTPS-secured and we're on the free plan, we can't utilize CloudFlare's CDN services. Our host has fairly rare but not insignificant downtime. We can't migrate servers just yet, and I'd like to be able to either have the main domain redirect to the status domain, or simply resolve to the alternative status host in the event of downtime so users will stop bugging me asking if the site is down. Is this possible to do automatically using the free CloudFlare plan, or will I have to manually edit my DNS every time the site goes down?

  • File manager respawns with ubuntuone

    - by pygator
    Starting Feb 11, my Ubuntu 10.10 desktop respawns the File Manager many times (hundreds). You can observe the "Starting File Manager" processes at the bottom of the gnome desktop. I can make this behaviour stop via: System - Preferences - Ubuntu One - Services - uncheck "Files". Can someone walk me through the debug process? Linux 2.6.35-25-generic #44-Ubuntu SMP Fri Jan 21 17:40:48 UTC 2011 i686 GNU/Linux. I'm trying to reset the Ubuntu One configuration. I found good information here: https://wiki.ubuntu.com/UbuntuOne/Bugs (look for "ROOT_MISMATCH in syncdaemon.log"). After running through the steps to reset and restart Ubuntu One, there are no more "Starting File Manager" respawns.
