Search Results

Search found 3521 results on 141 pages for 'parallel computing'.

  • Microsoft launches two new Data Centres for Azure in US to meet growing demand

    - by Gopinath
    In order to meet the growing demand for Windows Azure in the US, Microsoft has launched two new data centres there – East US and West US. With the addition of these two, the number of Azure data centres across the globe has grown to 8, 4 of which are located in the US. The two new data centres are providing Compute and Storage resources, and a few enthusiastic customers have already deployed their applications. Other services, like SQL Azure and AppFabric, will be offered by these data centres in the coming months. The addition of new data centres is a good sign for Microsoft, as customer demand for their cloud offering is growing. Amazon Web Services is the pioneer in cloud computing and offers a wider range of cloud services compared to Microsoft. Source: Windows Azure Blog

    Read the article

  • Using Resources the Right Way

    - by BuckWoody
    It’s an interesting time in computing technology. At one point there was a dearth of information available for solving a given problem, or for educating ourselves on broader topics so that we could solve problems in the future. Now, with dozens, perhaps hundreds or thousands of web sites and content available (for free, in many cases) from vendors, peers, even colleges and universities, it seems like there is actually too much information. Who has the time to absorb all this information and training? Even if you had the inclination, where would you start? In fact, it seems so overwhelming that I often hear people saying that they can’t find the training they need, or that vendor X or Y “doesn’t help their users”. On questioning these folks, however, I often find that they – and sometimes I – haven’t put in the effort to learn what resources we have. That’s where blogs, like this one, can help. If you follow a blog, either by checking it often or perhaps by subscribing to its Really Simple Syndication (RSS) feed, you’ll be able to spread out the search or create a mental filter for the information you need. But it’s not enough just to read a blog or a web page. The creators need real feedback – what doesn’t work, and what does. Yes, you’re allowed to tell a vendor or writer “This helped me because…” so that you reinforce the positives. To be sure, bring up what doesn’t work as well – that’s fine. But be specific, and be constructive. You’d be surprised at how much it matters. I know for a fact that at Microsoft we listen – there is a real live person that reads your comments. I’m sure this is true of other vendors, and I also know that most blog authors – yours truly most especially – want to know what you think. In this blog entry I’d like to call your attention to three resources you have at your disposal, and how you can use them to help. I’ll try to bring up things like this that I find useful from time to time, and cover them in more depth. Think of this as a synopsis of a longer set of resources that you can use to decide whether to research further, bookmark, or forward on to a circle of friends where you think it might help them.

    Data Driven Design Concepts http://msdn.microsoft.com/en-us/library/windowsazure/jj156154 I’ll start with a great site that walks you through the process of designing a solution from a data-first perspective. As you know, I believe all computing is merely re-arranging data. If you follow that logic as well, you’ll realize that whenever you create a solution, you should start at the data end of the application. This resource helps you do that. Even if you don’t use the specific technologies the instructions use, the concepts hold for almost any other technology that deals with data. This should be a definite bookmark for a developer, DBA, or Data Architect. When I mentioned my admiration for this resource here at Microsoft, the team that created it contacted me and asked if I’d share an e-mail address with my readers so that you can comment on it. You’re guaranteed to be heard – you can suggest changes, talk about how useful – or not – it is, and so on. Here’s that address: [email protected]

    End-to-End Example of a Complete Hybrid Application – with Live Demo https://azurestocktrader.cloudapp.net/Default.aspx I learn by example. I also like having ready-made, live, functional demos that show the completed solution at work. If you’ve ever wanted to learn how a complex, complete, hybrid application that bridges on-premises systems with cloud-based databases, code, functions and more works, this is it. It’s a stock-trading simulator, and you can get everything from the design to the code itself, or you can just play with the application. It’s running on Windows Azure, the actual production servers we use for everything else.

    Using a Cloud-Based Service https://azureconfigweb.cloudapp.net/Default.aspx Along with that stock-trading application, you have a full demonstration and usable code sample of a web-based service available. If you’re a developer, this is a style of code you need to understand for everything from iPhone development to a full Service-Oriented Architecture (SOA) environment.

    So check out these resources. I’ll post more from time to time as I run across them. Hopefully they’ll be as useful to you as they are to me. Oh, and if you have a comment on any of the resources, let the creators know. And if you have any comments about these or any of my entries, feel free to post away. To quote a famous TV show: “Hello Seattle – I’m listening…”

    Read the article

  • Selenium Grid with parallel testing using C#/NUnit

    - by seth
    I've got several unit tests written with NUnit that are calling Selenium commands. I've got 2 Win2k3 server boxes set up: one is running the Selenium Grid hub along with 2 Selenium RCs, and the other box is running 5 Selenium RCs. All of them are registered with the hub as running Firefox on Windows (to keep it simple). In my unit test setup method I connect to the hub's hostname at port 4444. When I run the tests, they only run sequentially (as expected). I've done a lot of reading on NUnit's roadmap and how they are shooting for parallel testing abilities, and I've seen lots of pointers to using PNUnit in the meantime. However, this seems to completely defeat the purpose of the Selenium Grid. Have any of you successfully implemented parallel testing using C#/NUnit connected to a Selenium Grid setup? If so, please elaborate. I'm at a complete loss as to how this will/can work using NUnit as it exists now (I'm using version 2.9.3).
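
    One pattern that works with NUnit as it exists today – a minimal sketch, assuming the Selenium RC .NET client (the Selenium namespace with DefaultSelenium) and .NET 4's Task Parallel Library; the hub host, site URL and paths are placeholders – is to keep the test single-threaded from NUnit's point of view and fan the RC sessions out yourself, letting the hub distribute them to its registered RCs:

    using System.Threading.Tasks;
    using Selenium;

    public class ParallelGridSessions
    {
        public static void RunInParallel()
        {
            var pages = new[] { "/page1", "/page2", "/page3" };
            Parallel.ForEach(pages, page =>
            {
                // One RC session per thread; the hub (port 4444) hands each
                // session to a free RC matching the requested browser string.
                ISelenium selenium = new DefaultSelenium(
                    "hub-hostname", 4444, "*firefox", "http://app-under-test");
                try
                {
                    selenium.Start();
                    selenium.Open(page);
                    selenium.WaitForPageToLoad("30000");
                }
                finally
                {
                    selenium.Stop();
                }
            });
        }
    }

    The trade-off is that NUnit still sees one test, so failures surface as a single aggregate failure rather than per-session results – that reporting gap is exactly what PNUnit tries to fill.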

    Read the article

  • Significant new inventions in computing since 1980

    - by Alan Kay
    This question arose from comments about different kinds of progress in computing over the last 50 years or so. I was asked by some of the other participants to raise it as a question to the whole forum. The basic idea here is not to bash the current state of things, but to try to understand something about the progress of coming up with fundamental new ideas and principles. I claim that we need really new ideas in most areas of computing, and I would like to know of any important and powerful ones that have been done recently. If we can't really find them, then we should ask "Why?" and "What should we be doing?"

    Read the article

  • Combine static files or load in parallel

    - by Niall Collins
    I am at present introducing code to my site to combine CSS and JavaScript files. Is there a way to load JavaScript asynchronously or in parallel without having to include an external library? I have read on some blogs that combining files can be counterproductive, as a single large HTTP request can take longer than loading multiple files in parallel. Opinions on this? I am caching my JavaScript/CSS, and would have thought it was better to combine rather than make multiple HTTP requests.

    Read the article

  • Pros and cons of cloud computing?

    - by Vimvq1987
    After 3 months of research, my thesis is nearly complete. Now I'm writing the report. The interesting parts are finished; now come the boring and hard-to-write parts. I need to write about the pros and cons of cloud computing – what it gives us and what it takes from us. I've searched a lot, but there are only lists, no explanations. So I need your help to list and explain all the pros and cons of cloud computing. Thank you so much.

    Read the article

  • Running JUnit tests in parallel?

    - by krosenvold
    I'm using JUnit 4.4 and Maven, and I have a large number of long-running integration tests. When it comes to parallelizing test suites there are a few solutions that allow me to run each test method in a single test class in parallel. But all of these require that I change the tests in one way or another. I really think it would be a much cleaner solution to run X different test classes in X threads in parallel. I have hundreds of tests, so I don't really care about threading individual test classes. Is there any way to do this?

    Read the article

  • Two parallel line segments intersection

    - by Judarkness
    I know there are many algorithms to verify whether two line segments intersect. But once they encounter the parallel condition, they just tell the user a big "No" and pretend there is no overlap, shared end point, or coincident end point. I know I can calculate the distance between the two line segments and, if the distance is 0, check whether the end points are located within the other line segment. But this means I have to use a lot of if/else and &&/|| conditions. This is not difficult, but my question is: "Is there a trick (or mathematical) method to handle this special parallel case?"
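
    One such mathematical trick – a minimal sketch; the type, helper names and epsilon are mine, not from any particular library – is to reduce the parallel case to a one-dimensional interval test: if the segments are not collinear they cannot touch at all, and if they are, project both onto the direction vector of one segment and intersect the resulting intervals, which replaces the pile of && / || end-point cases:

    using System;

    public struct Point2
    {
        public double X, Y;
        public Point2(double x, double y) { X = x; Y = y; }
    }

    public static class ParallelSegments
    {
        const double Eps = 1e-9; // tolerance; scale it to your coordinate range

        // Cross product of (a - o) and (b - o): zero means o, a, b are collinear.
        static double Cross(Point2 o, Point2 a, Point2 b)
        {
            return (a.X - o.X) * (b.Y - o.Y) - (a.Y - o.Y) * (b.X - o.X);
        }

        // p1-p2 and q1-q2 are assumed parallel and non-degenerate. Returns true
        // if they overlap, touch at an end point, or one contains the other.
        public static bool ParallelOverlap(Point2 p1, Point2 p2, Point2 q1, Point2 q2)
        {
            // Parallel but not collinear: two distinct lines, no intersection.
            if (Math.Abs(Cross(p1, p2, q1)) > Eps) return false;

            // Collinear: project q1 and q2 onto the direction of p1->p2,
            // turning the problem into a 1-D interval intersection test.
            double dx = p2.X - p1.X, dy = p2.Y - p1.Y;
            double len2 = dx * dx + dy * dy;
            double t1 = ((q1.X - p1.X) * dx + (q1.Y - p1.Y) * dy) / len2;
            double t2 = ((q2.X - p1.X) * dx + (q2.Y - p1.Y) * dy) / len2;
            if (t1 > t2) { double tmp = t1; t1 = t2; t2 = tmp; }

            // p's segment is the interval [0, 1] in this parameterization.
            return t2 >= -Eps && t1 <= 1 + Eps;
        }
    }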

    Read the article

  • Oracle Enterprise Manager 11g Launch (27/May/10)

    - by Claudia Costa
    Don't miss this exclusive event for executives, IT managers and Oracle Partners, and explore how the latest release of Oracle Enterprise Manager enables business-driven IT management. Register today! Discover the new capabilities of Oracle Enterprise Manager 11g, which include: · Integrated management, from the application down to cloud computing, to maximize the return on IT investment · Business-driven application management, which allows the IT department to identify and fix problems before they impact the business · Integrated systems management and support, providing proactive notifications and fixes, combined with knowledge sharing among peers, to increase customer satisfaction. Join us and find out how only Oracle Enterprise Manager 11g can help IT proactively improve business value across a range of technologies, including Sun systems; the Oracle Solaris operating system; Oracle Database; Oracle Fusion Middleware; Oracle E-Business Suite; Oracle's Siebel, PeopleSoft and JD Edwards solutions; virtualization technologies and private cloud environments. There will be an exclusive session for Oracle partners covering topics such as specialization and exploring joint business opportunities in application and systems management.
    Agenda – Sana Lisboa Park Hotel, Avenida Fontes Pereira de Melo, 8, Lisbon. Thursday, 27 May 2010, 9:00-15:30
    9:00 Registration and Coffee
    9:30 Introduction
    9:40 Keynote: Business-driven IT Management with Oracle Enterprise Manager 11g
    10:25 Customer Experiences
    11:00 Break
    11:15 Integrated Application-to-disk Management
    11:45 Business-driven Application Management
    12:15 Integrated Cloud Management
    12:45 Integrated Systems Management and Support Experience
    13:15 Lunch
    14:30 Partner Session – Specialization and Business Opportunities with Oracle Enterprise Manager
    Register today to reserve your place at this exclusive event.

    Read the article

  • Building a Redundant / Distributed Application

    - by MattW
    This is more of a "point me in the right direction" question. My team of three and I have built a hosted web app that queues and routes customer chat requests to available customer service agents (it does other things as well, but this is enough background to illustrate the issue). The basic dev architecture today is: a single-page ajax web UI (ASP.NET MVC) with floating chat windows (think Gmail); a backend Windows service to queue and route the chat requests (this service also logs the chats, calculates service levels, etc.); and a Comet server product that routes data between the web frontend and the backend Windows service (this also helps us detect which Agents are still connected/online). And our hardware architecture today is: 2 servers to host the web UI portion of the application, a load balancer to route requests to the 2 different web app servers, and a third server to host the SQL Server DB and the backend Windows service responsible for queuing/delivering chats. So as it stands today, one of the web app servers could go down and we would be OK. However, if something happened to the SQL Server / Windows service server, we would be boned. My question: how can I spread this backend Windows service logic across multiple machines (distributed)? The Windows service is written to accept requests from the Comet server, check for available Agents, and route the chat to those agents. How can I make it more distributed, so that the work of the backend Windows service is spread across multiple machines for redundancy and uptime purposes? Will I need to re-write it with distributed computing in mind? I should also note that I am hosting all of this on Rackspace Cloud instances – so maybe it is something I should be less concerned about? Thanks in advance for any help!

    Read the article

  • Improve heavy work in a loop in multithreading

    - by xjaphx
    I have a little problem with my data processing.

    public void ParseDetails()
    {
        for (int i = 0; i < mListAppInfo.Count; ++i)
        {
            ParseOneDetail(i);
        }
    }

    For 300 records, it usually takes around 13-15 minutes. I've tried to improve this by using Parallel.For(), but it always stops at some point.

    public void ParseDetails()
    {
        Parallel.For(0, mListAppInfo.Count, i => ParseOneDetail(i));
    }

    In the method ParseOneDetail(int index), I write an output log line to track the record id currently being processed. It always hangs at some point, and I don't know why:

    ParseOneDetail(): 89 ... ParseOneDetail(): 90 ... ParseOneDetail(): 243 ... ParseOneDetail(): 92 ... ParseOneDetail(): 244 ... ParseOneDetail(): 93 ... ParseOneDetail(): 245 ... ParseOneDetail(): 247 ... ParseOneDetail(): 94 ... ParseOneDetail(): 248 ... ParseOneDetail(): 95 ... ParseOneDetail(): 99 ... ParseOneDetail(): 249 ... ParseOneDetail(): 100 ... _ <hangs at this point>

    Appreciate your help and suggestions to improve this. Thank you!

    Edit 1: update for the method:

    private void ParseOneDetail(int index)
    {
        Console.WriteLine("ParseOneDetail(): " + index + " ... ");
        ApplicationInfo appInfo = mListAppInfo[index];
        var htmlWeb = new HtmlWeb();
        var document = htmlWeb.Load(appInfo.AppAnnieURL);
        // get first one only
        HtmlNode nodeStoreURL = document.DocumentNode.SelectSingleNode(Constants.XPATH_FIRST);
        appInfo.StoreURL = nodeStoreURL.Attributes[Constants.HREF].Value;
    }

    Edit 2: This is the error output after running for a while, as Enigmativity suggested:

    ParseOneDetail(): 234 ... ParseOneDetail(): 87 ... ParseOneDetail(): 235 ... ParseOneDetail(): 236 ... ParseOneDetail(): 88 ... ParseOneDetail(): 238 ... ParseOneDetail(): 89 ... ParseOneDetail(): 90 ... ParseOneDetail(): 239 ... ParseOneDetail(): 92 ...

    Unhandled Exception: System.AggregateException: One or more errors occurred. ---> System.Net.WebException: The operation has timed out
       at System.Net.HttpWebRequest.GetResponse()
       at HtmlAgilityPack.HtmlWeb.Get(Uri uri, String method, String path, HtmlDocument doc, IWebProxy proxy, ICredentials creds) in D:\Source\htmlagilitypack.new\Trunk\HtmlAgilityPack\HtmlWeb.cs:line 1355
       at HtmlAgilityPack.HtmlWeb.LoadUrl(Uri uri, String method, WebProxy proxy, NetworkCredential creds) in D:\Source\htmlagilitypack.new\Trunk\HtmlAgilityPack\HtmlWeb.cs:line 1479
       at HtmlAgilityPack.HtmlWeb.Load(String url, String method) in D:\Source\htmlagilitypack.new\Trunk\HtmlAgilityPack\HtmlWeb.cs:line 1103
       at HtmlAgilityPack.HtmlWeb.Load(String url) in D:\Source\htmlagilitypack.new\Trunk\HtmlAgilityPack\HtmlWeb.cs:line 1061
       at SimpleChartParser.AppAnnieParser.ParseOneDetail(ApplicationInfo appInfo) in c:\users\nhn60\documents\visual studio 2010\Projects\FunToolPack\SimpleChartParser\AppAnnieParser.cs:line 90
       at SimpleChartParser.AppAnnieParser.<ParseDetails>b__0(ApplicationInfo ai) in c:\users\nhn60\documents\visual studio 2010\Projects\FunToolPack\SimpleChartParser\AppAnnieParser.cs:line 80
       at System.Threading.Tasks.Parallel.<>c__DisplayClass21`2.<ForEachWorker>b__17(Int32 i)
       at System.Threading.Tasks.Parallel.<>c__DisplayClassf`1.<ForWorker>b__c()
       at System.Threading.Tasks.Task.InnerInvoke()
       at System.Threading.Tasks.Task.InnerInvokeWithArg(Task childTask)
       at System.Threading.Tasks.Task.<>c__DisplayClass7.<ExecuteSelfReplicating>b__6(Object )
       --- End of inner exception stack trace ---
       at System.Threading.Tasks.Task.ThrowIfExceptional(Boolean includeTaskCanceledExceptions)
       at System.Threading.Tasks.Task.Wait(Int32 millisecondsTimeout, CancellationToken cancellationToken)
       at System.Threading.Tasks.Parallel.ForWorker[TLocal](Int32 fromInclusive, Int32 toExclusive, ParallelOptions parallelOptions, Action`1 body, Action`2 bodyWithState, Func`4 bodyWithLocal, Func`1 localInit, Action`1 localFinally)
       at System.Threading.Tasks.Parallel.ForEachWorker[TSource,TLocal](TSource[] array, ParallelOptions parallelOptions, Action`1 body, Action`2 bodyWithState, Action`3 bodyWithStateAndIndex, Func`4 bodyWithStateAndLocal, Func`5 bodyWithEverything, Func`1 localInit, Action`1 localFinally)
       at System.Threading.Tasks.Parallel.ForEachWorker[TSource,TLocal](IEnumerable`1 source, ParallelOptions parallelOptions, Action`1 body, Action`2 bodyWithState, Action`3 bodyWithStateAndIndex, Func`4 bodyWithStateAndLocal, Func`5 bodyWithEverything, Func`1 localInit, Action`1 localFinally)
       at System.Threading.Tasks.Parallel.ForEach[TSource](IEnumerable`1 source, Action`1 body)
       at SimpleChartParser.AppAnnieParser.ParseDetails() in c:\users\nhn60\documents\visual studio 2010\Projects\FunToolPack\SimpleChartParser\AppAnnieParser.cs:line 80
       at SimpleChartParser.Program.Main(String[] args) in c:\users\nhn60\documents\visual studio 2010\Projects\FunToolPack\SimpleChartParser\Program.cs:line 15
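
    The WebException in the trace points at too many simultaneous requests (or too slow a site) rather than at Parallel.For itself. Below is a hedged sketch of one mitigation, reusing the poster's names (mListAppInfo, ApplicationInfo, Constants) as drop-in replacements for the two methods: cap the degree of parallelism, lift .NET's default two-connections-per-host limit, and set an explicit per-request timeout through HtmlWeb's PreRequest hook so a slow page fails fast instead of stalling a worker. The numeric limits are guesses to tune, not tested values.

    using System;
    using System.Net;
    using System.Threading.Tasks;
    using HtmlAgilityPack;

    public void ParseDetails()
    {
        // .NET defaults to 2 concurrent HTTP connections per host; raise it
        // to match the parallelism we actually want.
        ServicePointManager.DefaultConnectionLimit = 8;

        var options = new ParallelOptions { MaxDegreeOfParallelism = 4 };
        Parallel.For(0, mListAppInfo.Count, options, i => ParseOneDetail(i));
    }

    private void ParseOneDetail(int index)
    {
        Console.WriteLine("ParseOneDetail(): " + index + " ... ");
        ApplicationInfo appInfo = mListAppInfo[index];
        var htmlWeb = new HtmlWeb();
        htmlWeb.PreRequest += request =>
        {
            request.Timeout = 30000; // ms; fail fast instead of stalling for
                                     // the default 100 seconds, then retry/log
            return true;             // returning false would cancel the request
        };
        var document = htmlWeb.Load(appInfo.AppAnnieURL);
        HtmlNode nodeStoreURL = document.DocumentNode.SelectSingleNode(Constants.XPATH_FIRST);
        appInfo.StoreURL = nodeStoreURL.Attributes[Constants.HREF].Value;
    }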

    Read the article

  • Partition Wise Joins II

    - by jean-pierre.dijcks
    One of the things that I did not talk about in the initial partition-wise join post is the effect it has on resource allocation on the database server. When Oracle applies a different join method – i.e. not a PWJ – what you will see in SQL Monitor (in Enterprise Manager) or in an explain plan is a set of producers and a set of consumers. The producers scan the tables in the join; if there are two tables, the producers first scan one table, then the other. The producers thus provide data to the consumers, and when the consumers have the data from both scans they do the join and give the data to the query coordinator. That behavior means that if you choose a degree of parallelism of 4 to run such a query, Oracle will allocate 8 parallel processes. Of these 8 processes, 4 are producers and 4 are consumers. The consumers only actually do work once the producers are fully done with scanning both sides of the join. In the plan above you can see that the producers access table SALES [line 11] and then do a PX SEND [line 9]. That is the producer set of processes working. The consumers receive that data [line 8] and twiddle their thumbs while the producers go on and scan CUSTOMERS. The producers send that data to the consumers, indicated by PX SEND [line 5]. After receiving that data [line 4] the consumers do the actual join [line 3] and give the data to the QC [line 2]. BTW, the above is why you see 2 times the DOP in processes – the myth that it is due to the setting PARALLEL_THREADS_PER_CPU=2 is obviously not true. In a PWJ plan the consumers are not present. Instead of producing rows and handing them to different processes, a PWJ uses only a single set of processes: each process reads its piece of the join across the two tables and performs the join. The plan here is notably different from the initial plan. First of all, the hash join is done right on top of both table scans [line 8]. This query is a little more complex than the previous one, so there is a bit of noise above that bit of info, but for this post let's ignore that (sort stuff). The important piece here is that the PWJ plan will typically be faster and, in terms of PX processes and resources, typically cheaper. You may want to look out for those plans and try to get them to appear a lot... CREDITS: credits for the plans and some of the info on them go to Maria, as she actually produced these plans and is the expert on plans in general... You can see her talk about explaining the explain plan and other optimizer topics at ODTUG in Washington DC (June 27 - July 1), on the Optimizer blog, and at OpenWorld in San Francisco (September 19 - 23). Happy joining, and hope to see you all at ODTUG and OOW...

    Read the article

  • Windows Azure Learning Plan - Architecture

    - by BuckWoody
    This is one in a series of posts on a Windows Azure Learning Plan. You can find the main post here. This one deals with what an Architect needs to know about Windows Azure.

    General Architectural Guidance – overview and general information about Azure: what it is, how it works, and where you can learn more.
    Cloud Computing, A Crash Course for Architects (Video) http://www.msteched.com/2010/Europe/ARC202
    Patterns and Practices for Cloud Development http://msdn.microsoft.com/en-us/library/ff898430.aspx
    Design Patterns, Anti-Patterns and Windows Azure http://blogs.msdn.com/b/ignitionshowcase/archive/2010/11/27/design-patterns-anti-patterns-and-windows-azure.aspx
    Application Patterns for the Cloud http://blogs.msdn.com/b/kashif/archive/2010/08/07/application-patterns-for-the-cloud.aspx
    Architecting Applications for High Scalability (Video) http://www.msteched.com/2010/Europe/ARC309
    David Aiken on Azure Architecture Patterns (Video) http://blogs.msdn.com/b/architectsrule/archive/2010/09/09/arcast-tv-david-aiken-on-azure-architecture-patterns.aspx
    Cloud Application Architecture Patterns (Video) http://blogs.msdn.com/b/bobfamiliar/archive/2010/10/19/cloud-application-architecture-patterns-by-david-platt.aspx
    10 Things Every Architect Needs to Know about Windows Azure http://geekswithblogs.net/iupdateable/archive/2010/10/20/slides-and-links-for-windows-azure-platform-session-at-software.aspx
    Key Differences Between Public and Private Clouds http://blogs.msdn.com/b/kadriu/archive/2010/10/24/key-differences-between-public-and-private-clouds.aspx
    Microsoft Application Platform at a Glance http://blogs.msdn.com/b/jmeier/archive/2010/10/30/microsoft-application-platform-at-a-glance.aspx
    Windows Azure is not just about Roles http://vikassahni.wordpress.com/2010/11/17/windows-azure-is-not-just-about-roles/
    Example Application for Windows Azure http://msdn.microsoft.com/en-us/library/ff966482.aspx

    Implementation Guidance – practical applications for the architect to consider.
    5 Enterprise steps for adopting a Platform as a Service http://blogs.msdn.com/b/davidmcg/archive/2010/12/02/5-enterprise-steps-for-adopting-a-platform-as-a-service.aspx?wa=wsignin1.0
    Performance-Based Scaling in Windows Azure http://msdn.microsoft.com/en-us/magazine/gg232759.aspx
    Windows Azure Guidance for the Development Process http://blogs.msdn.com/b/eugeniop/archive/2010/04/01/windows-azure-guidance-development-process.aspx
    Microsoft Developer Guidance Maps http://blogs.msdn.com/b/jmeier/archive/2010/10/04/developer-guidance-ia-at-a-glance.aspx
    How to Build a Hybrid On-Premise/In Cloud Application http://blogs.msdn.com/b/ignitionshowcase/archive/2010/11/09/how-to-build-a-hybrid-on-premise-in-cloud-application.aspx
    A Common Scenario of Multi-instances in Windows Azure http://blogs.msdn.com/b/windows-azure-support/archive/2010/11/03/a-common-scenario-of-multi_2d00_instances-in-windows-azure-.aspx
    Slides and Links for Windows Azure Platform Best Practices http://geekswithblogs.net/iupdateable/archive/2010/09/29/slides-and-links-for-windows-azure-platform-best-practices-for.aspx
    AppFabric Architecture and Deployment Topologies guide http://blogs.msdn.com/b/appfabriccat/archive/2010/09/10/appfabric-architecture-and-deployment-topologies-guide-now-available-via-microsoft-download-center.aspx
    Windows Azure Platform Appliance http://www.microsoft.com/windowsazure/appliance/

    Integrating Cloud Technologies into Your Organization – interoperability with open source and other applications; business and cost decisions.
    Interoperability Labs at Microsoft http://www.interoperabilitybridges.com/
    Windows Azure Service Level Agreements http://www.microsoft.com/windowsazure/sla/

    Read the article

  • In the Cloud, Everything Costs Money

    - by BuckWoody
    I’ve been teaching my daughter about budgeting. I’ve explained that most of the time the money coming in is from only one or two sources – and you can only change that from time to time. The money going out, however, is to many locations, and it changes all the time. She’s made a simple debits and credits spreadsheet, and I’m having her research each part of the budget. Her eyes grow wide when she finds out everything has a cost – the house, gas for the lawnmower, dishes, water for showers, food, electricity to run the fridge, a new fridge when that one breaks, everything has a cost. She asked me “how do you pay for all this?” It’s a sentiment many adults have looking at their own budgets – and one reason that some folks don’t even make a budget. It’s hard to face up to the realities of how much it costs to do what we want to do. When we design a computing solution, it’s interesting to set up a similar budget, because we don’t always consider all of the costs associated with it. I’ve seen design sessions where the new software or servers are considered, but the “sunk” costs of personnel, networking, maintenance, increased storage, new sizes for backups and offsite storage and so on are not added in. They are already on premises, so they are assumed to be paid for already. When you move to a distributed architecture, you'll see more costs directly reflected. Store something, pay for that storage. If the system is deployed and no one is using it, you’re still paying for it. As you watch those costs rise, you might be tempted to think that a distributed architecture costs more than an on-premises one. And you might be right – for some solutions. I’ve worked with a few clients where moving to a distributed architecture doesn’t make financial sense – so we didn’t implement it. I still designed the system in a distributed fashion, however, so that when it does make sense there isn’t much re-architecting to do. In other cases, however, if you consider all of the on-premises costs and compare those accurately to operating a system in the cloud, the distributed system is much cheaper. Again, I never recommend that you take a “here-or-there-only” mentality – I think a hybrid distributed system is usually best – but each solution is different. There simply is no “one size fits all” to architecting a solution. As you design your solution, cost out each element. You might find that using a hybrid approach saves you money in one design and not in another. It’s a brave new world indeed. So yes, in the cloud, everything costs money. But an on-premises solution also costs money – it’s just that “dad” (the company) is paying for it and we don’t always see it. When we go out on our own in the cloud, we need to ensure that we consider all of the costs.

    Read the article

  • How to prevent parallel builds per build configuration across multiple Build Agents

    - by vanslly
    I have many build configurations in TeamCity, each servicing a large project. In the past, if a build was kicked off, the Build Agent could be busy for up to 20 minutes! In order to improve throughput I installed a second Build Agent on the same machine, so that if a build run kicked off on Build Agent 1 keeps it busy for 20 minutes and someone from another project makes a change, then Build Agent 2 can do the build for the other project without needing to wait for the current build run to finish. All was well until two successive check-ins resulted in both Build Agents running builds for a single build configuration in parallel. Since some resources are shared – IIS directories and databases – I don't want a single build configuration to run on both Build Agents in parallel. How can I ensure a build isn't triggered if a build is currently running for that build configuration on a different build agent? One way seems to involve environment variables and ensuring a 50/50 split of build configuration compatibility between the Build Agents, but that seems a little clunky.

    Read the article

  • OpenMP - running things in parallel and some in sequence within them

    - by Sayan Ghosh
    Hi, I have a scenario like:

    for (i = 0; i < n; i++) {
        for (j = 0; j < m; j++) {
            for (k = 0; k < x; k++) {
                val = 2*i + j + 4*k;
                if (val != 0) {
                    for (t = 0; t < l; t++) {
                        someFunction((i + t) + someFunction(j + t) + k*t);
                    }
                }
            }
        }
    }

    Considering this as block A, I have two more similar blocks in my code. I want to put them in parallel, so I used OpenMP pragmas. However, I am not able to parallelize it, because I am a tad confused about which variables would be shared and private in this case. If the function call in the inner loop were an operation like sum += x, then I could have added a reduction clause. In general, how would one approach parallelizing code using OpenMP when there is a nested for loop, and then another inner for loop doing the main operation? I tried declaring a parallel region and then simply putting pragma fors before the blocks, but I am definitely missing something there! Thanks, Sayan

    Read the article

  • Reading and writing in parallel

    - by Malfist
    I want to be able to read and write a large file in parallel, or if not in parallel, at least in blocks so that I don't use up so much memory. This is my current code:

    // Define memory stream which will be used to hold encrypted data.
    MemoryStream memoryStream = new MemoryStream();
    // Define cryptographic stream (always use Write mode for encryption).
    CryptoStream cryptoStream = new CryptoStream(memoryStream, encryptor, CryptoStreamMode.Write);
    // Start encrypting.
    using (BinaryReader reader = new BinaryReader(File.Open(fileIn, FileMode.Open)))
    {
        byte[] buffer = new byte[1024 * 1024];
        int read = 0;
        do
        {
            read = reader.Read(buffer, 0, buffer.Length);
            cryptoStream.Write(buffer, 0, read);
        } while (read == buffer.Length);
    }
    // Finish encrypting.
    cryptoStream.FlushFinalBlock();
    // Convert our encrypted data from a memory stream into a byte array.
    //byte[] cipherTextBytes = memoryStream.ToArray();
    // Write our memory stream to a file.
    memoryStream.Position = 0;
    using (BinaryWriter writer = new BinaryWriter(File.Open(fileOut, FileMode.Create)))
    {
        byte[] buffer = new byte[1024 * 1024];
        int read = 0;
        do
        {
            read = memoryStream.Read(buffer, 0, buffer.Length);
            writer.Write(buffer, 0, read);
        } while (read == buffer.Length);
    }
    // Close both streams.
    memoryStream.Close();
    cryptoStream.Close();

    As you can see, it reads the entire file into memory, encrypts it, then writes it out. If I happen to be encrypting files that are very large (2GB+) it tends not to work, or at the very least, consumes ~97% of my memory. How could I do it in a more effective manner?
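
    A minimal sketch of the block-wise alternative, assuming the same ICryptoTransform named encryptor as in the question: chain the CryptoStream directly onto the output FileStream, so each encrypted block goes straight to disk and only one 1 MB buffer stays in memory regardless of file size.

    using System.IO;
    using System.Security.Cryptography;

    static void EncryptFile(string fileIn, string fileOut, ICryptoTransform encryptor)
    {
        using (FileStream input = File.OpenRead(fileIn))
        using (FileStream output = File.Create(fileOut))
        using (CryptoStream cryptoStream = new CryptoStream(output, encryptor, CryptoStreamMode.Write))
        {
            byte[] buffer = new byte[1024 * 1024];
            int read;
            while ((read = input.Read(buffer, 0, buffer.Length)) > 0)
            {
                // Each block is encrypted and written straight to disk.
                cryptoStream.Write(buffer, 0, read);
            }
            // Disposing the CryptoStream flushes the final block automatically,
            // so no explicit FlushFinalBlock() call is needed here.
        }
    }

    Memory use is now constant, so a 2GB+ input is no different from a small one.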

    Read the article

  • a problem with parallel.foreach in initializing conversation manager

    - by Adrakadabra
    I use MVC2 and NHibernate 2.1.2. In a controller class I call the ForEachParty method like this:

    OrganizationStructureService.ForEachParty<Department>(department, null,
        p => {
            p.AddParentWithoutRemovingExistentAccountability(domainDepartment,
                AccountabilityTypeDbId.SupervisionDepartmentOfDepartment);
        },
        x => (!(x.AccountabilityType.Id == (int)AccountabilityTypeDbId.SupervisionDepartmentOfDepartment)));

    static public void ForEachParty<T>(Party party, PartyTypeDbId? partyType,
        Action<Party> action, Expression<Func<Accountability, bool>> expression = null)
        where T : Party
    {
        IList<Accountability> children = new List<Accountability>();
        IList<Accountability> acc = party.Children;
        if (party != null)
            action(party);
        if (partyType != null)
            acc = acc.Where(p => p.Child.PartyTypes.Any(c => c.Id == (int)partyType)).ToList();
        if (expression != null)
            acc = acc.AsQueryable().Where(expression).ToList();
        Parallel.ForEach(acc, p =>
        {
            if (partyType == null)
                ForEachParty<T>(p.Child, null, action);
            else
                ForEachParty<T>(p.Child, partyType, action);
        });
    }

    But just after the action executes inside Parallel.ForEach, the conversation gets closed, I don't know why, and I see "current conversation is not initialized yet or is closed".
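
    The usual cause of this symptom is that NHibernate's ISession – and any conversation scoped to the ambient thread – is not thread-safe, while Parallel.ForEach runs the action on worker threads that never joined the conversation. A minimal hedged sketch of one workaround, using plain NHibernate rather than the poster's conversation framework (sessionFactory, Party and ProcessParty are placeholder names): give each parallel body its own session and transaction.

    using System.Collections.Generic;
    using System.Threading.Tasks;
    using NHibernate;

    public static void ForEachPartyInParallel(ISessionFactory sessionFactory, IList<int> partyIds)
    {
        Parallel.ForEach(partyIds, id =>
        {
            // One session per iteration; never share the ambient session
            // (or the conversation built on it) across threads.
            using (ISession session = sessionFactory.OpenSession())
            using (ITransaction tx = session.BeginTransaction())
            {
                var party = session.Get<Party>(id);
                ProcessParty(party); // placeholder for the real per-party action
                tx.Commit();
            }
        });
    }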

    Read the article

  • SQL SERVER – Integrate Your Data with Skyvia – Cloud ETL Solution

    - by Pinal Dave
    These days data integration often becomes a key aspect of business success. For business analysts it's very important to get integrated data from various sources, such as relational databases, cloud CRMs, etc., in order to make correct and successful decisions. There are various data integration solutions on the market, and today I will tell you about one of them – Skyvia. Skyvia is a cloud data integration service which allows integrating data in cloud CRMs and different relational databases. It is a completely online solution and does not require anything except a browser. Skyvia provides powerful ETL tools for data import, export, replication, and synchronization for SQL Server, other databases, and cloud CRMs. You can use Skyvia data import tools to load data from various sources to SQL Server (and SQL Azure). Skyvia supports cloud CRMs such as Salesforce and Microsoft Dynamics CRM and databases such as MySQL and PostgreSQL. You can even migrate data from SQL Server to SQL Server, or from SQL Server to other databases and cloud CRMs. Additionally, Skyvia supports import of CSV files, either uploaded manually or stored on cloud file storage services, such as Dropbox, Box, Google Drive, or FTP servers. When data import is not enough, Skyvia offers bidirectional data synchronization. With this tool, you can synchronize SQL Server data with other databases and cloud CRMs. After performing the first synchronization, Skyvia tracks data changes in the synchronized data storages; in SQL Server databases (and other relational databases) it creates additional tracking tables and triggers. This allows synchronizing only the changed data. Skyvia also maps records to each other by their primary key values, so it does not require different sources to have the same primary key structure. It can still match the corresponding records without having to add any additional columns or change the data structure. The only requirement for synchronization is that primary keys must be autogenerated. With Skyvia it's not necessary for data to have the same structure in the integrated data storages. Skyvia supports powerful mapping mechanisms that allow synchronizing data with completely different structures. It provides support for complex mathematical and string expressions when mapping data, using lookups, etc. You may use data splitting – loading data from a single CSV file or source table to multiple related target tables – or you may load data from several source CSV files or tables to several related target tables. In each case Skyvia preserves data relations; it builds the corresponding relations between the target data automatically. When you often work with cloud CRM data, native CRM data reporting and analysis tools may not be enough for you, while there is a vast set of professional data analysis and reporting tools available for SQL Server. With Skyvia you can quickly copy your cloud CRM data to a SQL Server database and apply the corresponding SQL Server tools to the data. In this case you can use Skyvia data replication tools, which allow you to quickly copy cloud CRM data to SQL Server or other databases without customizing any mapping. You just need to specify the columns to copy data from; target database tables will be created automatically. Skyvia offers powerful filtering settings to replicate only the records you need. Skyvia also provides the capability to export data from SQL Server (including SQL Azure) and other databases and cloud CRMs to CSV files. These files can either be downloaded manually or loaded to cloud file storages or an FTP server. You can use export, for example, to back up SQL Azure data to Dropbox. Any data integration operation can be scheduled for automatic execution. Thus, you can automate your SQL Azure data backup or data synchronization – just configure it once, then schedule it, and benefit from automatic data integration with Skyvia. Currently registration and use of Skyvia is completely free, so you can try it yourself and find out whether its data migration and integration tools suit you. Visit this link to register on Skyvia: https://app.skyvia.com/register Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL Tagged: Cloud Computing

    Read the article

  • Storage subsystem borking after server restart (all on a Parallel SCSI bus)

    - by Dat Chu
    I have a server (with a SCSI HBA) connected to two Promise VTrak M310p RAID enclosures on the same bus. Everything works fine until I have to restart my server. Once restarted, the server can no longer communicate with the enclosures: lots of read errors and bus resets. I have to turn off both enclosures, then turn off the server, then turn on the enclosures, then turn on the server for things to work. I don't believe this is the normal behavior; what could I be missing?

    Read the article

  • Oracle parameter array binding from c# executed parallel and serial on different servers

    - by redir_dev_nut
    I have two Oracle 9i 64-bit servers, dev and prod. Calling a procedure from a C# app with parameter array binding, prod executes the procedure simultaneously for each value in the parameter array, but dev executes serially for each value. So, if the sproc does:

    select count(*) into cnt from mytable where id = 123;
    if cnt = 0 then
        insert into mytable (id) values (123);
    end if;

    and the table initially does not have an id = 123 row: dev gets cnt = 0 for the first array parameter value, then 1 for each of the subsequent values. Prod gets cnt = 0 for all array parameter values and inserts id 123 for each. Is this a configuration difference, an illusion due to a speed difference, or something else?
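
    For reference, a hedged sketch of the kind of array-binding call being described, using ODP.NET (Oracle.DataAccess.Client); the procedure and parameter names are placeholders. Array binding sends one round trip and executes the statement once per array element; the observed difference suggests the two servers interleave those executions differently, so a MERGE or a unique constraint inside the procedure would sidestep the count-then-insert race on either server.

    using Oracle.DataAccess.Client;

    static void CallWithArrayBinding(OracleConnection conn, int[] ids)
    {
        using (OracleCommand cmd = conn.CreateCommand())
        {
            cmd.CommandText = "my_proc";     // placeholder procedure name
            cmd.CommandType = System.Data.CommandType.StoredProcedure;
            cmd.ArrayBindCount = ids.Length; // one execution per array element

            OracleParameter p = new OracleParameter("p_id", OracleDbType.Int32);
            p.Value = ids;                   // bind the whole array at once
            cmd.Parameters.Add(p);

            cmd.ExecuteNonQuery();           // single round trip to the server
        }
    }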

    Read the article

  • Privoxy-like proxy that handles multiple parallel connections?

    - by overtherainbow
    Hello. I use Privoxy on my XP host to filter/rewrite web pages, but it's slower because all connections go through Privoxy's single port. According to this post on Stack Overflow, by default browsers support more than one simultaneous connection, which would explain why going through Privoxy is slower. Does someone know of a similar application that can handle more than one connection? Thank you.

    Read the article

  • Distributed, Parallel, Fault-tolerant File System

    - by Eddified
    There are so many choices that it's hard to know where to start. My requirements are these: runs on Linux; most of the files will be between 5-9 MB in size, with a significant number of small-ish JPGs (100px x 100px) as well; all of the files need to be available over HTTP; redundancy – ideally it would provide space efficiency similar to RAID 5's 75% (in RAID 5 this would be calculated thus: with 4 identical disks, 25% of the space is used for parity, so it is 75% efficient); must support several petabytes of data; scalable; runs on commodity hardware. In addition, I am looking for these qualities, though they are not "requirements": a stable, mature file system; lots of momentum and support; etc. I would like some input as to which file system works best for the given requirements. Some people at my organization are leaning towards MogileFS, but I'm not convinced of the stability and momentum of that project. GlusterFS and Lustre, based on my limited research, appear to be better supported... Thoughts?

    Read the article
