Search Results

Search found 4204 results on 169 pages for 'green computing'.

Page 14/169 | < Previous Page | 10 11 12 13 14 15 16 17 18 19 20 21  | Next Page >

  • Significant new inventions in computing since 1980

    - by Alan Kay
    This question arose from comments about different kinds of progress in computing over the last 50 years or so. I was asked by some of the other participants to raise it as a question to the whole forum. Basic idea here is not to bash the current state of things but to try to understand something about the progress of coming up with fundamental new ideas and principles. I claim that we need really new ideas in most areas of computing, and I would like to know of any important and powerful ones that have been done recently. If we can't really find them, then we should ask "Why?" and "What should we be doing?"

    Read the article

  • Pros and cons of cloud computing?

    - by Vimvq1987
    After 3 months of research, my thesis is nearly complete. Now I'm writing the report: the interesting parts are finished, and the boring, hard-to-write parts remain. I need to write about the pros and cons of cloud computing: what it gives us and what it takes from us. I've searched a lot, but I only find lists with no explanations. So I need your help to list and explain the pros and cons of cloud computing. Thank you so much.

    Read the article

  • Launch of Oracle Enterprise Manager 11g - (27/May/10)

    - by Claudia Costa
    Don't miss this exclusive event for executives, IT managers and Oracle Partners, and explore how the latest release of Oracle Enterprise Manager lets IT management be driven by the business. Register today! Discover the new capabilities of Oracle Enterprise Manager 11g, which include: integrated management, from the application all the way to Cloud Computing, aimed at maximizing the return on IT investment; business-driven application management, which lets the IT department identify and fix problems before they have an impact on the business; and integrated systems management and support, providing proactive notifications and fixes combined with peer knowledge sharing, to increase customer satisfaction. Join us and learn how only Oracle Enterprise Manager 11g can help IT proactively improve business value across a range of technologies, including Sun systems; the Oracle Solaris operating system; Oracle Database; Oracle Fusion Middleware; Oracle E-Business Suite; Oracle's Siebel, PeopleSoft and JD Edwards solutions; and virtualization technologies and private cloud environments. There will also be a session exclusively for Oracle partners covering topics such as specialization and exploring joint business opportunities in the areas of application and systems management. Agenda - Sana Lisboa Park Hotel, Avenida Fontes Pereira de Melo, 8, Lisboa. Thursday, 27 May 2010, 9:00-15:30. 9:00 Registration and coffee; 9:30 Introduction; 9:40 Keynote: Business-driven IT Management with Oracle Enterprise Manager 11g; 10:25 Customer experiences; 11:00 Break; 11:15 Integrated Application-to-disk Management; 11:45 Business-driven Application Management; 12:15 Integrated Cloud Management; 12:45 Integrated Systems Management and Support Experience; 13:15 Lunch; 14:30 Partner session: Specialization and business opportunities with Oracle Enterprise Manager. Register today to reserve your place at this exclusive event.

    Read the article

  • Building a Redundant / Distributed Application

    - by MattW
    This is more of a "point me in the right direction" question. I (and my team of 3) have built a hosted web app that queues and routes customer chat requests to available customer service agents (it does other things as well, but this is enough background to illustrate the issue). The basic dev architecture today is: a single-page Ajax web UI (ASP.NET MVC) with floating chat windows (think Gmail); a backend Windows service to queue and route the chat requests (this service also logs the chats, calculates service levels, etc.); and a Comet server product that routes data between the web frontend and the backend Windows service (this also helps us detect which agents are still connected/online). And our hardware architecture today is: 2 servers to host the web UI portion of the application; a load balancer to route requests to the 2 different web app servers; and a third server to host the SQL Server DB and the backend Windows service responsible for queuing/delivering chats. So as it stands today, one of the web app servers could go down and we would be OK. However, if something happened to the SQL Server / Windows service server, we would be boned. My question: how can I spread this backend Windows service logic across multiple machines (distribute it)? The Windows service is written to accept requests from the Comet server, check for available agents, and route the chat to those agents. How can I make this more distributed, so that the work of the backend Windows service is spread across multiple machines for redundancy and uptime purposes? Will I need to rewrite it with distributed computing in mind? I should also note that I am hosting all of this on Rackspace Cloud instances, so maybe it is something I should be less concerned about? Thanks in advance for any help!

    Read the article

  • How to remove a "green screen" portrait background

    - by danbystrom
    I'm looking for a way to automatically remove (=make transparent) a "green screen" portrait background from a lot of pictures. My own attempts this far have been... ehum... less successful. I'm looking around for any hints or solutions or papers on the subject. Commercial solutions are just fine, too. And before you comment and say that it is impossible to do this automatically: no it isn't. There actually exists a company which offers exactly this service, and if I fail to come up with a different solution we're going to use them. The problem is that they guard their algorithm with their lives, and therefore won't sell/license their software. Instead we have to FTP all pictures to them where the processing is done and then we FTP the result back home. (And no, they don't have an underpaid staff hidden away in the Philippines which handles this manually, since we're talking several thousand pictures a day...) However, this approach limits its usefulness for several reasons. So I'd really like a solution where this could be done instantly while being offline from the internet.
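
    For reference, a minimal sketch of the brute-force version of this idea, assuming Pillow and NumPy and using illustrative, untuned thresholds: treat a pixel as background when its green channel clearly dominates red and blue, and write the result out with an alpha channel. Real chroma-key products add spill suppression and soft alpha matting around hair and edges, which is what makes them hard to reproduce.

      import numpy as np
      from PIL import Image

      # Hypothetical file names; a crude chroma-key threshold, not a production matting algorithm.
      rgb = np.asarray(Image.open("portrait.png").convert("RGB")).astype(np.int32)
      r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]

      # Background = pixels where green clearly dominates both other channels (thresholds are guesses).
      background = (g > 1.3 * r) & (g > 1.3 * b) & (g > 90)

      alpha = np.where(background, 0, 255).astype(np.uint8)
      rgba = np.dstack([rgb.astype(np.uint8), alpha])
      Image.fromarray(rgba, "RGBA").save("portrait_keyed.png")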

    Read the article

  • Windows Azure Learning Plan - Architecture

    - by BuckWoody
    This is one in a series of posts on a Windows Azure Learning Plan. You can find the main post here. This one deals with what an Architect needs to know about Windows Azure.   General Architectural Guidance Overview and general  information about Azure - what it is, how it works, and where you can learn more. Cloud Computing, A Crash Course for Architects (Video) http://www.msteched.com/2010/Europe/ARC202 Patterns and Practices for Cloud Development http://msdn.microsoft.com/en-us/library/ff898430.aspx Design Patterns, Anti-Patterns and Windows Azure http://blogs.msdn.com/b/ignitionshowcase/archive/2010/11/27/design-patterns-anti-patterns-and-windows-azure.aspx Application Patterns for the Cloud http://blogs.msdn.com/b/kashif/archive/2010/08/07/application-patterns-for-the-cloud.aspx Architecting Applications for High Scalability (Video) http://www.msteched.com/2010/Europe/ARC309 David Aiken on Azure Architecture Patterns (Video) http://blogs.msdn.com/b/architectsrule/archive/2010/09/09/arcast-tv-david-aiken-on-azure-architecture-patterns.aspx Cloud Application Architecture Patterns (Video) http://blogs.msdn.com/b/bobfamiliar/archive/2010/10/19/cloud-application-architecture-patterns-by-david-platt.aspx 10 Things Every Architect Needs to Know about Windows Azure http://geekswithblogs.net/iupdateable/archive/2010/10/20/slides-and-links-for-windows-azure-platform-session-at-software.aspx Key Differences Between Public and Private Clouds http://blogs.msdn.com/b/kadriu/archive/2010/10/24/key-differences-between-public-and-private-clouds.aspx Microsoft Application Platform at a Glance http://blogs.msdn.com/b/jmeier/archive/2010/10/30/microsoft-application-platform-at-a-glance.aspx Windows Azure is not just about Roles http://vikassahni.wordpress.com/2010/11/17/windows-azure-is-not-just-about-roles/ Example Application for Windows Azure http://msdn.microsoft.com/en-us/library/ff966482.aspx Implementation Guidance Practical applications for the architect to consider 5 Enterprise steps for adopting a Platform as a Service http://blogs.msdn.com/b/davidmcg/archive/2010/12/02/5-enterprise-steps-for-adopting-a-platform-as-a-service.aspx?wa=wsignin1.0 Performance-Based Scaling in Windows Azure http://msdn.microsoft.com/en-us/magazine/gg232759.aspx Windows Azure Guidance for the Development Process http://blogs.msdn.com/b/eugeniop/archive/2010/04/01/windows-azure-guidance-development-process.aspx Microsoft Developer Guidance Maps http://blogs.msdn.com/b/jmeier/archive/2010/10/04/developer-guidance-ia-at-a-glance.aspx How to Build a Hybrid On-Premise/In Cloud Application http://blogs.msdn.com/b/ignitionshowcase/archive/2010/11/09/how-to-build-a-hybrid-on-premise-in-cloud-application.aspx A Common Scenario of Multi-instances in Windows Azure http://blogs.msdn.com/b/windows-azure-support/archive/2010/11/03/a-common-scenario-of-multi_2d00_instances-in-windows-azure-.aspx Slides and Links for Windows Azure Platform Best Practices http://geekswithblogs.net/iupdateable/archive/2010/09/29/slides-and-links-for-windows-azure-platform-best-practices-for.aspx AppFabric Architecture and Deployment Topologies guide http://blogs.msdn.com/b/appfabriccat/archive/2010/09/10/appfabric-architecture-and-deployment-topologies-guide-now-available-via-microsoft-download-center.aspx Windows Azure Platform Appliance http://www.microsoft.com/windowsazure/appliance/ Integrating Cloud Technologies into Your Organization Interoperability with Open Source and other applications; business and cost decisions Interoperability Labs at 
Microsoft http://www.interoperabilitybridges.com/ Windows Azure Service Level Agreements http://www.microsoft.com/windowsazure/sla/

    Read the article

  • In the Cloud, Everything Costs Money

    - by BuckWoody
    I’ve been teaching my daughter about budgeting. I’ve explained that most of the time the money coming in is from only one or two sources – and you can only change that from time to time. The money going out, however, is to many locations, and it changes all the time. She’s made a simple debits and credits spreadsheet, and I’m having her research each part of the budget. Her eyes grow wide when she finds out everything has a cost – the house, gas for the lawnmower, dishes, water for showers, food, electricity to run the fridge, a new fridge when that one breaks, everything has a cost. She asked me “how do you pay for all this?” It’s a sentiment many adults have looking at their own budgets – and one reason that some folks don’t even make a budget. It’s hard to face up to the realities of how much it costs to do what we want to do. When we design a computing solution, it’s interesting to set up a similar budget, because we don’t always consider all of the costs associated with it. I’ve seen design sessions where the new software or servers are considered, but the “sunk” costs of personnel, networking, maintenance, increased storage, new sizes for backups and offsite storage and so on are not added in. They are already on premises, so they are assumed to be paid for already. When you move to a distributed architecture, you'll see more costs directly reflected. Store something, pay for that storage. If the system is deployed and no one is using it, you’re still paying for it. As you watch those costs rise, you might be tempted to think that a distributed architecture costs more than an on-premises one. And you might be right – for some solutions. I’ve worked with a few clients where moving to a distributed architecture doesn’t make financial sense – so we didn’t implement it. I still designed the system in a distributed fashion, however, so that when it does make sense there isn’t much re-architecting to do. In other cases, however, if you consider all of the on-premises costs and compare those accurately to operating a system in the cloud, the distributed system is much cheaper. Again, I never recommend that you take a “here-or-there-only” mentality – I think a hybrid distributed system is usually best – but each solution is different. There simply is no “one size fits all” to architecting a solution. As you design your solution, cost out each element. You might find that using a hybrid approach saves you money in one design and not in another. It’s a brave new world indeed. So yes, in the cloud, everything costs money. But an on-premises solution also costs money – it’s just that “dad” (the company) is paying for it and we don’t always see it. When we go out on our own in the cloud, we need to ensure that we consider all of the costs.

    Read the article

  • SQL SERVER – Integrate Your Data with Skyvia – Cloud ETL Solution

    - by Pinal Dave
    In our days data integration often becomes a key aspect of business success. For business analysts it’s very important to get integrated data from various sources, such as relational databases, cloud CRMs, etc. to make correct and successful decisions. There are various data integration solutions on market, and today I will tell about one of them – Skyvia. Skyvia is a cloud data integration service, which allows integrating data in cloud CRMs and different relational databases. It is a completely online solution and does not require anything except for a browser. Skyvia provides powerful etl tools for data import, export, replication, and synchronization for SQL Server and other databases and cloud CRMs. You can use Skyvia data import tools to load data from various sources to SQL Server (and SQL Azure). Skyvia supports such cloud CRMs as Salesforce and Microsoft Dynamics CRM and such databases as MySQL and PostgreSQL. You even can migrate data from SQL Server to SQL Server, or from SQL Server to other databases and cloud CRMs. Additionally Skyvia supports import of CSV files, either uploaded manually or stored on cloud file storage services, such as Dropbox, Box, Google Drive, or FTP servers. When data import is not enough, Skyvia offers bidirectional data synchronization. With this tool, you can synchronize SQL Server data with other databases and cloud CRMs. After performing the first synchronization, Skyvia tracks data changes in the synchronized data storages. In SQL Server databases (and other relational databases) it creates additional tracking tables and triggers. This allows synchronizing only the changed data. Skyvia also maps records by their primary key values to each other, so it does not require different sources to have the same primary key structure. It still can match the corresponding records without having to add any additional columns or changing data structure. The only requirement for synchronization is that primary keys must be autogenerated. With Skyvia it’s not necessary for data to have the same structure in integrated data storages. Skyvia supports powerful mapping mechanisms that allow synchronizing data with completely different structure. It provides support for complex mathematical and string expressions when mapping data, using lookups, etc. You may use data splitting – loading data from a single CSV file or source table to multiple related target tables. Or you may load data from several source CSV files or tables to several related target tables. In each case Skyvia preserves data relations. It builds corresponding relations between the target data automatically. When you often work with cloud CRM data, native CRM data reporting and analysis tools may be not enough for you. And there is a vast set of professional data analysis and reporting tools available for SQL Server. With Skyvia you can quickly copy your cloud CRM data to an SQL Server database and apply corresponding SQL Server tools to the data. In such case you can use Skyvia data replication tools. It allows you to quickly copy cloud CRM data to SQL Server or other databases without customizing any mapping. You need just to specify columns to copy data from. Target database tables will be created automatically. Skyvia offers powerful filtering settings to replicate only the records you need. Skyvia also provides capability to export data from SQL Server (including SQL Azure) and other databases and cloud CRMs to CSV files. 
These files can either be downloaded manually or loaded to cloud file storage services or an FTP server. You can use export, for example, to back up SQL Azure data to Dropbox. Any data integration operation can be scheduled for automatic execution. Thus, you can automate your SQL Azure data backup or data synchronization: just configure it once, then schedule it, and benefit from automatic data integration with Skyvia. Currently, registration and use of Skyvia are completely free, so you can try it yourself and find out whether its data migration and integration tools suit you. Visit this link to register on Skyvia: https://app.skyvia.com/register Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL Tagged: Cloud Computing

    Read the article

  • Computing Number of Bits in Public Key

    - by eb80
    I am working with DKIM and trying to compute the public key size of some DKIM signatures. I know from tools that Gmail's is now 2048, but how could I have figured this out myself (i.e., what exact Linux commands and why)? user@host$ dig txt 20120113._domainkey.gmail.com ; <<>> DiG 9.8.3-P1 <<>> txt 20120113._domainkey.gmail.com ;; global options: +cmd ;; Got answer: ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 52228 ;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 0 ;; QUESTION SECTION: ;20120113._domainkey.gmail.com. IN TXT ;; ANSWER SECTION: 20120113._domainkey.gmail.com. 300 IN TXT "k=rsa\; p=MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA1Kd87/UeJjenpabgbFwh+eBCsSTrqmwIYYvywlbhbqoo2DymndFkbjOVIPIldNs/m40KF+yzMn1skyoxcTUGCQs8g3FgD2Ap3ZB5DekAo5wMmk4wimDO+U8QzI3SD0" "7y2+07wlNWwIt8svnxgdxGkVbbhzY8i+RQ9DpSVpPbF7ykQxtKXkv/ahW3KjViiAH+ghvvIhkx4xYSIc9oSwVmAl5OctMEeWUwg8Istjqz8BZeTWbf41fbNhte7Y+YqZOwq1Sd0DbvYAD9NOZK9vlfuac0598HY+vtSBczUiKERHv1yRbcaQtZFh5wtiRrN04BLUTD21MycBX5jYchHjPY/wIDAQAB" ;; Query time: 262 msec ;; SERVER: 8.8.8.8#53(8.8.8.8) ;; WHEN: Mon Nov 19 10:52:06 2012 ;; MSG SIZE rcvd: 462
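
    One way to compute it yourself, sketched below with Python and the cryptography package (the file name is a stand-in for wherever you paste the p= value; join the two quoted chunks from the TXT record into a single base64 string first): the p= tag is a base64-encoded DER SubjectPublicKeyInfo, so decode it and read the modulus size.

      import base64
      from cryptography.hazmat.primitives.serialization import load_der_public_key

      # dkim_p.txt is assumed to hold the p= value from the TXT record,
      # with the two quoted chunks joined into one base64 string.
      p_tag = open("dkim_p.txt").read().strip()

      der = base64.b64decode(p_tag)        # p= is base64-encoded DER (SubjectPublicKeyInfo)
      key = load_der_public_key(der)       # cryptography >= 3.1: no backend argument needed
      print(key.key_size)                  # prints 2048 for the current Gmail selector

    The same check can be done from the shell by base64-decoding the p= value into a file and running openssl rsa -pubin -inform DER -in key.der -noout -text, which prints the key size on its first line.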

    Read the article

  • Does cloud computing offer this? [closed]

    - by TheBlackBenzKid
    I have some newbie questions about cloud hosting that I'd like answered, please. We are currently looking at Rackspace and getting a Windows box. This is the situation: we have 15 computers in our office, 3 printers, and a mix of wifi and wired network connections. We have a standard router, and the office shares things via Dropbox. The computers are not on Windows SBS or anything similar. We want a cloud hosting solution that offers the following: users can log in on any machine in the office and see the machine's software; users can log in on any machine in the office and open Outlook, with their emails and signature on Exchange automatically; a shared company folder on the network; all printers automatically installed on the network; and users can log in remotely to access emails via the web. At the moment we have a network company saying we need an in-house Xeon server with backup and PSU, Windows SBS with a license for each machine, plus cabinets and cabling, load balancers, and modification of our DNS for email. My question is this: can the cloud offer this? Can we have a server in the cloud that does this? Is it possible, I mean, that the computers would connect wirelessly to this cloud, and you turn the machine on and it's hosted?

    Read the article

  • How to Smooth the drawing Stroke?

    - by user1852420
    I am creating drawing.. i can undo, and put colors on it. but when i draw using my fingers the stroke is not that smooth and has edge lines,, here my codes. on which I can Paint on a view, Undo, change color, and the opacity. stroke.h #import <UIKit/UIKit.h> @interface stroke : UIView{ NSMutableArray *strokeArray; UIColor *strokeColor; int strokeSize; float strokeAlpha; int strokeAlpha2; IBOutlet UISlider *slides; float red; float green; float blue; CGPoint mid1; CGPoint mid2; CGPoint endingPoint,previousPoint1,previousPoint2; CGPoint currentTouch; } @property (nonatomic, retain) UIColor *strokeColor; @property (nonatomic) int strokeSize; @property (nonatomic, retain) NSMutableArray *strokeArray; - (IBAction)changeAlphaValue; -(void)loadSLider; -(void)blueColor; -(void)darkvioletColor; -(void)violetColor; -(void)pinkColor; -(void)darkbrownColor; -(void)redColor; -(void)magentaRedColor; -(void)lightBrownColor; -(void)lightOrangeColor; -(void)OrangeColor; -(void)YellowColor; -(void)greenColor; -(void)lightYellowColor; -(void)darkGreenColor; -(void)TurquioseColor; -(void)PaleTurquioseColor; -(void)skyBlueColor; -(void)whiteColor; -(void)DirtyWhiteColor; -(void)SilverColor; -(void)LightGrayColor; -(void)GrayColor; -(void)LightBlackColor; -(void)BlackColor; @end stroke.m #import "stroke.h" @implementation stroke @synthesize strokeColor; @synthesize strokeSize; @synthesize strokeArray; - (void) awakeFromNib{ self.strokeArray = [[NSMutableArray alloc] init]; self.strokeColor = [UIColor colorWithRed:0 green:0 blue:232 alpha:1]; self.strokeSize = 3; } - (void)drawRect:(CGRect)rect{ NSMutableArray *stroke; for (stroke in strokeArray) { CGContextRef contextRef = UIGraphicsGetCurrentContext(); CGContextSetLineWidth(contextRef, [[stroke objectAtIndex:1] intValue]); CGFloat *color = CGColorGetComponents([[stroke objectAtIndex:2] CGColor]); CGContextSetRGBStrokeColor(contextRef, color[0], color[1], color[2], color[3]); CGContextBeginPath(contextRef); CGPoint points[[stroke count]]; for (NSUInteger i = 3; i < [stroke count]; i++) { points[i-3] = [[stroke objectAtIndex:i] CGPointValue]; } CGContextAddLines(contextRef, points, [stroke count]-3); CGContextStrokePath(contextRef); } } -(void)loadSLider{ } - (IBAction)changeAlphaValue{ strokeAlpha2 =((int)slides.value); } -(void)blueColor{ red = 0/255.0; green = 0/255.0; blue = 255/255.0; } -(void)darkvioletColor{ red = 75/255.0; green = 0/255.0; blue = 130/255.0; } -(void)violetColor{ red = 128/255.0; green = 0/255.0; blue = 128/255.0; } -(void)pinkColor{ red = 255/255.0; green = 0/255.0; blue = 255/255.0; } -(void)darkbrownColor{ red = 0.200; green = 0.0; blue = 0.0; } -(void)redColor{ red = 255/255.0; green = 0/255.0; blue = 0/255.0; } -(void)magentaRedColor{ red = 0.350; green = 0.0; blue = 0.0; } -(void)lightBrownColor{ red = 0.480; green = 0.0; blue = 0.0; } -(void)lightOrangeColor{ red = 0.600; green = 0.200; blue = 0.0; } -(void)OrangeColor{ red = 1.0; green = 0.300; blue = 0.0; } -(void)YellowColor{ red = 0.950; green = 0.450; blue = 0.0; } -(void)greenColor{ red = 0.0; green = 1.0; blue = 0.0; } -(void)lightYellowColor{ red = 1.0; green = 1.0; blue = 0.0; } -(void)darkGreenColor{ red = 0.0; green = 0.500; blue = 0.0; } -(void)TurquioseColor{ red = 0.0; green = 0.700; blue = 0.200; } -(void)PaleTurquioseColor{ red = 0.0; green = 0.700; blue = 0.600; } -(void)skyBlueColor{ red = 0.0; green = 0.400; blue = 0.800; } -(void)whiteColor{ red = 1.0; green = 1.0; blue = 1.0; } -(void)DirtyWhiteColor{ red = 0.800; green = 0.800; blue = 0.800; } 
-(void)SilverColor{ red = 0.600; green = 0.600; blue = 0.600; } -(void)LightGrayColor{ red = 0.500; green = 0.500; blue = 0.500; } -(void)GrayColor{ red = 0.300; green = 0.300; blue = 0.300; } -(void)LightBlackColor{ red = 0.150; green = 0.150; blue = 0.150; } -(void)BlackColor{ red = 0.0; green = 0.0; blue = 0.0; } - (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event { UITouch *touch; NSEnumerator *counter = [touches objectEnumerator]; while ((touch = (UITouch *)[counter nextObject])) { switch (strokeAlpha2) { case 1: strokeAlpha = .1; break; case 2: strokeAlpha = .2; break; case 3: strokeAlpha = .3; break; case 4: strokeAlpha = .4; break; case 5: strokeAlpha = .5; break; case 6: strokeAlpha = .6; break; case 7: strokeAlpha = .7; break; case 8: strokeAlpha = .8; break; case 9: strokeAlpha = .9; break; case 10: strokeAlpha = 1; break; default: strokeAlpha = 1; break; } self.strokeColor = [UIColor colorWithRed:red green:green blue:blue alpha:strokeAlpha]; NSValue *touchPos = [NSValue valueWithCGPoint:[touch locationInView:self]]; UIColor *color = [UIColor colorWithCGColor:strokeColor.CGColor]; NSNumber *size = [NSNumber numberWithInt:strokeSize]; NSMutableArray *stroke = [NSMutableArray arrayWithObjects: touch, size, color, touchPos, nil]; [strokeArray addObject:stroke]; } } - (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event { UITouch *touch; NSEnumerator *counter = [touches objectEnumerator]; while ((touch = (UITouch *)[counter nextObject])) { NSMutableArray *stroke; for (stroke in strokeArray) { if ([stroke objectAtIndex:0] == touch) { [stroke addObject:[NSValue valueWithCGPoint:[touch locationInView:self]]]; } [self setNeedsDisplay]; } } } @end
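
    A common fix for the jagged "edge lines" is to stop connecting raw touch points with straight segments (which is what CGContextAddLines does) and instead draw quadratic curves that pass through the midpoints between successive touches, using the touch points themselves as control points. Below is a language-neutral sketch of that resampling, written in Python purely to illustrate the geometry; the touch coordinates are made up, and in Core Graphics the equivalent call would be CGContextAddQuadCurveToPoint (presumably what the unused mid1/mid2 and previousPoint ivars above were intended for).

      def midpoint(p, q):
          return ((p[0] + q[0]) / 2.0, (p[1] + q[1]) / 2.0)

      def quad_bezier(p0, ctrl, p1, steps=8):
          """Sample a quadratic Bezier curve from p0 to p1 with control point ctrl."""
          pts = []
          for i in range(steps + 1):
              t = i / steps
              x = (1 - t) ** 2 * p0[0] + 2 * (1 - t) * t * ctrl[0] + t ** 2 * p1[0]
              y = (1 - t) ** 2 * p0[1] + 2 * (1 - t) * t * ctrl[1] + t ** 2 * p1[1]
              pts.append((x, y))
          return pts

      def smooth_stroke(touches):
          """Midpoint smoothing: each raw touch becomes the control point of a
          quadratic segment running between the midpoints of its neighbours."""
          if len(touches) < 3:
              return list(touches)
          out = [touches[0]]
          for a, b, c in zip(touches, touches[1:], touches[2:]):
              out += quad_bezier(midpoint(a, b), b, midpoint(b, c))
          out.append(touches[-1])
          return out

      raw = [(0, 0), (10, 2), (18, 9), (30, 12), (41, 25)]   # made-up touch points
      print(smooth_stroke(raw)[:5])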

    Read the article

  • Ikoula: the Green Fish and Crazy Fish servers get more storage space; what do you think of the host's dedicated servers?

    Ikoula: the Green Fish and Crazy Fish servers get more storage space; what do you think of the host's dedicated servers? Storage space, performance and cost are key considerations when a developer, administrator or any other IT professional sets out to rent a server for their work. Ikoula, while keeping its price list unchanged, is upgrading its Green Fish and Crazy Fish server offerings with the introduction of a 1 TB SATA hard drive, versus...

    Read the article

  • Ikoula offers 40 GB of backup space with every subscription to its Green Fish dedicated web server

    Ikoula offers 40 GB of backup space with every subscription to its Green Fish dedicated web server. Last November, Ikoula launched a new range of dedicated web servers starting at 24 euros per month. A well-designed entry-level offering, dubbed "Green Fish", aimed particularly at developers looking for an inexpensive dedicated server with a good price/quality ratio (see the separate article). Today, to keep promoting this new product, Ikoula is offering 40 GB of backup space with every new subscription. This promotion comes with no commitment...

    Read the article

  • Book Review - Programming Windows Azure by Sriram Krishnan

    - by BuckWoody
    As part of my professional development, I’ve created a list of books to read throughout the year, starting in June of 2011. This a review of the first one, called Programming Windows Azure by Siriram Krishnan. You can find my entire list of books I’m reading for my career here: http://blogs.msdn.com/b/buckwoody/archive/2011/06/07/head-in-the-clouds-eyes-on-the-books.aspx  Why I Chose This Book: As part of my learning style, I try to read multiple books about a single subject. I’ve found that at least 3 books are necessary to get the right amount of information to me. This is a “technical” work, meaning that it deals with technology and not business, writing or other facets of my career. I’ll have a mix of all of those as I read along. I chose this work in addition to others I’ve read since it covers everything from an introduction to more advanced topics in a single book. It also has some practical examples of actually working with the product, particularly on storage. Although it’s dated, many examples normally translate. I also saw that it had pretty good reviews. What I learned: I learned a great deal about storage, and many useful code snippets. I do think that there could have been more of a focus on the application fabric - but of course that wasn’t as mature a feature when this book was written. I learned some great architecture examples, and in one section I learned more about encryption. In that example, however, I would rather have seen the examples go the other way - the book focused on moving data from on-premise to Azure storage in an encrypted fashion. Using the Application Fabric I would rather see sensitive data left in a hybrid fashion on premise, and connect to for the Azure application. Even so, the examples were very useful. If you’re looking for a good “starter” Azure book, this is a good choice. I also recommend the last chapter as a quick read for a DBA, or Database Administrator. It’s not very long, but useful. Note that the limits described are incorrect - which is one of the dangers of reading a book about any cloud offering. The services offered are updated so quickly that the information is in constant danger of being “stale”. Even so, I found this a useful book, which I believe will help me work with Azure better. Raw Notes: I take notes as I read, calling that process “reading with a pencil”. I find that when I do that I pay attention better, and record some things that I need to know later. I’ll take these notes, categorize them into a OneNote notebook that I synchronize in my Live.com account, and that way I can search them from anywhere. I can even read them on the web, since the Live.com has a OneNote program built in. Note that these are the raw notes, so they might not make a lot of sense out of context - I include them here so you can watch my though process. Programming Windows Azure by Siriram Krishnan: Learning about how to select applications suitable for Distributed Technology. Application Fabric gets the least attention; probably because it was newer at the time. Very clear (Chapter One) Good foundation Background and history, but not too much I normally arrange my descriptions differently, starting with the use-cases and moving to physicality, but this difference helps me. Interesting that I am reading this using Safari Books Online, which uses many of these concepts. 
Taught me some new aspects of a Hypervisor – very low-level information about the Azure Fabric (not to be confused with the Application Fabric feature) (Chapter Two) Good detail of what is included in the SDK. Even more is available now. CS = Cloud Service (Chapter 3) Place Storage info in the configuration file, since it can be streamed in-line with a running app. Ditto for logging, and keep separated configs for staging and testing. Easy-switch in and switch out.  (Chapter 4) There are two Runtime API’s, one of external and one for internal. Realizing how powerful this paradigm really is. Some places seem light, and to drop off but perhaps that’s best. Managing API is not charged, which is nice. I don’t often think about the price, until it comes to an actual deployment (Chapter 5) Csmanage is something I want to dig into deeper. API requires package moves to Blob storage first, so it needs a URL. Csmanage equivalent can be written in Unix scripting using openssl. Upgrades are possible, and you use the upgradeDomainCount attribute in the Service-Definition.csdef file  Always use a low-privileged account to test on the dev fabric, since Windows Azure runs in partial trust. Full trust is available, but can be dangerous and must be well-thought out. (Chapter 6) Learned how to run full CMD commands in a web window – not that you would ever do that, but it was an interesting view into those links. This leads to a discussion on hosting other runtimes (such as Java or PHP) in Windows Azure. I got an expanded view on this process, although this is where the book shows its age a little. Books can be a problem for Cloud Computing for this reason – things just change too quickly. Windows Azure storage is not eventually consistent – it is instantly consistent with multi-phase commit. Plumbing for this is internal, not required to code that. (Chapter 7) REST API makes the service interoperable, hybrid, and consistent across code architectures. Nicely done. Use affinity groups to keep data and code together. Side note: e-book readers need a common “notes” feature. There’s a decent quick description of REST in this chapter. Learned about CloudDrive code – PowerShell sample that mounts Blob storage as a local provider. Works against Dev fabric by default, can be switched to Account. Good treatment in the storage chapters on the differences between using Dev storage and Azure storage. These can be mitigated. No, blobs are not of any size or number. Not a good statement (Chapter 8) Blob storage is probably Azure’s closest play to Infrastructure as a Service (Iaas). Blob change operations must be authenticated, even when public. Chapters on storage are pretty in-depth. Queue Messages are base-64 encoded (Chapter 9) The visibility timeout ensures processing of message in a disconnected system. Order is not guaranteed for a message, so if you need that set an increasing number in the queue mechanism. While Queues are accessible via REST, they are not public and are secured by default. Interesting – the header for a queue request includes an estimated count. This can be useful to create more worker roles in a dynamic system. Each Entity (row) in the Azure Table service is atomic – all or nothing. (Chapter 10) An entity can have up to 255 Properties  Use “ID” for the class to indicate the key value, or use the [DataServiceKey] Attribute.  LINQ makes working with the Azure Table Service much easier, although Interop is certainly possible. Good description on the process of selecting the Partition and Row Key.  
When checking for continuation tokens for pagination, include logic that falls out of the check in case you are at the last page.  On deleting a storage object, it is instantly unavailable, however a background process is dispatched to perform the physical deletion. So if you want to re-create a storage object with the same name, add retry logic into the code. Interesting approach to deleting an index entity without having to read it first – create a local entity with the same keys and apply it to the Azure system regardless of change-state.  Although the “Indexes” description is a little vague, it’s interesting to see a Folding and Stemming discussion a-la the Porter Stemming Algorithm. (Chapter 11)  Presents a better discussion of indexes (at least inverted indexes) later in the chapter. Great treatment for DBA’s in Chapter 11. We need to work on getting secondary indexes in Table storage. There is a limited form of transactions called “Entity Group Transactions” that, although they have conditions, makes a transactional system more possible. Concurrency also becomes an issue, but is handled well if you’re using Data Services in .NET. It watches the Etag and allows you to take action appropriately. I do not recommend using Azure as a location for secure backups. In fact, I would rather have seen the examples in (Chapter 12) go the other way, showing how data could be brought back to a local store as a DR or HA strategy. Good information on cryptography and so on even so. Chapter seems out of place, and should be combined with the Blob chapter.  (Chapter 13) on SQL Azure is dated, although the base concepts are OK.  Nice example of simple ADO.NET access to a SQL Azure (or any SQL Server Really) database.  

    Read the article

  • Algorithm for computing the inverse of a polynomial

    - by Neville
    I'm looking for an algorithm (or code) to help me compute the inverse of a polynomial; I need it for implementing NTRUEncrypt. I would prefer an algorithm that is easily understandable: there is pseudo-code for doing this, but it is confusing and difficult to implement, and I cannot really understand the procedure from pseudo-code alone. Are there any algorithms for computing the inverse of a polynomial with respect to a ring of truncated polynomials?
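
    For the modulo-a-small-prime part of NTRU key generation, this is the extended Euclidean algorithm over Z_p[x] run against x^N - 1. A short sketch using SymPy follows; the parameters and the polynomial are illustrative toy values, not a real NTRU parameter set, and SymPy raises an error if the polynomial happens not to be invertible.

      from sympy import GF, Poly, invert, symbols

      x = symbols('x')
      N, p = 7, 3                          # toy parameters, not a real NTRU parameter set

      f = Poly(x**6 + x**4 - x**3 + x**2 - 1, x, domain=GF(p))   # ternary "small" polynomial
      modulus = Poly(x**N - 1, x, domain=GF(p))

      f_inv = invert(f, modulus)           # extended Euclid under the hood; errors if f is not invertible
      print(f_inv)

      # Sanity check: f * f_inv should reduce to 1 modulo x^N - 1 over GF(p).
      print((f * f_inv).rem(modulus))

    NTRU also needs the inverse modulo a power of two (e.g. q = 2048); that one is usually obtained by computing the inverse mod 2 and then lifting it with a few Newton (Hensel) iterations.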

    Read the article

  • computing hash values, integral types versus struct/class

    - by aaa
    Hello, I would like to know if there is a difference in speed between computing the hash value (for example, of an std::map key) of a primitive integral type, such as int64_t, and of a POD type, for example struct { int16_t v[4]; };. I know this is going to be implementation specific, so my question ultimately pertains to the GNU standard library. Thanks

    Read the article

  • What does "Computing additional info" mean?

    - by JesperE
    Eclipse Helios periodically starts running a job which displays "Computing additional info". During this time, Eclipse is very sluggish, bordering on unusable. What does this job do? Can I shut it off? I just hope that someone on the JDT team comes to their senses and gets rid of it, makes it go faster, or at the very least changes it to something meaningful.

    Read the article

  • R: optimal way of computing the "product" of two vectors

    - by Musa
    Hi, Let's assume that I have a vector r <- rnorm(4) and a matrix W of dimension 20000*200 for example: W <- matrix(rnorm(20000*200),20000,200) I want to compute a new matrix M of dimension 5000*200 such that m11 <- r%*%W[1:4,1], m21 <- r%*%W[5:8,1], m12 <- r%*%W[1:4,2] etc. (i.e. grouping rows 4-by-4 and computing the product). What's the optimal (speed,memory) way of doing this? Thanks in advance.
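
    The usual trick is to reshape W so that each group of 4 consecutive rows becomes its own slice, then contract that slice with r in one vectorized operation instead of looping. The sketch below shows the idea in NumPy rather than R (in R the analogue is re-dimensioning W to a 4 x 5000 x 200 array and summing r times each slice over the first dimension):

      import numpy as np

      rng = np.random.default_rng(0)
      r = rng.standard_normal(4)
      W = rng.standard_normal((20000, 200))

      # View W as 5000 groups of 4 consecutive rows, then contract the group axis with r:
      # M[i, k] = sum_j r[j] * W[4*i + j, k]
      M = np.einsum('j,ijk->ik', r, W.reshape(5000, 4, 200))
      print(M.shape)   # (5000, 200)

      # Check against the naive definition for the first block and column.
      assert np.isclose(M[0, 0], r @ W[0:4, 0])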

    Read the article

  • Cloud security and privacy

    - by Rakesh K
    Hi, I have a very basic question regarding cloud computing, which is catching on pretty fast these days. To my understanding, cloud computing is a paradigm in which companies put their data and applications on somebody else's machines, aka 'The Cloud'. I want to know just how secure it is to put my data on third-party machines, especially if that data contains private details. In particular, how can an enterprise trust cloud computing service providers on this data privacy aspect? Thanks, rakesh.

    Read the article

  • Using TPL and PLINQ to raise performance of feed aggregator

    - by DigiMortal
    In this posting I will show you how to use Task Parallel Library (TPL) and PLINQ features to boost performance of simple RSS-feed aggregator. I will use here only very basic .NET classes that almost every developer starts from when learning parallel programming. Of course, we will also measure how every optimization affects performance of feed aggregator. Feed aggregator Our feed aggregator works as follows: Load list of blogs Download RSS-feed Parse feed XML Add new posts to database Our feed aggregator is run by task scheduler after every 15 minutes by example. We will start our journey with serial implementation of feed aggregator. Second step is to use task parallelism and parallelize feeds downloading and parsing. And our last step is to use data parallelism to parallelize database operations. We will use Stopwatch class to measure how much time it takes for aggregator to download and insert all posts from all registered blogs. After every run we empty posts table in database. Serial aggregation Before doing parallel stuff let’s take a look at serial implementation of feed aggregator. All tasks happen one after other. internal class FeedClient {     private readonly INewsService _newsService;     private const int FeedItemContentMaxLength = 255;       public FeedClient()     {          ObjectFactory.Initialize(container =>          {              container.PullConfigurationFromAppConfig = true;          });           _newsService = ObjectFactory.GetInstance<INewsService>();     }       public void Execute()     {         var blogs = _newsService.ListPublishedBlogs();           for (var index = 0; index <blogs.Count; index++)         {              ImportFeed(blogs[index]);         }     }       private void ImportFeed(BlogDto blog)     {         if(blog == null)             return;         if (string.IsNullOrEmpty(blog.RssUrl))             return;           var uri = new Uri(blog.RssUrl);         SyndicationContentFormat feedFormat;           feedFormat = SyndicationDiscoveryUtility.SyndicationContentFormatGet(uri);           if (feedFormat == SyndicationContentFormat.Rss)             ImportRssFeed(blog);         if (feedFormat == SyndicationContentFormat.Atom)             ImportAtomFeed(blog);                 }       private void ImportRssFeed(BlogDto blog)     {         var uri = new Uri(blog.RssUrl);         var feed = RssFeed.Create(uri);           foreach (var item in feed.Channel.Items)         {             SaveRssFeedItem(item, blog.Id, blog.CreatedById);         }     }       private void ImportAtomFeed(BlogDto blog)     {         var uri = new Uri(blog.RssUrl);         var feed = AtomFeed.Create(uri);           foreach (var item in feed.Entries)         {             SaveAtomFeedEntry(item, blog.Id, blog.CreatedById);         }     } } Serial implementation of feed aggregator downloads and inserts all posts with 25.46 seconds. Task parallelism Task parallelism means that separate tasks are run in parallel. You can find out more about task parallelism from MSDN page Task Parallelism (Task Parallel Library) and Wikipedia page Task parallelism. Although finding parts of code that can run safely in parallel without synchronization issues is not easy task we are lucky this time. Feeds import and parsing is perfect candidate for parallel tasks. We can safely parallelize feeds import because importing tasks doesn’t share any resources and therefore they don’t also need any synchronization. 
After getting the list of blogs we iterate through the collection and start new TPL task for each blog feed aggregation. internal class FeedClient {     private readonly INewsService _newsService;     private const int FeedItemContentMaxLength = 255;       public FeedClient()     {          ObjectFactory.Initialize(container =>          {              container.PullConfigurationFromAppConfig = true;          });           _newsService = ObjectFactory.GetInstance<INewsService>();     }       public void Execute()     {         var blogs = _newsService.ListPublishedBlogs();                var tasks = new Task[blogs.Count];           for (var index = 0; index <blogs.Count; index++)         {             tasks[index] = new Task(ImportFeed, blogs[index]);             tasks[index].Start();         }           Task.WaitAll(tasks);     }       private void ImportFeed(object blogObject)     {         if(blogObject == null)             return;         var blog = (BlogDto)blogObject;         if (string.IsNullOrEmpty(blog.RssUrl))             return;           var uri = new Uri(blog.RssUrl);         SyndicationContentFormat feedFormat;           feedFormat = SyndicationDiscoveryUtility.SyndicationContentFormatGet(uri);           if (feedFormat == SyndicationContentFormat.Rss)             ImportRssFeed(blog);         if (feedFormat == SyndicationContentFormat.Atom)             ImportAtomFeed(blog);                }       private void ImportRssFeed(BlogDto blog)     {          var uri = new Uri(blog.RssUrl);          var feed = RssFeed.Create(uri);           foreach (var item in feed.Channel.Items)          {              SaveRssFeedItem(item, blog.Id, blog.CreatedById);          }     }     private void ImportAtomFeed(BlogDto blog)     {         var uri = new Uri(blog.RssUrl);         var feed = AtomFeed.Create(uri);           foreach (var item in feed.Entries)         {             SaveAtomFeedEntry(item, blog.Id, blog.CreatedById);         }     } } You should notice first signs of the power of TPL. We made only minor changes to our code to parallelize blog feeds aggregating. On my machine this modification gives some performance boost – time is now 17.57 seconds. Data parallelism There is one more way how to parallelize activities. Previous section introduced task or operation based parallelism, this section introduces data based parallelism. By MSDN page Data Parallelism (Task Parallel Library) data parallelism refers to scenario in which the same operation is performed concurrently on elements in a source collection or array. In our code we have independent collections we can process in parallel – imported feed entries. As checking for feed entry existence and inserting it if it is missing from database doesn’t affect other entries the imported feed entries collection is ideal candidate for parallelization. 
internal class FeedClient {     private readonly INewsService _newsService;     private const int FeedItemContentMaxLength = 255;       public FeedClient()     {          ObjectFactory.Initialize(container =>          {              container.PullConfigurationFromAppConfig = true;          });           _newsService = ObjectFactory.GetInstance<INewsService>();     }       public void Execute()     {         var blogs = _newsService.ListPublishedBlogs();                var tasks = new Task[blogs.Count];           for (var index = 0; index <blogs.Count; index++)         {             tasks[index] = new Task(ImportFeed, blogs[index]);             tasks[index].Start();         }           Task.WaitAll(tasks);     }       private void ImportFeed(object blogObject)     {         if(blogObject == null)             return;         var blog = (BlogDto)blogObject;         if (string.IsNullOrEmpty(blog.RssUrl))             return;           var uri = new Uri(blog.RssUrl);         SyndicationContentFormat feedFormat;           feedFormat = SyndicationDiscoveryUtility.SyndicationContentFormatGet(uri);           if (feedFormat == SyndicationContentFormat.Rss)             ImportRssFeed(blog);         if (feedFormat == SyndicationContentFormat.Atom)             ImportAtomFeed(blog);                }       private void ImportRssFeed(BlogDto blog)     {         var uri = new Uri(blog.RssUrl);         var feed = RssFeed.Create(uri);           feed.Channel.Items.AsParallel().ForAll(a =>         {             SaveRssFeedItem(a, blog.Id, blog.CreatedById);         });      }        private void ImportAtomFeed(BlogDto blog)      {         var uri = new Uri(blog.RssUrl);         var feed = AtomFeed.Create(uri);           feed.Entries.AsParallel().ForAll(a =>         {              SaveAtomFeedEntry(a, blog.Id, blog.CreatedById);         });      } } We did small change again and as the result we parallelized checking and saving of feed items. This change was data centric as we applied same operation to all elements in collection. On my machine I got better performance again. Time is now 11.22 seconds. Results Let’s visualize our measurement results (numbers are given in seconds). As we can see then with task parallelism feed aggregation takes about 25% less time than in original case. When adding data parallelism to task parallelism our aggregation takes about 2.3 times less time than in original case. More about TPL and PLINQ Adding parallelism to your application can be very challenging task. You have to carefully find out parts of your code where you can safely go to parallel processing and even then you have to measure the effects of parallel processing to find out if parallel code performs better. If you are not careful then troubles you will face later are worse than ones you have seen before (imagine error that occurs by average only once per 10000 code runs). Parallel programming is something that is hard to ignore. Effective programs are able to use multiple cores of processors. Using TPL you can also set degree of parallelism so your application doesn’t use all computing cores and leaves one or more of them free for host system and other processes. And there are many more things in TPL that make it easier for you to start and go on with parallel programming. In next major version all .NET languages will have built-in support for parallel programming. There will be also new language constructs that support parallel programming. 
Currently you can download Visual Studio Async to get some idea about what is coming. Conclusion Parallel programming is very challenging but good tools offered by Visual Studio and .NET Framework make it way easier for us. In this posting we started with feed aggregator that imports feed items on serial mode. With two steps we parallelized feed importing and entries inserting gaining 2.3 times raise in performance. Although this number is specific to my test environment it shows clearly that parallel programming may raise the performance of your application significantly.

    Read the article

  • Cloud computing could save Europe 763 billion euros and lead to the creation of 2.4 million jobs

    Cloud computing could save Europe 763 billion euros and lead to the creation of 2.4 million jobs. Our neighbours across the Channel have just published a very interesting study. It reveals that widespread adoption of cloud computing in Europe could have extremely positive consequences for the economy. Mass use of this technology would allow EU countries to save 763 billion euros over five years! Indeed, it would spare companies from having to build all of their IT infrastructure themselves. Instead, they would simply rent various services and storage. Cloud computing also makes it possible to save energy, and while its price may still seem...

    Read the article

  • computing matting Laplacian matrix of an Image

    - by ajith
    Hi everyone, I need to compute the matting Laplacian matrix L for an (n x n) image in OpenCV. The computation goes as follows: the contribution of window w_k to entry (i, j) is d_ij - (1/|w_k|) * [1 + (I_i - µ_k)(I_j - µ_k) / (ε/|w_k| + σ_k²)] for all (i, j) ∈ w_k, and summing these contributions over k yields the (i, j)-th element of L. Here d_ij is the Kronecker delta, µ_k and σ_k² are the mean and variance of the intensities in the window w_k around pixel k, and |w_k| is the number of pixels in this window (w_k is a 3x3 window). I am not clear about two things: 1. What will be the size of L: n x n or (n·n) x (n·n)? 2. How do I select I_i and I_j separately in a 2D image?
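
    On question 1: L is indexed by pairs of pixels, so for an n x n image it is (n·n) x (n·n), and it is very sparse because an entry is non-zero only when pixels i and j fall in a common window. On question 2: I_i and I_j are the intensities at the two flat (row-major) pixel indices i and j. Below is a deliberately naive, dense Python sketch of the formula above (not OpenCV C++); it is only usable on tiny images, but it is handy for checking a faster sparse implementation.

      import numpy as np

      def matting_laplacian_dense(I, eps=1e-5, win_rad=1):
          """Naive dense matting Laplacian (Levin et al.) for a grayscale image I
          with (2*win_rad+1)^2 windows. O((h*w)^2) memory: only for tiny images."""
          h, w = I.shape
          n = h * w
          win_size = (2 * win_rad + 1) ** 2
          L = np.zeros((n, n))
          for y in range(win_rad, h - win_rad):
              for x in range(win_rad, w - win_rad):
                  # flat indices of the pixels in window w_k centred at (y, x)
                  idx = [(y + dy) * w + (x + dx)
                         for dy in range(-win_rad, win_rad + 1)
                         for dx in range(-win_rad, win_rad + 1)]
                  vals = I.flat[idx]
                  mu, var = vals.mean(), vals.var()
                  for a, i in enumerate(idx):
                      for b, j in enumerate(idx):
                          delta = 1.0 if i == j else 0.0
                          L[i, j] += delta - (1.0 + (vals[a] - mu) * (vals[b] - mu)
                                              / (eps / win_size + var)) / win_size
          return L

      tiny = np.random.rand(6, 6)
      L = matting_laplacian_dense(tiny)
      print(L.shape)                        # (36, 36)
      print(np.allclose(L.sum(axis=1), 0))  # rows of a matting Laplacian sum to zero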

    Read the article

< Previous Page | 10 11 12 13 14 15 16 17 18 19 20 21  | Next Page >