Search Results

Search found 8824 results on 353 pages for 'cloud virtualization vmware density scalability'.

Page 317/353 | < Previous Page | 313 314 315 316 317 318 319 320 321 322 323 324  | Next Page >

  • How to use your computer to save the world?

    - by Francisco Garcia
    Sometimes I miss the "help other people" factor within computer-related fields. However, there are little things that we all can do to make this a better place—beyond trying to eradicate annoying stuff such as Visual Basic. You could join a cloud computing network such as World Community Grid to fight cancer, write a charityware application such as Vim, improve office IT infrastructure to support telecommuting and reduce CO2 emissions, use an ebook reader to save paper, ... What else can we do to help others? Which projects can have the biggest impact?

    Read the article

  • Integrating Incoming Email Into a php/mysql App

    - by phirschybar
    I am looking to create an incoming email daemon switchboard that I can integrate with various remote PHP/MySQL apps. Ideally I want to check the 'to' address to see if it is in a MySQL database and, if it is, have the email parsed and posted via cURL to a target destination, as well as have attachments saved somewhere locally. I will likely set up a Rackspace Cloud server dedicated to this task (just accepting emails and posting to third-party APIs). However, I do not know where to start. Which server platform / distribution should I go with? Which software needs to be customized, etc.?
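
    A minimal sketch of the core loop, assuming the mail server pipes each raw message into a script (via an alias or .forward entry); the address set, spool directory, and endpoint URL below are placeholders for the real MySQL lookup and target API:

        # email_switchboard.py -- hedged sketch, not a finished daemon.
        import sys
        import json
        import email
        import urllib.request
        from email import policy
        from email.utils import parseaddr

        KNOWN_ADDRESSES = {"inbox@example.com"}          # stand-in for the MySQL lookup
        TARGET_URL = "https://example.com/api/incoming"  # hypothetical destination
        SPOOL_DIR = "/var/spool/switchboard/"            # hypothetical attachment store

        def handle(raw_bytes):
            msg = email.message_from_bytes(raw_bytes, policy=policy.default)
            to_addr = parseaddr(msg["To"] or "")[1].lower()
            if to_addr not in KNOWN_ADDRESSES:
                return  # unknown recipient: drop or bounce as you prefer

            # Save attachments locally, then forward the text part to the target API.
            for part in msg.iter_attachments():
                filename = part.get_filename() or "attachment.bin"
                with open(SPOOL_DIR + filename, "wb") as fh:
                    fh.write(part.get_payload(decode=True))

            body = msg.get_body(preferencelist=("plain",))
            payload = {
                "to": to_addr,
                "from": msg["From"],
                "subject": msg["Subject"],
                "body": body.get_content() if body else "",
            }
            req = urllib.request.Request(
                TARGET_URL,
                data=json.dumps(payload).encode("utf-8"),
                headers={"Content-Type": "application/json"},
            )
            urllib.request.urlopen(req, timeout=10)

        if __name__ == "__main__":
            handle(sys.stdin.buffer.read())

    Any mainstream Linux distribution with Postfix or Exim can deliver mail to a script like this; the switchboard logic itself does not depend on the distribution.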

    Read the article

  • What's the proper way of importing option lists into an Android app?

    - by Scott
    I have been storing option lists for my Android app in a cloud table. For example, categories like "historical fiction", "biography", "science fiction", etc. I see the following pros and cons:

    Pros: I can make changes to the list without sending an app update to Google Play; and since it is not normalized, I can use the text in my other data tables instead of a reference ID. Cons: the app needs to take time to download the list from the web each time (or at least check for changes), and it is English only.

    I believe the "proper" way to do this is to use the XML resource files. But I need to make sure the selection references correctly with my data. That is, my app needs to understand that "Poetry" and "Poesía" are the same thing. Is the correct thing to do: (1) forget about it, since I'll never get to the point where I'm translating my app anyway; (2) use a string-array and use the index (0...x) to know what the selection is; or (3) use a two-dimensional string-array with a reference ID in the first column and the text in the second?
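
    A minimal sketch of option (3), shown as a plain mapping rather than Android resource syntax: the data tables store only a stable ID, and the display text comes from a per-locale lookup. The IDs and labels below are illustrative placeholders:

        # Stable category IDs live in the saved data; labels are resolved per locale.
        CATEGORY_LABELS = {
            "en": {10: "Poetry", 11: "Biography", 12: "Science fiction"},
            "es": {10: "Poesía", 11: "Biografía", 12: "Ciencia ficción"},
        }

        def label_for(category_id, locale="en"):
            """Return the display text for a stored category ID in the given locale."""
            return CATEGORY_LABELS.get(locale, CATEGORY_LABELS["en"])[category_id]

        # A record saved by the app stores the ID, never the translated text:
        record = {"title": "Some book", "category_id": 10}
        print(label_for(record["category_id"], "es"))   # -> Poesía

    The same shape maps onto the two-column string-array idea in the question: the first column is the ID that goes into the data, the second is the localized text that goes on screen.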

    Read the article

  • Best practices for (over)using Azure queues

    - by John
    Hi, I'm in the early phases of designing an Azure-based application. One of the things that attracts me to Azure is the scalability, given the variability of the demand I expect. As such I'm trying to keep things loosely coupled so I can add instances when I need to. The recommendations I've seen for architecting an application for Azure include keeping web role logic to a minimum, and having processing done in worker roles, using queues to communicate and some sort of back-end store like SQL Azure or Azure Tables. This seems like a good idea to me as I can scale up either or both parts of the application without any issue. However, I'm curious whether there are any best practices (or whether anyone has any experience) for when it's best to just have the web role talk directly to the data store vs. sending data via the queue? I'm thinking of the case where I have a simple insert to do from the web role - while I could set this up as a message, send it on the queue, and have a worker role pick it up and do the insert, it seems like a lot of double-handling. However, I also appreciate that it may be the case that this is better in the long run, in case the web role gets overwhelmed or more complex logic ends up being required for the insert. I realise this might be a case where the answer is "it depends entirely on the situation, check your perf metrics" - but if anyone has any thoughts I'd be very appreciative! Thanks, John
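
    A minimal sketch of the two options being weighed, with an in-process queue.Queue standing in for an Azure queue and a thread standing in for a worker role (this is a pattern illustration, not Azure SDK code):

        import json
        import queue
        import threading

        work_queue = queue.Queue()

        def datastore_insert(record):
            print("INSERT", record)   # stand-in for SQL Azure / Azure Tables

        def web_role_direct_insert(record):
            # Option A: the web role talks to the store itself -- the simplest path,
            # but the web role now owns retries and absorbs any store latency.
            datastore_insert(record)

        def web_role_enqueue(record):
            # Option B: the web role only serializes a small message and returns;
            # a worker role does the insert, so spikes queue up instead of
            # blocking requests, at the cost of the double-handling noted above.
            work_queue.put(json.dumps(record))

        def worker_role_loop():
            while True:
                message = work_queue.get()
                datastore_insert(json.loads(message))
                work_queue.task_done()

        threading.Thread(target=worker_role_loop, daemon=True).start()
        web_role_enqueue({"user": "john", "action": "signup"})
        work_queue.join()

    One common middle ground is to write trivial inserts directly from the web role and reserve the queue for work that is slow, retry-prone, or bursty.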

    Read the article

  • IIS7: URL Rewrite - can it be used to hide a CDN path?

    - by Wild Thing
    Hi, I am using Rackspace Cloud CDN (Limelight CDN) for my website. The CDN URLs are in the format http://cxxxxxx.cdn.cloudfiles.rackspacecloud.com/something.jpg, and my domain is mydomain.com. Can I use IIS URL rewriting to show http://cxxxxxx.cdn.cloudfiles.rackspacecloud.com/something.jpg as http://images.mydomain.com/something.jpg? Or is this impossible without the CDN setup accepting my CNAME? If it is possible, can you please help me create the URL rewrite rule? (Sorry, I don't know how to use regular expressions.) Thanks, WT

    Read the article

  • Continuous build infrastructure recommendations for primarily C++; GreenHills Integrity

    - by andersoj
    I need your recommendations for continuous build products for a large (1-2 MLOC) software development project. Characteristics:

    - ClearCase revision control
    - Approx. 80% C++, 15% Java, 5% script or low-level
    - Compiles for the Green Hills Integrity OS, but also some Windows and JVM chunks
    - Mostly an embedded system; also includes some UI pieces and some development support (simulation tools, config tools, etc...)
    - Each notional "version" of the deliverable includes deployment images for a number of boards, UI machines, etc... (~10 separate images; 5 distinct operating systems)
    - Need to maintain/track many simultaneous versions which, notably, are built for a variety of different board support packages
    - Build cycle time is a major issue on the project; we need support for whatever features help address this (mostly managing a large farm of build machines, I guess)
    - Operates in a secure environment (this is a gov't program). (Edited to add: this is a classified program; outsourcing the build infrastructure is a non-starter.)

    I am interested in any best practices or peripheral guidance you might offer. The build automation issue is one of several overlapping best practices that appear to be missing on the program, but please keep your answers focused on the build infrastructure piece and on observations directly related to it. Cost is not an object. Scalability and ease of retrofitting onto an existing infrastructure are key. JA

    Read the article

  • How to partition a plane

    - by puls200
    Let's say I have a fixed number (X) of points, e.g. coordinates within a given plane (I think you can call it a 2-D point cloud). These points should be partitioned into Y polygons where Y < X. The polygons should not overlap. It would be wonderful if the polygons were convex (like a Voronoi diagram). Imagine it like locations forming countries. For example, I have 12 points and want to create 3 polygons with 4 points each. I thought about creating a grid which covers the points and then iterating across the points, assigning them to the closest grid cells. Maybe I'm missing the obvious? I am sure there are better solutions. Thanks, Daniel. (Update: I just found the k-means++ optimization. Maybe this will yield better results.)
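
    A minimal k-means sketch in pure Python (with no equal-size constraint): every point ends up with its nearest centroid, so the implied regions are the centroids' Voronoi cells, which are convex. Forcing exactly 4 points per cluster would need a constrained variant on top of this.

        import random
        from math import dist   # Python 3.8+

        def kmeans(points, k, iterations=100, seed=0):
            rng = random.Random(seed)
            centroids = rng.sample(points, k)
            for _ in range(iterations):
                # Assignment step: each point joins its nearest centroid.
                clusters = [[] for _ in range(k)]
                for p in points:
                    nearest = min(range(k), key=lambda i: dist(p, centroids[i]))
                    clusters[nearest].append(p)
                # Update step: move each centroid to the mean of its cluster.
                for i, cluster in enumerate(clusters):
                    if cluster:
                        centroids[i] = (
                            sum(x for x, _ in cluster) / len(cluster),
                            sum(y for _, y in cluster) / len(cluster),
                        )
            return centroids, clusters

        points = [(random.random() * 10, random.random() * 10) for _ in range(12)]
        centroids, clusters = kmeans(points, k=3)
        for centroid, members in zip(centroids, clusters):
            print(centroid, len(members), "points")

    k-means++ only changes how the initial centroids are picked (spread out instead of purely random), which usually improves the final partition.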

    Read the article

  • pitfalls with mixing storage engines in mysql with django?

    - by Dave Orr
    I'm running a Django system over MySQL in Amazon's cloud, and the database default is InnoDB. But now I want to put a fulltext index on a couple of tables for searching, which evidently requires MyISAM. The obvious solution is to just tell MySQL to ALTER TABLE to MyISAM, but are there going to be any issues with that? One that comes to mind is that I'll have to remember to do that any time I build a new version of the database, which should theoretically be rare, but there doesn't seem to be a way to tell Django to please set the storage engine at the table level. I guess I could write a migration (we use South). Any other things I might be missing? What could possibly go wrong?
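
    A minimal sketch of such a South migration, so the engine change is replayed whenever the database is rebuilt; the table and column names below are placeholders:

        from south.db import db
        from south.v2 import SchemaMigration

        class Migration(SchemaMigration):

            def forwards(self, orm):
                # Switch the search table to MyISAM and add the fulltext index.
                db.execute("ALTER TABLE myapp_article ENGINE=MyISAM")
                db.execute("CREATE FULLTEXT INDEX myapp_article_fulltext "
                           "ON myapp_article (title, body)")

            def backwards(self, orm):
                db.execute("DROP INDEX myapp_article_fulltext ON myapp_article")
                db.execute("ALTER TABLE myapp_article ENGINE=InnoDB")

    Putting it in a migration also documents the InnoDB-to-MyISAM trade-offs you are accepting for those tables (table-level locking, no transactions or foreign keys on them).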

    Read the article

  • Windows Phone: Updating backend datastore (via web service) while keeping UI very responsive

    - by will
    I am developing a Windows Phone app where users can update a list. Each update, delete, add, etc. needs to be stored in a database that sits behind a web service. As well as ensuring all the operations made on the phone end up in the cloud, I need to make sure the app is really responsive and the user doesn't feel any lag time whatsoever. What's the best design to use here? Should each check box change and each text box edit fire a new thread to contact the web service? Should I locally store a list of things that need to be updated and then send them to the server in a batch every so often (and what about the back button)? Am I missing another, even easier implementation? Thanks in advance,
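
    A sketch of the second option as a platform-neutral pattern (this is not Windows Phone API code): UI handlers append each operation to a durable local journal and return immediately, while a background flusher replays the journal against the web service and removes only the entries that succeeded, so a lost connection or an app restart just means they are retried later:

        import json
        import threading
        import time
        import urllib.request

        JOURNAL = "pending_ops.jsonl"                   # survives restarts
        SERVICE_URL = "https://example.com/api/list"    # hypothetical endpoint
        lock = threading.Lock()

        def record_operation(op):
            # Called from the UI side; appending one line is fast, so no perceived lag.
            with lock, open(JOURNAL, "a") as fh:
                fh.write(json.dumps(op) + "\n")

        def flush_loop(interval=5):
            while True:
                with lock:
                    try:
                        with open(JOURNAL) as fh:
                            lines = fh.readlines()
                    except FileNotFoundError:
                        lines = []
                sent = 0
                for line in lines:
                    try:
                        req = urllib.request.Request(
                            SERVICE_URL,
                            data=line.encode("utf-8"),
                            headers={"Content-Type": "application/json"})
                        urllib.request.urlopen(req, timeout=5)
                        sent += 1
                    except OSError:
                        break   # stop at the first failure so ordering is preserved
                if sent:
                    with lock:
                        with open(JOURNAL) as fh:
                            current = fh.readlines()
                        with open(JOURNAL, "w") as fh:
                            fh.writelines(current[sent:])   # drop only what was sent
                time.sleep(interval)

        threading.Thread(target=flush_loop, daemon=True).start()
        record_operation({"action": "add", "item": "milk"})

    Because new operations are only ever appended and the flusher is the only thing that removes them, dropping the lines that were confirmed sent is safe even if the UI keeps writing in the meantime.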

    Read the article

  • share image with url from database

    - by LauroSkr
    My PHP web site will convert text into a JPEG file, so the result is an image. I want to give users the option to share that image. Every image will have a unique URL so they can share it on Facebook or Twitter. Do I need to store the images and their URLs in a MySQL database, or should I keep them in cloud storage? The user will write text, the script will convert it to an image and display it, and then I want to let the user share the image they created. It would be great if you could provide me with a link or tutorial for my problem. Don't be hard on beginners, you were all in the same boat. Thanks, LauroSkr
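
    A minimal sketch of the usual approach: save the generated JPEG on disk (or in cloud storage), record a short unique slug against the file path in the database, and build the public share URL from the slug. SQLite stands in for MySQL here, and the domain and paths are placeholders:

        import sqlite3
        import uuid

        conn = sqlite3.connect("images.db")
        conn.execute("CREATE TABLE IF NOT EXISTS images (slug TEXT PRIMARY KEY, path TEXT)")

        def register_image(path):
            """Store the file path under a fresh slug and return the shareable URL."""
            slug = uuid.uuid4().hex[:10]
            conn.execute("INSERT INTO images (slug, path) VALUES (?, ?)", (slug, path))
            conn.commit()
            return "https://example.com/i/" + slug     # this is what the user shares

        def path_for(slug):
            """Look up the stored file when a shared URL is requested."""
            row = conn.execute("SELECT path FROM images WHERE slug = ?", (slug,)).fetchone()
            return row[0] if row else None

        url = register_image("/var/www/generated/abc123.jpg")
        print(url, "->", path_for(url.rsplit("/", 1)[-1]))

    Keeping only the slug-to-path row in the database keeps it small; storing the image bytes themselves in MySQL also works but is rarely necessary for this use case.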

    Read the article

  • Enterprise Platform in Python, Design Advice

    - by Jason Miesionczek
    I am starting the design of a somewhat large enterprise platform in Python, and was wondering if you guys can give me some advice as to how to organize the various components and which packages would help achieve the goals of scalability, maintainability, and reliability. The system is basically a service that collects data from various outside sources, with each outside source having its own separate application. These applications would poll a central database and get any requests that have been submitted to perform on the external source. There will be a main website and a REST/SOAP API that should also have access to the central data service. My initial thought was to use Django for the web site, web service, and data access layer (using its built-in ORM), and then the outside source applications can use the web service(s) to get the information they need to process the request and save the results. Using this method would allow me to have multiple instances of the service applications running on the same or different machines to balance out the load. Are there more elegant means of accomplishing this? I've heard of messaging systems such as MQ; would something like that be beneficial in this scenario? My other thought was to use a completely separate data service not based on Django, and use some kind of remoting or remote objects (if they exist in Python) to interact with the data model. The downside here would be the website, which would become much slower if it had to push all of its data requests through a second layer. I would love to hear what other developers have come up with to achieve these goals in the most flexible way possible.
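
    A minimal sketch of one collector application under the first design: it polls the central Django service for pending requests aimed at its source, does the source-specific work, and posts the results back. The URLs and field names are hypothetical; scaling out is just running more copies of this process.

        import json
        import time
        import urllib.request

        API = "https://central.example.com/api"   # placeholder for the Django web service
        SOURCE = "weather-feed"                   # this collector's external source

        def get_json(url):
            with urllib.request.urlopen(url, timeout=10) as resp:
                return json.load(resp)

        def post_json(url, payload):
            req = urllib.request.Request(url, data=json.dumps(payload).encode("utf-8"),
                                         headers={"Content-Type": "application/json"})
            urllib.request.urlopen(req, timeout=10)

        def fetch_from_source(request):
            return {"stub": True}   # the source-specific collection work goes here

        def run():
            while True:
                pending = get_json(API + "/requests/?source=" + SOURCE + "&status=pending")
                for request in pending:
                    result = {"request_id": request["id"], "data": fetch_from_source(request)}
                    post_json(API + "/results/", result)
                time.sleep(30)

        if __name__ == "__main__":
            run()

    A broker such as RabbitMQ (or Celery on top of it) replaces the polling loop with pushed deliveries and adds acknowledgements and retries, which is the main thing an MQ would bring to this picture.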

    Read the article

  • Why is await not taken into consideration after deploy?

    - by Cristian Boariu
    I have a method which does some synchronous calls to a specific REST API, something like:

        WSRequestHolder url = WS.url("rest_api_url");
        Promise<WS.Response> promisePerPage = url.get();
        promisePerPage.getWrappedPromise().await(3000, TimeUnit.MILLISECONDS);
        WS.Response responsePerPage = promisePerPage.get();
        ProductsWrapper productsWrapper = new Gson().fromJson(responsePerPage.getBody(), ProductsWrapper.class);

    As you can see, I put 3 seconds between calls so each request can be parsed in time and inserted in the DB. It all works great locally, but after I deploy to the cloud everything runs continuously, without waiting the 3 seconds between requests... Do you know why?

    Read the article

  • How to model in J2EE / JEE?

    - by Harry
    Let's say I have decided to go with the J(2)EE stack for my enterprise application. Now, for domain modelling (or: for designing the M of MVC), which APIs can I safely assume and use, and which should I stay away from... say, via a layer of abstraction? For example: Should I go ahead and litter my Model with calls to the Hibernate/JPA API? Or should I build an abstraction, a persistence layer of my own, to avoid hard-coding against these two specific persistence APIs? Why I ask this: a few years ago there was this Kodo API which got superseded by Hibernate. If one had designed a persistence layer and coded the rest of the model against this layer (instead of littering the Model with calls to a specific vendor API), it would have allowed one to (relatively) easily switch from Kodo to Hibernate to xyz. Is it recommended to make aggressive use of the *QL provided by your persistence vendor in your domain model? I'm not aware of any real-world issues (like performance, scalability, portability, etc.) arising out of heavy use of an HQL-like language. Why I ask this: I would like to avoid, as much as possible, writing custom code when the same could be accomplished via a query language that is more portable than SQL. Sorry, but I'm a complete newbie to this area. Where could I find more info on this topic? Many thanks in advance. /HS

    Read the article

  • How do software projects go over budget and under-deliver?

    - by Carlos
    I've come across this story quite a few times here in the UK: NHS Computer System. Summary: we're spunking £12 billion on some health software with barely anything working. I was sitting in the office discussing this with my colleagues, and we had a little think about it. From what I can see, all the NHS needs is a database plus a middle tier of drugs/hospitals/patients/prescriptions objects, and various GUIs for doctors and nurses to look at. You'd also need to think about security and scalability. And you'd need to sit around a hospital/pharmacy/GP's office for a bit to figure out what they need. But, all told, I'd say I could knock together something with that kind of structure in a couple of days, and maybe throw in a month or two to make it work at scale. If I had a few million quid, I could probably hire some really excellent designers to make a maintainable codebase, and also buy appropriate hardware to run the system on. I hate to trivialize something that seems to have caused so much trouble, but to me it looks like just a big distributed CRUD + UI system. So how on earth did this project bloat to £12B without producing much useful software? As I don't think the software sounds so complicated, I can only imagine that something about how it was organised caused this mess. Is outsourcing the problem? Is it that the software designers were never made to understand the medical business? What are your experiences with projects that went over budget and under-delivered? What are best practices for large projects? Have you ever worked on such a project?

    Read the article

  • helper function not found in view

    - by cbrulak
    I'm following the instructions at http://agilewebdevelopment.com/plugins/acts_as_taggable_on_steroids to add a tag cloud to my view. In the controller:

        class PostController < ApplicationController
          def tag_cloud
            @tags = Post.tag_counts
          end
        end

    I also added the tag_cloud method as a helper method in the controller, and in the view:

        <% tag_cloud @tags, %w(css1 css2 css3 css4) do |tag, css_class| %>                     (line 1)
          <%= link_to tag.name, { :action => :tag, :id => tag.name }, :class => css_class %>  (line 2)
        <% end %>                                                                              (line 3)

    However: 1) if I don't add helper_method :tag_cloud in the controller, I get an undefined method error for tag_cloud; 2) if I do add the helper method, I get "wrong number of arguments (2 for 0)" on line 1 of my sample code above. Suggestions?

    Read the article

  • c++ programming for clusters and HPC

    - by Abruzzo Forte e Gentile
    Hi all, I need to write a scientific application in C++ that does a lot of computation and uses a lot of memory. I have part of the job done, but due to the high resource requirements I was thinking of moving to OpenMPI. Before doing that I have a simple question: if I understood the principle of OpenMPI correctly, it is the developer who has the task of splitting the jobs over different nodes, calling SEND and RECEIVE based on which nodes are available at the time. Do you know if there exists some library or OS or whatever that has this capability, letting my code remain as it is now? Basically something that connects all the computers and lets them share their memory and CPU as if they were one machine? I am a bit confused by the amount of material available on the topic. Should I look at cloud computing, or at Distributed Shared Memory? Can you help me or point me in the right direction? Thanks
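
    A minimal sketch of the explicit SEND/RECEIVE model described above, using mpi4py (assumed to be installed) and run with something like mpiexec -n 4 python demo.py. Rank 0 splits the data, the other ranks compute their chunk and send a partial result back; nothing is shared automatically, which is exactly what a distributed-shared-memory system would hide from you:

        from mpi4py import MPI

        comm = MPI.COMM_WORLD
        rank = comm.Get_rank()
        size = comm.Get_size()

        if rank == 0:
            data = list(range(1_000_000))
            chunk = len(data) // (size - 1) if size > 1 else len(data)
            # Hand each worker its slice of the problem
            # (toy split; real code would also handle the remainder).
            for worker in range(1, size):
                comm.send(data[(worker - 1) * chunk: worker * chunk], dest=worker)
            # Collect and combine the partial results.
            total = sum(comm.recv(source=worker) for worker in range(1, size))
            print("combined result:", total)
        else:
            chunk = comm.recv(source=0)
            comm.send(sum(chunk), dest=0)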

    Read the article

  • Why does Indy 10's echo server have high CPU usage when the client disconnects?

    - by Virtuo
    When I disconnect the echo client like this: EchoClient1.Disconnect; the client disconnects fine... but the echo server does NOT even register the client disconnection, and it ends up with high process usage!? In every example and every doc it says that calling EchoClient.Disconnect is sufficient! Anyone have any idea? (I am working on Win7, could that be a problem?) Server code:

        procedure TForm2.EServerConnect(AContext: TIdContext);
        begin
          SrvMsg.Lines.Add('ECHO Client connected !');
        end;

        procedure TForm2.EServerDisconnect(AContext: TIdContext);
        begin
          SrvMsg.Lines.Add('ECHO Client disconnected !');
        end;

    The problem is that TForm2.EServerDisconnect never executes!?

    Read the article

  • Memory mapping of files and system cache behavior in WinXP

    - by Canopus
    Our application is memory intensive and deals with reading a large number of disk files. The total load can be more than 3 GB. There is a custom memory manager that uses memory-mapped files to achieve reading of such a huge amount of data. The files are mapped into the process memory space only when needed, and with this the process memory stays well under control. But what we observe is that, with memory mapping, the system cache keeps on increasing until it occupies all the available physical memory. This leads to slowing down of the entire system. My question is: how do I prevent the system cache from hogging the physical memory? I attempted to remove the file buffering (by using FILE_FLAG_NO_BUFFERING), but with this the read operations take a considerable amount of time and slow down the application performance. How can I achieve the scalability without sacrificing much performance? What are the common techniques used in such cases? I don't have a good understanding of the WinXP OS caching behavior; any good links explaining it would also be helpful.

    Read the article

  • Fastest Way To Format a Plain Text Using Javascript

    - by Nathan Campos
    I have a huge plain text document, about 700 KB, which is very big for plain text, and I need to format it in the cloud by converting it to HTML. The only things I need to replace (format to HTML so the browser can display them) are bold and italic. In the plain text, bold looks like this: Not on bold... **bold text here** not bold here. And italic like this: Not italic... *italic text* no italic. Just like Stack Overflow does for its formatting, but the problem is that I need to make it a lot faster, since the text is so big... One of my ideas was to add paging, so the script only needs to format part of the text, not all of it, and after the user changes the page the script would be called again. But the problem is how I can write the code for all of this.
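
    A minimal sketch of the two substitutions (shown in Python for illustration; the same regular expressions carry over to JavaScript's String.replace). Bold is handled first so the "**" markers are not half-consumed by the italic rule, and paging then just means running this over one slice of the text at a time:

        import re

        BOLD = re.compile(r"\*\*(.+?)\*\*", re.DOTALL)
        ITALIC = re.compile(r"\*(.+?)\*", re.DOTALL)

        def to_html(chunk):
            # Non-greedy ".+?" keeps separate marker pairs from being merged.
            chunk = BOLD.sub(r"<strong>\1</strong>", chunk)
            chunk = ITALIC.sub(r"<em>\1</em>", chunk)
            return chunk

        text = "Not on bold... **bold text here** not bold here and *italic text* too"
        print(to_html(text))
        # Not on bold... <strong>bold text here</strong> not bold here and <em>italic text</em> too

    If the source text can contain raw < or & characters, HTML-escape each chunk before applying the substitutions.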

    Read the article

  • Automatically Organize Tags in Tax/Folksonomy

    - by Rob Wilkerson
    I'm working on a process that will perform natural language processing (NLP) on one--and potentially several--of our content-rich sites. What I'd like to do once the NLP is complete is to automatically organize the output (generally a set of terms that you might think of as tags, given the prevalence of that metaphor) into some kind of standard or generally accepted organizational structure. In a perfect world, I'd really like this to be crowd-sourced under the folksonomy concept (as opposed to a taxonomy), since the ultimate goal is to target/appeal to real people rather than "domain experts", but I'm open to ideas and best practices. For the obvious purpose of scalability, I'd like to automate the population of this tax/folksonomy so that "some guy" on the team/organization isn't responsible for looking at a bunch of words (with or without context) and arbitrarily fleshing out the contextual components of the tree. I have a few ideas for doing this that require some research to establish viability, but I have exactly zero practical experience with this sort of thing, so the ideas really just boil down to stuff I made up that might perform some role in accomplishing the task. Imagining that others have vastly more experience with this sort of thing, I'm hoping that I can stand on your shoulders. Thanks for your thoughts and insights.

    Read the article

  • Started with a local git repo now I want to push my changes to a remote server

    - by Eliseo Soto
    Hi, I started a new project and created a local git repo with "git init" and now I have a few branches and everything works great. However since my webhosting company offers git hosting (if you're curious https://support.eapps.com/index.php?_m=knowledgebase&_a=viewarticle&kbarticleid=203) I'd like to push my entire repo to their servers to have a backup in the cloud in case something bad happens to my local repo. How can I make the remote repo the "origin" since the repo was started locally? Hope my question makes sense. Thanks, a Git newbie.
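
    A minimal sketch of the usual sequence, wrapped in Python subprocess calls for illustration; the remote URL is a placeholder for whatever bare-repository address the hosting provider gives you. git remote add registers the server under the name "origin", and git push -u records it as the branch's default upstream so later pushes and pulls need no arguments:

        import subprocess

        # Placeholder: use the clone/push URL from your hosting provider's panel.
        REMOTE_URL = "ssh://user@git.example.com/home/user/repos/myproject.git"

        # Register the server-side repository under the conventional name "origin".
        subprocess.run(["git", "remote", "add", "origin", REMOTE_URL], check=True)

        # Upload master and remember origin/master as its upstream.
        subprocess.run(["git", "push", "-u", "origin", "master"], check=True)

        # Push the other local branches too, if they should be backed up as well.
        subprocess.run(["git", "push", "origin", "--all"], check=True)

    The same commands typed directly in a shell work just as well; nothing here depends on Python.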

    Read the article

  • Icon backdrop on Samsung Galaxy S? how to change this?

    - by AKh
    Hi all, I see a backdrop being added to my launch icon on Samsung Galaxy S devices. I need this backdrop changed to a custom backdrop which we created. I know this can be changed, since apps like "Daily Briefing" have their own cloud-like backdrop, making the launch icon look really rich. If anyone knows how to change this backdrop, please let me know. I am talking about the background image set behind the icons. E.g., in the image below, on the 1st row you can notice the yellow backdrop for the Memo app and the green backdrop for the Mini Diary app; similarly, in the 3rd row, notice the green backdrop behind the Mail app. I need to change these backdrops. Thanks in advance. Appreciate your help.

    Read the article

  • best method for background uploader in Android

    - by Dr.Dredel
    Problem: I want to write a process that allows a user to take photos with the device and have those photos uploaded to some listener in the cloud. The user should not have to do anything to initiate the upload; a background listener would just watch the folder, and as long as it finds files in it, it would upload them and delete them. Two problems: 1) how to keep the program running in the background even after the user is no longer taking pictures (and, if they reboot the device, have it wake up and finish the uploads, if any remain); 2) assuming the connection is spotty (as it always is), how to verify that a given image has completed its upload, and if not, to resubmit it. I don't need any code examples, I just would like opinions on the best strategy to get this implemented. I was going to use Apache Commons and just do an upload to a PHP endpoint, but I am not sure what sort of error checking exists to take into account a connection drop mid-file. TIA.

    Read the article

  • How to add JS/CSS files to Joomla modules?

    - by Apache Fan
    I am starting out with Joomla and am writing a simple Joomla module. I am using some custom CSS and JS in my module. When I distribute this module, I need my JS/CSS files to go with the ZIP, and I have added them to my module ZIP file. This is what I need to know: how do I refer to these CSS/JS files in my module so that even if I distribute the module as a ZIP, I would not have to send the CSS/JS files separately? I tried looking at different solutions, including http://www.howtojoomla.net/how-tos/development/how-to-add-cssjavascript-to-your-joomla-extension, but I was not able to figure out what the URL for the JS/CSS files should be. I am using Joomla 1.7 hosted on a cloud hosting site. Thanks

    Read the article

  • JMS message. Model to include data or pointers to data?

    - by John
    I am trying to resolve a design difference of opinion where neither of us has experience with JMS. We want to use JMS to communicate between a J2EE application and a stand-alone application when a new event occurs. We would be using a single point-to-point queue. Both sides are Java-based. The question is whether to send the event data itself in the JMS message body, or to send a pointer to the data so that the stand-alone program can retrieve it. Details below. I have a J2EE application that supports data entry of new and updated persons and related events. The person records and associated events are written to an Oracle database. There are also stand-alone, separate programs that contribute new person and event records to the database. When a new event occurs through any of 5-10 different application functions, I need to notify remote systems through an outbound interface using an industry-specific standard messaging protocol. The outbound interface has been designed as a stand-alone application to support scalability, through asynchronous operation and by moving it to a separate server. The J2EE application currently has most of the data in memory at the time the event is entered. The data would consist of approximately 6 different objects: a person object and some with multiple instances, for an average size in the range of 3,000 to 20,000 bytes. Some special cases could be many times this amount. From a performance and reliability perspective, should I model the JMS message to pass all the data needed to create the interface message, or model the JMS message to contain record keys for the data and have the stand-alone Java application retrieve the data to create the interface message?
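
    A minimal sketch of the two message shapes being weighed, with a dict standing in for the Oracle tables and plain JSON strings standing in for JMS message bodies. The key-only style (sometimes called a claim check) keeps messages small and means the consumer always reads the current rows, at the cost of an extra database round trip:

        import json

        DATABASE = {  # stand-in for the person/event tables
            ("person", 42): {"name": "Jane Doe", "dob": "1970-01-01"},
            ("event", 7): {"person_id": 42, "type": "admission", "ts": "2010-05-01T10:00"},
        }

        def full_payload_message(person, event):
            # Option A: everything the consumer needs travels in the message body.
            return json.dumps({"person": person, "event": event})

        def key_only_message(person_id, event_id):
            # Option B: a small, stable message; the consumer fetches fresh data itself.
            return json.dumps({"person_id": person_id, "event_id": event_id})

        def consume_key_only(body):
            keys = json.loads(body)
            person = DATABASE[("person", keys["person_id"])]
            event = DATABASE[("event", keys["event_id"])]
            return person, event   # build the outbound interface message from these

        print(len(full_payload_message(DATABASE[("person", 42)], DATABASE[("event", 7)])),
              "bytes vs", len(key_only_message(42, 7)), "bytes on the wire")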

    Read the article

< Previous Page | 313 314 315 316 317 318 319 320 321 322 323 324  | Next Page >