Search Results

Search found 6922 results on 277 pages for 'moving platforms'.

Page 61 of 277

  • What's the recommended way to configure a Synaptics touchpad device?

    - by htorque
    I want to increase the scroll area by moving the so-called RightEdge a bit towards the middle. Right now I'm doing this via a one-liner that's called at session start (added via gnome-session-properties):

        xinput --set-prop --type=int --format=32 11 252 1781 5125 1646 4582

    This works fine, but feels like a hack. What's the recommended way to edit/set touchpad device properties like this one? A few years ago I'd have put that into xorg.conf, but this seems to be discouraged nowadays.
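
    One commonly suggested alternative to a session-start script is an InputClass snippet under xorg.conf.d. A minimal sketch, assuming the synaptics driver; the file name, path and the RightEdge value are illustrative, and the number should come from your own xinput query:

        # /etc/X11/xorg.conf.d/50-synaptics-edges.conf (illustrative path and name)
        Section "InputClass"
            Identifier "touchpad edge tweaks"
            MatchIsTouchpad "on"
            Driver "synaptics"
            # Pull the right edge towards the middle to widen the scroll area.
            Option "RightEdge" "4500"
        EndSection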

    Read the article

  • Do I need to match hardware on a Mac to my PC to get the same user experience?

    - by Darth
    I've been playing around with the thought of moving from a PC to a Mac. If you don't want to read all of this, skip to "Upgrade options".

    My current setup: I spend most of my time moving back and forth between Linux and Windows. During the last upgrade to Vista, I got myself a PC with a Core 2 Quad, 8GB of RAM and a GeForce 9800GTX+. Currently I'm running a dual boot between Ubuntu 10.04 and Windows Vista x64. Most of my work, around 80%, I can do on Ubuntu, which is mostly Ruby/Java programming. If that was all I needed, Ubuntu would be really great. However, I also do quite a lot of photography and design, which forces me to use Adobe software (not only Photoshop). I also work with a Wacom Intuos4 tablet, which doesn't really have great support on Linux. I've tried virtualization both ways (Linux in Windows and Windows in Linux), but neither was anywhere near satisfying. These are a few of the many reasons I want to move to OS X.

    Upgrade options: This is how I see my upgrade options: a Mac Mini - the cheapest solution, but the worst performance; an iMac - more expensive, better performing, with a second LCD for free; a Mac Pro - could match my current PC's performance, but currently outside my price range. When I compare Mac hardware to my current PC, it will always be worse unless I decide to pump in a lot of money. The question that comes to mind: do I need to match my current PC hardware to get the same user experience with a Mac? If I look at it from the Vista point of view, 2GB of RAM is as low as it gets, 4GB is usable... and 8GB runs very smoothly.

    PC HW != Mac HW? If I bought the Mac Mini for roughly the same price I paid for my PC (Core 2 Quad with 8GB RAM), I'd get a Core 2 Duo with 4GB RAM. But I don't want to run Vista on it, so I can't compare the hardware directly. Say that I want to do the same things on the Mac Mini as I do on my PC, e.g. open up 50 tabs in Google Chrome and start working with a large PSD in Photoshop (a couple hundred MB) - would running Mac OS X compensate for the lower hardware performance? My point is that if I'm about to upgrade, I wouldn't like to upgrade to hardware that runs a lot slower. A good analogy for this is Vista vs Ubuntu, where you can run Ubuntu smoothly on a low-end laptop, but on Vista you'd be happy just to get a browser open. Does the same principle apply to OS X?

    Read the article

  • Windows Azure Use Case: Infrastructure Limits

    - by BuckWoody
    This is one in a series of posts on when and where to use a distributed architecture design in your organization's computing needs. You can find the main post here: http://blogs.msdn.com/b/buckwoody/archive/2011/01/18/windows-azure-and-sql-azure-use-cases.aspx

    Description: Physical hardware components take up room, use electricity, create heat and therefore need cooling, and require wiring and special storage units. All of these requirements cost money to rent at a data center or to build out at a local facility. In some cases, this can be a catalyst for evaluating options to remove this infrastructure requirement entirely by moving to a distributed computing environment.

    Implementation: There are three main options for moving to a distributed computing environment.

    Infrastructure as a Service (IaaS): The first option is simply to virtualize the current hardware and move the VMs to a provider. You can do this with Microsoft’s Hyper-V product or other software, build the systems and host them locally on fewer physical machines. This is a good option for canned applications (where you have to type setup.exe) but not as useful for custom applications, as you still have to license and patch those servers, and there are hard limits on the VM sizes.

    Software as a Service (SaaS): If there is already software available that does what you need, it may make sense to purchase not only the software license but the use of it on the vendor’s servers. Microsoft’s Exchange Online is an example of simply using an offering from a vendor on their servers. If you do not need a great deal of customization, have no interest in owning or extending the source code, and need to implement a solution quickly, this is a good choice.

    Platform as a Service (PaaS): If you do need to write software for your environment, your next choice is a Platform as a Service such as Windows Azure. In this case you no longer manage physical or even virtual servers. You start at the code and data level of control and responsibility, and your focus is more on the design and maintenance of the application itself. In this case you own the source code and can extend or change it as you see fit. An interesting side benefit to using Windows Azure as a PaaS is that the Application Fabric component allows a hybrid approach, which gives you a basis to allow on-premise applications to leverage distributed computing paradigms.

    No one solution fits every situation. It’s common to see organizations pick a mixture of on-premise, IaaS, SaaS and PaaS components. In fact, that’s a great advantage of this form of computing - choice.

    References: 5 Enterprise steps for adopting a Platform as a Service: http://blogs.msdn.com/b/davidmcg/archive/2010/12/02/5-enterprise-steps-for-adopting-a-platform-as-a-service.aspx?wa=wsignin1.0  Application Patterns for the Cloud: http://blogs.msdn.com/b/kashif/archive/2010/08/07/application-patterns-for-the-cloud.aspx

    Read the article

  • Azure, don't give me multiple VMs, give me one elastic VM

    - by FransBouma
    Yesterday, Microsoft revealed major new features for Windows Azure (see ScottGu's post). It all looks shiny and great, but after reading most of the material describing the new features, I still find the overall idea behind all of it flawed: why should I care how many VMs my web app runs on? Isn't that a problem for the Windows Azure engineers / software to solve? And if I need the file system, why can't I simply get a virtual filesystem?

    To illustrate my point, let's use a real example: a product website with a customer system/database and, next to it, a support site with an accompanying database. Both are written in .NET, using ASP.NET, and each uses a SQL Server database. The product website offers files for customers to download, very simple. You have a couple of options to host these websites:
    - Buy a server, place it in a rack at an ISP and run the sites on that server.
    - Use 'shared hosting' with an ISP, which means your sites' appdomains run on the same machine (as do the stored files), and the databases are hosted on the same server as the other shared databases.
    - Hire a VM, install your OS of choice at an ISP, and host the sites on that VM; basically the same as the first option, except you don't have a physical server.
    - At some cloud vendor, either host the sites 'shared' or in a VM. See above.

    With all of those options, scalability is a problem, even the cloud-based ones, though not for the same reasons:
    - The physical server solution has the obvious problem that if you need more power, you need to buy a bigger server or more servers, which requires you to add replication and other overhead.
    - Shared hosting solutions are almost always capped on memory usage / traffic and database size: if your sites get too big, you have to move out of the shared hosting environment and start over with one of the other solutions.
    - The VM solution, be it a VM at an ISP or 'in the cloud' at e.g. Windows Azure or Amazon, in theory allows scaling out by simply instantiating more VMs; however, that too introduces the same overhead problems as the physical servers: suddenly more than one instance runs your sites.

    If a cloud vendor offers its services in the form of VMs, you won't gain much over having a VM at some ISP: the main problems you have to work around are still there. When you spin up more than one VM, your application must be completely stateless at any moment, including the DB subsystem, because what's in memory in instance 1 might not be in memory in instance 2. This might sound trivial, but it's not. A lot of the websites out there started rather small: they were perfectly runnable on a single machine with normal memory and CPU power. After all, you don't need a big machine to run a website with even thousands of users a day. Moving these sites to a multi-VM environment causes a problem: all the in-memory state they use, all the multi-page transitions they make while keeping state across the transition - they can't do that anymore the way they did on a single machine. State is something of the past; you have to store every byte of state in either a DB, a viewstate or a cookie somewhere, so that with the next request all state information is available through the request, as nothing is kept in memory. Our example uses a bunch of files in a file system. Using multiple VMs will require that these files move to a cloud storage system which is mounted in each VM, so we don't have to store the files on each VM.

    This might require different file paths, but this change should be minor. What's perhaps less minor is the maintenance procedure for the new type of cloud storage used: instead of FTPing into a VM, you might have to update the files using different ways / tools. All in all this makes moving an existing website which was written for an environment that's based around a VM (namely .NET with its CLR) overly cumbersome and problematic: it forces you to refactor your website system to be usable 'in the cloud', which is caused by the limited way in which e.g. Windows Azure offers its cloud services: in blocks of VMs.

    Offer a scalable, flexible VM which extends with my needs. Instead, cloud vendors should simply offer one VM to me. On that VM I run the websites, store my DB and my files. As it's a virtual machine, how this machine is actually run on physical hardware (e.g. partitioned) is not my concern; that's a problem for the cloud vendor to solve. If I need more resources, e.g. I have more traffic to my server, way more visitors per day, the VM stretches, as if I had bought a bigger box. This frees me from the problem which comes with multiple VMs: I don't have any refactoring to do at all. I can simply build my website as if it runs on my local hardware server, upload it to the VM offered by the cloud vendor, install it on the VM and I'm done. "But that might require changes to Windows!" Yes, but Microsoft is Windows. Windows Azure is their service; they can make whatever changes to what they offer to make it look like it's Windows. Yet they're stuck, like Amazon, in thinking in VMs, which forces developers to 'think ahead' and gamble whether they will need to migrate to a cloud with multiple VMs in the future or not. Which comes down to: gamble whether they should invest time in code / architecture which they might never need. (YAGNI anyone?) So the VM we're talking about: is that a low-level VM which runs a guest OS, or is it a different kind of VM?

    The flexible VM: .NET's CLR? My example websites are ASP.NET based, which means they run inside a .NET appdomain, on the .NET CLR, which is a VM. The only physical OS resource the sites need is the file system; however, this too is accessed through .NET. In short: all the websites see is what .NET allows them to see; the world as the websites know it is what .NET shows them and lets them access. How the .NET appdomain is run physically is the concern of .NET, not mine. This raises the question: why doesn't Windows Azure offer virtual appdomains? Or better: .NET environments which look like one machine but could physically be multiple machines. In such an environment, no change has to be made to the websites to migrate them from a local machine or your own server to the cloud and get proper scaling: the .NET VM will simply scale with the need: more memory needed, more CPU power needed, it stretches. What it offers to the application running inside the appdomain simply increases, but is not fragmented: all resources are available to the application. This means that the problem of how to scale is back where it should be: with the cloud vendor. "Yeah, great, but what about the databases?" The .NET application communicates with the database server through an ADO.NET provider. Where the database is located is not a problem for the appdomain: the ADO.NET provider has to solve that.

    In other words: we can host the databases in an environment which offers itself as a single resource and is accessible through one connection string, without replication overhead on the outside, and use that environment inside the .NET VM as if it were a single DB. But what about memory replication and other problems? This environment isn't simple, at least not for the cloud vendor. But it is simple for the customer who wants to run his sites in that cloud: no work needed, no refactoring of existing code. Upload it, run it. Perhaps I'm dreaming and what I described above isn't possible. Yet I think that if cloud vendors don't move in that direction, what they're offering isn't interesting: it doesn't solve a problem at all, it simply offers a way to instantiate more VMs with the guest OS of choice, at the cost of me needing to refactor my website code so it can run in the straitjacket form factor dictated by the cloud vendor. Let's not kid ourselves here: most of us developers will never build a website which needs a truckload of VMs to run it; almost all websites created by developers can run on just a few VMs at most. Yet the most expensive change is right at the start: moving from one to two VMs. As soon as you have refactored your website code to run across multiple VMs, adding another one is as easy as clicking a mouse button. But that first step is the problem here, and as it's right there at the beginning of scaling the website, it's particularly strange that cloud vendors refuse to solve it and leave it to the developers. Which makes migrating 'to the cloud' particularly expensive.

    Read the article

  • SQL Server v.Next (Denali) : More on contained databases and "contained users"

    - by AaronBertrand
    One of the reasons for contained databases (see my previous post) is to allow for a more seamless transition when moving a database from one server to another. One of the biggest complications in doing so is making sure that all of the logins are in place on the new server. Contained databases help solve this issue by creating a new type of user: a database-level user with a password. I want to stress that this is not the same concept as a user without a login, which serves a completely different...(read more)

    Read the article

  • SQL Server v.Next (Denali) : OS compatibility & upgrade support

    - by AaronBertrand
    Microsoft's Manageability PPM Dan Jones has asked for our feedback on their proposed list of supported operating systems and upgrade paths for the next version of SQL Server. (See the original post.) This has generated all kinds of spirited debates on Twitter, in protected mailing lists, and in private e-mail. If you're going to be involved in moving to Denali, you should be aware of these proposals and stay on top of the discussion until the results are in. (The media are starting to pick up on...(read more)

    Read the article

  • How To Use Regular Expressions for Data Validation and Cleanup

    You need to provide data validation at the server level for complex strings like phone numbers, email addresses, etc. You may also need to do data cleanup / standardization before moving it from source to target. Although SQL Server provides a fair number of string functions, the code developed with these built-in functions can become complex and hard to maintain or reuse.
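
    The article itself targets SQL Server (where regular expressions often come in via CLR functions); purely as an illustration of the validate-then-cleanup idea, here is a minimal, hedged Java sketch. The class name and patterns are deliberately simplistic examples, not production rules:

        import java.util.regex.Pattern;

        public class StagingValidator {
            // Illustrative patterns only; real rules depend on your data.
            private static final Pattern EMAIL =
                    Pattern.compile("^[\\w.+-]+@[\\w-]+(\\.[\\w-]+)+$");
            private static final Pattern US_PHONE =
                    Pattern.compile("^\\(?\\d{3}\\)?[-. ]?\\d{3}[-. ]?\\d{4}$");

            public static boolean isValidEmail(String s) {
                return s != null && EMAIL.matcher(s.trim()).matches();
            }

            // Normalize a US-style phone number to digits only, or return null if invalid.
            public static String cleanPhone(String s) {
                if (s == null || !US_PHONE.matcher(s.trim()).matches()) {
                    return null;
                }
                return s.replaceAll("\\D", "");
            }

            public static void main(String[] args) {
                System.out.println(isValidEmail("jane.doe@example.com")); // true
                System.out.println(cleanPhone("(555) 123-4567"));         // 5551234567
            }
        }

    Whatever the host environment, the shape is the same: validate first, normalize to a canonical form, and only then move the row from source to target.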

    Read the article

  • Interpolation using a sprite's previous frame and current frame

    - by user22241
    Overview: I'm currently using a method which, it has been pointed out to me, is extrapolation rather than interpolation. As a result, I'm also now looking into the possibility of using another method which is based on a sprite's position at its last (rendered) frame and its current one. Assuming an interpolation value of 0.5, this is (visually) how I understand it should affect my sprite's position....

    This is how I'm obtaining an interpolation value:

        public void onDrawFrame(GL10 gl) {
            // Set/re-set loop back to 0 to start counting again
            loops = 0;
            while (System.currentTimeMillis() > nextGameTick && loops < maxFrameskip) {
                SceneManager.getInstance().getCurrentScene().updateLogic();
                nextGameTick += skipTicks;
                timeCorrection += (1000d / ticksPerSecond) % 1;
                nextGameTick += timeCorrection;
                timeCorrection %= 1;
                loops++;
                tics++;
            }
            interpolation = (float) (System.currentTimeMillis() + skipTicks - nextGameTick) / (float) skipTicks;
            render(interpolation);
        }

    I am then applying it like so (in my rendering call):

        render(float interpolation) {
            spriteScreenX = (spriteScreenX - spritePreviousX) * interpolation + spritePreviousX;
            spritePreviousX = spriteScreenX; // update and store this for next time
        }

    Results: This unfortunately does nothing to smooth the movement of my sprite. It's pretty much the same as without the interpolation code. I can't get my head around how this is supposed to work and I honestly can't find any decent resources which explain this in any detail. My understanding of extrapolation is that when we arrive at the rendering call, we calculate the time between the last update call and the render call, and then adjust the sprite's position to reflect this time (moving the sprite forward). And yet this (interpolation) is moving the sprite back, so how can this produce smooth results? Any advice on this would be very much appreciated.

    Edit: I've implemented the code from OriginalDaemon's answer like so:

        @Override
        public void onDrawFrame(GL10 gl) {
            newTime = System.currentTimeMillis() * 0.001;
            frameTime = newTime - currentTime;
            if (frameTime > (dt * 25))
                frameTime = (dt * 25);
            currentTime = newTime;
            accumulator += frameTime;
            while (accumulator >= dt) {
                SceneManager.getInstance().getCurrentScene().updateLogic();
                previousState = currentState;
                t += dt;
                accumulator -= dt;
            }
            interpolation = (float) (accumulator / dt);
            render();
        }

    Interpolation values are now being produced between 0 and 1 as expected (similar to how they were in my original loop); however, the results are the same as with my original loop (my original loop allowed frames to skip if they took too long to draw, which I think this loop is also doing). I appear to have made a mistake in my previous logging; it is logging as I would expect it to (the interpolated position does appear to be in between the previous and current positions). However, the sprites are most definitely choppy when the render() skipping happens.
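
    For contrast, a minimal, self-contained Java sketch of the usual fixed-timestep-plus-interpolation pattern (all names here are illustrative, not taken from the question's code): the previous position is captured only inside the update step, and render() blends previous and current without writing the result back.

        // Minimal fixed-timestep loop with render interpolation (illustrative names).
        public class SpriteLoopSketch {
            static final double DT = 1.0 / 30.0;     // fixed logic step in seconds
            static double previousX, currentX;       // positions from the last two updates
            static double accumulator, lastTime = System.nanoTime() / 1e9;

            static void update() {                   // one fixed logic step
                previousX = currentX;                // remember where we were *before* this step
                currentX += 120.0 * DT;              // e.g. move 120 units per second
            }

            static void render(double alpha) {       // alpha in [0,1): progress into the next step
                double drawX = previousX + (currentX - previousX) * alpha;
                System.out.printf("draw sprite at x = %.2f%n", drawX);
                // note: previousX is NOT modified here
            }

            public static void main(String[] args) throws InterruptedException {
                for (int frame = 0; frame < 10; frame++) {
                    double now = System.nanoTime() / 1e9;
                    accumulator += now - lastTime;
                    lastTime = now;
                    while (accumulator >= DT) {       // run logic in fixed steps
                        update();
                        accumulator -= DT;
                    }
                    render(accumulator / DT);         // blend previous -> current
                    Thread.sleep(16);                 // stand-in for vsync / frame pacing
                }
            }
        }

    The key difference from the render() in the question is that the stored previous position is never overwritten during rendering; it only changes when the update logic actually runs, so the blend always spans one real logic step.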

    Read the article

  • BDD using SpecFlow on ASP.NET MVC Application

    - by Rajesh Pillai
    I usually love doing TDD and am moving towards understanding BDD (Behaviour Driven Development). What I've learned is documented in an article at CodeProject. The URL is http://www.codeproject.com/KB/architecture/BddWithSpecFlow.aspx and I will keep it updated as and when I learn a couple more things. Hope you like it. Cheers!

    Read the article

  • Social Targeting: Who Do You Think You’re Talking To?

    - by Mike Stiles
    Are you the kind of person that tries to sell Clay Aiken CDs outside Warped Tour concert venues? Then you don't think a lot about targeting your messages to the right audience. For your communication to pack the biggest punch it can, you need to know where to throw it. And a recent study on social demographics might help you see social targeting in a whole new light. Pingdom’s annual survey of social network demographics shows us first of all that there is no gender difference between Facebook and Twitter. Both are 40% male, 60% female. If you're looking for locales that lean heavily male, that would be Slashdot, Hacker News and Stack Overflow. The women are dominating Pinterest, Goodreads and Blogger. So what about age? 55% of tweeters are 35 and up, compared with 63% at Pinterest, 65% at Facebook and 70% at LinkedIn. As you can tell, LinkedIn supports the oldest user base, with the average member being 44. The average age at Facebook is 51, and it's 37 at Twitter. If you want to aim younger, have you met Orkut yet? 83% of its users are under 35. The next sites in order as great candidates for the young market are deviantART, Hacker News, Hi5, Github, and Reddit. I know, other than Reddit, many of you might be saying "who?" But the list could offer an opportunity to look at the vast social world beyond Facebook, Twitter and Google+ (which Pingdom did not include in the survey at all due to a lack of accessible data). As for the average age of social users overall: 26% are 25-34, 25% are 35-44, 19% are 45-54, 16% are 18-24, 6% are 55-64, 5% are 0-17, and 2% are 65 and up. Now you know where you stand on the "cutting edge" scale for a person your age. You're welcome. Certainly such demographics are a moving target and need to be watched and reassessed on a regular basis to make sure you're moving in step with the people you want to talk to. For instance, since Pingdom's survey last year, the age of the average Facebook user has gone up 2 years, while the age of the average Twitter user has gone down 2 years. With the targeting and analytics tools available on today's social management platforms, there's little need to market in the dark. Otherwise, good luck with those Clay CDs.

    Read the article

  • Long Running Service Request or COLD CASE?

    - by chris.warticki
    What's going on? Why is it taking so long? Is anyone out there? Resolving Service Requests can seem to take forever. If your Service Request is taking more than a few days, moving into weeks or months, here are a few things to consider.  Details here.  Comments welcome. -Chris Warticki twittering @cwarticki Join one of the Twibes - http://twibes.com/OracleSupport or http://twibes.com/MyOracleSupport

    Read the article

  • I’ve moved out…

    - by Michael Cummings
    Originally posted on: http://geekswithblogs.net/Mathoms/archive/2013/06/21/irsquove-moved-outhellip.aspx GeeksWithBlogs has been a great property ever since I decided to start blogging; however, I have outgrown it and am moving to a new location. Please visit me at http://michaelcummings.net from now on. The RSS feed has been updated, so that should automatically update to the new address. I’ll miss GWB, but my new property is hosted on Azure using Orchard and I have been really enjoying it so far.

    Read the article

  • Export emails from Open-Xchange webmail and import them to Horde webmail

    - by sam
    I'm moving hosting and need to move the emails stored with the old domain/hosting. The hosting I want to transfer from uses Open-Xchange for its webmail, and the hosting I want to move to uses Horde for its webmail. Is there an automated way to do this from inside the Open-Xchange webmail, or is there a way I can do it from inside Outlook / Mac Mail? Not sure if this is the right place for this; if not, please advise.

    Read the article

  • Digg Moves From MySQL to NoSQL

    Datamation: "Social networking and voting site Digg is rewriting its underlying software infrastructure in an effort to improve performance and scalability. Part of that effort involves moving away from the MySQL database that has helped to power Digg since its creation."

    Read the article

  • How to fix a Silverlight download progress indicator that jumps from 0% to 100%

    - by JaydPage
    Originally posted on: http://geekswithblogs.net/JaydPage/archive/2013/10/29/fixing-a-broken-silverlight-xap-file-download-progress-indicator-that.aspx

    After moving our Silverlight application to a new server, I came across a problem whereby the download progress indicator on the splash screen was stuck on 0% until the file had completely downloaded. After about an hour of searching for the answer, I realised that there is a distinct lack of help out there for this problem. It is a simple fix:
    1) On the server that is hosting your website, go into IIS and click on the website.
    2) Click on the Compression section.
    3) Un-check the option that says "Dynamic Content Compression".
    4) Save the changes.
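
    The same setting can also be controlled from configuration rather than the IIS UI; a minimal sketch of a web.config fragment, assuming IIS 7 or later with the dynamic compression module installed (dynamic compression typically strips the Content-Length header, which is why the splash screen cannot report progress):

        <?xml version="1.0" encoding="utf-8"?>
        <configuration>
          <system.webServer>
            <!-- Disabling dynamic compression lets IIS send a Content-Length
                 for the .xap, so the splash screen can show real progress. -->
            <urlCompression doStaticCompression="true" doDynamicCompression="false" />
          </system.webServer>
        </configuration>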

    Read the article

  • DRBD and MySQL - Heartbeat Setup

    Heartbeat automates all the moving parts and works just as well with the MySQL master-master active/passive solution as it does with the MySQL & DRBD solution. It manages the virtual IP address used by the database, directs DRBD to become primary or to relinquish primary duties, mounts the /dev/drbd0 device, and starts/stops MySQL as needed.
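
    As a rough illustration of what Heartbeat is coordinating, here is a hedged sketch of a classic Heartbeat v1 haresources line for this kind of setup; the node name, IP address, DRBD resource name, mount point and init script name are all placeholders that depend on your environment:

        # /etc/ha.d/haresources (Heartbeat v1 style; values are illustrative)
        # <preferred node> <virtual IP> <DRBD resource> <filesystem mount> <service>
        db1 IPaddr::192.168.0.100/24 drbddisk::r0 Filesystem::/dev/drbd0::/var/lib/mysql::ext3 mysql

    On takeover, Heartbeat typically acquires these resources left to right (take the IP, promote DRBD, mount /dev/drbd0, start MySQL) and releases them in reverse order; the service name at the end is whatever your distribution calls the MySQL init script.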

    Read the article

  • Scaling Out the Distribution Database

    Replication is a great technology for moving data from one server to another, and it has a great many configuration options. David Poole brings us a technique for scaling out with multiple distribution databases.

    Read the article

  • Submitting a sitemap to take care of inherited Google crawler errors

    - by leeand00
    I have an awful lot of Google crawler errors (1,000 or so) after inheriting a site that the previous owner migrated without moving much of their content. Would generating a sitemap of the current site and submitting it to Google help fix this? Is there any quicker, automated way to eliminate errors other than clicking through each and every site error? Note: I have already tried automating this on my own.
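
    For reference, a minimal sketch of the sitemap.xml format that would be submitted through Google Webmaster Tools; the URLs and dates are placeholders:

        <?xml version="1.0" encoding="UTF-8"?>
        <urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
          <!-- One <url> entry per page that should be crawled; pages that no
               longer exist are simply left out of the sitemap. -->
          <url>
            <loc>http://www.example.com/</loc>
            <lastmod>2013-01-15</lastmod>
          </url>
          <url>
            <loc>http://www.example.com/contact</loc>
          </url>
        </urlset>

    A sitemap mainly helps Google discover the pages that do exist; errors for pages that are genuinely gone typically clear over time once those URLs consistently return 404/410 or are redirected to their replacements.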

    Read the article

  • Report: The Ambitious Future of KDE4

    KDE is currently moving in three directions: adding functionality, extending the concept of the social desktop, and bringing KDE onto every possible hardware platform. Bruce Byfield learns where KDE is going from lead KDE developer Aaron Seigo.

    Read the article

  • Unity: Spin wheels to move vehicle

    - by Paul Manta
    I am just getting started with Unity and I'd like to ask a question. If I have a "Vehicle" object that has two children, "FrontWheel" and "BackWheel" (both wheels are cylinders), how should I set everything up so that I can move the entire vehicle by turning its wheels? When I apply a torque to "FrontWheel", the vehicle starts to move, but instead of the whole thing moving together, the chassis rolls on the cylinders and eventually falls off. How can I prevent it from doing that?

    Read the article

  • Can someone help with indexing issues

    - by user631249
    Hi all, I am on the first page of Google for keywords concerned with moving; however, I can't seem to break into the carpet cleaning rankings. I have made changes and additions which haven't been indexed yet. Should I wait for the next crawl, or can someone give me pointers on the carpet cleaning indexing? Also, I have 53 pages submitted and only 38 indexed; where could the problem be? Is there software to check for indexing hiccups? Thanks.

    Read the article
