Search Results

Search found 3148 results on 126 pages for 'amazon s3'.

Page 96/126

  • Azure, don't give me multiple VMs, give me one elastic VM

    - by FransBouma
    Yesterday, Microsoft revealed major new features for Windows Azure (see ScottGu's post). It all looks shiny and great, but after reading most of the material describing the new features, I still find the overall idea behind it flawed: why should I care about how many VMs my web app runs on? Isn't that a problem for the Windows Azure engineers / software to solve? And if I need the file system, why can't I simply get a virtual file system?

    To illustrate my point, let's use a real example: a product website with a customer system/database, and next to it a support site with an accompanying database. Both are written in .NET, use ASP.NET, and each has its own SQL Server database. The product website offers files for customers to download, very simple. You have a couple of options to host these websites:

    - Buy a server, place it in a rack at an ISP and run the sites on that server.
    - Use 'shared hosting' with an ISP, which means your sites' appdomains run on the same machine as other sites, the files are stored there too, and the databases are hosted on the same server as the other shared databases.
    - Hire a VM, install your OS of choice at an ISP, and host the sites on that VM; basically the same as the first option, except you don't have a physical server.
    - At some cloud vendor, either host the sites 'shared' or in a VM. See above.

    With all of those options, scalability is a problem, even the cloud-based ones, though not for the same reasons:

    - The physical server solution has the obvious problem that if you need more power, you need to buy a bigger server or more servers, which requires you to add replication and other overhead.
    - Shared hosting solutions are almost always capped on memory usage / traffic and database size: if your sites get too big, you have to move out of the shared hosting environment and start over with one of the other solutions.
    - The VM solution, be it a VM at an ISP or 'in the cloud' at e.g. Windows Azure or Amazon, in theory allows scaling out by simply instantiating more VMs, but that introduces the same overhead problems as physical servers: suddenly more than one instance runs your sites.

    If a cloud vendor offers its services in the form of VMs, you won't gain much over having a VM at some ISP: the main problems you have to work around are still there. When you spin up more than one VM, your application must be completely stateless at any moment, including the DB subsystem, because what's in memory in instance 1 might not be in memory in instance 2. This might sound trivial, but it's not. A lot of websites out there started rather small: they ran perfectly well on a single machine with normal memory and CPU power. After all, you don't need a big machine to run a website with even thousands of users a day. Moving these sites to a multi-VM environment causes a problem: all the in-memory state they use, all the multi-page transitions that keep state across the transition, can't work the way they did on a single machine. State is something of the past; you have to store every byte of state in a DB, a viewstate or a cookie somewhere, so that with the next request all state information is available through the request, as nothing is kept in memory (a minimal sketch of what that externalizing looks like follows at the end of this post).

    Our example uses a bunch of files in a file system. Using multiple VMs requires that these files move to a cloud storage system mounted in each VM, so we don't have to store the files on each VM. This might require different file paths, but that change should be minor. What's perhaps less minor is the maintenance procedure on the new type of cloud storage: instead of ftp-ing into a VM, you might have to update the files using different ways / tools. All in all this makes moving an existing website, written for an environment based around a VM (namely .NET with its CLR), overly cumbersome and problematic: it forces you to refactor your website system to be usable 'in the cloud', caused by the limited way e.g. Windows Azure offers its cloud services: in blocks of VMs.

    Offer a scalable, flexible VM which extends with my needs

    Instead, cloud vendors should simply offer one VM to me. On that VM I run the websites, store my DB and my files. As it's a virtual machine, how this machine is actually run on physical hardware (e.g. partitioned) is not my concern; that's the problem for the cloud vendor to solve. If I need more resources, e.g. I get more traffic to my server, way more visitors per day, the VM stretches, as if I bought a bigger box. This frees me from the problems that come with multiple VMs: I don't have any refactoring to do at all. I can simply build my website as if it runs on my local hardware server, upload it to the VM offered by the cloud vendor, install it, and I'm done.

    "But that might require changes to Windows!" Yes, but Microsoft is Windows. Windows Azure is their service; they can make whatever change to what they offer to make it look like it's Windows. Yet they're stuck, like Amazon, in thinking in VMs, which forces developers to 'think ahead' and gamble on whether they will need to migrate to a cloud with multiple VMs in the future or not. Which comes down to: gamble on whether they should invest time in code / architecture which they might never need. (YAGNI anyone?) So the VM we're talking about: is that a low-level VM which runs a guest OS, or is that VM a different kind of VM?

    The flexible VM: .NET's CLR?

    My example websites are ASP.NET based, which means they run inside a .NET appdomain, on the .NET CLR, which is a VM. The only physical OS resource the sites need is the file system, and even that is accessed through .NET. In short: all the websites see is what .NET allows them to see; the world as the websites know it is what .NET shows them and lets them access. How the .NET appdomain is run physically is the concern of .NET, not mine. This begs the question why Windows Azure doesn't offer virtual appdomains. Or better: .NET environments which look like one machine but could physically be multiple machines. In such an environment, no change has to be made to the websites to migrate them from a local machine or your own server to the cloud and get proper scaling: the .NET VM simply scales with the need. More memory needed, more CPU power needed: it stretches. What it offers to the application running inside the appdomain simply increases, but isn't fragmented: all resources are available to the application. This means the problem of how to scale is back where it should be: with the cloud vendor.

    "Yeah, great, but what about the databases?" The .NET application communicates with the database server through an ADO.NET provider. Where the database is located is not a problem of the appdomain: the ADO.NET provider has to solve that. I.o.w.: we can host the databases in an environment which offers itself as a single resource, accessible through one connection string without replication overhead on the outside, and use that environment inside the .NET VM as if it were a single DB. But what about memory replication and other problems? This environment isn't simple, at least not for the cloud vendor. But it is simple for the customer who wants to run his sites in that cloud: no work needed, no refactoring of existing code. Upload it, run it.

    Perhaps I'm dreaming and what I described above isn't possible. Yet, I think if cloud vendors don't move in that direction, what they're offering isn't interesting: it doesn't solve a problem at all, it simply offers a way to instantiate more VMs with the guest OS of choice, at the cost of me having to refactor my website code so it can run in the straitjacket form factor dictated by the cloud vendor. Let's not kid ourselves here: most of us developers will never build a website which needs a truckload of VMs to run it; almost all websites created by developers can run on just a few VMs at most. Yet the most expensive change is right at the start: moving from one to two VMs. As soon as you have refactored your website code to run across multiple VMs, adding another one is as easy as clicking a mouse button. But that first step is the problem here, and as it sits right at the beginning of scaling the website, it's particularly strange that cloud vendors refuse to solve it and leave it to the developers. Which makes migrating 'to the cloud' particularly expensive.
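    The statelessness hurdle is concrete enough to sketch. Below is a minimal, hypothetical Python illustration of what externalizing per-session state means; the store is an in-process dict standing in for an off-box service (shared database, cache tier), so every name here is an assumption, not an API:

        import json

        class ExternalSessionStore:
            """Stands in for a shared off-box store that all instances see."""
            def __init__(self):
                self._backend = {}  # imagine this living on another machine

            def save(self, session_id, state):
                self._backend[session_id] = json.dumps(state)

            def load(self, session_id):
                return json.loads(self._backend.get(session_id, "{}"))

        store = ExternalSessionStore()

        def handle_request(session_id, page_input):
            state = store.load(session_id)   # assume nothing in local memory
            state.update(page_input)         # multi-page transition state
            store.save(session_id, state)    # persist before responding
            return state

    Any instance that receives the next request can replay this sequence, which is exactly the property a multi-VM deployment demands.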

    Read the article

  • Anti-aliased text on HTML5's canvas element

    - by Matt Mazur
    I'm a bit confused with the way the canvas element anti-aliases text and am hoping you all can help. In the following screenshot, the top "Quick Brown Fox" is an H1 element and the bottom one is a canvas element with text rendered on it. At the bottom you can see both "F"s placed side by side and zoomed in. Notice how the H1 element blends better with the background: http://jmockups.s3.amazonaws.com/canvas_rendering_both.png

    Here's the code I'm using to render the canvas text:

        var canvas = document.getElementById('canvas');
        if (canvas.getContext) {
            var ctx = canvas.getContext('2d');
            ctx.fillStyle = 'black';
            ctx.font = '26px Arial';
            ctx.fillText('Quick Brown Fox', 0, 26);
        }

    Is it possible to render the text on the canvas so that it looks identical to the H1 element? And why are they different?

    Read the article

  • SQL SERVER – Faster SQL Server Databases and Applications – Power and Control with SafePeak Caching Options

    - by Pinal Dave
    Update: this blog post is based on SafePeak, which is available for free download. Today I'd like to examine more closely one of my preferred technologies for accelerating SQL Server databases: SafePeak. SafePeak's software provides a variety of advanced data caching options, techniques and tools to accelerate the performance and scalability of SQL Server databases and applications. I'd like to look at some of these options, as they could help you address lagging database performance on your systems. To understand what's available, it is best to start with the difference between the usual "Basic Caching" and SafePeak's "Dynamic Caching" (a small sketch contrasting the two follows at the end of this post).

    Basic Caching

    Basic Caching (the stale, static cache) puts the results of a query into cache for a certain period of time. It is based on TTL (Time-to-live), and an entry stays in cache no matter what happens to the data. For example, although the actual data can be modified by DML commands (update/insert/delete), the cache will still hold the same obsolete query data; Basic Caching really is a static / stale cache. As you can tell, this approach has its limitations.

    Dynamic Caching

    Dynamic Caching (the non-stale cache) puts the results of a query into cache while keeping the cache transaction-aware, watching for possible data modifications. The modifications can come from DML commands (update/insert/delete), indirect modifications due to triggers on other tables, executions of stored procedures with internal DML commands, or complex cases of stored procedures with multiple levels of internal stored procedure logic. When data modification commands arrive, the caching system identifies the related cache items and evicts them from cache immediately. In the dynamic caching option the TTL setting still exists, although its importance is reduced, since the main trigger for cache invalidation (cache eviction) becomes the actual data update commands.

    Now that we have a basic understanding of the differences between "basic" and "dynamic" caching, let's dive in deeper.

    SafePeak: a comprehensive and versatile caching platform

    SafePeak comes with a wide range of caching options. Some are automated, while others require manual configuration. Together they give IT and data managers a complete solution for performance acceleration and application scalability across a wide range of business cases and applications:

    - Automated caching of SQL queries: fully or semi-automated caching of all "read" SQL queries, containing any type of data, including Blobs, XML and text as well as all other standard data types. SafePeak automatically analyzes incoming queries, categorizes them into SQL Patterns, and identifies directly and indirectly accessed tables, views, functions and stored procedures.
    - Automated caching of stored procedures: fully or semi-automated caching of all "read" stored procedures, including procedures with complex sub-procedure logic as well as procedures with complex dynamic SQL code. All procedures are analyzed in advance by SafePeak's metadata-learning process and their SQL schemas parsed, resulting in a full understanding of the underlying code and object dependencies (tables, views, functions, sub-procedures). This enables automated or semi-automated (manually reviewed and activated by a mouse click) cache activation, with full understanding of the transaction logic for real-time cache invalidation.
    - Transaction-aware cache: automated cache awareness for SQL transactions (SQL and in-procs).
    - Dynamic SQL caching: procedures with dynamic SQL are pre-parsed, enabling easy cache configuration, eliminating SQL Server parsing load and delivering fast response times even in the most complicated use cases.
    - Fully automated caching: SQL Patterns (including SQL queries and stored procedures) that SafePeak categorizes as "read and deterministic" are automatically activated for caching.
    - Semi-automated caching: SQL Patterns categorized as "read and non-deterministic" contain references to non-deterministic functions, like getdate(). Such SQL Patterns are reviewed by the SafePeak administrator, and most are usually activated manually for caching (point-and-click activation).
    - Fully dynamic caching: automated detection of all dependent tables in each SQL Pattern, with automated real-time eviction of the relevant cache items when a "write" command (DML or a stored procedure) hits one of the relevant tables. This is the default setting.
    - Semi-dynamic caching: a manual cache configuration option that reduces the sensitivity of specific SQL Patterns to "write" commands on certain tables/views. An optimization technique for cases where the query data is known to be static (like archived order details), or where the application's sensitivity to fresh data is not critical and the data can be stale for a short period (gaining better performance and reduced load).
    - Scheduled cache eviction: a manual cache configuration option for scheduling SQL Pattern cache eviction at certain time(s) during the day. Very useful when certain SQL Patterns can be cached but are time-sensitive. Example: "select customers whose birthday is today", an SQL with the getdate() function, which can and should be cached, but whose data stays relevant only until 00:00 (midnight).
    - Parsing exceptions management: stored procedures that were not fully parsed by SafePeak (due to overly complex dynamic SQL or unfamiliar syntax) are flagged as "Dynamic Objects" with the highest transaction-safety settings (such as full global cache eviction, and DDL Check = lock cache and check for schema changes). SafePeak points the user to the Dynamic Objects that matter for cache effectiveness and provides an easy configuration interface, allowing you to improve cache hits and reduce global cache evictions. Usually this is the first configuration step in a deployment.
    - Overriding settings of stored procedures: override the settings of stored procedures (or other object types) for cache optimization. For example, a stored procedure SP1 with an "insert" into table T1 will not normally be allowed into cache. However, T1 may just be a "logging or instrumentation" table left by developers; by overriding the settings, a user can allow caching of the problematic stored procedure.
    - Advanced cache warm-up: an XML-based list of queries and stored procedures (with parameter lists) for periodic automated pre-fetching and caching. An advanced tool for rare but performance-sensitive queries: pre-fetch them into cache to give users fast data access.
    - Configuration driven by deep SQL analytics: all SQL queries are continuously logged and analyzed, providing deep SQL analytics and performance monitoring. Reduce troubleshooting from days to minutes with a heat map of database objects and SQL Patterns. The performance-driven configuration helps you focus on the settings that bring the highest performance gains, and SafePeak SQL Analytics allows continuous performance monitoring and easy identification of bottlenecks in both real-time and historical data.
    - Cloud ready: available for instant deployment on Amazon Web Services (AWS).

    As you can see, there are many ways to configure SafePeak's SQL Server database and application acceleration caching technology to fit a wide range of situations. If you're not familiar with their technology, they offer free-trial software you can download, which comes with a free "help session" to get you started. Also, SafePeak is available on the Amazon cloud.

    Reference: Pinal Dave (http://blog.sqlauthority.com)
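    To make the basic-versus-dynamic distinction concrete, here is a hedged sketch (plain Python, invented names; not SafePeak's implementation) of the two cache models described above:

        import time

        class BasicCache:
            """TTL ('stale') cache: entries live out their TTL even if the
            underlying data changes in the meantime."""
            def __init__(self, ttl_seconds):
                self.ttl = ttl_seconds
                self._items = {}

            def get(self, query):
                hit = self._items.get(query)
                if hit is not None and time.time() - hit[1] < self.ttl:
                    return hit[0]        # possibly obsolete result
                return None

            def put(self, query, result):
                self._items[query] = (result, time.time())

        class DynamicCache(BasicCache):
            """Transaction-aware cache: a write to a table immediately
            evicts every cached query that depends on that table."""
            def __init__(self, ttl_seconds, deps):
                super().__init__(ttl_seconds)
                self.deps = deps         # query -> set of tables it reads

            def on_write(self, table):
                for query, tables in list(self.deps.items()):
                    if table in tables:
                        self._items.pop(query, None)   # immediate eviction

    With the dynamic variant, calling on_write("Orders") after any DML against Orders evicts the dependent entries at once, which is why the TTL matters far less there.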

    Read the article

  • FileUtils.mv adding linebreaks in Windows

    - by Lowgain
    I am streaming wav data from a Flash application. If I get the data and do the following, the wav file works perfectly:

        f = File.open('c:/test.wav')
        f << wav_data.pack('c' * wav_data.length)
        f.close

    If I do this instead:

        f = Tempfile.new('test.wav')
        f << wav_data.pack('c' * wav_data.length)
        f.close
        FileUtils.mv(f.path, 'c:/')

    the file is there, but sounds all garbled. Checking in a hex editor shows that everywhere the working file had an 0A (\n), the garbled version has 0D0A (\r\n). I am using this in conjunction with rails + paperclip, and will be using a combination of Heroku and S3 for the live app, so I am hoping this problem will solve itself, but I'd like to get this working on my local machine for the time being. Does anybody know why FileUtils.mv would be doing this, and whether there is a way to change its behaviour?
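    For reference, the 0A-to-0D0A pattern is the classic text-mode newline translation on Windows: somewhere along the Tempfile/mv path the data is written through a text-mode handle. A minimal Python illustration of the same failure mode (illustrative only; behaves as described on Windows):

        data = b"RIFF\x0abinary wav bytes with 0x0A inside"

        with open("ok.wav", "wb") as f:      # binary mode: bytes pass through
            f.write(data)

        with open("garbled.wav", "w") as f:  # text mode on Windows:
            f.write(data.decode("latin-1"))  # every "\n" lands as "\r\n"

    Whatever the Ruby-side fix turns out to be, the thing to look for is a write or copy that opens the file without the binary flag.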

    Read the article

  • rails3 link_to :with attribute?

    - by z3cko
    I am wondering if the :with attribute was removed from Rails 3, since I cannot find anything in the Rails 3 API (http://rails3api.s3.amazonaws.com). Does anyone have a clue, or a hint on how to use the :with parameter to send data with link_to? Non-working example:

        = link_to "Foo", {:action => "filter", :filter => "filter1", :with => "'test=' + $('search').value"}, :remote => true, :class => "trash unselected", :id => "boo"

    Thanks!

    Read the article

  • Can you recommend an effective and cheap CDN for video streaming?

    - by Shaul Dar
    I am looking for a streaming CDN recommendation. Cost and performance are my chief concerns. Video viewers may be all over the globe, with the US, Europe, Russia and South America topping the list (yes, I know that leaves out a little :-). I saw the following list of streaming CDNs on LinkedIn: Akamai, BitGravity, EdgeCast, Highwinds, Internap, Level3, Limelight, Mirror Image, Move Networks, Qbrick, SimpleCDN, StreamZilla, Swarmcast (streaming via HTTP), WINK Streaming (plus Amazon's S3 and CloudFront). Can anyone recommend any of these or others? Or a different type of technology (e.g. P2P)?

    Read the article

  • HTG Explains: Should You Build Your Own PC?

    - by Chris Hoffman
    There was a time when every geek seemed to build their own PC. While the masses bought eMachines and Compaqs, geeks built their own more powerful and reliable desktop machines for less. But does this still make sense? Building your own PC still offers as much flexibility in component choice as it ever did, but prebuilt computers are available at extremely competitive prices. Building your own PC will no longer save you money in most cases.

    The Rise of Laptops

    It's impossible to look at the decline of geeks building their own PCs without considering the rise of laptops. There was a time when everyone seemed to use desktops; laptops were more expensive and significantly slower in day-to-day tasks. With the diminishing importance of computing power (nearly every modern computer has more than enough power to surf the web and use typical programs like Microsoft Office without any trouble) and laptops available at nearly every price point, most people are buying laptops instead of desktops. And if you're buying a laptop, you can't really build your own. You can't just buy a laptop case and start plugging components into it; even if you could, you would end up with an extremely bulky device. Ultimately, to consider building your own desktop PC, you have to actually want a desktop PC. Most people are better served by laptops.

    Benefits to PC Building

    The two main reasons to build your own PC have been component choice and saving money. Building your own PC lets you choose all the specific components you want rather than have them chosen for you. You get to choose everything, including the PC's case and cooling system. Want a huge case with room for a fancy water-cooling system? You probably want to build your own PC. In the past, this often saved money: you could get better deals by buying the components yourself and combining them, avoiding the PC manufacturer markup. You'd often even end up with better components: you could pick up a more powerful CPU that was easier to overclock and choose more reliable components, so you wouldn't have to put up with an unstable eMachine that crashed every day. PCs you build yourself are also likely more upgradable; a prebuilt PC may have a sealed case and be constructed in such a way as to discourage you from tampering with the insides, while swapping components in and out is generally easier with a computer you've built on your own. If you want to upgrade your CPU or replace your graphics card, that's a definite benefit.

    Downsides to Building Your Own PC

    It's important to remember there are downsides to building your own PC, too. For one thing, it's just more work. Sure, if you know what you're doing, building your own PC isn't that hard, but even for a geek, researching the best components, price-matching, waiting for them all to arrive, and building the PC simply takes longer. Warranty is a more pernicious problem. If you buy a prebuilt PC and it starts malfunctioning, you can contact the computer's manufacturer and have them deal with it; you don't need to worry about what's wrong. If you build your own PC and it starts malfunctioning, you have to diagnose the problem yourself. What's malfunctioning: the motherboard, CPU, RAM, graphics card, or power supply? Each component has a separate warranty through its manufacturer, so you'll have to determine which component is malfunctioning before you can send it off for replacement.

    Should You Still Build Your Own PC?

    Let's say you do want a desktop and are willing to consider building your own PC. First, bear in mind that PC manufacturers buy in bulk and get a better deal on each component. They also pay much less for a Windows license than the $120 or so it would cost you to buy your own. This wipes out most of the cost savings: all told, you'll probably spend more money building your own average desktop PC than you would picking one up from Amazon or the local electronics store. If you're an average PC user who uses a desktop for typical things, there's no money to be saved by building your own PC.

    But maybe you're looking for something higher end. Perhaps you want a high-end gaming PC with the fastest graphics card and CPU available, and you want to pick out each individual component for your gaming rig. In this case, building your own PC may be a good option. As you start to look at more expensive, high-end PCs, you may start to see a price gap, but you may not. Let's say you wanted to blow thousands of dollars on a gaming PC. If you're looking at spending that kind of money, it's worth comparing the cost of individual components against a prebuilt gaming system. Still, the actual prices may surprise you. For example, if you wanted to upgrade Dell's $2293 Alienware Aurora to include a second NVIDIA GeForce GTX 780 graphics card, you'd pay an additional $600 on Alienware's website. The same graphics card costs $650 on Amazon or Newegg, so you'd be spending more money building the system yourself. Why? Dell's Alienware gets bulk discounts you can't get, and this is Alienware, which was once regarded as selling ridiculously overpriced gaming PCs to people who wouldn't build their own.

    Building your own PC still gives you the most freedom in choosing and combining components, but that's only valuable to a small niche of gamers and professional users; most people, even average gamers, would be fine with a prebuilt system. If you're an average person or even an average gamer, you'll likely find it cheaper to purchase a prebuilt PC than to assemble your own. Even at the very high end, components may be more expensive separately than they are in a prebuilt PC. Enthusiasts who want to choose every individual component for their dream gaming PC and want maximum flexibility may still want to build their own. Even then, building your own PC these days is more about flexibility and component choice than about saving money.

    In summary, you probably shouldn't build your own PC. If you're an enthusiast, you may want to, but only a small minority of people would actually benefit from building their own systems. Feel free to compare prices, but you may be surprised which is cheaper. Image Credit: Richard Jones on Flickr, elPadawan on Flickr, Richard Jones on Flickr

    Read the article

  • How to validate HTTP request headers before receiving request body using WCF

    - by anelson
    I'm implementing a REST service using WCF which will be used to upload very large files. The HTTP headers of the request will carry information that must be validated before the upload is allowed to proceed (things like permissions, available disk space, etc). It's possible this validation will fail, resulting in an error response. I'd like to perform this validation before the client sends the body of the request, so it has a chance to detect failure before uploading potentially gigabytes of data. RESTful web services use the HTTP 1.1 "Expect: 100-continue" mechanism in the request to implement this. For example, Amazon S3's REST API can validate your key and ACLs in response to an object PUT operation, returning 100 Continue if all is well, indicating you may proceed to send your data. I've rummaged around the WCF documentation and I just can't see a way to accomplish this without some pretty low-level hooking into the HTTP request processing pipeline. How would you suggest I solve this problem?
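    Independent of WCF, the wire-level handshake itself is small. A hedged socket-level sketch in Python (hypothetical host and path) of what the client side of Expect: 100-continue looks like:

        import socket

        def upload_with_expect(host, path, body, port=80):
            # Send headers only, then wait for the server's verdict.
            s = socket.create_connection((host, port))
            head = (
                "PUT {} HTTP/1.1\r\n"
                "Host: {}\r\n"
                "Content-Length: {}\r\n"
                "Expect: 100-continue\r\n"
                "\r\n"
            ).format(path, host, len(body))
            s.sendall(head.encode("ascii"))
            first = s.recv(4096).decode("ascii", "replace")
            if "100 Continue" in first:
                s.sendall(body)          # validation passed: stream the payload
                return s.recv(4096)      # final response
            return first                 # rejected before any body bytes moved

    The server side is the part WCF would have to expose: run the header validation, then emit either the interim 100 response or the final error.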

    Read the article

  • Need an encrypted online source code backup service.

    - by camelCase
    Please note this is not a question about online/hosted SVN services. I am working on a home-based, solo-developer project that now has commercial significance, and it is time to think about remote source code backup. There is no need for file-level check-in/out; all I need is a once-a-day or once-a-week directory-level snapshot to remote storage. Automatic encryption would be a bonus, to protect my IP. What I have in mind is some sort of GUI app that will squirt a source code snapshot off to an Amazon S3 bucket on an automatic schedule. (My development PC runs MS Windows.)
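    The moving parts of that workflow are small enough to sketch. This is a hedged outline with present-day tooling (boto3 and the cryptography package are assumptions, not a product recommendation; bucket and file names are hypothetical):

        import tarfile
        import boto3
        from cryptography.fernet import Fernet

        def backup(src_dir, bucket, key):
            # 1. Directory-level snapshot; no per-file check-in/out needed.
            with tarfile.open("snapshot.tar.gz", "w:gz") as tar:
                tar.add(src_dir)
            # 2. Encrypt client-side so the IP never leaves the machine in clear.
            with open("snapshot.tar.gz", "rb") as f:
                token = Fernet(key).encrypt(f.read())
            with open("snapshot.tar.gz.enc", "wb") as f:
                f.write(token)
            # 3. Ship to S3; Windows Task Scheduler can supply the schedule.
            boto3.client("s3").upload_file("snapshot.tar.gz.enc", bucket,
                                           "snapshot.tar.gz.enc")

        # key = Fernet.generate_key()  -- generate once and store it safely

    A GUI front end is then just a wrapper around this one function.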

    Read the article

  • image decoding problem in android

    - by achie
    Some images are not displayed in the web browser on Android, while they work fine on all other machines and mobile devices. This is an example of one of those images: http://s3.amazonaws.com/itriage/logos/19/iphone_list.jpg?1261515055 So I tried to pull the image from code to see if it works. This is what I did:

        URL url = new URL(address);
        InputStream is = (InputStream) url.getContent();
        Drawable d = Drawable.createFromStream(is, "src");

    It works for most images, but for some, including this one, it gives this error:

        D/skia (28314): --- decoder-decode returned false

    Why is this happening, and how can I prevent it? I saw an example on the developers' forum, but that's for when we access the image directly from code; what I want is for the browser to handle it. So how do I encode the images on my server so that Android devices recognise them correctly?
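    One common culprit for skia decode failures on images that open fine elsewhere is a JPEG saved in CMYK or another uncommon encoding. A hedged server-side sketch (Pillow is an assumption; adjust quality to taste) that re-encodes to a plain baseline RGB JPEG, which decoders of this era accepted far more reliably:

        from PIL import Image

        def normalize_for_android(src_path, dst_path):
            # Force RGB and a baseline (non-progressive) JPEG encoding.
            img = Image.open(src_path)
            img.convert("RGB").save(dst_path, "JPEG",
                                    quality=90, progressive=False)

    Serving already-normalized files also keeps the plain browser path working, which is what the question asks for.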

    Read the article

  • Recording Audio through RTMP/Rails

    - by Lowgain
    I am in the process of building a Rails/Flex application which requires audio to be recorded and then stored in our Amazon S3 account. I have found no alternative to using some form of RTMP server for recording audio through Flash, but our hosting environment will not allow us to install anything like FMS, Red5, etc. Is there any existing Ruby/Rails RTMP solution that will allow audio recording? If not, is it possible for Rails to at least intercept the RTMP stream, so I can hope to reference Red5 or something for parsing the data (long shot, I know)? The other alternative I can think of is hosting a Red5 server on another host and communicating with our Rails app once the saving/uploading is done, which is not preferred. Am I going to have any luck here?

    Read the article

  • Reviewing Retail Predictions for 2011

    - by David Dorf
    I've been busy thinking about what 2012 and beyond will look like for retail, and I have some interesting predictions to share. But before I go there, let's first review this year's predictions before making new ones for 2012.

    1. Alternate Payments

    We've seen several alternate payment schemes emerge over the last two years, and 2011 may be the year one of them takes hold. Any competition that can drive down fees will be good for everyone. I'm betting that Apple will add NFC chips to their next version of the iPhone, then enable payments in stores using iTunes accounts on the backend. PayPal will continue to make inroads, and Isis will announce a pilot. The iPhone 4S did not contain an NFC chip, so we'll have to keep waiting for the iPhone 5. PayPal announced it's moving into in-store payments, and Google launched its wallet in selected cities. Overall I think the payment scene is heating up, and that trend will continue.

    2. Engineered Systems

    The industry is moving toward purpose-built appliances that are optimized across the entire stack. Oracle calls these "engineered systems", and the first two examples are Exadata and Exalogic, but there are other examples from other vendors. These are particularly important to the retail industry because of the volume of data that must be processed. There should be continued adoption in 2011. Oracle reports that Exadata is its fastest-growing product, and at the recent OpenWorld it announced the SuperCluster and Exalytics products, both continuing the engineered-systems trend. SAP's HANA continues to receive attention, and IBM also seems to be moving in this direction.

    3. Social Analytics

    There are lots of tools that provide insight into how a brand is perceived across popular internet sites, but as far as I know, these tools are not industry specific. The next step is to mine the data and determine how it should influence retail operations. The data needs to help retailers determine how they create promotions, which products to stock, and how to keep consumers engaged. Social data alone does not provide the answers, but it's one more data point that will help retailers make better decisions. Look for some vendor consolidation to help make this happen. In March, Salesforce.com acquired leading social monitoring vendor Radian6 and followed up with acquisitions of Heroku and Model Metrics. The notion of Social CRM seems to be going more mainstream now.

    4. 2-D Barcodes

    Look for more QR codes on shelf tags, in newspaper circulars, and on billboards. It's a great portal from the physical world into the digital one that buys us time until augmented reality matures further. Nobody wants to type "www", backslash, and ".com" on their phones. QR codes are everywhere. 'Nuff said.

    5. In the words of Microsoft, "To the Cloud!"

    My favorite "cloud application" is Evernote. If you take notes on your work laptop, you will inevitably need those notes on your home PC, and if you manage to solve that problem, you'll need to access them from your mobile phone. Evernote stores your notes in the cloud and provides easy ways to access them. Being able to access a service from anywhere, and not having to worry about backups, upgrades, etc., is great. Retailers will start to rely on cloud services, both public and private, in the coming year. There was no shortage of announcements in this area: Amazon's cloud-based Kindle Fire, Apple's iCloud, Oracle's Public Cloud, etc. I saw an interesting presentation showing how BevMo moved their systems to the cloud. It seems retailers are starting to consider the cloud for specific uses.

    6. F-Commerce

    Move over "E" and "M" so we can introduce "F-Commerce," which should go mainstream in 2011. Already several retailers have created small stores on Facebook, and it won't be long before Facebook becomes a full-fledged channel in the omni-channel world of retail. The battle between Facebook and Google will heat up over retail, where both stand to make lots of money. JCPenney and ASOS both put their entire catalogs on Facebook, and lots of other retailers have connected Facebook to their e-commerce sites. I still think selling from the newsfeed is the best approach, and several retailers are trying that as well. I just don't see Google+ as a threat to Facebook, so I think that battle is over. I called 2011 the year of F-Commerce, and that was probably accurate.

    It's good to look back at predictions, but we also have to think about what was missed. I didn't see Amazon entering the tablet business with such a splash, although in hindsight it was obvious. Nor did I think HP would fall so far so fast. Look for my 2012 predictions coming soon.

    Read the article

  • Intent.getInt() doesn't work on ICS, but works on JB

    - by ObAt
    I use this code to send parameters when I start a new Activity:

        Intent inputForm = new Intent(getActivity(), InputForm.class);
        Bundle b = new Bundle();
        b.putInt("item", Integer.parseInt(mItem.id)); // your id
        inputForm.putExtras(b);  // put your id into the next Intent
        startActivity(inputForm);

    And I use this code to read the parameters in the InputForm Activity:

        Bundle b = getIntent().getExtras();
        if (b != null) {
            int value = b.getInt("item");
            ID = value;
        }
        Toast.makeText(getApplication(), "MIJN ID:" + Integer.toString(ID), Toast.LENGTH_LONG).show();

    When I run this code on my Samsung Tab 10.1 GT-P7510, ID is always 0; when I run the same code on my Galaxy S3 with JB, it works fine. Can someone help me? Thanks in advance, ObAt

    Read the article

  • Respecting EXIF orientation when displaying iPhone photos on the web

    - by GingerBreadMane
    I am developing an iPhone camera app that uploads an image to Amazon S3, and that image is displayed on a website. When the iPhone takes a picture, it always saves the photo in an upright orientation, while the orientation used to correctly view the photo is saved in the image's EXIF data. So if I take a photo with the iPhone and open it in Firefox without processing the EXIF data, the image could be sideways or upside down. My problem is that I don't know how to display the photo in its correct orientation on the website. My current solution is to rotate the photo in the iPhone app, but I'd rather not do that. Is there any way to respect the EXIF data when displaying on the web without pre-processing the image?
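    The usual server-side answer is to bake the orientation into the pixels once, at upload time, so every browser shows the image upright with no EXIF awareness needed. A hedged sketch with Pillow (a modern convenience; the same idea can be done by hand from the Orientation tag):

        from PIL import Image, ImageOps

        def upright(src_path, dst_path):
            # Apply the EXIF Orientation tag to the pixel data, then save.
            img = ImageOps.exif_transpose(Image.open(src_path))
            img.save(dst_path, "JPEG", quality=95)

    Rotating once on the server keeps the iPhone app untouched.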

    Read the article

  • NerdDinner difficulties

    - by George
    Hello guys, I'm having a problem with the Create method of the NerdDinner tutorial, which is very good BTW. As you can see here, http://nerddinnerbook.s3.amazonaws.com/Part5.htm, in the Create method he removed the ID field from the aspx page. I did that too, but I cannot add any dinners because I get a primary key violation. How is NerdDinner controlling the ids of each dinner? I revised the tutorial and could not see any reference to identity fields in the SQL database. I even created a method to get the highest id in the table, which does not work either:

        public int GetHighestDinnerId()
        {
            int resultado = (from dinner in dataContext.Dinners
                             select dinner.DinnerId).Max();
            return resultado;
        }

    Any thoughts? Thank you

    Read the article

  • How to get paperclip to delete files

    - by webdestroya
    I have a model that uses Paperclip to manage its file. After I delete the model, I would obviously like the file to be deleted as well, but I cannot seem to find out how to get Paperclip to delete it. I have tried

        self.sourcefile = nil if !sourcefile.dirty?

    in the before_destroy def, but that had no effect. (I want it to delete the file locally when I test, and then on S3 when I use that, so I need a pure Paperclip solution.) Any ideas?

    Read the article

  • Logs are filling up with httpclient.wire.content dumps. How can I turn it off?

    - by ?????
    My catalina logs are filling up with gobs of statements like:

        /logs/catalina.out:2010-05-05 02:57:19,611 [Thread-19] DEBUG httpclient.wire.content - >> "[0x4] [0xc][0xd9][0xf4][0xa2]MA[0xed][0xc2][0x93][0x1b][0x15][0xfe],[0xe]h[0xb0][0x1f][0xff][0xd6][0xfb] [0x8f]O[0xd4][0xc4]0[0xab][0x80][0xe8][0xe4][0xf2][\r]I&[0xaa][0xd2]BQ[0xdb](zq[0xcd]ac[0xa8]

    on and on forever. I searched every config file in both Tomcat and Apache for the statements that purportedly turn this on, as described here: http://hc.apache.org/httpclient-3.x/logging.html, and I don't see where this logging has been enabled. No other .war I deploy does this. The log4j configuration block in the app isn't doing it. Any ideas? I'm using an S3 library for Grails that may be the source of these; however, when I run this application on my development machine (in both develop and deploy configs), I'm not seeing it.
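    The logger names in the output are the ones to silence. Assuming a standard log4j.properties somewhere on the webapp's classpath (possibly inside the Grails S3 library's jar), entries along these lines turn the wire dump off:

        # Silence commons-httpclient wire logging (names match the output above)
        log4j.logger.httpclient.wire=OFF
        log4j.logger.org.apache.commons.httpclient=ERROR

    If no such file exists in your own app, the library may be shipping its own log4j configuration, which is worth checking inside its jar.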

    Read the article

  • what does the @ symbol mean in ls -l directory listing?

    - by Andrew Arrow
    When I run ls -l on my Mac I see two .yml files:

        -rw-r--r--  1 aa staff   6 Apr 15 05:50 s1.yml
        -rw-r--r--@ 1 aa staff 362 Apr 15 05:49 s3.yml

    Same owner, same permissions, but one has a @ at the end of the permissions. The one with the @ shows up in my editor; the one without does not. So there must be some significance. How can I turn on the @ for the file without it? I selected the files in the Finder, did Get Info, and everything looks identical between the two files.
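    The trailing @ is the macOS marker for extended attributes on the file. A few commands to inspect and set them (macOS-specific; the attribute name in the last line is an arbitrary example, not one the editor necessarily looks for):

        ls -l@ s3.yml                          # list which attributes are present
        xattr -l s3.yml                        # show attribute names and values
        xattr -w com.example.note hi s1.yml    # write an attribute; @ appears

    Get Info doesn't list raw extended attributes, which would explain why both files looked identical there.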

    Read the article

  • How to bind a control to a singleton in Cocoa?

    - by SphereCat1
    I have a singleton in my FTP app designed to store all of the types of servers that the app can handle, such as FTP or Amazon S3. These types are plugins which are located in the app bundle. Their path is located by applicationWillFinishLoading: and sent to the addServerType: method inside the singleton to be loaded and stored in an NSMutableDictionary. My question is this: How do I bind an NSDictionaryController to the dictionary inside the singleton instance? Can it be done in IB, or do I have to do it in code? I need to be able to display the dictionary's keys in an NSPopupButton so the user can select a server type. Thanks in advance! SphereCat1

    Read the article

  • How do API Keys and Secret Keys work?

    - by viatropos
    I am just starting to think about how API keys and secret keys work. Just two days ago I signed up for Amazon S3 and installed the S3Fox plugin. It asked me for both my Access Key and Secret Access Key, both of which require me to log in to access. So I'm wondering: if they're asking me for my secret key, they must be storing it somewhere, right? Isn't that basically the same thing as asking me for my credit card number or password and storing it in their own database? How are secret keys and API keys supposed to work? How secret do they need to be? Are the applications that use the secret keys storing them somehow? Thanks for the insight.
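    For the second half of the question: the way S3-style secret keys are supposed to work is request signing. The secret never travels with the request; both sides compute an HMAC over the request and compare signatures. A minimal sketch (Python stdlib; the string-to-sign format here is illustrative, not S3's exact one):

        import hashlib
        import hmac

        def sign(secret_key, string_to_sign):
            # Both client and server derive this; only the signature is sent.
            return hmac.new(secret_key, string_to_sign,
                            hashlib.sha256).hexdigest()

        sig = sign(b"my-secret-key",
                   b"GET\n/bucket/object\n2010-04-01T00:00:00Z")

    Which is also why handing the secret to a third-party tool like S3Fox really does mean trusting that tool to store it safely.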

    Read the article

  • Embeding EasyVideoPlayer Code into Wordpress Theme - Video not showing

    - by bbacarat
    I'm attempting to place some embed code into a premium WordPress theme. NOTE: I'm not great when it comes to PHP. The embed code is produced by a video player called EasyVideoPlayer. (Basically it allows me to use Amazon S3 and gives me feedback on when people stop watching the video.) This is the embed code I have:

        _evpInit('ZXh0cmEtbW9uZXktZnJvbS1ob21lLTEubW92');

    I've opened the index.php WordPress file and placed this video embed code between the tags that represent the area of the website I want it to show up in. However, the video is not showing. Setting both the theme and the video player aside, would you expect the PHP code to accept what I've done, or is this not the way to go about adding this embed code? NOTE: I've contacted both the WordPress premium theme support at Woothemes.com and the video player support at EasyVideoPlayer.com; however, both tend to stop at the point where another paid product is involved! Grrreat. Website is www.extramoneyfromhome.co.uk

    Read the article

  • How important is caching for a site's speed with PHP?

    - by benhowdle89
    I've just made a user-content oriented website, http://www.humanisms.co.uk. It's done in PHP, MySQL and jQuery AJAX. At the moment there are only a dozen or so submissions, and already I can feel it lagging slightly when it goes to a new page (and therefore runs a new MySQL query). Is it more important for me to optimise my MySQL queries (with prepared statements), or is it worth looking at CDNs (Amazon S3) and caching (much like the WordPress plugin WP Super Cache), which works by serving static HTML files when no new content has been submitted? Which route is the most beneficial for me, as a developer, to take, i.e. where am I better off concentrating my efforts to speed up the site?

    Read the article

  • Best way to perform authentication on every request

    - by Nik
    Hello. In my ASP.NET MVC 2 app, I'm wondering about the best way to implement this: for every incoming request I need to perform custom authorization before allowing the file to be served. (This is based on headers and the contents of the querystring. If you're familiar with how Amazon S3 does REST authentication: exactly that.) I'd like to do this in the most performant way possible, which probably means as light a touch as possible, with IIS doing as much of the actual work as possible. The service will need to handle GET requests, as well as write new files coming in via POST/PUT requests. The requests are for arbitrary files, so it could be:

        GET http://storage.foo.com/bla/egg/foo18/something.bin
        POST http://storage.foo.com/else.txt

    Right now I've half implemented it using an IHttpHandler which handles all routes (with routes.RouteExistingFiles = true), but I'm not sure if that's best, or whether I should be hooking into the lifecycle somewhere else. Many thanks for any pointers. (IIS7)

    Read the article

  • php : open a file download dialog

    - by Hugh Valin
    I have an mpg file hosted on Amazon S3 that I want to link to from a page I have, so the user will be able to download it from the page. I have a link in my page: "bla bla". The link to the file works when I right-click it and choose "Save Target As", but I would like it to also work when I left-click it, opening a file download dialog. Right now, a left click leads to a page that plays the video directly (in Firefox) or just won't load (in Explorer). I am working in PHP; does anyone have a clue why this happens?
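    The usual reason a left click plays the file inline is a missing Content-Disposition header: without it the browser is free to render the mpg itself. One hedged way to fix it at the source is to set the header on the S3 object at upload time (boto3 sketch; bucket and key names are hypothetical, and the same header could instead be emitted by a small PHP proxy script):

        import boto3

        s3 = boto3.client("s3")
        s3.upload_file(
            "video.mpg", "my-bucket", "video.mpg",
            ExtraArgs={
                "ContentType": "video/mpeg",
                # Forces the browser's save dialog instead of inline playback.
                "ContentDisposition": 'attachment; filename="video.mpg"',
            },
        )

    Once the object carries that header, a plain left click on the direct S3 URL prompts a download in both browsers.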

    Read the article

  • SlidingDrawer handle design issue

    - by bali182
    I would like to create a SlidingDrawer which has a handle like the one in the screenshot. Until now I thought my solution was OK, which was:

    1.) I created a 9-patch. The center part is not stretchable, only the two sides (and the height, if needed). On most phones I got the desired result.

    2.) I put it in this layout (just pseudo-code):

        <SlidingDrawer (fullscreen)>
            <Button as handle (full width, background the 9patch) />
            <LinearLayout as content />
        </SlidingDrawer>

    However, I tested the app on my friend's new Galaxy S3, and the part which should be centered sat completely off to one side. I have no idea why, and it has bugged me ever since. My question: is this the preferred way (a 9-patch with full width) to accomplish the look I want? If not, could someone suggest a better solution?

    Read the article
