Search Results

Search found 7284 results on 292 pages for 'rails 3'.


  • Rob Blackwell on interoperability and Azure

    - by Eric Nelson
    At QCon in March we had a sample Azure application implemented in both Java and Ruby to demonstrate that the Windows Azure Platform is not just about .NET. The following is an interesting interview with Rob Blackwell, the R&D director of the partner who implemented the application. UK Interoperability Team Interviews Rob Blackwell, R&D Director at Active Web Solutions. Is Microsoft taking interoperability seriously? Yes. In the past, I think Microsoft has, quite rightly come in for criticism, but architects and developers should look at this again. The Interoperability Bridges site (http://www.interoperabilitybridges.com/ ) shows a wide range of projects that allow interoperability from Java, Ruby and PHP for example. The Windows Azure platform has been architected with interoperable APIs in mind. It's straightforward to access the various storage facilities from just about any language or platform. Azure compute is capable of running more than just C# applications! Why is interoperability important to you? My company provides consultancy and bespoke development services. We're a Microsoft Gold Partner, but we live in the real world where companies have a mix of technologies provided by a variety of vendors. When developing an enterprise software solution, you rarely have a completely blank canvas. We often see integration scenarios where we need to exchange data with legacy systems. It's not unusual to see modern Silverlight applications being built on top of Java or Mainframe based back ends. Could you give us some examples of where interoperability has been important for your projects? We developed an innovative Sea Safety system for the RNLI Lifeboats here in the UK. Commercial Fishing is one of the most dangerous professions and we helped developed the MOB Guardian System which uses satellite technology and man overboard devices to raise the alarm when a fisherman gets into trouble. The solution is implemented in .NET running on Windows, but without interoperable standards, it would have been impossible to communicate with the satellite gateway technology. For more information, please see the case study: http://www.microsoft.com/casestudies/Case_Study_Detail.aspx?CaseStudyID=4000005892 More recently, we were asked to build a web site to accompany the QCon 2010 conference in London to help demonstrate and promote interoperability. We built the site using Java and Restlet and hosted it in Windows Azure Compute. The site accepts feedback from visitors and all the data is stored in Windows Azure Storage. We also ported the application to Ruby on Rails for demonstration purposes. Visitors to the stand were surprised that this was even possible. Why should Java developers be interested in Windows Azure? Windows Azure Storage consists of Blobs, Queues and Tables. The storage is scalable, durable, secure and cost-effective. Using the WindowsAzure4j library, it's easy to use, and takes just a few lines of code. If you are writing an application with large data storage requirements, or you want an offsite backup, it makes a lot of sense. Running Java applications in Azure Compute is straightforward with tools like the Tomcat Solution Accelerator (http://code.msdn.microsoft.com/winazuretomcat )and AzureRunMe (http://azurerunme.codeplex.com/ ). The Windows Azure AppFabric Service Bus can also be used to connect heterogeneous systems running on different networks and in different data centres. How can The Service Bus be considered an interoperability solution? 
I think that the Windows Azure AppFabric Service Bus is one of Microsoft’s best kept secrets. Think of it as “a globally scalable application plumbing kit in the sky”. If you have used Enterprise Service Buses before, you’ll be familiar with the concept. Applications can connect to the service bus to securely exchange data – these can be point to point or multicast links. With the AppFabric Service Bus, the applications can exist anywhere that has access to the Internet and the connections can traverse firewalls. This makes it easy to extend or scale your application or reach out to other networks and technologies. For example, let’s say you have a SQL Server database running on premises and you want to expose the data to a Java application running in the cloud. You could set up a point to point Service Bus connection and use JDBC. Traditionally this would have been difficult or impossible without punching holes in firewalls and compromising security. Rob Blackwell is R&D Director at Active Web Solutions, www.aws.net , a Microsoft Gold Partner specialising in leading edge software solutions. He is an occasional writer and conference speaker and blogs at www.robblackwell.org.uk Related Links: UK Azure Online Community – join today. UK Windows Azure Site Start working with Windows Azure

    Read the article

  • Microsoft, jQuery, and Templating

    - by Stephen Walther
    About two months ago, John Resig and I met at Café Algiers in Harvard square to discuss how Microsoft can contribute to the jQuery project. Today, Scott Guthrie announced in his second-day MIX keynote that Microsoft is throwing its weight behind jQuery and making it the primary way to develop client-side Ajax applications using Microsoft technologies. What does this announcement mean? It means that Microsoft is shifting its resources to invest in jQuery. Developers on the ASP.NET team are now working full-time to contribute features to the core jQuery library. Furthermore, we are working with other teams at Microsoft to ensure that our technologies work great with jQuery. We are contributing to the open-source jQuery project in the exact same way that any other company or individual from the community can contribute to jQuery. We are writing proposals, submitting the proposals to the jQuery forums, and revising the proposals in response to community feedback. The jQuery team can decide to reject or accept any feature that we propose. Any feature that Microsoft contributes to jQuery will be platform neutral. In other words, Microsoft contributions will benefit PHP and RAILS developers just as much as they benefit ASP.NET developers. Microsoft contributions to jQuery will improve the web for everyone. Contributing Support for Templates to jQuery Core Our first proposal concerns templating. We want to contribute support for templates to jQuery so that JavaScript developers can use jQuery to easily display a set of database records. You can read our templating proposal here: http://wiki.github.com/nje/jquery/jquery-templates-proposal You can download and play with our prototype for templating here: http://github.com/nje/jquery-tmpl The following code illustrates how you can use a template to display a set of products in a bulleted list: <script type="text/javascript"> jQuery(function(){ var products = [ { name: "Product 1", price: 12.99}, { name: "Product 2", price: 9.99}, { name: "Product 3", price: 35.59} ]; $("ul").append("#template", products); }); </script> <script id="template" type="text/html"> <li>{%= name %} - {%= price %}</li> </script> <ul></ul> The template is contained in a SCRIPT element that has a TYPE=”text/html” attribute. Browsers ignore the contents of a SCRIPT element when they don’t understand the content type. Notice that the placeholder {%=...%} is used within the template to indicate where the name and price of a product should appear. The delimiters {%=…%} are used for expressions and the delimiters {%...%} are used for code. Finally, the products are rendered using the template with the call to $(“ul”).append(“#template”, products). The standard jQuery DOM manipulation methods have been modified to support templates. When the page above is rendered, you get the bulleted list displayed in the following figure. Our goal is to keep our proposal for templates as simple as possible. After support for templating has been added to jQuery, plug-in authors can take advantage of templating when building complex data-driven plug-ins such as a DataGrid plug-in. The Ajax Control Toolkit Over 100,000 developers download the Ajax Control Toolkit every month. That’s a mind-boggling number of downloads. We realize that the Ajax Control Toolkit is extremely popular among ASP.NET Web Forms developers and we want to continue to invest in the Ajax Control Toolkit. 
If you are adding JavaScript interactivity to an ASP.NET Web Forms application, and you don’t want to write JavaScript, then we recommend that you use the server controls in the Ajax Control Toolkit. Using the Ajax Control Toolkit does not require knowledge of JavaScript and the toolkit enables you to build applications with the concepts familiar to ASP.NET Web Forms applications developers. If, however, you are interested in creating client-side interactivity without server controls then we recommend that you use jQuery. We plan to continue to release new versions of the Ajax Control Toolkit every few months. Our goal is to continue to improve the quality of the Ajax Control Toolkit and to make it easier for the community to contribute code, bug fixes, and documentation. The ASP.NET Ajax Library We are moving the ASP.NET Ajax Library into the Ajax Control Toolkit. If you currently use ASP.NET Ajax Library client templates, client data-binding, or the client script loader then you can continue to use these features by downloading the Ajax Control Toolkit. Be aware that our focus with the Ajax Control Toolkit is server-side Ajax.  For client-side Ajax, we are shifting our focus to jQuery. For example, if you have been using ASP.NET Ajax Library client templates then we recommend that you shift to using jQuery instead. Conclusion Our plan is to focus on jQuery as the primary technology for building client-side Ajax applications moving forward. We want to adapt Microsoft technologies to work great with jQuery and we want to contribute features to jQuery that will make the web better for everyone. We are very excited to be working with the jQuery core team.

    Read the article

  • My Codemash 2011 Retrospective

    - by Greg Malcolm
    I just got back from Codemash yesterday, and still on an adrenaline buzz. Here's my take on this years encounter: The Awesome Nearly everybody in one place Codemash is the ultimate place to catch up with community friends. This is my 3rd year visiting and I've got to know a great number of very cool people through various conferences, Give Camps and other community events. I'm finding more and more that Codemash is the best place to catch up with everybody regardless of technology interest or location. Of course I always make a whole bunch more friends while I'm there! Yay! Open Spaced I found the open spaces didn't work so well last year. This year things went a lot smoother and the topics were engaging and fresh. While I miss Alan Steven's approach of running it like an agile project, it was very cool to see that it evolving. Laptops were often cracked open, not just once but frequently! For example: Jasmine - Paired on a javascript kata using the Jasmine javascript test runner J - Sat in on a J demo from local J enthusiast, Tracy Harms Watir - More pairing, this time using Ruby with the watir-webdriver through cucumber. I'd mostly forgotten that Cucumber runs just fine without Rails. It made a change to do without. The other spaces were engaging too, but I think that's enough for that topic. Javascript Shenanigans I've already mentioned that I attended a Jasmine kata session. Jasmine is close to my heart right now every since I discovered it while on the hunt for a decent Javascript testing framework for a javascript koans project earlier this year. Well, it also got covered in the Java Precompiler and Pillar's vendor session, which was great to see. Node.js was also a reoccurring theme. Node.js in a nutshell? It's an extremely scalable Event based I/O server which runs on Javascript. I'd already encountered through a Startup Weekend project and have been noticing increasing interest of late. After encountering more node.js driven excitement from my peers at codemash I absolutely had to attend the open space on it. At least 20 people turned up and by the end we had some answers, a whole ton of new questions and an impromptu user group in the form of a twitter channel (#nodemash). I have no idea where this is going to go or how big it is going to become, but if it can cross the chasm into the enterprise it could become huge... Scala Koans I'm a bit of a Koans addict, and I really need more exposure to functional languages so I gave the Scala Koans precompiler a try. Great fun! I'm really glad I attended because I found I had a whole ton of questions. Currently the koans are available here, and the answers are here. Opportunities While we're on the subject can we change the subject now? Hai Gregory, You really need to keep the drinking for later in the day. I mean seriously, you're 34 and you still do this every single time! Sure, you made it to Chad Fowler keynote ok, but you looking a rather pale weren't you? Also might have been nice to attend 'Netflicks in the Cloud' instead of 'Sleeping It Off For People Who Should Know Better'. Kthxbye PS: Stop talking to yourself Not that I entirely regret it, I've had some of my greatest insights through late night drunken conversations at the CodeMash bar. Just might be nice to reign it in a little and get something out of the next morning too. 
Diversity This is something that is in the back of my mind because of conversations at Codemash as well as throughout the year; I'm realizing more and more how discouraging the IT profession is for women. I notice in the community there has been a lot of attention paid to stamping out harrasment, which is good, but there also seems to be a massive PR issue. I really don't have any solutions, but I figure it can't hurt to pay more attention to whats going on... And in Other News I now have a picture of Chad Fowler giving me more cowbell! Sadly I managed to lose the cowbell later on. Hopefully it's gone to a Better Place. The Womack Family Band joined in with the musicians jam this year. There's my cowbell again! Why must you hide from me? I also finally went in the water for the first time in all the I've been coming to codemash. Why did I wait so long?!?

    Read the article

  • AppHarbor - Azure Done Right AKA Heroku for .NET

    - by Robz / Fervent Coder
    Easy and Instant deployments and instant scale for .NET? Awhile back a few of us were looking at Ruby Gems as the answer to package management for .NET. The gems platform supported the concept of DLLs as packages although some changes would have needed to happen to have long term use for the entire community. From that we formed a partnership with some folks at Microsoft to make v2 into something that would meet wider adoption across the community, which people now call NuGet. So now we have the concept of package management. What comes next? Heroku Instant deployments and instant scaling. Stupid simple API. This is Heroku. It doesn’t sound like much, but when you think of how fast you can go from an idea to having someone else tinker with it, you can start to see its power. In literally seconds you can be looking at your rails application deployed and online. Then when you are ready to scale, you can do that. This is power. Some may call this “cloud-computing” or PaaS (Platform as a Service). I first ran into Heroku back in July when I met Nick of RubyGems.org. At the time there was no alternative in the .NET-o-sphere. I don’t count Windows Azure, mostly because it is not simple and I don’t believe there is a free version. Heroku itself would not lend itself well to .NET due to the nature of platforms and each language’s specific needs (solution stack).  So I tucked the idea in the back of my head and moved on. AppHarbor Enters The Scene I’m not sure when I first heard about AppHarbor as a possible .NET version of Heroku. It may have been in November, but I didn’t actually try it until January. I was instantly hooked. AppHarbor is awesome! It still has a ways to go to be considered Heroku for .NET, but it already has a growing community. I created a video series (at the bottom of this post) that really highlights how fast you can get a product onto the web and really shows the power and simplicity of AppHarbor. Deploying is as simple as a git/hg push to appharbor. From there they build your code, run any unit tests you have and deploy it if everything succeeds. The screen on the right shows a simple and elegant UI to getting things done. The folks at AppHarbor graciously gave me a limited number of invites to hand out. If you are itching to try AppHarbor then navigate to: https://appharbor.com/account/new?inviteCode=ferventcoder  After playing with it, send feedback if you want more features. Go vote up two features I want that will make it more like Heroku. Disclaimer: I am in no way affiliated with AppHarbor and have not received any funds or favors from anyone at AppHarbor. I just think it is awesome and I want others to know about it. From Zero To Deployed in 15 Minutes (Or Less) Now I have a challenge for you. I created a video series showing how fast I could go from nothing to a deployed application. It could have been from Zero to Deployed in Less than 5 minutes, but I wanted to show you the tools a little more and give you an opportunity to beat my time. And that’s the challenge. Beat my time and show it in a video response. The video series is below (at least one of the videos has to be watched on YouTube). The person with the best time by March 15th @ 11:59PM CST will receive a prize. 
Ground rules:
- .NET Application with a valid database connection
- Start from Zero
- Deployed with AppHarbor or an alternative
- A timer displayed in the video that runs during the entire process
- Video response published on YouTube or acceptable alternative
- Video(s) must be published by March 15th at 11:59PM CST. Either post the link here as a comment or on YouTube as a response (also by 11:59PM CST March 15th)

From Zero To Deployed In 15 Minutes (Or Less) Part 1
From Zero To Deployed In 15 Minutes (Or Less) Part 2
From Zero To Deployed In 15 Minutes (Or Less) Part 3
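    For reference, the push-to-deploy flow described earlier comes down to two git commands; the remote URL below is a placeholder of the kind AppHarbor generates per application, not a real endpoint:

        # add the repository AppHarbor gives you as a remote (URL is hypothetical)
        git remote add appharbor https://yourusername@appharbor.com/yourapp.git

        # push master; AppHarbor then builds, runs your unit tests and deploys on success
        git push appharbor master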

    Read the article

  • Who could ask for more with LESS CSS? (Part 1 of 3 – Features)

    - by ToStringTheory
    It wasn’t very long ago that I first began to get into CSS precompilers such as SASS (Syntactically Awesome Stylesheets) and LESS (The Dynamic Stylesheet Language) and I had been hooked on the idea since.  When I finally had a new project come up, I leapt at the opportunity to try out one of these languages. Introduction To be honest, I was hesitant at first to add either framework as I didn’t really know much more than what I had read on their homepages, and I didn’t like the idea of adding too much complexity to a project - I couldn’t guarantee I would be the only person to support it in the future. Thankfully, both of these languages just add things into CSS.  You don’t HAVE to know LESS or SASS to do anything, you can still do your old school CSS, and your output will be the same.  However, when you want to start doing more advanced things such as variables, mixins, and color functions, the functionality is all there for you to utilize. From what I had read, SASS has a few more features than LESS, which is why I initially tried to figure out how to incorporate it into a MVC 4 project. However, through my research, I couldn’t find a way to accomplish this without including some bit of the Ruby on Rails framework on the computer running it, and I hated the fact that I had to do that.  Besides SASS, there is little chance of me getting into the RoR framework, at least in the next couple years.  So in the end, I settled with using LESS. Features So, what can LESS (or SASS) do for you?  There are several reasons I have come to love it in the past few weeks. 1 – Constants Using LESS, you can finally declare a constant and use its value across an entire CSS file. The case that most people would be familiar with is colors.  Wanting to declare one or two color variables that comprise the theme of the site, and not have to retype out their specific hex code each time, but rather a variable name.  What’s great about this is that if you end up having to change it, you only have to change it in one place.  An important thing to note is that you aren’t limited to creating constants just for colors, but for strings and measurements as well. 2 – Inheritance This is a cool feature in my mind for simplicity and organization.  Both LESS and SASS allow you to place selectors within other selectors, and when it is compiled, the languages will break the rules out as necessary and keep the inheritance chain you created in the selectors. Example LESS Code: #header {   h1 {     font-size: 26px;     font-weight: bold;   }   p {     font-size: 12px;     a     {       text-decoration: none;       &:hover {         border-width: 1px       }     }   } } Example Compiled CSS: #header h1 {   font-size: 26px;   font-weight: bold; } #header p {   font-size: 12px; } #header p a {   text-decoration: none; } #header p a:hover {   border-width: 1px; } 3 - Mixins Mixins are where languages like this really shine.  The ability to mixin other definitions setup a parametric mixin.  There is really a lot of content in this area, so I would suggest looking at http://lesscss.org for more information.  One of the things I would suggest if you do begin to use LESS is to also grab the mixins.less file from the Twitter Bootstrap project.  This file already has a bunch of predefined mixins for things like border-radius with all of the browser specific prefixes.  This alone is of great use! 
4 – Color Functions This is the last thing I wanted to point out as my final post in this series will be utilizing these functions in a more drawn out manner.  Both LESS and SASS provide functions for getting information from a color (R,G,B,H,S,L).  Using these, it is easy to define a primary color, and then darken or lighten it a little for your needs.  Example: Example LESS Code: @base-color: #111; @red:        #842210; #footer {   color: (@base-color + #003300);   border-left:  2px;   border-right: 2px;   border-color: desaturate(@red, 10%); } Example Compiled CSS: #footer {    color: #114411;    border-left:  2px;    border-right: 2px;    border-color: #7d2717; } I have found that these can be very useful and powerful when constructing a site theme. Conclusion I came across LESS and SASS when looking for the best way to implement some type of CSS variables for colors, because I hated having to do a Find and Replace in all of the files using the colors, and in some instances, you couldn’t just find/replace because of the color choices interfering with other colors (color to replace of #000, yet come colors existed like #0002bc).  So in many cases I would end up having to do a Find and manually check each one. In my next post, I am going to cover how I’ve come to set up these items and the structure for the items in the project, as well as the conventions that I have come to start using.  In the final post in the series, I will cover a neat little side project I built in LESS dealing with colors!

    Read the article

  • /etc/rc.local not being run on Ubuntu Desktop Install

    - by loosecannon
    I have been trying to get sphinx to run at boot, so I added some lines to /etc/rc.local but nothing happens when I start up. If i run it manually it works however. /etc/init.d/rc.local start works fine as does /etc/rc.local It's listed in the default runlevel and is all executable but it does not work. I am considering writing a separate init.d script to do the same thing but that's a lot of work for a simple task dumbledore:/etc/init.d# ls -l rc* -rwxr-xr-x 1 root root 8863 2009-09-07 13:58 rc -rwxr-xr-x 1 root root 801 2009-09-07 13:58 rc.local -rwxr-xr-x 1 root root 117 2009-09-07 13:58 rcS dumbledore:/etc/init.d# ls /etc/rc.local -l -rwxr-xr-x 1 root root 491 2011-05-14 16:13 /etc/rc.local dumbledore:/etc/init.d# runlevel N 2 dumbledore:/etc/init.d# ls /etc/rc2.d/ -l total 4 lrwxrwxrwx 1 root root 18 2011-04-22 18:53 K08vmware -> /etc/init.d/vmware -rw-r--r-- 1 root root 677 2011-03-28 15:10 README lrwxrwxrwx 1 root root 18 2011-04-22 18:53 S19vmware -> /etc/init.d/vmware lrwxrwxrwx 1 root root 18 2011-05-15 14:09 S20ddclient -> ../init.d/ddclient lrwxrwxrwx 1 root root 20 2011-03-10 18:00 S20fancontrol -> ../init.d/fancontrol lrwxrwxrwx 1 root root 20 2011-03-10 18:00 S20kerneloops -> ../init.d/kerneloops lrwxrwxrwx 1 root root 27 2011-03-10 18:00 S20speech-dispatcher -> ../init.d/speech-dispatcher lrwxrwxrwx 1 root root 19 2011-03-10 18:00 S25bluetooth -> ../init.d/bluetooth lrwxrwxrwx 1 root root 20 2011-03-10 18:00 S50pulseaudio -> ../init.d/pulseaudio lrwxrwxrwx 1 root root 15 2011-03-10 18:00 S50rsync -> ../init.d/rsync lrwxrwxrwx 1 root root 15 2011-03-10 18:00 S50saned -> ../init.d/saned lrwxrwxrwx 1 root root 19 2011-03-10 18:00 S70dns-clean -> ../init.d/dns-clean lrwxrwxrwx 1 root root 18 2011-03-10 18:00 S70pppd-dns -> ../init.d/pppd-dns lrwxrwxrwx 1 root root 14 2011-05-07 11:22 S75sudo -> ../init.d/sudo lrwxrwxrwx 1 root root 24 2011-03-10 18:00 S90binfmt-support -> ../init.d/binfmt-support lrwxrwxrwx 1 root root 17 2011-05-12 21:18 S91apache2 -> ../init.d/apache2 lrwxrwxrwx 1 root root 22 2011-03-10 18:00 S99acpi-support -> ../init.d/acpi-support lrwxrwxrwx 1 root root 21 2011-03-10 18:00 S99grub-common -> ../init.d/grub-common lrwxrwxrwx 1 root root 18 2011-03-10 18:00 S99ondemand -> ../init.d/ondemand lrwxrwxrwx 1 root root 18 2011-03-10 18:00 S99rc.local -> ../init.d/rc.local dumbledore:/etc/init.d# cat /etc/rc.local #!/bin/sh -e # # rc.local # # This script is executed at the end of each multiuser runlevel. # Make sure that the script will "exit 0" on success or any other # value on error. # # In order to enable or disable this script just change the execution # bits. # # By default this script does nothing. # Start sphinx daemon for rails app on startup # Added 2011-05-13 # Cannon Matthews cd /var/www/extemp /usr/bin/rake ts:config /usr/bin/rake ts:start touch ./tmp/ohyeah cd - exit 0 dumbledore:/etc/init.d# cat /etc/init.d/rc.local #! /bin/sh ### BEGIN INIT INFO # Provides: rc.local # Required-Start: $remote_fs $syslog $all # Required-Stop: # Default-Start: 2 3 4 5 # Default-Stop: # Short-Description: Run /etc/rc.local if it exist ### END INIT INFO PATH=/sbin:/usr/sbin:/bin:/usr/bin . /lib/init/vars.sh . /lib/lsb/init-functions do_start() { if [ -x /etc/rc.local ]; then [ "$VERBOSE" != no ] && log_begin_msg "Running local boot scripts (/etc/rc.local)" /etc/rc.local ES=$? 
[ "$VERBOSE" != no ] && log_end_msg $ES return $ES fi } case "$1" in start) do_start ;; restart|reload|force-reload) echo "Error: argument '$1' not supported" >&2 exit 3 ;; stop) ;; *) echo "Usage: $0 start|stop" >&2 exit 3 ;; esac
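    One low-risk way to find out why the rake tasks fail under rc.local but work interactively is to capture their output, since rc.local runs as root with a minimal environment (PATH, HOME and gem paths often differ). A sketch of the relevant lines, reusing the paths above; the RAILS_ENV and log file name are assumptions:

        # /etc/rc.local excerpt: log everything so boot-time failures become visible
        cd /var/www/extemp
        /usr/bin/rake ts:config RAILS_ENV=production >> /var/log/rc.local-sphinx.log 2>&1
        /usr/bin/rake ts:start  RAILS_ENV=production >> /var/log/rc.local-sphinx.log 2>&1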

    Read the article

  • Squid not caching files (Randomly)

    - by Heinrich
    I want to use an intercepting squid server to cache specific large zip files that users in my network download frequently. I have configured squid on a gateway machine and caching is working for "static" zip files that are served from an Apache web server outside our network. The files that I want to have cached by squid are zip files 100MB which are served from a heroku-hosted Rails application. I set an ETag header (SHA hash of the zip file on the server) and Cache-Control: public header. However, these files are not cached by squid. This, for example, is a request that is not cached: $ curl --no-keepalive -v -o test.zip --header "X-Access-Key: 20767ed397afdea90601fda4513ceb042fe6ab4e51578da63d3bc9b024ed538a" --header "X-Customer: 5" "http://MY_APP.herokuapp.com/api/device/v1/media/download?version=latest" * Adding handle: conn: 0x7ffd4a804400 * Adding handle: send: 0 * Adding handle: recv: 0 ... > GET /api/device/v1/media/download?version=latest HTTP/1.1 > User-Agent: curl/7.30.0 > Host: MY_APP.herokuapp.com > Accept: */* > X-Access-Key: 20767ed397afdea90601fda4513ceb042fe6ab4e51578da63d3bc9b024ed538a > X-Customer: 5 > 0 0 0 0 0 0 0 0 --:--:-- 0:00:09 --:--:-- 0< HTTP/1.1 200 OK * Server Cowboy is not blacklisted < Server: Cowboy < Date: Mon, 18 Aug 2014 14:13:27 GMT < Status: 200 OK < X-Frame-Options: SAMEORIGIN < X-Xss-Protection: 1; mode=block < X-Content-Type-Options: nosniff < ETag: "95e888938c0d539b8dd74139beace67f" < Content-Disposition: attachment; filename="e7cce850ae728b81fe3f315d21a560af.zip" < Content-Transfer-Encoding: binary < Content-Length: 125727431 < Content-Type: application/zip < Cache-Control: public < X-Request-Id: 7ce6edb0-013a-4003-a331-94d2b8fae8ad < X-Runtime: 1.244251 < X-Cache: MISS from AAA.fritz.box < Via: 1.1 vegur, 1.1 AAA.fritz.box (squid/3.3.11) < Connection: keep-alive In the logs squid is reporting a TCP_MISS. This is the relevant excerpt from my squid file: # Squid normally listens to port 3128 http_port 3128 http_port 3129 intercept # Uncomment and adjust the following to add a disk cache directory. maximum_object_size 1000 MB maximum_object_size_in_memory 1000 MB cache_dir ufs /usr/local/var/cache/squid 10000 16 256 cache_mem 2000 MB # Leave coredumps in the first cache dir coredump_dir /usr/local/var/cache/squid cache_store_log daemon:/usr/local/var/logs/cache_store.log #refresh_pattern -i (/cgi-bin/|\?) 0 0% 0 refresh_pattern -i .(zip) 525600 100% 525600 override-expire ignore-no-cache ignore-no-store refresh_pattern . 0 20% 4320 ## DNS Configuration dns_nameservers 8.8.8.8 8.8.4.4 After trying around for some time I realized that squid is sometimes deciding that my file is cacheable, sometimes not, depending on whether and when I enable/disable the dns_nameservers directive. What could be wrong here?
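    One hedged guess: the download URL above ends in .../media/download?version=latest and contains no ".zip" at all, so a refresh_pattern keyed on the zip extension can never match it, and the request falls through to the default pattern. A sketch that matches the path instead (the pattern and TTLs are assumptions to test, not a known fix):

        # squid.conf sketch: key the rule on the heroku download path, not a file extension
        refresh_pattern -i /api/device/v1/media/download 525600 100% 525600 override-expire
        refresh_pattern . 0 20% 4320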

    Read the article

  • Why is Git telling me "Your branch is ahead of 'origin/master' by 11 commits." and how do I get it to go away?

    - by spilth
    I'm a Git newbie. I recently moved a Rails project from Subversion to Git. I followed the tutorial here: http://www.simplisticcomplexity.com/2008/03/05/cleanly-migrate-your-subversion-repository-to-a-git-repository/ I am also using unfuddle.com to store my code. I make changes on my Mac laptop on the train to/from work and then push them to unfuddle when I have a network connection using the following command: git push unfuddle master I use Capistrano for deployments and pull code from the unfuddle repository using the master branch. Lately I've noticed the following message when I run "git status" on my laptop: # On branch master # Your branch is ahead of 'origin/master' by 11 commits. # nothing to commit (working directory clean) And I'm confused as to why. I thought my laptop was the origin... but don't know if either the fact that I originally pulled from Subversion or push to Unfuddle is what's causing the message to show up. How can I: Find out where Git thinks 'origin/master' is? If it's somewhere else, how do I turn my laptop into the 'origin/master'? Get this message to go away. It makes me think Git is unhappy about something. My mac is running Git version 1.6.0.1. When I run git remote show origin as suggested by dbr, I get the following: ~/Projects/GeekFor/geekfor 10:47 AM $ git remote show origin fatal: '/Users/brian/Projects/GeekFor/gf/.git': unable to chdir or not a git archive fatal: The remote end hung up unexpectedly When I run git remote -v as suggested by Aristotle Pagaltzis, I get the following: ~/Projects/GeekFor/geekfor 10:33 AM $ git remote -v origin /Users/brian/Projects/GeekFor/gf/.git unfuddle [email protected]:spilth/geekfor.git Now, interestingly, I'm working on my project in the geekfor directory but it says my origin is my local machine in the gf directory. I believe gf was the temporary directory I used when converting my project from Subversion to Git and probably where I pushed to unfuddle from. Then I believe I checked out a fresh copy from unfuddle to the geekfor directory. So it looks like I should follow dbr's advice and do: git remote rm origin git remote add origin [email protected]:spilth/geekfor.git

    Read the article

  • Howto - Running Redmine on mongrel as a service on windows

    - by Achilles
    I use Redmine on Mongrel as a project manager and I use a batch file (start-redmine.bat) to start the redmine in mongrel. There are 2 issues with my setup: 1. I have a running IIS on the server that occupies the HTTP port (80) 2. The start-redmine.bat must be periodically checked to see if it's stopped after a restart that is caused by windows update service. for the first issue, I have no choice but running mongrel on a port like 3000 and for the second issue I have to create a windows service that runs automatically in the background when the windows starts; and here comes the trouble! There are at least 3 ways to run redmine as a service that I'm aware of; none of them can satisfy a performance view on this subject. you may read about them on http://stackoverflow.com/questions/877943/how-to-configure-a-rails-app-redmine-to-run-as-a-service-on-windows I tried them all. The easiest way to setup such a service is using mongrel_service approach; in 3 lines of command you're done. but the performance is significantly lower than running that batch file... Now, I wanna show you my approach: First suppose we have ruby installed into c:\ruby and we have issued the command gem install mongrel to get the mongrel gem installed into c:\ruby\bin Also, suppose we have installed the Redmine into a folder like c:\redmine; and we have ruby's path (i.e. c:\ruby\bin) in our PATH environment variable. Now Download and install the Windows NT Resource Kit Tools from microsoft website. Open the command-line tool that comes with the Resource Kit (from start menu). Use instsrv to install a dummy service called Redmine using the following command: "[path-to-instsrv.exe]\instsrv" Redmine "[path-to-srvany.exe]\srvany.exe" in my case (which is the default case) it was something like this: "C:\Program Files\Windows Resource Kits\Tools\instsrv" Redmine "C:\Program Files\Windows Resource Kits\Tools\srvany.exe" Now create the batch file. Open a notepad and paste these instructions into it and then save it as "c:\redmine\start-redmine.bat" @echo off cd c:\redmine\ mongrel_rails start -a 0.0.0.0 -p 3000 -e production Now we need to configure that dummy service we had created before. WATCH OUT WHAT YOU'RE DOING FROM HERE ON, OR YOU MAY CORRUPT YOUR WINDOWS. To configure that service, open windows registry editor (Start - Run - regedit) and navigate to this node: HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Redmine Right-Click on "Redmine" node and using the context menu, create a new key called Parameters (New - Key) Right-Click on "Parameters" and create a String Value property called Application. Do this again and create another String Value called AppParameters. Now Double-click on "Application" and put cmd.exe into "Value data" section. Then Double-click on "AppParameters" and put /C "C:\redmine\start-redmine.bat" into Value data section. We're done! issue this command to run the redmine on mongrel as a service: net start Redmine
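    The registry steps can also be scripted, which avoids hand-editing in regedit and makes the setup repeatable on another server. A sketch using the stock reg.exe tool with the same service name, values and paths described above:

        REM create the Parameters key and the two string values read by srvany.exe
        reg add "HKLM\SYSTEM\CurrentControlSet\Services\Redmine\Parameters" /v Application /t REG_SZ /d "cmd.exe"
        reg add "HKLM\SYSTEM\CurrentControlSet\Services\Redmine\Parameters" /v AppParameters /t REG_SZ /d "/C \"C:\redmine\start-redmine.bat\""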

    Read the article

  • Failing to install activerecord-jdbcmysql-adapter gem

    - by Phil Sturgeon
    I am trying to follow the basic "Create a blog in 20 minutes" Rails screencast but have hit a stumbling block already. When I try to rake db:migrate I get errors about the gem activerecord-jdbcmysql-adapter not being installed. When I try to install it, I am told it doesn't exist. If I try to simply gem install mysql I get all sorts of madness appearing. I am running this on Mac OS X 10.6.2 and my installation was all done through gem. My basic setup works (Hello world!). Here is the error log: $ rake db:migrate (in /Users/xxxx/Sites/blog) rake aborted! Please install the jdbcmysql adapter: gem install activerecord-jdbcmysql-adapter (no such file to load -- active_record/connection_adapters/jdbcmysql_adapter) (See full trace by running task with --trace) $ sudo gem install activerecord-jdbcmysql-adapter ERROR: could not find gem activerecord-jdbcmysql-adapter locally or in a repository $ sudo gem install mysql Password: Building native extensions. This could take a while... ERROR: Error installing mysql: ERROR: Failed to build gem native extension. /opt/local/bin/ruby extconf.rb checking for mysql_query() in -lmysqlclient... no checking for main() in -lm... yes checking for mysql_query() in -lmysqlclient... no checking for main() in -lz... yes checking for mysql_query() in -lmysqlclient... no checking for main() in -lsocket... no checking for mysql_query() in -lmysqlclient... no checking for main() in -lnsl... no checking for mysql_query() in -lmysqlclient... no checking for main() in -lmygcc... no checking for mysql_query() in -lmysqlclient... no * extconf.rb failed * Could not create Makefile due to some reason, probably lack of necessary libraries and/or headers. Check the mkmf.log file for more details. You may need configuration options. Provided configuration options: --with-opt-dir --without-opt-dir --with-opt-include --without-opt-include=${opt-dir}/include --with-opt-lib --without-opt-lib=${opt-dir}/lib --with-make-prog --without-make-prog --srcdir=. --curdir --ruby=/opt/local/bin/ruby --with-mysql-config --without-mysql-config --with-mysql-dir --without-mysql-dir --with-mysql-include --without-mysql-include=${mysql-dir}/include --with-mysql-lib --without-mysql-lib=${mysql-dir}/lib --with-mysqlclientlib --without-mysqlclientlib --with-mlib --without-mlib --with-mysqlclientlib --without-mysqlclientlib --with-zlib --without-zlib --with-mysqlclientlib --without-mysqlclientlib --with-socketlib --without-socketlib --with-mysqlclientlib --without-mysqlclientlib --with-nsllib --without-nsllib --with-mysqlclientlib --without-mysqlclientlib --with-mygcclib --without-mygcclib --with-mysqlclientlib --without-mysqlclientlib Gem files will remain installed in /opt/local/lib/ruby/gems/1.8/gems/mysql-2.8.1 for inspection. Results logged to /opt/local/lib/ruby/gems/1.8/gems/mysql-2.8.1/ext/mysql_api/gem_make.out
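    Two things stand out in that log. First, the "Please install the jdbcmysql adapter" message appears when config/database.yml names the jdbcmysql adapter, which only exists under JRuby; on a plain MRI install the adapter should be mysql. Second, the native mysql gem fails to build because it cannot find the MySQL client libraries, which is what the --with-mysql-config option listed in the output is for. A sketch of both fixes; the database name and mysql_config path are assumptions:

        # config/database.yml (MRI Ruby; jdbcmysql is JRuby-only)
        development:
          adapter: mysql
          database: blog_development
          username: root
          password:

        # shell: point the gem build at your MySQL installation's mysql_config
        sudo gem install mysql -- --with-mysql-config=/usr/local/mysql/bin/mysql_config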

    Read the article

  • Am I "wasting" my time learning C and other low-level stuff?

    - by Andreas Grech
    I have just recently started learning C and the reason I did that was because frankly, I consider myself to be of a "less-developer" than the people who know and work with C. Thus I planned to start learning ASM, C, C++ and bought the K&R book and started pushing myself to learn the C Programming Language and up till now I'm doing great...learning about arrays the low level way (ie the pointer + offset thing), pointers and all that and obviously asking questions on stackoverflow for guidance. My problem is that sometimes I get thinking if instead of learning this low level stuff, maybe I should maybe spend more time learning newer, more widely used technologies...basically, more web stuff. Now I am well versed with both C# and ASP.Net and currently that's what I do for a living, but still there exists Microsoft technologies that I haven't quite touched upon...such as ASP.Net MVC, The Entity Framework etc... And those are only Microsoft Technologies...obviously there are other stuff that I would like to touch upon...stuff like Ruby, which would lead me to Ruby on Rails, or Python for Django or even Java and J2EE, or maybe even PHP; ie, basically mainly Web Stuff. Mind you, I did touch upon some of the stuff I mentioned earlier on, such as PHP and Java but I am still not quite versed in them as I am in C# and ASP.Net...but still, I think that by learning other languages that are used in the web environment will broaden my horizons...both as a developer who loves learning, and also Career wise. My point is, am I really using up my time correctly by learning older, lower level stuff? Stuff that for my current line of work, will most probably never use, but still is interesting to know ? To be frankly honest, I am also learning C so that I could, maybe someday, get into Electronics and Micro-controller programming but that is a whole new world for me and, if I choose to go there, will take some time to get adjusted to. And even then, I don't know if I can get a career in working in that line of work. ...but I still wonder about this question over and over...Am I doing the right thing by learning C instead of something (Web-stuff) that will most probably be more useful for me career-wise? I'm sorry for such asking such a long and most probably a boring question, but I feel as if this is the only place where I can ask such a question and get an honest answer from experts in the field. Thank you for your time.

    Read the article

  • How to build, sort and print a tree of a sort?

    - by Tuplanolla
    This is more of an algorithmic dilemma than a language-specific problem, but since I'm currently using Ruby I'll tag this as such. I've already spent over 20 hours on this and I would've never believed it if someone told me writing a LaTeX parser was a walk in the park in comparison. I have a loop to read hierarchies (that are prefixed with \m) from different files art.tex: \m{Art} graphical.tex: \m{Art}{Graphical} me.tex: \m{About}{Me} music.tex: \m{Art}{Music} notes.tex: \m{Art}{Music}{Sheet Music} site.tex: \m{About}{Site} something.tex: \m{Something} whatever.tex: \m{Something}{That}{Does Not}{Matter} and I need to sort them alphabetically and print them out as a tree About Me (me.tex) Site (site.tex) Art (art.tex) Graphical (graphical.tex) Music (music.tex) Sheet Music (notes.tex) Something (something.tex) That Does Not Matter (whatever.tex) in (X)HTML <ul> <li>About</li> <ul> <li><a href="me.tex">Me</a></li> <li><a href="site.tex">Site</a></li> </ul> <li><a href="art.tex">Art</a></li> <ul> <li><a href="graphical.tex">Graphical</a></li> <li><a href="music.tex">Music</a></li> <ul> <li><a href="notes.tex">Sheet Music</a></li> </ul> </ul> <li><a href="something.tex">Something</a></li> <ul> <li>That</li> <ul> <li>Doesn't</li> <ul> <li><a href="whatever.tex">Matter</a></li> </ul> </ul> </ul> </ul> using Ruby without Rails, which means that at least Array.sort and Dir.glob are available. All of my attempts were formed like this (as this part should work just fine). def fss_brace_array(ss_input)#a concise version of another function; converts {1}{2}...{n} into an array [1, 2, ..., n] or returns an empty array ss_output = ss_input[1].scan(%r{\{(.*?)\}}) rescue ss_output = [] ensure return ss_output end #define tree s_handle = File.join(:content.to_s, "*") Dir.glob("#{s_handle}.tex").each do |s_handle| File.open(s_handle, "r") do |f_handle| while s_line = f_handle.gets if s_all = s_line.match(%r{\\m\{(\{.*?\})+\}}) s_all = s_all.to_a #do something with tree, fss_brace_array(s_all) and s_handle break end end end end #do something else with tree
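    One way to attack the build/sort/print part is to load every path into a nested Hash, attach each file to the deepest label of its own path, and then walk the hash recursively. The sketch below is self-contained, so it hard-codes the sample paths instead of calling fss_brace_array and the glob loop; wiring those in is the only change needed:

        # Minimal sketch: nested-hash tree, sorted siblings, nested <ul> output.
        entries = {
          "art.tex"       => ["Art"],
          "graphical.tex" => ["Art", "Graphical"],
          "me.tex"        => ["About", "Me"],
          "music.tex"     => ["Art", "Music"],
          "notes.tex"     => ["Art", "Music", "Sheet Music"],
          "site.tex"      => ["About", "Site"],
          "something.tex" => ["Something"],
          "whatever.tex"  => ["Something", "That", "Does Not", "Matter"]
        }

        tree = {}
        entries.each do |file, path|
          children = tree
          node = nil
          path.each do |label|
            node = (children[label] ||= { :file => nil, :children => {} })
            children = node[:children]
          end
          node[:file] = file   # the file belongs to the last label of its path
        end

        # Recursively emit the nested list; labels without a file get no link.
        def print_html(children, indent = "")
          return if children.empty?
          puts "#{indent}<ul>"
          children.sort.each do |label, node|
            text = node[:file] ? "<a href=\"#{node[:file]}\">#{label}</a>" : label
            puts "#{indent}  <li>#{text}</li>"
            print_html(node[:children], indent + "  ")
          end
          puts "#{indent}</ul>"
        end

        print_html(tree)

    Hash#sort orders siblings alphabetically by label, which gives the About / Art / Something ordering shown above; swapping print_html for a plain-text printer is just a matter of emitting indentation instead of ul/li tags.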

    Read the article

  • Unable to add SSH pub key for GitHub

    - by Kaushik
    I am trying to set up a new GitHub account as part of learning ruby on rails. My OS is windows. I am having the following problem while trying to add my public key to the GitHub SSH public keys. When I put the key in the text area, supply a name, and click 'Add Key', I am taken to the Job profile page without any confirmation that the key has been added.(I am using SSH client GUI to generate RSA keys). When I come back to the SSH public keys page, I see that the key is not there. I tried this multiple times...without any result. Just as a test, I tried to ssh to the GitHub account using 'ssh -v [email protected]' and here is the verbose output: OpenSSH_4.6p1, OpenSSL 0.9.8e 23 Feb 2007 debug1: Connecting to github.com [207.97.227.239] port 22. debug1: Connection established. debug1: identity file /c/Users/Administrator/.ssh/identity type -1 debug1: identity file /c/Users/Administrator/.ssh/id_rsa type -1 debug1: identity file /c/Users/Administrator/.ssh/id_dsa type -1 debug1: Remote protocol version 2.0, remote software version OpenSSH_5.1p1 Debian-5github2 debug1: match: OpenSSH_5.1p1 Debian-5github2 pat OpenSSH* debug1: Enabling compatibility mode for protocol 2.0 debug1: Local version string SSH-2.0-OpenSSH_4.6 debug1: SSH2_MSG_KEXINIT sent debug1: SSH2_MSG_KEXINIT received debug1: kex: server->client aes128-cbc hmac-md5 none debug1: kex: client->server aes128-cbc hmac-md5 none debug1: SSH2_MSG_KEX_DH_GEX_REQUEST(1024<1024<8192) sent debug1: expecting SSH2_MSG_KEX_DH_GEX_GROUP debug1: SSH2_MSG_KEX_DH_GEX_INIT sent debug1: expecting SSH2_MSG_KEX_DH_GEX_REPLY debug1: Host 'github.com' is known and matches the RSA host key. debug1: Found key in /c/Users/Administrator/.ssh/known_hosts:1 debug1: ssh_rsa_verify: signature correct debug1: SSH2_MSG_NEWKEYS sent debug1: expecting SSH2_MSG_NEWKEYS debug1: SSH2_MSG_NEWKEYS received debug1: SSH2_MSG_SERVICE_REQUEST sent debug1: SSH2_MSG_SERVICE_ACCEPT received debug1: Authentications that can continue: publickey debug1: Next authentication method: publickey debug1: Trying private key: /c/Users/Administrator/.ssh/identity debug1: Trying private key: /c/Users/Administrator/.ssh/id_rsa debug1: Trying private key: /c/Users/Administrator/.ssh/id_dsa debug1: No more authentication methods to try. Permission denied (publickey). Also, why is it looking for the keys in c/Users/Administrator/.ssh/ Is there a possibility of changing this default location? EDIT: With Mozila, I get error msg: Oops! The key has already been taken. The key is invalid. It must begin with 'ssh-rsa' or 'ssh-dss'. My key looks like: ---- BEGIN SSH2 PUBLIC KEY ---- Comment: "[2048-bit rsa, Administrator@Kaushik-THINK, Sun Jan 02 2011 \ 02:40:03]" AAAAB3NzaC1yc2EAAAADAQABAAABAQDoA5xqJozKmAHTGh9hgmagsFOl2hdPzS8ZPV9Ta1 xU0JiUnHef38Rvz/5oqL1wW7SjmZbc/+tCPOfU1lg3UisFXajJhek2bjJ2qsKd4Sjax2Nj ZMYD7djPb8rokUYSKW3bdYyJHtJH+murz04UGdCcZ8HqkMTzqlh3zAIK7SIlCy+mtAi5A/ sm0JbqlNGHB6YrL1aWIcOSolIx2HWt8cWhlK77guT9dPgd0HT59Gn0uhO7UWGLrNdJut0x ian3HdvNYMUXoO/SkNlxvWRgZ1UOhaB+qf4hw5RCGcBbqP3fM4LKpurHZx4wEpgmM0e4EM +2PYdf46uxChNdBl7J5sZF ---- END SSH2 PUBLIC KEY ---- I still can't see the key..so not sure how it says it is already taken.
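    The second error message is the real clue: the key block shown is in the RFC 4716 / SSH2 format that some Windows SSH GUIs export ("---- BEGIN SSH2 PUBLIC KEY ----"), while GitHub only accepts the single-line OpenSSH format that starts with ssh-rsa. ssh-keygen can convert it, and an ~/.ssh/config entry answers the default-location question; file names below are assumptions:

        # convert the SSH2-format public key to the OpenSSH format GitHub expects
        ssh-keygen -i -f mykey.pub > id_rsa_github.pub

        # ~/.ssh/config: point ssh at a key pair stored outside the default location
        Host github.com
            User git
            IdentityFile /c/Users/Administrator/keys/id_rsa_github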

    Read the article

  • Logging in with WebFinger and OpenID

    - by Ryan
    I would like to apologize in advance for the ugly formatting. In order to talk about the problem, I need to be posting a bunch of URLs, but the excessive URLs and my lack of reputation makes StackOverflow think I could be a spammer. Any instance of 'ht~tp' is supposed to be 'http'. '{dot}' is supposed to be '.' and '{colon}' is supposed to be ':'. Also, my lack of reputation has prevented me from tagging my question with 'webfinger' and 'google-profiles'. Onto my question: I am messing around with WebFinger and trying to create a small rails app that enables a user to log in using nothing but their WebFinger account. I can succesfully finger myself, and I get back an XRD file with the following snippet: Link rel="ht~tp://specs{dot}openid{dot}net/auth/2.0/provider" href="ht~tp://www{dot}google{dot}com/profiles/{redacted}"/ Which, to me, reads, "I have an OpenID 2.0 login at the url: ht~tp://www{dot}google{dot}com/profiles/{redacted}". But when I try to use that URL to log in, I get the following error OpenID::DiscoveryFailure (Failed to fetch identity URL ht~tp://www{dot}google{dot}com/profiles/{redacted} : Error encountered in redirect from ht~tp://www{dot}google{dot}com/profiles/{redacted}: Error fetching /profiles/{Redacted}: Connection refused - connect(2)): When I replace the profile URL with 'ht~tps://www{dot}google{dot}com/accounts/o8/id', the login works perfectly. here is the code that I am using (I'm using RedFinger as a plugin, and JanRain's ruby-openid, installed without the gem) require "openid" require 'openid/store/filesystem.rb' class SessionsController < ApplicationController def new @session = Session.new #render a textbox requesting a webfinger address, and a submit button end def create ####################### # # Pay Attention to this section right here # ####################### #use given webfinger address to retrieve openid login finger = Redfinger.finger(params[:session][:webfinger_address]) openid_url = finger.open_id.first.to_s #openid_url is now: ht~tp://www{dot}google{dot}com/profiles/{redacted} #Get needed info about the acquired OpenID login file_store = OpenID::Store::Filesystem.new("./noncedir/") consumer = OpenID::Consumer.new(session,file_store) response = consumer.begin(openid_url) #ERROR HAPPENS HERE #send user to OpenID login for verification redirect_to response.redirect_url('ht~tp://localhost{colon}3000/','ht~tp://localhost{colon}3000/sessions/complete') end def complete #interpret return parameters file_store = OpenID::Store::Filesystem.new("./noncedir/") consumer = OpenID::Consumer.new(session,file_store) response = consumer.complete params case response.status when OpenID::SUCCESS session[:openid] = response.identity_url #redirect somehwere here end end end Is it possible for me to use the URL I received from my WebFinger to log in with OpenID?

    Read the article

  • Seeding repository Rhino Mocks

    - by ahsteele
    I am embarking upon my first journey of test driven development in C#. To get started I'm using MSTest and Rhino.Mocks. I am attempting to write my first unit tests against my ICustomerRepository. It seems tedious to new up a Customer for each test method. In ruby-on-rails I'd create a seed file and load the customer for each test. It seems logical that I could put this boiler plate Customer into a property of the test class but then I would run the risk of it being modified. What are my options for simplifying this code? [TestMethod] public class CustomerTests : TestClassBase { [TestMethod] public void CanGetCustomerById() { // arrange var customer = new Customer() { CustId = 5, DifId = "55", CustLookupName = "The Dude", LoginList = new[] { new Login { LoginCustId = 5, LoginName = "tdude" } } }; var repository = Stub<ICustomerRepository>(); // act repository.Stub(rep => rep.GetById(5)).Return(customer); // assert Assert.AreEqual(customer, repository.GetById(5)); } [TestMethod] public void CanGetCustomerByDifId() { // arrange var customer = new Customer() { CustId = 5, DifId = "55", CustLookupName = "The Dude", LoginList = new[] { new Login { LoginCustId = 5, LoginName = "tdude" } } }; var repository = Stub<ICustomerRepository>(); // act repository.Stub(rep => rep.GetCustomerByDifID("55")).Return(customer); // assert Assert.AreEqual(customer, repository.GetCustomerByDifID("55")); } [TestMethod] public void CanGetCustomerByLogin() { // arrange var customer = new Customer() { CustId = 5, DifId = "55", CustLookupName = "The Dude", LoginList = new[] { new Login { LoginCustId = 5, LoginName = "tdude" } } }; var repository = Stub<ICustomerRepository>(); // act repository.Stub(rep => rep.GetCustomerByLogin("tdude")).Return(customer); // assert Assert.AreEqual(customer, repository.GetCustomerByLogin("tdude")); } } Test Base Class public class TestClassBase { protected T Stub<T>() where T : class { return MockRepository.GenerateStub<T>(); } } ICustomerRepository and IRepository public interface ICustomerRepository : IRepository<Customer> { IList<Customer> FindCustomers(string q); Customer GetCustomerByDifID(string difId); Customer GetCustomerByLogin(string loginName); } public interface IRepository<T> { void Save(T entity); void Save(List<T> entity); bool Save(T entity, out string message); void Delete(T entity); T GetById(int id); ICollection<T> FindAll(); }
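    One common way to cut the duplication without sharing mutable state across tests is a small factory method on the base class (or a [TestInitialize] method that rebuilds the object before every test). A sketch that could be added to the existing TestClassBase; the helper name is made up and the values mirror the tests above:

        // Each call returns a fresh Customer, so one test cannot corrupt another's data.
        protected Customer CreateDude()
        {
            return new Customer
            {
                CustId = 5,
                DifId = "55",
                CustLookupName = "The Dude",
                LoginList = new[] { new Login { LoginCustId = 5, LoginName = "tdude" } }
            };
        }

    Each test then starts with var customer = CreateDude(); if variations are needed later, optional parameters on the factory are safer than a shared, mutated property.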

    Read the article

  • SQL efficiency argument, add a column or solvable by query?

    - by theTurk
    I am a recent college graduate and a new hire for software development. Things have been a little slow lately so I was given a db task. My db skills are limited to pet projects with Rails and Django. So, I was a little surprised with my latest task. I have been asked by my manager to subclass Person with a 'Parent' table and add a reference to their custodian in the Person table. This is to facilitate going from Parent to Form when the custodian, not the Parent, is the FormContact. Here is a simplified, mock structure of a sql-db I am working with. I would have drawn the relationship tables if I had access to Visio. We have a table 'Person' and we have a table 'Form'. There is a table, 'FormContact', that relates a Person to a Form, not all Persons are related to a Form. There is a relationship table for Person to Person relationships (Employer, Parent, etc.) I've asked, "Why this couldn't be handled by a query?" Response, Inefficient. (Really!?!) So, I ask, "Why not have a reference to the Form? That would be more efficient since you wouldn't be querying the FormContacts table with the reference from child/custodian." Response, this would essentially make the Parent is a FormContact. (Fair enough.) I went ahead an wrote a query to get from non-FormContact Parent to Form, and tested on the production server. The response time was instantaneous. *SOME_VALUE* is the Parent's fk ID. SELECT FormID FROM FormContact WHERE FormContact.ContactID IN (SELECT SourceContactID FROM ContactRelationship WHERE (ContactRelationship.RelatedContactID = *SOME_VALUE*) AND (ContactRelationship.Relationship = 'Parent')); If I am right, "This is an unnecessary change." What should I do, defend my position or should I concede to the managers request? If I am wrong. What is my error? Is there a better solution than the manager's?
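    For what it is worth, the nested IN can also be written as a plain join, which most optimizers plan the same way, so the "inefficient" objection is easy to test side by side. A sketch against the tables named in the query (the @ParentId parameter syntax is an assumption):

        -- Same lookup as the IN-subquery version: parent -> relationship -> FormContact -> Form
        SELECT fc.FormID
        FROM ContactRelationship cr
        JOIN FormContact fc ON fc.ContactID = cr.SourceContactID
        WHERE cr.RelatedContactID = @ParentId      -- *SOME_VALUE* above
          AND cr.Relationship = 'Parent';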

    Read the article

  • Converting HTML TAG Object to JSON Object

    - by cooldude
    Hi, I want to convert the html tag objects to json object in the javascript in order to send them to the server from the javascript. As i have to save these objects at the Ruby on Rails server. These HTML objects is the canvas tag object and the graphics objects created using CAKE API. I have used the stringify function but it is not working. Here is my code: var CAKECanvas = new Canvas(document.body, 1000,1000); var canvas=CAKECanvas.canvas; var text=document.createElement('textarea'); text.id="text"; text.rows="100"; text.cols="200"; document.body.appendChild(text); canvas.style.borderStyle="solid"; canvas.style.borderColor="black"; var rect= new Circle(); rect.radius=100; rect.centered=true; rect.cx=Math.random() * 500; rect.cy= Math.random() * 300; rect.stroke= false; rect.fill= "red"; rect.xDir = Math.random() > 0.5?1:-1; rect.yDir = Math.random() > 0.5?1:-1; var obj=new Object; var count = 0,k; for (k in rect) { if (rect.hasOwnProperty(k)) { count++; obj[k]=rect[k]; } } alert(count); rect.addFrameListener(function(t, dt) { this.cx += this.xDir * 50 * dt/1000; this.cy += this.yDir * 50 * dt/1000; if (this.cx > 550) { this.xDir = -1; } if (this.cx < 50) { this.xDir = 1; } if (this.cy > 350) { this.yDir = -1; } if (this.cy < 50) { this.yDir = 1; } } ); CAKECanvas.append(rect); var carAsJSON = JSON.stringify(obj); /////////////////ERROR
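    A hedged guess at the failure: the for-in copy loop pulls in every own property of the CAKE Circle, including functions and references to other display objects, and JSON.stringify throws on circular structures (functions are merely dropped). Copying only primitive values is a minimal workaround sketch; which properties actually need to be sent is an assumption:

        // copy only flat data values off the shape before serialising it
        var obj = {};
        for (var k in rect) {
          if (rect.hasOwnProperty(k)) {
            var v = rect[k];
            if (typeof v === 'number' || typeof v === 'string' || typeof v === 'boolean') {
              obj[k] = v;
            }
          }
        }
        var carAsJSON = JSON.stringify(obj); // now contains cx, cy, radius, fill, etc.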

    Read the article

  • Thin & Sinatra not taking port

    - by NekoNova
    I'm having problems settig up my application using Thin and Sinatra. I have created a development-config.ru file that contains the following settings: # This is a rack configuration file to fire up the Sinatra application. # This allows better control and configuration as we are using the modular # approach here for controlling our application. # # Extend the Ruby load path with the root of the API and the lib folder # so that we can automatically include all our own custom classes. This makes # the requiring of files a bit cleaner and easier to maintain. # This is basically what rails does as well. # We also store the root of the API in the ENV settings to ensure we have # always access to the root of the API when building paths. ENV['API_ROOT'] = File.dirname(__FILE__) $:.unshift ENV['API_ROOT'] $:.unshift File.expand_path(File.join(ENV['API_ROOT'], 'lib')) $:.unshift File.expand_path(File.join(ENV['API_ROOT'], 'db')) # Now we can require all the gems used for the entire API by simpling requiring # them here. We can also include the classes that we have defined inside the lib # folder. require 'rubygems' require 'bundler' # Run Bundler to setup our gems properly. This will install all the missing gems on # the system and ensure that the deployment environment is ready to run. Bundler.require # To make the loading easier for the application, we will now automatically load all # models that have been defined inside the lib folder. This ensures that we do not need # to load them anymore anywhere else in our application, as the models will be known to # ruby everywhere. Dir.glob(File.join(ENV['API_ROOT'], 'lib', '**', '*.rb')).each{|file| require file} # Now we will configure the Sinatra application so that we can fire up the entire API. # This requires some detailed settings like whether logging is allowed, the port to be # used and some folder locations. require 'sinatra' require 'app' set :logging, true set :dump_errors, true set :port, 3001 set :views, "#{ENV['API_ROOT']}/views" set :public_folder, "#{ENV['API_ROOT']}/public" set :environment, :test # Start up the Sinatra application with all the settings that we have defined. run App.new This is based upon the information I found on the Sinatra website. However, the problem is that I cannot get the application running on port 3001. If I use thin start -R development-config.ru it runs on port 3000. If I use rackup config-development.ru it runs on port 9696. However I never see Sinatra kick in or run over port 3000. My application looks like this: # Author : Arne De Herdt # Email : # This is the actuall application that will be running under Sinatra # to serve the requests for the billing middleware API. # We use the modular approach here to allow control when deploying # the application using Capistrano. require 'sinatra/base' require 'logger' require 'savon' require 'billcrux' class App < Sinatra::Base # This action responds to POST requests on the URI '/billcrux/register' # and is responsible for handeling registration requests with the # BillCrux payment system. # The post "/billcrux/register" do # do stuff end end Can someone tell me what I am doing wrong?
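    A likely explanation, offered as a sketch rather than a definitive answer: set :port only takes effect when Sinatra boots its own server via run!; under rackup or thin the server's command-line options win, which is why you see thin's default 3000 and rackup's default port instead of 3001. Either of the following should get you 3001:

        # 1) keep the rackup file as-is and tell the server which port to use
        thin start -R development-config.ru -p 3001
        # or: rackup -p 3001 development-config.ru

        # 2) let Sinatra start its own server from the class, so the settings apply
        class App < Sinatra::Base
          set :port, 3001
          # ... routes ...
          run! if app_file == $0
        end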

    Read the article

  • java web application best practices

    - by Bruce
    Hi all. I'm trying to figure out the optimum way to develop and release a fairly simple web application, and I'm running into several problems. I'll outline the decisions I've made, because somewhere I've clearly gone off the rails. Hugely grateful for any help! I have what I think is a fairly simple web application. It contains a couple of JSPs that reference a couple of Java beans, and the usual static HTML, JS, CSS and images.

    Decision 1) I wanted to have a clear and clean release procedure, such that I could develop on my local machine and then release reliably to a production machine. I therefore made the decision to package the application into a war file (including all the static resources), to minimize the separate bits and pieces I would need to release. So far so good?

    Decision 2) I wanted things on my local machine to be as similar as possible to the production environment. So in my HTML, for example, I may have a reference to a static file such as http://static.foo.com/file. To keep this code working seamlessly on dev and prod, I decided to put static.foo.com in my /etc/hosts when developing locally, so that all the URLs work correctly without changing anything.

    Decision 3) I decided to use Eclipse and Maven to give me a best-practice environment for administering and building my project.

    So I have a nice tight setup now, except that every time I want to change anything in development, like one line in an HTML file, I have to rebuild the entire project and then wait for Tomcat to load the war before I can see if it's what I wanted. So my questions are:

    1) Is there a way to connect up Eclipse and Tomcat so that I don't have to rebuild the war each time? i.e. so Tomcat is looking straight at my actual workspace to serve up the static files?

    2) I think I'm maybe making things harder by using /etc/hosts to reflect production URLs - is there a better way that doesn't involve manually changing over URLs? (Relative URLs are fine of course, but where you have many subdomains, say one for static files and one for dynamic, you have to write out the full path, surely?)

    3) Is this really best practice? How do people set things up so that they balance the requirement for an automated, all-encompassing build process on the one hand, and the speed and flexibility to be able to develop JavaScript, HTML and CSS quickly, as quickly as if one just pointed Apache at the directory and developed live? What do people find works? Many thanks!
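    One common answer to question 1, sketched here only as an illustration (the paths are assumptions, and IDE integrations such as Eclipse WTP achieve the same result): declare a Tomcat context whose docBase points straight at the exploded webapp directory in the workspace, so static files are served from where you edit them and no war rebuild is needed for HTML/CSS/JS changes.

        <!-- Hypothetical context descriptor, e.g. conf/Catalina/localhost/myapp.xml;
             the docBase below is a placeholder for the workspace webapp folder. -->
        <Context docBase="/home/bruce/workspace/myapp/src/main/webapp"
                 reloadable="true" />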

    Read the article

  • go programming POST FormValue can't be printed

    - by poor_programmer
    Before I begin, a bit of background: I am very new to the Go programming language. I am running Go on Windows 7, with the latest Go package installer for Windows. I'm not good at coding, but I do like the challenge of learning a new language. I wanted to start learning Erlang but found Go very interesting based on the Go I/O videos on YouTube. I'm having a problem with capturing POST form values in Go. I spent three hours yesterday trying to get Go to print a POST form value in the browser and failed miserably. I don't know what I'm doing wrong - can anyone point me in the right direction? I can easily do this in other languages like C#, PHP, VB, ASP, Rails, etc. I have searched the entire interweb and haven't found a working sample. Below is my sample code.

    Here is the index.html page:

        {{ define "title" }}Homepage{{ end }}
        {{ define "content" }}
        <h1>My Homepage</h1>
        <p>Hello, and welcome to my homepage!</p>
        <form method="POST" action="/">
          <p> Enter your name : <input type="text" name="username"> </p>
          <p> <button>Go</button> </p>
        </form>
        <br /><br />
        {{ end }}

    Here is the base page:

        <!DOCTYPE html>
        <html lang="en">
        <head>
          <title>{{ template "title" . }}</title>
        </head>
        <body>
          <section id="contents">
            {{ template "content" . }}
          </section>
          <footer id="footer">
            My homepage 2012 copy
          </footer>
        </body>
        </html>

    Now some Go code:

        package main

        import (
          "fmt"
          "http"
          "strings"
          "html/template"
        )

        var index = template.Must(template.ParseFiles(
          "templates/_base.html",
          "templates/index.html",
        ))

        func GeneralHandler(w http.ResponseWriter, r *http.Request) {
          index.Execute(w, nil)
          if r.Method == "POST" {
            a := r.FormValue("username")
            fmt.Fprintf(w, "hi %s!", a) // <-- this variable does not get rendered in the browser!!!
          }
        }

        func helloHandler(w http.ResponseWriter, r *http.Request) {
          remPartOfURL := r.URL.Path[len("/hello/"):]
          fmt.Fprintf(w, "Hello %s!", remPartOfURL)
        }

        func main() {
          http.HandleFunc("/", GeneralHandler)
          http.HandleFunc("/hello/", helloHandler)
          http.ListenAndServe("localhost:81", nil)
        }

    Thanks! PS: It is very tedious to add four spaces before every line of code on Stack Overflow, especially when you are copy-pasting. I didn't find it very user friendly - or is there an easier way?
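    Two things stand out in the handler: it renders the template before it ever looks at the form, so the extra Fprintf output ends up appended after the complete HTML document where it is easy to miss, and with Go 1 the package is imported as net/http rather than http. A minimal sketch of one way to make the posted value show up - assuming the content template gains a {{ .Username }} placeholder; the file names are kept from the question:

        package main

        import (
          "html/template"
          "net/http"
        )

        var index = template.Must(template.ParseFiles(
          "templates/_base.html",
          "templates/index.html",
        ))

        func generalHandler(w http.ResponseWriter, r *http.Request) {
          data := struct{ Username string }{}
          if r.Method == "POST" {
            // FormValue parses the form as needed and returns "" when absent.
            data.Username = r.FormValue("username")
          }
          // Render exactly once, handing the value to the template,
          // which can print it with {{ .Username }}.
          index.Execute(w, data)
        }

        func main() {
          http.HandleFunc("/", generalHandler)
          http.ListenAndServe("localhost:81", nil)
        }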

    Read the article

  • What web platform is right for me?

    - by egervari
    I've been looking at web frameworks like Rails, Grails, etc. I'm used to doing applications in Spring Framework with Hibernate... and I want something more productive. One of the things I realized is that while some of the things in Grails are sexy, there are some serious problems with it. Grails' controllers:

    1) are implemented awfully. They don't seem to be able to extend from super classes at runtime. I tried this to add base actions and helper methods, and this seems to cause Grails to blow up.
    2) are based on an obsolete request-parameters model (rather than form-backing objects, which are much nicer).
    3) are hard to test. Command objects are treated totally differently... and it's actually MUCH harder to write the test than it is to write the controller code.
    4) Command objects operate totally differently. They are pre-validated and bound, which causes a lot of inconsistencies compared with the basic parameter model.
    5) Command objects are not reusable, and it's a pain in the rear to reuse most of the stuff from the domain classes, like constraints and fields. This is TRIVIAL to do in basic Spring. Why the hell was it not trivial to do in Grails?
    6) The scaffolding that is generated is pure crap. It doesn't generalize inserts and updates... and it actually copy/pastes a pile of code into two views: create.gsp and edit.gsp. The views themselves are gargantuan piles of doggie do-do. This is further compounded by the fact that it uses low-level parameters and not objects.

    Integration tests are 30x slower than a Spring integration test. It is disgusting. Some mocking tests are so hard to write and aren't guaranteed to work when deployed that I think it discourages fast TDD test cycles. Most things seem to screw up Grails while it's running, like adding a taglib, or anything really. The server restart problem wasn't solved at all.

    I'm starting to think going with Spring/Hibernate/Java is the only way to go. While there is a pretty big cost at startup, I know it'll eventually smooth out. It sucks that I can't use a language like Scala... because idiomatically, it is so incompatible with Hibernate. This app is also not a run-of-the-mill UI over a database. It's got some of that, but it's not going to be a slouch. I am deathly scared of Grails now because of how crap it is in the controller layer. Suggestions on what I can do?

    Read the article

  • Need MYSQL query for finding lowest score per game player

    - by Chris Barnhill
    I have a game on Facebook called Rails Across Europe. I have a Best Scores page where I show the players with the best 20 scores, which in game terms refers to the lowest winning turn. The problem is that there are a small number of players who play frequently, and their scores dominate the page. I'd like to make the scores page open to more players, so I thought that I could display the single lowest winning turn for each player instead of displaying all of the lowest winning turns for all players. The problem is that the query for this eludes me, so I hope that one of you brilliant StackOverflow folks can help me with this. I have included the relevant MySQL table schemas below.

    Here are the table relationships: player_stats contains statistics for either a game in progress or a completed game. If a game is in progress, winning_turn is zero (which means that games with a winning_turn of zero should not be included in the query). player_stats has a game_player table id reference. game_player contains data describing games currently in progress; it has a player table id reference. player contains data describing a person who plays the game.

    Here's the query I'm currently using:

        'SELECT p.fb_user_id, ps.winning_turn, gp.difficulty_level,
                c.name as city_name, g.name as goods_name, d.cost
         FROM game_player as gp, player as p, player_stats as ps, demand as d, city as c, goods as g
         WHERE p.status = "ACTIVE" AND gp.player_id = p.id AND ps.game_player_id = gp.id
           AND d.id = ps.highest_demand_id AND c.id = d.city_id AND g.id = d.goods_id
           AND ps.winning_turn > 0
         ORDER BY ps.winning_turn ASC, d.cost DESC
         LIMIT '.$limit.';';

    Here are the relevant table schemas:

        -- Table structure for table `player_stats`
        CREATE TABLE IF NOT EXISTS `player_stats` (
          `id` int(11) NOT NULL auto_increment,
          `game_player_id` int(11) NOT NULL,
          `winning_turn` int(11) NOT NULL,
          `highest_demand_id` int(11) NOT NULL,
          PRIMARY KEY (`id`),
          KEY `game_player_id` (`game_player_id`,`highest_demand_id`)
        ) ENGINE=InnoDB DEFAULT CHARSET=utf8 AUTO_INCREMENT=3814 ;

        -- Table structure for table `game_player`
        CREATE TABLE IF NOT EXISTS `game_player` (
          `id` int(10) unsigned NOT NULL auto_increment,
          `game_id` int(10) unsigned NOT NULL,
          `player_id` int(10) unsigned NOT NULL,
          `player_number` int(11) NOT NULL,
          `funds` int(10) unsigned NOT NULL,
          `turn` int(10) unsigned NOT NULL,
          `difficulty_level` enum('STANDARD','ADVANCED','MASTER','ULTIMATE') NOT NULL,
          `date_last_used` datetime NOT NULL,
          PRIMARY KEY (`id`),
          KEY `game_id` (`game_id`,`player_id`)
        ) ENGINE=InnoDB DEFAULT CHARSET=utf8 AUTO_INCREMENT=3814 ;

        -- Table structure for table `player`
        CREATE TABLE IF NOT EXISTS `player` (
          `id` int(11) NOT NULL auto_increment,
          `fb_user_id` char(255) NOT NULL,
          `fb_proxied_email` text NOT NULL,
          `first_name` char(255) NOT NULL,
          `last_name` char(255) NOT NULL,
          `birthdate` date NOT NULL,
          `date_registered` datetime NOT NULL,
          `date_last_logged_in` datetime NOT NULL,
          `status` enum('ACTIVE','SUSPENDED','CLOSED') NOT NULL,
          PRIMARY KEY (`id`),
          KEY `fb_user_id` (`fb_user_id`)
        ) ENGINE=InnoDB DEFAULT CHARSET=utf8 AUTO_INCREMENT=1646 ;
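    One way to restrict the list to each player's single best (lowest) winning turn is the classic groupwise-minimum pattern: take MIN(winning_turn) per player in a derived table, then join back to pick up the rest of the row. This is only a sketch against the schemas above, not a tested query, and it ignores ties - a player with two games at the same lowest turn would still appear twice without further tie-breaking:

        SELECT p.fb_user_id, ps.winning_turn, gp.difficulty_level,
               c.name AS city_name, g.name AS goods_name, d.cost
        FROM (
              -- lowest finished (non-zero) winning turn per player
              SELECT gp2.player_id, MIN(ps2.winning_turn) AS best_turn
              FROM player_stats ps2
              JOIN game_player gp2 ON gp2.id = ps2.game_player_id
              WHERE ps2.winning_turn > 0
              GROUP BY gp2.player_id
             ) best
        JOIN game_player gp  ON gp.player_id = best.player_id
        JOIN player p        ON p.id = gp.player_id AND p.status = 'ACTIVE'
        JOIN player_stats ps ON ps.game_player_id = gp.id
                            AND ps.winning_turn = best.best_turn
        JOIN demand d ON d.id = ps.highest_demand_id
        JOIN city c   ON c.id = d.city_id
        JOIN goods g  ON g.id = d.goods_id
        ORDER BY ps.winning_turn ASC, d.cost DESC
        LIMIT 20;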

    Read the article

  • Test All Features of Windows Phone 7 On Your PC

    - by Matthew Guay
    Are you a developer, or just excited about the upcoming Windows Phone 7 and want to try it out now? Thanks to free developer tools from Microsoft and a new unlocked emulator rom, you can try out most of the exciting features today from your PC.

    Last week we showed you how to try out Windows Phone 7 on your PC and get started developing for the upcoming new devices. We noticed, however, that the emulator only contains Internet Explorer Mobile and some settings. This is still interesting to play around with, but it wasn't the full Windows Phone 7 experience. Some enterprising tweakers discovered that more applications were actually included in the emulator, but were simply hidden from users. Developer Dan Ardelean then figured out how to re-enable these features, and released a tweaked emulator rom so everyone can try out all of the Windows Phone 7 features for themselves. Here we'll look at how you can run this new emulator image on your PC, and then look at some interesting features in Windows Phone 7.

    Editor Note: This modified emulator image is not official, and isn't sanctioned by Microsoft. Use your own judgment when choosing to download and use the emulator.

    Setting Up Emulator Rom

    To test-drive Windows Phone 7 on your PC, you must first download and install the Windows Phone Developer Tools CTP (link below). Follow the steps we showed you last week at: Try out Windows Phone 7 on your PC today. Once it's installed, go ahead and run the default emulator as we showed to make sure everything works ok.

    Once the Windows Phone Developer Tools are installed and running, download the new emulator rom from XDA Forums (link below). This will be a zip file, so extract it first. Note where you save the file, as you will need the address in the next step.

    Now, to run our new emulator image, we need to open the emulator on the command line and point it to the new rom image. To do this, browse to the correct directory, depending on whether you're running the 32 bit or 64 bit version of Windows:

        32 bit: C:\Program Files\Microsoft XDE\1.0\
        64 bit: C:\Program Files (x86)\Microsoft XDE\1.0\

    Hold your Shift key down and right-click in the folder. Choose Open Command Window Here. At the command prompt, enter XDE.exe followed by the location of your new rom image. Here, we downloaded the rom to our download folder, so at the command prompt we entered:

        XDE.exe C:\Users\Matthew\Downloads\WM70Full\WM70Full.bin

    The emulator loads … with the full Windows Phone 7 experience!

    To make it easier, let's make a shortcut on our desktop to load the emulator with the new rom directly. Right-click on your desktop (or any folder you want to create the shortcut in), select New, and then Shortcut. Now, in the box, we need to enter the path for the emulator followed by the location of our rom. Both items must be in quotes. So, in our test, we entered the following:

        32 bit: "C:\Program Files\Microsoft XDE\1.0\" "C:\Users\Matthew\Downloads\WM70Full\WM70Full.bin"
        64 bit: "C:\Program Files (x86)\Microsoft XDE\1.0\" "C:\Users\Matthew\Downloads\WM70Full\WM70Full.bin"

    Make sure to enter the correct location of the new emulator rom for your computer, and keep both items in separate quotes. Click Next when you've entered the location. Name the shortcut; we named it Windows Phone 7, but simply enter whatever you'd like. Click Finish when you're done. You should now have a nice Windows Phone icon and a fully functional shortcut! Double-click it to run the Windows Phone 7 emulator as above.
    Features in the Unlocked Windows Phone 7 Emulator

    So let's look at what you can do with this new emulator. Almost everything you've seen in demos from the Mobile World Congress and Mix'10 is right here for you to play with. Here's the application menu, which you can access by clicking on the arrow at the top of the home screen - it shows how much stuff they've packed into this! And, of course, even the home screen itself shows much more activity than it did in the original emulator. Let's check out some of these sections.

    Here's Zune running on Windows Phone 7, and the Zune Marketplace. The animations are beautiful, so be sure to check this out yourself. The new picture hub is much nicer than any picture viewer included with Windows Mobile in the past… Stay productive, and on schedule, with the new Calendar. The XBOX hub gives us only a hint of things to come, and the links to games for now are simply placeholders.

    Here's a look at the Office hub. This doesn't show up on the home screen right now, but you can access it in the applications menu. Office obviously still has a lot of work left on it, but even at a glance it looks like it includes a lot more functionality than Office Mobile in Windows Mobile 6. Here's a look at each of the three apps: Word, Excel, and OneNote, and the formatting palette in the Office apps.

    This emulator also includes a lot more settings than the default one, including settings for individual applications. You can even activate the screen lock, and try out the lift-to-peek-or-unlock feature… Finally, this version of Windows Phone 7 includes a very nice SystemInfo app with an advanced task manager. We hope this is still available when the actual phones are released.

    Conclusion

    If you're excited about the upcoming Windows Phone 7 series, or simply want to learn more about what's coming, this is a great way to test it out. With these exciting new hubs and applications, there's something here for everyone. Let us know what you like most about Windows Phone 7 and what your favorite app or hub is.

    Links

    Please note: These roms are not officially supported by Microsoft, and could be taken down.

    Download the unlocked Windows Phone 7 emulator from XDA Forums – click the link in this post to download
    How the unlocked emulator image was created

    Read the article

  • From J2EE to Java EE: what has changed?

    - by Bruno.Borges
    See the original @Java_EE tweet of 29 May 2014. Yep, it has been 8 years since the term J2EE was replaced, and still some people refer to it (mostly recruiters, luckily!). But then comes the question: what has changed besides the name? Our community friend Abhishek Gupta worked on this question and provided an excellent response titled "What's in a name? Java EE? J2EE?". But let me give you a few highlights here so you don't lose yourself with YATO (yet another tab opened):

    - J2EE used to be an infrastructure and resources provider only, requiring developers to depend on external 3rd-party frameworks to implement application requirements or improve productivity
    - J2EE used to require hundreds of lines of XML to define just a dozen resources like EJBs, MDBs, Servlets, and so on
    - J2EE used to support only EARs (Enterprise Archives), with a bunch of other archives like JARs and WARs, just to run a simple Web application

    And so on, and so on! It was a great technology but still required a lot of work to get something up and running. Remember XDoclet? Remember Struts? The old days of pure Hibernate code? Or when Ajax became a trending topic and we were all implementing it with the DWR Servlet? Still, we J2EE developers survived, and learned, and helped evolve the platform to a whole new level of DX (Developer Experience).

    A new DX for J2EE suggested a new name, one that referred to the platform as the Enterprise Edition of Java, because "Java is why we're here", quoting Bill Shannon. The release of Java EE 5 included so many features that clearly showed developers the platform was going after all those DX gaps:

    - Radical simplification of the persistence model with the introduction of JPA
    - Support for annotations following the launch of Java SE 5.0
    - Updated XML APIs with the introduction of StAX
    - Drastic simplification of the EJB component model (with annotations!)
    - Convention over Configuration and Dependency Injection

    A few bullets, you may say, but they represented a whole new DX and a vision for upcoming versions. Clearly, the release of Java EE 5 helped drive the future of the platform by reducing the number of XML files and Java interfaces, simplifying configuration, and providing convention over configuration. We then saw the release of Java EE 6 with even more great features like Managed Beans, CDI, Bean Validation, improved JSP and Servlet APIs, JASPIC, the possibility to deploy plain WARs, and so many other improvements that it is difficult to list them in one sentence. And we've gotta give Spring Framework some credit here: thanks to Rod Johnson and team, concepts like Dependency Injection fit perfectly into the Java EE Platform. Clearly, Spring used to be one of the most inspiring frameworks for the Java EE platform, and it is great to see things like Pivotal and Spring supporting the JSR 352 Batch API standard - cooperation to keep improving DX to the maximum in the server-side Java landscape.

    The masterpiece result of these previous releases is what we see and call today Java EE 7, which, by providing a new and improved JavaServer Faces release, new features for web development like the WebSocket API, improved JAX-RS, and JSON-P, and also the Batch API and many other improvements, has increased developer productivity and brought innovation to server-side Java developers. Java EE is not just a new name (which was introduced back in May 2006!) but a new Developer Experience for server-side Java developers.
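    To make the "less XML, more annotations" point concrete, here is a small illustrative sketch (not taken from the article): the kind of component that once needed a remote/home interface plus ejb-jar.xml descriptors is now just a couple of annotated classes. The class and field names are made up for the example, and in a real project each class would live in its own file.

        import javax.ejb.Stateless;
        import javax.persistence.Entity;
        import javax.persistence.EntityManager;
        import javax.persistence.GeneratedValue;
        import javax.persistence.Id;
        import javax.persistence.PersistenceContext;

        @Entity
        class Customer {
            @Id @GeneratedValue
            private Long id;
            private String name;
            // getters and setters omitted for brevity
        }

        @Stateless
        public class CustomerService {
            // Injected by the container - no XML deployment descriptor required.
            @PersistenceContext
            private EntityManager em;

            public Customer save(Customer customer) {
                em.persist(customer);   // JPA handles the persistence plumbing
                return customer;
            }
        }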
    To show you why we are here and where we are going (see the Java EE 8 update), we wanted to share with you a draft of the new Java EE logos that the evangelist team created to help you spread the word about Java EE. You can get access to these images at the Java EE Platform Facebook Album or the Google+ Java EE Platform Album, whichever is better for you, but don't forget to like and/or +1 those social network profiles :-)

    A message to all job recruiters: stop using J2EE and start using Java EE if you want to find great Java EE 5, Java EE 6, or Java EE 7 developers. To not only save you, recruiter, valuable characters when tweeting that job opportunity but also to match the correct term, we invite you to replace long terms like "Java/J2EE", or even worse "#Java #J2EE #JEE" and all those awkward combinations, with the only acceptable hashtag: #JavaEE. And to prove that Java EE is catching on among developers and even recruiters, and that J2EE is past, let me highlight the job trends. The image below is from the Indeed.com trends page, for the following keywords: J2EE, Java/J2EE, Java/JEE, JEE. As you can see, J2EE is indeed going away, while JEE saw some increase - perhaps because some people are just too lazy to type "Java" but at the same time are aware that J2EE (the '2') is past. We shall forgive that for a while :-)

    Another proof that J2EE is going away comes from its trending statistics at Google. People have been showing less and less interest in the term J2EE. See the chart below. Recruiter, if you still need proof that J2EE is past, that Java EE is trending, that other job recruiters are searching for Java EE developers, and that the developer community is aware of the new term, perhaps these other charts can show you what term you should be using. See for example the Job Trends for Java EE at Indeed.com and notice where it started... 2006! 8 years ago :-) Last but not least, the Google Trends for the Java EE term (including the still wrong but forgivable JavaEE term) show us that the new term is catching up very well. J2EE is past.

    Oh, and don't worry about the curves going down. We developers like to be hipsters sometimes, and today only AngularJS, NodeJS and Big Data are going up. Java EE and other traditional server-side technologies, such as Spring, or even those from other platforms such as Ruby on Rails, PHP and Grails, are pretty much consolidated, and the curves... well, they are consolidated too.

    So if you are a Java EE developer, drop that J2EE from your résumé, and let recruiters also know that this term is past. Embrace Java EE, and enjoy a new developer experience for server-side Java developers. Java EE on Twitter | Java EE on Google+ | Java EE on Facebook

    Read the article

  • 10 tape technology features that make you go hmm.

    - by Karoly Vegh
    A week ago an Oracle/StorageTek Tape Specialist, Christian Vanden Balck, visited Vienna and agreed to visit customers to do tech talks and update them about the technology boom going on around tape. I had the privilege of attending some of his sessions, and noted the information and features that took the customers by surprise and made them think. Allow me to share the top 10:

    I. StorageTek as a brand: StorageTek is one of the strongest names in the tape field. The brand itself was valued so much by customers that, even after Sun Microsystems acquired StorageTek and Oracle acquired Sun, the brand lives on: all the Oracle tape libraries are officially branded StorageTek. See http://www.oracle.com/us/products/servers-storage/storage/tape-storage/overview/index.html

    II. Disk information density limitations: Disk technology struggles with information density. You haven't seen disk sizes exploding lately, have you? That's partly because there are physical limits on a disk platter. The size is given, the number of platters is limited; they just can't grow, and are running out of physical area to write to. Now, in a T10000C tape cartridge we have over 1000m of tape. There you go: you have your physical space and don't need to cram all that data together. You can write in a reliable pattern, and have space to grow too.

    III. Oracle has a worldwide market share of 62% in recording head manufacturing. That's right. If you are running LTO drives, there is a good chance you rely on StorageTek production. That's two out of three LTO recording heads produced worldwide.

    IV. You can store 1 Exabyte of data in a single tape library. Yes, an Exabyte. That is 1000 Petabytes. Or a million Terabytes. A thousand million Gigabytes. You can store that in a stacked StorageTek SL8500 tape library. In one SL8500 you can put 10,000 T10000C cartridges, each storing 10TB of data (compressed). You can stack 10 of these SL8500s together. Boom: 1,000,000 TB. (n.b.: stacking means interconnecting the libraries. Yes, cartridges are moved between the stacked libraries automatically.)

    V. EMC: 'Tape doesn't suck after all. We moved on.': Do you remember the infamous 'Tape sucks, move on' Data Domain slogan? Of course they had to put it that way, having only disk products. But here's a fun fact: at EMC World 2012 there was a major presence of a tape-tech company - EMC, in a sudden burst of sanity, is embracing tape again.

    VI. The miraculous T10000C: Oracle StorageTek has developed an enterprise-grade tape drive and cartridge, the T10000C, with awesome numbers.

    The Cartridge:
    - Native 5TB capacity, 10TB with compression
    - Over a kilometer of tape within the cartridge, which is locked when unmounted - no rattling of your data
    - The metal-particle data layer has been replaced with BaFe (barium ferrite): metal particles lose around 7% of their magnetism within 30 days, BaFe does not (yes, we employ solid-state physicists doing R&D on demagnetisation in our labs)
    - Can be partitioned, enabling storage tiering within the cartridge!

    The Drive:
    - 2GB cache
    - Encryption implemented in hardware - no performance hit
    - 252 MB/s native sustained data rate, which beats disk technology by far, not to mention peak throughput
    - Leads the tape while never touching its data side, protecting your data physically too
    - Data integrity checking (CRC recalculation) on tape within the drive, without having to read it back to the server
    - Reorders data from tape order, delivering it back in application order
    - Writes 32 tracks at once, reading them back for a CRC check at once
    VII. You only use 20% of your data on a regular basis. The remaining 80% is just lying around for years, on continuously spinning disks, doubly consuming energy (power + cooling) and blocking disk storage capacity. There is a solution called SAM (Storage Archive Manager) that provides a filesystem unifying disk and tape, moving data on demand, and transparently for clients, between the different storage tiers. You can share these filesystems with NFS or CIFS for clients, and enjoy the low TCO of tape. Tapes don't spin. They sit quietly in their slots, storing 10TB of data, using no energy, producing no heat, automounted when a client accesses their data. See: http://www.oracle.com/us/products/servers-storage/storage/storage-software/storage-archive-manager/overview/index.html

    VIII. HW supported for three decades: Did you know that the original PowderHorn library was released in '87 and was only discontinued in 2010? That is over two decades of supported operation. Tape libraries are - just like the data-carrying tape cartridges - built for longevity. Oh, and the T10000C cartridge has a 30-year archival life for long-term retention.

    IX. Tape is easy to manage: Have you heard of Tape Storage Analytics? It is a central graphical tool to summarize, monitor, and analyze dataflow, health and performance of drives and libraries; see http://www.oracle.com/us/products/servers-storage/storage/tape-storage/tape-analytics/overview/index.html

    X. The next generation: The T10000B drives were able to reuse the T10000A cartridges and write even more data on them. On the same cartridges. We call this investment protection, and this is very important for Oracle for the future too. We usually support two generations of cartridges together. The current drive is the T10000C.

    (...I know I promised to list 10, but I still have two more I really want to mention. Allow me to work around the problem:)

    X++. The TallBots, the robots that move the cartridges around in the StorageTek library from tape slots to the drives, are cableless. Cables, belts and chains running to moving parts in a library cause maintenance downtime, so StorageTek eliminated them. The TallBots get power, commands, even firmware upgrades through the rails they run on. Also, the TallBots don't just hook'n'pull the tapes out of their slots, they actually grip'n'lift them out. No friction, no scratches, no zillion little plastic particles floating around in the library, in the drives, or on your data.

    (X++)++: Tape beats SSDs and disks. In terms of throughput (252 MB/s), in terms of TCO (disks cause around 290x more power and cooling), and in terms of capacity (10TB on a single medium, and soon more).

    So... do you need to store large amounts of data? Are you legally bound to archive it for dozens of years? Would you benefit from automatic storage tiering? Have you got large media chunks to be streamed at times? Have you got power and cooling issues in growing datacenters? Do you find EMC's 180° turn in tape attitude interesting, but appreciate it at the same time? With all that, you aren't alone. Most of the data on this planet is stored on tape. Tape is coming. Big time.

    Read the article

< Previous Page | 286 287 288 289 290 291 292  | Next Page >