Search Results

Search found 6945 results on 278 pages for 'azure use cases'.

Page 71/278 | < Previous Page | 67 68 69 70 71 72 73 74 75 76 77 78  | Next Page >

  • T-SQL Tuesday #13: Clarifying Requirements

    - by Alexander Kuznetsov
    When we transform initial ideas into clear requirements for databases, we typically have to make the following choices: frequent maintenance vs doing it once. As we are clarifying the requirements, we need to determine whether we want to continue spending considerable time maintaining the system, or if we want to finish it up and move on to other tasks. Race car maintenance vs installing electric wiring is my favorite analogy for this kind of choice. In some cases we need to squeeze every last bit...(read more)

    Read the article

  • I need help on methodologies for information system project [closed]

    - by Neenee Kale
    Basically I will be developing a student information system for parents, and I am confused about what type of methodology I can use. Please recommend a methodology which involves use cases and the system development life cycle. I'm confused about what a methodology is; I've read loads of books and done research but I still don't seem to understand. I was going to use the system development life cycle, but I found out that this is not a methodology.

    Read the article

  • 2 Servers 1 Database - Can I use Redis?

    - by Aust
    Ok I have a couple of questions here. First let me give you some background information. I'm starting a project where I have a node.js server running my application and my website running on another normal server. My application will allow multiple users simultaneous connections and updates to the database, so Redis seemed like a good fit there because of its speed and atomic functions. For someone to access my application they have to log in with an account. To get an account, they have to sign up for one through my website. So my website needs a database, but it's not important to have a database like Redis here because it doesn't need it. Which leads me to my first question: 1. Can Redis even be used without node.js?

    It seems like it would be convenient if both of my servers were using the same database to keep track of information. In some cases, they will keep track of the same information (as in user information) and in other cases, they will be keeping track of separate information. So even if the website wouldn't be taking full advantage of all that Redis has to offer, it seems like it would be more convenient. So assuming Redis could be used in this situation, that leads to my next question: 2. Since Redis is linked with JavaScript, how would I handle the security from my website users?

    What would be stopping my website users from opening Firebug or Chrome's inspector and making changes to the database? Maybe if I designed my site with a layout like this: apply.php -> update.php -> home.php, where after they submitted their form it would redirect them to the update page where the JavaScript would run, and then redirect them to the home page after the database updated. I don't really know, I'm just taking shots in the dark at this point. :)

    Maybe a better alternative would be to have my node.js application access its own Redis database and also have access to another MySQL database that my website also has access to. Or maybe there is another database that would be better suited for this situation than Redis. Anyways, any direction on this matter would be greatly appreciated. :)
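
    As a hedged aside on the first question: Redis is not tied to node.js or to browser JavaScript; any server-side process with a Redis client library can talk to the same instance. The sketch below uses the Jedis client for Java purely as an example of a non-node process reading and writing account data; the host name, port, key names, and the presence of the Jedis library on the classpath are assumptions for illustration only.

        import redis.clients.jedis.Jedis;

        public class AccountStore {
            public static void main(String[] args) {
                // Connect to the same Redis instance the node.js application uses
                // (host and port are placeholders for this sketch).
                try (Jedis jedis = new Jedis("redis.internal.example", 6379)) {
                    // The signup code on the website's server writes the new account...
                    jedis.hset("user:1001", "email", "someone@example.com");
                    // ...and the node.js application (or any other server process) can read it back.
                    String email = jedis.hget("user:1001", "email");
                    System.out.println("Stored email: " + email);
                }
            }
        }

    In a setup like this, browser-side JavaScript never talks to Redis directly; only the two server processes do, which is also the usual answer to the Firebug/inspector concern in the second question.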

    Read the article

  • New DMF for SQL Server 2008 sys.dm_fts_parser to parse a string

    Many times we want to split a string into an array and get a list of each word separately. The sys.dm_fts_parser function will help us in these cases. Moreover, this function will also differentiate between noise words and exact-match words. sys.dm_fts_parser can also be very powerful for debugging purposes: it can help you check how the word breaker and stemmer work for a given Full-Text Search input.
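
    As a rough sketch of the usage the excerpt describes, the query below feeds a phrase to sys.dm_fts_parser and reads back one row per token, with the special_term column indicating whether the word breaker classified the token as an exact match or a noise word. The JDBC wrapper, connection string, and sample phrase are illustrative assumptions (the Microsoft SQL Server JDBC driver is assumed to be on the classpath); the DMF itself requires SQL Server 2008 or later, as the title notes.

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.ResultSet;
        import java.sql.Statement;

        public class FtsParserDemo {
            public static void main(String[] args) throws Exception {
                // Placeholder connection string for this sketch; adjust server and credentials.
                String url = "jdbc:sqlserver://localhost;databaseName=master;integratedSecurity=true";

                // Arguments: full-text query string, LCID (1033 = English), stoplist id, accent sensitivity.
                String sql = "SELECT display_term, special_term "
                           + "FROM sys.dm_fts_parser('\"The quick brown foxes were jumping\"', 1033, 0, 0)";

                try (Connection con = DriverManager.getConnection(url);
                     Statement st = con.createStatement();
                     ResultSet rs = st.executeQuery(sql)) {
                    while (rs.next()) {
                        // special_term distinguishes values such as 'Exact Match' and 'Noise Word'.
                        System.out.println(rs.getString("display_term") + " -> " + rs.getString("special_term"));
                    }
                }
            }
        }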

    Read the article

  • UPK for Testing Webinar Recording Now Available!

    - by Karen Rihs
    For anyone who missed last week’s event, a recording of the UPK for Testing webinar is now available.  As an implementation and enablement tool, Oracle’s User Productivity Kit (UPK) provides value throughout the software lifecycle.  Application testing is one area where customers like Northern Illinois University (NIU) are finding huge value in UPK and are using it to validate their systems.  Hear Beth Renstrom, UPK Product Manager, and Bettylynne Gregg, NIU ERP Coordinator, discuss how the Test It Mode, Test Scripts, and Test Cases of UPK can be used to facilitate application testing.

    Read the article

  • Why does Clonezilla need to reinstall grub?

    - by Pablo
    By default Clonezilla Live will reinstall grub after a restore. In my particular case I am restoring a disk image to the same physical disk without any partitioning or size change. I am expecting to have exactly the same state as I had right after the backup. However, reinstalling grub renders my system unbootable. If I remove that option [-g] then everything is fine. My question is: in what cases, and why, does Clonezilla ever need to reinstall grub when restoring a disk image?

    Read the article

  • Database Activity Monitoring Part 2 - SQL Injection Attacks

    If you think through the web sites you visit on a daily basis, the chances are that you will need to log in to verify who you are. In most cases your username would be stored in a relational database along with those of all the other registered users on that web site. Hopefully your password will be encrypted and not stored in plain text.
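
    To make the injection risk concrete, here is a minimal hedged sketch of the login lookup the excerpt describes, written in Java/JDBC for illustration; the users table and password_hash column are invented names. Splicing the submitted username into the SQL text is the classic injection hole, while binding it as a parameter keeps the value from ever being parsed as SQL.

        import java.sql.Connection;
        import java.sql.PreparedStatement;
        import java.sql.ResultSet;
        import java.sql.SQLException;

        public class LoginCheck {
            // Vulnerable: the username is concatenated into the SQL text, so input
            // such as ' OR '1'='1 changes the meaning of the statement.
            static String vulnerableSql(String username) {
                return "SELECT password_hash FROM users WHERE username = '" + username + "'";
            }

            // Safer: the username is bound as a parameter and never interpreted as SQL.
            static String findPasswordHash(Connection con, String username) throws SQLException {
                String sql = "SELECT password_hash FROM users WHERE username = ?";
                try (PreparedStatement ps = con.prepareStatement(sql)) {
                    ps.setString(1, username);
                    try (ResultSet rs = ps.executeQuery()) {
                        return rs.next() ? rs.getString("password_hash") : null;
                    }
                }
            }
        }

    Storing a salted hash rather than a reversible encrypted password is the usual follow-on advice to the excerpt's closing point.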

    Read the article

  • Timestep schemes for physics simulations

    - by ktodisco
    The operations used for stepping a physics simulation are most commonly:

    1. Integrate velocity and position
    2. Collision detection and resolution
    3. Contact resolution (in advanced cases)

    A while ago I came across this paper from Stanford that proposed an alternative scheme, which is as follows:

    1. Collision detection and resolution
    2. Integrate velocity
    3. Contact resolution
    4. Integrate position

    It's intriguing because it allows for robust solutions to the stacking problem. So it got me wondering... What, if any, alternative schemes are available, either simple or complex? What are their benefits, drawbacks, and performance considerations?
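
    For readers who think better in code, here is a minimal hedged sketch of the two orderings described above, using an invented World interface as a placeholder; it only shows where each phase sits relative to the others, not how any phase is implemented.

        public class TimestepSchemes {

            // Hypothetical physics world; each method stands in for a whole subsystem.
            interface World {
                void detectAndResolveCollisions();
                void integrateVelocities(float dt);
                void resolveContacts();
                void integratePositions(float dt);
            }

            // Common ordering: integrate velocity and position, then handle collisions and contacts.
            static void classicStep(World world, float dt) {
                world.integrateVelocities(dt);
                world.integratePositions(dt);
                world.detectAndResolveCollisions();
                world.resolveContacts();
            }

            // Alternative ordering from the Stanford paper: collisions first, positions last,
            // which is what reportedly makes stacking more robust.
            static void alternativeStep(World world, float dt) {
                world.detectAndResolveCollisions();
                world.integrateVelocities(dt);
                world.resolveContacts();
                world.integratePositions(dt);
            }
        }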

    Read the article

  • Is inline SQL still classed as bad practice now that we have Micro ORMs?

    - by Grofit
    This is a bit of an open-ended question, but I wanted some opinions. I grew up in a world where inline SQL scripts were the norm; then we were all made very aware of SQL-injection-based issues, and of how fragile the SQL was when doing string manipulation all over the place. Then came the dawn of the ORM, where you explained the query to the ORM and let it generate its own SQL, which in a lot of cases was not optimal but was safe and easy. Another good thing about ORMs and database abstraction layers was that the SQL was generated with its database engine in mind, so I could use Hibernate/NHibernate with MSSQL or MySQL and my code never changed; it was just a configuration detail.

    Now fast forward to the current day, where Micro ORMs seem to be winning over more developers, and I was wondering why we have seemingly taken a U-turn on the whole inline SQL subject. I must admit I do like the idea of no ORM config files and being able to write my query in a more optimal manner, but it feels like I am opening myself back up to the old vulnerabilities such as SQL injection, and I am also tying myself to one database engine, so if I want my software to support multiple database engines I would need to do some more string hackery, which seems to make code unreadable and more fragile. (Just before someone mentions it, I know you can use parameter-based arguments with most Micro ORMs, which offers protection in most cases from SQL injection.)

    So what are people's opinions on this sort of thing? I am using Dapper as my Micro ORM in this instance and NHibernate as my regular ORM in this scenario, however most in each field are quite similar. What I term inline SQL is SQL strings within source code. There used to be design debates over SQL strings in source code detracting from the fundamental intent of the logic, which is why statically typed LINQ-style queries became so popular: it's still just one language, whereas with, let's say, C# and SQL on one page you have two languages intermingled in your raw source code.

    Just to clarify: SQL injection is just one of the known issues with using SQL strings, and I already mentioned you can stop it from happening with parameter-based queries. However, I highlight other issues with having SQL queries ingrained in your source code, such as the lack of DB vendor abstraction, as well as losing any level of compile-time error capturing on string-based queries. These are all issues we managed to sidestep with the dawn of ORMs and their higher-level querying functionality, such as HQL or LINQ etc. (not all of the issues, but most of them). So I am less focused on the individual highlighted issues and more on the bigger picture of whether it is now becoming more acceptable to have SQL strings directly in your source code again, as most Micro ORMs use this mechanism. Here is a similar question which has a few different viewpoints, although it is more about inline SQL without the Micro ORM context: http://stackoverflow.com/questions/5303746/is-inline-sql-hard-coding
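
    The question mentions Dapper, which is a C# library, so as a language-neutral hedged sketch of what a Micro ORM essentially does, the snippet below keeps the SQL inline in the source, binds parameters (covering the injection point conceded above), and maps each row onto an object with a caller-supplied function. The RowMapper shape and the query helper are illustrative assumptions, not Dapper's actual API.

        import java.sql.Connection;
        import java.sql.PreparedStatement;
        import java.sql.ResultSet;
        import java.sql.SQLException;
        import java.util.ArrayList;
        import java.util.List;

        public class MiniOrm {

            // Maps one row of a result set onto an object of the caller's choosing.
            interface RowMapper<T> {
                T map(ResultSet rs) throws SQLException;
            }

            // Inline, parameterised SQL plus a row mapper: roughly the Micro ORM trade-off
            // discussed above, i.e. full control over the SQL text but no vendor abstraction.
            static <T> List<T> query(Connection con, String sql, RowMapper<T> mapper,
                                     Object... params) throws SQLException {
                try (PreparedStatement ps = con.prepareStatement(sql)) {
                    for (int i = 0; i < params.length; i++) {
                        ps.setObject(i + 1, params[i]);
                    }
                    try (ResultSet rs = ps.executeQuery()) {
                        List<T> results = new ArrayList<>();
                        while (rs.next()) {
                            results.add(mapper.map(rs));
                        }
                        return results;
                    }
                }
            }
        }

    A caller would write something like query(con, "SELECT id, name FROM users WHERE name LIKE ?", rs -> new User(rs.getLong("id"), rs.getString("name")), "A%"), where User is again a hypothetical class; the SQL string living in the source is exactly the trade-off the question is about.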

    Read the article

  • Tips on ensuring Model Quality

    - by [email protected]
    Given enough data that represents the domain well, and models that reflect exactly the decision being optimized, models usually provide good predictions that ensure lift. Nevertheless, sometimes the modeling situation is less than ideal. In this blog entry we explore the problems found in a few such situations and how to avoid them.

    1 - The model does not reflect the problem you are trying to solve
    For example, you may be trying to solve the problem "What product should I recommend to this customer?" but your model learns on the problem "Given that a customer has acquired our products, what is the likelihood for each product?". In this case the model you built may be too distant a proxy for the problem you are really trying to solve. What you could do in this case is try to build a model based on the results of actual recommendations of products to customers. If there is not enough data from actual recommendations, you could use a hybrid approach in which you use the [bad] proxy model until the recommendation model converges.

    2 - Data is not predictive enough
    If the inputs are not correlated with the output, then the models may be unable to provide good predictions. For example, if the inputs are the phase of the moon and the weather and the output is which car the customer bought, there may be no correlations found. In this case you should see a low-quality model. The solution is to include more relevant inputs.

    3 - Not enough cases seen
    If the training data does not include enough cases (at least 200 positive examples for each output), then the quality of recommendations may be low. The obvious solution is to include more data records. If this is not possible, it may be possible to build a model based on the characteristics of the output choices rather than the choices themselves. For example, instead of using products as the output, use the product category, price, and brand name, and then combine these models.

    4 - Output leaking into input, giving the false impression of good-quality models
    If the training inputs include values that have changed, or that are available only because the output happened, then you will find some strong correlations between the input and the output, but these correlations do not reflect the data you will actually have available at decision (prediction) time. For example, if you are building a model to predict whether a web site visitor will succeed in registering, and the input includes the variable DaysSinceRegistration learned after this variable has already been set, you will probably see a big correlation between having a zero (or one) in this variable and the fact that registration was successful. The solution is to remove these variables from the input or make sure they reflect their value as of the time of decision, not after the result is known.

    Read the article

  • Back from the OData Roadshow

    - by Fabrice Marguerie
    I'm just back from the OData Roadshow with Douglas Purdy and Jonathan Carter. Paris was the last location of seven cities around the world. If there was something you wanted to know about OData, that was the place to be! These guys gave a great tour around OData. I learned things I didn't know about OData and I was able to give a demo of Sesame to the audience. More ideas and use cases popping up!

    Read the article

  • Oracle WebCenter in Action: Best Practices from Oracle Consulting

    - by Kellsey Ruppel
    See concrete, real-world examples of deployments throughout the Oracle WebCenter stack. Oracle Consulting will lead you through a discussion about best practices and key customer use cases, as well as offer practical tips to support web experience management, enterprise content management, and portal deployments. Watch this webcast as our presenters discuss:

    - Best practices for deployments of large complex architectures with Oracle WebCenter Sites
    - Key deployments and helpful hints for Oracle WebCenter Content
    - Performance tuning takeaways when using Oracle WebCenter Portal

    Watch the webcast by registering now.

    Read the article

  • ArchBeat Link-o-Rama for 2012-04-11

    - by Bob Rhubart
    - Oracle Technology Network Developer Day: MySQL - New York (www.oracle.com): Wednesday, May 02, 2012, 8:00 AM – 4:30 PM, Grand Hyatt New York, 109 East 42nd Street, Grand Central Terminal, New York, NY 10017.
    - OTN Architect Day - Reston, VA - May 16 (www.oracle.com): The live one-day event in Reston, VA brings together architects from a broad range of disciplines and domains to share insights and expertise in the use of Oracle technologies to meet the challenges today's solution architects regularly face. Registration is free, but seating is limited.
    - InfoQ: Seven Secrets Every Architect Should Know (www.infoq.com): Frank Buschmann's secrets: User Tasks-based Design, Be Minimalist, Ensure Visibility of Domain Concepts, Use Uncertainty as a Driver, Design Between Things, Check Assumptions, Eat Your Own Dog Food.
    - Roadmaps for the IT shop's evolution | Andy Mulholland (www.capgemini.com): Andy Mulholland discusses "the challenge of new technology and the disruptive change it brings, together with the needs to understand and plan, or even try to gain control of end-users implementations."
    - Drive Online Engagement with Intuitive Portals and Websites | Kellsey Ruppel (blogs.oracle.com): "The web presence must be able to scale to support the delivery of personalized and targeted content to thousands of site visitors without sacrificing performance," says WebCenter blogger Kellsey Ruppel. "And integration between systems becomes more important as well, as organizations strive to obtain one view of the customer culled from WCM data, CRM data and more."
    - New Exadata Customer Cases | Javier Puerta (blogs.oracle.com): Javier Puerta shares links to four new customer use cases featuring details on the solutions implemented at each of these sizable companies.
    - Invoicing: It's time to catch up! | Jesper Mol (www.nl.capgemini.com): Capgemini's Jesper Mol diagrams an e-invoicing solution that includes Oracle Service Bus.
    - Using SAP Adapter with OSB 11g (PS3) | Shub Lahiri (blogs.oracle.com): Shub Lahiri shares a brief overview outlining the steps required to build such a simple project with Oracle Service Bus 11g and SAP Adapter for the PS3 release.
    - Northeast Ohio Oracle Users Group 2 Day Seminar - May 14-15 - Cleveland, OH (www.neooug.org): More than 20 sessions over 4 tracks, featuring 18 speakers, including Oracle ACE Director Cary Millsap, Oracle ACE Director Rich Niemiec, and Oracle ACE Stewart Brand. Register before April 15 and save.
    - Thought for the Day: "Today, most software exists, not to solve a problem, but to interface with other software." — I. O. Angell

    Read the article

  • Table or index that goes nowhere

    - by Linchi Shea
    SQL Server allows you to create a table or an index on a filegroup that has no file assigned to it. Because there is no data file to hold anything, the table or the index thus created cannot be used. This may not be a problem, because often you would probably use the table or the index immediately and would realize the problem. Well, you wouldn't be able to go anywhere. But there are cases, especially with an index, where the problem may not be discovered until some time later, and that could cause...(read more)

    Read the article

  • Is sexist humor more common in the Ruby community than other language communities? [closed]

    - by Andrew Grimm
    I've heard of more cases of sexist humor in the Ruby community, such as Sqoot's "women as perk" and toplessness in advertising, than in all the other programming language communities combined. Is this merely because I'm in the Ruby community, and therefore am more likely to hear about incidents in the Ruby community, or is it because there's a higher rate of sexist humor in the Ruby community compared to, say, the C community?

    Read the article

  • Why a graduate program in South Africa?

    - by anca.rosu
    South Africa, like many other countries, is desperate for skills. Good, solid, technical skills – together with a get-up-and-go attitude – and the desire to work for a world-class organization that is leading the way! In addition, we have made a commitment in South Africa that we need to transform our organization and develop and empower Black individuals who historically have not had the opportunity to participate in the global economy. It is through this investment in our country's people that we contribute to the development of a nation capable of competing on the global stage. This makes for an exciting recipe! We have:

    - Plenty of young and talented individuals who are eager to get stuck in and learn.
    - Formal, recognized qualifications that form the basis for further development.
    - A huge global organization – Oracle – that is committed to developing these graduates and giving them an opportunity that is out of this world!

    Then we:

    - Mix the above 'ingredients' together
    - Tackle and remove potential "lumps & bumps" along the way as we learn and grow together
    - Nurture and care for each other in a warm but tough environment

    What have we achieved? In most cases, the outcome is an awesome bunch of new talent that is well equipped to face the IT world. Where we have the opportunity and suitable headcount available to employ these graduates at Oracle, we snap them up – alternatively, our business partners and customers are always eager to recruit Oracle graduates into their organizations! These individuals go through real-life workplace experience whilst at Oracle. In some cases they get to travel internationally. The excitement and buzz gets into their system and their blood becomes truly RED! Oracle RED! This is valuable talent and expertise to have in our eco-system, and it's an exciting program to be a part of, not only as a graduate but as an Oracle employee too!

    If you have any questions related to this article feel free to contact [email protected]. You can find our job opportunities via http://campus.oracle.com.

    Technorati Tags: South Africa, technical skills, graduate program, opportunity, global organization, new talent

    Read the article

  • Google I/O 2012 - Managing Google Compute Engine Virtual Machines Through Google App Engine

    Speakers: Alon Levi and Adam Eijdenberg. Google Compute Engine provides highly efficient and scalable virtual machines for large-scale data processing operations. Integration with Google App Engine provides an orchestration framework to manage large virtual machine clusters used for data processing. This session will demonstrate the integration and discuss future use cases of the two technologies. For all I/O 2012 sessions, go to developers.google.com. From: GoogleDevelopers. Time: 51:06.

    Read the article

  • When do you use a struct instead of a class?

    - by jkohlhepp
    What are your rules of thumb for when to use structs vs. classes? I'm thinking of the C# definition of those terms but if your language has similar concepts I'd like to hear your opinion as well. I tend to use classes for almost everything, and use structs only when something is very simplistic and should be a value type, such as a PhoneNumber or something like that. But this seems like a relatively minor use and I hope there are more interesting use cases.

    Read the article

  • SQL SERVER – Windows File/Folder and Share Permissions – Notes from the Field #029

    - by Pinal Dave
    [Note from Pinal]: This is the 29th episode of the Notes from the Field series. Security is a task we should hand to the experts: if there is a small oversight or misstep, there is a good chance that the security of the organization is compromised. This is very true, but there are always devil's advocates who believe everyone should know security. As a DBA and administrator, I often see people taking no interest in Windows security, hiding behind the excuse that they are not Windows Server experts. We all often miss the important mission statement for the success of any organization – teamwork. In this blog post Brian tells the story in very interesting, lucid language. Read on!

    In this episode of the Notes from the Field series, database expert Brian Kelley explains a very crucial issue DBAs and developers face on their production servers. Linchpin People are database coaches and wellness experts for a data-driven world. Read the experience of Brian in his own words.

    When I talk security among database professionals, I find that most have at least a working knowledge of how to apply security within a database. When I talk with DBAs in particular, I find that most have at least a working knowledge of security at the server level, if we're speaking of SQL Server. One area I see that is continually weak is Windows file/folder (NTFS) and share permissions. The typical response is, "I'm a database developer and the Windows system administrator is responsible for that." That may very well be true – the system administrator may have the primary responsibility and accountability for file/folder and share security for the server. However, if you're involved in the typical activities surrounding databases and moving data around, you should know these permissions, too. Otherwise, you could be setting yourself up where someone is able to get to data he or she shouldn't, or you could be opening the door where human error puts bad data in your production system.

    File/Folder Permission Basics: I wrote about file/folder permissions a few years ago to give the basic permissions that are most often seen. Here's what you must know as a minimum at the file/folder level:

    - Read - Allows you to read the contents of the file or folder. Having read permission also allows you to copy the file or folder.
    - Write - As the name implies, allows you to write to the file or folder. This doesn't include the ability to delete; however, nothing stops a person with this access from writing an empty file.
    - Delete - Allows the file/folder to be deleted. If you overwrite files, you may need this permission.
    - Modify - Allows read, write, and delete.
    - Full Control - Same as Modify, plus the ability to assign permissions.

    File/folder permissions aggregate, unless there is a DENY (which trumps, just like within SQL Server), meaning that if a person is in one group that gives Read and another group that gives Write, that person has both Read and Write permissions. As you might expect me to say, always apply the Principle of Least Privilege. This likely means that any additional permission you might add does not need Full Control.

    Share Permission Basics: At the share level, here are the permissions:

    - Read - Allows you to read the contents on the share.
    - Change - Allows you to read, write, and delete contents on the share.
    - Full Control - Change, plus the ability to modify permissions.

    Like file/folder permissions, these permissions aggregate, and DENY trumps.

    So What Access Does a Person / Process Have? Figuring out what someone or some process has depends on how the location is being accessed:

    - Access comes through the share (\\ServerName\Share) – a combination of permissions is considered.
    - Access is through a drive letter (C:\, E:\, S:\, etc.) – only the file/folder permissions are considered.

    The only complicated one here is access through the share. Here's what Windows does:

    1. Figures out the aggregated permissions at the file/folder level.
    2. Figures out the aggregated permissions at the share level.
    3. Takes the most restrictive of the two sets of permissions (a short sketch of this rule appears at the end of this excerpt).

    You can test this by granting Full Control over a folder (this is likely already in place for the Users local group) and then setting up a share. Give only Read access through the share, and that includes to Administrators (if you're creating a share, you likely have membership in the Administrators group). Try to read a file through the share. Now try to modify it. The most restrictive permission is the share-level permission: it's set to only allow Read, so if you come through the share, that is what applies.

    Does This Knowledge Really Help Me? In my experience, it does. I've seen cases where sensitive files were accessible by every authenticated user through a share. Auditors, as you might expect, have a real problem with that. I've also seen cases where files to be imported as part of the nightly processing were overwritten by files intended for development. And I've seen cases where a process can't get to the files it needs because someone changed the permissions. If you know file/folder and share permissions, you can spot and correct these types of security flaws. Given that there are a lot of database professionals who don't understand these permissions, if you know them, you set yourself apart. And if you're able to help on critical processes, you begin to set yourself up as a linchpin (link to .pdf) for your organization.

    If you want to get started with performance tuning and database security with the help of experts, read more over at Fix Your SQL Server.

    Reference: Pinal Dave (http://blog.sqlauthority.com). Filed under: Notes from the Field, PostADay, SQL, SQL Authority, SQL Query, SQL Security, SQL Server, SQL Tips and Tricks, T SQL
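
    To make the "most restrictive of the two sets" rule concrete, here is a small hedged sketch; the article itself contains no code, and the simplified Perm enum is invented. It models effective access through a UNC path as the intersection of the aggregated NTFS permissions and the aggregated share permissions.

        import java.util.EnumSet;

        public class EffectiveShareAccess {

            // Simplified permission set for illustration only.
            enum Perm { READ, WRITE, DELETE }

            // Effective access through \\Server\Share: whatever remains after intersecting
            // the aggregated file/folder (NTFS) permissions with the aggregated share permissions.
            static EnumSet<Perm> effective(EnumSet<Perm> ntfs, EnumSet<Perm> share) {
                EnumSet<Perm> result = EnumSet.copyOf(ntfs);
                result.retainAll(share);
                return result;
            }

            public static void main(String[] args) {
                EnumSet<Perm> ntfs = EnumSet.allOf(Perm.class);   // Full Control on the folder
                EnumSet<Perm> share = EnumSet.of(Perm.READ);      // the share grants Read only
                System.out.println(effective(ntfs, share));       // prints [READ]
            }
        }

    Running it with Full Control on the folder but Read-only on the share leaves only READ, matching the experiment described in the excerpt.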

    Read the article

  • Thoughts and comments on Search Neutrality?

    - by SprocketGizmo
    Following the cases brought forward by Foundem, Ciao!, and eJustice.fr, what are your thoughts on Search Neutrality? Should search engines be regulated by the FCC or FTC, similarly to the way the FCC is pushing to regulate Net Neutrality? Relevant articles:

    - Op-ed in the New York Times from the founder of Foundem
    - Excerpt from a book on Search/Net Neutrality
    - Blog post discussing the preceding link
    - Site founded by Foundem to promote Search Neutrality awareness

    Read the article

  • Extended ListView Control for Visual WebGui

    - by Webgui
    Visual WebGui's extended ListView control is in many cases the best option to implement the most complex data entry and views. The following webcast explains the difference between the extended controls and the standard "flat list" ListView control and, of course, what can be done with it.

    Read the article

  • Does the SPDY protocol eliminate the need for cookieless domains?

    - by Clint Pachl
    With plain HTTP, cookieless domains were an optimization to avoid unnecessarily sending cookie headers for page resources. However, the SPDY protocol compresses HTTP headers and in some cases eliminates unnecessary headers. My question then is, does SPDY make cookieless domains irrelevant? Furthermore, should the page source and all of its resources be hosted at the same domain in order to optimize a SPDY implementation?

    Read the article

  • Is throwing an error in unpredictable subclass-specific circumstances a violation of LSP?

    - by Motti Strom
    Say I wanted to create a Java List<String> (see spec) implementation that uses a complex subsystem, such as a database or file system, for its store, so that it becomes a simple persistent collection rather than a basic in-memory one. (We're limiting it specifically to a List of Strings for the purposes of discussion, but it could be extended to automatically de-/serialise any object, with some help. We can also provide persistent Sets, Maps and so on in this way too.) So here's a skeleton implementation:

        class DbBackedList implements List<String> {
            private DbBackedList() {}

            /** Returns a list, possibly non-empty */
            public static List<String> getList() {
                return new DbBackedList();
            }

            public String get(int index) {
                return Db.getTable().getRow(index).asString(); // may throw DbExceptions!
            }

            // add(String), add(int, String), etc. ...
        }

    My problem lies with the fact that the underlying DB API may encounter connection errors that the List interface does not specify it should throw. My problem is whether this violates Liskov's Substitution Principle (LSP). Bob Martin actually gives an example of a PersistentSet in his paper on LSP that violates LSP. The difference is that his newly specified exception there is determined by the inserted value and so strengthens the precondition. In my case the connection/read error is unpredictable and due to external factors, and so is not technically a new precondition, merely an error of circumstance, perhaps like OutOfMemoryError, which can occur even when unspecified. In normal circumstances, the new Error/Exception might never be thrown. (The caller could catch it if it is aware of the possibility, just as a memory-restricted Java program might specifically catch OOME.)

    Is this therefore a valid argument for throwing an extra error, and can I still claim to be a valid java.util.List (or pick your SDK/language/collection in general) and not in violation of LSP? If this does indeed violate LSP and is thus not practically usable, I have provided two less-palatable alternative solutions as answers that you can comment on; see below.

    Footnote: Use Cases

    In the simplest case, the goal is to provide a familiar interface for cases when (say) a database is just being used as a persistent list, and to allow regular List operations such as search, subList and iteration.

    Another, more adventurous, use case is as a slot-in replacement for libraries that work with basic Lists, e.g. if we have a third-party task queue that usually works with a plain List:

        new TaskWorkQueue(new ArrayList<String>()).start()

    which is susceptible to losing its entire queue in the event of a crash. If we just replace this with:

        new TaskWorkQueue(new DbBackedList()).start()

    we get instant persistence and the ability to share the tasks amongst more than one machine.

    In either case, we could either handle connection/read exceptions that are thrown, perhaps retrying the connection/read first, or allow them to throw and crash the program (e.g. if we can't change the TaskWorkQueue code).

    Read the article

  • Web.NET: A Brief Retrospective

    - by Chris Massey
    It's been several weeks since I had the pleasure of visiting Milan and joining 150 enthusiastic web developers for a day of server-side frameworks and JavaScript. Lucky for me, I keep good notes. Overall the day went smoothly, with some solid logistics and very attentive organizers, and an impressively diverse audience drawn by the fact that the event was ambitiously run in English. This was great in that it drew a truly pan-European audience (11 countries were represented on the day, and at least 1 visa had to be procured to get someone there!). The trouble was that, in some cases, it pushed speakers outside their comfort zone. Thankfully, despite a slightly rocky start, every session I attended was very well presented, and the consensus on the day was that the speakers were excellent. While I felt that a lot of the speakers had more that they wanted to cover, the topics were well chosen, every room constantly had a stack of people in it, and all the sessions were pleasingly focused on code & demos.

    For all that the language barriers occasionally made networking a little challenging, organizers Simone & Ugo nailed the logistics. Registration was slick, lunch was plentiful, and session management was great. The very generous Rui was kind enough to showcase a short video about Glimpse in his session, which seemed to go down well (although the audio in the rooms was a little under-powered). Because I think you might need a mid-week chuckle, here are some out-takes.

    And let's not forget the Hackathon. The idea was that, having just learned about a stack of interesting technologies, attendees could spend an evening (fuelled by pizza and some good GitHub beer) hacking something together using them. Unfortunately, after a (great) 10-hour day, and in many cases facing international travel in the morning, many of the attendees headed straight for their hotel rooms. This idea could work so beautifully, and I'm excited to see how it pans out in 2013.

    On top of the slick sessions, getting to finally meet Ugo and Simone in the flesh was a pleasure, as was the serendipitous introduction to the most excellent Rui. They're all fantastic guys who are passionate about the web, and I'm looking forward to finding opportunities to work with them. Simone & Ugo put on a great event, and I'm excited to see what they do next year.

    Read the article
