Search Results

Search found 9545 results on 382 pages for 'least privilege'.


  • How do I Integrate Production Database Hot Fixes into Shared Database Development model?

    - by TetonSig
    We are using SQL Source Control 3, SQL Compare, and SQL Data Compare from RedGate, Mercurial repositories, TeamCity, and a set of 4 environments including production. I am working on getting us to a dedicated environment per developer, but for at least the next 6 months we are stuck with a shared model.

    To summarize our current system: we have a DEV SQL Server where developers first make changes/additions. They commit their changes through SQL Source Control to a local hgdev repository. When they execute an hg push to the main repository, TeamCity listens for that and then (among other things) pushes the hgdev repository to hgrc. Another TeamCity process listens for that, does a pull from hgrc, and deploys the latest to a QA SQL Server where regression and integration tests are run. When those pass, a push from hgrc to hgprod occurs. We do a compare of hgprod to our PREPROD SQL Server and generate deployment/rollback scripts for our production release.

    Separate from the above, we have database hot fixes that need to be applied in between releases. The process there is for our Operations team to make changes on the PREPROD database and then, after testing, to use SQL Source Control to commit their hot fix changes to hgprod from the PREPROD database, then do a compare from hgprod to PRODUCTION, create deployment scripts and run them on PRODUCTION.

    If we were in a dedicated-database-per-developer model, we could simply push hgprod back to hgdev automatically and merge in the hot fix change (through TeamCity monitoring for hgprod check-ins), and developers would then pick it up and merge it into their local repository and database periodically. However, given that with a shared model the DEV database itself is the source of all changes, this won't work. Pushing hotfixes back to hgdev will show up in SQL Source Control as being different from the DEV SQL Server, and we would therefore need to overwrite the repository with the "change" from the DEV SQL Server.

    My only workaround so far is to have OPS assign a developer the hotfix ticket with a script attached, and then we run their hotfixes against DEV ourselves to merge them back in. I'm not happy with that solution. Other than working faster to get to a dedicated environment, are there other ways to keep this loop going automatically?
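    To illustrate the manual workaround described above – applying the attached hotfix script against DEV so that SQL Source Control then sees it as an ordinary DEV change – here is a minimal Python sketch of the kind of step a TeamCity build configuration could run when hgprod check-ins arrive. The server, database and file names are hypothetical, it assumes pyodbc is available, and it treats the hotfix script as a single batch (scripts containing GO separators would need to be split first).

        # Sketch only (hypothetical names): apply a hotfix .sql file to the shared
        # DEV database so the change flows back through SQL Source Control as usual.
        import sys
        import pyodbc

        DEV_CONN = "DRIVER={SQL Server};SERVER=DEVSQL01;DATABASE=AppDb;Trusted_Connection=yes"

        def apply_hotfix(sql_path):
            # Read the hotfix script attached to the OPS ticket
            with open(sql_path, "r") as f:
                sql = f.read()
            conn = pyodbc.connect(DEV_CONN, autocommit=False)
            try:
                conn.cursor().execute(sql)   # run the script as one batch
                conn.commit()                # keep it atomic: all or nothing
                print("Applied %s to DEV" % sql_path)
            except pyodbc.Error as err:
                conn.rollback()
                raise SystemExit("Hotfix failed, DEV left unchanged: %s" % err)
            finally:
                conn.close()

        if __name__ == "__main__":
            apply_hotfix(sys.argv[1])        # e.g. hotfix_1234.sql from the ticket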

    Read the article

  • Exam 70-541 - TS: Microsoft Windows SharePoint Services 3.0 - Application Development

    - by DigiMortal
    Today I passed Microsoft exam 70-541: Microsoft Windows SharePoint Services 3.0 - Application Development. This exam gives you the MCTS certificate. In this posting I will talk about the exam and also give some suggestions about books to read when preparing for it.

    About the exam

    This exam was a good one, I think. The questions were not hard and also not too easy – just enough to make sure you really know what you are doing when working with SharePoint, or at least to make sure you know how things work. After a couple of years of active SharePoint coding this exam needs no additional preparation. The questions covered very different topics like alerts, features, web parts, site definitions, event receivers, workflows, web services and deployments. There are 59 questions in the exam (this information is available on the internet) and you have a little more than two hours. It took me about 40 minutes to get the questions answered and reviewed. I strongly suggest you study the parts of WSS 3.0 you don't know yet and write some code to find out how to use these things through the SharePoint API.

    Good reading

    For guys with less experience there are some good books to suggest. Take one or both of these books, because there are no official study materials or training kits available for this exam. One of my colleagues who is less experienced than me suggested Inside Microsoft Windows SharePoint Services 3.0 by Ted Pattison and Daniel Larson. He told me that he found this book the most useful for passing this exam. When I started with SharePoint Services 3.0 my first book was Developer's Guide To The Windows SharePoint Services v3 Platform by Todd C. Bleeker. It helped me get started and later it was my main handbook for some time. Of course, there are many other good books and I suggest you take what you find. Before buying something, I suggest you discuss it with people who have read the book before – and make sure you mention that you are preparing for the exam.

    Conclusion

    If you are an experienced SharePoint developer then this exam needs no preparation. Okay, some preparation is always good, but if you don't have time you are still able to pass this exam. If you are not an experienced SharePoint developer then study before taking this exam – it is not easy stuff for novices. But if you pass this exam you can proudly say – yes, I know something about SharePoint! :)

    Read the article

  • How to become a solid python web developer [closed]

    - by Estarius
    Possible Duplicate: How do I learn Python from zero to web development? I have started Python recently with the goal of becoming a solid developer and eventually building a web application. However, as time goes by I wonder whether I am going about my goal in the best way. I would compare it to a game, for example: to get better you must spend time playing and trying new things; if you just log in and sit in the lobby chatting, you are most likely not progressing. So far, this is my plan (feel free to comment on or judge it):

    - Review basic programming concepts
    - Start coding slowly in Python
    - Once comfortable in Python, learn about web development in Python
    - Learn about those things we hear about: SQLAlchemy, MVC, TDD, Git, Agile (group project)

    To achieve these things, I started the Learn Python the Hard Way exercises, which I am doing at a rate of 5 per day. I also started reading Think Python at the same time and plan to move on to Dive Into Python. As far as my research goes, these resources, along with the official Python documentation, are what is usually most recommended for learning Python. I consider this enough to get points 1 and 2 done. While learning Python is really great, my goal remains to do quality web development. I know there are books about Django and so on, however I would like to become comfortable with Python web development in general – without a framework and with a framework, any framework – and then be able to choose the one which best fits our needs. For this I would like to know if people have suggestions. Should I just get a book on Django and assume it will apply to everything? What would be the best way to go from Python to web Python and not end up creating crappy code which turns into a nightmare for other programmers? Then finally, those "things we hear about": while I understand roughly what they all do, I am fairly sure that, like everything, there are good and wrong ways of making use of them. Should I go through at least a whole book on each before starting to use them, or stick to their respective online documentation? Is there documentation that ties their use to Python specifically? Also, from looking at Django and Pyramid, they seem to use something other than MVC – the Django model looks similar, but the Pyramid one seems to cut out a whole part of it. Is learning MVC still worth it? Sorry for the wall of text, and thanks in advance!

    Read the article

  • Is hiring a "chief intern" a good idea?

    - by dukeofgaming
    I'm starting an internship program for our software department and I was wondering about creating a position ("chief intern", intern supervisor, or whatever one should call it) with the following responsibilities:

    - Train interns
    - Coach interns
    - Manage projects and tasks for interns
    - Supervise interns' work in terms of rhythm and quality
    - Act as a liaison between the main team's needs and interns' performance/aspirations
    - Evaluate and facilitate interns' progress when they want to grab a higher-level, domain-specific task (at this point, a main dev team member can do mentoring)
    - Get freely involved in the main team's software development tasks so that he himself can grow, with full mentorship from the main dev team

    I'm thinking that an apprentice-level engineer (below Jr., or Jr.; but a graduate working full-time) can handle this for a while (he will be trained by the main dev team first), until one of two things happens: he/she decides to move on to the main dev team by recommending an appropriate replacement (or me finding another one as a new hire), or keeps leading the interns while still being able to grow to Jr. Eng., Eng., Sr. Eng. I know the notion of a "chief intern" is common within the medical world, but I don't really know about it in the software world (I was a freelancer for most of my university years). A side intention is that, if this ends up being a higher-rotation position (organically) because the intern supervisor wants to join the main dev team, it could help interns who aspire to this position emerge as leaders. My main intention, though, is removing distractions from the main team without making the interns suffer a lack of attention, which could lead to boredom and little intern retention. Is this "chief intern" idea common (or at least good)? Are there any obvious risks to it that I might not be seeing?

    Edit: I have a draft plan for the kind of work the interns would be doing: Are R&D mini-projects a good activity for interns?

    Edit #2: My intention is not keeping them isolated, but having someone focus on giving them attention when we cannot.

    Edit #3: I'm now convinced it is a good idea, but I will take the organic approach to hiring someone into such a position: do it myself until I cannot. This way I'll know better what to expect from a person I hire for this role in the future, as well as what works and what doesn't with interns.

    Read the article

  • Wireless Drivers for Broadcom BCM 4321 (14e4:4329) will not stay connected to a wireless network

    - by Eugene
    So, I'm not necessarily new to Linux, I just never took the time to learn it, so please bear with me. I just swapped one of my wireless cards from one computer to another. The wireless card in question is a "Broadcom BCM4321 (14e4:4329)", or actually a "Netgear WN311B Rangemax Next 270 Mbps Wireless PCI Adapter", but that's not important. I've tried (but probably screwed up in the process) installing the "wl", "b43" and "brcmsmac" drivers, or at least I think I did. Currently I have only the following drivers loaded:

        eugene@EugeneS-PCu:~$ lsmod | grep "brcmsmac\|b43\|ssb\|bcma\|wl"
        b43                   387371  0
        bcma                   52096  1 b43
        mac80211              630653  1 b43
        cfg80211              484040  2 b43,mac80211
        ssb_hcd                12869  0
        ssb                    62379  2 b43,ssb_hcd

    The main issue is that with most of the drivers I've installed, they will find my wireless network, but they will only stay connected for about a minute with abnormally slow speed and then all of a sudden disconnect. Currently, the computer is hooked into another one to share its connection, so that I can install drivers from the internet instead of loading them onto a flash drive and doing it offline. If anyone has any insight into the problem, that would be awesome. If not, I'll probably just look up how to install the Windows closed-source driver.

    Edit 1: Even when I try the method here, as suggested when this was marked as a duplicate, I still can't stay connected to a wireless network.

    Edit 2: After discussing my issue with @Luis, he opened my question back up and told me to include the tests/procedures in the comments. Basically I did this:

    - Read the first answer of the link above when this question was marked as a duplicate, which involved removing bcmwl-kernel-source and instead installing firmware-b43-installer and b43-fwcutter. No change in result, and I contacted Luis in the comments, who then told me to try the second answer, which involved removing my previous mistake and installing bcmwl-kernel-source.
    - Now the Network Manager (this has happened before, but usually I fixed it by using a different driver) doesn't even recognize that WiFi exists (both non-literally and literally). Luis then suggested sudo rfkill unblock all.
    - rfkill unblock all didn't return anything, so I decided to try sudo rfkill list all. Returns nothing (no wonder rfkill unblock all did nothing).
    - I enter lsmod | grep "brcmsmac\|b43\|ssb\|bcma\|wl" and that returns nothing.
    - Try loading the driver by entering sudo modprobe b43 and try lsmod | grep "brcmsmac\|b43\|ssb\|bcma\|wl" again. Returns this:

        eugene@Eugenes-uPC:~$ sudo modprobe b43
        eugene@Eugenes-uPC:~$ lsmod | grep "brcmsmac\|b43\|ssb\|bcma\|wl"
        b43                   387371  0
        bcma                   52096  1 b43
        mac80211              630653  1 b43
        cfg80211              484040  2 b43,mac80211
        ssb_hcd                12869  0
        ssb                    62379  2 b43,ssb_hcd

    So to recap: currently Network Manager doesn't recognize that wireless exists, the b43 drivers are loaded, and I've hardwired a connection from my laptop to the computer that's causing this.

    Read the article

  • How can I save my university's Computer Science & Engineering department? [closed]

    - by Blake
    I'm currently pursuing a B.S. in Computer Engineering at the University of Florida, and we're having a bit of a problem right now... The state recently passed a budget plan that cuts funding for higher education in Florida. The dean of UF's College of Engineering decided that the best way for us to absorb the blow is by executing the following plan:

    - All of the Computer Engineering Degree programs, BS, MS and PhD, would be moved from the Computer & Information Science and Engineering Dept. to the Electrical and Computer Engineering Dept. along with most of the advising staff.
    - Roughly half of the faculty would be offered the opportunity to move to Electrical/Computer Eng., Biomedical Eng., or Industrial/Systems Eng.
    - Staff positions in CISE which are currently supporting research and graduate programs would be eliminated.
    - The activities currently covered by TAs would be reassigned to faculty and the TA budget for CISE would be eliminated.
    - Any faculty member who wishes to stay in CISE may do so, but with a revised assignment focused on teaching and advising.

    In short: our department (at least as we know it) is being decimated. Computer & Information Sciences & Engineering (one of 9 departments in the College of Engineering) is taking more than 50% of the cuts. If you're interested in reading the full proposal, you can access it here. A vast, VAST majority of the students and faculty in the department are vehemently opposed to this plan, however the dean is already taking measures to implement it. This is the only proposal on the table right now, and she has not entertained our requests for alternatives. She sees it as an obvious (albeit drastic) solution to our budget problem, citing that many other universities have combined Computer and Electrical Engineering departments. I'll bet those universities didn't have to eliminate an established department to get there, though. The budget goes into effect July 1, 2012 (this is non-negotiable), and the dean's proposal is currently set to be finalized some time next week. We don't have much time! My question to everyone here is this: Are we overreacting to this plan, or are we justified? And could you explain why or why not? It's obvious that CISE students will resist any cuts to our department, but I'm curious to see what other people in the field have to say. Any feedback is greatly appreciated. I will select the answer that saves our department. Just kidding, I'll pick the one that best explains why this is a good or bad decision for the dean to make. Please note that anything you say can and will be used to further our cause (and we might track you down if you provide a compelling argument against us).

    Read the article

  • Talking JavaOne with Rock Star Adam Bien

    - by Janice J. Heiss
    Among the most celebrated developers in recent years, especially in the domain of Java EE and JavaFX, is consultant Adam Bien, who, in addition to being a JavaOne Rock Star for Java EE sessions given in 2009 and 2011, is a Java Champion, the winner of Oracle Magazine's 2011 Top Java Developer of the Year Award, and recently won a 2012 JAX Innovation Award as a top Java Ambassador. Bien will be presenting the following sessions:

    - TUT3907 - Java EE 6/7: The Lean Parts
    - CON3906 - Stress-Testing Java EE 6 Applications Without Stress
    - CON3908 - Building Serious JavaFX 2 Applications
    - CON3896 - Interactive Onstage Java EE Overengineering

    I spoke with Bien to get his take on Java today. He expressed excitement that the smallest companies and startups are showing increasing interest in Java EE. "This is a very good sign," said Bien. "Only a few years ago J2EE was mostly used by larger companies -- now it becomes interesting even for one-person shows. Enterprise Java events are also extremely popular. On the Java SE side, I'm really excited about Project Nashorn." Nashorn is an upcoming JavaScript engine, developed fully in Java by Oracle, and based on the Da Vinci Machine (JSR 292), which is expected to be available for Java 8. Bien expressed concern about a common misconception regarding Java's mediocre productivity. "The problem is not Java," explained Bien, "but rather systems built with ancient patterns and approaches. Sometimes it really is 'Cargo Cult Programming.' Java SE/EE can be incredibly productive and lean without the unnecessary and hard-to-maintain bloat. The real problems are 'Ivory Towers' and not Java's lack of productivity." Bien remarked that if there is one thing he wanted Java developers to understand it is that, "Premature optimization is the root of all evil. Or at least of some evil. Modern JVMs and application servers are hard to optimize upfront. It is far easier to write simple code and measure the results continuously. Identify the hotspots first, then optimize." He advised Java EE developers to, "Rethink everything you know about Enterprise Java. Before you implement anything, ask the question: 'Why?' If there is no clear answer -- just don't do it. Most well known best practices are outdated. Focus your efforts on the domain problem and not the technology." Looking ahead, Bien said, "I would like to see open source application servers running directly on a hypervisor. Packaging the whole runtime in a single file would significantly simplify the deployment and operations." Check out a recent Java Magazine interview with Bien about his Java EE 6 stress monitoring tool here. Originally published on blogs.oracle.com/javaone.

    Read the article

  • Taking the fear out of a Cloud initiative through the use of security tools

    - by user736511
    Typical employees, constituents, and business owners interact with online services at a level where their knowledge of back-end systems is low, and most of the time there is no interest in knowing the systems' architecture. Most application administrators, while partially responsible for these systems' upkeep, have very little interaction with them, at least at an operational, platform level. Of greatest interest to these groups is the consistent, reliable, and manageable operation of the interfaces with which they communicate. Introducing the "Cloud" topic in any evolving architecture automatically raises concerns for data and identity security, simply because of the perception that when not owning the silicon, enterprises are not able to manage its content. But is this really true?

    In the majority of traditional architectures, data and the applications that access it are physically distant from the organization that owns them. They may reside in a shared data center, or in a geographically convenient location that spans large organizations' connectivity capabilities. In the end, very often, the model of a "traditional" architecture is fairly close to the "new" Cloud architecture. The most notable difference is that, by nature, a Cloud setup uses security as a core function, and not as a necessary add-on. Therefore, following best practices, one can say that data can be safer in the Cloud than in traditional, stove-piped environments where data access is segmented and difficult to audit. The caveat is, of course, what "best practices" consist of, and here is where Oracle's security tools are perfectly suited for the task. Since Oracle's model is to support very large organizations, it is fundamentally concerned with distributed applications, databases, etc. and their security, and the related Identity Management products and DB Security options reflect that concept. In the end, consumers of applications and their data are served more safely in a controlled Cloud environment, while realizing the many cost savings associated with it. Having very fast resources to serve them (such as the Exa* platform) makes the concept even more attractive.

    Finally, if a Cloud strategy does not seem feasible, consider the pros and cons of a traditional vs. a Cloud architecture. Using the exact same criteria and business goals/traditions, and with Oracle's technology, you might be hard pressed to justify maintaining the technical status quo on security alone. For additional information please visit Oracle's Cloud Security page at: http://www.oracle.com/us/technologies/cloud/cloud-security-428855.html

    Read the article

  • PASS 13 Dispatches: moving to the cloud

    - by Tony Davis
    PASS Summit 13, Day 1 keynote by Quentin Clarke, and we're hearing about “redefining mission critical in the cloud”. With a move to the Windows Azure cloud comes the promise of capacity on demand, automatic HA, backups, patching and so on, as well as passing responsibility to MS for managing hardware, upgrades and so on. However, for many databases and applications the best route to the cloud is not necessarily obvious.

    For most, the path of least resistance is IaaS – SQL Server in an Azure VM. It removes the hardware burden, but you still have to manage your databases, and implementing HA for SQL Server is your responsibility. Also, scaling up comes at quite a cost – the biggest VM (8 CPU cores, 56 GB RAM, 16 1TB drives with 500 IOPS each) weighs in at over $4500 per month.

    With PaaS, in the form of Windows SQL Database, you get a “3-copies replica set” so HA comes out of the box, and it removes the majority of the administration burden, but you are moving your database into a very different environment. For a start, it's a shared environment, with other customers using the same compute nodes in the cluster, and potentially even sharing the same database (multi-tenancy). Unless you pay for SQL DB Premium edition, the resources available for your workload will depend on how nicely others “play” in the shared environment. You'll potentially need to do a lot of tuning and application rewriting to avoid throttling issues, optimising application-database communication to deal with increased latency between the two, and so on. You'll need aggressive application caching. You'll also need retry logic, and to deal with (expected) node failure and the need to reconnect.

    In Tuesday's PASS Summit pre-con from the SQLCAT team, they spent a lot of time covering some of the telemetric techniques (collecting the necessary monitoring data into Azure storage) to perform capacity planning and work out the hotspots and bottlenecks in your cloud applications. Tools like WAD (Windows Azure Diagnostics), performance counters, SQL Database DMVs, and others will be essential. Of course, to truly exploit the vast horizontal scaling that is available from the existence of thousands of compute nodes, you'll also need to consider how to “shard” your data so Azure can move it between nodes at will.

    Finding the right path to the Cloud isn't easy, but it's coming. I spoke to people one year ago who saw no real benefit in trying to move their infrastructure and databases to the cloud, but now at their company, it's the conversation that won't go away. Tony.
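    The retry point is worth making concrete. Below is a minimal, illustrative Python sketch of the kind of reconnect-and-retry wrapper that SQL Database workloads end up needing; the connection string, the blanket treatment of every pyodbc error as transient, and the backoff policy are all simplifying assumptions for brevity, not guidance from the session.

        # Sketch only: retry a SQL Database call with exponential backoff.
        # Reconnect on every attempt, because throttling or node failover
        # drops connections -- a cached connection is what you cannot trust.
        import time
        import pyodbc

        CONN_STR = ("DRIVER={SQL Server Native Client 11.0};"
                    "SERVER=tcp:myserver.database.windows.net,1433;"
                    "DATABASE=MyDb;UID=user@myserver;PWD=secret;Encrypt=yes")

        def run_with_retry(sql, params=None, attempts=5, base_delay=1.0):
            for attempt in range(1, attempts + 1):
                try:
                    conn = pyodbc.connect(CONN_STR, timeout=30)
                    try:
                        cur = conn.cursor()
                        if params:
                            cur.execute(sql, params)
                        else:
                            cur.execute(sql)
                        return cur.fetchall()
                    finally:
                        conn.close()
                except pyodbc.Error:
                    if attempt == attempts:
                        raise                                      # out of retries
                    # naive policy: assume transient, back off 1s, 2s, 4s, 8s...
                    time.sleep(base_delay * (2 ** (attempt - 1)))

        rows = run_with_retry("SELECT TOP 10 name FROM sys.objects")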

    Read the article

  • Appropriate response when client empowered with CMS destroys content to his own will

    - by dukeofgaming
    So, I just recently closed a website project that pretty much was The Oatmeal's Design Hell, but with content. The client loved the site at the beginning, but started getting other people involved and mercilessly bombarding us with their opinions. We served a carefully thought-out content strategy (which the client approved) and extremely curated copywriting that took us four months, after at least 5 requirement changes (new content, new objectives for the business, changed offerings, new mindfaps, etc.) that required us to rewrite the content about 3 times. The client never gave timely feedback, even though we kept the process open for him and his people to see (content being developed transparently in Google Docs). Near the end of the project he still wanted to make changes but wanted us to finish already (there are not enough words in the world to even try to make sense of this). So I explained to him the obvious implications of the never-ending requirement changes and advised him to take the time to gather his thoughts with his own team and see the new content introduced as a new content maintenance project. He happily accepted, but on the day of training/delivery things went very wrong and we have no idea why.

    The client didn't even allow the site to be out for a week with the content we developed for him, and quickly replaced us with a Joomla-savvy intern so that he could completely destroy the content with shallow, unstructured, tasteless and plain wordsmithing (and I'm not even being visceral). Worst insult of all, he revoked our access to his server and the deployed CMS not even 10 minutes after being given his administrator account (we realized the day after that he did it in our own office, the nerve!). Everybody involved in the team is enraged and insulted. I never want to see this happen again.

    So, to try to make sense of this situation and avoid it in the future with new clients, I have two concrete questions:

    1. Is there even an appropriate course of action with a client like this? Or is he just not worth the trouble of analyzing (blindly hoping this never repeats again)?
    2. In the exercise of trying to blame ourselves instead of the client and take this as a lesson of... something, how should we set expectations for new clients about the working terms, process and final product, so that they are discouraged from mauling the content to their own contempt once they get the codes to the nukes (access to the CMS)?

    Read the article

  • iPad: How To Soft Reset, Restore Factory Settings And Erase All Content

    - by Gopinath
    The iPad is an amazing gadget from Apple and every one of us would love to own it. If you are lucky enough to own an iPad, here are some basic troubleshooting features that you should be aware of. There is no doubt that the iPad is an amazing gadget, but similar to other electronic gadgets it refuses to work now and then. When your iPad hangs or refuses to respond, you can soft reset it by holding the Sleep/Wake button and the Home button at the same time for at least ten seconds, until the Apple logo appears. This just resets your iPad and is similar to force-rebooting your PC. All your settings and data are untouched during this process. The embedded video below shows how to turn off and soft reset an iPad.

    If iPad Doesn't Work After Soft Reset – Restore Factory Settings

    A soft reset should resolve most iPad issues like hanging, not responding properly, etc. If your iPad still does not work properly after a soft reset, you can try resetting all the settings to factory defaults by navigating through Home screen > Settings > General > Reset > Reset All Settings. This will reset all your settings and other iPad customizations to factory defaults, but your data (images, documents, apps, etc.) is left untouched.

    How To Erase All Content Of iPad

    You may ask, why should I erase all the content of my iPad? Well, maybe you want to sell it off on eBay after erasing all the content, or you want to start afresh, or some other reason. It is easy to reset all the settings of the iPad as well as wipe out all content by navigating through Home screen > General > Reset > "Erase All Content and Settings."

    CC image credit: flickr/korosirego. This article, "iPad: How To Soft Reset, Restore Factory Settings And Erase All Content", was originally published at Tech Dreams.

    Read the article

  • Installing Forms and Reports on a development system

    - by Duncan Mills
    By popular demand I've resurrected / updated one of the old blog postings from Jan Carlin's Blog on GroundSide here. A recent (lengthy) post on the Forms forums chronicles the problems some of you have had installing F&R on a development machine. See the link in the headline of this post for the main one. When installing, here are some points to bear in mind:

    - Download and install Weblogic Server first: http://www.oracle.com/technology/software/products/middleware/index.html
    - Find the Forms and Reports (and Disco and Portal) zip files here. Download them to the desktop (or some other temporary directory of your choosing).
    - Unzip both of the two zip files into the same new directory (maybe called 'stage') and check that you have 4 directories in the stage dir when you are finished unzipping: 'Disk1', 'Disk2', 'Disk3' and 'Disk4'. These folders are specified in the zip file structure and must be preserved for the setup executable to work. If you use WinZip and have a right-click menu option that says "Extract to here", use that by right-click-dragging the zip file onto the newly created directory. Don't use the "Extract to folder %HOME%\Desktop\ofm_pfrd...disk_1of2" option. That will get you into the trouble that was reported early in this thread.
    - Free up as much memory as you can. Stop services and background processes and virus scanners and databases (you don't need a DB to install Forms) and other things lurking about on your machine. You can restart them when the install is done. Around 1.5 GB free real memory should do it. If it doesn't, free up more if you can. Don't change the swap space unless you know what you are doing. Let Windows handle it. A 1 GB machine will likely not be enough. You will likely need at least 2 GB of RAM.
    - Start the install with setup.exe from the 'Disk1' directory.
    - Choose the Install and Configure option unless you have a good reason not to.
    - Choose a unique instance name even if you deinstalled and removed the last install. I suggest using 'asinst_20090722_1' (today's date in ISO format with a running incremented number at the end if you install more than two times on a particular day).
    - Unselect Portal and Discoverer and select the Builders you want.
    - Unselect WebCache.
    - Unselect OHS.
    - Unselect the single sign-on option.
    - Check for any failures and choose the retry option if any occur. If that doesn't fix the problem, call Oracle Customer Support.

    Read the article

  • Iterative and Incremental Principle Series 3: The Implementation Plan (a.k.a The Fitness Plan)

    - by llowitz
    Welcome back to the Iterative and Incremental Blog series. Yesterday, I demonstrated how shorter interval sets allowed me to focus on my fitness goals and achieve success. Likewise, in a project setting, shorter milestones allow the project team to maintain focus and experience a sense of accomplishment throughout the project lifecycle. Today, I will discuss project planning and how to effectively plan your iterations.

    Admittedly, there is more to applying the iterative and incremental principle than breaking long durations into multiple, shorter ones. In order to effectively apply the iterative and incremental approach, one should start by creating an implementation plan. In a project setting, the Implementation Plan is a high level plan that focuses on milestones, objectives, and the number of iterations. It is the plan that is typically developed at the start of an engagement, identifying the project phases and milestones. When the iterative and incremental principle is applied, the Implementation Plan also identifies the number of iterations planned for each phase. The Implementation Plan does not include the detailed plan for the iterations, as this detail is determined prior to each iteration start during Iteration Planning. An individual iteration plan is created for each project iteration.

    For my fitness regime, I also created an “Implementation Plan” for my weekly exercise. My high level plan included exercising 6 days a week and, since I cross train, trying not to repeat the same exercise two days in a row. Because running on the hills outside is the most difficult and consequently the most effective exercise, my implementation plan includes running outside at least 2 times a week. Regardless of the exercise selected, I always apply a series of 6-minute interval sets. I never plan what I will do each day in advance, because there are too many changing factors that need to be considered before that level of detail is determined. If my Implementation Plan included details on the exercise I was to perform each day of the week, it is quite certain that I would be unable to follow my plan to that level. It is unrealistic to plan each day of the week without considering the unique circumstances at that time. For example, what is the weather? Are there conflicting schedule commitments? Are there injuries that need to be considered? Likewise, in a project setting, it is best to plan the iteration details prior to each iteration's start. Join me for tomorrow's blog where I will discuss when and how to plan the details of your iterations.

    Read the article

  • Should I choose Doctrine 2 or Propel 1.5/1.6, and why?

    - by Billy ONeal
    I'd like to hear from those who have used Doctrine 2 (or later) and Propel 1.5 (or later). Most comparisons between these two object relational mappers are based on old versions -- Doctrine 1 versus Propel 1.3/1.4 -- and both ORMs went through significant redesigns in their recent revisions. For example, most of the criticism of Propel seems to center around the "ModelName Peer" classes, which are deprecated in 1.5 in any case. Here's what I've accumulated so far (and I've tried to make this list as balanced as possible...):

    Propel

    Pros:
    - Extremely IDE friendly, because actual code is generated, instead of relying on PHP magic methods. This means IDE features like code completion are actually helpful.
    - Fast (in terms of database usage -- no runtime introspection is done on the database)
    - Clean migration between schema versions (at least in the 1.6 beta)
    - Can generate PHP 5.3 models (i.e. namespaces)
    - Easy to chain a lot of things into a single database query with things like useXxx methods. (See the "code completion" video above)

    Cons:
    - Requires an extra build step, namely building the model classes.
    - Generated code needs to be rebuilt whenever the Propel version is changed, a setting is changed, or the schema changes. This might be unintuitive to some, and custom methods applied to the model are lost. (I think?)
    - Some useful features (i.e. version behavior, schema migrations) are in beta status.

    Doctrine

    Pros:
    - More popular
    - Doctrine Query Language can express potentially more complicated relationships between data than is easily possible with Propel's ActiveRecord strategy.
    - Easier to add reusable behaviors when compared with Propel.
    - DocBlock-based commenting for building the schema is embedded in the actual PHP instead of a separate XML file.
    - Uses PHP 5.3 namespaces everywhere

    Cons:
    - Requires learning an entirely new programming language (Doctrine Query Language)
    - Implemented in terms of "magic methods" in several places, making IDE autocomplete worthless.
    - Requires database introspection and thus is slightly slower than Propel by default; caching can remove this, but the caching adds considerable complexity.
    - Fewer behaviors are included in the core codebase. Several features Propel provides out of the box (such as Nested Set) are available only through extensions.
    - Freakin' HUGE :)

    This I have gleaned, though, only through reading the documentation available for both tools -- I've not actually built anything yet. I'd like to hear from those who have used both tools, to share their experience on the pros/cons of each library, and what their recommendation is at this point :)

    Read the article

  • SSIS Reporting Pack update

    - by jamiet
    It's been a while since I last posted anything in regard to SSIS Reporting Pack, the most recent release being on 27th May 2012, so here is a short update. There is still lots of work to do on SSIS Reporting Pack: lots more features to add, lots of performance work to be done, and a few bug fixes too. I have also been (fairly) hard at work on a framework to be used in conjunction with SSIS 2012 that I refer to as the Restart Framework (currently residing at http://ssisrestartframework.codeplex.com/). There is still much work to be done on the Restart Framework (not least some useful documentation on how to use it), which is why I haven't mentioned it publicly before now, although I am actively checking in changes. One thing I am considering is amalgamating the two projects into one; this would mean I could build a suite of reports that work both against the SSIS Catalog (what you currently know as "SSIS Reporting Pack") and also against this Restart Framework thing. No decision has been made as yet though.

    There have been a number of bug reports and feature suggestions for SSIS Reporting Pack added to the Issue Tracker. Thank you to everyone that has submitted something; rest assured I am not going to ignore them forever. My time is at a premium right now unfortunately due to … well … life… so working on these items isn't near the top of my priority list.

    Lastly, I am actively using SSIS Reporting Pack in a production environment right now and I'm happy to report that it is proving to be very useful. One of the reports that I have put a lot of time into is execution executable duration.rdl, and it's proving very adept at easily identifying bottlenecks in our SSIS 2012 executions: The report allows you to browse through the hierarchy of executables in each execution, and each bar represents the duration of each executable in relation to all the other executables; longer bars are a good indication of where problems might lie. The colour of the bar indicates whether it was successful or not (green=success). Hovering over a bar brings up a tooltip showing more information about that executable. Clicking on a bar allows you to compare this particular instance of the executable against other executions.

    Please do let me know if you are using SSIS Reporting Pack. I would like to hear any anecdotes you might have, good or bad. @Jamiet

    Read the article

  • IDC Analyst Report Touts Oracle–Accenture Strategic Initiative

    - by kristin.jellison
    Hi there, partners! Oracle Engineered Systems have been getting some love lately, and we want to share it with you! The market intelligence and advisory firm IDC recently released a report lauding Oracle and Accenture's strategic initiative to route the performance and flexibility of Oracle Engineered Systems to clients. The report, "Oracle and Accenture Strategic Alliance Places Big Bet on Engineered Systems," by Steve White, reflects a largely positive analysis of the relationship. White notes that the alliance is "one of the largest in the industry." Under the relationship, Accenture has incorporated Oracle Engineered Systems—including Oracle Exadata Database Machine, Oracle Exalogic Elastic Cloud, Oracle SuperCluster, and Oracle Exalytics In-Memory Machine—into its leading datacenter transformation consulting services. Together, the two companies have also created bespoke platforms, such as the Accenture Foundation Platform for Oracle, which helps clients accelerate deployments on Oracle Fusion Middleware, running Oracle Exalogic Elastic Cloud and Oracle Exadata Database Machine.

    Oracle Engineered Systems deliver a single, engineered platform—including server to storage and networking. This makes it easier and cheaper for Accenture clients around the world to prepare their datacenters for managing, processing and analyzing the massive amounts of data they (rightly) anticipate seeing in the next decade. The new solutions can help reduce the effort and cost to migrate any vendor database to an Oracle Engineered Systems platform, which can lower the cost of ownership by up to 50 percent. For its part, Accenture has built a team of 300 consultants to implement and increase the flexibility and stability of client datacenters. This move further expands one of the fastest-growing full-service Oracle Enterprise solutions. Over 52,000 Accenture consultants are qualified to implement, upgrade and outsource the Oracle product suite. Accenture is a Diamond-level member of Oracle PartnerNetwork (OPN).

    For Oracle Partners, this update should give you at least two things to walk away with. First, this initiative is showing signs of success. As Marty Cole, group chief executive for Accenture's Technology growth platform, put it, "We are seeing an increasing number of clients recognizing the value of consolidating their databases and taking advantage of the cost and performance benefits delivered by these solutions." The pipeline is there—and not just for Accenture. Use this example to show your clients that investments in Oracle Engineered Systems are on the rise. Second, recognize that Oracle Engineered Systems represent one of the biggest platforms for growth that Oracle has to offer partners. As part of the agreement, Accenture is able to provide:

    - Platform Readiness Assessments
    - Platform Implementation
    - App Rationalization
    - Database Rationalization
    - Managed Services

    These are all enablement opportunities you can offer customers under Oracle's partner programs—to continue building the value of their investments, and the value of your relationship with Oracle. Take a read through the IDC report. To learn more about the partnership, see this press release. Happy selling! The OPN Communications Team

    Read the article

  • Go Big or Go Special

    - by Ajarn Mark Caldwell
    Watching Shark Tank tonight and the first presentation was by Mango Mango Preserves and it highlighted an interesting contrast in business trends today and how to capitalize on opportunities.  <Spoiler Alert> Even though every one of the sharks was raving about the product samples they tried, with two of them going for second and third servings, none of them made a deal to invest in the company.</Spoiler>  In fact, one of the sharks, Kevin O’Leary, kept ripping into the owners with statements to the effect that he thinks they are headed over a financial cliff because he felt their costs were way out of line and would be their downfall if they didn’t take action to radically cut costs. He said that he had previously owned a jams and jellies business and knew the cost ratios that you had to have to make it work.  I don’t doubt he knows exactly what he’s talking about and is 100% accurate…for doing business his way, which I’ll call “Go Big”.  But there’s a whole other way to do business today that would be ideal for these ladies to pursue. As I understand it, based on his level of success in various businesses and the fact that he is even in a position to be investing in other companies, Kevin’s approach is to go mass market (Go Big) and make hundreds of millions of dollars in sales (or something along that scale) while squeezing out every ounce of cost that you can to produce an acceptable margin.  But there is a very different way of making a very successful business these days, which is all about building a passionate and loyal community of customers that are rooting for your success and even actively trying to help you succeed by promoting your product or company (Go Special).  This capitalizes on the power of social media, niche marketing, and The Long Tail.  One of the most prolific writers about capitalizing on this trend is Seth Godin, and I hope that the founders of Mango Mango pick up a couple of his books (probably Purple Cow and Tribes would be good starts) or at least read his blog.  I think the adoration expressed by all of the sharks for the product is the biggest hint that they have a remarkable product and that they are perfect for this type of business approach. Both are completely valid business models, and it may certainly be that the scale at which Kevin O’Leary wants to conduct business where he invests his money is well beyond the long tail, but that doesn’t mean that there is not still a lot of money to be made there.  I wish them the best of luck with their endeavors!

    Read the article

  • Poll Results: Foreign Key Constraints

    - by Darren Gosbell
    A few weeks ago I did the following post asking people if they used foreign key constraints in their star schemas. The poll is still open if you are interested in adding to it, but here is what the chart looks like as of today. (At the bottom of the poll itself there is a link to the live results; unfortunately I cannot link the live results in here, as the blogging platform blocks the required javascript.)

    Interestingly the results are fairly even. Of the 78 respondents, fractionally over half at least aim to start with referential integrity in their star schemas. I did not want to influence the results by sharing my opinion, but my personal preference is to always aim to have foreign key constraints. At the same time, I am pragmatic about it: I do have projects where for various reasons some constraints are not defined, and I also have other designs that I have inherited, where it would just be too much work to go back and add foreign key constraints. If you are going to implement foreign keys in your star schema, they really need to be there at the start. In fact this poll was the result of a feature request for BIDSHelper asking for a feature to check for null/missing foreign keys, and I am entirely convinced that BIDS is the wrong place for this sort of functionality. BIDS is a design tool; your data needs to be constantly checked for consistency. It's not that I think it's impossible to get a design working without foreign key constraints, but I like the idea of failing as soon as possible if there is an error, and enforcing foreign key constraints lets me "fail early" if there are consistency issues with my data.

    By far the biggest concern with foreign keys is performance, and I suppose I'm curious as to how often people actually measure and quantify this. I worked on a project a number of years ago that had very large data volumes, and we did find that foreign key constraints had a measurable impact, but what we did was to disable the constraints before loading the data, then enable and check them afterwards. This saved us time (although not as much as not having constraints at all), but still let us know early in the process if there were any consistency issues.

    For the people that do not have consistent data: if you have ETL processes that you control that are building your star schema, which you also control, then to be blunt you only have yourself to blame. It is the job of the ETL process to make the data consistent. There are techniques for handling situations like missing data as well as early and late arriving data. Ralph Kimball's book The Data Warehouse Toolkit goes through some design patterns for handling data consistency. Having foreign key relationships can also help the relational engine to optimize queries, as noted in this recent blog post by Boyan Penev.
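    To make the disable-load-re-enable pattern mentioned above concrete, here is a rough sketch of how it could be scripted; it assumes pyodbc against SQL Server, the server and table names are invented, and the inline rows stand in for the real ETL step. The detail that matters is re-enabling WITH CHECK, which is what forces SQL Server to re-validate the rows that were loaded while the constraints were off.

        # Illustrative only: bulk-load a fact table with its foreign keys disabled,
        # then re-enable them WITH CHECK so SQL Server re-validates the loaded rows.
        import pyodbc

        conn = pyodbc.connect("DRIVER={SQL Server};SERVER=ETLSQL01;"
                              "DATABASE=StarSchemaDW;Trusted_Connection=yes",
                              autocommit=True)
        cur = conn.cursor()

        # Disable the fact table's foreign keys for the duration of the load
        cur.execute("ALTER TABLE dbo.FactSales NOCHECK CONSTRAINT ALL")

        try:
            # Stand-in for the real bulk load / ETL step
            rows = [(20120101, 101, 2, 19.99), (20120101, 102, 1, 5.00)]
            cur.executemany(
                "INSERT INTO dbo.FactSales (DateKey, ProductKey, Quantity, Amount) "
                "VALUES (?, ?, ?, ?)", rows)
        finally:
            # Re-enable WITH CHECK: this is the 'fail early' moment if the
            # ETL produced orphan keys, since existing rows are re-validated.
            cur.execute("ALTER TABLE dbo.FactSales WITH CHECK CHECK CONSTRAINT ALL")
            conn.close()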

    Read the article

  • What is Database Continuous Integration?

    - by David Atkinson
    Although not everyone is practicing continuous integration, many have at least heard of the concept. A recent poll on www.simple-talk.com indicates that 40% of respondents are employing the technique. It is widely accepted that the earlier issues are identified in the development process, the lower the cost of fixing them. The worst case scenario, of course, is for the bug to be found by the customer following the product release. A number of Agile development best practices have evolved to combat this problem early in the development process, including pair programming, code inspections and unit testing. Continuous integration is one such Agile concept that tackles the problem at the point of committing a change to source control (alternatively, it can be run on a regular schedule). This triggers a sequence of events that compiles the code and performs a variety of tests. Often the continuous integration process is regarded as a build validation test, and if issues were to be identified at this stage, the testers would simply not 'waste their time' and touch the build at all. Such a 'broken build' will trigger an alert, and the development team's number one priority should be to resolve the issue.

    How application code is compiled and tested as part of continuous integration is well understood. However, this isn't so clear for databases. Indeed, before I cover the mechanics of implementation, we need to decide what we mean by database continuous integration. For me, database continuous integration can be implemented as one or more of the following:

    1) Your application code is being compiled and tested. You therefore need a database to be maintained at the corresponding version.
    2) Just as a valid application should compile, so should the database. It should therefore be possible to build a new database from scratch.
    3) Likewise, it should be possible to generate an upgrade script to take your already deployed databases to the latest version.

    I will be covering these in further detail in future blogs. In the meantime, more information can be found in the whitepaper linked off www.red-gate.com/ci. If you have any questions, feel free to contact me directly or post a comment to this blog post.
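    As a loose illustration of points 2) and 3) above – this is only a sketch, not the Red Gate tooling – a database continuous integration step can be as small as rebuilding a scratch database from the committed creation scripts and failing the build on the first error. The server, database and folder names are hypothetical, and it shells out to the sqlcmd utility.

        # Sketch of a database CI step: drop/recreate a scratch database from the
        # scripts in source control and fail the build on the first error (-b).
        import glob
        import subprocess

        SERVER = "BUILDSQL01"
        DB = "AppDb_CI"

        def run_sql(extra_args):
            # -E: Windows auth, -b: non-zero exit code on the first SQL error
            subprocess.run(["sqlcmd", "-S", SERVER, "-E", "-b"] + extra_args,
                           check=True)

        # Point 2: the database should 'compile' -- rebuild it from scratch each run
        run_sql(["-Q", "IF DB_ID('{0}') IS NOT NULL DROP DATABASE [{0}]; "
                       "CREATE DATABASE [{0}];".format(DB)])

        # Replay every committed creation script in order; any error fails the build
        for script in sorted(glob.glob("database/create/*.sql")):
            run_sql(["-d", DB, "-i", script])

        # Point 3 (upgrade-script validation) and database tests would hang off here
        print("Database built cleanly from source control")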

    Read the article

  • Live Debugging

    - by Daniel Moth
    Based on my classification of diagnostics, you should know what live debugging is NOT about - at least according to me :-) and in this post I'll share how I think of live debugging. These are the (outer) steps to live debugging:

    1. Get the debugger in the picture.
    2. Control program execution.
    3. Inspect state.
    4. Iterate between 2 and 3 as necessary.
    5. Stop debugging (and potentially start a new iteration going back to step 1).

    Step 1 has two options: start with the debugger attached, or execute your binary separately and attach the debugger later. You might say there is a 3rd option, where the app notifies you that there is an issue, referred to as JIT debugging. However, that is just a variation of the attach, because that is when you start the debugging session: when you attach. I'll be covering in future posts how this step works in Visual Studio.

    Step 2 is about pausing (or breaking) your app so that it makes no progress and remains "frozen". A sub-variation is to pause only parts of its execution, or in other words to freeze individual threads. I'll be covering in future posts the various ways you can perform this step in Visual Studio.

    Step 3 is about seeing what the state of your program is when you have paused it. Typically it involves comparing the state you find with a mental picture of what you thought the state would be, or simply checking invariants about the intended state of the app against the actual state of the app. I'll be covering in future posts the various ways you can perform this step in Visual Studio.

    Step 4 is necessary if you need to inspect more state - rinse and repeat. Self-explanatory, and will be covered as part of steps 2 & 3.

    Step 5 is the most straightforward, with 3 options: detach the debugger; terminate your binary through the normal way that it terminates (e.g. close the main window); or terminate the debugging session through your debugger, with the result that it terminates the execution of your program too. In a future post I'll cover the ways you can detach or terminate the debugger in Visual Studio.

    I found an old picture I used to use to map the steps above onto Visual Studio 2010. It is basically the Debug menu with colored rectangles around each menu item, mapping it to one of the first 3 steps (step 5 was merged with step 1 for that slide). Here it is in case it helps: Stay tuned for more... Comments about this post by Daniel Moth welcome at the original blog.
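    The five outer steps are tool-agnostic, so as a concrete (and deliberately tool-swapped) illustration, here is a tiny sketch that walks the same steps with Python's built-in pdb debugger rather than Visual Studio; the function and values are made up purely for illustration.

        import pdb

        def total(values):
            s = 0
            for v in values:
                s += v               # while paused here you can inspect s and v
            return s

        # Step 1: get the debugger in the picture -- start with it attached.
        # Step 2: control execution -- runcall() breaks at the first line of total().
        # Step 3: inspect state at the (Pdb) prompt:  p s, p v, w, l
        # Step 4: iterate -- n (next) / c (continue), then inspect again.
        # Step 5: stop -- q ends the debugging session (and the run).
        pdb.runcall(total, [1, 2, 3])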

    Read the article

  • Bash arrays and case statements - review my script

    - by Felipe Alvarez
        #!/bin/bash
        # Change the environment in which you are currently working.
        # Actually, it calls the relevant 'lettus.sh' script
        if [ "${BASH_SOURCE[0]}" == "$0" ]; then
          echo "Try running this as \". chenv $1\""
          exit 0
        fi

        usage(){
          echo "Usage: . ${PROG}       -- Shows a list of user-selectable environments."
          echo "       . ${PROG} [env] -- Select environment."
          echo "       . ${PROG} -h    -- Shows this usage screen."
          return
        }

        showEnv(){
          # check if index0 exists, assume we have at least the first (zeroth) element
          #if [ -z "${envList}" ]; then
          if [ -z "${envList[0]}" ]; then
            echo "array \$envList is empty! " >&2
            return 1
          fi
          # Show all elements in array (0 -> n-1)
          for i in $(seq 0 $((${#envList[@]} - 1))); do
            echo ${envList[$i]}
          done
          return
        }

        setEnv(){
          if [ -z "$1" ]; then
            usage
            return
          fi
          case $1 in
            cold)    FILE_TO_SOURCE=/u2/tip/conf/ctrl/lettus_cold.sh;;
            coles)   FILE_TO_SOURCE=/u2/tip/conf/ctrl/lettus_coles.sh;;
            fc)      FILE_TO_SOURCE=/u2/tip/conf/ctrl/lettus_fc.sh;;
            fcrm)    FILE_TO_SOURCE=/u2/tip/conf/ctrl/lettus_fcrm.sh;;
            stable)  FILE_TO_SOURCE=/u2/tip/conf/ctrl/lettus_stable.sh;;
            tip)     FILE_TO_SOURCE=/u2/tip/conf/ctrl/lettus_tip.sh;;
            uat)     FILE_TO_SOURCE=/u2/tip/conf/ctrl/lettus_uat.sh;;
            wellmdc) FILE_TO_SOURCE=/u2/tip/conf/ctrl/lettus_wellmdc.sh;;
            *)       usage; return;;
          esac
          if $IS_SOURCED; then
            echo "Environment \"$1\" selected."
            echo "Now sourcing file \"$FILE_TO_SOURCE\"..."
            . ${FILE_TO_SOURCE}
            return
          else
            return 1
          fi
        }

        main(){
          if [ -z "$1" ]; then
            showEnv
            return
          fi
          case $1 in
            -h) usage;;
            *)  setEnv $1;;
          esac
          return
        }

        PROG="chenv"
        # create array of user-selectable environments
        envList=( cold coles fc fcrm stable tip uat wellmdc )

        main "$@"
        return

    If I could, I'd like to get some feedback on a better way to accomplish any of the following:

    - run through the case statement
    - make the script trivially simple to maintain/upgrade/update

    Read the article

  • Mathemagics - 3 consecutive number

    - by PointsToShare
    © 2011 By: Dov Trietsch. All rights reserved

    Three Consecutive Numbers

    When I was young and handsome (OK, OK, just young), my father used to challenge us with riddles and tricks involving logic, math and general knowledge. Most of the time, at least after reaching the ripe age of 10, I would see through his tricks in no time. This one is a bit more subtle. I had to think about it for close to an hour, and then, when I had the 'AHA!' effect, I could not understand why it had taken me so long. So here it is.

    You select a volunteer from the audience (or a shill, but that would be cheating!) and ask him to select three consecutive numbers, all of them 1 or 2 digits. So {1, 2, 3} would be a good, albeit trivial, set, as would {8, 9, 10} or {97, 98, 99}, but not {98, 99, 100} (why?!). Now, using a calculator - and these days almost every phone has a built-in calculator - he is to perform these steps:

    1. Select a single digit.
    2. Multiply it by 3 and write it down.
    3. Add the 3 consecutive numbers.
    4. Add the number from step 2.
    5. Multiply the sum by 67.
    6. Now tell me the last 2 digits of the result and also the number you wrote down in step 2.

    I will tell you which numbers you selected. How do I do this? I'll give you the mechanical answer, but because I'd like you to have the pleasure of an 'AHA!' effect, I will not really explain the 'why'. So let's say you selected 30, 31, and 32, and that the multiple of 3 you wrote down was 24. Here is what you get:

    30 + 31 + 32 = 93
    93 + 24 = 117
    117 x 67 = 7839, last 2 digits are 39

    So you say "the last 2 digits are 39, and the other number is 24." Now, I divide 24 by 3, getting 8. I subtract 8 from 39 and get 31. I then subtract 1 from this, getting 30, and say: "You selected 30, 31, and 32." This is the 'how'. I leave the 'why' to you!

    That's all folks!

    PS: Do you really want to know why? Post a feedback below. When 11 people or more have asked for it, I'll add a link to the full explanation.
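    For anyone who wants to convince themselves that the mechanics hold without spoiling the 'why', here is a small brute-force check; the script and its variable names are an illustration added here, not part of the original post, and the mod-100 step is only there so the subtraction never goes negative for selections at the very top of the allowed range.

        # Sketch: run the steps for every allowed starting number and single digit.
        for n in $(seq 1 97); do        # n, n+1, n+2 are the three consecutive 1- or 2-digit numbers
            for d in $(seq 1 9); do     # the single digit chosen in step 1
                written=$(( d * 3 ))                                    # step 2
                total=$(( (n + (n + 1) + (n + 2) + written) * 67 ))     # steps 3-5
                last2=$(( total % 100 ))                                # what the volunteer reveals
                guess=$(( (last2 - written / 3 - 1 + 100) % 100 ))      # the recovery described above
                if [ "$guess" -ne "$n" ]; then
                    echo "mismatch for n=$n, d=$d"
                fi
            done
        done
        echo "check complete"

    If no mismatch line is printed, the recovery rule works for every legal selection.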

    Read the article

  • JavaOne Rock Star – Adam Bien

    - by Janice J. Heiss
    Among the most celebrated developers in recent years, especially in the domain of Java EE and JavaFX, is consultant Adam Bien, who, in addition to being a JavaOne Rock Star for Java EE sessions given in 2009 and 2011, is a Java Champion, the winner of Oracle Magazine's 2011 Top Java Developer of the Year Award, and recently won a 2012 JAX Innovation Award as a top Java Ambassador. Bien will be presenting the following sessions:

    - TUT3907 - Java EE 6/7: The Lean Parts
    - CON3906 - Stress-Testing Java EE 6 Applications Without Stress
    - CON3908 - Building Serious JavaFX 2 Applications
    - CON3896 - Interactive Onstage Java EE Overengineering

    I spoke with Bien to get his take on Java today. He expressed excitement that the smallest companies and startups are showing increasing interest in Java EE. “This is a very good sign,” said Bien. “Only a few years ago J2EE was mostly used by larger companies -- now it becomes interesting even for one-person shows. Enterprise Java events are also extremely popular. On the Java SE side, I'm really excited about Project Nashorn.” Nashorn is an upcoming JavaScript engine, developed fully in Java by Oracle and based on the Da Vinci Machine (JSR 292), which is expected to be available with Java 8.

    Bien expressed concern about the common misconception that Java offers only mediocre productivity. “The problem is not Java,” explained Bien, “but rather systems built with ancient patterns and approaches. Sometimes it really is ‘Cargo Cult Programming.’ Java SE/EE can be incredibly productive and lean without the unnecessary and hard-to-maintain bloat. The real problems are ‘Ivory Towers’ and not Java’s lack of productivity.” Bien remarked that if there is one thing he wanted Java developers to understand, it is that “Premature optimization is the root of all evil. Or at least of some evil. Modern JVMs and application servers are hard to optimize upfront. It is far easier to write simple code and measure the results continuously. Identify the hotspots first, then optimize.”

    He advised Java EE developers to “Rethink everything you know about Enterprise Java. Before you implement anything, ask the question: ‘Why?’ If there is no clear answer -- just don't do it. Most well known best practices are outdated. Focus your efforts on the domain problem and not the technology.” Looking ahead, Bien remarked, “I would like to see open source application servers running directly on a hypervisor. Packaging the whole runtime in a single file would significantly simplify the deployment and operations.”

    Check out a recent Java Magazine interview with Bien about his Java EE 6 stress monitoring tool here.

    Read the article

  • Travelling MVP #3: Community event in Varna, Bulgaria

    - by DigiMortal
    The second stop on my DevReach 2012 trip was Varna. We did not have much time to hang around there, but that problem will get fixed next year, if not before. Still, we had our sessions there with Dimitar Georgijev, and I also had the chance to meet local techies. Next time we will have more tech and beers for sure!

    We started in the morning from Bucharest and travelled through Ruse, Razgrad and Shumen to Varna - about 275 km. We used a cab, a local bus and Dimitar's father's car, made one food stop in Ruse, and after that went directly to Varna. Here is our route on the map.

    Varna is a Bulgarian city on the western coast of the Black Sea. I have been there once before this trip, and it is a good place for a vacation under the sun. Autumn there is also milder than here in Estonia (we are on our third day of snow). Bulgaria has some good beers, my favorite mankind killer called rakia, and very good national cuisine. The food is made of fresh ingredients and is a damn good experience. Here are some arbitrarily selected images (you can click on them to view at original size):

    - Old bus “monument” in Razgrad
    - Stuffed peppers, Bulgarian national cuisine
    - Infra-red community having a good time and beers

    We held our sessions in a study classroom of Varna Technical University. It is a little bit old-style university, but everything we needed was there and we had no problems with the machinery. The sessions were the same as in Bucharest. The user group in Varna is brand new, and hopefully it will grow into something bigger one good day. At least I try to make my commits so they get on their feet quicker. As we did not have much time to announce the event, there were about 15 guys listening to us, and I am happy that it was not an over-hyped event, because I was still getting my first experiences with foreign audiences.

    After the sessions we took our stuff to the hotel and went to hang around with the local techies. We had a good time there and made some new friends. Next time I go to Varna I will go back as a more experienced speaker, and I plan to do one tougher and highly challenging session there. Maybe somebody from the Estonian community will join me and then it will be a well-planned surprise attack on Varna :)

    Read the article
