Search Results

Search found 3248 results on 130 pages for 'personal'.


  • Need help with a problem set in a programming contest

    - by topher
    I attended a local programming contest in my country, the "ACM-ICPC Indonesia National Contest 2013". The contest ended on 2013-10-13 15:00:00 (GMT+7) and I am still curious about one of the problems. You can find the original version of the problem here.

    Brief problem explanation: There is a set of "jobs" (tasks) that should be performed on several "servers" (computers). Each job must be executed strictly from its start time Si to its end time Ei, and each server can perform only one task at a time. The complicated part: it takes time for a server to switch from one job to another. If a server finishes job Jx, then to start job Jy it needs an intermission time Tx,y after Jx completes; this is the time the server needs to clean up Jx and load Jy. In other words, job Jy can run after job Jx on the same server if and only if Ex + Tx,y <= Sy. The problem is to compute the minimum number of servers needed to do all the jobs.

    Example: let there be 3 jobs:

        S(1) = 3 and E(1) = 6
        S(2) = 10 and E(2) = 15
        S(3) = 16 and E(3) = 20

        T(1,2) = 2, T(1,3) = 5
        T(2,1) = 0, T(2,3) = 3
        T(3,1) = 0, T(3,2) = 0

    In this example, we need 2 servers: server 1 runs J(1) and J(2), server 2 runs J(3).

    Sample input (the first 3 is the number of test cases, followed by the number of jobs for each case; the second 3 means there are 3 jobs in case 1. Then come Si and Ei for each job, then the T matrix, whose size equals the number of jobs):

        3
        3
        3 6
        10 15
        16 20
        0 2 5
        0 0 3
        0 0 0
        4
        8 10
        4 7
        12 15
        1 4
        0 0 0 0
        0 0 0 0
        0 0 0 0
        0 0 0 0
        4
        8 10
        4 7
        12 15
        1 4
        0 50 50 50
        50 0 50 50
        50 50 0 50
        50 50 50 0

    Sample output:

        Case #1: 2
        Case #2: 1
        Case #3: 4

    Personal comments: The intermission times can be represented as a graph matrix, so I suppose this is a directed acyclic graph problem. The methods I have tried so far are brute force and greedy, but both got Wrong Answer (unfortunately I don't have my code anymore). It could probably be solved by dynamic programming too, but I'm not sure. I really have no clear idea how to solve this problem, so even a simple hint or insight would be very helpful to me.
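    One standard line of attack (a sketch, assuming the condition really is Ex + Tx,y <= Sy): build a DAG with one node per job and an edge x -> y whenever a server that finishes Jx can go on to run Jy. Each server then executes a chain of jobs, i.e. a path in this DAG, so the answer is the minimum number of vertex-disjoint paths covering all nodes. The minimum path cover of a DAG with n nodes equals n minus the size of a maximum bipartite matching between "job just finished" and "job started next". A minimal sketch using Kuhn's augmenting-path algorithm (the function and variable names are mine, not from the problem kit):

        // Minimum path cover on the "can follow" DAG via maximum
        // bipartite matching (Kuhn's algorithm).
        function minServers(S, E, T) {
          const n = S.length;
          const adj = Array.from({ length: n }, () => []);
          for (let x = 0; x < n; x++)
            for (let y = 0; y < n; y++)
              if (x !== y && E[x] + T[x][y] <= S[y]) adj[x].push(y);

          const matchTo = new Int32Array(n).fill(-1); // right node -> matched left node
          function tryAugment(v, used) {
            for (const to of adj[v]) {
              if (used[to]) continue;
              used[to] = true;
              if (matchTo[to] === -1 || tryAugment(matchTo[to], used)) {
                matchTo[to] = v;
                return true;
              }
            }
            return false;
          }

          let matching = 0;
          for (let v = 0; v < n; v++)
            if (tryAugment(v, new Array(n).fill(false))) matching++;
          return n - matching; // every unmatched node starts a new server's chain
        }

        // First sample case: prints 2
        console.log(minServers([3, 10, 16], [6, 15, 20],
                               [[0, 2, 5], [0, 0, 3], [0, 0, 0]]));

    Worked by hand, the same reduction gives 1 and 4 on the other two sample cases, which matches the expected output.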


  • How to enable and connect to RDP on a Windows Azure Web Role Instance?

    - by Enrique Lima
    We all know there have been some updates to Windows Azure, and one of the biggest, I would say, is the capability of remoting into the "OS level" of the image running a role. And I am not talking about the VM Role; I am talking about a Web Role, for example. As developers we use Visual Studio, and when we are getting ready to deploy a project, we have the option of enabling this. Here is how:

    1. Publish the project.
    2. On the Deployment dialog, provide all the details for your account, and before clicking OK, click Configure Remote Desktop connections.
    3. Enable connections and the rest of the configuration. Here there is an extra set of steps. The first thing to know: the certificate used here is different from the other certs you have in place. I created a new one, then went into certmgr.msc, then to Personal, and selected the cert I had just created. Did a right-click, then All Tasks > Export. Because what is needed is a .pfx package, make sure when exporting that you select to export the private key.
    4. Click OK on the Remote Desktop Configuration screen. Now, before you click OK on the Deployment dialog, you will need to visit the Azure Portal and perform the following: go to your hosted services; with the service available, select the Certificates folder location; then select Add Certificate from the toolbar (more like the Azure Portal ribbon) and provide the details to upload the recently created .pfx file. That will create the certificate. Click OK on the deployment dialog; this kicks off the deployment process.
    5. Now we go to the Windows Azure Portal. Here we select the deployed Web Role and Configure RDP.
    6. Time to test. Click on the instance (not the role); this makes the Remote Access Connect button available. A file will also be downloaded to start the connection process.
    7. You will then be prompted for the credentials you configured.
    8. Validate connectivity …
    9. Open IIS Manager …

    From here on, this is a way to manage and work with your instance.


  • [EF + Oracle] Intro

    - by JTorrecilla
    Prologue: I have had a busy time, both personally and at work, and now that I am starting to get more free time, I have decided to start a series about Entity Framework with Oracle. A while ago I had my first experience with EF and Oracle, trying both Oracle 10g Express and Oracle 10g, with the same result: it didn't work. Now I have downloaded Oracle 11g to test again.

    Tools: to start using EF with Oracle we need the following:

    1. Visual Studio 2010 (not the Express edition)
    2. Oracle 11g
    3. The Oracle driver for EF (ODAC)

    Intro: for people who are starting with EF development, I recommend taking a look at Unai Zorrilla's blog; the posts are written in Spanish, but they are great!

    For this series, we are going to define the DB from the Oracle administrator. For that we need to follow these steps:

    1. Create a user with a password. In my example the user will be Jtorrecilla.
    2. Create a tablespace.
    3. Define some example tables. (Image 1)

    When we have created the DB, we start a new project in VS 2010; I will start a C# project. To begin with EF, we need to add a new object to our project, an "ADO.NET Entity Data Model". (Image 2) The next step is to indicate that our model will be based on an existing DB, and to specify the connection string (Images 3 and 4). Once we have selected the connection string, we need to indicate that "sensitive" data will be saved in the connection (Image 5), and in the next step we select the DB objects to use in the project (Image 6). At the end of the selection, we press the Finish button, and it generates an EDMX file added to our solution; the IDE then shows the DB schema with the selected tables and relations. (Image 7)

    An entity is composed of a set of properties (each matching a column of the table in the DB) and navigation properties that represent any relation with other entities.

    Finally: with this chapter we have installed the environment, defined a DB, and configured the solution to start using EF with Oracle. In the next chapter we are going to see what an entity is and how it works. I hope you enjoy this series!


  • My 2011 Professional Development Goals

    - by kerry
    I thought it might be a good idea to post some professional goals for 2011. Hopefully, I can look at this list at the end of the year and have accomplished most of them.

    - Release an Android app to the marketplace. I figured I would put this first because I have one that I have been working on for a while and it is about ready. Along with this, I would like to start another one and continue to develop my Android skills.
    - Contribute free software to the community. Again, I have an SMF plugin that will fill this requirement nicely; I just need to give it some polish and release it. That's not all: I would like to add a few more libraries on GitHub, or possibly contribute to an open source project.
    - Regularly attend user group meetings outside of Java. A great way to meet people and learn new things.
    - Obtain the Oracle Certified Web Developer certification. I got the SCJP a few years ago and would like to obtain another one; one step closer to Certified Enterprise Architect.
    - Learn Scala. As a language geek, I like to stick to the Pragmatic Programmer's 'learn a new language every year' rule (last year was Ruby). Scala presents some new concepts all wrapped in a JVM-based OOP language. Time to dig in.
    - Write an app using JSF. The new JEE 6 features are pretty slick, and I want to really leverage them in an app.
    - Present at a user group meeting. Last but not least, I would like to improve my public speaking and presentation skills. It is also a great reason to dig in to some of the latest and greatest tech.
    - Use git more, and more effectively. I am trying to move all my personal projects from Subversion to Git.

    That's it. A little daunting, but I am confident I can at least touch on most of these, and it's a great roadmap for my professional development.


  • Legitimate use of the Windows "Documents" folder in programs.

    - by romkyns
    Anyone who likes their Documents folder to contain only things they place there knows that the standard Documents folder is completely unsuitable for this task. Every program seems to want to put its settings, data, or something equally irrelevant into the Documents folder, despite the fact that there are folders specifically for this job¹.

    So that this doesn't sound empty, take my personal "Documents" folder as an example. I don't ever use it, in that I never, under any circumstances, save anything into this folder myself. And yet, it contains 46 folders and 3 files at the top level, for a total of 800 files in 500 folders. That's 190 MB of "documents" I didn't create. Obviously any actual documents would immediately get lost in this mess.

    My question is: can anything be done to improve the situation sufficiently to make "Documents" useful again, say over the next 5 years? Can programmers somehow be educated en masse not to use it as a dumping ground? Could the OS start reporting some "fake" location hidden under AppData through the existing APIs, while only allowing Explorer and the various Open/Save dialogs to know where the "real" Documents folder resides? Or are any attempts completely futile or even unnecessary?

    ¹ For the record, here's a quick summary of the various standard directories that should be used instead of "Documents" (a minimal code sketch follows the list):

    - RoamingAppData for user-specific data and settings. This is the directory to use for user-specific non-temporary data. Anything placed here will be available on any machine that a given user logs on to in networks where this is configured. Do not place large files here, though, because they slow down login/logout in such environments.
    - LocalAppData for user-and-machine-specific data and settings. This data differs for every user and every machine. This is also where very large user-specific data should be placed.
    - ProgramData for machine-specific data and settings. These are the same regardless of which user is logged on, and will not roam to other machines in a network.
    - GetTempPath for all files that may be wiped without loss of data when not in use. This is also the place for things like caches, because, like temporary data, a cache does not need to be backed up. Place your huge cache here and you'll save your user some backup trouble.

    "Documents" itself should only ever be used if the user specified it manually by entering a path or selecting it in a Save dialog. That is the only time it is ever appropriate to save stuff in "Documents".
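    To make the list concrete, here is a minimal sketch (Node.js on Windows assumed; the "MyApp" name and layout are made up for illustration) of a program resolving those locations instead of dumping into Documents:

        // Resolve the standard data directories via the usual Windows
        // environment variables rather than hard-coding paths.
        const os = require('os');
        const path = require('path');

        const appName = 'MyApp'; // hypothetical application name
        const dirs = {
          roaming: path.join(process.env.APPDATA, appName),       // RoamingAppData: settings that follow the user
          local:   path.join(process.env.LOCALAPPDATA, appName),  // LocalAppData: per-user, per-machine data
          machine: path.join(process.env.ProgramData, appName),   // ProgramData: machine-wide data
          cache:   path.join(os.tmpdir(), appName + '-cache')     // GetTempPath: disposable caches
        };

        console.log(dirs);

    The native equivalent is SHGetKnownFolderPath (or Environment.GetFolderPath in .NET); the point is the same: every one of these locations is a one-liner away, so there is no excuse for a config file in Documents.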


  • Migrating to Natty (or any other future versions of ubuntu)

    - by Nik
    I am hoping that this question will help other Ubuntu users when migrating to a newer version of Ubuntu, so it should have all the info that they need. Please, when you answer, try to phrase things in points for easy understanding. I understand that some of the questions I ask might have been asked before by other users; in that case just provide the links to those questions. I am running Ubuntu 10.10 Maverick Meerkat, in case that is important.

    I can say for sure that a clean install is definitely better than an upgrade, since it gives you an opportunity to clean your system and get a fresh start. However, some of us like to retain certain software configurations, files, etc. The questions are as follows:

    1. How do you save the configuration files of certain applications, for instance Thunderbird or Firefox, so that you can basically paste them into the new version of Ubuntu? (Thunderbird, for instance, has all my mail, so I definitely would like to back up its configuration and then use it in the new installation.)
    2. I have some applications like MATLAB and Maple (based on Java) installed. When I migrate, can I just copy the entire installation folder to the new version of Ubuntu? Would it still work as it does now if I do that?
    3. When doing a backup, which folders should be backed up? Obviously your personal files would be backed up, but other than that, is it necessary to back up stuff in the home folder, /usr/bin, etc.?
    4. I have BURG installed. I am guessing that it would be erased when I do a clean install, along with the program's configuration and everything. How can I back it up?
    5. I am dual-booting Ubuntu alongside Windows 7. When I perform the clean install of Ubuntu, would GRUB (the bootloader) be removed and in any way jeopardize my Windows installation?
    6. Over time I have added a lot of PPAs, which are of course compatible with my current Ubuntu version. How do I make a backup of all my PPAs, and would they be compatible with the newer version of Ubuntu when I restore them?

    I hope this covers all the questions or doubts that a user might face when thinking about performing a clean install of their system. If I missed anything, please mention it in a comment and I will add it to my answer.


  • WebCenter Customer Spotlight: Alberta Agriculture and Rural Development

    - by me
    Author: Peter Reiser - Social Business Evangelist, Oracle WebCenter

    Solution Summary
    Alberta Agriculture and Rural Development is a government ministry that works with producers and consumers to create a strong, competitive, and sustainable agriculture and food industry in the province of Alberta, Canada. The primary business challenge faced by the Alberta Ministry of Agriculture was that of managing the rapid growth of their information. They needed to incorporate a system that would work across 22 different divisions within the ministry and deliver an improved and more efficient experience for desktop, Web, and mobile users, while addressing their regulatory compliance needs as part of the Canadian government. The customer implemented a centralized Enterprise Content Management solution based on Oracle WebCenter Content and developed a strong and repeatable information life cycle management methodology across all their 22 divisions and agencies. With the implemented solution, Alberta Agriculture and Rural Development centrally manages over 20 million documents for 22 divisions and agencies, and they have improved the time required to find records, the reliability of information, the speed and accuracy of reporting, and data security.

    Company Overview
    Alberta Agriculture and Rural Development is a government ministry that works with producers and consumers to create a strong, competitive, and sustainable agriculture and food industry in the province of Alberta, Canada.

    Business Challenges
    The business users were overwhelmed by growth in documents (over 20 million files across 22 divisions and agencies) and it was difficult to find and manage documents and versions. There was a strong need for a personalized, easy-to-use, secure, and dependable method of managing and consuming content via desktop, Web, and mobile, while improving efficiency and maintaining regulatory compliance by removing the risk of non-uniform approaches to retention and disposition.

    Solution Deployed
    As a first step, Alberta Agriculture and Rural Development developed a business case with clearly defined business drivers:

    - Reduce time required to find records
    - Locate "lost" records
    - Capture knowledge lost through attrition
    - Increase the ease of retrieval
    - Reduce personal copies
    - Increase reliability of information
    - Improve speed and accuracy of reporting
    - Improve data security

    The customer implemented a centralized Enterprise Content Management solution based on Oracle WebCenter Content. They used an incremental implementation approach aligned with their divisional and agency structure, which allowed continuous process improvement. This led to a very strong and repeatable information life cycle management methodology across all their 22 divisions and agencies.

    Business Results
    Alberta Agriculture and Rural Development achieved impressive business results:

    - Centrally managing over 20 million files for 22 divisions and agencies
    - A federated model to manage documents in SharePoint and other applications
    - Records management for both paper and electronic records
    - Reduced time required to find records
    - Increased ease of retrieval
    - Increased reliability of information
    - Improved speed and accuracy of reporting
    - Improved data security

    Additional Information: Oracle Open World 2012 Presentation, Oracle WebCenter Content


  • PASS Summit for SQL Starters

    - by Davide Mauri
    I’ve received a bunch of emails from PASS Summit “First Timers” who are also somehow new to SQL Server (by “somehow” I mean people with less than 6 months’ experience but with some basic knowledge of the SQL Server engine) or are catching up from SQL Server 2000. The common question regards the sessions one should not miss in order to:

    - have a broad view of the entire SQL Server platform
    - have some insight into some specific areas of SQL Server

    Given that I’m on (semi-)vacation and that I have more free time (not true: I have to prepare slides & demos for several conferences, PASS Summit - Building the Agile Data Warehouse with SQL Server 2012 - and PASS 24H - Agile Data Warehousing with SQL Server 2012 - among them…but let’s pretend it to be true), I’ve decided to write a post to answer these common questions. Of course this is my personal point of view, and given that the number and quality of sessions that will be delivered at PASS Summit is so high that it is very difficult to make a choice, feel free to jump into the discussion and leave your feedback or, even better, answer with another post. I’m sure it will be very helpful to all the SQL Server beginners out there.

    I’ve imposed on myself a maximum of 6 sessions for each track. Why 6? Because it’s the maximum number of sessions you can follow in one day, and given that all the sessions will be on the Summit DVD, they are the answer to the following question: “If I have one day to spend in training, which sessions should I watch?”. Of course a Summit is not like a course, so a lot of very basic concepts of well-established technologies won’t be found here. Analysis Services, Integration Services, and MDX are not part of the Summit this time (at least not the basics of them).

    Enough with that; let’s start with the session list, ideal for a good overview of the whole SQL Server platform:

    - Geospatial Data Types in SQL Server 2012
    - Inside Unstructured Data: SQL Server 2012 FileTable and Semantic Search
    - XQuery and XML in SQL Server: Common Problems and Best Practice Solutions
    - Microsoft's Big Play for Big Data
    - Dashboards: When to Choose Which MSBI Tool
    - Microsoft BI End-User Tools 360°

    As for Database Development, I recommend the following sessions:

    - Understanding Transaction Isolation Levels
    - What to Look for in Execution Plans
    - Improve Query Performance by Fixing Bad Parameter Sniffing
    - A Window into Your Data: Using SQL Window Functions
    - Practical Uses and Optimization of New T-SQL Features in SQL Server 2012
    - Taking MERGE Beyond the Basics

    For Business Intelligence Information Delivery:

    - Analyzing SSAS Data with Excel
    - Building Compelling Power View Reports
    - Managed Self-Service BI
    - PowerPivot 101
    - SharePoint for Business Intelligence
    - The Best Microsoft BI Tools You've Never Heard Of

    And for Business Intelligence Architecture & Development:

    - BI Power Hour
    - Building a Tabular Model Database
    - Enterprise Information Management: Bringing Together SSIS, DQS, and MDS
    - SSIS Design Patterns
    - Storing Columnstore Indexes
    - Hadoop and Its Ecosystem Components in Action

    Besides the listed sessions, First Timers should also take a look at the page PASS set up for them: http://www.sqlpass.org/summit/2012/Connect/FirstTimers.aspx

    See you at PASS Summit!


  • How should I land my next consulting gig? [closed]

    - by MrOodles
    For the last couple of years, I've been working on speculative projects, expanding my skill set with side work, and paying the bills while having a blast consulting for startups. However, for a number of personal reasons, I need to spend the next 6-9 months maximizing my cash income. I want to put this into effect starting in early August, so that means I have one month to put the necessary client list/portfolio/resume together to start making this happen.

    As a programmer, I am very proficient in building Django web apps. I can write the necessary SQL, Python, JavaScript, and CSS to build every part of a Django app, and then do the system administration necessary to deploy on AWS using EC2. I can also rig up a CDN to work seamlessly with the app using S3 and CloudFront. I have built GIS applications using GeoDjango and PostGIS, and I have constructed social video apps by implementing Encoding.com as a service to prepare raw video files for consumption on the web. I am also moderately proficient in programming PHP, Java, and C#. I have built web apps in PHP, and desktop apps in Java and C#. I have dabbled with Android applications and iPhone apps, but nothing I would show off. I have experience doing SEO, social media marketing, and content marketing; many of my clients have needed their apps promoted after they were built, and I was always happy to oblige when I could. I have also worked with biometrics technology, including fingerprints, for government contractors. This was as much a business analyst role as it was a programming gig, as I had to help answer RFPs, make checklists, and work around reams and reams of regulations to build applications that met very large bureaucratic requirements.

    I only have two real requirements for my next gig(s):

    1. Work remotely. I live in North East Ohio, and I don't plan on leaving, but I wouldn't mind traveling one or two weeks out of every month to service clients who need on-site help.
    2. A $60.00/hr-$∞ USD contracting rate.

    So what should I do for the next 30 days to achieve this? Should I target some large company and learn the requisite buzzwords to impress them? Should I learn some new language or technology? Polish some skill that I already have? Should I build something using my current skill set, or with some new technology? Should I put a website for my consultancy together to market myself? Should I do that using the latest technology x, y, and z? Or should I just slap something up on Tumblr? I'm willing to do anything (moral) over the next 4 weeks to put myself into a position to maximize my income, and I'm open to any and every idea Programmers users may have. Let me hear them.


  • HTML5 game programming style

    - by fnx
    I am currently trying to learn JavaScript in the form of HTML5 games. The stuff I've done so far isn't too fancy, since I'm still a beginner. My biggest concern so far has been that I don't really know the best way to structure my code, since I don't know the pros and cons of the different methods, nor have I found any good explanations of them. So far I've been using the worst (and probably easiest) method of all (I think), since I'm just starting out. For example:

        var canvas = document.getElementById("canvas");
        var ctx = canvas.getContext("2d");
        var width = 640;
        var height = 480;
        var player = new Player("pic.png", 100, 100, ...);
        // also some other global vars...

        function Player(imgSrc, x, y, ...) {
            this.sprite = new Image();
            this.sprite.src = imgSrc;
            this.x = x;
            this.y = y;
            ...
        }
        Player.prototype.update = function() {
            // blah blah...
        }
        Player.prototype.draw = function() {
            // yada yada...
        }

        function GameLoop() {
            player.update();
            player.draw();
            setTimeout(GameLoop, 1000/60);
        }

    However, I've seen a few examples on the internet that look interesting, but I don't know how to properly code in these styles, nor do I know if there are names for them. These might not be the best examples, but hopefully you'll get the point:

    1. (from http://codeincomplete.com/posts/2011/5/14/javascript_pong/)

        Game = {
            variables: {
                width: 640,
                height: 480,
                stuff: value
            },
            init: function(args) {
                // some stuff here
            },
            update: function(args) {
                // some stuff here
            },
            draw: function(args) {
                // some stuff here
            },
        };

    2. (from http://www.felinesoft.com/blog/index.php/2010/09/accelerated-game-programming-with-html5-and-canvas/)

        function Game() {
            this.Initialize = function () {
            }
            this.LoadContent = function () {
                this.GameLoop = setInterval(this.RunGameLoop, this.DrawInterval);
            }
            this.RunGameLoop = function (game) {
                this.Update();
                this.Draw();
            }
            this.Update = function () {
                // update
            }
            this.Draw = function () {
                // draw game frame
            }
        }

    3. (from http://that-guy.net/articles/)

        var engine = {};
        engine.canvas = document.getElementById('canvas');
        engine.ctx = engine.canvas.getContext('2d');
        engine.map = {};
        engine.map.draw = function() {
            // draw map
        }
        engine.player = {};
        engine.player.draw = function() {
            // draw player
        }

    So I guess my questions are: Which is most CPU efficient; is there any difference between these styles at runtime? Which one allows for easy expandability? Which one is the safest, or at least hardest to hack? Are there any good websites where stuff like this is explained? Or does it all come down to just personal preference? :)
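    For what it's worth, one common middle ground (a sketch, not an authoritative answer; the Game and Player names are just illustrative) is to keep prototype-based entities like in the first snippet, but hang everything off a single namespace object built by an immediately-invoked function, so nothing leaks into the global scope, and to drive the loop with requestAnimationFrame instead of setTimeout:

        // Module pattern: one global (Game), private state inside the closure.
        var Game = (function () {
            var canvas = document.getElementById("canvas");
            var ctx = canvas.getContext("2d");
            var entities = [];

            function Player(imgSrc, x, y) {
                this.sprite = new Image();
                this.sprite.src = imgSrc;
                this.x = x;
                this.y = y;
            }
            Player.prototype.update = function () { /* move, collide... */ };
            Player.prototype.draw = function (ctx) { /* draw sprite... */ };

            function loop() {
                entities.forEach(function (e) { e.update(); });
                ctx.clearRect(0, 0, canvas.width, canvas.height);
                entities.forEach(function (e) { e.draw(ctx); });
                requestAnimationFrame(loop); // lets the browser pace the frames
            }

            // public surface
            return {
                start: function () {
                    entities.push(new Player("pic.png", 100, 100));
                    loop();
                }
            };
        })();

        Game.start();

    At runtime the differences between the three styles above are negligible for a small game; the real trade-off is maintainability, and closures (style 1 and this sketch) versus per-instance functions (style 2) mainly affect memory when you create many objects.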


  • core.* files eating up server space (~50MB)

    - by skytreader
    I'm renting server space from someone and, upon logging into my control panel after quite some time, noticed an abnormal spike (~50MB) in the disk usage. Upon investigating, I found a lot of core.* files scattered around my public_html directory. Each one is more than 5MB in size but no more than 6MB. The * part is all numbers (in regex, that would be core\.\d+).

    I downloaded one and checked the contents. There were a lot of balderdash characters (NUL mostly, but also a scattering of ETB, ETX, STX), but there's this block of readable text which says:

        This text is part of the internal format of your mail folder, and is not
        a real message. It is created automatically by the mail system software.
        If deleted, important folder data will be lost, and it will be re-created
        with the data reset to initial values.

    Pretty self-explanatory. A few blocks above that text are some more readable messages that look like logs but are sandwiched between non-printable characters. I've extracted some below:

        Scan not valid for mh mailboxes
        Bogus character 0x%x in news state
        Can't rewrite news state %.80s
        Error closing backup news state %.80s
        No state for newsgroup %.80s found

    Now, a few concerns:

    1. Am I under attack? The messages seem to be about my webmail, but I don't use my personal webmail that much: only for a vanity email address and an inbox for an outdated comments system. However, lately I seem to notice a spike in the spam for my vanity mail. (Note: the comments system is covered by a captcha, but every now and then some spam gets through. My vanity email has a spam filter, but it isn't as good as I'd like.)
    2. If this is a feature, can I turn it off? Is it advisable to? I've only got 150MB, so you see why I'm fretting over a 50MB spike.

    Some final details: my only server-side scripts are in PHP. The directory which accumulated the largest number of these core files is the one containing the WordPress-managed subdomain of my site. I manage my server through cPanel. Lastly, I decided to delete these files, and after some checking nothing seems amiss on my websites or in my mail. They were indeed the ones responsible for the ~50MB spike, as my disk space usage is back to expected levels.


  • Interaction of a GUI-based App and Windows Service

    - by psubsee2003
    I am working on a personal project designed to help manage my media library, specifically recordings created by Windows Media Center. So I am going to have the following parts to this application:

    1. A Windows Service that monitors the recording folder. Once a new recording that meets specific criteria is completed, it will call several third-party CLI applications to remove the commercials and re-encode the video into a more hard-drive-friendly format.
    2. A controller GUI to modify settings of the service, specifically to add new shows to watch for and to modify parameters for the CLI applications.
    3. A standalone (GUI-based) desktop application that can perform many of the same functions as the Windows service, except manually on specific files instead of automatically based on specific criteria.

    (It should be mentioned that I have limited experience with an application of this complexity, and I have absolutely zero experience with Windows Services.)

    Since the 1st and 3rd parts share similar functionality, my design plan is to pull the common functionality into a separate library shared by both, but these two components do not need to interact otherwise. The 2nd and 3rd parts also seem to share some common functionality: both will have a GUI, and both will have to help define similar parameters (one to send to the service and the other to send directly to the CLI applications), so I can see some advantage to combining them into the same application. On the other hand, the standalone application (part 3) really does not need to interact with the service at all, except for possibly sharing a few common default parameters that can easily be put into an XML file in a common location, so it seems to make more sense to just keep everything separate.

    The controller GUI (part 2) is where I am stuck at the moment. Do I just roll this functionality (allowing user interaction with the service to update settings and criteria) into the standalone application? Or would it be a better design decision to keep them separate? Specifically, I'm worried about adding the complexity of communicating with the Windows service to the standalone application when it doesn't need it. Is WCF the right approach to allow the controller GUI to interact with the Windows service? Or is there a better alternative? At the moment, I don't envision a need for a significant amount of interaction, maybe just adding a new task once in a while and occasionally tweaking a parameter, but when something is changed, I do expect the Windows service to immediately use the new settings.


  • Can you work for the big (Google, Microsoft, Facebook etc.) without getting too involved?

    - by Developer Art
    Having seen people talking about interviewing and working for the big companies, I keep wondering how much you are expected to actually get involved there.

    1. I keep seeing folks from Google and Microsoft and others writing in forums, blogging, tweeting, speaking at conferences, and seemingly doing this on a 24/7/365 basis from their office, apartment, hotel, and even plane. Are you really expected to commit that much if you come to work for them? Do they want you to think about your work while you're eating, sleeping, taking a shower, making love, and so on? Can you in fact "switch off" at five and go home, forgetting everything? Perhaps you have a hobby, family life, kids, friends, personal projects, anyone? Is it so that if you work for the big companies then you're expected not to have any life outside of the company? You can't develop your own projects, have your own clients, and just have another life?

    2. The other thing is the work contracts the big companies use. I've heard, for instance, that when you join Microsoft you need to provide a list of projects you're currently working on, and after that anything new you come up with during your employment automatically belongs to the company. Are all of the big companies doing this? Can you refuse to sign a contract until such a clause is removed, or with the big companies is it "take it or leave it" because the legal department won't accept any change? Can you make them write the contract in such a manner that they step away from anything you've developed in your private time?

    Of all the big companies, I have only been at SAP, during my internship. Lately, while browsing through some old papers, I found my old contract, which stipulated they owned everything I developed or invented during my employment; I would never sign that these days. On a side note, I don't think I would return to SAP, since I remember most people there were clueless and gave the impression they were simply sitting out their years waiting for retirement. But anyway, what do the other big companies put in their contracts? How involved do you get when you go working for the big companies? Or are you perhaps expected to be fully committed, body and soul?

    P.S. I'm not planning to join any of them, I'm just curious.


  • Getting started with object detection - Image segmentation algorithm

    - by Dev Kanchen
    Just getting started on a hobby object-detection project. My aim is to understand the underlying algorithms, and to this end the overall accuracy of the results is (currently) more important than actual run time. I'm starting by trying to find a good image segmentation algorithm that provides a good jumping-off point for the object detection phase. The target images would be "real-world" scenes. I found two techniques which mirrored my thoughts on how to go about this:

    1. Graph-based Image Segmentation: http://www.cs.cornell.edu/~dph/papers/seg-ijcv.pdf
    2. Contour and Texture Analysis for Image Segmentation: http://www.eng.utah.edu/~bresee/compvision/files/MalikBLS.pdf

    The first one was really intuitive to understand and seems simple enough to implement, while the second was closer to my initial thoughts on how to go about this (combine color/intensity and texture information to find regions), but it's an order of magnitude more complex (at least for me).

    My question is: are there any other algorithms I should be looking at that provide the kind of results that these two specific papers have arrived at? Are there updated versions of these techniques already floating around? Like I mentioned earlier, the goal is relative accuracy of image segmentation (with an eventual aim of achieving a degree of accuracy in object detection) over runtime, with the algorithm being able to segment an image into "naturally" or perceptually important components, as these two algorithms do (each to varying extents). Thanks!

    P.S.1: I found these two papers after a couple of days of refining my search terms and learning new ones relevant to the exact kind of techniques I was looking for. :) I have just about reached the end of my personal Google creativity, which is why I am finally here! Thanks for the help.

    P.S.2: I couldn't find good tags for this question. If some relevant ones exist, @mods please add them.

    P.S.3: I do not know if this is a better fit for cstheory.stackexchange (or even cs.stackexchange). I looked, but cstheory seems more appropriate for intricate algorithmic discussions than a broad question like this. Also, I couldn't find any relevant tags there either! But please do move it if appropriate.
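    To give a sense of how approachable the first paper is, here is a hedged, minimal sketch of its merge rule on a grayscale image stored as a flat intensity array (the function and parameter names are made up, and a faithful implementation would add the paper's Gaussian pre-smoothing and minimum-component-size post-processing step):

        // Greedy merge over edges sorted by weight: two components merge when
        // the connecting edge is no heavier than either component's internal
        // difference plus a size-dependent tolerance k / |C|.
        function segmentGray(gray, width, height, k) {
            const n = width * height;
            const parent = Int32Array.from({ length: n }, (_, i) => i);
            const size = new Int32Array(n).fill(1);
            const internal = new Float64Array(n); // max internal edge weight per component

            function find(x) {
                while (parent[x] !== x) { parent[x] = parent[parent[x]]; x = parent[x]; }
                return x;
            }

            // 4-connected grid edges weighted by intensity difference
            const edges = [];
            for (let y = 0; y < height; y++) {
                for (let x = 0; x < width; x++) {
                    const i = y * width + x;
                    if (x + 1 < width)  edges.push([i, i + 1,     Math.abs(gray[i] - gray[i + 1])]);
                    if (y + 1 < height) edges.push([i, i + width, Math.abs(gray[i] - gray[i + width])]);
                }
            }
            edges.sort((a, b) => a[2] - b[2]);

            for (const [a, b, w] of edges) {
                const ra = find(a), rb = find(b);
                if (ra === rb) continue;
                if (w <= internal[ra] + k / size[ra] && w <= internal[rb] + k / size[rb]) {
                    parent[ra] = rb;
                    size[rb] += size[ra];
                    internal[rb] = Math.max(internal[rb], internal[ra], w);
                }
            }
            return (i) => find(i); // maps a pixel index to its segment label
        }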


  • List of common pages to have in the footer [closed]

    - by user359650
    I would like to post this question as a reference for webmasters wondering what pages they should include in the footer. I will use answers to complete my initial list:

    About us / About MyCompany / MyCompany
    - About / About us: description of the company, its mission, and its vision.
    - History: summary of milestones achieved by the company.
    - The team / Management / Board of directors: depending on the size of the company, there may be one or more pages describing the people involved in the company, depending on their position.
    - Awards: list of awards received by the company, if any.
    - In the press / They're talking about us: list of links to external websites, usually highly regarded news websites, which mentioned the company in one of their articles.

    Media
    - Wallpapers: wallpapers with the company logo in different colors and formats that fans can set as the desktop image for their computer.
    - Logos: the company logo in different colors and formats that websites/blogs posting about the website can use for illustration purposes.
    - Media kits: documents, usually in PDF format, summarizing the key company figures and facts that journalists can download and read to get a quick overview of the company.

    Misc
    - Contact / Contact us: contact details the company is prepared to disclose, if any (address, email, phone), or a contact form.
    - Careers / Jobs / Join us: list of open vacancies with a contact form to apply.
    - Investors / Partners / Publishers: information and contact forms for companies willing to become investors/partners/publishers, or a login page to access a portal restricted to those who already are.
    - FAQ: list of common questions and answers to guide users and reduce the number of support requests.

    Follow us / Community
    - Facebook / Twitter / Google+: links to the company's pages/accounts on various social networks.

    Legal
    - Terms / Terms of use / Terms & Conditions: rules users must follow when browsing the website.
    - Privacy / Privacy Statement: explanations as to how the company deals with users' personal data and what users can do about it (e.g. request information to be deleted).
    - Cookies: a page that is starting to appear on more and more websites due to new regulation (notably in the EU) imposing more transparency and control for users over cookies (e.g. the BBC cookie page).

    Any input is welcome. PS: if someone with enough rep could add the footer tag, that would be great (min. 300 required).


  • The Connected Company: WebCenter Portal Activity Streams

    - by Michael Snow
    Guest post by Mitchell Palski, Oracle Staff Sales Consultant

    Social media is sure to have made its way into your company or government organization. Whether it's discussion threads, blog posts, Facebook-style profile pages, or just a simple instant messenger application, in one way or another your employees are connected. What are the objectives of leveraging social media in your organization?

    - Facilitating knowledge transfer
    - More effectively organizing team events
    - Generating inter-community discussions to solve problems
    - Improving resource management
    - Increasing organizational awareness
    - Creating an environment of accountability

    Do any of the business objectives above stand out to you as needs? If so, consider leveraging the WebCenter Portal Activity Stream as part of your solution. In WebCenter Portal, the Activity Stream feature provides a streaming view of the activities of your connections, actions taken in portals, and business activities that looks a lot like a combined Facebook and Twitter newsfeed. Activity Stream can note when a user:

    - Posts feedback (comments)
    - Uploads a document
    - Creates a new blog, page, event, or announcement
    - Starts a new discussion
    - Streams messages and attachments entered through WebCenter Publisher (similar to Twitter)

    Through Activity Stream Preferences, you can select which of these activities to show or hide from your personal Activity Stream. Here's what you get:

    - A real-time stream of activities within a portal or sub-portal increases awareness across your organization or within a working group.
    - A complete list of user actions reduces the time-to-find for users that need to interact with the latest activities in your portal.
    - Users can publish to their groups when tasks are finished, for complete group traceability and accountability, as well as improved resource management.
    - Project discussions and shared documents that require the expertise of someone outside of a working group now get increased visibility across your organization.

    There's a reason that commercial social media tools like Facebook and Twitter have been so successful: they spread information in an aesthetically appealing and easy-to-read format. Strategically placing an activity feed within your portal is analogous to sending your employees a daily newsletter, events calendar, recent documents report, and list of announcements - BUT ALL IN ONE!


  • vJUG: Worldwide Virtual JUG Created

    - by Tori Wieldt
    London Java Community leader and technical evangelist Simon Maple has created a Meetup called vJUG, aimed at connecting Java developers in the virtual world. The aims for vJUG are:

    - Get technical leaders from around the world to present to vJUG members (without travel cost concerns!).
    - Work with local JUGs to provide worldwide content to their members and help JUGs present to a worldwide audience.
    - Provide content to devs without access to a local JUG.
    - Be a hub that will stream content from other JUG sessions live.

    The vJUG is not intended to replace local JUG efforts. "The vJUG can never be, and will never be, as vibrant and valuable to its members as a proper local JUG can. Why? Because the true value in JUG meetings are the face to face interactions and personal networking," said Maple. "However, many people do not have access to a really active JUG with great speakers and awesome content. Or, like me, the closest JUG is about 90 mins away." WebEx and Google Hangouts are great, Maple explained, but he hopes vJUG will provide more coordination of online events.

    Maple hopes that in the future, vJUG will provide:

    - An events calendar with reminders and links to upcoming meetings.
    - A newsletter with what's coming up and links to previous sessions.
    - Coordination of links to IRC channels which are active during presentations (to create a feeling of virtual community).
    - Comments and forums around sessions and presentations.
    - A place where physical JUGs could advertise their sessions (i.e. a NY JUG event) to a worldwide audience, when streamed, via an event that people can sign up to.
    - A common WebEx or Hangout.

    Maple encourages both people who need a JUG and existing JUG members to join vJUG. "I'm looking forward to talking with many of you to get members, speakers, and JUG support!" Join vJUG now! (I sense a need for a logo...)


  • Understanding When Social Interactions Should Be Resolved in Another Channel

    - by Christina McKeon
    Guest Blogger: Aphrodite Brinsmead, Senior Analyst at Ovum

    Agents need to respond to customers' social comments and questions quickly and in the right tone. But more importantly, they need to offer resolutions. Customers care most about how long it takes to find information rather than which channel they are using. They choose to use social media because they are comfortable with the channel and it offers a convenient way to communicate.

    Ideally agents will resolve questions within social media, but they need guidance as to how and when to escalate interactions to a more private channel. First, businesses should assess the way in which customers are using social media to communicate with them and categorize posts into groups: complaints, feedback, technical queries or more general support questions. They should then consider the types of interactions that can easily be handled within social media and those that need to be followed up in another channel. This will be very dependent on the industry. Examples of queries that can be resolved in social media include:

    - Shipping pricing and timeframes
    - Outage updates and resolution plans
    - Flight status information
    - Product stock check
    - Technical support videos or forum posts
    - Availability of facilities

    Both customers and agents need to be educated about the types of questions they can expect to resolve within social media. As the channel matures as a customer service tool, it needs to have value other than just as a forum for complaints.

    Social customer service agents need the power to start a web chat or phone call. Any questions where customers need to divulge personal details in order to get a resolution will need to be addressed in a private channel: a private social message, web chat, email or phone call. Customers should never disclose their date of birth, social security, credit card number, or healthcare records in a public forum. Flight issues, changes to a booking, billing queries or account updates will all need to be completed via a private interaction. Agents responding to questions on social media need the ability to start a web chat or phone call with the customer. The customer doesn't want to have to repeat their question and the agent should be empowered to connect customer records and access account or billing information. These agents will need to be trained across different channels and should be able to view all customer communications in one application. They also need to follow up questions that began on a public forum in the initial channel, to make it clear that the issue was addressed. In order to make this possible, social media needs to be integrated as part of a broader customer service strategy.

    Irrespective of how many channels are used to complete an interaction, businesses should prioritize customer satisfaction and issue resolution. They need a clear strategy and trained agents that can handle and respond to social interactions. Follow me on Twitter @diteb.


  • Configuring trace file size and number in WebCenter Content 11g

    - by Kyle Hatlestad
    Lately I've been doing a lot of debugging using the System Output tracing in WebCenter Content 11g. This is built-in tracing in the content server which provides a great level of detail on what's happening under the hood. You can access the settings, as well as a view of the tracing, by going to Administration -> System Audit Information. From here, you can select the tracing sections to include. Some of my personal favorites are searchquery, systemdatabase, userstorage, and indexer. Usually I'm trying to find out some information regarding a search, a database query, or user information. Besides debugging, it's also very helpful for performance tuning.

    One of the nice tricks with the tracing is that it honors the wildcard (*) character. So you can put in 'schema*' and gather all of the schema-related tracing. And you may notice that if you select 'all' and update, it changes to just a *.

    To view the tracing in real time, you simply go to the 'View Server Output' page and the latest tracing information will be at the bottom. This works well if you're looking at something pretty discrete and the system isn't getting much activity. But if you've got a lot of tracing going on, it would be better to go after the trace log file itself. By default, the log files can be found in the <content server instance directory>/data/trace directory. You'll see it named 'idccs_<managed server name>_current.log'. You may also find previous trace logs that have rolled over; in this case they will be identified by a date/time stamp in the name. By default, the server will rotate the logs after they reach 1MB in size, and it will keep the most recent 10 logs before they roll off and get deleted. If your server is in a cluster, then the trace file should be configured to be local to the node, per the recommended configuration settings.

    If you're doing some extensive tracing and need to capture all of the information, there are a couple of configuration flags you can set to control the logs:

        # Change log size to 10MB and number of logs to 20
        FileSizeLimit=10485760
        FileCountLimit=20

    This is set by going to Admin Server -> General Configuration and entering them in the Additional Configuration Variables: section. Restart the server and it should take on the new logging settings.


  • How do I prevent ISPs from killing downloads of files in mid-transfer?

    - by Gorchestopher H
    I run a small website with a few users and low traffic, mostly to share personal MP3 files with a small community. Depending on their ISP, my users can't always download or stream larger files; by larger I mean larger than 1MB. Essentially the host either stops sending or the client stops receiving: one of the links along the connection chain simply ends its connection before the transfer completes. Traceroute shows no connection issues, and there are no connection issues with short transfers that don't take more than a few seconds. It's these 10-second transfers that just end up dying. Even a straight download from a direct link can yield this error if you have the wrong ISP.

    Strangely enough, this is most common with users whose ISPs are essentially independent providers that buy service via a fiber link. Unfortunately these providers aren't very knowledgeable, are unable to do any testing, and insist it's a problem with the host. I have gotten my host to transfer my site to different servers of theirs, to the same effect. Nearly identical sites (affiliate sites, actually) experience no such issue.

    What can I be doing to further troubleshoot this matter? How can I prove that someone is dropping the ball, and identify who that party is? Can I do a 5MB traceroute?

    EDIT: Maybe I can clear up some misconceptions with my question:

    - The files are not very large; they are simply over 2MB.
    - The users do not have "slow" connections; they are at least 5Mbps.
    - This "timeout" happens very quickly, in the realm of 5 seconds, so I don't know if it's a timeout or not. The user often gets 1 or 2MB in this chunk of time.
    - I have tried streaming with a Flash player. I have tried saving the target and forcing the download. I have tried allowing the browser to stream the file. I have tried different browsers (FF, IE, Chrome).
    - Users are able to download identical files when on different hosts.
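    One way to gather hard evidence (a hedged sketch; the URL is a placeholder, and it assumes Node.js is available on an affected user's machine) is to log exactly how many bytes arrive before the connection dies, and whether that count is consistent across runs:

        // Download a file and report bytes received when the transfer
        // completes, aborts, or errors out.
        const https = require('https');

        const url = 'https://example.com/files/test.mp3'; // hypothetical test file
        const started = Date.now();
        let bytes = 0;

        https.get(url, (res) => {
            res.on('data', (chunk) => { bytes += chunk.length; });
            res.on('aborted', () =>
                console.log(`aborted after ${bytes} bytes, ${Date.now() - started} ms`));
            res.on('end', () =>
                console.log(`complete: ${bytes} bytes in ${Date.now() - started} ms`));
        }).on('error', (err) =>
            console.log(`error after ${bytes} bytes: ${err.message}`));

    If the transfer always dies near the same byte count, that points at a proxy or filter with a size cap somewhere along the chain; if it dies near the same elapsed time regardless of bytes received, it looks more like a connection-duration timeout on an intermediate link.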


  • Is it dangerous for me to give some of my Model classes Control-like methods?

    - by Pureferret
    In my personal project I have tried to stick to MVC, but I've also been made aware that sticking to MVC too tightly can be a bad thing, as it makes writing awkward and forces the flow of the program in odd ways (i.e. some simple functions can be performed by something that normally wouldn't perform them, avoiding MVC-related overheads). So I'm beginning to feel justified in this compromise: I have some 'manager' classes that 'own' data and have some way to manipulate it; as such I think they'd count as both part of the model and part of the control, and to me this feels more natural than keeping them separate.

    For instance, one of my managers is the PlayerCharacterManager, which has these methods:

        void buySkill(PlayerCharacter playerCharacter, Skill skill);
        void changeName();
        void changeRole();
        void restatCharacter();
        void addCharacterToGame();
        void createNewCharacter();
        PlayerCharacter getPlayerCharacter();
        List<PlayerCharacter> getPlayersCharacter(Player player);
        List<PlayerCharacter> getAllCharacters();

    I hope the method names are transparent enough that they don't all need explaining. I've called it a manager because it will help manage all of the PlayerCharacter 'model' objects the code creates, and will create and keep a map of these. I may also get it to store other information in the future. I plan to have another two similar classes for this sort of control, but I will orchestrate when and how this happens, and what to do with the returned data, via a pure controller class.

    This splitting up of control between informed managers and the controller, as opposed to operating just through a controller, seems like it will simplify my code and make it flow better. My question is: is this a dangerous choice, in terms of making the code harder to follow/test/fix? Is this something established as good or bad or neutral? I couldn't find anything similar except the idea of Actors, but that's not quite what I'm trying to do.

    Edit: Perhaps an example is needed. I'm using the controller to update the view and access the data, so when I click the 'Add new character to a player' button, it'll call methods in the controller that then go and tell the PlayerCharacterManager class to create a new character instance, call the PlayerManager class to add that new character to the player-character map, then add this information to the database, and tell the view to update any GUIs affected. That is the sort of 'control sequence' I'm hoping to create with these manager classes.


  • Imperative vs. component based programming [closed]

    - by AlexW
    I've been thinking about how programming, and more specifically the teaching of programming, is advocated amongst the community (online). Often I've heard that Ruby and RoR are an ideal platform for learning to program. I completely disagree...

    RoR and Ruby are based on the application of the component-based paradigm, which means they are ideal for rapid application development. This is much like the MVC model in PHP and ASP.NET. But learning a proper imperative language like Java or C/C++ (or even Perl and PHP) is the only way for a new programmer to explore logic itself and not get too bogged down in architectural concerns like the need for separation of concerns and the preference for components.

    Maybe it's a personal preference thing. I rather think that the most interesting aspects of programming are the procedural bits of code I write that actually do stuff, rather than the project planning and modelling that comes about from fully object-oriented engineering or simply using the MVC model. I know this may sound confused to some of you. I feel strongly, though, that the best way for programming to be taught is through imperative and procedural methods. Architectural (component) methods come later, if at all. After all, none of the amazing algorithms that exist were based on OOP practice! It's all procedural code when it comes to the 'magic'. OOP is useful in creating products and utilities; algorithms are what make things happen and move data around, and so imperative (and/or procedural) code is what matters most.

    When I see programmers recommending Ruby on Rails to newbie developers, I think it's just so wrong. Just because you write less code with Ruby does not make it easier to do! It's the opposite: you have to know loads more to appreciate its succinct nature. New coders who really want to understand the nuts and bolts of coding need to go away and figure out writing methods/functions (i.e. imperative programming) and working in a procedural style, in order to grasp the fundamentals first, before looking into architectural ways of working.

    So, my question is: should Ruby ever be recommended as a first language? I think no (obviously)... what arguments are there for it?


  • How to Export a Contact to and Import a Contact from a vCard (.vcf) File in Outlook 2013

    - by Lori Kaufman
    vCard is the abbreviation for Virtual Business Card and is the standard format (.vcf files) for electronic business cards. vCards allow you to create and share contact information over the internet, such as in email messages and instant messaging. You can also use vCards to move contact information from one email or personal information management program to another, as long as both programs support the .vcf file format. vCards can contain name and address information, as well as phone numbers, email addresses, URLs, images, and audio clips. We will show you how to export a contact to and import a contact from a vCard, or .vcf file, in Outlook.

    First, access the People section by clicking People at the bottom of the Outlook window. To view your contacts in business card format, click Business Card in the Current View section of the Home tab. Select a contact by clicking on the name bar at the top of the business card.

    To export the selected contact as a vCard, click the File tab. On the Account Information screen, click Save As in the list of options on the left. The Save As dialog box displays. By default, the name of the contact is used to name the .vcf file in the File name edit box. Change the name, if desired, select a location for the file, and click Save. The contact is saved as a .vcf file.

    To import a vCard, or .vcf file, into Outlook, simply double-click on the .vcf file. By default, .vcf files are automatically associated with Outlook, so the file is opened in Outlook as a contact. Make any changes or additions to the contact in the contact editing window. To save the contact, click Save & Close in the Actions section of the Contact tab. NOTE: Notice that because this contact is new, the full contact editing window displays rather than the Contact Card that displays when double-clicking on a contact. You can open the full contact editing window instead of the Contact Card when editing a contact or searching for a contact.

    The contact is added to the Contacts folder. You can add your contact information to a signature in business card format, and it will display as shown above in emails. We have covered how to create signatures and will be discussing more about signatures and business cards in Outlook.
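    For reference, a .vcf file is just plain text, so you can open an exported card in Notepad. A minimal example looks something like this (the contact details below are made up, and Outlook typically writes version 2.1 cards with more fields than shown here):

        BEGIN:VCARD
        VERSION:2.1
        N:Doe;John
        FN:John Doe
        ORG:Example Corp
        TITLE:Sales Manager
        TEL;WORK;VOICE:(555) 555-0100
        EMAIL:john.doe@example.com
        END:VCARD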


  • Misused mke2fs and cannot boot into system

    - by surlogics
    I installed Ubuntu with Wubi in Windows 7 64-bit, and I had installed Mandriva 2011 from a disk. I tried to learn Linux with Ubuntu and misused mke2fs; after I rebooted my computer, Windows 7 and Ubuntu had crashed. As I have Mandriva, I booted into Mandriva and found:

        # df -h
        /dev/sda7  12G 9.8G 1.5G  88% /
        /dev/sda2  15G 165M  14G   2% /media/logical
        /dev/sda6 119G  88G  32G  74% /media/2C9E85319E84F51C
        /dev/sda5 118G  59G  60G  50% /media/D25A6DDE5A6DBFB9
        /dev/sda9 100G 188M 100G   1% /media/ae69134a-a65e-488f-ae7f-150d1b5e36a6
        /dev/sda1 100M 122K 100M   1% /media/DELLUTILITY
        /dev/sda3  98G  81G  17G  83% /media/OS

        # fdisk /dev/sda
        Command (m for help): p

        Disk /dev/sda: 500.1 GB, 500107862016 bytes
        255 heads, 63 sectors/track, 60801 cylinders, total 976773168 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0xd24f801e

           Device Boot      Start        End    Blocks  Id System
        /dev/sda1            2048     206847    102400   6 FAT16
        /dev/sda2 *        206848   30926847  15360000   7 HPFS/NTFS/exFAT
        /dev/sda3        30926848  235726847 102400000   7 HPFS/NTFS/exFAT
        /dev/sda4       235728864  976771071 370521104   f W95 Ext'd (LBA)
        /dev/sda5       235728896  481488895 122880000   7 HPFS/NTFS/exFAT
        /dev/sda6       727252992  976771071 124759040   7 HPFS/NTFS/exFAT
        /dev/sda7       481500243  506674034  12586896  83 Linux
        /dev/sda8       506674098  514851119   4088511  82 Linux swap / Solaris
        /dev/sda9       514851183  727246484 106197651  83 Linux

        Partition table entries are not in disk order

    I think I may have used the following command:

        mke2fs -j -L "logical" /dev/sda2

    but I had forgotten what kind of partition it was before I turned it into ext3 (perhaps NTFS). Data was not lost, and I can view my files as I could in Windows. In Mandriva, there are the following disks: a 117.2 GB hard disk whose files are the same as my Windows D: (Ubuntu was installed on it); a 119.0 GB disk which is my G:, with my personal files on it; a 12.0 GB disk which is the Mandriva / (which means root); a 101.3 GB disk with nothing but lost+found; DELLUTILITY, which should be the Dell computer utilities pre-installed on my computer; "logical", the disk which I had spoiled (I can view nothing but lost+found on it); and OS, which is the C: in my Windows.

    After I boot, GRUB lets me choose Mandriva or Windows. I chose Windows and it tells me:

        FILE system type unknown, partition type 0x7
        Error 13: Invalid or unsupported executable format

    I suspect something is wrong with the Windows MBR or something similar.

        # cat /boot/grub/menu.lst
        timeout 5
        color black/cyan yellow/cyan
        gfxmenu (hd0,6)/boot/gfxmenu
        default 0

        title linux
        kernel (hd0,6)/boot/vmlinuz BOOT_IMAGE=linux root=UUID=199581b7-ac7e-4c5f-9888-24c4f213cad8 nokmsboot logo.nologo quiet resume=UUID=34c546e4-9c42-4526-aa64-bbdc0e9d64fd splash=silent vga=788
        initrd (hd0,6)/boot/initrd.img

        title linux-nonfb
        kernel (hd0,6)/boot/vmlinuz BOOT_IMAGE=linux-nonfb root=UUID=199581b7-ac7e-4c5f-9888-24c4f213cad8 nokmsboot resume=UUID=34c546e4-9c42-4526-aa64-bbdc0e9d64fd
        initrd (hd0,6)/boot/initrd.img

        title failsafe
        kernel (hd0,6)/boot/vmlinuz BOOT_IMAGE=failsafe root=UUID=199581b7-ac7e-4c5f-9888-24c4f213cad8 nokmsboot failsafe
        initrd (hd0,6)/boot/initrd.img

        title windows
        root (hd0,1)
        makeactive
        chainloader +1

    I can boot into Linux, but not Ubuntu; it boots into Mandriva. I don't have a boot disk. Help me find a way to make it work again.


  • Strangling the life out of Software Testing

    - by MarkPearl
    I recently did a course at the local university on software engineering. At the beginning of the course I looked over the outline of the subject and there seemed to be some really good content. It covered traditional and agile project methodologies, some general communication and modelling chapters, and finished off with testing. I was particularly excited to see the section on testing, as this was something I learnt on my own and see great value in. The course has now ended and I am very disappointed. I now know one of the reasons why so few people, at least in my region, do Test Driven Development, or perform even basic testing methodologies: the topic was too academic!

    Yes, you might be able to list 4 different types of black box test approaches vs. white box test approaches and describe the characteristics of smoke tests, but never during the course did we see an example of an actual test or how it might be implemented! In fact, if I did not have personal experience of applying testing in actual projects, I wouldn't even know what a unit test looked like. Now, what worries me is the following: it took us 6 months to cover the course material, and the other students more than likely came out of that course with little appreciation of the subject. In fact, they now have a very complex view of what a test is, so complex that I think most of them will never attempt it again on their own.

    Secondly, imagine studying to be a dentist without ever actually seeing a tooth. Yes, you might be able to describe a tooth and know what it is made of, but nobody would want a dentist who has never seen a tooth to operate on them. Yet somehow we expect people studying software engineering to do the same? This is not right.

    Now, before I finish my rant, let me say that I know this is not the same everywhere in the world, and that there needs to be a balance between practical implementation and academic understanding. I am just disappointed that this does not seem to be happening at the institution that I am currently studying at ;-(

    Please, if you happen to be a lecturer or teacher reading this post: a combination of theory and practicals goes a long way. We need to up the quality of software being produced, and that starts at the learner level!

