Search Results

Search found 9062 results on 363 pages for 'big empin'.


  • ATI Radeon HD 3200 Graphics driver - Installation Problem

    - by samufi
    I have a fresh installation of Ubuntu 12.04 x86 and I am trying to install the proprietary driver for my "Radeon HD 3200 Graphics" video card. I know that there are already many threads about this topic, but I did not find a solution to my problem.

    For the installation I followed exactly these instructions: What is the correct way to install ATI Catalyst Video Drivers in 12.04 LTS? During the process I faced these problems. I executed

        ~$ debconf libstdc++6 dkms libqtgui4 wget execstack libelfg0 dh-modaliases

    and got (output translated from German):

        debconf: DbDriver "passwords" warning: could not open /var/cache/debconf/passwords.dat: Permission denied
        Can't exec "libstdc++6": No such file or directory at /usr/share/perl/5.14/IPC/Open3.pm line 186.
        open2: exec of libstdc++6 dkms libqtgui4 wget execstack libelfg0 dh-modaliases failed at /usr/share/perl5/Debconf/ConfModule.pm line 59

    Because I had no idea whether it was a big issue, I just continued:

        ~$ sudo apt-get install ia32-libs

    There I got (translated):

        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        Package ia32-libs is not available, but is referred to by another package.
        This may mean that the package is missing, has been obsoleted, or is only available from another source.
        E: Package 'ia32-libs' has no installation candidate

    Once more I went on. The next steps worked quite well. But when I came to

        ~$ sudo dpkg -i *.deb

    I got a popup message, something like "there was a problem with a system application", but no errors were reported in the terminal and the packages seemed to be installed. The ATI Catalyst Control Center now works (amdcccle), but fglrxinfo gave me:

        X Error of failed request: BadRequest (invalid request code or no such operation)
        Major opcode of failed request: 139 (ATIFGLEXTENSION)
        Minor opcode of failed request: 66 ()
        Serial number of failed request: 13
        Current serial number in output stream: 13

    So there is something wrong. (Also, there is no way to enable those nice graphical effects, which is the reason I installed the proprietary driver in the first place.) Because I am working with a completely fresh installation, I don't know how to fix the problem. If anybody could help I would be very thankful! =)
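    A hedged reading of the first two errors, based on the instructions linked above: the failing command looks like an apt-get install line with the leading "sudo apt-get install" accidentally dropped (debconf, libstdc++6 etc. are package names, not a command), and on a 32-bit (x86) installation the ia32-libs package is expected to be unavailable, since it exists only to provide 32-bit libraries on 64-bit (amd64) systems. So the likely intended first step would be:

        sudo apt-get install debconf libstdc++6 dkms libqtgui4 wget execstack libelfg0 dh-modaliases

    and the ia32-libs step can simply be skipped on x86.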


  • Teacher demands excessive/unjustified use of Design Patterns

    - by SoboLAN
    I study computer science and I have a class called "Programming Techniques". Its purpose is to teach us good object-oriented design principles. During the semester we have homework assignments: programs that we must write to demonstrate what we've learned. For each of these assignments, the lab assistant demands that specific design patterns be used. For example, the current assignment is an application used for processing customer orders, and we are required to use either the "Factory Method" or the "Abstract Factory" design pattern for it. It gets even worse: at the end of the semester we must write a program (something more complex) that must use at least one creational pattern, at least one structural pattern and at least one behavioural pattern.

    Is it normal to demand this? I mean, forcing us to design our programs in such a way that a specific design pattern makes sense is just beyond what I consider OK. If I'm a car mechanic and have a huge tool box, then I will use a certain tool from that box if and when the situation demands it. Not more, not less. If my design of the application doesn't call for the use of "Abstract Factory" at all (for example), then why should I implement it?

    I'm not sure yet whether the senior lecturer agrees with what the lab assistant is demanding, but I want to talk to him about it and I need solid arguments to do so. How should I approach this problem with him?

    PS: I'm sure there must be a better way to teach us these things. Maybe have us read about three design patterns each week, and the next week give us a test with small but specific programming or architectural situations/problems. The goal of that test would be to identify which design patterns would make sense and how they could be implemented. This way, he can see whether we understand them.

    EDIT: These assignments are not just 100-line programs; they have quite a lot of requirements and are fairly complicated. This is the reason we have about 2-3 weeks of deadline for each of them. I agree that practicing this is the best way to learn. But shouldn't smaller programs/applications be used for this? Something just for demonstration purposes, not big programs with lots of requirements/classes/etc.


  • ArchBeat Link-o-Rama Top 10 for November 1, 2012

    - by Bob Rhubart
    Hurricane Sandy Edition. Power outages in the Cleveland area made it impossible to publish posts on Tuesday and Wednesday. In my neighborhood most are still without power. The sound of howling winds that dominated on Monday and Tuesday has been replaced by the sound of portable generators. My internet connection was restored only after AT&T U-Verse crewmen hooked up a portable generator to power the relay station up the street. Bear in mind that Cleveland is 500 miles from the Atlantic coast.

    - Mobile Development Platform Strategy Chart: ADF Mobile, WebCenter Sites, Portal, Content and Social. "Unlike desktop web focused efforts, the world of mobile has undergone change at a feverish pace," says social enterprise expert John Brunswick. His extensive post charts various resources that will help you keep up.
    - ADF Essentials - The Bare Necessities | Floyd Teter. The experiment is over... And now Oracle ACE Director Floyd Teter shares his impressions after spending some time with Oracle ADF Essentials, the free version of Oracle ADF.
    - Expanding the Oracle Enterprise Repository with functional documentation. Capgemini middleware specialist Marc Kuijpers shares information on how Oracle Enterprise Repository can be configured "to contain functional assets, i.e. functional designs, use cases and a logical data model" to aid in SOA governance efforts.
    - A review of Oracle SOA Suite 11g Administrator's Handbook | RedStack. "More so than any other single piece of content that I have seen on the topic, it provides the information that a SOA administrator needs to know in order to successfully configure, manage, monitor, troubleshoot and backup an Oracle SOA environment." So says Oracle Fusion Middleware A-Team solution architect Mark Nelson of Oracle SOA Suite 11g Administrator's Handbook, by Ahmed Aboulnaga and Arun Pareek.
    - Eating our own dog food – Oracle's internal deployment of Oracle IDM. Oracle Fusion Middleware A-Team member Brian Eidelman recommends the recent podcast on Oracle's internal deployment of Oracle OAM and OID. "This was a big project that involved migrating a bunch of critical, high volume applications to leverage OAM and OID," says Eidelman. "So I suggest you tune in to see and hear more about how we deploy our own software."

    Thought for the Day: "Anyone who says they're not afraid at the time of a hurricane is either a fool or a liar, or a little bit of both." (Anderson Cooper) Source: BrainyQuote


  • Go for the Deep Dive on Oracle Products and Technology

    - by Oracle OpenWorld Blog Team
    by Karen Shamban

    Oracle University gives you more learning for your conference investment. It's easier than ever before to get in-depth Oracle product and technology training if you're attending any of the Oracle conferences this fall, including Oracle OpenWorld, the Oracle Customer Experience Summit @ OpenWorld, the Oracle PartnerNetwork Exchange @ OpenWorld, and MySQL Connect.

    Why is it easier? Because Oracle University preconference training takes place on Sunday, September 30 from 8:00 a.m. to 3:30 p.m. And you're going to be in town for the conference anyway, right? The training ends early enough in the afternoon that you'll still be able to get good seats for conference opening keynotes and get psyched for the welcome reception that follows. Each session will be taught by an expert Oracle University instructor and will be fact-packed with demos and tips to help you do more than ever before with your Oracle product and technology investment. The training sessions being offered include:

    Applications:
    - PeopleSoft Test Framework Script Creation and Optimization
    - New Integration Technologies for PeopleTools 8.52
    - Oracle Fusion Applications: Security Fundamentals

    Database and Systems:
    - Certification Exam Cram: Oracle Database 11g: New Features for Administrators
    - Exadata Database Machine Administration Workshop
    - Introduction to Big Data
    - Using Oracle Enterprise Manager Cloud Control 12c
    - Using Java - for PL/SQL and Database Developers

    Fusion Middleware:
    - Developing Portable Java EE Applications with the Enterprise JavaBeans 3.1 API and Java Persistence API 2.0
    - Developing Secure Java Web Services
    - How The Latest Java EE and SOA Help in Architecting and Designing Robust Enterprise Applications
    - Oracle Business Intelligence 11g: Overview to Analyses and Dashboards
    - Oracle Fusion Middleware 11g: Build Applications with ADF I
    - Oracle Fusion Middleware 11g: Administer Forms Services
    - Oracle SOA Suite 11g Administration
    - WebLogic Server Administration Essentials

    Don't miss this great opportunity to maximize your Oracle OpenWorld experience and investment. Learn more about Oracle University training sessions.


  • Leveraging Social Networks for Retail

    - by David Dorf
    For retailers, social media is all about B2C2C: Business to Consumer to Consumer, or more specifically, retailer to influencer to consumer. Traditional marketing targeted mass media, trying to expose the message to as many people as possible. While effective, this approach has never been very efficient, with high costs for relatively low penetration. Then it was thought that marketers should focus their efforts on a relative few super-influencers who would then sway the masses. History shows a few successes with this approach, but it lacked any consistency or predictability. After all, if super-influencers were easy to find, most campaigns would easily go viral. Alas, research shows that most widespread trends were the result of several fortunate events, including some luck.

    So do people exert influence over each other when it comes to purchase decisions? Of course they do, all the time. But that influence is usually limited to a small set of friends and a specific specialization. For instance, although I have 165 friends on Facebook, I am only able to influence my close friends and family on PC purchases, and I have no sway at all for fashion purchases. People trust my knowledge of technology, but nobody asks my advice on shoes.

    How then should retailers leverage social networks in order to reinforce brand image and push promotions? Two obvious ways are Like and Share. Online advertisements or wall postings receive more clicks when the viewer sees that friends have "liked" the posting. That's our modern-day version of word-of-mouth advertising. Statistics show that endorsements from friends make it more likely a person will engage. If my friends and I liked it, then I might also "share" it (or "retweet" it, in the case of Twitter) with other friends. In that case the retailer has paid for X showings of the advertisement, but sharing has pushed it to an additional Y people at no cost. And further, the implicit endorsement by the sharer makes it more likely the recipient will engage.

    So a good first step is to find people active in social networks who will Like and Share in order to exert influence. It's still tough to go viral, but doubling engagement is still a big step in the right direction. More complex social graph analysis would be a second step, but I'll leave that topic for another day. If you're interested in the academic side of social dynamics, I suggest reading Duncan Watts' work.


  • SQL SERVER – BI Quiz Hint – Performance Tuning Cubes – Hints

    - by pinaldave
    I earlier wrote about the SQL BI Quiz over here and here. The details of the quiz are as follows:

    Working with huge data is very common in data warehousing. It is necessary to create cubes on the data to make it meaningful and consumable. There are cases when retrieving the data from a cube takes a lot of time. Let us assume that your cube has been returning data very quickly; suddenly, one day, it returns the data very slowly. What are the three things you would do to diagnose this? After diagnosing, what will you do to resolve the performance issue? Participate in my question over here.

    I requested BI expert Jason Thomas to help with a few hints for blog readers. He is one of the leading SSAS experts and writes about complicated subjects in simple words. If queries were executing properly before but now take a long time to return the data, it means that there has been a change in the environment in which they are running. Some possible changes are listed below:

    1) Data factors: Compare the data size then and now. An increase in data can result in different execution times. Poorly written queries as well as poor design will not start showing issues until the data grows. How to find it out? (Answer: SQL Server Profiler and Perfmon counters can be used for identifying the issues and performance tuning the MDX queries.)

    2) Internal factors: Is some slow MDX query (or are multiple MDX queries) running at the same time that was not running when you tested before? Is there any locking happening due to proactive caching or processing operations? Are the measure group caches being cleared by processing operations? (Answer: Again, Profiler and Perfmon counters will help in finding that out. Load testing can be done using AS Performance Workbench (http://asperfwb.codeplex.com/) by running multiple queries at once.)

    3) External factors: Is some other application competing for the same resources?

    HINT: Read "Identifying and Resolving MDX Query Performance Bottlenecks in SQL Server 2005 Analysis Services" (http://sqlcat.com/whitepapers/archive/2007/12/16/identifying-and-resolving-mdx-query-performance-bottlenecks-in-sql-server-2005-analysis-services.aspx).

    Well, these are great tips. Now win big prizes by participating in my question over here.

    Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
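    A small, practical aid for the cache question in hint 2 is to compare cold-cache and warm-cache timings. A minimal sketch, assuming an SSAS database named MyDatabase (the name is a placeholder): executing this XMLA command from SSMS clears the Analysis Services cache, so the next run of the MDX query under Profiler shows true cold-cache performance.

        <ClearCache xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
          <Object>
            <DatabaseID>MyDatabase</DatabaseID>
          </Object>
        </ClearCache>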


  • SQL SERVER – How to easily work with Database Diagrams

    - by Pinal Dave
    Databases are very widely used in the modern world. Regardless of its complexity, each database requires in-depth design. To practice along, please download dbForge Studio now.

    The right methodology for designing a database is based on the foundations of data normalization, according to which we should first define the database's key elements: entities. Afterwards, the attributes of the entities and the relations between them are determined. There is a strong opinion that the process of database design should start with a pencil and a blank sheet of paper. This might look old-fashioned nowadays, because SQL Server provides much wider functionality for designing databases: Database Diagrams.

    When using SSMS to work with Database Diagrams, I realized two things. On the one hand, visualization of a schema allows designing a database more efficiently; on the other, when it came to creating a big schema, some difficulties arose when designing with SSMS. The alternatives did not take long to appear, and dbForge Studio for SQL Server is one of them. Its functions offer more advantages for working with Database Diagrams. For example, unlike SSMS, dbForge Studio supports dragging and dropping several tables at once from the Database Explorer. This is just my opinion, but personally I find this option very useful. Another great thing is that a diagram can be saved both as a graphic file and as a special XML file which, given an identical environment, can easily be opened on another server to continue the work.

    While working with dbForge Studio, it turned out that it offers a wide set of elements to operate with on the diagram. Noteworthy among these are containers, which allow aggregating diagram objects into thematic groups. Moreover, you can even place an image directly on the diagram if the schema design is based on a standard template.

    Each of the development environments has a different approach to storing a diagram (for example, SSMS stores them server-side, whereas dbForge Studio uses a local file). I haven't yet found a way to convert existing diagrams from SSMS to dbForge Studio; however, I hope the Devart developers will implement this feature in one of the following releases.

    All in all, editing Database Diagrams with dbForge Studio was a nice experience and sped up common database design tasks. Download dbForge Studio now.

    Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQL Utility, T SQL


  • How does I/O work for large graph databases?

    - by tjb1982
    I should preface this by saying that I'm mostly a front-end web developer, trained as a musician, but over the past few years I've been getting more and more into computer science. One idea I have for a fun toy project, to learn about data structures and C programming, is to design and implement my own very simple database that would manage an adjacency list of posts. I don't want SQL (maybe I'll do my own query language? I'm just having fun). It should support ACID. It should be capable of storing, let's say, 1 TB.

    So with that, I was trying to think of how a database even stores data, without regard to data structures necessarily. I'm working on Linux, and I've read that in that world "everything is a file," including hardware (like /dev/*), so I think that obviously has to apply to a database too, and it clearly does: whether it's MySQL or PostgreSQL or Neo4j, the database itself is a collection of files you can see in the filesystem.

    That said, there comes a point in scale where loading the entire database into primary memory just won't work, so it doesn't make sense to design it with that mindset (I assume). However, reading from secondary memory is much slower, and regardless, some portion of the database has to be in primary memory for you to be able to do anything with it.

    I read this post: Why use a database instead of just saving your data to disk? And I found it difficult to understand how other databases, like SQLite or Neo4j, read and write from secondary memory and are still very fast (faster, it would seem, than simply writing files to the filesystem, as the above question suggests). It seems the key is indexing. But even indexes need to be stored in secondary memory. They are inherently smaller than the database itself, but indexes in a very large database might be prohibitively large too.

    So my question is: how is I/O generally done with large databases like the one I described above, which would be at least 1 TB and store a big adjacency list? If indexing is more or less the answer, how exactly does indexing work, and what data structures should be involved?
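    For what it's worth, here is a minimal sketch in C (the language the poster wants to practice) of the pattern the answer usually boils down to: keep a small, sorted index of (key, file offset) pairs in primary memory, and pay one seek into the data file per lookup, so only a few pages of a very large store are ever touched. All names are illustrative, and a real engine replaces the sorted array with a B-tree so the index itself can be paged from disk.

        /* Toy sparse index: records live in a data file; the index holds
           sorted (key, offset) pairs. Lookup = binary search in memory,
           then a single seek+read on disk. */
        #include <stdio.h>
        #include <stdlib.h>

        typedef struct { long key; long offset; } IndexEntry;

        static int cmp_entry(const void *a, const void *b) {
            const IndexEntry *x = a, *y = b;
            return (x->key > y->key) - (x->key < y->key);
        }

        /* Returns 0 and fills buf with one record line, or -1 if absent. */
        int lookup(FILE *data, const IndexEntry *idx, size_t n, long key,
                   char *buf, size_t bufsize) {
            IndexEntry probe = { key, 0 };
            const IndexEntry *hit =
                bsearch(&probe, idx, n, sizeof *idx, cmp_entry);
            if (!hit) return -1;                      /* key not indexed */
            if (fseek(data, hit->offset, SEEK_SET)) return -1;
            return fgets(buf, (int)bufsize, data) ? 0 : -1;
        }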


  • Designing Mobile SMS text advertising system

    - by Ramraj Edagutti
    Currently, I am working on a product where we have an SMS text advertising system; using it, we set up advertising campaigns for clients, and these campaigns are later sent to the end users. This is very similar to Google AdWords, but targeted at mobile users via SMS. Just to give an overview of the system:

    - Each campaign is mapped to an advertiser.
    - A campaign has a start date and an end date.
    - A campaign has a filter condition(s) or query to select the target user base from our database (to whom we send campaigns).
    - The target user base can be fixed, e.g. send the campaign to 10,000 users.
    - The target user base can also be dynamic, based on a query condition, e.g. send the campaign to users who are active and from a particular state, district, town etc. (this way the user base keeps changing on a daily basis).
    - A campaign can have multiple campaign messages.
    - Each campaign message has a start date and an end date.
    - Each campaign message can have multiple message texts for different locales, e.g. English, Hindi, Telugu.

    After creating an advertising campaign, we run a nightly job to provision the target user base for that particular campaign in a separate table; another daily job runs in the morning, checks the provisioned table for campaigns and targeted users, and sends the campaigns to the users via SMS.

    The problem is that the current UI for creating advertising campaigns is designed in a very technical manner: a normal user, business owner or client cannot use the UI to create a campaign. The reasons the UI is so technical in nature:

    - The filter condition(s) or query input field takes user IDs, mobile numbers or SQL queries. Most of the time (almost every time) we use big SQL queries, so we end up storing SQL queries in the database for a campaign and later use them to fetch the targeted user base.
    - For scheduling these campaigns, we have an input field on the UI which takes Quartz cron expression(s) (e.g. send campaign on "0 0 9 1-10 MAR 2012"), again very technical in nature.

    Because normal users and business owners cannot use the UI for the reasons mentioned above, we ourselves (the developers) currently help clients set up and create campaigns. We are trying to redesign the UI to make it user-friendly enough that any user can go to the UI and create an advertising campaign by himself. I am thinking of redesigning the current UI along the lines of the Google AdWords interface, especially for selecting target users based on geography: country, state, city etc. I also need to select users based on their subscription(s), which might make the system even more complex. And for campaign scheduling, I am thinking of using weekdays with hours: for example, the UI shows Monday to Sunday, and the user can select the from hours, to hours etc.

    Any better ideas or suggestions on how to design the UI in a very user-friendly manner, and what design should be followed in the server-side code (we write the backend code in Java/JPA/Spring/Quartz)? I am also looking for ideas or design patterns on how to build the SQL queries (using JPA/Hibernate) programmatically on the server side, based on various conditions like country, state, town, village, and user subscriptions.
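    On the last point, one hedged sketch: the JPA 2.0 Criteria API lets the backend assemble the WHERE clause from whichever filters a campaign actually sets, instead of storing raw SQL strings per campaign. The entity and field names here (Subscriber, active, country, state, town, CampaignFilter) are assumptions for illustration:

        import java.util.ArrayList;
        import java.util.List;
        import javax.persistence.EntityManager;
        import javax.persistence.criteria.*;

        public class TargetUserQueryBuilder {
            // Build the target-user query from optional geography filters.
            public List<Subscriber> findTargets(EntityManager em, CampaignFilter f) {
                CriteriaBuilder cb = em.getCriteriaBuilder();
                CriteriaQuery<Subscriber> q = cb.createQuery(Subscriber.class);
                Root<Subscriber> s = q.from(Subscriber.class);

                List<Predicate> where = new ArrayList<Predicate>();
                where.add(cb.isTrue(s.<Boolean>get("active")));
                if (f.getCountry() != null) where.add(cb.equal(s.get("country"), f.getCountry()));
                if (f.getState() != null)   where.add(cb.equal(s.get("state"), f.getState()));
                if (f.getTown() != null)    where.add(cb.equal(s.get("town"), f.getTown()));

                q.select(s).where(where.toArray(new Predicate[where.size()]));
                return em.createQuery(q).getResultList();
            }
        }

    The same idea extends to subscription joins, and it keeps the stored campaign definition as data (the filter values) rather than as executable SQL.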


  • How to get my IR remote to work? Lirc can't see it

    - by user1234567
    I'm using Ubuntu 11.10 (amd64) and I'm trying to get my infrared remote control working. The IR device is part of a DVB-T USB stick (based on an RTL2832U chip). I'm using these drivers; it's the only way of getting this device to work under 11.10 that I found. It's a big improvement over the previous Ubuntu version, where I had to edit the driver's code. The device works quite well, and the IR part of it works too. The driver's page says the code is in alpha stage, but I'm pretty sure that my issue has nothing to do with that.

    If, and only if, the driver's module is loaded with the parameter rtl2832u_rc_mode=2 (which means "use NEC protocol for IR"), the remote kind of works. I can see this by running cat /dev/.. ../input6: when I press a button, random letters appear. The remote works just like a keyboard, but the keys are totally messed up: when I press '5' the volume goes down, etc.

    I would like to use LIRC to fix that, but LIRC can't detect my device (i.e. irw shows nothing). I suspect it's because something gets control of the device and sets it up as a keyboard. LIRC seems to be working, and its KDE settings module works too, but it just doesn't detect the device. The LIRC page describes this issue, but it was last updated in 2009, and since then Ubuntu has moved from HAL (described there) to DeviceKit, rendering the provided instructions useless. I had a similar issue with my previous remote, but the keys were not messed up so much; the remote was usable, so I gave up trying to get LIRC working.

    I tried the answer provided here, but it changed nothing. I also tried forcing lircd to use my device, but this didn't work either:

        for i in /sys/class/input/input* ; do echo -n "$(basename "$i"): "; cat "$i/name"; done

    shows

        input0: Power Button
        input1: Power Button
        input2: Logitech Logitech USB Keyboard
        input3: A4Tech PS/2+USB Mouse
        input6: IR-receiver inside an USB DVB receiver

    But when I run the following as root (also tried with the full name):

        lircd -n --device=name='IR*'

    I always see:

        lircd-0.9.0[3983]: lircd(default) ready, using /var/run/lirc/lircd
        lircd-0.9.0[3983]: accepted new client on /var/run/lirc/lircd
        lircd-0.9.0[3983]: could not get file information for name=IR*
        lircd-0.9.0[3983]: default_init(): No such file or directory
        lircd-0.9.0[3983]: Failed to initialize hardware

    So, how do I set up LIRC with the devinput driver in such a case?
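    A hedged suggestion (not tested against this exact stick): since the kernel already exposes the receiver as an input device, lircd's devinput driver can usually be pointed straight at the event node instead of matched by name. The event number below is a guess; map input6 to its event node first:

        # find the event node that belongs to input6
        ls /sys/class/input/input6/
        # then run lircd with the devinput driver, its stock config, and that node
        sudo lircd --driver=devinput --device=/dev/input/event6 \
            /usr/share/lirc/remotes/devinput/lircd.conf.devinput
        irw    # press remote buttons; decoded key names should appear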


  • Compute directional light frustum from view frustum points and light direction

    - by Fabian
    I'm working on a friend's engine project, and my task is to construct a new frustum from the light direction that overlaps the view frustum and possible shadow casters. The project already has a function that creates a frustum for this, but it is way too big and includes far too many casters whose shadows can't be seen in the view frustum. The only parameters of this function are the normalized light direction vector and a view class which lets me extract the 8 view frustum points in world space. I don't have any additional information about the scene. I have read some of the related questions here, but none seem to fit my problem very well, as they often just point to cascaded shadow maps. Sadly, I can't use DX or OpenGL functions directly, because this engine has a dedicated math library.

    From what I've read so far, the steps are: transform the view frustum points into light space and find the min/max x and y values (or sometimes the minima and maxima of all three axes), then create an AABB from the min/max vectors. But what comes after this step? How do I transform this new AABB back to world space?

    What I've done so far:

        CVector3 Points[8];
        CVector3 MinLight = CVector3(FLT_MAX), MaxLight = CVector3(-FLT_MAX); // max accumulator must start at -FLT_MAX
        for (int i = 0; i < 8; ++i) {
            Points[i] = Points[i] * WorldToShadowMapMatrix;
            MinLight = Math::Min(Points[i], MinLight);
            MaxLight = Math::Max(Points[i], MaxLight);
        }
        AABox box(MinLight, MaxLight);

    I don't think this is the right way to do it. The near plane probably has to extend in the direction of the light source to include potential shadow casters. I've read the Microsoft article about cascaded shadow maps, http://msdn.microsoft.com/en-us/library/windows/desktop/ee416307%28v=vs.85%29.aspx, which also includes some sample code, but they seem to use the scene's AABB to determine the near and far planes, which I can't, since I can't access that information from the function I'm working in. Could you guys please link some example code that shows the calculation of such a frustum? Thanks in advance!

    Additional question: is there a way to construct a WorldToFrustum matrix that represents the above transformation?
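    For reference, here is a hedged sketch of the usual construction, written with XNA-style C# types for concreteness (swap in the engine's own math classes; the pullback constant and all names are illustrative). The key point: the AABB never needs to go back to world space. The light view matrix plus an off-center orthographic projection built from the light-space AABB together form exactly the WorldToFrustum matrix asked about at the end.

        using System;
        using Microsoft.Xna.Framework;

        static Matrix BuildLightViewProjection(Vector3 lightDir, Vector3[] frustumCornersWorld)
        {
            // Any stable up vector not parallel to the light direction works.
            Vector3 up = Math.Abs(lightDir.Y) > 0.99f ? Vector3.Forward : Vector3.Up;
            Matrix lightView = Matrix.CreateLookAt(Vector3.Zero, lightDir, up);

            // Light-space AABB of the 8 view frustum corners.
            Vector3[] cornersLight = new Vector3[8];
            Vector3.Transform(frustumCornersWorld, ref lightView, cornersLight);
            BoundingBox box = BoundingBox.CreateFromPoints(cornersLight);

            // Pull the near plane back toward the light so casters outside
            // the view frustum can still cast into it (value is scene-dependent).
            const float casterPullback = 100f;

            // The camera looks down -Z in view space, so near/far distances
            // are the negated light-space Z extents.
            Matrix lightProj = Matrix.CreateOrthographicOffCenter(
                box.Min.X, box.Max.X,
                box.Min.Y, box.Max.Y,
                -box.Max.Z - casterPullback, -box.Min.Z);

            return lightView * lightProj; // world -> light clip space
        }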


  • Ajaxy

    - by Chris Skardon
    Today is the big day, the day I attempt to use Ajax in the app… I've never done this (well, tell a lie, I've done it in a 'tutorial' site, but that was a while ago now), so it's going to be interesting…

    OK, basics first, let's start with the @Ajax.ActionLink. Right, first stab:

        @Ajax.ActionLink("Click to get latest", "LatestEntry",
            new AjaxOptions {
                UpdateTargetId = "ajaxEntrant",
                InsertionMode = InsertionMode.Replace,
                HttpMethod = "GET"
            })

    As far as I'm aware, I'm asking to get the 'LatestEntry' from the current controller, and in doing so I will replace the #ajaxEntrant DOM element with the result. So, I guess I'd better get the result working… To the controller!

        public PartialViewResult LatestEntry()
        {
            var entrant = _db.Entrants.OrderByDescending(e => e.Id).Single();
            return PartialView("_Entrant", entrant);
        }

    Pretty simple, it just returns the last entry in a PartialView… but! I have yet to make my partial view, so onto that!

        @model Webby.Entrant
        <div class="entrant">
            <h4>@Model.Name</h4>
        </div>

    Again, super simple (I'm really just testing at this point)… All the code is now there (as far as I know), so F5 and in…

    And once again, in the traditionally disappointing way of the norm, it doesn't work. Sure, it opens the right view, but it doesn't replace the #ajaxEntry DOM element; rather, it replaces the whole page… The source code (again, as far as I know) looks OK:

        <a data-ajax="true" data-ajax-method="GET" data-ajax-mode="replace"
           data-ajax-update="#ajaxEntrants"
           href="/Entrants/LatestEntrant">Click to get latest</a>

    Changing the InsertionMode to any of the other modes has the same effect. It's not the DOM name either; changing that has the same effect, i.e. none. It's not the partial view either; making it just a <p> has (again) no effect…

    Ahhhhh --- what a schoolboy error… I had neglected (ahem) to actually put the script bit into the calling page (another save from Stack Overflow):

        <script src="@Url.Content("~/Scripts/jquery.unobtrusive-ajax.js")" type="text/javascript"></script>

    I've now stuck that into the _Layout.cshtml view temporarily to aid the development process… :)

    Onwards and upwards!

    Chris
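    A hedged footnote for anyone hitting the same wall: the MVC 3 Ajax helpers only emit those data-ajax-* attributes when unobtrusive JavaScript is enabled in web.config (the default project template switches it on). If the anchor renders with an old-style onclick instead, this is the setting to check:

        <appSettings>
            <add key="UnobtrusiveJavaScriptEnabled" value="true" />
        </appSettings>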


  • As a web designer, which language should I learn first for my future career? (PHP or JavaScript) [closed]

    - by kdevs3
    Possible duplicates:
    - Best Programming Language for Web Development
    - How can I choose a web development language?
    - What language will you choose if you are going to build something big?
    - What is the right option of programming languages and tools for building our website?
    - What is the easiest web programing language at....?

    Well, I'm more of a basic web designer. I know the easy stuff pretty well (ya know, HTML, CSS). But I've been trying to take the next step, and I'm contemplating what I should learn that will help me out the most in my future web design/programming career: should it be JavaScript, or maybe I should try to learn a back-end programming language such as PHP?

    Lately I have been hearing a lot about how great and useful JavaScript is now, because of libraries such as jQuery and the possibilities opened up by Node.js and other frameworks. I've only learned the most basic JavaScript and used some jQuery (mostly plugins), so I don't really know what it can actually do. Would JavaScript being as popular and useful as it is now be a reason to stick with it and learn only it for now? Or, as a web designer, how important is it to learn how to make a website or web application operate and be functional, and to know how to work with servers, etc.? (Such as getting forms to work and sending data to the server and back.)

    I've taken a look at frameworks such as CodeIgniter before, and it looks really simple to get started with if I try to learn PHP, but I'm not sure how important it is for my career and what I would gain out of it.

    I'm asking because I can't decide what I should learn first. Once I choose, I really want to take my time and learn the language. I don't want to spend time learning multiple languages at once, so I need to pick wisely. I'm trying to turn in the right direction so my career can hopefully be successful in the future. (If you're asking whether money/getting a job is important, then yes, it is a bit.)

    I'm hoping I can get opinions and suggestions on this question; thanks for giving me your thoughts.


  • Why a graduate program in South Africa?

    - by anca.rosu
    South Africa, like many other countries, is desperate for skills. Good, solid technical skills, together with a get-up-and-go attitude and the desire to work for a world-class organization that is leading the way! In addition, we have made a commitment in South Africa to transform our organization and to develop and empower Black individuals who historically have not had the opportunity to participate in the global economy. It is through this investment in our country's people that we contribute to the development of a nation capable of competing on the global stage. This makes for an exciting recipe!

    We have:
    - Plenty of young and talented individuals who are eager to get stuck in and learn.
    - Formal, recognized qualifications that form the basis for further development.
    - A huge global organization, Oracle, that is committed to developing these graduates and giving them an opportunity that is out of this world!

    The recipe:
    - Mix the above 'ingredients' together.
    - Tackle and remove potential "lumps & bumps" along the way as we learn and grow together.
    - Nurture and care for each other in a warm but tough environment.

    What have we achieved? In most cases, the outcome is an awesome bunch of new talent that is well equipped to face the IT world. Where we have the opportunity and suitable headcount available to employ these graduates at Oracle, we snap them up; alternatively, our business partners and customers are always eager to recruit Oracle graduates into their organizations! These individuals go through real-life workplace experience while at Oracle. In some cases they get to travel internationally. The excitement and buzz gets into their system and their blood becomes truly RED! Oracle RED! This is valuable talent and expertise to have in our ecosystem, and it's an exciting program to be a part of, not only as a graduate but as an Oracle employee too!

    If you have any questions related to this article, feel free to contact [email protected]. You can find our job opportunities via http://campus.oracle.com.

    Technorati Tags: South Africa, technical skills, graduate program, opportunity, global organization, new talent


  • Informed TDD – Kata “To Roman Numerals”

    - by Ralf Westphal
    Originally posted on: http://geekswithblogs.net/theArchitectsNapkin/archive/2014/05/28/informed-tdd-ndash-kata-ldquoto-roman-numeralsrdquo.aspx

    In a comment on my article on what I call Informed TDD (ITDD), reader gustav asked how this approach would apply to the kata "To Roman Numerals", and whether ITDD wasn't a violation of TDD's principle of leaving out "advanced topics like mocks". I'd like to respond to his questions with this article; there's more to say than fits into a comment. (The code in this walkthrough appeared as screenshots in the original post; a reconstruction sketch follows at the end.)

    Mocks and TDD

    I don't see in how far TDD is avoiding or opposed to mocks. TDD and mocks are orthogonal. TDD is about process, mocks are about structure and costs. Maybe by moving forward in tiny red+green+refactor steps, less need arises for mocks. But then… if the functionality you need to implement requires "expensive" resource access, you can't avoid using mocks, because you don't want to constantly run all your tests against the real resource.

    True, in ITDD mocks seem to be in almost inflationary use. That's not what you usually see in TDD demonstrations. However, there's a reason for that, as I tried to explain. I don't use mocks as proxies for "expensive" resources. Rather, they are stand-ins for functionality not yet implemented. They allow me to get a test green on a high level of abstraction. That way I can move forward in a top-down fashion. But if you think of mocks as "advanced", or if you don't want to use a tool like JustMock, then you don't need to use mocks. You just need to stand the sight of red tests for a little longer ;-) Let me show you what I mean by that by doing a kata.

    ITDD for "To Roman Numerals"

    gustav asked for the kata "To Roman Numerals". I won't explain the requirements again; you can find descriptions and TDD demonstrations all over the internet, like this one from Corey Haines. Now here is how I would do this kata differently.

    1. Analyse

    A demonstration of TDD should never skip the analysis phase. It should be made explicit. The requirements should be formalized and acceptance test cases should be compiled. "Formalization" in this case to me means describing the API of the required functionality. "[D]esign a program to work with Roman numerals", as written in this "requirement document", is not enough to start software development. Coding should only begin if the interface between the "system under development" and its context is clear.

    If this interface is not readily recognizable from the requirements, it has to be developed first. Exploration of interface alternatives might be in order. It might be necessary to show several interface mock-ups to the customer, even if that's your fellow developer. Designing the interface is a task of its own. It should not be mixed with implementing the required functionality behind the interface. Unfortunately, though, this happens quite often in TDD demonstrations. TDD is used to explore the API and implement it at the same time. To me that's a violation of the Single Responsibility Principle (SRP), which should hold not only for software functional units but also for tasks or activities.

    In the case of this kata the API fortunately is obvious. Just one function is needed: string ToRoman(int arabic). And it lives in a class ArabicRomanConversions. Now what about acceptance test cases? There are hardly any stated in the kata descriptions. Roman numerals are explained, but no specific test cases from the point of view of a customer. So I just "invent" some acceptance test cases by picking Roman numerals from a Wikipedia article.
    They are supposed to be just "typical examples" without special meaning. Given the acceptance test cases, I then try to develop an understanding of the problem domain. I'll spare you that; the domain is trivial and is explained in almost all kata descriptions. How Roman numerals are built is not difficult to understand. What's more difficult, though, might be to find an efficient solution to convert into them automatically.

    2. Solve

    The usual TDD demonstration skips a solution finding phase. Like the interface exploration, it's mixed in with the implementation. But I don't think this is how it should be done. I even think this is not how it really works for the people demonstrating TDD. They're simplifying their true software development process because they want to show a streamlined TDD process. I doubt this is helping anybody.

    Before you code, you better have a plan for what to code. This does not mean you have to do "Big Design Up-Front". It just means: have a clear picture of the logical solution in your head before you start to build a physical solution (code). Evidently such a solution can only be as good as your understanding of the problem. If that's limited, your solution will be limited, too. Fortunately, in the case of this kata your understanding does not need to be limited, so the logical solution does not need to be limited or preliminary or tentative. That does not mean you need to know every line of code in advance. It just means you know the rough structure of your implementation beforehand, because it should mirror the process described by the logical or conceptual solution.

    Here's my solution approach: The Arabic "encoding" of numbers represents them as an ordered set of powers of 10. Each digit is a factor to multiply a power of ten with. The "encoding" 123 is the short form for a set like this: {1*10^2, 2*10^1, 3*10^0}. And the number is the sum of the set members. The Roman "encoding" is different. There is no base (like 10 for Arabic numbers); there are just digits of different value, and they have to be written in descending order. The "encoding" XVI is short for [10, 5, 1]. And the number is still the sum of the members of this list. The Roman "encoding" thus is simpler than the Arabic. Each "digit" can be taken at face value. No multiplication with a base required.

    But what about IV, which looks like a contradiction to the above rule? It is not, if you accept that Roman "digits" are not limited to single characters only. Usually I, V, X, L, C, D, M are viewed as "digits", and IV, IX etc. are viewed as nuisances preventing a simple solution. All looks different, though, once IV, IX etc. are taken as "digits". Then MCMLIV is just a sum: M+CM+L+IV, which is 1000+900+50+4. Whereas before it would have been understood as M-C+M+L-I+V, which is more difficult because here some "digits" get subtracted. Here's the list of Roman "digits" with their values:

        {1, I}, {4, IV}, {5, V}, {9, IX}, {10, X}, {40, XL}, {50, L}, {90, XC}, {100, C}, {400, CD}, {500, D}, {900, CM}, {1000, M}

    Since I take IV, IX etc. as "digits", translating an Arabic number becomes trivial. I just need to find the values of the Roman "digits" making up the number; e.g. 1954 is made up of 1000, 900, 50, and 4. I call those "digits" factors. If I move from the highest factor (M=1000) to the lowest (I=1), then translation is a three-phase process:

    1. Find all the factors
    2. Translate the factors found
    3. Compile the Roman representation

    Translation is just a look-up.
    Finding, though, needs some calculation:

    1. Find the highest remaining factor fitting in the value
    2. Remember it and subtract it from the value
    3. Repeat with the remaining value and remaining factors

    Please note: this is just an algorithm. It's not code, even though it might be close. Being so close to code in my solution approach is due to the triviality of the problem. In more realistic examples the conceptual solution would be on a higher level of abstraction.

    With this solution in hand, I finally can do what TDD advocates: find and prioritize test cases. As I can see from the small process description above, there are three aspects to test:

    - Test the translation
    - Test the compilation
    - Test finding the factors

    Testing the translation primarily means checking whether the map of factors and digits is comprehensive. That's simple, even though it might be tedious. Testing the compilation is trivial. Testing factor finding, though, is a tad more complicated. I can think of several steps:

    1. First check if an Arabic number equal to a factor is processed correctly (e.g. 1000=M).
    2. Then check if an Arabic number consisting of two consecutive factors (e.g. 1900=[M,CM]) is processed correctly.
    3. Then check if a number consisting of the same factor twice is processed correctly (e.g. 2000=[M,M]).
    4. Finally check if an Arabic number consisting of non-consecutive factors (e.g. 1400=[M,CD]) is processed correctly.

    I feel I can start an implementation now. If something becomes more complicated than expected, I can slow down and repeat this process.

    3. Implement

    First I write a test for the acceptance test cases. It's red because there's no implementation even of the API. That's in conformance with "TDD lore", I'd say. Next I implement the API. The acceptance test now is formally correct, but still red of course. This will not change even now that I zoom in, because my goal is not to satisfy these tests as quickly as possible, but to implement my solution in a stepwise manner. That I do by "faking" it: I just "assume" three functions to represent the transformation process of my solution. My hypothesis is that those three functions in conjunction produce correct results on the API level. I just have to implement them correctly. That's what I'm trying now, one by one.

    I start with a simple "detail function", Translate(), and with all the test cases in the obvious equivalence partition. As you can see [in the original screenshots], I dare to test a private method. Yes, that's a white box test. But as you'll see, it won't make my tests brittle. It serves a purpose right here and now: it lets me focus on getting one aspect of my solution right. The implementation to satisfy the test is as simple as possible, just how TDD wants me to do it: KISS.

    Now for the second equivalence partition: translating multiple factors. (It's a pattern: if you need to do something repeatedly, separate the tests for doing it once and doing it multiple times.) In this partition I just need a single test case, I guess; stepping up from a single translation to multiple translations is no rocket science. Usually I would have implemented the final code right away; splitting it into two steps is just for "educational purposes" here. How small your implementation steps are is a matter of your programming competency. Some "see" the final code right away before their mental eye; others need to work their way towards it. Having two tests I find more important.

    Now for the next low-hanging fruit: compilation. It's even simpler than translation. A single test is enough, I guess.
    And normally I would not even have bothered to write that one, because the implementation is so simple; I don't need to test .NET framework functionality. But again: if it serves the educational purpose…

    Finally the most complicated part of the solution: finding the factors. There are several equivalence partitions, but still I decide to write just a single test, since the structure of the test data is the same for all partitions. Again, I'm faking the implementation first: I focus on just the first test case. No looping yet. Faking lets me stay on a high level of abstraction. I can write down the implementation of the solution without bothering myself with details of how to actually accomplish the feat. That's left for a drill-down with a test of the fake function. There are two main equivalence partitions, I guess: either the first factor is appropriate, or some later one. The implementation seems easy; both test cases are green. (Of course this only works on the premise that there's always a matching factor. Which is the case, since the smallest factor is 1.) And the first of the equivalence partitions on the higher level also is satisfied.

    Great, I can move on. Now for more than a single factor. Interestingly, not just one test becomes green now, but all of them. Great! You might say, then I must not have done the simplest thing possible. And I would reply: I don't care. I did the most obvious thing. But I also find this loop very simple; even simpler than the recursion I had briefly thought of during the problem solving phase. And by the way: the acceptance tests also went green. Mission accomplished, at least functionality-wise.

    Now I have to tidy things up a bit. TDD calls for refactoring. Not much refactoring is needed, because I wrote the code in top-down fashion. I faked it until I made it. I endured red tests on higher levels while lower levels weren't perfected yet. But this way I saved myself from refactoring tediousness. At the end, though, some refactoring is required, but maybe in a different way than you would expect. That's why I'd rather call it "cleanup".

    First I remove duplication. There are two places where factors are defined: in Translate() and in Find_factors(). So I factor the map out into a class constant, which leads to a small conversion in Find_factors(). And now for the big cleanup: I remove all tests of private methods. They are scaffolding tests to me; they only have temporary value; they are brittle. Only acceptance tests need to remain. However, I carry over the single-"digit" tests from Translate() to the acceptance test. I find them valuable to keep, since the other acceptance tests only exercise a subset of all Roman "digits". This then is my final test class, and the final production code [both shown as screenshots in the original post]. Test coverage as reported by NCrunch is 100%.

    Reflexion

    Is this the smallest possible code base for this kata? Surely not. You'll find more concise solutions on the internet. But LOC are of relatively little concern, as long as I can understand the code quickly. So-called "elegant" code, however, often is not easy to understand. The same goes for KISS code, especially if left unrefactored, as is often the case. That's why I progressed from requirements to final code the way I did. I first understood and solved the problem on a conceptual level. Then I implemented it top-down according to my design. I also could have implemented it bottom-up, since I knew some of the bottom of the solution: the leaves of the functional decomposition tree.
    Where things became fuzzy, because the design did not cover any more details (as with Find_factors()), I repeated the process in the small, so to speak: fake some top level and endure red high-level tests while first solving a simpler problem. Using scaffolding tests (to be thrown away at the end) brought two advantages:

    - Encapsulation of the implementation details was not compromised. Private methods could naturally stay private. I did not need to make them internal or public just to be able to test them.
    - I was able to write focused tests for small aspects of the solution. No need to test everything through the solution root, the API.

    The bottom line thus for me is: Informed TDD produces cleaner code in a systematic way. It conforms to core principles of programming: the Single Responsibility Principle and/or Separation of Concerns. Distinct roles in development (being a researcher, being an engineer, being a craftsman) are represented as different phases. First find out what there is. Then devise a solution. Then code the solution, manifest the solution in code. Writing tests first is a good practice, but it should not be taken dogmatically, and above all it should not be overloaded with purposes. And finally: moving from top to bottom through a design produces refactored code right away. Clean code thus is almost inevitable, and not left to a refactoring step at the end, which is often skipped for various reasons.

    PS: Yes, I have done this kata several times. But that only has an impact on the time needed for phases 1 and 2. I won't skip them because of that. And there are no shortcuts during implementation because of that.
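    Since the tests and production code in the original post appeared only as screenshots, here is a hedged reconstruction sketch in C# of the final solution as the text describes it: the factor/digit map as a class constant, and the three functions Find_factors(), Translate() and Compile() behind string ToRoman(int). The exact member layout is an assumption.

        using System;
        using System.Collections.Generic;
        using System.Linq;

        public class ArabicRomanConversions
        {
            // Roman "digits" with their values; IV, IX etc. count as digits.
            private static readonly Tuple<int, string>[] Map =
            {
                Tuple.Create(1000, "M"), Tuple.Create(900, "CM"),
                Tuple.Create(500, "D"),  Tuple.Create(400, "CD"),
                Tuple.Create(100, "C"),  Tuple.Create(90, "XC"),
                Tuple.Create(50, "L"),   Tuple.Create(40, "XL"),
                Tuple.Create(10, "X"),   Tuple.Create(9, "IX"),
                Tuple.Create(5, "V"),    Tuple.Create(4, "IV"),
                Tuple.Create(1, "I")
            };

            public string ToRoman(int arabic)
            {
                return Compile(Translate(Find_factors(arabic)));
            }

            // Highest remaining factor fitting the value; subtract; repeat.
            private static IEnumerable<int> Find_factors(int value)
            {
                foreach (var digit in Map)
                    while (value >= digit.Item1)
                    {
                        yield return digit.Item1;
                        value -= digit.Item1;
                    }
            }

            // Translation is just a look-up in the map.
            private static IEnumerable<string> Translate(IEnumerable<int> factors)
            {
                return factors.Select(f => Map.First(d => d.Item1 == f).Item2);
            }

            private static string Compile(IEnumerable<string> digits)
            {
                return string.Concat(digits);
            }
        }

    For example, ToRoman(1954) finds the factors 1000, 900, 50, 4, translates them to M, CM, L, IV, and compiles "MCMLIV".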


  • Internal Mutation of Persistent Data Structures

    - by Greg Ros
    To clarify, when I use the terms persistent and immutable for a data structure, I mean that:

    - The state of the data structure remains unchanged for its lifetime. It always holds the same data, and the same operations always produce the same results.
    - The data structure allows Add, Remove, and similar methods that return new objects of its kind, modified as instructed, which may or may not share some of the data of the original object.

    However, while a data structure may seem persistent to the user, it may do other things under the hood. To be sure, all data structures are, internally, at least somewhere, based on mutable storage. If I were to base a persistent vector on an array, and copy it whenever Add is invoked, it would still be persistent, as long as I modify only locally created arrays.

    However, sometimes you can greatly increase performance by mutating a data structure under the hood. In more, say, insidious, dangerous, and destructive ways. Ways that might leave the abstraction untouched, not letting the user know anything has changed about the data structure, but that are critical at the implementation level. For example, let's say we have a class called ArrayVector implemented using an array. Whenever you invoke Add, you get an ArrayVector built on top of a newly allocated array that has an additional item. A sequence of such updates will involve n array copies and allocations.

    Now, let's say we implement a lazy mechanism that stores all sorts of updates, such as Add, Set, and others, in a queue. In this case, each update requires constant time (adding an item to a queue), and no array allocation is involved. When a user tries to get an item from the vector, all the queued modifications are applied under the hood, requiring a single array allocation and copy (since we know exactly what data the final array will hold, and how big it will be). Future get operations will be performed on an empty cache, so they will take a single operation. But in order to implement this, we need to 'switch' or mutate the internal array to the new one and empty the cache: a very dangerous action. However, considering that in many circumstances this can save a lot of time and memory (most updates are going to occur in sequence, after all), it might be worth it. You will need to ensure exclusive access to the internal state, of course.

    This isn't a question about the efficacy of such a data structure. It's a more general question: is it ever acceptable to mutate the internal state of a supposedly persistent or immutable object in destructive and dangerous ways? Does performance justify it? Would you still be able to call it immutable? Oh, and could you implement this sort of laziness without mutating the data structure in the specified fashion?
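    A minimal sketch of the lazy scheme described above, in C# (names and layout are illustrative; the exclusive-access requirement is reduced to a plain lock). Add runs in O(1) by chaining the pending update; the first read applies all queued updates with a single array allocation and then performs the "dangerous" internal switch.

        using System;
        using System.Collections.Generic;

        public class ArrayVector<T>
        {
            private sealed class Op
            {
                public readonly Action<List<T>> Apply;
                public readonly Op Prev;
                public Op(Action<List<T>> apply, Op prev) { Apply = apply; Prev = prev; }
            }

            private readonly object _sync = new object();
            private T[] _items;    // materialized base state (shared between versions)
            private Op _pending;   // immutable chain of queued updates, newest first

            public ArrayVector() { _items = new T[0]; }
            private ArrayVector(T[] items, Op pending) { _items = items; _pending = pending; }

            // O(1): records the update instead of copying the array now.
            public ArrayVector<T> Add(T item)
            {
                return new ArrayVector<T>(_items, new Op(list => list.Add(item), _pending));
            }

            public T this[int index]
            {
                get { Flush(); return _items[index]; }
            }

            // Applies all queued updates in one allocation; mutates internals.
            private void Flush()
            {
                lock (_sync)
                {
                    if (_pending == null) return;
                    var ops = new Stack<Action<List<T>>>();
                    for (Op o = _pending; o != null; o = o.Prev) ops.Push(o.Apply);
                    var list = new List<T>(_items);
                    while (ops.Count > 0) ops.Pop()(list);
                    _items = list.ToArray();  // the internal 'switch'
                    _pending = null;
                }
            }
        }

    The abstraction stays persistent: each Add returns a new ArrayVector sharing the old base array, and flushing one version never changes what any other version's readers observe.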


  • Oracle Open World 2012 is Here!

    - by thatjeffsmith
    Just a quick post today, and then probably not much more until next week. Speaking, running hands-on labs, meets and greets, and trying to keep up with folks like @oraclenerd means I won't have much time to write until I get home from San Francisco.

    I wanted to give a quick shout-out to my co-worker and partner-in-Product-Management-crime, Ashley Chen, this morning. She signed me up for a run across the Golden Gate and back with @bamcgill a few months ago… mostly with my permission. The only thing was, I didn't run at the time, and that's basically a 5k. But having goals is good. And yesterday I met a big goal of mine: not looking stupid trying to run across the Golden Gate Bridge. OK, I did the run and maybe looked a little bit stupid.

        [Photo: Ashley, Barry, and I, pre-run]
        [Photo: Perfect weather and no fog to cloud the view!]

    So the pre-show fun is over and now it's time for the show fun to begin. At Oracle OpenWorld? Come by our demo pods. We're with the other Database folks in the back right-hand corner. We'll have folks on hand to talk about and show Oracle SQL Developer, Oracle SQL Developer Data Modeler, Migrations, and Oracle APEX Listener.

        [Photo: Oracle SQL Developer Demo Pod]

    I have the full schedule of SQL Developer presentations and hands-on labs here. I know there's a lot of news on tap this week in the world of Oracle, and we'll start talking more about it soon, so be sure to subscribe to my feed if you don't want to miss any of my posts. And I promise not to post any more pictures of me.

    Speaking of pictures, thanks to @dmcghan, or as I call him, 'Dan the Man', for running with us and being our official portrait photographer! If you don't follow him, he's a great fountain of knowledge in the Oracle APEX world and is one of our ACEs.


  • XNA Notes 001

    - by George Clingerman
    Just a quick recap of things I noticed going on in or around the XNA community this past week. I'm sure there's a lot I missed (it's a pretty big community with lots of different parts to it), but these were the things I caught that I thought were pretty cool.

    The XNA Team
    - Michael Klucher gave a list of books every gamer should read. http://twitter.com/#!/mklucher/status/22313041135673344
    - Shawn Hargreaves posted about Nelxon Studio's cheatsheet for converting 3.1 to 4.0. http://blogs.msdn.com/b/shawnhar/archive/2011/01/04/xna-3-1-to-4-0-cheat-sheet.aspx
    - XNA Game Studio won the Frontline award for Programming Tool from Game Developer magazine! Congrats to the XNA team! http://www.gdmag.com/homepage.htm

    XNA MVPs
    - In January several MVPs were up for re-election; Jim Perry, Andy 'The ZMan' Dunn, Glenn Wilson and myself were all re-awarded a Microsoft MVP award for our contributions to the XNA/DirectX communities. https://mvp.support.microsoft.com/communities/mvp.aspx?product=1&competency=XNA%2fDirectX
    - A movement to get Michael McLaughlin an MVP award has started, and you can join in too! http://twitter.com/#!/theBigDaddio/status/22744458621620224 http://www.xnadevelopment.com/MVP/MichaelMcLaughlinMVP.txt
    - Don't forget you can nominate ANYONE for an MVP award; that's how they work. https://mvp.support.microsoft.com/gp/mvpbecoming

    XNA Developers
    - James Silva of Ska Studios hit 9,200 sales of ZP2KX and recommends you listen to Infected Mushroom. http://twitter.com/#!/Jamezila/status/22538865357094912 http://en.wikipedia.org/wiki/Infected_Mushroom
    - Noogy, creator of the upcoming XBLA title Dust: An Elysian Tail, posts some details of his art creation. http://noogy.com/image/statue/statue.html

    Xbox LIVE Indie Game News
    - Microsoft posted acknowledging there was an issue with the sales data; it has been addressed, and they apologized for not posting about it sooner. http://forums.create.msdn.com/forums/p/71347/436154.aspx#436154
    - Winter Uprising sales are still chugging along, with updates from Xalterax (by those developers willing to actually share sales numbers; thanks for sharing, guys, much appreciated!). http://forums.create.msdn.com/forums/t/70147.aspx
    - Don't forget about Dream Build Play coming up in February! http://www.dreambuildplay.com/Main/Home.aspx
    - The Best Xbox LIVE Indie Games, December Edition, comes out on NeoGAF. http://www.neogaf.com/forum/showthread.php?t=414485
    - The Greatest Xbox LIVE Indie Games of 2010 on Dealspwn; congrats to DrMistry and MStarGames for the #1 spot with the massive XBLIG Space Pirates From Tomorrow! http://www.dealspwn.com/xbligoty-2010/

    XNA Game Development
    - The future of XACT and WP7 has finally been confirmed, and we finally know what our options are for looping audio seamlessly on WP7. http://forums.create.msdn.com/forums/p/61826/436639.aspx#436639
    - Super Mario 3 Design Notes is an interesting read for XBLIG developers, giving some insight into the training that naturally occurs for players as they start playing the game. Good things for XBLIG developers to think about. http://www.significant-bits.com/super-mario-bros-3-level-design-lessons

    Read the article

  • Dawn of the Enterprise Social Developer

    - by Mike Stiles
    Social is not just for poking friends, posting videos of cats playing pianos, or even just for brand marketing anymore. It has become a key form of communication, internally and externally, across every area of the enterprise. As a Java developer, are you positioning yourself for the integration of social into enterprise business systems that's on the near horizon? Because it's the work you do and the applications you build that will influence what the social-enabled enterprise is going to look like and how it's going to operate. But as a social developer, step one is wrapping your arms around all the things that are possible. Traditionally, the best exploration, brainstorming, and innovation come from collaborating with other developers. That's how the big questions can be hashed (or hacked) out. Is Java the best social development environment? If not, what is? What's already being done in terms of application integration? The JavaOne Social Developer Program will offer a series of talks and events on those very issues on Tuesday, October 2 at the San Francisco Hilton. If you're interested in embarking on this newest frontier of enterprise social development, you can connect with others who are thinking the same thing and get moving on your first project. Talks will include:
    * Emergence of the Social Enterprise
    * Extending Social into Enterprise Applications and Business Processes
    * Intro to Open Graph and Facebook's APIs
    * Building the Next Wave of Social Commerce Platforms
    * Social Data and the Enterprise
    * LinkedIn: A Professional Network Built with Java Technologies and Agile Practice
    * Social Developer Hackathon
    In addition to these learning and discussion opportunities, you might consider joining the new Oracle Social Developer Community (OSDC), where the interaction and collaboration can continue indefinitely. It doesn't take a lot of tea-leaf reading to know that the cloud will house the enterprise technology of the future, and social (as well as the rich data it brings) is going to be a major part of it as social integrates across every business function, where there is already proven value from consumer-facing initiatives. The next phase of social development is going to involve combining enterprise data from multiple sources, new and existing, social and traditional, in order to tell compelling and usable stories. Social is coming to the enterprise quickly, meaning you as a development leader should seek to understand not just what's worked on the consumer side, but what aspects of those successes can be applied inside the organization. Get educated, get connected, and consider registering for this forward-looking event now to get started with enterprise social development.

    Read the article

  • Career Advice: finding challenging work in software and web development

    - by dianovich
    Having left my physics degree early, I started out in the realm of web design / front-end web development and was able to get work quite quickly. I moved on to spend a chunk of my time on servers and gained experience with frameworks like WordPress and Drupal, then the likes of CodeIgniter and CakePHP, and became comfortable in Debian-based and RHEL/CentOS environments. I ventured into iOS development and published a couple of native apps to the App Store too! I have started to spend a good deal of my time writing Python and have invested a little time in Django. The problem is, I still spend a fair chunk of my time in my job doing front-end web development: writing markup and CSS for site themes, design-led JavaScript, and small applications for which application architecture and software engineering are relatively unimportant or too time-consuming to invest in. What I want to do is really exercise the systematic/logical portion of my brain and tackle challenging problems on a daily basis. I want to have to care about big-O running times, modularity in software, DRY, performance tuning, and development methodologies. I want to work for a firm whose clients say: "Yes, these things are important to us and we'll pay you to get them right." But it is difficult: I have no formal training and am potentially becoming a jack of all trades. Not that being a jack of many trades is necessarily a bad thing, but the scope of work I find myself involved in is far too broad. And there are only so many hours in a day outside of work! My question is: where do I go from here? I am starting to work on a few open source projects and have started to publish content to my blog, but this isn't likely to make it past the recruitment consultants and HR departments of many a firm. And I do not, for example, work in a team that practices agile methodologies, so how do I get work in such a team to gain experience? While I have been responsible for introducing version control and some solid working practices into our current environment, there is only so far I can go in this context. What would convince you that I'm worth taking a risk on? What would convince you that I'll catch up with the other guys in your employ in next to no time?

    Read the article

  • How important is knowing functionality before coding?

    - by minusSeven
    I work for a software development company where the development work has been offshored to us. The onshore team handles support and talks directly to the clients; we never talk to the clients directly, we just talk to people on the onshore team. When requirements come in, the onshore team talks to the clients, writes requirement documents, and informs us. We write design documents after studying the requirements (we follow the traditional waterfall model). But there is one problem in the whole process: nobody on either the offshore or the onshore team understands the functionality of the application completely. We just know it's a big, complex web app handling complex order processing, catalog management, campaign management, and other activities. We struggle with the design document because the requirements are not clear. It then goes into a series of questions and answers back and forth between the onshore team, the offshore team, and the clients. We are often told to understand the functionality from the code, but that's usually not feasible, as the code base is huge and even understanding a simple menu item takes days if not weeks. We asked the clients to give us a knowledge transfer about the application, but to no avail. Our manager often tells us to start coding even if the design document is not complete or the requirements are not clear. We start by coding the part of the requirements that seems clear and wait for the rest. This usually delays the deployment by a month. In extreme cases we would have very few errors in development and production, but the clients would say that's not what they asked for. That would start a blame game and a series of change requests, and we would end up developing something very different. My question is: how would you do development work if you don't know the functionality of the app fully? UPDATE: The development methodology isn't really my choice, and I am not my team's lead; it is the way things began. I tried to tell people about the advantages of agile, but to no avail. Besides, I don't think my team has the necessary mindset to work in an agile environment.

    Read the article

  • 302: this blog will be closed

    - by preishuber
    After nearly 7 years I will discontinue blogging on this site. My resources are limited. You can reach my German blog, which is used to support my customers. Looking back on a long and interesting journey:
    ASP.NET by ScottGu - That was the reason I joined this site and supported Microsoft as much as I could. For that I was honored as an ASP.NET MVP; thanks again. I met Scott several times. Great guy!
    Forums - I left the NNTP forums a few years ago, and now Microsoft has closed them. It was my idea ;-)
    AJAX - Was the wrong way; jQuery won the game.
    IIS7 - That is really a great platform, and the IIS team rules. I am sad that it is so quiet around that topic.
    ASP.NET after 2.0 - Is no longer my world. I love ASP.NET and ASP.NET server controls. I hate the discussion about how to follow the holy rules of MVC. Microsoft has dropped the goal of bringing ASP.NET to #1 and accepted that PHP is it.
    Facebook & Twitter - Microblogging is taking over a part of the blogging business. Shorter, faster, cheaper; or, as SteveB mentioned, do more with less.
    Google - Google is taking over the web. I use Bing every time I can, but Google has more options. Sorry, Microsoft, you will lose that game.
    Apple - That is not the biggest problem for Microsoft. The Ixxx takes over a small part of the market, but big money, and the customers are not strongly linked. New wave, new hype; game over, Apple.
    Silverlight - My new home. I can reuse a lot of my skills and love the possibilities. Silverlight will surpass WPF and strike Flash.
    Windows Phone 7 - My skills fit here too; I will just use it for fun. I am not really satisfied with what I heard from MIX. Guys from Redmond, I am sad to say you had the best smartphone OS and lost everything.
    The ADO vNext Story - That will be the next mystic point: WCF, REST, JSON, ATOM, and now OData, and nothing about SQL commands. LINQ and ORM are also not the final solution for multi-layered, disconnected, async scenarios. Personally, I prefer the OData idea and dislike the Swiss Army knife (German: Eierlegende Wollmilchsau) that is WCF.
    I am still on the INETA speakers board, and I am glad to come to your user group. In all other cases you can hire me through ppedv AG. Goodbye and have a good life.

    Read the article

  • Should CSS be listed on your resume under Languages?

    - by Sandeepan Nath
    I have some doubts, like whether CSS should be put under Languages or not. Wikipedia says "Cascading Style Sheets (CSS) is a style sheet language...", but do people write CSS under the Languages section of a resume, along with PHP, etc.? Similarly, what about HTML? I have some doubt, and I don't want to sound like someone who is not aware of the trends. Just to give an example, currently I have the following languages, frameworks, technologies, etc. listed under the "Technical Expertise" section of my resume:
    Technical Expertise
    * Languages - Proficient: PHP 5, JavaScript, HTML, CSS, Sass. Beginner: Linux Bash.
    * Databases - MySQL 5.
    * Technologies - AJAX.
    * Frameworks/Libraries - Symfony, jQuery.
    * CMSes - WordPress.
    Although my domain is web development/design, I welcome domain-agnostic answers which can provide some generic ideas and reasoning. I have seen a lot of people mess up these sections (even more seriously than my doubts :) ), putting things under the wrong sub-headings and thus putting a big question mark on their understanding of those things. I don't know much about XML, Comet technology, etc. Considering those are included too, what things should be put under Languages? E.g., should CSS be put under Languages? Please give some reasoning to support your views. Where should the others (XML, Comet, cURL, etc.) be put? I welcome some examples of how you put it. Or do you have an additional Keywords section where you write all the unsortables? Considering a set of standards like the W3C standards, do you have a Standards sub-heading? I guess I have put the contents of the other sections okay, but do let me know your ideas and reasoning. After all, I understand there may not be a single answer to this, but let's see what the trend is. Thanks.
    Updates: Further, do you mention design patterns you have used? Web services, etc.? Where do you mention SOAP, XML, etc.?

    Read the article

  • Version control and data provenance in charts, slides, and marketing materials that derive from code output

    - by EMS
    I develop as part of a small team that mostly does research and statistics work. From the output of our code, other teams often create promotional materials, slides, presentations, etc. We run into a big problem because the marketing team (non-programmers) tends to use Excel, Adobe products, or other tools to carry out their work, and they just want easy-to-use data formats from us. This leads to data provenance problems. We see email chains with attachments from 6 months ago and someone saying, "Hey, who generated this data? Can you generate more of it with the recent 6 months of results added in?" I want to help the other teams use version control effectively (my team uses it reasonably well for the code, but every other team classically comes up with many excuses to avoid it). For version-controlling a software project whose participants are coders, I have a reasonable understanding of best practices and what to do. But for getting a team of marketing professionals to version-control marketing materials and associate metadata about the software used to generate the data for the charts, I'm a bit at a loss. Some of the goals I'd like to achieve: Data that supported a material should never be associated with a person. That is, it should never be the case that someone says, "Hey, Person XYZ, I see you sent me this data as an attachment 6 months ago; can you update it for me?" Rather, data should be associated with the code, and the code version, that was used to generate it, and perhaps with the team of people who maintain that code. Then requests for data updates are about executing a specific piece of code, with a known version number. I'd like this to be a process that works easily with the tech the marketing team already uses (e.g. Excel files, Adobe files, whatever). I don't want to burden them with needing to learn a bunch of new stuff just to use version control. They are capable folks, so learning something is fine. Ideally they could use our existing version control framework, but there are some issues around that. I think knowing some general best practices will be enough, though, and I can handle patching that into the way our stuff works now. Are there any goals I am failing to think about? What are the time-tested ways to do something like this?
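    One pattern that can help with the first goal is to have the exporting code stamp every file it hands over with the revision of the code that produced it. Below is a minimal sketch in Python, assuming the stats code lives in a Git repository; the file names, metadata fields, and regeneration command are illustrative placeholders, not anything from the original question.

        import csv
        import datetime
        import json
        import subprocess

        def git_revision():
            # Ask Git for the commit hash of the code generating this export.
            return subprocess.check_output(
                ["git", "rev-parse", "HEAD"], text=True
            ).strip()

        def export_with_provenance(rows, path):
            # Write the data file the marketing team asked for...
            with open(path, "w", newline="") as f:
                csv.writer(f).writerows(rows)
            # ...plus a sidecar recording exactly how to regenerate it.
            meta = {
                "generated_at": datetime.datetime.utcnow().isoformat() + "Z",
                "code_revision": git_revision(),
                "regenerate_with": "python export_stats.py",  # hypothetical command
            }
            with open(path + ".provenance.json", "w") as f:
                json.dump(meta, f, indent=2)

        # Hypothetical usage: marketing gets q1_stats.csv plus its provenance sidecar.
        export_with_provenance([["month", "orders"], ["2012-01", 42]], "q1_stats.csv")

    With a sidecar like that attached to every export, "who generated this?" turns into "check out revision X and rerun the export", no matter whose inbox the attachment surfaced from.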

    Read the article

  • ASP.NET Membership Password Hash -- .NET 3.5 to .NET 4 Upgrade Surprise!

    - by David Hoerster
    I'm in the process of evaluating how my team will upgrade our product from .NET 3.5 SP1 to .NET 4. I expected the upgrade to be pretty smooth, with very few, if any, upgrade issues. To my delight, the upgrade wizard said that everything upgraded without a problem. I thought I was home free, until I decided to build and run the application. A big problem was staring me in the face -- I couldn't log on. Our product is using a custom ASP.NET Membership Provider, but essentially it's a modified SqlMembershipProvider with some additional properties. My login was failing during the OnAuthenticate event handler of my ASP.NET Login control, right where it was calling my provider's ValidateUser method. After a little digging, it turned out that the password hash the membership provider computed for comparison against the stored password hash in the membership database tables was different: the hash generated by the .NET 4 code line did not match the one from my .NET 3.5 code line. (Tip: when upgrading, always keep a working debug copy of your app handy in case you have to step through a lot of code.) So it was a strange situation, but at least I knew what the problem was. Now the question was, "Why is it happening?" It turns out that a breaking change in .NET 4 is that the default hash algorithm changed to SHA256. Hey, that's great -- a stronger hashing algorithm. But what do I do with all the hashed passwords in my database that were created using SHA1? Well, you can make two quick changes to your app's web.config and everything will be OK. Basically, you need to override the default HashAlgorithmType property of your membership provider. Here are the two places to do that:
    1. At the beginning of your <system.web> element, add the following <machineKey> element:
        <system.web>
            <machineKey validation="SHA1" />
            ...
        </system.web>
    2. On your <membership> element under <system.web>, add the following hashAlgorithmType attribute:
        <system.web>
            <membership defaultProvider="myMembership" hashAlgorithmType="SHA1">
            ...
        </system.web>
    After that, you should be good to go! Hope this helps.
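    If you want to see the mismatch for yourself, you can approximate the provider's password encoding outside .NET. The Python sketch below assumes the stock SqlMembershipProvider scheme for hashed passwords (hash over the salt bytes followed by the UTF-16LE password bytes, then base64-encoded); the salt and password are made-up values, and a custom provider may well encode differently.

        import base64
        import hashlib

        def encode_password(password, b64_salt, algorithm):
            # Assumed encoding: hash(salt bytes + UTF-16LE password bytes), base64.
            salt = base64.b64decode(b64_salt)
            digest = hashlib.new(algorithm, salt + password.encode("utf-16-le")).digest()
            return base64.b64encode(digest).decode()

        salt = base64.b64encode(b"0123456789abcdef").decode()  # illustrative salt
        print(encode_password("p@ssw0rd", salt, "sha1"))    # what the 3.5 code line stored
        print(encode_password("p@ssw0rd", salt, "sha256"))  # what .NET 4 now computes

    The two printed values differ, which is exactly why hashes stored under .NET 3.5 will never validate on .NET 4 until the hash algorithm is pinned back to SHA1 as shown above.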

    Read the article
