Search Results

Search found 3574 results on 143 pages for 'difficult'.

Page 60 of 143

  • Self Service Reporting With PowerPivot

    - by blakmk
    There are so many cool new features in SQL Server 2008 R2 that it was difficult for me to pick a topic for T-SQL Tuesday. But the one that I am now a secret fan of, I once resented for its creation. Let me explain: for years I have encountered reporting systems cobbled together in tools like Access and Excel, built by "database hobbyists" who had no formal training in database design or best practices. They would take their monstrosities as far as they could go before ultimately they stopped working or the person who wrote them left the company. At that point it would become the resident DBA's problem to support it as a live application. So when I first heard of PowerPivot, a sense of déjà vu overtook me and I felt like the guy in the Austin Powers movie, knowing the inevitable is coming but somehow unsure how to get out of the way. But when I eventually saw it in action, I quickly realised that it is a very powerful tool. It has a much smaller "time to market" than traditional BI architectures. Combined with the new features of Excel, some pretty impressive dashboards can be produced. Of course PowerPivot is not a magic bullet, and along with potential scalability issues there are the usual issues such as master data management and data quality that cannot be overcome easily with PowerPivot. As a tool, though, it has potential. Traditional BI is expensive, both in terms of time and the amount of resources it takes to deliver the system. The time lag between an analyst or a commercial accountant requesting reports and the report being delivered can make a huge commercial difference. I have observed companies where empowered end users become extremely productive when allowed to plough into various disparate datasets. It may not be the correct way or the most sustainable, but it's cheap and quick. In these times when budgets are being slashed and we are forced to deliver more with less, why not empower the end user with a tool that is designed for exactly this task.... @blakmk

    Read the article

  • Adding complexity by generalising: how far should you go?

    - by marcog
    Reference question: http://stackoverflow.com/questions/4303813/help-with-interview-question The above question asked to solve a problem for an NxN matrix. While there was an easy solution, I gave a more general solution that solves the more general problem for an NxM matrix. A handful of people commented that this generalisation was bad because it made the solution more complex. One such comment is voted +8. Putting aside the hard-to-explain voting effects on SO, there are two types of complexity to be considered here: runtime complexity (how fast the code runs) and code complexity (how difficult the code is to read and understand). The question of runtime complexity is something that requires a better understanding of the input data today and what it might look like in the future, taking the various growth factors into account where necessary. The question of code complexity is the one I'm interested in here. By generalising the solution, we avoid having to rewrite it in the event that the constraints change. However, at the same time it can often result in complicating the code. In the reference question, the code for NxN is easy to understand for any competent programmer, but the NxM case (unless documented well) could easily confuse someone coming across the code for the first time. So, my question is this: where should you draw the line between generalising and keeping the code easy to understand?

    Read the article

  • Algorithm for dynamically calculating a level based on experience points?

    - by George
    One of the struggles I've always had in game development is deciding how to implement experience points attributed to gaining a level. There doesn't seem to be a pattern to gaining a level in many of the games I've played, so I assume they have a static dictionary table which maps experience points to levels, e.g. 0 XP = level 1, 100 XP = level 2, 175 XP = level 3, 280 XP = level 4, 800 XP = level 5, and so on. There isn't a rhyme or reason why 280 points is equal to level 4; it just is. I'm not sure how those levels are decided, but it certainly wouldn't be dynamic. I've also thought about the possibility of exponential levels, so as not to have to keep a separate lookup table, e.g. 0, 100, 200, 400, 800, 1600, 3200, 6400 XP for levels 1 through 8, but that seems like it would grow out of control rather quickly: towards the upper levels, the enemies in the game would have to provide a whopping amount of experience to level up, and that would be too difficult to control. Levelling would become an impossible task. Does anyone have any pointers, or methods they use to decide how to level a character based on experience? I want to be fair in levelling and I want to stay ahead of the players so as not to worry about constantly adding new experience/level lookups.
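    A common middle ground between a hand-tuned lookup table and straight doubling is a polynomial curve: required experience grows faster than linear but much slower than exponential, and the level can be derived from a running total by walking the same curve, so no lookup table is needed. The sketch below (in C) is an illustration of that idea rather than anything from the question; the base and exponent constants are assumptions you would tune for your own game.

    ```c
    #include <math.h>
    #include <stdio.h>

    /* Hypothetical curve: total XP needed to reach a given level.
     * base = XP for level 2; exponent between ~1.5 (gentle) and ~2.2 (steep). */
    static int xp_for_level(int level)
    {
        const double base = 100.0;
        const double exponent = 1.8;
        if (level <= 1)
            return 0;
        return (int)(base * pow(level - 1, exponent));
    }

    /* Derive the level from a running XP total by walking the same curve. */
    static int level_for_xp(int xp)
    {
        int level = 1;
        while (xp >= xp_for_level(level + 1))
            level++;
        return level;
    }

    int main(void)
    {
        for (int level = 1; level <= 10; level++)
            printf("level %2d needs %6d XP\n", level, xp_for_level(level));
        printf("a character with 2000 XP is level %d\n", level_for_xp(2000));
        return 0;
    }
    ```

    Rebalancing progression then means adjusting two constants instead of re-deriving an entire table.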

    Read the article

  • Efficient coding in Visual Studio (or another IDE), with touch typing

    - by cheeesus
    Moving the cursor to another position in the code is one of the most frequent actions when coding. I don't write my programs from the beginning to the end, like a letter. However, moving the cursor requires me to move my right hand to the arrow keys or to the mouse, which feels like an interruption to my writing rhythm, since I'm using touch typing. I want my hands to rest on the keyboard. It's difficult to explain what I mean, but I think every coder using touch typing knows what I mean. I tried many things, like defining some shortcuts as surrogate arrow keys (Shift+Alt+J, K, L, I), or buying a keyboard with a TrackPoint, trackpad, or trackball on it, but I have not yet found a satisfying solution to the problem. What is the best solution you know of, regardless of which IDE you use? Edit: Thank you for your answers. I am using a lot of keyboard shortcuts, but I think using a Vim plugin in Visual Studio would interfere too much with the shortcuts I am used to. Also, I have a keyboard with a built-in mouse, but I'm still looking for a better solution.

    Read the article

  • ArchBeat Link-o-Rama for 2012-06-27

    - by Bob Rhubart
    Resource Kit: Oracle Exadata for the Communications industry In addition to several customer case studies, in video and white paper formats, this resource kit also includes a technical overview of Oracle Exadata Database Machine and a product datasheet. Registration is required for those who don't already have a free Oracle.com membership account. Call for Nominations: Oracle Fusion Middleware Innovation Awards 2012 - Win a free pass to #OOW12 These awards honor customers for their cutting-edge solutions using Oracle Fusion Middleware. Either a customer, their partner, or an Oracle representative can submit the nomination form on behalf of the customer. Submission deadline: July 17. Winners receive a free pass to Oracle OpenWorld 2012 in San Francisco. BPM – Disable DBMS job to refresh B2B Materialized View | Mark Nelson "If you are running BPM and you are not using B2B, you might want to disable the DBMS job that refreshes the B2B materialized view," says Fusion Middleware A-Team blogger Mark Nelson. Learn how in his short post. A Universal JMX Client for Weblogic – Part 1: Monitoring BPEL Thread Pools in SOA 11g | Stefan Koser A concise how-to from Oracle Fusion Middleware A-Team blogger Stefan Koser. Thought for the Day "There are two ways of constructing a software design: One way is to make it so simple that there are obviously no deficiencies, and the other way is to make it so complicated that there are no obvious deficiencies. The first method is far more difficult." — C. A. R. Hoare Source: SoftwareQuotes.com

    Read the article

  • Missing X-Spam-Status header

    - by Walt Stoneburner
    I recently upgraded to Ubuntu 14.04.1 LTS (trusty) and have followed the directions in https://help.ubuntu.com/14.04/serverguide/mail-filtering.html and am sending and receiving mail just fine. While I do see X-Virus-Scanned headers in my messages, which suggests mail is indeed being processed, I do not see any X-Spam-Level or X-Spam-Score headers being added to messages. This makes downstream procmailrc and client-side filtering ... more difficult. While having $final_spam_destiny = D_DISCARD in /etc/amavis/conf.d/20-debian_defaults does greatly reduce spam to my inbox, I had concerns about false positives prior to tuning and didn't know where they were going, so I have set it to D_PASS for the time being. This exposed the problem. I'm not sure where to look to start diagnosing the problem (otherwise I'd post a suspect configuration file). /etc/amavis/conf.d/15-content_filter_mode has the lines uncommented to enable virus and spam checks, and virus checking appears to be working according to the headers. SpamAssassin certainly seems to be starting just fine, too:
    SpamAssassin debug facilities: info
    SA info: zoom: able to use 360/360 'body_0' compiled rules (100%)
    SpamAssassin loaded plugins: AskDNS, AutoLearnThreshold, Bayes, BodyEval, Check, DKIM, DNSEval, FreeMail, HTMLEval, HTTPSMismatch, Hashcash, HeaderEval, ImageInfo, MIMEEval, MIMEHeader, Pyzor, Razor2, RelayEval, ReplaceTags, Rule2XSBody, SPF, SpamCop, URIDNSBL, URIDetail, URIEval, VBounce, WLBLEval, WhiteListSubject
    SpamControl: init_pre_fork on SpamAssassin done
    I've also set $log_level = 2; in /etc/amavis/conf.d/50-user and don't see any obvious errors rolling by in the logs. Q: Any recommendations of what to try next? UPDATE (it appears that I already have the right setting):
    /etc/amavis/conf.d$ grep sa_tag_level_deflt *
    20-debian_defaults:# $sa_tag_level_deflt = 2.0;  # add spam info headers if at, or above that level
    20-debian_defaults:$sa_tag_level_deflt = -999;   # add spam info headers if at, or above that level

    Read the article

  • Ways to dynamically render a real world 3d environment in Unity3D

    - by Jake M
    Using Unity3D and C# I am attempting to display a 3D version of a real-world location. Inside my Unity3D app, the user will specify the GPS coordinates of a location, then my app will have to generate a 3D plane (it doesn't have to be a plane) of that location. The plane will show a 500 metre by 500 metre 3D snapshot of that location. How would you suggest I achieve this in Unity3D? What methodology would you use? NOTE: I understand that this is a very difficult endeavour (rendering real-world locations dynamically in Unity3D), so I expect to perform many actions to achieve this. I just don't know all the technologies out there and which would be best for my needs. For example: Suggested methodology 1: prompt the user to specify GPS coords; use the Google Earth API and HTTP to programmatically obtain a .kmz file describing that location (not sure if Google Earth provides that capability, does it?); unzip the .kmz so I have the .dae file; convert that file to a .3ds file using a third-party converter (is there a converter that exists?); import the .3ds into Unity3D at runtime as a plane (is this possible?). Suggested methodology 2: prompt the user to specify GPS coords; use the Google Earth API and HTTP to programmatically obtain a .kmz file describing that location; unzip the .kmz so I have the .dae file; parse the .dae file using my own C# parser I will write (do you think it's possible to write a .dae parser that can parse the .dae into an array of Vector3 that describes the height map of that location?); dynamically create a plane in Unity3D and populate it with my array/list of Vector3 points (is it possible to create a plane this way? Maybe I am meant to create a mesh instead of a plane?). Can you think of any other ways I could render a real-world 3D environment in Unity3D?
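    Whichever data source ends up working, the last step of methodology 2 (turning a grid of elevation samples into a renderable surface) is mechanical: one vertex per sample and two triangles per grid cell. Below is a rough, language-neutral sketch of that step in C, assuming you already have a rows x cols array of heights covering the 500 m patch (rows and cols at least 2); in Unity the same loops would fill Mesh.vertices and Mesh.triangles. The names are illustrative, not taken from any particular library.

    ```c
    #include <stdlib.h>

    /* Assumed input: a rows x cols grid of elevation samples (metres) covering a
     * size_m x size_m patch. Output: a vertex array and a triangle index list. */
    typedef struct { float x, y, z; } Vec3;

    void build_terrain(const float *heights, int rows, int cols, float size_m,
                       Vec3 **out_verts, int **out_indices, int *out_index_count)
    {
        Vec3 *verts = malloc(sizeof(Vec3) * rows * cols);
        int *idx = malloc(sizeof(int) * (rows - 1) * (cols - 1) * 6);
        int n = 0;

        /* One vertex per sample, spread evenly over the patch. */
        for (int r = 0; r < rows; r++)
            for (int c = 0; c < cols; c++) {
                verts[r * cols + c].x = size_m * c / (cols - 1);
                verts[r * cols + c].y = heights[r * cols + c];
                verts[r * cols + c].z = size_m * r / (rows - 1);
            }

        /* Two triangles per grid cell. */
        for (int r = 0; r < rows - 1; r++)
            for (int c = 0; c < cols - 1; c++) {
                int i = r * cols + c;
                idx[n++] = i;     idx[n++] = i + cols; idx[n++] = i + 1;
                idx[n++] = i + 1; idx[n++] = i + cols; idx[n++] = i + cols + 1;
            }

        *out_verts = verts;
        *out_indices = idx;
        *out_index_count = n;
    }
    ```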

    Read the article

  • Finally! Entity Framework working in fully disconnected N-tier web app

    - by oazabir
    Entity Framework was supposed to solve the problems of LINQ to SQL, which requires endless hacks to make it work in an n-tier world. Not only did Entity Framework solve none of the L2S problems, but it also made things even more difficult to use and hack for n-tier scenarios. It's somehow half way between a fully disconnected ORM and a fully connected ORM like LINQ to SQL. Some useful features of LINQ to SQL are gone – like automatic deferred loading. If you try to do a simple select with join, insert, update, or delete in a disconnected architecture, you will realize not only that you need to make fundamental changes from the top layer to the very bottom layer, but also endless hacks in basic CRUD operations. I will show you in this article how I have added custom CRUD functions on top of EF's ObjectContext to make it finally work well in a fully disconnected N-tier web application (my open source Web 2.0 AJAX portal – Dropthings) and how I have produced a 100% unit-testable, fully n-tier compliant data access layer following the repository pattern. http://www.codeproject.com/KB/linq/ef.aspx In .NET 4.0, most of the problems are solved, but not all. So, you should read this article even if you are coding in .NET 4.0. Moreover, there's enough insight here to help you troubleshoot EF-related problems. You might think "Why bother using EF when LINQ to SQL is doing good enough for me." LINQ to SQL is not going to get any innovation from Microsoft anymore. Entity Framework is the future of the persistence layer in the .NET Framework. All the innovations are happening in the EF world only, which is frustrating. There's a big jump in EF 4.0. So, you should plan to migrate your L2S projects to EF soon.

    Read the article

  • Evolution of mainstream programming languages: simplicity versus complexity.

    - by Giorgio
    I had posted this question on http://stackoverflow.com but it was suggested that it may be more appropriate to post it on this forum. I did a quick search on this site and it seems to me that this question has not been asked yet. Please give me a hint if the topic has been raised already by someone else. Update: I have rephrased this question, removed personal opinions and made it shorter. I hope in this way it is better suited for this forum. Looking at the recent development of Java (Java 7) and C++ (C++0x), I see that new features are added to these languages. For sure this makes it easier to use certain programming idioms, adding to the productivity of developers. On the other hand, there might be the following risks: a language becomes too big, complex, and difficult to understand; it lacks coherence in the design, e.g. if it mixes different paradigms like object-orientation and functional programming, which might not fit well together. Questions: what is more important to you as a developer: to have a rich language that captures a large collection of programming idioms, or to have a small language that aims at coherence and simplicity (of course, with a good deal of libraries and tools accompanying it)? Or is it possible to have both? With respect to these issues: how do you judge the current evolution of mainstream programming languages like Java or C++? Are they becoming too complex, less intuitive? Do they have enough features? Do they need more? Are they still easy enough to understand and use?

    Read the article

  • Grading an algorithm: Readability vs. Compactness

    - by amiregelz
    Consider the following question in a test \ interview: Implement the strcpy() function in C: void strcpy(char *destination, char *source); The strcpy function copies the C string pointed to by source into the array pointed to by destination, including the terminating null character. Assume that the size of the array pointed to by destination is long enough to contain the same C string as source, and does not overlap in memory with source. Say you were the tester; how would you grade the following answers to this question?
    1)
        void strcpy(char *destination, char *source) {
            while (*source != '\0') {
                *destination = *source;
                source++;
                destination++;
            }
            *destination = *source;
        }
    2)
        void strcpy(char *destination, char *source) {
            while (*(destination++) = *(source++))
                ;
        }
    The first implementation is straightforward - it is readable and programmer-friendly. The second implementation is shorter (one line of code) but less programmer-friendly; it's not so easy to understand the way this code works, and if you're not familiar with operator precedence and the value of an assignment expression, then this code is a problem. I'm wondering if the second answer would show more complexity and more advanced thinking, in the tester's eyes, even though both algorithms behave the same, and although code readability is considered to be more important than code compactness. It seems to me that since writing an implementation this compact is more difficult, it will show a higher level of thinking as an answer in a test. However, it is also possible that a tester would consider the second answer not good because it's not readable. I would also like to mention that this is not specific to this example, but general for code readability vs. compactness when implementing an algorithm, specifically in tests \ interviews.

    Read the article

  • The Iron Bird Approach

    - by David Paquette
    It turns out that designing software is not so different from designing commercial aircraft. I just finished watching a video that talked about the approach that Bombardier is taking in designing the new C Series aircraft. I was struck by the similarities to agile approaches to software design. In the video, Bombardier describes how they are using an Iron Bird to work through a number of design questions well before any version of the aircraft can actually fly. The Iron Bird is a life-size replica of the plane. Based on the name, I would assume the replica is built from a very heavy material that could never fly. Using this replica, Bombardier is able to validate certain assumptions, such as the length of each wire in the electrical system. They are also able to confirm that some parts are working properly (like the rudders). They even go as far as to have a complete replica of the cockpit. This allows Bombardier to put pilots in the cockpit to run through simulated take-off and landing sequences. The basic tenets of the approach seem to be: validate your design early with working prototypes, and get feedback from users early, well in advance of finishing the end product. In software development, we tend to think of ourselves as special. I often tell people that it is difficult to draw comparisons to building items in the physical world ("Building software is nothing like building a skyscraper"). After watching this video, I am wondering if designing/building software is actually a lot like designing/building commercial aircraft. Watch the video here (http://www.theglobeandmail.com/report-on-business/video/video-selling-the-c-series/article4400616/)

    Read the article

  • PeopleSoft 8.52 iPad Certification

    - by Dave Bain
    One of the real gems in the PeopleTools 8.52 release is the certification of PeopleSoft applications running in the Safari browser on an iPad. It is nice that PeopleSoft is not constrained by technology like Adobe Flex/Flash, so announcements like "Adobe drops plans for mobile Flash support" do not limit our mobile solution to custom mobile development. Some parts of PeopleSoft applications operate better on iPads than others. One of the best is WorkCenters. WorkCenters were new in PeopleTools 8.51 and we are starting to see more and more adoption of them. WorkCenters are role-based landing pages that eliminate difficult navigation by providing access to most of the links, pages, and reports a user in a role needs. One of the nicest I've seen is the Supply Manager Workspace. Here are some links to screenshots of what a WorkCenter looks like on an iPad: the standard PeopleSoft login page; the Supply Manager Workspace full screen on an iPad; an up-close view of an analytic (the iPad has a great zoom interface); and touching one of the analytics to drill into the details. Go ahead and give it a try. WorkCenters and dashboards are starting to show up across applications. For a quick one to try, navigate to the PeopleTools->Integration Broker->Integration Network WorkCenter. It's new in PeopleTools 8.52.

    Read the article

  • Oracle Java Embedded Client 1.1 Released

    - by Roger Brinkley
    Yesterday an update release of Oracle Java Embedded Client (OJEC) 1.1 quietly slipped out the door for general availability. Until last year it was pretty difficult to get your hands on either a Connected Limited Device Configuration (CLDC) implementation for small devices or a Connected Device Configuration (CDC) implementation for medium devices without a substantial initial commitment. But with the release of OJWC (CLDC) and OJEC (CDC) last year that has changed. OJEC 1.1 is a binary distribution designed for installation on medium configurations: a mid-range processor in a cost-sensitive hardware environment, with a footprint anywhere from 3.5 MB to 8 MB, needing seamless upgrades and a low startup time. There are headless as well as headed versions available. It is intended for devices such as Blu-ray Disc players, set-top boxes, residential gateways, VoIP phones, and similar. From a software point of view, OJEC is the Java runtime platform implementation of the Connected Device Configuration (CDC v1.1, JSR-218), Foundation Profile (FP v1.1, JSR-219), and Personal Basis Profile (PBP v1.1, JSR-217), and includes the optional packages RMI (JSR 66), JDBC (JSR 169), XML API for Java ME (JSR 280), and Java TV (JSR-927). New to this release is support for the XML API (JSR 280) and a number of bug fixes and performance enhancements, including improved Just-in-Time (JIT) compilation for the x86 chipset architecture. The platforms supported include ARMv5, ARMv6/ARMv7, MIPS 32 74K, and x86 in headless mode. For embedded developers there are a number of advantages to using Java, and if you have shied away from the Java ME edition in the past I would encourage you to look into the updated version of OJEC 1.1.

    Read the article

  • Problems with Maverick upgrade

    - by altenuta
    I upgraded to Maverick 10.10 from Lucid. I have an old Toshiba Satellite with a 1.1 GHz processor and 256 MB of RAM. Initially I couldn't get my wireless to work. That solved itself after installing various updates and programs. The problems that remain are: I have to authorize at least 2 times at start-up (this machine is Ubuntu only; no boot loader screen); I have a ton of programs and system directories in my home folder (is this normal?); it is difficult to wake the computer from sleep, so usually I just shut it down and restart (tonight I waited and got a message about corrupt memory); and the computer takes forever to do just about everything, being slow to start programs or to do things on the web. I am a longtime Mac user (since 1986). I also manage a network of several windoze machines. I am definitely a GUI guy and do very little in the terminal, so I really need to know where to begin to get things straightened out. Can I rescue this machine without wiping it and doing a fresh install? This is basically a hobby machine. Aside from all the programs and upgrades I've installed, I have almost no files or documents to worry about saving. Anyone have any ideas about the problems I'm having and the best way to proceed? Thanks, Al

    Read the article

  • how do you remember programming related stuff?

    - by dan leadgy
    How do you remember programming-related stuff? Have you ever had the feeling that you encountered the error you have now a few years ago, and you could swear you knew the cause then, but now you've forgotten it? Did you work with XSL's string parsing some time ago, but now you can't remember exactly which string functions XSL provides, and you have to start from scratch? Or perhaps you forget about some feature from Apache Commons, like "filtering a collection by some predicate", that you surely used in the past. So how do you do it? I tried having a blog, but when I develop apps I never find the time to update the blog or write about my experiences. Also, using a wiki is a nice thing, but then I found it difficult to keep a clean separation between them, since many times I needed to change a blog post to add new information about that topic. This made me think that I actually should have put that topic in the wiki instead of the blog. Do you have any systems that help you remember your programming experience? What's your setup?

    Read the article

  • Showing content from pages at different URL's (masking), possibly with .htaccess

    - by zigojacko
    If I have URLs like domain.com/category/widgets/filter/blue and domain.com/category/widgets/filter/red, and it is pretty difficult to reconstruct them to something like domain.com/category/blue-widgets and domain.com/category/red-widgets, is there any way at all that I can use URL rewrites or anything else with .htaccess or on the server to display the URL as domain.com/category/blue-widgets on the domain.com/category/widgets/filter/blue page? I've looked into masking URLs but got nowhere, and this has been bugging me for almost 6 months now. Is there any way to achieve what I want to do? FYI: this is a Magento website, and I want to implement the above process for potentially hundreds of URLs. Edit (to respond to @kkugelmann's answer): I couldn't get your proposed RewriteRule to make a difference at all in the .htaccess file, so I started testing a few things in an .htaccess tester. The proposed RewriteRule didn't work in the tester; some variations did, but adding any of those RewriteRules to the website's .htaccess file did not rewrite the URL at all. Edit 2: By the way, if I add [R=301,L] to the end of the rewrite rule, it does then rewrite the URL, but of course it 301-redirects it as well, which is unwanted behaviour. Edit 3: I found another question with the same issue, and an accepted answer that solved the problem, which seemed to be something to do with using mod_proxy and the [P] flag on the rule (if I try this, the page 404s).

    Read the article

  • Efficiency concerning thread granularity

    - by MaelmDev
    Lately, I've been thinking of ways to use multithreading to improve the speed of different parts of a game engine. What confuses me is the appropriate granularity of threads, especially when dealing with single-instruction-multiple-data (SIMD) tasks. Let's use line-of-sight detection as an example. Each AI actor must be able to detect objects of interest around them and mark them. There are three basic ways to go about this with multithreading: don't use threading at all; create a thread for each actor; or create a thread for each actor-object combination. Option 1 is obviously going to be the least efficient method. However, choosing between the next two options is more difficult. Using only one thread per actor still runs through every object in series instead of in parallel. However, are CPUs able to create and join threads at the granularity posed in Option 3 efficiently? It seems like that many calls to the OS could be really slow, and could vary enormously between different hardware.
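    One option the question doesn't list, and that tends to work well in practice, is a small fixed pool of threads (roughly one per core), each handed a contiguous batch of actors, so thread creation and join costs are paid a handful of times per frame rather than per actor or per actor-object pair. The sketch below (C with pthreads) is only an illustration of that batching idea; the Actor struct and the distance check standing in for a real line-of-sight test are assumptions, not from the question.

    ```c
    #include <pthread.h>
    #include <stdio.h>

    #define NUM_ACTORS  1024
    #define NUM_THREADS 4                 /* roughly one worker per core */

    typedef struct { float x, y; int visible_count; } Actor;   /* hypothetical actor */

    static Actor actors[NUM_ACTORS];

    /* Hypothetical per-actor work: each thread writes only its own actors'
     * visible_count and reads positions that nobody modifies, so no locking. */
    static void update_line_of_sight(Actor *self)
    {
        self->visible_count = 0;
        for (int i = 0; i < NUM_ACTORS; i++) {
            const Actor *other = &actors[i];
            if (other == self)
                continue;
            float dx = other->x - self->x, dy = other->y - self->y;
            if (dx * dx + dy * dy < 100.0f)   /* stand-in for a real LOS test */
                self->visible_count++;
        }
    }

    typedef struct { int first, last; } Range;

    static void *worker(void *arg)
    {
        Range *r = arg;
        for (int i = r->first; i < r->last; i++)   /* one contiguous batch per thread */
            update_line_of_sight(&actors[i]);
        return NULL;
    }

    int main(void)
    {
        pthread_t threads[NUM_THREADS];
        Range ranges[NUM_THREADS];
        int per_thread = NUM_ACTORS / NUM_THREADS;

        for (int t = 0; t < NUM_THREADS; t++) {
            ranges[t].first = t * per_thread;
            ranges[t].last  = (t == NUM_THREADS - 1) ? NUM_ACTORS : (t + 1) * per_thread;
            pthread_create(&threads[t], NULL, worker, &ranges[t]);
        }
        for (int t = 0; t < NUM_THREADS; t++)
            pthread_join(threads[t], NULL);

        printf("actor 0 sees %d others\n", actors[0].visible_count);
        return 0;
    }
    ```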

    Read the article

  • Is there a Source Insight alternative?

    - by hansioux
    I am not a developer, but for my work I trace a lot of code. It is actually rather difficult reading other people's code, especially in bigger projects. Source Insight is a great application that stores all the symbols in a database, so when you see a new function being called you can click on it and see how the function is written. You can see all the referrers of an object or jump to a caller. You don't need to break your train of thought and think up shell commands just to find these things every time you run into a new variable/structure/function from some other file. I have it running on WINE, but there are little glitches that sometimes get in the way. I know people will mention cscope; I've tried it, but it really isn't the same. So, with so many huge open source projects out there for Ubuntu, are there native tools to help read them efficiently? EDIT: Thanks for the suggestions, but does Code::Blocks or CodeLite provide the ability to see the function that the mouse clicked on without jumping to it, so I can see the caller and callee at the same time?

    Read the article

  • How to REALLY start thinking in terms of objects?

    - by Mr Grieves
    I work with a team of developers who all have several years of experience with languages such as C# and Java. Most of them are young enough to have been shown OOP as the standard way to develop software at university, and are very comfortable with concepts such as inheritance, abstraction, encapsulation and polymorphism. Yet many of them, and I have to include myself, still tend to create classes which are meant to be used in a very functional fashion. The resulting software is often several smaller classes which correctly represent business objects, which get passed through larger classes which only supply ways to modify and use those objects (functions). Large, complex, difficult-to-maintain classes named Manager are usually the result of such behaviour. I can see two theoretical reasons why people might write this type of code: it's easy to start thinking of everything in terms of the database, and deep down, for me, a computer handling a web request feels more like a functional operation than an object-oriented operation when you think about request handlers, threads, processes, CPU cores and CPU operations. I want source code which is easy to read and easy to modify. I have seen excellent examples of OO code which meet these objectives. How can I start writing code like this? How can I really start thinking in an object-oriented fashion? How can I share such a mentality with my colleagues?

    Read the article

  • Kids and programming: ScratchKara

    - by Mike Pagel
    Every now and then I have wondered how to share with my kids the excitement of creating something with your computer. Of course, today this is a bit more difficult, as they have seen 3D animation games and well-edited websites. I guess that's why they weren't all that hyped when I found my first computer model at our local recycling facility (an 8-bit Laser VZ-200 with rubber keys). When I finally got it up and running with an old analog TV set, they asked whether we could play soccer on it. Needless to say, my showing them how it remembers some BASIC commands and lists and executes them did not make any impression. So the question is for real: how do you get today's kids excited about programming? Just recently I looked again for environments that allow even young kids (mine are 7 and 9 years old now) to do something and have fun. Obviously any real, text-oriented programming language wouldn't work well. To cut it short: something really nice was built by the University of Oldenburg: ScratchKara. It is the perfect mixture of Kara, a simulation of a little ladybug, and Scratch, an authoring environment from MIT. ScratchKara allows kids to initially explore how the bug moves and turns by pressing the action buttons, then move towards sequencing commands through drag & drop, and eventually end up building algorithms with procedures and functions. Even though it is built for kids and beginners, the environment comes with debugging and refactoring, which I found more than amazing. My kids love it, and I have to admit I keep thinking about how to solve slightly more advanced problems with this language, which does not allow you to store any state information (other than your call stack). Yes, I am hooked, too... Once the language is understood, you can then move to one of the original Kara versions, where you can define the bug's behavior through finite state machines, Turing tables, Java and other textual languages. And from there, anything is possible.

    Read the article

  • How do you name your projects?

    - by Corey
    Naming is hard. Really, really hard. Even StackExchange is a prime example of this -- remember the huge domain name controversy that occurred when SE sites first started graduating? Anyway, I've got a project I'm working on, but I have no idea what to name it! This simple fact has caused production to cease, because I'm at the point where I want to create repositories and database tables, and I don't want to name everything "Untitled project" and have to change potentially hundreds of lines of code in the future. Also, I would like to collaborate with others, but it's difficult to be taken seriously if I refer to this as "some project I'm working on." It makes it seem like a new project in its infancy and doesn't garner a lot of interest. Just the simple act of having a name will make a huge impact on how it registers in others' minds. How do you guys name your projects? This particular one is a website, so not only do I need to find a good name, I need to find one with an available domain, which is next to impossible these days. How do you brainstorm? Who do you talk to (or not talk to)? Is there a "eureka!" moment when you stumble across something that works?

    Read the article

  • Tips for achieving "continual" delivery

    - by Ben
    A team is experiencing difficulty releasing software on a frequent basis (once every week). What follows is a typical release timeline. During the iteration: developers work on stories from the backlog on short-lived (this is enthusiastically enforced) feature branches based on the master branch; developers frequently pull their feature branches into the integration branch, which is continually built and tested (as far as the test coverage goes) automatically; and the testers have the ability to auto-deploy integration to a staging environment, which occurs multiple times per week and enables continual running of their test suites. Every Monday: there is a release planning meeting to determine which stories are "known good" (based on the testers' work) and hence will be in the release; if there is a known issue with a story, the source branch is pulled out of integration; no new code (only bug fixes requested by the testers) may be pulled into integration on this Monday, to ensure the testers have a stable codebase to cut a release from. Every Tuesday: the testers have tested the integration branch as much as they possibly can in the time available and there are no known bugs, so a release is cut and pushed out to the production nodes slowly. This sounds OK in practice, but we have found that it is incredibly difficult to achieve. The team sees the following symptoms: "subtle" bugs are found in production that were not identified in the staging environment; last-minute hot-fixes continue into the Tuesday; and problems in the production environment require roll-backs, which blocks continued development until a successful live deployment is achieved and the master branch can be updated (and hence branched from). I think test coverage, code quality, the ability to regression test quickly, last-minute changes and environmental differences are at play here. Can anyone offer any advice on how best to achieve "continual" delivery?

    Read the article

  • How to build a "traffic AI"?

    - by Lunikon
    A project I am working on right now features a lot of "traffic" in the sense of cars moving along roads, aircraft moving around an apron, etc. As of now the available paths are precalculated, so nodes are generated automatically for crossings, which themselves are interconnected by edges. When a character/agent spawns into the world it starts at some node and finds a path to a target node by means of a simple A* algorithm. The agent follows the path and ultimately reaches its destination. No problem so far. Now I need to enable the agents to avoid collisions and to handle complex traffic situations. Since I'm new to the field of AI I looked up several papers/articles on steering behaviour but found them to be too low-level. My problem consists less of the actual collision avoidance (which is rather simple in this case because the agents follow strictly defined paths) but of situations like one agent leaving a dead end while another one wants to enter exactly the same one, or two agents meeting at a bottleneck which only allows one agent to pass at a time but both need to pass it (according to the optimal route found before) and they need to find a way to let the other one pass first. So basically the main aspect of the problem would be predicting traffic movement to avoid deadlocks. Difficult to describe, but I guess you get what I mean. Do you have any recommendations on where to start looking? Any papers, sample projects or similar things that could get me started? I appreciate your help!
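    One way to frame the dead-end and bottleneck cases is as resource reservation: treat each narrow section as a resource an agent must acquire before committing to it, let agents already travelling in the same direction share it, and make opposing agents wait at the entrance or replan. The sketch below is a minimal illustration of that idea; the Corridor type and its fields are assumptions made for the example, not something from the project described.

    ```c
    #include <stdbool.h>
    #include <stdio.h>

    /* Hypothetical bookkeeping for one narrow road section (bottleneck or dead end). */
    typedef enum { DIR_NONE, DIR_FORWARD, DIR_BACKWARD } Direction;

    typedef struct {
        Direction current;   /* direction of the agents currently inside */
        int occupants;       /* how many agents are inside right now     */
    } Corridor;

    /* An agent asks for permission before entering; agents already moving the
     * same way may share the section, while opposing traffic must wait or replan. */
    bool corridor_try_enter(Corridor *c, Direction dir)
    {
        if (c->occupants == 0 || c->current == dir) {
            c->current = dir;
            c->occupants++;
            return true;
        }
        return false;   /* caller waits at the entrance or searches for another route */
    }

    void corridor_leave(Corridor *c)
    {
        if (--c->occupants == 0)
            c->current = DIR_NONE;
    }

    int main(void)
    {
        Corridor narrow = { DIR_NONE, 0 };
        printf("agent A enters: %d\n", corridor_try_enter(&narrow, DIR_FORWARD));            /* 1 */
        printf("agent B (opposite) enters: %d\n", corridor_try_enter(&narrow, DIR_BACKWARD)); /* 0 */
        corridor_leave(&narrow);
        printf("agent B retries: %d\n", corridor_try_enter(&narrow, DIR_BACKWARD));           /* 1 */
        return 0;
    }
    ```

    Making the A* cost of a reserved section temporarily high gives the waiting agent the choice between waiting and taking a detour, which covers most of the deadlock-prone cases without full traffic prediction.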

    Read the article

  • Making an advertising server ads from different ad networks

    - by John
    In India there are many ad networks (other than AdSense) who pay per acquisition or per lead, so JavaScript ad code is not required (fraud clicks don't matter as long as someone converts). An ad network will have many companies, and each company will have many banner sizes for its ads. Also, any ad may suddenly be stopped just because the company's target has been met. This is a common nuisance, since if we don't remove those URLs then that company will get conversions for free. I have a dozen sites, and removing the ads every now and then is difficult. Also, CPA-based ads may not convert at all, which means I'll need to remove non-performing ads regularly. I've gone through: How can I show multiple ad networks on my site? I've also looked at the DFP solution, but without AdSense they wouldn't let me open an account. I want to make an ad server wherein I'll feed new ads (banner image + link for click). I want to maintain categories there, like shoes, phones, books, etc. So if an ad is paused, I'll simply remove/pause the ad there while other ads in the category keep running. Also, changing ad code within sites will no longer be required. For example, let me have an ad category "clothing" where I can add ads from different companies. If one of my sites requests an ad from there, it'll randomly select an ad in this category and return it to the site for display. Removing/adding ads within this category will not affect the sites requesting those ads. Any idea how to implement it?
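    For the selection mechanism itself, here is a minimal sketch of the core data structure: each category holds a list of ads with a paused flag, and serving a request means picking a random non-paused entry. This is only an illustration of the idea; the struct fields and example URLs are assumptions, and a real ad server would keep this data in a database and expose it over HTTP.

    ```c
    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    /* Hypothetical in-memory model of one ad category ("clothing", "shoes", ...). */
    typedef struct {
        const char *image_url;
        const char *click_url;
        int         paused;      /* set to 1 when the company's target is met */
    } Ad;

    typedef struct {
        const char *name;
        Ad         *ads;
        int         count;
    } Category;

    /* Pick a random non-paused ad from a category; NULL if none are active. */
    const Ad *pick_ad(const Category *cat)
    {
        int active = 0;
        for (int i = 0; i < cat->count; i++)
            if (!cat->ads[i].paused)
                active++;
        if (active == 0)
            return NULL;

        int target = rand() % active;
        for (int i = 0; i < cat->count; i++) {
            if (cat->ads[i].paused)
                continue;
            if (target-- == 0)
                return &cat->ads[i];
        }
        return NULL;   /* not reached */
    }

    int main(void)
    {
        Ad clothing_ads[] = {
            { "http://example.com/banner1.png", "http://example.com/offer1", 0 },
            { "http://example.com/banner2.png", "http://example.com/offer2", 1 }, /* paused */
            { "http://example.com/banner3.png", "http://example.com/offer3", 0 },
        };
        Category clothing = { "clothing", clothing_ads, 3 };

        srand((unsigned)time(NULL));
        const Ad *ad = pick_ad(&clothing);
        if (ad)
            printf("serve %s -> %s\n", ad->image_url, ad->click_url);
        return 0;
    }
    ```

    Pausing an ad is then a single flag flip in one place, and the sites embedding the ad slots never need to change their code.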

    Read the article
