Search Results

Search found 17754 results on 711 pages for 'field description'.

Page 400/711 | < Previous Page | 396 397 398 399 400 401 402 403 404 405 406 407  | Next Page >

  • How do I get network working on a Dell n5010 - Ubuntu 10.04

    - by cyberroger
    Hey. I just formatted a Dell n5010 and installed Ubuntu 10.04. Now, when I try to access the Internet it doesn't find the available networks. Information about it: my network is set to broadcast its SSID; I already typed lspci, and it doesn't return my wireless card; I tried installing the Windows drivers from the manufacturer's CD using ndiswrapper, but it says it can't complete the installation because it can't find some "Device" folder. I don't know what else to do; I switch the wireless button on/off and it just can't see any wireless connection, including mine. Here is the information about the card (from lshw):

    ```
    *-network UNCLAIMED
       description: Network controller
       product: Broadcom Corporation
       vendor: Broadcom Corporation
       physical id: 0
       bus info: pci@0000:12:00.0
       version: 01
       width: 64 bits
       clock: 33MHz
       capabilities: pm msi pciexpress bus_master cap_list
       configuration: latency=0
       resources: memory:fbc00000-fbc03fff
    ```

    And this (lspci):

    ```
    00:00.0 Host bridge: Intel Corporation Core Processor DRAM Controller (rev 18)
    00:02.0 VGA compatible controller: Intel Corporation Core Processor Integrated Graphics Controller (rev 18)
    00:16.0 Communication controller: Intel Corporation 5 Series/3400 Series Chipset HECI Controller (rev 06)
    00:1a.0 USB Controller: Intel Corporation 5 Series/3400 Series Chipset USB2 Enhanced Host Controller (rev 06)
    00:1b.0 Audio device: Intel Corporation 5 Series/3400 Series Chipset High Definition Audio (rev 06)
    00:1c.0 PCI bridge: Intel Corporation 5 Series/3400 Series Chipset PCI Express Root Port 1 (rev 06)
    00:1c.1 PCI bridge: Intel Corporation 5 Series/3400 Series Chipset PCI Express Root Port 2 (rev 06)
    00:1c.2 PCI bridge: Intel Corporation 5 Series/3400 Series Chipset PCI Express Root Port 3 (rev 06)
    00:1c.4 PCI bridge: Intel Corporation 5 Series/3400 Series Chipset PCI Express Root Port 5 (rev 06)
    00:1d.0 USB Controller: Intel Corporation 5 Series/3400 Series Chipset USB2 Enhanced Host Controller (rev 06)
    00:1e.0 PCI bridge: Intel Corporation 82801 Mobile PCI Bridge (rev a6)
    00:1f.0 ISA bridge: Intel Corporation Mobile 5 Series Chipset LPC Interface Controller (rev 06)
    00:1f.2 SATA controller: Intel Corporation 5 Series/3400 Series Chipset 6 port SATA AHCI Controller (rev 06)
    00:1f.3 SMBus: Intel Corporation 5 Series/3400 Series Chipset SMBus Controller (rev 06)
    00:1f.6 Signal processing controller: Intel Corporation 5 Series/3400 Series Chipset Thermal Subsystem (rev 06)
    12:00.0 Network controller: Broadcom Corporation Device 4727 (rev 01)
    13:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8101E/RTL8102E PCI Express Fast Ethernet controller (rev 02)
    ```

    Thanks!

    Read the article

  • Term for unit testing that separates test logic from test result data

    - by mario
    So I'm not doing any unit testing. But I've had an idea to make it more appropriate for my field of use. Yet it's not clear if something like this exists, and if it does, what it would be called. Ordinary unit tests combine the test logic and the expected outcome. In essence the testing framework only checks for booleans (did this match, did the expected result occur). To generalize, the test code itself references the audited functions, and also makes the result values explicit, like so:

    ```
    unit::assert( test_me() == 17 )
    ```

    What I'm looking for is a separation of concerns. The test itself should only contain the tested logic. The outcome and result data should be handled by the unit testing or assertion framework. As an example:

    ```
    unit::probe( test_me() )
    ```

    Here the probe actually doubles as a collector on the first run, and afterwards as a verification method. The expected 17 is not mentioned in the test code, but stored or managed elsewhere. What is this scheme called? Or what would you call it? I hope I can find some actual implementations with the proper terminology. Obviously such a pattern is unfit for TDD. It's strictly for regression testing. Also obviously, it cannot be used for all cases. Only the simpler test subjects can be analyzed that way; for anything else the ordinary unit test setup and assertion steps are required. And yes, this could be accomplished manually by crafting a ResultWhateverObject, but that would still require hardwiring it to the test logic. Also keep in mind that I'm inquiring about use with scripting languages, and not about Java. I'm aware that the xUnit pattern originates there, and why it's hence as elaborate as it is. Btw, I've discovered one test execution framework which allows for shortening simple test notations to:

    ```
    test_me(); // 17
    ```

    While the result data is thus no longer coded in (it's a comment), that's still not a complete separation, and of course it would work only for scalar results.
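    What's described is close to what is often called snapshot, golden-master, or approval testing: record the output once, then assert against the recording. A minimal sketch of the probe idea in Python (the file format and the probe() API are invented for illustration, not an existing framework):

    ```python
    import json
    import os

    _STORE = "expected_results.json"

    def probe(name, value):
        """First run: record `value` as the expected result. Later runs: verify it."""
        results = {}
        if os.path.exists(_STORE):
            with open(_STORE) as f:
                results = json.load(f)
        if name not in results:
            results[name] = value          # collect on the first run
            with open(_STORE, "w") as f:
                json.dump(results, f, indent=2)
        else:
            assert results[name] == value, \
                f"{name}: expected {results[name]!r}, got {value!r}"

    def test_me():
        return 17

    probe("test_me", test_me())   # records 17 once, then guards against regressions
    ```

    Checking the recorded file in alongside the tests is what keeps the expected data out of the test logic itself.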

    Read the article

  • libgdx intersection problem between rectangle and circle

    - by Chris
    My collision detection in libgdx is somehow buggy. player.png is 20*80px and ball.png 25*25px. Code:

    ```java
    @Override
    public void create() {
        // ...
        batch = new SpriteBatch();
        playerTex = new Texture(Gdx.files.internal("data/player.png"));
        ballTex = new Texture(Gdx.files.internal("data/ball.png"));

        player = new Rectangle();
        player.width = 20;
        player.height = 80;
        player.x = Gdx.graphics.getWidth() - player.width - 10;
        player.y = 300;

        ball = new Circle();
        ball.x = Gdx.graphics.getWidth() / 2;
        ball.y = Gdx.graphics.getHeight() / 2;
        ball.radius = ballTex.getWidth() / 2;
    }

    @Override
    public void render() {
        Gdx.gl.glClearColor(1, 1, 1, 1);
        Gdx.gl.glClear(GL10.GL_COLOR_BUFFER_BIT);
        camera.update();

        // draw player, ball
        batch.setProjectionMatrix(camera.combined);
        batch.begin();
        batch.draw(ballTex, ball.x, ball.y);
        batch.draw(playerTex, player.x, player.y);
        batch.end();

        // update player position
        if (Gdx.input.isKeyPressed(Keys.DOWN))  player.y -= 250 * Gdx.graphics.getDeltaTime();
        if (Gdx.input.isKeyPressed(Keys.UP))    player.y += 250 * Gdx.graphics.getDeltaTime();
        if (Gdx.input.isKeyPressed(Keys.LEFT))  player.x -= 250 * Gdx.graphics.getDeltaTime();
        if (Gdx.input.isKeyPressed(Keys.RIGHT)) player.x += 250 * Gdx.graphics.getDeltaTime();

        // don't let the player leave the field
        if (player.y < 0) player.y = 0;
        if (player.y > 600 - 80) player.y = 600 - 80;

        // check collision
        if (Intersector.overlaps(ball, player)) Gdx.app.log("overlaps", "yes");
    }
    ```

    Read the article

  • Formatting: Group Multiline Alignment - Added

    - by Petr
    One week ago I added two new properties for formatting PHP code in NetBeans 7.1. In the Alignment category there are new properties for Group Multiline Alignment - Assignment and Array Initializer. The Assignment property influences the position of the char '=' in a group of lines with assignments. See the pictures below: on the left side the Assignment property is off, and on the right side the property is on. As you can see, when the property is switched on, the assignment char '=' is placed after the longest identifier in the group. A group is defined as a number of lines that contain the same type of assignment. A group ends at an empty line, a line containing only a comment, a different kind of expression, or the end of a block. This formatting option works for variable assignments, field initializations and constants. The second new property is for Array Initializer. Both properties are switched off by default. If you play with it, please file any problems in our Bugzilla.
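    Since the screenshots do not survive here, a rough textual rendering of the effect (the feature targets PHP, but plain Python assignments show the same alignment idea; the variable names are made up):

    ```python
    # Group Multiline Alignment - Assignment switched off:
    name = "player"
    max_health = 100
    respawn_delay = 5

    # ...and switched on: '=' aligned after the longest identifier in the group
    name          = "player"
    max_health    = 100
    respawn_delay = 5
    ```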

    Read the article

  • ArchBeat Link-o-Rama Top 20 for April 1-9, 2012

    - by Bob Rhubart
    The top 20 most popular items shared via my social networks for the week of April 1 - 8, 2012.

    - Webcast: Oracle Maximum Availability Architecture Best Practices w/Tom Kyte - April 12
    - Oracle Cloud Conference: dates and locations worldwide
    - Bad Practice Use Case for LOV Performance Implementation in ADF BC | Oracle ACE Director Andrejus Baranovskis
    - How to create a Global Rule that stores a document’s folder path in a custom metadata field | Nicolas Montoya
    - MySQL Cluster 7.2 GA Released
    - How to deal with transport level security policy with OSB | Jian Liang
    - Webcast Series: Data Warehousing Best Practices http://bit.ly/I0yUx1
    - Interactive Webcast and Live Chat: Oracle Enterprise Manager Ops Center 12c Launch - April 12
    - Is This How the Execs React to Your Recommendations? | Rick Ramsey
    - Unsolicited login with OAM 11g | Chris Johnson
    - Event: OTN Developer Day: MySQL - New York - May 2
    - OTN Member discounts for April: Save up to 40% on titles from Oracle Press, Pearson, O'Reilly, Apress, and more
    - Get Proactive with Fusion Middleware | Daniel Mortimer
    - How to use the Human WorkFlow Web Services | Oracle ACE Edwin Biemond
    - Northeast Ohio Oracle Users Group 2 Day Seminar - May 14-15 - Cleveland, OH
    - IOUG Real World Performance Tour, w/Tom Kyte, Andrew Holdsworth, Graham Wood
    - WebLogic Server Performance and Tuning: Part I - Tuning JVM | Gokhan Gungor
    - Crawling a Content Folio | Kyle Hatlestad
    - The Java EE 6 Example - Galleria - Part 1 | Oracle ACE Director Markus Eisele
    - Reminder: JavaOne Call For Papers Closing April 9th, 11:59pm | Arun Gupta

    Thought for the Day: "A distributed system is one in which the failure of a computer you didn't even know existed can render your own computer unusable." — Leslie Lamport

    Read the article

  • Frequent grub rescue error: unknown filesystem

    - by user3215
    It's really frustrating. Three weeks' back story: I faced the following error on Ubuntu 10.04 LTS Desktop:

    ```
    error: unknown filesystem
    grub rescue>
    ```

    I tried many solutions online apart from 1 and 2, and eventually I could not boot my system at all. Trying those solutions could not completely fix it, and then I had to face an initramfs prompt (I can't remember the exact error). As I could not fix it, a week back I did a fresh install of Ubuntu 12.04 Desktop, and after a week, i.e. now, I got the same grub rescue> error. Could anybody tell me what's the reason behind this error? Is there a problem with Ubuntu, or is it a problem on my side for some reason? My machine description: I have two hard disks; Windows is installed on one hard disk and Ubuntu on the other. I installed the operating systems (Vista, Ubuntu) in such a way that one is not known to the other. I mean to say, I'll unplug one hard disk and install Ubuntu, and similarly I'll unplug the Ubuntu hard disk and install Vista on the other hard disk. That's what I generally do (I actually don't want to run into grub/boot.ini issues by installing the OSs with both hard disks connected). Whenever I power on the computer, Windows boots by default, and I'll generally boot into Ubuntu by pressing F8 and selecting the Ubuntu hard disk. Any help is greatly appreciated! Thank you!

    Read the article

  • Platformer Enemy AI

    - by hayer
    I'm currently developing a platformer shooter. The game is multiplayer, and while my net code could use some real work I have put that off for now, so currently I'm trying to implement the AI. The game is pretty simple: players run around on a map filled with an X amount of zombies that try to eat their brains, classic and overused I know. Weapons spawn at random intervals around the map. The problem is that when the zombies find their prey they have to follow it for some while. And here is the problem: running the AI nav code seems to take forever. So here are the ideas I have come up with so far (a sketch of the first idea follows below):

    1. Have the AI update at different intervals, with a maximum of Y ms with no updates.
    2. Have the zombies assigned to groups of zombies. One is appointed the leader of the group, who finds the way to the player; the rest just follow the leader. If the leader dies, another one of the zombies in the group is appointed president of the zombie swarm. If there are fewer than five zombies in a group, they try to meet up with other zombies (aka they are assigned to a different group and therefore a new leader).
    3. Multi-threading option one or two?

    For navigation I have some kind of navmesh (since the game is not tile-based) that tells the zombies where they can walk etc. If anyone else has some ideas on how to do navigation I would love some input. For LoS (zombie - player) I have split the map into grids. If the player's grid is connected to the zombie's grid (if I go with option two I would only need to check if the leader zombie's grid is connected to the player's, aka fewer checks) - if they are connected and it has been more than 250ms since the last check, do a raytrace. This is my first time programming AI, so input on any field is appreciated.
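    A minimal sketch of the interval idea (1) combined with the 250 ms LoS budget, in Python for brevity; the class shape, the stub methods and the constants are illustrative assumptions, not part of the original design:

    ```python
    import random
    import time

    class Zombie:
        def __init__(self):
            # Stagger the first updates so the whole horde never repaths on the same frame.
            now = time.monotonic()
            self.next_path_update = now + random.uniform(0.0, 0.5)
            self.next_los_check = now + random.uniform(0.0, 0.25)
            self.has_los = False

        def repath_towards(self, player):
            pass  # placeholder for the expensive navmesh query

        def ray_trace(self, player):
            return True  # placeholder for the expensive grid/raytrace test

        def update(self, player):
            now = time.monotonic()
            if now >= self.next_path_update:
                self.repath_towards(player)
                self.next_path_update = now + 0.5   # at most one repath per 500 ms (the "Y ms" cap)
            if now >= self.next_los_check:
                self.has_los = self.ray_trace(player)
                self.next_los_check = now + 0.25    # the 250 ms LoS budget from the post
    ```

    With groups (idea 2), only the leader would run repath_towards and the rest would steer toward the leader, which cuts the navmesh queries roughly by the group size.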

    Read the article

  • Oracle SPARC SuperCluster and US DoD Security guidelines

    - by user12611852
    I've worked in the past to help our government customers understand how best to secure Solaris. For my customer base that means complying with Security Technical Implementation Guides (STIGs) from the Defense Information Systems Agency (DISA). I recently worked with a team to apply both the Solaris and Oracle 11gR2 database STIGs to a SPARC SuperCluster. The results have been published in an Oracle white paper. The SPARC SuperCluster is a highly available, high performance platform that incorporates:

    - SPARC T4-4 servers
    - Exadata Storage Servers and software
    - ZFS Storage appliance
    - InfiniBand interconnect
    - Flash Cache
    - Oracle Solaris 11
    - Oracle VM for SPARC
    - Oracle Database 11gR2

    It is targeted towards large, mission critical database, middleware and general purpose workloads. Using the Oracle Solution Center we configured an SSC, applied DoD security guidance, and confirmed the functionality and performance of the system. The white paper reviews our findings and includes a number of security recommendations. In addition, customers can contact me for the itemized spreadsheets with our detailed STIG reports. Some notes:

    - There is no DISA STIG documentation for Solaris 11. Oracle is working to help DISA create one using their new process. As a result, our report follows the Solaris 10 STIG document and applies it to Solaris 11 where applicable.
    - In my conversations over the years with the DISA Field Security Office they have repeatedly told me, "The absence of a DISA written STIG should not prevent a product from being used. Customers may apply vendor or industry security recommendations to receive accreditation."

    Thanks to the core team: Kevin Rohan, Gary Jensen and Rich Qualls, as well as the staff of the Oracle Solution Center and Glenn Brunette, for their help in creating the document.

    Read the article

  • Constructs for wrapping a hardware state machine

    - by Henry Gomersall
    I am using a piece of hardware with a well defined C API. The hardware is stateful, with the relevant API calls needing to be in the correct order for the hardware to work properly. The API calls themselves will always return, passing back a flag that advises whether the call was successful, or if not, why not. The hardware will not be left in some ill defined state. In effect, the API calls advise indirectly of the current state of the hardware if the state is not correct to perform a given operation. It seems to be a pretty common hardware API style. My question is this: is there a well established design pattern for wrapping such a hardware state machine in a high level language, such that consistency is maintained? My development is in Python. I ideally wish the hardware state machine to be abstracted to a much simpler state machine and wrapped in an object that represents the hardware. I'm not sure what should happen if an attempt is made to create multiple objects representing the same piece of hardware. I apologize for the slight vagueness; I'm not very knowledgeable in this area and so am fishing for assistance with the description as well!
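    One common shape for this, sketched minimally in Python (the State names, the commented-out C calls and the per-device singleton are illustrative assumptions, not a canonical pattern implementation):

    ```python
    import enum
    import threading

    class State(enum.Enum):
        CLOSED = 0
        IDLE = 1
        RUNNING = 2

    class Device:
        """Facade that mirrors the hardware state and rejects out-of-order calls."""
        _instances = {}
        _lock = threading.Lock()

        def __new__(cls, device_id):
            # One wrapper per physical device, so the mirrored state cannot diverge.
            with cls._lock:
                if device_id not in cls._instances:
                    cls._instances[device_id] = super().__new__(cls)
                return cls._instances[device_id]

        def __init__(self, device_id):
            if not hasattr(self, "state"):   # initialize once per device, not per call
                self.device_id = device_id
                self.state = State.CLOSED

        def _require(self, *allowed):
            if self.state not in allowed:
                raise RuntimeError(f"call not valid in state {self.state.name}")

        def open(self):
            self._require(State.CLOSED)
            # check the flag returned by the real C call (via ctypes/cffi) before committing
            self.state = State.IDLE

        def start(self):
            self._require(State.IDLE)
            self.state = State.RUNNING

        def stop(self):
            self._require(State.RUNNING)
            self.state = State.IDLE
    ```

    Checking the C API's returned flag before updating the mirrored state keeps the wrapper honest even if the two ever disagree, and the per-device `__new__` answers the multiple-objects question by handing every constructor call the same instance.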

    Read the article

  • How to build Gantt chart from a set of Redmine tickets without filling dates in all of them?

    - by Alexander Gladysh
    Redmine 1.1.1. I've created a set of tickets for a new project. In each issue I filled in the Subject, Description and Estimated time fields. I also filled in blocks/blocked by dependencies in Related issues. But the Gantt chart for this project is empty (that is, it contains all the tasks, but does not contain any "bars" for them). I need to get a Gantt chart (or any other visual representation) to show to other project members. I'd hate to type all that information again into OpenProj. Is there a way to get a serviceable Gantt chart from Redmine? Update: In the answers below I read that to get a working Gantt chart I have to input the start date and due date manually for each issue. I believe that this information should be inferred automatically from the start date of the first ticket (first, dependency-wise), the estimated time of each ticket, the dependency graph, resource assignment and the working hours calendar, just as happens in any minimally sane Gantt chart project management tool. To enter this information by hand, and to keep it up to date manually as the project evolves, is an insane waste of time. Is there a way to generate a Gantt chart from the set of Redmine tickets without filling in all this information manually? (Solutions involving data export plus import into a sane tool, or involving existing plugins, are perfectly acceptable.)

    Read the article

  • Keeping Aspect Screen Ratio While Stays in Center

    - by David Dimalanta
    I saw and tried this suggestion on PISTACHIO BRAINSTORMIN* on how to make a good, adaptive screen ratio. For every different screen size, let's say I put a perfect circle as a Texture in LibGDX and played it on screen. The blueberry image example is perfectly rounded. When I played it on the Google Nexus 7, the circle turned into a slightly oblong shape, as if it had been flattened a bit: the blueberry is almost, but slightly not, perfectly rounded. Now, when I tried the suggested code for aspect ratio, the perfect circle was retained, but another problem occurred. The problem is that I expected the view to be centered, but instead it's offset to the right, leaving half the screen black. Here is my code using the suggested screen aspect ratio approach:

    Class fields:

    ```java
    // Ingredients needed for screen aspect ratio
    private static final int VIRTUAL_WIDTH = 720;
    private static final int VIRTUAL_HEIGHT = 1280;
    private static final float ASPECT_RATIO =
            ((float) VIRTUAL_WIDTH) / ((float) VIRTUAL_HEIGHT);
    private Camera Mother_Camera;
    private Rectangle Viewport;
    ```

    render():

    ```java
    // Camera updating...
    Mother_Camera.update();
    Mother_Camera.apply(Gdx.gl10);
    // Resetting viewport...
    Gdx.gl.glViewport((int) Viewport.x, (int) Viewport.y,
            (int) Viewport.width, (int) Viewport.height);
    // Clear previous frame.
    Gdx.gl.glClearColor(0, 0, 0, 1);
    Gdx.gl.glClear(GL10.GL_COLOR_BUFFER_BIT);
    ```

    show():

    ```java
    Mother_Camera = new OrthographicCamera(VIRTUAL_WIDTH, VIRTUAL_HEIGHT);
    ```

    Was this code useful for fixing the screen aspect ratio proportions, or is it statically dependent on the actual device's width and height?

    *see http://blog.acamara.es/2012/02/05/keep-screen-aspect-ratio-with-different-resolutions-using-libgdx/#comment-317

    Read the article

  • CodePlex Daily Summary for Monday, December 03, 2012

    Popular Releases

    - menu4web: menu4web 1.1 - free javascript menu: menu4web 1.1 has been tested with all major browsers: Firefox, Chrome, IE, Opera and Safari. Minified m4w.js library is less than 9K. Includes 22 menu examples of different styles. Can be freely distributed under The MIT License (MIT).
    - Quest: Quest 5.3 Beta: New features in Quest 5.3 include: Grid-based map (sponsored by Phillip Zolla); Changable POV (sponsored by Phillip Zolla); Game log (sponsored by Phillip Zolla); Customisable object link colour (sponsored by Phillip Zolla); More room description options (by James Gregory); More mathematical functions now available to expressions; Desktop Player uses the same UI as WebPlayer - this will make it much easier to implement customisation options; New sorting functions: ObjectListSort(list,...
    - Mi-DevEnv: Development 0.1: First Drop. This is the first drop of files now placed under source control. Today the system ran end to end, creating a virtual machine and installing multiple products without a single prompt or key press being required. This is a snapshot of the first release. All files are under source control. Assumes Hyper-V under Server 2012 or Windows 8, using Windows Management Framework with PowerShell 3.
    - Chinook Database: Chinook Database 1.4: This is a sample database available in multiple formats: SQL scripts for multiple database vendors, embedded database files, and XML format. The Chinook data model is available here. ChinookDatabase1.4_CompleteVersion.zip is a complete package for all supported databases/data sources. There are also packages for each specific data source. Supported database servers: DB2, EffiProz, MySQL, Oracle, PostgreSQL, SQL Server, SQL Server Compact, SQLite. Issues resolved: 293...
    - RiP-Ripper & PG-Ripper: RiP-Ripper 2.9.34: changes: FIXED: Thanks Function when "Download each post in it's own folder" is disabled; FIXED: "PixHub.eu" links
    - CleverBobCat: CleverBobCat 1.1.2: Changes: Control loss of cart when decoupled fixed; Some problems with energy transfer usage if disabled system fixed
    - D3 Loot Tracker: 1.5.6: Updated to work with D3 version 1.0.6.13300
    - DirectQ: DirectQ II 2012-11-29: A (slightly) modernized port of Quake II to D3D9. You need SM3 or better hardware to run this - if you don't have it, then don't even bother. It should work on Windows Vista, 7 or 8; it may also work on XP but I haven't tested. Known bugs include: Some mods may not work. This is unfortunately due to the nature of Quake II's game DLLs; sometimes a recompile of the game DLL is all that's needed. In any event, ensure that the game DLL is compatible with the last release of Quake II first (...
    - Magelia WebStore Open-source Ecommerce software: Magelia WebStore 2.2: new UI for the Administration console; bug fixes and improvements; version 2.2.215.3
    - JayData - The cross-platform HTML5 data-management library for JavaScript: JayData 1.2.5: What's new in JayData 1.2.5. For detailed release notes check the release notes. Handlebars template engine support: implement data manager applications with JayData using Handlebars.js for templating. Include JayDataModules/handlebars.js and begin typing the mustaches :) Blogposts: Handlebars templates in JayData; Handlebars helpers and model driven commanding in JayData. Easy JayStorm cloud data management: manage cloud data using the same syntax and data management concept just like any other data ...
    - nopCommerce. Open source shopping cart (ASP.NET MVC): nopcommerce 2.70: Highlight features & improvements: • Performance optimization. • Search engine optimization. ID-less URLs for products, categories, and manufacturers. • Added ACL support (access control list) on products and categories. • Minify and bundle JavaScript files. • Allow a store owner to decide which billing/shipping address fields are enabled/disabled/required (like it's already done for the registration page). • Moved to MVC 4 (.NET 4.5 is required). • Now Visual Studio 2012 is required to work ...
    - SQL Server Partition Management: Partition Management Release 3.0: Release 3.0 adds support for SQL Server 2012 and is backward compatible with SQL Server 2008 and 2005. The release consists of: a Readme file, the executable, and the source code (Visual Studio project). Enhancements include: support for Columnstore indexes in SQL Server 2012; ability to create TSQL scripts for staging table and index creation operations; full support for global date and time formats, locale independent; support for binary partitioning column types; fixes to is...
    - NHook - A debugger API: NHook 1.0: x86 debugger; resolve symbols from the MS public server; resolve RVAs from the executable's image; add breakpoints; assemble / disassemble target process assembly. More information here; you can also check the unit tests, which are real sample code.
    - PDF Library: PDFLib v2.0: Release notes: This new version includes many bug fixes and adds support for stream objects and cross-reference object streams. New feature: extract images from the PDF.
    - Document.Editor: 2013.5: What's new for Document.Editor 2013.5: new Read-only File support; new Check For Updates support; minor bug fixes, improvements and speed-ups
    - MCEBuddy 2.x: MCEBuddy 2.3.10: Critical update to 2.3.9. Changelog for 2.3.10 (32bit and 64bit): 1. AsfBin executable missing from build 2. Removed extra references from build to avoid conflict 3. Showanalyzer installation now checked on remote engine machine. Changelog for 2.3.9 (32bit and 64bit): 1. Added support for WTV output profile 2. Added support for minimizing MCEBuddy to the system tray 3. Added support for custom archive folder 4. Added support to disable subdirectory monitoring 5. Added support for better TS fil...
    - DotNetNuke® Community Edition CMS: 07.00.00: Major highlights: fixed issue that caused profiles of deleted users to be available; removed the postback after checkboxes are selected in Page Settings > Taxonomy; implemented the functionality required to edit security role names and social group names; fixed JavaScript error when using a ";" semicolon as a profile property; fixed issue when using DateTime properties in profiles; fixed viewstate error when using Facebook authentication in conjunction with "require valid profile fo...
    - CODE Framework: 4.0.21128.0: See change notes in the documentation section for details on what's new.
    - Microsoft Ajax Minifier: Microsoft Ajax Minifier 4.76: Fixed a typo in ObjectLiteralProperty.IsConstant that caused all object literals to be treated like they were constants, and possibly moved around in the code when they shouldn't be.
    - Kooboo CMS: Kooboo CMS 3.3.0: New features: Dropdown/Radio/Checkbox Lists no longer reference the userkey; instead they refer to the UUID field for input value. You can now delete, export, and import content from the database in the site settings. Labels can now be imported and exported. You can now set the required password strength and maximum number of incorrect login attempts. Child sites can inherit plugins from their parent sites. The view parameter can be changed through the page_context.current value. Addition of c...

    New Projects

    - ASP.NET Youtube Clone: ASP.NET Youtube Clone is a complete script with basic and advanced features which allow you to build a complex social media sharing website in asp.net, c#, vb.net.
    - Assembly - Halo Research Tool: Assembly is a program designed to aid in the development of creative modifications for the Xbox 360 Halo games. It also includes a .NET library for programmers.
    - Async ContentPlaceHolder: Load your ASP.Net content placeholder asynchronously.
    - Automate Microsoft System Center Configuration Manager 2012 with Powershell: Scripts to automate Microsoft System Center 2012 Configuration Manager with Powershell
    - AzMan Contrib: AzMan Contrib aims to provide a better experience when using AzMan with ASP.NET WebForms and ASP.NET MVC.
    - Badhshala Jackpot: Facebook application that features a slot machine with 3 slots, where each slot's position is predicted randomly. Tools used: ASP.Net MVC 4, SQL Server, csharpsdk
    - BREIN Messaging Infrastructure: This project allows for hiding and encapsulating a (WS based) infrastructure by providing the means for dynamic message routing. The gateway thereby enhances the messaging channels to enforce any number of policies upon incoming and outgoing messages.
    - CricketWorldCup2011Trivia: Simple trivia game written in C# based on the 2011 Cricket World Cup.
    - Flee#: A C# port of the Flee evaluator. It's an expression evaluator for C# that compiles the expressions into IL code, resulting in very fast and efficient execution.
    - Hamcast for multi station coordination: Amateur Radio multiple station operation tends to have loggers and operators striving to get particular information from each other, like what IP address and so forth, so I wrote this small multicast utility to help them. Supports N1MM and other popular loggers.
    - LDAP/AD Claim Provider For SharePoint 2010: This claim provider implements search on LDAP and AD for SAML authentication (claims mode) in SharePoint 2010
    - MicroData Parser: This library is a preliminary implementation of the HTML5 MicroData specification, in C#.
    - PCC.Framework: NET???????????
    - ResumeSharp: ResumeSharp is a resume building tool designed to help keep your resume up-to-date easily. Additionally, you can quickly generate targeted resumes on the fly. It's developed in C#.
    - Sharepoint SPConstant generator: This utility creates a hierarchical representation of a WSS 3.0 / MOSS 2007 Site Collection and generates a C# source code file (SPConstant.cs) with a nested structure of structs with static const string fields. This enables you to do the following: SPList list = web.Lists[SPConstant.Lists.Tasklist.Name]; You will then just have to regenerate the SPConstant file (e.g. from within VS 2005 or from the command line) to update the name. Description is added to the XML comments in the generated file ...
    - SoftServe Tasks: a couple of tasks from the 'softserve' courses.
    - Solid Edge Community Extensions: Solid Edge SDK
    - Star Fox XNA Edition: Development of videogames for Microsoft platforms using XNA Game Studio. Remake of the classic videogame Star Fox (1993) for the SNES game console.
    - TinySimpleCMS: a tiny and simple cms
    - Ugly Animals: Crossing of Angry Birds and Yeti Sports
    - VIENNA Advantage ERP and CRM: A real cloud based ERP and CRM in C#.Net offering enterprise level functionality for PC, Mac, iPhone, Surface and Android with HTML5 and Silverlight UI.
    - WebSitemap Localizer: WebSitemap Localizer is a utility to auto-convert your Web.sitemap file to support localization.

    Read the article

  • Convenience of mySQL over xml

    - by Bonechilla
    Currently I use XML to store the specific information needed to correctly load a few things, such as a list of specified characters, scenes and music. Furthermore, I use JAXB in combination with standard compression/decompression (ZIP) functionality to store a list of extraneous data. This data is called on to add functionality to the character, somewhat like skills in an RPG. Each skill is separated into its own XML file, with a grand list which contains the names of each file with their extensions omitted, zipped in a folder that gets encrypted. At first using XML was working fine, but as the skill list grows I worry about its stability. I was wondering if I should begin storing the data in MySQL. Originally I planned to simply convert everything to JSON over XML, but I think possibly MySQL would be a better move. Can anyone inform me of the key differences and the pros and cons of each? I guess I'm looking for the best way to store the data more conveniently, so it would be easier to operate on. The data is mostly primitives and strings, and the only ArrayList of values I have I can just concat into a single field and parse later. Edit: If I am going in the right direction with XML, would it make sense to convert it to JSON and use maybe Kryo or EclipseLink JAXB (MOXy)?
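    As a rough illustration of what the relational route buys you (the poster's stack is Java/JAXB, but the shape of the trade-off shows in a few lines of Python's built-in sqlite3; the table and fields are invented for the example):

    ```python
    import sqlite3

    conn = sqlite3.connect(":memory:")  # a real file (or MySQL) in practice
    conn.execute("""CREATE TABLE skills (
        name  TEXT PRIMARY KEY,
        power INTEGER,
        cost  INTEGER,
        tags  TEXT          -- the concatenated list the poster mentions
    )""")
    conn.executemany("INSERT INTO skills VALUES (?, ?, ?, ?)",
                     [("fireball", 40, 12, "fire,ranged"),
                      ("heal", 0, 8, "holy,support")])

    # Ad-hoc queries replace walking a grand list of XML files:
    for (name,) in conn.execute("SELECT name FROM skills WHERE cost < 10"):
        print(name)   # -> heal
    ```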

    Read the article

  • From The OU Classrooms...

    - by rajeshr
    No excuses for not doing this systematically, and I'm trying my best to break this bad habit of bulk uploads of class photographs and do it regularly instead. But for the time being, please forgive my laziness and live with my mass introduction of all the fun loving, yet talented folks whom I met in the OU classrooms during the last three months or so, through the picture essays that follow. It's unfortunate I don't get to do this for my Live Virtual Classes, for obvious reasons, but let me take a moment to thank those attendees as well for choosing OU programs on various products. Thanks again to each one for memorable moments in the OU classrooms:

    - Pillar Axiom MaxRep session at Bangkok. For detailed information on the OU course on Pillar Axiom MaxRep, access this page.
    - Pillar Axiom SAN Administration session at Bangkok. Know more about the product here. Details on the Pillar Axiom training program from Oracle University can be found here.
    - Oracle Solaris ZFS Administration & Oracle Solaris Containers session at Hyderabad. Read more about ZFS here. Gain information on Solaris Containers by going here. Oracle University courses on Solaris 10 and its features can be viewed at this page.
    - Oracle Solaris Cluster program at Hyderabad. Here's the OU landing page for the training programs on Oracle Solaris Cluster.
    - Oracle Solaris 11 Administration session at Bangalore. If you are interested in getting trained on Solaris 11, get more details at this webpage.
    - Sun Identity Manager Deployment Fundamentals session at Bangalore. The product is n.k.a. Oracle Waveset IDM. Click here to get a detailed description of this fabulous hands-on training program.
    - With Don Kawahigashi at Taipei for Pillar Axiom Storage training.

    Read the article

  • Application shortcut reappears on restart

    - by Nathan Friesen
    I have an application that I have built a .msi installer for through Microsoft Visual Studio 2010. I recently made some updates, including changing the version number, and rebuilt the installer with these updates. The installer includes shortcuts both on the desktop and in the Start menu. Running the installer appears to work fine, and both of these shortcuts work. After restarting my computer I've found that the shortcuts are changed to have a Target type of "Application (Installs on first use)", and the Start In: field is changed to a location that doesn't exist. Once this happens, every time you use either shortcut it tries to install the application again and fails. I have also changed the name of the shortcut that the installer creates. This appears to work, and the shortcut still works after a restart. After the restart, though, the shortcut with the old name that doesn't work also appears on the desktop and in the Start menu. Does anyone have any ideas what I may have set up wrong, or what I need to change to get the shortcuts to behave properly?
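    "Installs on first use" is the wording Windows uses for advertised shortcuts, which Visual Studio setup projects generate by default, so one avenue worth checking is setting the standard Windows Installer property DISABLEADVTSHORTCUTS in the built .msi. A hedged sketch of doing that as a post-build step with Python's msilib (a Windows-only standard library module; the file path is a placeholder, and this assumes the Property table does not already contain that row):

    ```python
    import msilib

    # Open the freshly built installer and force non-advertised shortcuts.
    db = msilib.OpenDatabase(r"dist\MyInstaller.msi", msilib.MSIDBOPEN_DIRECT)
    msilib.add_data(db, "Property", [("DISABLEADVTSHORTCUTS", "1")])
    db.Commit()
    ```

    Whether this also addresses the reappearing old-name shortcut would need testing; that symptom can additionally come from the upgrade not removing the previous version's shortcut component.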

    Read the article

  • How can a large, Fortran-based number crunching codebase be modernized?

    - by Dave Mateer
    A friend in academia asked me for advice (I'm a C# business application developer). He has a legacy codebase which he wrote in Fortran, in the medical imaging field. It does a huge amount of number crunching using vectors. He uses a cluster (30ish cores) and has now gone towards a single workstation with 500ish GPUs in it. However, where to go next with the codebase, so that:

    - Other people can maintain it over the next 10 year cycle
    - It gets faster to tweak the software
    - It can run on different infrastructures without recompiles

    After some research from me (this is a super interesting area), some options are:

    - Use Python and CUDA from Nvidia
    - Rewrite in a functional language. For example, F# or Haskell
    - Go cloud based and use something like Hadoop and Java
    - Learn C

    What has been your experience with this? What should my friend be looking at to modernize his codebase?

    UPDATE: Thanks @Mark and everyone who has answered. The reason my friend is asking this question is that it's a perfect time in the project's lifecycle to do a review. Bringing research assistants up to speed in Fortran takes time (I like C#, and especially the tooling, and can't imagine going back to older languages!!). I liked the suggestion of keeping the pure number crunching in Fortran but wrapping it in something newer, as sketched below. Perhaps Python, as that seems to be gaining a stronghold in academia as a general-purpose programming language that is fairly easy to pick up. See Medical Imaging and a guy who has written a Fortran wrapper for CUDA, Can I legally publish my Fortran 90 wrappers to Nvidias' CUFFT library (from the CUDA SDK)?.
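    For the keep-Fortran-wrap-it route, a minimal sketch with NumPy's f2py (the subroutine and module names are invented for illustration; the build command is the standard f2py invocation):

    ```python
    # vecscale.f90 -- the legacy crunching kept in Fortran:
    #
    #   subroutine scale(v, n, factor)
    #     integer :: n
    #     real(8), intent(inout) :: v(n)
    #     real(8), intent(in) :: factor
    #     v = v * factor
    #   end subroutine scale
    #
    # Compiled once into a Python extension module with:
    #   python -m numpy.f2py -c vecscale.f90 -m vecscale

    import numpy as np
    import vecscale  # the f2py-generated module

    v = np.arange(1_000_000, dtype=np.float64)
    vecscale.scale(v, 2.0)      # n is inferred from the array by f2py
    print(v[:3])                # [0. 2. 4.]
    ```

    The research assistants then live in Python (plotting, I/O, orchestration) while the validated numeric kernels stay untouched.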

    Read the article

  • [Dear Recruiter] I developed in Mo'Fusion

    - by refuctored
    Foreword: Sometimes I really feel like technology recruiters have no experience or knowledge of the field they are recruiting for. A warning to those companies hiring technical recruiters: ensure that the technical recruiters you hire to fill a position are actually technical. Here's proof below, where I make up completely ridiculous technologies but still get interest from the recruiter for an interview.

    Letter to me:

    Hello - Your name came up as a possible match for a long term contract Cold Fusion Developer role I have in Bothell, WA. This role requires you to be onsite in Bothell, WA. This is a tough role to fill, so I was hoping you might have someone you can recommend? Unfortunately no telecommute. Thank you! Sincerely, Mindy Recruiter

    My response:

    Mindy -- Wow, I'm super-excited that you took the time to contact me about this position! Let me tell you, you won't be disappointed with my skill set! Firstly, I've been developing in ColdFusion since 1993, before it was owned by Adobe, when it was operating under the code name "Hot-Jack". Recently I started developing under the Domain-View-Driven-Domain-Model (DVDDM), integrating client-side CF on Moobuntu. Not only do I have a boatload of ColdFusion EXP, I also have a ton of experience in the open source community's lesser known derivative of CF, Mo'Fusion (MF). I've also invested thousands of hours of my time learning esoteric programming languages. Look forward to working with you! George

    And her response:

    Hi George – just left you a message. Give me a call at your convenience. The role does require someone to be onsite here.. are you able to relocate yourself? Mindy

    [Sigh]

    Read the article

  • DVD RW+ Not showing up

    - by Manywa R.
    I'm running Ubuntu 12.10 on a Toshiba Satellite Pro A120, and my built-in DVD drive is not opening any CD/DVD/DVD-RW that I try to play in it. The drive seems to be mounted and recognized. Output of sudo lshw:

    ```
    *-cdrom
       description: DVD-RAM writer
       product: DVD-RAM UJ-841S
       vendor: MATSHITA
       physical id: 1
       bus info: scsi@1:0.0.0
       logical name: /dev/cdrom
       logical name: /dev/cdrw
       logical name: /dev/dvd
       logical name: /dev/dvdrw
       logical name: /dev/sr0
       version: 1.40
       capabilities: removable audio cd-r cd-rw dvd dvd-r dvd-ram
       configuration: ansiversion=5 status=ready
     *-medium
        physical id: 0
        logical name: /dev/cdrom
    ```

    The disc seems to start but hangs, with the DVD drive LED solid amber. The output of dmesg | grep "sr0":

    ```
    [679396.184901] sr 1:0:0:0: [sr0] Unhandled sense code
    [679396.184910] sr 1:0:0:0: [sr0] Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
    [679396.184920] sr 1:0:0:0: [sr0] Sense Key : Hardware Error [current]
    [679396.184931] sr 1:0:0:0: [sr0] Add. Sense: Id CRC or ECC error
    [679396.184942] sr 1:0:0:0: [sr0] CDB: Read(10): 28 00 00 00 00 00 00 00 08 00
    [679396.184965] end_request: I/O error, dev sr0, sector 0
    [679396.184975] Buffer I/O error on device sr0, logical block 0
    [679396.184984] Buffer I/O error on device sr0, logical block 1
    [679396.184990] Buffer I/O error on device sr0, logical block 2
    [679396.184996] Buffer I/O error on device sr0, logical block 3
    [679396.185002] Buffer I/O error on device sr0, logical block 4
    [679396.185008] Buffer I/O error on device sr0, logical block 5
    [679396.185014] Buffer I/O error on device sr0, logical block 6
    [679396.185020] Buffer I/O error on device sr0, logical block 7
    [679396.185031] Buffer I/O error on device sr0, logical block 8
    [679396.185038] Buffer I/O error on device sr0, logical block 9
    [679396.185070] sr 1:0:0:0: [sr0] unaligned transfer
    [679396.185108] sr 1:0:0:0: [sr0] unaligned transfer
    ```

    Can someone help me through this? I'm tired of carrying around an external DVD drive. Thanks

    Read the article

  • How Visual Studio 2010 and Team Foundation Server enable Compliance

    - by Martin Hinshelwood
    One of the things that makes Team Foundation Server (TFS) the most powerful Application Lifecycle Management (ALM) platform is the traceability it provides to those that use it. This traceability is crucial in enabling many companies to adhere to the Compliance regulations to which they are bound (e.g. 21 CFR Part 11 or Sarbanes–Oxley). From something as simple as relating Tasks to Check-ins, or seeing the top 10 files in your codebase that are causing the most Bugs, to identifying which Bugs and Requirements are in which Release: all that information, and more, is available in TFS. Although all of this traceability is available within TFS, you do need to understand that it is not free. Well… I say that, but if you are using TFS properly you will have this information with no additional work except for firing up the reporting.

    Using Visual Studio ALM and Team Foundation Server you can relate every line of code changed all the way up to Requirements and back down through Test Cases to the Test Results.

    Figure: The only thing missing is Build

    In order to build the relationship model below we need to examine how each of the relationships gets there. Each member of your team, from programmer to tester and Business Analyst to Business, has a role to play in knitting this together.

    Figure: The relationships required to make this work can get a little confusing

    If Build is added to this to relate Work Items to Builds, then with knowledge of which builds are in which environments you can easily identify what is contained within a Release.

    Figure: How are things progressing

    Along with the ability to produce progress and trend reports, the traceability that is built into TFS can be used to fulfil most audit requirements out of the box, and augmented to fulfil the rest. In order to understand the relationships, let's look at each of the important Artifacts and how they are associated with each other…

    Requirements – The root of all knowledge

    Requirements are the thing that the business cares about delivering. These could be derived as User Stories or Business Requirements Documents (BRDs), but they should be what the Business asks for. Requirements can be related to many of the Artifacts in TFS, so let's look at the model:

    Figure: If the centre of the world was a requirement

    We can track which releases Requirements were scheduled in, but this can change over time as more details come to light.

    Figure: Who edited the Requirement and when

    There is also the ability to query Work Items based on the history of changes that were made to them. This is particularly important with Requirements. It might not be enough to say what Requirements were completed in a given release, but also to know which Requirements were ever assigned to a particular release.

    Figure: Some magic required, but result still achieved

    As an augmentation to this, it is also possible to run a query that shows results from the past, just as if we had a time machine. You can take any Query in the system and add an “asof” clause at the end to query historical data in the operational store for TFS.

    ```
    select <fields> from WorkItems
        [where <condition>]
        [order by <fields>]
        [asof <date>]
    ```

    Figure: Work Item Query Language (WIQL) format

    In order to achieve this you do need to save the query as a *.wiql file to your local computer and edit it in Notepad, but once imported into TFS you can run it any time you want.

    Figure: Saving Queries locally can be useful

    All of these Audit features are available throughout the Work Item Tracking (WIT) system within TFS.

    Tasks – Where the real work gets done

    Tasks are the workhorse of the development team, but they are only as useful as Excel if you do not relate them properly to other Artifacts.

    Figure: The Task Work Item Type has its own relationships

    Requirements should be broken down into Tasks that the development team work from to build what is required by the business. This may be done by a small dedicated group, or by everyone that will be working on the software team, but however it happens, all of the Tasks created should be Children of a Requirement Work Item Type.

    Figure: Tasks are related to the Requirement

    Tasks should be used to track the day-to-day activities of the team working to complete the software, and as such they should be kept simple and short, lest developers think they are more trouble than they are worth.

    Figure: Task Work Item Type has a narrower purpose

    Although the Task Work Item Type describes the work that will be done, the actual development work involves making changes to files that are under Source Control. These changes are bundled together in a single atomic unit called a Changeset, which is committed to TFS in a single operation. During this operation developers can associate Work Items with the Changeset.

    Figure: Tasks are associated with Changesets

    Changesets – Who wrote this crap

    Changesets themselves are just an inventory of the changes that were made to a number of files to complete a Task.

    Figure: Changesets are linked by Tasks and Builds

    Figure: Changesets tell us what happened to the files in Version Control

    Although comments can be changed after the fact, the inventory and Work Item associations are permanent, which allows us to Audit all the way down to the individual change level.

    Figure: On Check-in you can resolve a Task, which automatically associates it

    Because of this we can view the history of any file within the system and see how many changes have been made and what Changesets they belong to.

    Figure: Changes are tracked at the File level

    What would be even more powerful would be if we could view these changes superimposed over the top of the lines of code. Some people call this a blame tool, because it is commonly used to find out which of the developers introduced a bug, but it can also be used as another method of Auditing changes to the system.

    Figure: Annotate shows the lines

    The Annotate functionality allows us to visualise the relationship between the individual lines of code and the Changesets. In addition to this, you can create a Label and apply it to a version in your version control. The problem with Labels is that they can be changed after they have been created, with no traceability. This makes them practically useless for any sort of compliance audit. So what do you use?

    Branches – And why we need them

    Branches are a really powerful tool for development and release management, but they are most important for audits.

    Figure: One way to Audit releases

    The R1.0 branch can be created from the Label that the Build creates on the R1 line when a Release build is created. It can be created as soon as the Build has been signed off for release. However, it is still possible that someone changed the Label between this time and its creation. Another, better method is to explicitly link the Build output to the Build.

    Builds – Let's tie some more of this together

    Builds are the glue that helps us enable the next level of traceability by tying everything together.

    Figure: The dashed pieces are not out of the box but can be enabled

    When the Build is called and starts, it looks at what it has been asked to build and determines what code it is going to get and build.

    Figure: The folder identifies what changes are included in the build

    The Build sets a Label on the Source with the same name as the Build, but the Build itself also includes the latest Changeset ID that it will be building. At the end of the Build, the Build Agent identifies the new Changesets it is building by looking at the Check-ins that have occurred since the last Build.

    Figure: What changes have been made since the last successful Build

    It will then use that information to identify the Work Items that are associated with all of those Changesets, associate the Changesets with the Build, and change the “Integrated In” field of those Work Items.

    Figure: Find all of the Work Items to associate with

    The “Integrated In” field of all of the Work Items identified by the Build Agent as being integrated into the completed Build is updated to reflect the Build number that successfully integrated that change.

    Figure: Now we know which Work Items were completed in a build

    Now we can link a single line of code changed all the way back through the Task that initiated the action to the Requirement that started the whole thing, and back down to the Build that contains the finished Requirement. But how do we know whether that Requirement has been fully tested, or even meets the original Requirements?

    Test Cases – How we know we are done

    The only way we can know whether a Requirement has been completed to the required specification is to Test that Requirement. In TFS there is a Work Item type called a Test Case. Test Cases enable two scenarios. The first scenario is the ability to track and validate Acceptance Criteria in the form of a Test Case. If you agree with the Business a set of goals that must be met for a Requirement to be accepted by them, it becomes difficult for them to reject a Requirement when it passes all of the tests, and it also provides a level of traceability and validation for audit that a feature has been built and tested to order.

    Figure: You can have many Acceptance Criteria for a single Requirement

    It is crucial for this to work that someone from the Business signs off on the Test Case moving from the “Design” to “Ready” states. The second is the ability to associate an MS Test test with the Test Case, thereby tracking the automated test. This is useful when you want to track a test and the test results of a Unit Test designed to prove the existence, and then the non-recurrence, of a Bug.

    Figure: Associating a Test Case with an automated Test

    Although it is possible, it may not make sense to track the execution of every Unit Test in your system; there are, however, many automated Integration and Regression tests that it would make sense to track in this way.

    Bug – Let's not have regressions

    In order to know whether a Bug in the application has been fixed, and to make sure that it does not reoccur, it needs to be tracked.

    Figure: Bugs are the centre of their own world

    If the fix to a Bug is big enough to require that it is broken down into Tasks, then it is probably a Requirement. You can associate a check-in with a Bug and have it tracked against a Build. You would also have one or more Test Cases to prove the fix for the Bug.

    Figure: Bugs have many associations

    This allows you to track Bugs / Defects in your system effectively and report on them.

    Change Request – I am not a feature

    In the CMMI Process template, Change Requests can also be easily tracked through the system. In some cases it can be very important to track Change Requests separately, as an Auditor may want to know what was changed and who authorised it. Again, similar to Bugs, if the Change Request is big enough that it would need to be broken down into Tasks, it is in reality a new feature and should be tracked as a Requirement.

    Figure: Make sure your Change Requests only Affect Requirements and not rewrite them

    Conclusion

    Visual Studio 2010 and Team Foundation Server together provide an exceptional Application Lifecycle Management platform that can help your team comply with even the harshest of Compliance requirements while still enabling them to be Agile. Most Audits are heavy on required documentation, but most of that information is captured for you as long as you do it right. You don't even need every team member to understand it all, as each of the Artifacts is relevant to a different type of team member:

    - Business Analysts manage Requirements and Change Requests
    - Programmers manage Tasks and check in against Change Requests and Bugs
    - Testers manage Bugs and Test Cases
    - Build Masters manage Builds

    Although there is some crossover, there are still roles or “hats” that are worn. Do you think this is all achievable? Have I missed anything that you think should be there?

    Read the article

  • How common is prototyping as the first stage of development?

    - by EpsilonVector
    I've been taking some software design courses in the past few semesters, and while I see the benefit in a lot of the formalism, I feel like it doesn't tell me anything about the program itself:

    - You can't tell how the program is going to operate from the Use Case spec, even though it discusses what the program can do.
    - You can't tell anything about the user experience from the requirements document, even though it can include quality requirements.
    - Sequence diagrams are a good description of how the software works as the call stack, but are very limited, and give a highly partial view of the overall system.
    - Class diagrams are great for describing how the system is built, but are utterly useless in helping you figure out what the software needs to be.

    Where in all this formalism is the bottom line: how the program looks, operates, and what experience it gives? Doesn't it make more sense to design off of that? Isn't it better to figure out how the program should work via a prototype and strive to implement it for real? I know that I'm probably suffering from being taught engineering by theoreticians, but I need to ask: do they do this in the industry? How do people figure out what the program actually is, not what it should conform to? Do people prototype a lot, or do they mostly use the formal tools like UML and I just didn't get the hang of using them yet?

    Read the article

  • cannot get upstart to run user job

    - by dre
    I am trying to get upstart to start a user job during the boot of my machine. I have my conky.conf upstart config file in my $HOME/.init directory. When I run "start conky" I get this error:

    ```
    dre@dre-laptop:~$ start conky
    start: Rejected send message, 1 matched rules; type="method_call", sender=":1.76" (uid=1000 pid=2843 comm="start conky ") interface="com.ubuntu.Upstart0_6.Job" member="Start" error name="(unset)" requested_reply="0" destination="com.ubuntu.Upstart" (uid=0 pid=1 comm="/sbin/init")
    dre@dre-laptop:~$
    ```

    I (think I) know that this error has to do with the D-Bus system (authentication). I also read (http://upstart.ubuntu.com/cookbook/#id96) that Ubuntu 12.10 already has the right configuration in the D-Bus config file /etc/dbus-1/system.d/Upstart.conf to allow normal users to use upstart.

    ```
    dre@dre-laptop:~/.init$ cat conky.conf
    description "conky, a system monitor appled"

    start on lightdm
    stop on shutdown

    # Automatically restart process if crashed
    respawn

    # Essentially lets upstart know the process will detach itself to the background
    #expect fork

    # Start conky
    exec /usr/bin/conky
    ```

    So, who knows what I'm doing wrong???

    greetings, Andre

    No one??? Please... give me your best shot.

    Read the article

  • Form Validation Options

    The steps involved in transmitting form data from the client to the Web server:

    1. User loads the web form.
    2. User enters data into the web form fields.
    3. User clicks submit.
    4. On submit, the page validates the fields using JavaScript.
    5. If validation errors are found, the validation script stops the browser from posting the data to the web server and displays error messages as needed.
    6. If the form passes the data validation process, the browser URL-encodes the values of every field and posts them to the server.
    7. The server reads the posted data from the query string and then validates the data again, both to ensure data consistency and to prevent any non-validated data (in case JavaScript was turned off in the client's browser) from being inserted into a database or passed on to other processes.
    8. If the data passes the second validation check, the server-side code continues with the requested processes.

    In my opinion, it is mandatory to validate data using both client-side and server-side validation, with the server as a fail-over. Client-side validation allows users to correct any errors before they are sent to the web server for processing, which gives an immediate response back to the user regarding data that is not correct or not in the desired format. In addition, this prevents unnecessary interaction between the user and the web server, and will free up the server over time compared to doing only server-side validation. Server validation is the last line of defense when it comes to validation, because you can check that the user's data is correct before it is used in a business process or stored to a database. Honestly, I cannot foresee a scenario where I would only want to use one form of validation over the other, especially with the current cost of creating and maintaining data. In my opinion, the redundant validation is well worth the overhead.
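    A small illustration of the server-side half of the process, in Python (the field rules and function names are invented for the example, not from the original text): re-validating posted fields even though the client already validated them in JavaScript.

    ```python
    import re

    # Per-field rules the server enforces regardless of what the client did.
    RULES = {
        "email": re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$"),
        "age":   re.compile(r"^\d{1,3}$"),
    }

    def validate(form: dict) -> dict:
        """Return a field -> message map; an empty map means the data may be processed."""
        errors = {}
        for field, pattern in RULES.items():
            value = form.get(field, "").strip()
            if not pattern.match(value):
                errors[field] = f"{field} is missing or malformed"
        return errors

    posted = {"email": "user@example.com", "age": "200x"}  # e.g. JavaScript was disabled
    print(validate(posted))   # {'age': 'age is missing or malformed'}
    ```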

    Read the article

  • Frequent Disconnects with an Intel 3945ABG Wireless Card

    - by Alex Forsythe
    I'm brand new to Ubuntu, and I really love it so far, but one issue I have encountered is that my WLAN disconnects about every 5-10 minutes. Often the connection is repaired automatically, but sometimes the network manager will repeatedly reject my encryption key (which of course is correct). Occasionally after a disconnect, the wireless network fails to show up at all. The only way I can solve this seems to be completely restarting Ubuntu or connecting with a USB wireless adapter. I am using WPA/WPA2 encryption, which I've read can cause problems with network-manager, but I experience the exact same issues with WICD. I should probably note that I've not experienced any of these issues using Windows 7 on my other partition. I have a hunch that there may be a better driver out there for my card, but I have no idea how to go about searching for it or installing it. Any help would be really appreciated!

    ```
    lsb_release -a
    No LSB modules are available.
    Distributor ID: Ubuntu
    Description:    Ubuntu 11.10
    Release:        11.10
    Codename:       oneiric

    lspci -nnk
    09:00.0 Ethernet controller [0200]: Broadcom Corporation NetXtreme BCM5752 Gigabit Ethernet PCI Express [14e4:1600] (rev 02)
            Subsystem: Dell Device [1028:0201]
            Kernel driver in use: tg3
            Kernel modules: tg3
    0c:00.0 Network controller [0280]: Intel Corporation PRO/Wireless 3945ABG [Golan] Network Connection [8086:4222] (rev 02)
            Subsystem: Intel Corporation Device [8086:1020]
            Kernel driver in use: iwl3945
            Kernel modules: iwl3945

    rfkill list
    0: phy0: Wireless LAN
            Soft blocked: no
            Hard blocked: no
    2: dell-wifi: Wireless LAN
            Soft blocked: no
            Hard blocked: no
    3: dell-bluetooth: Bluetooth
            Soft blocked: yes
            Hard blocked: no
    ```

    Read the article

  • Is there a design pattern for chained observers?

    - by sharakan
    Several times, I've found myself in a situation where I want to add functionality to an existing Observer-Observable relationship. For example, let's say I have an Observable class called PriceFeed, instances of which are created by a variety of PriceSources. Observers on this are notified whenever the underlying PriceSource updates the PriceFeed with a new price. Now I want to add a feature that allows a (temporary) override to be set on the PriceFeed. The PriceSource should still update prices on the PriceFeed, but for as long as the override is set, whenever a consumer asks the PriceFeed for its current value, it should get the override. The way I did this was to introduce a new OverrideablePriceFeed that is itself both an Observer and an Observable, and that decorates the actual PriceFeed. Its implementation of .getPrice() is straight from Chain of Responsibility, but how about the handling of Observable events? When an override is set or cleared, it should issue its own event to Observers, as well as forwarding events from the underlying PriceFeed. I think of this as some kind of a chained observer, and I was curious if there's a more definitive description of a similar pattern.
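    For concreteness, a minimal sketch of the arrangement described above, written in Python (the class and method names mirror the post; everything else, such as whether upstream events keep flowing while an override is active, is an illustrative design choice, not an established pattern definition):

    ```python
    class Observable:
        def __init__(self):
            self._observers = []

        def subscribe(self, callback):
            self._observers.append(callback)

        def _notify(self, price):
            for callback in self._observers:
                callback(price)

    class PriceFeed(Observable):
        def __init__(self):
            super().__init__()
            self._price = None

        def update(self, price):      # called by a PriceSource
            self._price = price
            self._notify(price)

        def get_price(self):
            return self._price

    class OverrideablePriceFeed(Observable):
        """Both an Observer of the wrapped feed and an Observable in its own right."""
        def __init__(self, feed):
            super().__init__()
            self._feed = feed
            self._override = None
            feed.subscribe(self._on_upstream)   # the "chained" link

        def _on_upstream(self, price):
            self._notify(price)                 # forward upstream events

        def set_override(self, price):
            self._override = price
            self._notify(price)                 # issue our own event

        def clear_override(self):
            self._override = None
            self._notify(self._feed.get_price())

        def get_price(self):
            # Chain of Responsibility: answer locally if we can, else delegate.
            if self._override is not None:
                return self._override
            return self._feed.get_price()
    ```

    Whether _on_upstream should suppress forwarding while an override is active is exactly the kind of policy question the post is weighing; both behaviors fit the same structure.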

    Read the article

  • "Don't do programming after a few years of starting career" Is this a fair advice?

    - by Muhammad Yasir
    I am a somewhat experienced developer, with around 5 years of experience in PHP, somewhat less in Java and C#, and I'm trying to learn some Python nowadays. Since the start of my career as a programmer, I have been told every now and then by fellow programmers that programming is suitable only for the first few years of a career (most of them take it as 5 years), and that one must change direction after that. The reason they give is the headaches and pressures associated with programming. They also say that programmers are less social and don't usually like to give time to their families, etc., and especially: "Oh come on, you can not do programming your entire life!" I am somewhat confused here and need to ask others about it. If I leave programming, then what do I do?! I guess teaching may be a good option in this case, but it would perhaps require first earning a PhD degree. It may also be noteworthy that in my country (Pakistan) the life of a programmer is not very good, in that they must normally put in 2-3 extra hours at the office to accomplish urgent programming tasks. I have a sense that the situation is somewhat similar in other countries and regions as well. So the question is: do you think it is fair advice to change career from programming to something else after spending 5 years in this field? Thanks for sharing your thoughts!

    Read the article
