Search Results

Search found 27989 results on 1120 pages for 'junior software developer'.


  • Windows Phone 8 SDK

    - by Nikita Polyakov
    Yesterday, the new Windows Phone 8 was announced! Find out more about the cool new OS and the new devices supporting it at www.WindowsPhone.com. Today at the BUILD conference in Redmond, WA, Microsoft has announced general availability of the Developer SDK for Windows Phone 8! Get the SDK and more info in the dev center: http://dev.WindowsPhone.com. Also watch the Windows Phone Developer blog. And this is the best time to join the Windows Phone Store - just $8 for the next 8 days.

    Read the article

  • Configuring NAT and static IP on Cisco 877W

    - by David M Williams
    Hi all, I'm having trouble setting up a static IP reservation on a network. What I want to do is assign IP 192.168.1.105 to MAC address 00:21:5d:2f:58:04 and then port forward 35394 to it. If it helps, output from show ver says:
      Cisco IOS software, C870 software (C870-ADVSECURITYK9-M), version 12.4(4)T7, release software (fc1)
      ROM: System bootstrap, version 12.3(8r)YI4, release software
    I have done this:
      service dhcp
      ip routing
      ip dhcp excluded-address 192.168.1.1 192.168.1.99
      ip dhcp excluded-address 192.168.1.200 192.168.1.255
      ip dhcp pool ClientDHCP
       network 192.168.1.0 255.255.255.0
       default-router 192.168.1.1
       dns-server 192.168.1.1
       lease 7
      ip dhcp pool NEO
       host 192.168.1.105 255.255.255.0
       hardware-address 0021.5D2F.5804
      ip nat inside source static tcp 192.168.1.105 35394 <PUBLIC_IP> 35394 extendable
    However, the machine is getting assigned IP address 192.168.1.101, not .105... any suggestions? Thanks!
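    One thing worth checking here (a hedged suggestion based on how IOS manual bindings behave, not something stated in the question): hardware-address is matched for BOOTP clients, while most DHCP clients - Windows machines in particular - send a client identifier made up of the media type 01 followed by the MAC, so a reservation keyed on hardware-address never matches and the host falls back to the dynamic ClientDHCP pool. A sketch of the adjusted reservation, reusing the addresses above:
      ip dhcp pool NEO
       host 192.168.1.105 255.255.255.0
       client-identifier 0100.215d.2f58.04
      ! verify the result with: show ip dhcp binding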

    Read the article

  • EM12c Release 4: Database as a Service Enhancements

    - by Adeesh Fulay
    Oracle Enterprise Manager 12.1.0.4 (or simply put, EM12c R4) is the latest update to the product. As with previous versions, this release provides tons of enhancements and bug fixes, contributing to improved stability and quality. One of the areas that is most exciting and has seen tremendous growth in the last few years is that of Database as a Service. EM12c R4 provides a significant update to Database as a Service. The key themes are:
      - Comprehensive Database Service Catalog (includes single instance, RAC, and Data Guard)
      - Additional Storage Options for Snap Clone (includes support for the database feature CloneDB)
      - Improved Rapid Start Kits
      - Extensible Metering and Chargeback
      - Miscellaneous Enhancements
    1. Comprehensive Database Service Catalog
    Before we get deep into the implementation of a service catalog, let's first understand what it is and what benefits it provides. Per ITIL, a service catalog is an exhaustive list of IT services that an organization provides or offers to its employees or customers. Service catalogs have been widely popular in the space of cloud computing, primarily as the medium to provide standardized and pre-approved service definitions. There is already some good collateral out there that talks about Oracle database service catalogs. The two whitepapers I recommend reading are: "Service Catalogs: Defining Standardized Database Service" and "High Availability Best Practices for Database Consolidation: The Foundation for Database as a Service" [Oracle MAA].
    EM12c has included an out-of-the-box service catalog and self service portal since release 1. For customers, it provides the following benefits:
      - Present a collection of standardized database service definitions
      - Define standardized pools of hardware and software for provisioning
      - Role based access to cater to different classes of users
      - Automated procedures to provision the predefined database definitions
      - Set up chargeback plans based on service tiers and database configuration sizes, etc.
    Starting with Release 4, the scope of services offered via the service catalog has been expanded to include databases with varying levels of availability - Single Instance (SI) or Real Application Clusters (RAC) databases with multiple Data Guard based standby databases. Some salient points of the Data Guard integration:
      - Standby pools can now be defined across different datacenters or within the same datacenter as the primary (this helps in modelling the concept of near and far DR sites)
      - The standby databases can be single instance, RAC, or RAC One Node databases
      - Multiple standby databases can be provisioned, where the maximum limit is determined by the version of the database software
      - The standby databases can be in either mount or read only (requires the active data guard option) mode
      - All database versions 10g to 12c supported (as certified with EM 12c)
      - All 3 protection modes can be used - maximum availability, performance, security
      - Log apply can be set to sync or async along with the required apply lag
    The different service levels or service tiers are popularly represented using metals - Platinum, Gold, Silver, Bronze, and so on. The Oracle MAA whitepaper (referenced above) calls out the various service tiers as defined by Oracle's best practices, but customers can choose any logical combination; the primary / standby [1 or more] pairings supported in EM 12cR4 are: SI / -, SI / SI, RAC / -, RAC / SI, RAC / RAC, RON / -, RON / RON, where RON = RAC One Node is supported via custom post-scripts in the service template. A sample service catalog would look like the image below.
    Here we have defined 4 service levels, which have been deployed across 2 data centers, and have 3 standardized sizes. Again, it is important to note that this is just an example to get the creative juices flowing. I imagine each customer would come up with their own catalog based on the application requirements, their RTO/RPO goals, and the product licenses they own. In the screenwatch titled 'Build Service Catalog using EM12c DBaaS', I walk through the complete steps required to set up this sample service catalog in EM12c.
    2. Additional Storage Options for Snap Clone
    In my previous blog posts, I have described the snap clone feature in detail. Essentially, it provides a storage agnostic, self service, rapid, and space efficient approach to solving your data cloning problems. The net benefit is that you get incredible amounts of storage savings (on average 90%), all while cloning databases in a matter of minutes. Space and time: two things enterprises would love to save on. This feature has been designed with the goal of providing data cloning capabilities while protecting your existing investments in server, storage, and software. With this in mind, we have pursued a dual solution approach of hardware and software. In the hardware approach, we connect directly to your storage appliances and perform all low level actions required to rapidly clone your databases. In the software approach, we use an intermediate software layer to talk to any storage vendor or any storage configuration to perform the same low level actions - thus delivering the benefits of database thin cloning without requiring you to drastically change the infrastructure or IT's operating style.
    In release 4, we expand the scope of options supported by snap clone with the addition of database CloneDB. While CloneDB is not a new feature - it was first introduced in the 11.2.0.2 patchset - it has over the years become more stable and mature. CloneDB leverages a combination of the database's Direct NFS (dNFS) feature, RMAN image copies, sparse files, and copy-on-write technology to create thin clones of databases from existing backups in a matter of minutes. It essentially has all the traits that we want to present to our customers via the snap clone feature. For more information on CloneDB, I highly recommend reading the following sources:
      - Blog by Tim Hall: Direct NFS (DNFS) CloneDB in Oracle Database 11g Release 2
      - Oracle OpenWorld presentation by CERN: Efficient Database Cloning using Direct NFS and CloneDB
    The advantages of the new CloneDB integration with EM12c Snap Clone are:
      - Space and time savings
      - Ease of setup - no additional software is required other than the Oracle database binary
      - Works on all platforms
      - Reduced dependence on storage administrators
      - Cloning process fully orchestrated by EM12c, and delivered to developers/DBAs/QA testers via the self service portal
      - Uses dNFS to deliver better performance, availability, and scalability over kernel NFS
      - Complete lifecycle of the clones managed by EM12c - performance, configuration, etc.
    3. Improved Rapid Start Kits
    DBaaS deployments tend to be complex, and their setup requires a series of steps. These steps are typically performed across different users and different UIs. The Rapid Start Kit provides a single command solution to set up Database as a Service (DBaaS) and Pluggable Database as a Service (PDBaaS).
    One command creates all the Cloud artifacts like Roles, Administrators, Credentials, Database Profiles, PaaS Infrastructure Zone, Database Pools and Service Templates. Once the Rapid Start Kit has been successfully executed, requests can be made to provision databases and PDBs from the self service portal. The Rapid Start Kit can create complex topologies involving multiple zones, pools and service templates. It also supports standby databases and the use of RMAN image backups. The Rapid Start Kit in reality is a simple emcli script which takes a bunch of xml files as input and executes the complete automation in a matter of seconds. On a full rack Exadata, it took only 40 seconds to set up PDBaaS end-to-end. This kit works both for Oracle's engineered systems like Exadata, SuperCluster, etc. and on commodity hardware. One can draw a parallel to the Exadata One Command script, which again takes a bunch of inputs from the administrators and then runs a simple script that configures everything from the network to provisioning the DB software.
    Steps to use the kit:
      - The kit can be found under the SSA plug-in directory on the OMS: EM_BASE/oracle/MW/plugins/oracle.sysman.ssa.oms.plugin_12.1.0.8.0/dbaas/setup
      - It can be run from this default location or from any server which has the emcli client installed
      - For most scenarios, you would use the script dbaas/setup/database_cloud_setup.py
      - For Exadata, special integration is provided to reduce the number of inputs even further. The script to use for this scenario would be dbaas/setup/exadata_cloud_setup.py
    The database_cloud_setup.py script takes two inputs:
      - Cloud boundary xml: This file defines the cloud topology in terms of the zones and pools along with host names, oracle home locations or container database names that would be used as infrastructure for provisioning database services. This file is optional in the case of Exadata, as the boundary is well known via the Exadata system target available in EM.
      - Input xml: This file captures inputs for users, roles, profiles, service templates, etc. Essentially, all inputs required to define the DB services and other settings of the self service portal.
    Once all the xml files have been prepared, invoke the script as follows for PDBaaS:
      emcli @database_cloud_setup.py -pdbaas -cloud_boundary=/tmp/my_boundary.xml -cloud_input=/tmp/pdb_inputs.xml
    The script will prompt for passwords a few times for key users like sysman, cloud admin, SSA admin, etc. Once complete, you can simply log into EM as the self service user and request databases from the portal. More information is available in the Rapid Start Kit chapter of the Cloud Administration Guide.
    4. Extensible Metering and Chargeback
    Last but not least, Metering and Chargeback in release 4 has been made extensible in all possible regards. The new extensibility features allow customers, partners, system integrators, etc. to:
      - Extend chargeback to any target type managed in EM
      - Promote any metric in EM as a chargeback entity
      - Extend the list of charge items via metric or configuration extensions
      - Model abstract entities like no. of backup requests, job executions, support requests, etc.
    A slew of emcli verbs have also been added that allow administrators to create, edit, delete, and import/export charge plans, and assign cost centers, all via the command line. More information is available in the Chargeback API chapter of the Cloud Administration Guide.
    5. Miscellaneous Enhancements
    There are other miscellaneous, yet important, enhancements that are worth a mention.
    These have mostly been asked for by customers like you. They are:
      - Custom naming of DB Services: Self service users can provide custom names for the DB SID, DB service, schemas, and tablespaces. Every custom name is validated for uniqueness in EM.
      - 'Create like' of Service Templates: Now creating variants of a service template is only a click away. This would be vital when you publish service templates to represent different database sizes or service levels.
      - Profile viewer: View the details of a profile like datafiles, control files, snapshot ids, export/import files, etc. prior to its selection in the service template.
      - Cleanup automation - for failed and successful requests: A single emcli command cleans up all remnant artifacts of a failed request. Cleanup can be performed on a per request basis or for the entire pool. As an extension, you can also delete successful requests.
      - Improved delete user workflow: Allows administrators to reassign cloud resources to another user or delete all of them.
      - Support for multiple tablespaces for schema as a service: In addition to multiple schemas, users can also specify multiple tablespaces per request.
    I hope this was a good introduction to the new Database as a Service enhancements in EM12c R4. I encourage you to explore many of these new and existing features and give us feedback. Good luck!
    References: Cloud Management Page on OTN, Cloud Administration Guide [Documentation]
    -- Adeesh Fulay (@adeeshf)

    Read the article

  • Western Digital Smartware does not detect external HDD

    - by romilnagrani
    Hi people, I recently bought a WD My Book Essential 1 TB HDD. I downloaded and installed the Smartware software on both my desktop (Windows XP) and laptop (Windows 7), but in both cases the software is not able to detect the external hard disk. It shows the desktop/laptop (whichever is apt) on the left-hand side of the software, but not the hard drive on the right side. Why so? I need to install the Smartware software because my friend, who gave me the drive, had (I suppose) deleted the software from it. Please help me, thanks.

    Read the article

  • Advice for a getting a job in algorithmic trading - writing faster code

    - by Alex
    I am currently an intermediate Java developer working in the financial industry. I am considering trying to get into an algorithmic trading developer position. I am looking for any advice/resources that may help me obtain such a job. My naive initial thoughts are to concentrate on learning how to write faster, more memory efficient code whilst maintaining readability. Can anyone point me in the right direction of some useful resources for what I am aiming to achieve?

    Read the article

  • Problem displaying tiles using tiled map loader with SFML

    - by user1905192
    I've been searching fruitlessly for what I did wrong for the past couple of days and I was wondering if anyone here could help me. My program loads my tile map, but then crashes with an assertion error. The program breaks at this line: spacing = atoi(tilesetElement-Attribute("spacing")); Here's my main game.cpp file. #include "stdafx.h" #include "Game.h" #include "Ball.h" #include "level.h" using namespace std; Game::Game() { gameState=NotStarted; ball.setPosition(500,500); level.LoadFromFile("meow.tmx"); } void Game::Start() { if (gameState==NotStarted) { window.create(sf::VideoMode(1024,768,320),"game"); view.reset(sf::FloatRect(0,0,1000,1000));//ball drawn at 500,500 level.SetDrawingBounds(sf::FloatRect(view.getCenter().x-view.getSize().x/2,view.getCenter().y-view.getSize().y/2,view.getSize().x, view.getSize().y)); window.setView(view); gameState=Playing; } while(gameState!=Exiting) { GameLoop(); } window.close(); } void Game::GameLoop() { sf::Event CurrentEvent; window.pollEvent(CurrentEvent); switch(gameState) { case Playing: { window.clear(sf::Color::White); window.setView(view); if (CurrentEvent.type==sf::Event::Closed) { gameState=Exiting; } if ( !ball.IsFalling() &&!ball.IsJumping() &&sf::Keyboard::isKeyPressed(sf::Keyboard::Space)) { ball.setJState(); } ball.Update(view); level.Draw(window); ball.Draw(window); window.display(); break; } } } And here's the file where the error happens: /********************************************************************* Quinn Schwab 16/08/2010 SFML Tiled Map Loader The zlib license has been used to make this software fully compatible with SFML. See http://www.sfml-dev.org/license.php This software is provided 'as-is', without any express or implied warranty. In no event will the authors be held liable for any damages arising from the use of this software. Permission is granted to anyone to use this software for any purpose, including commercial applications, and to alter it and redistribute it freely, subject to the following restrictions: 1. The origin of this software must not be misrepresented; you must not claim that you wrote the original software. If you use this software in a product, an acknowledgment in the product documentation would be appreciated but is not required. 2. Altered source versions must be plainly marked as such, and must not be misrepresented as being the original software. 3. This notice may not be removed or altered from any source distribution. *********************************************************************/ #include "level.h" #include <iostream> #include "tinyxml.h" #include <fstream> int Object::GetPropertyInt(std::string name) { int i; i = atoi(properties[name].c_str()); return i; } float Object::GetPropertyFloat(std::string name) { float f; f = strtod(properties[name].c_str(), NULL); return f; } std::string Object::GetPropertyString(std::string name) { return properties[name]; } Level::Level() { //ctor } Level::~Level() { //dtor } using namespace std; bool Level::LoadFromFile(std::string filename) { TiXmlDocument levelFile(filename.c_str()); if (!levelFile.LoadFile()) { std::cout << "Loading level \"" << filename << "\" failed." << std::endl; return false; } //Map element. This is the root element for the whole file. TiXmlElement *map; map = levelFile.FirstChildElement("map"); //Set up misc map properties. 
width = atoi(map->Attribute("width")); height = atoi(map->Attribute("height")); tileWidth = atoi(map->Attribute("tilewidth")); tileHeight = atoi(map->Attribute("tileheight")); //Tileset stuff TiXmlElement *tilesetElement; tilesetElement = map->FirstChildElement("tileset"); firstTileID = atoi(tilesetElement->Attribute("firstgid")); spacing = atoi(tilesetElement->Attribute("spacing")); margin = atoi(tilesetElement->Attribute("margin")); //Tileset image TiXmlElement *image; image = tilesetElement->FirstChildElement("image"); std::string imagepath = image->Attribute("source"); if (!tilesetImage.loadFromFile(imagepath))//Load the tileset image { std::cout << "Failed to load tile sheet." << std::endl; return false; } tilesetImage.createMaskFromColor(sf::Color(255, 0, 255)); tilesetTexture.loadFromImage(tilesetImage); tilesetTexture.setSmooth(false); //Columns and rows (of tileset image) int columns = tilesetTexture.getSize().x / tileWidth; int rows = tilesetTexture.getSize().y / tileHeight; std::vector <sf::Rect<int> > subRects;//container of subrects (to divide the tilesheet image up) //tiles/subrects are counted from 0, left to right, top to bottom for (int y = 0; y < rows; y++) { for (int x = 0; x < columns; x++) { sf::Rect <int> rect; rect.top = y * tileHeight; rect.height = y * tileHeight + tileHeight; rect.left = x * tileWidth; rect.width = x * tileWidth + tileWidth; subRects.push_back(rect); } } //Layers TiXmlElement *layerElement; layerElement = map->FirstChildElement("layer"); while (layerElement) { Layer layer; if (layerElement->Attribute("opacity") != NULL)//check if opacity attribute exists { float opacity = strtod(layerElement->Attribute("opacity"), NULL);//convert the (string) opacity element to float layer.opacity = 255 * opacity; } else { layer.opacity = 255;//if the attribute doesnt exist, default to full opacity } //Tiles TiXmlElement *layerDataElement; layerDataElement = layerElement->FirstChildElement("data"); if (layerDataElement == NULL) { std::cout << "Bad map. No layer information found." << std::endl; } TiXmlElement *tileElement; tileElement = layerDataElement->FirstChildElement("tile"); if (tileElement == NULL) { std::cout << "Bad map. No tile information found." << std::endl; return false; } int x = 0; int y = 0; while (tileElement) { int tileGID = atoi(tileElement->Attribute("gid")); int subRectToUse = tileGID - firstTileID;//Work out the subrect ID to 'chop up' the tilesheet image. if (subRectToUse >= 0)//we only need to (and only can) create a sprite/tile if there is one to display { sf::Sprite sprite;//sprite for the tile sprite.setTexture(tilesetTexture); sprite.setTextureRect(subRects[subRectToUse]); sprite.setPosition(x * tileWidth, y * tileHeight); sprite.setColor(sf::Color(255, 255, 255, layer.opacity));//Set opacity of the tile. 
//add tile to layer layer.tiles.push_back(sprite); } tileElement = tileElement->NextSiblingElement("tile"); //increment x, y x++; if (x >= width)//if x has "hit" the end (right) of the map, reset it to the start (left) { x = 0; y++; if (y >= height) { y = 0; } } } layers.push_back(layer); layerElement = layerElement->NextSiblingElement("layer"); } //Objects TiXmlElement *objectGroupElement; if (map->FirstChildElement("objectgroup") != NULL)//Check that there is atleast one object layer { objectGroupElement = map->FirstChildElement("objectgroup"); while (objectGroupElement)//loop through object layers { TiXmlElement *objectElement; objectElement = objectGroupElement->FirstChildElement("object"); while (objectElement)//loop through objects { std::string objectType; if (objectElement->Attribute("type") != NULL) { objectType = objectElement->Attribute("type"); } std::string objectName; if (objectElement->Attribute("name") != NULL) { objectName = objectElement->Attribute("name"); } int x = atoi(objectElement->Attribute("x")); int y = atoi(objectElement->Attribute("y")); int width = atoi(objectElement->Attribute("width")); int height = atoi(objectElement->Attribute("height")); Object object; object.name = objectName; object.type = objectType; sf::Rect <int> objectRect; objectRect.top = y; objectRect.left = x; objectRect.height = y + height; objectRect.width = x + width; if (objectType == "solid") { solidObjects.push_back(objectRect); } object.rect = objectRect; TiXmlElement *properties; properties = objectElement->FirstChildElement("properties"); if (properties != NULL) { TiXmlElement *prop; prop = properties->FirstChildElement("property"); if (prop != NULL) { while(prop) { std::string propertyName = prop->Attribute("name"); std::string propertyValue = prop->Attribute("value"); object.properties[propertyName] = propertyValue; prop = prop->NextSiblingElement("property"); } } } objects.push_back(object); objectElement = objectElement->NextSiblingElement("object"); } objectGroupElement = objectGroupElement->NextSiblingElement("objectgroup"); } } else { std::cout << "No object layers found..." << std::endl; } return true; } Object Level::GetObject(std::string name) { for (int i = 0; i < objects.size(); i++) { if (objects[i].name == name) { return objects[i]; } } } void Level::SetDrawingBounds(sf::Rect<float> bounds) { drawingBounds = bounds; cout<<tileHeight; //Adjust the rect so that tiles are drawn just off screen, so you don't see them disappearing. drawingBounds.top -= tileHeight; drawingBounds.left -= tileWidth; drawingBounds.width += tileWidth; drawingBounds.height += tileHeight; } void Level::Draw(sf::RenderWindow &window) { for (int layer = 0; layer < layers.size(); layer++) { for (int tile = 0; tile < layers[layer].tiles.size(); tile++) { if (drawingBounds.contains(layers[layer].tiles[tile].getPosition().x, layers[layer].tiles[tile].getPosition().y)) { window.draw(layers[layer].tiles[tile]); } } } } I really hope that one of you can help me and I'm sorry if I've made any formatting issues. Thanks!
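    A likely culprit (an educated guess from the quoted line, not a confirmed diagnosis): TinyXML's Attribute() returns NULL when an attribute is missing, and Tiled only writes the spacing/margin attributes when they are non-zero, so atoi(tilesetElement->Attribute("spacing")) ends up calling atoi(NULL) and trips the CRT assertion. (The quoted line also lost its arrow in the paste; in the source it reads tilesetElement->Attribute("spacing"), as the later listing shows.) A small defensive sketch of that part of Level::LoadFromFile, using a hypothetical helper name:
      // Read an integer attribute, falling back to a default when the attribute
      // is absent (TinyXML's Attribute() returns NULL in that case).
      static int AttributeOrDefault(const TiXmlElement* element, const char* name, int defaultValue)
      {
          const char* value = element->Attribute(name);
          return value ? atoi(value) : defaultValue;
      }

      // ... inside Level::LoadFromFile, after locating the <tileset> element:
      firstTileID = AttributeOrDefault(tilesetElement, "firstgid", 1);
      spacing     = AttributeOrDefault(tilesetElement, "spacing", 0);  // no longer crashes when the attribute is absent
      margin      = AttributeOrDefault(tilesetElement, "margin", 0);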

    Read the article

  • How to edit a semi-plaintext file while maintaining character structure?

    - by Raul
    I am using software (Groupmail from Infacta) that stores exact / absolute %PATHS% for some settings in a specific semi-plaintext file. This is a really bad idea because you can't move the USER folder, or, as in my case, it does not work after migrating to a new computer with a different language. For example: C:\Documents and Settings\USER\Local Settings\Application Data\Infacta is different than C:\Documents and Settings\USER\Configuración local\Datos de programa\Infacta. Obviously, the software does not work well. I tried to solve this by using Find/Replace in Notepad++ to substitute the new PATH. While the Groupmail software loads well and shows settings correctly, it fails when trying to save data to that file. I guess this is because the length or number of replaced characters is different, which corrupts the file. Could you please help me edit this file while maintaining its integrity / structure?

    Read the article

  • So, what’s your blog URL?

    - by johndoucette
    Asked by many of my colleagues often enough, I decided to take the plunge and begin blogging. After many attempts to start and long discussions about what I should write about, I decided to give my “buddies” a series of lessons and tidbits to help them understand what it takes to manage a software development project in the real world. Stories of success and failure to keep hope alive. I am formally trained as a developer (BS/CS) and have scattered my code throughout the matrix since 1985 (officially working for the man). As I moved from job-to-job over my career, I have had good managers, bad ones, and ones who were – well, just sitting in the corner office. It wasn't until I began the transition and commitment to the role of project management that I began to take real software development management seriously. A boss once told me “put down the code. Start managing the people and process.” That was a scary time in my career. I loved solving really cool problems with a blank sheet of paper. It was an adrenaline rush to get an opportunity to start from scratch and write an application solution people would actually use and help them in their work/business. I felt that moving into “management” would remove me from the thrill and ownership I felt as a developer. It was a hard step to take, and one which I believe is hard for any developer. Well, I am here to help you through this transition. For those of you wanting to read my stories or learn about the tools and techniques I use on a daily basis, you too might just learn something you would have never thought of as an architect/developer. I am currently a Sr. Consultant at Magenic with the Boston branch office and primarily work with clients in the New England area. I am typically engaged as the lead project manager on our engagements, but also perform Application Lifecycle Management (ALM) assessments for development organizations as well as augment the Technical Evangelists for Microsoft and perform many Team Foundation Server (TFS) demos, installs and “get started” engagements. I have spoken at the New England Code Camp, our most recent CodeMastery event in Boston, and have written several whitepapers.   I am looking forward to helping you “Put down the code.” John Doucette

    Read the article

  • Do you tend to write your own name or your company name in your code?

    - by Connell Watkins
    I've been working on various projects at home and at work, and over the years I've developed two main APIs that I use in almost all AJAX based websites. I've compiled both of these into DLLs and called the namespaces Connell.Database and Connell.Json. My boss recently saw these namespaces in a software documentation for a project for the company and said I shouldn't be using my own name in the code. (But it's my code!) One thing to bear in mind is that we're not a software company. We're an IT support company, and I'm the only full-time software developer here, so there's not really any procedures on how we should write software in the company. Another thing to bear in mind is that I do intend on one day releasing these DLLs as open-source projects. How do other developers group their namespaces within their company? Does anyone use the same class libraries in personal and in work projects? Also does this work the other way round? If I write a class library entirely at work, who owns that code? If I've seen the library through from start to finish, designed it and programmed it. Can I use that for another project at home? Thanks, Update I've spoken to my boss about this issue and he agrees that they're my objects and he's fine for me to open-source them. Before this conversation I started changing the objects anyway, which was actually quite productive and the code now suits this specific project more-so than it did previously. But thank you to everyone involved for a very interesting debate. I hope all this text isn't wasted and someone learns from it. I certainly did. Cheers,

    Read the article

  • Best practice for marking a bug as resolved in Bugzilla?

    - by Vincent B.
    I am wondering what is the best way to handle marking a bug as resolved and providing the version of the component/product in which the fix can be found.
    Context: For a project I am working on, we are using Bugzilla for issue tracking, and we have the following: a product "A" with a version number like vA.B.C.D. This product "A" has the following components: component "C1" with a version number like vA.B.C.D, component "C2" with a version number like vA.B.C.D, and component "C3" with a version number like vA.B.C.D. Internally we keep track of which component versions have been used to generate the product A version vA.B.C.D. Example: Product "A" version v1.0.0.0 has been produced from component "C1" v1.0.0.3, component "C2" v1.3.0.0 and component "C3" v2.1.3.5. And product "A" version v1.0.1.0 has been produced from component "C1" v1.0.0.4, component "C2" v1.3.0.0 and component "C3" v2.1.3.5. Each component is an SVN repository. The person in charge of generating product "A" only has access to the different components' tags folders in SVN, and not the trunk of each component repository.
    Problem: Now the problem is the following: when a bug is found in product "A", and the bug is related to component "C1", the version of product "A" is chosen (e.g. v1.0.0.0), and this version allows the developer to know which version of component "C1" has the bug (here it will be v1.0.0.3). A bug report is created. Now let's say that the developer responsible for component "C1" corrects the bug; when the bug seems to be fixed, and after some test and validation, the developer generates a new tag for component "C1", with version v1.0.0.4. At this time, the developer of component "C1" needs to update the bug report, but what is the best thing to do:
      - Mark the bug as resolved/fixed and add a comment saying "This bug has been fixed in tag v1.0.0.4 of the C1 component"?
      - Keep the bug as assigned and add a comment saying "This bug has been fixed in tag v1.0.0.4 of the C1 component; update this bug status to resolved for the next version of the product that will be generated with the newest version (v1.0.0.4 of C1)"?
      - Another possible way to deal with this problem?
    Right now the problem is that when a product component CX is fixed, it is not certain in which future version of product A it will be included, so for me it is not possible to say in which version of the product it will be solved, but it is possible to say in which version of component CX it has been solved. So when do we need to mark a bug as solved: when the product A version includes the fixed version of CX, or as soon as the CX component has been fixed? Thanks for your personal feedback and ideas about this!

    Read the article

  • Evaluating Scrum - is it okay to have people with multiple roles in a Scrum team?

    - by Wayne M
    I'm evaluating some Agile-style methodologies for possible introduction to my team. With Scrum, is it allowable to have the same person perform multiple roles? We have a small team of four developers and a web designer; we don't really have a lead (I fulfill this role), QA testers or business analysts, and all of our development tasks come from the CIO. Automated testing is seen as a total waste of time, and everything focuses on speed and not quality. What will happen is the CIO will come up with a development task (whether a feature or a bug) and give it to a developer (not to the whole team, to an individual, often in private or out of the blue) who is then expected to get it completed. The CIO doesn't gather requirements beyond the initial idea (and this has bitten us before as we'll implement something only to find out that none of the end users can use the feature, because they weren't consulted or even informed about it before we developed it, and in a panic we'll be told to revert the change) but requires say in/approval of everything that we do. First things first, is a Scrum style something to consider to introduce some standards and practices? From reading, Scrum seems to rely on a bit more trust and communication and focuses more on project management than on development, which is something we are completely devoid of as we don't have any semblance of project management at present. Second, if it can work is it unreasonable for someone, let's say myself, to act as both ScrumMaster and a developer? Or for a developer to also be the Product Owner (although chances are this will be the CIO, who isn't a developer)? I realize the Scrum Master and the Product Owner should be different people but at the same time I don't think we have anyone who has the qualities of a Product Owner (chances are it would turn into a "I need all these stories, I don't care how but get it done" type of deal and/or any freeze would be unfrozen on a whim). It seems to me that I might need to pick and choose pieces of Scrum/XP/Lean to compensate for how things are done currently, as it's highly unlikely that the mentality can be changed; for instance Pair Programming would never fly (seen as a waste, you get half the tasks done if you need two people for everything), TDD would be a hard sell, but short cycles would be welcomed.

    Read the article

  • ApiChange Corporate Edition

    - by Alois Kraus
    In my initial announcement I could only cover a small subset of what ApiChange can do for you. Let's look at how ApiChange can help you fix bugs due to wrong usage of an Api in a fraction of the time it would normally take. It happens that software is tested and some bugs show up. One bug could be: we get way too many log messages during our test run. Now you have the task to find the most frequent messages and eliminate the Log calls from the source code. But what about the myriad other log calls? How can we check that the distribution of log calls is nearly equal across all developers? And if not, how can we contact the developer to check his code? ApiChange can help you to connect these loose ends. It combines several information silos into one cohesive view. The picture below shows how it is able to fill the gaps. The public version does currently “only” parse the binaries and pdbs to give you, for a -whousesmethod query, the following columns: If it happens that you have Rational ClearCase (a source control system) in your development shop and an Active Directory in place, then ApiChange will try to determine, from the source file (which was determined from the pdb), the last check-in user, who should be present in your Active Directory. From there it is only a small hop to an LDAP query against your AD domain or the GC (Global Catalog) to get, from the user name, his full name, email, phone number, department, and so on. ApiChange will append this additional data to all of your query results which contain source files if you add the -fileinfo option. As I said, this is currently not enabled by default, since the AD domain needs to be configured, which is currently only some hard coded values in the SiteConstants.cs source file of ApiChange.Api.dll. Once you have this data you can generate metrics based on source file, developer, assembly, … and add additional data by drag and drop directly into the pivot tables inside Excel. This allows you, e.g., to generate a report which lists the source files with the most log calls in descending order, along with the developer name and email, in the pivot table. Armed with this knowledge you can take meaningful measures, e.g. ask the developer if the huge number of log calls in this source file can be optimized. I am aware that this is a very specific scenario, but it is a huge time saver when you are able to fill the missing gaps of information. ApiChange does this in an extensible way:
      namespace ApiChange.ExternalData
      {
          public interface IFileInformationProvider
          {
              UserInfo GetInformationFromFile(string fileName);
          }
      }
    It defines an interface where you can implement your custom information provider to close the gap between the source control system and the real person I have to send an email to, asking if his code needs a closer inspection.

    Read the article

  • How to Configure a vm on the same machine to do remote desktop [closed]

    - by Varun K
    I want to achieve the following (note: I'd like to get this done first of all with Win7 as both the host and VM OS):
      - Install a Windows 7/XP/Windows 8 VM on a Windows 7/Windows 8 host machine
      - Configure it so that I can connect to it via remote desktop
    This is because I use screen reader software, and audio output directly from VMs is not highly responsive. My software has a feature that lets it connect to its copy on the remote machine (during an RDP session) and then start receiving the text description, which it translates into audio on the client (host in this case) machine. I want to know:
      - Which VM software can let me do this - VMware/MS Virtual PC or VirtualBox?
      - If it is possible with every VM software, could you give an example of how to do this with any one of these 3?
    Specifically, I know how to install Windows on a VM (on both VMware/Virtual PC), but don't really know how to configure a network such that I can remote into that VM from the host OS. Hope this clarifies what I'm trying to achieve.
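    For the VirtualBox option specifically, one workable setup (a sketch based on standard VirtualBox networking, with placeholder names - not steps taken from the question) is to either bridge the VM onto the LAN or forward a host port to the guest's RDP port, enable Remote Desktop inside the guest, and then connect with the normal RDP client:
      REM Option A: bridged networking - the guest gets its own LAN address you can RDP to
      VBoxManage modifyvm "Win7Guest" --nic1 bridged --bridgeadapter1 "Local Area Connection"
      REM Option B: keep NAT and forward host port 53389 to the guest's RDP port 3389
      VBoxManage modifyvm "Win7Guest" --natpf1 "rdp,tcp,,53389,,3389"
      REM In the guest, enable Remote Desktop (System Properties > Remote), then from the host:
      REM   mstsc /v:<guest-LAN-IP>        (Option A)
      REM   mstsc /v:127.0.0.1:53389       (Option B)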

    Read the article

  • ArchBeat Link-o-Rama for 10-17-2012

    - by Bob Rhubart
    This is your brain on IT architecture. Oracle Technology Network Architect Day in Los Angeles, Oct 2 Stuff your cranium with architecture by attending Oracle Technology Network Architect Day in Los Angeles, October 25, 2012, at the Sofitel Los Angeles, 8555 Beverly Boulevard, Los Angeles, CA 90048. Technical sessions, panel Q&A, and peer roundtables—plus a free lunch. Register now. Panel: On the Impact of Software | InfoQ Les Hatton (Oakwood Computing Associates), Clive King (Oracle), Paul Good (Shell), Mike Andrews (Microsoft) and Michiel van Genuchten (moderator) discuss the impact of software engineering on our lives in this panel discussion recorded at the Computer Society Software Experts Summit 2012. OTN APAC Tour 2012: Bangkok, Thailand - Oct 22, 2012 Mike Dietrich shares information on the upcoming OTN APAC Tour stop in Bangkok. Registration is open. Consolidating Oracle E-Business Suite R12 on Oracle's SPARC SuperCluster | Giri Mandalika Giri Mandalika shares an overview of a new Optimized Solution for Oracle E-Business Suite (EBS) R12 12.1.3.. As Giri explains, "This solution was centered around the engineered system, SPARC SuperCluster T4-4." The Oldest Big Data Problem: Parsing Human Language | The Data Warehouse Insider Dan McClary offers up a new whitepaper "which details the use of Digital Reasoning Systems' Synthesys software on Oracle Big Data Appliance." Mobile Apps for EBS | Capgemini Oracle Blog Capgemini solution architect Satish Iyer breifly describes how Oracle ADF and Oracle SOA Suite can be used to fill the gap in mobile applications for Oracle EBS. Ease the Chaos with Automated Patching: Enterprise Manager Cloud Control 12c | Porus Homi Havewala This new OTN article is excerpted from Porus Homi Havewala's latest book, Oracle Enteprise Manager Cloud Control 12c: Managing Data Center Chaos (2012, Packt Publishing). Thought for the Day "Never make a technical decision based upon the politics of the situation, and never make a political decision based upon technical issues." — Geoffrey James Source: softwarequotes.com

    Read the article

  • Hierarchies on Steroids #2: A Replacement for Nested Sets Calculations

    In this sequel to his first "Hierarchies on Steroids" article, SQL Server MVP Jeff Moden shows us how to build a pre-aggregated table that will answer most of the questions that you could ask of a typical hierarchy. Any bets on whether Santa is packin’ a Tally Table in his bag or not?

    Read the article

  • Windows Phone 7 Development Updates – March 8th 2011

    - by Nikita Polyakov
    Here are the latest update from the Windows Phone 7 Developer Worlds that went live this month. Here are some of the latest numbers: Windows Phone Marketplace currently offers more than 9,000 quality apps and games and enjoys a base of over 32,000 registered developers, delivering an average of 100 new apps every day. There have been over 1 million downloads of the developers tools for Windows Phone 7. Trial version help you sell more Trials result in higher sales by the numbers: Users like trials  - paid apps with trial functionality are downloaded 70 times more than paid apps that don’t Nearly 1 out of 10 trial apps downloaded convert to a purchase and generate 10 times more revenue on average than paid apps that don’t include trial functionality. Trial downloads convert to paid downloads quickly. More than half of trial downloads that convert to a sale do so within the 1st 24 hours of trial download, and mostly within 2 hours of trial download. Microsoft Ad Control is gaining traction By the numbers - ad supported Windows Phone 7 apps are: Roughly ¼ of all registered U.S. WP7 developers have downloaded the free Ad SDK for Silverlight and XNA Of ad funded apps, over 95 percent use the free Microsoft Advertising Ad Control Monthly impressions from our Ad Exchange has continued to grow by double digits – impressions increased by 376 percent since January Ad Control, the first wave of “How Do I” videos are now available on MSDN: Create an Ad in a Windows Phone 7 XNA Game App Register Ad-Enabled Windows Phone 7 Apps Measure Ad Performance of Windows Phone 7 Apps Boarder International App submission for Free Apps through Yalla Apps As of today you can start submitting your free applications in developer markets that are currently not covered by Microsoft. To submit your Free application if you DO NOT belong to one of the Marketplace supported countries, go to: Yalla Apps Marketplace Policy Updates: Free App Marketplace Submission upped to 100 and other news Microsoft has been revisiting a few of our Marketplace policies based on feedback from developers to reduce friction and cost, word for word: 1. We have raised the limit on the number of certifications that can be performed for FREE apps at no cost to the registered developer from five to 100. This was a common request from developers which we are glad to implement after building alternate methods to ensure that users can find and download high quality apps. 2. We have converted policy 5.6 - related to the inclusion of contact information for support - from a mandatory to an optional policy. This is still a strongly recommended best practice, but we recognized and responded to developer feedback that this policy was creating excessive drag on the certification process for developers without commensurate user benefit for all apps. 3. We also understand the desire for clarification with regard to our policy on applications distributed under open source licenses.  The Marketplace Application Provider Agreement (APA) already permits applications under the BSD, MIT, Apache Software License 2.0 and Microsoft Public License.  We plan to update the APA shortly to clarify that we also permit applications under the Eclipse Public License, the Mozilla Public License and other, similar licenses and we continue to explore the possibility of accommodating additional OSS licenses. Enjoy and happy coding! Official Blog Post for reference.

    Read the article

  • Package managers for Windows

    - by mezei.zoltan
    You might be familiar with Ninite. What I'd like to know is if there are good alternatives to that software for Windows. The features I expect: installs the latest version of software supports 64 bit installs where possible strips ads/toolbars/similar stuff provides a way to keep the programs updated after installation if I can add custom installers to the software, that's a big plus. Any ideas if such a program exists?

    Read the article

  • How will my Electronic Engineering degree be received in the Canadian Game Development market? [closed]

    - by Harikawashi
    I have a Electronic Engineering with Computer Science Degree from a reputable South African university. The EE with CS degree is basically Electronic Engineering, with some of the high voltage subjects thrown out and replaced with computer science subjects - mostly quite theoretical, but not in too much depth. I went on to earn a Masters Degree in Digital Signal Processing, focussing on Speech Recognition in Educational Applications. I have always loved programming - I taught myself QBASIC when I was in primary school, I learned Java at school, did some low level C at University, and taught myself C# and Python while doing my post graduate degree. C# is currently my strong suit, I think I am pretty capable with it. I have two years work experience in Namibia - working as a consulting electrical engineer (no software content whatsoever) and also developing C# desktop applications for the company I work for. I would like to move to Canada next year and work in the Game Development Industry as programmer or software engineer. My interests in particular are towards the more mathematical applications, like game and physics engines, or statistical disciplines like artificial intelligence. However, these are passions - not areas in which I have any work experience. So the question: How well will my BEngEE&CS and MScEng be received in the game industry? Seeing as it's not a pure software degree and I have no official software development work experience?

    Read the article

  • Have you used nDepend?

    - by Nick Harrison
    Have you used NDepend? I have often wanted to use it, but never spent the money on it. I have developed many tools that try to do pieces of what NDepend does, but never with as much success as it achieves. Put simply, it is a tool that will allow you to understand and monitor the architecture of your software, and it does it in some pretty amazing ways. One of the most impressive features is something they call Code Query Language. It allows you to write queries very similar to SQL to track the performance of various software metrics and use this to identify areas that are out of compliance with your standards and architecture. For instance, once you have analyzed your project, you can write queries such as: SELECT METHODS WHERE IsPublic AND CouldBePrivate. You can also set up such queries to provide warnings if there are records returned. You can incorporate this into your daily build and compare build against build. There are over 82 metrics included to allow you to view your code from a variety of angles. I have often advocated for a "Code Inventory" database to track the state of software and the ROI on software investments. This tool alone will take you about 90% of the way there. If you are not using it yet, I strongly recommend that you do!
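    To make the "warn in the daily build" idea concrete, here are two more rules in the same CQL style. They are illustrative sketches written against the classic CQL metric names, not queries taken from the post, so treat the identifiers as assumptions to verify against your NDepend version. The first flags overly long methods; the second surfaces non-public methods that nothing appears to call:
      WARN IF Count > 0 IN SELECT METHODS WHERE NbLinesOfCode > 30 ORDER BY NbLinesOfCode DESC
      WARN IF Count > 0 IN SELECT METHODS WHERE MethodCa == 0 AND !IsPublic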

    Read the article

  • active-to-passive ftp solution

    - by Joris
    I have an FTP client (a .NET app I don't have the source to) that only does active mode and that needs to push data to an appliance's FTP server that only speaks passive. There is nothing I can do to modify the software on either end, but everything in between is fair game (routing, Windows or Linux software, firewall tricks, ...). Is there some kind of FTP proxy software? Or some other kind of solution I could try?

    Read the article

  • Recommended Method to Watch Amazon Prime using Ubuntu 14.04 LTS

    - by Kurt Sanger
    I realize that HAL is no longer in the Ubuntu Software Center for Ubuntu 14.04 and it is only available from a third party at this time. But I would like to know what Ubuntu's plans are for integrating DRM into Linux. Especially with Amazon's integration into the search tool, one would hope that they would make it easier for their Amazon Prime customers to watch Instant Videos. Is the repository for getting HAL for 13.10 safe to use? What will it break if I install it onto 14.04? Or do we need to find another OS that has DRM built into it? If HAL is okay to add to the OS using a third-party repo, then why doesn't the Ubuntu Software Center support it too? I imagine that Amazon's contract with the video copyright holders requires that they have some protection on electronically distributed media. I also imagine that getting Amazon to change is much harder than getting a bunch of software engineers to fix Ubuntu. Unless they don't want to. At which point Ubuntu isn't really a complete OS. Very disappointing. In general, the ease of use of Ubuntu, the software center, and the large variety of applications were alluring. But breaking DRM wasn't a great idea. Can't wait to see what fails in our next update. Please tell us that there is a plan that is going to work in our future.

    Read the article

  • What are the legal considerations when forking a BSD-licensed project?

    - by Thomas Owens
    I'm interested in forking a project released under a two-clause BSD license: Copyright (c) 2010 {copyright holder} All rights reserved. Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: (1) Redistributions of source code must retain the above copyright notice, this list of conditions and the disclaimer at the end. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. (2) Neither the name of {copyright holder} nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission. DISCLAIMER THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. I've never forked a project before, but this project is very similar to something that I need/want. However, I'm not sure how far I'll get, so my plan is to pull the latest from their repository and start working. Maybe, eventually, I'll get it to where I want it, and be able to release it. Is this the right approach? How, exactly, does this impact forking of the project? How do I track who owns what components or sections (what's copyright me, what's copyright the original creators, once I start stomping over their code base)? Can I fork this project? What must I do prior to releasing, and when/if I decide to release the software derived from this BSD-licensed work?
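    One practical detail this touches on - keeping the original notice intact while adding your own - is usually handled by stacking copyright lines rather than replacing them. As an illustration only (a common convention with hypothetical names and dates, not text from the project in question), a forked source header might read:
      Copyright (c) 2010 {original copyright holder}
      Copyright (c) 2012 {your name} (modifications and additions)
      All rights reserved.
      [the two clauses and the disclaimer follow, unchanged]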

    Read the article

  • links for 2010-12-22

    - by Bob Rhubart
    @hajonormann: BPM: Top Seven Architectural Topics in 2010 Oracle ACE Director Hajo Normann offers details on how to design a BPM/SOA solution including: modeling human interaction, improving BPM models, orchestrating composed services, central task management, new approaches for business-IT alignment, solutions for non-deterministic processes, and choreography. (tags: oracle otn soasymposium infoq soa bpm) InfoQ: Simplicity, The Way of the Unusual Architect Dan North talks about the tendency developers-becoming-architects have to create bigger and more complex systems. Without trying to be simplistic, North argues for simplicity, offering strategies to extract the simple essence from complex situations. (tags: ping.fm) Fun with Sun Ray, 3D, Oracle VM x86 and SRIOV (Wim Coekaerts Blog) "One of the things I like about my job is that I get to play around with stuff and make use of the technologies we work on in my teams. Sort of my own little playground." - Wim Coekaerts (tags: oracle otn virtualization oraclevm) Oracle VM VirtualBox 4.0.0 Released! (Oracle's Virtualization Blog) And you were worried about what to get that special someone for Christmas... (tags: oracle otn virtualization virtualbox) Virtual Developer Day: Oracle WebLogic Server & Java EE (#OTNVDD) (Oracle Technology Network Blog (aka TechBlog)) "Virtual Developer Day is back with a vengeance! On Feb. 1, login to learn how Oracle WebLogic Server enables a whole new level of productivity for enterprise developers." Registration is open. (tags: oracle otn events webinar java) New Coherence 3.6 Oracle University Course (Cristóbal Soto's Blog) Cristóbal Soto shares information on the "Oracle Coherence 3.6: Share and Manage Data in Clusters" course now available through Oracle University. (tags: oracle otn grid coherence) The Aquarium: Oracle WebLogic Server & Java EE developer day "Oracle WebLogic is well on its way to contribute to the general Java EE 6 momentum and the OTN Blog has just announced a Virtual Developer Day for Oracle WebLogic." (tags: oracle otn weblogic java) Enterprise 2.0 Use Cases for Semantic Web (Reiser 2.0) "How can an enterprise improve the efficiency and effectiveness of their Knowledge and Community model leveraging semantic technologies and social networking dynamics?" - Peter Reiser (tags: oracle otn enterprise2.0 semanticweb) John Gøtze: European Interoperability Framework 2.0 "This week, the European Commission announced an updated interoperability policy in the EU. The Commission has committed itself to adopt a Communication that introduces the European Interoperability Strategy (EIS) and an update to the European Interoperability Framework (EIF)..." - John Gøtze (tags: entarch Interoperability) Andy Mulholland: Maybe Web 3.0 is quite understandable – and a natural result "The idea of Web 1.0 = content, Web 2.0 = people and Web 3.0 = services has a nice symmetrical feel to it, in fact it feels basically right as such a definition would include the two other major definitions as well. So if we put these things all together what picture do we see?" - Andy Mulholland (tags: web2.0 web3.0) Ken Downs: A Working Definition of Business Logic, with Implications for CRUD Code "The Wikipedia entry on 'Business Logic' has a wonderfully honest opening sentence stating that 'Business logic, or domain logic, is a non-technical term...'"  (tags: businesslogic crud)

    Read the article
