Search Results

Search found 23808 results on 953 pages for 'c source'.


  • First steps with Oracle ADF Mobile for iOS and Android

    - by Bruno.Borges
    Oracle recently announced its new mobile development platform, Oracle ADF Mobile. With it, you can build true Java applications, deploying and running real Java code on both Android and iOS with its self-contained Java runtime. It also comes with PhoneGap, which allows you to use any feature your phone offers, such as sensors and the camera. It's probably the most complete solution for mobile development out there, simply because with Oracle ADF Mobile you can write native, hybrid, or web applications for your smartphone and tablet. Do you want to take a quick look at what can be done with it? Check out this video! Now, to start with Oracle ADF Mobile, here are the first steps you will have to go through.
    1. Download Oracle JDeveloper - Go to this link and download the install file for your environment (Windows, Linux 32-bit, or Generic).
    2. Install JDeveloper (of course) - If you need help with this, look at the documentation (if you've downloaded 11gR2, click here).
    3. Download the Oracle ADF Mobile bundle - This is the download page for Oracle ADF Mobile. Accept the license as usual at the top and follow with the Download button. It will take you to another page, where you will see a table containing a download link. Click on it and it will start downloading a ZIP file.
    4. Start JDeveloper - It may self-update; restart the IDE if you are asked to. Go to Help > Check for Updates, click Next and make sure you are on the "Source" tab, select "Install From Local File", select the Oracle ADF Mobile ZIP you downloaded in step 3, and finish the process.
    Now you have JDeveloper with Oracle ADF Mobile successfully installed! There are two great tutorials to start coding with ADF Mobile; just choose your platform: Android Tutorial or iOS Tutorial. And have fun! :-)

    Read the article

  • IoT? Time for Enterprise Architecture

    - by OTN ArchBeat
    Of course you've been listening to the latest OTN ArchBeat Podcast on the challenges and opportunities in the Internet of Things. If so, you'll also be interested in ZDNet blogger Joe McKendrick's recent post, Will the 'Internet of Things' make CIOs' jobs harder?. In that post McKendrick offers this important bit of advice that will certainly have architects saying "I told you so": Enterprises need to develop architectural approaches to the management of data. Meaning the development of repeatable processes to source, ingest, transform and store information. For years, IT managers simply bought more hardware and addressed data with one-off integration projects. Now it's time for enterprise architecture. IoT is an important new phase in the evolution of enterprise IT. Challenging? You bet! But meeting any such challenge requires big, broad thinking and planning. In that context Enterprise Architecture has always been important, but as IoT gains traction and speed, enterprise architecture should be top of mind for all concerned.

    Read the article

  • JSP / Tomcat / Apache setup overview on Fedora Core

    - by Richard T
    Hi folks! For someone with so much Java experience, boy do I feel clueless - thanks in advance for your help in my grokking the present (Feb 2010) JSP environment. Here's what I am hoping to learn: Do I understand correctly that most people use Apache to "front-end" their Tomcat servers, such that Apache "talks" directly to web clients and "proxies" the Tomcat servers? Do I understand correctly that Apache isn't capable of serving JSP directly but requires a server (like Tomcat)? Is there an RPM package for Fedora Core so I don't have to build one myself? Or does Fedora Core's package installer do a good job on this one from source code? (Some do, some don't!) While I'm here asking questions: does Tomcat come with a working example that one can start hacking on as a way to get started quickly? If not, got a good suggestion? Thanks folks, RT
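    For reference, a minimal sketch of the "Apache front-ending Tomcat" arrangement described above, using mod_proxy over HTTP (the hostname, port, and path are placeholders rather than details from the question; mod_jk or mod_proxy_ajp are common alternatives):
        # httpd.conf fragment: reverse-proxy /app to a Tomcat instance on port 8080
        LoadModule proxy_module modules/mod_proxy.so
        LoadModule proxy_http_module modules/mod_proxy_http.so

        <VirtualHost *:80>
            ServerName www.example.com
            ProxyPass        /app http://localhost:8080/app
            ProxyPassReverse /app http://localhost:8080/app
        </VirtualHost>
    With something like this in place Apache talks to the web clients and serves static content, while JSP/servlet requests are forwarded to Tomcat, which is exactly the setup the question asks about.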

    Read the article

  • MightyMintyBoost Is a 3-in-1 Gadget Charger

    - by ETC
    If you’re looking for a versatile battery booster, this DIY 3-in-1 solar/USB/wall-current charger known as the MightyMintyBoost will top off your phone, MP3 player, and other gadgets with ease. Instructables user Honus didn’t just build the MightyMintyBoost to geek out and show off his electronics project skills (although it’s certainly a nifty little project to do so); he’s serious about solar power and the impact clean energy has: Apple has sold over 30 million iPod Touch/iPhone units - imagine charging all of them via solar power…. If every iPhone/iPod Touch sold was fully charged every day (averaging the battery capacity) via solar power instead of fossil fuel power we would save approximately 50.644 GWh of energy, roughly equivalent to 75,965,625 lbs. of CO2 in the atmosphere per year. Granted that’s a best case scenario (assuming you can get enough sunlight per day and approximately 1.5 lbs. CO2 produced per kWh used.) Of course, that doesn’t even figure in all the other iPods, cell phones, PDAs, microcontrollers (I use it to power my Arduino projects) and other USB devices that can be powered by this charger - one little solar cell charger may not seem like it can make a difference, but add all those millions of devices together and that’s a lot of energy! His MightyMintyBoost is a battery booster for devices that can charge via USB, and it accepts incoming current from the solar panel on top (or, on cloudy days, can be charged via a wall charger or the USB port on your computer). Hit up the link below to see his full build guide and create your own MightyMintyBoost. MightyMintyBoost [Instructables]
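    As a quick sanity check of the quoted figures (a back-of-the-envelope sketch; the per-device daily energy is inferred from the article's numbers, not stated in it):
        # Rough check of the quoted energy/CO2 figures
        devices = 30_000_000          # iPod Touch / iPhone units cited in the quote
        annual_energy_gwh = 50.644    # annual energy figure from the quote
        co2_lbs_per_kwh = 1.5         # CO2 intensity assumed in the quote

        annual_energy_kwh = annual_energy_gwh * 1_000_000
        print(annual_energy_kwh * co2_lbs_per_kwh)          # ~75,966,000 lbs CO2, matching the article
        print(annual_energy_kwh * 1000 / (devices * 365))   # ~4.6 Wh per device per day (inferred)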

    Read the article

  • Migrating BizTalk 2006 R2 to BizTalk 2010 XLANGs Issue

    - by SURESH GIRIRAJAN
    When we migrated some BizTalk apps from BizTalk 2006 R2 to BizTalk 2010, we ran into an issue when a .NET component was called inside an orchestration. In the .NET component we were trying to retrieve a promoted property, and we had checked in the BizTalk Group Hub to validate that it was promoted - no issues there. Only when we tried to access the data in the .NET component did we have a problem. We had just moved all the assemblies we had in BizTalk 2006 R2 over to BizTalk 2010, without recompiling anything in the BizTalk 2010 environment. Looking further, a couple of new namespaces added under Microsoft.XLANGs… in BizTalk 2010 compared to BizTalk 2006 R2 caused the issue. So all we did to fix the issue was recompile the project in the 2010 environment, and it worked fine. It looks like a backward-compatibility issue.
        public static void Load(XLANGMessage msg)
        {
            try
            {
                // get the process id from context.
                object ctxVal = msg.GetPropertyValue(typeof(ProcessID));
                …
            }
        }
    BizTalk 2010: error message in the event viewer:
        The service instance will remain suspended until administratively resumed or terminated. If resumed the instance will continue from its last persisted state and may re-throw the same unexpected exception.
        InstanceId: 441d73d3-2e84-49d2-b6bd-7218065b5e1d
        Shape name: Bulk Load
        ShapeId: bb959e56-9221-48be-a80f-24051196617d
        Exception thrown from: segment 1, progress 65
        Inner exception: A property cannot be associated with the type 'Tellago.Common.Schemas.ProcessId'.
        Exception type: InvalidPropertyTypeException
        Source: Microsoft.XLANGs.Engine
        Target Site: Microsoft.XLANGs.RuntimeTypes.MessagePropertyDefinition _getMessagePropertyDefinition(System.Type)
        The following is a stack trace that identifies the location where the exception occurred:
           at Microsoft.XLANGs.Core.XMessage._getMessagePropertyDefinition(Type propType)
           at Microsoft.XLANGs.Core.XMessage.GetContentProperty(Type propType)
           at Microsoft.XLANGs.Core.XMessage.GetPropertyValue(Type propType)
           at Microsoft.BizTalk.XLANGs.BTXEngine.BTXMessage.GetPropertyValue(Type propType)
           at Microsoft.XLANGs.Core.MessageWrapperForUserCode.GetPropertyValue(Type propType)
           at Tellago.Common.Components.Load(XLANGMessage msg)
           at Tellago.SuspensionProcess.segment1(StopConditions stopOn)
           at Microsoft.XLANGs.Core.SegmentScheduler.RunASegment(Segment s, StopConditions stopCond, Exception& exp)

    Read the article

  • I'm receiving an SSL error in various browsers, but I can't find non-SSL content

    - by Scott Vercuski
    I'm receiving an error with my SSL connection. Using Google Chrome I see the following error: "Your connection is encrypted with 128-bit encryption ... however this page includes other resources which are not secure." I've scoured the source code, the scripts, and the rendered code in the browser, but cannot find where an http:// call is made. I've also used Fiddler2 to examine the traffic and everything is coming across via HTTPS. Has anyone run into this issue before, and if so, how did you go about finding the culprit? The website is running ASP.NET MVC3 in C#. The page in question is a simple payment page. The only external call is the Google Analytics tracking code. The page appears to load correctly, and all images and scripts are in place.
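    One quick, hedged way to double-check the HTML actually served (the URL is a placeholder; this won't catch resources injected later by JavaScript, such as an analytics snippet building an http:// URL at runtime):
        # Fetch the served page and list any hard-coded http:// references
        curl -s https://www.example.com/payment | grep -o 'http://[^" ]*' | sort -u
    If nothing turns up there, the insecure resource is most likely being added by script at runtime, which makes the analytics code a reasonable place to look next.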

    Read the article

  • Fail to upgrade from 10.10 to 11.04

    - by Ana Solís
    I was using Natty for a while, but while I was updating to the new release there was a blackout; the upgrade wasn't able to finish, and Ubuntu failed to load after that. I thought: no worries, I have my files backed up and I still have the 10.10 CD I used to put Ubuntu on my computer in the first place. So I installed it again, with the plan of using the Update Manager to get myself onto the current release... Except I get this error:
        W:Failed to fetch http://extras.ubuntu.com/ubuntu/dists/natty/main/source/Sources.gz 404 Not Found ,
        W:Failed to fetch http://extras.ubuntu.com/ubuntu/dists/natty/main/binary-amd64/Packages.gz 404 Not Found ,
        E:Some index files failed to download, they have been ignored, or old ones used instead.
    My internet connection is just fine, seeing as I'm able to post this, but I don't know what else to do. I tried downloading Quantal on another computer and putting it on a DVD (since it won't fit on a CD...), and the stupid thing fails to load it; it skips it over and goes right back to Maverick... (Not a faulty disc - it installed Ubuntu just fine on a friend's computer...)
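    A common workaround for those 404s (a hedged sketch, assuming the offending extras.ubuntu.com lines live in /etc/apt/sources.list; they may instead sit under /etc/apt/sources.list.d/) is to disable that repository before retrying the upgrade:
        # Comment out any extras.ubuntu.com entries, refresh, then retry the upgrade
        sudo sed -i '/extras\.ubuntu\.com/s/^/# /' /etc/apt/sources.list
        sudo apt-get update
        sudo do-release-upgrade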

    Read the article

  • [Dear Recruiter] I developed in Mo'Fusion

    - by refuctored
    Foreword: Sometimes I really feel like technology recruiters have no experience with, or knowledge of, the field they are recruiting for. A warning to those companies hiring technical recruiters - ensure that the technical recruiters you hire to fill a position are actually technical. Here's proof below, where I make up completely ridiculous technologies but still get interest from the recruiter for an interview. Letter to me: Hello - Your name came up as a possible match for a long term contract Cold Fusion Developer role I have in Bothell, WA. This role requires you to be onsite in Bothell, WA. This is a tough role to fill so I was hoping you might have someone you can recommend? Unfortunately no telecommute. Thank you! Sincerly, Mindy Recruiter My response: Mindy -- Wow I'm super-excited that you took the time to contact me about this position! Let me tell you, you won't be disappointed with my skill set! Firstly, I've been developing in ColdFusion since 1993 before it was owned by Adobe and it was operating under code name, "Hot-Jack". Recently I started developing under the Domain-View-Driven-Domain-Model (DVDDM), integrating client-side CF on Moobuntu. Not only do I have a boat load of ColdFusion EXP, I also have a ton of experience in the open source community's lesser known derivative of CF, Mo'Fusion (MF). I've also invested thousands of hours of my time learning esoteric programming languages. Look forward to working with you! George And her response: Hi George – just left you a message. Give me a call at your convenience. The role does require someone to be onsite here.. are you able to relocate yourself? Mindy [Sigh]

    Read the article

  • How do I convince my team that a requirements specification is unnecessary if we adopt user-stories?

    - by Nupul
    We are planning to adopt user stories to capture stakeholder 'intent' in a lightweight fashion rather than a heavy SRS (software requirements specification). However, it seems that though they understand the value of stories, there is still a desire to 'convert' the stories into SRS-like language with all the attributes, priorities, inputs, outputs, source, destination, etc. User stories 'eliminate' the need for a formal SRS-like artifact to begin with, so what's the point in having an SRS? How should I convince my team (who are all very qualified CS folks by the way - both by education and practice) that the SRS would be 'eliminated' if we adopted user stories for capturing the functional requirements of the system? (NFRs etc. can be captured too, but that's not the intent of the question.) So here's my 'work-flow' argument: capture initial requirements as user stories and later elaborate them into use cases (which are required to be documented at a low level, i.e. describing interactions with the UI prototypes/mockups, and are a deliverable post deployment) - thus going from user stories to use cases rather than user stories to SRS to use cases. How are you all currently capturing user stories at your workplace (if at all), and how do you suggest I 'make a case' for the absence of an SRS in the presence of user stories?

    Read the article

  • internal compiler error

    - by hyperboreean
    I am getting this message: "internal compiler error: Segmentation fault. Please submit a full bug report, with preprocessed source if appropriate. See <file:///usr/share/doc/gcc-4.4/README.Bugs> for instructions." for every compilation that takes longer (i.e. the Linux kernel, KDE sources, etc.). I've tried other OSes (at that moment I was on Fedora 12, now on Debian; there was a SUSE also) and it didn't help. I've tried replacing my hard disk, since it needed an upgrade either way - that didn't work either. I assumed that it's the RAM's fault - I tested the modules with memtest and it says they are fine. Does anyone know what else I can do in order to figure out where the problem is?
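    A few hedged next steps that go beyond memtest (these are generic Linux diagnostics; package names such as lm-sensors and memtester are assumptions about what the distro provides):
        # Kernel-reported hardware problems (machine-check exceptions, thermal events)
        dmesg | grep -iE 'mce|machine check|thermal|error'
        # Watch CPU temperature while a big compile runs (lm-sensors package)
        sensors
        # Stress-test RAM from userspace; a clean memtest86 pass does not always
        # rule out RAM, CPU cache, or power-related faults (memtester package)
        sudo memtester 1024M 3
    Random gcc segfaults that only appear on long builds are a classic symptom of marginal RAM, an overheating CPU, or power problems, so checking temperatures and stressing the hardware under load is worthwhile even though memtest passed.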

    Read the article

  • Windows 2008 File Share

    - by user36540
    Hi, I have 3 Windows 2008 Standard servers in my system, with no domain controller. Two of the servers are running an NLB cluster, and the third server is a file server that the web servers connect to. I want to store my source code on the file server and point the IIS config to the network file share. The web sites also need access to a file share on the file server. I was able to share the network drive and access it while logged into either of the web servers, but my web apps are unable to access the file share - I assume due to permissions. Does anybody know the correct way to do this? Thanks, Chris

    Read the article

  • Windows HTTP proxy client to pass service requests to VPN

    - by Chris
    I've got access to a network via a CheckPoint VPN (Windows client). The problem is, I have a Linux box that needs to talk to its web services, and the target web servers are inside the VPN. So far we have been unable to connect the Linux box to the VPN (and I'm not trying to solve that problem at the moment). I'm wondering if, temporarily, I can set up a proxy server on a Windows (XP) box to shuttle HTTP requests back and forth. If so, what would be a good application to do this? (Hopefully free/open-source.) TIA

    Read the article

  • How do I structure code and builds for continuous delivery of multiple applications in a small team?

    - by kingdango
    Background: 3-5 developers supporting (and building new) internal applications for a non-software company. We use TFS, although I don't think that matters much for my question. I want to be able to develop a deployment pipeline and adopt continuous integration / deployment techniques. Here's what our source tree looks like right now; we use a single TFS Team Project:
        $/MAIN/src/
        $/MAIN/src/ApplicationA/VSSOlution.sln
        $/MAIN/src/ApplicationA/ApplicationAProject1.csproj
        $/MAIN/src/ApplicationA/ApplicationAProject2.csproj
        $/MAIN/src/ApplicationB/...
        $/MAIN/src/ApplicationC
        $/MAIN/src/SharedInfrastructureA
        $/MAIN/src/SharedInfrastructureB
    My goal (a pretty typical promotion pipeline):
    1. When a code change is made to a given application I want to be able to build that application and auto-deploy that change to a DEV server. I may also need to build dependencies on Shared Infrastructure components, and I often have some database scripts or changes as well.
    2. If developer testing passes I want to have a manually triggered but automated deploy of that build on a STAGING server where end users will review new functionality.
    3. Once it's approved by end users I want a manually triggered auto-deploy to production.
    Question: How can I best adopt continuous deployment techniques in a multi-application environment? A lot of the advice I see is more single-application-specific; how is that best applied to multiple applications? For step 1, do I simply set up a separate Team Build for each application? What's the best approach to accomplishing steps 2 and 3, promoting the latest build to new environments? I've seen this work well with web apps, but what about database changes?

    Read the article

  • How can I find a computer on my network that is doing mass mailings?

    - by Alex Ciarlill
    I was notified by my ISP that one of my machines is sending out spam. This happened about 3 months ago on a Windows machine running Cygwin that was hacked due to an SSH vulnerability; the hackers set up IIS and SMTP. I cleared out that machine and all the services are disabled, so I think it is okay. I am wondering if there is any other way to identify which machine the spam could be coming from. The ISP has NO useful information such as source port, destination port, destination IP... nothing. I am running DD-WRT on my router, plus a Windows 7 PC and a Windows XP PC.
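    Since the router runs DD-WRT, one hedged approach is to log (or block) outbound SMTP at the router itself, so the internal source IP shows up regardless of which PC is infected. The rules below are a sketch for DD-WRT's firewall/command shell (the exact menu, typically Administration > Commands, is an assumption); the hits land in the kernel log, viewable with dmesg or a configured syslog target:
        # Log every new outbound SMTP connection from the LAN with its source IP
        iptables -I FORWARD -p tcp --dport 25 -m state --state NEW -j LOG --log-prefix "SMTP-OUT: "
        # Optionally block outbound SMTP entirely while investigating
        # iptables -I FORWARD -p tcp --dport 25 -m state --state NEW -j REJECT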

    Read the article

  • Has anyone else read "Programming video games for the Evil Genius"

    - by Martin
    I bought this book called "Programming Video Games for the Evil Genius" by Ian Cinnamon. If there is anyone who has read or is familiar with this book, I am wondering if they think it is worth reading. I am interested in making video games. I have already taken intro courses in C++, Java and Python and got through okay. I've been going through this book for about a month now (slowly). All I have to do is type the code exactly as in the book, BUT a lot of the code is not clearly explained. I do some research online, but I usually still have some trouble answering my questions. Then I found Stack Overflow; it's been a ton of help. Right now I am trying to make a racing game right out of this book, and I got to a point where the author left a bunch of errors in his code. One of the members of this website fixed it up for me, but added some stuff that I'm having trouble understanding. I spend more time trying to figure out the author's errors and fix them, or getting someone to help me fix them, than I actually do learning code. I REALLY want to learn how to do this and I am ready and willing to put in the time, but I'm not sure if my time would be better spent learning from a different source. Are there any veterans out there who are familiar with this book and think it's worth it or not? Should I try to move on to another book? Any advice for a fresh start for someone who wants to learn some video game programming?

    Read the article

  • Help: Best way to do a TV-out in 7300GT Nvidia video card?

    - by Martin Ongtangco
    I'm planning to recycle my old PC and build a media center using open-source (C#) software called MediaPortal. My old PC has a GeForce 7300GT with a built-in TV-out plug. When I tested it last night, it wouldn't detect my JVC TV (CRT) using the current drivers. I even purchased a new copper-based TV-out-to-RCA cable, searched all 3 AV channels, used the S-Video port (the card has 3 output ports: 2 DVI and 1 S-Video) with an S-Video-to-RCA-out cable, and swapped between PAL and NTSC. So what I did was download the first version of an Nvidia driver for 7-series cards, but even with the old driver console it couldn't detect the TV. I'm running out of viable ideas. Has anyone here had the same problem and fixed it? Any suggestion is appreciated. Thank you!

    Read the article

  • What is an elegant way to install non-repository software in 12.04?

    - by Tomas
    Perhaps I missed something when Canonical removed the "Create launcher" option from the right-click menu, because I've really been missing that little guy. For me, it was the preferred way to install software that comes not in a .deb but in a tar.gz, for example. (Note: in that tar.gz I have a folder with the compiled files; I'm NOT compiling from source.) I just downloaded the new Eclipse IDE and extracted the tar.gz to my /usr folder. Now I'd like to add it to my desktop and Dash so it can be started easily. Intuitively I would right-click the desktop and create a launcher, and after that copy the .desktop file to /usr/share/applications. However, creating a launcher is not possible. My question: how would you install an already-compiled tar.gz that you have downloaded from the internet? Below are a few things I've seen, but these are all more time-consuming than the right-click option. If you have any better ideas, please let me know. Thanks!
    Manual copy and create a .desktop file by hand: Simply extract the archive to /usr. Create a new text file, adding something along the lines of the block below:
        [Desktop Entry]
        Version=1.0
        Type=Application
        Terminal=false
        Exec=/usr/local/eclipse42/eclipse
        Name=Eclipse 4.2
        Icon=/home/tomas/icons/eclipse.svg
    Rename this file to eclipse42.desktop and make it executable, then copy it to /usr/share/applications.
    Manual copy and create a .desktop file via the GUI: fossfreedom has elaborated on this in "How can I create launchers on my desktop?" Basically it involves the command:
        gnome-desktop-item-edit --create-new ~/Desktop
    After creating the launcher, copy it to /usr/share/applications.

    Read the article

  • Upgrade 10.04LTS to 10.10 problem

    - by Gopal
    Checking for a new ubuntu release
    Done Upgrade tool signature
    Done Upgrade tools
    Done downloading
    extracting 'maverick.tar.gz'
    authenticate 'maverick.tar.gz' against 'maverick.tar.gz.gpg'
    tar: Removing leading `/' from member names
    Reading cache
    Checking package manager
    Reading package lists... Done
    Building dependency tree
    Reading state information... Done
    Building data structures... Done
    Reading package lists... Done
    Building dependency tree
    Reading state information... Done
    Building data structures... Done
    Updating repository information
    WARNING: Failed to read mirror file
    A fatal error occurred
    Please report this as a bug and include the files /var/log/dist-upgrade/main.log and /var/log/dist-upgrade/apt.log in your report. The upgrade has aborted. Your original sources.list was saved in /etc/apt/sources.list.distUpgrade.
    Traceback (most recent call last):
      File "/tmp/tmpe_xVWd/maverick", line 7, in <module>
        sys.exit(main())
      File "/tmp/tmpe_xVWd/DistUpgradeMain.py", line 158, in main
        if app.run():
      File "/tmp/tmpe_xVWd/DistUpgradeController.py", line 1616, in run
        return self.fullUpgrade()
      File "/tmp/tmpe_xVWd/DistUpgradeController.py", line 1534, in fullUpgrade
        if not self.updateSourcesList():
      File "/tmp/tmpe_xVWd/DistUpgradeController.py", line 664, in updateSourcesList
        if not self.rewriteSourcesList(mirror_check=True):
      File "/tmp/tmpe_xVWd/DistUpgradeController.py", line 486, in rewriteSourcesList
        distro.get_sources(self.sources)
      File "/tmp/tmpe_xVWd/distro.py", line 103, in get_sources
        source.template.official == True and
    AttributeError: 'Template' object has no attribute 'official'
    This is what I got when I tried to upgrade the desktop edition with "sudo do-release-upgrade". One more piece of info: I have KDE installed.

    Read the article

  • iptables logging to diferent file via syslog-ng

    - by rahrahruby
    I have the following configuration in my iptables and syslog-ng files:
    IPTABLES
        -A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
        -A INPUT -p tcp -m tcp --dport 80 -j ACCEPT
        -A INPUT -p tcp -m tcp --dport 222 -j ACCEPT
        -A INPUT -p tcp -m tcp --dport 3306 -j ACCEPT
        -A INPUT -j DROP
        -A INPUT -m limit --limit 5/min -j LOG --log-prefix "iptables denied: " --log-level 7
    SYSLOG-NG
        destination d_iptables { file("/var/log/iptables/iptables.log"); };
        filter f_iptables { facility(kern) and match("IN=" value("MESSAGE")) and match("OUT=" value("MESSAGE")); };
        filter f_messages { level(info,notice,warn) and not facility(auth,authpriv,cron,daemon,mail,news) and not filter(f_iptables); };
        log { source(s_src); filter(f_iptables); destination(d_iptables); };
    I restart syslog-ng and the log is not written.

    Read the article

  • Rawr Code Clone Analysis – Part 0

    - by Dylan Smith
    Code Clone Analysis is a cool new feature in Visual Studio 11 (vNext). It analyzes all the code in your solution and attempts to identify blocks of code that are similar, and thus candidates for refactoring to eliminate the duplication. The power lies in the fact that the blocks of code don't need to be identical for Code Clone to identify them; it will report Exact, Strong, Medium and Weak matches indicating how similar the blocks of code in question are. People that know me know that I'm anal enthusiastic about both writing clean code and taking old crappy code and making it suck less. So the possibilities for this feature have me pretty excited, if it works well - and that's a big if that I'm hoping to explore over the next few blog posts. I'm going to grab the Rawr source code from CodePlex (a World of Warcraft gear calculator engine), run Code Clone Analysis against it, then go through the results one by one and refactor where appropriate, blogging along the way. My goals with this blog series are twofold: evaluate and demonstrate Code Clone Analysis, and provide some concrete examples of refactoring code to eliminate duplication and improve the code base. Here are the initial results. Code Clone Analysis has found:
    129 Exact Matches
    201 Strong Matches
    300 Medium Matches
    193 Weak Matches
    Also indicated is a total of 45,181 potentially duplicated lines of code that could be eliminated through refactoring. Considering the entire solution only has 109,763 lines of code, if true, the duplicated-lines figure is pretty significant. In the next post we'll start examining some of the individual results and determine if they really do indicate a potential refactoring.

    Read the article

  • Version control for game development - issues and solutions?

    - by Cyclops
    There are a lot of version control systems available, including open-source ones such as Subversion, Git, and Mercurial, plus commercial ones such as Perforce. How well do they support the process of game development? What are the issues in using a VCS with regard to non-text (binary) files, large projects, and so on? What are the solutions to these problems, if any? For organization of answers, let's try a per-package basis: update each package/answer with your results. Also, please list some brief details in your answer about whether your VCS is free or commercial, distributed versus centralized, etc. Update: Found a nice article comparing two of the VCSes below - apparently, Git is MacGyver and Mercurial is Bond. Well, I'm glad that's settled... And the author has a nice quote at the end: "It's OK to proselytize to those who have not switched to a distributed VCS yet, but trying to convert a Git user to Mercurial (or vice-versa) is a waste of everyone's time and energy. Especially since Git and Mercurial's real enemy is Subversion." Dang, it's a code-eat-code world out there in FOSS-land...

    Read the article

  • 10 Innovations in PeopleSoft 9.2 - #2 Lower TCO With The Peoplesoft Update Manager

    - by John Webb
    With the new PeopleSoft Update Manager in PeopleSoft 9.2, the way you manage updates to your PeopleSoft systems puts you in control of all changes, on your schedule. You can selectively apply patches with reduced time, effort, and cost. Bundles and Maintenance Packs are no longer used; instead, a tailored custom package is automatically generated based on the parameters you select from the latest PeopleSoft source image. You have access to all updates from Oracle on a cumulative basis and can select and search for specific updates, such as new features, legal and regulatory changes, or a patch related to a specific issue, process, or object. Any prerequisites are automatically identified. The process of generating a change package is enabled through a new wizard with easy-to-follow steps and options. As changes are introduced to your test environment, the PeopleSoft Test Framework provides a closed-loop process to run regression test scripts against your changes. For a quick overview of the PeopleSoft Update Manager, check out the video feature overview here: PeopleSoft Update Manager Video Feature Overview

    Read the article

  • How Visual Studio 2010 and Team Foundation Server enable Compliance

    - by Martin Hinshelwood
    One of the things that makes Team Foundation Server (TFS) the most powerful Application Lifecycle Management (ALM) platform is the traceability it provides to those that use it. This traceability is crucial in enabling many companies to adhere to the Compliance regulations to which they are bound (e.g. 21 CFR Part 11 or Sarbanes-Oxley). From something as simple as relating Tasks to Check-ins, or being able to see the top 10 files in your codebase that are causing the most Bugs, to identifying which Bugs and Requirements are in which Release - all that information, and more, is available in TFS. Although all of this traceability is available within TFS, you do need to understand that it is not for free. Well… I say that, but if you are using TFS properly you will have this information with no additional work except for firing up the reporting. Using Visual Studio ALM and Team Foundation Server you can relate every line of code changed all the way up to Requirements and back down through Test Cases to the Test Results.
    Figure: The only thing missing is Build
    In order to build the relationship model below we need to examine how each of the relationships gets there. Each member of your team, from programmer to tester and Business Analyst to Business, has a role to play in knitting this together.
    Figure: The relationships required to make this work can get a little confusing
    If Build is added to this to relate Work Items to Builds, then with knowledge of which builds are in which environments you can easily identify what is contained within a Release.
    Figure: How are things progressing
    Along with the ability to produce the progress and trend reports, the traceability that is built into TFS can be used to fulfil most audit requirements out of the box, and augmented to fulfil the rest. In order to understand the relationships, let's look at each of the important Artifacts and how they are associated with each other…
    Requirements – The root of all knowledge
    Requirements are the thing that the business cares about delivering. These could be derived as User Stories or Business Requirements Documents (BRDs), but they should be what the Business asks for. Requirements can be related to many of the Artifacts in TFS, so let's look at the model:
    Figure: If the centre of the world was a requirement
    We can track which releases Requirements were scheduled in, but this can change over time as more details come to light.
    Figure: Who edited the Requirement and when
    There is also the ability to query Work Items based on the history of changes that were made to them. This is particularly important with Requirements. It might not be enough to say which Requirements were completed in a given release, but also to know which Requirements were ever assigned to a particular release.
    Figure: Some magic required, but result still achieved
    As an augmentation to this it is also possible to run a query that shows results from the past, just as if we had a time machine. You can take any Query in the system and add an "asof" clause at the end to query historical data in the operational store for TFS:
        select <fields> from WorkItems [where <condition>] [order by <fields>] [asof <date>]
    Figure: Work Item Query Language (WIQL) format
    In order to achieve this you do need to save the query as a *.wiql file to your local computer and edit it in Notepad, but once imported into TFS you can run it any time you want.
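    As a concrete illustration (a hedged sketch: the field names below are standard TFS system fields and the date is just an example; neither is taken from the article), a historical query over Requirements might look like this:
        select [System.Id], [System.Title], [System.State]
        from WorkItems
        where [System.WorkItemType] = 'Requirement'
        order by [System.Id]
        asof '2010-05-01'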
    Figure: Saving Queries locally can be useful
    All of these Audit features are available throughout the Work Item Tracking (WIT) system within TFS.
    Tasks – Where the real work gets done
    Tasks are the work horse of the development team, but they are only as useful as Excel if you do not relate them properly to other Artifacts.
    Figure: The Task Work Item Type has its own relationships
    Requirements should be broken down into Tasks that the development team works from to build what is required by the business. This may be done by a small dedicated group or by everyone that will be working on the software team, but however it happens, all of the Tasks created should be a Child of a Requirement Work Item Type.
    Figure: Tasks are related to the Requirement
    Tasks should be used to track the day-to-day activities of the team working to complete the software, and as such they should be kept simple and short, lest developers think they are more trouble than they are worth.
    Figure: Task Work Item Type has a narrower purpose
    Although the Task Work Item Type describes the work that will be done, the actual development work involves making changes to files that are under Source Control. These changes are bundled together in a single atomic unit called a Changeset, which is committed to TFS in a single operation. During this operation developers can associate Work Items with the Changeset.
    Figure: Tasks are associated with Changesets
    Changesets – Who wrote this crap
    Changesets themselves are just an inventory of the changes that were made to a number of files to complete a Task.
    Figure: Changesets are linked by Tasks and Builds
    Figure: Changesets tell us what happened to the files in Version Control
    Although comments can be changed after the fact, the inventory and Work Item associations are permanent, which allows us to Audit all the way down to the individual change level.
    Figure: On Check-in you can resolve a Task which automatically associates it
    Because of this we can view the history on any file within the system and see how many changes have been made and which Changesets they belong to.
    Figure: Changes are tracked at the File level
    What would be even more powerful would be if we could view these changes superimposed over the top of the lines of code. Some people call this a blame tool, because it is commonly used to find out which of the developers introduced a bug, but it can also be used as another method of Auditing changes to the system.
    Figure: Annotate shows the lines
    The Annotate functionality allows us to visualise the relationship between the individual lines of code and the Changesets. In addition to this you can create a Label and apply it to a version of your version control. The problem with Labels is that they can be changed after they have been created, with no traceability. This makes them practically useless for any sort of compliance audit. So what do you use?
    Branches – And why we need them
    Branches are a really powerful tool for development and release management, but they are most important for audits.
    Figure: One way to Audit releases
    The R1.0 branch can be created from the Label that the Build creates on the R1 line when a Release build was created. It can be created as soon as the Build has been signed off for release. However, it is still possible that someone changed the Label between this time and its creation. Another, better method can be to explicitly link the Build output to the Build.
    Builds – Let's tie some more of this together
    Builds are the glue that helps us enable the next level of traceability by tying everything together.
    Figure: The dashed pieces are not out of the box but can be enabled
    When the Build is called and starts, it looks at what it has been asked to build and determines what code it is going to get and build.
    Figure: The folder identifies what changes are included in the build
    The Build sets a Label on the Source with the same name as the Build, but the Build itself also includes the latest Changeset ID that it will be building. At the end of the Build, the Build Agent identifies the new Changesets it is building by looking at the Check-ins that have occurred since the last Build.
    Figure: What changes have been made since the last successful Build
    It will then use that information to identify the Work Items that are associated with all of those Changesets. The Changesets are associated with the Build, and the "Integrated In" field of those Work Items is changed.
    Figure: Find all of the Work Items to associate with
    The "Integrated In" field of all of the Work Items identified by the Build Agent as being integrated into the completed Build is updated to reflect the Build number that successfully integrated that change.
    Figure: Now we know which Work Items were completed in a build
    Now we can link a single line of code changed all the way back through the Task that initiated the action to the Requirement that started the whole thing, and back down to the Build that contains the finished Requirement. But how do we know whether that Requirement has been fully tested, or even meets the original Requirements?
    Test Cases – How we know we are done
    The only way we can know whether a Requirement has been completed to the required specification is to Test that Requirement. In TFS there is a Work Item type called a Test Case. Test Cases enable two scenarios. The first scenario is the ability to track and validate Acceptance Criteria in the form of a Test Case. If you agree with the Business on a set of goals that must be met for a Requirement to be accepted by them, it becomes difficult for them to reject a Requirement when it passes all of the tests, and it also provides a level of traceability and validation for audit that a feature has been built and tested to order.
    Figure: You can have many Acceptance Criteria for a single Requirement
    It is crucial for this to work that someone from the Business signs off on the Test Case moving from the "Design" to "Ready" states. The second scenario is the ability to associate an MS Test test with the Test Case, thereby tracking the automated test. This is useful in the circumstance where you want to track a test and the test results of a Unit Test designed to check the existence of, and then guard against the re-occurrence of, a Bug.
    Figure: Associating a Test Case with an automated Test
    Although it is possible, it may not make sense to track the execution of every Unit Test in your system; however, there are many Integration and Regression tests that may be automated which it would make sense to track in this way.
    Bug – Let's not have regressions
    In order to know whether a Bug in the application has been fixed, and to make sure that it does not reoccur, it needs to be tracked.
    Figure: Bugs are the centre of their own world
    If the fix to a Bug is big enough to require that it is broken down into Tasks, then it is probably a Requirement. You can associate a check-in with a Bug and have it tracked against a Build.
    You would also have one or more Test Cases to prove the fix for the Bug.
    Figure: Bugs have many associations
    This allows you to track Bugs / Defects in your system effectively and report on them.
    Change Request – I am not a feature
    In the CMMI process template, Change Requests can also be easily tracked through the system. In some cases it can be very important to track Change Requests separately, as an Auditor may want to know what was changed and who authorised it. Again, and similar to Bugs, if the Change Request is big enough that it would need to be broken down into Tasks, it is in reality a new feature and should be tracked as a Requirement.
    Figure: Make sure your Change Requests only Affect Requirements and do not rewrite them
    Conclusion
    Visual Studio 2010 and Team Foundation Server together provide an exceptional Application Lifecycle Management platform that can help your team comply with even the harshest of Compliance requirements while still enabling them to be Agile. Most Audits are heavy on required documentation, but most of that information is captured for you as long as you do it right. You don't even need every team member to understand it all, as each of the Artifacts is relevant to a different type of team member:
    Business Analysts manage Requirements and Change Requests
    Programmers manage Tasks and check in against Change Requests and Bugs
    Testers manage Bugs and Test Cases
    Build Masters manage Builds
    Although there is some crossover, there are still roles or "hats" that are worn. Do you think this is all achievable? Have I missed anything that you think should be there?

    Read the article

  • How do I download photos tagged of me from Facebook?

    - by Keith
    I want to be able to download (and back up) photos tagged of me in Facebook. I'm specifically not interested in my own photo albums - I uploaded them and therefore have them already in better quality than FB. What I want are the photos other have uploaded that have me in them. I have a couple of hundred of these now, and don't much fancy three hours of right-click save-as... There seem to be a couple of utilities that pop up with a quick search, but (call me paranoid) I'm wary about giving some random freeware app my login and password. Social safe looked promising, but as it doesn't support this feature at the moment it's kinda pointless. Can anyone recommend one that they've actually used? I'd consider an open source one - I'm a programmer and don't mind digging through to check that it doesn't do anything nasty.

    Read the article

  • Jung Meets the NetBeans Platform

    - by Geertjan
    Here's a small Jung diagram in a NetBeans Platform application. And the code, copied directly from the Jung 2.0 Tutorial:
        public final class JungTopComponent extends TopComponent {

            public JungTopComponent() {
                initComponents();
                setName(Bundle.CTL_JungTopComponent());
                setToolTipText(Bundle.HINT_JungTopComponent());
                setLayout(new BorderLayout());
                Graph sgv = getGraph();
                Layout<Integer, String> layout = new CircleLayout(sgv);
                layout.setSize(new Dimension(300, 300));
                BasicVisualizationServer<Integer, String> vv = new BasicVisualizationServer<Integer, String>(layout);
                vv.setPreferredSize(new Dimension(350, 350));
                add(vv, BorderLayout.CENTER);
            }

            public Graph getGraph() {
                Graph<Integer, String> g = new SparseMultigraph<Integer, String>();
                g.addVertex((Integer) 1);
                g.addVertex((Integer) 2);
                g.addVertex((Integer) 3);
                g.addEdge("Edge-A", 1, 2);
                g.addEdge("Edge-B", 2, 3);
                Graph<Integer, String> g2 = new SparseMultigraph<Integer, String>();
                g2.addVertex((Integer) 1);
                g2.addVertex((Integer) 2);
                g2.addVertex((Integer) 3);
                g2.addEdge("Edge-A", 1, 3);
                g2.addEdge("Edge-B", 2, 3, EdgeType.DIRECTED);
                g2.addEdge("Edge-C", 3, 2, EdgeType.DIRECTED);
                g2.addEdge("Edge-P", 2, 3);
                return g;
            }
        }
    And here's what someone who attended a NetBeans Platform training course in Poland has done with Jung and the NetBeans Platform. The source code for the above is on Git: git://gitorious.org/j2t/j2t.git

    Read the article
