Search Results

Search found 19211 results on 769 pages for 'ui automated testing'.

Page 370/769

  • Today's Links (6/22/2011)

    - by Bob Rhubart
    - Presentations from the 4th International SOA Symposium + 3rd International Cloud Symposium: Presentations from Thomas Erl, Anne Thomas Manes, Glauco Castro, Dr. Manas Deb, Juergen Kress, Paulo Mota, and many others.
    - Experiencing the New Social Enterprise | Kellsey Ruppell: Ruppell shares "some key points and takeaways from some of the keynotes yesterday at the Enterprise 2.0 Conference."
    - Search-and-Rescue Technology Inspired by the Titanic | CIO.gov: A look at the technology behind the US Coast Guard's Automated Mutual Assistance Vessel Rescue system.
    - "He who does not understand history…" | The Open Group Blog: "It's down to us (IT folks and Enterprise Architects) to learn from history, to use methodologies intelligently, find ways to minimize the risk and get business buy-in."
    - Observations in Migrating from JavaFX Script to JavaFX 2.0 | Jim Connors: Connors' article "reflects on some of the observations encountered while porting source code over from JavaFX Script to the new JavaFX API paradigm."
    - FY12 Partner Kickoff – Are you Ready? | Judson Althoff Blog: What does Oracle have up its sleeve for FY12? Oracle executives reveal all in a live interactive event, June 28/29.
    - Webcast: Walking the Talk: Oracle's Use of Oracle VM for IaaS. Event Date: 06/28/2011 9:00am PT / Noon ET. Speakers: Don Nalezyty (Dir. Enterprise Architecture, Oracle Global IT) and Adam Hawley (Senior Director, Virtualization, Product Management, Oracle).

    Read the article

  • The effects of Agile Programming can alter the five desirable properties of modeling tools and techniques

    The effects of Agile Programming can alter the five desirable properties of modeling tools and techniques as documented by Pfleeger. The agile methodology promotes human understanding and communication through short, iterative software development life cycles that force stakeholders to review the project and adjust it for any requirement changes. Because the project and its requirements are evaluated consistently, processes are continually refined, upgraded, and compared against alternatives to ensure the best design is delivered to the client. Due to the short, repetitive development cycles, more time is devoted to process management, because requirements and designs can change constantly; this requires additional forecasting, monitoring, and planning for each iteration. Because things can change so rapidly, automated guidance for performing processes must be updated for each iteration, since the environment and the available reusable processes can change. In addition, the original guidance and suggestions for the project need to be updated to account for these changes as well. In essence, automation of process execution is supported by the agile methodology because during every iteration all processes must be tested and evaluated to ensure process integrity and compliance with the customer's requirements. I do not think the agile approach diminishes modeling; in fact, I think it increases modeling, because before the start of every development cycle the models must be checked for accuracy against the changed requirements. So the reduced time spent initially designing the models is in fact regained as the project completes each iteration.

    Read the article

  • Strange spam posts not making sense

    - by Paaland
    I'm running a web site with a forum where one small part is open for posting from unregistered users. The site uses a captcha, but some spam posts still get through every day. Here is the thing: all of the messages follow the same pattern, but they also all come from different IPs. That makes me think this is some sort of automated, scripted "attack" from a botnet of some kind. The strange thing is that all the messages start with six random characters and contain a couple of links. The words have no meaning and the domains in the links do not even exist. Why would anyone spend time and resources spreading these things? Below you can see two of these messages:

        A5Zfs6 exrzvrbspntz, [url=http://nktqoqllnuab.com/]nktqoqllnuab[/url], [link=http://wtrenldadvsy.com/]wtrenldadvsy[/link], [http://rnlrqfgdvdot.com/]

        O2oLpL nqeffxhryfdk, [url=http://jutyurbpfxow.com/]jutyurbpfxow[/url], [link=http://jpcdtmdalpow.com/]jpcdtmdalpow[/link], [http://qopqwqxwjdjx.com/]

    Since all the messages come from different IPs I can't see that blocking those will help much. For now I'm considering just dropping all messages that follow this pattern, since it's quite easy to match with a regexp. Has anyone else seen these kinds of messages, or does anyone know the point of posting them?
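    A rough sketch of such a pattern match, based only on the two samples above (so the expression is an assumption and may need tuning), could be as simple as a grep over incoming submissions:

        # Flag posts that look like the spam samples: a short alphanumeric token,
        # a space, a lowercase nonsense word, a comma, then [url=...] or [link=...].
        # Pattern is inferred from the two examples only; adjust as needed.
        grep -E '^[A-Za-z0-9]{5,8} [a-z]+, *\[(url|link)=https?://' incoming_posts.txt

    A submission handler could run each unregistered post through a check like this and silently drop anything that matches.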

    Read the article

  • Setup a Autoreply Only Account

    - by dabrain
    For some very good reason you might like to set up an 'autoreply'-only account, without storing the incoming mail in a mailbox. If not already done, create an account via the Delegated Admin GUI or the commadmin command-line tool. Example:

        /opt/sun/comms/da/bin/commadmin user create -D admin -d vmdomain.tld -w enigma -F Mike -l mparis -L Paris -W tester -E [email protected] -S mail -H mars.vmdomain.tld

    Set mailDeliveryOption to autoreply mode only, so no email will be stored in the user mailbox; skip this step if you want incoming emails stored in the mailbox.

        ldapmodify -D "cn=Directory Manager" -w enigma -f /tmp/modfile

        [/tmp/modfile]
        dn: uid=mparis,ou=People,o=vmdomain.tld,o=red
        changetype: modify
        replace: mailDeliveryOption
        mailDeliveryOption: autoreply

    Set mailSieveRuleSource with the autoreply text and a 'do-not-reply' From address. The "Thank you ..." part becomes the subject. The next string in quotes is the body of the message. The ":hours 0" denotes that we want a reply sent for every message. Finally, the \n is used for the wanted newlines in the body.

        ldapmodify -D "cn=Directory Manager" -w enigma -f /tmp/addfile

        [/tmp/addfile]
        dn: uid=mparis,ou=People,o=vmdomain.tld,o=red
        changetype: modify
        add: mailSieveRuleSource
        mailSieveRuleSource: require "vacation"; vacation :hours 0 :reply :from "[email protected]" :subject "Thank you for contacting webpost" "Your Mail is being reviewed.\nTo access contact information please visit: http://www.domain.com \nPlease do not reply to this e-mail as it is an automated response on your mail being accessed.\n\nPublic Response Unit.\n"

    Read the article

  • Branching and CI Builds with Agile

    - by Bob Horn
    We follow many agile processes, including automated tests, continuous integration, sprint reviews, etc... We're currently having a debate about how often we should branch release builds. We've been doing two-week sprints and trying to deploy to production at the end of each sprint. Some of us think we should be branching every sprint. Some of us think that's overkill. If a project encompasses three Visual Studio solutions, and we branch every sprint, then that's three branches, and three CI builds to create every two weeks. If we do this for six months, we'll end up with 36 branches and 36 CI builds. There is overhead involved in that. For those of us that think that branching every sprint is overkill, we don't have a very good alternative. On my last project, we deployed some solutions from the Main trunk. Yeah, that's not good, but it saved on some of the overhead. What's the right way to manage branching/releasing and CI builds, using agile, when we have such short (two-week) sprint cycles?

    Read the article

  • From TFS to Git

    - by Saeed Neamati
    I'm a .NET developer and I've used TFS (Team Foundation Server) as my source control software many times. Good features of TFS are:

    - Good integration with Visual Studio (so I do almost everything visually; no console commands)
    - Easy check-out, check-in process
    - Easy merging and conflict resolution
    - Easy automated builds
    - Branching

    Now, I want to use Git as the backbone, repository, and source control of my open source projects. My projects are in C#, JavaScript, or PHP, with MySQL or SQL Server databases as the storage mechanism. I just used github.com's help for this purpose: I created a profile there and downloaded a GUI for Git. Up to this point it was easy. But I'm almost stuck at going any further. I just want to do some simple (really simple) operations, including:

    - Creating a project on Git and mapping it to a folder on my laptop
    - Checking out/checking in files and folders
    - Resolving conflicts

    That's all I need to do now. But it seems that the GUI is not that user friendly. I expect the GUI to have a Connect To... option or something like that, and then I expect a list of projects to be shown; when I choose one, I expect to see the list of files and folders of that project, just like exploring your TFS project in Visual Studio. Then I want to be able to right-click a file and select check-in... or check-out and stuff like that. Do I expect too much? What should I do to easily use Git just like TFS? What am I missing here?
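    For reference, the rough command-line equivalents of those TFS operations look something like this in Git (a sketch only; the repository URL, branch and file names are placeholders):

        # "Connect to" / map a project to a local folder: clone the repository
        git clone https://github.com/yourname/yourproject.git
        cd yourproject

        # "Check-out" has no direct equivalent: files in the working copy are always editable.
        # "Check-in" is a local commit followed by a push back to the shared repository.
        git add path/to/changed-file.cs
        git commit -m "Describe the change"
        git push origin master

        # Get other people's changes and merge them into your working copy
        git pull origin master

        # Resolve conflicts: edit the conflicted files, then mark them resolved and commit
        git add path/to/conflicted-file.cs
        git commit

    The mental shift is that Git has no server-side exclusive check-out; every clone is a full repository, and sharing happens explicitly through push and pull.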

    Read the article

  • Do you store mysql exports in your version control tool for reverting to in event of error?

    - by Rob
    We run an internal web server with in-house software to run a manufacturing line. When new product features are to be added, either or both of the following occur:

    1. Changes to the in-house server software may be required to support these. These are for significant changes in functionality, being code driven.
    2. Changes to the MySQL database, with new entries for the part numbers. These are for smaller changes: configurations, and changes to already existing values and parameters. Such changes don't require code changes. Ideally we'd want our changes to be here rather than in item 1.

    Item 1 is version controlled in Subversion, so previous revisions can be referred to for rolling back to in the event of problems introduced in the latest revision. But what about changes to the MySQL database? We have quality processes to ensure that such changes are error-free, but there is always a chance that errors can pass through, e.g. a mistake in data entry, or faults in the code that uses MySQL corrupting the database. We have an automated backup every 6 hours, but what if we want more manually defined checkpoints in between these intervals? We could use the same backup system, but I wondered if folks here use other methods to store previous states of databases, e.g. exporting the database as a plain-text SQL dump -- at least with this method it would be possible to see diffs, e.g. in Beyond Compare, for troubleshooting. Thoughts?
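    For what it's worth, a minimal sketch of such a checkpoint script might look like the following (database name, credentials and paths are placeholders; --skip-extended-insert keeps one row per INSERT line so diffs stay readable):

        #!/bin/sh
        # Dump the database in a diff-friendly format, then record it as a
        # manual checkpoint in Subversion. Names below are illustrative only.
        DB=manufacturing
        DUMPFILE=/var/backups/checkpoints/manufacturing.sql

        mysqldump --user=backup --password="$MYSQL_PW" \
                  --skip-extended-insert --skip-dump-date \
                  --routines "$DB" > "$DUMPFILE"

        svn commit -m "Manual checkpoint before part-number changes" "$DUMPFILE"

    Because every row lands on its own line, svn diff or Beyond Compare can then show exactly which part numbers changed between checkpoints.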

    Read the article

  • How to go from Mainframe to the Cloud?

    - by Ruma Sanyal
    Running applications on IBM mainframes is expensive, complex, and hinders IT responsiveness. The high costs from frequent forced upgrades, long integration cycles, and complex operations infrastructures can only be alleviated by migrating away from a mainframe environment. Further, data centers are planning for cloud enablement pinned on principles of operating at significantly lower cost, very low upfront investment, operating on commodity hardware and open, standards-based systems, and decoupling of hardware, infrastructure software, and business applications. These operating principles are in direct contrast with the principles of operating businesses on mainframes. By utilizing technologies such as Oracle Tuxedo, Oracle Coherence, and Oracle GoldenGate, businesses are able to quickly and safely migrate away from their IBM mainframe environments. Further, by running Oracle Tuxedo and Oracle Coherence on Oracle Exalogic, the first and only integrated cloud machine on the market, Oracle customers can not only run their applications on standards-based open systems, significantly cutting their time to market and costs, they can also start their journey of cloud enabling their mainframe applications.

    - Oracle Tuxedo re-hosting tools and techniques can provide automated migration coverage for more than 95% of mainframe application assets, at a fraction of the cost
    - Oracle GoldenGate can migrate data from mainframe systems to open systems, eliminating risks associated with the data migration
    - Oracle Coherence hosts transactional data in memory, providing mainframe-like data performance and linear scalability
    - Running Oracle software on top of Oracle Exalogic empowers customers to start their journey of cloud enabling their mainframe applications

    Join us in a series of events across the globe where you'll learn how you can build your enterprise cloud and add tremendous value to your business. In addition, meet with Oracle experts and your peers to discuss best practices and see how successful organizations are lowering total cost of ownership and achieving rapid returns by moving to the cloud. Register for the Oracle Fusion Middleware Forum event in a city near you!

    Read the article

  • Topeka Dot Net User Group (DNUG) Meeting – April 6, 2010

    - by Robz / Fervent Coder
    Topeka DNUG is free for anyone to attend! Mark your calendars now!

    SPEAKER: Troy Tuttle is a self-described pragmatic agilist and Kanban practitioner, with more than a decade of experience in delivering software in the finance and health industries and as a consultant. He advocates teams improve their performance through pursuit of better practices like continuous integration and automated testing. Troy is the founder of the Kansas City Limited WIP Society and is a speaker at local area groups on team-related topics. He currently works as a Project Lead Consultant with AdventureTech Group of Kansas City, KS.

    TOPIC: Why Kanban? Kanban is receiving a large amount of attention recently. What does it offer compared to other approaches? Answering that question may require you to hit the "reset" button on previously held biases and assumptions. Kanban blends Lean thought with ideas from first-generation agile methodologies. To get started with Kanban, we will examine what steps are necessary to establish a transparent, work-limited, pull system. We will highlight the perils of allowing too much work-in-progress and how it affects development performance. Once established, Kanban teams need only a few metrics and tools to monitor their performance and improvement.

    WHERE: Federal Home Loan Bank Topeka on the Security Benefit Campus – Directions?

    WHEN: 11:30 AM - 1:00 PM on April 6th, 2010

    REGISTER: http://topekadotnet.wufoo.com/forms/topeka-dnug-meeting-attendance/

    ADDITIONAL INFO: As always, please sign in and out of FHLBank to help them with their accountability. Please park in the visitors section at the front of the building when you arrive. If there are no spots in visitors you may park in the overflow lot at the far east end of the facility. Lunch will be provided and we will have some great door prizes!

    Read the article

  • Connect to bluetooth device from command line

    - by Ilari Kajaste
    Background: I'm using my bluetooth headset as audio output. I managed to get it working by following the long list of instructions in the BluetoothHeadset community documentation, and I have automated the process of activating the headset as the default audio output in a script, thanks to another question. However, since I use the bluetooth headset with both my phone and computer (and the headset doesn't support two input connections), in order for the phone not to "steal" the connection when the handset is turned on, I force the headset into discovery mode when connecting to the computer (the phone gets to connect to it automatically). So even though the headset is paired OK and would in a "normal" scenario autoconnect, I always have to use the little bluetooth icon in the notification area to actually connect to my device (see screenshot).

    What I want to avoid: the GUI for connecting to a known and paired bluetooth device.

    What I want instead: I want to make bluetooth do exactly what clicking the connect item in the GUI does, only by using the command line. I want to use the command line so I can make a single-keypress shortcut for the action, and wouldn't need to navigate the GUI every time I want to establish a connection to the device.

    The question: How can I attempt to connect to a specific, known and paired bluetooth device from the command line?

    Further question: How do I tell if the connection was successful or not?
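    A minimal sketch of what such a shortcut script could look like, assuming a BlueZ stack that ships bluetoothctl (newer versions accept the command directly as an argument; older ones need it piped on stdin), with the headset's MAC address as a placeholder:

        #!/bin/sh
        # Attempt to connect to a known, paired headset and report the result.
        # 00:11:22:33:44:55 is a placeholder for the headset's address.
        # The "Connection successful" string may differ between BlueZ versions.
        MAC=00:11:22:33:44:55

        if bluetoothctl connect "$MAC" | grep -q "Connection successful"; then
            notify-send "Bluetooth" "Headset connected"
        else
            notify-send "Bluetooth" "Headset connection failed"
        fi

    Bound to a keyboard shortcut, this gives the single-keypress connect, and the notification answers the "did it work?" follow-up question.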

    Read the article

  • In search of database delivery practitioners and enthusiasts

    - by Claire Brooking
    We know from speaking with many of you at tradeshows and user groups that database delivery is not a factory production line. During planning, evaluation, quality control, and disaster mitigation, the people having their say at each step means that successful database deployment is a carefully managed course of action. With so many factors involved at every stage, we would love to find a way for our software to help out, by simplifying processes, speeding them up or joining together the people and the steps that make it all happen. We’re hoping our new research group for database delivery (SQL Server and Oracle) will help us understand the views and experiences of those of you out there in the trenches managing database changes. As part of our new group, we’ll be running a variety of research sessions, including surveys and phone interviews, over coming months. If you have opinions to share on Continuous Integration or Continuous Delivery for databases, we’d love to hear from you. Your feedback really will count as the product teams at Red Gate build plans. For some of our more in-depth sessions, we’ll also be offering participants an Amazon voucher as a thank-you for your time. If you’re not yet practising automated database deployment processes, but are contemplating or planning it, please do consider joining our research group too. If you’d like to sign up to the group and find out more, please fill in a quick form online, and we’ll be in touch to let you know about new research opportunities you might be interested in. We look forward to hearing your stories!

    Read the article

  • How to read default key value with dconf or gsettings?

    - by Zta
    I would like to know the default value of a dconf/gsettings key. My question is a follow-up of the question below: Where can I get a list of SCHEMA / PATH / KEY to use with gsettings?

    What I'm trying to do is create a script that reads all my personal preferences so I can back them up and restore them. I plan to iterate through all keys, like the script above, see which keys have been changed from their default value, and make a note of these so they can be restored later. I see that dconf-editor displays the keys' default values, but I'd very much like to script this. Also, I don't see how parsing the schemas in /usr/share/glib-2.0/schemas/ can be automated. Maybe someone can help? gsettings get-default|list-defaults would be nice =) (Geesh, it was much easier in the old days where you just kept your ~/.somethingrc in subversion ... =\

    Based on the answer given below, I've updated the script to print the schema, key, the key's data type, default value, and actual value:

        #!/bin/bash
        for schema in $(gsettings list-schemas | sort); do
          for key in $(gsettings list-keys $schema | sort); do
            type="$(gsettings range $schema $key | tr "\n" " ")"
            default="$(XDG_CONFIG_HOME=/tmp/ gsettings get $schema $key | tr "\n" " ")"
            value="$(gsettings get $schema $key | tr "\n" " ")"
            echo "$schema :: $key :: $type :: $default :: $value"
          done
        done

    This workaround basically covers what I need. I'll continue working on the backup script from here.

    Read the article

  • Finding bugs is difficult, right?

    - by Laila
    Something I hear developers tell us all the time is that they take pride in being a developer... and that bugs are a dent in that pride. Someone once told me, "I know I have found bugs years later, and it's the worst feeling in the world." So how can you avoid that sinking feeling when you find out a bug has been in production for months before someone lets you know about it? Besides, let's face it: hearing about a bug often means a world of pain, because it can take hours to track down where the problem is and more hours (if not days) to fix it. And during that time, you're not working on something new, and that, my friends, is really frustrating! So to cheer you up, we've created a Bug Hunt game, where you battle against the clock to spot bugs. We've really enjoyed putting this together and hope you enjoy playing it too. Once you're done with the bug hunt, we explain how easy it can be to find and fix bugs in real life, using a neat mechanism that we call Automated Error Reporting. Play the game now.

    Read the article

  • Tools for managing eCommerce backend

    - by rboarman
    I am working with an eCommerce company that has outgrown its hacked-together backend for managing inventory, pricing and feeds to various shopping engines (Yahoo, 3d cart, Amazon, etc.). They currently manage about 12,000 SKUs and are doing $40M in revenue. Their internal people are working on a new Magento solution, but that is six months away, and they need to replace/improve their current solution in order to hold them over. Their current solution was developed by two people who have since left the company. What tools/architecture do other eCommerce sites use to manage their inventory, pricing, product descriptions and feed generation for the shopping engines?

    The current solution looks like this:
    1) Inventory, pricing and product descriptions are maintained in a database and in NetSuite by employees
    2) New products are added to the database via import
    3) Twice a week data is extracted into a giant Excel spreadsheet
    4) The Excel file adjusts pricing based on some simple algorithms
    5) The Excel file exports about six different CSV feeds which are manually uploaded to Amazon, 3d cart, Yahoo, Google and Merchant Advantage
       a. Each feed is a variant of the product with different field names and formatting
       b. Pricing levels differ between feeds
       c. Some products are not sent to all feeds
    6) Orders are manually parsed and the inventory is adjusted as needed once product is sold

    The new solution should:
    1) Import data from ODBC, CSV and NetSuite (CSV via FTP)
    2) Apply pricing changes via simple algorithms (< $80 add $10, $200 add $25) -- a sketch of this step follows below
    3) Ensure margins are being met
    4) Format and generate a bunch of CSV and XML feeds
    5) Perhaps upload feeds to shopping engines automatically

    What I need to do is replace the Excel file with something that is maintainable and automated. Something in the .NET stack is preferable but not mandatory. I've been looking at BizTalk but it may take too long to develop and deploy. Any suggestions?
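    To give a feel for how mechanical the Excel pricing step really is, here is a minimal sketch, assuming a headerless CSV feed with the price in the third column (the column position, file names and thresholds are illustrative only):

        # Apply the simple markup rules to a CSV price feed:
        # under $80 add $10, $200 and over add $25, otherwise leave unchanged.
        # Assumes no header row and price in column 3; adjust for the real feed layout.
        awk -F, 'BEGIN { OFS = "," }
                 { if ($3 < 80) $3 += 10; else if ($3 >= 200) $3 += 25; print }' \
            products.csv > amazon_feed.csv

    Whatever replaces the spreadsheet, each feed variant would be a small transformation like this over the same master extract, which is exactly the part worth automating and version controlling.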

    Read the article

  • Early Adopters of Oracle Enterprise Manager 12c Report Agility and Productivity Benefits

    - by Anand Akela
    Earlier this month at Oracle Open World 2012, we celebrated the first anniversary of Oracle Enterprise Manager 12c. Early adopters of Oracle Enterprise Manager 12c have benefited from its federated self-service access to complete application stacks, automated provisioning, elastic scalability, metering, and charge-back capabilities. Crimson Consulting Group recently interviewed multiple early adopters of Oracle Enterprise Manager 12c and captured their findings in a white paper, "Real-World Benefits of Private Cloud: Early Adopters of Oracle Enterprise Manager 12c Report Agility and Productivity Gains."

    On October 25th at 10 AM Pacific time, Kirk Bangstad from the Crimson Consulting Group will join us in a live webcast and share what was learnt from the early adopters of Oracle Enterprise Manager 12c. Don't miss this chance to hear how private clouds could impact your business and to ask questions of our experts.

    Webcast: Real-World Benefits of Private Cloud: Early Adopters of Oracle Enterprise Manager 12c Report Agility and Productivity Benefits
    Date: Thursday, October 25, 2012
    Time: 10:00 AM PDT | 1:00 PM EDT
    Register Today

    All attendees will receive the white paper: Real-World Benefits of Private Cloud: Early Adopters of Oracle Enterprise Manager 12c Report Agility and Productivity Gains.

    Read the article

  • How do I convince my boss that it's OK to use an application to access an outside website?

    - by Cyberherbalist
    That is, if you agree that it's OK. We have a need to maintain an accurate internal record of bank routing numbers, and my boss wants me to set up a process where, once a week, someone goes to the Federal Reserve's website, clicks on the link to get the list of routing numbers (or the link giving the updates since a particular date), and then manually uploads the resulting text file to an application that will make the update to our data. I told him that a manual process was not at all necessary, and that I could write a routine that would fetch the FED's routing numbers into the application that keeps our data updated, and put it on whatever schedule was appropriate. But he is greatly opposed to doing this, and calls it "hacking the Federal Reserve website." I think he's afraid that the FED is going to get after us. I showed him the FED's robots.txt file, and the only thing it forbids is automated indexing of pages with the extension .cf*:

        User-agent: *    # applies to all robots
        Disallow: CF     # disallow indexing of all CF* directories and pages

    This says nothing about accessing the same data automatically that you could access manually. Anyone have a good counterargument to the idea that we'd be "hacking" the FED?
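    For perspective, the routine being proposed is nothing more exotic than a scheduled download. A minimal sketch as a weekly cron entry (the URL and the import command are placeholders for illustration, not the FED's real endpoint or an existing tool):

        # Weekly cron entry: fetch the routing-number text file and hand it to
        # the existing import step. URL and import-routing-numbers are hypothetical.
        # m h dom mon dow   command
        0 6 * * 1  curl -fsS "https://www.example.gov/routing-numbers.txt" -o /data/fedach/routing.txt && /usr/local/bin/import-routing-numbers /data/fedach/routing.txt

    Functionally this is the same HTTP GET a browser performs when someone clicks the link; the only difference is that nobody has to remember to do it.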

    Read the article

  • Reaping the Benefits of the Image Packaging System

    - by rickramsey
    One of the promises made about Oracle Solaris 11 was easier installation. Remember? Do you also remember how involved installing Oracle Solaris Cluster used to be? It was so involved, in fact, that we (when we were Sun Microsystems) wouldn't even let you do it yourself. How times have changed.

    New - How to Automate The Installation of Oracle Solaris Cluster 4.0

    Thanks to the new image packaging architecture in Oracle Solaris 11, you can now automate the installation of Oracle Solaris Cluster 4.0. Why is that such a big deal? As Lucia Lai explains it: "Without the AI, you would have to manually install the cluster components on the cluster nodes, and then run the scinstall tool to add the nodes to the cluster. If, instead, you use the AI, both the Oracle Solaris 11 and the Oracle Solaris Cluster 4.0 packages are installed onto the cluster nodes directly from Image Packaging System (IPS) repositories, and the nodes are booted into a new cluster with minimum user intervention." Lucia goes on to explain how to set up and configure the AI server, how to plan your cluster configuration for the automated installation, how to use the scinstall utility, how to set up the DHCP server, and more. A thorough, well-written article. - Rick

    Read the article

  • Process Improvement and the Data Professional

    - by BuckWoody
    Don’t be afraid of that title – I’m not talking about Six Sigma or anything super-formal here. In many organizations, there are more folks in other IT roles than in the Data Professional area. In other words, there are more developers, system administrators and so on than there are the “DBA” role. That means we often have more to do than the time we need to do it. And, oddly enough, the first thing that is sacrificed is process improvement – the little things we need to do to make the day go faster in the first place. Then we get even more behind, the work piles up and…well, you know all about that. Earlier I challenged you to find 10-30 minutes a day to study. Some folks wrote back and asked “where do I start”? Well, why not be super-efficient and combine that time with learning how to make yourself more efficient? Try out a new scripting language, learn a new tool that automates things or find out ways others have automated their systems. In general, find out what you’re doing and how, and then see if that can be improved. It’s kind of like doing a performance tuning gig on yourself! If you’re pressed for time, look for bite-sized articles (like the ones I’ve done here for PowerShell and SQL Server) that you can follow in a “serial” fashion. In a short time you’ll have a new set of knowledge you can use to make your day faster.

    Read the article

  • Are R&D mini-projects a good activity for interns?

    - by dukeofgaming
    I'm going to be in charge of hiring some interns for our software department soon (automotive infotainment systems) and I'm designing an internship program. The main productive activity "menu" I'm planning for them consists of:

    - Verification testing
    - Writing unit tests (automated, with an xUnit-compliant framework [several languages in our projects])
    - Documenting code
    - Updating the wiki
    - Updating diagrams & design docs
    - Helping with low-priority tickets (supervised/mentored)
    - Hunting down & cleaning compiler/run-time warnings
    - Refactoring/cleaning code against our coding standards

    But I also have this idea that having them do small R&D mini-projects would be good to test their talent and get them to have fun. These mini-projects would be:

    - Experimental implementations & optimizations
    - Proof-of-concept implementations for new technologies
    - Small papers (~2-5 pages) doing formal research on the previous two points
    - Apps (from a mini-project pool)

    These kinds of projects would be pre-defined and very concrete, although new ideas from the interns themselves would be very welcome. Even if a project is too big or is abandoned, the idea would be to lay the groundwork so it can be picked up by another intern or intern team. While I think this is good in concept, I don't know if it would be good in practice, as obviously this would diminish their productivity on "real work" (work with immediate value to the company), but I think it could help bring aboard very bright people and get them to want to stay in the future (which, I think, is the end goal for any internship program). My question here is whether these activities are too open-ended or difficult for the average intern to accomplish, and whether R&D is an efficient use of an intern's time, or if it makes more sense to assign project work to interns instead.

    Read the article

  • TDD with SQL and data manipulation functions

    - by Xophmeister
    While I'm a professional programmer, I've never been formally trained in software engineering. As I'm frequently visiting here and SO, I've noticed a trend for writing unit tests whenever possible and, as my software gets more complex and sophisticated, I see automated testing as a good idea in aiding debugging. However, most of my work involves writing complex SQL and then processing the output in some way. How would you write a test to ensure your SQL was returning the correct data, for example? Then, say if the data wasn't under your control (e.g., that of a 3rd party system), how can you efficiently test your processing routines without having to hand write reams of dummy data? The best solution I can think of is making views of the data that, together, cover most cases. I can then join those views with my SQL to see if it's returning the correct records and manually process the views to see if my functions, etc. are doing what they're supposed to. Still, it seems excessive and flakey; particularly finding data to test against...
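    One lightweight way to pin down "is my SQL returning the right rows" is to run the query under test against a small, hand-built fixture and diff the result against a known-good answer. A minimal sketch, with SQLite standing in for the real engine and an invented schema and query purely for illustration:

        #!/bin/sh
        # Build a tiny fixture database, run the query under test,
        # and compare its output with the expected result.
        rm -f fixture.db
        sqlite3 fixture.db <<'SQL'
        CREATE TABLE orders (id INTEGER, customer TEXT, total REAL);
        INSERT INTO orders VALUES (1, 'acme', 120.0), (2, 'acme', 30.0), (3, 'zenith', 99.0);
        SQL

        # Query under test: customers whose combined orders exceed 100
        sqlite3 -csv fixture.db \
          "SELECT customer, SUM(total) FROM orders GROUP BY customer HAVING SUM(total) > 100;" \
          > actual.csv

        printf 'acme,150.0\n' > expected.csv
        diff -u expected.csv actual.csv && echo "PASS" || echo "FAIL"

    The same idea carries over to the views-as-fixtures approach described above: the views supply the controlled data, and each test is just "run the query, compare the rows to what they should be."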

    Read the article

  • ORAchk version 2.2.5 is now available for download

    - by Gerry Haskins
    Those awfully nice ORAchk folks have asked me to let you know about their latest release... ORAchk version 2.2.5 is now available for download. New features in 2.2.5:

    - Running checks for multiple databases in parallel
    - Ability to schedule multiple automated runs via the ORAchk daemon
    - New "scratch area" for ORAchk temporary files, moved from /tmp to a configurable $HOME directory location
    - System health score calculation now ignores skipped checks
    - Checks the health of pluggable databases using OS authentication
    - New report section to report the top 10 time-consuming checks, to be used for optimizing runtime in the future
    - More readable report output for clusterwide checks
    - Includes over 50 new Health Checks for the Oracle Stack
    - Provides a single dashboard to view collections across your entire enterprise using the Collection Manager, now pre-bundled
    - Expands coverage of pre and post upgrade checks to include standalone databases, with new profile options to run only these checks
    - Expands to additional product areas in E-Business Suite (Workflow & Oracle Purchasing) and in Enterprise Manager Cloud Control

    ORAchk has replaced the popular RACcheck tool, extending the coverage based on prioritization of top issues reported by users, to proactively scan for known problems within the areas of:

    - Oracle Database: Standalone Database; Grid Infrastructure & RAC; Maximum Availability Architecture (MAA) Validation; Upgrade Readiness Validation; Golden Gate
    - Enterprise Manager Cloud Control Repository
    - E-Business Suite: Oracle Payables (R12 only); Oracle Workflow; Oracle Purchasing (R12 only)
    - Oracle Sun Systems
    - Oracle Solaris

    ORAchk features:

    - Proactively scans for the most impactful problems across the various layers of your stack
    - Streamlines how to investigate and analyze which known issues present a risk to you
    - Executes lightweight checks in your environment, providing immediate results with no configuration data sent to Oracle
    - Local reporting capability showing specific problems and their resolutions
    - Ability to configure email notifications when problems are detected
    - Provides a single dashboard to view collections across your entire enterprise using the Collection Manager

    ORAchk will expand in the future with high-impact checks in existing and additional product areas. If you have particular checks or product areas you would like to see covered, please post suggestions in the ORAchk subspace in My Oracle Support Community. For more details about ORAchk see Document 1268927.2

    Read the article

  • Is the development of CLI apps considered "backwards"?

    - by user61852
    I am a fledgling DBA with a lot of experience in programming. I have developed several CLI, non-interactive apps that solve some daily repetitive tasks or eliminate the human error from more complex, albeit not so daily, tasks. These tools are now part of our toolbox. I find CLI apps are great because you can include them in an automated workflow. Also, the Unix philosophy of doing a single thing but doing it well, and letting the output of a process be the input of another, is a great way of building a set of tools that would consolidate into a strategic advantage. My boss recently commented that developing CLI tools is "backwards", or constitutes a "regression". I told him I disagreed, because most CLI tools that exist now are not legacy but are live projects with improved versions being released all the time. Is this kind of development considered "backwards" in the market? Does it look bad on a résumé? I also consider that all solutions, whether web or desktop, should have command-line, non-interactive options. Some people consider this a waste of programming resources. Is this goal a worthy one in a software project?

    Read the article

  • Is the Alternate Ubuntu installer still required for LVM or Software RAID setup?

    - by jimp
    Over the past 5 years, I have been setting up Ubuntu servers using the Alternate installer. I need to provision a new server today, and I'm curious whether the Alternate CD is still the only way to set up LVM/RAID at installation time. In my limited experience with Red Hat Enterprise Linux, I noticed its single installer configures LVM automatically. Has Ubuntu's installer, at least the standard "Server" installer, added support for LVM/RAID, or is the Alternate installer still required for that kind of server setup?

    http://mirror.anl.gov/pub/ubuntu-iso/DVDs/ubuntu/12.04.1/release/

    Alternate install CD: The alternate install CD allows you to perform certain specialist installations of Ubuntu. It provides for the following situations: setting up automated deployments; upgrading from older installations without network access; LVM and/or RAID partitioning; installs on systems with less than about 384MiB of RAM (although note that low-memory systems may not be able to run a full desktop environment reasonably).

    LVM has always been fundamental for our server needs, so I'm surprised if it is still not considered a server-worthy feature.

    Read the article

  • TransformXml Task locks config file identified in Source attribute

    - by alexhildyard
    As background: the TransformXml MSBuild task is typically invoked in a custom build step to mark up a web.config file with per-environment configuration; its flexible directives offer highly granular control over the insertion, removal, substitution and transformation of existing configuration hierarchies. For those using the TransformXML task (typically in a Web Deployment Project) I raised an issue against Visual Studio 2010, in which the file handle on the input file was not released, meaning that following transformation the source file remained locked. As a result, the best way to transform a file was first to rename it, transform it, and then copy it back, leaving the "locked" file to be freed up later.I just heard today that this has now been resolved in Visual Studio 2012 RTM. That's good news, because Web Config Transformations offer a lot. An intelligent, automated build process will swap in the relevant transform(s), making it much easier to synthesise the Developer and Build server builds. This makes for a simpler and more exemplary build process, and with the tighter coupling comes a correspondingly quicker response to Developmental change.Oh, and don't forget -- it isn't just web.configs you can transform. You can transform app.configs, or indeed any XML file that honours the task's schema and hierarchical rules.

    Read the article

  • Release: Oracle Java Development Kit 8, Update 20

    - by Tori Wieldt
    Java Development Kit 8, Update 20 (JDK 8u20) is now available. This latest release of the Java Platform continues to improve upon the significant advances made in the JDK 8 release with new features, security and performance optimizations. These include: new enterprise-focused administration features available in Oracle Java SE Advanced; products offering greater control of Java version compatibility; security updates; and a very useful new feature, the MSI compatible installer.

    Download | Release Notes | Java SE 8 Documentation

    New tools, features and enhancements highlighted from JDK 8 Update 20 are:

    - Advanced Management Console: The Java Advanced Management Console 1.0 (AMC) is available for use with the Oracle Java SE Advanced products. AMC employs the Deployment Rule Set (DRS) security feature, along with other functionality, to give system administrators greater and easier control in managing Java version compatibility and security updates for desktops within their enterprise and for ISVs with Java-based applications and solutions.
    - MSI Enterprise JRE Installer: Available for Windows 64 and 32 bit systems in the Oracle Java SE Advanced products, the MSI compatible installer enables system administrators to provide automated, consistent installation of the JRE across all desktops in the enterprise, free of user interaction requirements.
    - Performance: String de-duplication resulting in a reduced footprint, and improved support in G1 Garbage Collection for long running apps.
    - A new 'force' feature in DRS (Deployment Rule Set) which allows system administrators to specify the JRE with which an applet or Java Web Start application will run. This is useful for legacy applications so end users don't need to approve security exceptions to run.
    - Java Mission Control 5.4 with new ease-of-use enhancements and launcher integration with Eclipse 4.4
    - JavaFX on ARM
    - Nashorn performance improvement by persisting bytecode after initial compilation

    There's much more information to be found in the JDK 8u20 Release Notes.

    Read the article
