Search Results

Search found 6845 results on 274 pages for 'systems'.


  • Where do I start in regard to making a Gnome/Unity form application?

    - by JMK
    Ok, so I am familiar with developing Forms and Console applications on Windows using Visual Studio .NET with C#, but where do I start when it comes to Linux distros like Ubuntu? Is there an equivalent? How would one go about matching what they can do in a Windows environment with .NET and C#, in a Linux environment without .NET, coding in something like Java or C/C++? I am aware of Eclipse; does Eclipse have a form designer, or do you have to code the design of any Gnome/Unity forms manually? Can I use Eclipse to write the Linux equivalent of a console application that you just double-click on to run? I also know about Mono, but the idea is that I want to learn how to develop software without using anything in the Microsoft stack, and I am not sure where to start. What is the standard language/framework used to develop these types of applications on Linux? As I have become more proficient with Visual Studio, C#, and .NET, it has struck me that without these Microsoft tools I am nothing: I am only capable of developing for the Microsoft OS, and this scares me. This isn't some anti-Microsoft thing; Microsoft makes some incredible software, hardware, operating systems, and IDEs, but it is generally a bad idea to put all of your eggs in one basket. So if I want to learn how to develop terminal and Gnome/Unity form applications, where in the world do I start? I have used Linux on and off for years, but Windows has been my primary OS. However, I have watched Linux get better and better, and as much as I love Windows 7, I am dubious about Windows 8 (I for one will sorely miss my start menu)! Obviously MS aren't going anywhere anytime soon, and I could spend the next couple of decades developing for .NET without any issues, but just because you can get away with something doesn't always mean it's a good idea. Thanks
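    For orientation, here is a minimal sketch of what a GNOME "form application" can look like in Python with PyGObject, one common stack for this (the snippet assumes GTK 3 and the python3-gi bindings are installed):

        # Minimal GTK 3 window with a single button; run with: python3 hello.py
        import gi
        gi.require_version("Gtk", "3.0")
        from gi.repository import Gtk

        win = Gtk.Window(title="Hello GNOME")
        win.set_default_size(300, 100)

        button = Gtk.Button(label="Click me")
        button.connect("clicked", lambda _b: print("Button clicked"))
        win.add(button)

        win.connect("destroy", Gtk.main_quit)
        win.show_all()
        Gtk.main()

    The same widgets can also be laid out visually in Glade, GTK's drag-and-drop interface designer, rather than in code.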

    Read the article

  • HTG Explains: What Is Bitcoin, the Virtual Digital Currency?

    - by YatriTrivedi
    Bitcoin is a virtual currency that employs some very interesting principles. Here’s the skinny on what exactly it is and how the fascinating technology behind it works. Disclaimer: This is NOT financial or legal advice. This. Is. NOT. Financial. Or. Legal. Advice. This is not, in any way, shape, or form, financial or legal advice. We’re covering this topic because of the technological implementations it uses and the innovations it attempts to make. If you do anything because of this post, we are not responsible because this is NOT financial or legal advice. ^_^

    Read the article

  • links for 2010-12-17

    - by Bob Rhubart
    - Overview of Oracle Enterprise Manager Management Packs: How does Oracle Enterprise Manager Grid Control do so much across so many different systems? Porus Homi Havewala has the answers. (tags: oracle otn grid soa entarch)
    - How to overcome cloud computing hurdles - Computerworld: What does it take to go from 'we should move to the cloud' to a successful cloud computing strategy? This excerpt from Silver Clouds, Dark Linings offers advice on crossing cloud chasms and developing a successful roadmap. (tags: ping.fm)
    - Security in OBIEE 11g, Part 2: Guest blogger Pravin Janardanam continues the discussion about OBIEE 11g authorization and other security aspects. (tags: oracle otn security businessintelligence obiee)
    - Oracle Fusion Middleware Security: A Quick Note about Oracle Access Manager 11g and WebLogic: "OAM 11g integrates with WebLogic using the very same components used to integrate OAM 10.1.4.3. Under most circumstances, that means using the OAM Identity Asserter...which asserts the OAM_REMOTE_USER header as the user principal in the JAAS subject." - Brian Eidelman (tags: WebLogic oracle)
    - Comparison Between Cluster Multicast Messaging and Unicast Messaging Mode (WebLogic Wonders!): "When servers are in a cluster, these member servers communicate with each other by sending heartbeats and indicating that they are alive. For this communication between the servers, either unicast or multicast messaging is used." - Divya Duryea (tags: weblogic oracle)
    - Ron Batra: Cloud Computing Series, IV: Database.com, ExaData on Demand and connecting the dots: Oracle ACE Director Ron Batra offers his assessment of recent rumblings in the cloud. (tags: oracle otn oracleace cloud database)

    Read the article

  • Groundbreaking and ready for use: Oracle 12c In-Memory Option launch in Frankfurt

    - by Anne Manke
    Since the announcement of the Oracle 12c In-Memory database option at OpenWorld in San Francisco last year, the DB community has been eager to see what this groundbreaking technology will bring to ad-hoc real-time analytics on live transactions, data warehousing, reporting, and online transaction processing (OLTP). The bar is set high, as Larry Ellison promises that the new 12c In-Memory option delivers:
    - 100x faster query processing for real-time analytics on OLTP systems or data warehouses
    - a doubling of transaction throughput
    - 100% compatibility with existing applications
    - data stored in both row format and (in-memory) column format, remaining active and consistent
    - cloud-readiness without data migration
    - an extension of in-memory query processing across multiple servers
    To name only a few of the features; more information can be found here! Queries are processed faster than they can be asked with the new 12c In-Memory database option, according to Larry Ellison. On June 17, 2014, 12c In-Memory will be presented at an exclusive launch event in Frankfurt am Main. The agenda includes talks, discussions, and a live demo of the In-Memory database option. Register now!
    Location & time: June 17, 2014, 9:30 - 15:15, at the Radisson Blu Hotel (Franklinstrasse 65, 60486 Frankfurt am Main)
    Agenda:
    - 9:30 Registration
    - 10:00 Welcome: Guenther Stuerner, Vice President Sales Consulting, Oracle Deutschland (in German)
    - 10:15 Analyst talk: Carl W. Olofson, Research Vice President, IDC (in English)
    - 10:35 Keynote: Andy Mendelsohn, Head of Database Development, Oracle (in English)
    - 11:35 Panel discussion (in English): Jens-Christian Pokolm, Postbank Systems AG; Andy Mendelsohn, Head of Database Development, Oracle; Carl W. Olofson, Research Vice President, IDC; Dr. Dietmar Neugebauer, Chairman of the Board, DOAG
    - 12:30 Lunch
    - 13:45 Oracle Database In-Memory Option: Perform - Manage - Live Demo. Ralf Durben, Senior Principal Systems Consultant, Oracle Deutschland (in German)
    - 14:30 In-Memory: Revolution for your DWH - Real-Time Data Warehouse - Mixed Workloads - Live Demo - Live Data Query. Alfred Schlaucher, Senior Principal Systems Consultant, Oracle Deutschland (in German)
    - 15:15 Closing remarks & networking

    Read the article

  • ArchBeat Link-o-Rama Top 10 for November 2, 2012

    - by Bob Rhubart
    - ADF Mobile - Login Functionality | Andrejus Baranovskis: "The new ADF Mobile approach with native deployment is cool when you want to access phone functionality (camera, email, sms and etc.), also when you want to build mobile applications with advanced UI," reports Oracle ACE Director Andrejus Baranovskis.
    - Big Data: Running out of Metric System | Andrew McAfee: Do very large numbers make your brain hurt? Better stock up on aspirin. According to Andrew McAfee: "It seems safe to say that before the current decade is out we’ll need to convene a 20th conference to come up with some more prefixes for extraordinarily large quantities not to describe intergalactic distances or the amount of energy released by nuclear reactions, but to capture the amount of digital data in the world."
    - Cloud computing will save us from the zombie apocalypse | Cloud Computing - InfoWorld: "It's just a matter of time before we migrate our existing IT assets to public cloud systems," says InfoWorld cloud blogger David Linthicum. "Additionally, it's a short window until the dead rise from the grave and attempt to eat our brains." Is it Halloween or something?
    - Thought for the Day: "A computer lets you make more mistakes faster than any invention in human history—with the possible exceptions of hand guns and tequila." — Mitch Ratcliffe

    Read the article

  • Nvidia 740M still not working after Bumblebee installation

    - by Jon
    First of all, I have checked a lot of similar topics, but I still can't get my laptop to use the Nvidia 740M. So, first things first: I have an Asus X550V laptop (i5-3230, 4 GB RAM, Nvidia 740M + Intel HD4000). I installed Ubuntu 13.10 alongside Windows 8 (preinstalled) and both systems are running without problems. However, I have a problem with the second graphics card (the Nvidia 740M), as Ubuntu doesn't recognize it. I installed Bumblebee following this tutorial, https://wiki.ubuntu.com/Bumblebee#Installation, but I still get a "Cannot access secondary GPU" error when trying to run ''optirun Steam'' in a terminal. Then I tried to act on this message: [ERROR]Cannot access secondary GPU - error: [XORG] (EE) No devices detected. You need to edit /etc/bumblebee/xorg.conf.nvidia (or /etc/bumblebee/xorg.conf.nouveau if using the nouveau driver) and specify the correct BusID by following the instructions therein. But with lspci filtered on VGA I only get info about the Intel HD 4000 and no Nvidia; when I type plain lspci, I do get the line for the Nvidia 740M, but after I edit the config file I still get the secondary-card error. Also, in /etc/bumblebee/xorg.conf.nvidia there wasn't a BusID entry or anything similar, so I just added the whole line in the Device section. As I said, I tried a lot of things to get this working while avoiding this forum (as I didn't want to bother people when possible solutions already exist), but alas, I had to bother you. If there is a need for any additional info, just say so, no problem at all. Thank you very much in advance. :)
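    For what it's worth, Optimus GPUs usually show up as a "3D controller" rather than a "VGA compatible controller", which is why filtering lspci on VGA lists only the Intel chip. A hypothetical sketch (the bus address below is illustrative; use whatever plain lspci reports for the 740M):

        # List both display controllers; the NVIDIA card is typically a 3D controller
        lspci | grep -Ei 'vga|3d'
        #   00:02.0 VGA compatible controller: Intel Corporation ... HD Graphics 4000
        #   01:00.0 3D controller: NVIDIA Corporation GK107M [GeForce GT 740M]

        # In the Device section of /etc/bumblebee/xorg.conf.nvidia, translate
        # lspci's 01:00.0 into Xorg's colon-separated form:
        #   BusID "PCI:01:00:0"

        # Then restart the daemon and retry (glxinfo comes from the mesa-utils package)
        sudo service bumblebeed restart
        optirun glxinfo | head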

    Read the article

  • Oracle VM Server for SPARC 2.2 on S11

    - by Liam Merwick
    Oracle VM Server for SPARC 2.2 has been released for a little while now. The https://blogs.oracle.com/virtualization blog has an overview of all the 2.2 features. Initially, what was released was the SVR4 package for Solaris 10 (which is unbundled and wasn't constrained by any external schedule). On Solaris 11, the 'ldomsmanager' package is built into Solaris (and therefore doesn't need to be downloaded separately), so it is delivered as part of an S11 Support Repository Update (SRU). Some of the features in 2.2 are specific to S11 (SR-IOV and the ability to live migrate between machines with different CPU types), so there have been many requests to know when the S11 bits are coming. Solaris 11 SRU8.5 was released on Friday, and it includes Oracle VM Server for SPARC 2.2, so if you're already running an S11 SRU, all you need to do is a 'pkg update' to get the 2.2 bits. If you're still running the original S11 and your 'pkg publisher' output shows the /release repository, then you'll need to sign up for the /support repo by getting the appropriate keys and certificates to access the repository (this requires a support contract). The 2.2 Admin Guide documents how to do this upgrade on S11. Two S11 articles with useful details on upgrading (not just 'ldomsmanager') via the support repositories are:
    - How to Update Oracle Solaris 11 Systems From Oracle Support Repositories, by Glynn Foster
    - Tips for Updating Your Oracle Solaris 11 System from the Oracle Support Repository, by Peter Dennis
    In particular, if you'd like to stick with the v2.1 release when upgrading to SRU8.5 or greater, see the 'pkg freeze' section of Peter's article.
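    As a rough sketch, the update path with IPS looks like the following (the freeze step applies only if you want to stay on v2.1; exact package versions vary by SRU):

        # Check which repository you are subscribed to (/support is needed for SRUs)
        pkg publisher

        # Optional: freeze ldomsmanager at its installed 2.1 version before updating
        sudo pkg freeze ldomsmanager

        # Pull in the SRU; on SRU8.5 or later this delivers the 2.2 bits
        sudo pkg update

        # Confirm what is now installed
        pkg info ldomsmanager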

    Read the article

  • Leveraging .Net 4.0 Framework Tools For Encrypting Web Configuration Sections

    - by Sam Abraham
    I would like to share a few points with regard to encrypting web configuration sections in .NET 4.0. This information is also applicable to .NET 3.5 and 2.0. Two methods can work perfectly for encrypting connection strings in a web project configuration file:
    1. Do it all yourself! In this approach, you implement helper functions for encrypting and decrypting configuration file content, and the program explicitly retrieves the appropriate content from the configuration file and then decrypts it. The disadvantages of this implementation are the added overhead of maintaining the encryption/decryption code, as well as the burden of always ensuring sections are decrypted before use and re-encrypted whenever edited.
    2. Leverage the .NET 4.0 Framework (the way to go!). Fortunately, all the tools needed for protecting configuration files are built into the .NET 2.0/3.5/4.0 versions with very little setup needed. To encrypt connection strings, one can use the ASP.NET IIS Registration Tool (aspnet_regiis.exe). Note that a 64-bit version of the tool also exists under the Framework64 folder for 64-bit systems. The command we need to encrypt our web.config connection strings is simply the following:

        aspnet_regiis -pe "connectionStrings" -app "/SampleApplication" -prov "RsaProtectedConfigurationProvider"

    To later decrypt this configuration section:

        aspnet_regiis -pd "connectionStrings" -app "/SampleApplication"

    The following is a brief description of the command-line options used in the examples above; aspnet_regiis supports many more options, which you can read about in the links provided below.
    - -pe: section name to encrypt
    - -pd: section name to decrypt
    - -app: web application name
    - -prov: encryption/decryption provider
    ASP.NET automatically decrypts the content of the web.config file at runtime, so no programming changes are needed. Another tool, aspnet_setreg.exe, is to be used if certain configuration file sections pertinent to the .NET runtime are to be encrypted. For more information on when and how to use aspnet_setreg, please refer to the references below. Hope this helps!
    Some great references concerning the topic:
    http://msdn.microsoft.com/en-us/library/ff650037.aspx
    http://msdn.microsoft.com/en-us/library/zhhddkxy.aspx
    http://msdn.microsoft.com/en-us/library/dtkwfdky.aspx
    http://msdn.microsoft.com/en-us/library/68ze1hb2.aspx
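    For illustration, after running the encrypt command the connectionStrings section of web.config ends up looking roughly like this (cipher text abbreviated and some child elements omitted):

        <connectionStrings configProtectionProvider="RsaProtectedConfigurationProvider">
          <EncryptedData Type="http://www.w3.org/2001/04/xmlenc#Element"
                         xmlns="http://www.w3.org/2001/04/xmlenc#">
            <CipherData>
              <CipherValue>dF3aK9...</CipherValue>
            </CipherData>
          </EncryptedData>
        </connectionStrings>

    Code that reads ConfigurationManager.ConnectionStrings continues to work unchanged, since decryption happens transparently at runtime.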

    Read the article

  • PHP - Making CMS (architecture, etc.)

    - by UnknownProgramer
    I'm at the stage of planning a new CMS. I previously used WordPress and other open-source CMSes for my clients, but I always had to write new modules and even mess with the core code in order to do certain things, which, as you understand, is not the best thing to do. So I finally decided to make my own CMS that works the way I need. But before I start, I would like to think it through carefully to ensure that I won't need to rewrite it from the ground up because I forgot to include some feature in the architecture or designed it wrong. I would like to hear your thoughts, and most importantly I would like you to suggest articles or books on the subject, especially on the architecture of such systems. I have googled a few good books, but that is not enough. The way I'm planning to do it: PHP5, completely OOP, module architecture. You create a page and add any modules you need there, but modules are not global; they are local to a page, so you can make two pages with the same module whose content differs if you set different "content IDs" for the two instances, or is shared if the content ID is the same (see the sketch below). I also plan to support an online storage web service (like Amazon S3) for images and files, so I would like to hear your thoughts on that too. I have not yet decided how to store language data; I don't want to use the DB for that, but I haven't settled on an alternative. I also think I will support other databases via a global DB class with separate DB wrappers for MySQL and the other databases. And, well, I would appreciate any other information you can provide on the subject.
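    To make the page/module/content-ID idea concrete, here is a tiny sketch (written in Python purely for brevity; all names are hypothetical):

        # Each placement of a module on a page carries its own content_id;
        # two placements share content only when their content_ids match.
        class ModulePlacement:
            def __init__(self, module_type: str, content_id: int):
                self.module_type = module_type  # e.g. "news_list"
                self.content_id = content_id    # key into that module's content storage

        page_a = [ModulePlacement("news_list", content_id=7)]
        page_b = [ModulePlacement("news_list", content_id=7)]   # shares content with page A
        page_c = [ModulePlacement("news_list", content_id=12)]  # same module, its own content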

    Read the article

  • Making files generally available on a Linux system (when security is relatively unimportant)?

    - by Ole Thomsen Buus
    Hi, I am using Ubuntu 9.10 on a stationary PC. I have a secondary 1 TB hard drive with a single big logical partition (currently formatted as ext4). It is mounted as /usr3 with options user, exec in /etc/fstab. I am doing high-speed imaging experiments; well, only 260 fps, but that still creates many individual files, since each frame is saved as one PNG file. The machine is not used by anyone other than me, which is why the default security model imposed by Ubuntu is not necessary. What is the best way to make the entire contents of /usr3 generally available on all systems, in case I need to move the hard drive to another Ubuntu 9.x or 10.x machine? When grabbing images with the FireWire camera I use a self-made (console-based) grabbing utility in sudo mode. This creates all files with root as owner and group. I am logged in as user otb, and usually I do the following when I have to make files generally available to otb:

        sudo chown otb -R *
        sudo chgrp otb -R *
        sudo chmod a=rwx -R *

    This takes some time, since the disk now contains ~200,000 individual files. After this, how would Linux behave if I moved the hard drive to another system where the user otb is also available? Would the files still be accessible without sudo use?
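    As an aside, the three recursive passes can be collapsed into two, and the portability question comes down to numeric user IDs; a sketch:

        # One pass for owner and group, one for permissions; the capital X adds
        # execute only on directories (and on files that are already executable)
        sudo chown -R otb:otb /usr3
        sudo chmod -R a+rwX /usr3

        # Ownership is stored on disk as a numeric UID, not the name "otb", so on
        # another machine the files belong to otb only if that account has the
        # same UID there; a=rwx permissions keep them readable/writable regardless
        id -u otb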

    Read the article

  • Revolutionizing Digital Commerce

    - by bwalstra
    The confluence of the Internet, the pace of change in technology, and the demands of the value-conscious consumer are accelerating the evolution of the global digital marketplace at an unprecedented rate. Success in the new digital economy has become inextricably linked with the agility to launch innovative products, services, and new business models efficiently, with minimal risk. A major obstacle to agility, and by extension to success in digital commerce, is the fact that, by and large, information technology (IT) infrastructure is tightly coupled with particular business models. Enterprises, through a well-intentioned but misconstrued cost-saving belief, continue to customize existing infrastructure and create new silos to support new business models. In reality, this approach results in rigid, inflexible business processes and exposes the enterprise to unnecessary risks, higher opportunity costs, and lower profit margins. Oracle, a leading supplier of business solutions to the enterprise, is enabling the business strategies necessary to succeed in the digital economy by offering a modern, open, modular, and functionally comprehensive revenue management solution that decouples IT infrastructure from business models. Enterprises using the Oracle solution are able to focus on core competencies and innovate unimpeded, assuring that business and IT systems will seamlessly adapt to the changing conditions of the digital economy. Revolutionizing Digital Commerce: An Oracle Revenue Management Solution

    Read the article

  • Do we have enough time to build an electric car future?

    - by julien.groues
    A recent article from Greenbang has posed the question 'Do we have enough time to build an electric car future?'. The writer discusses that, although the future of transport might lie with electric cars, there is concern about whether we'll be able to build the market and infrastructure required to support them before carbon and oil constraints create difficulties in powering the vehicles. Of course, the increasing use of electric vehicles (EVs) is going to put excessive pressure on energy grids, as large volumes of electricity will need to be directed to charging points, which in turn must handle fluctuating demand at peak times. EVs are increasing in popularity as a sustainable method of transport to reduce carbon consumption, and electric utilities will have the opportunity, and the challenge, to quickly determine the best methods to fuel these vehicles and accommodate the associated increases in demand for energy. Critically, efficient software is required to provide diagnostic and predictive capabilities related to EV refuelling: for example, anticipated electricity flow will need to be addressed as the number of EVs on the road increases, and electricity will need to be directed to specific areas on demand as vehicles attempt to recharge en masse. But a smart grid infrastructure can meet these demands intelligently. The implementation of a smart grid is not in the distant future; it is an achievable reality for utilities via simple installation of new software and technologies, which can be done incrementally for those facing existing legacy systems or concerned with upfront costs. The smart grid is integral to the monitoring and control of energy use as well as the future-proofing of the energy grid. A smart grid will be critical to meeting the electricity requirements of new EVs and will ensure their successful deployment by providing a reliable foundation for the data handling required to record and manage electricity distribution: from recording and assessing energy usage, to analysing data and sharing information with consumers via green billing. http://www.greenbang.com/do-we-have-enough-time-to-build-an-electric-car-future_14248.html

    Read the article

  • ArchBeat Link-o-Rama for 11/29/2011

    - by Bob Rhubart
    - Webcast: Introducing Oracle WebLogic Server 12c: Developer Deep Dive. December 1, 2011, 11am-12pm PT / 2pm-3pm ET. Learn how Oracle WebLogic Server 12c enables rapid development of modern, lightweight Java EE 6 applications. Discover how you can leverage the latest development technologies, tools and standards when deploying to Oracle WebLogic Server across both conventional and Cloud environments.
    - Web Services in BI Publisher 11g | Robin Moffatt: BI Publisher 11g comes with a shiny set of new Web Services, superseding those that were in 10g. Robin Moffatt's article discusses some of the uses, and ways to implement them.
    - Stanford expands free, online information technology course offerings | ZDNet: Joe McKendrick reports on new Stanford online courses set to start in January 2012. Courses include Software as a Service and Computer Science 101.
    - The federal government's secret 1966 cloud computing plan | ZDNet: "Even as far back as 45 years ago, the US federal government struggled to consolidate and become more service-oriented across its agency silos," says McKendrick.
    - SOA Made Simple; Architects in AZ; Introduction to Cloud Migration: This week on the Oracle Technology Network Architect Home Page.
    - New release of S-ASH v.2.3 | Marcin Przepiorowski: A short post from Marcin Przepiorowski on the new version of Oracle Simulate ASH.
    - Architecture all day: Oracle Technology Network Architect Day - Phoenix, AZ. Spend the day with your peers learning from Oracle experts on Cloud Computing, Engineered Systems, and more. Wednesday, December 14, 2011, 8:30am to 5:00pm. Registration is free, but seating is limited.

    Read the article

  • Cannot ping any computers on LAN

    - by Timothy
    I haven't been able to find a straightforward answer on this yet, and I'm hoping people here are able to help! Keep in mind that I'm a complete beginner at this: this is the first installation I've done of any Linux system ever, so please keep that in mind when answering this question. We are a complete Windows shop, using nothing but Microsoft products, but we are looking into the value of OpenStack; however, we have been having problems getting Ubuntu Server installed and speaking to the network. The machine is getting an IP address, which tells me that some sort of DHCP activity is working, but I'm not able to ping any computer on our network, and I'm not able to connect to the internet. Every time I try to ping, I get: Destination Host Unreachable. I've tried modifying the resolv.conf file with our static details to match my Windows 7 machine, still with no luck. I've even tried disabling the firewall on Ubuntu Server 11, and no luck. Any ideas? Please let me know if there is any information you need from the server and I'll post it up.
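    A few commands would narrow this down; a sketch with illustrative addresses (note that resolv.conf only affects DNS lookups, not pinging by IP):

        # Does the interface have an address, and is there a default route?
        ip addr show eth0
        ip route show

        # Can the gateway be reached by raw IP? (no DNS involved)
        ping -c 3 192.168.1.1

        # For a static setup on Ubuntu Server, the usual place is
        # /etc/network/interfaces, e.g. (hypothetical values):
        #   auto eth0
        #   iface eth0 inet static
        #       address 192.168.1.50
        #       netmask 255.255.255.0
        #       gateway 192.168.1.1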

    Read the article

  • How can I run Ubuntu Desktop on my Galaxy Nexus?

    - by Jack Senechal
    On the Ubuntu for phones site it advertises the desktop view feature: "The phone becomes a full PC and thin client when docked." And there's the demo by Canonical of something similar running under Ubuntu for Android. I realize they're different systems, but the end effect of both is to have a full Ubuntu system running on the phone. I've installed Ubuntu Touch Preview on my Galaxy Nexus (toro), and it's working as expected (no cellular signal, but wifi works, etc.). But when I plug in a monitor via HDMI, it just mirrors the phone's touch display. There's also currently no Bluetooth support for attaching a keyboard and mouse. The keyboard only kind of works via USB, and the mouse not at all. I've also tried running Ubuntu under Android via VNC, but the lack of responsiveness of VNC makes it impractical for daily use. I'd consider that route again if there is some way to make the UI more responsive. So the question is, how can I set up my phone to run Ubuntu Desktop in a way that's usable as a laptop replacement? Is there a way to enable Desktop View on Ubuntu Touch? Or can I run Ubuntu for Android as in the previously referenced demo? Plugging into a monitor would be OK, but I'd love to be able to use the desktop interface with mouse and keyboard through the phone's screen as well. Touch input and an onscreen keyboard would be a plus but is definitely not necessary.

    Read the article

  • An Oracle Exastack Recap

    - by Kristin Rose
    For those ISVs and OEMs who tuned into Oracle's FY13 Partner Kickoff, thank you! It was your participation and presence that helped make this year's show another great success. The OPN Communications team was lucky enough to get a chance to sit down with Chris Baker, Oracle SVP of Worldwide ISV, OEM & Java, all the way from London, as he recapped the achievements seen over the past year with the Oracle Exastack Program. Be sure to watch his short video below. Here are some highlights:
    - 1000 "Readies": partners that are ready to use the latest version of our products
    - Over 100 partners ready to use Oracle Exastack, Oracle Exadata Database Machine, and Oracle Exalogic Elastic Cloud
    - The new Oracle Exalytics machine for analysis
    In less than one year, more than 100 ISV applications have achieved Oracle Exastack Ready status, and more than 35 ISV applications have achieved Oracle Exastack Optimized status. These partners can be found by Oracle customers in the Oracle Solutions Catalog. Demonstrating to customers that their solutions are tuned to deliver optimum speed, scalability, and reliability on Oracle Engineered Systems, Oracle partners are rapidly achieving Oracle Exastack Optimized certification. Read the press release here. By simplifying your company's architecture with the Oracle Exastack program, both ISVs and OEMs are able to better concentrate on their applications and deliver enhanced benefits to their customers. Cheers! The OPN Communications Team

    Read the article

  • Script/tool to import series of snapshots, each being a new revision, into Subversion, populating source tree?

    - by Rob
    I've developed code locally and taken a fairly regular snapshot whenever I reach a significant point in development, e.g. a working build. So I have a longish list of about 40 folders, each folder being a snapshot, in ascending YYYYMMDD date order, e.g.: 20100523 20100614 20100721 20100722 20100809 20100901 20101001 20101003 20101104 20101119 20101203 20101218 20110102. I'm looking for a script to import each of these snapshots as a new Subversion revision in the source tree, the end result being that the HEAD revision is the same as the last snapshot, and the other revisions are numbered in order. Some other requirements:
    - The HEAD revision must not be cumulative over the previous snapshots; i.e., files that appeared in older snapshots but don't appear in later ones (e.g. due to refactoring) should not appear in the HEAD revision.
    - Meanwhile, there should be continuity between files that do persist between snapshots: Subversion should know that there are previous versions of these files and not treat them as brand-new files in each revision.
    (A sketch of one approach appears below.) Some background about my aim: I need to formally revision-control this work rather than keep local private snapshot copies. I plan to release this work as open source, so version control is highly recommended. I am evaluating some of the currently popular version control systems (Subversion and Git), BUT I definitely need a working solution in Subversion. I'm not looking to be persuaded to use one particular tool; I need a solution for each tool I am considering, as I would also like a solution in Git (I will post the question separately for Git so that separate camps of folks with expertise in Git and Subversion can give focused answers on one or the other). The same question but for Git: Script/tool to import series of snapshots, each being a new edition, into GIT, populating source tree? An outline answer for Subversion exists on stackoverflow.com but without enough specifics about the script (what commands to use, code to check valid scenarios if necessary, i.e. basically a working script): http://stackoverflow.com/questions/2203818/is-there-anyway-to-import-xcode-snapshots-into-a-new-svn-repository
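    A minimal sketch of such a script, assuming an empty repository already checked out at ./wc and the snapshot folders under ./snapshots (simplified: it ignores filenames containing spaces):

        #!/bin/sh
        for snap in snapshots/*/ ; do
            name=$(basename "$snap")
            # Mirror the snapshot into the working copy; --delete drops files
            # that vanished between snapshots, keeping HEAD non-cumulative
            rsync -a --delete --exclude '.svn' "$snap" wc/
            (
                cd wc || exit 1
                # New files and folders are scheduled for addition; files that
                # persist between snapshots keep their history, so Subversion
                # records edits rather than brand-new files
                svn add --force . > /dev/null
                # Files reported missing (!) were deleted in this snapshot
                svn status | awk '/^!/ { print $2 }' | xargs -r svn rm
                svn commit -m "Import snapshot $name"
            )
        done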

    Read the article

  • ArchBeat Top 20 for March 25-31, 2012

    - by Bob Rhubart
    The top 20 most-clicked links as shared via my social networks for the week of March 25-31, 2012:
    - Oracle Cloud Conference: dates and locations worldwide
    - The One Skill All Leaders Should Work On | Scott Edinger
    - BPM in Retail Industry | Sanjeev Sharma
    - Oracle VM: What if you have just 1 HDD system | @yvelikanov
    - Solution for installing the ADF 11.1.1.6.0 Runtimes onto a standalone WLS 10.3.6 | @chriscmuir
    - Beware the 'Facebook Effect' when service-orienting information technology | @JoeMcKendrick
    - Using Oracle VM with Amazon EC2 | @pythianfielding
    - Oracle BPM: Adding an attachment during the Human Task Initialization | Manh-Kiet Yap
    - When Your Influence Is Ineffective | Chris Musselwhite and Tammie Plouffe
    - Oracle Enterprise Pack for Eclipse 12.1.1 update on OTN
    - A surefire recipe for cloud failure | @DavidLinthicum
    - IT workers bore brunt of offshoring over past decade: analysis | @JoeMcKendrick
    - Private cloud-public cloud schism is a meaningless distraction | @DavidLinthicum
    - Oracle Systems and Solutions at OpenWorld Tokyo 2012
    - Dissing Architects, or "What's wrong with the coffee?" | Bob Rhubart
    - Validating an Oracle IDM Environment (including a Fusion Apps build out) | @FusionSecExpert
    - Cookbook: SES and UCM setup | George Maggessy
    - Red Samurai Tool Announcement - MDS Cleaner V2.0 | @AndrejusB
    - OSB/OSR/OER in One Domain - QName violates loader constraints | John Graves
    - Spring to Java EE Migration, Part 3 | @ensode
    Thought for the Day: "Inspire action amongst your comrades by being a model to avoid." — Leon Bambrick

    Read the article

  • StreamInsight Now Available Through Microsoft Update

    - by Roman Schindlauer
    We are pleased to announce that StreamInsight v1.1 is now available for automatic download and install via Microsoft Update globally. In order to enable agile deployment of StreamInsight solutions, you have asked of us a steady cadence of releases with incremental but highly impactful features and product improvements. Following our StreamInsight 1.0 launch in Spring 2010, we offered StreamInsight 1.1 in Fall 2010 with implicit compatibility and an upgraded setup to support side-by-side installs. With this setup, your applications will automatically point to the latest runtime, but you still have the choice to point your application back to the 1.0 runtime if you choose to do so. As the next step, in order to enable timely delivery of our releases to you, we are pleased to announce support for automatic download and install of the StreamInsight 1.1 release via Microsoft Update starting this week. If you have a computer that:
    - is subscribed to Microsoft Update (different from Windows Update),
    - has StreamInsight 1.0 installed, and
    - does not yet have StreamInsight 1.1 installed,
    then Microsoft Update will automatically download and install the corresponding StreamInsight 1.1 update side by side with your existing StreamInsight 1.0 installation, across all supported 32-bit and 64-bit Windows operating systems, across 11 supported languages, and across StreamInsight client and server SKUs. This is also supported in WSUS environments, if all your updates are managed from a corporate server (please talk to the WSUS administrator in your enterprise). As an example, if you have SI Client 1.0 DEU and SI Server 1.0 ENU installed on the same computer, Microsoft Update will selectively download and side-by-side install just the SI Client 1.1 DEU and SI Server 1.1 ENU releases. Going forward, Microsoft Update will be our preferred mode of delivery, in addition to support for our download sites and media-based distribution where appropriate. Regards, The StreamInsight Team

    Read the article

  • Were the first assemblers written in machine code?

    - by The111
    I am reading the book The Elements of Computing Systems: Building a Modern Computer from First Principles, which contains projects encompassing the build of a computer from boolean gates all the way to high level applications (in that order). The current project I'm working on is writing an assembler using a high level language of my choice, to translate from Hack assembly code to Hack machine code (Hack is the name of the hardware platform built in the previous chapters). Although the hardware has all been built in a simulator, I have tried to pretend that I am really constructing each level using only the tools available to me at that point in the real process. That said, it got me thinking. Using a high level language to write my assembler is certainly convenient, but for the very first assembler ever written (i.e. in history), wouldn't it need to be written in machine code, since that's all that existed at the time? And a correlated question... how about today? If a brand new CPU architecture comes out, with a brand new instruction set, and a brand new assembly syntax, how would the assembler be constructed? I'm assuming you could still use an existing high level language to generate binaries for the assembler program, since if you know the syntax of both the assembly and machine languages for your new platform, then the task of writing the assembler is really just a text analysis task and is not inherently related to that platform (i.e. needing to be written in that platform's machine language)... which is the very reason I am able to "cheat" while writing my Hack assembler in 2012, and use some preexisting high level language to help me out.

    Read the article

  • OUM is Flexible and Scalable

    - by user535886
    Flexible and Scalable
    Traditionally, projects have been focused on satisfying the contents of a requirements document or rigorously conforming to an existing set of work products. Often, especially where iterative and incremental techniques have not been employed, these requirements may be inaccurate, the previous deliverables may be flawed, or the business needs may have changed since the start of the project. Fitness for business purpose, derived from the Dynamic Systems Development Method (DSDM) framework, refers to the focus on delivering necessary functionality within a required timebox. The solution can be more rigorously engineered later, if such an approach is acceptable. Our collective experience shows that applying fit-for-purpose criteria, rather than tight adherence to requirements specifications, results in an information system that more closely meets the needs of the business. In OUM, this principle is extended to the execution of the method processes themselves. Project managers and practitioners are encouraged to scale OUM to be fit-for-purpose for a given situation. It is rarely appropriate to execute every activity within OUM. OUM provides guidance for determining the core set of activities to be executed, the level of detail targeted in those activities and their associated tasks, and the frequency and type of end-user deliverables. The project workplan should be developed from this core. The plan should then be scaled up, rather than tailored down, to the level of discipline appropriate to the identified risks and requirements. Even at the task level, models and work products should be completed only to the level of detail required for them to be fit-for-purpose within the current iteration or, at the project level, to suit the business needs of the enterprise and to meet the contractual obligations that govern the project. OUM provides well-defined templates for many of its tasks. Use of these templates is optional, as determined by the context of the project. Work products can easily be a model in a repository, a prototype, a checklist, a set of application code, or, in situations where a high degree of agility is warranted, simply the tacit knowledge contained in the brain of an analyst or practitioner. For further reading on agility, see Balancing Agility and Discipline: A Guide for the Perplexed.

    Read the article

  • Graduate expectations versus reality

    - by Bobby Tables
    When choosing what we want to study, and what to do with our careers and lives, we all have some expectations of what it is going to be like. Now that I've been in the industry for almost a decade, I've been reflecting a bit on what I thought programming working life was going to be like (back when I was studying Computer Science), and how it's actually turning out. My two biggest shocks (or should I say, broken expectations) by far are the sheer amount of maintenance work involved in software, and the overall lack of professionalism:
    - Maintenance: At uni, we were all told that the majority of software work is maintenance of existing systems, so I knew to expect this in the abstract. But I never imagined exactly how overwhelming it would turn out to be. Perhaps it's something I mentally glazed over, hoping I'd be building cool new stuff from scratch a lot more. But it really is the case that most jobs are overwhelmingly maintenance-, bug-fixing-, and support-oriented.
    - Lack of professionalism: At uni, I always had the impression that commercial software work is very process-oriented and stringently engineered. I had images of ISO processes, reams of technical documentation, every feature and bug being strictly documented, and a generally professional environment. It came as a huge shock to realise that most software companies operate no differently from a team of students working on a large semester-long project. And I've worked in both the small agile hack shop and the medium-sized corporate enterprise. While I wouldn't say that it's always been outright "unprofessional", it definitely feels like the software industry (on the whole) is far from the strong engineering discipline that I expected it to be.
    Has anyone else had similar experiences? What are the ways in which your expectations of what our profession would be like differed from the reality?

    Read the article

  • the OpenJDK group at Oracle is growing

    - by john.rose
    The OpenJDK software development team at Oracle is hiring. To get an idea of what we’re looking for, go to the Oracle recruitment portal and enter the Keywords “Java Platform Group” and the Location Keywords “Santa Clara”. (We are a global engineering group based in Santa Clara.) It’s pretty obvious what we are working on; just dive into a public OpenJDK repository or OpenJDK mailing list. Here is a typical job description from the current crop of requisitions: The Java Platform group is looking for an experienced, passionate and highly-motivated Software Engineer to join our world class development effort. Our team is responsible for delivering the Java Virtual Machine that is used by millions of developers. We are looking for a development engineer with a strong technical background and thorough understanding of the Java Virtual Machine, Java execution runtime, classloading, garbage collection, JIT compiler, serviceability and a desire to drive innovations. As a member of the software engineering division, you will take an active role in the definition and evolution of standard practices and procedures. You will be responsible for defining and developing software for tasks associated with the developing, designing and debugging of software applications or operating systems. Work is non-routine and very complex, involving the application of advanced technical/business skills in area of specialization. Leading contributor individually and as a team member, providing direction and mentoring to others. BS or MS degree or equivalent experience relevant to functional area. 7 years of software engineering or related experience.

    Read the article

  • An adequate message authentication code for REST

    - by Andras Zoltan
    My REST service currently uses SCRAM authentication to issue tokens for callers and users. We have the ability to revoke caller privileges and ban IPs, as well as impose quotas on any type of request. One thing that I haven't implemented, however, is a MAC for requests. As I've thought about it more, I think this is needed for some requests, because otherwise tokens can be stolen, and before we identify this and deactivate the associated caller account, some damage could be done to our user accounts. In many systems the MAC is generated from the body or query string of the request; however, this is difficult to implement here, as I'm using the ASP.NET Web API and don't want to read the body twice. Equally importantly, I want to keep it simple for callers to access the service. So what I'm thinking is to have a MAC calculated over: the URL (possibly minus the query string); the verb; the request IP (potentially a barrier on some mobile devices, though); and the UTC date and time when the client issues the request. For the last one I would have the client send that string in a request header, of course, and I can use it to decide whether the request is 'fresh' enough. My thinking is that whilst this doesn't prevent message-body tampering, it does prevent a malicious third party from later using a captured request as a template for different requests. I believe only the most aggressive man-in-the-middle attack would be able to subvert this, and I don't think our services offer any information or ability valuable enough to warrant that. The services will use SSL as well, for the sensitive stuff. And if I do this, then I'll be using HMAC-SHA-256 and issuing private keys for the HMAC appropriately. Does this sound like enough? Have I missed anything? I don't think I'm a beginner when it comes to security, but when working on it I am always shrouded in doubt, so I appreciate having this community to call upon!
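    For what it's worth, the scheme described above is straightforward to prototype; a minimal sketch in Python (the canonical string layout and header handling here are hypothetical choices, not a prescribed format):

        import datetime
        import hashlib
        import hmac

        def compute_mac(secret: bytes, verb: str, url: str, client_ip: str, timestamp: str) -> str:
            # Canonical string: verb, URL minus query string, client IP, UTC timestamp
            message = "\n".join([verb.upper(), url.split("?")[0], client_ip, timestamp])
            return hmac.new(secret, message.encode("utf-8"), hashlib.sha256).hexdigest()

        # Client side: compute the MAC and send it plus the timestamp as request headers
        now = datetime.datetime.utcnow().strftime("%Y-%m-%dT%H:%M:%SZ")
        mac = compute_mac(b"caller-private-key", "GET",
                          "https://api.example.com/users/42", "203.0.113.7", now)

        # Server side: recompute and compare in constant time; a freshness check on
        # the timestamp (e.g. reject anything older than a few minutes) goes alongside
        def verify(secret: bytes, verb: str, url: str, client_ip: str,
                   timestamp: str, received: str) -> bool:
            expected = compute_mac(secret, verb, url, client_ip, timestamp)
            return hmac.compare_digest(expected, received)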

    Read the article

  • SOA Starting Point: Methods for Service Identification and Definition

    As more and more companies incorporate a service-oriented architectural design approach into their existing enterprise systems, the need arises for a standardized integration technology. One common technology used by companies is an Enterprise Service Bus (ESB). An ESB, as defined by Progress Software, connects and mediates all communications and interactions between services. In essence, an ESB is a form of middleware that allows services to communicate with one another regardless of framework, environment, or location. With the emergence of the ESB, new emphasis is being placed on approaches for determining which Web services should be built, and in what order. In May 2011, SOA Magazine published an article that identified ten common methods for identifying and defining services.
    SOA's ten common methods for service identification and definition:
    1. Business Process Decomposition
    2. Business Functions
    3. Business Entity Objects
    4. Ownership and Responsibility
    5. Goal-Driven
    6. Component-Based
    7. Existing Supply (Bottom-Up)
    8. Front-Office Application Usage Analysis
    9. Infrastructure
    10. Non-Functional Requirements
    Each of these methods has various pros and cons regarding its use within the design process. I personally feel that multiple methodologies should be used during a design process in order to accurately define a design for a system or enterprise system. I like to create a custom cocktail derived from combining these methodologies, ensuring that my design fits the project's and business's needs while still following development standards and guidelines. Of these ten methods, I am particularly fond of Business Process Decomposition, Business Functions, Goal-Driven, and Component-Based, and I routinely use them in my designs.
    Works Cited:
    Hubbers, J.-W., Ligthart, A., & Terlouw, L. (2007, 12 10). Ten Ways to Identify Services. Retrieved from SOA Magazine: http://www.soamag.com/I13/1207-1.php
    Progress.com. (2011, 10 30). ESB Architecture and Lifecycle Definition. Retrieved from Progress.com: http://web.progress.com/en/esb-architecture-lifecycle-definition.html

    Read the article
