Search Results

Search found 5462 results on 219 pages for 'continue'.

Page 89 of 219

  • Best practice with branching source code and application lifecycle

    - by Toni Frankola
    We are a small ISV shop and we usually ship a new version of our products every month. We use Subversion as our code repository and Visual Studio 2010 as our IDE. I am aware that a lot of people advocate Mercurial and other distributed source control systems, but at this point I do not see how we could benefit from them, though I might be wrong. Our main problem is keeping branches and the main trunk in sync. Here is how we do things today: (1) release the new version (which automatically creates a tag in Subversion); (2) continue working on the main trunk that will be released next month. The cycle repeats every month and works perfectly. The problem arises when an urgent service release needs to go out. We cannot release it from the main trunk (2), as it is under heavy development and not stable enough to be released urgently. In such cases we do the following: create a branch from the tag we created in step (1), fix the bug, test and release, then push the change back to the main trunk (if applicable). Our biggest problem is merging these two (the branch with the main trunk). In most cases we cannot rely on automatic merging because, for example: a lot of changes have been made to the main trunk; merging complex files (such as Visual Studio XML files) does not work very well; or another developer / team made changes you do not understand and cannot simply merge. So what do you think is the best practice for keeping these two versions (branch and main) in sync? What do you do?
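
    For illustration, the hotfix cycle described above might be scripted against Subversion roughly as follows (a sketch only; the repository URL, tag, branch names and revision number are hypothetical):

        import subprocess

        # Hypothetical repository layout; adjust to your own server and paths.
        REPO = "https://svn.example.com/product"
        TAG = REPO + "/tags/2010-04"                  # tag created by the monthly release (step 1)
        HOTFIX = REPO + "/branches/2010-04-hotfix1"   # service-release branch

        def svn(*args):
            """Run an svn command and fail loudly if it does not succeed."""
            subprocess.run(["svn"] + list(args), check=True)

        # Create the service-release branch from the release tag.
        svn("copy", TAG, HOTFIX, "-m", "Branch for urgent service release")

        # ... fix, test and release from a working copy of HOTFIX ...

        # Cherry-pick the single hotfix revision back into a trunk working copy;
        # merging one revision at a time keeps the merge small and reviewable.
        svn("merge", "-c", "12345", HOTFIX, "trunk-working-copy")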

    Read the article

  • Cannot install extensions required for GNOME Shell themes

    - by Soham Chowdhury
    I keep getting this output: soham@fortress:~$ sudo apt-get install gnome-shell-extensions gnome-tweak-tool Reading package lists... Done Building dependency tree Reading state information... Done gnome-tweak-tool is already the newest version. The following NEW packages will be installed: gnome-shell-extensions 0 upgraded, 1 newly installed, 0 to remove and 43 not upgraded. 1 not fully installed or removed. Need to get 0 B/121 kB of archives. After this operation, 849 kB of additional disk space will be used. Do you want to continue [Y/n]? y (Reading database ... 179291 files and directories currently installed.) Unpacking gnome-shell-extensions (from .../gnome-shell-extensions_3.4.1~git20120508.dfd7191a-0ubuntu1~12.04~ricotz0_all.deb) ... dpkg: error processing /var/cache/apt/archives/gnome-shell-extensions_3.4.1~git20120508.dfd7191a-0ubuntu1~12.04~ricotz0_all.deb (--unpack): trying to overwrite '/usr/share/locale/lv/LC_MESSAGES/gnome-shell-extensions.mo', which is also in package gnome-shell-extensions-common 3.2.0-0ubuntu1~oneiric1 No apport report written because MaxReports is reached already dpkg-deb: error: subprocess paste was killed by signal (Broken pipe) Errors were encountered while processing: /var/cache/apt/archives/gnome-shell-extensions_3.4.1~git20120508.dfd7191a-0ubuntu1~12.04~ricotz0_all.deb E: Sub-process /usr/bin/dpkg returned an error code (1) Update: Fixed that. Now GNOME Tweak Tool shows me an exclamation mark beside the extension enable button, saying "Extension doesn't support shell version". My GNOME shell is already the latest version. Help!

    Read the article

  • Is there a pattern or best practice for passing a reference type to multiple classes vs a static class?

    - by Dave
    My .NET application creates HTML files, and the structure looks roughly like this:

        variable myData
        BuildHomePage()
            variable graph = new BuildGraphPage(myData)
            variable table = BuildTablePage(myData)

    BuildGraphPage and BuildTablePage both require access to the data, the myData object. In the above example, I've passed the myData object to two constructors. This is what I'm doing now, in my current project. The myData object and its properties are all read-only. The problem is that the number of pages which will require this object has grown. In the real project there are currently 4, but the new spec calls for about 20. Passing this object to the constructor of each new object and assigning it to a field is a little time consuming, but not a hardship! This poses the question of whether it is better practice to continue as I have, or to refactor and create a static class for myData which can be referenced from anywhere in my project. I guess my ability to use Google is poor, because I did try to find an appropriate pattern (this type of design must be commonplace), but my search returned nothing. Is there a pattern which is suited, or do best practices lean towards one implementation over the other?
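
    For illustration, the two alternatives being weighed might look roughly like this (a sketch only, in Python rather than .NET, purely to show the shape of each option; all class and member names are invented):

        # Option 1: constructor injection - the read-only data is passed in explicitly.
        # More typing, but the dependency stays visible and each builder is easy to test.
        class GraphPage:
            def __init__(self, data):
                self._data = data              # assumed immutable / read-only

            def build(self):
                return "<graph page for %s>" % self._data["title"]

        # Option 2: a static-style holder - less wiring, but the dependency is hidden
        # and every consumer is coupled to global state, which is harder to substitute in tests.
        class SiteData:
            current = None                     # set once at startup, read everywhere

        class TablePage:
            def build(self):
                return "<table page for %s>" % SiteData.current["title"]

        def build_home_page(my_data):
            SiteData.current = my_data
            pages = [GraphPage(my_data), TablePage()]   # option 1 and option 2 side by side
            return [page.build() for page in pages]

        print(build_home_page({"title": "monthly report"}))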

    Read the article

  • Programming as a minor

    - by Tomas Cokis
    Hello everyone! I've never asked a question here at Programmers, and for reasons which will become obvious later I've never answered one here either, but I do poke around in short bursts. Anyway, I'm 15 right now, and I've been programming in C++ for 4 years, just working on my own projects that aim so high they will never be finished. I've been working on a single project for the last year, and every 3 months I add a new system to it. It might be a value tabling, directory-enabled log system, or a render system, or a class to load up XML files; whatever it is, I don't mind too much that the overall project (a 3D engine) isn't ever going to get finished, I just get some satisfaction from getting what I have done building and running. I don't know what I want to do when I grow up, although I suspect I'll go into some form of engineering, but I was interested in knowing, if I do choose to go into a career as a developer, what kind of material I could look at to push myself and gain experience that might help my career later. I'm not talking about books in particular; I'm more interested in subject areas that will get me access to good job opportunities, or that will give me a hand up if I do computer science and software related courses at uni. One of the things I was thinking of doing was designing some of the logic gate components of a small computer, which I started briefly over the holidays, working out integer addition, subtraction and multiplication. That kind of stuff interests me, but is it really useful, or more useful than just more programming? But anyway, any advice? Should I continue on my perpetual 3D engine? Are there any other projects or particular accomplishments that would help my education? Perhaps I should mention that I live in Perth, Australia, so local software companies are likely to be scarcer than usual.

    Read the article

  • Issue 55 - Skin Object Tokens, Optimized Control Panel, OWS Validation and Security, RAD

    April 2010. Welcome to Issue 55 of DNN Creative Magazine. In this issue we focus on the new Skin Object token method introduced in DotNetNuke 5 for adding tokens into a DotNetNuke skin. A Skin Object Token is a web user control which covers skin elements such as the logo, menu, search, login links, date, copyright, languages, links, banners, privacy, terms of use, etc. Following this we demonstrate how to install and use two advanced DotNetNuke admin control panels which are available for free from Oliver Hine. These control panels provide an optimized version of the admin control panel to improve performance and page load times, as well as a ribbon bar control panel which adds additional features. Next, we continue the Open Web Studio tutorials; this month we demonstrate some very advanced techniques for building a car parts application in Open Web Studio. Throughout the tutorial we cover form input, validation, how to use dependent drop-down lists, populating checkbox lists, and introduce a new concept of data-level security. Data-level security allows you to control which data a user can access within a module. To finish, we have part five of the "How to Build a News Application with DotNetMushroom Rapid Application Developer (RAD)" article, where we demonstrate how to implement paging. This issue comes complete with 14 videos. Skinning: Skin Object Tokens for DotNetNuke 5 (8 videos - 64mins). Free Module: Advanced Optimized Control Panel by Oliver Hine (1 video - 11mins). Module Development Series: Form Validation, Dependent Drop Downs and Data Level Security in OWS (5 videos - 44mins). How to Implement Paging with DotNetMushroom RAD. View issue 55 to download all of the videos in one zip file. DNN Creative Magazine for DotNetNuke Web Designers: covering DotNetNuke module video reviews, video tutorials, mp3 interviews, resources and web design tips for working with DotNetNuke. In 55 issues we have created 563 videos!

    Read the article

  • FoxTales: Behind the Scenes at Fox Software by Kerry Nietz

    Flashbacks from the past! It's truly amazing to discover that software development, from freshman to senior level as well as project management, hasn't changed that much. Kerry Nietz describes his memoir from his final year at college to his first job at Fox Software to 'an early retirement' at Microsoft. This title also brought his other fictional novels to my attention. Once again, here is the review I published on Amazon: Built to last! I have been around in software development for more than a decade now, but honestly I have to admit it is only now that I took the opportunity to read about the history of my one-time primary programming language. In fact, I started with Visual FoxPro 6 back in 1999 and only went as far back as FoxPro for Windows 2.6 during migration projects - long after the stories described in this title. It is really interesting to see how they actually managed to create a great product with such a small team of developers. "Create the best Report Writer in the world, out of only sawdust, bubblegum, and dreams." - That's the best sentence I'm going to quote from this title in the future. An inspiration to achieve the impossible, only by taking small steps. Just begin the journey - one step after the next one. If you fall, stand up and continue to walk. Kerry takes the reader on an amazing trip through almost 4 years working at a small software company in Perrysburg, Ohio, which went from being another 'look-alike' of the mighty Ashton-Tate dBase to the leading force in database development, long before Microsoft Access (project name: Cirrus) was even finished. It survived Borland's Paradox, and even nowadays Visual FoxPro is still in daily use in thousands of companies world-wide. Actually, I'm glad that I had the chance to foster my programming knowledge with Visual FoxPro. After his excellent work in software development, Kerry went for a second career as a writer. I'm looking forward to reading his other titles soon.

    Read the article

  • How to check if the tab page is dirty and prompt the user to save before navigating away using ajaxtoolkit tab control in ASP.NET

    Step 1: Put a hidden field in the UpdatePanel:

        <asp:HiddenField ID="hfIsDirty" runat="server" Value="0" />

    Step 2: Add OnClientActiveTabChanged="ActiveTabChanged" to the AJAX Control Toolkit TabContainer, then copy the following script into the aspx page:

        <script type="text/javascript">
            // Trigger a server-side postback for the tab container
            function ActiveTabChanged(sender, e) {
                __doPostBack('<%= tcBaseline.ClientID %>', '');
            }
            // Sets the dirty flag if the page is dirty
            function setDirty() {
                var hf = document.getElementById("<%=hfIsDirty.ClientID%>");
                if (hf != null)
                    hf.value = 1;
            }
            // Resets the dirty flag after save
            function clearDirty() {
                var hf = document.getElementById("<%=hfIsDirty.ClientID%>");
                hf.value = 0;
            }
            function showMessage() { return "page is dirty" }
            function setControlChange() {
                if (typeof (event.srcElement) != 'undefined')
                { event.srcElement.onchange = setDirty; }
            }
            function checkDirty() {
                var tc = document.getElementById("<%=tcBaseline.ClientID%>");
                var hf = document.getElementById("<%=hfIsDirty.ClientID%>");
                if (hf.value == "1") {
                    var conf = confirm("Do you want to lose unsaved changes? Press Cancel to stay on the page or OK to continue.");
                    if (conf) {
                        clearDirty();
                        return true;
                    }
                    else {
                        var e = window.event;
                        e.cancelBubble = true;
                        if (e.stopPropagation) e.stopPropagation();
                        return false;
                    }
                }
                else
                    return true;
            }
            document.body.onclick = setControlChange;
            document.body.onkeyup = setControlChange;
            var onBeforeUnloadFired = false;
            // Function to reset the above flag.
            function resetOnBeforeUnloadFired() {
                onBeforeUnloadFired = false;
            }
            function doBeforeUnload() {
                var hf = document.getElementById("<%=hfIsDirty.ClientID%>");
                // If this function has not been run before...
                if (!onBeforeUnloadFired) {
                    // Prevent this function from being run twice in succession.
                    onBeforeUnloadFired = true;
                    // If the form is dirty...
                    if (hf.value == "1") {
                        event.returnValue = "If you continue you will lose any changes that you have made to this record.";
                    }
                }
                window.setTimeout("resetOnBeforeUnloadFired()", 1000);
            }
            if (window.body) {
                window.body.onbeforeunload = doBeforeUnload;
            }
            else
                window.onbeforeunload = doBeforeUnload;
        </script>

    Step 3: Here is how the tab control should look:

        <asp:UpdatePanel ID="upTab" runat="server" UpdateMode="conditional">
            <ContentTemplate>
                <ajaxtoolkit:TabContainer ID="tcBaseline" runat="server" Height="400px" OnClientActiveTabChanged="ActiveTabChanged">
                    <ajaxtoolkit:TabPanel ID="tpPersonalInformation" runat="server">
                        <HeaderTemplate>
                            <asp:Label ID="lblPITab" runat="server" Text="<%$ Resources:Resources, Baseline_Tab_PersonalInformation %>"
                                onclick="checkDirty();"></asp:Label>
                        </HeaderTemplate>
                        <ContentTemplate>
                            <asp:PlaceHolder ID="PlaceHolder1" runat="server"></asp:PlaceHolder>
                        </ContentTemplate>
                    </ajaxtoolkit:TabPanel>

    Read the article

  • Can't boot Ubuntu 12.04 from external Hard Drive using Mac

    - by Catgirl the Crazy
    Recently, I upgraded the RAM and hard drive on my Early 2008 MacBook to improve its performance. Rather than throw away the old hard drive, I bought an enclosure to turn it into an external hard drive, and, since all the data had been migrated to my new drive, I decided to install Ubuntu on it for funsies (note: I am a near-total Ubuntu n00b). My first attempt to install Ubuntu didn't work (it gave me errors about not being able to find the BIOS or something), but my second attempt finished successfully (I can't remember what, if anything, I did differently). However, when I plug the external drive into my MacBook, it gives me a message saying it can't read the disk. Moreover, when I go into the Startup Manager (i.e. what you get when you turn on the MacBook while holding the Option key), the external drive is not one of the available startup disks. I thought this might be because I have an older MacBook, so I tried booting it with my mom's Late 2011 MacBook, and got the same results. Then I tried booting it through my dad's Dell laptop that runs Windows 7, and that time it worked. This is really counterintuitive to me, since the hard drive originally came from a MacBook, so if anything you'd think it would be less compatible with the Windows laptop than with the MacBook. In case it helps, here's a link to a picture of how I set up the partition table while doing the install (not shown there is the fact that I checked the "Format?" box next to the /boot partition, since it gave me a warning when I tried to continue the installation without doing so). Anyone have any clue at all? If it helps, the hard drive I'm using is a 120GB 5400-rpm Serial ATA hard disk drive.

    Read the article

  • SFML programs fails to debug with glslDevil

    - by Zhen
    I'm testing the glslDevil debugger with a simple (and working) SFML application on Linux with NVidia hardware, but it always fails at the window creation step:

        W! Program Start
        | glXGetConfig(0x86a50b0, 0x86acef8, 4, 0xbf8228c4)
        | glXGetConfig(0x86a50b0, 0x86acef8, 5, 0xbf8228c8)
        | glXGetConfig(0x86a50b0, 0x86acef8, 8, 0xbf8228cc)
        | glXGetConfig(0x86a50b0, 0x86acef8, 9, 0xbf8228d0)
        | glXGetConfig(0x86a50b0, 0x86acef8, 10, 0xbf8228d4)
        | glXGetConfig(0x86a50b0, 0x86acef8, 11, 0xbf8228d8)
        | glXGetConfig(0x86a50b0, 0x86acef8, 12, 0xbf8228dc)
        | glXGetConfig(0x86a50b0, 0x86acef8, 13, 0xbf8228e0)
        | glXGetConfig(0x86a50b0, 0x86acef8, 100000, 0xbf8228e4)
        | glXGetConfig(0x86a50b0, 0x86acef8, 100001, 0xbf8228e8)
        | glXCreateContext(0x86a50b0, 0x86acef8, (nil), 1)
        E! Child process exited
        W! Program termination forced!

    And the code that fails:

        #include <SFML/Graphics.hpp>
        #define GL_GLEXT_PROTOTYPES 1
        #define GL3_PROTOTYPES 1
        #include <GL/gl.h>
        #include <GL/glu.h>
        #include <GL/glext.h>

        int main(){
            sf::RenderWindow window{ sf::VideoMode(800, 600), "Test SFML+GL" };
            bool running = true;
            while( running ){
                sf::Event event;
                while( window.pollEvent(event) ){
                    if( event.type == sf::Event::Closed ){
                        running = false;
                    }else if(event.type == sf::Event::Resized){
                        glViewport(0, 0, event.size.width, event.size.height);
                    }
                }
                window.display();
            }
            return 0;
        }

    Is it possible to solve this problem, or to work around it so I can continue using glslDevil?

    Read the article

  • WebLogic Application Server: free for developers! by Bruno Borges

    - by JuergenKress
    Great news! Oracle WebLogic Server is now free for developers! What does this mean for you? That you as a developer are permitted to "[...] deploy the programs only on your single developer desktop computer (of any type, including physical, virtual or remote virtual), to be used and accessed by only (1) named developer." But the most interesting part of the license change is this one: "You may continue to develop, test, prototype and demonstrate your application with the programs under this license after you have deployed the application for any internal data processing, commercial or production purposes" (read the full license agreement here). If you want to take advantage of this licensing change and start developing Java EE applications with the #1 application server in the world, read the previous post, How To Install WebLogic Zip on Linux! WebLogic Partner Community: for regular information, become a member of the WebLogic Partner Community; please visit http://www.oracle.com/partners/goto/wls-emea (OPN account required). If you need support with your account, please contact the Oracle Partner Business Center.

    Read the article

  • Hot fix published for TFS2010 upgrade issues

    - by jehan
    Microsoft has released a hotfix for issues that were identified after the migration of TFS 2005/TFS 2008 servers to TFS 2010. The issues are related to merging and labels:

    · Labels that were created before the upgrade are entirely empty. Labels could also have incorrect contents.
    · The merge wizard in Visual Studio does not display all valid merge targets for a given source path/branch.
    · During merging, merge candidates are shown for changes that were already merged prior to the upgrade.

    If you have not yet upgraded to TFS 2010, the hotfix is now available and it is highly recommended that you apply it before configuring your team project collections. Because this hotfix applies to the upgrade of version control content, it must be applied after TFS 2010 setup is complete, but before configuration is started. At the end of the setup experience, the Success screen is shown, indicating the completion of the installation. Normally, users will continue on to the configuration part, but in this case the user needs to cancel it by un-checking the "Launch Team Foundation Server Configuration Tool" box, which will enable the Cancel button. After exiting setup, the hotfix executable can be run to update the upgrade steps. Once the hotfix is installed, the TFS Configuration Wizard will need to be re-launched from the Start Menu to complete the upgrade process. The hotfix has been published on MSDN Code Gallery - you can find it here: http://code.msdn.microsoft.com/KB2135068. If you have already upgraded to TFS 2010 and are facing any of the above issues, then check out this KB article for the resolution: http://support.microsoft.com/kb/2193796/en-us

    Read the article

  • Craftsmanship Tour Day 1: Didit Long Island

    - by Liam McLennan
    On Monday I was at Didit for my first ever craftsmanship visit. Didit seems to occupy a good part of a non-descript building in Rockville Centre, Long Island. Since I had arrived early from Seattle I had some time to kill, so I stopped at the Rockville Diner on the corner of N Park Ave and Sunrise Hwy. I thoroughly enjoyed the pancakes and the friendly service. After walking to the Didit office I met Rik Dryfoos, the Didit Engineering Manager who organised my visit, and got an introduction to Didit and the work they are doing. I spent the morning in the room shared by the Didit developers, who are working on some fascinating deep engineering problems. After lunch at a local Thai place I set up a webcam to record an interview with Rik and Matt Roman (Didit VP of Engineering). I had a lot of trouble with the webcam, including losing several minutes of conversation, but in the end I was very happy with the result. Here are the full interviews with Rik and Matt: Interview with Rik Dryfoos; Interview with Matt Roman. We had a great chat, much of which is captured in the recording. It was such a great conversation that I almost missed my train to Manhattan. I'm sure Didit will continue to do well with such a dedicated and enthusiastic team. I sincerely thank them for hosting me for the day. If you are looking for a true agile environment and the opportunity to work with a high quality team then you should talk to Didit.

    Read the article

  • fern-wifi-cracker "Exec format error" breaks packaging system

    - by cunix
    root@cunix:/home/cunix# sudo apt-get remove fern-wifi-cracker Reading package lists... Done Building dependency tree Reading state information... Done The following packages were automatically installed and are no longer required: libqt4-test libqt4-sql-mysql mysql-common libqt4-xmlpatterns libqt4-help python-qt4 python-sip libqt4-sql-sqlite libqt4-sql macchanger libqt4-designer libmysqlclient16 python-scapy libqt4-scripttools Use 'apt-get autoremove' to remove them. The following packages will be REMOVED: fern-wifi-cracker 0 upgraded, 0 newly installed, 1 to remove and 0 not upgraded. After this operation, 3,514kB disk space will be freed. Do you want to continue [Y/n]? y (Reading database ... 167661 files and directories currently installed.) Removing fern-wifi-cracker ... dpkg (subprocess): unable to execute installed pre-removal script (/var/lib/dpkg/info/fern-wifi-cracker.prerm): Exec format error dpkg: error processing fern-wifi-cracker (--remove): subprocess installed pre-removal script returned error exit status 2 Errors were encountered while processing: fern-wifi-cracker E: Sub-process /usr/bin/dpkg returned an error code (1) how to uninstall?
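
    For illustration, one commonly suggested way out of a broken pre-removal script like the one above is to replace it with a no-op and retry the removal (a sketch only, not from the original post; it needs root privileges, and the path is the one reported in the dpkg error output):

        import subprocess

        # Maintainer script path reported in the dpkg error output above.
        prerm = "/var/lib/dpkg/info/fern-wifi-cracker.prerm"

        # Overwrite the broken pre-removal script with a harmless no-op shell script.
        with open(prerm, "w") as f:
            f.write("#!/bin/sh\nexit 0\n")

        # Retry the removal now that the prerm script can execute cleanly.
        subprocess.run(["apt-get", "remove", "-y", "fern-wifi-cracker"], check=True)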

    Read the article

  • Oracle's SPARC T4, 007 Style

    - by Kristin Rose
    The name's 4, T4, and this powerhouse travels hand in hand with its good friend SPARC. About six years ago, on-chip encryption acceleration first shipped in a commercial system, the SPARC T1. Today, thanks to Oracle SPARC's innovative leadership in on-chip encryption acceleration, complex cryptographic computation has rapidly evolved. Customers can now have security with performance, because we, my friend, are in the Age of Big Data. If you need some high-speed action in your life, listen here. The SPARC T4 systems offer customers much more value for applications than just increased performance through the cross-sell opportunity. They do this by enabling partners to integrate their own applications with Oracle's SPARC T4 servers for cloud deployments, and by providing direct business benefits, such as security, performance and optimization, that supersede the commodity approach to data center computing. As companies continue down this complex path of big data, eCommerce and mobility, the need to provide better and more in-depth security is more prominent than ever. Oracle's SPARC T4 processor allows customers to deliver the highest levels of application security, as well as the necessary level of performance, without added cost and complexity. To learn more about the value of SPARC T4, check out a more in-depth blog here. For more on the SPARC T4 family of products, click here. Encryption Lives Another Day, The OPN Communications Team

    Read the article

  • Duke's Choice Award Goes Regional

    - by Tori Wieldt
    We are pleased to announce the expansion of the Duke's Choice Award program to include regional awards in conjunction with each international JavaOne conference.  The expanded Duke's Choice Award program celebrates Java innovation happening within specific regions and provides an opportunity to recognize winners locally. Regions include Latin America (LAD), Europe Africa Middle East (EMEA), and Asia.  The global program will  continue in association with the flagship JavaOne conference.  First up: Duke's Choice Awards LAD.  Three winners will be announced on stage during JavaOne Latin America December 4th to 6th and in the Jan/Feb issue of Java Magazine.   Submit your nominations now through October 30th!  Nominations are accepted from anyone, including Oracle employees,  for compelling uses of Java technology or community involvement.  Duke's Choice Awards LAD judges include community members Yara Senger (Brazil) and Alexis Lopez (Colombia). In keeping with the 10 year tradition of the Duke's Choice Award program, the most important ingredient is innovation. Let's recognize and celebrate the innovation that Java delivers within Latin America! www.java.net/dukeschoiceLAD To see the 2012 global Duke's Choice Awards winners now, subscribe to Java Magazine

    Read the article

  • How Can You Get More Productive In Life Sciences Sales?

    - by charles.knapp
    Only half of all doctors will meet with pharmaceutical sales reps, and that percentage continues to decrease. Furthermore, when reps are granted an opportunity to share information, the average interaction is only about a minute and a half. Concurrently, call quotas continue to increase. Why does this matter? Sales reps need to spend less time on traditional planning and after-call reporting, more time making calls, and make more productive use of short presentation times. Fortunately for sales reps, Oracle offers the first life sciences CRM that is designed to double sales time and halve reporting time. In particular, our new Life Sciences Edition Offline Client is designed so that you can actually turn the screen around, so that your CRM is useful for presentations and not just reporting, whether you are connected to the cloud or working offline, such as in restricted clinical environments. Watch Piers Evans, Industry Strategy Director, show what this looks like in the day of a typical pharmaceutical sales representative.

    Read the article

  • Project Euler 12: (Iron)Python

    - by Ben Griswold
    In my attempt to learn (IronPython) out in the open, here's my solution for Project Euler Problem 12. As always, any feedback is welcome.

        # Euler 12
        # http://projecteuler.net/index.php?section=problems&id=12
        # The sequence of triangle numbers is generated by adding
        # the natural numbers. So the 7th triangle number would be
        # 1 + 2 + 3 + 4 + 5 + 6 + 7 = 28. The first ten terms
        # would be:
        # 1, 3, 6, 10, 15, 21, 28, 36, 45, 55, ...
        # Let us list the factors of the first seven triangle
        # numbers:
        # 1: 1
        # 3: 1,3
        # 6: 1,2,3,6
        # 10: 1,2,5,10
        # 15: 1,3,5,15
        # 21: 1,3,7,21
        # 28: 1,2,4,7,14,28
        # We can see that 28 is the first triangle number to have
        # over five divisors. What is the value of the first
        # triangle number to have over five hundred divisors?

        import time
        start = time.time()

        from math import sqrt

        def divisor_count(x):
            count = 2  # itself and 1
            for i in xrange(2, int(sqrt(x)) + 1):
                if ((x % i) == 0):
                    if (i != sqrt(x)):
                        count += 2
                    else:
                        count += 1
            return count

        def triangle_generator():
            i = 1
            while True:
                yield int(0.5 * i * (i + 1))
                i += 1

        triangles = triangle_generator()
        answer = 0
        while True:
            num = triangles.next()
            if (divisor_count(num) >= 501):
                answer = num
                break

        print answer
        print "Elapsed Time:", (time.time() - start) * 1000, "millisecs"
        a = raw_input('Press return to continue')
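
    One possible piece of feedback (a sketch only, not part of the original solution): since n and n + 1 are coprime, the divisor count of the triangle number n(n + 1)/2 is the product of the divisor counts of the two coprime halves, so trial division only ever runs on numbers near n rather than near the full triangle number:

        def num_divisors(x):
            # Count divisors of x by trial division up to sqrt(x).
            count, i = 0, 1
            while i * i <= x:
                if x % i == 0:
                    count += 2 if i * i != x else 1
                i += 1
            return count

        n = 1
        while True:
            # Split the triangle number n*(n+1)/2 into two coprime factors,
            # halving whichever of n and n+1 is even.
            if n % 2 == 0:
                d = num_divisors(n // 2) * num_divisors(n + 1)
            else:
                d = num_divisors(n) * num_divisors((n + 1) // 2)
            if d > 500:
                print(n * (n + 1) // 2)
                break
            n += 1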

    Read the article

  • Online Media Daily: Oracle Takes Social Marketing Seriously

    - by Richard Lefebvre
    In the article published on Nov 12, 2012 and titled "Oracle Integrates Social Marketing Into Enterprise To Gain Marketing Revs," Online Media Daily explores Oracle's approach to social marketing. The publication says that Oracle is focused on showing marketers how to integrate social data into corporate business processes and how to "socialize" the corporate world. The article goes on to state: "Enterprise software companies like Oracle, SAP, IBM, Salesforce and Microsoft have been slowly building up an expertise in social marketing to integrate the data into traditional enterprise resource planning, and customer relationship management tools into social marketing tools. Meg Bear, VP of cloud social platform at Oracle, sees the integration with ERP systems as a differentiator for the company. Oracle Social Relationship Management launched last month. It integrates social data into traditional enterprise applications like Oracle Fusion Marketing, Oracle Fusion Sales Catalog, Oracle ATG Web Commerce and Oracle ERP." (Read more: http://www.mediapost.com/publications/article/187096/oracle-integrates-social-marketing-into-enterprise.html#ixzz2CPMZ1w3D) The post goes on to quote a Forrester analyst stating the following: "'There's room for any process-driven application to run more efficiently, especially if they're socially enabled,' said Rob Koplowitz, VP and principal analyst at Forrester Research. 'It takes the human part of the process not generally captured today to provide better access to content, information and collective actions.' Koplowitz said several acquisitions support Oracle's long-term vision: to layer social on top of other enterprise apps, like its ERP platform." With many great acquisitions under our belt and organically grown social tools, the market recognizes that Oracle is poised to seize the moment in socially enabled business apps. Continue reading the full article here.

    Read the article

  • New Enhancements for InnoDB Memcached

    - by Calvin Sun
    In MySQL 5.6, we continued our development on InnoDB Memcached and completed a few widely desired features that make InnoDB Memcached competitive in more scenarios. Notably, they are: 1) support for multiple table mappings; 2) a background thread to auto-commit long-running transactions; 3) enhancements in binlog performance. Let's go over each of these features one by one. In the last section, we will go over a couple of internally performed performance tests.

    Support for multiple table mappings

    In our earlier release, all InnoDB Memcached operations were mapped to a single InnoDB table. In real life, users might want to use the InnoDB Memcached feature on different tables. Thus, being able to access different tables at run time, and having different mappings for different connections, becomes a very desirable feature. In this GA release, we allow users to do both. We will discuss the key concepts and key steps in using this feature.

    1) "mapping name" in the "get" and "set" commands

    In order to allow InnoDB Memcached to map to a new table, the user (DBA) is still required to "pre-register" the table(s) in the InnoDB Memcached "containers" table (there is a security consideration behind this requirement). If you would like to know more about the "containers" table, please refer to my earlier blogs on blogs.innodb.com. Once registered, InnoDB Memcached will be able to look up such tables when they are referred to. Each registered table has a unique "registration name" (or mapping_name) corresponding to the "name" field in the "containers" table. To access these tables, the user includes the "registration name" in their get or set commands, in the form "get @@new_mapping_name.key"; the prefix "@@" is required to signal a mapped-table change. The key and the "mapping name" are separated by a configurable delimiter, which by default is ".". So the syntax is:

        get [@@mapping_name.]key_name
        set [@@mapping_name.]key_name

    or

        get @@mapping_name
        set @@mapping_name

    Here is an example. Let's set up three tables in the "containers" table. The first maps to the InnoDB table "test/demo_test" with mapping name "setup_1":

        INSERT INTO containers VALUES ("setup_1", "test", "demo_test", "c1", "c2", "c3", "c4", "c5", "PRIMARY");

    Similarly, we set up table mappings for table "test/new_demo" with name "setup_2" and for table "my_database/my_demo" with name "setup_3":

        INSERT INTO containers VALUES ("setup_2", "test", "new_demo", "c1", "c2", "c3", "c4", "c5", "secondary_index_x");
        INSERT INTO containers VALUES ("setup_3", "my_database", "my_demo", "c1", "c2", "c3", "c4", "c5", "idx");

    To switch to table "my_database/my_demo" and get the value corresponding to "key_a", the user issues:

        get @@setup_3.key_a   (this also outputs the value corresponding to key "key_a")

    or simply:

        get @@setup_3

    Once this is done, the connection switches to the "my_database/my_demo" table until another table-mapping switch is requested, so it can continue issuing regular commands such as:

        get key_b
        set key_c 0 0 7

    These DMLs will all be directed to the "my_database/my_demo" table. This also implies that different connections can have different bindings (to different tables).

    2) Delimiter

    For the delimiter "." that separates the "mapping name" and the key value, we also added a configuration option in the "config_options" system table with the name "table_map_delimiter":

        INSERT INTO config_options VALUES ("table_map_delimiter", ".");

    So if a user wants a different delimiter, they can change it in the config_options table.

    3) Default mapping

    Once we have multiple table mappings, there should always be a "default" mapping. For this, we decided that if there exists a mapping name of "default", it will be chosen as the default mapping; otherwise, the first row of the containers table is chosen as the default setting. Please note that user tables can be repeated in the "containers" table (for example, when a user wants to access different columns of the same table in different settings), as long as they use different mapping/configuration names in the first column, which is enforced by a unique index.

    4) bind command

    In addition, we extended the protocol and added a bind command; its usage is fairly straightforward. To switch to the "setup_3" mapping above, you simply issue:

        bind setup_3

    This switches the connection's InnoDB table to "my_database/my_demo". In summary, with this feature you can now access different tables from different sessions, and even within a single connection you can query different tables.

    Background thread to auto-commit long-running transactions

    This feature is related to the "batch" concept we discussed in earlier blogs. The "batch" feature allows us to batch read and write operations and commit them only after a certain number of calls. The batch size is controlled by the configuration parameters "daemon_memcached_w_batch_size" and "daemon_memcached_r_batch_size". This can significantly boost performance. However, it also comes with some disadvantages: for example, you will not be able to view "uncommitted" operations from the SQL end unless you set the transaction isolation level to READ UNCOMMITTED, and, in addition, certain row locks are held for extended periods of time, which might reduce concurrency. To deal with this, we introduced a background thread that auto-commits transactions if they have been idle for a certain amount of time (the default is 5 seconds). The background thread wakes up every second, loops through every "connection" opened by Memcached, and checks for idle transactions. If a transaction has been idle longer than the limit and is not being used, it is committed. This limit is configurable by changing "innodb_api_bk_commit_interval"; its default value is 5 seconds, the minimum is 1 second, and the maximum is 1073741824 seconds. With the help of this background thread, you no longer need to worry about long-running uncommitted transactions when daemon_memcached_w_batch_size and daemon_memcached_r_batch_size are set to a large number. This also reduces the number of locks that could be held due to long-running transactions, and thus further increases concurrency.

    Enhancement in binlog performance

    As you may know, binlog operations are not done by the InnoDB storage engine; rather, they are handled in the MySQL layer. In order to support binlog operations through InnoDB Memcached, we have to artificially create some MySQL constructs in order to access the binlog handler APIs. In the previous lab release, for simplicity, we opened and destroyed these MySQL constructs (such as THD) for each operation. This required us to always set the "batch" size to 1 when the binlog is on, no matter what "daemon_memcached_w_batch_size" and "daemon_memcached_r_batch_size" are configured to. This put a big restriction on our ability to scale, and there is also quite a bit of overhead in creating and destroying such constructs that bogs performance down. With this release, we made the necessary changes to keep the MySQL constructs for as long as they are valid for a particular connection, so there are no repeated and redundant open and close (table) calls. Now, even with the binlog option enabled (with innodb_api_enable_binlog), we can still batch transactions with daemon_memcached_w_batch_size and daemon_memcached_r_batch_size, and thus scale write/read performance. Although there is still overhead that keeps InnoDB Memcached from performing as fast as when the binlog is turned off, it is much better compared to the previous release, and we continue to optimize this area to improve performance as much as possible.

    Performance study

    Amerandra of our System QA team conducted some performance studies on queries through our InnoDB Memcached connection and through the plain SQL end, with some interesting results. The test was conducted on a "Linux 2.6.32-300.7.1.el6uek.x86_64 ix86 (64)" machine with 16 GB of memory, Intel Xeon 2.0 GHz X86_64 CPUs (2 CPUs, 4 cores each), and 2 RAID disks (1027 GB, 733.9 GB). The results are described in the following tables.

    Table 1: Performance comparison on Set operations (QPS)

        Connections   5.6.7-RC-Memcached-plugin (memcached-threads=8***)   5.6.7-RC* Set**   X faster
        8             30,000                                                5,600            5.36
        32            59,000                                                13,000           4.54
        128           68,000                                                8,000            8.50
        512           63,000                                                6,800            9.23

        *   mysql-5.6.7-rc-linux2.6-x86_64
        **  The "set" operation, as implemented in InnoDB Memcached, involves a couple of DMLs: it first queries the table to see whether the "key" exists; if it does not, the new key/value pair is inserted, and if it does, the "value" field of the matching row (by key) is updated. So when used in the comparison query above, it is a precompiled stored procedure, and the query just executes that procedure.
        *** added "--daemon_memcached_option=-t8" (the default is 4 threads)

    So we can see that with this "set" query, InnoDB Memcached runs 4.5 to 9 times faster than the MySQL server.

    Table 2: Performance comparison on Get operations (QPS)

        Connections   5.6.7-RC-Memcached-plugin (memcached-threads=8)   5.6.7-RC* Get   X faster
        8             42,000                                             27,000          1.56
        32            101,000                                            55,000          1.83
        128           117,000                                            52,000          2.25
        512           109,000                                            52,000          2.10

    With the "get" query (i.e. the select query), Memcached performs 1.5 to 2 times faster than normal SQL.

    Summary

    In summary, we added several much-desired features to InnoDB Memcached in this release, allowing users to operate on different tables through the Memcached interface. We also now provide a background commit thread to commit long-running idle transactions, allowing users to configure large batch writes/reads without worrying about large numbers of rows being held or uncommitted data not being visible. We also greatly enhanced performance when the binlog is enabled. We will continue making efforts in both performance and functionality to make InnoDB Memcached a good demonstration case for our InnoDB APIs. Jimmy Yang, September 29, 2012
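
    As a rough client-side illustration (a sketch only, not from the original post, using the python-memcached library; the server address and port are assumed defaults, and the mapping and key names are the ones from the example above):

        import memcache

        # Connect to the InnoDB Memcached plugin (default memcached port assumed).
        mc = memcache.Client(["127.0.0.1:11211"])

        # Switch this connection to the "setup_3" mapping (my_database/my_demo)
        # and read the value stored under "key_a".
        print(mc.get("@@setup_3.key_a"))

        # Later commands on the same connection keep using that mapping.
        mc.set("key_c", "new_val")
        print(mc.get("key_b"))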

    Read the article

  • Bad style programming, am I pretending too much?

    - by Luca
    I have come to realize that I work in an office with a quite bad code base. The base library, implemented over years and years, is quite limited, and most of that code is, honestly, horrible. Projects developed in the office are very large. Fine. I could call myself a "perfectionist" (but often I'm not), and I thought about refactoring an application (really a portion of one) which needs a new (complex) feature. But today I realized that it is not possible to refactor that application's modules in a reasonable time (say, 24/26 hours, against the time available for the task, which is 160 hours). I'm talking about (I am a bit ashamed to say) name collisions, large and frequent cut-and-paste code, horrible and misleading naming, makefiles without dependencies (!), application logic spread randomly across many different sources, dead code, variable aliasing, no assertions, no documentation, very long source files, bad/incomplete include file definitions, (this is emblematic!) very frequent extern declarations of variables and functions... and I'm sure I could continue... buffer overflows because of sprintf, indentation (!), spacing, non-existent use of the const modifier. I would say that every source line was written quite randomly when needed, without keeping any design in mind (at least, the obvious one). (Am I in hell?) The problem arises because the application is developed by a colleague of mine. I felt very frustrated. So, I decided to describe the situation to my colleague; in the end, that was a bad idea. He justified himself by saying that "the application was developed in haste, so it is natural that it is written vaguely; you are wasting time thinking about and implementing an elegant implementation"... Am I asking too much of my colleague in wanting readable code that is maintained and documented? Do I expect too much in not wanting to read thousands of lines of code to understand how a particular piece of logic works?

    Read the article

  • Introducing NFakeMail

    - by João Angelo
    Ever had to resort to custom code to control emails sent by an application during integration and/or system testing? If you answered yes, then you should definitely continue reading. NFakeMail makes it easier for developers to do integration/system testing on software that sends emails by providing a fake SMTP server. You'll no longer have to manually validate the email sending process. It's developed in C# and IronPython and targets the .NET 4.0 framework. With NFakeMail you can easily automate the testing of components that rely on sending mail while doing their job. Let's take a look at some sample code. We start with a simple class containing a method that sends emails:

        class Notifier
        {
            public void Notify()
            {
                using (var smtpClient = new SmtpClient("localhost", 10025))
                {
                    smtpClient.Send("[email protected]", "[email protected]", "S1", ".");
                    smtpClient.Send("[email protected]", "[email protected]", "S2", "..");
                }
            }
        }

    Then, to automate the tests for this method, we only need to do the following:

        [Test]
        public void Notify_T001()
        {
            using (var server = new FakeSmtpServer(10025))
            {
                new Notifier().Notify();

                // Verifies two messages are received in the next five seconds
                var messages = server.WaitForMessages(count: 2, timeout: 5000);

                // Verifies the message sender
                Debug.Assert(messages.All(m => m.From.Address == "[email protected]"));
            }
        }

    The created FakeSmtpServer instance will act as a simple SMTP server and intercept the messages sent by the Notifier class. It is even possible to verify some fields of each intercepted message, and by default all intercepted messages are saved to the file system in MIME format.

    Read the article

  • Great Blog Comments

    - by Paul Sorensen
    Just a quick note to let you know that, in the interest of keeping the most useful content available here on the Oracle Certification Blog, we do moderate the comments. We welcome (and encourage) dialog, questions, comments, etc. here on the topics at hand. We'll never 'censor' out a comment just because we don't like it - in fact, this is how we often learn ways in which we can do better. But of course we will filter out the typical list like anyone else: crude/offensive remarks, foul language, references to illegal activity, etc. We will also often redirect any customer-service type inquiries to [email protected], where they can best be handled. Also, if you have a question of a general nature, please research it on the Oracle Certification website first. We often won't respond to questions such as "tell me how to get 11g OCP", as we've already made sure that you have that kind of information available. Now if we've inadvertently 'hidden' something on our site (gulp), then fair enough - please let us know that you're having a hard time finding it and we'll be sure to try and "unbury" it ;-) Additionally, you may have more of an 'opinion' type question, such as "should I do 'x' certification or 'y' certification." For these, we highly recommend checking the Oracle Technology Network (OTN) Certification Forum, where you can engage in peer-to-peer discussions and share techniques, advice and best practices with others in the field. In the meantime, please continue to share your thoughts, ideas, opinions, tech tips, etc. - we look forward to seeing them and passing them along wherever we can! QUICK LINKS: Oracle Certification Website; Email - Customer Service; Oracle Technology Network (OTN) Certification Forum

    Read the article

  • Question No 207630 replied by Marco Braida

    - by kishor
    Question No 207630, replied to by Marco Braida. After your reply I could install Synaptic, then APTonCD, Gdebi and some other applications. I had made an APTonCD disc. I was convinced that Ubuntu 12.04 LTS was working well. I then removed everything (Ubuntu 12.04, Fuduntu, Linux Mint 13) from my hard disk and reinstalled Ubuntu 12.04 LTS. Your suggestion, which worked well earlier, does not work now. I get the following message on the terminal: Reading package lists... Done Building dependency tree Reading state information... Done The following extra packages will be installed: gdebi-core The following NEW packages will be installed: gdebi gdebi-core 0 upgraded, 2 newly installed, 0 to remove and 411 not upgraded. Need to get 0 B/185 kB of archives. After this operation, 1,398 kB of additional disk space will be used. Do you want to continue [Y/n]? y Media change: please insert the disc labeled 'APTonCD for ubuntu precise - amd64 (2012-10-14 10:27) DVD1' in the drive '/media/cdrom/' and press enter Media change: please insert the disc labeled 'APTonCD for ubuntu precise - amd64 (2012-10-14 10:27) DVD1' in the drive '/media/cdrom/' and press enter Media change: please insert the disc labeled 'APTonCD for ubuntu precise - amd64 (2012-10-14 10:27) DVD1' in the drive '/media/cdrom/' and press enter This is for gdebi; a similar message appears on the terminal when I try Synaptic and APTonCD. I had downloaded the files for Synaptic and tried, but without success. Kishor

    Read the article

  • Oracle’s Java Community Outreach Plan

    - by Tori Wieldt
    As the steward of Java, Oracle recognizes the importance and value of the Java community, and the relevant role it plays in keeping Java the largest, most vibrant developer community in the world. In order to increase Oracle's touch with Java developers worldwide, we are shifting our focus from a flagship JavaOne event followed by several regional JavaOne conferences, to a new outreach model which continues the JavaOne flagship event together with a mix of online content, regional Java tours, and participation in regional third-party events. 1. JavaOne. JavaOne remains the premier hub for Java developers, where you are given the opportunity to improve your Java technical skills and interact with other members of the Java community. JavaOne is centered on open collaboration and sharing, and Oracle will continue to invest in JavaOne as a unique stand-alone event for the Java community. Oracle recognizes that many developers cannot attend JavaOne in person; therefore Oracle will share the wealth of unique event material with those developers through a new and easy-to-access online Java program. While online JavaOne content cannot replace the importance of actual face-to-face community/developer engagement and networking, online content does aid in extending the Java technical learning opportunity to a broader collection of developers. 2. Java Developer Day tours. Oracle will run regional Java Developer Days with recognized Java User Groups (JUGs), with participation from Java Evangelists and Java Champions. This allows local, region-specific Java topics to be addressed both by Oracle and by the Java community. In addition, Oracle will deliver more virtual technical content programs to reach developers where an existing JUG may not have a presence. 3. Sponsorship of community-driven regional events/conferences. Oracle also recognizes that improved community dialog and relations are achievable through continued Oracle sponsorship of, and onsite participation at, both established, well-recognized third-party events and new, emerging third-party events. Oracle's ultimate goal is to be an even better steward for Java by reaching more of the Java ecosystem with face-to-face and online community engagements. We look forward to planning tours and events with you, members of the Java community.

    Read the article

  • The Workshop Technique Handbook

    - by llowitz
    The OUM method pack contains a plethora of information, but if you browse through the activities and tasks contained in OUM, you will see very few references to workshops. Consequently, I am often asked whether OUM supports a workshop-type approach. In general, OUM does not prescribe the manner in which tasks should be conducted, as many factors, such as culture, availability of resources, and the potential travel cost of attendees, can influence whether a workshop is appropriate in a given situation. Although workshops are not typically called out in OUM, OUM encourages the project manager to group the OUM tasks in a way that makes sense for the project. OUM considers a workshop to be a technique that can be applied to any OUM task or group of tasks. If a workshop is conducted, it is important to identify the OUM tasks that are executed during the workshop. For example, a "Requirements Gathering Workshop" is quite likely to include the Gathering Business Requirements and Gathering Solution Requirements tasks, and perhaps Specifying Key Structure Definitions. Not only is a workshop approach to conducting the OUM tasks perfectly acceptable, but OUM also provides in-depth guidance on how to maximize the value of your workshops. I strongly encourage you to read the "Workshop Techniques Handbook" included in the OUM Manage Focus Area under Method Resources. The Workshop Techniques Handbook provides valuable information on a variety of workshop approaches and discusses the circumstances in which each type of workshop is most effective. Furthermore, it provides detailed information on how to structure a workshop and tips on facilitating it. You will find guidance on some popular workshop techniques such as brainstorming, setting objectives, prioritizing, and other more specialized techniques such as Value Chain Analysis, SWOT analysis, the Delphi Technique and much more. Workshops can and should be applied to any type of OUM project, whether that project falls within the Envision, Manage or Implementation focus areas. If you typically employ workshops to gather information, walk through a business process, develop a roadmap or validate your understanding with the client, by all means continue utilizing them to conduct the OUM tasks during your project, but first take the time to review the Workshop Techniques Handbook to refresh your knowledge and hone your skills.

    Read the article
