Search Results

Search found 12089 results on 484 pages for 'rule of three'.


  • Gathering all data in single iteration vs using functions for readable code

    - by user828584
    Say I have an array of runners from which I need to find the tallest runner, the fastest runner, and the lightest runner. It seems like the most readable solution would be:

      runners = getRunners();
      tallestRunner = getTallestRunner(runners);
      fastestRunner = getFastestRunner(runners);
      lightestRunner = getLightestRunner(runners);

    ...where each function iterates over the runners and keeps track of the largest height, greatest speed, and lowest weight. Iterating over the array three times, however, doesn't seem like a very good idea. It would instead be better to do:

      int greatestHeight, greatestSpeed, leastWeight;
      Runner tallestRunner, fastestRunner, lightestRunner;
      for(runner in runners){
          if(runner.height > greatestHeight) { greatestHeight = runner.height; tallestRunner = runner; }
          if(runner.speed > ...
      }

    While this isn't too unreadable, it can get messy when there is more logic for each piece of information being extracted in the iteration. What's the middle ground here? How can I use only a single iteration while still keeping the code divided into logical units?
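
    One possible middle ground, sketched below in C# (the Runner type and its members are assumptions for illustration): keep a single pass over the array, but move the "best so far" bookkeeping into a small reusable tracker so that each criterion still reads as its own logical unit.

      using System;
      using System.Collections.Generic;

      class Runner { public double Height, Speed, Weight; }

      // Tracks the item with the best value of one metric seen so far.
      class Best<T>
      {
          private readonly Func<T, double> metric;
          private readonly bool preferSmaller;
          private double bestValue;
          public T Item { get; private set; }

          public Best(Func<T, double> metric, bool preferSmaller = false)
          {
              this.metric = metric;
              this.preferSmaller = preferSmaller;
              bestValue = preferSmaller ? double.MaxValue : double.MinValue;
          }

          public void Consider(T candidate)
          {
              double value = metric(candidate);
              if (preferSmaller ? value < bestValue : value > bestValue)
              {
                  bestValue = value;
                  Item = candidate;
              }
          }
      }

      static class Demo
      {
          static void Main()
          {
              var runners = new List<Runner>
              {
                  new Runner { Height = 180, Speed = 9.5, Weight = 70 },
                  new Runner { Height = 175, Speed = 10.1, Weight = 65 },
              };

              // Each criterion lives in its own tracker (logical unit)...
              var tallest  = new Best<Runner>(r => r.Height);
              var fastest  = new Best<Runner>(r => r.Speed);
              var lightest = new Best<Runner>(r => r.Weight, preferSmaller: true);

              // ...but the array is walked only once.
              foreach (var r in runners)
              {
                  tallest.Consider(r);
                  fastest.Consider(r);
                  lightest.Consider(r);
              }

              Console.WriteLine(tallest.Item.Height);  // 180
          }
      }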

    Read the article

  • SQL SERVER – Answer – Value of Identity Column after TRUNCATE command

    - by pinaldave
    Earlier I had a conversation with a reader that almost gave me a headache. I suggest you read it before continuing this blog post: SQL SERVER – Reseting Identity Values for All Tables. I believe he faced this situation because he did not understand the difference between SQL SERVER – DELETE, TRUNCATE and RESEED Identity, so I wrote a follow-up blog post explaining the difference between them. I asked a small question in the second blog post and I received many interesting comments. Let us go over the question and its answer here one more time. Here is the scenario to set up the puzzle:

    1. Create a table with identity seed = 11
    2. Insert a value and check the seed (it will be 11)
    3. Reseed it to 1
    4. Insert a value and check the seed (it will be 2)
    5. TRUNCATE the table
    6. Insert a value and check the seed (it will be 11)

    Let us see the T-SQL script for the same.

      USE [TempDB]
      GO
      -- Create Table
      CREATE TABLE [dbo].[TestTable](
          [ID] [int] IDENTITY(11,1) NOT NULL,
          [var] [nchar](10) NULL
      ) ON [PRIMARY]
      GO
      -- Build sample data
      INSERT INTO [TestTable] VALUES ('val')
      GO
      -- Select Data
      SELECT * FROM [TestTable]
      GO
      -- Reseed to 1
      DBCC CHECKIDENT ('TestTable', RESEED, 1)
      GO
      -- Build sample data
      INSERT INTO [TestTable] VALUES ('val')
      GO
      -- Select Data
      SELECT * FROM [TestTable]
      GO
      -- Truncate table
      TRUNCATE TABLE [TestTable]
      GO
      -- Build sample data
      INSERT INTO [TestTable] VALUES ('val')
      GO
      -- Select Data
      SELECT * FROM [TestTable]
      GO
      -- Question for you Here
      -- Clean up
      DROP TABLE [TestTable]
      GO

    Now let us see the output of the three SELECT statements: 1) the first SELECT, after creating the table; 2) the second SELECT, after reseeding the table; 3) the third SELECT, after truncating the table. The reason is simple: if the table contains an identity column, the counter for that column is reset to the seed value defined for the column.

    Reference: Pinal Dave (http://blog.sqlauthority.com)

    Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQLServer, T SQL, Technology

    Read the article

  • Podcast Show Notes: Are You Future Proof?

    - by Bob Rhubart
    On September 14, 2012 ZDNet blogger Joe McKendrick published Why IT is a Profession in Flux, a short article in which he makes the observation that "IT professionals are under considerable pressure to deliver more value to the business, versus being good at coding and testing and deploying and integrating." I forwarded that article to my list of Usual Suspects (the nearly 40 people who have participated in the podcast over the last 3 years), along with a suggestion that I wanted to put together a panel discussion to further explore the issue. This podcast is the result. As it happened, three of the people who responded to my query were in San Francisco for Oracle OpenWorld, as was I, so I seized the rare opportunity for a face-to-face conversation. The participants are all Oracle ACE Directors, as well as architects:

    Ron Batra, Director of Cloud Computing at AT&T
    Basheer Khan, Founder, President and CEO at Innowave Technology
    Ronald van Luttikhuizen, Managing Partner at Vennster

    The Conversation

    Listen to Part 1 - Future-Proofing: As powerful forces reshape enterprise IT, your IT and software development skills may not be enough.
    Listen to Part 2 - Survival Strategy: Re-tooling one’s skill set to reflect changes in enterprise IT, including the knowledge to steer stakeholders around the hype to what’s truly valuable.
    Listen to Part 3 - Writing on the Wall: Do the technological trends that are shaping enterprise IT pose any threat to basic software development roles? What opportunities do these changes represent?

    The entire conversation is also available in video format from the OTN YouTube Channel.

    Your Two Cents: What are you doing to future-proof your IT career? Share your thoughts in the comments section.

    Read the article

  • Sync Google Contacts with QuickBooks

    - by dataintegration
    The RSSBus ADO.NET Providers offer an easy way to integrate with different data sources. In this article, we include a fully functional application that can be used to synchronize contacts between Google and QuickBooks. Like our QuickBooks ADO.NET Provider, the included application supports both the desktop versions of QuickBooks and QuickBooks Online Edition.

    Getting the Contacts

    Step 1: Google accounts include a number of contacts. To obtain a list of a user's Google Contacts, issue a query to the Contacts table. For example: SELECT * FROM Contacts

    Step 2: QuickBooks stores contact information in multiple tables. Depending on your use case, you may want to synchronize your Google Contacts with QuickBooks Customers, Employees, Vendors, or a combination of the three. To get data from a specific table, issue a SELECT query to that table. For example: SELECT * FROM Customers

    Step 3: Retrieving all results from QuickBooks may take some time, depending on the size of your company file. To narrow your results, you may want to filter by including a WHERE clause in your query. For example: SELECT * FROM Customers WHERE (Name LIKE '%James%') AND IncludeJobs = 'FALSE'

    Synchronizing the Contacts

    Synchronizing the contacts is a simple process. Once the contacts from Google and the customers from QuickBooks are available, they can be compared and synchronized based on user preference. The sample application does this based on user input, but it is easy to create one that does the synchronization automatically. The INSERT, UPDATE, and DELETE statements available in both data providers make it easy to create, update, or delete contacts in either data source as needed.

    Pre-Built Demo Application

    The executable for the demo application can be downloaded here. Note that this demo is built using BETA builds of the ADO.NET Provider for Google V2 and the ADO.NET Provider for QuickBooks V3, and will expire in 2013.

    Source Code

    You can download the full source of the demo application here. You will need the Google ADO.NET Data Provider V2 and the QuickBooks ADO.NET Data Provider V3, which can be obtained here.
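
    Because both products are ADO.NET providers, the query side of a sync like this can be written against the standard System.Data.Common classes. The sketch below is illustrative only: the provider-specific connection classes, connection strings, and column names are assumptions, so it simply takes an already-open DbConnection and reads a contacts-style table such as the Contacts or Customers tables named above.

      using System.Collections.Generic;
      using System.Data.Common;

      static class ContactReader
      {
          // Reads (name, email) pairs from a contacts-style table over any ADO.NET connection.
          // The connection is assumed to be open; the column names are assumptions for illustration.
          public static List<KeyValuePair<string, string>> Read(DbConnection connection, string table)
          {
              var results = new List<KeyValuePair<string, string>>();
              using (DbCommand command = connection.CreateCommand())
              {
                  command.CommandText = "SELECT Name, Email FROM " + table;
                  using (DbDataReader reader = command.ExecuteReader())
                  {
                      while (reader.Read())
                      {
                          results.Add(new KeyValuePair<string, string>(
                              reader.GetString(0), reader.GetString(1)));
                      }
                  }
              }
              return results;
          }
      }

    The same helper could then be pointed at the Google side and the QuickBooks side, and the two lists compared to decide which INSERT, UPDATE, or DELETE statements to issue.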

    Read the article

  • PC On/Off Time Charts Windows Uptime; No Logging Necessary

    - by Jason Fitzpatrick
    Windows: PC On/Off Time is a graphical tool that displays your PC’s uptime, downtime, errors, and more, all in a clear and portable package. One of the hassles of using logging tools is that you usually have to enable the logging and then wait for results to pile up before seeing anything useful (such as when you turn on logging on your router). PC On/Off Time taps right into the event logs your Windows PC is already keeping, so you get immediate access to your uptime history. If you look at the screenshot above you can see an accurate picture of the last few weeks of uptime on my computer. October 23-24 I didn't shut down my PC; the rest of the time I hibernated it overnight when I wasn't using it. November 1st I installed an SSD (you can see the burst of reboots and short uptimes), and then November 9th there was a brief power outage that caused an unexpected stop (the red arrows on the timeline for the 9th). The free version offers a three-week peek back into your uptime history (upgrade to the Pro version for $12.75, or for free using Trial Pay, to unlock your complete uptime history). PC On/Off Time is Windows only. PC On/Off Time [via Addictive Tips]
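
    For readers curious how a tool like this can show history "immediately": Windows already records boot and shutdown markers in the System event log. The C# sketch below is an assumption about the general approach (not PC On/Off Time's actual code); it reads the commonly used Event Log service start/stop events with the stock EventLogReader API.

      using System;
      using System.Diagnostics.Eventing.Reader;

      static class UptimeHistory
      {
          static void Main()
          {
              // 6005 = Event Log service started (boot), 6006 = service stopped (clean shutdown),
              // 6008 = the previous shutdown was unexpected.
              var query = new EventLogQuery(
                  "System",
                  PathType.LogName,
                  "*[System[(EventID=6005 or EventID=6006 or EventID=6008)]]");

              using (var reader = new EventLogReader(query))
              {
                  for (EventRecord record = reader.ReadEvent(); record != null; record = reader.ReadEvent())
                  {
                      using (record)
                      {
                          Console.WriteLine("{0}  event {1}", record.TimeCreated, record.Id);
                      }
                  }
              }
          }
      }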

    Read the article

  • Animating Tile with Blitting taking up Memory.

    - by Kid
    I am trying to animate a specific tile in my 2D array, using blitting. The animation consists of three different 16x16 sprites in a tilesheet. Now that works perfectly with the code below, BUT it's causing a memory leak. Every second the Flash Player is taking up 140 kb more memory. What part of the following code could possibly cause the leak?

      // The variable rect finds where on the 2d array we should clear the pixels.
      // fillRect follows up by setting alpha 0 at that spot before we copy in the next sprite.
      // tileType is a variable that holds what kind of tile the next tile in the animation is (from the tilesheet).
      // drawTile() gets the sprite from the tilesheet and copyPixels it into the right position on the canvas.
      public function animateSprite():void{
          tileGround.bitmapData.lock();
          if(anmArray[0].tileType > 42){
              anmArray[0].tileType = 40;
              frameCount = 0;
          }
          var rect:Rectangle = new Rectangle(anmArray[0].xtile * ts, anmArray[0].ytile * ts, ts, ts);
          tileGround.bitmapData.fillRect(rect, 0);
          anmArray[0].tileType = 40 + frameCount;
          drawTile(anmArray[0].tileType, anmArray[0].xtile, anmArray[0].ytile);
          frameCount++;
          tileGround.bitmapData.unlock();
      }

      public function drawTile(spriteType:int, xt:int, yt:int):void{
          var tileSprite:Bitmap = getImageFromSheet(spriteType, ts);
          var rec:Rectangle = new Rectangle(0, 0, ts, ts);
          var pt:Point = new Point(xt * ts, yt * ts);
          tileGround.bitmapData.copyPixels(tileSprite.bitmapData, rec, pt, null, null, true);
      }

      public function getImageFromSheet(spriteType:int, size:int):Bitmap{
          var sheetColumns:int = tSheet.width/ts;
          var col:int = spriteType % sheetColumns;
          var row:int = Math.floor(spriteType/sheetColumns);
          var rec:Rectangle = new Rectangle(col * ts, row * ts, size, size);
          var pt:Point = new Point(0,0);
          var correctTile:Bitmap = new Bitmap(new BitmapData(size, size, false, 0));
          correctTile.bitmapData.copyPixels(tSheet, rec, pt, null, null, true);
          return correctTile;
      }

    Read the article

  • How do I boot into console mode (redux)

    - by Leo Simon
    I'm running Ubuntu 12.04. This question was asked some time ago (How do I disable the boot splash screen?) but the answers didn't work for me. The standard way to boot into console mode used to be to edit /etc/default/grub and set GRUB_CMDLINE_LINUX_DEFAULT="text". This worked fine until I ran the fix proposed in https://help.ubuntu.com/community/SoundTroubleshootingProcedure in order to get sound to work. Since then, I have disabled the boot splash screen, but I can't avoid what I presume is the lightdm login prompt screen. All I want to do is disable this GUI and be prompted with a console login prompt. (Shouldn't be so hard, should it???) I read in thread 33416 mentioned above that there was a bug in lightdm (it wasn't recognizing "text" properly as an option for GRUB_CMDLINE_LINUX_DEFAULT). But this discussion happened more than a year ago, and it's surely been fixed. Yet my lightdm is up to date (so I'm told when I try to update it with apt-get). As suggested in one of the threads above, I tried sudo update-rc.d -f lightdm remove, which resulted in a hung machine. I managed to recover using recovery mode, but now I still get the GUI again. Another suggestion is to edit /etc/init/lightdm.override. I've done this and set it to "manual" as suggested, but lightdm simply ignores this. Could somebody suggest how to proceed please? Thanks very much, Leo

    Read the article

  • Podcast Show Notes: Architect Day Panel Highlights

    - by Bob Rhubart
    The 2010 series of Oracle Technology Network Architect Day events kicked off in May with events in Dallas, Texas, Redwood Shores, California, and Anaheim, California. The centerpiece of each Architect Day event is a panel discussion that brings together the day's various presenters along with experts drawn from the local Oracle community. This week’s ArchBeat program presents highlights from the panel discussion from the event held in Anaheim.

    Listen

    The voices you’ll hear in these highlights belong to (listed in order of appearance):

    Ralf Dossmann: Director of SOA and Middleware in Oracle’s Enterprise Solutions Group
    Floyd Teter: Innowave Technology, Oracle ACE Director
    Basheer Khan: Innowave Technology, Oracle ACE Director
    Jeff Savit: Oracle virtualization expert, former Sun Microsystems principal engineer
    Geri Born: Oracle security analyst

    A 10-minute podcast can't really do justice to the hour-long panel discussion at each Architect Day event, let alone the discussion that is characteristic of each session throughout each Architect Day. But at least you’ll get a taste of what you’ll find at the live events. You’ll find slide decks and more from this first series of 2010 events in the Architect Day Artifacts post on this blog. More dates/cities will be added soon to the Architect Day schedule.

    Coming Soon

    Next week’s ArchBeat program kicks off a three-part series featuring Cameron Purdy, Oracle ACE Director Aleksander Seovic, and Oracle ACE John Stouffer in a conversation about data grid technology and Oracle Coherence. Stay tuned.

    Read the article

  • How to create per-vertex normals when reusing vertex data?

    - by Chris Smith
    I am displaying a cube using a vertex buffer object (gl.ELEMENT_ARRAY_BUFFER). This allows me to specify vertex indices, rather than having duplicate vertices. In the case of displaying a simple cube, this means I only need eight vertices total, as opposed to needing three vertices per triangle, times two triangles per face, times six faces. Sound correct so far? My question is, how do I now deal with vertex attribute data such as color, texture coordinates, and normals when reusing vertices with the vertex buffer object? If I am reusing the same vertex data in my indexed vertex buffer, how can I differentiate when vertex X is used as part of the cube's front face versus the cube's left face? In both cases I would like the surface normal and texture coordinates to be different. I understand I could average the surface normal, however I would like to render a cube. Also, this still doesn't work for texture coordinates. Is there a way to save memory using a vertex buffer object while being able to provide different vertex attribute data based on context? (Per-triangle would be ideal.) Or should I just duplicate each vertex for each context in which it gets rendered, so there is a one-to-one mapping between vertex, normal, color, etc.? Note: I'm using OpenGL ES.
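
    A quick way to see the memory trade-off behind the "just duplicate the vertices" option is to count attribute sets for a cube. The C# sketch below is purely illustrative (the vertex layout and sizes are assumptions, not tied to any particular GL binding):

      using System;

      struct Vertex
      {
          // Position + normal + texture coordinate = 8 floats = 32 bytes per vertex.
          public float X, Y, Z;
          public float Nx, Ny, Nz;
          public float U, V;
      }

      static class CubeMemory
      {
          static void Main()
          {
              int bytesPerVertex = 8 * sizeof(float);

              // Shared corners: 8 vertices, but only one normal/UV per corner,
              // so per-face normals and texture seams are impossible.
              int shared = 8 * bytesPerVertex;

              // Duplicated per face: 6 faces * 4 corners = 24 vertices,
              // each corner repeated once per touching face, with its own normal/UV.
              int perFace = 24 * bytesPerVertex;

              Console.WriteLine("shared: {0} bytes, per-face: {1} bytes", shared, perFace);
              // For a cube the duplicated layout is only 3x larger in vertex data, and the
              // index buffer (36 indices) still lets triangles share corners within a face.
          }
      }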

    Read the article

  • Learn to Take a Punch, Learn to Counter, Keep Moving Forward

    - by D'Arcy Lussier
    Originally posted on: http://geekswithblogs.net/dlussier/archive/2013/10/28/154483.aspx

    During a boxing workout a few months ago our trainer had us do something called “breadbaskets”. That's where you hold your arms up and a partner punches you in your midsection – your breadbasket. I put my arms up, and braced for impact. The trainer came over, saw I was a bit nervous, and coached me through: I can see the fear in your eyes. Don't be afraid to take the punch. Tighten your core, breathe through the hit. Don't panic.

    Over the summer we'd do counter drills as well. This is where a partner throws a punch, you defend but also throw one back – a counter punch. You never just sit back and take a beating; you deflect the blow and come back with one more powerful. These lessons on fighting can apply to all aspects of our lives and any attempts at success that we have. I saw this image recently and agree with it 100%: success is never a straightforward line. It's messy, it's wrought with failures, it's learning over time and applying those life lessons. It's learning how to take punches and lose your fear, it's seeing a punch coming and countering it, but most of all it's not giving up and continually moving forward.

    We do stairs at boxing, which is running up and down three flights of stairs. I'm nowhere near incredible shape, and after doing multiple stairs in a single workout you can feel gassed, tired, even discouraged after hitting the second floor and seeing everyone else running by you. I read a quote from Martin Luther King Jr. that I cling to throughout my day.

    You want to be successful? Take the punches, but learn how to take them. Counter them. And no matter what, always move forward.

    Read the article

  • How does a flocking algorithm work?

    - by Chan
    I read and understand the basics of the flocking algorithm. Basically, we need to have three behaviors:

    1. Cohesion
    2. Separation
    3. Alignment

    From my understanding, it's like a state machine. Every time we do an update (then draw), we check all the constraints on all three behaviors, and each behavior returns a Vector3 which is the "correct" orientation that an object should transform to. So my initial idea was:

      /// <summary>
      /// Objects stick together
      /// </summary>
      /// <returns></returns>
      private Vector3 Cohesion() {
          Vector3 result = new Vector3(0.0f, 0.0f, 0.0f);
          return result;
      }

      /// <summary>
      /// Objects align
      /// </summary>
      /// <returns></returns>
      private Vector3 Align() {
          Vector3 result = new Vector3(0.0f, 0.0f, 0.0f);
          return result;
      }

      /// <summary>
      /// Objects separate from each other
      /// </summary>
      /// <returns></returns>
      private Vector3 Separate() {
          Vector3 result = new Vector3(0.0f, 0.0f, 0.0f);
          return result;
      }

    Then I searched online for pseudocode, but many of the examples involve velocity and acceleration plus other stuff. This part confused me. In my game, all objects move at a constant speed, and they have one leader. So can anyone share an idea of how to start implementing this flocking algorithm? Also, did I understand it correctly? (I'm using XNA 4.0)
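
    Since the objects here move at constant speed and follow a leader, the velocity/acceleration bookkeeping in most examples can be reduced to blending direction vectors. The C# sketch below is one possible way to fill in those three methods (the Boid class, the weights, and the radii are assumptions for illustration): each rule returns a desired direction, and the update blends them and moves the boid a fixed distance along the result.

      using System.Collections.Generic;
      using Microsoft.Xna.Framework;   // XNA 4.0

      class Boid
      {
          public Vector3 Position;
          public Vector3 Direction;    // unit vector; speed is constant
      }

      class Flock
      {
          public List<Boid> Boids = new List<Boid>();
          public Boid Leader;
          const float Speed = 0.1f;
          const float SeparationRadius = 2.0f;

          // Cohesion: steer toward the average position of the other boids.
          Vector3 Cohesion(Boid self)
          {
              Vector3 center = Vector3.Zero;
              foreach (var b in Boids) if (b != self) center += b.Position;
              center /= (Boids.Count - 1);
              return SafeNormalize(center - self.Position);
          }

          // Separation: steer away from boids that are too close.
          Vector3 Separation(Boid self)
          {
              Vector3 push = Vector3.Zero;
              foreach (var b in Boids)
                  if (b != self && Vector3.Distance(b.Position, self.Position) < SeparationRadius)
                      push += self.Position - b.Position;
              return SafeNormalize(push);
          }

          // Alignment: match the group heading (here simply the leader's heading).
          Vector3 Alignment(Boid self)
          {
              return Leader.Direction;
          }

          public void Update()
          {
              foreach (var b in Boids)
              {
                  if (b == Leader) continue;
                  // Weighted blend of the three rules; tune the weights to taste.
                  Vector3 desired = 1.0f * Cohesion(b) + 1.5f * Separation(b) + 1.0f * Alignment(b);
                  b.Direction = SafeNormalize(desired);
                  b.Position += b.Direction * Speed;   // constant speed, no acceleration
              }
          }

          static Vector3 SafeNormalize(Vector3 v)
          {
              return v.LengthSquared() > 0 ? Vector3.Normalize(v) : Vector3.Zero;
          }
      }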

    Read the article

  • Using Sandcastle to build code contracts documentation

    - by DigiMortal
    In my last posting about code contracts I showed how code contracts are documented in XML documents. In this posting I will show you how to get code contracts documented with Sandcastle and Sandcastle Help File Builder. Before we start, let's download the Sandcastle tools we need: Sandcastle and Sandcastle Help File Builder. Install Sandcastle first and then Sandcastle Help File Builder. Because we are generating only HTML-based documentation that we upload to a server, we don't need any other tools. Of course, we need Cassini or IIS, but I expect that to already be on your machine. Open your project and turn on XML documentation for the project and for contracts. Now let's run Sandcastle Help File Builder. We have to create a new project and add our Visual Studio solution to this project. Now set the HelpFileFormat parameter value to Website and let the builder build the help. You have to wait about two or three minutes until the help is ready. Take a look at the documentation that Sandcastle generated – you will not see much information there about code contracts and their rules.

    Enabling code contracts documentation

    Now let's include code contracts in the documentation. Follow these steps:

    1. Open the Sandcastle folder and make a copy of the vs2005 folder.
    2. Open the CodeContracts folder (c:\program files\microsoft\contracts\) and unzip the archive from the sandcastle folder.
    3. Copy all unzipped files to the Sandcastle folder.
    4. Create (yes, create new) and build your Sandcastle Help File Builder documentation project again.
    5. Open the help.

    In my case I see something like this now. As you can see, the contracts are documented pretty well. We can easily turn on code contracts XML documentation generation and all our contracts are documented automatically. To get the documentation to work we had to use the Sandcastle help file fixes that are installed with Code Contracts, and if we previously had a Sandcastle Help File Builder project we had to create it from scratch to get the new rules accepted. Once the documentation support for contracts works, we have to do nothing more to get contracts documented.
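
    For readers who have not seen what actually ends up in those generated pages, here is a small illustrative example of a contracted method using the standard System.Diagnostics.Contracts API; preconditions and postconditions like these are the kind of rules the Sandcastle output describes once contract XML documentation is enabled. The method itself is an assumption chosen for the example.

      using System;
      using System.Diagnostics.Contracts;

      public static class Accounts
      {
          /// <summary>Withdraws an amount from a balance.</summary>
          public static decimal Withdraw(decimal balance, decimal amount)
          {
              // Preconditions: surface as "requires" entries in the generated documentation.
              Contract.Requires(amount > 0);
              Contract.Requires(balance >= amount);

              // Postcondition: surfaces as an "ensures" entry in the generated documentation.
              Contract.Ensures(Contract.Result<decimal>() == balance - amount);

              return balance - amount;
          }
      }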

    Read the article

  • Podcast Show Notes: Redefining Information Management Architecture

    - by Bob Rhubart-Oracle
    Nothing in IT stands still, and this is certainly true of business intelligence and information management. Big Data has certainly had an impact, as have Hadoop and other technologies. That evolution was the catalyst for the collaborative effort behind a new Information Management Reference Architecture. The latest OTN ArchBeat series features a conversation with Andrew Bond, Stewart Bryson, and Mark Rittman, key players in that collaboration. These three gentlemen know each other quite well, which comes across in a conversation that is as lively and entertaining as it is informative. But don't take my word for it. Listen for yourself!

    The Panelists (listed alphabetically)

    Andrew Bond, head of Enterprise Architecture at Oracle
    Oracle ACE Director Stewart Bryson, owner and Co-Founder of Red Pill Analytics
    Oracle ACE Director Mark Rittman, CIO and Co-Founder of Rittman Mead

    The Conversation

    Listen to Part 1: The panel discusses how new thinking and new technologies were the catalyst for a new approach to business intelligence projects.
    Listen to Part 2: Why taking an "API" approach is important in building an agile data factory.
    Listen to Part 3: Shadow IT, "sandboxing," and how organizational changes are driving the evolution in information management architecture.

    Additional Resources

    The Reference Architecture that is the focus of this conversation is described in detail in these blog posts by Mark Rittman:
    Introducing the Updated Oracle / Rittman Mead Information Management Reference Architecture
    Part 1: Information Architecture and the Data Factory
    Part 2: Delivering the Data Factory

    Be a Guest Producer for an ArchBeat Podcast

    Want to be a guest producer for an OTN ArchBeat podcast? Click here to learn how to make it happen.

    Read the article

  • DAO/Webservice Consumption in Web Application

    - by Gavin
    I am currently working on converting a "legacy" web-based (ColdFusion) application from a single data source (MSSQL database) to multi-tier OOP. In my current system there is a read/write database with all the usual stuff, plus additional "read-only" databases that are exported daily/hourly from an Enterprise Resource Planning (ERP) system by SSIS jobs, with business product/item and manufacturing/SCM planning data. The reason I have the opportunity and need to convert to multi-tier OOP is that a newer, more modern ERP system is being implemented business-wide that will be a complete replacement. This newer ERP system offers several interfaces for third-party applications like mine, from direct SQL access to either a dotNet web service or a SOAP-like web service. I have found several suitable frameworks I would be happy to use (ColdSpring, FW/1), but I am not sure what design patterns apply to my data access object/component and how to manage the connection/session tokens. With this background, my question has the following three parts:

    1. I have concerns about moving from the relative safety of an SSIS job, which protects me from the downtime and speed of the ERP system, to directly connecting with one of the web services, which I note seem significantly slower than I expected (simple/small requests often take up to a whole second). Are there any design patterns I can investigate/use to cache/protect my data tier?

    2. It is my understanding that data access objects (the components that connect directly with the web services and convert their responses into the data types I can then work with in my domain objects) should be singletons (and will act as an Adapter/Facade). Am I correct?

    3. As part of the data access object I have to set up a connection by username/password (I could set up multiple users and/or connect multiple times with this), which responds with a session token that needs to be provided on every subsequent request. Do I do this once and share it across the whole application? Do I set up a new "connection" for every user of my application and keep the token in their session scope (which might quickly hit licensing limits)? Do I set the "connection" up per page request? Or is there a design pattern I am missing that can manage multiple "connections" where a request/access uses the first free "connection"? It is worth noting that if the ERP system dies I will need to reset/invalidate all the connections and start from scratch, and depending on which web service I use I might need to manually close the "connection/session".
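
    On the third part, one common approach is a small facade that lazily authenticates, caches the session token application-wide, and re-authenticates when the ERP invalidates it. The C# sketch below is only an illustration of that idea; the IErpClient interface and its methods are assumptions, not the actual web-service API.

      using System;

      // Assumed shape of the ERP web-service client (illustrative only).
      public interface IErpClient
      {
          string Authenticate(string user, string password);   // returns a session token
          string Call(string token, string request);           // throws if the token is invalid
      }

      // Application-wide facade: one shared token, renewed on demand.
      public sealed class ErpSession
      {
          private readonly IErpClient client;
          private readonly string user, password;
          private readonly object sync = new object();
          private string token;

          public ErpSession(IErpClient client, string user, string password)
          {
              this.client = client;
              this.user = user;
              this.password = password;
          }

          public string Call(string request)
          {
              try
              {
                  return client.Call(GetToken(), request);
              }
              catch (Exception)
              {
                  // Token expired or ERP restarted: drop it and retry once with a fresh login.
                  Invalidate();
                  return client.Call(GetToken(), request);
              }
          }

          private string GetToken()
          {
              lock (sync)
              {
                  if (token == null) token = client.Authenticate(user, password);
                  return token;
              }
          }

          public void Invalidate()
          {
              lock (sync) { token = null; }
          }
      }

    A pool of several such sessions (handing a request the first free one) is a straightforward extension if a single shared token hits throughput or licensing limits.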

    Read the article

  • Problem with installing Flash

    - by Hannah
    This question has been posted loads of times, I know, but seriously I've spent hours and HOURS searching the web for answers. Anyway, basically I have just installed Ubuntu 13.04 for the first time. I tried to get on YouTube and found out I cannot play videos. When I went to install Flash, the window popped up saying to install either Adobe Flash or Gnash. I installed Flash from the Adobe website. There were three different file types to choose from: YUM, .tar.gz or .rpm. Now from what I could gather .tar.gz is the right one, so I tried it and it downloaded without the error message. It comes up with the box that says open with Archive Manager. I extracted the files but none of them were of any use, and I could not work out how to install it. There was no file intended for installation. So what am I meant to do with these files? Would it be easier to just install Gnash?

    Read the article

  • What is a simple deformer in which vertices deform linearly with control points?

    - by sebf
    In my project I want to deform a complex mesh using a simpler 'proxy' mesh. In effect, each vertex of the proxy/collision mesh will be a control point/bone, which should deform the vertices of the main mesh attached to it depending on weight, but where the weight is not dependent on the absolute distance from the control point but rather on the distance relative to the other affecting control points. The point of this is to preserve complex three-dimensional features of the main mesh while using physics implementations which expect something far simpler: low resolution, single surface, etc. Therefore, the vertices must deform linearly with their respective weighted control points (i.e. no falloff fields, or all the mesh features will end up collapsed) – as if each vertex were linked to a point on the plane created by the attached control points and deformed with it. I have tried implementing the weight computation algorithm in this paper (page 4) but it is not working as expected, and I am wondering if it is really the best way to do what I want. What is the simplest way to 'skin'* an arbitrary mesh to another arbitrary mesh?

    *By skin I mean I need an algorithm to determine the best control points for a vertex, and their weights.

    Read the article

  • Duke's Choice Award Goes Regional

    - by Tori Wieldt
    We are pleased to announce the expansion of the Duke's Choice Award program to include regional awards in conjunction with each international JavaOne conference.  The expanded Duke's Choice Award program celebrates Java innovation happening within specific regions and provides an opportunity to recognize winners locally. Regions include Latin America (LAD), Europe Africa Middle East (EMEA), and Asia.  The global program will  continue in association with the flagship JavaOne conference.  First up: Duke's Choice Awards LAD.  Three winners will be announced on stage during JavaOne Latin America December 4th to 6th and in the Jan/Feb issue of Java Magazine.   Submit your nominations now through October 30th!  Nominations are accepted from anyone, including Oracle employees,  for compelling uses of Java technology or community involvement.  Duke's Choice Awards LAD judges include community members Yara Senger (Brazil) and Alexis Lopez (Colombia). In keeping with the 10 year tradition of the Duke's Choice Award program, the most important ingredient is innovation. Let's recognize and celebrate the innovation that Java delivers within Latin America! www.java.net/dukeschoiceLAD To see the 2012 global Duke's Choice Awards winners now, subscribe to Java Magazine

    Read the article

  • Designing configuration for subobjects

    - by Stefano Borini
    I have the following situation: I have a class (let's call it Main) encapsulating a complex process. This class in turn orchestrates a sequence of subalgorithms (AlgoA, AlgoB), each one represented by an individual class. To configure Main, I have a configuration stored in a configuration object MainConfig. This object contains all the config information for AlgoA and AlgoB with their specific parameters. AlgoA has no interest in the information relative to the configuration of AlgoB, so technically I could have (and in practice I have) contained MainConfig.AlgoAConfig and MainConfig.AlgoBConfig instances, and initialize as AlgoA(MainConfig.AlgoAConfig) and AlgoB(MainConfig.AlgoBConfig). The problem is that there is some common configuration data. One example is the printLevel. I currently have MainConfig.printLevel. I need to propagate this information to both AlgoA and AlgoB, because they have to know how much to print. MainConfig also needs to know how much to print. So the solutions available are:

    1. I pass the MainConfig to AlgoA and AlgoB. This way, AlgoA technically has access to the whole configuration (even that of AlgoB) and is less self-contained.
    2. I copy MainConfig.printLevel into AlgoAConfig and AlgoBConfig, so I basically have the same printLevel information repeated three times.
    3. I create a third configuration class PrintingConfig. I have an instance variable MainConfig.printingConfig, and then pass to AlgoA both MainConfig.AlgoAConfig and MainConfig.printingConfig.

    Have you ever encountered this situation? How did you solve it? Which one is stylistically clearer to a new reader of the code?
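
    A minimal C# sketch of the third option (the class and field names follow the question; the extra parameters are assumptions for illustration): the shared settings live in one PrintingConfig instance that both algorithm configs merely reference, so nothing is duplicated and neither algorithm sees the other's parameters.

      // Shared, cross-cutting settings.
      public class PrintingConfig
      {
          public int PrintLevel { get; set; }
      }

      public class AlgoAConfig
      {
          public PrintingConfig Printing { get; set; }   // reference, not a copy
          public double ToleranceA { get; set; }         // illustrative parameter
      }

      public class AlgoBConfig
      {
          public PrintingConfig Printing { get; set; }
          public int IterationsB { get; set; }           // illustrative parameter
      }

      public class MainConfig
      {
          public PrintingConfig Printing { get; } = new PrintingConfig();
          public AlgoAConfig AlgoA { get; }
          public AlgoBConfig AlgoB { get; }

          public MainConfig()
          {
              // Both sub-configs share the single PrintingConfig instance.
              AlgoA = new AlgoAConfig { Printing = Printing };
              AlgoB = new AlgoBConfig { Printing = Printing };
          }
      }

      public class AlgoA
      {
          private readonly AlgoAConfig config;
          public AlgoA(AlgoAConfig config) { this.config = config; }
          // config.Printing.PrintLevel is available here without exposing AlgoB's settings.
      }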

    Read the article

  • Which tools should be used for data migration between environments?

    - by Paula Speranza-Hadley
    With the Oracle Utilities Application Framework based products there are a number of tools provided that can be used to transfer data from one environment to another. There are three main tools that implementations use:

    ConfigLab - A configurable copy facility that is metadata aware and therefore understands the relationships between objects, and by invoking the relevant maintenance objects validates the data copied. This utility uses the object validation to help ensure data integrity. Basically it is a set of configuration tables and a set of batch jobs to perform the migration of data.

    Bundling - A configurable release management tool that allows exporting of Advanced Configuration Environment based objects (business services, business objects, UI Maps etc.) from one environment to another.

    Blueprint - An Oracle Utilities Software Development Kit (SDK) based tool to import metadata from the development environment to your initial testing environment. The utility is command line based and basically uses a text based configuration file to drive the utility on the source and target sides.

    Each tool has a role in an implementation, but you must be careful to use the right tool for the right job within an implementation. The suggestions are as follows:

    Only use the Blueprint tool for migrating data from your development platform to your initial test environment. The Blueprint tool is not designed to move large amounts of data; it is risky if not used correctly and can potentially break the integrity of your data.

    The SDK provides the configuration data that it is used for (mainly metadata). This should not be extended as, while it can perform data migration on any data, it is not efficient and is risky for certain types of configuration data.

    Additional information can be found in the following whitepaper: Oracle Utilities Application Framework - Release Management - Software Configuration Management on MyOracle.com

    Read the article

  • Column order can matter

    - by Dave Ballantyne
    Ordinarily, column order in a SQL statement does not matter. Select a,b,c from table will produce the same execution plan as Select c,b,a from table. However, sometimes it can make a difference. Consider this statement (MAXDOP is used to make a simpler plan and has no impact on the main point):

      select SalesOrderID,
             CustomerID,
             OrderDate,
             ROW_NUMBER() over (Partition By CustomerId order by OrderDate asc) as RownAsc,
             ROW_NUMBER() over (Partition By CustomerId order by OrderDate Desc) as RownDesc
      from Sales.SalesOrderHeader
      order by CustomerID, OrderDate
      option(maxdop 1)

    If you look at the execution plan, you will see something similar to this: three sorts. One for RownAsc, one for RownDesc and the final one for the ‘Order by’ clause. Sorting is an expensive operation and one that should be avoided if possible. So with this in mind, it may come as some surprise that the optimizer does not re-order operations to group them together when the incoming data is in a similar (if not exactly the same) sorted sequence. A simple change to swap the RownAsc and RownDesc columns produces this statement:

      select SalesOrderID,
             CustomerID,
             OrderDate,
             ROW_NUMBER() over (Partition By CustomerId order by OrderDate Desc) as RownDesc,
             ROW_NUMBER() over (Partition By CustomerId order by OrderDate asc) as RownAsc
      from Sales.SalesOrderHeader
      order by CustomerID, OrderDate
      option(maxdop 1)

    This will result in a different and more efficient query plan with one less sort. The optimizer, although unable to automatically re-order operations, HAS taken advantage of the data ordering if it is as required. This is well worth taking advantage of if you have different sorting requirements in one statement. Try grouping the functions that require the same order together and save yourself a few extra sorts.

    Read the article

  • When I tried to install Ubuntu 11.10 I get an error "Windows Backend object has no attribute 'iso-path' - see log for details."

    - by Raja
    I am trying to install Ubuntu 11.10 in Windows XP. Everything went as before until the countdown clock reached zero, then I got "Windows Backend object has no attribute 'iso-path' - see log for details." It's done it three times now (formatting in between). The end of the log says:

      11-01 17:20 DEBUG TaskList: New task check_iso
      11-01 17:20 DEBUG TaskList: ### Running check_iso...
      11-01 17:20 DEBUG CommonBackend: Checking Y:\ubuntu\install\installation.iso
      11-01 17:20 DEBUG Distro: checking Ubuntu ISO Y:\ubuntu\install\installation.iso
      11-01 17:20 DEBUG Distro: wrong size: 8094031872 900000000
      11-01 17:20 DEBUG TaskList: ### Finished check_iso
      11-01 17:20 ERROR TaskList: 'WindowsBackend' object has no attribute 'iso_path'
      Traceback (most recent call last):
        File "\lib\wubi\backends\common\tasklist.py", line 197, in call
        File "\lib\wubi\backends\common\backend.py", line 579, in get_iso
        File "\lib\wubi\backends\common\backend.py", line 565, in use_iso
      AttributeError: 'WindowsBackend' object has no attribute 'iso_path'
      11-01 17:20 DEBUG TaskList: # Cancelling tasklist
      11-01 17:20 DEBUG TaskList: # Finished tasklist
      11-01 17:20 ERROR root: 'WindowsBackend' object has no attribute 'iso_path'
      Traceback (most recent call last):
        File "\lib\wubi\application.py", line 58, in run
        File "\lib\wubi\application.py", line 130, in select_task
        File "\lib\wubi\application.py", line 205, in run_cd_menu
        File "\lib\wubi\application.py", line 120, in select_task
        File "\lib\wubi\application.py", line 158, in run_installer
        File "\lib\wubi\backends\common\tasklist.py", line 197, in call
        File "\lib\wubi\backends\common\backend.py", line 579, in get_iso
        File "\lib\wubi\backends\common\backend.py", line 565, in use_iso
      AttributeError: 'WindowsBackend' object has no attribute 'iso_path'

    Read the article

  • How to popularize Nemerle (or another programming language)?

    - by keykeeper
    Any .NET developer who is interested in different programming languages knows that F# is the most popular functional language for the .NET platform nowadays. The main reason for F#'s popularity is the great support from Microsoft. But we are not limited to F# at all. There are other functional languages on the .NET platform, and I'm very disappointed that Nemerle isn't well known. It's an awesome language which supports three paradigms: object-oriented, functional and meta-programming. I won't try to explain why I like it so much. The problem is that I can't use it at work. I think that only really brave companies can rely on Nemerle. It's almost unknown, which is why it's hard to find new developers for a project. No one wants to make a first step with Nemerle if it can influence the budget, which is reasonable. So, here is a question: what can I do to make Nemerle more popular? Here are my first ideas: implement open-source projects using Nemerle; give presentations at different conferences; write articles.

    Read the article

  • Adsense click bot is click bombing my site

    - by Graham
    I have a site that gets roughly 7,000 - 10,000 page views per day right now. Starting around 1 AM on 7/1/12 I noticed the CTR was rising dramatically. These clicks would be credited and then de-credited soon after, so they were obviously fraudulent clicks. The next day I had about 200 clicks in my account, with about 100 of them being fraudulent. It's about 3 - 8 per hour, evenly dispersed across each of the three ads, 24 hours a day. This leads me to believe that it's some sort of AdSense click bot. Also, I removed the ads last evening then put them back up around 3 AM, and the invalid clicks started within 10 minutes. I signed up for statcounter.com to analyze the exit links on the AdSense ads. Then I conditionally blocked ads for the IP address of the person / bot I suspected of doing this. But I think that the bot has several proxies to choose from and can refresh IP addresses. I've notified Google through the invalid click form / email 4 times over the past two days in order to let them know I'm aware of the situation and am working on a solution. I've also temporarily removed all ads on that site. How can I block a bot like this? Thank you.

    Read the article

  • SQLAuthority News – BI Quiz Question – How to Optimize Cube? – Hints

    - by pinaldave
    I wrote earlier about the SQL BI Quiz over here. The details of the quiz are as follows: Working with huge data is very common when it comes to data warehousing. It is necessary to create cubes on the data to make it meaningful and consumable. There are cases when retrieving the data from a cube takes a lot of time. Let us assume that your cube has been returning data very quickly, and suddenly one day it is returning the data very slowly. What are the three things you will do to diagnose this? After diagnosing it, what will you do to resolve the performance issue? Participate in my question over here.

    Here are a couple of hints about what I am looking for in an answer: How do you reach the root of the slow performance? Is hardware causing the problem, or something else? Is the slowness due to how the cube is built and its granularity? Do the underlying tables require maintenance? Is there a chance to refactor the process? Are there any tools which can help diagnose the slowness of the cube? It is not necessary to answer all the questions – but they are something to start with.

    Reference: Pinal Dave (http://blog.SQLAuthority.com)

    Filed under: PostADay, SQL, SQL Authority, SQL Puzzle, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • Should I give the answer to a failed interview coding exercise?

    - by GlenH7
    We had a senior-level interview candidate fail a nuance of the FizzBuzz question*. I mean, really, utterly, completely failed the question - not even close. I even coached him through to thinking about using a loop and that 3 and 5 were really worth considering as special cases. He blew it. Just for QA purposes, I gave the same exact question to three teammates, gave them 5 minutes, and then came back to collect their pseudo-code. All of them nailed it and hadn't seen the question before. Two asked what the trick was...

    On a different logic exercise, the candidate showed some understanding of some of the features available within the language he chose to use (C#). So it's not as if he had never written a line of code. But his logic still stunk. My question is whether or not I should have given him the answer to the logic questions. He knew he blew them, and acknowledged it later in the interview. On the other hand, he never asked for the answer or what I was expecting to see.

    I know coding exercises can be used to set candidates up for failure (again, see the second link above). And I really tried to help him home in on answering the core of the question. But this was a senior-level candidate, and FizzBuzz is, frankly, ridiculously easy even accounting for interview jitters. I felt like I should have shown him a way of solving the problem so that he could at least learn from the experience. But again, he didn't ask. What's the right way to handle that situation?

    *Okay, that's not the link to the actual FizzBuzz question, but it is a good P.SE discussion around FizzBuzz and links to the various aspects of it.
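
    For context, since the question itself is only linked above: a minimal FizzBuzz along the lines the interviewer was coaching toward (a loop, with 3 and 5 as the special cases) is only a handful of lines in the candidate's chosen language, C#.

      using System;

      class FizzBuzz
      {
          static void Main()
          {
              for (int i = 1; i <= 100; i++)
              {
                  if (i % 15 == 0) Console.WriteLine("FizzBuzz");   // divisible by both 3 and 5
                  else if (i % 3 == 0) Console.WriteLine("Fizz");
                  else if (i % 5 == 0) Console.WriteLine("Buzz");
                  else Console.WriteLine(i);
              }
          }
      }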

    Read the article
