Search Results

Search found 886 results on 36 pages for 'duplicates'.


  • Inserting a number into a sorted array!

    - by Jay
    I would like to write a piece of code for inserting a number into a sorted array at the appropriate position (i.e. the array should remain sorted after the insertion). My data structure doesn't allow duplicates. I am planning to do something like this:
    1. Find the right index for the element using binary search.
    2. Create space for the element by moving everything from that index onwards down by one.
    3. Put the element there.
    Is there a better way?
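
    A minimal C# sketch of the approach described above, assuming the sorted array is held in a List<int> (the class and method names are illustrative, not from the question):

        using System.Collections.Generic;

        static class SortedInsert
        {
            // Inserts value into an already-sorted list, keeping it sorted; rejects duplicates.
            public static bool Insert(List<int> sorted, int value)
            {
                int index = sorted.BinarySearch(value);
                if (index >= 0) return false;       // value already present; duplicates not allowed
                sorted.Insert(~index, value);       // ~index is the insertion point BinarySearch reports
                return true;
            }
        }

    Note that List<T>.Insert already performs the element shifting of step 2, so steps 2 and 3 collapse into a single call.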

    Read the article

  • T-SQL Specific Syntax problem (Simple no doubt)

    - by Yoda
    Hi guys, I have an issue with a query I'm trying to run on some data; the best place to start is to describe the data. I have a list of email addresses, and each email address has a unique ID and an account ID. My tables also have a number that auto-increments, which will let me target duplicate email addresses. What I need to do is something like this:
        Insert into duplicates (EMAIL, ACCOUNTID, ID)
        SELECT Email, AccountID, ID
        FROM EmailAddresses
        Group by Email, AccountID
        Having Count(email) > 1
        Order by AccountID, Email
    So essentially I want to select all duplicate email addresses and insert them (and their related fields) into a new table, broken down by account ID, so I can run some further queries on it. I have been battling with this for way too long and could just use a fresh perspective. Cheers in advance.

    Read the article

  • How to use LINQ To Entities for filtering when many methods are not supported?

    - by Kinderchocolate
    Hi, I have a table in a SQL database:
        ID  Data  Value
        1   1     0.1
        1   2     0.4
        2   10    0.3
        2   11    0.2
        3   10    0.5
        3   11    0.6
    For each unique value in Data, I want to keep only the row with the largest ID. In the table above, that means filtering out the third and fourth rows, because the fifth and sixth rows have the same Data values but a larger ID (3 rather than 2). I tried this in LINQ to Entities:
        IQueryable<DerivedRate> test = ObjectContext.DerivedRates
            .OrderBy(d => d.Data)
            .ThenBy(d => d.ID)
            .SkipWhile((d, index) => (index == size - 1) || (d.ID != ObjectContext.DerivedRates.ElementAt(index + 1).ID));
    Basically, I am sorting the list and removing the duplicates by checking whether the next element has an identical ID. However, this doesn't work, because SkipWhile(index) and ElementAt(index) aren't supported in LINQ to Entities. I don't want to pull the entire gigantic table into an array before sorting it. Is there a way?
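
    One common way to express "keep only the rows whose ID is the largest for their Data value" in a form that does translate to SQL is a correlated Max subquery; a sketch against the DerivedRates set from the question (whether a given provider translates it efficiently is worth checking):

        var latest = ObjectContext.DerivedRates
            .Where(d => d.ID == ObjectContext.DerivedRates
                                  .Where(x => x.Data == d.Data)
                                  .Max(x => x.ID));

    The filtering happens server-side, so the table never has to be materialized in memory.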

    Read the article

  • Create numbers within an array that add up to a set amount

    - by RussellDias
    I'm fairly new to PHP, and to programming in general. What I need to accomplish is to create an array of x randomly generated numbers whose values add up to n. Let's say I have to create 4 numbers that add up to 30; I just need the first random dataset. The 4 and 30 here are variables which will be set by the user. Essentially something like:
        x = amount of numbers;
        n = sum of all x numbers combined;
        create x random numbers which all add up to n;
        $row = array(5, 7, 10, 8); // these add up to 30
    Also, no duplicates are allowed. I need the values in an array. I have been messing around with this for some time, but my knowledge is fairly limited. Any help will be greatly appreciated. Cheers.
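
    One language-agnostic way to get x distinct positive integers summing to n is to cut the interval 1..n-1 at x-1 distinct random points and retry whenever two of the resulting parts come out equal; a C# sketch of that idea (names are mine, and it assumes n is large enough that a distinct split exists, i.e. n >= x*(x+1)/2):

        using System;
        using System.Collections.Generic;
        using System.Linq;

        static class Partition
        {
            static readonly Random Rng = new Random();

            // Returns x distinct positive integers that sum to n.
            public static int[] RandomDistinctParts(int x, int n)
            {
                while (true)
                {
                    // choose x-1 distinct cut points inside 1..n-1
                    var cuts = new HashSet<int>();
                    while (cuts.Count < x - 1) cuts.Add(Rng.Next(1, n));

                    var points = cuts.OrderBy(c => c).ToList();
                    points.Insert(0, 0);
                    points.Add(n);

                    // differences between consecutive cut points are the parts
                    var parts = new int[x];
                    for (int i = 0; i < x; i++) parts[i] = points[i + 1] - points[i];

                    if (parts.Distinct().Count() == x) return parts;   // retry if any part repeats
                }
            }
        }

    Rejection sampling is simple, but it can loop for a long time when n is only barely larger than the minimum possible sum.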

    Read the article

  • How can I remove a browser's cache server-side, using ASP.NET?

    - by Noman
    How can I clear a browser's cache server-side, using ASP.NET (C#)? A coupon shows up by itself (I believe it comes from the cache, as I had also browsed other apparel sites). It breaks my JavaScript as well as my server-side code: I am using an UpdatePanel for Ajax, and the coupon duplicates the UpdatePanel's ID. I have renamed the UpdatePanel's ID, but it makes no difference; it generates an "Invalid view state" exception. The coupon name is "FastSave". What I have tried:
        Response.Cache.SetExpires(DateTime.UtcNow.AddMinutes(-1));
        Response.Cache.SetCacheability(HttpCacheability.NoCache);
        Response.Cache.SetNoStore();

    Read the article

  • Duplicate records

    - by czuroski
    Hello, I am using NHibernate for database persistence. I have a one-to-many relationship defined between two tables: table1 (one) and table2 (many). When I query and try to get data, I get the correct number of rows from the "many" table, but the rows are duplicates of the first row returned. I create a criteria query to get a certain record from table1 and then expect to get all associated records from table2. For example, table1 holds orders and table2 holds items; I query table1 to get an order which has 4 items. I expect to see each of those 4 items from table2, but all I am seeing is the first item repeated 4 times. Does anyone have any idea what might be happening?

    Read the article

  • Looking for a few good C# interview problems.

    - by AngryHacker
    I do not want to ask candidates questions, but rather give them several problems to solve. The reason for this is that I've seen people who are excellent with theory but, when confronted with a real-world C# issue, just couldn't hack it. These C# problems should be simple enough that they won't take more than 5-20 minutes to solve, yet complicated enough that I'd be able to weed out candidates who can't code. Right now, I typically ask applicants to reverse a string and to remove duplicates from a List. This alone weeds out a large number of people. Any other examples I could use?
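
    For reference, the two warm-up problems mentioned above are small enough to sketch inline; plausible expected-answer shapes in C# (my own phrasing, not prescribed solutions) might look like:

        using System.Collections.Generic;

        static class InterviewWarmups
        {
            // Reverse a string without calling Array.Reverse.
            public static string Reverse(string s)
            {
                char[] chars = s.ToCharArray();
                for (int i = 0, j = chars.Length - 1; i < j; i++, j--)
                {
                    char tmp = chars[i];
                    chars[i] = chars[j];
                    chars[j] = tmp;
                }
                return new string(chars);
            }

            // Remove duplicates from a list while preserving the original order.
            public static List<int> RemoveDuplicates(List<int> input)
            {
                var seen = new HashSet<int>();
                var result = new List<int>();
                foreach (int n in input)
                    if (seen.Add(n)) result.Add(n);     // Add returns false for values already seen
                return result;
            }
        }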

    Read the article

  • What was your first home computer?

    - by Adam Tegen
    What was your first home computer? The one that made you "fall in love" with programming. There are 300+ entries, many (most?) of which are duplicates. As with all StackOverflow Poll type Q&As, please make certain your answer is NOT listed already before adding a new answer - searching doesn't always find it (model naming variations, I assume). If it already exists, vote that one up so we see what the most popular answer is, rather than duplicating an existing entry. If you see a duplicate, vote it down so the top entries have only one of each model listed. If you have interesting or additional information to add, use a comment or edit the original entry rather than creating a duplicate.

    Read the article

  • php random image file name

    - by bush man
    Okay, I'm using a snippet I found on Google to take a user's uploaded image and put it in my directory under Content. But I'm worried about duplicates, so I was going to have it save the image under a random number. Here is my code; you can probably see what I'm going for:
        <label for="file">Profile Pic:</label>
        <input type="file" name="ProfilePic" id="ProfilePic" /><br />
        <input type="submit" name="submit" value="Submit" />

        $ProfilePicName = $_FILES["ProfilePic"]["name"];
        $ProfilePicType = $_FILES["ProfilePic"]["type"];
        $ProfilePicSize = $_FILES["ProfilePic"]["size"];
        $ProfilePicTemp = $_FILES["ProfilePic"]["tmp_name"];
        $ProfilePicError = $_FILES["ProfilePic"]["error"];

        $RandomAccountNumber = mt_rand(1, 99999);
        echo $RandomAccountNumber;
        move_uploaded_file($ProfilePicTemp, "Content/".$RandomAccountNumber.$ProfilePicType);
    And then, basically, after all this I'm going to try to get it to put that random number in my database.

    Read the article

  • Union - Same table, excluding previous results MySQL

    - by user82302124
    I'm trying to write a query that will:
    1. Run a query and give me (x) number of rows (limit 4).
    2. If that query didn't give me the 4 I need, run a second query with limit 4-(x) and exclude the ids returned by the first query.
    3. Run a third query that acts like the second.
    I have this:
        (SELECT *, 1 as SORY_QUERY1 FROM xbamZ where state = 'Minnesota' and industry = 'Miscellaneous' and id != '229' limit 4)
        UNION
        (SELECT *, 2 FROM xbamZ where state = 'Minnesota' limit 2)
        UNION
        (SELECT *, 3 FROM xbamZ where industry = 'Miscellaneous' limit 1)
    How do I do that (or can I)? Am I close? This query gives me duplicates.

    Read the article

  • Is there a %in% operator across multiple columns

    - by RobinLovelace
    Imagine you have two data frames:
        df1 <- data.frame(V1 = c(1, 2, 3), v2 = c("a", "b", "c"))
        df2 <- data.frame(V1 = c(1, 2, 2), v2 = c("b", "b", "c"))
    Here's what they look like, side by side:
        > cbind(df1, df2)
          V1 v2 V1 v2
        1  1  a  1  b
        2  2  b  2  b
        3  3  c  2  c
    You want to know which observations are duplicates, across all variables. This can be done by pasting the cols together and then using %in%:
        df1Vec <- apply(df1, 1, paste, collapse= "")
        df2Vec <- apply(df2, 1, paste, collapse= "")
        df2Vec %in% df1Vec
        [1] FALSE  TRUE FALSE
    The second observation is thus the only one in df2 and also in df1. Is there no faster way of generating this output - something like %IN%, which is %in% across multiple variables - or should we just be content with the apply(paste) solution?

    Read the article

  • How do I create substrings from an array using PHP?

    - by mike
    I have an array of data that looks like this: 2008, Honda, Accord, Used, Car. I'm trying to figure out a way to make a number of substrings from each item in the array. For example, I would like to loop over the items and create the following combinations:
        2008
        2008 Honda
        2008 Accord
        2008 Used
        2008 Car
        2008 Honda Accord
        2008 Honda Used
        2008 Honda Car
        2008 Accord Honda
        2008 Accord Used
        2008 Accord Car
        2008 Used Honda
        2008 Used Accord
        2008 Used Car
        2008 Car Honda
        2008 Car Accord
        2008 Car Used
        Honda
        Honda 2008
        Honda Accord
        Honda Used
        Honda Car
        Honda 2008 Accord
        Honda 2008 Used
        etc ...
    I need to make sure that no duplicates are created, and I need to prevent the same word from being added twice (e.g. "Honda Honda" or "2008 Honda 2008" - I don't want that). Has anyone written anything like this, or does anyone know where I can find a script that works the same way?
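
    What the examples describe is every ordering of every non-empty subset of the items, with no item used twice in one phrase; a recursive C# sketch of that idea (class and method names are mine):

        using System.Collections.Generic;

        static class Phrases
        {
            // Returns every ordering of every non-empty subset of items (no item repeated).
            public static HashSet<string> Build(IList<string> items)
            {
                var results = new HashSet<string>();            // the set prevents duplicate phrases
                Extend("", new bool[items.Count], items, results);
                return results;
            }

            static void Extend(string prefix, bool[] used, IList<string> items, HashSet<string> results)
            {
                for (int i = 0; i < items.Count; i++)
                {
                    if (used[i]) continue;                      // never reuse the same word in one phrase
                    string phrase = prefix.Length == 0 ? items[i] : prefix + " " + items[i];
                    results.Add(phrase);
                    used[i] = true;
                    Extend(phrase, used, items, results);
                    used[i] = false;
                }
            }
        }

    The output grows factorially with the number of items; for the 5 items in the example it produces 325 phrases, which is still entirely manageable.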

    Read the article

  • Normalize database or not? Read only MyISAM table, performance is the main priority (MySQL)

    - by hello
    I'm importing data into a future database that will have one static MyISAM table (it will only be read from). I chose MyISAM because, as far as I understand, it's faster for my requirements (I'm not very experienced with MySQL / SQL at all). The table will have various columns such as ID, Name, Gender, Phone, Status... as well as Country, City and Street columns. The question is: should I create tables (e.g. Country: Country_ID, Country_Name) for the last three columns and refer to them in the main table by ID (i.e. normalize), or just store them as VARCHAR in the main table (with duplicates, obviously)? My primary concern is speed - since the table won't be written to, data integrity is not a priority. The only actions will be selecting a specific row or searching for rows that match certain criteria. Would searching by the Country, City and/or Street columns (and possibly other columns in the same search) be faster if I simply use VARCHAR?

    Read the article

  • Scalable STL set-like container for C++

    - by Pqr
    Hi, I need to store a large number of integers. There can be duplicates in the input stream, and I just need to keep the distinct values amongst them. I was using an STL set initially, but it ran out of memory when the number of input integers grew too high. I am looking for a C++ container library that would let me store numbers with this requirement, possibly backed by a file, i.e. the container should not try to keep all of the numbers in memory. I don't need to store this data persistently; I just need to find the unique values in it.
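
    If the integers fall in a known, bounded range, one alternative to a general-purpose set is a single bit per possible value; a C# sketch of that idea (the bounded-range assumption is mine, not part of the question):

        using System.Collections;
        using System.Collections.Generic;

        static class DistinctInts
        {
            // Yields each distinct value from the stream exactly once, using one bit per possible value.
            // Assumes all values lie in [0, maxValue]; a BitArray covers ranges up to int.MaxValue (~256 MB of bits).
            public static IEnumerable<int> Distinct(IEnumerable<int> stream, int maxValue)
            {
                var seen = new BitArray(maxValue + 1);
                foreach (int v in stream)
                {
                    if (seen[v]) continue;
                    seen[v] = true;
                    yield return v;
                }
            }
        }

    When even the bit array is too large, the usual fallback is an external merge: write sorted chunks to disk and merge them while dropping equal neighbours.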

    Read the article

  • construct a unique number for a string in java

    - by praveen
    We have a requirement to read/write more than 10 million strings into a file, and we do not want duplicates in the file. Since the strings are flushed to the file as soon as they are read, we are not keeping them in memory. We cannot use hashCode because of collisions, which might cause us to wrongly treat a distinct string as a duplicate. Two other approaches I found while googling:
    1. Use a message digest algorithm like MD5 - but it might be too costly to calculate and store.
    2. Use a checksum algorithm. [I am not sure whether this produces a unique key for a string - can someone please confirm?]
    Is there any other approach available? Thanks.
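
    The digest idea in option 1, sketched here in C# rather than Java: a 128-bit MD5 is not mathematically unique, but an accidental collision among 10 million strings is vanishingly unlikely, whereas a 32-bit checksum such as CRC32 is effectively guaranteed to collide at that volume and cannot serve as a unique key.

        using System;
        using System.Security.Cryptography;
        using System.Text;

        static class StringKeys
        {
            // 32-character hex digest usable as a fixed-size duplicate-detection key.
            public static string Md5Key(string s)
            {
                using (var md5 = MD5.Create())
                {
                    byte[] hash = md5.ComputeHash(Encoding.UTF8.GetBytes(s));
                    return BitConverter.ToString(hash).Replace("-", "");
                }
            }
        }

    Storing only these fixed-size keys in memory (or in a sorted key file) keeps the duplicate check cheap even when the strings themselves are long.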

    Read the article

  • Sort vector<int>(n) in O(n) time using O(m) space?

    - by Adam
    I have a vector<unsigned int> vec of size n. Each element in vec is in the range [0, m], there are no duplicates, and I want to sort vec. Is it possible to do better than O(n log n) time if you're allowed to use O(m) space? In the average case m is much larger than n; in the worst case m == n. Ideally I want something O(n). I get the feeling that there's a bucket-sort-ish way to do this:
    1. unsigned int aux[m];
    2. aux[vec[i]] = i;
    3. Somehow extract the permutation and permute vec.
    I'm stuck on how to do 3. In my application m is on the order of 16k, but this sort is in the inner loops and accounts for a significant portion of my runtime.
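
    Because the values are distinct and bounded by m, a presence array alone recovers the sorted order in O(n + m), with no permutation to extract; a sketch of that idea in C# (the container type is a stand-in for the question's vector):

        using System.Collections.Generic;

        static class BoundedSort
        {
            // Sorts distinct values drawn from [0, m] in O(n + m) time and O(m) extra space.
            public static void Sort(IList<uint> vec, uint m)
            {
                var present = new bool[m + 1];          // O(m) space: one flag per possible value
                foreach (uint v in vec) present[v] = true;

                int write = 0;
                for (uint v = 0; v <= m; v++)           // emit the flagged values in increasing order
                    if (present[v]) vec[write++] = v;
            }
        }

    This only works because duplicates are excluded; with duplicates you would count occurrences instead, which is ordinary counting sort.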

    Read the article

  • BAM design pointers

    - by Kavitha Srinivasan
    In working recently with a large Oracle customer on SOA and BAM, I discovered that some BAM best practices are not as well known as I had always assumed! There is a doc bug out to formally incorporate those learnings, but here are a few notes.

    EMS-DO parity
    When using EMS (Enterprise Message Source) as a BAM feed, the best practice is to use one EMS to write to one Data Object. There is a possibility of collisions and duplicates when multiple EMS write to the same row of a DO at the same time. This customer had 17 EMS writing to one DO at the same time. Every sensor in their BPEL process wrote to one topic, but the topic was read by one EMS corresponding to one sensor. They then used XSL within BAM to transform the payload into the BAM DO format. Hence, for a given BPEL instance, 17 sensors fired, populated 1 JMS topic, and were consumed by 17 EMS which in turn wrote to 1 Data Object. (You can imagine what would happen for later versions of the application that need to send more information to BAM!) We modified their design to use one master XSL, keyed on sensor name, for all sensors relating to a DO - say Data Object 'Orders' - and were thus able to reduce the 17 EMS to 1. For those of you wondering how squeaky clean this design is, you are right! It is indeed not squeaky clean, and that brings us to another 'inferred' best practice. (I try very hard not to state the obvious in my blogs, in the hope that every time I blog it is very useful, but this one is an exception.)

    Transformations and Calculations
    It is optimal to do transformations within an engine like BPEL. Not only does this provide modelling ease with a nice GUI XSL mapper in JDeveloper, the XSL engine in BPEL is quite efficient at runtime as well. So doing XSL transformations in BAM is not quite prudent. The same is true for any non-trivial calculations. It is best to do all transformations and calculations, and to sanitize the data, in a BPEL or similar layer and then send it to BAM (via JMS, WS, etc.). This delegates simply the rendering of reports and the mechanics of real-time reporting to the Oracle BAM reporting tool, which is what it is most suited to do.

    All nulls are not created equal
    Here is yet another possibly known fact, reiterated here. For an EMS with an Upsert operation:
    a) If empty tags or tags with no value are sent, like <Tag1/> or <Tag1></Tag1>, the DO will be overwritten with --null--.
    b) If empty tags are suppressed, i.e. not generated at all, the corresponding DO field will NOT be overwritten; the field will keep whatever value existed previously.
    For an EMS with an Insert operation, both tags with an empty value and missing tags result in --null-- being written to the DO. Hope this helps. Happy 4th!

    Read the article

  • ArchBeat Link-o-Rama for 2012-07-10

    - by Bob Rhubart
    Free Event Today: Virtual Developer Day: Oracle Fusion Development
    This free event - another in the ongoing series of OTN Virtual Developer Days - focuses on Oracle Fusion development, and features three session tracks plus hands-on labs. Agenda and session abstracts are available now so you can be ready for the live event when it kicks off today, July 10, 9am to 1pm PST / 12pm to 4pm EST / 1pm to 5pm BRT.

    Podcast: The Role of the Cloud Architect - Part 1/3
    In part one of this three-part conversation, cloud architects Ron Batra (AT&T) and James Baty (Oracle) talk about how cloud computing is driving the supply-chaining of IT and the "democratization of the activity of architecture."

    Middleware and Cloud Computing Book | Tom Laszewski
    Cloud migration expert Tom Laszewski describes Middleware and Cloud Computing by Frank Munz as "one of only a couple books that really discuss AWS and Oracle in depth."

    Cloud computing moves from fad to foundation | David Linthicum
    "When enterprises make cloud computing work, they view the application of the technology as a trade secret of sorts, so there are no press releases or white papers," says David Linthicum. "Indeed, if you see one presentation around a successful cloud computing case study, you can bet you're not hearing about 100 more."

    Oracle Real-Time Decisions: Combined Likelihood Models | Lukas Vermeer
    Lukas Vermeer concludes his extensive series of posts on decision models with a look at "an advanced approach to amalgamate models, taking us to a whole new level of predictive modeling and analytical insights; combination models predicting likelihoods using multiple child models."

    Running Oracle BPM 11g PS5 Worklist Task Flow and Human Task Form on Non-SOA Domain | Andrejus Baranovskis
    "With a standard setup, both the BPM worklist application and the Human task form run on the same SOA domain, where the BPM process is running," says Oracle ACE Director Andrejus Baranovskis. "While this works fine, this is not what we want in the development, test and production environment."

    BAM design pointers | Kavitha Srinivasan
    "When using EMS (Enterprise Message Source) as a BAM feed, the best practice is to use one EMS to write to one Data Object," says Oracle Fusion Middleware A-Team blogger Kavitha Srinivasan. "There is a possibility of collisions and duplicates when multiple EMS write to the same row of a DO at the same time."

    Changes in SOA Human Task Flow (Run-Time) for Fusion Applications | Jack Desai
    Oracle Fusion Middleware A-Team blogger Jack Desai shares a troubleshooting tip.

    Thought for the Day
    "A program which perfectly meets a lousy specification is a lousy program." - Cem Kaner
    Source: SoftwareQuotes.com

    Read the article

  • Array Multiplication and Division

    - by Narfanator
    I came across a question that (eventually) left me wondering about array arithmetic. I'm thinking specifically of Ruby, but I think the concepts are language independent. Addition and subtraction are defined in Ruby as:
        [1,6,8,3,6] + [5,6,7] == [1,6,8,3,6,5,6,7]  # all the elements of the first, then all the elements of the second
        [1,6,8,3,6] - [5,6,7] == [1,8,3]            # from the first, remove anything found in the second
    and array * scalar is defined:
        [1,2,3] * 2 == [1,2,3,1,2,3]
    But what, conceptually, should the following be? None of these are (as far as I can find) defined:
        Array x Array:   [1,2,3] * [1,2,3]    #=> ?
        Array / Scalar:  [1,2,3,4,5] / 2      #=> ?
        Array % Scalar:  [1,2,3,4,5] % 2      #=> ?
        Array / Array:   [1,2,3,4,5] / [1,2]  #=> ?
        Array % Array:   [1,2,3,4,5] % [1,2]  #=> ?
    I've found some mathematical descriptions of these operations for set theory, but I couldn't really follow them, and sets don't have duplicates (arrays do).
    Edit: Note, I do not mean vector (matrix) arithmetic, which is completely defined.
    Edit 2: If this is the wrong Stack Exchange, tell me which is the right one and I'll move it.
    Edit 3: Added the mod operators to the list.
    Edit 4: I figure array / scalar is derivable from array * scalar:
        a * b = c  =>  a = c / b
        [1,2,3] * 3 = [1,2,3]+[1,2,3]+[1,2,3] = [1,2,3,1,2,3,1,2,3]  =>  [1,2,3] = [1,2,3,1,2,3,1,2,3] / 3
    which, given that programmer's division ignores the remainder and has a modulus, suggests:
        [1,2,3,4,5] / 2 = [[1,2], [3,4]]
        [1,2,3,4,5] % 2 = [5]
    except that these are pretty clearly non-reversible operations (not that modulus ever is), which is non-ideal.
    Edit: I asked a question over on Math that led me to multisets. I think maybe extensible arrays are "multisets", but I'm not sure yet.

    Read the article

  • As a web designer, which language should I learn first for my future career? (PHP or JavaScript) [closed]

    - by kdevs3
    Possible Duplicates:
    Best Programming Language for Web Development
    How can I choose a web development language?
    What language will you choose if you are going to build something big?
    What is the right option of programming languages and tools for building our website?
    What is the easiest web programing language at....?
    Well, I'm more of a basic web designer; I know the easy stuff pretty well (you know, HTML, CSS). But I've been trying to take the next step, and I'm wondering what I should learn that will help me out the most in my future web design/programming career: should it be JavaScript, or should I try to learn a back-end programming language such as PHP? Lately I have been hearing a lot about how great and useful JavaScript is now, because of libraries such as jQuery and the possibilities opened up by Node.js and other frameworks. I've only learned the most basic JavaScript and used some jQuery (mostly plugins), so I don't really know what it can do. Would JavaScript being so popular and useful be a reason to stick with it and learn only it for now? Or, as a web designer, how important is it to learn how to make a website or web application operate and be functional, and to know how to work with servers, etc.? (Such as getting forms to work and sending data to the server and back.) I've looked at frameworks such as CodeIgniter before, and PHP looks really simple to get started with if I try to learn it, but I'm not sure how important it is for my career and what I would gain from it. I'm asking because I can't decide what I should learn first. Once I pick, I really want to take my time and learn the language; I don't want to spend time learning multiple languages at the same time, so I need to choose wisely. I'm trying to head in the right direction so my career can hopefully be successful in the future. (If you ask whether money/getting a job matters, then yes, it is a bit.) I'm hoping I can get opinions and suggestions on this question; thanks for giving me your thoughts.

    Read the article

  • Finding the Twins when Implementing Catmull-Clark subdivision using Half-Edge mesh [migrated]

    - by Ailurus
    Note: the description became a little longer than expected. If you know of a readable implementation of this algorithm using this mesh, please let me know! I'm trying to implement Catmull-Clark subdivision in Matlab (because later on the results have to be compared with some other stuff already implemented in Matlab). A first try was with a vertex-face mesh; the algorithm works, but it is of course not very efficient (since you need neighbouring information for edges and faces). Therefore, I'm now using a half-edge mesh (info), see also the paper by Lutz Kettner. Wikipedia link to the idea behind Catmull-Clark SDV: Wiki. My problem lies in finding the twin half-edges; I'm just not sure how to do this. Below I describe my thoughts on the implementation, trying to keep it concise.
    Half-edge mesh (using indices to Vertices/HalfEdges/Faces):
        Vertex (x, y, z, Outgoing_HalfEdge)
        HalfEdge (HeadVertex (or TailVertex, which one should I use?), Next, Face, Twin)
        Face (HalfEdge)
    To keep it simple for now, assume that every face is a quadrilateral. The actual mesh is a list of Vertices, HalfEdges and Faces. The new mesh will consist of NewVertices, NewHalfEdges and NewFaces, like this (note: Number_... is the number of ...):
        NumberNewVertices:  Number_Faces + Number_HalfEdges/2 + Number_Vertices
        NumberNewHalfEdges: 4 * 4 * NumberFaces
        NumberNewfaces:     4 * NumberFaces
    Catmull-Clark:
    1. Find the FacePoint (centroid) of each Face: just average the x, y, z values of its vertices and save the result as a NewVertex.
    2. Find the EdgePoint of each HalfEdge: to prevent duplicates (each HalfEdge has a Twin that would produce the same EdgePoint), only calculate EdgePoints for the HalfEdge with the lower index of the pair.
    3. Update the old Vertices.
    Ok, now all the new Vertices are calculated (however, their Outgoing_HalfEdge is still unknown). The next step is to save the new HalfEdges and Faces, and this is the part causing me problems! Loop through each old Face; there are 4 new Faces to be created (because of the quadrilateral assumption):
    - First create the 4 new HalfEdges per new Face, starting at the FacePoint and going to the EdgePoint.
    - Next, a new HalfEdge from the EdgePoint to an updated Vertex.
    - Another new one from the updated Vertex to the next EdgePoint.
    - Finally, the fourth new HalfEdge from the EdgePoint back to the FacePoint.
    The HeadVertex of each new HalfEdge is known, and so is the Next HalfEdge. The Face is also known (since it is the new face you're creating!). Only the Twin HalfEdge is unknown; how should I find it? By the way, while looping through the Vertices of the new Face, assign the Outgoing_HalfEdge to the Vertices. This is probably the place to find out which HalfEdge is the Twin. Finally, after the 4 new HalfEdges are created, save the Face with the index of the last newly created HalfEdge. I hope this is clear; if needed I can post my (obviously not-yet-finished) Matlab code.
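
    The usual trick for the twin lookup is a dictionary keyed by the ordered (tail, head) vertex pair: when a half-edge from A to B is created, its twin, if it already exists, is the one stored under (B, A). A C# sketch of just that bookkeeping (the types are simplified stand-ins for the question's Matlab structures, not its actual code):

        using System.Collections.Generic;

        class HalfEdge
        {
            public int Tail, Head;        // vertex indices
            public HalfEdge Twin;
        }

        static class TwinLinker
        {
            // Call once per newly created half-edge; 'open' holds half-edges still waiting for a twin.
            public static void Register(HalfEdge he, Dictionary<(int, int), HalfEdge> open)
            {
                if (open.TryGetValue((he.Head, he.Tail), out var twin))
                {
                    he.Twin = twin;                    // the opposite direction already exists: pair them up
                    twin.Twin = he;
                    open.Remove((he.Head, he.Tail));
                }
                else
                {
                    open[(he.Tail, he.Head)] = he;     // wait for the opposite half-edge to appear
                }
            }
        }

    Any half-edge still left in the dictionary after all faces have been built lies on a boundary and simply has no twin.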

    Read the article

  • Configurable tables in sql database

    - by dot
    I have the following tables in my database:
        Config Table:
        ======================================
        Start_Range | End Range | Config_id
        10          | 15        | 1
        ======================================

        Available_UserIDs
        ==========================
        ID | UserID | Used_YN |
        1  | 10     | t       |
        1  | 11     | f       |
        1  | 12     | f       |
        1  | 13     | f       |
        1  | 14     | f       |
        1  | 15     | f       |
        ==========================

        Users
        ==========================
        UserId | FName | LName |
        10     | John  | Doe   |
        ==========================
    This is used in a reservation system of sorts, which lets an administrator specify, in the config table, a range of numbers that will be assigned to users. Once the range has been defined, the system populates the Available_UserIDs table with all the numbers in that range and sets the Used_YN flag to false. As users sign up, they grab the next user ID number that's not in use and reserve it; the system then adds a record to the Users table. Once the admin has specified a range, they can later change it. For example, they can start with 10-15 and then, when the range is used up, specify another range such as 16-99. I've put a unique constraint on the Available_UserIDs table, as well as on the Users table, to ensure that user IDs can't be duplicated. My questions are as follows:
    1. What's the best way to prevent the admins from specifying a range that's already in use? I thought of the following options:
    - Check the Users table to see if the start or end of the range is already being used. If either is, assume that all the numbers in between are in use too, and reject the range.
    - Let them specify whatever they want and try to populate the Available_UserIDs table. If there are duplicates, just ignore that specific error message from the database and continue on.
    2. How do I find gaps in the number ranges? For example, if they specify 10-15 and then 20-25, it would be nice to be able to suggest on my web page that 16-19 is currently available. I found this article: http://stackoverflow.com/questions/1312101/how-to-find-a-gap-in-running-counter-with-sql but it only seems to return the first available number, so in my example above it would only return the number 16. I'm sure there's a simpler way to do things that I'm overlooking!
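
    For the second question, once the already-used ranges have been read from the config table, all the gaps between them fall out of a single pass over the ranges sorted by start; a C# sketch of that walk (independent of the SQL tables above, names are mine):

        using System.Collections.Generic;
        using System.Linq;

        static class RangeGaps
        {
            // Returns the unused (start, end) gaps between non-overlapping used ranges.
            public static List<(int Start, int End)> Find(IEnumerable<(int Start, int End)> used)
            {
                var gaps = new List<(int, int)>();
                (int Start, int End)? previous = null;

                foreach (var range in used.OrderBy(r => r.Start))
                {
                    if (previous != null && range.Start > previous.Value.End + 1)
                        gaps.Add((previous.Value.End + 1, range.Start - 1));  // e.g. 16..19 between 10-15 and 20-25
                    previous = range;
                }
                return gaps;
            }
        }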

    Read the article

  • How to optimise mesh data

    - by Wardy
    I have some procedurally generated mesh data and I want to reduce it down to its minimum number of verts. In case it matters, this is a Unity project. Working from a simple example, let's assume a typical flat surface of points, 2 by 3; the point / vertex at [1,1] is used in many triangles. I've generated a mesh for a voxel-type engine that adds verts to a list based on face visibility, and now I want to remove all the duplicates. Can anyone come up with an efficient way of doing this? What I have is so bad it's not even funny (and I don't even think it's logically correct):
        private void Optimize()
        {
            Vector3 v;
            Vector3 v2;
            for (int i = 0; i < Vertices.Count; i++)
            {
                v = Vertices[i];
                for (int j = i + 1; j < Vertices.Count; j++)
                {
                    v2 = Vertices[j];
                    if (v.x == v2.x && v.y == v2.y && v.z == v2.z)
                    {
                        for (int ind = 0; ind < Indices.Count; ind++)
                        {
                            if (Indices[ind] == j)
                            {
                                Indices[ind] = i;
                            }
                            else if (Indices[ind] > j && Indices[ind] > 0)
                                Indices[ind]--;
                        }
                        Vertices.RemoveAt(j);
                        Uvs.RemoveAt(j);
                        Normals.RemoveAt(j);
                    }
                }
            }
        }
    EDIT: Ok, I managed to get this (code sample above updated) to render an "optimised" set of verts, but the UV data is all wrong now, which would make sense because I'm basically just removing any UV vector that represents a UV coordinate for a removed vert and not actually considering what I need to do to "fix the tri", so to speak. The code now seemingly does work, but it's quite time consuming; I'm still looking to optimise it further.
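
    A dictionary keyed by position turns the quadratic scan above into a single pass; a hedged sketch against the same Vertices/Uvs/Normals/Indices lists, assuming they are List<Vector3>/List<Vector2>/List<Vector3>/List<int> fields of the same class (Unity's Vector3 can key a dictionary directly, though positions produced by different float math may need rounding first; like the original, this keeps the UV and normal of the first vertex seen at each position):

        private void Optimize()
        {
            var remap = new Dictionary<Vector3, int>();   // position -> index in the rebuilt vertex list
            var newVertices = new List<Vector3>();
            var newUvs = new List<Vector2>();
            var newNormals = new List<Vector3>();

            for (int i = 0; i < Indices.Count; i++)
            {
                Vector3 v = Vertices[Indices[i]];
                if (!remap.TryGetValue(v, out int newIndex))
                {
                    newIndex = newVertices.Count;         // first time this position appears
                    remap[v] = newIndex;
                    newVertices.Add(v);
                    newUvs.Add(Uvs[Indices[i]]);
                    newNormals.Add(Normals[Indices[i]]);
                }
                Indices[i] = newIndex;                    // retarget the triangle to the merged vertex
            }

            Vertices = newVertices;                       // unreferenced vertices are dropped as a side effect
            Uvs = newUvs;
            Normals = newNormals;
        }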

    Read the article

  • How to explain bad software to non-technical people?

    - by mtutty
    In discussing software development with non-technical people (customers, business owners, project sponsors, etc.), I often resort to analogies and metaphors. It's relatively easy and effective to use a "house" or other metaphor for describing the size and complexity of new development. However, we often inherit someone else's code or data, and this approach doesn't seem to hold up as well when trying to explain why we're gutting something that already seems to work. Of course we can point to cycle time and cost to be saved in the future, but this generally means nothing to business folks. I know doctors can say "just take this pill," but I'm not sure that software devs have the same authority. Ideas?
    EDIT: Let me add a bit to the discussion. The specific project I'm talking about has customers that don't realize (or care) about specific aspects of the system we're retiring (i.e., they think it was just fine):
    - The system would save a NEW RECORD every time someone updated a field.
    - The system contained tables for reference data. These tables had new records added every day, even though they were duplicates of previous records. And there was no way to tie the reference data used for a particular case at the time it was closed. This is like 99% of the data in the old system.
    - The field NAMES also have spaces, apostrophes and other inappropriate characters in them, making everything harder to work with.
    - In addition to the incredible amount of duplicate data, they have around 1000 XLS files with data they want added to the system. Previously, they would do a spreadsheet for each case in the database, IN ADDITION TO what they typed into the database.
    Getting rid of this old, unneeded information and piping in the XLS data comprises about 80% of the total project effort, and was not something we could accurately predict. I'm trying to find a concrete way to describe how bad this thing was, mostly so that the customer will understand why the migration process has been so time-consuming. The actual coding was done pretty quickly and the new system works fine, but without the old data they won't be happy. Sorry to get into the weeds, but most of the answers I've seen so far are pretty basic scope/schedule/cost things. I've been doing this for 15 years, so this really is more of a reflective, philosophical question - but without some of the details it can be difficult to really appreciate the awful beauty of this problem.

    Read the article

  • F# Project Euler Problem 1

    - by MarkPearl
    Every now and then I give Project Euler a quick browse. Since I have been playing with F#, I have found it a great way to learn the basics of the language. Today I thought I would give problem 1 an attempt…
    Problem 1: If we list all the natural numbers below 10 that are multiples of 3 or 5, we get 3, 5, 6 and 9. The sum of these multiples is 23. Find the sum of all the multiples of 3 or 5 below 1000.
    My F# Solution: I broke this problem into two functions…
    1) Be able to generate a collection of numbers that are multiples of a number but are smaller than another number:
        let GenerateMultiplesOfXbelowY X Y =
            X |> Seq.unfold (fun i -> if (i < Y) then Some(i, i + X) else None)
    I then needed something that generated collections for multiples of 3 & 5 and then removed any duplicates. Once this was done I would need to sum these all together to get a result. I found the Seq module to be extremely useful to achieve this…
        let Multiples =
            Seq.append (GenerateMultiplesOfXbelowY 3 1000) (GenerateMultiplesOfXbelowY 5 1000)
            |> Seq.distinct
            |> Seq.fold (fun acc a -> acc + a) 0
            |> Console.WriteLine
            |> Console.ReadLine
    My complete solution was…
        open System

        let GenerateMultiplesOfXbelowY X Y =
            X |> Seq.unfold (fun i -> if (i < Y) then Some(i, i + X) else None)

        let Multiples =
            Seq.append (GenerateMultiplesOfXbelowY 3 1000) (GenerateMultiplesOfXbelowY 5 1000)
            |> Seq.distinct
            |> Seq.fold (fun acc a -> acc + a) 0
            |> Console.WriteLine
            |> Console.ReadLine
    This seemed to generate the correct result in a relatively short period of time, although I am sure I will get some comments from the experts who know of some intrinsic method to achieve all of this in one method call.

    Read the article
