Search Results

Search found 12701 results on 509 pages for 'fulltext index'.

Page 43 of 509

  • How do I point a new domain to start on a page that's not index.html on separate hosting?

    - by Owen Campbell-Moore
    I'm using a service (CMS/host) called Squarespace to host my site, and today I'm registering the domain for it. Basically, how do I make it so that when somebody types www.tedxoxford.com it points at http://www.tedxoxford.com/landing (currently http://tedxoxford.squarespace.com/landing) instead of the default index? Is this possible? Squarespace is quite a restricted CMS, and all the logos etc. point back to the index, so I don't want people ending up on my landing/splash page every time they want the home page, only the first time they type in the URL. A dirty hack would be to check the referrer and redirect anyone hitting the index to the landing page, but that's a lot of loading overhead I'd rather avoid...

    Read the article

  • How to prevent Google from finding my admin index page?

    - by krish
    I am running a website, but I stopped it for a few days and put up an under-construction page because the "Index of" listing for my admin directory is visible to the outside world through Google search. One of my friends told me that with the directory index visible you are one step away from accessing the password file, and he showed me how easily it can be found using a Google search. How can I prevent this? I am hosting my site with a hosting company and reported this to them, but they simply replied that it is still secure and I need not worry... Do I really not need to worry, and can I continue running the site with the admin directory index visible?

    Read the article

  • Create a filter to consider http://example.com/foo/bar as http://example.com/index.php/foo/bar

    - by magnetik
    I'm using URL rewriting to map http://example.com/foo/bar/ to http://example.com/index.php/foo/bar. I'm not linking the index.php/... URL anywhere, but for some reason some users arrive at the index.php URL. In Google Analytics I end up with a lot of duplicates, which makes it annoying to follow the traffic. I've looked at the advanced filters but I'm struggling to make them work. Any regex and Google Analytics pros who can help me out?
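
    For reference, here is a minimal sketch of the normalization such a filter needs to perform, written in Python purely to make the regex concrete (the function name is made up; in practice the pattern would go into a Google Analytics advanced or search-and-replace filter):

    ```python
    import re

    def normalize(request_uri):
        # Collapse '/index.php/foo/bar' into '/foo/bar' so both URL forms
        # report as one page; any other URI passes through unchanged.
        return re.sub(r'^/index\.php(?=/|$)', '', request_uri) or '/'

    assert normalize('/index.php/foo/bar') == '/foo/bar'
    assert normalize('/foo/bar') == '/foo/bar'
    ```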

    Read the article

  • Can you disavow a whole domain apart from the index page?

    - by Silver89
    Many years ago I may have bought a few sitewide links for some of my sites; these have now come back to haunt me and I need to sort them out. I've tried to contact the owners, but they're too lazy to bother changing their sites, so I figure it's time to disavow the links. But is there a way to disavow all of the sitewide links on a domain apart from those on the index page, and would leaving the index page be a benefit, or would it still be seen as spammy? Something like ... # Contacted owner of shadyseo.com on 7/1/2012 to # ask for link removal but got no response domain:shadyseo.com !shadyseo.com/index.php

    Read the article

  • Editing Django's admin index <div id='module'> tag

    - by zen
    I am new to the Django framework. On Django's admin index page I'd like to get rid of the "s" at the end of my model names. Example: <div class="module"> <table summary="Models available in the my application."> <caption><a href="" class="section">My application</a></caption> <tr> <th scope="row"><a href="model/">Model**s**</a></th> <td><a href="model/add/" class="addlink">Add</a></td> <td><a href="model/" class="changelink">Change</a></td> </tr> </table> </div> I know of a way to do this but I am really looking for the file I should edit. Where is it and what exactly should I do? I can't seem to pinpoint where it is coming from.
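
    For what it's worth, the trailing "s" normally comes from the model's pluralized metadata rather than from an admin template file. A minimal sketch, assuming a hypothetical model, of overriding it (the admin index uses the model's verbose_name_plural for the link text):

    ```python
    # models.py -- hypothetical app; the admin index labels each row with the
    # model's verbose_name_plural, so overriding it removes the automatic "s".
    from django.db import models

    class Model(models.Model):
        name = models.CharField(max_length=100)

        class Meta:
            verbose_name = "Model"
            verbose_name_plural = "Model"   # admin index shows "Model", not "Models"
    ```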

    Read the article

  • std::vector iterator or index access speed question

    - by Simone Margaritelli
    Just a quick question. I have a std::vector<SomeClass *> v; in my code and I need to access its elements very often, looping over them forward and backward. Which is the faster access type between these two? Iterator access std::vector<SomeClass *> v; std::vector<SomeClass *>::iterator i; std::vector<SomeClass *>::reverse_iterator j; // i loops forward, j loops backward for( i = v.begin(), j = v.rbegin(); i != v.end() && j != v.rend(); i++, j++ ){ // some operations on v items } Subscript access (by index) std::vector<SomeClass *> v; unsigned int i, j, size = v.size(); // i loops forward, j loops backward for( i = 0, j = size - 1; i < size && j >= 0; i++, j-- ){ // some operations on v items } Also, does const_iterator offer a faster way to access vector elements in cases where I do not have to modify them? Thank you in advance.

    Read the article

  • Why Does This Maintainability Index Increase?

    - by Timothy
    I would be appreciative if someone could explain to me the difference between the following two pieces of code in terms of Visual Studio's Code Metrics rules. Why does the Maintainability Index increase slightly if I don't encapsulate everything within using ( )? Sample 1 (MI score of 71) public static String Sha1(String plainText) { using (SHA1Managed sha1 = new SHA1Managed()) { Byte[] text = Encoding.Unicode.GetBytes(plainText); Byte[] hashBytes = sha1.ComputeHash(text); return Convert.ToBase64String(hashBytes); } } Sample 2 (MI score of 73) public static String Sha1(String plainText) { Byte[] text, hashBytes; using (SHA1Managed sha1 = new SHA1Managed()) { text = Encoding.Unicode.GetBytes(plainText); hashBytes = sha1.ComputeHash(text); } return Convert.ToBase64String(hashBytes); } I understand metrics are meaningless outside of a broader context and understanding, and programmers should exercise discretion. While I could boost the score up to 76 with return Convert.ToBase64String(sha1.ComputeHash(Encoding.Unicode.GetBytes(plainText))), I shouldn't. I would clearly be just playing with numbers and it isn't truly any more readable or maintainable at that point. I am curious though as to what the logic might be behind the increase in this case. It's obviously not line-count.
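
    For context, the maintainability index Visual Studio reports is commonly documented as a rescaled version of the classic formula below (treat this as an assumption to verify against Microsoft's Code Analysis documentation):

    $$\mathit{MI} = \max\!\left(0,\ \frac{171 - 5.2\,\ln V - 0.23\,\mathit{CC} - 16.2\,\ln(\mathit{LOC})}{171}\times 100\right)$$

    Here V is the Halstead volume (a function of the operators and operands used), CC is the cyclomatic complexity, and LOC is the number of lines of code. Since neither sample introduces branching, the cyclomatic complexity term is identical, so the small difference has to come from the Halstead volume and ln(LOC) terms rather than a raw line count.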

    Read the article

  • Strange EListError occurrence (when accessing a variable-defined index)

    - by michal
    Hi, I have a TList which stores some objects. Now I have a function which does some operations on that list: function SomeFunct(const AIndex: integer): IInterface begin if (AIndex > -1) and (AIndex < fMgr.Windows.Count ) then begin if (fMgr.Windows[AIndex] <> nil) then begin if not Supports(TForm(fMgr.Windows[AIndex]), IMyFormInterface, result) then result:= nil; end; end else result:= nil; end; What is really strange is that accessing fMgr.Windows with any proper index causes an EListError... However, if I hard-code it (for example, replacing AIndex with the value 0 or 1) it works fine. I tried debugging it; the function gets called twice, with arguments 0 and 1 (as expected). While AIndex = 0, evaluating fMgr.Windows[AIndex] results in an EListError at $someAddress, while evaluating fMgr.Windows[0] instead returns proper results... What is even stranger, even though there is an EListError, the function returns proper data and doesn't show anything, just info on two EListError memory leaks on shutdown (using FastMM). Any ideas what could be wrong?! Thanks in advance, michal

    Read the article

  • Access 2003 VBA: Return only the index of the last item selected in a ListBox

    - by Eric D. Johnson
    I will preface this by saying that this is my first time using list boxes, and earlier posts were criticized for lacking detail, so all help is greatly appreciated and I hope this is enough information without being overkill. Currently, I have a list box updating a junction table with an on-click event (it iterates through the selected items and, if they are not in the table, adds them). The list box is also updated by an option group (based on the option group value, a query populates the list with the appropriate items, and they are selected/highlighted based on the junction table). Also, when an item is a "sub-category", the "category" is also selected. This functions perfectly until I ask it to do more... Problem 1: I need to differentiate "categories" of items from each other, so I have included a blank item in the list box to add a space between categories. When the blank items are present the list box does not update the junction table properly, and vice versa. Problem 2: My users want to be able to deselect the "category" under certain circumstances. This is fine: just deselect the "category" after the "sub-category" is selected. However, the "category" is re-selected whenever the list box is clicked again, because the code iterates through all entries. Perceived solution for both problems: return only the index of the item (de)selected and handle it accordingly. Is this possible? If so, how? Or should I take a different approach?

    Read the article

  • C# String.Replace with a start/index (Added my (slow) implementation)

    - by Chris T
    I'd like an efficient method that would work something like this. EDIT: Sorry, I didn't originally include what I'd tried; I've updated the example now. // Method signature, Only replaces first instance or how many are specified in max public int MyReplace(ref string source,string org, string replace, int start, int max) { int ret = 0; int len = replace.Length; int olen = org.Length; for(int i = 0; i < max; i++) { // Find the next instance of the search string int x = source.IndexOf(org, ret + olen); if(x > ret) ret = x; else break; // Insert the replacement source = source.Insert(x, replace); // And remove the original source = source.Remove(x + len, olen); // removes original string } return ret; } string source = "The cat can fly but only if he is the cat in the hat"; int i = MyReplace(ref source,"cat", "giraffe", 8, 1); // Results in the string "The cat can fly but only if he is the giraffe in the hat" // i contains the index of the first letter of "giraffe" in the new string The only reason I'm asking is that I'd imagine my implementation getting slow with thousands of replaces.
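
    One general way to avoid the repeated Insert/Remove cost is to build the result in pieces and join once at the end; a minimal sketch of that idea (shown in Python only to keep it short, with a made-up helper name; a C# version could do the same thing with a StringBuilder):

    ```python
    def replace_from(source, old, new, start, max_count):
        # Replace up to max_count occurrences of old with new, searching only
        # from index start onward; the result is assembled in pieces rather
        # than by editing the string in place.
        pieces, pos, count = [source[:start]], start, 0
        while count < max_count:
            i = source.find(old, pos)
            if i == -1:
                break
            pieces.append(source[pos:i])
            pieces.append(new)
            pos = i + len(old)
            count += 1
        pieces.append(source[pos:])
        return ''.join(pieces)

    s = "The cat can fly but only if he is the cat in the hat"
    assert replace_from(s, "cat", "giraffe", 8, 1) == \
        "The cat can fly but only if he is the giraffe in the hat"
    ```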

    Read the article

  • git clone fails with "index-pack" failed?

    - by gct
    So I created a remote repo that's not bare (because I need redmine to be able to read it), and it's set to be shared with the group (so git init --shared=group). I was able to push to the remote repo and now I'm trying to clone it. If I clone it over the net I get this: remote: Counting objects: 4648, done. remote: Compressing objects: 100% (2837/2837), done. error: git-upload-pack: git-pack-objects died with error.B/s fatal: git-upload-pack: aborting due to possible repository corruption on the remote side. remote: aborting due to possible repository corruption on the remote side. fatal: early EOF fatal: index-pack failed I'm able to clone it locally without a problem, and I ran "git fsck", which only reports some dangling trees/blobs, which I understand aren't a problem. What could be causing this? I'm still able to pull from it, just not clone. I should note that the remote git version is 1.5.6.5 while the local one is 1.6.0.4. I tried cloning my local copy of the repo, stripping out the .git folder and pushing to a new repo, then cloning the new repo, and I get the same error, which leads me to believe it may be a file in the repo that's causing git-upload-pack to fail... Edit: I have a number of Windows binaries in the repo, because I just built the Python modules and then stuck them in there so everyone else didn't have to build them as well. If I remove the Windows binaries and push to a new repo, I can clone again, so perhaps that gives a clue. Trying to narrow down exactly what file is causing the problem now.

    Read the article

  • How do you efficiently do bulk index lookups?

    - by Liron Shapira
    I have these entity kinds: Molecule Atom MoleculeAtom Given a list(molecule_ids) whose length is in the hundreds, I need to get a dict of the form {molecule_id: list(atom_ids)}. Likewise, given a list(atom_ids) whose length is in the hundreds, I need to get a dict of the form {atom_id: list(molecule_ids)}. Both of these bulk lookups need to happen really fast. Right now I'm doing something like: atom_ids_by_molecule_id = {} for molecule_id in molecule_ids: moleculeatoms = MoleculeAtom.all().filter('molecule =', db.Key.from_path('molecule', molecule_id)).fetch(1000) atom_ids_by_molecule_id[molecule_id] = [ MoleculeAtom.atom.get_value_for_datastore(ma).id() for ma in moleculeatoms ] Like I said, len(molecule_ids) is in the hundreds. I need to do this kind of bulk index lookup on almost every single request, and I need it to be FAST, and right now it's too slow. Ideas: Will using a Molecule.atoms ListProperty do what I need? Consider that I am storing additional data on the MoleculeAtom node, and remember it's equally important for me to do the lookup in the molecule-atom and atom-molecule directions. Caching? I tried memcaching lists of atom IDs keyed by molecule ID, but I have tons of atoms and molecules, and the cache can't fit it. How about denormalizing the data by creating a new entity kind whose key name is a molecule ID and whose value is a list of atom IDs? The idea is, calling db.get on 500 keys is probably faster than looping through 500 fetches with filters, right?
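
    A minimal sketch of that denormalization idea, assuming the old google.appengine.ext.db API; the MoleculeAtomIndex kind name and the keying scheme are made up for illustration:

    ```python
    from google.appengine.ext import db

    class MoleculeAtomIndex(db.Model):
        # key_name encodes the molecule ID; atom_ids is the denormalized list
        atom_ids = db.ListProperty(long)

    def atoms_by_molecule(molecule_ids):
        # One batch db.get instead of one filtered query per molecule.
        keys = [db.Key.from_path('MoleculeAtomIndex', 'm%d' % mid)
                for mid in molecule_ids]
        result = {}
        for mid, entity in zip(molecule_ids, db.get(keys)):
            result[mid] = entity.atom_ids if entity else []
        return result
    ```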

    Read the article

  • X-Forwarded-For causing Undefined index in PHP

    - by bateman_ap
    Hi, I am trying to integrate some third-party tracking code into one of my sites; however, it is throwing up some errors, and their support isn't being much use, so I want to try to fix their code myself. Most of it I have fixed, but this function is giving me problems: private function getXForwardedFor() { $s =& $this; $xff_ips = array(); $headers = $s->getHTTPHeaders(); if ($headers['X-Forwarded-For']) { $xff_ips[] = $headers['X-Forwarded-For']; } if ($_SERVER['REMOTE_ADDR']) { $xff_ips[] = $_SERVER['REMOTE_ADDR']; } return implode(', ', $xff_ips); // will return blank if not on a web server } In my dev environment, where I am showing all errors, I am getting: Notice: Undefined index: X-Forwarded-For in /sites/webs/includes/OmnitureMeasurement.class.php on line 1129 Line 1129 is: if ($headers['X-Forwarded-For']) { If I print out $headers I get: Array ( [Host] => www.domain.com [User-Agent] => Mozilla/5.0 (Windows; U; Windows NT 6.1; en-GB; rv:1.9.2.3) Gecko/20100401 Firefox/3.6.3 [Accept] => text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8 [Accept-Language] => en-gb,en;q=0.5 [Accept-Encoding] => gzip,deflate [Accept-Charset] => ISO-8859-1,utf-8;q=0.7,*;q=0.7 [Keep-Alive] => 115 [Connection] => keep-alive [Referer] => http://www10.toptable.com/ [Cookie] => PHPSESSID=nh9jd1ianmr4jon2rr7lo0g553; __utmb=134653559.30.10.1275901644; __utmc=134653559 [Cache-Control] => max-age=0 ) I can't see X-Forwarded-For in there, which I think is causing the problem. Is there something I should add to the function to take this into account? I am using PHP 5.3 and Apache 2 on Fedora.

    Read the article

  • SQL Server 2008 spatial index and CPU utilization with MapGuide Open Source 2.1

    - by Antonio de la Peña
    I have a SQL Server table with hundreds of thousands of geometry-type parcels. I have made indexes on them, trying different combinations of density and objects-per-cell settings. So far I'm settling for LOW, LOW, MEDIUM, MEDIUM and 16 objects per cell, and I made an SP that sets the bounding box according to the extents of the entities in the table. There is an incredible performance boost, from queries taking almost minutes without an index to just seconds, and it gets faster when the zoom is closer, since fewer objects are displayed. Yet CPU utilization hits 100% when querying for features, even when the queries themselves are fast. I'm worried this will not fly in a production environment. I am using MapGuide Open Source 2.1 for this project, but I am positive the CPU load is caused by SQL Server. I wonder if my indexes are set up properly; I haven't found any clear documentation on how to set them up, and every article I've read basically says "it depends..." but nothing specific. Do you have any recommendations for me, including books or articles? Thank you.

    Read the article

  • How to index a table with a Type 2 slowly changing dimension for optimal performance

    - by The Lazy DBA
    Suppose you have a table with a Type 2 slowly-changing dimension. Let's express this table as follows, with the following columns: * [Key] * [Value1] * ... * [ValueN] * [StartDate] * [ExpiryDate] In this example, let's suppose that [StartDate] is effectively the date in which the values for a given [Key] become known to the system. So our primary key would be composed of both [StartDate] and [Key]. When a new set of values arrives for a given [Key], we assign [ExpiryDate] to some pre-defined high surrogate value such as '12/31/9999'. We then set the existing "most recent" records for that [Key] to have an [ExpiryDate] that is equal to the [StartDate] of the new value. A simple update based on a join. So if we always wanted to get the most recent records for a given [Key], we know we could create a clustered index that is: * [ExpiryDate] ASC * [Key] ASC Although the keyspace may be very wide (say, a million keys), we can minimize the number of pages between reads by initially ordering them by [ExpiryDate]. And since we know the most recent record for a given key will always have an [ExpiryDate] of '12/31/9999', we can use that to our advantage. However... what if we want to get a point-in-time snapshot of all [Key]s at a given time? Theoretically, the entirety of the keyspace isn't all being updated at the same time. Therefore for a given point-in-time, the window between [StartDate] and [ExpiryDate] is variable, so ordering by either [StartDate] or [ExpiryDate] would never yield a result in which all the records you're looking for are contiguous. Granted, you can immediately throw out all records in which the [StartDate] is greater than your defined point-in-time. In essence, in a typical RDBMS, what indexing strategy affords the best way to minimize the number of reads to retrieve the values for all keys for a given point-in-time? I realize I can at least maximize IO by partitioning the table by [Key], however this certainly isn't ideal. Alternatively, is there a different type of slowly-changing-dimension that solves this problem in a more performant manner?

    Read the article

  • Distinct Value Array for View Controller Index Using Core Data

    - by b.dot
    Hi, I'm trying to create an index representing the first-letter value of each record in a Core Data store, to be used in a table view controller. I'm using a snippet of the code from Apple's documentation. I would simply like to produce an array or dictionary of distinct values as the result. My store already has the character defined within each record object. Questions: 1) I'm having a problem understanding NSDictionaryResultType. Where does the resulting dictionary object get received so that I can assign its keys to the view controller? The code seems to only return an array. 2) If I include the line containing NSDictionaryResultType, I get no results. 3) I realize that I could do this in a loop, but I'm hoping this will work. Thanks! NSEntityDescription *entity = [NSEntityDescription entityForName:@"People" inManagedObjectContext:managedObjectContext]; NSFetchRequest *request = [[NSFetchRequest alloc] init]; [request setEntity:entity]; [request setResultType:NSDictionaryResultType]; // This line causes no results. [request setReturnsDistinctResults:YES]; [request setPropertiesToFetch :[NSArray arrayWithObject:@"alphabetIndex"]]; NSError *error; NSArray *objects = [managedObjectContext executeFetchRequest:request error:&error];

    Read the article

  • C# error: index was outside the bounds of the array

    - by iliailiaey
    I have written the code below, but I get the error: Index was outside the bounds of the array. I can't understand the reason for it. How can I correct the code to prevent the error? (In the code, I want to build a byte array of size 57600 from a byte array of size 38400.) int q = 0; int nbytes = 57600; byte[] gh = new byte[38400]; byte[] byte8 = new byte[nbytes]; byte[] aa = { 0xf8, 0x07, 0XE0, 0X1F }; for (int y = 0; y < nbytes-3; y += 3) { if (q < 38400-3) { byte8[y] = (byte)(gh[q] & aa[1]); byte8[y + 1] = (byte)(((gh[q] & aa[1]) << 5) | ((gh[q + 1] & aa[2]) >> 3)); byte8[y + 2] = (byte)((gh[q + 1] & aa[3]) << 3); q += 2; } }

    Read the article

  • Populate an ASP.NET MVC index page with data from the database

    - by Sunil Ramu
    I have a web application in which I need to fetch data from the database and display it on the index page. As you know, ASP.NET MVC provides options to edit, delete, etc. I need to populate the page using the conventional DB approach, with a stored procedure to retrieve results; I don't want to use LINQ. This is my model entity class: using System; using System.Collections.Generic; using System.Linq; using System.Web; namespace LogMVCApp.Models { public class Property { public int Id { get; set; } public string LogInId { get; set; } public string Username { get; set; } public string Action { get; set; } public string Information { get; set; } public bool Passed{get; set; } public string LogType { get; set; } } } and I need to retrieve data using something like this... var conString = ConfigurationManager.ConnectionStrings["connection"].ToString(); var conn = new SqlConnection(conString); var command = new SqlCommand("LogInsert", conn){CommandType=CommandType.StoredProcedure};

    Read the article

  • Rails Browser Detection Methods

    - by alvincrespo
    Hey everyone, I was wondering what methods are standard within the industry for doing browser detection in Rails. Is there a gem, library, or sample code somewhere that can help determine the browser and apply a class or id to the body element of the (X)HTML? Thanks; I'm just wondering what everyone uses and whether there is an accepted method of doing this. I know that we can get the user.agent and parse that string, but I'm not sure if that is an acceptable way to do browser detection. Also, I'm not trying to debate feature detection here; I've read multiple answers for that on StackOverflow. All I'm asking for is what you have done. [UPDATE] Thanks to faunzy on GitHub, I've come to understand a bit about checking the user agent in Rails, but I'm still not sure if this is the best way to go about it in Rails 3. Here is what I've got so far: def users_browser user_agent = request.env['HTTP_USER_AGENT'].downcase @users_browser ||= begin if user_agent.index('msie') && !user_agent.index('opera') && !user_agent.index('webtv') 'ie'+user_agent[user_agent.index('msie')+5].chr elsif user_agent.index('gecko/') 'gecko' elsif user_agent.index('opera') 'opera' elsif user_agent.index('konqueror') 'konqueror' elsif user_agent.index('ipod') 'ipod' elsif user_agent.index('ipad') 'ipad' elsif user_agent.index('iphone') 'iphone' elsif user_agent.index('chrome/') 'chrome' elsif user_agent.index('applewebkit/') 'safari' elsif user_agent.index('googlebot/') 'googlebot' elsif user_agent.index('msnbot') 'msnbot' elsif user_agent.index('yahoo! slurp') 'yahoobot' #Everything thinks it's mozilla, so this goes last elsif user_agent.index('mozilla/') 'gecko' else 'unknown' end end return @users_browser end

    Read the article

  • C# DynamicPDF Merging causing "Index out of bounds" error

    - by Dining Philanderer
    Greetings, We use DynamicPDF to merge multiple PDF documents stored in a MSSQL database. The vast majority of times it works wonderfully, but occasionally one of these documents will fail to merge generating the exception message "Index was outside the bounds of the array." I think I have isolated the problem to PDF files that are greater than 8.5 x 11.0. Does anyone know if this is a known issue with DynamicPDF? The merging code is posted here. What would be ideal is if there is a way to resize the PDF files to the correct size so this is not a concern at all... for (int docs = 0; docs < dsPDFInfo.Tables[0].Rows.Count; docs++) { byte[] bytePDFArray = (byte[])dsPDFInfo.Tables[0].Rows[docs]["Content"]; int iContentSize = Convert.ToInt32(dsPDFInfo.Tables[0].Rows[docs]["ContentSize"]); MemoryStream ms = new MemoryStream(bytePDFArray, 0, iContentSize); ceTe.DynamicPDF.Merger.PdfDocument pdfdoc = new ceTe.DynamicPDF.Merger.PdfDocument(ms); ceTe.DynamicPDF.Merger.MergeDocument mergedoc = new ceTe.DynamicPDF.Merger.MergeDocument(pdfdoc); docCombinedPDF.Append(mergedoc); } Thanks....

    Read the article

  • Normals per index?

    - by WarrenFaith
    I have a pyramid which has 5 vertices and 18 indices. I want to add normals to each face, but I have only found solutions for per-vertex normals. That means I can't use indices to define my pyramid; I would need 18 vertices (with the same vertex repeated 3 times for the same point in space). There must be a solution that applies normals not per vertex but per index. Some code (javascript): var vertices = [ -half, -half, half, // 0 front left half, -half, half, // 1 front right half, -half, -half, // 2 back right -half, -half, -half, // 3 back left 0.0, Math.sqrt((size * size) - (2 * (half * half))) - half, 0.0 // 4 top ]; var vertexNormals = [ // front face normaleFront[0], normaleFront[1], normaleFront[2], normaleFront[0], normaleFront[1], normaleFront[2], normaleFront[0], normaleFront[1], normaleFront[2], // back face normaleBack[0], normaleBack[1], normaleBack[2], normaleBack[0], normaleBack[1], normaleBack[2], normaleBack[0], normaleBack[1], normaleBack[2], // left face normaleLeft[0], normaleLeft[1], normaleLeft[2], normaleLeft[0], normaleLeft[1], normaleLeft[2], normaleLeft[0], normaleLeft[1], normaleLeft[2], // right face normaleRight[0], normaleRight[1], normaleRight[2], normaleRight[0], normaleRight[1], normaleRight[2], normaleRight[0], normaleRight[1], normaleRight[2], // bottom face 0.0, -1.0, 0.0, 0.0, -1.0, 0.0, 0.0, -1.0, 0.0, 0.0, -1.0, 0.0, 0.0, -1.0, 0.0, 0.0, -1.0, 0.0, ]; var pyramidVertexIndices = [ 0, 1, 4, // Front face 2, 3, 4, // Back face 3, 0, 4, // Left face 1, 2, 4, // Right face 0, 1, 2, 2, 3, 0, // Bottom face ];
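
    For illustration only, here is a language-agnostic sketch (written in Python; the same reshuffling applies to the JavaScript arrays above) of the usual workaround: since vertex attributes such as normals are defined per vertex rather than per index, shared vertices are duplicated per face and each copy gets that face's normal:

    ```python
    def expand_with_face_normals(vertices, indices):
        # vertices: list of (x, y, z); indices: flat list of triangle indices.
        # Returns unindexed per-face vertex and normal lists for flat shading.
        out_vertices, out_normals = [], []
        for f in range(0, len(indices), 3):
            a, b, c = (vertices[indices[f + k]] for k in range(3))
            u = [b[i] - a[i] for i in range(3)]
            v = [c[i] - a[i] for i in range(3)]
            n = [u[1] * v[2] - u[2] * v[1],      # cross product u x v
                 u[2] * v[0] - u[0] * v[2],      # gives the face normal
                 u[0] * v[1] - u[1] * v[0]]
            length = (n[0] ** 2 + n[1] ** 2 + n[2] ** 2) ** 0.5 or 1.0
            n = [x / length for x in n]
            for p in (a, b, c):                  # one copy of the normal per corner
                out_vertices.append(p)
                out_normals.append(n)
        return out_vertices, out_normals
    ```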

    Read the article

  • Two Objects created with the same Address in Flex

    - by James
    Hi, I have an issue in Flex which is causing a bit of a headache! I am adding objects to an ArrayCollection, but in doing so another ArrayCollection is also picking up these changes, even though there is no binding occurring. I can see from the debugger that the two ACs have the same address, but for the life of me I can't figure out why. I have two ArrayCollections: model.index.rows //The main array collection model.index.holdRows //The array collection that imitates the above This phantom data binding occurs only for the first iteration in the loop; for all the others it writes just once. The reason this is proving troublesome is that it creates duplicate entries in my datagrid. public override function handleMessage(message:IMessage):void { super.handleMessage(message); if (message is IndexResponse) { var response:IndexResponse = message as IndexResponse; model.index.rows.removeAll(); model.index.indexIsEmpty = response.nullIndex; if (model.index.indexIsEmpty !== true) { //Update the index model from the response. Note: each property of the row object will be shown in the UI as a column in the grid response.index.forEach(function(entry:EntryData, i:int, a:Array):void { var row:Object = { fileID: entry.fileID, dadName: entry.dadName }; entry.tags.forEach(function(tag:Tag, i:int, a:Array):void { row[tag.name] = tag.value; }); model.index.rows.addItem(row); }); if(model.index.indexForNetworkView == true){ model.index.holdRows.source = model.index.holdRows.source.concat(model.index.rows.source); model.index.indexCounter++; model.index.indexForNetworkView = false; controller.indexController.showNetwork(model.index.indexCounter); } model.index.rows.refresh(); controller.networkController.show(); } } Has anyone else who has encountered something similar got a solution to propose?

    Read the article
