Search Results

Search found 16842 results on 674 pages for 'unique index'.


  • Is There a Good Pattern for Creating a Unique Id based on a Type?

    - by Michael Kelley
    I have a template that creates a unique identifier for each type it is instantiated with. Here's a streamlined version of the template:

        template <typename T>
        class arType {
            static const arType Id; // this will be unique for every instantiation of arType<>
        };

        // Address of Id is used for identification.
        #define PA_TYPE_TAG(T) (&arType<T>::Id)

    This works when you have an executable made purely of static libraries. Unfortunately we're moving to an executable made up of DLLs, and each DLL could potentially have its own copy of Id for a given type. One obvious solution is to explicitly instantiate all instances of arType, but that is cumbersome, so I'd like to ask: can anyone propose a better solution?
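
    One workaround (a sketch, not from the thread; TypeRegistry and pa_type_tag are invented names) is to key identity on the type's name rather than on a static's address, with the registry compiled into exactly one DLL that every module links against:

        // Must live in a single shared DLL and be exported from it,
        // so that all modules share the one map.
        #include <map>
        #include <string>
        #include <typeinfo>

        class TypeRegistry {
        public:
            // Hands out one stable id per distinct type name, process-wide.
            static int idFor(const char* typeName) {
                static std::map<std::string, int> ids;
                auto it = ids.find(typeName);
                if (it == ids.end())
                    it = ids.insert({typeName, static_cast<int>(ids.size())}).first;
                return it->second;
            }
        };

        template <typename T>
        int pa_type_tag() { return TypeRegistry::idFor(typeid(T).name()); }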

    Read the article

  • Rails - how do you create a user index page like Stack Overflow's with multiple tabs whilst keeping the controller skinny?

    - by adam
    On Stack Overflow, the user profile area has many tabs which all display different information, such as questions asked and graphs. It's the same view throughout, and I'm wondering how best to achieve this in Rails while keeping the controller skinny and logic in the view to a minimum.

        def index
          @user = current_user
          case params[:tab_selected]
          when "questions"
            @data = @user.questions
          when "answers"
            @sentences = @user.answers
          else
            @sentences = @user.questions
          end
          respond_to do |format|
            format.html # index.html.erb
          end
        end

    But how do I process this in the index view without a load of if and else statements? And if questions and answers are presented differently, what's the best way to go about this?
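
    One common approach (a sketch, not from the thread; the partial names are hypothetical) is to let a whitelisted tab name choose a partial, so the view itself carries no conditionals:

        # app/controllers/users_controller.rb
        def index
          @user = current_user
          # Whitelist the tab so params can't trigger arbitrary method calls.
          @tab  = %w[questions answers].include?(params[:tab_selected]) ? params[:tab_selected] : "questions"
          @data = @user.public_send(@tab)
        end

        # app/views/users/index.html.erb renders the matching partial:
        # <%= render :partial => "users/#{@tab}", :locals => { :data => @data } %>

    Questions and answers can then be presented differently simply by giving each its own partial.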

    Read the article

  • Is it possible to preg replace unique variables into a string?

    - by Scarface
    What I want to do is use preg_replace to replace matches within a string with a varying replacement, and I was wondering if anyone knew whether that is possible in PHP, or at least achievable by some means. For example, if a string has two matches, those matches should be replaced with two different values. I want the replacements to each be a unique id, and I cannot figure out how this could possibly work or whether PHP can even do this. For example, if the match is 'a' and the sentence is 'put a smile on a person', then one 'a' will become unique id 98aksd00 and the other 09alkj08. I am retrieving my comments from a database, so the replacement happens within:

        while ($row = mysql_fetch_assoc($query)) {
            // preg replace
        }

    If anyone could provide any insight into this, I would really appreciate it.
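
    PHP's preg_replace_callback invokes a callback once per match, so each occurrence can get its own id - a minimal sketch (the pattern and column name are assumptions):

        while ($row = mysql_fetch_assoc($query)) {
            $text = preg_replace_callback(
                '/\ba\b/',                            // match a lone 'a', as in the example
                function ($m) { return uniqid(); },   // fresh unique id for every occurrence
                $row['comment']                       // hypothetical column name
            );
        }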

    Read the article

  • Cake PHP Error: Make sure you have created index.ctp (???)

    - by Louis Stephens
    I just started learning CakePHP today by going through a blog tutorial. I created my blog_controller.php and then created a folder named 'blog' within the app/views/ structure. The next step in the tutorial was to create the index.ctp file within the blog folder under views. The tutorial says that at this point all error messages should be gone. However, I still receive an error message:

        Error: The view for BlogController::index() was not found.
        Error: Confirm you have created the file: /Users/trippstephens/Dropbox/cakephp-cakephp1x-348e5f0/app/views/blog/index.ctp

    For the life of me, I cannot figure out what I have done wrong. I am running CakePHP under MAMP and it "installed" successfully. Any help would be appreciated.

    Read the article

  • "Easiest" way to track unique visitors to a page, in real time?

    - by Cooper
    I need to record in "real time" (perhaps no more than a 5 minute delay?) how many unique visitors a given page on my website has had in a given time period. I'm looking for an "easy" way to do this; preferably the results would be available via a database query. Two things I've tried so far have failed:

    Google Analytics: does the tracking/reporting, but not in real time - results are delayed by hours.

    Mint Analytics (http://www.haveamint.com/): tracks in real time, but seems to aggregate data in a way that prevents reporting of unique visitors to a single page over an arbitrary time frame.

    So, does anyone know how to make Mint Analytics do what I want, or can anyone recommend an analytics package or programmed approach that will do what I need?
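
    A roll-your-own alternative (a sketch; the table and column names are invented) is to log each hit from the page itself and count distinct visitors over whatever window is needed:

        -- Hypothetical hit log, written on every page view.
        CREATE TABLE page_hits (
            page_id    INT         NOT NULL,
            visitor_id VARCHAR(64) NOT NULL,  -- cookie value, or hashed IP + user agent
            hit_at     DATETIME    NOT NULL,
            KEY idx_page_time (page_id, hit_at)
        );

        -- Unique visitors to page 42 over the last 5 minutes.
        SELECT COUNT(DISTINCT visitor_id)
        FROM page_hits
        WHERE page_id = 42
          AND hit_at >= NOW() - INTERVAL 5 MINUTE;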

    Read the article

  • How can I optimize this RewriteEngine code?

    - by Lucki Mile
    My server is overloaded, and the server admin said the issue is caused by the .htaccess file. This is the code:

        RewriteEngine On
        RewriteBase /here/
        RewriteRule ^top/?$ index.php?mode=top [QSA]
        RewriteRule ^top/video/?$ index.php?mode=top&cat=vids [QSA]
        RewriteRule ^top/picture/?$ /index.php?mode=top&cat=pics [QSA]
        RewriteRule ^random$ index.php?mode=random [QSA]
        RewriteRule ^random/video/?$ index.php?mode=random&cat=vids [QSA]
        RewriteRule ^random/picture/?$ index.php?mode=random&cat=pics [QSA]
        RewriteRule ^new/?$ index.php [QSA]
        RewriteRule ^new/video/?$ index.php?mode=&cat=vids [QSA]
        RewriteRule ^new/picture/?$ index.php?mode=&cat=pics [QSA]
        RewriteRule ^video/([0-9]+)_(.*)$ item.php?cat=vids&id=$1 [QSA]
        RewriteRule ^picture/([0-9]+)_(.*)$ item.php?cat=pics&id=$1 [QSA]
        ErrorDocument 404 /item.php
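
    One low-cost change (a sketch, not the admin's diagnosis): none of these rules carries the L flag, so every request keeps matching against the rest of the list even after it has hit a rule. Stopping at the first match trims the per-request work:

        RewriteRule ^top/?$ index.php?mode=top [QSA,L]
        RewriteRule ^top/video/?$ index.php?mode=top&cat=vids [QSA,L]
        # ...repeat [QSA,L] for each remaining rule.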

    Read the article

  • In SQL Server 2008, when would I use a full text index that covered several tables?

    - by Suddy
    I wanted to do a full-text search across several related tables in SQL Server 2008. From browsing this site I've realised the best option is via a view, but initially I thought I was meant to add several tables to the same full-text index via Management Studio. I started to do this and realised the index would have no idea how the tables were related, so my question is: when would I want to have a full-text index covering several tables in this way? Apologies for the vagueness; I am just trying to satisfy my curiosity after Google let me down.
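
    For reference, the view-based route looks like this (a sketch; the table, view, and index names are invented): SQL Server 2008 allows a full-text index on an indexed view, which requires SCHEMABINDING, two-part table names, and a unique single-column clustered index to act as the full-text KEY INDEX:

        CREATE VIEW dbo.SearchableComments WITH SCHEMABINDING AS
        SELECT c.CommentId, a.Title, c.Body
        FROM dbo.Comments c
        JOIN dbo.Articles a ON a.ArticleId = c.ArticleId;
        GO
        CREATE UNIQUE CLUSTERED INDEX IX_SearchableComments
            ON dbo.SearchableComments (CommentId);
        GO
        CREATE FULLTEXT INDEX ON dbo.SearchableComments (Title, Body)
            KEY INDEX IX_SearchableComments;
        GO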

    Read the article

  • Should I use a huge composite primary key or just a unique id?

    - by Jack
    I have been trying to scrape a particular site and store the results in a database. My original assumptions about the data allowed a schema where I could use fairly reasonable composite primary keys (usually containing only 2 or 3 fields), but as time went on I realized those assumptions were wrong: my primary keys were not as unique as I thought they were, so I have slowly been expanding them to contain more and more fields. In fact, I have recently come to believe that the source site's database has no constraints whatsoever. Just today I finally expanded the primary key for one of my tables to contain every field in that table, and I thought now would be a good time to ask: is it better to add an auto-incrementing column as a unique id, or to just leave a composite primary key on the entire table?
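
    The usual compromise (a sketch; the names are invented) keeps both guarantees: a small surrogate key for joins and row identity, plus a unique index that still enforces the natural key:

        CREATE TABLE scraped_items (
            id         INT AUTO_INCREMENT PRIMARY KEY,            -- cheap surrogate key
            source     VARCHAR(100) NOT NULL,
            item_code  VARCHAR(100) NOT NULL,
            fetched_at DATETIME     NOT NULL,
            UNIQUE KEY uq_natural (source, item_code, fetched_at) -- the old composite key
        );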

    Read the article

  • MySQL sorting: NULL to the end & use index? Not possible?

    - by Vojto
    Hello fellow experts, I have a huge table and I want simple sorting. It could be so easy: I could just create an index and do some really fast sorting thanks to that index. But my client wants NULLs at the end, which complicates the whole situation. Instead of a simple ORDER BY name ASC, I have to do ORDER BY name IS NULL ASC, name ASC. That would be OK, but because of that expression my index is useless, and the sorting is very slow. I don't know if there's a way to solve this problem, but if there is one, I desperately ask for help. :'(
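
    One workaround (a sketch; the table name is invented, and the payoff should be verified with EXPLAIN) is to split the work into two statements so the big, sorted one can use the plain index on name, and let the application append the NULL rows afterwards:

        -- Can use the index on name: range scan, no filesort.
        SELECT * FROM people WHERE name IS NOT NULL ORDER BY name ASC;

        -- Appended by the application after the first result set.
        SELECT * FROM people WHERE name IS NULL;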

    Read the article

  • How do I guarantee row uniqueness in MySQL without the use of a UNIQUE constraint?

    - by MalcomTucker
    Hi, I have some fairly simple requirements but I'm not sure how to implement them:

    - I have multiple concurrent threads running the same query.
    - The query supplies a 'string' value: if it exists in the table, the query should return the id of the matching row; if not, the query should insert the 'string' value and return the last inserted id.
    - The 'string' column is (and must be) a text column (it's bigger than varchar 255), so I cannot make it unique - uniqueness must be enforced through the access mechanism.
    - The query needs to be in stored procedure form (and stored procedures don't support table locks in MySQL).

    How can I guarantee that 'string' is unique? How can I prevent other threads writing to the table after another thread has read it and found no matching 'string' item? Thanks for any advice.
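
    One well-known pattern (a sketch; the table, column, and parameter names are invented, and it assumes the table already has an AUTO_INCREMENT id) is to store a fixed-width hash of the text with a UNIQUE key over it, then let MySQL arbitrate the race atomically with INSERT ... ON DUPLICATE KEY UPDATE and the documented LAST_INSERT_ID(expr) trick:

        ALTER TABLE strings
            ADD COLUMN str_hash CHAR(32) NOT NULL,    -- MD5 hex of the text column
            ADD UNIQUE KEY uq_str_hash (str_hash);

        -- Inside the stored procedure: insert-or-fetch in one atomic statement.
        INSERT INTO strings (str_hash, str_value)
        VALUES (MD5(p_value), p_value)
        ON DUPLICATE KEY UPDATE id = LAST_INSERT_ID(id);
        SELECT LAST_INSERT_ID() AS id;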

    Read the article

  • Best way to have unique key over 500M varchar(255) records in mysql/innodb?

    - by taw
    I have a url column with a unique key over it - but its performance on updates is absolutely atrocious. I suspect that's because the index doesn't fit entirely in memory. So I was thinking: how about adding a column of md5(url) holding 16 bytes of binary data, and unique-keying that instead? What would be the best datatype for it? I'd love to be able to just see a 32-character hex hash while MySQL converts it to/from 16 binary bytes and indexes that, since programs using the database might have trouble with arbitrary binary data that I'd rather avoid if possible. (I'm also a bit afraid that MySQL might get some strange ideas about character sets and, for example, over-allocate storage by 3:1 because it thinks it might need utf8 - how do I avoid that for sure?)
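
    The usual answer here (a sketch; the table and column names are assumptions) is BINARY(16): binary columns carry no character set, so there is no 3:1 utf8 over-allocation, and HEX()/UNHEX() bridge to the 32-character view:

        ALTER TABLE urls
            ADD COLUMN url_md5 BINARY(16) NOT NULL,
            ADD UNIQUE KEY uq_url_md5 (url_md5);

        -- Writing: hash on the way in.
        INSERT INTO urls (url, url_md5)
        VALUES ('http://example.com/', UNHEX(MD5('http://example.com/')));

        -- Reading: 32-character hex view of the 16 binary bytes.
        SELECT HEX(url_md5) FROM urls
        WHERE url_md5 = UNHEX(MD5('http://example.com/'));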

    Read the article

  • PHP, how can I produce a string, a unique list of values up to three items, for use after IN in a query?

    - by Jules
    I need to produce a string for use in a query, e.g.:

        SELECT whatever FROM Keywords WHERE Keywords.word IN (here);

    At the moment I have a string which could be:

        $search = "one word or four";
        // or
        $search = "one";
        // or
        $search = "one one";

    I need to validate this into something acceptable for my query: a unique list of words, separated by commas, up to a maximum of three. This is what I have so far:

        $array  = explode(" ", $search);
        $unique = array_unique($array);

    I'm sure there must be a quicker way than evaluating each of the items for blanks and selecting the first three.
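
    A compact version (a sketch; the quoting uses the era's mysql_real_escape_string, and the SELECT list is assumed) filters out blanks, dedupes, and caps the list at three in one pass:

        $words = array_slice(
            array_values(array_unique(array_filter(explode(' ', $search)))),
            0, 3
        );
        $quoted = "'" . implode("','", array_map('mysql_real_escape_string', $words)) . "'";
        $sql = "SELECT whatever FROM Keywords WHERE Keywords.word IN ($quoted)";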

    Read the article

  • Is it possible to have a composite index with a list property and a sort order?

    - by npdoty
    And if not, why not? The following index always fails for me, even though I had thought I could have a sort order with a list property as long as the index didn't sort or match against any other properties.

        - kind: Foo
          properties:
          - name: location_geocells
          - name: time
            direction: desc

    If such a composite index is allowed, are there any reasons that this might be failing for me? Does the existence of other indices on the same model increase the likelihood of this failure? Does the combination of a sort order with a list property require more than N entries, where N is the number of values in the list property? (If so, how many does it require?)

    Read the article

  • Environment variable ORACLE_UNQNAME not defined. Please set ORACLE_UNQNAME to database unique name

    - by Tapas Bose
    I have a batch file which starts the Oracle services:

        net start OracleOraDb11g_home1TNSListener
        net start OracleServiceORCL
        call C:\app\Edifixio\product\11.2.0\dbhome_1\BIN\emctl.bat start dbconsole
        pause

    But on executing the script I am getting:

        C:\windows\system32>net start OracleOraDb11g_home1TNSListener
        The requested service has already been started.
        More help is available by typing NET HELPMSG 2182.

        C:\windows\system32>net start OracleServiceORCL
        The OracleServiceORCL service is starting.........
        The OracleServiceORCL service was started successfully.

        C:\windows\system32>call C:\app\Edifixio\product\11.2.0\dbhome_1\BIN\emctl.bat start dbconsole
        Environment variable ORACLE_UNQNAME not defined. Please set ORACLE_UNQNAME to database unique name.
        Press any key to continue . . .

    I am using Windows 7 64-bit with Oracle 11gR2 64-bit. Any information will be very helpful. Thanks and regards.
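
    The usual fix (a sketch; it assumes the database unique name matches the ORCL service name above) is to set the variable in the batch file before calling emctl:

        set ORACLE_SID=ORCL
        set ORACLE_UNQNAME=ORCL
        net start OracleOraDb11g_home1TNSListener
        net start OracleServiceORCL
        call C:\app\Edifixio\product\11.2.0\dbhome_1\BIN\emctl.bat start dbconsole
        pause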

    Read the article

  • Sharepoint asks for NTLM credentials for every unique URL. How do I stop it?

    - by CamronBute
    I'm tasked with troubleshooting a problem we're having with a SharePoint 2010 site. The app is external, and there are several clients that must connect. Some clients receive a crazy number of credential prompts when trying to log on: the browser appears to ask for every unique URL (e.g. every different picture, link, etc.) and it won't stop. Other clients have no problems, and I cannot seem to replicate the issue either; I've tried by restricting all settings (including cookies) in my own browser, but to no avail. I put the HTTP requests under a microscope, and the server is asking for NTLM credentials. The client is using IE8 with the browser running in Protected Mode, but the full browser settings cannot be determined... I'm guessing this is a web server issue, simply because it appears to be an authentication issue. What might the problem be?

    Read the article

  • SQL SERVER – Select and Delete Duplicate Records – SQL in Sixty Seconds #036 – Video

    - by pinaldave
    Developers often face situations where they find that a column has duplicate records they want to delete. A good developer will never delete any data without first observing it and making sure that what is being deleted is absolutely fine to delete. Before deleting duplicate data, one should select it and check whether the data is really duplicated. In this video we demonstrate two scripts: 1) select duplicate records, and 2) delete duplicate records. We are assuming that the table has a unique incremental id, and that in the case of duplicate records we would like to keep the latest record. If there is really a business need to keep only unique records, one should consider creating a unique index on the column. A unique index will prevent users from entering duplicate data into the table from the beginning, and that is the best solution. However, deleting duplicate data is also a very valid request: if a user realizes they need to keep only unique records in a column and is willing to create a unique constraint, the very first requirement of creating that constraint is to delete the existing duplicates. Let us see how it works in Sixty Seconds. Here is the script which is used in the video:

        USE tempdb
        GO
        CREATE TABLE TestTable (ID INT, NameCol VARCHAR(100))
        GO
        INSERT INTO TestTable (ID, NameCol)
        SELECT 1, 'First'
        UNION ALL
        SELECT 2, 'Second'
        UNION ALL
        SELECT 3, 'Second'
        UNION ALL
        SELECT 4, 'Second'
        UNION ALL
        SELECT 5, 'Second'
        UNION ALL
        SELECT 6, 'Third'
        GO
        -- Selecting Data
        SELECT * FROM TestTable
        GO
        -- Detecting Duplicate
        SELECT NameCol, COUNT(*) TotalCount
        FROM TestTable
        GROUP BY NameCol
        HAVING COUNT(*) > 1
        ORDER BY COUNT(*) DESC
        GO
        -- Deleting Duplicate
        DELETE FROM TestTable
        WHERE ID NOT IN (SELECT MAX(ID) FROM TestTable GROUP BY NameCol)
        GO
        -- Selecting Data
        SELECT * FROM TestTable
        GO
        DROP TABLE TestTable
        GO

    Related Tips in SQL in Sixty Seconds:
    SQL SERVER - Delete Duplicate Records - Rows
    SQL SERVER - Count Duplicate Records - Rows
    SQL SERVER - 2005 - 2008 - Delete Duplicate Rows
    Delete Duplicate Records - Rows - Readers Contribution
    Unique Nonclustered Index Creation with IGNORE_DUP_KEY = ON - A Transactional Behavior

    What would you like to see in the next SQL in Sixty Seconds video?
    Reference: Pinal Dave (http://blog.sqlauthority.com)

    Read the article

  • MySql query optimization help

    - by rohitgu
    I have a few queries and am not able to figure out how to optimize them.

        -- QUERY 1
        SELECT *
        FROM t_twitter_tracking
        WHERE classified IS NULL AND tweetType = 'ENGLISH'
        ORDER BY id
        LIMIT 500;

        -- QUERY 2
        SELECT COUNT(*) AS cnt,
               DATE_FORMAT(CONVERT_TZ(wrdTrk.createdOnGMTDate,'+00:00','+05:30'),'%Y-%m-%d') AS dat
        FROM t_twitter_tracking wrdTrk
        WHERE wrdTrk.word LIKE ('dell')
          AND CONVERT_TZ(wrdTrk.createdOnGMTDate,'+00:00','+05:30')
              BETWEEN '2010-12-12 00:00:00' AND '2010-12-26 00:00:00'
        GROUP BY dat;

    Both these queries run on the same table:

        CREATE TABLE `t_twitter_tracking` (
            `id` BIGINT(20) NOT NULL AUTO_INCREMENT,
            `word` VARCHAR(200) NOT NULL,
            `tweetId` BIGINT(100) NOT NULL,
            `twtText` VARCHAR(800) NULL DEFAULT NULL,
            `language` TEXT NULL,
            `links` TEXT NULL,
            `tweetType` VARCHAR(20) NULL DEFAULT NULL,
            `source` TEXT NULL,
            `sourceStripped` TEXT NULL,
            `isTruncated` VARCHAR(40) NULL DEFAULT NULL,
            `inReplyToStatusId` BIGINT(30) NULL DEFAULT NULL,
            `inReplyToUserId` INT(11) NULL DEFAULT NULL,
            `rtUsrProfilePicUrl` TEXT NULL,
            `isFavorited` VARCHAR(40) NULL DEFAULT NULL,
            `inReplyToScreenName` VARCHAR(40) NULL DEFAULT NULL,
            `latitude` BIGINT(100) NOT NULL,
            `longitude` BIGINT(100) NOT NULL,
            `retweetedStatus` VARCHAR(40) NULL DEFAULT NULL,
            `statusInReplyToStatusId` BIGINT(100) NOT NULL,
            `statusInReplyToUserId` BIGINT(100) NOT NULL,
            `statusFavorited` VARCHAR(40) NULL DEFAULT NULL,
            `statusInReplyToScreenName` TEXT NULL,
            `screenName` TEXT NULL,
            `profilePicUrl` TEXT NULL,
            `twitterId` BIGINT(100) NOT NULL,
            `name` TEXT NULL,
            `location` VARCHAR(100) NULL DEFAULT NULL,
            `bio` TEXT NULL,
            `url` TEXT NULL COLLATE 'latin1_swedish_ci',
            `utcOffset` INT(11) NULL DEFAULT NULL,
            `timeZone` VARCHAR(100) NULL DEFAULT NULL,
            `frenCnt` BIGINT(20) NULL DEFAULT '0',
            `createdAt` DATETIME NULL DEFAULT NULL,
            `createdOnGMT` VARCHAR(40) NULL DEFAULT NULL,
            `createdOnServerTime` DATETIME NULL DEFAULT NULL,
            `follCnt` BIGINT(20) NULL DEFAULT '0',
            `favCnt` BIGINT(20) NULL DEFAULT '0',
            `totStatusCnt` BIGINT(20) NULL DEFAULT NULL,
            `usrCrtDate` VARCHAR(200) NULL DEFAULT NULL,
            `humanSentiment` VARCHAR(30) NULL DEFAULT NULL,
            `replied` BIT(1) NULL DEFAULT NULL,
            `replyMsg` TEXT NULL,
            `classified` INT(32) NULL DEFAULT NULL,
            `createdOnGMTDate` DATETIME NULL DEFAULT NULL,
            `locationDetail` TEXT NULL,
            `geonameid` INT(11) NULL DEFAULT NULL,
            `country` VARCHAR(255) NULL DEFAULT NULL,
            `continent` CHAR(2) NULL DEFAULT NULL,
            `placeLongitude` FLOAT NULL DEFAULT NULL,
            `placeLatitude` FLOAT NULL DEFAULT NULL,
            PRIMARY KEY (`id`),
            INDEX `id` (`id`, `word`),
            INDEX `createdOnGMT_index` (`createdOnGMT`) USING BTREE,
            INDEX `word_index` (`word`) USING BTREE,
            INDEX `location_index` (`location`) USING BTREE,
            INDEX `classified_index` (`classified`) USING BTREE,
            INDEX `tweetType_index` (`tweetType`) USING BTREE,
            INDEX `getunclassified_index` (`classified`, `tweetType`) USING BTREE,
            INDEX `timeline_index` (`word`, `createdOnGMTDate`, `classified`) USING BTREE,
            INDEX `createdOnGMTDate_index` (`createdOnGMTDate`) USING BTREE,
            INDEX `locdetail_index` (`country`, `id`) USING BTREE,
            FULLTEXT INDEX `twtText_index` (`twtText`)
        ) COLLATE='utf8_general_ci' ENGINE=MyISAM ROW_FORMAT=DEFAULT AUTO_INCREMENT=12608048;

    The table has more than 10 million records. How can I optimize it?
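
    A place to start (a sketch, to be verified with EXPLAIN on the real data): give Query 1 a covering index, and rewrite Query 2 so the timezone conversion moves off the column and onto the constants, since a function wrapped around createdOnGMTDate defeats the existing indexes (and the LIKE has no wildcard, so plain equality is equivalent):

        ALTER TABLE t_twitter_tracking
            ADD INDEX idx_type_classified_id (tweetType, classified, id);

        -- Query 2 rewritten: the existing timeline_index (word, createdOnGMTDate, ...)
        -- can now be used because the column is compared untransformed.
        SELECT COUNT(*) AS cnt,
               DATE_FORMAT(CONVERT_TZ(createdOnGMTDate,'+00:00','+05:30'),'%Y-%m-%d') AS dat
        FROM t_twitter_tracking
        WHERE word = 'dell'
          AND createdOnGMTDate BETWEEN CONVERT_TZ('2010-12-12 00:00:00','+05:30','+00:00')
                                   AND CONVERT_TZ('2010-12-26 00:00:00','+05:30','+00:00')
        GROUP BY dat;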

    Read the article

  • SqlDataAdapter - why do I get a different query?

    - by Murat
    Hi all, I have a database reader class, but sometimes when I use it, the SqlDataAdapter.Fill() method fills the DataTable from a different query. I've looked at the query in the SqlCommand instance and my query is correct, yet I get values back from a different table. What is the problem? This is my code:

        public static DataTable GetLatestRecords<T>(string TableName, string FilterFieldName,
            int RecordCount, params DBFieldAndValue[] FilterFields)
        {
            DataTable DT = new DataTable();
            Type ClassType = typeof(T);
            string SelectString = string.Format("SELECT TOP ({0}) * FROM {1}", RecordCount, TableName);
            if (FilterFields.Length > 0)
            {
                SelectString += " WHERE 1 = 1 ";
                for (int index = 0; index < FilterFields.Length; index++)
                    SelectString += FilterFields[index].ToString(index);
            }
            SelectString += string.Format(" ORDER BY {0} DESC", FilterFieldName);
            try
            {
                SqlConnection Connection = GetConnection<T>();
                SqlCommand Command = new SqlCommand(SelectString, Connection);
                for (int index = 0; index < FilterFields.Length; index++)
                    Command.Parameters.AddWithValue("@" + FilterFields[index].Field + "_" + index,
                        FilterFields[index].Value);
                if (Connection.State == ConnectionState.Closed)
                    Connection.Open();
                Command.CommandText = SelectString;
                SqlDataAdapter DA = new SqlDataAdapter(Command);
                DT.Dispose();
                DT = new DataTable();
                DA.Fill(DT);
                if (Connection.State == ConnectionState.Open)
                    Connection.Close();
            }
            catch (Exception ex)
            {
                WriteLogStatic(ex.Message);
                // (Turkish: "An error occurred during database operations!
                //  Details are under the 'C:\Temp' directory.")
                throw new Exception(@"Veritabani islemleri sirasinda hata olustu!\nAyrintilar 'C:\Temp' dizini altindadir.\n" + ex.Message);
            }
            return DT;
        }

    Read the article

  • Python performance improvement request for winkler

    - by Martlark
    I'm a Python n00b and I'd like some suggestions on how to improve the algorithm to improve the performance of this method to compute the Jaro-Winkler distance of two names.

        def winklerCompareP(str1, str2):
            """Return approximate string comparator measure (between 0.0 and 1.0)

            USAGE:
              score = winkler(str1, str2)

            ARGUMENTS:
              str1  The first string
              str2  The second string

            DESCRIPTION:
              As described in 'An Application of the Fellegi-Sunter Model of
              Record Linkage to the 1990 U.S. Decennial Census' by William E.
              Winkler and Yves Thibaudeau. Based on the 'jaro' string comparator,
              but modifies it according to whether the first few characters are
              the same or not.
            """
            # Quick check if the strings are the same
            jaro_winkler_marker_char = chr(1)
            if str1 == str2:
                return 1.0

            len1 = len(str1)
            len2 = len(str2)
            halflen = max(len1, len2) / 2 - 1

            ass1 = ''  # Characters assigned in str1
            ass2 = ''  # Characters assigned in str2
            workstr1 = str1
            workstr2 = str2

            common1 = 0  # Number of common characters
            common2 = 0

            # Analyse the first string
            for i in range(len1):
                start = max(0, i - halflen)
                end = min(i + halflen + 1, len2)
                index = workstr2.find(str1[i], start, end)
                if index > -1:  # Found common character
                    common1 += 1
                    ass1 = ass1 + str1[i]
                    workstr2 = workstr2[:index] + jaro_winkler_marker_char + workstr2[index+1:]

            # Analyse the second string
            for i in range(len2):
                start = max(0, i - halflen)
                end = min(i + halflen + 1, len1)
                index = workstr1.find(str2[i], start, end)
                if index > -1:  # Found common character
                    common2 += 1
                    ass2 = ass2 + str2[i]
                    workstr1 = workstr1[:index] + jaro_winkler_marker_char + workstr1[index+1:]

            if common1 != common2:
                print('Winkler: Wrong common values for strings "%s" and "%s"' %
                      (str1, str2) + ', common1: %i, common2: %i' % (common1, common2) +
                      ', common should be the same.')
                common1 = float(common1 + common2) / 2.0  ##### This is just a fix #####

            if common1 == 0:
                return 0.0

            # Compute number of transpositions
            transposition = 0
            for i in range(len(ass1)):
                if ass1[i] != ass2[i]:
                    transposition += 1
            transposition = transposition / 2.0

            # Now compute how many characters are common at the beginning
            minlen = min(len1, len2)
            for same in range(minlen + 1):
                if str1[:same] != str2[:same]:
                    break
            same -= 1
            if same > 4:
                same = 4

            common1 = float(common1)
            w = 1. / 3. * (common1 / float(len1) + common1 / float(len2) +
                           (common1 - transposition) / common1)
            wn = w + same * 0.1 * (1.0 - w)
            return wn

    Read the article

  • Segmentation fault on writing char to char* address

    - by Lukas Dojcak
    Hi guys, I've got a problem with my little C program. Maybe you could help me.

        char* shiftujVzorku(char* text, char* pattern, int offset) {
            char* pom = text;
            int size = 0;
            int index = 0;

            while (*(text + size) != '\0') {
                size++;
            }

            while (*(pom + index) != '\0') {
                if (overVzorku(pom + index, pattern)) {
                    while (*pattern != '\0') {
                        /* swap *pom with *(pom + offset) */
                        if (pom + index + offset < text + size) {
                            char x = *(pom + index + offset);
                            char y = *(pom + index);
                            int adresa = *(pom + index + offset);
                            *(pom + index + offset) = y;   /* <<<<<< SEGMENTATION FAULT */
                            *(pom + index) = x;
                            /* *pom = *pom - *(pom + offset);
                               *(pom + offset) = *(pom + offset) + *pom;
                               *pom = *(pom + offset) - *pom; */
                        } else {
                            *pom = *pom - *(pom + offset - size);
                            *(pom + offset - size) = *(pom + offset - size) + *pom;
                            *pom = *(pom + offset - size) - *pom;
                        }
                        pattern++;
                    }
                    break;
                }
                index++;
            }
            return text;
        }

    It isn't important what the program is doing, and there may well be lots of bugs, but why do I get a SEGMENTATION FAULT at the marked line? I'm trying to write a char value to the memory at address "pom + offset + index". Thanks for everything helpful. :)
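
    One classic cause (an assumption - it depends on calling code not shown in the question): if the function is called with a string literal, the write at the marked line lands in read-only memory and crashes, whereas a writable array copy is fine:

        char pattern[] = "ab";
        char text[]    = "put a smile on a person";  /* array: writable copy */
        shiftujVzorku(text, pattern, 3);             /* OK to modify */

        /* shiftujVzorku("put a smile on a person", pattern, 3);
           would write into a string literal -> undefined behaviour,
           typically a segmentation fault. */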

    Read the article

  • Javascript: Whitespace Characters being Removed in Chrome (but not Firefox)

    - by Matrym
    Why would the code below eliminate the whitespace around matched keyword text when replacing it with an anchor link? Note, this error only occurs in Chrome, not Firefox. For complete context, the file is located at http://seox.org/lbp/lb-core.js, and the demo page (no errors found there yet) is at http://seox.org/test.html. Copy/pasting the first paragraph into a rich text editor (e.g. Dreamweaver, or Gmail with rich text turned on) will reveal the problem, with words bunched together. Pasting it into a plain text editor will not.

        // Find page text (not in links) -> doxdesk.com
        function findPlainTextExceptInLinks(element, substring, callback) {
            for (var childi = element.childNodes.length; childi-- > 0;) {
                var child = element.childNodes[childi];
                if (child.nodeType === 1) {
                    if (child.tagName.toLowerCase() !== 'a')
                        findPlainTextExceptInLinks(child, substring, callback);
                } else if (child.nodeType === 3) {
                    var index = child.data.length;
                    while (true) {
                        index = child.data.lastIndexOf(substring, index);
                        if (index === -1 || limit.indexOf(substring.toLowerCase()) !== -1)
                            break;
                        // don't match an alphanumeric char
                        var dontMatch = /\w/;
                        if (child.nodeValue.charAt(index - 1).match(dontMatch) ||
                            child.nodeValue.charAt(index + keyword.length).match(dontMatch))
                            break;
                        // alert(child.nodeValue.charAt(index + keyword.length + 1));
                        callback.call(window, child, index)
                    }
                }
            }
        }

        // Linkup function, call with various type cases (below)
        function linkup(node, index) {
            node.splitText(index + keyword.length);
            var a = document.createElement('a');
            a.href = linkUrl;
            a.appendChild(node.splitText(index));
            node.parentNode.insertBefore(a, node.nextSibling);
            limit.push(keyword.toLowerCase());   // Add the keyword to memory
            urlMemory.push(linkUrl);             // Add the url to memory
        }

        // lower case (already applied)
        findPlainTextExceptInLinks(lbp.vrs.holder, keyword, linkup);

    Thanks in advance for your help. I'm nearly ready to launch the script, and will gladly comment in kudos to you for your assistance.

    Read the article

  • PyQt QAbstractListModel seems to ignore tristate flags

    - by mcieslak
    I've been trying for a couple of days to figure out why my QAbstractListModel won't let a user toggle a checkable item through three states. The model returns the Qt.ItemIsTristate and Qt.ItemIsUserCheckable flags from its flags() method, but when the program runs, only Qt.Checked and Qt.Unchecked are toggled on edit.

        class cboxModel(QtCore.QAbstractListModel):
            def __init__(self, parent=None):
                super(cboxModel, self).__init__(parent)
                self.cboxes = [['a', 0], ['b', 1], ['c', 2], ['d', 0]]

            def rowCount(self, index=QtCore.QModelIndex()):
                return len(self.cboxes)

            def data(self, index, role):
                if not index.isValid:
                    return QtCore.QVariant()
                myname, mystate = self.cboxes[index.row()]
                if role == QtCore.Qt.DisplayRole:
                    return QtCore.QVariant(myname)
                if role == QtCore.Qt.CheckStateRole:
                    if mystate == 0:
                        return QtCore.QVariant(QtCore.Qt.Unchecked)
                    elif mystate == 1:
                        return QtCore.QVariant(QtCore.Qt.PartiallyChecked)
                    elif mystate == 2:
                        return QtCore.QVariant(QtCore.Qt.Checked)
                return QtCore.QVariant()

            def setData(self, index, value, role=QtCore.Qt.EditRole):
                if index.isValid():
                    self.cboxes[index.row()][1] = value.toInt()[0]
                    self.emit(QtCore.SIGNAL("dataChanged(QModelIndex,QModelIndex)"), index, index)
                    print self.cboxes
                    return True
                return False

            def flags(self, index):
                if not index.isValid():
                    return QtCore.Qt.ItemIsEditable
                return (QtCore.Qt.ItemIsEnabled | QtCore.Qt.ItemIsEditable |
                        QtCore.Qt.ItemIsUserCheckable | QtCore.Qt.ItemIsTristate)

    You can test it with this, and see that only two states are available:

        class MainForm(QtGui.QMainWindow):
            def __init__(self, parent=None):
                super(MainForm, self).__init__(parent)
                model = cboxModel(self)
                self.view = QtGui.QListView()
                self.view.setModel(model)
                self.setCentralWidget(self.view)

        app = QtGui.QApplication(sys.argv)
        form = MainForm()
        form.show()
        app.exec_()

    I'm assuming there's something simple I'm missing. Any ideas? Thanks!

    Read the article

  • Javascript indexOf() always returns -1

    - by Thomas
    I'm trying to retrieve the index of an element in an array. This works perfectly:

        var onCreate = function (event) {
            console.assert(markers[markerId] === undefined);
            var markerId = event.markerId;
            markers[markerId] = {};
            var marker = markers[markerId];

            // create the container object
            marker.object3d = new THREE.Object3D();
            marker.object3d.matrixAutoUpdate = false;
            scene.add(marker.object3d);

            // UPDATE ARRAY HOLDING CURRENT MARKERS
            console.log("ON CREATE");
            console.log("current detected markers: " + currentMarkers);
            var idx = currentMarkers.indexOf(markerId); // Find the index
            console.log("marker " + event.markerId + " has index: " + idx);
            if (idx == -1) // if does not exist
                currentMarkers.push(markerId);

    But this doesn't:

        var onDelete = function (event) {
            console.assert(markers[event.markerId] !== undefined);
            var markerId = event.markerId;

            // UPDATE ARRAY HOLDING CURRENT MARKERS
            console.log("ON DELETE");
            console.log("current detected markers: " + currentMarkers);
            var idxx = currentMarkers.indexOf(markerId); // Find the index
            console.log("marker " + markerId + " has index: " + idxx);
            if (idxx != -1) { // if DOES exist
                currentMarkers.splice(idxx, 1); // Delete
                var idxxx = currentMarkersCoverChecked.indexOf(markerId); // Find the index
                if (idxxx == -1) // if does NOT exist, so if not being checked
                    checkMarkerCovered(markerId);
            }
            onDeleteRandom(markerId);
            var marker = markers[markerId];
            scene.remove(marker.object3d);
            delete markers[markerId];

    Look at my console output:

        ON CREATE
        current detected markers: 2,1,4
        marker 3 has index: -1
        ON CREATE main.js:266
        current detected markers: 2,1,4,3
        marker 4 has index: 2
        ON DELETE
        current detected markers: 2,1,4,3
        marker 2 has index: -1
        ON CREATE
        current detected markers: 2,1,4,3
        marker 2 has index: 0
        ON DELETE
        current detected markers: 2,1,4,3
        marker 1 has index: -1

    Read the article

  • Apache + Codeigniter + New Server + Unexpected Errors

    - by ngl5000
    Alright, here is the situation: I used to have my CodeIgniter site at Bluehost, where I did not have root access. I have since moved that site to Rackspace. I have not changed any of the PHP code, yet there has been some unexpected behavior.

    Unexpected behavior 1: http://mysite.com/robots.txt resolves to the robots file on both the old and new setups, but http://mysite.com/robots.txt/ (trailing slash) behaves differently. The old Bluehost setup resolves it to my CodeIgniter 404 error page; the Rackspace setup resolves to:

        Not Found
        The requested URL /robots.txt/ was not found on this server.

    This instance leads me to believe there could be a problem with my mod_rewrite rules, or lack thereof: the first setup produces the error through PHP, while on the second it seems the server handles the error itself.

    Unexpected behavior 2 is even more troubling: 'http://mysite.com/search/term/9 x 1-1%2F2 white/'. The new site results in:

        Bad Request
        Your browser sent a request that this server could not understand.

    The old site results in the actual page being loaded and the search term being unencoded.

    I have to assume this has something to do with the fact that on the new server I went from a root-level .htaccess file to httpd.conf plus the virtual hosts default and default-ssl. Here they are:

    Default file:

        <VirtualHost *:80>
            ServerAdmin webmaster@localhost
            ServerName mysite.com
            DocumentRoot /var/www
            <Directory />
                Options +FollowSymLinks
                AllowOverride None
            </Directory>
            <Directory /var/www>
                Options -Indexes +FollowSymLinks -MultiViews
                AllowOverride None
                Order allow,deny
                allow from all

                RewriteEngine On
                RewriteBase /

                # force no www. (also does the IP thing)
                RewriteCond %{HTTPS} !=on
                RewriteCond %{HTTP_HOST} !^mysite\.com [NC]
                RewriteRule ^(.*)$ http://mysite.com/$1 [R=301,L]

                RewriteCond %{REQUEST_FILENAME} !-f
                RewriteCond %{REQUEST_FILENAME} !-d
                RewriteRule ^(.+)\.(\d+)\.(js|css|png|jpg|gif)$ $1.$3 [L]

                # index.php remove any index.php parts
                RewriteCond %{THE_REQUEST} /index\.(php|html)
                RewriteRule (.*)index\.(php|html)(.*)$ /$1$3 [r=301,L]

                # codeigniter direct
                RewriteCond $0 !^(index\.php|assets|robots\.txt|sitemap\.xml|favicon\.ico)
                RewriteRule ^.*$ index.php [L]
            </Directory>

            ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/
            <Directory "/usr/lib/cgi-bin">
                AllowOverride None
                Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
                Order allow,deny
                Allow from all
            </Directory>

            ErrorLog ${APACHE_LOG_DIR}/error.log
            LogLevel warn
            CustomLog ${APACHE_LOG_DIR}/access.log combined

            Alias /doc/ "/usr/share/doc/"
            <Directory "/usr/share/doc/">
                Options Indexes MultiViews FollowSymLinks
                AllowOverride None
                Order deny,allow
                Deny from all
                Allow from 127.0.0.0/255.0.0.0 ::1/128
            </Directory>
        </VirtualHost>

    Default-ssl file:

        <IfModule mod_ssl.c>
        <VirtualHost _default_:443>
            ServerAdmin webmaster@localhost
            ServerName mysite.com
            DocumentRoot /var/www
            <Directory />
                Options +FollowSymLinks
                AllowOverride None
            </Directory>
            <Directory /var/www>
                Options -Indexes +FollowSymLinks -MultiViews
                AllowOverride None
                Order allow,deny
                allow from all

                RewriteEngine On
                RewriteBase /
                RewriteCond %{SERVER_PORT} !^443
                RewriteRule ^ https://mysite.com%{REQUEST_URI} [R=301,L]

                RewriteCond %{REQUEST_FILENAME} !-f
                RewriteCond %{REQUEST_FILENAME} !-d
                RewriteRule ^(.+)\.(\d+)\.(js|css|png|jpg|gif)$ $1.$3 [L]

                # index.php remove any index.php parts
                RewriteCond %{THE_REQUEST} /index\.(php|html)
                RewriteRule (.*)index\.(php|html)(.*)$ /$1$3 [r=301,L]

                # codeigniter direct
                RewriteCond $0 !^(index\.php|assets|robots\.txt|sitemap\.xml|favicon\.ico)
                RewriteRule ^.*$ index.php [L]
            </Directory>

            ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/
            <Directory "/usr/lib/cgi-bin">
                AllowOverride None
                Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
                Order allow,deny
                Allow from all
            </Directory>

            ErrorLog ${APACHE_LOG_DIR}/error.log
            LogLevel warn
            CustomLog ${APACHE_LOG_DIR}/ssl_access.log combined

            Alias /doc/ "/usr/share/doc/"
            <Directory "/usr/share/doc/">
                Options Indexes MultiViews FollowSymLinks
                AllowOverride None
                Order deny,allow
                Deny from all
                Allow from 127.0.0.0/255.0.0.0 ::1/128
            </Directory>

            SSLEngine on
            # Use our self-signed certificate by default
            SSLCertificateFile /etc/apache2/ssl/certs/www.mysite.com.crt
            SSLCertificateKeyFile /etc/apache2/ssl/private/www.mysite.com.key

            <FilesMatch "\.(cgi|shtml|phtml|php)$">
                SSLOptions +StdEnvVars
            </FilesMatch>
            <Directory /usr/lib/cgi-bin>
                SSLOptions +StdEnvVars
            </Directory>

            BrowserMatch "MSIE [2-6]" \
                nokeepalive ssl-unclean-shutdown \
                downgrade-1.0 force-response-1.0
            # MSIE 7 and newer should be able to use keepalive
            BrowserMatch "MSIE [17-9]" ssl-unclean-shutdown
        </VirtualHost>
        </IfModule>

    httpd.conf file: just a lot of stuff from HTML5 Boilerplate; I will post it if need be.

    Old .htaccess file:

        <IfModule mod_rewrite.c>
            # index.php remove any index.php parts
            RewriteCond %{THE_REQUEST} /index\.(php|html)
            RewriteRule (.*)index\.(php|html)(.*)$ /$1$3 [r=301,L]

            RewriteCond $1 !^(index\.php|assets|robots\.txt|sitemap\.xml|favicon\.ico)
            RewriteRule ^(.*)/$ /$1 [r=301,L]

            # codeigniter direct
            RewriteCond $1 !^(index\.php|assets|robots\.txt|sitemap\.xml|favicon\.ico)
            RewriteRule ^(.*)$ /index.php/$1 [L]
        </IfModule>

    Any help would be hugely appreciated!!

    Read the article
