Search Results

Search found 29949 results on 1198 pages for 'large scale website'.

Page 225/1198 | < Previous Page | 221 222 223 224 225 226 227 228 229 230 231 232  | Next Page >

  • Safe to KILL a MySQL process REPLACEing records in a large MyISAM table?

    - by threecheeseopera
    I have a REPLACE query that has been running for a few days now on a few MyISAM tables, the largest having 20+ million records. I need it to stop. It is, basically:

        REPLACE INTO really_large_table (a,b,c,d)
        SELECT e,f,g,h FROM big_table
        INNER JOIN huge_table
        ON big_table.x LIKE CONCAT('%', huge_table.y, '%');

    I need to KILL it, and I am worried that I may corrupt really_large_table. Because the sub-query itself takes a significant amount of time, the REPLACEing probably occurs (relatively) infrequently; if this is true, does that make it less likely for the data to become corrupted? For the curious, here is the SO question asked about the query I am trying to kill.
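
    For reference, a minimal sketch of how one would normally locate and kill the thread (assuming sufficient privileges; the id 42 below is a placeholder):

        -- find the Id of the long-running REPLACE in the output
        SHOW FULL PROCESSLIST;
        -- then kill that thread by id
        KILL 42;

    Since MyISAM is non-transactional, an interrupted REPLACE cannot be rolled back, so running CHECK TABLE (and, if needed, REPAIR TABLE) on really_large_table afterwards is a sensible precaution.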

    Read the article

  • Beginner question: to extract a large subset of a table from MySQL, how do indexing and table order matter?

    - by chongman
    Sorry if this is too simple, but thanks in advance for helping. This is for MySQL but might be relevant for other RDBMSs. tblA has 5 columns: colA, colB, colC, mydata, A_id. It has about 10^9 records, with 10^3 distinct values for colA, colB, colC. tblB has 3 columns: colA, colB, B_id. It has about 10^4 records. I want all the records from tblA (except the A_id) that have a match in tblB. In other words, I want to use tblB to describe the subset that I want to extract and then extract those records from tblA. Namely:

        SELECT a.colA, a.colB, a.colC, a.mydata
        FROM tblA AS a
        INNER JOIN tblB AS b
        ON a.colA = b.colA AND a.colB = b.colB;

    It's taking a really long time (more than an hour) on a newish computer (4GB, Core2Quad, Ubuntu), and I just want to check my understanding of the following optimization steps. Suppose this is the only query I will ever run on these tables, so ignore the need to run other queries. Now my questions:

    1) What indexes should I create to optimize this query? I think I just need a multi-column index on (colA, colB) for both tables; I don't think I need separate indexes for colA and colB. Another Stack Overflow article (that I can't find) mentioned that adding new indexes is slower when there are existing indexes, so that might be another reason to prefer the single multi-column index.

    2) Is INNER JOIN correct? I just want results where a match is found.

    3) Is it faster if I join tblA to tblB, or the other way around (tblB to tblA)? This previous answer says that the optimizer should take care of that.

    4) Does the order of the conditions after ON matter? This previous answer says that the optimizer also takes care of the execution order.
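
    For question 1, a minimal sketch of those multi-column indexes in MySQL syntax (index names are illustrative):

        ALTER TABLE tblA ADD INDEX idx_a_colA_colB (colA, colB);
        ALTER TABLE tblB ADD INDEX idx_b_colA_colB (colA, colB);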

    Read the article

  • Ruby: Why is Array.sort slow for large objects?

    - by David Waller
    A colleague needed to sort an array of ActiveRecord objects in a Rails app. He tried the obvious Array.sort! but it seemed surprisingly slow, taking 32s for an array of 3700 objects. So just in case it was these big fat objects slowing things down, he reimplemented the sort by sorting an array of small objects, then reordering the original array of ActiveRecord objects to match - as shown in the code below. Tada! The sort now takes 700ms. That really surprised me. Does Ruby's sort method end up copying objects about the place rather than just references? He's using Ruby 1.8.6/7.

        def self.sort_events(events)
          event_sorters = Array.new(events.length) { |i| EventSorter.new(i, events[i]) }
          event_sorters.sort!
          event_sorters.collect { |es| events[es.index] }
        end

        private

        # Class used by sort_events
        class EventSorter
          attr_reader :sqn
          attr_reader :time
          attr_reader :index

          def initialize(index, event)
            @index = index
            @sqn = event.sqn
            @time = event.time
          end

          def <=>(b)
            @time != b.time ? @time <=> b.time : @sqn <=> b.sqn
          end
        end
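
    As an aside, a minimal sketch of the same ordering via sort_by (available in Ruby 1.8), which computes each comparison key once rather than touching the heavyweight records in every comparison; whether it matches the EventSorter speedup is an assumption worth benchmarking:

        sorted_events = events.sort_by { |e| [e.time, e.sqn] }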

    Read the article

  • ADO program to list members of a large group.

    - by AlexGomez
    Hi everyone, I'm attempting to list all the members in an Active Directory group using ADO. The problem I have is that many of these groups have over 1500 members, and ADSI cannot handle more than 1500 items in a multi-valued attribute. Fortunately I came across Richard Mueller's wonderful VBScript that handles more than 1500 members at http://www.rlmueller.net/DocumentLargeGroup.htm. I modified his code as shown below so that I can list ALL the groups and their memberships in a certain OU. However, I keep getting the exception shown below:

        "ADODB.Recordset: Item cannot be found in the collection corresponding to the requested name or ordinal."

    My program appears to get stuck at:

        strPath = adoRecordset.Fields("ADsPath").Value
        Set objGroup = GetObject(strPath)

    All I am doing above is issuing the query to get back a recordset consisting of the ADsPath for each group in the OU. The code then walks through the recordset, grabs the ADsPath for the first group, and stores it in a variable named strPath; we then use the value of that variable to bind to the group account for that group. It really should work! Any idea why the code below doesn't work for me? Any pointers will be greatly appreciated. Thanks.

        Option Explicit

        Dim objRootDSE, strDNSDomain, adoCommand
        Dim adoConnection, strBase, strAttributes
        Dim strFilter, strQuery, adoRecordset
        Dim strDN, intCount, blnLast, intLowRange
        Dim intHighRange, intRangeStep, objField
        Dim objGroup, objMember, strName, strPath

        ' Determine DNS domain name.
        Set objRootDSE = GetObject("LDAP://RootDSE")
        'strDNSDomain = objRootDSE.Get("DefaultNamingContext")
        strDNSDomain = "XXXXXXXX"

        ' Use ADO to search Active Directory.
        Set adoCommand = CreateObject("ADODB.Command")
        Set adoConnection = CreateObject("ADODB.Connection")
        adoConnection.Provider = "ADsDSOObject"
        adoConnection.Open = "Active Directory Provider"
        adoCommand.ActiveConnection = adoConnection
        adoCommand.Properties("Page Size") = 100
        adoCommand.Properties("Timeout") = 30
        adoCommand.Properties("Cache Results") = False

        ' Specify base of search.
        strBase = "<LDAP://" & strDNSDomain & ">"
        ' Specify the attribute values to retrieve.
        strAttributes = "member"
        ' Filter on objects of class "group".
        strFilter = "(&(objectClass=group)(samAccountName=*))"

        ' Enumerate direct group members.
        ' Use range limits to handle more than 1000/1500 members.
        ' Set up to retrieve 1000 members at a time.
        blnLast = False
        intRangeStep = 999
        intLowRange = 0
        intHighRange = intLowRange + intRangeStep

        Do While True
            If (blnLast = True) Then
                ' If last query, retrieve remaining members.
                strQuery = strBase & ";" & strFilter & ";" _
                    & strAttributes & ";range=" & intLowRange _
                    & "-*;subtree"
            Else
                ' If not last query, retrieve 1000 members.
                strQuery = strBase & ";" & strFilter & ";" _
                    & strAttributes & ";range=" & intLowRange & "-" _
                    & intHighRange & ";subtree"
            End If
            adoCommand.CommandText = strQuery
            Set adoRecordset = adoCommand.Execute
            adoRecordset.MoveFirst
            intCount = 0
            Do Until adoRecordset.EOF
                strPath = adoRecordset.Fields("ADsPath").Value
                Set objGroup = GetObject(strPath)
                For Each objField In adoRecordset.Fields
                    If (VarType(objField) = (vbArray + vbVariant)) Then
                        For Each strDN In objField.Value
                            ' Escape any forward slash characters, "/", with the backslash
                            ' escape character. All other characters that should be escaped are.
                            strDN = Replace(strDN, "/", "\/")
                            ' Check dictionary object for duplicates.
                            'If (objGroupList.Exists(strDN) = False) Then
                                ' Add to dictionary object.
                                'objGroupList.Add strDN, True
                                ' Bind to each group member, to find the member's samAccountName.
                                Set objMember = GetObject("LDAP://" & strDN)
                                ' Output the group member's samAccountName.
                                Wscript.Echo objMember.samAccountName
                                intCount = intCount + 1
                            'End If
                        Next
                    End If
                Next
                adoRecordset.MoveNext
            Loop
            adoRecordset.Close
            ' If this is the last query, exit the Do While loop.
            If (blnLast = True) Then
                Exit Do
            End If
            ' If the previous query returned no members, then the previous
            ' query for the next 1000 members failed. Perform one more
            ' query to retrieve remaining members (less than 1000).
            If (intCount = 0) Then
                blnLast = True
            Else
                ' Set up to retrieve the next 1000 members.
                intLowRange = intHighRange + 1
                intHighRange = intLowRange + intRangeStep
            End If
        Loop

    Read the article

  • Why use hashing to create pathnames for large collections of files?

    - by Stephen
    Hi, I noticed a number of cases where an application or database stored collections of files/blobs using a hash to determine the path and filename. I believe the intended outcome is a situation where the path never gets too deep and the folders never get too full - too many files (or folders) in a folder makes for slower access. EDIT: Examples are often digital libraries or repositories, though the simplest example I can think of (that can be installed in about 30s) is the Zotero document/citation database. Why do this? EDIT: thanks Mat for the answer - does this technique of using a hash to create a file path have a name? Is it a pattern? I'd like to read more, but have failed to find anything in the ACM Digital Library.
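
    A minimal sketch of the technique in Python (the two-level, two-hex-character layout is one common convention, not necessarily what Zotero does):

        import hashlib
        import os

        def shard_path(root, name):
            # Hash the name, then use leading hex characters as directories,
            # e.g. /var/blobs/7f/a3/7fa3... - keeps any single folder small.
            digest = hashlib.sha1(name.encode("utf-8")).hexdigest()
            return os.path.join(root, digest[0:2], digest[2:4], digest)

        print(shard_path("/var/blobs", "report.pdf"))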

    Read the article

  • Alternate User select interface in Django admin to reduce page size on a large site?

    - by David Eyk
    I have a Django-based site with roughly 300,000 User objects. Admin pages for objects with a ForeignKey field to User take a very long time to load as the resulting form is about 6MB in size. Of course, the resulting dropdown isn't particularly useful, either. Are there any off-the-shelf replacements for handling this case? I've been googling for a snippet or a blog entry, but haven't found anything yet. I'd like to have a smaller download size and a more usable interface.
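
    For what it's worth, the admin's built-in raw_id_fields option addresses exactly this: it renders the ForeignKey as a plain ID input with a lookup popup instead of a fully populated <select>. A minimal sketch (the Order model and its user field are invented for illustration):

        from django.contrib import admin
        from myapp.models import Order

        class OrderAdmin(admin.ModelAdmin):
            # Render the ForeignKey to User as an ID input rather than a
            # 300,000-option dropdown, keeping the form small.
            raw_id_fields = ("user",)

        admin.site.register(Order, OrderAdmin)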

    Read the article

  • How to gzip-compress large Ajax responses (HTML only) in ColdFusion?

    - by frequent
    I'm running ColdFusion 8 and jQuery/jQuery Mobile on the front-end. I'm playing around with an Ajax-powered search engine, trying to find the best tradeoff between data volume and client-side processing time. Currently my Ajax search returns 40k of JQM-enhanced markup, which avoids any client-side enhancement. This way I get by without the page stalling for about 2-3 seconds while JQM enhances all elements in the search results. What I'm curious about is whether I can gzip Ajax responses sent from ColdFusion. If I check the headers of my search right now, I have this:

        RESPONSE header
        Connection         Keep-Alive
        Content-Type       text/html; charset=UTF-8
        Date               Sat, 01 Sep 2012 08:47:07 GMT
        Keep-Alive         timeout=5, max=95
        Server             Apache/2.2.21 (Win32) mod_ssl/2.2.21 ...
        Transfer-Encoding  chunked

        REQUEST header
        Accept             */*
        Accept-Encoding    gzip, deflate
        Accept-Language    de-de,de;q=0.8,en-us;q=0.5,en;q=0.3
        Connection         keep-alive
        Cookie             CFID= ; CFTOKEN= ; resolution=1143
        Host               www.host.com
        Referer            http://www.host.com/dev/users/index.cfm

    So, my request would accept gzip, deflate, but I'm getting back chunked. I'm generating the Ajax response in a cfsavecontent (called compressedHTML) and run this to eliminate whitespace:

        <cfscript>
            compressedHTML = reReplace(renderedResults, "\>\s+\<", "> <", "ALL");
            compressedHTML = reReplace(compressedHTML, "\s{2,}", chr(13), "ALL");
            compressedHTML = reReplace(compressedHTML, "\s{2,}", chr(09), "ALL");
        </cfscript>

    before sending the compressedHTML in a response object like this:

        {"SUCCESS":true,"DATA": compressedHTML }

    Question: If I know I'm sending back HTML in my data object via Ajax, is there a way to gzip the response server-side before returning it vs sending chunked, if this is at all possible? If so, can I do this inside my response object or would I have to send back "pure" HTML? Thanks! EDIT: Found this on setting a 'web.config' for dynamic compression - doesn't seem to work. EDIT2: Found this snippet and am playing with it, although I'm not sure it will work:

        <cfscript>
            compressedHTML = reReplace(renderedResults, "\>\s+\<", "> <", "ALL");
            compressedHTML = reReplace(compressedHTML, "\s{2,}", chr(13), "ALL");
            compressedHTML = reReplace(compressedHTML, "\s{2,}", chr(09), "ALL");
            if ( cgi.HTTP_ACCEPT_ENCODING contains "gzip" AND not showRaw ) {
                cfheader name="Content-Encoding" value="gzip";
                bos = createObject("java","java.io.ByteArrayOutputStream").init();
                gzipStream = createObject("java","java.util.zip.GZIPOutputStream");
                gzipStream.init(bos);
                gzipStream.write(compressedHTML.getBytes("utf-8"));
                gzipStream.close();
                bos.flush();
                bos.close();
                encoder = createObject("java","sun.misc.BASE64Encoder");
                outStr = encoder.encode(bos.toByteArray());
                compressedHTML = toString(bos.toByteArray());
            }
        </cfscript>

    Probably need to try this on the response object and not the compressedHTML variable.

    Read the article

  • How important is it to use SSL on every page of your website?

    - by Mark
    Recently I installed a certificate on the website I'm working on. I've made as much of the site as possible work over HTTP, but after you log in, it has to remain on HTTPS to prevent session hijacking, doesn't it? Unfortunately, this causes some problems with Google Maps; I get warnings in IE saying "this page contains insecure content". I don't think we can afford Google Maps Premier right now to get their secure service. It's sort of an auction site, so it's fairly important that people don't get charged for things they didn't purchase because some hacker got into their account. All payments are done through PayPal, so I'm not storing any sort of credit card info, but I am keeping personal contact information. Fraudulent charges could be reversed fairly easily if it ever came to that. What do you guys suggest I do? Should I take the bulk of the site off HTTPS and just secure certain pages, like wherever you enter your password, and that's it? That's what our competition seems to do.

    Read the article

  • How can I store a large amount of data from a database to XML (memory problem)?

    - by Andrija
    First, I had a problem with getting the data from the database: it took too much memory and failed. I've set -Xmx1500M and I'm using a scrolling ResultSet, so that was taken care of. Now I need to make an XML file from the data, but I can't put it in one file. At the moment, I'm doing it like this:

        while (rs.next()) {
            i++;
            xmlStringBuilder.append("\n\t<row>");
            xmlStringBuilder.append("\n\t\t<ID>" + Util.transformToHTML(rs.getInt("id")) + "</ID>");
            xmlStringBuilder.append("\n\t\t<JED_ID>" + Util.transformToHTML(rs.getInt("jed_id")) + "</JED_ID>");
            xmlStringBuilder.append("\n\t\t<IME_PJ>" + Util.transformToHTML(rs.getString("ime_pj")) + "</IME_PJ>");
            //etc.
            xmlStringBuilder.append("\n\t</row>");
            if (i % 100000 == 0) {
                // stores the data to a file with the name i.xml
                storeKBR(xmlStringBuilder.toString(), i);
                xmlStringBuilder = null;
                xmlStringBuilder = new StringBuilder();
            }
        }

    and it works; I get twelve 100 MB files. Now, what I'd like to do is have all that data in one file (which I then compress), but if I just remove the if part, I run out of memory. I thought about writing to a file, closing it, then opening it again, but that wouldn't get me much since I'd have to load the file into memory when I open it. P.S. If there's a better way to release the Builder, do let me know :)
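
    A minimal sketch of the streaming alternative (assuming the same rs and Util from the question): write each row to a BufferedWriter as it is read, so memory use stays flat no matter how many rows there are:

        import java.io.BufferedWriter;
        import java.io.FileWriter;
        import java.sql.ResultSet;

        // Streams each row straight to disk; nothing accumulates in memory.
        void writeAllRows(ResultSet rs) throws Exception {
            BufferedWriter out = new BufferedWriter(new FileWriter("all_rows.xml"));
            out.write("<rows>");
            while (rs.next()) {
                out.write("\n\t<row>");
                out.write("\n\t\t<ID>" + Util.transformToHTML(rs.getInt("id")) + "</ID>");
                // ... remaining columns exactly as in the loop above ...
                out.write("\n\t</row>");
            }
            out.write("\n</rows>");
            out.close();
        }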

    Read the article

  • Facebook Open Graph meta property og:type of 'website': the property 'object-name' requires an object of og:type 'object-name'

    - by chinmayahd
    In CakePHP 1.3, in a view .ctp, I have the following code:

        $url = 'http://example.com/exmp/explus/books/view/'.$book['Book']['id'];
        echo $this->Html->meta(array('property' => 'fb:app_id', 'content' => '*******'), '', array('inline' => false));
        echo $this->Html->meta(array('property' => 'og:type', 'content' => 'book'), '', array('inline' => false));
        echo $this->Html->meta(array('property' => 'og:url', 'content' => $url), '', array('inline' => false));
        echo $this->Html->meta(array('property' => 'og:title', 'content' => $book['Book']['title']), '', array('inline' => false));
        echo $this->Html->meta(array('property' => 'og:description', 'content' => $book['Book']['title']), '', array('inline' => false));
        $imgurl = '../image/'.$book['Book']['id'];
        echo $this->Html->meta(array('property' => 'og:image', 'content' => $imgurl), '', array('inline' => false));
        ?>

    and it gives the following error when I post it:

        {
          "error": {
            "message": "(#3502) Object at URL 'http://example.com/exmp/explus/books/view/234' has og:type of 'website'. The property 'book' requires an object of og:type 'book'.",
            "type": "OAuthException",
            "code": 3502
          }
        }

    Does anyone know how to solve this?

    Read the article

  • How to map a (large) integer to a (small in size) alphanumeric string with PHP? (Cantor?)

    - by Glooh
    Dear all, I can't figure out how to optimally do the following in PHP: In a database, I have messages with a unique ID, like 19041985. Now, I want to refer to these messages in a short-URL service, but not using generated hashes - by simply 'calculating' the original ID. In other words, for example, http://short.url/sYsn7 should let me calculate the message ID the visitor would like to request. To make it more obvious, I wrote the following in PHP to generate these 'alphanumeric ID versions'; of course, the other way around will let me calculate the original message ID. The question is: is this the optimal way of doing this? I hardly think so, but can't think of anything else.

        $alphanumString = '0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ-_';
        for ($i = 0; $i < strlen($alphanumString); $i++) {
            $alphanumArray[$i] = substr($alphanumString, $i, 1);
        }
        $id = 19041985;
        $out = '';
        for ($i = 0; $i < strlen($id); $i++) {
            if (isset($alphanumString["".substr($id, $i, 2).""]) && strlen($alphanumString["".substr($id, $i, 2).""]) > 0) {
                $out .= $alphanumString["".substr($id, $i, 2).""];
            } else {
                $out .= $alphanumString["".substr($id, $i, 1).""];
                $out .= $alphanumString["".substr($id, ($i+1), 1).""];
            }
            $i++;
        }
        print $out;
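
    For comparison, a minimal sketch that treats the ID as a number in base 64, reusing the question's 64-character alphabet (an illustration, not a drop-in replacement):

        $alphabet = '0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ-_';

        function encodeId($id, $alphabet) {
            $base = strlen($alphabet); // 64
            $out = '';
            do {
                $out = $alphabet[$id % $base] . $out; // take the lowest "digit"
                $id = (int)($id / $base);
            } while ($id > 0);
            return $out;
        }

        function decodeId($str, $alphabet) {
            $base = strlen($alphabet);
            $id = 0;
            for ($i = 0; $i < strlen($str); $i++) {
                $id = $id * $base + strpos($alphabet, $str[$i]);
            }
            return $id;
        }

        echo encodeId(19041985, $alphabet); // "18EX1" - five characters instead of eight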

    Read the article

  • What is the impact/limitation of an Oracle select with a large number of bind variables?

    - by Igal Serban
    We had our Oracle server choking while processing a select statement with close to 3500(!!) bind variables. This select is, obviously, built dynamically by code that we can't change. During the execution of this select the db server went to 100% CPU usage and our system almost halted. We know how to reproduce this problem, so we can prevent this specific condition. But I am wondering if there is a way to protect the db (by configuration) from this type of problem.

    Read the article

  • Loading a datagrid with large amounts of data in Silverlight?

    - by JD
    Hi, I am breaking up my project into small sections, and one of the sections involves loading a grid with possibly lots of records (up to thousands of records in the database). Ideally I would like some sort of mechanism where, as the user scrolls the grid, more data is retrieved. I have read that certain controls (DataPager with RIA) do this, but I would like to know how I could implement this myself or do something similar. I was thinking about loading 50 records at a time and, when the user scrolls near the 50th record, fetching the next 50, and so on. I am not sure how to do this, and it does not feel quite right; alternatively, should I load the ids of the records in the grid and get each row to load itself via an async thread - but then I am hitting my database for each record? Thanks JD.

    Read the article

  • What if a large number of objects are passed to my SwingWorker.process() method?

    - by Trejkaz
    I just found an interesting situation. Suppose you have some SwingWorker (I've made this one vaguely reminiscent of my own):

        public class AddressTreeBuildingWorker extends SwingWorker<Void, NodePair> {
            private DefaultTreeModel model;

            public AddressTreeBuildingWorker(DefaultTreeModel model) {
            }

            @Override
            protected Void doInBackground() {
                // Omitted; performs variable processing to build a tree of address nodes.
            }

            @Override
            protected void process(List<NodePair> chunks) {
                for (NodePair pair : chunks) {
                    // Actually the real thing inserts in order.
                    model.insertNodeInto(parent, child, parent.getChildCount());
                }
            }

            private static class NodePair {
                private final DefaultMutableTreeNode parent;
                private final DefaultMutableTreeNode child;

                private NodePair(DefaultMutableTreeNode parent, DefaultMutableTreeNode child) {
                    this.parent = parent;
                    this.child = child;
                }
            }
        }

    If the work done in the background is significant then things work well - process() is called with relatively small lists of objects and everything is happy. Problem is, if the work done in the background is suddenly insignificant for whatever reason, process() receives a huge list of objects (I have seen 1,000,000, for instance), and by the time you process each object you have spent 20 seconds on the Event Dispatch Thread, exactly what SwingWorker was designed to avoid. In case it isn't clear, both of these occur on the same SwingWorker class for me - it depends on the input data and the type of processing the caller wanted. Is there a proper way to handle this? Obviously I can intentionally delay or yield the background processing thread so that a smaller number might arrive each time, but this doesn't feel like the right solution to me.
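
    One pragmatic mitigation - sketched below with invented names and batch sizes, so not necessarily the "proper" way the question asks for - is to throttle publish() in doInBackground() so that chunks reach process() in bounded batches:

        import java.util.List;
        import javax.swing.SwingWorker;

        public class ThrottledWorker extends SwingWorker<Void, Integer> {
            @Override
            protected Void doInBackground() throws Exception {
                for (int i = 0; i < 1000000; i++) {
                    publish(i);
                    if (i % 1000 == 0) {
                        Thread.sleep(1); // let the EDT drain the pending chunk queue
                    }
                }
                return null;
            }

            @Override
            protected void process(List<Integer> chunks) {
                // Chunks now arrive in bounded batches instead of one giant list.
                System.out.println("processing batch of " + chunks.size());
            }
        }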

    Read the article

  • Parse large XML file w/ script or use BioPython API?

    - by jeremy04
    Hey guys, this is my first question on here. I'm trying to make a local copy of the UniProtKB in SQL. The UniProtKB is 2.1 GB, and it comes in XML and a special text format used by SwissProt. Here are my options:

    1) Use a SAX parser (XML) - I chose Ruby and Nokogiri. I started writing the parser, but my initial reaction: how would I map the XML schema to the SAX parser?

    2) BioPython - I already have BioSQL/Biopython installed, which literally created my SQL schema for me, and I was able to successfully insert one SwissProt/UniProt txt file into the database. I'm running it right now (crosses fingers) on the entire 2.1 GB. Here is the code I'm running:

        from Bio import SeqIO
        from BioSQL import BioSeqDatabase
        from Bio import SwissProt

        server = BioSeqDatabase.open_database(driver = "MySQLdb", user = "root",
                                              passwd = "", host = "localhost", db = "bioseqdb")
        db = server["uniprot"]
        iterator = SeqIO.parse(open("/path/to/uniprot_sprot.dat", "r"), "swiss")
        db.load(iterator)
        server.commit()

    Edit: it's now crashing because the transactions are getting locked (since the tables are InnoDB):

        Error Number: 1205
        Lock wait timeout exceeded; try restarting transaction.

    I'm using MySQL version 5.1.43. Should I switch my database to PostgreSQL?

    Read the article

  • Why did File::Find finish short of completely traversing a large directory?

    - by Stan
    A directory exists with a total of 2,153,425 items (according to Windows folder Properties). It contains .jpg and .gif image files located within a few subdirectories. The task was to move the images into a different location while querying each file's name to retrieve some relevant info and store it elsewhere. The script that used File::Find finished at 20462 files. Out of curiosity I wrote a tiny recursive function to count the items which returned a count of 1,734,802. I suppose the difference can be accounted for by the fact that it didn't count folders, only files that passed the -f test. The problem itself can be solved differently by querying for file names first instead of traversing the directory. I'm just wondering what could've caused File::Find to finish at a small fraction of all files. The data is stored on an NTFS file system.

    Read the article

  • Is it possible to parse a web page from the client side for a large number of words, and if so, how?

    - by Technoh
    I have a list of keywords, about 25,000 of them. I would like people who add a certain <script> tag on their web page to have these keywords transformed into links. What would be the best way to go about achieving this? I have tried the simple JavaScript approach (an array with lots of elements and regexping/replacing each one) and it obviously slows down the browser. I could always process the content server-side if there was a way, from the client, to send the page's content to a cross-domain server script (I'm partial to PHP but it could be anything), but I don't know of any way to do this. Any other working solution is also welcome.

    Read the article

  • CoreGraphics taking a while to show on a large view - can I get it to repeat pixels?

    - by Andrew
    This is my Core Graphics code:

        void drawTopPaperBackground(CGContextRef context, CGRect rect) {
            CGRect paper3 = CGRectMake(10, 14, 300, rect.size.height - 14);
            CGRect paper2 = CGRectMake(13, 12, 294, rect.size.height - 12);
            CGRect paper1 = CGRectMake(16, 10, 288, rect.size.height - 10);

            // Shadow
            CGContextSetShadowWithColor(context, CGSizeMake(0,0), 10, [[UIColor colorWithWhite:0 alpha:0.5] CGColor]);
            CGPathRef path = createRoundedRectForRect(paper3, 0);
            CGContextSetFillColorWithColor(context, [[UIColor blackColor] CGColor]);
            CGContextAddPath(context, path);
            CGContextFillPath(context);

            // Layers of paper
            //CGContextSaveGState(context);
            drawPaper(context, paper3);
            drawPaper(context, paper2);
            drawPaper(context, paper1);
            //CGContextRestoreGState(context);
        }

        void drawPaper(CGContextRef context, CGRect rect) {
            // Shadow
            CGContextSaveGState(context);
            CGContextSetShadowWithColor(context, CGSizeMake(0,0), 1, [[UIColor colorWithWhite:0 alpha:0.5] CGColor]);
            CGPathRef path = createRoundedRectForRect(rect, 0);
            CGContextSetFillColorWithColor(context, [[UIColor blackColor] CGColor]);
            CGContextAddPath(context, path);
            CGContextFillPath(context);
            //CGContextRestoreGState(context);

            // Gradient
            //CGContextSaveGState(context);
            CGColorRef startColor = [UIColor colorWithWhite:0.92 alpha:1.0].CGColor;
            CGColorRef endColor = [UIColor colorWithWhite:0.94 alpha:1.0].CGColor;
            CGRect firstHalf = CGRectMake(rect.origin.x, rect.origin.y, rect.size.width / 2, rect.size.height);
            CGRect secondHalf = CGRectMake(rect.origin.x + (rect.size.width / 2), rect.origin.y, rect.size.width / 2, rect.size.height);
            drawVerticalGradient(context, firstHalf, startColor, endColor);
            drawVerticalGradient(context, secondHalf, endColor, startColor);
            //CGContextRestoreGState(context);

            //CGContextSaveGState(context);
            CGRect redRect = rectForRectWithInset(rect, -1);
            CGMutablePathRef redPath = createRoundedRectForRect(redRect, 0);
            //CGContextSaveGState(context);
            CGContextSetStrokeColorWithColor(context, [[UIColor blackColor] CGColor]);
            CGContextAddPath(context, path);
            CGContextClip(context);
            CGContextAddPath(context, redPath);
            CGContextSetShadowWithColor(context, CGSizeMake(0, 0), 15.0, [[UIColor colorWithWhite:0 alpha:0.1] CGColor]);
            CGContextStrokePath(context);
            CGContextRestoreGState(context);
        }

    The view is a UIScrollView, which contains a textview. Every time the user types something and goes onto a new line, I call [self setNeedsDisplay]; and it redraws the code. But when the view starts to get long - around 1000 points in height - it has very noticeable lag. How can I make this code more efficient? Can I take a line of pixels and make it just repeat that, or stretch it, all the way down?

    Read the article

  • Communication with different social networks, strategy pattern?

    - by bclaessens
    Hi. For the last few days I've been thinking about how I can solve the following programming problem and find the ideal, flexible programming structure. (Note: I'm using Flash as my platform technology, but that shouldn't matter since I'm just looking for the ideal design pattern.) Our Flash website has multiple situations in which it has to communicate with different social networks (Facebook, Netlog and Skyrock). Now, the communication strategy doesn't have to change multiple times over one "run"; the strategy should be picked once (at launch time) for that session. The real problem is the way the communication works between each social network and our website. Some networks force us to ask for a token, others force us to use a webservice, yet another forces us to set up its communication through JavaScript. The problem becomes more complicated when our website has to run in each network's canvas, which results in even more (different) ways of communicating. To sum up, our website has to work in the following cases:

    - standalone on the campaign website URL (user chooses their favourite network):
        - communicate with Netlog, OR
        - communicate with Facebook, OR
        - communicate with Skyrock
    - run in a Netlog canvas and log in automatically (website checks for Netlog parameters)
    - run in a Facebook canvas and log in automatically (website checks for Facebook params)
    - run in a Skyrock canvas and log in automatically (website checks for Skyrock params)

    As you can see, our website needs 6 different ways to communicate with a social network. To be honest, the actual significant difference between all communication strategies is the way they have to connect to their individual network (as stated above in my example). Posting an image, making a comment, ... is the same whether it runs standalone or in the canvas URL. WARNING: posting an image or a comment DOES differ from network to network. Should I use the strategy pattern and make 6 different communication strategies, or is there a better way? An example would be great but isn't required ;) Thanks in advance
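
    On that note, a minimal sketch of the strategy pattern in Python (for brevity; an ActionScript version would be structurally identical, and every class, method and parameter name here is invented):

        class NetworkStrategy(object):
            """Connecting differs per network/canvas; actions differ per network."""
            def connect(self):
                raise NotImplementedError
            def post_image(self, image):
                raise NotImplementedError

        class FacebookCanvasStrategy(NetworkStrategy):
            def connect(self):
                print("auto-login from Facebook canvas parameters")
            def post_image(self, image):
                print("post %s via the Facebook API" % image)

        class NetlogStandaloneStrategy(NetworkStrategy):
            def connect(self):
                print("request a Netlog token")
            def post_image(self, image):
                print("post %s via the Netlog webservice" % image)

        def pick_strategy(launch_params):
            # Chosen once per session, at launch time.
            if "fb_sig" in launch_params:
                return FacebookCanvasStrategy()
            return NetlogStandaloneStrategy()

        strategy = pick_strategy({"fb_sig": "..."})
        strategy.connect()
        strategy.post_image("photo.jpg")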

    Read the article

  • How do I integrate PayPal into my website? (Java)

    - by Nitesh Panchal
    Hello, I want to integrate PayPal into my Java web application. I saw that they offer many methods to accomplish this, but I want my visitors to remain on my site only. My friend told me that PayPal offers a webservice, but I can't seem to find any documentation on the PayPal site. If anybody could help me with this, I would be really grateful; please point me to the relevant pages on PayPal's site where I can read up and get this done. Secondly, my friend also told me that we need to give PayPal a location where my visitors will be redirected once the payment is complete. But I am confused: I am working on localhost. How would PayPal know about my localhost? I have already created my sandbox testing account. What should be my next step? Please explain in detail; I don't know anything about PayPal. I once created a demo application with Express Checkout, where they provide a simple Pay Now button and, on clicking it, a shopping cart etc. appears. But now I want my visitors to stay on the same website. Thanks in advance :)

    Read the article

  • Does anybody have any suggestions on which of these two approaches is better for a large delete?

    - by RPS
    Approach #1:

        DECLARE @count int
        SET @count = 2000
        DECLARE @rowcount int
        SET @rowcount = @count
        WHILE @rowcount = @count
        BEGIN
            DELETE TOP (@count) FROM ProductOrderInfo
            WHERE ProductId = @product_id
              AND bCopied = 1
              AND FileNameCRC = @localNameCrc

            SELECT @rowcount = @@ROWCOUNT
            WAITFOR DELAY '000:00:00.400'
        END

    Approach #2:

        DECLARE @count int
        SET @count = 2000
        DECLARE @rowcount int
        SET @rowcount = @count
        WHILE @rowcount = @count
        BEGIN
            DELETE FROM ProductOrderInfo
            WHERE ProductId = @product_id
              AND FileNameCRC IN (
                SELECT TOP(@count) FileNameCRC
                FROM ProductOrderInfo WITH (NOLOCK)
                WHERE bCopied = 1
                  AND FileNameCRC = @localNameCrc
              )

            SELECT @rowcount = @@ROWCOUNT
            WAITFOR DELAY '000:00:00.400'
        END

    Read the article
