Search Results

Search found 8250 results on 330 pages for 'dunn less'.


  • Aggregating and displaying content from hundreds of RSS feeds

    - by Andrew LeClair
    I'd like to build a website that aggregates and displays content from hundreds of RSS feeds. The feeds will be from different sites: Twitter, Flickr, Tumblr, etc., so the content will be very heterogeneous. In a perfect world — and this is more of a side issue — I would like to allow other people to help manage the list of feeds and assign tags to the content from each individual feed so that you can filter the items that are displayed. What I've tried so far: Google Feeds API – I thought this would be the answer, but unless I'm missing something, the FeedController will only output the collected feed content as separate lists. Is there any way to ask the Google Feeds API to aggregate and sort the content from many RSS feeds before displaying? Yahoo! Pipes – This also seemed like a good solution at first. I set up a Pipe that accesses a list of RSS feeds stored in a Google Docs spreadsheet and then aggregates the content. However, the output leaves a lot to be desired; Tumblr video posts, for example, only show a title and a permalink to the post, and the embedded YouTube video is lost. PHP – I've seen this question, which looks like a good approach. I'm less proficient in PHP, so although I'm willing to learn, I'd ideally like to find a different approach. Any thoughts? Thanks.
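
    For a server-side "different approach", the aggregate-then-sort step itself is small. Below is a minimal sketch in Java using the ROME feed parser. ROME, the placeholder URLs, and the choice of Java are assumptions of this sketch, not anything the question specifies; the same fetch/merge/sort-by-date idea applies equally to SimplePie in PHP or feedparser in Python. Per-feed tags for filtering could simply be attached to each URL before the merge.

      import com.rometools.rome.feed.synd.SyndEntry;
      import com.rometools.rome.feed.synd.SyndFeed;
      import com.rometools.rome.io.SyndFeedInput;
      import com.rometools.rome.io.XmlReader;

      import java.net.URL;
      import java.util.*;

      // Sketch: fetch several feeds, merge their entries, sort newest-first.
      public class FeedMerger {
          public static void main(String[] args) throws Exception {
              List<String> feedUrls = Arrays.asList(
                      "http://example.com/a.rss",        // placeholder URLs
                      "http://example.com/b.rss");

              List<SyndEntry> merged = new ArrayList<>();
              SyndFeedInput input = new SyndFeedInput();
              for (String url : feedUrls) {
                  SyndFeed feed = input.build(new XmlReader(new URL(url)));
                  merged.addAll(feed.getEntries());
              }

              // Newest first; entries without a publish date sink to the end.
              Comparator<Date> newestFirst = Comparator.nullsLast(Comparator.<Date>reverseOrder());
              merged.sort(Comparator.comparing(SyndEntry::getPublishedDate, newestFirst));

              for (SyndEntry e : merged) {
                  System.out.println(e.getPublishedDate() + "  " + e.getTitle() + "  " + e.getLink());
              }
          }
      }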


  • Question on SQL Grouping

    - by Lijo
    Hi team, I am trying to achieve the following without using a subquery. For a funding, I would like to select the latest Letter created date and the "earliest WorkList created since the latest Letter" date. Sample data:
      FundingId, Letter: (1, 1/1/2009) (1, 5/5/2009) (1, 8/8/2009) (2, 3/3/2009)
      FundingId, WorkList: (1, 5/5/2009) (1, 9/9/2009) (1, 10/10/2009) (2, 2/2/2009)
      Expected result - FundingId, Letter, WorkList: (1, 8/8/2009, 9/9/2009)
    I wrote a query as follows. It has a bug: it omits any FundingId whose minimum WorkList date is less than the latest Letter date, even when that funding has another WorkList created after the Letter date.
      CREATE TABLE #Funding(
          [Funding_ID] [int] IDENTITY(1,1) NOT NULL,
          [Funding_No] [int] NOT NULL,
          CONSTRAINT [PK_Center_Center_ID] PRIMARY KEY NONCLUSTERED ([Funding_ID] ASC)
      ) ON [PRIMARY]

      CREATE TABLE #Letter(
          [Letter_ID] [int] IDENTITY(1,1) NOT NULL,
          [Funding_ID] [int] NOT NULL,
          [CreatedDt] [SMALLDATETIME],
          CONSTRAINT [PK_Letter_Letter_ID] PRIMARY KEY NONCLUSTERED ([Letter_ID] ASC)
      ) ON [PRIMARY]

      CREATE TABLE #WorkList(
          [WorkList_ID] [int] IDENTITY(1,1) NOT NULL,
          [Funding_ID] [int] NOT NULL,
          [CreatedDt] [SMALLDATETIME],
          CONSTRAINT [PK_WorkList_WorkList_ID] PRIMARY KEY NONCLUSTERED ([WorkList_ID] ASC)
      ) ON [PRIMARY]

      SELECT F.Funding_ID, Funding_No, MAX(L.CreatedDt), MIN(W.CreatedDt)
      FROM #Funding F
      INNER JOIN #Letter L ON L.Funding_ID = F.Funding_ID
      LEFT OUTER JOIN #WorkList W ON W.Funding_ID = F.Funding_ID
      GROUP BY F.Funding_ID, Funding_No
      HAVING MIN(W.CreatedDt) > MAX(L.CreatedDt)
    How can I write a correct query without using a subquery? Please help. Thanks, Lijo


  • Large Product catalog with statistics - alternatives to Sql Server?

    - by Eric P
    I am building a UI for a large product catalog (millions of products). I am using SQL Server, FreeText search and ASP.NET MVC. Tables are normalized and indexed, and most queries take less than a second to return. The issue is this: let's say a user does a search by keyword. On the search results page I need to display/query for: the first 20 matching products (paged, sorted); the total count of matching products for paging; the list of stores of only the matching products; the list of brands of only the matching products; the list of colors of only the matching products. Each query takes about 0.5 to 1 second, so altogether it is about 5 seconds. I would like to get the whole page to load in under 1 second. There are several approaches: 1) Optimize the queries even more. I have already spent a lot of time on this, so I'm not sure it can be pushed further. 2) Load products first, then load the rest of the information using AJAX. More of a workaround, and the UI would need to be revised. 3) Re-organize the data to be more report-friendly. I have already aggregated a lot of fields. I checked out several similar sites, for example zappos.com. Not only do they display the same information I would like in under 1 second, they also include statistics (the number of results in each category). The following is the search for the keyword "white": http://www.zappos.com/white How do sites like Zappos and Amazon make their results, filters and stats appear almost instantly?


  • finding long repeated substrings in a massive string

    - by Will
    I naively imagined that I could build a suffix trie where I keep a visit count for each node, and then the deepest nodes with counts greater than one would be the result set I'm looking for. But I have a really, really long string (hundreds of megabytes) and about 1 GB of RAM, which is why building a suffix trie with counting data is too inefficient space-wise to work for me. To quote Wikipedia's Suffix tree article: storing a string's suffix tree typically requires significantly more space than storing the string itself. The large amount of information in each edge and node makes the suffix tree very expensive, consuming about ten to twenty times the memory size of the source text in good implementations. The suffix array reduces this requirement to a factor of four, and researchers have continued to find smaller indexing structures. And those are Wikipedia's comments on the tree, not the trie. How can I find long repeated sequences in such a large amount of data, and in a reasonable amount of time (e.g. less than an hour on a modern desktop machine)? (Some Wikipedia links to avoid people posting them as the 'answer': Algorithms on strings and especially Longest repeated substring problem ;-) )
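
    For reference, the suffix-array route the quoted paragraph points at boils down to: sort all suffix start positions, then the longest repeated substring is the longest common prefix of two adjacent suffixes in that order. The Java sketch below only shows the structure on a toy input; the comparison-based sort used here is nowhere near adequate for hundreds of megabytes, where a linear-time construction (e.g. SA-IS) over a byte array, probably in C or C++, would be needed.

      import java.util.Arrays;
      import java.util.Comparator;

      // Illustration only: longest repeated substring = longest common prefix of
      // two neighbours in the sorted suffix array.
      public class LongestRepeat {

          static String longestRepeatedSubstring(String s) {
              int n = s.length();
              Integer[] sa = new Integer[n];
              for (int i = 0; i < n; i++) sa[i] = i;
              // Sort suffix start positions by the suffixes they denote (slow; demo only).
              Arrays.sort(sa, Comparator.comparing(s::substring));

              int bestLen = 0, bestPos = 0;
              for (int i = 1; i < n; i++) {
                  int lcp = commonPrefix(s, sa[i - 1], sa[i]);
                  if (lcp > bestLen) { bestLen = lcp; bestPos = sa[i]; }
              }
              return s.substring(bestPos, bestPos + bestLen);
          }

          static int commonPrefix(String s, int a, int b) {
              int k = 0;
              while (a + k < s.length() && b + k < s.length() && s.charAt(a + k) == s.charAt(b + k)) k++;
              return k;
          }

          public static void main(String[] args) {
              System.out.println(longestRepeatedSubstring("banana")); // prints "ana"
          }
      }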


  • Need help implementing this algorithm with MapReduce (Hadoop)

    - by Julia
    Hi all! I have an algorithm that goes through a large data set, reads some text files, and searches for specific terms in those lines. I have it implemented in Java, but I didn't want to post code so that it doesn't look like I am searching for someone to implement it for me; but it is true that I really need a lot of help! This was not planned for my project, but the data set turned out to be huge, so my teacher told me I have to do it like this. I was reading about MapReduce and thought that if I first did the standard implementation, it would be more or less easy to redo it with MapReduce. But that didn't happen, since the algorithm is quite stupid and nothing special, and MapReduce... I can't wrap my mind around it. So here, briefly, is the pseudocode of my algorithm:
      LIST termList (there is a method that creates this list from a Lucene index)
      FOLDER topFolder
      INPUT topFolder
      IF it is a folder and not empty
          list files (there are 30 sub-folders inside)
          FOR EACH sub-folder
              GET file "CheckedFile.txt"
              analyze(CheckedFile)
          ENDFOR
      END IF

      Method ANALYZE(CheckedFile)
          read CheckedFile
          WHILE CheckedFile has next line
              GET line
              FOR (loop through termList)
                  GET third word from line
                  IF third word = term from list
                      append whole line to string buffer
                  ENDIF
              ENDFOR
          END WHILE
          OUTPUT string buffer to file
    Also, as you can see, each time "analyze" is called a new file has to be created, and I understood that MapReduce has difficulty writing to many outputs? I understand the MapReduce intuition, and my example seems perfectly suited for MapReduce, but when it comes to doing it, obviously I do not know enough and I am STUCK! Please, please help.
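
    A minimal sketch of how the per-line check could look as a Hadoop Mapper, assuming the term list is small enough to load once in setup() (here it is read from a made-up job configuration key, "filter.terms"). Keying the output by the matched term lets a reducer collect all matching lines per term; writing one output file per input file would additionally need something like MultipleOutputs, which is not shown.

      import java.io.IOException;
      import java.util.HashSet;
      import java.util.Set;

      import org.apache.hadoop.io.LongWritable;
      import org.apache.hadoop.io.Text;
      import org.apache.hadoop.mapreduce.Mapper;

      // Sketch: with TextInputFormat, map() is called once per input line.
      public class TermFilterMapper extends Mapper<LongWritable, Text, Text, Text> {

          private final Set<String> terms = new HashSet<>();

          @Override
          protected void setup(Context context) {
              // Assumption: terms arrive as a comma-separated job property.
              for (String t : context.getConfiguration().get("filter.terms", "").split(",")) {
                  if (!t.isEmpty()) terms.add(t);
              }
          }

          @Override
          protected void map(LongWritable offset, Text line, Context context)
                  throws IOException, InterruptedException {
              String[] words = line.toString().split("\\s+");
              if (words.length >= 3 && terms.contains(words[2])) {
                  context.write(new Text(words[2]), line);   // (term, whole matching line)
              }
          }
      }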


  • Best way to keep a .net client app updated with status of another application

    - by rwmnau
    I have a Windows service that's running all the time, and takes some action every 15 minutes. I also have a client WinForms app that displays some information about what the service is doing. I'd like the forms application to keep itself updated with a recent status, but I'm not sure if polling every second is a good move performance-wise. When it starts, my Windows Service opens a WCF named pipe to receive queries (from my client form) Every second, a timer on the winform sends a query to the pipe, and then displays the results. If the pipe isn't there, the form displays that the service isn't running. Is that the best way to do this? If my service opens the pipe when it starts, will it always stay open (until I close it or my service stops)? In addition to polling the service, maybe there's some way for the service to notify any watching applications of certain events, like starting and stopping processing? That way, I could poll less, since I'd presumably know about big events already, and would only be polling for progress. Anything that I'm missing?


  • MySQL + Joomla + remote C# access

    - by Jimmy
    Hello, I work on a Joomla web site, installed on a MySQL database and running on IIS7. It's all working fine. I now need to add functionality that lets (Joomla-)registered users change some configuration data. Though I haven't done this yet, it looks straightforward enough to do with Joomla. The data is private so all external access will be done through HTTPS. I also need an existing c# program, running on another machine, to read that configuration data. Sure enough, this data access needs to be as fast as possible. The data will be small (and filtered by query), but the latency should be kept to a minimum. A short-term, client-side cache (less than a minute, in case a user updates his configuration data) seems like a good idea. I have done practically zero database/asp programming so far, so what's the best way of doing that last step? Should the c# program access the database 'directly' (using what? LINQ?) or setup some sort of Facade (SOAP?) service? If a service should be used, should it be done through Joomla or with ASP on IIS? Thanks


  • How can I change JTable height at runtime

    - by wniroshan
    I have a JFrame with multiple JPanels of similar width aligned one below the other. I use one of these JPanels, the last one of the lot, to display a JTable. This JPanel has a JScrollPane as a child component; this is where I try to add my table dynamically. The initial height of this JScrollPane is set to 40. I designed the above template using NetBeans 6.8. Now I'm trying to add the table to the JPanel. When a button is pressed, the code snippet below is called. The class which includes this code extends javax.swing.JFrame. I am expecting the code below to adjust the table height according to the row count and display the table.
      SearchTable = new JTable(RowData, DisplayNames) {
          @Override
          public boolean isCellEditable(int rowIndex, int vColIndex) {
              return false;
          }
      };
      // if row count is less than 10 then display all the rows without a scroll bar
      if (SearchTable.getRowCount() < 10) {
          pnl_tblpanel.setPreferredSize(new Dimension(625, SearchTable.getRowHeight() * (SearchTable.getRowCount() + 4)));
          scr_tblholder.setPreferredSize(new Dimension(625, SearchTable.getRowHeight() * (SearchTable.getRowCount() + 4)));
      } else { // if row count is more than 10 display first 10 rows and add a scroll bar
          pnl_tblpanel.setPreferredSize(new Dimension(625, SearchTable.getRowHeight() * (10 + 2)));
          scr_tblholder.setAutoscrolls(true);
      }
      //pnl_tblpanel.add(scr_tblholder);
      scr_tblholder.setViewportView(SearchTable);
      //pnl_tblpanel.repaint();
      pnl_tblpanel.validate();
      this.validate();
      //this.repaint();
      pnl_tblpanel.setVisible(true);
      this.pack();
    The table displays, but its height does not change according to the row count; it stays at its default value. I have been trying many combinations of validate and repaint but nothing has worked (more in desperation). Can anyone shed some light on this? Thank you.
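
    For comparison, a small self-contained sketch of the approach that usually works for this: let the table, not the enclosing panel, dictate its size by setting the preferred scrollable viewport size before pack(). The component names and data here are illustrative, not taken from the question.

      import java.awt.Dimension;
      import javax.swing.*;

      // Sketch: size the scroll pane viewport to min(rowCount, 10) rows, then pack().
      public class TableHeightDemo {
          public static void main(String[] args) {
              SwingUtilities.invokeLater(() -> {
                  Object[][] rows = {{"1", "x"}, {"2", "y"}, {"3", "z"}};
                  String[] cols = {"A", "B"};
                  JTable table = new JTable(rows, cols);

                  int visibleRows = Math.min(table.getRowCount(), 10);
                  table.setPreferredScrollableViewportSize(
                          new Dimension(625, table.getRowHeight() * visibleRows));

                  JFrame frame = new JFrame("Table height");
                  frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
                  frame.add(new JScrollPane(table)); // the scroll pane honours the viewport size on pack()
                  frame.pack();
                  frame.setVisible(true);
              });
          }
      }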


  • How often should network traffic/collisions cause SNMP Sets to fail?

    - by A. Levy
    My team has a situation where an SNMP SET will fail once every two weeks or so. Since this set happens automatically, we don't necessarily notice it immediately when it fails, and this can result in an inconsistent configuration and associated wailing and gnashing of teeth. The plan is to fix this by having our software automatically retry the SET when it fails. The problem is, we aren't sure why the failure is happening. My (extremely limited) knowledge of SNMP isn't particularly helpful in diagnosing this problem, so I thought I'd ask StackOverflow for some advice. We think that every so often a spike in network traffic will cause the SET to fail. Since SNMP uses UDP for communication, I would think it would be relatively easy for a command to be drowned out if traffic was high for a short period of time. However, I have no idea how common this is. We have a small network with a single cisco router and there are less than a dozen SNMP controlled devices on that network. In addition to the SNMP traffic, there are some status web pages being loaded from the various devices. In case it makes a difference, I believe we are using the AdventNet SNMP API version 4.0.4 for Java. Does it sound reasonable that there will be some SET commands dropped occasionally, or should we be looking for other causes?
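
    Whatever the root cause turns out to be, the retry itself can be kept simple and generic. Below is a sketch of a retry-with-backoff wrapper in Java; performSet() is a placeholder standing in for the actual AdventNet SET call (its real API is not shown here), and the OID/value are made up.

      import java.util.concurrent.Callable;

      // Sketch: retry a failed SNMP SET a few times, backing off between attempts.
      public class RetryingSetter {

          static <T> T withRetries(Callable<T> operation, int maxAttempts, long initialDelayMs)
                  throws Exception {
              long delay = initialDelayMs;
              Exception last = null;
              for (int attempt = 1; attempt <= maxAttempts; attempt++) {
                  try {
                      return operation.call();
                  } catch (Exception e) {
                      last = e;                   // e.g. a timeout because the UDP packet was lost
                      if (attempt < maxAttempts) {
                          Thread.sleep(delay);
                          delay *= 2;             // 250 ms, 500 ms, 1 s, ...
                      }
                  }
              }
              throw last;
          }

          public static void main(String[] args) throws Exception {
              // performSet() is a placeholder for the real SNMP SET call.
              withRetries(() -> performSet("1.3.6.1.2.1.1.5.0", "new-name"), 4, 250);
          }

          static Boolean performSet(String oid, String value) {
              // Placeholder body so the sketch compiles; the real code would issue the SET here.
              System.out.println("SET " + oid + " = " + value);
              return Boolean.TRUE;
          }
      }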


  • Criteria for triggering garbage collection in .Net

    - by Kennet Belenky
    I've come across some curious behavior with regard to garbage collection in .NET. The following program will throw an OutOfMemoryException very quickly (after less than a second on a 32-bit, 2 GB machine). The Foo finalizer is never called.
      class Foo {
          static Dictionary<Guid, WeakReference> allFoos = new Dictionary<Guid, WeakReference>();
          Guid guid = Guid.NewGuid();
          byte[] buffer = new byte[1000000];
          static Random rand = new Random();

          public Foo() {
              // Uncomment the following line and the program will run forever.
              // rand.NextBytes(buffer);
              allFoos[guid] = new WeakReference(this);
          }

          ~Foo() {
              allFoos.Remove(guid);
          }

          static public void Main(string[] args) {
              for (; ; ) {
                  new Foo();
              }
          }
      }
    If the rand.NextBytes line is uncommented, it runs ad infinitum, and the Foo finalizer is regularly invoked. Why is that? My best guess is that in the former case, either the CLR or the Windows VMM is lazy about allocating physical memory. The buffer never gets written to, so the physical memory is never used. When the address space runs out, the system crashes. In the latter case, the system runs out of physical memory before it runs out of address space, the GC is triggered and the objects are collected. However, here's the part I don't get. Assuming my theory is correct, why doesn't the GC trigger when the address space runs low? If my theory is incorrect, then what's the real explanation?


  • Flex Sprite xy Coordinates

    - by Ian
    Hi, I have a drawing that looks more or less like the attached image. The orange square is the currently selected sprite. The sprites are all drawn from coordinates that I receive from XML:
      var sprObject:Sprite = new Sprite();
      sprObject.graphics.beginFill(itemList.c.toString());
      sprObject.name = strName;
      sprObject.graphics.moveTo(iX, iY);
      sprObject.graphics.lineTo(iX2, iY2);
      sprObject.graphics.lineTo(iX3, iY3);
      sprObject.graphics.lineTo(iX4, iY4);
      sprObject.graphics.lineTo(iX, iX);
      sprObject.graphics.endFill();
      mainUI.addChild(sprObject); // mainUI is a mx:UIComponent
      g_Sprite.push(sprObject);   // array of sprites
    What I want to do is the following: if I'm currently on the orange square and I use my keyboard direction buttons (up/down/left/right), I want to deselect the current sprite and select the next sprite in the appropriate direction. The problem I'm having is that I cannot get the x and y coordinates of the drawn sprites. If I look in the array, the x and y coordinates of the sprites are all 0. If I can retrieve them, I can write an algorithm to determine the next sprite to select. Any help would be appreciated.


  • Faster or more memory-efficient solution in Python for this Codejam problem.

    - by jeroen.vangoey
    I tried my hand at this Google Codejam Africa problem (the contest is already finished, I just did it to improve my programming skills). The Problem: You are hosting a party with G guests and notice that there is an odd number of guests! When planning the party you deliberately invited only couples and gave each couple a unique number C on their invitation. You would like to single out whoever came alone by asking all of the guests for their invitation numbers. The Input: The first line of input gives the number of cases, N. N test cases follow. For each test case there will be one line containing the value G, the number of guests, and one line containing a space-separated list of G integers, where each integer C indicates the invitation code of a guest. Output: For each test case, output one line containing "Case #x: " followed by the number C of the guest who is alone. The Limits: 1 ≤ N ≤ 50; 0 < C ≤ 2147483647; small dataset: 3 ≤ G < 100; large dataset: 3 ≤ G < 1000.
      Sample Input:
      3
      3
      1 2147483647 2147483647
      5
      3 4 7 4 3
      5
      2 10 2 10 5

      Sample Output:
      Case #1: 1
      Case #2: 7
      Case #3: 5
    This is the solution that I came up with:
      with open('A-large-practice.in') as f:
          lines = f.readlines()
      with open('A-large-practice.out', 'w') as output:
          N = int(lines[0])
          for testcase, i in enumerate(range(1, 2*N, 2)):
              G = int(lines[i])
              for guest in range(G):
                  codes = map(int, lines[i+1].split(' '))
                  alone = (c for c in codes if codes.count(c)==1)
              output.write("Case #%d: %d\n" % (testcase+1, alone.next()))
    It runs in 12 seconds on my machine with the large input. Now, my question is, can this solution be improved in Python to run in a shorter time or use less memory? The analysis of the problem gives some pointers on how to do this in Java and C++ but I can't translate those solutions back to Python.
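
    One linear-time idea worth knowing here: XOR all the invitation codes together; every code that appears twice cancels itself out and only the lone guest's code remains, so no counting or sorting is needed. Sketched below in Java (the language is just this sketch's choice); in Python the same thing is essentially reduce(operator.xor, codes).

      import java.util.Scanner;

      // Sketch of the O(G) pairing trick: XOR of all codes leaves the unpaired one.
      public class LoneGuest {
          public static void main(String[] args) {
              Scanner in = new Scanner(System.in);
              int n = in.nextInt();
              for (int t = 1; t <= n; t++) {
                  int g = in.nextInt();
                  long lone = 0;
                  for (int i = 0; i < g; i++) {
                      lone ^= in.nextLong();   // codes fit in 31 bits; long is just being safe
                  }
                  System.out.println("Case #" + t + ": " + lone);
              }
          }
      }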


  • How to remove words based on a word count

    - by Chris
    Here is what I'm trying to accomplish. I have an object coming back from the database with a string description. This description can be up to 1000 characters long, but we only want to display a short view of it. So I coded up the following, but I'm having trouble actually trimming the text down to a number of words once the regular expression has found the total word count. Does anyone have a good way of displaying only the words up to the limit found by Regex.Matches? Thanks!
      if (!string.IsNullOrEmpty(myObject.Description))
      {
          string original = myObject.Description;
          MatchCollection wordColl = Regex.Matches(original, @"[\S]+");
          if (wordColl.Count < 70) // 70 words?
          {
              uxDescriptionDisplay.Text = string.Format("<p>{0}</p>", myObject.Description);
          }
          else
          {
              string shortendText = original.Remove(200); // 200 characters?
              uxDescriptionDisplay.Text = string.Format("<p>{0}</p>", shortendText);
          }
      }


  • Read large amount of data from file in Java

    - by Crozin
    Hello, I've got a text file that contains 1 000 002 numbers in the following format: 123 456 1 2 3 4 5 6 .... 999999 100000 Now I need to read that data and assign the very first two numbers to int variables and all the rest (1 000 000 numbers) to an int[] array. It's not a hard task, but it's horribly slow. My first attempt was java.util.Scanner:
      Scanner stdin = new Scanner(new File("./path"));
      int n = stdin.nextInt();
      int t = stdin.nextInt();
      int array[] = new int[n];
      for (int i = 0; i < n; i++) {
          array[i] = stdin.nextInt();
      }
    It works as expected but it takes about 7500 ms to execute, and I need to fetch that data in at most several hundred milliseconds. Then I tried java.io.BufferedReader: using BufferedReader.readLine() and String.split() I got the same results in about 1700 ms, but that's still too slow. How can I read that amount of data in less than 1 second? The final result should be equal to:
      int n = 123;
      int t = 456;
      int array[] = { 1, 2, 3, 4, ..., 999999, 100000 };
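
    A sketch of the usual next step after BufferedReader + split: tokenize the buffered stream directly with java.io.StreamTokenizer, which parses numbers as it reads and avoids creating a String per token; on input of this size that typically lands well under a second, though the exact timing is machine-dependent. "./path" is just the question's placeholder file name.

      import java.io.BufferedReader;
      import java.io.FileReader;
      import java.io.IOException;
      import java.io.StreamTokenizer;

      // Sketch: read "n t" followed by n integers without per-token String allocation.
      public class FastRead {
          public static void main(String[] args) throws IOException {
              StreamTokenizer in = new StreamTokenizer(
                      new BufferedReader(new FileReader("./path"), 1 << 16));

              in.nextToken();
              int n = (int) in.nval;   // nval is a double; the values here fit in an int
              in.nextToken();
              int t = (int) in.nval;

              int[] array = new int[n];
              for (int i = 0; i < n; i++) {
                  in.nextToken();
                  array[i] = (int) in.nval;
              }
              System.out.println(n + " " + t + " ... " + array[n - 1]);
          }
      }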


  • Does the Java Memory Model (JSR-133) imply that entering a monitor flushes the CPU data cache(s)?

    - by Durandal
    There is something that bugs me about the Java memory model (if I even understand everything correctly). If there are two threads A and B, there are no guarantees that B will ever see a value written by A, unless both A and B synchronize on the same monitor. For any system architecture that guarantees cache coherency between threads, there is no problem. But if the architecture does not support cache coherency in hardware, this essentially means that whenever a thread enters a monitor, all memory changes made before must be committed to main memory, and the cache must be invalidated. And it needs to be the entire data cache, not just a few lines, since the monitor has no information about which variables in memory it guards. But that would surely impact the performance of any application that needs to synchronize frequently (especially things like job queues with short-running jobs). So can Java work reasonably well on architectures without hardware cache coherency? If not, why doesn't the memory model make stronger guarantees about visibility? Wouldn't it be more efficient if the language required information about what is guarded by a monitor? As I see it, the memory model gives us the worst of both worlds: the absolute need to synchronize, even if cache coherency is guaranteed in hardware, and on the other hand bad performance on incoherent architectures (full cache flushes). So shouldn't it be either more strict (require information about what is guarded by a monitor) or more lose, restricting potential platforms to cache-coherent architectures? As it is now, it doesn't make too much sense to me. Can somebody clear up why this specific memory model was chosen? EDIT: My use of strict and lose was a bad choice in retrospect. I used "strict" for the case where fewer guarantees are made and "lose" for the opposite. To avoid confusion, it's probably better to speak in terms of stronger or weaker guarantees.
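
    For concreteness, this is the guarantee in question reduced to a minimal example: the write in set() is only guaranteed visible to get() because monitor release and acquire on the same lock create a happens-before edge; how a JVM implements that edge (relying on hardware coherence, emitting fences, or in the worst case flushing caches) is deliberately left open by the specification.

      // Minimal illustration of visibility via a shared monitor.
      class SharedValue {
          private int value;                 // deliberately not volatile

          synchronized void set(int v) {     // monitor release at the end of set() ...
              value = v;
          }

          synchronized int get() {           // ... happens-before a later acquire in get()
              return value;
          }
      }

      public class VisibilityDemo {
          public static void main(String[] args) throws InterruptedException {
              SharedValue shared = new SharedValue();
              Thread writer = new Thread(() -> shared.set(42));
              writer.start();
              writer.join();                       // join() also establishes happens-before
              System.out.println(shared.get());    // guaranteed to print 42
          }
      }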


  • Visual C++ function suddenly 170 ms slower (4x longer)

    - by Mikael
    For the past few months I've been working on a Visual C++ project to take images from cameras and process them. Up until today this has taken about 65 ms to update the data but now it has suddenly increased significantly. What happens is: I launch my program and for the first 30 or so iterations it performs as expected, then suddenly the loop time increases from 65 ms to 250 ms. The odd thing is, after timing each function I found out that the part of the code which is causing the slowdown is fairly basic and has not been modified in over a month. The data which goes into it is unchanged and identical every iteration but the execution time which is initially less than 1 ms suddenly increases to 170 ms while the rest of the code is still performing as expected (time-wise). Basically, I am calling the same function over and over, for the first 30 calls it performs as it should, after that it slows down for no apparent reason. It might also be worth noting that it is a sudden change in execution time, not a gradual increase. What could be causing this? The code is leaking some memory (~50 kb/s) but not nearly enough to warrant a sudden 4x slowdown. If anyone has any ideas I'd love to hear them!


  • QProgressBar problem with uploading

    - by rolanddd
    Hey all! I'll show my code first, then explain my problem:
      ... // somewhere in the constructor
      progressBar = new QProgressBar(this);
      progressBar->setMinimum(0);
      progressBar->setMaximum(100);
      ...
      connect(&http, SIGNAL(dataSendProgress(int, int)),
              this, SLOT(updateProgressBar(int, int)));
      ...
      void MainWindow::updateProgressBar(int bytesSent, int total)
      {
          progressBar->setMaximum(total);
          progressBar->setValue(bytesSent);
      }
    This is how I try to have the progress bar updated while a file is being uploaded. The problem is that it doesn't do the job: when the upload starts I set the value of the progress bar to 0, but then (despite this slot) it doesn't actually show the progress; it jumps to 100% immediately (even before the upload has finished). I already checked the HTTP Client example and copied its progress bar part. That example is for downloading and uses the dataReadProgress signal (needed for downloading), but is otherwise more or less the same as for uploading, AND it works perfectly. Does anybody know how to solve this for uploading?


  • Build OpenGL model in parallel?

    - by Brendan Long
    I have a program which draws some terrain and simulates water flowing over it (in a cheap and easy way). Updating the water was easy to parallelize using OpenMP, so I can do ~50 updates per second. The problem is that even with small amounts of water, my draws per second are very, very low (they start at 5 and drop to around 2 once there's a significant amount of water). It's not a problem with the video card, because the terrain is more complicated and gets drawn so quickly that boost::timer tells me I get infinite draws per second if I turn the water off. It may be related to memory bandwidth, though (since I assume the model stays on the card and doesn't have to be transferred every time). What I'm concerned about is that on every draw I'm calling glVertex3f() about a million times (max size is 450*600, 4 vertices each), and it's done entirely sequentially because GLUT won't let me call anything in parallel. So: is there some way of building the list in parallel and then passing it to OpenGL all at once? Or some other way of making it draw faster? Am I using the wrong method (besides the obvious "use less vertices")?


  • Retain numerical precision in an R data frame?

    - by David
    When I create a dataframe from numeric vectors, R seems to truncate the value below the precision that I require in my analysis: data.frame(x=0.99999996) returns 1 (see update 1) I am stuck when fitting spline(x,y) and two of the x values are set to 1 due to rounding while y changes. I could hack around this but I would prefer to use a standard solution if available. example Here is an example data set d <- data.frame(x = c(0.668732936336141, 0.95351462456867, 0.994620622127435, 0.999602102672081, 0.999987126195509, 0.999999955814133, 0.999999999999966), y = c(38.3026509783688, 11.5895099585560, 10.0443344234229, 9.86152339768516, 9.84461434575695, 9.81648333804257, 9.83306725758297)) The following solution works, but I would prefer something that is less subjective: plot(d$x, d$y, ylim=c(0,50)) lines(spline(d$x, d$y),col='grey') #bad fit lines(spline(d[-c(4:6),]$x, d[-c(4:6),]$y),col='red') #reasonable fit Update 1 Since posting this question, I realize that this will return 1 even though the data frame still contains the original value, e.g. > dput(data.frame(x=0.99999999996)) returns structure(list(x = 0.99999999996), .Names = "x", row.names = c(NA, -1L), class = "data.frame") Update 2 After using dput to post this example data set, and some pointers from Dirk, I can see that the problem is not in the truncation of the x values but the limits of the numerical errors in the model that I have used to calculate y. This justifies dropping a few of the equivalent data points (as in the example red line).


  • Freeing ImageData when deleting a Canvas

    - by user578770
    I'm writing a XUL application that uses HTML Canvas to display bitmap images. I'm generating ImageDatas and importing them into a canvas using the putImageData function:
      for (var pageIndex = 0; pageIndex < 100; pageIndex++) {
          this.img = imageDatas[pageIndex];
          /* Create the Canvas element */
          var imgCanvasTmp = document.createElementNS("http://www.w3.org/1999/xhtml", 'html:canvas');
          imgCanvasTmp.setAttribute('width', this.img.width);
          imgCanvasTmp.setAttribute('height', this.img.height);
          /* Import the image into the Canvas */
          imgCanvasTmp.getContext('2d').putImageData(this.img, 0, 0);
          /* Use the Canvas in another part of the program (commented out for testing) */
          // this.displayCanvas(imgCanvasTmp, pageIndex);
      }
    The images are imported fine, but there seems to be a memory leak due to the putImageData function. When exiting the "for" loop, I would expect the memory allocated for the canvases to be freed, but by executing the code without the putImageData call I noticed that my program ends up using 100 MB less (my images are quite big). I came to the conclusion that the putImageData function prevents the garbage collector from freeing the allocated memory. Do you have any idea how I could force the garbage collector to free that memory? Is there any way to empty the canvas? I already tried deleting the canvas using the delete operator and calling the clearRect function, but they did nothing. I also tried reusing the same canvas to display the image at each iteration, but the amount of memory used did not change, as if the images were imported without deleting the existing ones...


  • Need help debugging a huge chunk of JSON data...

    - by meder
    I have a huge chunk, so large that I can't manually edit the file and need to read it in and do regex operations to see what's wrong. Basically - my server is PHP 5.1.6 and I can't update it. This features an older json_decode which is less featured than the 5.2/5.3 versions. json_decode returns NULL and json_last_error is being invoked but the function doesn't exist except in PHP 5.3 so I'm manually trying to see what's wrong. $regex = '#[^0-9"$a-zA-Z{:}().]#'; $json = preg_replace( $regex, '', $json ); $tree = json_decode ( $json, true ); var_dump($tree); // NULL A snippet of the JSON.. somewhere in the middle {"109":0,"103":1,"102":59,"101":70,"100":4299,"94":0,"50":51,"46":0,"45":0,"44":0,"43":0,"42":0,"23":0,"22":0,"18":0,"17":1,"16":1,"13":160,"8":4298}},"2":{"d":{"109":0,"103":92,"102":54,"101":53,"100":4301,"94":0,"50":4278,"49":328,"46":1,"45":0,"44":1,"43":0,"42":0,"26":0,"23":0,"22":0,"18":0,"17":1,"16":1,"8":4300},"m":{"94":1,"100":1,"26":1,"50":1,"8":1,"49":1,"18":1,"43":1,"42":1,"109":1},"c":{"\/":{"d":{"109":0,"100":4301,"94":0,"50":4278,"49":328,"43":0,"42":0,"26":0,"18":0,"8":4300}},"G":{"d":{"109":1,"100":4303,"94":1,"68":17,"50":64,"49":53,"43":1,"42":1,"34":0,"18":1,"13":2216,"11":0,"8":4302}}}},"3": The }}}} is suspicious but this probably just closes 4 nested object literals. Would appreciate any insight.


  • Is the 'var' keyword bad? Or am I just old school?

    - by WaggingSiberian
    Recently I overheard a junior developer ask, "Why do you use 'var' so much?" The mid-level developer responded, "I use VAR all the time. I love it! I don't have to figure out the type." I didn't have the time or energy to get into a religious war, and hey, I'm still the new guy here :-) I understand var has its place; LINQ comes to mind. But I have also always been told the use of var represents lazy programming and I should just use the correct type to begin with. If it's an int, define it as an int, not a var. When reviewing code, seeing the type makes it easier to follow. My opinion is that it's just lazy, but there are exceptions. Var also reminds me of the VB/VBA Variant type. That also had its place; I recall (from many years ago) it being a less-than-desirable type that was rather resource-hungry. Am I just stuck in my ways? Should we start using var all the time, as my co-worker does?


  • Chat app vs REST app - use a thread in an Activity or a thread in a Service?

    - by synic
    In Virgil Dobjanschi's talk, "Developing Android REST client applications" (link here), he said a few things that took me by surprise, including: don't run HTTP queries in threads spawned by your activities; instead, communicate with a service to do them, and store the information in a ContentProvider. Use a ContentObserver to be notified of changes. Always perform long-running tasks in a Service, never in your Activity, and stop your Service when you're done with it. I understand that he was talking about a REST API, but I'm trying to make it fit with some other ideas I've had for apps. One of the APIs I've been using relies on long polling for its chat interface: there is a loop of HTTP queries, most of which will time out. This means that, as long as the app hasn't been killed by the OS and the user hasn't specifically turned off the chat feature, I'll never be done with the Service, and it will stay open forever. This seems less than optimal. Long question short: for a chat application that uses long polling to simulate push and immediate response, is it still best practice to use a Service to perform the HTTP queries and store the information in a ContentProvider?
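
    For reference, a stripped-down sketch of the pattern being weighed here: a started Service owns the long-poll loop on a background thread and writes incoming messages into a ContentProvider, so an Activity can watch for changes rather than polling the Service. The provider URI, the "body" column and pollOnce() are placeholders invented for this sketch, and a real provider would need to call notifyChange() for observers to hear about inserts.

      import android.app.Service;
      import android.content.ContentValues;
      import android.content.Intent;
      import android.net.Uri;
      import android.os.IBinder;

      // Sketch of a long-polling chat Service; Activities observe CHAT_URI
      // (ContentObserver or CursorLoader) instead of querying the Service.
      public class ChatPollService extends Service {

          // Placeholder URI for a hypothetical chat ContentProvider.
          private static final Uri CHAT_URI = Uri.parse("content://com.example.chat/messages");

          private volatile boolean running;

          @Override
          public int onStartCommand(Intent intent, int flags, int startId) {
              if (!running) {
                  running = true;
                  new Thread(() -> {
                      while (running) {
                          String message = pollOnce();     // blocks until data or timeout
                          if (message != null) {
                              ContentValues values = new ContentValues();
                              values.put("body", message);
                              // The provider is assumed to call notifyChange() in insert().
                              getContentResolver().insert(CHAT_URI, values);
                          }
                      }
                  }, "chat-long-poll").start();
              }
              return START_STICKY;
          }

          private String pollOnce() {
              // Placeholder: issue the long-poll HTTP request here; return null on timeout.
              return null;
          }

          @Override
          public void onDestroy() {
              running = false;
          }

          @Override
          public IBinder onBind(Intent intent) {
              return null;    // started, not bound
          }
      }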


  • How do I slow down a flash game?

    - by dsaccount1
    Basically, the objective is to click on certain targets; doing so destroys the target and earns you points. I've written a macro to help me, up to the point where it's impossible to see the target as more than a mere flicker (maybe even less than that; I can't see it with my eyes). But it should be possible, because I believe others have done it (maybe on slower computers?). Anyway, the question is: how would it be possible to slow down the Flash game? I've thought of a couple of ways that could work, but I'm not sure how to implement them. 1. Slow down the CPU speed (something like that? how?). 2. As the game progresses, the time the targets appear and stay up is reduced; maybe there's a variable controlling all of this. Is it possible to find the address of that variable and freeze it, or something? Any ideas, suggestions and especially advice would really be appreciated, thanks!


  • iPhone AES encryption issue

    - by Dilshan
    Hi, I use following code to encrypt using AES. - (NSData*)AES256EncryptWithKey:(NSString*)key theMsg:(NSData *)myMessage { // 'key' should be 32 bytes for AES256, will be null-padded otherwise char keyPtr[kCCKeySizeAES256 + 1]; // room for terminator (unused) bzero(keyPtr, sizeof(keyPtr)); // fill with zeroes (for padding) // fetch key data [key getCString:keyPtr maxLength:sizeof(keyPtr) encoding:NSUTF8StringEncoding]; NSUInteger dataLength = [myMessage length]; //See the doc: For block ciphers, the output size will always be less than or //equal to the input size plus the size of one block. //That's why we need to add the size of one block here size_t bufferSize = dataLength + kCCBlockSizeAES128; void* buffer = malloc(bufferSize); size_t numBytesEncrypted = 0; CCCryptorStatus cryptStatus = CCCrypt(kCCEncrypt, kCCAlgorithmAES128, kCCOptionPKCS7Padding, keyPtr, kCCKeySizeAES256, NULL /* initialization vector (optional) */, [myMessage bytes], dataLength, /* input */ buffer, bufferSize, /* output */ &numBytesEncrypted); if (cryptStatus == kCCSuccess) { //the returned NSData takes ownership of the buffer and will free it on deallocation return [NSData dataWithBytesNoCopy:buffer length:numBytesEncrypted]; } free(buffer); //free the buffer; return nil; } However the following code chunk returns null if I tried to print the encryptmessage variable. Same thing applies to decryption as well. What am I doing wrong here? NSData *encrData = [self AES256EncryptWithKey:theKey theMsg:myMessage]; NSString *encryptmessage = [[NSString alloc] initWithData:encrData encoding:NSUTF8StringEncoding]; Thank you

