Search Results

Search found 9988 results on 400 pages for 'tv less in jersey'.

Page 333/400

  • casting char[][] to char** causes segfault?

    - by Earlz
    Ok my C is a bit rusty but I figured I'd make my next(small) project in C so I could polish back up on it and less than 20 lines in I already have a seg fault. This is my complete code: #define ROWS 4 #define COLS 4 char main_map[ROWS][COLS+1]={ "a.bb", "a.c.", "adc.", ".dc."}; void print_map(char** map){ int i; for(i=0;i<ROWS;i++){ puts(map[i]); //segfault here } } int main(){ print_map(main_map); //if I comment out this line it will work. puts(main_map[3]); return 0; } I am completely confused as to how this is causing a segfault. What is happening when casting from [][] to **!? That is the only warning I get. rushhour.c:23:3: warning: passing argument 1 of ‘print_map’ from incompatible pointer type rushhour.c:13:7: note: expected ‘char **’ but argument is of type ‘char (*)[5]’ Are [][] and ** really not compatible pointer types? They seem like they are just syntax to me.
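
    A minimal sketch of one possible fix, keeping the ROWS/COLS definitions from the question: a 2-D array decays to a pointer to its first row (type char (*)[COLS+1]), not to char **, so declaring the parameter with the row type removes the incompatible-pointer warning and the crash.

        #include <stdio.h>

        #define ROWS 4
        #define COLS 4

        char main_map[ROWS][COLS + 1] = { "a.bb", "a.c.", "adc.", ".dc." };

        /* Take a pointer to an array of COLS+1 chars, which is what
           main_map actually decays to when passed as an argument. */
        void print_map(char map[][COLS + 1]) {
            int i;
            for (i = 0; i < ROWS; i++) {
                puts(map[i]);
            }
        }

        int main(void) {
            print_map(main_map);
            return 0;
        }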

    Read the article

  • fastest way to check to see if a certain index in a linq statement is null

    - by tehdoommarine
    Basic Details I have a LINQ statement that grabs some records from a database and puts them in a System.Linq.Enumerable: var someRecords = someRepoAttachedToDatabase.Where(p=>true); Suppose this grabs tons (25k+) of records, and I need to perform operations on all of them. To speed things up, I have decided to use paging and perform the operations needed in blocks of 100 instead of on all of the records at the same time. The Question The line in question is the line where I count the number of records in the subset to see if we are on the last page; if the number of records in the subset is less than the page size, then that means there are no more records left. What I would like to know is: what is the fastest way to do this? Code in Question int pageSize = 100; bool moreData = true; int currentPage = 1; while (moreData) { var subsetOfRecords = someRecords.Skip((currentPage - 1) * pageSize).Take(pageSize); //this is also a System.Linq.Enumerable if (subsetOfRecords.Count() < pageSize){ moreData = false;} //line in question //do stuff to records in subset currentPage++; } Things I Have Considered subsetOfRecords.Count() < pageSize subsetOfRecords.ElementAt(pageSize - 1) == null (causes an out-of-bounds exception - I can catch the exception and set moreData to false there) Converting subsetOfRecords to an array (converting someRecords to an array will not work due to the way subsetOfRecords is declared - but I am open to changing it) I'm sure there are plenty of other ideas that I have missed.
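
    One hedged sketch (assuming someRecords is an IEnumerable that can be re-enumerated): materialize each page once with ToList(), so the count only looks at the 100 buffered items instead of re-executing the query, and stop when a short page comes back.

        int pageSize = 100;
        int currentPage = 1;
        bool moreData = true;

        while (moreData)
        {
            // Buffer the page once; the count and the processing loop then reuse
            // the same in-memory list instead of re-enumerating the query.
            var page = someRecords.Skip((currentPage - 1) * pageSize)
                                  .Take(pageSize)
                                  .ToList();

            foreach (var record in page)
            {
                // do stuff to the record
            }

            // A short (or empty) page means we have reached the end.
            moreData = page.Count == pageSize;
            currentPage++;
        }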

    Read the article

  • Get info from multiple files, match it and then display to end user, what is fastest?

    - by Patrick
    Hi, I need to build a website where we display data that is refreshed every 5 minutes in a text file with a | separator. I currently use Java to do this. What I do now: I grab the text file for every request through the website, process it and then display the data to the end user. This works fine, since Java can go through 5000 lines of data fast, and when I filter it, it is still extremely fast. However, now management wants the following: they added 3 text files with the | separator, and now want me to also read those files and match the information on certain fields, and if there is a match, also display that information to the end user. I think that soon enough, although Java is fast, I will run into trouble when 10 people want that information and I have to run through 4 files in total matching the information. What can I do to make this process super fast? My creative solutions so far: - Leave it this way, since Java is fast and end users can wait (probably less than 1 second). - Have a background process that dumps the new data into a MySQL database every 5 minutes, since databases are extremely good at getting the same data from multiple tables. Thank you!
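
    A hedged sketch of one option that keeps the current file-based approach: parse each file once, cache the parsed rows in memory, and re-read the file only after the 5-minute refresh interval has passed, so concurrent requests reuse the same parsed data. The file path and record shape below are illustrative assumptions.

        import java.io.BufferedReader;
        import java.io.FileReader;
        import java.io.IOException;
        import java.util.ArrayList;
        import java.util.List;

        public class PipeFileCache {
            private static final long REFRESH_MS = 5 * 60 * 1000; // the data changes every 5 minutes

            private final String path;             // e.g. "data/feed1.txt" (illustrative)
            private List<String[]> rows;
            private long loadedAt;

            public PipeFileCache(String path) {
                this.path = path;
            }

            // Returns the parsed rows, re-reading the file only when the cache is stale.
            public synchronized List<String[]> getRows() throws IOException {
                long now = System.currentTimeMillis();
                if (rows == null || now - loadedAt > REFRESH_MS) {
                    List<String[]> parsed = new ArrayList<String[]>();
                    BufferedReader reader = new BufferedReader(new FileReader(path));
                    try {
                        String line;
                        while ((line = reader.readLine()) != null) {
                            parsed.add(line.split("\\|"));   // '|' must be escaped in a regex
                        }
                    } finally {
                        reader.close();
                    }
                    rows = parsed;
                    loadedAt = now;
                }
                return rows;
            }
        }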

    Read the article

  • Communicating with a running python daemon

    - by hanksims
    I wrote a small Python application that runs as a daemon. It utilizes threading and queues. I'm looking for general approaches to altering this application so that I can communicate with it while it's running. Mostly I'd like to be able to monitor its health. In a nutshell, I'd like to be able to do something like this: python application.py start # launches the daemon Later, I'd like to be able to come along and do something like: python application.py check_queue_size # return info from the daemonized process To be clear, I don't have any problem implementing the Django-inspired syntax. What I have no idea how to do is send signals to the daemonized process (start), or how to write the daemon to handle and respond to such signals. Like I said above, I'm looking for general approaches. The only one I can see right now is telling the daemon to constantly log everything that might be needed to a file, but I hope there's a less messy way to go about it. UPDATE: Wow, a lot of great answers. Thanks so much. I think I'll look at both Pyro and the web.py/Werkzeug approaches, since Twisted is a little more than I want to bite off at this point. The next conceptual challenge, I suppose, is how to go about talking to my worker threads without hanging them up. Thanks again.
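
    A minimal sketch of one general approach (a small command socket, not any particular framework): the daemon runs a listener thread on a local TCP port and answers one-line text commands such as check_queue_size, so a second invocation of the script can connect and query it. The port number and command names are illustrative assumptions.

        import socket
        import threading

        def start_command_listener(work_queue, host="127.0.0.1", port=50007):
            """Run inside the daemon: answer one-line text commands."""
            server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
            server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
            server.bind((host, port))
            server.listen(1)

            def serve():
                while True:
                    conn, _ = server.accept()
                    command = conn.recv(1024).strip()
                    if command == b"check_queue_size":
                        conn.sendall(str(work_queue.qsize()).encode())
                    else:
                        conn.sendall(b"unknown command")
                    conn.close()

            thread = threading.Thread(target=serve)
            thread.daemon = True
            thread.start()

        def ask_daemon(command, host="127.0.0.1", port=50007):
            """Run from 'python application.py check_queue_size': query the running daemon."""
            client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
            client.connect((host, port))
            client.sendall(command.encode())
            reply = client.recv(1024).decode()
            client.close()
            return reply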

    Read the article

  • Sending Email to a specific address without requiring user to specify their mail server details

    - by sgmoore
    Can anyone recommend a simple and reliable method of sending email notifications and possibly log file attachments from a C# program without requiring the installer or the user to configure the program by specifying server details and email addresses, etc.? (Mainly because they won't know the details, but also because the details could change.) The program will normally be run as a service on a Windows server, but it can be run on a client. I tried connecting to our own mail server and sending an email to myself, but some ISPs are blocking port 25 on all servers but their own, so that method isn't working reliably. I tried sending email through Gmail, but that was less successful, as the port it used was blocked by firewalls. Ditto web services connecting on weird ports. Trying to use the local SMTP service did not work either. It would be nice, but not essential, if it was not dependent on my own Internet connection/servers. (I don't mind them being delayed, but I'd prefer them not to get lost.) Are there any web services on http/https that allow you to do this sort of thing? TIA
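
    A hedged sketch of the HTTPS idea at the end of the question: instead of speaking SMTP from the client machine, post the notification (and optionally a log file) over HTTPS to a small endpoint you host, and let that server-side endpoint do the actual mailing with credentials the installer never sees. The URL and field names below are illustrative assumptions.

        using System;
        using System.Collections.Specialized;
        using System.Net;

        class NotificationSender
        {
            // Hypothetical endpoint hosted by you; it relays the message by email server-side.
            const string EndpointUrl = "https://example.com/notify";

            public static void SendNotification(string subject, string body, string logFilePath)
            {
                using (var client = new WebClient())
                {
                    var fields = new NameValueCollection
                    {
                        { "subject", subject },
                        { "body", body }
                    };
                    client.UploadValues(EndpointUrl, "POST", fields);

                    if (!string.IsNullOrEmpty(logFilePath))
                    {
                        // Send the log file as a separate upload to keep the sketch simple.
                        client.UploadFile(EndpointUrl + "?attachment=1", "POST", logFilePath);
                    }
                }
            }
        }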

    Read the article

  • How to: StructureMap and configuration based on runtime parameters?

    - by user981375
    In a nutshell - I want to be able to instantiate an object based on runtime parameters. In this particular case there are only two parameters, but the problem is that I'm facing different permutations of these parameters and it gets messy. Here is the situation: I want to get an instance of an object specific to, say, a given country and then, say, a specific state/province. So, considering the US, there are 50 possible combinations. In reality it's less than that, but that's the max. Think of it this way: I want to find out what the penalty for smoking pot is in a given country/state; I pass this information in and I get back an instantiated object telling me what it is. To the code (for reference only): interface IState { string Penalty { get; } } interface ICountry { IState State { get; set; } string Name { get; } } class BasePenalty : IState { virtual public string Penalty { get { return "Slap on a wrist"; } } } class USA : ICountry { public USA(IState state) { State = state; } public IState State { get; set; } public string Name { get { return "USA"; } } } class Florida: BasePenalty { public override string Penalty { get { return "Public beheading"; } } } // and so on ... I defined other states // which have penalties other than the "Slap on a wrist" How do I configure my container so that when given a country and state combination it will return the penalty? I tried combinations of profile and contextual binding, but that configuration was directly proportional to the number of classes I've created. I have already gone through the trouble of defining the different combinations; I'd like to avoid having to do the same during container configuration. I want to inject the State into the Country. Also, I'd like to return the UsaBasePenalty value in case the state is not specified. Is that possible? Perhaps there is something wrong with the design.
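
    This is not a StructureMap feature, just a hedged sketch of an alternative design: a plain factory keyed on (country, state), which the container only needs to resolve once, with BasePenalty (standing in for the UsaBasePenalty mentioned above) as the fallback when a state is not registered.

        using System;
        using System.Collections.Generic;

        class PenaltyFactory
        {
            // Map (country, state) to a concrete penalty type; unregistered states fall back.
            private readonly Dictionary<Tuple<string, string>, Func<IState>> map =
                new Dictionary<Tuple<string, string>, Func<IState>>
                {
                    { Tuple.Create("USA", "Florida"), () => new Florida() }
                    // ... other state-specific penalties
                };

            public ICountry Create(string country, string state)
            {
                Func<IState> stateFactory;
                IState penalty = map.TryGetValue(Tuple.Create(country, state), out stateFactory)
                    ? stateFactory()
                    : new BasePenalty();   // default when the state is not specified/registered

                // Only "USA" exists in the question's sample code.
                if (country == "USA")
                {
                    return new USA(penalty);
                }
                throw new ArgumentException("Unknown country: " + country);
            }
        }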

    Read the article

  • Dealloc'd Predicate crashing iPhone App!

    - by DVG
    To preface, this is a follow-up to an inquiry made a few days ago: http://stackoverflow.com/questions/2981803/iphone-app-crashes-when-merging-managed-object-contexts Short Version: EXC_BAD_ACCESS is crashing my app, and zombie-mode revealed the culprit to be my predicate embedded within the fetch request embedded in my Fetched Results Controller. How does an object within an object get released without an explicit command to do so? Long Version: Application Structure Platforms View Controller - Games View Controller (Predicated upon platform selection) - Add Game View Controller When a row gets clicked on the Platforms view, it sets an instance variable in Games View for that platform, then the Games Fetched Results Controller builds a fetch request in the normal way: - (NSFetchedResultsController *)fetchedResultsController{ if (fetchedResultsController != nil) { return fetchedResultsController; } //build the fetch request for Games NSFetchRequest *request = [[NSFetchRequest alloc] init]; NSEntityDescription *entity = [NSEntityDescription entityForName:@"Game" inManagedObjectContext:context]; [request setEntity:entity]; //predicate NSPredicate *predicate = [NSPredicate predicateWithFormat:@"platform == %@", selectedPlatform]; [request setPredicate:predicate]; //sort based on name NSSortDescriptor *sortDescriptor = [[NSSortDescriptor alloc] initWithKey:@"name" ascending:YES]; NSArray *sortDescriptors = [[NSArray alloc] initWithObjects:sortDescriptor, nil]; [request setSortDescriptors:sortDescriptors]; //fetch and build fetched results controller NSFetchedResultsController *aFetchedResultsController = [[NSFetchedResultsController alloc] initWithFetchRequest:request managedObjectContext:context sectionNameKeyPath:nil cacheName:@"Root"]; aFetchedResultsController.delegate = self; self.fetchedResultsController = aFetchedResultsController; [sortDescriptor release]; [sortDescriptors release]; [predicate release]; [request release]; [aFetchedResultsController release]; return fetchedResultsController; } At the end of this method, the fetchedResultsController's _fetch_request->_predicate member is set to an NSComparisonPredicate object. All is well in the world. By the time - (NSInteger)tableView:(UITableView *)tableView numberOfRowsInSection:(NSInteger)section gets called, the _predicate is now a Zombie, which will eventually crash the application when the table attempts to update itself. I'm more or less flummoxed. I'm not releasing the fetched results controller or any of its parts, and the only part getting dealloc'd is the predicate. Any ideas?

    Read the article

  • Validation for textfield is not working

    - by MaheshBabu
    Hi folks, I am using 4 textfields in my application. I need to validate those textfields so that they allow only 0, 1, ..., 9 and the . character. For that I use code like this: - (BOOL)textField:(UITextField *)textField shouldChangeCharactersInRange:(NSRange)range replacementString:(NSString *)string { static NSCharacterSet *charSet = nil; if(!charSet) { charSet = [[[NSCharacterSet characterSetWithCharactersInString:@"0123456789."] invertedSet] retain]; } NSRange location = [string rangeOfCharacterFromSet:charSet]; return (location.location == NSNotFound); } The first textfield should allow numbers of up to 10 digits; the second, third and fourth textfields should allow values less than 100. For that my code is: if (textField.tag == 1) { NSString *newString = [textField.text stringByReplacingCharactersInRange:range withString:string]; double homevaluedouble = [newString doubleValue]; return !(homevaluedouble > 10000000000); } if (textField.tag == 2) { NSString *newString = [textField.text stringByReplacingCharactersInRange:range withString:string]; double validate = [newString doubleValue]; return !(validate > 100); } if (textField.tag == 3) { NSString *newString = [textField.text stringByReplacingCharactersInRange:range withString:string]; double validate = [newString doubleValue]; return !(validate > 100); } I write this code in - (BOOL)textField:(UITextField *)textField shouldChangeCharactersInRange:(NSRange)range replacementString:(NSString *)string { as well. The textfields allow 0, 1, ..., 9 and ., but the digit limit is not working; if I remove the 0-9 check then the limit on the textfield works. I am getting confused by it. Can anyone please help me resolve it? Thank you in advance.

    Read the article

  • Aggregating and displaying content from hundreds of RSS feeds

    - by Andrew LeClair
    I'd like to build a website that aggregates and displays content from hundreds of RSS feeds. The feeds will be from different sites: Twitter, Flickr, Tumblr, etc., so the content will be very heterogeneous. In a perfect world (and this is more of a side issue) I would like to allow other people to help manage the list of feeds and assign tags to the content from each individual feed so that you can filter the items that are displayed. What I've tried so far: Google Feeds API – I thought this would be the answer, but unless I'm missing something, the FeedController will only output the collected feed content as separate lists. Is there any way to ask the Google Feeds API to aggregate and sort the content from many RSS feeds before displaying? Yahoo! Pipes – This also seemed like a good solution at first. I set up a Pipe that accesses a list of RSS feeds stored in a Google Doc spreadsheet and then aggregates the content. However, the output leaves a lot to be desired; Tumblr video posts, for example, only show a title and a permalink to the post, and the embedded YouTube video is lost. PHP – I've seen this question, which looks like a good approach. I'm less proficient in PHP, so although I'm willing to learn, I'd ideally like to find a different approach. Any thoughts? Thanks.

    Read the article

  • finding long repeated substrings in a massive string

    - by Will
    I naively imagined that I could build a suffix trie where I keep a visit-count for each node, and then the deepest nodes with counts greater than one are the result set I'm looking for. I have a really really long string (hundreds of megabytes). I have about 1 GB of RAM. This is why building a suffix trie with counting data is too inefficient space-wise to work for me. To quote Wikipedia's Suffix tree: storing a string's suffix tree typically requires significantly more space than storing the string itself. The large amount of information in each edge and node makes the suffix tree very expensive, consuming about ten to twenty times the memory size of the source text in good implementations. The suffix array reduces this requirement to a factor of four, and researchers have continued to find smaller indexing structures. And that was wikipedia's comments on the tree, not trie. How can I find long repeated sequences in such a large amount of data, and in a reasonable amount of time (e.g. less than an hour on a modern desktop machine)? (Some wikipedia links to avoid people posting them as the 'answer': Algorithms on strings and especially Longest repeated substring problem ;-) )
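
    For illustration only, a minimal sketch of the suffix-array idea on an input small enough to fit comfortably in memory: sort the suffix start positions, then the longest repeated substring is the longest common prefix of some pair of adjacent suffixes in that order. It sorts suffixes naively, so it does not address the hundreds-of-megabytes constraint in the question; a compressed or external-memory suffix array would be needed for that.

        def longest_repeated_substring(text):
            # Naive suffix array: sort suffix start positions by the suffix itself.
            suffix_array = sorted(range(len(text)), key=lambda i: text[i:])

            best_start, best_len = 0, 0
            for a, b in zip(suffix_array, suffix_array[1:]):
                # Longest common prefix of two adjacent suffixes in sorted order;
                # the overall maximum is the longest repeated substring.
                length = 0
                while a + length < len(text) and b + length < len(text) \
                        and text[a + length] == text[b + length]:
                    length += 1
                if length > best_len:
                    best_start, best_len = a, length
            return text[best_start:best_start + best_len]

        if __name__ == "__main__":
            print(longest_repeated_substring("banana"))   # prints "ana"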

    Read the article

  • Question on SQL Grouping

    - by Lijo
    Hi Team, I am trying to achieve the following without using a subquery. For a funding, I would like to select the latest Letter created date and the 'earliest worklist created since the letter was created' date. FundingId Letter (1, 1/1/2009 )(1, 5/5/2009) (1, 8/8/2009) (2, 3/3/2009) FundingId WorkList (1, 5/5/2009 ) (1, 9/9/2009) (1, 10/10/2009) (2, 2/2/2009) Expected Result - FundingId Letter WorkList (1, 8/8/2009, 9/9/2009) I wrote a query as follows. It has a bug. It will omit those FundingId for which the minimum WorkList date is less than the latest Letter date (even though the funding has another worklist with a date greater than the letter created date). CREATE TABLE #Funding( [Funding_ID] [int] IDENTITY(1,1) NOT NULL, [Funding_No] [int] NOT NULL, CONSTRAINT [PK_Center_Center_ID] PRIMARY KEY NONCLUSTERED ([Funding_ID] ASC) ) ON [PRIMARY] CREATE TABLE #Letter( [Letter_ID] [int] IDENTITY(1,1) NOT NULL, [Funding_ID] [int] NOT NULL, [CreatedDt] [SMALLDATETIME], CONSTRAINT [PK_Letter_Letter_ID] PRIMARY KEY NONCLUSTERED ([Letter_ID] ASC) ) ON [PRIMARY] CREATE TABLE #WorkList( [WorkList_ID] [int] IDENTITY(1,1) NOT NULL, [Funding_ID] [int] NOT NULL, [CreatedDt] [SMALLDATETIME], CONSTRAINT [PK_WorkList_WorkList_ID] PRIMARY KEY NONCLUSTERED ([WorkList_ID] ASC) ) ON [PRIMARY] SELECT F.Funding_ID, Funding_No, MAX (L.CreatedDt), MIN(W.CreatedDt) FROM #Funding F INNER JOIN #Letter L ON L.Funding_ID = F.Funding_ID LEFT OUTER JOIN #WorkList W ON W.Funding_ID = F.Funding_ID GROUP BY F.Funding_ID,Funding_No HAVING MIN(W.CreatedDt) > MAX(L.CreatedDt) How can I write a correct query without using a subquery? Please help. Thanks, Lijo

    Read the article

  • How string accepting interface should look like?

    - by ybungalobill
    Hello, This is a follow up of this question. Suppose I write a C++ interface that accepts or returns a const string. I can use a const char* zero-terminated string: void f(const char* str); // (1) The other way would be to use an std::string: void f(const string& str); // (2) It's also possible to write an overload and accept both: void f(const char* str); // (3) void f(const string& str); Or even a template in conjunction with boost string algorithms: template<class Range> void f(const Range& str); // (4) My thoughts are: (1) is not C++ish and may be less efficient when subsequent operations may need to know the string length. (2) is bad because now f("long very long C string"); invokes a construction of std::string which involves a heap allocation. If f uses that string just to pass it to some low-level interface that expects a C-string (like fopen) then it is just a waste of resources. (3) causes code duplication. Although one f can call the other depending on what is the most efficient implementation. However we can't overload based on return type, like in case of std::exception::what() that returns a const char*. (4) doesn't work with separate compilation and may cause even larger code bloat. Choosing between (1) and (2) based on what's needed by the implementation is, well, leaking an implementation detail to the interface. The question is: what is the preffered way? Is there any single guideline I can follow? What's your experience?
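
    A minimal sketch of option (3) for the case where the implementation ultimately needs a C string (fopen is used here purely as a stand-in): the const char* overload does the work and the std::string overload forwards via c_str(), so neither kind of caller pays an extra allocation. If the implementation instead needs the length or other std::string operations, the forwarding direction is simply flipped.

        #include <cstdio>
        #include <string>

        // Primary implementation works with the C string directly
        // (e.g. because it ends up calling a C API such as fopen).
        void f(const char* str) {
            std::FILE* fp = std::fopen(str, "r");
            if (fp) {
                std::fclose(fp);
            }
        }

        // Thin overload: callers holding a std::string pass the buffer they already own.
        inline void f(const std::string& str) {
            f(str.c_str());
        }

        int main() {
            f("config.txt");               // no std::string construction
            std::string name = "data.txt";
            f(name);                       // no copy either
            return 0;
        }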

    Read the article

  • Large Product catalog with statistics - alternatives to Sql Server?

    - by Eric P
    I am building the UI for a large product catalog (millions of products). I am using SQL Server, FreeText search and ASP.NET MVC. Tables are normalized and indexed. Most queries take less than a second to return. The issue is this: let's say a user searches by keyword. On the search results page I need to display/query for: the first 20 matching products (paged, sorted), the total count of matching products for paging, the list of stores of the matching products only, the list of brands of the matching products only, and the list of colors of the matching products only. Each query takes about 0.5 to 1 second. Altogether it is around 5 seconds. I would like to get the whole page to load in under 1 second. There are several approaches: Optimize queries even more. I already spent a lot of time on this one, so I'm not sure it can be pushed further. Load products first, then load the rest of the information using AJAX. More of a workaround; it will need a revised UI. Re-organize the data to be more report-friendly. I have already aggregated a lot of fields. I checked out several similar sites, for example zappos.com. Not only do they display the same information I would like in under 1 second, but they also include statistics (number of results in each category). The following is the search for the keyword "white": http://www.zappos.com/white How do sites like Zappos and Amazon make their results, filters and stats appear almost instantly?
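
    One hedged sketch (SQL Server 2005+ syntax, with illustrative table and column names, and @keyword assumed to be a declared parameter): the first page and the total match count can come back from a single query by combining ROW_NUMBER() for paging with COUNT(*) OVER () for the overall count, saving one of the round trips.

        -- Page 1 (20 rows) plus the total number of matches in one result set.
        WITH Matches AS (
            SELECT  p.ProductId,
                    p.Name,
                    p.Price,
                    ROW_NUMBER() OVER (ORDER BY p.Name)  AS RowNum,
                    COUNT(*)    OVER ()                  AS TotalMatches
            FROM    dbo.Products AS p                -- illustrative table name
            WHERE   CONTAINS(p.Name, @keyword)       -- full-text search predicate
        )
        SELECT  ProductId, Name, Price, TotalMatches
        FROM    Matches
        WHERE   RowNum BETWEEN 1 AND 20
        ORDER BY RowNum;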

    Read the article

  • Need help implementing this algorithm with map reduce(hadoop)

    - by Julia
    Hi all! I have an algorithm that will go through a large data set, read some text files and search for specific terms in those lines. I have it implemented in Java, but I didn't want to post code so that it doesn't look like I am searching for someone to implement it for me, but it is true that I really need a lot of help!!! This was not planned for my project, but the data set turned out to be huge, so my teacher told me I have to do it like this. I was reading about MapReduce and thought that I would first do the standard implementation and then it would be more or less easy to do it with MapReduce. But that didn't happen, since the algorithm is quite stupid and nothing special, and MapReduce... I can't wrap my mind around it. So here, briefly, is the pseudocode of my algorithm: LIST termList (there is a method that creates this list from a Lucene index) FOLDER topFolder INPUT topFolder IF it is folder and not empty list files (there are 30 sub folders inside) FOR EACH sub folder GET file "CheckedFile.txt" analyze(CheckedFile) ENDFOR END IF Method ANALYZE(CheckedFile) read CheckedFile WHILE CheckedFile has next line GET line FOR(loops through termList) GET third word from line IF third word = term from list append whole line to string buffer ENDIF ENDFOR END WHILE OUTPUT string buffer to file Also, as you can see, each time "analyze" is called a new file has to be created; I understood that MapReduce has difficulty writing to many outputs??? I understand the MapReduce intuition, and my example seems perfectly suited for MapReduce, but when it comes to doing this, obviously I do not know enough and I am STUCK! Please please help.
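
    A hedged sketch of how the ANALYZE step could look as a Hadoop mapper (class names and the way the term list is loaded are illustrative assumptions; the list is assumed small enough to load in setup(), for example from the distributed cache): each input line is checked against the term list and matching lines are emitted keyed by the term, so the output side can group them. MultipleOutputs could be added later if one output file per sub folder is really required.

        import java.io.IOException;
        import java.util.HashSet;
        import java.util.Set;

        import org.apache.hadoop.io.LongWritable;
        import org.apache.hadoop.io.Text;
        import org.apache.hadoop.mapreduce.Mapper;

        public class TermMatchMapper extends Mapper<LongWritable, Text, Text, Text> {

            private final Set<String> terms = new HashSet<String>();

            @Override
            protected void setup(Context context) throws IOException, InterruptedException {
                // Illustrative: in a real job the term list would be loaded here,
                // e.g. from a file shipped via the distributed cache.
                terms.add("exampleTerm");
            }

            @Override
            protected void map(LongWritable key, Text value, Context context)
                    throws IOException, InterruptedException {
                String line = value.toString();
                String[] words = line.split("\\s+");
                if (words.length >= 3 && terms.contains(words[2])) {
                    // Emit the matching line keyed by the term it matched.
                    context.write(new Text(words[2]), new Text(line));
                }
            }
        }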

    Read the article

  • How can I change jtable height at runtime

    - by wniroshan
    I have a JFrame with multiple JPanels of similar width aligned one below the other. I use one of these JPanels to display a JTable; it is the last JPanel of the lot. This JPanel has a JScrollPane as a child component. This is where I try to add my table dynamically. The initial height of this JScrollPane is set to 40. I designed the above template using NetBeans 6.8. Now I'm trying to add the table to the JPanel. When a button is pressed, the code snippet below is called. The class which includes this code extends javax.swing.JFrame. I am expecting the code below to adjust the table height according to the row count and display the table. SearchTable = new JTable(RowData, DisplayNames) { @Override public boolean isCellEditable(int rowIndex, int vColIndex) { return false; } }; // if row count is less than 10 then display all the rows without a scroll bar if (SearchTable.getRowCount() < 10) { pnl_tblpanel.setPreferredSize(new Dimension(625, SearchTable.getRowHeight() * (SearchTable.getRowCount() + 4))); scr_tblholder.setPreferredSize(new Dimension(625, SearchTable.getRowHeight() * (SearchTable.getRowCount() + 4))); } else {// if row count is more than 10 display first 10 rows and add a scroll bar pnl_tblpanel.setPreferredSize(new Dimension(625, SearchTable.getRowHeight() * (10 + 2))); scr_tblholder.setAutoscrolls(true); } //pnl_tblpanel.add(scr_tblholder); scr_tblholder.setViewportView(SearchTable); //pnl_tblpanel.repaint(); pnl_tblpanel.validate(); this.validate(); //this.repaint(); pnl_tblpanel.setVisible(true); this.pack(); The table displays, but the table height is not changed according to the row count. It stays at its default value. I have been trying many combinations of validate and repaint but nothing has worked (more in desperation). Can anyone shed some light on this? Thank you
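
    A hedged sketch of an alternative that sizes the scroll pane's viewport from the table itself, reusing the SearchTable, scr_tblholder and pnl_tblpanel names (and imports) from the snippet above: JTable.setPreferredScrollableViewportSize controls how tall the enclosing JScrollPane asks to be, and revalidate()/pack() afterwards lets the layout pick the new size up.

        int visibleRows = Math.min(SearchTable.getRowCount(), 10);
        int headerAndPadding = SearchTable.getTableHeader().getPreferredSize().height + 4;

        // Ask the scroll pane's viewport to be exactly tall enough for the visible rows.
        SearchTable.setPreferredScrollableViewportSize(
                new Dimension(625, SearchTable.getRowHeight() * visibleRows + headerAndPadding));

        scr_tblholder.setViewportView(SearchTable);

        // Force the container hierarchy to lay itself out again with the new preferred size.
        pnl_tblpanel.revalidate();
        pnl_tblpanel.repaint();
        this.pack();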

    Read the article

  • Best way to keep a .net client app updated with status of another application

    - by rwmnau
    I have a Windows service that's running all the time, and takes some action every 15 minutes. I also have a client WinForms app that displays some information about what the service is doing. I'd like the forms application to keep itself updated with a recent status, but I'm not sure if polling every second is a good move performance-wise. When it starts, my Windows Service opens a WCF named pipe to receive queries (from my client form) Every second, a timer on the winform sends a query to the pipe, and then displays the results. If the pipe isn't there, the form displays that the service isn't running. Is that the best way to do this? If my service opens the pipe when it starts, will it always stay open (until I close it or my service stops)? In addition to polling the service, maybe there's some way for the service to notify any watching applications of certain events, like starting and stopping processing? That way, I could poll less, since I'd presumably know about big events already, and would only be polling for progress. Anything that I'm missing?
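
    A hedged sketch of the push idea at the end of the question, using a WCF duplex (callback) contract, which NetNamedPipeBinding supports over the same named pipe: the WinForms client registers once, the service calls back when it starts or stops processing, and the one-second polling can then be relaxed to occasional progress queries. The interface and member names are illustrative assumptions.

        using System.ServiceModel;

        // Implemented by the WinForms client; the service calls these back.
        public interface IStatusCallback
        {
            [OperationContract(IsOneWay = true)]
            void ProcessingStarted();

            [OperationContract(IsOneWay = true)]
            void ProcessingStopped(string summary);
        }

        [ServiceContract(CallbackContract = typeof(IStatusCallback))]
        public interface IStatusService
        {
            [OperationContract]
            void Subscribe();          // client registers its callback channel

            [OperationContract]
            string GetCurrentStatus(); // still available for on-demand polling
        }

        // Inside the service's Subscribe implementation, the callback channel is captured:
        //   subscribers.Add(OperationContext.Current.GetCallbackChannel<IStatusCallback>());
        // and the stored channels are invoked when the 15-minute job starts or stops.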

    Read the article

  • MySQL + Joomla + remote c# access

    - by Jimmy
    Hello, I work on a Joomla web site, installed on a MySQL database and running on IIS7. It's all working fine. I now need to add functionality that lets (Joomla-)registered users change some configuration data. Though I haven't done this yet, it looks straightforward enough to do with Joomla. The data is private so all external access will be done through HTTPS. I also need an existing c# program, running on another machine, to read that configuration data. Sure enough, this data access needs to be as fast as possible. The data will be small (and filtered by query), but the latency should be kept to a minimum. A short-term, client-side cache (less than a minute, in case a user updates his configuration data) seems like a good idea. I have done practically zero database/asp programming so far, so what's the best way of doing that last step? Should the c# program access the database 'directly' (using what? LINQ?) or setup some sort of Facade (SOAP?) service? If a service should be used, should it be done through Joomla or with ASP on IIS? Thanks

    Read the article

  • Criteria for triggering garbage collection in .Net

    - by Kennet Belenky
    I've come across some curious behavior with regard to garbage collection in .Net. The following program will throw an OutOfMemoryException very quickly (after less than a second on a 32-bit, 2GB machine). The Foo finalizer is never called. class Foo { static Dictionary<Guid, WeakReference> allFoos = new Dictionary<Guid, WeakReference>(); Guid guid = Guid.NewGuid(); byte[] buffer = new byte[1000000]; static Random rand = new Random(); public Foo() { // Uncomment the following line and the program will run forever. // rand.NextBytes(buffer); allFoos[guid] = new WeakReference(this); } ~Foo() { allFoos.Remove(guid); } static public void Main(string[] args) { for (; ; ) { new Foo(); } } } If the rand.NextBytes line is uncommented, it will run ad infinitum, and the Foo finalizer is regularly invoked. Why is that? My best guess is that in the former case, either the CLR or the Windows VMM is lazy about allocating physical memory. The buffer never gets written to, so the physical memory is never used. When the address space runs out, the system crashes. In the latter case, the system runs out of physical memory before it runs out of address space, the GC is triggered and the objects are collected. However, here's the part I don't get. Assuming my theory is correct, why doesn't the GC trigger when the address space runs low? If my theory is incorrect, then what's the real explanation?

    Read the article

  • How often should network traffic/collisions cause SNMP Sets to fail?

    - by A. Levy
    My team has a situation where an SNMP SET will fail once every two weeks or so. Since this set happens automatically, we don't necessarily notice it immediately when it fails, and this can result in an inconsistent configuration and associated wailing and gnashing of teeth. The plan is to fix this by having our software automatically retry the SET when it fails. The problem is, we aren't sure why the failure is happening. My (extremely limited) knowledge of SNMP isn't particularly helpful in diagnosing this problem, so I thought I'd ask StackOverflow for some advice. We think that every so often a spike in network traffic will cause the SET to fail. Since SNMP uses UDP for communication, I would think it would be relatively easy for a command to be drowned out if traffic was high for a short period of time. However, I have no idea how common this is. We have a small network with a single cisco router and there are less than a dozen SNMP controlled devices on that network. In addition to the SNMP traffic, there are some status web pages being loaded from the various devices. In case it makes a difference, I believe we are using the AdventNet SNMP API version 4.0.4 for Java. Does it sound reasonable that there will be some SET commands dropped occasionally, or should we be looking for other causes?
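
    Independent of why the SETs fail, a hedged sketch of the planned automatic retry in plain Java (not tied to the AdventNet API; the SnmpSetOperation interface is an assumption standing in for whatever actually issues the SET): retry a few times with a short, growing delay, and surface the failure only if every attempt is exhausted.

        public final class RetryingSetter {

            /** Stand-in for the code that performs one SNMP SET attempt. */
            public interface SnmpSetOperation {
                void execute() throws Exception;
            }

            public static void setWithRetry(SnmpSetOperation op, int maxAttempts, long initialDelayMs)
                    throws Exception {
                long delay = initialDelayMs;
                for (int attempt = 1; attempt <= maxAttempts; attempt++) {
                    try {
                        op.execute();
                        return;                       // success, nothing more to do
                    } catch (Exception e) {
                        if (attempt == maxAttempts) {
                            throw e;                  // give up and let the caller log/alert
                        }
                        Thread.sleep(delay);          // back off before trying again
                        delay *= 2;
                    }
                }
            }
        }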

    Read the article

  • Flex Sprite xy Coordinates

    - by Ian
    Hi, I have a drawing that looks more or less like the attached image. The orange square is the currently selected sprite. The sprites are all drawn from coordinates that I receive from XML. var sprObject:Sprite = new Sprite(); sprObject.graphics.beginFill(itemList.c.toString()); sprObject.name = strName; sprObject.graphics.moveTo(iX, iY); sprObject.graphics.lineTo(iX2, iY2); sprObject.graphics.lineTo(iX3, iY3); sprObject.graphics.lineTo(iX4, iY4); sprObject.graphics.lineTo(iX, iX); sprObject.graphics.endFill(); mainUI.addChild(sprObject); // mainUI is a mx:UIComponent g_Sprite.push(sprObject); // array of sprites. What I want to do is the following: if I'm currently on the orange square and I use my keyboard direction buttons (up/down/left/right), I want to deselect the current sprite and select the next sprite in the appropriate direction. The problem I'm having is that I cannot get the x and y coordinates of the drawn sprites. If I look in the array, the x and y coordinates of the sprites are all 0. If I can retrieve them, I can write an algorithm to determine the next sprite to select. Any help would be appreciated.

    Read the article

  • Faster or more memory-efficient solution in Python for this Codejam problem.

    - by jeroen.vangoey
    I tried my hand at this Google Codejam Africa problem (the contest is already finished, I just did it to improve my programming skills). The Problem: You are hosting a party with G guests and notice that there is an odd number of guests! When planning the party you deliberately invited only couples and gave each couple a unique number C on their invitation. You would like to single out whoever came alone by asking all of the guests for their invitation numbers. The Input: The first line of input gives the number of cases, N. N test cases follow. For each test case there will be: One line containing the value G, the number of guests. One line containing a space-separated list of G integers. Each integer C indicates the invitation code of a guest. Output For each test case, output one line containing "Case #x: " followed by the number C of the guest who is alone. The Limits: 1 <= N <= 50, 0 < C <= 2147483647. Small dataset: 3 <= G < 100. Large dataset: 3 <= G < 1000. Sample Input: 3 3 1 2147483647 2147483647 5 3 4 7 4 3 5 2 10 2 10 5 Sample Output: Case #1: 1 Case #2: 7 Case #3: 5 This is the solution that I came up with: with open('A-large-practice.in') as f: lines = f.readlines() with open('A-large-practice.out', 'w') as output: N = int(lines[0]) for testcase, i in enumerate(range(1,2*N,2)): G = int(lines[i]) for guest in range(G): codes = map(int, lines[i+1].split(' ')) alone = (c for c in codes if codes.count(c)==1) output.write("Case #%d: %d\n" % (testcase+1, alone.next())) It runs in 12 seconds on my machine with the large input. Now, my question is, can this solution be improved in Python to run in a shorter time or use less memory? The analysis of the problem gives some pointers on how to do this in Java and C++ but I can't translate those solutions back to Python.
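
    One hedged sketch of a faster approach using the same input/output format as above: codes.count(c) inside the generator makes the work per test case quadratic in G, and the line is also re-parsed G times by the inner loop. Since every invitation number appears exactly twice except the lone guest's, XOR-ing all the codes cancels the pairs and leaves the answer in linear time.

        with open('A-large-practice.in') as f, open('A-large-practice.out', 'w') as output:
            lines = f.read().split('\n')
            n = int(lines[0])
            for testcase in range(1, n + 1):
                codes = map(int, lines[2 * testcase].split())
                alone = 0
                for c in codes:
                    alone ^= c      # pairs cancel out, the single code remains
                output.write("Case #%d: %d\n" % (testcase, alone))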

    Read the article

  • QProgressBar problem with uploading

    - by rolanddd
    Hey all! I show my code first, then I explain my problem: ... // somewhere in the constructor progressBar = new QProgressBar(this); progressBar->setMinimum(0); progressBar->setMaximum(100); ... connect(&http, SIGNAL(dataSendProgress(int, int)), this, SLOT(updateProgressBar(int, int))); ... void MainWindow::updateProgressBar(int bytesSent, int total) { progressBar->setMaximum(total); progressBar->setValue(bytesSent); } So this is how I try to make my progressBar being updated when I upload a file. The problem is, it won't do the job. When it starts uploading, I set the value of the progress bar to 0, then (thanks to this slot) it won't actually show the progress, but will jump to 100% immediately (even before it finished uploading). I already checked the HTTP Client example, and copied the progress bar part, it is for downloading, and more or less is the same as for uploading but it uses the dataReadProgress signal (needed for downloading) AND it works perfectly. Does anybody know how to solve this for uploading?

    Read the article

  • How to remove words based on a word count

    - by Chris
    Here is what I'm trying to accomplish. I have an object coming back from the database with a string description. This description can be up to 1000 characters long, but we only want to display a short view of it. So I coded up the following, but I'm having trouble actually removing the extra words after the regular expression finds the total word count. Does anyone have a good way of displaying only the words up to that Regex.Matches count? Thanks! if (!string.IsNullOrEmpty(myObject.Description)) { string original = myObject.Description; MatchCollection wordColl = Regex.Matches(original, @"[\S]+"); if (wordColl.Count < 70) // 70 words? { uxDescriptionDisplay.Text = string.Format("<p>{0}</p>", myObject.Description); } else { string shortendText = original.Remove(200); // 200 characters? uxDescriptionDisplay.Text = string.Format("<p>{0}</p>", shortendText); } }
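
    A hedged sketch of one way to cut at a word boundary instead of a fixed 200 characters: since the MatchCollection exposes the position of each word, the preview can end exactly where the 70th word ends (myObject and uxDescriptionDisplay are reused from the snippet above).

        if (!string.IsNullOrEmpty(myObject.Description))
        {
            string original = myObject.Description;
            MatchCollection wordColl = Regex.Matches(original, @"[\S]+");

            string display;
            if (wordColl.Count <= 70)
            {
                display = original;
            }
            else
            {
                // End the preview exactly at the end of the 70th word.
                Match lastWord = wordColl[69];
                display = original.Substring(0, lastWord.Index + lastWord.Length) + " ...";
            }

            uxDescriptionDisplay.Text = string.Format("<p>{0}</p>", display);
        }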

    Read the article

  • Read large amount of data from file in Java

    - by Crozin
    Hello, I've got a text file that contains 1 000 002 numbers in the following format: 123 456 1 2 3 4 5 6 .... 999999 100000 Now I need to read that data and assign it to int variables (the very first two numbers) and all the rest (1 000 000 numbers) to an int[] array. It's not a hard task, but it's horribly slow. My first attempt was java.util.Scanner: Scanner stdin = new Scanner(new File("./path")); int n = stdin.nextInt(); int t = stdin.nextInt(); int array[] = new int[n]; for (int i = 0; i < n; i++) { array[i] = stdin.nextInt(); } It works as expected but it takes about 7500 ms to execute. I need to fetch that data in up to several hundred milliseconds. Then I tried java.io.BufferedReader: Using BufferedReader.readLine() and String.split() I got the same results in about 1700 ms, but that's still too slow. How can I read that amount of data in less than 1 second? The final result should be equal to: int n = 123; int t = 456; int array[] = { 1, 2, 3, 4, ..., 999999, 100000 };
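
    A hedged sketch of one common faster route: java.io.StreamTokenizer over a BufferedReader parses the numbers while streaming the file, avoiding Scanner's regex overhead and the intermediate String objects produced by split(). Timings will of course vary by machine.

        import java.io.BufferedReader;
        import java.io.FileReader;
        import java.io.IOException;
        import java.io.StreamTokenizer;

        public class FastRead {
            public static void main(String[] args) throws IOException {
                StreamTokenizer in = new StreamTokenizer(
                        new BufferedReader(new FileReader("./path")));

                in.nextToken();
                int n = (int) in.nval;   // first number
                in.nextToken();
                int t = (int) in.nval;   // second number

                int[] array = new int[n];
                for (int i = 0; i < n; i++) {
                    in.nextToken();
                    array[i] = (int) in.nval;   // parsed as a double, narrowed to int
                }
            }
        }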

    Read the article

  • Does the Java Memory Model (JSR-133) imply that entering a monitor flushes the CPU data cache(s)?

    - by Durandal
    There is something that bugs me with the Java memory model (if I even understand everything correctly). If there are two threads A and B, there are no guarantees that B will ever see a value written by A, unless both A and B synchronize on the same monitor. For any system architecture that guarantees cache coherency between threads, there is no problem. But if the architecture does not support cache coherency in hardware, this essentially means that whenever a thread enters a monitor, all memory changes made before must be committed to main memory, and the cache must be invalidated. And it needs to be the entire data cache, not just a few lines, since the monitor has no information about which variables in memory it guards. But that would surely impact performance of any application that needs to synchronize frequently (especially things like job queues with short running jobs). So can Java work reasonably well on architectures without hardware cache-coherency? If not, why doesn't the memory model make stronger guarantees about visibility? Wouldn't it be more efficient if the language required information about what is guarded by a monitor? As I see it, the memory model gives us the worst of both worlds: the absolute need to synchronize, even if cache coherency is guaranteed in hardware, and on the other hand bad performance on incoherent architectures (full cache flushes). So shouldn't it be more strict (require information about what is guarded by a monitor) or more loose and restrict potential platforms to cache-coherent architectures? As it is now, it doesn't make too much sense to me. Can somebody clear up why this specific memory model was chosen? EDIT: My use of strict and loose was a bad choice in retrospect. I used "strict" for the case where fewer guarantees are made and "loose" for the opposite. To avoid confusion, it's probably better to speak in terms of stronger or weaker guarantees.

    Read the article
