Search Results

Search found 3618 results on 145 pages for 'huge'.

  • How do I get C# to garbage collect aggressively?

    - by mmr
    I have an application used in image processing, and I find myself typically allocating arrays of 4000x4000 ushorts, as well as the occasional float and the like. Currently, the .NET framework tends to crash in this app apparently at random, almost always with an out-of-memory error. 32 MB is not a huge allocation, but if .NET is fragmenting memory, then it's very possible that such large contiguous allocations aren't behaving as expected. Is there a way to tell the garbage collector to be more aggressive, or to defragment memory (if that's the problem)? I realize there are the GC.Collect and GC.WaitForPendingFinalizers calls, and I've sprinkled them pretty liberally through my code, but I'm still getting the errors. It may be because I'm calling DLL routines that use native code a lot, but I'm not sure. I've gone over that C++ code and made sure that any memory I allocate I delete, but I still get these C# crashes, so I'm pretty sure it's not there. I wonder if the C++ calls could be interfering with the GC, making it leave memory behind because it once interacted with a native call -- is that possible? If so, can I turn that functionality off?
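
    A minimal sketch of the collect-and-retry pattern the question alludes to (illustrative only; whether it helps depends on where the fragmentation actually is):

        // Force a full blocking collection, let finalizers release any native
        // handles, then collect the freshly finalized objects and retry once.
        static ushort[] AllocateFrame(int width, int height)
        {
            try
            {
                return new ushort[width * height];
            }
            catch (OutOfMemoryException)
            {
                GC.Collect();
                GC.WaitForPendingFinalizers();
                GC.Collect();
                return new ushort[width * height];  // a second failure is a real OOM
            }
        }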

  • How do I keep users from spoofing data through a form?

    - by Jonathan
    I have a site that has been running for some time now that uses a great deal of user input to build the site. Naturally, there are dozens of forms on the site. When building it, I often used hidden form fields to pass data back to the server so that I know which record to update. An example might be:

        <input type="hidden" name="id" value="132" />
        <input type="text" name="total_price" value="15.02" />

    When the form is submitted, these values get passed to the server and I update the records based on the data passed (i.e. the price of record 132 gets changed to 15.02). I recently found out that you can change the attributes and values with something as simple as Firebug. So... I open Firebug, change the id value to "155" and the price value to "0.00", and then submit the form. Voilà! I view product number 155 on the site and it now says it's $0.00. This concerns me. How can I know which record to update without either a query string (easily modified) or a hidden input element passing the id to the server? And if there's no better way (I've seen literally thousands of websites that pass data this way), how would I make it so that if a user changes these values, the changed data is not acted on server-side? I've thought about encrypting the id and then decrypting it on the other side, but that still doesn't protect me from someone changing it and just happening to get something that matches another id in the database. I've also thought about cookies, but I've heard those can be manipulated as well. Any ideas? This seems like a HUGE security risk to me.
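
    One hedged sketch of the signing idea raised above, in PHP ($secret is a server-side key that never reaches the browser): instead of encrypting the id, sign it, so any edited value fails verification. Independently of this, a value like a price should be recomputed server-side rather than read back from the form, and the server should still check that the logged-in user is allowed to modify the given record.

        // emitting the form: sign the id with a server-side secret key
        $sig = hash_hmac('sha256', $id, $secret);
        echo '<input type="hidden" name="id" value="' . htmlspecialchars($id) . '" />';
        echo '<input type="hidden" name="sig" value="' . $sig . '" />';

        // handling the submit: recompute and compare before trusting the id
        if (hash_hmac('sha256', $_POST['id'], $secret) !== $_POST['sig']) {
            die('Form data was tampered with');
        }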

  • How to control virtual memory management in Linux?

    - by chmike
    I'm writing a program that uses an mmap'd file to hold a huge buffer organized as an array of 64 MB blocks. The blocks are used to aggregate data received from different hosts over the network. As a consequence, the total data size written in each block is not known in advance. Most of the time it is only 2 MB, but in some cases it can be 20 MB or more. The data doesn't stay long in the buffer: 90% is deleted after less than a second and the rest is transmitted to another host. I would like to know if there is a way to tell the virtual memory manager that RAM pages are no longer dirty once the data is deleted. Should I mmap and munmap each block as it is used and released, to control the virtual memory? What would be the overhead of doing this? Also, some colleagues expressed concerns about the performance impact of allocating such a big mmap space. I expect it to behave like a swap file, so that only dirty pages need to be considered.
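
    A hedged sketch of one mechanism worth evaluating here, madvise(2); note its semantics differ between private/anonymous and shared file-backed mappings, so this needs testing against the actual setup:

        #include <sys/mman.h>

        /* After a block's data has been consumed, advise the kernel that the
         * pages can be dropped instead of written back; addr and len must be
         * page-aligned. Cheaper than a munmap/mmap cycle per block. */
        void release_block(void *addr, size_t len)
        {
            madvise(addr, len, MADV_DONTNEED);
        }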

  • Most watched videos this week

    - by Jan Hancic
    I have a YouTube-like web page where users upload and watch videos. I would like to add a "most watched videos this week" list of videos to my page, but this list should not contain just the videos that were uploaded in the previous week; it should consider all videos. I'm currently recording views in a column, so I have no information on when a video was watched. So now I'm searching for a solution for how to record this data. The first option is the most obvious (and the correct one, as far as I know): have a separate table in which you insert a new line every time you want to record a new view (storing the ID of the video and the timestamp). I'm worried that I would quickly get huge amounts of data in this table, and queries using this table would be extremely slow (we get about 3 million views a month). The second solution isn't as flexible but is easier on the database: I would add 7 columns to the "videos" table, one for each day of the week (views_monday, views_tuesday, views_wednesday, ...), increment the value in the correct column based on the current day, and reset the current day's column to 0 at midnight. I could then easily get the most watched videos of the week by summing these 7 columns. What do you think: should I bother with the first solution, or will the second one suffice for my case? If you have a better solution, please share! Oh, I'm using MySQL.
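
    A sketch of the first option in MySQL (table and column names are illustrative); with a composite index and a daily purge of rows older than a week, 3 million rows a month stays manageable:

        -- one row per view
        CREATE TABLE video_views (
            video_id   INT NOT NULL,
            viewed_at  TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,
            KEY idx_time_video (viewed_at, video_id)
        );

        -- most watched over the trailing week
        SELECT video_id, COUNT(*) AS views
        FROM video_views
        WHERE viewed_at >= NOW() - INTERVAL 7 DAY
        GROUP BY video_id
        ORDER BY views DESC
        LIMIT 20;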

  • Is it OK to write code after [super dealloc]? (Objective-C)

    - by Richard J. Ross III
    I have a situation in my code where I cannot clean up my class's objects without first calling [super dealloc]. It is something like this:

        // Baseclass.m
        @implementation Baseclass
        ...
        - (void)dealloc {
            [self _removeAllData];
            [aVariableThatBelongsToMe release];
            [anotherVariableThatBelongsToMe release];
            [super dealloc];
        }
        ...
        @end

    This works great. My problem is that when I went to subclass this huge and nasty class (over 2000 lines of gross code), I ran into a problem: when I released my objects before calling [super dealloc], I had zombies running through the code that were activated when the [self _removeAllData] method was called.

        // Subclass.m
        @implementation Subclass
        ...
        - (void)dealloc {
            [super dealloc];
            [someObjectUsedInTheRemoveAllDataMethod release];
        }
        ...
        @end

    This works great, and it didn't require me to refactor any code. My question is this: is it safe for me to do this, or should I refactor my code? Or maybe autorelease the objects? I am programming for iPhone, if that matters any.

  • Begin game programming basics

    - by AJ
    My 14-year-old kid brother wants to learn to program games. He has never programmed but would like to learn, and his interest lies with games and game programming. He understands that it can be difficult, but he wants to do it anyway. So, obviously, I turned to SO folks to hear how you feel he should go about it. Please suggest which areas a beginner can choose, how to begin in each area, what to read at the start, good initial languages, etc. Once the beginning is taken care of, you may also suggest intermediate and advanced material, but this question is about the very beginning level. If there are distinct areas, like web games vs. console games vs. generic computer games, then please advise on those areas too. As I said, he has never programmed; he might want to try all the areas and choose the one he likes best. I hope this is not too much to ask of someone in this field, but if this question is too huge, please advise on how to break it into multiple questions. ~Thanks.

  • What's the best way to "shuffle" a table of database records?

    - by Darth
    Say that I have a table with a bunch of records which I want to present to users in random order. I also want users to be able to paginate back and forth, so I have to preserve some sort of order, at least for a while. The application is basically AJAX-only and it caches already visited pages, so even if I always served random results, when the user goes back he will get the previous page, because it will load from the local cache. The problem is that if I return only random results, there might be duplicates. Each page contains 6 results, so to prevent this I'd have to do something like WHERE id NOT IN (1,2,3,4 ...), listing all the previously loaded IDs. A huge downside of that solution is that it won't be possible to cache anything on the server side, as every user will request different data. An alternative solution might be to create another column for ordering the records and shuffle it every <insert time unit here>. The problem there is that I'd need to assign a random number out of a sequence to every record in the table, which would take as many queries as there are records. I'm using Rails and MySQL, if that's of any relevance.
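
    One hedged, MySQL-specific sketch of a "stable shuffle": seed ORDER BY RAND() with a per-session value, so the order is repeatable across page requests for one user but differs between users. Note that RAND(seed) ordering still sorts the whole table on every query, so it needs benchmarking on large tables:

        -- 57 = a random integer stored in the user's session
        SELECT id, title
        FROM records
        ORDER BY RAND(57)        -- same seed => same order => stable pagination
        LIMIT 6 OFFSET 12;       -- third page at six results per page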

  • How to optimize a PostgreSQL server for a "write once, read many"-type infrastructure?

    - by mhu
    Greetings, I am working on a piece of software that logs entries (and related tagging) in a PostgreSQL database for storage and retrieval. We never update any data once it has been inserted; we might remove it when the entry gets too old, but this is done at most once a day. Stored entries can be retrieved by users. The insertion of new entries can happen rather fast and regularly, so the database will commonly hold several million elements. The tables used are pretty simple: one table for ids, raw content and insertion date, and one table storing tags and their values associated to an id. User searches mostly concern tag values, so SELECTs usually consist of JOIN queries joining the two tables on id. To sum it up:

        - 2 tables
        - lots of INSERT
        - no UPDATE
        - some DELETE, once a day at most
        - some user-generated SELECTs with JOIN
        - a huge data set

    What would an optimal server configuration (software and hardware; I assume, for example, that RAID 10 could help) be for my PostgreSQL server, given these requirements? By optimal, I mean one that allows SELECT queries to take a reasonably small amount of time. I can provide more information about the current setup (tables, indexes, ...) if needed.
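
    As a starting point, a hedged sketch of the postgresql.conf knobs that usually matter for this read-heavy, insert-heavy profile; the figures assume a dedicated box with roughly 16 GB of RAM and are starting points to benchmark, not recommendations:

        shared_buffers = 4GB              # hot index/table pages in PostgreSQL's own cache
        effective_cache_size = 12GB       # tells the planner how much the OS page cache will hold
        work_mem = 64MB                   # per-sort/per-join memory for the SELECT ... JOIN load
        maintenance_work_mem = 512MB      # speeds up index builds and the daily delete's vacuum
        wal_buffers = 16MB                # smooths the constant INSERT stream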

  • Multiple leaf methods problem in composite pattern

    - by Ondrej Slinták
    At work, we are developing a PHP application that will later be re-programmed in Java. With some basic knowledge of Java, we are trying to design everything to be easily rewritten, without any headaches. An interesting problem came up when we tried to implement the composite pattern with a huge number of methods in the leaves. What we are trying to achieve (not using interfaces; it's just an example):

        class Composite { ... }

        class LeafOne {
            public function Foo( );
            public function Moo( );
        }

        class LeafTwo {
            public function Bar( );
            public function Baz( );
        }

        $c = new Composite( Array( new LeafOne( ), new LeafTwo( ) ) );

        // will call method Foo in all classes in the composite that contain this method
        $c->Foo( );

    It seems like a pretty classic composite pattern, but the problem is that we will have quite a few leaf classes, and each of them might have ~5 methods (of which a few might differ from the others). One of our solutions, which seems to be the best so far and might actually work, is using the __call magic method to call methods in leaves. Unfortunately, we don't know if there is an equivalent in Java. So the actual question is: is there a better solution for this, using code that could eventually be easily re-coded in Java? Or do you recommend any other solution? Perhaps there's some different, better pattern I could use here. In case something is unclear, just ask and I'll edit this post.
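
    For reference, a minimal sketch of the __call dispatch described above; the closest Java analogues would be reflection or java.lang.reflect.Proxy, which is worth weighing before committing to this design:

        class Composite {
            private $children;

            public function __construct( array $children ) {
                $this->children = $children;
            }

            // forward any undefined method to every child that implements it
            public function __call( $name, $args ) {
                foreach ( $this->children as $child ) {
                    if ( method_exists( $child, $name ) ) {
                        call_user_func_array( array( $child, $name ), $args );
                    }
                }
            }
        }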

  • Better way of coding this if statement?

    - by HadlowJ
    Hi. I have another question for you very helpful people. I use a lot of if statements, many of which are just repeated and I'm sure could be shortened. This is my current bit of code:

        if (Globals.TotalStands <= 1)
        {
            ScoreUpdate.StandNo2.Visible = false;
            ScoreUpdate.ScoreStand2.Visible = false;
            ScoreUpdate.ScoreOutOf2.Visible = false;
        }
        if (Globals.TotalStands <= 2)
        {
            ScoreUpdate.StandNo3.Visible = false;
            ScoreUpdate.ScoreStand3.Visible = false;
            ScoreUpdate.ScoreOutOf3.Visible = false;
        }
        if (Globals.TotalStands <= 3)
        {
            ScoreUpdate.StandNo4.Visible = false;
            ScoreUpdate.ScoreStand4.Visible = false;
            ScoreUpdate.ScoreOutOf4.Visible = false;
        }
        if (Globals.TotalStands <= 4)
        {
            ScoreUpdate.StandNo5.Visible = false;
            ScoreUpdate.ScoreStand5.Visible = false;
            ScoreUpdate.ScoreOutOf5.Visible = false;
        }
        if (Globals.TotalStands <= 5)
        {
            ScoreUpdate.StandNo6.Visible = false;
            ScoreUpdate.ScoreStand6.Visible = false;
            ScoreUpdate.ScoreOutOf6.Visible = false;
        }
        if (Globals.TotalStands <= 6)
        {
            ScoreUpdate.StandNo7.Visible = false;
            ScoreUpdate.ScoreStand7.Visible = false;
            ScoreUpdate.ScoreOutOf7.Visible = false;
        }
        if (Globals.TotalStands <= 7)
        {
            ScoreUpdate.StandNo8.Visible = false;
            ScoreUpdate.ScoreStand8.Visible = false;
            ScoreUpdate.ScoreOutOf8.Visible = false;
        }

    As you can see, there is a huge amount of code to do something simple (which I do on a few other forms as well), and I'm sure there must be a better way of coding this that gets the same result. I'm a code noob, so please be gentle. The code is C# and the software is Visual Studio 2008 Pro. Thanks.
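
    A hedged sketch of one common refactor, assuming this is WinForms and the three controls per stand are direct children of the form (so they can be looked up by their designer names):

        // stand N is visible only when there are at least N stands
        for (int i = 2; i <= 8; i++)
        {
            bool visible = Globals.TotalStands >= i;
            ScoreUpdate.Controls["StandNo" + i].Visible = visible;
            ScoreUpdate.Controls["ScoreStand" + i].Visible = visible;
            ScoreUpdate.Controls["ScoreOutOf" + i].Visible = visible;
        }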

  • PHP XMLReader question

    - by Matias
    Hi guys, I am pretty new to XML parsing and I am using XMLReader to parse a huge file. Given the following XML (sample):

        <hotels>
            <hotel>
                <name>Bla1</name>
            </hotel>
            <hotel>
                <name>Bla2</name>
            </hotel>
        </hotels>

    and the following PHP, which pulls the values into my database:

        $reader = new XMLReader();
        $reader->open('hotels.xml');
        while ($reader->read()) {
            if ($reader->name == "name") {
                $reader->read();
                mysql_query("INSERT INTO MyHotels (hotelName) VALUES (\"$reader->value\")");
            }
        }
        $reader->close();

    the problem is that I get a single empty row between each of the parsed nodes! Not sure why this happens!

        | ID | hotelName |
        |  1 | Bla1      |
        |  2 |           |
        |  3 | Bla2      |
        |  4 |           |

    Help is greatly appreciated.
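
    For what it's worth, a hedged guess at the culprit plus a sketch of the fix: $reader->name also matches the closing </name> tag (an END_ELEMENT node carries the same name), so checking the node type should filter out the empty reads:

        while ($reader->read()) {
            // only react to opening <name> tags, not </name>
            if ($reader->nodeType == XMLReader::ELEMENT && $reader->name == "name") {
                $reader->read();   // advance to the text node inside <name>
                // ... insert $reader->value as before ...
            }
        }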

  • architecture - centralized location for different modules (cms, web applications, ...) - best practice

    - by NicoJuicy
    Let's just say that I want to create a CMS plus other online applications. I want them all to integrate into a central location, but they also have to be available separately (not everyone wants more than the CMS solution). Would I create a huge central application that contains all the databases and communicates with the "standalone - integrated" modules through a web service? Or would I create them separately, with the only thing the central application does being syncing the information (e.g. the CMS and another solution can share tables such as clients or employees)? Or do you have another idea? (I know I'm being a little vague, but I can't give many details because of my work contract.) If someone has all the packages, it should be possible for the central application to integrate all the modules in one place; if someone has more than one module, the website should combine them. What I thought best is that the central location contains only the users and their rights (e.g. CMS - all rights, ...), and the information gets synced on every change (module CMS adds a new client - store it locally and send the data to the central location; the central location sends it on to the other modules, so the clients table is updated everywhere). This way, even if someone only bought one module, it can sync easily through the complete architecture. I hope I made myself clear!

  • Java number exceeds Long.MAX_VALUE - how to detect?

    - by jurchiks
    I'm having problems detecting whether a sum/multiplication of two numbers exceeds the maximum value of a long integer. Example code:

        long a = 2 * Long.MAX_VALUE;
        System.out.println("long.max * smth > long.max... or is it? a=" + a);

    This gives me -2, while I would expect it to throw a NumberFormatException... Is there a simple way of making this work? Because I have some code that does multiplications in nested IF blocks or additions in a loop, and I would hate to add more IFs to each IF or inside the loop.

    Edit: oh well, it seems that this answer from another question is the most appropriate for what I need: http://stackoverflow.com/a/9057367/540394 I don't want to do boxing/unboxing as it adds unnecessary overhead, and this way is very short, which is a huge plus to me. I'll just write two short functions to do these checks and return the min or max long.

    Edit 2: here's the function for limiting a long to its min/max value, per the answer linked above:

        /**
         * @param a : one of the two numbers added/multiplied
         * @param b : the other of the two numbers
         * @param c : the result of the addition/multiplication
         * @return Long.MIN_VALUE or Long.MAX_VALUE if the addition/multiplication
         *         of a and b falls outside the long range, otherwise c
         */
        public static long limitLong(long a, long b, long c)
        {
            return (((a > 0) && (b > 0) && (c <= 0)) ? Long.MAX_VALUE
                 : (((a < 0) && (b < 0) && (c >= 0)) ? Long.MIN_VALUE : c));
        }

    Tell me if you think this is wrong.

  • How to fix this simple SQL query?

    - by morpheous
    I have a database with three tables:

        user_table
        country_table
        city_table

    I want to write ANSI SQL which will allow me to fetch all the user data (i.e. user details including the name of the country of the last school and the name of the city they live in now). The problem I am having is that I have to use a self join, and I am getting slightly confused. The schema is shown below:

        CREATE TABLE user_table (id int, first_name varchar(16), last_school_country_id int, city_id int);
        CREATE TABLE country_table (id int, name varchar(32));
        CREATE TABLE city_table (id int, country_id int, name varchar(32));

    This is the query I have come up with so far, but the results are wrong, and sometimes the db engine (MySQL) asks me if I want to show all [HUGE NUMBER HERE] results - which makes me suspect that I am unintentionally creating a Cartesian product somewhere. Can someone explain what is wrong with this SQL statement, and what I need to do to fix it?

        SELECT usr.id AS id,
               usr.first_name,
               ctry1.name AS loc_country_name,
               ctry2.name AS school_country_name,
               city.name AS loc_city_name
        FROM user_table usr, country_table ctry1, country_table ctry2, city_table city
        WHERE usr.last_school_country_id = ctry2.id
          AND usr.city_id = city.id
          AND city.country_id = ctry1.id
          AND ctry1.id = ctry2.id;

  • Debugging instance of another thread altering my data

    - by Mick
    I have a huge global array of structures. Some regions of the array are tied to individual threads, and those threads can modify their regions without having to use critical sections. But there is one special region of the array which all threads may access. The code that accesses these parts of the array needs to carefully use critical sections (each array element has its own critical section) to prevent any possibility of two threads writing to the structure simultaneously. Now I have a mysterious bug I am trying to chase; it occurs unpredictably and very infrequently. It seems that one of the structures is being filled with an incorrect number. One obvious explanation is that another thread has accidentally been allowed to set this number when it should be excluded from doing so. Unfortunately it seems close to impossible to track this bug down: the array element in which the bad data appears is different each time. What I would love to be able to do is set some kind of trap for the bug as follows: I would enter a critical section for array element N, and then, knowing that no other thread should be able to touch the data, set some kind of flag for a debugging tool saying "if any other thread attempts to change the data here, please break and show me the offending piece of source code"... but I suspect no such tool exists... or does it? Or is there some completely different debugging methodology I should be employing?

  • How to "wrap" implementation in C#

    - by igor
    Hello, I have these classes in C# (.NET Framework 3.5), described below:

        public class Base
        {
            public int State { get; set; }
            public virtual int Method1() { ... }
            public virtual string Method2() { ... }
            ...
            public virtual void Method10() { ... }
        }

        public class B : Base
        {
            // some implementation
        }

        public class Proxy : Base
        {
            private B _b;

            public Proxy(B b)
            {
                _b = b;
            }

            public override int Method1()
            {
                if (State == Running)
                    return _b.Method1();
                else
                    return base.Method1();
            }

            public override string Method2()
            {
                if (State == Running)
                    return _b.Method2();
                else
                    return base.Method2();
            }

            public override void Method10()
            {
                if (State == Running)
                    _b.Method10();
                else
                    base.Method10();
            }
        }

    I want to get something like this:

        public Base GetStateDependentImplementation()
        {
            if (State == Running)   // may be some other rule
                return _b;
            else
                return base;        // compile error
        }

    so that my Proxy's implementation becomes:

        public class Proxy : Base
        {
            ...
            public override int Method1()
            {
                return GetStateDependentImplementation().Method1();
            }

            public override string Method2()
            {
                return GetStateDependentImplementation().Method2();
            }
            ...
        }

    Of course, I can do this (aggregation of the base implementation):

        public class RepeaterOfBase : Base   // no overrides, just inheritance
        {
        }

        public class Proxy : Base
        {
            private B _b;
            private RepeaterOfBase _base;

            public Proxy(B b, RepeaterOfBase aBase)
            {
                _b = b;
                _base = aBase;
            }

            ...
            public Base GetStateDependentImplementation()
            {
                if (State == Running)
                    return _b;
                else
                    return _base;
            }
            ...
        }

    But an instance of the Base class is very large, and I have to avoid keeping an additional copy of it in memory. So I have to:

        - simplify my code
        - "wrap" the implementation
        - avoid code duplication
        - avoid aggregating any additional instance of the Base class (duplication)

    Is it possible to reach these goals?

  • Using Mercurial in a Large Organization

    - by Kristopher Johnson
    I've been using Mercurial for my own personal projects for a while, and I love it. My employer is considering a switch from CVS to SVN, but I'm wondering whether I should push for Mercurial (or some other DVCS) instead. One wrinkle with Mercurial is that it seems to be designed around the idea of having a single repository per "project". In this organization, there are dozens of different executables, DLLs, and other components in the current CVS repository, hierarchically organized. There are a lot of generic reusable components, but also some customer-specific components, and customer-specific configurations. The current build procedures generally get some set of subtrees out of the CVS repository. If we move from CVS to Mercurial, what is the best way to organize the repository/repositories? Should we have one huge Mercurial repository containing everything? If not, how fine-grained should the smaller repositories be? I think people will find it very annoying if they have to pull and push updates from a lot of different places, but they will also find it annoying if they have to pull/push the entire company codebase. Anybody have experience with this, or advice?

  • How do I put my return data from an asmx into JSON?

    - by jphenow
    I want to return an array of JavaScript objects from my ASP.NET .asmx file, i.e.:

        variable = [
            {
                value1: 'value1',
                value2: 'value2',
                ...
            },
            {
                ...
            }
        ];

    I seem to have been having trouble reaching this. I'd put my code here, but I've been hacking away at it so much that it would probably do more harm than good in getting this answered. Basically, I am using a web service to find names as people type. I'd use a regular text file or something, but it's a huge database that's always changing - and don't worry, I've indexed the names so searching can be a little snappier - but I would really prefer to stick with this method and just figure out how to get usable JSON back to JavaScript. I've seen a few articles that sort of attempt to describe how one would approach this, but I honestly think Microsoft's articles are damn near unreadable. Thanks in advance for assistance.

    EDIT: I'm using the $.ajax() function from jQuery. I've had it working, but it seems like I was doing it in bad practice, not returning and using actual JSON. Previously I'd take a string back and insert it into HTML to use the variable it set - very roundabout.
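
    A hedged sketch of the usual ASMX-to-JSON wiring in .NET 3.5 (the service, method and type names here are made up): decorating the service as a ScriptService makes ASP.NET serialize the return value as JSON when the request asks for it.

        [System.Web.Script.Services.ScriptService]
        public class NameService : System.Web.Services.WebService
        {
            [System.Web.Services.WebMethod]
            public List<NameMatch> FindNames(string prefix)
            {
                // return typed objects; the framework emits the JSON
                return NameRepository.Search(prefix);
            }
        }

    On the jQuery side, the content type is what triggers the JSON response, and ASP.NET wraps the payload in a ".d" property:

        $.ajax({
            type: "POST",
            url: "NameService.asmx/FindNames",
            data: "{prefix:'" + term + "'}",   // term = the user's typed text
            contentType: "application/json; charset=utf-8",
            dataType: "json",
            success: function (msg) {
                var matches = msg.d;           // the array of objects
            }
        });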

  • g-wan - reproducing the performance claims

    - by user2603628
    Using gwan_linux64-bit.tar.bz2 under Ubuntu 12.04 LTS: unpacking and running gwan, then pointing wrk at it (using a null file null.html):

        wrk --timeout 10 -t 2 -c 100 -d20s http://127.0.0.1:8080/null.html
        Running 20s test @ http://127.0.0.1:8080/null.html
          2 threads and 100 connections
          Thread Stats   Avg      Stdev     Max   +/- Stdev
            Latency    11.65s     5.10s   13.89s   83.91%
            Req/Sec     3.33k     3.65k   12.33k   75.19%
          125067 requests in 20.01s, 32.08MB read
          Socket errors: connect 0, read 37, write 0, timeout 49
        Requests/sec:   6251.46
        Transfer/sec:      1.60MB

    Very poor performance; in fact there seems to be some kind of huge latency issue. During the test gwan is 200% busy and wrk is 67% busy. Pointing at nginx, wrk is 200% busy and nginx is 45% busy:

        wrk --timeout 10 -t 2 -c 100 -d20s http://127.0.0.1/null.html
          Thread Stats   Avg      Stdev     Max   +/- Stdev
            Latency   371.81us  134.05us  24.04ms   91.26%
            Req/Sec    72.75k     7.38k  109.22k   68.21%
          2740883 requests in 20.00s, 540.95MB read
        Requests/sec: 137046.70
        Transfer/sec:     27.05MB

    Pointing weighttp at nginx gives even faster results:

        /usr/local/bin/weighttp -k -n 2000000 -c 500 -t 3 http://127.0.0.1/null.html
        weighttp - a lightweight and simple webserver benchmarking tool
        starting benchmark...
        spawning thread #1: 167 concurrent requests, 666667 total requests
        spawning thread #2: 167 concurrent requests, 666667 total requests
        spawning thread #3: 166 concurrent requests, 666666 total requests
        progress: 9% done
        progress: 19% done
        progress: 29% done
        progress: 39% done
        progress: 49% done
        progress: 59% done
        progress: 69% done
        progress: 79% done
        progress: 89% done
        progress: 99% done
        finished in 7 sec, 13 millisec and 293 microsec, 285172 req/s, 57633 kbyte/s
        requests: 2000000 total, 2000000 started, 2000000 done, 2000000 succeeded, 0 failed, 0 errored
        status codes: 2000000 2xx, 0 3xx, 0 4xx, 0 5xx
        traffic: 413901205 bytes total, 413901205 bytes http, 0 bytes data

    The server is a virtual 8-core dedicated server (bare metal) under KVM. Where do I start looking to identify the problem gwan is having on this platform? I have tested lighttpd, nginx and node.js on the same OS, and the results are all as one would expect. The server has been tuned in the usual way: expanded ephemeral ports, increased ulimits, adjusted TIME_WAIT recycling, etc.

  • PHP Check slave status without mysql_connect timeout issues

    - by Jonathon
    I have a web app that has a master MySQL db and four slave dbs. I want to handle all (or almost all) read-only (SELECT) queries from the slaves. Our load balancer sends the user to one of the slave machines automatically, since they are also running Apache/PHP and serving web pages. I am using an include file to set up the connections to the databases, such as:

        // master server (i.e. UPDATE/INSERT/DELETE statements)
        $Host = "10.0.0.x";
        $User = "xx";
        $Password = "xx";
        $Link = mysql_connect( $Host, $User, $Password );
        if( !$Link ) {
            die( "Master database is currently unavailable. Please try again later." );
        }

        // this connection can be used for READ-ONLY (i.e. SELECT statements) on the localhost
        $Host_Local = "localhost";
        $User_Local = "xx";
        $Password_Local = "xx";
        $Link_Local = mysql_connect( $Host_Local, $User_Local, $Password_Local );

        // fall back to master if the slave db is down
        if( !$Link_Local ) {
            $Link_Local = mysql_connect( $Host, $User, $Password );
        }

    I then use $Link for all update queries and $Link_Local as the connection for SELECT statements. Everything works fine until a slave database goes down. If the local db is down, the $Link_Local = mysql_connect() call takes at least 30 seconds before it gives up trying to connect to localhost and returns to the script. This causes a huge backlog of page serves and basically shuts down the system (due to the extremely slow response time). Does anyone know of a better way to handle connections to slave servers via PHP? Or is there some kind of timeout function that could be used to stop the mysql_connect call after 2-3 seconds? Thanks for the help. I searched the other mysql_connect threads, but didn't see any that addressed this issue.
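
    A hedged sketch of the timeout knob that appears to apply here: mysql_connect honors the mysql.connect_timeout ini setting, so lowering it just for this call should make the failover fast (worth verifying against the PHP version in use):

        // give the slave ~2 seconds instead of the default before failing over
        ini_set('mysql.connect_timeout', 2);
        $Link_Local = @mysql_connect($Host_Local, $User_Local, $Password_Local);
        ini_set('mysql.connect_timeout', 60);   // restore the default afterwards

        if (!$Link_Local) {
            $Link_Local = mysql_connect($Host, $User, $Password);   // slave down: use master
        }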

  • When is Googling it wrong?

    - by Drahcir
    I've been going through Stack Overflow for quite a bit now and have noticed that certain people (usually experienced programmers) frown upon Googling (researching) certain problems. Since I myself tend to use Google quite a bit to solve certain programming-related issues, I found certain comments rather demoralising. Now, some of you may have come here trigger-happy to delete this post, but I need some clarification. I usually Google things that are syntax-related that I would never have figured out on my own. For example, I once wondered how to access the properties of a class that I didn't have a direct relationship to. So after a bit of research I discovered reflection and got what I wanted. Another scenario is learning a new language, in my case Silverlight, which differs in certain aspects of .NET compared to, say, ASP.NET. A few weeks ago I had no idea how to load another Silverlight page (usercontrol) and had to Google my way to the solution, which I found wasn't as simple as I had imagined. Scenario three is where I myself frown, that is, just stealing a huge chunk of code to avoid doing the work yourself; for example, paging an HTML table using JavaScript, where one just copies and pastes the JavaScript code without so much as trying to understand how it works. I do admit I have done this once or twice for trivial tasks that had very little time and weren't all that important, but most of the time I still have to throw away what I found because it took too much time to adapt it and get what I wanted out of it. In the last scenario, I sometimes have a piece of code that I'm really unhappy about, as in I find it sloppy or too overcomplicated, and I look on the Internet to see other ways to tackle the same problem, let's say filtering through a table. With the knowledge I acquire, I learn new coding practices that help me work more efficiently, like "do not repeat yourself" and such. Now, in your opinion, when is it wrong to use Google (or any other research tool) to find a solution to your problem?

  • How can I debug a session?

    - by Organ Grinding Monkey
    I have been asked to work on a very large web application and deploy it. The problem I'm facing is that when I deploy the application and more than one user logs into the system, the sessions seem to cross over, i.e.: person A logs in and works on the site, all good. When person B logs in, person A will then be logged in as person B as well. If anyone has experienced this behaviour before and can steer me in the right direction, that would be first prize. Second prize would be to show me how I can debug this situation so that I can find out where the problem is and fix it. Some information about the application: from what I've been told and what I've seen within the app, it started as a .NET 1.1 application and got upgraded to .NET 2, and that's why the login system was done the way it is. (The application is huge and now complete, which is why I can't rewrite the whole user authentication process; it would just take too long and I don't know what effect it might have.) All the logged-in user information is stored in properties that have been added in the Global.asax.vb file. (Could this be the problem?) Any help here would be greatly appreciated.
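
    If those Global.asax properties are backed by Shared (static) members, that alone would explain the crossover, since Shared state exists once per application rather than once per user. A minimal sketch of the distinction (names are illustrative):

        ' one value shared by EVERY request on the server - the last login wins
        Public Shared CurrentUserName As String

        ' per-user state belongs in the session instead - isolated per visitor
        Session("CurrentUserName") = userName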

  • Self-organizing layouts

    - by user613326
    Quite a while ago I was more into building websites than I am now. In my time there were huge debates about what to use: tables or pure CSS alternatives. I went out of web designing, but now an old question resurfaces. What I would like to create is a web page design that, depending on screen size, would organize itself into columns, so that for example on a PDA it would show 1 column, on an old computer monitor it would show 2 columns, and on a widescreen laptop it would show 3 columns. I forgot what this was called and how it was done in the past; it had to do with XML and storing data separately from design (if I remember well). Perhaps these days better methods exist to do that. Does this ring a bell for anyone? Also, I note a lot is possible with jQuery and browser-dependent webkits, but I need to make sure that it would run on all (modern) browsers: Internet Explorer, Firefox, Chrome. And jQuery is nice too, but I am kind of worried that some day one of these browser vendors decides that JavaScript, like Java, isn't enabled by default (or is that very unlikely?). Perhaps someone can point me to the preferred method of doing this.
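
    For what it's worth, a hedged sketch of how this is usually handled today: CSS media queries, which need no scripting at all (breakpoint widths are illustrative, and current versions of the browsers named support them):

        /* one column by default (PDA / narrow screens) */
        .column { float: left; width: 100%; }

        /* two columns on mid-sized screens */
        @media (min-width: 600px) {
            .column { width: 50%; }
        }

        /* three columns on wide screens */
        @media (min-width: 1200px) {
            .column { width: 33.33%; }
        }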

  • Checking when two headers are included at the same time.

    - by fortran
    Hi, I need to write an assertion based on two related preprocessor #defines declared in different header files... The codebase is huge, and it would be nice if I could find a place to put the assertion where the two headers are already included, to avoid polluting namespaces unnecessarily. Checking just that a file includes both explicitly might not suffice, as one (or both) of them might be included at an upper level of the nesting include hierarchy. I know it wouldn't be too hard to write a script to check that, but if there's already a tool that does the job, all the better. Example:

        /* foo.h */
        #define FOO 0xf

        /* bar.h */
        #define BAR 0x1e

    I need to put something like this somewhere (it doesn't matter much where):

        #if (2*FOO) != BAR
        #error "foo is not twice bar"
        #endif

    Yes, I know the example is silly, as the defines could be rewritten so one is derived from the other, but let's say that the includes can be generated from different places not under my control and I just need to check that they match at compile time... And I don't want to just add one include after the other, as it might conflict with previous code that I haven't written; that's why I would like to find a file where both are already present. In brief: how can I find a file that includes (directly or indirectly) two other files? Thanks!

  • Android CursorAdapters, ListViews and background threads

    - by MattC
    This application I've been working on has databases with multiple megabytes of data to sift through. A lot of the activities are just ListViews descending through various levels of data within the databases until we reach "documents", which are just HTML to be pulled from the DB(s) and displayed on the phone. The issue I am having is that some of these activities need the ability to search through the databases by capturing keystrokes and re-running the query with a "like %blah%" in it. This works reasonably quickly except when the user first loads the data and when the user first enters a keystroke. I am using a ResourceCursorAdapter and I am generating the cursor in a background thread, but in order to call listAdapter.changeCursor(), I have to use a Handler to post it to the main UI thread. This particular call is then freezing the UI thread just long enough to bring up the dreaded ANR dialog. I'm curious how I can offload this to a background thread completely so the user interface remains responsive and we don't have ANR dialogs popping up. Just for full disclosure, I was originally returning an ArrayList of custom model objects and using an ArrayAdapter, but (understandably) the customer pointed out it was bad memory management, and I wasn't happy with the performance anyway. I'd really like to avoid a solution where I'm generating huge lists of objects and then calling listAdapter.notifyDataSetChanged()/notifyDataSetInvalidated(). Here is the code in question:

        private Runnable filterDrugListRunnable = new Runnable() {
            public void run() {
                if (filterLock.tryLock() == false)
                    return;

                cur = ActivityUtils.getIndexItemCursor(DrugListActivity.this);
                if (cur == null || forceRefresh == true) {
                    cur = docDb.getItemCursor(selectedIndex.getIndexId(), filter);
                    ActivityUtils.setIndexItemCursor(DrugListActivity.this, cur);
                    forceRefresh = false;
                }

                updateHandler.post(new Runnable() {
                    public void run() {
                        listAdapter.changeCursor(cur);
                    }
                });

                filterLock.unlock();
                updateHandler.post(hideProgressRunnable);
                updateHandler.post(updateListRunnable);
            }
        };
