Search Results

Search found 9128 results on 366 pages for 'big theta'.

Page 302/366 | < Previous Page | 298 299 300 301 302 303 304 305 306 307 308 309  | Next Page >

  • What should a teen dev do for practical experience in development?

    - by aviraldg
    What should a teen dev do for practical experience? If you want more details, read on: I learnt programming when I was 9, with GWBASIC (which I now hate), which was what was taught at school. That was done in a month. After that I learnt C++, and later relearnt it (as I didn't know of templates and the STL before that). Recently I learnt PHP, SQL and Python. This was around the time I switched over to Ubuntu. I'd always loved the "GNUish" style of software development, so I jumped right in. However, most of the projects that I found required extensive knowledge of their existing codebase. So, right now I'm this guy who knows a couple of languages and has written a couple of small programs... but hasn't gone "big", if you get it. I would love suggestions for projects that are informal and small to medium sized, and do not require much knowledge of the codebase. Also note that I've looked at things like Google Summer of Code and sites like savannah.gnu.org; the first doesn't apply, since I'm still in school, and the latter has either infeasible projects or things that are too hard.

    Read the article

  • How to separate sets of numbers onto separate lines

    - by Fred
    About the script: the script below is meant to create 300 sets of random characters. What presently happens is that it creates them but shows them all on one line, in one big chunk. With all the searching and testing I've done to try and achieve this, I have had no success. I would like to know what code to add, and where to put it, so that each of the 300 sets, 15 characters long, is shown and saved to the file on its own line. Here is my script:

        <?php
        function GetID($x) {
            $characters = array_merge(range('A','Z'), range('a','z'), range(2,9));
            shuffle($characters);
            for ($x = 0; $x <= 299; $x++) {
            }
            for (; strlen($ReqID) < $x;) {
                $ReqID .= $characters[mt_rand(0, count($characters))];
            }
            return $ReqID;
        }
        $ReqID .= GetID(5);
        $ReqID .= "-";
        $ReqID .= GetID(5);
        $ReqID .= "-";
        $ReqID .= GetID(5);
        echo $ReqID;
        $fh = fopen("file.txt", "a+");
        fwrite($fh, ("$ReqID") . "\n");
        fclose($fh);
        ?>
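
    A minimal sketch of the intended loop structure, written in Python purely as a cross-language illustration (the question itself is PHP): build one ID per iteration as three 5-character groups joined by dashes, and append it to the file with its own newline. The character pool and file name mirror the question; everything else is illustrative.

        import random
        import string

        characters = string.ascii_uppercase + string.ascii_lowercase + "23456789"

        def make_group(length=5):
            return ''.join(random.choice(characters) for _ in range(length))

        with open('file.txt', 'a') as fh:
            for _ in range(300):
                req_id = '-'.join(make_group() for _ in range(3))   # e.g. aB3xK-9QzLm-Pr4tW
                fh.write(req_id + '\n')                             # one set per line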

    Read the article

  • Python: problem with tiny script to delete files

    - by Rosarch
    I have a project that used to be under SVN, but now I'm moving it to Mercurial, so I want to clear out all the .svn files. It's a little too big to do by hand, so I decided to write a Python script to do it. But it isn't working.

        def cleandir(dir_path):
            print "Cleaning %s\n" % dir_path
            toDelete = []
            files = os.listdir(dir_path)
            for filedir in files:
                print "considering %s" % filedir
                # continue
                if filedir == '.' or filedir == '..':
                    print "skipping %s" % filedir
                    continue
                path = dir_path + os.sep + filedir
                if os.path.isdir(path):
                    cleandir(path)
                else:
                    print "not dir: %s" % path
                if 'svn' in filedir:
                    toDelete.append(path)
            print "Files to be deleted:"
            for candidate in toDelete:
                print candidate
            print "Delete all? [y|n]"
            choice = raw_input()
            if choice == 'y':
                for filedir in toDelete:
                    if os.path.isdir(filedir):
                        os.rmdir(filedir)
                    else:
                        os.unlink(filedir)
            exit()

        if __name__ == "__main__":
            cleandir(dir)

    The print statements show that it's only "considering" the filedirs whose names start with ".". However, if I uncomment the continue statement, all the filedirs are "considered". Why is this? Or is there some other utility that already exists to recursively de-SVN-ify a directory tree?
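
    A hedged sketch of one common alternative, not the asker's code: walk the tree top-down, remove each .svn directory with shutil.rmtree (note that os.rmdir only removes empty directories, which the script above would also run into), and prune it from dirnames so the walk does not descend into it. The root path is a placeholder.

        import os
        import shutil

        def remove_svn_dirs(root):
            for dirpath, dirnames, filenames in os.walk(root):
                if '.svn' in dirnames:
                    target = os.path.join(dirpath, '.svn')
                    print("removing %s" % target)
                    shutil.rmtree(target)
                    dirnames.remove('.svn')   # do not descend into the deleted directory

        remove_svn_dirs('/path/to/project')   # placeholder path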

    Read the article

  • How to pass data to a C++0x lambda function that will run in a different thread?

    - by Dimitri C.
    In our company we've written a library function to call a function asynchronously in a separate thread. It works using a combination of inheritance and template magic. The client code looks as follows:

        DemoThread thread;
        std::string stringToPassByValue = "The string to pass by value";
        AsyncCall(thread, &DemoThread::SomeFunction, stringToPassByValue);

    Since the introduction of lambda functions I'd like to use it in combination with them. I'd like to write the following client code:

        DemoThread thread;
        std::string stringToPassByValue = "The string to pass by value";
        AsyncCall(thread, [=]() {
            const std::string someCopy = stringToPassByValue;
        });

    Now, with Visual C++ 2010 this code doesn't work. What happens is that stringToPassByValue is not copied. Instead, the "capture by value" feature passes the data by reference. The result is that if the function is executed after stringToPassByValue has gone out of scope, the application crashes, as its destructor has already been called. So I wonder: is it possible to pass data to a lambda function as a copy? Note: one possible solution would be to modify our framework to pass the data in the lambda parameter declaration list, as follows:

        DemoThread thread;
        std::string stringToPassByValue = "The string to pass by value";
        AsyncCall(thread, [=](const std::string stringPassedByValue) {
            const std::string someCopy = stringPassedByValue;
        }, stringToPassByValue);

    However, this solution is so verbose that our original function pointer solution is both shorter and easier to read. Update: the full implementation of AsyncCall is too big to post here. In short, the AsyncCall template function instantiates a template class holding the lambda function. This class is derived from a base class that contains a virtual Execute() function, and upon an AsyncCall() call, the function call object is put on a call queue. A different thread then executes the queued calls by calling the virtual Execute() function, which is polymorphically dispatched to the template class, which then executes the lambda function.

    Read the article

  • Image creation performance / image caching

    - by Kilnr
    Hello, I'm writing an application that has a scrollable image (used to display a map). The map background consists of several tiles (premade from a big JPG file) that I draw on a Graphics object. I also use a cache (a Hashtable) to avoid having to create every image when I need it. I don't keep everything in memory, because that would be too much. The problem is that when I'm scrolling through the map and I need an image that wasn't cached, it takes about 60-80 ms to create it. Depending on screen resolution, tile size and scroll direction, this can occur multiple times in one scroll operation (for different tiles). In my case, it often happens that this needs to be done 4 times, which introduces a delay of more than 300 ms, which is extremely noticeable. The easiest fix would be some way to speed up the creation of the images, but I guess that's just wishful thinking... Besides that, I suppose the most obvious thing to do is to load the tiles predictively (e.g. when scrolling to the right, precache the tiles to the right), but then I'm faced with the rather difficult task of thinking up a halfway decent algorithm for this. My actual question then is: how can I best do this predictive loading? Maybe I could offload the creation of images to a separate thread? Other things to consider? Thanks in advance.
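
    A rough sketch of the predictive idea, in Python for brevity (the question describes a Java-style Graphics/Hashtable setup): after each scroll, queue the tiles just beyond the edge being scrolled towards and let a background worker build them into the cache, so the drawing code only ever does a cache lookup. All names are illustrative, and a real cache would need locking and eviction.

        import threading
        import queue

        tile_cache = {}                   # (col, row) -> tile image; needs locking/eviction in practice
        prefetch_queue = queue.Queue()

        def create_tile(col, row):
            return "tile-%d-%d" % (col, row)   # stands in for the expensive 60-80 ms tile build

        def prefetch_worker():
            while True:
                key = prefetch_queue.get()
                if key not in tile_cache:
                    tile_cache[key] = create_tile(*key)
                prefetch_queue.task_done()

        threading.Thread(target=prefetch_worker, daemon=True).start()

        def on_scroll(visible_cols, visible_rows, dx, dy):
            # Queue the column (or row) of tiles just beyond the edge being scrolled towards.
            if dx:
                edge_col = max(visible_cols) + 1 if dx > 0 else min(visible_cols) - 1
                for row in visible_rows:
                    prefetch_queue.put((edge_col, row))
            if dy:
                edge_row = max(visible_rows) + 1 if dy > 0 else min(visible_rows) - 1
                for col in visible_cols:
                    prefetch_queue.put((col, edge_row))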

    Read the article

  • mysql_affected_rows() returns 0 for UPDATE statement even when an update actually happens

    - by Alex Moore
    I am trying to get the number of rows affected by a simple MySQL update query. However, when I run the code below, PHP's mysql_affected_rows() always equals 0, no matter whether foo already equals 1 (in which case the function should correctly return 0, since no rows were changed) or foo currently equals some other integer (in which case the function should return 1).

        $updateQuery = "UPDATE myTable SET foo=1 WHERE bar=2";
        mysql_query($updateQuery);
        if (mysql_affected_rows() > 0) {
            echo "affected!";
        } else {
            echo "not affected"; // always prints not affected
        }

    The UPDATE statement itself works. The INT gets changed in my database. I have also double-checked that the database connection isn't being closed beforehand or anything funky. Keep in mind, mysql_affected_rows() doesn't necessarily require you to pass a connection link identifier, though I've tried that too. Details on the function: mysql_affected_rows. Any ideas? SOLUTION: The part I didn't mention turned out to be the cause of my woes here. This PHP file was being called ten times consecutively in an AJAX call, though I was only looking at the value returned by the last call, i.e. a big fat 0. My apologies!

    Read the article

  • Installing SVN plugin for Eclipse on Ubuntu

    - by Zac
    I am a brand new Linux user configuring my first-ever dev sandbox in Ubuntu. I have installed Java and Eclipse and am trying to get either Subversive or Subclipse (I don't have a preference either way), but have a few questions before I start that process. I just opened Synaptic and downloaded subversion through it. (1) I'm not really sure how SVN deploys locally. My understanding is that SVN has a client and a server; the server manages the repository(ies) and the client just sends commands to the server. Is this correct? If so, then what did I download through Synaptic? The client, and/or the server? (2) Do these Eclipse plugins come with SVN (client or server...?), or do you have to pre-install SVN prior to installing these plugins? Basically: is SVN a prerequisite for Subclipse or Subversive? Looking back at these 2 questions, if someone could first explain to me the architecture of SVN, then explain how that architecture translates to downloading SVN via Synaptic, and then how it translates to downloading/installing either Eclipse plugin, I would see the "big picture" a lot better. Thanks for any and all help!

    Read the article

  • optimize output value using a class and public member

    - by wiso
    Suppose you have a function, you call it a lot of times, and every call returns a big object. I've optimized the problem using a functor that returns void and stores the return value in a public member:

        #include <vector>
        const int N = 100;

        std::vector<double> fun(const std::vector<double> & v, const int n) {
            std::vector<double> output = v;
            output[n] *= output[n];
            return output;
        }

        class F {
        public:
            F() : output(N) {};
            std::vector<double> output;
            void operator()(const std::vector<double> & v, const int n) {
                output = v;
                output[n] *= n;
            }
        };

        int main() {
            std::vector<double> start(N, 10.);
            std::vector<double> end(N);
            double a;
            // first solution
            for (unsigned long int i = 0; i != 10000000; ++i)
                a = fun(start, 2)[3];
            // second solution
            F f;
            for (unsigned long int i = 0; i != 10000000; ++i) {
                f(start, 2);
                a = f.output[3];
            }
        }

    Yes, I could use inline or optimize this in another way, but here I want to stress this point: with the functor I declare and construct the output variable only once, while with the function I do that every time it is called. The second solution is two times faster than the first with g++ -O1 or g++ -O2. What do you think about it; is it an ugly optimization?
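
    The same trade-off sketched in Python with NumPy, purely as a cross-language illustration (the question itself is about C++): allocating a fresh output buffer on every call versus writing into a buffer that is allocated once and reused. All names are illustrative.

        import numpy as np

        N = 100

        def fun(v, n):
            out = v.copy()                  # allocates a new array on every call
            out[n] *= out[n]
            return out

        class F:
            def __init__(self):
                self.output = np.empty(N)   # allocated once

            def __call__(self, v, n):
                np.copyto(self.output, v)   # reuse the existing buffer
                self.output[n] *= n

        start = np.full(N, 10.0)
        a = fun(start, 2)[3]                # first solution
        f = F()
        f(start, 2)                         # second solution
        a = f.output[3]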

    Read the article

  • Python vs all the major professional languages [closed]

    - by Matt
    I've been reading up a lot lately on comparisons between Python and a bunch of the more traditional professional languages - C, C++, Java, etc. - mainly trying to find out if it's as good as those would be for my own purposes. I can't get this thought out of my head that it isn't good for 'real' programming tasks beyond automation and macros. Anyway, the general idea I got from about two hundred forum threads and blog posts is that for general, non-professional-level programs, scripts, and apps, and as long as it's a single programmer (you) writing it, a given program can be written quicker and more efficiently with Python than it could be with pretty much any other language. But once it's big enough to require multiple programmers, or more complex than a regular person (read: non-professional) would have any business making, it pretty much becomes instantly inferior to a million other languages. Is this idea more or less accurate? (I'm learning Python as my first language and want to be able to make any small app that I want, but I plan on learning C eventually too, because I want to get into driver writing eventually. So I've been trying to research each one's strengths and weaknesses as much as I can.) Anyway, thanks for any input.

    Read the article

  • Scalability 101: How can I design a scalable web application using PHP?

    - by Legend
    I am building a web application and have a couple of quick questions. From what I've learnt, one should not worry about scalability when initially building the app and should only start worrying when the traffic increases. However, this being my first web application, I am not quite sure if I should take an approach where I design things in an ad-hoc manner and later "fix" them. I have been reading stories about how people start off with an app that gets millions of users in a week or two. Not that I will face the same situation, but I can't help but wonder: how do these people do it? Currently, I bought a shared hosting account on Lunarpages and that got me started in building and testing the application. However, I am interested in learning how to build the same application in a scalable manner using the cloud, for instance Amazon's EC2. From my understanding, I can see a couple of components:
    - There is a load balancer that first receives requests and then decides where to route each request.
    - The request is then handled by a server replica that processes it, updates (if required) the database, and sends back the response to the client.
    - If a similar request comes in, a caching mechanism like memcached kicks into the picture and returns objects from the cache (see the sketch after this question).
    - A black box that handles database replication.
    Specifically, I am trying to do the following:
    - Setting up a load balancer (my homework revealed that HAProxy is one such load balancer)
    - Setting up replication so that databases can be synchronized
    - Using memcached
    - Configuring Apache to work with multiple web servers
    - Partitioning the application to use Amazon EC2 and Amazon S3 (my application is something that will need a great deal of storage)
    Finally, how can I avoid burning myself when using Amazon services? Because this is just a learning phase, I can probably do with 2-3 servers, a simple load balancer and replication, but I want to avoid accidentally paying loads of money. I am able to find resources on individual topics but am unable to find something that starts off from the big picture. Can someone please help me get started?
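
    As one small concrete piece of that picture, here is a minimal cache-aside sketch in Python (the question's stack is PHP; a plain dict stands in for memcached so no client library is assumed): check the cache first, fall back to the database on a miss, and repopulate the cache with a TTL-style expiry. All names and the TTL are illustrative.

        import time

        cache = {}               # stand-in for memcached: key -> (expires_at, value)
        TTL_SECONDS = 60         # illustrative expiry

        def query_database(key):
            return "row-for-%s" % key          # placeholder for the real DB call

        def get(key):
            entry = cache.get(key)
            if entry and entry[0] > time.time():
                return entry[1]                # cache hit
            value = query_database(key)        # cache miss: hit the database
            cache[key] = (time.time() + TTL_SECONDS, value)
            return value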

    Read the article

  • PHP loop hanging/interspersed/threaded through HTML

    - by sandyv
    I can't figure out how to say what I'm talking about, which is a big part of why I'm stuck. In PHP I often see code like this:

        html
        <?php language construct that uses brackets {
            some code; ?>
        more html
        <?php some more code; } ?>
        rest of html

    Is there any name for this? Having seen this led me to try it out, so here is a concrete example whose behavior doesn't make sense to me:

        <div id="content">
        <ul id="nav">
        <?php
        $path = 'content';
        $dir = dir($path);
        while(false !== ($file = $dir->read())) {
            if(preg_match('/.+\.txt/i', $file)) {
                echo "<li>$file</li>";
        ?>
        </ul>
        <?php
                echo file_get_contents($path . '/' . $file);
            }
        }
        ?>
        </div>

    which outputs roughly

        <div><ul><li></li></ul><li></li>...</div>

    instead of

        <div><ul><li></li>...</ul></div>

    which is what I thought would happen and what I want to happen.

    Read the article

  • How to check a file saving is complete using Python?

    - by indrajithk
    I am trying to automate a downloading process, and in this I want to know whether a particular file's save has completed or not. The scenario is like this: open a site address using either Chrome or Firefox (any browser), then save the page to disk using 'Ctrl + S' (I work on Windows). Now if the page is very big, it takes a few seconds to save. I want to parse the HTML once the save is complete. Since I don't have control over the browser's save functionality, I don't know whether the save has completed or not. One idea I thought of is to get the md5sum of the file in a while loop, check it against the previously calculated one, and continue the loop until the previous and current md5 sums match. This doesn't work, I guess, as it seems the browser first attempts to save the file to a tmp file and then copies the content to the specified file (or just renames the file). Any ideas? I use Python for the automation, hence any idea which can be implemented using Python is welcome. Thanks, Indrajith
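
    One hedged way to approximate "save is complete" without hooking into the browser: poll the file size until it stops changing for a quiet period (and, since browsers often write to a temporary file first, optionally also wait until no sibling *.part or *.crdownload file remains). A minimal sketch, with illustrative thresholds:

        import os
        import time

        def wait_for_save(path, quiet_seconds=3, poll_interval=0.5, timeout=120):
            """Block until `path` exists and its size has been stable for `quiet_seconds`."""
            deadline = time.time() + timeout
            last_size = -1
            stable_since = None
            while time.time() < deadline:
                if os.path.exists(path):
                    size = os.path.getsize(path)
                    if size == last_size and size > 0:
                        if stable_since is None:
                            stable_since = time.time()
                        elif time.time() - stable_since >= quiet_seconds:
                            return True
                    else:
                        last_size = size
                        stable_since = None
                time.sleep(poll_interval)
            return False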

    Read the article

  • Javascript document.open asynchronous?

    - by Alex Schneider
    So on my site there is a JavaScript function that will load a new site from the server via XMLHttpRequest. After that it replaces the current page with the new one:

        var post = new XMLHttpRequest();
        post.open('POST', data);
        post.onload = function() {
            var do = document.open("text/html", "replace");
            do.write(post.responseText);
            do.close();
            goOn();
        }

        function goOn() {
            console.log($('img:visible'));
        }

    Some could assume that after do.close() the document has changed and is ready. But it is not; e.g. if I load a very big data/responseText, the function goOn() only logs an empty result. Obviously goOn() gets called in that case before the DOM is ready to be read! Unfortunately there is no "ready" event fired after write() finishes.... How can I be sure it is finished? EDIT: goOn() logs this to the Chrome console:

        [prevObject: p.fn.p.init[1], context: #document, selector: "img:visible"]
            context: #document
            length: 0
            prevObject: p.fn.p.init[1]
            selector: "img:visible"
            __proto__: Object[0]

    But if right after that I type $('img:visible') into the console manually, it shows me all images....

    Read the article

  • How expensive is a context switch? Is it better to implement a manual task switch than to rely on OS

    - by Vilx-
    The title says it all. Imagine I have two (three, four, whatever) tasks that have to run in parallel. Now, the easy way to do this would be to create separate threads and forget about it. But on a plain old single-core CPU that would mean a lot of context switching - and we all know that context switching is big, bad, slow, and generally simply Evil. It should be avoided, right? On that note, if I'm writing the software from the ground up anyway, I could go the extra mile and implement my own task switching. Split each task into parts, save the state in between, and then switch among them within a single thread. Or, if I detect that there are multiple CPU cores, I could just give each task to a separate thread and all would be well. The second solution does have the advantage of adapting to the number of available CPU cores, but will the manual task switch really be faster than the one in the OS core? Especially if I'm trying to make the whole thing generic with a TaskManager and an ITask, etc.?
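
    For what a hand-rolled switch can look like, here is a minimal cooperative round-robin sketch in Python: each task yields at its own switch points, so the saved "context" is just the paused generator frame and no OS thread switch is involved. It only illustrates the idea; it makes no claim about the relative cost.

        def task(name, steps):
            # Each yield is a voluntary switch point; the task's state lives in this generator frame.
            for i in range(steps):
                print("%s: step %d" % (name, i))
                yield

        def run_round_robin(tasks):
            # Single-threaded scheduler: resuming a generator replaces the OS context switch.
            while tasks:
                current = tasks.pop(0)
                try:
                    next(current)
                    tasks.append(current)
                except StopIteration:
                    pass

        run_round_robin([task("A", 3), task("B", 3)])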

    Read the article

  • When is it safe to use a broken hash function?

    - by The Rook
    It is trivial to use a secure hash function like SHA-256, and continuing to use MD5 is reckless behavior. However, there are some complexities to hash function vulnerabilities that I would like to better understand. Collisions have been generated for MD4 and MD5. According to NIST, md5() is not a secure hash function. It only takes 2^39 operations to generate a collision, and it should never be used for passwords. However, SHA-1 is vulnerable to a similar collision attack in which a collision can be found in 2^69 operations, whereas brute force is 2^80. No one has generated a SHA-1 collision, and NIST still lists SHA-1 as a secure message digest function. So when is it safe to use a broken hash function? Even though a function is broken, it can still be "big enough". According to Schneier, a hash function vulnerable to a collision attack can still be used as an HMAC. I believe this is because the security of an HMAC is dependent on its secret key, and a collision cannot be found until this key is obtained. Once you have the key used in an HMAC, it's already broken, so it's a moot point. What hash function vulnerabilities would undermine the security of an HMAC? Let's take this property a bit further. Does it then become safe to use a very weak message digest like MD4 for passwords if a salt is prepended to the password? Keep in mind the MD4 and MD5 attacks are prefixing attacks, and if a salt is prepended then an attacker cannot control the prefix of the message. If the salt is truly a secret, and isn't known to the attacker, then does it matter if it's appended to the end of the password? Is it safe to assume that an attacker cannot generate a collision until the entire message has been obtained? Do you know of other cases where a broken hash function can be used in a security context without introducing a vulnerability? (Please post supporting evidence, because it is awesome!)
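
    To make the two constructions under discussion concrete, a short Python sketch using the standard hmac and hashlib modules: an HMAC whose security rests on a secret key, and a hash with a random salt prepended so an attacker does not control the message prefix. This only illustrates the constructions; it is not an endorsement of any particular digest.

        import hashlib
        import hmac
        import os

        key = os.urandom(32)                                   # secret HMAC key
        tag = hmac.new(key, b"the message", hashlib.sha1).hexdigest()

        salt = os.urandom(16)                                  # random per-password salt
        stored = salt.hex() + ":" + hashlib.sha256(salt + b"password").hexdigest()
        print(tag, stored)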

    Read the article

  • Should I go to school and get my degree in computer science?

    - by ryan
    I'll try and keep this short and simple. I've always enjoyed programming and I've been doing it since high school. Right after I graduated from high school (2002), I opted to skip college because I was offered a software engineer position. I quit a couple of years later to team up on various startup companies. However, most of them did not launch as well as expected. But it honestly did not matter to me, because I've learned so much from that experience. So, fast-forwarding to today, now turned 25, I need a job due to this tough economic climate. Looking on Craigslist, a lot of the listings require computer science degrees. It's evident now that programming is what I want to do, because I never seem to get enough of it. But just the thought of having to push 2 years without attending any real computer class for an Associates at age 25 is very, very discouraging. And the thought of having to learn from the basics (Hello WOOOOORRLLLD) just does not seem exciting. I guess I have 3 questions to wrap this up: Should I just suck it up and go back to school while working at McDonald's at age 25? Is there a way I can just skip all the boring stuff and get tested on what I know? From your experience, how many jobs use computer science degrees as prerequisites? Or am I screwed and had better pray that my next startup will be the next big thing?

    Read the article

  • Combining two queries on same table

    - by user1830856
    I've looked through several previous questions but I am struggling to apply the solutions to my specific example. I am having trouble combining query 1 and query 2. My query originally returned (amongst other details) the values "SpentTotal" and "UnderSpent" for all members/users for the current month. My issue has been adding two additional columns to this original query that will return JUST these two columns (Spent and Overspent) but for the previous month's data. Original query #1:

        set @BPlanKey = '##CURRENTMONTH##'
        EXECUTE @RC = Minimum_UpdateForPeriod @BPlanKey
        SELECT cm.clubaccountnumber, bp.Description, msh.PeriodMinObligation, msh.SpentTotal,
               msh.UnderSpent, msh.OverSpent, msh.BilledDate, msh.PeriodStartDate,
               msh.PeriodEndDate, msh.OverSpent
        FROM MinimumSpendHistory msh
        INNER JOIN BillPlanMinimums bpm
            ON msh.BillingPeriodKey = @BPlanKey
            and bpm.BillPlanMinimumKey = msh.BillPlanMinimumKey
        INNER JOIN BillPlans bp ON bp.BillPlanKey = bpm.BillPlanKey
        INNER JOIN ClubMembers cm
            ON cm.parentmemberkey is null
            and cm.ClubMemberKey = msh.ClubMemberKey
        order by cm.clubaccountnumber asc, msh.BilledDate asc

    Query #2 returns all columns for the PREVIOUS month, but I only need two of them (Spent and OverSpent) added to the query above, joined on the customer number:

        set @BPlanKeyLastMo = '##PREVMONTH##'
        EXECUTE @RCLastMo = Minimum_UpdateForPeriod @BPlanKeyLastMo
        SELECT cm.clubaccountnumber, bp.Description, msh.PeriodMinObligation, msh.SpentTotal,
               msh.UnderSpent, msh.OverSpent, msh.BilledDate, msh.PeriodStartDate,
               msh.PeriodEndDate, msh.OverSpent
        FROM MinimumSpendHistory msh
        INNER JOIN BillPlanMinimums bpm
            ON msh.BillingPeriodKey = @BPlanKeyLastMo
            and bpm.BillPlanMinimumKey = msh.BillPlanMinimumKey
        INNER JOIN BillPlans bp ON bp.BillPlanKey = bpm.BillPlanKey
        INNER JOIN ClubMembers cm
            ON cm.parentmemberkey is null
            and cm.ClubMemberKey = msh.ClubMemberKey
        order by cm.clubaccountnumber asc, msh.BilledDate asc

    Big thank you to any and all who are willing to lend their help and time. Cheers! AJ

        CREATE TABLE MinimumSpendHistory(
            [MinimumSpendHistoryKey] [uniqueidentifier] NOT NULL,
            [BillPlanMinimumKey] [uniqueidentifier] NOT NULL,
            [ClubMemberKey] [uniqueidentifier] NOT NULL,
            [BillingPeriodKey] [uniqueidentifier] NOT NULL,
            [PeriodStartDate] [datetime] NOT NULL,
            [PeriodEndDate] [datetime] NOT NULL,
            [PeriodMinObligation] [money] NOT NULL,
            [SpentTotal] [money] NOT NULL,
            [CurrentSpent] [money] NOT NULL,
            [OverSpent] [money] NULL,
            [UnderSpent] [money] NULL,
            [BilledAmount] [money] NOT NULL,
            [BilledDate] [datetime] NOT NULL,
            [PriorPeriodMinimum] [money] NULL,
            [IsCommitted] [bit] NOT NULL,
            [IsCalculated] [bit] NOT NULL,
            [BillPeriodMinimumKey] [uniqueidentifier] NOT NULL,
            [CarryForwardCounter] [smallint] NULL,
            [YTDSpent] [money] NOT NULL,
            [PeriodToAccumulateCounter] [int] NULL,
            [StartDate] [datetime] NOT NULL,

    Read the article

  • Faster Insertion of Records into a Table with SQLAlchemy

    - by Kyle Brandt
    I am parsing a log and inserting it into either MySQL or SQLite using SQLAlchemy and Python. Right now I open a connection to the DB, and as I loop over each line, I insert it after it is parsed (this is just one big table right now; I'm not very experienced with SQL). I then close the connection when the loop is done. The summarized code is:

        log_table = schema.Table('log_table', metadata,
            schema.Column('id', types.Integer, primary_key=True),
            schema.Column('time', types.DateTime),
            schema.Column('ip', types.String(length=15))
        ....
        engine = create_engine(...)
        metadata.bind = engine
        connection = engine.connect()
        ....
        for line in file_to_parse:
            m = line_regex.match(line)
            if m:
                fields = m.groupdict()
                pythonified = pythoninfy_log(fields)  # Turn them into ints, datetimes, etc
                if use_sql:
                    ins = log_table.insert(values=pythonified)
                    connection.execute(ins)
                    parsed += 1

    My two questions are: Is there a way to speed up the inserts within this basic framework? Maybe a queue of inserts and some insertion threads, some sort of bulk inserts, etc.? When I used MySQL, the insert time for about ~1.2 million records was 15 minutes. With SQLite, the insert time was a little over an hour. Does that time difference between the DB engines seem about right, or does it mean I am doing something very wrong?
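
    A hedged sketch of the usual first fix, reusing the names from the snippet above: accumulate the parsed rows and hand a list of dictionaries to a single execute() call, which SQLAlchemy issues as one executemany instead of one round trip per row. The batch size is illustrative.

        batch = []
        BATCH_SIZE = 1000   # illustrative; tune for your data

        for line in file_to_parse:
            m = line_regex.match(line)
            if m:
                batch.append(pythoninfy_log(m.groupdict()))
            if len(batch) >= BATCH_SIZE:
                connection.execute(log_table.insert(), batch)   # one executemany round trip
                batch = []

        if batch:
            connection.execute(log_table.insert(), batch)       # flush the remainder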

    Read the article

  • jQuery hide all table rows which contain a hidden field matching a value

    - by Famous Nerd
    Though I don't doubt this has been answered, I cannot find a great match for my question. I have a table for which I'd like to filter rows based on whether or not they contain a hidden field matching a value. I understand that the technique tends to be "show all rows", "filter the set", "show/hide that filtered set". I have the following jQuery, but I'm awful with filter and my filtered set seems to always contain no elements. My table is the usual:

        <table>
          <tr><td>header></td><td>&nbsp;</tr>
          <tr>
            <td>a visible cell</td>
            <td><input type='hidden' id='big-asp.net-id' value='what-im-filtering-on' /></td>
          </tr>
        </table>

    My goal is to be able to match on tr whose descendant contains a hidden input containing either true or false. This is how I've tried the selector (variations of this), and I'm not even testing for the value yet:

        function OnFilterChanged(e) {
            //debugger;
            var checkedVal = $("#filters input[type='radio']:checked").val();
            var allRows = $("#match-grid-container .tabular-data tr");
            if (checkedVal == "all") {
                allRows.show();
            }
            else if (checkedVal == "matched") {
                allRows.show();
                allRows.filter(function() { $(this).find("input[type='hidden'][id~='IsAutoMatchHiddenField']") }).hide();
            }
            else if (checkedVal == "unmatched") {
            }
        }

    Am I way off with the filter? Is the $(this) required in the filter so that I can do the descendant searching? Thanks kindly.

    Read the article

  • Nested and complicated select statement

    - by Selase
    What I want to do here is simple... display an investigator's ID and his corresponding name. That can easily be done from the users table by selecting based on the user type. However, I want to select only some types of investigators. The analogy here is that investigators are assigned to an exhibit for them to investigate. One investigator can be assigned to a maximum of 3 cases only. Now, during the assigning of investigators, I want to write a select statement that retrieves only investigator IDs that have been assigned to 2 or fewer cases. I have included the exhibit and users tables that show sample data below. Now I sort of have an idea that I will first have to pick out all the investigators by their ID from the users list and then filter them through the exhibit table by dropping those assigned to 3 cases and leaving just those with two cases. Then afterwards I use these IDs to select the investigators' names. The big question is: how do I write the statement?

    Read the article

  • Who needs singletons?

    - by sexyprout
    Imagine you access your MySQL database via PDO. You got some functions, and in these functions, you need to access the database. The first thing I thought of is global, like: $db = new PDO('mysql:host=127.0.0.1;dbname=toto', 'root', 'pwd'); function some_function() { global $db; $db->query('...'); } But it's considered as a bad practice. So, after a little search, I ended up with the Singleton pattern, which "applies to situations in which there needs to be a single instance of a class." According to the example of the manual, we should do this: class Database { private static $instance, $db; private function __construct(){} static function singleton() { if(!isset(self::$instance)) self::$instance = new __CLASS__; return self:$instance; } function get() { if(!isset(self::$db)) self::$db = new PDO('mysql:host=127.0.0.1;dbname=toto', 'user', 'pwd') return self::$db; } } function some_function() { $db = Database::singleton(); $db->get()->query('...'); } some_function(); But I just can't understand why you need that big class when you can do it merely with: class Database { private static $db; private function __construct(){} static function get() { if(!isset(self::$rand)) self::$db = new PDO('mysql:host=127.0.0.1;dbname=toto', 'user', 'pwd'); return self::$db; } } function some_function() { Database::get()->query('...'); } some_function(); This last one works perfectly and I don't need to worry about $db anymore. But maybe I'm forgetting something. So, who's wrong, who's right?

    Read the article

  • controlling css with javascript works with mozilla but not with webkit based browsers

    - by GlassGhost
    I'm having problems applying the CSS text variable in this JavaScript with WebKit-based browsers (Chrome & Safari), but it works in Firefox 3.6. The function:

        function addGlobalStyle(sCss) {
            var head = document.getElementsByTagName('head')[0];
            if (!head || head == null) { return false; }
            var oStyle = document.createElement('style');
            oStyle.type = 'text/css';
            oStyle.rel = 'stylesheet';
            oStyle.media = 'screen';
            if (is_gecko) { // firefox WORKING !!!
                oStyle.href = 'FireFox.css';
                oStyle.innerHTML = sCss;
                head.appendChild(oStyle);
                return true;
            } else { // nothing but firefox works
                oStyle.href = 'FireFox.css';
                oStyle.innerHTML = sCss;
                head.appendChild(oStyle);
                return true;
            }
        }

    The use of the function:

        var NewSyleText = // The page styling
            "h1, h2, h3, h4, h5 {font-family: 'Verdana','Helvetica',sans-serif; font-style: normal; font-weight:normal;}" +
            "body, b {background: #fbfbfb; font-style: normal; font-family: 'Cochin','GaramondNo8','Garamond','Big Caslon','Georgia','Times',serif;font-size: 11pt;}" +
            "p { margin: 0pt; text-indent:2.5em; margin-top: 0.3em; }" +
            "a { text-decoration: none; color: Navy; background: none;}" +
            "a:visited { color: #500050;}" +
            "a:active { color: #faa700;}" +
            "a:hover { text-decoration: underline;}";
        addGlobalStyle(NewSyleText); // inserts the page styling

    Read the article

  • Loading GWT Messages from a Database

    - by Lars Tackmann
    In GWT one typically loads i18n strings using an interface like this:

        public interface StatusMessage extends Messages {
            String error(String username);
            :
        }

    which then loads the actual strings from a StatusMessage.properties file:

        error=User: {0} does not have access to resource

    This is a great solution; however, my client is unbendable in his demand for putting the i18n strings in a database so they can be changed at runtime (though it is not a requirement that they be changed in real time). One solution is to create an async service which takes a message ID and user locale and returns a string. I have implemented this and find it terribly ugly (it introduces a huge amount of extra communication with the server, plus it makes property placeholder replacement rather complicated). So my question is this: can I in some nice way implement a custom message provider that loads the messages from the backend in one big swoop (for the current user session)? If it can also hook into the default GWT message mechanism, then I would be completely happy (i.e. so I can create an interface like the one above and keep using the nice {0}, {1}... property replacement format). Other suggestions for clean database-driven messages in GWT are also welcome.

    Read the article

  • Animate screen while loading textures

    - by Omega
    My RPG-like game has random battles. When the player enters a random battle, my game needs to load the textures used within that battle (animated monsters, animations, etc.). There are quite a lot of textures, and they are rather big (the battles are very graphics-intensive). Such a process consumes significant time, and while it is loading, the whole screen freezes. The game's map freezes, and the wait time is significant - I personally find it annoying. I can't afford to preload the textures because, after doing some math, I realized: if I preload all the textures at the beginning of the game, the application will definitely crash; if I preload the textures used in a specific map when the player enters that map, the application is very likely to crash as well. I can only afford to load the textures when I need them, and dispose of them as soon as the battle ends. I'd prefer not to use a "loading screen" image because it affects my game's design and concept; I want to avoid that approach. If I could do some kind of animation while loading the textures, it would be great, which leads to my question: is that possible? What kind of animation, you ask? Well, remember how Final Fantasy used to distort the screen while apparently loading the textures? Something like that. But distorting is quite a time-consuming process as well, so maybe just a cool frame-by-frame animation or something. While writing this, I realized that I could make small pauses between textures (there are multiple textures), and during such pauses update the screen to reflect the animation's state. However, this is unlikely to look smooth, because each texture is 2048x2048, so the animation would refresh at a rather laggy (and annoying) rate. I'd prefer to avoid this as well.
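
    A rough sketch in Python of the "small pauses between textures" idea the question itself raises: load one texture per step and redraw the loading animation in between, so the screen keeps moving even though each individual load still blocks. Every function name here (load_texture, draw_loading_frame, present_frame) is a hypothetical placeholder for the engine's own calls.

        def load_textures_incrementally(paths, load_texture, draw_loading_frame, present_frame):
            textures = []
            total = len(paths)
            for i, path in enumerate(paths):
                draw_loading_frame(progress=i / float(total))   # advance the animation one frame
                present_frame()                                  # push it to screen before blocking
                textures.append(load_texture(path))             # the slow, blocking texture load
            draw_loading_frame(progress=1.0)
            present_frame()
            return textures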

    Read the article

  • jQuery enclose text before and after anchor tag in separate spans.

    - by Devashish Bahri
    Hey there, first of all, thanks a ton for taking the time to look at my post. I have a big problem with jQuery. I have this code:

        <p>Hi. I am your friend. you are my friend.<br>
        we <a href="both.html">both</a> are friends.</p>

    My aim is to enclose the text before the anchor tag, as well as the text after the anchor tag, in separate spans. Thus, I want something like this in the DOM:

        <p><span>Hi. I am your friend. you are my friend.<br>
        we </span><a href="both.html">both</a><span> are friends.</span></p>

    Can anybody please help me and tell me how to do this in jQuery? Please... it's very important! Thanks in advance.

    Read the article
