Search Results

Search found 19539 results on 782 pages for 'pretty print'.


  • Caching by in-memory dictionaries. Are we doing it all wrong?

    - by user73983
    This approach is pretty much the accepted way to do anything in our company. A simple example: when a piece of data for a customer is requested from a service, we fetch all the data for that customer (the part relevant to the service) and save it in an in-memory dictionary, then serve it from there on subsequent requests (we run singleton services). Any update goes to the DB, then updates the in-memory dictionary. It all seems simple and harmless, but as we implement more complicated business rules the cache gets out of sync and we have to deal with hard-to-find bugs. Sometimes we defer writing to the database, keeping new data only in the cache until then. There are cases where we store millions of rows in memory, because the table has many relations to other tables and we need to show aggregate data quickly.

    All this cache handling is a big part of our codebase, and I sense this is not the right way to do it. All of this juggling adds too much noise to the code and makes it hard to understand the actual business logic. However, I don't think we can serve data in a reasonable amount of time if we have to hit the database every time. I am unhappy about the current situation but I don't have a better alternative. The only solution I can think of would be to use the NHibernate 2nd-level cache, but I have nearly no experience with it. I know many companies use Redis or Memcached heavily to gain performance, but I have no idea how I would integrate them into our system, and I also don't know whether they can perform better than in-memory data structures and queries. Are there any alternative approaches that I should look into?
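    In outline, the pattern described above is cache-aside reads with write-through updates. A minimal sketch follows (in TypeScript for brevity, even though the system in question is .NET; CustomerData, loadFromDb and writeToDb are hypothetical names, not anything from the actual codebase):

        // Sketch of the cache-aside + write-through pattern described above.
        // All names here are hypothetical stand-ins for the real codebase.

        interface CustomerData { id: number /* ...the service-relevant slice... */ }

        declare function loadFromDb(id: number): Promise<CustomerData>;
        declare function writeToDb(data: CustomerData): Promise<void>;

        const cache = new Map<number, CustomerData>();

        // Read path: serve from the in-memory dictionary, fall back to the DB once.
        async function getCustomer(id: number): Promise<CustomerData> {
          const hit = cache.get(id);
          if (hit !== undefined) return hit;
          const data = await loadFromDb(id);  // fetch everything for this customer
          cache.set(id, data);
          return data;
        }

        // Write path: DB first, then keep the dictionary in sync. This second step
        // is exactly where the hard-to-find staleness bugs tend to creep in.
        async function updateCustomer(data: CustomerData): Promise<void> {
          await writeToDb(data);
          cache.set(data.id, data);
        }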

    Read the article

  • Moving away from PHP and running towards server-side JavaScript [on hold]

    - by Sosukodo
    I've decided to start moving away from PHP, and server-side JavaScript looks like an attractive replacement. However, I'm having a hard time wrapping my head around how others are using Node.js for web applications. I'm currently using Lighttpd with FastCGI PHP. The one thing I like about PHP is that I can "inline" my scripts in the document like so:

        <?php echo 'Hello, World!'; ?>

    My question is: is there any server-side JavaScript solution that I can use in this manner? For instance, I'd love to be able to do this:

        <?js print('Hello, World!'); ?>

    Is there such a thing? I'm not looking for opinions about "which is better". I just want to know what's out there, and I'll explore each option on my own. The important thing is that I'd like to use it like I demonstrated above. Links to the software along with implementation examples will be considered above other answers.
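    As one illustration of the kind of thing that's out there: EJS is a templating engine in the Node ecosystem whose delimiters come close to the inline style shown above. A minimal sketch, with the specifics worth verifying against the EJS documentation:

        // Sketch: PHP-style inline templating in Node via EJS ("npm install ejs").
        // EJS uses <%= %> delimiters rather than the hypothetical <?js ?> above.
        import ejs from "ejs";
        import http from "http";

        const page = "<html><body><%= greeting %></body></html>";

        http.createServer((req, res) => {
          res.writeHead(200, { "Content-Type": "text/html" });
          res.end(ejs.render(page, { greeting: "Hello, World!" }));
        }).listen(8080);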

    Read the article

  • In MVC, why can't a model create a view?

    - by MUY Belgium
    I have a web application written in Perl with a controller, some "views" and some "models". Each "model" corresponds to one "view". The controller (one file) creates a Model object corresponding to each view (the view is a CGI argument), then retrieves the view from the module it has just created. Supposedly this is a bad thing, but can you argue a bit more about why?

    My first idea was that since the "Model" object depends upon the "view", the "model" is actually a view itself. But also, the fact that ALL the CGI parameters are passed to the Model causes the "Model" to be not truly a view but to lose all interest, since it is only tied to the current implementation of the web app. In other words, the "Model" remains a model but loses its comprehensibility (the "Model" is not easily understandable). I am quite new to project analysis, so please do not be too harsh. Why is this bad?

    I have made a prototype with the main structures I have understood of this web application, made as short as possible:

        # Model.pm
        package Model;
        import {
            # this requires an attribute called "view"
            # and this requires an argument which is the cgi params
        }
        ...

        # View1.pm
        package View1;
        ...

        # Model1.pm
        package ModelView1;
        base Model;
        use View1;
        sub new {
            my $class = shift;
            my $arg   = shift;
            Model::DoSomething($arg);
            $self->view = new View1($arg);
            ...
        }

        # controller.cgi
        my $model = 0;
        ...
        $model = new Model1( cgi_param => params() );
        # there are several models here
        ...
        print $model->get_view()->get_html();

    Read the article

  • Should I learn the easier framework as a start? [closed]

    - by gunbuster363
    I've been a programmer for 2 years. I learned Java SE and C in college and learned COBOL in the workplace. I've noticed that there is a lot of hype around frameworks, and I actually don't know what a framework is. It's gotten to the point that a colleague once said you cannot find a new job without knowing something like Struts, Spring, or Hibernate. And we should know Java EE too, because it is aimed at enterprise applications.

    I've never coded anything such as a server-client web application, and I think I need to try it out. But which language should I code in? I can't decide between the following two:

    1) Java. It is heavily used by many companies, so I could easily reuse the experience gained. But Java and its related frameworks are pretty heavy (for the machine and the operation). It is in demand.

    2) RoR. It is cool. The syntax of Ruby is simple. I can get a better handle on it, and maybe I can learn the concepts easily and possibly correctly. However, not many companies here use it. All the job ads are about J2EE/C#.

    Should I learn the easy one or the difficult one? Not to mention there are a lot of frameworks out there for Java, which makes the decision much more difficult.

    Read the article

  • Is this a ridiculous way to structure a DB schema, or am I completely missing something?

    - by Jim
    I have done a fair bit of work with relational databases, and I think I understand the basic concepts of good schema design pretty well. I was recently tasked with taking over a project where the DB was designed by a highly-paid consultant. Please let me know if my gut instinct - "WTF??!?" - is warranted, or is this guy such a genius that he's operating out of my realm?

    The DB in question is an in-house app used to enter requests from employees. Just looking at a small section of it, you have information on the users and information on the request being made. I would design this like so:

        User table:
            UserID (primary key, indexed, no dupes)
            FirstName
            LastName
            Department

        Request table:
            RequestID (primary key, indexed, no dupes)
            <...> various data fields containing request details
            UserID -- foreign key associated with User table

    Simple, right? The consultant designed it like this (with sample data):

        UsersTable
            UserID  FirstName  LastName
            234     John       Doe
            516     Jane       Doe
            123     Foo        Bar

        DepartmentsTable
            DepartmentID  Name
            1             Sales
            2             HR
            3             IT

        UserDepartmentTable
            UserDepartmentID  UserID  Department
            1                 234     2
            2                 516     2
            3                 123     1

        RequestTable
            RequestID  UserID  <...>
            1          516     blah
            2          516     blah
            3          234     blah

    The entire database is constructed like this, with every piece of data encapsulated in its own table, with numeric IDs linking everything together. Apparently the consultant had read about OLAP and wanted the 'speed of integer lookups'. He also has a large number of stored procedures to cross-reference all of these tables. Is this valid design for a small to mid-sized SQL DB? Thanks for comments/answers...

    Read the article

  • So my employer wants me to do less programming and focus on IT support

    - by Rich
    I was hired into a non-tech company's IT department as a programmer a few years back, and after several rounds of layoffs we're down to a skeleton crew. I've saved the company hundreds of thousands of dollars with my projects, and management has been happy with them (although most of the stakeholders have since left the company). Management now wants me to limit the programming that I do and spend most of my time on IT support: putting out fires, dealing with vendors and outsourced contractors, supporting company systems, managing projects, etc.

    I am a little burnt out on programming, since I've been pushed pretty hard for the past several years. However, I'm not sure this is a good career move in the long run. I'm a decent programmer (and also good with databases) but not obsessed with it to the point of coding outside of work. I'm approaching my mid-30s, and there's potential ageism to deal with down the line. While I'm fortunate to have survived the layoffs, it sorta feels like my job is being "dumbed down". I have both good technical skills and people skills... but it doesn't take a genius to do what I'm doing now. And my success is increasingly linked to others' performance rather than my own...

    Just looking for some advice. Is it time to move on? That's not really an easy thing to do, since I'd likely have to move to another area to find a comparable tech job. Should I go after another pure technical role? Or should I stay and try to make this work? People say do what you "enjoy", but it doesn't really matter to me as long as I'm getting paid. Also, the ageism thing is on the horizon and could be an issue eventually. I'm making a decent (but not great) salary. Should I chase money and maximize my income while I still have a chance? Or be happy with a moderate salary and a 40-hour work week?

    Read the article

  • Access a PLESK website before propagation?

    - by RCNeil
    My web host uses Plesk, and I want to know if there is any way to access and view a website (with PHP and other processes functional) before the domain name has propagated. I have found countless forums on this, but they are all pretty old (circa 2001-2004) and involve either tricking your localhost or SSH commands, and some even result in terrible security risks. I would like to access a web page directory through a browser and see its contents, with the PHP processes carrying out... before I propagate its potential domain name. People claim this is pointless, but during a site migration why on earth would you not test a site before propagating it? I'm looking for something similar to what cPanel offers, i.e. http://IP.ADDRESS./~mydomain.com

    The only solution I could think of is storing the site in a new directory of an already functional site, then setting up databases and testing the site once it's complete. Once tested and working, I should easily be able to migrate the files to the "new" domain name's root directory, set up new databases, and then propagate the domain name. I can't believe that Plesk v10+ still does not have a site preview method that includes PHP, JS, and Flash capability.

    Read the article

  • TraceTune supports uploading Zip files

    - by Bill Graziano
    I've been using the online version of ClearTrace more and more lately. When I get to a new client, it's just much easier to upload a trace file than to install ClearTrace. That means I've finally been adding more features to it. The two latest features are about ease of use.

    You can now upload a ZIP file that contains a trace file. Trace files are already somewhat compressed; putting one in a ZIP file compresses it further, by a factor of 8X or 9X in my testing. That means you can start with a 100MB trace and end up with a 10MB-12MB ZIP file to upload. I'm consistently able to get over 150,000 events in a 100MB ZIP file. That gives me a pretty good look at a system.

    The second part of this is that files are now processed asynchronously. After you upload a file, you'll be taken to a processing page that updates every few seconds with the number of rows processed. It generally takes under a minute to process a 100MB trace file, but I *hated* staring at a blank screen.

    Give TraceTune a try. It's getting easier to use every day.

    Read the article

  • A better alternative to incompatible implementations for the same interface?

    - by glenatron
    I am working on a piece of code which performs a set task in several parallel environments, where the behaviour of the different components in the task is similar but the implementations are quite different. The implementations are all based on the relationships between the same interfaces, something like this:

        IDataReader    -> ContinuousDataReader, ChunkedDataReader
        IDataProcessor -> ContinuousDataProcessor, ChunkedDataProcessor
        IDataWriter    -> ContinuousDataWriter, ChunkedDataWriter

    So in either environment we have an IDataReader, an IDataProcessor and an IDataWriter, and we can use Dependency Injection to ensure that we get the correct one of each for the current environment: if we are working with data in chunks we use the ChunkedDataReader, ChunkedDataProcessor and ChunkedDataWriter, and if we have continuous data we use the continuous versions.

    However, the behaviour of these classes is quite different internally, and one could certainly not swap a ContinuousDataReader for a ChunkedDataReader even though they are both IDataReaders. This feels to me as though it is incorrect (possibly an LSP violation?) and certainly not a theoretically correct way of working. It is almost as though the "real" interface here is the combination of all three classes. Unfortunately, in the project I am working on, with the deadlines we are working to, we're pretty much stuck with this design, but if we had a little more elbow room, what would be a better design approach in this kind of scenario?
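    One direction, sketched below, is to make that combined interface explicit: a per-environment factory that hands out a matched reader/processor/writer set, so a mixed (and incompatible) trio can never be injected. The sketch is in TypeScript with illustrative names, not the project's actual language or types:

        // Sketch: make the per-environment combination explicit with one factory
        // interface, so only a matched reader/processor/writer trio can be injected.

        interface IDataReader { read(): unknown; }
        interface IDataProcessor { process(data: unknown): unknown; }
        interface IDataWriter { write(data: unknown): void; }

        // The "real" interface alluded to above: the combination of all three roles.
        interface IDataPipelineFactory {
          createReader(): IDataReader;
          createProcessor(): IDataProcessor;
          createWriter(): IDataWriter;
        }

        class ChunkedDataReader implements IDataReader { read() { return "one chunk"; } }
        class ChunkedDataProcessor implements IDataProcessor { process(d: unknown) { return d; } }
        class ChunkedDataWriter implements IDataWriter { write(_d: unknown) { /* ... */ } }

        // One binding per environment replaces three bindings that must secretly agree.
        class ChunkedPipelineFactory implements IDataPipelineFactory {
          createReader() { return new ChunkedDataReader(); }
          createProcessor() { return new ChunkedDataProcessor(); }
          createWriter() { return new ChunkedDataWriter(); }
        }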

    Read the article

  • Can't find new.h - getting gcc-4.2 on Quantal?

    - by Suyo
    I've been trying to compile the Valve Source SDK (2007) on my machine, but I keep running into the same error:

        In file included from ../public/tier1/interface.h:50:0,
                         from ../utils/serverplugin_sample/serverplugin_empty.cpp:13:
        ../public/tier0/platform.h:46:17: new.h: No such file or directory

    I'm pretty new to C++ coding and compiling, but using apt-file search I tried every single suggestion for the required files in the Makefile (libstdc++.a and libgcc_eh.a), and none worked. I then found a note in the Makefile saying gcc 4.2.2 is recommended - I assume the older code won't work with the newer version, but gcc-4.2 is unavailable in 12.10. So my questions are:

    If my assumption is right - how do I get gcc 4.2.2 on Quantal? If my assumption is wrong - what else could be the problem here?

    Relevant portion of the Makefile:

        # compiler options (gcc 3.4.1 will work - 4.2.2 recommended)
        CC=/usr/bin/gcc
        CPLUS=/usr/bin/g++
        CLINK=/usr/bin/gcc
        CPP_LIB="/usr/lib/gcc/x86_64-w64-mingw32/4.6/libstdc++.a /usr/lib/gcc/x86_64-w64-mingw32/4.6/libgcc_eh.a"

        # GCC 4.2.2 optimization flags, if you're using anything below, don't use these!
        OPTFLAGS=-O1 -fomit-frame-pointer -ffast-math -fforce-addr -funroll-loops -fthread-jumps -fcrossjumping -foptimize-sibling-calls -fcse-follow-jumps -fcse-skip-blocks -fgcse -fgcse-lm -fexpensive-optimizations -frerun-cse-after-loop -fcaller-saves -fpeephole2 -fschedule-insns2 -fsched-interblock -fsched-spec -fregmove -fstrict-overflow -fdelete-null-pointer-checks -freorder-blocks -freorder-functions -falign-functions -falign-jumps -falign-loops -falign-labels -ftree-vrp -ftree-pre -finline-functions -funswitch-loops -fgcse-after-reload
        #OPTFLAGS=

        # put any compiler flags you want passed here
        USER_CFLAGS=-m32

    Read the article

  • System in low graphics, deleted linux, grub rescue, can't access windows

    - by First timer
    So I'm pretty new to Ubuntu, but I managed to install it with no big problems on both my desktop and my netbook. When I installed it on my brother's netbook, everything went horribly wrong, and now I fear the system is close to beyond repair. The first problem was that it said it did not have any space left (which seemed ridiculous, since it had a lot). Then Ubuntu began booting into a "system is running in low-graphics mode" error, which I tried to fix using all the tips I could find on here, but nothing helped. I think the graphics error and the lack of space might have been related, but I can't be sure.

    Finally I gave up repairing Ubuntu and went for a reinstall. Shouldn't have done that! I read that I should simply boot Ubuntu from a live USB and use GParted to delete the Linux partitions, so I did, and rebooted accordingly. Next I was to install Ubuntu, but now I am only given the option to wipe the whole disk for Ubuntu, not to install alongside Windows 7. If I open GParted I can still see the NTFS partitions that hold Windows 7 (there are two: one labeled RECOVERY and another labeled OS and boot), so why can't I access them? By the way, the OS and boot partition has a little red mark with a warning that 1 cluster is referenced multiple times; I don't know what that means. If I boot without the live USB I am sent directly into a grub rescue black screen where the computer will follow no orders.

    Please, I know that the easiest option might be to simply wipe the whole thing clean, but there are important files and programs on Windows 7. Is there a way to just access Windows? It is a Dell Inspiron 1018 mini netbook, so I have no CD drive and no Windows 7 installation CD.

    Read the article

  • How to keep the trunk stable when tests take a long time?

    - by Oak
    We have three sets of test suites:

    - A "small" suite, taking only a couple of hours to run
    - A "medium" suite that takes multiple hours, usually run every night (nightly)
    - A "large" suite that takes a week+ to run

    We also have a bunch of shorter test suites, but I'm not focusing on them here. The current methodology is to run the small suite before each commit to the trunk. Then the medium suite runs every night, and if in the morning it turns out it failed, we try to isolate which of yesterday's commits is to blame, roll back that commit, and retry the tests. A similar process, only at a weekly instead of a nightly frequency, is used for the large suite.

    Unfortunately, the medium suite fails pretty frequently. That means the trunk is often unstable, which is extremely annoying when you want to make modifications and test them. It's annoying because when I check out from the trunk I cannot know for certain that it's stable, and if a test fails I cannot know for certain whether it's my fault or not.

    My question is: is there some known methodology for handling these kinds of situations in a way that leaves the trunk always in top shape? For example, "commit into a special precommit branch which will then periodically update the trunk every time the nightly passes". And does it matter whether it's a centralized source control system like SVN or a distributed one like git?

    By the way, I am a junior developer with a limited ability to change things; I'm just trying to understand if there's a way to handle this pain I am experiencing.

    Read the article

  • C++ program...overshoots? [migrated]

    - by Zdrok
    I'm decent at C++, but I may have missed some nuance that applies here. Or maybe I completely missed a giant concept; I have no idea. My program was instantly crashing ("blah.exe is not responding") about 1 in 5 times it was run (the other times it ran completely fine), and I tracked the problem down to a constructor for a world class that is called once at the beginning of the main function. Here is the code (in the constructor) that causes the problem:

        int ii;
        for(ii=0;ii<=255;ii++) {
            cout<<"ent "<<ii<<endl;
            entity_list[ii]=NULL;
        }
        for(ii=0;ii<=255;ii++) {
            cout<<"sec "<<ii<<endl;
            sector_list[ii]=NULL;
        }
        entity_list[0] = new Entity(0,0);
        entity_list[0]->_world = this;

    The cout calls are new, added for the sake of telling where it is having trouble. It would print the entire "ent 0" through "ent 255" run, then "sec 0" through "sec 255", and then crash right after, as if it was going for a 257th pass through the second for loop. I set the second for loop to go until "ii<=254", which stopped all crashes. Does C++ code tend to "overshoot" for loops or something? What is causing it to crash at this specific loop, seemingly at random?

    By the way, entity_list and sector_list point to classes called Entity and Sector, respectively, but they were not constructing anything, so I didn't think that would be relevant. I also have a forward declaration for the Entity class in a header for this, but since none were being constructed I didn't think that was relevant either.

    EDIT: It was due to the new Entity line. I assumed wrongly, from the fact that altering the for statement to 254 fixed the crashes, that the problem had to be in the loop. I still don't understand why the for loop seemed related, though.

    Read the article

  • Good books or tutorials on building projects without an IDE?

    - by CodexArcanum
    I'm certainly aware that building software without an IDE is possible, but I don't actually know much about doing it well. I've been using graphical tools like Visual Studio or Code::Blocks for so long that I'm pretty well lost without them. And that really stinks when I want to change environments or languages. I couldn't really do anything in D until someone made a Visual Studio plugin, and now that I'm trying to do more development on the Mac, I can't use D again because the Xcode plugins don't work.

    I'm sick of being lost when I see a makefile and having no idea what I'm supposed to do with a folder full of source files. People can't be compiling them one by one from the console and then linking them one by one; you'd spend more time typing file names than code. So what are the automation and productivity tools of the non-IDE user? How do you manage a project when you're writing all the code in emacs or vim or nano or whatever? I would love it if there were a book or an online guide that spells some of this out.

    Read the article

  • Persisting high score table in flash game without a network. (Featuring: HttpListenerException)

    - by bearcdp
    Hi everyone, this question is very programming-centric, but it's for a game so I figured I might as well post it here. I'm doing polishing work on a GGJ '11 game because it will be shown at an indie arcade tomorrow afternoon, and they're expecting our final build in the morning.

    We'd like to have a high score table that displays during attract mode, but since it's Flash (Flixel) it would require some networking, Mochi, or something to keep a record of these scores. The only problem is that the machine we'd be running on probably won't have network access. As a quick solution, I thought I'd just write up a dinky little high score server in C#/.NET that could take basic GET requests for submitting scores and getting the score list. We're talking REAL basic, like blocking while waiting for an incoming request, a run-and-forget console app, etc. There's no guarantee that our .swf won't get reloaded, and we'd like the scores to persist, so this server would pretty much exist to keep a safe copy of the scores that the game can add to and request, with the server occasionally writing the scores to a flat text file.

    But HttpListener is giving me error code 87, 'The parameter is incorrect.' Have any idea what I'm doing wrong? Or better yet, am I barking up the wrong tree and missing an obviously simpler solution? This is all I've got so far in my Main():

        HttpListener listener = new HttpListener();
        listener.Prefixes.Add("http://localhost:66666/");
        listener.Start();

    The exception happens at listener.Start(); and the stack trace is:

        at System.Net.HttpListener.AddAllPrefixes()
        at System.Net.HttpListener.Start()
        at WOSEBCE_ScoreServer.Program.Main(String[] args) in C:\Users\Michael\Documents\Visual Studio 2010\VS2010 Projects\WOSEBCE_ScoreServer\WOSEBCE_ScoreServer\Program.cs:line 24
        at System.AppDomain._nExecuteAssembly(RuntimeAssembly assembly, String[] args)
        at System.AppDomain.ExecuteAssembly(String assemblyFile, Evidence assemblySecurity, String[] args)
        at Microsoft.VisualStudio.HostingProcess.HostProc.RunUsersAssembly()
        at System.Threading.ThreadHelper.ThreadStart_Context(Object state)
        at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state, Boolean ignoreSyncCtx)
        at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state)
        at System.Threading.ThreadHelper.ThreadStart()
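    For scale, the server being described - accept GET requests to add and list scores, keep them in memory, occasionally flush to a flat file - is small enough to sketch in a few lines. Here it is in TypeScript/Node rather than the question's C#, with all route and file names invented:

        // Sketch of the tiny score server described above: in-memory list,
        // GET endpoints to add and fetch scores, flat-file persistence.
        import http from "http";
        import fs from "fs";
        import { URL } from "url";

        const scores: { name: string; points: number }[] = [];

        http.createServer((req, res) => {
          const url = new URL(req.url ?? "/", "http://localhost");
          if (url.pathname === "/add") {
            scores.push({
              name: url.searchParams.get("name") ?? "anon",
              points: Number(url.searchParams.get("points") ?? 0),
            });
            fs.writeFileSync("scores.txt", JSON.stringify(scores)); // flat text file
            res.end("ok");
          } else {
            res.end(JSON.stringify(scores)); // score list for the attract mode
          }
        }).listen(8000);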

    Read the article

  • Samba issue with sharing directories on NTFS/FAT32 (Mounted Drives) ???

    - by Microkernel
    Hi guys, I have some strange problems with a Samba server. I am using Samba version 3.5.4 on Ubuntu 10.10. I have two Windows XP machines: one on VirtualBox on the Ubuntu box, and an office laptop. The Windows machine on VBox has no issues accessing the shared folders, but the laptop is not able to access all the shared content.

    The issue on the laptop is this: shared folders on ext3 drives present no problems, but content shared from NTFS and FAT32 drives (mounted ones) is not accessible. When I try to open such a shared folder, it asks for a user name and password, but doesn't accept them when I provide them (even if I provide admin login details!!!). I changed the workgroup value to the domain name on the office laptop, but the problem persists...

    Here is the smb.conf I am using:

        [global]
        workgroup = XXX.XXX.ORG
        server string = %h server (Samba, Ubuntu)
        map to guest = Bad User
        obey pam restrictions = Yes
        pam password change = Yes
        passwd program = /usr/bin/passwd %u
        passwd chat = *Enter\snew\s*\spassword:* %n\n *Retype\snew\s*\spassword:* %n\n *password\supdated\ssuccessfully* .
        unix password sync = Yes
        syslog = 0
        log file = /var/log/samba/log.%m
        max log size = 1000
        dns proxy = No
        usershare allow guests = Yes
        panic action = /usr/share/samba/panic-action %d
        guest ok = Yes

        [homes]
        comment = Home Directories

        [printers]
        comment = All Printers
        path = /var/spool/samba
        read only = No
        create mask = 0700
        printable = Yes
        browseable = No

        [print$]
        comment = Samba server's CD-ROM
        path = /cdrom
        force user = nobody
        force group = nobody
        locking = No

    (The workgroup was defined as "HOMENET" before; I changed it to the domain name on the office laptop thinking that was the problem, but to no avail.)

    Thanks in advance. Regards, Microkernel

    Read the article

  • MVC: Nasty __o not declared

    - by xamlnotes
    I ran into this little error with MVC where a bunch of errors showed up about __o not being declared. This was driving me nuts. Then I ran across this link that solved it: http://stackoverflow.com/questions/750902/how-do-i-get-rid-of-o-is-not-declared

    So the solution is to put this at the top of the page, like VS does for your site.master:

        <%-- The following line works around an ASP.NET compiler warning --%>
        <%: ""%>

    But what about other pages? Let's say you have a view that's using your site master, and that view is throwing this error. Just add the same lines into the content section where the error occurs, like so:

        <asp:Content ID="indexContent" ContentPlaceHolderID="MainContent" runat="server">
        <%-- The following line works around an ASP.NET compiler warning --%>
        <%: ""%>

    Then add the rest of your code. That seems to fix it, and it's pretty simple too.

    Read the article

  • Weekend Project: Build a Fireball Launcher

    - by Jason Fitzpatrick
    What's more fun than playing with fire? Shooting it from your hands. Put on your robe and wizard hat, make a stop at the hardware store, and spend the weekend trying to convince your friends you've acquired supernatural powers.

    Over at MAKE Magazine, Joel Johnson explains the impetus for his project:

        A stalwart of close-quarter magicians for years, the electronic flash gun is a simple device: a battery-powered, hand-held ignitor that uses a "glo-plug" to light a bit of flash paper and cotton, shooting a fireball a few feet into the air. You can buy one from most magic shops for around $50, but if you build one on your own, you'll not only save a few bucks, you'll also learn how easy it is to add fire effects to almost any electronics project. (And what gadget couldn't stand a little more spurting flame?)

    The parts list is minimal, but the end effect is pretty fantastic. Hit up the link below for the full build guide, plenty of warnings, and a weekend project that's sure to impress.

    Read the article

  • SSD I/O extremely slow installing/booting Ubuntu 12.04

    - by Menda
    These are some useful specs:

    - MacBook Pro 7,1
    - OWC Mercury Extreme Pro 2.5" SATA SSD (120 GB), with a SandForce controller
    - Ubuntu 12.04 Desktop, 32-bit
    - One 18 GB partition for GNU/Linux and 1.5 GB for swap
    - The MD5 for the Ubuntu install CD checks out OK

    I tried to install Ubuntu. Everything seems to be recognized, but there's a big problem: reads and writes to the SSD are extremely slow. For example, the install process, which shouldn't take more than 20 minutes, takes 7 hours. Then booting up the computer takes about 20 minutes. I checked, and the problem is definitely the SSD: every access to any file is something like 10 times slower than normal. I have tried formatting the partition as ext4 and ext3, with the same problem. Trying to install other distros like Fedora 17, I have a similar problem: there's a "lag" with the SSD, though not as pronounced as in Ubuntu. Surprisingly, Debian 6.0 installs and works without any problem. Mac OS works pretty well in the other partition too, so I've ruled out the SSD itself being faulty. Thanks for your help!

    Read the article

  • I am not the most logically-organized person. Do I have any chance at being a good 'low-level' programmer?

    - by user217902
    Background: I am entering college next year. I really enjoy making stuff and solving logical problems, so I'm thinking of majoring in compsci and working in software development. I hope to have the kind of job where I can work on implementing and improving algorithms and data structures on a regular basis... as opposed to, say, a job that's purely concerned with mashing different libraries together or 'finding the right APIs for the job'. (Hence the word 'low-level' in the title. No, I don't wish to write assembly all day.)

    Thing is, I've never been the most logically sharp person. Thus far I have only worked on hobby projects, but I find that I make the silliest of errors every so often, and it can take me ages to find them: anywhere between three hours and a day to locate a simple segfault, off-by-one error, or other logical mistake. (Of course, I do other things in the meantime, like browsing SO, reddit, and the like...) It's not like I'm 'new' to programming, either; I first tried C++ maybe five years ago.

    My question is: is this normal? Should a programmer with any talent solve it in less time? Having read Spolsky's Smart and Gets Things Done, where he talks about the large variance in programming speed, am I near the bottom of the curve, and therefore destined to work in companies that cannot afford to hire quality programmers?

    I'd like to think that conceptually I'm okay - I can grasp algorithms and concepts pretty well, and I do fine in math and science, although I probably drop signs in my equations more often than the next guy. Still, grokking concepts makes me happy, and that is the reason why I want to work with algorithms. I'm hoping to hear from those of you with real-world programming experience.

    TL;DR: I make many careless mistakes; should I not consider programming as a career?

    Read the article

  • For a large website developed in PHP, is it necessary to have a framework?

    - by Martin
    I am wondering if it is necessary to have a framework or if it is a must-have if I plan to make a large website. Large website could mean a lot of things: in other words, multiple dynamic web pages (40-50 dynamic pages, mysql content) and a lot of visitors (+- a million hits per month). The site will be hosted in a dedicated server environment. I know that it could simplify coding for a developer team, that it includes libraries and a lot of advantages. But I just feel that I don't need that. I think that learning how it works, managing it and installing it would take more time and I could use that time to code. I write PHP the simplest way I could (with performance in mind) and I try to reuse my code/functions/classes most of the time and I make sure that if another developer joins the team, that he won't be lost in the code. I am also planning to use MemCached or another Cache for PHP. As I said, the site will be hosted in a dedicated server environment but will be entirely managed by the hosting company. I am pretty sure the control panel for me to control the basic stuff will be Cpanel. For a developer like me that only knows PHP, Javascript, HTML, CSS, MYSQL and really basic server management, I feel that it seems to complicated to have a framework. Am I wrong? Is it worth the time to learn all about it? Thank you for your opinions and suggestions.

    Read the article

  • Is learning C# as a first language a mistake?

    - by JuniorDeveloper1208
    I know there are similar questions on here, which I've read, but I recently read this post by Joel Spolsky: How can I teach a bright person, with no programming experience, how to program? And it got me thinking about my way of learning and whether it might actually be harmful in the long run.

    I've dabbled with various languages, but C# is my first serious one. I've read "Head First C#" and created a few projects. But after reading the post above, I've found it a bit disheartening that I may be going about it all wrong; obviously I respect Joel's opinion, which is what has thrown me a bit. I've started reading "Code", as recommended in the reading list, and I'm finding it pretty hard going, although enjoyable. I feel like it's taken the shine off of my "noobish hacking about" in Visual Studio.

    So now I'm unsure what path I should take. Should I take a step back, follow Joel's advice, and start reading? I guess my main aim is just to become a good programmer, like everyone else, but I don't want to pick up bad practices by learning a .NET language first when someone whose opinion I respect thinks that it is harmful. Thoughts?

    Read the article

  • Windows Defender Update KB915597 (Definition 1.135.415.0)? Killed My Live Discs

    - by user88311
    Here's my problem. For those willing to read for about two minutes, the entire story is here: http://answers.microsoft.com/en-us/windows/forum/windows_vista-windows_update/bsod-after-windows-defender-update-kb915597/a4b5fca3-0274-47b4-97c4-61b34c4c4599. For those who want the short version, here's what happened.

    After Windows Update automatically updated Windows Defender to KB915597, my computer started getting BSODs on shutdown and startup and started experiencing problems with the USB ports. So I decided to go to the Microsoft Answers site for help - I know that was probably my first mistake - and I followed their advice, and they turned my computer into a large paperweight.

    Luckily, I make physical backups of my C drive every few months and I have one from back in July, so I figured I'd boot up a Ubuntu live disc, copy all my files from the past two months to an external drive, and just copy the backup back to the C drive. That's where I ran into this problem. When I put in either a Ubuntu or Kubuntu disc, everything goes well until the loading bar finishes; then, when the OS would presumably start up, the computer resets. I've tried Ubuntu, Kubuntu, and GParted, and only GParted is able to get to the point where it starts up, but even then, when I try to access the internet from it, the computer resets, and when I tried to copy the entire C drive partition to a blank external drive I wasn't able to. So I figured maybe the C drive somehow had something to do with it, and I unplugged it, leaving my computer as just a 2.8 GHz processor and 2 GB of RAM; it should have had no problem starting a live disc, but the problem still continues.

    After doing some googling around, I've found that whenever Windows gets an update with the title KB915597, it's pretty much the kill switch for Windows. I've tried contacting Microsoft tech support and even managed to directly contact a software engineer, but as soon as I mention KB915597 they all just blow me off. I hope anybody who reads this has some idea how to fix this; I'm going to attempt to install Ubuntu or Kubuntu to an external drive using the same computer and see what happens now.

    Read the article

  • Conventions for search result scoring

    - by DeaconDesperado
    I assume this type of question is more on-topic here than on regular SO. I have been working on a search feature for my team's web application and have had a lot of success building a multithreaded, "divide and conquer" processing system to work through a large amount of fulltext.

    Our problem domain is pretty specific. Users of the app generate posts, and as a general rule, posts that are more recent are considered to be of greater relevance. Some of the data we are trying to extract from search is very specific (users' feelings about specific items or things), and we are using Python nltk to do named-entity extraction to find interesting likely query terms. Essentially we look for descriptive adjective-noun pairs and generate a general picture of a user's expressed sentiment as a list of tokens. This search is intended as an internal tool for our team, to draw out a local picture of sentiments like "soggy pizza". There's some machine learning in there too, to do entity resolution on terms like "soggy" against all manner of adjectives expressing nastiness.

    My problem is that I am at a loss for how to go about scoring these results. The text being searched is split up into a list of tokens, so my initial approach would be to normalize a float score between 0.0 and 1.0, generated from how far into the list the terms appear and how often they are repeated (a later mention of a term being worth less, an earlier one more; greater frequency meaning a greater score; etc.). A certain amount of weight could be given to the timestamp as well, though I am not certain how to calculate this.

    I am curious whether anyone has had to solve a similar problem of search relevance grading between appreciable metrics (frequency, term location/colocation, recency), and whether there are guidelines for how to weight each. I should mention as well that the final fallback procedure in the search is to pipe the query to Sphinx, which has its own scoring practices. Sphinx operates as the last resort in case our application-specific processing can't find any eligible candidates.
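    For concreteness, here is one naive way to blend those three signals into a single 0.0-1.0 score, sketched in TypeScript (the pipeline above is Python, and every constant below - the 30-day half-life, the 0.7/0.3 split - is an arbitrary assumption to tune, not an established convention):

        // Sketch: blend position, frequency and recency into one score in [0, 1].
        // All constants here are assumptions to tune, not established conventions.

        interface Match {
          positions: number[];  // token indices where the term appears
          docLength: number;    // total tokens in the post
          ageInDays: number;    // time since the post's timestamp
        }

        function score(m: Match): number {
          // Earlier mentions are worth more: 1.0 at the start of the token list,
          // approaching 0 at the end. Summing also gives frequency its weight.
          let positional = 0;
          for (const p of m.positions) {
            positional += 1 - p / m.docLength;
          }
          // Diminishing returns on repeats, squashed into [0, 1).
          const termScore = 1 - Math.exp(-positional);

          // Recency as exponential decay with an assumed 30-day half-life.
          const recency = Math.pow(0.5, m.ageInDays / 30);

          // Arbitrary 70/30 blend; weights sum to 1 so the result stays in [0, 1].
          return 0.7 * termScore + 0.3 * recency;
        }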

    Read the article

  • How to handle the different frame rate on different devices?

    - by Fenwick
    I am not quite sure how frames per second work on a web page. I have a Canvas game that involves moving an image from point A to point B and measuring the time elapsed. The code can be as simple as:

        var timeStamp = Date.now();
        function update(){
            obj.y += obj.speed;
            text = "Time: " + (Date.now() - timeStamp) + "ms";
        }

    The function update() is called every frame. The problem is that the time elapsed differs from device to device: it is pretty short on my PC, but longer on my iPad, and much longer on my cell phone. I thought this was because the FPS is lower on mobile devices, so instead of calling update() every frame, I called it every 1ms using a setInterval. But this did not solve the problem. In my understanding, the function passed to setInterval is invoked based on the increment in system time, not the frame rate, so it should have fixed the problem. Am I missing anything here? If the setInterval function is called based on FPS, is there any way to get around the FPS difference across devices?

    On a side note, I have sort of a "water simulator" on the same canvas. It involves redrawing about 60 objects, which can be 600x600 pixels each, every frame, so it could be a frame rate killer. I am using Phaser.js, but not really using much of its functionality, if that helps.
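    A common way around the device-to-device variation described above is frame-rate-independent movement: scale each step by the wall-clock time elapsed since the previous frame (delta time), so speed is defined per second rather than per frame. A minimal sketch, in TypeScript, with obj as a hypothetical stand-in for the question's object:

        // Sketch: frame-rate-independent movement via delta time.
        // speed is defined in pixels per *second*, not per frame.
        const obj = { y: 0, speed: 60 };   // hypothetical stand-in

        let last = Date.now();

        function update(): void {
          const now = Date.now();
          const dt = (now - last) / 1000;  // seconds since the previous frame
          last = now;
          obj.y += obj.speed * dt;         // same distance per second at any FPS
        }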

    Read the article
