Search Results

Search found 10543 results on 422 pages for 'big bang theory'.


  • What was "The Next Big Thing" when you were just starting out in programming?

    - by Andrew
    I'm at the beginning of my career and there are lots of things which are being touted as "The Next Big Thing". For example: Dependency Injection (Spring, etc.), MVC (Struts, ASP.NET MVC), ORMs (LINQ to SQL, Hibernate), and Agile Software Development. These things have probably been around for some time, but I've only just started out. And don't get me wrong, I think these things are great! So, what was "The Next Big Thing" when you were starting out? When was it? Were people sceptical of it at first? Why? Did you think it would catch on? Did it pan out and become widely accepted/used? If not, why not? EDIT: It's been nearly a week since I first posted this question and I can safely say that I did not expect such explosive interest. I asked the question so that I could gain a perspective on what kinds of innovations in programming people thought were most important when they were starting out. At the time of writing this I have read ~95% of all answers. To answer a few questions, the "Next Big Things" I listed are ones that I am currently really excited about and that I had not really been exposed to until I started working. I'm hoping to implement some or all of these in the near future at my current workplace. To many people they are probably old news. In regards to the "is this a real question" debate, I can see that obviously hasn't been settled yet. I feel bad whenever I read a comment saying that these kinds of questions take away from the real meaning of SO. I'm not wholly convinced that it doesn't. On the other hand, I have seen a lot of comments saying what a great question it is. Anyway, I have chosen "The Internet!" as my answer to this question. I don't think (in my very humble opinion, and, it seems, in many SOers' opinions) that many things related to programming can compare. Nowadays every business and their dog has a website which can do anything from simply supplying information to purchasing goods halfway around the world to updating your blog. And of course, all these businesses need people like us. Thanks to everyone for all the great answers!

    Read the article

  • Why do I get a "Day too big" error from Perl?

    - by azp74
    I have been helping someone debug some code where the error message was "Day too big". I know that this springs from localtime and the Y2038 bug (most google results appear to be people dealing with cookies expiring well into the future). We appear to have 'fixed' the problem by using time to get the current date. However, given that none of our original dates should have hit the 2038 issue I'm sceptical that we've actually fixed the problem ... Are there other instances that anyone knows of where one would hit "day too big"? OS is Solaris. Sample code - the actual code is quite large and the person I'm working with hasn't actually isolated the offending part (which is why I'm worried the 'fix' is not actually a fix). If I can put together something concise which reproduces the issue I will post!
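    For reference, the 2038 boundary comes from representing time as a signed 32-bit count of seconds since 1970-01-01 UTC. A minimal C++ sketch (not the Perl in question; it assumes a platform where time_t is 64-bit so the boundary itself can still be converted) that shows exactly where a 32-bit time value runs out:

        #include <cstdint>
        #include <cstdio>
        #include <ctime>

        int main() {
            // Largest value a signed 32-bit time counter can hold: 2^31 - 1 seconds after the epoch.
            std::time_t boundary = static_cast<std::time_t>(INT32_MAX);

            std::tm* utc = std::gmtime(&boundary);
            char buf[64];
            std::strftime(buf, sizeof buf, "%Y-%m-%d %H:%M:%S UTC", utc);
            std::printf("32-bit time overflows after: %s\n", buf);   // 2038-01-19 03:14:07 UTC

            // Dates beyond this instant cannot be represented in 32 bits, which is what
            // makes 32-bit localtime-style conversions fail with errors such as "Day too big".
            return 0;
        }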

    Read the article

  • Are GUID primary keys bad in theory, or just practice?

    - by Yarin
    Whenever I design a database I automatically start with an auto-generated GUID primary key for each of my tables (excepting look-up tables). I know I'll never lose sleep over duplicate keys, merging tables, etc. To me it just makes sense philosophically that any given record should be unique across all domains, and that that uniqueness should be represented in a consistent way from table to table. I realize it will never be the most performant option, but putting performance aside, I'd like to know if there are philosophical arguments against this practice.

    Read the article

  • How do I submit really big amounts of data to a form?

    - by William Calleja
    I have an HTML form that's posting a really big amount of data which is eventually saved into SQL Server 2005. The form is as follows: <form name="frmForm" method="post" action="saveData.aspx"> The target page takes the content of a control within the form and saves it to the database through a normal SQL insert statement. However, only a portion of the data is being saved. The field in the database is an ntext. Should I use a different field type? Or is something happening while I'm transferring from one page to another? Or is something happening when I'm sending the really big SQL statement through C# in saveData.aspx?

    Read the article

  • How to Split a Big Postscript file (3000 pages) into one individual file per page (using Windows 7)?

    - by Pablo
    Hi, I'm having trouble doing the following: I have a big PDF file that I converted to PostScript (for commercial printing). The resulting file is too big to be processed by the printer. I've been trying to find a way to either (a) convert the original multi-page PDF file into many PostScript files (one PostScript file per PDF page in the original PDF), or (b) convert from PDF to PS (or even EPS) - I managed to do this - and then split the PS file into a collection of smaller files. I've tried using Ghostscript, but it is all gibberish to me. Thanks. PS. If you have a good GS tutorial (for dummies?), please share the link.

    Read the article

  • A Guided Tour of Complexity

    - by JoshReuben
    I just re-read Complexity – A Guided Tour by Melanie Mitchell, protégée of Douglas Hofstadter (author of "Gödel, Escher, Bach") http://www.amazon.com/Complexity-Guided-Tour-Melanie-Mitchell/dp/0199798109/ref=sr_1_1?ie=UTF8&qid=1339744329&sr=8-1 – here are some notes and links:

    The field evolved from Cybernetics, General Systems Theory, and Synergetics. Some interesting transdisciplinary fields to investigate: Chaos Theory - http://en.wikipedia.org/wiki/Chaos_theory – small differences in initial conditions (such as those due to rounding errors in numerical computation) yield widely diverging outcomes for chaotic systems, rendering long-term prediction impossible. System Dynamics / Cybernetics - http://en.wikipedia.org/wiki/System_Dynamics – the study of how feedback changes system behavior. Network Theory - http://en.wikipedia.org/wiki/Network_theory – leverages Graph Theory to analyze symmetric / asymmetric relations between discrete objects. Algebraic Topology - http://en.wikipedia.org/wiki/Algebraic_topology – leverages abstract algebra to analyze topological spaces.

    There are limits to deterministic systems & to computation. Chaos Theory definitely applies to training an ANN (artificial neural network) – different weights will emerge depending upon the random selection of the training set. In recursive non-linear systems http://en.wikipedia.org/wiki/Nonlinear_system output is not directly inferable from input, e.g. the logistic map: x(t+1) = R · x(t) · (1 − x(t)). Different types of bifurcations, attractor states and oscillations may occur – e.g. a Lorenz Attractor http://en.wikipedia.org/wiki/Lorenz_system. The Feigenbaum Constants http://en.wikipedia.org/wiki/Feigenbaum_constants express ratios in the bifurcation diagram of a non-linear map – the ratio of intervals between successive period-doubling bifurcations converges to approximately 4.6692016.

    Maxwell's Demon - http://en.wikipedia.org/wiki/Maxwell%27s_demon - the Second Law of Thermodynamics has only a statistical certainty – the universe (and thus information) tends towards entropy. While any computation can theoretically be done without expending energy, with finite memory the act of erasing memory is permanent and increases entropy. Life & thought are a counter-example to the universe's tendency towards entropy. Leo Szilard and later Claude Shannon came up with the information-theoretic notion of entropy - http://en.wikipedia.org/wiki/Entropy_(information_theory) – whereby Shannon entropy quantifies the expected value of a message's information in bits, in order to determine channel capacity and leverage Coding Theory (compression analysis). Ludwig Boltzmann came up with Statistical Mechanics - http://en.wikipedia.org/wiki/Statistical_mechanics – whereby our Newtonian perception of continuous reality is a probabilistic and statistical aggregate of many discrete quantum microstates. This is relevant for Quantum Information Theory http://en.wikipedia.org/wiki/Quantum_information and the Physics of Information - http://en.wikipedia.org/wiki/Physical_information.

    Hilbert's Problems http://en.wikipedia.org/wiki/Hilbert's_problems pondered whether mathematics is complete, consistent, and decidable (the Decision Problem – http://en.wikipedia.org/wiki/Entscheidungsproblem – is there always an algorithm that can determine whether a statement is true?). Gödel's Incompleteness Theorems http://en.wikipedia.org/wiki/G%C3%B6del's_incompleteness_theorems proved that mathematics cannot be both complete and consistent (e.g. "This statement is not provable").
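    As a small illustration of the logistic map x(t+1) = R · x(t) · (1 − x(t)) mentioned above, here is a minimal C++ sketch (parameter values are arbitrary choices for illustration) showing sensitive dependence on initial conditions in the chaotic regime R = 4:

        #include <cstdio>

        // Iterate the logistic map for two almost-identical starting points and
        // watch the trajectories separate - the hallmark of chaos.
        int main() {
            const double R = 4.0;        // chaotic regime
            double a = 0.200000;         // initial condition
            double b = 0.200001;         // same, perturbed by 1e-6

            for (int t = 0; t <= 50; ++t) {
                if (t % 10 == 0)
                    std::printf("t=%2d  a=%.6f  b=%.6f  |a-b|=%.6f\n",
                                t, a, b, a > b ? a - b : b - a);
                a = R * a * (1.0 - a);
                b = R * b * (1.0 - b);
            }
            return 0;
        }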
    Turing, through the use of Turing Machines (http://en.wikipedia.org/wiki/Turing_machine – symbol processors that can prove mathematical statements) and Universal Turing Machines (http://en.wikipedia.org/wiki/Universal_Turing_machine – Turing Machines that can emulate any other Turing Machine by accepting programs as well as data as input symbols), showed that computation is limited by demonstrating the Halting Problem http://en.wikipedia.org/wiki/Halting_problem (it is not possible in general to know whether a program will complete – you cannot build an infinite-loop detector).

    You may be used to thinking of 1 / 2 / 3 dimensional systems, but Fractal http://en.wikipedia.org/wiki/Fractal systems are defined by self-similarity & have non-integer Hausdorff Dimensions! http://en.wikipedia.org/wiki/List_of_fractals_by_Hausdorff_dimension – the fractal dimension quantifies the number of copies of a self-similar object at each level of detail – e.g. the Koch Snowflake - http://en.wikipedia.org/wiki/Koch_snowflake. Definitions of complexity: size, Shannon entropy, Algorithmic Information Content (http://en.wikipedia.org/wiki/Algorithmic_information_theory - the size of the shortest program that can generate a description of an object), logical depth (amount of info processed), thermodynamic depth (resources required). Complexity is statistical and fractal.

    John Von Neumann's other machine was the Self-Reproducing Automaton http://en.wikipedia.org/wiki/Self-replicating_machine. Cellular Automata http://en.wikipedia.org/wiki/Cellular_automaton are an alternative form of Universal Turing machine to traditional Von Neumann machines, where grid cells are locally synchronized with their neighbors according to a rule. Conway's Game of Life http://en.wikipedia.org/wiki/Conway's_Game_of_Life demonstrates various emergent constructs such as "Glider Guns" and "Spaceships". Cellular Automata are not practical because logical ops require a large number of cells – wasteful & inefficient – and there are no compilers or general-purpose programming languages available for them (as far as I am aware). Random Boolean Networks http://en.wikipedia.org/wiki/Boolean_network are extensions of cellular automata where nodes are connected at random (not to spatial neighbors) and each node has its own rule – they demonstrate the emergence of complex & self-organized behavior.

    Stephen Wolfram's (creator of Mathematica, so give him the benefit of the doubt) New Kind of Science http://en.wikipedia.org/wiki/A_New_Kind_of_Science proposes the universe may be a discrete Finite State Automaton http://en.wikipedia.org/wiki/Finite-state_machine whereby reality emerges from simple rules. I am 2/3 through this book. It is feasible that the universe is quantum-discrete at the Planck scale and that it computes itself – Digital Physics: http://en.wikipedia.org/wiki/Digital_physics – a simulated reality? Anyway, all behavior is supposedly derived from simple algorithmic rules & falls into 4 patterns: uniform, nested / cyclical, random (Rule 30 http://en.wikipedia.org/wiki/Rule_30) & mixed (Rule 110 - http://en.wikipedia.org/wiki/Rule_110 – localized structures; it is this that is interesting). Interaction between colliding propagating signal inputs is then information processing. Wolfram proposes the Principle of Computational Equivalence - http://mathworld.wolfram.com/PrincipleofComputationalEquivalence.html - all processes that are not obviously simple can be viewed as computations of equivalent sophistication.
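    Since Rule 30 and Rule 110 come up above, here is a minimal C++ sketch of an elementary cellular automaton (the rule number, grid width and step count are arbitrary choices); each cell's next state is read off from the bits of the Wolfram rule number, indexed by the 3-cell neighborhood:

        #include <cstdio>
        #include <vector>

        int main() {
            const unsigned rule = 30;            // try 110 for the "mixed" class
            const int width = 64, steps = 32;

            std::vector<int> cells(width, 0), next(width, 0);
            cells[width / 2] = 1;                // single seed cell in the middle

            for (int t = 0; t < steps; ++t) {
                for (int i = 0; i < width; ++i)
                    std::putchar(cells[i] ? '#' : '.');
                std::putchar('\n');

                for (int i = 0; i < width; ++i) {
                    int l = cells[(i + width - 1) % width];   // wrap around at the edges
                    int c = cells[i];
                    int r = cells[(i + 1) % width];
                    int idx = (l << 2) | (c << 1) | r;        // neighborhood code 0..7
                    next[i] = (rule >> idx) & 1u;             // corresponding bit of the rule
                }
                cells.swap(next);
            }
            return 0;
        }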
    Meaning in information may emerge from analogy & conceptual slippages – see the CopyCat program: http://cognitrn.psych.indiana.edu/rgoldsto/courses/concepts/copycat.pdf. Scale-Free Networks http://en.wikipedia.org/wiki/Scale-free_network have a degree distribution governed by a Power Law (http://en.wikipedia.org/wiki/Power_law - much more common than the Normal Distribution). They are characterized by hubs (resilience to random deletion of nodes), heterogeneity of degree values, self-similarity, & small-world structure. They grow via preferential attachment http://en.wikipedia.org/wiki/Preferential_attachment – tipping points triggered by positive feedback loops. Two theories of cascading system failures in complex systems are Self-Organized Criticality http://en.wikipedia.org/wiki/Self-organized_criticality and Highly Optimized Tolerance http://en.wikipedia.org/wiki/Highly_optimized_tolerance. Computational Mechanics http://en.wikipedia.org/wiki/Computational_mechanics – the use of computational methods to study phenomena governed by the principles of mechanics.

    This book is a great intuition pump, but does not cover the more mathematical subject of Computational Complexity Theory – http://en.wikipedia.org/wiki/Computational_complexity_theory. I am currently reading a book on that subject: http://www.amazon.com/Computational-Complexity-Christos-H-Papadimitriou/dp/0201530821/ref=pd_sim_b_1 – stay tuned for that review!
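    A minimal sketch of the preferential-attachment growth mentioned above (Barabási–Albert style; node count and random seed are arbitrary): each new node links to an existing node with probability proportional to that node's current degree, which is what produces the heavy-tailed, scale-free degree distribution:

        #include <cstdio>
        #include <random>
        #include <vector>

        int main() {
            const int nodes = 10000;
            std::mt19937 rng(42);

            // One entry per edge endpoint: sampling uniformly from this list picks
            // nodes in proportion to their degree ("the rich get richer").
            std::vector<int> endpoints = {0, 1};          // start with a single edge 0-1
            std::vector<int> degree(nodes, 0);
            degree[0] = degree[1] = 1;

            for (int v = 2; v < nodes; ++v) {
                std::uniform_int_distribution<int> pick(0, static_cast<int>(endpoints.size()) - 1);
                int target = endpoints[pick(rng)];        // degree-proportional choice
                ++degree[v];
                ++degree[target];
                endpoints.push_back(v);
                endpoints.push_back(target);
            }

            // Crude log-binned histogram of node degrees.
            for (int d = 1; d <= 256; d *= 2) {
                int count = 0;
                for (int v = 0; v < nodes; ++v)
                    if (degree[v] >= d && degree[v] < 2 * d) ++count;
                std::printf("degree in [%3d, %3d): %d nodes\n", d, 2 * d, count);
            }
            return 0;
        }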

    Read the article

  • Content appearing under multiple categories; anything I can do to prevent duplicate penalty?

    - by dave
    I'm working with a CMS that allows me to post content into multiple categories. So, I have this link: www.site.com/category/green-cars

        Here are the GREEN cars
        TITLE: A Big green car
        INTRO: this is a great big green car.

    But then I have this link: www.site.com/category/big-cars

        Here are the BIG cars
        TITLE: A Big green car
        INTRO: this is a great big green car.

    So essentially, for every item of content, the header and the intro sentence are the same regardless of the category the item appears in. Will a search engine penalise the site for having the same content in this way? I've looked at canonical links, but I don't think they are relevant here. All my content points to the same page - but the content may appear in multiple categories first. Or am I worrying about nothing? Thanks.

    Read the article

  • Examples of different architecture methodologies

    - by Lane
    Is there a resource or site which illustrates building the same application (desktop or web) using several different contrasting architectures? Such as MVP versus MVVM versus MVC, etc. It would be very helpful to see how they look side-by-side using real-world code instead of comparing written theory to written theory. I've often found that something can be described well in a book, but when you go to implement it, the subtleties and weaknesses of the theory become readily apparent.

    Read the article

  • What algorithms do "the big ones" use to cluster news?

    - by marco92w
    I want to cluster texts for a news website. At the moment I use this algorithm to find the related articles. But I found out that PHP's similar_text() gives very good results, too. What sort of algorithms do "the big ones" - Google News, Topix, Techmeme, Wikio, Megite etc. - use? Of course, you don't know exactly how the algorithms work; it's secret. But maybe someone knows approximately the way they work? The algorithm I use at the moment is very slow. It only compares two articles at a time, so to compute the relations between 5,000 articles you need about 12,500,000 comparisons. This is quite a lot. Are there alternatives to reduce the number of necessary comparisons? [I'm not looking for improvements to my algorithm.] What do "the big ones" do? I'm sure they don't always compare one article to another, 12,500,000 times for 5,000 news articles. It would be great if somebody could say something about this topic.
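    I don't know what the big aggregators actually run, but one common way to avoid the full n·(n−1)/2 comparison is to build an inverted index and only score pairs of articles that share at least one token. A rough C++ sketch of that candidate-generation idea (the tokenization and the sample articles are deliberately simplistic; real systems also drop stop words, weight terms, use shingles/MinHash, etc.):

        #include <cstdio>
        #include <set>
        #include <sstream>
        #include <string>
        #include <unordered_map>
        #include <utility>
        #include <vector>

        int main() {
            std::vector<std::string> articles = {
                "germany wins world cup final",
                "world cup final ends with germany victory",
                "new smartphone released today",
            };

            // Inverted index: token -> ids of the articles containing it.
            std::unordered_map<std::string, std::vector<int>> index;
            for (int id = 0; id < (int)articles.size(); ++id) {
                std::istringstream in(articles[id]);
                std::string tok;
                while (in >> tok) index[tok].push_back(id);
            }

            // Only article pairs that share a token are worth comparing in detail.
            std::set<std::pair<int, int>> candidates;
            for (const auto& entry : index) {
                const std::vector<int>& ids = entry.second;
                for (int i = 0; i < (int)ids.size(); ++i)
                    for (int j = i + 1; j < (int)ids.size(); ++j)
                        if (ids[i] != ids[j])
                            candidates.insert(std::make_pair(ids[i], ids[j]));
            }

            for (const auto& p : candidates)
                std::printf("compare article %d with article %d\n", p.first, p.second);
            return 0;
        }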

    Read the article

  • Shift count negative or too big error - correct solution?

    - by PeterK
    I have the following function for reading a big-endian quadword (in an abstract base file I/O class):

        unsigned long long CGenFile::readBEq(){
            unsigned long long qT = 0;
            qT |= readb() << 56;
            qT |= readb() << 48;
            qT |= readb() << 40;
            qT |= readb() << 32;
            qT |= readb() << 24;
            qT |= readb() << 16;
            qT |= readb() << 8;
            qT |= readb() << 0;
            return qT;
        }

    The readb() function reads a BYTE. Here are the typedefs used:

        typedef unsigned char BYTE;
        typedef unsigned short WORD;
        typedef unsigned long DWORD;

    The thing is that I get 4 compiler warnings on the first four lines with the shift operation:

        warning C4293: '<<' : shift count negative or too big, undefined behavior

    I understand why this warning occurs, but I can't seem to figure out how to get rid of it correctly. I could do something like:

        qT |= (unsigned long long)readb() << 56;

    This removes the warning, but isn't there any other problem? Will the BYTE be correctly extended all the time? Maybe I'm just thinking about it too much and the solution is that simple. Can you guys help me out here? Thanks.
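    For what it's worth, the warning appears because readb()'s BYTE result is promoted to a (32-bit) int before the shift, so shifting it by 32-56 bits is undefined behaviour; converting each byte to unsigned long long first makes the shift happen in 64 bits and zero-extends the BYTE correctly. A self-contained sketch of that cast-before-shift approach (readb_stub here is a hypothetical stand-in for the class's readb()):

        #include <cstdio>

        // Hypothetical stand-in for readb(): returns the next byte of the stream.
        static unsigned char readb_stub(const unsigned char*& p) { return *p++; }

        // Big-endian 64-bit read: cast each byte to a 64-bit type BEFORE shifting.
        static unsigned long long readBEq(const unsigned char*& p) {
            unsigned long long qT = 0;
            for (int shift = 56; shift >= 0; shift -= 8)
                qT |= static_cast<unsigned long long>(readb_stub(p)) << shift;
            return qT;
        }

        int main() {
            const unsigned char data[] = {0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07, 0x08};
            const unsigned char* p = data;
            std::printf("%llx\n", readBEq(p));   // prints 102030405060708
            return 0;
        }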

    Read the article

  • How to convert Big Endian and how to flip the highest bit?

    - by Robert Frank
    I am using a TStream to read binary data (thanks to this post: http://stackoverflow.com/questions/2878180/how-to-use-a-tfilestream-to-read-2d-matrices-into-dynamic-array). My next problem is that the data is Big Endian. From my reading, the Swap() method is seemingly deprecated. How would I swap the types below?

        16-bit two's complement binary integer
        32-bit two's complement binary integer
        64-bit two's complement binary integer
        IEEE single precision floating-point - are IEEE floats affected by Big Endian?

    And, finally, since the data is unsigned, the creators of this dataset have stored the unsigned values as signed integers (excluding the IEEE floats). They instruct that one need only add an offset (2^15, 2^31, or 2^63) to recover the unsigned data. But they note that flipping the most significant bit is the fastest way to do that. How does one efficiently flip the most significant bit of a 16-, 32-, or 64-bit integer? So, if the data on disk (16-bit) is "85 FB", the desired result after reading the data, swapping, and bit flipping would be 1531. Is there a way to accomplish the swapping and bit flipping with generics so it fits into the generic answer at the link above? Yes, kids, THIS is how scientific astronomical data is stored by NASA, ESO, and all professional astronomers. This FITS standard is considered by some to be one of the most successful standards ever created, in its proliferation and flexibility!
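    Not Delphi, but a small C++ sketch of the arithmetic (assembling the value most-significant byte first makes the byte swap implicit, and XOR-ing the sign bit is the "flip the most significant bit" trick, equivalent to adding 2^15 for 16-bit data). IEEE singles are affected by endianness too, but only at the byte level: swap the 4 raw bytes and reinterpret them as a float; no offset or bit flip is needed for them.

        #include <cstdint>
        #include <cstdio>

        int main() {
            const unsigned char disk[2] = {0x85, 0xFB};     // bytes as stored on disk (big-endian)

            // Assemble MSB-first, regardless of host endianness: this IS the byte swap.
            std::uint16_t raw = static_cast<std::uint16_t>((disk[0] << 8) | disk[1]);   // 0x85FB

            // Flip the most significant bit == add 2^15 modulo 2^16.
            std::uint16_t unsigned_value = raw ^ 0x8000;

            std::printf("%u\n", static_cast<unsigned>(unsigned_value));   // prints 1531

            // The same idea scales up: x ^ 0x80000000u for 32-bit values,
            // x ^ 0x8000000000000000ull for 64-bit values.
            return 0;
        }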

    Read the article

  • Touchpad not working on an Acer One 725

    - by big-marc
    I installed Ubuntu on a USB drive on an Acer One 725, and cannot seem to be able to make the touchpad work. Here is what I have from xinput:

        big-marc@Big-Marc:~$ xinput --list
        ⎡ Virtual core pointer                    id=2    [master pointer  (3)]
        ⎜   ↳ Virtual core XTEST pointer          id=4    [slave  pointer  (2)]
        ⎜   ↳ Logitech USB Optical Mouse          id=10   [slave  pointer  (2)]
        ⎜   ↳ ETPS/2 Elantech Touchpad            id=13   [slave  pointer  (2)]

    Can anyone help me out?

    Read the article

  • How to make an inner shadow effect on big font using CSS text-shadow?

    - by Relax
    I want to make the effect demonstrated on this site http://dropshadow.webvex.limebits.com/ with the arguments: left: 0, top: 0, blur: 1, opacity: 1, examples: engraved, font: sans serif. I tried #333333 -1px -1px but it seems not enough to make an inner shadow on such a big font; it may be much more complex than I thought? And worse, I'm using Cufon to replace the font, but Cufon doesn't support the blur of text-shadow.

    Read the article

  • For a big website's CSS, should we use IE conditional stylesheets or CSS hacks in a multi-person environment?

    - by metal-gear-solid
    For a big website's CSS, what should we use: IE conditional stylesheets (IE6, IE7, IE8 if needed) or CSS hacks, in an environment where the CSS will be handled by multiple people? I'm thinking of using hacks with proper comments, because with separate stylesheets there is a chance that others will forget to make a change in both CSS files. For example: #ab { width:200px } in the main CSS and #ab { width:210px } in the IE CSS. Need your views on this. Thanks in advance.

    Read the article

  • How can I scroll through a big file in Vim?

    - by Luc M
    Hello, I have a big file with thousands of lines of thousands of characters. I move the cursor to the 3000th character. If I use PageDown or <CTRL>-D, the file will scroll, but the cursor comes back to the first non-blank character. Is there an option to set to keep the cursor in the same column after such a scroll? I see this behavior with gvim on Windows, and vim on OpenVMS and Cygwin. Regards

    Read the article
