Search Results

Search found 1959 results on 79 pages for 'behavioural science'.


  • How can I be prepared to join a company?

    - by Aerovistae
    There's more to it than that, but this title was the best way I could think of to sum it up. I'm a senior in a good computer science program, and I'm graduating early. About to start interviews and all whatnot. I'm not a super-experienced programmer, not one of those people who started in middle school. I'm decent at this, but I'm not among the best, not nearly. I have to do an awful lot of googling. So today I'm meeting some fellow for lunch at a campus cafe to discuss some front-end details when this tall, good-looking guy begs pardon, says he's new to campus, says he's wondering if we know where he can go to sign up for recruiting developers. Quickly evolves into long conversation: he's the CEO of a seems-to-be-doing-well start-up. Hiring passionate interns and full-times. Sounds great! I take one look at his site on my own computer later, immediately spot a major bug. No idea how to fix it, but I see it. I go over to the page code, and good god. It's the standard amount of code you would expect from a full-scale web application, a couple dozen pages of HTML and scripts. I don't even know where to start reading it. I've built sites from scratch, but obviously never on that scale, nor have I ever worked on one of that scale. I have no idea which bit might generate the bug. But that sets me thinking: How could someone like me possibly settle into an environment like that? A start-up is a very high-pressure working environment. I don't know if I can work at that pace under those constraints-- I would hate to let people down. And with only 10 employees, it's not like anyone has much time to help you get your bearings. Somewhere in there is a question. Can you see it? I'm asking for general advice here. Maybe even anecdotal advice. Is joining a start-up right out of college a scary process? Am I overestimating what it would take to figure out the mass of code behind this site? What's the likelihood a decent but only moderately-experienced coder could earn his pay at such a place? For instance, I know nothing of server-side/back-end programming. Never touched it. That scares me.

    Read the article

  • New grad: To overcome a complete lack of experience, should I ditch a creative pet project in favor of one that would demonstrate more applicable skills?

    - by Hart Simha
    I am currently working on a project on GitHub that I think would be a good demonstration of my initiative, creativity and enthusiasm. It is an educational game I am developing in pygame that enables the user to learn to improve their development productivity by using vim, specifically with Python, though learning to code faster with vim should be transferable to any language. I think this is something that might have mass appeal and benefit a lot of people in a measurable way. However, I am graduating from college in a month (my degree is computer science with a minor in English), with no experience that is relevant to helping me get any kind of job in the field, and a GPA that doesn't tout my merits. I could pursue a career in game development, but it's not necessarily what I'm most interested in, and I see myself applying to startups around the country. To the places I am looking at applying, showing that I have experience with pygame is going to be largely irrelevant, except as a demonstration of my ability to code, period. A lot of skills that ARE more marketable, such as data modeling, GIS, mobile application development, JavaScript, the .NET framework, and various web development technologies, are not going to be showcased by this project (on the upside, employers do like to see familiarity with git and Python). I'm wondering if I should sink all my free time in the next couple of months into this project, since I'm motivated and interested in it, and if the value of being able to demonstrate ambition and 'good ideas' (for lack of a better term, and in my own opinion) will compensate for the absence of demonstrating more sought-after skills. I am probably at a point where I should either commit fully to this project now or put it on the back burner in favor of something else, and I am leaning towards continuing with what I am already working on, because I think it's a great idea and something achievable for me with enough dedication over the next couple of months. But the most important thing to me is being able to get a job out of college, which I am exceedingly concerned about, as the professional landscape I am navigating for the first time is a lot more intimidating than I could have anticipated, with almost every job (even short-term contract positions) requiring years of experience which I lack. So in brief, the common denominator in answering the question "How can I overcome experience requirements for a job?" seems to be "Show off your own project." I want to know WHICH project I should work on to best increase my chances of getting a job out of college, keeping in mind that I have no experience. I believe this question is applicable to any new grad who lacks demonstrable experience.

    Read the article

  • Are closures with side-effects considered "functional style"?

    - by Giorgio
    Many modern programming languages support some concept of closure, i.e. a piece of code (a block or a function) that: can be treated as a value, and therefore stored in a variable, passed around to different parts of the code, and defined in one part of a program and invoked in a totally different part of the same program; and can capture variables from the context in which it is defined and access them when it is later invoked (possibly in a totally different context). Here is an example of a closure written in Scala: def filterList(xs: List[Int], lowerBound: Int): List[Int] = xs.filter(x => x >= lowerBound) The function literal x => x >= lowerBound contains the free variable lowerBound, which is closed (bound) by the argument of the function filterList that has the same name. The closure is passed to the library method filter, which can invoke it repeatedly as a normal function. I have been reading a lot of questions and answers on this site and, as far as I understand, the term closure is often automatically associated with functional programming and functional programming style. The definition of functional programming on Wikipedia reads: In computer science, functional programming is a programming paradigm that treats computation as the evaluation of mathematical functions and avoids state and mutable data. It emphasizes the application of functions, in contrast to the imperative programming style, which emphasizes changes in state. and further on [...] in functional code, the output value of a function depends only on the arguments that are input to the function [...]. Eliminating side effects can make it much easier to understand and predict the behavior of a program, which is one of the key motivations for the development of functional programming. On the other hand, many closure constructs provided by programming languages allow a closure to capture non-local variables and change them when the closure is invoked, thus producing a side effect on the environment in which they were defined. In this case, closures implement the first idea of functional programming (functions are first-class entities that can be moved around like other values) but neglect the second idea (avoiding side effects). Is this use of closures with side effects considered functional style, or are closures considered a more general construct that can be used both for a functional and a non-functional programming style? Is there any literature on this topic? IMPORTANT NOTE: I am not questioning the usefulness of side effects or of having closures with side effects. Also, I am not interested in a discussion about the advantages / disadvantages of closures with or without side effects. I am only interested to know whether using such closures is still considered functional style by proponents of functional programming or if, on the contrary, their use is discouraged when using a functional style.
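
    To make the distinction concrete, here is a small Scala sketch of my own (not part of the original question, and only one way to illustrate it): the first closure merely reads the variable it captures, while the second mutates a captured var and therefore has an observable side effect on the environment in which it was defined.

    object ClosureSideEffects {
      def main(args: Array[String]): Unit = {
        val lowerBound = 3

        // Pure use of a closure: it only reads the captured lowerBound.
        val kept = List(1, 5, 2, 7).filter(x => x >= lowerBound)
        println(kept) // List(5, 7)

        // Closure with a side effect: every invocation mutates the captured
        // var `count`, changing state outside the closure itself.
        var count = 0
        val countingPredicate = (x: Int) => { count += 1; x >= lowerBound }
        println(List(1, 5, 2, 7).filter(countingPredicate)) // List(5, 7)
        println(count) // 4 -- the defining environment has been changed
      }
    }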

    Read the article

  • Understanding the levels of computing

    - by RParadox
    Sorry for my confused question; I'm looking for some pointers. Up to now I have been working mostly with Java and Python on the application layer, and I have only a vague understanding of operating systems and hardware. I want to understand much more about the lower levels of computing, but it gets really overwhelming somehow. At university I took a class about microprogramming, i.e. how processors get hard-wired to implement the ASM codes. Up to now I always thought I wouldn't get more done if I learned more about the "low level". One question I have is: how is it even possible that hardware gets hidden almost completely from the developer? Is it accurate to say that the operating system is a software layer for the hardware? One small example: in programming I have never come across the need to understand what L2 or L3 cache is. For the typical business application environment one almost never needs to understand assembler and the lower levels of computing, because nowadays there is a technology stack for almost anything. I guess the whole point of these lower levels is to provide an interface to higher levels. On the other hand I wonder how much influence the lower levels can have, for example this whole graphics computing thing. So, on the other hand, there is this theoretical computer science branch, which works on abstract computing models. However, I have also rarely encountered situations where I found it helpful to think in the categories of complexity models, proof verification, etc. I sort of know that there is a complexity class called NP, and that such problems are kind of impossible to solve for a big N. What I'm missing is a reference for a framework to think about these things. It seems to me that there are all kinds of different camps, which rarely interact. The last few weeks I have been reading about security issues. Here, somehow, many of the different layers come together. Attacks and exploits almost always occur on the lower level, so in this case it is necessary to learn about the details of the OSI layers, the inner workings of an OS, etc.
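
    As one small, concrete illustration of a lower level showing through (my own sketch, not from the question, and deliberately naive): the Scala program below sums the same 2D array twice, once row by row and once column by column. The logical work is identical, but the row-major order walks memory sequentially and therefore makes much better use of the CPU caches, which usually shows up as a clear timing difference even though nothing in the source mentions a cache.

    object CacheLocality {
      def main(args: Array[String]): Unit = {
        val n = 2048
        val grid = Array.ofDim[Long](n, n) // an array of n row arrays

        // Very rough timing helper; a real measurement would need JIT warm-up.
        def time(label: String)(body: => Long): Unit = {
          val start = System.nanoTime()
          val sum = body
          println(s"$label: sum=$sum, ${(System.nanoTime() - start) / 1000000} ms")
        }

        // Row-major: consecutive elements of one row array, cache-friendly.
        time("row-major") {
          var sum = 0L
          var i = 0
          while (i < n) {
            var j = 0
            while (j < n) { sum += grid(i)(j); j += 1 }
            i += 1
          }
          sum
        }

        // Column-major: each access jumps to a different row array,
        // so the caches and prefetcher help far less.
        time("column-major") {
          var sum = 0L
          var j = 0
          while (j < n) {
            var i = 0
            while (i < n) { sum += grid(i)(j); i += 1 }
            j += 1
          }
          sum
        }
      }
    }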

    Read the article

  • Do OO, TDD, and Refactoring to Smaller Functions affect the Speed of Code?

    - by Dennis
    In the computer science field, I have noticed a notable shift in thinking when it comes to programming. The advice as it stands now is: write smaller, more testable code; refactor existing code into smaller and smaller chunks until most of your methods/functions are just a few lines long; write functions that only do one thing (which makes them smaller again). This is a change compared to the "old" or "bad" code practices where you had methods spanning 2500 lines and big classes doing everything. My question is this: when it all comes down to machine code, to 1s and 0s, to assembly instructions, should I be at all concerned that my class-separated code with a variety of small-to-tiny functions generates too much extra overhead? While I am not exactly familiar with how OO code and function calls are handled in ASM in the end, I do have some idea. I assume that each extra function call, object call, or include call (in some languages) generates an extra set of instructions, thereby increasing the code's volume and adding various overhead without adding actual "useful" code. I also imagine that good optimizations can be done to the ASM before it is actually run on the hardware, but that optimization can only do so much too. Hence, my question -- how much overhead (in space and speed) does well-separated code (split up across hundreds of files, classes, and methods) actually introduce compared to having "one big method that contains everything", due to this overhead? UPDATE for clarity: I am assuming that adding more and more functions and more and more objects and classes to the code will result in more and more parameter passing between smaller code pieces. It was said somewhere (quote TBD) that up to 70% of all code is made up of the ASM MOV instruction - loading CPU registers with the proper variables, not the actual computation being done. In my case, you load up the CPU's time with PUSH/POP instructions to provide linkage and parameter passing between various pieces of code. The smaller you make your pieces of code, the more "linkage" overhead is required. I am concerned that this linkage adds to software bloat and slow-down, and I am wondering if I should be concerned about this, and how much, if any at all, because current and future generations of programmers who are building software for the next century will have to live with and consume software built using these practices. UPDATE: Multiple files. I am writing new code now that is slowly replacing old code. In particular I've noted that one of the old classes was a ~3000 line file (as mentioned earlier). Now it is becoming a set of 15-20 files located across various directories, including test files and not including the PHP framework I am using to bind some things together. More files are coming as well. When it comes to disk I/O, loading multiple files is slower than loading one large file. Of course not all files are loaded - they are loaded as needed, and disk caching and memory caching options exist - and yet I still believe that loading multiple files takes more processing than loading a single file into memory. I am adding that to my concern.
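
    For what it's worth, here is a deliberately naive Scala sketch of my own (not from the question) that frames the worry as an experiment: the same sum-of-squares loop written once as a chain of tiny single-purpose functions and once as a single inlined loop. On the JVM the JIT compiler typically inlines small, hot methods after warm-up, so the two versions usually end up close in speed, but the honest answer is to measure with a proper harness (JMH or similar) rather than trust either intuition or this toy timing.

    object CallOverhead {
      // The work split into tiny single-purpose functions...
      def square(x: Long): Long = x * x
      def addSquare(acc: Long, x: Long): Long = acc + square(x)

      def sumOfSquaresSplit(limit: Long): Long = {
        var acc = 0L
        var i = 0L
        while (i < limit) { acc = addSquare(acc, i); i += 1 }
        acc
      }

      // ...and the same work written out in one "big" function.
      def sumOfSquaresInline(limit: Long): Long = {
        var acc = 0L
        var i = 0L
        while (i < limit) { acc += i * i; i += 1 }
        acc
      }

      def main(args: Array[String]): Unit = {
        val limit = 100000000L
        // The first pass mostly warms up the JIT; only the second is worth reading.
        for (pass <- Seq("warm-up", "after warm-up")) {
          val t0 = System.nanoTime()
          val a = sumOfSquaresSplit(limit)
          val t1 = System.nanoTime()
          val b = sumOfSquaresInline(limit)
          val t2 = System.nanoTime()
          println(s"$pass: split=${(t1 - t0) / 1000000} ms, inline=${(t2 - t1) / 1000000} ms (sums $a, $b)")
        }
      }
    }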

    Read the article

  • Which one should I select for my future career: Java, C#, Azure, or Apex?

    - by user636195
    Hi folks, I am going to start studying for a Masters in Computer Science in the U.S.A. in a week. I have been doing my B.Sc. for the past three years, and after my freshman year I started working on projects (in C# and very rarely in Java) for the past two years (i.e. while I was a second- and third-year student). Now I am in a college where all of the programming courses are going to be taught in Java only (using Eclipse), and I am going to stay at this college for 8 months on campus and then be fully employed for two years at other companies on CPT. I really love to work on Microsoft products because, for me, they are simple and easy to use and understand. My future plan is to work in cloud computing and be a cloud-based business owner in the near future. Since the college is going to teach us and have us do every project in Java, I am confused about which programming language to use that will help me and enhance my career, and of course I want to select the one I like working with. I have also heard a lot about Azure (Microsoft’s) and Apex (Salesforce.com’s cloud computing programming language). Would you please give me your advice and recommendations based on my situation? Should I study only Java, or should I study C# or Azure besides Java on my own? The reason I ask is that, since I have no clue how Azure works and how long it will take me to learn it, I am really confused about which one to select (Java vs. C# and Azure vs. Apex, or whether there is any popular, widely used cloud computing language). Do you think I can get a job in cloud computing if I study Azure or Apex on my own, without experience? There is also one short-term issue I want to consider: salary. Since I have to pay my student loan, I also need to get a good job which will let me pay off my loan within two years. But, as I said, my long-term plan is to get experience in cloud computing (from programming to the administrative part, i.e. every area of cloud computing) and then have my own business, maybe within 5-10 years. What do you think? Thank you for your time.

    Read the article

  • Is it smart to take a year off from school to get experience?

    - by user134147
    Firstly, I apologize if this question is not appropriate for the site, but I've seen other similar (though slightly deviant) questions on this site before, and I know the people here are the most qualified to answer my question. Anyways, I'm currently between my sophomore and junior years at a 4-year university, and after a bit of deliberation I've decided on computer science as a major (a BA, by the way, as a BS would require me to stay at least an extra year the way our program is set up). I've been interested in programming for a few months now and I've developed a passion for it in a very short time. I began learning C++, migrating to Java recently when I learned my school focuses on that language. Now, I should mention that the concept of higher education has never sat well with me, so part of my motivation for wanting to take time off is to truly challenge myself and see what I can accomplish when I actually try at something. The autodidact in me finds it difficult to focus on my passions while trying to keep a high GPA in unrelated classes. However, I understand the times we live in and therefore would plan to complete my degree after this year. So my question is whether or not the skills I would learn in a year off from college could justify the time away from school. Unfortunately, I don't believe I know enough yet to gain any professional experience (internship, etc.), so I would mostly focus my time on learning Java and other technologies, possibly WordPress (to gain an understanding of web programming concepts, as I have not yet decided what field I want to get into, and to make some money to fund my off-year), and on delving into security concepts, which also interest me. I'm hoping I could work on projects, such as simple applications or contributions to open source software, during this time to enhance my resume once I do finish school, so I can find a job out of college more easily. I do not want to be the new hire who knows nothing beyond the concepts of his Java textbooks. Does anyone have any input about these thoughts of mine, or any ideas for where I should focus my studies or how high I might set the bar for my work? Thanks a lot everyone!

    Read the article

  • Nginx case-insensitive reverse proxy rewrites

    - by BrianM
    I'm looking to set up an nginx reverse proxy to make some upcoming server moves and load-balanced implementations much easier within our apps. Since our servers are all IIS, case sensitivity hasn't been an issue, but now with nginx it's becoming one for me. I am simply looking to do a rewrite regardless of case. Infrastructure notes: all backend servers are IIS; most services are WCF services; I am trying to simplify the URLs so I can move services around as we continue to build out. I can't make my location case-insensitive due to the following error:

        nginx: [emerg] "proxy_pass" cannot have URI part in location given by regular expression, or inside named location, or inside "if" statement, or inside "limit_except" block in /etc/nginx/sites-enabled/test.conf:101

    The main part of my conf file where I am trying to handle the rewrite is as follows:

        location /svc_test {
            proxy_set_header x-real-ip $remote_addr;
            proxy_set_header x-forwarded-for $proxy_add_x_forwarded_for;
            proxy_set_header host $http_host;
            proxy_pass http://backend/serviceSite/WFCService.svc;
        }
        location ~* /test {
            rewrite ^/(.*)/$ /svc_test/$1 last;
        }

    It's the /test location that I can't get figured out. If I call http://nginxserver/svc_test/help I get the WCF help page to display correctly and I can make all available REST calls. This HAS to be a boneheaded regex issue on my part, but I have tried several variations and all I can get are 404 or 500 errors from nginx. This is NOT rocket science, so can someone point me in the right direction so I can look like an idiot and just move on?
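
    One common way around the quoted proxy_pass restriction is to keep proxy_pass in the plain /svc_test prefix location (where a URI part is allowed) and let a regex location do nothing but rewrite and re-dispatch. The snippet below is an untested sketch that reuses the names from the question (backend, svc_test); the rewrite strips the first path segment whatever its case, so /TEST/help becomes /svc_test/help and is then proxied by the existing block.

        # Hypothetical, untested sketch -- not the poster's final configuration.
        # Match /test in any case, then hand the remainder of the path to the
        # existing /svc_test prefix location via an internal rewrite.
        location ~* ^/test(/.*)?$ {
            # ^/[^/]+ consumes the first segment regardless of its case,
            # so the rewrite itself needs no case-insensitive flag.
            rewrite ^/[^/]+(.*)$ /svc_test$1 last;
        }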

    Read the article

  • Where can I ask questions that aren't IT questions?

    - by Adam Davis
    Thank you for your confidence in our abilities, but have you read this FAQ? We get a lot of interesting technical questions on here, but Server Fault is meant to be first and foremost a resource for system administrators and IT professionals. In other words, Server Fault isn't meant to cater to every computer problem - just those that only exist in, or can best be solved in, the IT and sysadmin domain. Yes, someone here might be able to help you, but you'll find that other forums more focused on your topic can give you a much better answer than a bunch of IT professionals. It's likely that your question will be downvoted, closed, and in some cases marked "offensive." It's not that we hate you, it's just that we like to keep our corn pops separate from our cocoa puffs. In that vein, here are a bunch of other forums where you can get help (this list contains the top two forums for each category, as voted for below):

    Programming: Stackoverflow
    Consumer-Level Computer Hardware: Ars Technica OpenForum, Tom's Hardware Forums
    Computer Software: Anandtech Forum
    Web Design/Hosting/CMS: SitePoint Forums, Web Hosting Talk
    Math, Science, Engineering: Physics Forums, PlanetMath
    Other/General: Ask Metafilter, Google Search, Mahalo Answers

    Check out the answers below for even more suggestions! What forums can people go to to ask the questions that are off topic here? Please list only ONE forum per answer so votes can bring the best forums to the top. Return to FAQ Index

    Read the article

  • 'pskill \\hostname winlogon' might budge a server "stuck rebooting", but why?

    - by Snoi
    Question: Executing remote (Sysinternals) command... pskill \\machine winlogon ...can budge a server that is stuck rebooting, but how/why does this work? How do you know which service to kill? To recreate (e.g.): You run Windows Update, allow a reboot, and ...NOTHING! RDP gets cut off but the server does not reboot. Just about every other service seems to stay up. Further Background: I've faced this problem on VMs hosted around the planet for some years, and used various sc.exe and shutdown commands to learn the state of and attempt remote reboot of servers in such a state, with limited success. Most datacentres don't offer any way to see the true console or power off/on such machines. They charge $$ for you to call them to do such simple things after hours, when you nearly always have to run your maint tasks. e.g. NET USE \\machine\IPC$ /USER:login password sc \\machine query RpcSs sc \\machine query TermService sc \\machine query wuauserv tasklist /s machine This occasionally works for me... shutdown /m \\machine /r /f /t: 0 ...but more often than not it fails with: A system shutdown is in progress (1115). I found this question, and the answer by @Tweek, and it worked really well, but was I just lucky? Can not RDP to Win 2003 box or initiate remote restart @Tweek said to run: pskill \\hostname winlogon ...and that got me past this situation in a new way (Server 2008 R2 in my most recent case) - really useful! I just need to understand if I got lucky or there is more science here. What I'd like to know is why the winlogon process? @Livne said to use "tasklist /s HostName" to see what is the culprit, but how do you tell from the listed output? It's just a list of running tasks etc. From that I would not know what to look for, nor could I see anything about the winlogon process that suggested to my eyes that was the one to kill.

    Read the article

  • MICR / Check printing

    - by drewk
    I want to print my own checks on blank stock from Quicken and QuickBooks. To do so, I need a virtual driver that intercepts the print output from Quicken, formats the fields properly, and adds the MICR text at the bottom of the check. This is for Windows 7 or OS X. I have looked at the following:

    CHAX: works, but crashes a LOT, which is not giving me confidence. It is also expensive.
    VersaCheck: looks promising, but appears to suffer from severe feature bloat. I just want to print checks and deposit slips, thank you.
    Ganson: really only for high volume. It is also designed for batch use, whereas I just want to print to a virtual printer. More than $2500.
    GnuMICR: more of a science project. Not yet ready to use, and it has not been updated for years.

    There are a lot of solutions on the web, so please don't just google for me. I am looking for specific experience with a good, solid check printing solution. Thanks!!!

    Read the article

  • Security for university research lab systems

    - by ank
    Being responsible for security in a university computer science department is no fun at all. Let me explain: it is often the case that I get a request for the installation of new hardware or software systems that are really so experimental that I would not dare put them even in the DMZ. If I can avoid it and force an installation in a restricted inside VLAN, that is fine, but occasionally I get requests that need access to the outside world. And actually it makes sense to have such systems access the world for testing purposes. Here is the latest request: a newly developed system that uses SIP is in the final stages of development. This system will enable communication with outside users (that is its purpose, and the point of the research proposal), actually hospital patients who are not very aware of technology. So it makes sense to open it to the rest of the world. What I am looking for is anyone who has experience dealing with such highly experimental systems that need wide outside network access. How do you secure the rest of the network and systems from this security nightmare without hindering research? Is placement in the DMZ enough? Any extra precautions? Any other options, methodologies?

    Read the article

  • Exclude list of specific files in wget

    - by nanker
    I am trying to download a lot of pages from a website on dial-up and it can be brutally slow. I have almost got the perfect wget command, but because I'm downloading pages from the same site, wget wastes time downloading the same standard images for each page. If I know the names of the default page images, is there any way to have wget ignore them and thus avoid downloading them for each and every page? Here is an example of one of the wget commands that my shell script generates into another shell script to download all of the pages:

        mkdir candy-canes-on-the-flannel-board-in-preschool
        cd candy-canes-on-the-flannel-board-in-preschool
        wget -p -nd -A jpg,html -k http://www.teachpreschool.org/2011/12/candy-canes-on-the-flannel-board-in-preschool/
        wget -c --random-wait --timeout=30 --user-agent="Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.9.0.3) Gecko/2008092416 Firefox/3.0.3" http://www.teachpreschool.org/2011/12/candy-canes-on-the-flannel-board-in-preschool/ -O "candy-canes-on-the-flannel-board-in-preschool"
        rm Baby-and-Toddler.jpg Childrens-Books.jpg Creative-Art.jpg Felt-Fun.jpg Happy_Rainbow-e1338766526528.jpg index.html Language-and-Literacy.jpg Light-table-Button.jpg Math.jpg Outdoor-Play.jpg outer-jacket1-300x153.jpg preschoolspot-button-small.jpg robots.txt Science-and-Nature.jpg Signature-2.jpg Story-Telling.jpg Tags-on-Preschool.jpg Teaching-Two-and-Three-Year-olds.jpg
        cd ../

    Now I realize the script is likely not as savvy as it could be, but it is doing what I need at the moment, except that, as you can see from the rm command, I would just like to prevent wget from downloading those files in the first place if possible. I almost forgot to mention: there are two wget commands, because the first one downloads the page as index.html and for some reason it does not open in my browser; however, when I open it and look at it in vim, all of the page's content is there, so I am not sure why it does not open. But if I just issue the second wget command as it is, then that page (the same file really, with an alternate name) opens up fine. That is something that, if I could fix it, would also help to streamline the process.
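
    One hedged suggestion (mine, not from the post): GNU wget has a -R / --reject option that takes a comma-separated list of file names, suffixes, or glob patterns, so the known boilerplate images can be refused at download time instead of being deleted afterwards. A sketch of the first command, reusing a few of the filenames from the rm line (the rest would be added the same way):

        # Sketch only: refuse the known standard images up front with -R/--reject
        # instead of removing them after the fact.
        wget -p -nd -A jpg,html -k \
          -R "Baby-and-Toddler.jpg,Childrens-Books.jpg,Creative-Art.jpg,Felt-Fun.jpg,Light-table-Button.jpg,Signature-2.jpg,Tags-on-Preschool.jpg" \
          http://www.teachpreschool.org/2011/12/candy-canes-on-the-flannel-board-in-preschool/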

    Read the article

  • I cannot access Windows Update at all

    - by Cardinal fang
    I have been unable to access the Windows Update site for a couple of weeks now. I just get a message saying "Internet Explorer cannot display the webpage" and saying I have connection problems. The same thing happens with any other Microsoft site I try to access. Automatic Updates also do not work. I can access every other website I've surfed to. I've tried Googling the problem, and based on what other sites have suggested I have cleared my cache and temp files. I've scanned my hard drive with my antivirus in case I have a virus (nada). I've tried turning off my firewall and anti-virus (I run Zone Alarm). I've downloaded SpyBot and scanned my drive with that in case something was missed by Zone Alarm (again nada). Based on suggestions from the smart cookies on the Bad Science forum, I've used nslookup to check that my DNS translation isn't wonky (got all the info they said I should get). I've also tried navigating there directly using the IP address I was given (nope). I normally access the internet through a 3 mobile broadband connection, but have also tried connecting using a mate's wi-fi connection in case it was something on my mobile modem interfering. I run Windows XP SP3 with Internet Explorer 7 and Zone Alarm Internet Security Suite as my anti-virus/firewall. Any suggestions?

    Read the article

  • Best usage for a laptop being used as a desktop without removable batteries

    - by Senseful
    After reading the information on http://batteryuniversity.com, I realize that one of the best ways to permanently damage a lithium-ion battery is to use the battery at a high temperature while it's fully charged. This is exactly what happens when you use the computer as if it were a desktop computer, since leaving it plugged in will keep the battery at 100% and using the computer will heat up the battery. This is why it's recommended to remove the battery from your laptop if you are using it in this scenario. My question is: what would you do if the laptop doesn't have removable batteries (e.g. a MacBook Pro)? Should I use some kind of charge cycle, such as: charge to 80%, unplug the power cord, use the laptop until it reaches 20%, then repeat the cycle by charging to 80% again? If so, which values should I use instead of 80% and 20%? (I think charging to 80% is better than 100% because of the damage that a hot battery at 100% can do, but I just made the figure 80% up, and I'm sure there's a better number to strive for which is backed by science.) I've read many of the articles on batteryuniversity.com, but couldn't find anything pertaining to this. Update: What about doing something like charging (or discharging) it to 50%, then plugging it in and turning on settings which use the battery as much as possible (e.g. brightness all the way up, wi-fi on, etc.), in order to try to maintain the battery at 50% (i.e. the rate at which it is charging is the same as the rate at which it is discharging)? This will probably heat up the battery, but it would make it so you don't need to constantly plug and unplug the laptop. The one bad thing is that you would be using up more charge cycles, which would decrease the battery life, so I'm not sure this is a good idea.

    Read the article

  • Salary Survey Entry Level network position [closed]

    - by will
    Hello, I started interning with a company about 5 months ago, and for the past 7 months I have been a normal part-time employee. This week I have a review, where I am hoping to get a raise. I started at $8 interning, and now I'm up to $13. What I am trying to figure out is how to survey what others are making in similar positions so I can take it to the review as a base number. Here are my thoughts on my position right now.

    Review Thoughts - My Qualifications:
    • Associates of Applied Science - IT Network Specialist
    • CompTIA A+ certification
    • CompTIA Server+ certification
    • CompTIA Network+ certification
    • Currently pursuing Cisco certifications
    • Junior status at Insert college, pursuing a Bachelors in Information Technology with an emphasis in Networking. 3.4 GPA
    • 1 year of working at Insert Company

    My contributions to Insert Company:
    • Offering near full time through the semester and full time through the summer
    • Ability to work after hours and on weekends
    • Developed and support the helpdesk system
    • Set up and maintain the update server to keep desktop clients up to date
    • Deployed and maintain the antivirus solution for end users
    • Assist with main projects such as SAN, virtualization, and the network survey
    • Migrating end user stations to Windows 7 (current project)
    • Developing an imaging solution for desktop PCs (current project)

    Any tips on determining an asking number would help. I was thinking $17-$18, am I way off here?

    Read the article

  • My SSD stopped working and ends up in a blue screen right after Windows 7 launches; how can I reset it for a new Windows install?

    - by HattoriHanzo
    As I mentioned in the title, my SSD suddenly started to fail: after launching Windows 7 it goes straight to completely idle, and after a few minutes it goes to a blue screen and restarts. I have another HD with Windows XP on it, and that works fine; I can also see the SSD and access everything on it. Windows 7 on the SSD does work in safe mode, though I didn't manage to find out what causes the problem. Since I can still access the files and save them to another HD, I'm looking for the best way to wipe the SSD and reinstall Windows 7. I have so far failed to find an easy-to-follow (or even understandable) guide; different sources on the internet recommend different ways of doing this, and some guides are simply full of terms I have no clue about. It's an OCZ Vertex 2 120GB. I'd very much appreciate it if someone could give me advice on what I should do, preferably in a way that lets me regain the best possible performance as well. It doesn't matter if I don't understand the science behind it as long as I can follow the steps. Thanks!

    Read the article

  • USB-to-Serial showing gibberish at 115200 Baud

    - by Mose
    I've got a serious problem which drives me crazy because I have tried everything I could think of. First of all, I made a video: http://youtu.be/boghkuq7L_s but please read the following text for more information, not only view the video! When using a USB-to-Serial interface everything works as long as I don't go beyond 57600 baud. At higher rates I only get gibberish like this: év.­b0JNLYÆÿ¿iëd0U²(kßÞb! ú]/xscB!ï¯!BoXûÿ1ïâÖCÿ6ÌAnè*íÌC)º¿BíÞØ.C.@ÆÃwHJÂs "YE:ñ.èFðÌCÊ÷ÞÄ !x H w6@BtbHJ ̪ Ì6ì H¾a¿bH.">îvy®;f<ßBÌ p­L¨fæH­E ­þ¼MBÞI What makes the problem so strange is that I have exchanged every component and the problem still persists. I tried different OSes (Ubuntu, WinXP, Win7, OSX 10.7) with 32 and 64 bit. I tried USB-to-Serial interfaces from FTDI and Prolific. I tried reading the output from my Raspberry Pi and from an Asterisk appliance. I changed the cables and the wiring. Nothing helped. In the video I made an example with an old notebook with a native COM port and put the USB-to-Serial on the same connection as a "sniffer" (only Rx and GND connected) to make sure the output and everything is OK, as one can see on the native port. The voltage is OK. Settings for both are 115200 baud, 8 bits with 1 stop bit and no flow control. Native is OK; USB is messed up. I used the newest drivers and double-checked all connections. I have no idea what is wrong here. As I couldn't find anyone describing problems like this, I question my long experience in computer science and think I'm doing something completely wrong... Please help :-/

    Read the article

  • Data loss by randomly unplugging the computer during runtime

    - by Kan
    I'm from Austria, and we and the Germans have a sort of bad science show which runs every day. What I would call it roughly translates to "half-knowledge", if you like. By the way, it is called "Galileo". So they thought they'd make a computer myth-busters video, and I couldn't believe what I saw and heard... The strangest thing to me was that they asked: "Does unplugging the computer damage your data?" Then they started up some machine with Vista on it, started copying some files and randomly unplugged the PC's power cable, the whole thing around 50 times. After their computer continued to start up normally, they just said "nothing can happen, your data or computer can't be damaged". They of course excluded unsaved data in running programs like text editors from this. I asked myself: what the hell are their "computer experts" saying? You can't tell by unplugging the cable 50 times whether that can damage your computer. Can unplugging the cable during runtime cause data loss (as said by the moderator of the show)? (I destroyed my Windows registry once during a reset.)

    Read the article

  • I really need help resolving a Windows Vista BSOD (Blue Screen Crash) on my desktop

    - by anonymous
    Hi, thanks for taking the time to read this. I'll get straight to the details. My desktop is on the fritz; it keeps going to a blue screen with the stop message 0x0000007E immediately after the Vista loading bar, right before transitioning to the account selection screen. My desktop runs on a dual-core 32-bit processor with Windows Vista Home(?) installed. I have 3 GB of RAM as two separate modules, a 1GB Acer module and a 2GB GeIL module. I have an ATI video card; unfortunately I cannot recall the exact name, but the chipset is ATI, the manufacturer is Sapphire, and the card is on the lower end. My hard drive is 320GB (I think), partitioned into two. The C:\ partition is redlined, while the D:\ partition is still pretty empty. As per the advice of my friend, I tried restarting the system with the graphics card removed. Upon failure, I repeated the process removing one RAM module at a time, but the system still failed to load. Vista would attempt to repair the system and it would initially report that the system was fixed, but Vista really failed to fix the problem. After removing the memory modules, Vista started to report its inability to fix the problem. I tried running in safe mode, and the driver listing would always stop at crcdisk.sys. I ran memory diagnostics using the Windows memory diagnostic tool found in the screen after Vista's failed attempt to fix the problem, with no luck. The problem details are as follows:

    Problem Event Name: StartupRepairV2
    Problem Signature 01: AutoFailover
    02: (Vista's version number?)
    03: 6
    04: 720907
    05: 0x7e
    06: 0x7e
    07: 0
    08: 2
    09: WrpRepair
    10: 0
    OS Version: 6.0.6000.2.0.0.256.1
    Locale ID: 1033

    Any correct advice would be appreciated, as I really need my PC to work so I can work on my projects. Kinda sad, but I'm a college computer science student and I have no idea what to do :P

    Read the article

  • Windows Server 2003 Setup

    - by Barracksbuilder
    I work at a university maintaining the computer science department server. I am looking for a more economical way to streamline the setup of student accounts. CS students are granted a username and password, an IIS virtual directory, an FTP virtual directory, and a MySQL database. The server is running Windows Server 2003 R2 (possibly migrating to 2008 R2). The server is running a domain, though no students physically log into a terminal on it (no computers are part of my domain). Creating the account is a manual process. I did write a PHP script to query the University's AD, copy the information, and write it to my AD. I then have to create, basically, the user's home directory. I tried having AD do it, but since the user never physically logs in, it never creates the directory. Permissions on this folder are set to User - full, Instructors (group) - full, Users (group) - read, IUSER - read. Inside the user's folder there is a "Private" folder with permissions User - full, Instructors (group) - full. The next step is IIS: I create a virtual directory in the default web site pointed to the user's home directory so they have a website. The same goes for an FTP virtual directory in the default FTP configuration, to allow the users to upload files to their website. For MySQL, I have to create a user and password, then create a MySQL schema (database) with full access for the user and full access for the instructor's account so it can access the student's database. All of this is done manually and takes me a week to do. The closest description is maybe a shared hosting environment. Is there a better way to do this? Scripting-wise, or a better structural setup?

    Read the article

  • Problems with real-valued input deep belief networks (of RBMs)

    - by Junier
    I am trying to recreate the results reported in Reducing the dimensionality of data with neural networks of autoencoding the olivetti face dataset with an adapted version of the MNIST digits matlab code, but am having some difficulty. It seems that no matter how much tweaking I do on the number of epochs, rates, or momentum the stacked RBMs are entering the fine-tuning stage with a large amount of error and consequently fail to improve much at the fine-tuning stage. I am also experiencing a similar problem on another real-valued dataset. For the first layer I am using a RBM with a smaller learning rate (as described in the paper) and with negdata = poshidstates*vishid' + repmat(visbiases,numcases,1); I'm fairly confident I am following the instructions found in the supporting material but I cannot achieve the correct errors. Is there something I am missing? See the code I'm using for real-valued visible unit RBMs below, and for the whole deep training. The rest of the code can be found here.

    rbmvislinear.m:

    epsilonw  = 0.001; % Learning rate for weights
    epsilonvb = 0.001; % Learning rate for biases of visible units
    epsilonhb = 0.001; % Learning rate for biases of hidden units
    weightcost = 0.0002;
    initialmomentum = 0.5;
    finalmomentum = 0.9;

    [numcases numdims numbatches]=size(batchdata);

    if restart ==1,
      restart=0;
      epoch=1;

      % Initializing symmetric weights and biases.
      vishid = 0.1*randn(numdims, numhid);
      hidbiases = zeros(1,numhid);
      visbiases = zeros(1,numdims);

      poshidprobs = zeros(numcases,numhid);
      neghidprobs = zeros(numcases,numhid);
      posprods = zeros(numdims,numhid);
      negprods = zeros(numdims,numhid);
      vishidinc = zeros(numdims,numhid);
      hidbiasinc = zeros(1,numhid);
      visbiasinc = zeros(1,numdims);
      sigmainc = zeros(1,numhid);
      batchposhidprobs=zeros(numcases,numhid,numbatches);
    end

    for epoch = epoch:maxepoch,
      fprintf(1,'epoch %d\r',epoch);
      errsum=0;
      for batch = 1:numbatches,
        if (mod(batch,100)==0)
          fprintf(1,' %d ',batch);
        end

        %%%%%%%%% START POSITIVE PHASE %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
        data = batchdata(:,:,batch);
        poshidprobs = 1./(1 + exp(-data*vishid - repmat(hidbiases,numcases,1)));
        batchposhidprobs(:,:,batch)=poshidprobs;
        posprods = data' * poshidprobs;
        poshidact = sum(poshidprobs);
        posvisact = sum(data);
        %%%%%%%%% END OF POSITIVE PHASE %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

        poshidstates = poshidprobs > rand(numcases,numhid);

        %%%%%%%%% START NEGATIVE PHASE %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
        negdata = poshidstates*vishid' + repmat(visbiases,numcases,1); % + randn(numcases,numdims) if not using mean
        neghidprobs = 1./(1 + exp(-negdata*vishid - repmat(hidbiases,numcases,1)));
        negprods = negdata'*neghidprobs;
        neghidact = sum(neghidprobs);
        negvisact = sum(negdata);
        %%%%%%%%% END OF NEGATIVE PHASE %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

        err= sum(sum( (data-negdata).^2 ));
        errsum = err + errsum;

        if epoch>5,
          momentum=finalmomentum;
        else
          momentum=initialmomentum;
        end;

        %%%%%%%%% UPDATE WEIGHTS AND BIASES %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
        vishidinc = momentum*vishidinc + ...
                    epsilonw*( (posprods-negprods)/numcases - weightcost*vishid);
        visbiasinc = momentum*visbiasinc + (epsilonvb/numcases)*(posvisact-negvisact);
        hidbiasinc = momentum*hidbiasinc + (epsilonhb/numcases)*(poshidact-neghidact);
        vishid = vishid + vishidinc;
        visbiases = visbiases + visbiasinc;
        hidbiases = hidbiases + hidbiasinc;
        %%%%%%%%%%%%%%%% END OF UPDATES %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
      end
      fprintf(1, '\nepoch %4i error %f \n', epoch, errsum);
    end

    dofacedeepauto.m:

    clear all
    close all

    maxepoch=200; %In the Science paper we use maxepoch=50, but it works just fine.
    numhid=2000; numpen=1000; numpen2=500; numopen=30;

    fprintf(1,'Pretraining a deep autoencoder. \n');
    fprintf(1,'The Science paper used 50 epochs. This uses %3i \n', maxepoch);

    load fdata %makeFaceData;

    [numcases numdims numbatches]=size(batchdata);

    fprintf(1,'Pretraining Layer 1 with RBM: %d-%d \n',numdims,numhid);
    restart=1;
    rbmvislinear;
    hidrecbiases=hidbiases;
    save mnistvh vishid hidrecbiases visbiases;

    maxepoch=50;
    fprintf(1,'\nPretraining Layer 2 with RBM: %d-%d \n',numhid,numpen);
    batchdata=batchposhidprobs;
    numhid=numpen;
    restart=1;
    rbm;
    hidpen=vishid; penrecbiases=hidbiases; hidgenbiases=visbiases;
    save mnisthp hidpen penrecbiases hidgenbiases;

    fprintf(1,'\nPretraining Layer 3 with RBM: %d-%d \n',numpen,numpen2);
    batchdata=batchposhidprobs;
    numhid=numpen2;
    restart=1;
    rbm;
    hidpen2=vishid; penrecbiases2=hidbiases; hidgenbiases2=visbiases;
    save mnisthp2 hidpen2 penrecbiases2 hidgenbiases2;

    fprintf(1,'\nPretraining Layer 4 with RBM: %d-%d \n',numpen2,numopen);
    batchdata=batchposhidprobs;
    numhid=numopen;
    restart=1;
    rbmhidlinear;
    hidtop=vishid; toprecbiases=hidbiases; topgenbiases=visbiases;
    save mnistpo hidtop toprecbiases topgenbiases;

    backpropface;

    Thanks for your time

    Read the article

  • Red Gate Coder interviews: Alex Davies

    - by Michael Williamson
    Alex Davies has been a software engineer at Red Gate since graduating from university, and is currently busy working on .NET Demon. We talked about tackling parallel programming with his actors framework, a scientific approach to debugging, and how JavaScript is going to affect the programming languages we use in years to come. So, if we start at the start, how did you get started in programming? When I was seven or eight, I was given a BBC Micro for Christmas. I had asked for a Game Boy, but my dad thought it would be better to give me a proper computer. For a year or so, I only played games on it, but then I found the user guide for writing programs in it. I gradually started doing more stuff on it and found it fun. I liked creating. As I went into senior school I continued to write stuff on there, trying to write games that weren’t very good. I got a real computer when I was fourteen and found ways to write BASIC on it. Visual Basic to start with, and then something more interesting than that. How did you learn to program? Was there someone helping you out? Absolutely not! I learnt out of a book, or by experimenting. I remember the first time I found a loop, I was like “Oh my God! I don’t have to write out the same line over and over and over again any more. It’s amazing!” When did you think this might be something that you actually wanted to do as a career? For a long time, I thought it wasn’t something that you would do as a career, because it was too much fun to be a career. I thought I’d do chemistry at university and some kind of career based on chemical engineering. And then I went to a careers fair at school when I was seventeen or eighteen, and it just didn’t interest me whatsoever. I thought “I could be a programmer, and there’s loads of money there, and I’m good at it, and it’s fun”, but also that I shouldn’t spoil my hobby. Now I don’t really program in my spare time any more, which is a bit of a shame, but I program all the rest of the time, so I can live with it. Do you think you learnt much about programming at university? Yes, definitely! I went into university knowing how to make computers do anything I wanted them to do. However, I didn’t have the language to talk about algorithms, so the algorithms course in my first year was massively important. Learning other language paradigms like functional programming was really good for breadth of understanding. Functional programming influences normal programming through design rather than actually using it all the time. I draw inspiration from it to write imperative programs which I think is actually becoming really fashionable now, but I’ve been doing it for ages. I did it first! There were also some courses on really odd programming languages, a bit of Prolog, a little bit of C. Having a little bit of each of those is something that I would have never done on my own, so it was important. And then there are knowledge-based courses which are about not programming itself but things that have been programmed like TCP. Those are really important for examples for how to approach things. Did you do any internships while you were at university? Yeah, I spent both of my summers at the same company. I thought I could code well before I went there. Looking back at the crap that I produced, it was only surpassed in its crappiness by all of the other code already in that company. I’m so much better at writing nice code now than I used to be back then. Was there just not a culture of looking after your code? 
There was, they just didn’t hire people for their abilities in that area. They hired people for raw IQ. The first indicator of it going wrong was that they didn’t have any computer scientists, which is a bit odd in a programming company. But even beyond that they didn’t have people who learnt architecture from anyone else. Most of them had started straight out of university, so never really had experience or mentors to learn from. There wasn’t the experience to draw from to teach each other. In the second half of my second internship, I was being given tasks like looking at new technologies and teaching people stuff. Interns shouldn’t be teaching people how to do their jobs! All interns are going to have little nuggets of things that you don’t know about, but they shouldn’t consistently be the ones who know the most. It’s not a good environment to learn. I was going to ask how you found working with people who were more experienced than you… When I reached Red Gate, I found some people who were more experienced programmers than me, and that was difficult. I’ve been coding since I was tiny. At university there were people who were cleverer than me, but there weren’t very many who were more experienced programmers than me. During my internship, I didn’t find anyone who I classed as being a noticeably more experienced programmer than me. So, it was a shock to the system to have valid criticisms rather than just formatting criticisms. However, Red Gate’s not so big on the actual code review, at least it wasn’t when I started. We did an entire product release and then somebody looked over all of the UI of that product which I’d written and say what they didn’t like. By that point, it was way too late and I’d disagree with them. Do you think the lack of code reviews was a bad thing? I think if there’s going to be any oversight of new people, then it should be continuous rather than chunky. For me I don’t mind too much, I could go out and get oversight if I wanted it, and in those situations I felt comfortable without it. If I was managing the new person, then maybe I’d be keener on oversight and then the right way to do it is continuously and in very, very small chunks. Have you had any significant projects you’ve worked on outside of a job? When I was a teenager I wrote all sorts of stuff. I used to write games, I derived how to do isomorphic projections myself once. I didn’t know what the word was so I couldn’t Google for it, so I worked it out myself. It was horrifically complicated. But it sort of tailed off when I started at university, and is now basically zero. If I do side-projects now, they tend to be work-related side projects like my actors framework, NAct, which I started in a down tools week. Could you explain a little more about NAct? It is a little C# framework for writing parallel code more easily. Parallel programming is difficult when you need to write to shared data. Sometimes parallel programming is easy because you don’t need to write to shared data. When you do need to access shared data, you could just have your threads pile in and do their work, but then you would screw up the data because the threads would trample on each other’s toes. You could lock, but locks are really dangerous if you’re using more than one of them. You get interactions like deadlocks, and that’s just nasty. Actors instead allows you to say this piece of data belongs to this thread of execution, and nobody else can read it. 
Do you think that this is suitable for the majority of parallel programming, or do you think it's only suitable for specific cases?

It's suitable for most difficult parallel programming. If you've just got a hundred web requests which are all independent of each other, then I wouldn't bother, because it's easier to just spin them up in separate threads and let them proceed independently of each other. But where you've got difficult parallel programming, where you've got multiple threads accessing multiple bits of data in multiple ways at different times, then actors is at least as good as all the other ways, and is, I reckon, easier to think about.

When you're using actors, you presumably still have to write your code in a different way than you would otherwise, using single-threaded code.

You can't use actors with any methods that have return types, because you're not allowed to call into another actor and wait for it. If you want to get a piece of data out of another actor, then you've got to use tasks, so that you can use "async" and "await" to wait for it asynchronously. But other than that, you can still stick things in classes, so it's not too different really. Rather than having thousands of objects with mutable state, you can use component-orientated design, where there are only a few mutable classes which each have a small number of instances. Then there can be thousands of immutable objects. If you tend to do that anyway, then actors isn't much of a jump.

If I've already built my system without any parallelism, how hard is it to add actors to exploit all eight cores on my desktop?

Usually pretty easy. If you can identify even one boundary where things look like messages, and you have components where some objects live on one side and other objects live on the other side, then you can have a granddaddy object on one side be an actor, and it will parallelise as it goes across that boundary. Not too difficult.

If we do get 1000-core desktop PCs, do you think actors will scale up?

It's hard. There are always on the order of twenty to fifty actors in my whole program, because I tend to write each component as an actor, and I tend to have one instance of each component. So this won't scale to a thousand cores. What you can do is write data structures out of actors. I use dictionaries all over the place, and if you need a dictionary that is going to be accessed concurrently, then you could build one of those out of actors in no time. You can use queuing to marshal requests between different slices of the dictionary which are living on different threads. So it's like a distributed hash table, but all of the chunks of it are on the same machine. That means that each of those thousand processors has cached one small piece of the dictionary. I reckon it wouldn't be too big a leap to start doing proper parallelism.
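As a rough illustration of that "dictionary sliced into actors" idea, the hypothetical CountActor from the earlier sketch could be sharded by key, so different keys are owned by different threads. This reuses the class and usings from that sketch and is, again, only a sketch of the shape, not NAct code.

```csharp
// Builds on the hypothetical CountActor sketch above (requires System.Threading.Tasks).
class ShardedCounts
{
    private readonly CountActor[] shards;

    public ShardedCounts(int shardCount)
    {
        shards = new CountActor[shardCount];
        for (int i = 0; i < shardCount; i++)
            shards[i] = new CountActor();
    }

    // Every key maps to a fixed shard, like a distributed hash table
    // whose chunks all happen to live in the same process.
    private CountActor ShardFor(string key) =>
        shards[(key.GetHashCode() & 0x7fffffff) % shards.Length];

    public void Increment(string key) => ShardFor(key).Increment(key);

    public Task<int> GetCount(string key) => ShardFor(key).GetCount(key);
}
```

Requests for different keys queue up on different shards, so each shard's owning thread keeps its own slice of the data hot in its cache.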
Do you think it helps if actors get baked into the language, similarly to Erlang?

Erlang is excellent in that it has thread-local garbage collection. C# doesn't, so there's a limit to how well C# actors can possibly scale, because there's a single garbage-collected heap shared between all of them. When you do a global garbage collection, you've got to stop all of the actors, which is seriously expensive, whereas in Erlang garbage collections happen per actor, so they're insanely cheap. However, Erlang deviated from all the sensible language design that people have used recently and has just come up with crazy stuff. You can definitely retrofit thread-local garbage collection to .NET, and then it's quite well suited to supporting actors, even if they're not baked into the language.

Speaking of language design, do you have a favourite programming language?

I'll choose a language which I've never written before. I like the idea of Scala. It sounds like C#, only with some of the niggles gone. I enjoy writing static types. It means you don't have to write tests so much.

When you say it doesn't have some of the niggles?

C# doesn't allow the use of a property as a method group. It doesn't have Scala's case classes, or sum types, where you can do a switch statement and the compiler checks that you've covered all the cases, which is really useful in functional-style programming. Pattern-matching, in other words. That's actually the major niggle. C# is pretty good, and I'm quite happy with C#.

And what about going even further with the type system to remove the need for tests, to something like Haskell? Or is that a step too far?

I'm quite a pragmatist. I don't think I could deal with trying to write big systems in languages with too few other users, especially when learning how to structure things. I just don't know anyone who can teach me, and the Internet won't teach me. That's the main reason I wouldn't use it. If I turned up at a company that writes big systems in Haskell, I would have no objection to that, but I wouldn't instigate it.

What about things in C#? For instance, there's contracts in C#, so you can try to statically verify a bit more about your code. Do you think that's useful, or just not worthwhile?

I've not really tried it. My hunch is that it needs to be built into the language and be quite mathematical for it to work in real life, and that doesn't seem to have ended up true for C# contracts. I don't think anyone who's tried them thinks they're any good. I might be wrong.

On a slightly different note, how do you like to debug code?

I think I'm quite an odd debugger. I use guesswork extremely rarely, especially if something seems quite difficult to debug. I've been bitten in the past by spending hours and hours on guesswork and not being scientific about debugging, so now I'm scientific to a fault. What I want is to see the bug happening in the debugger, to step through the bug happening, to watch the program going from a valid state to an invalid state. When there's a bug and I can't work out why it's happening, I try to find some piece of evidence which places the bug in one section of the code. From that experiment, I binary chop on the possible causes of the bug. I suppose that means binary chopping on places in the code, or binary chopping on stages of a processing cycle. Basically, I'm very stupid about how I debug. I won't make any guesses, I won't use any intuition; I will only identify the experiment that's going to binary chop most effectively, and repeat, rather than trying to guess anything. I suppose it's quite top-down.

Is most of the time then spent in the debugger?
Absolutely. If at all possible, I will never debug using print statements or logs. I don't put much stock in outputting logs. If there's any bug which can be reproduced locally, I'd rather do it in the debugger than by outputting logs. And with SmartAssembly error reporting, there's not a lot that can't be either observed in an error report and just fixed, or reproduced locally. In those other situations, maybe I'll use logs. But I hate using logs. You stare at the log, trying to guess what's going on, and that's exactly what I don't like doing. You have to just look at it and see whether it looks right or wrong.

We've covered how you get to grips with bugs. How do you get to grips with an entire codebase?

I watch it in the debugger. I find little bugs and then try to fix them, and mostly do it by watching them in the debugger and gradually getting an understanding of how the code works using my process of binary chopping. I have to do a lot of reading and watching code to choose where my slicing-in-half experiment is going to be. The last time I did it was with SmartAssembly. The old code was a complete mess, but at least it did things top to bottom. There wasn't too much of those big abstractions where flow of control goes all over the place, into a base class and back again. Code's really hard to understand when that happens. So I like to choose a little bug and try to fix it, then choose a bigger bug and try to fix it. Definitely learn by doing. I want to always have an aim, so that I get a little achievement after every few hours of debugging. Once I've learnt the codebase I might be able to fix all the bugs in an hour, but I'd rather use them as an aim while I'm learning the codebase.

If I was a maintainer of a codebase, what should I do to make it as easy as possible for you to understand?

Keep distinct concepts in different places. And name your stuff so that it's obvious which concepts live there. You shouldn't have some variable that gets set miles up at the top of somewhere and is then read miles further down to choose some later behaviour. I'm talking very much from a SmartAssembly point of view, because the old SmartAssembly codebase had tons and tons of these things, where it would read some property of the code and then deal with it later. Just thousands of variables in scope. Loads of things to think about. If you can keep concepts separate, then it aids me in my process of fixing bugs one at a time, because each bug is going to be more or less understandable in the one place where it is.

And what about tests? Do you think they help at all?

I've never had the opportunity to learn a codebase which has had tests. I don't know what it's like!

What about when you're actually developing? How useful do you find tests in finding bugs or regressions?

Finding regressions, absolutely. Running bits of code that would be quite hard to run otherwise, definitely. It doesn't happen very often that a test finds a bug in the first place. I don't really buy nebulous promises like tests being a good way to think about the spec of the code. My thinking goes something like: "This code works at the moment, great, ship it! Ah, there's a way that this code doesn't work. Okay, write a test, demonstrate that it doesn't work, fix it, use the test to demonstrate that it's now fixed, and keep the test for future regressions." The most valuable tests are for bugs that have actually happened at some point, because bugs that have actually happened at some point, despite the fact that you think you've fixed them, are way more likely to appear again than new bugs are.
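A hedged illustration of that habit, with an invented example rather than anything from SmartAssembly: a hypothetical bug ("refunds were ignored in the invoice total") becomes a test that first demonstrated the failure and now stays behind as a regression guard.

```csharp
using System.Collections.Generic;
using System.Linq;
using NUnit.Framework;

// Minimal, made-up production type, included only so the test is runnable.
public class Invoice
{
    private readonly List<decimal> lines = new List<decimal>();

    public void AddLine(decimal amount) => lines.Add(amount);

    // The (hypothetical) original bug: refunds were not subtracted.
    // Fixed by storing them negated.
    public void AddRefund(decimal amount) => lines.Add(-amount);

    public decimal Total => lines.Sum();
}

[TestFixture]
public class InvoiceRegressionTests
{
    [Test]
    public void Total_subtracts_refund_lines()   // named after the bug that actually happened
    {
        var invoice = new Invoice();
        invoice.AddLine(100m);
        invoice.AddRefund(30m);

        Assert.That(invoice.Total, Is.EqualTo(70m));
    }
}
```

The point of keeping the test is exactly what the interview says: a bug that has bitten once is the bug most likely to come back.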
Does that mean that when you write your code the first time, there are no tests?

Often. The chance of there being a bug in a new feature is relatively unaffected by whether I've written a test for that new feature, because I'm not good enough at writing tests to think of the bugs that I would have written into the code.

So not writing regression tests for all of your code hasn't affected you too badly?

There are different kinds of features. Some of them just always work, and are just not flaky; they just continue working whatever you throw at them. Maybe because the type-checker is particularly effective around them. Writing tests for those features which just tend to always work is a waste of time. And because it's a waste of time, I'll tend to wait until a feature has demonstrated its flakiness by having bugs in it before I start trying to test it. You can get a feel for whether it's going to be flaky code as you're writing it. I try to write it to make it not flaky, but there are some things that are just inherently flaky. And very occasionally, I'll think "this is going to be flaky" as I'm writing, and then maybe do a test, but not most of the time.

How do you think your programming style has changed over time?

I've got clearer about what the right way of doing things is. I used to flip-flop a lot between different ideas. Five years ago I came up with some really good ideas and some really terrible ideas. All of them seemed great when I thought of them, but they were quite diverse ideas, whereas now I have a smaller set of reliable ideas that are actually good for structuring code. So my code is probably more similar to itself than it used to be back in the day, when I was trying stuff out. I've got more disciplined about encapsulation, I think. There are operational things, like I use actors more now than I used to, and that forces me to use immutability more than I used to. The first code that I wrote at Red Gate was the memory profiler UI, and that was an actor; I just didn't know the name for it at the time. I don't really use object-orientation. By object-orientation, I mean having n objects of the same type which are mutable. I want a constant number of objects that are mutable, and they should be of different types. I stick stuff in dictionaries and then have one thing that owns the dictionary and puts stuff in and out of it. That's definitely a pattern that I've seen recently. I think maybe I'm doing functional programming. Possibly. It's plausible.

If you had to summarise the essence of programming in a pithy sentence, how would you do it?

Programming is the form of art that, without losing any of the beauty of architecture or fine art, allows you to produce things that people love and that you make money from.

So you think it's an art rather than a science?

It's a little bit of engineering, a smidgeon of maths, but it's not science. Like architecture, programming is on that boundary between art and engineering. If you want to do it really nicely, it's mostly art.
You can get away with doing architecture and programming entirely with a good engineering mind, but you're not going to produce anything nice. You're not going to have joy doing it if you've only got an engineering mind. Architects who are just engineering minds are not going to enjoy their job.

I suppose engineering is the foundation on which you build the art.

Exactly.

How do you think programming is going to change over the next ten years?

There will be an unfortunate shift towards dynamically-typed languages, because of JavaScript. JavaScript has an unfair advantage. JavaScript's unfair advantage will cause more people to be exposed to dynamically-typed languages, which means other dynamically-typed languages will crop up and the best features will go into dynamically-typed languages. Then people conflate the good features with the fact that it's dynamically-typed, and more investment goes into dynamically-typed languages. They end up better, so people use them.

What about the idea of compiling other languages, possibly statically-typed, to JavaScript?

It's a reasonable idea. I would like to do it, but I don't think enough people in the world are going to do it to make it take off. The hordes of beginners are the lifeblood of a language community. They are what makes for good tools and vibrant community websites. And any particular thing which is the same as JavaScript only with extra stuff added to it, although it might be technically great, is not going to have the hordes of beginners. JavaScript is always going to be the quickest and easiest way for a beginner to start programming in the browser. And dynamically-typed languages are great for beginners. Compilers are pretty scary, and beginners don't write big code. And having your errors come up in the same place, whether they're statically checkable errors or not, is quite nice for a beginner. If someone asked me to teach them some programming, I'd teach them JavaScript.

If dynamically-typed languages are great for beginners, when do you think the benefits of static typing start to kick in?

The value of having a statically typed program is in the tools that rely on the static types to produce a smooth IDE experience, rather than in actually telling me my compile errors. And only once you're an experienced enough programmer that having a really smooth IDE experience makes a blind bit of difference does static typing make a blind bit of difference. So it's not really about the size of the codebase. If I go and write a tiny program, I'm still going to get value out of writing it in C# using ReSharper, because I'm experienced enough with C# and ReSharper to be able to write code five times faster if I have that help.

Any other visions of the future?

Nobody's going to use actors, because everyone's going to be running on single-core VMs connected over network-ready protocols like JSON over HTTP. So parallelism within one operating system is going to die. But until then, you should use actors.

More Red Gater Coder interviews


  • SQLAuthority News – Interview with SQL Server MVP Madhivanan – A Real Problem Solver

    - by pinaldave
Madhivanan (SQL Server MVP) is a real community hero. He is known for his two skills – 1) helping the community and 2) helping the community. I have met him many times, and every time I feel that if anybody in the online world needs help, Madhivanan does his best to reach out to them and solve the problem. His name is not new if you read this blog or have ever asked a question in any online SQL forum. He is always there to help. When Madhivanan has time, he even helps people on this blog as well. He spends his valuable time helping the community. He recently crossed 1000 helpful comments on this blog. On that occasion, I interviewed him to find out if he has any life outside SQL.

Q 1. Tell us something about yourself.

I am Madhivanan, an MSc Computer Science graduate from Chennai, India, working as a Lead Analyst-Project at Ellaar Infotek Solutions Private Limited. I am basically a developer who started with Visual Basic 6.0, SQL Server 2000 and Crystal Reports 8. As the years went on, I started working more on writing queries in SQL Server in most of the projects developed at my company. I also have a good level of knowledge of Oracle, MySQL and PostgreSQL. Now I am leading a project developed on Windows Azure.

Q 2. What motivates you to help people in the community and on forums?

When I got errors during application development in the early days of my career, I found good solutions on online forums and weblogs. So I decided to help others if possible. I visit forums and help people if I know the answer to their questions. I am one of the leading posters at www.sqlteam.com and also a moderator at www.sql-server-performance.com. I also take part in Visual Basic and Crystal Reports forums. I have been a SQL Server MVP since 2007.

Q 3. Your personal life is not much known. Tell us something about your personal life.

I am a happily married person. My wife is a B.Pharm graduate. I have a son who is now 18 months old.

Q 4. Where can we read more about your community activity?

I have a blog at http://beyondrelational.com/blogs/madhivanan where you can find most of my T-SQL material.

Q 5. When you are not working with SQL, what do you do?

When not working with SQL, I spend time playing with my son, reading magazines and watching TV.

Madhivanan, for your work and help to the community, a true salute to you. Hats off, my friend.

Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: MVP, Readers Contribution, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

