Search Results

Search found 14074 results on 563 pages for 'programmers'.

Page 178 of 563

  • I'm a student learning C++ and I've recently found out about Ruby. Would learning (some of) Ruby help me with C++ or would it just confuse me?

    - by Von32
    Hi! As the title says, I'm a student who will be starting my second year of C++ very soon. I've discovered Ruby, however. While I've heard plenty of buzz about the language before, I'd disregarded it because I always thought it wasn't something that would be useful. However, I've found a number of FANTASTIC tutorials on Ruby and am interested in learning it (probably because it seems so straightforward). My questions:

    1. Would playing around with Ruby be a good or bad idea? I understand that there's no such thing as bad knowledge, but I'm afraid that Ruby will only confuse me when dealing with C++.
    2. How different from C++ is it? I've read that it's based on C in some way, but my google-fu seems to be horrible today.
    3. How useful is Ruby in the real world? I'm not specifically asking about jobs - I'm more interested in what sort of applications may come from this language. Any specific examples worth looking at?
    4. Going back to question two: I've read some posts on here saying that Ruby and C++ can hold hands once in a while. How flexible is this relationship? Is it rare that this works?

    Thank you very much for your time!

    EDIT: This has to be the one community on the internet that doesn't suck. Why have I never posted before? You guys are awesome!

    Read the article

  • What's the best way to create a static utility class in python? Is using metaclasses code smell?

    - by rsimp
    Ok so I need to create a bunch of utility classes in Python. Normally I would just use a simple module for this, but I need to be able to inherit in order to share common code between them. The common code needs to reference the state of the module using it, so simple imports wouldn't work well. I don't like singletons, and classes that use the classmethod decorator do not have proper support for Python properties.

    One pattern I see used a lot is creating an internal Python class prefixed with an underscore and creating a single instance, which is then explicitly imported or set as the module itself. This is also used by Fabric to create a common environment object (fabric.api.env).

    I've realized another way to accomplish this would be with metaclasses. For example:

        #util.py
        class MetaFooBase(type):
            @property
            def file_path(cls):
                raise NotImplementedError

            def inherited_method(cls):
                print cls.file_path

        #foo.py
        from util import *
        import env

        class MetaFoo(MetaFooBase):
            @property
            def file_path(cls):
                return env.base_path + "relative/path"

            def another_class_method(cls):
                pass

        class Foo(object):
            __metaclass__ = MetaFoo

        #client.py
        from foo import Foo
        file_path = Foo.file_path

    I like this approach better than the first pattern for a few reasons:

    First, instantiating Foo would be meaningless, as it has no attributes or methods, which ensures this class acts like a true single-interface utility, unlike the first pattern, which relies on the underscore convention to dissuade client code from creating more instances of the internal class.

    Second, sub-classing MetaFoo in a different module wouldn't be as awkward, because I wouldn't be importing a class with an underscore, which is inherently going against its private naming convention.

    Third, this seems to be the closest approximation to a static class that exists in Python, as all the meta code applies only to the class and not to its instances. This is shown by the common convention of using cls instead of self in the class methods. As well, the base class inherits from type instead of object, which would prevent users from trying to use it as a base for other, non-static classes. Its implementation as a static class is also apparent when using it, via the naming convention Foo, as opposed to foo, which denotes that a static class method is being used.

    As much as I think this is a good fit, I feel that others might feel it's not pythonic, because it's not a sanctioned use for metaclasses, which should be avoided 99% of the time. I also find most Python devs tend to shy away from metaclasses, which might affect code reuse/maintainability. Is this code considered code smell in the Python community? I ask because I'm creating a PyPI package, and would like to do everything I can to increase adoption.
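
    For comparison, a minimal sketch of the first pattern mentioned above; the names _Util and base_path are illustrative, not taken from the original post:

        # util.py
        class _Util(object):
            """Internal class; client code imports the single instance below."""
            def __init__(self):
                self.base_path = "/tmp"

            @property
            def file_path(self):
                # properties work normally because we operate on an instance
                return self.base_path + "/relative/path"

        # the single shared instance, explicitly imported by client code
        util = _Util()

        # client.py
        # from util import util
        # print(util.file_path)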

    Read the article

  • Objective-C Function

    - by nimbus
    I'm unsure about how to make a MWE with Objective-C, so if you need anything else let me know. I am running through a tutorial on building an iPhone app and have gotten stuck defining a function. I keep getting an error message saying "use of undeclared identifier." However, I believe I have declared the function. In the view controller I have:

        if (scrollAmount > 0) {
            moveViewUp = YES;
            [scrollTheView:YES];
        }
        else {
            moveViewUp = NO;
        }

    with the function under it:

        - (void)scrollTheView:(BOOL)movedUp {
            [UIView beginAnimations:nil context:NULL];
            [UIView setAnimationDuration:0.3];
            CGRect rect = self.view.frame;
            if (movedUp) {
                rect.origin.y -= scrollAmount;
            }
            else {
                rect.origin.y += scrollAmount;
            }
            self.view.frame = rect;
            [UIView commitAnimations];
        }

    I have declared the function in the header file (which I have imported):

        - (void)scrollTheView:(BOOL)movedUp;

    Any help would be appreciated. Thank you in advance.

    Read the article

  • What information must never appear in logs?

    - by MainMa
    I'm about to write the company guidelines about what must never appear in logs (the trace of an application). In fact, some developers try to include as much information as possible in the trace, making it risky to store those logs, and extremely dangerous to submit them, especially when the customer doesn't know this information is stored, because she never cared about this and never read the documentation and/or warning messages.

    For example, when dealing with files, some developers are tempted to trace the names of the files. For example, before appending a file name to a directory, if we trace everything on error, it will be easy to notice that the appended name is too long, and that the bug in the code was to forget to check the length of the concatenated string. This is helpful, but it is sensitive data, and must never appear in logs. In the same way, the following must never appear in the trace:

    - Passwords,
    - IP addresses and network information (MAC address, host name, etc.)¹,
    - Database accesses,
    - Direct input from the user and stored business data.

    So what other types of information must be banished from the logs? Are there any guidelines already written which I can use?

    ¹ Obviously, I'm not talking about things like IIS or Apache logs. What I'm talking about is the sort of information which is collected with the sole intent of debugging the application itself, not to trace the activity of untrusted entities.

    Edit: Thank you for your answers and your comments. Since my question is not very precise, I'll try to answer the questions asked in the comments:

    What am I doing with the logs? The logs of the application may be stored in memory, which means either in plain text on the hard disk on localhost, in a database (again in plain text), or in Windows Events. In every case, the concern is that those sources may not be safe enough. For example, when a customer runs an application and this application stores logs in a plain text file in the temp directory, anybody who has physical access to the PC can read those logs. The logs of the application may also be sent over the internet. For example, if a customer has an issue with an application, we can ask her to run this application in full-trace mode and to send us the log file. Also, some applications may send the crash report to us automatically (and even if there are warnings about sensitive data, in most cases customers don't read them).

    Am I talking about specific fields? No. I'm working on general business applications only, so the only sensitive data is business data. There is nothing related to health or other fields covered by specific regulations. But thank you for bringing that up; I probably should take a look at those fields for some clues about what I can include in the guidelines.

    Isn't it easier to encrypt the data? No. It would make every application much more difficult to work with, especially if we want to use C# diagnostics and TraceSource. It would also require managing authorizations, which is not the easiest thing to do. Finally, if we are talking about the logs submitted to us by a customer, we must be able to read the logs, but without having access to sensitive data. So technically, it's easier to never include sensitive information in logs at all, and to never have to care about how and where those logs are stored.
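
    As an illustration of the "never write it in the first place" approach argued for here, a minimal sketch of a redacting filter using Python's standard logging module (the field names and the regular expression are assumptions for the example, not part of the actual guidelines):

        import logging
        import re

        # illustrative pattern: catches things like "password=secret" in a message
        SENSITIVE = re.compile(r"(password|passwd|secret)\s*=\s*\S+", re.IGNORECASE)

        class RedactingFilter(logging.Filter):
            """Scrubs obviously sensitive key=value pairs before a record is emitted."""
            def filter(self, record):
                record.msg = SENSITIVE.sub(r"\1=<redacted>", str(record.msg))
                return True  # keep the record, just with the sensitive part masked

        logging.basicConfig(level=logging.DEBUG)
        logger = logging.getLogger("app")
        logger.addFilter(RedactingFilter())

        logger.debug("connecting with password=hunter2")  # logged as password=<redacted>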

    Read the article

  • Odd company release cycle: Go Distributed Source Control?

    - by MrLane
    Sorry about this long post, but I think it is worth it! I have just started with a small .NET shop that operates quite a bit differently from other places that I have worked. Unlike any of my previous positions, the software written here is targeted at multiple customers, and not every customer gets the latest release of the software at the same time. As such, there is no "current production version." When a customer does get an update, they also get all of the features added to the software since their last update, which could be a long time ago. The software is highly configurable and features can be turned on and off: so-called "feature toggles." Release cycles are very tight here; in fact they are not on a schedule: when a feature is complete, the software is deployed to the relevant customer.

    The team only last year moved from Visual SourceSafe to Team Foundation Server. The problem is they still use TFS as if it were VSS and enforce checkout locks on a single code branch. Whenever a bug fix gets put out into the field (even for a single customer), they simply build whatever is in TFS, test that the bug was fixed and deploy to the customer! (Coming from a pharma and medical devices software background, this is unbelievable to me!) The result is that half-baked dev code gets put into production without even being tested. Bugs are always slipping into release builds, but often a customer who just got a build will not see these bugs if they don't use the feature the bug is in. The director knows this is a problem, as the company is starting to grow all of a sudden, with some big clients coming on board and more smaller ones.

    I have been asked to look at source control options in order to eliminate deploying buggy or unfinished code, but without sacrificing the somewhat asynchronous nature of the team's releases. I have used VSS, TFS, SVN and Bazaar in my career, but TFS is where most of my experience has been. Previously, most teams I have worked with used a two- or three-branch solution of Dev-Test-Prod, where for a month developers work directly in Dev and then changes are merged to Test, then Prod, or promoted "when it's done" rather than on a fixed cycle. Automated builds were used, using either CruiseControl or Team Build. In my previous job Bazaar was used sitting on top of SVN: devs worked in their own small feature branches, then pushed their changes to SVN (which was tied into TeamCity). This was nice in that it was easy to isolate changes and share them with other people's branches.

    With both of these models there was a central dev and prod (and sometimes test) branch through which code was pushed (and labels were used to mark builds in prod from which releases were made... and these were made into branches for bug fixes to releases and merged back to dev). This doesn't really suit the way of working here, however: there is no order to when various features will be released; they get pushed when they are complete. With this requirement, the "continuous integration" approach as I see it breaks down. To get a new feature out with continuous integration, it has to be pushed via dev-test-prod, and that will capture any unfinished work in dev.

    I am thinking that to overcome this we should go down a heavily feature-branched model with NO dev-test-prod branches; rather, the source should exist as a series of feature branches which, when development work is complete, are locked, tested, fixed, locked, tested and then released. Other feature branches can grab changes from other branches when they need/want, so eventually all changes get absorbed into everyone else's. This fits very much with the pure Bazaar model I experienced at my last job. As flexible as this sounds, it just seems odd not to have a dev trunk or prod branch somewhere, and I am worried about branches forking never to re-integrate, or small late changes being made that never get pulled across to other branches, and developers complaining about merge disasters... What are people's thoughts on this?

    A second, final question: I am somewhat confused about the exact definition of distributed source control. Some people seem to suggest it is about just not having a central repository like TFS or SVN, some say it is about being disconnected (SVN is 90% disconnected and TFS has a perfectly functional offline mode), and others say it is about feature branching and ease of merging between branches with no parent-child relationship (TFS also has baseless merging!). Perhaps this is a second question!

    Read the article

  • Web to Print system - which client-side technology should I use?

    - by mr null
    I'm getting started developing a Web to Print system. My main concern is which client-side technology I should use: jQuery/CSS, Flex/ActionScript, or something else? For now, the idea is that the user/customer:

    1. chooses a product, e.g. a business card,
    2. sets its attributes: dimensions, paper type, etc.,
    3. picks a template or starts blank,
    4. adjusts the product (editor),
    5. previews and orders.

    The output should be PDF at 300 dpi. My main issue is adding/adjusting text in the editor (font size, font family...), because the application should be cross-browser, and I think that 10px Arial can't be the same in Firefox 5 and IE 8. It must be pixel-perfect in every browser. If somebody orders $100 of business cards, and the text is different from what he/she saw when ordering - that is a big problem. So, the Flash platform should be the answer. But, as I see it, that's a dying technology; 2012 is here and HTML5 is replacing Flash rapidly. I hope that you understand me, so every guideline or few smart words would be very much appreciated.

    Read the article

  • What is the most performant CSS property for transitioning an element?

    - by Ian Kuca
    I'm wondering whether there is a performance difference between using different CSS properties to translate an element. Some properties fit different situations better than others. You can translate an element with the following properties: transform, top/left/right/bottom, and margin-top/left/right/bottom. In the case where you do not use the transition CSS property for the translation, but instead use some form of a timer (setTimeout, requestAnimationFrame or setImmediate) or raw events, which is the most performant, i.e. which is going to give higher FPS rates?

    Read the article

  • What's special about currying or partial application?

    - by Vigneshwaran
    I've been reading articles on functional programming every day and have been trying to apply its practices as much as possible. But I don't understand what is unique about currying or partial application. Take this Groovy code as an example:

        def mul = { a, b -> a * b }
        def tripler1 = mul.curry(3)
        def tripler2 = { mul(3, it) }

    I do not understand what the difference is between tripler1 and tripler2. Aren't they both the same? Currying is supported in pure or partly functional languages like Groovy, Scala, Haskell, etc. But I can do the same thing (left-curry, right-curry, n-curry or partial application) by simply creating another named or anonymous function or closure that forwards the parameters to the original function (like tripler2) in most languages (even C). Am I missing something here? There are places where I could use currying and partial application in my Grails application, but I am hesitating to do so because I keep asking myself "How is that different?" Please enlighten me.
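
    The same contrast can be sketched in Python (shown only as a cross-language illustration; functools.partial performs partial application rather than currying in the strict sense):

        from functools import partial

        def mul(a, b):
            return a * b

        # partial application: fix the first argument of an existing function
        tripler1 = partial(mul, 3)

        # an ordinary wrapper closure that forwards to the original function
        tripler2 = lambda b: mul(3, b)

        print(tripler1(7))  # 21
        print(tripler2(7))  # 21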

    Read the article

  • Are there opportunities working as a full-time paid programmer for non-profit organizations?

    - by Rick
    Some recent events in my life have made me want to contribute more to causes I believe in, rather than just working for a profit-driven company. I have been thinking that if I could find a non-profit organization that I like and believe in, then I might feel more fulfilled working for them. I have a decent amount of web development experience and currently work as a Java/Spring web developer. I realize the compensation wouldn't have the same "ceiling" potential as for-profit work, but I am wondering if it's possible to get at least something close to a market rate, as I am planning to start a family sometime soon and still need a legitimate income. If anyone has any knowledge or experience about this sort of thing, I would be happy to hear from you.

    EDIT: Without getting into too much personal detail, I have a relative who recently passed away who suffered from a mental illness. So while it doesn't have to be an organization specifically dedicated to this, I am hoping to work for something along these lines at least, where there is more of a social cause, rather than just working on an open-source project whose only cause is the advancement of technology.

    Read the article

  • How to Mentor a Junior Developer

    - by Josh Johnson
    This title is a little broad, but I may need to give a little background before I can ask my question properly. I know that similar questions have been asked here already. But in my case I'm not asking whether I should be mentoring someone, or whether the person is a good fit for being a software developer; that is not my place to judge. I have not been asked outright, but it is apparent that I and other fellow senior developers are to mentor the new developers that start here. I have no problem with this whatsoever and, in many cases, it lends me a fresh perspective on things and I end up learning in the process. Also, I remember how beneficial it was in the beginning of my career when someone would take some time to teach me something.

    When I say "new developer," they could be anywhere from fresh out of college to having a year or two of experience. Recently and in the past we've had people start here who seem to have an attitude toward development/programming which is different from mine and hard for me to reconcile; they seem to extract just enough information to get the task done but not really learn from it. I find myself going over and over the same issues with them. I understand that part of this could be a personality thing, but I feel it's my job to do my best and sort of push them out of the nest while they're under my wing, so to speak.

    How can I impart just enough information so that they will learn, but not give so much as to solve the problem for them? Or perhaps: what's the proper response to questions that are designed to take the path of least resistance, a response that, in essence, forces them to learn instead of taking the easy way out?

    These questions are probably more general teaching questions and don't have that much to do specifically with software development.

    Note: I do not get a say in what tasks they are working on. Management doles the tasks out, and they could be anything from a very simple bug fix to starting an entire application by themselves. While this is not ideal by any means and obviously presents its own gauntlet of challenges, I feel it's a topic best left for another question. So the best I can do is help them with the problem at hand, try to help them break it down into simpler problems, and also check their commit logs and point out mistakes that they made.

    My main objectives are to:
    - Help them out and give them the tools they need to start becoming more self-reliant.
    - Steer them in the right direction and break bad development habits early on.
    - Lessen the amount of time I spend with them (the personality type described above seems to need much more one-on-one time and does not do well over IM or email. While that's generally fine, I can't always stop what I'm working on, break my stride, and help them debug an error at a moment's notice; I have my own projects that need to get done).

    Read the article

  • What platform to use for browser based turn based strategy game

    - by sunwukung
    I want to write a browser-based strategy game that can be played by two players in separate locations. The game itself is predominantly turn-based. To that end, I want to determine the correct platform on which to build this game. To prevent gamers "gaming" the system, the business logic needs to reside on the server. I could arguably use AJAX for a large part of the game's functionality, but at two key points in the game loop the opposing player can "counter" the current player's move. In addition, when it's time for the players to swap, AJAX polling is likely to fall short, so it's starting to look like WebSockets is going to be a requirement to pull this off smoothly. So, the remaining question is regarding the back end. I'd kind of like to build this in Python/Flask, but this is primarily out of wanting to tackle a project with that language, not necessarily because it's the appropriate tool for the job. The next most likely candidate has got to be Node.js, given its (apparently) tighter integration with the WebSockets protocol. My question, then, is regarding the best platform on which to pursue this objective.
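
    Since Python/Flask is mentioned as a candidate, here is a minimal sketch of server push for a turn-based game using the third-party Flask-SocketIO extension; the "move" event name and the apply_move hook are invented for illustration, not something settled in the question:

        from flask import Flask
        from flask_socketio import SocketIO, emit

        app = Flask(__name__)
        socketio = SocketIO(app)

        def apply_move(data):
            # placeholder: validate and apply the move server-side so the
            # business logic (and any "counter" window) stays on the server
            return {"board": "...", "next_player": data.get("player")}

        @socketio.on("move")
        def handle_move(data):
            new_state = apply_move(data)
            # push the updated state to both players instead of waiting for polling
            emit("state", new_state, broadcast=True)

        if __name__ == "__main__":
            socketio.run(app)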

    Read the article

  • How does a pure functional programming language manage without assignment statements?

    - by Gnijuohz
    When reading the famous SICP, I found the authors seem rather reluctant to introduce the assignment statement to Scheme in Chapter 3. I read the text and kind of understand why they feel that way. As Scheme is the first functional programming language I have ever known anything about, I am kind of surprised that there are some functional programming languages (not Scheme, of course) that can do without assignments.

    Let's use the example the book offers, the bank account example. If there is no assignment statement, how can this be done? How do you change the balance variable? I ask because I know there are some so-called pure functional languages out there, and by Turing completeness this must be doable too.

    I learned C, Java and Python and use assignments a lot in every program I wrote, so this is really an eye-opening experience. I really hope someone can briefly explain how assignments are avoided in those functional programming languages and what profound impact (if any) this has on them.

    The example mentioned above is here:

        (define (make-withdraw balance)
          (lambda (amount)
            (if (>= balance amount)
                (begin (set! balance (- balance amount))
                       balance)
                "Insufficient funds")))

    This changes the balance with set!. To me it looks a lot like a class method changing the class member balance.

    As I said, I am not familiar with functional programming languages, so if I said something wrong about them, feel free to point it out.
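
    For illustration only (not from SICP): one common way a purely functional style avoids set! is to return a new account value from each withdrawal instead of mutating the old one. Sketched in Python rather than Scheme:

        def withdraw(balance, amount):
            """Return (new_balance, message) without mutating anything."""
            if balance >= amount:
                return balance - amount, "ok"
            return balance, "Insufficient funds"

        # the "changing" balance is just a sequence of values threaded through calls
        b0 = 100
        b1, _ = withdraw(b0, 25)   # b1 == 75, b0 is untouched
        b2, _ = withdraw(b1, 100)  # insufficient funds, b2 == 75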

    Read the article

  • Why doesn't Haskell have type-level lambda abstractions?

    - by Petr Pudlák
    Are there theoretical reasons for that (for example, that type checking or type inference would become undecidable), or practical reasons (too difficult to implement properly)? Currently, we can wrap things into a newtype like

        newtype Pair a = Pair (a, a)

    and then have Pair :: * -> *, but we cannot write something like λ(a:*). (a,a) directly at the type level. (There are some languages that have this; for example, Scala does.)

    Read the article

  • What is the supposed productivity gain of dynamic typing?

    - by hstoerr
    I have often heard the claim that dynamically typed languages are more productive than statically typed languages. What are the reasons for this claim? Isn't it just tooling, with modern concepts like convention over configuration, the use of functional programming, advanced programming models and the use of consistent abstractions? Admittedly there is less clutter, because the (for instance in Java) often redundant type declarations are not needed, but you can also omit most type declarations in statically typed languages that use type inference, without losing the other advantages of static typing. And all of this is available for modern statically typed languages like Scala as well. So: what is there to say for productivity with dynamic typing that really is an advantage of the type model itself?

    Read the article

  • unit/integration testing web service proxy client

    - by cori
    I'm rewriting a PHP client/proxy library that provides an interface to a SOAP-based .NET web service, and in the process I want to add some unit and integration tests so future modifications are less risky. The work the library performs is to marshal the calls to the web service and do a little reorganizing of the responses to present a slightly more object-oriented interface to the underlying service. Since this library is little else than a thin layer on top of web service calls, my basic assumption is that I'll really be writing integration tests more than unit tests - for example, I don't see any reason to mock away the web service. The work performed by the code I'm working on is very light; it almost passes the response from the service right back to its consumer. Most of the calls are basic CRUD operations: CreateRole(), CreateUser(), DeleteUser(), FindUser(), etc. I'll be starting from a known database state - the system I'm using for these tests is isolated for testing purposes, so the results will be more or less predictable.

    My question is this: is it natural to use web service calls to confirm the results of operations within the tests and to reset the state of the application within the scope of each test? Here's an example: one test might be createUserReturnsValidUserId() and might go like this:

        public function createUserReturnsValidUserId() {
            // we're assuming a global connection to the service
            $newUserId = $client->CreateUser("user1");
            assertNotNull($newUserId);
            assertNotNull($client->FindUser($newUserId));
            $client->deleteUser($newUserId);
        }

    So I'm creating a user, making sure I get an ID back and that it represents a user in the system, and then cleaning up after myself (so that later tests don't rely on the success or failure of this test with respect to, for example, the number of users in the system). However, this still seems pretty fragile - lots of dependencies and opportunities for tests to fail and affect the results of later tests, which I definitely want to avoid. Am I missing some ways to decouple these tests from the system under test, or is this really the best I can do? I think this is a fairly general unit/integration testing question, but if it matters I'm using PHPUnit for the testing framework.
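
    One common way to make the cleanup less fragile is to move it into fixture teardown, which xUnit-style frameworks (PHPUnit included) run even when an assertion in the test body fails. A rough analogue in Python's unittest, where FakeServiceClient is an invented stand-in for the real proxy:

        import unittest

        class FakeServiceClient:
            """Stand-in for the SOAP proxy, just to keep the sketch self-contained."""
            def __init__(self):
                self._users = {}
                self._next_id = 1

            def create_user(self, name):
                uid = self._next_id
                self._next_id += 1
                self._users[uid] = name
                return uid

            def find_user(self, uid):
                return self._users.get(uid)

            def delete_user(self, uid):
                self._users.pop(uid, None)

        class CreateUserTest(unittest.TestCase):
            def setUp(self):
                self.client = FakeServiceClient()
                self.created = []

            def tearDown(self):
                # runs even if the test body failed, so cleanup is not skipped
                for uid in self.created:
                    self.client.delete_user(uid)

            def test_create_user_returns_valid_user_id(self):
                uid = self.client.create_user("user1")
                self.created.append(uid)
                self.assertIsNotNone(uid)
                self.assertIsNotNone(self.client.find_user(uid))

        if __name__ == "__main__":
            unittest.main()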

    Read the article

  • Reading source code to learn

    - by perl.j
    As you develop as a programmer, IMO, you begin to see different practices, different algorithms, and "more than one way to do it." Seeing this code can be a great learning experience for you, even though you did not write it. But is doing this only going to confuse you?

    For example, let's say you have a library in any language that was created by a colleague, and you have been using it for a while. You decide to look at the actual source code, regardless of how extensive it is, to get a better look at how this library is written. For the sake of example, the function you use most often from this library is the max function, which finds the larger of two numbers. But this function is a lot more complicated than it needs to be. The way it is written is confusing the heck out of you, and you don't know how it works. Will this make you a better programmer, because you realize how complicated it is for such a simple function, or will it make you a worse coder because you feel less confident?

    So my question, in general, is: does reading source code make you a better programmer, and if so, how? If not, why do people still do it?
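
    A hypothetical illustration of the kind of gap described, with both versions invented for the example (in Python):

        def max_simple(a, b):
            """The obvious version."""
            return a if a >= b else b

        def max_convoluted(a, b):
            """An unnecessarily clever version of the same thing."""
            # sort a two-element list and take the last item
            pair = sorted([a, b])
            return pair[-1]

        assert max_simple(3, 7) == max_convoluted(3, 7) == 7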

    Read the article

  • Moore's law and a quadratic algorithm

    - by damon
    I was going through a video (from Coursera, by Sedgewick) in which he argues that you cannot sustain Moore's law using a quadratic algorithm. He elaborates like this:

    In year 197* you build a computer of power X and need to count N objects. This takes M days. According to Moore's law, you have a computer of power 2X after 1.5 years. But now you have 2N objects to count. If you use a quadratic algorithm, in year 197*+1.5 it takes (4M)/2 = 2M days: 4M because the algorithm is quadratic, and division by 2 because of the doubled computer power.

    I find this hard to understand. I tried to work through it as below:

    To count N objects using comp = X, it takes M days -> N/X = M.
    After 1.5 years, you need to count 2N objects using comp = 2X -> 2N/(2X) -> N/X -> M days.

    Where do I go wrong? Can someone please help me understand?
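
    For reference, the instructor's arithmetic restated as equations (assuming the quadratic algorithm's running time is proportional to N squared divided by the machine power, written in LaTeX notation):

        % time to count N objects on a machine of power X with a quadratic algorithm
        T_{1} = \frac{c\,N^2}{X} = M
        % 1.5 years later: 2N objects on a machine of power 2X
        T_{2} = \frac{c\,(2N)^2}{2X} = \frac{4\,c\,N^2}{2X} = 2M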

    Read the article

  • Dependency Checker/Installer with Java/Ant

    - by jsn
    I need some kind of software to easily roll out code on new servers. I use Apache Ant for builds. However, say I want to set up a new server fast, and my Java program depends on GhostScript: is there any software that can automatically check the computer for it (and then maybe the PATH) and add it if it is not there? I have already looked at Maven and Apache Ivy; however, I think these are only for .jar files (from what I saw). Thanks for any help.
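
    The question is about Ant/Java tooling, but purely to illustrate the kind of "is it installed / is it on the PATH" check being described, here is a short Python sketch (the executable name gs is an assumption; Ghostscript's binary is named differently on Windows):

        import shutil
        import sys

        # look for the Ghostscript executable on the PATH ("gs" on most Unix-like systems)
        gs_path = shutil.which("gs")

        if gs_path is None:
            sys.exit("Ghostscript not found on PATH; please install it before deploying.")
        print("Found Ghostscript at", gs_path)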

    Read the article

  • Etymology of software project names [closed]

    - by Benoit
    I would like to have a reference community wiki here in order to know what etymology software names have, or why they are named that way. I was wondering why ImageMagick's mogrify was named this way. Today I wondered the same for Apache Lucene. It would be handy to have a list here. Could we extend such a list? Let me start and let you edit it, please. I will ask for this to be made community wiki. For each entry, please link to an external reference.

    - GNU Emacs: stands for "Editor MACroS".
    - Apache Lucene: an Armenian name.
    - ImageMagick mogrify: from "transmogrify".

    Thanks.

    Read the article

  • Do we need use case levels or not?

    - by Gabriel Šcerbák
    I guess no one would argue for decomposing use cases; that is just wrong. However, sometimes it is necessary to specify use cases which are on a lower, more technical level, like for example authentication and authorization, which give the actor value but are further from his business needs. Cockburn argues for levels when needed and explains how to move use cases from/to different levels and how to determine the right level. On the other hand, e.g. Bittner argues against use case levels, although he uses subflows and, at the end of his book, mentions that at least two levels are needed most of the time.

    My question is: do you find use case levels necessary, helpful or unwanted? What are the reasons? Am I missing some important arguments?

    Read the article

  • How can I improve my error checking and handling?

    - by Google
    Lately I have been struggling to understand what the right amount of checking is and what the proper methods are. I have a few questions regarding this:

    1. What is the proper way to check for errors (bad input, bad states, etc.)?
    2. Is it better to explicitly check for errors, or to use functions like asserts, which can be optimized out of your final code? I feel like explicit checking clutters a program with a lot of extra code which shouldn't be executed in most situations anyway, and most errors end up with an abort/exit failure anyway. Why clutter a function with explicit checks just to abort?

    I have looked into asserts versus explicit checking of errors and found little that truly explains when to do either. Most say "use asserts to check for logic errors and use explicit checks to check for other failures." This doesn't seem to get us very far, though. Would we say this is feasible:

    - malloc returning null: check explicitly;
    - an API user passing odd input to functions: use asserts.

    Would this make me any better at error checking? What else can I do? I really want to improve and write better, 'professional' code.
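
    A small sketch of the "asserts for logic errors, explicit checks for other failures" rule, written in Python for brevity (Python's -O flag strips asserts, roughly like NDEBUG does for C's assert); the config-loading scenario is invented for the example:

        def load_config(path):
            # external failure: the caller/user can plausibly cause this,
            # so check explicitly and report it, don't assert
            try:
                with open(path) as f:
                    text = f.read()
            except OSError as exc:
                raise RuntimeError("cannot read config file %r: %s" % (path, exc))

            entries = _parse(text)
            # internal invariant: _parse promises a dict; a violation is a bug in
            # this module, so an assert (removable in optimized builds) is enough
            assert isinstance(entries, dict), "parser broke its contract"
            return entries

        def _parse(text):
            # illustrative stand-in for a real parser
            return dict(line.split("=", 1) for line in text.splitlines() if "=" in line)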

    Read the article

  • How to improve design ability

    - by Cong Hui
    I recently went on a couple of interviews, and all of them asked one or two design questions, like how you would design chess, Monopoly, and so on. I didn't do well on those, since I am a college student and lack experience implementing big and complex systems. I figure the only way to improve my design ability is to read lots of other people's code and try to implement things myself. Therefore, for the companies that ask these questions, what are their real goals in asking them? I figure most college grads start off working in a team guided by a senior leader in their first jobs; they might not have lots of design experience fresh out of college. Could anyone give pointers about how to practice those skills? Thank you very much.

    Read the article

  • iPhone/Android app – chatroom development – what framework & hosting needs?

    - by MikaelW
    I have some experience with iPhone and Android development, but I am now struggling to solve a new class of problem: apps that involve a client/server chatroom feature. That is, an app where people can exchange text over the internet, without having the app constantly "pull" content from the server. So that problem can't be solved with a normal PHP/MySQL website; there must be some kind of application running on a server that is able to send messages from the server to the phone, rather than having the phone check for new messages every 10 seconds…

    So I'm looking for ways to solve the different problems here:

    - What framework should I use on the two sides (phone/server)? It should be some kind of library that doesn't prevent me from writing paid apps. It should also be possible to use the same server for the iPhone and Android versions of the app.
    - What server/hosting solution do I need, and with what sort of features? I just have no experience with server applications that can handle and initiate multiple connections and are hosted on hardware that is always online.

    I tried to find resources online but couldn't so far; either the libraries had the wrong kind of license/language or I just didn't understand… Sometimes there were nice tutorials, but for different needs, such as peer-to-peer chat over a local network… Same with the server and the hosting problem: I'm not sure where to start, really. I'm calling for help, and I promise I will complete this page with notes about the experience I gain :-)

    Obviously the ideal would be to find a tutorial I missed that includes client code, server code and a free scalable server… That being said, if I see something that good, it probably means that I have eaten the wrong kind of mushroom again… So, failing that, any pointer which might help me on that quest would be greatly appreciated. Thanks in advance. Mikael

    Read the article

  • Good Laptop .NET Developer VM Setup

    - by Steve Brouillard
    I was torn between putting this question on this site or Super User. I've tried to do a good bit of searching on this, and while I find plenty of info on why to go with a VM or not, there isn't much practical advice on HOW to best set things up. Here's what I currently HAVE:

    - HP EliteBook 1540, quad-core, 8GB memory, 500GB 7200 RPM HD, eSATA port. Decent machine. Should work just fine.
    - Windows 7 64-bit host OS. This also acts as my day-to-day basic stuff (email, Word docs, etc...) OS.
    - VMWare Desktop.
    - Windows 7 64-bit guest OS with all my .NET dev tools, frameworks, etc. loaded on it. It's configured to use 2 cores and up to 6GB of memory. I figure that the dev environment will need more than email, Word, etc...

    So, this seemed like a good option to me, but I find that with the VM running, things tend to slow down all around on both the host and guest OS. Memory and CPU utilization don't seem to be an issue, but I/O does. I tried running the VM on an external eSATA drive, figuring that the extra channel might pick up the slack. Things only got worse (could be my eSATA enclosure).

    So, for all of that, I have basically two questions in one:

    1. Has anyone used this sort of setup, and are there any gotchas around the VMWare configuration or anything else I may have missed here that you can point me to?
    2. Is there another option that might work better? For example, I've considered trying a lighter-weight host OS and running both of my environments as VMs. I tried this with Server 2008 Hyper-V, but I lose too much laptop functionality going this route, so I never completed the setup. I'm not averse to Linux as a host OS, though I'm no Linux expert.

    If I'm missing any critical info, feel free to ask. Thanks in advance for your help. Steve

    Read the article

  • Is Intellisense faster in Visual Studio 2012 compared to Visual Studio 2010 for C++ projects?

    - by syplex
    We switched to VS2010 from VS2003 a few months ago, and there are many, many improvements. But the speed of IntelliSense is not one of them (although it does generate higher-quality results, which is great). I read that IntelliSense and the MSDN help system were being improved in VS2012, so I'm curious whether it's actually faster. The only data I could find were graphs from an early release (VS2011). For the record, I am using a vanilla install of VS2010 with SP1 on Windows 7 SP1 (x64), with no plugins or add-ins running. What I'm looking for specifically:

    - Has the speed of IntelliSense autocomplete improved?
    - Has the speed of F12 (go to definition) improved?

    The answers to these questions will help in determining whether VS2012 is worth the money to upgrade at this time, as the IntelliSense slowness would be the only major reason for upgrading. I'd also be interested in knowing whether the help system has improved. I'm currently using the MSDN help from VS2008 SP1 because it has filtering and is faster.

    Read the article
