Search Results

Search found 18761 results on 751 pages for 'lot'.


  • Why do C++ people love multithreading when it comes to performance?

    - by user1849534
    I have a question about why programmers seem to love concurrency and multi-threaded programs in general. I'm considering two main approaches here: an async approach, basically based on signals (or just "async" as it's called by many papers and languages, like the new C# 5.0, for example), with a "companion thread" that manages the policy of your pipeline; and a concurrent, or multi-threading, approach. I will just say that I'm thinking about the hardware here and the worst-case scenario, and I have tested these two paradigms myself. The async paradigm is such a winner that I don't get why people talk about concurrency 90% of the time when they want to speed things up or make good use of their resources. I have tested multi-threaded programs and async programs on an old machine with an Intel quad-core that doesn't have a memory controller inside the CPU; the memory is managed entirely by the motherboard. In this case performance is horrible with a multi-threaded application: even a relatively low number of threads, like 3-5, can be a problem; the application is unresponsive and is just slow and unpleasant. A good async approach is, on the other hand, probably not faster, but it's not worse either: my application just waits for the result and doesn't hang, it's responsive, and it scales much better. I have also discovered that a context switch in the threading world is not that cheap in a real-world scenario; it's in fact quite expensive, especially when you have more than two threads that need to cycle and swap among each other to get computed. On modern CPUs the situation is not really that different: the memory controller is integrated, but my point is that an x86 CPU is basically a serial machine, and the memory controller works the same way as on the old machine with an external memory controller on the motherboard. The context switch is still a significant cost in my application, and the fact that the memory controller is integrated, or that newer CPUs have more than two cores, is no bargain for me. From what I have experienced, the concurrent approach is good in theory but not that good in practice: with the memory model imposed by the hardware it's hard to make good use of this paradigm, and it also introduces a lot of issues, ranging from the use of my data structures to the joining of multiple threads. Also, neither paradigm offers any guarantee about when a task or job will be done at a certain point in time, which makes them really similar from a functional point of view. Given the x86 memory model, why do the majority of people suggest using concurrency with C++ rather than just an async approach? And why not consider the worst-case scenario of a computer where the context switch is probably more expensive than the computation itself?
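
    As a rough illustration of the "async" style contrasted with raw threads above, here is a minimal C++11 sketch using std::async and std::future. It is not taken from the original question; the function name heavy_sum and the data are made up for the example.

        // Minimal sketch of the async style: hand the work to the runtime,
        // stay responsive, and block only when the result is actually needed.
        #include <future>
        #include <iostream>
        #include <numeric>
        #include <vector>

        long long heavy_sum(const std::vector<int>& data) {
            // Stand-in for an expensive computation.
            return std::accumulate(data.begin(), data.end(), 0LL);
        }

        int main() {
            std::vector<int> data(1000000, 1);

            // Launch the work asynchronously and get a future for the result.
            std::future<long long> result =
                std::async(std::launch::async, heavy_sum, std::cref(data));

            // ... the caller can keep servicing UI/events here instead of blocking ...

            std::cout << result.get() << '\n';  // block only at the point the value is needed
            return 0;
        }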

    Read the article

  • I have a performance problem

    - by Alan
    (copied from my wordpress blog). So start 95% of the performance calls that I receive. They usually continue something like: I have gathered some *stat data for you (e.g. the guds tool from Document 1285485.1), can you please root cause our problem? So, do you think you could? Neither can I; based on this, my answer inevitably has to be "No". Given this kind of problem statement, I have no idea about the expectations, the boundary conditions, or even the application. The answer may as well be "Performance problems? Consult your local Doctor for Viagra". It's really not a lot to go on. So, what kind of problem description is going to allow me to start work on the issue that is being seen? I don't doubt that there really is an issue, it just needs to be pinned down somewhat. What behavior exactly are you expecting to see? Be specific and use business metrics, for example "run-time", "response-time" and "throughput". This helps us define exit criteria. Now, let's look at the system that is having problems. How is what you are seeing different? Use the same type of metrics. The answers to these two questions take us a long way towards being able to work a call. Even more helpful are answers to questions like: Has this system ever worked to expectation? If so, when did it start exhibiting this behavior? Is the problem always present, or does it sometimes work to expectation? If it sometimes works to expectation, when are you seeing the problem? Is there any discernible pattern? Is the impact of the problem getting better, worse, or remaining constant? What kind of differences are there between when the system was performing to expectation and when it is not? Are there other machines where we could expect to see the same issue (e.g. similar usage and load), but are not? Again, differences? Once we start to gather information like this, we start to build up a much clearer picture of exactly what we need to investigate, and what we need to achieve so that both you and I agree that the problem has been solved. Please help get that figure of poorly defined problem statements down from its current 95% value.

    Read the article

  • How to properly document functionality in an agile project?

    - by RoboShop
    So recently, we've just finished the first phase of our project. We used agile with fortnightly sprints. And whilst the application turned out well, we're now turning our eyes to some of the maintenance tasks. One maintenance task is that all of our documentation appears in the form of specs. These specs describe one or more stories and generally are a body of work which a few devs could knock over in a week. For development, that works really well - every two weeks the devs get handed a spec, and it's a nice discrete chunk of work that they can just do. From a documentation point of view, this has become a mess. The problem with writing specs that are focused on delivering just-in-time requirements to developers is that we haven't placed much emphasis on the big picture. Specs come from all different angles - one could be describing a standard function, another could be describing parts of a workflow, another could be describing a particular screen... And now we have business rules about our application scattered across 120 documents. Looking for the document that covers a particular business rule or function is quite hard, because you don't know which document has this information, and making a change request is equally hard because, once again, we are unsure about which spec to change. So we have maybe a couple of weeks of lull before it's back to spec'ing out functionality for the next phase, but in this time I'd like to revisit our processes. I think the way we have worked so far in terms of delivering fortnightly specs works well. But we also need a way to manage our documentation so that our business rules for a given function / workflow are easy to locate and change. I have two ideas. One is that we compile all of our specs into a series of master specs, broken down by a few broad functional areas. The specs describe the sprint; the master specs describe the system. The only problems I can see are 1) our existing 120 specs are not all neatly defined into broad functional areas - some will require breaking up, merging etc., which will take a lot of time - and 2) we'll be writing specs and updating master specs in each new sprint, which seems like double the work; and then do the devs look at the spec or the master spec? My other suggestion is to concede that our documentation is too big of a mess, and manage that mess going forward. So we go through each spec, assign keywords to it, and then when we want to search for a function, we search for that keyword. The problem I can see is 1) business rules are still scattered everywhere; keywords just make them easier to find. Anyway, if anyone has any decent ideas or any experience to share about how best to manage documentation, I would really appreciate it.

    Read the article

  • Which one to select for my future career: Java, C#, Azure, or Apex?

    - by user636195
    Hi folks, this time I am going to be studying for a Masters in Computer Science in the U.S.A., starting in a week. I have been doing my B.Sc. for the past three years, and after my freshman year I started working on projects (in C#, and very rarely in Java) for the past two years (i.e. while I was a second- and third-year student). Now I am in a college where all of the programming courses are going to be taught in Java only (using Eclipse), and I am going to stay in this college for 8 months on campus and then be fully employed for two years in other companies as a CPT. I really love to work on Microsoft products because, for me, they are simple and easy to use and understand. My future plan is to work in cloud computing and to be a cloud-based business owner in the near future. Since the college is going to teach us and let us do every project in Java, I was confused about which programming language to use that will help me and enhance me in my career, and of course I wanted to select the one I like working with every time. I have also heard a lot about Azure (Microsoft's) and Apex (Salesforce.com's cloud computing programming language). Would you please give me your advice and recommendation based on my situation? Should I study only Java, or should I study C# or Azure on my own besides Java? The reason I ask is that, since I have no clue how Azure works and how long it will take me to learn it, I am really confused about which one to select (Java vs. C#, and Azure vs. Apex, or whether there is any other popular and widely used cloud computing language). Do you think I can get a job in cloud computing if I study Azure or Apex on my own, without experience? There is also one issue I want to consider, which is a short-term issue: salary. Since I have to pay my student loan, I also need to get a good job which will let me pay my loan within two years. But, as I said, my long-term plan is to get experience in cloud computing (from programming to the administrative part, i.e. every area of cloud computing) and then have my own business, maybe within 5-10 years. What do you think? Thank you for your time.

    Read the article

  • Redesigning an Information System - Part 1

    - by dbradley
    Over the next few weeks or months I'd like to run a small series of articles sharing my experiences from the largest of the projects I've worked on, and explore some of the real-world problems I've come across and how we went about solving them. I'm afraid I can't give too many specifics on the project right now as it's not yet complete, so you'll have to forgive me for being a little abstract in places! To start with I'm going to run through a little of the background of the problem and the motivations to re-design from scratch. Then I'll work through the approaches taken to understanding the requirements, designing, implementing, testing and migrating to the new system. Motivations for Re-designing a Large Information System: The system is one that's been in place for a number of years and was originally designed to do something significantly different from what it's now being used for. This is mainly due to the product maturing as well as client requirements changing. As with most information systems, this one can be defined in four main areas of functionality: Input - adding information to the system; Storage - persisting information in an efficient, searchable structure; Output - delivering the information to the client; Control - management of the process. There can be a variety of reasons to re-design an existing system; a few of our own turned out to be factors such as: overall system reliability, system response time, failure isolation and recovery, maintainability of code and information, general extensibility to solve future problems, separation of business and product concerns, and new or improved features. The factor that started the thought process was the desire to improve the way in which information was entered into the system. However, this alone was not the entire reason for deciding to redesign. Business Drivers: Typically all software engineers would prefer to do a project from scratch themselves. It generally means you don't have to deal with problems created by predecessors and you can create your own absolutely perfect solution. However, the reality of working within a business is that the bottom line comes down to return on investment. For a medium-sized business such as mine there must be real value deliverable within a reasonable timeframe for any work to be started. As a result, any long-term project will generally take a lot of effort and consideration to be approved by those in charge, and therefore it might be better to break the project down into more manageable chunks which allow more frequent deliverables and value within a shorter timeframe. As the only thing of concern was the method for inputting information, this is where we started with requirements gathering and design. However, knowing that there might be more to the problem, and not limiting your design decisions before the requirements are understood, is key to finding the best solutions.

    Read the article

  • Are there new flexible text editors? [closed]

    - by RParadox
    Vi(m) and Emacs are almost 40 years old. Why are they still the standard, and what attempts have been made at coming up with a new flexible editor? Are there any features which cannot be built into vim/emacs? My question is similar to this one: Time to drop Emacs and vi? No one had a suggestion, which surprises me. I would have thought that the core of a text editor is very small and that people would brew their own. Perhaps nobody wants to deal with supporting all the modes? Edit to clarify my question: instead of writing modes and scripts, I ask myself why there isn't a much more lightweight project which lets people customize the editor more directly. Vim has 365,395 LOC (C lines, all included), Emacs 1.5 million LOC. Why isn't there a project of, say, 50k LOC - just the core - which people can use more freely? Perhaps there is such a project; I haven't looked very far. I thought about putting together modules from Vim myself and experimenting with some ideas. The core of editing shouldn't be more than 10k? Vim has a lot of optimizations which are really overkill nowadays. Basically I'm looking for a code base for my own editor, and Vi/Emacs are, I believe, not intended to be used in this way. Bill Joy said the following about vi (http://web.cecs.pdx.edu/~kirkenda/joy84.html): "The fundamental problem with vi is that it doesn't have a mouse and therefore you've got all these commands. In some sense, its backwards from the kind of thing you'd get from a mouse-oriented thing. I think multiple levels of undo would be wonderful, too. But fundamentally, vi is still ed inside. You can't really fool it. Its like one of those pinatas - things that have candy inside but has layer after layer of paper mache on top. It doesn't really have a unified concept. I think if I were going to go back - I wouldn't go back, but start over again."

    Read the article

  • Is Ubuntu recognizing and/or using my NVIDIA graphics card?

    - by user212860
    This is my first post here, and I'm pretty new to Ubuntu/Linux. I currently have no other OS except for Ubuntu 13.10 (I used to have Win7 until I got a new terabyte hard drive). My current PC build, if any of this helps: CPU: Intel i5 quad-core; Graphics: NVIDIA GeForce GTX 650; RAM: 8 GB; HDD: 1 TB SATA 3; Motherboard: MSI Z77A-G41; OS: Ubuntu 13.10. So I recently installed Ubuntu 13.10 and put Steam on it, and I'm seeing that my games run a lot slower than they did when I had Win7. I figured it was a graphics problem, so I checked System Settings > Details > Overview. It says under "Graphics" that I have "Gallium 0.4 on NVE7" (I don't really know what that is). Does this mean that Ubuntu is not using my graphics card? In System Settings > Software & Updates > Additional Drivers, it clearly shows this: NVIDIA Corporation: GK107 [GeForce GTX 650] - This device is using an alternative driver (and then it shows a list of drivers that I can switch back and forth between). So this is a bit confusing. In Software & Updates, it clearly shows that I have my NVIDIA card installed and that I have a driver selected for it. But in System Settings, it shows I have some Gallium 0.4 thing. I had done a bit of research and ended up typing the command "lspci | grep VGA" in the Terminal. It showed this in response: VGA compatible controller: NVIDIA Corporation GK107 [GeForce GTX 650] (rev a1). The Terminal seems to recognize my graphics card. What it looks like to me is that I don't have the proper driver, and I might be using my CPU's integrated graphics. When I switch around which driver I am using in that list, System Settings still does not see my card. Some of the drivers in the list give me some sort of OpenGL error when I try to run a game. It might just be that my games are running slow because the game developers have not optimized them for Ubuntu that well. However, that still doesn't take away from the fact that System Settings is not showing my NVIDIA card. TL;DR version: How do I know if my video card is being recognized/used? If my video card is not being used, what is the best way to fix that? Please make your answers easy to understand. I do not mind wordy responses, as long as I can follow what you're saying. Any help would be greatly appreciated! Thanks, Jabber5

    Read the article

  • BizTalk Orchestration & Port Tutorial Part 2

    - by bosuch
    In Part 1 I showed how to create and publish a simple Orchestration demo. Now we’ll finish configuring it in the admin console and test it. Open the BizTalk Server 2009 Administration Console, and expand BizTalk Server 2009 Administration, then Applications. You should have an entry for OrchestrationPortDemo – expand it as well.

    First, we’ll add the Receive Port – the place that we’ll drop the test file. Right-click on Receive Ports and select New One-way Receive Port. On the General tab, name it InputPort, then click over to Receive Locations. Click New to add a new location. Your receive location can be FTP, SQL, WCF, SharePoint, or many other choices, but for this demo we’ll add a File location. Click the Configure button and set a receive folder (something like “C:\PortDemo\”) and a file mask (stick with “*.xml” for now) and click OK three times to create your Receive Port.

    Next we’ll create the Send port – the location where BizTalk will drop the file. Right-click on Send Ports and choose New Static One-way Send Port. Give it an appropriate name, and configure the FILE Transport Properties as shown. Click OK twice and your Send Port will be created.

    Now we’ll configure the Orchestration Bindings. Click on Orchestrations, then right-click the orchestration itself and select Properties. Select the Bindings tab. Choose BizTalkServerApplication as the host, and select the Send and Receive ports you previously created, as shown.

    Now it’s time to fire everything up. Right-click on the send port you created and click Start. Once the Status column displays “Started”, click on Receive Locations and Enable the Receive Location previously created. Finally, start the Orchestration.

    Now, time to test! Create a simple xml file like:

        <root>
           <Node1>Test</Node1>
           <Node2>Test</Node2>
        </root>

    And drop it into the C:\PortDemo folder. After a couple of seconds the file should disappear – this indicates BizTalk has picked it up for processing. Look in the C:\PortDemo\Output folder and you should see an xml file with a GUID for a name, like {7C50104F-FC3E-4A49-B2FA-4F560A37636D}.xml. Open it to verify that it matches your input file. Practically, this demo doesn’t do a whole heck of a lot, but it shows you the basics for building, publishing and running an orchestration.

    Read the article

  • Help understanding my hard drive / partitioning situation... Pictures Included! :)

    - by xopenex
    So I have installed Windows 7 and two different distros of Linux... I have read about and tried to understand things like "spanned", "extended", "primary", "swap", "dev/dev2/", "GRUB", "Windows Boot Loader/Manager" etc.... and I have a very, very limited understanding of all of it! :) I am trying to figure out how to get all OS boot options into one boot manager (I'm thinking it will be GRUB), because at this point when I turn on my computer I basically get two boot options (excluding the memtest options etc)... One option is to boot one of my Linux distros, and the second option is to boot my Windows 7. When I go with the first option, Linux boots up... when I go with the second, Windows 7, option, I get the "Windows Boot Manager" screen and I can choose Windows 7 or my other installation of Linux (Ubuntu)... In addition, I did not have a swap partition from my first installation of Linux; I created it during the installation of my second distro... This is a lot of info for me, but I'm guessing that you Linux gurus pretty much understand what is going on! Hope my question makes sense.. I will try and simplify... Can I get all 3 OS's optioned to boot from one GRUB? Can I get both Linux distros to use one swap file (I have seen this described as possible in other threads, but because of how my disk is partitioned, I don't know if I can do this)? I hope that I don't have to start all over installing one after the other. I've got some pics that may help explain my hard drive situation! Thanks guys! :) EDIT... I had some pics, but I'm a new member.. so I can't post them... :( Here is a description of the pics... in case I can email them or post later. [grub][3]: First screen I come to after turning on the computer... "Ubuntu with linux 3.2.6" (highlighted) fires up Linux perfectly... the other choice at the bottom of the list, "Windows 7 (loader) (on dev/sda1)"... brings me to the next picture below, the Windows Boot Manager. [win boot mngr][6]: both options here load the OS selected. [Disk Manager Windows][1]: picture of my hard drive situation through the Windows Disk Manager utility. [gparted][2]: picture of my hard drive situation through "gparted". [mycomp][4]: picture of my hard drive situation through "my computer". [paragon][5]: one last pic of my hard drive situation through the eyes of "paragon".

    Read the article

  • Is it smart to take a year off from school to get experience?

    - by user134147
    Firstly, I apologize if this question is not appropriate for the site, but I've seen other similar (though slightly deviant) questions on this site before and I know the people here are the most qualified to answer my question. Anyway, I'm currently between my sophomore and junior years at a 4-year university, and after a bit of deliberation I've decided on computer science as a major (a BA, by the way, as a BS would require me to stay at least an extra year the way our program is set up). I've been interested in programming for a few months now and I've developed a passion for it in a very short time. I began learning C++, migrating to Java recently when I learned my school focuses on this language. Now, I should mention that the concept of higher education has never sat well with me, so part of my motivation for wanting to take time off is to truly challenge myself and see what I can accomplish when I actually try at something. The autodidact in me finds it difficult to focus on my passions while trying to keep a high GPA in unrelated classes. However, I understand the times we live in and therefore would plan to complete my degree after this year. So my question is whether or not the skills I learn in a year off from college could justify the time off from school. Unfortunately, I don't believe I know enough yet to gain any professional experience (internship, etc.), so I would mostly focus my time on learning Java and another language, possibly WordPress (to gain an understanding of web programming concepts, as I have not yet decided what field I want to get into, and to make some money to fund my off-year), and to delve into security concepts, which also interest me. I'm hoping I could work on projects, such as simple applications or contributions to open source software, during this time to enhance my resume once I do finish school, so I can find a job out of college more easily. I do not want to be the new hire who knows nothing beyond the concepts of his Java textbooks. Does anyone have any input about these thoughts of mine, or any ideas for where I should focus my studies or how high I might set the bar for my work? Thanks a lot everyone!

    Read the article

  • "Collection Wrapper" pattern - is this common?

    - by Prog
    A different question of mine had to do with encapsulating member data structures inside classes. In order to understand this question better, please read that question and look at the approach discussed. One of the guys who answered that question said that the approach is good, but - if I understood him correctly - he said that there should be a class existing just for the purpose of wrapping the collection, instead of an ordinary class offering a number of public methods just to access the member collection. For example, instead of this:

        class SomeClass{
            // downright exposing the concrete collection.
            Thing[] someCollection;
            // other stuff omitted
            Thing[] getCollection(){ return someCollection; }
        }

    Or this:

        class SomeClass{
            // encapsulating the collection, but inflating the class' public interface.
            Thing[] someCollection;
            // class functionality omitted.
            public Thing getThing(int index){ return someCollection[index]; }
            public int getSize(){ return someCollection.length; }
            public void setThing(int index, Thing thing){ someCollection[index] = thing; }
            public void removeThing(int index){ someCollection[index] = null; }
        }

    We'll have this:

        // encapsulating the collection - in a different class, dedicated to this.
        class SomeClass{
            CollectionWrapper someCollection;
            CollectionWrapper getCollection(){ return someCollection; }
        }

        class CollectionWrapper{
            Thing[] someCollection;
            public Thing getThing(int index){ return someCollection[index]; }
            public int getSize(){ return someCollection.length; }
            public void setThing(int index, Thing thing){ someCollection[index] = thing; }
            public void removeThing(int index){ someCollection[index] = null; }
        }

    This way, the inner data structure in SomeClass can change without affecting client code, and without forcing SomeClass to offer a lot of public methods just to access the inner collection. CollectionWrapper does this instead. E.g. if the collection changes from an array to a List, the internal implementation of CollectionWrapper changes, but client code stays the same. Also, CollectionWrapper can hide certain things from the client code - for example, it can disallow mutation of the collection by not having the methods setThing and removeThing. This approach to decoupling client code from the concrete data structure seems, IMHO, pretty good. Is this approach common? What are its downfalls? Is it used in practice?

    Read the article

  • Zelda-style top-down RPG. How to store tile and collision data?

    - by Delerat
    I'm looking to build a Zelda: LTTP style top-down RPG. I've read a lot on the subject and am currently going back and forth between a few solutions. I'm using C#, MonoGame, and Tiled. For my tile maps, these are the choices I can see in front of me: (1) Store each tile as its own array, each one having 3-4 layers, texture/animation, depth, flags, and maybe collision (depending on how I do it). I've read warnings about memory issues going this route, and my biggest map will probably be 160x120 tiles. My average map, however, will be about 40x30. The number of tiles might be cut in half if I decide to double my tile size, which is currently 16x16. This is the most appealing approach for me, as I feel like I would know how to save maps, make changes, and separate the map into chunks for collision checks. (2) Store the static parts of my tile map in multiple arrays acting as the different layers, and then just use entities for anything that isn't static. All of the other tile data such as collision, depth, etc. would be stored in their own layers as well, I guess? This way just seems messy to me though. Regardless of which one I choose, I'm also unsure how to plan all of that other tile data. I could write a bunch of code that knows which integer represents which tile and its data, but if I changed a tileset in Tiled and exported it again, all of those integers could potentially change and I'd have to adjust a whole bunch of code. My other issue is about how I could do collision. I want to at least support angled collision that slides you around the corners of objects like LTTP does, if not more oddball shapes as well. So do I: (a) store collision as a flag for binary collision - could I get this to support angles? Would it be fine to store collision as an integer and have each number represent a certain angle of collision? - or (b) store a list of rectangles or other shapes and do collision that way? Sorry for the large two-part (three-part?) question. I felt like these needed to be asked together, as I believe each choice influences the others.
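
    Just to illustrate the per-tile approach discussed above, here is a rough sketch of one possible data layout. It is written in C++ for brevity (the poster is using C#/MonoGame, where the same idea maps over directly), and every name in it (Tile, TileMap, CollisionShape) is made up for the example rather than taken from the question.

        #include <cstdint>
        #include <vector>

        // Collision stored per tile as a shape id rather than a plain boolean,
        // so angled "slide around the corner" tiles are just another shape.
        enum class CollisionShape : std::uint8_t {
            None,        // walkable
            Solid,       // full-tile block
            CornerNE,    // angled corner, LTTP-style sliding
            CornerNW,
            CornerSE,
            CornerSW
        };

        struct Tile {
            std::uint16_t  layer[4];   // texture/animation index per layer
            std::int8_t    depth;      // draw order
            std::uint8_t   flags;      // misc flags (animated, trigger, ...)
            CollisionShape collision;
        };

        struct TileMap {
            int width  = 0;
            int height = 0;
            std::vector<Tile> tiles;   // width * height tiles, row-major

            Tile&       at(int x, int y)       { return tiles[y * width + x]; }
            const Tile& at(int x, int y) const { return tiles[y * width + x]; }
        };

    For scale: at roughly 12 bytes per tile, even the worst-case 160x120 map is only about 19,200 tiles, i.e. a few hundred kilobytes, so memory is unlikely to be the deciding factor between the two storage options.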

    Read the article

  • Is my work on a developer's test being taken advantage of?

    - by CodeWarrior
    I am looking for a job and have applied to a number of positions. One of them responded; I had a pretty lengthy phone interview (perhaps an hour plus) and they then set me up with the developer's test. I was told that this test was estimated to take between 6 and 8 hours and that, provided it met with their approval, I could be paid for my work on it. That gave me some pause, but I endeavored. The dev test took place on a VM accessed via RDP. The task was to implement a search page in a web project that requests data from the server, displays it on the screen in a table, and has a pretty complicated search filtering scheme (there are about 15 statuses, and when sending the search to the server you can filter by these statuses) in addition to the string/field search. They want some SVG icons to change color on certain data values, they want some data to be represented differently than how it is in the database, etc. Long story short, this took one heck of a lot longer than 6-8 hours. Much of it was due to the very poor VM that I was running on (Visual Studio 2013 took 10 minutes to load, and another 15 minutes to open the ginormous 3 GB solution). After completing it, I was told to commit my changes to source control... Hmm, OK. I got an email back saying that they thought the SVGs' color could be changed differently, they found a bug in this edge case, there was an occasional problem with this other thing that I never experienced, etc. So I am 13-14 hours into this thing now and I have to do bug fixes. I do them, and they come back with some more. This is all apparently going into a production application. I noticed some anomalies in the code that was already there, where it looked like other people had coded all of one piece of functionality and nothing else that I could find. Am I just being used for cheap labor? Even if they pay me the promised 50 dollars an hour for 6 hours, I have committed something like 18 hours to this thing now. If I bug-fix all of the stuff they keep coming up with, I will have worked at least 16 hours for free. I have taken a number of dev tests. I have never taken one where I worked on code that was destined for production. I have never taken one where I implemented a feature that was in the pipeline for development (it was planned for, and I implemented it through the course of the test). And I have never taken one that took 4 rounds and a total of 20+ hours. I get the impression that they are using their dev test to field, on the cheap, some of the functionality that they don't have time for in their normal team. Also, I wouldn't mind a 'devtest' tag.

    Read the article

  • Visual Studio 2010/2012 Context Menus and a Keyboard

    - by SergeyPopov
    As a software developer, I spend a lot of time using Visual Studio. I have to say that I am completely satisfied with Visual Studio in general. Nevertheless, sometimes Visual Studio starts annoying me. One issue which poisoned my existence for a long time is that context menu behavior in VS2010 is a little different than it was in VS2005/2008. Unfortunately, in VS2012 this behavior remains the same as in VS2010. So, what is the issue? Working with Visual Studio, I use the keyboard in most cases. I also use the Apps key on the keyboard to open context menus in the code editor. Moreover, I got used to certain key sequences a long time ago, and I press the keys without even thinking. In VS2008, the mouse pointer position didn't affect context menu navigation if I used the keyboard. Every time I opened a context menu I was sure that, for example, the "Apps, Down, Down, Enter, Up, Enter" key sequence would always invoke the "Organize Usings > Remove and Sort" function. But in VS2010, this behavior has been changed. If the mouse pointer is located over an opened context menu, the menu item under the mouse pointer becomes selected immediately! So now the "Apps, Down, Down, Enter, Up, Enter" key sequence will not lead to the expected result every time. In some cases, the result may be a little scary. If you are using the VisualSVN extension, this key sequence may invoke the "Revert whole file" function. Of course, this is not a fatal problem because the "Undo" function restores all the changes, but this behavior strongly annoys me. In Visual Studio 2012, context menu behavior is a little different than in VS2010, but the mouse pointer position still affects keyboard navigation in the context menu, and this behavior is still annoying. I tried to find a way to change this behavior, but I didn't manage to find an answer quickly. So I decided to take the direct route and wrote a small utility which fixes this issue. This utility watches for the Apps key, and if the key is pressed in Visual Studio, the utility moves the mouse pointer to the top of the screen before opening the context menu. You can find binaries and the source code of this utility here: http://code.google.com/p/vs-ctx-menu-fix/downloads/list This utility works fine on Windows 7 and Windows 8 x64. I wrote the first version in January 2011; now I have just added Visual Studio 2012 support. I hope you will find this utility useful! :)
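
    For readers curious how such a utility can work, a bare-bones sketch of the general idea follows. This is not the source of the utility linked above - just an illustration, in C++/Win32, of a low-level keyboard hook that parks the pointer when the Apps key is pressed while a Visual Studio window is in the foreground. The window-title check and the SetCursorPos(0, 0) target are assumptions made for the example.

        #include <windows.h>
        #include <string>

        // Low-level keyboard hook: runs for every keystroke system-wide.
        static LRESULT CALLBACK KeyboardHook(int code, WPARAM wParam, LPARAM lParam) {
            if (code == HC_ACTION && wParam == WM_KEYDOWN) {
                const KBDLLHOOKSTRUCT* kb = reinterpret_cast<const KBDLLHOOKSTRUCT*>(lParam);
                if (kb->vkCode == VK_APPS) {
                    char title[256] = {};
                    GetWindowTextA(GetForegroundWindow(), title, sizeof(title));
                    // Only react when a Visual Studio window is active (assumed title match).
                    if (std::string(title).find("Microsoft Visual Studio") != std::string::npos) {
                        SetCursorPos(0, 0);  // move the pointer out of the way before the menu opens
                    }
                }
            }
            return CallNextHookEx(nullptr, code, wParam, lParam);
        }

        int WINAPI WinMain(HINSTANCE hInstance, HINSTANCE, LPSTR, int) {
            HHOOK hook = SetWindowsHookExA(WH_KEYBOARD_LL, KeyboardHook, hInstance, 0);
            MSG msg;
            while (GetMessage(&msg, nullptr, 0, 0)) {  // a message loop keeps the hook alive
                TranslateMessage(&msg);
                DispatchMessage(&msg);
            }
            UnhookWindowsHookEx(hook);
            return 0;
        }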

    Read the article

  • Want to make an RTS strategy game, good starting point...?

    - by NOLANDI
    Hi everybody! I found this community while doing research on game development topics, so I decided to register and ask for your opinion. I am not a beginner with computers, and I know the logic of programming languages. I have a little knowledge of C and C++ and have started to learn C#... but I was always oriented toward other areas of IT - an amateur at game making, but not at IT in general! I am interested in making an RTS strategy video game with an isometric camera view (like the PC game Commandos, for example), so I started to read many books about DirectX, XNA, OpenGL, C#, and AI programming... and I want to know what a good starting point is for making this game genre. I want to use C# for this project, not C++, since I always had trouble with the latter because it is too complicated. I see a good opportunity to learn C#, since it is more object-oriented, with much less code, and with code that is easier to write and read. Besides that, I know that C++ is still the more powerful language today. So I want to use DirectX and C# for my project. But it will surely require more time to learn. I have trouble finding books on C# & DirectX (I found just one); in every other book, the programming is in C++! I have read many websites, and they mention SlimDX, and in general the articles I read say that it is possible to make a good game with the combination of C# and DirectX, but... I don't have enough information about all this stuff, so I became confused! It is really hard to want to learn C++ from scratch, since I have started with C# and I really like it... I haven't tried OpenGL, but I am already familiar with DirectX (some beginner stuff) and I think it is good and interesting. So, I decided to choose between, maybe, Unity3D and XNA first. XNA... hmm, it sounds like it is not a bad solution, but I heard that it is already a superseded technology, and I think it is better to hold off on XNA until I try to make something with DirectX and C# first. But I don't know - is it even possible to make a strategy game in XNA? Maybe I am wrong. I also found the Unity 3D game engine, and I see that I could program there in C#, but many of the examples are written in JavaScript (the Digital Tutors videos)... But I generally have the wish to learn it too, because it allows exporting games to mobile phones, there is a lot of documentation, it is free, etc... I want to find the best way to learn C# and DirectX and make this game, but I don't know if it's possible to combine the two. Or maybe it is good to first try Unity and XNA, because I am a beginner?! Thank you all! I am willing to hear about your experience.

    Read the article

  • Camera not working

    - by user17548
    I made a camera in DX9. To move forward I press the Up arrow. To rotate on the Y axis I use the mouse. When I perform these movements on their own, the camera moves at the speed I want. However, if I hold down Up and move the mouse at the same time, the camera moves a lot faster than it should. I want it to move at the same speed as it does when only the Up arrow is pressed. I think I need to normalize something somewhere, but I'm not sure what or where. I have tried various combinations without success, so if anyone can point me in the right direction that would be great. Thanks. My code:

        #define KEY_DOWN(vk_code) ((GetAsyncKeyState(vk_code) & 0x8000) ? 1 : 0)

        LRESULT WINAPI MsgProc( HWND hWnd, UINT msg, WPARAM wParam, LPARAM lParam )
        {
            if( KEY_DOWN(VK_UP) )
                MovePlayer(D3DXVECTOR3(0, 0, -1.0f));
            if( KEY_DOWN(VK_DOWN) )
                MovePlayer(D3DXVECTOR3(0, 0, 1.0f));

            switch( msg )
            {
            case WM_MOUSEMOVE:
                ProcessMouseInput();
            }
        }

        void MovePlayer( D3DXVECTOR3 in_vec )
        {
            D3DXMATRIX CameraRot;
            D3DXMatrixRotationY(&CameraRot, D3DXToRadian(AngleY));
            D3DXVECTOR3 CameraRotTarget;
            D3DXVec3TransformNormal(&CameraRotTarget, &in_vec, &CameraRot);
            CameraPos += (m_timeElapsed * CameraRotTarget);
        }

        void ProcessMouseInput()
        {
            GetCursorPos( &CurrentMouseState );
            if ((CurrentMouseState.x != GameMouseState.x) || (CurrentMouseState.y != GameMouseState.y))
            {
                int dx = CurrentMouseState.x - GameMouseState.x;
                int dy = CurrentMouseState.y - GameMouseState.y;
                AngleY += m_timeElapsed * dx * 7.0f;
            }
            GameMouseState = CurrentMouseState; // Set back to window center in Render function
        }

        VOID UpdateCamera()
        {
            D3DXVECTOR3 CameraOrigTarget(0, 0, -1);
            D3DXVECTOR3 CameraOrigUp(0, 1, 0);
            D3DXMATRIX CameraRot;
            D3DXMATRIX CameraRotX;
            D3DXMatrixRotationX(&CameraRotX, D3DXToRadian(AngleX));
            D3DXMATRIX CameraRotY;
            D3DXMatrixRotationY(&CameraRotY, D3DXToRadian(AngleY));
            CameraRot = CameraRotX * CameraRotY;
            D3DXVECTOR3 CameraRotTarget;
            D3DXVec3TransformNormal(&CameraRotTarget, &CameraOrigTarget, &CameraRot);
            D3DXVECTOR3 CameraTarget;
            CameraTarget = CameraPos + CameraRotTarget;
            D3DXVECTOR3 vUpVec( 0.0f, 1.0f, 0.0f );
            D3DXMatrixLookAtLH( &matView, &CameraPos, &CameraTarget, &vUpVec );
            g_pd3dDevice->SetTransform( D3DTS_VIEW, &matView );
            D3DXMatrixPerspectiveFovLH( &matProj, D3DX_PI / 4, 1.0f, 1.0f, 100.0f );
            g_pd3dDevice->SetTransform( D3DTS_PROJECTION, &matProj );
        }
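
    One detail that may matter here, offered as a suggestion rather than a confirmed diagnosis: because MovePlayer is called from MsgProc, it runs once per window message, and moving the mouse generates a stream of extra WM_MOUSEMOVE messages, so the forward movement gets applied more often. A common way to make movement independent of message frequency is to poll the keys once per frame in the update loop and scale only by elapsed time. The sketch below reuses the poster's KEY_DOWN macro and the AngleY/CameraPos globals; UpdatePlayer is a hypothetical function name.

        // Illustrative sketch: call once per frame (e.g. just before UpdateCamera),
        // so movement depends only on timeElapsed, not on how many messages arrived.
        void UpdatePlayer(float timeElapsed)
        {
            D3DXVECTOR3 move(0.0f, 0.0f, 0.0f);
            if (KEY_DOWN(VK_UP))   move.z -= 1.0f;
            if (KEY_DOWN(VK_DOWN)) move.z += 1.0f;

            if (move.z != 0.0f)
            {
                D3DXMATRIX rotY;
                D3DXMatrixRotationY(&rotY, D3DXToRadian(AngleY));
                D3DXVECTOR3 worldMove;
                D3DXVec3TransformNormal(&worldMove, &move, &rotY);
                CameraPos += timeElapsed * worldMove;  // one movement update per frame
            }
        }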

    Read the article

  • Core i7-620M vs Core i5-540M

    - by Shalan
    Hi, I'm recommending a laptop to a colleague, and the specific laptop he has chosen has the above CPU chips as options. Both chips have 2 cores / 4 threads. The i7-620M is 2.66 GHz (4 MB cache) while the i5-540M is 2.53 GHz (3 MB cache), both Arrandale architecture. He is a .NET programmer working with SQL Server and Oracle, and occasionally uses Adobe Fireworks for web-related design elements. He also loves playing around in Adobe Premiere Pro, and does a lot of media/video work. Would you notice any significant performance difference between the two? The laptop manufacturer claims that the battery life on both is the same irrespective of the chip used (although I find that hard to believe), but there is a major cost difference between them, with the Core i7-620M being the more expensive. According to http://ark.intel.com, the one thing that seems different (besides the obvious speeds/frequencies/etc.) is a feature called "Embedded" - what is this exactly? You can see the quick comparison here - http://ark.intel.com/Compare.aspx?ids=43544,43560 I would sincerely appreciate any advice on this. THANKS!

    Read the article

  • SRMSVC event ID 8197

    - by Godeke
    I have a Windows 2003 R2 machine that is logging an Event ID 8197 about once every hour and ten minutes. The full error is attached below. The machine is primarily used to host IIS web pages and SMTP. There are no known scheduled tasks on the machine. I have read a lot of Google results and Microsoft docs, but none of the suggestions found there have any impact. What I am curious about is whether there is any way to convert SRMVOLMC81 and SRMVOLMC57 into mount point data, so I could at least know where the error is coming from (there are no related errors in the logs, just the 8197 every hour and ten).

        Event Type:     Error
        Event Source:   SRMSVC
        Event Category: None
        Event ID:       8197
        Date:           2/7/2011
        Time:           11:32:21 AM
        User:           N/A
        Computer:       SERVER001
        Description:
        File Server Resource Manager Service error: Unexpected error.
        Error-specific details:
           Error: GetVolumeNameForVolumeMountPoint, 0x80070001, Incorrect function.
        For more information, see Help and Support Center at http://go.microsoft.com/fwlink/events.asp.
        Data:
        0000: 53 52 4d 56 4f 4c 4d 43   SRMVOLMC
        0008: 38 31 00 00 00 00 00 00   81......
        0010: 53 52 4d 56 4f 4c 4d 43   SRMVOLMC
        0018: 35 37 00 00 00 00 00 00   57......

    Read the article

  • High CPU usage by 'svchost.exe' and 'coreServiceShell.exe'

    - by kush.impetus
    I have a laptop running Windows 7 Ultimate 32-bit. For the past few days, my laptop has been facing a serious problem. Whenever I connect to the Internet, either svchost.exe or coreServiceShell.exe, or both, hog the CPU. coreServiceShell.exe consumes a lot of RAM as well. Going into the details, I found that the high CPU usage of svchost.exe is caused by the Network Location Awareness service, and the high CPU usage of coreServiceShell.exe is caused by Trend Micro Titanium Internet Security 2012. That kind of makes me think that Trend Micro may be the root of the problem. After further testing, I found that if I use IE or Firefox to browse the Internet immediately after connecting, things are normal. But if I use Google Chrome, coreServiceShell.exe hogs both CPU and RAM. At this point, if I disconnect from the Internet, the CPU and RAM usage by coreServiceShell.exe continues to be high until I close Chrome. Also, when I close Chrome while the Internet is connected, svchost.exe continues to hog the CPU but coreServiceShell.exe leaves the race. That makes me think that Chrome is the root of the problem, but again, tracing coreServiceShell.exe takes me back to Trend Micro Internet Security. Stopping protection in Trend Micro Internet Security doesn't help either (I am not able to stop its services, though). I have updated Chrome, but no help. I just can't figure out who the culprit is. I can't do without Google Chrome (other than by not using it, of course) because of its immensely useful and indispensable features, both for browsing and development. Secondly, I can't uninstall the Trend Micro Internet Security suite, since it still has a few months before it expires and is providing me reliable protection. What could be the cause of the problem, and what can I do to resolve it? Thanks in advance

    Read the article

  • How to install Konica Minolta PagePro 1300W on Windows 7 64-bit?

    - by jsalonen
    I recently switched to Windows 7 (Home Pro, 64-bit), only to discover that my Konica Minolta PagePro 1300W printer no longer works. When it is connected, Win7 reports that it cannot install a driver for the device. I have done a lot of googling to solve this problem, with no luck so far. On the official Konica Minolta website, I can find drivers only for Windows XP/2000. My current reasoning is that they don't currently support Win7, let alone the 64-bit version of it, for this rather old printer, and most likely never will. So my question is: does anyone have any good tips on how to make this printer work on my system? Is there any other place I could look for drivers, or, in general, do you know any workarounds that could let the printer work? One of the workarounds I have been considering is to install Windows XP or Ubuntu Linux in a VirtualBox VM and use that system when I really, really need the printer. This is of course not the optimal solution, but it would let me use the printer until I buy a newer model.

    Read the article

  • WmiPrvSE memory leak on Windows 2008 *R2*

    - by MichaelGG
    I've seen references to WmiPrvSE leaks on Windows 2008, but nothing about Windows 2008 R2. We're running R2 on top of Hyper-V (2008). We are also running NSClient++ for monitoring from Opsview. Over time, WmiPrvSE.exe starts to use a lot of memory, causing memory alert issues (less than 10% free). The VM has 2 GB; WmiPrvSE consumes up to 500-600 MB before I kill it. Killing the process doesn't seem to have any negative effect; it starts up again and I haven't noticed any problems. But after a day or two, it's back in the same situation. Any ideas on what to do? Resource Monitor doesn't show any disk or network I/O by WmiPrvSE.exe, just slowly climbing private memory... Edited to add: We aren't running clustering, or Windows System Resource Manager. The only regular WMI user I can guess at is NSClient++, but we don't seem to have this problem on other servers.

    Read the article

  • Adaptec 6405 RAID controller turned on a red LED

    - by nn4l
    I have a server with an Adaptec 6405 RAID controller and 4 disks in a RAID 5 configuration. Staff in the data center called me because they noticed a red LED was lit in one of the drive bays. I then checked the status using 'arcconf getconfig 1' and got the status message 'Logical devices/Failed/Degraded: 2/0/1'. The status of the logical devices was listed as 'Rebuilding'. However, I did not get any suspicious status for the affected physical device: the S.M.A.R.T. setting was 'no', the S.M.A.R.T. warnings were '0', and 'arcconf getsmartstatus 1' also returned no problems with any of the disk drives. The 'arcconf getlogs 1 events tabular' command gives lots of output (sorry, I can't paste the log file here as I only have remote console access; I could post a screenshot though). Here are some sample entries:

        eventtype    FSA_EM_EXPANDED_EVENT
        grouptype    FSA_EXE_SCSI_GROUP
        subtype      FSA_EXE_SCSI_SENSE_DATA
        subtypecode  12
        cdb          28 00 17 c4 74 00 00 02 00 00 00 00
        data         70 00 06 00 00 00 00 00 00 00 00 00 02 00 00 00 00 00 00 00 00 00 00 00 00 0

    The 'arcconf getlogs 1 device tabular' command reports mediumErrors 1 for two of the disks. Today I checked the status of the controller again. Everything is back to normal: the controller status is now 'Logical devices/Failed/Degraded: 2/0/0', and the logical devices are all back to 'Optimal'. I was not able to check the LED status; my guess is that the red LED is off again. Now I have a lot of questions: What is a possible cause of the medium errors, and why are they not reported by the SMART log too? Should I replace the disk drives? They were purchased just a month ago. The rebuilding process took one or two days - is that normal? The disks are 2 TB each and the storage system is mostly idle. The timestamps in the logs seem to show the moment of log retrieval, not the moment of the incident. Please advise; all help is very much appreciated.

    Read the article

  • Internet Explorer 8 crashes on Citrix, Windows 2003

    - by Workshop Alex
    It's difficult to decide where this question fits best. It's server-related and programming-related, but it's a user problem, so I'll put it here first... I work on a (Delphi) application that uses an Internet Explorer component to show information to the user. It's not a web application, just a desktop application which creates HTML pages to display them within a browser component. Some of the information on these web pages is retrieved from a web server, while other information is provided "live" by the application itself. It works quite well, but it adds an IE process (child process) next to my application, and this IE process seems to eat a lot of system resources. For normal users this is not a real problem, so it's not an issue that I want to fix in the code. But one customer uses this application with about 100 users in a Citrix/Windows 2003 environment, and they complain about problems with the application. IE8 tends to crash, not show, hang, or cause other mayhem. Then again, I've warned them that, officially, I won't support any Citrix environment. But I'm willing to help them find a solution to fix this, and if need be I could make minor changes to my code to help fix this issue (if possible). Basically, though, I need a solution that any user/administrator in this Citrix environment can follow or use. Any ideas on how to resolve this resource problem?

    Read the article

  • Convert Custom Firefox Setup to Firefox Portable?

    - by dfree
    I have a pretty awesome Firefox setup and spent a lot of time getting it perfect. Does anyone know of a way to convert the entire configuration to the portable edition? Programs like MozBackup are great for backing up the complete setup, but you can't restore a Firefox profile to Firefox Portable (maybe there is a workaround to fake it out? or possibly another method?). In case anyone is interested, here is the gist of the best add-ons I've found:

        - Autopager (scroll down Google and other multi-page results without clicking next)
        - Coral IE Tab (IE in Firefox - in case a website 'insists' that you use IE)
        - Cyber search (search Google straight from the address bar - VERY HELPFUL)
        - Download StatusBar (displays download progress at the bottom of Firefox - no annoying popups)
        - FireFTP (removes the need for an external FTP client - opens in a tab)
        - Gmail manager (if you use multiple Gmail accounts)
        - Session Manager (saves multiple sessions of tabs - Firefox session recovery)
        - Surf Canyon (pulls relevant stuff out of the depths of search results - even from craigslist)
        - Tab Mix Plus (ESSENTIAL - tab behavior customization - have multiple rows of tabs)

    I also have it set up so you can type 'g test' in the address bar and Firefox will pull up the Google results for 'test'. Similarly, I have it set up for guitar tabs (tab), facebook (f), wikipedia (w), google maps from my house (gmhome), torrents (tor), ticketmaster (t), rotten tomatoes (rt), craigslist (c), plus about 20 other sites.

    Read the article

  • Cisco: Site-to-site VPN with cisco 878 and ASA weirdness

    - by cpf
    I currently have 2 sites connected to each other through 2 firewalls/routers in a site-to-site VPN. Pinging from server to server (using the 2 Mb/2 Mb SDSL) through that VPN obviously works. However, at one site we have another internet connection (7 Mb/400 kb ADSL), and only the link between the two sites should stay on the SDSL connection: all PCs should go over the ADSL connection for internet, and only communication between the servers, plus communication between PCs and the server at the other site, should go through the VPN. What is configured at the moment: the server uses the SDSL directly as its default gateway (since it's not intended to surf anything, that is a safe config), and the PCs are configured with the ADSL as their default gateway. Now I wanted to route everything that uses the address range of the other site from the ADSL router to the SDSL router, which has the VPN connection. I figured I could use OSPF to do so; however, OSPF doesn't seem to "detect" the range of the external site. Also (due to bad IP subnetting, thanks to the other administrator), the IP used internally by the server on the other site also exists on the internet (causing a lot of confusion), so RDP-ing from our server to the server at the other site works (somehow), but a traceroute on the SDSL router (which should, in my opinion, go over the VPN) actually goes out over the internet. My questions: Why doesn't the SDSL router ping the external IP through the VPN, while the server does? Why can't I route from the ADSL router to the SDSL router over the VPN? I would seriously appreciate some help, since I can't figure out why it behaves like this.

    Read the article
