Search Results

Search found 12047 results on 482 pages for 'general debugging tidbits'.

Page 393 of 482

  • C# trying to capture the KeyDown event on a form

    - by Patrick
    Hello! I am creating a small game that is drawn onto a panel on a Windows Form. I want to capture the KeyDown event to see whether one of the arrow keys has been pressed. The problem, however, is that I can't seem to capture it. Let me explain: the form has four buttons and various other controls, and if the user presses one of the buttons (to trigger a game event), that button takes focus and I can no longer capture the movements with the arrow keys. I tried something like this:

        private void KeyDown(KeyEventArgs e)
        {
            if (e.KeyCode == Keys.Left)
            {
                game.MovePlayer(DonutWarsLibrary.GameObjects.Direction.E);
                game.DrawObjects(panel1.CreateGraphics());
            }
            else if (e.KeyCode == Keys.Right)
            {
                game.MovePlayer(DonutWarsLibrary.GameObjects.Direction.W);
                game.DrawObjects(panel1.CreateGraphics());
            }
            else if (e.KeyCode == Keys.Up)
            {
                game.MovePlayer(DonutWarsLibrary.GameObjects.Direction.N);
                game.DrawObjects(panel1.CreateGraphics());
            }
            else if (e.KeyCode == Keys.Down)
            {
                game.MovePlayer(DonutWarsLibrary.GameObjects.Direction.S);
                game.DrawObjects(panel1.CreateGraphics());
            }
        }

    and wired it up from the form's KeyDown event:

        private void MainForm_KeyDown(object sender, KeyEventArgs e)
        {
            KeyDown(e);
        }

    I also added KeyDown handlers for the buttons and the various other controls on the form, but I am not getting any response. I set a breakpoint inside the method to see if it is being called, but the breakpoint never triggers. Any ideas? Ideally I would have one general KeyDown event that fires regardless of which control currently has focus, and then calls the KeyDown method.
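
    One commonly suggested fix, sketched below as a hedged example rather than a verified answer: arrow keys are treated as navigation keys in WinForms and are consumed before a control's KeyDown ever fires (which would explain the breakpoint never being hit), so overriding ProcessCmdKey on the form intercepts them regardless of which control has focus. The HandleArrowKey helper is hypothetical, standing in for the game logic above:

        // Override on the form; this fires for command/navigation keys such as
        // the arrows no matter which child control currently has focus.
        protected override bool ProcessCmdKey(ref Message msg, Keys keyData)
        {
            if (keyData == Keys.Left || keyData == Keys.Right ||
                keyData == Keys.Up || keyData == Keys.Down)
            {
                HandleArrowKey(keyData);  // hypothetical helper wrapping game.MovePlayer(...)
                return true;              // report the key as handled
            }
            return base.ProcessCmdKey(ref msg, keyData);
        }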

    Read the article

  • open source business intelligence solutions

    - by opensas
    Which open source business intelligence solution would you recommend? All I need is to build some cubes and let the end user play with dimensions, filter data, sort, etc., and once that's done, to be able to export the results to Excel. I'd like the solution to be as simple and as easy on resources as possible, and also as open source as possible; by the way, I've heard that many of the available solutions have quite a few restrictions in their community versions. I'd like to hear your advice and the pros/cons of each alternative, to help me choose the right tool, and it would help if you could point me to a basic demo and tutorial to get started. Thanks a lot. PS: I'm using SQL Server databases; they aren't huge (in general less than a million records) and I don't necessarily have to work on "live" data. PS: some useful links:

    http://en.wikipedia.org/wiki/Business_intelligence_tools#Open_source_free_products
    http://www.manageability.org/blog/stuff/open-source-java-business-intelligence
    http://www.jaspersoft.com/jasperanalysis
    http://community.pentaho.com/projects/bi_platform/
    http://community.pentaho.com/faq/platform_licensing.php
    http://www.eclipse.org/birt/phoenix/
    http://www.spagoworld.org/xwiki/bin/view/SpagoWorld/
    http://docs.google.com/viewer?a=v&q=cache:vhsqMQXwCUkJ:www.ow2.org/xwiki/bin/download/Activities/EuropeLocalChapterWebinars/ELCWebinarOSBI.pdf+open+source+business+intelligence&hl=en&pid=bl&srcid=ADGEESgpJJ2MqaKprJQOF2jX2UXCZQjg_asv8d7EVYtq0Vma-e-tR1tFxS-I0SOW0IhJC5acYc94rkDOrgP1WckCp_vk4qhKqR9y2Klp_u9cL8hlXoKoUpMkpAd5wabu61A4W0y15E5P&sig=AHIEtbRJ5FAI-3YK-qtayPjKkF_CwOgZag

    Read the article

  • Why does Clojure hang after having performed my calculations?

    - by Thomas
    Hi all, I'm experimenting with filtering elements in parallel. For each element, I need to perform a distance calculation to see whether it is close enough to a target point. Never mind that data structures already exist for doing this; I'm just doing initial experiments for now. Anyway, I wanted to run some very basic experiments where I generate random vectors and filter them. Here's my implementation that does all of this:

        (defn pfilter [pred coll]
          (map second
            (filter first
              (pmap (fn [item] [(pred item) item]) coll))))

        (defn random-n-vector [n]
          (take n (repeatedly rand)))

        (defn distance [u v]
          (Math/sqrt (reduce + (map #(Math/pow (- %1 %2) 2) u v))))

        (defn -main [& args]
          (let [[n-str vectors-str threshold-str] args
                n (Integer/parseInt n-str)
                vectors (Integer/parseInt vectors-str)
                threshold (Double/parseDouble threshold-str)
                random-vector (partial random-n-vector n)
                u (random-vector)]
            (time
              (println n vectors
                (count (pfilter (fn [v] (< (distance u v) threshold))
                                (take vectors (repeatedly random-vector))))))))

    The code executes and returns what I expect: the parameter n (the length of the vectors), vectors (the number of vectors), and the number of vectors that are closer than the threshold to the target vector. What I don't understand is why the program hangs for an additional minute before terminating. Here is the output of a run that demonstrates the problem:

        $ time lein run 10 100000 1.0
        [null] 10 100000 12283
        [null] "Elapsed time: 3300.856 msecs"
        real    1m6.336s
        user    0m7.204s
        sys     0m1.495s

    Any comments on how to filter in parallel in general are also more than welcome, as I haven't yet confirmed that pfilter actually works.

    Read the article

  • video streaming infrastructure

    - by alchemical
    We would like to set up a live video-chat web site and are looking for basic architectural advice and/or a recommendation for a particular framework to use. Here are the basic features of the site: most streams will be broadcast live by a single person with a web cam, etc., and viewed by typically 1-10 people, although there could be up to 100+ viewers on the high side. Audio and video do not have to be super-high quality, but they do need to be "good enough". The main point is to convey the basic information in the video (and audio). If the frame rate occasionally drops low and then returns to normal fairly soon, we can live with that. Budget is an issue, so we are in general looking for a lower-cost solution that gives us most of what we need in terms of performance and quality. We are looking at Peer1 for co-location. The rest of our web site will be on the .NET/Windows platform. We are open to looking at any platform for the best streaming solution, although our technical expertise is currently more on the Windows side.

    Read the article

  • Why use SQL database?

    - by martinthenext
    I'm not quite sure Stack Overflow is the place for such a general question, but let's give it a try. Whenever I've needed to store application data somewhere, I've always used MySQL or SQLite, just because that's how it's always done: the whole world seems to use these databases in most software products, frameworks, etc. It is rather hard for a beginning developer like me to ask the question: why? OK, say we have some object-oriented logic in our application, and the objects are related to each other somehow. We need to map this logic to the storage logic, so we need relations between database objects too. This leads us to using a relational database, and I'm OK with that; to put it simply, our database rows will sometimes need references to rows in other tables. But why use the SQL language for interacting with such a database? An SQL query is a text message. I can understand that this is convenient for actually understanding what it does, but isn't it silly to use text table and column names for a part of the application that no one ever sees after deployment? If you had to write a data store from scratch, you would never have used this kind of solution. Personally, I would have used some 'compiled db query' bytecode that would be assembled once inside the client application and passed to the database. And it would surely refer to tables and columns by id numbers, not ASCII strings. In the case of changes to the table structure, those byte queries could be recompiled according to the new db schema, stored in XML or something like that. What are the problems with my idea? Is there any reason for me not to write it myself and to use an SQL database instead?
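
    A hedged aside that may sharpen the discussion: parameterized, prepared statements already provide part of the 'compile once' idea described above; the statement text is parsed and planned once, and subsequent executions reuse the plan by handle. A minimal ADO.NET sketch (the Orders table and its columns are hypothetical):

        using System.Data.SqlClient;  // classic ADO.NET provider

        static void InsertOrders(string connectionString)
        {
            using (var conn = new SqlConnection(connectionString))
            {
                conn.Open();
                var cmd = new SqlCommand(
                    "INSERT INTO Orders (CustomerId, Total) VALUES (@customerId, @total)",
                    conn);
                cmd.Parameters.Add("@customerId", System.Data.SqlDbType.Int);
                cmd.Parameters.Add("@total", System.Data.SqlDbType.Money);
                cmd.Prepare();  // parsed and planned once on the server

                for (int i = 0; i < 100; i++)
                {
                    cmd.Parameters["@customerId"].Value = i;
                    cmd.Parameters["@total"].Value = 9.99m * i;
                    cmd.ExecuteNonQuery();  // each call reuses the prepared plan
                }
            }
        }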

    Read the article

  • How to properly implement cheat codes?

    - by Axarydax
    Hi, what would be the best way to implement cheat codes in general? I have a WinForms application in mind, where a cheat code would unlock an easter egg, but the implementation details are not relevant. The best approach that comes to my mind is to keep an index for each code. Let's consider the famous DOOM codes, IDDQD and IDKFA, in a fictional C# app:

        string[] CheatCodes = { "IDDQD", "IDKFA" };
        int[] CheatIndexes = { 0, 0 };
        const int CHEAT_COUNT = 2;

        void KeyPress(char c)
        {
            for (int i = 0; i < CHEAT_COUNT; i++) // for each cheat code
            {
                if (CheatCodes[i][CheatIndexes[i]] == c)
                {
                    // we have hit the next key in the sequence
                    if (++CheatIndexes[i] == CheatCodes[i].Length) // are we at the end?
                    {
                        // do the cheat work
                        MessageBox.Show(CheatCodes[i]);
                        // reset the cheat index so it can be entered again
                        CheatIndexes[i] = 0;
                    }
                }
                else // mistyped, reset cheat index
                    CheatIndexes[i] = 0;
            }
        }

    Is this the right way to do it?
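
    One hedged refinement worth sketching: the index-per-code approach above misses a retry that begins with the code's own first letter (typing "IIDDQD" silently fails to unlock IDDQD), so on a mismatch it helps to check whether the failed key restarts the sequence. A sketch reusing the asker's fields (a fully general solution would use KMP-style failure links, which is overkill for a handful of cheat codes):

        void KeyPressFixed(char c)
        {
            for (int i = 0; i < CheatCodes.Length; i++)
            {
                if (CheatCodes[i][CheatIndexes[i]] == c)
                {
                    // hit the next key in the sequence
                    if (++CheatIndexes[i] == CheatCodes[i].Length)
                    {
                        MessageBox.Show(CheatCodes[i]);  // cheat unlocked
                        CheatIndexes[i] = 0;
                    }
                }
                else
                {
                    // mistyped: restart at 1 if this key matches the first
                    // character of the code, otherwise restart at 0
                    CheatIndexes[i] = (CheatCodes[i][0] == c) ? 1 : 0;
                }
            }
        }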

    Read the article

  • How do you remove invalid hexadecimal characters from an XML-based data source prior to constructing

    - by Oppositional
    Is there an easy/general way to clean an XML-based data source prior to using it in an XmlReader, so that I can gracefully consume XML data that does not conform to the hexadecimal character restrictions placed on XML? Note: the solution needs to handle XML data sources that use character encodings other than UTF-8, e.g. by specifying the character encoding in the XML document declaration. Not mangling the character encoding of the source while stripping invalid hexadecimal characters has been a major sticking point. The removal of invalid hexadecimal characters should only remove hexadecimal-encoded values, as you can often find href values in the data that happen to contain a string that would match a hexadecimal character reference. Background: I need to consume an XML-based data source that conforms to a specific format (think Atom or RSS feeds), but I want to be able to consume data sources that have been published containing invalid hexadecimal characters per the XML specification. In .NET, if you have a Stream that represents the XML data source and attempt to parse it using an XmlReader and/or XPathDocument, an exception is raised due to the inclusion of invalid hexadecimal characters in the XML data. My current attempt to resolve this issue is to parse the Stream as a string and use a regular expression to remove and/or replace the invalid hexadecimal characters, but I am looking for a more performant solution.
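
    A hedged sketch of one targeted approach, assuming the stream has already been decoded to a string with its declared encoding (which sidesteps the encoding-mangling concern rather than solving it): strip only numeric character references whose code point falls outside the XML 1.0 Char production, leaving literal text such as href values untouched. XmlConvert.IsXmlChar requires .NET 4; on earlier frameworks the character ranges from the XML spec can be inlined:

        using System;
        using System.Text.RegularExpressions;
        using System.Xml;

        static string StripInvalidCharRefs(string xml)
        {
            // matches &#xHH; and &#DD; numeric character references only
            return Regex.Replace(xml, @"&#(?:x(?<hex>[0-9A-Fa-f]+)|(?<dec>[0-9]+));", m =>
            {
                int cp = m.Groups["hex"].Success
                    ? Convert.ToInt32(m.Groups["hex"].Value, 16)
                    : int.Parse(m.Groups["dec"].Value);
                // keep valid references, drop the rest (supplementary planes
                // above 0xFFFF would additionally need a surrogate-pair check)
                bool valid = cp <= char.MaxValue && XmlConvert.IsXmlChar((char)cp);
                return valid ? m.Value : string.Empty;
            });
        }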

    Read the article

  • Accessing class member variables inside a BackgroundWorker's DoWork event handler, and other Backgro

    - by Justin
    Question 1: In the DoWork event handler of a BackgroundWorker, is it safe to access (for both reading and writing) member variables of the class that contains the BackgroundWorker? Is it safe to access other variables that are not declared inside the DoWork event handler itself? Obviously DoWork should not access any UI objects of, say, a WinForms application, as the UI should only be updated from the UI thread. But what about accessing other (non-UI) member variables? The reason I ask is that I've seen the occasional comment come up while Googling saying that accessing member variables is not allowed. The only example I can find at the moment is a comment on this MSDN page, which says:

        Note, that the BGW can cause exceptions if it attempts to access or modify class level variables. All data must be passed to it by delegates and events.

    And also:

        NEVER. NEVER. Never try to reference variables not declared inside of DoWork. It may seem to work at times, but in reality you are just getting lucky.

    As far as I know, MSDN itself does not document any restrictions of this kind (although if I'm wrong, I'd appreciate a link). But comments like these do seem to pop up every now and again. (Of course, if DoWork does access/modify a member variable that could be accessed/modified by the main thread at the same time, it is necessary to synchronise access to that field, e.g. by using a locking object. But the above quotes seem to demand a blanket ban on accessing member variables, rather than just synchronised access!)

    Question 2: To make this a more general question, are there any other (undocumented?) restrictions that users of the BackgroundWorker should be aware of, aside from the above? Any "best practices", perhaps?
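
    A hedged aside on the synchronisation the asker already mentions: the usual pattern is a private lock object guarding the shared field, with both the UI thread and the DoWork handler taking the same lock. A minimal sketch (the _sync and _count fields are hypothetical; System.ComponentModel's BackgroundWorker is assumed):

        private readonly object _sync = new object();
        private int _count;

        private void worker_DoWork(object sender, DoWorkEventArgs e)
        {
            for (int i = 0; i < 1000; i++)
            {
                lock (_sync)        // same lock object the UI thread uses
                {
                    _count++;       // only one thread mutates at a time
                }
            }
        }

        // the UI thread reads under the same lock:
        private int ReadCount()
        {
            lock (_sync) { return _count; }
        }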

    Read the article

  • using Autofac in a multi-layered architecture

    - by Kamyar
    I'm fairly new to the DI/IoC concept and would like to use Autofac in a 3-layered ASP.NET WebForms application.

      UI layer: an ASP.NET WebForms web site.
      BLL: a business logic layer which calls the repositories in the DAL.
      DAL: an .EDMX file (entity model) and an ObjectContext with repository classes which abstract the CRUD operations for each entity.
      Entities: the POCO entities, persistence-ignorant, generated by Microsoft's ADO.NET POCO Entity Generator.

    I have asked a more general question here. Basically, I'd like to create an ObjectContext per HttpContext in my DAL, but I don't want to add a reference to the DAL in the UI, or access HttpContext in the DAL directly. I guess this is where IoC tools come into play. The answer to my previous question is a very good example of using Castle Windsor. I'd like to use Autofac as my IoC tool and don't know how to achieve this. (How do I access the DAL in Application_Start to register the component when I don't want to reference it in my UI? What are the proper references to be able to use the DAL component in the BLL with Autofac? Should I register the BLL as a component with Autofac too?) Sorry folks for not providing an explicit question and requesting a kind of working example, but I'm very unfamiliar with the whole IoC concept and I don't think I can master it within my current time-limited project.
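
    A hedged sketch of one common wiring, not a verified recipe: the composition root (Global.asax's Application_Start) is conventionally the one place that references every layer, so the pages themselves depend only on BLL interfaces. All type names below (MyEntities, IMyObjectContext, etc.) are hypothetical, and Autofac's lifetime API differs across versions (older releases spell the per-request lifetime HttpRequestScoped() where newer ones use InstancePerRequest()):

        // Global.asax.cs - the composition root; the only place that
        // references UI, BLL and DAL together.
        protected void Application_Start(object sender, EventArgs e)
        {
            var builder = new ContainerBuilder();

            // DAL: one ObjectContext per HTTP request
            builder.RegisterType<MyEntities>()
                   .As<IMyObjectContext>()
                   .InstancePerRequest();   // HttpRequestScoped() on older Autofac

            // DAL repositories, registered against their interfaces
            builder.RegisterType<CustomerRepository>().As<ICustomerRepository>();

            // BLL services are registered too, so pages can resolve them
            builder.RegisterType<CustomerService>().As<ICustomerService>();

            _containerProvider = new ContainerProvider(builder.Build());
        }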

    Read the article

  • Invalid iPhone Application Binary

    - by Kristopher Johnson
    I'm trying to upload an application to the iPhone App Store, but I get this error message from iTunes Connect:

        The binary you uploaded was invalid. The signature was invalid, or it was not signed with an Apple submission certificate.

    My guess is that it is not properly signed. I have downloaded my App Store distribution certificate, but I can't figure out how to "sign" my application with it. The SDK's documentation about code signing is not very helpful. (FWIW, I can install the app on my iPhone just fine using the development provisioning profile.) However, it is possible that I screwed things up on a more basic level. Here's what I did to try to prepare it for upload:

      1. In Xcode, select the Device|Release target.
      2. Select the target and click the Info button. Change "Code Signing Identity" to "iPhone Distribution", and change "Code Signing Provisioning Profile" to my App Store distribution profile.
      3. Build.
      4. Go to the directory containing the built MyApp.app bundle, control-click it and choose "Compress" to create MyApp.zip.
      5. Upload MyApp.zip to the App Store via iTunes Connect (which resulted in the above error message).

    Can anybody give me any hints? Edit: I found someone with the same problem. Unfortunately, he won't tell us how he fixed it. http://www.rhonabwy.com/wp/2008/07/18/seattlebus-diary-ongoing-update-saga/#comments http://www.rhonabwy.com/wp/2008/07/22/seattlebus-diary-update-is-pending-review/ (Note: for general information on submitting iPhone applications to the App Store, see Steps to upload an iPhone application to the AppStore.)

    Read the article

  • Need help choosing database server

    - by The Pretender
    Good day everyone. Recently I was given the task of developing an application to automate some aspects of stock trading. While working on the initial architecture, a database dilemma emerged. What I need is a fast database engine which can process huge amounts of data coming in very fast. I'm fairly experienced in general programming, but I have never faced the task of developing a high-load database architecture. I developed a simple MSSQL database schema with several many-to-many relationships during one of my projects, but that's it. What I'm looking for is some advice on choosing the most suitable database engine, and some pointers to manuals or books which describe high-load database development. Specifics of the project are as follows:

      OS: Windows NT family (Server 2008 / 7)
      Primary platform: .NET with C#
      Database structure: one table to hold primary items, and two or three tables with foreign keys to the first table to hold additional information
      SELECT requirements: super-fast selection by foreign keys and by a combination of a foreign key and one of the columns (presumably DATETIME)
      INSERT requirements: the faster the better :)

    If there would be a significant performance gain, some parts can be written in C++ with managed interfaces to the rest of the system. So once again: given all of the above, please give me some advice on the best database for my project. Links or references to manuals and books on the subject are also greatly appreciated. EDIT: I'll need to insert 3-5 rows into 2 tables approximately once every 30-50 milliseconds, and I'll need to run SELECT queries with 0-2 WHERE clauses at a similar rate.

    Read the article

  • svn import, don't modify revision OR modify the list of files in a transaction

    - by Vaughan Durno
    Hi, I've gained so much knowledge/insight from this site in the past few years; now I'm actually hoping for some enlightenment. The scenario is as follows: you have the general structure of the repo (trunk, branches, tags), but added to the layout you have another directory called 'db_revs'. Now, in the pre-commit hook, you take a dump of a specific database (the specifics are irrelevant) into a temporary file, say /tmp/REV.sql (REV being the HEAD revision number of the repo, or the transaction). OK, all is well, and you can just import that temp file into the repo at /db_revs/REV.sql. Now obviously that import, even though it happens during a commit, increments the revision of the repo. So when you then commit, say, 'test.php' in the trunk and it completes at revision 159, the pre-commit hook runs as it should and the DB dump gets imported, but you are left with a tree in the repo browser where 'trunk' is at revision 159 while 'db_revs', which holds the imported dump, is at 158 (I've made the filename match the revision, i.e. 159.sql, but that file is then at revision 158). NB: if you're doing an import in a pre-commit hook, you need to add some logic not to perform the import recursively, say by first checking for the existence of the temp file; otherwise it will cause, um, a stack overflow and your PC will quickly crawl to a standstill. So I wanted to know whether it is possible to make an import not commit its changes. I realise I might be barking up the wrong tree to begin with, so I have another idea, which brings me to the second part of my question: would it be possible to modify the list of files that the transaction is about to commit to the repo? I know this can be done to a working copy, but that won't help, as a working copy is a checked-out copy of, say, the trunk, so I'm not sure how you would add a file to the 'db_revs' folder, which is above trunk. Any help is greatly appreciated. Cheers, Vaughan

    Read the article

  • What is the relationship between a Turing machine and a modern computer?

    - by smwikipedia
    I have often heard that modern computers are based on the Turing machine, but I just cannot build a bridge from a conceptual Turing machine to a real modern computer. Could someone help me build this bridge? Below is my current understanding. I think the computer is a big general-purpose Turing machine, and each program we write is a small special-purpose Turing machine. The classical Turing machine does its job based on its input and its current internal state, and so do our programs. Let's take a running program (a process) as an example. We know that in the process's address space there are areas for the stack, the heap, and the code. A classical Turing machine doesn't have the ability to remember many things, so we borrow the concept of a stack from the push-down automaton. The heap and stack areas contain the state of our special-purpose Turing machine (our program); the code area represents the logic of this small Turing machine. And various I/O devices supply input to this Turing machine.
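
    To make the "program as a small Turing machine" picture concrete, a hedged illustration (not any particular textbook formalism): a machine is just a tape, a head, a state, and a transition table keyed on (state, symbol). The table plays the role of the code area; the tape and head position are the mutable state. This one merely inverts a binary string, to keep the table tiny:

        using System;
        using System.Collections.Generic;

        class TuringMachine
        {
            // (state, read symbol) -> (write symbol, head move, next state)
            static readonly Dictionary<(string, char), (char, int, string)> Rules = new()
            {
                { ("scan", '0'), ('1', +1, "scan") },
                { ("scan", '1'), ('0', +1, "scan") },
                { ("scan", '_'), ('_',  0, "halt") },  // '_' is the blank symbol
            };

            static void Main()
            {
                var tape = new List<char>("1011_");   // tape contents; '_' ends it
                int head = 0;
                string state = "scan";

                while (state != "halt")
                {
                    var (write, move, next) = Rules[(state, tape[head])];
                    tape[head] = write;   // the table is the "code area"
                    head += move;         // tape + head + state are the "data"
                    state = next;
                }
                Console.WriteLine(new string(tape.ToArray()));  // prints 0100_
            }
        }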

    Read the article

  • Rails : fighting long http response times with ajax. Is it a good idea? Please, help with implementa

    - by baranov
    Hi, everybody! I've googled some tutorials and browsed some SO answers, but was unable to find a recipe for my problem. I'm writing a web site which is supposed to display an almost-realtime stock chart. Data is stored in a constantly updating MySQL database, and I wrote a find_by_sql query that fetches all the data I need to get my chart drawn. Everything is OK, except performance: it takes from one second to one minute for different queries to fetch all the data from the database, and this time includes the necessary (My)SQL-server-side calculations. This is simply unacceptable. I got the following idea: if the data is queried from the MySQL server one point at a time instead of as an entire dataset, it takes only about 1-100 ms to get an individual point. I imagine the data-fetch process might be browser-driven. After the user presses the button to get a chart drawn, the controller makes one request to the database and renders, say, a progress bar at "1% ready". When the browser gets the response, it immediately makes an (ajax) request, the server fetches the next piece of data, and renders "2%". And so on, until all the data is ready and the server displays the requested chart. Could this be implemented in Rails + JS, and is there a tutorial for solving a similar problem on the web? I suppose if the thing is feasible at all, somebody has already done it before. I have read several articles about ajax, and I believe I understand the general principles, but I have never done nontrivial ajax programming myself. Thanks for your time!

    Read the article

  • Can I create template-based library objects in Dreamweaver CS5?

    - by Danjah
    At work we need two 'streams' of templates. The first are general layout templates, like the ones already available in the MX through CS5 packages (except we'd have our own customised ones). The second are more granular objects, some of which are functional. In both cases, I don't want Jimmy to be able to wreak havoc inside anything other than the 'editable regions' which make up the templates. Now, this is fine if I stick with the first scenario (layout templates), where there's simply a big chunk of editable region for good ole Jim to sprawl into; think of this as the 'body content' area. But I really need these granular library (or snippet) objects to work in the same way. Unfortunately, with my attempts so far they don't work as I'd have thought - perhaps for good reason? When I create a blank template and throw in my chunk of HTML (unobtrusive JS and external CSS use selectors in this HTML to provide style and function) and save it as a new library item or snippet, all looks well. Then I create a new document based on a layout template and save it as a plain HTML file (still all good so far). Next I drop in my custom library item... still all good... but then I go to save the document and it only allows me to save it as a new template! I expected it would just allow me to save it as HTML and simply respect the defined editable regions, as happens in the containing page's 'body content' editable region. Apologies if that got specific and technical quite quickly, but it is quite particular. If you want some example files, let me know and I'll zip some up. Many thanks :) P.S. It is not a requirement that library objects somehow inject their dependency files into the newly created page - I already know what they'll be. Also, I know I must 'detach from original' once I drop a library item into a document, which then allows customisation of the library object.

    Read the article

  • PHP - Database schema: version control, branching, migrations.

    - by Billiam
    I'm trying to come up with (or find) a reusable system for database schema versioning in PHP projects. There are a number of Rails-style migration projects available for PHP; http://code.google.com/p/mysql-php-migrations/ is a good example. It uses timestamps for migration files, which helps with conflicts between branches. The general problem with this kind of system: when development branch A is checked out and you want to check out branch B instead, B may have new migration files. This is fine; migrating to newer content is straightforward. If branch A has newer migration files, you would need to migrate downwards to the nearest shared patch. If branch A and B have significantly different code bases, you may have to migrate down even further. This may mean: check out B, determine the shared patch number, check out A, migrate downwards to this patch. This must be done from A, since the applied patches are not available in B. Then check out branch B and migrate up to B's newest patch. The process reverses again when going from B to A. My proposed system: when migrating upwards, instead of just storing the patch version, serialize the whole patch in the database for later use (though I'd probably only need the down() method). When changing branches, compare the patches that have been run against the patches available in the destination branch. Determine the nearest shared patch (or oldest difference, maybe) between the db table of applied patches and the patches in the destination branch, by ID or hash. Also look for new or missing patches buried under a number of shared patches between the two branches. Automatically migrate down to the nearest shared patch using the down() methods stored in the db table, and then migrate up to the branch's latest patch (see the sketch after this post). My question is: is this system too crazy and/or fraught with consequences to bother developing? My experience with database schema versioning is limited to PHP autopatch, which is an up()-only system requiring filenames with sequential IDs.
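
    For concreteness, a hedged sketch of the branch-switch calculation described above (written in C# like this digest's other sketches, though the asker's project is PHP): find the longest shared prefix of the applied-patch list and the target branch's patch list, roll back the divergent applied patches newest-first using their stored down() bodies, then apply the target branch's remaining patches:

        using System.Collections.Generic;

        static class MigrationPlanner
        {
            // Returns (patchesToRollBack, patchesToApply); both inputs are
            // assumed ordered oldest-to-newest and identified by ID or hash.
            public static (List<string> down, List<string> up) Plan(
                List<string> applied, List<string> target)
            {
                int shared = 0;
                while (shared < applied.Count && shared < target.Count
                       && applied[shared] == target[shared])
                    shared++;

                var down = applied.GetRange(shared, applied.Count - shared);
                down.Reverse();  // roll back newest-first, using the down()
                                 // bodies serialized when each patch was applied
                var up = target.GetRange(shared, target.Count - shared);
                return (down, up);
            }
        }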

    Read the article

  • How do I begin reading source code?

    - by anonnoir
    I understand the value of reading source code, and I am trying my best to read as much as I can. However, every time I try getting into a 'large' (i.e. complete) project of some sort, I am overwhelmed. For example, I use Anki a lot when revising languages. Also, I'm interested in getting to know how an audio player works (because I have some project ideas), hence quodlibet on Google Code. But whenever I open the source code folders of the above programs, there are just so many files that I don't know where or what to begin with. I think I should start with the files named __init__.py, but I can't see the logical structure of the programs, or what reasoning the original writer applied when he divided his modules the way he did. Hence, my questions:

      1. How/where should I begin reading source? Any general tips or ideas?
      2. How does a programmer keep in mind the overall structure and logic of the program, especially for large projects, and is it common not to document that structure?
      3. As an open source reader, must I look through all of the code to get a bird's-eye view of the code and libraries before even being able to proceed?
      4. Would an IDE like the Eclipse SDK (with PyDev) help with code reading?

    Thanks for the help; I really appreciate your helping me.

    Read the article

  • If I wanted to make a Pac-Man Game?

    - by SoulBeaver
    I am immediately marking this as a community wiki thing. I don't want to ask for help with programming yet, or even ask a specific question about programming, but rather ask about the process and the resources needed to make such a game. To put it simply: my college friend and I decided to set ourselves a really big challenge to further our skills in programming. In six months' time we want to show ourselves a Pac-Man game. Pac-Man will be AI-controlled like the ghosts, and whichever Pac-Man lives the longest after a set of tries wins. This isn't like anything we've done so far. The goal here, for me, isn't to create a perfect game, but to try to complete it and learn a whole bunch in the process. Even if I don't finish in time, which is a real possibility, I would want to have at least tried. So my question is this: how should I start preparing myself? I have already started on vector math, matrices, and all that fun stuff. My desired platform would be DirectX 9.0c; is that advisable? Keep in mind that this is not a preference just for this project: I wish to have some kind of future in graphics development, so I want to pick a platform that is future-safe. As for game development in general, what should I take into consideration? I have never made a real game before, so any and all advice on the development of mid-scale projects (if this would be a mid-scale project) is greatly appreciated. My main concerns are the pitfalls and demotivators. Sorry if the question is so vague. If it doesn't belong here, I will remove it. Otherwise, any and all advice regarding making larger projects is greatly appreciated.

    Read the article

  • How best to use XPath with very large XML files in .NET?

    - by glenatron
    I need to do some processing on fairly large XML files (large here being potentially upwards of a gigabyte) in C#, including performing some complex XPath queries. The problem I have is that the standard way I would normally do this, through the System.Xml libraries, likes to load the whole file into memory before it does anything with it, which can cause memory problems with files of this size. I don't need to update the files at all, just read them and query the data contained in them. Some of the XPath queries are quite involved and go across several levels of parent-child relationships - I'm not sure whether this will affect the ability to use a stream reader rather than loading the data into memory as a block. One way I can see of making it work is to perform the simple analysis using a stream-based approach, and perhaps wrap the XPath statements into XSLT transformations that I could run across the files afterwards, although it seems a little convoluted. Alternatively, I know that there are some elements the XPath queries will not run across, so I guess I could break the document up into a series of smaller fragments based on its original tree structure, which could perhaps be small enough to process in memory without causing too much havoc. I've tried to explain my objective here, so if I'm barking up totally the wrong tree in terms of general approach, I'm sure you folks can set me right...
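
    A hedged sketch of the fragment-at-a-time idea mentioned above: stream the large document with XmlReader and load one record-sized subtree at a time, so the involved XPath queries run against a small in-memory fragment rather than the whole file. The "item" element name and the sample XPath are hypothetical:

        using System.Xml;
        using System.Xml.XPath;

        static void QueryLargeFile(string path)
        {
            using (var reader = XmlReader.Create(path))
            {
                while (reader.Read())
                {
                    if (reader.NodeType == XmlNodeType.Element && reader.Name == "item")
                    {
                        // ReadSubtree() exposes just this element and its children
                        using (XmlReader fragment = reader.ReadSubtree())
                        {
                            var doc = new XPathDocument(fragment);
                            XPathNavigator nav = doc.CreateNavigator();
                            // the complex XPath now runs over one small fragment
                            var node = nav.SelectSingleNode(".//child/grandchild");
                            if (node != null)
                                System.Console.WriteLine(node.Value);
                        }
                    }
                }
            }
        }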

    Read the article

  • What's it like being a financial programmer?

    - by Mike
    As a student who's done an internship at a Silicon Valley company (non-financial), I'm curious to know what it's like working for a financial company doing software development. I'd expect the hours to be longer and the pay to be higher. Specifically, I have the following questions:

      1. What's the work/life balance really like? Are you expected to work 80 hours a week most weeks? For those who have worked in non-financial software engineering jobs, how does being a financial software engineer compare in terms of work/life balance?
      2. How much does it pay? I'm curious about starting pay (i.e. having just got a BS) as well as "top out" pay. (I'd prefer concrete numbers - ballpark is fine.) Also, bonuses would be useful information.
      3. What jobs do financial programmers typically have? Are most just general software engineers, or do people typically have very specialized (e.g. AI or systems) backgrounds? Also, do most programmers have PhDs?
      4. Are programmers typically required to be at work, or are financial companies generally flexible about letting programmers work from home? When at work, do programmers have to dress formally?
      5. What are the technology environments like? Are finance companies using state-of-the-art hardware and software, or are they generally more conservative about upgrading their equipment?
      6. What programming languages are typically used? If VBA (shudder) is used, is it a large part of a finance company's workflow?
      7. If you could turn back the clock, would you still be a financial programmer?

    I'm going to keep this post open a little bit longer to get some more responses.

    Read the article

  • Converting a Linq expression tree that relies on SqlMethods.Like() for use with the Entity Framework

    - by JohnnyO
    I recently switched from using Linq to Sql to the Entity Framework. One of the things I've really been struggling with is getting a general-purpose IQueryable extension method that was built for Linq to Sql to work with the Entity Framework. This extension method depends on the Like() method of SqlMethods, which is Linq to Sql specific. What I really like about this extension method is that it lets me dynamically construct a SQL LIKE statement on any object at runtime by simply passing in a property name (as a string) and a query clause (also as a string). Such an extension method is very convenient when using grids like Flexigrid or jqGrid. Here is the Linq to Sql version (taken from this tutorial: http://www.codeproject.com/KB/aspnet/MVCFlexigrid.aspx):

        public static IQueryable<T> Like<T>(this IQueryable<T> source, string propertyName, string keyword)
        {
            var type = typeof(T);
            var property = type.GetProperty(propertyName);
            var parameter = Expression.Parameter(type, "p");
            var propertyAccess = Expression.MakeMemberAccess(parameter, property);
            var constant = Expression.Constant("%" + keyword + "%");
            var like = typeof(SqlMethods).GetMethod("Like", new Type[] { typeof(string), typeof(string) });
            MethodCallExpression methodExp = Expression.Call(null, like, propertyAccess, constant);
            Expression<Func<T, bool>> lambda = Expression.Lambda<Func<T, bool>>(methodExp, parameter);
            return source.Where(lambda);
        }

    With this extension method, I can simply do the following:

        someList.Like("FirstName", "mike");

    or

        anotherList.Like("ProductName", "widget");

    Is there an equivalent way to do this with the Entity Framework? Thanks in advance.
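
    A hedged sketch of the usual EF-compatible rewrite, offered as an illustration rather than the definitive answer: instead of SqlMethods.Like, build a call to string.Contains, which the Entity Framework can translate (it becomes LIKE '%keyword%' in the generated SQL). This assumes the usual System.Linq, System.Linq.Expressions, and System.Reflection namespaces:

        public static IQueryable<T> Like<T>(this IQueryable<T> source,
                                            string propertyName, string keyword)
        {
            var type = typeof(T);
            var property = type.GetProperty(propertyName);
            var parameter = Expression.Parameter(type, "p");
            var propertyAccess = Expression.MakeMemberAccess(parameter, property);
            var constant = Expression.Constant(keyword);

            // string.Contains(string) in place of SqlMethods.Like;
            // note it is called on the property, not statically
            var contains = typeof(string).GetMethod("Contains", new[] { typeof(string) });
            var body = Expression.Call(propertyAccess, contains, constant);

            var lambda = Expression.Lambda<Func<T, bool>>(body, parameter);
            return source.Where(lambda);
        }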

    Read the article

  • implementing a download manager that supports resuming

    - by Idan K
    Hi, I intend to write a small download manager in C++ that supports resuming (and multiple connections per download). From the info I've gathered so far, when sending the HTTP request I need to add a header field with the key "Range" and the value "bytes=startoff-endoff". The server then returns an HTTP response with the data between those offsets. So roughly what I have in mind is to split the file into the number of allowed connections per file and send an HTTP request per split part with the appropriate "Range". So if I have a 4 MB file and 4 allowed connections, I'd split the file into 4 and have 4 HTTP requests going, each with the appropriate "Range" field. Implementing the resume feature would involve remembering which offsets have already been downloaded and simply not requesting those.

      1. Is this the right way to do this?
      2. What if the web server doesn't support resuming? (My guess is it will ignore the "Range" and just send the entire file.)
      3. When sending the HTTP requests, should I specify the entire split size in the range, or ask for smaller pieces, say 1024 KB per request?
      4. When reading the data, should I write it immediately to the file or do some kind of buffering? I guess it could be wasteful to write small chunks.
      5. Should I use a memory-mapped file? If I remember correctly, it's recommended for frequent reads rather than writes (I could be wrong). Is it memory-efficient? What if I have several downloads running simultaneously?
      6. If I'm not using a memory-mapped file, should I open the file once per allowed connection, or simply seek when needing to write? (If I did use a memory-mapped file this would be really easy, since I could simply have several pointers.)

    Note: I'll probably be using Qt, but this is a general question, so I left code out of it.
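
    As a hedged illustration of the Range mechanics (in C# like this digest's other sketches, though the asker plans C++/Qt; the HTTP side is identical): a 206 Partial Content status confirms the server honoured the range, while a plain 200 means it ignored the header and sent the whole file, which answers question 2 above. The URL is hypothetical:

        using System;
        using System.IO;
        using System.Net;

        static byte[] DownloadRange(string url, int from, int to)
        {
            var request = (HttpWebRequest)WebRequest.Create(url);
            request.AddRange(from, to);   // emits "Range: bytes=from-to"

            using (var response = (HttpWebResponse)request.GetResponse())
            using (var body = response.GetResponseStream())
            using (var buffer = new MemoryStream())
            {
                if (response.StatusCode != HttpStatusCode.PartialContent)
                    Console.WriteLine("Server ignored the Range header; got the full file.");
                body.CopyTo(buffer);  // a real manager would write to the file offset
                return buffer.ToArray();
            }
        }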

    Read the article

  • How to improve Java performance on Informix for Windows

    - by Michal Niklas
    I have a problem with the performance of Java UDR functions on Informix on Windows. On this server I already have some functions in C and SPL. I chose one function, implemented it in all three languages, and measured its performance on a test table. The function calculates a kind of checksum, so it does not use any db libraries etc., only string and math operations. I observed performance on 30k records with SQL like:

        select function(txt) from _tmp_perf_test

    changing function to function_c, function_spl or function_java. My performance tests showed that the C function is the fastest, the SPL function is about 5 times slower, and Java is 100 (one hundred!) times slower than C. I checked it a few times and the 1:100 ratio didn't improve. I changed the Java function to simply return the length of the string, but even this did not help, so it looks like there is a general problem with Java function invocation: there was no difference in time between the Java function that calculates the checksum and the Java function that returns the length of the string. I increased JVM_MAX_HEAP_SIZE to 128 and that didn't help either. I use IBM Informix Dynamic Server Version 11.50.TC6DE. The same test on a Linux server (IBM Informix Dynamic Server Version 11.50.FC6) shows more "normal" results, i.e. Java is slower than C and SPL, but only 2 to 5 times. What can I do to improve Java performance on an Informix server on Windows? More info about Java on the servers:

        c:\Informix\extend\krakatoa\jre\bin>java -version
        java version "1.5.0"
        Java(TM) 2 Runtime Environment, Standard Edition (build pwi32dev-20081129a (SR9-0))
        IBM J9 VM (build 2.3, J2RE 1.5.0 IBM J9 2.3 Windows Server 2003 x86-32 j9vmwi3223-20081129 (JIT enabled)
        J9VM - 20081126_26240_lHdSMr
        JIT  - 20081112_1511ifx1_r8
        GC   - 200811_07)
        JCL  - 20081129

        [root@informix11 bin]# ./java -version
        java version "1.5.0"
        Java(TM) 2 Runtime Environment, Standard Edition (build pxa64devifx-20071025 (SR6b))
        IBM J9 VM (build 2.3, J2RE 1.5.0 IBM J9 2.3 Linux amd64-64 j9vmxa6423-20071005 (JIT enabled)
        J9VM - 20071004_14218_LHdSMr
        JIT  - 20070820_1846ifx1_r8
        GC   - 200708_10)
        JCL  - 20071025

    Read the article

  • Suppress Eclipse compiler errors in a plug-in

    - by Jan Gorzny
    Hi, I'm currently working on a plug-in for Eclipse that translates some custom Java code (which doesn't necessarily run/compile) into runnable Java code. In particular, the plug-in allows code to be written using classes created or imported during the translation. In general, the pre-translation code runs/compiles fine, provided the writer uses import statements at the top of their class files. However, it would be convenient for my users if those imports were not necessary. At the moment, the lack of import statements results in (obvious) compiler errors. Would it be possible to empower my plug-in to either a) suppress/ignore these errors, or b) have Eclipse find these classes automatically, without the use of import statements? To clarify, consider the following example of pre-translated code:

        File f = new File("Somefilename.txt");

    which clearly requires the possibly imported class File. Without an import statement (import java.io.File;), Eclipse reports that File cannot be resolved to a type. This is the error I'd like to hide in files belonging to projects created for use with my plug-in. (The translated code would include import java.io.File;, so it would be runnable.) I should point out that I'm not necessarily looking for code (though I wouldn't be opposed to it), but rather some links to relevant tutorials (if they exist), or helpful tips/ideas. I'm also aware that this could encourage lazy programmers and some bad habits. Also, as this is my first plug-in, it's entirely possible that what I'd like to do is not possible and I don't realize it; if this is the case, please let me know, preferably with some justification. Thanks!

    Read the article

  • Why avoid increment ("++") and decrement ("--") operators in JavaScript?

    - by artlung
    I'm a big fan of Douglas Crockford's writing on JavaScript, particularly his book JavaScript: The Good Parts. It's made me a better JavaScript programmer and a better programmer in general. One of the tips for his jslint tool is this:

        ++ and --
        The ++ (increment) and -- (decrement) operators have been known to contribute to bad code by encouraging excessive trickiness. They are second only to faulty architecture in enabling viruses and other security menaces. There is a plusplus option that prohibits the use of these operators.

    This has always struck my gut as "yes, that makes sense," but it has annoyed me when I've needed a looping condition and can't figure out a better way to control the loop than a while (a < 10) { a++; } or for (var i = 0; i < 10; i++) { } and still use jslint. It has challenged me to write it differently. I also know that in the distant past, using things in, say, PHP like $foo[$bar++] has gotten me in trouble with off-by-one errors. Are there C-like languages, or other languages with similarities, that lack the "++" and "--" syntax or handle it differently? Are there other rationales for avoiding "++" and "--" that I might be missing? UPDATE - April 9, 2010: In the video Crockford on JavaScript - Part 5: The End of All Things, Douglas Crockford addresses the ++ issue more directly and in more detail. It appears at 1:09:00 in the timeline. Worth a watch.

    Read the article
