Search Results

Search found 3219 results on 129 pages for 'dallas fort worth'.


  • Win32 DLL importing issues (DllMain)

    - by brady
    I have a native DLL that is a plug-in to a different application (one that I have essentially zero control of). Everything works just fine until I link with an additional .lib file, which links my DLL to another DLL named ABQSMABasCoreUtils.dll. That file contains some additional API from the parent application that I would like to use. I haven't even written any code that uses the exported functions; just linking in this new DLL causes problems. Specifically, I get the following error when I attempt to run the program:

        The application failed to initialize properly (0xc0000025). Click on OK to terminate the application.

    I believe I have read somewhere that this is typically due to a DllMain function returning FALSE. Also, the following message is written to standard output:

        ERROR: Memory allocation attempted before component initialization

    I am almost 100% sure this error message is coming from the application and is not some type of Windows error. Looking into this a little more (i.e. flailing around and flipping every switch I know of), I linked with /MAP turned on and found this in the resulting .map file:

        0001:000af220  ??3@YAXPEAX@Z    00000001800b0220 f  ABQSMABasCoreUtils_import:ABQSMABasCoreUtils.dll
        0001:000af226  ??2@YAPEAX_K@Z   00000001800b0226 f  ABQSMABasCoreUtils_import:ABQSMABasCoreUtils.dll
        0001:000af22c  ??_U@YAPEAX_K@Z  00000001800b022c f  ABQSMABasCoreUtils_import:ABQSMABasCoreUtils.dll
        0001:000af232  ??_V@YAXPEAX@Z   00000001800b0232 f  ABQSMABasCoreUtils_import:ABQSMABasCoreUtils.dll

    Undecorating those names with "undname" gives the following (in the same order):

        void __cdecl operator delete(void * __ptr64)
        void * __ptr64 __cdecl operator new(unsigned __int64)
        void * __ptr64 __cdecl operator new[](unsigned __int64)
        void __cdecl operator delete[](void * __ptr64)

    I am not sure I understand how anything from ABQSMABasCoreUtils.dll can exist in this .map file, or why my DLL is even attempting to load ABQSMABasCoreUtils.dll when I have no code that references it. Can anyone help me put this information together and figure out why this isn't working? For what it's worth, I have confirmed via "dumpbin" that the parent application imports the same DLL (ABQSMABasCoreUtils.dll), so it is being loaded no matter what. I have also tried delay loading this DLL in my DLL, but that did not change the results.

    Read the article

  • Exception from HRESULT: 0x80020009 (DISP_E_EXCEPTION) in SharePoint Part 2

    - by BeraCim
    Hi all: following this post, which I made some time ago, I now get the same error every time I try to rewire two webs' URLs. Basically, this is the code; it runs in a LongRunningOperationJob:

        SPWeb existingWeb = null;
        using (existingWeb = site.OpenWeb(wedId))
        {
            SPWeb destinationWeb = createNewSite(existingWeb);

            existingWeb.AllowUnsafeUpdates = true;
            existingWeb.Name = existingWeb.Name + "_old";
            existingWeb.Title = existingWeb.Title + "_old";
            existingWeb.Description = existingWeb.Description + "_old";
            existingWeb.Update();
            existingWeb.AllowUnsafeUpdates = false;

            destinationWeb.AllowUnsafeUpdates = true;
            destinationWeb.Name = existingWeb.Name;
            destinationWeb.Title = existingWeb.Title;
            destinationWeb.Description = existingWeb.Description;
            destinationWeb.Update();
            destinationWeb.AllowUnsafeUpdates = false;

            // null these for what it's worth
            existingWeb = null;
            destinationWeb = null;
        } // <---- Exception raised here

    Basically, the code is trying to rename the existing site's URL to something else and have the destination web's URL point to the old site's URL. The first time I run this, I get the exception mentioned in the subject; on every run after that, I do not see the exception any more. The webs DO get rewired... but at the cost of the app dying an unnecessary and terrible death. I'm at a complete loss as to what is going on and need urgent help. Does SharePoint keep some table hidden from me, or does the logic above have a fatal problem? Thanks.
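
    For what it may be worth, here is a minimal sketch (an assumption about the intent, not a confirmed fix) of the same swap written so the destination web receives the pre-"_old" values and both webs are disposed by their using blocks; it relies only on the standard SPSite/SPWeb members used above plus the poster's createNewSite helper:

        using (SPWeb existingWeb = site.OpenWeb(wedId))
        {
            // Capture the original values before renaming the existing web,
            // so "_old" is not copied onto the destination web as well.
            string originalName = existingWeb.Name;
            string originalTitle = existingWeb.Title;
            string originalDescription = existingWeb.Description;

            using (SPWeb destinationWeb = createNewSite(existingWeb))
            {
                existingWeb.AllowUnsafeUpdates = true;
                existingWeb.Name = originalName + "_old";
                existingWeb.Title = originalTitle + "_old";
                existingWeb.Description = originalDescription + "_old";
                existingWeb.Update();
                existingWeb.AllowUnsafeUpdates = false;

                destinationWeb.AllowUnsafeUpdates = true;
                destinationWeb.Name = originalName;
                destinationWeb.Title = originalTitle;
                destinationWeb.Description = originalDescription;
                destinationWeb.Update();
                destinationWeb.AllowUnsafeUpdates = false;
            }
        } // both webs are disposed here; nulling the references is unnecessary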

    Read the article

  • Memory leak in Apple's 'Scrolling' sample code

    - by John
    Hi all, I'm using code based on Apple's "Scrolling" sample code - here's where I have a problem:

        // load all the images from our bundle and add them to the scroll view
        NSUInteger i;
        for (i = 1; i <= jNumImages; i++)
        {
            NSString *imageName = [NSString stringWithFormat:@"page%d.png", i];
            UIImage *image = [UIImage imageNamed:imageName];
            UIImageView *imageView2 = [[UIImageView alloc] initWithImage:image];

    The UIImageView causes a leak - it doesn't seem to be released (even though I do call [imageView2 release]; after adding the image view as a subview of scrollView2). I have, say, 15 chapters of a book, each held in a nav bar, each containing a scroll view with one chapter's worth of these image views (each image is a page). When I get to around the second-to-last chapter, the app crashes due to the memory leaks... really annoying! I think it might be because imageView's been alloc'd twice (in alloc and in addSubView), but I'm not sure... I tried releasing twice but it didn't seem to help. Any pointers? Thanks in advance ^.^

    Read the article

  • PostgreSQL, Foreign Keys, Insert speed & Django

    - by Miles
    A few days ago, I ran into an unexpected performance problem with a pretty standard Django setup. For an upcoming feature, we have to regenerate a table hourly, containing about 100k rows of data, 9 MB on disk, 10 MB of indexes according to pgAdmin.

    The problem is that inserting them, by whatever method, literally takes ages: up to 3 minutes of 100% disk-busy time. That's not something you want on a production site. It doesn't matter whether the inserts were in a transaction, or issued via plain INSERT, multi-row INSERT, COPY FROM or even INSERT INTO t1 SELECT * FROM t2.

    After noticing this isn't Django's fault, I followed a trial-and-error route, and hey, the problem disappeared after dropping all foreign keys! Instead of 3 minutes, the INSERT INTO ... SELECT FROM took less than a second to execute, which isn't too surprising for a table <= 20 MB on disk. What is weird is that PostgreSQL manages to slow down inserts by 180x just by using 3 foreign keys.

    Oh, and the disk activity was pure writing, as everything is cached in RAM; only writes go to the disks. It looks like PostgreSQL is working very hard to touch every row in the referenced tables, as 3 MB/sec * 180 s is way more data than the 20 MB this new table takes on disk. There was no WAL for the 180 s case; I was testing in psql directly (in Django, add ~50% overhead for WAL logging). I tried @commit_on_success, same slowness; I had even implemented multi-row insert and COPY FROM with psycopg2. That's another weird thing: how can 10 MB worth of inserts generate 10 x 16 MB log segments?

    Table layout: id serial primary key, a bunch of int32, and 3 foreign keys to

        - a small table, 198 rows, 16 kB on disk
        - a large table, 1.2M rows, 59 MB data + 89 MB index on disk
        - a large table, 2.2M rows, 198 MB + 210 MB

    So, am I doomed to either drop the foreign keys manually or use the table in a very un-Django way by defining saving bla_id x3 and skip using models.ForeignKey? I'd love to hear about some magical antidote / pg setting to fix this.

    Read the article

  • One config file for multiple environments

    - by ho
    I'm currently working with systems that have quite a lot of configuration settings that are environment specific (Dev, UAT, Production). Does anyone have any suggestions for minimizing the changes needed to the config file when moving between environments, as well as minimizing the duplication of data in the config file? It's mostly application settings rather than user settings. The way I'm doing it at the moment is something similar to this:

        <DevConnectionString>xyz</DevConnectionString>
        <DevInboundPath>xyz</DevInboundPath>
        <DevProcessedPath>xyz</DevProcessedPath>
        <UatConnectionString>xyz</UatConnectionString>
        <UatInboundPath>xyz</UatInboundPath>
        <UatProcessedPath>xyz</UatProcessedPath>
        ...
        <Environment>Dev</Environment>

    I then have a class that reads in the Environment setting via the My.Settings class (it's a VB project) and uses that to decide which other settings to retrieve. This leads to too much duplication, though, so I'm not sure if it's worth it.
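
    As a point of comparison, here is a minimal C# sketch of the same "prefix every key with the environment" idea (the poster's project is VB, so this is only an illustration), assuming the values are moved into ordinary appSettings key/value pairs read with System.Configuration:

        // Resolves "ConnectionString" to "DevConnectionString", "UatConnectionString", etc.,
        // based on a single <add key="Environment" value="Dev" /> entry.
        using System.Configuration;

        public static class EnvironmentSettings
        {
            private static readonly string Env =
                ConfigurationManager.AppSettings["Environment"];

            public static string Get(string name)
            {
                return ConfigurationManager.AppSettings[Env + name];
            }
        }

        // Usage: string conn = EnvironmentSettings.Get("ConnectionString");

    The duplication across environments stays in the file, but only the single Environment value has to change when moving between Dev, UAT and Production.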

    Read the article

  • Find the Algorithm that generates the checksum

    - by knivmannen
    I have a sensing device that transmits a 6-byte message along with a 1-byte counter and, supposedly, a checksum. The data looks something like this:

        ------ DATA ------    Counter    Checksum?
        55 FF 00 00 EC FF        60          1F

    The last four bits in the counter are always set to 0, i.e. those bits are probably not used. The last byte is assumed to be the checksum, since it has a quite peculiar nature: it tends to change seemingly at random as the data changes. Now what I need is to find the algorithm that computes this checksum based on the DATA.

    What I have tried is all possible CRC-8 polynomials; for each polynomial I have tried reflecting the data, toggling it, initializing with non-zero values, etc. I've come to the conclusion that I am not dealing with a normal CRC algorithm. I have also tried some Fletcher and Adler methods without success, and XORed stuff back and forth, but I still have no clue how to generate the checksum.

    My biggest concern is: how is the counter used? The same data with a different counter value generates a different checksum. I have tried to include the counter in my computations, but without any luck. Here are some other data samples:

        55 FF 00 00 F0 FF    A0    38
        66 0B EA FF BF FF    C0    CA
        5E 18 EA FF B7 FF    60    BD
        F6 30 16 00 FC FE    10    81

    One more thing that might be worth mentioning is that the last byte in the data only ever takes the values FF or FE. Please, if you have any tips or tricks that I could try, post them here; I am truly desperate. Thanks.
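
    For anyone who wants to experiment with these samples, here is a brute-force sketch in C# (purely exploratory, not a known answer): it folds the counter in as a seventh input byte and tries every CRC-8 polynomial and initial value, printing any combination that reproduces the observed checksum for all five samples. Reflected variants and other checksum families would need to be added on top.

        using System;

        class ChecksumSearch
        {
            // Each row: 6 data bytes, counter, observed checksum (values taken from the question).
            static readonly byte[][] Samples =
            {
                new byte[] { 0x55, 0xFF, 0x00, 0x00, 0xEC, 0xFF, 0x60, 0x1F },
                new byte[] { 0x55, 0xFF, 0x00, 0x00, 0xF0, 0xFF, 0xA0, 0x38 },
                new byte[] { 0x66, 0x0B, 0xEA, 0xFF, 0xBF, 0xFF, 0xC0, 0xCA },
                new byte[] { 0x5E, 0x18, 0xEA, 0xFF, 0xB7, 0xFF, 0x60, 0xBD },
                new byte[] { 0xF6, 0x30, 0x16, 0x00, 0xFC, 0xFE, 0x10, 0x81 },
            };

            // Plain (non-reflected) CRC-8 over the first 'count' bytes.
            static byte Crc8(byte[] bytes, int count, byte poly, byte init)
            {
                byte crc = init;
                for (int i = 0; i < count; i++)
                {
                    crc ^= bytes[i];
                    for (int bit = 0; bit < 8; bit++)
                        crc = (byte)((crc & 0x80) != 0 ? (crc << 1) ^ poly : crc << 1);
                }
                return crc;
            }

            static void Main()
            {
                for (int poly = 0; poly < 256; poly++)
                {
                    for (int init = 0; init < 256; init++)
                    {
                        bool allMatch = true;
                        foreach (var s in Samples)
                        {
                            if (Crc8(s, 7, (byte)poly, (byte)init) != s[7]) { allMatch = false; break; }
                        }
                        if (allMatch)
                            Console.WriteLine("candidate: poly=0x{0:X2} init=0x{1:X2}", poly, init);
                    }
                }
            }
        }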

    Read the article

  • Extension methods for encapsulation and reusability

    - by tzaman
    In C++ programming, it's generally considered good practice to "prefer non-member non-friend functions" over instance methods. This has been recommended by Scott Meyers in this classic Dr. Dobb's article, and repeated by Herb Sutter and Andrei Alexandrescu in C++ Coding Standards (item 44); the general argument is that if a function can do its job solely by relying on the public interface exposed by the class, it actually increases encapsulation to have it be external. While this confuses the "packaging" of the class to some extent, the benefits are generally considered worth it.

    Now, ever since I've started programming in C#, I've had a feeling that here is the ultimate expression of the concept that they're trying to achieve with "non-member, non-friend functions that are part of a class interface". C# adds two crucial components to the mix - the first being interfaces, and the second extension methods:

        - Interfaces allow a class to formally specify its public contract, the methods and properties that it exposes to the world. Any other class can choose to implement the same interface and fulfill that same contract.
        - Extension methods can be defined on an interface, providing any functionality that can be implemented via the interface to all implementers automatically. And best of all, because of the "instance syntax" sugar and IDE support, they can be called the same way as any other instance method, eliminating the cognitive overhead!

    So you get the encapsulation benefits of "non-member, non-friend" functions with the convenience of members. Seems like the best of both worlds to me; the .NET library itself provides a shining example in LINQ. However, everywhere I look I see people warning against extension method overuse; even the MSDN page itself states:

        In general, we recommend that you implement extension methods sparingly and only when you have to.

    So what's the verdict? Are extension methods the acme of encapsulation and code reuse, or am I just deluding myself?
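
    To make the idea concrete, here is a small illustrative sketch (the types are invented for the example): the extension method relies only on the interface's public surface, yet callers use it with ordinary instance syntax.

        using System;

        public interface IShape
        {
            double Width { get; }
            double Height { get; }
        }

        public static class ShapeExtensions
        {
            // "Non-member, non-friend" in spirit: it only touches IShape's public contract,
            // but every implementer of IShape gets it automatically.
            public static double Area(this IShape shape)
            {
                return shape.Width * shape.Height;
            }
        }

        public class Rectangle : IShape
        {
            public double Width { get; set; }
            public double Height { get; set; }
        }

        // Usage:
        // var r = new Rectangle { Width = 3, Height = 4 };
        // Console.WriteLine(r.Area()); // 12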

    Read the article

  • Attention JavaScript gurus: Need a hand with setInterval()

    - by alex
    I am trying to make a non-interactive display for a real estate shop window. It's been a while since I've played with setInterval(). The first time my script steps through, it is fine, but when it tries to get the next property via getNextProperty(), it starts to go haywire. If you have Firebug, or an equivalent output of console.log(), you'll see it is calling things it shouldn't!

    Now there is a fair bit of JavaScript, so I'll feel better linking to it than posting it all: Store Display, Offending JavaScript. It is worth mentioning that all my DOM/AJAX work is done with jQuery.

    I've tried my best to make sure clearInterval() is working, and it seems to not run any code below it. The setInterval() is used to preload the next image and then display it in the gallery. When the interval detects we are at the last image (nextListItem.length === 0), it is meant to clear that interval and start over with a new property.

    It has been driving me nuts for a while now, so is anyone able to help me? It is probably something really obvious! Many thanks!

    Read the article

  • Microsoft Training in .NET

    - by JD
    Hey guys, a quick question about training. My company is offering to put me through a 5-day course in .NET programming with C#, on the proviso that I work for them for a minimum 12-month period immediately after the training has been completed. If I don't work for them for 12 months, I will be expected to pay back the cost of the training, minus the months I have stayed at the company (e.g. stay 6 months and only pay back half). The course will be $3,300 AUD and comes with no qualifications.

    I have been programming in C# for about 6 months, and in PHP for about 10 years, so I am getting the hang of .NET slowly but surely. Is it worth doing this? Or would I be best advised to learn from reading the MS books from Amazon for $40 and then sitting the 2 exams to get MS certified off my own back? The company is not willing to pay for my certification as well, as "they are not training me to get certified" and hence make me more employable...

    Thoughts? How do other companies deal with training? And how do they stop people from taking off after becoming certified?

    Read the article

  • How to use data-aware controls "correctly"?

    - by lyborko
    Hi, I would like to ask experienced users: do you prefer to use data-aware controls to add, insert, delete and edit data in the DB, or do you favor doing it manually?

    I developed some DB applications in which, for the sake of a "user friendly policy", I ran into a complicated web of table events (AfterInsert, AfterEdit, After... and BeforeEdit, BeforeInsert, Before...). After that, it was quite nasty work to debug the application. Aware of this risk (later, in another application), I tried to avoid this problem, so I paid increased attention to writing code that was clean, readable and comprehensible. Everything seemed all right from the beginning, but as soon as I needed to handle some preprocessing before sending and loading data, etc., I ran into the same problems again, "slowly and inevitably".

    Sometimes I could not use data-aware controls at all, and what seemed to be a "cool" feature of the DAControl at the beginning turned into an obstacle in the end. I "had to" write a special routine for non-data-aware controls in order to make them behave as if they were data-aware. Then I asked myself: why on earth should I use data-aware controls? Is it better to base the application architecture on non-data-aware controls? It requires more time to write bug-proof code, of course, but is it worth it? I do not know...

    It has happened to me several times, as if jinxed: paradise at the beginning, hell at the end... I do not know whether I am using the wrong method to write DB programs, or whether there is some standard, common practice for how to proceed. Or is this a common problem for everybody? Thanks for your advice and experiences.

    Read the article

  • Flex 4 front end connecting to Java Jersey Web Service

    - by user305801
    I created a Java REST service using Jersey. I use three of the HTTP "verbs": GET, POST and DELETE. I want to create several prototype front ends for the service. After much research (a lot of it dating to 2008 and 2009), I have been unable to find anything remotely simple. My three options are:

        1) resthttpservice (http://code.google.com/p/resthttpservice/). This project hasn't been updated in a year; the only activity is one-off suggestions that individual users have implemented.
        2) Create an AIR application. This isn't unfeasible.
        3) Write my own socket-level code, but there is a security restriction with Flash players and I would need to implement a policy server.

    I have already read the question asking whether using Flex for REST services is worth it; that information is old as well. I want to introduce REST services to my company, but Flex's limited support for HTTP PUT and DELETE is discouraging. My service also uses the Accept header to determine whether JSON or XML will be returned to the client, and I can't seem to change HTTP headers without doing socket programming. I'm fine with that, but the security policy thing is annoying.

    Is there an easy way to use Flex 4 with RESTful services that uses PUT/DELETE and the Accept HTTP header? Please help. I'm very frustrated.

    Read the article

  • Project Management and Scheduling Techniques

    - by Alec Smart
    Hello, I know this is probably the nth project management question, but I am trying to move my team onto a more robust project management technique. I am wondering what the best technique to use is. I know that probably no technique is best, but which are the most popular techniques? Planning poker? Evidence Based Scheduling? COCOMO? Agile? Scrum? XP? Which one should I use?

    Also, suppose I use EBS: wouldn't it be too time-consuming to break down every single activity into fine-grained tasks? E.g. "Design" is a goal; what kind of fine-grained tasks will I have under it? Is this a waste of time, i.e. dividing work into so many micro parts? Usually when I give my programmers a task, I follow up every week, and they complete quite a lot of the work assigned to them (the tasks are very broad, e.g. "X module"). Is EBS worth it? Are there any white papers on it so that I can implement it on my own (instead of using FogBugz)?

    Most of my projects are web-based projects. Thank you for your time.

    Read the article

  • Java SWT: wrapping syncExec and asyncExec to clean up code

    - by jonescb
    I have a Java application using SWT as the toolkit, and I'm getting tired of all the ugly boilerplate code it takes to update a GUI element. Just to set a disabled button to be enabled, I have to go through something like this:

        shell.getDisplay().asyncExec(new Runnable() {
            public void run() {
                buttonOk.setEnabled(true);
            }
        });

    I prefer keeping my source code as flat as I possibly can, but I need a whopping 3 indentation levels just to do something simple. Is there some way I can wrap it? I would like a class like:

        public class UIUpdater {
            public static void updateUI(Shell shell, *function_ptr*) {
                shell.getDisplay().asyncExec(new Runnable() {
                    public void run() {
                        // Execute function_ptr
                    }
                });
            }
        }

    which could be used like so:

        UIUpdater.updateUI(shell, buttonOk.setEnabled(true));

    Something like this would be great for hiding that horrible mess SWT seems to think is necessary to do anything. As I understand it, Java cannot do function pointers, and Java 7 is supposed to have something called closures, which should be what I want. But in the meantime, is there anything at all I can do to pass a function pointer or callback to another function to be executed?

    As an aside, I'm starting to think it'd be worth the effort to redo this application in Swing, so I don't have to put up with this ugly crap and the non-cross-platformyness of SWT.

    Read the article

  • How much detail should be in a project plan or spec?

    - by DeanMc
    I have an issue that I feel many programmers can relate to... I have worked on many small-scale projects. After my initial paper brainstorm, I tend to start coding. What I come up with is usually a rough working model of the actual application. I design in a disconnected fashion, so I am talking about underlying code libraries; user interfaces are the last thing, as the library usually dictates what is needed in the UI.

    As my projects get bigger, I worry that so should my "spec" or design document. The above paragraph, from my investigations, is echoed all across the internet in one fashion or another. Where a UI is concerned there is a bit more information, but it is UI-specific and does not relate to code libraries. What I am beginning to realise is that maybe code is code is code. It seems from my extensive research that there is no 1:1 mapping between a design document and the code.

    When I need to research a topic, I dump information into OneNote and from there I prioritise features into versions, and then into related chunks, so that development runs in a fairly linear fashion. My tasks tend to look like so:

        - Implement Binary File Reader
        - Implement Binary File Writer
        - Create Object to encapsulate Data for expression to the caller

    Now any programmer worth his salt is aware that between those three to-do items lies a potential wall of code that could expand out to multiple files. I have tried to map the complete code process for each task, but I simply don't think it can be done effectively; by the time one mangles pseudocode it is essentially code anyway, so the time investment is negated.

    So my question is this: am I right in assuming that the best documentation is the code itself? We are all in agreement that a high-level overview is needed. How high should this be? Do you design to statement, class or concept level? What works for you?

    Read the article

  • Does a CS PhD Help for Software Engineering Career?

    - by SiLent SoNG
    I would like to seek advice on whether or not to take a PhD offer from a good university. My only concern is that the PhD will take at least 4 years' commitment, during which I won't have a good monetary income. I am also concerned about whether the PhD will help my future career development; my career goal is software engineering only.

    Some of the PhD info: the PhD is CS related. The research area is Information Retrieval, Machine Learning, and Natural Language Processing; more specifically, the research topic is Deep Web search.

    Some background: I worked at Oracle for 3 years in database development after obtaining a CS degree from a good university. Last year I received an email describing an interesting project from a professor, and thereafter I was lured into his research team. The team consists of 4 PhD students; those students have little or no industry experience and their coding skills are really, really bad. By bad I mean they do not know common patterns, and they do not know the pitfalls of the programming languages or the idioms for doing things right.

    I guess a commitment of at least 4 years is worth serious consideration. I am 27 at this moment; if I take the offer, that implies I will be 31+ upon graduation. Wah... becoming, what to say, no longer young. Hence, here I am seeking advice on whether or not it is good to take the PhD offer, and whether a CS PhD is good for my future career growth as a software engineer. I do not intend to go into academia.

    Read the article

  • 'Bank Switching' Sprites on old NES applications

    - by Jeffrey Kern
    I'm currently writing in C# what could basically be called my own interpretation of the NES hardware for an old-school-looking game that I'm developing. I've fired up FCE and have been observing how the NES displayed and rendered graphics.

    In a nutshell, the NES could hold two bitmaps' worth of graphical information, each with dimensions of 128x128. These are called the PPU tables; one was for BG tiles and the other was for sprites. The data had to be in this memory for it to be drawn on-screen. Now, if a game had more graphical data than these two banks, it could write portions of the new information to these banks - overwriting what was there - at the end of each frame, and use it from the next frame onward.

    So, in old games, how did the programmers 'bank switch'? I mean, within the level design, how did they know which graphic set to load? I've noticed that Mega Man 2 bank switches when the screen programmatically scrolls from one portion of the stage to the next. But how did they store this information in the level - which sprites to copy over into the PPU tables, and where to write them? Another example would be hitting pause in MM2: BG tiles get overwritten during pause and then get restored when the player unpauses. How did they remember which tiles they replaced and how to restore them?

    If I was lazy, I could just make one huge static bitmap and grab values that way, but I'm forcing myself to limit these values to create a more authentic experience. I've read the amazing guide on how M.C. Kids was made, and I'm trying to be barebones about how I program this game. It still just boggles my mind how these programmers accomplished what they did with what they had.

    EDIT: The only solution I can think of would be to hold separate tables that state which tiles should be in the PPU at what time, but I think that would be a huge memory resource that the NES wouldn't be able to handle.
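
    Since the interpreter itself is in C#, here is a purely illustrative sketch of the "separate tables" idea in the emulator's own model (it makes no claim about how any real NES title laid this out in ROM): each screen carries a small list describing which tile data to copy into a pattern table when the screen is entered, and a pause screen saves the region it is about to overwrite so it can be restored on unpause.

        using System;
        using System.Collections.Generic;

        public class BankCopy
        {
            public byte[] SourceTiles;    // tile data kept with the game's assets (the "ROM")
            public int DestinationOffset; // where in the pattern table to write it
        }

        public class ScreenDefinition
        {
            // Copies to perform when this screen scrolls into view.
            public List<BankCopy> TileCopies = new List<BankCopy>();
        }

        public class PatternTable
        {
            private readonly byte[] memory = new byte[0x1000]; // one 128x128 table's worth of tile data
            private byte[] savedRegion;
            private int savedOffset;

            public void Apply(BankCopy copy)
            {
                Array.Copy(copy.SourceTiles, 0, memory, copy.DestinationOffset, copy.SourceTiles.Length);
            }

            // For pause menus: remember what is there before overwriting it...
            public void SaveRegion(int offset, int length)
            {
                savedOffset = offset;
                savedRegion = new byte[length];
                Array.Copy(memory, offset, savedRegion, 0, length);
            }

            // ...and put it back on unpause.
            public void RestoreRegion()
            {
                if (savedRegion != null)
                    Array.Copy(savedRegion, 0, memory, savedOffset, savedRegion.Length);
            }
        }

    The per-screen tables cost only a few bytes each in this model, which is the usual trade-off: a little table data in exchange for not having to keep every tile resident at once.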

    Read the article

  • Google Spreadsheet API problem: memory exceeded

    - by Robbert
    Hi guys, I don't know if anyone has experience with the Google Spreadsheets API or the Zend_Gdata classes, but it's worth a go. When I try to insert a value into a 750-row spreadsheet, it takes ages and then throws an error saying that my memory limit (which is 128 MB!) was exceeded. I also got this when querying all records of the spreadsheet, which I can imagine, because that's quite a lot of data. But why does this happen when inserting a row? That's not too complex, is it? Here's the code I used:

        public function insertIntoSpreadsheet($username, $password, $spreadSheetId, $data = array())
        {
            $service = Zend_Gdata_Spreadsheets::AUTH_SERVICE_NAME;
            $client = Zend_Gdata_ClientLogin::getHttpClient($username, $password, $service);
            $client->setConfig(array(
                'timeout' => 240
            ));
            $service = new Zend_Gdata_Spreadsheets($client);

            if (count($data) == 0) {
                die("No valid data");
            }

            try {
                $newEntry = $service->insertRow($data, $spreadSheetId);
                return true;
            } catch (Exception $e) {
                return false;
            }
        }

    Read the article

  • how can I save/keep-in-sync an in-memory graph of objects with the database?

    - by Greg
    Question: what is a good best-practice approach for saving/keeping-in-sync an in-memory graph of objects with the database?

    Background: say I have the classes Node and Relationship, and the application is building up a graph of related objects using these classes. There might be 1000 nodes with various relationships between them. The application needs to query the structure, hence an in-memory approach is no doubt good for performance (e.g. traverse the graph from Node X to find the root parents). The graph does need to be persisted, however, into a database with tables NODES and RELATIONSHIPS.

    So what is a good best-practice approach for saving/keeping-in-sync this in-memory graph of objects with the database? Ideal requirements would include:

        - build up changes in memory and then 'save' afterwards (mandatory)
        - when saving, apply updates to the database in the correct order to avoid hitting any database constraints (mandatory)
        - keep the persistence mechanism separate from the model, for ease of changing the persistence layer if needed, e.g. don't just wrap an ADO.NET DataRow in the Node and Relationship classes (desirable)
        - a mechanism for doing optimistic locking (desirable)

    Or is the overhead of all this for a smallish application just not worth it, and should I just hit the database each time for everything (assuming the response times were acceptable)? I would still like to avoid that if it doesn't add too much extra overhead, to remain somewhat scalable performance-wise.
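
    One common shape for the first two requirements is a small unit-of-work object; the sketch below (illustrative only - the IGraphStore interface is a hypothetical stand-in for whatever data access layer is chosen) buffers changes in memory and then persists them in a foreign-key-safe order: deleted relationships before deleted nodes, new nodes before new relationships.

        using System.Collections.Generic;

        public class Node { public int Id; public string Name; }
        public class Relationship { public int Id; public Node From; public Node To; }

        // Persistence stays behind an interface, separate from the model classes.
        public interface IGraphStore
        {
            void InsertNode(Node n);
            void DeleteNode(Node n);
            void InsertRelationship(Relationship r);
            void DeleteRelationship(Relationship r);
        }

        public class GraphUnitOfWork
        {
            private readonly List<Node> newNodes = new List<Node>();
            private readonly List<Node> removedNodes = new List<Node>();
            private readonly List<Relationship> newRelationships = new List<Relationship>();
            private readonly List<Relationship> removedRelationships = new List<Relationship>();

            public void AddNode(Node n) { newNodes.Add(n); }
            public void RemoveNode(Node n) { removedNodes.Add(n); }
            public void AddRelationship(Relationship r) { newRelationships.Add(r); }
            public void RemoveRelationship(Relationship r) { removedRelationships.Add(r); }

            // 'Save afterwards': apply the buffered changes in constraint-safe order.
            public void Save(IGraphStore store)
            {
                foreach (var r in removedRelationships) store.DeleteRelationship(r);
                foreach (var n in removedNodes) store.DeleteNode(n);
                foreach (var n in newNodes) store.InsertNode(n);
                foreach (var r in newRelationships) store.InsertRelationship(r);

                newNodes.Clear(); removedNodes.Clear();
                newRelationships.Clear(); removedRelationships.Clear();
            }
        }

    Updates to existing nodes and optimistic locking (e.g. a version column checked by the store's UPDATE statements) would layer on top of the same pattern.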

    Read the article

  • Core Data: Overkill for simple, static UITableView-based iPhone App?

    - by David Foster
    Hello! I have a rather simple iPhone app consisting of numerous views containing a single, grouped table view. These views are held together in navigation controllers which are grouped in a tab bar. Simple stuff. My table views do little more than list text (like "Dog", "Cat" and "Weasel"), and this data is being served from a collection of plists. It's perhaps worth mentioning too that these tables are 'static' in the sense that their data is pre-determined and will only ever be amended - and if so, very rarely indeed - by the developer (in this case, moi).

    This rudimentary approach has reached its limits though, and I think I'm going to need something a bit more relational. I have worked a tad with Core Data in the past, but only with apps whose data is determined by user input. I have four closely related questions:

        1. Is Core Data overkill for an app consisting mainly of a selection of simple table views?
        2. Do you recommend using Core Data to manage data which is pre-determined and extremely unlikely to ever change?
        3. Can one lock Core Data down so that its data can't change, thereby relinquishing my responsibility as the developer to handle the editing and saving of the managed object context?
        4. How do I go about giving Core Data my pre-determined data, and in a format I know that it can work with?

    Thanks a bunch guys.

    Read the article

  • Why open source it? And how to get real involvement?

    - by donpal
    For me, the main goal of open sourcing something is collaboration. If the most that other developers are going to do is take it, use it, and report bugs to me, then I might as well close source it; closed source provides me with all of that.

    I was recently looking at a small JavaScript library (or more like a plugin, 1000 lines of code) that's actually quite popular. There were some bugs in it, because new browsers and browser versions get released every day and these bugs just pop up as a result. What bothered me is that these bugs would actually be quite easy to fix by even intermediate JavaScript developers, but for an entire month no one stepped up to fix them and submit a fixed version. The original author was apparently busy for that month, but that's the point of open sourcing your code: so that others can use it and help themselves AND the project if they can.

    So this makes me doubt the promise of open source. If people aren't working on it too, I might as well close source my new projects. And how do you get people involved so that open sourcing is worth it?

    Read the article

  • An Ideal Keyboard Layout for Programming

    - by Jon Purdy
    I often hear complaints that programming languages that make heavy use of symbols for brevity, most notably C and C++ (I'm not going to touch APL), are difficult to type because they require frequent use of the shift key. A year or two ago, I got tired of it myself, downloaded Microsoft's Keyboard Layout Creator, made a few changes to my layout, and have not once looked back. The speed difference is astounding; with these few simple changes I am able to type C++ code around 30% faster, depending of course on how hairy it is; best of all, my typing speed in ordinary running text is not compromised.

    My questions are these: what alternate keyboard layouts have existed for programming, which have gained popularity, are any of them still in modern use, do you personally use any altered layout, and how can my layout be further optimised?

    I made the following changes to a standard QWERTY layout. (I don't use Dvorak, but there is a programmer Dvorak layout worth mentioning.)

        - Swap numbers with symbols in the top row, because long or repeated literal numbers are typically replaced with named constants;
        - Swap backquote with tilde, because backquotes are rare in many languages but destructors are common in C++;
        - Swap minus with underscore, because underscores are common in identifiers;
        - Swap curly braces with square brackets, because blocks are more common than subscripts; and
        - Swap double quote with single quote, because strings are more common than character literals.

    I suspect this last change is probably going to be the most controversial, as it interferes the most with running text by requiring use of shift to type common contractions. This layout has significantly increased my typing speed in C++, C, Java, and Perl, and somewhat increased it in LISP and Python.

    Read the article

  • Free solution for automatic updates with .NET/C#?

    - by a2h
    Yes, from searching I can see this has been asked time and time again. Here's the backstory: I'm an individual hobbyist developer with zero budget. A program I've been developing has been in need of constant bugfixes, and both my users and I are getting tired of having to update manually. Me, because my current solution sucks:

        1. Manually FTP to my website
        2. Update a file "newest.txt" with the newest version
        3. Update index.html with a link to the newest version
        4. Hope people see the "there's an update" message
        5. Have them manually download the update

    and whenever I screw up an update, I get pitchforks. Users, because, well: "Are you ever going to implement auto-update?" "Will there ever be an auto-update feature?"

    Over the past few hours I have looked into:

        - http://windowsclient.net/articles/appupdater.aspx - I can't comprehend the documentation.
        - http://www.codeproject.com/KB/vb/Auto_Update_Revisited.aspx - Doesn't appear to support anything other than working with files that aren't in use.
        - http://wyday.com/wyupdate/ - wyBuild isn't free, and the file specification is simply too complex. Maybe if I was under a company paying me I could spend the time, but then I may as well pay for wyBuild.
        - http://www.kineticjump.com/update/default.aspx - Ditto.
        - ClickOnce - Workarounds for implementing launching on startup are massive, horrendous and not worth it for such a simple feature. Publishing is a pain; manual FTP and replacement of all files is required for servers without FrontPage Extensions.

    I'm pretty much ready to throw in the towel right now and strangle myself. And then I think about Sparkle...
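
    For the bare-minimum end of the spectrum, here is a hedged C# sketch that builds on the existing "newest.txt" idea (the URL is a placeholder): compare the running assembly's version against the one published on the site and prompt the user when a newer one appears. It does nothing about downloading or replacing files, so it is a starting point rather than a full auto-updater.

        using System;
        using System.Net;
        using System.Reflection;

        public static class UpdateChecker
        {
            // Placeholder address; point this at the real newest.txt.
            private const string VersionUrl = "http://example.com/myapp/newest.txt";

            public static bool IsUpdateAvailable()
            {
                using (var client = new WebClient())
                {
                    // newest.txt is assumed to contain a version string such as "1.2.3.0".
                    string remoteText = client.DownloadString(VersionUrl).Trim();
                    Version remote = new Version(remoteText);
                    Version local = Assembly.GetExecutingAssembly().GetName().Version;
                    return remote > local;
                }
            }
        }

        // Usage, e.g. at startup:
        // if (UpdateChecker.IsUpdateAvailable())
        //     // show a "new version available" prompt with a download link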

    Read the article

  • Full complete MySQL database replication? Ideas? What do people do?

    - by mauriciopastrana
    Currently I have two Linux servers running MySQL, one sitting on a rack right next to me under a 10 Mbit/s upload pipe (the main server) and another a couple of miles away on a 3 Mbit/s upload pipe (the mirror). I want to be able to replicate data on both servers continuously, but have run into several roadblocks.

    One of them is that, under MySQL master/slave configurations, every now and then some statements drop (!), meaning some people logging on to the mirror URL don't see data that I know is on the main server, and vice versa. Let's say this happens to a meaningful block of data once every month, so I can live with it and assume it's a "lost packet" issue (i.e., god knows, but we'll compensate).

    The other most important (and annoying) recurring issue is that, when for some reason we do a major upload or update (or reboot) on one end and have to sever the link, LOAD DATA FROM MASTER doesn't work and I have to manually dump on one end and upload on the other - quite a task nowadays, moving some 0.5 TB worth of data.

    Is there software for this? I know MySQL (the "corporation") offers this as a VERY expensive service (full database replication). I am just wondering what people out there do. The way it's structured, we run an automatic failover where, if one server is not up, the main URL just resolves to the other server.

    Read the article

  • Scrolling to the bottom of a div on page load: issue with syntaxhighlighter

    - by Rayne
    I've been using this code:

        var objDiv = document.getElementById("code");
        objDiv.scrollTop = objDiv.scrollHeight;

    to scroll to the very bottom of the div. It worked perfectly in FF and Chrome (I asked a question about it not working in Chrome a few days ago, but it appears the guy who was testing it on Chrome was incorrect, so I tested it myself) until I started syntax-highlighting the code that I put in the div with SyntaxHighlighter. Before, I was putting the code in a <p> and breaking lines with <br />, but the <br /> stuff doesn't fly with SyntaxHighlighter, so I replaced all of those with newlines (not entirely certain if this is important, but it's worth mentioning).

    Now, when the page loads, it does scroll, but not all the way down - it scrolls nearly to the bottom. I've tried all the methods listed in the other question I mentioned, but they all do the same thing, or nothing at all. Is there anything else I can try?

    Here is the relevant piece of the generated HTML. Forgive the poor formatting; I'm not writing the HTML by hand, but rather using Hiccup with Clojure, and it doesn't bother with formatting.

        <div class="scroll" id="code"><pre class="brush: clojure">=> (doseq [x (range 1 100)] (println x))
        1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99
        nil
        </pre></div>
        <script type="text/javascript">var objDiv = document.getElementById("code");
        objDiv.scrollTop = objDiv.scrollHeight;</script>

    Read the article

  • Is there a way to use Linq projections with extension methods

    - by Acoustic
    I'm trying to use AutoMapper and a repository pattern along with a fluent interface, and running into difficulty with the LINQ projection. For what it's worth, this code works fine when simply using in-memory objects; when using a database provider, however, it breaks while constructing the query graph. I've tried both SubSonic and LINQ to SQL with the same result. Thanks for your ideas.

    Here's an extension method used in all scenarios - it's the source of the problem, since everything works fine without using extension methods:

        public static IQueryable<MyUser> ByName(this IQueryable<MyUser> users, string firstName)
        {
            return from u in users
                   where u.FirstName == firstName
                   select u;
        }

    Here's the in-memory code that works fine:

        var userlist = new List<User> { new User { FirstName = "Test", LastName = "User" } };
        Mapper.CreateMap<User, MyUser>();

        var result = (from u in userlist
                      select Mapper.Map<User, MyUser>(u))
                     .AsQueryable()
                     .ByName("Test");

        foreach (var x in result)
        {
            Console.WriteLine(x.FirstName);
        }

    And here's the same thing using SubSonic (or LINQ to SQL, or whatever) that fails. This is what I'd like to make work somehow with extension methods...

        Mapper.CreateMap<User, MyUser>();

        var result = from u in new DataClasses1DataContext().Users
                     select Mapper.Map<User, MyUser>(u);

        var final = result.ByName("Test");

        foreach (var x in final) // Fails here when the query graph is built.
        {
            Console.WriteLine(x.FirstName);
        }

    The goal here is to avoid having to manually map the generated "User" object to the "MyUser" domain object - in other words, I'm trying to find a way to use AutoMapper so I don't have this kind of mapping code everywhere a database read operation is needed:

        var result = from u in new DataClasses1DataContext().Users
                     select new MyUser // Can this be avoided with AutoMapper AND extension methods?
                     {
                         FirstName = u.FirstName,
                         LastName = u.LastName
                     };
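
    One workaround worth noting (a sketch of a general LINQ technique, not an AutoMapper feature): keep the User-to-MyUser projection as a single reusable expression tree, so the query provider can still translate it to SQL while the mapping code lives in one place. The names below mirror the question's types.

        using System;
        using System.Linq;
        using System.Linq.Expressions;

        public static class UserMappings
        {
            // Defined once; remains inspectable by LINQ to SQL / SubSonic
            // because it is an expression, not a compiled delegate.
            public static readonly Expression<Func<User, MyUser>> ToMyUser =
                u => new MyUser { FirstName = u.FirstName, LastName = u.LastName };
        }

        // Usage, with the same DataContext and ByName extension as above:
        // var result = new DataClasses1DataContext().Users
        //                  .Select(UserMappings.ToMyUser)
        //                  .ByName("Test");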

    Read the article
