Search Results

Search found 1351 results on 55 pages for 'pain'.

Page 42 of 55

  • Is it practical to program with your feet?

    - by bmm
    Has anyone tried using foot pedals in addition to the traditional keyboard-and-mouse combo to improve your effectiveness in the editor? Any actual experiences out there? Does it work, or is it just for carpal tunnel relief?

    I found one blog entry from a programmer who actually tried it:

        So now I can type using my feet for most of the modifier keys. I am using the pedals as I type this. I am still getting used to them, but the burning in my left wrist has definitely reduced. I think I can also type a little faster, but I am too lazy to do the speed tests with and without the pedals to verify this. On the negative side: Working out where to put your feet when you aren't typing can be a little awkward. The pedals tend to move around the carpet, despite being metal and quite heavy. Some small spikes might have helped. Although the travel on the pedals is small, they are surprisingly stiff.

    Another programmer's experience:

        Anybody with hand pain must get foot pedals, since they can remove a tremendous load from your hands. I have two foot pedals, and use one for the SHIFT key, and the other for the CONTROL key. (I still type META by hand.) I have found that in the process of using the Emacs text editor to compose computer programs, I tend to use the SHIFT, CONTROL and META keys constantly, and it is easy to remove most of this load from one's hands.

    Some foot switch products: Savant Elite Triple Foot Switch, FragPedal, Bilbo, Step On It!

  • Convert function to read from string instead of file in C

    - by Dusty
    I've been tasked with updating a function which currently reads in a configuration file from disk and populates a structure:

        static int LoadFromFile(FILE *Stream, ConfigStructure *cs)
        {
            int tempInt;
            ...
            if ( fscanf( Stream, "Version: %d\n", &tempInt ) != 1 )
            {
                printf("Unable to read version number\n");
                return 0;
            }
            cs->Version = tempInt;
            ...
        }

    to one which allows us to bypass writing the configuration to disk and instead pass it directly in memory, roughly equivalent to this:

        static int LoadFromString(char *Stream, ConfigStructure *cs)

    A few things to note:

    - The current LoadFromFile function is incredibly dense and complex, reading dozens of versions of the config file in a backward-compatible manner, which makes duplication of the overall logic quite a pain.
    - The functions that generate the config file and those that read it originate in totally different parts of the old system and therefore don't share any data structures, so I can't pass those directly. I could potentially write a wrapper, but again, it would need to handle any structure passed in in a backward-compatible manner.
    - I'm tempted to just pass the file contents in as a string (as in the prototype above) and convert all the fscanf calls to sscanf, but then I have to handle incrementing the pointer along (and potentially dealing with buffer overrun errors) manually.
    - This has to remain in C, so no C++ functionality like streams can help here.

    Am I missing a better option? Is there some way to create a FILE * that actually just points to a location in memory instead of on disk? Any pointers, suggestions or other help is greatly appreciated.
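
    A minimal sketch of the in-memory FILE * the question asks about, assuming a POSIX system: fmemopen() (POSIX.1-2008, long available in glibc) wraps an existing buffer in a FILE *, so LoadFromFile could be reused unchanged. It is not ISO C and is not available in MSVC.

        #define _POSIX_C_SOURCE 200809L
        #include <stdio.h>
        #include <string.h>

        int main(void)
        {
            char config[] = "Version: 42\n";

            /* Wrap the buffer in a FILE *; fscanf then reads from memory. */
            FILE *stream = fmemopen(config, strlen(config), "r");
            if (stream == NULL)
                return 1;

            int version;
            if (fscanf(stream, "Version: %d\n", &version) == 1)
                printf("Read version %d\n", version);

            fclose(stream);
            return 0;
        }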

  • Updating permissions on Amazon S3 files that were uploaded via JungleDisk

    - by Simon_Weaver
    I am starting to use Jungle Disk to upload files to an Amazon S3 bucket which corresponds to a CloudFront distribution, i.e. I can access it via an http:// URL and I am using Amazon as a CDN. The problem I am facing is that Jungle Disk doesn't set 'read' permissions on the files, so when I go to the corresponding URL in a browser I get an Amazon 'AccessDenied' error. If I use a tool like BucketExplorer to set the ACL, then that URL returns a 200.

    I really, really like the simplicity of dragging files to a network drive. JungleDisk is the best program I've found to do this reliably without tripping over itself and getting confused. However, it doesn't seem to have an option to make the files readable. I really don't want to have to go to a different tool (especially if I have to buy it) just to change the permissions, and this seems really slow anyway because such tools generally traverse the whole directory structure. JungleDisk provides some kind of 'web access', but this is a paid feature and I'm not sure whether it will work. S3 doesn't appear to propagate permissions down, which is a real pain. I'm considering writing a manual tool to traverse my tree and set everything to 'read', but I'd rather not do this if it's a problem someone else has already solved.
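
    For the manual tool mentioned at the end, a minimal sketch in Python using the modern boto3 SDK (which postdates the question; the bucket name is hypothetical) that walks the bucket and marks every object publicly readable:

        import boto3

        s3 = boto3.client('s3')
        bucket = 'my-cloudfront-bucket'  # hypothetical

        # Page through every key in the bucket and grant anonymous read access.
        paginator = s3.get_paginator('list_objects_v2')
        for page in paginator.paginate(Bucket=bucket):
            for obj in page.get('Contents', []):
                s3.put_object_acl(Bucket=bucket, Key=obj['Key'], ACL='public-read')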

  • Testing system where App-level and Request-level IoC containers exist

    - by Bobby
    My team is in the process of developing a system where we're using Unity as our IoC container, and to provide NHibernate ISessions (units of work) over each HTTP request, we're using Unity's ChildContainer feature to create a child container for each request and sticking the ISession in there. We arrived at this approach after trying others (including defining per-request lifetimes in the container, but there are issues there) and are now trying to decide on a unit testing strategy. Right now, the application-level container itself lives in the HttpApplication, and the request container lives in HttpContext.Current. Obviously, neither exists during testing. The pain increased when we decided to use service location from our domain layer to "lazily" resolve dependencies from the container, so now we have more components wanting to talk to the container. We are also using MSTest, which presents some concurrency dilemmas during testing as well. So we're wondering: what do the bright folks out there in the SO community do to tackle this predicament? How does one set up an application that, during "real" runtime, relies on HTTP objects to hold the containers, but during tests has the flexibility to build up and tear down the containers consistently, and have the service-location bits reach those precise containers? I hope the question is clear, thanks!
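
    One common approach, sketched below under assumed names (none of this is from the post): hide where the containers live behind a small provider abstraction, so production code resolves it to HttpApplication/HttpContext.Current while tests swap in an implementation backed by plain fields.

        using Microsoft.Practices.Unity;

        // The service-location bits depend on this instead of HttpContext.
        public interface IContainerProvider
        {
            IUnityContainer ApplicationContainer { get; }
            IUnityContainer RequestContainer { get; }
        }

        // Test-side implementation: same parent/child shape as production,
        // but held in fields so MSTest can build and tear it down per test.
        public class TestContainerProvider : IContainerProvider
        {
            private readonly IUnityContainer _app = new UnityContainer();
            private IUnityContainer _request;

            public IUnityContainer ApplicationContainer { get { return _app; } }

            public IUnityContainer RequestContainer
            {
                get { return _request ?? (_request = _app.CreateChildContainer()); }
            }

            // Call from [TestCleanup] to mimic end-of-request disposal of the ISession.
            public void EndRequest()
            {
                if (_request != null) { _request.Dispose(); _request = null; }
            }
        }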

  • Porting an IBXpress Interbase 6 app to the current Firebird platform, on Delphi 7?

    - by robsoft
    Just wondering if there are any gotchas to be wary of here. We have a legacy D7 app that we developed several years ago for a client, which uses IBXpress to talk to the open source Interbase 6 build. We're having a number of issues with that platform these days (very slow to connect/start-up on new hardware being the chief one) and the client has okayed spending some time/money moving the database over to Firebird. We really DON'T want to embark upon moving it to D2010 (or D2007 which would be my preference right now) as we figure that we might have to move the database layer from IBXpress to something else to best suit Firebird anyway. And at the end of the day, the client is only looking to lessen the database pain, not overhaul/upgrade/rewrite the app. Given the ancestry of Firebird, is it a fairly painless, well-understood path from IBXpress Interbase 6 to (whatever) with Firebird? We have quite a number of sprocs, triggers (and even datatypes) etc in the existing IB database already (and the client has a number of paying customers all using this platform) so we felt that going to Firebird was more likely to be a smoother move than moving to SQL Express (or another flavour of DB entirely). Note that we're not looking for 'embedded' DB advocacy - in many of our client's customers' installations, the software is used in a multi-user client-server way so keeping that kind of approach is important.

  • Why is Microsoft under-supporting or under-developing VB.NET?

    - by Will Marcouiller
    I ran into a situation where the lack of some features has become somewhat frustrating while developing in VB.NET 2.0. Since my first day of programming I've always been a C programmer, and still am. Naturally, I chose C# as my favorite .NET language. Recently, a customer of mine required that all of his development projects outside SharePoint development be written in VB.NET 2.0, to avoid conflicts with some of his systems. That is a legitimate choice of his which I somewhat approve of, since he's running some old central systems and is slowly migrating toward the latest technologies. As for me, I would have preferred to go with C#, but then, never having done much VB in my life, I see it as an opportunity to learn something new: how to handle this and that in VB.NET, and so on. Except that the syntax is really too verbose for me, which is a pain! Still, I got used to it and that is fine.

    However, I recently wanted to use the InternalsVisibleToAttribute, which I discovered recently here on SO. But then, in addition to not being able to write lambda expressions that return no value (which I discovered months ago), today I learned that I can't use the attribute in VB.NET! Here is what I read in an article:

        [...] Sorry VB.Net developers, Microsoft is again shunning you guys and this attribute is NOT available to you.... :(

    And here is the link: InternalsVisibleTo: Testing internal methods in .Net 2.0

    I heard from Anders Hejlsberg's own mouth, while watching a webcast of his presentation of the .NET 4.0 Framework, that the VB.NET team was working, or has worked, in collaboration with the C# team (Eric Lippert and others) to bring VB.NET to offer the same features C# offers. But then, I say to myself that the VB.NET team has a huge step forward to make, if some of the most important features were already lacking in .NET 2.0! So my question is this: Why is Microsoft under-supporting or under-developing VB.NET? Will VB.NET always lag behind C#'s features?

  • PHP (A few questions) OO, refactoring, eclipse

    - by jax
    I am using PHP in Eclipse. It works OK: I can connect to my remote site, there is colour coding of code elements, and there are some code hints. I realise this may be too long to answer in full; if you have a good answer for one part, answering just that is fine.

    Firstly, general coding. I have found that it is easy to lose track of included files and their variables. For example, if there was a database $cursor, it is difficult to remember, or even know, that it was declared in an included file (this becomes much worse the more files you include). How are people dealing with this? How are people documenting their code, in particular the required GET and POST data?

    Secondly, OO development. Should I be going full OO in my development? Currently I have a functions library which I can include, and I have separated each "task" into a separate file. It is a bit nasty but it works. If I go OO, how do I structure the directories in PHP? Java uses packages; what about PHP? How should I name my files: all lower case with _ for spaces ("hello_world.php")? Should I name classes with uppercase like Java ("HelloWorld.php")? Is there a different naming convention for classes and regular function files?

    Thirdly, refactoring. I must say this is a real pain. If I change the name of a variable in one place, I have to go through the whole document, and each file that included this file, and change the name there too. Of course, errors everywhere are what result. How are people dealing with this problem? In Java, if you change the name in one place it changes everywhere. Are there any plugins to improve PHP refactoring? I am using the official PHP version of Eclipse from their website. Thanks.
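
    On the naming and file-layout question, a minimal sketch (an assumed convention, not from the post; requires PHP 5.3 for the closure) of a PSR-0-style autoloader: class names map directly to file paths, so the naming convention and the directory structure answer each other.

        <?php
        // Map Model_User to Model/User.php: underscores become directory separators.
        spl_autoload_register(function ($class) {
            $path = str_replace('_', DIRECTORY_SEPARATOR, $class) . '.php';
            require $path;
        });

        $user = new Model_User();   // silently loads Model/User.php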

  • PostgreSQL: BYTEA vs OID+Large Object?

    - by mlaverd
    I started an application with Hibernate 3.2 and PostgreSQL 8.4. I have some byte[] fields that were mapped as @Basic (= PG bytea) and others that got mapped as @Lob (= PG Large Object). Why the inconsistency? Because I was a Hibernate noob. Now, those fields are at most 4 KB (the average is 2-3 KB). The PostgreSQL documentation mentions that LOs are good when the fields are big, but I didn't see what 'big' meant. I have upgraded to PostgreSQL 9.0 with Hibernate 3.6, and I was forced to change the annotation to @Type(type="org.hibernate.type.PrimitiveByteArrayBlobType"). This bug brought forward a potential compatibility issue, and I eventually found out that Large Objects are a pain to deal with, compared to a normal field.

    So I am thinking of changing all of it to bytea. But I am concerned that bytea fields are encoded in hex (the default output format in PostgreSQL 9.0), so there is some overhead in encoding and decoding, and this could hurt performance. Are there good benchmarks of the performance of both of these? Has anybody made the switch and seen a difference?
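
    For reference, a sketch of the two mappings side by side (an assumed entity, not from the post); with Hibernate on PostgreSQL, a plain @Basic byte[] maps to a bytea column, while @Lob maps to an oid column referencing a Large Object:

        import javax.persistence.Basic;
        import javax.persistence.Entity;
        import javax.persistence.GeneratedValue;
        import javax.persistence.Id;
        import javax.persistence.Lob;

        @Entity
        public class Attachment {
            @Id
            @GeneratedValue
            private Long id;

            // bytea: stored inline in the row (TOASTed if large), no LO bookkeeping.
            @Basic
            private byte[] smallPayload;

            // oid + Large Object: separate storage, needs vacuumlo/lo_unlink cleanup.
            @Lob
            private byte[] largePayload;
        }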

  • MS Access form_current() firing multiple times

    - by Eric G
    I have a form with two subforms (on separate tab pages); it's an MDB project in Access 2003. When the form initially opens, Form_Current on the active subform fires once, as it should. But when you move to another record (i.e. from the main form), Form_Current on the active subform fires 4 times, and subsequent record moves make it fire twice. This is a pain, because the subforms have a lot of fields that get moved and/or hidden, so everything jumps around on every Form_Current, not to mention that it's slow.

    I am opening the form with a filter via DoCmd.OpenForm (actually it passes the filter in via OpenArgs). FilterOn is only set once, in Form_Open on the main form, never in the subforms. Form_Current is not called explicitly anywhere else in the code. When I look at the call stack as Form_Current fires on the first record move, it looks like:

        my_subform.Form_Current
        [<Debug Window>]
        my_subform.Form_Current

    So it seems like something in Form_Current is triggering another Form_Current event, but only on the first record move. The code in Form_Current is somewhat complex, involving custom classes and event callbacks, but generally does not touch the table data. The only thing I can think might be triggering a Form_Current is that it checks OldValue on form controls - could this be causing it? Or does anything else come to mind? Thanks. Eric
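
    Whatever the trigger turns out to be, a re-entrancy guard is a cheap way to stop the visible jumping; a minimal sketch (the flag name is hypothetical):

        ' Module-level flag in the subform's code module.
        Private mInCurrent As Boolean

        Private Sub Form_Current()
            If mInCurrent Then Exit Sub      ' swallow re-entrant calls our own code triggers
            mInCurrent = True
            On Error GoTo Cleanup

            ' ... existing move/hide field logic here ...

        Cleanup:
            mInCurrent = False
        End Sub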

  • How to invalidate the OutputCache in a webfarm?

    - by Pure.Krome
    Hi folks, I've got a website that uses the OutputCache attribute to cache pages. Works great. Now I'm in the middle of R&D'ing scaling this site up to a web farm. Along with the usual suspects for web-farm pain, I've noticed (pretty quickly/obviously) that the OutputCache on Server_A isn't invalidated when I invalidate the OutputCache on Server_B. This makes total sense: how can S_A 'tell' S_B to invalidate when they are physically two separate machines, etc.?

    So, what are our options? Velocity? I understand this would move the caching to a different layer, which means that the final result (output) would always have to be determined, as opposed to the OutputCache, which remembers the final output content (yes, VaryBy gives different versions, etc., which is totally fine). So even though the POCOs or business objects are all synced, there's still that last rendering effort required (even if it's tiny compared to the effort of generating/syncing the business objects). So yeah, not sure of the options here; what do other people do?
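
    One route worth noting: ASP.NET 4 (new at the time of the question) makes output caching pluggable via OutputCacheProvider, so the rendered output itself can live in a shared store (Velocity/AppFabric, memcached, ...) that every server in the farm reads and invalidates. A sketch with the store calls stubbed out:

        using System;
        using System.Web.Caching;

        public class SharedOutputCacheProvider : OutputCacheProvider
        {
            public override object Get(string key)
            {
                return null;                 // look the entry up in the shared store
            }

            public override object Add(string key, object entry, DateTime utcExpiry)
            {
                Set(key, entry, utcExpiry);  // add-if-absent against the shared store
                return entry;
            }

            public override void Set(string key, object entry, DateTime utcExpiry)
            {
                // insert/overwrite the entry in the shared store
            }

            public override void Remove(string key)
            {
                // delete from the shared store: one call invalidates the whole farm
            }
        }

    The provider is then registered in web.config under <caching><outputCache defaultProvider="..."> with a matching <providers> entry.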

  • UITableView, having problems changing accessory when selected

    - by zpasternack
    I'm using a UITableView to allow selection of one (of many) items. Similar to the UI for selecting a ringtone, I want the selected item to be checked and the others not. I would also like the cell to highlight when touched, then animate back to the normal color (again, like the ringtone-selection UI). A UIViewController subclass is my table's delegate and datasource (not UITableViewController, because I also have a toolbar in there). I'm setting the accessoryType of the cells in cellForRowAtIndexPath:, and updating my model when cells are selected in didSelectRowAtIndexPath:.

    The only way I can think of to set the selected cell to a checkmark (and clear the previous one) is to call [tableView reloadData] in didSelectRowAtIndexPath:. However, when I do this, the animation of the cell deselection is weird (a white box appears where the cell's label should be). If I don't call reloadData, of course, the accessoryType won't change, so the checkmarks won't appear. I suppose I could turn the animation off, but that seems lame. I also toyed with getting and altering the cells in didSelectRowAtIndexPath:, but that's a big pain. Any ideas? Abbreviated code follows...

        - (UITableViewCell *)tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath
        {
            UITableViewCell *aCell = [tableView dequeueReusableCellWithIdentifier:kImageCell];
            if (aCell == nil) {
                aCell = [[UITableViewCell alloc] initWithFrame:CGRectZero reuseIdentifier:kImageCell];
            }
            aCell.text = [imageNames objectAtIndex:[indexPath row]];
            if ([indexPath row] == selectedImage) {
                aCell.accessoryType = UITableViewCellAccessoryCheckmark;
            } else {
                aCell.accessoryType = UITableViewCellAccessoryNone;
            }
            return aCell;
        }

        - (void)tableView:(UITableView *)tableView didSelectRowAtIndexPath:(NSIndexPath *)indexPath
        {
            [tableView deselectRowAtIndexPath:indexPath animated:YES];
            selectedImage = [indexPath row];
            [tableView reloadData];
        }
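
    The "getting and altering the cells" route the poster calls a big pain is actually short; a sketch (an assumed fix, with section 0 assumed) that touches only the two affected cells and leaves the deselection animation alone:

        - (void)tableView:(UITableView *)tableView didSelectRowAtIndexPath:(NSIndexPath *)indexPath
        {
            // Uncheck the previously selected row (a nil cell if it's off-screen is harmless;
            // cellForRowAtIndexPath: below will render it correctly when it scrolls back in).
            NSIndexPath *oldPath = [NSIndexPath indexPathForRow:selectedImage inSection:0];
            [tableView cellForRowAtIndexPath:oldPath].accessoryType = UITableViewCellAccessoryNone;

            // Update the model, then check the newly selected row in place.
            selectedImage = [indexPath row];
            [tableView cellForRowAtIndexPath:indexPath].accessoryType = UITableViewCellAccessoryCheckmark;

            [tableView deselectRowAtIndexPath:indexPath animated:YES];
        }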

  • Using multiple sockets, is non-blocking or blocking with select better?

    - by JPhi1618
    Let's say I have a server program that can accept connections from 10 (or more) different clients. The clients send data at random, which is received by the server, but it is certain that at least one client will be sending data every update. The server cannot wait for information to arrive, because it has other processing to do. Aside from using asynchronous sockets, I see two options:

    1. Make all sockets non-blocking. In a loop, call recv() on each socket and allow it to fail with WSAEWOULDBLOCK if there is no data available; if I happen to get some data, keep it.
    2. Leave the sockets as blocking. Add all sockets to an fd_set and call select(). If the return value is non-zero (which it will be most of the time), loop through all the sockets to find the appropriate number of readable sockets with FD_ISSET() and only call recv() on the readable ones.

    The first option creates a lot more calls to the recv() function. The second is a bigger pain from a programming perspective, because of all the FD_SET and FD_ISSET looping. Which method (or another method) is preferred? Is avoiding the overhead of letting recv() fail on a non-blocking socket worth the hassle of calling select()? I think I understand both methods and I have tried both with success, but I don't know if one way is considered better or optimal. Only knowledgeable replies please!
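
    For concreteness, a minimal sketch of option 2 with a zero timeout, so select() itself never blocks and the server can get back to its other work (POSIX names; the WSA* constants in the question suggest Winsock, which differs only slightly):

        #include <stdio.h>
        #include <sys/select.h>
        #include <sys/socket.h>

        /* Poll a set of blocking sockets once; recv() is only called where
           select() has reported data, so it can never block. */
        void poll_sockets(int socks[], int nsocks, char *buf, size_t buflen)
        {
            fd_set readable;
            int i, maxfd = -1;

            FD_ZERO(&readable);
            for (i = 0; i < nsocks; i++) {
                FD_SET(socks[i], &readable);
                if (socks[i] > maxfd)
                    maxfd = socks[i];
            }

            struct timeval tv = { 0, 0 };   /* zero timeout: pure poll, no waiting */
            if (select(maxfd + 1, &readable, NULL, NULL, &tv) > 0) {
                for (i = 0; i < nsocks; i++) {
                    if (FD_ISSET(socks[i], &readable))
                        recv(socks[i], buf, buflen, 0);
                }
            }
        }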

  • Control Oracle Forms with outside program

    - by user256893
    I work at a company that uses the Forms-based Oracle 11i. A lot of employees complain about the redundancy of data entry, and I want to write a program that will ease some of that pain, since all attempts to ask IT to do it have failed. The problem is that, since Oracle Forms are Java-based, there are no "controls" as there would be in, say, a Windows application or an HTML-based form. Does anyone know of a way to tell the PC to (for example) click edit field 3 on the RMA creation form and then enter the data? The only way I can programmatically navigate Oracle is with hotkeys, and that's very unreliable. I'm not concerned about the language, or about learning a new application to resolve this issue. I currently know (at elementary to intermediate level) Java and VB.NET, and will be taking C++ in school. Is there a tool, bridge, or element spy of some sort that will allow me to send commands to elements on the forms?

    Edit: as APC points out, Oracle Forms over the web runs as a Java applet. I mention this because it may be relevant to your responses.

  • Architecture Guidance Needed?

    - by vijay
    We are about to automate a number of processes for our reporting team (daily reports, weekly reports, monthly reports, and so on). Mostly the process is pulling some data from Oracle and then filling in particular Excel template files. Each report, and therefore each template, is different from the others. Apart from the Excel file manipulation, there is hardly any business logic behind these. The client wants an integrated tool, with all the automated processes reachable as menus/submenus. Right now there are roughly 30 processes waiting to be automated, and we are expecting more new reports in the next quarter.

    I am nowhere near having any practical experience when it comes to architecture. I have been maintaining two or three systems (more than 4 years old) for this prestigious client, and the tool described above will very likely have to be maintained for another 3 years. From past experience I've been through the pain of implementing change requests against a rigid, undocumented code base, resulting in the breakdown of the system, and eventually of myself. So my main, topmost concern is maintainability. While searching I came across this link: Smart Clients Using CAB and SCSF. Is that appropriate for my requirement? Also, should I place each automated process in a separate form under a single project, or in separate projects under a single solution? Please tell me if I have missed any other important information. Thanks.

  • Using Excel as front end to Access database (with VBA)

    - by Alex
    I am building a small application for a friend, and they'd like to be able to use Excel as the front end (the UI will basically be userforms in Excel). They have a bunch of data in Excel that they would like to be able to query, but I do not want to use Excel as a database, as I don't think it is fit for that purpose, so I am considering using Access. [BTW, I know Access has its shortcomings, but there is zero budget available and Access is already on my friend's PC.]

    To summarise: I am considering dumping a bunch of data into Access and then using Excel as a front end to query the database and display results in a userform-style environment. Questions:

    - How easy is it to link to Access from Excel using ADO/DAO? Is it quite limited in terms of functionality, or can I get creative?
    - Do I pay a performance penalty (vs. using forms in Access as the UI)?
    - Assuming that the database will always be updated using ADO/DAO commands from within Excel VBA, does that mean I can have multiple Excel users sharing that one Access database without running into concurrency issues, etc.?
    - Any other things I should be aware of?

    I have strong Excel VBA skills and think I can pick up Access VBA quite quickly, but I have never really done the Excel/Access link before. I could shoehorn the data into Excel and use it as a quasi-database, but that just seems more pain than it is worth (and not a robust long-term solution). Any advice appreciated. Alex
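
    On the first question, the plumbing is small; a minimal sketch (path and table name hypothetical) of querying an Access .mdb from Excel VBA with late-bound ADO, so no type-library reference needs to be set:

        Sub QueryAccess()
            Dim cn As Object, rs As Object

            ' Late-bound ADO connection to the Access file via the Jet provider.
            Set cn = CreateObject("ADODB.Connection")
            cn.Open "Provider=Microsoft.Jet.OLEDB.4.0;" & _
                    "Data Source=C:\Data\friend.mdb"

            Set rs = CreateObject("ADODB.Recordset")
            rs.Open "SELECT * FROM Contacts", cn

            ' Dump the whole result set onto the active sheet, starting at A1.
            ActiveSheet.Range("A1").CopyFromRecordset rs

            rs.Close
            cn.Close
        End Sub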

  • MVVM: Thin ViewModels and Rich Models

    - by Dan Bryant
    I'm continuing to struggle with the MVVM pattern and, in attempting to create a practical design for a small/medium project, have run into a number of challenges. One of these challenges is figuring out how to get the benefits of decoupling with this pattern without creating a lot of repetitive, hard-to-maintain code. My current strategy has been to create 'rich' Model classes. They are fully aware that they will be consumed by an MVVM pattern and implement INotifyPropertyChanged, allow their collections to be observed and remain cognizant that they may always be under observation. My ViewModel classes tend to be thin, only exposing properties when data actually needs to be transformed, with the bulk of their code being RelayCommand handlers. Views happily bind to either ViewModels or Models directly, depending on whether any data transformation is required. I use AOP (via Postsharp) to ease the pain of INotifyPropertyChanged, making it easy to make all of my Model classes 'rich' in this way. Are there significant disadvantages to using this approach? Can I assume that the ViewModel and View are so tightly coupled that if I need new data transformation for the View, I can simply add it to the ViewModel as needed?
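
    For readers unfamiliar with the boilerplate being generated here, a minimal hand-written sketch (a hypothetical model class) of a 'rich' model in the sense described: the model itself raises change notifications, so views can bind to it directly and the ViewModel stays thin.

        using System.ComponentModel;

        public class Customer : INotifyPropertyChanged
        {
            private string _name;

            public string Name
            {
                get { return _name; }
                set
                {
                    if (_name == value) return;
                    _name = value;
                    OnPropertyChanged("Name");
                }
            }

            public event PropertyChangedEventHandler PropertyChanged;

            protected void OnPropertyChanged(string propertyName)
            {
                var handler = PropertyChanged;   // copy to guard against unsubscription races
                if (handler != null)
                    handler(this, new PropertyChangedEventArgs(propertyName));
            }
        }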

  • Show me your Linq to SQL architectures!

    - by Brad Heller
    I've been using LINQ to SQL for a new implementation that I've been working on. I have about 5,000 lines of code and am a little way from a solid demo. I've been pretty satisfied with LINQ to SQL so far: the tools are excellent and pretty painless, and it allows you to get a DAL up and running quickly. That said, there are some major drawbacks that I just keep hitting over and over again, namely how to handle separation of concerns between my DAL and my business layer, and juggling that with different data contexts. Here is the architecture I've been using:

    My repositories do all my data access, and they return LINQ to SQL objects. Each of my LINQ to SQL objects implements an IDetachable interface. A typical implementation looks like this:

        partial class PaymentDetail : IDetachable
        {
            #region IDetachable Members

            public bool IsAttached
            {
                get { return PropertyChanging != null; }
            }

            public void Detach()
            {
                if (IsAttached)
                {
                    PropertyChanged = null;
                    PropertyChanging = null;
                    Transaction.Detach();
                }
            }

            #endregion
        }

    Every time I do a DAL operation in my repository I "detach" when I'm done with the object (and it should theoretically detach from any child objects), to remove the DataContext's context. Like I said, this works pretty well, but there are some edge cases that seem to be a big pain in the ass. For instance, my Transaction object has many PaymentDetails. Even when there are no PaymentDetails in that collection, it's still attached to the DataContext's context! Thus, if I try to update (I update by Attach()ing the object and then calling SubmitChanges()), I get that dreaded "An attempt has been made to Attach or Add an entity that is not new, perhaps having been loaded from another DataContext. This is not supported." message.

    Anyway, I'm starting to doubt that this technology was a good gamble. Has anyone got a decent architecture that they're willing to share? I'd really love to use this technology, but I feel like I spend a third of my time just debugging its quirks!

  • R: disentangling scopes

    - by rescdsk
    Hi, right now, in my R project, I have functions1.R with doFoo() and doBar(), functions2.R with other functions, and main.R with the main program in it, which first does source('functions1.R'); source('functions2.R'), and then calls the other functions. I've been starting the program from the R GUI in Mac OS X, with source('main.R'). This is fine the first time, but after that, the variables defined by the first run still exist when functions*.R are sourced again, so the functions pick up a whole bunch of extra definitions. I don't want that! I want an "undefined variable" error when my function uses a variable it shouldn't! Twice this has given me very late nights of debugging!

    So how do other people deal with this sort of problem? Is there something like source(), but that makes an independent namespace that doesn't fall through to the main one? Making a package seems like one solution, but it seems like a big pain in the butt compared to, e.g., Python, where a source file is automatically a separate namespace. Any tips? Thank you!
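
    Short of a package, one sketch of the isolation being asked for (standard R; file and function names from the post): source the helper files into their own environment whose parent is baseenv(), so the functions neither see nor pollute the global workspace, and a stray global fails fast. The strict baseenv() parent also hides attached packages, so relax it if the helpers need them.

        # Create an environment that deliberately cannot see the global workspace.
        helpers <- new.env(parent = baseenv())

        sys.source('functions1.R', envir = helpers)
        sys.source('functions2.R', envir = helpers)

        # Call the helpers through the environment; a global variable referenced
        # inside doFoo() now raises "object not found" instead of silently working.
        helpers$doFoo()
        helpers$doBar()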

  • Updating a DLL in a Production ASP.NET Web Site bin folder

    - by Josh Stodola
    I want to update a class library (a single DLL file) in a production web application. This web app is pre-compiled (published). I read an answer on StackOverflow (sorry, can't seem to find it anymore because the Search function does not work very well), that led me to believe that I could just paste the new DLL in the bin folder and it would be picked up without problems (this would cause the WP to recycle, which is fine with me because we do not use InProc session state). However, when I tried this, my site blows up and gives a FileLoadException saying that the assembly manifest definition does not match the assembly reference. What in the world is this?! Updating the DLL in Visual Studio and re-deploying the entire site works just fine, but it is a huge pain in the rear. What is the point of having a separate DLL if you have to re-deploy the entire site to implement any changes? Here's the question: How can I update a DLL on a production web site without breaking the app and without re-deploying all of the files?
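
    That exception usually means the assembly's identity changed: the precompiled site was built against one version number and the bin folder now holds another. One assumption-laden sketch of a fix that avoids redeploying is to keep the library's AssemblyVersion fixed, or to redirect the old version to the new one in web.config (assembly name, public key token, and versions below are all hypothetical; binding redirects only apply to strong-named assemblies):

        <configuration>
          <runtime>
            <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
              <dependentAssembly>
                <!-- Point references to 1.0.0.0 at the newly dropped 1.1.0.0 -->
                <assemblyIdentity name="MyClassLibrary"
                                  publicKeyToken="32ab4ba45e0a69a1"
                                  culture="neutral" />
                <bindingRedirect oldVersion="1.0.0.0" newVersion="1.1.0.0" />
              </dependentAssembly>
            </assemblyBinding>
          </runtime>
        </configuration>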

  • What's the best way to transition to MVC coding?

    - by ggfan
    It's been around 5 months since I picked up a PHP book and started coding in PHP. At first I created all my sites without any organizational plan or MVC; I soon found out that was a pain. Then I started reading on Stack Overflow about how to separate PHP and HTML, and that's what I have been doing ever since. For example:

    - profile.php - this file is HTML and CSS; I just echo the functions' output here.
    - profile_functions.php - this file is mostly PHP and has the functions.

    This is how I have been separating all my code so far, and now I feel I should move on and start MVC. But the problem is, I have never used classes before and am bad with them. And since MVC frameworks (such as CakePHP and CodeIgniter) are all classes, that can't be good.

    My question: are there any good books/sites/articles that teach you how to code in MVC? I am looking for beginner-level books :) I just started reading the CodeIgniter manual and I think I am going to use that. When I looked at the example MVC code, it used a different style of PHP. When I start coding in MVC, will I have to learn a "new" way to code? Because right now I am coding in pure basic PHP.
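
    To make the jump less abstract, a minimal sketch (all names hypothetical, input validation omitted) of the poster's current two-file split recast as MVC: the functions file becomes a model class, the HTML page becomes a view, and a small controller glues them together.

        <?php
        // ProfileModel.php - the model: data access only, no HTML.
        class ProfileModel
        {
            public function getProfile($id)
            {
                // A real app would query the database here.
                return array('id' => $id, 'name' => 'Alice');
            }
        }

        // profile.php - the controller: fetch data, hand it to the view.
        require 'ProfileModel.php';
        $model   = new ProfileModel();
        $profile = $model->getProfile($_GET['id']);
        include 'profile_view.php';   // the view echoes HTML using $profile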

  • Hadoop Map/Reduce - simple use example to do the following...

    - by alexeypro
    I have a MySQL database where I store the following: a BLOB (which contains a JSON object) and an ID (for this JSON object). The JSON object contains a lot of different information, say, "city: Los Angeles" and "state: California". There are about 500k such records for now, but they are growing. And each JSON object is quite big. My goal is to do searches (real-time) in the MySQL database. Say, I want to search for all JSON objects which have "state" set to "California" and "city" set to "San Francisco".

    I want to utilize Hadoop for the task. My idea is that there will be a "job" which takes chunks of, say, 100 records (rows) from MySQL, verifies them according to the given search criteria, and returns those (IDs) which qualify. Pros/cons? I understand that one might think I should utilize plain SQL power for that, but the thing is that the JSON object structure is pretty "heavy". If I laid it out as SQL schemas, there would be at least 3-5 table joins, which (I tried, really) creates quite a headache, and building all the right indexes eats RAM faster than one can think. ;-) And even then, every SQL query has to be analyzed to make sure it uses the indexes; otherwise, with a full scan, it literally is a pain. And with such a structure, the only way "up" is vertical scaling. But I am not sure that's the best option for me, as I see how the JSON objects (the data structure) will grow, and I see that the number of them will grow too. :-) Help? Can somebody point me to simple examples of how this can be done? Does it make sense at all? Am I missing something important? Thank you.
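
    The filtering step described is map-only; a sketch of what the mapper could look like (new-API Hadoop, with the MySQL-to-key/value plumbing left out and real JSON parsing stubbed; the naive contains() check is for illustration only):

        import java.io.IOException;
        import org.apache.hadoop.io.Text;
        import org.apache.hadoop.mapreduce.Mapper;

        // Each map() call receives one record's id and its JSON blob, and emits
        // the id only when the (hard-coded, illustrative) criteria match.
        public class JsonFilterMapper extends Mapper<Text, Text, Text, Text> {

            @Override
            protected void map(Text id, Text json, Context context)
                    throws IOException, InterruptedException {
                String record = json.toString();
                if (record.contains("\"state\":\"California\"")
                        && record.contains("\"city\":\"San Francisco\"")) {
                    context.write(id, new Text("match"));
                }
            }
        }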

  • Java: attribute order in .jsp getting reversed

    - by NoozNooz42
    Every single time I've read about meta tags, the attributes were in this order for the description: <meta name="description" content="..." /> - first name, then content. It's also like that in the Google Webmaster documentation. Basically, it's like that everywhere. Now in a .jsp (in XML notation) I've got the following: <meta name="description" content="${metadesc}"/>. So it's name first, then content. Yet on the generated webpage, I get this: <meta content="...(200 chars or so here making it a very long line)..." name="description"/>.

    Somehow the attributes have been reversed. Because the content follows the official W3C and Google recommendations, it is a bit less than 200 characters long, which makes it a major pain to "visually verify" that the name attribute is correctly there (I've got to scroll). Anyway... why are these attributes not appearing in the order defined in the .jsp? Can I force them to appear in the same order as I wrote them in my .jsp? I realize the resulting tag may be valid... but I can also imagine a lot of very creative ways to have valid tags that users would be very upset about. Does it make any sense to reverse these attributes?

    EDIT: wow, just wow... If I invert the attributes in my .jsp (that is, write them in the "wrong" order), then they appear as I want them to on the generated web page. (Tomcat 6.0.26, btw.)

  • VS2010: "Select Resource" dialog & resx location

    - by Dav
    Got two issues with the VS2010 / VS2008 Select Resource dialog - the one that appears when you want to add an image to a button in a WinForms app, for example.

    Give me my files back! It only seems to see the default project resources file (Properties\Resources.resx) and resx files in the project root (say MyProject\famfamfam.resx). We have quite a few icons all over the app, and because some of them come from different icon sets (like famfamfam) and some are related to this project only, we'd like to keep them separate. For that same reason (keeping the solution neat and tidy), we want to store these extra resource files in the Resources folder (e.g. Resources\famfamfam.resx). However, we'd also like to keep using the Select Resource dialog :-) Because it does not see the 'extra' resource files, we're having to select a 'fake' icon now (from the global Resources.resx file) and then manually change it to reference the right icon in .Designer.cs. As you can imagine, this is a pain.

    Stop modifying my files! The second issue is a bit more annoying. We use the excellent MultiLang add-in for Visual Studio to globalize our app. It stores its translations in MultiLang.resx and MultiLang.XY.resx files in the project root, where XY is a language code, e.g. .cs.resx for Czech. These have to be set to the No code generation access modifier. What Select Resource seems to be doing is setting all .resx files it can find to Internal.

    Exec summary:
    1. Is there a way to convince the Select Resource dialog to look for extra .resx files anywhere besides the project root?
    2. Is there any way to stop it from modifying the access modifier of the resources it does see (other than filing a bug with MS)?

    Thanks in advance for any suggestions!

  • Dependency Injection and decoupling of software layers

    - by cs31415
    I am trying to implement Dependency Injection to make my app tester-friendly. I have a rather basic doubt. The data layer uses a SqlConnection object to connect to a SQL Server database, so the SqlConnection object is a dependency of the data access layer. In accordance with the laws of dependency injection, we must not new() dependent objects, but rather accept them through constructor arguments. Not wanting to upset the DI gods, I dutifully create a constructor in my DAL that takes in a SqlConnection. The business layer calls the DAL; the business layer must therefore pass in the SqlConnection. The presentation layer calls the business layer; hence it, too, must pass the SqlConnection to the business layer.

    This is great for class isolation and testability, but didn't we just couple the UI and business layers to a specific implementation of the data layer, one which happens to use a relational database? Why do the presentation and business layers need to know that the underlying data store is SQL? What if the app needs to support data sources other than SQL Server (such as XML files, comma-delimited files, etc.)? Furthermore, what if I add another object that my data layer depends on (say, a second database)? Now I have to modify the upper layers to pass in this new object. How can I avoid this merry-go-round and reap all the benefits of DI without the pain?
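
    The usual resolution, sketched here with hypothetical names: the upper layers depend on an abstraction of the data layer, not on SqlConnection, and only the composition root (where the object graph is wired up) knows which concrete implementation, and therefore which connection, is in play.

        using System.Data.SqlClient;

        public class Order { public int Id { get; set; } }

        // What the business layer sees: no hint of SQL anywhere.
        public interface IOrderRepository
        {
            Order GetOrder(int id);
        }

        // One interchangeable implementation; an XmlOrderRepository could be another.
        public class SqlOrderRepository : IOrderRepository
        {
            private readonly string _connectionString;

            public SqlOrderRepository(string connectionString)
            {
                _connectionString = connectionString;
            }

            public Order GetOrder(int id)
            {
                using (var conn = new SqlConnection(_connectionString))
                {
                    conn.Open();
                    // ... query and map the row to an Order ...
                    return new Order { Id = id };
                }
            }
        }

        // Business layer: constructor-injected abstraction, no SqlConnection in sight.
        public class OrderService
        {
            private readonly IOrderRepository _repository;

            public OrderService(IOrderRepository repository)
            {
                _repository = repository;
            }

            public Order Load(int id) { return _repository.GetOrder(id); }
        }

    The connection string then travels no higher than the composition root: new OrderService(new SqlOrderRepository(connStr)).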

  • Replace System.Net.Mail.MailMessage with manually created message and send it

    - by DEH
    I am trying to send emails that will bounce to a known mailbox; I plan to use VERP. Unfortunately, the System.Net.Mail.MailMessage object does not allow me to precisely set the From: and Sender: headers within my email - it forces the values so that the resulting email contains the phrase 'on behalf of', and does not allow me fine control over the relevant MIME headers. I therefore plan to manually write MIME email messages directly to the pickup directory, so that I can control the From and Sender headers independently.

    My dev box is a Vista machine and therefore does not have an SMTP server. I would like to configure the box so that an SMTP server runs on it. I can then turn off the SMTP server, write messages to the pickup directory, then turn the SMTP server back on and see how the individual emails I have written behave (some delivered, some bounced to a bounce handler on a different email domain, as dictated by the Sender).

    Two questions:
    1. Can anyone recommend an SMTP server that will monitor a pickup directory?
    2. If I set the headers as From:[email protected]; Sender:[email protected], then the recipient will see the email as having come from [email protected] (and won't see any reference to [email protected]), but if the mail bounces, the NDR will be sent to [email protected]?

    It's a real pain to have to do this, but I can't see any way of using System.Net.Mail.MailMessage without it messing up my headers.
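
    A minimal sketch of the hand-written pickup approach (directory, addresses, and headers are all hypothetical; once the file is dropped, whether From: and Sender: survive untouched is entirely up to the SMTP server, not System.Net.Mail):

        using System;
        using System.IO;

        class PickupWriter
        {
            static void Main()
            {
                // IIS SMTP's conventional pickup folder; adjust for your server.
                string pickupDir = @"C:\inetpub\mailroot\Pickup";
                string path = Path.Combine(pickupDir, Guid.NewGuid() + ".eml");

                // Headers are written verbatim: From and Sender stay independent,
                // so a VERP-style bounce address can go in Sender.
                string message =
                    "From: Display Name <display@example.com>\r\n" +
                    "Sender: bounce-12345@bounces.example.com\r\n" +
                    "To: recipient@example.org\r\n" +
                    "Subject: VERP test\r\n" +
                    "MIME-Version: 1.0\r\n" +
                    "Content-Type: text/plain; charset=us-ascii\r\n" +
                    "\r\n" +
                    "Body text.\r\n";

                File.WriteAllText(path, message);
            }
        }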
