Search Results

Search found 10115 results on 405 pages for 'coding practices'.


  • Benefits of migrating my work to a new web development framework?

    - by John
    When I first started programming with PHP, I was ignorant of other PHP frameworks (like CodeIgniter, CakePHP, etc.), so I fell into the trap of re-inventing wheels, which had the benefit of being "fun" and "educational". Over time, I discovered other open source products that I found useful, like the Smarty templating engine, the jQuery library, the TCPDF library, fdf, etc., so I started bundling these technologies, along with things I've built over the years, into a LAMP development framework to make life easier for myself. This past year, I've been having fun developing on the CodeIgniter framework. It does many of the things I do in my framework, and coding in CI feels natural because its MVC and ORM feel similar to the MVC and ORM of my framework. So now I'm contemplating migrating a lot of the plugins in my framework over to CI. The pros and cons I can think of for such a project are:

    Pros:
      - benefit from the vast community of CI developers
      - lots of other developers will be familiar with it
      - better documentation

    Cons:
      - I've built a lot of useful plugins against my own framework, and it will take a lot of time to move even just the essential ones
      - at the moment, I still work faster against my own framework than against CI, just because I'm more familiar with it
      - even if I did migrate to CI, there will always be newer and better frameworks in the near future, and I'll be contemplating this scenario again

    So my question is the following: perhaps I should leave my old framework as is, and for each new project I receive, decide whether the requirements are best served by developing with CI or with my own framework. Is this the right approach?

    Read the article

  • Is a "factory" method the right pattern?

    - by jdt141
    Hey all - So I'm working to improve an existing implementation. I have a number of polymorphic classes that are all composed into a higher level container class. The problem I'm dealing with at the moment is that the higher level container class, well, sucks. It looks something like this, which I really don't have a problem with (as the polymorphic classes in the container should be public). My real issue is the constructor...

        /*
         * class1 and class2 derive from the same superclass
         */
        class Container {
        public:
            boost::shared_ptr<ComposedClass1> class1;
            boost::shared_ptr<ComposedClass2> class2;
        private:
            ...
        };

        /*
         * Constructor - builds the objects that we need in this container.
         */
        Container::Container(some params)
        {
            class1.reset(new ComposedClass1(...));
            class2.reset(new ComposedClass2(...));
        }

    What I really need is to make this container class more re-usable. By hard-coding the member objects and instantiating them, it basically isn't, and can only be used once. A factory is one way to build what I need (potentially by supplying a list of objects and their specific types to be created?). What other ways are there to get around this problem? Seems like someone should have solved it before... Thanks!
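
    A rough sketch of the usual way out of this - let the caller (or a small factory object) supply the composed parts instead of new-ing them inside the constructor. Java is used here purely for illustration and the names only loosely mirror the question; they are not the poster's real types.

        // Illustrative only: dependency injection plus a tiny factory.
        interface Component { }                          // stand-in for the common superclass
        class ComposedClass1 implements Component { }
        class ComposedClass2 implements Component { }

        class Container {
            final Component class1;
            final Component class2;

            Container(Component c1, Component c2) {      // injected, not constructed inside
                this.class1 = c1;
                this.class2 = c2;
            }
        }

        interface ContainerFactory {
            Container create();
        }

        class DefaultContainerFactory implements ContainerFactory {
            public Container create() {
                return new Container(new ComposedClass1(), new ComposedClass2());
            }
        }

    The container itself stays reusable; only the factory knows which concrete classes to build.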

    Read the article

  • Non-wiki CMS for an online user guide

    - by Russell Leggett
    For a large web application I'm building, I need to create an extensive user guide. The first thought was a wiki, but the wikis I've seen lack the ease of customization of a CMS and have a lot of extra features I don't need. The number of users editing the document is small and closed, but it needs to be editable by non-technical users. The number of pages will likely be between 50-100. It also needs to be searchable. It would also be a plus if it had nice readable URLs to link to from our web app. Right now, my best guess is WordPress, but that seems a lot more geared towards blogging with a handful of static pages than towards many pages and possibly no blog at all. There isn't a language requirement, although we have the most experience with Java and PHP. We aren't looking to do any major coding other than customizing for visuals, so hopefully the language will not be too important. Again, I'm not looking for the best general purpose CMS, just something that would be easiest for a user guide.

    Read the article

  • Optimizing MySQL for ALTER TABLE of InnoDB

    - by schuilr
    Sometime soon we will need to make schema changes to our production database. We need to minimize downtime for this effort; however, the ALTER TABLE statements are going to run for quite a while. Our largest tables have 150 million records, and the largest table file is 50 GB. All tables are InnoDB, and it was set up as one big data file (instead of file-per-table). We're running MySQL 5.0.46 on an 8-core machine with 16 GB of memory and a RAID 10 config. I have some experience with MySQL tuning, but it usually focuses on reads or writes from multiple clients. There is lots of information to be found on the Internet on that subject; however, there seems to be very little available on best practices for (temporarily) tuning your MySQL server to speed up ALTER TABLE on InnoDB tables, or for INSERT INTO .. SELECT FROM (we will probably use the latter instead of ALTER TABLE to have some more opportunities to speed things up a bit). The schema change we are planning is adding an integer column to all tables and making it the primary key in place of the current primary key. We need to keep the 'old' column as well, so overwriting the existing values is not an option. What would be the ideal settings to get this task done as quickly as possible?

    Read the article

  • Gracefully handling screen orientation change during activity start

    - by Steve H
    I'm trying to find a way to properly handle setting up an activity where its orientation is determined from data in the intent that launched it. This is for a game where the user can choose levels, some of which are in portrait orientation and some in landscape orientation. The problem I'm facing is that setRequestedOrientation(ActivityInfo.SCREEN_ORIENTATION_LANDSCAPE) doesn't take effect until the activity is fully loaded. This is a problem for me because I do some loading and image processing during startup, which I'd like to only have to do once. Currently, if the user chose a landscape level:

      - the activity starts onCreate(), defaulting to portrait
      - it discovers from analysing its launching Intent that it should be in landscape orientation
      - it continues regardless all the way to onResume(), loading information and performing other setup tasks
      - at this point setRequestedOrientation kicks in, so the application runs through onPause() to onDestroy()
      - it then starts up again from onCreate() and runs to onResume(), repeating the setup from earlier

    Is there a way to avoid that and not perform the loading twice? For example, ideally, the activity would know before even onCreate was called whether it should be landscape or portrait, depending on some property of the launching intent, but unless I've missed something that isn't possible. I've managed to hack together a way to avoid repeating the loading by checking a boolean before the time-consuming loading steps, but that doesn't seem like the right way of doing it. I imagine I could override onSaveInstanceState, but that would require a lot of additional coding. Is there a simple way to do this? Thanks!
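
    One hedged way around the double setup, sketched below: read the orientation from the Intent at the very top of onCreate() and return before any expensive loading if a restart is about to happen. The "EXTRA_LANDSCAPE" extra and loadLevelData() are invented names, not taken from the post.

        import android.app.Activity;
        import android.content.pm.ActivityInfo;
        import android.content.res.Configuration;
        import android.os.Bundle;

        public class LevelActivity extends Activity {
            @Override
            protected void onCreate(Bundle savedInstanceState) {
                super.onCreate(savedInstanceState);

                boolean wantLandscape =
                        getIntent().getBooleanExtra("EXTRA_LANDSCAPE", false);
                boolean inLandscape = getResources().getConfiguration().orientation
                        == Configuration.ORIENTATION_LANDSCAPE;

                if (wantLandscape != inLandscape) {
                    // This triggers a restart in the requested orientation,
                    // so skip the heavy setup on this pass instead of doing it twice.
                    setRequestedOrientation(wantLandscape
                            ? ActivityInfo.SCREEN_ORIENTATION_LANDSCAPE
                            : ActivityInfo.SCREEN_ORIENTATION_PORTRAIT);
                    return;
                }

                loadLevelData();   // the expensive one-time work, placeholder
            }

            private void loadLevelData() { /* ... */ }
        }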

    Read the article

  • How to prevent the LINQ to SQL designer from undoing my changes

    - by anonim.developer
    Dear All, thanks for your attention in advance. I've met an issue with the LINQ to SQL designer in VS 2008 SP1 which has made me CRAZY. I use LINQ to SQL as my DAL. It seems LINQ to SQL speeds up coding at first, but lots of issues arise later, specifically with table or object inheritance. In this case I have a class Entity that all other entity classes generated by the LINQ to SQL designer inherit from.

        public abstract class Entity
        {
            public virtual Guid ID { get; protected set; }
        }

        public partial class User : monius.Data.Entity
        {
        }

    And the following is generated by the L2S designer (DataModel.designer.cs):

        [Column(Storage = "_ID", AutoSync = AutoSync.OnInsert,
                DbType = "UniqueIdentifier NOT NULL", IsPrimaryKey = true,
                IsDbGenerated = true, UpdateCheck = UpdateCheck.Never)]
        [DataMember(Order = 1)]
        public System.Guid ID
        {
            get { return this._ID; }
            set
            {
                if ((this._ID != value))
                {
                    this.OnIDChanging(value);
                    this.SendPropertyChanging();
                    this._ID = value;
                    this.SendPropertyChanged("ID");
                    this.OnIDChanged();
                }
            }
        }

    When I compile the code VS warns me:

        Warning 1  'User.ID' hides inherited member 'Entity.ID'. To make the current member
        override that implementation, add the override keyword. Otherwise add the new keyword.

    That warning is obvious, and I have to change the code generated by the L2S designer (DataModel.designer.cs) to

        [...]
        public override System.Guid ID { ... protected set ... }

    and then the code compiles with no error or warning and everyone is happy. But that is not the end of the story. As soon as I make changes to entities on the diagram (dbml), or even just open the dbml file to view it, every change I made to the designer file by hand vanishes and POOF! Redo AGAIN. That is a painful job. Now I wonder if there is a way to force the L2S designer not to change portions of the auto-generated code. I'd appreciate it if someone kindly helps me with this issue.

    Read the article

  • C++: Avoid lots of boolean variables for multiple verification conditions in a trading app

    - by Naveen
    Hi, I am a junior dev on a trading app... we have an order refresh verification unit. It has to verify order confirmations from the exchange. We send a bunch of different requests in bulk (NEW, MODIFY, CANCEL) to the exchange... Verification has to happen at most N times, at intervals of T, for all orders. If verification succeeds for all the orders before N retries, fine; otherwise we need to mark verification as unsuccessful. I have done some very rough basic coding, like below:

        for( N times )
        {
            for_each ( sent_request_order )   // SENT
            {
                1) get all the refreshed orders from DB or shared mem, i.e. REFRESHED
                2) find the current sent order in REFRESHED
                if( not_found )  // not refreshed from exchange, continue to next order
                if( found )
                    case NEW    : // check for new status, mark verification done
                    case MODIFY : // check for modified status..
                                  // if not, mark pending, go to next order,
                                  // revisit the same after T time
                    case CANCEL : // check for cancelled status..
                                  // if not, mark pending, go to next order,
                                  // revisit the same after T time
            }
            if( all_verified ) exit from verification.
            wait ( T sec )
        }

    order_verification_pending, order_verification_done, order_visited, order_not_visited, all_verified, all_not_verified ... lots of boolean flags are used for indication. Is there any better approach for doing this, perhaps splitting responsibilities across classes? I know this is not a general question, but all these flags are getting tedious to handle...
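
    For what it's worth, here is a sketch of one alternative to the pile of booleans: a single explicit state per order plus one loop-local flag. Java is used only for brevity; Order and statusConfirmed() are stand-ins for the real exchange types.

        import java.util.List;

        enum VerifyState { PENDING, VERIFIED }

        class Order {
            VerifyState state = VerifyState.PENDING;
            boolean statusConfirmed() { return false; }   // placeholder for the REFRESHED lookup
        }

        final class OrderVerifier {
            /** Returns true if every order is verified within maxRetries attempts. */
            static boolean verify(List<Order> sent, int maxRetries, long intervalMs)
                    throws InterruptedException {
                for (int attempt = 0; attempt < maxRetries; attempt++) {
                    boolean allVerified = true;
                    for (Order o : sent) {
                        if (o.state == VerifyState.VERIFIED) continue;
                        if (o.statusConfirmed()) o.state = VerifyState.VERIFIED;
                        else allVerified = false;
                    }
                    if (allVerified) return true;
                    Thread.sleep(intervalMs);
                }
                return false;   // some orders still unverified after N tries
            }
        }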

    Read the article

  • Cheapest way to determine if a MySQL connection is still alive

    - by MtnViewMark
    I have a pool of MySQL connections for a web-based data service. When it starts to service a request, it takes a connection from the pool to use. The problem is that if there has been a significant pause since that particular connection was last used, the server may have timed it out and closed its end. I'd like to be able to detect this in the pool management code. The trick is this: the environment in which I'm coding gives me only a very abstract API into the connection. I can basically only execute SQL statements; I don't have access to the actual socket or direct access to the MySQL client API. So, the question is: what is the cheapest MySQL statement I can execute on the connection to determine if it is working? For example SELECT 1; should work, but I'm wondering if there is something even cheaper? Perhaps something that doesn't even go across the wire, but is handled in the MySQL client lib and effectively answers the same question?
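
    Since this environment only allows executing SQL, the probe has to go over the wire; for comparison, a JDBC-flavoured sketch of the two usual options (this assumes a JDBC 4 driver, which is not necessarily what the poster's abstract API wraps):

        import java.sql.Connection;
        import java.sql.SQLException;
        import java.sql.Statement;

        final class ConnectionChecker {
            /** Returns true if the pooled connection still answers. */
            static boolean isAlive(Connection conn) {
                try {
                    if (conn.isValid(2)) {          // driver-level ping, timeout in seconds
                        return true;
                    }
                } catch (SQLException ignored) {
                    // fall through to the explicit probe
                }
                try (Statement st = conn.createStatement()) {
                    st.execute("SELECT 1");         // cheapest statement the server will parse
                    return true;
                } catch (SQLException e) {
                    return false;                   // timed out or closed on the server side
                }
            }
        }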

    Read the article

  • can I disable the "(Type e to repeat macro)" message in emacs?

    - by lindes
    Hi there, So, I've finally made the plunge, and have gotten to the state where I'm quite happy to have switched from vi and vim to emacs... I've been putting stuff in my .emacs file, learning how to evaluate things (not to mention becoming familiar with movement commands), etc. etc. etc. And now I have a problem with a require line in my .emacs file (a require statement*), which bombs out when I launch emacs (and generally fails to work). This led me to the following situation: in the process of trying to debug it, one of the steps I took was to open the file I was trying to require and evaluate it bit by bit, using C-M-f and C-x C-e (and later just M-x eval-buffer), which all worked fine. But along the way of that section-by-section evaluation, I got tired of typing all those, and so I recorded a keyboard macro... C-x ( C-M-f C-x C-e C-x ) and then C-x e... which gave me a message in the minibuffer (I think I'm using the right name), saying (Type e to repeat macro). Which meant I could no longer see the resulting value of the evaluation of each section of code... which, while not critical in this case, I was liking having. Which leads me to the actual question: is there a way to disable that message, and/or to cause the minibuffer to show multiple lines at once? I know about the *Messages* buffer, and that could have helped; I'm just wondering if there's a way to either disable that message, or otherwise make it coexist with other messages. Any suggestions? Thanks! lindes

    * - the problem at hand, which is not really my question, is that (require 'ruby-mode/ruby-mode) fails, even though emacs is definitely and successfully (per system call tracing) opening and reading the ruby-mode.el file. I presume this is because the provide line says just 'ruby-mode. I've found a solution for this, but if anyone can point me to any "best practices", I'd appreciate it.

    Read the article

  • What is the best approach for creating a Common Information Model?

    - by Kaiser Advisor
    Hi, I would like to know the best approach to create a Common Information Model. Just to be clear, I've also heard it referred to as a canonical information model, semantic information model, and master data model - as far as I can tell, they are all referring to the same concept. I've heard in the past that a combined "top-down" and "bottom-up" approach is best. This has the advantage of incorporating both the "ivory tower" architects and the developers - the work will meet somewhere in the middle and usually be both logical and practical. However, this involves bringing in a lot of people with different skill sets. I've also seen a couple of references to the Distributed Management Task Force, but I can't glean much from them on best practices in terms of CIM development. This is something I'm quite interested in getting some feedback on, since having a strong CIM is a prerequisite to SOA. Thanks for your help! KA

    Update: I've heard another strategy that goes along with overall SOA implementation: get the business involved, and seek executive sponsorship. This would be part of the "top-down" effort.

    Read the article

  • How can I implement a volume meter for a song currently playing? (iPhone OS 3.1.3)

    - by Adam
    Hi, I'm very new to Core Audio and I would just like some help in coding up a little volume meter for whatever's being outputted through the headphones or built-in speaker - like a dB meter. I have the following code, and have been trying to go through the Apple sample project "SpeakHere", but it's a nightmare trying to go through all that without knowing how it works first... Could anyone shed some light? Here's the code I have so far...

        - (void)displayWaveForm
        {
            while (musicIsPlaying == YES) {
                NSLog(@"%f", sizeof(AudioQueueLevelMeterState));
            }
        }

        - (IBAction)playMusic
        {
            if (musicIsPlaying == NO) {
                NSURL *url = [NSURL fileURLWithPath:[NSString stringWithFormat:@"%@/track7.wav",
                                  [[NSBundle mainBundle] resourcePath]]];
                NSError *error;
                music = [[AVAudioPlayer alloc] initWithContentsOfURL:url error:&error];
                music.numberOfLoops = -1;
                music.volume = 0.5;
                [music play];
                musicIsPlaying = YES;
                [self displayWaveForm];
            } else {
                [music pause];
                musicIsPlaying = NO;
            }
        }

    Read the article

  • Simplest way to automatically alter "const" value in Java during compile time

    - by Michael Mao
    Hi all: This question corresponds to my uni assignment, so I am very sorry to say I cannot adopt any of the following best practices in a short time -- you know -- the assignment is due tomorrow :( link to "Best way to alter const in Java" on StackOverflow. Basically the only task (I hope so) left for me is performance tuning. I've got a bunch of predefined "const" values in my single-class agent source code like this:

        // static final values
        private static final long FT_THRESHOLD = 400;
        private static final long FT_THRESHOLD_MARGIN = 50;
        private static final long FT_SMOOTH_PRICE_IDICATOR = 20;
        private static final long FT_BUY_PRICE_MODIFIER = 0;
        private static final long FT_LAST_ROUNDS_STARTTIME = 90;
        private static final long FT_AMOUNT_ADJUST_MODIFIER = 5;
        private static final long FT_HISTORY_PIRCES_LENGTH = 10;
        private static final long FT_TRACK_DURATION = 5;
        private static final int MAX_BED_BID_NUM_PER_AUC = 12;

    I can definitely alter the values manually and then compile the code to give it another go-around, but a thorough "statistic analysis" usually requires over 2000 executions, which will typically last more than half an hour on my own laptop... So I hope there is a way to alter the values without digging into the source code to change the "const" values there, so I can distribute the compiled code to other people's PCs and let them run the statistical analysis instead. One other reason for automatic value adjustment is that I can try using my own agent to defeat itself by choosing different "const" values. Although my values are derived from previous history and statistical results, they are far from the most optimized values. I hope there is an easy way I can quickly adopt, so as to have a good sleep tonight while the computer does everything for me... :) Any hints on this sort of stuff? Any suggestion is welcomed and much appreciated.
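
    A minimal sketch of one low-effort route: keep the fields, but read them from -D system properties with the old numbers as defaults, so each run of the analysis can pass different values without recompiling. The property names below simply echo the field names and are an assumption, not part of the assignment code.

        /** Run as, e.g.:  java -DFT_THRESHOLD=450 -DMAX_BED_BID_NUM_PER_AUC=10 AgentConfig */
        public final class AgentConfig {
            static final long FT_THRESHOLD        = Long.getLong("FT_THRESHOLD", 400);
            static final long FT_THRESHOLD_MARGIN = Long.getLong("FT_THRESHOLD_MARGIN", 50);
            static final int  MAX_BED_BID_NUM_PER_AUC =
                    Integer.getInteger("MAX_BED_BID_NUM_PER_AUC", 12);

            public static void main(String[] args) {
                System.out.println("FT_THRESHOLD = " + FT_THRESHOLD);
                System.out.println("FT_THRESHOLD_MARGIN = " + FT_THRESHOLD_MARGIN);
                System.out.println("MAX_BED_BID_NUM_PER_AUC = " + MAX_BED_BID_NUM_PER_AUC);
            }
        }

    The remaining constants would follow the same pattern; the fields stay final, so the rest of the agent code would not need to change.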

    Read the article

  • Long running operations (threads) in a web (asp.net) environment

    - by rrejc
    I have an asp.net (mvc) web site. As part of its functionality I will have to support some long running operations, for example:

    Initiated by the user: the user can upload an (xml) file to the server. On the server I need to extract the file, do some manipulation (insert into the db), etc. This can take from one minute to ten minutes (or even more - it depends on the file size). Of course I don't want to block the request while the import is running; I want to redirect the user to some progress page where he will have a chance to watch the status, errors, or even cancel the import. This operation will not be used frequently, but it may happen that two users try to import data at the same time, and it would be nice to run the imports in parallel. At the beginning I was thinking of creating a new thread in IIS (in the controller action) and running the import there, but I am not sure if this is a good idea (creating worker threads on a web server). Should I use Windows services or some other approach?

    Initiated by the system:
      - I will have to periodically update the Lucene index with new data.
      - I will have to send mass emails (in the future).

    Should I implement this as a job in the site and run the job via Quartz.NET, or should I also create a Windows service or something? What are the best practices when it comes to running site "jobs"? Thanks!

    Read the article

  • jQuery Validation: Validating Checkboxes with Different Names

    - by Michael
    I have a set of 4 checkboxes, all with different names, and require that at least 1 is checked. I know there are a few posts on here already trying to answer this question. The solution that seems most logical to me is the answer posted on Question 1734840, but I can't seem to get it working! What's wrong with my code? Or any other new coding ideas to get this working? I have set the class on all of them to 'require-one'. My jQuery code is:

        $(document).ready(function(){
            $("#itemForm").validate({
                highlight: function(element, errorClass, validClass) {
                    $(element).addClass(errorClass).removeClass(validClass);
                    $(element.form).find("label[for=" + element.name + "]")
                        .addClass("radioerror");
                },
                unhighlight: function(element, errorClass, validClass) {
                    $(element).removeClass(errorClass).addClass(validClass);
                    $(element.form).find("label[for=" + element.name + "]")
                        .removeClass("radioerror");
                },
                rules: {
                    'require-one': {
                        required : {
                            depends: function(element) {
                                var allBoxes = $('.require-one');
                                if (allBoxes.filter(':checked').length == 0) {
                                    if (allBoxes.eq(element).length != 0) {
                                        return true;
                                    }
                                }
                                return false;
                            }
                        }
                    },
                },
                errorPlacement: function(error, element) {
                    if ( element.is("#other_descrip") )
                        error.appendTo("#othererror" );
                    if ( element.is("#itemlist") )
                        error.appendTo("#itemerror" );
                }
            });
        });

    Read the article

  • Is this a good job description? What title would you give this position?

    - by Zack Peterson
    Department: Information Technology
    Reports To: Chief Information Officer

    Purpose: Company's ________________ is specifically engaged in the development of World Wide Web applications and distributed network applications. This person is concerned with all facets of the software development process and specializes in software product management. He or she contributes to projects in an application architect role and also performs individual programming tasks.

    Essential Duties & Responsibilities: This person is involved in all aspects of the software development process, such as:
      - Participation in software product definitions, including requirements analysis and specification
      - Development and refinement of simulations or prototypes to confirm requirements
      - Feasibility and cost-benefit analysis, including the choice of architecture and framework
      - Application and database design
      - Implementation (e.g. installation, configuration, customization, integration, data migration)
      - Authoring of documentation needed by users and partners
      - Testing, including defining/supporting acceptance testing and gathering feedback from pre-release testers
      - Participation in software release and post-release activities, including support for product launch evangelism (e.g. developing demonstrations and/or samples) and subsequent product build/release cycles
      - Maintenance

    Qualifications:
      - Bachelor's degree in computer science or software engineering
      - Several years of professional programming experience
      - Proficiency in the general technology of the World Wide Web: Hypertext Transfer Protocol (HTTP), Hypertext Markup Language (HTML), JavaScript, Cascading Style Sheets (CSS)
      - Proficiency in the following principles, practices, and techniques: accessibility, interoperability, usability, security (especially prevention of SQL injection and cross-site scripting (XSS) attacks), object-oriented programming (e.g. encapsulation, inheritance, modularity, polymorphism), relational database design (e.g. normalization, orthogonality), search engine optimization (SEO), Asynchronous JavaScript and XML (AJAX)
      - Proficiency in the following specific technologies utilized by Company: C# or Visual Basic .NET, ADO.NET (including ADO.NET Entity Framework), ASP.NET (including ASP.NET MVC Framework), Windows Presentation Foundation (WPF), Language Integrated Query (LINQ), Extensible Application Markup Language (XAML), jQuery, Transact-SQL (T-SQL), Microsoft Visual Studio, Microsoft Internet Information Services (IIS), Microsoft SQL Server, Adobe Photoshop

    Read the article

  • How to transform vertical table into horizontal table?

    - by avivo
    Hello, I have one table Person:

        Id  Name
        1   Person1
        2   Person2
        3   Person3

    And I have its child table Profile:

        Id  PersonId  FieldName  Value
        1   1         Firstname  Alex
        2   1         Lastname   Balmer
        3   1         Email      [email protected]
        4   1         Phone      +1 2 30004000

    And I want to get the data from these two tables in one row, like this:

        Id  Name     Firstname  Lastname  Email            Phone
        1   Person1  Alex       Balmer    [email protected]  +1 2 30004000

    What is the most optimized query to get these vertical (key, value) values in one row like this? Right now I have a problem: I've done four joins of the child table against the parent table because I need to get these four fields. Some optimization is surely possible. I would like to be able to modify this query in an easy way when I add a new field (key, value). What is the best way to do this? To create some stored procedure? I would like to have strong types in my DB layer (C#) and to use LINQ, so when I add some new key/value pair to the Profile table I would like to make minimal modifications in the DB and C# if possible. Actually I am trying to get at some best practices in this case.

    Read the article

  • Is it bad practice to make a setter return "this"?

    - by Ken Liu
    Is it a good or bad idea to make setters in Java return "this"?

        public Employee setName(String name) {
            this.name = name;
            return this;
        }

    This pattern can be useful because then you can chain setters like this:

        list.add(new Employee().setName("Jack Sparrow").setId(1).setFoo("bacon!"));

    instead of this:

        Employee e = new Employee();
        e.setName("Jack Sparrow");
        ...and so on...
        list.add(e);

    ...but it sort of goes against standard convention. I suppose it might be worthwhile just because it can make that setter do something else useful. I've seen this pattern used some places (e.g. JMock, JPA), but it seems uncommon, and only generally used for very well defined APIs where this pattern is used everywhere. Update: What I've described is obviously valid, but what I am really looking for is some thoughts on whether this is generally acceptable, and if there are any pitfalls or related best practices. I know about the Builder pattern but it is a little more involved than what I am describing - as Josh Bloch describes it, there is an associated static Builder class for object creation.
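
    For comparison, a compressed sketch of the Builder alternative mentioned at the end of the question (in the Effective Java style); the field set here is invented and trimmed down.

        public class Employee {
            private final String name;
            private final int id;

            private Employee(Builder b) {
                this.name = b.name;
                this.id = b.id;
            }

            public static class Builder {
                private String name;
                private int id;

                public Builder name(String name) { this.name = name; return this; }
                public Builder id(int id)        { this.id = id;     return this; }
                public Employee build()          { return new Employee(this); }
            }
        }

        // usage: list.add(new Employee.Builder().name("Jack Sparrow").id(1).build());

    The chaining reads the same, but the resulting Employee can be immutable, which sidesteps the "setters that aren't really setters" objection.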

    Read the article

  • MySQL table data transformation -- how can I disaggregate MySQL time data?

    - by lighthouse65
    We are coding for a MySQL data warehousing application that stores descriptive data (User ID, Work ID, Machine ID, Start and End Time columns in the first table below) associated with time and production quantity data (Output and Time columns in the first table below), upon which aggregate (SUM, COUNT, AVG) functions are applied. We now wish to disaggregate the time data for another type of analysis. Our current data table design:

        +---------+---------+------------+---------------------+---------------------+--------+------+
        | User ID | Work ID | Machine ID | Event Start Time    | Event End Time      | Output | Time |
        +---------+---------+------------+---------------------+---------------------+--------+------+
        | 080025  | ABC123  | M01        | 2008-01-24 16:19:15 | 2008-01-24 16:34:45 | 2120   | 930  |
        +---------+---------+------------+---------------------+---------------------+--------+------+

    The disaggregating reprocessing we would like to do would transform the table content to a granularity of minutes, rather than the current production event ("Event Start Time" and "Event End Time") granularity. The resulting reprocessing of the existing table row would look like:

        +---------+---------+------------+-------------------+--------+
        | User ID | Work ID | Machine ID | Production Minute | Output |
        +---------+---------+------------+-------------------+--------+
        | 080025  | ABC123  | M01        | 2010-01-24 16:19  | 133    |
        | 080025  | ABC123  | M01        | 2010-01-24 16:20  | 133    |
        | 080025  | ABC123  | M01        | 2010-01-24 16:21  | 133    |
        | 080025  | ABC123  | M01        | 2010-01-24 16:22  | 133    |
        | 080025  | ABC123  | M01        | 2010-01-24 16:23  | 133    |
        | 080025  | ABC123  | M01        | 2010-01-24 16:24  | 133    |
        | 080025  | ABC123  | M01        | 2010-01-24 16:25  | 133    |
        | 080025  | ABC123  | M01        | 2010-01-24 16:26  | 133    |
        | 080025  | ABC123  | M01        | 2010-01-24 16:27  | 133    |
        | 080025  | ABC123  | M01        | 2010-01-24 16:28  | 133    |
        | 080025  | ABC123  | M01        | 2010-01-24 16:29  | 133    |
        | 080025  | ABC123  | M01        | 2010-01-24 16:30  | 133    |
        | 080025  | ABC123  | M01        | 2010-01-24 16:31  | 133    |
        | 080025  | ABC123  | M01        | 2010-01-24 16:32  | 133    |
        | 080025  | ABC123  | M01        | 2010-01-24 16:33  | 133    |
        | 080025  | ABC123  | M01        | 2010-01-24 16:34  | 133    |
        +---------+---------+------------+-------------------+--------+

    So the reprocessing would take an existing row of data created at the granularity of a production event and change the granularity to minutes, eliminating the now-redundant (Event End Time, Time) columns while doing so. It assumes a constant rate of production and divides the output by the difference in minutes plus one to populate the new table's Output column. I know this can be done in code... but can it be done entirely in a MySQL insert statement (or otherwise entirely in MySQL)? I am thinking of an INSERT ... INTO construction but keep getting stuck. An additional complexity is that there are hundreds of machines to include in the operation, so there will be multiple rows (one for each machine) for each minute of the day. Any ideas would be much appreciated. Thanks.
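
    For reference, the "in code" route mentioned above would look roughly like the sketch below (Java, invented names): truncate both timestamps to the minute, count the minutes inclusively, divide the output evenly, and emit one row per minute.

        import java.time.Duration;
        import java.time.LocalDateTime;
        import java.time.temporal.ChronoUnit;
        import java.util.ArrayList;
        import java.util.List;

        final class Disaggregator {
            record MinuteRow(String userId, String workId, String machineId,
                             LocalDateTime minute, long output) { }

            static List<MinuteRow> explode(String userId, String workId, String machineId,
                                           LocalDateTime start, LocalDateTime end,
                                           long totalOutput) {
                LocalDateTime first = start.truncatedTo(ChronoUnit.MINUTES);
                LocalDateTime last  = end.truncatedTo(ChronoUnit.MINUTES);
                long minutes   = Duration.between(first, last).toMinutes() + 1;
                long perMinute = totalOutput / minutes;        // assumes a constant production rate

                List<MinuteRow> rows = new ArrayList<>();
                for (long i = 0; i < minutes; i++) {
                    rows.add(new MinuteRow(userId, workId, machineId,
                                           first.plusMinutes(i), perMinute));
                }
                return rows;
            }
        }

    Doing the same expansion purely inside MySQL usually comes down to joining the event table against a helper table of minute offsets, which is where the INSERT ... SELECT the question mentions would come in.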

    Read the article

  • Enforce strong type checking in C (type strictness for typedefs)

    - by quinmars
    Is there a way to enforce explicit casts for typedefs of the same type? I have to deal with UTF-8, and sometimes I get confused between the indices for the character count and the byte count. So it would be nice to have some typedefs:

        typedef unsigned int char_idx_t;
        typedef unsigned int byte_idx_t;

    with the addition that you need an explicit cast between them:

        char_idx_t a = 0;
        byte_idx_t b;

        b = a;                // compile warning
        b = (byte_idx_t) a;   // ok

    I know that such a feature doesn't exist in C, but maybe you know a trick or a compiler extension (preferably gcc) that does that. EDIT: I still don't really like Hungarian notation in general. I couldn't use it for this problem because of project coding conventions, but I have used it now in another similar case, where the types are also the same and the meanings are very similar. And I have to admit: it helps. I would never go and declare every integer with a starting "i", but as in Joel's example of overlapping types, it can be life saving.

    Read the article

  • Programmatically choosing image conversion format to JPEG or PNG for Silverlight display

    - by Otaku
    I have a project where I need to convert a large number of image types to be displayable in a Silverlight app - TIFF, GIF, WMF, EMF, BMP, DIB, etc. I can do these conversions on the server before hydrating the Silverlight app. However, I'm not sure when I should choose to convert to which format, JPEG or PNG. Is there some kind of standard out there, like "TIFF should always become a JPEG and GIF should always become a PNG", or "if a BMP is 24-bit it should be converted to JPEG, and at lower bit depths it can be a PNG", or "everything becomes a PNG", and why? What I usually see in response to this type of question is "Well, if the picture is a photograph, go with JPEG" or "If it has straight lines, PNG is better." Unfortunately, I won't have the luxury of viewing any of the image files at all and would just like a standard way to do this via code, even if that is a zillion if/then statements. Are there any standards or best practices around this subject? P.S. Please don't move to close this subject - it actually has no duplicate on SO because I'm not looking for subjectivity.
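
    There does not appear to be a formal standard; one pragmatic, viewing-free rule is "encode both ways and keep the smaller file", and that rule itself can be automated. A crude JDK-only sketch of the idea (it ignores transparency and quality tuning, which a real converter would have to handle):

        import javax.imageio.ImageIO;
        import java.awt.image.BufferedImage;
        import java.io.ByteArrayOutputStream;
        import java.io.File;
        import java.io.IOException;

        final class FormatPicker {
            /** Returns "jpg" or "png", whichever encoding of the source comes out smaller. */
            static String smallerFormat(File source) throws IOException {
                BufferedImage img = ImageIO.read(source);

                ByteArrayOutputStream png = new ByteArrayOutputStream();
                ImageIO.write(img, "png", png);

                ByteArrayOutputStream jpg = new ByteArrayOutputStream();
                ImageIO.write(img, "jpg", jpg);     // default JPEG quality

                return jpg.size() < png.size() ? "jpg" : "png";
            }
        }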

    Read the article

  • Batch Image compression tool for optimizing thousands of images

    - by Daniel Magliola
    Hi all, I'm maintaining a site that has thousands of images that have not been compressed nearly enough. The homepage weighs in at 1.5 MB currently, and it could easily be way less than half that. I'm looking for some kind of tool that will take a folder full of JPG pictures and recompress them to their "optimal" compression value. Obviously, "optimal lossy compression setting" is an oxymoron, but I'm thinking maybe a tool that will try different levels, compare the outputs to the input, and choose a "sweet spot" between size and destruction? Or even try whether PNG is a better option - many times it is, for "drawing" type images. Does anyone know of such a tool? I'd have lots of fun coding one, but I bet someone already did and will save me 2 days. Alternatively, of course, anything that will take all the pictures in a folder and recompress them with a fixed quality level (say, 40) will also work; it just won't make my inner nerd as happy, but it'll solve my problem just fine. (Ideally something that can run on Windows, ideally from the command line.) Thank you!
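
    If no ready-made tool turns up, the "fixed quality level" fallback is only a few lines with the JDK's ImageIO; a sketch (the folder name and the 0.4 quality are placeholders, and it writes new files rather than overwriting):

        import javax.imageio.IIOImage;
        import javax.imageio.ImageIO;
        import javax.imageio.ImageWriteParam;
        import javax.imageio.ImageWriter;
        import javax.imageio.stream.ImageOutputStream;
        import java.awt.image.BufferedImage;
        import java.io.File;
        import java.io.IOException;

        public final class BatchRecompress {
            public static void main(String[] args) throws IOException {
                File folder = new File(args.length > 0 ? args[0] : "photos");
                File[] jpgs = folder.listFiles((d, n) -> n.toLowerCase().endsWith(".jpg"));
                if (jpgs == null) return;

                for (File src : jpgs) {
                    BufferedImage img = ImageIO.read(src);
                    File dst = new File(folder, "opt_" + src.getName());

                    ImageWriter writer = ImageIO.getImageWritersByFormatName("jpg").next();
                    ImageWriteParam p = writer.getDefaultWriteParam();
                    p.setCompressionMode(ImageWriteParam.MODE_EXPLICIT);
                    p.setCompressionQuality(0.4f);          // 0.0 = smallest, 1.0 = best

                    try (ImageOutputStream out = ImageIO.createImageOutputStream(dst)) {
                        writer.setOutput(out);
                        writer.write(null, new IIOImage(img, null, null), p);
                    } finally {
                        writer.dispose();
                    }
                    System.out.printf("%s: %d -> %d bytes%n",
                            src.getName(), src.length(), dst.length());
                }
            }
        }

    Looping over a handful of quality values and keeping the first output whose size crosses a threshold would give something closer to the "sweet spot" behaviour described above.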

    Read the article

  • Monotouch threads, GC, WCF

    - by cvista
    Hi, this is a question about best practices I guess, but it applies directly to my current MonoTouch project. I'm using WCF services to communicate with the server. To do this I call:

        services.MethodToCall(params);

    and for the asynchronous completion:

        services.OnMethodToCallCompleted += delegate {
            // do stuff and ting
        };

    This can lead to issues if you're not careful, in that variables defined within the scope of the async callback can sometimes be cleaned up by the GC, and this can cause crashes. So I am making it a practice to declare these outside the scope of the callback unless I am 100% sure they are not needed. Now - when doing stuff and ting implies changing the UI, I wrap it all in an InvokeOnMainThread call. I guess wrapping everything in this would slow the main thread down and defeat the point of having multiple threads. Even though I'm being careful about all this I am still getting crashes, and I have no idea why! I am certain it has something to do with threads, scope and all that. The only thing I can think of outside of updating the UI that may need to happen inside of InvokeOnMainThread is that I have a singleton 'Database' class. This is based on the version 5 code from this thread: http://www.yoda.arachsys.com/csharp/singleton.html. So now, if the service method returns data that needs to be added/updated in the Database class, I also wrap this inside an InvokeOnMainThread call. Still getting random crashes. So... my question is this: I am new to thick client dev - I'm coming from a web dev perspective where we don't need to worry about threads so much :) Aside from what I have mentioned, are there any other things I should be aware of? Is the above stuff correct? Or am I misunderstanding something? Cheers w://

    Read the article

  • save html-formatted text to database

    - by yozhik
    Hi all! I want to save HTML-formatted text to the database, but when I do that it doesn't save HTML symbols like <, /, ' and others. This is how I read an article from the database for editing:

        <p class="Title">??????????? ???????:</p>
        <textarea name="EN" cols="90" rows="20" value="<?php echo $articleArr['EN']; ?>" ></textarea>

    And this is how I save it to the database:

        function UpdateArticle($ArticleID, $ParentName, $Title, $RU, $EN, $UKR)
        {
            //fetch data from database for dropdown lists
            //connect to db or die)
            $db = mysql_connect($GLOBALS["gl_dbName"], $GLOBALS["gl_UserName"], $GLOBALS["gl_Password"] )
                or die ("Unable to connect");
            //to prevent ????? symbols in unicode - utf-8 coding
            mysql_query("SET NAMES 'UTF8'");
            mysql_set_charset('utf8');
            mysql_query("SET NAMES 'utf8' COLLATE 'utf8_general_ci'");
            //select database
            mysql_select_db($GLOBALS["gl_adminDatabase"], $db);

            $sql = "UPDATE Articles SET AParentName='".$ParentName."', ATitle='".$Title."',
                    RU='".$RU."', EN='".$EN."', UKR='".$UKR."'
                    WHERE ArticleID='".$ArticleID."';";

            //execute SQL-query
            $result = mysql_query($sql, $db);
            if (!$result) {
                die('Invalid query: ' . mysql_error());
            }
            //close database = very important
            mysql_close($db);
        }

    Help me please, how can I save such texts properly? Thanks!

    Read the article

  • Understanding SQL query formulation methodology. How do you think while formulating SQL queries?

    - by Shantanu Gupta
    I have been working on SQL Server and front end coding and have usually faced problems formulating queries. I understand most of the concepts of SQL that are needed in formulating queries, but whenever some new functionality comes into the picture that could be done with a SQL query, I usually fail to work it out. I am very comfortable with SELECT queries using joins and all such things, but when it comes to DML operations I usually struggle. Every query I have never written before feels uncomfortable while I am creating it. Whenever I go for an interview I usually face this problem. Is there some concept behind approaching the formulation of SQL queries? E.g. I need to create a SQL query such that a table contains a single column holding duplicate records, and I need to remove the duplicates. I know I can find the solution to this query very easily by Googling, but I want to know how everyone arrives at the desired result. Is it something like "practice makes a man perfect", i.e. once you have done it, next time you will be able to formulate it, or is there some logic or concept behind it?

    Read the article

  • ASP.Net MVC 2 - How to set up a Cancel button with client side navigation

    - by arame3333
    Thanks to a previous question I found a useful link on multiple buttons: http://weblogs.asp.net/dfindley/archive/2009/05/31/asp-net-mvc-multiple-buttons-in-the-same-form.aspx

    What I want to do is have a cancel button on my page, similar to this:

        <button name="button" type="button"
                onclick="document.location.href=$('#cancelUrl').attr('href')">Cancel</button>
        <a id="cancelUrl" href="<%: Url.Action("Index", "Home") %>" style="display:none;"></a>

    However, although this code works, I really want to go back to the previous page. For Web Forms I could use the JavaScript Back() or Go(-1) functions, but they relied on postbacks. I could of course hard-code the previous page and controller as I have done above. However, I am struggling to find links that explain how Url.Action works, because if I do this I also need to include an index parameter, and I am not clear how the syntax works for that. It seems an odd amount of coding for such a simple thing. Out of curiosity, I am also wondering how you TDD client side code like this.

    Read the article
