Search Results

Search found 14292 results on 572 pages for 'high integrity systems'.


  • How to accommodate for the iPhone 4 screen resolution?

    - by dontWatchMyProfile
    This is a programming question! Read on before you vote to close! According to Apple, the iPhone 4 has a new and better screen resolution: a 3.5-inch (diagonal) widescreen Multi-Touch display with 960-by-640-pixel resolution at 326 ppi. This little detail affects our apps in a big way. Most of the demo apps on the net have one thing in common: they position views in the belief that the screen has a fixed size of 320 x 480 pixels. So what most, if not all, developers do is design everything in such a way that a touchable area is, for example, 50 x 50 pixels big, just enough to tap it, and position things relative to the upper left to reach a specific position on screen, say the center or somewhere at the bottom. Edit: It seems Apple has added a switch that lets you tell whether an app is high-res or not. Nice. When we develop high-resolution apps, they probably won't work on older devices. And if they do, they would suffer a lot from images four times the size, which have to be scaled down in memory.

    Read the article

  • image appears larger than bounding rectangle?

    - by Kildareflare
    Hello, I'm implementing a custom print preview control. One of the objects it needs to display is an image which can be several pages high and wide. I've successfully divided the image into pages and displayed them in the correct order. Each "page" is drawn within the margin bounds. (That is, each page is sized to be the same as e.MarginBounds.) The problem I have is that the image the page represents exceeds the bottom margin. However, if I draw a rectangle with the same dimensions as the image, at the same position as the image, the rectangle matches the margin bounds. Thus when drawn to the page (in print preview and on the printed page) the image is larger than a rectangle drawn based on the image dimensions. I.e. the image is drawn at the same size as e.MarginBounds and is drawn at e.MarginBounds.Location, yet exceeds the bottom margin. How can this be? I've checked resolutions at each step of the process and they are all 96 DPI. /* ... */ private List<Image> m_printImages = new List<Image>(); /* ... */ private void ImagePrintDocument_PrintPage(object sender, PrintPageEventArgs e) { PrepareImageForPrinting(e); e.Graphics.DrawImage(m_printImages[m_currentPage], new Point(e.MarginBounds.X, e.MarginBounds.Y)); e.Graphics.DrawRectangle(new Pen(Color.Blue, 1.0F), new Rectangle(new Point(e.MarginBounds.X, e.MarginBounds.Y), new Size(m_printImages[m_currentPage].Width, m_printImages[m_currentPage].Height))); /* rest of handler */ } PS: Not able to show an image as free image hosting is blocked here.
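
    If the mismatch comes from GDI+ using the image's own DPI when no destination size is given to DrawImage, passing an explicit destination rectangle usually removes it. A minimal sketch of that variant, reusing m_printImages and m_currentPage from the question:

        private void ImagePrintDocument_PrintPage(object sender, PrintPageEventArgs e)
        {
            PrepareImageForPrinting(e);
            Image page = m_printImages[m_currentPage];

            // Give DrawImage an explicit destination size so it does not scale the
            // image according to its HorizontalResolution/VerticalResolution.
            Rectangle dest = new Rectangle(e.MarginBounds.Location,
                                           new Size(page.Width, page.Height));
            e.Graphics.DrawImage(page, dest);

            // The comparison rectangle now matches the drawn image exactly.
            e.Graphics.DrawRectangle(new Pen(Color.Blue, 1.0F), dest);
        }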

    Read the article

  • Discussion on SEO best-practices for site development involving php...

    - by Bradley Herman
    Recently in our work, I've started getting some experience with SEO (finally). It's something I've put off for a long time because I've always maintained that SEO is buzz-word b.s. pseudo-science and that it's really about providing quality, relevant content (assuming proper header tags and the basics are covered). However, sometimes a client doesn't have stellar content yet still demands SEO and high rankings. While it's not how I design sites 100% of the time (as design dictates structure), I typically create a basic template from the design my boss gives me, then I optimize it, and then strip the top and bottom and move those to header.php and footer.php, using the following to bring in the header and footer based on AJAX versus HTML requests: <?php if($_SERVER['HTTP_X_REQUESTED_WITH']==''){ include('includes/header.php'); }?> #content here <?php if($_SERVER['HTTP_X_REQUESTED_WITH']==''){ include('includes/footer.php'); }?> Then, I use jQuery to intercept page requests and I use AJAX to fill in, for example, a #copy div with the new content. This avoids unnecessarily loading all the header and footer info every time, but still allows users without JavaScript to access pages without any problems. (Also worth thinking about: depending on the size of the content, do the extra HTTP requests added by this method put more strain on the server than a single, larger file?) I don't have a really solid understanding of the meta keywords and their SEO significance, but as I recall reading, the keywords, title, and description on a page should match up to the page's content - i.e. each page should have slightly different keywords/description while retaining some common ground. What I'm getting at here is trying to foster a discussion on whether my approach is flawed to begin with, whether there are things I can do (within reason) that keep the site structure simple but allow for better SEO practices, or whether my understanding of SEO is wrong. This isn't a question, per se, but hopefully a constructive discussion here that more than just I can learn from. I appreciate any responses and hope to hear from you. Thanks!
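
    One way to keep per-page titles, descriptions, and keywords while still sharing a single header include is to define them before pulling the header in, and have header.php echo them into the <title> and <meta> tags. A rough sketch of that pattern; the $page_meta variable and its contents are made up for illustration:

        <?php
        // Page-specific metadata, set before the shared header is included.
        $page_meta = array(
            'title'       => 'Product catalogue - Example Corp',
            'description' => 'Browse the full Example Corp product range.',
            'keywords'    => 'products, catalogue, example corp',
        );

        if ($_SERVER['HTTP_X_REQUESTED_WITH'] == '') {
            include('includes/header.php');   // header.php reads $page_meta
        }
        ?>

        <!-- inside includes/header.php -->
        <title><?php echo htmlspecialchars($page_meta['title']); ?></title>
        <meta name="description" content="<?php echo htmlspecialchars($page_meta['description']); ?>" />
        <meta name="keywords" content="<?php echo htmlspecialchars($page_meta['keywords']); ?>" />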

    Read the article

  • git crlf configuration in mixed environment

    - by Jonas Byström
    I'm running a mixed environment, and keep a central, bare repository where I pull and push most of my stuff. This centralized repository runs on Linux, and I check out to Windows XP/7, Mac and Linux. In all repositories I put the following line in my .git/config: [core] autocrlf = true I don't have the flag safecrlf=true anywhere. The first time I modify stuff on one Windows machine (XP) there is no problem, and when I look at the diff, it looks fine. But when I do the same on the other Windows machine (7), all lines are shown as changed, even though the local line endings are \r\n as expected (when checked in a hex editor). The same applies to a Mac OS X machine. Sometimes I get the feeling that the different systems are wrestling over the line endings, but I can't be sure (I'm losing track of all the times I change specific files). I didn't use to have the autocrlf flag set, but set it many months back. Could that be causing my current problems? Do I need to clone everything again to lose some old baggage? Or are there other things that need configuring too? I tried git checkout -- . about a million times, but with no success.
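
    A common way to take per-machine autocrlf settings out of the equation is to declare the line-ending policy in the repository itself with a .gitattributes file and renormalize the existing content once. A sketch of that approach, assuming a reasonably recent git (the --renormalize option needs git 2.16 or later):

        # .gitattributes at the repository root
        * text=auto

        # then, in a working copy, re-run the line-ending filters and commit the result
        git add --renormalize .
        git commit -m "Normalize line endings"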

    Read the article

  • How do I improve this design for dealing with intersecting date ranges and resolving date range conflicts?

    - by derdo
    I am working on some code that deals with date ranges. I have pricing activities that have a start date and an end date to set a certain price for that range. There are multiple pricing activities with intersecting date ranges. What I ultimately need is the ability to query valid prices for a date range. I pass in (jan1, jan31) and I get back a list that says jan1--jan10 $4, jan11--jan19 $3, jan20--jan31 $4. There are priorities between pricing activities. Some types of pricing activities have high priority, so they override other pricing activities, and for certain types of pricing activities the lowest price wins, etc. I currently have a class that holds these pricing activities and keeps a resolved pricing calendar. As I add new pricing activities I update the resolved calendar. As I write more tests/code, it is getting very complicated, with all the different ways pricing activities can intersect. I am ending up with very complicated code where I resolve a newly added pricing activity; see the AddPricingActivity() method below. Can anybody think of a simpler way to deal with this? Could there be similar code somewhere out there? public class PricingActivity { DateTime startDate; DateTime endDate; double price; public bool StartsBeforeEndsAfter(PricingActivity pAct) { /* pAct covers this pricing activity */ } public bool StartsMiddleEndsAfter(PricingActivity pAct) { /* the early part of pAct intersects with the later part of this pricing activity */ } /* more similar methods to cover all the combinations of intersecting ranges */ } public class PricingActivityList { List<PricingActivity> activities; SortedDictionary<DateTime, PricingActivity> resolvedPricingCalendar; public void AddPricingActivity(PricingActivity pAct) { /* update the resolved calendar: go over each activity, find the intersecting ones, and update the calendar correctly depending on the type of the intersection - this part is getting out of hand... */ } }
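
    One way to avoid enumerating every pairwise intersection case is to resolve on demand rather than maintaining a resolved calendar: collect the boundary dates of every activity that overlaps the queried range, and for each elementary interval between consecutive boundaries pick the winner by priority, with lowest price as a tie-break. A rough sketch of that idea; the public StartDate/EndDate/Price/Priority properties and the exact tie-break rule are assumptions, not the questioner's API:

        // Requires System.Linq and System.Collections.Generic.
        public static IEnumerable<Tuple<DateTime, DateTime, double>> ResolveRange(
            IEnumerable<PricingActivity> activities, DateTime from, DateTime to)
        {
            var overlapping = activities
                .Where(a => a.StartDate <= to && a.EndDate >= from)
                .ToList();

            // Every date at which the winning activity could possibly change.
            var boundaries = overlapping
                .SelectMany(a => new[] { a.StartDate, a.EndDate.AddDays(1) })
                .Concat(new[] { from, to.AddDays(1) })
                .Where(d => d >= from && d <= to.AddDays(1))
                .Distinct()
                .OrderBy(d => d)
                .ToList();

            for (int i = 0; i < boundaries.Count - 1; i++)
            {
                DateTime start = boundaries[i];
                DateTime end = boundaries[i + 1].AddDays(-1);
                var winner = overlapping
                    .Where(a => a.StartDate <= start && a.EndDate >= end) // covers the whole slice
                    .OrderByDescending(a => a.Priority)
                    .ThenBy(a => a.Price)
                    .FirstOrDefault();
                if (winner != null)
                    yield return Tuple.Create(start, end, winner.Price);
            }
        }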

    Read the article

  • Why does adding Crossover to my Genetic Algorithm give me worse results?

    - by MahlerFive
    I have implemented a Genetic Algorithm to solve the Traveling Salesman Problem (TSP). When I use only mutation, I find better solutions than when I add in crossover. I know that normal crossover methods do not work for TSP, so I implemented both the Ordered Crossover and the PMX Crossover methods, and both suffer from bad results. Here are the other parameters I'm using: Mutation: Single Swap Mutation or Inverted Subsequence Mutation (as described by Tiendil here) with mutation rates tested between 1% and 25%. Selection: Roulette Wheel Selection. Fitness function: 1 / distance of tour. Population size: tested 100, 200, 500; I also run the GA 5 times so that I have a variety of starting populations. Stop condition: 2500 generations. With the same dataset of 26 points, I usually get results of about 500-600 distance using purely mutation with high mutation rates. When adding crossover my results are usually in the 800 distance range. The other confusing thing is that I have also implemented a very simple hill-climbing algorithm to solve the problem, and when I run that 1000 times (faster than running the GA 5 times) I get results around 410-450 distance, and I would expect to get better results using a GA. Any ideas as to why my GA performs worse when I add crossover? And why does it perform much worse than a simple hill-climbing algorithm, which should get stuck on local maxima since it has no way of exploring once it finds a local max?
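
    For reference, here is a small sketch of an ordered crossover for permutation-encoded tours (keep a random slice of one parent, fill the rest in the other parent's order), written in Python so the operator can be compared against an independent implementation; it is a generic illustration, not the questioner's code:

        import random

        def ordered_crossover(parent1, parent2):
            """Copy a random slice from parent1, then fill the remaining
            positions with parent2's cities in the order they appear there."""
            n = len(parent1)
            a, b = sorted(random.sample(range(n), 2))
            child = [None] * n
            child[a:b + 1] = parent1[a:b + 1]                  # keep a slice of parent1
            fill = [city for city in parent2 if city not in child]
            i = 0
            for pos in range(n):
                if child[pos] is None:
                    child[pos] = fill[i]
                    i += 1
            return child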

    Read the article

  • Sparse (Pseudo) Infinite Grid Data Structure for Web Game

    - by Ming
    I'm considering trying to make a game that takes place on an essentially infinite grid. The grid is very sparse: there are certain small regions of relatively high density and relatively few isolated nonempty cells, and the amount of the grid in use is too large to implement naively but is probably smallish by "big data" standards (I'm not trying to map the Internet or anything like that). This needs to be easy to persist. Here are the operations I may want to perform (reasonably efficiently) on this grid: ask for some small rectangular region of cells and all their contents (a player's current neighborhood); set individual cells or blit small regions (the player is making a move); ask for the rough shape or outline/silhouette of some larger rectangular regions (a world map or region preview); find some regions with approximately a given density (player spawning location); approximate shortest path through gaps of at most some small constant empty spaces per hop (it's OK to be a bad approximation often, but not OK to keep heading the wrong direction searching); approximate convex hull for a region. Here's the catch: I want to do this in a web app. That is, I would prefer to use existing data storage (perhaps in the form of a relational database) and relatively few external dependencies (preferably avoiding the need for a persistent process). Guys, what advice can you give me on actually implementing this? How would you do this if the web-app restrictions weren't in place? How would you modify that if they were? Thanks a lot, everyone!
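
    On the storage side, given the web-app constraints, a plain relational table keyed by cell coordinates handles the sparse case and the small rectangular queries without any persistent process. A minimal sketch with SQLite (the table and column names are made up for illustration):

        import sqlite3

        conn = sqlite3.connect("world.db")
        conn.execute("""
            CREATE TABLE IF NOT EXISTS cell (
                x       INTEGER NOT NULL,
                y       INTEGER NOT NULL,
                content TEXT    NOT NULL,
                PRIMARY KEY (x, y)   -- only non-empty cells are ever stored
            )
        """)

        def set_cell(x, y, content):
            conn.execute("INSERT OR REPLACE INTO cell (x, y, content) VALUES (?, ?, ?)",
                         (x, y, content))

        def region(x0, y0, x1, y1):
            """All non-empty cells in the rectangle [x0..x1] x [y0..y1]."""
            return conn.execute(
                "SELECT x, y, content FROM cell "
                "WHERE x BETWEEN ? AND ? AND y BETWEEN ? AND ?",
                (x0, x1, y0, y1)).fetchall()

    The density and silhouette queries could then be served from a second, coarser table that aggregates counts per block of cells, updated whenever a cell is written.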

    Read the article

  • sorting a timer in matlab

    - by AP
    OK, it seems like a simple problem, but I am having trouble with it. I have a timer for each data set which resets improperly, and as a result my timing gets mixed up. Any ideas on how to correct it without losing any data? For example, the timer column should ideally read 1 2 3 4 5 6, but mine reads 3 4 5 6 1 2. How do I change column 2, or make a new column which reads like column 1, without changing the order of the rows which hold the data? This is just an example, as my files are about 86000 rows long. I also have missing timer values which I do not want to drop; a missing value implies there is no data for that period of time. Thanks. EDIT: I do not want to change the other columns. Column 1 is the GPS counter, so it does not sync with the computer timer due to some other issues. I just want to change column one such that it is in order, without affecting the other rows, and also take care of the missing points (if I did not care about missing points, a simple n = 1:max would work).

    Read the article

  • How can I tackle 'profoundly found elsewhere' syndrome (inverse of NIH)?

    - by Alistair Knock
    How can I encourage colleagues to embrace small-scale innovation within our team(s), in order to get things done quicker and to encourage skills development? (The term 'profoundly found elsewhere' comes from Wikipedia, although it is scarcely used anywhere else apart from a reference to Procter & Gamble.) I've worked both in environments where there is strong opposition to software which hasn't been developed in-house (usually because there's a large community of developers), and more recently (with far fewer central developers) in environments where off-the-shelf products are far more favoured for the usual reasons: maintenance, total cost over the product lifecycle, risk management and so on. I think the off-the-shelf argument works in the majority of cases for the majority of users, even though as a developer the product never quite does what I'd like it to do. However, in some cases there are clear gaps where the market isn't able to provide specifically what we would need, or at least it isn't able to without charging astronomical consultancy rates for a bespoke solution. These can be small web applications which provide a short-term solution to a particular need in one specific department, or they could be larger developments that have the potential to serve a wider audience, both across the organisation and into external markets. The problem is that while development of these applications would be incredibly cheap in terms of developer hours, and delivered very quickly without the need for glacial consultation, the proposal usually falls flat because of risk: 'Who'll maintain the project tracker that hasn't had any maintenance for the past 7 years while you're on holiday for 2 weeks?' 'What if one of our systems changes and the connector breaks?' 'How can you guarantee it's secure/better/faster/cheaper/holier than Company X's?' With one developer behind these little projects, the answers are invariably: 'Nobody, but...' 'It will break, just like any other application would...' 'I, uh...' How can I better answer these questions and encourage people to take a little risk in order to stimulate creativity and fast-paced, short-lifecycle development, instead of using those 6 months to consult about what tender process we might use?

    Read the article

  • What's the most efficient way to set up a multi-lingual website

    - by Jasper De Bruijn
    Hi, I'm developing a website that will be available in different languages. It is a LAMP (Linux, Apache, MySQL, PHP) setup, and it makes use of Smarty, mostly as the template engine. The way we currently translate is with a self-written Smarty plugin, which recognizes certain tags in the HTML files and looks up the corresponding tag in an earlier defined language file. The HTML could look as follows: <p>Hi, welcome to $#gamedesc;!</p> And the language file could look like this: gamedesc:Poing 2009$; welcome:this is another tag$; Which would then output <p>Hi, welcome to Poing 2009!</p> This system is very basic, but it is pretty hard to control if, for example, I would like to keep track of what has been translated so far, or give certain users the rights to translate only certain tags. I've been looking at some alternative ways to approach this, by either replacing the text file with XML files which could store some extra meta-data, or by perhaps storing all the texts in the database and retrieving them from there. My question is, what would be the best way to make this system both maintainable and able to perform well under high user traffic? Are there perhaps any (lightweight) plugins I could take a look at?
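
    If the texts do move into the database, the Smarty side can stay almost unchanged: the plugin simply looks each tag up (with a per-request cache) instead of parsing a language file, which also makes it easy to record who translated what. A rough sketch of such a lookup in PHP, with made-up table and column names:

        <?php
        function translate($tag, $lang, PDO $db) {
            static $cache = array();              // per-request cache: at most one query per tag
            $key = $lang . ':' . $tag;
            if (!isset($cache[$key])) {
                $stmt = $db->prepare(
                    'SELECT text FROM translations WHERE tag = ? AND lang = ?');
                $stmt->execute(array($tag, $lang));
                $row = $stmt->fetch(PDO::FETCH_ASSOC);
                $cache[$key] = $row ? $row['text'] : $tag;   // fall back to the raw tag
            }
            return $cache[$key];
        }
        ?>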

    Read the article

  • ISO/IEC Website and Charging for C and C++ Standards

    - by Michael Aaron Safyan
    The ISO C Standard (ISO/IEC 9899) and the ISO C++ Standard (ISO/IEC 14882) are not published online; instead, one must purchase the PDF for each of those standards. I am wondering what the rationale is behind this... is it not detrimental to both the C and C++ programming languages that the authoritative specification for these languages is not made freely available and searchable online? Doesn't this encourage the use of possibly inaccurate, non-authoritative sources for information regarding these standards? While I understand that much time and effort has gone into developing the C and C++ standards, I am still somewhat puzzled by the choice to charge for the specification. The Open Group Base Specification, for example, is available for free online; they make money by charging for certification. Does anyone know why the ISO standards committees don't make their revenue by certifying standards compliance, instead of charging for these documents? Also, does anyone know if the ISO standards committee's atrocious-looking website is intentionally made to look that way? It's as if they don't want people visiting and buying the spec. One last thing... the C and C++ standards are generally described as "open standards"... while I realize that this means that anyone is permitted to implement the standard, should that definition of "open" be revised? Charging for the standard rather than making it openly available seems contrary to the spirit of openness. P.S. I do have a copy of ISO/IEC 9899:1999 and ISO/IEC 14882:2003, so please no remarks about being cheap or anything... although if you are tempted to say such things, you might want to consider the high school, undergraduate, and graduate students who might not have all that much extra cash. Also, you might want to consider the fact that the ISO website is really sketchy and they don't even tell you the cost until you proceed to the checkout... doesn't really encourage one to go and get a copy, now does it?

    Read the article

  • how to implement a sparse_vector class

    - by Neil G
    I am implementing a templated sparse_vector class. It's like a vector, but it only stores elements that differ from their default-constructed value. So, sparse_vector would store the index-value pairs for all indices whose value is not T(). I am basing my implementation on existing sparse vectors in numeric libraries, though mine will handle non-numeric types T as well. I looked at boost::numeric::ublas::coordinate_vector and eigen::SparseVector. Both store: size_t* indices_; /* a dynamic array */ T* values_; /* a dynamic array */ int size_; int capacity_; Why don't they simply use vector<pair<size_t, T>> data_;? My main question is what are the pros and cons of both systems, and which is ultimately better? The vector of pairs manages size_ and capacity_ for you, and simplifies the accompanying iterator classes; it also has one memory block instead of two, so it incurs half the reallocations, and might have better locality of reference. The other solution might search more quickly since the cache lines fill up with only index data during a search. There might also be some alignment advantages if T is an 8-byte type? It seems to me that vector of pairs is the better solution, yet both containers chose the other solution. Why?
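
    For comparison, the vector-of-pairs layout with binary search is short to write; a minimal sketch of that alternative (not the Boost or Eigen implementations mentioned above):

        #include <algorithm>
        #include <cstddef>
        #include <utility>
        #include <vector>

        template <typename T>
        class sparse_vector {
            std::vector<std::pair<std::size_t, T>> data_;  // kept sorted by index
        public:
            // Read access: a default-constructed T stands in for unstored indices.
            T get(std::size_t i) const {
                auto it = std::lower_bound(data_.begin(), data_.end(), i,
                    [](const std::pair<std::size_t, T>& p, std::size_t idx) {
                        return p.first < idx;
                    });
                return (it != data_.end() && it->first == i) ? it->second : T();
            }
            // Write access: overwrite an existing entry or insert in sorted position.
            void set(std::size_t i, const T& value) {
                auto it = std::lower_bound(data_.begin(), data_.end(), i,
                    [](const std::pair<std::size_t, T>& p, std::size_t idx) {
                        return p.first < idx;
                    });
                if (it != data_.end() && it->first == i) it->second = value;
                else data_.insert(it, std::make_pair(i, value));
            }
        };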

    Read the article

  • URL shortening: using inode as short name?

    - by Licky Lindsay
    The site I am working on wants to generate its own shortened URLs rather than rely on a third party like tinyurl or bit.ly. Obviously I could keep a running count of new URLs as they are added to the site and use that to generate the short URLs. But I am trying to avoid that if possible, since it seems like a lot of work just to make this one thing work. As the things that need short URLs are all real physical files on the webserver, my current solution is to use their inode numbers, as those are already generated for me, ready to use and guaranteed to be unique. function short_name($file) { $ino = @fileinode($file); $s = base_convert($ino, 10, 36); return $s; } This seems to work. The question is, what can I do to make the short URL even shorter? On the system where this is being used, the inodes for newly added files are in a range that makes the function above return a string 7 characters long. Can I safely throw away some (half?) of the bits of the inode? And if so, should it be the high bits or the low bits? I thought of using the crc32 of the filename, but that actually makes my short names longer than using the inode. Would something like this have any risk of collisions? I've been able to get down to single digits by picking the right value of "$referencefile". function short_name($file) { $ino = @fileinode($file); /* $referencefile is an arbitrarily selected pre-existing file, as all newer files will have higher inodes */ $ino = $ino - @fileinode($referencefile); $s = base_convert($ino, 10, 36); return $s; }

    Read the article

  • Flash won't load, embed error?

    - by Adrian M.
    Hello, I want to know why the flash movie in the header located here: http://www.dolphintemplate.com/demo/dolphin7/index.php?skin=dt_firestarter_red only loads in Firefox but NOT in IE and Chrome.. The flash movie resides in a iframe, this is the code of the iframe: <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html xmlns="http://www.w3.org/1999/xhtml" > <head> <meta http-equiv="Content-Type" content="text/html; charset=iso-8859-1" /> <title>header</title> </head> <body bgcolor="#000000" topmargin="0" leftmargin="0" marginwidth="0" marginheight="0"> <object data="header.swf" type="application/x-shockwave-flash" id="myflash" width="988" height="240"> <param name="movie" value="header.swf" /> <param name="bgcolor" value="#000000" /> <param name="height" value="988" /> <param name="width" value="240" /> <param name="quality" value="high" /> <param name="menu" value="false" /> <param name="allowscriptaccess" value="samedomain" /> <p>Adobe <a href="http://get.adobe.com/flashplayer/">Flash Player</a> is required to view this content.</p> </object> </body> </html> Thanks.

    Read the article

  • How does someone without a CS degree get an interview in a sluggish economy?

    - by Anon
    I've been programming off and on since 4th grade (for about 20 years). Technology is one of my passions but after working in the field for a couple years out of High School, I spent nine months and $15,000 getting an accredited certificate in music performance instead of CS. I've been doing lots of self study but I think a CS degree is overkill for most line of business applications. Even so, HR departments can't be expected to know that... How does one get their foot in the proverbial door without a degree, especially in a smaller "fly-over country" market? ...or... Where can I get the cheapest/easiest degree that will pass muster (including testing out of as much as possible)? Don't get me wrong, I'm down with learning new things but I don't necessarily need the expense or coaching to motivate me. EDIT Consolidating good answers: Networking/User Groups Portfolio/Open Source Contributions Look for hybrid jobs (How I got my start :) ) Seek un-elitist companies/hiring managers. (Play the numbers game) Start my own business. (This is a bit challenging for a family man but a very good answer. My reason for searching is to reduce my commute thereby allowing more time to cultivate income on the side) Avail myself of political subsidies to constituents in the teachers' unions ;) .

    Read the article

  • Delphi Performance: Case Versus If

    - by Andreas Rejbrand
    I guess there might be some overlap with previous SO questions, but I could not find a Delphi-specific question on this topic. Suppose that you want to check if an unsigned 32-bit integer variable "MyAction" is equal to any of the constants ACTION1, ACTION2, ... ACTIONn, where n is, say, 1000. I guess that, besides being more elegant, case MyAction of ACTION1: {code}; ACTION2: {code}; ... ACTIONn: {code}; end; is much faster than if MyAction = ACTION1 then {code} else if MyAction = ACTION2 then {code} ... else if MyAction = ACTIONn then {code}; I guess that the if variant takes time O(n) to complete (i.e. to find the right action) if the right action ACTIONi has a high value of i, whereas the case variant takes a lot less time (O(1)?). Am I correct that case is much faster? Am I correct that the time required to find the right action in the case statement is actually independent of n? I.e. is it true that it does not really take any longer to check a million cases than to check 10 cases? How, exactly, does this work?

    Read the article

  • lots of backbone views - performance issues?

    - by ksol
    tl;dr: I wonder if having lots (100+ for the moment, potentially up to 1000/2000 or more) of Backbone views (one per cell of a table) is too heavy or not. The project I'm working on revolves around a planning view. There is one row per user that covers 6 hours of a day, each hour split into 4 slots of 15 mn. This planning view is used to add "reservations" when clicking on a slot; it should highlight the correct slots on hover, and should also handle the cases when it is NOT possible to make a reservation - i.e. prevent the user from clicking on an "unavailable" slot. There are many reasons why a slot can't be clicked on: the user is not available at this time, or the user is in a reservation, or the app needs to "force" a delay slot between two reservations. Reservations (a div) are rendered in a slot (a cell of a table) and, by toying with dimensions, span the right number of slots. All this screen is handled with Backbone. So for each slot I'm hovering over, I need to check whether I can make a reservation there or not. As of this moment, I do this by toying with the data attributes on the slots: when a reservation object is added, the slots covered are "enhanced" with (among other things) the reservation object (the Backbone view object). But in some cases I don't quite have a grasp on yet, it mixes up, and when the reservation view is removed the slots are not "cleaned up": the previous class is not reset correctly. It is probably something I've done wrong or badly, but this is only going to get heavier; I think I should use another class of Backbone view objects here, but I'm afraid the number of slots, and therefore of view objects, will be high and cause performance issues. I don't know much about JS performance, so I'd like to have some feedback before jumping on that train. Any other advice on how to do this would be quite welcome too. Thanks for your time. If this is not clear enough, tell me, I'll try and rephrase it.

    Read the article

  • Are document-oriented databases any more suitable than relational ones for persisting objects?

    - by Owen Fraser-Green
    In terms of database usage, the last decade was the age of the ORM, with hundreds competing to persist our object graphs in plain old-fashioned RDBMSes. Now we seem to be witnessing the coming of age of document-oriented databases. These databases are highly optimized for schema-free documents but are also very attractive for their ability to scale out and query a cluster in parallel. Document-oriented databases also hold a couple of advantages over RDBMSes for persisting data models in object-oriented designs. As the tables are schema-free, one can store objects belonging to different classes in an inheritance hierarchy side by side. Also, as the domain model changes, so long as the code can cope with getting back objects from an old version of the domain classes, one can avoid having to migrate the whole database at every change. On the other hand, the performance benefits of document-oriented databases mainly appear when storing deeper documents - in object-oriented terms, classes which are composed of other classes, for example a blog post and its comments. In most of the examples of this I can come up with, though, such as the blog one, the gain in read access would appear to be offset by the penalty of having to write the whole blog post "document" every time a new comment is added. It looks to me as though document-oriented databases can bring significant benefits to object-oriented systems if one takes extreme care to organize the objects in deep graphs optimized for the way the data will be read and written, but this means knowing the use cases up front. In the real world, we often don't know until we actually have a live implementation we can profile. So is the case of relational vs. document-oriented databases one of swings and roundabouts? I'm interested in people's opinions and advice, in particular if anyone has built any significant applications on a document-oriented database.

    Read the article

  • Calling DI Container directly in method code (MVC Actions)

    - by fearofawhackplanet
    I'm playing with DI (using Unity). I've learned how to do constructor and property injection. I have a static container exposed through a property in my Global.asax file (MvcApplication class). I have a need for a number of different objects in my Controller. It doesn't seem right to inject these through the constructor, partly because of the high quantity of them, and partly because they are only needed in some action methods. The question is, is there anything wrong with just calling my container directly from within the action methods? public ActionResult Foo() { IBar bar = MvcApplication.Container.Resolve<IBar>(); /* Bar uses a default constructor; I'm not actually doing any injection here, I'm just telling my container to give me Bar when I ask for IBar so I can hide the existence of the concrete Bar from my Controller. */ } This seems the simplest and most efficient way of doing things, but I've never seen an example used in this way. Is there anything wrong with this? Am I missing the concept in some way?
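
    An alternative that keeps the container out of the action methods entirely is to register the mapping once at start-up and let the container build the controller, so dependencies arrive through the constructor. A rough sketch with the standard Unity API; hooking it into MVC still needs a Unity-aware controller factory (or, in later MVC versions, a dependency resolver), which is omitted here:

        public static class Bootstrapper
        {
            // Composition root: called once at application start-up.
            public static IUnityContainer BuildContainer()
            {
                var container = new UnityContainer();
                container.RegisterType<IBar, Bar>();
                return container;
            }
        }

        public class FooController : Controller
        {
            private readonly IBar _bar;

            // The container supplies IBar when it creates the controller,
            // so no action method ever has to call Resolve itself.
            public FooController(IBar bar)
            {
                _bar = bar;
            }

            public ActionResult Foo()
            {
                // ... use _bar ...
                return View();
            }
        }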

    Read the article

  • Which are the RDBMS that minimize the server roundtrips? Which RDBMS are better (in this area) than

    - by user193655
    When the latency is high ("when pinging the server takes time") the server roundtrips make the difference. Now I don't want to focus on the roundtrips created in programming, but on the roundtrips that occur "under the hood" in the DB engine, i.e. the roundtrips that are 100% dependent on how the RDBMS itself is written. I have been told that Firebird has more roundtrips than MySQL. But this is the only information I know. I am currently supporting MS SQL but I'd like to change RDBMS (because I use Express Editions and in my scenario they are quite limiting from the performance point of view), so to make a wise choice I would like to include this point in "my RDBMS comparison feature matrix" to understand which is the best RDBMS to choose as an alternative to MS SQL. So the bold sentence above would make me prefer MySQL to Firebird (for the roundtrips concept, not in general), but can anyone add information? And where does MS SQL sit? Is someone able to "rank" the roundtrip performance of the main RDBMSes, or at least: MS SQL, MySQL, PostgreSQL, Firebird (I am not interested in Oracle since it is not free, and if I have to change I would change to a free RDBMS)? Anyway, MySQL (as mentioned several times on Stack Overflow) has an unclear future and a licence that is not 100% free, so my final choice will probably fall on PostgreSQL or Firebird. Additional info: you could answer my question by making a simple list like: MS SQL: 3; MySQL: 1; Firebird: 2; PostgreSQL: 2 (where 1 is good, 2 average, 3 bad). Of course if you can post some links where the roundtrips per RDBMS are compared, that would be great.

    Read the article

  • DSP - Filter sweep effect

    - by Trap
    I'm implementing a 'filter sweep' effect (I don't know if that's the right name for it). What I do is basically create a low-pass filter and make it 'move' along a certain frequency range. To calculate the filter cut-off frequency at a given moment I use a user-provided linear function, which yields values between 0 and 1. My first attempt was to directly map the values returned by the linear function to the range of frequencies, as in cf = freqRange * lf(x). Although it worked OK, it sounded as if the sweep ran much faster when moving through the low frequencies and then slowed down on its way to the high-frequency zone. I'm not sure why this is, but I guess it has something to do with human hearing perceiving changes in frequency in a non-linear manner. My next attempt was to move the filter's cut-off frequency in a logarithmic way. It works much better now, but I still feel that the filter doesn't move at a constant perceived speed through the range of frequencies. How should I divide the frequency space to obtain a constant perceived sweep speed? Thanks in advance.
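
    The usual way to get a constant perceived sweep speed is to interpolate the cut-off exponentially, so that equal steps of the control function multiply the frequency by a constant factor rather than adding a constant amount: cf = f_min * (f_max / f_min)^lf(x). A small sketch in Python (the 20 Hz to 20 kHz range is just an example):

        def cutoff(x, f_min=20.0, f_max=20000.0):
            """Map a control value x in [0, 1] to a cut-off frequency.

            Equal steps in x multiply the frequency by a constant ratio, which
            matches the roughly logarithmic way frequency changes are perceived.
            """
            return f_min * (f_max / f_min) ** x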

    Read the article

  • Git-Based Source Control in the Enterprise: Suggested Tools and Practices?

    - by Bob Murphy
    I use git for personal projects and think it's great. It's fast, flexible, powerful, and works great for remote development. But now it's mandated at work and, frankly, we're having problems. Out of the box, git doesn't seem to work well for centralized development in a large (20+ developer) organization with developers of varying abilities and levels of git sophistication - especially compared with other source-control systems like Perforce or Subversion, which are aimed at that kind of environment. (Yes, I know, Linus never intended it for that.) But - for political reasons - we're stuck with git, even if it sucks for what we're trying to do with it. Here are some of the things we're seeing: The GUI tools aren't mature. Using the command line tools, it's far too easy to screw up a merge and obliterate someone else's changes. It doesn't offer per-user repository permissions beyond global read-only or read-write privileges. If you have permission to ANY part of a repository, you can do that same thing to EVERY part of the repository, so you can't do something like make a small-group tracking branch on the central server that other people can't mess with. Workflows other than "anything goes" or "benevolent dictator" are hard to encourage, let alone enforce. It's not clear whether it's better to use a single big repository (which lets everybody mess with everything) or lots of per-component repositories (which make for headaches trying to synchronize versions). With multiple repositories, it's also not clear how to replicate all the sources someone else has by pulling from the central repository, or to do something like get everything as of 4:30 yesterday afternoon. However, I've heard that people are using git successfully in large development organizations. If you're in that situation - or if you generally have tools, tips and tricks for making it easier and more productive to use git in a large organization where some folks are not command line fans - I'd love to hear what you have to suggest. BTW, I've asked a version of this question already on LinkedIn, and got no real answers but lots of "gosh, I'd love to know that too!"

    Read the article

  • java GC periodically enters into several full GC cycles

    - by Peter
    Environment: sun JDK 1.6.0_16 vm settings: -XX:+DisableExplicitGC -XX:+UseConcMarkSweepGC -Xms1024 -Xmx1024M -XX:MaxNewSize=448m -XX:NewSize=448m -XX:SurvivorRatio=4(6 also checked) -XX:MaxPermSize=128M OS: windows server 2003 processor: 4 cores of INTEL XEON 5130, 2000 Hz my application description: high intensity of concurrent(java 5 concurrency used) operations completed each time by commit to oracle. it's about 20-30 threads run non stop, doing tasks. application runs in JBOSS web container. My GC starts work normally, I see a lot of small GCs and all that time CPU shows good load, like all 4 cores loaded to 40-50%, CPU graph is stable. Then , after 1 min of good work, CPU starts drop to 0% on 2 cores from 4, it's graph becomes unstable, goes up and down("teeth"). I see, that my threads work slower(I have monitoring), I see that GC starts produce a lot of FULL GC during that time and next 4-5 minutes this situation remains as is, then for short period of time, like 1 minute, it gets back to normal situation, but shortly after that all bad thing repeats. Question: Why I have so frequent full GC??? How to prevent that? I played with SurvivorRatio - does not help. I noticed, that application behaves normally until first FULL GC occurs, while I have enough memory. Then it runs badly. my GC LOG: starts good then long period of FULL GCs(many of them) 1027.861: [GC 942200K-623526K(991232K), 0.0887588 secs] 1029.333: [GC 803279K(991232K), 0.0927470 secs] 1030.551: [GC 967485K-625549K(991232K), 0.0823024 secs] 1030.634: [GC 625957K(991232K), 0.0763656 secs] 1033.126: [GC 969613K-632963K(991232K), 0.0850611 secs] 1033.281: [GC 649899K(991232K), 0.0378358 secs] 1035.910: [GC 813948K(991232K), 0.3540375 secs] 1037.994: [GC 967729K-637198K(991232K), 0.0826042 secs] 1038.435: [GC 710309K(991232K), 0.1370703 secs] 1039.665: [GC 980494K-972462K(991232K), 0.6398589 secs] 1040.306: [Full GC 972462K-619643K(991232K), 3.7780597 secs] 1044.093: [GC 620103K(991232K), 0.0695221 secs] 1047.870: [Full GC 991231K-626514K(991232K), 3.8732457 secs] 1053.739: [GC 942140K(991232K), 0.5410483 secs] 1056.343: [Full GC 991232K-634157K(991232K), 3.9071443 secs] 1061.257: [GC 786274K(991232K), 0.3106603 secs] 1065.229: [Full GC 991232K-641617K(991232K), 3.9565638 secs] 1071.192: [GC 945999K(991232K), 0.5401515 secs] 1073.793: [Full GC 991231K-648045K(991232K), 3.9627814 secs] 1079.754: [GC 936641K(991232K), 0.5321197 secs]

    Read the article

  • Reading same file from multiple threads in C#

    - by Gustavo Rubio
    Hi. I was googling for some advice about this and I found some links. The most obvious was this one, but in the end what I'm wondering is how well my code is implemented. I have basically two classes: one is the Converter and the other is ConverterThread. I create an instance of this Converter class, which has a property ThreadNumber that tells me how many threads should be run at the same time (this is read from the user), since this application will be used on multi-CPU systems (physically, like 8 CPUs), so the idea is that this will speed up the import. The Converter instance reads a file that can range from 100 MB to 800 MB, and each line of this file is a tab-delimited value record that is imported into another destination, like a database. The ConverterThread class simply runs inside the thread (new Thread(ConverterThread.StartThread)) and has event notification, so when its work is done it can notify the Converter class, and then I can sum up the progress for all these threads and notify the user (in the GUI, for example) about how many of these records have been imported and how many bytes have been read. It seems, however, that I'm having some trouble, because I get random errors about the file not being able to be read, or the sum of the progress (percentage) going above 100%, which is not possible. I think that happens because the threads are not being well managed, and probably the information returned by the event is malformed (since it "travels" from one thread to another). Do you have any advice on better practices for implementing threads so I can accomplish this? Thanks in advance.
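
    One pattern that tends to avoid both symptoms is to give every worker its own read-only stream over the file (so no file handle is shared between threads) and to aggregate progress with Interlocked instead of summing values passed around in events. A minimal sketch of that idea, not the questioner's Converter classes:

        using System;
        using System.IO;
        using System.Threading;

        class ChunkImporter
        {
            private readonly string _path;
            private long _bytesRead;               // shared progress counter

            public ChunkImporter(string path) { _path = path; }

            public long BytesRead { get { return Interlocked.Read(ref _bytesRead); } }

            // Each worker thread calls this with its own byte range.
            public void ImportChunk(long start, long length)
            {
                // FileShare.Read lets several threads read the same file concurrently.
                using (var fs = new FileStream(_path, FileMode.Open, FileAccess.Read,
                                               FileShare.Read))
                {
                    fs.Seek(start, SeekOrigin.Begin);
                    using (var reader = new StreamReader(fs))
                    {
                        // ... read and import records up to start + length ...
                        Interlocked.Add(ref _bytesRead, length);   // thread-safe progress update
                    }
                }
            }
        }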

    Read the article

  • Can't send smtp email from network using C#, asp.net website

    - by Kaysar
    Hi, I have my code here, it works fine from my home, where my user is administrator, and I am connected to internet via a cable network. But, problem is when I try this code from my work place, it does not work. Shows error: "unable to connect to the remote server" From a different machine in the same network: "A socket operation was attempted to an unreachable network 209.xxx.xx.52:25" I checked with our network admin, and he assured me that all the mail ports are open [25,110, and other ports for gmail]. Then, I logged in with administrative privilege, there was a little improvement, it did not show any error, but the actual email was never received. Please note that, the code was tested from development environment, visual studio 2005 and 2008. Any suggestion will be much appreciated. Thanks in advance try { MailMessage mail_message = new MailMessage("[email protected]", txtToEmail.Text, txtSubject.Text, txtBody.Text); SmtpClient mail_client = new SmtpClient("SMTP.y7mail.com"); NetworkCredential Authentic = new NetworkCredential("[email protected]", "xxxxx"); mail_client.UseDefaultCredentials = true; mail_client.Credentials = Authentic; mail_message.IsBodyHtml = true; mail_message.Priority = MailPriority.High; try { mail_client.Send(mail_message); lblStatus.Text = "Mail Sent Successfully"; } catch (Exception ex) { System.Diagnostics.Debug.WriteLine(ex.Message); lblStatus.Text = "Mail Sending Failed\r\n" + ex.Message; } } catch (Exception ex) { lblStatus.Text = "Mail Sending Failed\r\n" + ex.Message; }
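
    Separately from the network question, two details of the snippet are worth double-checking: setting UseDefaultCredentials = true overrides the explicit NetworkCredential, and many providers only accept authenticated mail on the submission port (587) with SSL, while port 25 is commonly blocked on office networks. A hedged sketch of those settings; the host, port and account details are placeholders, not y7mail specifics:

        var client = new SmtpClient("smtp.example.com")
        {
            Port = 587,                     // submission port; 25 is often blocked at work
            EnableSsl = true,
            UseDefaultCredentials = false,  // must be false *before* assigning credentials
            Credentials = new NetworkCredential("user@example.com", "password")
        };

        var message = new MailMessage("user@example.com", "recipient@example.com",
                                      "Subject", "Body")
        {
            IsBodyHtml = true,
            Priority = MailPriority.High
        };

        client.Send(message);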

    Read the article
