Search Results

Search found 9193 results on 368 pages for 'batcher sort'.


  • Source of (programmer) inefficiency

    - by Daniel
    I am interested in gaining better insight into the possible reasons for personal inefficiency as programmers (and only in programming) due to – simply – our own errors (because we are human – well, almost all of us). I am not interested in how productive we are or in how many adjustments the customer asks for when the work is done, but in where and how each of us spends that part of our time on tasks that are unproductive and where there is no one to blame except ourselves. Excluding ego-feeding and/or self-gratification, what I am trying to get at (for all of us) is:

    - what the common issues eating our time are;
    - insight into the reasons for those issues;
    - simple ways for us, personally (not delegating actions to others or to our organizations), to correct our own problems.

    Please do not think in academic terms, but aim at the opportunity to compare our daily experiences and understand what our personal deficiencies are and how we try to fix them. If you are interested in responding to this post, please: integrate the list if you see something important (or obvious) missing; honestly highlight or name your first issue, telling the way you try to address and solve it, acting on yourself and yourself only, in a sort of "continuous quality improvement". My criterion for accepting the answer is: choose the best solution (feasibility and utility) to fix one (or more) of the problems on the list. Of course, selecting an error is not a vote on our skills: maybe we are hyper-professional programmers and lose only ten minutes every year, or we are terribly inefficient and lose a couple of days a week: the reasons for inefficiency could be really the same – just on a different scale.

    A possible list:

    - Plain errors in names (variables, functions).
    - Inability to see the obvious in your code.
    - Misreading.
    - Lack of concentration.
    - Trying to use a technology you have not mastered.
    - Errors with data types.
    - Time required to understand your previous code or your documentation.
    - Trying to do something more than requested because you enjoy it.
    - Using solutions more complicated than required because you enjoy it.
    - Plain logical errors.
    - Errors due to your own fault in communications.
    - Distraction.

    My first personal issue: "Trying to use a technology you do not master." I have to use several technologies daily, and I often need to spend significant time correcting code because my assumptions were plainly wrong. The reason for this: production needs put us under high pressure and make it difficult to find the time to learn. I try to address this by reading technical books – as many as I can – even if this actually consumes a lot of time.

    Read the article

  • Canonicalization of single, small pages like reviews or product categories [SEO]

    - by Valorized
    In general I pretty much like the idea of canonicalization, and in most cases Google explains the possible procedures in a clear way. For example: if I have duplicates because of parameters (e.g. &sort=desc), it's clear to use the canonical for the site, provided within the head-tag. However, I'm wondering how to handle "small – not to say thin-content – sites". What's my definition of a small site? An example: on one of my main sites we use a directory-based URL structure:

    - example.com/ (root)
    - example.com/category-abc/
    - example.com/category-abc/produkt-xy/

    Moreover we provide one page that includes all products: example.com/all-categories/ (lists all products the same way as in the categories). In the case of reviews, we use a similar structure:

    - example.com/reviews/product-xy/ shows all reviews for one certain product
    - example.com/reviews/product-xy/abc-your-product-is-great/ shows one certain review
    - example.com/reviews/ shows all reviews for all products (latest first)

    Let's make it even more complicated: on every product page, the latest 2 reviews appear at the end of the page. So you see, a lot of potential duplicates.

    Q1: Should I create canonicals for a) example.com/category-abc/ to example.com/all-categories/, b) example.com/reviews/product-xy/abc-your-product-is-great/ to example.com/reviews/product-xy/ or to example.com/reviews/ – or none of them?

    Q2: Can I link the collection of categories (all-categories/) and the collection of all reviews (reviews/ and reviews/product-xy/) to the single category and the single review respectively? Example: example.com/reviews/ includes – let's say – 100 reviews. Can I somehow use markup that tells search engines: "Hey, wait, you are now looking at a collection of 100 reviews – do not index this collection, you should rather prefer indexing every single review as a single page!"? In HTML it might be something like this (which – of course – does not work, it's only to show you what I mean): <div class="review" rel="canonical" href="http://example.com/reviews/product-xz/abc-your-product-is-great/">HERE GOES THE REVIEW</div>

    Reason: I don't think it is a great user experience if the user searches for "your product is great" and lands on example.com/reviews/ instead of example.com/reviews/product-xy/abc-your-product-is-great/. On the first page, he will have to search around and might stop because of frustration. The second result, however, might lead to a conversion. The same applies to categories: if the user is searching for category Z, he might land on the all-categories page and has to scroll down to the (last) category to find what he searched for (Z). So what's best practice? What should I do? Thank you for your help!

    Read the article

  • How the Oracle Database optimizer builds an execution plan: query transformation and access path analysis

    - by Yusuke.Yamamoto
    The Oracle Database optimizer builds an execution plan for a submitted SQL statement in two broad phases.

    1. Query Transformation: the SQL statement is rewritten into an equivalent form that can be optimized more effectively. Query Transformation covers Predicate Transformation, Common Sub-expression Elimination (CSE), Order-By Elimination (OBYE), Outer Join Elimination (OJE), Simple View Merging (SVM), Predicate Move Around (PM), Complex View Merging (CVM), Sub-query Unnesting (SU), Join Predicate Push Down (JPPD), OR Expansion, Star Transformation (ST), and more. One example of Predicate Transformation is Transitive Predicate Generation. Consider a query that joins emp and dept and restricts deptno to 10: select e.ename, d.loc from emp e, dept d where e.deptno=d.deptno and e.deptno=10; As written, only emp is filtered on deptno=10. If emp returns 10 rows for deptno=10 and dept has 20 rows, the join has to consider 10 rows x 20 rows = 200 combinations. With Transitive Predicate Generation the optimizer rewrites the query so that the same predicate is also applied to dept: select e.ename, d.location from emp e, dept d where e.deptno=d.deptno and e.deptno=10 and d.deptno=10; Because dept.deptno is unique, dept is reduced to a single row before the join, so only 10 x 1 = 10 combinations remain – roughly 1/20th of the work. (With dept reduced to a 1-row table the optimizer can also choose it as the driving/outer table and probe emp as the inner table, again arriving at about 10 combinations.)

    2. Access Path Analysis: after Query Transformation, the optimizer evaluates access paths (full table scan, ROWID access, index access, and so on), join methods (Nested Loop Join, Hash Join, Sort Merge Join), and join order, and chooses the cheapest combination. Inside Oracle Database, the component that performs Query Transformation is called the Logical Optimizer, and the component that performs Access Path Analysis is called the Physical Optimizer.

    Read the article

  • Fast programmatic compare of "timetable" data

    - by Brendan Green
    Consider train timetable data, where each service (or "run") has a data structure like this:

        public class TimeTable {
            public int Id { get; set; }
            public List<Run> Runs { get; set; }
        }

        public class Run {
            public List<Stop> Stops { get; set; }
            public int RunId { get; set; }
        }

        public class Stop {
            public int StationId { get; set; }
            public TimeSpan? StopTime { get; set; }
            public bool IsStop { get; set; }
        }

    We have a list of runs that operate against a particular line (the TimeTable class). Further, whilst we have a set collection of stations that are on a line, not all runs stop at all stations (that is, IsStop would be false, and StopTime would be null). Now, imagine that we have received the initial timetable, processed it, and loaded it into the above data structure. Once the initial load is complete, it is persisted into a database – the data structure is used only to load the timetable from its source and to persist it to the database. We are now receiving an updated timetable. The updated timetable may or may not have any changes to it – we don't know and are not told whether any changes are present. What I would like to do is perform a compare for each run in an efficient manner. I don't want to simply replace each run. Instead, I want to have a background task that runs periodically, downloads the updated timetable dataset, and then compares it to the current timetable. If differences are found, some action (not relevant to the question) will take place.

    I was initially thinking of some sort of checksum process where I could, for example, load both runs (that is, the one from the new timetable received and the one that has been persisted to the database) into the data structure, then add up all the hour components of the StopTime and all the minute components of the StopTime and compare the results (i.e. both the sum of hours and the sum of minutes would be the same, and differences would be introduced if a stop time is changed, a stop deleted or a new stop added). Would that be a valid way to check for differences, or is there a better way to approach this problem? I can see a problem in that, for example, one stop changed to be 2 minutes earlier and another changed to be 2 minutes later would produce a net zero change. Or am I overthinking this, and would it just be simpler to brute-check all stops to ensure that:

    - the updated run stops at the same stations; and
    - each stop is at the same time?
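
    For comparison, here is a minimal sketch of the brute-force, stop-by-stop check using the classes above. The method name AreSame and the assumption that both feeds list stops in the same order are mine, not from the question:

        using System;
        using System.Collections.Generic;

        public static class RunComparer
        {
            // Sketch only: compares two runs stop by stop, assuming the stops
            // appear in the same station order in both feeds.
            public static bool AreSame(Run current, Run updated)
            {
                if (current.RunId != updated.RunId) return false;
                if (current.Stops.Count != updated.Stops.Count) return false;

                for (int i = 0; i < current.Stops.Count; i++)
                {
                    Stop a = current.Stops[i];
                    Stop b = updated.Stops[i];
                    if (a.StationId != b.StationId) return false;   // stop added, removed or reordered
                    if (a.IsStop != b.IsStop) return false;         // station now skipped (or newly served)
                    if (a.StopTime != b.StopTime) return false;     // time changed (nullable-safe compare)
                }
                return true;
            }
        }

    Unlike the checksum idea, this cannot be fooled by offsetting changes (one stop 2 minutes earlier, another 2 minutes later), and for a few hundred stops per run it is still cheap enough to run in a periodic background task.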

    Read the article

  • Don Knuth and MMIXAL vs. Chuck Moore and Forth -- Algorithms and Ideal Machines -- was there cross-pollination / influence in their ideas / work?

    - by AKE
    Question: To what extent is it known (or believed) that Chuck Moore and Don Knuth had influence on each other's thoughts on ideal machines, or their work on algorithms? I'm interested in citations, interviews, articles, links, or any other sort of evidence. It could also be evidence of the form of A and B here suggest that Moore might have borrowed or influenced C and D from Knuth here, or vice versa. (Opinions are of course welcome, but references / links would be better!) Context: Until fairly recently, I have been primarily familiar with Knuth's work on algorithms and computing models, mostly through TAOCP but also through his interviews and other writings. However, the more I have been using Forth, the more I am struck by both the power of a stack-based machine model, and the way in which the spareness of the model makes fundamental algorithmic improvements more readily apparent. A lot of what Knuth has done in fundamental analysis of algorithms has, it seems to me, a very similar flavour, and I can easily imagine that in a parallel universe, Knuth might perhaps have chosen Forth as his computing model. That's the software / algorithms / programming side of things. When it comes to "ideal computing machines", Knuth in the 70s came up with the MIX computer model, and then, collaborating with designers of state-of-the-art RISC chips through the 90s, updated this with the modern MMIX model and its attendant assembly language MMIXAL. Meanwhile, Moore, having been using and refining Forth as a language, but using it on top of whatever processor happened to be in the computer he was programming, began to imagine a world in which the efficiency and value of stack-based programming were reflected in hardware. So he went on in the 80s to develop his own stack-based hardware chips, defining the term MISC (Minimal Instruction Set Computers) along the way, and ending up eventually with the first Forth chip, the MuP21. Both are brilliant men with keen insight into the art of programming and algorithms, and both work at the intersection between algorithms, programs, and bare metal hardware (i.e. hardware without the clutter of operating systems). Which leads me to the headlined question... Question:To what extent is it known (or believed) that Chuck Moore and Don Knuth had influence on each other's thoughts on ideal machines, or their work on algorithms?

    Read the article

  • Working with Reporting Services Filters – Part 3: The TOP and BOTTOM Operators

    - by smisner
    Thus far in this series, I have described using the IN operator and the LIKE operator. Today, I’ll continue the series by reviewing the TOP and BOTTOM operators. Today, I happened to be working on an example of using the TOP N operator and was not successful on my first try because the behavior is just a bit different than we find when using an “equals” comparison as I described in my first post in this series. In my example, I wanted to display a list of the top 5 resellers in the United States for AdventureWorks, but I wanted it based on a filter. I started with a hard-coded filter like this: Expression Data Type Operator Value [ResellerSalesAmount] Float Top N 5 And received the following error: A filter value in the filter for tablix 'Tablix1' specifies a data type that is not supported by the 'TopN' operator. Verify that the data type for each filter value is Integer. Well, that puzzled me. Did I really have to convert ResellerSalesAmount to an integer to use the Top N operator? Just for kicks, I switched to the Top % operator like this: Expression Data Type Operator Value [ResellerSalesAmount] Float Top % 50 This time, I got exactly the results I expected – I had a total of 10 records in my dataset results, so 50% of that should yield 5 rows in my tablix. So thinking about the problem with Top N some  more, I switched the Value to an expression, like this: Expression Data Type Operator Value [ResellerSalesAmount] Float Top N =5 And it worked! So the value for Top N or Top % must reflect a number to plug into the calculation, such as Top 5 or Top 50%, and the expression is the basis for determining what’s in that group. In other words, Reporting Services will sort the rows by the expression – ResellerSalesAmount in this case – in descending order, and then filter out everything except the topmost rows based on the operator you specify. The curious thing is that, if you’re going to hard-code the value, you must enter the value for Top N with an equal sign in front of the integer, but you can omit the equal sign when entering a hard-coded value for Top %. This experience is why working with Reporting Services filters is not always intuitive! When you use a report parameter to set the value, you won’t have this problem. Just be sure that the data type of the report parameter is set to Integer. Jessica Moss has an example of using a Top N filter in a tablix which you can view here. Working with Bottom N and Bottom % works similarly. You just provide a number for N or for the percentage and Reporting Services works from the bottom up to determine which rows are kept and which are excluded.

    Read the article

  • What's new in RadChart for 2010 Q1 (Silverlight / WPF)

    Greetings, RadChart fans! It is with great pleasure that I present this short highlight of our accomplishments for the Q1 release :). We've worked very hard to make the best Silverlight and WPF charting product even better. Here is some of what we did during the past few months.

    1) Zooming & scrolling and the new sampling engine: without a doubt one of the most important things we did. This new feature allows you to bind your chart to a very large set of data with blazing performance. Don't take my word for it – give it a try!

    2) New smart label positioning and spider-like labels: this new feature really helps with very busy graphs. You can play with the different settings we offer in this example.

    3) Sorting and filtering: much like our RadGridView control, the chart now allows you to sort and filter your data out of the box with a single line of code!

    4) Legend improvements: we've also been paying attention to those of you who wanted a much-improved legend. It is now possible to customize the look and feel of legend items and the legend position with a single click.

    5) Custom palette brushes: you have told us that you want to easily customize all palette colors using a single clean API from both XAML and code-behind. The new custom palette brushes API does exactly that.

    There are numerous other improvements as well, such as much-improved themes, performance optimizations and other features. If you want to dig in further, check the release notes and the changes and backwards compatibility topics. Feel free to share the pains and gains of working with RadChart. Our team is always open to receiving constructive feedback and beer :-)

    Read the article

  • How to store Role Based Access rights in web application?

    - by JonH
    Currently working on a web-based CRM-type system that deals with various modules such as Companies, Contacts, Projects, Sub-Projects, etc. A typical CRM-type system (ASP.NET web forms, C#, SQL Server backend). We plan to implement role-based security so that basically a user can have one or more roles. Roles would be broken down first by the module type, such as:

    - Company
    - Contact

    and then by the actions for that module. For instance, each module would end up with a table such as this (Role1 example):

        Module     Create    Edit          Delete    View
        Company    Yes       Owner Only    No        Yes
        Contact    Yes       Yes           Yes       Yes

    In the above case Role1 has two module types (Company and Contact). For Company, the person assigned to this role can create companies, can view companies, can only edit records he/she created, and cannot delete. For this same role, for the module Contact, this user can create contacts, edit contacts, delete contacts, and view contacts (full rights, basically). I am wondering: is it best, upon coming into the system, to put the user's roles in session with something like a List<Role> roles, where the Role class would have some sort of List<Module> modules (which can contain Company, Contact, etc.)? Something to the effect of:

        class Role {
            string name;
            string desc;
            List<Module> modules;
        }

    And the module action class would have a set of actions (Create, Edit, Delete, etc.) for each module:

        class ModuleActions {
            List<Action> actions;
        }

    And the action has a value of whether the user can perform the right:

        class Action {
            string right;
        }

    Just a rough idea; I know the action could be an enum and the ModuleAction could probably be eliminated with a List<x, y>. My main question is what would be the best way to store this information in this type of application. Should I store it in the user's session state (I have a session class where I manage things related to the user)? I generally load this during the initial loading of the application (global.asax), so I can simply tack onto this session. Or should this be loaded at the page load event of each module (page load of Company, etc.)? I eventually need to be able to hide / unhide various buttons / divs based on the user's role, and that is what got me thinking to load this via session. Any examples or pointers would be great.
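
    As a rough sketch of how the structure described above could look – with an enum in place of the string right and a single lookup helper – note that every type and member name here is illustrative, not an established API:

        using System.Collections.Generic;
        using System.Linq;

        public enum ModuleAction { Create, Edit, EditOwnOnly, Delete, View }

        public class ModulePermission
        {
            public string ModuleName { get; set; }               // e.g. "Company", "Contact"
            public HashSet<ModuleAction> Actions { get; set; }   // the rights granted for this module
        }

        public class Role
        {
            public string Name { get; set; }
            public string Description { get; set; }
            public List<ModulePermission> Modules { get; set; }
        }

        public static class Authorization
        {
            // Typically loaded once at login and kept in session, e.g.:
            //   Session["Roles"] = rolesLoadedFromDatabase;   // List<Role>
            public static bool Can(IEnumerable<Role> roles, string module, ModuleAction action)
            {
                return roles.SelectMany(r => r.Modules)
                            .Any(m => m.ModuleName == module && m.Actions.Contains(action));
            }
        }

    With something like this in session, each page load only has to call Authorization.Can(userRoles, "Company", ModuleAction.Edit) when deciding whether to show or hide a button or div.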

    Read the article

  • Best practices for logging and tracing in .NET

    - by Levidad
    I've been reading a lot about tracing and logging, trying to find some golden rule for best practices in the matter, but there isn't any. People say that good programmers produce good tracing, but put that way, it has to come from experience. I've also read similar questions here and around the internet, and they are not really the same thing I am asking, or do not have a satisfying answer, maybe because the questions lack some detail. So, folks say that tracing should sort of replicate the experience of debugging the application in cases where you can't attach a debugger. It should provide enough context so that you can see which path is taken at each control point in the application. Going deeper, you can even distinguish between tracing and event logging, in that "event logging is different from tracing in that it captures major states rather than detailed flow of control". Now, say I want to do my tracing and logging using only the standard .NET classes, those in the System.Diagnostics namespace. I figured that the TraceSource class is better for the job than the static Trace class, because I want to differentiate among the trace levels, and using the TraceSource class I can pass in a parameter informing the event type, while using the Trace class I must use Trace.WriteLineIf and then verify things like SourceSwitch.TraceInformation and SourceSwitch.TraceErrors, and it doesn't even have properties like TraceVerbose or TraceStart. With all that in mind, would you consider it good practice to do as follows:

    - Trace a "Start" event when beginning a method, which should represent a single logical operation or a pipeline, along with a string representation of the parameter values passed in to the method.
    - Trace an "Information" event when inserting an item into the database.
    - Trace an "Information" event when taking one path or another in an important if/else statement.
    - Trace a "Critical" or "Error" event in a catch block, depending on whether this is a recoverable error.
    - Trace a "Stop" event when finishing the execution of the method.

    And also, please clarify when it is best to trace Verbose and Warning event types. If you have examples of code with nice trace/logging and are willing to share, that would be excellent. Note: I've found some good information here, but still not what I am looking for: http://msdn.microsoft.com/en-us/magazine/ff714589.aspx Thanks in advance!
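
    For reference, a minimal sketch of the TraceSource pattern the question describes; the source name, switch level and the OrderService/PlaceOrder method are invented for illustration, and in practice the level and listeners would normally be set in app.config rather than in code:

        using System;
        using System.Diagnostics;

        public class OrderService
        {
            // One named source per logical area of the application.
            private static readonly TraceSource Trace =
                new TraceSource("MyApp.OrderService", SourceLevels.All);

            public void PlaceOrder(int customerId, decimal amount)
            {
                Trace.TraceEvent(TraceEventType.Start, 0,
                    "PlaceOrder(customerId={0}, amount={1})", customerId, amount);
                try
                {
                    // ... insert the order into the database ...
                    Trace.TraceEvent(TraceEventType.Information, 0, "Order row inserted");
                    Trace.TraceEvent(TraceEventType.Stop, 0, "PlaceOrder completed");
                }
                catch (Exception ex)
                {
                    Trace.TraceEvent(TraceEventType.Error, 0, "PlaceOrder failed: {0}", ex);
                    throw;
                }
            }
        }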

    Read the article

  • Stagnating in programming

    - by Coder
    Time after time this question came up in my mind, but up until today I wasn't thinking about it much. I have been programming for maybe around 8 years now, and for the last two years it seems I'm not as keen to pick up new technologies anymore. Maybe that's a burnout or something, but I'd say it's experience and what I like, that's stopping me from running after the latest and greatest. I'm C++ developer, by this I mean, I love close to metal programming. I have no problems tracing problems through assembly, using tools like WinDbg or HexView. When I use constructs, I think about how they are realized underneath, how the bits are set and unset under the hood. I love battling with complex threading problems and doing everything hardcore way, even by hand if the regular solutions seem half baked. But I also love the C++0x stuff, and use it a lot. And all C++ code as long as it's not cumbersome compared to C counterparts, sometimes I also fall back to sort of "Super C" if the C++ way is ugly. And then there are all other developers who seem to be way more forward looking, .Net 4.0 MVC, WPF, all those Microsoft X#s, LINQ languages, XML and XSLT, mobile devices and so on. I have done a considerable amount of .NET, SQL, ASPX programming, but the further I go, the less I want to try those technologies. Is that bad? Almost every day I hear people saying that managed code is the only way forward, WPF is the way to go. I hear that C++ is godawful, and you can't code anything in it that's somewhat stable. But I don't buy it. With the experience I have, and the knowledge of how native code is compiled and executes, I can say I find it extremely rare that C++ code is unstable, or leaks, or causes crashes that takes more than 30 seconds to identify and fix. And to tell the truth, I've seen enough problems with other "cool" languages that I'd say C++ is even more stable and production proof than the safe languages, at least for me. The only thing that scares me in C++ is new frameworks, I don't trust them, and I use them extra sparingly. STL - yes, ATL - very sparingly, everything else... Well, not very keen on it. Most huge problems I've ran into, all were related to frameworks, not the language itself. Some overrided operator here, bad hierarchy there, poor class design here, mystical castings there. Other than that, C/C++ (yes, I use them together) still seems a very controlled and stable way to develop applications. Am I stagnating? Should I switch a profession, or force myself in all that marketing hype? Are there more developers who feel the same way?

    Read the article

  • Video stutter when using external drive

    - by psion
    When using boxee to play video files off of an external western digital 1TB drive formatted NTFS, I notice a slight stutter in the video every 5-10 seconds. When using mplayer, it doesn't stutter as often, but it still stutters occasionally. If I play the video off of the local sata drive, it plays fine even in boxee. I use this computer as my HTPC and I just switched from windows to linux on it. In windows, I never had any sort of stutter playing movies from the drive. I am using the latest intel graphics drivers (for the intel GMA 950) root@eee-htpc:/home/htpc# grep wd /etc/mtab /dev/sdb1 /mnt/wd2 fuseblk rw,nosuid,nodev,allow_other,blksize=512 0 0 I notice that despite trying to use ntfs or ntfs-3g, ubuntu uses ntfs-fuse which I've heard is slower. /dev/sdb1: Timing buffered disk reads: 80 MB in 3.07 seconds = 26.08 MB/sec root@eee-htpc:/mnt/wd2# dd if=/dev/zero of=./120mb bs=1024 count=120000 root@eee-htpc:/mnt/wd2# time mv ./120mb /home/htpc real 0m2.095s user 0m0.016s sys 0m0.736s Even though fuse has a reputation for being slow, it should easily be fast enough for playing standard definition video files. So why the video stutter? edit: The issue seems to be overhead cpu usage from either playing off of a usb device or ntfs/fuse. Watching CPU usage with top, local files use 10-40% CPU. Watching the same video on the external formatted ntfs, it spikes to 170% (over 100% because of hyperthreading). To me it seems like it must be overhead from the fuse driver, though I don't know if it has more or less overhead than ntfs-3g. It's a EEEBox B202 that has an atom 270, so not exactly the most powerful out there. edit2: I believe the solution would be to use non-fuse drivers or different fuse drivers. so far I have not been able to. edit3: I've probably edited this more times than I should, but as an update I have upgraded ntfs drivers to ntfs-3g 2010.8.8 external FUSE 28 - Third Generation NTFS Driver using the following PPA - ppa:x3lectric/team-iquik-releases. When first opening a video file in boxee that's on ntfs there's still the same amount of lag. After a few minutes of video, the lag seems to go away and the cpu usage comes down to 10-40%. Every so often though, it begins to stutter again. Also, if I skip ahead/back in the file, it begins to stutter a lot.

    Read the article

  • Monitoring Your Servers

    - by Grant Fritchey
    If you are the DBA in a large scale enterprise, you’re probably already monitoring your servers for up-time and performance. But if you work for a medium-sized business, a small shop, or even a one-man operation, chances are pretty good that you’re not doing that sort of monitoring. You know that you’re supposed to be doing it, but other things, more important at-the-moment things, keep getting in the way. After all, which is more important, some monitoring or backup testing?  Backup testing, of course. Monitoring is frequently one of those things that you do when can get around to it.  Well, as you can see at the right, I have your round tuit ready to go. What if I told you that you could get monitoring on your servers for up-time, job completion, performance, all the standard stuff? And what if I told you that you wouldn’t need to install and configure another server in your environment to get it done? And what if I told you that you’d be able to set up and customize your alerts so you could know if your server was offline or a drive was full? Almost nothing for you to do, and you’ll have a full-blown monitoring process. Sounds to good to be true doesn’t it? Well, it’s coming. We’re creating an online, remote, monitoring system here at Red Gate. You’ll be able to use our SQL Monitor tool (which you can see here, monitoring SQL Server Central in real time) to keep track of your systems, but without having to set up a server and a database for storing the information collected. Instead, we’re taking advantage of services available through the internet to enable collection and storage of this information remotely, off your systems. All you have to do is install a piece of software that will communicate between our service and your servers and you’ll be off and running. It’s that easy. Before you get too excited, let me break the news that this is the near future I’m talking about. We’re setting up the program and there’s a sign-up you can use to get in on the initial tests.

    Read the article

  • Should I be an algorithm developer, or java web frameworks type developer?

    - by Derek
    So - as I see it, there are really two kinds of developers. Those that do frameworks, web services, pretty-making front ends, etc etc. Then there are developers that write the algorithms that solve the problem. That is, unless the problem is "display this raw data in some meaningful way." In that case, the framework/web developer guy might be doing both jobs. So my basic problem is this. I have been an algorithms kind of software developer for a few years now. I double majored in Math and Computer science, and I have a master's in systems engineering. I have never done any web-dev work, with the exception of a couple minor jobs, and some hobby level stuff. I have been job interviewing lately, and this is what happens: Job is listed as "programmer- 5 years of experience with the following: C/C++, Java,Perl, Ruby, ant, blah blah blah" Recruiter calls me, says they want me to come in for interview In the interview, find out they have some webservices development, blah blah blah When asked in the interview, talk about my experience doing algorithms, optimization, blah blah..but very willing to learn new languages, frameworks, etc Get a call back saying "we didn't think you were a fit for the job you interviewed wtih, but our algorithm team got wind of you and wants to bring you on" This has happened to me a couple times now - see a vague-ish job description looking for a "programmer" Go in, find out they are doing some sort of web-based tool, maybe with some hardcore algorithms running in the background. interview with people for the web-based tool, but get an offer from the algorithms people. So the question is - which job is the better job? I basically just want to get a wide berth of experience at this level of my career, but are algorithm developers so much in demand? Even more so than all these supposed hot in demand web developer guys? Will I be ok in the long run if I go into the niche of math based algorithm development, and just little to no, or hobby level web-dev experience? I basically just don't want to pigeon hole myself this early. My salary is already starting to get pretty high - and I can see a company later on saying "we really need a web developer, but we'll hire this 50k/year college guy, instead of this 100k/year experience algorithm guy" Cliffs notes: I have been doing algorithm development. I consider myself to be a "good programmer." I would have no problem picking up web technologies and those sorts of frameworks. During job interviews, I keep getting "we think you've got a good skillset - talk to our algorithm team" instead of wanting me to learn new skills on the job to do their web services or whhatever other new technology they are doing. Edit: Whenever I am talking about algorithm development here - I am talking about the code that produces the answer. Typically I think of more math-based algorithms: solving a financial problem, solving a finite element method, image processing, etc

    Read the article

  • Camera not staying behind model while moving in circle

    - by ChocoMan
    I have a camera behind a model (3rd Person) and I'm having problems KEEPING it behind the model. When I first start my game, you see the back of the model. If the model moves forward, backward or strafe left or right, the camera moves along accordingly. When the model rotates (stationary), the camera rotates accordingly with the model still pointing at the model's back. So far, so good. The problem comes when the player is BOTH moving and rotating at the same time. Take for example a model moving in a circular pattern like running around a track. As the model moves in this motion, the model rotates slightly more with each complete rotation. Eventually, instead of looking at the model's back, eventually you will see the model in a profile view and before you know it, the model's front is facing the camera. And when you stop moving the model, the model stays in that position. So, as long as my model is stationary and rotating in one place, the camera rotates correctly. But as soon as there is any sort movement while rotating, the model is offset by a mysterious increasing amount. How can I keep the camera maintaining the same view no matter how I move AND rotate at the same time? // Rotates model and pitches camera on its own axis public void modelRotMovement(GamePadState pController) { /* For rotating the model left or right. * Camera maintains distance from model * throughout rotation and if model moves * to a new position. */ Yaw = pController.ThumbSticks.Right.X * MathHelper.ToRadians(speedAngleMAX); AddRotation = Quaternion.CreateFromAxisAngle(Vector3.Up, yaw); //AddRotation = Quaternion.CreateFromYawPitchRoll(Yaw, 0, 0); ModelLoad.MRotation *= AddRotation; MOrientation = Matrix.CreateFromQuaternion(ModelLoad.MRotation); Pitch = pController.ThumbSticks.Right.Y * MathHelper.ToRadians(speedAngleMAX); AddPitch = Quaternion.CreateFromAxisAngle(Vector3.Up, pitch); ModelLoad.CRotation *= AddPitch; COrientation = Matrix.CreateFromQuaternion(ModelLoad.CRotation); } // Orbit (yaw) Camera around model public void cameraYaw(float yaw) { Vector3 yawAngle = ModelLoad.CameraPos - ModelLoad.camTarget; Vector3 axisYaw = Vector3.Up; ModelLoad.CameraPos = Vector3.Transform(yawAngle, Matrix.CreateFromAxisAngle(axisYaw, yaw)) + ModelLoad.camTarget; }
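
    One common way to avoid the accumulating offset is to stop rotating the camera position incrementally and instead rebuild it every frame from the model's current orientation. A rough sketch, reusing MRotation from the code above; cameraOffset and ModelLoad.ModelPosition are assumed names for "behind and above the model" and "where the model currently is":

        // Sketch: rebuild the chase camera from scratch each frame, so rotation
        // and movement can never drift apart.
        Vector3 cameraOffset = new Vector3(0f, 2f, 6f);   // local offset behind the model

        public void UpdateChaseCamera()
        {
            Matrix orientation = Matrix.CreateFromQuaternion(ModelLoad.MRotation);

            // Transform the fixed local offset into world space behind the model.
            Vector3 worldOffset = Vector3.Transform(cameraOffset, orientation);

            ModelLoad.camTarget = ModelLoad.ModelPosition;
            ModelLoad.CameraPos = ModelLoad.ModelPosition + worldOffset;
        }

    Because the camera position is derived from the model's orientation and position every frame (rather than being nudged by this frame's yaw), turning while moving cannot accumulate error, and the camera stays locked behind the model.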

    Read the article

  • Understanding Data Science: Recent Studies

    - by Joe Lamantia
    If you need such a deeper understanding of data science than Drew Conway's popular venn diagram model, or Josh Wills' tongue in cheek characterization, "Data Scientist (n.): Person who is better at statistics than any software engineer and better at software engineering than any statistician." two relatively recent studies are worth reading.   'Analyzing the Analyzers,' an O'Reilly e-book by Harlan Harris, Sean Patrick Murphy, and Marck Vaisman, suggests four distinct types of data scientists -- effectively personas, in a design sense -- based on analysis of self-identified skills among practitioners.  The scenario format dramatizes the different personas, making what could be a dry statistical readout of survey data more engaging.  The survey-only nature of the data,  the restriction of scope to just skills, and the suggested models of skill-profiles makes this feel like the sort of exercise that data scientists undertake as an every day task; collecting data, analyzing it using a mix of statistical techniques, and sharing the model that emerges from the data mining exercise.  That's not an indictment, simply an observation about the consistent feel of the effort as a product of data scientists, about data science.  And the paper 'Enterprise Data Analysis and Visualization: An Interview Study' by researchers Sean Kandel, Andreas Paepcke, Joseph Hellerstein, and Jeffery Heer considers data science within the larger context of industrial data analysis, examining analytical workflows, skills, and the challenges common to enterprise analysis efforts, and identifying three archetypes of data scientist.  As an interview-based study, the data the researchers collected is richer, and there's correspondingly greater depth in the synthesis.  The scope of the study included a broader set of roles than data scientist (enterprise analysts) and involved questions of workflow and organizational context for analytical efforts in general.  I'd suggest this is useful as a primer on analytical work and workers in enterprise settings for those who need a baseline understanding; it also offers some genuinely interesting nuggets for those already familiar with discovery work. We've undertaken a considerable amount of research into discovery, analytical work/ers, and data science over the past three years -- part of our programmatic approach to laying a foundation for product strategy and highlighting innovation opportunities -- and both studies complement and confirm much of the direct research into data science that we conducted. There were a few important differences in our findings, which I'll share and discuss in upcoming posts.

    Read the article

  • Challenge: Learn One New Thing Today

    - by BuckWoody
    Most of us know that there's a lot to learn. I'm teaching a class this morning, and even on the subject where I'm the "expert" (that word always makes me nervous!) I still have a lot to learn. To learn, sometimes I take a class, read a book, or carve out a large chunk of time so that I can fully grasp the subject. But since I've been working, I really don't have a lot of opportunities to do that. Like you, I'm really busy. So what I've been able to learn is to take just a few moments each day and learn something new about SQL Server. I thought I would share that process here. First, I started with an outline of the product. You can use Books Online, a college class syllabus, a training class outline, or a comprehensive book table of contents. Then I checked off the things I felt I knew a little about. Sure, I'll come back around to those, but I want to be as efficient as I can. I then trolled various checklists to see what I needed to know about the subjects I didn't have checked off. From there (I'm doing all this in a notepad, and then later in OneNote when that came out) I developed a block of text for that subject. Every time I ran across a book, article, web site or recording on that topic I wrote that reference down. Later I went back and quickly looked over those resources and tried to figure out how I could parcel it out - 10 minutes for this one, a free seminar (like the one I'm teaching today - ironic) takes 4 hours, a web site takes an hour to grok, that sort of thing. Then all I did was figure out how much time each day I'll give to training. Sure, it literally may be ten minutes, but it adds up. One final thing - as I used something I learned, I came back and made notes in that topic. You learn to play the piano not just from a book, but by playing the piano, after all. If you don't use what you learn, you'll lose it. So if you're interested in getting better at SQL Server, and you're willing to do a little work, try out this method. Leave a note here for others to encourage them.

    Read the article

  • Internal Mutation of Persistent Data Structures

    - by Greg Ros
    To clarify, when I mean use the terms persistent and immutable on a data structure, I mean that: The state of the data structure remains unchanged for its lifetime. It always holds the same data, and the same operations always produce the same results. The data structure allows Add, Remove, and similar methods that return new objects of its kind, modified as instructed, that may or may not share some of the data of the original object. However, while a data structure may seem to the user as persistent, it may do other things under the hood. To be sure, all data structures are, internally, at least somewhere, based on mutable storage. If I were to base a persistent vector on an array, and copy it whenever Add is invoked, it would still be persistent, as long as I modify only locally created arrays. However, sometimes, you can greatly increase performance by mutating a data structure under the hood. In more, say, insidious, dangerous, and destructive ways. Ways that might leave the abstraction untouched, not letting the user know anything has changed about the data structure, but being critical in the implementation level. For example, let's say that we have a class called ArrayVector implemented using an array. Whenever you invoke Add, you get a ArrayVector build on top of a newly allocated array that has an additional item. A sequence of such updates will involve n array copies and allocations. Here is an illustration: However, let's say we implement a lazy mechanism that stores all sorts of updates -- such as Add, Set, and others in a queue. In this case, each update requires constant time (adding an item to a queue), and no array allocation is involved. When a user tries to get an item in the array, all the queued modifications are applied under the hood, requiring a single array allocation and copy (since we know exactly what data the final array will hold, and how big it will be). Future get operations will be performed on an empty cache, so they will take a single operation. But in order to implement this, we need to 'switch' or mutate the internal array to the new one, and empty the cache -- a very dangerous action. However, considering that in many circumstances (most updates are going to occur in sequence, after all), this can save a lot of time and memory, it might be worth it -- you will need to ensure exclusive access to the internal state, of course. This isn't a question about the efficacy of such a data structure. It's a more general question. Is it ever acceptable to mutate the internal state of a supposedly persistent or immutable object in destructive and dangerous ways? Does performance justify it? Would you still be able to call it immutable? Oh, and could you implement this sort of laziness without mutating the data structure in the specified fashion?
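
    A rough sketch of the queued-update vector described above, ignoring thread safety for brevity; every name here (LazyArrayVector, Set, Get) is invented for illustration:

        using System.Collections.Generic;

        public class LazyArrayVector<T>
        {
            private T[] items;                                   // shared, never written in place
            private readonly Queue<KeyValuePair<int, T>> pendingSets;

            public LazyArrayVector(T[] items)
                : this(items, new Queue<KeyValuePair<int, T>>()) { }

            private LazyArrayVector(T[] items, Queue<KeyValuePair<int, T>> pending)
            {
                this.items = items;
                this.pendingSets = pending;
            }

            // Cheap: returns a new vector that shares the array and carries the
            // queued write (the small pending queue is copied to stay persistent).
            public LazyArrayVector<T> Set(int index, T value)
            {
                var pending = new Queue<KeyValuePair<int, T>>(pendingSets);
                pending.Enqueue(new KeyValuePair<int, T>(index, value));
                return new LazyArrayVector<T>(items, pending);
            }

            // The first read after a run of Sets pays for one array copy; the
            // "dangerous" internal mutation is swapping in the new array and
            // draining the queue (a real implementation needs exclusive access here).
            public T Get(int index)
            {
                if (pendingSets.Count > 0)
                {
                    var copy = (T[])items.Clone();
                    while (pendingSets.Count > 0)
                    {
                        var update = pendingSets.Dequeue();
                        copy[update.Key] = update.Value;
                    }
                    items = copy;
                }
                return items[index];
            }
        }

    Externally the type still behaves persistently – the original array is never written, and every Set returns a new object – even though Get mutates this instance's private state under the hood.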

    Read the article

  • Handy SQL Server Function Series: Part 1

    - by Most Valuable Yak (Rob Volk)
    I've been preparing to give a presentation on SQL Server for a while now, and a topic that was recommended was SQL Server functions.  More specifically, the lesser-known functions (like @@OPTIONS), and maybe some interesting ways to use well-known functions (like using PARSENAME to split IP addresses)  I think this is a veritable goldmine of useful information, and researching for the presentation has confirmed that beyond my initial expectations.I even found a few undocumented/underdocumented functions, so for the first official article in this series I thought I'd start with 2 of each, COLLATIONPROPERTY() and COLLATIONPROPERTYFROMID().COLLATIONPROPERTY() provides information about (wait for it) collations, SQL Server's method for handling foreign character sets, sort orders, and case- or accent-sensitivity when sorting character data.  The Books Online entry for  COLLATIONPROPERTY() lists 4 options for code page, locale ID, comparison style and version.  Used in conjunction with fn_helpcollations():SELECT *, COLLATIONPROPERTY(name,'LCID') LCID, COLLATIONPROPERTY(name,'CodePage') CodePage, COLLATIONPROPERTY(name,'ComparisonStyle') ComparisonStyle, COLLATIONPROPERTY(name,'Version') Version FROM fn_helpcollations()You can get some excellent information. (c'mon, be honest, did you even know about fn_helpcollations?)Collations in SQL Server have a unique name and ID, and you'll see one or both in various system tables or views like syscolumns, sys.columns, and INFORMATION_SCHEMA.COLUMNS.  Unfortunately they only link the ID and name for collations of existing columns, so if you wanted to know the collation ID of Albanian_CI_AI_WS, you'd have to declare a column with that collation and query the system table.While poking around the OBJECT_DEFINITION() of sys.columns I found a reference to COLLATIONPROPERTYFROMID(), and the unknown property "Name".  Not surprisingly, this is how sys.columns finds the name of the collation, based on the ID stored in the system tables.  (Check yourself if you don't believe me)Somewhat surprisingly, the "Name" property also works for COLLATIONPROPERTY(), although you'd already know the name at that point.  Some wild guesses and tests revealed that "CollationID" is also a valid property for both functions, so now:SELECT *, COLLATIONPROPERTY(name,'LCID') LCID, COLLATIONPROPERTY(name,'CodePage') CodePage, COLLATIONPROPERTY(name,'ComparisonStyle') ComparisonStyle, COLLATIONPROPERTY(name,'Version') Version, COLLATIONPROPERTY(name,'CollationID') CollationID FROM fn_helpcollations() Will get you the collation ID-name link you…probably didn't know or care about, but if you ever get on Jeopardy! and this question comes up, feel free to send some of your winnings my way. :)And last but not least, COLLATIONPROPERTYFROMID() uses the same properties as COLLATIONPROPERTY(), so you can use either one depending on which value you have available.Keep an eye out for Part 2!

    Read the article

  • Why is Spritebatch drawing my Textures out of order?

    - by Andrew
    I just started working with XNA Studio after programming 2D games in java. Because of this, I have absolutely no experience with Spritebatch and sprite sorting. In java, I could just layer the images by calling the draw methods in order. For a while, my Spritebatch was working fine in deferred sorting mode, but when I made a change to one of my textures, it suddenly started drawing them out of order. I have searched for a solution to this problem, but nothing seems to work. I have tried adding layer depths to the sprites and changing the sort mode to BackToFront or FrontToBack or even immediate, but nothing seems to work. Here is my drawing code: protected override void Draw(GameTime gameTime) { GraphicsDevice.Clear(Color.Gray); Game1.spriteBatch.Begin(SpriteSortMode.Deferred, BlendState.AlphaBlend, SamplerState.PointClamp, null, null); for (int x = 0; x < 5; x++) { for (int y = 0; y < 5; y++) { region[x, y].draw(((float)w / aw)); // Draws the Tile-Based background } } player.draw(spriteBatch, ((float)w / aw));//draws the character (This method is where the problem occurs) enemy.draw(spriteBatch, (float)w/aw); // draws a basic enemy Game1.spriteBatch.End(); base.Draw(gameTime); } player.draw method: public void draw(SpriteBatch sb, float ratio){ //draws the player base (The character without hair or equipment) sb.Draw(playerbase[0], new Rectangle((int)(pos.X - (24 * ratio)), (int)(pos.Y - (48 * ratio)), (int)(48 * ratio), (int)(48 * ratio)), new Rectangle(orientation * 48, animFrame * 48, 48, 48), Color.White,0,Vector2.Zero,SpriteEffects.None,0); //draws the player's hair sb.Draw(playerbase[3], new Rectangle((int)(pos.X - (24 * ratio)), (int)(pos.Y - (48 * ratio)), (int)(48 * ratio), (int)(48 * ratio)), new Rectangle(orientation * 48, animFrame * 48, 48, 48), Color.White, 0, Vector2.Zero, SpriteEffects.None, 0); //draws the player's shirt sb.Draw(equipment[0], new Rectangle((int)(pos.X - (24 * ratio)), (int)(pos.Y - (48 * ratio)), (int)(48 * ratio), (int)(48 * ratio)), new Rectangle(orientation * 48, animFrame * 48, 48, 48), Color.White, 0, Vector2.Zero, SpriteEffects.None, 0); //draws the player's pants sb.Draw(equipment[1], new Rectangle((int)(pos.X - (24 * ratio)), (int)(pos.Y - (48 * ratio)), (int)(48 * ratio), (int)(48 * ratio)), new Rectangle(orientation * 48, animFrame * 48, 48, 48), Color.White, 0, Vector2.Zero, SpriteEffects.None, 0); //draws the player's shoes sb.Draw(equipment[2], new Rectangle((int)(pos.X - (24 * ratio)), (int)(pos.Y - (48 * ratio)), (int)(48 * ratio), (int)(48 * ratio)), new Rectangle(orientation * 48, animFrame * 48, 48, 48), Color.White, 0, Vector2.Zero, SpriteEffects.None, 0); } the game has a top-down perspective much like the early legend of zelda games. It draws sections of the texture depending on which direction the character is facing and the animation frame. However, instead of drawing the character in the order the draw methods are called, it ends up drawing the character out of order. Please help me with this problem.
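
    For what it's worth, here is a sketch of the layer-depth approach: with a depth-sorted mode such as SpriteSortMode.BackToFront, each Draw call can be given a distinct layerDepth and the call order stops mattering. The texture and rectangle names below are placeholders, and the depth values are arbitrary; my reading of the XNA docs is that with BackToFront, sprites with larger layerDepth are drawn first, so 0.0f ends up on top and 1.0f at the back:

        // Sketch: let layerDepth decide layering instead of call order.
        spriteBatch.Begin(SpriteSortMode.BackToFront, BlendState.AlphaBlend,
                          SamplerState.PointClamp, null, null);

        // background tiles at the back
        spriteBatch.Draw(tileTexture, tileRect, tileSource, Color.White,
                         0f, Vector2.Zero, SpriteEffects.None, 0.9f);

        // player base behind its overlays
        spriteBatch.Draw(playerbase[0], destRect, sourceRect, Color.White,
                         0f, Vector2.Zero, SpriteEffects.None, 0.5f);

        // hair / equipment on top of the base
        spriteBatch.Draw(playerbase[3], destRect, sourceRect, Color.White,
                         0f, Vector2.Zero, SpriteEffects.None, 0.4f);

        spriteBatch.End();

    Giving each visual layer its own depth band (background around 0.9, character around 0.5, overlays around 0.4) makes the intended ordering explicit regardless of which sort mode quirk is causing the out-of-order drawing.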

    Read the article

  • Advantages and disadvantages of building a single page web application

    - by ryanzec
    I'm nearing the end of a prototyping/proof of concept phase for a side project I'm working on, and trying to decide on some larger scale application design decisions. The app is a project management system tailored more towards the agile development process. One of the decisions I need to make is whether or not to go with a traditional multi-page application or a single page application. Currently my prototype is a traditional multi-page setup, however I have been looking at backbone.js to clean up and apply some structure to my Javascript (jQuery) code. It seems like while backbone.js can be used in multi-page applications, it shines more with single page applications. I am trying to come up with a list of advantages and disadvantages of using a single page application design approach. So far I have: Advantages All data has to be available via some sort of API - this is a big advantage for my use case as I want to have an API to my application anyway. Right now about 60-70% of my calls to get/update data are done through a REST API. Doing a single page application will allow me to better test my REST API since the application itself will use it. It also means that as the application grows, the API itself will grow since that is what the application uses; no need to maintain the API as an add-on to the application. More responsive application - since all data loaded after the initial page is kept to a minimum and transmitted in a compact format (like JSON), data requests should generally be faster, and the server will do slightly less processing. Disadvantages Duplication of code - for example, model code. I am going to have to create models both on the server side (PHP in this case) and the client side in Javascript. Business logic in Javascript - I can't give any concrete examples on why this would be bad but it just doesn't feel right to me having business logic in Javascript that anyone can read. Javascript memory leaks - since the page never reloads, Javascript memory leaks can happen, and I would not even know where to begin to debug them. There are also other things that are kind of double edged swords. For example, with single page applications, the data processed for each request can be a lot less since the application will be asking for the minimum data it needs for the particular request, however it also means that there could be a lot more small request to the server. I'm not sure if that is a good or bad thing. What are some of the advantages and disadvantages of single page web applications that I should keep in mind when deciding which way I should go for my project?

    Read the article

  • Windows Azure Diagnostics: Next to Useless?

    - by Your DisplayName here!
    To quote my good friend Christian: “Tracing is probably one of the most discussed topics in the Windows Azure world. Not because it is freaking cool – but because it can be very tedious and partly massively counter-intuitive.” <rant> The .NET Framework has this wonderful facility called TraceSource. You define a named trace and route that to a configurable listener. This gives you a lot of flexibility – you can create a single trace file – or multiple ones. There is even nice tooling around that. SvcTraceViewer from the SDK let’s you open the XML trace files – you can filter and sort by trace source and event type, aggreate multiple files…blablabla. Just what you would expect from a decent tracing infrastructure. Now comes Windows Azure. I was already were grateful that starting with the SDK 1.2 we finally had a way to do tracing and diagnostics in the cloud (kudos!). But the way the Azure DiagnosticMonitor is currently implemented – could be called flawed. The Azure SDK provides a DiagnosticsMonitorTraceListener – which is the right way to go. The only problem is, that way this works is, that all traces (from all sources) get written to an ETW trace. Then the DiagMon listens to these traces and copies them periodically to your storage account. So far so good. But guess what happens to your nice trace files: the trace source names get “lost”. They appear in your message text at the end. So much for filtering and sorting and aggregating (regex #fail or #win??). Every trace line becomes an entry in a Azure Storage Table – the svclog format is gone. So much for the existing tooling. To solve that problem, one workaround was to write your own trace listener (!) that creates svclog files inside of local storage and use the DiagMon to copy those. Christian has a blog post about that. OK done that. Now it turns out that this mechanism does not work anymore in 1.3 with FullIIS (see here). Quoting: “Some IIS 7.0 logs not collected due to permissions issues...The root cause to both of these issues is the permissions on the log files.” And the workaround: “To read the files yourself, log on to the instance with a remote desktop connection.” Now then have fun with your multi-instance deployments…. </rant>

    Read the article

  • What tools exist for assessing an organisation's development capability?

    - by Eric Smith
    I have a bit of a challenge at work at the moment. Presently (and in fact, for some time now), we have been experiencing the following problems with some in-house maintained applications: defects (sometimes quite serious) being released into production; the Customer (that is, the relevant business unit) perpetually changing their minds (or appearing to do so) about which issue to work on next; a situation where everyone seems to be in "fire-fighting" mode a lot of the time; and development staff responding to operational requests from business users ("operational" here means something that needs to be done in order to continue with business, or perhaps just to make a business user's life a little less painful, as opposed to fixing a bug in the application or enhancing the application). Now I'm sure this doesn't sound particularly new or surprising to most of the participants on this Q&A site, and no prizes for identifying the "usual suspects" when it comes to root causes. My challenge is that I have to persuade the higher-ups to do uncomfortable things in order to address all of this. The folk I need to persuade come from a mixture of two cultures: accounting and IT infrastructure. I have therefore opted for a strategy that draws on the things folk from such cultures would be most comfortable with (at least, in my estimation), namely numbers and tangibles. Of course, modern development practitioners know all too well that this sort of thing isn't easily solved with an analytical mindset (some would argue that that mindset is, in fact, entirely inappropriate). Nevertheless, this is the dichotomy I am faced with, so that's the stake I've put in the ground. I would like to be able to do research and use the outputs to present findings in the form of metrics and measures. I am finding it quite difficult, though, to find an agreed-upon methodology and set of templates for assessing an organisation's development capability – the only thing that seems applicable is the Software Engineering Institute's Capability Maturity Model, and that seems dated and, even then, rather vague. So, the question is: do any tools or methodologies (free or commercial) exist that would assist me in completing this assessment?

    Read the article

  • How can I convert a 2D bitmap (Used for terrain) to a 2D polygon mesh for collision?

    - by Megadanxzero
    So I'm making an artillery-type game, sort of similar to Worms, with all the usual stuff like destructible terrain etc., and while I could use per-pixel collision, that doesn't give me collision normals or anything like that. Converting it all to a mesh would also mean I could use an existing physics library, which would be better than anything I could make by myself. I've seen people mention doing this by using Marching Squares to get contours in the bitmap, but I can't find anything that explains how to turn those contours into a mesh (unless it refers to a 3D mesh with contour lines defining different heights, which is NOT what I want). At the moment I can get a basic Marching Squares contour which looks something like this (where the grid-like lines in the background would be the Marching Squares 'cells'). That needs to be interpolated to get a smoother, more accurate result, but that's the general idea. I had a couple of ideas for how to turn this into a mesh, but many of them wouldn't work in certain cases, and the one I thought would work perfectly has turned out to be very slow and I've not even finished it yet! Ideally I'd like whatever I end up using to be fast enough to run every frame, for cases such as rapidly-firing weapons or digging tools. I'm thinking there must be some kind of existing algorithm/technique for turning something like this into a mesh, but I can't seem to find anything. I've looked at things like Delaunay Triangulation, but as far as I can tell that won't correctly handle concave shapes like the example above, and it also wouldn't account for holes within the terrain. I'll go through the technique I came up with for comparison, and I guess I'll see if anyone has a better idea. First of all, interpolate the Marching Squares contour lines, creating vertices from the line ends, and getting vertices where lines cross cell edges (important). Then, for each cell containing vertices, create polygons by using two vertices and a cell corner as the third vertex (probably the closest corner). Do this for each cell and I think you should have a mesh that accurately represents the original bitmap (though there will only be polygons at the edges of the bitmap, and large filled-in areas in between will be empty). The only problem with this is that it involves looping through every pixel once for the initial Marching Squares pass (sketched below), then looping through every cell ((image height + 1) × (image width + 1)) at least twice, which ends up being really slow for any decently sized image...
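    A rough sketch of that initial Marching Squares pass (the one that produces the blocky contour before interpolation); the function and type names are illustrative, segments are emitted at cell-edge midpoints rather than interpolated positions, and the two ambiguous "saddle" cases are resolved arbitrarily:

    ```typescript
    type Point = { x: number; y: number };
    type Segment = [Point, Point];

    // Edge midpoints of a unit cell whose top-left corner is at (0, 0):
    // 0 = top, 1 = right, 2 = bottom, 3 = left.
    const EDGE: Point[] = [
      { x: 0.5, y: 0 }, { x: 1, y: 0.5 }, { x: 0.5, y: 1 }, { x: 0, y: 0.5 },
    ];

    // For each of the 16 corner configurations (TL=8, TR=4, BR=2, BL=1),
    // the pairs of edges that a contour segment connects.
    const CASES: number[][][] = [
      [],               [[3, 2]],          [[2, 1]],          [[3, 1]],
      [[0, 1]],         [[3, 0], [2, 1]],  [[0, 2]],          [[3, 0]],
      [[3, 0]],         [[0, 2]],          [[3, 2], [0, 1]],  [[0, 1]],
      [[3, 1]],         [[2, 1]],          [[3, 2]],          [],
    ];

    // solid(x, y) reports whether the terrain pixel at (x, y) is filled.
    function marchingSquares(
      solid: (x: number, y: number) => boolean,
      width: number,
      height: number,
    ): Segment[] {
      const segments: Segment[] = [];
      for (let y = 0; y < height - 1; y++) {
        for (let x = 0; x < width - 1; x++) {
          // Classify the 2x2 cell whose top-left pixel is (x, y).
          const index =
            (solid(x, y) ? 8 : 0) |
            (solid(x + 1, y) ? 4 : 0) |
            (solid(x + 1, y + 1) ? 2 : 0) |
            (solid(x, y + 1) ? 1 : 0);
          for (const [a, b] of CASES[index]) {
            segments.push([
              { x: x + EDGE[a].x, y: y + EDGE[a].y },
              { x: x + EDGE[b].x, y: y + EDGE[b].y },
            ]);
          }
        }
      }
      return segments;
    }
    ```

    The segments this produces still have to be linked into closed loops and interpolated against the underlying pixel values before being triangulated into a collision mesh, which is the part the question is really asking about.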

    Read the article

  • Database Mirroring – deprecated

    - by fatherjack
    Do you use mirroring on any of your databases? Do you use mirroring on SQL Server Standard Edition? I do, as a way of having a stand-by server ready to take over if there is a problem with the live server, so that business can continue despite whatever disaster may strike at our primary server location. In my experience it has been a great solution for us, as it is simple to implement, reliable and predictable. Mirroring has been around since SQL Server 2005 SP1, but with the release of SQL Server 2012 mirroring has now been placed on the deprecation list. That's right, Microsoft are removing this feature from SQL Server. SQL Server 2012 had lots of improvements and new features around this sort of technology – the High Availability, Disaster Recovery and Always On features described in detail here by Brent Ozar and by Microsoft's own Customer Service and Support SQL Server Engineers. Now the bad news: the HADRON features are pretty much all wrapped up in the Enterprise Edition of SQL Server 2012. This is going to be a big issue for people, like me, who are only on Standard Edition of earlier versions, mostly due to our requirements and the budget (or lack thereof) required for Enterprise Edition licenses. No mirroring in Standard Edition means no upgrade. Don't panic. There are two stages of deprecation, and they don't happen fast. The first stage – Deprecation Announcement – means that Microsoft have decided there is a limited future for a particular feature; this is your cue that new projects and developments should not be implemented on this technology, as it will cease to exist in the future. This is where mirroring currently stands. You have time to consider your options and start planning how you will move away from using this feature. This can be two or three versions of SQL Server away, possibly more. The next stage is Deprecation Final Support – this is your last chance: when you see this, the next version of SQL Server will not have the feature in it, so you need to implement your plans to move to an alternative solution. While these two phases are taking place, Microsoft are open to feedback on how people use their products, and if enough people make the case for mirroring (or an equivalent technology) to be in Standard Edition then they may make changes rather than lose customers or have customers stop upgrading in order to keep the functionality they need. Denny Cherry (@MrDenny) has published an article on this same topic here, with more detail than me, so I won't go over old ground. All I will say is that you should read his article now and then follow the link to his own site, where he is collecting information on how people use mirroring in Standard Edition so that our voice can be put to Microsoft.

    Read the article
