Search Results

Search found 1614 results on 65 pages for 'emps (expensive managemen'.

Page 52 of 65

  • Drawing to the canvas

    - by Mattl
    I'm writing an Android application that draws directly to the canvas in the onDraw event of a View. I'm drawing something that involves painting each pixel individually, and for this I use something like:

        for (int x = 0; x < xMax; x++) {
            for (int y = 0; y < yMax; y++) {
                MyColour = CalculateMyPoint(x, y);
                canvas.drawPoint(x, y, MyColour);
            }
        }

    The problem is that this takes a long time to paint, as CalculateMyPoint is quite an expensive method. Is there a more efficient way of painting to the canvas? For example, should I draw to a bitmap and then paint the whole bitmap to the canvas in the onDraw event? Or maybe evaluate my colours and fill in an array that the onDraw method can use to paint the canvas? Users of my application can change parameters that affect the drawing, and it is incredibly slow at the moment.
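
    If it helps, here is a minimal sketch of the offscreen-bitmap idea the question floats, under stated assumptions: CalculateMyPoint is the question's own routine, assumed here to return an ARGB colour int (rather than the Paint that Canvas.drawPoint actually takes), and the code lives inside the existing View subclass.

        // Rebuild the cache only when parameters change (the expensive path);
        // onDraw then just blits the cached bitmap.
        private Bitmap cache;

        private void rebuildCache(int xMax, int yMax) {
            cache = Bitmap.createBitmap(xMax, yMax, Bitmap.Config.ARGB_8888);
            for (int x = 0; x < xMax; x++) {
                for (int y = 0; y < yMax; y++) {
                    cache.setPixel(x, y, CalculateMyPoint(x, y)); // assumed ARGB int
                }
            }
            invalidate(); // schedule a cheap redraw
        }

        @Override
        protected void onDraw(Canvas canvas) {
            super.onDraw(canvas);
            if (cache != null) {
                canvas.drawBitmap(cache, 0, 0, null); // blit, no per-pixel work
            }
        }

    Filling an int[] and handing it to Bitmap.setPixels (or Bitmap.createBitmap(int[], width, height, config)) is typically faster still than per-pixel setPixel calls.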

    Read the article

  • How to modify the ASCII table in C? [closed]

    - by drigoSkalWalker
    Like this:

        My ASCII Chart
             0   1   2   3   4   5   6   7   8   9   A   B   C   D   E   F
        0   NUL SOH STX ETX EOT ENQ ACK BEL BS  HT  LF  VT  FF  CR  SO  SI
        1   DLE DC1 DC2 DC3 DC4 NAK SYN ETB CAN EM  SUB ESC FS  GS  RS  US
        2   SP  !   "   #   $   %   &   '   (   )   *   +   ,   -   .   /
        3   0   1   2   3   4   5   6   7   8   9   :   ;   <   =   >   ?
        4   @   A   B   C   D   E   F   G   H   I   J   K   L   M   N   O
        5   P   Q   R   S   T   U   V   W   X   Y   Z   [   \   ]   ^   _
        6   `   a   b   c   d   e   f   g   h   i   j   k   l   m   n   o
        7   p   q   r   s   t   u   v   w   x   y   z   {   |   }   ~   DEL

    I want to call a function, alter the ASCII sequence inside that function, and have the sequence back to the original when it returns. Thanks in advance! EDIT: What I want is this: I want to change the order of chars. For example, 'A' is 65; what if I want to make 'A' equal 0? Without writing a function to do it each time, that is. For example, I could accomplish it with a function that compares an array of chars and stores them another way with the correct values (the new table), but doing that is too expensive. Is there another way? Thanks in advance again!

    Read the article

  • What is the most efficient way to display decoded video frames in Qt?

    - by Jason
    What is the fastest way to display images in a Qt widget? I have decoded the video using libavformat and libavcodec, so I already have raw RGB or YCbCr 4:2:0 frames. I am currently using a QGraphicsView with a QGraphicsScene object containing a QGraphicsPixmapItem. I get the frame data into a QPixmap by using the QImage constructor that takes a memory buffer, then converting to QPixmap with QPixmap::fromImage(). I like the results and it seems relatively fast, but I can't help thinking that there must be a more efficient way. I've also heard that the QImage-to-QPixmap conversion is expensive. I have implemented a solution that uses an SDL overlay on a widget, but I'd like to stay with just Qt, since I can easily capture clicks and other user interaction with the video display using the QGraphicsView. I am doing any required video scaling or colorspace conversions with libswscale, so I would just like to know whether anyone has a more efficient way to display the image data after all processing has been performed. Thanks.

    Read the article

  • MySQL Efficiency Issue - How to find the right balance of normalization...?

    - by Foo
    I'm fairly new to working with relational databases, but have read a few books and know the basics of good design. I'm facing a design decision and I'm not sure how to continue. Here's a very oversimplified version of what I'm building: people can rate photos from 1 to 5, and I need to display the average vote on each picture while keeping track of the individual votes. For example, 12 people voted 1, 7 people voted 2, etc. The normalization freak in me initially designed the table structure like this:

        Table pictures: id* | picture | userID
        Table ratings:  id* | pictureID | userID | rating

    with all the foreign key constraints and everything set as they should be. Every time someone rates a picture, I just insert a new record into ratings and be done with it. To find the average rating of a picture, I'd run something like this:

        SELECT AVG(rating) FROM ratings
        WHERE pictureID = '5'
        GROUP BY pictureID

    Having it set up this way also lets me run my fancy statistics: I can easily find who rated a certain picture a 3, and so on. Now I'm thinking that if there's a crapload of ratings (which is very possible in what I'm really designing), finding the average will become very expensive and painful. Using a non-normalized version would seem to be more efficient, e.g.:

        Table picture: id | picture | userID | ratingOne | ratingTwo | ratingThree | ratingFour | ratingFive

    To calculate the average, I'd only have to select a single row. It seems so much more efficient, but so much uglier. Can someone point me in the right direction of what to do? My initial research says I have to "find the right balance", but how do I go about finding that balance? Any articles or additional reading would be appreciated as well. Thanks.

    Read the article

  • Is it safe to reuse javax.xml.ws.Service objects?

    - by Noel Ang
    I have a JAX-WS-style web service client that was auto-generated with the NetBeans IDE. The generated proxy factory (which extends javax.xml.ws.Service) delegates proxy creation to the various Service.getPort methods. The application I am maintaining instantiates the factory and obtains a proxy each time it calls the targeted service. Creating new proxy factory instances repeatedly has been shown to be expensive, given that the WSDL document supplied to the factory constructor, an HTTP URI, is re-retrieved for each instantiation. We had success in improving performance by caching the WSDL, but this has ugly maintenance and packaging implications for us. I would like to explore the suitability of caching the proxy factory itself. Is it safe? E.g., can two different client classes, executing on the same JVM and targeting the same web service, safely use the same factory to obtain distinct proxy objects (or a shared, reentrant one)? I've been unable to find guidance in either the JAX-WS specification or the javax.xml.ws API documentation. The factory-proxy multiplicity is unclear to me, and having Service.getPort rather than Service.createPort does not inspire confidence.
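
    For reference, a sketch of the caching pattern under consideration, not a verdict on its thread-safety; MyService and MyPort are hypothetical stand-ins for the NetBeans-generated artifacts:

        // Construct the generated Service once, so the WSDL is fetched once;
        // hand out a fresh proxy per caller.
        public final class PortProvider {
            private static final MyService SERVICE = new MyService();

            public static MyPort newPort() {
                // Proxy creation is cheap relative to Service construction,
                // since no WSDL retrieval is involved.
                return SERVICE.getMyPort();
            }
        }

    If the shared factory turns out not to be safe under concurrency, the same shape works with a small pool of Service instances instead of a singleton.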

    Read the article

  • Wiki replacement solution for SharePoint?

    - by Jakub
    I'm trying to research a replacement for the pathetic wiki that comes with WSS (the only wiki markup it has is for creating URL links). I have looked at a few, but most 'replacements' I see are MOSS-only (or at least state MOSS in their requirements). Has anyone faced this situation? What did you end up using? I would like something I can have all in one location (not different apps, hence WSS), with LDAP/AD integration like WSS. Thanks, I appreciate any input. I would like to see ~$3k solutions, though (nothing super expensive, which is why we don't run MOSS). EDIT: Anyone else have any suggestions? EDIT 2: Since I haven't had much feedback (thanks to those who responded), I installed MediaWiki under IIS with PHP enabled, and enabled the IIS AD hack for authentication. IIS ends up prompting for authentication (user/pass) if you use a non-IE browser, then sets the $_SERVER["REMOTE_USER"] variable and grabs some AD info (groups etc.). It works rather well; the only issue so far is the ugly URLs, but it's fully working. Seems like a good setup, other than having to rely on MySQL (my company strives to be mainly SQL Server).

    Read the article

  • FTP FileWatcher

    - by Meiscooldude
    So I am in this little predicament where I am stuck watching a few FTP folders to see if they have new files added to them. If they do, it needs to throw an event with the file name, thereby telling something else to download that file. This is a pretty simple object to make; I was just curious if anyone knew how expensive this operation would be. I plan on using the NLIST command because I don't need file size information, and there will be no sub-directories in the folder. Each file in the folder will have exactly 25 characters in its name. There could be anywhere from 10 to maybe a couple thousand (max around 2000) files per folder (usually on the lower end, 100-300, but currently growing). The files are anywhere from 250 KB to a very, VERY unlikely 10 MB (usually within the 250 KB to 4 MB range). There could possibly be up to a few hundred folders (in which case I could change the watch frequency depending on the number of folders), but currently there are only a few (6-10ish). There would also be multiple logins for the FTP server; different logins would have access to different folders. I am not asking for an implementation, just whether anyone has some first- or second-hand knowledge about FTP and how this could affect my network. I am not opposed to putting in file retention times or changing the frequency at which I check for new files.
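
    For what it's worth, a rough polling sketch (Java with Apache Commons Net, purely as an assumed stack since the question names none): diff the name listing between polls and raise an event for names not seen before.

        import java.util.HashSet;
        import java.util.Set;
        import org.apache.commons.net.ftp.FTPClient;

        public class FtpFolderWatcher {
            private final Set<String> known = new HashSet<String>();

            // Poll one folder and report any names not seen on earlier polls.
            public void poll(FTPClient ftp, String folder) throws Exception {
                String[] names = ftp.listNames(folder); // name-only listing (NLST)
                if (names == null) {
                    return; // listing failed or folder is empty
                }
                for (String name : names) {
                    if (known.add(name)) {
                        onNewFile(name); // fire the "download this" event
                    }
                }
            }

            protected void onNewFile(String name) {
                System.out.println("New file: " + name);
            }
        }

    At 25-character names, even a 2000-file name listing is only ~50 KB per poll, so the network cost is dominated by poll frequency rather than folder size.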

    Read the article

  • I like the way they design/architect it, but how do I implement this?

    - by Rachel
    Summary: I have different components on the homepage, and each component shows some promotion to the user. The Cart is one component, and promotions are shown depending on its contents. I have to track users' online activities and send that information to Omniture for report generation. My components are loaded asynchronously (they load when an Ajax request fires), so there is no fixed pattern for when components appear on the page. In order to pass information to Omniture, I need to call a track function on $(document).ready() and append information for each component (Omniture requires 7 parameters per component). So I am calling Omniture in the init/config function of each component, but now the number of Omniture calls is directly proportional to the number of components on the page. This is not acceptable, as each call to Omniture is very expensive. I am looking for a way to collect the 7 parameters from every component and then make one call to Omniture passing all of that information. Note that I do not know when the components are loaded; there is no predefined time or number of components. The track function runs when the document is ready, but components are loaded after the call to Omniture has been made. So my question is: how can I collect the information for all the components and then make just one call to Omniture to send it? I hope I have explained the challenge clearly, and I would appreciate design/architecture solutions for it.
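
    One common shape for this, sketched in Java only for concreteness (the page itself would do this in JavaScript): each component reports its parameters to a shared batcher, and a short quiet-period timer flushes everything in a single tracking call once components stop arriving.

        import java.util.ArrayList;
        import java.util.List;
        import java.util.Map;
        import java.util.concurrent.Executors;
        import java.util.concurrent.ScheduledExecutorService;
        import java.util.concurrent.ScheduledFuture;
        import java.util.concurrent.TimeUnit;

        // Components call report(...) as they finish loading; once nothing new
        // has arrived for quietMillis, everything goes out in one call.
        public class TrackingBatcher {
            private final List<Map<String, String>> pending = new ArrayList<>();
            private final ScheduledExecutorService timer =
                Executors.newSingleThreadScheduledExecutor();
            private final long quietMillis = 500; // assumed quiet period
            private ScheduledFuture<?> flush;

            public synchronized void report(Map<String, String> componentParams) {
                pending.add(componentParams);
                if (flush != null) {
                    flush.cancel(false); // restart the quiet-period timer
                }
                flush = timer.schedule(this::flushNow, quietMillis, TimeUnit.MILLISECONDS);
            }

            private synchronized void flushNow() {
                sendToOmniture(new ArrayList<>(pending)); // single tracking call
                pending.clear();
            }

            private void sendToOmniture(List<Map<String, String>> batch) {
                // one track() call carrying every component's 7 parameters
            }
        }

    The trade-off is the quiet period: too short and a slow component triggers a second call; too long and tracking lags the page.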

    Read the article

  • How to hide some nodes in a RichFaces tree (do not render nodes by condition)?

    - by VestniK
    I have a tree of categories and courses in my SEAM application. Courses may be active or inactive, and I want to be able to show either only active courses or all courses in my tree. I've decided to always build the complete tree in my PAGE-scope component, since building this tree is quite an expensive operation. I have a boolean flag courseActive in the data wrapped by TreeNode<T>. Now I can't find a way to show course nodes only if this flag is true. The best result I've achieved is with the following code:

        <h:outputLabel for="showInactiveCheckbox" value="show all courses: "/>
        <h:selectBooleanCheckbox id="showInactiveCheckbox"
                                 value="#{categoryTreeEditorModel.showAllCoursesInTree}">
            <a4j:support event="onchange" reRender="categoryTree"/>
        </h:selectBooleanCheckbox>
        <rich:tree id="categoryTree" value="#{categoryTree}" var="item" switchType="ajax"
                   ajaxSubmitSelection="true" reRender="categoryTree,controls"
                   adviseNodeOpened="#{categoryTreeActions.adviseRootOpened}"
                   nodeSelectListener="#{categoryTreeActions.processSelection}"
                   nodeFace="#{item.typeName}">
            <rich:treeNode type="Category" icon="..." iconLeaf="...">
                <h:outputText value="#{item.title}"/>
            </rich:treeNode>
            <rich:treeNode type="Course" icon="..." iconLeaf="..."
                           rendered="#{item.courseActive or categoryTreeEditorModel.showAllCoursesInTree}">
                <h:outputText rendered="#{item.courseActive}" value="#{item.title}"/>
                <h:outputText rendered="#{not item.courseActive}" value="#{item.title}"
                              style="color:#{a4jSkin.inactiveTextColor}"/>
            </rich:treeNode>
        </rich:tree>

    The only problem is that if a node is not matched by any rich:treeNode, it is still shown, with a title obtained from Object.toString(), instead of being hidden. Does anybody know how to avoid showing some nodes in a RichFaces tree according to a condition?

    Read the article

  • Is there evidence that offshoring is causing developer salaries to go down? [closed]

    - by jcollum
    I realize this is a controversial and political topic. I'm trying to decide whether offshoring is affecting our industry in any substantial way or whether it's just some bugaboo. I've read various posts on SO about it, but none addressed the idea of evidence for offshoring: studies, papers, opinions of people who know about such things, etc. I hear a lot about offshoring and its effect on our job market, but it all seems to be hearsay and conjecture, even though some people seem genuinely worried about it. Offshoring has been going on for quite some time now, which should be long enough for some real data to come out. If I had to pick a starting point, I'd say it was the dotcom boom, a time when the need for developers far outweighed the local talent pool. We're now in a time when the talent pool is expensive and corporate wallets are tight, which seems like an ideal time to find a good cheap developer in some other country. But is that actually happening? From reading some posts here on SO, I've concluded that offshoring is a really tough thing to do right: a lot of companies think (or say) they can do it right, but only a small percentage actually pull it off. Is offshoring affecting the job market in any measurable way? Is offshoring measurable at all? Do we need to stop worrying about this?

    Read the article

  • Implementing Model-level caching

    - by Byron
    I was posting some comments in a related question about MVC caching, and some questions about actual implementation came up. How does one implement a Model-level cache that works transparently, without the developer needing to cache manually, yet still remains efficient? "I would keep my caching responsibilities firmly within the model. It is none of the controller's or view's business where the model is getting data. All they care about is that when data is requested, data is provided - this is how the MVC paradigm is supposed to work." (Source: post by Jarrod.) The reason I am skeptical is that caching should usually not be done unless there is a real need, and shouldn't be done for things like search results. So somehow the Model itself has to know whether the SELECT statement being issued to it is worth caching. Wouldn't the Model have to be astronomically smart, and/or store statistics about what is most often queried over a long period of time, to make that decision accurately? And wouldn't the overhead of all this make the caching useless anyway? Also, how would you uniquely identify a query from another query (or, more accurately, one resultset from another)? What about if you're using prepared statements, with only the parameters changing according to user input? Another poster suggested using the md5 hash of the query combined with a serialized version of the input arguments. That would require twice the number of serialization operations, and I was under the impression that serialization is quite expensive; for large inputs this might be even worse than just re-querying. And is the minuscule chance of collision worth worrying about? Conceptually, caching in the Model seems like a good idea to me, but in practice it seems the developer should have direct control over caching and write it into the controller. Thoughts/ideas? Edit: I'm using PHP and MySQL, if that helps to narrow your focus.
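
    On the query-identity point, a small sketch of the hash-key idea (written in Java for concreteness; the question's stack is PHP/MySQL, so read this as pseudocode for that environment):

        import java.nio.charset.StandardCharsets;
        import java.security.MessageDigest;

        public final class QueryCacheKey {
            // Key = MD5 over the SQL text plus the bound parameters, so the same
            // prepared statement with the same inputs maps to the same cache slot.
            public static String of(String sql, Object... params) throws Exception {
                StringBuilder raw = new StringBuilder(sql);
                for (Object p : params) {
                    raw.append('\u0000').append(p); // crude flattening; fine for scalars
                }
                MessageDigest md5 = MessageDigest.getInstance("MD5");
                byte[] digest = md5.digest(raw.toString().getBytes(StandardCharsets.UTF_8));
                StringBuilder hex = new StringBuilder();
                for (byte b : digest) {
                    hex.append(String.format("%02x", b));
                }
                return hex.toString();
            }
        }

    Hashing the bound parameters rather than serializing whole objects keeps the key cheap; whether the lookup beats re-querying is exactly the cost question the post raises.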

    Read the article

  • Should I persist images on EBS or S3?

    - by javanes
    I am migrating my Java, Tomcat, MySQL server to AWS EC2. I have already attached an EBS volume for storing the MySQL data. In my web application people may upload images, so I need to persist them. There are two alternatives in my mind: save uploaded images to the EBS volume, or use the S3 service. The following are my notes; please be skeptical about them, as my expertise is not in servers but in software development.

        EBS plus: S3 storage is more expensive ($0.15/GB vs. $0.10/GB).
        S3 plus: Serving statics from EBS may influence my web server's performance negatively. Is this true? Does serving images affect server performance notably? With S3, my server is not responsible for serving statics.
        S3 plus: Serving statics from EBS incurs I/O cost, though probably a minor one.
        EBS plus: People say EBS is faster.
        S3 plus: People say S3 is safer for persistence.
        EBS plus: No need to learn an API; it is straightforward to save the images to the EBS volume.

    In short, I cannot decide, and would be happy if you could guide me. Thanks.

    Read the article

  • Hierarchy / Flyweight / Instancing Problem in Python

    - by Dan
    Here is the problem I am trying to solve (I have simplified the actual problem, but this should give you all the relevant information). I have a hierarchy like so (each number is a parent, each letter a child):

        1: A, B, C
        2: A
        3: D
        4: B
        5: F

    Requirements: creating an instance of the 'letter' objects is expensive (IO, database costs, etc.), so it should only be done once. The hierarchy needs to be easy to navigate. Children in the hierarchy need to have just one parent. Modifying the contents of the letter objects should be possible directly from the objects in the hierarchy. There needs to be a central store containing all of the 'letter' objects (and only those in the hierarchy). 'Letter' and 'number' objects need to be constructible from a constructor (such as Letter(**kwargs)). It is perfectly acceptable that when a letter is changed from the hierarchy, all other copies of that letter reflect the same change. Hope this isn't too abstract an illustration of the problem. What would be the best way of solving this? (Then I'll post my solution.) Here's an example script:

        one = Number('one')
        a = Letter('a')
        one.addChild(a)

        two = Number('two')
        a = Letter('a')
        two.addChild(a)

        for child in one:
            child.method1()

        for child in two:
            print '%s' % child.method2()
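
    A sketch of the flyweight registry such requirements usually point at, written in Java only to keep this page's examples in one language (the Python translation is the usual __new__-based or module-level dict cache):

        import java.util.HashMap;
        import java.util.Map;

        // Central store: at most one Letter per name, shared by every parent
        // that references it, so edits through one parent are seen by all.
        public class Letter {
            private static final Map<String, Letter> POOL = new HashMap<>();
            private final String name;
            private String contents; // mutable shared state

            private Letter(String name) {
                this.name = name; // expensive IO/database load happens here, once
            }

            public static Letter of(String name) {
                Letter existing = POOL.get(name);
                if (existing == null) {
                    existing = new Letter(name);
                    POOL.put(name, existing);
                }
                return existing;
            }

            public void setContents(String contents) { this.contents = contents; }
        }

    Parents then hold references into the pool; since Letter.of("a") always returns the same object, "letter 'a' under parents 1 and 2 is the same thing" falls out for free.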

    Read the article

  • Django class with an array of "parent" foreign keys issue

    - by user298032
    Let's say I have a class called Fruit, with child classes for the different kinds of fruit carrying their own specific attributes, and I want to collect them in a FruitBasket:

        class Fruit(models.Model):
            type = models.CharField(max_length=120, default='banana', choices=FRUIT_TYPES)
            ...

        class Banana(Fruit):
            """banana (fruit type)"""
            length = models.IntegerField(blank=True, null=True)
            ...

        class Orange(Fruit):
            """orange (fruit type)"""
            diameter = models.IntegerField(blank=True, null=True)
            ...

        class FruitBasket(models.Model):
            fruits = models.ManyToManyField(Fruit)
            ...

    The problem I seem to be having is that when I retrieve and inspect the Fruits in a FruitBasket, I only get the Fruit base class and can't reach the child class attributes. I think I understand what is happening: when the set is retrieved from the database, the only fields retrieved are the Fruit base class fields. But is there some way to get the child class attributes as well, without multiple expensive database transactions? (For example, I could get the set, then retrieve the child Fruit objects by the id of each element.) Thanks in advance, Chuck

    Read the article

  • Splitting builds across the network?

    - by Dandikas
    Is there a known solution for splitting the build process across networked machines? Use case: we are an average software development company. We own around 50 development workstations (Quad Core 2.66 GHz, 4 GB RAM, 200 GB RAID). Needless to say, at any given moment not every machine is loaded to the max. There are 5 to 15 projects running simultaneously at any given moment. Obviously all of them are continuously built on a server, then deployed to the proper environment. A single project build takes from 3 to 15 minutes. The problem: whenever we build 5 projects in a row, the last project is ready after around 25-50 minutes. Building in parallel does not solve the problem (the build is only a part of the game; you then need to deploy, run tests, etc.). YES, the correct solution is to add another build server, but "that involves buying new expensive hardware, and we already spent a lot!" Yeah, right (damn them)! Anyway, what about splitting the build among developer workstations? Say, whenever we need to build project "A", we check 5 workstations and start the build on all that are not overloaded. The build can be cancelled by a developer if he really needs all the power of his machine, as long as at least one machine is still building. After the build is finished, deployment can be performed to the proper environment (hosted on some server, not on a workstation :) ). The bigger the company, the more this makes sense to me. Has anyone tried something like this? Are there any good practices? Any helpful software? (90% of the projects are .NET C#.)

    Read the article

  • How to set up a single array or dictionary for use in multiple datasources?

    - by Roman
    I have multiple TableView datasources that need to display lists of objects from the same pool, depending on a certain property. E.g., if object.flag1 is set, it will show up in TableView1; if object.flag2 is set, it will show up in TableView2. The obvious way would be to have a separate array for each TableView, but the same object may appear in different arrays, and I also need to update objects very often and access all objects through the same array. How do I set up a single dictionary or array to hold all objects in one structure? To put it another way: when a table view or selection changes, the application needs to redraw the TableViews with the new data. The application has to access the pool of objects and search through them with an iterator, touching each object and its properties. I think this is an expensive operation and want to avoid it, perhaps by making the global pool of objects a dictionary and exposing object properties as dictionary fields. Then, instead of iterating over the global pool, I could query it like a database, selecting objects whose fields match particular criteria. Does anyone know an example of doing that?
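
    One language-agnostic way to avoid the full scan is to keep one pool plus a per-flag index that is maintained whenever a flag changes. A sketch in Java (the question doesn't name a platform, so the types here are illustrative):

        import java.util.*;

        class Item {
            final String id;
            Item(String id) { this.id = id; }
        }

        class Pool {
            private final Map<String, Item> all = new HashMap<>();
            // flag name -> items with that flag set; each set backs one table view
            private final Map<String, Set<Item>> views = new HashMap<>();

            void add(Item item) {
                all.put(item.id, item);
            }

            // Maintain the indexes at write time, so reads never scan the pool.
            void setFlag(Item item, String flag, boolean on) {
                Set<Item> view = views.computeIfAbsent(flag, k -> new LinkedHashSet<>());
                if (on) {
                    view.add(item);
                } else {
                    view.remove(item);
                }
            }

            // Datasource for one table view: ready-made, no iteration or filtering.
            Collection<Item> itemsFor(String flag) {
                return views.getOrDefault(flag, Collections.emptySet());
            }
        }

    The update cost moves to setFlag, which is O(1) per change, while each redraw just reads its precomputed set.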

    Read the article

  • Performance question: Inverting an array of pointers in-place vs array of values

    - by Anders
    The background for asking this question is that I am solving a linearized equation system (Ax=b), where A is a matrix (typically of dimension less than 100x100) and x and b are vectors. I am using a direct method, meaning that I first invert A, then find the solution by x = A^(-1) b. This step is repeated in an iterative process until convergence. The way I'm doing it now, using a matrix library (MTL4): for every iteration I copy all coefficients of A (values) into the matrix object, then invert. This is the easiest and safest option. Using an array of pointers instead: for my particular case, the coefficients of A happen to be updated between iterations. These coefficients are stored in different variables (some are arrays, some are not). Would there be a potential performance gain if I set up A as an array containing pointers to these coefficient variables, then inverted A in place? The nice thing about the latter option is that once I have set up the pointers in A before the first iteration, I would not need to copy any values between successive iterations; the values pointed to in A would be updated automatically. So as I see it, the performance question boils down to this:

        - The matrix inversion process takes roughly the same amount of time, assuming dereferencing of pointers is inexpensive.
        - The array of pointers does not need the extra memory for a matrix A containing values.
        - The array-of-pointers option does not have to copy all NxN values of A between iterations.
        - The values pointed to in the array-of-pointers option are generally NOT ordered in memory. Hopefully all values lie relatively close, but *A[0][1] is generally not next to *A[0][0], etc.

    Any comments on this? Will the last remark affect performance negatively, thus outweighing the positive performance effects?

    Read the article

  • Why doesn't the is operator take into consideration whether an explicit operator is overridden when checking types?

    - by Galilyou
    Hey guys, consider this code sample:

        using System;

        public class Human
        {
            public string Value { get; set; }
        }

        public class Car
        {
            public static explicit operator Human(Car c)
            {
                Human h = new Human();
                h.Value = "Value from Car";
                return h;
            }
        }

        public class Program
        {
            public static void Main()
            {
                Car c = new Car();
                Human h = (Human)c;
                Console.WriteLine("h.Value = {0}", h.Value);
                Console.WriteLine(c is Human);
            }
        }

    Above, I provide an explicit cast from Car to Human, even though Car and Human are hierarchically unrelated! The code simply means "Car is convertible to Human". However, if you run the snippet, you will find that the expression c is Human evaluates to false! I used to believe that the is operator is somewhat expensive, because it attempts an actual cast that might result in an InvalidCastException. But if the operator is trying to cast, the cast should succeed here, as there is operator logic that performs it! So what does "is" test? Does it test a hierarchical "is-a" relationship, or does it test whether a variable's type is convertible to another type?

    Read the article

  • How can I protect my .NET assemblies from decompilation?

    - by Holli
    One of the first things I learned when I started with C# was the most important one: you can decompile any .NET assembly with Reflector or other tools. Many developers are not aware of this fact, and most of them are shocked when I show them their source code. Protection against decompilation is still a difficult task, and I am still looking for a fast, easy and secure way to do it. I don't want to obfuscate my code, so that my method names end up as a, b, c and so on; rather, I would like Reflector or other tools to be unable to recognize my application as a .NET assembly at all. I know about some tools already, but they are very expensive. Is there any other way to protect my applications? EDIT: The reason for my question is not to prevent piracy. I only want to stop competitors from reading my code. I know they will, and they already did; they even told me so. Maybe I am a bit paranoid, but business rivals reading my code doesn't make me feel good.

    Read the article

  • Concurrent WCF calls via shared channel

    - by Kent Boogaart
    I have a web tier that forwards calls to an application tier. The web tier uses a shared, cached channel to do so. The application tier services in question are stateless and have concurrency enabled, but they are not being called concurrently. If I alter the web tier to create a new channel on every call, then I do get concurrent calls to the application tier. But I wanted to avoid that cost, since it is functionally unnecessary for my scenario: I have no session state, nor do I need to re-authenticate the caller each time. I understand that creating the channel factory is far more expensive than creating the channels, but the latter is still a cost I'd like to avoid if possible. I found this article on MSDN that states: "While channels and clients created by the channels are thread-safe, they might not support writing more than one message to the wire concurrently. If you are sending large messages, particularly if streaming, the send operation might block waiting for another send to complete." Firstly, I'm not sending large messages (just a lot of small ones, since I'm doing load testing), yet I am still seeing the blocking behavior. Secondly, this documentation is rather open-ended and unhelpful: it says channels "might not" support writing more than one message concurrently, but doesn't explain the scenarios under which they would. Can anyone shed some light on this?

    Read the article

  • ASP.NET retrieve Average CPU Usage

    - by Sam
    Last night I did a load test on a site and found that one of my shared caches is a bottleneck. I'm using a ReaderWriterLockSlim to control updates of the data. Unfortunately, at one point ~200 requests were trying to update the data at approximately the same time, which also coincided with CPU usage spikes. The data being updated is in the ASP.NET Cache. What I'd like to do is: if the CPU usage is around 75%, skip the cache and hit the database on another machine. My problem is that I don't know how expensive it is to create a new performance counter to check the CPU usage. Also, I would probably want the average CPU usage over the last 2 or 3 seconds, but I can't sit there and calculate it per request, as that would take longer than updating the cache currently does. Is there an easy way to get the average CPU usage? Are there any drawbacks to this? I'm also considering totaling the wait count for the lock and switching over to the database at a certain threshold. My concern with that approach is that changing hardware might allow more locks with less strain on the system, finding the right threshold would be cumbersome, and it doesn't take into account any other load on the machine. But it's a simple approach, and simple is 99% of the time better.
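
    The usual shape of the fix is to sample the counter on a background timer and let requests read a cached rolling average, so no request ever pays the sampling cost. Here is a Java analog as an assumption-heavy sketch (the question is ASP.NET, where the same pattern would sample a PerformanceCounter on a timer):

        import java.lang.management.ManagementFactory;
        import java.util.concurrent.Executors;
        import java.util.concurrent.ScheduledExecutorService;
        import java.util.concurrent.TimeUnit;

        // Samples system CPU load twice a second into a small ring buffer;
        // requests read the cached average and never touch the counter.
        public class CpuLoadAverager {
            private final double[] samples = new double[6]; // ~3 seconds of history
            private volatile double average;
            private int next;

            public CpuLoadAverager() {
                // HotSpot-specific cast; hedged, not portable to every JVM.
                com.sun.management.OperatingSystemMXBean os =
                    (com.sun.management.OperatingSystemMXBean)
                        ManagementFactory.getOperatingSystemMXBean();
                ScheduledExecutorService timer = Executors.newSingleThreadScheduledExecutor();
                timer.scheduleAtFixedRate(() -> {
                    samples[next] = os.getSystemCpuLoad(); // 0.0 - 1.0
                    next = (next + 1) % samples.length;
                    double sum = 0;
                    for (double s : samples) sum += s;
                    average = sum / samples.length;
                }, 0, 500, TimeUnit.MILLISECONDS);
            }

            public boolean overloaded() {
                return average > 0.75; // route to the database instead of the cache
            }
        }

    The request path then reduces to a single volatile read, which sidesteps the "sampling takes longer than the work" worry entirely.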

    Read the article

  • Web page database query optimization

    - by morpheous
    I am putting together a web page which is quite 'expensive' in terms of database hits. I don't want to start optimizing at this stage, though with a deadline looming I may end up not optimizing at all. Currently the page requires 18 (that's right, eighteen) hits to the db. I am already using joins, and some of the queries are UNIONed to minimize the trips to the db. My local dev machine can handle this (the page is not slow), but I suspect that if I release this into the wild, the number of queries will quickly overwhelm my database (MySQL). I could always use memcache or something similar, but I would much rather continue with the other dev work that needs to be completed before the deadline; retrieving the page at least works, so it is simply a matter of optimization now (if required). My question therefore is: is 18 db queries for a single page retrieval completely outrageous (i.e., should I put everything on hold and optimize the hell out of the retrieval logic), or shall I continue as normal, meet the deadline, release on schedule and see what happens? [Edit] Just to clarify, I have already done the 'obvious' things like using (single and composite) indexes for the fields used in the queries. What I haven't yet done is run a query analyzer to see if my indexes etc. are optimal.

    Read the article

  • How much does Website Development cost nowadays?

    - by Andreas Grech
    I am thinking of setting up my own freelance business, but coming from a workplace that offers a particular service to huge clients, I do not know the current going rates for websites. Since technology keeps changing (most of the time for the better), the amount you can charge for a single website is constantly shifting. For example, I don't think static websites (with just static HTML pages) are that expensive today, no? (As I said, I might be mistaken, since I haven't really touched this freelance industry yet.) So, freelance web developers out there, can you give me estimates of what you charge your clients? Some examples of websites I would like an approximate charge for: ~10 static HTML pages; ~10 DHTML pages (with maybe a flashy menu on the top/side); database-driven websites with a standard CMS (be it one you developed or an existing one); database-driven websites with a custom-built CMS for the particular client; using an existing template for the design; starting the design from scratch; etc. I know that clients don't normally care about the technologies used to build their websites, but do you charge differently according to the technology you build with (ASP.NET, PHP, Ruby on Rails, etc.)? Is the technology a factor when setting the price? Also, how do you go about charging your clients for your services? What are the major factors you consider when setting a price tag for a website? And better yet, how do you even find prospective clients? <= [or should I leave this question for a different post?] By the way, please mention some numbers in your post (in cash values, be it USD, GBP, EUR or anything), because I want to be able to calculate some averages from this post as answers stack up.

    Read the article

  • C: 8x8 -> 16 bit multiply precision guaranteed by integer promotions?

    - by craig-blome
    I'm trying to figure out whether the C Standard (C90, though I'm working off Derek Jones' annotated C99 book) guarantees that I will not lose precision when multiplying two unsigned 8-bit values and storing the result in 16 bits. An example statement is as follows:

        unsigned char foo;
        unsigned int foo_u16 = foo * 10;

    Our Keil 8051 compiler (v7.50 at present) generates a MUL AB instruction, which stores the MSB in the B register and the LSB in the accumulator. If I cast foo to unsigned int first:

        unsigned int foo_u16 = (unsigned int)foo * 10;

    then the compiler correctly decides I want an unsigned int there and generates an expensive call to a 16x16-bit integer multiply routine. I would like to argue, beyond reasonable doubt, that this defensive measure is not necessary. As I read the integer promotions described in 6.3.1.1, the effect of the first line shall be as if foo and 10 were promoted to unsigned int, the multiplication performed, and the result stored as unsigned int in foo_u16. If the compiler knows an instruction that does 8x8->16-bit multiplication without loss of precision, so much the better; but the precision is guaranteed either way. Am I reading this correctly? Best regards, Craig Blome

    Read the article

  • Clustered index - multi-part vs single-part index and effects of inserts/deletes

    - by Anssssss
    This question is about what happens with the reorganization of data in a clustered index when an insert is done. I assume that inserts on a table with a clustered index should be more expensive than on one without, because reorganizing the data in a clustered index involves changing the physical layout of the data on disk. I'm not sure how to phrase my question except through an example I came across at work. Assume there is a table, Junk, and two queries are run against it: the first searches by Name, the second by Name and Something. As I worked on the database, I discovered that the table had been created with two indexes, one to support each query, like so:

        --drop table Junk1
        CREATE TABLE Junk1
        (
            Name char(5),
            Something char(5),
            WhoCares int
        )

        CREATE CLUSTERED INDEX IX_Name ON Junk1
        (
            Name
        )

        CREATE NONCLUSTERED INDEX IX_Name_Something ON Junk1
        (
            Name, Something
        )

    When I looked at the two indexes, it seemed that IX_Name is redundant, since IX_Name_Something can be used by any query that searches by Name. So I would eliminate IX_Name and make IX_Name_Something the clustered index instead:

        --drop table Junk2
        CREATE TABLE Junk2
        (
            Name char(5),
            Something char(5),
            WhoCares int
        )

        CREATE CLUSTERED INDEX IX_Name_Something ON Junk2
        (
            Name, Something
        )

    Someone suggested keeping the first indexing scheme because it would result in more efficient inserts/deletes (assume there is no need to worry about updates to Name and Something). Does that make sense? I think the second scheme is better, since it means one less index to maintain. I would appreciate any insight into this specific example, or pointers to more information on the maintenance of clustered indexes.

    Read the article
