Search Results

Search found 5666 results on 227 pages for 'cost analysis'.


  • If I Design a CPU Can I Get it Realized in Hardware at a Reasonable Cost?

    - by mudge
    I want to design a CPU, and possibly memory and other hardware. I could do this with a hardware description language. Once I have done this, is it possible for me to send my designs to a manufacturer who will realize the design in hardware and send the CPU(s) back to me? Can it be done at a reasonable cost? I want to design and make my own computer system and actually have it realized in hardware.

    Read the article

  • Overview of Microsoft SQL Server 2008 Upgrade Advisor

    - by Akshay Deep Lamba
    Problem
    Like most organizations, we are planning to upgrade our database server from SQL Server 2005 to SQL Server 2008. Is there an easy way to know in advance what kind of issues one may encounter when upgrading to a newer version of SQL Server? One way is to use the Microsoft SQL Server 2008 Upgrade Advisor to plan for upgrades from SQL Server 2000 or SQL Server 2005. In this tip we will take a look at how to use the SQL Server 2008 Upgrade Advisor to identify potential issues before the upgrade.

    Solution
    SQL Server 2008 Upgrade Advisor is a free tool from Microsoft that identifies potential issues before you upgrade your environment to a newer version of SQL Server. The following prerequisites need to be installed before installing the Microsoft SQL Server 2008 Upgrade Advisor:
    - .NET Framework 2.0 or a higher version
    - Windows Installer 4.5 or a higher version
    - Windows Server 2003 SP1 or a higher version, Windows Server 2008, Windows XP SP2 or a higher version, or Windows Vista

    Download SQL Server 2008 Upgrade Advisor
    You can download SQL Server 2008 Upgrade Advisor from the following link. Once you have successfully installed Upgrade Advisor, follow the steps below to see how you can use this tool to identify potential issues before upgrading your environment.
    1. Click Start -> Programs -> Microsoft SQL Server 2008 -> SQL Server 2008 Upgrade Advisor.
    2. Click Launch Upgrade Advisor Analysis Wizard to open the wizard.
    3. On the wizard welcome screen, click Next to continue.
    4. In the SQL Server Components screen, enter the Server Name and click the Detect button to identify the components that need to be analyzed, then click Next to continue with the wizard.
    5. In the Connection Parameters screen, choose the Instance Name and Authentication, then click Next to continue with the wizard.
    6. In the SQL Server Parameters screen, select the databases you want to analyze, along with any trace files and SQL batch files, then click Next to continue with the wizard.
    7. In the Reporting Services Parameters screen, specify the Reporting Services instance name, then click Next to continue with the wizard.
    8. In the Analysis Services Parameters screen, specify an Analysis Services instance name, then click Next to continue with the wizard.
    9. In the Confirm Upgrade Advisor Settings screen you will see a quick summary of the options you have selected so far. Click Run to start the analysis.
    10. In the Upgrade Advisor Progress screen you can follow the progress of the analysis. Basically, the Upgrade Advisor runs predefined rules that help identify potential issues that can affect your environment once you upgrade your server from a lower version of SQL Server to SQL Server 2008.
    11. Once Upgrade Advisor has completed the analysis of SQL Server, Analysis Services and Reporting Services, click the Launch Report button at the bottom of the wizard screen to see the output.
    12. In the View Report screen you will see a summary of issues that can affect you once you upgrade. To learn more about an issue, expand it and read the detailed description.

    Read the article

  • Is there a sample set of web log data available for testing analysis against?

    - by Peter
    Sorry if this isn't strictly speaking a programming question, but I figure my best chance of success would be to ask here. I'm developing some web log file analysis algorithms, but to date I only have access to a fairly small amount of web log data to process. One algorithm I want to use makes some assumptions about 'the shape' of typical web log data, and so I'd like to test it against a larger 'exemplar' - perhaps the logs of a busy site with a good distribution of traffic from different sources etc. Is there a set of such data available somewhere? Thanks for any help.

    Read the article

  • How can I compute the average cost for this solution of the element uniqueness problem?

    - by Alceu Costa
    In the book Introduction to the Design & Analysis of Algorithms, the following solution is proposed to the element uniqueness problem:

        ALGORITHM UniqueElements(A[0 .. n-1])
        // Determines whether all the elements in a given array are distinct
        // Input: An array A[0 .. n-1]
        // Output: Returns "true" if all the elements in A are distinct
        //         and "false" otherwise.
        for i := 0 to n - 2 do
            for j := i + 1 to n - 1 do
                if A[i] = A[j] return false
        return true

    How can I compute the average cost (i.e. the number of comparisons for a given n) of this algorithm? What is a reasonable assumption about the input?
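
    For reference, the worst case (all elements distinct) performs n(n-1)/2 comparisons; the average depends entirely on the input model you assume. Below is a minimal Python sketch that estimates the average empirically, under one possible assumption (mine, not the book's): the n values are drawn independently and uniformly from a fixed range, so a smaller range means more duplicates and earlier exits.

        import random

        def unique_elements_comparisons(a):
            """Run the textbook algorithm and count comparisons of A[i] = A[j]."""
            n = len(a)
            comparisons = 0
            for i in range(n - 1):
                for j in range(i + 1, n):
                    comparisons += 1
                    if a[i] == a[j]:
                        return comparisons, False
            return comparisons, True

        def average_comparisons(n, value_range, trials=10000):
            """Estimate the expected number of comparisons for random inputs."""
            total = 0
            for _ in range(trials):
                a = [random.randrange(value_range) for _ in range(n)]
                count, _ = unique_elements_comparisons(a)
                total += count
            return total / trials

        if __name__ == "__main__":
            n = 20
            # Worst case (all distinct): n * (n - 1) / 2 comparisons.
            print("worst case:", n * (n - 1) // 2)
            # Small value range -> duplicates are likely, so the loops exit early.
            print("avg, values in [0, 25):", average_comparisons(n, 25))
            # Large value range -> duplicates are rare, close to the worst case.
            print("avg, values in [0, 10**9):", average_comparisons(n, 10**9))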

    Read the article

  • Do "if" statements affect the time complexity analysis?

    - by FranXh
    According to my analysis, the running time of this algorithm should be N^2, because each of the loops goes once through all the elements. I am not sure whether the presence of the if statement changes the time complexity.

        for (int i = 0; i < N; i++) {
            for (int j = 1; j < N; j++) {
                System.out.println("Yayyyy");
                if (i <= j) {
                    System.out.println("Yayyy not");
                }
            }
        }
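
    A quick tally of the work in that loop nest (treating each println as constant time): the outer loop runs N times, the inner loop N-1 times per outer iteration, and the if only decides whether the second println also runs inside an iteration that happens anyway, so it never adds iterations:

        T(N) = \sum_{i=0}^{N-1} \sum_{j=1}^{N-1} \Theta(1) = N(N-1) = \Theta(N^2)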

    Read the article

  • Best neural network for a certain type of pattern analysis?

    - by fred basset
    Hi All, I'm working on a system that will send telemetry data on machine operation back to a central server for analysis. One of the machine parameters we're measuring is motor current drawn vs time. After an operation is finished we plan to send back an array of currents vs time to the server. A successful operation would have a pattern like a trapezoid, problematic operations would have a pattern completely different, more like a large spike in values. Can anyone recommend a type of neural network that would be good at classifying these 1D vectors of current values into a pass/fail type output? Thanks, Fred
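
    Not a recommendation of a specific architecture, but as a rough baseline for classifying fixed-length 1D current vectors into pass/fail, here is a sketch of a small feed-forward network using scikit-learn's MLPClassifier. The trapezoid/spike generators are made-up stand-ins for the real telemetry and exist only so the example runs end to end.

        import numpy as np
        from sklearn.neural_network import MLPClassifier
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(0)
        N_SAMPLES, LENGTH = 400, 100

        def trapezoid(length):
            """Synthetic 'good' operation: ramp up, hold, ramp down, plus noise."""
            ramp = length // 4
            curve = np.concatenate([
                np.linspace(0, 1, ramp),
                np.ones(length - 2 * ramp),
                np.linspace(1, 0, ramp),
            ])
            return curve + rng.normal(0, 0.05, length)

        def spike(length):
            """Synthetic 'bad' operation: mostly flat with a large current spike."""
            curve = rng.normal(0.2, 0.05, length)
            pos = rng.integers(10, length - 10)
            curve[pos:pos + 3] += rng.uniform(2.0, 4.0)
            return curve

        X = np.array([trapezoid(LENGTH) if i % 2 == 0 else spike(LENGTH)
                      for i in range(N_SAMPLES)])
        y = np.array([1 if i % 2 == 0 else 0 for i in range(N_SAMPLES)])  # 1 = pass

        X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

        clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000, random_state=0)
        clf.fit(X_train, y_train)
        print("held-out accuracy:", clf.score(X_test, y_test))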

    Read the article

  • KnockoutJS showing a sorted list by item category

    - by Darksbane
    I just started learning knockout this week and everything has gone well except for this one issue. I have a list of items that I sort multiple ways, but one of the ways I want to sort needs a different display than the standard list. As an example, let's say I have this code:

        var BetterListModel = function () {
            var self = this;
            food = [
                { "name": "Apple",     "quantity": "3", "category": "Fruit", "cost": "$1" },
                { "name": "Ice Cream", "quantity": "1", "category": "Dairy", "cost": "$6" },
                { "name": "Pear",      "quantity": "2", "category": "Fruit", "cost": "$2" },
                { "name": "Beef",      "quantity": "1", "category": "Meat",  "cost": "$3" },
                { "name": "Milk",      "quantity": "5", "category": "Dairy", "cost": "$4" }
            ];

            self.allItems = ko.observableArray(food); // Initial items

            // Initial sort
            self.sortMe = ko.observable("name");

            ko.utils.compareItems = function (l, r) {
                if (self.sortMe() == "cost") {
                    return l.cost > r.cost ? 1 : -1;
                } else if (self.sortMe() == "category") {
                    return l.category > r.category ? 1 : -1;
                } else if (self.sortMe() == "quantity") {
                    return l.quantity > r.quantity ? 1 : -1;
                } else {
                    return l.name > r.name ? 1 : -1;
                }
            };
        };

        ko.applyBindings(new BetterListModel());

    and the HTML:

        <p>Your values:</p>
        <ul class="deckContents" data-bind="foreach: allItems().sort(ko.utils.compareItems)">
            <li>
                <div style="width:100%">
                    <div class="left" style="width:30px" data-bind="text: quantity"></div>
                    <div class="left fixedWidth" data-bind="text: name"></div>
                    <div class="left fixedWidth" data-bind="text: cost"></div>
                    <div class="left fixedWidth" data-bind="text: category"></div>
                    <div style="clear:both"></div>
                </div>
            </li>
        </ul>
        <select data-bind="value: sortMe">
            <option selected="selected" value="name">Name</option>
            <option value="cost">Cost</option>
            <option value="category">Category</option>
            <option value="quantity">Quantity</option>
        </select>
        </div>

    So I can sort these just fine by any field. If I sort them by name it will display something like this:

        3 Apple     $1 Fruit
        1 Beef      $3 Meat
        1 Ice Cream $6 Dairy
        5 Milk      $4 Dairy
        2 Pear      $2 Fruit

    Here is a fiddle of what I have so far: http://jsfiddle.net/Darksbane/X7KvB/

    This display is fine for all the sorts except the category sort. What I want when I sort by category is to display it like this:

        Fruit
        3 Apple $1 Fruit
        2 Pear  $2 Fruit
        Meat
        1 Beef  $3 Meat
        Dairy
        1 Ice Cream $6 Dairy
        5 Milk      $4 Dairy

    Does anyone have any idea how I might be able to display this so differently for that one sort?

    Read the article

  • What do I have to do and how much does it cost to get a device driver for Windows Vista / 7 (32 and 64 bit) signed?

    - by Jon Cage
    I've got some drivers which are basically LibUSB-Win32 with a new .inf file to describe product/vendor IDs and strings which describe my hardware. This works fine for 32-bit Windows, but 64-bit versions have problems; namely, Microsoft in their wisdom require all drivers to be digitally signed. So my questions are thus:
    - Is there a version of the LibUSB-Win32 drivers which is already signed that I could use?
    - If there aren't already some signed ones I can cannibalise, what exactly do I have to do to get my drivers signed? Do I need to get the 64- and 32-bit versions signed separately, and will this cost more?
    - Is there a free alternative to getting them signed?
    - Are there any other options I should consider, besides requiring that my customers boot into test mode each time they start their machines (not an option I'd consider)?

    Read the article

  • Low-cost way to host a large table yet keep performance scalable?

    - by Leo Liang
    I have a growing table storing time series data: 500M entries now, with 200K new records every day. The total size is around 15GB for now. My clients query the table mostly via a PHP script, and the size of the result set is around 10K records (not very large).

        select * from T where timestamp > X and timestamp < Y and additionFilters

    And I want this operation to be cheap. Currently my table is hosted in Postgres 7, on a single box with 16GB of memory, and I would love to see some good suggestions for hosting this at low cost while still allowing me to scale up for performance if needed. The table serves:
    1. Query: 90%
    2. Insert: 9.9%
    3. Update: 0.1% <-- very rare.

    Read the article

  • What is the cost in bytes for the overhead of a sql_variant column in SQL Server?

    - by Elan
    I have a table which contains many columns of the float data type with 15-digit precision. Each column consumes 8 bytes of storage. Most of the time the data does not require this amount of precision and could be stored as the real data type. In many cases the value can be 0, in which case I could get away with storing a single byte. My goal here is to optimize storage space requirements, which is an issue I am facing working with SQL Express's 4GB database size limit. If byte, real and float data types are stored in a sql_variant column there is obviously some overhead involved in storing these values. What is the cost of this overhead? I would then need to evaluate whether I would actually end up with significant space savings (or not) by switching to sql_variant column data types. Thanks, Elan

    Read the article

  • How Much Does Source Code Cost? Range?

    - by Brain Freeze
    I have taken a job selling a customized "online workplace management application." Our clients' businesses work around the application: our clients track their time (which is how they get paid), finances, and work documents through the application we provide, and give their clients access to their interests through the application. Our clients range from 2 users to 500 users. Each user probably processes 200 files per year and generates a fee for each file in the range of $500-$2500 per file. The application has been refined over a period of years and has cost around a million to develop. Does anyone know what range something like this sells for (source code, add-ons such as support and hosting)? I am trying to wrap my head around it as my background is not in software development.

    Read the article

  • What's the lowest cost, legal, Microsoft server stack you can assemble?

    - by McKAMEY
    Assuming that you have an app infrastructure that generally only requires:
    - ASP.NET MVC / C# / .NET
    - a database or NoSQL data store (must be accessible from C#)

    Here's the challenge to you server gods:
    - What is the least expensive configuration that will allow you to deploy to production in a way that doesn't break any licensing rules?
    - In what ways does this solution differ from the "standard" Microsoft deployment scenario?
    - Where does this solution's performance break down once the app begins to scale?

    I'm not concerned about the hardware, only the server software itself. I would love to hear about any solutions you've personally put into production, especially if they are unique alternatives. For ideas, consider some of the possible variations: a) any Microsoft server solutions where they have lowered the barrier to entry to compete with OSS, or b) any OSS alternatives to Microsoft products which perform at a similar level. An example of a): SQL Server 2008 Express Edition SP1 is a 100% free version of SQL Server which will scale to the needs of many smaller / early-stage applications. An example of b): running the Mono Framework on Linux. An example of differing from the "standard" stack: running Mono on Linux will require a completely different server OS familiarity; none of the Windows-based knowledge really transfers. An example of breaking down under scale: SQL Server Express will only scale to 1GB of memory and a 4GB database size. After that point, the application will need to move to one of the paid versions of SQL Server.

    Read the article

  • What's more cost effective: hosting your web startup on FOSS or Windows?

    - by user37899
    Hi, not coming from the Windows world, I'm confused about licensing. I think my knowledge may be out of date. Before we gave up on Windows web servers (IIS 2), we used to have to pay for Client Access Licenses (CALs). This worked out quite expensive. Is it cheaper to host thousands of users on Windows than to use free open source software tools? http://serverfault.com/questions/124329/network-load-balancing-efficience-and-limits. This post suggests I can pay $15 a month for unlimited users. I certainly hope this is an unbiased view; I am a professional and use the right technology for the right job. I hope I am not feeling the wrath of a Windows (or Linux, for that matter) fanboy. Perhaps a Microsoft certified licensing person can clear this up. Should I be recommending Windows servers and products over LAMP to startups? Cheers!

    Read the article

  • Most cost-efficient way to back up Subversion data to S3?

    - by sludge
    I'm looking at using S3 as an offsite backup repo for my Subversion database. When I dump my SVN database, it's about 10 gigabytes. I would like to avoid the charge of uploading that data repeatedly. The anatomy of this large file is such that new changes to Subversion modify the tail of the file, with everything else staying the same. Because Amazon S3 does not allow you to "patch" files with changes, I will have to upload ten gigs every time I instantiate a backup after doing a simple commit to Subversion. Here are the options as I see them:

    Option 1: I am looking at duplicity, which has --volsize, which splits data over an amount of megs. Is it possible to split the Subversion dumps using this so further incremental backups are measured in megabytes?

    Option 2: Can I just back up the hot Subversion repository? This seems like a bad idea if it is in the middle of writing a commit. However, I have the option of taking the repo offline between the hours of midnight and 4am. Each revision in my Berkeley DB uses a file as its record.

    Read the article

  • What is the value/cost of enabling "spread spectrum clocking" on my hard drives?

    - by Stu Thompson
    I'm building up a biggish NAS box (10x WD RE4 2TB SATA RAID10) and ran into some problems. During the course of my research, debugging, investigations, etc., I discovered a jumper on the physical drives labeled "spread spectrum clocking". After some googling about this on teh internets, it seems to be a feature that some suggest (without reference or explanation) enabling in 'a storage configuration', and that it makes the drive less susceptible to EMI. But why? I've got three core questions:
    - Why is this feature not enabled by default?
    - What are the actual benefits?
    - Are there any costs?

    Read the article

  • Reducing storage cost by moving old files to external USB HDDs. Your thoughts?

    - by cparker4486
    I've got about 300GB of pictures and marketing data that is rarely accessed, and I'd like to get it off my main storage. I was thinking of simply adding two external USB HDDs to the server and moving all the files to one of the drives. The second drive would be the backup destination for the first drive. I'm working with Server 2003 R2 SP2. This will help me free a good amount of space on my main storage as well as reduce the complexity, backup window, and tape usage of my backups.

    Read the article

  • open flash chart rails x-axis issue

    - by Jimmy
    Hey guys, I am using Open Flash Chart 2 in my Rails application. Everything is looking smooth except for the range on my x-axis. I am creating a line to represent cell phone plan cost over a specific amount of usage, and I'm generating 8 values: 1-5 are below the allowed usage, while 6-8 demonstrate the cost for usage over the limit. The problem I'm encountering is how to set the range of the x-axis in Ruby on Rails to something specific to the data. Right now the values being displayed are the indexes of the array that I'm giving. When I try to hand a hash to the values, the chart doesn't even load at all. So basically I need help finding a way to set the data for my line properly so that it displays correctly; right now it is treating every value's x coordinate as its index in the array. Here is a screen shot which may be a better description than what I am saying: http://i163.photobucket.com/albums/t286/Xeno56/Screenshot.png Note that those values are correct; just the range on the x-axis is incorrect, and it should be something like 100, 200, 300, 400, 500, 600, 700.

    Code:

        y = YAxis.new
        y.set_range(0, 100, 20)

        x_legend = XLegend.new("Usage")
        x_legend.set_style('{font-size: 20px; color: #778877}')

        y_legend = YLegend.new("Cost")
        y_legend.set_style('{font-size: 20px; color: #770077}')

        chart = OpenFlashChart.new
        chart.set_x_legend(x_legend)
        chart.set_y_legend(y_legend)
        chart.y_axis = y

        line = Line.new
        line.text = plan.name
        line.width = 2
        line.color = '#006633'
        line.dot_size = 2
        line.values = generate_data(plan)

        chart.add_element(line)

        def generate_data(plan)
          values = []

          # generate below threshold numbers
          5.times do |x|
            usage = plan.usage / 5 * x
            cost = plan.cost * 10
            values << cost
          end

          # generate above threshold numbers
          3.times do |x|
            usage = plan.usage + ((plan.usage / 5) * x)
            cost = plan.cost + (usage * plan.overage)
            values << cost
          end

          return values
        end

    Read the article

  • Is it best to try to work at a company where your software directly makes the company money?

    - by Ryan Hayes
    I was told once that the best place to work as a developer is a company where the software you write is what makes the company money, whether it be software production or software services like consulting. This is opposed to a company where the software you write is just to support some other part of the business that makes the money, like manufacturing or finance. I know there are always exceptions, but in general, are employees treated better if they are on the front lines of profit generation, as opposed to being just another cost center? cost center (n.) - A cost center is part of an organization that does not produce direct profit and adds to the cost of running a company. Examples of cost centers include research and development departments, marketing departments, help desks and customer service/contact centers.

    Read the article

  • Sentiment analysis with NLTK (Python) for sentences, using sample data or a webservice?

    - by Ke
    I am embarking upon an NLP project for sentiment analysis. I have successfully installed NLTK for Python (it seems like a great piece of software for this). However, I am having trouble understanding how it can be used to accomplish my task. Here is my task:
    1. I start with one long piece of data (let's say several hundred tweets on the subject of the UK election, from their webservice).
    2. I would like to break this up into sentences (or info no longer than 100 or so chars) (I guess I can just do this in Python?).
    3. Then I search through all the sentences for specific instances within each sentence, e.g. "David Cameron".
    4. Then I would like to check for positive/negative sentiment in each sentence and count them accordingly.

    NB: I am not really worried too much about accuracy, because my data sets are large, and also not worried too much about sarcasm.

    Here are the troubles I am having:
    1. All the data sets I can find, e.g. the corpus movie review data that comes with NLTK, aren't in webservice format. It looks like this has had some processing done already. As far as I can see the processing (by Stanford) was done with WEKA. Is it not possible for NLTK to do all this on its own? Here all the data sets have already been organised into positive/negative, e.g. the polarity dataset http://www.cs.cornell.edu/People/pabo/movie-review-data/ How is this done? (To organise the sentences by sentiment, is it definitely WEKA, or something else?)
    2. I am not sure I understand why WEKA and NLTK would be used together. It seems like they do much the same thing. If I'm processing the data with WEKA first to find sentiment, why would I need NLTK? Is it possible to explain why this might be necessary?
    3. I have found a few scripts that get somewhat near this task, but all are using the same pre-processed data. Is it not possible to process this data myself to find sentiment in sentences rather than using the data samples given in the link?

    Any help is much appreciated and will save me much hair! Cheers, Ke
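
    Not the asker's pipeline, just a minimal sketch of the kind of thing NLTK can do on its own, assuming the classic approach of training a Naive Bayes classifier on the movie_reviews corpus that ships with NLTK and then applying it sentence by sentence. The example text is made up, accuracy on tweets would be rough, and the corpora need a one-time nltk.download().

        import nltk
        from nltk.corpus import movie_reviews
        from nltk.tokenize import sent_tokenize

        # One-time downloads: nltk.download('movie_reviews'); nltk.download('punkt')

        def word_features(words):
            """Represent a piece of text as a bag of words for the classifier."""
            return {word.lower(): True for word in words}

        # Build (features, label) pairs from the labelled movie review corpus.
        labeled = [
            (word_features(movie_reviews.words(fileid)), category)
            for category in movie_reviews.categories()       # 'pos' and 'neg'
            for fileid in movie_reviews.fileids(category)
        ]

        classifier = nltk.NaiveBayesClassifier.train(labeled)

        text = ("David Cameron gave a terrible speech today. "
                "The crowd in Manchester was wonderful though.")

        # Split into sentences, filter for the phrase of interest, classify each.
        for sentence in sent_tokenize(text):
            if "David Cameron" in sentence:
                features = word_features(nltk.word_tokenize(sentence))
                print(classifier.classify(features), "->", sentence)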

    Read the article

  • Most efficient sorting of a calculation on DataTable columns

    - by byte
    Let's say you have a DataTable that has columns of "id", "cost", "qty":

        DataTable dt = new DataTable();
        dt.Columns.Add("id", typeof(int));
        dt.Columns.Add("cost", typeof(double));
        dt.Columns.Add("qty", typeof(int));

    And it's keyed on "id":

        dt.PrimaryKey = new DataColumn[1] { dt.Columns["id"] };

    Now what we are interested in is the cost per quantity. So, in other words, if you had a row of:

        id | cost  | qty
        ----------------
        42 | 10.00 | 2

    the cost per quantity is 5.00. My question then is: given the preceding table, assume it's constructed with many thousands of rows, and you're interested in the top 3 cost-per-quantity rows. The information needed is the id and cost per quantity. You cannot use LINQ. In SQL it would be trivial; how BEST (most efficiently) would you accomplish it in C# without LINQ?
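
    Not C#, so not a drop-in answer, but a sketch of the selection strategy in Python: for a top-k query you don't need to sort all the rows, keeping only the k best candidates (a bounded heap, or a plain loop that tracks the best three seen so far) is O(n log k) instead of O(n log n). The column names mirror the question; the sample rows are purely illustrative, and the same idea ports to a manual loop over dt.Rows.

        import heapq

        # Illustrative rows standing in for the DataTable contents.
        rows = [
            {"id": 42, "cost": 10.00, "qty": 2},
            {"id": 7,  "cost": 9.00,  "qty": 1},
            {"id": 13, "cost": 4.00,  "qty": 4},
            {"id": 99, "cost": 30.00, "qty": 5},
            {"id": 3,  "cost": 2.50,  "qty": 1},
        ]

        def cost_per_qty(row):
            return row["cost"] / row["qty"]

        # Only the 3 best rows are ever kept in the internal heap.
        top3 = heapq.nlargest(3, rows, key=cost_per_qty)

        for row in top3:
            print(row["id"], cost_per_qty(row))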

    Read the article

  • Overriding Object.Equals() instance method in C#; now Code Analysis / FxCop warning CA2218: "should also redefine GetHashCode"

    - by Chris W. Rea
    I've got a complex class in my C# project on which I want to be able to do equality tests. It is not a trivial class; it contains a variety of scalar properties as well as references to other objects and collections (e.g. IDictionary). For what it's worth, my class is sealed.

    To enable a performance optimization elsewhere in my system (an optimization that avoids a costly network round-trip), I need to be able to compare instances of these objects to each other for equality – other than the built-in reference equality – and so I'm overriding the Object.Equals() instance method. However, now that I've done that, Visual Studio 2008's Code Analysis a.k.a. FxCop, which I keep enabled by default, is raising the following warning:

        warning : CA2218 : Microsoft.Usage : Since 'MySuperDuperClass' redefines Equals, it should also redefine GetHashCode.

    I think I understand the rationale for this warning: if I am going to be using such objects as the key in a collection, the hash code is important, i.e. see this question. However, I am not going to be using these objects as the key in a collection. Ever.

    Feeling justified to suppress the warning, I looked up code CA2218 in the MSDN documentation to get the full name of the warning so I could apply a SuppressMessage attribute to my class as follows:

        [SuppressMessage("Microsoft.Naming", "CA2218:OverrideGetHashCodeOnOverridingEquals", Justification="This class is not to be used as key in a hashtable.")]

    However, while reading further, I noticed the following:

        How to Fix Violations
        To fix a violation of this rule, provide an implementation of GetHashCode. For a pair of objects of the same type, you must ensure that the implementation returns the same value if your implementation of Equals returns true for the pair.

        When to Suppress Warnings
        Do not suppress a warning from this rule. [arrow & emphasis mine]

    So, I'd like to know:
    1. Why shouldn't I suppress this warning as I was planning to? Doesn't my case warrant suppression? I don't want to code up an implementation of GetHashCode() for this object that will never get called, since my object will never be the key in a collection.
    2. If I wanted to be pedantic, instead of suppressing, would it be more reasonable for me to override GetHashCode() with an implementation that throws a NotImplementedException?

    Update: I just looked this subject up again in Bill Wagner's good book Effective C#, and he states in "Item 10: Understand the Pitfalls of GetHashCode()":

        If you're defining a type that won't ever be used as the key in a container, this won't matter. Types that represent window controls, web page controls, or database connections are unlikely to be used as keys in a collection. In those cases, do nothing. All reference types will have a hash code that is correct, even if it is very inefficient. [...] In most types that you create, the best approach is to avoid the existence of GetHashCode() entirely.

    ... that's where I originally got this idea that I need not be concerned about GetHashCode() always.
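
    As a cross-language aside (not C#, and not an answer to the FxCop question itself): the equals/hash contract the rule protects is the same one other runtimes enforce. Python, for instance, makes the coupling explicit: defining __eq__ without __hash__ sets __hash__ to None, so instances stop being usable as dictionary keys at all. A tiny sketch:

        class Measurement:
            """Defines equality but no hash, like overriding Equals without GetHashCode."""
            def __init__(self, value):
                self.value = value

            def __eq__(self, other):
                return isinstance(other, Measurement) and self.value == other.value

        a, b = Measurement(3), Measurement(3)
        print(a == b)        # True: value equality works fine

        try:
            {a: "works as a key?"}
        except TypeError as exc:
            # Python 3 sets __hash__ to None when __eq__ is defined without __hash__.
            print("unhashable:", exc)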

    Read the article
