Search Results

Search found 5166 results on 207 pages for 'cost benefit'.

Page 164/207 | < Previous Page | 160 161 162 163 164 165 166 167 168 169 170 171  | Next Page >

  • Server Hosting + AWS

    - by ledy
    Since my dedicated servers are hosted at a "normal" hosting service, I wonder if there is a really cheap way to extend the server farm with AWS instances. E.g. it seems to be an efficient and flexible solution for data storage, and for resources for occasional data processing, too. However, it might be very inefficient to mix two data centres, transferring data from the current webhoster to Amazon and vice versa. In my case, the traffic for this continuous data exchange seems to be expensive, and moving the data back to the hoster introduces delay. What are best practices for mixing non-AWS and AWS systems? E.g.: how do I move the hoster's data to AWS as log file storage to run Urchin analysis, and/or port the log file data into a BigTable for exhaustive analysis there? After working with the data: how do I bring it back to the hoster and use it with the webservers there? I am not going to move the whole server farm to Amazon, only "separate" parts or tasks, if the transfer/exchange does not lead to increased cost.
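    For the log-shipping step specifically, here is a minimal sketch of what an upload to S3 might look like, assuming the AWS SDK for Java is on the classpath; the bucket name and log path are hypothetical, and credentials are expected in the environment or in ~/.aws/credentials:

        import com.amazonaws.services.s3.AmazonS3;
        import com.amazonaws.services.s3.AmazonS3ClientBuilder;
        import java.io.File;

        public class LogShipper {
            public static void main(String[] args) {
                AmazonS3 s3 = AmazonS3ClientBuilder.defaultClient();
                // Hypothetical bucket and key; rotate logs first so the file is stable
                File log = new File("/var/log/apache2/access.log.1");
                s3.putObject("example-log-archive", "webserver1/access.log.1", log);
            }
        }

    Since inbound transfer to S3 is far cheaper than outbound, the usual pattern is to send raw logs in and bring only small aggregated results back to the hoster.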

    Read the article

  • Disaster Recovery Standby Server

    - by user64300
    Hi, I work for a small business with 25 users and 2 servers. 1 server is the DC running Windows Server 2003/Exchange 2003. We want a reliable disaster recovery strategy for this server without having to spend a lot of money. We take regular backups, but I have been advised that only an identical server will allow them to be restored easily. I'm trying to come up with a solution that means we don't have to buy two servers at twice the cost every time we upgrade. I'm toying with the idea of upgrading our DC more frequently (say every 3 years) and then using the old server as the recovery server (temporarily - until we can source a replacement server). However, I won't know whether the backups will restore on the old server until I try it! We're planning to upgrade to Server 2008 R2 in the near future, so I'm hoping the backup tools will give me some success in restoring to different hardware (or perhaps I can use Hyper-V if not). So what I am wondering is whether it is a good idea to use old hardware as a disaster recovery strategy (provided we regularly test it, obviously!).

    Read the article

  • Could replacing an old hard drive's circuit board make it work again?

    - by oscilatingcretin
    I have a 12-year-old, 10 GB Maxtor drive that died on me around 7 years ago, but I have not had the heart to throw it away. When the computer powers on, it whirrs quietly as it tries to spin up, and then it stops. So, a few years ago, I sent it off for professional data recovery. They were able to retrieve quite a bit from it, but I know there's a bunch more there. It only cost $700, so I just chalked up the lackluster recovery effort to "you get what you pay for", considering that most companies will charge you several thousands of dollars for this kind of data recovery. When they sent the drive back, I couldn't help but plug it back in just to see if maybe they unjammed something in the process of disassembling/reassembling the drive. To my surprise, the drive had a much healthier spin-up sound and actually stayed spinning for several minutes before winding down to a halt. Windows is even able to detect and interact with the drive, but I get I/O errors after so many minutes of waiting for it to mount. Before I start doing stupid stuff with it like dropping it on the ground, freezing it, crapping on it, etc., I decided to buy the exact same model off eBay so that I could swap the circuit boards as a last-ditch effort. While it's en route, I thought I'd come here to ask if this is even a worthwhile effort and, if even remotely so, what should I know before ripping off the old board and slapping on the new one?

    Read the article

  • Are these hardware components compatible?

    - by Tom Kaufmann
    I am trying to upgrade my new machine, but I want to do it myself. This is my 1st attempt at building a system. After carefully reviewing feedback and my budget, I have decided to select the components listed below. Can anybody let me know whether they are compatible or not?

        Transcend 64 GB 2.5" SATA Solid State Drive
        Asus GeForce GTX550 1GB DDR5 ENGTX550 TI DI/1GD5 Graphics Card
        Seagate Barracuda 1 TB HDD Internal Hard Drive
        Cooler Master eXtreme Power Pro 600 Power Supply
        Intel Core i5 2500K Sandy Bridge 3.30 GHz 95 W 4 Core Desktop Processor
        Intel DX79TO Motherboard
        Corsair CMZ8GX3M2A1600C9 8 GB DDR3 SDRAM 1600 MHz Dual Channel Kit Desktop Memory
        Sony AD-7260S-ZS Internal DVD Writer - Black
        Cooler Master Hyper TX3 EVO Intel CPU Cooler
        Cooler Master Elite 335U Cabinet
        LG E2051T 20.1 Inch SuperSlim Monitor

    Are any of these hardware components incompatible with the i5 2500K? If you have any other suggestions for selecting other hardware that can boost my performance, or lower my cost at the same performance, please suggest. But my primary question is whether they are compatible or not! Any help is appreciated. Thank you.

    Read the article

  • How to change font size on display

    - by Tim
    My laptop is a Lenovo T400, whose screen size is 14.1 inches and whose default resolution is 1440 x 900. My main OS is Ubuntu 10.10. The default font size on the display is somewhat small, which might contribute to the fatigue of my eyes. My previous laptop was an Acer 5000, whose screen size is 15.4 inches and whose default resolution is 1024 x 768. I like reading on my old laptop better than on my new one. Is it possible to change the settings on my new one so that reading on it looks like my old one? What are the parameters that control the font size? Are screen size and resolution among them? In Windows, there are choices for font size, while in Ubuntu I haven't found where I can change the setting, and would like to know if someone here knows about it. I also wonder if I can use a separate bigger display (perhaps just like a desktop display) as the display of my laptop, in case I don't want to enlarge the font size at the cost of sacrificing the amount of content displayed, and how I would do that? Thanks and regards!
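    Screen size and resolution do combine into the parameter that matters here: pixel density (DPI). A font rendered at the same nominal point size covers fewer millimetres on a denser screen. A quick sketch of the arithmetic for the two laptops in the question (plain Java, values from above):

        public class Dpi {
            // DPI = diagonal pixel count / diagonal size in inches
            static double dpi(int w, int h, double diagonalInches) {
                return Math.sqrt((double) w * w + (double) h * h) / diagonalInches;
            }

            public static void main(String[] args) {
                System.out.printf("T400 (1440x900, 14.1\"): %.0f DPI%n", dpi(1440, 900, 14.1)); // ~120
                System.out.printf("Acer (1024x768, 15.4\"): %.0f DPI%n", dpi(1024, 768, 15.4)); // ~83
            }
        }

    So text on the T400 comes out roughly 30% smaller at the same font settings, which matches the experience described.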

    Read the article

  • Deployment and monitoring tools for java/tomcat/linux environment

    - by Ran
    I've been a developer for many years, but I don't have tons of experience in ops, so apologies if this is a newbie question. In my company we run a web service written in Java, mainly based on a Tomcat web server. We have two datacenters with about 10 hosts each. Hosts are of several types: database, Tomcat, some offline Java processes, memcached servers. All hosts are Linux CentOS. Up until now, when releasing a new version to production, we've been using a set of in-house shell scripts that copy jars/wars and restart the Tomcats. The company has gotten bigger, so it has become more and more difficult to operate all this and take code from development through QA and staging to production. A typical release often involves human errors that cost us precious uptime. Sometimes we need to revert to the last known good version, and this isn't easy, to say the least... We're looking for a tool, a framework, a solution that would provide the following:

        - Supports the given list of technologies (Java, Tomcat, Linux, etc.)
        - Provides easy deployment through the different stages, including QA and production
        - Provides configuration management, e.g. setting server properties (what's the connection URL of each host, etc.), server.xml or context configuration (see the sketch below)
        - Monitoring. If we can get monitoring in the same package, that'll be nice. If not, then yet another tool we can use to monitor our servers.
        - Preferably, open source with tons of documentation ;)

    Can anyone share their experience? Suggest a few tools? Thanks!
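    On the configuration-management point, one low-tech baseline (independent of whichever tool is chosen) is to build a single artifact and select a per-stage properties file at startup. A minimal sketch, with hypothetical file names:

        import java.io.FileInputStream;
        import java.io.IOException;
        import java.util.Properties;

        public class StageConfig {
            // Loads config-development.properties, config-qa.properties or
            // config-production.properties, selected by a -Dstage=... JVM flag,
            // so the exact same jar/war is promoted through every stage.
            public static Properties load() throws IOException {
                String stage = System.getProperty("stage", "development");
                Properties props = new Properties();
                try (FileInputStream in = new FileInputStream("config-" + stage + ".properties")) {
                    props.load(in);
                }
                return props;
            }
        }

    Started as e.g. java -Dstage=qa -jar service.jar, this keeps per-host values like connection URLs out of the artifact, which is the property any candidate tool would need to preserve.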

    Read the article

  • How and where do you manage your domain names?

    - by Saif Bechan
    In the past several years of doing web development I have often needed to buy new domain names. I also changed registrars a lot, so over the years I have multiple domain names scattered over different registrars all over the world. Now I want to bring a little structure into my business, and I am at the point where I want easy control over my domain names in a convenient way. Does anyone have an idea of the best way to bring structure to this? I have made some suggestions; maybe you can comment on them for me.

        1) Just leave it as it is. To make adjustments I have to log into different panels, and for some registrars I have to email the changes.

        2) Transfer all the domains to one registrar. This will cost a lot, about 10 USD per domain name. But if I can find a registrar where I have full control over DNS, this is worth looking at.

    Can you give me some comments on how you are doing things now? Maybe also which registrar you prefer.

    Read the article

  • Deploying and publishing my first ASP.NET MVC 3 web application

    - by john G
    I want to deploy and publish my first ASP.NET MVC 3 web application at my client's site (the client is a small office with 2-4 employees that need to access the application). I have currently finished developing my web application using the free Microsoft Visual Web Developer 2010 Express and the free SQL Server 2008 R2 Express database. My concern is that the free SQL Express database that I am currently using has a limitation of 10 GB, I think, so I want to buy SQL Server for Small Business to remove the database limitations. The questions that I need help with are:

        1. If I use SQL Server for Small Business, will my web application have other limitations in production that I am unaware of?
        2. Will SQL Server for Small Business be the right choice, bearing in mind that the system will be used by 2-4 clients only?
        3. Approximately how much will the SQL Server database cost in US dollars?
        4. Is there any other software that I need to buy to be able to deploy and publish the application on an intranet and the internet?

    Appreciate any help. Best Regards

    Read the article

  • Finding Font Issues in Acrobat

    - by Jayme
    OK, let me try this again; sorry for not being clear. We create our PDFs through Quark, then send them to print. I usually create outlines on my EPS files before I load them into Quark, but forgot this time. We bypassed the font error that Quark gave us by accident, found out too late that our PDF was bad, and it cost a lot of money to fix. We are trying to find a way to check our PDF for font problems before we send it to print, in case this problem happens again. We just want to be extra sure that we have tried everything. What I see in Quark is what the font is supposed to look like. When I view my PDF, the text is mixed up. It's readable, but it doesn't look like it's supposed to, and the spacing is all off within the text. My boss told me about the preflight in Quark and the internal structure for the fonts. She was asking me if this would help and what the lingo all meant (which is where my first question started). The image on the left is my EPS that is correct; the image on the right is from the PDF. The white text in the top right and the website at the bottom left are what is messed up. I am running Mac 10.5.8, Quark 7.5 and Acrobat 8.3.1. Thanks, Jayme

    Read the article

  • ASP.NET MVC - Html.DropDownList - Value not set via ViewData.Model

    - by chrisb
    I have just started playing with ASP.NET MVC and have stumbled over the following situation. It feels a lot like a bug, but if it's not, an explanation would be appreciated :) The View contains pretty basic stuff:

        <%=Html.DropDownList("MyList", ViewData["MyListItems"] as SelectList)%>
        <%=Html.TextBox("MyTextBox")%>

    When not using a model, the value and selected item are set as expected:

        //works fine
        public ActionResult MyAction(){
            ViewData["MyListItems"] = new SelectList(items, "Value", "Text"); //items is an ienumerable of {Value="XXX", Text="YYY"}
            ViewData["MyList"] = "XXX";    //set the selected item to be the one with value 'XXX'
            ViewData["MyTextBox"] = "ABC"; //sets textbox value to 'ABC'
            return View();
        }

    But when trying to load via a model, the textbox has its value set as expected, but the dropdown doesn't get a selected item set:

        //doesnt work
        public ActionResult MyAction(){
            ViewData["MyListItems"] = new SelectList(items, "Value", "Text"); //items is an ienumerable of {Value="XXX", Text="YYY"}
            var model = new {
                MyList = "XXX",   //set the selected item to be the one with value 'XXX'
                MyTextBox = "ABC" //sets textbox value to 'ABC'
            };
            return View(model);
        }

    Any ideas? My current thoughts are that perhaps when using a model, we're restricted to setting the selected item on the SelectList constructor instead of using the ViewData (which works fine) and passing the SelectList in with the model - which would have the benefit of cleaning the code up a little - I'm just wondering why this method doesn't work... Many thanks for any suggestions

    Read the article

  • XtraGrid Suite - is there a way to add a button or hyperlink to a cell?

    - by calico-cat
    I'm working with the XtraGrid Suite made by DevExpress. I can't find any sort of functionality to do this, but I'm curious whether you can add a button or hyperlink to a grid cell. Context: I've got an Events list. Each Event has a Time, Start/End, and a Category (Utility and Maintenance). There can be Start events and Stop events. Having done my analysis of the problem, I've decided that having a StartTime and EndTime for each event would not work. So if an event starts, I'd record the current time on the Event object and set it as a 'Start' event. I'd like to add a "Stop" button/hyperlink to a cell in that row. If the user wishes to log a Stop event, the event type, etc. would be copied to a new Event with the type 'Stop', and the button would disappear. I hope this makes sense. EDIT: Aaronaught's answer is actually better than what I was originally asking for (a button), so I've updated the question. That way, anyone looking to put a hyperlink in a cell can benefit from his example : )

    Read the article

  • Pop-up modal with UITableView on iPhone

    - by Meltemi
    I need to pop up a quick dialog for the user to select one option in a UITableView from a list of roughly 2-5 items. The dialog will be modal and only take up about 1/2 of the screen. I go back and forth between how to handle this. Should I subclass UIView and make it a UITableViewDelegate & DataSource? I'd also prefer to lay out this view in IB. So to display it I'd do something like this from my view controller (assume I have a property in my view controller for DialogView *myDialog;):

        NSArray* nibViews = [[NSBundle mainBundle] loadNibNamed:@"DialogView" owner:myDialog options:nil];
        myDialog = [nibViews objectAtIndex:0];
        [self.view addSubview:myDialog];

    The problem is I'm trying to pass owner:myDialog, which is nil, as it hasn't been instantiated... I could pass owner:self, but that would make my view controller the File's Owner, and that's not how that dialog view is wired in IB. So that leads me to think this dialog wants to be another full-blown UIViewController... But from all I've read you should only have ONE UIViewController per screen, so this confuses me, because I could benefit from viewDidLoad, etc. that come along with view controllers... Can someone please straighten this out for me?

    Read the article

  • Version control a content management system?

    - by Mike
    I have the following directory structure in the CMS application we have written:

        /application
        /modules
            /cms
            /filemanager
            /block
            /pages
            /sitemap
            /youtube
            /rss
        /skin
            /backend
                /default
                    /css
                    /js
                    /images
            /frontend
                /default
                    /css
                    /js
                    /images

    Application contains code specific to the current CMS implementation, i.e. code for this specific CMS. Modules contain reusable portions of code that we share across projects, such as libraries to work with YouTube or RSS feeds. We include these as git submodules, so that we can update the module in any website and push the changes back across all other projects. It makes it really easy to apply a change to our code and distribute it. We wanted to turn the CMS into a module so we get the same benefit - we can run the entire project under source control, then update the CMS as required through a git submodule. We have run into a problem, however: the CMS requires javascript/images/css in order for it to work correctly. Things we have thought about:

        - We could create 2 submodules, one for cms-skin and one for cms, but this means you cannot "git pull" one version without having some idea of which versions of skin work with which versions of cms, i.e. version 1.2.2 CMS might have issues with 1.0.3 CMS-Skin.
        - We could add the skin to the cms module, but this has the following problems: skin should be available on the document root, module code shouldn't be, and if it is it should probably be secured via .htaccess. It also doesn't seem to make any sense bundling assets with PHP code.
        - We could create a symlink from /skin/backend/ to /modules/cms/skin, but does this cause any security problems, and do we want to require something like a symlink for the application to work?
        - We could create a hook for git or a shell script that copies files from modules/cms/skin to skin/backend when an update occurs, but this means we lose the ability to edit CMS core files in a project and then push them back.

    How is this typically done in large-scale CMSs? How is it possible to get the source code for a CMS under version control, work on the application for a client, then update the source code as releases are made by the vendor? How do applications like Magento or Drupal do this?

    Read the article

  • Improve XPath efficiency for repeated, parameterized queries

    - by Chris Allan
    Hi, I am repeatedly performing the following XPath query (though parameterized by 'keywordText') around 40,000 times:

        String query = SystemGlobal.YAHOO_KEYWORDSSUBNODE + "/" + SystemGlobal.YAHOO_KEYWORDNODE
                + "[" + SystemGlobal.YAHOO_ATTRKEYPHRASE + "='" + keywordText + "']";
        CachedXPathAPI cachedXPathAPI = new CachedXPathAPI();
        NodeIterator nl = cachedXPathAPI.selectNodeIterator(doc.getElementsByTagName(SystemGlobal.YAHOO_KEYWORDSROOT).item(0), query);
        Node n;
        if ((n = nl.nextNode()) != null) {
            keyword.setKeywordId(Long.parseLong(cachedXPathAPI.selectSingleNode(n, SystemGlobal.YAHOO_ATTRKEYID).getTextContent()));
            keyword.setKeyPhrase(cachedXPathAPI.selectSingleNode(n, SystemGlobal.YAHOO_ATTRKEYPHRASE).getTextContent());
            keyword.setStatus(mapStatus(cachedXPathAPI.selectSingleNode(n, SystemGlobal.YAHOO_ATTRSTATUS).getTextContent()));
            keyword.setCampaignId(Long.parseLong(cachedXPathAPI.selectSingleNode(n, "../../" + SystemGlobal.YAHOO_ATTRCAMPAIGNID).getTextContent()));
            keyword.setAdGroupId(Long.parseLong(cachedXPathAPI.selectSingleNode(n, "../" + SystemGlobal.YAHOO_ATTRADGROUPID).getTextContent()));
        }

    On the first run of the script, all 40,000 runs of this piece of code have nl.nextNode() == null, and everything runs quite quickly. However, on the following runs, when nl.nextNode() != null, things slow down a lot - this takes around an additional 40 minutes to run (whereas the first run takes maybe 1 minute). Oh, and the doc is constructed like so:

        InputSource in = new InputSource(new FileInputStream(filename));
        DocumentBuilderFactory dfactory = DocumentBuilderFactory.newInstance();
        dfactory.setNamespaceAware(true);
        doc = dfactory.newDocumentBuilder().parse(in);

    I tried including the following lines:

        reportEvaluator = new XPathEvaluatorImpl(reportDoc);
        reportResolver = reportEvaluator.createNSResolver(reportDoc);

    and, rather than creating a NodeIterator, instead creating an XPathResult:

        XPathResult result = (XPathResult)reportEvaluator.evaluate(query, doc.getElementsByTagName(SystemGlobal.YAHOO_KEYWORDSROOT).item(0), reportResolver, XPathResult.UNORDERED_NODE_ITERATOR_TYPE, null);

    however this ran even slower. Is there a way in which I can speed up the running of this script? I have seen references to precompiled queries, though I haven't seen many actual details. Also, as seen in the code, I am using CachedXPathAPI, though the benefit for this case is not so great. Any help is much appreciated! Chris Allan
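    On the precompiled-queries point, here is a minimal sketch using the standard javax.xml.xpath API (JDK 5+), which can compile the expression once and bind the key phrase through a variable resolver instead of rebuilding the query string 40,000 times. This is not the questioner's code: the element and attribute names below are stand-ins for the SystemGlobal constants.

        import javax.xml.namespace.QName;
        import javax.xml.xpath.*;
        import org.w3c.dom.Document;
        import org.w3c.dom.NodeList;

        public class KeywordLookup {
            private final XPathExpression expr;
            private String currentKeyword; // value bound to $keyword at evaluation time

            public KeywordLookup() throws XPathExpressionException {
                XPath xpath = XPathFactory.newInstance().newXPath();
                // Resolve $keyword to whatever currentKeyword holds right now
                xpath.setXPathVariableResolver(new XPathVariableResolver() {
                    public Object resolveVariable(QName name) {
                        return "keyword".equals(name.getLocalPart()) ? currentKeyword : null;
                    }
                });
                // Compiled once, reused for every lookup
                expr = xpath.compile("//keywords/keyword[keyPhrase=$keyword]");
            }

            public NodeList find(Document doc, String keywordText) throws XPathExpressionException {
                currentKeyword = keywordText;
                return (NodeList) expr.evaluate(doc, XPathConstants.NODESET);
            }
        }

    That said, with 40,000 lookups against one document, a single pass that builds a HashMap from key phrase to node, followed by plain map lookups, would likely beat any XPath variant.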

    Read the article

  • Should I use Entity Framework instead of raw ADO.NET?

    - by user110182
    I am new to CSLA and Entity Framework. I am creating a new CSLA / Silverlight application that will replace a 12-year-old Win32 C++ system. The old system uses a custom DCOM business object library and uses ODBC to get to SQL Server. The new system will not immediately replace the old system -- they must coexist against the same database for years to come. At first I thought EF was the way to go, since it is the latest and greatest. After making a small EF model and only 2 CSLA editable root objects (I will eventually have hundreds of objects, as my DB has 800+ tables), I am seriously questioning the use of EF. In the current system I often need to do fine-grained performance tuning of the queries, which I can do because of 100% control of the generated SQL. But it seems that in EF so much happens behind the scenes that I lose that control. Articles like http://toomanylayers.blogspot.com/2009/01/entity-framework-and-linq-to-sql.html don't help my impression of EF. People seem to like EF because of LINQ to EF, but since my criteria are passed between client and server as a criteria object, it seems like I could build queries just as easily without LINQ. I understand that in WCF RIA there is query projection (or something like that), where I can do client-side LINQ which moves to the server before translation into actual SQL, so in that case I can see the benefit of EF, but not in CSLA. If I use raw ADO.NET, will I regret my decision 5 years from now? Has anyone else made this choice recently, and which way did you go?

    Read the article

  • How do you prove your cleverness and programming skills?

    - by Sebi
    Lately, there have been a lot of questions related to career planning: how to decide which languages to learn, how to learn new languages, and so on. But I was thinking about something else: how will you prove later that you really learned something? OK, you can mention it in your application for a job or in the interview, but by just stating that I was learning e.g. C++ at home, I don't think this will really be very successful. So some suggested creating something (homepage, application, whatsoever) to prove that you can also use these skills in practice. But still I'm not really sure if this will provide any benefit, because it would require a very special project to show all your skills, and I don't think you can easily invent such a project (that would also be useful). Others suggested solving, for example, the Project Euler questions, but still I'm not sure how this will be useful in career planning. Are you going to mention at your job interview that you solved all these questions in the company's favorite programming language?? :D I can't imagine. The reason why I'm asking is that I have some spare time and I would really like to learn some new programming languages (C++ and/or Python, if this matters), and I'm looking for a smart way to do it while ensuring that it will be useful in my future career. (There are 3-4 companies I'd like to try to get a job at, and I know all of them are using mainly C++/Python...)

    Read the article

  • Async.Parallel or Array.Parallel.Map?

    - by gurteen2
    Hello - I'm trying to implement a pattern I read on Don Syme's blog (http://blogs.msdn.com/dsyme/archive/2010/01/09/async-and-parallel-design-patterns-in-f-parallelizing-cpu-and-i-o-computations.aspx), which suggests that there are opportunities for massive performance improvements from leveraging asynchronous I/O. I am currently trying to take a piece of code that "works" one way, using Array.Parallel.map, and see if I can somehow achieve the same result using Async.Parallel, but I really don't understand Async.Parallel and cannot get anything to work. I have a piece of code (simplified below to illustrate the point) that successfully retrieves an array of data for one cusip (a price series, for example):

        let getStockData cusip =
            let D = DataProvider()
            let arr = D.GetPriceSeries(cusip)
            arr

        let data = Array.Parallel.map (fun x -> getStockData x) stockCusips

    So this approach constructs an array of arrays, by making a connection over the internet to my data vendor for each stock (which could be as many as 3000), and returns me an array of arrays (1 per stock, with a price series for each one). I admittedly don't understand what goes on underneath Array.Parallel.map, but am wondering if this is a scenario where there are resources wasted under the hood, and it actually could be faster using asynchronous I/O? So to test this out, I have attempted to write this function using asyncs, and I think the function below follows the pattern in Don Syme's article using the URLs, but it won't compile with "let!":

        let getStockDataAsync cusip =
            async {
                let D = DataProvider()
                let! arr = D.GetData(cusip)
                return arr
            }

    The error I get is:

        This expression was expected to have type Async<'a> but here has type obj

    It compiles fine with "let" instead of "let!", but I had thought the whole point was that you need the exclamation point in order for the command to run without blocking a thread. So the first question really is: what's wrong with my syntax above, in getStockDataAsync? And then at a higher level, can anyone offer some additional insight about asynchronous I/O and whether the scenario I have presented would benefit from it, making it potentially much, much faster than Array.Parallel.map? Thanks so much.

    Read the article

  • Question regarding the iPhone In App Purchase capabilities

    - by Wesley
    Hi everyone. I am working on releasing an app that will greatly benefit from using the in-app purchase model. The app is a sort of book viewer, and the content I would like to make available for purchase will be more books in various languages. Each book is stored in SQLite format, in separate .db files. Now, the way that my developer has set it all up is that he added an entry at the bottom of the Info.plist file called "databases". Inside that databases section, I can type in 'es' as the key and the name of the Spanish database as the value, and it populates a UITableView in Spanish, using NSLocale. So it is very easy to set up and implement different databases, which is great, but now I am confused about how I can implement my in-app purchase model. Sorry about the long-winded intro; here is my question: Is it possible to have an in-app purchase add a line to my Info.plist file? Or if not, is it possible to have the purchase reveal a new and updated plist file altogether that I would have already set up correctly? Any help here would be appreciated. Thanks

    Read the article

  • What do I do about recurring billing?

    - by phidah
    This might be a subjective question, but I'll give it a go. There are already a number of questions on SO that revolve around subscription billing management. I am currently working on a SaaS solution that will require a fully automated billing system. What I am not looking for when asking this question is advice on implementing against a specific payment gateway or stuff like that. Instead, I'd like advice on what kind of approach to take. The functionality that I need is a system that can handle upgrades, downgrades, recurring billing, cancellations, etc. Initially for one product only, but over time it might be a requirement that the system can handle multiple products (by products I mean fundamentally different products, not different variations of the same product). As I see it, there are a number of possible approaches when you need a solution like this:

        1. Code a billing server yourself that supports this and is decoupled from each product, so that it can handle multiple independent products.
        2. Use a hosted solution like Recurly, Chargify, Spreedly or CheddarGetter. The advantage of a hosted solution is obviously that you don't need PCI certification, the concern is outsourced, and it is a lot faster to get up and running. These advantages come at a cost, however: the most important support function for your product - i.e. the billing - is not in your control, and you have less control and flexibility.

    What would you do? If we look beyond the PCI requirements, I would definitely prefer to have a system coded in-house that could do this kind of job. On the other hand, I've heard from numerous sources that coding a system like this is a pain. Any advice is highly appreciated. Also, if you advise coding it yourself, any experiences on how to do it, or any open source projects (no matter the language; what I'm after is not the code but the structure) that I can benefit from, would really mean a lot. Thanks in advance for your inputs! :-)
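    For the upgrade/downgrade part specifically, the core of most in-house implementations is a small proration calculation. Below is a hedged sketch of one common convention (charge and credit proportional to the days left in the current period); the names and the convention itself are assumptions for illustration, not a description of any particular gateway:

        import java.math.BigDecimal;
        import java.math.RoundingMode;

        public class Proration {
            // Credit the unused fraction of the old plan and charge the same
            // fraction of the new plan, e.g. on a mid-cycle upgrade.
            static BigDecimal prorate(BigDecimal oldPrice, BigDecimal newPrice,
                                      int daysLeft, int daysInPeriod) {
                BigDecimal fraction = BigDecimal.valueOf(daysLeft)
                        .divide(BigDecimal.valueOf(daysInPeriod), 10, RoundingMode.HALF_UP);
                // Amount due now; a negative result is a credit (downgrade)
                return newPrice.subtract(oldPrice).multiply(fraction)
                        .setScale(2, RoundingMode.HALF_UP);
            }

            public static void main(String[] args) {
                // 20 of 30 days left, $10 -> $25 upgrade: (25 - 10) * 20/30 = $10.00 due
                System.out.println(prorate(new BigDecimal("10"), new BigDecimal("25"), 20, 30));
            }
        }

    Getting edge cases like this right, across refunds, failed charges and dunning, is most of the pain the in-house route entails.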

    Read the article

  • What is the difference between the Lego Mindstorms 1.0 and 2.0

    - by Gavimoss
    I am thinking about buying a Mindstorms kit (I don't currently own one, but I have used 1.0 at university) and I am a bit unsure as to the benefits of 2.0 over 1.0. I have seen other posts on the subject, all saying generally that 2.0 is better, but I have some more specific questions about this that I can't seem to find any answers to. Apart from the different Lego pieces and sensors you get with the 2.0 kit, is there any difference between a 1.0 NXT brick and a 2.0 NXT brick? From what I can determine from other sources, they are the same except for the firmware installed. Am I right in saying I could buy a 1.0 kit and install the same firmware that comes with the 2.0 kit, and the bricks would be the same? Or is the 1.0 brick not compatible with the 2.0 firmware? Also, I plan to use a different programming language like C or Java, so I need to install specific firmware for that anyway, like librcx or leJOS, right? So if using C or Java, as opposed to the provided Lego coding methods, it doesn't matter if I am using 1.0 or 2.0 (except for the Lego pieces in the kit) - am I right? In a nutshell: assuming I am using librcx or leJOS and I don't care about the sensors and Lego pieces included, is there any benefit to buying a 2.0 kit over the 1.0 kit? Thanks in advance

    Read the article

  • Building a Drupal Newsletter Module for handling Newsletter Articles

    - by Michael T. Smith
    We're building a module for generating HTML for email newsletters. We've looked into using a few other modules (SimpleNews and MailChimp, among others), but due to various requirements, it'll be easier and better for us to build a custom solution. Being a new Drupal developer, I'm a bit worried about handling this in a "non-Drupal" way. That being said, my plan is to set up a vocabulary with Newsletters as a term and the actual Newsletters as sub-terms, like so:

        Newsletters (term)
            - Newsletter A (sub-term)
            - Newsletter B (sub-term)

    This has the added benefit of being able to organize where articles were published (besides just on the site). The question, though, is how to handle the different newsletter issues. I could go another level deeper in the vocabulary, like so:

        Newsletters (term)
            - Newsletter A (sub-term)
                - Issue - 2010-03-01
                - Issue - 2010-03-02
            - Newsletter B (sub-term)
                - Issue - 2010-03-01
                - Issue - 2010-03-08

    But I'm wondering if this is adding a bit too much complexity. Once I have this taxonomy set up, when the user goes to add a new newsletter it would also create a node (content type: newsletter), and when he/she goes to add a new issue, it would also create a node (content type: issue). Those would then be the landing pages for that content. So, the question is: is there a better way of handling this structure? Is this a Drupal-like solution?

    Read the article

  • C# using namespace directive in nested namespaces

    - by MoSlo
    Right, I've usually used 'using' directives as follows:

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Text;

        namespace AwesomeLib {
            //awesome award winning class declarations making use of Linq
        }

    I've recently seen examples such as:

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Text;

        namespace AwesomeLib {
            //awesome award winning class declarations making use of Linq

            namespace DataLibrary {
                using System.Data;

                //Data access layers and whatnot
            }
        }

    Granted, I understand that I can put USING inside of my namespace declaration. Such a thing makes sense to me if your namespaces are in the same root (they stay organized):

        using System;
        namespace 1 {}
        namespace 2 {
            using System.Data;
        }

    But what about nested namespaces? Personally, I would leave all USING declarations at the top, where you can find them easily. Instead, it looks like they're being spread all over the source file. Is there a benefit to USING directives being used this way in nested namespaces? Such as for memory management or the JIT compiler?

    Read the article

  • Assistance with building an inverted-index

    - by tipu
    It's part of an information retrieval thing I'm doing for school. The plan is to create a hashmap of words, using the first two letters of the word as a key and any words with those two letters saved as a string value. So:

        hashmap["ba"] = "bad barley base"

    Once I'm done tokenizing a line, I take that hashmap, serialize it, and append it to the text file named after the key. The idea is that if I spread my data over hundreds of files, I'll lessen the time it takes to fulfill a search by lessening the density of each file. The problem I am running into is that when I'm making 100+ files in each run, it happens to choke on creating a few files for whatever reason, and so those entries are empty. Is there any way to make this more efficient? Is it worth continuing this, or should I abandon it? I'd like to mention I'm using PHP. The two languages I know relatively intimately are PHP and Java. I chose PHP because the front end will be very simple to do, and I will be able to add features like autocompletion/suggested search without a problem. I also see no benefit in using Java. Any help is appreciated, thanks.
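    Since the poster also knows Java, here is a minimal sketch of the bucketing idea in Java (not the poster's PHP code; file naming follows the description above). Buffering a whole batch per prefix and opening each file once per flush, rather than once per line, is one way to avoid the file-creation chokes:

        import java.io.IOException;
        import java.nio.file.*;
        import java.util.*;

        public class PrefixIndex {
            // Group words by their first two letters, then append each group
            // to a file named after the prefix, e.g. "ba" -> ba.txt
            public static void flush(List<String> words) throws IOException {
                Map<String, StringBuilder> buckets = new HashMap<>();
                for (String w : words) {
                    if (w.length() < 2) continue;
                    String key = w.substring(0, 2).toLowerCase();
                    buckets.computeIfAbsent(key, k -> new StringBuilder()).append(w).append(' ');
                }
                for (Map.Entry<String, StringBuilder> e : buckets.entrySet()) {
                    Path file = Paths.get(e.getKey() + ".txt");
                    Files.write(file, e.getValue().toString().getBytes(),
                            StandardOpenOption.CREATE, StandardOpenOption.APPEND);
                }
            }
        }

    The same batching applies in PHP: accumulate per-prefix strings in the array and call file_put_contents with FILE_APPEND once per prefix per batch.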

    Read the article

  • Choosing random numbers efficiently

    - by Frederik Wordenskjold
    I have a method which uses random samples to approximate a calculation. This method is called millions of times, so it is very important that the process of choosing the random numbers is efficient. I'm not sure how fast Java's Random().nextInt really is, but my program does not seem to benefit as much as I would like it to. When choosing the random numbers, I do the following (in semi-pseudocode):

        // Repeat this 300000 times
        Set set = new Set();
        while(set.length != 5)
            set.add(randomNumber(MIN, MAX));

    Now, this obviously has a bad worst-case running time, as the random function can in theory add duplicate numbers for an eternity, thus staying in the while loop forever. However, the numbers are chosen from {0..45}, so a duplicate value is for the most part unlikely. When I use the above method, it is only 40% faster than my other method, which does not approximate but yields the correct result. This is run ~1 million times, so I was expecting this new method to be at least 50% faster. Do you have any suggestions for a faster method? Or maybe you know of a more efficient way of generating a set of random numbers.
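    One common alternative, sketched below under the assumption that the goal is 5 distinct values from {0..45}: a partial Fisher-Yates shuffle draws the sample in exactly 5 swaps, with no rejection loop and no Set overhead (plain Java; names are illustrative):

        import java.util.Arrays;
        import java.util.Random;

        public class DistinctSample {
            private static final Random RND = new Random();
            private static final int[] POOL = new int[46]; // values 0..45
            static {
                for (int i = 0; i < POOL.length; i++) POOL[i] = i;
            }

            // Swap a random element from the untouched tail into each of the
            // first k slots; the first k entries are then a uniform k-subset.
            static int[] sample(int k) {
                for (int i = 0; i < k; i++) {
                    int j = i + RND.nextInt(POOL.length - i);
                    int tmp = POOL[i]; POOL[i] = POOL[j]; POOL[j] = tmp;
                }
                return Arrays.copyOf(POOL, k);
            }

            public static void main(String[] args) {
                System.out.println(Arrays.toString(sample(5)));
            }
        }

    Because POOL always remains a permutation of 0..45, the same array can be reused across all 300,000 iterations, so the per-call cost is just 5 nextInt calls and 5 swaps.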

    Read the article

< Previous Page | 160 161 162 163 164 165 166 167 168 169 170 171  | Next Page >