Search Results

Search found 19365 results on 775 pages for 'machine vision'.


  • Why is this an invalid Turing machine?

    - by Danny King
    Whilst doing exam revision I am having trouble answering the following question from the book "An Introduction to the Theory of Computation" by Sipser. Unfortunately there's no solution to this question in the book. Explain why the following is not a legitimate Turing machine:

        M = { The input is a polynomial p over variables x1, ..., xn.
              1. Try all possible settings of x1, ..., xn to integer values.
              2. Evaluate p on all of these settings.
              3. If any of these settings evaluates to 0, accept; otherwise, reject. }

    This is driving me crazy! I suspect it is because the set of integers is infinite? Does this somehow exceed the alphabet's allowable size? Thanks!
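
    A minimal Python sketch (mine, not from the book) of the one-variable case shows where the description breaks down: "try all possible settings" over the integers is not a stage that can ever be completed, so the "otherwise reject" branch can never be reached.

        # Sketch: enumerate integer candidates 0, 1, -1, 2, -2, ... forever.
        def search_for_root(p):
            """Return True if p has an integer root; otherwise never return."""
            x = 0
            while True:
                for candidate in (x, -x):
                    if p(candidate) == 0:
                        return True   # "accept" is reachable...
                x += 1
                # ...but we are never done trying *all* integers, so the
                # "otherwise reject" step of M can never be taken.

        # p(x) = x**2 + 1 has no integer root: this call would loop forever.
        # search_for_root(lambda x: x**2 + 1)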

    Read the article

  • Build Machine Configuration Recommendations?

    - by IPX Ares
    We have a new build machine to start using for our programming team. We are still trying to figure out how we want to organize everything to get the best configuration for building EXEs and DLLs. We are using VB6, VB.NET 2005, and VSS 2005. We were thinking of setting up working folders for each project, release, and support ticket. Does anyone have experience with a similar setup? What were your likes/dislikes? Any recommendations (new VSS IDs, folder configuration, setting working folders, updating/building files)?

    Read the article

  • How to engineer features for machine learning

    - by Ivo Danihelka
    Do you have any advice or reading on how to engineer features for a machine learning task? Good input features are important even for a neural network. The chosen features will affect the needed number of hidden neurons and the needed number of training examples. The following is an example problem, but I'm interested in feature engineering in general. A motivating example: what would be a good input when looking at a puzzle (e.g., 15-puzzle or Sokoban)? Would it be possible to recognize which of two states is closer to the goal?
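
    As a concrete (hypothetical) illustration for the 15-puzzle case: two classic hand-engineered features are the misplaced-tile count and the total Manhattan distance, both of which correlate with how far a state is from the goal and make a far more informative input than the raw board.

        # Hypothetical feature extractor for a 4x4 fifteen-puzzle state.
        # board: tuple of 16 ints, goal order 1..15 with 0 for the blank.
        def puzzle_features(board):
            misplaced = sum(1 for i, t in enumerate(board)
                            if t != 0 and t != i + 1)
            manhattan = sum(abs(i // 4 - (t - 1) // 4) +
                            abs(i % 4 - (t - 1) % 4)
                            for i, t in enumerate(board) if t != 0)
            return [misplaced, manhattan]

        goal = tuple(range(1, 16)) + (0,)
        print(puzzle_features(goal))   # [0, 0] -- the goal state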

    Read the article

  • How to quantify your "slow" development machine?

    - by lance
    (Please provide the question this one duplicates; I'm disappointed I couldn't find it.) My development machine is "slow". I wait on it "a lot". I've been asked by decision makers who want to help to fairly and accurately measure that time. How do you quantify the amount of time you spend waiting on the computer (during compiles, waiting for apps to open every day, etc.)? Is there software which effectively reports on this sort of thing? Is there an OS metric (I/O something-something, pagefile swapping frequency, etc.) that captures and communicates this particularly well? Is there some sort of benchmark you'd recommend testing against?
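
    One low-tech way to get defensible numbers (a sketch, not a known tool): wrap the commands you wait on in a timer that appends to a log, then total the log per day. The file names and usage below are hypothetical.

        # timed.py -- run a command, log how long you waited for it.
        import subprocess, sys, time

        def timed(cmd, log_path="wait_log.csv"):
            start = time.time()
            result = subprocess.run(cmd)
            with open(log_path, "a") as log:
                log.write("%s,%.1f\n" % (" ".join(cmd), time.time() - start))
            return result.returncode

        if __name__ == "__main__":
            # e.g.: python timed.py msbuild MySolution.sln
            sys.exit(timed(sys.argv[1:]))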

    Read the article

  • Opening Time Machine OS X backup files on Windows 7?

    - by user39279
    Hi, I have Time Machine backups on a Western Digital external HD. The Time Machine backups were done on my now-dead Mac G4 running OS X Leopard. I am waiting on a new iMac, but in the meantime I need to access some of my backup files urgently. I have a laptop running Windows 7, so is there any safe way of accessing some of the files from the Time Machine backup on my laptop and still be able to do a full restore when the iMac arrives? Thanks.

    Read the article

  • How do I restore a non-system hard drive using Time Machine under OSX?

    - by richardtallent
    I dropped one of the external drives on my Mac Pro and it started making noises... so I bought a replacement drive. No biggie, that's why I have Time Machine, right? So now that I have the new drive up and initialized, how do I actually restore the drive from backup? Time Machine is intuitive when it comes to restoring the system drive or restoring individual folders/files on the same literal device, but I'm a bit stuck on how to properly restore an entire drive that is not the boot drive. I saw one suggestion to use the same volume name as the old drive and then go into Time Machine. I haven't tried that since the information is unconfirmed. For now, I just went to the Time Machine volume, found the latest backup folder for that volume, and I'm copying the files via Finder. Of course, I expect this to work just fine, but I feel like I'm missing something if that's the "proper" way to do this.

    Read the article

  • Move virtual machine hard disk to a separate physical hard disk for better performance?

    - by joeeoj
    I have a dual-core machine with the host OS and many guest virtual OSes. Although I have 8GB of RAM, I notice a slowdown when I turn a virtual machine on (even one that takes only 1GB of RAM). I was told that I should move the virtual machine's hard disk file to a separate (another) physical hard drive in my PC to get better performance. This way the heads of one hard disk would not have to jump between the virtual OS and the host OS, as each drive would have its own heads to serve its OS: hard drive 1 for the host OS and hard drive 2 for the guest OS. Is this true? Should I get another hard disk only for virtual machine hard disk files?

    Read the article

  • How to reuse a backup in Time Machine on Snow Leopard after a logic board change, after choosing the wrong option

    - by kmiffy
    After my logic board was replaced, I connected my laptop back to my network, and Time Machine gave me a popup, as shown in this thread: http://superuser.com/questions/78068/recycle-time-machine-for-new-machine/78264#78264 I misread the question and clicked on "Create New Backup" when I should have clicked on "Reuse Backup" to connect to my old backup file. How can I trigger that popup again? Turning Time Machine on and off does not work, and the instructions on forums to fix it via the terminal don't work because Snow Leopard is missing the fsaclctl command (and I'm also not familiar with terminal commands). Thanks.

    Read the article

  • SSH'ing to my machine attaches an existing screen session and detaching it ends my SSH session

    - by jsplaine
    SSH'ing to my Ubuntu machine automatically attaches an existing screen session, and detaching ends my SSH session. What I want is to be able to SSH to my Ubuntu machine without automatically attaching to the screen session on that machine. Or at least, I should be able to detach from that screen session without ending my SSH session... right? It doesn't seem to work. This is so that I can attempt to run firefox --display <whichever one is being forwarded to my ssh session>, so that I can debug a website that the remote Ubuntu machine is running (via localhost). The best-case scenario is that I could just remote-desktop to my Ubuntu machine. But it's not set up to allow remote desktop, and I see no way to set that up remotely via shell/SSH. Also, it sounds like you need a static IP in order to remote desktop to an Ubuntu machine (so I keep reading).

    Read the article

  • Please help to bring back power to my machine

    - by Acess Denied
    I have a Samsung N150 Plus netbook that I have been using for a while now. I left it on and plugged into a wall outlet and went to bed. I dual-boot Ubuntu and Windows 7. I tried to update Windows 7 to SP1 and dozed off. I woke up and saw the machine had been booted into Ubuntu and logged in as guest, which I take to mean one of my roommates tried to use the machine, though they have all denied it. I tried to reboot into Windows, and now there appears to be no CPU, hard disk, or CPU fan activity. Only one LED seems to come on when I plug it in: the LED that indicates the machine is powered on glows steadily. I really can't afford to buy a new machine now, and I need this one to complete my final project for my last year of school. Help, please.

    Read the article

  • Machine Learning Algorithm for Predicting Order of Events?

    - by user213060
    Simple machine learning question. Probably numerous ways to solve this: There is an infinite stream of 4 possible events: 'event_1', 'event_2', 'event_3', 'event_4'. The events do not come in completely random order. We will assume that there are some complex patterns to the order in which most events come in, and that the rest of the events are just random. We do not know the patterns ahead of time, though. After each event is received, I want to predict what the next event will be, based on the order in which events have come in in the past. The predictor will then be told what the next event actually was:

        Predictor = new_predictor()
        prev_event = False
        while True:
            event = get_event()
            if prev_event is not False:
                Predictor.last_event_was(prev_event)
            predicted_event = Predictor.predict_next_event(event)
            prev_event = event

    The question arises of how long a history the predictor should maintain, since maintaining infinite history will not be possible. I'll leave this up to you to answer. The answer can't be infinite, though, for practicality. So I believe that the predictions will have to be done with some kind of rolling history. Adding a new event and expiring an old event should therefore be rather efficient, and not require rebuilding the entire predictor model, for example. Specific code, instead of research papers, would add immense value to your responses. Python or C libraries are nice, but anything will do. Thanks! Update: And what if more than one event can happen simultaneously on each round? Does that change the solution?
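
    Since specific code was requested, here is a minimal sketch (one approach of many, written to match the calling convention above): an order-k Markov predictor over a bounded window of (context, event) pairs, so adding a new event and expiring the oldest are both cheap and the model never needs rebuilding.

        from collections import Counter, defaultdict, deque

        class Predictor:
            def __init__(self, k=3, window=10000):
                self.history = deque(maxlen=k)        # the k most recent events
                self.window = deque(maxlen=window)    # rolling (context, event) pairs
                self.counts = defaultdict(Counter)    # context -> next-event counts

            def last_event_was(self, event):
                context = tuple(self.history)
                if len(self.window) == self.window.maxlen:
                    old_ctx, old_ev = self.window[0]  # pair about to expire
                    self.counts[old_ctx][old_ev] -= 1
                self.window.append((context, event))
                self.counts[context][event] += 1
                self.history.append(event)

            def predict_next_event(self, event):
                # Context = the k most recent events, including the one just seen.
                context = (tuple(self.history) + (event,))[-self.history.maxlen:]
                candidates = self.counts.get(context)
                if candidates:
                    best, count = candidates.most_common(1)[0]
                    if count > 0:
                        return best
                overall = Counter(ev for _, ev in self.window)  # fallback: global mode
                return overall.most_common(1)[0][0] if overall else None

    For the update about simultaneous events, one common trick is to treat each distinct set of events that can occur in a round as a single symbol (a frozenset, say); the same machinery then applies unchanged, at the cost of a larger alphabet.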

    Read the article

  • When to choose which machine learning classifier?

    - by LM
    Suppose I'm working on some classification problem. (Fraud detection and comment spam are two problems I'm working on right now, but I'm curious about any classification task in general.) How do I know which classifier I should use (decision tree, SVM, Bayesian, logistic regression, etc.)? In which cases is one of them the "natural" first choice, and what are the principles for choosing that one? Examples of the type of answers I'm looking for (from Manning et al.'s "Introduction to Information Retrieval" book: http://nlp.stanford.edu/IR-book/html/htmledition/choosing-what-kind-of-classifier-to-use-1.html): (a) If your data is labeled, but you only have a limited amount, you should use a classifier with high bias (for example, Naive Bayes). [I'm guessing this is because a higher-bias classifier will have lower variance, which is good because of the small amount of data.] (b) If you have a ton of data, then the classifier doesn't really matter so much, so you should probably just choose a classifier with good scalability. What are other guidelines? Even answers like "if you'll have to explain your model to some upper-management person, then maybe you should use a decision tree, since the decision rules are fairly transparent" are good. I care less about implementation/library issues, though. Also, for a somewhat separate question: besides standard Bayesian classifiers, are there "standard state-of-the-art" methods for comment spam detection (as opposed to email spam)? [Not sure if Stack Overflow is the best place to ask this question, since it's more machine learning than actual programming -- if not, any suggestions for where else?]
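
    When no rule of thumb settles it, a practical default is to cross-validate a handful of standard classifiers and let the data decide. A sketch, assuming scikit-learn (any ML library with cross-validation would do; the synthetic dataset stands in for your labeled data):

        from sklearn.datasets import make_classification
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import cross_val_score
        from sklearn.naive_bayes import GaussianNB
        from sklearn.svm import LinearSVC
        from sklearn.tree import DecisionTreeClassifier

        X, y = make_classification(n_samples=500, n_features=20, random_state=0)
        for clf in (GaussianNB(), LogisticRegression(max_iter=1000),
                    LinearSVC(), DecisionTreeClassifier()):
            scores = cross_val_score(clf, X, y, cv=5)   # 5-fold accuracy
            print("%-24s %.3f" % (type(clf).__name__, scores.mean()))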

    Read the article

  • Doing coding in Linux through a virtual machine on Windows vs. partitioning

    - by ihaveitnow
    I already have experience with setting up virtual machines, running them, and other minor tasks. I'm a gamer, so I won't get rid of Windows (for now at least...), but I do want to be a great programmer and to be involved with the open-source community. I'd like to know if it's a good idea to do my programming in Linux through a virtual machine, vs. giving it a partitioned section of the HDD. I'd like to know about the performance pros and cons, and about functionality. All responses are appreciated, thanks in advance. The type of programming I intend to dive into: Android dev, web dev, desktop dev... more Android and web right now, though. So I'm looking at C#, C, C++, Java, PHP, HTML, MySQL... off the top of the dome. I do web design as well, so Dreamweaver is added as an "essential". But I'm sure I can create Dreamweaver files and upload them to the server after programming in Linux... right? And any info on IDEs in Linux for the above-mentioned languages is appreciated, but I would prefer going the coding route and understanding the essence of what's happening "under the covers". Thanks to all for reading, I appreciate it. Hope this isn't confusing :S

    Read the article

  • Agile: User Stories for Machine Learning Project?

    - by benjismith
    I've just finished a prototype implementation of a supervised learning algorithm that automatically assigns categorical tags to all the items in our company database (roughly 5 million items). The results look good, and I've been given the go-ahead to plan the production implementation project. I've done this kind of work before, so I know how the functional components of the software fit together. I need a collection of web crawlers to fetch data. I need to extract features from the crawled documents. Those documents need to be segregated into a "training set" and a "classification set", and feature vectors need to be extracted from each document. Those feature vectors are self-organized into clusters, and the clusters are passed through a series of rebalancing operations. Etc., etc.

    So I put together a plan, with about 30 unique development/deployment tasks, each with time estimates. The first stage of development -- ignoring some advanced features that we'd like to have in the long term but that aren't high enough priority to make it into the development schedule yet -- is slated for about two months' worth of work. (Keep in mind that I already have a working prototype, so the final implementation is significantly simpler than if the project were starting from scratch.) My manager said the plan looked good to him, but he asked if I could reorganize the tasks into user stories, for a few reasons: (1) our project management software is totally organized around user stories; (2) all of our scheduling is based on fitting entire user stories into sprints, rather than individually scheduling tasks; (3) other teams -- like the web developers -- have made great use of agile methodologies, and they've benefited from modelling all the software features as user stories.

    So I created a user story at the top level of the project: "As a user of the system, I want to search for items by category, so that I can easily find the most relevant items within a huge, complex database." Or maybe a better top-level story for this feature would be: "As a content editor, I want to automatically create categorical designations for the items in our database, so that customers can easily find high-value data within our huge, complex database."

    But that's not the real problem. The tricky part, for me, is figuring out how to create subordinate user stories for the rest of the machine learning architecture. Case in point: I know that the algorithm requires two major architectural subdivisions, (A) training and (B) classification, and I know that the training portion of the architecture requires construction of a cluster-space. All the agile development literature I've read seems to indicate that a user story should be the "smallest possible implementation that provides any business value", and that makes a lot of sense when designing a piece of end-user software. Start small, and then incrementally add value when users demand additional functionality. But a cluster-space, in and of itself, provides zero business value. Nor does a crawler, or a feature extractor. There's no business value (not for the end user, nor for any of the roles internal to the company) in a partial system. A trained cluster-space is only possible with the crawler and feature extractor, and only relevant if we also develop an accompanying classifier.

    I suppose it would be possible to create user stories where the subordinate components of the system act as the users in the stories: "As a supervised-learning cluster-space construction routine, I want to consume data from a feature extractor, so that I can exist." But that seems really weird. What benefit does it provide me as the developer (or our users, or any other stakeholders, for that matter) to model my user stories like that? Although the main story can be easily divided along architectural-component boundaries (crawler, trainer, classifier, etc.), I can't think of any useful decomposition from a user's perspective. What do you guys think? How do you plan agile user stories for sophisticated, indivisible, non-user-facing components?

    Read the article

  • Windows Azure Virtual Machine Readiness and Capacity Assessment for SQL Server

    - by SQLOS Team
    Windows Azure Virtual Machine Readiness and Capacity Assessment for Windows Server Machines Running SQL Server

    With the release of MAP Toolkit 8.0 Beta, we have added a new scenario to assess your Windows Azure Virtual Machine readiness. The MAP 8.0 Beta performs a comprehensive assessment of Windows Servers running SQL Server to determine your level of readiness to migrate an on-premise physical or virtual machine to Windows Azure Virtual Machines. The MAP Toolkit then offers suggested changes to prepare the machines for migration, such as upgrading the operating system or SQL Server. MAP Toolkit 8.0 Beta is available for download here.

    Your participation and feedback are very important to make the MAP Toolkit work better for you. We encourage you to participate in the beta program and provide your feedback at [email protected] or through one of our surveys.

    Now, let's walk through the MAP Toolkit tasks for completing the Windows Azure Virtual Machine assessment and capacity planning. The tasks include the following: perform an inventory; view the Windows Azure VM Readiness results and report; collect performance data to determine VM sizing; and view the Windows Azure Capacity results and report.

    Perform an inventory:
    1. To perform an inventory against a single machine or across a complete environment, choose Perform an Inventory to launch the Inventory and Assessment Wizard.
    2. After the Inventory and Assessment Wizard launches, select either the Windows computers or SQL Server scenario to inventory Windows machines. HINT: If you don't care about completely inventorying a machine, just select the SQL Server scenario. Click Next to continue.
    3. On the Discovery Methods page, select how you want to discover computers and then click Next to continue. The discovery methods are:
       - Use Active Directory Domain Services: query a domain controller via the Lightweight Directory Access Protocol (LDAP) and select computers in all or specific domains, containers, or OUs. Use this method if all computers and devices are in AD DS.
       - Windows networking protocols: use the WIN32 LAN Manager application programming interfaces to query the Computer Browser service for computers in workgroups and Windows NT 4.0-based domains. If the computers on the network are not joined to an Active Directory domain, use only this option to find computers.
       - System Center Configuration Manager (SCCM): inventory computers managed by SCCM. You need to provide credentials to the SCCM server in order to inventory the managed computers. When you select this option, the MAP Toolkit will query SCCM for a list of computers and then connect to those computers.
       - Scan an IP address range: specify the starting and ending addresses of an IP address range. The wizard will then scan all IP addresses in the range and inventory only those computers. Note: this option can perform poorly if many IP addresses in the range aren't in use.
       - Manually enter computer names and credentials: use this method if you want to inventory a small number of specific computers.
       - Import computer names from a file: create a text file with a list of computer names that will be inventoried.
    4. On the All Computers Credentials page, enter the accounts that have administrator rights to connect to the discovered machines. This does not need to be a domain account, but it does need to be a local administrator. I have entered my domain account, which is an administrator on my local machine. Click Next after one or more accounts have been added. NOTE: The MAP Toolkit primarily uses Windows Management Instrumentation (WMI) to collect hardware, device, and software information from the remote computers. In order for the MAP Toolkit to successfully connect to and inventory computers in your environment, you have to configure your machines for inventory through WMI and also configure your firewall to enable remote access through WMI. The MAP Toolkit also requires remote registry access for certain assessments. In addition to enabling WMI, you need accounts with administrative privileges to access desktops and servers in your environment.
    5. On the Credentials Order page, select the order in which you want the MAP Toolkit to connect to the machine and SQL Server. Generally, just accept the defaults and click Next.
    6. On the Enter Computers Manually page, click Create to pull up a dialog to enter one or more computer names.
    7. On the Summary page, confirm your settings and then click Finish. After clicking Finish, the inventory process will start.

    Windows Azure Readiness results and report: After the inventory has completed, you can review the results under the Database scenario. On the tile, you will see the number of Windows Server machines with SQL Server that were analyzed, the number of machines that are ready to move without changes, and the number of machines that require further changes. If you click this Azure VM Readiness tile, you will see additional details and can generate the Windows Azure VM Readiness Report. After the report is generated, select View | Saved Reports and Proposals to view the location of the report. Open the WindowsAzureVMReadiness* report in Excel. On the Windows tab, you can see the results of the assessment. This report has a column each for the operating system and the SQL Server assessment, and provides a recommendation on how to resolve any component that is not supported.

    Collect performance data: Launch the Performance Wizard to collect performance information for the Windows Server machines for which you would like the MAP Toolkit to suggest a Windows Azure VM size.

    Windows Azure Capacity results and report: After the performance metrics are collected, the Azure VM Capacity tile will display the number of virtual machine sizes suggested for the Windows Server and Linux machines that were analyzed. You can then click the Azure VM Capacity tile to see the capacity details and generate the Windows Azure VM Capacity Report. Within this report, you can view the performance data that was collected and the suggested virtual machine sizes.

    MAP Toolkit 8.0 Beta is available for download here. Your participation and feedback are very important to make the MAP Toolkit work better for you. We encourage you to participate in the beta program and provide your feedback at [email protected] or through one of our surveys.

    Useful references: Windows Azure Homepage; How-to guides for Windows Azure Virtual Machines; Provisioning a SQL Server Virtual Machine on Windows Azure; Windows Azure Pricing.

    Peter Saddow, Senior Program Manager, MAP Toolkit Team

    Read the article


  • Subversion vision and roadmap

    - by gbjbaanb
    Recently C. Michael Pilato of the core Subversion team posted a mail to the Subversion dev mailing list suggesting a vision and roadmap for the future of Subversion. Naturally, he wanted as much feedback and response as possible, which is why I'm posting this here: to elicit some suggestions and contributions from you, the users of Subversion. Any comments are welcome, and I shall feed back a synopsis with a link to this question to the dev mailing list. Similarly, I've created a post on ServerFault to get feedback from the administrator side of things too. So, without further ado:

    Vision. The first thing in his "vision statement" is: "Subversion has no future as a DVCS tool. Let's just get that out there. At least two very successful such tools exist already, and to squeeze another horse into that race would be a poor investment of energy and talent." There's no need to suggest distributed features for Subversion. If you want a DVCS, there should be no ill feeling if you migrate to Git, Mercurial or Bazaar. As he says, it's pointless trying to make SVN like them when they already exist, especially when there are different usage patterns that SVN should be targeting. The vision for Subversion is: "Subversion exists to be universally recognized and adopted as an open-source, centralized version control system characterized by its reliability as a safe haven for valuable data; the simplicity of its model and usage; and its ability to support the needs of a wide variety of users and projects, from individuals to large-scale enterprise operations."

    Roadmap. Several ideas were suggested as being "very nice to have" and are offered as the starting point of a future roadmap. These are:
    - Obliterate
    - Shelve/Checkpoint
    - Repository-dictated Configuration
    - Rename Tracking
    - Improved Merging
    - Improved Tree Conflict Handling
    - Enterprise Authentication Mechanisms
    - Forward History Searching
    - Log Message Templates
    If anyone has suggestions to add, or comments on these, the Subversion community would welcome all of them.

    Community. And lastly, there was a call for more people to become involved with Subversion development. As with most OSS projects it can be daunting to join, but there is now a push for more to be done to help. If you feel like you can contribute, please do so.

    Read the article

  • Which components should I invest in for a backup machine?

    - by Senthil
    I am a freelance developer. I have a PC, a laptop, and an old testing and file server machine. I might add one or two more in the future. I want to have an on-site backup machine that can handle backups of ALL these machines: file backups, MySQL backups, backups of a Subversion repository, etc. When building the machine, which components should I invest more in? Examples: the cabinet should have lots of room for expansion; hard disk size should be large (but I guess hard disk speed need not be high?). As for other components, like RAM, PSU, processor, network card, cooling, etc., how much relative importance does each have in a backup machine? Which of these components should be high-end or large, and which ones need not be? Some idea of the load: there will be TBs of data. File backups and Subversion repository backups will be done at least daily; MySQL backups weekly. Assume 3 machines at the moment and somewhere around 10 machines in the future.

    Read the article

  • Implementing a linear, binary SVM (support vector machine)

    - by static_rtti
    I want to implement a simple SVM classifier, in the case of high-dimensional binary data (text), for which I think a simple linear SVM is best. The reason for implementing it myself is basically that I want to learn how it works, so using a library is not what I want. The problem is that most tutorials go up to an equation that can be solved as a "quadratic problem", but they never show an actual algorithm! So could you point me either to a very simple implementation I could study, or (better) to a tutorial that goes all the way to the implementation details? Thanks a lot!
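
    For reference, one of the simplest published algorithms that goes all the way from the optimization problem to code is Pegasos (Shalev-Shwartz et al.): stochastic sub-gradient descent on the hinge loss. A minimal sketch in Python/NumPy (no bias term, labels must be +1/-1; the toy data is invented for the example):

        import numpy as np

        def pegasos(X, y, lam=0.01, epochs=20, seed=0):
            """Train a linear SVM by stochastic sub-gradient descent."""
            rng = np.random.default_rng(seed)
            n, d = X.shape
            w, t = np.zeros(d), 0
            for _ in range(epochs):
                for i in rng.permutation(n):
                    t += 1
                    eta = 1.0 / (lam * t)          # decreasing step size
                    margin = y[i] * X[i].dot(w)
                    w *= 1.0 - eta * lam           # shrink: the regularizer's pull
                    if margin < 1.0:               # hinge loss is active here
                        w += eta * y[i] * X[i]     # sub-gradient step
            return w

        # Toy usage: two linearly separable blobs.
        rng = np.random.default_rng(1)
        X = np.vstack([rng.normal(2, 1, (50, 2)), rng.normal(-2, 1, (50, 2))])
        y = np.array([1] * 50 + [-1] * 50)
        w = pegasos(X, y)
        print((np.sign(X.dot(w)) == y).mean())     # training accuracy, ~1.0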

    Read the article

  • How does a virtual machine work?

    - by Martin
    I've been looking into how programming languages work, and some of them have a so-called virtual machine. I understand that this is some form of emulation of the programming language within another programming language, and that it works the way a compiled language would be executed, with a stack. Did I get that right? With the proviso that I did, what bamboozles me is that many non-compiled languages allow variables with "liberal" type systems. In Python, for example, I can write this:

        x = "Hello world!"
        x = 2**1000

    Strings and big integers are completely unrelated and occupy different amounts of space in memory, so how can this code even be represented in a stack-based environment? What exactly happens here? Is x pointed at a new place on the stack and the old string data left unreferenced? Do these languages not use a stack? If not, how do they represent variables internally?
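
    For the Python case specifically, the resolution (hedged to CPython, the reference implementation) is that the VM's stack holds references, not values: a variable is just a name in a namespace bound to an object, and the object itself carries the type and size. Assignment rebinds the name; the old object is reclaimed once nothing references it.

        x = "Hello world!"
        print(type(x), id(x))    # <class 'str'> and that object's identity

        x = 2 ** 1000            # rebinds the name; the str is now unreferenced
        print(type(x), id(x))    # <class 'int'>, a different object entirely

        # A namespace is essentially a dict from names to object references,
        # so the two assignments above amount to:
        frame = {}
        frame["x"] = "Hello world!"
        frame["x"] = 2 ** 1000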

    Read the article

  • Delphi low-level machine parameter access

    - by tonyhooley.mp
    There are many very low-level parameters measured by PCs and their processors (e.g., core temperatures, fan speeds, voltage levels at various parts of the motherboard and processor internals) which are available and displayed by the BIOS, and by some application programs. How does one access these low-level (real-time) data via Delphi? Is there a library? Is there a Windows API?
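
    On Windows, much of this data surfaces through WMI, which Delphi can reach via COM (the SWbemServices/IWbemServices interfaces). A hedged sketch, using Python's third-party wmi package only to show which classes to target; actual hardware support varies by vendor and driver.

        import wmi  # third-party package; wraps the same WMI COM interfaces

        w = wmi.WMI(namespace=r"root\wmi")
        for zone in w.MSAcpi_ThermalZoneTemperature():
            # WMI reports tenths of a Kelvin; convert to Celsius.
            celsius = zone.CurrentTemperature / 10.0 - 273.15
            print(zone.InstanceName, "%.1f C" % celsius)

        # Fan speeds and voltages, when the vendor exposes them, live in
        # classes such as Win32_Fan and Win32_VoltageProbe under root\cimv2.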

    Read the article

  • Virtual machine running from DVD?

    - by umanga
    Greetings all, I have an application which uses Tomcat and PostgreSQL (it involves only database reads, no writes). I need to make this application runnable from a DVD (the target platform is Windows). So I was thinking to do this:
    1) In a virtual machine (I prefer VirtualBox), install a lightweight Linux distro.
    2) Install Tomcat and PostgreSQL.
    3) Write the virtual machine to the DVD so that it loads the above virtual machine image automatically when executed.
    But I am not quite sure whether I can do step 3. Or is it possible? Any tips?

    Read the article

  • Machine learning in OCaml or Haskell?

    - by griffin
    I'm hoping to use either Haskell or OCaml on a new project because R is too slow. I need to be able to use support vector machines, ideally separating out each execution to run in parallel. I want to use a functional language, and I have the feeling that these two are the best as far as performance and elegance are concerned (I like Clojure, but it wasn't as fast in a short test). I am leaning towards OCaml because there appears to be more support for integration with other languages, so it could be a better fit in the long run (e.g., OCaml-R). Does anyone know of a good tutorial for this kind of analysis, or a code example, in either Haskell or OCaml?

    Read the article

  • Machine learning - training step

    - by palau1
    When you're using Haar-like features as training data for an AdaBoost algorithm, how do you build your data sets? Do you literally have to find thousands of positive and negative samples? There must be a more efficient way of doing this... I'm trying to analyze images in MATLAB (not faces) and am relatively new to image processing.
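
    One common shortcut for the negative set (a hedged sketch, not MATLAB-specific): sample random crops from images known not to contain the target object, which turns a handful of source images into thousands of negatives. Paths and patch sizes below are hypothetical; Pillow is assumed.

        import os, random
        from PIL import Image

        def random_negatives(image_dir, out_dir, patch=(24, 24), per_image=20):
            os.makedirs(out_dir, exist_ok=True)
            for name in os.listdir(image_dir):
                img = Image.open(os.path.join(image_dir, name)).convert("L")
                w, h = img.size
                if w < patch[0] or h < patch[1]:
                    continue  # image too small to crop from
                for k in range(per_image):
                    x = random.randint(0, w - patch[0])
                    y = random.randint(0, h - patch[1])
                    crop = img.crop((x, y, x + patch[0], y + patch[1]))
                    crop.save(os.path.join(out_dir, "%s_%02d.png" % (name, k)))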

    Read the article
