Search Results

Search found 21331 results on 854 pages for 'require once'.


  • What's a good Minimal Server-Side Javascript Framework?

    - by Nick Retallack
    So I was writing a web app with web.py that uses plenty of client-side javascript, and my database is on couchdb, so the queries are in javascript too. Eventually I just got to thinking: why not skip the python and go all javascript? Besides, some functions need to run once on the client and again on the server to make sure you're not spoofing, so why translate between javascript and python? So I'm looking for a simple, lightweight javascript web framework. All I really need is the url routing, request and response stuff (standard wsgi?), and a way to hook into a big http server like nginx. What do you guys recommend?

    Read the article

  • How to use numpy with OpenBLAS instead of Atlas in Ubuntu?

    - by pierotiste
    I have looked for an easy way to install/compile Numpy with OpenBLAS but didn't find an easy answer. All the documentation I have seen takes too much knowledge for granted for someone like me who is not used to compiling software. There are two packages in Ubuntu related to OpenBLAS: libopenblas-base and libopenblas-dev. Once they are installed, what should I do to install Numpy again with them? Thanks! Note that when these OpenBLAS packages are installed, Numpy doesn't work anymore; it can't be imported: ImportError: /usr/lib/liblapack.so.3gf: undefined symbol: ATL_chemv. This was noticed here already. Answer to the question: run sudo update-alternatives --all and set liblapack.so.3gf to /usr/lib/lapack/liblapack.so.3gf.
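
    Once the alternatives point at the OpenBLAS/LAPACK libraries, a quick sanity check from Python confirms that Numpy imports and shows what it links against (a sketch; the matrix size is arbitrary):

      # Verify numpy imports cleanly and inspect its BLAS/LAPACK link info.
      import time
      import numpy as np

      np.show_config()  # prints the BLAS/LAPACK libraries numpy was built with

      # Rough smoke test: a large matrix multiply should be noticeably
      # faster with an optimized BLAS such as OpenBLAS.
      a = np.random.rand(2000, 2000)
      b = np.random.rand(2000, 2000)
      t0 = time.time()
      a.dot(b)
      print("matmul took %.2fs" % (time.time() - t0))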

    Read the article

  • C# GridView dynamically built columns with textboxes ontextchanged

    - by tnriverfish
    My page is a bulk order form that has many products and various size options. I've got a gridview that has 3 static columns with labels, plus some dynamically built columns. Each of the dynamically built columns has a textbox in it; the textbox is for quantity. I'm trying to either update the server with the quantity entered each time a textbox is changed (possibly the ontextchanged event), or loop through each of the rows column by column, gather all the items that have a quantity, and process those items and their quantities all at once (via button onclick). If I put the process that builds the GridView behind an if(!Page.IsPostBack), then when a textchanged event fires, the gridview only gets the static fields and the dynamic ones are gone. If I remove the if(!Page.IsPostBack), the process to gather and build the page is too heavy on processing and takes too long to render the page again. Some advice would be appreciated. Thanks

    Read the article

  • SO_LINGER and closing sockets(WINSOCK)

    - by Johnny Walked
    Hey. I'm writing a multithreaded winsock application and I'm having some issues with closing the sockets. First of all, is there a limit on the number of simultaneously open sockets? Let's say, 32 sockets all at once. I establish a connection on one of the sockets, pass information, and it all goes right. The problem is when I disconnect the socket and then reconnect to the same destination: I get a RST from the server after my SYN. I don't have the code for the server app so I can't debug it. When I used SO_LINGER it sent a RST flag at the end of each session, and it worked. But I don't want to end my connections this way. When not using SO_LINGER a FIN flag was sent, but it seems the connection was not really closed. Any help? Thanks
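
    The poster is on Winsock/C, but the close semantics are the same across stacks. A sketch of the two close styles in Python (SO_LINGER with a zero timeout produces the abortive RST close described above; the graceful path half-closes with a FIN and drains the peer before closing):

      import socket
      import struct

      def abortive_close(sock):
          # linger enabled with timeout 0: close() sends RST immediately
          sock.setsockopt(socket.SOL_SOCKET, socket.SO_LINGER,
                          struct.pack("ii", 1, 0))
          sock.close()

      def graceful_close(sock):
          sock.shutdown(socket.SHUT_WR)   # send FIN, stop writing
          while sock.recv(4096):          # drain until the peer closes too
              pass                        # (recv returning b"" means FIN received)
          sock.close()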

    Read the article

  • Style: Dot notation vs. message notation in Objective-C 2.0

    - by groundhog
    In Objective-C 2.0 we got the "dot" notation for properties. I've seen various back-and-forths about the merits of dot notation vs. message notation. To keep the responses untainted, I'm not going to respond either way in the question. What is your thought about dot notation vs. message notation for property accessing? Please try to keep it focused on Objective-C - the one bias I'll put forth is that Objective-C is Objective-C, so your preference that it be like Java or JavaScript isn't valid. Valid commentary has to do with technical issues (operation ordering, cast precedence, performance, etc.), clarity (structure vs. object nature, both pro and con!), succinctness, etc. Note, I'm of the school of rigorous quality and readability in code, having worked on huge projects where code convention and quality are paramount (the write-once-read-a-thousand-times paradigm).

    Read the article

  • MongoDB, Carrierwave, GridFS and prevention of files' duplication

    - by Arkan
    I am dealing with Mongoid, Carrierwave and GridFS to store my uploads. For example, I have a model Article containing a file upload (a picture).

      class Article
        include Mongoid::Document
        field :title, :type => String
        field :content, :type => String
        mount_uploader :asset, AssetUploader
      end

    But I would like to store the file only once, for the case where I upload the same file many times for different articles. I saw GridFS has an MD5 checksum. What would be the best way to prevent duplication of identical files? Thanks
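
    A sketch of the checksum idea with pymongo (GridFS keeps an md5 per stored file, so a digest can be looked up before inserting; the database name and the lack of error handling are for illustration only):

      import hashlib
      import gridfs
      from pymongo import MongoClient

      db = MongoClient().my_app          # hypothetical database name
      fs = gridfs.GridFS(db)

      def store_once(data):
          digest = hashlib.md5(data).hexdigest()
          existing = db.fs.files.find_one({"md5": digest})
          if existing:
              return existing["_id"]     # reuse the already-stored file
          return fs.put(data)            # first time we see this content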

    Read the article

  • $.Post with Form submit

    - by Michael
    ...Some form gets submitted...

      $("form").submit(function() {
          saveFormValues($(this), "./../..");
      });

      function saveFormValues(form, path) {
          var inputs = getFormData(form);
          var params = createAction("saveFormData", inputs);
          var url = path + "/scripts/sessions.php";
          $.post(url, params);
      }

    The weird thing is that if I add a callback, $.post(url, params, function(data) { alert(data); }), I get a blank alert statement. Within scripts/sessions.php I have a function to save whatever the $_POST information is to a file, and sessions.php never saves this saveFormValues call; it never shows up in the file. But if I keep trying to get it to save, about once every 15 attempts will actually allow it to be saved. This leads me to believe that the form's POST is somehow blocking this value-saving post. Any help?

    Read the article

  • jQuery closing pop-up after successful form submit

    - by RyanP13
    Hi, I have a pop-up form that I wish to close on successful submit of the form. Currently we are only validating on the server side, which will create an unordered list of validation errors with the class .error_message. Once the form has been successfully submitted I want to close the window. The jQuery I am using for this is as follows, but it does not work because the first time the form is submitted the number of error messages will always be zero.

      $(document).ready(function() {
          $('form#emailQuote').submit(function() {
              var $emailErrors = $('form#emailQuote ul.error_message').length;
              alert($emailErrors);
              if ($emailErrors == 0) {
                  //window.close();
                  alert('no errors');
              } else {
                  alert('errors');
              }
          });
      });

    Read the article

  • FLEX: Make LineChart DATATIP constrain to vertical axis.

    - by user60310
    When making a line chart, let's say it's for business sales for different depts, with days horizontally and dollars vertically: when you hover over a line, a dataTip tells you the sales for that dept on that day. I want it to show all the depts at the same time, so if you hover over day 3, I want the dataTips for all depts on day 3 to display so you can compare the values for all the sales on the same day. I set the mouseSensitivity for the dataTips to display all the lines at once, but I end up getting day 2 for one dept and day 3 for another, which is not wanted. This is actually posted as a bug and explained better here: http://bugs.adobe.com/jira/browse/FLEXDMV-1853 I am wondering if anyone can come up with a work-around for this? Thanks!

    Read the article

  • Fusion HCM SaaS – Integration

    - by Kiran Mundy
    A typical implementation pattern we're seeing with Fusion Apps early adopters is implementing a few Fusion HCM applications that bring the most benefit to their company with the least disruption to existing programs and interfaces. Very often this ends up being Fusion Goals & Performance, Talent, Compensation or Benefits, often with Taleo for recruiting. The implementation picture looks like what you see below:

    Here, you can see that all the "downstream integrations" from the On-Premise Core HR are unaffected, because the master for employee data is still your On-Premise Core HR system - all updates and new hires are made here (although they may be fed in from Taleo to start with).

    As a second phase, when customers migrate Core HR to Fusion HCM, they have to come up with a strategy to manage integrations to all their downstream applications that require employee details. For customers coming from EBS HR, a short-term strategy that allows for minimal impact is to extract employee data from Fusion (via HCM Extract) and load the shared EBS HR tables (which are part of an EBS Financials install anyway), letting your downstream integrations continue to function based on this data, as shown below. If you are not coming from EBS HR and there are license implications, you may want to consider:

    - Creating an On-Premise warehouse for extracting data from Fusion Apps.
    - Leveraging Fusion Apps Web Services (available to SaaS customers starting R7) to directly retrieve/write data to Fusion Apps.

    Integration Tools

    File Based Loader
    This is the primary mechanism for loading HCM data (both initial load and incremental updates) into Fusion HCM. Employee and related data can be uploaded into Fusion HCM using File Based Loader. Note that the ability to schedule File Based Loader to run on a pre-defined schedule will be available as a patch on top of Rel 5. Hr2Hr has been deprecated in favor of File Based Loader, but for existing customers using Hr2Hr, here are some sample scripts that show how to get more informative error messages. They can be run by creating data model sql queries in BI Publisher. The scripts currently have hard-coded values for request id and loader batch id, which your developer will need to update to the correct values for you. The BI Publisher training session recorded on Apr 18th is available here (under "Recordings"); this will enable a somewhat technical resource to create a data model sql query.

    Links to Documentation & Training:
    - Reference documentation for File Based Loader on docs.oracle.com
    - FBL 1.1 MOS Doc Id 1533860.1
    - Sample demo data files for File Based Loader
    - HCM SaaS Integrations ppt and recording

    EBS APIs
    Loading information into EBS Full or Shared HCM: this could be candidate information being loaded from Taleo into EBS, or employee information being loaded from Fusion HCM into an EBS shared HR install (for downstream applications and EBS Financials).
    - Oracle HRMS Product Family Publicly Callable Business Process APIs (A Reference Consolidation) [ID 216838.1] - a guide to the EBS R12 Integration Repository accessible from an EBS instance.
    - EBS HRMS Publicly Callable Business Process APIs in Release 11i & 12 [ID 121964.1]

    Fusion HCM Extract
    Fusion HCM Extract is the primary mechanism used to extract employee information from Fusion HCM. Refer to the "Configure Identity Sync" doc on MOS for additional mechanisms. Additional documentation (you'll need an oracle.com account to access):
    - HCM Extracts User Guides (Rel 4 & 5)
    - HCM Extract Entity/Attributes (Rel 5)
    - HCM Extract User Guide (Rel 5)
    If you don't have an oracle.com account, download the zipped HCM Extract Rel 5 Docs (click on File --> Download on the next screen). View the training recordings on Fusion HCM Extract.

    Benefits Extract
    To set up the benefits extract, refer to the following guide. Page 2-15 of the User Documentation describes how to use the benefits extract. Benefit enrollments can also be uploaded into Fusion Benefits; instructions are here, along with a sample upload file. However, if the defined benefits extract does not meet your requirements, you can use BI Publisher (link to the BI Publisher presentation recording from Apr 18th) to create your own version of the benefits extract. You can start with the data model query underlying the benefits extract.

    Payroll Interface
    Fusion Payroll Interface enables you to capture personal payroll information, such as earnings and deductions, along with other data from Oracle Fusion Human Capital Management, and send that information to a third-party payroll provider. Documentation: the payroll interface guide, a sample file, and the DBIs used for the payroll interface. Usage Patterns always accessible @ http://www.finapps.com

    Read the article

  • Using LINQ Group By to return new XElements

    - by Jon
    I have the following code and got myself confused: I have a query that returns a set of records that have been identified as duplicates, and I then want to create an XElement for each one. This should be done in one query I think, but I'm now lost.

      var f = (from x in MyDocument.Descendants("RECORD")
               where itemsThatWasDuplicated.Contains((int)x.Element("DOCUMENTID"))
               group x by x.Element("DOCUMENTID").Value into g
               let item = g.Skip(1) // Ignore first as that is the valid one
               select item);

      var errorQuery = (from x in f
                        let sequenceNumber = x.Element("DOCUMENTID").Value
                        let detail = "Sequence number " + sequenceNumber + " was read more than once"
                        select new XElement("ERROR",
                            new XElement("DATETIME", time),
                            new XElement("DETAIL", detail),
                            new XAttribute("TYPE", "DUP"),
                            new XElement("ID", x.Element("ID").Value)));

    Read the article

  • Unique identifier for an email

    - by Skywalker
    I am writing a C# application which allows users to store emails in a MS SQL Server database. Many times, multiple users will be copied on an email from a customer. If they all try to add the same email to the database, I want to make sure that the email is only added once. MD5 springs to mind as a way to do this. I don't need to worry about malicious tampering, only to make sure that the same email will map to the same hash and that no two emails with different content will map to the same hash. My question really boils down to how one would combine multiple fields into one MD5 (or other) hash value. Some of these fields will have a single value per email (e.g. subject, body, sender email address) while others will have multiple values (varying numbers of attachments, recipients). I want to develop a way of uniquely identifying an email that will be platform and language independent (not based on serialization). Any advice?
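
    One way to combine the fields: normalize each one, sort the multi-valued fields so that recipient and attachment order don't matter, and feed everything through a single digest with explicit separators. A sketch of the recipe (the field set and normalization rules are illustrative choices, not a standard; it ports directly to .NET's System.Security.Cryptography.MD5):

      import hashlib

      def email_fingerprint(sender, subject, body, recipients, attachments):
          h = hashlib.md5()
          for part in [sender.strip().lower(), subject.strip(), body]:
              h.update(part.encode("utf-8"))
              h.update(b"\x00")                     # field separator
          for addr in sorted(r.strip().lower() for r in recipients):
              h.update(addr.encode("utf-8") + b"\x00")
          for blob in sorted(attachments):          # attachment bytes, sorted
              h.update(hashlib.md5(blob).digest())  # hash-of-hashes keeps it cheap
          return h.hexdigest()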

    Read the article

  • Sitemaps on multiple front end servers using a http handler, is that a good idea?

    - by Rihan Meij
    Question 1: We would like to generate a site map for our CMS site. We have multiple front-end servers and approximately a million articles.

    Environment:
    - multiple MS SQL servers
    - multiple front-end servers (load balanced)
    - ASP.net and IIS 6
    - Windows 2003

    To have the site maps (the site map index file and the site map files) physically on the front-end servers would be an operations nightmare and error prone. So we are considering using http handlers instead, so that it does not matter which server gets the request; it will be able to serve the correct xml file.

    Question 2: If we ping Google each time we publish a new article, will that affect us negatively if that happens more than once an hour? Thanks!
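
    A sketch of the handler idea: build the sitemap XML on demand from the database so every load-balanced server serves identical output. The URL scheme below is an assumption; note that the sitemap protocol caps each file at 50,000 URLs, so with a million articles the handler would also need to emit a sitemap index pointing at ~20 such files.

      from xml.sax.saxutils import escape

      def sitemap_xml(article_urls):
          # One sitemap file; the spec allows at most 50,000 URLs per file.
          parts = ['<?xml version="1.0" encoding="UTF-8"?>',
                   '<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">']
          for url in article_urls[:50000]:
              parts.append("  <url><loc>%s</loc></url>" % escape(url))
          parts.append("</urlset>")
          return "\n".join(parts)

      print(sitemap_xml(["http://example.com/articles/%d" % i for i in range(3)]))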

    Read the article

  • How to Buy an SD Card: Speed Classes, Sizes, and Capacities Explained

    - by Chris Hoffman
    Memory cards are used in digital cameras, music players, smartphones, tablets, and even laptops. But not all SD cards are created equal — there are different speed classes, physical sizes, and capacities to consider. Different devices require different types of SD cards. Here are the differences you’ll need to keep in mind when picking out the right SD card for your device.

    Speed Class

    In a nutshell, not all SD cards offer the same speeds. This matters for some tasks more than it matters for others. For example, if you’re a professional photographer taking photos in rapid succession on a DSLR camera, saving them in high-resolution RAW format, you’ll want a fast SD card so your camera can save them as fast as possible. A fast SD card is also important if you want to record high-resolution video and save it directly to the SD card. If you’re just taking a few photos on a typical consumer camera, or you’re just using an SD card to store some media files on your smartphone, the speed isn’t as important.

    Manufacturers use “speed classes” to measure an SD card’s speed. The SD Association that defines the SD card standard doesn’t actually define the exact speeds associated with these classes, but they do provide guidelines. There are four different speed classes — 10, 6, 4, and 2. 10 is the fastest, while 2 is the slowest. Class 2 is suitable for standard-definition video recording, while classes 4 and 6 are suitable for high-definition video recording. Class 10 is suitable for “full HD video recording” and “HD still consecutive recording.” There are also two Ultra High Speed (UHS) speed classes, but they’re more expensive and are designed for professional use; UHS cards are designed for devices that support UHS.

    You’ll probably be okay with a class 4 or 6 card for typical use in a digital camera, smartphone, or tablet. Class 10 cards are ideal if you’re shooting high-resolution videos or RAW photos. Class 2 cards are a bit on the slow side these days, so you may want to avoid them for all but the cheapest digital cameras. Even a cheap smartphone can record HD video, after all.

    An SD card’s speed class is identified on the SD card itself, and you’ll also see it on the online store listing or on the card’s packaging when purchasing one. If you see no speed class symbol, you have a class 0 SD card. These cards were designed and produced before the speed class rating system was introduced; they may be slower than even a class 2 card.

    Physical Size

    Different devices use different sizes of SD cards. You’ll find standard-size SD cards, miniSD cards, and microSD cards.

    Standard SD cards are the largest, although they’re still very small. They measure 32x24x2.1 mm and weigh just two grams. Most consumer digital cameras for sale today still use standard SD cards. They have the standard “cut corner” design.

    miniSD cards are smaller than standard SD cards, measuring 21.5x20x1.4 mm and weighing about 0.8 grams. This is the least common size today. miniSD cards were designed to be especially small for mobile phones, but we now have a smaller size.

    microSD cards are the smallest size of SD card, measuring 15x11x1 mm and weighing just 0.25 grams. These cards are used in most cell phones and smartphones that support SD cards. They’re also used in many other devices, such as tablets.

    SD cards will only fit into matching slots. You can’t plug a microSD card into a standard SD card slot — it won’t fit. However, you can purchase an adapter that allows you to plug a smaller SD card into a larger SD card’s form and fit it into the appropriate slot.

    Capacity

    Like USB flash drives, hard drives, solid-state drives, and other storage media, different SD cards can have different amounts of storage. But the differences between SD card capacities don’t stop there. Standard SDSC (SD) cards are 1 MB to 2 GB in size, or perhaps 4 GB in size — although 4 GB is non-standard. The SDHC standard was created later, and allows cards 2 GB to 32 GB in size. SDXC is a more recent standard that allows cards 32 GB to 2 TB in size.

    You’ll need a device that supports SDHC or SDXC cards to use them. At this point, the vast majority of devices should support SDHC. In fact, the SD cards you have are probably SDHC cards. SDXC is newer and less common.

    When buying an SD card, you’ll need to buy the right speed class, size, and capacity for your needs. Be sure to check what your device supports and consider what speed and capacity you’ll actually need.

    Image Credit: Ryosuke SEKIDO on Flickr, Clive Darra on Flickr, Steven Depolo on Flickr

    Read the article

  • Tricks to avoid losing motivation?

    - by AareP
    Motivation is a tricky thing to keep up. Once I thought that ambitious projects would keep a programmer motivated, and that too-simple tasks would hinder his motivation. Now I have plenty of experience with small and large projects, desktop/web/database programming, c++/c#/java/php languages, oop/non-oop paradigms, day-job/free-time programming... but I still can't answer the question of motivation. Which programming tasks do I like, and which don't I? It seems to depend on too many variables. One thing remains constant, though: starting everything from scratch is always more motivating than extending some existing system. Unfortunately it's hard to use this trick in productive programming. :) So my question is, what tricks can a programmer use to stay motivated? For example, should we use pen and paper as much as possible, in order not to get fed up with monitor and keyboard?

    Read the article

  • How to organize a large number of objects

    - by shane
    We have a large number of documents and metadata (xml files) associated with these documents. What is the best way to organize them? Currently we put them into a series of nested folders:

      /repository/category/date(when they were loaded into our db)/document_number.pdf and .xml

    We use the path as a unique identifier for the document in our system. This is more versatile than putting them all in a single flat folder; also, it is independent of our database/application, so we can reload them in case of failure. Yet it introduces some limitations: for example, we can't move the files once they've been placed in this structure, and it takes work to put them there in the first place. What is the best practice? How do websites such as Scribd deal with this problem?
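
    A minimal sketch of the scheme as described, treating the relative path as the document ID (the category and numbering below are placeholders):

      from datetime import date
      from pathlib import Path

      REPO_ROOT = Path("/repository")

      def document_paths(category, loaded, document_number):
          # The relative path doubles as the document's unique identifier.
          base = REPO_ROOT / category / loaded.isoformat() / str(document_number)
          return base.with_suffix(".pdf"), base.with_suffix(".xml")

      pdf, xml = document_paths("contracts", date(2010, 5, 1), 42)
      print(pdf)   # /repository/contracts/2010-05-01/42.pdf
      print(xml)   # /repository/contracts/2010-05-01/42.xml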

    Read the article

  • UnitTest++ constructing fixtures multiple times?

    - by Peter
    I'm writing some unit tests in UnitTest++ and want to write a bunch of tests which share some common resources. I thought that this should work via their TEST_FIXTURE setup, but it seems to be constructing a new fixture for every test. Sample code:

      #include <UnitTest++.h>

      struct SomeFixture {
          SomeFixture() {
              // this line is hit twice
          }
      };

      TEST_FIXTURE(SomeFixture, FirstTest) {
      }

      TEST_FIXTURE(SomeFixture, SecondTest) {
      }

    I feel like I must be doing something wrong; I had thought that the whole point of having the fixture was so that the setup/teardown code only happens once. Am I wrong on this? Is there something else I have to do to make it work that way?
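
    Constructing a fresh fixture per TEST_FIXTURE is standard xUnit behavior (it keeps tests isolated), so the output above is by design rather than a mistake. Some frameworks make the setup scope an explicit choice instead; as a cross-framework illustration of once-per-suite setup (pytest here, not a UnitTest++ feature):

      import pytest

      @pytest.fixture(scope="module")
      def shared_resource():
          # built once for the whole module, not once per test
          print("constructing shared resource")
          yield {"handle": object()}
          print("tearing down shared resource")

      def test_first(shared_resource):
          assert "handle" in shared_resource

      def test_second(shared_resource):
          assert "handle" in shared_resource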

    Read the article

  • Background processing in rails

    - by hashpipe
    Hi, this might seem like a FAQ on stackoverflow, but my requirements are a little different. While I have previously used BackgroundRB and DJ for running background processes in ruby, my requirement this time is to run some heavy analytics and mathematical computations on a huge set of data, and I need to do this only during about the first 15 days of the month. Going by this, I am tempted to use cron and run a ruby script to accomplish this goal. What I would like to know / understand is:

    1 - Is using cron a good idea? (Because I'm not a system admin, and so while I have a basic idea of cron, I'm not overly confident of doing it perfectly.)

    2 - Can we somehow modify DJ to run only on the first 15 days of the month (with / without using cron), and then just stop and exit once all the jobs in the queue for the day are over? (I don't want it to ping the DB every time for a new job... whatever jobs are in the queue when DJ starts, that will be all.)

    I'm not sure if I have put the question in the right manner, but any help in this direction will be much appreciated. Thanks
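
    For the cron half, the day-of-month field already accepts ranges (e.g. 0 2 1-15 * * fires at 02:00 on the 1st through the 15th). Belt and braces, the script itself can refuse to run outside the window; a sketch of that guard (in Python for brevity, the same one-liner works in ruby):

      from datetime import date
      import sys

      if date.today().day > 15:
          sys.exit(0)   # outside the first-15-days window; nothing to do

      # ... kick off the heavy analytics run here ...
      print("running monthly analytics, day %d" % date.today().day)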

    Read the article

  • Best practices for combining Lucene.NET and a relational database?

    - by FlySwat
    I'm working on a project where I will have a LOT of data, and it will be searchable by several forms that are very efficiently expressed as SQL queries, but it also needs to be searched via natural language processing. My plan is to build an index using Lucene for this form of search. My question is that if I do this and perform a search, Lucene will return the IDs of matching documents in the index, and I then have to look up these entities from the relational database. This could be done in two ways (that I can think of so far):

    1. N queries (horrible)
    2. Pass all the IDs to a stored procedure at once (perhaps as a comma-delimited parameter). This has the downside of being limited to the max parameter size, and the slow performance of a UDF to split the string into a temporary table.

    I'm almost tempted to mirror everything into Lucene's index, so that I can periodically generate the index from the backing store, but only need to access it for the frontend. Advice?
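
    A third variant avoids the string-splitting UDF entirely: batch the Lucene hits and bind each ID as its own parameter in a generated IN clause. Sketched below with sqlite3 standing in for SQL Server (whose 2008 release also added table-valued parameters for exactly this hand-off); the table and column names are illustrative:

      import sqlite3

      def fetch_entities(conn, ids, batch=500):
          # Chunk the ID list to stay under per-statement parameter limits.
          rows = []
          for i in range(0, len(ids), batch):
              chunk = ids[i:i + batch]
              placeholders = ",".join("?" * len(chunk))
              sql = "SELECT id, title FROM documents WHERE id IN (%s)" % placeholders
              rows.extend(conn.execute(sql, chunk).fetchall())
          return rows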

    Read the article

  • Is it possible to make the AntiForgeryToken value in ASP.NET MVC change after each verification?

    - by jmcd
    We've just had some Penetration Testing carried out on an application we've built using ASP.NET MVC, and one of the recommendations that came back was that the value of the AntiForgeryToken in the Form could be resubmitted multiple times and did not expire after a single use. According to the OWASP recommendations around the Synchronizer Token Pattern: "In general, developers need only generate this token once for the current session." Which is how I think the ASP.NET MVC AntiForgeryToken works. In case we have to fight the battle, is it possible to cause the AntiForgeryToken to regenerate a new value after each validation?

    Read the article

  • Transitioning to Branching with TFS

    - by Rob
    Our team is currently using plain old TFS 2005: no branching, shared checkouts, etc... I would like to introduce a DEV/MAIN/PROD branching system similar to the basic flavor in the TFS Guidance document, so that we can do some parallel dev and isolation, and firm up review and deployment processes. I have read most of the whitepapers. Do you guys have any practical advice, suggested tools, gotchas or recommendations? Also, we plan to migrate to 2010 once it comes out - not sure if that would affect anything. I appreciate all the suggestions and help I can get, as I am a branching neophyte. Thanks in advance.

    Read the article

  • What useful GDB scripts have you used/written?

    - by nik
    People use gdb on and off for debugging; of course there are lots of other debugging tools across the varied OSes, with and without GUIs, and maybe other fancy IDE features. I would like to know what useful gdb scripts you have written and liked. I do not mean a dump of commands in a something.gdb file that you source to pull out a bunch of data - though if that made your day, go ahead and talk about it. Let's think conditional processing, control loops and functions written for more elegant and refined debugging, and maybe even for whitebox testing. Things get interesting when you start debugging remote systems (say, over a serial/ethernet interface), and what if the target is a multi-processor (and multithreaded) system? Let me put a simple case as an example: a script that traversed serially over entries to locate a bad entry in a large hash table that is implemented on an embedded platform. That helped me debug a broken hash table once.
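
    In the same spirit as the hash-table example, gdb 7+ can be scripted in Python, which makes loops and conditionals far less painful than raw command files. A sketch of a user command that scans a table for invalid entries (the symbols table, TABLE_SIZE and the valid field are assumptions about the target program):

      import gdb

      class FindBadEntry(gdb.Command):
          """find-bad-entry: scan table[0..TABLE_SIZE) for invalid entries."""

          def __init__(self):
              super(FindBadEntry, self).__init__("find-bad-entry", gdb.COMMAND_USER)

          def invoke(self, arg, from_tty):
              size = int(gdb.parse_and_eval("TABLE_SIZE"))
              for i in range(size):
                  entry = gdb.parse_and_eval("table[%d]" % i)
                  if int(entry["valid"]) == 0:   # hypothetical sanity check
                      gdb.write("suspect entry at index %d: %s\n" % (i, entry))

      FindBadEntry()   # register the command; load with "source script.py"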

    Read the article

  • FairScheduling Conventions in Hadoop

    - by dan.mcclary
    While scheduling and resource allocation control have been present in Hadoop since 0.20, a lot of people haven't discovered or utilized them in their initial investigations of the Hadoop ecosystem. We could chalk this up to many things:

    - Organizations are still determining what their dataflow and analysis workloads will comprise.
    - Small deployments under test aren't likely to show the signs of strain that would send someone looking for resource allocation options.
    - The default scheduling options -- the FairScheduler and the CapacityScheduler -- are not placed in the most prominent position within the Hadoop documentation.

    However, for production deployments, it's wise to start with at least the foundations of scheduling in place so that you can tune the cluster as workloads emerge. To do that, we have to ask ourselves something about what the off-the-rack scheduling options are. We have some choices:

    - The FairScheduler, which will work to ensure resource allocations are enforced on a per-job basis.
    - The CapacityScheduler, which will ensure resource allocations are enforced on a per-queue basis.
    - Writing your own implementation of the abstract class org.apache.hadoop.mapred.job.TaskScheduler is an option, but usually overkill.

    If you're going to have several concurrent users and leverage the more interactive aspects of the Hadoop environment (e.g. Pig and Hive scripting), the FairScheduler is definitely the way to go. In particular, we can do user-specific pools so that default users get their fair share, and specific users are given the resources their workloads require.

    To enable fair scheduling, we're going to need to do a couple of things. First, we need to tell the JobTracker that we want to use scheduling and where we're going to be defining our allocations. We do this by adding the following to the mapred-site.xml file in HADOOP_HOME/conf:

      <property>
        <name>mapred.jobtracker.taskScheduler</name>
        <value>org.apache.hadoop.mapred.FairScheduler</value>
      </property>
      <property>
        <name>mapred.fairscheduler.allocation.file</name>
        <value>/path/to/allocations.xml</value>
      </property>
      <property>
        <name>mapred.fairscheduler.poolnameproperty</name>
        <value>pool.name</value>
      </property>
      <property>
        <name>pool.name</name>
        <value>${user.name}</value>
      </property>

    What we've done here is simply tell the JobTracker that we'd like task scheduling to use the FairScheduler class rather than a single FIFO queue. Moreover, we're going to be defining our resource pools and allocations in a file called allocations.xml. For reference, the allocation file is read every 15s or so, which allows for tuning allocations without having to take down the JobTracker. Our allocation file is now going to look a little like this:

      <?xml version="1.0"?>
      <allocations>
        <pool name="dan">
          <minMaps>5</minMaps>
          <minReduces>5</minReduces>
          <maxMaps>25</maxMaps>
          <maxReduces>25</maxReduces>
          <minSharePreemptionTimeout>300</minSharePreemptionTimeout>
        </pool>
        <user name="dan">
          <maxRunningJobs>6</maxRunningJobs>
        </user>
        <userMaxJobsDefault>3</userMaxJobsDefault>
        <fairSharePreemptionTimeout>600</fairSharePreemptionTimeout>
      </allocations>

    In this case, I've explicitly set my username to have upper and lower bounds on the maps and reduces, and allotted myself double the number of running jobs. Now, if I run hive or pig jobs from either the console or via the Hue web interface, I'll be treated "fairly" by the JobTracker. There's a lot more tweaking that can be done to the allocations file, so it's best to dig down into the description and start trying out allocations that might fit your workload.

    Read the article

  • Avoid XmlDocument validating namespaces in C#

    - by Abbey Kingston
    Hello, I'm trying to find a way of indenting an HTML file. I've been using XmlDocument and just using an XmlTextWriter. However, I am unable to format it correctly for HTML documents, because it checks the doctype and tries to download it. Is there a "dumb" indenting mechanism that doesn't validate or check the document and does a best-effort indentation? The files are 4-10Mb in size and they are autogenerated; we have to handle it internally - it's fine, the user can wait, I just want to avoid forking to a new process etc. Essentially, right now I use a MemoryStream, XmlTextWriter and XmlDocument; once indented, I read it back from the MemoryStream and return it as a string. Failures happen for XHTML documents and some HTML 4 documents because it's trying to grab the DTDs. I tried setting XmlResolver to null, but to no avail :(

    Read the article

  • When is calculating or variable-reading faster?

    - by Andreas Hornig
    Hi, to be honest, I don't really know what the "small green men" in my cpu and compiler do, so I sometimes would like to know :). Currently I would like to know what's faster, so that I can design my code in a more efficient way. For example, I want to calculate something at different points in my source code: when will it be faster to calculate it once and store it in a variable that's read and used at the next points it's needed, and when is it faster to calculate it every time? I think it depends on how "complex" and "long" the calculation is, and on how fast the cache is where variables are stored, but I don't have any clue what's faster :). Thanks for any reply to my tiny but important question! Andreas PS: perhaps it's important to know that I code in Java, but it's more a general question.
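
    The honest answer is usually "measure it": on the JVM the JIT may well hoist a loop-invariant computation for you, so only a benchmark settles the question. A sketch of the measurement idea (in Python for brevity; the hoisting pattern is the same in Java):

      import math
      import timeit

      xs = list(range(10000))

      def recompute():
          # the invariant sqrt(len(xs)) is evaluated for every element
          return [math.sqrt(len(xs)) * x for x in xs]

      def cached():
          k = math.sqrt(len(xs))   # computed once, stored in a variable
          return [k * x for x in xs]

      print("recompute:", timeit.timeit(recompute, number=200))
      print("cached:   ", timeit.timeit(cached, number=200))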

    Read the article
