Search Results

Search found 31800 results on 1272 pages for 'nrf big show'.


  • Imperative Programming in F#

    This article is taken from the book F# in Action. The authors discuss the basics of imperative programming in F# and develop a simple application to show how this style of programming works. They also demonstrate some of the interoperability among languages on the .NET platform.

    Read the article

  • Using Apps Script with Twilio

    In this episode we talk about integrating SMS and phone calls with Google Apps via Twilio, a voice and SMS provider. We show you the basics of the API as well as how to bring voice calls and SMS into spreadsheets and docs. You can download the source code for the demos here: github.com (From: GoogleDevelopers, Time: 27:57)

    Read the article

  • Do you know of any studies on the relation between a programmer's productivity and the workstation used?

    - by Tomasz Blachowicz
    I was wondering if there are any studies (formal or not-so-formal) that show a correlation between developer productivity and the workstation used to develop software. It is often argued that high-spec workstations increase productivity (or that low-spec machines hurt productivity to a greater extent). That sounds reasonable to me, but I'd like to verify the claim with some studies, if any exist. Can you help me with that?

    Read the article

  • Google Chrome: HTTP authentication issue in an iframe

    - by Daniel Dzussa
    I have an HTML file with 2 links (http://versionplus.in/pass/new.html); both links load their contents inside an iframe. I have two password-protected directories, one on the same server and the other on another server. Clicking either link pops up the login box in all browsers except Google Chrome, which doesn't show the login box for the protected folder on the other server. How can I fix this?

    Read the article

  • Menus intermittently take two clicks

    - by heynnema
    Intermittently, when I left-click on the Applications/Places/System menus, the first click does nothing and it takes a second click to drop the menu. I've also noticed that hierarchical menus sometimes don't show their expanded submenus unless I move the mouse pointer away to another menu item and then come back into the hierarchical menu. And lately, I've noticed menus in some applications acting the same way. Any ideas? Cheers, Al

    Read the article

  • How can I plot a radius of all reachable points with pathfinding for a Mob?

    - by PugWrath
    I am designing a tactical turn-based game. The maps are 2D but do have varying level layers and blocking objects/terrain. I'm looking for a pathfinding algorithm that will allow me to show an opaque shape representing all of the possible max-distance pixels that a mob can move to, given the mob's maximum pixel distance. Any thoughts on this, or do I just need to write a good pathfinding algorithm and use it to find the cutoff points in any direction where an obstacle exists?
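    One common approach (a sketch of mine, not from the question): treat the map as a grid of movement costs and run a uniform-cost flood fill (Dijkstra with no target) from the mob's position, keeping every tile whose cheapest path cost fits within the movement budget; the outline of that set is the opaque shape to draw. The grid representation and names below are hypothetical:

        import heapq

        def reachable_tiles(grid, start, max_cost):
            # grid[y][x] is the cost to enter a tile, or None if blocked
            # (hypothetical map format). Returns every (x, y) reachable
            # from `start` within a movement budget of `max_cost`.
            best = {start: 0}
            frontier = [(0, start)]
            while frontier:
                cost, (x, y) = heapq.heappop(frontier)
                if cost > best[(x, y)]:
                    continue  # stale queue entry
                for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    nx, ny = x + dx, y + dy
                    if not (0 <= ny < len(grid) and 0 <= nx < len(grid[0])):
                        continue
                    if grid[ny][nx] is None:
                        continue  # blocking terrain
                    ncost = cost + grid[ny][nx]
                    if ncost <= max_cost and ncost < best.get((nx, ny), float("inf")):
                        best[(nx, ny)] = ncost
                        heapq.heappush(frontier, (ncost, (nx, ny)))
            return set(best)

    Varying level layers could be handled by extending the neighbor step with stair or ramp transitions between layers.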

    Read the article

  • Are forks treated differently by GitHub?

    - by IQAndreas
    I found that GitHub does not allow you to use the "search" feature on forks (issues are still searchable, just not code). [screenshot] Are there any other cases where forks are treated as "inferior" or at least differently by GitHub? For instance, (assuming you haven't created a website specific to your fork), will forks still show up in Google search results, or will GitHub only provide results for the parent repository?

    Read the article

  • AdSense sent an email saying my account has been approved when it already was approved

    - by moomoochoo
    My account has been approved and has been running adverts for quite some time now. However, today I got a message (it seems legitimate) from Google AdSense saying: "Congratulations, your AdSense account has been approved to show AdSense ads on your own website. Within a few hours, you will begin to see live ads." Should I be concerned? They say they review accounts to check for compliance; could this be a roundabout way of saying they rechecked my sites and found them compliant?

    Read the article

  • Motion Sickness – What is It? [Video]

    - by Asian Angel
    Experiencing motion sickness is unpleasant and frustrating, but have you ever wondered what causes it? This video from AsapSCIENCE explains the causes of motion sickness and shows some 'events' that can trigger it, which you might want to avoid. [via Neatorama]

    Read the article

  • Ralink Bluetooth not working in Ubuntu 13.04

    - by Sourabh
    My Bluetooth is not working in Ubuntu 13.04. I am unable to turn it on, and the Bluetooth icon doesn't show up in the top bar. I asked about this on #ubuntu IRC and was told that I am missing the proprietary drivers for it. There is nothing under Additional Drivers in Software & Updates. Bluetooth does work in 'Try Ubuntu' mode when I boot Ubuntu from DVD/USB. How can I get the required drivers?

    Read the article

  • NUMA-aware constructs for java.util.concurrent

    - by Dave
    The constructs in the java.util.concurrent JSR-166 "JUC" concurrency library are currently NUMA-oblivious. That's because we don't yet have the topology-discovery infrastructure and underpinnings in place that would enable NUMA-awareness. But some quick throw-away prototypes show that it's possible to write NUMA-aware library code. I happened to use the JUC Exchanger as a research vehicle. Another interesting idea is to adapt fork-join work-stealing to favor stealing from queues associated with 'nearby' threads.

    Read the article

  • How to Load Oracle Tables From Hadoop Tutorial (Part 5 - Leveraging Parallelism in OSCH)

    - by Bob Hanckel
    Using OSCH: Beyond Hello World

    In the previous post we discussed a "Hello World" example for OSCH, focusing on the mechanics of getting a toy end-to-end example working. In this post we are going to talk about how to make it work for big data loads. We will explain how to optimize an OSCH external table for load, paying particular attention to Oracle's DOP (degree of parallelism), the number of external table location files we use, and the number of HDFS files that make up the payload. We will provide some rules that serve as best practices when using OSCH. The assumption is that you have read the previous post, have some end-to-end OSCH external tables working, and now want to ramp up the size of the loads.

    Using OSCH External Tables for Access and Loading

    OSCH external tables are no different from any other Oracle external tables. They can be used to access HDFS content using Oracle SQL:

        SELECT * FROM my_hdfs_external_table;

    or to use the same SQL access to load a table in Oracle:

        INSERT INTO my_oracle_table SELECT * FROM my_hdfs_external_table;

    To speed up the load time, you will want to control the degree of parallelism (i.e. DOP) and add two SQL hints:

        ALTER SESSION FORCE PARALLEL DML PARALLEL 8;
        ALTER SESSION FORCE PARALLEL QUERY PARALLEL 8;
        INSERT /*+ append pq_distribute(my_oracle_table, none) */ INTO my_oracle_table
          SELECT * FROM my_hdfs_external_table;

    There are various ways of hinting at what level of DOP you want to use. The ALTER SESSION statements above force the issue, assuming you (the user of the session) are allowed to assert the DOP (more on that in the next section). Alternatively, you could embed additional parallel hints directly into the INSERT and SELECT clauses respectively:

        /*+ parallel(my_oracle_table,8) */
        /*+ parallel(my_hdfs_external_table,8) */

    Note that the "append" hint lets you load a target table by reserving space above a given "high watermark" in storage and uses Direct Path load. In other words, it doesn't try to fill blocks that are already allocated and partially filled; it uses unallocated blocks. It is an optimized way of loading a table without incurring the typical resource overhead associated with run-of-the-mill inserts. The "pq_distribute" hint in this context unifies the INSERT and SELECT operators to make data flow during a load more efficient. Finally, your target Oracle table should be defined with the "NOLOGGING" and "PARALLEL" attributes. The combination of "NOLOGGING" and the "append" hint disables REDO logging and its overhead, while the "PARALLEL" clause tells Oracle to try to use parallel execution when operating on the target table.

    Determine Your DOP

    It might feel natural to build your datasets in Hadoop first and figure out how to tune the OSCH external table definition afterwards, but you should work backwards: focus first on the Oracle database, specifically the DOP you want to use when loading (or accessing) HDFS content using external tables. The DOP in Oracle controls how many PQ slaves are launched in parallel when executing an external table. Typically the DOP is something you want Oracle to control transparently, but for loading content from Hadoop with OSCH, it's something you will want to control yourself. Oracle computes the maximum DOP that can be used by an Oracle user.
    The maximum value that can be assigned is an integer typically equal to the number of CPUs on your Oracle instances, times the number of cores per CPU, times the number of Oracle instances. For example, suppose you have a RAC environment with 2 Oracle instances, each with 2 CPUs of 32 cores. The maximum DOP would be 128 (i.e. 2*2*32). In practice, if you are running on a production system, the maximum DOP you are allowed to use will be restricted by the Oracle DBA. This is because using the system maximum DOP can subsume all system resources on Oracle and starve anything else that is executing, and on a production system where resources need to be shared 24x7, that can't be allowed to happen. The use case for running OSCH with the maximum DOP is when you have exclusive access to all the resources on an Oracle system: when you are first seeding tables in a new Oracle database, or when normal activity in the production database can safely be taken off-line for a few hours to free up resources for a big incremental load. Using OSCH on high-end machines (specifically Oracle Exadata and Oracle BDA cabled with InfiniBand), this mode of operation can load up to 15TB per hour. The bottom line is that you should first figure out what DOP you will be allowed to run with by talking to the DBAs who manage the production system, and then use that number to derive the number of location files and (optionally) the number of HDFS data files that you want to generate, assuming that is flexible.

    Rule 1: Find out the maximum DOP you will be allowed to use with OSCH on the target Oracle system.

    Determining the Number of Location Files

    Let's assume the DBA told you that your maximum DOP is 8. You want the number of location files in your external table to be big enough to utilize all 8 PQ slaves, and you want them to represent equally balanced workloads. Remember that location files in OSCH are metadata lists of HDFS files and are created using OSCH's External Table tool. They also represent the workload given to an individual Oracle PQ slave (i.e. a PQ slave is given one location file to process at a time, and only it will process the contents of that location file).

    Rule 2: The workload size of a single location file (and of the PQ slave that processes it) is the sum of the content sizes of the HDFS files it lists.

    For example, if a location file lists 5 HDFS files which are each 100GB in size, the workload size for that location file is 500GB. The number of location files that you generate is something you control by providing a number as input to OSCH's External Table tool.

    Rule 3: The number of location files chosen should be a small multiple of the DOP.

    Each location file represents one workload for one PQ slave, so the goal is to keep all slaves busy and give them equivalent workloads. Obviously, if you run with a DOP of 8 but have only 5 location files, five PQ slaves will have something to do while the other three have nothing to do and quietly exit. If you run with 9 location files, the PQ slaves will pick up the first 8 and, assuming equal workloads, finish at about the same time; the first slave to finish is then rescheduled to process the ninth location file, potentially doubling the end-to-end processing time. So for this DOP, using 8, 16, or 32 location files would be a good idea.
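    To make that scheduling arithmetic concrete, here is a tiny back-of-the-envelope sketch (my illustration, not part of the tutorial): since each PQ slave processes one location file at a time, a balanced load finishes in roughly ceil(location_files / DOP) "waves" of similar duration.

        import math

        def load_waves(num_location_files: int, dop: int) -> int:
            # Each PQ slave takes one location file at a time, so a balanced
            # load completes in ceil(files / DOP) waves of similar duration.
            return math.ceil(num_location_files / dop)

        print(load_waves(8, 8))   # 1 -- ideal: every slave busy exactly once
        print(load_waves(9, 8))   # 2 -- the ninth file roughly doubles elapsed time
        print(load_waves(16, 8))  # 2 -- balanced: every slave processes two files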
    Determining the Number of HDFS Files

    Let's start with the next rule and then explain it:

    Rule 4: The number of HDFS files should be a multiple of the number of location files, and the files should be roughly the same size.

    In our running example the DOP is 8, which means the number of location files should be a small multiple of 8. Remember that each location file represents a list of unique HDFS files to load, and that the sum of the sizes of the files listed in each location file is the workload for one Oracle PQ slave. The OSCH External Table tool looks in an HDFS directory for a set of HDFS files to load. It generates N location files (where N is the value you gave to the tool) and then tries to divvy up the HDFS files so that the workload across location files is as balanced as possible. (The tool uses a greedy algorithm that grabs the biggest HDFS file and delegates it to a particular location file, then looks for the next biggest file and puts it in some other location file, and so on.) The tool's ability to balance is reduced if the HDFS file sizes are grossly out of balance or there are too few of them.

    For example, suppose my DOP is 8 and the number of location files is 8, and I have only 8 HDFS files, where one file is 900GB and the others are 100GB each. When the tool tries to balance the load it is forced to put the singleton 900GB file into one location file and each of the 100GB files into the 7 remaining location files. The load-balance skew is 9 to 1: one PQ slave will be working overtime while the slacker PQ slaves are off enjoying happy hour. If, however, the total payload (1600GB) were broken up into smaller HDFS files, the OSCH External Table tool would have an easier time generating a layout where each location file's workload is relatively the same. Applying Rule 4 to our DOP of 8, we could divide the workload into 160 files of approximately 10GB each. For this scenario the OSCH External Table tool would populate each location file with 20 HDFS file references, and all location files would have similar workloads (approximately 200GB per location file).

    As a rule, the more (and smaller) files the OSCH External Table tool has to work with, the more balanced the loads it can create. How small should HDFS files get? Not so small that the overhead of opening and closing HDFS files starts having a substantial impact. On our performance test system (Exadata/BDA with InfiniBand), I compared three OSCH loads of 1 TiB. One load had 128 HDFS files in 64 location files, where each HDFS file was about 8GB; I then did the same load with 12800 files, where each HDFS file was about 80MB. The end-to-end load time was virtually the same. However, when I got ridiculously small (i.e. 128000 files at about 8MB per file), it started to make an impact and slow down the load time.

    What happens if you break Rules 3 or 4 above? Nothing draconian; everything will still function. You just won't be taking full advantage of the generous DOP that was allocated to you by your friendly DBA. The key point of the rules articulated above is this: if you know that HDFS content is ultimately going to be loaded into Oracle using OSCH, it makes sense to chop it up into the right number of files of roughly the same size, derived from the DOP that you expect to use for loading.
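    To illustrate the balancing idea, here is a sketch in the spirit of the greedy strategy described above (my illustration, not OSCH's actual implementation), showing how splitting the same payload into more, smaller files evens out the workloads:

        import heapq

        def balance(hdfs_file_sizes_gb, num_location_files):
            # Greedy heuristic: hand the largest remaining HDFS file to the
            # least-loaded location file, and repeat.
            loads = [(0, i) for i in range(num_location_files)]
            heapq.heapify(loads)
            workloads = [0] * num_location_files
            for size in sorted(hdfs_file_sizes_gb, reverse=True):
                load, i = heapq.heappop(loads)
                workloads[i] = load + size
                heapq.heappush(loads, (load + size, i))
            return workloads

        # The skewed example: one 900GB file plus seven 100GB files across
        # 8 location files cannot be balanced -- the skew stays 9 to 1.
        print(balance([900] + [100] * 7, 8))  # [900, 100, 100, ..., 100]
        # The same 1600GB payload as 160 ~10GB files balances to 200GB each.
        print(balance([10] * 160, 8))         # [200, 200, ..., 200]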
    Next Steps

    So far we have talked about OLH and OSCH as alternative models for loading. That's not quite the whole story: they can be used together in a way that makes OSCH loads more efficient and allows more flexibility in scheduling load operations across a Hadoop cluster and an Oracle database. The next lesson will talk about Oracle Data Pump files generated by OLH and loaded using OSCH, and will outline the pros and cons of the various load methods. This will be followed by a final tutorial lesson focusing on how to optimize OLH and OSCH for use on Oracle's engineered systems: specifically Exadata and the BDA.

    Read the article

  • Visual Studio Tips and Tricks

    - by deadlydog
    Just found a few websites that show some Visual Studio tips I hadn't seen before, so I thought I'd share:

    1. Tips and Tricks for the Visual Studio .NET IDE
    2. Essential Visual Studio Tips & Tricks that Every Developer Should Know
    3. Channel 9's Visual Studio Toolbox - a weekly series dedicated to showing all the cool stuff that Visual Studio can do and how to be more productive with it.

    Read the article

  • HTG Explains: What Group Policy Is and How You Can Use It

    - by Chris Hoffman
    Group Policy is a Windows feature that contains a variety of advanced settings, particularly for network administrators. However, local Group Policy can also be used to adjust settings on a single computer. Group Policy isn't designed for home users, so it's only available on Professional, Ultimate, and Enterprise versions of Windows.

    Read the article

  • How to make Unity's Dash save the results filtering?

    - by Vagrant232
    The Dash remembers filtered-result settings for the current session, but not beyond it. Once I log out and back in, the results reset to their defaults, displaying everything without filtering. How can I make the result filtering permanent? For instance, how can I make the photo lens always show photos from "This Computer", and not from Picasa or Facebook, across sessions rather than just temporarily?

    Read the article

  • Building a package from binary files - what's wrong with my control file?

    - by Hannes de Jager
    I'm trying to build a .deb package from the binaries of my application (non open source) and I'm having trouble getting the correct info to display in the Ubuntu Software Centre (when you click on the .deb file). Please see the screenshot below of the control file and the Software Centre view. It seems like the package name and the package description are swapped: I'm expecting the part in bold to read "attix5pro" and not "Cloud backup agent". Can someone show me my mistake or guide me?
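    For reference, a minimal DEBIAN/control file typically looks like the sketch below (all field values here are made up for illustration). The Package field holds the package name and the first line after "Description:" holds the one-line summary, so if the two strings appear swapped in Software Centre, check that each value sits in the right field here:

        Package: attix5pro
        Version: 1.0-1
        Section: utils
        Priority: optional
        Architecture: amd64
        Maintainer: Example Maintainer <maintainer@example.com>
        Description: Cloud backup agent
         Longer description of the backup agent goes here,
         indented by one space on each continuation line.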

    Read the article

  • Is it wrong to tell mobile users to view a site on their computer?

    - by betamax
    I am creating a web application that doesn't work correctly on mobile. I don't want to make it work on mobile, because I would rather mobile users have a fully integrated experience than have to use the web version. A mobile version will be released at a later date, based on reaction to the initial web-based version. So my question is: is it wrong to keep mobile users from using the site and instead show them some sort of splash screen telling them to come back to the site on a computer?

    Read the article

  • Getting More Website Traffic From Google - How to Know Which Keywords Will Make You a Profit

    When it comes to making a profit with Google AdWords, everyone knows that you need to make sure you are using the right keywords. The problem is that unless you have a proven strategy for finding the right keywords, you are going to pick the wrong search terms and lose a lot of money. In this article I want to show you exactly how to find the right search terms so you can maximize your profits.

    Read the article

  • ASP.NET Querystring: Basic Dynamic URL Formations

    If you are a beginner with ASP.NET 3.5, you might ask: how are dynamic URLs using query strings generated in ASP.NET? In developing dynamic websites, those that depend heavily on a database to present content, it is of the utmost importance that you clearly understand how to work with query-based URLs. This article will show you how.

    Read the article

  • Calgary .NET User Group – Entity Framework Code First - December 11th

    - by David Paquette
    I will be presenting at the Calgary .NET User Group on December 11th. We will start from scratch in this intro to Entity Framework Code First. We will build a simple application using ASP.NET MVC and Entity Framework, and evolve it to show how we can build scalable applications using Entity Framework Code First. Topics covered will include database initialization, code-based migrations, performance profiling, and performance tuning. Register at http://www.dotnetcalgary.com/

    Read the article
