Search Results

Search found 60391 results on 2416 pages for 'data generation'.


  • How do I compile a Code Composer project which was created using a different version of Code Generation tools?

    - by snakile
    I have a Code Composer project I received from a friend. When I try to build it I get the following error message: This project was created using a version of Code Generation tools that is not currently installed: 6.1.12 [C6000]. Please install the Code Generation tools of this version, or migrate the project to one of the supported versions. How do I migrate the project to my version?

    Read the article

  • Big Data: Size isn’t everything

    - by Simon Elliston Ball
    Big Data has a big problem; it’s the word “Big”. These days, a quick Google search will uncover terabytes of negative opinion about the futility of relying on huge volumes of data to produce magical, meaningful insight. There are also many clichéd but correct assertions about the difficulties of correlation versus causation in massive data sets. In reading some of these pieces, I begin to understand how climatologists must feel when people complain ironically about “global warming” during snowfall. Big Data has a name problem. There is a lot more to it than size. Shape, Speed, and…err…Veracity are also key elements (now I understand why Gartner and the gang went with V’s instead of S’s).

    The need to handle data of different shapes (Variety) is not new. Data developers have always had to mold strange-shaped data into our reporting systems, integrating with semi-structured sources, and even straying into full-text searching. However, what we lacked was an easy way to add semi-structured and unstructured data to our arsenal. New “Big Data” tools such as MongoDB, and other NoSQL (Not Only SQL) databases, or a graph database like Neo4J, fill this gap. Still, to many, they simply introduce noise to the clean signal that is their sensibly normalized data structures.

    What about speed (Velocity)? It’s not just high frequency trading that generates data faster than a single system can handle. Many other applications need to make trade-offs that traditional databases won’t, in order to cope with high data insert speeds, or to extract the required information from data streams quickly. Unfortunately, many people equate Big Data with the Hadoop platform, whose batch driven queries and job processing queues have little to do with “velocity”. StreamInsight, Esper and Tibco BusinessEvents are examples of Big Data tools designed to handle high-velocity data streams. Again, the name doesn’t do the discipline of Big Data any favors. Ultimately, though, does analyzing fast moving data produce insights as useful as the ones we get through a more considered approach, enabled by traditional BI?

    Finally, we have Veracity and Value. In many ways, these additions to the classic Volume, Velocity and Variety trio acknowledge the criticism that without high-quality data and genuinely valuable outputs, data, big or otherwise, is worthless. As a discipline, Big Data has recognized this, and data quality and cleaning tools are starting to appear to support it. Rather than simply decrying the irrelevance of Volume, we need as a profession to focus on how to improve Veracity and Value. Perhaps we should just declare the ‘Big’ silent, embrace these new data tools and help develop better practices for their use, just as we did with the good old RDBMS?

    What does Big Data mean to you? Which V gives your business the most pain, or the most value? Do you see these new tools as a useful addition to the BI toolbox, or are they just enabling a dangerous trend to find ghosts in the noise?

    Read the article

  • Know your Data Lineage

    - by Simon Elliston Ball
    An academic paper without the footnotes isn’t an academic paper. Journalists wouldn’t base a news article on facts that they can’t verify. So why would anyone publish reports without being able to say where the data has come from and be confident of its quality, in other words, without knowing its lineage (sometimes referred to as ‘provenance’ or ‘pedigree’)?

    The number and variety of data sources, both traditional and new, increases inexorably. Data comes clean or dirty, processed or raw, unimpeachable or entirely fabricated. On its journey to our report, from its source, the data can travel through a network of interconnected pipes, passing through numerous distinct systems, each managed by different people. At each point along the pipeline, it can be changed, filtered, aggregated and combined. When the data finally emerges, how can we be sure that it is right? How can we be certain that no part of the data collection was based on incorrect assumptions, that key data points haven’t been left out, or that the sources are good? Even when we’re using data science to give us an approximate or probable answer, we cannot have any confidence in the results without confidence in the data from which they came.

    You need to know what has been done to your data, where it came from, and who is responsible for each stage of the analysis. This information represents your data lineage; it is your stack-trace. If you’re an analyst, suspicious of a number, it tells you why the number is there and how it got there. If you’re a developer, working on a pipeline, it provides the context you need to track down the bug. If you’re a manager, or an auditor, it lets you know the right things are being done. Lineage tracking is part of good data governance.

    Most audit and lineage systems require you to buy into their whole structure. If you are using Hadoop for your data storage and processing, then tools like Falcon allow you to track lineage, as long as you are using Falcon to write and run the pipeline. It can mean learning a new way of running your jobs (or using some sort of proxy), and even a distinct way of writing your queries. Other Hadoop tools provide a lot of operational and audit information, spread throughout the many logs produced by Hive, Sqoop, MapReduce and all the various moving parts that make up the eco-system. To get a full picture of what’s going on in your Hadoop system you need to capture both Falcon lineage and the data-exhaust of other tools that Falcon can’t orchestrate.

    However, the problem is bigger even than that. Often, Hadoop is just one piece in a larger processing workflow. The next step of the challenge is how you bind together the lineage metadata describing what happened before and after Hadoop, where ‘after’ could be a data analysis environment like R, an application, or even directly into an end-user tool such as Tableau or Excel. One possibility is to push as much as you can of your key analytics into Hadoop, but would you give up the power and familiarity of your existing tools in return for a reliable way of tracking lineage? Lineage and auditing should work consistently, automatically and quietly, allowing users to access their data with whatever tool they need to use.

    The real solution, therefore, is to create a consistent method by which to bring lineage data from these various disparate sources into the data analysis platform that you use, rather than being forced to use the tool that manages the pipeline for the lineage and a different tool for the data analysis. The key is to keep your logs and your audit data, from every source, bring them together, and use the data analysis tools to trace the paths from raw data to the answer that data analysis provides.

    Read the article

  • NSURLConnection receives data even if no data was thrown back

    - by Anna Fortuna
    Let me explain my situation. Currently, I am experimenting long-polling using NSURLConnection. I found this and I decided to try it. What I do is send a request to the server with a timeout interval of 300 secs. (or 5 mins.) Here is a code snippet: NSURL *url = [NSURL URLWithString:urlString]; NSURLRequest *request = [NSURLRequest requestWithURL:url cachePolicy:NSURLCacheStorageAllowedInMemoryOnly timeoutInterval:300]; NSData *data = [NSURLConnection sendSynchronousRequest:request returningResponse:&resp error:&err]; Now I want to test if the connection will "hold" the request if no data was thrown back from the server, so what I did was this: if (data != nil) [self performSelectorOnMainThread:@selector(dataReceived:) withObject:data waitUntilDone:YES]; And the function dataReceived: looks like this: - (void)dataReceived:(NSData *)data { NSLog(@"DATA RECEIVED!"); NSString *string = [NSString stringWithUTF8String:[data bytes]]; NSLog(@"THE DATA: %@", string); } Server-side, I created a function that will return a data once it fits the arguments and returns none if nothing fits. Here is a snippet of the PHP function: function retrieveMessages($vardata) { if (!empty($vardata)) { $result = check_data($vardata) //check_data is the function which returns 1 if $vardata //fits the arguments, and 0 if it fails to fit if ($result == 1) { $jsonArray = array('Data' => $vardata); echo json_encode($jsonArray); } } } As you can see, the function will only return data if the $result is equal to 1. However, even if the function returns nothing, NSURLConnection will still perform the function dataReceived: meaning the NSURLConnection still receives data, albeit an empty one. So can anyone help me here? How will I perform long-polling using NSURLConnection? Basically, I want to maintain the connection as long as no data is returned. So how will I do it? NOTE: I am new to PHP, so if my code is wrong, please point it out so I can correct it.
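    For what it's worth, the long-polling pattern the question is after boils down to: issue a request with a long timeout, treat an empty body as "no data yet", and immediately re-issue the request. A minimal client-side sketch of that loop in Java (using java.net.http.HttpClient; the endpoint URL is a placeholder and the 300-second timeout mirrors the question):

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;
    import java.time.Duration;

    public class LongPollClient {
        public static void main(String[] args) throws Exception {
            HttpClient client = HttpClient.newHttpClient();
            URI uri = URI.create("http://example.com/poll.php"); // hypothetical endpoint

            while (true) {
                HttpRequest request = HttpRequest.newBuilder(uri)
                        .timeout(Duration.ofSeconds(300))   // long-poll timeout, as in the question
                        .GET()
                        .build();
                try {
                    HttpResponse<String> response =
                            client.send(request, HttpResponse.BodyHandlers.ofString());
                    String body = response.body();
                    // An empty body means the server had nothing to report: poll again.
                    if (body != null && !body.isBlank()) {
                        System.out.println("DATA RECEIVED: " + body);
                    }
                } catch (java.net.http.HttpTimeoutException e) {
                    // Timed out with no data: simply reconnect and keep waiting.
                }
            }
        }
    }

    The same idea applies to the question's setup: either have the server hold the request open until data exists, or have the client check for an empty response and re-poll, rather than treating every completed request as "data received".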

    Read the article

  • How to maintain an ordered table with Core Data (or SQL) with insertions/deletions?

    - by Jean-Denis Muys
    This question is in the context of Core Data, but if I am not mistaken, it applies equally well to a more general SQL case. I want to maintain an ordered table using Core Data, with the possibility for the user to: reorder rows insert new lines anywhere delete any existing line What's the best data model to do that? I can see two ways: 1) Model it as an array: I add an int position property to my entity 2) Model it as a linked list: I add two one-to-one relations, next and previous from my entity to itself 1) makes it easy to sort, but painful to insert or delete as you then have to update the position of all objects that come after 2) makes it easy to insert or delete, but very difficult to sort. In fact, I don't think I know how to express a Sort Descriptor (SQL ORDER BY clause) for that case. Now I can imagine a variation on 1): 3) add an int ordering property to the entity, but instead of having it count one-by-one, have it count 100 by 100 (for example). Then inserting is as simple as finding any number between the ordering of the previous and next existing objects. The expensive renumbering only has to occur when the 100 holes have been filled. Making that property a float rather than an int makes it even better: it's almost always possible to find a new float midway between two floats. Am I on the right track with solution 3), or is there something smarter?
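    Option 3 (gapped ordering keys) is easy to prototype independently of Core Data. Below is a rough, framework-agnostic sketch in Java; the Item class, the gap size of 100 and the renumbering strategy are all invented for illustration, but the same logic maps onto an integer or float attribute with a sort descriptor (or an ORDER BY clause):

    import java.util.ArrayList;
    import java.util.Comparator;
    import java.util.List;

    public class GappedOrdering {
        static final long GAP = 100;          // leave room between neighbours

        static class Item {
            String name;
            long position;                    // persisted ordering attribute
            Item(String name, long position) { this.name = name; this.position = position; }
        }

        final List<Item> items = new ArrayList<>();

        // Append at the end: last position + GAP.
        void append(String name) {
            long last = items.isEmpty() ? 0 : items.get(items.size() - 1).position;
            items.add(new Item(name, last + GAP));
        }

        // Insert between two neighbours by taking the midpoint of their positions.
        // When the midpoint collides with the lower neighbour, the gap is exhausted
        // and everything must be renumbered (the rare, expensive case).
        void insertAfter(int index, String name) {
            long before = items.get(index).position;
            long after  = index + 1 < items.size() ? items.get(index + 1).position : before + 2 * GAP;
            long mid = (before + after) / 2;
            if (mid == before) {              // no room left: renumber and retry
                renumber();
                insertAfter(index, name);
                return;
            }
            items.add(index + 1, new Item(name, mid));
        }

        // Rewrite all positions with fresh gaps; only needed when a gap fills up.
        void renumber() {
            long p = GAP;
            for (Item item : items) { item.position = p; p += GAP; }
        }

        void print() {
            items.stream().sorted(Comparator.comparingLong((Item i) -> i.position))
                 .forEach(i -> System.out.println(i.position + "  " + i.name));
        }

        public static void main(String[] args) {
            GappedOrdering list = new GappedOrdering();
            list.append("first");                // position 100
            list.append("second");               // position 200
            list.insertAfter(0, "in between");   // lands at 150, no renumbering needed
            list.print();
        }
    }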

    Read the article

  • How can I scrape specific data from a website

    - by Stoney
    I'm trying to scrape data from a website for research. The urls are nicely organized in an example.com/x format, with x as an ascending number and all of the pages are structured in the same way. I just need to grab certain headings and a few numbers which are always in the same locations. I'll then need to get this data into structured form for analysis in Excel. I have used wget before to download pages, but I can't figure out how to grab specific lines of text. Excel has a feature to grab data from the web (Data-From Web) but from what I can see it only allows me to download tables. Unfortunately, the data I need is not in tables.
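    One common route, assuming the pages are plain HTML, is an HTML parser rather than wget plus manual text extraction. The sketch below uses the open-source Jsoup library for Java; the URL pattern, page count, CSS selectors and CSV columns are placeholders that would have to be adapted to the real pages, and the output is written as CSV so Excel can open it directly:

    import org.jsoup.Jsoup;
    import org.jsoup.nodes.Document;
    import org.jsoup.nodes.Element;

    import java.io.FileWriter;
    import java.io.PrintWriter;

    public class Scraper {
        public static void main(String[] args) throws Exception {
            try (PrintWriter csv = new PrintWriter(new FileWriter("results.csv"))) {
                csv.println("page,heading,value");           // CSV header for Excel
                for (int x = 1; x <= 100; x++) {             // example.com/x with ascending x
                    Document doc = Jsoup.connect("http://example.com/" + x).get();

                    // The selectors below are placeholders; inspect the real pages
                    // to find the elements that hold the heading and the numbers.
                    Element h1 = doc.select("h1").first();
                    String heading = (h1 != null) ? h1.text() : "";
                    String value = doc.select("span.figure").text();   // hypothetical class

                    csv.println(x + ",\"" + heading + "\",\"" + value + "\"");
                }
            }
        }
    }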

    Read the article

  • How should I architect my Model and Data Access layer objects in my website?

    - by Robin Winslow
    I've been tasked with designing the Data layer for a website at work, and I am very interested in architecture of code for the best flexibility, maintainability and readability. I am generally acutely aware of the value in completely separating out my actual Models from the Data Access layer, so that the Models are completely naive when it comes to Data Access. And in this case it's particularly useful to do this as the Models may be built from the Database or may be built from a Soap web service. So it seems to me to make sense to have Factories in my data access layer which create Model objects. So here's what I have so far (in my made-up pseudocode): class DataAccess.ProductsFromXml extends DataAccess.ProductFactory {} class DataAccess.ProductsFromDatabase extends DataAccess.ProductFactory {} These then get used in the controller in a fashion similar to the following: var xmlProductCreator = DataAccess.ProductsFromXml(xmlDataProvider); var databaseProductCreator = DataAccess.ProductsFromDatabase(databaseDataProvider); // Returns array of Product model objects var XmlProducts = xmlProductCreator.Products(); // Returns array of Product model objects var DbProducts = databaseProductCreator.Products(); So my question is, is this a good structure for my Data Access layer? Is it a good idea to use a Factory for building my Model objects from the data? Do you think I've misunderstood something? And are there any general patterns I should read up on for how to write my data access objects to create my Model objects?
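    As a rough illustration of the factory idea being described (written in Java rather than the question's pseudocode, with all class and method names invented), the factories live in the data access layer and hand back plain Product models that know nothing about where they came from:

    import java.util.List;

    // Plain model: knows nothing about XML, SOAP or the database.
    class Product {
        final String name;
        final double price;
        Product(String name, double price) { this.name = name; this.price = price; }
    }

    // The data access layer exposes one abstraction...
    interface ProductFactory {
        List<Product> products();
    }

    // ...and hides the concrete sources behind it.
    class XmlProductFactory implements ProductFactory {
        private final String xml;
        XmlProductFactory(String xml) { this.xml = xml; }
        @Override public List<Product> products() {
            // parse the XML here; hard-coded for the sketch
            return List.of(new Product("from-xml", 1.0));
        }
    }

    class DatabaseProductFactory implements ProductFactory {
        @Override public List<Product> products() {
            // run the query here; hard-coded for the sketch
            return List.of(new Product("from-db", 2.0));
        }
    }

    // The controller only ever sees the interface, so swapping sources is trivial.
    public class Controller {
        public static void main(String[] args) {
            ProductFactory factory = new XmlProductFactory("<products/>");
            for (Product p : factory.products()) {
                System.out.println(p.name + " " + p.price);
            }
        }
    }

    The important property is that Product contains no data access code at all; the controller depends only on the ProductFactory abstraction, so an XML, database or SOAP-backed implementation can be swapped in without touching the models.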

    Read the article

  • Sneak peek at next generation Three MiFi unit – Huawei E585

    - by Liam Westley
    Last Wednesday I was fortunate to be invited to a sneak preview of the next generation Three MiFi unit, the Huawei E585. Many thanks to all those who posted questions either via this blog or via @westleyl on Twitter. I think I made sure I asked every question posed to the MiFi product manager from Three UK, and so here are the answers you were after.

    What is a MiFi? For those who are wondering, a MiFi unit is a 3G broadband modem combined with a WiFi access point, providing 3G broadband data access to up to five devices simultaneously via standard WiFi connections.

    What is different? It appears the prime task of enhancing the MiFi was to improve the user experience and user interface, both in terms of the device hardware and within the management software used to configure the device. I think this was a very sensible decision as these areas had substantial room for improvement.
    - Single button operation to switch on, enable WiFi and connect to 3G
    - Improved OLED display (see below), replacing the multi coloured LEDs; it shows signal strength, SMS notifications, the number of connected clients and data usage
    - Management is via a web based dashboard accessible from any web browser. This is a big win for those running Linux, Mac OS X or an iPad and, for me, as I can now configure the device from Windows 7 64-bit
    - Charging is via micro USB, the new standard for small USB devices; you cannot use your old charger for the new MiFi unit
    - Automatic reconnection when regaining a signal
    - Improved charging time, which should allow recharging of the device while in use
    - Although subjective, the black and silver design does look more classy than the silver and white plastic of the original MiFi

    What is the same?
    - Virtually the same size and weight
    - The battery is the same unit as the original MiFi, so you'll have a handy spare if you upgrade
    - Data plans remain the same as the current MiFi, so the cheapest price for upgraders will be £49 pay as you go
    - Still only works on 3G networks, with no fallback to GPRS or EDGE
    - There is no specific upgrade path for existing Three customers, either from a dongle or from the original MiFi

    My opinion: I think Three have concentrated on the correct areas of usability and user experience rather than trying to add new whizz-bang technology features which aren't of interest to mainstream users. The one button operation and the improved device display will make it much easier to use when out and about. If the automatic reconnection proves reliable, that will remove a major bugbear that I experienced the previous evening when travelling on the First Great Western line from Paddington to Didcot Parkway. The signal was repeatedly lost as we sped through tunnels and cuttings, and without automatic reconnection it was a real pain to keep pressing the data button on the MiFi to re-establish my data connection. And finally, the web based dashboard will mean I no longer need to resort to my XP based netbook to configure the SSID and password. My everyday laptop runs Windows 7 64-bit, which appears to confuse the older 3 WiFi manager, which cannot locate the MiFi when connected.

    Links to other sites, and other images of the device: Good first impressions from Ben Smith, http://thereallymobileproject.com/2010/06/3uk-announce-a-new-mifi-with-a-screen/ Also, a round up of other sneak preview posts, http://www.3mobilebuzz.com/2010/06/11/mifi-round-two-your-view/

    Pictures: Here is a comparison of the old MiFi device next to the new device, complete with OLED display and the Huawei logo now being a prominent feature on the front of the device. One of my fellow bloggers had a Linux based netbook, showing off the web based dashboard complete with Text messages panel to manage SMS. And finally, I never thought that my blog sub title would ever end up printed onto a cup cake, ... and here's some of the other cup cakes ...

    Read the article

  • Calculating percentiles in Excel with "buckets" data instead of the data list itself

    - by G B
    I have a bunch of data in Excel that I need to get certain percentile information from. The problem is that instead of having the data set made up of each individual value, I have summary info on the number of occurrences of each value ("bucket" data). For example, imagine that my actual data set looks like this: 1,1,2,2,2,2,3,3,4,4,4 The data set that I have is this:
    Value   No. of occurrences
    1       2
    2       4
    3       2
    4       3
    Is there an easy way for me to calculate percentile information (as well as the median) without having to explode the summary data out to the full data set? (Once I did that, I know that I could just use the Percentile(A1:A5, p) function) This is important because my data set is very large. If I exploded the data out, I would have hundreds of thousands of rows and I would have to do it for a couple of hundred data sets. Help!
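    For what it's worth, the arithmetic does not require expanding the buckets: walk the (value, count) pairs in value order, keep a running count, and stop at the first value whose cumulative count reaches p × N. A small sketch of that idea in Java, using the question's buckets and the simple nearest-rank definition of a percentile (which can differ slightly from Excel's interpolating PERCENTILE function):

    import java.util.LinkedHashMap;
    import java.util.Map;

    public class BucketPercentile {
        // Nearest-rank percentile from (value -> occurrences) buckets, values pre-sorted.
        static double percentile(Map<Double, Long> buckets, double p) {
            long n = buckets.values().stream().mapToLong(Long::longValue).sum();
            long rank = (long) Math.ceil(p * n);      // 1-based rank of the desired element
            long cumulative = 0;
            for (Map.Entry<Double, Long> e : buckets.entrySet()) {
                cumulative += e.getValue();
                if (cumulative >= rank) {
                    return e.getKey();
                }
            }
            throw new IllegalArgumentException("empty buckets");
        }

        public static void main(String[] args) {
            // The buckets from the question: 1x2, 2x4, 3x2, 4x3 (11 values in total)
            Map<Double, Long> buckets = new LinkedHashMap<>();
            buckets.put(1.0, 2L);
            buckets.put(2.0, 4L);
            buckets.put(3.0, 2L);
            buckets.put(4.0, 3L);

            System.out.println(percentile(buckets, 0.50));   // median -> 2.0
            System.out.println(percentile(buckets, 0.90));   // 90th percentile -> 4.0
        }
    }

    The same running-count logic can be reproduced in the spreadsheet itself with a cumulative-count helper column next to the buckets, avoiding the explosion to hundreds of thousands of rows.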

    Read the article

  • How do I set up a WCF Data Service with an ADO.NET Entity Data Model in another assembly?

    - by lsb
    Hi! I have an ASP.NET 4.0 website that has an Entity Data Model hooked up to a WCF Data Service. When the Service and Model are in the same assembly everything works. Unfortunately, when I move the Model to another "shared" assembly (and change the namespace) the service compiles but throws a 500 error when launched in a browser. The reason I want to have the Model in a common assembly (let's call it RiaTest.Shared) is that I want to share common validation code between the client and service (by checking "Reuse types in referenced assemblies" in the Advanced tab of the Add Service Reference dialog). Anyway, I've spent a couple of hours on this to no avail, so any help in this regard would be appreciated...

    Read the article

  • Open Data, Government and Transparency

    - by Tori Wieldt
    A new track at TDC (The Developer's Conference in Sao Paulo, Brazil) is titled Open Data. It deals with open data, government and transparency. Saturday will be a "transparency hacker day" where developers are invited to create applications using open data from the Brazilian government. Alexandre Gomes, co-lead of the track, says "I want to inspire developers to become 'Civic hackers:' developers who create apps to make society better." It is a chance for developers to do well and do good. There are many opportunities for developers, including monitoring government expenditures and getting citizens involved via social networks.

    The open data movement is growing worldwide. One initiative, the Open Government Partnership, is working to make government data easier to find and access. Making this data easily available means that with the right applications, it will be easier for people to make decisions and suggestions about government policies based on detailed information. Last April, the Open Government Partnership held its annual meeting in Brasilia, the capital of Brazil. It was a great success showcasing the innovative work being done in open data by governments, civil societies and individuals around the world. For example, Bulgaria now publishes daily data on budget spending for all public institutions.

    Alexandre Gomes Explains Open Data

    At TDC, the Open Data track will include a presentation of examples of successful open data projects, an introduction to the semantic web, how to handle big data sets, techniques of data visualization, and how to design APIs. The other track lead is Christian Moryah Miranda, a systems analyst for the Brazilian Government's Ministry of Planning. "The Brazilian government wholeheartedly supports this effort. In order to make our data available to the public, it forces us to be more consistent with our data across ministries, and that's a good step forward for us," he said. He explained the government knows they cannot achieve everything they would like without help from the public. "It is not the government versus the people, rather citizens are partners with the government, and together we can achieve great things!" Miranda exclaimed.

    Saturday at TDC will be a "transparency hacker day" where developers will be invited to create applications using open data from the Brazilian government. Attendees are invited to pitch their ideas, work in small groups, and present their project at the end of the conference. "For example," Gomes said, "the Brazilian government just released the salaries of all government employees and I can't wait to see what developers can do with that."

    Resources:
    - Open Government Partnership
    - U.S. Government Open Data Project
    - Brazilian Government Open Data Project
    - U.K. Government Open Data Project
    - 2012 International Open Government Data Conference

    Read the article

  • Master Data Management and Cloud Computing

    - by david.butler(at)oracle.com
    Cloud Computing is all the rage these days. There are many reasons why this is so. But like its predecessor, Service Oriented Architecture, it can fall on hard times if the underlying data is left unmanaged. Master Data Management is the perfect Cloud companion. It can materially increase the chances for successful Cloud initiatives. In this blog, I'll review the nature of the Cloud and show how MDM fits in.

    Here's the National Institute of Standards and Technology Cloud definition: Cloud computing is a model for enabling convenient, on-demand network access to a shared pool of configurable computing resources that can be rapidly provisioned and released with minimal management effort or service provider interaction.

    Cloud architectures have three main layers: applications or Software as a Service (SaaS), Platforms as a Service (PaaS), and Infrastructure as a Service (IaaS). SaaS generally refers to applications that are delivered to end-users over the Internet. Oracle CRM On Demand is an example of a SaaS application. Today there are hundreds of SaaS providers covering a wide variety of applications including Salesforce.com, Workday, and Netsuite. Oracle MDM applications are located in this layer of Oracle's On Demand enterprise Cloud platform. We call it Master Data as a Service (MDaaS). PaaS generally refers to an application deployment platform delivered as a service. They are often built on a grid computing architecture and include database and middleware. Oracle Fusion Middleware is in this category and includes the SOA and Data Integration products used to connect SaaS applications including MDM. Finally, IaaS generally refers to computing hardware (servers, storage and network) delivered as a service. This typically includes the associated software as well: operating systems, virtualization, clustering, etc.

    Cloud Computing benefits are compelling for a large number of organizations. These include significant cost savings, increased flexibility, and fast deployments. Cost advantages include paying for just what you use. This is especially critical for organizations with variable or seasonal usage. Companies don't have to invest to support peak computing periods. Costs are also more predictable and controllable. Increased agility includes access to the latest technology and experts without making significant up front investments.

    While Cloud Computing is certainly very alluring with a clear value proposition, it is not without its challenges. An IDC survey of 244 IT executives/CIOs and their line-of-business (LOB) colleagues identified a number of issues:
    - Security - 74% identified security as an issue involving data privacy and resource access control.
    - Integration - 61% found that it is hard to integrate Cloud Apps with in-house applications.
    - Operational Costs - 50% are worried that On Demand will actually cost more given the impact of poor data quality on the rest of the enterprise.
    - Compliance - 49% felt that compliance with required regulatory, legal and general industry requirements (such as PCI, HIPAA and Sarbanes-Oxley) would be a major issue. When control is lost, the ability of a provider to directly manage how and where data is deployed, used and destroyed is negatively impacted.
    There are others, but I singled out these four top issues because Master Data Management, properly incorporated into a Cloud Computing infrastructure, can significantly ameliorate all of these problems. Cloud Computing can literally rain raw data across the enterprise.

    According to fellow blogger Mike Ferguson, "the fracturing of data caused by the adoption of cloud computing raises the importance of MDM in keeping disparate data synchronized." David Linthicum, CTO of Blue Mountain Labs, blogs that "the lack of MDM will become more of an issue as cloud computing rises. We're moving from complex federated on-premise systems, to complex federated on-premise and cloud-delivered systems."

    Left unmanaged, non-standard, inconsistent, ungoverned data with questionable quality can pollute analytical systems, increase operational costs, and reduce the ROI in Cloud and On-Premise applications. As cloud computing becomes more relevant, and more data, applications, services, and processes are moved out to cloud computing platforms, the need for MDM becomes ever more important. Oracle's MDM suite is designed to deal with all four of the Cloud issues listed in the IDC survey:
    - Security - MDM manages all master data attribute privacy and resource access control issues.
    - Integration - MDM pre-integrates Cloud Apps with each other and with On Premise applications at the data level.
    - Operational Costs - MDM significantly reduces operational costs by increasing data quality, thereby improving enterprise business process efficiency.
    - Compliance - MDM, with its built-in Data Governance capabilities, ensures that the data is governed according to organizational standards. This facilitates rapid and accurate reporting for compliance purposes.

    Oracle MDM creates governed, high quality master data. A unified, cleansed and standardized data view is produced. The Oracle Customer Hub creates a single view of the customer. The Oracle Product Hub creates high quality product data designed to support all go-to-market processes. Oracle Supplier Hub dramatically reduces the chances of 'supplier exceptions'. Oracle Site Hub masters locations. And Oracle Hyperion Data Relationship Management masters financial reference data and manages enterprise hierarchies across operational areas from ERP to EPM and CRM to SCM. Oracle Fusion Middleware connects Cloud and On Premise applications to MDM Hubs and brings high quality master data to your enterprise business processes.

    An independent analyst once said "Poor data quality is like dirt on the windshield. You may be able to drive for a long time with slowly degrading vision, but at some point, you either have to stop and clear the windshield or risk everything." Cloud Computing has the potential to significantly degrade data quality across the enterprise over time. Deploying a Master Data Management solution prior to or in conjunction with a move to the Cloud can ensure that the data flowing into the enterprise from the Cloud is clean and governed. This will in turn ensure that expected returns on the investment in Cloud Computing will be realized.

    Oracle MDM has proven its mettle in this area and has the customers to back that up. In fact, I will be hosting a webcast on Tuesday, April 10th at 10 am PT with one of our top Cloud customers, the Church Pension Group. They have moved all mainline applications to a hosted model and use Oracle MDM to ensure the master data is managed and cleansed before it is propagated to other cloud and internal systems. I invite you to join Martin Hossfeld, VP, IT Operations, and Danette Patterson, Enterprise Data Manager as they review business drivers for MDM and hosted applications, how they did it, the benefits achieved, and lessons learned. You can register for this free webcast here.
Hope to see you there.

    Read the article

  • Random Map Generation in Java

    - by Thomas Owers
    I'm starting/started a 2D tilemap RPG game in Java and I want to implement random map generation. I have a list of different tiles (dirt/sand/stone/grass/gravel etc.) along with water tiles and path tiles. The problem I have is that I have no idea where to start on generating a map randomly. It would need to have terrain sections (like a part of it will be sand, part dirt etc.), similar to how Minecraft is where you have different biomes and they seamlessly transform into each other. Lastly I would also need to add random paths into this as well going in different directions all over the map. I'm not asking anyone to write me all the code or anything, just point me in the right direction, please. tl;dr - Generate a tile map with biomes, paths and make sure the biomes seamlessly go into each other. Any help would be much appreciated! :) Thank you.
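    Since the question only asks for a direction: a common starting point is a smooth 2D noise function (Perlin or simplex, or the simple interpolated value noise below), sampled once for elevation and once for moisture, then thresholded into biomes; paths can be layered on afterwards, for example as random walks or A* routes between points of interest. The following is a deliberately small, self-contained Java sketch of the noise-plus-threshold idea; the lattice spacing, thresholds and biome characters are arbitrary:

    import java.util.Random;

    public class MapGen {
        static final int W = 60, H = 24, CELL = 8;   // map size and noise lattice spacing
        static final long SEED = 42;

        // Deterministic pseudo-random value in [0,1) for a lattice point.
        static double lattice(int gx, int gy, long seed) {
            Random r = new Random(seed * 31 + gx * 73856093L + gy * 19349663L);
            return r.nextDouble();
        }

        // Value noise: bilinear interpolation (with smoothstep) between lattice values.
        static double noise(double x, double y, long seed) {
            int x0 = (int) Math.floor(x / CELL), y0 = (int) Math.floor(y / CELL);
            double fx = x / CELL - x0, fy = y / CELL - y0;
            double sx = fx * fx * (3 - 2 * fx), sy = fy * fy * (3 - 2 * fy);
            double top = lerp(lattice(x0, y0, seed), lattice(x0 + 1, y0, seed), sx);
            double bot = lerp(lattice(x0, y0 + 1, seed), lattice(x0 + 1, y0 + 1, seed), sx);
            return lerp(top, bot, sy);
        }

        static double lerp(double a, double b, double t) { return a + (b - a) * t; }

        // Threshold two independent noise fields (elevation, moisture) into biome tiles.
        static char biome(double elevation, double moisture) {
            if (elevation < 0.35) return '~';            // water
            if (elevation > 0.75) return '^';            // stone / mountains
            if (moisture < 0.35) return '.';             // sand
            if (moisture < 0.65) return ',';             // dirt
            return '"';                                  // grass
        }

        public static void main(String[] args) {
            for (int y = 0; y < H; y++) {
                StringBuilder row = new StringBuilder();
                for (int x = 0; x < W; x++) {
                    double e = noise(x, y, SEED);        // elevation field
                    double m = noise(x, y, SEED + 1);    // independent moisture field
                    row.append(biome(e, m));
                }
                System.out.println(row);
            }
        }
    }

    Because the noise varies smoothly, neighbouring tiles get similar values and the thresholded biomes blend into one another rather than changing tile by tile.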

    Read the article

  • Can anyone explain to me what problem Core Data solves?

    - by Curtis Sumpter
    Core Data seems to add a needless layer of complexity. If you want to save data created natively by the user in an app why not just use an object and then write the data all to SQLite or back to a server using a RESTful script if necessary. Android doesn't have Core Data (though if it has something similar I haven't seen it.). What the heck is the point of buggy CD except useless needless overhead for people who can't write SQL or CGI scripts?

    Read the article

  • Random map generation

    - by Thomas Owers
    I'm starting/started a 2D tilemap RPG game in Java and I want to implement random map generation. I have a list of different tiles (dirt/sand/stone/grass/gravel etc.) along with water tiles and path tiles. The problem I have is that I have no idea where to start on generating a map randomly. It would need to have terrain sections (like a part of it will be sand, part dirt, etc.), similar to how Minecraft is where you have different biomes and they seamlessly transform into each other. Lastly I would also need to add random paths into this as well going in different directions all over the map. I'm not asking anyone to write me all the code or anything, just point me in the right direction please. tl;dr - Generate a tile map with biomes, paths and make sure the biomes seamlessly go into each other.

    Read the article

  • Island Generation Library

    - by thatguy
    Can anyone recommend a tile map generator (written in Java is a plus), where one can control some land types? For example: islands, large continents, single large continent, archipelago, etc. I've been reading through many posts on the subject, and it almost seems like many are just rolling their own. Before creating my own, I'm wondering if there's already an open source implementation that I might not be finding. If not, it seems like using Perlin Noise is a popular choice. Some articles I've been reading: http://simblob.blogspot.com/2010/01/simple-map-generation.html Generate islands/continents with simplex noise https://sites.google.com/site/minecraftlandgenerator/
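    If rolling your own turns out to be necessary, the classic trick for coercing island or archipelago shapes out of a noise function is to multiply the elevation by a radial falloff so that land can only survive away from the map edges. A rough Java sketch of just that masking step (the elevation field here is a crude smoothed-random stand-in; any Perlin/simplex generator could replace it, and the sea level constant is arbitrary):

    public class IslandMask {
        // Given a noise-based elevation field in [0,1], push values down toward the
        // map edges so land can only survive near the centre, then threshold it.
        static char[][] toIsland(double[][] elevation, double seaLevel) {
            int h = elevation.length, w = elevation[0].length;
            char[][] tiles = new char[h][w];
            for (int y = 0; y < h; y++) {
                for (int x = 0; x < w; x++) {
                    // Normalised distance from the map centre: 0 at centre, ~1 at corners.
                    double nx = 2.0 * x / (w - 1) - 1.0;
                    double ny = 2.0 * y / (h - 1) - 1.0;
                    double d = Math.sqrt(nx * nx + ny * ny) / Math.sqrt(2);
                    // Radial falloff: full elevation at the centre, none at the edge.
                    double shaped = elevation[y][x] * (1.0 - d * d);
                    tiles[y][x] = shaped > seaLevel ? '#' : '~';
                }
            }
            return tiles;
        }

        public static void main(String[] args) {
            int w = 48, h = 20;
            java.util.Random rng = new java.util.Random(7);
            // Stand-in for real noise: a random field smoothed by a simple box blur.
            double[][] raw = new double[h][w];
            for (int y = 0; y < h; y++)
                for (int x = 0; x < w; x++)
                    raw[y][x] = rng.nextDouble();
            double[][] elevation = new double[h][w];
            for (int y = 0; y < h; y++)
                for (int x = 0; x < w; x++) {
                    double sum = 0; int n = 0;
                    for (int dy = -2; dy <= 2; dy++)
                        for (int dx = -2; dx <= 2; dx++) {
                            int yy = y + dy, xx = x + dx;
                            if (yy >= 0 && yy < h && xx >= 0 && xx < w) { sum += raw[yy][xx]; n++; }
                        }
                    elevation[y][x] = sum / n;
                }
            for (char[] row : toIsland(elevation, 0.42)) System.out.println(new String(row));
        }
    }

    Varying the falloff curve (or using several falloff centres) is one way to steer the result toward a single continent versus an archipelago.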

    Read the article

  • Difference between "Data Binding", "Data Hiding", "Data Wrapping" and "Encapsulation"?

    - by krishna Chandra
    I have been studying the concepts of object oriented programming, but I am still not able to distinguish between the following concepts: a) Data Binding, b) Data Hiding, c) Data Wrapping, d) Encapsulation, e) Data Abstraction. I have gone through a lot of books, and I have also searched for the differences on Google, but I am still not able to tell these apart. Could anyone please help me?
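    Textbooks use these terms inconsistently, but a tiny example helps separate the core ideas: encapsulation is bundling data and the methods that operate on it into one unit (which some books loosely call data binding or data wrapping), data hiding is restricting direct access to that data, and data abstraction is exposing only what a caller needs while hiding how it is done. A small illustrative sketch in Java:

    // Data abstraction: callers see only *what* an account can do, not *how*.
    interface Account {
        void deposit(double amount);
        double balance();
    }

    // Encapsulation: the balance and the operations on it live together in one class.
    class BankAccount implements Account {
        // Data hiding: the field is private, so it cannot be modified except
        // through methods that enforce the rules (no negative deposits, etc.).
        private double balance = 0;

        @Override
        public void deposit(double amount) {
            if (amount <= 0) throw new IllegalArgumentException("deposit must be positive");
            balance += amount;
        }

        @Override
        public double balance() { return balance; }
    }

    public class Demo {
        public static void main(String[] args) {
            Account account = new BankAccount();
            account.deposit(100.0);
            System.out.println(account.balance());   // 100.0
            // account.balance = -50;  // would not compile: the field is hidden
        }
    }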

    Read the article

  • HTML5 for APEX Developers: Next-Generation Applications

    - by Carsten Czarski
    HTML5 is, not without reason, one of the most discussed topics in application development: it opens up entirely new possibilities for designing web user interfaces. For example, HTML5 makes it possible to access the GPS of a mobile device, but that is far from all: with SVG and CANVAS objects it becomes possible to draw freely on the browser surface, and the FileReader API allows files selected by the user to be read before they are even uploaded. These capabilities make it possible to develop completely new applications: applications of the next generation. In the webinar on November 8, some of the possibilities of HTML5 were presented, along with how to use them in APEX applications. The slide decks and APEX sample applications are now available: https://apex.oracle.com/folien Keyword: apex-html5

    Read the article

  • Recover harddrive data

    - by gameshints
    I have a Dell laptop that recently "died" (it would get the blue screen of death upon starting) and the hard drive would make weird cyclic clicking noises. I wanted to see if I could use some tools on my Linux machine to recover the data, so I plugged it in there. If I run "fdisk" I get:
    Disk /dev/sdb: 20.0 GB, 20003880960 bytes
    64 heads, 32 sectors/track, 19077 cylinders
    Units = cylinders of 2048 * 512 = 1048576 bytes
    Disk identifier: 0x64651a0a
    Disk /dev/sdb doesn't contain a valid partition table
    Fine, the partition table is messed up. However, if I run "testdisk" in an attempt to fix the table, it freezes at this point, making the same cyclical clicking noises:
    Disk /dev/sdb - 20 GB / 18 GiB - CHS 19078 64 32
    Analyse cylinder 158/19077: 00%
    I don't really care about the hard drive working again, just the data, so I ran "gpart" to figure out where the partitions used to be. I got this:
    dev(/dev/sdb) mss(512) chs(19077/64/32)(LBA) #s(39069696) size(19077mb)
    * Warning: strange partition table magic 0x2A55.
    Primary partition(1)
    type: 222(0xDE)(UNKNOWN)
    size: 15mb #s(31429) s(63-31491)
    chs: (0/1/1)-(3/126/63)d (0/1/32)-(15/24/4)r
    hex: 00 01 01 00 DE 7E 3F 03 3F 00 00 00 C5 7A 00 00
    Primary partition(2)
    type: 007(0x07)(OS/2 HPFS, NTFS, QNX or Advanced UNIX) (BOOT)
    size: 19021mb #s(38956987) s(31492-38988478)
    chs: (4/0/1)-(895/126/63)d (15/24/5)-(19037/21/31)r
    hex: 80 00 01 04 07 7E FF 7F 04 7B 00 00 BB 6F 52 02
    So I tried to mount just the old NTFS partition, but got an error:
    sudo mount -o loop,ro,offset=16123904 -t ntfs /dev/sdb /mnt/usb
    NTFS signature is missing.
    Ugh. Okay. But then I tried to get a raw data dump by running:
    dd if=/dev/sdb of=/home/erik/brokenhd skip=31492 count=38956987
    But the file only got up to 59885568 bytes, and the drive made the same cyclical clicking noises. Obviously there is a bad sector, but I don't know what to do about it! The data is still there... if I view that 57MB file in textpad... I can see raw data from files. How can I get my data back? Thanks for any suggestions.
    Solution: I was able to recover about 90% of my data:
    - Froze the hard drive in the freezer
    - Used ddrescue to make a copy of the drive
    - Since ddrescue wasn't able to get enough of my drive to use testdisk to recover my partitions/file system, I ended up using photorec to recover most of my files

    Read the article

  • Generation 4 Modular Data Center

    - by kaleidoscope
    Microsoft launched its Generation 4 Modular Data Center design at PDC 09: a 20-foot container built on a container-based model. Microsoft says the use of server-packed containers – known as Pre-Assembled Components (PACs) – will allow it to slash the cost of building its new data centers. The unit is completely optimized for outdoor use, with a design that relies upon fresh air ("free cooling") rather than air conditioning. Its exterior is designed to draw fresh air into the cold aisle and expel hot air from the rear of the hot aisle. More details can be found at: http://www.datacenterknowledge.com/archives/2009/11/18/microsofts-windows-azure-cloud-container/ Rituraj, J

    Read the article

  • Random/Procedural vs. Previously Made Level Generation

    - by PythonInProgress
    I am making a game (called "Glory") that is a top-down explorer game, and am wondering what the advantages/disadvantages of using random/procedural generation vs. pre-made levels are. There seem to be few that I can think of, other than the fact that items may be a problem to distribute in randomly generated terrain, and that the generated terrain may look weird. The downside to previously made levels is that I would need to make a level editor, though. I cannot decide which is better to use.

    Read the article

  • Product Value Chain Management: How Oracle is Taking the Lead on Next Generation Enterprise PLM

    To manage growing product complexity and innovation challenges, Product Lifecycle Management solutions have become staple IT investments over the years. But as product information continues to span more and more functions inside the company and out, we've seen many customer PLM implementations adapt and expand to serve new needs in a fully connected world. We call this next level of PLM the Product Value Chain, an integrated business model that offers powerful new strategies for executives to collectively leverage PLM and other industry-leading Oracle applications to achieve further incremental value. In this Appcast, hear Terri Hiskey, Director PLM Product Marketing, and John Kelley, VP PLM Product Strategy, discuss Oracle's vision for next generation enterprise PLM: the Product Value Chain.

    Read the article

  • Drivers are not detecting on my Dell inspiron 5420 (14R 3rd generation)

    - by Ranjan Mallik
    I'm using a Dell Inspiron 5420 (14R 3rd generation), and I have tried to install Ubuntu 10.10, 11.10, 12.04 and 12.04.1, but every time it fails to support my drivers, such as the network card, wireless, video, audio and touchpad drivers. Audio, video and the touchpad work with their basic functionality, but not with their full functionality. I'm a new user of Ubuntu, and willing to use it permanently. I tried some solutions from the web but couldn't get out of this problem. So I'm turning to you: if you can give me a proper solution for getting out of this problem, I'll be very grateful to you all. Thanks

    Read the article

  • Reporting Solution in PHP / CodeIgniter - Server side logic vs client side

    - by dot
    I'm building a report for an end user. They would like to see a list of all widgets... but they would also like to see widgets with missing attributes, like missing names or missing size. So I was thinking of creating one method that returns json data containing all widgets, and then using javascript to let them filter the data for missing attributes, instead of re-querying the database. Ultimately, they need to be able to save all "reports" (filtered versions of the data) inside a csv file. These are the two options I'm mulling over:
    Design 1: Create 3 separate methods in my controller/model, like get_all_data(), get_records_with_missing_names() and get_records_with_missing_size(). When these methods are called, I would display the data on screen and give them a button to save it to a csv file.
    Design 2: Create one method called get_all_data() and then somehow give them tools in the view to filter the json data using tables etc., and then let them save subsets of the data.
    The reality is, in order to display all the data, I still need to massage it, and therefore I know which records are missing attributes. So I'd rather not create separate methods for each filter. I'm not sure how I would do that just yet, but at this point I would like to know some pros/cons of each method. Thanks.
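    Whichever design wins, each "report" is really just a predicate applied to the same result set, which is why a single get_all_data() plus reusable filters is usually enough. A rough sketch of that idea (in Java rather than PHP or JavaScript, with the Widget fields and the sample data invented) showing how the filters and the CSV export can share one data-fetching path:

    import java.util.List;
    import java.util.function.Predicate;
    import java.util.stream.Collectors;

    public class WidgetReport {
        record Widget(String name, Integer size) {}

        // One source of truth: fetch everything once (stub data here).
        static List<Widget> getAllData() {
            return List.of(new Widget("bolt", 5), new Widget(null, 3), new Widget("nut", null));
        }

        // Each "report" is just a named filter over the same data.
        static final Predicate<Widget> MISSING_NAME = w -> w.name() == null || w.name().isBlank();
        static final Predicate<Widget> MISSING_SIZE = w -> w.size() == null;

        // The CSV export is shared by every report, filtered or not.
        static String toCsv(List<Widget> widgets) {
            return widgets.stream()
                    .map(w -> (w.name() == null ? "" : w.name()) + "," + (w.size() == null ? "" : w.size()))
                    .collect(Collectors.joining("\n", "name,size\n", "\n"));
        }

        public static void main(String[] args) {
            List<Widget> all = getAllData();
            List<Widget> missingNames = all.stream().filter(MISSING_NAME).collect(Collectors.toList());
            System.out.print(toCsv(all));            // the "all widgets" report
            System.out.print(toCsv(missingNames));   // the "missing names" report
        }
    }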

    Read the article
