Search Results

Search found 60488 results on 2420 pages for 'saving data'.

  • The Oracle MDM Portfolio & Strategy Session - It All Comes Down to Master Data

    - by Mala Narasimharajan
    By Narayana Machiraju We are now less than a week from the start of Oracle Open World 2012, and I would like to introduce you all to one of the most awaited MDM strategy sessions this year, titled “What’s there to Know about Oracle’s Master Data Management Portfolio and Roadmap?”. Manouj Tahiliani, Senior Director of MDM Product Strategy, provides a complete picture of the Oracle MDM portfolio, the product releases, the strategy and the roadmaps. Manouj will be discussing Oracle Fusion MDM applications, the first enterprise-grade SaaS MDM product suite. You’ll hear strategies for leveraging MDM and data quality in the enterprise, and how you can derive business value by deploying an MDM foundation for strategic initiatives such as customer experience management, product innovation, and financial transformation. And as a bonus, he is also going to discuss the confluence of MDM with emerging technologies such as big data, social, and mobile. The session is co-presented by GEHC and Westpac. Tony Craddock from Westpac is going to share insights from their MDM implementation along the lines of business drivers, data governance, ROI and other important implementation considerations. A representative from GEHC is going to talk about their MDM journey and the multi-domain MDM story. I strongly recommend you do not miss this important session. The MDM track at Oracle Open World covers a variety of topics related to MDM. In addition to the product management team presenting product updates and roadmap, we have several customer panels, conference sessions and customer round-table sessions featuring a lot of marquee customers. You can see an overview of MDM sessions here.

    Read the article

  • Sorting a Grid of Data in ASP.NET MVC

    Last week's article, Displaying a Grid of Data in ASP.NET MVC, showed, step by step, how to display a grid of data in an ASP.NET MVC application. It started with creating a new ASP.NET MVC application in Visual Studio, then added the Northwind database to the project and showed how to use Microsoft's Linq-to-SQL tool to access data from the database. The article then looked at creating a Controller and View for displaying a list of product information (the Model). This article builds on the demo application created in Displaying a Grid of Data in ASP.NET MVC, enhancing the grid to include bi-directional sorting. If you come from an ASP.NET WebForms background, you know that the GridView control makes implementing sorting as easy as ticking a checkbox. Implementing sorting in ASP.NET MVC involves a bit more work than that, but the quantity of work isn't significantly greater, and with ASP.NET MVC we have more control over the grid and the sorting interface's layout and markup, as well as over the mechanism through which sorting is implemented. With the GridView control, sorting is handled through form postbacks, with the sorting parameters - what column to sort by and whether to sort in ascending or descending order - being submitted as hidden form fields. In this article we'll use querystring parameters to indicate the sorting parameters, which means a particular sort order can be indexed by search engines, bookmarked, emailed to a colleague, and so on - things that are not possible with the GridView's built-in sorting capabilities. As with its predecessor, this article offers step-by-step instructions and includes a complete, working demo, available for download at the end of the article. Read on to learn more!
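
    As a rough sketch of the querystring approach described above (the model, field and context names here are illustrative, not the article's actual download code), the controller action might read the sort parameters like this:

        // GET /Products?sortBy=UnitPrice&ascending=false -- the sort order lives in the
        // URL, so it can be bookmarked, emailed, and indexed by search engines.
        public ActionResult Index(string sortBy, bool? ascending)
        {
            bool asc = ascending ?? true;
            IQueryable<Product> products = northwind.Products;

            switch (sortBy)
            {
                case "UnitPrice":
                    products = asc ? products.OrderBy(p => p.UnitPrice)
                                   : products.OrderByDescending(p => p.UnitPrice);
                    break;
                default: // fall back to a stable default sort column
                    products = asc ? products.OrderBy(p => p.ProductName)
                                   : products.OrderByDescending(p => p.ProductName);
                    break;
            }

            return View(products.ToList());
        }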

    Read the article

  • SQLAuthority News – Download Whitepaper – Power View Infrastructure Configuration and Installation: Step-by-Step and Scripts

    - by pinaldave
    Power View, a feature of the SQL Server 2012 Reporting Services Add-in for Microsoft SharePoint Server 2010 Enterprise Edition, is an interactive data exploration, visualization, and presentation experience. It provides intuitive ad-hoc reporting for business users such as data analysts, business decision makers, and information workers. Microsoft has recently released a very interesting whitepaper which covers a sample scenario that validates the connectivity of Power View reports to both PowerPivot workbooks and tabular models. This white paper covers the following important concepts about Power View:

    - Understanding the hardware and software requirements and their download locations
    - Installing and configuring the required infrastructure when Power View and its data models are on the same computer and when they are on different computers
    - Installing and configuring a computer used for client access to Power View reports, models, SharePoint 2010 and Power View in a workgroup
    - Configuring single sign-on access for double-hop scenarios with and without Kerberos

    You can download the whitepaper from here. This whitepaper covers many interesting scenarios. It would be really interesting to know if you are using Power View in your production environment; if yes, would you please share your experience here? Reference: Pinal Dave (http://blog.SQLAuthority.com)

    Read the article

  • Data Pump: Consistent Export?

    - by Mike Dietrich
    Ouch ... I have to admit: as I said in several workshops in the past weeks, a Data Pump export with expdp is per se consistent. Well ... I thought it was ... but it's not. Thanks go to a customer who is doing a large Unicode migration at the moment. We were discussing parameters in expdp's par file, and I asked my colleagues after doing some research on MOS. Here are the results of my "research": MOS Note 377218.1 has a nice example showing a Data Pump export of a partitioned table, with DELETEs running against that table, turning out inconsistent. Background: back in the old 9i days when Data Pump was designed, flashback technology wasn't as popular and well known as it is today - and UNDO usage was the major concern, as a consistent-by-default export would have relied heavily on UNDO. That's why - similar to good ol' exp - the export won't operate in consistency mode by default. To get a consistent Data Pump export with expdp you'll have to set FLASHBACK_TIME=SYSTIMESTAMP in your parameter file. Then it will be consistent as of the timestamp at which the process was started. You could use FLASHBACK_SCN instead, and determine the SCN beforehand, if you'd like to be exact. So sorry if I had proclaimed a feature which unfortunately is not there by default - Mike
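
    A minimal parameter-file sketch of the fix described above - only the FLASHBACK_TIME line comes from the post; directory, dump file and schema names are made up for illustration:

        # consistent.par -- run with: expdp system PARFILE=consistent.par
        DIRECTORY=dpump_dir
        DUMPFILE=consistent_export.dmp
        SCHEMAS=scott
        # consistent as of the moment the export starts
        FLASHBACK_TIME=SYSTIMESTAMP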

    Read the article

  • Hack a Linksys Router into an Ambient Data Monitor

    - by Jason Fitzpatrick
    If you have a data source (like a weather report, bus schedule, or other changing data set) you can pull it and display it with an ambient data monitor; this fun build combines a hacked Linksys router and a modified toy bus to display transit arrival times. John Graham-Cumming wanted to keep an eye on the current bus arrival times without constantly visiting the web site to check them. His workaround turns a hacked Linksys router, a display, a modified London city bus (you could hack apart a more project-specific enclosure, of course), and a simple bit of code that polls the bus schedule's API into a cool ambient data monitor that displays the arrival time, in minutes, of the next two buses that will pass by his stop. The whole thing could easily be adapted to another API to display anything from stock prices to weather temps. Hit up the link below for more information on the project. Ambient Bus Arrival Monitor Hacked from Linksys Router [via Make]

    Read the article

  • Updating an ADF Web Service Data Control When Service Structure or Location Change

    - by Shay Shmeltzer
    The web service data control in Oracle ADF gives you a simplified approach to consuming services in ADF applications, and now with ADF Mobile the usage of this data control seems to be growing. A frequent question we get is: what happens if the service that I'm consuming changes - how do I update my data control? Well, first we should mention that if you design your application well before you actually code, then things like web service method signatures shouldn't change. The signature is the contract between the publisher and the consumer, and contracts shouldn't be broken. But in reality things do change during development stages, so here is how you can update both method signatures and the service location with the web service data control: After watching this video you might be tempted not to copy the WSDL to your project - which is what lets you use the right-click update on a data control. However, there is a reason why the copy is on by default: it reduces network traffic when you are actually running your application, since ADF doesn't need to go to the server to find out the service structure. So for runtime performance, you should probably keep the WSDL local. I encourage you to look further into both the connections.xml file, where your service location is saved, and the datacontrols.dcx file, where its definition is kept, to get an even deeper understanding of how ADF works underneath the declarative layers.

    Read the article

  • Flashback Data Archives: A Good Memory for DBAs and Developers

    - by Heinz-Wilhelm Fabry (DBA Community)
    Data is stored, and some of it is retained for a long time. Sometimes data is changed after it is first stored, perhaps even several times. Depending on legal or company requirements, those changes may even have to be traceable. That in turn calls for mechanisms which ensure that the sequence of versions is gapless, and it implicitly means that the versions must also be protected against deletion and modification. Versioning can be done by the application that captures the data in the first place, by triggers, or by special tools; each of these solutions has its own weaknesses. On top of that, there is the open question of how to protect versioned data against unauthorized deletion or modification. Flashback Data Archives settle this question: they not only provide an effective mechanism for versioning rows, they also protect those versions against change, and finally they even delete them automatically once their retention period has expired. Originally the archives were introduced as a separately licensed option for the Enterprise Edition of Oracle Database 11g under the name Total Recall. At the end of June 2012 Flashback Data Archives lost their status as a standalone option. Because the archives had always been compressed, Oracle instead made them a feature of the Advanced Compression Option (ACO) of the Enterprise Edition. Since version 11.2.0.4 of the database, however, compression is no longer mandatory for the archives but optional. With that, the licensing situation changed once more: whoever uses compression still has to license ACO, while whoever uses Flashback Data Archives without compression - developers, for example - gets them out of the box in all editions of the database from 11.2.0.4 onwards. This change is documented in the licensing guides for database versions 11.2 and 12.1. Flashback Data Archives have already been covered in the DBA Community; this article replaces all previous articles on the topic.
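
    A minimal SQL sketch of the mechanism described above (archive name, tablespace, quota and retention are illustrative, not from the article):

        -- create an archive with a retention period; expired versions age out automatically
        CREATE FLASHBACK ARCHIVE fda_seven_years
            TABLESPACE fda_ts
            QUOTA 10G
            RETENTION 7 YEAR;

        -- attach a table: from now on its row versions are kept and protected
        ALTER TABLE orders FLASHBACK ARCHIVE fda_seven_years;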

    Read the article

  • Data indexing frameworks fit for large E-Commerce applications

    - by Dabu
    We wrote and still maintain a large e-commerce application. Our feature list resembles what you would expect from most shops. We'd like to improve some of our features, and now the search/suggestion list functionality (enter some letters, a JavaScript-driven suggestion list appears) has caught our eye. Currently we use http://xapian.org/. It has some drawbacks. Firstly, it's not actually the right solution: it was created to index documents, not ever-changing data at the granularity an e-commerce application needs. Secondly, the load on the database is significant when we reindex all data every night. We'd like a framework that has been designed for indexing database data, which can add to the index easily and without much load, and which can push data changes from the back office to the frontend quickly, without much load or delay. I'm aware that Xapian is open source and even free software, so we could adapt it to our needs if we decided to invest the time and manpower. But taking a quick look around for a better-suited solution seems fair, right? Oh, and commercial applications are fine too. FOSS is not required. Thanks a bunch.

    Read the article

  • Compressing 2D level data

    - by Lucius
    So, I'm developing a 2D, tile-based game and a map maker thingy - all in Java. The problem is that recently I've been having some memory issues when about 4 maps are loaded. Each of these maps is composed of 128x128 tiles and has 4 layers (for details and stuff). I already spent a good amount of time searching for solutions, and the best thing I found was run-length encoding (RLE). It seems easy enough to use with static data, but is there a way to use it with data that is constantly changing, without a big drop in performance? In my maps, supposing I'm compressing the rows, I would have 128 rows, each with some amount of data (hopefully less than it would be without RLE). Whenever I change a tile, that whole row would have to be checked, and I'm afraid that would slow down production too much (and I'm on a somewhat tight schedule). Well, worst case scenario, I work on each map individually and save them using RLE, but it would be really nice if I could avoid that. EDIT: What I'm currently using to store the data for the tiles is a 2D array of HashMaps that use the layer as key and store the id of the tile in that position - like this: private HashMap<Integer, Integer>[][]
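
    A sketch of what row-wise RLE could look like here (illustrative code, not the asker's; each run is stored as a (count, tileId) pair). Note that editing a tile only requires re-encoding that tile's own 128-entry row, which is cheap even if it happens on every edit:

        import java.util.ArrayList;
        import java.util.List;

        static List<int[]> encodeRow(int[] row) {
            List<int[]> runs = new ArrayList<int[]>();
            int count = 1;
            for (int i = 1; i <= row.length; i++) {
                // close the current run when the value changes or the row ends
                if (i == row.length || row[i] != row[i - 1]) {
                    runs.add(new int[] { count, row[i - 1] });
                    count = 1;
                } else {
                    count++;
                }
            }
            return runs;
        }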

    Read the article

  • Non use of persisted data – Part deux

    - by Dave Ballantyne
    In my last blog I showed how persisted data may not be used if you have used the base data in an include on an index. That wasn't the only problem I've had that showed the same symptom. Using the same code as before, I was executing something similar to the below:

        select BillToAddressID, SOD.SalesOrderDetailID, SOH.CleanedGuid
        from sales.salesorderheader SOH
        join Sales.SalesOrderDetail SOD on SOH.SalesOrderID = SOD.SalesOrderID

    But, due to a distribution error in statistics, I found it necessary to use a table hint. In this case, I wanted to force a loop join:

        select BillToAddressID, SOD.SalesOrderDetailID, SOH.CleanedGuid
        from sales.salesorderheader SOH
        inner loop join Sales.SalesOrderDetail SOD on SOH.SalesOrderID = SOD.SalesOrderID

    But, being the diligent TSQL developer that I am, looking at the execution plan I noticed that the 'compute scalar' operator was again calling the function. Again, Profiler is a more graphic way to view this. All very odd: just because I've forced a join - which has NOTHING to do with my persisted data - something is causing the data to be re-evaluated. Not sure if there is any easy fix you can do to the TSQL here, but again it's a lesson learned (or rather reinforced): examine the execution plan of every query you write to ensure that it is operating as you thought it would.
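
    For context, the persisted column used in these queries isn't defined in this excerpt; a hypothetical reconstruction of the setup (the UDF name is an assumption) would be along these lines:

        -- a computed column wrapping a scalar UDF, marked PERSISTED so its value is
        -- stored rather than recomputed on every read (the function must be
        -- deterministic and schema-bound for PERSISTED to be allowed)
        ALTER TABLE Sales.SalesOrderHeader
            ADD CleanedGuid AS dbo.udfCleanGuid(rowguid) PERSISTED;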

    Read the article

  • Displaying a Grid of Data in ASP.NET MVC

    One of the most common tasks we face as web developers is displaying data in a grid. In its simplest incarnation, a grid merely displays information about a set of records - the orders placed by a particular customer, perhaps; however, most grids offer features like sorting, paging, and filtering to present the data in a more useful and readable manner. In ASP.NET WebForms the GridView control offers a quick and easy way to display a set of records in a grid, and offers features like sorting, paging, editing, and deleting with just a little extra work. On page load, the GridView automatically renders as an HTML <table> element, freeing you from having to write any markup and letting you focus instead on retrieving and binding the data to display to the GridView. In an ASP.NET MVC application, however, developers are on the hook for generating the markup rendered by each view. This task can be a bit daunting for developers new to ASP.NET MVC, especially those who have a background in WebForms. This is the first in a series of articles that explore how to display grids in an ASP.NET MVC application. This installment starts with a walk-through of creating the ASP.NET MVC application and data access code used throughout the series. Next, it shows how to display a set of records in a simple grid. Future installments examine how to create richer grids that include sorting, paging, filtering, and client-side enhancements. We'll also look at pre-built grid solutions, like the Grid component in the MvcContrib project and JavaScript-based grids like jqGrid. But first things first - let's create an ASP.NET MVC application and see how to display database records in a web page. Read on to learn more!
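
    As a taste of where the walk-through ends up (a sketch with illustrative names - the article builds its own Linq-to-SQL context), the Controller/View split looks like this:

        // The Controller hands the Model to the View; the View then renders the
        // <table> markup itself by looping over the list of products.
        public class ProductsController : Controller
        {
            private NorthwindDataContext northwind = new NorthwindDataContext();

            public ActionResult Index()
            {
                List<Product> model = northwind.Products.ToList();
                return View(model);
            }
        }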

    Read the article

  • Generate a Word document from list data

    - by PeterBrunone
    This came up on a discussion list lately, so I threw together some code to meet the need. In short, a colleague needed to take the results of an InfoPath form survey and give them to the user in Word format. The form data was already in a list item, so it was a simple matter of using the SharePoint API to get the list item, formatting the data appropriately, and using response headers to make the client machine treat the response as MS Word content. The following rudimentary code can be run in an ASPX (or an assembly) in the 12 hive. When you link to the page, send the list name and item ID in the querystring and use them to grab the appropriate data.

        // Clear the current response headers and set them up to look like a Word doc.
        HttpContext.Current.Response.Clear();
        HttpContext.Current.Response.Charset = "";
        HttpContext.Current.Response.ContentType = "application/msword";
        string strFileName = "ThatWordFileYouWanted" + ".doc";
        HttpContext.Current.Response.AddHeader("Content-Disposition", "inline;filename=" + strFileName);

        // Using the current site, get the List by name and then the Item by ID (from the URL).
        string myListName = HttpContext.Current.Request.QueryString["listName"];
        int myID = Convert.ToInt32(HttpContext.Current.Request.QueryString["itemID"]);
        SPSite oSite = SPContext.Current.Site;
        SPWeb oWeb = oSite.OpenWeb();
        SPList oList = oWeb.Lists[myListName];
        SPListItem oListItem = oList.Items.GetItemById(myID);

        // Build a string with the data -- format it with HTML if you like.
        StringBuilder strHTMLContent = new StringBuilder();
        // Here's where you pull individual fields out of the list item.

        // Once everything is ready, spit it out to the client machine.
        HttpContext.Current.Response.Write(strHTMLContent);
        HttpContext.Current.Response.Flush();
        HttpContext.Current.Response.End();

    Read the article

  • PASS Summit Preconference and Sessions

    - by Davide Mauri
    I’m very pleased to announce that I’ll be delivering a Pre-Conference at PASS Summit 2012. I’ll speak about Business Intelligence again (as I did in 2010), but this time I’ll focus only on the Data Warehouse, since it’s a big topic on its own. I’ll discuss not only what a Data Warehouse is and how it can be modeled and built, but also how its development can be approached using an Agile approach, bringing in the experience I have gathered in this field.

    Building the Agile Data Warehouse with SQL Server 2012
    http://www.sqlpass.org/summit/2012/Sessions/SessionDetails.aspx?sid=2821

    I’m sure you’ll like it, especially if you’re starting to create a BI solution and you’re wondering what a Data Warehouse is, whether it is still useful nowadays when everyone talks about Self-Service BI and in-memory databases, and what’s the correct path to follow in order to have a successful project up and running. Besides this Preconference, I’ll also deliver a regular session, this time related to database administration, monitoring and tuning:

    DMVs: Power in Your Hands
    http://www.sqlpass.org/summit/2012/Sessions/SessionDetails.aspx?sid=3204

    Here we’ll dive into the most useful DMVs, so that you’ll see how they can help in everyday management in order to discover, understand and optimize your SQL Server installation, from the server itself down to the single query. See you there!

    Read the article

  • Keeping a domain model consistent with actual data

    - by fstuijt
    Recently domain-driven design got my attention, and while thinking about how this approach could help us I came across the following problem. In DDD the common approach is to retrieve entities (or better, aggregate roots) from a repository which acts as an in-memory collection of these entities. After these entities have been retrieved, they can be updated or deleted by the user; however, after retrieval they are essentially disconnected from the data source, and one must actively inform the repository to update the data source and make it consistent again with our in-memory representation. What is the DDD approach to retrieving entities that should remain connected to the data source? For example, in our situation we retrieve a series of sensors that have a specific measurement at retrieval time. Over time, these measurement values may change, and our business logic in the domain model should respond to these changes properly. E.g., domain events may be raised if a sensor value exceeds a predefined threshold. However, using the repository approach, these sensor values are just snapshots and are disconnected from the data source. Do any of you have an idea on how to solve this following the DDD approach?

    Read the article

  • Saving image to database as varbinary, arraylength (part 2)

    - by Thomas Schoof
    This is a follow-up to my previous question, which got solved (thank you for that), but now I am stuck on another error. I'm trying to save an image in my database (called 'Afbeelding'); for that I made a table which consists of:

        id: int
        source: varbinary(max)

    I then created a WCF service to save an 'Afbeelding' to the database:

        private static DataClassesDataContext dc = new DataClassesDataContext();

        [OperationContract]
        public void setAfbeelding(Afbeelding a)
        {
            //Afbeelding a = new Afbeelding();
            //a.id = 1;
            //a.source = new Binary(bytes);
            dc.Afbeeldings.InsertOnSubmit(a);
            dc.SubmitChanges();
        }

    I then put a reference to the service in my project, and when I press the button I try to save it to the database:

        private void btnUpload_Click(object sender, RoutedEventArgs e)
        {
            Afbeelding a = new Afbeelding();
            OpenFileDialog openFileDialog = new OpenFileDialog();
            openFileDialog.Filter = "JPEG files|*.jpg";
            if (openFileDialog.ShowDialog() == true)
            {
                //string imagePath = openFileDialog.File.Name;
                //FileStream fileStream = new FileStream(imagePath, FileMode.Open, FileAccess.Read);
                //byte[] buffer = new byte[fileStream.Length];
                //fileStream.Read(buffer, 0, (int)fileStream.Length);
                //fileStream.Close();
                Stream stream = (Stream)openFileDialog.File.OpenRead();
                Byte[] bytes = new Byte[stream.Length];
                stream.Read(bytes, 0, (int)stream.Length);
                string fileName = openFileDialog.File.Name;
                a.id = 1;
                a.source = new Binary { Bytes = bytes };
            }
            EditAfbeeldingServiceClient client = new EditAfbeeldingServiceClient();
            client.setAfbeeldingCompleted += new EventHandler<System.ComponentModel.AsyncCompletedEventArgs>(client_setAfbeeldingCompleted);
            client.setAfbeeldingAsync(a);
        }

        void client_setAfbeeldingCompleted(object sender, System.ComponentModel.AsyncCompletedEventArgs e)
        {
            if (e.Error != null)
                txtEmail.Text = e.Error.ToString();
            else
                MessageBox.Show("WIN");
        }

    However, when I do this, I get the following error:

        System.ServiceModel.FaultException: The formatter threw an exception while trying to deserialize the message: There was an error while trying to deserialize parameter :a. The InnerException message was 'There was an error deserializing the object of type OndernemersAward.Web.Afbeelding. The maximum array length quota (16384) has been exceeded while reading XML data. This quota may be increased by changing the MaxArrayLength property on the XmlDictionaryReaderQuotas object used when creating the XML reader.'. Please see InnerException for more details.
        at System.ServiceModel.Channels.ServiceChannel.HandleReply(ProxyOperationRuntime operation, ProxyRpc& rpc)
        at System.ServiceModel.Channels.ServiceChannel.EndCall(String action, Object[] outs, IAsyncResult result)
        at System.ServiceModel.ClientBase`1.ChannelBase`1.EndInvoke(String methodName, Object[] args, IAsyncResult result)
        at OndernemersAward.EditAfbeeldingServiceReference.EditAfbeeldingServiceClient.EditAfbeeldingServiceClientChannel.EndsetAfbeelding(IAsyncResult result)
        at OndernemersAward.EditAfbeeldingServiceReference.EditAfbeeldingServiceClient.OndernemersAward.EditAfbeeldingServiceReference.EditAfbeeldingService.EndsetAfbeelding(IAsyncResult result)
        at OndernemersAward.EditAfbeeldingServiceReference.EditAfbeeldingServiceClient.OnEndsetAfbeelding(IAsyncResult result)
        at System.ServiceModel.ClientBase`1.OnAsyncCallCompleted(IAsyncResult result)

    I'm not sure what's causing this, but I think it has something to do with the way I write the image to the database? (The array length is too big; I don't really know how to change it.) Thank you for your help, Thomas
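
    Since the server is the side that fails to deserialize the parameter, the reader quota usually has to be raised in the service's web.config (with matching values in the Silverlight client's ServiceReferences.ClientConfig). A sketch - the binding name and the 4 MB figures are arbitrary; only MaxArrayLength and the 16384 default come from the error text:

        <!-- inside <system.serviceModel><bindings> on the service side -->
        <basicHttpBinding>
          <binding name="LargeImageBinding" maxReceivedMessageSize="4194304">
            <!-- default maxArrayLength is 16384, the limit named in the error -->
            <readerQuotas maxArrayLength="4194304" />
          </binding>
        </basicHttpBinding>

    The service endpoint then has to reference this binding through its bindingConfiguration attribute.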

    Read the article

  • How can data not stored in a DB be accessed from any activity in Android?

    - by jul
    Hi, I'm passing data to a ListView to display some restaurant names. Now when clicking on an item I'd like to start another activity to display more restaurant data. I'm not sure about how to do it. Shall I pass all the restaurant data in a bundle through the intent object? Or shall I just pass the restaurant id and get the data in the other activity? In that case, how can I access my restaurantList from the other activity? In any case, how can I get the data of the restaurant I clicked on (the view only contains the name)? Any help, pointers welcome! Thanks, Jul

        ListView lv = (ListView) findViewById(R.id.listview);
        lv.setAdapter(new ArrayAdapter<String>(this,
                android.R.layout.simple_list_item_1,
                restaurantList.getRestaurantNames()));
        lv.setOnItemClickListener(new OnItemClickListener() {
            public void onItemClick(AdapterView<?> parent, View view, int position, long id) {
                Intent i = new Intent(Atable.this, RestaurantEdit.class);
                Bundle b = new Bundle();
                //b.putInt("id", ? );
                startActivityForResult(i, ACTIVITY_EDIT);
            }
        });

    RestaurantList.java

        package org.digitalfarm.atable;

        import java.util.ArrayList;
        import java.util.List;

        public class RestaurantList {
            private List<Restaurant> restaurants = new ArrayList<Restaurant>();

            public List<Restaurant> getRestaurants() {
                return this.restaurants;
            }

            public void setRestaurants(List<Restaurant> restaurants) {
                this.restaurants = restaurants;
            }

            public List<String> getRestaurantNames() {
                List<String> restaurantNames = new ArrayList<String>();
                for (int i = 0; i < this.restaurants.size(); i++) {
                    restaurantNames.add(this.restaurants.get(i).getName());
                }
                return restaurantNames;
            }
        }

    Restaurant.java

        package org.digitalfarm.atable;

        public class Restaurant {
            private int id;
            private String name;
            private float latitude;
            private float longitude;

            public int getId() { return this.id; }
            public void setId(int id) { this.id = id; }
            public String getName() { return this.name; }
            public void setName(String name) { this.name = name; }
            public float getLatitude() { return this.latitude; }
            public void setLatitude(float latitude) { this.latitude = latitude; }
            public float getLongitude() { return this.longitude; }
            public void setLongitude(float longitude) { this.longitude = longitude; }
        }
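
    One common pattern, sketched against the code above: pass only the id through the intent's extras and look the restaurant up on the other side (this assumes restaurantList is reachable from the second activity, e.g. via a singleton or a custom Application subclass):

        // In the click listener: 'position' follows the adapter's order, which is the
        // order of restaurantList, so it identifies the clicked restaurant.
        public void onItemClick(AdapterView<?> parent, View view, int position, long id) {
            Restaurant r = restaurantList.getRestaurants().get(position);
            Intent i = new Intent(Atable.this, RestaurantEdit.class);
            i.putExtra("id", r.getId());
            startActivityForResult(i, ACTIVITY_EDIT);
        }

        // In RestaurantEdit.onCreate(): read the id back and load the full record.
        int restaurantId = getIntent().getIntExtra("id", -1);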

    Read the article

  • What are best practices for collecting, maintaining and ensuring accuracy of a huge data set?

    - by Kyle West
    I am posing this question looking for practical advice on how to design a system. Sites like amazon.com and Pandora have and maintain huge data sets to run their core business. For example, Amazon (and every other major e-commerce site) has millions of products for sale, images of those products, pricing, specifications, and so on. Ignoring the data coming in from 3rd-party sellers and the user-generated content, all that "stuff" had to come from somewhere and is maintained by someone. It's also incredibly detailed and accurate. How? How do they do it? Is there just an army of data-entry clerks, or have they devised systems to handle the grunt work? My company is in a similar situation. We maintain a huge (tens of millions of records) catalog of automotive parts and the cars they fit. We've been at it for a while now and have come up with a number of programs and processes to keep our catalog growing and accurate; however, it seems like to grow the catalog to x items we need to grow the team to y. I need to figure out some ways to increase the efficiency of the data team, and hopefully I can learn from the work of others. Any suggestions are appreciated; even better would be links to content I could spend some serious time reading. THANKS! Kyle

    Read the article

  • Saving images only available when logged in

    - by James
    Hi, I've been having some trouble getting images to download from a website that requires you to be logged in. The images can only be viewed when you are logged in to the site, but you cannot view them directly in the browser if you copy their location into a new tab/window (it redirects to an error page - so I guess the containing folder has been .htaccess-ed). Anyway, the code I have below allows me to log in and grab the HTML content, which works well - but I cannot grab the images ... and this is where I need help!

        <?
        // curl.php
        class Curl
        {
            public $cookieJar = "";

            public function __construct($cookieJarFile = 'cookies.txt')
            {
                $this->cookieJar = $cookieJarFile;
            }

            function setup()
            {
                $header = array();
                $header[0] = "Accept: text/xml,application/xml,application/xhtml+xml,";
                $header[0] .= "text/html;q=0.9,text/plain;q=0.8,image/gif;q=0.8,image/x-bitmap;q=0.8,image/jpeg;q=0.8,image/png,*/*;q=0.5";
                $header[] = "Cache-Control: max-age=0";
                $header[] = "Connection: keep-alive";
                $header[] = "Keep-Alive: 300";
                $header[] = "Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7";
                $header[] = "Accept-Language: en-us,en;q=0.5";
                $header[] = "Pragma: "; // browsers keep this blank.

                curl_setopt($this->curl, CURLOPT_USERAGENT, 'Mozilla/5.0 (Windows; U; Windows NT 5.2; en-US; rv:1.8.1.7) Gecko/20070914 Firefox/2.0.0.7');
                curl_setopt($this->curl, CURLOPT_HTTPHEADER, $header);
                curl_setopt($this->curl, CURLOPT_COOKIEJAR, $this->cookieJar);
                curl_setopt($this->curl, CURLOPT_COOKIEFILE, $this->cookieJar);
                curl_setopt($this->curl, CURLOPT_AUTOREFERER, true);
                curl_setopt($this->curl, CURLOPT_FOLLOWLOCATION, true);
                curl_setopt($this->curl, CURLOPT_RETURNTRANSFER, true);
            }

            function get($url)
            {
                $this->curl = curl_init($url);
                $this->setup();
                return $this->request();
            }

            function getAll($reg, $str)
            {
                preg_match_all($reg, $str, $matches);
                return $matches[1];
            }

            function postForm($url, $fields, $referer = '')
            {
                $this->curl = curl_init($url);
                $this->setup();
                curl_setopt($this->curl, CURLOPT_URL, $url);
                curl_setopt($this->curl, CURLOPT_POST, 1);
                curl_setopt($this->curl, CURLOPT_REFERER, $referer);
                curl_setopt($this->curl, CURLOPT_POSTFIELDS, $fields);
                return $this->request();
            }

            function getInfo($info)
            {
                $info = ($info == 'lasturl') ? curl_getinfo($this->curl, CURLINFO_EFFECTIVE_URL) : curl_getinfo($this->curl, $info);
                return $info;
            }

            function request()
            {
                return curl_exec($this->curl);
            }
        }
        ?>

    And below is the page that uses it.

        <?
        // data.php
        include('curl.php');
        $curl = new Curl();

        $url = "http://domain.com/login.php";
        $newURL = "http://domain.com/go_here.php";

        $username = "user";
        $password = "pass";
        $fields = "user=$username&pass=$password";

        // Calling URL
        $referer = "http://domain.com/refering_page.php";

        $html = $curl->postForm($url, $fields, $referer);
        $html = $curl->get($newURL);
        echo $html;
        ?>

    I've tried putting the direct URL for the image into $newURL, but that doesn't get the image - it simply returns an error saying that folder is not available to view directly. I've tried varying the above in different ways, but I haven't been successful in getting an image, though I have managed to get a screen basically saying error 405 and/or 406 (but not the image I want). Any help would be great!

    Read the article

  • About Data Objects and DAO Design when using Hibernate

    - by X. Ma
    I'm hesitating between two designs for a database project using Hibernate.

    Design #1. (1) Create a general data provider interface, including a set of DAO interfaces and general data container classes. It hides the underlying implementation: a data provider implementation could access data in a database, an XML file, a service, or something else, and the user of a data provider does not need to know about it. (2) Create a database library with Hibernate. This library implements the data provider interface in (1). The bad thing about Design #1 is that in order to hide the implementation details, I need to create two sets of data container classes: one in the general data provider interface - let's call them DPI-Objects - and another set used in the database library, exclusively for entity/attribute mapping in Hibernate - let's call them H-Objects. In the DAO implementation, I need to read data from the database to create H-Objects (via Hibernate) and then convert the H-Objects into DPI-Objects.

    Design #2. Do not create a general data provider interface. Expose H-Objects directly to components that use the database library, so the user of the database library needs to be aware of Hibernate.

    I like Design #1 more, but I don't want to create two sets of data container classes. Is that the right way to hide H-Objects and other Hibernate implementation details from the user who uses the database-based data provider? Are there any drawbacks to Design #2? I will not implement another data provider in the near future, so should I just forget about the data provider interface and use Design #2? What do you think about this? Thanks for your time!
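
    For what it's worth, a sketch of the seam in Design #1, with hypothetical names, just to make the two object sets concrete:

        // The provider-neutral side: a DAO interface returning a "DPI-Object" (OrderDto).
        public interface OrderDao {
            OrderDto findById(long id);
        }

        // The Hibernate library implements it, converting at the boundary so the
        // "H-Object" (OrderEntity, the Hibernate-mapped class) never leaks to callers.
        public class HibernateOrderDao implements OrderDao {
            private final org.hibernate.SessionFactory sessionFactory;

            public HibernateOrderDao(org.hibernate.SessionFactory sessionFactory) {
                this.sessionFactory = sessionFactory;
            }

            public OrderDto findById(long id) {
                OrderEntity e = (OrderEntity) sessionFactory.getCurrentSession()
                                                            .get(OrderEntity.class, id);
                return new OrderDto(e.getId(), e.getAmount()); // H-Object -> DPI-Object
            }
        }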

    Read the article

  • How do I bind an iTunes style source list to an NSTableView using Core Data?

    - by Austin
    I have an iTunes-style interface in my application: a source list (NSOutlineView) on the left that contains different libraries and playlists, with an NSTableView on the right side of the interface displaying information for "Presentations". Similar to iTunes, I am showing the same type of information in the table view whether a library or a playlist is selected (title, author, date created, etc.). I currently have an NSArrayController connected to my NSTableView and was setting the fetch predicate based on what was selected in the source list. This works fine when selecting a library, because I can just set the fetch predicate to filter by the "type" field in my Presentation Core Data entity. When I try to adjust the fetch predicate for a playlist, however, it doesn't look like there is any way to set it, because I've got a table in between Playlists and Presentations to keep track of the order within the playlist. According to the Apple docs, these types of predicates are not doable with Core Data (it basically doesn't support multiple inner joins). Below is the relevant portion of my data model. Is my data model set up incorrectly? Should I drop the NSArrayController and hook the NSTableView up by hand? I'm trying to figure out if there is a simple fix, or really a design flaw.

    Read the article

  • Handling missing/incomplete data in R--is there a function to mask but not remove NAs?

    - by doug
    As you would expect from a DSL aimed at data analysis, R handles missing/incomplete data very well. For instance, many R functions have an 'na.rm' flag that you can set to 'T' to remove the NAs: mean(c(5,6,12,87,9,NA,43,67), na.rm=T). But if you want to deal with NAs before the function call, you need to do something like this:

    - to remove each 'NA' from a vector: vx = vx[!is.na(vx)]
    - to replace each 'NA' in a vector with a '0': vx = ifelse(is.na(vx), 0, vx)
    - to remove each row that contains an 'NA' from a data frame: dfx = dfx[complete.cases(dfx),]

    All of these permanently remove the 'NA's, or the rows that contain them. Sometimes that isn't quite what you want, though - making an 'NA'-excised copy of the data frame might be necessary for the next step in the workflow, but in subsequent steps you often want those rows back (e.g., to calculate a column-wise statistic for a column whose rows went missing through a prior call to 'complete.cases', even though the column itself has no 'NA' values in it). To be as clear as possible about what I'm looking for: Python/NumPy has a class, 'masked array', with a 'mask' method, which lets you conceal - but not remove - NAs during a function call. Is there an analogous facility in R?

    Read the article

  • How to write these two queries for a simple data warehouse, using ANSI SQL?

    - by morpheous
    I am writing a simple data warehouse that will allow me to query the table to observe periodic (say weekly) changes in data, as well as changes in the change of the data (e.g. the week-to-week change in the weekly sale amount). For the purposes of simplicity, I will present very simplified (almost trivialized) versions of the tables I am using here. The sales data table is a view and has the following structure:

        CREATE TABLE sales_data (
            sales_time date NOT NULL,
            sales_amt  double NOT NULL
        )

    For the purpose of this question, I have left out other fields you would expect to see - like product_id, sales_person_id, etc. - as they have no direct relevance to this question. AFAICT, the only fields that will be used in the query are the sales_time and sales_amt fields (unless I am mistaken). I also have a date dimension table with the following structure:

        CREATE TABLE date_dimension (
            id         integer NOT NULL,
            datestamp  date    NOT NULL,
            day_part   integer NOT NULL,
            week_part  integer NOT NULL,
            month_part integer NOT NULL,
            qtr_part   integer NOT NULL,
            year_part  integer NOT NULL
        )

    which partitions dates into reporting ranges. I need to write queries that will allow me to do the following:

    1. Return the week-on-week change in sales_amt for a specified period. For example, the change between sales today and sales N days ago, where N is a positive integer (N == 7 in this case).
    2. Return the change in the change of sales_amt for a specified period. In (1) we calculated the week-on-week change; now we want to know how that change differs from the (week-on-week) change calculated last week.

    I am stuck at this point, however, as SQL is my weakest skill. I would be grateful if an SQL master could explain how to write these queries in a DB-agnostic way (i.e. using ANSI SQL).
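
    Not a full answer, but a sketch of the shape these queries can take where the database supports standard window functions (LAG is in the ANSI standard, though support varies by engine, so this is not fully DB-agnostic):

        -- weekly totals, the week-on-week change, and the change of that change
        WITH weekly AS (
            SELECT d.year_part, d.week_part, SUM(s.sales_amt) AS week_sales
            FROM sales_data s
            JOIN date_dimension d ON d.datestamp = s.sales_time
            GROUP BY d.year_part, d.week_part
        ),
        deltas AS (
            SELECT year_part, week_part, week_sales,
                   week_sales - LAG(week_sales)
                       OVER (ORDER BY year_part, week_part) AS wow_change
            FROM weekly
        )
        SELECT year_part, week_part, week_sales, wow_change,
               wow_change - LAG(wow_change)
                   OVER (ORDER BY year_part, week_part) AS change_of_change
        FROM deltas;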

    Read the article

  • [C#] How to receive uncrackable data or so ? ;P

    - by Prix
    Hi, I am working on a C# application that communicates with my website and retrieves some information from it, using SSL, which is working just fine. Now what I want/need is a way to receive encrypted, or codified, or obfuscated data such that if someone cracks my application, they will not be able to decrypt the data, because decrypting it needs something from the server (API, website) - and yet the application needs to decrypt it in order to use it... Initially I was thinking of an embedded RSA key pair to send and receive the encrypted data, but let's consider that someone has cracked the application: they could just replace those keys with keys they have made. So I was looking into some methods but haven't found or been able to think of any way to harden this... I was learning about RSA and encryption and started developing this as a self-learning exercise, got involved with it, and now I am trying to figure out a way to receive data like that... I have considered obfuscating and compiling my code with packers etc., but this is not about packing - I am more interested in knowing a better way to secure what I described. I know it may be, or is, impossible, but I am still looking for some approach. I would appreciate advice, suggestions and C# code samples; if you need more information or anything, please let me know.

    Read the article

  • How do I authenticate an ADO.NET Data Service?

    - by lsb
    Hi! I've created an ADO.NET Data Service hosted in an Azure worker role. I want to pass credentials from a simple console client to the service, then validate them using a QueryInterceptor. Unfortunately, the credentials don't seem to be making it over the wire. The following is a simplified version of the code I'm using, starting with the DataService on the server:

        using System;
        using System.Data.Services;
        using System.Linq.Expressions;
        using System.ServiceModel;
        using System.Web;

        namespace Oslo.Worker
        {
            [ServiceBehavior(AddressFilterMode = AddressFilterMode.Any)]
            public class AdminService : DataService<OsloEntities>
            {
                public static void InitializeService(
                    IDataServiceConfiguration config)
                {
                    config.SetEntitySetAccessRule("*", EntitySetRights.All);
                    config.SetServiceOperationAccessRule("*", ServiceOperationRights.All);
                }

                [QueryInterceptor("Pairs")]
                public Expression<Func<Pair, bool>> OnQueryPairs()
                {
                    // This doesn't work!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
                    if (HttpContext.Current.User.Identity.Name != "ADMIN")
                        throw new Exception("Ooops!");

                    return p => true;
                }
            }
        }

    Here's the AdminHost I'm using to instantiate the AdminService in my Azure worker role:

        using System;
        using System.Data.Services;

        namespace Oslo.Worker
        {
            public class AdminHost : DataServiceHost
            {
                public AdminHost(Uri baseAddress)
                    : base(typeof(AdminService), new Uri[] { baseAddress })
                {
                }
            }
        }

    And finally, here's the client code:

        using System;
        using System.Data.Services.Client;
        using System.Net;
        using Oslo.Shared;

        namespace Oslo.ClientTest
        {
            public class AdminContext : DataServiceContext
            {
                public AdminContext(Uri serviceRoot, string userName, string password)
                    : base(serviceRoot)
                {
                    Credentials = new NetworkCredential(userName, password);
                }

                public DataServiceQuery<Order> Orders
                {
                    get { return base.CreateQuery<Order>("Orders"); }
                }
            }
        }

    I should mention that the code works great, with the single exception that the credentials are not being passed over the wire. Any help in this regard would be greatly appreciated! Thanks....

    Read the article
