Search Results

Search found 95201 results on 3809 pages for 'system data sqlite'.


  • Access to Salesforce.com Data Through Tableau Desktop

    - by dataintegration
    This article explains how to connect Tableau's business intelligence tool to any of the RSSBus OData Connectors. While the example uses the Salesforce Connector, the same process can be followed for any of the OData Connectors.

    Step 1: Download and install both the Salesforce Connector from RSSBus and Tableau Desktop from Tableau.

    Step 2: Configure the Salesforce Connector to connect with your Salesforce.com account. The Help tab in the Salesforce Connector application links to the Getting Started Guide, which walks you through setting up the connector.

    Step 3: Once you have successfully configured the Salesforce Connector application, open Tableau and select the Connect to data option at the top left of the window.

    Step 4: Click the option labeled OData under the section labeled On a server.

    Step 5: A new pop-up will appear. The box under Step 1 of the pop-up must contain the OData URL of the Salesforce Connector table. You can find this by clicking on the Settings tab of the Salesforce Connector. Once you have found the OData entry URL, append the name of the table you want Tableau to connect with. In this example, we connect to the Account table, so the URL we enter is: http://localhost:8181/sfconnector/data/conn/odata.rsc/Account. You will also need to add authentication options in this step: select the Use a Username and Password option in Step 2 of the pop-up and enter the username and password of a user who has access to the Salesforce Connector. When you are done, click the Connect button in Step 3 of the pop-up.

    Step 6: When the connection to the Salesforce Connector succeeds, give the connection a name and click the OK button.

    Step 7: The table columns will be listed on the left side under the Dimensions section of the workspace.

    Step 8: To view your Salesforce.com data, right-click under the table name in the Data section at the top left of the dashboard and select the View Data option. Your Salesforce.com data will appear in Tableau.

    Read the article

  • Loading from Multiple Data Sources with Oracle Loader for Hadoop

    - by mannamal
    Oracle Loader for Hadoop can be used to load data from multiple data sources (for example Hive and HBase) and data in multiple formats (for example Apache weblogs and JSON files). There are two ways to do this:

    (1) Use an input format implementation. Oracle Loader for Hadoop includes several input format implementations. In addition, a user can develop their own input format implementation for proprietary data sources and formats.

    (2) Leverage the capabilities of Hive, and use Oracle Loader for Hadoop to load from Hive.

    These approaches are discussed in our Oracle OpenWorld 2013 presentation.

    Read the article

  • Linux: Limiting data throughput (pipe) in bytes per second?

    - by sdaau
    Hi all, I was wondering if there is a Linux program that can limit the data throughput of a pipe in actual bytes per second. From what I gather, two candidates would be bfr, which however has been removed from Debian (Removal candidate: bfr), and cpipe, whose lowest supported resolution seems to be kB/s, meaning that buffer writes can still reach MB/s ([SOLVED] Is there a program to limit terminal pipe speed? - Page 2 - Ubuntu Forums). What I'd want is to be able to specify something like cat example.txt | ratelimit -Bps 100 > /dev/ttyUSB0 ... and actually have a single byte from example.txt sent every 1/100 = 0.01 sec (10 ms) to the output. Thanks in advance for any suggestions, Cheers!
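
    In the absence of such a tool, here is a minimal sketch of the behavior the poster describes, written in Java (the class name and command line are hypothetical; "ratelimit" is not an existing program). It copies stdin to stdout one byte at a time, sleeping between bytes so that at most the requested number of bytes passes per second:

        import java.io.IOException;

        public class RateLimit {
            public static void main(String[] args) throws IOException, InterruptedException {
                int bps = Integer.parseInt(args[0]);          // bytes per second, e.g. 100
                long nanosPerByte = 1_000_000_000L / bps;     // 10 ms per byte at 100 B/s
                int b;
                while ((b = System.in.read()) != -1) {
                    System.out.write(b);
                    System.out.flush();                       // push each byte out immediately
                    Thread.sleep(nanosPerByte / 1_000_000L,
                                 (int) (nanosPerByte % 1_000_000L));
                }
            }
        }

    Used as: cat example.txt | java RateLimit 100 > /dev/ttyUSB0. Sleep granularity on a stock kernel makes this approximate at high rates, but at 100 bytes per second it behaves as the poster describes.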

    Read the article

  • What is the better way for data retrieval in an application that needs to handle a limited amount of data?

    - by Milanix
    This is not really a coding question, since I am not adding any code here; including my code snippets would make this question really long. Instead, I am interested in better ways to retrieve data in an application that handles a limited amount of data which isn't updated regularly. Let's take this example: I am writing an application which gets a schedule as an XML from a server. I have written logic to parse the XML version and update the database only if that version is newer than the local version. Although the update is checked automatically/manually on a daily basis depending on user preference, the actual version update happens only once every few months or so, since it is made by another authority which doesn't provide an API but rather announces its changes publicly. The actual XML contains "(n number of groups) (days in a week) (n number of schedules)". The number of groups is usually 6 and the number of schedules is usually 2, so there would usually be only around 100 strings. Now, although I am using SQLite at the moment, I want to know how to make the update on the database. Should I show a progress dialog saying the application is updating and exit the app when it's done? Since my updates are infrequent, I don't think this will really harm the user experience, but is there any better way to do it? I don't want the update to be made while the user is searching, which also uses the database; this will cause a "database already open" exception. At least, I have faced this problem before. Is it better to parse the XML every time the user wants to view certain things, or to use SQLite? Since I make a lot of use of adapters in my app to create lists, will that degrade performance? It would really be a great help if anyone can give me a better overview of it, or maybe counterarguments against each. Many thanks!
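
    Regarding the update itself, a minimal Android-flavored sketch (the table name and wholesale replace are illustrative assumptions) is to apply the whole parsed schedule in one transaction on a background thread, going through the same SQLiteOpenHelper instance the search code uses. Sharing that single connection is the usual way to avoid the "database already open" exception, and readers simply see the old rows until the commit:

        import android.content.ContentValues;
        import android.database.sqlite.SQLiteDatabase;
        import java.util.List;

        public class ScheduleUpdater {
            // Applies the freshly parsed schedule atomically. Because the whole
            // replace happens inside one transaction, a search running at the
            // same time never observes a half-updated table.
            public static void applyUpdate(SQLiteDatabase db, List<ContentValues> rows) {
                db.beginTransaction();
                try {
                    db.delete("schedule", null, null);   // drop the old version wholesale
                    for (ContentValues row : rows) {
                        db.insert("schedule", null, row);
                    }
                    db.setTransactionSuccessful();       // commit point
                } finally {
                    db.endTransaction();
                }
            }
        }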

    Read the article

  • Validation and Error Generation when using the Data Mapper Pattern

    - by AndyPerlitch
    I am working on saving the state of an object to a database using the data mapper pattern, but I am looking for suggestions/guidance on the validation and error message generation step (step 4 below). Here are the general steps as I see them:

    (1) The data mapper is used to get current info (assoc array) about the object in the db:

        +===========+======+================+=====+
        | person_id | name | favorite_color | age |
        +===========+======+================+=====+
        | 1         | Andy | Green          | 24  |
        +-----------+------+----------------+-----+

    The mapper returns an associative array, e.g. Person_Mapper::getPersonById($id):

        $person_row = array(
            'person_id'      => 1,
            'name'           => 'Andy',
            'favorite_color' => 'Green',
            'age'            => '24',
        );

    (2) The Person object constructor takes this array as an argument, populating its fields:

        class Person
        {
            protected $person_id;
            protected $name;
            protected $favorite_color;
            protected $age;

            function __construct(array $person_row)
            {
                $this->person_id      = $person_row['person_id'];
                $this->name           = $person_row['name'];
                $this->favorite_color = $person_row['favorite_color'];
                $this->age            = $person_row['age'];
            }

            // getters and setters...

            public function toArray()
            {
                return array(
                    'person_id'      => $this->person_id,
                    'name'           => $this->name,
                    'favorite_color' => $this->favorite_color,
                    'age'            => $this->age,
                );
            }
        }

    (3a) (GET request) Inputs of an HTML form used to change info about the person are populated using Person::getters:

        <form>
            <input type="text" name="name" value="<?=$person->getName()?>" />
            <input type="text" name="favorite_color" value="<?=$person->getFavColor()?>" />
            <input type="text" name="age" value="<?=$person->getAge()?>" />
        </form>

    (3b) (POST request) The Person object is altered with the POST data using Person::setters:

        $person->setName($_POST['name']);
        $person->setFavColor($_POST['favorite_color']);
        $person->setAge($_POST['age']);

    *(4) Validation and error message generation on a per-field basis:
    - Should this take place in the Person object or the Person mapper object?
    - Should data be validated BEFORE being placed into fields of the Person object?

    (5) The data mapper saves the Person object (updates the row in the database):

        $person_mapper->savePerson($person);
        // the savePerson method uses $person->toArray()
        // to get data in a more digestible format for the
        // db gateway used by person_mapper

    Any guidance, suggestions, criticism, or name-calling would be greatly appreciated.

    Read the article

  • How to automatically restart Tomcat7 on system reboots?

    - by coding crow
    I have installed Tomcat 7 on Ubuntu 12.04 LTS, which runs on an Amazon EC2 instance. Now I want Tomcat to restart automatically on system reboot. I read a blog which suggests adding the script below to /etc/init.d/tomcat7:

        # Tomcat auto-start
        #
        # description: Auto-starts tomcat
        # processname: tomcat
        # pidfile: /var/run/tomcat.pid

        case $1 in
            start)
                sh /usr/share/tomcat7/bin/startup.sh
                ;;
            stop)
                sh /usr/share/tomcat7/bin/shutdown.sh
                ;;
            restart)
                sh /usr/share/tomcat7/bin/shutdown.sh
                sh /usr/share/tomcat7/bin/startup.sh
                ;;
        esac
        exit 0

    and issuing the following commands:

        sudo chmod 755 /etc/init.d/tomcat7
        sudo ln -s /etc/init.d/tomcat7 /etc/rc1.d/K99tomcat
        sudo ln -s /etc/init.d/tomcat7 /etc/rc2.d/S99tomcat
        sudo /etc/init.d/tomcat7 restart

    My questions: /etc/init.d/tomcat7 already has a script in it, so where do we paste the suggested script? And is the suggested procedure correct?

    Read the article

  • Get Started with Silverlight 3 Data Binding

    If you've learned about data binding from other Microsoft technologies, you'll be glad to hear that Silverlight 3 also gives you a smooth way to handle data binding. This article, the first in a multi-part series, gets you started by teaching you some of the techniques you'll need to handle data binding successfully....

    Read the article

  • Estimating costs in a GOAP system

    - by fullwall
    I'm currently developing a GOAP system in Java. An explanation of GOAP can be found at http://web.media.mit.edu/~jorkin/goap.html. Essentially, it uses A* to plot between Actions that mutate the world state. To give all Actions and Goals a fair chance to execute, I'm using a heuristic function to estimate the cost of doing something. What is the best way to estimate this cost so that it is comparable to all the other costs? As an example, take estimating the cost of running away from an enemy versus attacking it: how should each cost be calculated so that the two are comparable?
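
    One way to make such costs comparable, sketched below, is the "common currency" trick: express every action's cost estimate in a single shared unit, here estimated seconds to complete, and fold risk in as a multiplier. All class names and formulas are illustrative assumptions, not taken from the linked article:

        // Every cost is an estimated time-to-complete in seconds, so fleeing and
        // attacking land on the same scale and A* can compare them directly.
        interface WorldState {
            double distanceToSafety();   // metres to the nearest safe spot
            double moveSpeed();          // metres per second
            double enemyHealth();        // hit points
            double damagePerSecond();    // our expected dps against this enemy
            double healthFraction();     // 0.0 (dead) .. 1.0 (full health)
        }

        interface Action {
            double estimateCost(WorldState s);   // heuristic cost, in seconds
        }

        class FleeAction implements Action {
            public double estimateCost(WorldState s) {
                return s.distanceToSafety() / s.moveSpeed();   // time to reach safety
            }
        }

        class AttackAction implements Action {
            public double estimateCost(WorldState s) {
                double timeToKill = s.enemyHealth() / s.damagePerSecond();
                // inflate the estimate when hurt: 1.0 at full health, 2.0 near death
                double riskFactor = 2.0 - s.healthFraction();
                return timeToKill * riskFactor;
            }
        }

    With everything in seconds, a 3-second kill at full health costs 3.0, while the same kill at 10% health costs 5.7, so a 4-second retreat loses to the attack at full health but wins when badly hurt.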

    Read the article

  • Windows system (bootloader) partition accidentally deleted during multiple installs

    - by S.Y.T.
    After successfully experimenting with dual boots of multiple variations of BackTrack and XBMCbuntu variations of Ubuntu, my Windows partition became unrecognizable to GRUB. I used my Windows boot CD to try to correct the problem; however, I deleted all partitions except the NTFS one that contains my old Windows install (and merged all the others into it, in hopes of getting back to the Windows boot loader and out of GRUB). Now all I get is a GRUB command prompt when I try to boot the system (how??? - I thought I deleted GRUB), and the Windows boot disc doesn't even recognize the install. I've tried TRK to try to resolve the problem, though I must admit ignorance in correctly using this utility. I've searched for other answers to this problem. Any help would be much appreciated. S.Y.

    Read the article

  • Rebuilding system databases in 2008 R2

    - by TiborKaraszi
    All my attempts so far to rebuild the system databases in 2008 R2 have failed. I first tried to run setup from the path below: C:\Program Files\Microsoft SQL Server\100\Setup Bootstrap\Release. But that turns out to be the 2008 setup program, not the 2008 R2 setup, even though I have no 2008 instances installed (I have only R2 instances installed). Apparently, the 2008 setup program does a version check of the instance to be rebuilt, and since it is > 10.50.0, the rebuild fails. Books Online for R2, the section...(read more)

    Read the article

  • Generating SQL Server Test Data with Visual Studio 2010

    As a database developer or tester, you sometimes need production-like data in your environment for development or testing, but you cannot use the production data because of security and privacy issues. So how can you generate test data, or replicate data similar to production, for your development or test environment?

    Read the article

  • How to deal with metadata in drop-downs?

    - by Mangesh Jogade
    Please advise how to handle the following scenario in a web application. I have a drop-down which is populated using metadata from table A. When the form is submitted, the selected drop-down value is stored in table B. When displaying existing data, the drop-down is populated from the data stored in table B; when copying existing data, it is likewise copied from table B. I want to achieve the following goals: When displaying existing data, I must display it irrespective of the current metadata (to explain: even if some options have been removed from the metadata, I still display them). When copying existing data, only current options should be copied (that is, if some options have been removed from the metadata, I should not copy them). I understand that I can do this by scanning the metadata every time I copy existing data; however, if thousands of such drop-downs exist, it is clearly undesirable to scan the complete metadata for every one. How can I handle such a situation in a web application?

    Read the article

  • Generating Data for Database Tests

    It is more and more essential for developers to work on development databases that have realistic data in both type and quantity, but without using real data. It isn't exactly easy, even with third-party tools to hand. Phil Factor shows how it can be done, taking the classic PUBS database and giving it a more realistic set of data.

    Read the article

  • Identifying elements from data feeds generated by affiliate sites

    - by SPI
    I am working with data feeds from affiliate sites. The basic idea is to provide an interface where the user can paste a link to an XML data feed (these are huge, by the way, around 60 MB) that would then be streamed, parsed into small chunks, and mined for the required data, which would then be stored in the database. The problem is that different affiliate sites have different schemas for their XML. It is a little hard to map the elements in an XML to your database attributes when you don't actually know which element contains what. My solution: use XPath to traverse the first parent element and its descendants, fetch the elements as well as the data, and ask the user to map this data to the attributes in the database by selecting from a set of radio buttons that represent the database attributes. This will be done just once for each new feed; once the system knows what's what, it will automatically upload the data from the XML to the database. Does this sound viable? Is there a better solution? I realize this leaves an uncomfortable opening for human error. Thanks.
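
    For the discovery step, a minimal StAX sketch along these lines (the record tag name and file handling are assumptions; real feeds differ) streams the feed rather than loading all 60 MB, stops after the first record element, and collects the child element names plus sample values to show next to the radio buttons:

        import javax.xml.stream.XMLInputFactory;
        import javax.xml.stream.XMLStreamConstants;
        import javax.xml.stream.XMLStreamReader;
        import java.io.FileInputStream;
        import java.util.LinkedHashMap;
        import java.util.Map;

        public class FeedProbe {
            // Returns e.g. {name=Widget, price=9.99} for the first <product> record,
            // ready to be offered to the user for mapping onto database attributes.
            public static Map<String, String> probeFirstRecord(String path, String recordTag)
                    throws Exception {
                Map<String, String> fields = new LinkedHashMap<>();
                XMLStreamReader r = XMLInputFactory.newInstance()
                        .createXMLStreamReader(new FileInputStream(path));
                boolean inRecord = false;
                String current = null;
                while (r.hasNext()) {
                    int ev = r.next();
                    if (ev == XMLStreamConstants.START_ELEMENT) {
                        if (!inRecord && r.getLocalName().equals(recordTag)) inRecord = true;
                        else if (inRecord) current = r.getLocalName();
                    } else if (ev == XMLStreamConstants.CHARACTERS
                            && current != null && !r.getText().trim().isEmpty()) {
                        fields.putIfAbsent(current, r.getText().trim());
                    } else if (ev == XMLStreamConstants.END_ELEMENT) {
                        if (r.getLocalName().equals(recordTag)) break;  // first record only
                        current = null;
                    }
                }
                r.close();
                return fields;
            }
        }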

    Read the article

  • Combining a content management system with ASP.NET

    - by Ek0nomik
    I am going to be creating a site that seems to require a blend of a content management system (CMS) and some custom web development (done in ASP.NET MVC). I have plenty of web development experience to understand the ASP.NET MVC side of the fence, but I don't have a lot of CMS knowledge beyond getting one stood up. Right now my biggest question is around integrating ASP.NET security with the CMS. I currently have an ASP.NET MVC site that handles authentication for multiple production sites and creates an authentication cookie under our domain (*.example.com). The page acts like a single sign-on page, since the cookie is a wildcard and can be used by any other application on the same domain. I'd really like to avoid having users enter their credentials twice. Is there a CMS that will play well with ASP.NET Forms Authentication, given how I have structured these existing applications? As an aside, I am currently leaning towards Drupal, but that isn't finalized.

    Read the article

  • Archiving SQL Server Data Using Partitioning

    Many companies now have a requirement to keep data for long periods of time. While this data does have to be available if requested, it usually does not need to be accessible by the application for any current transactions. Data that falls into this category is a good candidate for archival.

    Read the article

  • Silverlight 3.0 and ADO.NET Data Services framework (An error occurred while processing this request)

    - by ybbest
    Today I tried to write a Silverlight app that talks to SharePoint 2010 using the REST API. However, after deploying the Silverlight app and running the code, I got the following error. To fix it, I needed to change the target framework of the caller application to 4.0; in this case, that meant using Silverlight 4.0 instead of 3.0. After I did that and redeployed the solution to SharePoint, it worked like a charm.

    Exception details: System.Data.Services.Client.DataServiceQueryException: An error occurred while processing this request. Request version '1.0' is too low for the response. The lowest supported version is '2.0'.

    However, if you get an error like this: Could not load type 'System.Data.Services.Providers.IDataServiceUpdateProvider' from assembly 'System.Data.Services, Version=3.5.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089', then you need to install the ADO.NET Data Services Update for .NET Framework 3.5 SP1, which you can download here (for Windows 7 and Server 2008 R2) or here (for Windows Vista or Server 2008).

    Read the article

  • Master Data and Data Quality - Do You Really Know Your Customer?

    - by Mala Narasimharajan
    How well do you know your customer? Data Quality and MDM go hand-in-hand when it comes to fully knowing your customers and the information related to them. Take a look at this video and see why customer data needs a consolidated management strategy that incorporates data quality. Oracle MDM, specifically Oracle Customer Hub, enables enterprises to truly understand their customers and ensure a single, trusted view of any customer. It offers the most complete MDM application suite in the marketplace. For more information on Oracle MDM and Oracle Enterprise Data Quality, visit us on the web.

    Read the article

  • What is a good way of sharing specific data between ViewModels?

    - by voroninp
    We have an IAppContext which is injected into each ViewModel. This service contains shared data: global filters and other application-wide properties. But there are cases when the data is very specific. For example, one VM implements the Master view and a second the Details view of the selected tree item; thus DetailsVM must know about the selected item and its changes. We can store this information either in IAppContext or inside each concerned VM. In either case, update notifications are sent via the Messenger. I see pros and cons to both approaches and cannot decide which one is better.

    1st: + explicitly exposed shared properties, easy-to-follow dependencies; - IAppContext becomes cluttered with very specific data.

    2nd: the exact opposite of the first, plus more memory load due to data duplication.

    Maybe someone can offer design alternatives, or tell me that one of the variants is objectively superior to the other because I am missing something important?

    Read the article

  • Performance Tuning in the Age of Big Data

    Database Administrators must now deal with large volumes of data and new forms of high-speed data analysis. If your responsibility includes performance tuning, here are the areas to focus on that will become more and more important in the age of Big Data.

    Read the article
