Search Results

Search found 108959 results on 4359 pages for 'ado net data services'.

  • How often to authenticate iOS app in web service

    - by jeraldov
    I am trying to build an iOS app that connects to a PHP+MySQL web service. My question is how often I should check the user's authentication when getting data from the web service. My app requires a login at start-up, but I am wondering how often I should check that he can still validly get data from the web service. Should I check his username and password each time the user views a table view that gets its data from the web service?
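
    A common pattern for this is to validate the username and password once at login, then have the web service issue a session token that accompanies every later request, so credentials never need to be re-checked per table view. Below is a minimal client-side sketch of that pattern in C#; the /login and /data endpoints and the X-Auth-Token header are hypothetical stand-ins for whatever the PHP+MySQL service exposes.

        // Minimal sketch of token-based authentication. The endpoints and
        // header name are hypothetical; the same idea applies to an iOS
        // client talking to a PHP+MySQL service.
        using System;
        using System.Collections.Generic;
        using System.Net.Http;
        using System.Threading.Tasks;

        class ApiClient
        {
            private readonly HttpClient http =
                new HttpClient { BaseAddress = new Uri("https://example.com/api/") };
            private string token;

            // Authenticate once at start-up; the server returns a session token.
            public async Task LoginAsync(string user, string password)
            {
                var response = await http.PostAsync("login",
                    new FormUrlEncodedContent(new Dictionary<string, string>
                    {
                        ["user"] = user,
                        ["password"] = password
                    }));
                response.EnsureSuccessStatusCode();
                token = await response.Content.ReadAsStringAsync();
            }

            // Every later call sends the token instead of the raw credentials;
            // the server only demands a fresh login when the token has expired.
            public async Task<string> GetTableDataAsync()
            {
                var request = new HttpRequestMessage(HttpMethod.Get, "data");
                request.Headers.Add("X-Auth-Token", token);
                var response = await http.SendAsync(request);
                response.EnsureSuccessStatusCode();
                return await response.Content.ReadAsStringAsync();
            }
        }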

  • Why is there no generic implementation of OrderedDictionary in .net?

    - by nonot1
    Why did Microsoft not provide a generic implementation of OrderedDictionary? There are a few custom implementations I've seen, including: http://www.codeproject.com/KB/recipes/GenericOrderedDictionary.aspx But why did Microsoft not include it in the base .NET library? Surely they had a reason for not building a generic version... but what is it? Prior to posting this message, I did see: http://stackoverflow.com/questions/2629027/no-generic-implementation-of-ordereddictionary But that just confirms that it does not exist, not why it does not exist. Thanks
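
    For reference, the kind of type being asked about is straightforward to sketch: a Dictionary for O(1) lookup by key combined with a List that preserves insertion order. The sketch below is purely illustrative and not production-ready (no removal, enumeration, or argument checking):

        // Minimal sketch of a generic ordered dictionary: hash lookup by key
        // plus access by insertion index. Illustrative only.
        using System.Collections.Generic;

        class OrderedDictionary<TKey, TValue>
        {
            private readonly Dictionary<TKey, TValue> map = new Dictionary<TKey, TValue>();
            private readonly List<TKey> order = new List<TKey>();

            public void Add(TKey key, TValue value)
            {
                map.Add(key, value);   // throws on duplicate keys, like Dictionary
                order.Add(key);
            }

            public TValue this[TKey key] => map[key];   // O(1) lookup by key

            public KeyValuePair<TKey, TValue> GetAt(int index)   // lookup by insertion order
                => new KeyValuePair<TKey, TValue>(order[index], map[order[index]]);

            public int Count => map.Count;
        }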

  • Big Data Appliance

    - by David Dorf
    Today Oracle announced the next release of its Big Data Appliance, an engineered system composed of hardware and software targeting the efficient processing of big data. The solution leverages 288 Intel cores running Cloudera's distribution of Apache Hadoop in 1.1 TB of main memory. This monster helps companies acquire, organize, and analyze large volumes of structured and unstructured data. Additionally, new versions of the Oracle Big Data Connectors and Oracle NoSQL Database were released. Why is this important to retailers? As the accompanying infographic (from Monetate) conveys, mobile and social have added even more data to the already huge collections of POS transactions and e-commerce weblogs. Retailers know that mining that data will help them make better decisions that lead to increased sales, better customer service, and ultimately a successful retail business.

  • Twitter like character counter - jQuery version

    - by bipinjoshi
    My recent article titled "Displaying a Character Counter for Multiline Textboxes" shows you how to create a Twitter-like character counter for multiline textboxes. The article does so using an ASP.NET AJAX client behavior. Here is a jQuery version of the code that does a similar job. Note, however, that unlike the ASP.NET AJAX client behavior illustrated in the article, the following code takes a "function"-based approach to quickly implement similar functionality. http://www.bipinjoshi.net/articles/84e691b2-0306-4911-87bb-875806ba981b.aspx

  • SOA, Java EE and data organization

    - by jolasveinn
    At the company I work for, we're currently splitting up our monolith solution into a number of small services (SOA). Many of the services are small, so we'd like to deploy a number of these services on the same application server, JBoss 7.1 in this case. As per the SOA philosophy, the independence of each service and the teams working on them is very important. What would be the best way to organize the data?

    - Use one schema per service. Would you use one datasource per schema in the application server? Or use one datasource, prefixing all DB object names with the schema name in some transparent manner?
    - Use a shared schema, but evade any naming collisions by requiring each service to use a distinct prefix for all DB objects.
    - Other options?

    Am I maybe thinking about this completely wrong here? :)

  • Testing Reference Data Mappings

    - by Michael Stephenson
    Background
    Mapping reference data is one of the common scenarios in BizTalk development, and it's usually a bit of a pain when you need to manage a lot of reference data, whether through the BizTalk cross-referencing features or some kind of custom solution. I have seen many cases where only a couple of the mapping conditions are ever tested.

    Approach
    As usual, I like to see these things tested in isolation before you start using them in your BizTalk maps, so you know your mapping functions are working as expected. This approach can be used for almost all of your reference-data mapping functions, where you can take advantage of MSTest's data-driven tests to test lots of conditions without having to write millions of tests.

    Walk-through
    Rather than go into the details here, I'm going to call out to one of my colleagues, who wrote a nice little walk-through about using data-driven tests a while back. Check out Callum's blog: http://callumhibbert.blogspot.com/2009/07/data-driven-tests-with-mstest.html
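
    To give a flavour of the data-driven approach, here is a minimal sketch. MapCountryCode stands in for whatever reference-data mapping function you need to test; note that the linked post uses the older [DataSource] attribute, while this sketch uses the [DataRow] style from later MSTest versions.

        // Each DataRow is one mapping condition, so covering another case
        // is one line rather than one more test method.
        using Microsoft.VisualStudio.TestTools.UnitTesting;

        [TestClass]
        public class CountryCodeMappingTests
        {
            [DataTestMethod]
            [DataRow("GB", "United Kingdom")]
            [DataRow("US", "United States")]
            [DataRow("XX", "Unknown")]   // the fallback condition that often goes untested
            public void MapCountryCode_ReturnsExpectedName(string code, string expected)
            {
                Assert.AreEqual(expected, ReferenceDataMapper.MapCountryCode(code));
            }
        }

        // Stand-in for the real mapping function under test.
        static class ReferenceDataMapper
        {
            public static string MapCountryCode(string code)
            {
                if (code == "GB") return "United Kingdom";
                if (code == "US") return "United States";
                return "Unknown";
            }
        }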

  • Best Language for the job? Database | C++, .NET, Java

    - by Randy E
    OK, quick overview. I'm pretty brand new to software design. I have experience reading and editing/customizing PHP things for online scripts/software, such as CMSes, WordPress, and some forum solutions. I'm about to begin my degree in Software Design; the school I'm going to will allow us to focus on an area: C++, Java, or .NET. I've played around a little with VB over the past week, mostly just trying to get a feel for it, nothing extensive. I've been through Herbert Schildt's "C++, A Beginner's Guide," but I was mainly reading it, not doing anything with it beyond a couple of basic console apps (and getting frustrated with auto-close :/ ).

    Now, where I decide to focus my degree will depend on what the best language is for the first piece of software I want to develop on my own. Assume I haven't looked at any of the languages at all, and please help with the following: my first piece of software will be a database program. Everything has to do with users inputting and retrieving data, and calling that data to help with another function of the software: automatically calculating billing information based on information entered in the other portion of the program. I won't go into too many details, as I'm targeting a niche that doesn't have too much competition, but the competition that is there is established. I want to offer more features, scalable solutions, and the ability to port it to an online version.

    Basically, it is a complete case-management system with integrated billing for private investigators. I would like the case management to be able to check the database to see if certain information has been entered before (such as names/SSNs), and then the billing will pull hours entered in the case portion for investigative work, multiply by an already-entered amount for the fee, and then calculate sales tax.

    I also want to provide potential clients with an easily scalable solution, that is: a basic option for start-ups that costs the least, with no additional users, run on one machine. A middle option with the ability to create users and place them in two groups (User or Admin), as well as a few additional features, run on one machine but accessible after being mapped on a network drive. And a third option to allow placement into 4 different groups (Investigators, Billing, Managers, Admins) and more features. And then, a couple of years after launch, a 4th option that is browser-based, allowing the same 4 groups to log in, as well as clients (to view things concerning their case, with some admin-customizable objects that can be added for the client's view), over the internet.

    The only licensing security I would like to employ right off the bat will be a serial key generated after ordering online (received in an email after the successful purchase). The program will access a database stored on a server periodically to verify the license. I would also like it to check that it's running the most updated version and automatically update if not.
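
    As a side note on that last requirement: all three candidate languages can do a periodic online license check; purely as an illustration, a C# version might look like the sketch below. The endpoint, key format, and "valid" response are hypothetical, and a real scheme would also sign the server's response so it cannot be spoofed.

        // Hedged sketch of a periodic online license check against a
        // hypothetical vendor endpoint. Fails closed if the server is unreachable.
        using System;
        using System.Net.Http;
        using System.Threading.Tasks;

        class LicenseChecker
        {
            private static readonly HttpClient http = new HttpClient();

            public static async Task<bool> IsLicenseValidAsync(string serialKey)
            {
                var response = await http.GetAsync(
                    "https://licensing.example.com/verify?key=" + Uri.EscapeDataString(serialKey));
                if (!response.IsSuccessStatusCode) return false;   // fail closed
                return await response.Content.ReadAsStringAsync() == "valid";
            }
        }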

  • Calling home, receiving calls and smartphone data from the US

    - by Rob Farley
    I got asked about calling home from the US by someone going to the PASS Summit. I found myself thinking "there should be a blog post about this"...

    The easiest way to phone home is Skype - no question. Use WiFi, and if you're calling someone who has Skype on their phone at the other end, it's free. Even if they don't, it's still pretty good price-wise. The PASS Summit conference centre has good WiFi, as do the hotels, and plenty of other places (like Starbucks).

    But if you're used to having data all the time, particularly when you're walking from one place to another, then you'll want a SIM card. This also lets you receive calls more easily, not just solving your data problem. You'll need to make sure your phone isn't locked to your local network - get that sorted before you leave. It's no trouble to drop by a T-Mobile or AT&T store and get a prepaid SIM. You can't get one at the airport, but if the PASS Summit is your first stop, there's a T-Mobile store on 6th in Seattle between Pine & Pike, so you can see it from the Sheraton hotel if that's where you're staying. AT&T isn't far away either.

    But there's an extra step that you should be aware of. If you talk to one of these US telcos, you'll probably (hopefully I'm wrong, but this is how it was for me recently) be told that their prepaid SIMs don't work in smartphones. And they're right - the APN gets detected and stops the data from working. But luckily, Apple (and others) have provided information about how to change the APN, which has been used by a company based in New Zealand to let you get your phone working. Basically, you send your phone browser to http://unlockit.co.nz and follow the prompts. But do this from a WiFi spot somewhere, because you won't have data access until after you've sorted this out... Oh, and if you get a prepaid SIM with "unlimited data", you will still need to get a Data Feature for it.

    And just for the record - this is WAY easier if you're going to the UK. I dropped into a T-Mobile shop there and bought a prepaid SIM card for five quid, which gave me 250MB of data and some (but not much) call credit. In Australia it's even easier, because you can buy data-enabled SIM cards that work in smartphones from the airport when you arrive.

    I think having access to data really helps you feel at home in a different place. It means you can pull up maps, see what your friends are doing, and more. Hopefully this post helps, but feel free to post comments with extra information if you have it. @rob_farley

  • How is intermediate data organized in MapReduce?

    - by Pedro Cattori
    From what I understand, each mapper outputs an intermediate file. The intermediate data (the data contained in each intermediate file) is then sorted by key. Then, a reducer is assigned a key by the master. The reducer reads from the intermediate file containing the key and then calls reduce using the data it has read. But in detail, how is the intermediate data organized? Can data corresponding to a key be held in multiple intermediate files? What happens when there is too much data corresponding to one key to be held by a single file? In short, how do intermediate partitions differ from intermediate files, and how are these differences dealt with in the implementation?
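
    For context on the partition/file distinction: each mapper writes its own intermediate output, but that output is divided into one partition per reducer, by default by hashing the key. So data for one key can sit in many intermediate files (one per mapper) while always landing in the same partition index. A sketch of Hadoop's default HashPartitioner logic, transliterated into C# for illustration:

        // Hadoop's default partitioning: mask the sign bit, then bucket by
        // reducer count. Every mapper applies the same function, so records
        // with equal keys always reach the same reducer.
        static int PartitionFor(string key, int numReducers)
        {
            return (key.GetHashCode() & int.MaxValue) % numReducers;
        }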

  • DataSets and XML - The Simplistic Approach

    One of the first ways I learned to read XML data from external data sources was by using a DataSet's ReadXml function. This function takes a file path (or URL) for an XML document and loads its contents into the DataSet. This functionality is great when you need a simple way to process an XML document. In addition, the DataSet object also offers a simple way to save data in an XML format by using the WriteXml function, which saves the current data in the DataSet to an XML file to be used later.

        DataSet ds = new DataSet();
        string filePath = "http://www.yourdomain.com/someData.xml";
        string fileSavePath = @"C:\Temp\Test.xml";
        // Read the XML from this location
        ds.ReadXml(filePath);
        // Save the XML to this location
        ds.WriteXml(fileSavePath);

    I have used the ReadXml function before when consuming data from external RSS feeds to display on one of my sites. It allows me to quickly pull in data from external sites with little to no processing. Example site: MyCreditTech.com
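
    As a small extension of the same idea, here is a sketch of inspecting the tables that ReadXml infers from a feed. The feed URL is a placeholder, and the inferred table and column names depend entirely on the XML's structure (an RSS feed typically yields "channel" and "item" tables):

        using System;
        using System.Data;

        class RssDump
        {
            static void Main()
            {
                var ds = new DataSet();
                ds.ReadXml("http://www.yourdomain.com/feed.rss");   // placeholder URL

                // ReadXml infers a relational schema from the XML it reads.
                foreach (DataTable table in ds.Tables)
                {
                    Console.WriteLine("Table: " + table.TableName);
                    foreach (DataColumn column in table.Columns)
                        Console.WriteLine("  Column: " + column.ColumnName);
                }
            }
        }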

  • Advanced Analytics Oracle Data Mining - NEW 2-Day Training Course

    - by Mike.Hallett(at)Oracle-BI&EPM
    A NEW 2-day Oracle University (OU) instructor-led course on Oracle Data Mining has been developed for partners and customers to learn more about data mining, predictive analytics, and knowledge discovery inside the Oracle Database. Oracle Data Mining provides data-mining algorithms that run natively for high-performance in-database model building and model deployment. This OU course is a great way to learn the advantages and benefits of "big data analytics": mining data and building and deploying "predictive analytics" all inside the Oracle Database, and how to work with OBI. To register for a class, click here, then click on View Schedule to see the latest scheduled classes and/or submit your information expressing interest in attending a class.

  • The updated Survey pattern for Power Pivot and Tabular #powerpivot #tabular #ssas #dax

    - by Marco Russo (SQLBI)
    One of the first models I created for the many-to-many revolution white paper was the Survey one. At the time, it was in Analysis Services Multidimensional; we then implemented it in Analysis Services Tabular and in Power Pivot, using the DAX language. I recently reviewed the data model and published it in the Survey article on the DAX Patterns site. The Survey pattern is the foundation for others, such as Basket Analysis, and it is widely used in many different business scenarios. I was particularly happy to learn it has been used to perform data analysis for cancer research! In this article I did some maintenance on the DAX formulas, checking that proper error handling is part of the formulas, and highlighting some differences in slicer behavior between Excel 2010 and Excel 2013 which could be particularly important for the Survey scenario. As usual, we provide sample workbooks for both Excel 2010 and Excel 2013, and we use DAX Formatter to make the DAX code easier to read. Any feedback will be appreciated!

  • What ports should I open for listen and/or transmit in my firewall to keep internet ok?

    - by H_7
    What are the minimum ports and services that I should allow/open (for listening and/or transmitting) in my firewall (I am using Firestarter) to keep my internet running OK? Some discussion of which services use which ports, and why it matters, would be nice too. Thanks for any advice. My bet is the answer includes:

    - 80 (HTTP)
    - 443 (HTTPS)
    - 53 (DNS)

    EDIT: I will just browse some sites. I have a Windows virtual machine, so I'll use it too, but just for normal navigation as well. No torrents, IRC, Flash, etc. Thanks for the attention. Hmmm... lost! Linux Mint 9 here.

  • How to reduce tight coupling between two data sources

    - by fstuijt
    I'm having some trouble finding a proper solution to the following architecture problem. In our setting we have two data sources, where data source A is the primary source for items of type Foo. A secondary data source, B, can be used to retrieve additional information on a Foo; however, this information does not always exist. Furthermore, data source A can be used to retrieve items of type Bar, and each Bar refers to a Foo. The difficulty here is that each Bar should refer to a Foo which, if available, also contains the information as augmented by data source B. My question is: how do I remove the tight coupling between data sources A and B?
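
    One common way to break this kind of coupling is to hide each source behind its own interface and let a small composition layer perform the augmentation, so A and B never know about each other. A sketch of that idea (all type and member names are illustrative):

        // Data source A and B each sit behind an interface; only the
        // composite repository knows both exist.
        public class Foo
        {
            public int Id { get; set; }
            public string Extra { get; set; }   // populated from source B when available
        }

        public interface IFooSource          // data source A
        {
            Foo GetFoo(int id);
        }

        public interface IFooAugmenter       // data source B
        {
            string GetExtraInfo(int fooId);  // returns null when B has nothing
        }

        public class FooRepository
        {
            private readonly IFooSource primary;
            private readonly IFooAugmenter augmenter;

            public FooRepository(IFooSource primary, IFooAugmenter augmenter)
            {
                this.primary = primary;
                this.augmenter = augmenter;
            }

            public Foo GetFoo(int id)
            {
                var foo = primary.GetFoo(id);
                var extra = augmenter.GetExtraInfo(id);
                if (extra != null) foo.Extra = extra;   // augmentation is optional
                return foo;
            }
        }

    If the code that materializes each Bar resolves its Foo through FooRepository, every Bar automatically sees the augmented Foo without ever touching data source B directly.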

  • Oracle BPM and Open Data integration development

    - by drrwebber
    Rapidly developing Oracle BPM application solutions with data-source integration previously required significant Java and JDeveloper skills. Now, using open source tools for open data development significantly reduces the coding needed. Key tasks can be performed with visual drag-and-drop design combined with menu-driven entry and automatic form generation directly from XSD schema definitions. The architecture used is extremely lightweight, portable, open-platform, and scalable, allowing integration with a variety of Oracle and non-Oracle data sources and systems.

    Two videos available on YouTube walk through the process, first at an introductory conceptual level and then as a deep dive into the programming needed using JDeveloper, Oracle BPM Composer, and Oracle WLS (WebLogic Server), along with the CAM editor and Open-XDX open source tools. Also available are coding samples and resources from the GitHub project page, along with working online demonstration resources on the VerifyXML site.

    Combining Oracle BPM with these open source tools provides a comprehensive, simple, and elegant solution set. Development times are slashed and rapid prototyping is enabled. Existing data sources can also be integrated using open data formats, with either XML or JSON, along with CRUD access via the Open-XDX Java component. The Open-XDX tool is a code-free approach where data mapping is configured as templates using visual drag and drop in the CAM Editor open source tool. XML or JSON is then automatically generated or processed (output or input) and appropriate SQL statements are created to support the data access.

    Also included is the ability to integrate with fillable PDF forms via the XML templates and the Java PDF form-filling library. Again, minimal Java coding is needed to associate the XML source content with the PDF named fields. The Oracle BPM forms can be automatically generated from XSD schema definitions that are built from the data mapping templates. This dramatically simplifies development work, as all the integration artifacts needed are created by the open source editor toolset.

    The developer-level video is designed as a tutorial with segments, hands-on demonstrations, and reviews. This allows developers to learn the techniques and approaches used in incremental steps. The intended audience ranges from data analysts to developers and assumes only entry-level Java skills and knowledge. Most actions are menu-driven, while Java coding is limited to simply configuring values and parameters along with performing builds and deployments from JDeveloper and Oracle WLS. Additional existing Oracle online training resources on Oracle BPM and WLS can be referenced for other normal delivery aspects such as user management and application deployment.

  • SQL Server Data Tools–BI for Visual Studio 2013 Re-released

    - by Greg Low
    Customers used to complain that the tooling for creating BI projects (Analysis Services MD and Tabular, Reporting Services, and Integration Services) was based on earlier versions of Visual Studio than the ones they were using for their other work (such as C#, VB, and ASP.NET projects). To alleviate that problem, the shipment of those tools has been decoupled from the shipment of the SQL Server product. In SQL Server 2014, the BI tooling isn't even included in the released version of SQL Server. This allows the team to keep up to date with the releases of Visual Studio. A little while back, I was really pleased to see that the Visual Studio 2013 update for SSDT-BI (SQL Server Data Tools for Business Intelligence) had been released. Unfortunately, it then had to be withdrawn. The good news is that it's back, and you can get the latest version from here: http://www.microsoft.com/en-us/download/details.aspx?id=42313

  • Powerful Lessons in Data from the Presidential Election

    - by Christina McKeon
    Now that we've had a few days to recover from the U.S. presidential election, it's a good time to take a step back from politics and look for the customer experience lessons that we can take away. The most powerful lesson is that when you know more about your base, you will have an advantage over your competition. That advantage will translate into you winning and your competition losing.

    Michael Scherer of TIME was given access to Obama's data analysts two days before the election. His account is documented in Inside the Secret World of the Data Crunchers Who Helped Obama Win. What we learned from Scherer's inside view is how well Obama's team did in getting the right data, analyzing it, and acting on it. This data team recognized how critical it was to break down data silos within the campaign. As Scherer noted, they created "a single system that merged information from pollsters, fundraisers, field workers, consumer databases, and social-media and mobile contacts with the main Democratic voter files in the swing states." The Obama analysis was so meticulous that they knew which celebrity and which type of celebrity event would help them maximize campaign contributions. With a single system, their data models became more precise. They determined which messages were more successful with specific demographic groups and that who made the calls mattered. Data analysis also led to many other changes in Obama's campaign, including a new ad buying strategy, using social media and applications to tap into supporters' friends, and using new social news sites.

    While we did not have that same inside view into Romney's campaign, much of the post-mortem coverage indicates that Romney's team did not have the right analysis. As Peter Hamby of CNN wrote in Analysis: Why Romney Lost, "Romney officials had modeled an electorate that looked something like a mix of 2004 and 2008...." That historical data did not account for the changing demographics in the U.S.

    Does your organization approach data like the Obama or Romney team? Do you really know your base? How well can you predict what is going to happen in your business? If you haven't already put together a strategy and plan to know more, this week's civics lesson is a powerful reason to do it sooner rather than later. Your competitors are probably thinking the same thing that you are!

  • Is the Spark View Engine for ASP.NET dead?

    - by AUser
    Is the Spark View Engine for ASP.NET dead? I'm considering using it for a new project. I tried to get approved to join the Google Group, but nobody would approve me; the last message posted there was in May. I tried emailing the developers, but nobody would reply. I'm not having happy feelings about using Spark for a major project of mine at the moment. Is this project now dead, especially after Razor came out?

  • Video: Analyzing Big Data using Oracle R Enterprise

    - by Sherry LaMonica
    Learn how Oracle R Enterprise is used to generate new insight and new value for the business, answering not only what happened, but why it happened. View this YouTube Oracle Channel video overview, which describes how analyzing big data with Oracle R Enterprise differs from using other analytics tools at Oracle. Oracle R Enterprise (ORE), a component of the Oracle Advanced Analytics Option, couples the wealth of analytics packages in R with the performance, scalability, and security of Oracle Database. ORE executes base R functions transparently on database data without having to pull the data out of Oracle Database. As an embedded component of the database, Oracle R Enterprise can run your R scripts and open source packages via embedded R, where the database manages the data served to the R engine and provides user-controlled data parallelism. The result is faster and more secure access to data. ORE also works with the full suite of in-database analytics, providing integrated results to the analyst.

  • Is this the correct way to implement .NET MVC website structure?

    - by aspdotnetuser
    I have recently seen a .NET MVC solution in which the markup in the .aspx views has a Controller as its model, and the .ascx user controls they contain use a separate model. I'm new to MVC and I wanted to find out about a few things I'm not clear on. An example of how the code is implemented: the UserDetails.aspx view has markup showing that it uses UserDetailsController.cs as its model. It contains RenderPartial("User_Details.ascx", UserDetailsModel) and passes it the UserDetailsModel. Is this the standard/correct way of implementing MVC, or just one way to implement it? I also noticed that the classes used as models appear to be service classes that have [DataMember] and [DataContract] attributes on the class name and properties - what is the advantage of this implementation?
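
    For contrast, the more conventional arrangement is for the view to be typed to a plain view-model class rather than to a controller; the [DataContract]/[DataMember] attributes usually just mean the same class doubles as a WCF data contract. A sketch of the usual shape (names are illustrative):

        // The controller builds a view model and hands it to the view; the
        // view never references the controller as its model. DataContract
        // only matters if this class is also serialized by WCF.
        using System.Runtime.Serialization;
        using System.Web.Mvc;

        [DataContract]
        public class UserDetailsModel
        {
            [DataMember] public string Name { get; set; }
            [DataMember] public string Email { get; set; }
        }

        public class UserDetailsController : Controller
        {
            public ActionResult Details()
            {
                var model = new UserDetailsModel { Name = "Jane", Email = "jane@example.com" };
                return View(model);   // the view is typed to UserDetailsModel
            }
        }

    Inside the view, Html.RenderPartial("User_Details", Model) can then pass the same model down to the .ascx user control.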

  • Speaking at Sinergija12

    - by DigiMortal
    Next week I will be speaking at Sinergija12, the biggest Microsoft conference held in Serbia. The first time I visited Sinergija, it was clear to me that this is an event I should go back to. Why? Because the technical level of the sessions was very well in place, and the sessions I visited were actually pretty hardcore. Now, two years later, I will be back there, but this time as a speaker.

    My sessions at Sinergija12
    Here are my three almost-finished sessions for Sinergija12.

    ASP.NET MVC 4 Overview
    This session focuses on the new features of ASP.NET MVC 4 and gives the audience a good overview of what is coming. Demos cover all the important new features: agent-based output, new application templates, Web API, and Single Page Applications. This session is for everybody who plans to move to ASP.NET MVC 4 or who plans to start building modern web sites.

    Building SharePoint Online applications using Napa Office 365
    The next version of Office 365 allows you to build SharePoint applications using a browser-based IDE hosted in the cloud. This session introduces the new tools and shows, through practical examples, how to build online applications for SharePoint 2013.

    Cloud-enabling ASP.NET MVC applications
    The cloud era is here, and over the next few years more and more web applications will be hosted in cloud environments; some of our current web applications will also be moved to the cloud. This session shows the audience how to change the architecture of an ASP.NET web application so it runs on shared hosting and Windows Azure with the same code base. The audience will also see how to debug and deploy web applications to Windows Azure.

    All developers who are coming to Sinergija12 are welcome to my sessions. See you there! :)

  • What's the correct approach for passing data from several models into a service?

    - by Doug Chamberlain
    I have an AccountModel and a page where the user can upload a file. What I would like to have happen when the user uploads the file is that the PageController does something like the following (a quick attempt, written in the question just to illustrate it):

        public class PageController : Controller
        {
            private Service service;

            public ActionResult Upload(HttpPostedFileBase f)
            {
                service.SaveFile(f, AccountModel.CurrentlyLoggedInUser.TaxId);
                return View();
            }
        }

        public class Service
        {
            // a bunch of validation and error checking to make sure the file is good to store
        }

    Wouldn't this approach be bad practice, since I'm making my controller dependent on the existence of the AccountModel? This will become a HUGE program over the next few years, and I really want to maximize the quality of the framework now.
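
    One common way to avoid that dependency is to keep the service ignorant of the AccountModel: the controller obtains the tax ID through an injected abstraction and passes plain data to the service. A sketch of that shape (names are illustrative):

        // The service takes only the data it needs; the controller resolves
        // the current user's tax ID through an abstraction it can swap or
        // mock in tests.
        using System.Web;
        using System.Web.Mvc;

        public interface ICurrentUserContext
        {
            string TaxId { get; }   // implemented over the AccountModel elsewhere
        }

        public class UploadService
        {
            public void SaveFile(HttpPostedFileBase file, string taxId)
            {
                // validation and error checking before storing the file
            }
        }

        public class PageController : Controller
        {
            private readonly UploadService service;
            private readonly ICurrentUserContext user;

            public PageController(UploadService service, ICurrentUserContext user)
            {
                this.service = service;
                this.user = user;
            }

            public ActionResult Upload(HttpPostedFileBase f)
            {
                service.SaveFile(f, user.TaxId);   // plain data crosses the boundary
                return View();
            }
        }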

  • What are the caveats of the event system built on Messenger rather than on classic .NET events?

    - by voroninp
    MVVM Light and PRISM offer a messenger to implement an event system. The approximate interface looks like the following:

        interface Messenger
        {
            void Subscribe<TMessageParam>(Action<TMessageParam> action);
            void Unsubscribe<TMessageParam>(Action<TMessageParam> action);
            void Unsubscribe<TMessageParam>(object actionOwner);
            void Notify<TMessageParam>(TMessageParam param);
        }

    Now, this model seems beneficial compared to classic .NET events. It works well with dependency injection, actions are stored as weak references so memory leaks are avoided, and unsubscribing is not a must. The only annoyance is the need to declare a new TMessageParam for each specific message. But everything comes at a cost, and what I'm really worried about is that I see no shortcomings of this approach. Has anyone experienced trouble with this design pattern?
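
    For readers unfamiliar with the pattern, the sketch below shows a minimal in-memory implementation of that interface plus a sample message type. Unlike MVVM Light's messenger, this illustrative version holds strong references, so subscribers must unsubscribe to avoid leaks:

        using System;
        using System.Collections.Generic;

        class SimpleMessenger
        {
            private readonly Dictionary<Type, List<Delegate>> subscribers =
                new Dictionary<Type, List<Delegate>>();

            public void Subscribe<TMessageParam>(Action<TMessageParam> action)
            {
                if (!subscribers.TryGetValue(typeof(TMessageParam), out var list))
                    subscribers[typeof(TMessageParam)] = list = new List<Delegate>();
                list.Add(action);
            }

            public void Unsubscribe<TMessageParam>(Action<TMessageParam> action)
            {
                if (subscribers.TryGetValue(typeof(TMessageParam), out var list))
                    list.Remove(action);
            }

            public void Notify<TMessageParam>(TMessageParam param)
            {
                if (subscribers.TryGetValue(typeof(TMessageParam), out var list))
                    foreach (Action<TMessageParam> action in list.ToArray())
                        action(param);
            }
        }

        // Each message gets its own parameter type, as the question notes.
        class OrderShipped { public int OrderId; }

        class Demo
        {
            static void Main()
            {
                var messenger = new SimpleMessenger();
                messenger.Subscribe<OrderShipped>(m => Console.WriteLine("Shipped " + m.OrderId));
                messenger.Notify(new OrderShipped { OrderId = 42 });   // prints "Shipped 42"
            }
        }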

  • Is there a canonical source for learning C# and .NET internals?

    - by ta269uec
    I have been a C++ programmer for the last several years, with a bit of C# here and there. In my latest job, I work heavily with C#. I picked up most of my C# by following the code base or doing random Google searches for what I wanted to do at that point (like threading in C#). I feel the time has come to invest some time in understanding the language internals and also understanding the .NET Framework from an architectural point of view. Can someone recommend the "gold standard" text/resource for accomplishing this? What about that resource makes it the best? What were your experiences with it?
