Search Results

Search found 71852 results on 2875 pages for 'data load'.


  • Starting to construct a data access layer. Things to consider?

    - by Phil
    Our organisation uses inline SQL. We have been tasked with providing a suitable data access layer and are weighing up the pros and cons of which way to go:

    - DataSets
    - ADO.NET
    - LINQ
    - Entity Framework
    - SubSonic
    - Other?

    Some tutorials and articles I have been using for reference:

    - http://www.asp.net/(S(pdfrohu0ajmwt445fanvj2r3))/learn/data-access/tutorial-01-vb.aspx
    - http://www.simple-talk.com/dotnet/.net-framework/designing-a-data-access-layer-in-linq-to-sql/
    - http://msdn.microsoft.com/en-us/magazine/cc188750.aspx
    - http://msdn.microsoft.com/en-us/library/aa697427(VS.80).aspx
    - http://www.subsonicproject.com/

    I'm extremely torn and finding it very difficult to make a decision on which way to go. Our site is a series of two internal portals and a public web site. We are using VS2008 SP1 and framework version 3.5. Please can you give me advice on what factors to consider, and any pros and cons you have faced with your data access layer? Thanks.
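    Whichever technology wins the evaluation, one factor worth weighing is how easily the decision can be reversed later. Below is a minimal C# sketch of hiding the choice behind a repository interface; the Customer type and MyDataContext are hypothetical placeholders, not from the question, and LINQ to SQL is just one of the candidates listed above:

        using System.Collections.Generic;
        using System.Linq;

        public interface ICustomerRepository
        {
            Customer GetById(int id);
            IList<Customer> GetAll();
            void Save(Customer customer);
        }

        // One possible implementation. Swapping to Entity Framework or
        // SubSonic later means writing another implementation, not
        // touching the portal code that consumes the interface.
        public class LinqToSqlCustomerRepository : ICustomerRepository
        {
            private readonly MyDataContext _context;

            public LinqToSqlCustomerRepository(MyDataContext context)
            {
                _context = context;
            }

            public Customer GetById(int id)
            {
                return _context.Customers.Single(c => c.Id == id);
            }

            public IList<Customer> GetAll()
            {
                return _context.Customers.ToList();
            }

            public void Save(Customer customer)
            {
                // Hypothetical convention: Id == 0 means a new record.
                if (customer.Id == 0)
                {
                    _context.Customers.InsertOnSubmit(customer);
                }
                _context.SubmitChanges();
            }
        }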

    Read the article

  • Using Core Data Concurrently and Reliably

    - by John Topley
    I'm building my first iOS app, which in theory should be pretty straightforward, but I'm having difficulty making it sufficiently bulletproof for me to feel confident submitting it to the App Store. Briefly, the main screen has a table view; upon selecting a row it segues to another table view that displays information relevant to the selected row in a master-detail fashion. The underlying data is retrieved as JSON from a web service once a day and then cached in a Core Data store. The data from before that day is deleted to stop the SQLite database file from growing indefinitely. All data persistence operations are performed using Core Data, with an NSFetchedResultsController underpinning the detail table view.

    The problem I am seeing is that if you switch quickly between the master and detail screens several times whilst fresh data is being retrieved, parsed and saved, the app freezes or crashes completely. There seems to be some sort of race condition, maybe due to Core Data importing data in the background whilst the main thread is trying to perform a fetch, but I'm speculating. I've had trouble capturing any meaningful crash information; usually it's a SIGSEGV deep in the Core Data stack.

    The table below shows the actual order of events that happen when the detail table view controller is loaded:

        Main Thread                                  Background Thread
        --------------------------------------------------------------------------------
        viewDidLoad
                                                     Get JSON data (using AFNetworking)
        Create child NSManagedObjectContext (MOC)
                                                     Parse JSON data
                                                     Insert managed objects in child MOC
                                                     Save child MOC
                                                     Post import completion notification
        Receive import completion notification
        Save parent MOC
        Perform fetch and reload table view
                                                     Delete old managed objects in child MOC
                                                     Save child MOC
                                                     Post deletion completion notification
        Receive deletion completion notification
        Save parent MOC

    Once the AFNetworking completion block is triggered when the JSON data has arrived, a nested NSManagedObjectContext is created and passed to an "importer" object that parses the JSON data and saves the objects to the Core Data store. The importer executes using the new performBlock method introduced in iOS 5:

        NSManagedObjectContext *child = [[NSManagedObjectContext alloc]
            initWithConcurrencyType:NSPrivateQueueConcurrencyType];
        [child setParentContext:self.managedObjectContext];
        [child performBlock:^{
            // Create importer instance, passing it the child MOC...
        }];

    The importer object observes its own MOC's NSManagedObjectContextDidSaveNotification and then posts its own notification, which is observed by the detail table view controller. When this notification is posted, the table view controller performs a save on its own (parent) MOC. I use the same basic pattern with a "deleter" object for deleting the old data after the new data for the day has been imported. This occurs asynchronously, after the new data has been fetched by the fetched results controller and the detail table view has been reloaded.

    One thing I am not doing is observing any merge notifications or locking any of the managed object contexts or the persistent store coordinator. Is this something I should be doing? I'm a bit unsure how to architect this all correctly, so I would appreciate any advice.
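    One pattern that addresses this kind of race is making sure the parent context is only ever saved on its own queue, rather than synchronously from whichever thread delivers the completion notification. A minimal Objective-C sketch, assuming self.managedObjectContext was created with NSMainQueueConcurrencyType (so performBlock: is legal on it); the importer details are elided:

        NSManagedObjectContext *child = [[NSManagedObjectContext alloc]
            initWithConcurrencyType:NSPrivateQueueConcurrencyType];
        child.parentContext = self.managedObjectContext;

        [child performBlock:^{
            // ... parse JSON and insert managed objects into the child MOC ...
            NSError *childError = nil;
            if ([child save:&childError]) {
                // Hop onto the parent's queue before saving it. Saving a
                // main-queue context directly from a private queue is a
                // classic source of SIGSEGV crashes like the one described.
                [self.managedObjectContext performBlock:^{
                    NSError *parentError = nil;
                    [self.managedObjectContext save:&parentError];
                }];
            }
        }];

    With nested contexts, the child's save already pushes its changes up into the parent, so merge-notification observing and explicit locking are generally unnecessary for this path; the remaining risk is touching a context from the wrong queue.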

    Read the article

  • How to get child nodes after store load

    - by Azincourt
    Version: ExtJS 4.1

    To change the child items I use this function in the TreeStore (Ext.data.TreeStore):

        storeId : 'treeStore',
        ...
        constructor: function( oConfig ) {
            ...
            this.on( 'expand', function( oObj ) {
                oObj.eachChild(function( oNode ) {
                    switch (oNode.data.type) {
                        case "report": oNode.set('icon', strIconReport); break;
                        case "view":   oNode.set('icon', strIconView);   break;
                    }
                });
            });
        }

    Reload: after removing or adding items in the tree, I reload the tree somewhere else with:

        var oStore = Ext.getStore('treeStore');
        oStore.load({
            node   : oNode,
            params : {
                newpath   : oNode.data.path,
                overwrite : true
            }
        });

    Although it is the same treeStore, after loading and expanding to the correct path the icons are not changed, since the .on('expand') handler is not called. Why?

    Question: how can I change the icons of this newly loaded store before it expands to the node path?

    What I tried: before calling .load() I tried to edit the children with oNode.eachChild(function(oChild) {}), but with no success.
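    One option is to apply the icons from the load operation's own callback, so they are set as soon as the fresh children exist. A minimal sketch, assuming TreeStore.load here accepts a callback option the way a regular Ext.data.Store load does in ExtJS 4:

        var oStore = Ext.getStore('treeStore');
        oStore.load({
            node     : oNode,
            params   : { newpath : oNode.data.path, overwrite : true },
            callback : function( records, operation, success ) {
                if (success) {
                    // Same icon logic as the 'expand' handler, run once
                    // the freshly loaded children actually exist.
                    oNode.eachChild(function( oChild ) {
                        switch (oChild.data.type) {
                            case 'report': oChild.set('icon', strIconReport); break;
                            case 'view':   oChild.set('icon', strIconView);   break;
                        }
                    });
                }
            }
        });

    This would also explain why editing the children before calling .load() had no effect: at that point the old child nodes are about to be replaced by the newly loaded ones.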

    Read the article

  • How to store and collect data for mining such information as most viewed for last 24 hours, last 7 days

    - by Kirzilla
    Hello. Let's imagine we have a high-traffic project (a tube site) which should provide sorting using these options (NOT IN REAL TIME). The number of videos is about 200K, and all information about the videos is stored in MySQL. The number of daily video views is about 1.5 million. As instruments we have the hard disk drive (text files), MySQL, and Redis.

    Views to report:

    - top viewed
    - top viewed last 24 hours
    - top viewed last 7 days
    - top viewed last 30 days
    - top rated last 365 days

    How should I store such information? The first idea is to log all visits to text files (a single file per hour, for example visits_20080101_00.log). At the beginning of each hour, calculate the views per video for the previous hour and insert this information into MySQL, then recalculate the totals (for the last 24 hours) and update the statistics tables. At the beginning of every day we have to do the same, but recalculate for the last 7 days, last 30 days, and last 365 days. This method seems very poor to me because we have to store information about the last 365 days for each video to make correct calculations. Are there any other good methods? Perhaps we should choose other instruments for this? Thank you.
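    Since Redis is already on the instrument list, one alternative to recalculating everything from log files is a rolling set of hourly counters. A minimal Python sketch, assuming a recent redis-py client (key names are made up for illustration); the 7/30/365-day totals could still be rolled up into MySQL hourly, as described above:

        import time

        import redis

        r = redis.Redis()
        HOUR_FMT = "%Y%m%d%H"

        def log_view(video_id):
            """Increment this hour's counter; each hour is its own sorted set."""
            key = "views:" + time.strftime(HOUR_FMT)
            r.zincrby(key, 1, video_id)
            r.expire(key, 25 * 3600)  # keep slightly longer than the 24h window

        def top_viewed_24h(limit=10):
            """Union the last 24 hourly sets into a rolling leaderboard."""
            now = int(time.time())
            keys = ["views:" + time.strftime(HOUR_FMT, time.localtime(now - h * 3600))
                    for h in range(24)]
            r.zunionstore("views:last24h", keys)
            return r.zrevrange("views:last24h", 0, limit - 1, withscores=True)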

    Read the article

  • jQuery .load() call doesn't execute JavaScript in loaded HTML file

    - by Mike
    This seems to be a problem related to Safari only. I've tried 4 on Mac and 3 on Windows and am still having no luck. What I'm trying to do is load an external HTML file and have the JavaScript that is embedded execute. The code I'm trying to use is this:

        $("#myBtn").click(function() {
            $("#myDiv").load("trackingCode.html");
        });

    trackingCode.html looks like this (simple now, but will expand once/if I get this working):

        <html>
        <head>
            <title>Tracking HTML File</title>
            <script language="javascript" type="text/javascript">
                alert("outside the jQuery ready");
                $(function() {
                    alert("inside the jQuery ready");
                });
            </script>
        </head>
        <body>
        </body>
        </html>

    I'm seeing both alert messages in IE (6 & 7) and Firefox (2 & 3). However, I am not able to see the messages in Safari (the last browser that I need to be concerned with; project requirements; please no flame wars). Any thoughts on why Safari is ignoring the JavaScript in the trackingCode.html file? Eventually I'd like to be able to pass JavaScript objects to this trackingCode.html file to be used within the jQuery ready call, but I'd like to make sure this is possible in all browsers before I go down that road. Thanks for your help!
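    If the goal is to run tracking code on demand, one way to sidestep .load()'s script handling altogether is to serve the logic as a plain script file. A minimal sketch; trackingCode.js is a hypothetical rename of the script portion of trackingCode.html:

        $("#myBtn").click(function() {
            // Fetches and executes the script in every browser; the callback
            // is a hook for passing JavaScript objects to the tracking code.
            $.getScript("trackingCode.js", function() {
                alert("tracking script loaded and executed");
            });
        });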

    Read the article

  • Used HDD shows 40,000 power-on hours in DiskSmartView: should I trust it with my data, or take it back and complain?

    - by David Lindsay
    I just bought a used hard drive from a university surplus store and decided to run DiskSmartView to make sure it wasn't ready to fail. It reports 40,000 power-on hours, which is roughly 4.5 years of continuous operation. I don't know if I feel like trusting my data to something that used. I really don't know if that's unreasonably old, but when I compare it to the power-on-hours readings I get when testing my other HDDs, it's more than 3x older (my others have 2,110 hours, 6,150 hours, etc.). It's a Western Digital, so that gives me a little bit of hope (WDC WD4000KD-00NAB0). I could sure use someone else's opinion here. Thanks, Dave

    Read the article

  • Windows Azure Recipe: Big Data

    - by Clint Edmonson
    As the name implies, what we're talking about here is the explosion of electronic data that comes from huge volumes of transactions, devices, and sensors being captured by businesses today. This data often comes in unstructured formats and/or too fast for us to process effectively in real time. Collectively, we call these qualities the four big data V's: Volume, Velocity, Variety, and Variability. They make this type of data best managed by NoSQL systems like Hadoop, rather than by a conventional relational database management system (RDBMS). We know that there are patterns hidden inside this data that might provide competitive insight into market trends. The key is knowing when and how to leverage these NoSQL tools combined with traditional business systems such as SQL-based relational databases and warehouses, and other business intelligence tools.

    Drivers

    - Petabyte-scale data collection and storage
    - Business intelligence and insight

    Solution

    The sketch below shows one of many big data solutions, using Hadoop's unique highly scalable storage and parallel processing capabilities combined with Microsoft Office's business intelligence components to access the data in the cluster.

    Ingredients

    - Hadoop: this big data industry heavyweight provides both large-scale data storage infrastructure and a highly parallelized map-reduce processing engine to crunch through the data efficiently. The key pieces of the environment are:
      - Pig: a platform for analyzing large data sets that consists of a high-level language for expressing data analysis programs, coupled with infrastructure for evaluating these programs.
      - Mahout: a machine learning library with algorithms for clustering, classification, and batch-based collaborative filtering, implemented on top of Apache Hadoop using the map/reduce paradigm.
      - Hive: data warehouse software built on top of Apache Hadoop that facilitates querying and managing large datasets residing in distributed storage. Directly accessible to Microsoft Office and other consumers via add-ins and the Hive ODBC data driver (a small HiveQL sketch follows this entry).
      - Pegasus: a peta-scale graph mining system that runs in a parallel, distributed manner on top of Hadoop and provides algorithms for important graph mining tasks such as Degree, PageRank, Random Walk with Restart (RWR), Radius, and Connected Components.
      - Sqoop: a tool designed for efficiently transferring bulk data between Apache Hadoop and structured data stores such as relational databases.
      - Flume: a distributed, reliable, and available service for efficiently collecting, aggregating, and moving large amounts of log data to HDFS.
    - Database: directly accessible to Hadoop via the Sqoop-based Microsoft SQL Server Connector for Apache Hadoop, so data can be efficiently transferred to traditional relational data stores for replication, reporting, or other needs.
    - Reporting: provides easily consumable reporting when combined with a database being fed from the Hadoop environment.

    Training

    These links point to online Windows Azure training labs where you can learn more about the individual ingredients described above.

    - Hadoop Learning Resources (20+ tutorials and labs): a huge collection of resources for learning about all aspects of Apache Hadoop-based development on Windows Azure and the Hadoop and Windows Azure ecosystems.
    - SQL Azure (7 labs): Microsoft SQL Azure delivers on the Microsoft Data Platform vision of extending SQL Server capabilities to the cloud as web-based services, enabling you to store structured, semi-structured, and unstructured data.

    See my Windows Azure Resource Guide for more guidance on how to get started, including links to web portals, training kits, samples, and blogs related to Windows Azure.
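    As a taste of the Hive ingredient above, a minimal HiveQL sketch (table and column names are hypothetical): it maps an external table over tab-delimited log files already sitting in the cluster, then summarizes them into something Excel can pull over the Hive ODBC driver.

        -- Map an external table over raw log files already in HDFS.
        CREATE EXTERNAL TABLE clicks (user_id STRING, url STRING, ts STRING)
        ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t'
        LOCATION '/data/clicks';

        -- Summarize; Hive compiles this down to map-reduce jobs on the cluster.
        SELECT url, COUNT(*) AS views
        FROM clicks
        GROUP BY url
        ORDER BY views DESC
        LIMIT 10;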

    Read the article

  • SQL SERVER – Load Generator – Free Tool From CodePlex

    - by pinaldave
    One of the most common questions I receive is whether there is any tool available to generate load on SQL Server. Absolutely: there is a fabulous free tool for generating load on SQL Server available on CodePlex. It was released in 2008, but it is still extremely relevant and works fabulously. CodePlex is a project initiated by Microsoft for hosting open source software. The best part of this SQL Server Load Generator is that users can run multiple simultaneous queries against SQL Server using different login accounts and different application names. The interface of the tool is extremely easy to use and very intuitive as well. One thing I felt needed improvement was the default configuration: every time I added a query, the default settings showed up and I had to change them manually. However, when I went to Menu >> Tools >> Options I was really happy, as it has options to change every single default. Here one can set a default username, password, and database name, as well as various other configuration settings. Additionally, application logging is also possible through the options. A couple of other useful details I noticed were the button to reset counters and a status bar containing total threads, completed queries, and failed queries. I use this tool frequently for my load testing. What tool do you use as a SQL Server load generator?

    Download SQL Load Generator

    Reference: Pinal Dave (http://blog.sqlauthority.com)

    Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQL Utility, T SQL, Technology

    Read the article

  • Master Data Management

    - by Logicalj
    I am looking for a very flexible, easy-to-integrate, and dynamic application with as many features as possible for Master Data Management. Since Master Data Management is used to manage operational data, analytical data, and master data, I want guidance on what exactly is expected from Master Data Management, and what the basic and challenging scenarios are that it must cover or resolve. Please guide me on all the possible aspects of Master Data Management, such as data cleansing, data management, and getting started with data analysis.

    Read the article

  • What is the architectural name for the set of data that enables UI choices?

    - by Richard Collette
    I have separate service methods that fetch business object data and the data for UI selection input such as radio buttons, check-boxes, combo-boxes, etc. I want to name my service methods that fetch the selection data appropriately. I am assuming that Model and ViewModel would not be part of the name because the selection data is but a portion of the Model or ViewModel. What might this set of data be named such that I can name my service method?

    Read the article

  • jQuery load not loading in IE6

    - by Moak
    I've got a simple jQuery script working in all browsers but IE:

        j('.slide').click(function(){
            j('#content').load('/menu.php');
            return false;
        });

    Here's the URL: http://identitykit.gotdns.org/ It's valid XHTML, and the other similar questions don't answer my problem. menu.php starts with:

        <?php
        header("Cache-Control: no-cache, must-revalidate"); // HTTP/1.1
        header("Expires: Sat, 26 Jul 1997 05:00:00 GMT");   // Date in the past
        ?>
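    A common culprit in IE6 specifically is its aggressive caching of Ajax GET requests, sometimes even with the no-cache headers shown above. A minimal sketch (assuming j is the jQuery alias from noConflict, as in the snippet): disabling the Ajax cache makes jQuery append a unique timestamp parameter to every request.

        j.ajaxSetup({ cache: false });  // appends _=<timestamp> to GET URLs

        j('.slide').click(function() {
            j('#content').load('/menu.php');
            return false;
        });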

    Read the article

  • F5 Load Balancer - ASPXAuth Cookie

    - by Emon
    Can somebody explain what the ASPXAuth cookie does? My website uses forms authentication, and I am trying to create a (hardware) load balancer rule which will keep track of sessions based on the ASPXAuth cookie. Is it safe to assume that the value of the cookie is unique? Thanks.

    Read the article

  • Remote controller load testing

    - by SonOfOmer
    Hi everyone, I am trying to run a load test from a remote controller and I get this error:

        Failed to queue test run 'username@computername 2010-04-15 06:06:03':
        Object of type 'Microsoft.VisualStudio.TestTools.LoadTesting.LoadTestConstantLoadProfile'
        cannot be converted to type 'Microsoft.VisualStudio.TestTools.WebStress.WebTestLoadProfile'.

    Running a unit test from the remote controller works just fine. Thanks.

    Read the article

  • How to load a page and then run a jQuery function

    - by Khawar
    On a master page I put this code to load a page into the same master page and then run a jQuery function. I am trying this:

        $(document).ready(function() {
            $("#mainContactLink").click(function() {
                $(body).open("about-us.aspx", self);
                $("#abtContacts").fadeIn(1000).siblings("div").fadeOut(100);
                $("#abtContacts").siblings();
            });
        });

    It's not working. Any ideas?
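    For comparison, a minimal sketch of the usual pattern, assuming about-us.aspx returns the markup to inject and #content is a hypothetical container ($(body).open(...) above is not a jQuery API): .load() takes a completion callback, which is the right place to run code against the newly inserted content.

        $(document).ready(function() {
            $("#mainContactLink").click(function(e) {
                e.preventDefault();
                $("#content").load("about-us.aspx", function() {
                    // Runs only after the new markup has been inserted.
                    $("#abtContacts").fadeIn(1000).siblings("div").fadeOut(100);
                });
            });
        });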

    Read the article

  • Load/Store Objects in a File in Java

    - by brain_damage
    I want to store an object of my class in a file, and after that to be able to load the object back from this file. But somewhere I am making a mistake (or several) and cannot figure out where. May I receive some help?

        import java.io.*;
        import java.util.*;

        public class GameManagerSystem implements GameManager, Serializable {

            private static final long serialVersionUID = -5966618586666474164L;

            HashMap<Game, GameStatus> games;
            HashMap<Ticket, ArrayList<Object>> baggage;
            HashSet<Ticket> bookedTickets;
            Place place;

            public GameManagerSystem(Place place) {
                super();
                this.games = new HashMap<Game, GameStatus>();
                this.baggage = new HashMap<Ticket, ArrayList<Object>>();
                this.bookedTickets = new HashSet<Ticket>();
                this.place = place;
            }

            public static GameManager createManagerSystem(Place at) {
                return new GameManagerSystem(at);
            }

            public boolean store(File f) {
                try {
                    FileOutputStream fos = new FileOutputStream(f);
                    ObjectOutputStream oos = new ObjectOutputStream(fos);
                    oos.writeObject(games);
                    oos.writeObject(bookedTickets);
                    oos.writeObject(baggage);
                    oos.close();
                    fos.close();
                } catch (IOException ex) {
                    return false;
                }
                return true;
            }

            public boolean load(File f) {
                try {
                    FileInputStream fis = new FileInputStream(f);
                    ObjectInputStream ois = new ObjectInputStream(fis);
                    this.games = (HashMap<Game, GameStatus>) ois.readObject();
                    this.bookedTickets = (HashSet<Ticket>) ois.readObject();
                    this.baggage = (HashMap<Ticket, ArrayList<Object>>) ois.readObject();
                    ois.close();
                    fis.close();
                } catch (IOException e) {
                    return false;
                } catch (ClassNotFoundException e) {
                    return false;
                }
                return true;
            }

            // ...
        }

        import java.io.File;
        import java.util.Date;

        import org.junit.Before;
        import org.junit.Test;
        import static org.junit.Assert.assertTrue;

        public class JUnitDemo {

            GameManager manager;

            @Before
            public void setUp() {
                manager = GameManagerSystem.createManagerSystem(Place.ENG);
            }

            @Test
            public void testStore() {
                Game g = new Game(new Date(), Teams.LIONS, Teams.SHARKS);
                manager.registerGame(g);
                File file = new File("file.ser");
                assertTrue(manager.store(file));
            }
        }

    Read the article

  • MySQL: LOAD DATA reclaim disk space after delete

    - by Michael
    I have a DB schema composed of MyISAM tables, and I am interested in deleting old records from some of the tables from time to time. I know that DELETE does not reclaim the disk space but, as I found in the description of the DELETE command, inserts may reuse the deleted space:

        In MyISAM tables, deleted rows are maintained in a linked list and
        subsequent INSERT operations reuse old row positions.

    I am interested in whether the LOAD DATA command also reuses the deleted space.

    UPDATE: I am also interested in how the index space is reclaimed.
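    For reclaiming the space outright rather than waiting for row reuse, a MyISAM table can be rebuilt in place. A minimal SQL sketch (the table name is hypothetical); note that OPTIMIZE TABLE locks the table while it runs, so it belongs in a maintenance window:

        -- Defragments the data file, reclaims space left by deleted rows,
        -- and rebuilds and re-sorts the indexes of a MyISAM table.
        OPTIMIZE TABLE my_table;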

    Read the article

  • www-data can upload a file but can't move it after the upload action

    - by user70058
    I am currently running Apache and PHP on Ubuntu. I have a page where a user is supposed to upload a profile image. The action on the backend is supposed to work like this:

    1. Upload the file to the user directory. WORKS!
    2. Refer to the uploaded file and create a thumbnail in the directory thumbs. DOES NOT WORK.

    www-data has write access to the directory thumbs. My guess is that www-data for some reason does not have proper access to the file that was uploaded.

    Uploaded file permissions:

        -rw-r--r-- 1 www-data www-data 47057 Feb 8 23:24 0181c6e0973eb19cb0d98521a6fe1d9e71cd6daa.jpg

    thumbs directory permissions:

        drwxr-sr-x 2 www-data www-data 4096 Feb 8 23:23 thumbs

    I'm at a loss here, and I'm new to Ubuntu as well. Any help would be greatly appreciated!
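    Given those listings, www-data can read the upload (owner rw) and write to thumbs (owner rwx), so the next step is to surface the actual error instead of failing silently. A minimal PHP sketch; the paths and the use of GD for the thumbnail are assumptions, not taken from the question:

        <?php
        // Hypothetical paths; substitute the real upload and thumbs locations.
        $src = '/var/www/uploads/user1/profile.jpg';
        $dst = '/var/www/uploads/user1/thumbs/profile_thumb.jpg';

        // Verify the two permissions that matter, as the www-data user.
        if (!is_readable($src)) { die("www-data cannot read $src"); }
        if (!is_writable(dirname($dst))) { die("www-data cannot write to " . dirname($dst)); }

        // Bare-bones GD thumbnail, checking every step.
        list($w, $h) = getimagesize($src);
        $tw = 100;                       // thumbnail width in pixels
        $th = (int) ($h * $tw / $w);     // preserve the aspect ratio
        $img = imagecreatefromjpeg($src);
        if ($img === false) { die("imagecreatefromjpeg failed for $src"); }
        $thumb = imagecreatetruecolor($tw, $th);
        imagecopyresampled($thumb, $img, 0, 0, 0, 0, $tw, $th, $w, $h);
        if (!imagejpeg($thumb, $dst)) { die("imagejpeg failed writing $dst"); }
        echo "thumbnail written to $dst";
        ?>

    If the is_writable check fails even with the permissions shown, the usual suspects are a relative path resolving somewhere unexpected or PHP's open_basedir restriction.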

    Read the article
