Search Results

Search found 71953 results on 2879 pages for 'load data infile'.


  • Is there a difference between transient properties defined in the data model and in the custom subclass?

    - by mystify
    I was reading that setting the value of a transient property always results in marking the managed object as "dirty". However, what I don't get is this: if I make a subclass of NSManagedObject and add some extra properties which I don't need to be persisted, how does Core Data know about them, and how can it mark the object as dirty when I access them? Again, they're not defined in the data model, so Core Data has no real hint that they are there. Or does Core Data use some kind of introspection to analyze my custom class and figure out what properties I have in there?

    Read the article

  • Python: how to put data on the y-axis when plotting a histogram

    - by user3041107
    I don't quite understand how to control the y-axis when using plt.hist in Python. I read my .txt data file - it contains 10 columns with various data. If I want to plot the distribution of strain on the x-axis, I take column 5. But what kind of value appears on the y-axis? I don't understand that. Here is the code:

        import numpy
        import matplotlib.pyplot as plt
        from pylab import *
        from scipy.stats import norm
        import sys

        strain = []
        infile = sys.argv[1]
        for line in infile:
            ret = numpy.loadtxt(infile)
            strain += list(ret[:,5])

        fig = plt.figure()
        plt.hist(strain, bins = 20)
        plt.show()

    Thanks for help!
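
    For what it's worth, plt.hist puts the number of samples falling in each bin on the y-axis by default; passing density=True (normed=True in older Matplotlib releases) rescales the bars so the histogram integrates to 1. A small self-contained sketch, where the file name and column index are illustrative:

        import numpy as np
        import matplotlib.pyplot as plt

        data = np.loadtxt("observations.txt")    # hypothetical file with 10 numeric columns
        strain = data[:, 5]

        fig, (ax_counts, ax_density) = plt.subplots(1, 2)
        ax_counts.hist(strain, bins=20)                  # y-axis: count of samples per bin
        ax_counts.set_ylabel("samples per bin")
        ax_density.hist(strain, bins=20, density=True)   # y-axis: probability density
        ax_density.set_ylabel("probability density")
        plt.show()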

    Read the article

  • jQuery load function not working

    - by pradeep
        function newPage(pagenum) {
            /* load page default from server - pass product name */
            $('#data').html("<div id='response'>Loading.....</div>").load(
                '/college/college_change.php', {
                    product: 'college',
                    city: "<?php echo $city ?>",
                    university: "<?php echo $university ?>",
                    programmes: "<?php $programmes ?>",
                    type: "<?php echo $type ?>",
                    entrance_exams: "<?php echo $entrance_exams ?>",
                    pagenum: pagenum
                });
        }

    I am using this load function; it works well in most browsers, but in IE it does not load the data.

    Read the article

  • R: How to write out a data.frame so that I can paste it into SO for others to read?

    - by John
    I have a large data.frame displaying some weird properties when plotted. I'd like to ask a question about it on Stack Overflow. To do that, I'd like to write the data.frame out in a form that I can paste into SO so that somebody else can easily run it and get it back into a data.frame object again. Is there an easy way to accomplish this? Also, if it is really long, should I use a pastebin instead of pasting it directly here?

    Read the article

  • Mass data storage with SQL Server

    - by Leo
    We need to manage 10,000 GPS devices; each GPS device uploads a GPS record every 30 seconds, and these records need to be stored in the database (MS SQL Server 2005). Each GPS device's daily data quantity is: 24 * 60 * 2 = 2,880 records. The 10,000 GPS devices' daily data quantity is: 10,000 * 2,880 = 28,800,000 records. Each GPS record is approximately 160 bytes, so the amount of data per day is: 28,800,000 * 160 = 4.29 GB. We need to hold at least 3 months of GPS data in the database. My questions are: 1. Can SQL Server 2005 support such a large amount of data? 2. How should I plan the data tables? (Store all GPS data in one table? A table per day? A table per GPS device?) The GPS record: GPSID varchar(21), RecvTime datetime, GPSTime datetime, IsValid bit, IsNavi bit, Lng float, Lat float, Alt float, Spd smallint, Head smallint, PulseValue bigint, Oil float, TSW1 bigint, TSW1Mask bigint, TSW2 bigint, TSW2Mask bigint, BSW bigint, StateText varchar(200), PosText varchar(200), UploadType tinyint
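
    As a quick sanity check on those numbers, here is a small Python sketch of the sizing arithmetic (the 160-byte row size is the figure quoted above, not a measured value):

        devices = 10_000
        records_per_device_per_day = 24 * 60 * 2              # one record every 30 seconds
        rows_per_day = devices * records_per_device_per_day   # 28,800,000 rows/day
        bytes_per_row = 160                                    # approximate row size from the post
        gib_per_day = rows_per_day * bytes_per_row / 1024**3   # ~4.29 GiB/day
        rows_90_days = rows_per_day * 90                       # ~2.6 billion rows for 3 months

        print(f"{rows_per_day:,} rows/day, {gib_per_day:.2f} GiB/day, {rows_90_days:,} rows/90 days")

    At roughly 2.6 billion rows per quarter, some form of date-based partitioning (for example, one table or partition per day) is the usual way to keep purging 3-month-old data cheap.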

    Read the article

  • C# or Windows equivalent of OS X's Core Data?

    - by Nektarios
    I'm late to the boat and have only just now started using Core Data in OS X / Cocoa - it's incredible and is really changing the way I look at things. Is there an equivalent technology in C# or the modern Windows frameworks? i.e. having managed data types where you get saving, data management, deleting, searching all for free? Also wondering if there's anything like this on Linux.

    Read the article

  • What is the most efficient way to use Core Data?

    - by Eric
    I'm developing an iPad application using Core Data, and was hoping someone could clarify something about Core Data. Right now, I populate my table by making a fetch request for all of my data in viewDidLoad. I'd rather make individual fetch requests in my tableView:cellForRowAtIndexPath:. Can anyone tell me which is more efficient, and why? In other words, is it much less efficient to make lots of small requests as opposed to one big request?

    Read the article

  • What happens if a user jumps over 10 versions before updating, and every version had a new data model?

    - by dontWatchMyProfile
    Example: a user installs app v.1.0 and adds data. Then the dev submits 10 updates in 10 weeks. After 11 weeks, the user wants v.11.0 and grabs a copy from the App Store. Assuming that the app has 11 .xcdatamodel versions inside, where *11.xcdatamodel is the current one, what would happen now, since the user's persistent store is ages old? Would the migration happen 10 times, step-by-step through every migration iteration? Or does the actual migration of data (let's assume gigabytes of data) happen exactly once, after Core Data (or the persistent store coordinator) has figured out precisely what to do to go from v.1.0 to v.11.0?

    Read the article

  • Load script with parameters

    - by Doseke
    Previously I used .jsp pages for JSF, and the code below worked fine:

        <script language="javascript" src='<%= renderResponse.encodeURL(renderRequest.getContextPath() + "/resources/jsCropperUI/scriptaculous.js?load=effects,builder,dragdrop") %>'></script>

    Now I'm using .xhtml with RichFaces, and the code below does not work:

        <a4j:loadScript src="/resources/jsCropperUI/scriptaculous.js?load=effects,builder,dragdrop"/>

    The exception is: Static resource not found for path /resources/jsCropperUI/scriptaculous.js?load=effects,builder,dragdrop. How can I fix this?

    Read the article

  • jQuery load default content into div

    - by Ricki
    Hi, I've searched around but couldn't really find anything to help. I use this code as the main ajax call for all content on my site (all content is loaded dynamically into a div using this script):

        jQuery(document).ready(function($) {
            function load(num) {
                $('#pageContent').html('<img src="imgs/ajax-loader.gif">')
                $('#pageContent').load(num + ".html");
            }
            $.history.init(function(url) {
                load(url == "" ? "1" : url);
            });
            $('#bbon a').live('click', function(e) {
                var url = $(this).attr('href');
                URLDecoder.decode(location, "UTF-8");
                url = url.replace(/^.*#/, '');
                $.history.load(url);
                return false;
            });
        });

    which works great, it's fantastic. However, I am unable to get default content displayed in the <div> on page load, so a visitor would have to select a menu item before any content shows. Any ideas on how I could do this? At the minute all I see is my loading animation. I use jQuery with the History plugin.

    Read the article

  • Algorithms behind load balancers?

    - by Vimvq1987
    I need to study load balancers, such as Network Load Balancing, Linux Virtual Server, HAProxy, etc. There are some things under the hood I need to know: What algorithms/technologies are used in these load balancers? Which is the most popular? The most effective? I expect that these algorithms/technologies will not be too complicated. Are there any resources written about them? Thank you very much for your help.
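
    For a rough, product-agnostic picture of what such balancers do under the hood, here is a minimal Python sketch of the two scheduling policies you will meet most often, round-robin and least-connections (the backend names are made up):

        import itertools

        backends = ["app1:8080", "app2:8080", "app3:8080"]   # hypothetical backend servers

        # Round-robin: hand out backends in a fixed rotation.
        rr = itertools.cycle(backends)
        def pick_round_robin():
            return next(rr)

        # Least-connections: send new work to the backend with the fewest active connections.
        active = {b: 0 for b in backends}
        def pick_least_connections():
            backend = min(active, key=active.get)
            active[backend] += 1          # caller must decrement when the connection closes
            return backend

        if __name__ == "__main__":
            print([pick_round_robin() for _ in range(4)])
            print(pick_least_connections())

    Real load balancers layer weights, health checks and session persistence on top of these basic policies.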

    Read the article

  • Embedded YouTube video slows down page load

    - by Tom S
    Hi, total newbie here - trying to figure out how to make a page load faster with one embedded YouTube video on it - a very modest page takes an extra 5 seconds to completely load with the YouTube player showing up. I'd either like the page to load first, or to only load the video when the user clicks on it - I don't know how to do that. Here is the YouTube video embed code:

        <object width="480" height="385">
            <param name="movie" value="http://www.youtube.com/v/kfZIIKVfJ1w&hl=en_US&fs=1&rel=0&hd=1"></param>
            <param name="allowFullScreen" value="true"></param>
            <param name="allowscriptaccess" value="always"></param>
            <embed src="http://www.youtube.com/v/kfZIIKVfJ1w&hl=en_US&fs=1&rel=0&hd=0" type="application/x-shockwave-flash" allowscriptaccess="always" allowfullscreen="true" width="480" height="385"></embed>
        </object>

    Thanks for your help!

    Read the article

  • Iterating over a large data set in long running Python process - memory issues?

    - by user1094786
    I am working on a long-running Python program (part of it is a Flask API, and the other part a realtime data fetcher). Both of my long-running processes iterate, quite often (the API one might even do so hundreds of times a second), over large data sets (second-by-second observations of certain economic series, for example 1-5MB worth of data or even more). They also interpolate, compare and do calculations between series, etc. What techniques can I practice, for the sake of keeping my processes alive, when iterating over / passing as parameters / processing these large data sets? For instance, should I use the gc module and collect manually? Any advice would be appreciated. Thanks!
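
    As a general sketch (function and variable names are illustrative), the usual first line of defence is to stream over the data with generators and keep only running aggregates, resorting to gc.collect() only if profiling shows reference cycles are the real problem:

        import gc

        def observations(source):
            """Yield one data point at a time instead of materialising a full list."""
            for point in source:          # 'source' could be a DB cursor, file, or API pager
                yield float(point)

        def process(source):
            total = count = 0
            for value in observations(source):
                total += value            # running aggregates keep memory usage flat
                count += 1
            return total / max(count, 1)

        if __name__ == "__main__":
            print(process(range(1_000_000)))
            gc.collect()                  # rarely needed; only reclaims cyclic garbage sooner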

    Read the article

  • Load the <?php the_permalink(); ?> with an ajax loader

    - by fxg
    I'm working on a WordPress template. I'm trying to load the single.php of a post using ajax. I'm doing all the loading through a loader.js file that has this:

        // load single project page
        $("#project_slider").live("click", function(){
            $("#content").hide();
            $("#content").load("<?php the_permalink(); ?>", function(){
                $(this).fadeIn("slow");
            });
        });

    The problem is that I can't just put it in the .load() call because it doesn't work. This is the markup:

        <div id="project_page" class="item">
            <a href="#">
                <img src="<?php the_field('artworks_thumbnail'); ?>" alt="" width="240" height="173">
            </a>
            <div class="art_title">
                <p>SWEET LIFE</p>
            </div>
            <div class="mask"></div>
        </div>

    How can I add the permalink via the loader.js?

    Read the article

  • Perfmon: which counter identifies that threads are waiting?

    - by frankadelic
    While load testing an ASP.NET app, we find that the pages are taking 20-30 sec under heavy load. We suspect this is because the pages are waiting for database calls or web services. Is there a particular perfmon counter that can identify this sort of bottleneck on the web servers? CPU, Memory, and Disk are normal. Or must we use a tool other than perfmon to track down this bottleneck?

    Read the article

  • Keepalived alternative for Solaris 10

    - by antispam
    We are considering an architecture like the one in the picture for Solaris 10, that is, high-availability software load balancers in front of web and application servers. Unfortunately, Keepalived is not available for Solaris at the moment. Is there an equivalent tool to substitute for Keepalived that is supported on Solaris 10? Is there an equivalent architecture for Solaris using HA software load balancing? Thank you.

    Read the article

  • Determining which database instance makes the biggest IO

    - by user2008937
    Assume that I have a dedicated server on which I am running multiple instances of MySQL and PostgreSQL servers. How, without iotop, can I determine which instance at a particular time is doing the most IO (and so increasing IOWAIT)? (/proc/pid/io shows data accumulated over a period of time.) When lots of people do something on a DB, I clearly see which instance is creating the load because of high CPU usage; but I had a situation where the CPU usage was just normal, yet very high iowait put a huge load on the server and I had trouble finding the process that was doing the outstanding IO.
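
    Without iotop, one workable approach on Linux (a sketch; it assumes you can read other users' /proc/<pid>/io, which normally requires root) is to sample the per-process read_bytes/write_bytes counters twice and rank the deltas. Each MySQL/PostgreSQL instance runs as its own process, so the top pids point at the busy instance:

        import glob
        import time

        def io_totals():
            """Return {pid: read_bytes + write_bytes} for every readable /proc/<pid>/io."""
            totals = {}
            for path in glob.glob("/proc/[0-9]*/io"):
                pid = path.split("/")[2]
                try:
                    with open(path) as f:
                        fields = dict(line.split(": ") for line in f.read().splitlines())
                    totals[pid] = int(fields["read_bytes"]) + int(fields["write_bytes"])
                except (OSError, KeyError, ValueError):
                    pass                  # process exited or file not readable
            return totals

        def cmdline(pid):
            try:
                with open(f"/proc/{pid}/cmdline") as f:
                    return f.read().replace("\0", " ").strip()
            except OSError:
                return "?"

        before = io_totals()
        time.sleep(5)                     # sampling interval in seconds
        after = io_totals()

        deltas = {pid: after[pid] - before[pid] for pid in after if pid in before}
        for pid, delta in sorted(deltas.items(), key=lambda kv: kv[1], reverse=True)[:10]:
            print(f"{pid:>7}  {delta:>12} B  {cmdline(pid)[:60]}")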

    Read the article

  • Is it possible to synchronize the states of TCP proxies in real time (for true high availability of SLB)?

    - by Song
    Consider two server load balancers working in TCP proxy mode (e.g., for L7 load balancing). Is it possible to synchronize their states in real time so that they can serve as backups for each other? In case one goes down, the other would still have all the necessary state to support all existing TCP connections without interruption. I understand that this is hard, but I am wondering whether any free/commercial LB already supports this feature. Thank you!

    Read the article

  • WCF – interchangeable data-contract types

    - by nmarun
    In a WSDL-based environment, unlike the CLR world, we pass around the 'state' of an object and not a reference to an object. Well, firstly, what does 'state' mean, and does this also mean that we can send a struct where a class is expected (or vice-versa) as long as their 'state' is one and the same? Let's see. So I have an operation contract defined as below:

        [ServiceContract]
        public interface ILearnWcfServiceExtend : ILearnWcfService
        {
            [OperationContract]
            Employee SaveEmployee(Employee employee);
        }

        [ServiceBehavior]
        public class LearnWcfService : ILearnWcfServiceExtend
        {
            public Employee SaveEmployee(Employee employee)
            {
                employee.EmployeeId = 123;
                return employee;
            }
        }

    Quite a simplistic operation there (which translates to 'absolutely no business value'). Now, the data contract Employee mentioned above is a struct:

        public struct Employee
        {
            public int EmployeeId { get; set; }

            public string FName { get; set; }
        }

    After compilation and consumption of this service, my proxy (in the Reference.cs file) looks like below (I've ignored the rest of the details just to avoid unwanted confusion):

        public partial struct Employee : System.Runtime.Serialization.IExtensibleDataObject, System.ComponentModel.INotifyPropertyChanged

    I call the service with the code below:

        private static void CallWcfService()
        {
            Employee employee = new Employee { FName = "A" };
            Console.WriteLine("IsValueType: {0}", employee.GetType().IsValueType);
            Console.WriteLine("IsClass: {0}", employee.GetType().IsClass);
            Console.WriteLine("Before calling the service: {0} - {1}", employee.EmployeeId, employee.FName);
            employee = LearnWcfServiceClient.SaveEmployee(employee);
            Console.WriteLine("Return from the service: {0} - {1}", employee.EmployeeId, employee.FName);
        }

    The output is:

    I now change my Employee type from a struct to a class in the proxy class and run the application:

        public partial class Employee : System.Runtime.Serialization.IExtensibleDataObject, System.ComponentModel.INotifyPropertyChanged {

    The output this time is:

    The state of an object refers to its composition - the properties and the values of those properties - and not to whether it is a reference type (class) or a value type (struct). And as shown above, we're actually passing an object by its state and not by reference. Continuing on the same topic of 'type-interchangeability', WCF treats two data contracts as equivalent if they have the same 'wire representation'. We can arrange that using the DataContract and DataMember attributes' Name property:

        [DataContract]
        public struct Person
        {
            [DataMember]
            public int Id { get; set; }

            [DataMember]
            public string FirstName { get; set; }
        }

        [DataContract(Name="Person")]
        public class Employee
        {
            [DataMember(Name = "Id")]
            public int EmployeeId { get; set; }

            [DataMember(Name="FirstName")]
            public string FName { get; set; }
        }

    I've created two data contracts with the exact same wire representation. Just remember that the names and the types of the data members need to match to be considered equivalent. The question then arises as to what gets generated in the proxy class. Even though we declared two data contracts (Person and Employee), only one gets emitted - Person. This is because we're saying that the Employee type has the same wire representation as the Person type. Also, the signature of the SaveEmployee operation gets changed on the proxy side:

        [System.CodeDom.Compiler.GeneratedCodeAttribute("System.ServiceModel", "4.0.0.0")]
        [System.ServiceModel.ServiceContractAttribute(ConfigurationName="ServiceProxy.ILearnWcfServiceExtend")]
        public interface ILearnWcfServiceExtend
        {
            [System.ServiceModel.OperationContractAttribute(Action="http://tempuri.org/ILearnWcfServiceExtend/SaveEmployee", ReplyAction="http://tempuri.org/ILearnWcfServiceExtend/SaveEmployeeResponse")]
            ClientApplication.ServiceProxy.Person SaveEmployee(ClientApplication.ServiceProxy.Person employee);
        }

    But on the service side, SaveEmployee still accepts and returns an Employee data contract:

        [ServiceBehavior]
        public class LearnWcfService : ILearnWcfServiceExtend
        {
            public Employee SaveEmployee(Employee employee)
            {
                employee.EmployeeId = 123;
                return employee;
            }
        }

    Despite all these changes, our output remains the same as the last one: this is type-interchangeability at work! Here's one more thing to ponder. Our Person type is a struct and the Employee type is a class. Then how is it that the Person type got emitted as a 'class' in the proxy? It's worth mentioning that the WSDL describes a type called Employee and does not say whether it is a class or a struct (see the SOAP message below):

        <soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/"
                          xmlns:tem="http://tempuri.org/"
                          xmlns:ser="http://schemas.datacontract.org/2004/07/ServiceApplication">
           <soapenv:Header/>
           <soapenv:Body>
              <tem:SaveEmployee>
                 <!--Optional:-->
                 <tem:employee>
                    <!--Optional:-->
                    <ser:EmployeeId>?</ser:EmployeeId>
                    <!--Optional:-->
                    <ser:FName>?</ser:FName>
                 </tem:employee>
              </tem:SaveEmployee>
           </soapenv:Body>
        </soapenv:Envelope>

    There are some differences between how 'Add Service Reference' and svcutil.exe generate the proxy class, but it turns out both do some kind of reflection to determine the type of the data contract and emit the code accordingly. So, since the Employee type is a class, the proxy 'Person' type gets generated as a class. In fact, reflecting on the svcutil.exe application, you'll see that there are a couple of places where a flag actually determines a type as a class or a struct. One example is in the ExportISerializableDataContract method in the System.Runtime.Serialization.CodeExporter class. It seems these flags have a say in deciding whether the type gets emitted as a struct or a class. This behavior is different if you use the WSDL tool, though. The WSDL tool does not do any kind of reflection of the data contract / serialized type; it emits the type as a class by default. You can check this using the two command lines below:

    Note to self: remember 'state' and type-interchangeability when traversing the WSDL planet!

    Read the article

  • Data management in unexpected places

    - by Ashok_Ora
    When you think of network switches, routers, firewall appliances, etc., it may not be obvious that at the heart of these kinds of solutions is an engine that can manage huge amounts of data at very high throughput with low latencies and high availability. Consider a network router that is processing tens (or hundreds) of thousands of network packets per second. So what really happens inside a router? Packets are streaming in at the rate of tens of thousands per second. Each packet has multiple attributes, for example a destination and associated SLAs. For each packet, the router has to determine the address of the next "hop" to the destination; it has to determine how to prioritize this packet. If it's a high-priority packet, then it has to be sent on its way before lower-priority packets. As a consequence of prioritizing high-priority packets, lower-priority data packets may need to be temporarily stored (held back), but addressed fairly. If there are security or privacy requirements associated with the data packet, those have to be enforced. You probably need to keep track of statistics related to the packets processed (someone's sure to ask). You have to do all this (and more) while preserving high availability, i.e. if one of the processors in the router goes down, you have to have a way to continue processing without interruption (the customer won't be happy with a "choppy" VoIP conversation, right?). And all this has to be achieved without ANY intervention from a human operator – the router is most likely to be in a remote location – it must JUST CONTINUE TO WORK CORRECTLY, even when bad things happen.

    How is this implemented? As soon as a packet arrives, it is interpreted by the receiving software. The software decodes the packet headers in order to determine the destination, the kind of packet (e.g. voice vs. data), the SLAs associated with the "owner" of the packet, etc. It looks up the internal database of "rules" for how to process this packet and handles the packet accordingly. The software might choose to hold on to the packet safely for some period of time if it's a low-priority packet. Ah – this sounds very much like a database problem. For each packet, you have to minimally:

    · Look up the most efficient next "hop" towards the destination. The "most efficient" next hop can change, depending on latency, availability, etc.
    · Look up the SLA and determine the priority of this packet (e.g. voice calls get priority over data FTP).
    · Look up security information associated with this data packet. It may be necessary to retrieve the context for this network packet, since a network packet is a small "slice" of a session. The context for the "header" packet needs to be stored in the router in order to make this work.
    · If the priority of the packet is low, then "store" the packet temporarily in the router until it is time to forward the packet to the next hop.
    · Update various statistics about the packet.

    In most cases, you have to do all this in the context of a single transaction. For example, you want to look up the forwarding address and perform the "send" in a single transaction so that the forwarding address doesn't change while you're sending the packet. So, how do you do all this? Berkeley DB is a proven, reliable, high-performance, highly available embeddable database, designed for exactly these kinds of usage scenarios. Berkeley DB is a robust, proven solution that is currently being used in these scenarios. First and foremost, Berkeley DB (or BDB for short) is very, very fast. It can process tens or hundreds of thousands of transactions per second. It can be used as a pure in-memory database or as a disk-persistent database. BDB provides high availability – if one board in the router fails, the system can automatically fail over to another board – no manual intervention required. BDB is self-administering – there's no need for manual intervention in order to maintain a BDB application. No need to send a technician to a remote site in the middle of nowhere on a freezing winter day to perform maintenance operations. BDB has been used in over 200 million deployments worldwide over the past two decades for mission-critical applications such as the one described here. You have a choice of spending valuable resources to implement similar functionality, or you could simply embed BDB in your application and off you go! I know what I'd do – choose BDB, so I can focus on my business problem. What will you do?
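
    To make the embedding idea concrete, here is a minimal sketch using Berkeley DB's Python bindings (the bsddb3 package, published as berkeleydb for newer Python versions); the route-table key/value layout is purely illustrative, and an actual router would use the C/C++ API:

        from bsddb3 import db   # "pip install bsddb3" (or "berkeleydb" on recent Pythons)

        # Open (or create) a B-tree keyed by destination prefix.
        routes = db.DB()
        routes.open("routes.db", None, db.DB_BTREE, db.DB_CREATE)

        # Store and look up a next-hop entry (keys and values are bytes).
        routes.put(b"10.0.0.0/8", b"next-hop=192.168.1.1;priority=high")
        print(routes.get(b"10.0.0.0/8"))

        routes.close()

    For the transactional, highly available behaviour described above, BDB also offers a database environment with transaction and replication support, but that setup is beyond this short sketch.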

    Read the article
