Search Results

Search found 94339 results on 3774 pages for 'system data'.


  • Imperative vs. LINQ Performance on WP7

    - by Bil Simser
    Jesse Liberty had a nice post presenting the concepts around imperative, LINQ and fluent programming to populate a listbox. Check out the post as it’s a great example of some foundational things every .NET programmer should know. I was more interested in what the IL code generated from the imperative and LINQ versions looks like, what the performance numbers are, and how they differ. The code at the instruction level is interesting but not surprising. The imperative example, with its list creation and loops, weighs in at about 60 instructions.

        .method private hidebysig instance void ImperativeMethod() cil managed
        {
            .maxstack 3
            .locals init (
                [0] class [mscorlib]System.Collections.Generic.IEnumerable`1<int32> someData,
                [1] class [mscorlib]System.Collections.Generic.List`1<int32> inLoop,
                [2] int32 n,
                [3] class [mscorlib]System.Collections.Generic.IEnumerator`1<int32> CS$5$0000,
                [4] bool CS$4$0001)
            L_0000: nop
            L_0001: ldc.i4.1
            L_0002: ldc.i4.s 50
            L_0004: call class [mscorlib]System.Collections.Generic.IEnumerable`1<int32> [System.Core]System.Linq.Enumerable::Range(int32, int32)
            L_0009: stloc.0
            L_000a: newobj instance void [mscorlib]System.Collections.Generic.List`1<int32>::.ctor()
            L_000f: stloc.1
            L_0010: nop
            L_0011: ldloc.0
            L_0012: callvirt instance class [mscorlib]System.Collections.Generic.IEnumerator`1<!0> [mscorlib]System.Collections.Generic.IEnumerable`1<int32>::GetEnumerator()
            L_0017: stloc.3
            L_0018: br.s L_003a
            L_001a: ldloc.3
            L_001b: callvirt instance !0 [mscorlib]System.Collections.Generic.IEnumerator`1<int32>::get_Current()
            L_0020: stloc.2
            L_0021: nop
            L_0022: ldloc.2
            L_0023: ldc.i4.5
            L_0024: cgt
            L_0026: ldc.i4.0
            L_0027: ceq
            L_0029: stloc.s CS$4$0001
            L_002b: ldloc.s CS$4$0001
            L_002d: brtrue.s L_0039
            L_002f: ldloc.1
            L_0030: ldloc.2
            L_0031: ldloc.2
            L_0032: mul
            L_0033: callvirt instance void [mscorlib]System.Collections.Generic.List`1<int32>::Add(!0)
            L_0038: nop
            L_0039: nop
            L_003a: ldloc.3
            L_003b: callvirt instance bool [mscorlib]System.Collections.IEnumerator::MoveNext()
            L_0040: stloc.s CS$4$0001
            L_0042: ldloc.s CS$4$0001
            L_0044: brtrue.s L_001a
            L_0046: leave.s L_005a
            L_0048: ldloc.3
            L_0049: ldnull
            L_004a: ceq
            L_004c: stloc.s CS$4$0001
            L_004e: ldloc.s CS$4$0001
            L_0050: brtrue.s L_0059
            L_0052: ldloc.3
            L_0053: callvirt instance void [mscorlib]System.IDisposable::Dispose()
            L_0058: nop
            L_0059: endfinally
            L_005a: nop
            L_005b: ldarg.0
            L_005c: ldfld class [System.Windows]System.Windows.Controls.ListBox PerfTest.MainPage::LB1
            L_0061: ldloc.1
            L_0062: callvirt instance void [System.Windows]System.Windows.Controls.ItemsControl::set_ItemsSource(class [mscorlib]System.Collections.IEnumerable)
            L_0067: nop
            L_0068: ret
            .try L_0018 to L_0048 finally handler L_0048 to L_005a
        }

    Compare that to the IL generated for the LINQ version, which has about half the instructions and just gets the job done, no fluff.

        .method private hidebysig instance void LINQMethod() cil managed
        {
            .maxstack 4
            .locals init (
                [0] class [mscorlib]System.Collections.Generic.IEnumerable`1<int32> someData,
                [1] class [mscorlib]System.Collections.Generic.IEnumerable`1<int32> queryResult)
            L_0000: nop
            L_0001: ldc.i4.1
            L_0002: ldc.i4.s 50
            L_0004: call class [mscorlib]System.Collections.Generic.IEnumerable`1<int32> [System.Core]System.Linq.Enumerable::Range(int32, int32)
            L_0009: stloc.0
            L_000a: ldloc.0
            L_000b: ldsfld class [System.Core]System.Func`2<int32, bool> PerfTest.MainPage::CS$<>9__CachedAnonymousMethodDelegate6
            L_0010: brtrue.s L_0025
            L_0012: ldnull
            L_0013: ldftn bool PerfTest.MainPage::<LINQProgramming>b__4(int32)
            L_0019: newobj instance void [System.Core]System.Func`2<int32, bool>::.ctor(object, native int)
            L_001e: stsfld class [System.Core]System.Func`2<int32, bool> PerfTest.MainPage::CS$<>9__CachedAnonymousMethodDelegate6
            L_0023: br.s L_0025
            L_0025: ldsfld class [System.Core]System.Func`2<int32, bool> PerfTest.MainPage::CS$<>9__CachedAnonymousMethodDelegate6
            L_002a: call class [mscorlib]System.Collections.Generic.IEnumerable`1<!!0> [System.Core]System.Linq.Enumerable::Where<int32>(class [mscorlib]System.Collections.Generic.IEnumerable`1<!!0>, class [System.Core]System.Func`2<!!0, bool>)
            L_002f: ldsfld class [System.Core]System.Func`2<int32, int32> PerfTest.MainPage::CS$<>9__CachedAnonymousMethodDelegate7
            L_0034: brtrue.s L_0049
            L_0036: ldnull
            L_0037: ldftn int32 PerfTest.MainPage::<LINQProgramming>b__5(int32)
            L_003d: newobj instance void [System.Core]System.Func`2<int32, int32>::.ctor(object, native int)
            L_0042: stsfld class [System.Core]System.Func`2<int32, int32> PerfTest.MainPage::CS$<>9__CachedAnonymousMethodDelegate7
            L_0047: br.s L_0049
            L_0049: ldsfld class [System.Core]System.Func`2<int32, int32> PerfTest.MainPage::CS$<>9__CachedAnonymousMethodDelegate7
            L_004e: call class [mscorlib]System.Collections.Generic.IEnumerable`1<!!1> [System.Core]System.Linq.Enumerable::Select<int32, int32>(class [mscorlib]System.Collections.Generic.IEnumerable`1<!!0>, class [System.Core]System.Func`2<!!0, !!1>)
            L_0053: stloc.1
            L_0054: ldarg.0
            L_0055: ldfld class [System.Windows]System.Windows.Controls.ListBox PerfTest.MainPage::LB2
            L_005a: ldloc.1
            L_005b: callvirt instance void [System.Windows]System.Windows.Controls.ItemsControl::set_ItemsSource(class [mscorlib]System.Collections.IEnumerable)
            L_0060: nop
            L_0061: ret
        }

    Again, not surprising, but a good indicator that you should consider using LINQ where possible.
    In fact, if you have ReSharper installed you’ll see a squiggly (technical term) in the imperative code that says “Hey Dude, I can convert this to LINQ if you want to be c00L!” (or something like that; it’s the 2010 geek version of Clippy). What about the fluent version? As Jon correctly pointed out in the comments, when you compare the IL for the LINQ code and the IL for the fluent code, it’s the same. LINQ and the fluent interface are just syntactic sugar, so use whichever you’re most comfortable with; at the end of the day they’re both the same. Now onto the numbers. Again, I expected the imperative version to out-perform the LINQ version (before I saw the IL that was generated). Call it womanly instinct. A gut feel. Whatever. Some of the numbers are interesting though. For Jesse’s example of 50 items, the imperative sample clocked in at 7ms while the LINQ version completed in 4ms. As the number of items went up, the elapsed time didn’t climb exponentially. At 500 items they were pretty much the same, and the results were similar up to about 50,000 items. After that I tried 500,000 items, where the gap widened but not by much (2.2 seconds for imperative, 2.3 for LINQ). It wasn’t until I tried 5,000,000 items that the difference became noticeable. Imperative filled the list in 20 seconds while LINQ took 8 seconds longer (although personally I wouldn’t suggest you put 5 million items in a list unless you want your users showing up at your door with torches and pitchforks). Here’s the table with the full results.

        Method/Items    50     500    5,000   50,000   500,000   5,000,000
        Imperative      7ms    7ms    38ms    223ms    2230ms    20974ms
        LINQ/Fluent     4ms    6ms    41ms    240ms    2310ms    28731ms

    Like I said, at the end of the day it’s not a huge difference, and you really don’t want your users waiting around for 30 seconds on a mobile device while you fill a list. In fact, if Windows Phone 7 detects you’re taking more than 10 seconds to do any one thing, it considers the app hung and shuts it down. The results here are for Windows Phone 7, but frankly they’re the same for desktop and web apps, so feel free to apply them generally. From a programming perspective, choose what you like. Some LINQ statements can get pretty hairy, so I usually fall back on my simple mind and write it imperatively. If you really want to impress your friends, write it old school and then let ReSharper do the hard work for you! Happy programming!
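    For reference, here is a rough C# sketch of what the two methods look like (reconstructed from the IL above rather than copied from the original post; LB1 and LB2 stand in for the page’s ListBox controls, and System.Linq / System.Collections.Generic are assumed to be imported):

        // Rough reconstruction, not the original post's source code.
        private void ImperativeMethod()
        {
            IEnumerable<int> someData = Enumerable.Range(1, 50);

            var inLoop = new List<int>();
            foreach (int n in someData)
            {
                if (n > 5)
                    inLoop.Add(n * n);   // build the result list by hand
            }

            LB1.ItemsSource = inLoop;
        }

        private void LINQMethod()
        {
            IEnumerable<int> someData = Enumerable.Range(1, 50);

            // Query syntax; the fluent form someData.Where(...).Select(...)
            // compiles to the same IL, as noted above.
            IEnumerable<int> queryResult = from n in someData
                                           where n > 5
                                           select n * n;

            LB2.ItemsSource = queryResult;
        }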


  • The remote host closed the connection. The error code is 0x80070057

    - by Jalpesh P. Vadgama
    While creating a PDF (or any other file) from ASP.NET pages I was getting the following error.

        Exception Type: System.Web.HttpException
        The remote host closed the connection. The error code is 0x80072746.
           at System.Web.Hosting.ISAPIWorkerRequestInProcForIIS6.FlushCore(Byte[] status, Byte[] header, Int32 keepConnected, Int32 totalBodySize, Int32 numBodyFragments, IntPtr[] bodyFragments, Int32[] bodyFragmentLengths, Int32 doneWithSession, Int32 finalStatus, Boolean& async)
           at System.Web.Hosting.ISAPIWorkerRequest.FlushCachedResponse(Boolean isFinal)
           at System.Web.Hosting.ISAPIWorkerRequest.FlushResponse(Boolean finalFlush)
           at System.Web.HttpResponse.Flush(Boolean finalFlush)
           at System.Web.HttpResponse.Flush()
           at System.Web.UI.HttpResponseWrapper.System.Web.UI.IHttpResponse.Flush()
           at System.Web.UI.PageRequestManager.RenderFormCallback(HtmlTextWriter writer, Control containerControl)
           at System.Web.UI.Control.RenderChildrenInternal(HtmlTextWriter writer, ICollection children)
           at System.Web.UI.Control.RenderChildren(HtmlTextWriter writer)
           at System.Web.UI.HtmlControls.HtmlForm.RenderChildren(HtmlTextWriter writer)
           at System.Web.UI.HtmlControls.HtmlForm.Render(HtmlTextWriter output)
           at System.Web.UI.Control.RenderControlInternal(HtmlTextWriter writer, ControlAdapter adapter)
           at System.Web.UI.Control.RenderControl(HtmlTextWriter writer, ControlAdapter adapter)
           at System.Web.UI.HtmlControls.HtmlForm.RenderControl(HtmlTextWriter writer)
           at System.Web.UI.HtmlFormWrapper.System.Web.UI.IHtmlForm.RenderControl(HtmlTextWriter writer)
           at System.Web.UI.PageRequestManager.RenderPageCallback(HtmlTextWriter writer, Control pageControl)
           at System.Web.UI.Control.RenderChildrenInternal(HtmlTextWriter writer, ICollection children)
           at System.Web.UI.Control.RenderChildren(HtmlTextWriter writer)
           at System.Web.UI.Page.Render(HtmlTextWriter writer)
           at System.Web.UI.Control.RenderControlInternal(HtmlTextWriter writer, ControlAdapter adapter)
           at System.Web.UI.Control.RenderControl(HtmlTextWriter writer, ControlAdapter adapter)
           at System.Web.UI.Control.RenderControl(HtmlTextWriter writer)
           at System.Web.UI.Page.ProcessRequestMain(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint)

    After searching and analyzing, I found that the client had disconnected while I was still flushing the response, which I do when creating PDF files from the stream. To fix this kind of error we can use the Response.IsClientConnected property to check whether the client is still connected, and only then flush and end the response. Here is the sample code to fix the problem.

        if (Response.IsClientConnected)
        {
            Response.Flush();
            Response.End();
        }

    That’s it. Hope this helps you. Stay tuned for more; till then, happy programming!! Technorati Tags: Exception, ASP.NET
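    For context, here is a fuller sketch of where that guard sits in a typical PDF-streaming page handler. This is illustrative only: GeneratePdf() is a hypothetical helper standing in for whatever produces the PDF bytes, not part of the original code.

        // Illustrative sketch: stream a generated PDF, but only flush/end the
        // response if the client is still connected. GeneratePdf() is hypothetical.
        byte[] pdfBytes = GeneratePdf();

        Response.Clear();
        Response.ContentType = "application/pdf";
        Response.AddHeader("Content-Disposition", "attachment; filename=report.pdf");
        Response.BinaryWrite(pdfBytes);

        if (Response.IsClientConnected)
        {
            Response.Flush();
            Response.End();
        }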


  • Banshee crashes consistently - is there a fix?

    - by user36334
    Since updating to Ubuntu 11.10 I've had trouble with Banshee. In particular, when I run it, I find that it crashes within an hour without fail. I get the following:

        Unhandled Exception: System.Reflection.TargetInvocationException: Exception has been thrown by the target of an invocation. ---> System.NullReferenceException: Object reference not set to an instance of an object
          at Mono.Zeroconf.Providers.AvahiDBus.BrowseService.DisposeResolver () [0x00000] in <filename unknown>:0
          at Mono.Zeroconf.Providers.AvahiDBus.BrowseService.Dispose () [0x00000] in <filename unknown>:0
          at Mono.Zeroconf.Providers.AvahiDBus.ServiceBrowser.OnItemRemove (Int32 interface, Protocol protocol, System.String name, System.String type, System.String domain, LookupResultFlags flags) [0x00000] in <filename unknown>:0
          at (wrapper managed-to-native) System.Reflection.MonoMethod:InternalInvoke (System.Reflection.MonoMethod,object,object[],System.Exception&)
          at System.Reflection.MonoMethod.Invoke (System.Object obj, BindingFlags invokeAttr, System.Reflection.Binder binder, System.Object[] parameters, System.Globalization.CultureInfo culture) [0x00000] in <filename unknown>:0
          --- End of inner exception stack trace ---
          at System.Reflection.MonoMethod.Invoke (System.Object obj, BindingFlags invokeAttr, System.Reflection.Binder binder, System.Object[] parameters, System.Globalization.CultureInfo culture) [0x00000] in <filename unknown>:0
          at System.Reflection.MethodBase.Invoke (System.Object obj, System.Object[] parameters) [0x00000] in <filename unknown>:0
          at System.Delegate.DynamicInvokeImpl (System.Object[] args) [0x00000] in <filename unknown>:0
          at System.MulticastDelegate.DynamicInvokeImpl (System.Object[] args) [0x00000] in <filename unknown>:0
          at System.Delegate.DynamicInvoke (System.Object[] args) [0x00000] in <filename unknown>:0
          at NDesk.DBus.Connection.HandleSignal (NDesk.DBus.Message msg) [0x00000] in <filename unknown>:0
          at NDesk.DBus.Connection.DispatchSignals () [0x00000] in <filename unknown>:0
          at NDesk.DBus.Connection.Iterate () [0x00000] in <filename unknown>:0
          at Mono.Zeroconf.Providers.AvahiDBus.DBusManager.IterateThread (System.Object o) [0x00000] in <filename unknown>:0

    Does anyone else also have this problem?


  • In an Entity-Component-System Engine, How do I deal with groups of dependent entities?

    - by John Daniels
    After going over a few game design patterns, I have settled on Entity-Component-System (ES System) for my game engine. I've been reading articles (mainly T=Machine) and reviewing some source code, and I think I have enough to get started. There is just one basic idea I am struggling with: how do I deal with groups of entities that are dependent on each other? Let me use an example. Assume I am making a standard overhead shooter (think Jamestown) and I want to construct a "boss entity" with multiple distinct but connected parts. The breakdown might look something like this:

        Ship body: Movement, Rendering
        Cannon: Position (locked relative to the ship body), Tracking/Firing at the hero, Taking damage until disabled
        Core: Position (locked relative to the ship body), Tracking/Firing at the hero, Taking damage until disabled, Disabling (er... destroying) all other entities in the ship group

    My goal is something that can be identified (and manipulated) as a distinct game element without having to rewrite subsystems from the ground up every time I want to build a new aggregate element. How do I implement this kind of design in an ES System?

        Do I implement some kind of parent-child entity relationship (entities can have children)? This seems to contradict the methodology that entities are just empty containers, and makes it feel more like OOP.
        Do I implement them as separate entities, with some kind of connecting component (BossComponent) and a related system (BossSubSystem)? I can't help but think that this will be hard to implement, since how components communicate seems to be a big bear trap.
        Do I implement them as one entity, with a collection of components (ShipComponent, CannonComponents, CoreComponent)? This one seems to veer away from the ES System intent (the components here feel too much like heavyweight entities), but I'm new to this so I figured I would put it out there.
        Do I implement them as something else I haven't mentioned?

    I know that this can be implemented very easily in OOP, but choosing ES over OOP is a decision I will stick with. If I need to break with pure ES theory to implement this design I will (it's not like I haven't had to compromise pure design before), but I would prefer to do that for performance reasons rather than start with a bad design. For extra credit, imagine the same design, but each of the "boss entities" is actually connected to a larger "BigBoss entity" made of a main body, a main core and three "boss entities". That would let me see a solution for at least three levels (grandparent-parent-child), which should be more than enough for me. Links to articles or example code would be appreciated. Thanks for your time.
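    Purely as an illustration of the second option (a connecting component plus a system that acts on it), here is a minimal C# sketch. Every type name here is made up for the example, and the "world" is just dictionaries rather than a real ECS framework; it assumes using System.Collections.Generic.

        // Hypothetical data-only components: parts stay ordinary entities; small
        // components record what they are attached to and which boss group owns them.
        struct Position   { public float X, Y; }
        struct AttachedTo { public int Parent; public float OffsetX, OffsetY; }
        struct BossGroup  { public int GroupId; }

        // Toy "world": entity id -> component, one dictionary per component type.
        class World
        {
            public Dictionary<int, Position>   Positions   = new Dictionary<int, Position>();
            public Dictionary<int, AttachedTo> Attachments = new Dictionary<int, AttachedTo>();
            public Dictionary<int, BossGroup>  Groups      = new Dictionary<int, BossGroup>();
            public HashSet<int>                Destroyed   = new HashSet<int>();
        }

        static class BossSystems
        {
            // Keeps cannons/cores locked relative to the ship body's position.
            public static void Attachment(World w)
            {
                foreach (var pair in w.Attachments)
                {
                    Position parent = w.Positions[pair.Value.Parent];
                    w.Positions[pair.Key] = new Position
                    {
                        X = parent.X + pair.Value.OffsetX,
                        Y = parent.Y + pair.Value.OffsetY
                    };
                }
            }

            // When the core entity dies, mark every entity in its group for destruction.
            public static void TearDownGroup(World w, int deadCoreEntity)
            {
                int groupId = w.Groups[deadCoreEntity].GroupId;
                foreach (var pair in w.Groups)
                {
                    if (pair.Value.GroupId == groupId)
                        w.Destroyed.Add(pair.Key);
                }
            }
        }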


  • Big Data: Size isn’t everything

    - by Simon Elliston Ball
    Big Data has a big problem; it’s the word “Big”. These days, a quick Google search will uncover terabytes of negative opinion about the futility of relying on huge volumes of data to produce magical, meaningful insight. There are also many clichéd but correct assertions about the difficulties of correlation versus causation, in massive data sets. In reading some of these pieces, I begin to understand how climatologists must feel when people complain ironically about “global warming” during snowfall. Big Data has a name problem. There is a lot more to it than size. Shape, Speed, and…err…Veracity are also key elements (now I understand why Gartner and the gang went with V’s instead of S’s). The need to handle data of different shapes (Variety) is not new. Data developers have always had to mold strange-shaped data into our reporting systems, integrating with semi-structured sources, and even straying into full-text searching. However, what we lacked was an easy way to add semi-structured and unstructured data to our arsenal. New “Big Data” tools such as MongoDB, and other NoSQL (Not Only SQL) databases, or a graph database like Neo4J, fill this gap. Still, to many, they simply introduce noise to the clean signal that is their sensibly normalized data structures. What about speed (Velocity)? It’s not just high frequency trading that generates data faster than a single system can handle. Many other applications need to make trade-offs that traditional databases won’t, in order to cope with high data insert speeds, or to extract quickly the required information from data streams. Unfortunately, many people equate Big Data with the Hadoop platform, whose batch driven queries and job processing queues have little to do with “velocity”. StreamInsight, Esper and Tibco BusinessEvents are examples of Big Data tools designed to handle high-velocity data streams. Again, the name doesn’t do the discipline of Big Data any favors. Ultimately, though, does analyzing fast moving data produce insights as useful as the ones we get through a more considered approach, enabled by traditional BI? Finally, we have Veracity and Value. In many ways, these additions to the classic Volume, Velocity and Variety trio acknowledge the criticism that without high-quality data and genuinely valuable outputs then data, big or otherwise, is worthless. As a discipline, Big Data has recognized this, and data quality and cleaning tools are starting to appear to support it. Rather than simply decrying the irrelevance of Volume, we need as a profession to focus how to improve Veracity and Value. Perhaps we should just declare the ‘Big’ silent, embrace these new data tools and help develop better practices for their use, just as we did the good old RDBMS? What does Big Data mean to you? Which V gives your business the most pain, or the most value? Do you see these new tools as a useful addition to the BI toolbox, or are they just enabling a dangerous trend to find ghosts in the noise?


  • Know your Data Lineage

    - by Simon Elliston Ball
    An academic paper without the footnotes isn’t an academic paper. Journalists wouldn’t base a news article on facts that they can’t verify. So why would anyone publish reports without being able to say where the data has come from and be confident of its quality, in other words, without knowing its lineage. (sometimes referred to as ‘provenance’ or ‘pedigree’) The number and variety of data sources, both traditional and new, increases inexorably. Data comes clean or dirty, processed or raw, unimpeachable or entirely fabricated. On its journey to our report, from its source, the data can travel through a network of interconnected pipes, passing through numerous distinct systems, each managed by different people. At each point along the pipeline, it can be changed, filtered, aggregated and combined. When the data finally emerges, how can we be sure that it is right? How can we be certain that no part of the data collection was based on incorrect assumptions, that key data points haven’t been left out, or that the sources are good? Even when we’re using data science to give us an approximate or probable answer, we cannot have any confidence in the results without confidence in the data from which it came. You need to know what has been done to your data, where it came from, and who is responsible for each stage of the analysis. This information represents your data lineage; it is your stack-trace. If you’re an analyst, suspicious of a number, it tells you why the number is there and how it got there. If you’re a developer, working on a pipeline, it provides the context you need to track down the bug. If you’re a manager, or an auditor, it lets you know the right things are being done. Lineage tracking is part of good data governance. Most audit and lineage systems require you to buy into their whole structure. If you are using Hadoop for your data storage and processing, then tools like Falcon allow you to track lineage, as long as you are using Falcon to write and run the pipeline. It can mean learning a new way of running your jobs (or using some sort of proxy), and even a distinct way of writing your queries. Other Hadoop tools provide a lot of operational and audit information, spread throughout the many logs produced by Hive, Sqoop, MapReduce and all the various moving parts that make up the eco-system. To get a full picture of what’s going on in your Hadoop system you need to capture both Falcon lineage and the data-exhaust of other tools that Falcon can’t orchestrate. However, the problem is bigger even that that. Often, Hadoop is just one piece in a larger processing workflow. The next step of the challenge is how you bind together the lineage metadata describing what happened before and after Hadoop, where ‘after’ could be  a data analysis environment like R, an application, or even directly into an end-user tool such as Tableau or Excel. One possibility is to push as much as you can of your key analytics into Hadoop, but would you give up the power, and familiarity of your existing tools in return for a reliable way of tracking lineage? Lineage and auditing should work consistently, automatically and quietly, allowing users to access their data with any tool they require to use. 
The real solution, therefore, is to create a consistent method by which to bring lineage data from these various disparate sources into the data analysis platform that you use, rather than being forced to use one tool to manage the pipeline for lineage and a different tool for the data analysis. The key is to keep your logs and your audit data from every source, bring them together, and use the data analysis tools to trace the paths from raw data to the answer that data analysis provides.
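    As a purely hypothetical illustration of what a unified lineage record might capture once those logs are brought together (none of these field names come from Falcon or any specific tool), a minimal shape could look like this:

        // Hypothetical, minimal shape for a lineage/audit event collected from any
        // stage of the pipeline (Hive query, Sqoop import, R script, Excel export...).
        public sealed class LineageEvent
        {
            public string DatasetId { get; set; }      // output produced by this step
            public string[] InputIds { get; set; }     // upstream datasets it was derived from
            public string Tool { get; set; }           // e.g. "Hive", "Sqoop", "R", "Excel"
            public string Operation { get; set; }      // query text, script name, or description
            public string Owner { get; set; }          // who is responsible for this step
            public System.DateTimeOffset RanAt { get; set; }
        }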


  • NSURLConnection receives data even if no data was thrown back

    - by Anna Fortuna
    Let me explain my situation. Currently I am experimenting with long-polling using NSURLConnection. I found this and decided to try it. What I do is send a request to the server with a timeout interval of 300 secs (or 5 mins). Here is a code snippet:

        NSURL *url = [NSURL URLWithString:urlString];
        NSURLRequest *request = [NSURLRequest requestWithURL:url
                                                 cachePolicy:NSURLCacheStorageAllowedInMemoryOnly
                                             timeoutInterval:300];
        NSData *data = [NSURLConnection sendSynchronousRequest:request
                                             returningResponse:&resp
                                                         error:&err];

    Now I want to test whether the connection will "hold" the request if no data is returned from the server, so what I did was this:

        if (data != nil)
            [self performSelectorOnMainThread:@selector(dataReceived:)
                                   withObject:data
                                waitUntilDone:YES];

    And the function dataReceived: looks like this:

        - (void)dataReceived:(NSData *)data
        {
            NSLog(@"DATA RECEIVED!");
            NSString *string = [NSString stringWithUTF8String:[data bytes]];
            NSLog(@"THE DATA: %@", string);
        }

    Server-side, I created a function that returns data once it fits the arguments and returns nothing if nothing fits. Here is a snippet of the PHP function:

        function retrieveMessages($vardata)
        {
            if (!empty($vardata)) {
                // check_data() returns 1 if $vardata fits the arguments, 0 if it fails to fit
                $result = check_data($vardata);
                if ($result == 1) {
                    $jsonArray = array('Data' => $vardata);
                    echo json_encode($jsonArray);
                }
            }
        }

    As you can see, the function only returns data if $result is equal to 1. However, even if the function returns nothing, NSURLConnection will still call dataReceived:, meaning NSURLConnection still receives data, albeit an empty response. So can anyone help me here? How do I perform long-polling using NSURLConnection? Basically, I want to maintain the connection as long as no data is returned. So how do I do it? NOTE: I am new to PHP, so if my code is wrong, please point it out so I can correct it.


  • How to maintain an ordered table with Core Data (or SQL) with insertions/deletions?

    - by Jean-Denis Muys
    This question is in the context of Core Data, but if I am not mistaken, it applies equally well to the more general SQL case. I want to maintain an ordered table using Core Data, with the possibility for the user to:

        reorder rows
        insert new lines anywhere
        delete any existing line

    What's the best data model to do that? I can see two ways:

        1) Model it as an array: I add an int position property to my entity.
        2) Model it as a linked list: I add two one-to-one relations, next and previous, from my entity to itself.

    1) makes it easy to sort, but painful to insert or delete, as you then have to update the position of all objects that come after. 2) makes it easy to insert or delete, but very difficult to sort. In fact, I don't think I know how to express a Sort Descriptor (SQL ORDER BY clause) for that case. Now I can imagine a variation on 1):

        3) Add an int ordering property to the entity, but instead of having it count one-by-one, have it count 100 by 100 (for example). Then inserting is as simple as finding any number between the ordering of the previous and next existing objects. The expensive renumbering only has to occur when the 100 holes have been filled. Making that property a float rather than an int makes it even better: it's almost always possible to find a new float midway between two floats.

    Am I on the right track with solution 3), or is there something smarter?
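    As a sketch of how solution 3) behaves (plain C# for illustration; in Core Data the Ordering value would simply be a float attribute that you sort on with a sort descriptor), the insert logic is just a midpoint calculation plus an occasional renumbering:

        // Hypothetical sketch of "solution 3": rows carry a floating-point Ordering key,
        // new rows take the midpoint of their neighbours, and a full renumbering only
        // happens when a gap has effectively closed.
        // requires: using System.Collections.Generic;
        class Row { public string Title; public double Ordering; }

        static void InsertAt(List<Row> sorted, Row newRow, int index)
        {
            double before = index == 0            ? 0.0            : sorted[index - 1].Ordering;
            double after  = index == sorted.Count ? before + 200.0 : sorted[index].Ordering;

            if (after - before < 1e-9)
            {
                // Gap exhausted: renumber existing rows 100 apart, then slot the new row in.
                for (int i = 0; i < sorted.Count; i++)
                    sorted[i].Ordering = (i + 1) * 100.0;
                newRow.Ordering = index * 100.0 + 50.0;
            }
            else
            {
                newRow.Ordering = (before + after) / 2.0;
            }

            sorted.Insert(index, newRow);
            // Fetching in order is then just a sort (or SQL ORDER BY) on Ordering.
        }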


  • How can I scrape specific data from a website

    - by Stoney
    I'm trying to scrape data from a website for research. The URLs are nicely organized in an example.com/x format, with x as an ascending number, and all of the pages are structured in the same way. I just need to grab certain headings and a few numbers which are always in the same locations. I'll then need to get this data into structured form for analysis in Excel. I have used wget before to download pages, but I can't figure out how to grab specific lines of text. Excel has a feature to grab data from the web (Data > From Web), but from what I can see it only allows me to download tables. Unfortunately, the data I need is not in tables.
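    One possible approach, shown as a C# sketch rather than a tested recipe: loop over the numbered URLs, pull the values out with regular expressions tuned to the page markup, and write a CSV that Excel can open directly. The URL, the regex patterns, and the page count below are placeholders to adapt to the real pages.

        using System.IO;
        using System.Net.Http;
        using System.Text.RegularExpressions;
        using System.Threading.Tasks;

        class Scraper
        {
            static async Task Main()
            {
                using var http = new HttpClient();
                using var csv = new StreamWriter("results.csv");
                csv.WriteLine("page,heading,value");

                for (int x = 1; x <= 100; x++)   // example.com/1, example.com/2, ...
                {
                    string html = await http.GetStringAsync($"http://example.com/{x}");

                    // Placeholder patterns: adjust to the actual markup around the
                    // heading and the number you need.
                    string heading = Regex.Match(html, @"<h1[^>]*>(.*?)</h1>").Groups[1].Value;
                    string value   = Regex.Match(html, @"Total:\s*([\d.,]+)").Groups[1].Value;

                    csv.WriteLine($"{x},\"{heading.Trim()}\",{value}");
                }
            }
        }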


  • How should I architect my Model and Data Access layer objects in my website?

    - by Robin Winslow
    I've been tasked with designing the data layer for a website at work, and I am very interested in architecting the code for the best flexibility, maintainability and readability. I am generally acutely aware of the value in completely separating my actual Models from the Data Access layer, so that the Models are completely naive when it comes to data access. In this case it's particularly useful to do this, as the Models may be built from the database or from a SOAP web service. So it seems to make sense to have Factories in my data access layer which create Model objects. Here's what I have so far (in my made-up pseudocode):

        class DataAccess.ProductsFromXml extends DataAccess.ProductFactory {}
        class DataAccess.ProductsFromDatabase extends DataAccess.ProductFactory {}

    These then get used in the controller in a fashion similar to the following:

        var xmlProductCreator = DataAccess.ProductsFromXml(xmlDataProvider);
        var databaseProductCreator = DataAccess.ProductsFromDatabase(databaseDataProvider);

        // Returns array of Product model objects
        var XmlProducts = xmlProductCreator.Products();

        // Returns array of Product model objects
        var DbProducts = databaseProductCreator.Products();

    So my question is: is this a good structure for my Data Access layer? Is it a good idea to use a Factory for building my Model objects from the data? Do you think I've misunderstood something? And are there any general patterns I should read up on for how to write my data access objects to create my Model objects?
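    For comparison, here is a minimal C# sketch of the same idea expressed against an interface (the names are illustrative, not from the question): the controller depends only on IProductSource, while the XML and database implementations live in the data access layer and produce naive Product models. It assumes using System, System.Collections.Generic, System.Linq and System.Xml.Linq.

        public class Product
        {
            public string Name { get; set; }
            public decimal Price { get; set; }
        }

        public interface IProductSource
        {
            IReadOnlyList<Product> Products();
        }

        public class XmlProductSource : IProductSource
        {
            private readonly string _xml;
            public XmlProductSource(string xml) { _xml = xml; }

            public IReadOnlyList<Product> Products()
            {
                // Map XML elements straight onto naive Product models.
                return XDocument.Parse(_xml)
                                .Descendants("product")
                                .Select(p => new Product
                                {
                                    Name = (string)p.Element("name"),
                                    Price = (decimal)p.Element("price")
                                })
                                .ToList();
            }
        }

        public class DatabaseProductSource : IProductSource
        {
            // The query delegate stands in for whatever ORM or SQL call you use.
            private readonly Func<IEnumerable<Product>> _query;
            public DatabaseProductSource(Func<IEnumerable<Product>> query) { _query = query; }

            public IReadOnlyList<Product> Products() => _query().ToList();
        }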


  • How can I downgrade a system that accidentally had backports installed?

    - by Glyph
    I installed a fresh Ubuntu system. Somehow - possibly through my own error - the backports repository got enabled. Then I did several upgrades. I noticed this when networking suddenly stopped working: "Network Settings" now has an "(alpha)" in the title bar, and "System Settings" > "Network" now displays an error dialog saying "The system network services are not compatible with this version". Now I've disabled the backports repository, and I'd like to restore my system to its previously functional state. My question is twofold:

        1. How do I determine which packages were installed from backports?
        2. Can I automatically re-install all those packages (and purge their configuration) to get back to a sensible state?

    If the answer to 2 is "no", I can probably manually purge some things and reinstall, but it would be nice to have it handled automatically.

    Update: It wasn't an update that broke the network; it was apt-get install indicator-network, which installed something called "connman" and removed network-manager and network-manager-gnome. Nevertheless I am leaving the question up, since I am still interested in how I can purge packages from a particular source after accidentally adding that source, and how I can determine which packages were installed from where.


  • System Idle Process network traffic?-Updated

    - by Moab
    I was using NetBalancer and noticed network traffic on an unidentified service, but when I highlight it, go to the lower center pane and click the parent process, it says it is the System Idle Process. It is showing incoming and outgoing traffic in the upper pane. Does anyone know why this Windows System Idle Process is talking on the network? Windows 7 HP 64-bit.

    Edit: after blocking the traffic for that unidentified service I checked my Event Viewer (Windows Logs > System) and found 3 new events that were never recorded before and matched the time I blocked the traffic. So is this part of the Windows local DNS cache? All three are Event ID 1014, DNS Client Events:

        Name resolution for the name dns.msftncsi.com timed out after none of the configured DNS servers responded. (dns.msftncsi.com)
        Name resolution for the name wpad.home timed out after none of the configured DNS servers responded. (wpad)
        Name resolution for the name mscrl.microsoft.com timed out after none of the configured DNS servers responded. (mscrl.microsoft.com)

    Then my web browser refused to work. I re-enabled the traffic and all returned to normal.


  • NTbackup doesn't complete on system state

    - by Joe Majsterski
    I have a Windows 2003 server that is running a semi-custom backup task. The scheduled task calls NTbackup with a few switches depending on whether it is a full or incremental backup. Most of the time, the NTbackup completes fine, and the wrapper then appends the NTbackup log into its own log before adding a few final comments and completing. The problem I am having is that sometimes, NTbackup seems to just... blank out. It always completes backup of the C: and E: drives, but then it will start the system state and not add any more messages into the event log saying it completed that. And the NTbackup log is left empty, since it doesn't write anything to the log until all the backup tasks are complete. This is causing the wrapper to append no text into its own log. That causes problems for us because we read the information out of that log to determine whether backups are failing. The wrapper task also reports that it is completing normally in the event log. Anyone ever seen a case where system state doesn't complete consistently? To be clear, the server is not logging any error messages anywhere. It's just not seeming to complete or log anything.


  • Calculating percentiles in Excel with "buckets" data instead of the data list itself

    - by G B
    I have a bunch of data in Excel that I need to get certain percentile information from. The problem is that instead of having the data set made up of each individual value, I have "bucket" data: each value together with its number of occurrences. For example, imagine that my actual data set looks like this:

        1, 1, 2, 2, 2, 2, 3, 3, 4, 4, 4

    The data set that I have is this:

        Value   No. of occurrences
        1       2
        2       4
        3       2
        4       3

    Is there an easy way for me to calculate percentile information (as well as the median) without having to explode the summary data out to the full data set? (Once I did that, I know that I could just use the Percentile(A1:A5, p) function.) This is important because my data set is very large. If I exploded the data out, I would have hundreds of thousands of rows, and I would have to do it for a couple of hundred data sets. Help!
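    To illustrate the underlying calculation (a C# sketch of the algorithm, not an Excel formula): walk the buckets in value order, accumulate the counts, and return the first value whose cumulative count reaches the desired rank. Note this uses the simple nearest-rank definition, which can differ slightly from Excel's PERCENTILE, since Excel interpolates between ranks. It assumes using System, System.Collections.Generic and System.Linq.

        // Percentile from (value, count) buckets without expanding the data.
        static double BucketPercentile(IEnumerable<(double Value, long Count)> buckets, double p)
        {
            var ordered = buckets.OrderBy(b => b.Value).ToList();
            long n = ordered.Sum(b => b.Count);
            long rank = (long)Math.Ceiling(p * n);        // e.g. p = 0.5 -> median rank
            if (rank < 1) rank = 1;

            long cumulative = 0;
            foreach (var b in ordered)
            {
                cumulative += b.Count;
                if (cumulative >= rank)
                    return b.Value;                        // first value covering the rank
            }
            return ordered[ordered.Count - 1].Value;
        }

        // For the example buckets {1:2, 2:4, 3:2, 4:3}, BucketPercentile(..., 0.5) returns 2,
        // the same median as the expanded list 1,1,2,2,2,2,3,3,4,4,4.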


  • How do I set up a WCF Data Service with an ADO.NET Entity Data Model in another assembly?

    - by lsb
    Hi! I have an ASP.NET 4.0 website that has an Entity Data Model hooked up to a WCF Data Service. When the Service and Model are in the same assembly, everything works. Unfortunately, when I move the Model to another "shared" assembly (and change the namespace), the service compiles but throws a 500 error when launched in a browser. The reason I want to have the Model in a common assembly (let's call it RiaTest.Shared) is that I want to share common validation code between the client and service (by checking "Reuse types in referenced assemblies" in the Advanced tab of the Add Service Reference dialog). Anyway, I've spent a couple of hours on this to no avail, so any help in this regard would be appreciated...


  • ILMerge - Unresolved assembly reference not allowed: System.Core

    - by Steve Michelotti
    ILMerge is a utility which allows you to merge multiple .NET assemblies into a single binary assembly for more convenient distribution. Recently we ran into problems when attempting to use ILMerge on a .NET 4 project. We received the error message:

        An exception occurred during merging:
        Unresolved assembly reference not allowed: System.Core.
           at System.Compiler.Ir2md.GetAssemblyRefIndex(AssemblyNode assembly)
           at System.Compiler.Ir2md.GetTypeRefIndex(TypeNode type)
           at System.Compiler.Ir2md.VisitReferencedType(TypeNode type)
           at System.Compiler.Ir2md.GetMemberRefIndex(Member m)
           at System.Compiler.Ir2md.PopulateCustomAttributeTable()
           at System.Compiler.Ir2md.SetupMetadataWriter(String debugSymbolsLocation)
           at System.Compiler.Ir2md.WritePE(Module module, String debugSymbolsLocation, BinaryWriter writer)
           at System.Compiler.Writer.WritePE(String location, Boolean writeDebugSymbols, Module module, Boolean delaySign, String keyFileName, String keyName)
           at System.Compiler.Writer.WritePE(CompilerParameters compilerParameters, Module module)
           at ILMerging.ILMerge.Merge()
           at ILMerging.ILMerge.Main(String[] args)

    It turns out that this issue is caused by ILMerge.exe not being able to find the .NET 4 framework by default. The answer was ultimately found here. You either have to use the /lib option to point to your .NET 4 framework directory (e.g., "C:\Windows\Microsoft.NET\Framework\v4.0.30319" or "C:\Windows\Microsoft.NET\Framework64\v4.0.30319") or just use an ILMerge.exe.config file that looks like this:

        <configuration>
          <startup useLegacyV2RuntimeActivationPolicy="true">
            <requiredRuntime safemode="true" imageVersion="v4.0.30319" version="v4.0.30319"/>
          </startup>
        </configuration>

    This was able to successfully resolve my issue.


  • Open Data, Government and Transparency

    - by Tori Wieldt
    A new track at TDC (The Developer's Conference in Sao Paulo, Brazil) is titled Open Data. It deals with open data, government and transparency. Saturday will be a "transparency hacker day" where developers are invited to create applications using open data from the Brazilian government.  Alexandre Gomes, co-lead of the track, says "I want to inspire developers to become "Civic hackers:" developers who create apps to make society better." It is a chance for developers to do well and do good. There are many opportunities for developers, including monitoring government expenditures and getting citizens involved via social networks. The open data movement is growing worldwide. One initiative, the Open Government Partnership, is working to make government data easier to find and access. Making this data easily available means that with the right applications, it will be easier for people to make decisions and suggestions about government policies based on detailed information. Last April, the Open Government Partnership held its annual meeting in Brasilia, the capitol of Brazil. It was a great success showcasing the innovative work being done in open data by governments, civil societies and individuals around the world. For example, Bulgaria now publishes daily data on budget spending for all public institutions. Alexandre Gomes Explains Open Data At TDC, the Open Data track will include a presentation of examples of successful open data projects, an introduction to the semantic web, how to handle big data sets, techniques of data visualization, and how to design APIs.The other track lead is Christian Moryah Miranda, a systems analyst for the Brazilian Government's Ministry of Planning. "The Brazilian government wholeheartedly supports this effort. In order to make our data available to the public, it forces us to be more consistent with our data across ministries, and that's a good step forward for us," he said. He explained the government knows they cannot achieve everything they would like without help from the public. "It is not the government versus the people, rather citizens are partners with the government, and together we can achieve great things!" Miranda exclaimed. Saturday at TDC will be a "transparency hacker day" where developers will be invited to create applications using open data from the Brazilian government. Attendees are invited to pitch their ideas, work in small groups, and present their project at the end of the conference. "For example," Gomes said, "the Brazilian government just released the salaries of all government employees and I can't wait to see what developers can do with that." Resources Open Government Partnership  U.S. Government Open Data ProjectBrazilian Government Open Data ProjectU.K. Government Open Data Project 2012 International Open Government Data Conference 


  • EPM System Standard Deployment Video Series

    - by Paul Anderson -Oracle
    (in via Jan) A four-part video series on deploying Enterprise Performance Management (EPM) System Products has been made available within the Oracle EPMWebcasts YouTube channel. This video series is designed for system administrators. It provides an overview of how to deploy EPM System products using the standard deployment methodology. Deploying EPM System Using Standard Deployment video series: Part 1 - Overview [3:12] Describes the EPM standard deployment and details each of the videos in the series. Part 2 - Preparing for Deployment [3:41] Discusses how to prepare for an EPM standard deployment. Part 3 - Installing and Configuring an Initial Instance of EPM System [4:11] Outlines the steps to install and configure an initial instance of EPM System components.. Part 4 - Scaling Out and Installing EPM System Clients [4:00] Provides an overview of the steps to scale out EPM System components and install EPM System client software. More information is available within the PDF document: EPM System Standard Deployment Guide 11.1.2.3 To view and and access other Oracle EPM Webcast videos visit: Oracle EPM Webcasts YouTube Channel To view and download all of the EPM product documentation visit the Oracle Technology Network (OTN) EPM Documentation Library. In addition to the Oracle EPM Webcasts YouTube channel these videos along with other EPM related product videos and information are available in the Oracle Learning Library (OLL) - visit: Oracle Learning Library - EPM Consolidation and Planning Videos


  • Master Data Management and Cloud Computing

    - by david.butler(at)oracle.com
    Cloud Computing is all the rage these days. There are many reasons why this is so. But like its predecessor, Service Oriented Architecture, it can fall on hard times if the underlying data is left unmanaged. Master Data Management is the perfect Cloud companion. It can materially increase the chances for successful Cloud initiatives. In this blog, I'll review the nature of the Cloud and show how MDM fits in.   Here's the National Institute of Standards and Technology Cloud definition: •          Cloud computing is a model for enabling convenient, on-demand network access to a shared pool of configurable computing resources that can be rapidly provisioned and released with minimal management effort or service provider interaction.   Cloud architectures have three main layers: applications or Software as a Service (SaaS), Platforms as a Service (PaaS), and Infrastructure as a Service (IaaS). SaaS generally refers to applications that are delivered to end-users over the Internet. Oracle CRM On Demand is an example of a SaaS application. Today there are hundreds of SaaS providers covering a wide variety of applications including Salesforce.com, Workday, and Netsuite. Oracle MDM applications are located in this layer of Oracle's On Demand enterprise Cloud platform. We call it Master Data as a Service (MDaaS). PaaS generally refers to an application deployment platform delivered as a service. They are often built on a grid computing architecture and include database and middleware. Oracle Fusion Middleware is in this category and includes the SOA and Data Integration products used to connect SaaS applications including MDM. Finally, IaaS generally refers to computing hardware (servers, storage and network) delivered as a service.  This typically includes the associated software as well: operating systems, virtualization, clustering, etc.    Cloud Computing benefits are compelling for a large number of organizations. These include significant cost savings, increased flexibility, and fast deployments. Cost advantages include paying for just what you use. This is especially critical for organizations with variable or seasonal usage. Companies don't have to invest to support peak computing periods. Costs are also more predictable and controllable. Increased agility includes access to the latest technology and experts without making significant up front investments.   While Cloud Computing is certainly very alluring with a clear value proposition, it is not without its challenges. An IDC survey of 244 IT executives/CIOs and their line-of-business (LOB) colleagues identified a number of issues:   Security - 74% identified security as an issue involving data privacy and resource access control. Integration - 61% found that it is hard to integrate Cloud Apps with in-house applications. Operational Costs - 50% are worried that On Demand will actually cost more given the impact of poor data quality on the rest of the enterprise. Compliance - 49% felt that compliance with required regulatory, legal and general industry requirements (such as PCI, HIPAA and Sarbanes-Oxley) would be a major issue. When control is lost, the ability of a provider to directly manage how and where data is deployed, used and destroyed is negatively impacted.  There are others, but I singled out these four top issues because Master Data Management, properly incorporated into a Cloud Computing infrastructure, can significantly ameliorate all of these problems. Cloud Computing can literally rain raw data across the enterprise.   
According to fellow blogger, Mike Ferguson, "the fracturing of data caused by the adoption of cloud computing raises the importance of MDM in keeping disparate data synchronized."   David Linthicum, CTO Blue Mountain Labs blogs that "the lack of MDM will become more of an issue as cloud computing rises. We're moving from complex federated on-premise systems, to complex federated on-premise and cloud-delivered systems."    Left unmanaged, non-standard, inconsistent, ungoverned data with questionable quality can pollute analytical systems, increase operational costs, and reduce the ROI in Cloud and On-Premise applications. As cloud computing becomes more relevant, and more data, applications, services, and processes are moved out to cloud computing platforms, the need for MDM becomes ever more important. Oracle's MDM suite is designed to deal with all four of the above Cloud issues listed in the IDC survey.   Security - MDM manages all master data attribute privacy and resource access control issues. Integration - MDM pre-integrates Cloud Apps with each other and with On Premise applications at the data level. Operational Costs - MDM significantly reduces operational costs by increasing data quality, thereby improving enterprise business processes efficiency. Compliance - MDM, with its built in Data Governance capabilities, insures that the data is governed according to organizational standards. This facilitates rapid and accurate reporting for compliance purposes. Oracle MDM creates governed high quality master data. A unified cleansed and standardized data view is produced. The Oracle Customer Hub creates a single view of the customer. The Oracle Product Hub creates high quality product data designed to support all go-to-market processes. Oracle Supplier Hub dramatically reduces the chances of 'supplier exceptions'. Oracle Site Hub masters locations. And Oracle Hyperion Data Relationship Management masters financial reference data and manages enterprise hierarchies across operational areas from ERP to EPM and CRM to SCM. Oracle Fusion Middleware connects Cloud and On Premise applications to MDM Hubs and brings high quality master data to your enterprise business processes.   An independent analyst once said "Poor data quality is like dirt on the windshield. You may be able to drive for a long time with slowly degrading vision, but at some point, you either have to stop and clear the windshield or risk everything."  Cloud Computing has the potential to significantly degrade data quality across the enterprise over time. Deploying a Master Data Management solution prior to or in conjunction with a move to the Cloud can insure that the data flowing into the enterprise from the Cloud is clean and governed. This will in turn insure that expected returns on the investment in Cloud Computing will be realized.       Oracle MDM has proven its metal in this area and has the customers to back that up. In fact, I will be hosting a webcast on Tuesday, April 10th at 10 am PT with one of our top Cloud customers, the Church Pension Group. They have moved all mainline applications to a hosted model and use Oracle MDM to insure the master data is managed and cleansed before it is propagated to other cloud and internal systems. I invite you join Martin Hossfeld, VP, IT Operations, and Danette Patterson, Enterprise Data Manager as they review business drivers for MDM and hosted applications, how they did it, the benefits achieved, and lessons learned. You can register for this free webcast here.  
Hope to see you there.


  • How to Fix "Read-only file system" error when I run something as sudo and try to make a folder/file?

    - by Andrew
    When I try to save something or rename a file/folder, it says "Read-only file system". Running something as root in the terminal gives errors like this:

        sudo: unable to open /var/lib/sudo/"My User Name"/0: Read-only file system
        W: Not using locking for read only lock file /var/lib/dpkg/lock
        E: Unable to write to /var/cache/apt/
        E: The package lists or status file could not be parsed or opened.

    When I make a folder, the error dialog details in Nautilus show:

        Error creating directory: Read-only file system

    I would show you a picture of it, but it isn't even letting me save onto my flash drive. Please help me.


  • Recover RAID 5 data after created new array instead of re-using

    - by Brigadieren
    Folks please help - I am a newb with a major headache at hand (perfect storm situation). I have a 3 1tb hdd on my ubuntu 11.04 configured as software raid 5. The data had been copied weekly onto another separate off the computer hard drive until that completely failed and was thrown away. A few days back we had a power outage and after rebooting my box wouldn't mount the raid. In my infinite wisdom I entered mdadm --create -f... command instead of mdadm --assemble and didn't notice the travesty that I had done until after. It started the array degraded and proceeded with building and syncing it which took ~10 hours. After I was back I saw that that the array is successfully up and running but the raid is not I mean the individual drives are partitioned (partition type f8 ) but the md0 device is not. Realizing in horror what I have done I am trying to find some solutions. I just pray that --create didn't overwrite entire content of the hard driver. Could someone PLEASE help me out with this - the data that's on the drive is very important and unique ~10 years of photos, docs, etc. Is it possible that by specifying the participating hard drives in wrong order can make mdadm overwrite them? when I do mdadm --examine --scan I get something like ARRAY /dev/md/0 metadata=1.2 UUID=f1b4084a:720b5712:6d03b9e9:43afe51b name=<hostname>:0 Interestingly enough name used to be 'raid' and not the host hame with :0 appended. Here is the 'sanitized' config entries: DEVICE /dev/sdf1 /dev/sde1 /dev/sdd1 CREATE owner=root group=disk mode=0660 auto=yes HOMEHOST <system> MAILADDR root ARRAY /dev/md0 metadata=1.2 name=tanserv:0 UUID=f1b4084a:720b5712:6d03b9e9:43afe51b Here is the output from mdstat cat /proc/mdstat Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] md0 : active raid5 sdd1[0] sdf1[3] sde1[1] 1953517568 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/3] [UUU] unused devices: <none> fdisk shows the following: fdisk -l Disk /dev/sda: 80.0 GB, 80026361856 bytes 255 heads, 63 sectors/track, 9729 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x000bf62e Device Boot Start End Blocks Id System /dev/sda1 * 1 9443 75846656 83 Linux /dev/sda2 9443 9730 2301953 5 Extended /dev/sda5 9443 9730 2301952 82 Linux swap / Solaris Disk /dev/sdb: 750.2 GB, 750156374016 bytes 255 heads, 63 sectors/track, 91201 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x000de8dd Device Boot Start End Blocks Id System /dev/sdb1 1 91201 732572001 8e Linux LVM Disk /dev/sdc: 500.1 GB, 500107862016 bytes 255 heads, 63 sectors/track, 60801 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x00056a17 Device Boot Start End Blocks Id System /dev/sdc1 1 60801 488384001 8e Linux LVM Disk /dev/sdd: 1000.2 GB, 1000204886016 bytes 255 heads, 63 sectors/track, 121601 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x000ca948 Device Boot Start End Blocks Id System /dev/sdd1 1 121601 976760001 fd Linux raid autodetect Disk /dev/dm-0: 1250.3 GB, 1250254913536 bytes 255 heads, 63 
sectors/track, 152001 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x00000000 Disk /dev/dm-0 doesn't contain a valid partition table Disk /dev/sde: 1000.2 GB, 1000204886016 bytes 255 heads, 63 sectors/track, 121601 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x93a66687 Device Boot Start End Blocks Id System /dev/sde1 1 121601 976760001 fd Linux raid autodetect Disk /dev/sdf: 1000.2 GB, 1000204886016 bytes 255 heads, 63 sectors/track, 121601 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0xe6edc059 Device Boot Start End Blocks Id System /dev/sdf1 1 121601 976760001 fd Linux raid autodetect Disk /dev/md0: 2000.4 GB, 2000401989632 bytes 2 heads, 4 sectors/track, 488379392 cylinders Units = cylinders of 8 * 512 = 4096 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 524288 bytes / 1048576 bytes Disk identifier: 0x00000000 Disk /dev/md0 doesn't contain a valid partition table Per suggestions I did clean up the superblocks and re-created the array with --assume-clean option but with no luck at all. Is there any tool that will help me to revive at least some of the data? Can someone tell me what and how the mdadm --create does when syncs to destroy the data so I can write a tool to un-do whatever was done? After the re-creating of the raid I run fsck.ext4 /dev/md0 and here is the output root@tanserv:/etc/mdadm# fsck.ext4 /dev/md0 e2fsck 1.41.14 (22-Dec-2010) fsck.ext4: Superblock invalid, trying backup blocks... fsck.ext4: Bad magic number in super-block while trying to open /dev/md0 The superblock could not be read or does not describe a correct ext2 filesystem. If the device is valid and it really contains an ext2 filesystem (and not swap or ufs or something else), then the superblock is corrupt, and you might try running e2fsck with an alternate superblock: e2fsck -b 8193 Per Shanes' suggestion I tried root@tanserv:/home/mushegh# mkfs.ext4 -n /dev/md0 mke2fs 1.41.14 (22-Dec-2010) Filesystem label= OS type: Linux Block size=4096 (log=2) Fragment size=4096 (log=2) Stride=128 blocks, Stripe width=256 blocks 122101760 inodes, 488379392 blocks 24418969 blocks (5.00%) reserved for the super user First data block=0 Maximum filesystem blocks=0 14905 block groups 32768 blocks per group, 32768 fragments per group 8192 inodes per group Superblock backups stored on blocks: 32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208, 4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968, 102400000, 214990848 and run fsck.ext4 with every backup block but all returned the following: root@tanserv:/home/mushegh# fsck.ext4 -b 214990848 /dev/md0 e2fsck 1.41.14 (22-Dec-2010) fsck.ext4: Invalid argument while trying to open /dev/md0 The superblock could not be read or does not describe a correct ext2 filesystem. If the device is valid and it really contains an ext2 filesystem (and not swap or ufs or something else), then the superblock is corrupt, and you might try running e2fsck with an alternate superblock: e2fsck -b 8193 <device> Any suggestions? Regards!


  • Can anyone explain to me what problem Core Data solves?

    - by Curtis Sumpter
    Core Data seems to add a needless layer of complexity. If you want to save data created natively by the user in an app why not just use an object and then write the data all to SQLite or back to a server using a RESTful script if necessary. Android doesn't have Core Data (though if it has something similar I haven't seen it.). What the heck is the point of buggy CD except useless needless overhead for people who can't write SQL or CGI scripts?


  • Difference between "Data Binding", "Data Hiding", "Data Wrapping" and "Encapsulation"?

    - by krishna Chandra
    I have been studying the concepts of object-oriented programming, but I am still not able to distinguish between the following concepts:

        a) Data Binding
        b) Data Hiding
        c) Data Wrapping
        d) Encapsulation
        e) Data Abstraction

    I have gone through a lot of books, and I have also searched for the difference on Google, but I am still not able to tell these apart. Could anyone please help me?


  • Recover harddrive data

    - by gameshints
    I have a dell laptop that recently "died" (It would get the blue screen of death upon starting) and the hard drive would make a weird cyclic clicking noises. I wanted to see if I could use some tools on my linux machine to recover the data, so I plugged it into there. If I run "fdisk" I get: Disk /dev/sdb: 20.0 GB, 20003880960 bytes 64 heads, 32 sectors/track, 19077 cylinders Units = cylinders of 2048 * 512 = 1048576 bytes Disk identifier: 0x64651a0a Disk /dev/sdb doesn't contain a valid partition table Fine, the partition table is messed up. However if I run "testdisk" in attempt to fix the table, it freezes at this point, making the same cyclical clicking noises: Disk /dev/sdb - 20 GB / 18 GiB - CHS 19078 64 32 Analyse cylinder 158/19077: 00% I don't really care about the hard drive working again, and just the data, so I ran "gpart" to figure out where the partitions used to be. I got this: dev(/dev/sdb) mss(512) chs(19077/64/32)(LBA) #s(39069696) size(19077mb) * Warning: strange partition table magic 0x2A55. Primary partition(1) type: 222(0xDE)(UNKNOWN) size: 15mb #s(31429) s(63-31491) chs: (0/1/1)-(3/126/63)d (0/1/32)-(15/24/4)r hex: 00 01 01 00 DE 7E 3F 03 3F 00 00 00 C5 7A 00 00 Primary partition(2) type: 007(0x07)(OS/2 HPFS, NTFS, QNX or Advanced UNIX) (BOOT) size: 19021mb #s(38956987) s(31492-38988478) chs: (4/0/1)-(895/126/63)d (15/24/5)-(19037/21/31)r hex: 80 00 01 04 07 7E FF 7F 04 7B 00 00 BB 6F 52 02 So I tried to mount just to the old NTFS partition, but got an error: sudo mount -o loop,ro,offset=16123904 -t ntfs /dev/sdb /mnt/usb NTFS signature is missing. Ugh. Okay. But then I tried to get a raw data dump by running dd if=/dev/sdb of=/home/erik/brokenhd skip=31492 count=38956987 But the file got up to 59885568 bytes, and made the same cyclical clicking noises. Obviously there is a bad sector, but I don't know what to do about it! The data is still there... if I view that 57MB file in textpad... I can see raw data from files. How can I get my data back? Thanks for any suggestions, Solution: I was able to recover about 90% of my data: Froze harddrive in freezer Used Ddrescue to make a copy of the drive Since Ddrescue wasn't able to get enough of my drive to use testdisk to recover my partitions/file system, I ended up using photorec to recover most of my files

