Search Results

Search found 20904 results on 837 pages for 'disk performance'.

Page 368 of 837

  • How to serve a View as CSV in ASP.NET Web Forms

    - by ChessWhiz
    Hi, I have an MS SQL view that I want to make available as a CSV download in my ASP.NET Web Forms app. I am using Entity Framework for other views and tables in the project. What's the best way to enable this download? I could add a HyperLink whose click handler iterates over the view, writes its CSV form to disk, and then serves that file. However, I'd prefer not to write to disk if it can be avoided, and that approach involves iteration code that some other solution might make unnecessary. Any ideas?
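
    One way to avoid touching the disk is to stream the CSV straight into the HTTP response from a click handler. A minimal sketch, assuming a hypothetical EF context MyEntities with an entity set MyView and made-up column names:

        // In the page's code-behind (requires using System.Text; for StringBuilder).
        protected void ExportCsv_Click(object sender, EventArgs e)
        {
            Response.Clear();
            Response.ContentType = "text/csv";
            Response.AddHeader("Content-Disposition", "attachment; filename=export.csv");

            var sb = new StringBuilder();
            sb.AppendLine("Id,Name,Total");              // hypothetical column headers

            using (var db = new MyEntities())            // hypothetical EF context
            {
                foreach (var row in db.MyView)           // hypothetical view entity set
                {
                    sb.Append(row.Id).Append(',')
                      .Append(Escape(row.Name)).Append(',')
                      .Append(row.Total).AppendLine();
                }
            }

            Response.Write(sb.ToString());
            Response.End();                              // stop normal page rendering
        }

        // Quote fields that contain commas, quotes or newlines.
        private static string Escape(string value)
        {
            if (string.IsNullOrEmpty(value)) return string.Empty;
            if (value.IndexOfAny(new[] { ',', '"', '\n' }) >= 0)
                return "\"" + value.Replace("\"", "\"\"") + "\"";
            return value;
        }

    For very large views you could instead write each row to Response.Output as you iterate, which avoids building the whole file in memory; either way nothing touches the disk.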

    Read the article

  • [SWT/RCP] Alpha blending is slow on Linux

    - by elgcom
    We are developing an SWT/RCP (Eclipse 3.5) application on both Windows and Linux (on identical hardware). The application is a GIS app which shows several layered maps (PNG images) rendered with alpha blending: org.eclipse.draw2d.Graphics.setAlpha(...); org.eclipse.draw2d.Graphics.drawImage(...); On Windows the performance is pretty good, but on Linux it is very poor. Is that a Linux (GTK/KDE) problem, or is there any workaround to improve the performance on Linux?

    Read the article

  • Is AMQP suitable as both an intra and inter-machine software bus?

    - by Bwooce
    I'm trying to get my head around AMQP. It looks great for inter-machine (cluster, LAN, WAN) communication between applications, but I'm not sure if it is suitable (in architectural and current-implementation terms) for use as a software bus within one machine. Would it be worth ripping out a current high-performance message-passing framework to replace it with AMQP, or is this falling into the same trap as RPC by blurring the distinction between local and non-local communication? I'm also wary of the performance impact of using a WAN technology for intra-machine communication, although this may be more of an implementation concern than an architectural one. War stories would be appreciated.

    Read the article

  • Is there a way that I can hard code a const XmlNameTable to be reused by all of my XmlTextReader(s)?

    - by highone
    Before I continue I would just like to say I know that "premature optimization is the root of all evil." However, this program is only a hobby project and I enjoy trying to find ways to optimize it. That being said, I was reading an article on improving XML performance and it recommended sharing "the XmlNameTable class that is used to store element and attribute names across multiple XML documents of the same type to improve performance." I wasn't able to find any information about doing this in my googling, so it is likely that this is either not possible, a no-no, or a stupid question, but what's the harm in asking?
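
    For what it's worth, the shared table can't literally be const (NameTable is a mutable reference type), but a single static readonly instance reused by every reader gives the intended effect; XmlTextReader has a constructor overload that accepts an XmlNameTable, and XmlReaderSettings has a NameTable property. A minimal sketch using the latter, with hypothetical file names; note that NameTable is not documented as thread-safe, so share it only among readers used on one thread:

        using System.Xml;

        static class SharedXml
        {
            // One NameTable reused by every reader, so element/attribute names are
            // atomized once rather than once per document.
            private static readonly NameTable Names = new NameTable();

            public static XmlReader CreateReader(string path)
            {
                var settings = new XmlReaderSettings { NameTable = Names };
                return XmlReader.Create(path, settings);
            }

            static void Main()
            {
                foreach (var file in new[] { "a.xml", "b.xml" })   // hypothetical files
                {
                    using (var reader = CreateReader(file))
                    {
                        while (reader.Read()) { /* process nodes */ }
                    }
                }
            }
        }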

    Read the article

  • WPF DataGrid Vs Windows Forms DataGridView

    - by Mrk Mnl
    I have experience in WPF and Windows Forms, however I have only used the Windows Forms DataGridView and not the WPF DataGrid (which was only included in .NET 4, or could be added to .NET 3.5 from CodePlex, I understand). I am about to develop an app using one of these controls heavily for large amounts of data, and I have read that performance is an issue with the WPF DataGrid, so I may stick to the Windows Forms DataGridView. Is this the case? I do not want to use a third-party control. Does the Windows Forms DataGridView offer significantly better performance than the WPF DataGrid for large amounts of data? If I were to use WPF I would prefer to use .NET 3.5 SP1, unless the DataGrid in .NET 4 is significantly better? Also, I want to use ADO with DataTables, which I feel is better suited to Windows Forms.

    Read the article

  • Optimising speeds in HDF5 using PyTables

    - by Sree Aurovindh
    The problem concerns the write speed of the computers (10 32-bit machines) and PostgreSQL query performance. I will explain the scenario in detail. I have about 80 GB of data (with appropriate database indexes in place). I am trying to read it from the PostgreSQL database and write it into HDF5 using PyTables. I have 1 table and 5 variable arrays in one HDF5 file. The implementation of HDF5 is not multithreaded or enabled for symmetric multiprocessing. I have rented about 10 computers for a day and am trying to write across them in order to speed up my data handling.

    As far as the PostgreSQL table is concerned, the overall record count is 140 million and I have 5 primary/foreign-key related tables. I am not using joins, as they do not scale, so for a single lookup I do 6 lookups without joins and write the results into HDF5 format. For each lookup I do 6 inserts into the table and its corresponding arrays. The queries are really simple:

        select * from x.train where tr_id=1 (primary key and indexed)
        select q_t from x.qt where q_id=2 (non-primary key but indexed)
        (similarly, five queries)

    Each computer writes two HDF5 files, so the total count comes to around 20 files. Some calculations and statistics:

        Total number of records: 143,700,000
        Total number of records per file: 143,700,000 / 20 = 7,185,000
        Total number of records in each file: 7,185,000 * 5 = 35,925,000

    Current PostgreSQL database configuration: my current machine has 8 GB RAM and an i7 2nd-generation processor. I made the following changes to the PostgreSQL configuration file: shared_buffers: 2 GB, effective_cache_size: 4 GB.

    Note on current performance: I have run it for about ten hours, and so far the total number of records written per file is about 621,000 * 5 = 3,105,000. The bottleneck is that I can only rent the machines for 10 hours per day (overnight), and at this speed it will take about 11 days, which is too long for my experiments. Please suggest how I can improve this. Questions:

    1. Should I use symmetric multiprocessing on those desktops (each has 2 cores with about 2 GB of RAM)? In that case, what is suggested or preferable?
    2. If I change my PostgreSQL configuration file and increase the RAM, will it improve the process?
    3. Should I use multithreading? In that case, any links or pointers would be of great help.

    Thanks, Sree Aurovindh V

    Read the article

  • mdadm: Win7-install created a boot partition on one of my RAID6 drives. How to rebuild?

    - by EXIT_FAILURE
    My problem happened when I attempted to install Windows 7 on its own SSD. The Linux OS I use, which has knowledge of the software RAID system, is on an SSD that I disconnected prior to the install, so that Windows (or I) wouldn't inadvertently mess it up. However, and in retrospect foolishly, I left the RAID disks connected, thinking that Windows wouldn't be so ridiculous as to mess with an HDD that it sees as just unallocated space. Boy was I wrong! After copying over the installation files to the SSD (as expected and desired), it also created an NTFS partition on one of the RAID disks. Both unexpected and totally undesired!

    I changed out the SSDs again and booted up in Linux. mdadm didn't seem to have any problem assembling the array as before, but if I tried to mount the array, I got the error message:

        mount: wrong fs type, bad option, bad superblock on /dev/md0,
        missing codepage or helper program, or other error
        In some cases useful info is found in syslog - try dmesg | tail or so

    dmesg:

        EXT4-fs (md0): ext4_check_descriptors: Block bitmap for group 0 not in group (block 1318081259)!
        EXT4-fs (md0): group descriptors corrupted!

    I then used qparted to delete the newly created NTFS partition on /dev/sdd so that it matched the other three /dev/sd{b,c,e}, and requested a resync of my array with:

        echo repair > /sys/block/md0/md/sync_action

    This took around 4 hours, and upon completion dmesg reports:

        md: md0: requested-resync done.

    A bit brief after a 4-hour task, though I'm unsure as to where other log files exist (I also seem to have messed up my sendmail configuration). In any case: no change reported according to mdadm, everything checks out. mdadm -D /dev/md0 still reports:

        Version : 1.2
        Creation Time : Wed May 23 22:18:45 2012
        Raid Level : raid6
        Array Size : 3907026848 (3726.03 GiB 4000.80 GB)
        Used Dev Size : 1953513424 (1863.02 GiB 2000.40 GB)
        Raid Devices : 4
        Total Devices : 4
        Persistence : Superblock is persistent
        Update Time : Mon May 26 12:41:58 2014
        State : clean
        Active Devices : 4
        Working Devices : 4
        Failed Devices : 0
        Spare Devices : 0
        Layout : left-symmetric
        Chunk Size : 4K
        Name : okamilinkun:0
        UUID : 0c97ebf3:098864d8:126f44e3:e4337102
        Events : 423

        Number   Major   Minor   RaidDevice   State
           0       8       16        0        active sync   /dev/sdb
           1       8       32        1        active sync   /dev/sdc
           2       8       48        2        active sync   /dev/sdd
           3       8       64        3        active sync   /dev/sde

    Trying to mount it still reports:

        mount: wrong fs type, bad option, bad superblock on /dev/md0,
        missing codepage or helper program, or other error
        In some cases useful info is found in syslog - try dmesg | tail or so

    and dmesg:

        EXT4-fs (md0): ext4_check_descriptors: Block bitmap for group 0 not in group (block 1318081259)!
        EXT4-fs (md0): group descriptors corrupted!

    I'm a bit unsure where to proceed from here, and trying stuff "to see if it works" is a bit too risky for me. This is what I suggest I should attempt to do: tell mdadm that /dev/sdd (the one that Windows wrote into) isn't reliable anymore, pretend it is newly re-introduced to the array, and reconstruct its content based on the other three drives. I also could be totally wrong in my assumptions, and the creation of the NTFS partition on /dev/sdd and its subsequent deletion may have changed something that cannot be fixed this way. My question: help, what should I do? If I should do what I suggested, how do I do that?
    From reading documentation, etc., I would think maybe:

        mdadm --manage /dev/md0 --set-faulty /dev/sdd
        mdadm --manage /dev/md0 --remove /dev/sdd
        mdadm --manage /dev/md0 --re-add /dev/sdd

    However, the documentation examples suggest /dev/sdd1, which seems strange to me, as there is no partition there as far as Linux is concerned, just unallocated space. Maybe these commands won't work without one. Maybe it makes sense to mirror the partition table of one of the other raid devices that weren't touched, before --re-add. Something like:

        sfdisk -d /dev/sdb | sfdisk /dev/sdd

    Bonus question: why would the Windows 7 installation do something so st...potentially dangerous?

    Update: I went ahead and marked /dev/sdd as faulty, and removed it (not physically) from the array:

        # mdadm --manage /dev/md0 --set-faulty /dev/sdd
        # mdadm --manage /dev/md0 --remove /dev/sdd

    However, attempting to --re-add was disallowed:

        # mdadm --manage /dev/md0 --re-add /dev/sdd
        mdadm: --re-add for /dev/sdd to /dev/md0 is not possible

    --add was fine:

        # mdadm --manage /dev/md0 --add /dev/sdd

    mdadm -D /dev/md0 now reports the state as clean, degraded, recovering, and /dev/sdd as spare rebuilding. /proc/mdstat shows the recovery progress:

        md0 : active raid6 sdd[4] sdc[1] sde[3] sdb[0]
              3907026848 blocks super 1.2 level 6, 4k chunk, algorithm 2 [4/3] [UU_U]
              [>....................]  recovery =  2.1% (42887780/1953513424) finish=348.7min speed=91297K/sec

    nmon also shows expected output:

        ¦sdb 0% 87.3 0.0| >  |¦
        ¦sdc 71% 109.1 0.0|RRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRR >  |¦
        ¦sdd 40% 0.0 87.3|WWWWWWWWWWWWWWWWWWWW >  |¦
        ¦sde 0% 87.3 0.0|>  ||

    It looks good so far. Crossing my fingers for another five+ hours :)

    Update 2: The recovery of /dev/sdd finished, with dmesg output:

        [44972.599552] md: md0: recovery done.
        [44972.682811] RAID conf printout:
        [44972.682815]  --- level:6 rd:4 wd:4
        [44972.682817]  disk 0, o:1, dev:sdb
        [44972.682819]  disk 1, o:1, dev:sdc
        [44972.682820]  disk 2, o:1, dev:sdd
        [44972.682821]  disk 3, o:1, dev:sde

    Attempting mount /dev/md0 reports:

        mount: wrong fs type, bad option, bad superblock on /dev/md0,
        missing codepage or helper program, or other error
        In some cases useful info is found in syslog - try dmesg | tail or so

    And on dmesg:

        [44984.159908] EXT4-fs (md0): ext4_check_descriptors: Block bitmap for group 0 not in group (block 1318081259)!
        [44984.159912] EXT4-fs (md0): group descriptors corrupted!

    I'm not sure what to do now. Suggestions?
    Output of dumpe2fs /dev/md0:

        dumpe2fs 1.42.8 (20-Jun-2013)
        Filesystem volume name:   Atlas
        Last mounted on:          /mnt/atlas
        Filesystem UUID:          e7bfb6a4-c907-4aa0-9b55-9528817bfd70
        Filesystem magic number:  0xEF53
        Filesystem revision #:    1 (dynamic)
        Filesystem features:      has_journal ext_attr resize_inode dir_index filetype extent flex_bg sparse_super large_file huge_file uninit_bg dir_nlink extra_isize
        Filesystem flags:         signed_directory_hash
        Default mount options:    user_xattr acl
        Filesystem state:         clean
        Errors behavior:          Continue
        Filesystem OS type:       Linux
        Inode count:              244195328
        Block count:              976756712
        Reserved block count:     48837835
        Free blocks:              92000180
        Free inodes:              243414877
        First block:              0
        Block size:               4096
        Fragment size:            4096
        Reserved GDT blocks:      791
        Blocks per group:         32768
        Fragments per group:      32768
        Inodes per group:         8192
        Inode blocks per group:   512
        RAID stripe width:        2
        Flex block group size:    16
        Filesystem created:       Thu May 24 07:22:41 2012
        Last mount time:          Sun May 25 23:44:38 2014
        Last write time:          Sun May 25 23:46:42 2014
        Mount count:              341
        Maximum mount count:      -1
        Last checked:             Thu May 24 07:22:41 2012
        Check interval:           0 (<none>)
        Lifetime writes:          4357 GB
        Reserved blocks uid:      0 (user root)
        Reserved blocks gid:      0 (group root)
        First inode:              11
        Inode size:               256
        Required extra isize:     28
        Desired extra isize:      28
        Journal inode:            8
        Default directory hash:   half_md4
        Directory Hash Seed:      e177a374-0b90-4eaa-b78f-d734aae13051
        Journal backup:           inode blocks
        dumpe2fs: Corrupt extent header while reading journal super block

    Read the article

  • How do we achieve "substring-match" under O(n) time?

    - by Pacerier
    I have an assignment that requires reading a huge file of random inputs, for example:

        Adana
        Izmir Adnan Menderes Apt
        Addis Ababa
        Aden
        ADIYAMAN
        ALDAN
        Amman Marka Intl Airport
        Adak Island
        Adelaide Airport
        ANURADHAPURA
        Kodiak Apt
        DALLAS/ADDISON
        Ardabil
        ANDREWS AFB
        etc.

    If I specify a search term, the program is supposed to find the lines in which a substring occurs. For example, if the search term is "uradha", the program is supposed to show ANURADHAPURA. If the search term is "airport", the program is supposed to show Amman Marka Intl Airport and Adelaide Airport. A quote from the assignment specs: "You are to program this application taking efficiency into account as though large amounts of data and processing is involved." I could easily achieve this functionality using a loop, but the performance would be O(n). I was thinking of using a trie, but it seems to only work if the substring starts at index 0. I was wondering what solutions there are that give performance better than O(n)?
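
    One standard way to beat a per-query linear scan is to pay an up-front indexing cost: build a sorted suffix array over all the lines once, then answer each query with binary search in roughly O(m log N) time (m = pattern length, N = total characters). A rough C# sketch of the idea, not the assignment's prescribed solution:

        using System;
        using System.Collections.Generic;

        class SubstringIndex
        {
            private readonly string[] lines;
            private readonly List<(int Line, int Start)> suffixes = new List<(int, int)>();

            public SubstringIndex(string[] inputLines)
            {
                lines = inputLines;
                for (int i = 0; i < lines.Length; i++)
                    for (int j = 0; j < lines[i].Length; j++)
                        suffixes.Add((i, j));

                // Sort every suffix lexicographically (case-insensitive, like the example).
                suffixes.Sort((a, b) =>
                {
                    int lenA = lines[a.Line].Length - a.Start;
                    int lenB = lines[b.Line].Length - b.Start;
                    int cmp = string.Compare(lines[a.Line], a.Start, lines[b.Line], b.Start,
                                             Math.Min(lenA, lenB), StringComparison.OrdinalIgnoreCase);
                    return cmp != 0 ? cmp : lenA - lenB;
                });
            }

            // < 0: suffix sorts before the term; 0: suffix starts with the term; > 0: after.
            private int CompareToTerm((int Line, int Start) s, string term)
            {
                int avail = lines[s.Line].Length - s.Start;
                int cmp = string.Compare(lines[s.Line], s.Start, term, 0,
                                         Math.Min(avail, term.Length),
                                         StringComparison.OrdinalIgnoreCase);
                return cmp != 0 ? cmp : (avail < term.Length ? -1 : 0);
            }

            public IEnumerable<string> Find(string term)
            {
                // Binary search for the first suffix that starts with the term.
                int lo = 0, hi = suffixes.Count;
                while (lo < hi)
                {
                    int mid = (lo + hi) / 2;
                    if (CompareToTerm(suffixes[mid], term) < 0) lo = mid + 1; else hi = mid;
                }

                var seen = new HashSet<int>();   // report each matching line only once
                for (int i = lo; i < suffixes.Count && CompareToTerm(suffixes[i], term) == 0; i++)
                    if (seen.Add(suffixes[i].Line))
                        yield return lines[suffixes[i].Line];
            }

            static void Main()
            {
                var index = new SubstringIndex(new[] { "ANURADHAPURA", "Adelaide Airport", "Adak Island" });
                foreach (var hit in index.Find("uradha"))
                    Console.WriteLine(hit);      // prints ANURADHAPURA
            }
        }

    A suffix tree or suffix automaton gives the same sub-linear query time with a faster (linear) build, at the cost of more implementation work; the simple sorted-suffix version above is usually enough to demonstrate the trade-off.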

    Read the article

  • Fully automated SQL Server Restore

    - by hasen j
    I'm not very fluent with SQL Server commands. I need a script to restore a database from a .bak file and move the logical data and log files to a specific path. I can do:

        restore filelistonly from disk='D:\backups\my_backup.bak'

    This gives me a result set with a column LogicalName. Next, I need to use the logical names from that result set in the restore command:

        restore database my_db_name from disk='d:\backups\my_backups.bak'
        with file=1,
        move 'logical_data_file' to 'd:\data\mydb.mdf',
        move 'logical_log_file' to 'd:\data\mylog.ldf'

    How do I capture the logical names from the first result set into variables that can be supplied to the "move" command? I think the solution might be trivial, but I'm pretty new to SQL Server.
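
    If driving it from client code is acceptable, one way to capture the logical names is to run RESTORE FILELISTONLY from ADO.NET, read the LogicalName and Type columns ('D' = data file, 'L' = log file), and compose the RESTORE DATABASE statement from them. A rough C# sketch; the connection string, database name, and paths are placeholders:

        using System;
        using System.Data.SqlClient;
        using System.Text;

        class RestoreHelper
        {
            static void Main()
            {
                const string connStr = "Server=.;Database=master;Integrated Security=true";  // placeholder
                const string bak = @"d:\backups\my_backups.bak";                              // placeholder

                string dataLogical = null, logLogical = null;

                using (var conn = new SqlConnection(connStr))
                {
                    conn.Open();

                    // 1. Read the logical file names out of the backup header.
                    using (var cmd = new SqlCommand("RESTORE FILELISTONLY FROM DISK = @bak", conn))
                    {
                        cmd.Parameters.AddWithValue("@bak", bak);
                        using (var rdr = cmd.ExecuteReader())
                        {
                            while (rdr.Read())
                            {
                                if ((string)rdr["Type"] == "D") dataLogical = (string)rdr["LogicalName"];
                                else if ((string)rdr["Type"] == "L") logLogical = (string)rdr["LogicalName"];
                            }
                        }
                    }

                    // 2. Build and run the restore, moving the files to the desired paths.
                    var sql = new StringBuilder()
                        .Append("RESTORE DATABASE my_db_name FROM DISK = @bak WITH FILE = 1, ")
                        .AppendFormat("MOVE '{0}' TO 'd:\\data\\mydb.mdf', ", dataLogical)
                        .AppendFormat("MOVE '{0}' TO 'd:\\data\\mylog.ldf'", logLogical)
                        .ToString();

                    using (var restore = new SqlCommand(sql, conn))
                    {
                        restore.Parameters.AddWithValue("@bak", bak);
                        restore.CommandTimeout = 0;   // restores can take a while
                        restore.ExecuteNonQuery();
                    }
                }

                Console.WriteLine("Restore complete.");
            }
        }

    The same idea works entirely in T-SQL by inserting the FILELISTONLY output into a table variable and selecting the names from it; the sketch above just keeps everything in one language.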

    Read the article

  • CgiModule and FastCgiModule in IIS7

    - by hari
    My web server is IIS7 running on Windows 2008 Web Edition. There are nearly 40 pre-installed modules listed under "Modules", including CgiModule and FastCgiModule. All the websites installed on this server run purely on ASP.NET. Can I remove these two modules to improve performance? In the same way, my application uses "Forms Authentication" only; in that case can I delete "Windows Authentication" and the WindowsAuthenticationModule? Also, please suggest whether any other modules can be removed to improve performance.

    Read the article

  • from ggplot2 to OOo workflow?

    - by Andreas
    This is not really a programming question, but I'll try here nonetheless. I once used LaTeX for my reports, but the people I work with need to make small edits and do not have LaTeX skills. OpenOffice is then the way to go. But saving ggplot images with dpi = 100 makes for really ugly graphs, and dpi = 600 is a no-go (e.g. huge legend). So what to do? I currently save (still via ggsave) to EPS, which OpenOffice can import, but performance is not good at all. Googling, I found a bug report for the poor EPS performance in OOo, and also talk about a non-implemented SVG feature, but neither helps me right now. If you work with ggplot2 and OOo, what do you do? I have been unsuccessful with PDF conversion for some reason.

    Read the article

  • Application_End() cannot access cache through HttpContext.Current.Cache[key]

    - by Carl J.
    I want to be able to maintain certain objects between application restarts. To do that, I want to write specific cached items out to disk in the Global.asax Application_End() function and re-load them on Application_Start(). I currently have a cache helper class which uses the following method to return the cached value: return HttpContext.Current.Cache[key]; Problem: during Application_End(), HttpContext.Current is null since there is no web request (it's an automated cleanup procedure), therefore I cannot access .Cache[] to retrieve any of the items to save to disk. Question: how can I access the cache items during Application_End()?
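
    The usual suggestion here is to go through HttpRuntime.Cache, which exposes the same application cache but does not depend on a current request. A small sketch of the helper rewritten that way; the Application_End loop and the SaveToDisk helper are only illustrations:

        using System.Web;

        public static class CacheHelper
        {
            // HttpRuntime.Cache is the same cache instance that HttpContext.Current.Cache
            // exposes, but it is available even when there is no current request.
            public static object Get(string key)
            {
                return HttpRuntime.Cache[key];
            }

            public static void Set(string key, object value)
            {
                HttpRuntime.Cache.Insert(key, value);
            }
        }

        // In Global.asax, Application_End can then enumerate and persist entries:
        // protected void Application_End(object sender, EventArgs e)
        // {
        //     foreach (System.Collections.DictionaryEntry entry in HttpRuntime.Cache)
        //         SaveToDisk((string)entry.Key, entry.Value);   // hypothetical helper
        // }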

    Read the article

  • How to reduce Java concurrent mode failure and excessive GC

    - by jimx
    In Java, a concurrent mode failure means that the concurrent collector failed to free up enough memory from the tenured and permanent generations and has to give up and let the full stop-the-world GC kick in. The end result can be very expensive. I understand this concept but have never had a good comprehensive understanding of A) what can cause a concurrent mode failure, and B) what the solution is. This lack of clarity leads me to write and debug code without many hints in mind, and I often have to shop around among performance flags, from Foo to Bar, without particular reason, just trying things. I'd like to learn from developers here what your experience is. If you have previously encountered such a performance issue, what was the cause and how did you address it? If you have coding recommendations, please don't be too general. Thanks!

    Read the article

  • Is there a difference between Perl's shift versus assignment from @_ for subroutine parameters?

    - by cowgod
    Let us ignore for a moment Damian Conway's best practice of no more than three positional parameters for any given subroutine. Is there any difference between the two examples below in regards to performance or functionality? Using shift: sub do_something_fantastical { my $foo = shift; my $bar = shift; my $baz = shift; my $qux = shift; my $quux = shift; my $corge = shift; } Using @_: sub do_something_fantastical { my ($foo, $bar, $baz, $qux, $quux, $corge) = @_; } Provided that both examples are the same in terms of performance and functionality, what do people think about one format over the other? Obviously the example using @_ is fewer lines of code, but isn't it more legible to use shift as shown in the other example? Opinions with good reasoning are welcome.

    Read the article

  • Portable and Secure Document Repository

    - by Sivakanesh
    I'm trying to find a document manager/repository (WinXP) that can be used from a USB disk. I would like a tool that allows you to add all documents to a single repository (or a secure file system). Ideally you would log in to this portable application to add or retrieve a document, and documents shouldn't be accessible outside of the application. I have found an application called Benubird Pro (the app is portable) that allows you to add files to a single repository, but the downsides are that it is not secure and the repository is always stored on the PC, not on the USB disk. Are you able to recommend any other applications? Thanks

    Read the article

  • How does one modify the thread scheduling behavior when using Threading Building Blocks (TBB)?

    - by J Teller
    Does anyone know how to modify the thread scheduling (specifically affinity) when using TBB? Doing a high level analysis on a simple parallel-for application, it seems like TBB is specifying the underlying threads' affinity in a way that reduces performance. Specifically, the cores I'm running on have hyper-threading enabled, and it looks like TBB is affinitizing threads to the same core even if there is a different core left completely unloaded. FWIW, I realize it's likely that TBB is doing the "right thing" and that changing the threads' affinity will only reduce performance. I'd just like to experiment with it to see if that's really the case.

    Read the article

  • Saving contents of ApplicationState in ASP.Net (MVC)

    - by Saqib
    I have an internal app used to edit XML files on disk. The XML files are loaded into an object model which is stored in ApplicationState. I need to save this data. One option is to do this every time the user changes some data. However, this seems a bit inefficient: writing the data out to disk each time a change is made. Instead, is it possible to be notified whenever a user closes their browser, plus just before the web server exits? That way, the data would be saved each time a session ends, plus when the computer shuts down, etc. I thought that Application_End(), Application_Error() and Session_End() in Global.asax would provide this, but these methods don't seem to be called.
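
    Two caveats worth keeping in mind: Session_End only fires when session state runs in-process, and Application_End fires on a graceful app-domain unload (recycle, idle timeout, config change) but not if the worker process is killed, so it can only ever be a best-effort save. A rough Global.asax sketch along those lines, assuming a hypothetical XmlModelStore helper that serializes the object model:

        using System;
        using System.Web;

        public class Global : HttpApplication
        {
            protected void Application_Start(object sender, EventArgs e)
            {
                // Reload the previously saved object model, if a saved copy exists.
                Application["Model"] = XmlModelStore.LoadFromDisk();   // hypothetical helper
            }

            protected void Application_End(object sender, EventArgs e)
            {
                // Runs on a graceful app-domain unload, not on a hard process kill,
                // so treat this as best effort and still save on important edits.
                var model = Application["Model"];
                if (model != null)
                    XmlModelStore.SaveToDisk(model);                   // hypothetical helper
            }
        }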

    Read the article

  • Compressing an array of bytes

    - by Pedro Magalhaes
    Hi, my problem is this: I want to store an array of bytes in a compressed file and then read it back with good performance. So I create an array of bytes, pass it to a ZLIB algorithm, and store it in the file. To my surprise the algorithm doesn't work well, probably because the array is a random sample. Using this approach it would be very easy to read: just copy the stream to memory, decompress it, and copy it into an array of bytes. But I need to compress the file. Do I have to use an algorithm, like RLE, to compress the byte array? I think I could store the byte array like a string and then compress it, but I think I would get poor performance when reading the data. Sorry for my poor English. Thanks
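
    As an aside, truly random bytes are essentially incompressible whatever algorithm you choose, so a poor ratio on a random sample is expected rather than a bug. For data that does contain redundancy, a minimal .NET round-trip sketch using DeflateStream (the DEFLATE algorithm that zlib implements) might look like this:

        using System;
        using System.IO;
        using System.IO.Compression;

        static class ByteCompression
        {
            public static byte[] Compress(byte[] data)
            {
                using (var output = new MemoryStream())
                {
                    // Disposing the DeflateStream flushes the remaining compressed bytes.
                    using (var deflate = new DeflateStream(output, CompressionMode.Compress))
                        deflate.Write(data, 0, data.Length);
                    return output.ToArray();
                }
            }

            public static byte[] Decompress(byte[] compressed)
            {
                using (var input = new MemoryStream(compressed))
                using (var deflate = new DeflateStream(input, CompressionMode.Decompress))
                using (var output = new MemoryStream())
                {
                    deflate.CopyTo(output);
                    return output.ToArray();
                }
            }

            static void Main()
            {
                var data = new byte[10000];                       // all zeros: highly redundant
                var packed = Compress(data);
                Console.WriteLine("{0} -> {1} bytes", data.Length, packed.Length);
                Console.WriteLine(Decompress(packed).Length);     // 10000
            }
        }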

    Read the article

  • Why do people still use C these days? [closed]

    - by Joshua
    C++ is clearly a far superior language to C, since it has many features that C lacks (although C++'s object model isn't as ideal as, say, C#'s). With the coming of the new C++0x standard, why hasn't C been phased out into obscurity? C++ has been around for so long, since the '80s. The Linux kernel has already been ported to C++ with negligible performance differences. I believe, with no evidence, that larger program structures benefit in performance if written in C++ rather than in C, if only because of object interaction. Don't get me started on "objects-in-C!" libraries, which are all a terrible hack. (Not that C++'s object model is the most ideal, but it is almost up to snuff with C#'s using common ad-hoc techniques.)

    Read the article

  • Comparison of collection datatypes in C#

    - by Joel in Gö
    Does anyone know of a good overview of the different C# collection types? I am looking for something showing which basic operations, such as Add, Remove, RemoveLast etc., are supported, and giving the relative performance. It would be particularly interesting for the various generic classes - and even better if it showed, e.g., whether there is a difference in performance between a List<T> where T is a class and one where T is a struct. A start would be a nice cheat-sheet for the abstract data structures, comparing linked lists, hash tables etc. Thanks!
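
    In the meantime, the class-versus-struct question is easy to probe with a throwaway micro-benchmark; a rough sketch (the absolute numbers depend entirely on runtime and hardware, so treat it as illustrative only):

        using System;
        using System.Collections.Generic;
        using System.Diagnostics;

        struct PointStruct { public int X, Y; }
        class PointClass { public int X, Y; }

        class CollectionBench
        {
            const int N = 2000000;

            static void Main()
            {
                // Filling: the struct list avoids one heap allocation per element.
                var sw = Stopwatch.StartNew();
                var structs = new List<PointStruct>(N);
                for (int i = 0; i < N; i++) structs.Add(new PointStruct { X = i, Y = i });
                Console.WriteLine("List<struct> fill: {0} ms", sw.ElapsedMilliseconds);

                sw.Restart();
                var classes = new List<PointClass>(N);
                for (int i = 0; i < N; i++) classes.Add(new PointClass { X = i, Y = i });
                Console.WriteLine("List<class> fill:  {0} ms", sw.ElapsedMilliseconds);

                // Scanning: the struct list is contiguous in memory, the class list
                // is an array of references that each point elsewhere on the heap.
                sw.Restart();
                long sum = 0;
                foreach (var p in structs) sum += p.X;
                Console.WriteLine("List<struct> scan: {0} ms ({1})", sw.ElapsedMilliseconds, sum);

                sw.Restart();
                sum = 0;
                foreach (var p in classes) sum += p.X;
                Console.WriteLine("List<class> scan:  {0} ms ({1})", sw.ElapsedMilliseconds, sum);
            }
        }

    The layout difference (contiguous values versus scattered objects) is usually what drives the gap you will see in the fill and scan timings.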

    Read the article

  • .NET and C# Exceptions. What is it reasonable to catch.

    - by djna
    Disclaimer: I'm from a Java background. I don't do much C#. There's a great deal of transfer between the two worlds, but of course there are differences, and one is in the way exceptions tend to be thought about. I recently answered a C# question suggesting that under some circumstances it's reasonable to do this:

        try { some work } catch (Exception e) { commonExceptionHandler(); }

    (The reasons why are immaterial.) I got a response that I don't quite understand:

        until .NET 4.0, it's very bad to catch Exception. It means you catch various low-level fatal errors and so disguise bugs. It also means that in the event of some kind of corruption that triggers such an exception, any open finally blocks on the stack will be executed, so even if the callExceptionReporter function tries to log and quit, it may not even get to that point (the finally blocks may throw again, or cause more corruption, or delete something important from the disk or database).

    Maybe I'm more confused than I realise, but I don't agree with some of that. Please would other folks comment. I understand that there are many low-level Exceptions we don't want to swallow. My commonExceptionHandler() function could reasonably rethrow those. This seems consistent with this answer to a related question, which does say "Depending on your context it can be acceptable to use catch(...), providing the exception is re-thrown." So I conclude that using catch (Exception) is not always evil; silently swallowing certain exceptions is.

    The phrase "Until .NET 4 it is very bad to catch Exception": what changes in .NET 4? Is this a reference to AggregateException, which may give us some new things to do with the exceptions we catch, but which I don't think changes the fundamental "don't swallow" rule?

    The next phrase really bothers me. Can this be right?

        It also means that in the event of some kind of corruption that triggers such an exception, any open finally blocks on the stack will be executed (the finally blocks may throw again, or cause more corruption, or delete something important from the disk or database)

    My understanding is that if some low-level code had

        lowLevelMethod() { try { lowestLevelMethod(); } finally { some really important stuff } }

    and in my code I call lowLevelMethod():

        try { lowLevelMethod() } catch (Exception e) { exception handling and maybe rethrowing }

    then whether or not I catch Exception has no effect whatever on the execution of the finally block. By the time we leave lowLevelMethod() the finally has already run. If the finally is going to do any of the bad things, such as corrupt my disk, then it will do so; my catching the Exception made no difference. If the exception reaches my catch block I need to do the right thing, but I can't be the cause of mis-executing finallys.
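
    A small console sketch of the ordering point being argued about: the callee's finally has already run by the time the caller's catch (Exception) sees anything, so catching broadly in the caller does not change whether or when that finally executes:

        using System;

        class FinallyOrdering
        {
            static void LowestLevelMethod()
            {
                throw new InvalidOperationException("boom");
            }

            static void LowLevelMethod()
            {
                try
                {
                    LowestLevelMethod();
                }
                finally
                {
                    // Runs while the exception propagates out of this frame,
                    // before any handler further up the stack is entered.
                    Console.WriteLine("low-level finally");
                }
            }

            static void Main()
            {
                try
                {
                    LowLevelMethod();
                }
                catch (Exception e)
                {
                    // Prints after "low-level finally": catching broadly here did not
                    // influence the callee's finally block at all.
                    Console.WriteLine("caller catch: " + e.Message);
                    // rethrow here (throw;) for anything that must not be swallowed
                }
            }
        }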

    Read the article

  • Extract a sentence out of sentences separated by delimiters

    - by Laura
    Below is a sample line I have extracted from a website: below a satisfactory level; &quot;an off year for tennis&quot;; &quot;his performance was off&quot; The output displays as: below a satisfactory level; "an off year for tennis"; "his performance was off" I want to get only the first sentence, "below a satisfactory level". Here is the code I have tried after exploring many Stack Overflow posts: $data = explode('; ', $str); echo $data[0]; But somehow it is not working. Thanks in advance.

    Read the article

  • Warning in gdb while running an application in device mode

    - by dragon
    I get warnings in gdb while running the application in device mode. The warning messages are:

        warning: UUID mismatch detected with the loaded library - on disk is: /Developer/Platforms/iPhoneOS.platform/Developer/SDKs/iPhoneOS3.1.2.sdk/System/Library/PrivateFrameworks/MBX2D.framework/MBX2D
        =uuid-mismatch-with-loaded-file,file="/Developer/Platforms/iPhoneOS.platform/Developer/SDKs/iPhoneOS3.1.2.sdk/System/Library/PrivateFrameworks/MBX2D.framework/MBX2D"
        warning: UUID mismatch detected with the loaded library - on disk is: /Developer/Platforms/iPhoneOS.platform/Developer/SDKs/iPhoneOS3.1.2.sdk/usr/lib/libxml2.2.dylib
        =uuid-mismatch-with-loaded-file,file="/Developer/Platforms/iPhoneOS.platform/Developer/SDKs/iPhoneOS3.1.2.sdk/usr/lib/libxml2.2.dylib"

    The application does not load on the iPod; a black screen is shown for a long time. How can I fix this? Can anyone help me? Thanks in advance.

    Read the article
