Search Results

Search found 77950 results on 3118 pages for 'large file upload'.

Page 536/3118 | < Previous Page | 532 533 534 535 536 537 538 539 540 541 542 543  | Next Page >

  • Best practice for inserting large chunks of HTML into elements with JavaScript?

    - by hamstar
    Hey guys. I'm building a web application (using prototype) at the moment that requires the addition of large chunks of HTML into the DOM. Most of these are rows that contain elements with all manner of attributes. Currently I keep a blank row of html in a variable and var blankRow = '<tr><td>' +'<a href="{LINK}" onclick="someFunc(\'{STRING}\');">{WORD}</a>' +'</td></tr>'; function insertRow(o) { newRow = blankRow .sub('{LINK}',o.link) .sub('{STRING}',o.string) .sub('{WORD}',o.word); $('tbodyElem').insert( newRow ); } Now that works all well and dandy, but is it the best practice? I have to update the code in the blankRow when I update code on the page, so the new elements being inserted are the same. It gets sucky when I have like 40 lines of HTML to go in a blankRow and then I have to escape it too. Is there an easier way? I was thinking of urlencoding and then decoding it before insertion but that would still mean a blankRow and lots of escaping. What would be mean would be a eof function a la PHP et al. $blankRow = <<<EOF text text EOF; That would mean no escaping but it would still need a blankRow. What do you do in this situation?

    Read the article

  • Which is faster for large "for" loop: function call or inline coding?

    - by zaplec
    Hi, I have programmed an embedded software (using C of course) and now I'm considering ways to improve the running time of the system. The most important single module in my system is one very large nested for loop module. That module consists of two nested for loops that loops max 122500 times. That's not very much yet, but the problem is that inside that nested for loop I have a function call to a function that is in another source file. That specific function consists mostly of two another nested for loops which loops always 22500 times. So now I have to make a function call 122500 times. I have made that function that is to be called a lot lighter and shorter (yet still works as it should) and now I started to think that would it be faster to rip off that function call and write that process directly inside those first two for loops? The processor in that system is ARM7TDMI and its frequency is 55MHz. The system itself isn't very time critical so it doesn't have to be real time capable. However the faster it can process its duties the better. Also would it be also faster to use while loops instead of fors? And any piece of advice about how to improve the running time is appreciated. -zaplec

    Read the article

  • What hash algorithms are parallelizable? Optimizing the hashing of large files utilizing on multi-co

    - by DanO
    I'm interested in optimizing the hashing of some large files (optimizing wall clock time). The I/O has been optimized well enough already and the I/O device (local SSD) is only tapped at about 25% of capacity, while one of the CPU cores is completely maxed-out. I have more cores available, and in the future will likely have even more cores. So far I've only been able to tap into more cores if I happen to need multiple hashes of the same file, say an MD5 AND a SHA256 at the same time. I can use the same I/O stream to feed two or more hash algorithms, and I get the faster algorithms done for free (as far as wall clock time). As I understand most hash algorithms, each new bit changes the entire result, and it is inherently challenging/impossible to do in parallel. Are any of the mainstream hash algorithms parallelizable? Are there any non-mainstream hashes that are parallelizable (and that have at least a sample implementation available)? As future CPUs will trend toward more cores and a leveling off in clock speed, is there any way to improve the performance of file hashing? (other than liquid nitrogen cooled overclocking?) or is it inherently non-parallelizable?
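    One way to put the extra cores to work, assuming you are free to change the digest format, is a chunked/tree-style construction: hash fixed-size blocks of the file in parallel, then hash the concatenation of the block digests. The following Java sketch only illustrates that idea; it is not any standard algorithm, the resulting digest is NOT the same as a plain SHA-256 of the whole file, and the block size, pool size and final combining step are all assumptions.

        import java.nio.file.*;
        import java.security.MessageDigest;
        import java.util.*;
        import java.util.concurrent.*;

        public class ChunkedHash {
            static final int BLOCK = 4 * 1024 * 1024; // 4 MB blocks (arbitrary choice)

            public static byte[] hash(Path file) throws Exception {
                byte[] data = Files.readAllBytes(file);          // sketch only; stream in real code
                ExecutorService pool = Executors.newFixedThreadPool(
                        Runtime.getRuntime().availableProcessors());
                List<Future<byte[]>> parts = new ArrayList<>();
                for (int off = 0; off < data.length; off += BLOCK) {
                    final int start = off, end = Math.min(off + BLOCK, data.length);
                    parts.add(pool.submit(() -> {
                        MessageDigest md = MessageDigest.getInstance("SHA-256");
                        md.update(data, start, end - start);     // hash one block on its own core
                        return md.digest();
                    }));
                }
                MessageDigest root = MessageDigest.getInstance("SHA-256");
                for (Future<byte[]> f : parts) root.update(f.get()); // combine the block digests
                pool.shutdown();
                return root.digest();                            // "tree" digest, not a plain SHA-256
            }
        }

    The trade-off is that anyone verifying the hash must use the same block size and combining rule, which is exactly why purpose-built tree-hash modes exist.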

    Read the article

  • Setting up SVN (Subversion) to manage our company's files, how to exclude large files from being ve

    - by Roeland
    Me and two other guys recently started our own web development company. We each work from our homes and have decided we want to keep one central location for all of our files. These files include word documents, spreadsheets, client files, designs.. etc. Anything pertaining to our company. I have a pretty solid internet connection and a windows 2008 server box sitting at home so I set up a subversion repository. Our file repository will look something like this. Clients Company A Design (photoshop files, wireframes, concepts) Documents ( logins, quotes, proposals etc) Site Backups Company B Design Documents Site Backups Prospects Company C Company D Our Company Our Website Documents (contract, operating procudres) My question is in regards to design files. The photoshop files that my designer works with range in sizes from 10mb to 100mb. I don't think we need to keep these files version-ed as this would eat up space incredibly fast. How do I go about controlling which files get version-ed, and which files are just stored. What I am thinking is that all documents need to be version-ed, and any files other then that should not be. Any help would be appreciated, thanks! Edit I am also curious whether this is the way to go. I just like this system since it keeps version of all my documents and at the same time. Also essentially I will have 3 backups in 3 different locations (3 local copies) so no need for backing it up. I am unsure of how svn would perform as purely a huge file repository.

    Read the article

  • Web page for iPhone - Large fonts instead of scrolling?

    - by chris_l
    I'm planning the layout of my web page, which should also be usable on the IPhone. I don't really have much experience with the IPhone yet - I just installed the IPhone Simulator on my Mac. The page's contents are flexible, so I think it would be better to use this flexibility to avoid that the user has to scroll around the entire page. Especially I have A header and a sidebar that will be used all the time to perform several actions. A main content area with a number of elements (e.g. images). The UI would stay usable pretty well, if the number of elements shown at one time is reduced for a small screen (e.g. by JavaScript). It would also be okay to make the main content area scrollable (as opposed to the entire page). The problem: If I simply display the page on the IPhone, it uses an extremely small font size, so that users must zoom in first, and then scroll around - so that they can't see the header and sidebar all the time. What's the best way to deal with this situation? Just leave it this way (very small fonts), because users expect that behaviour on the IPhone? Increase the font size (by specifying it in em or px or with xx-large, or what would be the best way?), if I detect - somehow - that it's being displayed on the IPhone. Or is there some way to restrict the viewport size to the screen size, and make it zoom in automatically? I think that would be the easiest solution in my case. Or ...?
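    A minimal sketch of the viewport approach mentioned in the last option - this is illustrative markup, not code taken from the question, and the exact content values are assumptions:

        <!-- Ask Mobile Safari to lay the page out at device width instead of its
             default desktop-width virtual viewport, so text renders at a readable size. -->
        <meta name="viewport" content="width=device-width, initial-scale=1.0">

    With a tag like this in the page head, the browser no longer shrinks the whole layout to fit a desktop-width canvas, so the header and sidebar can remain visible without the user having to zoom in first.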

    Read the article

  • What can cause a large discrepancy between minor GC time and total pause time?

    - by cxcg
    We have a latency-sensitive application, and are experiencing some GC-related pauses we don't fully understand. We occasionally have a minor GC that results in application pause times that are much longer than the reported GC time itself. Here is an example log snippet: 485377.257: [GC 485378.857: [ParNew: 105845K-621K(118016K), 0.0028070 secs] 136492K-31374K(1035520K), 0.0028720 secs] [Times: user=0.01 sys=0.00, real=1.61 secs] Total time for which application threads were stopped: 1.6032830 seconds The total pause time here is orders of magnitude longer than the reported GC time. These are isolated and occasional events: the immediately preceding and succeeding minor GC events do not show this large discrepancy. The process is running on a dedicated machine, with lots of free memory, 8 cores, running Red Hat Enterprise Linux ES Release 4 Update 8 with kernel 2.6.9-89.0.1EL-smp. We have observed this with (32 bit) JVM versions 1.6.0_13 and 1.6.0_18. We are running with these flags: -server -ea -Xms512m -Xmx512m -XX:+UseConcMarkSweepGC -XX:NewSize=128m -XX:MaxNewSize=128m -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintGCApplicationStoppedTime -XX:-TraceClassUnloading Can anybody offer some explanation as to what might be going on here, and/or some avenues for further investigation?

    Read the article

  • JavaScript (using jQuery) in a large project... organization, passing data, private methods, etc.

    - by gaoshan88
    I'm working on a large project that is organized like so: Multiple javascript files are included as needed and most code is wrapped in anonymous functions... // wutang.js //Included Files Go Here // Public stuff var MethodMan; // Private stuff (function() { var someVar1; MethodMan = function(){...}; var APrivateMethod = function(){...}; $(function(){ //jquery page load stuff here $('#meh').click(APrivateMethod); }); })(); I'm wondering about a few things here. Assuming that there are a number of these files included on a page, what has access to what and what is the best way to pass data between the files? For example, I assume that MethodMan() can be accessed by anything on any included page. It is public, yes? var someVar1 is only accessible to the methods inside that particular anonymous function and nowhere else, yes? Also true of var APrivateMethod(), yes? What if I want APrivateMethod() to make something available to some other method in a different anonymous wrapped method in a different included page. What's the best way to pass data between these private, anonymous functions on different included pages? Do I simply have to make whatever I want to use between them public? How about if I want to minimize global variables? What about the following: var PootyTang = function(){ someVar1 = $('#someid').text(); //some stuff }; and in another included file used by that same page I have: var TangyPoot = function(){ someVar1 = $('#someid').text(); //some completely different stuff }; What's the best way to share the value of someVar1 across these anonymous (they are wrapped as the first example) functions?

    Read the article

  • What hash algorithms are parallelizable? Optimizing the hashing of large files utilizing on multi-co

    - by DanO
    I'm interested in optimizing the hashing of some large files (optimizing wall clock time). The I/O has been optimized well enough already and the I/O device (local SSD) is only tapped at about 25% of capacity, while one of the CPU cores is completely maxed-out. I have more cores available, and in the future will likely have even more cores. So far I've only been able to tap into more cores if I happen to need multiple hashes of the same file, say an MD5 AND a SHA256 at the same time. I can use the same I/O stream to feed two or more hash algorithms, and I get the faster algorithms done for free (as far as wall clock time). As I understand most hash algorithms, each new bit changes the entire result, and it is inherently challenging/impossible to do in parallel. Are any of the mainstream hash algorithms parallelizable? Are there any non-mainstream hashes that are parallelizable (and that have at least a sample implementation available)? As future CPUs will trend toward more cores and a leveling off in clock speed, is there any way to improve the performance of file hashing? (other than liquid nitrogen cooled overclocking?) or is it inherently non-parallelizable?
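    As a side note, the "feed two or more hash algorithms from the same I/O stream" trick mentioned above can be sketched in Java as a single read loop that updates several MessageDigest instances, so the file is only read once. This is illustrative code, not taken from the question; the buffer size and file-name argument are assumptions.

        import java.io.*;
        import java.security.MessageDigest;

        public class MultiHash {
            public static void main(String[] args) throws Exception {
                MessageDigest md5 = MessageDigest.getInstance("MD5");
                MessageDigest sha256 = MessageDigest.getInstance("SHA-256");
                byte[] buf = new byte[1 << 20];                    // 1 MB read buffer
                try (InputStream in = new FileInputStream(args[0])) {
                    int n;
                    while ((n = in.read(buf)) > 0) {               // one pass over the file...
                        md5.update(buf, 0, n);                     // ...feeds both digests
                        sha256.update(buf, 0, n);
                    }
                }
                System.out.printf("MD5:    %032x%n", new java.math.BigInteger(1, md5.digest()));
                System.out.printf("SHA256: %064x%n", new java.math.BigInteger(1, sha256.digest()));
            }
        }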

    Read the article

  • SQL Database Schema Design For Large 3 Billion Relationship Database.

    - by K-Bell
    Get your geek on. Can you solve this? I am designing a products database for SQL Server 2008 R2 Ed. (not Enterprise Ed.) that will be used to store custom product configurations for over 30,000 distinct products. The database will have up to 500 users at a time. Here is the design problem… Each Product has a collection of Parts (up to 50 parts per product). So if I have 30,000 Products and each of them can have up to 50 Parts, that's 1.5 million distinct Product-to-Part relationships …or as an equation… 30,000 (Products) x 50 (Parts) = 1.5 million Product-to-Part records. …and if… each Part can have up to 2,000 finish options (a finish is a paint color). NOTE: only one finish will be selected by a user at run-time. The 2,000 finish options I need to store are the allowed options for a specific part on a specific product. So if I have 1.5 million distinct product-to-part relationships/records and each of those parts can have up to 2,000 finishes, that is 3 billion allowable product-to-part-to-finish relationships/records …or as an equation… 1.5 million (Parts) x 2,000 (Finishes) = 3 billion Product-to-Part-to-Finish records. How can I design this database so that I can execute fast and efficient queries for a specific product and return its list of Parts and all the allowable Finishes for each part, without 3 billion Product-to-Part-to-Finish records? Read time is more important than write time. Please post your thoughts/suggestions if you have experience with large databases. Thanks!

    Read the article

  • How to avoid geometric slowdown with large Linq transactions?

    - by Shaul
    I've written some really nice, funky libraries for use in LinqToSql. (Some day when I have time to think about it I might make it open source... :) ) Anyway, I'm not sure if this is related to my libraries or not, but I've discovered that when I have a large number of changed objects in one transaction, and then call DataContext.GetChangeSet(), things start getting reaalllly slooowwwww. When I break into the code, I find that my program is spinning its wheels doing an awful lot of Equals() comparisons between the objects in the change set. I can't guarantee this is true, but I suspect that if there are n objects in the change set, then the call to GetChangeSet() is causing every object to be compared to every other object for equivalence, i.e. at best (n^2-n)/2 calls to Equals()... Yes, of course I could commit each object separately, but that kinda defeats the purpose of transactions. And in the program I'm writing, I could have a batch job containing 100,000 separate items, that all need to be committed together. Around 5 billion comparisons there. So the question is: (1) is my assessment of the situation correct? Do you get this behavior in pure, textbook LinqToSql, or is this something my libraries are doing? And (2) is there a standard/reasonable workaround so that I can create my batch without making the program geometrically slower with every extra object in the change set?

    Read the article

  • Exemplars of large document-centric applications with COM/XPCOM/.NET interfaces.

    - by Warren P
    I am looking for exemplars (design examples) showing the use of interfaces (aka 'protocols' for you Smalltalkers) to design a document management architecture in a large word processor, spreadsheet, vector graphics or publishing package, or office-productivity (non-database) application, with support for as many of the following as possible:

    - Any open source project would be ideal, and the language of implementation is unimportant since I am looking for design examples; however, an object-oriented language with support for "interfaces" is a must. I know at least a dozen languages, and I'm willing to study any application's source.
    - "Interface" can loosely be applied to XPCOM or COM interfaces, or .NET interfaces, or even the use of pure-virtual (virtual + abstract) base classes for OOP languages that lack the ability to declare an interface distinct from a class.
    - I am mostly looking for a robust, thorough and flexible implementation for a document (IDocument), various document views (IDocumentView), and whatever operations make sense in that case.
    - I am particularly interested in cases where the product in question is a real-world product - for example, if anybody familiar with OpenOffice can tell me whether the code contains a good sample design.

    I am looking for design documentation that outlines the design of the interfaces for such an application. So, for example, if the OpenOffice spreadsheet has such an interface design, then that might be the best case, because it is a widely used real-world design with millions of users, rather than a textbook example, which is minimal and contrived. I know that the Mozilla platform uses XPCOM, and its design is heavily "interface" oriented, but I am looking more for a "word processor" or "spreadsheet" type of document design rather than a web browser. I am particularly interested in the interfaces used to access data and meta-data such as markup (attributes like bold, italics and font size), and the ability to search and look up named entities within a document.

    Read the article

  • Large Product catalog with statistics - alternatives to Sql Server?

    - by Eric P
    I am building a UI for a large product catalog (millions of products). I am using SQL Server, FreeText search and ASP.NET MVC. Tables are normalized and indexed. Most queries take less than a second to return. The issue is this: let's say the user searches by keyword. On the search results page I need to display/query for:

    - The first 20 matching products (paged, sorted)
    - The total count of matching products, for paging
    - The list of stores of only the matching products
    - The list of brands of only the matching products
    - The list of colors of only the matching products

    Each query takes about 0.5 to 1 second. Altogether that is around 5 seconds. I would like to get the whole page to load in under 1 second. There are several approaches:

    - Optimize the queries even more. I have already spent a lot of time on this one, so I'm not sure it can be pushed further.
    - Load the products first, then load the rest of the information using AJAX. More of a workaround; it would need a UI revision.
    - Re-organize the data to be more report-friendly. I have already aggregated a lot of fields.

    I checked out several similar sites, for example zappos.com. Not only do they display the same information as I would like in under 1 second, they also include statistics (the number of results in each category). The following is the search for the keyword "white": http://www.zappos.com/white How do sites like Zappos and Amazon make their results, filters and stats appear almost instantly?

    Read the article

  • How can you exclude a large number of records in a cross db query using LINQ2SQL?

    - by tap
    So here is my situation: I have a vendor-supplied DB we cannot modify and a custom DB that imports data from the vendor app and acts on it. Once records are imported from the vendor app, they cannot appear on the list of records to be imported. Also, we only want to display the 250 most recent records that have not been imported. What I originally started with was to select the list of ids that have been imported from the custom DB, and then query the vendor DB using that list of ids in a .Where(x => !idList.Contains(x.Id)) clause on the remote query. This worked up until we passed 2100 records imported into the custom DB, as 2100 is the limit on the number of parameters that can be passed into SQL. After finding out this was the actual problem, and not the 'invalid buffer'/'severe error' ADO.NET reported, my solution was to remove the first 2000 ids in the remote query, and then remove the remaining records in the local query. Having to pull back a large number of irrelevant records just to exclude them, so I can get the correct 250 records, seems very inelegant. Is there a better way to do this, short of a cross-DB stored procedure? Thanks in advance.

    Read the article

  • How to listen on a port permanently and get a file when one arrives on the stream?

    - by Phsika
    i writed 2 client and server program. client dend file also server listen port and than get file.But i need My server App must listen on 51124 port permanently. if any file on my stream, show my message "there is a file on your stream" and than show me savefile dialog. But my server app in "Infinite loop". 1) listen 51124 port every time 2) do i have a file on my stream, show me a messagebox. private void Form1_Load(object sender, EventArgs e) { TcpListener Dinle = new TcpListener(51124); try { Dinle.Start(); Socket Baglanti = Dinle.AcceptSocket(); if (!Baglanti.Connected) { MessageBox.Show("No Connection!"); } else { while (true) { byte[] Dizi = new byte[250000]; Baglanti.Receive(Dizi, Dizi.Length, 0); string Yol; saveFileDialog1.Title = "Save File"; saveFileDialog1.ShowDialog(); Yol = saveFileDialog1.FileName; FileStream Dosya = new FileStream(Yol, FileMode.Create); Dosya.Write(Dizi, 0, Dizi.Length - 20); Dosya.Close(); listBox1.Items.Add("dosya indirildi"); listBox1.Items.Add("Dosya Boyutu=" + Dizi.Length.ToString()); listBox1.Items.Add("Indirilme Tarihi=" + DateTime.Now); listBox1.Items.Add("--------------------------------"); } } } catch (Exception) { throw; } } My Algorithm: if(AnyFileonStream()==true) { GetFile() //Also continue to listening 51124 port... } How can i do that?

    Read the article

  • Good C++ array class for dealing with large arrays of data in a fast and memory efficient way?

    - by Shane MacLaughlin
    Following on from a previous question relating to heap usage restrictions, I'm looking for a good standard C++ class for dealing with big arrays of data in a way that is both memory efficient and speed efficient. I had been allocating the array using a single malloc/HeapAlloc, but after multiple tries using various calls I keep falling foul of heap fragmentation. So the conclusion I've come to, other than porting to 64 bit, is to use a mechanism that allows me to have a large array spanning multiple smaller memory fragments. I don't want an alloc per element, as that is very memory inefficient, so the plan is to write a class that overrides the [] operator and selects an appropriate element based on the index. Is there already a decent class out there to do this, or am I better off rolling my own? From my understanding, and some googling, a 32-bit Windows process should theoretically be able to address up to 2 GB. Now assuming I've 2 GB installed, and various other processes and services are hogging about 400 MB, how much usable memory do you think my program can reasonably expect to get from the heap? I'm currently using various flavours of Visual C++.
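    For illustration only, the "large array spanning multiple smaller memory fragments" idea comes down to a small amount of index arithmetic. Here is a hedged Java sketch of the same data structure (the question is about C++, where the class would overload operator[]; the block size is an arbitrary assumption):

        // A "chunked" array: logically one big array, physically many small blocks,
        // so no single huge contiguous allocation is needed.
        public class ChunkedDoubleArray {
            private static final int SHIFT = 16;              // 65,536 elements per block
            private static final int MASK = (1 << SHIFT) - 1;
            private final double[][] blocks;

            public ChunkedDoubleArray(long size) {
                int nBlocks = (int) ((size + MASK) >> SHIFT); // ceil(size / blockSize)
                blocks = new double[nBlocks][1 << SHIFT];
            }

            public double get(long i)  { return blocks[(int) (i >> SHIFT)][(int) (i & MASK)]; }

            public void set(long i, double v) { blocks[(int) (i >> SHIFT)][(int) (i & MASK)] = v; }
        }

    The shift/mask split means each access costs one extra indirection instead of a contiguous pointer offset, which is usually a fair trade against fragmentation-driven allocation failures.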

    Read the article

  • How to process a large POST array in PHP where item names are all different and not known in advance

    - by Salnajjar
    I have a PHP page that queries a DB to populate a form for the user to modify the data and submit. The query returns a number of rows which contain 3 items: ImageID ImageName ImageDescription The PHP page titles each box in the form with a generic name and appends the ImageID to it. Ie: ImageID_03 ImageName_34 ImageDescription_22 As it's unknown which images are going to have been retrieved from the DB then I can't know in advance what the name of the form entries will be. The form deals with a large number of entries at the same time. My backend PHP form processor that gets the data just sees it as one big array: [imageid_2] => 2 [imagename_2] => _MG_0214 [imageid_10] => 10 [imagename_10] => _MG_0419 [imageid_39] => 39 [imagename_39] => _MG_0420 [imageid_22] => 22 [imagename_22] => Curly Fern [imagedescription_2] => Wibble [imagedescription_10] => Wobble [imagedescription_39] => Fred [imagedescription_22] => Sally I've tried to do an array walk on it to split it into 3 arrays which set places but am stuck: // define empty arrays $imageidarray = array(); $imagenamearray = array(); $imagedescriptionarray = array(); // our function to call when we walk through the posted items array function assignvars($entry, $key) { if (preg_match("/imageid/i", $key)) { array_push($imageidarray, $entry); } elseif (preg_match("/imagename/i", $key)) { // echo " ImageName: $entry"; } elseif (preg_match("/imagedescription/i", $key)) { // echo " ImageDescription: $entry"; } } array_walk($_POST, 'assignvars'); This fails with the error: array_push(): First argument should be an array in... Am I approaching this wrong?

    Read the article

  • Talend Enterprise Data Integration overperforms on Oracle SPARC T4

    - by Amir Javanshir
    The SPARC T microprocessor, released in 2005 by Sun Microsystems, and now continued at Oracle, has a good track record in parallel execution and multi-threaded performance. However it was less suited for pure single-threaded workloads. The new SPARC T4 processor is now filling that gap by offering a 5x better single-thread performance over previous generations. Following our long-term relationship with Talend, a fast growing ISV positioned by Gartner in the “Visionaries” quadrant of the “Magic Quadrant for Data Integration Tools”, we decided to test some of their integration components with the T4 chip, more precisely on a T4-1 system, in order to verify first hand if this new processor stands up to its promises. Several tests were performed, mainly focused on: Single-thread performance of the new SPARC T4 processor compared to an older SPARC T2+ processor Overall throughput of the SPARC T4-1 server using multiple threads The tests consisted in reading large amounts of data --ten's of gigabytes--, processing and writing them back to a file or an Oracle 11gR2 database table. They are CPU, memory and IO bound tests. Given the main focus of this project --CPU performance--, bottlenecks were removed as much as possible on the memory and IO sub-systems. When possible, the data to process was put into the ZFS filesystem cache, for instance. Also, two external storage devices were directly attached to the servers under test, each one divided in two ZFS pools for read and write operations. Multi-thread: Testing throughput on the Oracle T4-1 The tests were performed with different number of simultaneous threads (1, 2, 4, 8, 12, 16, 32, 48 and 64) and using different storage devices: Flash, Fibre Channel storage, two stripped internal disks and one single internal disk. All storage devices used ZFS as filesystem and volume management. Each thread read a dedicated 1GB-large file containing 12.5M lines with the following structure: customerID;FirstName;LastName;StreetAddress;City;State;Zip;Cust_Status;Since_DT;Status_DT 1;Ronald;Reagan;South Highway;Santa Fe;Montana;98756;A;04-06-2006;09-08-2008 2;Theodore;Roosevelt;Timberlane Drive;Columbus;Louisiana;75677;A;10-05-2009;27-05-2008 3;Andrew;Madison;S Rustle St;Santa Fe;Arkansas;75677;A;29-04-2005;09-02-2008 4;Dwight;Adams;South Roosevelt Drive;Baton Rouge;Vermont;75677;A;15-02-2004;26-01-2007 […] The following graphs present the results of our tests: Unsurprisingly up to 16 threads, all files fit in the ZFS cache a.k.a L2ARC : once the cache is hot there is no performance difference depending on the underlying storage. From 16 threads upwards however, it is clear that IO becomes a bottleneck, having a good IO subsystem is thus key. Single-disk performance collapses whereas the Sun F5100 and ST6180 arrays allow the T4-1 to scale quite seamlessly. From 32 to 64 threads, the performance is almost constant with just a slow decline. For the database load tests, only the best IO configuration --using external storage devices-- were used, hosting the Oracle table spaces and redo log files. Using the Sun Storage F5100 array allows the T4-1 server to scale up to 48 parallel JVM processes before saturating the CPU. The final result is a staggering 646K lines per second insertion in an Oracle table using 48 parallel threads. Single-thread: Testing the single thread performance Seven different tests were performed on both servers. 
    Given the fact that only one thread, and thus one file, was read, no IO bottleneck was involved: all data was served from the ZFS cache.

    - Read File → Filter → Write File: Read file, filter data, write the filtered data in a new file. The filter is set on the "Status" column: only lines with status set to "A" are selected. This limits each output file to about 500 MB.
    - Read File → Load Database Table: Read file, insert into a single Oracle table.
    - Average: Read file, compute the average of a numeric column, write the result in a new file.
    - Division & Square Root: Read file, perform a division and square root on a numeric column, write the result data in a new file.
    - Oracle DB Dump: Dump the content of an Oracle table (12.5M rows) into a CSV file.
    - Transform: Read file, transform, write the result data in a new file. The transformations applied are: set the address column to upper case and add an extra column at the end, which is the concatenation of two columns.
    - Sort: Read file, sort a numeric and an alphanumeric column, write the result data in a new file.

    The following table and graph present the final results of the tests. The throughput unit is thousand lines per second processed (K lines/second). Improvement is the % of improvement between the T5140 and the T4-1.

        Test                     T4-1 (Time s.)   T5140 (Time s.)   Improvement   T4-1 (Throughput)   T5140 (Throughput)
        Read/Filter/Write        125              806               645%          100                 16
        Read/Load Database       195              1111              570%          64                  11
        Average                  96               557               580%          130                 22
        Division & Square Root   161              1054              655%          78                  12
        Oracle DB Dump           164              945               576%          76                  13
        Transform                159              1124              707%          79                  11
        Sort                     251              1336              532%          50                  9

    The improvement in single-thread performance is quite dramatic: depending on the test, the T4 is between 5.4 and 7 times faster than the T2+. It seems clear that the SPARC T4 processor has gone a long way toward filling the gap in single-thread performance, without sacrificing multi-threaded capability, as it still shows very impressive scaling on heavy-duty multi-threaded jobs. Finally, as always at Oracle ISV Engineering, we are happy to help our ISV partners test their own applications on our platforms, so don't hesitate to contact us and let's see what SPARC T4-based systems can do for your application! "As described in this benchmark, Talend Enterprise Data Integration has overperformed on T4. I was generally happy to see that the T4 gave scaling opportunities for many scenarios like complex aggregations. Row by row insertion in Oracle DB is faster with more than 650,000 rows per second without using any bulk Oracle capabilities!" Cedric Carbone, Talend CTO.
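    For readers who want a feel for what the "Read File → Filter → Write File" test does, here is a hedged, single-threaded Java sketch of the same kind of job on the record layout shown earlier. It is not Talend's implementation, only an illustration; the file names are assumptions.

        import java.io.*;

        public class ReadFilterWrite {
            public static void main(String[] args) throws IOException {
                try (BufferedReader in = new BufferedReader(new FileReader("customers.csv"));
                     BufferedWriter out = new BufferedWriter(new FileWriter("filtered.csv"))) {
                    String line;
                    while ((line = in.readLine()) != null) {
                        // Field 8 (index 7) is Cust_Status in the layout shown above;
                        // keep only "A" rows, which is the filter described in the benchmark.
                        String[] f = line.split(";");
                        if (f.length > 7 && "A".equals(f[7])) {
                            out.write(line);
                            out.newLine();
                        }
                    }
                }
            }
        }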

    Read the article

  • ORA-7445 Troubleshooting

    - by [email protected]
    QUICKLINK: Note 153788.1 ORA-600/ORA-7445 Lookup tool
    Note 1082674.1: A Video To Demonstrate The Usage Of The ORA-600/ORA-7445 Lookup Tool [Video]

    Have you observed an ORA-07445 error reported in your alert log? While the ORA-600 error is "captured" as a handled exception in the Oracle source code, the ORA-7445 is an unhandled exception error due to an OS exception, which should result in the creation of a core file. An ORA-7445 is a generic error and can occur from anywhere in the Oracle code. The precise location of the error is identified by the core file and/or trace file it produces. Looking for the best way to diagnose? Whenever an ORA-7445 error is raised, a core file is generated. There may be a trace file generated with the error as well. Prior to 11g, the core files are located in the CORE_DUMP_DEST directory. Starting with 11g, there is a new advanced fault diagnosability infrastructure to manage trace data. Diagnostic files are written into a root directory for all diagnostic data called the ADR home. Core files at 11g will go to the ADR_HOME/cdump directory. For more information on the Oracle 11g Diagnosability feature see:

        Note 453125.1 11g Diagnosability Frequently Asked Questions
        Note 443529.1 11g Quick Steps to Package and Send Critical Error Diagnostic Information to Support [Video]

    NOTE: While the core file is captured in the Diagnosability infrastructure, the file may not be included with a diagnostic package.

    1. Check the Alert Log
    The alert log may indicate additional errors or other internal errors at the time of the problem. In some cases, the ORA-7445 error will occur along with ORA-600, ORA-3113, or ORA-4030 errors. The ORA-7445 error can be a side effect of the other problems, and you should review the first error and associated core file or trace file and work down the list of errors.

        Note 1020463.6 DIAGNOSING ORA-3113 ERRORS
        Note 1812.1 TECH: Getting a Stack Trace from a CORE file
        Note 414966.1 RDA Documentation Index

    If the ORA-7445 errors are not associated with other error conditions, ensure the trace data is not truncated. If you see the message "MAX DUMP FILE SIZE EXCEEDED" at the end of the file, the MAX_DUMP_FILE_SIZE parameter is not set high enough or to 'unlimited'. There could be vital diagnostic information missing in the file, and discovering the root issue may be very difficult. Set MAX_DUMP_FILE_SIZE appropriately and regenerate the error for complete trace information. For pointers on deeper analysis of these errors see:

        Note 390293.1 Introduction to 600/7445 Internal Error Analysis
        Note 211909.1 Customer Introduction to ORA-7445 Errors

    2. Search the 600/7445 Lookup Tool
    Visit My Oracle Support to access the ORA-00600 Lookup tool (Note 153788.1). The ORA-600/ORA-7445 Lookup tool may lead you to applicable content in My Oracle Support on the problem, and can be used to investigate the problem with argument data from the error message, or you can pull out key stack pointers from the associated trace file to match up against known bugs.

    3. "Fine tune" searches in the Knowledge Base
    As the ORA-7445 error indicates an unhandled exception in the Oracle source code, your search in the Oracle Knowledge Base will need to focus on the stack data from the core file or the trace file. Keep in mind that searches on generic argument data will bring back a large result set. The more you can learn about the environment and the code leading to the errors, the easier it will be to narrow the hit list to match your problem.

        Note 153788.1 ORA-600/ORA-7445 Troubleshooter
        Note 1082674.1 A Video To Demonstrate The Usage Of The ORA-600/ORA-7445 Lookup Tool [Video]

    NOTE: If no trace file is captured, see Note 1812.1 TECH: Getting a Stack Trace from a CORE file. Core files are managed through 11g Diagnosability, but are not packaged with other diagnostic data automatically. The core files can be quite large, but may be useful during analysis within Oracle Support.

    4. If assistance is required from Oracle
    Should it become necessary to get assistance from Oracle Support on an ORA-7445 problem, please provide, at a minimum:

        The alert log
        Associated trace file(s) or incident package at 11g
        Patch level information
        Core file(s)
        Information about changes in configuration and/or application prior to the issues
        If the error is reproducible, a self-contained reproducible testcase: Note 232963.1 How to Build a Testcase for Oracle Data Server Support to Reproduce ORA-600 and ORA-7445 Errors
        RDA report or Oracle Configuration Manager information
        Oracle Configuration Manager Quick Start Guide
        Note 548815.1 My Oracle Support Configuration Management FAQ
        Note 414966.1 RDA Documentation Index

    ***For reference to the content in this blog, refer to Note 1092832.1 Master Note for Diagnosing ORA-600

    Read the article

  • How do I upload an NSImage (NSData) to Twitpic with OAMutableURLRequest?

    - by timothy5216
    I'm using OAConsumer in my xAuth twitterEngine and i'm adding Twitpic OAuth Echo to it. But it won't POST the NSData. here is some of my code: //other file NSArray *reps = [[imageToUpload image] representations]; NSData *imageData = [NSBitmapImageRep representationOfImageRepsInArray:reps usingType:NSJPEGFileType properties:nil]; [twitter testUploadImageData:imageData withMessage:@"Hello WORLD!!" toURL:[NSURL URLWithString:uploadURL.stringValue]]; // - (void)testUploadImageData:(NSData *)data withMessage:(NSString *)message toURL:(NSURL *)url; { //url = @"http://api.twitpic.com/2/upload.xml" //message = @"Hello WORLD!!" NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init]; NSString *String = [[NSString alloc] initWithData:data encoding:NSASCIIStringEncoding]; NSLog(@"dataString: %@",String); OAMutableURLRequest *request = [[OAMutableURLRequest alloc] initWithURL:url consumer:self.consumer token:_accessToken realm:nil signatureProvider:nil]; // Setup POST body [request setHTTPMethod:@"POST"]; //NSString *stringBoundary = [NSString stringWithString:@"0xKhTmLbOuNdArY"]; //NSString *contentType = [NSString stringWithFormat:@"multipart/form-data; boundary=%@", stringBoundary]; // NSString *stringBoundarySeparator = [NSString stringWithFormat:@"\r\n--%@\r\n", stringBoundary]; /* NSMutableString *postString = [NSMutableString string]; [postString appendString:@"\r\n"]; [postString appendString:stringBoundarySeparator]; [postString appendString:[NSString stringWithFormat:@"Content-Disposition: form-data; name=\"message\"\r\n\r\n%@", message]]; [postString appendString:stringBoundarySeparator]; [postString appendString:[NSString stringWithFormat:@"Content-Disposition: form-data; name=\"media\"; filename=\"%@\"\r\n", @"file.jpg"]]; [postString appendString:@"Content-Type: image/jpg\r\n"]; [postString appendString:@"Content-Transfer-Encoding: binary\r\n\r\n"]; // Setting up the POST request's multipart/form-data body NSMutableData *postBody = [NSMutableData data]; [postBody appendData:[postString dataUsingEncoding:NSUTF8StringEncoding]]; [postBody appendData:data]; [request setHTTPBody:postBody]; */ [request setHTTPMethod:@"POST"]; NSString *thing = [[NSString alloc] initWithData:data encoding:NSASCIIStringEncoding]; NSLog(@"%@",thing); [request setParameters:[NSArray arrayWithObjects: [OARequestParameter requestParameterWithName:@"oauth_token" value:_accessToken.key], [OARequestParameter requestParameterWithName:@"X-Auth-Service-Provider" value:@"https://api.twitter.com/1/account/verify_credentials.json"], [OARequestParameter requestParameterWithName:@"key" value:@"my-key-here :P"], [OARequestParameter requestParameterWithName:@"message" value:message], //iv'e changed this many times. I was just trying this to see if it works [OARequestParameter requestParameterWithName:@"media" value:thing], nil]]; OAAsynchronousDataFetcher *dataFetcher = [[OAAsynchronousDataFetcher alloc] init]; [dataFetcher initWithRequest:request delegate:self didFinishSelector:@selector(uploadDidUpload:withData:) didFailSelector:@selector(uploadDidFail:withData:)]; [dataFetcher start]; [dataFetcher release]; [request release]; [pool drain]; } I'm authenticated but it still won't POST the data :(

    Read the article

  • Java: how to get a string representation of a compressed byte array?

    - by Guillaume
    I want to put some compressed data into a remote repository. To put data on this repository I can only use a method that take the name of the resource and its content as a String. (like data.txt + "hello world"). The repository is moking a filesystem but is not, so I can not use File directly. I want to be able to do the following: client send to server a file 'data.txt' server compress 'data.txt' into a compressed file 'data.zip' server send a string representation of data.zip to the repository repository store data.zip client download from repository data.zip and his able to open it with its favorite zip tool The problem arise at step 3 when I try to get a string representation of my compressed file. Here is a sample class, using the zip*stream and that emulate the repository showcasing my problem. The created zip file is working, but after its 'serialization' it's get corrupted. (the sample class use jakarta commons.io ) Many thanks for your help. package zip; import java.io.File; import java.io.FileInputStream; import java.io.FileOutputStream; import java.io.IOException; import java.io.InputStream; import java.util.zip.ZipEntry; import java.util.zip.ZipInputStream; import java.util.zip.ZipOutputStream; import org.apache.commons.io.FileUtils; /** * Date: May 19, 2010 - 6:13:07 PM * * @author Guillaume AME. */ public class ZipMe { public static void addOrUpdate(File zipFile, File ... files) throws IOException { File tempFile = File.createTempFile(zipFile.getName(), null); // delete it, otherwise you cannot rename your existing zip to it. tempFile.delete(); boolean renameOk = zipFile.renameTo(tempFile); if (!renameOk) { throw new RuntimeException("could not rename the file " + zipFile.getAbsolutePath() + " to " + tempFile.getAbsolutePath()); } byte[] buf = new byte[1024]; ZipInputStream zin = new ZipInputStream(new FileInputStream(tempFile)); ZipOutputStream out = new ZipOutputStream(new FileOutputStream(zipFile)); ZipEntry entry = zin.getNextEntry(); while (entry != null) { String name = entry.getName(); boolean notInFiles = true; for (File f : files) { if (f.getName().equals(name)) { notInFiles = false; break; } } if (notInFiles) { // Add ZIP entry to output stream. out.putNextEntry(new ZipEntry(name)); // Transfer bytes from the ZIP file to the output file int len; while ((len = zin.read(buf)) > 0) { out.write(buf, 0, len); } } entry = zin.getNextEntry(); } // Close the streams zin.close(); // Compress the files if (files != null) { for (File file : files) { InputStream in = new FileInputStream(file); // Add ZIP entry to output stream. 
out.putNextEntry(new ZipEntry(file.getName())); // Transfer bytes from the file to the ZIP file int len; while ((len = in.read(buf)) > 0) { out.write(buf, 0, len); } // Complete the entry out.closeEntry(); in.close(); } // Complete the ZIP file } tempFile.delete(); out.close(); } public static void main(String[] args) throws IOException { final String zipArchivePath = "c:/temp/archive.zip"; final String tempFilePath = "c:/temp/data.txt"; final String resultZipFile = "c:/temp/resultingArchive.zip"; File zipArchive = new File(zipArchivePath); FileUtils.touch(zipArchive); File tempFile = new File(tempFilePath); FileUtils.writeStringToFile(tempFile, "hello world"); addOrUpdate(zipArchive, tempFile); //archive.zip exists and contains a compressed data.txt that can be read using winrar //now simulate writing of the zip into a in memory cache String archiveText = FileUtils.readFileToString(zipArchive); FileUtils.writeStringToFile(new File(resultZipFile), archiveText); //resultingArchive.zip exists, contains a compressed data.txt, but it can not //be read using winrar: CRC failed in data.txt. The file is corrupt } }
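    The symptom at the end (a corrupted data.txt after the String round-trip) is typical of pushing raw binary data through a character encoding. One common workaround, shown here as a hedged sketch rather than a fix verified against this exact code, is to Base64-encode the zip bytes before handing them to the string-only repository and decode them on the way back. java.util.Base64 requires Java 8; commons-codec offers an equivalent Base64 class for older JVMs.

        import java.nio.file.*;
        import java.util.Base64;

        public class ZipAsString {
            // Encode the raw zip bytes into a pure-ASCII string the repository can store safely.
            static String zipToString(Path zipFile) throws Exception {
                return Base64.getEncoder().encodeToString(Files.readAllBytes(zipFile));
            }

            // Decode the stored string back into the original, byte-identical zip file.
            static void stringToZip(String stored, Path target) throws Exception {
                Files.write(target, Base64.getDecoder().decode(stored));
            }
        }

    The cost is roughly a 33% size increase for the stored string, but the decoded bytes are identical to the original archive, so the downloaded data.zip opens normally.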

    Read the article

  • How to open a file URI in C#?

    - by roshan
    Here's the code snippet:

        String str = ???????  // I want to assign c:/my/test.html to this string
        Uri uri = new Uri(str);
        Stream src = Application.GetContentStream(uri).Stream;

    What's the correct way to do this? I'm getting a "URI not relative" exception.

    Read the article

  • Play audio file data - Spring MVC

    - by Vijay Veeraraghavan
    In my web application, I have various audio clips uploaded by the users, stored in a BLOB column in the database. The audio files are low-bit-rate WAV files. The clips are secured: a user can see only the clips he has uploaded. Instead of the user downloading the clip and playing it in his own player, I need it to be streamed in the web page itself. In the JSP I use the <audio> tag with the source mapping to the controller mapping URL:

        <td>
            <audio controls><source src="recfile/${au.id}" type="audio/mpeg" /></audio>
        </td>

    where recfile is the request mapping and au.id is the audio id. In the controller I process the request like below:

        @RequestMapping(value = "/recfile/{id}", method = RequestMethod.GET,
                        produces = { MediaType.APPLICATION_OCTET_STREAM_VALUE })
        public HttpEntity<byte[]> downloadRecipientFile(@PathVariable("id") int id,
                ModelMap model, HttpServletResponse response) throws IOException, ServletException {
            LOGGER.debug("[GroupListController downloadRecipientFile]");
            VoiceAudioLibrary dGroup = audioClipService.findAudioClip(id);
            if (dGroup == null || dGroup.getAudioData() == null || dGroup.getAudioData().length <= 0) {
                throw new ServletException("No clip found/clip has no data, id=" + id);
            }
            HttpHeaders header = new HttpHeaders();
            // I tried this too: header.setContentType(new MediaType("audio", "mp3"));
            header.setContentType(new MediaType("audio", "vnd.wave"));
            header.setContentLength(dGroup.getAudioData().length);
            return new HttpEntity<byte[]>(dGroup.getAudioData(), header);
        }

    When the JSP loads, the controller gets the request and serves back the audio data fetched from the database, and the JSP shows the player with its controls. But when I play it, nothing happens. Why is that? Am I missing anything in the configuration? Am I doing it right?
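    One detail worth double-checking, offered as a hedged sketch rather than a confirmed diagnosis: the JSP declares type="audio/mpeg" while the data is WAV, and the mapping advertises application/octet-stream. A small helper that wraps the bytes with a WAV content type, kept in sync with the <source type="audio/wav"> attribute in the JSP, would look roughly like this (the class and method names are made up for illustration; the byte array is the same BLOB data as in the code above):

        import org.springframework.http.HttpEntity;
        import org.springframework.http.HttpHeaders;
        import org.springframework.http.MediaType;

        public class WavResponses {
            // Wrap raw WAV bytes (e.g. dGroup.getAudioData()) in an HttpEntity whose
            // Content-Type matches what the JSP <source> tag declares.
            public static HttpEntity<byte[]> wav(byte[] audioData) {
                HttpHeaders headers = new HttpHeaders();
                headers.setContentType(MediaType.parseMediaType("audio/wav"));
                headers.setContentLength(audioData.length);
                return new HttpEntity<byte[]>(audioData, headers);
            }
        }

    Browsers are picky about mismatched audio types, so aligning the declared type on both sides is a cheap first experiment before looking at range-request support or codec issues.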

    Read the article

  • Open File Dialog MVVM

    - by Jose
    OK, I really would like to know how expert MVVM developers handle an open-file dialog in WPF. I don't really want to do this in my ViewModel (where 'Browse' is referenced via a DelegateCommand):

        void Browse(object param)
        {
            // Add code here
            OpenFileDialog d = new OpenFileDialog();
            if (d.ShowDialog() == true)
            {
                // Do stuff
            }
        }

    because I believe that goes against MVVM methodology. What do I do?

    Read the article

  • Recursion in Ecore-File?!

    - by Dominik
    Hey guys, just tried to convert towards a Ecore-Model from a given UML-Model. After this I am trying to create a Generator Model. Everytime I try to do this I get the Error Message, that there is a "Unhandled event loop exception" with this log: org.eclipse.swt.SWTException: Failed to execute runnable (java.lang.NullPointerException) at org.eclipse.swt.SWT.error(SWT.java:3884) at org.eclipse.swt.SWT.error(SWT.java:3799) at org.eclipse.swt.widgets.Synchronizer.runAsyncMessages(Synchronizer.java:137) at org.eclipse.swt.widgets.Display.runAsyncMessages(Display.java:3885) at org.eclipse.swt.widgets.Display.readAndDispatch(Display.java:3506) at org.eclipse.jface.window.Window.runEventLoop(Window.java:825) at org.eclipse.jface.window.Window.open(Window.java:801) at org.eclipse.gmf.internal.bridge.ui.dashboard.DashboardMediator$RunWizardAction.run(DashboardMediator.java:316) at org.eclipse.gmf.internal.bridge.ui.dashboard.HyperlinkFigure$1.mousePressed(HyperlinkFigure.java:63) at org.eclipse.draw2d.Figure.handleMousePressed(Figure.java:873) at org.eclipse.draw2d.SWTEventDispatcher.dispatchMousePressed(SWTEventDispatcher.java:214) at org.eclipse.draw2d.LightweightSystem$EventHandler.mouseDown(LightweightSystem.java:513) at org.eclipse.swt.widgets.TypedListener.handleEvent(TypedListener.java:179) at org.eclipse.swt.widgets.EventTable.sendEvent(EventTable.java:84) at org.eclipse.swt.widgets.Widget.sendEvent(Widget.java:1003) at org.eclipse.swt.widgets.Display.runDeferredEvents(Display.java:3910) at org.eclipse.swt.widgets.Display.readAndDispatch(Display.java:3503) at org.eclipse.ui.internal.Workbench.runEventLoop(Workbench.java:2405) at org.eclipse.ui.internal.Workbench.runUI(Workbench.java:2369) at org.eclipse.ui.internal.Workbench.access$4(Workbench.java:2221) at org.eclipse.ui.internal.Workbench$5.run(Workbench.java:500) at org.eclipse.core.databinding.observable.Realm.runWithDefault(Realm.java:332) at org.eclipse.ui.internal.Workbench.createAndRunWorkbench(Workbench.java:493) at org.eclipse.ui.PlatformUI.createAndRunWorkbench(PlatformUI.java:149) at org.eclipse.ui.internal.ide.application.IDEApplication.start(IDEApplication.java:113) at org.eclipse.equinox.internal.app.EclipseAppHandle.run(EclipseAppHandle.java:194) at org.eclipse.core.runtime.internal.adaptor.EclipseAppLauncher.runApplication(EclipseAppLauncher.java:110) at org.eclipse.core.runtime.internal.adaptor.EclipseAppLauncher.start(EclipseAppLauncher.java:79) at org.eclipse.core.runtime.adaptor.EclipseStarter.run(EclipseStarter.java:368) at org.eclipse.core.runtime.adaptor.EclipseStarter.run(EclipseStarter.java:179) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source) at java.lang.reflect.Method.invoke(Unknown Source) at org.eclipse.equinox.launcher.Main.invokeFramework(Main.java:559) at org.eclipse.equinox.launcher.Main.basicRun(Main.java:514) at org.eclipse.equinox.launcher.Main.run(Main.java:1311) Caused by: java.lang.NullPointerException at org.eclipse.emf.converter.util.ConverterUtil.computeRequiredPackages(ConverterUtil.java:374) at org.eclipse.emf.converter.ui.contribution.base.ModelConverterPackagePage.validate(ModelConverterPackagePage.java:965) at org.eclipse.emf.importer.ui.contribution.base.ModelImporterPackagePage.validate(ModelImporterPackagePage.java:101) at org.eclipse.emf.converter.ui.contribution.base.ModelConverterPackagePage$1.run(ModelConverterPackagePage.java:155) at 
org.eclipse.swt.widgets.RunnableLock.run(RunnableLock.java:35) at org.eclipse.swt.widgets.Synchronizer.runAsyncMessages(Synchronizer.java:134) ... 34 more After this there occurs another exception with this text: "Unable to create editor ID org.eclipse.emf.codegen.ecore.genmodel.presentation.GenModelEditorID:An unexpected exception was thrown." The session data says: eclipse.buildId=unknown java.version=1.6.0_13 java.vendor=Sun Microsystems Inc. BootLoader constants: OS=win32, ARCH=x86, WS=win32, NL=de_DE Framework arguments: -product org.eclipse.epp.package.modeling.product Command-line arguments: -os win32 -ws win32 -arch x86 -product org.eclipse.epp.package.modeling.product -consoleLog With this long log: java.lang.NullPointerException at org.eclipse.emf.ecore.util.EcoreUtil.getURI(EcoreUtil.java:2887) at org.eclipse.emf.codegen.ecore.genmodel.impl.GenModelImpl.diagnose(GenModelImpl.java:2930) at org.eclipse.emf.codegen.ecore.genmodel.presentation.GenModelEditor.validate(GenModelEditor.java:1773) at org.eclipse.emf.codegen.ecore.genmodel.presentation.GenModelEditor.initialize(GenModelEditor.java:596) at org.eclipse.emf.codegen.ecore.genmodel.presentation.GenModelEditor.createPages(GenModelEditor.java:1080) at org.eclipse.ui.part.MultiPageEditorPart.createPartControl(MultiPageEditorPart.java:357) at org.eclipse.ui.internal.EditorReference.createPartHelper(EditorReference.java:662) at org.eclipse.ui.internal.EditorReference.createPart(EditorReference.java:462) at org.eclipse.ui.internal.WorkbenchPartReference.getPart(WorkbenchPartReference.java:595) at org.eclipse.ui.internal.EditorReference.getEditor(EditorReference.java:286) at org.eclipse.ui.internal.WorkbenchPage.busyOpenEditorBatched(WorkbenchPage.java:2857) at org.eclipse.ui.internal.WorkbenchPage.busyOpenEditor(WorkbenchPage.java:2762) at org.eclipse.ui.internal.WorkbenchPage.access$11(WorkbenchPage.java:2754) at org.eclipse.ui.internal.WorkbenchPage$10.run(WorkbenchPage.java:2705) at org.eclipse.swt.custom.BusyIndicator.showWhile(BusyIndicator.java:70) at org.eclipse.ui.internal.WorkbenchPage.openEditor(WorkbenchPage.java:2701) at org.eclipse.ui.internal.WorkbenchPage.openEditor(WorkbenchPage.java:2685) at org.eclipse.ui.internal.WorkbenchPage.openEditor(WorkbenchPage.java:2668) at org.eclipse.emf.converter.ui.contribution.base.ModelConverterWizard.openEditor(ModelConverterWizard.java:318) at org.eclipse.emf.importer.ui.contribution.base.ModelImporterWizard.performFinish(ModelImporterWizard.java:167) at org.eclipse.jface.wizard.WizardDialog.finishPressed(WizardDialog.java:752) at org.eclipse.gmf.internal.bridge.ui.dashboard.DashboardMediator$RunWizardAction$1.finishPressed(DashboardMediator.java:311) at org.eclipse.jface.wizard.WizardDialog.buttonPressed(WizardDialog.java:373) at org.eclipse.jface.dialogs.Dialog$2.widgetSelected(Dialog.java:624) at org.eclipse.swt.widgets.TypedListener.handleEvent(TypedListener.java:228) at org.eclipse.swt.widgets.EventTable.sendEvent(EventTable.java:84) at org.eclipse.swt.widgets.Widget.sendEvent(Widget.java:1003) at org.eclipse.swt.widgets.Display.runDeferredEvents(Display.java:3910) at org.eclipse.swt.widgets.Display.readAndDispatch(Display.java:3503) at org.eclipse.jface.window.Window.runEventLoop(Window.java:825) at org.eclipse.jface.window.Window.open(Window.java:801) at org.eclipse.gmf.internal.bridge.ui.dashboard.DashboardMediator$RunWizardAction.run(DashboardMediator.java:316) at org.eclipse.gmf.internal.bridge.ui.dashboard.HyperlinkFigure$1.mousePressed(HyperlinkFigure.java:63) at 
org.eclipse.draw2d.Figure.handleMousePressed(Figure.java:873) at org.eclipse.draw2d.SWTEventDispatcher.dispatchMousePressed(SWTEventDispatcher.java:214) at org.eclipse.draw2d.LightweightSystem$EventHandler.mouseDown(LightweightSystem.java:513) at org.eclipse.swt.widgets.TypedListener.handleEvent(TypedListener.java:179) at org.eclipse.swt.widgets.EventTable.sendEvent(EventTable.java:84) at org.eclipse.swt.widgets.Widget.sendEvent(Widget.java:1003) at org.eclipse.swt.widgets.Display.runDeferredEvents(Display.java:3910) at org.eclipse.swt.widgets.Display.readAndDispatch(Display.java:3503) at org.eclipse.ui.internal.Workbench.runEventLoop(Workbench.java:2405) at org.eclipse.ui.internal.Workbench.runUI(Workbench.java:2369) at org.eclipse.ui.internal.Workbench.access$4(Workbench.java:2221) at org.eclipse.ui.internal.Workbench$5.run(Workbench.java:500) at org.eclipse.core.databinding.observable.Realm.runWithDefault(Realm.java:332) at org.eclipse.ui.internal.Workbench.createAndRunWorkbench(Workbench.java:493) at org.eclipse.ui.PlatformUI.createAndRunWorkbench(PlatformUI.java:149) at org.eclipse.ui.internal.ide.application.IDEApplication.start(IDEApplication.java:113) at org.eclipse.equinox.internal.app.EclipseAppHandle.run(EclipseAppHandle.java:194) at org.eclipse.core.runtime.internal.adaptor.EclipseAppLauncher.runApplication(EclipseAppLauncher.java:110) at org.eclipse.core.runtime.internal.adaptor.EclipseAppLauncher.start(EclipseAppLauncher.java:79) at org.eclipse.core.runtime.adaptor.EclipseStarter.run(EclipseStarter.java:368) at org.eclipse.core.runtime.adaptor.EclipseStarter.run(EclipseStarter.java:179) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source) at java.lang.reflect.Method.invoke(Unknown Source) at org.eclipse.equinox.launcher.Main.invokeFramework(Main.java:559) at org.eclipse.equinox.launcher.Main.basicRun(Main.java:514) at org.eclipse.equinox.launcher.Main.run(Main.java:1311) Has anyone an idea what is going wrong? I looked a while at my model but were not able to find something wrong. I just thought there might be a recursion due to the "Unhandled event loop exception" but is this even possible? Thanks in advance, Dominik

    Read the article
