Search Results

Search found 3618 results on 145 pages for 'huge'.

Page 116/145 | < Previous Page | 112 113 114 115 116 117 118 119 120 121 122 123  | Next Page >

  • Android plugin in Eclipse 3.5 on Ubuntu 64-bit has problems with properties

    - by Zordid
    Hi there! I have a huge problem with the Android Development Tools (ADT) running in Eclipse Galileo (3.5.1) on Ubuntu 9.10, 64-bit. On this platform, I cannot open any property edit dialogs for layout properties, e.g. the one where you can select a string resource ID for text fields, or a drawable ID for image fields or backgrounds. Whenever I click on the ... button next to the property value - nothing happens, except that the button disappears. Properties with a list of possible values, e.g. "wrap_content" or "fill_parent", are displayed in a dropdown box directly in the properties field. On a different system I work in a Windows environment with Eclipse 3.4 and the same ADT: no problems whatsoever, everything works fine, the dialogs come up perfectly. Does anyone know what to do here? Where's the problem? Why does Eclipse not tell me that something goes wrong? Thanks!

    NEW DISCOVERIES: I found out that it might not even be an Android problem, but a general Eclipse problem that I can see with all versions (Ganymede, Galileo, Helios) on my Linux (Ubuntu) system. It must be a simple UI problem: the ... button next to the values does not receive the mouse click!! I managed to get the appropriate dialogs to edit the property values by double-clicking the button - crazy, strange, ugly behavior! But why on earth does nobody else know about this problem - I cannot find anything else on the net about it! Could it be related to the strange "GDK native window problem" on GNOME? HELP!

    Read the article

  • Testing a GUI-heavy WPF application.

    - by Hamish Grubijan
    We (my colleagues and I) have a messy, mature, 12-year-old app that is GUI-based, and the current plan is to add new dialogs and other GUI in WPF, as well as replace some of the older dialogs with WPF. At the same time we want to be able to test that monster - GUI automation in a maintainable way. Some challenges:
    - The application is massive.
    - It constantly gains new features.
    - It is being changed around (bug fixes, patches).
    - It has a back end, and a layer in between.
    - The state of it can get out of whack if you beat it to death.
    What we want is:
    - Some tool that can automate testing of WPF.
    - Auto-discovery of what the inputs and outputs of a dialog are. An old test should still work if you add a label that does nothing; it should fail, however, if you remove a necessary text field.
    - A test suite that is easy to maintain and that runs without breaking most of the time.
    - Every new dialog created with testability in mind.
    At this point I do not know exactly what I want, so I am marking this as a community wiki. If having to test a huge GUI-based app rings a bell (even if not in WPF), then please share your good, bad and ugly experiences here.

    Read the article

  • How to structure applications as multiple projects and name the packages in Java

    - by lostiniceland
    Hello everyone. I would like to know how you set up your projects in Java. For example, in my current work project, a six-year-old J2EE app with approximately 2 million LoC, we only have one project in Eclipse. The package structure is split into tiers and then domains, so it follows guidelines from Sun/Oracle. A huge Ant script builds different jars out of this one source folder. Personally I think it would be better to have multiple projects, at least one for each tier. Recently I was playing around with a project structure like this:
    - Domainproject (contains only annotated POJOs, needed by all other projects)
    - Datalayer (only persistence)
    - Businesslogic (services)
    - Presenter
    - View
    This way, it should be easier to exchange components, and when using a build tool like Maven I can have everything in a repository, so when only working on the frontend I can get the rest as a dependency in my classpath. Does this make sense to you? Do you use different approaches, and what do they look like? Furthermore, I am struggling with how to name my packages/projects correctly. Right now the above project structure is reflected in the names of the packages, e.g. de.myapp.view, and it continues with some technical subfolders like internal or interfaces. What I am missing here, and I don't know how to do this properly, is the distinction to a certain domain. When the project gets bigger it would be nice to recognise a particular domain as well as the technical details, to navigate more easily within the project. This leads to my second question: how do you name your projects and packages?
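    A purely illustrative sketch of one common answer to the naming question - domain first, technical layer second - with invented domain names ("billing", "customer"); only the de.myapp prefix comes from the post:

        // Illustrative layout only; "billing" and "customer" are made-up domains.
        //   de.myapp.billing.domain        annotated POJOs for the billing domain
        //   de.myapp.billing.persistence   DAOs / repositories
        //   de.myapp.billing.service       business logic for billing
        //   de.myapp.customer.domain
        //   de.myapp.customer.persistence
        //   de.myapp.customer.service
        //   de.myapp.web                   presenters and views
        // A class then lives at, for example:
        package de.myapp.billing.service;

    The domain name stays visible in every package, and the technical role (domain, persistence, service) still maps cleanly onto separate Maven modules if you split the tiers into projects.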

    Read the article

  • How to handle large dataset with JPA (or at least with Hibernate)?

    - by Roman
    I need to make my web app work with really huge datasets. At the moment I get either an OutOfMemoryException or output that takes 1-2 minutes to generate. Let's keep it simple and suppose that we have two tables in the DB: Worker and WorkLog, with about 1,000 rows in the first one and 10,000,000 rows in the second one. The latter table has several fields, including 'workerId' and 'hoursWorked', among others. What we need is:
    - the total hours worked by each user;
    - a list of work periods for each user.
    The most straightforward approach (IMO) for each task in plain SQL is:

        1) select Worker.name, sum(hoursWorked) from Worker, WorkLog
           where Worker.id = WorkLog.workerId
           group by Worker.name;
           //results of this query should be transformed to Multimap<Worker, Long>

        2) select Worker.name, WorkLog.start, WorkLog.hoursWorked from Worker, WorkLog
           where Worker.id = WorkLog.workerId;
           //results of this query should be transformed to Multimap<Worker, Period>
           //if it was JDBC then it would be vital
           //to set resultSet.setFetchSize (someSmallNumber), ~100

    So, I have two questions: how to implement each of my approaches with JPA (or at least with Hibernate), and how would you handle this problem (with JPA or Hibernate, of course)?
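    A minimal sketch of how those two queries might look through JPA, assuming Worker and WorkLog are mapped entities and WorkLog has a many-to-one 'worker' association (entity and field names come from the post; the mapping itself is an assumption):

        import java.util.List;
        import javax.persistence.EntityManager;
        import javax.persistence.Query;

        public class WorkLogQueries {

            // 1) Total hours per worker: let the database aggregate and return only
            //    (name, sum) pairs instead of loading 10,000,000 WorkLog rows.
            @SuppressWarnings("unchecked")
            public List<Object[]> totalHoursPerWorker(EntityManager em) {
                return em.createQuery(
                        "select w.name, sum(l.hoursWorked) "
                      + "from WorkLog l join l.worker w "
                      + "group by w.name")
                         .getResultList();
            }

            // 2) Work periods per worker: page through the result set instead of
            //    materialising everything at once (rough analogue of setFetchSize).
            @SuppressWarnings("unchecked")
            public void workPeriods(EntityManager em, int pageSize) {
                int first = 0;
                List<Object[]> page;
                do {
                    Query q = em.createQuery(
                            "select w.name, l.start, l.hoursWorked "
                          + "from WorkLog l join l.worker w "
                          + "order by w.name, l.start");
                    q.setFirstResult(first);
                    q.setMaxResults(pageSize);
                    page = q.getResultList();
                    // ... build the Multimap<Worker, Period> incrementally here ...
                    em.clear();            // detach loaded entities to keep memory flat
                    first += pageSize;
                } while (!page.isEmpty());
            }
        }

    With Hibernate's own API, a scrollable query (session.createQuery(...).scroll()) with the session cleared periodically is the usual alternative to paging.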

    Read the article

  • Poor performance using RMI proxies with Swing components

    - by Patrick
    I'm having huge performance issues when I add RMI proxy references to a Java Swing JList component. I'm retrieving a list of user Profiles with RMI from a server. The retrieval itself takes just a second or so, so that's acceptable under the circumstances. However, when I try to add these proxies to a JList, with the help of a custom ListModel and a CellRenderer, it takes between 30 and 60 seconds to add about 180 objects. Since it is a list of users' names, it's preferable to present them alphabetically. The biggest performance hit is when I sort the elements as they get added to the ListModel. Since the list will always be sorted, I opted to use the built-in Collections.binarySearch() to find the correct position for the next element to be added, and the comparator uses two methods that are defined by the Profile interface, namely getFirstName() and getLastName(). Is there any way to speed this process up, or am I simply implementing it the wrong way? Or is this a "feature" of RMI? I'd really love to be able to cache some of the data of the remote objects locally, to minimize the remote method calls.
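    A rough sketch of the local-caching idea, not the actual implementation: fetch each name exactly once per Profile proxy, sort the cached entries locally, and only then populate the model. Profile is assumed to be the remote interface from the post; everything else is invented for illustration.

        import java.rmi.RemoteException;
        import java.util.ArrayList;
        import java.util.Collections;
        import java.util.Comparator;
        import java.util.List;
        import javax.swing.DefaultListModel;

        // Snapshot of the remote data, built with exactly two remote calls.
        final class ProfileEntry {
            final Profile proxy;      // kept for later use (e.g. opening details)
            final String firstName;   // cached locally
            final String lastName;    // cached locally

            ProfileEntry(Profile p) throws RemoteException {
                this.proxy = p;
                this.firstName = p.getFirstName();
                this.lastName = p.getLastName();
            }
        }

        class ProfileListBuilder {
            // ~360 remote calls for 180 profiles, instead of remote calls made
            // inside the comparator on every binarySearch comparison.
            static DefaultListModel buildModel(List<Profile> proxies)
                    throws RemoteException {
                List<ProfileEntry> entries = new ArrayList<ProfileEntry>();
                for (Profile p : proxies) {
                    entries.add(new ProfileEntry(p));
                }
                Collections.sort(entries, new Comparator<ProfileEntry>() {
                    public int compare(ProfileEntry a, ProfileEntry b) {
                        int c = a.lastName.compareToIgnoreCase(b.lastName);
                        return c != 0 ? c : a.firstName.compareToIgnoreCase(b.firstName);
                    }
                });
                DefaultListModel model = new DefaultListModel();
                for (ProfileEntry e : entries) {
                    model.addElement(e);
                }
                return model;
            }
        }

    The CellRenderer then reads the cached names from ProfileEntry instead of calling the proxy, so painting the list never touches the network.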

    Read the article

  • Microsoft products such as Visual Studio 2010 do not require entering a serial number

    - by MainMa
    Hi, I am a member of WebsiteSpark and was a member of DreamSpark. Both programs let you download software and provide serial keys to use with it. Some software, like Windows Server, has an ISO file to download and a serial number displayed on the website which I must enter during installation. Some other software does not have any serial key. For example, when I downloaded Visual Studio 2010, there was just a link to an ISO file. During installation, there was no serial number field at all (whereas Visual Studio 2008 had this field at the beginning of the installation process). It is the same with SQL Server 2008 and Microsoft Expression Studio 3. Even when I downloaded the public trial RTM version of Windows Seven Enterprise, there was no serial number to enter. I don't think that such expensive products as SQL Server 2008 Enterprise are delivered without serials and online validation, so I suppose that the serial is embedded into the product itself, either in the installation binaries or in a separate config file, so it is already in the ISO I download and I do not have to enter it. So my question is, how is this done technically? Is each 2 GB ISO generated on demand on the server, embedding a fresh serial each time the ISO is requested? I suppose that if it is done that way, it has a huge impact on server performance (no caching, no streaming...), so what techniques might be used behind the scenes? I want to implement the same feature in a product I intend to ship (to simplify installation by not asking the user to enter a serial number), but I really don't see how to do it with a low impact on server performance.

    Read the article

  • Flag bit computation and detection

    - by Majid
    Hi all, In some code I'm working on I have to take care of ten independent parameters which can each take one of two values (0 or 1). This creates 2^10 distinct conditions. Some of the conditions never occur and can be left out, but those which do occur are still A LOT, and making a switch to handle all cases is insane. I want to use 10 if statements instead of a huge switch. For this I know I should use flag bits, or rather flag bytes, as the language is JavaScript and it's easier to work with a 10-character string representing a 10-bit binary number. Now, my problem is, I don't know how to implement this. I have seen this used in APIs where multiple-selectable options are exposed with the numbers 1, 2, 4, 8, ..., 2^(n-1), which are the decimal equivalents of 1, 10, 100, 1000, etc. in binary. So if we make a call like bar = foo(7), bar will be an object with whatever options the three rightmost flags enable. I can convert the decimal number into binary and in each if statement check to see if the corresponding digit is set or not. But I wonder, is there a way to determine whether the n-th binary digit of a decimal number is zero or one, without actually doing the conversion?
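    For what it's worth, the n-th bit of an integer can be tested directly with a shift and a mask, with no string conversion; JavaScript has the same >> and & operators. A small Java sketch of the idea (the option names are made up):

        public class FlagDemo {
            // Hypothetical option constants, one bit each.
            static final int OPTION_A = 1;       // binary 0000000001
            static final int OPTION_B = 1 << 1;  // binary 0000000010
            static final int OPTION_C = 1 << 2;  // binary 0000000100

            // True if bit n (0 = rightmost) of flags is set.
            static boolean isSet(int flags, int n) {
                return ((flags >> n) & 1) == 1;
            }

            public static void main(String[] args) {
                int flags = 7;  // OPTION_A | OPTION_B | OPTION_C
                for (int n = 0; n < 10; n++) {
                    if (isSet(flags, n)) {
                        System.out.println("flag " + n + " is set");
                    }
                }
            }
        }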

    Read the article

  • Branch view for a file that has been split into multiple files

    - by ScottJ
    I have a large source file in Perforce that has been split up into several smaller files in a branch. I want to create a branch view that can handle this, but Perforce (2009.1) only sees the last of the multiple files. For example, I created:

        p4 integrate //depot/original/huge_file.c //depot/new/huge_file.c

    Later I split the huge file into smaller ones:

        p4 integrate //depot/new/huge_file.c //depot/new/small_file_one.c
        p4 integrate //depot/new/huge_file.c //depot/new/small_file_two.c
        p4 integrate //depot/new/huge_file.c //depot/new/small_file_three.c

    Then I edited each of those (including //depot/new/huge_file.c) and submitted. Now I make changes to //depot/original/huge_file.c and I want to integrate those changes to //depot/new. If I do this manually, it works fine:

        p4 integrate //depot/original/huge_file.c //depot/new/huge_file.c
        p4 integrate //depot/original/huge_file.c //depot/new/small_file_one.c
        p4 integrate //depot/original/huge_file.c //depot/new/small_file_two.c
        p4 integrate //depot/original/huge_file.c //depot/new/small_file_three.c

    But I don't want to do that every time I integrate - this kind of thing belongs in a branch view. Unfortunately, if the branch view includes the same source file multiple times, the later lines override the earlier ones. How can I create a branch view like this?

        //depot/original/huge_file.c //depot/new/huge_file.c
        //depot/original/huge_file.c //depot/new/small_file_one.c
        //depot/original/huge_file.c //depot/new/small_file_two.c
        //depot/original/huge_file.c //depot/new/small_file_three.c

    When I integrate using this branch spec, only small_file_three.c gets integrated.

    Read the article

  • Application performance issue: SQL Server & Oracle

    - by Mahesh
    Hi, We have an application built with Silverlight, WCF and NHibernate. Currently it supports both SQL Server and Oracle databases. Even though the data is huge, it runs OK on SQL Server, but on Oracle it runs very slowly: one piece of functionality takes 5 seconds to execute on SQL Server and 30 seconds on Oracle. I am not able to figure out what the issue is. Two things I want to share with you about our database:
    1) The database contains one base table with a column of type [Text] on SQL Server and [NCLOB] on Oracle.
    2) Our database structure is heavily normalized.
    Maybe the NCLOB column I used on Oracle is the cause of the performance problem - I don't know the details about it. Can anyone please tell me what the cause might be, or which steps I should follow to bring the performance in line with SQL Server? Thanks in advance. Mahesh.

    Read the article

  • Using a dummy row with NOT NULL to avoid DEFAULT NULL

    - by Tony38
    I know having DEFAULT NULLs is not a good practice, but I have many optional lookup values which are FKs in the system, so to solve this issue here is what I am doing: I use NOT NULL for every FK / lookup column, and I make the first row in every lookup table (PK id = 1) a dummy row with just "none" in all the columns. This way I can use NOT NULL in my schema and, if needed, reference the "none" row (PK = 1) for FKs which do not have any lookup value. Is this a good design, or are there other workarounds?
    EDIT: I have a Neighborhood table and a Postal table. Every neighborhood has a city, so that FK can be NOT NULL. But not every postal code belongs to a neighborhood - some do, some don't, depending on the country. So if I use NOT NULL for the FK between Postal and Neighborhood, then I am stuck, as there has to be some value entered. So what I am doing, in essence, is keeping a dummy row in every lookup table just to link the FKs. Row one in the Neighborhood table will be: n_id = 1, name = none, etc. In the Postal table I can then have: postal_code = 3456A3, FK (city) = Moscow, FK (neighborhood_id) = 1 as NOT NULL. If I don't have a dummy row in the Neighborhood lookup table, then I have to declare FK (neighborhood_id) as a DEFAULT NULL column and store blanks in the table. This is just one example, but there are a huge number of values which would then have blanks in many tables.

    Read the article

  • Django Template For Loop Removing <img> Self-Closing

    - by Zack
    Django's for loop seems to be removing all of my <img> tag's self-closing...ness (/>). In the Template, I have this code:

        {% for item in item_list %}
        <li>
            <a class="left" href="{{ item.url }}">{{ item.name }}</a>
            <a class="right" href="{{ item.url }}">
                <img src="{{ item.icon.url }}" alt="{{ item.name }} Logo." />
            </a>
        </li>
        {% endfor %}

    It outputs this:

        <li>
            <a class="left" href="/some-url/">This is an item</a>
            <a class="right" href="/some-url/">
                <img src="/media/img/some-item.jpg" alt="This is an item Logo.">
            </a>
        </li>

    As you can see, the <img> tag is no longer closed, and thus the page doesn't validate. This isn't a huge issue since it'll still render properly in all browsers, but I'd like to know how to solve it. I've tried wrapping the whole for loop in {% autoescape off %}...{% endautoescape %} but that didn't change anything. All other self-closed <img> tags in the document outside the for loop still properly close.

    Read the article

  • Automated Legal Processing

    - by Chris S
    Will it ever be possible to make legal systems quantifiable enough to process with computer algorithms? What technologies would have to be in place before this is possible? Are there any existing technologies that are already trying to accomplish this? Out of curiosity, I downloaded the text for laws in my local municipality, and tried applying some simple NLP tricks to extract rules from sentences. I had mixed results. Some sentences were very explicit (e.g. "Cars may not be left in the park overnight"), but other sentences seemed hopelessly vague (e.g. "The council's purpose is to ensure the well-being of the community"). I apologize if this is too open-ended a topic, but I've often wondered what society would look like if legal systems were based on less ambiguous language. Lawyers, and the legal process in general, are so expensive because they have to manually process a complex set of rules codified in ambiguous legal texts. If this system could be represented in software, this huge expense could potentially be eliminated, making the legal system more accessible for everyone.

    Read the article

  • Valgrind 'noise', what does it mean?

    - by Chris Huang-Leaver
    When I used Valgrind to help debug an app I was working on, I noticed a huge amount of noise which seems to be complaining about the standard libraries. As a test I did this:

        echo 'int main() {return 0;}' | gcc -x c -o test -

    Then I did this:

        valgrind ./test
        ==1096== Use of uninitialised value of size 8
        ==1096==    at 0x400A202: _dl_new_object (in /lib64/ld-2.10.1.so)
        ==1096==    by 0x400607F: _dl_map_object_from_fd (in /lib64/ld-2.10.1.so)
        ==1096==    by 0x4007A2C: _dl_map_object (in /lib64/ld-2.10.1.so)
        ==1096==    by 0x400199A: map_doit (in /lib64/ld-2.10.1.so)
        ==1096==    by 0x400D495: _dl_catch_error (in /lib64/ld-2.10.1.so)
        ==1096==    by 0x400189E: do_preload (in /lib64/ld-2.10.1.so)
        ==1096==    by 0x4003CCD: dl_main (in /lib64/ld-2.10.1.so)
        ==1096==    by 0x401404B: _dl_sysdep_start (in /lib64/ld-2.10.1.so)
        ==1096==    by 0x4001471: _dl_start (in /lib64/ld-2.10.1.so)
        ==1096==    by 0x4000BA7: (within /lib64/ld-2.10.1.so)

        * large block of similar output snipped *

        ==1096== Use of uninitialised value of size 8
        ==1096==    at 0x4F35FDD: (within /lib64/libc-2.10.1.so)
        ==1096==    by 0x4F35B11: (within /lib64/libc-2.10.1.so)
        ==1096==    by 0x4A1E61C: _vgnU_freeres (vg_preloaded.c:60)
        ==1096==    by 0x4E5F2E4: __run_exit_handlers (in /lib64/libc-2.10.1.so)
        ==1096==    by 0x4E5F354: exit (in /lib64/libc-2.10.1.so)
        ==1096==    by 0x4E48A2C: (below main) (in /lib64/libc-2.10.1.so)
        ==1096==
        ==1096== ERROR SUMMARY: 3819 errors from 298 contexts (suppressed: 876 from 4)
        ==1096== malloc/free: in use at exit: 0 bytes in 0 blocks.
        ==1096== malloc/free: 0 allocs, 0 frees, 0 bytes allocated.
        ==1096== For counts of detected errors, rerun with: -v
        ==1096== Use --track-origins=yes to see where uninitialised values come from
        ==1096== All heap blocks were freed -- no leaks are possible.

    You can see the full result here: http://pastebin.com/gcTN8xGp
    I have two questions; firstly, is there a way to suppress all the noise? --show-below-main is set to no by default, but there doesn't appear to be a --show-after-main equivalent.

    Read the article

  • Need help implementing this algorithm with Hadoop MapReduce

    - by Julia
    Hi all! I have an algorithm that goes through a large data set, reads some text files, and searches for specific terms in those lines. I have it implemented in Java, but I didn't want to post code so that it doesn't look like I am searching for someone to implement it for me - but it is true, I really need a lot of help!!! This was not planned for my project, but the data set turned out to be huge, so my teacher told me I have to do it like this. EDIT (I did not clarify this in the previous version): the data set I have is on a Hadoop cluster, and I should make a MapReduce implementation of it. I was reading about MapReduce and thought I would first do the standard implementation and then it would be more or less easy to redo it with MapReduce. That didn't happen, since the algorithm is quite stupid and nothing special, and MapReduce... I can't wrap my mind around it. So here, shortly, is the pseudocode of my algorithm:

        LIST termList   (there is a method that creates this list from a Lucene index)
        FOLDER topFolder

        INPUT topFolder
        IF it is folder and not empty
            list files (there are 30 sub folders inside)
            FOR EACH sub folder
                GET file "CheckedFile.txt"
                analyze(CheckedFile)
            ENDFOR
        END IF

        Method ANALYZE(CheckedFile)
            read CheckedFile
            WHILE CheckedFile has next line
                GET line
                FOR (loops through termList)
                    GET third word from line
                    IF third word = term from list
                        append whole line to string buffer
                    ENDIF
                ENDFOR
            END WHILE
            OUTPUT string buffer to file

    Also, as you can see, each time "analyze" is called a new file has to be created, and I understand that MapReduce has difficulty writing to many outputs??? I understand the MapReduce intuition, and my example seems perfectly suited for MapReduce, but when it comes to actually doing it, obviously I do not know enough and I am STUCK! Please please help.
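    A rough sketch of how the per-line matching could look as a Mapper in the newer org.apache.hadoop.mapreduce API - not the poster's code. It assumes the term list is small enough to load in setup() and that the third whitespace-separated token of each line is what gets matched. Keying the output by the containing sub-folder is one way to get per-folder results grouped in the reduce step (or written via MultipleOutputs) instead of creating a file per analyze() call:

        import java.io.IOException;
        import java.util.HashSet;
        import java.util.Set;
        import org.apache.hadoop.io.LongWritable;
        import org.apache.hadoop.io.Text;
        import org.apache.hadoop.mapreduce.Mapper;
        import org.apache.hadoop.mapreduce.lib.input.FileSplit;

        public class TermMatchMapper extends Mapper<LongWritable, Text, Text, Text> {

            private final Set<String> terms = new HashSet<String>();

            @Override
            protected void setup(Context context) throws IOException {
                // Load the term list here, e.g. from the distributed cache or a
                // small HDFS file; hard-coded only to keep the sketch short.
                terms.add("someTerm");
            }

            @Override
            protected void map(LongWritable key, Text value, Context context)
                    throws IOException, InterruptedException {
                String[] tokens = value.toString().split("\\s+");
                if (tokens.length < 3) {
                    return;
                }
                if (terms.contains(tokens[2])) {
                    // Key by the sub-folder containing this split, so all matches
                    // for one folder end up together in the reducer.
                    String folder = ((FileSplit) context.getInputSplit())
                            .getPath().getParent().getName();
                    context.write(new Text(folder), new Text(value));
                }
            }
        }

    The reducer (or a MultipleOutputs setup) then writes one output per folder key, which replaces the per-folder string buffer in the pseudocode above.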

    Read the article

  • Should client-server code be written in one "project" or two?

    - by Ricket
    I've started a client-server application. At first I naturally created two projects in Eclipse, two source control repositories, etc. But I'm quickly seeing that there is a bit of shared code between the two that would probably benefit from being shared instead of copied. In addition, I've been learning and trying test-driven development, and it seems to me that it would be easier to test based on real client components rather than having to set up a huge amount of code just to mock something, when the code is probably mostly in the client. My biggest concern in merging the client and server is security: how do I ensure that the server pieces of the code do not reach a user's computer? So, especially if you are writing client-server applications yourself (and especially in Java, though this can turn into a language-agnostic question if you'd like to share your experience with this in other languages), what sort of separation do you keep between your client and server code? Are they just in different packages/namespaces, or completely different binaries using shared libraries, or something else entirely? How do you test the code together and yet ship it separately?

    Read the article

  • C++ Exception Handling

    - by user1413793
    So I was writing some code and I noticed that apart from syntactical, type, and other compile-time errors, C++ does not seem to throw any other exceptions. So I decided to test this out with a very trivial program:

        #include <iostream>

        int main()
        {
            std::cout << 5/0 << std::endl;
            return 1;
        }

    When I compiled it using g++, g++ gave me a warning saying I was dividing by 0, but it still compiled the code. Then when I ran it, it printed some really large arbitrary number. What I want to know is, how does C++ deal with exceptions? Integer division by 0 should be a very trivial example of when an exception should be thrown and the program should terminate. Do I have to essentially enclose my entire program in a huge try block and then catch certain exceptions? I know that in Python, when an exception is thrown, the program immediately terminates and prints out the error. What does C++ do? Are there even runtime exceptions which stop execution and kill the program?

    Read the article

  • Some optimizations for this code (computing ranks of a vector)?

    - by user1748356
    The following code is a performance-critical function to compute tied ranks of a vector:

        // a sort function that sorts vector x of length ci and also
        // returns the keys (inds) of x
        mergeSort(x, inds, ci);

        int tj = 0;
        double xi = x[0];
        for (int j = 1; j < ci; ++j) {
            if (x[j] > xi) {
                double rankvalue = 0.5 * (j - 1 + tj);
                for (int k = tj; k < j; ++k) {
                    ranks[inds[k]] = rankvalue;
                }
                tj = j;
                xi = x[j];
            }
        }
        double rankvalue = 0.5 * (ci - 1 + tj);
        for (int k = tj; k < ci; ++k) {
            ranks[inds[k]] = rankvalue;
        }

    The problem is that the supposed performance bottleneck, mergeSort(), which is O(N log N), is several times faster than the rest of the code (which is O(N)). That suggests there is room for huge improvement in the other part of the code - any advice?

    Read the article

  • Pump Messages During Long Operations + C# (it is urgent)

    - by Newbie
    Hi, I have a web service that is doing a huge computation and is taking more than a minute. I have generated the proxy file of the web service, and from my client end I am using the DLL (of course I generated the proxy DLL). My client-side code is:

        TimeSeries3D t = new TimeSeries3D();
        int portfolioId = 4387919;
        string[] str = new string[2];
        str[0] = "MKT_CAP";
        DateRange dr = new DateRange();
        dr.mStartDate = DateTime.Today;
        dr.mEndDate = DateTime.Today;
        Service1 sc = new Service1();
        t = sc.GetAttributesForPortfolio(portfolioId, true, str, dr);

    But since it is taking too much time for the server to compute, after 1 minute I receive the following error message:

        The CLR has been unable to transition from COM context 0x33caf30 to COM context
        0x33cb0a0 for 60 seconds. The thread that owns the destination context/apartment
        is most likely either doing a non pumping wait or processing a very long running
        operation without pumping Windows messages. This situation generally has a negative
        performance impact and may even lead to the application becoming non responsive or
        memory usage accumulating continually over time. To avoid this problem, all single
        threaded apartment (STA) threads should use pumping wait primitives (such as
        CoWaitForMultipleHandles) and routinely pump messages during long running operations.

    Kindly guide me on what to do. It is very urgent. Thanks

    Read the article

  • Best practices, PHP, tracking millions of impressions per day.

    - by John
    What do I have to do to make 20k MySQL inserts per second possible (during peak hours; around 1k/sec during slower times)? I've been doing some research and I've seen the "INSERT DELAYED" suggestion, writing to a flat file with fopen(file,'a') and then running a cron job to dump the "needed" data into MySQL, etc. I've also heard you need multiple servers and "load balancers", which I've never heard of, to make something like this work. I've also been looking at these "cloud server" thing-a-ma-jigs and their automatic scalability, but I'm not sure what's actually scalable. The application is just a tracker script, so if I have 100 websites that get 3 million page loads a day, there will be around 300 million inserts a day. The data will be run through a script every 15-30 minutes which will normalize the data and insert it into another MySQL table. How do the big dogs do it? How do the little dogs do it? I can't afford a huge server anymore, so if there are multiple intuitive ways of going at it that you smart people can think of... please let me know :)

    Read the article

  • Swimlane Diagram Softwares with Expand/Collapse Features

    - by louis xie
    I've been searching really hard for software that can fulfill my needs, but to no avail. I have a swimlane diagram which is extremely huge and almost impossible to model using Visio or any traditional swimlane software. I need to model both the operational process and the interactions within an application and between different applications. Therefore, without wasting additional effort modelling these separately, I am looking for a solution with which I can combine both views - that is, one which lets me expand/collapse and group/ungroup processes and sub-processes. Take a typical credit card process, for instance; a hypothetical description of the swimlane could be:
    1. Customer submits application form to the bank.
    2. Bank Officer A receives the application form and validates that it was correctly filled in.
    3. Bank Officer A submits the application form to Bank Officer B for processing.
    4. Bank Officer B checks the credit quality of the customer through Application X.
    5. Application X submits a query to Application Y to retrieve the credit report.
    6. Application X retrieves the credit report and submits it to Application Z for computation of credit scores.
    7. Bank Officer B validates that the customer is creditworthy and submits the application to Bank Officer C for processing.
    The above is an over-simplified, purely hypothetical credit card request process. What I'm trying to drive at is that each of the above steps has sub-processes, and I want to be able to switch between a "detailed" view and an "aggregated" view - if possible, adding in the time dependency of the different tasks as well. I haven't been able to find any software that can do this.

    Read the article

  • Multi-account sync with Dropbox API

    - by Dan
    I'm trying to create a web app that lets users share files with each other through Dropbox. At the moment, Dropbox handles all the sharing, and there's one central Dropbox account running on the web server that shares the folder with the people who want it. I'm trying to change it so people don't have to accept a new folder invitation each time. I'd like to have them authorize my app to access an app folder in their Dropbox account, and all their shared folders would go inside there. Any changes they make would get noticed by the app on the server and synced to everyone else's folders. There are a couple of things I'm having trouble figuring out to make this work:
    - Do I need to make repeated calls to /delta for every account? I can't think how else I'd do this, but that sounds like it would quickly turn into thousands of requests a minute just polling for updates.
    - When someone adds a file, do I have to upload it once for each account? That seems like a huge waste of bandwidth. I've looked into using /copy_ref, which I think would add a file to another user's account without my app ever touching it, but my app's web interface also allows users to upload files directly to my server, which would then need to be synced with everyone else's folders. That file isn't on Dropbox's servers yet, so /copy_ref obviously wouldn't work.
    For a little extra context, my app is written in node.js, and I've been playing with this library to interface with Dropbox, which uses their REST API.

    Read the article

  • R: remove columns from a dataframe where ALL values are NA

    - by Sophomore
    Hello everybody! I'm having some trouble with my huge data frame and couldn't really resolve the question myself. The data frame has some properties as columns and each row represents one data set. I've done some sanitizing on this data frame (e.g. getting rid of data sets which are not to be included in the evaluation). (Whoever might be interested: beforehand I aggregate around 5000 single text files and put them in a TSV. Some of the properties have a sequence number, like "button.pressed.1" ... "button.pressed.n". Some of the excluded sets had really high numbers for n; all remaining sets have much smaller numbers for n, but the property "button.pressed.50" is still there and all remaining sets have an NA in that column. Actually it's a different property, but the example should clarify my intention...) So the question is quite simple (for some sophisticated R pro): I need to get rid of columns where for ALL rows the value is NA. Could someone please help me out? (All I have managed is to get rid of columns where at least one NA exists, which dropped about half my columns.)

    Read the article

  • ASP.NET ListView, custom DataSources, and editing items

    - by Andrew Shepherd
    The MSDN walkthroughs provide a number of examples where you can drag a DataSource from the toolbox, run through some simple configuration steps, then drag a ListView onto the screen, point it at the DataSource, and hey - you've got full table editing. Now I'm trying to write my own DataSource class (a class that implements System.Web.UI.IDataSource) and my own DataSourceView class. I assign an instance of this custom DataSource class to the ListView.DataSource property. The display of all the items works well. However, updating, inserting and deleting just are not working. I'm overriding every function I can in my DataSourceView class, and they just aren't being called. This is such a huge topic that I'll focus this question on one simple example: when you press the "Edit" button (the button inside the ItemTemplate with a CommandName of "Edit"), you expect the ItemTemplate to be replaced by an EditItemTemplate. This did not happen. The only way I could get it to happen was to handle the onitemediting event:

        protected void _listViewPublicHolidays_ItemEditing(object sender, ListViewEditEventArgs e)
        {
            _listViewPublicHolidays.EditIndex = e.NewEditIndex;
            _listViewPublicHolidays.DataBind();
        }

    This is hardly a problem, but why did I have to do it at all? In the MSDN walkthroughs where I attach a ListView to a LinqDataSource, this code doesn't have to be written. Can someone who's been here before hazard a guess as to what would be different or missing in my custom DataSource?

    Read the article

  • (nested) user controls within an MVP pattern causing sporadic problems

    - by LLEA
    Hi everyone, I have a serious problem with nested user controls (UCs) in WinForms while trying to implement the MVP pattern (passive view). The project consists of one model (designed as a singleton) and a set of different presenters with corresponding view interfaces. To avoid one huge view interface and therefore an overloaded main UI, I decided to make use of a set of different UCs. Each UC has one view interface and one presenter that can access the model. But there are nested UCs, meaning that one specialised UC implements sort of a basic UC. The main UI just represents a container for all those UCs. So far, so good (if not, please ask)?! There are two problems that I am facing now (but I guess both have the same origin):
    1. From time to time it is not possible anymore to load the UCs and test them within the Visual Studio 2008 User Control Test Container. The error message just says that a project with an output type of class library cannot be started directly, etc. I can "handle" that by unloading all UC projects and reloading them afterwards. (I guess the references to all MVP components and other UCs are then updated.)
    2. Assuming that the implementation of my MVP pattern is okay and all those UCs are testable within the VS Test Container at a certain point in time - the biggest problem is still left: I am not able to add any UC (even the basic, unnested ones) to a simple Form (UI). The error message: error message.jpg
    Could it be that my basic UC causes all these problems?! It consists of two simple ComboBoxes and implements a basic presenter and basic view interface. Whenever I add this UC to a different UC, the VS designer adds two references to the public getter methods of the UC as resources. I then manually remove those resources from the resx file and comment out the corresponding lines in the designer file. Thanks in advance.

    Read the article

  • Unable to persist objects in GAE JDO

    - by Basil Dsouza
    Hello guys, I am completely new to both JDO and GAE, and have been struggling to get my data layer to persist any objects at all! The issues I am facing may be very simple, but I just can't seem to find a way around them no matter what solution I try. Firstly, the problem (slightly simplified, but it still contains all the necessary info). My data model is as follows:

        User:
            (primary key) String emailID
            String firstName

        Car:
            (primary key) User user
            (primary key) String registration
            String model

    This was the initial data model. I implemented a CarPK object to get a composite primary key of the User and the registration. However, that ran into a variety of issues (which I will save for another time/question). I then changed the design like so:

        User: (unchanged)

        Car:
            (primary key) String fauxPK   (where fauxPK = user.getEmailID() + SEP + registration)
            User user
            String registration
            String model

    This works fine for the User, and I can insert and retrieve User objects. However, when I try to insert a Car object, I get the following error: "Cannot have a java.lang.String primary key and be a child object". I found the following helpful link about it: http://stackoverflow.com/questions/2063467/persist-list-of-objects
    I went to the link suggested there, which explains how to create Keys, but it keeps talking about "Entity Groups" and "Entity Group Parents", and I can't seem to find any articles or sites that explain what an "Entity Group" or an "Entity Group Parent" is. I could try fiddling around some more to figure out if I can store an object somehow, but I am running short on patience and would also rather understand and then implement than vice versa. So I would appreciate any docs (even huge ones) that cover all these points, preferably with some examples that go beyond the very basic data modeling. And thanks for reading such a long post :)
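    For reference, the usual way around that error in GAE's JDO layer is to give child objects a datastore Key (or a key-encoded String) as their primary key; the parent's key then becomes part of the child's key, which is what the docs mean by an "entity group". A sketch under those assumptions, reusing the field names from the post:

        import java.util.ArrayList;
        import java.util.List;
        import javax.jdo.annotations.IdGeneratorStrategy;
        import javax.jdo.annotations.IdentityType;
        import javax.jdo.annotations.PersistenceCapable;
        import javax.jdo.annotations.Persistent;
        import javax.jdo.annotations.PrimaryKey;
        import com.google.appengine.api.datastore.Key;

        @PersistenceCapable(identityType = IdentityType.APPLICATION)
        public class User {
            @PrimaryKey
            private String emailID;          // a String key is fine for a root entity

            @Persistent
            private String firstName;

            // Owned one-to-many: each Car lives in this User's entity group.
            @Persistent(mappedBy = "user")
            private List<Car> cars = new ArrayList<Car>();
        }

        @PersistenceCapable(identityType = IdentityType.APPLICATION)
        class Car {
            @PrimaryKey
            @Persistent(valueStrategy = IdGeneratorStrategy.IDENTITY)
            private Key key;                 // child entities need a Key, not a plain String

            @Persistent
            private User user;               // back-reference to the owning parent

            @Persistent
            private String registration;

            @Persistent
            private String model;
        }

    With this mapping, persisting a User through a PersistenceManager also persists the Cars in its list, and each Car's Key carries the parent User key as its ancestor, which is exactly the entity-group relationship the linked answer refers to.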

    Read the article
