Search Results

Search found 956 results on 39 pages for 'querying'.

Page 30/39 | < Previous Page | 26 27 28 29 30 31 32 33 34 35 36 37  | Next Page >

  • CFfile -- value is not set to the queried data

    - by user494901
    I have an add-user form that doubles as an edit-user form by querying the data and setting value="#query.xvalue#". If the user exists (e.g. you're editing a user), the form loads the user's data from the database. When doing this with the file field (<cffile>), the value does not load, and when the insert runs it overwrites the database values with a blank string (if the user does not upload a new file). How do I avoid this? Code:

    Form:

      <br/>Digital Copy<br/>
      <!--- If null, set a default; if not, default to the database value --->
      <cfif len(Trim(certificationsList.cprAdultImage)) EQ 0>
        <cfinput type="file" required="no" name="cprAdultImage" value="">
      <cfelse>
        File Exists: <cfoutput><a href="#certificationsList.cprAdultImage#">View File</a></cfoutput>
        <cfinput type="file" required="no" name="cprAdultImage" value="#certificationsList.cprAdultImage#">
      </cfif>

    Form processor:

      <!--- Has a file been specified? --->
      <cfif not len(Trim(form.cprAdultImage)) EQ 0>
        <cffile action="upload" filefield="cprAdultImage" destination="#destination#" nameConflict="makeUnique">
        <cfinvokeargument name="cprAdultImage" value="#pathOfFile##cffile.serverFile#">
      <cfelse>
        <cfinvokeargument name="cprAdultImage" value="">
      </cfif>

    CFC arguments:

      <cfargument name="cprAdultExp" required="no">
      <cfargument name="cprAdultCompany" type="string" required="no">
      <cfargument name="cprAdultImage" type="string" required="no">
      <cfargument name="cprAdultOnFile" type="boolean" required="no">

    Query:

      UPDATE mod_StudentCertifications
      SET cprAdultExp='#DateFormat(ARGUMENTS.cprAdultExp, "mm/dd/yyyy")#',
          cprAdultCompany='#Trim(ARGUMENTS.cprAdultCompany)#',
          cprAdultImage='#Trim(ARGUMENTS.cprAdultImage)#',
          cprAdultOnFile='#Trim(ARGUMENTS.cprAdultOnFile)#'

      INSERT INTO mod_StudentCertifications(
          cprAdultExp, cprAdultcompany, cprAdultImage, cprAdultOnFile


  • C# Dataset Dynamically Add DataColumn

    - by Wesley
    I am trying to add an extra column to a dataset after a query has completed. My database relationship is the following:

            Employees
            /       \
        Groups   EmployeeGroups

    Employees holds all the data for each individual; I'll name the unique key UserID. Groups holds all the groups an employee can be part of, i.e. Super User, Admin, User, etc.; I'll name the unique key GroupID. EmployeeGroups holds the associations between employees and groups (UserID | GroupID). What I am trying to accomplish: after querying for all users, I want to loop through each user and record which groups that user belongs to, by adding a new string column named 'Groups' to the dataset and inserting the values from a second query. Then, by use of data binding, I'll populate a ListView with all employees and their group associations. My code is as follows; position 5 is the new column I am trying to add to the dataset.

      string theQuery = "select UserID, FirstName, LastName, EmployeeID, Active from Employees";
      DataSet theEmployeeSet = itsDatabase.runQuery(theQuery);
      DataColumn theCol = new DataColumn("Groups", typeof(string));
      theEmployeeSet.Tables[0].Columns.Add(theCol);
      foreach (DataRow theRow in theEmployeeSet.Tables[0].Rows)
      {
          theRow.ItemArray[5] = "1234";
      }

    At the moment, the code creates the new column, but when I assign the data to that column nothing is assigned. What am I missing? If there is any further explanation or information I can provide, please let me know. Thank you all.
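
    A likely explanation (my note, not from the post): the DataRow.ItemArray getter returns a copy of the row's values, so assigning into that copy never reaches the table; writing through the row's indexer does. A minimal self-contained sketch of the difference:

      // Minimal sketch: DataRow.ItemArray returns a copy of the row's values,
      // so writing into that copy is lost. Assigning through the row's
      // indexer (by column name or ordinal) writes to the row itself.
      using System;
      using System.Data;

      class GroupsColumnDemo
      {
          static void Main()
          {
              var table = new DataTable("Employees");
              table.Columns.Add("UserID", typeof(int));
              table.Rows.Add(1);
              table.Columns.Add(new DataColumn("Groups", typeof(string)));

              foreach (DataRow row in table.Rows)
              {
                  row.ItemArray[1] = "1234";                         // mutates a throwaway copy
                  Console.WriteLine(row["Groups"] == DBNull.Value);  // True: row unchanged

                  row["Groups"] = "1234";                            // writes through to the row
                  Console.WriteLine(row["Groups"]);                  // 1234
              }
          }
      }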


  • Program freezing when syncing an LDAP database (100+ entries added)

    - by djerry
    Hey guys, I'm updating an LDAP database. I need to add a list of users to the db, so I've written a simple foreach loop. There are about 180 users to add, but at the 128th user the program freezes. I know LDAP is really geared towards querying (fast), and that adding and modifying entries will not go as smoothly as a search query, but is it normal for the program to freeze while doing this? I'll post some code just in case.

      public static void SyncLDAPWithMySql(Novell.Directory.Ldap.LdapConnection _conn)
      {
          List<User> users = GetUsers();
          int iteller = 0;
          foreach (User user in users)
          {
              if (!UserAlreadyInLdap(user, _conn))
              {
                  TelUser teluser = new TelUser();
                  teluser.Telephone = user.E164;
                  teluser.Uid = user.E164;
                  teluser.Company = "/";
                  teluser.Dn = "";
                  teluser.Name = "/";
                  teluser.DisplayName = "/";
                  teluser.FirstName = "/";
                  TelephoneDA.InsertUser(_conn, teluser);
              }
              Console.WriteLine(iteller + " : " + user.E164);
              iteller++;
          }
      }

      private static bool UserAlreadyInLdap(User user, Novell.Directory.Ldap.LdapConnection _conn)
      {
          List<TelUser> users = TelephoneDA.GetAllEntries(_conn);
          foreach (TelUser teluser in users)
          {
              if (teluser.Telephone.Equals(user.E164))
                  return true;
          }
          return false;
      }

      public static int InsertUser(LdapConnection conn, TelUser user)
      {
          int iResponse = IsTelNumberUnique(conn, user.Dn, user.Telephone);
          if (iResponse == 0)
          {
              LdapAttributeSet attrSet = MakeAttSet(user);
              string dnForPhonebook = ConfigurationManager.AppSettings.Get("phonebookDn");
              LdapEntry ent = new LdapEntry("uid=" + user.Uid + "," + dnForPhonebook, attrSet);
              try
              {
                  conn.Add(ent);
              }
              catch (Exception ex)
              {
                  Console.WriteLine(ex.Message);
              }
          }
          return iResponse;
      }

    Am I adding too many entries at a time? Thanks in advance.
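
    One observation worth testing (my suggestion, not from the thread): UserAlreadyInLdap re-downloads every directory entry for each of the ~180 users, so the sync performs a full LDAP read per user and can easily appear to hang. A sketch that reads the directory once, reusing the poster's TelephoneDA, TelUser and User types:

      // Sketch only: fetch the directory a single time, keep the phone
      // numbers in a HashSet, and test membership in memory instead of
      // issuing another full LDAP read for every user in the loop.
      using System.Collections.Generic;
      using System.Linq;
      using Novell.Directory.Ldap;

      public static class LdapSyncHelper
      {
          public static HashSet<string> LoadExistingNumbers(LdapConnection conn)
          {
              // One directory read for the whole sync.
              return new HashSet<string>(
                  TelephoneDA.GetAllEntries(conn).Select(t => t.Telephone));
          }

          public static bool UserAlreadyInLdap(User user, HashSet<string> existing)
          {
              return existing.Contains(user.E164);  // O(1), no LDAP round trip
          }
      }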


  • Best method to search hierarchical data

    - by WDuffy
    I'm looking at building a facility which allows querying for data with hierarchical filtering. I have a few ideas about how to go about it, but was wondering if there are any recommendations or suggestions that might be more efficient. As an example, imagine that a user is searching for a job. The job areas would be as follows.

      1: Scotland
      2: --- West Central
      3: ------ Glasgow
      4: ------ Etc
      5: --- North East
      6: ------ Ayrshire
      7: ------ Etc

    A user can search a specific area (e.g. Glasgow) or a larger one (e.g. Scotland). The two approaches I am considering are:

    1. Keep a note of children in the database for each record (i.e. category 1 would have 2, 3, 4 in its children field) and query against that record with a SELECT * FROM Jobs WHERE Category IN Areas.childrenField.
    2. Use a recursive function to find all results which have a relation to the selected area.

    The problems I see with these are: holding this data in the db means having to keep track of all changes to the structure, and recursion is slow and inefficient. Any ideas, suggestions or recommendations on the best approach? I'm using C# ASP.NET with an MSSQL 2005 DB.
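
    Since this targets SQL Server 2005, a third option (a suggestion of mine, not from the post) is a recursive common table expression: it expands the chosen area's subtree in one set-based query, with no children field to maintain and no application-side recursion. A sketch, assuming hypothetical Areas(Id, ParentId) and Jobs(Id, Title, AreaId) tables:

      // Sketch: a recursive CTE walks the area subtree on the server,
      // then jobs are joined against the expanded set of area ids.
      using System;
      using System.Data.SqlClient;

      class JobSearch
      {
          // Prints job titles in the given area or any of its descendants.
          static void PrintJobsInArea(string connectionString, int areaId)
          {
              const string sql = @"
                  WITH AreaTree AS (
                      SELECT Id FROM Areas WHERE Id = @areaId
                      UNION ALL
                      SELECT a.Id FROM Areas a
                      JOIN AreaTree t ON a.ParentId = t.Id
                  )
                  SELECT j.Title
                  FROM Jobs j
                  JOIN AreaTree t ON j.AreaId = t.Id;";

              using (var conn = new SqlConnection(connectionString))
              using (var cmd = new SqlCommand(sql, conn))
              {
                  cmd.Parameters.AddWithValue("@areaId", areaId);
                  conn.Open();
                  using (var reader = cmd.ExecuteReader())
                      while (reader.Read())
                          Console.WriteLine(reader.GetString(0));
              }
          }
      }

    The same CTE pattern also answers "which areas sit under X" on its own, which is handy for building the drill-down UI.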


  • How can I mock or test my deferred execution functionality?

    - by cottsak
    I have what could be seen as a bizarre hybrid of IQueryable<T> and IList<T> collections of domain objects passed up my application stack. I'm trying to maintain as much of the 'late querying' or 'lazy loading' as possible. I do this in two ways:

    1. By using a LinqToSql data layer and passing IQueryable<T>s through my repositories and on to my app layer.
    2. Above my app layer, by passing IList<T>s where certain elements in the object/aggregate graph are 'chained' with delegates so as to defer their loading. Sometimes the delegate contents even rely on IQueryable<T> sources, and the DataContext is injected.

    This works for me so far. What is blindingly difficult is proving that this design actually works. I.e. if I defeat the 'lazy' part somewhere and execution happens early, then the whole thing is a waste of time. I'd like to be able to TDD this somehow. I don't know a lot about delegates, or about thread safety as it applies to delegates acting on the same source. I'd like to be able to mock the DataContext and somehow trace both methods of deferring the loading (IQueryable<T>'s SQL and the delegates), so that I can have tests proving that both functions work at the different levels/layers of the app/stack. As it's crucial that the deferring works for the design to be of any value, I'd like to see tests fail when I break the design at a given level (separately from the live implementation). Is this possible?
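
    One way to make early execution fail a test (a sketch of mine, not the poster's code; the Order/appLayer names in the usage comment are illustrative): wrap each sequence in a spy that records whether it was ever enumerated, then assert the flag stays false until the final consumer iterates:

      // Sketch: a pass-through wrapper that trips a flag the moment anyone
      // iterates the underlying sequence. A test can assert the flag is
      // still false after the stack has assembled its result, proving
      // nothing executed early.
      using System;
      using System.Collections;
      using System.Collections.Generic;

      public class EnumerationSpy<T> : IEnumerable<T>
      {
          private readonly IEnumerable<T> _inner;
          public bool WasEnumerated { get; private set; }

          public EnumerationSpy(IEnumerable<T> inner) { _inner = inner; }

          public IEnumerator<T> GetEnumerator()
          {
              WasEnumerated = true;          // trips on first iteration
              return _inner.GetEnumerator();
          }

          IEnumerator IEnumerable.GetEnumerator() { return GetEnumerator(); }
      }

      // In a test (illustrative names):
      // var spy = new EnumerationSpy<Order>(repositoryResult);
      // var viewModel = appLayer.BuildViewModel(spy.AsQueryable());
      // Assert.IsFalse(spy.WasEnumerated);   // nothing executed early
      // viewModel.Render();
      // Assert.IsTrue(spy.WasEnumerated);    // executed exactly when expected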


  • Scalable Full Text Search With Per User Result Ordering

    - by jeremy
    What options exist for creating a scalable full text search whose results need to be sorted on a per-user basis? This is for PHP/MySQL (Symfony/Doctrine as well, if relevant). In our case, we have a database of workouts that have been performed by users. The workouts a user has done before should appear at the top of their results, and the more frequently they've done a workout, the higher it should appear in search matches. If it helps, you can assume we know in advance the number of times a user has done a workout. Possible solutions:

    - Sphinx: use Sphinx to implement full text search, and do all the querying and sorting in MySQL. This seems promising (and there's a Symfony plugin!) but I don't know much about it.
    - Lucene: use Lucene to perform full text search and put the user's completions into the query, as is suggested in this Stack Overflow thread. Alternatively, use Lucene to retrieve the results, then reorder them in PHP. Both variants seem clunky and potentially unscalable, as a user may have completed hundreds of workouts.
    - MySQL: no native full text support for InnoDB, so we'd have to use LIKE or REGEXP, which isn't scalable.


  • Firing through HTTP a Perl script for sending signals to daemons

    - by Eric Fortis
    Hello guys, I'm using apache2 on Ubuntu. I have a Perl script which basically reads the file names in a directory, rewrites a text file, and then sends a signal to a daemon. How can this be done, as securely as possible, through a web page? Actually I can run the code below, but not if I remove the comments. I'm looking for advice considering:

    - Using HTTP requests?
    - How about Apache file permissions on the directory shown in the code?
    - Is htaccess enough to enable user/pass access to the CGI?
    - Should I use a database instead of writing to a file, and run a cron job querying the db, with permission granted to write and send the signal?
    - Granting as few permissions as possible to the webserver.
    - Should I set up a VPN?

      #!/usr/bin/perl -wT
      use strict;
      use CGI;

      #@fileList = </home/user/*>;   # read a directory listing

      my $query = CGI->new();
      print $query->header( "text/html" ),
            $query->p( "FirstFileNameInArray" ),
            #$query->p( $fileList[0] ),   # output the first file in directory
            $query->end_html;


  • Flexible forms and supporting database structure

    - by sunwukung
    I have been tasked with creating an application that allows administrators to alter the content of the user input form (i.e. add arbitrary fields), the contents of which get stored in a database. Think Modx/Wordpress/Expression Engine template variables. The approach I've been looking at is implementing concrete tables where the specification is consistent (i.e. user profiles, user content etc.) plus some generic field data tables (i.e. text, boolean) to store non-specific values. Forms (and model fields) would be generated by first querying the tables and retrieving the relevant columns - although I've yet to think about how I would set up validation. I've taken a look at this problem, and it seems to point to an EAV-type approach - which, from my brief research, looks like it could be a greater burden than the blessings its flexibility would bring. I've read a couple of posts here, however, which suggest this is a dangerous route:

    - How to design a generic database whose layout may change over time?
    - Dynamic Database Schema

    I'd appreciate some advice on this matter if anyone has some to give. regards SWK


  • Database for managing large volumes of (system) metrics

    - by symcbean
    Hi, I'm looking at building a system for managing and reporting stats on web page performance. I'll be collecting a lot more stats than are available in the standard log formats (approx 20 metrics), but compared to most types of database applications the base data structure will be very simple. My problem is that I'll be accumulating a lot of data - in the region of 100,000 records (i.e. sets of metrics) per hour. Of course, resources are very limited! So that it's possible to sensibly interact with the data, I'd need to consolidate each metric into one-minute bins broken down by URL, then for anything more than 1 day old consolidate into 10-minute bins, then at 1 week into hourly bins. At the front end, I want to provide a view (preferably as plots) of the last hour of data, with the facility for users to drill up/down through defined hierarchies of URLs (which do not always map directly to the hierarchy expressed in the path of the URL) and to view different time frames. Rather than coding all this myself and using a relational database, I was wondering if there are tools available which would facilitate both the management of the data and the reporting. I had a look at Mondrian, however I can't see from the documentation whether it's possible to drop the more granular information while maintaining the consolidated views of the data. RRDTool looks promising in terms of managing the data consolidation, but seems rather limited in terms of querying the dataset as a multi-dimensional/relational database. What else should I be looking at?


  • Approach for caching data from data logger

    - by filip-fku
    Greetings, I've been working on a C#.NET app that interacts with a data logger. The user can query and obtain logs for a specified time period, and view plots of the data. Typically a new data log is created every minute and stores a measurement for a few parameters. To get meaningful information out of the logger, a reasonable number of logs need to be acquired - data for at least a few days. The hardware interface is a UART to USB module on the device, which restricts transfers to a maximum of about 30 logs/second. This becomes quite slow when reading in the data acquired over a number of days/weeks. What I would like to do is improve the perceived performance for the user. I realize that with the hardware speed limitation the user will have to wait for the full download cycle at least the first time they acquire a larger set of data. My goal is to cache all data seen by the app, so that it can be obtained faster if ever requested again. The approach I have been considering is to use a light database, like SqlServerCe, that can store the data logs as they are received. I am then hoping to first search the cache prior to querying a device for logs. The cache would be updated with any logs obtained by the request that were not already cached. Finally my question - would you consider this to be a good approach? Are there any better alternatives you can think of? I've tried to search SO and Google for reinforcement of the idea, but I mostly run into discussions of web request/content caching. Thanks for any feedback!
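
    A sketch of the cache-first flow described above (all class and method names are illustrative, not from the post): serve whatever the local store already covers, and pull only the missing tail over the slow UART link:

      // Illustrative sketch of a cache-first log fetch. Assumes logs arrive
      // in timestamp order and the cached range has no interior holes, so
      // only the span after the newest cached record needs the device.
      using System;
      using System.Collections.Generic;
      using System.Linq;

      public class LogRecord
      {
          public DateTime Timestamp { get; set; }
          // ... the few measured parameters per log ...
      }

      public class LogCache
      {
          private readonly SortedDictionary<DateTime, LogRecord> _store =
              new SortedDictionary<DateTime, LogRecord>();

          public IList<LogRecord> GetLogs(
              DateTime from, DateTime to,
              Func<DateTime, DateTime, IEnumerable<LogRecord>> fetchFromDevice)
          {
              var cached = _store.Keys.Where(k => k >= from && k <= to).ToList();
              DateTime fetchFrom = cached.Count > 0
                  ? cached.Max().AddMinutes(1)   // first minute we don't have yet
                  : from;

              if (fetchFrom <= to)
              {
                  foreach (var rec in fetchFromDevice(fetchFrom, to))
                      _store[rec.Timestamp] = rec;   // also persist to SQL CE in real code
              }

              return _store.Where(kv => kv.Key >= from && kv.Key <= to)
                           .Select(kv => kv.Value)
                           .ToList();
          }
      }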


  • encapsulation in python list (want to use " instead of ')

    - by Codehai
    I have a list of users, users["pirates"], stored in the format ['pirate1', 'pirate2']. If I hand the list over to a def and query for it in MongoDB, it returns data based on the first element (e.g. pirate1) only. If I hand over a list in the format ["pirate1", "pirate2"], it returns data based on all the elements in the list. So I think there's something wrong with the encapsulation of the elements in the list. My question: can I change the encapsulation from ' to " without manually replacing every ' on every element in a loop? Short example:

      aList = list()
      # get pirate stuff
      # users["pirates"] is a list returned by a former query,
      # so e.g. users["pirates"][0] may be peter without any quotes
      for pirate in users["pirates"]:
          aList.append(pirate)
      aVar = pirateDef(aList)
      print(aVar)

    The definition:

      def pirateDef(inputList=list()):
          # prepare query
          col = mongoConnect().MYCOL
          # query for pirates Arrrr
          pirates = col.find({"_id": {"$in": inputList}}).sort("_id", 1).limit(50)
          # loop over users
          userList = list()
          for person in pirates:
              # do stuff that has nothing to do with the problem
              # append user to userlist
              userList.append(person)
          return userList

    If the given list has ' encapsulation it returns:

      'pirates': [{'pirate': 'Arrr', '_id': 'blabla'}]

    If encapsulated with " it returns:

      'pirates': [{'_id': 'blabla', 'pirate': 'Arrr'}, {'_id': 'blabla2', 'pirate': 'cheers'}]

    EDIT: I tried figuring out where the problem is, and it has to be in the MongoDB query. The list is handed over to the def correctly, but after querying, pirates consists of only 1 element... Thanks for helping me, Codehai


  • Ultra-grand super acts_as_tree rails query

    - by Bloudermilk
    Right now I'm dealing with an issue regarding an intense acts_as_tree MySQL query via Rails. The model I am querying is Foo. A Foo can belong to any one City, State or Country. My goal is to query Foos based on their location. My locations table is set up like so:

    - I have a table in my database called locations.
    - I use a combination of acts_as_tree and polymorphic associations to store each individual location as either a City, State or Country. (This means that my table consists of the columns id, name, parent_id, type.)

    Let's say, for instance, I want to query Foos in the state "California". Besides Foos that directly belong to "California", I should get all Foos that belong to every City in "California", like Foos in "Los Angeles" and "San Francisco". Not only that, but I should get any Foos that belong to the Country that "California" is in, "United States". I've tried a few things with associations to no avail. I feel like I'm missing some super-helpful Rails-fu here. Any advice?


  • Creating PHP strings using other variables...works manually, can't figure out how automatically

    - by Matt
    Hello, I'm trying to get a variable to be formed automatically using data pulled from a MySQL database. I know the data is being pulled from the database in some form resembling its original form, but that data does not act the same as data that is manually typed and assigned to a string. For example, if a cell in a MySQL table says...

      I said "goodbye" before I left. She also said "goodbye."

    ...and I manually copy/paste it and add the necessary escapes...

      $string1 = " I said \"goodbye\" before I left. She also said \"goodbye.\" ";

    ...that does not equal...

      $string1 = $mysqlResultArray['specificCellWithQuoteShownAbove'];

    Interestingly, if I echo both versions of $string1 and view the output, they appear to be exactly the same. But they do not function the same when put through various functions I've created. The functions only work if I use the manual copy/paste method - which is not what I want, obviously. I'm not sure if it has to do with the line breaks or the escapes, or some combination of the two. While both strings are superficially the same, they are apparently functionally different, and I don't know why. So how can I create $string1 - without manually copy/pasting the contents of the MySQL entry - by querying for the data and assigning it to $string1 in such a way that it's exactly functionally equivalent to the manually pasted string?


  • is it better to query database or grab from file? php & mysql

    - by pfunc
    I am keeping a large number of words in a database that I want to match articles against. I was thinking it would be better to keep these words in an array and grab that array whenever needed, instead of querying the database every time (since the words won't be changing much). Is there much performance difference in doing this? And if I were to do this, how do I write a script that writes the array to a new PHP file? I tried writing the array like so:

      while ($row = mysql_fetch_assoc($query)) {
          $newArray[] = $row;
      }
      $fp = fopen('noWordsArr.php', 'w');
      fwrite($fp, $newArray);
      fclose($fp);

    But all I get in the other file is "Array". So I figured I could write this and then have a cron job hit the file every few days or so in case things have changed. But I guess if there is no performance advantage then it probably won't be necessary, and I can just query the database every time I need to access the words.


  • Core Data Predicates with Subclassed NSManagedObjects

    - by coneybeare
    I have an AUDIO class with two subclasses, SOUND_A and SOUND_B. This is all done correctly and working fine. I have another model, let's call it PLAYLIST_JOIN, which can contain (in the real world) SOUND_As and SOUND_Bs, so we give it a relationship of AUDIO and PLAYLIST. This all works in the app. The problem I am having now is querying the PLAYLIST_JOIN table with an NSPredicate. What I want to do is find an exact PLAYLIST_JOIN item by giving the predicate one of these two key pairs:

      sound_a._sound_a_id = %@ && playlist.playlist_id = %@
      sound_b.sound_b_id = %@ && playlist.playlist_id = %@

    The main problem is that because the table does not store sound_a and sound_b, but audio, I cannot use this syntax. I do not have the option of reorganizing sound_a and sound_b to use the same _id attribute name, so how do I do this? Can I pass a method to the predicate? Something like this:

      [audio getID] = %@ && playlist_id = %@


  • Linq - reuse expression on child property

    - by user175528
    Not sure if what I am trying is possible or not, but I'd like to reuse a LINQ expression on an object's parent property. Given the classes:

      class Parent {
          int Id { get; set; }
          IList<Child> Children { get; set; }
          string Name { get; set; }
      }

      class Child {
          int Id { get; set; }
          Parent Dad { get; set; }
          string Name { get; set; }
      }

    If I then have a helper:

      Expression<Func<Parent, bool>> ParentQuery() {
          Expression<Func<Parent, bool>> q = p => p.Name == "foo";
          return q;
      }

    I then want to use this when querying data out for a child, along the lines of:

      using (var context = new Entities.Context()) {
          var data = context.Child.Where(c => c.Name == "bar" && c.Dad.Where(ParentQuery));
      }

    I know I can do that on child collections:

      using (var context = new Entities.Context()) {
          var data = context.Parent.Where(p => p.Name == "foo" && p.Children.Where(childQuery));
      }

    but I can't see any way to do this on a property that isn't a collection. This is a simplified example: in reality the ParentQuery will be more complex, and I want to avoid having it repeated in multiple places. Rather than just two layers I'll have closer to 5 or 6, but all of them will need to reference the parent query to ensure security.
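
    One workable approach (my sketch, not from the thread; it assumes .NET 4's public ExpressionVisitor and that the Dad/Name properties are public) is to rebuild the parent predicate as a child predicate by substituting c.Dad for the parent parameter, so the provider sees a single inlined expression it can translate:

      // Sketch: lift an Expression<Func<Parent, bool>> onto the child by
      // replacing the parent parameter with the child's Dad property,
      // yielding an Expression<Func<Child, bool>> the provider can use.
      using System;
      using System.Linq.Expressions;

      static class PredicateLifter
      {
          public static Expression<Func<Child, bool>> OnDad(
              Expression<Func<Parent, bool>> parentQuery)
          {
              ParameterExpression c = Expression.Parameter(typeof(Child), "c");

              // Substitute "c.Dad" for the parent expression's parameter.
              Expression body = new ReplaceVisitor(
                  parentQuery.Parameters[0],
                  Expression.Property(c, "Dad")).Visit(parentQuery.Body);

              return Expression.Lambda<Func<Child, bool>>(body, c);
          }

          private class ReplaceVisitor : ExpressionVisitor
          {
              private readonly Expression _from, _to;
              public ReplaceVisitor(Expression from, Expression to)
              { _from = from; _to = to; }
              public override Expression Visit(Expression node)
              { return node == _from ? _to : base.Visit(node); }
          }
      }

      // Usage sketch:
      // var data = context.Child
      //     .Where(PredicateLifter.OnDad(ParentQuery()))
      //     .Where(c => c.Name == "bar");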


  • What is a maintainable way of saving "star rating" in a database?

    - by Montecristo
    I'll use the jQuery plugin for presenting the user with a nice interface. The request is to display 5 stars, for a total score of 10 (2 points per star). So far I've thought about using 7/10 as the format for that value, but what if at some point in the future I receive a request like: "We would like to give users more choice; let's increase the total score to 20 (so that each star contributes a maximum of 4 points)." I'd end up with a table with mixed values in the "star rating" column: some like 7/10, others like 14/20. Is it OK to have this difference in the database and deal with it in the logic layer to keep it consistent? Or is another way preferred, so that querying the table will not produce inconsistent results outside the application? Maybe floating point values could help me: is it better to store the value as a number less than or equal to one? Then in each of the two examples the value stored in the database would be 0.7 - a number, not a varchar - which can also be queried outside the application. What do you think?
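
    Storing the rating normalized to [0, 1] would make old and new rows comparable however the star scale changes: 7/10 and 14/20 both become 0.7. A small conversion sketch (names are illustrative):

      // Normalize a raw score to [0, 1] so the stored value survives future
      // changes to the maximum score: 7/10 -> 0.7 and 14/20 -> 0.7.
      using System;

      public static class Rating
      {
          public static double Normalize(int score, int maxScore)
          {
              if (maxScore <= 0 || score < 0 || score > maxScore)
                  throw new ArgumentOutOfRangeException();
              return (double)score / maxScore;
          }

          // For display, scale back to however many stars the UI draws today.
          public static int ToStars(double normalized, int starCount)
          {
              return (int)Math.Round(normalized * starCount);
          }
      }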


  • CXF code first service, WSDL generation; soap:address changes?

    - by jcalvert
    I have a simple Java interface/implementation I am exposing via CXF. I have a jaxws element in my Spring configuration file like this:

      <jaxws:endpoint id="managementServiceJaxws"
                      implementor="#managementService"
                      address="/jaxws/ManagementService">
      </jaxws:endpoint>

    It generates the WSDL from my annotated interface and exposes the service. Then when I hit http://myhostname/cxf/jaxws/ManagementService?wsdl I get a lovely WSDL. At the bottom, in the wsdl:service element, I'll see:

      <soap:address location="http://myhostname/cxf/jaxws/ManagementService"/>

    However, some time a day or so later, with no application restart, hitting that same URL produces an address pointing at localhost instead. This causes a number of problems, but what I really want is to fix it. Right now there's a particular client of the webservice that sets the endpoint to localhost, because it runs on the same machine. Is it possible the WSDL is getting regenerated, cached, and then exposed as the 'localhost' version? In part I don't know the exact mechanism by which one goes from a ?wsdl request in CXF to the response. It seems almost certain that it's retrieving some cached version, given that it's supposed to determine the address by asking the servlet container (Jetty). For reference, I know a stopgap solution is using the hostname on the client and making sure an alias is in place so that it goes over the loopback. EDIT: For reference, I confirmed that if I bring my application up and first hit it over localhost, then querying for the WSDL via the hostname shows the address as localhost. Conversely, first hitting it over the hostname causes localhost requests to show the hostname. So obviously something is getting cached here.


  • Best way to carry & modify a variable through various instances and functions?

    - by bobsoap
    I'm looking for the "best practice" way to achieve a message / notification system. I'm using an OOP-based approach for the script and would like to do something along the lines of this:

      if (!$something) $messages->add('Something doesn\'t exist!');

    The add() method in the messages class looks somewhat like this:

      class messages {
          public function add($new) {
              $messages = $THIS_IS_WHAT_IM_LOOKING_FOR; // array
              $messages[] = $new;
              $THIS_IS_WHAT_IM_LOOKING_FOR = $messages;
          }
      }

    In the end, there is a method which reads out $messages and returns every message as nicely formatted HTML. So the question is: what type of variable should I be using for $THIS_IS_WHAT_IM_LOOKING_FOR? I don't want this to use the database. Querying the db every time just for some messages that occur at runtime and disappear after 5 seconds seems like overkill. Using global constants for this is apparently worst practice, since constants are not meant to be variables that change over time; I don't even know if it would work. I don't want to always pass the existing $messages array into the method and return it every time I want to add a new message. I even tried using a session var for this, but that is obviously not suited to this purpose at all (it will always be one pageload too late). Any suggestions? Thanks!


  • Using Java to create a logistics model - arrays and properties

    - by Oliver Burdekin
    I'm currently trying to create a Java model that will solve a problem we have. On a voluntary expedition, each week we have some people leaving and some new people arriving. Accommodation is in tents. The tents sleep different numbers of people, and certain rules apply: males and females cannot be mixed, and volunteers can be one of four types - school children, research assistants, scientific staff, school teachers - so types of volunteer and sexes cannot be mixed. Each week the manager spends hours trying to work this out, so I've offered to make this model to keep my coding skills up. At present I'm working with arrays. Each tent is a 2D array [4][x], where x is the number of people it sleeps (each person sleeping there has 4 attributes). Each person is a 1D array with 4 attributes [4]. The idea is to check where people can go, cause the minimum movement for people staying on, and solve this logistic problem. Does anyone have a better suggestion as to how to solve this? At present I'm finding it necessary to write a lot of code setting up and querying arrays. Any help is appreciated.


  • Get the first and last posts in a thread

    - by Grampa
    I am trying to code a forum website and I want to display a list of threads. Each thread should be accompanied by info about the first post (the "head" of the thread) as well as the last. My current database structure is the following:

    threads table:
      id - int, PK, not NULL, auto-increment
      name - varchar(255)

    posts table:
      id - int, PK, not NULL, auto-increment
      thread_id - FK for threads

    The tables have other fields as well, but they are not relevant to the query. I am interested in querying threads and somehow JOINing with posts so that I obtain both the first and last post for each thread in a single query (with no subqueries). So far I am able to do it using multiple queries, and I have defined the first post as:

      SELECT * FROM threads t
      LEFT JOIN posts p ON t.id = p.thread_id
      ORDER BY p.id
      LIMIT 0, 1

    The last post is pretty much the same, except for ORDER BY p.id DESC. Now, I could select multiple threads with their first or last posts by doing:

      SELECT * FROM threads t
      LEFT JOIN posts p ON t.id = p.thread_id
      GROUP BY t.id
      ORDER BY p.id

    But of course I can't get both at once, since I would need to sort both ASC and DESC at the same time. What is the solution here? Is it even possible to use a single query? Is there any way I could change the structure of my tables to facilitate this? If this is not doable, what tips could you give me to improve query performance in this particular situation?


  • 32-bit JVM on 64-bit Windows crashes on launch with -Xmx1300m and plenty of free memory

    - by Konrad Garus
    I'm struggling with Java heap space settings. The default Java on Windows is the 32-bit client regardless of OS version (that's what Oracle recommends to all users). It appears to set the max heap size to 256 MB by default, and that is too little for me. I use a custom launcher to start the application. I would like it to use more memory on computers with plenty of RAM, and default to -Xmx512m on those with less. As far as I'm aware, the only way is the static -Xmx setting (which has to be set at launch). I have a user with 8 GB RAM, 64-bit Windows and 32-bit Java 7. The maximum memory visible to the JVM is 4G (as returned by querying OperatingSystemMXBean). I understand why; no issue there. For some reason my application is unable to start for this user with -Xmx1300m, even though he has 2.3G free memory. He closed some applications (leaving 5G free memory), and still it would not launch. The error reported to me was:

      error occured during init of vm
      could not reserve enough space for object heap

    What's going on? Could it be that the 32-bit JVM is only able to address the "first" 4G of memory, and has to have a contiguous 1300M block available within those first 4 gigabytes? How can I solve this problem, other than by asking everyone to install 64-bit Java (which is unlikely to be acceptable)?


  • RIA Service/oData ... "Requests that attempt to access a single element using key values from a result set are not supported"

    - by user327911
    I've recently started working up a sample project to play with an OData feed coming from a RIA service. I am able to view the feed and the metadata via any web browser; however, if I try to perform certain query operations on the feed I receive "unsupported" exceptions.

    Sample OData feed (Atom excerpt):

      Feed:  http://localhost:50880/Services/Rebirth-Web-Services-ProductService.svc/OData/ProductSet/
             updated 2010-04-28T14:02:10Z
      Entry: http://localhost:50880/Services/Rebirth-Web-Services-ProductService.svc/OData/ProductSet(guid'b0a2b170-c6df-441f-ae2a-74dd19901128')
             updated 2010-04-28T14:02:10Z
             Id: b0a2b170-c6df-441f-ae2a-74dd19901128, Name: Product 0, ProductType: Type 1, Status: Active

    Sample web.config entry:

    Sample service:

      [EnableClientAccess()]
      public class ProductService : DomainService
      {
          [Query(IsDefault = true)]
          public IQueryable<Product> GetProducts()
          {
              IList<Product> products = new List<Product>();
              for (int i = 0; i < 90; i++)
              {
                  Product product = new Product
                  {
                      Id = Guid.NewGuid(),
                      Name = "Product " + i.ToString(),
                      ProductType = i < 30 ? "Type 1" : ((i > 30 && i < 60) ? "Type 2" : "Type 3"),
                      Status = i % 2 == 0 ? "Active" : "NotActive"
                  };
                  products.Add(product);
              }
              return products.AsQueryable();
          }
      }

    If I provide the URL "http://localhost:50880/Services/Rebirth-Web-Services-ProductService.svc/OData/ProductSet(guid'b0a2b170-c6df-441f-ae2a-74dd19901128')" to my web browser, I receive the following XML:

      Requests that attempt to access a single element using key values from a result set are not supported.

    I'm new to RIA and OData. Could this be something as simple as my web browsers not supporting this type of querying on the result set, or something else? Thanks ahead! Corey


  • Speed/expense of SQLite query vs. List.contains() for "in-set" icon on list rows

    - by kpdvx
    An application I'm developing requires that the app maintain a local list of things - let's say books - in a local "library." Users can access their local library of books and search for books using a remote web service. The app is aware of other users through this web service, and users can browse other users' lists of books in their libraries. Each book is identified by a unique bookId (represented as an int). When viewing books returned through a search result, or when viewing another user's book library, the individual list row cells need to visually represent whether the book is in the user's local library. A user can have at most 5,000 books in the library, stored in SQLite on the device (and synchronized with the remote web service). My question is: to determine if the book shown in the list row is in the user's library, would it be better to ask SQLite directly (via SELECT COUNT(*)...) or to maintain, in memory, a List or int[] array containing the unique bookIds? So, on each row display, do I query SQLite or check if the List or int[] array contains the unique bookId? Because the user can have at most 5,000 books, and each bookId occupies 4 bytes, this would use at most ~20 kB. In thinking about this, and in typing this out, it seems obvious that it would be far better for performance to maintain a list or int[] array of in-library bookIds rather than querying SQLite (the only caveat to maintaining an int[] array is that if books are added or removed I'll need to grow or shrink the array by hand, so with this option I'll most likely use an ArrayList or Vector, though I'm not sure of the additional memory overhead of using Integer objects as opposed to primitives). Opinions, thoughts, suggestions?


  • lapply slower than for-loop when used for a biomaRt query. Is that expected?

    - by ptocquin
    I would like to query a database using the biomaRt package. I have loci and want to retrieve some related information, let's say descriptions. I first tried to use lapply but was surprised by the time needed to perform the task. I then tried a more basic for-loop and got a faster result. Is that expected, or is something wrong with my code or with my understanding of apply? I read other posts dealing with *apply vs for-loop performance (here, for example) and I was aware that improved performance should not be expected, but I don't understand why performance here is actually lower. Here is a reproducible example.

    1) Loading the library and selecting the database:

      library("biomaRt")
      athaliana <- useMart("plants_mart_14")
      athaliana <- useDataset("athaliana_eg_gene", mart = athaliana)

    2) Querying the database:

      loci <- c("at1g01300", "at1g01800", "at1g01900", "at1g02335",
                "at1g02790", "at1g03220", "at1g03230", "at1g04040",
                "at1g04110", "at1g05240")

    I create a function for use in lapply:

      foo <- function(loci) {
          getBM("description", "tair_locus", loci, athaliana)
      }

    When I use this function on the first element:

      > system.time(foo(cwp_loci[1]))
      utilisateur     système      écoulé
            0.020       0.004       1.599

    When I use lapply to retrieve the data for all values:

      > system.time(lapply(loci, foo))
      utilisateur     système      écoulé
            0.220       0.000      16.376

    I then created a new function, adding a for-loop:

      foo2 <- function(loci) {
          for (i in loci) {
              getBM("description", "tair_locus", loci[i], athaliana)
          }
      }

    Here is the result:

      > system.time(foo2(loci))
      utilisateur     système      écoulé
            0.204       0.004      10.919

    Of course, this will be applied to a big list of loci, so the best performing option is needed. I thank you for your assistance.

    EDIT: Following the recommendation of @MartinMorgan, simply passing the vector loci to getBM greatly improves the query efficiency. Simpler is better.

      > system.time(lapply(loci, foo))
      utilisateur     système      écoulé
            0.236       0.024     110.512
      > system.time(foo2(loci))
      utilisateur     système      écoulé
            0.208       0.040     116.099
      > system.time(foo(loci))
      utilisateur     système      écoulé
            0.028       0.000       6.193

