Search Results

Search found 14799 results on 592 pages for 'instance eval'.


  • Simulate stochastic bipartite network based on trait values of species - in R

    - by Scott Chamberlain
    I would like to create bipartite networks in R. For example, suppose you have a data.frame of two types of species (which can only interact across species, not within species), and each species has a trait value (e.g., mouth size in the predator determines which prey species it can eat). How do we simulate a network based on the traits of the species (that is, two species can only interact if their trait values overlap, for instance)?

    UPDATE: Here is a minimal example of what I am trying to do: 1) create phylogenetic trees; 2) simulate traits on the phylogenies; 3) create networks based on species trait values.

        # packages
        install.packages(c("ape", "phytools"))
        library(ape); library(phytools)

        # Make phylogenetic trees
        tree_predator <- rcoal(10)
        tree_prey <- rcoal(10)

        # Simulate traits on each tree
        trait_predator <- fastBM(tree_predator)
        trait_prey <- fastBM(tree_prey)

        # Create network of predator and prey
        ## This is the part I can't do yet. I want to create bipartite networks, where
        ## predator and prey interact based on certain criteria. For example, predator
        ## species A and prey species B only interact if their body size ratio is
        ## greater than X.
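
    A minimal sketch of the missing step, assuming a simple threshold rule (the cutoff X and the bipartite package are additions for illustration, not from the post): fastBM() returns a named numeric vector per tree, so outer() can apply the pairwise rule to every predator/prey combination at once.

        # Hypothetical rule: predator i eats prey j if their trait ratio exceeds X
        X <- 1.5  # assumed threshold; tune to your system

        adj <- outer(trait_predator, trait_prey,
                     FUN = function(pred, prey) as.numeric(pred / prey > X))
        rownames(adj) <- names(trait_predator)  # tip labels from the predator tree
        colnames(adj) <- names(trait_prey)      # tip labels from the prey tree

        # adj is now a 10 x 10 binary bipartite web; to visualize it:
        # install.packages("bipartite"); library(bipartite); plotweb(adj)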

    Read the article

  • @dynamic property needs setter with multiple behaviors

    - by ambertch
    I have a class that contains multiple user objects, and as such has an array of them as an instance variable:

        NSMutableArray *users;

    The tricky part is setting it. I am deserializing these objects from a server via Objective Resource, and for backend reasons users can only be returned as a long string of UIDs - what I have locally is a separate dictionary of users keyed by UID. Given the string uidString of comma-separated UIDs, I override the default setter and populate the actual user objects:

        @dynamic users;

        - (void)setUsers:(id)uidString {
            users = [NSMutableArray arrayWithArray:
                     [[User allUsersDictionary] objectsForKeys:
                      [(NSString *)uidString componentsSeparatedByString:@","]]];
        }

    The problem is this: I now serialize these to the database using SQLitePO, which stores them as the array of user objects, not the original string. So when I retrieve it from the database, the setter mistakenly treats this array of user objects as a string! I actually want to adjust the setter's behavior when it gets this object from the DB vs. over the network. I can't just make the getter serialize back into a string without tearing up a lot of code that references this array of user objects, and I tried to detect in the setter whether I have a string or an array coming in:

        if ([uidString respondsToSelector:@selector(addObject)]) {
            // Already an array, so don't do anything - just assign
            users = uidString

    but no success... so I'm kind of stuck - any suggestions? Thanks in advance!
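
    One likely reason the quoted test never matches: the mutating selector is addObject: (with a trailing colon), so @selector(addObject) names a selector NSMutableArray does not implement. A class check is more direct; a sketch under that assumption, reusing the call from the post:

        - (void)setUsers:(id)value {
            if ([value isKindOfClass:[NSArray class]]) {
                // From the local DB: already an array of User objects
                users = [NSMutableArray arrayWithArray:value];
            } else if ([value isKindOfClass:[NSString class]]) {
                // From the network: a comma-separated UID string
                users = [NSMutableArray arrayWithArray:
                         [[User allUsersDictionary] objectsForKeys:
                          [(NSString *)value componentsSeparatedByString:@","]]];
            }
        }

    (Note: objectsForKeys: mirrors the post's own call; Foundation's stock NSDictionary method is objectsForKeys:notFoundMarker:.)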

    Read the article

  • how to fetch more than 1000 entities NON keybased?

    - by user291071
    If I should be approaching this problem through a different method, please suggest so. I am creating an item-based collaborative filter. I populate the db with the LinkRating2 class, and for each link there are more than 1000 users whose ratings I need to fetch to perform calculations, which I then use to create another table. So I need to fetch more than 1000 entities for a given link. For instance, let's say over 1000 users rated 'link1' - there will be over 1000 instances of this class with that link property that I need to fetch. How would I complete this example?

        class LinkRating2(db.Model):
            user = db.StringProperty()
            link = db.StringProperty()
            rating2 = db.FloatProperty()

        query = LinkRating2.all()
        link1 = 'link string name'
        a = query.filter('link = ', link1)
        aa = a.fetch(1000)  # how would I get more than 1000 for a given link1 as shown?

    The key-based example from another post is below; it gets past 1000, but I need a method that works for a filtered subset, not the whole kind by key:

        class MyModel(db.Expando):
            @classmethod
            def count_all(cls):
                """Count *all* of the rows (without maxing out at 1000)."""
                count = 0
                query = cls.all().order('__key__')
                while count % 1000 == 0:
                    current_count = query.count()
                    if current_count == 0:
                        break
                    count += current_count
                    if current_count == 1000:
                        last_key = query.fetch(1, 999)[0].key()
                        query = query.filter('__key__ > ', last_key)
                return count
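
    The same key-paging trick works with an equality filter in place, since __key__ ordering is always available; a sketch using the model from the post (the batch size and function name are illustrative):

        def fetch_all_ratings(link_name):
            """Page through every LinkRating2 for one link, 1000 at a time."""
            results = []
            query = (LinkRating2.all()
                     .filter('link = ', link_name)
                     .order('__key__'))
            batch = query.fetch(1000)
            while batch:
                results.extend(batch)
                last_key = batch[-1].key()
                # restart the query past the last key seen
                query = (LinkRating2.all()
                         .filter('link = ', link_name)
                         .filter('__key__ > ', last_key)
                         .order('__key__'))
                batch = query.fetch(1000)
            return results

    (On SDK 1.3.1 and later, query cursors accomplish the same with less bookkeeping.)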

    Read the article

  • Proper structure for dependency injection (using Guice)

    - by David B.
    I would like some suggestions and feedback on the best way to structure dependency injection for a system with the structure described below. I'm using Guice and thus would prefer solutions centered around its annotation-based declarations, not XML-heavy Spring-style configuration.

    Consider a set of similar objects, Ball, Box, and Tube, each dependent on a Logger, supplied via the constructor. (This might not be important, but all four classes happen to be singletons - of the application, not Gang-of-Four, variety.) A ToyChest class is responsible for creating and managing the three shape objects. ToyChest itself is not dependent on Logger, aside from creating the shape objects which are. The ToyChest class is instantiated as an application singleton in a Main class.

    I'm confused about the best way to construct the shapes in ToyChest. I either (1) need access to a Guice Injector instance already attached to a Module binding Logger to an implementation or (2) need to create a new Injector attached to the right Module. (1) is accomplished by adding an "@Inject Injector injector" field to ToyChest, but this feels weird because ToyChest doesn't actually have any direct dependencies - only those of the children it instantiates. For (2), I'm not sure how to pass in the appropriate Module.

    Am I on the right track? Is there a better way to structure this? The answers to this question mention passing in a Provider instead of using the Injector directly, but I'm not sure how that is supposed to work.

    EDIT: Perhaps a simpler question is: when using Guice, where is the proper place to construct the shape objects? ToyChest will do some configuration with them, but I suppose they could be constructed elsewhere. ToyChest (as the container managing them), and not Main, just seems to me like the appropriate place to construct them.
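
    For what the Provider version might look like - a sketch only, with Logger left as a concrete no-arg class so Guice's just-in-time bindings can resolve everything without an explicit Module:

        import com.google.inject.Guice;
        import com.google.inject.Inject;
        import com.google.inject.Injector;
        import com.google.inject.Provider;
        import com.google.inject.Singleton;

        class Logger {}

        @Singleton class Ball { @Inject Ball(Logger log) {} }
        @Singleton class Box  { @Inject Box(Logger log)  {} }
        @Singleton class Tube { @Inject Tube(Logger log) {} }

        @Singleton
        class ToyChest {
            private final Ball ball;
            private final Box box;
            private final Tube tube;

            // Providers let ToyChest decide *when* to construct the shapes
            // without ever seeing the Injector or the Logger binding.
            @Inject
            ToyChest(Provider<Ball> balls, Provider<Box> boxes, Provider<Tube> tubes) {
                ball = balls.get();
                box = boxes.get();
                tube = tubes.get();
            }
        }

        public class Main {
            public static void main(String[] args) {
                Injector injector = Guice.createInjector();
                ToyChest chest = injector.getInstance(ToyChest.class);
            }
        }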

    Read the article

  • I need help solving a rather weird error in a WCF service.

    - by Moulde
    Hi. I have a solution that contains three projects: a main project with my MVC app, a Silverlight application, and a (Silverlight-enabled) WCF service project. In my Silverlight project I have made a Service Reference to my WCF service, and I pretty much got that working.

    In my WCF service I have a method that returns a Book object, which has some ordinary fields like title, date, etc. In the Book class I have an ICollection field that contains a list of events. The Book class is generated using Entity Framework 4.0, and lazy loading is enabled.

    If my getBook(int id) method returns a book with the events field not initialized, it works like a charm. But if I initialize the field, I get this error:

        The server did not provide a meaningful reply; this might be caused by a contract mismatch,
        a premature session shutdown or an internal server error.

    I have a few ideas why that is happening, and while writing this I just got another one: the WCF service somehow threw away the reference to the event class. That would be very weird, since I have a reference between my main MVC app (with the models) and my WCF service. Since I have enabled lazy loading in EF 4.0, I suspect that it may be what's generating the error, but I'm not sure why that would be, because I'm not in any way accessing that field. I could understand not being able to access the events field after I receive the object in my Silverlight application, since the connection between the book object and the Entity Framework is broken by then. Did I mention that lazy loading is enabled on my EF instance? And there is no inner exception in the thrown exception.

    Thanks in advance. Malte Baden Hansen
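
    Lazy loading is a frequent cause of exactly this opaque WCF fault, because deferred loading gets triggered (or defeated) during serialization rather than in your own code. A sketch of the usual workaround, with the context and property names assumed from the post (requires using System.Linq):

        public Book GetBook(int id)
        {
            using (var db = new BookstoreEntities())  // hypothetical EF 4.0 context name
            {
                // Deferred loading doesn't survive WCF serialization; turn it off
                // and eager-load what the client actually needs instead.
                db.ContextOptions.LazyLoadingEnabled = false;
                db.ContextOptions.ProxyCreationEnabled = false; // relevant for POCO proxies

                return db.Books.Include("Events").Single(b => b.Id == id);
            }
        }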

    Read the article

  • c# wpf command pattern

    - by evan
    I have a WPF GUI which displays a list of information in a separate window, and in a separate thread from the main application. As the user performs actions in the main window, the side window is updated. (For example, if you clicked page down in the main window, a listbox in the side window would page down.)

    Right now the architecture for this application feels very messy, and I'm sure there is a cleaner way to do it. It looks like this: the main window contains a singleton SideWindowControl which communicates with an instance of the SideWindowDisplay using events. So, for example, the page-down button works like this:

    1) the event handler of the button on the main window calls SideWindowControl.PageDown();
    2) in the PageDown() function an event is created and raised;
    3) finally the GUI, ShowSideWindowDisplay, which subscribes to the SideWindowControl.Actions event, handles the event and actually scrolls the listbox down - and because it is on a different thread, it has to do that by running the command via Dispatcher.Invoke().

    This just seems like a very messy way to do this, and there must be a clearer way (the only part that can't change is that the main window and the side window must be on different threads). Perhaps using WPF commands? I'd really appreciate any suggestions! Thanks
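
    One way to flatten the chain is a small shared notifier that both windows receive, so the main window only expresses intent and the side window owns its own threading concerns. All the names below are illustrative, not from the post:

        using System;
        using System.Windows.Threading;

        // Shared between the two windows; the only coupling point.
        public class SideWindowNotifier
        {
            public event Action PageDownRequested;

            public void RequestPageDown()
            {
                var handler = PageDownRequested;
                if (handler != null) handler();
            }
        }

        // In the side window's code-behind, on its own dispatcher thread:
        //   notifier.PageDownRequested += () =>
        //       Dispatcher.BeginInvoke(new Action(ScrollListBoxPageDown));
        //
        // The main window's button handler then becomes one line:
        //   notifier.RequestPageDown();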

    Read the article

  • which is better: a lying copy constructor or a non-standard one?

    - by PaulH
    I have a C++ class that contains a non-copyable handle. The class, however, must have a copy constructor, so I've implemented one that transfers ownership of the handle to the new object (as below):

        class Foo
        {
        public:
            Foo() : h_( INVALID_HANDLE_VALUE ) { };

            // transfer the handle to the new instance
            Foo( const Foo& other ) : h_( other.Detach() ) { };

            ~Foo()
            {
                if( INVALID_HANDLE_VALUE != h_ )
                    CloseHandle( h_ );
            };

            // other interesting functions...

        private:
            /// disallow assignment
            const Foo& operator=( const Foo& );

            HANDLE Detach() const
            {
                HANDLE h = h_;
                h_ = INVALID_HANDLE_VALUE;
                return h;
            };

            /// a non-copyable handle
            mutable HANDLE h_;
        }; // class Foo

    My problem is that the standard copy constructor takes a const reference, and I'm modifying that reference. So, I'd like to know which is better (and why): a non-standard copy constructor, Foo( Foo& other ), or a copy constructor that 'lies', Foo( const Foo& other )?

    Thanks, PaulH
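
    For contrast, here is how the same transfer looks if a C++11 compiler is available - a sketch, not the poster's code. A move constructor states the ownership transfer in the type system, so neither constructor has to lie about constness (the const-lying design is essentially what std::auto_ptr was criticized for and std::unique_ptr later fixed):

        #include <windows.h>

        class Foo
        {
        public:
            Foo() : h_(INVALID_HANDLE_VALUE) {}

            // Ownership transfer is explicit at the call site: Foo b(std::move(a));
            Foo(Foo&& other) noexcept : h_(other.h_)
            {
                other.h_ = INVALID_HANDLE_VALUE;  // source gives up the handle
            }

            Foo(const Foo&) = delete;             // no silent copies
            Foo& operator=(const Foo&) = delete;  // no assignment either

            ~Foo()
            {
                if (h_ != INVALID_HANDLE_VALUE)
                    CloseHandle(h_);
            }

        private:
            HANDLE h_;  // no longer needs to be mutable
        };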

    Read the article

  • "java.lang.OutOfMemoryError: Java heap space" in image and array storage

    - by totalconscience
    I am currently working on an image-processing demonstration in Java (an applet). I am running into the problem that my arrays are too large, and I am getting the "java.lang.OutOfMemoryError: Java heap space" error.

    The algorithm I run creates an N x D float array, where N is the number of pixels in the image and D is the coordinates of each pixel plus the colorspace components of each pixel (usually 1 for grayscale or 3 for RGB). For each iteration, the algorithm creates one of these N x D float arrays and stores it for later use in a vector, so that the user of the applet may look at the individual steps. My client wants the program to be able to load a 500x500 RGB image and run that as the upper bound. There are about 12 to 20 iterations per run, so that means I need to be able to store a 12 x 500 x 500 x 5 float array in some fashion. Is there a way to process all of this data and, if possible, how?

    Example of the issue: I am loading a 512 by 512 grayscale image, and even before the first iteration completes I run out of heap space. The line it points me to is:

        Y.add(new float[N][D])

    where Y is a Vector and N and D are as described above. This is the second instance of the code using that line.
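
    A quick back-of-envelope (my arithmetic, not the poster's) shows why the default applet heap gives out, and sketches one mitigation - storing each step as a single flat array rather than N small float[D] rows:

        // 500 x 500 RGB upper bound:
        //   N = 250,000 pixels, D = 2 coords + 3 channels = 5 floats
        //   one step : 250,000 * 5 * 4 bytes            ~  5 MB of raw data
        //   20 steps : ~ 100 MB -- before counting the object header and
        //   padding that *each* of the 250,000 float[5] rows adds per step.
        //
        // A flat layout removes the per-row overhead and improves locality:
        int N = 500 * 500, D = 5;
        float[] step = new float[N * D];  // one object instead of N + 1
        // element (i, d) lives at: step[i * D + d]
        //
        // If every step must stay browsable, consider also quantizing stored
        // steps to byte[] (4x smaller), or raising the plugin heap with the
        // java_arguments applet parameter (e.g. -Xmx256m, Java 6u10+).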

    Read the article

  • PHP Header redirection problem within page called by cURL

    - by Das123
    I have a site that uses cURL to access some pages, stores the returned results in variables, and then uses these variables within its own page. The script works well except where the target cURL page has a header('Location: ...') command inside it. It seems to just ignore this header command. The cURL call is as follows...

        // Load result page into variable so portions can be allocated to correct variables
        $ch = curl_init();
        curl_setopt($ch, CURLOPT_URL, $url);            // URL to post to
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);    // return into a variable
        curl_setopt($ch, CURLOPT_HEADER, 0);
        curl_setopt($ch, CURLOPT_POST, true);
        curl_setopt($ch, CURLOPT_POSTFIELDS, $postfields);
        $loaded_result = curl_exec($ch);                // run!
        curl_close($ch);

    I've tried changing CURLOPT_HEADER to 1, but it doesn't do anything. So how can I allow script redirection within the target URLs when using cURL to grab the results? By the way, the pages work fine if accessed other than via cURL, but iframes are not an option in this instance.
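
    cURL does not act on Location: headers unless asked to; the standard option for that, missing from the snippet above, is CURLOPT_FOLLOWLOCATION:

        <?php
        curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true);  // chase Location: redirects
        curl_setopt($ch, CURLOPT_MAXREDIRS, 5);          // sane upper bound

        // Caveat: on PHP builds with safe_mode or open_basedir active,
        // FOLLOWLOCATION is disabled; there you must set CURLOPT_HEADER to 1,
        // parse the Location: header out of the response yourself, and
        // issue a second request to that URL.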

    Read the article

  • Reordering fields in Django model

    - by Alex Lebedev
    I want to add a few fields to every model in my Django application. This time it's created_at, updated_at, and notes. Duplicating the code for each of 20+ models seems dumb, so I decided to use an abstract base class that would add these fields. The problem is that fields inherited from an abstract base class come first in the field list in admin, and declaring a field order for every ModelAdmin class is not an option - that's even more duplicate code than manual field declaration.

    In my final solution, I modified the model constructor to reorder fields in _meta before creating a new instance:

        class MyModel(models.Model):
            # Service fields
            notes = my_fields.NotesField()
            created_at = models.DateTimeField(auto_now_add=True)
            updated_at = models.DateTimeField(auto_now=True)

            class Meta:
                abstract = True

            last_fields = ("notes", "created_at", "updated_at")

            def __init__(self, *args, **kwargs):
                new_order = [f.name for f in self._meta.fields]
                for field in self.last_fields:
                    new_order.remove(field)
                    new_order.append(field)
                self._meta._field_name_cache.sort(key=lambda x: new_order.index(x.name))
                super(MyModel, self).__init__(*args, **kwargs)

        class ModelA(MyModel):
            field1 = models.CharField()
            field2 = models.CharField()
            # etc.

    It works as intended, but I'm wondering: is there a better way to achieve my goal?
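
    An alternative that leaves Model._meta alone - a sketch for the Django 1.x era this post targets, where a form's base_fields is a SortedDict exposing keyOrder: reorder the generated admin form once, in a shared ModelAdmin base class.

        from django.contrib import admin

        class ServiceFieldsLastAdmin(admin.ModelAdmin):
            """Push the shared service fields to the end for every model."""
            service_fields = ("notes", "created_at", "updated_at")

            def get_form(self, request, obj=None, **kwargs):
                form = super(ServiceFieldsLastAdmin, self).get_form(request, obj, **kwargs)
                fields = form.base_fields
                order = [name for name in fields if name not in self.service_fields]
                order += [name for name in self.service_fields if name in fields]
                fields.keyOrder = order   # SortedDict attribute in Django <= 1.6
                return form

        # Each of the 20+ models then registers against this one base:
        # admin.site.register(ModelA, ServiceFieldsLastAdmin)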

    Read the article

  • TCP/IP RST being sent differently in different browsers.

    - by Brian
    On Mac OS X (10.6), if I start a YouTube video download and pull the Ethernet cable for 5 or so seconds, then plug it back in, I get varying results depending on the browser. With Opera and Chrome, after I plug the cable back in, the video continues to load. But with Safari and Firefox, it never does.

    Using Wireshark to look at the traffic, I found that Opera and Chrome simply ACK the first packet from YouTube after the cable has been plugged back in, but Safari and Firefox set the RST flag (0x4) in the TCP header, and no more traffic follows. If I put a hub in between the machine and the internet connection, the problem goes away, and all four browsers continue loading the video when the cable is plugged back into the hub. Again, looking at the Wireshark logs, it's evident that the machine doesn't see the multicast connection close - there is simply a delay in the packets flowing through.

    So it seems that if Safari and Firefox see a multicast connection close, and then later see data on that same connection, they will send a RST. My question is why? What is the correct course of action, and why are two of the four browsers doing it one way while the other two do it another way? Is there somewhere in the code where I can see this happening in Firefox, for instance? Thank you very much.

    Read the article

  • google app engine atomic section???

    - by bokertov
    Hi. Say you retrieve a set of records from the datastore with something like: select * from MyClass where reserved='false'. How do I ensure that another user doesn't set reserved while it is still false? I've looked at the Transaction documentation and was shocked by Google's solution, which is to catch the exception and retry in a loop. Is there a solution that I'm missing? It's hard to believe that there's no way to have an atomic operation in this environment.

    (BTW, I could use 'synchronized' inside the servlet, but I think that's not valid, as there's no way to ensure there's only one instance of the servlet object, is there? The same applies to a static-variable solution.)

    Any idea on how to solve this? Here's Google's solution, from http://code.google.com/appengine/docs/java/datastore/transactions.html#Entity_Groups :

        Key k = KeyFactory.createKey("Employee", "k12345");
        Employee e = pm.getObjectById(Employee.class, k);
        e.counter += 1;
        pm.makePersistent(e);

    This requires a transaction because the value may be updated by another user after this code fetches the object, but before it saves the modified object. Without a transaction, the user's request will use the value of counter prior to the other user's update, and the save will overwrite the new value. With a transaction, the application is told about the other user's update. If the entity is updated during the transaction, then the transaction fails with an exception. The application can repeat the transaction to use the new data. THANKS!
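
    For what it's worth, the retry loop *is* the atomic primitive here: the datastore uses optimistic concurrency, so the commit itself is atomic and a collision just means "run again". A sketch of reserving a record safely with JDO transactions (the entity accessors and retry count are illustrative, not from the post):

        import com.google.appengine.api.datastore.Key;
        import javax.jdo.JDOCanRetryException;
        import javax.jdo.PersistenceManager;

        public boolean tryReserve(PersistenceManager pm, Key key) {
            final int NUM_RETRIES = 3;
            for (int i = 0; i < NUM_RETRIES; i++) {
                pm.currentTransaction().begin();
                try {
                    MyClass record = pm.getObjectById(MyClass.class, key);
                    if (record.isReserved()) {          // someone else won the race
                        pm.currentTransaction().rollback();
                        return false;
                    }
                    record.setReserved(true);
                    pm.makePersistent(record);
                    pm.currentTransaction().commit();   // atomic: fails on collision
                    return true;
                } catch (JDOCanRetryException e) {
                    // another user committed first; loop and try again
                } finally {
                    if (pm.currentTransaction().isActive()) {
                        pm.currentTransaction().rollback();
                    }
                }
            }
            return false;  // contended too many times
        }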

    Read the article

  • Utilizing a Queue

    - by Nathan
    I'm trying to store records of transactions, all together and by category, for the last 1, 7, 30, or 360 days. I've tried a couple of things, but they've failed brutally. I had an idea of using a queue with 360 values, one for each day, but I don't know enough about queues to figure out how that would work.

    Input will be an instance of this class:

        class Transaction
        {
            public string TotalEarned { get; set; }
            public string TotalHST { get; set; }
            public string TotalCost { get; set; }
            public string Category { get; set; }
        }

    New transactions can occur at any time during the day, and there could be as many as 15 transactions in a day. My program is using a plain text file as external storage, but how I load it depends on how I decide to store this data. What would be the best way to do this?
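
    A sketch of the simplest alternative to a fixed-size queue (a suggestion, not from the post): stamp each transaction with its date and filter by age on demand. At roughly 15 transactions a day, even a full year is only a few thousand objects, so LINQ over one list is plenty fast.

        using System;
        using System.Collections.Generic;
        using System.Linq;

        class Transaction
        {
            public DateTime When { get; set; }        // added: needed to window by days
            public decimal TotalEarned { get; set; }  // decimal suits money better than string
            public decimal TotalHST { get; set; }
            public decimal TotalCost { get; set; }
            public string Category { get; set; }
        }

        class Ledger
        {
            private readonly List<Transaction> all = new List<Transaction>();

            public void Add(Transaction t) { all.Add(t); }

            // Total earned over the trailing window, optionally per category.
            public decimal EarnedInLastDays(int days, string category = null)
            {
                DateTime cutoff = DateTime.Today.AddDays(-days);
                return all.Where(t => t.When >= cutoff
                                   && (category == null || t.Category == category))
                          .Sum(t => t.TotalEarned);
            }
        }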

    Read the article

  • Robotlegs: Warning: Injector already has a rule for type

    - by MikeW
    I have a bunch of warning messages like this appearing when using Robotlegs/Signals. Every time this command class executes, which is every 2-3 seconds, this message displays:

        Warning: Injector already has a rule for type "mx.messaging.messages::IMessage", named "".
        If you have overwritten this mapping intentionally you can use "injector.unmap()" prior to
        your replacement mapping in order to avoid seeing this message.

    The command functions fine otherwise, but I think I'm doing something wrong anyhow.

        public class MessageReceivedCommand extends SignalCommand
        {
            [Inject]
            public var message:IMessage;
            // ...etc... do something with message...
        }

    The application context doesn't map IMessage to this command, as I only see an option to mapSignalClass; besides, the payload is received fine. I wonder if anyone knows how I might either fix or suppress this message. I've tried calling injector.unmap(IMessage, "") as the warning suggests, but I receive an error - no mapping found for ::IMessage named "". Thanks

    Edit: A bit more info about the error. Here is the signal that I dispatch to the command:

        public class GameMessageSignal extends Signal
        {
            public function GameMessageSignal()
            {
                super(IMessage);
            }
        }

    which is dispatched from an IPushDataService class:

        gameMessage.dispatch(message.message);

    and the implementation is wired up in the app context via:

        injector.mapClass(IPushDataService, PushDataService);

    along with the signal:

        signalCommandMap.mapSignalClass(GameMessageSignal, MessageReceivedCommand);

    Edit #2: Probably good to point out also that I inject an instance of GameMessageSignal into IPushDataService:

        public class PushDataService extends BaseDataService implements IPushDataService
        {
            [Inject]
            public var gameMessage:GameMessageSignal;

            // then
            private function processMessage(message:MessageEvent):void
            {
                gameMessage.dispatch(message.message);
            }
        }

    Edit #3: The mappings I set up in the SignalContext:

        injector.mapSingleton(IPushDataService);
        injector.mapClass(IPushDataService, PushDataService);
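
    Two things stand out in the mappings above (suggestions, not a confirmed fix). First, Edit #3 maps IPushDataService twice; Robotlegs 1.x expresses "one shared concrete instance behind an interface" with a single call. Second, the warning itself comes from the SignalCommandMap re-mapping the signal's payload type (IMessage) into the injector on every dispatch so the command's [Inject] can be satisfied - which would also explain why unmapping it up front finds no mapping.

        // Instead of:
        //   injector.mapSingleton(IPushDataService);
        //   injector.mapClass(IPushDataService, PushDataService);
        // use the combined rule (Robotlegs 1.x API):
        injector.mapSingletonOf(IPushDataService, PushDataService);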

    Read the article

  • Silverlight: Button template with texttrimming cut off

    - by dex3703
    I'm replacing the default Button template's ContentPresenter with a TextBlock, so the text can be trimmed when it's too long. This works fine in WPF. In Silverlight, the text gets pushed to one edge and cut off on the left, even when there's space on the right.

    The template is nothing special - I just replaced the ContentPresenter with the TextBlock:

        <ControlTemplate TargetType="Button">
            <Grid>
                <Border x:Name="bdrBackground"
                        BorderBrush="{TemplateBinding BorderBrush}"
                        BorderThickness="{TemplateBinding BorderThickness}"
                        Background="{TemplateBinding Background}" />
                <Rectangle x:Name="rectMouseOverVisualElement" Opacity="0">
                    <Rectangle.Fill>
                        <SolidColorBrush x:Name="rectMouseOverColor"
                                         Color="{StaticResource MouseOverItemBgColor}"/>
                    </Rectangle.Fill>
                </Rectangle>
                <Rectangle x:Name="rectPressedVisualElement" Opacity="0" Style="{TemplateBinding Tag}"/>
                <TextBlock x:Name="textblock"
                           Text="{TemplateBinding Content}"
                           TextTrimming="WordEllipsis"
                           TextWrapping="NoWrap"
                           HorizontalAlignment="{TemplateBinding HorizontalContentAlignment}"
                           Margin="{TemplateBinding Padding}"
                           VerticalAlignment="{TemplateBinding VerticalContentAlignment}"/>
                <Rectangle x:Name="rectDisabledVisualElement" Opacity="0" Style="{StaticResource RectangleDisabledStyle}"/>
                <Rectangle x:Name="rectFocusVisualElement" Opacity="0" Style="{StaticResource RectangleFocusStyle}"/>
            </Grid>
        </ControlTemplate>

    How do I fix this?

    More info: with the latest comment about HorizontalAlignment, it's clear that Silverlight's implementation of TextTrimming differs from WPF's. In Silverlight, TextTrimming only really works if the text is aligned left - Silverlight isn't smart enough to align the text the way WPF does. For instance, compare the WPF button with the Silverlight button when the TextBlock's HorizontalAlignment is Left versus Center (screenshots in the original post).

    Read the article

  • Looking for resources to explain a security risk.

    - by Dave
    I have a developer who has given users the ability to download a zip archive which contains an HTML document that references a relative JavaScript file and a Flash document. The Flash document accepts as one of its parameters a URL, which is embedded in the HTML document.

    I believe this archive is meant to be used as a means to transfer an advertisement to someone who would use the source to display the ad on their site; however, the end user appears to want to view it locally. When one opens the HTML document, the Flash document is presented, and when the user clicks on the Flash document it redirects to the embedded URL. However, if one extracts the archive on the desktop, opens the HTML document in a browser, and clicks the Flash object, nothing observable happens - they are not redirected to the external URL.

    I believe this is a security restriction, because one is transferring from the local computer zone to an external zone. I'm trying to determine the best way to explain this security risk in the simplest of terms to a very non-technical end user. They simply believe it's "broken" when it's not broken - they're being protected from a known vulnerability. The developer attempted to explain how to copy the files to a local IIS instance, which I highly doubt is running on the user's machine, and I do not consider that to be a viable explanation.

    Read the article

  • Activator.CreateInstance uses a huge amount of memory

    - by Marco
    I have been playing a bit with Silverlight, trying to port my Silverlight 3.0 application to Silverlight 4.0. My application loads different XAP files and, upon a user request, creates an instance of a XAML user control and adds it to the main container - a sort of MEF approach, so that I can have an extensible and pluggable application. The application is pretty huge, and to keep the performance and initial loading acceptable, I have built some helper classes that load in the background all the pages and user controls that might be used later on.

    On Silverlight 3.0 everything ran smoothly, without any problem so far. Switching to Silverlight 4.0, I have noticed that when the process gets to creating the instances of the user controls using Activator.CreateInstance, the layout freezes unexpectedly for a minute, and sometimes for more. Looking at the task manager, the memory usage of IE jumps from 50 MB to 400 MB, and sometimes to 1.5 GB. If the process doesn't take that long, the layout is rendered properly and the memory falls back to 50 MB; otherwise, everything crashes due to an out-of-memory exception.

    Has anybody encountered the same problem? Or does anybody have a solution for this tricky issue?

    Read the article

  • UDP security and identifying incoming data.

    - by Charles
    I have been creating an application that uses UDP for transmitting and receiving information. The problem I am running into is security. Right now I am using the IP/socket ID to determine what data belongs to whom.

    However, I have been reading about how people can simply spoof their IP and then send data as a specific IP, so this seems to be the wrong (insecure) way to do it. How else am I supposed to identify which data belongs to which users? For instance, say you have 10 users connected, all with specific data; the server would need to match each user's state to the data we receive. The only way I can see to do this is to use some sort of client/server key system and encrypt the data. I am curious how other applications (or games, since that's what this application is) make sure their data is genuine. Also, there is the fact that encryption takes much longer to process than plaintext, although I am not sure by how much it will affect performance. Any information would be appreciated. Thanks.
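
    One widely used pattern, sketched here as a suggestion rather than anything from the post: issue each client a random session ID and secret key over a reliable, authenticated channel at login (e.g., TCP/TLS), then have every UDP datagram carry the session ID plus an HMAC of its payload. Spoofing an IP is then useless without the key, and an HMAC is far cheaper than encrypting the whole payload.

        using System;
        using System.Security.Cryptography;

        static class PacketAuth
        {
            // Server and client both hold sessionKey; the tag travels with the packet.
            public static byte[] Sign(byte[] payload, byte[] sessionKey)
            {
                using (var hmac = new HMACSHA256(sessionKey))
                    return hmac.ComputeHash(payload);   // 32-byte tag
            }

            public static bool Verify(byte[] payload, byte[] tag, byte[] sessionKey)
            {
                byte[] expected = Sign(payload, sessionKey);
                if (expected.Length != tag.Length) return false;

                int diff = 0;                           // constant-time compare
                for (int i = 0; i < expected.Length; i++)
                    diff |= expected[i] ^ tag[i];
                return diff == 0;
            }
        }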

    Read the article

  • How do I execute queries upon DB connection in Rails?

    - by sycobuny
    I have certain initializing functions that I use to set up audit logging on the DB server side (i.e., not Rails) in PostgreSQL. At least one has to be issued (setting the current user) before inserting data into or updating any of the audited tables, or else the whole query will fail spectacularly. I could easily call these every time before running any save operation in the code, but DRY makes me think I should have the code repeated in as few places as possible, particularly since this diverges greatly from the ideal of database agnosticism.

    Currently I'm attempting to override ActiveRecord::Base.establish_connection in an initializer, to set things up so that the queries are run as soon as I connect, but it doesn't behave as I expect it to. Here is the code in the initializer:

        class ActiveRecord::Base
          # extend the class methods, not the instance methods
          class << self
            alias :old_establish_connection :establish_connection # hide the default

            def establish_connection(*args)
              ret = old_establish_connection(*args) # call the default

              # set up necessary session variables for audit logging;
              # call these after calling the default, to make sure conn is established 1st
              db = self.class.connection
              db.execute("SELECT SV.set('current_user', 'test@localhost')")
              db.execute("SELECT SV.set('audit_notes', NULL)") # end "empty variable" err

              ret # return the default's original value
            end
          end
        end
        puts "Loaded custom establish_connection into ActiveRecord::Base"

        sycobuny:~/rails$ ruby script/server
        => Booting WEBrick
        => Rails 2.3.5 application starting on http://0.0.0.0:3000
        Loaded custom establish_connection into ActiveRecord::Base

    This doesn't give me any errors, and unfortunately I can't check what the method looks like internally (I was using ActiveRecord::Base.method(:establish_connection), but apparently that creates a new Method object each time it's called, which is seemingly worthless because I can't check object_id for any worthwhile information, and I also can't reverse the compilation). However, the code never seems to get called, because any attempt to run a save or an update on a database object fails as I predicted earlier. If this isn't a proper way to execute code immediately on connection to the database, then what is?
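
    Two things worth checking (my reading, not a verified fix). Inside class << self, self within the method body is already the class, so self.class.connection asks Class for a connection; it should be self.connection (or just connection). And Rails 2.3 establishes its connection during boot, before initializers run, so a wrapper defined in an initializer can come too late for the first connection. Hooking the adapter's own connect path sidesteps both; a sketch using alias_method_chain on the Rails 2.3 PostgreSQLAdapter (method name taken from that adapter's source, so verify against your version):

        require 'active_record/connection_adapters/postgresql_adapter'

        ActiveRecord::ConnectionAdapters::PostgreSQLAdapter.class_eval do
          def connect_with_audit_session
            connect_without_audit_session  # establish the connection first
            # session variables now get set on every new or re-established connection
            execute("SELECT SV.set('current_user', 'test@localhost')")
            execute("SELECT SV.set('audit_notes', NULL)")
          end
          alias_method_chain :connect, :audit_session
        end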

    Read the article

  • linux bash script: set date/time variable to auto-update (for inclusion in file names)

    - by user1859492
    Essentially, I have a standard format for file naming conventions. It breaks down to this:

        target_dateUTC_timeUTC_tool

    So, for instance, if I run tcpdump on a target of 'foo', then the file would be foo_dateUTC_timeUTC_tcpdump. Simple enough, but a pain for everyone to constantly (and consistently) enter... so I've tried to create a bash script which sets shell variables like so:

        FILENAME=$TARGET\_$UTCTIME\_$TOOL

    Then, I can just call the variable at runtime, like so:

        tcpdump -w $FILENAME.lpc

    All of this works like a champ. I've got a menu-driven .sh which gives the user the options of viewing the current variables as well as setting them... file generation is a breeze. Unfortunately, by setting the date/time variable, it is locked to the value at the time of creation (naturally). I set the variable like so:

        UTCTIME=$(/bin/date --utc +"%Y%m%d_%H%M%Z")

    What I really need is either a way to create a variable which updates at runtime, or (more likely) another way to skin this cat. While scouring for solutions, I came across a similar issue... like this. But, to be honest, I'm stumped on how to marry the two approaches and create a simple, distributable solution. I can post the entire .sh if anyone cares to review (about 120 lines).
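
    A shell variable only ever holds the value it was assigned, so the usual fix is to defer the date command until the name is actually built - for instance, a small function evaluated at use time. A sketch reusing the variables from the post:

        #!/bin/bash

        utc_now() { /bin/date --utc +"%Y%m%d_%H%M%Z"; }

        # Build the name at the moment of use, not at menu time
        make_filename() { printf '%s_%s_%s' "$TARGET" "$(utc_now)" "$TOOL"; }

        # usage:
        TARGET=foo
        TOOL=tcpdump
        tcpdump -w "$(make_filename).lpc"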

    Read the article

  • MySQL Config File for Large System

    - by Jonathon
    We are running MySQL on a Windows 2003 Server Enterprise Edition box. MySQL is about the only program running on the box. We have approximately 8 slaves replicating from it, but my understanding is that having multiple slaves connecting to the same master does not significantly slow down performance, if at all.

    The master server has 16G RAM, 10 terabytes of drives in RAID 10, and four dual-core processors. From what I have seen on other sites, we have a really robust machine as our master db server. We just upgraded from a machine with only 4G RAM, but with similar hard drives, RAID, etc. It also ran Apache, so it was our db server and our application server. It was getting a little slow, so we split the db server onto this new machine and kept the application server on the first machine. We also distributed the application load amongst a few of our other slave servers, which also run the application.

    The problem is that the new db server has mysqld.exe consuming 95-100% of CPU almost all the time, and it is really causing the app to run slowly. I know we have several queries and table structures that could be better optimized, but since they worked okay on the older, smaller server, I assume that our my.ini (MySQL config) file is not properly configured. Most of what I see on the net is about setting up config files on small machines, so can anyone help me get the my.ini file correct for a large dedicated machine like ours? I just don't see how mysqld could get so bogged down!

    FYI: We have about 100 queries per second. We only use MyISAM tables, so skip-innodb is set in the ini file. And yes, I know it is reading the ini file correctly, because I can change some settings (like the server-id) and it will kill the server at startup.

    Here is the my.ini file:

        # MySQL Server Instance Configuration File
        # ----------------------------------------------------------------------
        # Generated by the MySQL Server Instance Configuration Wizard
        #
        # Installation Instructions
        # ----------------------------------------------------------------------
        #
        # On Linux you can copy this file to /etc/my.cnf to set global options,
        # mysql-data-dir/my.cnf to set server-specific options
        # (@localstatedir@ for this installation) or to
        # ~/.my.cnf to set user-specific options.
        #
        # On Windows you should keep this file in the installation directory
        # of your server (e.g. C:\Program Files\MySQL\MySQL Server X.Y). To
        # make sure the server reads the config file use the startup option
        # "--defaults-file".
        #
        # To run the server from the command line, execute this in a
        # command line shell, e.g.
        # mysqld --defaults-file="C:\Program Files\MySQL\MySQL Server X.Y\my.ini"
        #
        # To install the server as a Windows service manually, execute this in a
        # command line shell, e.g.
        # mysqld --install MySQLXY --defaults-file="C:\Program Files\MySQL\MySQL Server X.Y\my.ini"
        #
        # And then execute this in a command line shell to start the server, e.g.
        # net start MySQLXY
        #
        # Guidelines for editing this file
        # ----------------------------------------------------------------------
        #
        # In this file, you can use all long options that the program supports.
        # If you want to know the options a program supports, start the program
        # with the "--help" option.
        #
        # More detailed information about the individual options can also be
        # found in the manual.
        #
        # CLIENT SECTION
        # ----------------------------------------------------------------------
        #
        # The following options will be read by MySQL client applications.
        # Note that only client applications shipped by MySQL are guaranteed
        # to read this section. If you want your own MySQL client program to
        # honor these values, you need to specify it as an option during the
        # MySQL client library initialization.
        #
        [client]
        port=3306

        [mysql]
        default-character-set=latin1

        # SERVER SECTION
        # ----------------------------------------------------------------------
        #
        # The following options will be read by the MySQL Server. Make sure that
        # you have installed the server correctly (see above) so it reads this
        # file.
        #
        [mysqld]

        # The TCP/IP Port the MySQL Server will listen on
        port=3306

        # Path to installation directory. All paths are usually resolved relative to this.
        basedir="D:/MySQL/"

        # Path to the database root
        datadir="D:/MySQL/data"

        # The default character set that will be used when a new schema or table is
        # created and no character set is defined
        default-character-set=latin1

        # The default storage engine that will be used when creating new tables
        default-storage-engine=MYISAM

        # Set the SQL mode to strict
        #sql-mode="STRICT_TRANS_TABLES,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION"
        # we changed this because there are a couple of queries that can get blocked otherwise
        sql-mode=""

        # performance configs
        skip-locking
        max_allowed_packet = 1M
        table_open_cache = 512

        # The maximum amount of concurrent sessions the MySQL server will
        # allow. One of these connections will be reserved for a user with
        # SUPER privileges to allow the administrator to login even if the
        # connection limit has been reached.
        max_connections=1510

        # Query cache is used to cache SELECT results and later return them
        # without actually executing the same query once again. Having the query
        # cache enabled may result in significant speed improvements if you
        # have a lot of identical queries and rarely changing tables. See the
        # "Qcache_lowmem_prunes" status variable to check if the current value
        # is high enough for your load.
        # Note: In case your tables change very often, or if your queries are
        # textually different every time, the query cache may result in a
        # slowdown instead of a performance improvement.
        query_cache_size=168M

        # The number of open tables for all threads. Increasing this value
        # increases the number of file descriptors that mysqld requires.
        # Therefore you have to make sure to set the amount of open files
        # allowed to at least 4096 in the variable "open-files-limit" in
        # section [mysqld_safe]
        table_cache=3020

        # Maximum size for internal (in-memory) temporary tables. If a table
        # grows larger than this value, it is automatically converted to a disk
        # based table. This limitation is for a single table. There can be many
        # of them.
        tmp_table_size=30M

        # How many threads we should keep in a cache for reuse. When a client
        # disconnects, the client's threads are put in the cache if there aren't
        # more than thread_cache_size threads from before. This greatly reduces
        # the amount of thread creations needed if you have a lot of new
        # connections. (Normally this doesn't give a notable performance
        # improvement if you have a good thread implementation.)
        thread_cache_size=64

        #*** MyISAM Specific options

        # The maximum size of the temporary file MySQL is allowed to use while
        # recreating the index (during REPAIR, ALTER TABLE or LOAD DATA INFILE).
        # If the file-size would be bigger than this, the index will be created
        # through the key cache (which is slower).
        myisam_max_sort_file_size=100G

        # If the temporary file used for fast index creation would be bigger
        # than using the key cache by the amount specified here, then prefer the
        # key cache method. This is mainly used to force long character keys in
        # large tables to use the slower key cache method to create the index.
        myisam_sort_buffer_size=64M

        # Size of the Key Buffer, used to cache index blocks for MyISAM tables.
        # Do not set it larger than 30% of your available memory, as some memory
        # is also required by the OS to cache rows. Even if you're not using
        # MyISAM tables, you should still set it to 8-64M as it will also be
        # used for internal temporary disk tables.
        key_buffer_size=3072M

        # Size of the buffer used for doing full table scans of MyISAM tables.
        # Allocated per thread, if a full scan is needed.
        read_buffer_size=2M
        read_rnd_buffer_size=8M

        # This buffer is allocated when MySQL needs to rebuild the index in
        # REPAIR, OPTIMIZE, ALTER table statements as well as in LOAD DATA INFILE
        # into an empty table. It is allocated per thread so be careful with
        # large settings.
        sort_buffer_size=2M

        #*** INNODB Specific options ***
        innodb_data_home_dir="D:/MySQL InnoDB Datafiles/"

        # Use this option if you have a MySQL server with InnoDB support enabled
        # but you do not plan to use it. This will save memory and disk space
        # and speed up some things.
        skip-innodb

        # Additional memory pool that is used by InnoDB to store metadata
        # information. If InnoDB requires more memory for this purpose it will
        # start to allocate it from the OS. As this is fast enough on most
        # recent operating systems, you normally do not need to change this
        # value. SHOW INNODB STATUS will display the current amount used.
        innodb_additional_mem_pool_size=11M

        # If set to 1, InnoDB will flush (fsync) the transaction logs to the
        # disk at each commit, which offers full ACID behavior. If you are
        # willing to compromise this safety, and you are running small
        # transactions, you may set this to 0 or 2 to reduce disk I/O to the
        # logs. Value 0 means that the log is only written to the log file and
        # the log file flushed to disk approximately once per second. Value 2
        # means the log is written to the log file at each commit, but the log
        # file is only flushed to disk approximately once per second.
        innodb_flush_log_at_trx_commit=1

        # The size of the buffer InnoDB uses for buffering log data. As soon as
        # it is full, InnoDB will have to flush it to disk. As it is flushed
        # once per second anyway, it does not make sense to have it very large
        # (even with long transactions).
        innodb_log_buffer_size=6M

        # InnoDB, unlike MyISAM, uses a buffer pool to cache both indexes and
        # row data. The bigger you set this the less disk I/O is needed to
        # access data in tables. On a dedicated database server you may set this
        # parameter up to 80% of the machine physical memory size. Do not set it
        # too large, though, because competition for physical memory may
        # cause paging in the operating system. Note that on 32bit systems you
        # might be limited to 2-3.5G of user level memory per process, so do not
        # set it too high.
        innodb_buffer_pool_size=500M

        # Size of each log file in a log group. You should set the combined size
        # of log files to about 25%-100% of your buffer pool size to avoid
        # unneeded buffer pool flush activity on log file overwrite. However,
        # note that a larger logfile size will increase the time needed for the
        # recovery process.
        innodb_log_file_size=100M

        # Number of threads allowed inside the InnoDB kernel. The optimal value
        # depends highly on the application, hardware as well as the OS
        # scheduler properties. A too high value may lead to thread thrashing.
        innodb_thread_concurrency=10

        # replication settings (this is the master)
        log-bin=log
        server-id = 1

    Thanks for all the help. It is greatly appreciated.
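
    Before re-tuning buffers, it's worth finding out what the CPU is actually being spent on; with MyISAM at ~100 queries per second, sustained 95-100% CPU usually points at table scans or table-lock contention rather than at the my.ini itself. A first diagnostic step (a suggestion, using the MySQL 5.0/5.1 option names):

        # add to the [mysqld] section, restart, then EXPLAIN the offenders
        log-slow-queries="D:/MySQL/slow.log"
        long_query_time=1
        log-queries-not-using-indexes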

    Read the article

  • C++ Operator Ambiguity

    - by Scott
    Forgive me, for I am fairly new to C++, but I am having some trouble regarding operator ambiguity. I think it is compiler-specific, for the code compiles on my desktop; however, it fails to compile on my laptop. I think I know what's going wrong, but I don't see an elegant way around it. Please let me know if I am making an obvious mistake. Anyhow, here's what I'm trying to do:

    I have made my own vector class called Vector4, which looks something like this:

        class Vector4
        {
        private:
            GLfloat vector[4];
            ...
        };

    Then I have these operators, which are causing the problem:

        operator GLfloat* () { return vector; }
        operator const GLfloat* () const { return vector; }

        GLfloat& operator [] (const size_t i) { return vector[i]; }
        const GLfloat& operator [] (const size_t i) const { return vector[i]; }

    I have the conversion operators so that I can pass an instance of my Vector4 class to glVertex3fv, and I have subscripting for obvious reasons. However, calls that involve subscripting the Vector4 become ambiguous to the compiler:

        enum {x, y, z, w};

        Vector4 v(1.0, 2.0, 3.0, 4.0);
        glTranslatef(v[x], v[y], v[z]);

    Here are the candidates:

        candidate 1: const GLfloat& Vector4::operator[](size_t) const
        candidate 2: operator[](const GLfloat*, int) <built-in>

    Why would it try to convert my Vector4 to a GLfloat* first, when the subscript operator is already defined on Vector4? Is there a simple way around this that doesn't involve typecasting? Am I just making a silly mistake? Thanks for any help in advance.
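
    As I read it, the ambiguity comes from the index argument: x is an unnamed-enum value, which converts to int. Candidate 1 needs an int-to-size_t conversion on the index; candidate 2 needs the user-defined Vector4-to-GLfloat* conversion on the object but takes the int index more directly. Neither conversion sequence is strictly better than the other, so overload resolution ties on some compilers. Giving the class an int overload makes the member an exact match on both arguments - a sketch:

        // Added alongside the size_t versions (or replacing them):
        GLfloat& operator[](int i)             { return vector[i]; }
        const GLfloat& operator[](int i) const { return vector[i]; }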

    Read the article

  • A generic C++ library that provides QtConcurrent functionality?

    - by Lucas
    QtConcurrent is awesome. I'll let the Qt docs speak for themselves:

        QtConcurrent includes functional programming style APIs for parallel list processing,
        including a MapReduce and FilterReduce implementation for shared-memory (non-distributed)
        systems, and classes for managing asynchronous computations in GUI applications.

    For instance, you give QtConcurrent::map() an iterable sequence and a function that accepts items of the type stored in the sequence, and that function is applied to all the items in the collection. This is done in a multi-threaded manner, with a thread pool equal to the number of logical CPUs on the system. There are plenty of other functions in QtConcurrent, like filter(), filteredReduced(), etc. - the standard CompSci map/reduce functions and the like.

    I'm totally in love with this, but I'm starting work on an OSS project that will not be using the Qt framework. It's a library, and I don't want to force others to depend on such a large framework as Qt. I'm trying to keep external dependencies to a minimum (it's the decent thing to do). I'm looking for a generic C++ framework that provides me with the same/similar high-level primitives that QtConcurrent does. AFAIK Boost has nothing like this (I may be wrong, though); boost::thread is very low-level compared to what I'm looking for. I know C# has something very similar with their Parallel Extensions, so I know this isn't a Qt-only idea. What do you suggest I use?
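
    For comparison, the closest widely used equivalents are Intel TBB and OpenMP; a TBB sketch of the QtConcurrent::map() pattern is below (assumes a compiler with lambda support and linking against tbb):

        #include <tbb/parallel_for_each.h>
        #include <cmath>
        #include <vector>

        int main()
        {
            std::vector<double> values(1000, 2.0);

            // Like QtConcurrent::map(): the functor runs across the sequence
            // on a worker pool sized to the machine's logical CPUs.
            tbb::parallel_for_each(values.begin(), values.end(),
                                   [](double& v) { v = std::sqrt(v); });
            return 0;
        }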

    Read the article

  • how to bind a list to a dropdown list in gridview

    - by user3721173
    I have a GridView that contains a DropDownList, and I have a list that I want to bind to the drop-down in the GridView.

        <asp:GridView ID="GridView1" runat="server" AutoGenerateColumns="False"
                      OnSelectedIndexChanged="GridView1_SelectedIndexChanged"
                      OnRowDataBound="GridView1_RowDataBound">
            <Columns>
                <asp:TemplateField>
                    <ItemTemplate>
                        <asp:Label ID="Label2" runat="server"></asp:Label>
                        <asp:DropDownList ID="DropDownList3" runat="server"
                                          AppendDataBoundItems="True"
                                          OnSelectedIndexChanged="DropDownList3_SelectedIndexChanged1">
                        </asp:DropDownList>
                    </ItemTemplate>
                </asp:TemplateField>
            </Columns>

    and

        protected void GridView1_RowDataBound(object sender, GridViewRowEventArgs e)
        {
            DropDownList dropdown = (DropDownList)e.Row.FindControl("DropDownList3");
            ClassDal obj = new ClassDal();
            List<phone> list = obj.GetAll();
            dropdown.DataTextField = "phone";
            dropdown.DataValueField = "id";
            dropdown.DataSource = list.ToList();
            dropdown.DataBind();
        }

    and

        namespace sample_table
        {
            public class ClassDal
            {
                public List<phone> GetAll()
                {
                    using (PracticeDBEntities1 context = new PracticeDBEntities1())
                    {
                        return context.phone.ToList();
                    }
                }
            }
        }

    but I received this exception on the line dropdown.DataTextField = "phone";:

        Object reference not set to an instance of an object
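
    The usual cause of that NullReferenceException: RowDataBound fires for header, footer, and pager rows too, where FindControl("DropDownList3") returns null. Guarding on the row type first fixes it - a sketch based on the posted handler:

        protected void GridView1_RowDataBound(object sender, GridViewRowEventArgs e)
        {
            if (e.Row.RowType != DataControlRowType.DataRow)
                return;  // skip header/footer/pager rows

            DropDownList dropdown = (DropDownList)e.Row.FindControl("DropDownList3");
            ClassDal obj = new ClassDal();
            dropdown.DataTextField = "phone";
            dropdown.DataValueField = "id";
            dropdown.DataSource = obj.GetAll();
            dropdown.DataBind();
        }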

    Read the article

  • VisualSVN How to roll back the revision number ?

    - by Ita
    The company I work for has suffered a major server failure. During this failure the SVN repository was lost. But there is still hope! We have an old backup of the repository, which I've managed to restore successfully using VisualSVN.

    The problem I'm facing now is that I can't update/commit pre-failure checked-out folders. The reason for this problem is that, for instance, a local folder has a revision number of 2361, while the restored repository itself holds a revision number of 2290, which is older. Is there a way to deal with this issue? Can I somehow change the revision numbers on either the local copy or the server copy?

    A few points:

    - I'm using TortoiseSVN 1.6.6.
    - I can check out folders from the repo, and the connection is active.
    - I've picked one of my folders and used the Relocate option on it. This helped me see that there is something wrong with the revision number.
    - I've experimented a bit with the merge option, but this led me nowhere special. (I'm open to suggestions.)

    Thank you for your time, Ita
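
    A working copy can't be pointed at a repository whose youngest revision is lower than the one the working copy remembers, so the usual recovery (a suggestion, with a placeholder URL) is a fresh checkout plus porting any uncommitted local edits across:

        rem Fresh checkout of the restored repository (hypothetical URL)
        svn checkout https://server/svn/repo C:\work\fresh-wc

        rem Copy changed files from the old working copy, leaving out the
        rem stale .svn metadata folders (Windows robocopy syntax)
        robocopy C:\work\old-wc C:\work\fresh-wc /E /XD .svn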

    Read the article
