Search Results

Search found 21028 results on 842 pages for 'single player'.

  • Clustered index - multi-part vs single-part index and effects of inserts/deletes

    - by Anssssss
    This question is about what happens with the reorganizing of data in a clustered index when an insert is done. I assume that it should be more expensive to do inserts on a table which has a clustered index than one that does not, because reorganizing the data in a clustered index involves changing the physical layout of the data on the disk. I'm not sure how to phrase my question except through an example I came across at work.
    Assume there is a table (Junk) and there are two queries that are done on the table: the first query searches by Name and the second query searches by Name and Something. As I'm working on the database I discovered that the table has been created with two indexes, one to support each query, like so:
        --drop table Junk1
        CREATE TABLE Junk1
        (
            Name char(5),
            Something char(5),
            WhoCares int
        )
        CREATE CLUSTERED INDEX IX_Name ON Junk1 ( Name )
        CREATE NONCLUSTERED INDEX IX_Name_Something ON Junk1 ( Name, Something )
    Now when I looked at the two indexes, it seems that IX_Name is redundant, since IX_Name_Something can be used by any query that desires to search by Name. So I would eliminate IX_Name and make IX_Name_Something the clustered index instead:
        --drop table Junk2
        CREATE TABLE Junk2
        (
            Name char(5),
            Something char(5),
            WhoCares int
        )
        CREATE CLUSTERED INDEX IX_Name_Something ON Junk2 ( Name, Something )
    Someone suggested that the first indexing scheme should be kept since it would result in more efficient inserts/deletes (assume that there is no need to worry about updates for Name and Something). Would that make sense? I think the second indexing method would be better since it means one less index needs to be maintained. I would appreciate any insight into this specific example, or pointers to more info on the maintenance of clustered indexes.
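
    One way to ground the trade-off in data, assuming SQL Server 2005 or later: the sys.dm_db_index_usage_stats DMV shows whether IX_Name is ever chosen by the optimizer and how much write maintenance each index costs. A minimal sketch using the table from the example above:
        SELECT i.name AS index_name,
               s.user_seeks, s.user_scans, s.user_lookups,
               s.user_updates                    -- maintenance paid on inserts/updates/deletes
        FROM sys.indexes AS i
        LEFT JOIN sys.dm_db_index_usage_stats AS s
               ON s.object_id = i.object_id
              AND s.index_id = i.index_id
              AND s.database_id = DB_ID()
        WHERE i.object_id = OBJECT_ID('Junk1');
    If user_seeks on IX_Name stays at zero while its user_updates climbs, dropping it and clustering on (Name, Something) becomes the easier position to defend.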

    Read the article

  • Multiple FTP sites on a single IIS server with one IP

    - by kacalapy
    Can I have multiple IIS FTP sites using something similar to a web site's unassigned host headers? I have a dedicated server in a hosting facility and want to make a web site for each of my clients. To add/remove files and content I want FTP access to each of the sites' root folders. Let's say I have 10 sites set up using unassigned host headers... how can I set up 10 analogous FTP sites on the same server, without defining a separate IP address for each FTP site? Thanks all.

    Read the article

  • Should I define a single "DataContext" and pass references to it around or define multiple "DataContexts"?

    - by Nate Bross
    I have a Silverlight application that consists of a MainWindow and several classes which update and draw images on the MainWindow. I'm now expanding this to keep track of everything in a database. Without going into specifics, let's say I have a structure like this:
        MainWindow
            Drawing-Surface
            Class1 -- Supports Drawing
                DataContext + DataServiceCollection<T> w/events
            Class2 -- Manages "transactions" (add/delete objects from drawing)
            Class3
    Each "Class" is passed a reference to the Drawing Surface so they can interact with it independently. I'm starting to use WCF Data Services in Class1 and it's working well; however, the other classes are also going to need access to the WCF Data Services. (Should I define my "DataContext" in MainWindow and pass a reference to each child class?) Class1 will need READ access to the "transactions" data, and Class2 will need READ access to some of the drawing data. So my question is, where does it make the most sense to define my DataContext? Does it make sense to:
        1. Define a "global" WCF Data Service "Context" object and pass references to that in all of my subsequent classes?
        2. Define an instance of the "Context" for each Class1, Class2, etc.?
        3. Have each method that requires access to data define its own instance of the "Context" and use closures to handle the async load/complete events?
    Would a structure like this make more sense?
        MainWindow
            Drawing-Surface
            DataContext
            Class1 -- Supports Drawing
                DataServiceCollection<DrawingType> w/events
            Class2 -- Manages "transactions" (add/delete objects from drawing)
                DataServiceCollection<TransactionType> w/events
            Class3
                DataServiceCollection<T> w/events
    Is there any danger in keeping an active "DataContext" open for an extended period of time? A typical use case of this application could range from 1 minute to 40+ minutes.
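
    For what it's worth, a minimal sketch of the first option under stated assumptions: the names Class1/Class2 come from the outline above, the service URI is made up, and the single DataServiceContext (the WCF Data Services client context) is created by MainWindow and handed to each collaborator through its constructor so they all share one identity map and change tracker:
        using System;
        using System.Data.Services.Client;

        public class Class1
        {
            private readonly DataServiceContext context;
            public Class1(DataServiceContext context) { this.context = context; }
            // load drawing data through DataServiceCollection<T> instances bound to this context
        }

        public class Class2
        {
            private readonly DataServiceContext context;
            public Class2(DataServiceContext context) { this.context = context; }
        }

        public class MainWindow
        {
            private readonly DataServiceContext context =
                new DataServiceContext(new Uri("http://example.com/MyService.svc"));

            private readonly Class1 drawing;
            private readonly Class2 transactions;

            public MainWindow()
            {
                drawing = new Class1(context);
                transactions = new Class2(context);
            }
        }
    The context itself holds no open connection, so keeping it for a 40-minute session is usually fine; what matters more is how many tracked entities accumulate in it over that time.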

    Read the article

  • Use GIS to get geographic info for a single point

    - by Patrick Scott
    I am not quite sure where to start with this. I only just started looking into this in the past week, but hopefully someone can help point me in the right direction. The goal of my project is to be able to take a geohash, decode it to latitude and longitude, check the point against some GIS data, and find out some information about that point, such as the terrain (is this a body of water? A lake? An ocean? Is this a mountainous area? Is this a field?), altitude, or other useful things. Then simply be able to display that information as a starter. What I have gathered so far is that I need to get some free GIS data (this is for school, so I have no money!). I would like to have world data, and I found some online (http://www.webgis.com/terraindata.html), but I don't know where to go from here. I saw some tools such as PostGIS as a database. I am currently using Java for some other parts of the project, so if possible I would like to stick to Java. Can someone help me out, or point me in the right direction?
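
    If the terrain data ends up in a PostGIS database, the lookup for a single decoded point is short. A hedged sketch, assuming a hypothetical terrain table with a polygon column geom stored in SRID 4326 and a terrain_type attribute; the two placeholders are longitude and latitude (in that order) from the decoded geohash, bound through plain JDBC from Java:
        SELECT terrain_type
        FROM terrain
        WHERE ST_Contains(geom, ST_SetSRID(ST_MakePoint(?, ?), 4326));
    Elevation is usually raster data rather than polygons, so it would typically come from a separate source (for example an SRTM tile lookup) rather than this query.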

    Read the article

  • Python threads all executing on a single core

    - by Rob Lourens
    I have a Python program that spawns many threads, runs 4 at a time, and each performs an expensive operation. Pseudocode:
        for object in list:
            t = Thread(target=process, args=(object,))
            # if fewer than 4 threads are currently running, t.start().
            # Otherwise, add t to queue
    But when the program is run, Activity Monitor in OS X shows that 1 of the 4 logical cores is at 100% and the others are at nearly 0. Obviously I can't force the OS to do anything, but I've never had to pay attention to performance in multi-threaded code like this before, so I was wondering if I'm just missing or misunderstanding something. Thanks.
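
    The usual explanation is CPython's global interpreter lock: threads only give real parallelism for I/O-bound work, so CPU-bound tasks stay on one core no matter how many threads run. A minimal sketch of the same fan-out using processes instead (process and the item list are stand-ins for the real workload; the with-statement form assumes Python 3.3+, otherwise call close() and join() on the pool explicitly):
        from multiprocessing import Pool

        def process(item):
            # expensive, CPU-bound work goes here
            return item * item

        if __name__ == "__main__":
            items = list(range(100))
            with Pool(processes=4) as pool:            # 4 worker processes, one per core
                results = pool.map(process, items)     # blocks until every item is done
            print(results[:5])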

    Read the article

  • Ruby Built In Method to Create Multidimensional Array From Single Dimensioned Array

    - by Ell
    If I have an array like this: [0, 1, 2, 3, 4, 5], is there a built-in method to create this: [[0, 1, 2], [3, 4, 5]], given a width of 3? If there is no built-in method, how could I improve on this?
        def multi_to_single(array, width)
          return [].tap { |md_array|
            (array.length.to_f / width).ceil.times { |y|
              row = array[(y * width), width]
              md_array.push(row + Array.new(width - row.length))
            }
          }
        end
    I feel like I have missed something obvious because I haven't programmed Ruby in a while! Thanks in advance, ell.
    EDIT: It needs to be in the core library, so no Ruby on Rails or anything.
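
    For reference, a core-library way to do the grouping, assuming Ruby 1.8.7 or later where Enumerable#each_slice is available; the nil padding from the original version still needs its own step:
        arr = [0, 1, 2, 3, 4, 5, 6]
        rows = arr.each_slice(3).to_a                          # => [[0, 1, 2], [3, 4, 5], [6]]
        padded = rows.map { |r| r + Array.new(3 - r.length) }
        # => [[0, 1, 2], [3, 4, 5], [6, nil, nil]]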

    Read the article

  • Combining javascripts into a single file

    - by toomanyairmiles
    Having read up recently on Yahoo's web optimisation tips and using YSlow, I've implemented a few of their ideas on one of my sites, http://www.gwynfryncottages.com; you can see the file here: http://www.gwynfryncottages.com/js/gw-custom.js. While this technique seems to work perfectly on most occasions, and really does speed up the site, I do notice a significantly higher number of errors where the javascripts don't load or don't load completely while I'm working on the site, so three questions:
        1. Is combining scripts this way a good idea at all in terms of reliability?
        2. Is there any way to measure the number of errors?
        3. Is there any way to 'pre-load' the javascript or ensure that the number of loading errors is reduced?
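
    On the second question, one low-tech way to count loading failures is an onerror hook on the combined script that reports back to the server; a hedged sketch (the /js-error endpoint is hypothetical, and some older browsers do not fire onerror for script elements):
        var script = document.createElement('script');
        script.src = '/js/gw-custom.js';
        script.onerror = function () {
            // a dependency-free beacon: the GET request itself is the log entry
            new Image().src = '/js-error?src=' + encodeURIComponent(script.src) +
                              '&t=' + new Date().getTime();
        };
        document.getElementsByTagName('head')[0].appendChild(script);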

    Read the article

  • Lock free multiple readers single writer

    - by dummzeuch
    I have got an in-memory data structure that is read by multiple threads and written by only one thread. Currently I am using a critical section to make this access thread-safe. Unfortunately this has the effect of blocking readers even though only another reader is accessing it. There are two options to remedy this:
        1. use TMultiReadExclusiveWriteSynchronizer
        2. do away with any blocking by using a lock-free approach
    For 2. I have got the following so far (any code that doesn't matter has been left out):
        type
          TDataManager = class
          private
            FAccessCount: integer;
            FData: TDataClass;
          public
            procedure Read(out _Some: integer; out _Data: double);
            procedure Write(_Some: integer; _Data: double);
          end;

        procedure TDataManager.Read(out _Some: integer; out _Data: double);
        var
          Data: TDataClass;
        begin
          InterlockedIncrement(FAccessCount);
          try
            // make sure we get both values from the same TDataClass instance
            Data := FData;
            // read the actual data
            _Some := Data.Some;
            _Data := Data.Data;
          finally
            InterlockedDecrement(FAccessCount);
          end;
        end;

        procedure TDataManager.Write(_Some: integer; _Data: double);
        var
          NewData: TDataClass;
          OldData: TDataClass;
          ReaderCount: integer;
        begin
          NewData := TDataClass.Create(_Some, _Data);
          InterlockedIncrement(FAccessCount);
          OldData := TDataClass(InterlockedExchange(integer(FData), integer(NewData)));
          // now FData points to the new instance but there might still be
          // readers that got the old one before we exchanged it.
          ReaderCount := InterlockedDecrement(FAccessCount);
          if ReaderCount = 0 then
            // no active readers, so we can safely free the old instance
            FreeAndNil(OldData)
          else begin
            /// here is the problem
          end;
        end;
    Unfortunately there is the small problem of getting rid of the OldData instance after it has been replaced. If no other thread is currently within the Read method (ReaderCount = 0), it can safely be disposed and that's it. But what can I do if that's not the case? I could just store it until the next call and dispose of it there, but Windows scheduling could in theory let a reader thread sleep while it is within the Read method and still has got a reference to OldData. If you see any other problem with the above code, please tell me about it. This is to be run on computers with multiple cores and the above methods are to be called very frequently. In case this matters: I am using Delphi 2007 with the built-in memory manager. I am aware that the memory manager probably enforces some lock anyway when creating a new class, but I want to ignore that for the moment.
    EDIT: It may not have been clear from the above: for the full lifetime of the TDataManager object there is only one thread that writes to the data, not several that might compete for write access. So this is a special case of MREW.

    Read the article

  • Reverting single file in SVN to a particular revision

    - by Gökhan Sever
    Hello, I have a file (shown below) in an SVN repo that I would like to revert to a previous version. What is the way to do this in SVN? I only want to downgrade this particular file to an older version, not the whole repo. Thanks.
        $ svn log myfile.py
        ----------------------
        r179 | xx | 2010-05-10
        Change 3
        ----------------------
        r175 | xx | 2010-05-08
        Change 2
        ----------------------
        r174 | xx | 2010-05-04
        Initial
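
    One common approach, assuming the goal is to make the r174 content the newest revision of just this file: reverse-merge the later changes into the working copy and commit (revision numbers taken from the log above):
        svn merge -r 179:174 myfile.py      # undo everything after r174 in the working copy
        svn diff myfile.py                  # sanity-check the result
        svn commit -m "Revert myfile.py to its r174 content" myfile.py
    This keeps history intact; the old content simply becomes a new revision of the file.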

    Read the article

  • Mapping an Array to a Single Row

    - by João Bragança
    I have the following classes:
        public class InventoryItem
        {
            private Usage[] usages = new Usage[12];
            virtual public Usage[] Usages { get { return usages; } }
            virtual public string Name { get; set; }
        }

        public class Usage
        {
            virtual public double Quantity { get; set; }
            virtual public string SomethingElse { get; set; }
        }
    I know that Usages.Length will always be 12. I think it would be best to store it in the DB like so:
        Name nvarchar(64),
        Usage_Quantity_0 float,
        Usage_SomethingElse_0 nvarchar(16),
        Usage_Quantity_1 float,
        Usage_SomethingElse_1 nvarchar(16),
        ...
        Usage_Quantity_11 float,
        Usage_SomethingElse_11 nvarchar(16)
    How can I get this done?

    Read the article

  • Updating a single file in a compressed tar

    - by Phil
    Given a compressed archive file such as application.tar.gz which has a folder application/x/y/z.jar among others, I'd like to be able to take my most recent version of z.jar and update/refresh the archive with it. Is there a way to do this other than something like the following?
        tar -xzf application.tar.gz
        cp ~/myupdatedfolder/z.jar application/x/y
        tar -czf application.tar.gz application
    I understand the -u switch in tar may be of use to avoid having to untar the whole thing, but I'm unsure how to use it exactly.
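
    For reference, a sketch of the -u route. GNU tar's --update switch only works on uncompressed archives, so the gzip layer still has to come off and go back on; paths follow the example above and assume the new jar is staged at the same relative path it has inside the archive:
        gunzip application.tar.gz                         # leaves application.tar
        mkdir -p application/x/y
        cp ~/myupdatedfolder/z.jar application/x/y/
        tar -uf application.tar application/x/y/z.jar     # appended because it is newer than the archived copy
        gzip application.tar                              # back to application.tar.gz
    Note that -u appends a second copy rather than replacing the old entry, so the archive grows slightly; extraction still produces the newer jar because the later entry wins.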

    Read the article

  • How to automatically split git commits to separate changes to a single file

    - by Hercynium
    I'm just plain stuck as to how to accomplish this, or if it's even possible. Even if it can be done, I wonder if it could be setting us up for a messed-up, unmanageable repository. I have set up two branches of the code-base. One is "master" and the other is "prod". The HEAD of prod is always the latest code in production, and master is the main development branch.
    Here's the problem, though: we're converting from CVS here at $work and most of the developers are still getting used to git. Their CVS workflow involved tagging versions of individual files for production, then updating the servers using the tag. Unfortunately, this has led to sloppy practices like committing unrelated changes together and then tagging the files after-the-fact... and the devs want to know how they can do the following: in their local repos, they hack and commit to their hearts' delight, then at the end of the day, be able to run a command that takes a list of files whose commits over the day get merged with their local prod - and only those files - even if those commits combine changes to other files.
    I know how to split commits with git rebase --interactive, but I have no clue how I would automate splitting commits at all, never mind the way I want to. I do realize the simplest thing would be to just tell them to switch to their prod branches, check out the files from their master branches into the working tree, then commit to prod. My problem with that is losing the history of their commits over the day.
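
    For comparison, the "simplest thing" mentioned in the last paragraph is only a few commands; a sketch with hypothetical file names (as noted, per-file history from master is lost because the changes land as one new commit on prod):
        git checkout prod
        git checkout master -- src/foo.c src/bar.c   # take today's versions of just these files
        git commit -m "Promote foo.c and bar.c from master"
    Keeping the individual commits instead would mean replaying each of the day's commits onto prod with git cherry-pick and splitting out the unrelated hunks interactively, which is hard to automate safely.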

    Read the article

  • How to call different methods from a single web service class

    - by pointer
    I have the following RESTful web service. I have two methods for HTTP GET: one function signs a user in and the other signs a user out of an application. Following is the code:
        import javax.ws.rs.core.Context;
        import javax.ws.rs.core.UriInfo;
        import javax.ws.rs.PathParam;
        import javax.ws.rs.Consumes;
        import javax.ws.rs.PUT;
        import javax.ws.rs.Path;
        import javax.ws.rs.GET;
        import javax.ws.rs.POST;
        import javax.ws.rs.Produces;
        import javax.ws.rs.QueryParam;

        /**
         * REST Web Service
         *
         * @author Pointer
         */
        @Path("generic")
        public class GenericResource {

            @Context
            private UriInfo context;

            /**
             * Creates a new instance of GenericResource
             */
            public GenericResource() {
            }

            /**
             * Retrieves representation of an instance of
             * com.ef.apps.xmpp.ws.GenericResource
             *
             * @return an instance of java.lang.String
             */
            @GET
            @Produces("text/html")
            public String SignIn(@QueryParam("username") String username,
                    @QueryParam("password") String password,
                    @QueryParam("extension") String extension) {
                //TODO return proper representation object
                return "Credentials " + username + " : " + password + " : " + extension;
            }

            @GET
            @Produces("text/html")
            public String SignOut(@QueryParam("username") String username,
                    @QueryParam("password") String password,
                    @QueryParam("extension") String extension) {
                //TODO return proper representation object
                return "Credentials " + username + " : " + password + " : " + extension;
            }
        }
    Now, where would I specify which function I want to call for HTTP GET?
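
    One standard JAX-RS answer, sketched below: give each GET method its own sub-path with @Path so the two operations get distinct URIs under generic/ (the sub-path names signin and signout are just an assumption). Only the annotations change inside the GenericResource class above:
        @GET
        @Path("signin")                 // handles GET .../generic/signin?username=...
        @Produces("text/html")
        public String SignIn(@QueryParam("username") String username,
                @QueryParam("password") String password,
                @QueryParam("extension") String extension) {
            return "Signed in: " + username + " : " + extension;
        }

        @GET
        @Path("signout")                // handles GET .../generic/signout?username=...
        @Produces("text/html")
        public String SignOut(@QueryParam("username") String username,
                @QueryParam("password") String password,
                @QueryParam("extension") String extension) {
            return "Signed out: " + username + " : " + extension;
        }
    Without distinct @Path values, two @GET methods on the same resource path are ambiguous and the JAX-RS runtime has no way to choose between them.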

    Read the article

  • Struts2 Converting Enum Array fills array with single null value

    - by Kyle Partridge
    For a simple action class with these member variables:
        ...
        private TestConverterEnum test;
        private TestConverterEnum[] tests;
        private List<TestConverterEnum> tList;
        ...
    And a simple enum:
        public enum TestConverterEnum {
            A, B, C;
        }
    Using the default Struts2 enum conversion, when I send the request like so:
        TestConterter.action?test=&tests=&tList=&b=asdf
    For test I get a null value, which is expected. For the Array and List, however, I get an Array (or List) with one null element. Is this what is expected? Is there a way to prevent this? We recently upgraded our Struts2 version, and we had our own converters, which also don't work in this case, so I was hoping to use the default conversion method. We already have code that is validating these arrays for null and length, and I don't want to have to add another condition to these branches. Is there a way to prevent this behavior?

    Read the article

  • Join a single row in one table to n random rows in another

    - by Einar Egilsson
    Is it possible to make a join in SQL Server that joins each row from table A to n random rows in another? For example, say I have a Customer table, a Product table and an Order table. I want to join each customer to 5 random products and insert these rows into the order table. (And each customer should be joined to 5 random rows of his own; I don't want all customers joining to the same 5 rows.) Is this possible? I'm using SQL Server 2005 and it's fine if the solution is specific to that. This is a weird requirement, but I'm basically making a small data generator to generate some random data.
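
    A common SQL Server 2005 pattern for this is CROSS APPLY with TOP (5) ... ORDER BY NEWID(). The sketch below uses assumed table and column names, and the predicate referencing the outer row is there to keep the optimizer from evaluating the random subquery just once for all customers:
        INSERT INTO [Order] (CustomerId, ProductId)
        SELECT c.CustomerId, p.ProductId
        FROM Customer AS c
        CROSS APPLY (
            SELECT TOP (5) pr.ProductId
            FROM Product AS pr
            WHERE c.CustomerId = c.CustomerId    -- correlate so NEWID() runs per customer
            ORDER BY NEWID()
        ) AS p;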

    Read the article

  • Turn a single partylist field into multiple

    - by dsabater
    Is there a way to transform a partylist field like Customer in a Campaign Response to allow for multiple Contacts/Accounts/Leads? Although unsupported, I know from Jian Wang that some attributes of the lookup can be modified in the onload() event like this:
        crmForm.all.customer.setAttribute("lookuptypes", "1,2");
    Is there a similar attribute that would turn this into a field that allows multiple participants, like the To field in an e-mail? Thank you.

    Read the article

  • Source control system for single developer

    - by kaiz.net
    What's the recommended source control system for a very small team (one developer)? Price does not matter. Customer would pay :-) I'm working on Vista32 with VS 2008 in C++ and later in C# and with WPF. Setting up an extra (physical) server for this seems overkill to me. Any opinions?

    Read the article

  • Import a Single Field Text File into MySQL

    - by Kirk
    I have a text file that looks like this:
        value1
        value2
        value3
    There are 32 million lines. Each line is terminated by a \n. The fields are not enclosed or delimited with any characters. I'm trying to import it into MySQL using this code, but it is not working:
        LOAD DATA LOCAL INFILE 'data.txt'
        INTO TABLE `table`
        FIELDS TERMINATED BY '' ENCLOSED BY ''
        LINES TERMINATED BY '\n'
        (`column1`)
    Can anyone tell me what I'm doing wrong?
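
    One likely culprit, offered as a guess: FIELDS TERMINATED BY '' together with ENCLOSED BY '' switches MySQL to fixed-row format, which does not match a plain one-value-per-line file. With a single column the FIELDS clause can simply be dropped; a sketch (use LINES TERMINATED BY '\r\n' instead if the file came from Windows):
        LOAD DATA LOCAL INFILE 'data.txt'
        INTO TABLE `table`
        LINES TERMINATED BY '\n'
        (`column1`);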

    Read the article

  • WPF, Setting property with single value on multiple subcontrols

    - by Andre van Heerwaarde
    I have a parent ContentControl which displays data via a DataTemplate. The DataTemplate contains a StackPanel with several UserControls of the same type. I'd like to set the property only once, on the parent control, and have it set the value of the property on all the subcontrols. But if there is a way to do it on the StackPanel, that's also OK. The template can be changed at runtime and the values need also to be propagated to the new template. My current solution is to implement the property on both parent and subcontrol and use code to propagate the value from the parent to all the subcontrols. My question is: is there a better or other way of doing this?
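
    One alternative worth sketching: an attached dependency property registered with FrameworkPropertyMetadataOptions.Inherits flows down the element tree on its own, so setting it once on the parent control (or the StackPanel) makes it readable from every subcontrol, including ones created by a template swapped in at runtime. The names below are hypothetical:
        using System.Windows;

        public static class Shared
        {
            // Attached, inheritable property that every element in the subtree can read.
            public static readonly DependencyProperty LabelProperty =
                DependencyProperty.RegisterAttached(
                    "Label", typeof(string), typeof(Shared),
                    new FrameworkPropertyMetadata(string.Empty,
                        FrameworkPropertyMetadataOptions.Inherits));

            public static void SetLabel(DependencyObject element, string value)
            {
                element.SetValue(LabelProperty, value);
            }

            public static string GetLabel(DependencyObject element)
            {
                return (string)element.GetValue(LabelProperty);
            }
        }
    A subcontrol can then call Shared.GetLabel(this), or bind to the attached path (local:Shared.Label), instead of the value being pushed down by hand.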

    Read the article

  • How to mark a single element in a float array as empty

    - by Vineeth Mohan
    I have a large float (primitive) array and not every element in the array is filled. How can I mark a particular element as EMPTY? I understand this can be achieved by some special symbol, but still I would like to know the standard way. Even if I am using some special symbol, how will I handle a situation where the actual data item is the value of the special symbol? In short, my question is how to implement the NULL feature in a primitive-type array in Java. PS - The reason why I am not using the Float object is to achieve high memory and speed performance. Thanks, Vineeth
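
    One standard trick, shown as a sketch: use Float.NaN as the "empty" marker, since it costs no extra memory and never equals a real stored value; the check has to go through Float.isNaN because NaN != NaN. If NaN can legitimately occur in the data, a parallel boolean[] or java.util.BitSet of "filled" flags is the usual fallback.
        public class FloatSlots {
            private final float[] values;

            public FloatSlots(int size) {
                values = new float[size];
                java.util.Arrays.fill(values, Float.NaN);   // every slot starts out empty
            }

            public void put(int index, float value) { values[index] = value; }

            public boolean isEmpty(int index) { return Float.isNaN(values[index]); }
        }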

    Read the article

  • Can this be done with a single SQL query

    - by Ghostrider
    I'm using MySQL. I have a table:
        Type  SubType
        1     1
        1     5
        1     6
        1     8
        2     2
        2     3
        3     1
        3     2
        3     3
    For each type there is some number of subtypes. For every subtype in a type there is a corresponding subtype in the next type:
        (1,1) => (2,2)
        (1,5) => (2,3)
        (1,6) => (2,2)
        (1,8) => (2,3)
        (2,2) => (3,1)
        (2,3) => (3,2)
    In case you haven't seen the pattern, here it is: you sort both the current and next types by subtype, then in the next type you get the subtype in the same position as your current subtype is in the current type. If there are more subtypes in the current type than in the next one, you wrap around and start from the first subtype in the next type. Is it possible to construct a query that takes the current type and subtype and returns the corresponding subtype in the next type?
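
    A possible single-query shape, assuming the table is named TypeSubType and the input pair sits in user variables: rank the current subtype within its type, wrap that rank with MOD by the size of the next type, and pick the next-type row holding that rank. Checked by hand against the sample rows above, but treat it as a sketch rather than tuned SQL:
        SET @cur_type = 1, @cur_sub = 6;

        SELECT nxt.SubType
        FROM TypeSubType AS nxt
        WHERE nxt.Type = @cur_type + 1
          AND (SELECT COUNT(*) FROM TypeSubType
               WHERE Type = nxt.Type AND SubType <= nxt.SubType)
              = 1 + MOD(
                    (SELECT COUNT(*) FROM TypeSubType
                     WHERE Type = @cur_type AND SubType < @cur_sub),
                    (SELECT COUNT(*) FROM TypeSubType
                     WHERE Type = @cur_type + 1));
    For (1, 6) this returns 2, matching the (1,6) => (2,2) line in the mapping above.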

    Read the article

  • Best practice with respect to NPE and multiple expressions on single line

    - by JRL
    I'm wondering if it is an accepted practice or not to avoid multiple calls on the same line with respect to possible NPEs, and if so, in what circumstances. For example:
        getThis().doThat();
    vs
        Object o = getThis();
        o.doThat();
    The latter is more verbose, but if there is an NPE, you immediately know what is null. However, it also requires creating a name for the variable and more import statements. So my questions around this are:
        1. Is this problem something worth designing around? Is it better to go for the first or second possibility?
        2. Is the creation of a variable name something that would have an effect performance-wise?
        3. Is there a proposal to change the exception message to be able to determine what object is null in future versions of Java?

    Read the article

  • How to group a database write and spreadsheet write in a single "transaction"

    - by WhyGeeEx
    I have a Java program that writes results to both a DB (SQL Server) and a spreadsheet (POI), and it would be best if neither is written to if there's an error with either. It would be a lot worse if the spreadsheet was produced and then an error happened while saving to the DB, so I'm doing the DB-write first. Even so, I'm wondering if someone knows of a way to guarantee they both succeed or fail as a unit. Thanks!
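
    One pragmatic shape for this, sketched under the assumption that the spreadsheet write is the only non-transactional step: build the POI workbook entirely in memory, run the database work inside a single JDBC transaction, and only stream the workbook to disk after the commit succeeds, so a failure on either side leaves no spreadsheet behind. Connection details and the report contents are placeholders:
        import java.io.FileOutputStream;
        import java.sql.Connection;
        import java.sql.DriverManager;
        import org.apache.poi.hssf.usermodel.HSSFWorkbook;

        public class ReportJob {
            public static void main(String[] args) throws Exception {
                HSSFWorkbook workbook = new HSSFWorkbook();
                workbook.createSheet("Results");              // ...fill result rows here...

                Connection conn = DriverManager.getConnection(
                        "jdbc:sqlserver://localhost;databaseName=reports", "user", "pass");
                conn.setAutoCommit(false);
                try {
                    // ...INSERT/UPDATE the result rows here...
                    conn.commit();
                } catch (Exception e) {
                    conn.rollback();                          // DB rolled back, no file written
                    throw e;
                } finally {
                    conn.close();
                }

                FileOutputStream out = new FileOutputStream("results.xls");
                try {
                    workbook.write(out);                      // only reached after a successful commit
                } finally {
                    out.close();
                }
            }
        }
    The remaining gap is a crash after the commit but before the file is written, which by the question's own ordering is the less harmful failure.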

    Read the article
