Search Results

Search found 4740 results on 190 pages for 'split mirror'.


  • Node.js / V8: How to make my own snapshot to accelerate startup

    - by Anand
    I have a node.js (v0.6.12) application that starts by evaluating a JavaScript file, startup.js. It takes a long time to evaluate startup.js, and I'd like to 'bake it in' to a custom build of Node if possible. The V8 source directory distributed with Node, node/deps/v8/src, contains an SConscript that can almost be used to do this. On line 302, we have:

        LIBRARY_FILES = '''
        runtime.js
        v8natives.js
        array.js
        string.js
        uri.js
        math.js
        messages.js
        apinatives.js
        date.js
        regexp.js
        json.js
        liveedit-debugger.js
        mirror-debugger.js
        debug-debugger.js
        '''.split()

    Those JavaScript files are present in the same directory. Something in the build process apparently evaluates them, takes a snapshot of the resulting state, and saves it as a byte string in node/out/Release/obj/release/snapshot.cc (on Mac OS). Some customization of the startup snapshot is possible by altering the SConscript. For example, I can change the definition of the built-in Date.toString by altering date.js. I can even add new global variables by adding startup.js to the list of library files, with contents global.test = 1. However, I can't put just any JavaScript code in startup.js. If it contains Date.toString = 1;, the build fails even though the code is valid at the Node REPL:

        Build failed:  -> task failed (err #2):
            {task: libv8.a SConstruct -> libv8.a}
        make: *** [program] Error 1

    And it obviously can't make use of code that depends on libraries Node adds to V8; global.underscore = require('underscore'); causes the same error. I'd ideally like a tool, customSnapshot, where customSnapshot startup.js evaluates startup.js with Node and then dumps a snapshot to a file, snapshot.cc, which I can put into the Node source directory. I can then build Node and tell it not to rebuild the snapshot.
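
    A hypothetical helper along the lines of the one customization that does work (adding startup.js to the library list) could patch the build file automatically. This is an illustrative sketch only; it assumes the LIBRARY_FILES block quoted above appears verbatim in node/deps/v8/src/SConscript, and startup.js would still need to be copied into that directory:

        # insert_startup.py - illustrative sketch only
        path = 'node/deps/v8/src/SConscript'
        src = open(path).read()
        # splice startup.js in just before the closing quotes of LIBRARY_FILES
        if 'startup.js' not in src:
            src = src.replace("debug-debugger.js\n'''",
                              "debug-debugger.js\nstartup.js\n'''", 1)
            open(path, 'w').write(src)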


  • Asp.net TreeView control - maximum number of nodes

    - by mas_oz2k1
    I have a TreeView control in an ASP.NET page that will be loaded with up to 12,000 nodes across different levels. For example:

        Node 1
            Node 1.1
            …
            Node 1.400
                Node 1.400.1
                …
                Node 1.400.6400
        Node 2
        Node 3
        Node 4

    According to this link: http://msdn.microsoft.com/en-us/library/ms529261.aspx the node limit is 1000. Is this correct, or is it dependent on available memory (please specify a value)? Assuming it is correct, is there any way to split the 4600 child nodes into chunks of, say, 300? I am thinking that using dummy nodes (previous/next navigation) to navigate the chunks will ease the load of the HTML page. Sample code in C# will be greatly appreciated. (Or VB.NET if you cannot translate it to C#.)
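
    The chunking itself is straightforward; here is a minimal, language-agnostic sketch in Python (the C# translation is mechanical), where hypothetical "previous"/"next" dummy nodes bracket each chunk of 300 children:

        def chunks(nodes, size=300):
            """Split a flat list of child nodes into pages of `size`."""
            for i in range(0, len(nodes), size):
                yield nodes[i:i + size]

        # each chunk would be rendered under the parent, with dummy
        # navigation nodes standing in for the neighbouring chunks:
        for page in chunks(list(range(4600))):
            tree_page = ['<prev>'] + page + ['<next>']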


  • WPF data grid - Column Header not aligned with data rows

    - by Pawan
    Hi, I am using the DataGrid in WPF. This is a very simple and basic implementation; I am not using any styles. I created a simple datagrid:

        <dg:DataGrid x:Name="dg" >
        </dg:DataGrid>

    and populated it with data as:

        dg.ItemsSource = " H E L L O W O R L D!".Split();

    The grid gets properly populated, but the column header of the grid is drawn with some offset. Due to this, my data and header are misaligned. I tried searching for this on the net but I haven't found anything. This seems to be a straightforward implementation which is working for everyone except me :(. Can anyone please tell me what might be going wrong? I have tried using different data sets and applying some style to test this. Thanks in advance.


  • Printing elements out of list

    - by chavanak
    Hi, I have a certain check to be done, and if the check is satisfied I want the result to be printed. Below is the code:

        import string
        import codecs
        import sys

        y = sys.argv[1]
        list_1 = []
        f = 1.0
        x = 0.05
        write_in = open("new_file.txt", "w")
        write_in_1 = open("new_file_1.txt", "w")

        ligand_file = open(y, "r")                      # Open the receptor.txt file
        ligand_lines = ligand_file.readlines()          # Read all the lines into the array
        ligand_lines = map(string.strip, ligand_lines)  # Remove the newline character from all the pdb file names
        ligand_file.close()

        ligand_file = open("unique_count_c_from_ac.txt", "r")
        ligand_lines_1 = ligand_file.readlines()
        ligand_lines_1 = map(string.strip, ligand_lines_1)
        ligand_file.close()

        s = []
        for i in ligand_lines:
            for j in ligand_lines_1:
                j = j.split()
                if i == j[1]:
                    print j

    The above code works great, but when I print j, it prints like ['351', '342'], whereas I am expecting 351 342 (with one space in between). Since it is more of a Python question, I have not included the input files (basically they are just numbers). Can anyone help me? Cheers, Chavanak
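
    A minimal sketch of the output format being asked for: joining the fields with a single space before printing gives the expected text.

        print ' '.join(j)   # ['351', '342'] -> 351 342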


  • A better way of switching between Android source versions

    - by dan
    I would like to be able to switch between various Android releases (1.0, 1.5, 2.0, etc.) and then access them via the file system to copy all files for that version into a tarball. Currently I am just running

        repo init -u <source URL> -b release-1

    to get each version (changing the tag for each version I need). If this were a single git repository, I could check out the branch/tag I needed, the project directory would "morph" to reflect that, and I could just tar that folder. Since the Android source is split into multiple git repositories controlled by repo, I have not yet found a way to do this other than the method mentioned above. Any suggestions are appreciated.


  • Merging columns in a JTable

    - by Harish
    I am working in JTable and I have a requirement like this. Say there are 4 columns, namely 10, 20, 30 and 40. Now the values usually come in ranges like 10-20, 20-30 and 30-40, so it was easy for us to display the name for each range. But recently the values have started to come randomly, like 15-25 or 10-25,25-30. In this case our JTable should dynamically adjust the size of the row so that it represents that range only, meaning it should not disturb the existing cells, only the rows which diverge from the normal range. To be more precise, I should be able to merge and split cells based on the content of the cell.


  • Surgical slave reads for Ruby on Rails, multiple databases

    - by Daniel
    Greetings, I'm currently working on a multiple-database Rails application. I want to offload the SELECT queries onto the slave databases for only SOME of the databases or specific models. The issue is that in places we swap out the current database connection and put in a different one for a short time, to load fixtures or to handle sharding. Does anyone have any recommendations for a Ruby gem that:

    1. will split SELECTs from writes with a considerable amount of control (we want to handle just some models, and we are looking for a neat surgical fix);
    2. does not monkey around with ActiveRecord;
    3. is still being maintained?

    TIA -daniel


  • How to pull one commit at a time from a remote git repository?

    - by Norman Ramsey
    I'm trying to set up a darcs mirror of a git repository. I have something that works OK, but there's a significant problem: if I push a whole bunch of commits to the git repo, those commits get merged into a single darcs patchset. I really want to make sure each git commit gets set up as a single darcs patchset. I bet this is possible by doing some kind of git fetch followed by interrogation of the local copy of the remote branch, but my git fu is not up to the job. Here's the (ksh) code I'm using now, more or less:

        git pull -v   # pulls all the commits from remote --- bad!

        # gets information about only the last commit pulled -- bad!
        author="$(git log HEAD^..HEAD --pretty=format:"%an <%ae>")"
        logfile=$(mktemp)
        git log HEAD^..HEAD --pretty=format:"%s%n%b%n" > $logfile

        # add all new files to darcs and record a patchset. this part is OK
        darcs add -q --umask=0002 -r .
        darcs record -a -A "$author" --logfile="$logfile"
        darcs push -a
        rm -f $logfile

    My idea is:

    1. Try git fetch to get a local copy of the remote branch (not sure exactly what arguments are needed).
    2. Somehow interrogate the local copy to get a hash for every commit since the last mirroring operation (I have no idea how to do this).
    3. Loop through all the hashes, pulling just that commit and recording the associated patchset (I'm pretty sure I know how to do this once I get my hands on the hash).

    I'd welcome either help fleshing out the scenario above or suggestions about something else I should try. Ideas?
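
    Steps 1-3 could be fleshed out along these lines; a hypothetical sketch in Python, assuming the remote branch is origin/master and that HEAD currently sits at the last mirrored commit:

        import subprocess

        def git(*args):
            return subprocess.check_output(('git',) + args)

        git('fetch', 'origin')
        # hashes of every not-yet-mirrored commit, oldest first
        hashes = git('rev-list', '--reverse', 'HEAD..origin/master').decode().split()
        for h in hashes:
            git('merge', '--ff-only', h)  # advance the working tree one commit
            # ... record the darcs patchset for this commit, as in the
            # ksh script above ...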


  • Bulkinsert from CSV into db (C#) -> max number of rows in a web application?

    - by Swoosh
    Web application - C#, .Net, SQL 2k5. I recently used bulkinsert on another application and I thought I would like to give it a try here. I am going to receive a CSV file with 1000 rows, which will most likely add 500,000 (that is five hundred thousand) records to the database. I have no idea yet whether this huge amount is going to work out well. I am afraid that it will time out. I haven't done any testing yet, but I am pretty sure it would time out eventually. Is there a way to make it not time out (I don't know ... split the bulkinsert into 1000 pieces :D), or should I try to do something like BCP with a SQL job?
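
    The "split it into pieces" idea could be prototyped independently of the insert mechanism; a hedged sketch in Python of reading the CSV in fixed-size batches (insert_batch is hypothetical, standing in for whatever bulk-insert call is used per batch):

        import csv

        def batches(path, size=1000):
            """Yield rows from a CSV file in batches of `size` rows."""
            with open(path) as f:
                batch = []
                for row in csv.reader(f):
                    batch.append(row)
                    if len(batch) == size:
                        yield batch
                        batch = []
                if batch:
                    yield batch

        # for batch in batches('upload.csv'):
        #     insert_batch(batch)   # hypothetical: one bulk insert per batch,
        #                           # each within its own timeout window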


  • why egrep's stdout did not go through pipe?

    - by ccfenix
    Hi, I've got a weird problem regarding egrep and pipes. I tried to filter a stream containing lines that start with a topic name, such as "TICK:this is a tick message\n". When I try to use egrep to filter it:

        ./stream_generator | egrep 'TICK' | ./topic_processor

    it seems that the topic_processor never receives any messages. However, when I use the following Python script instead:

        ./stream_generator | python filter.py --topics TICK | ./topic_processor

    everything looks to be fine. I guess there needs to be a 'flush' mechanism for egrep as well; is this correct? Can anyone here give me a clue? Thanks a million. Here is filter.py:

        import sys
        from optparse import OptionParser

        if __name__ == '__main__':
            parser = OptionParser()
            parser.add_option("-m", "--topics", action="store",
                              type="string", dest="topics")
            (opts, args) = parser.parse_args()
            topics = opts.topics.split(':')
            while True:
                s = sys.stdin.readline()
                for each in topics:
                    if s[0:4] == each:
                        sys.stdout.write(s)
                        sys.stdout.flush()
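
    For what it's worth, the guess about a flush mechanism matches how GNU grep behaves: it block-buffers stdout when writing to a pipe rather than a terminal. GNU grep has a --line-buffered flag that flushes after every output line, so a pipeline like ./stream_generator | egrep --line-buffered 'TICK' | ./topic_processor should behave like the Python filter above, which calls sys.stdout.flush() on every match.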


  • Elegant way of retrieving a query string parameter

    - by Wondering
    Hi All, I am retrieving one query string parameter, and for that my code is

        <a href="Default2.aspx?Id=1&status=pending&year=2010">Click me</a>

    Now I want to retrieve status=pending, and for that I am doing

        var qString = window.location.search.substring(1);
        var Keys = qString.split('&');
        alert(Keys[1]);

    This works fine, but I am hard-coding [1] here. Is there any elegant way of doing this without hard-coding?


  • Need some tips on my SQL script?

    - by Nano HE
    Hi, I plan to create a table to store race results like this:

        Place  RaceNumber  Gender  Name        Result
        12     0112        Male    Mike Lee    1:32:40
        16     0117        Female  Rose Marry  2:20:40

    I am confused about the column type definitions.

    Q1. I am not sure whether the result can be set to varchar(32) or some other type?
    Q2. And for racenumber, between int(11) and varchar(11), which one is better?
    Q3. Can I use `UNIQUE KEY` the way I have here?
    Q4. Do I need to split name into firstName and lastName in my DB table?

        DROP TABLE IF EXISTS `race_result`;
        CREATE TABLE IF NOT EXISTS `race_result` (
          `id` int(11) NOT NULL auto_increment,
          `racenumber` int(11) NOT NULL,
          `gender` enum('male','female') NOT NULL,
          `name` varchar(16) NOT NULL,
          `result` varchar(32) NOT NULL,
          PRIMARY KEY (`id`),
          UNIQUE KEY `racenumber` (`racenumber`,`id`)
        ) ENGINE=MyISAM DEFAULT CHARSET=utf8 AUTO_INCREMENT=3;
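
    On Q1, one thing to weigh is whether results will ever be sorted or compared in SQL; a string like '1:32:40' only sorts correctly by accident of format, so elapsed times are often stored as a TIME column or as an integer number of seconds instead. A minimal sketch of that conversion (illustrative only, assuming H:MM:SS input):

        def to_seconds(result):
            # "1:32:40" -> 5560
            h, m, s = map(int, result.split(':'))
            return h * 3600 + m * 60 + s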


  • loading data from file into 2d array

    - by Chris
    Hi, I am just starting with Perl and would like some help with arrays please. I am reading lines from a data file and splitting each line into fields:

        open (INFILE, $infile);
        while (my $linedata = <INFILE>) {
            my @data = split ',', $linedata;
            ....
        }

    I then want to store the individual field values (in @data) in an array so that the array looks like the input data file, i.e. the first "row" of the array contains the first line of data from INFILE, etc. Each line of data from the infile contains 4 values: x, y, z and w. Once the data are all in the array, I have to pass the array to another program which reads the x, y, z, w and displays the w value on a screen at the point determined by the x, y, z values. I cannot pass the data to the other program on a row-by-row basis, as the program expects the data to be in a 2D matrix format. Any help greatly appreciated. Chris


  • What is your ratio of bug fixing vs. enhancements?

    - by Newtopian
    In the spirit of this question, I wanted to get a sense of the proportion of time split between fixing bugs and implementing new features. If possible, try to give an estimate for the product as a whole, as opposed to individual developer stats, and try to make an average over the course of a typical year. Do provide a general description of the product/project to allow comparison. Specifically:

    - Maturity of the project
    - Is it still actively developed or strictly in maintenance?
    - Size estimate of the product/project
    - Size of the team developing it (all inclusive)
    - Your team's score on the Joel Test

    Example:

    - approx 80% of time spent on bug fixes, 20% on new stuff
    - Mature software (20 years old)
    - Actively developed
    - 1.5M lines of text, approx 700k-900k LOC
    - 12-15 actively coding in it
    - we got 5/12 for sure, some would say 7/12


  • Unable to forward UITouch events to my view controller

    - by hyn
    I have a UISplitViewController set up with a custom view added as a subview of the split view controller's view (UILayoutContainerView). I am trying to forward touch events from my custom view controller to the master and detail views, but the following (which was suggested here in another thread) seems to have no effect:

        - (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event
        {
            UITouch *touch = [touches anyObject];
            // Do something
            [self.nextResponder touchesBegan:touches withEvent:event];
        }

    As a result my custom view controller blocks the events, and all the UI underneath never has a chance to do anything. How can I get my master and detail view controllers to receive events?


  • Why do socket.makefile objects fail after the first read for UDP sockets?

    - by Eli Courtwright
    I'm using the socket.makefile method to create a file-like object on a UDP socket for the purposes of reading. When I receive a UDP packet, I can read the entire contents of the packet all at once by using the read method, but if I try to split it up into multiple reads, my program hangs. Here's a program which demonstrates this problem:

        import socket
        from sys import argv

        SERVER_ADDR = ("localhost", 12345)

        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        sock.bind(SERVER_ADDR)
        f = sock.makefile("rb")

        sock.sendto("HelloWorld", SERVER_ADDR)

        if "--all" in argv:
            print f.read(10)
        else:
            print f.read(5)
            print f.read(5)

    If I run the above program with the --all option, then it works perfectly and prints HelloWorld. If I run it without that option, it prints Hello and then hangs on the second read. I do not have this problem with socket.makefile objects when using TCP sockets. Why is this happening, and what can I do to stop it?
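
    For comparison, reading the datagram directly and slicing it in user space sidesteps the file object entirely; a minimal sketch, assuming the same socket setup as in the program above:

        # one recvfrom() consumes the whole datagram; slice the result instead
        data, addr = sock.recvfrom(4096)
        print data[:5]   # Hello
        print data[5:]   # World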


  • How to improve my LDAP schema?

    - by asmaier
    Hello, I have an OpenLDAP database, and it holds some project objects that look like:

        dn: cn=Proj1,ou=Project,ou=ua,dc=org
        cn: Proj1
        objectClass: top
        objectClass: posixGroup
        member: 001ag
        member: 002ag
        System: ABEL
        System: PCx
        Budget: ABEL:1000000:0.3
        Budget: PCx:300000:0.3

    One can see that the Budget attribute is a ":"-separated string, where the first part holds the name of the system the budget is for, the second part holds a budget figure (which may change every month), and the last part is a conversion factor for the budget of that system. Seeing this, I thought this was bad database design, since attribute values should always be atomic. But how can I improve on this in LDAP, so that I can do a direct ldapsearch or a direct ldapmodify of the budget of system "ABEL", instead of writing a script that has to parse and split the ":"-separated string?
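
    For reference, this is the sort of parsing script the poster wants to make unnecessary; a minimal sketch of unpacking one Budget value in Python:

        def parse_budget(value):
            # "ABEL:1000000:0.3" -> ('ABEL', 1000000, 0.3)
            system, budget, factor = value.split(':')
            return system, int(budget), float(factor)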


  • Flex datagrid multiple rows single file

    - by Vish
    Hi, I have a Flex datagrid with 3 columns. The first column contains the image name (a unique key). The other two columns have username and size details. I want to split the username into lastname, firstname, address and some other fields. Can we have multiple rows in the grid for one image? I tried multi-line; it works, but we need to keep adding spaces and it's cumbersome. Since each row in the Flex datagrid is represented by one index in the XMLList which is enumerated, is there a way to have more than one row assigned to one image and shown in the grid? Something like this.. Thanks, Vish.


  • Custom Lucene Sharding with Hibernate Search

    - by Timo Westkämper
    Does anyone have experience with custom Lucene sharding/partitioning using Hibernate Search? The Hibernate Search documentation says the following about Lucene sharding:

        In some cases, it is necessary to split (shard) the indexing data of a
        given entity type into several Lucene indexes. This solution is not
        recommended unless there is a pressing need because by default,
        searches will be slower as all shards have to be opened for a single
        search.

    In other words, don't do it until you have problems :) Has anyone implemented sharding for Hibernate Search in such a way that queries can also be targeted at one of the shards? In our case we have Lucene queries that should target only one shard per query.


  • Multiple lines of text to a single map

    - by steven
    I've been trying to use Hadoop to send N lines at a time to a single mapper. I don't require the lines to be split in any particular way. I've tried to use NLineInputFormat; however, that sends N lines of text from the data to each mapper one line at a time [giving up after the Nth line]. I have tried to set the following option, but it still sends the input one line at a time to each map:

        job.setInt("mapred.line.input.format.linespermap", 10);

    I've found a mailing list recommending that I override LineRecordReader::next; however, that is not that simple, as the internal data members are all private. I've just checked the source for NLineInputFormat and it hard-codes LineReader, so overriding will not help. Also, BTW, I'm using Hadoop 0.18 for compatibility with Amazon EC2 MapReduce.


  • Clarification needed: How does .NET runtime resolve assembly references from parent folder?

    - by aoven
    I have the following output structure of executables in my solution:

        %ProgramFiles%
        |
        +-[MyAppName]
          |
          +-[Client]
          | |
          | +-(EXE & several DLL assemblies)
          |
          +-[Common]
          | |
          | +-[Schema Assemblies]
          | | |
          | | +-(several DLL assemblies)
          | |
          | +-(several DLL assemblies)
          |
          +-[Server]
            |
            +-(EXE & several DLL assemblies)

    Each project in the solution references different DLL assemblies, some of which are outputs from other projects in the solution, and others are plain 3rd-party assemblies. For example, the [Client] EXE might reference an assembly in [Common], which is in a different directory branch. All references have "Copy Local" set to false, to mirror the layout of the files in the final installed application. Now, if I take a look at the reference properties in the Visual Studio IDE, I see that the "Path" of every reference is absolute and that it corresponds to the actual output location of the assembly. That's understandable and correct. As expected, the solution compiles and runs just fine. What I don't understand is why everything seems to work even when I close the IDE, rename the [MyAppName] directory and run the [Client] EXE manually. How does the runtime find the assemblies if the reference paths aren't the same as they were at the time of linking? To be clear - this is actually exactly what I'm after: a semi-dispersed set of application files that run fine regardless of where the [MyAppName] directory is located or even what it's named. I'd just like to know how and why this works without any specific path resolution on my part. I've read the answers to this similar question, but I still don't get it. Help much appreciated!


  • Entropy using Decision Trees

    - by Matt Clements
    Train a decision tree on the data represented by attributes A1, A2, A3 and outcome C described below:

        A1  A2  A3  C
        1   0   1   0
        0   1   1   1
        0   0   1   0

    Given -log2(1/3) = 1.6 and -log2(2/3) = 0.6 (approximately), answer the following questions:

    a) What is the value of the entropy H for the given set of training examples?
    b) What is the portion of the positive samples split off by attribute A2?
    c) What is the value of the information gain, G(A2), of attribute A2?
    d) What are the IF-THEN rule(s) for the decision tree?
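
    For part (a), the standard two-class entropy formula applies; with one positive example out of three, a quick check (a sketch in Python, not part of the original question) gives H of roughly 0.92, or about 0.93 using the rounded values given above:

        from math import log

        def entropy(p_pos, p_neg):
            # H = -p+ * log2(p+) - p- * log2(p-)
            return -(p_pos * log(p_pos, 2) + p_neg * log(p_neg, 2))

        print entropy(1.0 / 3, 2.0 / 3)   # ~0.918; (1/3)*1.6 + (2/3)*0.6 = 0.93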


  • Compound dictionary keys

    - by John Keyes
    I have a particular case where using compound dictionary keys would make a task easier. I have a working solution, but I feel it is inelegant. How would you do it?

        context = {
            'database': {
                'port': 9990,
                'users': ['number2', 'dr_evil']
            },
            'admins': ['[email protected]', '[email protected]'],
            'domain.name': 'virtucon.com'
        }

        def getitem(key, context):
            if hasattr(key, 'upper') and key in context:
                return context[key]
            keys = key if hasattr(key, 'pop') else key.split('.')
            k = keys.pop(0)
            if keys:
                try:
                    return getitem(keys, context[k])
                except KeyError, e:
                    raise KeyError(key)
            if hasattr(context, 'count'):
                k = int(k)
            return context[k]

        if __name__ == "__main__":
            print getitem('database', context)
            print getitem('database.port', context)
            print getitem('database.users.0', context)
            print getitem('admins', context)
            print getitem('domain.name', context)
            try:
                getitem('database.nosuchkey', context)
            except KeyError, e:
                print "Error:", e

    Thanks.


  • Why doesn't this Perl array sort work?

    - by Luke
    Why won't the array sort?

    CODE

        my @data = (
            'PJ RER Apts to Share|PROVIDENCE',
            'PJ RER Apts to Share|JOHNSTON',
            'PJ RER Apts to Share|JOHNSTON',
            'PJ RER Apts to Share|JOHNSTON',
            'PJ RER Condo|WEST WARWICK',
            'PJ RER Condo|WARWICK',
        );

        foreach my $line (@data) {
            $count = @data;
            chomp($line);
            @fields = split(/\|/, $line);
            if ($fields[0] eq "PJ RER Apts to Share") {
                @city = "\u\L$fields[1]";
                @city_sort = sort (@city);
                print "@city_sort", "\n";
            }
        }
        print "$count", "\n";

    OUTPUT

        Providence
        Johnston
        Johnston
        Johnston
        6


  • Return nested alias for LINQ expression

    - by Schotime
    I have the following LINQ expression:

        var tooDeep = shoppers
            .Where(x => x.Cart.CartSuppliers.First().Name == "Supplier1")
            .ToList();

    I need to turn the Name part into the following string:

        x.Cart.CartSuppliers.Name

    As part of this I turned the expression into a string, then split on the '.' and removed the First() argument. However, when I get to CartSuppliers, it returns a Suppliers[] array. Is there a way to get the single type from this, e.g. to get a Supplier back? Thanks

