Search Results

Search found 4417 results on 177 pages for 'purpose'.


  • Telerik RadGrid: grid clientside pagination

    - by ram
     I have a web service which returns some data; I massage this data and use it as the data source for my RadGrid (Telerik). The data source is quite large, and I would like to paginate it. I found a couple of problems when I paginate it on the server side:

     1. I have to bind the grid again for pagination, which essentially means I have to make a call to the web service again to get the data. This is an expensive call for me. I would rather forgo the benefits of pagination and display all the results on the same page, except that it would be a bit clumsy.

     2. During the postback, RadGrid1.Items.Count happens to be the number of items on the current page (25 in my case), which is expected, as not all the items in the data source get bound. That by itself is not an issue. The real issue is that we have some checkboxes which get checked based on some business condition, and we add these to our business object/DB later. So if the user has not navigated through all the pages, those "checked" items never get added, because pagination limits the grid's Items to those bound for that particular page index.

     My thoughts: I would rather have some sort of client-side pagination, where we hide/show contents, than go to the server and do a databind every time. Though it would return all the results, the UI would not be clumsy, and the grid would have all the items during the postback. Is there a way to do it? If it were a regular ASP.NET GridView, can someone point me to a good article which would serve my purpose?

     Ram

     PS: Who else thinks RadGrid is crazy? (Unfortunately I did not make this choice.)

    Read the article

  • Is there a way to control how pytest-xdist runs tests in parallel?

    - by superselector
     I have the following directory layout:

         runner.py
         lib/
         tests/
             testsuite1/
                 testsuite1.py
             testsuite2/
                 testsuite2.py
             testsuite3/
                 testsuite3.py
             testsuite4/
                 testsuite4.py

     The format of the testsuite*.py modules is as follows:

         import os
         import pytest

         class testsomething:
             def setup_class(self):
                 '''do some setup'''
                 # Do some setup stuff here

             def teardown_class(self):
                 '''do some teardown'''
                 # Do some teardown stuff here

             def test1(self):
                 # Do some test1 related stuff

             def test2(self):
                 # Do some test2 related stuff

             # ... tests 3 through 39 ...

             def test40(self):
                 # Do some test40 related stuff

         if __name__ == '__main__':
             pytest.main(args=[os.path.abspath(__file__)])

     The problem I have is that I would like to execute the test suites in parallel, i.e. I want testsuite1, testsuite2, testsuite3 and testsuite4 to start executing in parallel, but the individual tests within each suite need to be executed serially. When I use the xdist plugin from py.test and kick off the tests using 'py.test -n 4', py.test gathers all the tests and randomly load-balances them among 4 workers. This leads to the setup_class method being executed for every test within a testsuiteX.py module (which defeats my purpose; I want setup_class to be executed only once per class, with tests executed serially after that). Essentially, what I want the execution to look like is:

         worker1: executes all tests in testsuite1.py serially
         worker2: executes all tests in testsuite2.py serially
         worker3: executes all tests in testsuite3.py serially
         worker4: executes all tests in testsuite4.py serially

     while worker1, worker2, worker3 and worker4 all run in parallel. Is there a way to achieve this in the pytest-xdist framework? The only option I can think of is to kick off a separate process for each test suite from runner.py:

         import multiprocessing
         import subprocess

         def test_execute_func(testsuite_path):
             subprocess.call('py.test %s' % testsuite_path, shell=True)

         if __name__ == '__main__':
             # Gather all the testsuite paths, then:
             for testsuite_path in testsuite_paths:
                 multiprocessing.Process(target=test_execute_func, args=(testsuite_path,)).start()
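
     For reference, newer versions of pytest-xdist ship a --dist=loadfile mode that assigns whole files to workers as units, which matches this layout exactly; whether the installed version supports it is an assumption. A minimal sketch of a runner using it:

         # Sketch only: assumes a pytest-xdist version that supports --dist=loadfile.
         import pytest

         if __name__ == '__main__':
             # Four workers; each file's tests stay together on one worker,
             # so setup_class runs once per class and tests within a file stay serial.
             pytest.main(['-n', '4', '--dist=loadfile', 'tests/'])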

    Read the article

  • Is this a good starting point for iptables in Linux?

    - by sbrattla
     Hi, I'm new to iptables, and I've been trying to put together a firewall whose purpose is to protect a web server. The rules below are the ones I've put together so far, and I would like to hear whether they make sense, and whether I've left out anything essential. In addition to port 80, I also need port 3306 (MySQL) and port 22 (SSH) open for external connections. Any feedback is highly appreciated!

         #!/bin/sh

         # Clear all existing rules.
         iptables -F

         # ACCEPT connections on the loopback interface, 127.0.0.1.
         iptables -A INPUT -i lo -j ACCEPT

         # ALLOW established traffic.
         iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT

         # DROP packets that are NEW but do not have the SYN bit set.
         iptables -A INPUT -p tcp ! --syn -m state --state NEW -j DROP

         # DROP fragmented packets, as there is no way to tell the source
         # and destination ports of such a packet.
         iptables -A INPUT -f -j DROP

         # DROP packets with all tcp flags set (XMAS packets).
         iptables -A INPUT -p tcp --tcp-flags ALL ALL -j DROP

         # DROP packets with no tcp flags set (NULL packets).
         iptables -A INPUT -p tcp --tcp-flags ALL NONE -j DROP

         # ALLOW ssh traffic (and protect against DoS attacks).
         iptables -A INPUT -p tcp --dport ssh -m limit --limit 1/s -j ACCEPT

         # ALLOW http traffic (and protect against DoS attacks).
         iptables -A INPUT -p tcp --dport http -m limit --limit 5/s -j ACCEPT

         # ALLOW mysql traffic (and protect against DoS attacks).
         iptables -A INPUT -p tcp --dport mysql -m limit --limit 25/s -j ACCEPT

         # DROP any other traffic.
         iptables -A INPUT -j DROP

    Read the article

  • ASP.NET server data persistence

    - by Wayne Werner
     Hi, I'm not really sure exactly how the question should be phrased, so please be patient if I ask the wrong thing. I'm writing an ASP.NET application using VB as the code-behind language. I have a data access class that connects to the DB to run the query (parameterized, of course), and another class to perform the validation tasks; I access this class from my aspx page.

     What I would like is to be able to store the data server side and wait for the user to choose from a few options based on the validity of the data. But unless my understanding is completely off, having persistent data objects on the server will cause problems when multiple users connect?

     My ultimate goal is that once the data has been validated, the end user can't modify it. Currently I'm validating the data, but I still have to retrieve it from the web form AFTER the user says OK, which obviously leaves open the possibility of injecting bad data either accidentally (unlikely) or on purpose (also unlikely for the user, but I'd prefer not to take the chance).

     So am I completely off in my understanding? If so, can someone point me to a resource that provides some instructions on keeping persistent data on the server, or provide instruction? Thanks!

    Read the article

  • How can I create global context variables in JBOSS?

    - by NobodyMan
     This is a follow-up to a question I posted a while back: "Can I use a single WAR in multiple environments?". I was able to create a single-WAR solution in Tomcat, but now we are migrating our app to JBoss 4.2 and I can't figure out how to set up global environment variables. In Tomcat 6 this was pretty straightforward: I simply put the following snippet in tomcat/conf/Catalina/myappname.xml:

         <Context ...>
             <Environment name="TARGET_ENV" value="DEV" type="java.lang.String" override="false"/>
         </Context>

     Then in my app I was able to resolve the environment name with the following:

         Context context = (Context) new InitialContext().lookup("java:comp/env");
         String targetEnvironment = (String) context.lookup("TARGET_ENV");

     The problem is that I can't find out where/how to place global variables in JBoss. I've tried putting the <Environment> tag in the following files, to no avail:

         server/all/deploy/jboss-web.deployer/context.xml
         server/default/deploy/jboss-web.deployer/context.xml

     I know that I can put environment variables in my app's web.xml, but that defeats the purpose of having a unified WAR: I'd still need custom .war's for dev, QA and prod. I'm a JBoss newbie, so if there's any additional information that would help, just let me know and I'll append it to this question. Many thanks! --N

    Read the article

  • How can I evaluate the connectedness of my nodes?

    - by Travis Leleu
     I've got a space of nodes that are all interconnected, based on a "similarity score". I would like to determine how "connected" a node is with the others. My purpose is to find poorly connected nodes, so that the backlinks from other nodes to them can be prioritized.

     Perhaps an example would help. I've got a web page that links to my other pages based on a similarity score. Suppose I have the pages A, B, C, ... A has a backlink from every other page, so it's very well connected. It also has links to all my other pages (each edge in the graph is essentially bidirectional). B has only one backlink, from A. C has a link from A and D. I would like to make sure that the A-B link is prioritized over the A-C link (even if the similarity score between C and A is higher than between B and A).

     In short, I would like to evaluate which nodes are least and best connected, so that I can mangle the results to my means. I believe this is graph connectedness, but I'm at a loss to develop a (simple) algorithm that will help me here. Simply counting the backlinks to a node may be a starting point; but then how do I take the next step, which is to properly weight the links on the original node (A, in the example above)?
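
     One simple formalization of "how connected" is degree centrality: count each node's incoming links (optionally weighting by similarity score), then rank a page's outgoing links so that targets with the fewest backlinks come first. A minimal sketch of that idea in Python; the adjacency-map representation and the plain unweighted count are illustrative assumptions, not a definitive ranking scheme:

         from collections import defaultdict

         def backlink_counts(links):
             """links: dict mapping page -> iterable of pages it links to."""
             counts = defaultdict(int)
             for src, targets in links.items():
                 for dst in targets:
                     counts[dst] += 1
             return counts

         def prioritized_targets(page, links):
             """Order a page's outgoing links so poorly connected targets come first."""
             counts = backlink_counts(links)
             return sorted(links[page], key=lambda dst: counts[dst])

         # Example: B has one backlink, C has two, so A's link to B is prioritized.
         links = {"A": ["B", "C"], "B": ["A"], "C": ["A"], "D": ["C"]}
         print(prioritized_targets("A", links))  # ['B', 'C']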

    Read the article

  • Compact data structure for storing a large set of integral values

    - by Odrade
     I'm working on an application that needs to pass around large sets of Int32 values. The sets are expected to contain ~1,000,000-50,000,000 items, where each item is a database key in the range 0-50,000,000. I expect the distribution of ids in any given set to be effectively random over this range. The operations I need on the set are dirt simple:

     - Add a new value
     - Iterate over all of the values

     There is a serious concern about the memory usage of these sets, so I'm looking for a data structure that can store the ids more efficiently than a simple List<int> or HashSet<int>. I've looked at BitArray, but that can be wasteful depending on how sparse the ids are. I've also considered a bitwise trie, but I'm unsure how to calculate the space efficiency of that solution for the expected data. A Bloom filter would be great, if only I could tolerate the false negatives.

     I would appreciate any suggestions of data structures suitable for this purpose. I'm interested in both out-of-the-box and custom solutions.

     EDIT: To answer your questions:

     - No, the items don't need to be sorted.
     - By "pass around" I mean both pass between methods and serialize and send over the wire. I clearly should have mentioned this.
     - There could be a decent number of these sets in memory at once (~100).
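
     A quick size check suggests a plain bitmap is hard to beat for these parameters: 50,000,000 possible keys at one bit each is about 6.25 MB regardless of how many ids are present, versus roughly 200 MB of raw 4-byte ints for a 50-million-item List<int>; and with at least 1,000,000 random items in a 50,000,000 range, the set is dense enough that the bitmap's fixed cost wins. A sketch of the structure in Python for illustration (the asker's .NET code would use BitArray the same way):

         class BitmapIntSet:
             """Fixed-universe set of ints in [0, capacity), one bit per value."""

             def __init__(self, capacity=50_000_000):
                 self.capacity = capacity
                 self.bits = bytearray((capacity + 7) // 8)  # ~6.25 MB for 50M keys

             def add(self, value):
                 self.bits[value >> 3] |= 1 << (value & 7)

             def __iter__(self):
                 for byte_index, byte in enumerate(self.bits):
                     if byte:  # skip empty bytes cheaply
                         base = byte_index << 3
                         for bit in range(8):
                             if byte & (1 << bit):
                                 yield base + bit

         s = BitmapIntSet()
         s.add(42)
         s.add(49_999_999)
         print(list(s))  # [42, 49999999]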

    Read the article

  • Rewriting jQuery to plain old javascript - are the performance gains worth it?

    - by Swader
     Since jQuery is an incredibly easy and banal library, I've developed a rather complex project fairly quickly with it. The entire interface is jQuery based, and memory is cleaned regularly to maintain optimum performance. Everything works very well in Firefox, and exceptionally so in Chrome (other browsers are of no concern for me, as this is not a commercial or publicly available product).

     What I'm wondering now is: since pure plain old JavaScript is really not a complicated language to master, would it be performance-enhancing to rewrite the whole thing in plain old JS, and if so, how much of a boost would you expect to get from it? If the answers prove positive enough, I'll go ahead and do it, run a benchmark and report back with the precise findings.

     Cheers

     Edit: Thanks guys, valuable insight. The purpose was not to "re-invent the wheel"; it was just for experience and personal improvement. Just because something exists doesn't mean you shouldn't explore it in greater detail, know how it works or try to recreate it. This is the same reason I seldom use frameworks: I would much rather use my own code, iron it out and gain massive experience doing it, than start off by using someone else's code, regardless of how ironed out it is. Anyway, won't be doing it, thanks for saving me the effort :)

    Read the article

  • Rails: Rendered JS file doesn't execute using UJS

    - by Hassinus
     I would like to display a Rails edit form using JS instead of redirecting with HTML. To do this, I use UJS for the edit link:

         <%= link_to "Edit user info", edit_user_path(1), :remote => true %>

     Then the "edit" action of the Users controller looks like this (simplified version):

     controllers/users_controller.rb:

         def edit
           # Step 1: Get the edit HTML form
           @html = render_to_string(:template => "users/edit.html")
           # Step 2: Use JS to display the form in the correct place
           render "users/edit.js"
         end

     As you may guess, I have two views. The HTML version of the "edit" action contains the form in HTML format. Let's consider a test version:

     views/users/edit.html.erb:

         <h1>This is just a test</h1>

     The JS version will display the form in the correct place, using jQuery for example. Again, for test purposes, let's just pop up the HTML text:

     views/users/edit.js.erb:

         alert("<%= @html %>");

     The problem is that nothing is executed (no popup). Using the inspector (from the Chrome web browser), I see the response come back as text:

         alert("<h1>This is just a test</h1>");

     Do you have any idea? Why is the rendered JS not executed? Thanks in advance.

    Read the article

  • Optimizing this "Boundarize" method for Numerics in Ruby

    - by mstksg
     I'm extending Numeric with a method I call "boundarize", for lack of a better name; I'm sure there are actually real names for this. Its basic purpose is to reset a given point to be within a boundary, that is, "wrapping" a point around the boundary. If the area is between 0 and 100 and the point goes to -1, then -1.boundarize(0,100) == 99 (going one too far to the negative wraps the point around to one from the max), and 102.boundarize(0,100) == 2.

     It's a very simple function to implement: when the number is below the minimum, simply add (max-min) until it's in the boundary; when the number is above the maximum, simply subtract (max-min) until it's in the boundary. One thing I also need to account for is that there are cases where I don't want to include the minimum in the range, and cases where I don't want to include the maximum. This is specified as an argument.

     However, I fear that my current implementation is horribly, terribly, grossly inefficient. And because it has to be re-run every time something moves on the screen, it is one of the bottlenecks of my application. Anyone have any ideas?

         module Boundarizer
           def boundarize min=0, max=1, allow_min=true, allow_max=false
             raise "Improper boundaries #{min}/#{max}" if min >= max
             new_num = self

             if allow_min
               while new_num < min
                 new_num += (max - min)
               end
             else
               while new_num <= min
                 new_num += (max - min)
               end
             end

             if allow_max
               while new_num > max
                 new_num -= (max - min)
               end
             else
               while new_num >= max
                 new_num -= (max - min)
               end
             end

             return new_num
           end
         end

         class Numeric
           include Boundarizer
         end
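
     The while loops cost time proportional to how far the value has strayed, but the same wrap is a constant-time modulo: the result is just ((x - min) mod (max - min)) + min for the default [min, max) case. A sketch of the idea in Python, whose % operator already returns a non-negative result for a positive modulus; handling the exclusive-min variants would need one extra conditional on top of this, and the exact open/closed semantics are an assumption about the intent:

         def boundarize(x, lo=0, hi=1):
             """Wrap x into the half-open range [lo, hi) in O(1)."""
             if lo >= hi:
                 raise ValueError(f"Improper boundaries {lo}/{hi}")
             return (x - lo) % (hi - lo) + lo  # % is non-negative for a positive modulus

         assert boundarize(-1, 0, 100) == 99
         assert boundarize(102, 0, 100) == 2
         assert boundarize(100, 0, 100) == 0  # max excluded, as in the default flags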

    Read the article

  • Finding What You Need in R: function arguments/parameters from outside the function's package

    - by doug
     Often in R, there are a dozen functions scattered across as many packages, all of which have the same purpose but of course differ in accuracy, performance, theoretical rigor, and so on. How do you gather all of these in one place before you start your task?

     So, for instance: the generic plot function. Setting secondary ticks is much easier (IMHO) using a function outside of the base package, minor.tick(nx=n, ny=n, tick.ratio=n), found in Hmisc. Of course, that doesn't show up in plot's docstring. Likewise, the data-input arguments to plot can be supplied by an object returned from the function hexbin, again from a library outside of the base installation (where plot resides). What would be great, obviously, is a programmatic way to gather these function arguments from the various libraries and put them in a single namespace.

     Edit (trying to re-state my example above more clearly): the arguments to plot supplied in the base package for, e.g., setting the axis tick frequency are xaxp/yaxp; however, one can also set axis tick frequency via a function outside of the base package, as in the minor.tick function from the Hmisc package; but you wouldn't know that just from looking at the plot method signature. Is there a meta function in R for this?

     So far, as I come across them, I've been manually gathering them in a TextMate 'snippet' (along with the attendant library imports). This isn't that difficult or time consuming, but I can only update my snippet as I find out about these additional arguments/parameters. Is there a canonical R way to do this, or at least an easier way?

     Just in case that wasn't clear, I am not talking about the case where multiple packages provide functions directed at the same statistic or view (e.g., boxplot in the base package, boxplot.matrix in gplots, and bplots in Rlab). What I am talking about is the case in which the function name is the same across two or more packages.

    Read the article

  • Creating a web application that can be extended by plugins/modules

    - by Adam Pope
     I'm currently involved with developing a C# CMS-like web application which will be used to standardise our development of websites. From the outset, the idea has been to keep the core as simple as possible to avoid the complexity and menu/option overload that blights many CMS systems. This simple core is now complete and working very well.

     We envisaged that the system would be able to accept plugins or modules which would extend the core functionality to suit a given project's needs. These would also be re-usable across projects. For example, a basic catalogue and shopping basket might be needed. All the code for such extensions should be in separate assemblies. They should be able to provide their own admin interfaces and front-end code from this library. The system should search for available plugins and give the admin user the option to enable/disable each feature. (This is all very much like WordPress plugins.)

     It is crucial that we attack this problem in the correct way, so I'm trying to perform as much due diligence as possible before jumping in. I am aware of the Plugin pattern (http://msdn.microsoft.com/en-us/library/ms972962.aspx) and have read some articles on its use. It seems reasonable, but I'm not convinced it's necessarily the correct/best technique for this situation; it seems more suited to processing applications (image/audio manipulation, maths etc).

     Are there any other options for achieving this kind of UI extensibility, or is the plugin pattern the way to go? I'd also be interested if anybody has links to articles that explain using the plugin pattern for this purpose.
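
     Independent of the .NET specifics, the discovery half of this problem usually reduces to: scan a known location for modules, keep the ones exposing the agreed contract, and filter by the admin's enable/disable flags. A minimal language-agnostic sketch of that shape in Python; the package layout, class names and flag scheme here are all hypothetical, not a prescription for the asker's architecture:

         import importlib
         import pkgutil

         class PluginBase:
             """Contract each plugin must implement (illustrative)."""
             name = "unnamed"

             def admin_view(self):   # plugin supplies its own admin UI fragment
                 raise NotImplementedError

             def render(self, request):  # and its own front-end output
                 raise NotImplementedError

         def discover_plugins(package_name="plugins", enabled=None):
             """Import every module under the plugins package and collect
             PluginBase subclasses, honoring the admin's enabled set."""
             package = importlib.import_module(package_name)
             found = []
             for info in pkgutil.iter_modules(package.__path__):
                 module = importlib.import_module(f"{package_name}.{info.name}")
                 for obj in vars(module).values():
                     if (isinstance(obj, type) and issubclass(obj, PluginBase)
                             and obj is not PluginBase
                             and (enabled is None or obj.name in enabled)):
                         found.append(obj())
             return found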

    Read the article

  • Use MySQL trigger to update another table when duplicate key found

    - by Jason
     Been scratching my head on this one, hoping one of you kind people can direct me towards solving this problem. I have a MySQL table of customers. It contains a lot of data, but for the purpose of this question we only need to worry about 4 columns: 'ID', 'Firstname', 'Lastname', 'Postcode'.

     The problem is, the table contains a lot of duplicated customers. A new table is being created where each customer is unique; for us, a unique customer is identified by 'Firstname', 'Lastname' and 'Postcode'. However (this is the important bit), we need to ensure each new "unique" customer record can also be matched to the original multiple entries for that customer in the original table.

     I believe the best way to do this is to have a third table with the columns 'NewUniqueID' and 'OldCustomerID', so we can search this table for 'NewUniqueID' = '123' and it would return multiple 'OldCustomerID' values where appropriate.

     I am hoping to make this work using a trigger and the ON DUPLICATE KEY syntax. What would happen is as follows:

     1. A query is run taking the old customer table and inserting it into the new unique table (a standard INSERT ... SELECT query).
     2. On duplicate key, continue adding records, but add one entry to the third table noting the 'NewUniqueID' that duped along with the 'OldCustomerID' of the record we were trying to insert.

     Hope this makes sense; my apologies if it isn't clear. I welcome and appreciate any thoughts on this one! Many thanks, Jason

    Read the article

  • C# Multiple constraints

    - by John
     I have an application with lots of generics and IoC. I have an interface like this:

         public interface IRepository<TType, TKeyType> : IRepo

     Then I have a bunch of tests for my different implementations of IRepository. Many of the objects have dependencies on other objects, so for the purpose of testing I want to just grab one that is valid. I can define a separate method for each of them:

         public static EmailType GetEmailType()
         {
             return ContainerManager.Container.Resolve<IEmailTypeRepository>().GetList().FirstOrDefault();
         }

     But I want to make this generic so it can be used to get any object from the repository it works with. I defined this:

         public static R GetItem<T, R>() where T : IRepository<R, int>
         {
             return ContainerManager.Container.Resolve<T>().GetList().FirstOrDefault();
         }

     This works fine for the implementations that use an integer for the key. But I also have repositories that use string, so I do this now:

         public static R GetItem<T, R, W>() where T : IRepository<R, W>

     This works fine too, but I'd like to restrict W to either int or string. Is there a way to do that? The short version of the question: can I constrain a generic parameter to one of multiple types?

    Read the article

  • Can an interface define the signature of a c#-class

    - by happyclicker
     I have a .NET app that provides a mechanism to extend the app with plugins. Each plugin must implement a plugin interface and must furthermore provide a constructor that receives one parameter (a resource context). During the instantiation of the plugin class I check via reflection whether the needed constructor exists; if yes, I instantiate the class (via reflection). If the constructor does not exist, I throw an exception saying that the plugin could not be created because the desired constructor is not available.

     My question is whether there is a way to declare the signature of a constructor in the plugin interface, so that everyone who implements the plugin interface must also provide a constructor with the desired signature. This would ease the creation of plugins. I don't think such a possibility exists, because such a feature does not fall within the main purpose interfaces were designed for, but perhaps someone knows a statement that does this; something like:

         public interface IPlugin
         {
             ctor(IResourceContext resourceContext);
             int AnotherPluginFunction();
         }

     I want to add that I don't want to change the constructor to be parameterless and then set the resource context through a property, because that would make the creation of plugins much more complicated. The people who write plugins are not people with deep programming experience; the plugins are used to calculate statistical data that will be visualized by the app.

    Read the article

  • Can one connection get details of another? Or, how can I get the most detailed pending transaction

    - by bob-the-destroyer
     Is there a MySQL statement which provides full details of any other open connection or user? For this particular case, on MyISAM tables specifically. Looking at MySQL's SHOW TABLE STATUS documentation, it's missing some very important information for my purpose.

     For example: remote ODBC connection one is inserting several thousand records, which due to a slow connection speed can take up to an hour. TCP connection two, using PHP on the server's localhost, is running select queries with aggregate functions on that data. Before allowing connection two to run those queries, I'd like connection two to first check that there are no pending inserts on those specific tables from any other connection, so it can instead wait until all the data is available. If the table is currently being written to, I'd like to spit back to the user of connection two an approximation of how much longer to wait, based on the number of pending inserts. Ideally, per table, I'd like to get back via a query the timestamp when connection one began the write, the total inserts left to be done, and the total inserts already completed. Instead of insert counts, even knowing the number of bytes written and left to write would work just fine here.

     Obviously, since connection two is a TCP connection via a PHP script, all I can really use in that script is some sort of query. I suppose if I have to, since it is on localhost, I can exec() a mysql command-line option that outputs this info, if that's the only way, but I'd rather not. I suppose I could simply update a custom-made transaction log before and after this massive insert task, which the PHP script can check, but hopefully there's already a built-in MySQL feature I can take advantage of.

    Read the article

  • Design an Application That Stores and Processes Files

    - by phasetwenty
     I'm tasked with writing an application that acts as a central storage point for files (usually document formats) provided by other applications. It also needs to take commands like "file 395 needs a copy in X format", at which point some work is offloaded to a third-party application.

     I'm having trouble coming up with a strategy for this. I'd like to keep the design as simple as possible, so I'd like to avoid big extra frameworks or techniques like threads for as long as it makes sense. The clients are expected to be web applications (for example, one is a Django application that receives files from our customers; the others are not yet implemented). The platform it will be running on is likely going to be Python on Linux, unless I have a strong argument to use something else.

     In the beginning I thought I could fit the information I wanted to communicate into the filenames and let my application parse the filename to figure out what it needed to do, but this is proving too inflexible given the amount of information I'm realizing I need to make available. Another idea is to pair FTP with a database used as a communication medium (the client uploads a file and updates the database with a command as a row in a table), but I don't like this idea because adding commands (a known change) looks like it will require adding code as well as changing database schemas. It will also muddy up the interface my clients will have to use. I looked into Pyro to let applications communicate more directly, but I don't like the idea of running an extra nameserver for this one purpose, and I also don't see a good way to do file transfer within that framework.

     What I'm looking for is techniques and/or technologies applicable to my problem. At the simplest level, I need the ability to accept files and messages with them.
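
     One low-ceremony way to keep the command side extensible is to carry each command as a small JSON message alongside the file and route it through a dispatcher table, so adding a command means registering one handler function rather than changing a schema. A rough sketch of that idea; the command names, fields and handlers here are made up for illustration:

         import json

         def handle_store(payload):
             print(f"storing file {payload['file_id']}")

         def handle_convert(payload):
             print(f"converting file {payload['file_id']} to {payload['format']}")

         # Adding a command = adding one entry here; no schema change needed.
         HANDLERS = {
             "store": handle_store,
             "convert": handle_convert,
         }

         def dispatch(raw_message):
             message = json.loads(raw_message)
             handler = HANDLERS.get(message["action"])
             if handler is None:
                 raise ValueError(f"unknown command: {message['action']}")
             handler(message)

         dispatch('{"action": "convert", "file_id": 395, "format": "pdf"}')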

    Read the article

  • Concise description of how .h and .m files interact in objective c?

    - by RJ86
     I have just started learning Objective-C and am really confused about how the .h and .m files interact with each other. This simple program has 3 files:

     Fraction.h:

         #import <Foundation/NSObject.h>

         @interface Fraction : NSObject {
             int numerator;
             int denominator;
         }

         - (void) print;
         - (void) setNumerator: (int) n;
         - (void) setDenominator: (int) d;
         - (int) numerator;
         - (int) denominator;
         @end

     Fraction.m:

         #import "Fraction.h"
         #import <stdio.h>

         @implementation Fraction
         - (void) print {
             printf("%i/%i", numerator, denominator);
         }
         - (void) setNumerator: (int) n {
             numerator = n;
         }
         - (void) setDenominator: (int) d {
             denominator = d;
         }
         - (int) denominator {
             return denominator;
         }
         - (int) numerator {
             return numerator;
         }
         @end

     main.m:

         #import <stdio.h>
         #import "Fraction.h"

         int main(int argc, char *argv[]) {
             Fraction *frac = [[Fraction alloc] init];
             [frac setNumerator: 1];
             [frac setDenominator: 3];
             printf("The fraction is: ");
             [frac print];
             printf("\n");
             [frac release];
             return 0;
         }

     From what I understand, the program initially starts running in the main.m file. I understand the basic C concepts, but this whole "class" and "instance" stuff is really confusing. In the Fraction.h file, the @interface defines numerator and denominator as integers, but what else is it doing below with the (void)? And what is the purpose of re-declaring them below? I am also quite confused as to what is happening with the (void) and (int) portions of Fraction.m, and how all of this is brought together in the main.m file. I guess what I am trying to say is that this seems like a fairly easy program with which to learn how the different portions work with each other; could anyone explain in non-tech jargon?

    Read the article

  • GUID or int entity key with SQL Compact/EF4?

    - by David Veeneman
     This is a follow-up to an earlier question I posted on EF4 entity keys with SQL Compact. SQL Compact doesn't allow server-generated identity keys, so I am left with creating my own keys as objects are added to the ObjectContext. My first choice would be an integer key, and the previous answer linked to a blog post that shows an extension method that uses the Max operator with a selector expression to find the next available key:

         public static TResult NextId<TSource, TResult>(this ObjectSet<TSource> table,
             Expression<Func<TSource, TResult>> selector) where TSource : class
         {
             TResult lastId = table.Any() ? table.Max(selector) : default(TResult);
             if (lastId is int)
             {
                 lastId = (TResult)(object)(((int)(object)lastId) + 1);
             }
             return lastId;
         }

     Here's my take on the extension method: it will work fine if the ObjectContext that I am working with has an unfiltered entity set. In that case, the ObjectContext will contain all rows from the data table, and I will get an accurate result. But if the entity set is the result of a query filter, the method will return the last entity key in the filtered entity set, which will not necessarily be the last key in the data table. So I think the extension method won't really work.

     At this point, the obvious solution seems to be to simply use a GUID as the entity key. That way, I only need to call the Guid.NewGuid() method to set the ID property before I add a new entity to my ObjectContext.

     Here is my question: is there a simple way of getting the last primary key in the data store from EF4 (without having to create a second ObjectContext for that purpose)? Any other reason not to take the easy way out and simply use a GUID? Thanks for your help.

    Read the article

  • Looking for Go equivalent of scanf.

    - by Stephen Hsu
     I'm looking for the Go equivalent of scanf(). I tried the following code:

         package main

         import (
             "scanner"
             "os"
             "fmt"
         )

         func main() {
             var s scanner.Scanner
             s.Init(os.Stdin)
             s.Mode = scanner.ScanInts
             tok := s.Scan()
             for tok != scanner.EOF {
                 fmt.Printf("%d ", tok)
                 tok = s.Scan()
             }
             fmt.Println()
         }

     I run it with input from a text file containing a line of integers, but it always outputs -3 -3 ... And how do I scan a line composed of a string and some integers? By changing the mode whenever a new data type is encountered?

     The package documentation says: "Package scanner: A general-purpose scanner for UTF-8 encoded text." But it seems that the scanner is not for general use.

     Updated code:

         func main() {
             n := scanf()
             fmt.Println(n)
             fmt.Println(len(n))
         }

         func scanf() []int {
             nums := new(vector.IntVector)
             reader := bufio.NewReader(os.Stdin)
             str, err := reader.ReadString('\n')
             for err != os.EOF {
                 fields := strings.Fields(str)
                 for _, f := range fields {
                     i, _ := strconv.Atoi(f)
                     nums.Push(i)
                 }
                 str, err = reader.ReadString('\n')
             }
             r := make([]int, nums.Len())
             for i := 0; i < nums.Len(); i++ {
                 r[i] = nums.At(i)
             }
             return r
         }

    Read the article

  • Rendering maps from raw SVG data in Java

    - by Lunikon
    In an application of mine I have to display locations and great circle paths in a map which is rendered to PNG and then displayed on the web. For this I simply use a world map (NASA's Blue Marbel in fact) scaled to various "zoom levels" as base image and only display the a part of it matching the final image size and fitting all items to be displayed. Straight forward so far. Now I came across Wikipedia's awesome blank SVG maps which contain all the country codes for easy reference and I was wondering whether it was possible to use those to have more customized colors and to highliht countries etc. So I did a bit of googling and was looking for Java libraries which would enable me to load the blank SVG map to memory allows for easy reference/selection of certain paths do manipulations of coloring, stroke widths etc render to a buffered image as the background for the great-circle paths/nodes What I came across quite often was Batik, but it looks like a really heavy framework and I'm not quite sure whether it is what I'm looking for. I have recently played around with Raphaël a bit and I like the way it handles working with vector graphics in code. If the Java framework for my purpose would feature a similar interface, that would be a nice-to-have. Any recommendations what toolset would be approriate for my purposes?

    Read the article

  • How can I build a wrapper to wait for listening on a port?

    - by BillyBBone
     Hi, I am looking for a way of programmatically testing a script written with the asyncore Python module. My test consists of launching the script in question; if a TCP listen socket is opened, the test passes. Otherwise, if the script dies before getting to that point, the test fails. The purpose of this is knowing whether a nightly build works (at least up to a point) or not.

     I was thinking the best way to test would be to launch the script in some kind of sandbox wrapper which waits for a socket request. I don't care about actually listening for anything on that port; I just want to intercept the request and use that as an indication that my test passed. I think it would be preferable to intercept the open-socket request, rather than polling at set intervals (I hate polling!). But I'm a bit out of my depth as far as how exactly to do this. Can I do this with a shell script? Or perhaps I need to override the asyncore module at the Python level?

     Thanks in advance, - B
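
     Short of hooking the listen() call itself, a pragmatic version of this wrapper starts the script as a subprocess and then tries to connect until the port answers or a deadline expires; it is polling, but bounded and confined to test time. A minimal sketch, where the script name and port number are placeholders:

         import socket
         import subprocess
         import sys
         import time

         def wait_for_listener(port, timeout=30.0, host="127.0.0.1"):
             """Return True once something accepts a TCP connection on the port."""
             deadline = time.monotonic() + timeout
             while time.monotonic() < deadline:
                 try:
                     socket.create_connection((host, port), timeout=1.0).close()
                     return True
                 except OSError:
                     time.sleep(0.2)  # connection refused or timed out; retry
             return False

         proc = subprocess.Popen([sys.executable, "script_under_test.py"])  # placeholder
         try:
             print("PASS" if wait_for_listener(9000) else "FAIL")  # placeholder port
         finally:
             proc.terminate()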

    Read the article

  • Howto use predicates in LINQ to Entities for Entity Framework objects

    - by user274947
     I'm using LINQ to Entities for Entity Framework objects in my data access layer. My goal is to filter as much as I can in the database, without applying filtering logic to in-memory results. For that purpose, the business logic layer passes a predicate to the data access layer; I mean a Func<MyEntity, bool>. So, if I use this predicate directly, like

         public IQueryable<MyEntity> GetAllMatchedEntities(Func<MyEntity, Boolean> isMatched)
         {
             return _Context.MyEntities.Where(x => isMatched(x));
         }

     I get the exception:

         [System.NotSupportedException] --- {"The LINQ expression node type 'Invoke' is not supported in LINQ to Entities."}

     The solution in that question suggests using the AsExpandable() method from the LINQKit library. But again, using

         public IQueryable<MyEntity> GetAllMatchedEntities(Func<MyEntity, Boolean> isMatched)
         {
             return _Context.MyEntities.AsExpandable().Where(x => isMatched(x));
         }

     I get the exception:

         Unable to cast object of type 'System.Linq.Expressions.FieldExpression' to type 'System.Linq.Expressions.LambdaExpression'

     Is there a way to use a predicate in a LINQ to Entities query for Entity Framework objects so that it is correctly transformed into a SQL statement? Thank you.

    Read the article

  • How can I get git to work with a remote server?

    - by Adrienne
     I am the CM person for a small company that just started using Git. We have two Git repositories currently hosted on a Windows box that is our all-purpose Windows server. But we just set up a dedicated server for our CM software on an Ubuntu Linux server named "Callisto". So I created a test Git repository on Callisto and gave its directory all of the proper permissions recursively. I had the sysadmin create a login for me on Callisto, and I created a key to use for logging in via SSH. I set up my key to use a passphrase; I don't know if that could be contributing to my problems?

     Anyway, I know my SSH login works because I tested it through PuTTY. But even after hours of trials and head scratching, I can't get my Windows Git bash (msysGit) to talk to Callisto to push or pull Callisto's git repository files. I keep getting "fatal: The remote end hung up unexpectedly." And I've even gotten an error saying that Git doesn't recognize the test repository on Callisto as a git repository.

     I read online that the "fatal ... hung up unexpectedly" error is usually a problem with the server connection or permissions. So what am I missing or overlooking here? And why doesn't a pull using the git:// protocol work, since that only needs read-only access? Group and public permissions for the git repository's directory on Callisto are set to read and execute, but not write.

     If anyone could help, I would be so grateful. Thank you.

    Read the article

  • Can you recommend an easy to use easy to develop CMS?

    - by el_at_yahoo
     We need some easy way to manage web sites at our company, and we are evaluating some CMS tools for this purpose. We do not yet know what features the sites will need to have (but it will definitely be something with lots of functionality), so we are looking for something with lots of features and, more importantly, something easily extensible (if it does not have some feature, we at least want to be able to build it ourselves). We have no experience with content management systems, but we do with Java, so it has to be something written in Java.

     We evaluated some tools, and from our perspective the following seem the most promising (in no particular order):

     - OpenCMS
     - dotCMS (Community Edition vs Enterprise Edition)
     - InfoGlue
     - Alfresco (EE vs CE)
     - Magnolia (EE vs EE Pro vs CE)
     - Jahia (CE vs EE)

     Since we have no experience with any of them, we were wondering if those of you who have can share some information about how good they are, or how easily they can be used and extended. I know similar questions have been asked on SO, and I also know this is highly subjective and people will vote for closing it as soon as it is posted, but for us it is important to know what difficulties other people have faced using the above tools (we don't want to walk a path that leads nowhere if other people already know it leads nowhere). Others could then vote on the posted answers if they agree or not.

     From your experience, which of the above-mentioned CMSs is the most easily extensible, the easiest to use, the easiest to learn, etc.? Thank you, and Happy Holidays to all.

    Read the article
