Search Results

Search found 13757 results on 551 pages for 'decimal format'.

  • Moses v1.0 multi-language ini file

    - by Milan Kocic
    I was working with mosesserver 0.91 and everything worked fine, but now there is version 1.0 and nothing is the same as before. Here is my situation: I want multi-language translation, from Arabic to English and from English to Arabic. All the data and configuration files I have work with version 0.91 of mosesserver. Here is my config file:

        #########################
        ### MOSES CONFIG FILE ###
        #########################

        # D - decoding path, R - reordering model, L - language model
        [translation-systems]
        ar-en D 0 R 0 L 0
        en-ar D 1 R 1 L 1

        # input factors
        [input-factors]
        0

        # mapping steps
        [mapping]
        0 T 0
        1 T 1

        # translation tables: table type (hierarchical(0), textual (0), binary (1)), source-factors, target-factors, number of scores, file
        # OLD FORMAT is still handled for back-compatibility
        # OLD FORMAT translation tables: source-factors, target-factors, number of scores, file
        # OLD FORMAT a binary table type (1) is assumed
        [ttable-file]
        1 0 0 5 /mnt/models/ar-en/phrase-table/phrase-table
        1 0 0 5 /mnt/models/en-ar/phrase-table/phrase-table

        # no generation models, no generation-file section

        # language models: type(srilm/irstlm), factors, order, file
        [lmodel-file]
        1 0 5 /mnt/models/ar-en/language-model/en.qblm.mm
        1 0 5 /mnt/models/en-ar/language-model/ar.lm.d1.blm.mm

        # limit on how many phrase translations e for each phrase f are loaded
        # 0 = all elements loaded
        [ttable-limit]
        20

        # distortion (reordering) files
        [distortion-file]
        0-0 wbe-msd-bidirectional-fe-allff 6 /mnt/models/ar-en/reordering-table/reordering-table.wbe-msd-bidirectional-fe.gz
        0-0 wbe-msd-bidirectional-fe-allff 6 /mnt/models/en-ar/reordering-model/reordering-table.wbe-msd-bidirectional-fe.gz

        # distortion (reordering) weight
        [weight-d]
        0.3
        0.3

        # lexicalised distortion weights
        [weight-lr]
        0.3 0.3 0.3 0.3 0.3 0.3
        0.3 0.3 0.3 0.3 0.3 0.3

        # language model weights
        [weight-l]
        0.5000
        0.5000

        # translation model weights
        [weight-t]
        0.2 0.2 0.2 0.2 0.2
        0.2 0.2 0.2 0.2 0.2

        # no generation models, no weight-generation section

        # word penalty
        [weight-w]
        -1
        -1

        [distortion-limit]
        12

    So please, can someone help me rewrite this config file so that it works in version 1.0? I also need some Python sample code for translation. I am using XML-RPC in Python, and earlier I sent HTTP requests with:

        import xmlrpclib
        client = xmlrpclib.ServerProxy('http://localhost:8080')
        client.translate({'text': 'some text', 'system': 'en-ar'})

    but now it seems there is no longer a 'system' parameter, and Moses always uses the default settings.
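    A hedged sketch of one common workaround, not taken from the question: if a single mosesserver 1.0 instance no longer selects between systems via a 'system' parameter, run one server per translation direction (the ports below and the single-system moses.ini files they would load are assumptions) and pick the proxy on the client side:

        import xmlrpclib

        # Hypothetical setup: one mosesserver process per direction, each
        # started with its own single-system moses.ini on its own port.
        SERVERS = {
            'ar-en': xmlrpclib.ServerProxy('http://localhost:8080'),
            'en-ar': xmlrpclib.ServerProxy('http://localhost:8081'),
        }

        def translate(text, system):
            # The 'system' argument only chooses which server to call;
            # each server itself knows just one translation direction.
            reply = SERVERS[system].translate({'text': text})
            return reply['text']

        print translate('some text', 'en-ar')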

  • How to limit traffic using multicast over localhost

    - by Shane Holloway
    I'm using multicast UDP over localhost to implement a loose collection of cooperative programs running on a single machine. The following code works well on Mac OS X, Windows and Linux. The flaw is that the code also receives UDP packets from outside the localhost network. For example, sendSock.sendto(pkt, ('192.168.0.25', 1600)) is received by my test machine when sent from another box on my network.

        import platform, time, socket, select

        addr = ("239.255.2.9", 1600)

        sendSock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
        sendSock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 24)
        sendSock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_IF, socket.inet_aton("127.0.0.1"))

        recvSock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
        recvSock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, True)
        if hasattr(socket, 'SO_REUSEPORT'):
            recvSock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEPORT, True)
        recvSock.bind(("0.0.0.0", addr[1]))
        status = recvSock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP,
            socket.inet_aton(addr[0]) + socket.inet_aton("127.0.0.1"))

        while 1:
            pkt = "Hello host: {1} time: {0}".format(time.ctime(), platform.node())
            print "SEND to: {0} data: {1}".format(addr, pkt)
            r = sendSock.sendto(pkt, addr)

            while select.select([recvSock], [], [], 0)[0]:
                data, fromAddr = recvSock.recvfrom(1024)
                print "RECV from: {0} data: {1}".format(fromAddr, data)

            time.sleep(2)

    I've attempted recvSock.bind(("127.0.0.1", addr[1])), but that prevents the socket from receiving any multicast traffic. Is there a proper way to configure recvSock to only accept multicast packets from the 127/24 network, or do I need to test the address of each received packet?
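    A minimal sketch of the fallback mentioned at the end of the question, assuming no portable socket option restricts group membership to loopback senders: test the source address of each received packet and drop everything outside 127.0.0.0/8:

        import select

        def recv_local_only(recvSock, bufsize=1024):
            # Drain all currently readable packets, keeping only those whose
            # source address lies in the loopback network (127.0.0.0/8).
            packets = []
            while select.select([recvSock], [], [], 0)[0]:
                data, fromAddr = recvSock.recvfrom(bufsize)
                if fromAddr[0].startswith("127."):
                    packets.append((fromAddr, data))
            return packets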

  • database design to speed up hibernate querying of large dataset

    - by paddydub
    I currently have the tables below, representing a bus network, mapped in Hibernate and accessed from a Spring MVC based bus route planner. I'm trying to make my route planner application perform faster; I load all of these tables into Lists to perform the route planner logic. I would appreciate any ideas on how to improve performance, or any suggestions of another way to approach the problem of handling a large set of data.

    Coordinate Connections table (INT, INT, INT), containing 50,000 coordinate connections:

        ID  FROMCOORDID  TOCOORDID
        1   1            2
        2   1            17
        3   1            63
        4   1            64
        5   1            65
        6   1            95

    Coordinate table (INT, DECIMAL, DECIMAL), containing 4,700 coordinates:

        ID  LAT        LNG
        0   59.352669  -7.264341
        1   59.352669  -7.264341
        2   59.350012  -7.260653
        3   59.337585  -7.189798
        4   59.339221  -7.193582
        5   59.341408  -7.205888

    Bus Stop table (INT, INT, INT), containing 15,000 stops:

        StopID      RouteID  COORDINATEID
        1000100001  100      17
        1000100002  100      18
        1000100003  100      19
        1000100004  100      20
        1000100005  100      21
        1000100006  100      22
        1000100007  100      23

    This is how long it takes to load all the data from each table:

        stop.findAll = 148ms, stops.size: 15670
        Hibernate: select coordinate0_.COORDINATEID as COORDINA1_2_, coordinate0_.LAT as LAT2_, coordinate0_.LNG as LNG2_ from COORDINATES coordinate0_
        coord.findAll = 51ms, coordinates.size: 4704
        Hibernate: select coordconne0_.COORDCONNECTIONID as COORDCON1_3_, coordconne0_.DISTANCE as DISTANCE3_, coordconne0_.FROMCOORDID as FROMCOOR3_3_, coordconne0_.TOCOORDID as TOCOORDID3_ from COORDCONNECTIONS coordconne0_
        coordinateConnectionDao.findAll = 238ms; coordConnections.size: 48132

    Hibernate annotations:

        @Entity
        @Table(name = "STOPS")
        public class Stop implements Serializable {
            @Id
            @GeneratedValue
            @Column(name = "COORDINATEID")
            private Integer CoordinateID;
            @Column(name = "LAT")
            private double latitude;
            @Column(name = "LNG")
            private double longitude;
        }

        @Table(name = "COORDINATES")
        public class Coordinate {
            @Id
            @GeneratedValue
            @Column(name = "COORDINATEID")
            private Integer CoordinateID;
            @Column(name = "LAT")
            private double latitude;
            @Column(name = "LNG")
            private double longitude;
        }

        @Entity
        @Table(name = "COORDCONNECTIONS")
        public class CoordConnection {
            @Id
            @GeneratedValue
            @Column(name = "COORDCONNECTIONID")
            private Integer CoordinateID;

            /**
             * From Coordinate_id value
             */
            @Column(name = "FROMCOORDID", nullable = false)
            private int fromCoordID;

            /**
             * To Coordinate_id value
             */
            @Column(name = "TOCOORDID", nullable = false)
            private int toCoordID;
            //private Coordinate toCoordID;
        }

  • NHibernate/Hibernate, lookup tables and object design

    - by Simon G
    Hi, I've got two tables. Invoice, with columns CustomerID, InvoiceDate, Value and InvoiceTypeID (CustomerID and InvoiceDate make up a composite key), and InvoiceType, with InvoiceTypeID and InvoiceTypeName columns. I know I can create my objects like:

        public class Invoice
        {
            public virtual int CustomerID { get; set; }
            public virtual DateTime InvoiceDate { get; set; }
            public virtual decimal Value { get; set; }
            public virtual InvoiceType InvoiceType { get; set; }
        }

        public class InvoiceType
        {
            public virtual int InvoiceTypeID { get; set; }
            public virtual string InvoiceTypeName { get; set; }
        }

    so the generated SQL would look something like:

        SELECT CustomerID, InvoiceDate, Value, InvoiceTypeID
        FROM Invoice
        WHERE CustomerID = x AND InvoiceDate = y

        SELECT InvoiceTypeID, InvoiceTypeName
        FROM InvoiceType
        WHERE InvoiceTypeID = z

    But rather than having two select queries executed to retrieve the data, I would rather have one. I would also like to avoid using a child object for simple lookup lists. So my object would look something like:

        public class Invoice
        {
            public virtual int CustomerID { get; set; }
            public virtual DateTime InvoiceDate { get; set; }
            public virtual decimal Value { get; set; }
            public virtual int InvoiceTypeID { get; set; }
            public virtual string InvoiceTypeName { get; set; }
        }

    And my SQL would look something like:

        SELECT CustomerID, InvoiceDate, Value, InvoiceTypeID
        FROM Invoice
        INNER JOIN InvoiceType ON Invoice.InvoiceTypeID = InvoiceType.InvoiceTypeID
        WHERE CustomerID = x AND InvoiceDate = y

    My question is: how do I create the mapping for this? I've tried using join, but it tried to join using CustomerID and InvoiceDate. Am I missing something obvious? Thanks

  • Re-using aggregate level formulas in SQL - any good tactics?

    - by Cade Roux
    Imagine this case, but with a lot more component buckets and a lot more intermediates and outputs. Many of the intermediates are calculated at the detail level, but a few things are calculated at the aggregate level:

        DECLARE @Profitability AS TABLE
            (
             Cust INT NOT NULL
            ,Category VARCHAR(10) NOT NULL
            ,Income DECIMAL(10, 2) NOT NULL
            ,Expense DECIMAL(10, 2) NOT NULL
            ) ;

        INSERT INTO @Profitability VALUES ( 1, 'Software', 100, 50 ) ;
        INSERT INTO @Profitability VALUES ( 2, 'Software', 100, 20 ) ;
        INSERT INTO @Profitability VALUES ( 3, 'Software', 100, 60 ) ;
        INSERT INTO @Profitability VALUES ( 4, 'Software', 500, 400 ) ;
        INSERT INTO @Profitability VALUES ( 5, 'Hardware', 1000, 550 ) ;
        INSERT INTO @Profitability VALUES ( 6, 'Hardware', 1000, 250 ) ;
        INSERT INTO @Profitability VALUES ( 7, 'Hardware', 1000, 700 ) ;
        INSERT INTO @Profitability VALUES ( 8, 'Hardware', 5000, 4500 ) ;

        SELECT Cust
              ,Profit = SUM(Income - Expense)
              ,Margin = SUM(Income - Expense) / SUM(Income)
        FROM @Profitability
        GROUP BY Cust

        SELECT Category
              ,Profit = SUM(Income - Expense)
              ,Margin = SUM(Income - Expense) / SUM(Income)
        FROM @Profitability
        GROUP BY Category

        SELECT Profit = SUM(Income - Expense)
              ,Margin = SUM(Income - Expense) / SUM(Income)
        FROM @Profitability

    Notice how the same formulae have to be used at the different aggregation levels. This results in code duplication. I have thought of using UDFs (either scalar, or table-valued with an OUTER APPLY, since many of the final results may share intermediates which have to be calculated at the aggregate level), but in my experience scalar and multi-statement table-valued UDFs perform very poorly. I have also thought about using more dynamic SQL and applying the formulas by name, basically. Any other tricks, techniques or tactics for keeping these kinds of formulae, which need to be applied at different levels, in sync and/or organized?

  • asp.net mvc - How to create fake test objects quickly and efficiently

    - by Simon G
    Hi, I'm currently testing the controllers in my MVC app, and I'm creating a fake repository for testing. However, I seem to be writing more code, and spending more time, on the fakes than I do on the actual repositories. Is this right? The code I have is as follows:

    Controller:

        public partial class SomethingController : Controller
        {
            IRepository repository;

            public SomethingController(IRepository rep)
            {
                repository = rep;
            }

            public virtual ActionResult Index()
            {
                // Some logic
                var model = repository.GetSomething();
                return View(model);
            }
        }

    IRepository:

        public interface IRepository
        {
            List<Something> GetSomething();
        }

    Fake repository:

        public class FakeSomethingRepository : IRepository
        {
            private List<Something> somethingList;

            public FakeSomethingRepository(List<Something> somethings)
            {
                somethingList = somethings;
            }

            public List<Something> GetSomething()
            {
                return somethingList;
            }
        }

    Fake data class:

        class FakeSomethingData
        {
            public static List<Something> CreateSomethingData()
            {
                var somethings = new List<Something>();
                for (int i = 0; i < 100; i++)
                {
                    somethings.Add(new Something
                    {
                        value1 = String.Format("value{0}", i),
                        value2 = String.Format("value{0}", i),
                        value3 = String.Format("value{0}", i)
                    });
                }
                return somethings;
            }
        }

    Actual test:

        [TestClass]
        public class SomethingControllerTest
        {
            SomethingController CreateSomethingController()
            {
                var testData = FakeSomethingData.CreateSomethingData();
                var repository = new FakeSomethingRepository(testData);
                SomethingController controller = new SomethingController(repository);
                return controller;
            }

            [TestMethod]
            public void SomeTest()
            {
                // Arrange
                var controller = CreateSomethingController();

                // Act
                // Some test here

                // Assert
            }
        }

    All this seems to be a lot of extra code, especially as I have more than one repository. Is there a more efficient way of doing this? Maybe using mocks? Thanks
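    For what "maybe using mocks?" could look like: a hedged sketch using Python's standard unittest.mock (the same idea carries over to .NET mocking libraries such as Moq), where a stubbed repository replaces the whole hand-written fake plus data builder. The controller class below is a hypothetical stand-in for the question's code:

        from unittest.mock import Mock

        # Hypothetical stand-in for the question's controller/repository pair.
        class SomethingController:
            def __init__(self, repository):
                self.repository = repository

            def index(self):
                return self.repository.get_something()

        def test_index_returns_repository_data():
            # One line replaces the fake repository and the fake data class:
            repository = Mock()
            repository.get_something.return_value = [{"value1": "value0"}]

            controller = SomethingController(repository)

            assert controller.index() == [{"value1": "value0"}]
            repository.get_something.assert_called_once()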

  • Updating / refreshing a geojson layer

    - by Ozaki
    TL;DR: My GeoJSON layer will display the vector mark, but it will not destroy the features or re-add them, so the map never gets the moving / updating look I'm after in OpenLayers.

    Hey S O. I have a GeoJSON layer set up as follows:

        // GeoJSON layer
        var layer1 = new OpenLayers.Layer.GML("My GeoJSON Layer", "coordinates", {
            format: OpenLayers.Format.GeoJSON,
            styleMap: style_red
        });

    My features are set up as follows:

        var f = new OpenLayers.Feature.Vector(null, style_red);

        var style_red = OpenLayers.Util.extend({}, layer_style);
        style_red.strokeColor = "red";
        style_red.fillColor = "black";
        style_red.fillOpacity = 0.5;
        style_red.graphicName = "circle";
        style_red.pointRadius = 3.8;
        style_red.strokeWidth = 2;
        style_red.strokeLinecap = "butt";

    And my layer updating function:

        function UpdateLayer() {
            var p = new OpenLayers.Format.GeoJSON({
                'internalProjection': new OpenLayers.Projection("EPSG:900913"),
                'externalProjection': new OpenLayers.Projection("EPSG:4326")
            });
            var url = "coordinates.geojson";
            OpenLayers.loadURL(url, {}, null, function(r) {
                var f = p.read(r.responseText);
                map.layers[1].destroyFeatures();
                map.layers[1].addFeatures(f);
            });
            setTimeout("UpdateLayer()", 1000);
        }

    Any idea what I am doing wrong or what I am missing?

  • Requested Service not found - .NET Remoting

    - by bharat
    I am getting this exception:

        System.Runtime.Remoting.RemotingException occurred
        Message="Object '/55337266_9751_4f58_8446_c54ff254222e/rkutlpt5hvsxipmzhb+jkqyl_98.rem' has been disconnected or does not exist at the server."
        Source="mscorlib"
        StackTrace:
            Server stack trace:
            at System.Runtime.Remoting.Channels.ChannelServices.CheckDisconnectedOrCreateWellKnownObject(IMessage msg)
            at System.Runtime.Remoting.Channels.ChannelServices.DispatchMessage(IServerChannelSinkStack sinkStack, IMessage msg, IMessage& replyMsg)
            Exception rethrown at [0]:
            at System.Runtime.Remoting.Proxies.RealProxy.HandleReturnMessage(IMessage reqMsg, IMessage retMsg)
            at System.Runtime.Remoting.Proxies.RealProxy.PrivateInvoke(MessageData& msgData, Int32 type)
            at Common.Interface.Repository.NA.INRR.GetNRun(Int32 NetworkRunId)
            at Module.NA.ViewModel.NRDViewModel.get_AS()
        InnerException:

    I get this exception most of the time. This is my remoting setup method:

        string tcpURL;
        TcpChannel channel;
        channel = new TcpChannel();
        ChannelServices.RegisterChannel(channel, false);

        //--
        // Remote domain objects
        //--
        tcpURL = string.Format("tcp://{0}:{1}/DomainComposition", ServerName, TcpPort);
        RemotingConfiguration.RegisterWellKnownClientType(typeof(DomainComposition), tcpURL);

        //--
        // Remote repository objects
        //--
        tcpURL = string.Format("tcp://{0}:{1}/RepositoryComposition", ServerName, TcpPort);
        RemotingConfiguration.RegisterWellKnownClientType(typeof(RepositoryComposition), tcpURL);

        //--
        // Remote utility objects
        //--
        tcpURL = string.Format("tcp://{0}:{1}/UtilityComposition", ServerName, TcpPort);
        RemotingConfiguration.RegisterWellKnownClientType(typeof(UtilityComposition), tcpURL);

        this.Domain = new DomainComposition();
        this.Repository = new RepositoryComposition();
        this.Utility = new UtilityComposition();

    How do I check whether the object is disconnected and, if so, re-initiate the service?

  • Any suggestions for a good automated web load testing tool?

    - by fmunkert
    What are some good automated tools for load testing (stress testing) web applications that do not use record-and-replay of HTTP network packets? I am aware that there are numerous load testing tools on the market that record and replay HTTP network packets, but these are unsuitable for my purpose, for the following reasons:

        1. The HTTP packet format changes very often in our application (e.g. when we optimize an AJAX call). We do not want to adapt all test scripts just because there is a slight change in HTTP packet format.
        2. Our test team should not need to know any internals of our application to write their test scripts. A tool that replays HTTP packets, however, requires the team to know the format of HTTP requests and responses, so that they can adapt details of the replayed HTTP packets (e.g. user name).

    The automated load testing tool I am looking for should let the test team write "black box" test scripts such as the following (a concrete sketch of this style follows below):

        Invoke web page at URL http://... .
        First, enter XXX into text field XXX.
        Then, press button XXX.
        Wait until a response has been received from the web server.
        Verify that text field XXX now contains the text XXX.

    The tool should be able to simulate up to several thousand users, and it should be compatible with web applications using ASP.NET and AJAX.
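    To make the "black box" style concrete: a hedged sketch of such a script using Selenium WebDriver in Python, one candidate for the scripting layer (driving it with thousands of simulated users would still require a separate load harness; the URL, field and button names below are placeholders, not from the question):

        from selenium import webdriver
        from selenium.webdriver.common.by import By
        from selenium.webdriver.support.ui import WebDriverWait
        from selenium.webdriver.support import expected_conditions as EC

        driver = webdriver.Firefox()
        try:
            # Invoke web page at URL http://...
            driver.get("http://example.com/app")  # placeholder URL

            # Enter XXX into text field XXX, then press button XXX.
            driver.find_element(By.NAME, "searchField").send_keys("XXX")
            driver.find_element(By.ID, "submitButton").click()

            # Wait until the response from the web server has been rendered.
            WebDriverWait(driver, 10).until(
                EC.presence_of_element_located((By.ID, "resultField")))

            # Verify that text field XXX now contains the text XXX.
            result = driver.find_element(By.ID, "resultField")
            assert "XXX" in result.get_attribute("value")
        finally:
            driver.quit()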

  • Not getting a response using SOAP and PHP

    - by Nitish
    I'm using PHP5 and NuSOAP, the SOAP toolkit for PHP. I created the server using the code below:

        <?php
        function getStockQuote($symbol) {
            mysql_connect('localhost', 'user', 'pass');
            mysql_select_db('test');

            $query = "SELECT stock_price FROM stockprices WHERE stock_symbol = '$symbol'";
            $result = mysql_query($query);
            $row = mysql_fetch_assoc($result);

            echo $row['stock_price'];
        }

        $a = require('lib/nusoap.php');
        $server = new soap_server();
        $server->configureWSDL('stockserver', 'urn:stockquote');
        $server->register("getStockQuote",
            array('symbol' => 'xsd:string'),
            array('return' => 'xsd:decimal'),
            'urn:stockquote',
            'urn:stockquote#getStockQuote');
        $HTTP_RAW_POST_DATA = isset($HTTP_RAW_POST_DATA) ? $HTTP_RAW_POST_DATA : '';
        $server->service($HTTP_RAW_POST_DATA);
        ?>

    The client has the following code:

        <?php
        require_once('lib/nusoap.php');
        $c = new soapclientNusoap('http://localhost/stockserver.php?wsdl');
        $stockprice = $c->call('getStockQuote', array('symbol' => 'ABC'));
        echo "The stock price for 'ABC' is $stockprice.";
        ?>

    The database was created using the code below:

        CREATE TABLE `stockprices` (
            `stock_id` INT UNSIGNED NOT NULL AUTO_INCREMENT,
            `stock_symbol` CHAR( 3 ) NOT NULL,
            `stock_price` DECIMAL(8,2) NOT NULL,
            PRIMARY KEY ( `stock_id` )
        );

        INSERT INTO `stockprices` VALUES (1, 'ABC', '75.00');
        INSERT INTO `stockprices` VALUES (2, 'DEF', '45.00');
        INSERT INTO `stockprices` VALUES (3, 'GHI', '12.00');
        INSERT INTO `stockprices` VALUES (4, 'JKL', '34.00');

    When I run the client, the result I get is "The stock price for 'ABC' is ." and 75.00 is not being printed as the price.

  • How do I set default search conditions with Searchlogic?

    - by Danger Angell
    I've got a search form on this page: http://staging-checkpointtracker.aptanacloud.com/events. If you select a State from the dropdown, you get zero results, because you didn't select one or more Event Divisions (checkboxes). What I want is to default the checkboxes to "checked" when the page first loads, so that Events in all Divisions are displayed, but I want changes made by the user to be reflected when they filter.

    Here's the index method in my Events controller:

        def index
          @search = Event.search(params[:search])

          respond_to do |format|
            format.html # index.html.erb
            format.xml  { render :xml => @events }
          end
        end

    Here's my search form:

        <% form_for @search do |f| %>
          <div>
            <%= f.label :state_is, "State" %>
            <%= f.select :state_is, ['AK','AL','AR','AZ','CA','CO','CT','DC','DE','FL','GA','HI','IA','ID','IL','IN','KS','KY','LA','MA','MD','ME','MI','MN','MO','MS','MT','NC','ND','NE','NH','NJ','NM','NV','NY','OH','OK','OR','PA','RI','SC','SD','TN','TX','UT','VA','VT','WA','WI','WV','WY'], :include_blank => true %>
          </div>
          <div>
            <%= f.check_box :division_like_any, {:name => "search[:division_like_any][]"}, "Sprint", :checked => true %> Sprint (2+ hours)<br/>
            <%= f.check_box :division_like_any, {:name => "search[:division_like_any][]"}, "Sport" %> Sport (12+ hours)<br/>
            <%= f.check_box :division_like_any, {:name => "search[:division_like_any][]"}, "Adventure" %> Adventure (18+ hours)<br/>
            <%= f.check_box :division_like_any, {:name => "search[:division_like_any][]"}, "Expedition" %> Expedition (48+ hours)<br/>
          </div>
          <%= f.submit "Find Events" %>
          <%= link_to 'Clear', '/events' %>
        <% end %>

  • find(:all) and then add data from another table to the object

    - by Koning Baard XIV
    I have two tables:

        create_table "friendships", :force => true do |t|
          t.integer  "user1_id"
          t.integer  "user2_id"
          t.boolean  "hasaccepted"
          t.datetime "created_at"
          t.datetime "updated_at"
        end

    and

        create_table "users", :force => true do |t|
          t.string   "email"
          t.string   "password"
          t.string   "phone"
          t.boolean  "gender"
          t.datetime "created_at"
          t.datetime "updated_at"
          t.string   "firstname"
          t.string   "lastname"
          t.date     "birthday"
        end

    I need to show the user a list of friend requests, so I use this method in my controller:

        def getfriendrequests
          respond_to do |format|
            case params[:id]
            when "to_me"
              @friendrequests = Friendship.find(:all, :conditions => { :user2_id => session[:user], :hasaccepted => false })
            when "from_me"
              @friendrequests = Friendship.find(:all, :conditions => { :user1_id => session[:user], :hasaccepted => false })
            end
            format.xml  { render :xml => @friendrequests }
            format.json { render :json => @friendrequests }
          end
        end

    I do nearly everything using AJAX, so to fetch the first and last name of the user with UID user2_id (the to_me param comes later, don't worry about it right now), I need a for loop which makes multiple AJAX calls. This sucks and costs a lot of bandwidth. So I'd rather have getfriendrequests also return the first and last name of the corresponding users, so that, for example, the JSON response would not be:

        [
          { "friendship": { "created_at": "2010-02-19T13:51:31Z", "user1_id": 2, "updated_at": "2010-02-19T13:51:31Z", "hasaccepted": false, "id": 11, "user2_id": 3 } },
          { "friendship": { "created_at": "2010-02-19T16:31:23Z", "user1_id": 2, "updated_at": "2010-02-19T16:31:23Z", "hasaccepted": false, "id": 12, "user2_id": 4 } }
        ]

    but rather:

        [
          { "friendship": { "created_at": "2010-02-19T13:51:31Z", "user1_id": 2, "updated_at": "2010-02-19T13:51:31Z", "hasaccepted": false, "id": 11, "user2_id": 3, "firstname": "Jon", "lastname": "Skeet" } },
          { "friendship": { "created_at": "2010-02-19T16:31:23Z", "user1_id": 2, "updated_at": "2010-02-19T16:31:23Z", "hasaccepted": false, "id": 12, "user2_id": 4, "firstname": "Mark", "lastname": "Gravell" } }
        ]

    I thought of a for loop in the getfriendrequests method, but I don't know how to implement this, and maybe there is an easier way. It must also work for XML. Can anyone help me? Thanks

  • Logical file not working for SUBFILE/SETLL?

    - by user1516536
    I'm using three logical files with different record formats. On the first subfile I'm using LF1 and LF2, and I cannot use *LOVAL SETLL there; it gives a run-time error and I'm not sure why. The program then takes me to the second subfile, where I use LF3, and that seems fine. But if I go back to the first subfile, the subfile turns to blanks. Why? This is my subroutine for building the subfile:

        C     CLRSR         BEGSR
        C                   EVAL      *IN55='0'
        C                   WRITE     USQLSCTL
        C                   EVAL      *IN55='1'
        C                   ENDSR
        C*
        C* BUILD SUBFILE
        C     BLDSR         BEGSR
        C     *LOVAL        SETLL     USRLGX
        C                   EVAL      RECNO=0
        C                   EXSR      TMISR1
        C                   EXSR      REDSR1
        C                   DOW       NOT %EOF
        C                   IF        USRID<>IDD
        C                   EXSR      MVESR
        C                   EXSR      DIMSR
        C                   MOVE      USRID         IDD
        C                   EVAL      RECNO=RECNO+1
        C                   WRITE     USQLS
        C                   ENDIF
        C                   EXSR      TMISR1
        C                   EXSR      REDSR1
        C                   ENDDO
        C                   ENDSR

    and the related subroutines:

        C     TMISR1        BEGSR
        C                   READ      USRLGX
        C                   MOVE      USRTI         MINTI
        C                   ENDSR
        C     REDSR1        BEGSR
        C                   READ      USRLG
        C                   MOVE      USRTO         MAXTO
        C                   ENDSR

    The logical files I use are USRLG and USRLGX; both refer to the same record format, but each has a different sort order. (The record format has been renamed with RENAME on the F-spec.) I have two problems: I can only use *LOVAL SETLL on a logical file once, and with the code above the result for UserTimeIn is sometimes populated and sometimes blank (000000).

  • Passing two variables to separate table...associations problem

    - by bgadoci
    I have developed an application and I seem to be having some problems with my associations. I have the following:

        class User < ActiveRecord::Base
          acts_as_authentic
          has_many :questions, :dependent => :destroy
          has_many :sites, :dependent => :destroy
        end

    Questions:

        class Question < ActiveRecord::Base
          has_many :sites, :dependent => :destroy
          has_many :notes, :through => :sites
          belongs_to :user
        end

    Sites (think of these as answers to questions):

        class Site < ActiveRecord::Base
          acts_as_voteable :vote_counter => true
          belongs_to :question
          belongs_to :user
          has_many :notes, :dependent => :destroy
          has_many :likes, :dependent => :destroy
          has_attached_file :photo, :styles => { :small => "250x250>" }
          validates_presence_of :name, :description
        end

    When a Site (answer) is created, I am successfully passing the question_id to the Sites table, but I can't figure out how to also pass the user_id. Here is my SitesController#create:

        def create
          @question = Question.find(params[:question_id])
          @site = @question.sites.create!(params[:site])

          respond_to do |format|
            format.html { redirect_to(@question) }
            format.js
          end
        end

  • Version Control and Coding Formatting

    - by Martin Giffy D'Souza
    Hi, I'm currently part of the team implementing a new version control system (Subversion) within my organization. There's been a bit of a debate on how to handle code formatting, and I'd like to get other people's opinions and experiences on this topic. We currently have ~10 developers, each using different tools (due to licensing and preference). Some of these tools have automatic code formatters and others don't. If we allow "blind" check-ins, the code will look drastically different each time someone checks in. This will complicate things such as diffs and merges. I've talked to several people and they've mentioned the following solutions:

        1. Use the same developer program with the same code formatter (not really an option due to licensing).
        2. Have a hook (either client- or server-side) which automatically formats the code before it goes into the repository (a sketch of this idea follows below).
        3. Manually format the code.

    Regarding the third point, the concept is to never auto-format the code and to have some standards. Right now that seems to be what we're leaning towards. I'm a bit hesitant about that approach, as it could lead to developers spending a lot of time manually formatting code. If anyone can provide their thoughts and experience on this, that would be great. Thank you, Martin
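    A hedged sketch of option 2, with the formatter command, file type and hook wiring all assumed rather than prescribed: a server-side Subversion pre-commit hook can pull each changed file out of the transaction with svnlook and reject the commit when a file fails the team's formatter check ("checkformat" below is a placeholder, not a real tool):

        #!/usr/bin/env python
        # Hypothetical Subversion pre-commit hook: reject commits that
        # contain unformatted Java files.
        import subprocess
        import sys

        REPOS, TXN = sys.argv[1], sys.argv[2]

        def svnlook(subcommand, *extra):
            cmd = ["svnlook", subcommand, "-t", TXN, REPOS] + list(extra)
            return subprocess.check_output(cmd)

        for line in svnlook("changed").decode().splitlines():
            action, path = line[:4].strip(), line[4:].strip()
            if action in ("A", "U") and path.endswith(".java"):
                content = svnlook("cat", path)
                # "checkformat" stands in for whatever format checker the
                # team standardizes on (e.g. format-then-diff comparison).
                result = subprocess.run(["checkformat", "-"], input=content)
                if result.returncode != 0:
                    sys.stderr.write("Commit blocked: %s is not formatted.\n" % path)
                    sys.exit(1)

        sys.exit(0)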

  • iPhone: download several files

    - by Floo
    Hi all! In my app I need to download several plists. To download a plist I use NSURLConnection. I show a UIAlertView with a UIActivityIndicator, and when the download is finished I add a button to the alert to dismiss it. To download the plist, somewhere in my code I set an NSURL to the address where the plist lives, then build an NSURLRequest with a cache policy and a timeout interval, and then tie my NSMutableData to the NSURLConnection via that request. In the connection:didReceiveData: delegate method I append data to my mutable data object; in connection:didFailWithError: I handle the error. Finally, in connectionDidFinishLoading: I serialize the data to a plist so I can write it to file, and release my alert view. My problem: connectionDidFinishLoading: is called each time one NSURLConnection finishes, but I want to release my alert only when everything is finished. As soon as the first plist is downloaded, my code in connectionDidFinishLoading: fires. Here is my code (in viewDidLoad):

        // set up the alert and start the connection in viewDidLoad
        NSURL *theUrl = [NSURL URLWithString:@"http://adress.com/plist/myPlist.plist"];
        NSURLRequest *theRequest = [NSURLRequest requestWithURL:theUrl
                                                    cachePolicy:NSURLRequestUseProtocolCachePolicy
                                                timeoutInterval:60.0];
        // plistConnection is an NSURLConnection
        self.plistConnection = [[NSURLConnection alloc] initWithRequest:theRequest
                                                               delegate:self
                                                       startImmediately:YES];

        - (void)connection:(NSURLConnection *)connection didReceiveData:(NSData *)data {
            [incomingPListData appendData:data];
        }

        - (void)connection:(NSURLConnection *)connection didFailWithError:(NSError *)error {
            // handle error here
        }

        - (void)connectionDidFinishLoading:(NSURLConnection *)connection {
            NSPropertyListFormat format;
            NSString *serialErrorString;
            NSData *plist = [NSPropertyListSerialization propertyListFromData:incomingPListData
                                                             mutabilityOption:NSPropertyListImmutable
                                                                       format:&format
                                                             errorDescription:&serialErrorString];
            if (serialErrorString) {
                // error
            } else {
                // create path and write plist to path
            }
            // change message and title of the alert
        }

    So if I want to download another plist, where do I put the request and the connection, and how can I tell connectionDidFinishLoading: to fire my completion code only once all the files are downloaded? Thanks to all.

  • Is there a standard lexer/parser tool for Python?

    - by Salim Fadhley
    A volunteer job requires us to convert a large number of LaTeX documents into ePub format. It's a series of open-source fiction books which has so far been produced only on paper via a print-on-demand service. We'd like to be able to offer the books to users of book-reader devices (such as the Kindle) which require the ePub format for best results. Fortunately, ePub is a very simple format; however, there's no trivial way for LaTeX to produce the XHTML output required. We experimented with alternative LaTeX compilers (e.g. plastex), but in the end we figured that it would probably be a lot easier to simply write our own compiler which understands a tiny subset of the LaTeX language and compiles directly to XHTML/ePub. Previously I used a tool on Windows called GOLD. This allowed me to go directly from BNF grammars to a stub parser. It also allowed me to implement the parser in any language I liked (I'd choose Python). This project has to work on Linux, so I'm wondering if there's an equivalent toolchain that works as well under Ubuntu / Eclipse / Python. The idea is that we will take the grammar of TeX and implement just a teeny subset of it, but we do not want to spend a huge amount of time worrying about grammar and parsing. A parser generator would obviously save us a great deal of time. Sal

    UPDATE 1: Bonus marks for a solution with excellent documentation or tutorials.
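    One Python-native candidate (an assumption on my part, not something the asker named) is PLY, a lex/yacc-style generator where the grammar lives directly in Python code. A minimal sketch of tokenizing a toy LaTeX subset:

        import ply.lex as lex

        # Token names for a toy LaTeX subset.
        tokens = ('COMMAND', 'LBRACE', 'RBRACE', 'TEXT')

        t_COMMAND = r'\\[a-zA-Z]+'   # \section, \emph, ...
        t_LBRACE  = r'\{'
        t_RBRACE  = r'\}'
        t_TEXT    = r'[^\\{}]+'      # anything that is not markup

        def t_error(t):
            t.lexer.skip(1)          # drop unrecognized characters

        lexer = lex.lex()
        lexer.input(r'\section{Introduction} Some body text.')
        for tok in lexer:
            print(tok.type, tok.value)

    pyparsing and ANTLR (which has a Python target) are other commonly used options; a companion ply.yacc grammar would then reduce the token stream to XHTML-producing actions.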

  • Strategies for serializing an object for auditing/logging purpose in .NET?

    - by Jiho Han
    Let's say I have an application that processes messages. Messages in this case are just objects that implement the IMessage interface, which is just a marker. In this app, if a message fails to process, I want to log it: first of all for auditing and troubleshooting purposes, and secondly because I might want to use it for re-processing. Ideally, I want the message to be serialized into a format that is human-readable. The first candidate is XML, although there are others like JSON. If I were to serialize the messages as XML, I want to know whether the message object is XML-serializable. One way is to reflect on the type and see if it has a parameterless constructor; the other is to require the IXmlSerializable interface. I'm not too happy with either of these approaches. There is a third option, which is to try to serialize it and catch exceptions. This doesn't really help: I want to, in some way, stipulate that IMessage (or a derived type) should be XML-serializable. The reflection route has obvious disadvantages such as security, performance, etc. The IXmlSerializable route locks my messages into one format, when in the future I might want to change the serialization format to JSON; it also means even the simplest objects must implement the ReadXml and WriteXml methods. Is there a route that involves the least amount of work and lets me serialize an arbitrary object (as long as it implements the marker interface) into XML, without locking future messages into XML?
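    As a hedged, language-neutral illustration of one way out (sketched in Python here, though the question is about .NET): have the audit logger depend on a small serializer contract rather than on a format-specific interface, so moving from XML to JSON later means swapping the serializer, not the messages:

        import json
        import xml.etree.ElementTree as ET

        class MessageSerializer:
            """Contract the audit log depends on, instead of a concrete format."""
            def dumps(self, message: dict) -> str:
                raise NotImplementedError

        class JsonMessageSerializer(MessageSerializer):
            def dumps(self, message: dict) -> str:
                return json.dumps(message, indent=2)

        class XmlMessageSerializer(MessageSerializer):
            def dumps(self, message: dict) -> str:
                root = ET.Element("message")
                for key, value in message.items():
                    ET.SubElement(root, key).text = str(value)
                return ET.tostring(root, encoding="unicode")

        def log_failed_message(message: dict, serializer: MessageSerializer) -> None:
            # The logger only knows the contract; the format is a detail.
            print(serializer.dumps(message))

        log_failed_message({"id": 42, "reason": "timeout"}, XmlMessageSerializer())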

  • Changing the value of one TextView when another TextView changes, by multiplying

    - by sur007
    Here I am getting parsed data from a URL, and I am trying to recalculate the parsed values dynamically in the TextViews whenever the user edits an amount. My code is:

        package com.mokshya.jsontutorial;

        import java.text.DecimalFormat;
        import java.util.ArrayList;
        import java.util.HashMap;
        import org.json.JSONArray;
        import org.json.JSONException;
        import org.json.JSONObject;
        import org.w3c.dom.Document;
        import org.w3c.dom.Element;
        import org.w3c.dom.NodeList;
        import com.mokshya.jsontutorialhos.xmltest.R;
        import android.app.AlertDialog;
        import android.app.ListActivity;
        import android.content.DialogInterface;
        import android.content.Intent;
        import android.os.Bundle;
        import android.text.Editable;
        import android.text.TextWatcher;
        import android.util.Log;
        import android.view.View;
        import android.widget.AdapterView;
        import android.widget.AdapterView.OnItemClickListener;
        import android.widget.EditText;
        import android.widget.ListAdapter;
        import android.widget.ListView;
        import android.widget.SimpleAdapter;
        import android.widget.TextView;
        import android.widget.Toast;

        public class Main extends ListActivity {

            EditText resultTxt;

            public double C_webuserDouble;
            public double C_cashDouble;
            public double C_transferDouble;
            public double S_webuserDouble;
            public double S_cashDouble;
            public double S_transferDouble;

            TextView cashTxtView;
            TextView webuserTxtView;
            TextView transferTxtView;
            TextView S_cashTxtView;
            TextView S_webuserTxtView;
            TextView S_transferTxtView;

            /** Called when the activity is first created. */
            @Override
            public void onCreate(Bundle savedInstanceState) {
                super.onCreate(savedInstanceState);
                setContentView(R.layout.listplaceholder);

                cashTxtView = (TextView) findViewById(R.id.cashTxtView);
                webuserTxtView = (TextView) findViewById(R.id.webuserTxtView);
                transferTxtView = (TextView) findViewById(R.id.transferTxtView);
                S_cashTxtView = (TextView) findViewById(R.id.S_cashTxtView);
                S_webuserTxtView = (TextView) findViewById(R.id.S_webuserTxtView);
                S_transferTxtView = (TextView) findViewById(R.id.S_transferTxtView);

                ArrayList<HashMap<String, String>> mylist = new ArrayList<HashMap<String, String>>();
                JSONObject json = JSONfunctions.getJSONfromURL("http://ldsclient.com/ftp/strtojson.php");

                try {
                    JSONArray netfoxlimited = json.getJSONArray("netfoxlimited");
                    for (int i = 0; i < netfoxlimited.length(); i++) {
                        HashMap<String, String> map = new HashMap<String, String>();
                        JSONObject e = netfoxlimited.getJSONObject(i);
                        map.put("date", e.getString("date"));
                        map.put("c_web", e.getString("c_web"));
                        map.put("c_bank", e.getString("c_bank"));
                        map.put("c_cash", e.getString("c_cash"));
                        map.put("s_web", e.getString("s_web"));
                        map.put("s_bank", e.getString("s_bank"));
                        map.put("s_cash", e.getString("s_cash"));
                        mylist.add(map);
                    }
                } catch (JSONException e) {
                    Log.e("log_tag", "Error parsing data " + e.toString());
                }

                ListAdapter adapter = new SimpleAdapter(this, mylist, R.layout.main,
                        new String[] { "date", "c_web", "c_bank", "c_cash", "s_web", "s_bank", "s_cash" },
                        new int[] { R.id.item_title, R.id.webuserTxtView, R.id.transferTxtView, R.id.cashTxtView,
                                R.id.S_webuserTxtView, R.id.S_transferTxtView, R.id.S_cashTxtView });
                setListAdapter(adapter);

                final ListView lv = getListView();
                lv.setTextFilterEnabled(true);
                lv.setOnItemClickListener(new OnItemClickListener() {
                    public void onItemClick(AdapterView<?> parent, View view, int position, long id) {
                        @SuppressWarnings("unchecked")
                        HashMap<String, String> o = (HashMap<String, String>) lv.getItemAtPosition(position);
                        Toast.makeText(Main.this, "ID '" + o.get("id") + "' was clicked.", Toast.LENGTH_SHORT).show();
                    }
                });

                resultTxt = (EditText) findViewById(R.id.editText1);
                resultTxt.setOnClickListener(new View.OnClickListener() {
                    public void onClick(View arg0) {
                        resultTxt.setText("");
                    }
                });
                resultTxt.addTextChangedListener(new TextWatcher() {
                    public void afterTextChanged(Editable arg0) {
                        String text = resultTxt.getText().toString();
                        if (resultTxt.getText().length() > 5) {
                            calculateSum(C_webuserDouble, C_cashDouble, C_transferDouble);
                            calculateSunrise(S_webuserDouble, S_cashDouble, S_transferDouble);
                        }
                    }

                    public void beforeTextChanged(CharSequence s, int start, int count, int after) {
                    }

                    public void onTextChanged(CharSequence s, int start, int before, int count) {
                    }
                });
            }

            private void calculateSum(Double webuserDouble, Double cashDouble, Double transferDouble) {
                String Qty = resultTxt.getText().toString();
                if (Qty.length() > 0) {
                    double QtyValue = Double.parseDouble(Qty);
                    double cashResult = cashDouble * QtyValue;
                    double webuserResult = webuserDouble * QtyValue;
                    double transferResult = transferDouble * QtyValue;

                    DecimalFormat df = new DecimalFormat("#.##");
                    String cashResultStr = df.format(cashResult);
                    String webuserResultStr = df.format(webuserResult);
                    String transferResultStr = df.format(transferResult);

                    cashTxtView.setText(String.valueOf(cashResultStr));
                    webuserTxtView.setText(String.valueOf(webuserResultStr));
                    transferTxtView.setText(String.valueOf(transferResultStr));
                }
                if (Qty.length() == 0) {
                    cashTxtView.setText(String.valueOf(cashDouble));
                    webuserTxtView.setText(String.valueOf(webuserDouble));
                    transferTxtView.setText(String.valueOf(transferDouble));
                }
            }

            private void calculateSunrise(Double webuserDouble, Double cashDouble, Double transferDouble) {
                String Qty = resultTxt.getText().toString();
                if (Qty.length() > 0) {
                    double QtyValue = Double.parseDouble(Qty);
                    double cashResult = cashDouble * QtyValue;
                    double webuserResult = webuserDouble * QtyValue;
                    double transferResult = transferDouble * QtyValue;

                    DecimalFormat df = new DecimalFormat("#.##");
                    String cashResultStr = df.format(cashResult);
                    String webuserResultStr = df.format(webuserResult);
                    String transferResultStr = df.format(transferResult);

                    S_cashTxtView.setText(String.valueOf(cashResultStr));
                    S_webuserTxtView.setText(String.valueOf(webuserResultStr));
                    S_transferTxtView.setText(String.valueOf(transferResultStr));
                }
                if (Qty.length() == 0) {
                    S_cashTxtView.setText(String.valueOf(cashDouble));
                    S_webuserTxtView.setText(String.valueOf(webuserDouble));
                    S_transferTxtView.setText(String.valueOf(transferDouble));
                }
            }
        }

    and I am getting the following error in logcat:

        08-28 15:04:12.839: E/AndroidRuntime(584): Uncaught handler: thread main exiting due to uncaught exception
        08-28 15:04:12.848: E/AndroidRuntime(584): java.lang.RuntimeException: Unable to start activity ComponentInfo{com.mokshya.jsontutorialhos.xmltest/com.mokshya.jsontutorial.Main}: java.lang.NullPointerException
        08-28 15:04:12.848: E/AndroidRuntime(584):     at android.app.ActivityThread.performLaunchActivity(ActivityThread.java:2401)
        08-28 15:04:12.848: E/AndroidRuntime(584):     at android.app.ActivityThread.handleLaunchActivity(ActivityThread.java:2417)
        08-28 15:04:12.848: E/AndroidRuntime(584):     at android.app.ActivityThread.access$2100(ActivityThread.java:116)
        08-28 15:04:12.848: E/AndroidRuntime(584):     at android.app.ActivityThread$H.handleMessage(ActivityThread.java:1794)
        08-28 15:04:12.848: E/AndroidRuntime(584):     at android.os.Handler.dispatchMessage(Handler.java:99)
        08-28 15:04:12.848: E/AndroidRuntime(584):     at android.os.Looper.loop(Looper.java:123)
        08-28 15:04:12.848: E/AndroidRuntime(584):     at android.app.ActivityThread.main(ActivityThread.java:4203)
        08-28 15:04:12.848: E/AndroidRuntime(584):     at java.lang.reflect.Method.invokeNative(Native Method)
        08-28 15:04:12.848: E/AndroidRuntime(584):     at java.lang.reflect.Method.invoke(Method.java:521)
        08-28 15:04:12.848: E/AndroidRuntime(584):     at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:791)
        08-28 15:04:12.848: E/AndroidRuntime(584):     at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:549)
        08-28 15:04:12.848: E/AndroidRuntime(584):     at dalvik.system.NativeStart.main(Native Method)
        08-28 15:04:12.848: E/AndroidRuntime(584): Caused by: java.lang.NullPointerException
        08-28 15:04:12.848: E/AndroidRuntime(584):     at com.mokshya.jsontutorial.Main.onCreate(Main.java:111)
        08-28 15:04:12.848: E/AndroidRuntime(584):     at android.app.Instrumentation.callActivityOnCreate(Instrumentation.java:1123)
        08-28 15:04:12.848: E/AndroidRuntime(584):     at android.app.ActivityThread.performLaunchActivity(ActivityThread.java:2364)

  • Datatable add new column and values speed issue

    - by Cine
    I am having some speed issues with my DataTables. In this particular case I am using the DataTable purely as a holder of data; it is never used in a GUI or any other scenario that exercises the fancy features. In my speed trace, this particular constructor showed up as a heavy user of time when my database is ~40k rows. The main consumer was set_Item of DataTable.

        protected myclass(DataTable dataTable, DataColumn idColumn)
        {
            this.dataTable = dataTable;
            IdColumn = idColumn ?? this.dataTable.Columns.Add(string.Format("SYS_{0}_SYS", Guid.NewGuid()), Type.GetType("System.Int32"));
            JobIdColumn = this.dataTable.Columns.Add(string.Format("SYS_{0}_SYS", Guid.NewGuid()), Type.GetType("System.Int32"));
            IsNewColumn = this.dataTable.Columns.Add(string.Format("SYS_{0}_SYS", Guid.NewGuid()), Type.GetType("System.Int32"));

            int id = 1;
            foreach (DataRow r in this.dataTable.Rows)
            {
                r[JobIdColumn] = id++;
                r[IsNewColumn] = (r[IdColumn] == null || r[IdColumn].ToString() == string.Empty) ? 1 : 0;
            }
        }

    Digging deeper into the trace, it turns out that set_Item calls EndEdit, which points to the transaction support of the DataTable, for which I have no use in my scenario. So my solution was to open editing on all of the rows and never close them again:

        _dt.BeginLoadData();
        foreach (DataRow row in _dt.Rows)
            row.BeginEdit();

    Is there a better solution? This feels too much like a big giant hack that will eventually come back and bite me. You might suggest that I not use DataTable at all, but I have already considered and rejected that, due to the amount of effort that would be required to reimplement it with a custom class. The main reason it is a DataTable is that it is ancient code (.NET 1.1 era) and I don't want to spend that much time changing it; also, the original table comes out of a third-party component.

  • SQL ORACLE - Datatable in where clause

    - by Gage
    Currently I have an SQL call returning a dataset from an MSSQL database, and I want to take a column from that data and return IDs based on that column from an Oracle database. I can do this one at a time, but that requires multiple calls; I am wondering if it can be done with one call.

        String sql = String.Format(@"Select DIST_NO FROM DISTRICT WHERE DIST_DESC = '{0}'",
            row.Table.Rows[0]["Op_Centre"].ToString());

    Above is the string I am using to return one ID at a time. I know that {0} can be used to format a value into the string, and maybe there is a way to do that with a whole DataTable. Also, to use multiple values in the where clause it would be:

        String sql = String.Format(@"Select DIST_NO FROM DISTRICT WHERE DIST_DESC in ('{0}')",
            row.Table.Rows[0]["Op_Centre"].ToString());

    Although I realize all of this can be done, I am wondering if there is an easy way to add it all to the SQL string in one call. As I write this, I realize I could break the string into sections and then just add every row value to the SQL string within the "WHERE DIST_DESC IN (" clause... I am still curious to see if there is another way, though, and because someone else may come across this problem, I will post a solution if I develop one. Thanks in advance.
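    A hedged sketch of that batched IN-list idea, written in Python with cx_Oracle for brevity (the same bind-variable pattern exists in ADO.NET with OracleParameter); binding one placeholder per value avoids both the per-row round trips and pasting raw values into the SQL string:

        import cx_Oracle  # assumed driver; the pattern itself is driver-agnostic

        def fetch_district_ids(connection, descriptions):
            # One named bind variable per value: IN (:d0, :d1, ...).
            # Oracle caps an IN list at 1000 entries, so chunk larger inputs.
            binds = {"d%d" % i: desc for i, desc in enumerate(descriptions)}
            placeholders = ", ".join(":" + name for name in binds)
            sql = "SELECT DIST_NO FROM DISTRICT WHERE DIST_DESC IN (%s)" % placeholders

            cursor = connection.cursor()
            cursor.execute(sql, binds)
            return [row[0] for row in cursor]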

  • VEMap and a GeoRSS feed(hosted separately)

    - by Alexis Abril
    The scenario is as follows: a WCF web service exists that outputs a valid GeoRSS feed. This lives in its own domain, as a number of different applications have access to it. A web page (on a different site) has been created with an instance of a VEMap (Bing/Virtual Earth map object). Now, VEMap can accept an input feed in this format via the following:

        var layer = new VEShapeLayer();
        var veLayerSpec = new VEShapeSourceSpecification(VEDataType.GeoRSS, "someurl", layer);
        map.ImportShapeLayerData(veLayerSpec, onComplete, true);

    onComplete is a callback function I'm using to replace the default pin graphic with something custom. The question is about "someurl", which is a path to a local XML file containing the geographic information (GeoRSS simple format). I've realized this feed and the map must be hosted in the same domain, so I've created a generic handler that reads the remote feed and returns it in the same format:

        var veLayerSpec = new VEShapeSourceSpecification(VEDataType.GeoRSS, "/somelocalhandler.ashx", layer);

    When I do this, I get the VEMap error "z is null". This is the same error one would receive when trying to access a remote feed. When I copy the feed into a local XML file (i.e., "feed.xml") there is no error. The order of operations is currently: remote feed - local handler - VEMap import. If I'm overcomplicating this procedure, let me know! I'm a bit new to the Bing Maps API and might have missed something. Any assistance is appreciated.
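    For illustration only, a hedged Python equivalent of that same-origin handler (the remote feed URL is a placeholder): the page requests the feed from its own domain, and the server fetches the remote GeoRSS and relays it unchanged:

        import urllib.request
        from http.server import BaseHTTPRequestHandler, HTTPServer

        FEED_URL = "http://feeds.example.com/georss"  # placeholder remote feed

        class FeedProxy(BaseHTTPRequestHandler):
            def do_GET(self):
                # Fetch the remote GeoRSS server-side, then relay it so the
                # browser only ever talks to this (same-origin) host.
                with urllib.request.urlopen(FEED_URL, timeout=10) as resp:
                    body = resp.read()
                self.send_response(200)
                self.send_header("Content-Type", "application/xml")
                self.send_header("Content-Length", str(len(body)))
                self.end_headers()
                self.wfile.write(body)

        HTTPServer(("", 8000), FeedProxy).serve_forever()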

  • HTTP crawler in Erlang

    - by ctp
    I'm coding a simple HTTP crawler, but I have an issue running the code at the bottom. I'm requesting 50 URLs and get the content of only 20+ of them back. I've generated a few files, 150 kB each, to test the crawler. So I think the 20+ responses are limited by the bandwidth? BUT: how do I tell the Erlang snippet not to quit until the last file is fetched? The test data server is online, so please try the code out; any hints are welcome :)

        -module(crawler).
        -define(BASE_URL, "http://46.4.117.69/").
        -export([start/0, send_reqs/0, do_send_req/1]).

        start() ->
            ibrowse:start(),
            proc_lib:spawn(?MODULE, send_reqs, []).

        to_url(Id) ->
            ?BASE_URL ++ integer_to_list(Id).

        fetch_ids() ->
            lists:seq(1, 50).

        send_reqs() ->
            spawn_workers(fetch_ids()).

        spawn_workers(Ids) ->
            lists:foreach(fun do_spawn/1, Ids).

        do_spawn(Id) ->
            proc_lib:spawn_link(?MODULE, do_send_req, [Id]).

        do_send_req(Id) ->
            io:format("Requesting ID ~p ... ~n", [Id]),
            Result = (catch ibrowse:send_req(to_url(Id), [], get, [], [], 10000)),
            case Result of
                {ok, Status, _H, B} ->
                    io:format("OK -- ID: ~2..0w -- Status: ~p -- Content length: ~p~n", [Id, Status, length(B)]);
                Err ->
                    io:format("ERROR -- ID: ~p -- Error: ~p~n", [Id, Err])
            end.

    That's the output:

        Requesting ID 1 ...
        Requesting ID 2 ...
        Requesting ID 3 ...
        Requesting ID 4 ...
        Requesting ID 5 ...
        Requesting ID 6 ...
        Requesting ID 7 ...
        Requesting ID 8 ...
        Requesting ID 9 ...
        Requesting ID 10 ...
        Requesting ID 11 ...
        Requesting ID 12 ...
        Requesting ID 13 ...
        Requesting ID 14 ...
        Requesting ID 15 ...
        Requesting ID 16 ...
        Requesting ID 17 ...
        Requesting ID 18 ...
        Requesting ID 19 ...
        Requesting ID 20 ...
        Requesting ID 21 ...
        Requesting ID 22 ...
        Requesting ID 23 ...
        Requesting ID 24 ...
        Requesting ID 25 ...
        Requesting ID 26 ...
        Requesting ID 27 ...
        Requesting ID 28 ...
        Requesting ID 29 ...
        Requesting ID 30 ...
        Requesting ID 31 ...
        Requesting ID 32 ...
        Requesting ID 33 ...
        Requesting ID 34 ...
        Requesting ID 35 ...
        Requesting ID 36 ...
        Requesting ID 37 ...
        Requesting ID 38 ...
        Requesting ID 39 ...
        Requesting ID 40 ...
        Requesting ID 41 ...
        Requesting ID 42 ...
        Requesting ID 43 ...
        Requesting ID 44 ...
        Requesting ID 45 ...
        Requesting ID 46 ...
        Requesting ID 47 ...
        Requesting ID 48 ...
        Requesting ID 49 ...
        Requesting ID 50 ...
        OK -- ID: 49 -- Status: "200" -- Content length: 150000
        OK -- ID: 47 -- Status: "200" -- Content length: 150000
        OK -- ID: 50 -- Status: "200" -- Content length: 150000
        OK -- ID: 17 -- Status: "200" -- Content length: 150000
        OK -- ID: 48 -- Status: "200" -- Content length: 150000
        OK -- ID: 45 -- Status: "200" -- Content length: 150000
        OK -- ID: 46 -- Status: "200" -- Content length: 150000
        OK -- ID: 10 -- Status: "200" -- Content length: 150000
        OK -- ID: 09 -- Status: "200" -- Content length: 150000
        OK -- ID: 19 -- Status: "200" -- Content length: 150000
        OK -- ID: 13 -- Status: "200" -- Content length: 150000
        OK -- ID: 21 -- Status: "200" -- Content length: 150000
        OK -- ID: 16 -- Status: "200" -- Content length: 150000
        OK -- ID: 27 -- Status: "200" -- Content length: 150000
        OK -- ID: 03 -- Status: "200" -- Content length: 150000
        OK -- ID: 23 -- Status: "200" -- Content length: 150000
        OK -- ID: 29 -- Status: "200" -- Content length: 150000
        OK -- ID: 14 -- Status: "200" -- Content length: 150000
        OK -- ID: 18 -- Status: "200" -- Content length: 150000
        OK -- ID: 01 -- Status: "200" -- Content length: 150000
        OK -- ID: 30 -- Status: "200" -- Content length: 150000
        OK -- ID: 40 -- Status: "200" -- Content length: 150000
        OK -- ID: 05 -- Status: "200" -- Content length: 150000

    Update: thanks stemm for the hint about wait_workers. I've combined your code and mine, but I see the same behaviour :(

        -module(crawler).
        -define(BASE_URL, "http://46.4.117.69/").
        -export([start/0, send_reqs/0, do_send_req/2]).

        start() ->
            ibrowse:start(),
            proc_lib:spawn(?MODULE, send_reqs, []).

        to_url(Id) ->
            ?BASE_URL ++ integer_to_list(Id).

        fetch_ids() ->
            lists:seq(1, 50).

        send_reqs() ->
            spawn_workers(fetch_ids()).

        spawn_workers(Ids) ->
            %% collect reference to each worker
            Refs = [ do_spawn(Id) || Id <- Ids ],
            %% wait for response from each worker
            wait_workers(Refs).

        wait_workers(Refs) ->
            lists:foreach(fun receive_by_ref/1, Refs).

        receive_by_ref(Ref) ->
            %% receive message only from worker with specific reference
            receive
                {Ref, done} ->
                    done
            end.

        do_spawn(Id) ->
            Ref = make_ref(),
            proc_lib:spawn_link(?MODULE, do_send_req, [Id, {self(), Ref}]),
            Ref.

        do_send_req(Id, {Pid, Ref}) ->
            io:format("Requesting ID ~p ... ~n", [Id]),
            Result = (catch ibrowse:send_req(to_url(Id), [], get, [], [], 10000)),
            case Result of
                {ok, Status, _H, B} ->
                    io:format("OK -- ID: ~2..0w -- Status: ~p -- Content length: ~p~n", [Id, Status, length(B)]),
                    %% send message that work is done
                    Pid ! {Ref, done};
                Err ->
                    io:format("ERROR -- ID: ~p -- Error: ~p~n", [Id, Err]),
                    %% repeat request if there was error while fetching a page,
                    do_send_req(Id, {Pid, Ref})
                    %% or - if you don't want to repeat request, put there:
                    %% Pid ! {Ref, done}
            end.

    Running the crawler works fine for a handful of files, but then the code doesn't even fetch the entire files (file size: 150,000 bytes each); the crawler fetches some files only partially, see the following web server log :(

        82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /10 HTTP/1.1" 200 150000 "-" "-"
        82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /1 HTTP/1.1" 200 150000 "-" "-"
        82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /3 HTTP/1.1" 200 150000 "-" "-"
        82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /8 HTTP/1.1" 200 150000 "-" "-"
        82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /39 HTTP/1.1" 200 150000 "-" "-"
        82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /7 HTTP/1.1" 200 150000 "-" "-"
        82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /6 HTTP/1.1" 200 150000 "-" "-"
        82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /2 HTTP/1.1" 200 150000 "-" "-"
        82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /5 HTTP/1.1" 200 150000 "-" "-"
        82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /50 HTTP/1.1" 200 150000 "-" "-"
        82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /9 HTTP/1.1" 200 150000 "-" "-"
        82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /44 HTTP/1.1" 200 150000 "-" "-"
        82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /38 HTTP/1.1" 200 150000 "-" "-"
        82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /47 HTTP/1.1" 200 150000 "-" "-"
        82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /49 HTTP/1.1" 200 150000 "-" "-"
        82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /43 HTTP/1.1" 200 150000 "-" "-"
        82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /37 HTTP/1.1" 200 150000 "-" "-"
        82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /46 HTTP/1.1" 200 150000 "-" "-"
        82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /48 HTTP/1.1" 200 150000 "-" "-"
        82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /36 HTTP/1.1" 200 150000 "-" "-"
        82.114.62.14 - - [13/Sep/2012:15:17:01 +0200] "GET /42 HTTP/1.1" 200 150000 "-" "-"
        82.114.62.14 - - [13/Sep/2012:15:17:01 +0200] "GET /41 HTTP/1.1" 200 150000 "-" "-"
        82.114.62.14 - - [13/Sep/2012:15:17:01 +0200] "GET /45 HTTP/1.1" 200 150000 "-" "-"
        82.114.62.14 - - [13/Sep/2012:15:17:01 +0200] "GET /17 HTTP/1.1" 200 150000 "-" "-"
        82.114.62.14 - - [13/Sep/2012:15:17:01 +0200] "GET /35 HTTP/1.1" 200 150000 "-" "-"
        82.114.62.14 - - [13/Sep/2012:15:17:01 +0200] "GET /16 HTTP/1.1" 200 150000 "-" "-"
        82.114.62.14 - - [13/Sep/2012:15:17:01 +0200] "GET /15 HTTP/1.1" 200 17020 "-" "-"
        82.114.62.14 - - [13/Sep/2012:15:17:01 +0200] "GET /21 HTTP/1.1" 200 120360 "-" "-"
        82.114.62.14 - - [13/Sep/2012:15:17:01 +0200] "GET /40 HTTP/1.1" 200 117600 "-" "-"
        82.114.62.14 - - [13/Sep/2012:15:17:01 +0200] "GET /34 HTTP/1.1" 200 60660 "-" "-"

    Any hints are welcome. I have no clue what's going wrong there :(
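    For comparison, the "don't exit until the last download finishes" pattern the asker is reaching for, sketched in Python (a translation of the collect-references-then-wait idea, not the asker's Erlang):

        import concurrent.futures
        import urllib.request

        BASE_URL = "http://46.4.117.69/"  # the asker's test server

        def fetch(i):
            with urllib.request.urlopen(BASE_URL + str(i), timeout=10) as resp:
                return i, len(resp.read())

        # The executor's context manager blocks until every submitted
        # download has completed -- the equivalent of wait_workers/1.
        with concurrent.futures.ThreadPoolExecutor(max_workers=10) as pool:
            futures = [pool.submit(fetch, i) for i in range(1, 51)]
            for fut in concurrent.futures.as_completed(futures):
                i, size = fut.result()
                print("OK -- ID: %02d -- Content length: %d" % (i, size))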

  • How to access "Custom" or non-System TFS workitem fields using PowerShell?

    - by DaBozUK
    When using PowerShell to extract information from TFS, I find that I can get at the standard fields but not "custom" fields. I'm not sure "custom" is the correct term, but for example if I look at the Process Editor in VS2008 and edit the work item type, there are fields such as those listed below, with name, type and refname:

        Title       String   System.Title
        State       String   System.State
        Rev         Integer  System.Rev
        Changed By  String   System.ChangedBy

    I can access these with Get-TfsItemHistory:

        Get-TfsItemHistory "$/path" -Version "D01/12/10~" -R |
            Select -exp WorkItems |
            Format-Table Title, State, Rev, ChangedBy -Auto

    So far so good. However, there are also some other fields in the work item type, which I'm calling "custom" or non-System fields, e.g.:

        Activated By  String  Microsoft.VSTS.Common.ActivatedBy
        Resolved By   String  Microsoft.VSTS.Common.ResolvedBy

    And the following command does not retrieve the data, just spaces:

        Get-TfsItemHistory "$/path" -Version "D01/12/10~" -R |
            Select -exp WorkItems |
            Format-Table ActivatedBy, ResolvedBy -Auto

    I've also tried the names in quotes and the fully qualified refname, but no luck. How do you access these non-System fields? Thanks, Boz

    UPDATE: From Keith's answer I can get the fields I need:

        Get-TfsItemHistory "$/Hermes/Main" -Version "D01/12/10~" -Recurse `
            | Select ChangeSetId, Comment -exp WorkItems `
            | Select ChangeSetId, Comment, @{n='WI-Id'; e={$_.Id}}, Title -exp Fields `
            | Where {$_.ReferenceName -eq 'Microsoft.VSTS.Common.ResolvedBy'} `
            | Format-Table ChangesetId, Comment, WI-Id, Title, @{n='Resolved By'; e={$_.Value}} -Auto

  • How to read/write high-resolution (24-bit, 8 channel) .wav files in Java?

    - by dB'
    I'm trying to write a Java application that manipulates high-resolution .wav files. I'm having trouble importing the audio data, i.e. converting the .wav file into an array of doubles. When I use a standard approach, an exception is thrown:

        AudioFileFormat as = AudioSystem.getAudioFileFormat(new File("orig.wav"));
        --> javax.sound.sampled.UnsupportedAudioFileException: file is not a supported file type

    Here's the file format info according to soxi:

        dB$ soxi orig.wav
        soxi WARN wav: wave header missing FmtExt chunk

        Input File     : 'orig.wav'
        Channels       : 8
        Sample Rate    : 96000
        Precision      : 24-bit
        Duration       : 00:00:03.16 = 303526 samples ~ 237.13 CDDA sectors
        File Size      : 9.71M
        Bit Rate       : 24.6M
        Sample Encoding: 32-bit Floating Point PCM

    Can anyone suggest the simplest method for getting this audio into Java? I've tried a few techniques. As stated above, I've experimented with the Java AudioSystem (on both Mac and Windows). I've also tried Andrew Greensted's WavFile class, but this also fails (WavFileException: Compression Code 3 not supported). One workaround is to convert the audio to 16 bits using sox (with the -b 16 flag), but this is suboptimal since it increases the noise floor. Incidentally, I've noticed that the file CAN be read by libsndfile. Is my best bet to write a JNI wrapper around libsndfile, or can you suggest something quicker? Note that I don't need to play the audio; I just need to analyze it, manipulate it, and then write it out to a new .wav file.

    UPDATE: I solved this problem by modifying Andrew Greensted's WavFile class. His original version only read files encoded as integer values ("format code 1"); my files were encoded as floats ("format code 3"), and that's what was causing the problem. I'll post the modified version of Greensted's code when I get a chance. In the meantime, if anyone wants it, send me a message.
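    For reference, the float-PCM ("format code 3") parsing the asker added is small. A hedged sketch in Python (standard library only; minimal chunk walking, no handling of odd-sized or WAVE_FORMAT_EXTENSIBLE chunks) of reading such a file into per-channel samples:

        import struct

        def read_float_wav(path):
            """Minimal WAVE_FORMAT_IEEE_FLOAT (format code 3) reader sketch."""
            with open(path, "rb") as f:
                assert f.read(4) == b"RIFF"
                f.read(4)                     # overall RIFF size, unused here
                assert f.read(4) == b"WAVE"
                channels = rate = None
                while True:
                    header = f.read(8)
                    if len(header) < 8:
                        raise ValueError("no data chunk found")
                    chunk_id, size = struct.unpack("<4sI", header)
                    if chunk_id == b"fmt ":
                        fmt = f.read(size)
                        code, channels, rate = struct.unpack("<HHI", fmt[:8])
                        assert code == 3, "expected float PCM (format code 3)"
                    elif chunk_id == b"data":
                        raw = f.read(size)
                        break
                    else:
                        f.seek(size, 1)       # skip unknown chunk
            flat = struct.unpack("<%df" % (len(raw) // 4), raw)
            # De-interleave into one list of samples per channel.
            return rate, [list(flat[c::channels]) for c in range(channels)]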
