Search Results

Search found 23082 results on 924 pages for 'address space'.

Page 133 of 924

  • How to do a true Java ping from Windows?

    - by stjowa
    I have a device on a network that I am attempting to ping from my Java program. From the Windows command prompt, I can ping the device address fine and run a tracert on the address fine. Online, I have seen that to do a ping through Java you use the following: InetAddress.getByName(address).isReachable(timeout); But when I use this code on my device address, it always returns false in my program. I am using the correct IPv4 address with a generous timeout value. Also, if I use a localhost address, it works fine. Why can I ping the device through cmd, but not through my program? I have heard in various places that isReachable() is not a true ping. Is there a better way to emulate a ping in Java? Thanks
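
    One workaround often used for this (an illustration, not part of the original question) is to fall back to the operating system's own ping command and check its exit code. A minimal Java sketch, assuming Windows-style "-n" (count) and "-w" (timeout in ms) flags:

        import java.io.IOException;

        public class PingFallback {
            // Returns true if the OS-level ping succeeds. Assumes Windows flags;
            // on Linux/macOS the count flag would be "-c" instead of "-n".
            public static boolean ping(String host, int timeoutMillis)
                    throws IOException, InterruptedException {
                ProcessBuilder pb = new ProcessBuilder(
                        "ping", "-n", "1", "-w", String.valueOf(timeoutMillis), host);
                pb.redirectErrorStream(true);
                Process p = pb.start();
                // Exit code 0 means at least one echo reply was received.
                return p.waitFor() == 0;
            }
        }

    Unlike InetAddress.isReachable(), which can fall back to a TCP probe when the JVM lacks raw-socket privileges, the system ping binary always sends a real ICMP echo request.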

    Read the article

  • Google Map API V3 geocoder not showing the correct place

    - by TTCG
    I am upgrading my code from Google Maps API V2 to V3. In V2, I used GlocalSearch to get the latitude and longitude for a given address. In V3, I saw google.maps.Geocoder() and tried to get the same detail. However, the lat/long returned by the V3 function is not accurate. Please see the following screenshot here: My code for V3 is as follows:

        var geocoder = new google.maps.Geocoder();
        function codeAddress(address) {
            if (geocoder) {
                address = address + ", UK";
                geocoder.geocode({ 'address': address }, function(results, status) {
                    if (status == google.maps.GeocoderStatus.OK) {
                        var latlng = results[0].geometry.location;
                        addMarker(latlng); // Adding marker here
                    } else {
                        alert("Geocode was not successful for the following reason: " + status);
                    }
                });
            }
        }

    Is there a better way to get an accurate result in API V3? Thanks.

    Read the article

  • How to get the place name by latitude and longitude using OpenStreetMap in Android

    - by Gaurav kumar
    In my app I am using OSM rather than Google Maps. I have a latitude and longitude, so how do I query the OSM (Nominatim) database to get the city name? Please help me.

        final String requestString = "http://nominatim.openstreetmap.org/reverse?format=json&lat="
                + Double.toString(lat) + "&lon=" + Double.toString(lon) + "&zoom=18&addressdetails=1";
        RequestBuilder builder = new RequestBuilder(RequestBuilder.GET, URL.encode(requestString));
        try {
            @SuppressWarnings("unused")
            Request request = builder.sendRequest(null, new RequestCallback() {
                @Override
                public void onResponseReceived(Request request, Response response) {
                    if (response.getStatusCode() == 200) {
                        String city = "";
                        try {
                            JSONValue json = JSONParser.parseStrict(response);
                            JSONObject address = json.isObject().get("address").isObject();
                            final String quotes = "^\"|\"$";
                            if (address.get("city") != null) {
                                city = address.get("city").toString().replaceAll(quotes, "");
                            } else if (address.get("village") != null) {
                                city = address.get("village").toString().replaceAll(quotes, "");
                            }
                        } catch (Exception e) {
                        }
                    }
                }
            });
        } catch (Exception e1) {
        }
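
    For comparison (not from the original post), a plain-Java sketch of the same Nominatim reverse-geocoding call using HttpURLConnection and the org.json library; the User-Agent value is a placeholder, and the "city"/"village" keys follow Nominatim's documented address details:

        import java.io.InputStream;
        import java.net.HttpURLConnection;
        import java.net.URL;
        import java.nio.charset.StandardCharsets;
        import org.json.JSONObject;

        public class ReverseGeocode {
            // Returns the city (or village) for the given coordinates, or null if absent.
            public static String cityFor(double lat, double lon) throws Exception {
                URL url = new URL("https://nominatim.openstreetmap.org/reverse?format=json"
                        + "&lat=" + lat + "&lon=" + lon + "&zoom=18&addressdetails=1");
                HttpURLConnection conn = (HttpURLConnection) url.openConnection();
                // Nominatim's usage policy asks for an identifying User-Agent.
                conn.setRequestProperty("User-Agent", "example-app");
                try (InputStream in = conn.getInputStream()) {
                    String body = new String(in.readAllBytes(), StandardCharsets.UTF_8);
                    JSONObject address = new JSONObject(body).getJSONObject("address");
                    if (address.has("city")) return address.getString("city");
                    if (address.has("village")) return address.getString("village");
                    return null;
                }
            }
        }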

    Read the article

  • HDFS: some datanodes of the cluster are suddenly disconnected while reducers are running

    - by user1429825
    I have 8 slave computers and 1 master computer running Hadoop (ver 0.21). Some datanodes of the cluster are suddenly disconnected while I am running MapReduce code on 10GB of data. After all mappers finished and around 80% of the reducers had been processed, one or more datanodes randomly disconnected from the network, and then the other datanodes started to disappear from the network even though I killed the MapReduce job when I found that some datanodes were disconnected. I've tried changing dfs.datanode.max.xcievers to 4096, turning off the firewalls of all computing nodes, disabling SELinux, and increasing the open-file limit to 20000, but none of it worked at all... Does anyone have an idea how to solve this problem? The following is the error log from MapReduce:

        12/06/01 12:31:29 INFO mapreduce.Job: Task Id : attempt_201206011227_0001_r_000006_0, Status : FAILED
        java.io.IOException: Bad connect ack with firstBadLink as ***.***.***.148:20010
            at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.createBlockOutputStream(DFSOutputStream.java:889)
            at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:820)
            at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:427)

    and the following is the log from a datanode:

        2012-06-01 13:01:01,118 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving block blk_-5549263231281364844_3453 src: /*.*.*.147:56205 dest: /*.*.*.142:20010
        2012-06-01 13:01:01,136 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(*.*.*.142:20010, storageID=DS-1534489105-*.*.*.142-20010-1337757934836, infoPort=20075, ipcPort=20020) Starting thread to transfer block blk_-3849519151985279385_5906 to *.*.*.147:20010
        2012-06-01 13:01:19,135 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(*.*.*.142:20010, storageID=DS-1534489105-*.*.*.142-20010-1337757934836, infoPort=20075, ipcPort=20020):Failed to transfer blk_-5797481564121417802_3453 to *.*.*.146:20010 got java.net.ConnectException: Connection timed out
            at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
            at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:701)
            at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
            at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:373)
            at org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer.run(DataNode.java:1257)
            at java.lang.Thread.run(Thread.java:722)
        2012-06-01 13:06:20,342 INFO org.apache.hadoop.hdfs.server.datanode.DataBlockScanner: Verification succeeded for blk_6674438989226364081_3453
        2012-06-01 13:09:01,781 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(*.*.*.142:20010, storageID=DS-1534489105-*.*.*.142-20010-1337757934836, infoPort=20075, ipcPort=20020):Failed to transfer blk_-3849519151985279385_5906 to *.*.*.147:20010 got java.net.SocketTimeoutException: 480000 millis timeout while waiting for channel to be ready for write. ch : java.nio.channels.SocketChannel[connected local=/*.*.*.142:60057 remote=/*.*.*.147:20010]
            at org.apache.hadoop.net.SocketIOWithTimeout.waitForIO(SocketIOWithTimeout.java:246)
            at org.apache.hadoop.net.SocketOutputStream.waitForWritable(SocketOutputStream.java:164)
            at org.apache.hadoop.net.SocketOutputStream.transferToFully(SocketOutputStream.java:203)
            at org.apache.hadoop.hdfs.server.datanode.BlockSender.sendChunks(BlockSender.java:388)
            at org.apache.hadoop.hdfs.server.datanode.BlockSender.sendBlock(BlockSender.java:476)
            at org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer.run(DataNode.java:1284)
            at java.lang.Thread.run(Thread.java:722)

    hdfs-site.xml:

        <configuration>
          <property>
            <name>dfs.name.dir</name>
            <value>/home/hadoop/data/name</value>
          </property>
          <property>
            <name>dfs.data.dir</name>
            <value>/home/hadoop/data/hdfs1,/home/hadoop/data/hdfs2,/home/hadoop/data/hdfs3,/home/hadoop/data/hdfs4,/home/hadoop/data/hdfs5</value>
          </property>
          <property>
            <name>dfs.replication</name>
            <value>3</value>
          </property>
          <property>
            <name>dfs.datanode.max.xcievers</name>
            <value>4096</value>
          </property>
          <property>
            <name>dfs.http.address</name>
            <value>0.0.0.0:20070</value>
            <description>50070 The address and the base port where the dfs namenode web ui will listen on. If the port is 0 then the server will start on a free port.</description>
          </property>
          <property>
            <name>dfs.datanode.http.address</name>
            <value>0.0.0.0:20075</value>
            <description>50075 The datanode http server address and port. If the port is 0 then the server will start on a free port.</description>
          </property>
          <property>
            <name>dfs.secondary.http.address</name>
            <value>0.0.0.0:20090</value>
            <description>50090 The secondary namenode http server address and port. If the port is 0 then the server will start on a free port.</description>
          </property>
          <property>
            <name>dfs.datanode.address</name>
            <value>0.0.0.0:20010</value>
            <description>50010 The address where the datanode server will listen to. If the port is 0 then the server will start on a free port.</description>
          <property>
            <name>dfs.datanode.ipc.address</name>
            <value>0.0.0.0:20020</value>
            <description>50020 The datanode ipc server address and port. If the port is 0 then the server will start on a free port.</description>
          </property>
          <property>
            <name>dfs.datanode.https.address</name>
            <value>0.0.0.0:20475</value>
          </property>
          <property>
            <name>dfs.https.address</name>
            <value>0.0.0.0:20470</value>
          </property>
        </configuration>

    mapred-site.xml:

        <configuration>
          <property>
            <name>mapred.job.tracker</name>
            <value>masternode:29001</value>
          </property>
          <property>
            <name>mapred.system.dir</name>
            <value>/home/hadoop/data/mapreduce/system</value>
          </property>
          <property>
            <name>mapred.local.dir</name>
            <value>/home/hadoop/data/mapreduce/local</value>
          </property>
          <property>
            <name>mapred.map.tasks</name>
            <value>32</value>
            <description> default number of map tasks per job.</description>
          </property>
          <property>
            <name>mapred.tasktracker.map.tasks.maximum</name>
            <value>4</value>
          </property>
          <property>
            <name>mapred.reduce.tasks</name>
            <value>8</value>
            <description> default number of reduce tasks per job.</description>
          </property>
          <property>
            <name>mapred.map.child.java.opts</name>
            <value>-Xmx2048M</value>
          </property>
          <property>
            <name>io.sort.mb</name>
            <value>500</value>
          </property>
          <property>
            <name>mapred.task.timeout</name>
            <value>1800000</value> <!-- 30 minutes -->
          </property>
          <property>
            <name>mapred.job.tracker.http.address</name>
            <value>0.0.0.0:20030</value>
            <description> 50030 The job tracker http server address and port the server will listen on. If the port is 0 then the server will start on a free port.</description>
          </property>
          <property>
            <name>mapred.task.tracker.http.address</name>
            <value>0.0.0.0:20060</value>
            <description> 50060 </property>
        </configuration>

    Read the article

  • Google Maps fails to place some markers on the map every time

    - by Luca
    Hello! I'm trying to place about 130-140 markers on a custom Google map. I inject the map with jQuery and gMap (http://gmap.nurtext.de/). Every time, at random (not related to specific markers), a lot of the markers are not shown. Firebug reports this error: "a is null", and the error comes from this file: http://maps.gstatic.com/intl/it_ALL/mapfiles/285c/maps2.api/main.js If I refresh the page, some other markers are "hidden" and other ones are shown. Has anyone had this problem? Can you help me or suggest another safe way to show all the markers? Thanks a lot! EDIT: this is how I inject the map and the markers (with a lot of addresses, but in this example only a few):

        $(document).ready(function() {
            $("#container").gMap({
                scrollwheel: false,
                maptype: G_PHYSICAL_MAP,
                icon: {
                    image: "files/images/gmap_pin.png",
                    iconsize: [32, 37],
                    iconanchor: [32, 37],
                    infowindowanchor: [12, 0]
                },
                address: "Milano",
                zoom: 4,
                markers: [
                    { address: "Viale Certosa, Milano" },
                    { address: "Viale Ceccarini, Milano" },
                    { address: "Viale Italia, Milano" },
                    { address: "Via Rodi, Milano" },
                ]
            });
        });

    Read the article

  • Why can't I reclaim my dynamically allocated memory using the "delete" keyword?

    - by synaptik
    I have the following class:

        class Patient {
        public:
            Patient(int x);
            ~Patient();
        private:
            int* RP;
        };

        Patient::Patient(int x) {
            RP = new int [x];
        }

        Patient::~Patient() {
            delete [] RP;
        }

    I create an instance of this class on the stack as follows:

        void f() {
            Patient p(10);
        }

    Now, when f() returns, I get a "double free or corruption" error, which signals to me that something is being deleted more than once. But I don't understand why that would be so. The space for the array is created on the heap, and just because the function from inside which the space was allocated returns, I wouldn't expect the space to be reclaimed. I thought that if I allocate space on the heap (using the new keyword), then the only way to reclaim that space is to use the delete keyword. Help! :)

    Read the article

  • Http Geocoder (Google) Accuracy level

    - by sushruth
    I am geocoding a large number of user-entered addresses and am interested in the accuracy levels returned. My GOAL is to get the BEST POSSIBLE ACCURACY score for a given address. I call the geocoder API the following way: http://maps.google.com/maps/geo?q={address}&output=csv&sensor=false&key=xx Now, the accuracy levels returned for the same address with/without the premise name:

        q = Key Arena, 305 Harrison Street, Seattle, WA 98109 (Accuracy is 5)
        q = 305 Harrison Street Seattle, WA 98109 (Accuracy is 8)
        q = Key Arena, Seattle, WA 98109 (Accuracy is 9)

    It is obvious from the above that the Google servers do not return the best accuracy when the street name is prefixed with a premise/venue name. The question is :) is there a way to pass the complete address (with the premise name, i.e. case 1) and get the max accuracy? (Or how can I tell the Google server that the address is passed with a premise/building name and a street name?) (If you are thinking "why not just use case 3", the answer is that these are user-entered addresses; they could enter "my mom's house" for the premise with an accurate street address, in which case I want the accuracy to be 8, not 5.)
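
    For illustration (not from the original post), a small Java sketch of issuing the same HTTP geocoder request and reading the accuracy field, assuming the CSV layout of that (now-deprecated) V2 endpoint is status,accuracy,lat,lng:

        import java.io.BufferedReader;
        import java.io.InputStreamReader;
        import java.net.URL;
        import java.net.URLEncoder;

        public class GeocodeAccuracy {
            // Returns the accuracy level reported in the geocoder's CSV output.
            public static int accuracyOf(String address) throws Exception {
                URL url = new URL("http://maps.google.com/maps/geo?output=csv&sensor=false&q="
                        + URLEncoder.encode(address, "UTF-8"));
                try (BufferedReader in = new BufferedReader(new InputStreamReader(url.openStream()))) {
                    String[] fields = in.readLine().split(",");  // status,accuracy,lat,lng
                    return Integer.parseInt(fields[1]);
                }
            }
        }

    One could then geocode the address both with and without the premise text and simply keep whichever response reports the higher accuracy.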

    Read the article

  • How to query on a property from a joined table in Hibernate using Criteria

    - by Palo
    Hello, I have the following mapping:

        <hibernate-mapping package="server.modules.stats.data">
            <class name="User" table="user">
                <id name="id">
                    <generator class="native"></generator>
                </id>
                <many-to-one name="address" column="addressId" unique="true" lazy="false" />
            </class>
            <class name="Address" table="address">
                <id name="id">
                    <generator class="native"></generator>
                </id>
                <property name="street" />
            </class>
        </hibernate-mapping>

    How can I do a Criteria query to select all users living on some street? That is, create a Criteria query for this SQL:

        Select * from user join address on user.addressId = address.id where address.street='someStreet'
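
    For reference (not part of the original question), a sketch of how such a join is typically written with the classic Criteria API, assuming entity classes User and Address matching the mapping above:

        import java.util.List;
        import org.hibernate.Session;
        import org.hibernate.criterion.Restrictions;

        public class UserQueries {
            // Selects all users whose associated address has the given street.
            @SuppressWarnings("unchecked")
            public static List<User> usersOnStreet(Session session, String street) {
                return session.createCriteria(User.class)
                        .createAlias("address", "a")              // join user.address
                        .add(Restrictions.eq("a.street", street))
                        .list();
            }
        }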

    Read the article

  • immediate=true is set on a JSF command button but I am still seeing validation

    - by Zack Macomber
    I have the following command button set up in a Facelet:

        <h:commandButton action="#{addressAction.deletePreviousAddress}"
                         value="#{bundle['button.deleteAddress']}"
                         styleClass="deg-form-button"
                         immediate="true">
            <f:setPropertyActionListener target="#{addressAction.addressActionForm.previousAddress}"
                                         value="#{address}">
            </f:setPropertyActionListener>
        </h:commandButton>

    In AddressAction, the following code runs to delete a previous address on the form:

        public Enum<NavigationConstants> deletePreviousAddress() {
            addressActionForm.getPreviousAddresses().remove(addressActionForm.getPreviousAddress());
            return NavigationConstants.addresses;
        }

    Before I made the address input components required="true", this code worked fine and removed the previous address from the JSF form successfully. Right now, I can't successfully delete a previous address because validation is occurring and stating that the input components need to be filled in on the previous address record on the form. How can I bypass this validation? I thought the immediate="true" attribute on the command button would have accomplished it, but that's not cutting it in my case...

    Read the article

  • JPA One to Many using JoinTable Error

    - by user553015
    I am trying to model a 1:N (Person and Address) relationship using a junction table (Person_Address):

        1. Person (personId PK)
        2. Address (addressId PK)
        3. PersonAddress (personId, addressId composite PK, personId FK references Person, addressId FK references Address)

        @Entity
        public class Person {
            @OneToMany
            @JoinTable(
                name = "PersonAddress",
                joinColumns = @JoinColumn(name = "personId"),
                inverseJoinColumns = @JoinColumn(name = "addressId"))
            public Set<Address> getAddresses() {...}
            ...
        }

    I encounter the following error and am not able to find any solution:

        Caused by: org.hibernate.MappingException: Could not determine type for: com.realestate.details.Address, at table: Person, for columns: [org.hibernate.mapping.Column(address)]
            at org.hibernate.mapping.SimpleValue.getType(SimpleValue.java:269)
            at org.hibernate.mapping.SimpleValue.isValid(SimpleValue.java:253)
            at org.hibernate.mapping.Property.isValid(Property.java:185)
            at org.hibernate.mapping.PersistentClass.validate(PersistentClass.java:440)
            at org.hibernate.mapping.RootClass.validate(RootClass.java:192)
            at org.hibernate.cfg.Configuration.validate(Configuration.java:1108)
            at org.hibernate.cfg.Configuration.buildSessionFactory(Configuration.java:1293)
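
    A frequently cited cause of this kind of MappingException (an assumption here; the post does not confirm it) is mixed access types: if @Id sits on a field while the collection mapping sits on a getter, the getter annotations are ignored. A sketch that keeps every mapping annotation on fields for consistency, assuming an Address entity exists:

        import java.util.HashSet;
        import java.util.Set;
        import javax.persistence.Entity;
        import javax.persistence.GeneratedValue;
        import javax.persistence.Id;
        import javax.persistence.JoinColumn;
        import javax.persistence.JoinTable;
        import javax.persistence.OneToMany;

        @Entity
        public class Person {
            @Id
            @GeneratedValue
            private Long personId;

            // All mapping annotations are on fields, so the access type is consistent.
            @OneToMany
            @JoinTable(
                name = "PersonAddress",
                joinColumns = @JoinColumn(name = "personId"),
                inverseJoinColumns = @JoinColumn(name = "addressId"))
            private Set<Address> addresses = new HashSet<Address>();

            public Set<Address> getAddresses() { return addresses; }
            public void setAddresses(Set<Address> addresses) { this.addresses = addresses; }
        }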

    Read the article

  • PHP String Encoding Error

    - by Brian
    I'm trying to get the following code to output an IMG tag with the URL for the Google Static Maps API (http://code.google.com/apis/maps/documentation/staticmaps/#Imagesizes) embedded in there... The result is that everything except the $address is being output successfully. What am I doing wrong?

        function event_map_img($echo = true){
            global $post;
            $address = get_post_meta($post->ID, 'date_address', true);
            if($echo):
                echo '<img src="'.'http://maps.google.com/maps/api/staticmap?center='.$address.'&zoom=14&size=700x512&maptype=roadmap&markers=color:blue|label:X|'.$address.'&sensor=false" />';
            else:
                return '<img src="'.'http://maps.google.com/maps/api/staticmap?center='.$address.'&zoom=14&size=700x512&maptype=roadmap&markers=color:blue|label:X|'.$address.'&sensor=false" />';
            endif;
        }

    Read the article

  • How do I model a has_many :through with aggregation in Rails?

    - by Angela
    How do I model having multiple Addresses for a Company and assign a single Address to a Contact? Contacts belong_to a Company. A Company has_many Contacts. A Company also has_many Addresses. And each Contact belongs_to an Address. How do I model this? I have:

        # Model/Contacts.rb
        belongs_to :Company
        belongs_to :Address  # (?)

        # Model/Company.rb
        has_many :Contacts
        has_many :Addresses

    Address is an aggregation of :street1, :street2, :city, :state, :zip, so I'm not clear exactly what to do there. So what would I do in my _form so that when I have a contact/new I am able to either default to a main address or select one of the others? And if none of them match, adding one for a Contact makes that address available to any subsequent contact?

    Read the article

  • Did C++11 address concerns about passing std lib objects across dynamic/shared library boundaries (i.e. DLLs and SOs)?

    - by Doug T.
    One of my major complaints about C++ is how hard it is in practice to pass std library objects across dynamic library (i.e. DLL/SO) boundaries. The std library is often header-only, which is great for doing some awesome optimizations. However, DLLs are often built with different compiler settings that may impact the internal structure/code of std library containers. For example, in MSVC one DLL may be built with iterator debugging on while another is built with it off. These two DLLs may run into issues passing std containers around. If I expose std::string in my interface, I can't guarantee that the code the client is using for std::string is an exact match of my library's std::string. This leads to hard-to-debug problems, headaches, etc. You either rigidly control the compiler settings in your organization to prevent these issues, or you use a simpler C interface that won't have these problems, or you specify to your clients the expected compiler settings they should use (which sucks if another library specifies other compiler settings). My question is whether or not C++11 tried to do anything to solve these issues.

    Read the article

  • How to use email addresses with special chars such as Ø

    - by Sir Code-A-Lot
    When writing this:

        var recipient = new MailAddress("name@abcø.dk");

    (notice the "ø" in the domain part) I get an exception stating:

        System.FormatException: The specified string is not in the form required for an e-mail address.
            at System.Net.Mime.MailBnfHelper.ReadMailAddress(String data, Int32& offset, String& displayName)
            at System.Net.Mail.MailAddress.ParseValue(String address)
            at System.Net.Mail.MailAddress..ctor(String address, String displayName, Encoding displayNameEncoding)
            at System.Net.Mail.MailAddress..ctor(String address)

    The address used should be perfectly valid, so I'm guessing I have to encode the address somehow?

    Read the article

  • Extracting multiple tags from XML using PHP

    - by user1479431
    Here is my address.xml:

        <?xml version="1.0" ?>
        <!-- Sample XML document -->
        <AddressBook>
            <Addressentry>
                <firstName>jack</firstName>
                <lastName>S</lastName>
                <Address>2899,Ray Road</Address>
                <Email>[email protected]</Email>
            </Addressentry>
            <Addressentry>
                <firstName>Sid</firstName>
                <lastName>K</lastName>
                <Address>238,Baseline Road,TX</Address>
                <Email>[email protected]</Email>
                <Email>[email protected]</Email>
            </Addressentry>
            <Addressentry>
                <firstName>Satya</firstName>
                <lastName>Yar</lastName>
                <Address>6,Rural Road,Tempe,AZ</Address>
                <Email>[email protected]</Email>
                <Email>[email protected]</Email>
                <Email>[email protected]</Email>
            </Addressentry>
        </AddressBook>

    I am trying to load all the entries using the PHP code below. Each Addressentry can have one or more Email tags. Right now, from the code below, I am able to extract only one Email tag. My question is: how do I extract all Email tags associated with a particular Addressentry? That is, I want to print all emails on the same line.

        <?php
        $theData = simplexml_load_file("address.xml");
        foreach($theData->Addressentry as $theAddress) {
            $theFirstName = $theAddress->firstName;
            $theLastName = $theAddress->lastName;
            $theAdd = $theAddress->Address;
            echo "<p>".$theFirstName." ".$theLastName."<br/> ".$theAdd."<br/> ".$theAddress->Email."<br/> </p>";
            unset($theFirstName);
            unset($theLastName);
            unset($theAdd);
            unset($theEmail);
        }
        ?>

    Any help would be appreciated.

    Read the article

  • How do I return the IDENTITY for an inserted record from a stored Procedure?

    - by user54197
    I am adding data to my database, but would like to retrieve the UnitID that is auto-generated.

        using (SqlConnection connect = new SqlConnection(connections))
        {
            SqlCommand command = new SqlCommand("ContactInfo_Add", connect);
            command.Parameters.Add(new SqlParameter("name", name));
            command.Parameters.Add(new SqlParameter("address", address));
            command.Parameters.Add(new SqlParameter("Product", name));
            command.Parameters.Add(new SqlParameter("Quantity", address));
            command.Parameters.Add(new SqlParameter("DueDate", city));
            connect.Open();
            command.ExecuteNonQuery();
        }

        ...

        ALTER PROCEDURE [dbo].[Contact_Add]
            @name varchar(40),
            @address varchar(60),
            @Product varchar(40),
            @Quantity varchar(5),
            @DueDate datetime
        AS
        BEGIN
            SET NOCOUNT ON;
            INSERT INTO DBO.PERSON (Name, Address)
            VALUES (@name, @address)
            INSERT INTO DBO.PRODUCT_DATA (PersonID, Product, Quantity, DueDate)
            VALUES (@Product, @Quantity, @DueDate)
        END

    Read the article

  • Generic object load function for Scala

    - by Isaac Oates
    I'm starting on a Scala application which uses Hibernate (JPA) on the back end. In order to load an object, I use this line of code:

        val addr = s.load(classOf[Address], addr_id).asInstanceOf[Address];

    Needless to say, that's a little painful. I wrote a helper class which looks like this:

        import org.hibernate.Session

        class DataLoader(s: Session) {
            def loadAddress(id: Long): Address = {
                return s.load(classOf[Address], id).asInstanceOf[Address];
            }
            ...
        }

    So, now I can do this:

        val dl = new DataLoader(s)
        val addr = dl loadAddress(addr_id)

    Here's the question: how do I write a generic parametrized method which can load any object using this same pattern? i.e.

        val addr = dl load[Address](addr_id)

    (or something along those lines). I'm new to Scala so please forgive anything here that's especially hideous.

    Read the article

  • Preserve trailing whitespace in Sybase

    - by AngryWhenHungry
    I have a big chunk of textual data which I split and write into multiple rows of a varchar(255) column of a table. Sometimes the last character happens to be a space. When I read back such a row, the trailing space is chopped and I get only 254 characters. This messes up my data when I append the next row to the end of this one. My code sends the full 255 characters (including the space) to the DB API. How can I check that the trailing space is actually written to the table? I am not in a position to rewrite/redesign legacy code. Is there any setting - either in the DB, the DB interface, or the read/write calls - that I can use to preserve the trailing space?

    Read the article
