Search Results

Search found 13467 results on 539 pages for 'port mapping'.


  • Meteor exception in Meteor.flush when updating a collection breaks reactivity across clients

    - by Harel
    When I'm calling Collection.update from the front end (the method call is allowed), the update does work, but the exception below is thrown (in Chrome's JS console, not on the server). Although the update took place, other clients connected to the same collection do not see the updates until they refresh the browser - I suspect because of the exception. Any idea what might cause this?

        Exception from Meteor.flush: Error: Can't create second landmark in same branch
            at Object.Spark.createLandmark (http://checkadoo.com/packages/spark/spark.js?8b4e0abcbf865e6ad778592160ec3b3401d7abd2:1085:13)
            at http://checkadoo.com/packages/templating/deftemplate.js?7f4bb363e9e340dbaaea8d74ac670af40ac82d0a:115:26
            at Object.Spark.labelBranch (http://checkadoo.com/packages/spark/spark.js?8b4e0abcbf865e6ad778592160ec3b3401d7abd2:1030:14)
            at Object.partial [as list_item] (http://checkadoo.com/packages/templating/deftemplate.js?7f4bb363e9e340dbaaea8d74ac670af40ac82d0a:114:24)
            at http://checkadoo.com/packages/handlebars/evaluate.js?ab265dbab665c32cfd7ec343166437f2e03f1a54:349:48
            at Object.Spark.labelBranch (http://checkadoo.com/packages/spark/spark.js?8b4e0abcbf865e6ad778592160ec3b3401d7abd2:1030:14)
            at branch (http://checkadoo.com/packages/handlebars/evaluate.js?ab265dbab665c32cfd7ec343166437f2e03f1a54:308:20)
            at http://checkadoo.com/packages/handlebars/evaluate.js?ab265dbab665c32cfd7ec343166437f2e03f1a54:348:20
            at Array.forEach (native)
            at Function._.each._.forEach (http://checkadoo.com/packages/underscore/underscore.js?772b2587aa2fa345fb760eff9ebe5acd97937243:76:11)

    EDIT: Here is my template for the clickable item that triggers the update:

        <template name="list_item">
          <li class="checklistitemli">
            <div class="{{checkbox_class}}" id="clitem_{{index}}">
              <input type="checkbox" name="item_checked" value="1" id="clcheck_{{index}}"
                     class="checklist_item_check" {{checkbox_ticked}}> {{title}}
            </div>
          </li>
        </template>

    and here's the event handler for clicks on 'list_item':

        var visualCheck = function(el, checked) {
          var checkId = el.id.replace('clitem', 'clcheck');
          if (checked) {
            addClass(el, 'strikethrough');
            $('')
          } else {
            removeClass(el, 'strikethrough');
          }
          $('#' + checkId)[0].checked = checked;
        };

        Template.list_item.events = {
          'click .checklistitem': function(ev) {
            this.checked = !this.checked;
            var reverseState = !this.checked;
            var updateItem = {}, self = this, el = ev.target;
            visualCheck(el, this.checked);
            updateItem['items.' + this.index + '.checked'] = this.checked;
            console.log("The error happens here");
            Lists.update({_id: this._id}, {$set: updateItem}, {multi: false}, function(err) {
              console.log("In callback, after the error");
              if (err) {
                visualCheck(el, reverseState);
              }
            });
          }
        }

    The whole thing is available at http://checkadoo.com (it's a port of a Tornado-based Python app of mine).

    Read the article

  • GCC: Simple inheritance test fails

    - by knight666
    I'm building an open source 2D game engine called YoghurtGum. Right now I'm working on the Android port, using the NDK provided by Google. I was going mad because of the errors I was getting in my application, so I made a simple test program:

        class Base
        {
        public:
            Base() { }
            virtual ~Base() { }
        }; // class Base

        class Vehicle : virtual public Base
        {
        public:
            Vehicle() : Base() { }
            ~Vehicle() { }
        }; // class Vehicle

        class Car : public Vehicle
        {
        public:
            Car() : Base(), Vehicle() { }
            ~Car() { }
        }; // class Car

        int main(int a_Data, char** argv)
        {
            Car* stupid = new Car();
            return 0;
        }

    Seems easy enough, right? Here's how I compile it, which is the same way I compile the rest of my code (line breaks are added for clarity):

        /home/oem/android-ndk-r3/build/prebuilt/linux-x86/arm-eabi-4.4.0/bin/arm-eabi-g++
            -g -std=c99 -Wall -Werror -O2 -w -shared -fshort-enums
            -I ../../YoghurtGum/src/GLES
            -I ../../YoghurtGum/src
            -I /home/oem/android-ndk-r3/build/platforms/android-5/arch-arm/usr/include
            -c src/Inheritance.cpp -o intermediate/Inheritance.o

    This compiles fine. But then we get to the linker:

        /home/oem/android-ndk-r3/build/prebuilt/linux-x86/arm-eabi-4.4.0/bin/arm-eabi-gcc
            -lstdc++ -Wl,
            --entry=main,
            -rpath-link=/system/lib,
            -rpath-link=/home/oem/android-ndk-r3/build/platforms/android-5/arch-arm/usr/lib,
            -dynamic-linker=/system/bin/linker,
            -L/home/oem/android-ndk-r3/build/prebuilt/linux-x86/arm-eabi-4.4.0/lib/gcc/arm-eabi/4.4.0,
            -L/home/oem/android-ndk-r3/build/platforms/android-5/arch-arm/usr/lib,
            -rpath=../../YoghurtGum/lib/GLES
            -nostdlib -lm -lc -lGLESv1_CM -z
            /home/oem/android-ndk-r3/build/platforms/android-5/arch-arm/usr/lib/crtbegin_dynamic.o
            /home/oem/android-ndk-r3/build/platforms/android-5/arch-arm/usr/lib/crtend_android.o
            intermediate/Inheritance.o
            ../../YoghurtGum/bin/YoghurtGum.a
            -o bin/Galaxians.android

    As you can probably tell, there's a lot of cruft in there that isn't really needed. That's because it doesn't work. It fails with the following errors:

        intermediate/Inheritance.o:(.rodata._ZTI3Car[typeinfo for Car]+0x0): undefined reference to `vtable for __cxxabiv1::__si_class_type_info'
        intermediate/Inheritance.o:(.rodata._ZTI7Vehicle[typeinfo for Vehicle]+0x0): undefined reference to `vtable for __cxxabiv1::__vmi_class_type_info'
        intermediate/Inheritance.o:(.rodata._ZTI4Base[typeinfo for Base]+0x0): undefined reference to `vtable for __cxxabiv1::__class_type_info'
        collect2: ld returned 1 exit status
        make: *** [bin/Galaxians.android] Fout 1

    These are the same errors I get from my actual application. If someone could explain to me where I went wrong in my test, or what option I forgot in my linker, I would be very, extremely grateful. Thanks in advance.

    UPDATE: When I make my destructors non-inlined, I get new and more exciting link errors:

        intermediate/Inheritance.o:(.rodata+0x78): undefined reference to `vtable for __cxxabiv1::__si_class_type_info'
        intermediate/Inheritance.o:(.rodata+0x90): undefined reference to `vtable for __cxxabiv1::__vmi_class_type_info'
        intermediate/Inheritance.o:(.rodata+0xb0): undefined reference to `vtable for __cxxabiv1::__class_type_info'
        collect2: ld returned 1 exit status
        make: *** [bin/Galaxians.android] Fout 1

    Read the article

  • Zend database query result converts column values to null

    - by David Zapata
    Hi again. I am using the following instructions to get some registers from my database.

    Create the needed models (from the params module):

        $obj_paramtype_model = new Params_Model_DbTable_Paramtype();
        $obj_param_model = new Params_Model_DbTable_Param();

    Getting the available locales from the database:

        // This returns a Zend_Db_Table_Row_Abstract class object
        $obj_paramtype = $obj_paramtype_model->getParamtypeByValue('available_locales');

        // This is a query used to add conditions to the next sentence. It is executed from the
        // Params_Model_DbTable_Param instance class, which depends on the Params_Model_DbTable_Paramtype
        // class (the reference map and dependentTables arrays are fine in both classes)
        $obj_select = $this->select()->where('deleted_at IS NULL')->order('name');

        // Execute the next query, applying the select restrictions. This returns a
        // Zend_Db_Table_Rowset_Abstract class object. This means "find Params by Paramtype"
        $obj_params_rowset = $obj_paramtype->findDependentRowset('Params_Model_DbTable_Param', 'Paramtype', $obj_paramtype);

        // Here the Firebug log displays the queries....
        Zend_Registry::get('log')->debug($obj_params_rowset);

    I have a profiler for all my DB executions from Zend. At this point the log and profiler objects (which include Firebug writers) show the executed SQL queries, and the last line displays the resulting Zend_Db_Table_Rowset_Abstract class object. If I execute the SQL queries in some MySQL client, the results are as expected. But the Zend Firebug log writer displays as NULL the column values with Latin characters (ñ). In other words, the external SQL client shows "es_CO | Español de Colombia" and "en_US | English of United States", but the query results from Zend display (and return) "es_CO | null" and "en_US | English of United States". I've deleted the ñ character from "Español de Colombia" and the query results are just fine in my Zend Log Firebug screen, and in the final Zend Form element.

    The MySQL database, tables and columns use UTF-8 with the utf8_unicode_ci collation. All my Zend Framework pages are in the UTF-8 charset. I'm using XAMPP 1.7.1 (PHP 5.2.9, Apache on port 90 and MySQL 5.1.33-community) running on Windows 7 Ultimate, with Zend Framework 1.10.1.

    I'm sorry if there is too much information, but I don't really know why this could happen, so I tried to provide as much related information as I could to help find an answer.

    Read the article

  • What's wrong with this code? Runtime error

    - by javacode
    Hi, I am writing this application in Eclipse and I added all the jar files. I am pasting the code and the error. Please let me know what changes I should make to run the application properly.

        import javax.mail.*;
        import javax.mail.internet.*;
        import java.util.*;

        public class SendMail {

            public static void main(String[] args) {
                SendMail sm = new SendMail();
                try {
                    sm.postMail(new String[]{"[email protected]"}, "hi", "hello", "[email protected]");
                } catch (MessagingException e) {
                    e.printStackTrace();
                }
            }

            public void postMail(String recipients[], String subject, String message, String from) throws MessagingException {
                boolean debug = false;

                // Set the host smtp address
                Properties props = new Properties();
                props.put("mail.smtp.starttls.enable", "true");
                props.put("mail.smtp.host", "smtp.gmail.com");
                props.setProperty("mail.smtp.port", "25");

                // create some properties and get the default Session
                Session session = Session.getDefaultInstance(props, null);
                session.setDebug(debug);

                // create a message
                Message msg = new MimeMessage(session);

                // set the from and to address
                InternetAddress addressFrom = new InternetAddress(from);
                msg.setFrom(addressFrom);
                InternetAddress[] addressTo = new InternetAddress[recipients.length];
                for (int i = 0; i < recipients.length; i++) {
                    addressTo[i] = new InternetAddress(recipients[i]);
                }
                msg.setRecipients(Message.RecipientType.TO, addressTo);

                // Optional: you can also set your custom headers in the email if you want
                msg.addHeader("MyHeaderName", "myHeaderValue");

                // Setting the Subject and Content Type
                msg.setSubject(subject);
                msg.setContent(message, "text/plain");
                Transport.send(msg);
            }
        }

    Error:

        com.sun.mail.smtp.SMTPSendFailedException: 530 5.7.0 Must issue a STARTTLS command first. 13sm646598ewy.13
            at com.sun.mail.smtp.SMTPTransport.issueSendCommand(SMTPTransport.java:1829)
            at com.sun.mail.smtp.SMTPTransport.mailFrom(SMTPTransport.java:1368)
            at com.sun.mail.smtp.SMTPTransport.sendMessage(SMTPTransport.java:886)
            at javax.mail.Transport.send0(Transport.java:191)
            at javax.mail.Transport.send(Transport.java:120)
            at SendMail.postMail(SendMail.java:54)
            at SendMail.main(SendMail.java:10)
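
    A likely fix, offered as an untested sketch (the account credentials are placeholders): Gmail expects authenticated submission over STARTTLS on port 587, and Session.getDefaultInstance returns a JVM-wide cached session whose properties may predate yours, so prefer Session.getInstance with explicit authentication:

        Properties props = new Properties();
        props.put("mail.smtp.host", "smtp.gmail.com");
        props.put("mail.smtp.port", "587");              // Gmail's submission port
        props.put("mail.smtp.starttls.enable", "true");  // upgrade to TLS before MAIL FROM
        props.put("mail.smtp.auth", "true");             // Gmail rejects unauthenticated senders

        // getInstance, not getDefaultInstance: the default session is cached for the
        // whole JVM, so properties passed on later calls can be silently ignored.
        Session session = Session.getInstance(props, new javax.mail.Authenticator() {
            protected PasswordAuthentication getPasswordAuthentication() {
                // placeholder credentials - substitute the real account
                return new PasswordAuthentication("user@gmail.com", "password");
            }
        });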

    Read the article

  • A SelfHosted WCF Service over Basic HTTP Binding doesn't support more than 1000 concurrent requests

    - by Krishnan
    I have a self-hosted WCF service over BasicHttpBinding, consumed by an ASMX client. I'm simulating a concurrent user load of 1200 users. The service method takes a string parameter and returns a string. The data exchanged is less than 10KB. The processing time for a request is fixed at 2 seconds by a Thread.Sleep(2000) statement, nothing additional; I have removed all the DB hits / business logic. The same piece of code runs fine for 1000 concurrent users. I get the following error when I bump the number up to 1200 users:

        System.Net.WebException: The underlying connection was closed: An unexpected error occurred on a receive. ---> System.IO.IOException: Unable to read data from the transport connection: An existing connection was forcibly closed by the remote host. ---> System.Net.Sockets.SocketException: An existing connection was forcibly closed by the remote host
            at System.Net.Sockets.Socket.Receive(Byte[] buffer, Int32 offset, Int32 size, SocketFlags socketFlags)
            at System.Net.Sockets.NetworkStream.Read(Byte[] buffer, Int32 offset, Int32 size)
            --- End of inner exception stack trace ---
            at System.Net.Sockets.NetworkStream.Read(Byte[] buffer, Int32 offset, Int32 size)
            at System.Net.PooledStream.Read(Byte[] buffer, Int32 offset, Int32 size)
            at System.Net.Connection.SyncRead(HttpWebRequest request, Boolean userRetrievedStream, Boolean probeRead)
            --- End of inner exception stack trace ---
            at System.Web.Services.Protocols.WebClientProtocol.GetWebResponse(WebRequest request)
            at System.Web.Services.Protocols.HttpWebClientProtocol.GetWebResponse(WebRequest request)
            at System.Web.Services.Protocols.SoapHttpClientProtocol.Invoke(String methodName, Object[] parameters)
            at WCF.Throttling.Client.Service.Function2(String param)

    This exception is often reported on DataContract mismatch and large data exchange, but never when doing a load test. I have browsed enough and have tried most of the options, which include:

      - Enabled trace and message logging on the server side, but no errors are logged.
      - To overcome port exhaustion, MaxUserPort is set to 65535 and TcpTimedWaitDelay to 30 secs.
      - MaxConcurrentCalls is set to 600 and MaxConcurrentInstances is set to 1200.
      - The Open, Close, Send and Receive timeouts are set to 10 minutes.
      - The HttpWebRequest KeepAlive is set to false.

    I have not been able to nail down the issue for the past two days. Any help would be appreciated. Thank you.
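
    One more knob worth checking, offered as an assumption rather than a confirmed diagnosis: the .NET client side caps outbound HTTP connections per endpoint, and a load generator built on the ASMX stack can hit that cap before the service does. A sketch of the client-side app.config override:

        <configuration>
          <system.net>
            <connectionManagement>
              <!-- default is 2 concurrent connections per host for non-ASP.NET
                   clients; raise it to at least the simulated user count -->
              <add address="*" maxconnection="1200" />
            </connectionManagement>
          </system.net>
        </configuration>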

    Read the article

  • How to solve a Python memory leak when using urllib2?

    - by b_m
    Hi, I'm trying to write a simple Python script for my mobile phone that periodically loads a web page using urllib2. In fact I don't really care about the server response; I'd only like to pass some values in the URL to the PHP. The problem is that Python for S60 uses the old 2.5.4 Python core, which seems to have a memory leak in the urllib2 module. As I read, there seem to be such problems in every type of network communication as well. This bug was reported here a couple of years ago, and some workarounds were posted as well. I've tried everything I could find on that page, and with the help of Google, but my phone still runs out of memory after ~70 page loads. Strangely, the garbage collector does not seem to make any difference either, except making my script much slower. It is said that the newer (3.1) core solves this issue, but unfortunately I can't wait a year (or more) for the S60 port to come. Here's how my script looks after adding every little trick I've found:

        import urllib2, httplib, gc

        while True:
            url = "http://something.com/foo.php?parameter=" + value
            f = urllib2.urlopen(url)
            f.read(1)
            f.fp._sock.recv = None  # hacky avoidance
            f.close()
            del f
            gc.collect()

    Any suggestions on how to make it work forever without getting the "cannot allocate memory" error? Thanks in advance; cheers, b_m.

    UPDATE: I've managed to connect 92 times before it ran out of memory, but it's still not good enough.

    UPDATE 2: I tried the socket method as suggested earlier; this is the second best (wrong) solution so far:

        class UpdateSocketThread(threading.Thread):
            def run(self):
                global data
                while 1:
                    url = "/foo.php?parameter=%d" % data
                    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
                    s.connect(('something.com', 80))
                    s.send('GET ' + url + ' HTTP/1.0\r\n\r\n')
                    s.close()
                    sleep(1)

    I tried the little tricks from above too. The thread closes after ~50 uploads (the phone has 50MB of memory left; obviously the Python shell does not).

    UPDATE: I think I'm getting closer to the solution! I tried sending multiple requests without closing and reopening the socket. This may be the key, since this method will only leave one open file descriptor. The problem is:

        import socket
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.connect(("something.com", 80))
        s.send("test")  # returns 4 (sent bytes, which is cool)
        s.send("test")  # 4
        s.send("test")  # 4
        s.send("GET /foo.php?parameter=bar HTTP/1.0\r\n\r\n")  # returns the number of sent bytes, ok
        s.send("GET /foo.php?parameter=bar HTTP/1.0\r\n\r\n")  # returns 0 on the phone, error on Windows 7*
        s.send("GET /foo.php?parameter=bar HTTP/1.0\r\n\r\n")  # returns 0 on the phone, error on Windows 7*
        s.send("test")  # returns 0, strange...

    *: error message: 10053, software caused connection abort

    Why can't I send multiple messages??

    Read the article

  • C# problem with two threads and hardware access

    - by mack369
    I'm creating an application which communicates with a device via an FT2232H USB/RS232 converter. For communication I'm using the FTD2XX_NET.dll library from the FTDI website. I'm using two threads:

      - the first thread continuously reads data from the device
      - the second thread is the main thread of the Windows Forms application

    I've got a problem when I'm trying to write any data to the device while the receiver's thread is running. The main thread simply hangs up on the ftdiDevice.Write function. I tried to synchronize both threads so that only one thread can use the Read/Write functions at the same time, but it didn't help. Below is the code responsible for the communication. Note that the following functions are methods of the FtdiPort class.

    Receiver's thread:

        private void receiverLoop()
        {
            if (this.DataReceivedHandler == null)
            {
                throw new BackendException("dataReceived delegate is not set");
            }

            FTDI.FT_STATUS ftStatus = FTDI.FT_STATUS.FT_OK;
            byte[] readBytes = new byte[this.ReadBufferSize];

            while (true)
            {
                lock (FtdiPort.threadLocker)
                {
                    UInt32 numBytesRead = 0;
                    ftStatus = ftdiDevice.Read(readBytes, this.ReadBufferSize, ref numBytesRead);
                    if (ftStatus == FTDI.FT_STATUS.FT_OK)
                    {
                        this.DataReceivedHandler(readBytes, numBytesRead);
                    }
                    else
                    {
                        Trace.WriteLine(String.Format("Couldn't read data from ftdi: status {0}", ftStatus));
                        Thread.Sleep(10);
                    }
                }
                Thread.Sleep(this.RXThreadDelay);
            }
        }

    Write function called from the main thread:

        public void Write(byte[] data, int length)
        {
            if (this.IsOpened)
            {
                uint i = 0;
                lock (FtdiPort.threadLocker)
                {
                    this.ftdiDevice.Write(data, length, ref i);
                }
                Thread.Sleep(1);
                if (i != (int)length)
                {
                    throw new BackendException("Couldnt send all data");
                }
            }
            else
            {
                throw new BackendException("Port is closed");
            }
        }

    Object used to synchronize the two threads:

        static Object threadLocker = new Object();

    Method that starts the receiver's thread:

        private void startReceiver()
        {
            if (this.DataReceivedHandler == null)
            {
                return;
            }
            if (this.IsOpened == false)
            {
                throw new BackendException("Trying to start listening for raw data while disconnected");
            }
            this.receiverThread = new Thread(this.receiverLoop);
            //this.receiverThread.Name = "protocolListener";
            this.receiverThread.IsBackground = true;
            this.receiverThread.Start();
        }

    The ftdiDevice.Write function doesn't hang up if I comment out the following line:

        ftStatus = ftdiDevice.Read(readBytes, this.ReadBufferSize, ref numBytesRead);

    Read the article

  • Which MarkDown (WMD) javascript editor should I use?

    - by Edan Maor
    Background: I'm working on an application which requires user-entered content, and I've decided to use a StackOverflow-style MarkDown editor. After researching this topic for the last few days, I realize there are numerous forks of the base WMD editor, some with a few basic enhancements and some with serious differences from the StackOverflow one. Since this will be the heart of the application, I'd like to start with the best code base I can. I'd be happy if anyone can recommend which one of the many solutions out there best fits my needs. Below are my requirements, plus what I've managed to find already. I'm hoping this question will help me decide which version to go with, and maybe help me discover a port out there that's an even better fit for my needs.

    The requirements for my project:

      - Live preview
      - Multiple editors on the same page (I don't know how many in advance, since the user can dynamically add another editing box)
      - Ability to extend with extra buttons (I'd like a button to upload a picture, instead of just adding an img URL)
      - Ability to dynamically show/hide the edit box (and only see the preview box)
      - Not an absolute must, but I'd prefer to stick as close to StackOverflow's look and feel as possible, since it's well known
      - Don't know if this matters, but the backend is written in Django

    Editors I've looked at (here are a few of the code bases I've examined, with thoughts; obviously, I might be missing another solution out there):

      - The derobins version. From what I can tell, this is the official StackOverflow version. It seems like it doesn't support multiple editors on one page.
      - JQuery.MarkEdit. Looks very good, but is pretty different from the StackOverflow version.
      - MooWMD. Looks like the winner right now, but I'm a little concerned since it looks less active/hackable than MarkEdit.
      - The wmd-new version. Not sure; looks like an old codebase without much use.
      - The SocialSite branch. Seems like it's not for public use.

    Read the article

  • Fluent NHibernate/SQL Server 2008 insert query problem

    - by Mark
    Hi all, I'm new to Fluent NHibernate and I'm running into a problem. I have a mapping defined as follows:

        public PersonMapping()
        {
            Id(p => p.Id).GeneratedBy.HiLo("1000");
            Map(p => p.FirstName).Not.Nullable().Length(50);
            Map(p => p.MiddleInitial).Nullable().Length(1);
            Map(p => p.LastName).Not.Nullable().Length(50);
            Map(p => p.Suffix).Nullable().Length(3);
            Map(p => p.SSN).Nullable().Length(11);
            Map(p => p.BirthDate).Nullable();
            Map(p => p.CellPhone).Nullable().Length(12);
            Map(p => p.HomePhone).Nullable().Length(12);
            Map(p => p.WorkPhone).Nullable().Length(12);
            Map(p => p.OtherPhone).Nullable().Length(12);
            Map(p => p.EmailAddress).Nullable().Length(50);
            Map(p => p.DriversLicenseNumber).Nullable().Length(50);
            Component<Address>(p => p.CurrentAddress, m =>
            {
                m.Map(p => p.Line1, "Line1").Length(50);
                m.Map(p => p.Line2, "Line2").Length(50);
                m.Map(p => p.City, "City").Length(50);
                m.Map(p => p.State, "State").Length(50);
                m.Map(p => p.Zip, "Zip").Length(2);
            });
            Map(p => p.EyeColor).Nullable().Length(3);
            Map(p => p.HairColor).Nullable().Length(3);
            Map(p => p.Gender).Nullable().Length(1);
            Map(p => p.Height).Nullable();
            Map(p => p.Weight).Nullable();
            Map(p => p.Race).Nullable().Length(1);
            Map(p => p.SkinTone).Nullable().Length(3);
            HasMany(p => p.PriorAddresses).Cascade.All();
        }

        public PreviousAddressMapping()
        {
            Table("PriorAddress");
            Id(p => p.Id).GeneratedBy.HiLo("1000");
            Map(p => p.EndEffectiveDate).Not.Nullable();
            Component<Address>(p => p.Address, m =>
            {
                m.Map(p => p.Line1, "Line1").Length(50);
                m.Map(p => p.Line2, "Line2").Length(50);
                m.Map(p => p.City, "City").Length(50);
                m.Map(p => p.State, "State").Length(50);
                m.Map(p => p.Zip, "Zip").Length(2);
            });
        }

    My test is:

        [Test]
        public void can_correctly_map_Person_with_Addresses()
        {
            var myPerson = new Person("Jane", "", "Doe");
            var priorAddresses = new[]
            {
                new PreviousAddress(ObjectMother.GetAddress1(), DateTime.Parse("05/13/2010")),
                new PreviousAddress(ObjectMother.GetAddress2(), DateTime.Parse("05/20/2010"))
            };

            new PersistenceSpecification<Person>(Session)
                .CheckProperty(c => c.FirstName, myPerson.FirstName)
                .CheckProperty(c => c.LastName, myPerson.LastName)
                .CheckProperty(c => c.MiddleInitial, myPerson.MiddleInitial)
                .CheckList(c => c.PriorAddresses, priorAddresses)
                .VerifyTheMappings();
        }

    GetAddress1() (yeah, horrible name) has Line2 == null. The tables seem to be created correctly in SQL Server 2008, but the test fails with a SQLException: "String or binary data would be truncated." When I grab the SQL statement in SQL Profiler, I get:

        exec sp_executesql N'INSERT INTO PriorAddress (Line1, Line2, City, State, Zip, EndEffectiveDate, Id)
        VALUES (@p0, @p1, @p2, @p3, @p4, @p5, @p6)',
        N'@p0 nvarchar(18),@p1 nvarchar(4000),@p2 nvarchar(10),@p3 nvarchar(2),@p4 nvarchar(5),@p5 datetime,@p6 int',
        @p0=N'6789 Somewhere Rd.',@p1=NULL,@p2=N'Hot Coffee',@p3=N'MS',@p4=N'09876',@p5='2010-05-13 00:00:00',@p6=1001

    Notice the @p1 parameter is being set to nvarchar(4000) and being passed a NULL value. Why is the parameter being set to nvarchar(4000)? How can I fix it? Thanks!

    Read the article

  • JSF 2.1 Spring 3.0 Integration

    - by danny.lesnik
    I'm trying to make a very simple Spring 3 + JSF 2.1 integration according to examples I googled on the web. So here is my code.

    My HTML, submitted to the actionController.actionSubmitted() method:

        <h:form>
            <h:message for="textPanel" style="color:red;" />
            <h:panelGrid columns="3" rows="5" id="textPanel">
                <!-- all my bean properties mapped to HTML code -->
            </h:panelGrid>
            <h:commandButton value="Submit" action="#{actionController.actionSubmitted}" />
        </h:form>

    Now the action controller itself:

        @ManagedBean(name="actionController")
        @SessionScoped
        public class ActionController implements Serializable {

            @ManagedProperty(value="#{user}")
            User user;

            @ManagedProperty(value="#{mailService}")
            MailService mailService;

            public void setMailService(MailService mailService) {
                this.mailService = mailService;
            }

            public void setUser(User user) {
                this.user = user;
            }

            private static final long serialVersionUID = 1L;

            public ActionController() {}

            public String actionSubmitted() {
                System.out.println(user.getEmail());
                mailService.sendUserMail(user);
                return "success";
            }
        }

    Now my Spring bean:

        public interface MailService {
            void sendUserMail(User user);
        }

        public class MailServiceImpl implements MailService {
            @Override
            public void sendUserMail(User user) {
                System.out.println("Mail to " + user.getEmail() + " sent.");
            }
        }

    This is my web.xml:

        <listener>
            <listener-class>
                org.springframework.web.context.ContextLoaderListener
            </listener-class>
        </listener>
        <listener>
            <listener-class>
                org.springframework.web.context.request.RequestContextListener
            </listener-class>
        </listener>

        <!-- Welcome page -->
        <welcome-file-list>
            <welcome-file>index.xhtml</welcome-file>
        </welcome-file-list>

        <!-- JSF mapping -->
        <servlet>
            <servlet-name>Faces Servlet</servlet-name>
            <servlet-class>javax.faces.webapp.FacesServlet</servlet-class>
            <load-on-startup>1</load-on-startup>
        </servlet>

    My applicationContext.xml:

        <beans xmlns="http://www.springframework.org/schema/beans"
               xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
               xsi:schemaLocation="http://www.springframework.org/schema/beans
                                   http://www.springframework.org/schema/beans/spring-beans-3.0.xsd">
            <bean id="mailService" class="com.vanilla.jsf.services.MailServiceImpl">
            </bean>
        </beans>

    My faces-config.xml is the following:

        <application>
            <el-resolver>
                org.springframework.web.jsf.el.SpringBeanFacesELResolver
            </el-resolver>
            <message-bundle>
                com.vanilla.jsf.validators.MyMessages
            </message-bundle>
        </application>

        <managed-bean>
            <managed-bean-name>actionController</managed-bean-name>
            <managed-bean-class>com.vanilla.jsf.controllers.ActionController</managed-bean-class>
            <managed-bean-scope>session</managed-bean-scope>
            <managed-property>
                <property-name>mailService</property-name>
                <value>#{mailService}</value>
            </managed-property>
        </managed-bean>

        <navigation-rule>
            <from-view-id>index.xhtml</from-view-id>
            <navigation-case>
                <from-action>#{actionController.actionSubmitted}</from-action>
                <from-outcome>success</from-outcome>
                <to-view-id>submitted.xhtml</to-view-id>
                <redirect />
            </navigation-case>
        </navigation-rule>

    My problem is that I'm getting a NullPointerException because my mailService Spring bean is null:

        public String actionSubmitted() {
            System.out.println(user.getEmail());
            // mailService is null, getting NullPointerException
            mailService.sendUserMail(user);
            return "success";
        }
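
    One plausible cause, stated as an assumption: the bean is registered twice - once through the @ManagedBean annotation and once through the <managed-bean> entry in faces-config.xml - and the annotated instance the page actually binds to never receives the mailService injection. A minimal sketch of the annotation-only variant (drop the <managed-bean> block, keep the el-resolver):

        @ManagedBean(name = "actionController")
        @SessionScoped
        public class ActionController implements Serializable {

            // Resolved via SpringBeanFacesELResolver, so #{mailService} is looked
            // up in the Spring context; the setter is required for injection.
            @ManagedProperty(value = "#{mailService}")
            private MailService mailService;

            public void setMailService(MailService mailService) {
                this.mailService = mailService;
            }
        }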

    Read the article

  • sendmail and MX records when mail server is not on web host

    - by Jim Nelson
    This is a problem I'm sure is easy to fix, but I've been banging my head on it all day. I'm developing a new web site for a client. The web site resides at (this is an example) website.com. I have a PHP form script to email visitors' requests to [email protected]. When I coded this on a staging server on a different domain, all worked fine. When I moved it to website.com, the mail messages never arrived. The web server is on a virtual host with a major ISP. Here's what I've learned since then:

      - My client's mail server is Microsoft Exchange, on a box physically in their office.
      - Whenever someone in the outside world emails [email protected], the mail arrives. But if the web server sends to the same email address, it fails every time.
      - This is not a PHP problem. I secure-shell in to the web server and have tested this both with sendmail and the UNIX mail application. I've also tested it by emailing various email accounts from the shell. I can email myself, for example - just nobody at the website.com domain.

    In short, when I'm logged in to website.com, mail to [email protected], [email protected], and [email protected] all fail. All other addresses work fine. What I've discovered is that those dropped emails are routed to the web server's "catchall" account, where they sit in its inbox.

    I've done an MX lookup on website.com. The MX record points to mailsec.website.com. I can telnet to mailsec.website.com port 25 and see the SMTP server. It appears to me that website.com isn't doing an MX lookup when it's sending mail to [email protected]. My theory is that it recognizes the domain as local, sees that there's no "requests" user account to deliver it to, and drops the mail into the catchall account. What I want is to force sendmail to do the MX lookup and send the message on to the Exchange server.

    I'm at wit's end here. I can't figure out how to do this. For that matter, I may be way off base here and have misdiagnosed this entirely. Internet mail and MX has always seemed a black art to me, and my ignorance is certainly showing in this question.
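
    For what it's worth, the theory above matches a common sendmail misconfiguration, so here is a hedged sketch (the file path assumes a stock sendmail layout; a hosting control panel may manage the equivalent list under another name): if website.com is listed as a locally delivered domain, sendmail never consults the MX records for it.

        # /etc/mail/local-host-names (sendmail's list of domains it delivers locally)
        # Removing website.com here makes sendmail treat the domain as remote and
        # perform the MX lookup, handing the message on to mailsec.website.com.
        #
        #   website.com    <- delete or comment out this entry, then restart sendmail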

    Read the article

  • Editing/Updating one of the results in a search query.

    - by eggman20
    Hi guys. I'm creating a page that searches for an item and then lets the user edit/update it. I was able to do this when it returns just one result, but when it gives me multiple results I can only edit the very last item. Below is my code:

        .......
        $dj = $_POST[djnum];
        $sql = "SELECT * From dj WHERE datajack LIKE '$dj%'";
        $result = mysql_query($sql);
        //more code in here//
        while ($info = mysql_fetch_array($result))
        {
            // display the result
            echo "<form action=\"dj_update.php\" method=\"POST\"><input type=\"hidden\" name=\"djnumber\" value=\"".$info['datajack']."\">";
            echo "<tr><td>DJ ".$info['datajack']."</td>";
            echo "<td>".$info['building']."&nbsp;</td>";
            echo "<td>Rm ".$info['room']."&nbsp;</td>";
            echo "<td>".$info['switch']."&nbsp;</td>";
            echo "<td>".$info['port']."&nbsp;</td>";
            echo "<td>".$info['notes']."&nbsp;</td>";
            echo "<td style=\"text-align:center;\"><input type=\"Submit\" value=\"Edit\" ></td></tr>";
        }
        // more code here //

    Then this is the screenshot of the result (image omitted). The idea is that the user should be able to click on "Edit" and then edit/update that particular item. But whichever Edit button I click, I can only edit the last item. What am I missing here? Is there an easier way to do this? Thanks guys, and happy new year!

    Read the article

  • RNDC fails: permission denied

    - by pawz
    Named works great. It creates a pid file in /var/run/named/named.pid as expected. It is listening on port 953, as shown by the log:

        Apr 20 14:42:38 guchuko named[9115]: command channel listening on 127.0.0.1#953

    But whenever I try to run "rndc reload" I get:

        rndc: 'reload' failed: permission denied

    What file is it being denied permission to? It doesn't log anything, so I don't know why it's not working. I've compiled BIND 9.4-ESV-R1 from source and patched it with the MySQL mod.

    My named.conf:

        options {
            directory "/var/bind";
            forwarders { 203.82.213.101; 203.188.144.1; };
            listen-on-v6 { none; };
            listen-on { 127.0.0.1; 192.168.0.6; };
            pid-file "/var/run/named/named.pid";
        };

        logging {
            channel simple_log {
                file "/var/log/named.log" versions 3 size 5m;
                severity debug 5;
                print-time yes;
                print-severity yes;
                print-category yes;
            };
            category default { simple_log; };
        };

        zone "." IN {
            type hint;
            file "named.ca";
        };

        zone "localhost" IN {
            type master;
            file "pri/localhost.zone";
            allow-update { none; };
            notify no;
        };

        include "/etc/rndc.key"

    My rndc.conf:

        options {
            default-server 127.0.0.1;
            default-key "rndc-key";
        };

        server 127.0.0.1 {
            key "rndc-key";
        };

        include "/etc/rndc.key";

    My rndc.key:

        key "rndc-key" {
            algorithm hmac-md5;
            secret "XFc8C+yCLK0mIheTSBj41g==";
        };
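
    A hedged observation rather than a verified diagnosis: this named.conf includes the key but has no controls statement authorizing it, and without one rndc's commands can be rejected as permission denied. A sketch of the stanza to add (the key name must match /etc/rndc.key exactly; note also that the include line above is missing its trailing semicolon):

        controls {
            inet 127.0.0.1 port 953
                allow { 127.0.0.1; } keys { "rndc-key"; };
        };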

    Read the article

  • How to generate and encode (for use in a GA) random strict binary rooted trees with N leaves?

    - by Peter Simon
    First, I am an engineer, not a computer scientist, so I apologize in advance for any misuse of nomenclature and general ignorance of CS background. Here is the motivational background for my question.

    I am contemplating writing a genetic algorithm optimizer to aid in designing a power divider network (also called a beam forming network, or BFN for short). The BFN is intended to distribute power to each of N radiating elements in an array of antennas. The fraction of the total input power to be delivered to each radiating element has been specified. Topologically speaking, a BFN is a strictly binary, rooted tree. Each of the (N-1) interior nodes of the tree represents the input port of an unequal, binary power splitter. The N leaves of the tree are the power divider outputs. Given a particular power divider topology, one is still free to map the power divider outputs to the array inputs in an arbitrary order. There are N! such permutations of the outputs. There are several considerations in choosing the desired ordering:

      1. The power ratio for each binary coupler should be within a specified range of values.
      2. The ordering should be chosen to simplify the mechanical routing of the transmission lines connecting the power divider.

    The number of outputs N of the BFN may range from, say, 6 to 22. I have already written a genetic algorithm optimizer that, given a particular BFN topology and desired array input power distribution, will search through the N! permutations of the BFN outputs to generate a design with compliant power ratios and good mechanical routing.

    I would now like to generalize my program to automatically generate and search through the space of possible BFN topologies. As I understand it, for N outputs (leaves of the binary tree), there are $C_{N-1}$ different topologies that can be constructed, where $C_N$ is the Nth Catalan number. I would like to know how to encode an arbitrary tree having N leaves in a way that is consistent with a chromosomal description for use in a genetic algorithm. Also associated with this is the need to generate random instances for filling the initial population, and to implement crossover and mutation operators for this type of chromosome. Any suggestions will be welcome. Please minimize the amount of CS lingo in your reply, since I am not likely to be acquainted with it. Thanks in advance, Peter
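
    As a starting point, here is a minimal, hedged sketch in Java (all names invented for illustration) of one common approach: grow a random strict binary tree by repeatedly splitting a randomly chosen leaf, then serialize it preorder as a bitstring (1 = splitter, 0 = output), which yields a chromosome of exactly 2N-1 bits. Note that the leaf-splitting procedure is simple but does not sample the $C_{N-1}$ shapes uniformly.

        import java.util.ArrayList;
        import java.util.List;
        import java.util.Random;

        // Sketch: each node is an index; left/right hold child indices, -1 for leaves.
        class RandomBfnTree {
            final List<Integer> left = new ArrayList<>();
            final List<Integer> right = new ArrayList<>();
            final Random rng = new Random();

            int newNode() { left.add(-1); right.add(-1); return left.size() - 1; }

            // Grow a strict binary tree with nLeaves leaves by splitting random leaves.
            int build(int nLeaves) {
                int root = newNode();
                List<Integer> leaves = new ArrayList<>();
                leaves.add(root);
                for (int i = 1; i < nLeaves; i++) {
                    int victim = leaves.remove(rng.nextInt(leaves.size()));
                    int l = newNode(), r = newNode();
                    left.set(victim, l);
                    right.set(victim, r);
                    leaves.add(l);
                    leaves.add(r);
                }
                return root;
            }

            // Preorder bitstring: '1' for a splitter, '0' for an output leaf.
            // Every strict binary tree with N leaves maps to a unique string of
            // 2N-1 bits, and the string decodes back to the tree the same way.
            void encode(int node, StringBuilder out) {
                if (left.get(node) < 0) { out.append('0'); return; }
                out.append('1');
                encode(left.get(node), out);
                encode(right.get(node), out);
            }
        }

        // Usage sketch:
        //   RandomBfnTree t = new RandomBfnTree();
        //   int root = t.build(8);
        //   StringBuilder chromosome = new StringBuilder();
        //   t.encode(root, chromosome);   // 15 bits for N = 8

    Because a complete subtree always corresponds to a balanced segment of the bitstring, crossover can swap such segments between two parents and mutation can regrow one segment, both of which keep offspring decodable.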

    Read the article

  • closing MQ connection

    - by OakvilleWork
    Good afternoon. I wrote a project to get park-queue info from the IBM MQ, but it has been producing an error when attempting to close the connection. It is written in Java. Under Application in Event Viewer on the MQ machine it displays two errors. The first is:

        Channel program ended abnormally. Channel program 'system.def.surconn' ended abnormally.
        Look at previous error messages for channel program 'system.def.surconn' in the error
        files to determine the cause of the failure.

    The other message states:

        Error on receive from host rnanaj (10.10.12.34). An error occurred receiving data from
        rnanaj (10.10.12.34) over tcp/ip. This may be due to a communications failure. The
        return code from tcp/ip recv() call was 10054 (X'2746'). Record these values.

    This must be something in how I connect or close the connection. Below is my code to connect and close - any ideas?

    Connect:

        _logger.info("Start");
        File outputFile = new File(System.getProperty("PROJECT_HOME"),
                "run/" + this.getClass().getSimpleName() + "." + System.getProperty("qmgr") + ".txt");
        FileUtils.mkdirs(outputFile.getParentFile());
        Connection jmsConn = null;
        Session jmsSession = null;
        QueueBrowser queueBrowser = null;
        BufferedWriter commandsBw = null;
        try {
            // get queue connection
            MQConnectionFactory MQConn = new MQConnectionFactory();
            MQConn.setHostName(System.getProperty("host"));
            MQConn.setPort(Integer.valueOf(System.getProperty("port")));
            MQConn.setQueueManager(System.getProperty("qmgr"));
            MQConn.setChannel("SYSTEM.DEF.SVRCONN");
            MQConn.setTransportType(JMSC.MQJMS_TP_CLIENT_MQ_TCPIP);
            jmsConn = (Connection) MQConn.createConnection();
            jmsSession = jmsConn.createSession(false, Session.AUTO_ACKNOWLEDGE);
            Queue jmsQueue = jmsSession.createQueue("PARK");

            // browse thru messages
            queueBrowser = jmsSession.createBrowser(jmsQueue);
            Enumeration msgEnum = queueBrowser.getEnumeration();
            commandsBw = new BufferedWriter(new FileWriter(outputFile));
            String line = "DateTime\tMsgID\tOrigMsgID\tCorrelationID\tComputerName\tSubsystem\tDispatcherName\tProcessor\tJobID\tErrorMsg";
            commandsBw.write(line);
            commandsBw.newLine();
            while (msgEnum.hasMoreElements()) {
                Message message = (Message) msgEnum.nextElement();
                line = dateFormatter.format(new Date(message.getJMSTimestamp())) + "\t"
                        + message.getJMSMessageID() + "\t"
                        + message.getStringProperty("pkd_orig_jms_msg_id") + "\t"
                        + message.getJMSCorrelationID() + "\t"
                        + message.getStringProperty("pkd_computer_name") + "\t"
                        + message.getStringProperty("pkd_subsystem") + "\t"
                        + message.getStringProperty("pkd_dispatcher_name") + "\t"
                        + message.getStringProperty("pkd_processor") + "\t"
                        + message.getStringProperty("pkd_job_id") + "\t"
                        + message.getStringProperty("pkd_sysex_msg");
                _logger.info(line);
                commandsBw.write(line);
                commandsBw.newLine();
            }
        }

    Close:

        finally {
            IO.close(commandsBw);
            if (queueBrowser != null) { try { queueBrowser.close(); } catch (Exception ignore) {} }
            if (jmsSession != null)   { try { jmsSession.close(); }   catch (Exception ignore) {} }
            if (jmsConn != null)      { try { jmsConn.stop(); }       catch (Exception ignore) {} }
        }
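
    A hedged guess at the cause: the teardown calls jmsConn.stop(), which only pauses message delivery, so the TCP channel to the queue manager is never closed and dies abnormally when the JVM exits. A sketch of the adjusted finally block:

        finally {
            IO.close(commandsBw);
            if (queueBrowser != null) { try { queueBrowser.close(); } catch (Exception ignore) {} }
            if (jmsSession != null)   { try { jmsSession.close(); }   catch (Exception ignore) {} }
            // close(), not stop(): close releases the channel to the queue
            // manager cleanly instead of leaving it to be torn down by the OS.
            if (jmsConn != null)      { try { jmsConn.close(); }      catch (Exception ignore) {} }
        }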

    Read the article

  • How to change the extension of a processed xml file (using eXist & cocoon)

    - by Carsten C.
    Hi all, I'm really new to this whole web stuff, so please be nice if I missed something important to post.

    The short version: is there a possibility to change the name of a processed file (eXist-DB) after serialization? Here is my case. I make the following request to my eXist-db:

        http://localhost:8080/exist/cocoon/db/caos/test.xml

    and after serialization I want the following (the XSLT is working fine):

        http://localhost:8080/exist/cocoon/db/caos/test.html

    I'm using the following sitemap.xmap with cocoon (hoping this is what is responsible for it):

        <map:match pattern="db/caos/**">
          <!-- if we have an xpath query -->
          <map:match pattern="xpath" type="request-parameter">
            <map:generate src="xmldb:exist:///db/caos/{../1}/#{1}"/>
            <map:act type="request">
              <map:parameter name="parameters" value="true"/>
              <map:parameter name="default.howmany" value="1000"/>
              <map:parameter name="default.start" value="1"/>
              <map:transform type="filter">
                <map:parameter name="element-name" value="result"/>
                <map:parameter name="count" value="{howmany}"/>
                <map:parameter name="blocknr" value="{start}"/>
              </map:transform>
              <map:transform src=".snip./webapp/stylesheets/db2html.xsl">
                <map:parameter name="block" value="{start}"/>
                <map:parameter name="collection" value="{../../1}"/>
              </map:transform>
            </map:act>
            <map:serialize type="html" encoding="UTF-8"/>
          </map:match>

          <!-- if the whole file will be displayed -->
          <map:generate src="xmldb:exist:/db/caos/{1}"/>
          <map:transform src="..snip../stylesheets/caos2soac.xsl">
            <map:parameter name="collection" value="{1}"/>
          </map:transform>
          <map:transform type="encodeURL"/>
          <map:serialize type="html" encoding="UTF-8"/>
        </map:match>

    So my question is: how do I change the extension of test.xml to test.html after processing the XML file?

    Background: I'm generating some information out of some XML DBs, and this info will be displayed in HTML (which is working), but I want to change some entries later, after I've generated the HTML page. To make this comfortable, I want to use jQuery and Jeditable, but the code does not work on the XML files. Saving the generated HTML is not an option.

    TIA for any suggestions and/or help, CC

    Edit: After reading all over - could it be that the extension is irrelevant and that this is only a problem of port 8080? I'm confused...
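
    One direction worth trying, given only as a hedged sketch built on the sitemap above (verify the wildcard syntax against your cocoon version): match the .html request path explicitly and map it back to the stored .xml resource, so the browser sees test.html while eXist still serves test.xml.

        <map:match pattern="db/caos/*.html">
          <map:generate src="xmldb:exist:/db/caos/{1}.xml"/>
          <map:transform src="..snip../stylesheets/caos2soac.xsl">
            <map:parameter name="collection" value="{1}.xml"/>
          </map:transform>
          <map:serialize type="html" encoding="UTF-8"/>
        </map:match>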

    Read the article

  • Dependency Injection: I don't get where to start!

    - by Andy
    I have read several articles about Dependency Injection, and I can see the benefits, especially when it comes to unit testing. The units can be loosely coupled, and mocking of dependencies becomes possible. The trouble is - I just don't get where to start.

    Consider the snippet below of (much edited for the purpose of this post) code that I have. I am instantiating a Plc object from the main form, and passing in a communications mode via the Connect method. In its present form it becomes hard to test, because I can't isolate the Plc from the CommsChannel to unit test it. (Can I?) The class depends on using a CommsChannel object, but I am only passing in a mode that is used to create this channel within the Plc itself. To use dependency injection, I should really pass an already created CommsChannel (via an 'ICommsChannel' interface perhaps) to the Connect method, or maybe via the Plc constructor. Is that right? But then that would mean creating the CommsChannel in my main form first, and this doesn't seem right either, because it feels like everything will come back to the base layer of the main form, where everything begins.

    Somehow it feels like I am missing a crucial piece of the puzzle. Where do you start? You have to create an instance of something somewhere, but I'm struggling to understand where that should be.

        public class Plc
        {
            public bool Connect(CommsMode commsMode)
            {
                bool success = false;
                // Create new comms channel.
                this._commsChannel = this.GetCommsChannel(commsMode);
                // Attempt connection
                success = this._commsChannel.Connect();
                return this._connected;
            }

            private CommsChannel GetCommsChannel(CommsMode mode)
            {
                CommsChannel channel;
                switch (mode)
                {
                    case CommsMode.RS232:
                        channel = new SerialCommsChannel(
                            SerialCommsSettings.Default.ComPort,
                            SerialCommsSettings.Default.BaudRate,
                            SerialCommsSettings.Default.DataBits,
                            SerialCommsSettings.Default.Parity,
                            SerialCommsSettings.Default.StopBits);
                        break;
                    case CommsMode.Tcp:
                        channel = new TcpCommsChannel(
                            TCPCommsSettings.Default.IP_Address,
                            TCPCommsSettings.Default.Port);
                        break;
                    default:
                        // Throw unknown comms channel exception.
                }
                return channel;
            }
        }
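
    The missing piece has a name: the composition root. The application entry point is the one place that is allowed to know concrete types and wire the object graph; everything below it just receives interfaces. A minimal sketch (written in Java for illustration with hypothetical names mirroring the question's classes - the shape is identical in C#):

        // Hypothetical names mirroring the question's classes.
        interface CommsChannel {
            boolean connect();
        }

        class SerialCommsChannel implements CommsChannel {
            public boolean connect() { return true; } // stub for illustration
        }

        class Plc {
            private final CommsChannel channel;

            // The channel arrives ready-made; Plc no longer builds it.
            Plc(CommsChannel channel) { this.channel = channel; }

            boolean connect() { return channel.connect(); }
        }

        public class Main {
            public static void main(String[] args) {
                // Composition root: the only place that knows concrete types.
                CommsChannel channel = new SerialCommsChannel();
                Plc plc = new Plc(channel);
                plc.connect();
                // In a unit test: new Plc(new FakeCommsChannel()) - no hardware needed.
            }
        }

    The feeling that "everything comes back to the main form" is expected: resolving the graph at the outermost layer is precisely the point, and a DI container only automates that wiring.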

    Read the article

  • XmlHttpRequest bug?

    - by valdo
    Hello all. I'm writing a program that, among other things, needs to download a file given its URL. I'm too lazy to implement the HTTP/HTTPS protocols manually, so I needed some library/object/function that'll do the job.

    Critical requirement: the download must be asynchronous. That is, the thread that issued the download must be able to do something else "while" downloading the file, plus the download must be able to be aborted at any time without any barbaric side effects (such as an internal call to TerminateThread).

    Nice-to-have requirements:

      - Should be able to download the file "into memory". Meaning - read the contents of the file as they arrive, not necessarily save them into some "file system" file.
      - It'd be nice to have some convenient Win32 progress notification mechanism (waitable event, semaphore, completion port, etc.), rather than just periodically polling the download status.

    I've chosen the XmlHttpRequest COM object to do the work. It seemed to work well enough, plus it supports asynchronous mode. However, I noticed that after some period it just stops working. That is, after several successful file downloads it stops downloading anything. I periodically poll it to get its status; it reports "in progress", but nothing actually happens and there's no network activity. Moreover, when the same process creates another instance of the XmlHttpRequest object to perform new downloads - the effect is the same. The object reports "in progress", whereas it doesn't even try to connect to the server (according to network sniffers and system TCP state). The only way to make this object work again is to restart the process. This makes me suspect that there's a sort of a bug (sorry, I meant undocumented feature) in the object. Also, it's not a bug at the level of an individual object, since the problem persists when the object is destroyed and another one is created. It's probably some global state of the DLL that implements this object.

    Does anyone know something about this? Is this a known bug? I'm pretty sure there's no chance that I have another bug in my code because of which it merely seems to me that the bug is in XmlHttpRequest. I've done enough tests and spent enough time with the debugger to conclude, beyond reasonable doubt, that it's just the object that stops working.

    BTW, while the object should be working, I do all the waiting via MsgWaitXXXX API calls. So if this object needs the message loop to work properly (for instance, it may create a hidden notification window and bind it to a socket via WSAAsyncSelect) - I give it the opportunity.

    Read the article

  • NHibernate multilevel hierarchy save error?

    - by nisbus
    Hi, I have a database with a 6-level hierarchy and a domain model on top of that. Something like this:

        Category
          - SubCategory
              - Container
                  - DataDescription | Meta data
                      - Data

    The mapping I'm using follows this pattern:

        <class name="Category, Sample" table="Categories">
          <id name="Id" column="Id" type="System.Int32" unsaved-value="0">
            <generator class="native"/>
          </id>
          <property name="Name" access="property" type="String" column="Name"/>
          <property name="Metadata" access="property" type="String" column="Metadata"/>
          <bag name="SubCategories" cascade="save-update" lazy="true" inverse="true">
            <key column="Id" foreign-key="category_subCategory_fk"/>
            <one-to-many class="SubCategory, Sample" />
          </bag>
        </class>

        <class name="SubCategory, Sample" table="SubCategories">
          <id name="Id" column="Id" type="System.Int32" unsaved-value="0">
            <generator class="native"/>
          </id>
          <many-to-one name="Category" class="Category, Sample" foreign-key="subCat_category_fk"/>
          <property name="Name" access="property" type="String"/>
          <property name="Metadata" access="property" type="String"/>
          <bag name="Containers" inverse="true" cascade="save-update" lazy="true">
            <key column="Id" foreign-key="subCat_container_fk" />
            <one-to-many class="Container, Sample" />
          </bag>
        </class>

        <class name="Container, Sample" table="Containers">
          <id name="Id" column="Id" type="System.Int32" unsaved-value="0">
            <generator class="assigned"/>
          </id>
          <many-to-one name="SubCategory" class="SubCategory,Sample" foreign-key="container_subCat_fk"/>
          <property name="Name" access="property" type="String" column="Name"/>
          <bag name="DataDescription" cascade="all" lazy="true" inverse="true">
            <key column="Id" foreign-key="container_ DataDescription_fk"/>
            <one-to-many class="DataDescription, Sample" />
          </bag>
          <bag name="MetaData" cascade="all" lazy="true" inverse="true">
            <key column="Id" foreign-key="container_metadata_cat_fk"/>
            <one-to-many class="MetaData, Sample" />
          </bag>
        </class>

    For some reason, when I try to save the category (with the subcategory, container etc. attached) I get a foreign key violation from the database. The code is something like this (pseudo):

        var category = new Category();
        var subCategory = new SubCategory();
        var container = new Container();
        var dataDescription = new DataDescription();
        var metaData = new MetaData();

        category.AddSubCategory(subCategory);
        subCategory.AddContainer(container);
        container.AddDataDescription(dataDescription);
        container.AddMetaData(metaData);

        Session.Save(category);

    Here is the log from this test:

        DEBUG NHibernate.SQL - INSERT INTO Categories (Name, Metadata) VALUES (@p0, @p1); select SCOPE_IDENTITY();
            @p0 = 'Unit test', @p1 = 'unit test'
        DEBUG NHibernate.SQL - INSERT INTO SubCategories (Category, Name, Metadata) VALUES (@p0, @p1, @p2); select SCOPE_IDENTITY();
            @p0 = '1', @p1 = 'Unit test', @p2 = 'unit test'
        DEBUG NHibernate.SQL - INSERT INTO Containers (SubCategory, Name, Frequency, Scale, Measurement, Currency, Metadata, Id) VALUES (@p0, @p1, @p2, @p3, @p4, @p5, @p6, @p7);
            @p0 = '1', @p1 = 'Unit test', @p2 = '15', @p3 = '1', @p4 = '1', @p5 = '1', @p6 = 'unit test', @p7 = '0'
        ERROR NHibernate.Util.ADOExceptionReporter - The INSERT statement conflicted with the FOREIGN KEY constraint "subCat_container_fk". The conflict occurred in database "Sample", table "dbo.SubCategories", column 'Id'.

    The methods for adding items to objects are always of this form:

        public void AddSubCategory(ISubCategory subCategory)
        {
            subCategory.Category = this;
            SubCategories.Add(subCategory);
        }

    What am I missing?? Thanks, nisbus
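
    One thing that stands out, offered as an assumption to verify: in every <bag>, the <key> column is declared as "Id", but the key column must name the child table's foreign-key column (the one the child's many-to-one maps), not the child's primary key. A sketch of the corrected Containers bag (the FK column name "SubCategory" is taken from the insert statement in the log above):

        <bag name="Containers" inverse="true" cascade="save-update" lazy="true">
          <!-- "SubCategory" is the FK column in the Containers table,
               matching the many-to-one on the child side -->
          <key column="SubCategory" foreign-key="subCat_container_fk" />
          <one-to-many class="Container, Sample" />
        </bag>

    The Container id generator being "assigned" while the test never assigns an id (the log shows Id = '0') is also worth a second look.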

    Read the article

  • How to compare memory bits in C++?

    - by Trunet
    Hi, I need help with a memory bit comparison function. I bought a LED matrix here with 4 x HT1632C chips and I'm using it on my Arduino Mega 2560. There's no code available for this chipset (it's not the same as the HT1632), so I'm writing my own.

    I have a plot function that gets x,y coordinates and a color, and that pixel turns on. That much is working perfectly. But I need more performance from my display, so I tried to make a shadowRam variable that is a "copy" of my device memory. Before I plot anything on the display, it checks shadowRam to see if it's really necessary to change that pixel. When I enable this (getShadowRam) in the plot function, my display shows some - just SOME (like 3 or 4 on the entire display) - ghost pixels (pixels that are not supposed to be turned on). If I just comment out the prev_color ifs in my plot function, it works perfectly. Also, I'm clearing my shadowRam array, setting the whole matrix to zero.

    Variables:

        #define BLACK  0
        #define GREEN  1
        #define RED    2
        #define ORANGE 3
        #define CHIP_MAX 8

        byte shadowRam[63][CHIP_MAX-1] = {0};

    getShadowRam function:

        byte HT1632C::getShadowRam(byte x, byte y)
        {
            byte addr, bitval, nChip;

            if (x >= 32) {
                nChip = 3 + x/16 + (y>7?2:0);
            } else {
                nChip = 1 + x/16 + (y>7?2:0);
            }

            bitval = 8 >> (y&3);
            x = x % 16;
            y = y % 8;
            addr = (x<<1) + (y>>2);

            if ((shadowRam[addr][nChip-1] & bitval) && (shadowRam[addr+32][nChip-1] & bitval)) {
                return ORANGE;
            } else if (shadowRam[addr][nChip-1] & bitval) {
                return GREEN;
            } else if (shadowRam[addr+32][nChip-1] & bitval) {
                return RED;
            } else {
                return BLACK;
            }
        }

    plot function:

        void HT1632C::plot(int x, int y, int color)
        {
            if (x<0 || x>X_MAX || y<0 || y>Y_MAX)
                return;
            if (color != BLACK && color != GREEN && color != RED && color != ORANGE)
                return;

            char addr, bitval;
            byte nChip;

            byte prev_color = HT1632C::getShadowRam(x, y);

            bitval = 8 >> (y&3);

            if (x >= 32) {
                nChip = 3 + x/16 + (y>7?2:0);
            } else {
                nChip = 1 + x/16 + (y>7?2:0);
            }

            x = x % 16;
            y = y % 8;
            addr = (x<<1) + (y>>2);

            switch (color)
            {
                case BLACK:
                    if (prev_color != BLACK) { // compare with memory to only set if pixel is another color
                        // clear the bit in both planes;
                        shadowRam[addr][nChip-1] &= ~bitval;
                        HT1632C::sendData(nChip, addr, shadowRam[addr][nChip-1]);
                        shadowRam[addr+32][nChip-1] &= ~bitval;
                        HT1632C::sendData(nChip, addr+32, shadowRam[addr+32][nChip-1]);
                    }
                    break;
                case GREEN:
                    if (prev_color != GREEN) { // compare with memory to only set if pixel is another color
                        // set the bit in the green plane and clear the bit in the red plane;
                        shadowRam[addr][nChip-1] |= bitval;
                        HT1632C::sendData(nChip, addr, shadowRam[addr][nChip-1]);
                        shadowRam[addr+32][nChip-1] &= ~bitval;
                        HT1632C::sendData(nChip, addr+32, shadowRam[addr+32][nChip-1]);
                    }
                    break;
                case RED:
                    if (prev_color != RED) { // compare with memory to only set if pixel is another color
                        // clear the bit in the green plane and set the bit in the red plane;
                        shadowRam[addr][nChip-1] &= ~bitval;
                        HT1632C::sendData(nChip, addr, shadowRam[addr][nChip-1]);
                        shadowRam[addr+32][nChip-1] |= bitval;
                        HT1632C::sendData(nChip, addr+32, shadowRam[addr+32][nChip-1]);
                    }
                    break;
                case ORANGE:
                    if (prev_color != ORANGE) { // compare with memory to only set if pixel is another color
                        // set the bit in both the green and red planes;
                        shadowRam[addr][nChip-1] |= bitval;
                        HT1632C::sendData(nChip, addr, shadowRam[addr][nChip-1]);
                        shadowRam[addr+32][nChip-1] |= bitval;
                        HT1632C::sendData(nChip, addr+32, shadowRam[addr+32][nChip-1]);
                    }
                    break;
            }
        }

    If it helps: the datasheet of the board I'm using. On page 7 it has the memory mapping I'm using. Also, I have a video of the display working.

    Read the article

  • Any socket programmers out there? How can I obtain the IPv4 address of the client?

    - by Dr Dork
    Hello! I'm prepping for a simple work project and am trying to familiarize myself with the basics of socket programming in a Unix dev environment. At this point, I have some basic server-side code set up to listen for incoming TCP connection requests from clients after the parent socket has been created and is set to listen...

        int sockfd, newfd;
        unsigned int len;
        socklen_t sin_size;
        char msg[] = "Test message sent";
        char buf[MAXLEN];
        int st, rv;
        struct addrinfo hints, *serverinfo, *p;
        struct sockaddr_storage client;
        char ip[INET6_ADDRSTRLEN];
        .
        . // parent socket creation and listen code omitted for simplicity
        .
        // wait for connection requests from clients
        while(1)
        {
            // Returns the socket ID and address of the client connecting to the socket
            if ((newfd = accept(sockfd, (struct sockaddr *)&client, &len)) == -1) {
                perror("Accept");
                exit(-1);
            }

            if ((rv = recv(newfd, buf, MAXLEN-1, 0)) == -1) {
                perror("Recv");
                exit(-1);
            }

            struct sockaddr_in *clientAddr = (struct sockaddr_in *) get_in_addr((struct sockaddr *)&client);
            inet_ntop(client.ss_family, clientAddr, ip, sizeof ip);
            printf("Receive from %s: query type is %s\n", ip, buf);

            if ((st = send(newfd, msg, strlen(msg), 0)) == -1) {
                perror("Send");
                exit(-1);
            }

            // ntohs is used to avoid big-endian and little-endian compatibility issues
            printf("Send %d byte to port %d\n", ntohs(clientAddr->sin_port));

            close(newfd);
        }

    I found the get_in_addr function online and placed it at the top of my code; I use it to obtain the IP address of the connecting client...

        // get sockaddr, IPv4 or IPv6:
        void *get_in_addr(struct sockaddr *sa)
        {
            if (sa->sa_family == AF_INET) {
                return &(((struct sockaddr_in*)sa)->sin_addr);
            }
            return &(((struct sockaddr_in6*)sa)->sin6_addr);
        }

    but the function always returns the IPv6 IP address, since that's what the sa_family property is set to. My question is: is the IPv4 IP address stored anywhere in the data I'm using and, if so, how can I access it? Thanks so much in advance for all your help!

    Read the article

  • Specifying a Single Request To Use Credentials with HttpClient

    - by jiduvah
    I am using OAuth2 in my Android project. The idea is to use a singleton HttpClient with a ThreadSafeClientConnManager. For a normal request to the server we construct an Authorization header and send that. The header is constructed from values received from the server. This works fine. However, every 15 minutes we must get new values from the server to construct the header. To receive these values I must set the credentials like so:

        client.getCredentialsProvider().setCredentials(
                new AuthScope(AuthScope.ANY_HOST, AuthScope.ANY_PORT),
                new UsernamePasswordCredentials(creds.clientId, creds.clientSecret));

    In order for this to work I must set up a new DefaultHttpClient. If I use the original singleton HttpClient I receive some errors. My question is: is it possible to set the credentials to be used only on this one request? I noticed that there is an AuthScope. The host and port would not be suitable for this, but maybe the realm would? I can't find anything that tells me what a realm is or how to use it.

        06-05 10:12:55.969: W/System.err(23843): org.apache.http.NoHttpResponseException: The target server failed to respond
        06-05 10:12:55.969: W/System.err(23843): at org.apache.http.impl.conn.DefaultResponseParser.parseHead(DefaultResponseParser.java:85)
        06-05 10:12:55.969: W/System.err(23843): at org.apache.http.impl.io.AbstractMessageParser.parse(AbstractMessageParser.java:174)
        06-05 10:12:55.969: W/System.err(23843): at org.apache.http.impl.AbstractHttpClientConnection.receiveResponseHeader(AbstractHttpClientConnection.java:179)
        06-05 10:12:55.969: W/System.err(23843): at org.apache.http.impl.conn.DefaultClientConnection.receiveResponseHeader(DefaultClientConnection.java:235)
        06-05 10:12:55.969: W/System.err(23843): at org.apache.http.impl.conn.AbstractClientConnAdapter.receiveResponseHeader(AbstractClientConnAdapter.java:259)
        06-05 10:12:55.975: W/System.err(23843): at org.apache.http.protocol.HttpRequestExecutor.doReceiveResponse(HttpRequestExecutor.java:279)
        06-05 10:12:55.975: W/System.err(23843): at org.apache.http.protocol.HttpRequestExecutor.execute(HttpRequestExecutor.java:121)
        06-05 10:12:55.975: W/System.err(23843): at org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:504)
        06-05 10:12:55.975: W/System.err(23843): at org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:555)
        06-05 10:12:55.975: W/System.err(23843): at org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:487)
        06-05 10:12:55.975: W/System.err(23843): at org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:465)

    So after more testing I have found where the problem lies. I want to configure a pooled connection manager like so:

        SchemeRegistry schemeRegistry = new SchemeRegistry();
        schemeRegistry.register(new Scheme("http", PlainSocketFactory.getSocketFactory(), 80));
        schemeRegistry.register(new Scheme("https", PlainSocketFactory.getSocketFactory(), 443));
        ClientConnectionManager conManager = new ThreadSafeClientConnManager(new BasicHttpParams(), schemeRegistry);
        DefaultHttpClient httpClient = new DefaultHttpClient();

    But when it's set up like this, I get the error above. If I use the normal default HttpClient like so

        DefaultHttpClient httpClient = new DefaultHttpClient();

    then it works fine. Any ideas?
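
    To the main question - a hedged sketch against HttpClient 4.x as shipped on Android (verify against your version): an execution context can carry its own CredentialsProvider, so the credentials apply to that one request while the singleton client's provider stays untouched. Note also that in the snippet above, conManager is built but never passed to the DefaultHttpClient constructor, which may explain why the pooled setup behaves differently.

        // Per-request credentials via a local execution context (sketch).
        CredentialsProvider credsProvider = new BasicCredentialsProvider();
        credsProvider.setCredentials(
                new AuthScope(AuthScope.ANY_HOST, AuthScope.ANY_PORT),
                new UsernamePasswordCredentials(creds.clientId, creds.clientSecret));

        HttpContext localContext = new BasicHttpContext();
        // ClientContext.CREDS_PROVIDER scopes the provider to this execution only.
        localContext.setAttribute(ClientContext.CREDS_PROVIDER, credsProvider);

        HttpResponse response = client.execute(request, localContext);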

    Read the article

  • UIScrollView Infinite Scrolling

    - by Ben Robinson
    I'm attempting to set up a scroll view with infinite (horizontal) scrolling. Scrolling forward is easy - I have implemented scrollViewDidScroll, and when the contentOffset gets near the end I make the scroll view's contentSize bigger and add more data into the space (I'll have to deal with the crippling effect this will have later!). My problem is scrolling back - the plan is to detect when I get near the beginning of the scroll view; when I do, I make the contentSize bigger, move the existing content along, add the new data at the beginning and then - importantly - adjust the contentOffset so the content under the viewport stays the same. This works perfectly if I scroll slowly (or enable paging), but if I go fast (not even very fast!) it goes mad! Here's the code:

    - (void)scrollViewDidScroll:(UIScrollView *)scrollView
    {
        float pageNumber = scrollView.contentOffset.x / 320;
        float pageCount = scrollView.contentSize.width / 320;

        if (pageNumber > pageCount - 4) {
            //Add 10 new pages to the end
            mainScrollView.contentSize = CGSizeMake(mainScrollView.contentSize.width + 3200, mainScrollView.contentSize.height);
            //add new data here at (320 * pageCount, 0);
        }

        //*** the problem is here - I use updatingScrollingContent to make sure it's only called once (for accurate testing!)
        if (pageNumber < 4 && !updatingScrollingContent) {
            updatingScrollingContent = YES;
            mainScrollView.contentSize = CGSizeMake(mainScrollView.contentSize.width + 3200, mainScrollView.contentSize.height);
            mainScrollView.contentOffset = CGPointMake(mainScrollView.contentOffset.x + 3200, 0);
            for (UIView *view in [mainContainerView subviews]) {
                view.frame = CGRectMake(view.frame.origin.x + 3200, view.frame.origin.y, view.frame.size.width, view.frame.size.height);
            }
            //add new data here at (0, 0);
        }

        //** MY CHECK!
        NSLog(@"%f", mainScrollView.contentOffset.x);
    }

    As the scrolling happens the log reads:

    1286.500000
    1285.500000
    1284.500000
    1283.500000
    1282.500000
    1281.500000
    1280.500000

    Then, when pageNumber < 4 (we're getting near the beginning):

    4479.500000
    4479.500000

    Great! - but the numbers should continue to go down in the 4,000s. Instead, the next log entries read:

    1278.000000
    1277.000000
    1276.500000
    1275.500000
    etc....

    continuing from where it left off! Just for the record, if scrolled slowly the log reads:

    1294.500000
    1290.000000
    1284.500000
    1280.500000
    4476.000000
    4476.000000
    4473.000000
    4470.000000
    4467.500000
    4464.000000
    4460.500000
    4457.500000
    etc....

    Any ideas? Thanks, Ben.

    Read the article

  • Why do sockets not die when the server dies? Why does a socket die while the server is alive?

    - by Roman
    I'm trying to play with sockets a bit. For that I wrote very simple "client" and "server" applications.

    Client:

    import java.net.*;

    public class client {
        public static void main(String[] args) throws Exception {
            InetAddress localhost = InetAddress.getLocalHost();
            System.out.println("before");
            Socket clientSideSocket = null;
            try {
                clientSideSocket = new Socket(localhost, 12345, localhost, 54321);
            } catch (ConnectException e) {
                System.out.println("Connection Refused");
            }
            System.out.println("after");
            if (clientSideSocket != null) {
                clientSideSocket.close();
            }
        }
    }

    Server:

    import java.net.*;

    public class server {
        public static void main(String[] args) throws Exception {
            ServerSocket listener = new ServerSocket(12345);
            while (true) {
                Socket serverSideSocket = listener.accept();
                System.out.println("A client-request is accepted.");
            }
        }
    }

    And I found behavior that I cannot explain:

    I start the server, then I start the client. The connection is established successfully (the client finishes and the server keeps running). Then I close the server and start it again within a second. After that I start the client, and it prints "Connection Refused". It seems as if the server "remembers" the old connection and refuses to open a second one. But I do not understand how that is possible, because I killed the previous server and started a new one!

    If I do not start the server immediately after the previous one was killed (I wait about 20 seconds), the server "forgets" the socket from the previous run and accepts the request from the client.

    I start the server and then the client. The connection is established (the server prints "A client-request is accepted"). Then I wait a minute and start the client again. And the server (which was running the whole time) accepts the request again! Why? The server should not accept a request from the same client IP and client port, but it does!
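
    Both observations look consistent with TCP's TIME_WAIT state rather than with the server "remembering" anything. The client binds a fixed local port (54321), so after a close the old connection's address/port 4-tuple lingers in TIME_WAIT for on the order of a minute; a quick reconnect collides with that remnant, and once it has expired the very same client IP and port may legitimately open a new connection, which is also why the server happily accepts the later request. The sketch below is a hedged rewrite of the server under that diagnosis: it creates the socket unbound so SO_REUSEADDR can be enabled before bind().

    import java.net.InetSocketAddress;
    import java.net.ServerSocket;
    import java.net.Socket;

    public class ReusableServer {
        public static void main(String[] args) throws Exception {
            // Create the socket unbound so the option can be set before bind().
            ServerSocket listener = new ServerSocket();
            listener.setReuseAddress(true); // allow rebinding over TIME_WAIT remnants
            listener.bind(new InetSocketAddress(12345));
            while (true) {
                Socket serverSideSocket = listener.accept();
                System.out.println("A client-request is accepted.");
                serverSideSocket.close(); // release the per-connection socket
            }
        }
    }

    On the client side, the simplest fix is to drop the fixed source port, i.e. use new Socket(localhost, 12345), so the OS picks a fresh ephemeral port on every run.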

    Read the article

  • Bluetooth service problem

    - by hara
    Hi, I need to create a custom Bluetooth service and I have to develop it using C++. I read a lot of examples but I didn't succeed in publishing a new service with a custom UUID. I need to specify a UUID in order to be able to connect to the service from an Android app. This is what I wrote (WSADATA and the second SOCKADDR_BTH were missing from the original snippet and have been added so it compiles):

    GUID service_UUID = { /* 00000003-0000-1000-8000-00805F9B34FB */
        0x00000003, 0x0000, 0x1000,
        {0x80, 0x00, 0x00, 0x80, 0x5F, 0x9B, 0x34, 0xFB} };

    WSADATA wsd;
    SOCKET s, s2;
    SOCKADDR_BTH sab, sab2;

    if (WSAStartup(MAKEWORD(2, 2), &wsd) != 0)
        return 1;

    printf("installing a new service\n");
    s = socket(AF_BTH, SOCK_STREAM, BTHPROTO_RFCOMM);
    if (s == INVALID_SOCKET) {
        printf("Socket creation failed, error %d\n", WSAGetLastError());
        return 1;
    }

    memset(&sab, 0, sizeof(sab));
    sab.addressFamily = AF_BTH;
    sab.port = BT_PORT_ANY;
    sab.serviceClassId = service_UUID;

    if (0 != bind(s, (SOCKADDR *)&sab, sizeof(sab))) {
        printf("bind() failed with error code %d\n", WSAGetLastError());
        closesocket(s);
        return 1;
    }

    int result = sizeof(sab);
    getsockname(s, (SOCKADDR *)&sab, &result);
    printSOCKADDR_BTH(sab);

    if (listen(s, 5) == 0)
        printf("listen() is OK! Listening for connection... :)\n");
    else
        printf("listen() failed with error code %d\n", WSAGetLastError());

    printf("waiting for connection\n");
    for (;;) {
        int ilen = sizeof(sab2);
        s2 = accept(s, (SOCKADDR *)&sab2, &ilen);
        printf("accepted\n");
    }

    if (closesocket(s) == 0)
        printf("closesocket() pretty fine!\n");
    if (WSACleanup() == 0)
        printf("WSACleanup() is OK!\n");
    return 0;

    When I print the SOCKADDR_BTH structure retrieved with getsockname I get a UUID that is not mine. Furthermore, if I use the UUID read from getsockname to connect from the Android application, the connection fails with this exception:

    java.io.IOException: Service discovery failed

    Could you help me? Thanks!
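
    For reference, the Android side of such a connection would look roughly like the hedged sketch below. It assumes the phone is already paired with the PC, that serverMac is a placeholder for the server's Bluetooth address, and that the UUID string matches the one the C++ code registered:

    import java.util.UUID;
    import android.bluetooth.BluetoothAdapter;
    import android.bluetooth.BluetoothDevice;
    import android.bluetooth.BluetoothSocket;

    public class BtClient {
        // Must match the UUID in the service's SDP record on the server.
        private static final UUID SERVICE_UUID =
                UUID.fromString("00000003-0000-1000-8000-00805F9B34FB");

        // Requires the BLUETOOTH permission in the manifest.
        public static BluetoothSocket connect(String serverMac) throws Exception {
            BluetoothAdapter adapter = BluetoothAdapter.getDefaultAdapter();
            BluetoothDevice server = adapter.getRemoteDevice(serverMac);
            adapter.cancelDiscovery(); // an active discovery slows the connect
            // Looks up the RFCOMM channel for SERVICE_UUID via SDP, then connects.
            BluetoothSocket socket = server.createRfcommSocketToServiceRecord(SERVICE_UUID);
            socket.connect();
            return socket;
        }
    }

    Two things worth checking on the Windows side: getsockname() is not guaranteed to echo serviceClassId back, and binding with BT_PORT_ANY only reserves an RFCOMM channel; the SDP record that actually advertises the UUID is normally published separately with WSASetService(..., RNRSERVICE_REGISTER, ...), which would explain both the foreign UUID and the "Service discovery failed" on Android. Note also that 00000003-0000-1000-8000-00805F9B34FB is the Bluetooth base UUID assigned to the RFCOMM protocol itself, so a freshly generated UUID is a safer choice for a custom service.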

    Read the article
