Search Results

Search found 178685 results on 7148 pages for 'not null'.


  • Put/Post json not working with ODataController if Model has Int64

    - by daryl
    I have this Data Object with an Int64 column: [TableAttribute(Name="dbo.vw_RelationLineOfBusiness")] [DataServiceKey("ProviderRelationLobId")] public partial class RelationLineOfBusiness { #region Column Mappings private System.Guid _Lineofbusiness; private System.String _ContractNumber; private System.Nullable<System.Int32> _ProviderType; private System.String _InsuredProviderType; private System.Guid _ProviderRelationLobId; private System.String _LineOfBusinessDesc; private System.String _CultureCode; private System.String _ContractDesc; private System.Nullable<System.Guid> _ProviderRelationKey; private System.String _ProviderRelationNbr; **private System.Int64 _AssignedNbr;** When I post/Put object through my OData controller using HttpClient and NewtsonSoft: partial class RelationLineOfBusinessController : ODataController { public HttpResponseMessage PutRelationLineOfBusiness([FromODataUri] System.Guid key, Invidasys.VidaPro.Model.RelationLineOfBusiness entity) the entity object is null and the error in my modelstate : "Cannot convert a primitive value to the expected type 'Edm.Int64'. See the inner exception for more details." I noticed when I do a get on my object using the below URL: Invidasys.Rest.Service/VidaPro/RelationLineOfBusiness(guid'c6824edc-23b4-4f76-a777-108d482c0fee') my json looks like the following - I noticed that the AssignedNbr is treated as a string. { "odata.metadata":"Invidasys.Rest.Service/VIDAPro/$metadata#RelationLineOfBusiness/@Element", "Lineofbusiness":"ba129c95-c5bb-4e40-993e-c28ca86fffe4","ContractNumber":null,"ProviderType":null, "InsuredProviderType":"PCP","ProviderRelationLobId":"c6824edc-23b4-4f76-a777-108d482c0fee", "LineOfBusinessDesc":"MEDICAID","CultureCode":"en-US","ContractDesc":null, "ProviderRelationKey":"a2d3b61f-3d76-46f4-9887-f2b0c8966914","ProviderRelationNbr":"4565454645", "AssignedNbr":"1000000045","Ispar":true,"ProviderTypeDesc":null,"InsuredProviderTypeDesc":"Primary Care Physician", "StartDate":"2012-01-01T00:00:00","EndDate":"2014-01-01T00:00:00","Created":"2014-06-13T10:59:33.567", "CreatedBy":"Michael","Updated":"2014-06-13T10:59:33.567","UpdatedBy":"Michael" } When I do a PUT with httpclient the JSON is showing up in my restful services as the following and the json for the AssignedNbr column is not in quotes which results in the restful services failing to build the JSON back to an object. I played with the JSON and put the AssignedNbr in quotes and the request goes through correctly. {"AssignedNbr":1000000045,"ContractDesc":null,"ContractNumber":null,"Created":"/Date(1402682373567-0700)/", "CreatedBy":"Michael","CultureCode":"en-US","EndDate":"/Date(1388559600000-0700)/","InsuredProviderType":"PCP", "InsuredProviderTypeDesc":"Primary Care Physician","Ispar":true,"LineOfBusinessDesc":"MEDICAID", "Lineofbusiness":"ba129c95-c5bb-4e40-993e-c28ca86fffe4","ProviderRelationKey":"a2d3b61f-3d76-46f4-9887-f2b0c8966914", "ProviderRelationLobId":"c6824edc-23b4-4f76-a777-108d482c0fee","ProviderRelationNbr":"4565454645","ProviderType":null, "ProviderTypeDesc":null,"StartDate":"/Date(1325401200000-0700)/","Updated":"/Date(1408374995760-0700)/","UpdatedBy":"ED"} The reason we wanted to expose our business model as restful services was to hide any data validation and expose all our databases in format that is easy to develop against. I looked at the DataServiceContext to see if it would work and it does but it uses XML to communicate between the restful services and the client. 
That would work, but DataServiceContext does not give the level of messaging that HttpRequestMessage/HttpResponseMessage gives me for informing users about errors or missing information in their post. We are planning on supporting multiple devices from our restful services platform, but that requires that I can use Newtonsoft Json as well as Microsoft's DataContractJsonSerializer if need be. My question, from a restful service standpoint: is there a way I can configure or code the restful services to accept the AssignedNbr in JSON without the quotes? Or, from a JSON standpoint, is there a way I can get the JSON built correctly without getting into the serializing business? I also don't want our clients to have to deal with custom serializers if they want to write their own apps against our restful services. Any suggestions? Thanks.
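    One client-side approach that may fit (a hedged sketch; the converter name and wiring below are illustrative, not from the question): OData represents Edm.Int64 values as quoted strings in JSON, so a Newtonsoft JsonConverter that writes long values as strings makes the PUT/POST body match what the ODataController expects, without a fully custom serializer.

```csharp
using System;
using Newtonsoft.Json;

// Hypothetical converter: emits Int64 values as quoted strings ("1000000045")
// so the OData endpoint can bind them to Edm.Int64.
public class Int64AsStringConverter : JsonConverter
{
    public override bool CanConvert(Type objectType)
    {
        return objectType == typeof(long) || objectType == typeof(long?);
    }

    public override void WriteJson(JsonWriter writer, object value, JsonSerializer serializer)
    {
        if (value == null)
            writer.WriteNull();
        else
            writer.WriteValue(((long)value).ToString());
    }

    public override object ReadJson(JsonReader reader, Type objectType, object existingValue, JsonSerializer serializer)
    {
        // Accept either a quoted or an unquoted number when reading responses.
        return reader.TokenType == JsonToken.Null ? null : (object)Convert.ToInt64(reader.Value);
    }
}
```

    Registering it when the request body is serialized, for example JsonConvert.SerializeObject(entity, new JsonSerializerSettings { Converters = { new Int64AsStringConverter() } }), keeps clients on stock Json.NET with one extra setting rather than a hand-rolled serializer.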

    Read the article

  • How to make MySQL utilize available system resources, or find "the real problem"?

    - by anonymous coward
    This is a MySQL 5.0.26 server, running on SuSE Enterprise 10. This may be a Serverfault question. The web user interface that uses these particular queries (below) is showing sometimes 30+, even up to 120+ seconds at the worst, to generate the pages involved. On development, when the queries are run alone, they take up to 20 seconds on the first run (with no query cache enabled) but anywhere from 2 to 7 seconds after that - I assume because the tables and indexes involved have been placed into ram. From what I can tell, the longest load times are caused by Read/Update Locking. These are MyISAM tables. So it looks like a long update comes in, followed by a couple 7 second queries, and they're just adding up. And I'm fine with that explanation. What I'm not fine with is that MySQL doesn't appear to be utilizing the hardware it's on, and while the bottleneck seems to be the database, I can't understand why. I would say "throw more hardware at it", but we did and it doesn't appear to have changed the situation. Viewing a 'top' during the slowest times never shows much cpu or memory utilization by mysqld, as if the server is having no trouble at all - but then, why are the queries taking so long? How can I make MySQL use the crap out of this hardware, or find out what I'm doing wrong? Extra Details: On the "Memory Health" tab in the MySQL Administrator (for Windows), the Key Buffer is less than 1/8th used - so all the indexes should be in RAM. I can provide a screen shot of any graphs that might help. So desperate to fix this issue. Suffice it to say, there is legacy code "generating" these queries, and they're pretty much stuck the way they are. I have tried every combination of Indexes on the tables involved, but any suggestions are welcome. Here's the current Create Table statement from development (the 'experimental' key I have added, seems to help a little, for the example query only): CREATE TABLE `registration_task` ( `id` varchar(36) NOT NULL default '', `date_entered` datetime NOT NULL default '0000-00-00 00:00:00', `date_modified` datetime NOT NULL default '0000-00-00 00:00:00', `assigned_user_id` varchar(36) default NULL, `modified_user_id` varchar(36) default NULL, `created_by` varchar(36) default NULL, `name` varchar(80) NOT NULL default '', `status` varchar(255) default NULL, `date_due` date default NULL, `time_due` time default NULL, `date_start` date default NULL, `time_start` time default NULL, `parent_id` varchar(36) NOT NULL default '', `priority` varchar(255) NOT NULL default '9', `description` text, `order_number` int(11) default '1', `task_number` int(11) default NULL, `depends_on_id` varchar(36) default NULL, `milestone_flag` varchar(255) default NULL, `estimated_effort` int(11) default NULL, `actual_effort` int(11) default NULL, `utilization` int(11) default '100', `percent_complete` int(11) default '0', `deleted` tinyint(1) NOT NULL default '0', `wf_task_id` varchar(36) default '0', `reg_field` varchar(8) default '', `date_offset` int(11) default '0', `date_source` varchar(10) default '', `date_completed` date default '0000-00-00', `completed_id` varchar(36) default NULL, `original_name` varchar(80) default NULL, PRIMARY KEY (`id`), KEY `idx_reg_task_p` (`deleted`,`parent_id`), KEY `By_Assignee` (`assigned_user_id`,`deleted`), KEY `status_assignee` (`status`,`deleted`), KEY `experimental` (`deleted`,`status`,`assigned_user_id`,`parent_id`,`date_due`) ) ENGINE=MyISAM DEFAULT CHARSET=latin1 And one of the ridiculous queries in question: SELECT users.user_name 
assigned_user_name, registration.FIELD001 parent_name, registration_task.status status, registration_task.date_modified date_modified, registration_task.date_due date_due, registration.FIELD240 assigned_wf, if(LENGTH(registration_task.description)>0,1,0) has_description, registration_task.* FROM registration_task LEFT JOIN users ON registration_task.assigned_user_id=users.id LEFT JOIN registration ON registration_task.parent_id=registration.id where (registration_task.status != 'Completed' AND registration.FIELD001 LIKE '%' AND registration_task.name LIKE '%' AND registration.FIELD060 LIKE 'GN001472%') AND registration_task.deleted=0 ORDER BY date_due asc LIMIT 0,20; my.cnf - '[mysqld]' section. [mysqld] port = 3306 socket = /var/lib/mysql/mysql.sock skip-locking key_buffer = 384M max_allowed_packet = 100M table_cache = 2048 sort_buffer_size = 2M net_buffer_length = 100M read_buffer_size = 2M read_rnd_buffer_size = 160M myisam_sort_buffer_size = 128M query_cache_size = 16M query_cache_limit = 1M EXPLAIN above query, without additional index: +----+-------------+-------------------+--------+--------------------------------+----------------+---------+------------------------------------------------+---------+-----------------------------+ | id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra | +----+-------------+-------------------+--------+--------------------------------+----------------+---------+------------------------------------------------+---------+-----------------------------+ | 1 | SIMPLE | registration_task | ref | idx_reg_task_p,status_assignee | idx_reg_task_p | 1 | const | 1067354 | Using where; Using filesort | | 1 | SIMPLE | registration | eq_ref | PRIMARY,gbl | PRIMARY | 8 | sugarcrm401.registration_task.parent_id | 1 | Using where | | 1 | SIMPLE | users | ref | PRIMARY | PRIMARY | 38 | sugarcrm401.registration_task.assigned_user_id | 1 | | +----+-------------+-------------------+--------+--------------------------------+----------------+---------+------------------------------------------------+---------+-----------------------------+ EXPLAIN above query, with 'experimental' index: +----+-------------+-------------------+--------+-----------------------------------------------------------+------------------+---------+------------------------------------------------+--------+-----------------------------+ | id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra | +----+-------------+-------------------+--------+-----------------------------------------------------------+------------------+---------+------------------------------------------------+--------+-----------------------------+ | 1 | SIMPLE | registration_task | range | idx_reg_task_p,status_assignee,NewIndex1,tcg_experimental | tcg_experimental | 259 | NULL | 103345 | Using where; Using filesort | | 1 | SIMPLE | registration | eq_ref | PRIMARY,gbl | PRIMARY | 8 | sugarcrm401.registration_task.parent_id | 1 | Using where | | 1 | SIMPLE | users | ref | PRIMARY | PRIMARY | 38 | sugarcrm401.registration_task.assigned_user_id | 1 | | +----+-------------+-------------------+--------+-----------------------------------------------------------+------------------+---------+------------------------------------------------+--------+-----------------------------+

    Read the article

  • Microphones not working on Apple macbook Air 1,1 (Early 2008) under Linux

    - by jj_p
    I'm running Linux on an mba. I can't make the microphones (neither external nor internal) work. I test using alsamixer and arecord -d 5 test-mic.waw together with aplay test-mic.waw It seems there is a problem with kernel trying to decipher Apple (intentionally) corrupted 'bios', in particular the mic pins are wrongly assigned. As far as we are concerned here, is there any difference between using EFI and BIOS-compatibility mode? (see https://wiki.archlinux.org/index.php/MacBook where they claim to have everything working out of the box on mba1,1) A nice proposal would be to compile the latest Linux kernel and run hda-jack-retask to find the right configuration (in the case of Realtek codec, the missing things I'm supposed to check are either some vendor-specific COEF verbs, EAPD or GPIO setup.), and then come up with a kernel patch to address the issue. Since I'm not that familiar with this last part of the story, can anyone help me through this process? Some useful data: The output from alsa script run as root http://www.alsa-project.org/db/?f=adae8ebee1007043fe83414ac4972319e02255fa The command hda-jack-sense-test -a (with external mic in) Pin 0x14 (Internal Speaker): present = No Pin 0x15 (Green HP Out): present = Yes Pin 0x16 (Not connected): present = No Pin 0x17 (Not connected): present = No Pin 0x18 (Not connected): present = No Pin 0x19 (Not connected): present = No Pin 0x1a (Not connected): present = No Pin 0x1b (Not connected): present = No Pin 0x1c (Not connected): present = No Pin 0x1d (Not connected): present = No Pin 0x1e (Not connected): present = No Pin 0x1f (Not connected): present = No Most likely the chip is Realtek ALC885 (compare also ALC889A) http://guide-images.ifixit.net/igi/bBTSqaeK5JpQ1AWe.large , although at the moment alsa reads it as ALC889A Takashi Iwai's tutorial https://www.kernel.org/doc/Documentation/sound/alsa/HD-Audio.txt Some people researched the original files from a running OS X installation on this same model (I think the relevant files are AppleHDA.kext/Contents/MacOS/AppleHDA AppleHDA.kext/Contents/PlugIns/AppleHDAHardwareConfigDriver.kext/Contents/Info.p????list AppleHDA.kext/Contents/Resources/layout12.xml.zlib AppleHDA.kext/Contents/Resources/Platforms.xml.zlib) http://www.insanelymac.com/forum/topic/220090-alc889a-pin-configuration/#entry1554954 Datasheet http://www.realtek.info/pdf/ALC885_1-1.pdf (from the same Realtek, one can also try to download Linux driver, but this is just taken from ALSA project, as stated in the readme file.) Compare with this Arch guy http://www.alsa-project.org/db/?f=3ca8243c0626844f0264a3faad0aa72018bc14f4 Here for the first time support to audio (except mics) for mba1,2 (which is morally the same as 1,1) is patched into the kernel http://www.alsa-project.org/pipermail/alsa-devel/2010-February/025511.html The same jack supposedly works both for HP and ext MIC, I think it's called TRRS, and it's the same as the one used e.g. for iphones This guy might have done a similar job, though to a more recent version and for sound globally, not just mics: http://blogs.aerys.in/jeanmarc-leroux/2013/09/15/fixing-2013-macbook-air-ubuntu-sound-issue/ (this is mirror to http://unix.stackexchange.com/questions/73044/microphones-not-working-on-apple-macbook-air-1-1-early-2008-under-linux )

    Read the article

  • data is not inserting in my db table [closed]

    - by Sarojit Chakraborty
    Please see my below(SubjectDetailsDao.java) code of addZoneToDb method. My debugger is nicely running upto ** session.getTransaction().commit();** code. but after that debugger stops,I do not know why it stops after that line? .And because of this i am unable to insert my data into my database table. I don't know what to do.Why it is not inserting my data into my database table? Plz help me for this. H Now i am getting this Error: Struts Problem Report Struts has detected an unhandled exception: Messages: org.hibernate.event.PreInsertEvent.getSource()Lorg/hibernate/event/EventSource; File: org/hibernate/validator/event/ValidateEventListener.java Line number: 172 Stacktraces java.lang.reflect.InvocationTargetException sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) java.lang.reflect.Method.invoke(Method.java:601) com.opensymphony.xwork2.DefaultActionInvocation.invokeAction(DefaultActionInvocation.java:441) com.opensymphony.xwork2.DefaultActionInvocation.invokeActionOnly(DefaultActionInvocation.java:280) com.opensymphony.xwork2.DefaultActionInvocation.invoke(DefaultActionInvocation.java:243) com.opensymphony.xwork2.interceptor.DefaultWorkflowInterceptor.doIntercept(DefaultWorkflowInterceptor.java:165) com.opensymphony.xwork2.interceptor.MethodFilterInterceptor.intercept(MethodFilterInterceptor.java:87) com.opensymphony.xwork2.DefaultActionInvocation.invoke(DefaultActionInvocation.java:237) com.opensymphony.xwork2.validator.ValidationInterceptor.doIntercept(ValidationInterceptor.java:252) org.apache.struts2.interceptor.validation.AnnotationValidationInterceptor.doIntercept(AnnotationValidationInterceptor.java:68) com.opensymphony.xwork2.interceptor.MethodFilterInterceptor.intercept(MethodFilterInterceptor.java:87) com.opensymphony.xwork2.DefaultActionInvocation.invoke(DefaultActionInvocation.java:237) com.opensymphony.xwork2.interceptor.ConversionErrorInterceptor.intercept(ConversionErrorInterceptor.java:122) com.opensymphony.xwork2.DefaultActionInvocation.invoke(DefaultActionInvocation.java:237) com.opensymphony.xwork2.interceptor.ParametersInterceptor.doIntercept(ParametersInterceptor.java:195) com.opensymphony.xwork2.interceptor.MethodFilterInterceptor.intercept(MethodFilterInterceptor.java:87) com.opensymphony.xwork2.DefaultActionInvocation.invoke(DefaultActionInvocation.java:237) com.opensymphony.xwork2.interceptor.ParametersInterceptor.doIntercept(ParametersInterceptor.java:195) com.opensymphony.xwork2.interceptor.MethodFilterInterceptor.intercept(MethodFilterInterceptor.java:87) com.opensymphony.xwork2.DefaultActionInvocation.invoke(DefaultActionInvocation.java:237) com.opensymphony.xwork2.interceptor.StaticParametersInterceptor.intercept(StaticParametersInterceptor.java:179) com.opensymphony.xwork2.DefaultActionInvocation.invoke(DefaultActionInvocation.java:237) org.apache.struts2.interceptor.MultiselectInterceptor.intercept(MultiselectInterceptor.java:75) com.opensymphony.xwork2.DefaultActionInvocation.invoke(DefaultActionInvocation.java:237) org.apache.struts2.interceptor.CheckboxInterceptor.intercept(CheckboxInterceptor.java:94) com.opensymphony.xwork2.DefaultActionInvocation.invoke(DefaultActionInvocation.java:237) org.apache.struts2.interceptor.FileUploadInterceptor.intercept(FileUploadInterceptor.java:235) com.opensymphony.xwork2.DefaultActionInvocation.invoke(DefaultActionInvocation.java:237) 
com.opensymphony.xwork2.interceptor.ModelDrivenInterceptor.intercept(ModelDrivenInterceptor.java:89) com.opensymphony.xwork2.DefaultActionInvocation.invoke(DefaultActionInvocation.java:237) com.opensymphony.xwork2.interceptor.ScopedModelDrivenInterceptor.intercept(ScopedModelDrivenInterceptor.java:130) com.opensymphony.xwork2.DefaultActionInvocation.invoke(DefaultActionInvocation.java:237) org.apache.struts2.interceptor.debugging.DebuggingInterceptor.intercept(DebuggingInterceptor.java:267) com.opensymphony.xwork2.DefaultActionInvocation.invoke(DefaultActionInvocation.java:237) com.opensymphony.xwork2.interceptor.ChainingInterceptor.intercept(ChainingInterceptor.java:126) com.opensymphony.xwork2.DefaultActionInvocation.invoke(DefaultActionInvocation.java:237) com.opensymphony.xwork2.interceptor.PrepareInterceptor.doIntercept(PrepareInterceptor.java:138) com.opensymphony.xwork2.interceptor.MethodFilterInterceptor.intercept(MethodFilterInterceptor.java:87) com.opensymphony.xwork2.DefaultActionInvocation.invoke(DefaultActionInvocation.java:237) com.opensymphony.xwork2.interceptor.I18nInterceptor.intercept(I18nInterceptor.java:165) com.opensymphony.xwork2.DefaultActionInvocation.invoke(DefaultActionInvocation.java:237) org.apache.struts2.interceptor.ServletConfigInterceptor.intercept(ServletConfigInterceptor.java:164) com.opensymphony.xwork2.DefaultActionInvocation.invoke(DefaultActionInvocation.java:237) com.opensymphony.xwork2.interceptor.AliasInterceptor.intercept(AliasInterceptor.java:179) com.opensymphony.xwork2.DefaultActionInvocation.invoke(DefaultActionInvocation.java:237) com.opensymphony.xwork2.interceptor.ExceptionMappingInterceptor.intercept(ExceptionMappingInterceptor.java:176) com.opensymphony.xwork2.DefaultActionInvocation.invoke(DefaultActionInvocation.java:237) org.apache.struts2.impl.StrutsActionProxy.execute(StrutsActionProxy.java:52) org.apache.struts2.dispatcher.Dispatcher.serviceAction(Dispatcher.java:488) org.apache.struts2.dispatcher.ng.ExecuteOperations.executeAction(ExecuteOperations.java:77) org.apache.struts2.dispatcher.ng.filter.StrutsPrepareAndExecuteFilter.doFilter(StrutsPrepareAndExecuteFilter.java:91) org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:243) org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:210) org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:240) org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:164) org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:498) org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:164) org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:100) org.apache.catalina.valves.AccessLogValve.invoke(AccessLogValve.java:562) org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:118) org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:394) org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:243) org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:188) org.apache.tomcat.util.net.JIoEndpoint$SocketProcessor.run(JIoEndpoint.java:302) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603) java.lang.Thread.run(Thread.java:722) java.lang.NoSuchMethodError: 
org.hibernate.event.PreInsertEvent.getSource()Lorg/hibernate/event/EventSource; org.hibernate.validator.event.ValidateEventListener.onPreInsert(ValidateEventListener.java:172) org.hibernate.action.EntityInsertAction.preInsert(EntityInsertAction.java:156) org.hibernate.action.EntityInsertAction.execute(EntityInsertAction.java:49) org.hibernate.engine.ActionQueue.execute(ActionQueue.java:250) org.hibernate.engine.ActionQueue.executeActions(ActionQueue.java:234) org.hibernate.engine.ActionQueue.executeActions(ActionQueue.java:141) org.hibernate.event.def.AbstractFlushingEventListener.performExecutions(AbstractFlushingEventListener.java:298) org.hibernate.event.def.DefaultFlushEventListener.onFlush(DefaultFlushEventListener.java:27) org.hibernate.impl.SessionImpl.flush(SessionImpl.java:1000) org.hibernate.impl.SessionImpl.managedFlush(SessionImpl.java:338) org.hibernate.transaction.JDBCTransaction.commit(JDBCTransaction.java:106) v.esoft.dao.SubjectdetailsDAO.SubjectdetailsDAO.addZoneToDb(SubjectdetailsDAO.java:185) v.esoft.actions.LoginAction.datatobeinsert(LoginAction.java:53) sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) java.lang.reflect.Method.invoke(Method.java:601) com.opensymphony.xwork2.DefaultActionInvocation.invokeAction(DefaultActionInvocation.java:441) com.opensymphony.xwork2.DefaultActionInvocation.invokeActionOnly(DefaultActionInvocation.java:280) com.opensymphony.xwork2.DefaultActionInvocation.invoke(DefaultActionInvocation.java:243) com.opensymphony.xwork2.interceptor.DefaultWorkflowInterceptor.doIntercept(DefaultWorkflowInterceptor.java:165) com.opensymphony.xwork2.interceptor.MethodFilterInterceptor.intercept(MethodFilterInterceptor.java:87) com.opensymphony.xwork2.DefaultActionInvocation.invoke(DefaultActionInvocation.java:237) com.opensymphony.xwork2.validator.ValidationInterceptor.doIntercept(ValidationInterceptor.java:252) org.apache.struts2.interceptor.validation.AnnotationValidationInterceptor.doIntercept(AnnotationValidationInterceptor.java:68) ............................... ............................... 
SubjectDetailsDao.java(I have problem in addZoneToDb) package v.esoft.dao.SubjectdetailsDAO; import java.text.SimpleDateFormat; import java.util.ArrayList; import java.util.Iterator; import java.util.List; import org.hibernate.HibernateException; import org.hibernate.Query; import org.hibernate.Session; import org.hibernate.SessionFactory; import org.hibernate.Transaction; import org.hibernate.criterion.Order; import com.opensymphony.xwork2.ActionSupport; import v.esoft.connection.HibernateUtil; import v.esoft.pojos.Subjectdetails; public class SubjectdetailsDAO extends ActionSupport { private static Session session = null; private static SessionFactory sessionFactory = null; static Transaction transaction = null; private String currentDate; SimpleDateFormat formatter1 = new SimpleDateFormat("yyyy-MM-dd"); private java.util.Date currentdate; public SubjectdetailsDAO() { sessionFactory = HibernateUtil.getSessionFactory(); SimpleDateFormat formatter = new SimpleDateFormat("yyyy-MM-dd"); currentdate = new java.util.Date(); currentDate = formatter.format(currentdate); } public List getAllCustomTempleteRoutinesForGrid() { List list = new ArrayList(); try { session = sessionFactory.openSession(); list = session.createCriteria(Subjectdetails.class).addOrder(Order.desc("subjectId")).list(); } catch (Exception e) { System.out.println("Exepetion in getAllCustomTempleteRoutines" + e); } finally { try { // HibernateUtil.shutdown(); } catch (Exception e) { System.out.println("Exception In getExerciseListByLoginId Resource closing :" + e); } } return list; } //**showing list on grid private static List<Subjectdetails> custLst=new ArrayList<Subjectdetails>(); static int total=50; static { SubjectdetailsDAO cts = new SubjectdetailsDAO(); Iterator iterator1 = cts.getAllCustomTempleteRoutinesForGrid().iterator(); while (iterator1.hasNext()) { Subjectdetails get = (Subjectdetails) iterator1.next(); custLst.add(get); } } /****************************************update Routines List by WorkId************************************/ public int updatesub(Subjectdetails s) { int updated = 0; try { session = sessionFactory.openSession(); transaction = session.beginTransaction(); Query query = session.createQuery("UPDATE Subjectdetails set subjectName = :routineName1 WHERE subjectId=:workoutId1"); query.setString("routineName1", s.getSubjectName()); query.setInteger("workoutId1", s.getSubjectId()); updated = query.executeUpdate(); if (updated != 0) { } transaction.commit(); } catch (Exception e) { if (transaction != null && transaction.isActive()) { try { transaction.rollback(); } catch (Exception e1) { System.out.println("Exception in addUser() Rollback :" + e1); } } } finally { try { session.flush(); session.close(); } catch (Exception e) { System.out.println("Exception In addUser Resource closing :" + e); } } return updated; } /****************************************update Routines List by WorkId************************************/ public int addSubjectt(Subjectdetails s) { int inserted = 0; Subjectdetails ss=new Subjectdetails(); try { session = sessionFactory.openSession(); transaction = session.beginTransaction(); ss. 
setSubjectName(s.getSubjectName()); session.save(ss); System.out.println("Successfully data insert in database"); inserted++; if (inserted != 0) { } transaction.commit(); } catch (Exception e) { if (transaction != null && transaction.isActive()) { try { transaction.rollback(); } catch (Exception e1) { System.out.println("Exception in addUser() Rollback :" + e1); } } } finally { try { session.flush(); session.close(); } catch (Exception e) { System.out.println("Exception In addUser Resource closing :" + e); } } return inserted; } /******************************************Get all Routines List by LoginID************************************/ public List getSubjects() { List list = null; try { session = sessionFactory.openSession(); list = session.createCriteria(Subjectdetails.class).list(); } catch (Exception e) { System.out.println("Exception in getRoutineList() :" + e); } finally { try { session.flush(); session.close(); } catch (Exception e) { System.out.println("Exception In getUserList Resource closing :" + e); } } return list; } //---\ public int addZoneToDb(String countryName, Integer loginId) { int inserted = 0; try { System.out.println("---------1--------"); Session session = HibernateUtil.getSessionFactory().openSession(); System.out.println("---------2------session--"+session); session.beginTransaction(); Subjectdetails country = new Subjectdetails(countryName, loginId, currentdate, loginId, currentdate); System.out.println("---------2------country--"+country); session.save(country); System.out.println("-------after save--"); inserted++; session.getTransaction().commit(); System.out.println("-------after commits--"); } catch (Exception e) { if (transaction != null && transaction.isActive()) { try { transaction.rollback(); } catch (Exception e1) { } } } finally { try { } catch (Exception e) { } } return inserted; } //-- public int nextId() { return total++; } public List<Subjectdetails> buildList() { return custLst; } public static int count() { return custLst.size(); } public static List<Subjectdetails> find(int o,int q) { return custLst.subList(o, q); } public void save(Subjectdetails c) { custLst.add(c); } public static Subjectdetails findById(Integer id) { try { for(Subjectdetails c:custLst) { if(c.getSubjectId()==id) { return c; } } } catch (Exception e) { // TODO Auto-generated catch block e.printStackTrace(); } return null; } public void update(Subjectdetails c) { for(Subjectdetails x:custLst) { if(x.getSubjectId()==c.getSubjectId()) { x.setSubjectName(c.getSubjectName()); } } } public void delete(Subjectdetails c) { custLst.remove(c); } public static List<Subjectdetails> findNotById(int id, int from,int to) { List<Subjectdetails> subLst=custLst.subList(from, to); List<Subjectdetails> temp=new ArrayList<Subjectdetails>(); for(Subjectdetails c:subLst) { if(c.getSubjectId()!=id) { temp.add(c); } } return temp; } public static List<Subjectdetails> findLesserAsId(int id, int from,int to) { List<Subjectdetails> subLst=custLst.subList(from, to); List<Subjectdetails> temp=new ArrayList<Subjectdetails>(); for(Subjectdetails c:subLst) { if(c.getSubjectId()<=id) { temp.add(c); } } return temp; } public static List<Subjectdetails> findGreaterAsId(int id, int from,int to) { List<Subjectdetails> subLst=custLst.subList(from, to); List<Subjectdetails> temp=new ArrayList<Subjectdetails>(); for(Subjectdetails c:subLst) { if(c.getSubjectId()>=id) { temp.add(c); } } return temp; } } Subjectdetails.hbm.xml <hibernate-mapping> <class name="vb.sofware.pojos.Subjectdetails" table="subjectdetails" 
catalog="vbsoftware"> <id name="subjectId" type="int"> <column name="subject_id" /> <generator class="increment"/> </id> <property name="subjectName" type="string"> <column name="subject_name" length="150" /> </property> <property name="createrId" type="java.lang.Integer"> <column name="creater_id" /> </property> <property name="createdDate" type="timestamp"> <column name="created_date" length="19" /> </property> <property name="updateId" type="java.lang.Integer"> <column name="update_id" /> </property> <property name="updatedDate" type="timestamp"> <column name="updated_date" length="19" /> </property> </class> </hibernate-mapping> My POJO - Subjectdetails.java package v.esoft.pojos; // Generated Oct 6, 2012 1:58:21 PM by Hibernate Tools 3.4.0.CR1 import java.util.Date; /** * Subjectdetails generated by hbm2java */ public class Subjectdetails implements java.io.Serializable { private int subjectId; private String subjectName; private Integer createrId; private Date createdDate; private Integer updateId; private Date updatedDate; public Subjectdetails( String subjectName) { //this.subjectId = subjectId; this.subjectName = subjectName; } public Subjectdetails() { } public Subjectdetails(int subjectId) { this.subjectId = subjectId; } public Subjectdetails(int subjectId, String subjectName, Integer createrId, Date createdDate, Integer updateId, Date updatedDate) { this.subjectId = subjectId; this.subjectName = subjectName; this.createrId = createrId; this.createdDate = createdDate; this.updateId = updateId; this.updatedDate = updatedDate; } public Subjectdetails( String subjectName, Integer createrId, Date createdDate, Integer updateId, Date updatedDate) { this.subjectName = subjectName; this.createrId = createrId; this.createdDate = createdDate; this.updateId = updateId; this.updatedDate = updatedDate; } public int getSubjectId() { return this.subjectId; } public void setSubjectId(int subjectId) { this.subjectId = subjectId; } public String getSubjectName() { return this.subjectName; } public void setSubjectName(String subjectName) { this.subjectName = subjectName; } public Integer getCreaterId() { return this.createrId; } public void setCreaterId(Integer createrId) { this.createrId = createrId; } public Date getCreatedDate() { return this.createdDate; } public void setCreatedDate(Date createdDate) { this.createdDate = createdDate; } public Integer getUpdateId() { return this.updateId; } public void setUpdateId(Integer updateId) { this.updateId = updateId; } public Date getUpdatedDate() { return this.updatedDate; } public void setUpdatedDate(Date updatedDate) { this.updatedDate = updatedDate; } } And my Sql query is CREATE TABLE IF NOT EXISTS `subjectdetails` ( `subject_id` int(3) NOT NULL, `subject_name` varchar(150) DEFAULT NULL, `creater_id` int(5) DEFAULT NULL, `created_date` datetime DEFAULT NULL, `update_id` int(5) DEFAULT NULL, `updated_date` datetime DEFAULT NULL, PRIMARY KEY (`subject_id`) ) ENGINE=InnoDB DEFAULT CHARSET=latin1;

    Read the article

  • Linux not buffering block I/O when the device is not "in use" (i.e. mounted)

    - by Radek Hladík
    I am installing new server and I've found an interesting issue. The server is running Fedora 19 (3.11.7-200.fc19.x86_64 kernel) and is supposed to host a few KVM/Qemu virtual servers (mail server, file server, etc..). The HW is Intel(R) Xeon(R) CPU 5160 @ 3.00GHz with 16GB RAM. One of the most important features will be Samba server and we have decided to make it as virtual machine with almost direct access to the disks. So the real HDD is cached on SSD (via bcache) then raided with md and the final device is exported into the virtual machine via virtio. The virtual machine is again Fedora 19 with the same kernel. One important topic to find out is whether the virtualization layer will not introduce high overload into disk I/Os. So far I've been able to get up to 180MB/s in VM and up to 220MB/s on real HW (on the SSD disk). I am still not sure why the overhead is so big but it is more than the network can handle so I do not care so much. The interesting thing is that I've found that the disk reads are not buffered in the VM unless I create and mount FS on the disk or I use the disks somehow. Simply put: Lets do dd to read disk for the first time (the /dev/vdd is an old Raptor disk 70MB/s is its real speed): [root@localhost ~]# dd if=/dev/vdd of=/dev/null bs=256k count=10000 ; cat /proc/meminfo | grep Buffers 2621440000 bytes (2.6 GB) copied, 36.8038 s, 71.2 MB/s Buffers: 14444 kB Rereading the data shows that they are cached somewhere but not in buffers of the VM. Also the speed increased to "only" 500MB/s. The VM has 4GB of RAM (more that the test file) [root@localhost ~]# dd if=/dev/vdd of=/dev/null bs=256k count=10000 ; cat /proc/meminfo | grep Buffers 2621440000 bytes (2.6 GB) copied, 5.16016 s, 508 MB/s Buffers: 14444 kB [root@localhost ~]# dd if=/dev/vdd of=/dev/null bs=256k count=10000 ; cat /proc/meminfo | grep Buffers 2621440000 bytes (2.6 GB) copied, 5.05727 s, 518 MB/s Buffers: 14444 kB Now lets mount the FS on /dev/vdd and try the dd again: [root@localhost ~]# mount /dev/vdd /mnt/tmp [root@localhost ~]# dd if=/dev/vdd of=/dev/null bs=256k count=10000 ; cat /proc/meminfo | grep Buffers 2621440000 bytes (2.6 GB) copied, 4.68578 s, 559 MB/s Buffers: 2574592 kB [root@localhost ~]# dd if=/dev/vdd of=/dev/null bs=256k count=10000 ; cat /proc/meminfo | grep Buffers 2621440000 bytes (2.6 GB) copied, 1.50504 s, 1.7 GB/s Buffers: 2574592 kB While the first read was the same, all 2.6GB got buffered and the next read was at 1.7GB/s. And when I unmount the device: [root@localhost ~]# umount /mnt/tmp [root@localhost ~]# cat /proc/meminfo | grep Buffers Buffers: 14452 kB [root@localhost ~]# dd if=/dev/vdd of=/dev/null bs=256k count=10000 ; cat /proc/meminfo | grep Buffers 2621440000 bytes (2.6 GB) copied, 5.10499 s, 514 MB/s Buffers: 14468 kB The bcache was disabled while testing and the results are same on faster (newer) HDDs and on SSD (except for the initial read speed of course). To sum it up. When I read from the device via dd first time, it gets read from the disk. Next time I reread it gets cached in the host but not in the guest (thats actually the same issue, more on that later). When I mount the filesystem but try to read the device directly it gets cached in VM (via buffers). As soon as I stop "using" it, buffers are discarded and the device is not cached anymore in the VM. When I looked into buffers value on the host I realized that the situation is the same. The block I/O gets buffered only when the disk is in use, in this case it means "exported to a VM". 
On the host, after all the measurements were done: 3165552 buffers. On the host, after the VM shutdown: 119176 buffers. I know it is not that important, as the disks will be mounted all the time, but I am curious and would like to know why it works like this.

    Read the article

  • Custom Paging for GridView in an UpdatePanel not firing PageIndexChanging event

    - by JeffCren
    I have a GridView that uses custom paging inside an UpdatePanel (so that the paging and sorting of the gridview don't cause postback). The sorting works fine, but the paging doesn't. The PageIndexChanging event is never called. This is the aspx code: <asp:UpdatePanel runat="server" ID="upSearchResults" ChildrenAsTriggers="true" UpdateMode="Always"> <ContentTemplate> <asp:GridView ID="gvSearchResults" runat="server" AllowSorting="true" AutoGenerateColumns="false" AllowPaging="true" PageSize="10" OnDataBound="gvSearchResults_DataBound" OnRowDataBound ="gvSearchResults_RowDataBound" OnSorting="gvSearchResults_Sorting" OnPageIndexChanging="gvSearchResults_PageIndexChanging" Width="100%" EnableSortingAndPagingCallbacks="false"> <Columns> <asp:TemplateField HeaderText="Select" HeaderStyle-HorizontalAlign="Center"> <ItemTemplate> <asp:HyperLink ID="lnkAdd" runat="server">Add</asp:HyperLink> <asp:HiddenField ID="hfPersonId" runat="server" Value='<%# Eval("Id") %>'/> </ItemTemplate> </asp:TemplateField> <asp:BoundField HeaderText="First Name" DataField="FirstName" HeaderStyle-HorizontalAlign="Center" ItemStyle-HorizontalAlign="Center" SortExpression="FirstName" /> <asp:BoundField HeaderText="Last Name" DataField="LastName" HeaderStyle-HorizontalAlign="Center" ItemStyle-HorizontalAlign="Center" SortExpression="LastName" /> <asp:TemplateField HeaderText="Phone Number" HeaderStyle-HorizontalAlign="Center" ItemStyle-HorizontalAlign="Center" > <ItemTemplate> <asp:Label ID="lblPhone" runat="server" Text="" /> </ItemTemplate> </asp:TemplateField> </Columns> <PagerTemplate> <table width="100%" class="pager"> <tr> <td> </td> </tr> </table> </PagerTemplate> </asp:GridView> <div class="btnContainer"> <div class="btn btn-height_small btn-style_dominant"> <asp:LinkButton ID="lbtNewRecord" runat="server" OnClick="lbtNewRecord_Click"><span>Create New Record</span></asp:LinkButton> </div> <div class="btn btn-height_small btn-style_subtle"> <a onclick="openParticipantModal();"><span>Cancel</span></a> </div> </div> </ContentTemplate> <Triggers> <asp:AsyncPostBackTrigger ControlID="gvSearchResults" EventName="PageIndexChanging" /> <asp:AsyncPostBackTrigger ControlID="gvSearchResults" EventName="Sorting" /> </Triggers> </asp:UpdatePanel> In the code behind I have a SetPaging method that is called on the GridView OnDataBound event: private void SetPaging(GridView gv) { GridViewRow row = gv.BottomPagerRow; var place = row.Cells[0]; var first = new LinkButton(); first.CommandName = "Page"; first.CommandArgument = "First"; first.Text = "First"; first.ToolTip = "First Page"; if (place != null) place.Controls.Add(first); var lbl = new Label(); lbl.Text = " "; if (place != null) place.Controls.Add(lbl); var prev = new LinkButton(); prev.CommandName = "Page"; prev.CommandArgument = "Prev"; prev.Text = "Prev"; prev.ToolTip = "Previous Page"; if (place != null) place.Controls.Add(prev); var lbl2 = new Label(); lbl2.Text = " "; if (place != null) place.Controls.Add(lbl2); for (int i = 1; i <= gv.PageCount; i++) { var btn = new LinkButton(); btn.CommandName = "Page"; btn.CommandArgument = i.ToString(); if (i == gv.PageIndex + 1) { btn.BackColor = Color.Gray; } btn.Text = i.ToString(); btn.ToolTip = "Page " + i.ToString(); if (place != null) place.Controls.Add(btn); var lbl3 = new Label(); lbl3.Text = " "; if (place != null) place.Controls.Add(lbl3); } var next = new LinkButton(); next.CommandName = "Page"; next.CommandArgument = "Next"; next.Text = "Next"; next.ToolTip = "Next Page"; if (place != null) 
place.Controls.Add(next); var lbl4 = new Label(); lbl4.Text = " "; if (place != null) place.Controls.Add(lbl4); var last = new LinkButton(); last.CommandName = "Page"; last.CommandArgument = "Last"; last.Text = "Last"; last.ToolTip = "Last Page"; if (place != null) place.Controls.Add(last); var lbl5 = new Label(); lbl5.Text = " "; if (place != null) place.Controls.Add(lbl5); } The paging works if I don't use custom paging, but I really need to use the custom paging. I can't figure out why the PageIndexChanging event isn't fired when I'm using the custom paging. Thanks, Jeff
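    A hedged sketch of one common remedy (not taken from this project; the rebind method name is a placeholder): controls added in OnDataBound are not recreated on the next postback, so the dynamically built "Page" LinkButtons no longer exist in the control tree when ASP.NET tries to raise their command, and nothing bubbles up to PageIndexChanging. Building the pager in RowCreated instead keeps the buttons alive across the UpdatePanel's async postbacks.

```csharp
// Build the custom pager in RowCreated, which runs on every postback while the grid
// is recreated from ViewState, so the LinkButtons exist when their command is raised.
protected void gvSearchResults_RowCreated(object sender, GridViewRowEventArgs e)
{
    if (e.Row.RowType == DataControlRowType.Pager)
    {
        // Existing helper from the question; adapt it to use e.Row if
        // gv.BottomPagerRow is not populated yet at this point.
        SetPaging((GridView)sender);
    }
}

// With the buttons recreated, the "Page" command bubbles and this handler fires.
protected void gvSearchResults_PageIndexChanging(object sender, GridViewPageEventArgs e)
{
    gvSearchResults.PageIndex = e.NewPageIndex;
    BindSearchResults();   // placeholder for whatever re-runs the query and calls DataBind()
}
```

    The markup would also need OnRowCreated="gvSearchResults_RowCreated", and the pager-building call can then come out of the DataBound handler so the buttons are not added twice.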

    Read the article

  • MySQL unique clustered constraint not constraining as expected

    - by igor
    I'm creating a table with: CREATE TABLE movies ( id INT AUTO_INCREMENT PRIMARY KEY, name CHAR(255) NOT NULL, year INT NOT NULL, inyear CHAR(10), CONSTRAINT UNIQUE CLUSTERED (name, year, inyear) ); (this is JDBC SQL) This creates a MySQL table with a clustered index whose "index kind" is "unique" and which spans the three clustered columns. However, once I dump my data (without exceptions thrown), I see that the uniqueness constraint has failed: SELECT * FROM movies WHERE name = 'Flawless' AND year = 2007 AND inyear IS NULL; gives: id, name, year, inyear 162169, 'Flawless', 2007, NULL 162170, 'Flawless', 2007, NULL Does anyone know what I'm doing wrong here?

    Read the article

  • OpenID on Google not returning anything

    - by PlayKid
    Hi there, for some reason the following code does not return anything: string alias = response.FriendlyIdentifierForDisplay; var sreg = response.GetExtension<ClaimsResponse>(); if (sreg != null && sreg.MailAddress != null) { alias = sreg.MailAddress.User; } if (sreg != null && !string.IsNullOrEmpty(sreg.Email)) { alias = sreg.Email; } if (sreg != null && !string.IsNullOrEmpty(sreg.FullName)) { alias = sreg.FullName; } I was hoping I could get the e-mail from Yahoo or Google, but sreg just returns null whichever provider I choose. I saw in some other posts that this code should return an e-mail at least, but for me it does not. Please assist. Thanks a lot.
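    A hedged sketch of the usual cause, assuming DotNetOpenAuth (which the GetExtension<ClaimsResponse>() call suggests): the extensions have to be requested before redirecting to the provider, otherwise the response never carries them, and Google in particular ignores Simple Registration (sreg) and only answers Attribute Exchange (AX). The openid, identifier and response variables below stand in for whatever the existing login code already uses.

```csharp
using DotNetOpenAuth.OpenId.Extensions.AttributeExchange;
using DotNetOpenAuth.OpenId.Extensions.SimpleRegistration;
using DotNetOpenAuth.OpenId.RelyingParty;

// When building the outgoing request, ask for the e-mail via both sreg and AX.
IAuthenticationRequest request = openid.CreateRequest(identifier);

request.AddExtension(new ClaimsRequest
{
    Email = DemandLevel.Require,
    FullName = DemandLevel.Request
});

var fetch = new FetchRequest();   // AX request, needed for Google
fetch.Attributes.AddRequired(WellKnownAttributes.Contact.Email);
fetch.Attributes.AddOptional(WellKnownAttributes.Name.FullName);
request.AddExtension(fetch);

request.RedirectToProvider();

// Back on the return URL, check both extensions:
var sreg = response.GetExtension<ClaimsResponse>();
var ax = response.GetExtension<FetchResponse>();
string email = (sreg != null && !string.IsNullOrEmpty(sreg.Email)) ? sreg.Email
             : (ax != null) ? ax.GetAttributeValue(WellKnownAttributes.Contact.Email)
             : null;
```

    Requesting both extensions lets one code path read whichever format the provider answers with.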

    Read the article

  • Oracle MAA Part 1: When One Size Does Not Fit All

    - by JoeMeeks
    The good news is that Oracle Maximum Availability Architecture (MAA) best practices combined with Oracle Database 12c (see video) introduce first-in-the-industry database capabilities that truly make unplanned outages and planned maintenance transparent to users. The trouble with such good news is that Oracle’s enthusiasm in evangelizing its latest innovations may leave some to wonder if we’ve lost sight of the fact that not all database applications are created equal. Afterall, many databases don’t have the business requirements for high availability and data protection that require all of Oracle’s ‘stuff’. For many real world applications, a controlled amount of downtime and/or data loss is OK if it saves money and effort. Well, not to worry. Oracle knows that enterprises need solutions that address the full continuum of requirements for data protection and availability. Oracle MAA accomplishes this by defining four HA service level tiers: BRONZE, SILVER, GOLD and PLATINUM. The figure below shows the progression in service levels provided by each tier. Each tier uses a different MAA reference architecture to deploy the optimal set of Oracle HA capabilities that reliably achieve a given service level (SLA) at the lowest cost.  Each tier includes all of the capabilities of the previous tier and builds upon the architecture to handle an expanded fault domain. Bronze is appropriate for databases where simple restart or restore from backup is ‘HA enough’. Bronze is based upon a single instance Oracle Database with MAA best practices that use the many capabilities for data protection and HA included with every Oracle Enterprise Edition license. Oracle-optimized backups using Oracle Recovery Manager (RMAN) provide data protection and are used to restore availability should an outage prevent the database from being able to restart. Silver provides an additional level of HA for databases that require minimal or zero downtime in the event of database instance or server failure as well as many types of planned maintenance. Silver adds clustering technology - either Oracle RAC or RAC One Node. RMAN provides database-optimized backups to protect data and restore availability should an outage prevent the cluster from being able to restart. Gold raises the game substantially for business critical applications that can’t accept vulnerability to single points-of-failure. Gold adds database-aware replication technologies, Active Data Guard and Oracle GoldenGate, which synchronize one or more replicas of the production database to provide real time data protection and availability. Database-aware replication greatly increases HA and data protection beyond what is possible with storage replication technologies. It also reduces cost while improving return on investment by actively utilizing all replicas at all times. Platinum introduces all of the sexy new Oracle Database 12c capabilities that Oracle staff will gush over with great enthusiasm. These capabilities include Application Continuity for reliable replay of in-flight transactions that masks outages from users; Active Data Guard Far Sync for zero data loss protection at any distance; new Oracle GoldenGate enhancements for zero downtime upgrades and migrations; and Global Data Services for automated service management and workload balancing in replicated database environments. Each of these technologies requires additional effort to implement. 
But they deliver substantial value for your most critical applications where downtime and data loss are not an option. The MAA reference architectures are inherently designed to address conflicting realities. On one hand, not every application has the same objectives for availability and data protection – the Not One Size Fits All title of this blog post. On the other hand, standard infrastructure is an operational requirement and a business necessity in order to reduce complexity and cost. MAA reference architectures address both realities by providing a standard infrastructure optimized for Oracle Database that enables you to dial-in the level of HA appropriate for different service level requirements. This makes it simple to move a database from one HA tier to the next should business requirements change, or from one hardware platform to another – whether it’s your favorite non-Oracle vendor or an Oracle Engineered System. Please stay tuned for additional blog posts in this series that dive into the details of each MAA reference architecture. Meanwhile, more information on Oracle HA solutions and the Maximum Availability Architecture can be found at: Oracle Maximum Availability Architecture - Webcast Maximize Availability with Oracle Database 12c - Technical White Paper

    Read the article

  • Ubuntu 12.04 host – Virtualbox 4.1.12 Guest=Windows 7 – Network will not connect

    - by user287529
    Ubuntu 12.04 host – Virtualbox 4.1.12 Guest=Windows 7 – Network will not connect. I'm using Ubuntu 12.04 on an Acer Aspire 5742-7645 laptop with 4GB memory, Intel Core i3 processor, Intel HD Graphics, DVD drive, 802.1 b/g/n, and 500 GB HD. I connect to my router via a wireless connection. I have installed Virutalbox 4.1.12 from the Ubuntu Software Center and installed Guest additions 4.1.12 in the Windows 7 guest session. I have Windows XP and Windows 7 installed as guests in Virtual box The network settings are different for XP and 7 – see below. Network Settings XP guest = Adapter 1: PCnet-FAST III (NAT) - Network works perfectly and has worked well for several years. Network Settings Win 7 = Adapter 1: Intel PRO/1000 MT Desktop (Bridged adapter, eth1) Promiscuous Mode = allow all Cable connected = checked When I originally installed Windows 7, I tried NAT and the guest network would not connect. Once I changed the setting to the above (Bridged) the Network worked perfectly. However, what I believe is after updates (not sure if it was an Ubuntu or Windows update) the guest network stopped working and I can not get it to connect. Interfaces file content auto lo iface lo inet loopback Ifconfig yields lou@lou-Aspire-5742:~$ ifconfig eth0 Link encap:Ethernet HWaddr 1c:75:08:09:f6:5c UP BROADCAST MULTICAST MTU:1500 Metric:1 RX packets:0 errors:0 dropped:0 overruns:0 frame:0 TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:0 (0.0 B) TX bytes:0 (0.0 B) Interrupt:16 eth1 Link encap:Ethernet HWaddr 4c:0f:6e:7c:9f:01 inet addr:192.168.1.104 Bcast:192.168.1.255 Mask:255.255.255.0 inet6 addr: fe80::4e0f:6eff:fe7c:9f01/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:18095 errors:2 dropped:0 overruns:0 frame:24344 TX packets:9281 errors:47 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:5301926 (5.3 MB) TX bytes:1441885 (1.4 MB) Interrupt:17 lo Link encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 inet6 addr: ::1/128 Scope:Host UP LOOPBACK RUNNING MTU:16436 Metric:1 RX packets:3208 errors:0 dropped:0 overruns:0 frame:0 TX packets:3208 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:294088 (294.0 KB) TX bytes:294088 (294.0 KB) Ipconfig yields the following: Windows IP Configuration Ethernet adapter Local Area Connection: Connection-specific DNS Suffix . : Link-local IPv6 Address . . . . . : fe80::38ba:dbca:a21d:c3d1%13 Autoconfiguration IPv4 Address. . : 169.254.195.209 Subnet Mask . . . . . . . . . . . : 255.255.0.0 Default Gateway . . . . . . . . . : Tunnel adapter isatap.{B292E440-679D-4FC5-8E34-77D6804669C8}: Media State . . . . . . . . . . . : Media disconnected Connection-specific DNS Suffix . : Tunnel adapter Local Area Connection* 11: Media State . . . . . . . . . . . : Media disconnected Connection-specific DNS Suffix . : I'm not sure what else to do. Can someone provide the troubleshooting steps to determine what the problem is and possible solution?

    Read the article

  • Search SQL Question Between Related Two Tables

    - by mTuran
    Hi, I am writing a kind of search engine for my web application and I have a problem. I have two tables; the first is the projects table:
    PROJECTS TABLE id int(11) NO PRI NULL auto_increment employer_id int(11) NO MUL NULL project_title varchar(100) NO MUL NULL project_description text NO NULL project_budget int(11) NO NULL project_allowedtime int(11) NO NULL project_deadline datetime NO NULL total_bids int(11) NO NULL average_bid int(11) NO NULL created datetime NO MUL NULL active tinyint(1) NO MUL NULL
    PROJECTS_SKILLS TABLE project_id int(11) NO MUL NULL skill_id int(11) NO MUL NULL
    For example, I want to ask the database for results where: 1) the skills are 5 and 7; 2) results are ordered by created; 3) the project title contains the word "php"; 4) the returned rows contain the projects.* columns; 5) the projects are distinct (I don't want the same project repeated in the results). Please write a SQL query that satisfies these conditions. Thank you.

    Read the article

  • T-SQL to PL/SQL (IDENTITY)

    - by folone
    I've got a T-SQL script, that converts field to IDENTITY (in a weird way). How do I convert it to PL/SQL? (and, probably, figure out, if there is a simpler way to do this - without creating a temporary table). The T-SQL script: -- alter table ts_changes add TS_THREADID VARCHAR(100) NULL; -- Change Field TS_ID TS_NOTIFICATIONEVENTS to IDENTITY BEGIN TRANSACTION GO CREATE TABLE dbo.Tmp_TS_NOTIFICATIONEVENTS ( TS_ID int NOT NULL IDENTITY (1, 1), TS_TABLEID int NOT NULL, TS_CASEID int NULL, TS_WORKFLOWID int NULL, TS_NOTIFICATIONID int NULL, TS_PRIORITY int NULL, TS_STARTDATE int NULL, TS_TIME int NULL, TS_WAITSTATUS int NULL, TS_RECIPIENTID int NULL, TS_LASTCHANGEDATE int NULL, TS_ELAPSEDCYCLES int NULL ) ON [PRIMARY] SET IDENTITY_INSERT dbo.Tmp_TS_NOTIFICATIONEVENTS ON GO IF EXISTS(SELECT * FROM dbo.TS_NOTIFICATIONEVENTS) EXEC('INSERT INTO dbo.Tmp_TS_NOTIFICATIONEVENTS (TS_ID, TS_TABLEID, TS_CASEID, TS_WORKFLOWID, TS_NOTIFICATIONID, TS_PRIORITY, TS_STARTDATE, TS_TIME, TS_WAITSTATUS, TS_RECIPIENTID, TS_LASTCHANGEDATE, TS_ELAPSEDCYCLES) SELECT TS_ID, TS_TABLEID, TS_CASEID, TS_WORKFLOWID, TS_NOTIFICATIONID, TS_PRIORITY, TS_STARTDATE, TS_TIME, TS_WAITSTATUS, TS_RECIPIENTID, TS_LASTCHANGEDATE, TS_ELAPSEDCYCLES FROM dbo.TS_NOTIFICATIONEVENTS WITH (HOLDLOCK TABLOCKX)') GO SET IDENTITY_INSERT dbo.Tmp_TS_NOTIFICATIONEVENTS OFF GO DROP TABLE dbo.TS_NOTIFICATIONEVENTS GO EXECUTE sp_rename N'dbo.Tmp_TS_NOTIFICATIONEVENTS', N'TS_NOTIFICATIONEVENTS', 'OBJECT' GO ALTER TABLE dbo.TS_NOTIFICATIONEVENTS ADD CONSTRAINT aaaaaTS_NOTIFICATIONEVENTS_PK PRIMARY KEY NONCLUSTERED ( TS_ID ) WITH( STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY] GO COMMIT

    Read the article

  • GameStateManagement and inputs not being recognized

    - by Dave Voyles
    EDIT: I've removed a bit of code from the input class to make this more readable, and updated my StartScreen class, which is now at the bottom. I have the same issues though, but they are explained in my comments on the bottom of this page. It won't let me paste my additional code here (the format comes out crazy), so I've linked to pastebin with the code pastebin I've been trying to implement the MS provided GameStateManagement sample with my game, but it has proven a bit difficult. Really, I'm using Oneksoft's Starter Kit, which uses the MS provided sample, so they are identical, except for my splash screen. I'm able to get the splash screen to launch, where it informs the player to press A to advance the screen, but this doesn't seem to accept any of my inputs. I’ve also added Console.Writeline(“Pressing A”) under the IsMenuPressed method in Input.cs to verify that it is getting called, but for some reason it is constantly spamming my log, rather than just appearing each time I press it. Not sure why this is happening. I have a bit too much code to post it all here, so I’ve attached a link to my .rar with my classes, but I’ll also leave a bit here which I thinkmay be applicable. https://www.dropbox.com/sh/6ek4uru2jc2ch0k/JTeBWN_3PQ What do you guys think the issue is? namespace Pong { public class Input { public const int MaxInputs = 4; public readonly KeyboardState[] CurrentKeyboardState; public readonly GamePadState[] CurrentGamePadState; public KeyboardState[] LastKeyboardState; public GamePadState[] LastGamePadState; public readonly bool[] GamePadWasConnected; public Input() { // Get input state CurrentKeyboardState = new KeyboardState[MaxInputs]; CurrentGamePadState = new GamePadState[MaxInputs]; // Preserving last states to check for isKeyUp events LastKeyboardState = CurrentKeyboardState; LastGamePadState = CurrentGamePadState; } /// <summary> /// Checks for a "menu select" input action. /// The controllingPlayer parameter specifies which player to read input for. /// If this is null, it will accept input from any player. When the action /// is detected, the output playerIndex reports which player pressed it. /// </summary> public bool IsMenuSelect(PlayerIndex? controllingPlayer, out PlayerIndex playerIndex) { Console.WriteLine("Pressing A"); return IsNewKeyPress(Keys.Space, controllingPlayer, out playerIndex) || IsNewKeyPress(Keys.Enter, controllingPlayer, out playerIndex) || IsNewButtonPress(Buttons.A, controllingPlayer, out playerIndex) || IsNewButtonPress(Buttons.Start, controllingPlayer, out playerIndex); } /// <summary> /// Checks for a "menu cancel" input action. /// The controllingPlayer parameter specifies which player to read input for. /// If this is null, it will accept input from any player. When the action /// is detected, the output playerIndex reports which player pressed it. /// </summary> public bool IsMenuCancel(PlayerIndex? controllingPlayer, out PlayerIndex playerIndex) { return IsNewKeyPress(Keys.Escape, controllingPlayer, out playerIndex) || IsNewButtonPress(Buttons.B, controllingPlayer, out playerIndex) || IsNewButtonPress(Buttons.Back, controllingPlayer, out playerIndex); }
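    Two things stand out, so here is a hedged sketch rather than a confirmed fix. First, the Console.WriteLine("Pressing A") sits at the top of IsMenuSelect, which the menu screen calls every frame whether or not a button is down, so the constant logging is expected regardless of input. Second, the constructor aliases LastKeyboardState/LastGamePadState to the Current arrays, so "last" and "current" always hold identical values and a new press can never be detected. The Last arrays need their own storage plus a per-frame update like the one in the stock GameStateManagement sample:

```csharp
// In the constructor, give the Last* fields their own arrays instead of aliasing:
//   LastKeyboardState = new KeyboardState[MaxInputs];
//   LastGamePadState  = new GamePadState[MaxInputs];
//
// Then call this once per frame (ScreenManager.Update does so in the sample) before
// any screen reads input. KeyboardState/GamePadState are structs, so element
// assignment copies the previous values rather than a reference.
public void Update()
{
    for (int i = 0; i < MaxInputs; i++)
    {
        LastKeyboardState[i] = CurrentKeyboardState[i];
        LastGamePadState[i] = CurrentGamePadState[i];

        CurrentKeyboardState[i] = Keyboard.GetState((PlayerIndex)i);
        CurrentGamePadState[i] = GamePad.GetState((PlayerIndex)i);

        if (CurrentGamePadState[i].IsConnected)
        {
            GamePadWasConnected[i] = true;
        }
    }
}
```

    Even if an equivalent Update already exists in the trimmed-out code, the aliased arrays alone would explain why pressing A never registers: every poll overwrites both "copies" at once, so IsNewKeyPress/IsNewButtonPress never sees a transition.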

    Read the article

  • Entity Framework This property descriptor does not support the SetValue

    - by Gayan
    Hello guys, below are my entities, which I have created using Entity Framework.
    retailer: id, name, childs (navigation)
    Generated database schema: [Id] [int] IDENTITY(1,1) NOT NULL, [Name] nvarchar NOT NULL
    childern: id, name, RETAILER (navigation)
    Generated database schema: [Id] [int] IDENTITY(1,1) NOT NULL, [name] nvarchar NOT NULL, [Retailer_Id] [int] NOT NULL
    As you can see, in the above model the relationship is that one retailer can have 0 or 1 child. My problem is that when I create a new child and set its retailer navigation property to a retailer entity, it throws the following exception. How do I solve it? Error while setting property 'retailer': 'This property descriptor does not support the SetValue method.'
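    This error usually points at how the relationship is being assigned rather than at the model itself. A sketch of two assignments that tend to work with a generated ObjectContext model - note that the context, entity set, and key names below (MyEntities, retailers, Childerns, Id) are guesses based on the description, not the real generated names:
    using (var context = new MyEntities())
    {
        int existingRetailerId = 1;                      // id of an existing retailer row (example value)
        var parent = context.retailers.First(r => r.Id == existingRetailerId);

        var child = new childern { name = "new child" };

        // Option 1: attach the child through the parent's navigation collection.
        parent.childs.Add(child);

        // Option 2 (alternative): point the relationship at the parent's key instead of
        // assigning the entity to the navigation property directly.
        // child.RETAILERReference.EntityKey =
        //     new EntityKey("MyEntities.retailers", "Id", existingRetailerId);
        // context.AddToChilderns(child);

        context.SaveChanges();
    }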

    Read the article

  • SQL question - Cursor or not?

    - by grady
    Hi, I have a query which returns 2+ rows. In those results is a column which we can call columnX for now. Let's look at some example results:
    columnX
    100
    86
    85
    70
    null
    null
    I get 6 rows, for example; some of them are null, some of them are not null. Now I want to go through those results and stop as soon as I find a row where columnX is null. How can I do that? Thanks in advance :-)
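    A cursor is not strictly needed for this: a set-based query can return only the rows that come before the first null. A T-SQL-flavoured sketch, assuming the table and its ordering column are called MyTable and SortOrder (both placeholders - the real query would use whatever column defines the "go through" order):
    SELECT *
    FROM MyTable AS t
    WHERE t.SortOrder < ISNULL((SELECT MIN(x.SortOrder)
                                FROM MyTable AS x
                                WHERE x.columnX IS NULL), 2147483647);
    -- The ISNULL fallback keeps every row when no null exists at all.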

    Read the article

  • android client not working [migrated]

    - by Syeda Zunairah
    i have a java client and c# server the server code is static Socket listeningSocket; static Socket socket; static Thread thrReadRequest; static int iPort = 4444; static int iConnectionQueue = 100; static void Main(string[] args) { Console.WriteLine(IPAddress.Parse(getLocalIPAddress()).ToString()); try { listeningSocket = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp); //listeningSocket.Bind(new IPEndPoint(0, iPort)); listeningSocket.Bind(new IPEndPoint(IPAddress.Parse(getLocalIPAddress()), iPort)); listeningSocket.Listen(iConnectionQueue); thrReadRequest = new Thread(new ThreadStart(getRequest)); thrReadRequest.Start(); } catch (Exception e) { Console.WriteLine("Winsock error: " + e.ToString()); //throw; } } static private void getRequest() { int i = 0; while (true) { i++; Console.WriteLine("Outside Try i = {0}", i.ToString()); try { socket = listeningSocket.Accept(); // Receiving //byte[] rcvLenBytes = new byte[4]; //socket.Receive(rcvLenBytes); //int rcvLen = System.BitConverter.ToInt32(rcvLenBytes, 0); //byte[] rcvBytes = new byte[rcvLen]; //socket.Receive(rcvBytes); //String formattedBuffer = System.Text.Encoding.ASCII.GetString(rcvBytes); byte[] buffer = new byte[socket.SendBufferSize]; int iBufferLength = socket.Receive(buffer, 0, buffer.Length, 0); Console.WriteLine("Received {0}", iBufferLength); Array.Resize(ref buffer, iBufferLength); string formattedBuffer = Encoding.ASCII.GetString(buffer); Console.WriteLine("Data received by Client: {0}", formattedBuffer); if (formattedBuffer == "quit") { socket.Close(); listeningSocket.Close(); Environment.Exit(0); } Console.WriteLine("Inside Try i = {0}", i.ToString()); Thread.Sleep(500); } catch (Exception e) { //socket.Close(); Console.WriteLine("Receiving error: " + e.ToString()); Console.ReadKey(); //throw; } finally { socket.Close(); //listeningsocket.close(); } } } static private string getLocalIPAddress() { IPHostEntry host; string localIP = ""; host = Dns.GetHostEntry(Dns.GetHostName()); foreach (IPAddress ip in host.AddressList) { if (ip.AddressFamily == AddressFamily.InterNetwork) { localIP = ip.ToString(); break; } } return localIP; } } and the jave android code is private TCPClient mTcpClient; @Override protected void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.activity_main); final EditText editText = (EditText) findViewById(R.id.edit_message); Button send = (Button)findViewById(R.id.sendbutton); // connect to the server new connectTask().execute(""); send.setOnClickListener(new View.OnClickListener() { @Override public void onClick(View view) { String message = editText.getText().toString(); //sends the message to the server if (mTcpClient != null) { mTcpClient.sendMessage(message); } editText.setText(""); } }); } @Override public boolean onCreateOptionsMenu(Menu menu) { getMenuInflater().inflate(R.menu.main, menu); return true; } public class connectTask extends AsyncTask<String,String,TCPClient> { @Override protected TCPClient doInBackground(String... message) { mTcpClient = new TCPClient(new TCPClient.OnMessageReceived() { @Override public void messageReceived(String message) { publishProgress(message); } }); mTcpClient.run(); return null; } @Override protected void onProgressUpdate(String... values) { super.onProgressUpdate(values); } } } when i run the server it gives output of try i=1. can any one tell me what to do next
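    A note on what the output means: "Outside Try i = 1" only says the server is blocked in Accept() waiting for a connection, so the next thing to check is whether the Android client ever connects (and note that the C# getRequest loop closes the accepted socket in its finally block after a single Receive, so each connection delivers at most one message). The TCPClient class the Android code relies on is not shown above; a minimal sketch of what it typically looks like in this kind of example - the server IP is a placeholder and must be the PC's LAN address (not localhost), the port must match the server's 4444, and the app needs android.permission.INTERNET in its manifest:
    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.io.PrintWriter;
    import java.net.InetAddress;
    import java.net.Socket;
    import android.util.Log;

    public class TCPClient {
        public interface OnMessageReceived { void messageReceived(String message); }

        private static final String SERVER_IP = "192.168.1.2"; // placeholder: the C# server's LAN IP
        private static final int SERVER_PORT = 4444;            // must match iPort on the server
        private final OnMessageReceived listener;
        private PrintWriter out;

        public TCPClient(OnMessageReceived listener) { this.listener = listener; }

        public void sendMessage(String message) {
            if (out != null) { out.println(message); out.flush(); }
        }

        public void run() {
            Socket socket = null;
            try {
                socket = new Socket(InetAddress.getByName(SERVER_IP), SERVER_PORT);
                out = new PrintWriter(socket.getOutputStream());
                BufferedReader in = new BufferedReader(new InputStreamReader(socket.getInputStream()));
                String line;
                // Hand every line the server sends back to the activity via the callback.
                while ((line = in.readLine()) != null) {
                    if (listener != null) listener.messageReceived(line);
                }
            } catch (Exception e) {
                Log.e("TCPClient", "Connection error", e);
            } finally {
                try { if (socket != null) socket.close(); } catch (Exception ignored) {}
            }
        }
    }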

    Read the article

  • apache2 error Could not open configuration file /etc/apache2/conf.d/: No such file or directory

    - by Sundar Elumalai
    I have just upgraded my Ubuntu to 13.10 and apache2 is not working. When I try to start the apache2 server it prints the following errors:
    * Starting web server apache2
    * The apache2 configtest failed.
    Output of config test was:
    apache2: Syntax error on line 263 of /etc/apache2/apache2.conf: Could not open configuration file /etc/apache2/conf.d/: No such file or directory
    Action 'configtest' failed.
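    Ubuntu 13.10 ships Apache 2.4, which replaced the old conf.d directory with conf-enabled, so an apache2.conf carried over from the previous release can still point at a directory that no longer exists. A sketch of the two usual ways out (line 263 is taken from the error above; check your own file):
    # Option 1: recreate the directory the old Include line still expects
    sudo mkdir /etc/apache2/conf.d

    # Option 2: edit /etc/apache2/apache2.conf and replace the old include, e.g.
    #   Include conf.d/
    # with the Apache 2.4 form:
    #   IncludeOptional conf-enabled/*.conf
    sudo apache2ctl configtest
    sudo service apache2 restart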

    Read the article

  • NoSQL is not about object databases

    - by Bertrand Le Roy
    NoSQL as a movement is an interesting beast. I kinda like that it’s negatively defined (I happen to belong myself to at least one other such a-community). It’s not in its roots about proposing one specific new silver bullet to kill an old problem. it’s about challenging the consensus. Actually, blindly and systematically replacing relational databases with object databases would just replace one set of issues with another. No, the point is to recognize that relational databases are not a universal answer -although they have been used as one for so long- and recognize instead that there’s a whole spectrum of data storage solutions out there. Why is it so hard to recognize, by the way? You are already using some of those other data storage solutions every day. Let me cite a few: The file system Active Directory XML / JSON documents The Web e-mail Logs Excel files EXIF blobs in your photos Relational databases And yes, object databases It’s just a fact of modern life. Notice by the way that most of the data that you use every day is unstructured and thus mostly unsuitable for relational storage. It really is more a matter of recognizing it: you are already doing NoSQL. So what happens when for any reason you need to simultaneously query two or more of these heterogeneous data stores? Well, you build an index of sorts combining them, and that’s what you query instead. Of course, there’s not much distance to travel from that to realizing that querying is better done when completely separated from storage. So why am I writing about this today? Well, that’s something I’ve been giving lots of thought, on and off, over the last ten years. When I built my first CMS all that time ago, one of the main problems my customers were facing was to manage and make sense of the mountain of unstructured data that was constituting most of their business. The central entity of that system was the file system because we were dealing with lots of Word documents, PDFs, OCR’d articles, photos and static web pages. We could have stored all that in SQL Server. It would have worked. Ew. I’m so glad we didn’t. Today, I’m working on Orchard (another CMS ;). It’s a pretty young project but already one of the questions we get the most is how to integrate existing data. One of the ideas I’ll be trying hard to sell to the rest of the team in the next few months is to completely split the querying from the storage. Not only does this provide great opportunities for performance optimizations, it gives you homogeneous access to heterogeneous and existing data sources. For free.

    Read the article

  • apt-get error, cannot install many packages?

    - by tech
    How do I fix this? It shows an error, and I don't know how to fix it. I want to install crossover.
    Reading package lists... Done
    Building dependency tree
    Reading state information... Done
    Correcting dependencies... failed.
    The following packages have unmet dependencies:
    crossover:i386 : Depends: libc6:i386 (>= 2.3) but it is not installed
    Depends: libice6:i386 but it is not installed or xlibs:i386 but it is not installable
    Depends: libsm6:i386 but it is not installed or xlibs:i386 but it is not installable
    Depends: libx11-6:i386 but it is not installed or xlibs:i386 but it is not installable
    Depends: libxext6:i386 but it is not installed or xlibs:i386 but it is not installable
    Depends: libfreetype6:i386 but it is not installed
    Depends: libz1:i386
    Depends: perl5-base:i386
    Depends: perl-modules:i386 but it is not installable
    Depends: python:i386 (>= 2.4) but it is not installed
    Depends: python-gtk2:i386 but it is not installed
    Depends: python-glade2:i386 but it is not installed
    Depends: desktop-file-utils:i386 but it is not installed
    Depends: libasound2:i386 but it is not installed
    Depends: libgl1:i386
    Depends: libxrandr2:i386 but it is not installed
    E: Error, pkgProblemResolver::Resolve generated breaks, this may be caused by held packages.
    E: Unable to correct dependencies
    EDIT: I have another recent error.
    You might want to run 'apt-get -f install' to correct these.
    The following packages have unmet dependencies:
    crossover:i386 : Depends: libc6:i386 (>= 2.3) but it is not installed
    Depends: libice6:i386 but it is not installed or xlibs:i386 but it is not installable
    Depends: libsm6:i386 but it is not installed or xlibs:i386 but it is not installable
    Depends: libx11-6:i386 but it is not installed or xlibs:i386 but it is not installable
    Depends: libxext6:i386 but it is not installed or xlibs:i386 but it is not installable
    Depends: libfreetype6:i386 but it is not installed
    Depends: libz1:i386
    Depends: perl5-base:i386
    Depends: perl-modules:i386 but it is not installable
    Depends: python:i386 (>= 2.4) but it is not installed
    Depends: python-gtk2:i386 but it is not installed
    Depends: python-glade2:i386 but it is not installed
    Depends: desktop-file-utils:i386 but it is not installed
    Depends: libasound2:i386 but it is not installed
    Depends: libgl1:i386
    Depends: libxrandr2:i386 but it is not installed
    E: Unmet dependencies. Try using -f.
    Running "apt-get -f install" gives me the same error every time.
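    The pattern here - every dependency is an i386 package reported as "not installed" on what is presumably a 64-bit system - usually means the 32-bit architecture has not been enabled for apt. A sketch of the usual sequence (assumes a multiarch-capable release and that CrossOver was downloaded as a .deb):
    sudo dpkg --add-architecture i386
    sudo apt-get update
    sudo apt-get -f install          # let apt pull in the missing :i386 dependencies
    sudo dpkg -i crossover_*.deb     # then re-run the CrossOver package install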

    Read the article

  • nfs-kernel-server installation : file does not exist

    - by Stuti Rastogi
    I am extremely new to Ubuntu and need to work on the EdX platform. I need to install the NFS client on Ubuntu 12.04 for the same. I used the following:
    stuti@stuti:/$ sudo apt-get install nfs-kernel-server
    However this gives me an error as follows:
    Reading package lists... Done
    Building dependency tree
    Reading state information... Done
    The following extra packages will be installed:
    nfs-common
    The following NEW packages will be installed:
    nfs-common nfs-kernel-server
    0 upgraded, 2 newly installed, 0 to remove and 0 not upgraded.
    Need to get 0 B/355 kB of archives.
    After this operation, 1,222 kB of additional disk space will be used.
    Do you want to continue [Y/n]? y
    Selecting previously unselected package nfs-common.
    (Reading database ... 200367 files and directories currently installed.)
    Unpacking nfs-common (from .../nfs-common_1%3a1.2.5-3ubuntu3.1_i386.deb) ...
    Selecting previously unselected package nfs-kernel-server.
    Unpacking nfs-kernel-server (from .../nfs-kernel-server_1%3a1.2.5-3ubuntu3.1_i386.deb) ...
    Processing triggers for ureadahead ...
    Processing triggers for man-db ...
    Setting up nfs-common (1:1.2.5-3ubuntu3.1) ...
    statd start/running, process 4574
    gssd stop/pre-start, process 4603
    idmapd start/running, process 4643
    Setting up nfs-kernel-server (1:1.2.5-3ubuntu3.1) ...
    update-rc.d: /etc/init.d/nfs-kernel-server: file does not exist
    dpkg: error processing nfs-kernel-server (--configure):
    subprocess installed post-installation script returned error exit status 1
    No apport report written because MaxReports is reached already
    Errors were encountered while processing:
    nfs-kernel-server
    E: Sub-process /usr/bin/dpkg returned an error code (1)
    I have tried:
    sudo apt-get autoremove nfs-kernel-server
    sudo apt-get autoremove nfs-common
    After these, I tried to install again but I keep getting the same error. apt-get update or upgrade also do not help and give the same error. I am clueless as to where I can find this missing file, as stated in the output. I tried to Google this problem but none of the solutions I came across have helped, or I have not been able to understand some of them. Any help would really be appreciated. Thanks in advance for your time and attention.
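    The key line is "update-rc.d: /etc/init.d/nfs-kernel-server: file does not exist": the package unpacked, but its init script is missing, so the post-installation step fails. Purging the half-configured packages and reinstalling them usually puts the script back; a sketch (package names as above, nothing else assumed):
    sudo apt-get purge nfs-kernel-server nfs-common
    sudo apt-get update
    sudo apt-get install nfs-kernel-server nfs-common
    # If /etc/init.d/nfs-kernel-server is still missing afterwards, it can be
    # extracted from the .deb by hand:
    # apt-get download nfs-kernel-server
    # dpkg-deb -x nfs-kernel-server_*.deb /tmp/nfs
    # sudo cp /tmp/nfs/etc/init.d/nfs-kernel-server /etc/init.d/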

    Read the article

  • VIM does not detect syntax of .ssh/config

    - by Erik
    On a plain Ubuntu installation (12.04 in my case), when I have no ~/.vimrc, VIM does not detect the syntax of .ssh/config. Syntax highlighting works, but it does not set the correct filetype.
    vi ~/.ssh/config
    :set syn?
    > syntax=conf
    When I do :set syn=sshconfig the syntax highlighting is as it should be. Why isn't the filetype automatically identified? And how can it be set automatically?
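    Independent of why the stock rules fall back to conf here, the filetype can be forced from a small ~/.vimrc; a minimal sketch (the autocmd pattern assumes the file really lives at ~/.ssh/config):
    " Map the ssh client config to the sshconfig filetype explicitly.
    filetype on
    autocmd BufNewFile,BufRead ~/.ssh/config setfiletype sshconfig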

    Read the article

  • Atheros AR9285 / Lenovo G560 wireless not working after installing 13.04

    - by teyi
    I had Ubuntu 12.04 initially installed on my laptop. I upgraded to 12.10, then 13.04. Everything worked fine, including wireless. After adding a new memory module (I only had 2 GB and one memory slot free) my wireless stopped working. I backed up all my data and reinstalled Ubuntu 13.04. Everything works fine except wireless. I bought this laptop in 2010 from Japan. It has an Intel Core i5 CPU M 450 @ 2.40 GHz * 4, 3.7 GB RAM, OS type 64-bit.
    The output of iwconfig:
    eth0 no wireless extensions.
    lo no wireless extensions.
    wlan0 IEEE 802.11bgn ESSID:off/any
    Mode:Managed Access Point: Not-Associated Tx-Power=15 dBm
    Retry long limit:7 RTS thr:off Fragment thr:off
    Power Management:off
    The output of rfkill list all:
    0: ideapad_wlan: Wireless LAN
    Soft blocked: no
    Hard blocked: no
    1: phy0: Wireless LAN
    Soft blocked: no
    Hard blocked: no
    The output of lshw -C network:
    *-network
    description: Wireless interface
    product: AR9285 Wireless Network Adapter (PCI-Express)
    vendor: Atheros Communications Inc.
    physical id: 0
    bus info: pci@0000:05:00.0
    logical name: wlan0
    version: 01
    serial: 78:e4:00:7d:fe:fa
    width: 64 bits
    clock: 33MHz
    capabilities: pm msi pciexpress bus_master cap_list ethernet physical wireless
    configuration: broadcast=yes driver=ath9k driverversion=3.8.0-19-generic firmware=N/A latency=0 link=no multicast=yes wireless=IEEE 802.11bgn
    resources: irq:17 memory:d6400000-d640ffff
    *-network
    description: Ethernet interface
    product: RTL8101E/RTL8102E PCI Express Fast Ethernet controller
    vendor: Realtek Semiconductor Co., Ltd.
    physical id: 0
    bus info: pci@0000:06:00.0
    logical name: eth0
    version: 02
    serial: 88:ae:1d:2b:36:ac
    size: 100Mbit/s
    capacity: 100Mbit/s
    width: 64 bits
    clock: 33MHz
    capabilities: pm msi pciexpress msix vpd bus_master cap_list rom ethernet physical tp mii 10bt 10bt-fd 100bt 100bt-fd autonegotiation
    configuration: autonegotiation=on broadcast=yes driver=r8169 driverversion=2.3LK-NAPI duplex=full ip=192.168.2.2 latency=0 link=yes multicast=yes port=MII speed=100Mbit/s
    resources: irq:41 ioport:2000(size=256) memory:d2410000-d2410fff memory:d2400000-d240ffff memory:d2420000-d243ffff
    The wi-fi network appears as disconnected (it's greyed out). Strangely enough, I see one wifi network (not mine), but not mine or the rest. That network doesn't require a password. I click on it, try to connect, and I get an error message: failed to connect to xxxxx ... (32) The access point /org/freedesktop/NetworkManager/AccessPoint/0 was not in the scan list. Someone help please.
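    rfkill shows nothing blocked and lshw shows the ath9k driver bound to the card, so the adapter is visible but not scanning or associating properly. A few low-risk things commonly tried for the AR9285 in this situation (the module option is an assumption to test, not a guaranteed fix):
    # Reload ath9k with hardware crypto disabled, a common AR9285 workaround
    sudo modprobe -r ath9k
    sudo modprobe ath9k nohwcrypt=1
    sudo service network-manager restart
    # Check whether the card can see networks at all, independent of NetworkManager
    sudo iwlist wlan0 scan | grep ESSID
    # To make the option permanent if it helps:
    # echo "options ath9k nohwcrypt=1" | sudo tee /etc/modprobe.d/ath9k.conf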

    Read the article

  • Should I register a domain name that does not have a popular top-level domain?

    - by sreginogemoh
    Let's say, for example, you want to register the domain name assembly.com or assembly.net and find out that they are already registered (not available). Would you go with the domain name assemb.ly in such a case? With .ly the domain name still represents the word assembly, but I think the .ly domain is not so friendly for search engines. What do you think? Do you see any advantage of assemb.ly over assembly.com or assembly.net, except that it is shorter?

    Read the article

  • Network is not working anymore - Ubuntu 12.04

    - by Jonathan
    Network is not working anymore - Ubuntu 12.04 Hello, I have a problem with my network connection. I have been using the same laptop with Ubuntu and the same connection for more than a year, and suddenly yesterday the connection stopped working (both wireless and wired). I've tested with another computer and the connection is fine (both wireless and wired). I've been reading similar posts but I haven't found a solution yet. I tried a few commands that I'm posting here (my system is in spanish, so I have traslated it to english, maybe the terms are not accurate): grep -i eth /var/log/syslog | tail Jun 3 18:45:40 vanesa-pc NetworkManager[3584]: (eth0): now managed Jun 3 18:45:40 vanesa-pc NetworkManager[3584]: (eth0): device state change: unmanaged - unavailable (reason 'managed') [10 20 2] Jun 3 18:45:40 vanesa-pc NetworkManager[3584]: (eth0): bringing up device. Jun 3 18:45:40 vanesa-pc NetworkManager[3584]: (eth0): preparing device. Jun 3 18:45:40 vanesa-pc kernel: [ 7351.845743] forcedeth 0000:00:0a.0: irq 41 for MSI/MSI-X Jun 3 18:45:40 vanesa-pc kernel: [ 7351.845984] forcedeth 0000:00:0a.0: eth0: no link during initialization Jun 3 18:45:40 vanesa-pc kernel: [ 7351.847103] ADDRCONF(NETDEV_UP): eth0: link is not ready Jun 3 18:45:40 vanesa-pc NetworkManager[3584]: (eth0): deactivating device (reason 'managed') [2] Jun 3 18:45:40 vanesa-pc NetworkManager[3584]: Added default wired connection 'Wired connection 1' for /sys/devices/pci0000:00/0000:00:0a.0/net/eth0 Jun 3 18:45:40 vanesa-pc kernel: [ 7351.848817] ADDRCONF(NETDEV_UP): eth0: link is not ready ifconfig -a eth0 Link encap:Ethernet addressHW 00:1b:24:fc:a8:d1 ACTIVE BROADCAST MULTICAST MTU:1500 Metric:1 Packages RX:0 errors:16 lost:0 overruns:0 frame:16 Packages TX:123 errors:0 lost:0 overruns:0 carrier:0 colissions:0 length.tailTX:1000 Bytes RX:0 (0.0 B) TX bytes:26335 (26.3 KB) Interruption:41 Base address: 0x2000 lo Link encap:Local loop Inet address:127.0.0.1 Mask:255.0.0.0 Inet6 address: ::1/128 Scope:Host ACTIVE LOOP WORKING MTU:16436 Metrics:1 Packages RX:1550 errors:0 lost:0 overruns:0 frame:0 Packages TX:1550 errors:0 lost:0 overruns:0 carrier:0 colissions:0 long.tailTX:0 Bytes RX:125312 (125.3 KB) TX bytes:125312 (125.3 KB) iwconfig lo no wireless extensions. eth0 no wireless extensions. 
sudo lshw -C network *-network description: Ethernet interface product: MCP67 Ethernet manufacturer: NVIDIA Corporation Physical id: a bus information: pci@0000:00:0a.0 logical name: eth0 version: a2 series: 00:1b:24:fc:a8:d1 capacity: 100Mbit/s width: 32 bits clock: 66MHz capacities: pm msi ht bus_master cap_list ethernet physical mii 10bt 10bt-fd 100bt 100bt-fd autonegotiation configuration: autonegotiation=on broadcast=yes driver=forcedeth driverversion=0.64 latency=0 link=no maxlatency=20 mingnt=1 multicast=yes port=MII resources: irq:41 memoria:f6288000-f6288fff ioport:30f8(size=8) memoria:f6289c00-f6289cff memoria:f6289800-f628980f lsmod Module Size Used by usbhid 41906 0 hid 77367 1 usbhid rfcomm 38139 0 parport_pc 32114 0 ppdev 12849 0 bnep 17830 2 bluetooth 158438 10 rfcomm,bnep binfmt_misc 17292 1 joydev 17393 0 hp_wmi 13652 0 sparse_keymap 13658 1 hp_wmi nouveau 708198 3 ttm 65344 1 nouveau drm_kms_helper 45466 1 nouveau drm 197692 5 nouveau,ttm,drm_kms_helper i2c_algo_bit 13199 1 nouveau psmouse 87213 0 mxm_wmi 12859 1 nouveau serio_raw 13027 0 k8temp 12905 0 i2c_nforce2 12906 0 wmi 18744 2 hp_wmi,mxm_wmi video 19068 1 nouveau mac_hid 13077 0 lp 17455 0 parport 40930 3 parport_pc,ppdev,lp forcedeth 58096 0 Let me know if I can give you more information. Thank you very much in advance, Jonathan
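    Given that syslog reports "eth0: no link during initialization" and lshw shows link=no for the forcedeth interface, a few checks narrow down whether this is a driver, cable, or NetworkManager problem (these are generic diagnostics, not a guaranteed fix):
    # Does the kernel see a carrier on the wired port at all? (1 = cable link up)
    cat /sys/class/net/eth0/carrier
    # Reload the forcedeth driver and ask for a fresh DHCP lease
    sudo modprobe -r forcedeth && sudo modprobe forcedeth
    sudo dhclient -v eth0
    # Rule out a stale NetworkManager state
    sudo service network-manager restart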

    Read the article

  • openssl/rand.h header file not found

    - by Arun Reddy Kandoor
    I have installed libssl-dev package but that did not install the include files. How do I get the openssl include files? Appreciate your help. Checking for program g++ or c++ : /usr/bin/g++ Checking for program cpp : /usr/bin/cpp Checking for program ar : /usr/bin/ar Checking for program ranlib : /usr/bin/ranlib Checking for g++ : ok Checking for node path : ok /usr/bin/node Checking for node prefix : ok /usr Checking for header openssl/rand.h : not found /home/arun/Documents/webserver/node_modules/bcrypt/wscript:30: error: the configuration failed (see '/home/arun/Documents/webserver/node_modules/bcrypt/build/config.log') npm ERR! error installing [email protected] npm ERR! [email protected] preinstall: `node-waf clean || (exit 0); node-waf configure build` npm ERR! `sh "-c" "node-waf clean || (exit 0); node-waf configure build"` failed with 1 npm ERR! npm ERR! Failed at the [email protected] preinstall script. npm ERR! This is most likely a problem with the bcrypt package, npm ERR! not with npm itself. npm ERR! Tell the author that this fails on your system: npm ERR! node-waf clean || (exit 0); node-waf configure build npm ERR! You can get their info via: npm ERR! npm owner ls bcrypt npm ERR! There is likely additional logging output above. npm ERR! npm ERR! System Linux 3.8.0-32-generic npm ERR! command "node" "/usr/bin/npm" "install" npm ERR! cwd /home/arun/Documents/webserver npm ERR! node -v v0.6.12 npm ERR! npm -v 1.1.4 npm ERR! code ELIFECYCLE npm ERR! message [email protected] preinstall: `node-waf clean || (exit 0); node-waf configure build` npm ERR! message `sh "-c" "node-waf clean || (exit 0); node-waf configure build"` failed with 1 npm ERR! errno {} npm ERR! npm ERR! Additional logging details can be found in: npm ERR! /home/arun/Documents/webserver/npm-debug.log npm not ok
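    The waf configure step is only looking for the openssl/rand.h header, so the first thing to confirm is that libssl-dev really finished installing and put the headers in place; if it did and node-waf still fails, a newer bcrypt release that builds with node-gyp instead of the deprecated node-waf (together with a newer Node.js) is often the easier route. A sketch of the checks:
    sudo apt-get update
    sudo apt-get install libssl-dev
    ls /usr/include/openssl/rand.h      # should exist after the install
    # If the header is present but the build still fails:
    sudo npm install -g node-gyp
    npm install bcrypt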

    Read the article

< Previous Page | 78 79 80 81 82 83 84 85 86 87 88 89  | Next Page >