Search Results

Search found 17593 results on 704 pages for 'wmi query'.


  • Synaptics drivers are not loading on Kubuntu 13.10 on Dell Vostro 2420

    - by Alok Singh Mahor
    I freshly installed Kubuntu 13.10 (32-bit) on a newly purchased Dell Vostro 2420. Everything works fine except scrolling and the multitouch features of the touchpad. I can move the cursor with the touchpad and tap (single click and double click), but scrolling does not work. I tried to find a solution by searching Google but could not find a proper way to load the Synaptics drivers. Some details:
    Laptop: Dell Vostro 2420
    Kernel version and distribution: 3.11.0-12-generic, Kubuntu 13.10
    Output of xinput list:
        Virtual core pointer                id=2   [master pointer (3)]
            Virtual core XTEST pointer      id=4   [slave pointer (2)]
            PS/2 Generic Mouse              id=12  [slave pointer (2)]
        Virtual core keyboard               id=3   [master keyboard (2)]
            Virtual core XTEST keyboard     id=5   [slave keyboard (3)]
            Power Button                    id=6   [slave keyboard (3)]
            Video Bus                       id=7   [slave keyboard (3)]
            Power Button                    id=8   [slave keyboard (3)]
            Sleep Button                    id=9   [slave keyboard (3)]
            Laptop_Integrated_Webcam_HD     id=10  [slave keyboard (3)]
            AT Translated Set 2 keyboard    id=11  [slave keyboard (3)]
            Dell WMI hotkeys                id=13  [slave keyboard (3)]
    Output of synclient -l: "Couldn't find synaptics properties. No synaptics driver loaded?"
    Output of lshw: http://paste.ubuntu.com/6645687/
    The X server log and dmesg have no trace of synaptics. How can I troubleshoot this problem?
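    A minimal diagnostic sketch along the lines commonly suggested for this situation is below; package names and log paths assume a stock Kubuntu 13.10 install:

        # Confirm the X synaptics driver package is installed
        dpkg -l xserver-xorg-input-synaptics

        # Look for the touchpad in the X server log and the kernel log
        grep -i -e synaptics -e alps /var/log/Xorg.0.log
        dmesg | grep -i -e psmouse -e alps -e synaptics

        # Reload the PS/2 mouse driver, which also drives ALPS/Synaptics pads
        sudo modprobe -r psmouse && sudo modprobe psmouse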

    Read the article

  • How do I disable touchpad tap to click?

    - by AWE
    You've heard this a million times, but "tap to click" is a pain in the behind and I want to disable it. There is no touchpad section in gpointing-device-settings, nor under Mouse and Touchpad in System Settings. I've tried some commands in the terminal but it's all crap, and dconf-editor doesn't react. How about solving this once and for all? xinput list:
        Virtual core pointer                id=2   [master pointer (3)]
            Virtual core XTEST pointer      id=4   [slave pointer (2)]
            PS/2 Generic Mouse              id=13  [slave pointer (2)]
        Virtual core keyboard               id=3   [master keyboard (2)]
            Virtual core XTEST keyboard     id=5   [slave keyboard (3)]
            Power Button                    id=6   [slave keyboard (3)]
            Video Bus                       id=7   [slave keyboard (3)]
            Video Bus                       id=8   [slave keyboard (3)]
            Power Button                    id=9   [slave keyboard (3)]
            Sleep Button                    id=10  [slave keyboard (3)]
            Laptop_Integrated_Webcam_HD     id=11  [slave keyboard (3)]
            AT Translated Set 2 keyboard    id=12  [slave keyboard (3)]
            Dell WMI hotkeys                id=14  [slave keyboard (3)]
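    A minimal sketch of the commands usually suggested for this is below; they only take effect if the synaptics driver is actually bound to the touchpad, which the xinput output above (showing only a PS/2 Generic Mouse) suggests is not the case here:

        # Disable one-, two- and three-finger tap-to-click (synaptics driver only)
        synclient TapButton1=0 TapButton2=0 TapButton3=0

        # To make the change persistent, put the same command in a script run at
        # login, e.g. via ~/.config/autostart or /etc/X11/Xsession.d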

    Read the article

  • data is not inserting in my db table [closed]

    - by Sarojit Chakraborty
    Please see my below(SubjectDetailsDao.java) code of addZoneToDb method. My debugger is nicely running upto ** session.getTransaction().commit();** code. but after that debugger stops,I do not know why it stops after that line? .And because of this i am unable to insert my data into my database table. I don't know what to do.Why it is not inserting my data into my database table? Plz help me for this. H Now i am getting this Error: Struts Problem Report Struts has detected an unhandled exception: Messages: org.hibernate.event.PreInsertEvent.getSource()Lorg/hibernate/event/EventSource; File: org/hibernate/validator/event/ValidateEventListener.java Line number: 172 Stacktraces java.lang.reflect.InvocationTargetException sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) java.lang.reflect.Method.invoke(Method.java:601) com.opensymphony.xwork2.DefaultActionInvocation.invokeAction(DefaultActionInvocation.java:441) com.opensymphony.xwork2.DefaultActionInvocation.invokeActionOnly(DefaultActionInvocation.java:280) com.opensymphony.xwork2.DefaultActionInvocation.invoke(DefaultActionInvocation.java:243) com.opensymphony.xwork2.interceptor.DefaultWorkflowInterceptor.doIntercept(DefaultWorkflowInterceptor.java:165) com.opensymphony.xwork2.interceptor.MethodFilterInterceptor.intercept(MethodFilterInterceptor.java:87) com.opensymphony.xwork2.DefaultActionInvocation.invoke(DefaultActionInvocation.java:237) com.opensymphony.xwork2.validator.ValidationInterceptor.doIntercept(ValidationInterceptor.java:252) org.apache.struts2.interceptor.validation.AnnotationValidationInterceptor.doIntercept(AnnotationValidationInterceptor.java:68) com.opensymphony.xwork2.interceptor.MethodFilterInterceptor.intercept(MethodFilterInterceptor.java:87) com.opensymphony.xwork2.DefaultActionInvocation.invoke(DefaultActionInvocation.java:237) com.opensymphony.xwork2.interceptor.ConversionErrorInterceptor.intercept(ConversionErrorInterceptor.java:122) com.opensymphony.xwork2.DefaultActionInvocation.invoke(DefaultActionInvocation.java:237) com.opensymphony.xwork2.interceptor.ParametersInterceptor.doIntercept(ParametersInterceptor.java:195) com.opensymphony.xwork2.interceptor.MethodFilterInterceptor.intercept(MethodFilterInterceptor.java:87) com.opensymphony.xwork2.DefaultActionInvocation.invoke(DefaultActionInvocation.java:237) com.opensymphony.xwork2.interceptor.ParametersInterceptor.doIntercept(ParametersInterceptor.java:195) com.opensymphony.xwork2.interceptor.MethodFilterInterceptor.intercept(MethodFilterInterceptor.java:87) com.opensymphony.xwork2.DefaultActionInvocation.invoke(DefaultActionInvocation.java:237) com.opensymphony.xwork2.interceptor.StaticParametersInterceptor.intercept(StaticParametersInterceptor.java:179) com.opensymphony.xwork2.DefaultActionInvocation.invoke(DefaultActionInvocation.java:237) org.apache.struts2.interceptor.MultiselectInterceptor.intercept(MultiselectInterceptor.java:75) com.opensymphony.xwork2.DefaultActionInvocation.invoke(DefaultActionInvocation.java:237) org.apache.struts2.interceptor.CheckboxInterceptor.intercept(CheckboxInterceptor.java:94) com.opensymphony.xwork2.DefaultActionInvocation.invoke(DefaultActionInvocation.java:237) org.apache.struts2.interceptor.FileUploadInterceptor.intercept(FileUploadInterceptor.java:235) com.opensymphony.xwork2.DefaultActionInvocation.invoke(DefaultActionInvocation.java:237) 
com.opensymphony.xwork2.interceptor.ModelDrivenInterceptor.intercept(ModelDrivenInterceptor.java:89) com.opensymphony.xwork2.DefaultActionInvocation.invoke(DefaultActionInvocation.java:237) com.opensymphony.xwork2.interceptor.ScopedModelDrivenInterceptor.intercept(ScopedModelDrivenInterceptor.java:130) com.opensymphony.xwork2.DefaultActionInvocation.invoke(DefaultActionInvocation.java:237) org.apache.struts2.interceptor.debugging.DebuggingInterceptor.intercept(DebuggingInterceptor.java:267) com.opensymphony.xwork2.DefaultActionInvocation.invoke(DefaultActionInvocation.java:237) com.opensymphony.xwork2.interceptor.ChainingInterceptor.intercept(ChainingInterceptor.java:126) com.opensymphony.xwork2.DefaultActionInvocation.invoke(DefaultActionInvocation.java:237) com.opensymphony.xwork2.interceptor.PrepareInterceptor.doIntercept(PrepareInterceptor.java:138) com.opensymphony.xwork2.interceptor.MethodFilterInterceptor.intercept(MethodFilterInterceptor.java:87) com.opensymphony.xwork2.DefaultActionInvocation.invoke(DefaultActionInvocation.java:237) com.opensymphony.xwork2.interceptor.I18nInterceptor.intercept(I18nInterceptor.java:165) com.opensymphony.xwork2.DefaultActionInvocation.invoke(DefaultActionInvocation.java:237) org.apache.struts2.interceptor.ServletConfigInterceptor.intercept(ServletConfigInterceptor.java:164) com.opensymphony.xwork2.DefaultActionInvocation.invoke(DefaultActionInvocation.java:237) com.opensymphony.xwork2.interceptor.AliasInterceptor.intercept(AliasInterceptor.java:179) com.opensymphony.xwork2.DefaultActionInvocation.invoke(DefaultActionInvocation.java:237) com.opensymphony.xwork2.interceptor.ExceptionMappingInterceptor.intercept(ExceptionMappingInterceptor.java:176) com.opensymphony.xwork2.DefaultActionInvocation.invoke(DefaultActionInvocation.java:237) org.apache.struts2.impl.StrutsActionProxy.execute(StrutsActionProxy.java:52) org.apache.struts2.dispatcher.Dispatcher.serviceAction(Dispatcher.java:488) org.apache.struts2.dispatcher.ng.ExecuteOperations.executeAction(ExecuteOperations.java:77) org.apache.struts2.dispatcher.ng.filter.StrutsPrepareAndExecuteFilter.doFilter(StrutsPrepareAndExecuteFilter.java:91) org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:243) org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:210) org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:240) org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:164) org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:498) org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:164) org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:100) org.apache.catalina.valves.AccessLogValve.invoke(AccessLogValve.java:562) org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:118) org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:394) org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:243) org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:188) org.apache.tomcat.util.net.JIoEndpoint$SocketProcessor.run(JIoEndpoint.java:302) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603) java.lang.Thread.run(Thread.java:722) java.lang.NoSuchMethodError: 
org.hibernate.event.PreInsertEvent.getSource()Lorg/hibernate/event/EventSource; org.hibernate.validator.event.ValidateEventListener.onPreInsert(ValidateEventListener.java:172) org.hibernate.action.EntityInsertAction.preInsert(EntityInsertAction.java:156) org.hibernate.action.EntityInsertAction.execute(EntityInsertAction.java:49) org.hibernate.engine.ActionQueue.execute(ActionQueue.java:250) org.hibernate.engine.ActionQueue.executeActions(ActionQueue.java:234) org.hibernate.engine.ActionQueue.executeActions(ActionQueue.java:141) org.hibernate.event.def.AbstractFlushingEventListener.performExecutions(AbstractFlushingEventListener.java:298) org.hibernate.event.def.DefaultFlushEventListener.onFlush(DefaultFlushEventListener.java:27) org.hibernate.impl.SessionImpl.flush(SessionImpl.java:1000) org.hibernate.impl.SessionImpl.managedFlush(SessionImpl.java:338) org.hibernate.transaction.JDBCTransaction.commit(JDBCTransaction.java:106) v.esoft.dao.SubjectdetailsDAO.SubjectdetailsDAO.addZoneToDb(SubjectdetailsDAO.java:185) v.esoft.actions.LoginAction.datatobeinsert(LoginAction.java:53) sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) java.lang.reflect.Method.invoke(Method.java:601) com.opensymphony.xwork2.DefaultActionInvocation.invokeAction(DefaultActionInvocation.java:441) com.opensymphony.xwork2.DefaultActionInvocation.invokeActionOnly(DefaultActionInvocation.java:280) com.opensymphony.xwork2.DefaultActionInvocation.invoke(DefaultActionInvocation.java:243) com.opensymphony.xwork2.interceptor.DefaultWorkflowInterceptor.doIntercept(DefaultWorkflowInterceptor.java:165) com.opensymphony.xwork2.interceptor.MethodFilterInterceptor.intercept(MethodFilterInterceptor.java:87) com.opensymphony.xwork2.DefaultActionInvocation.invoke(DefaultActionInvocation.java:237) com.opensymphony.xwork2.validator.ValidationInterceptor.doIntercept(ValidationInterceptor.java:252) org.apache.struts2.interceptor.validation.AnnotationValidationInterceptor.doIntercept(AnnotationValidationInterceptor.java:68) ............................... ............................... 
SubjectDetailsDao.java(I have problem in addZoneToDb) package v.esoft.dao.SubjectdetailsDAO; import java.text.SimpleDateFormat; import java.util.ArrayList; import java.util.Iterator; import java.util.List; import org.hibernate.HibernateException; import org.hibernate.Query; import org.hibernate.Session; import org.hibernate.SessionFactory; import org.hibernate.Transaction; import org.hibernate.criterion.Order; import com.opensymphony.xwork2.ActionSupport; import v.esoft.connection.HibernateUtil; import v.esoft.pojos.Subjectdetails; public class SubjectdetailsDAO extends ActionSupport { private static Session session = null; private static SessionFactory sessionFactory = null; static Transaction transaction = null; private String currentDate; SimpleDateFormat formatter1 = new SimpleDateFormat("yyyy-MM-dd"); private java.util.Date currentdate; public SubjectdetailsDAO() { sessionFactory = HibernateUtil.getSessionFactory(); SimpleDateFormat formatter = new SimpleDateFormat("yyyy-MM-dd"); currentdate = new java.util.Date(); currentDate = formatter.format(currentdate); } public List getAllCustomTempleteRoutinesForGrid() { List list = new ArrayList(); try { session = sessionFactory.openSession(); list = session.createCriteria(Subjectdetails.class).addOrder(Order.desc("subjectId")).list(); } catch (Exception e) { System.out.println("Exepetion in getAllCustomTempleteRoutines" + e); } finally { try { // HibernateUtil.shutdown(); } catch (Exception e) { System.out.println("Exception In getExerciseListByLoginId Resource closing :" + e); } } return list; } //**showing list on grid private static List<Subjectdetails> custLst=new ArrayList<Subjectdetails>(); static int total=50; static { SubjectdetailsDAO cts = new SubjectdetailsDAO(); Iterator iterator1 = cts.getAllCustomTempleteRoutinesForGrid().iterator(); while (iterator1.hasNext()) { Subjectdetails get = (Subjectdetails) iterator1.next(); custLst.add(get); } } /****************************************update Routines List by WorkId************************************/ public int updatesub(Subjectdetails s) { int updated = 0; try { session = sessionFactory.openSession(); transaction = session.beginTransaction(); Query query = session.createQuery("UPDATE Subjectdetails set subjectName = :routineName1 WHERE subjectId=:workoutId1"); query.setString("routineName1", s.getSubjectName()); query.setInteger("workoutId1", s.getSubjectId()); updated = query.executeUpdate(); if (updated != 0) { } transaction.commit(); } catch (Exception e) { if (transaction != null && transaction.isActive()) { try { transaction.rollback(); } catch (Exception e1) { System.out.println("Exception in addUser() Rollback :" + e1); } } } finally { try { session.flush(); session.close(); } catch (Exception e) { System.out.println("Exception In addUser Resource closing :" + e); } } return updated; } /****************************************update Routines List by WorkId************************************/ public int addSubjectt(Subjectdetails s) { int inserted = 0; Subjectdetails ss=new Subjectdetails(); try { session = sessionFactory.openSession(); transaction = session.beginTransaction(); ss. 
setSubjectName(s.getSubjectName()); session.save(ss); System.out.println("Successfully data insert in database"); inserted++; if (inserted != 0) { } transaction.commit(); } catch (Exception e) { if (transaction != null && transaction.isActive()) { try { transaction.rollback(); } catch (Exception e1) { System.out.println("Exception in addUser() Rollback :" + e1); } } } finally { try { session.flush(); session.close(); } catch (Exception e) { System.out.println("Exception In addUser Resource closing :" + e); } } return inserted; } /******************************************Get all Routines List by LoginID************************************/ public List getSubjects() { List list = null; try { session = sessionFactory.openSession(); list = session.createCriteria(Subjectdetails.class).list(); } catch (Exception e) { System.out.println("Exception in getRoutineList() :" + e); } finally { try { session.flush(); session.close(); } catch (Exception e) { System.out.println("Exception In getUserList Resource closing :" + e); } } return list; } //---\ public int addZoneToDb(String countryName, Integer loginId) { int inserted = 0; try { System.out.println("---------1--------"); Session session = HibernateUtil.getSessionFactory().openSession(); System.out.println("---------2------session--"+session); session.beginTransaction(); Subjectdetails country = new Subjectdetails(countryName, loginId, currentdate, loginId, currentdate); System.out.println("---------2------country--"+country); session.save(country); System.out.println("-------after save--"); inserted++; session.getTransaction().commit(); System.out.println("-------after commits--"); } catch (Exception e) { if (transaction != null && transaction.isActive()) { try { transaction.rollback(); } catch (Exception e1) { } } } finally { try { } catch (Exception e) { } } return inserted; } //-- public int nextId() { return total++; } public List<Subjectdetails> buildList() { return custLst; } public static int count() { return custLst.size(); } public static List<Subjectdetails> find(int o,int q) { return custLst.subList(o, q); } public void save(Subjectdetails c) { custLst.add(c); } public static Subjectdetails findById(Integer id) { try { for(Subjectdetails c:custLst) { if(c.getSubjectId()==id) { return c; } } } catch (Exception e) { // TODO Auto-generated catch block e.printStackTrace(); } return null; } public void update(Subjectdetails c) { for(Subjectdetails x:custLst) { if(x.getSubjectId()==c.getSubjectId()) { x.setSubjectName(c.getSubjectName()); } } } public void delete(Subjectdetails c) { custLst.remove(c); } public static List<Subjectdetails> findNotById(int id, int from,int to) { List<Subjectdetails> subLst=custLst.subList(from, to); List<Subjectdetails> temp=new ArrayList<Subjectdetails>(); for(Subjectdetails c:subLst) { if(c.getSubjectId()!=id) { temp.add(c); } } return temp; } public static List<Subjectdetails> findLesserAsId(int id, int from,int to) { List<Subjectdetails> subLst=custLst.subList(from, to); List<Subjectdetails> temp=new ArrayList<Subjectdetails>(); for(Subjectdetails c:subLst) { if(c.getSubjectId()<=id) { temp.add(c); } } return temp; } public static List<Subjectdetails> findGreaterAsId(int id, int from,int to) { List<Subjectdetails> subLst=custLst.subList(from, to); List<Subjectdetails> temp=new ArrayList<Subjectdetails>(); for(Subjectdetails c:subLst) { if(c.getSubjectId()>=id) { temp.add(c); } } return temp; } } Subjectdetails.hbm.xml <hibernate-mapping> <class name="vb.sofware.pojos.Subjectdetails" table="subjectdetails" 
catalog="vbsoftware"> <id name="subjectId" type="int"> <column name="subject_id" /> <generator class="increment"/> </id> <property name="subjectName" type="string"> <column name="subject_name" length="150" /> </property> <property name="createrId" type="java.lang.Integer"> <column name="creater_id" /> </property> <property name="createdDate" type="timestamp"> <column name="created_date" length="19" /> </property> <property name="updateId" type="java.lang.Integer"> <column name="update_id" /> </property> <property name="updatedDate" type="timestamp"> <column name="updated_date" length="19" /> </property> </class> </hibernate-mapping> My POJO - Subjectdetails.java package v.esoft.pojos; // Generated Oct 6, 2012 1:58:21 PM by Hibernate Tools 3.4.0.CR1 import java.util.Date; /** * Subjectdetails generated by hbm2java */ public class Subjectdetails implements java.io.Serializable { private int subjectId; private String subjectName; private Integer createrId; private Date createdDate; private Integer updateId; private Date updatedDate; public Subjectdetails( String subjectName) { //this.subjectId = subjectId; this.subjectName = subjectName; } public Subjectdetails() { } public Subjectdetails(int subjectId) { this.subjectId = subjectId; } public Subjectdetails(int subjectId, String subjectName, Integer createrId, Date createdDate, Integer updateId, Date updatedDate) { this.subjectId = subjectId; this.subjectName = subjectName; this.createrId = createrId; this.createdDate = createdDate; this.updateId = updateId; this.updatedDate = updatedDate; } public Subjectdetails( String subjectName, Integer createrId, Date createdDate, Integer updateId, Date updatedDate) { this.subjectName = subjectName; this.createrId = createrId; this.createdDate = createdDate; this.updateId = updateId; this.updatedDate = updatedDate; } public int getSubjectId() { return this.subjectId; } public void setSubjectId(int subjectId) { this.subjectId = subjectId; } public String getSubjectName() { return this.subjectName; } public void setSubjectName(String subjectName) { this.subjectName = subjectName; } public Integer getCreaterId() { return this.createrId; } public void setCreaterId(Integer createrId) { this.createrId = createrId; } public Date getCreatedDate() { return this.createdDate; } public void setCreatedDate(Date createdDate) { this.createdDate = createdDate; } public Integer getUpdateId() { return this.updateId; } public void setUpdateId(Integer updateId) { this.updateId = updateId; } public Date getUpdatedDate() { return this.updatedDate; } public void setUpdatedDate(Date updatedDate) { this.updatedDate = updatedDate; } } And my Sql query is CREATE TABLE IF NOT EXISTS `subjectdetails` ( `subject_id` int(3) NOT NULL, `subject_name` varchar(150) DEFAULT NULL, `creater_id` int(5) DEFAULT NULL, `created_date` datetime DEFAULT NULL, `update_id` int(5) DEFAULT NULL, `updated_date` datetime DEFAULT NULL, PRIMARY KEY (`subject_id`) ) ENGINE=InnoDB DEFAULT CHARSET=latin1;

    Read the article

  • How do I get 1366x768 resolution on 12.04?

    - by Megan
    I am on an HP Envy 14, and the proper resolution that I should be using is 1366x768. This is not an option and I am stuck on 1024x768. I am using Linux 12.04. lspci | grep VGA: 00:02.0 VGA compatible controller: Intel Corporation Core Processor Integrated Graphics Controller (rev 02) 01:00.0 VGA compatible controller: Advanced Micro Devices [AMD] nee ATI Madison [Radeon HD 5000M Series] I've tried to add the resolution as a mode in xorg.conf but that does not work. Please any help would be appreciated. I'm new to Linux and just got my dual boot working but this resolution issue is killing me. Edit1 I just tried using the xrandr command: xrandr --newmode "1368x768_60.00" 85.25 1368 1440 1576 1784 768 771 781 798 -hsync +vsync But I get an error: xrandr: Failed to get size of gamma for output default Edit2 lsmod returns the following: Module Size Used by vesafb 13844 1 rfcomm 47604 12 bnep 18281 2 parport_pc 32866 0 ppdev 17113 0 snd_hda_codec_hdmi 32474 1 arc4 12529 2 joydev 17693 0 hid_logitech_dj 18594 0 i915 472941 5 uvcvideo 72627 0 usbhid 47199 1 hid_logitech_dj hid 99559 2 hid_logitech_dj,usbhid psmouse 87692 0 iwlwifi 332525 0 mac80211 506816 1 iwlwifi videodev 98259 1 uvcvideo snd_hda_codec_idt 70795 1 mei 41616 0 btusb 18288 2 v4l2_compat_ioctl32 17128 1 videodev hp_accel 25976 0 lis3lv02d 19876 1 hp_accel hp_wmi 18092 0 sparse_keymap 13890 1 hp_wmi input_polldev 13896 1 lis3lv02d drm_kms_helper 46978 1 i915 drm 242038 2 i915,drm_kms_helper i2c_algo_bit 13423 1 i915 serio_raw 13211 0 snd_hda_intel 33773 5 snd_hda_codec 127706 3 snd_hda_codec_hdmi,snd_hda_codec_idt,snd_hda_intel snd_hwdep 13668 1 snd_hda_codec bluetooth 180104 23 rfcomm,bnep,btusb cfg80211 205544 2 iwlwifi,mac80211 snd_pcm 97188 3 snd_hda_codec_hdmi,snd_hda_intel,snd_hda_codec mac_hid 13253 0 snd_seq_midi 13324 0 snd_rawmidi 30748 1 snd_seq_midi snd_seq_midi_event 14899 1 snd_seq_midi snd_seq 61896 2 snd_seq_midi,snd_seq_midi_event fglrx 3263886 0 snd_timer 29990 2 snd_pcm,snd_seq snd_seq_device 14540 3 snd_seq_midi,snd_rawmidi,snd_seq snd 78855 20 snd_hda_codec_hdmi,snd_hda_codec_idt,snd_hda_intel,snd_hda_codec,snd_hwdep,snd_pcm,snd_ra wmidi,snd_seq,snd_timer,snd_seq_device wmi 19256 1 hp_wmi video 19596 1 i915 intel_ips 18174 0 soundcore 15091 1 snd snd_page_alloc 18529 2 snd_hda_intel,snd_pcm lp 17799 0 parport 46562 3 parport_pc,ppdev,lp r8169 62099 0 I have installed ATI/AMD proprietary FGLRX graphics driver. But there is another one called ATI/AMD proprietary FGLRX graphics driver (post-release updates) which I have trouble installing because it gives me an error and tells me to look at some sort of jockey log.
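    For reference, the usual xrandr sequence for adding a missing mode is sketched below; the output name (LVDS1 here) is an assumption and has to match what xrandr actually reports, and the "Failed to get size of gamma for output default" error above usually means X is running on the unaccelerated default driver, in which case adding modes will not help until the Intel/fglrx driver situation is resolved:

        # Generate a modeline for the desired resolution (cvt rounds 1366 up to 1368)
        cvt 1366 768 60

        # Register the mode and attach it to the panel output reported by xrandr
        xrandr --newmode "1368x768_60.00" 85.25 1368 1440 1576 1784 768 771 781 798 -hsync +vsync
        xrandr --addmode LVDS1 "1368x768_60.00"
        xrandr --output LVDS1 --mode "1368x768_60.00"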

    Read the article

  • Unable to enable wireless in ubuntu 12.04

    - by Joe
    I have a Vostro 2520 and not sure how to enable wireless on my machine. The details are given below, would appreciate any pointers to resolving this issue. lsmod returns Module Size Used by ath9k 132390 0 ath9k_common 14053 1 ath9k ath9k_hw 411151 2 ath9k,ath9k_common ath 24067 3 ath9k,ath9k_common,ath9k_hw b43 365785 0 mac80211 506816 2 ath9k,b43 cfg80211 205544 4 ath9k,ath,b43,mac80211 bcma 26696 1 b43 ssb 52752 1 b43 ndiswrapper 282628 0 ums_realtek 18248 0 usb_storage 49198 1 ums_realtek uas 18180 0 snd_hda_codec_hdmi 32474 1 snd_hda_codec_cirrus 24002 1 joydev 17693 0 parport_pc 32866 0 ppdev 17113 0 rfcomm 47604 0 bnep 18281 2 bluetooth 180104 10 rfcomm,bnep psmouse 97362 0 dell_wmi 12681 0 sparse_keymap 13890 1 dell_wmi snd_hda_intel 33773 3 snd_hda_codec 127706 3 snd_hda_codec_hdmi,snd_hda_codec_cirrus,snd_hda_intel snd_hwdep 13668 1 snd_hda_codec snd_pcm 97188 3 snd_hda_codec_hdmi,snd_hda_intel,snd_hda_codec snd_seq_midi 13324 0 snd_rawmidi 30748 1 snd_seq_midi snd_seq_midi_event 14899 1 snd_seq_midi snd_seq 61896 2 snd_seq_midi,snd_seq_midi_event snd_timer 29990 2 snd_pcm,snd_seq snd_seq_device 14540 3 snd_seq_midi,snd_rawmidi,snd_seq wmi 19256 1 dell_wmi snd 78855 16 snd_hda_codec_hdmi,snd_hda_codec_cirrus,snd_hda_intel,snd_hda_codec,snd_hwdep,snd_pcm,snd_rawmidi,snd_seq,snd_timer,snd_seq_device mac_hid 13253 0 i915 473240 3 drm_kms_helper 46978 1 i915 uvcvideo 72627 0 drm 242038 4 i915,drm_kms_helper videodev 98259 1 uvcvideo soundcore 15091 1 snd dell_laptop 18119 0 dcdbas 14490 1 dell_laptop i2c_algo_bit 13423 1 i915 v4l2_compat_ioctl32 17128 1 videodev snd_page_alloc 18529 2 snd_hda_intel,snd_pcm video 19596 1 i915 serio_raw 13211 0 mei 41616 0 lp 17799 0 parport 46562 3 parport_pc,ppdev,lp r8169 62099 0 sudo lshw -class network *-network UNCLAIMED description: Network controller product: Broadcom Corporation vendor: Broadcom Corporation physical id: 0 bus info: pci@0000:07:00.0 version: 01 width: 64 bits clock: 33MHz capabilities: pm msi pciexpress bus_master cap_list configuration: latency=0 resources: memory:f7c00000-f7c07fff *-network description: Ethernet interface product: RTL8111/8168B PCI Express Gigabit Ethernet controller vendor: Realtek Semiconductor Co., Ltd. physical id: 0 bus info: pci@0000:09:00.0 logical name: eth0 version: 07 serial: 78:45:c4:a3:aa:65 size: 100Mbit/s capacity: 1Gbit/s width: 64 bits clock: 33MHz capabilities: pm msi pciexpress msix vpd bus_master cap_list ethernet physical tp mii 10bt 10bt-fd 100bt 100bt-fd 1000bt 1000bt-fd autonegotiation configuration: autonegotiation=on broadcast=yes driver=r8169 driverversion=2.3LK-NAPI duplex=full firmware=rtl8168e-3_0.0.4 03/27/12 ip=192.168.1.5 latency=0 link=yes multicast=yes port=MII speed=100Mbit/s resources: irq:41 ioport:e000(size=256) memory:f0004000-f0004fff memory:f0000000-f0003fff rfkill list all 0: dell-wifi: Wireless LAN Soft blocked: yes Hard blocked: yes 1: dell-bluetooth: Bluetooth Soft blocked: yes Hard blocked: yes
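    Since rfkill reports both a soft and a hard block above, a commonly suggested first step is sketched below; the soft block can be cleared from the command line, but the hard block normally has to be released with the laptop's wireless switch or Fn key combination (often Fn+F2 on Dell models):

        # Clear the software block on all radios
        sudo rfkill unblock all

        # Re-check; "Hard blocked: yes" can only be cleared by the hardware/BIOS switch
        rfkill list all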

    Read the article

  • Disable touchpad tap to click on Oneiric ocelot

    - by AWE
    You've heard this a million times, but "tap to click" is a pain in the behind and I want to disable it. There is no touchpad section in gpointing-device-settings, nor under Mouse and Touchpad in System Settings. I've tried some commands in the terminal but it's all crap, and dconf-editor doesn't react. How about solving this once and for all? xinput list:
        Virtual core pointer                id=2   [master pointer (3)]
            Virtual core XTEST pointer      id=4   [slave pointer (2)]
            PS/2 Generic Mouse              id=13  [slave pointer (2)]
        Virtual core keyboard               id=3   [master keyboard (2)]
            Virtual core XTEST keyboard     id=5   [slave keyboard (3)]
            Power Button                    id=6   [slave keyboard (3)]
            Video Bus                       id=7   [slave keyboard (3)]
            Video Bus                       id=8   [slave keyboard (3)]
            Power Button                    id=9   [slave keyboard (3)]
            Sleep Button                    id=10  [slave keyboard (3)]
            Laptop_Integrated_Webcam_HD     id=11  [slave keyboard (3)]
            AT Translated Set 2 keyboard    id=12  [slave keyboard (3)]
            Dell WMI hotkeys                id=14  [slave keyboard (3)]

    Read the article

  • ALPS touchpad on DELL Inspiron I15RN-3647BK with Ubuntu 11.10 x64

    - by Miguel
    I can't get the multitouch features of the ALPS touchpad to work on my Dell Inspiron I15RN-3647BK with Ubuntu 11.10 x64, kernel 3.0.0-17-generic. I have tried this driver, but it doesn't work: http://people.canonical.com/~sforshee/alps-touchpad/psmouse-alps-0.10/psmouse-alps-dkms_0.10_all.deb. This is the result of the following command: user@laptop:~$ xinput list
        Virtual core pointer                id=2   [master pointer (3)]
            Virtual core XTEST pointer      id=4   [slave pointer (2)]
            PS/2 Generic Mouse              id=12  [slave pointer (2)]
        Virtual core keyboard               id=3   [master keyboard (2)]
            Virtual core XTEST keyboard     id=5   [slave keyboard (3)]
            Power Button                    id=6   [slave keyboard (3)]
            Video Bus                       id=7   [slave keyboard (3)]
            Power Button                    id=8   [slave keyboard (3)]
            Sleep Button                    id=9   [slave keyboard (3)]
            Laptop_Integrated_Webcam_HD     id=10  [slave keyboard (3)]
            AT Translated Set 2 keyboard    id=11  [slave keyboard (3)]
            Dell WMI hotkeys
    The output is the same with or without the ALPS driver from the Canonical repository, and the touchpad just behaves like a PS/2 generic mouse. Any ideas? Thanks in advance.
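    If it helps, a rough sketch of how that DKMS package is usually installed and verified is below; the version number follows the file name in the question, and whether this particular build actually supports the pad in this model is a separate matter:

        sudo dpkg -i psmouse-alps-dkms_0.10_all.deb

        # Confirm the module built and installed for the running kernel
        dkms status | grep -i alps

        # Reload psmouse so the patched driver takes over
        sudo modprobe -r psmouse && sudo modprobe psmouse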

    Read the article

  • How can I configure the touchpad and keyboard settings on a Dell Inspiron 5110?

    - by Robik
    I am using Ubuntu 11.10 on a Dell Inspiron 5110 laptop, and I want the following three things:
    1. There is a button at the top right corner of the laptop which can be used to turn the screen off. It works in Windows but not in Ubuntu 11.10; even the laptop's manual says the button is supported only in Windows. Is there a way to activate it in Ubuntu 11.10?
    2. Some keys, such as "Break", are missing. Can I use other keys (or combinations of other keys) to act as those missing keys?
    3. In "Mouse and Touchpad" there is no tab for the touchpad. I want to enable vertical and horizontal scrolling. How do I do that?
    The command xinput list shows:
        Virtual core pointer                id=2   [master pointer (3)]
            Virtual core XTEST pointer      id=4   [slave pointer (2)]
            PS/2 Generic Mouse              id=13  [slave pointer (2)]
        Virtual core keyboard               id=3   [master keyboard (2)]
            Virtual core XTEST keyboard     id=5   [slave keyboard (3)]
            Power Button                    id=6   [slave keyboard (3)]
            Video Bus                       id=7   [slave keyboard (3)]
            Video Bus                       id=8   [slave keyboard (3)]
            Power Button                    id=9   [slave keyboard (3)]
            Sleep Button                    id=10  [slave keyboard (3)]
            Laptop_Integrated_Webcam_HD     id=11  [slave keyboard (3)]
            AT Translated Set 2 keyboard    id=12  [slave keyboard (3)]
            Dell WMI hotkeys
    Please help!
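    For the scrolling part of the question: when a touchpad shows up only as "PS/2 Generic Mouse" it is being driven by evdev rather than synaptics, and one commonly suggested workaround is evdev's wheel emulation, sketched below (the device name must match the xinput output exactly, and this gives hold-button scrolling rather than true two-finger scrolling):

        # Scroll by holding the configured emulation button (middle button here) and moving
        xinput set-prop "PS/2 Generic Mouse" "Evdev Wheel Emulation" 1
        xinput set-prop "PS/2 Generic Mouse" "Evdev Wheel Emulation Button" 2
        xinput set-prop "PS/2 Generic Mouse" "Evdev Wheel Emulation Axes" 6 7 4 5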

    Read the article

  • Wireless problems with Broadcom BCM4313 with windows7 (duplicate)

    - by user292394
    I have wireless connection problem can't connect my wireless adress lspci -vvnn | grep 14e4 03:00.0 Network controller [0280]: Broadcom Corporation BCM4313 802.11bgn Wireless Network Adapter [14e4:4727] (rev 01) iwconfig eth0 no wireless extensions. lo no wireless extensions. wlan0 IEEE 802.11abg ESSID:off/any Mode:Managed Access Point: Not-Associated Retry long limit:7 RTS thr:off Fragment thr:off Power Management:off rfkill list all 0: hci0: Bluetooth Soft blocked: no Hard blocked: no 1: phy0: Wireless LAN Soft blocked: no Hard blocked: no 2: brcmwl-0: Wireless LAN Soft blocked: no Hard blocked: no lsmod Module Size Used by bnep 19624 2 rfcomm 69160 8 ip6t_REJECT 12939 1 xt_hl 12521 6 ip6t_rt 13537 3 nf_conntrack_ipv6 18894 8 nf_defrag_ipv6 34768 1 nf_conntrack_ipv6 ipt_REJECT 12541 1 xt_LOG 17717 10 xt_limit 12711 13 xt_tcpudp 12884 18 xt_addrtype 12635 4 nf_conntrack_ipv4 15012 8 nf_defrag_ipv4 12758 1 nf_conntrack_ipv4 xt_conntrack 12760 16 ip6table_filter 12815 1 ip6_tables 27025 1 ip6table_filter nf_conntrack_netbios_ns 12665 0 nf_conntrack_broadcast 12589 1 nf_conntrack_netbios_ns nf_nat_ftp 12770 0 nf_nat 21798 1 nf_nat_ftp nf_conntrack_ftp 18638 1 nf_nat_ftp nf_conntrack 96976 8 nf_nat_ftp,nf_conntrack_netbios_ns,nf_nat,xt_conntrack,nf_conntrack_broadcast,nf_conntrack_ftp,nf_conntrack_ipv4,nf_conntrack_ipv6 iptable_filter 12810 1 ip_tables 27239 1 iptable_filter vx_tables 34059 13 ip6table_filter,xt_hl,ip_tables,xt_tcpudp,xt_limit,xt_conntrack,xt_LOG,iptable_filter,ip6t_rt,ipt_REJECT,ip6_tables,xt_addrtype,ip6t_REJECT uvcvideo 80885 0 videobuf2_vmalloc 13216 1 uvcvideo videobuf2_memops 13362 1 videobuf2_vmalloc snd_hda_codec_hdmi 46207 1 videobuf2_core 40664 1 uvcvideo videodev 134688 2 uvcvideo,videobuf2_core snd_hda_codec_conexant 57441 1 snd_hda_intel 52355 8 snd_hda_codec 192906 3 snd_hda_codec_hdmi,snd_hda_codec_conexant,snd_hda_intel snd_hwdep 13602 1 snd_hda_codec snd_pcm 102099 4 snd_hda_codec_hdmi,snd_hda_codec,snd_hda_intel snd_page_alloc 18710 2 snd_pcm,snd_hda_intel snd_seq_midi 13324 0 dm_multipath 22873 0 snd_seq_midi_event 14899 1 snd_seq_midi scsi_dh 14882 1 dm_multipath lib80211_crypt_tkip 17619 0 snd_rawmidi 30144 1 snd_seq_midi intel_powerclamp 14705 0 coretemp 13435 0 kvm_intel 143060 0 kvm 451511 1 kvm_intel joydev 17381 0 serio_raw 13462 0 snd_seq 61560 2 snd_seq_midi_event,snd_seq_midi intel_ips 18664 0 wl 4207846 0 snd_seq_device 14497 3 snd_seq,snd_rawmidi,snd_seq_midi btusb 32412 0 bluetooth 395423 22 bnep,btusb,rfcomm snd_timer 29482 2 snd_pcm,snd_seq snd 69238 26 snd_hwdep,snd_timer,snd_hda_codec_hdmi,snd_hda_codec_conexant,snd_pcm,snd_seq,snd_rawmidi,snd_hda_codec,snd_hda_intel,snd_seq_device,snd_seq_midi lib80211 14381 2 wl,lib80211_crypt_tkip toshiba_bluetooth 12852 0 cfg80211 484040 1 wl lpc_ich 21080 0 fglrx 8085343 190 soundcore 12680 1 snd toshiba_acpi 22901 0 sparse_keymap 13948 1 toshiba_acpi wmi 19177 1 toshiba_acpi amd_iommu_v2 19054 1 fglrx video 19476 0 mei_me 18627 0 mei 82276 1 mei_me mac_hid 13205 0 parport_pc 32701 0 ppdev 17671 0 lp 17759 0 parport 42348 3 lp,ppdev,parport_pc hid_generic 12548 0 usbhid 52570 0 hid 106148 2 hid_generic,usbhid psmouse 102222 0 ahci 25819 3 libahci 32168 1 ahci atl1c 46086 0 dm_mirror 22135 0 dm_region_hash 20862 1 dm_mirror dm_log 18411 2 dm_region_hash,dm_mirror Can someone give any ideas on how I can fix my wireless
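    The lsmod output above shows the proprietary wl driver already loaded and rfkill reports nothing blocked, so a hedged next step is simply to confirm that the driver package is in place, that the interface can see networks, and that NetworkManager is managing it; package and command names below assume a stock Ubuntu install:

        # Make sure the Broadcom STA package that provides the wl module is installed
        dpkg -l bcmwl-kernel-source

        # Scan from the command line to rule out an access-point or credentials problem
        sudo iwlist wlan0 scan | grep ESSID

        # Restart NetworkManager in case it has lost track of the device
        sudo service network-manager restart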

    Read the article

  • Getting Started with Chart control in ASP.Net 4.0

    - by sreejukg
    In this article I am going to demonstrate the Chart control available in ASP.Net 4 and Visual Studio 2010. Most web applications need to generate reports for business users, and business users are happier viewing results in a graphical format than in raw numbers. For the purpose of this demonstration I have created a sales table, and I am going to create charts from this sales data. The sales table looks as follows.
    I have created an ASP.Net web application project in Visual Studio 2010, with a default.aspx page that I am going to use for the demonstration. First I am going to add a Chart control to the page. Visual Studio 2010 ships with a Chart control, which comes under the Data tab in the toolbox. Drag and drop the Chart control onto the default.aspx page. Visual Studio adds the markup below to the page. <asp:Chart ID="Chart1" runat="server"></asp:Chart> In the designer view, the Chart control gives the following output. As you can see, this is exactly similar to other server controls in ASP.Net, and like the other controls under the Data tab, the Chart control is a data-bound control. So I am going to bind it to my sales data. From the design view, right-click the Chart control and select "Show Smart Tag". Here you need to choose the data source property and the chart type. From the "Choose Data Source" drop-down, select "New data source". In the data source configuration wizard, select the SQL database and write the query to retrieve the data. First I am going to show the chart for the amount of sales done by each salesperson, using the following query inside the SqlDataSource select command: "SELECT SUM(SaleAmount) AS Expr1, salesperson FROM SalesData GROUP BY SalesPerson". This query gives me the amount of sales achieved by each salesperson. The markup of the SqlDataSource is as follows. <asp:SqlDataSource ID="SqlDataSource1" runat="server" ConnectionString="<%$ ConnectionStrings:SampleConnectionString %>" SelectCommand="SELECT SUM(SaleAmount) as amount, SalesPerson FROM SalesData GROUP BY SalesPerson"></asp:SqlDataSource> Once you have selected the data source for the Chart control, you need to select the X and Y values for the columns. I have entered SalesPerson as the X value member and amount as the Y value member. After these modifications, the Chart control looks as follows. Click F5 to run the application. The output of the page is as follows.
    Using ASP.Net it is much easier to represent your data in graphical format. To show this chart, I didn't write a single line of code. The Chart control is a great tool that helps developers show business intelligence in their applications without using third-party products. I will write another blog post that explores further possibilities and shows more reports using the same sales data. If you want to get the project in zipped format, post your email below.
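    For reference, the Chart markup after the smart-tag steps above ends up looking roughly like the sketch below; the series name is invented for the example, and the X/Y member names follow the query used in this article:

        <asp:Chart ID="Chart1" runat="server" DataSourceID="SqlDataSource1">
            <Series>
                <asp:Series Name="Sales" ChartType="Column"
                    XValueMember="SalesPerson" YValueMembers="amount" />
            </Series>
            <ChartAreas>
                <asp:ChartArea Name="ChartArea1" />
            </ChartAreas>
        </asp:Chart>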

    Read the article

  • From NaN to Infinity...and Beyond!

    - by Tony Davis
    It is hard to believe that it was once possible to corrupt a SQL Server Database by storing perfectly normal data values into a table; but it is true. In SQL Server 2000 and before, one could inadvertently load invalid data values into certain data types via RPC calls or bulk insert methods rather than DML. In the particular case of the FLOAT data type, this meant that common 'special values' for this type, namely NaN (not-a-number) and +/- infinity, could be quite happily plugged into the database from an application and stored as 'out-of-range' values. This was like a time-bomb. When one then tried to query this data; the values were unsupported and so data pages containing them were flagged as being corrupt. Any query that needed to read a column containing the special value could fail or return unpredictable results. Microsoft even had to issue a hotfix to deal with failures in the automatic recovery process, caused by the presence of these NaN values, which rendered the whole database inaccessible! This problem is history for those of us on more current versions of SQL Server, but its ghost still haunts us. Recently, for example, a developer on Red Gate’s SQL Response team reported a strange problem when attempting to load historical monitoring data into a SQL Server 2005 database via the C# ADO.NET provider. The ratios used in some of their reporting calculations occasionally threw out NaN or infinity values, and the subsequent attempts to load these values resulted in a nasty error. It turns out to be a different manifestation of the same problem. SQL Server 2005 still does not fully support the IEEE 754 standard for floating point numbers, in that the FLOAT data type still cannot handle NaN or infinity values. Instead, they just added validation checks that prevent the 'invalid' values from being loaded in the first place. For people migrating from SQL Server 2000 databases that contained out-of-range FLOAT (or DATETIME etc.) data, to SQL Server 2005, Microsoft have added to the latter's version of the DBCC CHECKDB (or CHECKTABLE) command a DATA_PURITY clause. When enabled, this will seek out the corrupt data, but won’t fix it. You have to do this yourself in what can often be a slow, painful manual process. Our development team, after a quizzical shrug of the shoulders, simply decided to represent NaN and infinity values as NULL, and move on, accepting the minor inconvenience of not being able to tell them apart. However, what of scientific, engineering and other applications that really would like the luxury of being able to both store and access these perfectly-reasonable floating point data values? The sticking point seems to be the stipulation in the IEEE 754 standard that, when NaN is compared to any other value including itself, the answer is "unequal" (i.e. FALSE). This is clearly different from normal number comparisons and has repercussions for such things as indexing operations. Even so, this hardly applies to infinity values, which are single definite values. In fact, there is some encouraging talk in the Connect note on this issue that they might be supported 'in the SQL Server 2008 timeframe'. If didn't happen; SQL 2008 doesn't support NaN or infinity values, though one could be forgiven for thinking otherwise, based on the MSDN documentation for the FLOAT type, which states that "The behavior of float and real follows the IEEE 754 specification on approximate numeric data types". 
However, the truth is revealed in the XPath documentation, which states that "…float (53) is not exactly IEEE 754. For example, neither NaN (Not-a-Number) nor infinity is used…". Is it really so hard to fix this problem the right way, and properly support in SQL Server the IEEE 754 standard for the floating point data type, NaNs, infinities and all? Oracle seems to have managed it quite nicely with its BINARY_FLOAT and BINARY_DOUBLE types, so it is technically possible. We have an enterprise-class database that is marketed as being part of an 'integrated' Windows platform. Absurdly, we have .NET and XPath libraries that fully support the standard for floating point numbers, and we can't even properly store these values, let alone query them, in the SQL Server database! Cheers, Tony.
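    A hedged illustration of the workaround the team settled on (mapping NaN and infinity to NULL before the value reaches the FLOAT column) is sketched below; the table and parameter names are invented for the example:

        using System;
        using System.Data;
        using System.Data.SqlClient;

        static void InsertRatio(SqlConnection conn, double ratio)
        {
            // SQL Server's FLOAT cannot store NaN or +/- infinity, so store NULL instead
            object value = (double.IsNaN(ratio) || double.IsInfinity(ratio))
                ? (object)DBNull.Value
                : ratio;

            using (var cmd = new SqlCommand(
                "INSERT INTO MonitoringSample (Ratio) VALUES (@ratio)", conn))
            {
                cmd.Parameters.Add("@ratio", SqlDbType.Float).Value = value;
                cmd.ExecuteNonQuery();
            }
        }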

    Read the article

  • Archiving SQLHelp tweets

    - by jamiet
    #SQLHelp is a Twitter hashtag that can be used by any Twitter user to get help from the SQL Server community. I think its fair to say that in its first year of being it has proved to be a very useful resource however Kendra Little (@kendra_little) made a very salient point yesterday when she tweeted: Is there a way to search the archives of #sqlhelp Trying to remember answer to a question I know I saw a couple months ago http://twitter.com/#!/Kendra_Little/status/15538234184441856 This highlights an inherent problem with Twitter’s search capability – it simply does not reach far enough back in time. I have made steps to remedy that situation by putting into place two initiatives to archive Tweets that contain the #sqlhelp hashtag. The Archivist http://archivist.visitmix.com/ is a free service that, quite simply, archives a history of tweets that contain a given search term by periodically polling Twitter’s search service with that search term and subsequently displaying a dashboard providing an aggregate view of those tweets for things like tweet volume over time, top users and top words (Archivist FAQ). I have set up an archive on The Archivist for “sqlhelp” which you can view at http://archivist.visitmix.com/jamiet/7. Here is a screenshot of the SQLHelp dashboard 36 minutes after I set it up: There is lots of good information in there, including the fact that Jonathan Kehayias (@SQLSarg) is the most active SQLHelp tweeter (I suspect as an answerer rather than a questioner ) and that SSIS has proven to be a rather (ahem) popular subject!! Datasift The Archivist has its uses though for our purposes it has a couple of downsides. For starters you cannot search through an archive (which is what Kendra was after) and nor can you export the contents of the archive for offline analysis. For those functions we need something a bit more heavyweight and for that I present to you Datasift. Datasift is a tool (currently an alpha release) that allows you to search for tweets and provide them through an object called a Datasift stream. That sounds very similar to normal Twitter search though it has one distinct advantage that other Twitter search tools do not – Datasift has access to Twitter’s Streaming API (aka the Twitter Firehose). In addition it has access to a lot of other rather nice features: It provides the Datasift API that allows you to consume the output of a Datasift stream in your tool of choice (bring on my favourite ultimate mashup tool J ) It has a query language (called Filtered Stream Definition Language – FSDL for short) A Datasift stream can consume (and filter) other Datasift streams Datasift can (and does) consume services other than Twitter If I refer to Datasift as “ETL for tweets” then you may get some sort of idea what it is all about. Just as I did with The Archivist I have set up a publicly available Datasift stream for “sqlhelp” at http://datasift.net/stream/1581/sqlhelp. Here is the FSDL query that provides the data: twitter.text contains "sqlhelp" Pretty simple eh? At the current time it provides little more than a rudimentary dashboard but as Datasift is currently an alpha release I think this may be worth keeping an eye on. 
The real value though is the ability to consume the output of a stream via Datasift’s RESTful API, observe: http://api.datasift.net/stream.xml?stream_identifier=c7015255f07e982afdeebdf1ae6e3c0d&username=jamiet&api_key=XXXXXXX (Note that an api_key is required during the alpha period so, given that I’m not supplying my api_key, this URI will not work for you) Just to prove that a Datasift stream can indeed consume data from another stream I have set up a second stream that further filters the first one for tweets containing “SSIS”. That one is at http://datasift.net/stream/1586/ssis-sqlhelp and here is the FSDL query: rule "414c9845685ff8d2548999cf3162e897" and (interaction.content contains "ssis") When Datasift moves beyond alpha I’ll re-assess how useful this is going to be and post a follow-up blog. @Jamiet
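    As a rough illustration of consuming the output of a stream via Datasift's RESTful API from .NET, the sketch below simply fetches the XML form of the stream quoted above and loads it for inspection; the stream identifier is the one from the URI in this post, and the api_key is a placeholder you would substitute with your own:

        using System;
        using System.Net;
        using System.Xml.Linq;

        class SqlHelpArchive
        {
            static void Main()
            {
                // Stream identifier taken from the URI quoted in the post; api_key is a placeholder
                string url = "http://api.datasift.net/stream.xml" +
                             "?stream_identifier=c7015255f07e982afdeebdf1ae6e3c0d" +
                             "&username=jamiet&api_key=YOUR_API_KEY";

                using (var client = new WebClient())
                {
                    string xml = client.DownloadString(url);
                    XDocument doc = XDocument.Parse(xml);

                    // The exact element names depend on Datasift's response format,
                    // so this just dumps the document for inspection
                    Console.WriteLine(doc);
                }
            }
        }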

    Read the article

  • Floating Panels and Describe Windows in Oracle SQL Developer

    - by thatjeffsmith
    One of the challenges I face as I try to share tips about our software is that I tend to assume there are features that you just ‘know about.’ Either they’re so intuitive that you MUST know about them, or it’s a feature that I’ve been using for so long I forget that others may have never even seen it before. I want to cover two of those today - Describe (DESC) – SHIFT+F4 Floating Panels My super-exciting desktop SQL Developer and Describe DESC or Describe is an Oracle SQL*Plus command. It shows what a table or view is composed of in terms of it’s column definition. Here’s an example: SQL*Plus: Release 11.2.0.3.0 Production on Fri Sep 21 14:25:37 2012 Copyright (c) 1982, 2011, Oracle. All rights reserved. Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - Production With the Partitioning, OLAP, Data Mining and Real Application Testing options SQL> desc beer; Name Null? Type ----------------------------------------- -------- ---------------------------- BREWERY NOT NULL VARCHAR2(100) CITY VARCHAR2(100) STATE VARCHAR2(100) COUNTRY VARCHAR2(100) ID NUMBER SQL> You can get the same information – and a good bit more – in SQL Developer using the SQL Developer DESC command. You invoke it with SHIFT+F4. It will open a floating (non-modal!) window with the information you want. Here’s an example: I can see my column definitions, constratins, stats, privs, etc A few ‘cool’ things you should be aware of: I can open as many as I want, and still work in my worksheet, browser, etc. I can also DESC an index, user, or most any other database object I can of course move them off my primary desktop display The DESC panel’s are read-only. I can’t drop a constraint from within the DESC window of a given table. But for dragging columns into my worksheet, and checking out the stats for my objects as I query them – it’s very, very handy. Try This Right Now Type ‘scott.emp’ (or some other table you have), place your cursor on the text, and hit SHIFT+F4. You’ll see the EMP object open. Now click into a column name in the columns page. Drag it into your worksheet. It will paste that column name into your query. This is an alternative for those that don’t like our code insight feature or dragging columns off the connection tree (new for v3.2!) Got it? SQL Developer’s Floating Panels Ok, let’s talk about a similar feature. Did you know that any dockable panel from the View menu can also be ‘floated?’ One of my favorite features is the SQL History. Every query I run is recorded, and I can recall them later without having to remember what I ran and when. And I USUALLY use the keyboard shortcuts for this. Let your trouble float away…if only it were so easy as a right-click in the real world. But sometimes I still want to see my recall list without having to give up my screen real estate. So I just mouse-right click on the panel tab and select ‘Float.’ Then I move it over to my secondary display – see the poorly lit picture in the beginning of this post. And that’s it. Simple, I know. But I thought you should know about these two things!

    Read the article

  • Bind a Wijmo Grid to Salesforce.com Through the Salesforce OData Connector

    - by dataintegration
    This article will explain how to connect to any RSSBus OData Connector with Wijmo's data grid using JSONP. While the example will use the Salesforce Connector, the same process can be followed for any of the RSSBus OData Connectors. Step 1: Download and install both the Salesforce Connector from RSSBus and the Wijmo javascript library. Step 2: Next you will want to configure the Salesforce Connector to connect with your Salesforce account. If you browse to the Help tab in the Salesforce Connector application, there is a link to the Getting Started Guide which will walk you through setting up the Salesforce Connector. Step 3: Once you have successfully configured the Salesforce Connector application, you will want to open a Wijmo sample grid file to edit. This example will use the overview.html grid found in the Samples folder. Step 4: First, we will wrap the jQuery document ready function in a callback function for the JSONP service. In this example, we will wrap this in function called fnCallback which will take a single object args. <script id="scriptInit" type="text/javascript"> function fnCallback(args) { $(document).ready(function () { $("#demo").wijgrid({ ... }); }); }; </script> Step 5: Next, we need to format the columns object in a format that Wijmo's data grid expects. This is done by adding the headerText: element for each column. <script id="scriptInit" type="text/javascript"> function fnCallback(args) { var columns = []; for (var i = 0; i < args.columnnames.length; i++){ var col = { headerText: args.columnnames[i]}; columns.push(col); } $(document).ready(function () { $("#demo").wijgrid({ ... }); }); }; </script> Step 6: Now the wijgrid parameters are ready to be set. In this example, we will set the data input parameter to the args.data object and the columns input parameter to our newly created columns object. The resulting javascript function should look like this: <script id="scriptInit" type="text/javascript"> function fnCallback(args) { var columns = []; for (var i = 0; i < args.columnnames.length; i++){ var col = { headerText: args.columnnames[i]}; columns.push(col); } $(document).ready(function () { $("#demo").wijgrid({ allowSorting: true, allowPaging: true, pageSize: 10, data: args.data, columns: columns }); }); }; </script> Step 7: Finally, we need to add the JSONP reference to our Salesforce Connector's data table. You can find this by clicking on the Settings tab of the Salesforce Connector. Once you have found the JSONP URL, you will need to supply a valid table name that you want to connect with Wijmo. In this example, we will connect to the Lead table. You will also need to add authentication options in this step. In the example we will append the authtoken of the user who has access to the Salesforce Connector using the @authtoken query string parameter. IMPORTANT: This is not secure and will expose the authtoken of the user whose authtoken you supply in this step. There are other ways to secure the user's authtoken, but this example uses a query string parameter for simplicity. <script src="http://localhost:8181/sfconnector/data/conn/Lead.rsd?@jsonp=fnCallback&sql:query=SELECT%20*%20FROM%20Lead&@authtoken=<myAuthToken>" type="text/javascript"></script> Step 8: Now, we are done. If you point your browser to the URL of the sample, you should see your Salesforce.com leads in a Wijmo data grid.

    Read the article

  • SSRS Parameters and MDX Data Sets

    - by blakmk
    Having spent the past few weeks creating reports and dashboards in SSRS and SSAS 2008, I was frustrated by how difficult it is to use custom datasets to generate parameter drill-downs. It also seems Reporting Services can be quite unforgiving when it comes to renaming things like datasets, so I want to share a couple of techniques that I found useful. One of the things I regularly do is add parameters to the queries. However, doing this causes Reporting Services to generate a hidden dataset and parameter name for you. One of the things I like to do is tweak these hidden datasets, removing the 'ALL' level, which is a tip I picked up from Devin Knight in his blog. There are some rules I've developed for myself since working with SSRS and MDX; they may not be the best or only way, but they work for me.
    Rule 1 – Never trust the automatically generated hidden datasets
    Or even ANY automatically generated MDX queries, for that matter. I've previously blogged about this here. If you examine the MDX generated in the hidden dataset you will see that it generates the MDX in the context of the originating query by building a subcube. This means it may NOT be appropriate to use it in a subsequent query which has a different context. Make sure you always understand what is going on. Often when I'm developing a dashboard or a report there are several parameter-oriented datasets that I like to create manually. It can be that I have different datasets using the same dimension but in a different context. One example of this is that I often use a dataset for the last month and a dataset for the last 6 months; both use the same date hierarchy. However, Reporting Services seems not to be too smart when it comes to generating unique datasets when working with and renaming parameters and datasets. Very often I have come across this error when refactoring parameter names and default datasets: "an item with the same key has already been added". The only way I've found to reliably avoid this is to follow rule 2.
    Rule 2 – Follow this sequence when working with Parameters and DataSets:
    1. Create lookup and default datasets in advance.
    2. Create parameters (set the datasets for available and default values).
    3. Go into the query and tick the parameter check box.
    4. On the dataset properties screen, select the parameter defined earlier as the parameter value defined earlier.
    Rule 3 – Don't tear your hair out when you have just renamed objects and your report doesn't build
    Just use XML Notepad on the original report file. I found I gained a good understanding of the structure of the underlying XML document just by using XML Notepad. From this you can search for references to the missing object. You can also just do a wholesale search and replace (after taking a backup copy, of course ;-)).
    So I hope the above helps to save the sanity of anyone who regularly works with SSRS and MDX.
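    To make Rule 1 concrete, a hand-written parameter dataset that leaves out the 'All' member typically looks something like the sketch below; the cube, hierarchy and level names are Adventure Works-style placeholders, not the report's actual ones:

        WITH
          MEMBER [Measures].[ParameterCaption] AS [Date].[Calendar].CURRENTMEMBER.MEMBER_CAPTION
          MEMBER [Measures].[ParameterValue]   AS [Date].[Calendar].CURRENTMEMBER.UNIQUENAME
        SELECT
          { [Measures].[ParameterCaption], [Measures].[ParameterValue] } ON COLUMNS,
          -- Asking for a specific level, instead of ALLMEMBERS on the attribute, keeps the 'All' member out
          [Date].[Calendar].[Month].MEMBERS ON ROWS
        FROM [Adventure Works]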

    Read the article

  • MySql Connector/NET 6.7.4 GA has been released

    - by fernando
    MySQL Connector/Net 6.7.4, a new version of the all-managed .NET driver for MySQL has been released.  This is the GA, is feature complete. It is recommended for production environments.  It is appropriate for use with MySQL server versions 5.0-5.7.  It is now available in source and binary form from http://dev.mysql.com/downloads/connector/net/#downloads and mirror sites (note that not all mirror sites may be up to date at this point-if you can't find this version on some mirror, please try again later or choose another download site.) The 6.7 version of MySQL Connector/Net brings the following new features: -  WinRT Connector. -  Load Balancing support. -  Entity Framework 5.0 support. -  Memcached support for Innodb Memcached plugin. -  This version also splits the product in two: from now on, starting version 6.7, Connector/NET will include only the former Connector/NET ADO.NET driver, Entity Framework and ASP.NET providers (Core libraries of MySql.Data, MySql.Data.Entity & MySql.Web). While all the former product Visual Studio integration (Design support, Intellisense, Debugger) are available as part of MySql Windows Installer under the name "MySql for Visual Studio".  WinRT Connector  ------------------------------------------- Now you can write MySql data access apps in Windows Runtime (aka Store Apps) using the familiar API of Connector/NET for .NET.  Load Balancing Support  -------------------------------------------  Now you can setup a Replication or Cluster configuration in the backend, and Connector/NET will balance the load of queries among all servers making up the backend topology.  Entity Framework 5.0  -------------------------------------------  Connector/NET is now compatible with EF 5, including special features of EF 5 like spatial types.  Memcached  -------------------------------------------  Just setup Innodb memcached plugin and use Connector/NET new APIs to establish a client to MySql 5.6 server's memcached daemon.  Bug fixes included in this release: - Fix for Entity Framework when inserts data having Identity columns (Oracle bug #16494585). - Fix for Connector/NET cannot read data from a MySql table using UTF-16/UTF-32 (MySql bug #69169, Oracle bug #16776818). - Fix for Malformed query in Entity Framework when eager loading due to multiple projections (MySql bug #67183, Oracle bug #16872852). - Fix for database objects with 'dbo' prefix when using automatic migrations in Entity Framework 5.0 (Oracle bug #16909439). - Fix for bug IIS application pool reset worker process causes website to crash (Oracle bug #16909237, Mysql Bug #67665). - Fix for bug Error in LINQ to Entities query when using Distinct().Count() (MySql Bug #68513, Oracle bug #16950146). - Fix for occasionally return no data when socket connection is slow, interrupted or delayed (MySql bug #69039, Oracle bug #16950212). - Fix for ConstraintException when filling a datatable (MySql bug #65065, Oracle bug #16952323). - Fix for Data Provider is not found after uninstalling Mysql for visual studio (Oracle bug #16973456). - Fix for nested sql generated for LINQ to Entities query with Take and Order by (MySql bug #65723, Oracle bug #16973939). The documentation is available at http://dev.mysql.com/doc/refman/5.7/en/connector-net.html  Enjoy and thanks for the support!  --  Fernando Gonzalez Sanchez | Software Engineer |  Oracle MySQL Windows Experience Team, Connector/NET  Guadalajara | Jalisco | Mexico 
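    For anyone picking up the new build, a minimal usage sketch with the ADO.NET driver is below; the connection string values are placeholders, and nothing here exercises the new WinRT, load-balancing or memcached features described above:

        using System;
        using MySql.Data.MySqlClient;

        class Program
        {
            static void Main()
            {
                // Placeholder credentials; works against MySQL server 5.0-5.7 per the release notes
                var connectionString = "server=localhost;user id=appuser;password=secret;database=test";

                using (var conn = new MySqlConnection(connectionString))
                {
                    conn.Open();
                    using (var cmd = new MySqlCommand("SELECT VERSION()", conn))
                    {
                        Console.WriteLine("Connected to MySQL {0}", cmd.ExecuteScalar());
                    }
                }
            }
        }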

    Read the article

  • NHibernate 3.0 and FluentNHibernate, how to get up and running…

    - by DesigningCode
    First up: it's actually really easy. I'm not very religious about my DB tech, I don't really care, I just want something that works. So I'm happy to consider all options if they provide an advantage, and recently I was considering jumping from NHibernate to EF 4.0. However, before ditching NHibernate and jumping to EF 4.0, I thought I should try the head version of NHibernate's trunk and the head version of FluentNHibernate. I currently have a "Repository / Unit of Work" framework built up around these two techs. All up it makes my life pretty simple for dealing with databases. The problem is the current release of NHibernate + the Linq provider wasn't too hot for our purposes, especially trying to plug it into older VB.NET code. The Linq provider spat the dummy with VB.NET lambdas, mainly because in C# Query().Where(l => l.Name.Contains("x") || l.Name.Contains("y")).ToList(); is not the same as the VB.NET Query().Where(Function(l) l.Name.Contains("x") Or l.Name.Contains("y")).ToList. VB.NET seems to spit out… well… something different :-) So anyway, compiling your own version of NHibernate and FluentNHibernate: it's actually pretty easy! First you'll need to install TortoiseSVN, NAnt and Git if you don't already have them.
NHibernate: first, get the Subversion trunk https://nhibernate.svn.sourceforge.net/svnroot/nhibernate/trunk/ into a directory somewhere, e.g. \thirdparty\nhibernate, then use NAnt to build it. (If you open the .sln it will show errors because AssemblyInfo.cs doesn't exist.) To build it, there is a .txt document with sample command line build instructions; I simply used:
NAnt -D:project.config=release clean build >output-release-build.log
*wait* *wait* *wait* and ta-da, you will have a bin directory with all the release dlls.
FluentNHibernate: this was pretty simple. There are instructions here: http://wiki.fluentnhibernate.org/Getting_started#Installation Basically, with Git, create a directory and issue the command
git clone git://github.com/jagregory/fluent-nhibernate.git
and wait, and soon enough you have the source. Now, from the bin directory that NHibernate spat out, take everything and dump it into the subdirectory "fluent-nhibernate\tools\NHibernate". Now, to build, you can use rake… which is a Ruby build system; however you can also just open the solution and build, which is what I did. I had a few problems with the references, which I simply re-added using the new ones. Once built, I just took all the NHibernate dlls and the Fluent ones, replaced my existing NHibernate / Fluent and killed off the old Linq project. All I had to change were the places that used .Linq<T>, replacing them with .Query<T> (which was easy as I had wrapped it already to isolate my code from such changes), and hey presto, everything worked. Even the VB.NET Linq calls. I need to do some more testing as I've only done basic smoke tests, but it's all looking pretty good, so for now I will stick with NHibernate!

    Read the article

  • Pull Request Changes, Multi-Selection in Advanced View, and Advertisement Changes

    [Do you tweet? Follow us on Twitter @matthawley and @adacole_msft] We deployed a new version of the CodePlex website today.
Pull Request Changes
In this release, we have begun to re-focus on Pull Requests to ensure a productive experience between the project users and developers. We feel we made significant progress in this area for this release and look forward to using your feedback to drive future iterations. One of the biggest hurdles people have indicated is the inability to see what a pull request includes without pulling the source down from a Mercurial client. With today’s changes, any user has the ability to view a pull request, the changesets / changes included, and perform an inline diff of the file. When a pull request is made, the CodePlex website will query for all outgoing changes from the fork to the main repository for a point-in-time comparison. Because of this point-in-time comparison…
- All existing pull requests created prior to this release will not have changesets associated with them.
- If new commits are pushed to the fork while a pull request is active, they will not appear associated with the pull request. The pull request will need to be re-submitted for them to appear.
Once a pull request is created, you can “View the Pull Request”, which takes you to a dedicated details page (screenshot not reproduced here). As you may notice, we now display a lot more detailed information regarding that pull request, including who it was requested by and when, the associated changesets, the description, who it’s assigned to (we’ll come back to this) and the listing of summarized file changes. What you’ll also notice is that each modified file has the ability to view a diff of all changes made. When you click “(view diff)” for a file, an inline diff experience appears. This new experience allows you to quickly navigate through all of the modified files as well as view the various change blocks for each file. You’ll also notice as you browse through each file’s changes, we update the URL to include the file path so you can quickly send a direct link to a pull request’s file. Clicking “(close diff)” will bring you back to the original pull request view. View this pull request live on WikiPlex.
Pull Request Review Assignment
Another new feature we added for pull requests is the ability for project members to assign pull requests for review. Any project member has the ability to assign (and re-assign if needed) a pull request to a project member. Once the assignment has been made, that project member will be notified via email of the assignment. Once they complete the review of the pull request, they can either accept or deny it, similarly to the previous process.
Multi-Selection in Advanced View Filters
One of the more recent requests we have heard from users is the ability to multi-select advanced view filters for work items. We are happy to announce this is now possible. Simply control-click the multiple options for each filter item and your work item query will be refined as such. Should you happen to unselect all options for a given filter, it will automatically reset to the default option for that filter. Furthermore, the “Direct Link” URL will be updated to include the multi-selected options for each filter. Note: The “Direct Link” feature was released in our previous deployment, just never written about. It allows you to capture the current state of your query and send it to other individuals.
Advertisement Changes
Very recently, the advertiser (The Lounge) we partnered with to provide advertising revenue for projects, or donations to charity, was acquired by Lake Quincy Media. There has been no change in the advertising platform offering, and all projects have been converted over to using the new infrastructure. Project owners should note the new contact information for getting paid. The CodePlex team values your feedback, and is frequently monitoring Twitter, our Discussions and Issue Tracker for new features or problems. If you’ve not visited the Issue Tracker recently, please take a few moments to log an idea or vote for the features you would most like to see implemented on CodePlex.

    Read the article

  • 10 Steps to access Oracle stored procedures from Crystal Reports

    Requirements to access Oracle stored procedures from CR
The following requirements must be met in order for CR to access an Oracle stored procedure:
1. You must create a package that defines the REF CURSOR. This REF CURSOR must be strongly bound to a static pre-defined structure (see Strongly Bound REF CURSORs vs Weakly Bound REF CURSORs). This package must be created separately and before the creation of the stored procedure. NOTE: Crystal Reports 9 native connections will support Oracle stored procedures created within packages as well as Oracle stored procedures referencing weakly bound REF CURSORs. Crystal Reports 8.5 native connections will support Oracle stored procedures referencing weakly bound REF CURSORs.
2. The procedure must have a parameter that is a REF CURSOR type. This is because CR uses this parameter to access and define the result set that the stored procedure returns.
3. The REF CURSOR parameter must be defined as IN OUT (read/write mode). After the procedure has opened and assigned a query to the REF CURSOR, CR will perform a FETCH call for every row from the query's result. This is why the parameter must be defined as IN OUT.
4. Parameters can only be input (IN) parameters. CR is not designed to work with OUT parameters.
5. The REF CURSOR variable must be opened and assigned its query within the procedure.
6. The stored procedure can only return one record set. The structure of this record set must not change based on parameters.
7. The stored procedure cannot call another stored procedure.
8. If using an ODBC driver, it must be the CR Oracle ODBC driver (installed by CR). Other Oracle ODBC drivers (installed by Microsoft or Oracle) may not function correctly.
9. If you are using the CR ODBC driver, you must ensure that in the ODBC Driver Configuration setup, under the Advanced tab, the option 'Procedure Return Results' is checked ON.
10. If you are using the native Oracle driver and using hard-coded date selection within the procedure, the date selection must use either a string representation format of 'YYYY-MM-DD' (i.e. WHERE DATEFIELD = '1999-01-01') or the TO_DATE function with the same format specified (i.e. WHERE DATEFIELD = TO_DATE('1999-01-01','YYYY-MM-DD')). For more information, refer to kbase article C2008023.
11. Most importantly, this stored procedure must execute successfully in Oracle's SQL*Plus utility.
If all of these conditions are met, you must next ensure you are using the appropriate database driver. Please refer to the sections in this white paper for a list of acceptable database drivers.
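    To make requirements 1-5 concrete, here is a minimal PL/SQL sketch of the package-plus-procedure shape described above; the package, procedure, table and column names (pkg_report, get_orders, orders) are illustrative only and not taken from the white paper.
      -- Package defining the strongly typed REF CURSOR (requirement 1)
      CREATE OR REPLACE PACKAGE pkg_report AS
        TYPE order_rec IS RECORD (
          order_id   orders.order_id%TYPE,
          order_date orders.order_date%TYPE,
          amount     orders.amount%TYPE
        );
        TYPE order_cur IS REF CURSOR RETURN order_rec;
      END pkg_report;
      /
      -- Procedure with an IN OUT REF CURSOR parameter that is opened inside the body (requirements 2-5)
      CREATE OR REPLACE PROCEDURE get_orders (
        p_from_date IN DATE,
        p_recordset IN OUT pkg_report.order_cur
      ) AS
      BEGIN
        OPEN p_recordset FOR
          SELECT order_id, order_date, amount
            FROM orders
           WHERE order_date >= p_from_date;
      END get_orders;
      /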

    Read the article

  • Math with Timestamp

    - by Knut Vatsendvik
    Got this little SQL quiz from a colleague. How to add or subtract exactly 1 second from a Timestamp? It sounded simple enough at first blink, but was a bit trickier than expected. If the data type had been a Date, we knew that we could add or subtract days, minutes or seconds using + or –
sysdate + 1 to add one day
sysdate - (1 / 24) to subtract one hour
sysdate + (1 / 86400) to add one second
Would the same arithmetic work with Timestamp as with Date? Let's test it out with the following query:
SELECT systimestamp, systimestamp + (1 / 86400) FROM dual;
----------
03.05.2010 22.11.50,240887 +02:00
03.05.2010
The first result line shows us the system time down to fractions of seconds. The second result line shows the result as a Date (as used for date calculation), meaning that the granularity is reduced to a second. By using the PL/SQL dump() function, we can confirm this with the following query:
SELECT dump(systimestamp), dump(systimestamp + (1 / 86400)) FROM dual;
----------
Typ=188 Len=20: 218,7,5,4,8,53,9,0,200,46,89,20,2,0,5,0,0,0,0,0
Typ=13 Len=8: 218,7,5,4,10,53,10,0
Where Typ=13 is a runtime representation for Date. So how can we increase the precision to include fractions of a second? After investigating it a bit, we found out that the interval data type INTERVAL DAY TO SECOND could be used, with the result of addition or subtraction being a Timestamp. Let's try our first query again, now using the interval data type.
SELECT systimestamp, systimestamp + INTERVAL '0 00:00:01.0' DAY TO SECOND(1) FROM dual;
----------
03.05.2010 22.58.32,723659000 +02:00
03.05.2010 22.58.33,723659000 +02:00
Yes, it worked! To finish the story, here is one example showing how to specify an interval of 2 days, 6 hours, 30 minutes, 4 seconds and 111 thousandths of a second.
INTERVAL '2 6:30:4.111' DAY TO SECOND(3)
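    As a small follow-up sketch (assuming an Oracle session), the same INTERVAL approach also works for subtraction and for fractions of a second:
      SELECT systimestamp                                              AS now,
             systimestamp - INTERVAL '0 00:00:01.0'   DAY TO SECOND(1) AS one_second_ago,
             systimestamp + INTERVAL '0 00:00:00.250' DAY TO SECOND(3) AS quarter_second_ahead
      FROM dual;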

    Read the article

  • Crime Scene Investigation: SQL Server

    - by Rodney Landrum
    “The packages are running slower in Prod than they are in Dev” My week began with this simple declaration from one of our lead BI developers, quickly followed by an emailed spreadsheet demonstrating that, over 5 executions, an extensive ETL process was running on average 630 seconds faster on Dev than on Prod. The situation needed some scientific investigation to determine why the same code, the same data, the same schema would yield consistently slower results on a more powerful server. Prod had yet to be officially christened with a “Go Live” date so I had the time, and having recently been binge-watching CSI: New York, I also had the inclination. An inspection of the two systems, Prod and Dev, revealed the first surprise: although Prod was indeed a “bigger” system, with double the amount of RAM of Dev, the latter actually had twice as many processor cores. On neither system did I see much sign of resources being heavily taxed while the ETL process was running. Without any real supporting evidence, I jumped to a conclusion that my years of performance tuning should have helped me avoid, and that was that the hardware differences explained the better performance on Dev. We spent time setting up a Test system, similarly scoped to Prod except with 4 times the cores, and ported everything across. The results of our careful benchmarks left us truly bemused; the ETL process on the new server was slower than on both other systems. We burned more time tweaking server configurations, monitoring IO and network latency, several times believing we’d uncovered the smoking gun, until the results of subsequent test runs pitched us back into confusion. Finally, I decided, enough was enough. Hadn’t I learned very early in my DBA career that almost all bottlenecks were caused by code and database design, not hardware? It was time to get back to basics. With over 100 SSIS packages and hundreds of queries, each handling specific tasks such as file loads, bulk inserts, transforms, logging, and so on, the task seemed formidable. And yet, after barely an hour spent with Profiler, Extended Events, and wait statistics DMVs, I had a lead in the shape of a query that joined three tables, containing millions of rows, returned 3279 results, but performed 239K logical reads. As soon as I looked at the execution plans for the query in Dev and Test I saw the culprit: an implicit conversion warning on a join predicate field that was numeric in one table and a varchar(50) in another! I turned this information over to the BI developers who quickly resolved the data type mismatches and found and fixed “several” others as well. After the schema changes the same query with the same databases ran in under 1 second on all systems and reduced the logical reads down to fewer than 300. The analysis also revealed that on Dev, the ETL task was pulling data across a LAN, whereas Prod and Test were connected across a slower WAN, in large part explaining why the same process ran slower on the latter two systems. Loading the data locally on Prod delivered a further 20% gain in performance. As we progress through our DBA careers we learn valuable lessons. Sometimes, with a project deadline looming and pressure mounting, we choose to forget them. I was close to giving in to the temptation to throw more hardware at the problem. I’m pleased at least that I resisted, though I still kick myself for not looking at the code on day one.
It can seem a daunting prospect to return to the fundamentals of the code so close to roll out, but with the right tools, and surprisingly little time, you can collect the evidence that reveals the true problem. It is a lesson I trust I will remember for my next 20 years as a DBA, if I’m ever again tempted to bypass the evidence.
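As an aside (not from the original article), here is a minimal T-SQL sketch of the kind of mismatch described: the join key is an int on one side and a varchar(50) on the other, so the plan shows a CONVERT_IMPLICIT on the join predicate; all table and column names are invented.
      CREATE TABLE dbo.FactLoad  (LoadKey int         NOT NULL PRIMARY KEY, LoadDate date NOT NULL);
      CREATE TABLE dbo.StageRows (LoadKey varchar(50) NOT NULL, Payload nvarchar(200) NULL);

      -- Problematic join: the varchar side is implicitly converted to int for every row compared.
      SELECT f.LoadKey, s.Payload
      FROM   dbo.FactLoad  AS f
      JOIN   dbo.StageRows AS s ON s.LoadKey = f.LoadKey;

      -- Fix: align the data types so the join compares like with like
      -- (assumes all existing values convert cleanly).
      ALTER TABLE dbo.StageRows ALTER COLUMN LoadKey int NOT NULL;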

    Read the article

  • Fetching Partition Information

    - by Mike Femenella
    For a recent SSIS package at work I needed to determine the distinct values in a partition, the number of rows in each partition and the filegroup name on which each partition resided, in order to come up with a grouping mechanism. Of course sys.partitions comes to mind for some of that, but there are a few other tables you need to link to in order to grab the information required. The table I’m working on contains 8.8 billion rows. Finding the distinct partition keys from this table was not a fast operation. My original solution was to create a temporary table, grab the distinct values for the partitioned column, then update via sys.partitions for the rows and the $partition function for the partition id, and finally look back to the sys.filegroups table for the filegroup names. It wasn’t pretty; it could take up to 15 minutes to return the results. The primary issue is pulling distinct values from the table. Queries for distinct against 8.8 billion rows don’t go quickly. A few beers into a conversation with a friend and we ended up talking about work, which led to a conversation about the task described above. The solution was already built into SQL Server; I just needed to pull it together. The first table I needed was sys.partition_range_values. This contains one row for each range boundary value for a partition function. In my case I have a partition function which uses dayid values. For example, July 4th would be represented as an int, 20130704. This table lists out all of the dayid values which were defined in the function. This eliminated the need to query my source table for distinct dayid values; everything I needed was already built in here for me. The only caveat was that in my SSIS package I needed to create a bucket for any dayid values that were out of bounds for my function. For example, if my function handled 20130501 through 20130704 and I had day values of 20130401 or 20130705 in my table, these would not be listed in sys.partition_range_values. I just created an “everything else” bucket in my SSIS package in case I had any dayid values unaccounted for. Getting the number of rows for a partition is very easy; the sys.partitions table contains values for each partition, so it is easy enough to achieve by querying for the object_id and an index value of 1 (the clustered index). The final piece of information was the filegroup name. There are 2 options available to get the filegroup name: sys.data_spaces or sys.filegroups. For my query I chose sys.filegroups, but really it’s a matter of preference and data needs. In order to bridge between the sys.partitions table and either sys.data_spaces or sys.filegroups you need to get the container_id. This can be done by joining sys.allocation_units.container_id to sys.partitions.hobt_id. sys.allocation_units contains the field data_space_id which then lets you join in either sys.data_spaces or sys.filegroups. The end result is the query below, which typically executes for me in under 1 second. I’ve included the join to sys.filegroups and to sys.data_spaces, and I’ve just commented out the one I’m not using. As I mentioned above, this shaves a good 10-15 minutes off of my original SSIS package and is a really easy tweak to get a boost in my ETL time. Enjoy.
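    The post refers to "the query below", but the query itself did not survive into this excerpt, so the following is only a sketch reconstructed from the description above. The table name dbo.FactTable is a placeholder, and the boundary_id to partition_number alignment depends on whether the partition function is RANGE LEFT or RANGE RIGHT.
      SELECT   prv.value              AS dayid_boundary,   -- boundary values from the partition function
               p.partition_number,
               p.rows,
               fg.name                AS filegroup_name
               -- , ds.name           AS data_space_name   -- alternative via sys.data_spaces
      FROM     sys.partitions p
      JOIN     sys.indexes i             ON i.object_id = p.object_id AND i.index_id = p.index_id
      JOIN     sys.allocation_units au   ON au.container_id = p.hobt_id AND au.type = 1  -- IN_ROW_DATA
      JOIN     sys.filegroups fg         ON fg.data_space_id = au.data_space_id
      -- JOIN  sys.data_spaces ds        ON ds.data_space_id = au.data_space_id
      LEFT JOIN sys.partition_schemes ps ON ps.data_space_id = i.data_space_id
      LEFT JOIN sys.partition_range_values prv
               ON prv.function_id = ps.function_id
              AND prv.boundary_id = p.partition_number    -- alignment depends on RANGE LEFT/RIGHT
      WHERE    p.object_id = OBJECT_ID('dbo.FactTable')   -- placeholder table name
      AND      p.index_id  = 1;                           -- the clustered index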

    Read the article

  • Designing An ACL Based Permission System

    - by ryanzec
    I am trying to create a permissions system where everything is going to be stored in MySQL (or some database) and pulled using PHP for a project management system I am building. Right now I am trying to do it in an ACL kind of way. There are a number of key features I want to be able to support:
1. Being able to assign permissions without being tied to a specific object. The reason for this is that I want to be able to selectively show/hide elements of the UI based on permissions at a point where I am not directly looking at a domain object instance. For instance, a button to create a new project should only be shown to users that have the pm.project.create permission, but obviously you can also assign a create permission to a domain object instance (as it is already created).
2. Not having to assign permissions for every single object. Obviously creating permission entries for every single object (projects, tickets, comments, etc.) would become a nightmare to maintain, so I want to have some level of permission inheritance.
3. Being able to filter queries based on permissions. This would be a really nice to have, but I am not sure if it is possible. What I mean by this is, say I have a page that lists all projects. I want the query that pulls all projects to incorporate the ACL so that it would not show projects that the current user does not have pm.project.read access to. This would have to be incorporated into the main query, because if it is a process done after the main query (which I know I could do), certain features like pagination become much more difficult.
Right now this is my basic design for the tables:
AclEntities
id - the primary key
key - the unique identifier for the domain object (usually the primary key of that object)
parentId - the parent of the domain object (like the project object if this was a ticket object)
aclDomainObjectId - metadata about the domain object
AclDomainObjects
id - primary key
title - a simple string to uniquely identify the domain object (i.e. project, ticket, comment, etc.)
fullyQualifiedClassName - the fully qualified class name for use in code (I am using namespaces)
There would also be tables mapping AclEntities to Users and UserGroups. I also have this interface that all ACL entity based objects have to implement:
IAclEntity
getAclKey() - get the unique key for this specific instance of the acl domain object (generally return the primary key or a concatenated string of a composite primary key)
getAclTitle() - get the unique title for the domain object (generally just returning a static string)
getAclDisplayString() - get the string that represents this entity (generally one or more fields on the object)
getAclParentEntity() - get the parent acl entity object (or null if no parent)
getAclEntity() - get the acl entity object for this instance of the domain object (or null if one has not been created yet)
hasPermission($permissionString, $user = null) - whether or not the user has the permission for this instance of the domain object
static getFromAclEntityId($aclEntityId) - get a specific instance of the domain object from an acl entity id.
Do any of the features I am looking for seem hard to support, or am I just way off base? Am I missing or not taking into account anything in my implementation? Is performance something I should keep in mind?
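    For illustration only, here is a rough MySQL sketch of the two tables described above plus one possible user mapping table; the data types, constraints and the AclEntityUserPermissions name are assumptions, not part of the original design.
      CREATE TABLE AclDomainObjects (
        id                      INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
        title                   VARCHAR(100) NOT NULL,   -- e.g. 'project', 'ticket'
        fullyQualifiedClassName VARCHAR(255) NOT NULL
      );

      CREATE TABLE AclEntities (
        id                INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
        `key`             VARCHAR(100) NOT NULL,   -- identifies the domain object instance
        parentId          INT UNSIGNED NULL,       -- e.g. the project entity for a ticket
        aclDomainObjectId INT UNSIGNED NOT NULL,
        FOREIGN KEY (parentId)          REFERENCES AclEntities(id),
        FOREIGN KEY (aclDomainObjectId) REFERENCES AclDomainObjects(id)
      );

      -- Mapping of permissions to users for a given entity; a UserGroups variant
      -- would look the same with a groupId instead of userId.
      CREATE TABLE AclEntityUserPermissions (
        aclEntityId INT UNSIGNED NOT NULL,
        userId      INT UNSIGNED NOT NULL,
        permission  VARCHAR(100) NOT NULL,         -- e.g. 'pm.project.create'
        PRIMARY KEY (aclEntityId, userId, permission),
        FOREIGN KEY (aclEntityId) REFERENCES AclEntities(id)
      );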

    Read the article

  • Investigating on xVelocity (VertiPaq) column size

    - by Marco Russo (SQLBI)
      In January I published an article about how to optimize high cardinality columns in VertiPaq. In the meantime, VertiPaq has been rebranded to xVelocity: the official name is now “xVelocity in-memory analytics engine (VertiPaq)”, but using xVelocity or VertiPaq when we talk about Analysis Services has the same meaning. In this post I’ll show how to investigate the column sizes of an existing Tabular database so that you can find the most important columns to be optimized. A first approach can be looking in the DataDir of Analysis Services for the folder containing the database. Then, look for the biggest files in all subfolders and you will find the name of a file that contains the name of the most expensive column. However, this heuristic process is not very optimized. A better approach is using a DMV that provides the exact information. For example, by using the following query (open SSMS, open an MDX query on the database you are interested in and execute it) you will see all database objects sorted by used size in descending order.
SELECT * FROM $SYSTEM.DISCOVER_STORAGE_TABLE_COLUMN_SEGMENTS ORDER BY used_size DESC
You can look at the first rows in order to understand what are the most expensive columns in your tabular model. The interesting data provided are:
TABLE_ID: the name of the object – it can also be a dictionary or an index
COLUMN_ID: the column name the object belongs to – you can also see ID_TO_POS and POS_TO_ID in case they refer to internal indexes
RECORDS_COUNT: the number of rows in the column
USED_SIZE: the used memory for the object
By looking at the ratio between USED_SIZE and RECORDS_COUNT you can understand what you can do in order to optimize your tabular model. Your options are:
Remove the column. Yes, if it contains data you will never use in a query, simply remove the column from the tabular model.
Change granularity. If you are tracking time and you included milliseconds but seconds would be enough, round the data source column to the nearest second. If you have a floating point number but two decimals are good enough (i.e. the temperature), round the number to the nearest decimal that is relevant to you.
Split the column. Create two or more columns that have to be combined together in order to produce the original value. This technique is described in the VertiPaq optimization article.
Sort the table by that column. When you read the data source, you might consider sorting data by this column, so that the compression will be more efficient. However, this technique works better on columns that don’t have too many distinct values, and you will probably move the problem to another column. Sorting data starting from the lower density columns (those with a small number of distinct values) and going to higher density columns (those with high cardinality) is the technique that provides the best compression ratio.
After the optimization you should be able to reduce the used size and improve the count/size ratio you measured before. If you are interested in a longer discussion about internal storage in VertiPaq and you want to understand why this approach can save you space (and time), you can attend my 24 Hours of PASS session “VertiPaq Under the Hood” on March 21 at 08:00 GMT.
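    The DMV dialect does not allow computed expressions, so if you want the USED_SIZE to RECORDS_COUNT ratio computed for you, one workaround (not from the original article) is to run the DMV through a SQL Server linked server pointing at the Tabular instance and do the arithmetic in T-SQL; the linked server name SSAS_TABULAR below is an assumption.
      SELECT  TABLE_ID,
              COLUMN_ID,
              RECORDS_COUNT,
              USED_SIZE,
              USED_SIZE * 1.0 / NULLIF(RECORDS_COUNT, 0) AS bytes_per_row
      FROM    OPENQUERY(SSAS_TABULAR,
                'SELECT TABLE_ID, COLUMN_ID, RECORDS_COUNT, USED_SIZE
                 FROM $SYSTEM.DISCOVER_STORAGE_TABLE_COLUMN_SEGMENTS')
      ORDER BY bytes_per_row DESC;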

    Read the article

< Previous Page | 318 319 320 321 322 323 324 325 326 327 328 329  | Next Page >