Search Results

Search found 2031 results on 82 pages for '37 signals'.

  • Does GlassFish 2.1.1 have a bug that makes it handle an HTTP request twice?

    - by marabol
    I'm using GlassFish 2.1.1 and have observed some mysterious HTTP/webservice-call handling: a single HTTP request appears to be handled by two different threads. After HTTP basic authentication the first thread is faster; it persists its data successfully, but writing the response then fails inside GlassFish. The second thread fails because it tries to persist identical data and hits (unique) constraint violations, and that failure response is what gets delivered to the client. I don't want to discuss the unique-constraint behaviour itself; I have already improved the webservice to handle it better, because the client could legitimately send the same call a second time at any point. But I think GlassFish 2.1.1 has a bug in its HTTP request handling. Is there any known issue? Or have I made a mistake?
    [#|2010-03-22T10:40:54.150+0000|INFO|sun-appserver2.1|javax.enterprise.system.core|_ThreadID=10;_ThreadName=main;|Starting Sun GlassFish Enterprise Server v2.1.1 ((v2.1 Patch06)(9.1_02 Patch12)) (build b31g-fcs) ...|#] ...
    [#|2010-03-22T11:18:44.220+0000|FINE|sun-appserver2.1|mypackage.module.security.auth.realm.YaJdbcRealm|_ThreadID=26;_ThreadName=httpSSLWorkerThread-8080-1;ClassName=mypackage.module.security.auth.realm.YaJdbcRealm;MethodName=authenticate;_RequestID=4d8f23e9-5106-4d64-b865-1638d7075bde;|JDBC authenticate successful for: 8002 groups:[roleUser]|#]
    [#|2010-03-22T11:18:44.220+0000|FINE|sun-appserver2.1|mypackage.module.security.auth.login.YaJdbcLoginModule|_ThreadID=26;_ThreadName=httpSSLWorkerThread-8080-1;ClassName=mypackage.module.security.auth.login.YaJdbcLoginModule;MethodName=authenticate;_RequestID=4d8f23e9-5106-4d64-b865-1638d7075bde;|JDBC login succeeded for: 8002 groups:[roleUser]|#]
    [#|2010-03-22T11:18:44.220+0000|FINE|sun-appserver2.1|mypackage.module.security.auth.realm.YaJdbcRealm|_ThreadID=39;_ThreadName=httpSSLWorkerThread-8080-2;ClassName=mypackage.module.security.auth.realm.YaJdbcRealm;MethodName=authenticate;_RequestID=4ca7e3e5-5ab7-41ec-b3c9-d9260b1164c9;|JDBC authenticate successful for: 8002 groups:[roleUser]|#]
    [#|2010-03-22T11:18:44.220+0000|FINE|sun-appserver2.1|mypackage.module.security.auth.login.YaJdbcLoginModule|_ThreadID=39;_ThreadName=httpSSLWorkerThread-8080-2;ClassName=mypackage.module.security.auth.login.YaJdbcLoginModule;MethodName=authenticate;_RequestID=4ca7e3e5-5ab7-41ec-b3c9-d9260b1164c9;|JDBC login succeeded for: 8002 groups:[roleUser]|#]
    [#|2010-03-22T11:18:44.220+0000|FINE|sun-appserver2.1|mypackage.MyWebService|_ThreadID=26;_ThreadName=httpSSLWorkerThread-8080-1;ClassName=mypackage.MyWebService;MethodName=enqueue;_RequestID=4d8f23e9-5106-4d64-b865-1638d7075bde;|Received WebService call to enqueue() from client 59|#]
    [#|2010-03-22T11:18:44.220+0000|FINE|sun-appserver2.1|mypackage.MyWebService|_ThreadID=39;_ThreadName=httpSSLWorkerThread-8080-2;ClassName=mypackage.MyWebService;MethodName=enqueue;_RequestID=4ca7e3e5-5ab7-41ec-b3c9-d9260b1164c9;|Received WebService call to enqueue() from client 59|#] ...
[#|2010-03-22T11:18:44.267+0000|FINE|sun-appserver2.1|mypackage.MyWebService|_ThreadID=26;_ThreadName=httpSSLWorkerThread-8080-1;ClassName=mypackage.MyWebService;MethodName=enqueue;_RequestID=4d8f23e9-5106-4d64-b865-1638d7075bde;|Successfully finished WebService call to enqueue() from client 59|#] [#|2010-03-22T11:18:44.329+0000|WARNING|sun-appserver2.1|javax.enterprise.system.container.ejb|_ThreadID=26;_ThreadName=httpSSLWorkerThread-8080-1;_RequestID=4d8f23e9-5106-4d64-b865-1638d7075bde;|invocation error on ejb endpoint MyWebService at /MyWebserviceService/MyWebservice : com.sun.xml.stream.XMLStreamException2 javax.xml.ws.WebServiceException: com.sun.xml.stream.XMLStreamException2 at com.sun.xml.ws.encoding.StreamSOAPCodec.encode(StreamSOAPCodec.java:111) at com.sun.xml.ws.encoding.SOAPBindingCodec.encode(SOAPBindingCodec.java:281) at com.sun.xml.ws.transport.http.HttpAdapter.encodePacket(HttpAdapter.java:320) at com.sun.xml.ws.transport.http.HttpAdapter.access$100(HttpAdapter.java:93) at com.sun.xml.ws.transport.http.HttpAdapter$HttpToolkit.handle(HttpAdapter.java:454) at com.sun.xml.ws.transport.http.HttpAdapter.handle(HttpAdapter.java:244) at com.sun.xml.ws.transport.http.servlet.ServletAdapter.handle(ServletAdapter.java:135) at com.sun.enterprise.webservice.Ejb3MessageDispatcher.handlePost(Ejb3MessageDispatcher.java:113) at com.sun.enterprise.webservice.Ejb3MessageDispatcher.invoke(Ejb3MessageDispatcher.java:87) at com.sun.enterprise.webservice.EjbWebServiceServlet.dispatchToEjbEndpoint(EjbWebServiceServlet.java:231) at com.sun.enterprise.webservice.EjbWebServiceServlet.service(EjbWebServiceServlet.java:157) at javax.servlet.http.HttpServlet.service(HttpServlet.java:847) at com.sun.enterprise.web.AdHocContextValve.invoke(AdHocContextValve.java:114) at org.apache.catalina.core.StandardPipeline.doInvoke(StandardPipeline.java:648) at org.apache.catalina.core.StandardPipeline.doInvoke(StandardPipeline.java:593) at org.apache.catalina.core.StandardPipeline.invoke(StandardPipeline.java:587) at com.sun.enterprise.web.WebPipeline.invoke(WebPipeline.java:87) at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:222) at org.apache.catalina.core.StandardPipeline.doInvoke(StandardPipeline.java:648) at org.apache.catalina.core.StandardPipeline.doInvoke(StandardPipeline.java:593) at org.apache.catalina.core.StandardPipeline.invoke(StandardPipeline.java:587) at org.apache.catalina.core.ContainerBase.invoke(ContainerBase.java:1093) at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:166) at org.apache.catalina.core.StandardPipeline.doInvoke(StandardPipeline.java:648) at org.apache.catalina.core.StandardPipeline.doInvoke(StandardPipeline.java:593) at org.apache.catalina.core.StandardPipeline.invoke(StandardPipeline.java:587) at org.apache.catalina.core.ContainerBase.invoke(ContainerBase.java:1093) at org.apache.coyote.tomcat5.CoyoteAdapter.service(CoyoteAdapter.java:291) at com.sun.enterprise.web.connector.grizzly.DefaultProcessorTask.invokeAdapter(DefaultProcessorTask.java:666) at com.sun.enterprise.web.connector.grizzly.comet.CometEngine.executeServlet(CometEngine.java:616) at com.sun.enterprise.web.connector.grizzly.comet.CometEngine.handle(CometEngine.java:362) at com.sun.enterprise.web.connector.grizzly.comet.CometAsyncFilter.doFilter(CometAsyncFilter.java:84) at com.sun.enterprise.web.connector.grizzly.async.DefaultAsyncExecutor.invokeFilters(DefaultAsyncExecutor.java:189) at 
com.sun.enterprise.web.connector.grizzly.async.DefaultAsyncExecutor.interrupt(DefaultAsyncExecutor.java:164) at com.sun.enterprise.web.connector.grizzly.async.AsyncProcessorTask.doTask(AsyncProcessorTask.java:92) at com.sun.enterprise.web.connector.grizzly.TaskBase.run(TaskBase.java:264) at com.sun.enterprise.web.connector.grizzly.ssl.SSLWorkerThread.run(SSLWorkerThread.java:106) Caused by: com.sun.xml.stream.XMLStreamException2 at com.sun.xml.stream.writers.XMLStreamWriterImpl.flush(XMLStreamWriterImpl.java:416) at com.sun.xml.ws.encoding.StreamSOAPCodec.encode(StreamSOAPCodec.java:109) ... 36 more Caused by: ClientAbortException: java.nio.channels.ClosedChannelException at org.apache.coyote.tomcat5.OutputBuffer.doFlush(OutputBuffer.java:385) at org.apache.coyote.tomcat5.OutputBuffer.flush(OutputBuffer.java:351) at org.apache.coyote.tomcat5.CoyoteOutputStream.flush(CoyoteOutputStream.java:176) at com.sun.xml.stream.writers.UTF8OutputStreamWriter.flush(UTF8OutputStreamWriter.java:153) at com.sun.xml.stream.writers.XMLStreamWriterImpl.flush(XMLStreamWriterImpl.java:414) ... 37 more Caused by: java.nio.channels.ClosedChannelException at sun.nio.ch.SocketChannelImpl.ensureWriteOpen(SocketChannelImpl.java:126) at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:324) at com.sun.enterprise.web.connector.grizzly.OutputWriter.flushChannel(OutputWriter.java:91) at com.sun.enterprise.web.connector.grizzly.OutputWriter.flushChannel(OutputWriter.java:66) at com.sun.enterprise.web.connector.grizzly.SocketChannelOutputBuffer.flushChannel(SocketChannelOutputBuffer.java:172) at com.sun.enterprise.web.connector.grizzly.async.AsynchronousOutputBuffer.flushChannel(AsynchronousOutputBuffer.java:81) at com.sun.enterprise.web.connector.grizzly.SocketChannelOutputBuffer.flushBuffer(SocketChannelOutputBuffer.java:205) at com.sun.enterprise.web.connector.grizzly.async.AsynchronousOutputBuffer.flushBuffer(AsynchronousOutputBuffer.java:114) at com.sun.enterprise.web.connector.grizzly.SocketChannelOutputBuffer.flush(SocketChannelOutputBuffer.java:183) at com.sun.enterprise.web.connector.grizzly.async.AsynchronousOutputBuffer.flush(AsynchronousOutputBuffer.java:104) at com.sun.enterprise.web.connector.grizzly.DefaultProcessorTask.action(DefaultProcessorTask.java:1100) at org.apache.coyote.Response.action(Response.java:237) at org.apache.coyote.tomcat5.OutputBuffer.doFlush(OutputBuffer.java:381) ... 41 more |#] [#|2010-03-22T11:18:44.376+0000|WARNING|sun-appserver2.1|oracle.toplink.essentials.session.file:/mygf-211/domains/mydomain/applications/j2ee-apps/myear/myjar-myPu|_ThreadID=39;_ThreadName=httpSSLWorkerThread-8080-2;_RequestID=4ca7e3e5-5ab7-41ec-b3c9-d9260b1164c9;| Local Exception Stack: Exception [TOPLINK-4002] (Oracle TopLink Essentials - 2.1 (Build b31g-fcs (10/19/2009))): oracle.toplink.essentials.exceptions.DatabaseException Internal Exception: com.microsoft.sqlserver.jdbc.SQLServerException: Eine Zeile mit doppeltem Schlüssel kann in das 'dbo.MY_TABLE'-Objekt mit dem eindeutigen 'MY_INDEX'-Index nicht eingefügt werden.

    Read the article

  • Calling a status bar notification method from another class.

    - by Jez Fischer
    Firstly, I am new to both android and Java. I have two classes, my main.class and Note.class. I am calling the notification method from my Note.class in my main.class when i press a button. The issue is with this line from the Note.class : PendingIntent contentIntent = PendingIntent.getActivity(this, 0, notificationIntent, 0); notification.setLatestEventInfo(context, contentTitle, contentText, contentIntent); When the method is called it force closes. I believe the problem to be with the "this" in PendingIntent.getActivity(this, 0, notificationIntent, 0);, but I am unsure what to change it to. The notification code works fine if it's in the main class. I would be very grateful for any guidance. Edit: Main class : http://pastebin.com/05Yx0a48 Note.class : package com.adamblanchard.remindme.com.adamblanchard; import com.adamblanchard.remindme.R; import android.app.Activity; import android.app.Notification; import android.app.NotificationManager; import android.app.PendingIntent; import android.content.Context; import android.content.Intent; import android.os.Bundle; public class Note extends Activity { public CharSequence note = "not changed"; int HELLO_ID = 1; /** Called when the activity is first created. */ @Override public void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.main); setTitle("Remind Me!"); } //Notification Method public void callNotification() { // TODO Auto-generated method stub String ns = Context.NOTIFICATION_SERVICE; final NotificationManager mNotificationManager = (NotificationManager) getSystemService(ns); int icon = R.drawable.launcher; CharSequence tickerText = "Remind Me!"; long when = System.currentTimeMillis(); final Notification notification = new Notification(icon, tickerText, when); notification.flags |= Notification.FLAG_AUTO_CANCEL; final Context context = getApplicationContext(); CharSequence contentTitle = "Remind Me!"; CharSequence contentText = note; Intent notificationIntent = new Intent(context, AndroidNotifications.class); PendingIntent contentIntent = PendingIntent.getActivity(this, 0, notificationIntent, 0); notification.setLatestEventInfo(context, contentTitle, contentText, contentIntent); mNotificationManager.notify(HELLO_ID, notification); HELLO_ID++; } } Debug Output : Thread [<1 main] (Suspended (exception IllegalStateException)) Note(Activity).getSystemService(String) line: 3536 Note.callNotification() line: 37 remindme$1$1.onClick(DialogInterface, int) line: 72 AlertDialog(AlertController$ButtonHandler).handleMessage(Message) line: 159 AlertController$ButtonHandler(Handler).dispatchMessage(Message) line: 99 Looper.loop() line: 123 ActivityThread.main(String[]) line: 3647 Method.invokeNative(Object, Object[], Class, Class[], Class, int, boolean) line: not available [native method] Method.invoke(Object, Object...) line: 507 ZygoteInit$MethodAndArgsCaller.run() line: 839 ZygoteInit.main(String[]) line: 597 NativeStart.main(String[]) line: not available [native method] This is the debug output I get, plus a force close popup on the device. 
Edit2: Manifest xml: <?xml version="1.0" encoding="utf-8"?> <manifest xmlns:android="http://schemas.android.com/apk/res/android" package="com.adamblanchard.remindme" android:versionCode="3" android:versionName="0.7"> <application android:label="@string/app_name" android:icon="@drawable/ic_launcher72"> <activity android:name=".com.adamblanchard.remindme" android:label="@string/app_name"> <intent-filter> <action android:name="android.intent.action.MAIN" /> <category android:name="android.intent.category.LAUNCHER" /> </intent-filter> </activity> <activity android:name=".Note"> <intent-filter> <action android:name="Note" /> <category android:name="android.intent.category.DEFAULT"/> </intent-filter> </activity> </application> <uses-sdk android:minSdkVersion="1"></uses-sdk> </manifest> Stack traces (Are these what you mean?): Thread [<1> main] (Suspended (exception ActivityNotFoundException)) Instrumentation.checkStartActivityResult(int, Object) line: 1404 Instrumentation.execStartActivity(Context, IBinder, IBinder, Activity, Intent, int) line: 1378 remindme(Activity).startActivityForResult(Intent, int) line: 2827 remindme(Activity).startActivity(Intent) line: 2933 remindme$1$1.onClick(DialogInterface, int) line: 82 AlertDialog(AlertController$ButtonHandler).handleMessage(Message) line: 159 AlertController$ButtonHandler(Handler).dispatchMessage(Message) line: 99 Looper.loop() line: 123 ActivityThread.main(String[]) line: 3647 Method.invokeNative(Object, Object[], Class, Class[], Class, int, boolean) line: not available [native method] Method.invoke(Object, Object...) line: 507 ZygoteInit$MethodAndArgsCaller.run() line: 839 ZygoteInit.main(String[]) line: 597 NativeStart.main(String[]) line: not available [native method]
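
    The stack trace above suggests getSystemService() is being called on a Note instance that was created with new and never actually started as an Activity, so it has no base Context yet. A minimal sketch of one way around that (a hypothetical refactor, reusing the names from the question): pass the calling activity's Context into the method and use it everywhere instead of this.

    public void callNotification(Context context) {
        // use the supplied Context, not the never-started Activity instance
        NotificationManager manager =
                (NotificationManager) context.getSystemService(Context.NOTIFICATION_SERVICE);

        Notification notification = new Notification(R.drawable.launcher,
                "Remind Me!", System.currentTimeMillis());
        notification.flags |= Notification.FLAG_AUTO_CANCEL;

        Intent notificationIntent = new Intent(context, AndroidNotifications.class);
        PendingIntent contentIntent =
                PendingIntent.getActivity(context, 0, notificationIntent, 0);

        notification.setLatestEventInfo(context, "Remind Me!", note, contentIntent);
        manager.notify(HELLO_ID++, notification);
    }

    The main activity would then call new Note().callNotification(this); better still, the method could live in a plain helper class rather than an Activity subclass, since its activity lifecycle is never started.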

    Read the article

  • AutoPostBack problem with a ListBox in ASP.NET

    - by lodun
    I want to add posts (questions) the way Yahoo Answers does. When I choose an item in the "kategorije" control, the items are not loaded into the "SUB_kategorije" control. My new ascx.cs:

    protected void Page_Load(object sender, EventArgs e)
    {
        if (!Page.IsPostBack)
        {
            SqlDataSource ds = new SqlDataSource();
            ds.ConnectionString = conn;
            ds.SelectCommand = "SELECT [ID], [Kategorije] FROM [kategorije] ";
            kategorije.DataSource = ds;
            kategorije.DataTextField = "Kategorije";
            kategorije.DataValueField = "ID";
            kategorije.DataBind();
            kategorije.SelectedIndex = 1;

            SqlDataSource dk = new SqlDataSource();
            dk.ConnectionString = conn;
            dk.SelectCommand = "SELECT * from pod_kategorije WHERE kat_id = " + kategorije.SelectedItem.Value;
            SUB_kategorije.DataSource = dk;
            SUB_kategorije.DataTextField = "pkategorija";
            SUB_kategorije.DataValueField = "ID";
            SUB_kategorije.DataBind();
        }
    }

    protected void kategorije_SelectedIndexChanged(object sender, EventArgs e)
    {
        SqlDataSource dk = new SqlDataSource();
        dk.ConnectionString = conn;
        dk.SelectCommand = "SELECT * from pod_kategorije WHERE [kat_id] = " + kategorije.SelectedItem.Value;
        SUB_kategorije.DataSource = dk;
        SUB_kategorije.DataTextField = "pkategorija";
        SUB_kategorije.DataValueField = "ID";
        SUB_kategorije.DataBind();
    }

    and .ascx:

    <asp:ListBox ID="kategorije" runat="server" Height="380px" CssClass="kat" AutoPostBack="true"
        onselectedindexchanged="kategorije_SelectedIndexChanged"></asp:ListBox>

    <asp:Button ID="Button1" CssClass="posalji" runat="server" Text="click"
        onclick="Button1_Click" />

    <asp:UpdatePanel ID="UpdatePanel10" runat="server">
        <ContentTemplate>
            <asp:ListBox ID="SUB_kategorije" CssClass="pod" Height="150px" runat="server"></asp:ListBox>
        </ContentTemplate>
        <Triggers>
            <asp:AsyncPostBackTrigger ControlID="kategorije" EventName="SelectedIndexChanged" />
        </Triggers>
    </asp:UpdatePanel>
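
    One thing worth checking, sketched below (hypothetical helper method name; everything else is taken from the question): build the child query with a parameter instead of string concatenation, so the selected value cannot break or inject into the SQL, and keep the SUB_kategorije binding in a single method that both Page_Load and the SelectedIndexChanged handler call.

    private void BindSubKategorije()
    {
        SqlDataSource dk = new SqlDataSource();
        dk.ConnectionString = conn;
        dk.SelectCommand = "SELECT * FROM pod_kategorije WHERE kat_id = @katId";
        dk.SelectParameters.Add("katId", kategorije.SelectedValue);
        SUB_kategorije.DataSource = dk;
        SUB_kategorije.DataTextField = "pkategorija";
        SUB_kategorije.DataValueField = "ID";
        SUB_kategorije.DataBind();
    }

    protected void kategorije_SelectedIndexChanged(object sender, EventArgs e)
    {
        BindSubKategorije();
    }

    If the handler still never fires, it is worth verifying in the rendered page that the kategorije ListBox really triggers an asynchronous postback, since it sits outside the UpdatePanel and relies on the AsyncPostBackTrigger registration for the partial update to reach SUB_kategorije.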

    Read the article

  • Possible memory leak problem

    - by MaiTiano
    I write two pieces of c programs like following, during memcheck process using Valgrind, a lot of mem leak information is given. int GetMemory(int framewidth, int frameheight, int SR/*, int blocksize*//*,int ALL_REF_NUM*/) { //int i,j; int memory_size = 0; //int refnum = ALL_REF_NUM; int input_search_range = SR; memory_size += get_mem2D(&curFrameY, frameheight, framewidth); memory_size += get_mem2D(&curFrameU, frameheight>>1, framewidth>>1); memory_size += get_mem2D(&curFrameV, frameheight>>1, framewidth>>1); memory_size += get_mem3D(&prevFrameY, refnum, frameheight, framewidth);// to allocate reference frame buffer accoding to the reference frame number memory_size += get_mem3D(&prevFrameU, refnum, frameheight>>1, framewidth>>1); memory_size += get_mem3D(&prevFrameV, refnum, frameheight>>1, framewidth>>1); memory_size += get_mem2D(&mpFrameY, frameheight, framewidth); memory_size += get_mem2D(&mpFrameU, frameheight>>1, framewidth>>1); memory_size += get_mem2D(&mpFrameV, frameheight>>1, framewidth>>1); memory_size += get_mem2D(&searchwindow, input_search_range*2 + blocksize, input_search_range*2 + blocksize);// to allocate search window according to the searchrange /*memory_size +=*/ get_mem1D(/*&SAD_cost, height, width*/); // memory_size += get_mem2D(&searchwindow, 80, 80);// if searchrange is 32, then only 32+1+32+15 pixels is needed in each row and col, therefore the range of // search window is enough to be set to 80 ! memory_size += get_mem2Dint(&all_mv, height/blocksize, width/blocksize); return 0; } void FreeMemory(int refno) { free_mem2D(curFrameY); free_mem2D(curFrameU); free_mem2D(curFrameV); free_mem3D(prevFrameY,refno); free_mem3D(prevFrameU,refno); free_mem3D(prevFrameV,refno); free_mem2D(mpFrameY); free_mem2D(mpFrameU); free_mem2D(mpFrameV); free_mem2D(searchwindow); free_mem1D(); free_mem2Dint(all_mv); } void free_mem1D() { free(SAD_cost); } Now I hope to make sure where are the possible problems in my program? Here I may post some valgrind information to let you know about the actual error information. 
==29105== by 0x804A906: main (me_search.c:1480) ==29105== ==29105== ==29105== HEAP SUMMARY: ==29105== in use at exit: 124,088 bytes in 18 blocks ==29105== total heap usage: 37 allocs, 21 frees, 749,276 bytes allocated ==29105== ==29105== 272 bytes in 1 blocks are still reachable in loss record 1 of 18 ==29105== at 0x402425F: calloc (vg_replace_malloc.c:467) ==29105== by 0x804A296: get_mem2D (me_search.c:1315) ==29105== by 0x804885E: GetMemory (me_search.c:117) ==29105== by 0x804A757: main (me_search.c:1456) ==29105== ==29105== 352 bytes in 1 blocks are still reachable in loss record 2 of 18 ==29105== at 0x4024F20: malloc (vg_replace_malloc.c:236) ==29105== by 0x409537E: __fopen_internal (iofopen.c:76) ==29105== by 0x409544B: fopen@@GLIBC_2.1 (iofopen.c:107) ==29105== by 0x804A660: main (me_search.c:1439) ==29105== ==29105== 584 bytes in 1 blocks are still reachable in loss record 3 of 18 ==29105== at 0x402425F: calloc (vg_replace_malloc.c:467) ==29105== by 0x804A296: get_mem2D (me_search.c:1315) ==29105== by 0x8048724: GetMemory (me_search.c:106) ==29105== by 0x804A757: main (me_search.c:1456) ==29105== ==29105== 584 bytes in 1 blocks are still reachable in loss record 4 of 18 ==29105== at 0x402425F: calloc (vg_replace_malloc.c:467) ==29105== by 0x804A296: get_mem2D (me_search.c:1315) ==29105== by 0x8048747: GetMemory (me_search.c:107) ==29105== by 0x804A757: main (me_search.c:1456) ==29105== ==29105== 584 bytes in 1 blocks are still reachable in loss record 5 of 18 ==29105== at 0x402425F: calloc (vg_replace_malloc.c:467) ==29105== by 0x804A296: get_mem2D (me_search.c:1315) ==29105== by 0x8048809: GetMemory (me_search.c:114) ==29105== by 0x804A757: main (me_search.c:1456) ==29105== ==29105== 584 bytes in 1 blocks are still reachable in loss record 6 of 18 ==29105== at 0x402425F: calloc (vg_replace_malloc.c:467) ==29105== by 0x804A296: get_mem2D (me_search.c:1315) ==29105== by 0x804882C: GetMemory (me_search.c:115) ==29105== by 0x804A757: main (me_search.c:1456) ==29105== ==29105== 584 bytes in 1 blocks are definitely lost in loss record 7 of 18 ==29105== at 0x402425F: calloc (vg_replace_malloc.c:467) ==29105== by 0x804A296: get_mem2D (me_search.c:1315) ==29105== by 0x804A4F8: get_mem3D (me_search.c:1393) ==29105== by 0x804879B: GetMemory (me_search.c:110) ==29105== by 0x804A757: main (me_search.c:1456) ==29105== ==29105== 584 bytes in 1 blocks are definitely lost in loss record 8 of 18 ==29105== at 0x402425F: calloc (vg_replace_malloc.c:467) ==29105== by 0x804A296: get_mem2D (me_search.c:1315) ==29105== by 0x804A4F8: get_mem3D (me_search.c:1393) ==29105== by 0x80487C9: GetMemory (me_search.c:111) ==29105== by 0x804A757: main (me_search.c:1456) ==29105== ==29105== 1,168 bytes in 1 blocks are still reachable in loss record 9 of 18 ==29105== at 0x402425F: calloc (vg_replace_malloc.c:467) ==29105== by 0x804A296: get_mem2D (me_search.c:1315) ==29105== by 0x8048701: GetMemory (me_search.c:105) ==29105== by 0x804A757: main (me_search.c:1456) ==29105== ==29105== 1,168 bytes in 1 blocks are still reachable in loss record 10 of 18 ==29105== at 0x402425F: calloc (vg_replace_malloc.c:467) ==29105== by 0x804A296: get_mem2D (me_search.c:1315) ==29105== by 0x80487E6: GetMemory (me_search.c:113) ==29105== by 0x804A757: main (me_search.c:1456) ==29105== ==29105== 1,168 bytes in 1 blocks are definitely lost in loss record 11 of 18 ==29105== at 0x402425F: calloc (vg_replace_malloc.c:467) ==29105== by 0x804A296: get_mem2D (me_search.c:1315) ==29105== by 0x804A4F8: get_mem3D (me_search.c:1393) ==29105== by 
0x804876D: GetMemory (me_search.c:109) ==29105== by 0x804A757: main (me_search.c:1456) ==29105== ==29105== 6,336 bytes in 1 blocks are definitely lost in loss record 12 of 18 ==29105== at 0x4024F20: malloc (vg_replace_malloc.c:236) ==29105== by 0x804A25C: get_mem1D (me_search.c:1295) ==29105== by 0x8048866: GetMemory (me_search.c:119) ==29105== by 0x804A757: main (me_search.c:1456)
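
    For reference, here is a minimal, self-contained sketch (hypothetical alloc2D/alloc3D helpers, not the original get_mem2D/get_mem3D) of the pattern behind those Valgrind records. Every "definitely lost" block above comes from the get_mem3D calls and from get_mem1D, which is what typically appears when the free loop runs over a different count than the allocation used (for example a refno passed to FreeMemory that is smaller than the refnum used in GetMemory), or when the pointer that get_mem1D is supposed to fill (the commented-out SAD_cost argument) is never stored anywhere that free_mem1D can reach.

    #include <stdlib.h>

    /* hypothetical helpers, not the original allocators */
    static unsigned char **alloc2D(int rows, int cols)
    {
        unsigned char **p = calloc(rows, sizeof *p);
        for (int r = 0; r < rows; r++)
            p[r] = calloc(cols, sizeof **p);
        return p;
    }

    static void free2D(unsigned char **p, int rows)
    {
        for (int r = 0; r < rows; r++)
            free(p[r]);
        free(p);
    }

    static unsigned char ***alloc3D(int frames, int rows, int cols)
    {
        unsigned char ***p = calloc(frames, sizeof *p);
        for (int f = 0; f < frames; f++)
            p[f] = alloc2D(rows, cols);
        return p;
    }

    static void free3D(unsigned char ***p, int frames, int rows)
    {
        for (int f = 0; f < frames; f++)   /* must match the count used at allocation */
            free2D(p[f], rows);
        free(p);
    }

    int main(void)
    {
        unsigned char ***prev = alloc3D(2, 288, 352);
        free3D(prev, 2, 288);   /* freeing with frames=1 here would leak one whole plane */
        return 0;
    }

    Confirming that FreeMemory receives exactly the same reference count as GetMemory, and that get_mem1D really assigns the global SAD_cost that free_mem1D later frees, would be the first things to check against loss records 7, 8, 11 and 12.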

    Read the article

  • Updating a Data Source with a Dataset

    - by Paul
    Hi, I need advice. I have an ASP.NET web service and a WinForms client app. The client calls this web method and gets a DataSet:

    [WebMethod]
    public DataSet GetSecureDataSet(string id)
    {
        SqlConnection conn = null;
        SqlDataAdapter da = null;
        DataSet ds;
        try
        {
            string sql = "SELECT * FROM Tab1";
            string connStr = WebConfigurationManager.ConnectionStrings["Employees"].ConnectionString;

            conn = new SqlConnection(connStr);
            conn.Open();

            da = new SqlDataAdapter(sql, conn);

            ds = new DataSet();
            da.Fill(ds, "Tab1");

            return ds;
        }
        catch (Exception ex)
        {
            throw ex;
        }
        finally
        {
            if (conn != null)
                conn.Close();
            if (da != null)
                da.Dispose();
        }
    }

    After the user finishes working with the data, the client calls this update web method. The user can add, delete and edit rows in the DataSet's table.

    [WebMethod]
    public bool SecureUpdateDataSet(DataSet ds)
    {
        SqlConnection conn = null;
        SqlDataAdapter da = null;
        SqlCommand cmd = null;
        try
        {
            DataTable delRows = ds.Tables[0].GetChanges(DataRowState.Deleted);
            DataTable addRows = ds.Tables[0].GetChanges(DataRowState.Added);
            DataTable editRows = ds.Tables[0].GetChanges(DataRowState.Modified);

            string sql = "UPDATE * FROM Tab1";
            string connStr = WebConfigurationManager.ConnectionStrings["Employees"].ConnectionString;

            conn = new SqlConnection(connStr);
            conn.Open();
            cmd = new SqlCommand(sql, conn);
            da = new SqlDataAdapter(sql, conn);

            if (addRows != null)
            {
                da.Update(addRows);
            }
            if (delRows != null)
            {
                da.Update(delRows);
            }
            if (editRows != null)
            {
                da.Update(editRows);
            }
            return true;
        }
        catch (Exception ex)
        {
            throw ex;
        }
        finally
        {
            if (conn != null)
                conn.Close();
            if (da != null)
                da.Dispose();
        }
    }

    Code on the client side:

    // on the client side the dataset is bound to a DataGridView
    DataSet ds = proxy.GetSecureDataSet("");
    ds.AcceptChanges();

    // edit the dataset

    // get the changes
    DataSet editDataSet = ds.GetChanges();

    // call the update web method
    proxy.SecureUpdateDataSet(editDataSet);

    But it fails with this error:

    System.Web.Services.Protocols.SoapException: Server was unable to process request. ---> System.InvalidOperationException: Update requires a valid UpdateCommand when passed DataRow collection with modified rows. at WebService.Service.SecureUpdateDataSet(DataSet ds) in D:\Diploma.Work\WebService\Service1.asmx.cs:line 489

    The problem is with the SQL command. The client can add, delete and edit rows, so how do I write a correct SQL command for the update? Any advice please? Thank you
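
    The exception points at the adapter itself: da is built from "UPDATE * FROM Tab1", which is not valid SQL and leaves the adapter without the InsertCommand/UpdateCommand/DeleteCommand it needs. A minimal sketch of the usual fix (assuming Tab1 has a primary key; the connection string name is taken from the question) is to build the adapter from a plain SELECT and let SqlCommandBuilder derive the three commands:

    [WebMethod]
    public bool SecureUpdateDataSet(DataSet ds)
    {
        string connStr = WebConfigurationManager.ConnectionStrings["Employees"].ConnectionString;
        using (SqlConnection conn = new SqlConnection(connStr))
        using (SqlDataAdapter da = new SqlDataAdapter("SELECT * FROM Tab1", conn))
        using (SqlCommandBuilder cb = new SqlCommandBuilder(da))
        {
            // Update() looks at each row's RowState and issues the matching
            // INSERT, UPDATE or DELETE generated by the command builder.
            da.Update(ds, "Tab1");
            return true;
        }
    }

    The client can keep sending ds.GetChanges() exactly as it already does; there is no need to split the rows into added, deleted and modified tables before calling Update().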

    Read the article

  • NHibernate fatal error

    - by Afif Lamloumi
    i have an error ( System.InvalidCastException: Unable to cast object of type 'AccountProxy' to type 'System.String'.) when i did this code i mapped the tables( Account,AccountString,EventData,...) of the the database opengts ( open source) i have this error when i called a function from EventData.cs IQuery query = session.CreateQuery("FROM Eventdata"); IList pets = query.List(); return pets; the Stack Trace: [InvalidCastException: Impossible d'effectuer un cast d'un objet de type 'AccountProxy' en type 'System.String'.] (Object , Object[] , SetterCallback ) +431 NHibernate.Bytecode.Lightweight.AccessOptimizer.SetPropertyValues(Object target, Object[] values) +20 NHibernate.Tuple.Component.PocoComponentTuplizer.SetPropertyValues(Object component, Object[] values) +49 NHibernate.Type.ComponentType.SetPropertyValues(Object component, Object[] values, EntityMode entityMode) +34 NHibernate.Type.ComponentType.ResolveIdentifier(Object value, ISessionImplementor session, Object owner) +150 NHibernate.Type.ComponentType.NullSafeGet(IDataReader rs, String[] names, ISessionImplementor session, Object owner) +42 NHibernate.Loader.Loader.GetKeyFromResultSet(Int32 i, IEntityPersister persister, Object id, IDataReader rs, ISessionImplementor session) +93 NHibernate.Loader.Loader.GetRowFromResultSet(IDataReader resultSet, ISessionImplementor session, QueryParameters queryParameters, LockMode[] lockModeArray, EntityKey optionalObjectKey, IList hydratedObjects, EntityKey[] keys, Boolean returnProxies) +92 NHibernate.Loader.Loader.DoQuery(ISessionImplementor session, QueryParameters queryParameters, Boolean returnProxies) +675 NHibernate.Loader.Loader.DoQueryAndInitializeNonLazyCollections(ISessionImplementor session, QueryParameters queryParameters, Boolean returnProxies) +129 NHibernate.Loader.Loader.DoList(ISessionImplementor session, QueryParameters queryParameters) +116 [GenericADOException: could not execute query [ select eventdata0_.deviceID as deviceID5_, eventdata0_.timestamp as timestamp5_, eventdata0_.statusCode as statusCode5_, eventdata0_.accountID as accountID5_, eventdata0_.latitude as latitude5_, eventdata0_.longitude as longitude5_, eventdata0_.gpsAge as gpsAge5_, eventdata0_.speedKPH as speedKPH5_, eventdata0_.heading as heading5_, eventdata0_.altitude as altitude5_, eventdata0_.transportID as transpo11_5_, eventdata0_.inputMask as inputMask5_, eventdata0_.outputMask as outputMask5_, eventdata0_.address as address5_, eventdata0_.DataSource as DataSource5_, eventdata0_.rawdata as rawdata5_, eventdata0_.distanceKM as distanceKM5_, eventdata0_.odometerKM as odometerKM5_, eventdata0_.geozoneIndex as geozone19_5_, eventdata0_.geozoneID as geozoneID5_, eventdata0_.creationTime as creatio21_5_ from eventdata eventdata0_ ] [SQL: select eventdata0_.deviceID as deviceID5_, eventdata0_.timestamp as timestamp5_, eventdata0_.statusCode as statusCode5_, eventdata0_.accountID as accountID5_, eventdata0_.latitude as latitude5_, eventdata0_.longitude as longitude5_, eventdata0_.gpsAge as gpsAge5_, eventdata0_.speedKPH as speedKPH5_, eventdata0_.heading as heading5_, eventdata0_.altitude as altitude5_, eventdata0_.transportID as transpo11_5_, eventdata0_.inputMask as inputMask5_, eventdata0_.outputMask as outputMask5_, eventdata0_.address as address5_, eventdata0_.DataSource as DataSource5_, eventdata0_.rawdata as rawdata5_, eventdata0_.distanceKM as distanceKM5_, eventdata0_.odometerKM as odometerKM5_, eventdata0_.geozoneIndex as geozone19_5_, eventdata0_.geozoneID as geozoneID5_, 
eventdata0_.creationTime as creatio21_5_ from eventdata eventdata0_]] NHibernate.Loader.Loader.DoList(ISessionImplementor session, QueryParameters queryParameters) +213 NHibernate.Loader.Loader.ListIgnoreQueryCache(ISessionImplementor session, QueryParameters queryParameters) +18 NHibernate.Loader.Loader.List(ISessionImplementor session, QueryParameters queryParameters, ISet`1 querySpaces, IType[] resultTypes) +79 NHibernate.Hql.Ast.ANTLR.Loader.QueryLoader.List(ISessionImplementor session, QueryParameters queryParameters) +51 NHibernate.Hql.Ast.ANTLR.QueryTranslatorImpl.List(ISessionImplementor session, QueryParameters queryParameters) +231 NHibernate.Engine.Query.HQLQueryPlan.PerformList(QueryParameters queryParameters, ISessionImplementor session, IList results) +369 NHibernate.Impl.SessionImpl.List(String query, QueryParameters queryParameters, IList results) +317 NHibernate.Impl.SessionImpl.List(String query, QueryParameters parameters) +282 NHibernate.Impl.QueryImpl.List() +163 DATA1.EventdataExtensions.GetEventdata() in C:\Users\HP\Desktop\our_project\DATA1\Queries\Eventdata.cs:33 MvcApplication7.Controllers.HistoriqueController.Index() in C:\Users\HP\Desktop\our_project\MvcApplication7\Controllers\HistoriqueController.cs:17 lambda_method(Closure , ControllerBase , Object[] ) +62 System.Web.Mvc.ActionMethodDispatcher.Execute(ControllerBase controller, Object[] parameters) +17 System.Web.Mvc.ReflectedActionDescriptor.Execute(ControllerContext controllerContext, IDictionary`2 parameters) +208 System.Web.Mvc.ControllerActionInvoker.InvokeActionMethod(ControllerContext controllerContext, ActionDescriptor actionDescriptor, IDictionary`2 parameters) +27 System.Web.Mvc.<>c__DisplayClass15.<InvokeActionMethodWithFilters>b__12() +55 System.Web.Mvc.ControllerActionInvoker.InvokeActionMethodFilter(IActionFilter filter, ActionExecutingContext preContext, Func`1 continuation) +263 System.Web.Mvc.<>c__DisplayClass17.<InvokeActionMethodWithFilters>b__14() +19 System.Web.Mvc.ControllerActionInvoker.InvokeActionMethodWithFilters(ControllerContext controllerContext, IList`1 filters, ActionDescriptor actionDescriptor, IDictionary`2 parameters) +191 System.Web.Mvc.ControllerActionInvoker.InvokeAction(ControllerContext controllerContext, String actionName) +343 System.Web.Mvc.Controller.ExecuteCore() +116 System.Web.Mvc.ControllerBase.Execute(RequestContext requestContext) +97 System.Web.Mvc.ControllerBase.System.Web.Mvc.IController.Execute(RequestContext requestContext) +10 System.Web.Mvc.<>c__DisplayClassb.<BeginProcessRequest>b__5() +37 System.Web.Mvc.Async.<>c__DisplayClass1.<MakeVoidDelegate>b__0() +21 System.Web.Mvc.Async.<>c__DisplayClass8`1.<BeginSynchronous>b__7(IAsyncResult _) +12 System.Web.Mvc.Async.WrappedAsyncResult`1.End() +62 System.Web.Mvc.<>c__DisplayClasse.<EndProcessRequest>b__d() +50 System.Web.Mvc.SecurityUtil.<GetCallInAppTrustThunk>b__0(Action f) +7 System.Web.Mvc.SecurityUtil.ProcessInApplicationTrust(Action action) +22 System.Web.Mvc.MvcHandler.EndProcessRequest(IAsyncResult asyncResult) +60 System.Web.Mvc.MvcHandler.System.Web.IHttpAsyncHandler.EndProcessRequest(IAsyncResult result) +9 System.Web.CallHandlerExecutionStep.System.Web.HttpApplication.IExecutionStep.Execute() +8841105 System.Web.HttpApplication.ExecuteStep(IExecutionStep step, Boolean& completedSynchronously) +184 Any suggestions? 
    How can I correct this error? Data entity class (taken from a comment):

    public class MyClass
    {
        public virtual string DeviceID { get; set; }
        public virtual int Timestamp { get; set; }
        public virtual string Account { get; set; }
        public virtual int StatusCode { get; set; }
        public virtual double Latitude { get; set; }
        public virtual double Longitude { get; set; }
        public virtual int GpsAge { get; set; }
        public virtual double SpeedKPH { get; set; }
        public virtual double Heading { get; set; }

        public override bool Equals(object obj) { return true; }
        public override int GetHashCode() { return 0; }
    }
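
    The cast failure happens while NHibernate pushes the loaded column values back onto the entity: the mapping is handing it an Account proxy, but the matching property is declared as string. A minimal sketch of a shape that agrees with such a mapping (hypothetical, since the actual .hbm.xml is not shown; the point is only that the property type must match what the accountID column is mapped as):

    public class Eventdata
    {
        // if accountID is mapped as a (key-)many-to-one to Account,
        // the property must use the entity type, not string:
        public virtual Account Account { get; set; }

        // alternatively, drop the association and map the raw column as
        // <property name="AccountID" /> with:
        // public virtual string AccountID { get; set; }

        public virtual string DeviceID { get; set; }
        public virtual int Timestamp { get; set; }
        public virtual int StatusCode { get; set; }
        public virtual double Latitude { get; set; }
        public virtual double Longitude { get; set; }
    }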

    Read the article

  • PHP - not returning a count number for filled array...

    - by Phil Jackson
    Morning, this is eating me alive so Im hoping it's not something stupid, lol. $arrg = array(); if( str_word_count( $str ) > 1 ) { $input_arr = explode(' ', $str); die(print_r($input_arr)); $count = count($input_arr); die($count); above is part of a function. when i run i get; > Array ( > [0] => luke > [1] => snowden > [2] => create > [3] => develop > [4] => web > [5] => applications > [6] => sites > [7] => alse > [8] => dab > [9] => hand > [10] => design > [11] => love > [12] => helping > [13] => business > [14] => thrive > [15] => latest > [16] => industry > [17] => developer > [18] => act > [19] => designs > [20] => php > [21] => mysql > [22] => jquery > [23] => ajax > [24] => xhtml > [25] => css > [26] => de > [27] => montfont > [28] => award > [29] => advanced > [30] => programming > [31] => taught > [32] => development > [33] => years > [34] => experience > [35] => topic > [36] => fully > [37] => qualified > [38] => electrician > [39] => city > [40] => amp > [41] => guilds > [42] => level ) Which im expecting; run this however and nothing is returned!?!?! $arrg = array(); if( str_word_count( $str ) > 1 ) { $input_arr = explode(' ', $str); //die(print_r($input_arr)); $count = count($input_arr); die($count); can anyone see anything that my eyes cant?? regards, Phil
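
    One likely explanation, shown in a minimal sketch below (with an inline sample string instead of the original $str): exit()/die() treats an integer argument as the process exit status and prints nothing, so die($count) looks like "no result" even when $count is correct. Cast it to a string (or use var_dump) while debugging.

    <?php
    $str = 'luke snowden create develop web applications';
    if (str_word_count($str) > 1) {
        $input_arr = explode(' ', $str);
        $count = count($input_arr);
        die((string) $count);   // prints 6; die($count) would exit silently with status 6
    }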

    Read the article

  • Why won't this sort in Solr work?

    - by Camran
    I need to sort on a date-field type, which name is "mod_date". It works like this in the browser adress-bar: http://localhost:8983/solr/select/?&q=bmw&sort=mod_date+desc But I am using a phpSolr client which sends an URL to Solr, and the url sent is this: fq=+category%3A%22Bilar%22+%2B+car_action%3AS%C3%A4ljes&version=1.2&wt=json&json.nl=map&q=%2A%3A%2A&start=0&rows=5&sort=mod_date+desc // This wont work and is echoed after this in php: $queryString = http_build_query($params, null, $this->_queryStringDelimiter); $queryString = preg_replace('/%5B(?:[0-9]|[1-9][0-9]+)%5D=/', '=', $queryString); This wont work, I dont know why! Everything else works fine, all right fields are returned. But the sort doesn't work. Any ideas? Thanks BTW: The field "mod_date" contains something like: 2010-03-04T19:37:22.5Z EDIT: First I use PHP to send this to a SolrPhpClient which is another php-file called service.php: require_once('../SolrPhpClient/Apache/Solr/Service.php'); $solr = new Apache_Solr_Service('localhost', 8983, '/solr/'); $results = $solr->search($querystring, $p, $limit, $solr_params); $solr_params is an array which contains the solr-parameters (q, fq, etc). Now, in service.php: $params['version'] = self::SOLR_VERSION; // common parameters in this interface $params['wt'] = self::SOLR_WRITER; $params['json.nl'] = $this->_namedListTreatment; $params['q'] = $query; $params['sort'] = 'mod_date desc'; // HERE IS THE SORT I HAVE PROBLEM WITH $params['start'] = $offset; $params['rows'] = $limit; $queryString = http_build_query($params, null, $this->_queryStringDelimiter); $queryString = preg_replace('/%5B(?:[0-9]|[1-9][0-9]+)%5D=/', '=', $queryString); if ($method == self::METHOD_GET) { return $this->_sendRawGet($this->_searchUrl . $this->_queryDelimiter . $queryString); } else if ($method == self::METHOD_POST) { return $this->_sendRawPost($this->_searchUrl, $queryString, FALSE, 'application/x-www-form-urlencoded'); } The $results contain the results from Solr... So this is the way I need to get to work (via php). This code below (also on top of this Q) works but thats because I paste it into the adress bar manually, not via the PHPclient. But thats just for debugging, I need to get it to work via the PHPclient: http://localhost:8983/solr/select/?&q=bmw&sort=mod_date+des // Not via phpclient, but works UPDATE (2010-03-08): I have tried Donovans codes (the urls) and they work fine. Now, I have noticed that it is one of the parameters causing the 'SORT' not to work. This parameter is the "wt" parameter. If we take the url from top of this Q, (fq=+category%3A%22Bilar%22+%2B+car_action%3AS%C3%A4ljes&version=1.2&wt=json&json.nl=map&q=%2A%3A%2A&start=0&rows=5&sort=mod_date+desc), and just simply remove the "wt" parameter, then the sort works. BUT the results appear differently, thus making my php code not able to recognize the results I believe. Donovan would know this I think. I am guessing in order for the PHPClient to work, the results must be in a specific structure, which gets messed up as soon as I remove the wt parameter. Donovan, help me please... Here is some background what I use your SolrPhpClient for: I have a classifieds website, which uses MySql. But for the searching I am using Solr to search some indexed fields. Then Solr returns an array of ID:numbers (for all matches of the search criteria). Then I use those ID:numbers to find everything in a MySql db and fetch all other information (example is not searchable information). 
So simplified: Search - Solr returns all matches in an array of ID:nrs - Id:numbers from Solr are the same as the Id numbers in the MySql db, so I can just make a simple match agains every record with the ID matching the ID from the Solr results array. I don't use Faceting, no boosting, no relevancy or other fancy stuff. I only sort by the latest classified put, and give the option to users to also sort on the cheapest price. Nothing more. Then I use the "fq" parameter to do queries on different fields in Solr depending on category chosen by users (example "cars" in this case which in my language is "Bilar"). I am really stuck with this problem here... Thanks for all help
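
    A minimal sketch of one way to isolate the problem (assuming the Apache_Solr_Service wrapper shown above, whose search() already accepts an array of extra parameters): pass the sort through that array from the calling code instead of hard-coding it inside service.php, so nothing later in the parameter handling can overwrite or drop it. Sorting also requires mod_date to be an indexed, single-valued field.

    <?php
    require_once('../SolrPhpClient/Apache/Solr/Service.php');

    $solr = new Apache_Solr_Service('localhost', 8983, '/solr/');

    // extra parameters are merged into the request by the client's search()
    $solr_params = array(
        'fq'   => array('category:"Bilar"', 'car_action:Säljes'),
        'sort' => 'mod_date desc',
    );

    $results = $solr->search('bmw', 0, 5, $solr_params);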

    Read the article

  • MySQL Query - Alternative to WHERE IN

    - by Sadiqur Rahman
    I have a mySQL query which takes 3-4 minutes to be executed. It is a large database. This query uses WHERE IN to find the rows.. So, is there any alternate query/clause/statement for my this query? SELECT r.reg_id, r.first_name, r.last_name, r.email, r.country, e.headline, e.industry, pp.photo FROM basic_registration r LEFT JOIN exp_ind_reg e ON e.reg_id=r.reg_id LEFT JOIN profile_photo pp ON pp.reg_id=r.reg_id WHERE r.reg_id IN (23,228,497,593,761,1204,1491,1894,1895,2128,7,11,20,22,25,26,27,29,31, 32,33,34,37,41,45,47,50,52,53,54,55,62,63,69,75,79,80,82,85,87,88,89,93,96,99, 102,104,106,110,116,117,124,139,143,146,150,157,159,161,162,170,175,176,177, 181,183,197,210,213,215,217,220,226,227,233,240,250,252,255,262,263,268,274,280, 283,285,290,300,312,313,317,324,332,341,347,351,357,368,369,372,373,377, 381,383,398,408,414,416,418,419,422,432,441,446,450,451,453,463,466,469,473,486,511, 522,525,527,529,534,538,541,543,546,564,566,569,577,579,581,585,586,595,598,599,600, 606,611,613,614,621,640,649,654,656,660,667,668,674,682,686,689,693,699,705,720, 734,742,748,753,763,774,775,780,782,784,792,795,804,839,841,862,871,890,929, 930,943,951,965,994,1004,1017,1026,1034,1050,1051,1053,1054,1067,1082,1087,1109, 1119,1121,1124,1136,1147,1187,1197,1214,1224,1226,1230,1241,1255,1318,1323,1358,1361, 1383,1404,1415,1429,1440,1443,1452,1458,1473,1478,1484,1490,1496,1505,1508,1521, 1534,1544,1556,1575,1628,1640,1644,1660,1688,1725,1791,1802,1815,1819,1849,1850,1891, 1896,1897,1911,1917,1923,1924,1926,1927,1930,1956,1959,1961,1967,1983,2006,2016, 2028,2053,2059,2088,2089,2100,2136,2145,2164,2183,2190,2219,2243,2291,2301,2321, 2343,2345,2423,2438,2465,2478,2501,2507,2508,2551,2563,2572,2629,2636,2642,2650, 2670,2693,2695,2724,2732,2801,2803,2839,2847,2867,2899,3024,3061,3068,3071,3093, 3123,3126,3188,3240,3273,3307,3308,3332,3484,3493,3522,3552,3596,3632,3705,3769, 3845,3869,3966,3969,4046,4066,4074,4077,4108,4113,4140,4198,4213,4218,4266,4295, 4312,4345,4365,4369,4380,4425,4453,4485,4486,4488,4493,4494,4495,4500,4513,4515, 4517,4520,4533,4540,4542,4544,4548,4550,4551,4554,4555,4557,4566,4567,4568, 4570,4572,4575,4586,4587,4590,4593,4594,4595,4598,4599,4608,4640,4642,4647,4650, 4661,4664,4679,4681,4685,4686,4698,4707,4708,4709,4711,4712,4714,4715,4717,4719, 4720,4721,4722,4724,4725,4728,4729,4732,4734,4735,4736,4737,4739,4742,4744,4745, 4750,4752,4754,4755,4757,4759,4760,4761,4763,4764,4766,4768,4770,4772,4774,4776, 4777,4789,4790,4791,4793,4795,4796,4797,4799,4803,4804,4805,4806,4808,4809,4811, 4814,4815,4817,4818,4821,4825,4826,4828,4830,4831,4833,4835,4836,4837,4843,4844, 4847,4848,4852,4853,4854,4861,4865,4866,4871,4874,4875,4876,4879,4880,4886,4889, 4890,4891,4892,4893,4894,4896,4899,4900,4904,4908,4914,4915,4916,4917,4918,4922, 4925,4929,4930,4931,4932,4934,4935,4940,4943,4944,4945,4947,4948,4949,4952,4953, 4956,4961,4963,4964,4965,4973,4974,4976,4978,4980,4985,4988,4989,4990,4993,4996, 5001,5009,5014,5016,5017,5018,5019,5021,5023,5024,5025,5028,5032,5033,5041,5042, 5048,5055,5056,5058,5059,5062,5065,5066,5072,5073,5075,5078,5079,5083,5084,5085, 5086,5087,5088,5089,5090,5091,5092,5093,5094,5096,5103,5112,5115,5116,5117,5123, 5125,5126,5127,5128,5130,5131,5132,5133,5134,5137,5138,5139,5140,5141,5146,5148, 5150,5155,5156,5158,5161,5162,5163,5164,5166,5168,5172,5174,5176,5178,5179,5180, 5181,5183,5186,5191,5194,5199,5200,5201,5202,5206,5214,5215,5217,5218,5222,5225, 5226,5227,5235,5236,5237,5243,5245,5246,5248,5251,5252,5254,5255,5256,5257, 
5259,5261,5262,5267,5270,5271,5275,5279,5281,5283,5284,5286,5288,5289,5292,5293, 5295,5307,5308,5310,5311,5313,5315,5321,5323,5324,5325,5327,5328,5339,5340,5345, 5351,5353,5355,5356,5357,5358,5359,5363,5364,5365,5366,5369,5370,5371,5372,5373, 5376,5377,5378,5379,5381,5382,5383,5384,5385,5386,5387,5388,5389,5390,5393,5395, 5405,5406,5407,5411,5413,5414,5415,5416,5417,5418,5420,5424,5425,5429,5430,5431, 5432,5433,5434,5435,5437,5441,5451,5460,5467,5473,5476,5506,5524,5528,5530,5534, 5535,5536,5550,5551,5552,5553,5554,5556,5557,5559,5564,5565,5567,5568,5574,5575, 5585,5586,5587,5597,5600,5601,5605,5606,5607,5613,5614,5615,5617,5618,5624,5626, 5627,5628,5640,5643,5644,5645,5647,5648,5649,5650,5660,5661,5670,5671,5673,5674, 5675,5681,5683,5685,5689,5690,5691,5692,5693,5694,5695,5696,5697,5702,5703,5704, 5705,5706,5708,5710,5711,5712,5713,5716,5717,5719,5730,5732,5737,5744,5745,5746, 5748,5749,5750,5752,5753,5754,5756,5757,5758,5759,5761,5762,5763,5764,5765,5767, 5769,5770,5776,5780,5782,5783,5784,5787,5788,5789,5790,5791,5792,5793,5794,5799, 5802,5803,5804,5805,5806,5808,5809,5810,5812,5813,5814,5816,5817,5818,5822,5823,5826, 5827,5829,5830,5831,5848,5849,5850,5851,5852,5854,5856,5858,5859,5863,5864,5865, 5866,5867,5873,5884,5885,5893,5898,5899,5904,5907,5908,5910,5911,5915,5916,5918, 5919,5922,5923,5924,5933,5934,5941,5944,5950,5954,5955,5956,5960,5961,5973,5978,5981, 5982,5983,5984,5985,5986,5987,5988,5989,5990,5998,5999,6000,6002,6003,6004,6006, 6007,6010,6093,6175,6177,6217,6236,6325,6327,6347,6398,6403,6447,6582,6586,6609, 6697,6904,6926,6933,7001,7003,7047,7081,7094,7111,7205,7207,7219,7220,7221,7222, 7224,7227,7228,7229,7230,7232,7237,7238,7241,7268,7274,7275,7276,7281,7300,7307, 7309,7315,7330,7333,7334,7339,7343,7348,7354,7360,7374,7377,7378,7390,7429,7434, 7445,7448,7449,7452,7532,7534,7539,7542,7546,7547,7555,7563,7565,7567,7572,7575, 7576,7577,7578,7579,7585,7611,7907,7926,8100,8134,8205,8324,8337,8339,8350,8351, 8362,8410,8568,8572,8618,8619,8651,8665,8666,8667,8668,9010,9068,9098,9100,9106, 9111,9115,9121,9123,9174,9177,9272,9302,9421,9570,9683,9684,9697,9704,9712,9715,9779, 9790,9792,9793,9795,9798,9814,9818,9856,9866,9876,9886,9891,9908,9912,9928,10508, 10825,11103,11729,12289,12377,12643,12656,12657,12668,12876,12926,12958,13291, 13300,13408,13472,13976,14477,14538,14833,15044,15108,15779,16039,16061,16549, 16556,16562,16564,16565,16571,16573,16574,16576,16577,16584,16589,16590,16591, 16592,16598,16604,16606,16607,16610,16620,16645,16648,16650,16654,16655,16661, 16662,16675,16680,16697,16699,16701,16702,16704,16705,16708,16714,16719,16723, 16724,16727,16729,16731,16732,16743,16750,16752,16755,16758,16772,16774,16782,16787, 16793,16794,16795,16797,16798,16802,16813,16814,16815,16824,16825,16829,16831, 16841,16843,16848,16850,16863,16864,16866,16870,16878,16881,16887,16893,16896,16897, 16900,16902,16909,16912,16936,16944,16948,16958,16960,16963,16974,16978,16993,17012, 17016,17020,17053,17061,17096,17120,17124,17125,17129,17135,17137,17140,17141,17142, 17145,17149,17150,17157,17164,17170,17172,17173,17178,17180,17184,17187,17188, 17192,17196,17197,17200,17201,17206,17207,17221,17223,17227,17236,17244,17246, 17273,17285,17289,17291,17297,17300,17305,17310,17311,17321,17326,17331,17335, 17352,17370,17414,17423,17424,17439,17479,17493,17495,17501,17519,17525,17541, 17571,17590,17614,17755,17838,17846,17848,17852,17853,17855,17858,17861,17871, 17876,17877,17891,17896,17899,17900,17905,17908,17910,17911,17916,17917,17938,17939, 
17940,17949,17953,17955,17960,17972,17980,17982,17992,18055,18067,18069,18071,18077, 18108,18127,18134,18136,18140,18142,18143,18158,18162,18178,18192,18196,18206,18217, 18221,18242,18245,18249,18263,18271,18273,18275,18277,18278,18286,18291,18295,18300, 18301,18308,18325,18333,18338,18360,18373,18374,18387,18397,18411,18412,18420,18429, 18434,18455,18478,18484,18534,18779,18790,18804,18821,18851,18964,18965,18977,18990, 18991,19000,19006,19276,19291,19374,19395,19416,19432,19627,19917,19927,19971,19974, 19989,20007,2254,2549,2652,3077,3615,4483,4484,4611,4700,5714,5772,6252,6536,7051, 7102,7107,7591,8167,8286,8935,9937,11089,12344,15830,16343,16644,17359, 17994,18774) AND r.activation=1 ORDER BY r.first_name ASC LIMIT 0, 10;
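
    If the id list has to stay this large, one common alternative is to load the ids into an indexed temporary table once and JOIN against it, so the optimizer works from an index instead of parsing a huge IN list. A minimal sketch follows (hypothetical temporary-table name; only the first few ids shown), which also assumes reg_id is indexed in exp_ind_reg and profile_photo:

    CREATE TEMPORARY TABLE tmp_reg_ids (reg_id INT NOT NULL PRIMARY KEY);
    INSERT INTO tmp_reg_ids (reg_id) VALUES (23), (228), (497), (593); -- ... remaining ids

    SELECT r.reg_id, r.first_name, r.last_name, r.email, r.country,
           e.headline, e.industry, pp.photo
    FROM tmp_reg_ids t
    INNER JOIN basic_registration r ON r.reg_id = t.reg_id
    LEFT JOIN exp_ind_reg e ON e.reg_id = r.reg_id
    LEFT JOIN profile_photo pp ON pp.reg_id = r.reg_id
    WHERE r.activation = 1
    ORDER BY r.first_name ASC
    LIMIT 0, 10;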

    Read the article

  • App crashes when adding array data to table cells

    - by bassmandan
    I am trying to create a table view that loads a number of tweets into the table (one per cell etc). I am using NSXMLParser to get the information and have got as far as creating an array with the selection of tweets that I want. However, when I try to add them to the table cells, the app crashes on the line: cell.textLabel.text = cellValue; An NSLog before this shows in the console that the app is getting the correct data, so I am a bit stumped as to why this isn't working. This is the block of code that appears to be having the problem: - (UITableViewCell *)tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath { static NSString *CellIdentifier = @"Cell"; UITableViewCell *cell = [tableView dequeueReusableCellWithIdentifier:CellIdentifier]; if (cell == nil) { cell = [[UITableViewCell alloc] initWithStyle:UITableViewCellStyleDefault reuseIdentifier:CellIdentifier]; } // Set up the cell... NSString *cellValue = [statuses objectAtIndex:indexPath.row]; NSLog(@"%@", cellValue); cell.textLabel.text = cellValue; return cell;} If it makes a difference, I am using ARC and the latest version of XCode. I'm still quite new to all this, so if I need to give some extra information, let me know. Thanks in advance. Edit: Backtrace gives the following: * thread #1: tid = 0x2003, 0x918a19c6 libsystem_kernel.dylib`__pthread_kill + 10, stop reason = signal SIGABRT frame #0: 0x918a19c6 libsystem_kernel.dylib`__pthread_kill + 10 frame #1: 0x9968ff78 libsystem_c.dylib`pthread_kill + 106 frame #2: 0x99680bdd libsystem_c.dylib`abort + 167 frame #3: 0x03c93e78 libc++abi.dylib`_Unwind_DeleteException frame #4: 0x03c9189e libc++abi.dylib`_ZL17default_terminatev + 34 frame #5: 0x0154df4b libobjc.A.dylib`_objc_terminate + 94 frame #6: 0x03c918de libc++abi.dylib`_ZL19safe_handler_callerPFvvE + 13 frame #7: 0x03c91946 libc++abi.dylib`std::terminate() + 23 frame #8: 0x03c92ab2 libc++abi.dylib`__cxa_throw + 110 frame #9: 0x0154de15 libobjc.A.dylib`objc_exception_throw + 311 frame #10: 0x013bdced CoreFoundation`-[NSObject doesNotRecognizeSelector:] + 253 frame #11: 0x01322f00 CoreFoundation`___forwarding___ + 432 frame #12: 0x01322ce2 CoreFoundation`_CF_forwarding_prep_0 + 50 frame #13: 0x0015168f UIKit`-[UILabel setText:] + 56 frame #14: 0x00003088 Twitter`-[TwitterViewController tableView:cellForRowAtIndexPath:] + 376 at TwitterViewController.m:131 frame #15: 0x000ace0f UIKit`-[UITableView(UITableViewInternal) _createPreparedCellForGlobalRow:withIndexPath:] + 494 frame #16: 0x000ad589 UIKit`-[UITableView(UITableViewInternal) _createPreparedCellForGlobalRow:] + 69 frame #17: 0x00098dfd UIKit`-[UITableView(_UITableViewPrivate) _updateVisibleCellsNow:] + 1350 frame #18: 0x000a7851 UIKit`-[UITableView layoutSubviews] + 242 frame #19: 0x00052301 UIKit`-[UIView(CALayerDelegate) layoutSublayersOfLayer:] + 145 frame #20: 0x013bde72 CoreFoundation`-[NSObject performSelector:withObject:] + 66 frame #21: 0x01d6692d QuartzCore`-[CALayer layoutSublayers] + 266 frame #22: 0x01d70827 QuartzCore`CA::Layer::layout_if_needed(CA::Transaction*) + 231 frame #23: 0x01cf6fa7 QuartzCore`CA::Context::commit_transaction(CA::Transaction*) + 377 frame #24: 0x01cf8ea6 QuartzCore`CA::Transaction::commit() + 374 frame #25: 0x01d8430c QuartzCore`+[CATransaction flush] + 52 frame #26: 0x000124c6 UIKit`-[UIApplication _reportAppLaunchFinished] + 39 frame #27: 0x00012bd6 UIKit`-[UIApplication _runWithURL:payload:launchOrientation:statusBarStyle:statusBarHidden:] + 1324 frame #28: 0x00021743 UIKit`-[UIApplication 
handleEvent:withNewEvent:] + 1027 frame #29: 0x000221f8 UIKit`-[UIApplication sendEvent:] + 68 frame #30: 0x00015aa9 UIKit`_UIApplicationHandleEvent + 8196 frame #31: 0x012a6fa9 GraphicsServices`PurpleEventCallback + 1274 frame #32: 0x013901c5 CoreFoundation`__CFRUNLOOP_IS_CALLING_OUT_TO_A_SOURCE1_PERFORM_FUNCTION__ + 53 frame #33: 0x012f5022 CoreFoundation`__CFRunLoopDoSource1 + 146 frame #34: 0x012f390a CoreFoundation`__CFRunLoopRun + 2218 frame #35: 0x012f2db4 CoreFoundation`CFRunLoopRunSpecific + 212 frame #36: 0x012f2ccb CoreFoundation`CFRunLoopRunInMode + 123 frame #37: 0x000122a7 UIKit`-[UIApplication _run] + 576 frame #38: 0x00013a9b UIKit`UIApplicationMain + 1175 frame #39: 0x0000239d Twitter`main + 141 at main.m:16 frame #40: 0x00002305 Twitter`start + 53 Debugging console shows this: 2012-04-08 10:10:05.084 Twitter[25309:f803] ( { text = "Have you shared the Shakedown yet? http://t.co/WHrIC9w7"; }, { text = "For all you closet rocknrollas pencil in Sat 12th May The Rebirth of Rock n Roll Party. Haywire Saint @ The Good... http://t.co/OXHKlLIV"; }, { text = "4 weeks today: Vocal tracks will be getting recorded at The Premises Studios"; }, { text = "Rehearsal tonight in preparation to some big recording next month!"; }, { text = "haywire saint 'great taste.' Tune. \n\nhttp://t.co/GKmu5Lna http://t.co/0fii55Hw"; }, { text = "Meeting up with an old roadie for The Cure today. oh the stories...... http://t.co/UeUYccme"; }, { text = "Satisfying day of programming today.. Haywire Saint app coming along nicely with the custom music player ready to rock 'n' roll!"; }, { text = "Happy Friday Everyone!"; }, { text = "We had a great time at The Premises Studios yesterday. We'll be back there before long :D x"; }, { text = "I posted a new photo to Facebook http://t.co/73qAnCvk"; } ) 2012-04-08 10:10:05.093 Twitter[25309:f803] { text = "Have you shared the Shakedown yet? http://t.co/WHrIC9w7"; } 2012-04-08 10:10:05.094 Twitter[25309:f803] -[__NSCFDictionary isEqualToString:]: unrecognized selector sent to instance 0x6877a50 2012-04-08 10:10:05.096 Twitter[25309:f803] *** Terminating app due to uncaught exception 'NSInvalidArgumentException', reason: '-[__NSCFDictionary isEqualToString:]: unrecognized selector sent to instance 0x6877a50' *** First throw call stack: (0x13bc052 0x154dd0a 0x13bdced 0x1322f00 0x1322ce2 0x15168f 0x3088 0xace0f 0xad589 0x98dfd 0xa7851 0x52301 0x13bde72 0x1d6692d 0x1d70827 0x1cf6fa7 0x1cf8ea6 0x1d8430c 0x124c6 0x12bd6 0x21743 0x221f8 0x15aa9 0x12a6fa9 0x13901c5 0x12f5022 0x12f390a 0x12f2db4 0x12f2ccb 0x122a7 0x13a9b 0x239d 0x2305) terminate called throwing an exception2012-04-08 10:10:05.924 Twitter[25309:f803] -[__NSCFConstantString count]: unrecognized selector sent to instance 0x5b30
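
    The console output narrows this down: statuses holds NSDictionary objects (each with a "text" key), and the crash is -[__NSCFDictionary isEqualToString:], i.e. a dictionary being handed to UILabel where it expects an NSString. A minimal sketch of the cell code with that one change (names taken from the question):

    // pull the "text" value out of the dictionary before assigning it to the label
    NSDictionary *status = [statuses objectAtIndex:indexPath.row];
    NSString *cellValue = [status objectForKey:@"text"];
    NSLog(@"%@", cellValue);
    cell.textLabel.text = cellValue;
    return cell;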

    Read the article

  • Server Error in '/' Application (ASP.NET)

    - by baeltazor
    Hi All I just setup up member ship roles and registration on my website with visual web developer using the tutorial on msdn. It works perfectly locally, but when i uploaded it to my server, I get the following page: "Server Error in '/' Application. -------------------------------------------------------------------------------- Configuration Error Description: An error occurred during the processing of a configuration file required to service this request. Please review the specific error details below and modify your configuration file appropriately. Parser Error Message: The connection name 'LocalSqlServer' was not found in the applications configuration or the connection string is empty. Source Error: [No relevant source lines] Source File: machine.config Line: 160 -------------------------------------------------------------------------------- Version Information: Microsoft .NET Framework Version:2.0.50727.4200; ASP.NET Version:2.0.50727.4016 " Does anybody know why I'm seeing this and how I may go about fixinf this? Any help is greatly appreciated. Thank you Bael. EDIT: I have just looked at my web.config file after reading the following line in the error message: "The connection name 'LocalSqlServer' was not found in the applications configuration or the connection string is empty." ... And have noticed that the following element is completely empty: <connectionStrings/> // Is this one supposed to be empty? if not what should go here? In the error it implies it shouldn't be empty. Also, I don't know where I should place LocalSqlServer LATEST EDIT After changing DataSource to LocalHost i get the following error: Server Error in '/JTS' Application. -------------------------------------------------------------------------------- A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (provider: Named Pipes Provider, error: 40 - Could not open a connection to SQL Server) Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code. Exception Details: System.Data.SqlClient.SqlException: A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (provider: Named Pipes Provider, error: 40 - Could not open a connection to SQL Server) Source Error: An unhandled exception was generated during the execution of the current web request. Information regarding the origin and location of the exception can be identified using the exception stack trace below. Stack Trace: [SqlException (0x80131904): A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. 
(provider: Named Pipes Provider, error: 40 - Could not open a connection to SQL Server)] System.Data.SqlClient.SqlInternalConnection.OnError(SqlException exception, Boolean breakConnection) +4849015 System.Data.SqlClient.TdsParser.ThrowExceptionAndWarning(TdsParserStateObject stateObj) +194 System.Data.SqlClient.TdsParser.Connect(ServerInfo serverInfo, SqlInternalConnectionTds connHandler, Boolean ignoreSniOpenTimeout, Int64 timerExpire, Boolean encrypt, Boolean trustServerCert, Boolean integratedSecurity, SqlConnection owningObject) +4862333 System.Data.SqlClient.SqlInternalConnectionTds.AttemptOneLogin(ServerInfo serverInfo, String newPassword, Boolean ignoreSniOpenTimeout, Int64 timerExpire, SqlConnection owningObject) +90 System.Data.SqlClient.SqlInternalConnectionTds.LoginNoFailover(String host, String newPassword, Boolean redirectedUserInstance, SqlConnection owningObject, SqlConnectionString connectionOptions, Int64 timerStart) +342 System.Data.SqlClient.SqlInternalConnectionTds.OpenLoginEnlist(SqlConnection owningObject, SqlConnectionString connectionOptions, String newPassword, Boolean redirectedUserInstance) +221 System.Data.SqlClient.SqlInternalConnectionTds..ctor(DbConnectionPoolIdentity identity, SqlConnectionString connectionOptions, Object providerInfo, String newPassword, SqlConnection owningObject, Boolean redirectedUserInstance) +189 System.Data.SqlClient.SqlConnectionFactory.CreateConnection(DbConnectionOptions options, Object poolGroupProviderInfo, DbConnectionPool pool, DbConnection owningConnection) +185 System.Data.ProviderBase.DbConnectionFactory.CreatePooledConnection(DbConnection owningConnection, DbConnectionPool pool, DbConnectionOptions options) +31 System.Data.ProviderBase.DbConnectionPool.CreateObject(DbConnection owningObject) +433 System.Data.ProviderBase.DbConnectionPool.UserCreateRequest(DbConnection owningObject) +66 System.Data.ProviderBase.DbConnectionPool.GetConnection(DbConnection owningObject) +499 System.Data.ProviderBase.DbConnectionFactory.GetConnection(DbConnection owningConnection) +65 System.Data.ProviderBase.DbConnectionClosed.OpenConnection(DbConnection outerConnection, DbConnectionFactory connectionFactory) +117 System.Data.SqlClient.SqlConnection.Open() +122 System.Web.DataAccess.SqlConnectionHolder.Open(HttpContext context, Boolean revertImpersonate) +87 System.Web.DataAccess.SqlConnectionHelper.GetConnection(String connectionString, Boolean revertImpersonation) +221 System.Web.Security.SqlMembershipProvider.GetPasswordWithFormat(String username, Boolean updateLastLoginActivityDate, Int32& status, String& password, Int32& passwordFormat, String& passwordSalt, Int32& failedPasswordAttemptCount, Int32& failedPasswordAnswerAttemptCount, Boolean& isApproved, DateTime& lastLoginDate, DateTime& lastActivityDate) +815 System.Web.Security.SqlMembershipProvider.CheckPassword(String username, String password, Boolean updateLastLoginActivityDate, Boolean failIfNotApproved, String& salt, Int32& passwordFormat) +105 System.Web.Security.SqlMembershipProvider.CheckPassword(String username, String password, Boolean updateLastLoginActivityDate, Boolean failIfNotApproved) +42 System.Web.Security.SqlMembershipProvider.ValidateUser(String username, String password) +78 System.Web.UI.WebControls.Login.AuthenticateUsingMembershipProvider(AuthenticateEventArgs e) +60 System.Web.UI.WebControls.Login.OnAuthenticate(AuthenticateEventArgs e) +119 System.Web.UI.WebControls.Login.AttemptLogin() +115 System.Web.UI.WebControls.Login.OnBubbleEvent(Object source, 
EventArgs e) +101 System.Web.UI.Control.RaiseBubbleEvent(Object source, EventArgs args) +37 System.Web.UI.WebControls.Button.OnCommand(CommandEventArgs e) +118 System.Web.UI.WebControls.Button.RaisePostBackEvent(String eventArgument) +166 System.Web.UI.WebControls.Button.System.Web.UI.IPostBackEventHandler.RaisePostBackEvent(String eventArgument) +10 System.Web.UI.Page.RaisePostBackEvent(IPostBackEventHandler sourceControl, String eventArgument) +13 System.Web.UI.Page.RaisePostBackEvent(NameValueCollection postData) +36 System.Web.UI.Page.ProcessRequestMain(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint) +1565 -------------------------------------------------------------------------------- Version Information: Microsoft .NET Framework Version:2.0.50727.4927; ASP.NET Version:2.0.50727.4927
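
For readers who hit the same parser error: 'LocalSqlServer' is a connection string name that machine.config registers for the ASP.NET membership, roles and profile providers, and an empty <connectionStrings/> element in web.config simply means the site adds nothing of its own, so the providers fall back to the machine-wide entry (which points at a local SQL Express user instance that most shared hosts do not have). Below is a minimal sketch of a filled-in element; the server name, database name and credentials are placeholders that must match whatever SQL Server the hosting provider actually exposes, and the aspnetdb schema has to exist there (it can be created with the aspnet_regsql.exe tool that ships with the .NET Framework).

<connectionStrings>
  <!-- override the machine-wide entry with one that points at the host's SQL Server -->
  <remove name="LocalSqlServer" />
  <add name="LocalSqlServer"
       connectionString="Data Source=YOUR_SQL_SERVER;Initial Catalog=aspnetdb;User ID=YOUR_DB_USER;Password=YOUR_DB_PASSWORD"
       providerName="System.Data.SqlClient" />
</connectionStrings>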

    Read the article

  • .NET interview, code structure and the design

    - by j_lewis
    I have been given the below .NET question in an interview. I don’t know why I got low marks. Unfortunately I did not get a feedback. Question: The file hockey.csv contains the results from the Hockey Premier League. The columns ‘For’ and ‘Against’ contain the total number of goals scored for and against each team in that season (so Alabama scored 79 goals against opponents, and had 36 goals scored against them). Write a program to print the name of the team with the smallest difference in ‘for’ and ‘against’ goals. the structure of the hockey.csv looks like this (it is a valid csv file, but I just copied the values here to get an idea) Team - For - Against Alabama 79 36 Washinton 67 30 Indiana 87 45 Newcastle 74 52 Florida 53 37 New York 46 47 Sunderland 29 51 Lova 41 64 Nevada 33 63 Boston 30 64 Nevada 33 63 Boston 30 64 Solution: class Program { static void Main(string[] args) { string path = @"C:\Users\<valid csv path>"; var resultEvaluator = new ResultEvaluator(string.Format(@"{0}\{1}",path, "hockey.csv")); var team = resultEvaluator.GetTeamSmallestDifferenceForAgainst(); Console.WriteLine( string.Format("Smallest difference in ‘For’ and ‘Against’ goals > TEAM: {0}, GOALS DIF: {1}", team.Name, team.Difference )); Console.ReadLine(); } } public interface IResultEvaluator { Team GetTeamSmallestDifferenceForAgainst(); } public class ResultEvaluator : IResultEvaluator { private static DataTable leagueDataTable; private readonly string filePath; private readonly ICsvExtractor csvExtractor; public ResultEvaluator(string filePath){ this.filePath = filePath; csvExtractor = new CsvExtractor(); } private DataTable LeagueDataTable{ get { if (leagueDataTable == null) { leagueDataTable = csvExtractor.GetDataTable(filePath); } return leagueDataTable; } } public Team GetTeamSmallestDifferenceForAgainst() { var teams = GetTeams(); var lowestTeam = teams.OrderBy(p => p.Difference).First(); return lowestTeam; } private IEnumerable<Team> GetTeams() { IList<Team> list = new List<Team>(); foreach (DataRow row in LeagueDataTable.Rows) { var name = row["Team"].ToString(); var @for = int.Parse(row["For"].ToString()); var against = int.Parse(row["Against"].ToString()); var team = new Team(name, against, @for); list.Add(team); } return list; } } public interface ICsvExtractor { DataTable GetDataTable(string csvFilePath); } public class CsvExtractor : ICsvExtractor { public DataTable GetDataTable(string csvFilePath) { var lines = File.ReadAllLines(csvFilePath); string[] fields; fields = lines[0].Split(new[] { ',' }); int columns = fields.GetLength(0); var dt = new DataTable(); //always assume 1st row is the column name. for (int i = 0; i < columns; i++) { dt.Columns.Add(fields[i].ToLower(), typeof(string)); } DataRow row; for (int i = 1; i < lines.GetLength(0); i++) { fields = lines[i].Split(new char[] { ',' }); row = dt.NewRow(); for (int f = 0; f < columns; f++) row[f] = fields[f]; dt.Rows.Add(row); } return dt; } } public class Team { public Team(string name, int against, int @for) { Name = name; Against = against; For = @for; } public string Name { get; private set; } public int Against { get; private set; } public int For { get; private set; } public int Difference { get { return (For - Against); } } } Output: Smallest difference in for' andagainst' goals TEAM: Boston, GOALS DIF: -34 Can someone please review my code and see anything obviously wrong here? They were only interested in the structure/design of the code and whether the program produces the correct result (i.e lowest difference). 
Much appreciated. (P.S. Please correct me if the ".net-interview" tag is not the right tag to use.)
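
One reading of the low marks that is worth double-checking against the brief: "smallest difference" can mean the smallest signed value of For - Against (which the solution above returns, Boston at -34) or the smallest gap in absolute terms (New York at 1 on the sample data). For comparison, a minimal sketch of the absolute-difference reading is below; it is deliberately compact rather than layered, which is a separate design trade-off interviewers may or may not reward.

using System;
using System.IO;
using System.Linq;

class SmallestGap
{
    static void Main()
    {
        // assumes the first line is a header and the columns are Team,For,Against;
        // "hockey.csv" stands in for the real path
        var team = File.ReadAllLines("hockey.csv")
            .Skip(1)
            .Select(line => line.Split(','))
            .Select(f => new { Name = f[0], Gap = Math.Abs(int.Parse(f[1]) - int.Parse(f[2])) })
            .OrderBy(t => t.Gap)
            .First();

        Console.WriteLine("Smallest difference: {0} ({1})", team.Name, team.Gap);
    }
}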

    Read the article

  • Android SQLiteConstraintException: error code 19: constraint failed

    - by Tom D
    I've seen other questions about this exception, but all of them seem to be resolved with the solution that a row with the primary key specified already exists. This doesn't seem to be the case for me. I have tried replacing all single quotes in my strings with double quotes, but the same problem occurs. I'm trying to insert a row into the Settings table of the SQLite database I've created by doing the following: db.execSQL("DROP TABLE IF EXISTS "+Settings.SETTINGS_TABLE_NAME + ";"); db.execSQL(CREATE_MEDIA_TABLE); db.execSQL(CREATE_SETTINGS_TABLE); Cursor c = getAllSettings(); //If there isn't already a settings row, create a row full of defaults if(c.getCount()==0){ ContentValues cv = new ContentValues(); cv.put(Settings.SETTING_UNIQUE_ID, "'"+Settings.uniqueID+"'"); cv.put(Settings.SETTING_DEVICE_ID, Settings.SETTING_DEVICE_ID_DEFAULT); cv.put(Settings.SETTING_CONNECTION_PREFERENCE, Settings.SETTING_CONNECTION_PREFERENCE_DEFAULT); cv.put(Settings.SETTING_AD_HOC_ENABLED, Settings.SETTING_AD_HOC_ENABLED_DEFAULT); cv.put(Settings.SETTING_SERVER_ADDRESS, Settings.SETTING_SERVER_ADDRESS_DEFAULT); cv.put(Settings.SETTING_RECORDING_MODE, Settings.SETTING_RECORDING_MODE_DEFAULT); cv.put(Settings.SETTING_PREVIEW_ENABLED, Settings.SETTING_PREVIEW_ENABLED_DEFAULT); cv.put(Settings.SETTING_PICTURE_RESOLUTION_X, Settings.SETTING_PICTURE_RESOLUTION_X_DEFAULT); cv.put(Settings.SETTING_PICTURE_RESOLUTION_Y, Settings.SETTING_PICTURE_RESOLUTION_Y_DEFAULT); cv.put(Settings.SETTING_VIDEO_RESOLUTION_X, Settings.SETTING_VIDEO_RESOLUTION_X_DEFAULT); cv.put(Settings.SETTING_VIDEO_RESOLUTION_Y, Settings.SETTING_VIDEO_RESOLUTION_Y_DEFAULT); cv.put(Settings.SETTING_VIDEO_FPS, Settings.SETTING_VIDEO_FPS_DEFAULT); cv.put(Settings.SETTING_AUDIO_BITRATE_KBPS, Settings.SETTING_AUDIO_BITRATE_KBPS_DEFAULT); cv.put(Settings.SETTING_STORE_TO_SD, Settings.SETTING_STORE_TO_SD_DEFAULT); cv.put(Settings.SETTING_STORAGE_LIMIT_MB, Settings.SETTING_STORAGE_LIMIT_MB_DEFAULT); this.db.insert(Settings.SETTINGS_TABLE_NAME, null, cv); } The CREATE_SETTINGS_TABLE string is defined as the following: private static String CREATE_SETTINGS_TABLE = "CREATE TABLE IF NOT EXISTS " + Settings.SETTINGS_TABLE_NAME + "(" + Settings.SETTING_UNIQUE_ID + " TEXT NOT NULL PRIMARY KEY, " + Settings.SETTING_DEVICE_ID + " TEXT NOT NULL , " + Settings.SETTING_CONNECTION_PREFERENCE + " TEXT NOT NULL CHECK("+Settings.SETTING_CONNECTION_PREFERENCE+" IN("+Settings.SETTING_CONNECTION_PREFERENCE_ALLOWED+")), " + Settings.SETTING_AD_HOC_ENABLED + " TEXT NOT NULL CHECK("+Settings.SETTING_AD_HOC_ENABLED+" IN("+Settings.SETTING_AD_HOC_ENABLED_ALLOWED+")), " + Settings.SETTING_SERVER_ADDRESS + " TEXT NOT NULL, " + Settings.SETTING_RECORDING_MODE + " TEXT NOT NULL CHECK("+Settings.SETTING_RECORDING_MODE+" IN("+Settings.SETTING_RECORDING_MODE_ALLOWED+")), " + Settings.SETTING_PREVIEW_ENABLED + " TEXT NOT NULL CHECK("+Settings.SETTING_PREVIEW_ENABLED+" IN("+Settings.SETTING_PREVIEW_ENABLED_ALLOWED+")), " + Settings.SETTING_PICTURE_RESOLUTION_X + " TEXT NOT NULL, " + Settings.SETTING_PICTURE_RESOLUTION_Y + " TEXT NOT NULL, " + Settings.SETTING_VIDEO_RESOLUTION_X + " TEXT NOT NULL, " + Settings.SETTING_VIDEO_RESOLUTION_Y + " TEXT NOT NULL, " + Settings.SETTING_VIDEO_FPS + " TEXT NOT NULL, " + Settings.SETTING_AUDIO_BITRATE_KBPS + " TEXT NOT NULL, " + Settings.SETTING_STORE_TO_SD + " TEXT NOT NULL CHECK("+Settings.SETTING_STORE_TO_SD+" IN("+Settings.SETTING_STORE_TO_SD_ALLOWED+")), " + Settings.SETTING_STORAGE_LIMIT_MB + " TEXT NOT NULL )"; However, when I execute my 
insert, I always get: 03-19 19:37:36.974: ERROR/Database(386): Error inserting server_address='0.0.0.0' storage_limit='-1' connection='none' preview_enabled='0' sd_enabled='1' video_fps='15' audio_bitrate='96' device_id='-1' recording_mode='none' picture_resolution_x='-1' picture_resolution_y='-1' unique_id='000000000000000' adhoc_enable='0' video_resolution_x='320' video_resolution_y='240' 03-19 19:45:34.284: ERROR/Database(446): android.database.sqlite.SQLiteConstraintException: error code 19: constraint failed It seems as if all the columns in my insert are not null. The row's primary key HAS to be unique, because it's the only row in the table. Therefore, the only thing I can think of is my CHECK conditions aren't true. Here are the predefined strings I'm using: public static final String SETTING_UNIQUE_ID = "unique_id"; public static final String SETTING_DEVICE_ID = "device_id"; public static final String SETTING_DEVICE_ID_DEFAULT = "'-1'"; public static final String SETTING_CONNECTION_PREFERENCE = "connection"; public static final String SETTING_CONNECTION_PREFERENCE_3G = "'3g'"; public static final String SETTING_CONNECTION_PREFERENCE_WIFI = "'wifi'"; public static final String SETTING_CONNECTION_PREFERENCE_NONE = "'none'"; public static final String SETTING_CONNECTION_PREFERENCE_ALLOWED = SETTING_CONNECTION_PREFERENCE_3G+","+SETTING_CONNECTION_PREFERENCE_WIFI+","+SETTING_CONNECTION_PREFERENCE_NONE; public static final String SETTING_CONNECTION_PREFERENCE_DEFAULT = SETTING_CONNECTION_PREFERENCE_NONE; public static final String SETTING_AD_HOC_ENABLED = "adhoc_enable"; public static final String SETTING_AD_HOC_ENABLED_ALLOWED = TRUE+","+FALSE; public static final String SETTING_AD_HOC_ENABLED_DEFAULT = FALSE; public static final String SETTING_SERVER_ADDRESS = "server_address"; public static final String SETTING_SERVER_ADDRESS_DEFAULT = "'0.0.0.0'"; public static final String SETTING_RECORDING_MODE = "recording_mode"; public static final String SETTING_RECORDING_MODE_VIDEO = "'video'"; public static final String SETTING_RECORDING_MODE_AUDIO = "'audio'"; public static final String SETTING_RECORDING_MODE_PICTURE = "'picture'"; public static final String SETTING_RECORDING_MODE_NONE = "'none'"; public static final String SETTING_RECORDING_MODE_ALLOWED = SETTING_RECORDING_MODE_VIDEO+","+SETTING_RECORDING_MODE_AUDIO+","+SETTING_RECORDING_MODE_PICTURE+","+SETTING_RECORDING_MODE_NONE; public static final String SETTING_RECORDING_MODE_DEFAULT = SETTING_RECORDING_MODE_NONE; public static final String SETTING_PREVIEW_ENABLED = "preview_enabled"; public static final String SETTING_PREVIEW_ENABLED_ALLOWED = TRUE+","+FALSE; public static final String SETTING_PREVIEW_ENABLED_DEFAULT = FALSE; public static final String SETTING_PICTURE_RESOLUTION_X = "picture_resolution_x"; public static final String SETTING_PICTURE_RESOLUTION_X_DEFAULT = "'-1'"; public static final String SETTING_PICTURE_RESOLUTION_Y = "picture_resolution_y"; public static final String SETTING_PICTURE_RESOLUTION_Y_DEFAULT = "'-1'"; public static final String SETTING_VIDEO_RESOLUTION_X = "video_resolution_x"; public static final String SETTING_VIDEO_RESOLUTION_X_DEFAULT = "'320'"; public static final String SETTING_VIDEO_RESOLUTION_Y = "video_resolution_y"; public static final String SETTING_VIDEO_RESOLUTION_Y_DEFAULT = "'240'"; public static final String SETTING_VIDEO_FPS = "video_fps"; public static final String SETTING_VIDEO_FPS_DEFAULT = "'15'"; public static final String SETTING_AUDIO_BITRATE_KBPS = "audio_bitrate"; public static 
final String SETTING_AUDIO_BITRATE_KBPS_DEFAULT = "'96'"; public static final String SETTING_STORE_TO_SD = "sd_enabled"; public static final String SETTING_STORE_TO_SD_ALLOWED = TRUE+","+FALSE; public static final String SETTING_STORE_TO_SD_DEFAULT = TRUE; public static final String SETTING_STORAGE_LIMIT_MB = "storage_limit"; public static final String SETTING_STORAGE_LIMIT_MB_DEFAULT = "'-1'"; public static final String SETTING_CLIP_LENGTH_SECONDS = "clip_length"; public static final String SETTING_CLIP_LENGTH_SECONDS_DEFAULT = "'300'"; Does anyone see what could be going on? I'm stumped. Thanks in advance.
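
One detail worth spelling out, since the question already suspects the CHECK clauses: in the CREATE TABLE string the quotes inside constants such as "'none'" are consumed by the SQL parser, so the allowed values are the bare strings 3g, wifi, none, and so on. The same constants passed through ContentValues, however, are bound as parameters, so the stored text keeps its quote characters and can never satisfy CHECK(col IN('none', ...)). A minimal sketch of inserting plain, unquoted values (hypothetical method name, same constants as the question) looks like this:

// ContentValues/insert() already bind values as parameters, so no quote characters
// should be baked into the strings themselves.
private void insertDefaultSettings(SQLiteDatabase db) {
    ContentValues cv = new ContentValues();
    cv.put(Settings.SETTING_UNIQUE_ID, Settings.uniqueID);        // not "'" + Settings.uniqueID + "'"
    cv.put(Settings.SETTING_CONNECTION_PREFERENCE, "none");       // must equal one of the IN(...) literals exactly
    cv.put(Settings.SETTING_AD_HOC_ENABLED, "0");
    // ... remaining columns the same way, as plain values
    db.insert(Settings.SETTINGS_TABLE_NAME, null, cv);
}

For that to line up, the *_DEFAULT constants (or the CHECK lists) would also need their embedded quotes removed so both sides compare the same bare text.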

    Read the article

  • SWIG & C/C++ Python API connected - SEGFAULT

    - by user289637
    Hello, my task is to create dual program. At the beginning I start C program that calls throught C/C++ API of Python some Python method. The called method after that call a function that is created with SWIG. I show you my sample also with backtrace from gdb after I am given Segmentation fault. main.c: #include <Python.h> #include <stdio.h> #include "utils.h" int main(int argc, char** argv) { printf("Calling from C !\n"); increment(); int i; for(i = 0; i < 11; ++i) { Py_Initialize(); PyObject *pname = PyString_FromString("py_function"); PyObject *module = PyImport_Import(pname); PyObject *dict = PyModule_GetDict(module); PyObject *func = PyDict_GetItemString(dict, "ink"); PyObject_CallObject(func, NULL); Py_DECREF(module); Py_DECREF(pname); printf("\tbefore finalize\n"); Py_Finalize(); printf("\tafter finalize\n"); } return 0; } utils.c #include <stdio.h> #include "utils.h" void increment(void) { printf("Incremention counter to: %u\n", ++counter); } py_function.py #!/usr/bin/python2.6 '''py_function.py - Python source designed to demonstrate the use of python embedding''' import utils def ink(): print 'I am gonna increment !' utils.increment() and last think is my Makefile & SWIG configure file Makefile: CC=gcc CFLAGS=-c -g -Wall -std=c99 all: main main: main.o utils.o utils_wrap.o $(CC) main.o utils.o -lpython2.6 -o sample swig -Wall -python -o utils_wrap.c utils.i $(CC) utils.o utils_wrap.o -shared -o _utils.so main.o: main.c $(CC) $(CFLAGS) main.c -I/usr/include/python2.6 -o main.o utils.o: utils.c utils.h $(CC) $(CFLAGS) -fPIC utils.c -o $@ utils_wrap.o: utils_wrap.c $(CC) -c -fPIC utils_wrap.c -I/usr/include/python2.6 -o $@ clean: rm -rf *.o The program is called by ./main and there is output: (gdb) run Starting program: /home/marxin/Programming/python2/sample [Thread debugging using libthread_db enabled] Calling from C ! Incremention counter to: 1 I am gonna increment ! Incremention counter to: 2 before finalize after finalize I am gonna increment ! Incremention counter to: 3 before finalize after finalize I am gonna increment ! Incremention counter to: 4 before finalize after finalize Program received signal SIGSEGV, Segmentation fault. 0xb7ed3e4e in PyObject_Malloc () from /usr/lib/libpython2.6.so.1.0 Backtrace: (gdb) backtrace #0 0xb7ed3e4e in PyObject_Malloc () from /usr/lib/libpython2.6.so.1.0 #1 0xb7ca2b2c in ?? () #2 0xb7f8dd40 in ?? () from /usr/lib/libpython2.6.so.1.0 #3 0xb7eb014c in ?? () from /usr/lib/libpython2.6.so.1.0 #4 0xb7f86ff4 in ?? () from /usr/lib/libpython2.6.so.1.0 #5 0xb7f99820 in ?? () from /usr/lib/libpython2.6.so.1.0 #6 0x00000001 in ?? () #7 0xb7f8dd40 in ?? () from /usr/lib/libpython2.6.so.1.0 #8 0xb7f4f014 in _PyObject_GC_Malloc () from /usr/lib/libpython2.6.so.1.0 #9 0xb7f99820 in ?? () from /usr/lib/libpython2.6.so.1.0 #10 0xb7f4f104 in _PyObject_GC_NewVar () from /usr/lib/libpython2.6.so.1.0 #11 0xb7ee8760 in _PyType_Lookup () from /usr/lib/libpython2.6.so.1.0 #12 0xb7f99820 in ?? () from /usr/lib/libpython2.6.so.1.0 #13 0x00000001 in ?? () #14 0xb7f8dd40 in ?? () from /usr/lib/libpython2.6.so.1.0 #15 0xb7ef13ed in ?? () from /usr/lib/libpython2.6.so.1.0 #16 0xb7f86ff4 in ?? () from /usr/lib/libpython2.6.so.1.0 #17 0x00000001 in ?? () #18 0xbfff0c34 in ?? () #19 0xb7e993c3 in ?? () from /usr/lib/libpython2.6.so.1.0 #20 0x00000001 in ?? () #21 0xbfff0c70 in ?? () #22 0xb7f99da0 in ?? () from /usr/lib/libpython2.6.so.1.0 #23 0xb7f86ff4 in ?? () from /usr/lib/libpython2.6.so.1.0 #24 0xb7f86ff4 in ?? 
() from /usr/lib/libpython2.6.so.1.0 #25 0x080a6b0c in ?? () #26 0x080a6b0c in ?? () #27 0xb7e99420 in PyObject_CallFunctionObjArgs () from /usr/lib/libpython2.6.so.1.0 #28 0xb7f86ff4 in ?? () from /usr/lib/libpython2.6.so.1.0 #29 0xb7f86ff4 in ?? () from /usr/lib/libpython2.6.so.1.0 #30 0x800e55eb in ?? () #31 0x080a6b0c in ?? () #32 0xb7e9958c in PyObject_IsSubclass () from /usr/lib/libpython2.6.so.1.0 #33 0xb7f8dd40 in ?? () from /usr/lib/libpython2.6.so.1.0 #34 0x080a9020 in ?? () #35 0xb7fb78f0 in PyFPE_counter () from /usr/lib/libpython2.6.so.1.0 #36 0xb7f86ff4 in ?? () from /usr/lib/libpython2.6.so.1.0 #37 0x00000000 in ?? () Thanks for your help and advices, marxin
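
The crash pattern here (fine for the first couple of iterations, then SIGSEGV inside PyObject_Malloc) is consistent with a documented caveat of the C API: Py_Finalize() does not fully unload C extension modules, and some extensions misbehave if the interpreter is initialized and finalized repeatedly. A minimal sketch that initializes the interpreter once, runs the loop, and finalizes once at the end is below (same modules as the question, error handling kept minimal); whether this is the actual root cause would still need confirming, e.g. with a debug build of Python.

#include <Python.h>
#include <stdio.h>
#include "utils.h"

int main(void)
{
    printf("Calling from C !\n");
    increment();

    Py_Initialize();                              /* initialize the interpreter once */
    for (int i = 0; i < 11; ++i) {
        PyObject *pname  = PyString_FromString("py_function");
        PyObject *module = PyImport_Import(pname);
        if (module != NULL) {
            PyObject *dict = PyModule_GetDict(module);            /* borrowed reference */
            PyObject *func = PyDict_GetItemString(dict, "ink");   /* borrowed reference */
            PyObject *ret  = PyObject_CallObject(func, NULL);
            Py_XDECREF(ret);
            Py_DECREF(module);
        } else {
            PyErr_Print();
        }
        Py_DECREF(pname);
    }
    Py_Finalize();                                /* finalize once, at the very end */
    return 0;
}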

    Read the article

  • How to properly sort MPTT hierarchy data into a multidimensional array?

    - by DiegoMax
    Helo there, im trying to figure out how to write a function that returns a multidimensional array, based on the data below: I know how to write the function using the "category_parent" value, but im just trying to write a function that can create a multidimensional array by JUST using the left and right keys. Any help greatly appreciated! array(71) { [0]=> array(9) { ["id"]=> string(1) "1" ["category_name"]=> string(6) "Rubros" ["category_parent"]=> string(1) "0" ["category_slug"]=> string(6) "rubros" ["category_image"]=> NULL ["category_totals"]=> NULL ["category_lft"]=> string(1) "1" ["category_rgt"]=> string(3) "142" } [1]=> array(9) { ["id"]=> string(4) "1000" ["category_name"]=> string(12) "Restaurantes" ["category_parent"]=> string(1) "1" ["category_slug"]=> string(12) "restaurantes" ["category_image"]=> string(16) "restaurantes.png" ["category_totals"]=> string(1) "1" ["category_lft"]=> string(1) "2" ["category_rgt"]=> string(2) "13" } [2]=> array(9) { ["id"]=> string(1) "3" ["category_name"]=> string(21) "Restaurantes de Campo" ["category_parent"]=> string(4) "1000" ["category_slug"]=> string(21) "restaurantes-de-campo" ["category_image"]=> NULL ["category_totals"]=> string(1) "1" ["category_lft"]=> string(1) "3" ["category_rgt"]=> string(1) "4" } [3]=> array(9) { ["id"]=> string(2) "37" ["category_name"]=> string(25) "Restaurantes en la Ciudad" ["category_parent"]=> string(4) "1000" ["category_slug"]=> string(19) "restaurantes-ciudad" ["category_image"]=> string(0) "" ["category_totals"]=> string(1) "6" ["category_lft"]=> string(1) "5" ["category_rgt"]=> string(1) "6" } [4]=> array(9) { ["id"]=> string(2) "41" ["category_name"]=> string(21) "Servicios de Catering" ["category_parent"]=> string(4) "1000" ["category_slug"]=> string(8) "catering" ["category_image"]=> string(0) "" ["category_totals"]=> string(1) "1" ["category_lft"]=> string(1) "7" ["category_rgt"]=> string(1) "8" } [5]=> array(9) { ["id"]=> string(2) "48" ["category_name"]=> string(10) "Rotiserias" ["category_parent"]=> string(4) "1000" ["category_slug"]=> string(10) "rotiserias" ["category_image"]=> string(0) "" ["category_totals"]=> string(1) "1" ["category_lft"]=> string(1) "9" ["category_rgt"]=> string(2) "10" } [6]=> array(9) { ["id"]=> string(2) "62" ["category_name"]=> string(10) "Pizzerías" ["category_parent"]=> string(4) "1000" ["category_slug"]=> string(9) "pizzerias" ["category_image"]=> string(0) "" ["category_totals"]=> string(1) "1" ["category_lft"]=> string(2) "11" ["category_rgt"]=> string(2) "12" } [7]=> array(9) { ["id"]=> string(1) "2" ["category_name"]=> string(13) "Profesionales" ["category_parent"]=> string(1) "1" ["category_slug"]=> string(13) "profesionales" ["category_image"]=> string(17) "profesionales.png" ["category_totals"]=> string(1) "2" ["category_lft"]=> string(2) "14" ["category_rgt"]=> string(2) "35" } [8]=> array(9) { ["id"]=> string(2) "29" ["category_name"]=> string(11) "Arquitectos" ["category_parent"]=> string(1) "2" ["category_slug"]=> string(11) "arquitectos" ["category_image"]=> NULL ["category_totals"]=> string(1) "0" ["category_lft"]=> string(2) "15" ["category_rgt"]=> string(2) "16" } [9]=> array(9) { ["id"]=> string(2) "30" ["category_name"]=> string(8) "Abogados" ["category_parent"]=> string(1) "2" ["category_slug"]=> string(8) "abogados" ["category_image"]=> NULL ["category_totals"]=> string(1) "6" ["category_lft"]=> string(2) "17" ["category_rgt"]=> string(2) "18" } }
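
The left/right values alone are enough to rebuild the tree: with the rows ordered by category_lft (as in the dump), every following row whose category_rgt is still smaller than the current node's category_rgt is a descendant of that node. A minimal sketch of that idea (function names and the 'children' key are illustrative, column names are the ones from the dump):

<?php
// Turn nested-set rows into a nested array using only category_lft / category_rgt.
function buildTree(array $rows)
{
    // make sure rows are in pre-order (ascending left value)
    usort($rows, function ($a, $b) {
        return (int)$a['category_lft'] - (int)$b['category_lft'];
    });
    $i = 0;
    return buildChildren($rows, $i, PHP_INT_MAX);
}

function buildChildren(array $rows, &$i, $parentRgt)
{
    $children = array();
    $count = count($rows);
    // consume rows while their right value is still inside the parent's range
    while ($i < $count && (int)$rows[$i]['category_rgt'] < $parentRgt) {
        $node = $rows[$i++];
        $node['children'] = buildChildren($rows, $i, (int)$node['category_rgt']);
        $children[] = $node;
    }
    return $children;
}

// usage: $tree = buildTree($flatRows);   // $flatRows being the array(71) shown above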

    Read the article

  • Django + dbxml + Apache = problems. Any solutions?

    - by Jason
    I'm trying to set up a Django application using WSGI. That works fine. However, I am having some issues with part of my Django app that uses BDB XML. My Apache config is as follows: Listen 8000 WSGISocketPrefix /tmp/wsgi <VirtualHost *:8000> ServerName <server name> DocumentRoot <path to doc root> LogLevel info WSGIScriptAlias / <path to wsgi> WSGIApplicationGroup %{GLOBAL} WSGIDaemonProcess debug threads=1 WSGIProcessGroup debug </VirtualHost> However, I'm still getting the following error: DB_ENV->repmgr_stat interface requires an environment configured for the replication subsystem [error] child died with signal 11 My environment is opened as: environment = DBEnv() environment.open( <absolute db env path>, DB_CREATE|DB_INIT_LOCK|DB_INIT_LOG|DB_INIT_MPOOL, 0 ) I am using: python 2.6.2 apache 2.2 ubuntu 9.04 dbxml 2.5.13 compiled from source (so libdb-4.8, bsddb3, all that jazz) I see Apache seems to link to libdb-4.6. Is this a problem? ldd /usr/sbin/apache2 | grep libdb libdb-4.6.so => /usr/lib/libdb-4.6.so (0xb7c01000) Updated Program received signal SIGSEGV, Segmentation fault. [Switching to Thread 0xb5a48b90 (LWP 12700)] 0x00000000 in ?? () (gdb) thread apply all bt Thread 4 (Thread 0xb6a67b90 (LWP 12698)): #0 0xb7f11422 in __kernel_vsyscall () #1 0xb7de07b1 in select () from /lib/tls/i686/cmov/libc.so.6 #2 0xb7ea5bcf in apr_sleep () from /usr/lib/libapr-1.so.0 #3 0xb6d7afee in ?? () from /usr/lib/apache2/modules/mod_wsgi.so #4 0xb7ea38ec in ?? () from /usr/lib/libapr-1.so.0 #5 0xb7e6d4ff in start_thread () from /lib/tls/i686/cmov/libpthread.so.0 #6 0xb7de849e in clone () from /lib/tls/i686/cmov/libc.so.6 Thread 3 (Thread 0xb6249b90 (LWP 12699)): #0 0xb7f11422 in __kernel_vsyscall () #1 0xb7de07b1 in select () from /lib/tls/i686/cmov/libc.so.6 #2 0xb7ea5bcf in apr_sleep () from /usr/lib/libapr-1.so.0 #3 0xb6d7ab39 in ?? () from /usr/lib/apache2/modules/mod_wsgi.so #4 0xb7ea38ec in ?? () from /usr/lib/libapr-1.so.0 #5 0xb7e6d4ff in start_thread () from /lib/tls/i686/cmov/libpthread.so.0 #6 0xb7de849e in clone () from /lib/tls/i686/cmov/libc.so.6 Thread 2 (Thread 0xb5a48b90 (LWP 12700)): #0 0x00000000 in ?? () #1 0xb4f03b5e in DbXml::XmlManager::XmlManager () from /home/jason/dbxml-2.5.13/install/lib/libdbxml-2.5.so #2 0xb501b29b in _wrap_new_XmlManager (self=0x0, args=0xac66fcc) at dbxml_python_wrap.cpp:5183 #3 0xb6b77aed in PyCFunction_Call () from /usr/lib/libpython2.6.so.1.0 #4 0xb6b3198c in PyObject_Call () from /usr/lib/libpython2.6.so.1.0 #5 0xb6bd70b5 in PyEval_EvalFrameEx () from /usr/lib/libpython2.6.so.1.0 #6 0xb6bdb910 in PyEval_EvalCodeEx () from /usr/lib/libpython2.6.so.1.0 #7 0xb6b6187a in ?? () from /usr/lib/libpython2.6.so.1.0 #8 0xb6b3198c in PyObject_Call () from /usr/lib/libpython2.6.so.1.0 #9 0xb6b427a8 in ?? () from /usr/lib/libpython2.6.so.1.0 #10 0xb6b3198c in PyObject_Call () from /usr/lib/libpython2.6.so.1.0 #11 0xb6b9ae03 in ?? () from /usr/lib/libpython2.6.so.1.0 #12 0xb6b90f55 in ?? () from /usr/lib/libpython2.6.so.1.0 #13 0xb6b3198c in PyObject_Call () from /usr/lib/libpython2.6.so.1.0 #14 0xb6bd7618 in PyEval_EvalFrameEx () from /usr/lib/libpython2.6.so.1.0 #15 0xb6bdb910 in PyEval_EvalCodeEx () from /usr/lib/libpython2.6.so.1.0 #16 0xb6b6187a in ?? () from /usr/lib/libpython2.6.so.1.0 #17 0xb6b3198c in PyObject_Call () from /usr/lib/libpython2.6.so.1.0 #18 0xb6b427a8 in ?? 
() from /usr/lib/libpython2.6.so.1.0 #19 0xb6b3198c in PyObject_Call () from /usr/lib/libpython2.6.so.1.0 #20 0xb6bd3a34 in PyEval_CallObjectWithKeywords () from /usr/lib/libpython2.6.so.1.0 #21 0xb6b44a7d in PyInstance_New () from /usr/lib/libpython2.6.so.1.0 #22 0xb6b3198c in PyObject_Call () from /usr/lib/libpython2.6.so.1.0 #23 0xb6bd7618 in PyEval_EvalFrameEx () from /usr/lib/libpython2.6.so.1.0 #24 0xb6bdb910 in PyEval_EvalCodeEx () from /usr/lib/libpython2.6.so.1.0 #25 0xb6b61969 in ?? () from /usr/lib/libpython2.6.so.1.0 #26 0xb6b3198c in PyObject_Call () from /usr/lib/libpython2.6.so.1.0 #27 0xb6bd70b5 in PyEval_EvalFrameEx () from /usr/lib/libpython2.6.so.1.0 #28 0xb6bdb910 in PyEval_EvalCodeEx () from /usr/lib/libpython2.6.so.1.0 #29 0xb6b61969 in ?? () from /usr/lib/libpython2.6.so.1.0 #30 0xb6b3198c in PyObject_Call () from /usr/lib/libpython2.6.so.1.0 #31 0xb6b427a8 in ?? () from /usr/lib/libpython2.6.so.1.0 #32 0xb6b3198c in PyObject_Call () from /usr/lib/libpython2.6.so.1.0 #33 0xb6b9b483 in ?? () from /usr/lib/libpython2.6.so.1.0 #34 0xb6b3198c in PyObject_Call () from /usr/lib/libpython2.6.so.1.0 #35 0xb6bd70b5 in PyEval_EvalFrameEx () from /usr/lib/libpython2.6.so.1.0 #36 0xb6bdab4f in PyEval_EvalFrameEx () from /usr/lib/libpython2.6.so.1.0 #37 0xb6bdb910 in PyEval_EvalCodeEx () from /usr/lib/libpython2.6.so.1.0 #38 0xb6b6187a in ?? () from /usr/lib/libpython2.6.so.1.0 #39 0xb6b3198c in PyObject_Call () from /usr/lib/libpython2.6.so.1.0 #40 0xb6b427a8 in ?? () from /usr/lib/libpython2.6.so.1.0 #41 0xb6b3198c in PyObject_Call () from /usr/lib/libpython2.6.so.1.0 #42 0xb6b9b483 in ?? () from /usr/lib/libpython2.6.so.1.0 #43 0xb6b3198c in PyObject_Call () from /usr/lib/libpython2.6.so.1.0 #44 0xb6bd3a34 in PyEval_CallObjectWithKeywords () from /usr/lib/libpython2.6.so.1.0 #45 0xb6d7172d in ?? () from /usr/lib/apache2/modules/mod_wsgi.so #46 0xb6d7539f in ?? () from /usr/lib/apache2/modules/mod_wsgi.so #47 0xb6d7e1d8 in ?? () from /usr/lib/apache2/modules/mod_wsgi.so #48 0xb6d7a42c in ?? () from /usr/lib/apache2/modules/mod_wsgi.so #49 0xb6d7a8bd in ?? () from /usr/lib/apache2/modules/mod_wsgi.so #50 0xb6d7a9c5 in ?? () from /usr/lib/apache2/modules/mod_wsgi.so #51 0xb7ea38ec in ?? () from /usr/lib/libapr-1.so.0 #52 0xb7e6d4ff in start_thread () from /lib/tls/i686/cmov/libpthread.so.0 #53 0xb7de849e in clone () from /lib/tls/i686/cmov/libc.so.6 Thread 1 (Thread 0xb7460b00 (LWP 12697)): #0 0xb7f11422 in __kernel_vsyscall () #1 0xb7e75300 in sigwait () from /lib/tls/i686/cmov/libpthread.so.0 #2 0xb7ea3f3b in apr_signal_thread () from /usr/lib/libapr-1.so.0 #3 0xb6d7b48d in ?? () from /usr/lib/apache2/modules/mod_wsgi.so #4 0xb6d7bc98 in ?? () from /usr/lib/apache2/modules/mod_wsgi.so #5 0xb6d79632 in ?? () from /usr/lib/apache2/modules/mod_wsgi.so #6 0xb7e9a2c9 in apr_proc_other_child_alert () from /usr/lib/libapr-1.so.0 #7 0x08092202 in ap_mpm_run () #8 0x080673c8 in main () #0 0x00000000 in ?? ()

    Read the article

  • How to get a latitude and longitude position that is stored in MySQL and use it in an Android map application

    - by gunawan haruna
    I have tried to get the latitude and longitude position that was stored in MySQL. I want use the values to my Android map application. Here is my code: deskripsi.Java Button direction = (Button) findViewById (R.id.btnDir); direction.setOnClickListener(new OnClickListener(){ public void onClick(View arg0) { Intent z = getIntent(); des_lat = z.getExtras().getString("des_lat"); des_long = z.getExtras().getString("des_long"); Intent i = new Intent(android.content.Intent.ACTION_VIEW, Uri.parse("http://maps.google.com/maps?&daddr="+des_lat+","+des_long)); //("geo:37.827500,-122.481670")); startActivity(i); } }); And here is the content.Java private ListView list; int x; private String panjang[]; public void onCreate(Bundle savedInstanceState) { // TODO Auto-generated method stub super.onCreate(savedInstanceState); setContentView(R.layout.kontent); super.initButtonSearch(); list = (ListView) findViewById(R.id.list); JSONObject jo; try { jo = new JSONObject(JsonKontent); JSONArray ja = jo.getJSONArray("result"); System.out.println("Panjang : " + ja.length()); if (ja.length() == 0) { Toast.makeText(Content.this, "Data tidak ada!", Toast.LENGTH_LONG).show(); finish(); } content_id = new String[ja.length()]; c_title = new String[ja.length()]; c_telephone = new String[ja.length()]; c_short_description = new String[ja.length()]; c_long_description = new String[ja.length()]; c_image1 = new String[ja.length()]; c_image2 = new String[ja.length()]; l_address = new String[ja.length()]; catagory_id = new String[ja.length()]; Location myLoc = new Location("sharedPreferences"); Location restLoc = new Location("restaurantTable"); l_latitude = new String[ja.length()]; l_longitude = new String[ja.length()]; c_name = new String[ja.length()]; panjang = new String[ja.length()]; for (x = 0; x < ja.length(); x++) { JSONObject joj = ja.getJSONObject(x); content_id[x] = joj.getString("content_id"); catagory_id[x] = joj.getString("catagory_id"); c_title[x] = joj.getString("c_title"); c_telephone[x] = joj.getString("c_telephone"); c_short_description[x] = joj.getString("c_short_description"); c_long_description[x] = joj.getString("c_long_description"); c_image1[x] = HTTPConnection.urlPicture + joj.getString("c_image1"); c_image2[x] = HTTPConnection.urlPicture + joj.getString("c_image2"); l_address[x] = joj.getString("l_address"); l_latitude[x] = joj.getString("l_latitude"); l_longitude[x] = joj.getString("l_longitude"); c_name[x] = joj.getString("c_name"); myLoc.setLatitude(myLatitude); myLoc.setLongitude(myLongitude); restLoc.setLatitude(Double.parseDouble(l_latitude[x])); restLoc.setLongitude(Double.parseDouble(l_longitude[x])); float f = myLoc.distanceTo(restLoc); int f_int = Math.round(f / 100); f = Float.valueOf(f_int) / 10; String dist = new DecimalFormat("#,##0.0").format(f); System.out.println("Panjang " + dist + " km"); panjang[x] = dist + " km"; } } catch (JSONException e) { Toast.makeText(Content.this, "Data yang dicari tidak ada!", Toast.LENGTH_LONG).show(); finish(); } PFCAdapter adapter = new PFCAdapter(this, c_image1, c_title, l_address, panjang); list.setAdapter(adapter); list.setOnItemClickListener(new OnItemClickListener() { public void onItemClick(AdapterView<?> arg0, View arg1, int arg2, long arg3) { // TODO Auto-generated method stub System.out.println("Content ID: " + Content.content_id[Deskripsi.id]); Deskripsi.id = arg2; waitDialog = ProgressDialog.show(Content.this, "Memuat", "Harap tunggu, sedang terhubung dengan server"); waitDialog.setIcon(R.drawable.iconnya); waitDialog.show(); new 
LihatRatingTask().execute(); } }); } class LihatRatingTask extends AsyncTask<Void, Void, Void> { protected Void doInBackground(Void... Arg0) { Deskripsi.jsonRating = HTTPConnection.openUrl(HTTPConnection.host + "lihat_rating.php?content_id=" + Content.content_id[Deskripsi.id]); Deskripsi.jsonSubCategory = HTTPConnection .openUrl(HTTPConnection.host + "sub_catagory_parameter.php?content_id=" + Content.content_id[Deskripsi.id]); RoutePath.place = HTTPConnection .LoadImageFromWeb(HTTPConnection.host + "Logo/" + image[Integer.valueOf(catagory_id[Deskripsi.id]) - 1]); Intent i = new Intent(Content.this, Deskripsi.class); i.putExtra("des_lat", l_latitude); i.putExtra("des_long", l_longitude); startActivity(i); waitDialog.dismiss(); return null; } protected void onPostExecute(Void result) { // TODO Auto-generated method stub super.onPostExecute(result); waitDialog.dismiss(); } } } The result is in destination EditText in maps application for Android "null,null" How to make it "destination_latitude, destination_longitude"? Help me please.
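
One small thing that explains the "null,null" in the destination field: the receiving activity reads the extras with getExtras().getString("des_lat"), but the sending code puts the entire l_latitude and l_longitude String arrays into the Intent, and Bundle.getString() returns null when the stored value is not a single String. A minimal sketch of sending just the selected row's pair (same field names as the question, with the clicked position used as the index):

// in the task that launches Deskripsi -- pass one element, not the whole array
Intent i = new Intent(Content.this, Deskripsi.class);
i.putExtra("des_lat", l_latitude[Deskripsi.id]);     // a single String
i.putExtra("des_long", l_longitude[Deskripsi.id]);
startActivity(i);

// in Deskripsi, inside the click listener
String des_lat  = getIntent().getStringExtra("des_lat");
String des_long = getIntent().getStringExtra("des_long");
Intent map = new Intent(Intent.ACTION_VIEW,
        Uri.parse("http://maps.google.com/maps?&daddr=" + des_lat + "," + des_long));
startActivity(map);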

    Read the article

  • Developed Android application cannot connect to phpMyAdmin

    - by user1850936
    I am developing an app with eclipse. I tried to store the data that key in by user into database in phpmyadmin. Unfortunately, after the user has clicked on submit button, there is no response and data is not stored in my database. Here is my java file: import java.util.ArrayList; import java.util.List; import org.apache.http.NameValuePair; import org.apache.http.message.BasicNameValuePair; import org.json.JSONObject; import android.app.Activity; import android.app.AlertDialog; import android.content.DialogInterface; import android.content.Intent; import android.os.Bundle; import android.util.Log; import android.view.View; import android.widget.Button; import android.widget.EditText; import android.widget.RadioButton; import android.content.res.Configuration; public class UserRegister extends Activity { JSONParser jsonParser = new JSONParser(); EditText inputName; EditText inputUsername; EditText inputEmail; EditText inputPassword; RadioButton button1; RadioButton button2; Button button3; int success = 0; private static String url_register_user = "http://10.20.92.81/database/add_user.php"; private static final String TAG_SUCCESS = "success"; @Override public void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.activity_user_register); inputName = (EditText) findViewById(R.id.nameTextBox); inputUsername = (EditText) findViewById(R.id.usernameTextBox); inputEmail = (EditText) findViewById(R.id.emailTextBox); inputPassword = (EditText) findViewById(R.id.pwTextBox); Button button3 = (Button) findViewById(R.id.regSubmitButton); button3.setOnClickListener(new View.OnClickListener() { public void onClick(View view) { String name = inputName.getText().toString(); String username = inputUsername.getText().toString(); String email = inputEmail.getText().toString(); String password = inputPassword.getText().toString(); if (name.contentEquals("")||username.contentEquals("")||email.contentEquals("")||password.contentEquals("")) { AlertDialog.Builder builder = new AlertDialog.Builder(UserRegister.this); builder.setMessage(R.string.nullAlert) .setTitle(R.string.alertTitle); builder.setPositiveButton(R.string.ok, new DialogInterface.OnClickListener() { public void onClick(DialogInterface dialog, int id) { } }); AlertDialog dialog = builder.show(); } // creating new product in background thread RegisterNewUser(); } }); } public void RegisterNewUser() { try { String name = inputName.getText().toString(); String username = inputUsername.getText().toString(); String email = inputEmail.getText().toString(); String password = inputPassword.getText().toString(); // Building Parameters List<NameValuePair> params = new ArrayList<NameValuePair>(); params.add(new BasicNameValuePair("name", name)); params.add(new BasicNameValuePair("username", username)); params.add(new BasicNameValuePair("email", email)); params.add(new BasicNameValuePair("password", password)); // getting JSON Object // Note that create product url accepts POST method JSONObject json = jsonParser.makeHttpRequest(url_register_user, "GET", params); // check log cat for response Log.d("Send Notification", json.toString()); success = json.getInt(TAG_SUCCESS); if (success == 1) { // successfully created product Intent i = new Intent(getApplicationContext(), StudentLogin.class); startActivity(i); finish(); } else { // failed to register } } catch (Exception e) { e.printStackTrace(); } } @Override public void onConfigurationChanged(Configuration newConfig) { super.onConfigurationChanged(newConfig); } } 
my php file: <?php $response = array(); require_once __DIR__ . '/db_connect.php'; $db = new DB_CONNECT(); if (isset($_GET['name']) && isset($_GET['username']) && isset($_GET['email']) && isset($_GET['password'])) { $name = $_GET['name']; $username = $_GET['username']; $email = $_GET['email']; $password = $_GET['password']; // mysql inserting a new row $result = mysql_query("INSERT INTO register(name, username, email, password) VALUES('$name', '$username', '$email', '$password')"); // check if row inserted or not if ($result) { // successfully inserted into database $response["success"] = 1; $response["message"] = "You are successfully registered to MEMS."; // echoing JSON response echo json_encode($response); } else { // failed to insert row $response["success"] = 0; $response["message"] = "Oops! An error occurred."; // echoing JSON response echo json_encode($response); } } else { // required field is missing $response["success"] = 0; $response["message"] = "Required field(s) is missing"; // echoing JSON response echo json_encode($response); } ?> the log cat is as follows: 11-25 10:37:46.772: I/Choreographer(638): Skipped 30 frames! The application may be doing too much work on its main thread.
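
Whatever is blocking the round trip, the PHP side is worth tightening: the $_GET values are interpolated straight into the INSERT, so any quote character in a field breaks the query (and invites SQL injection), and a failed query is reported only as a generic message. A minimal sketch of the same insert using mysqli with a prepared statement is shown below; the host, credentials and database name are placeholders standing in for whatever db_connect.php uses, and hashing the password before storing it would be better still.

<?php
$response = array();

// placeholders -- use the same connection details as db_connect.php
$db = new mysqli('localhost', 'DB_USER', 'DB_PASSWORD', 'DB_NAME');
if ($db->connect_errno) {
    $response["success"] = 0;
    $response["message"] = "Database connection failed: " . $db->connect_error;
    die(json_encode($response));
}

if (isset($_GET['name'], $_GET['username'], $_GET['email'], $_GET['password'])) {
    $name     = $_GET['name'];
    $username = $_GET['username'];
    $email    = $_GET['email'];
    $password = $_GET['password'];

    // parameters are bound, not concatenated into the SQL string
    $stmt = $db->prepare("INSERT INTO register (name, username, email, password) VALUES (?, ?, ?, ?)");
    $stmt->bind_param('ssss', $name, $username, $email, $password);

    if ($stmt->execute()) {
        $response["success"] = 1;
        $response["message"] = "You are successfully registered to MEMS.";
    } else {
        $response["success"] = 0;
        $response["message"] = "Insert failed: " . $stmt->error;
    }
    $stmt->close();
} else {
    $response["success"] = 0;
    $response["message"] = "Required field(s) is missing";
}

echo json_encode($response);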

    Read the article

  • Weird characters in exported csv files when converting

    - by Ahue
    Hey guys, I came across a problem I cannot solve on my own concerning the downloadable csv formatted trends data files from Google Insights for Search. I'm to lazy to reformat the files I4S gives me manually what means: Extracting the section with the actual trends data and reformatting the columns so that I can use it with a modelling program I do for school. So I wrote a tiny script the should do the work for me: Taking a file, do some magic and give me a new file in proper format. What it's supposed to do is reading the file contents, extracting the trends section, splitting it by newlines, splitting each line and then reorder the columns and maybe reformat them. When looking at a untouched I4S csv file it looks normal containing CR LF caracters at line breaks (maybe thats only because I'm using Windows). When just reading the contents and then writing them to a new file using the script wierd asian characters appear between CR and LF. I tried the script with a manually written similar looking file and even tried a csv file from Google Trends and it works fine. I use Python and the script (snippet) I used for the following example looks like this: # Read from an input file file = open(file,"r") contents = file.read() file.close() cfile = open("m.log","w+") cfile.write(contents) cfile.close() Has anybody an idea why those characters appear??? Thank you for you help! I'll give you and example: First few lines of I4S csv file: Web Search Interest: foobar Worldwide; 2004 - present Interest over time Week foobar 2004-01-04 - 2004-01-10 44 2004-01-11 - 2004-01-17 44 2004-01-18 - 2004-01-24 37 2004-01-25 - 2004-01-31 40 2004-02-01 - 2004-02-07 49 2004-02-08 - 2004-02-14 51 2004-02-15 - 2004-02-21 45 2004-02-22 - 2004-02-28 61 2004-02-29 - 2004-03-06 51 2004-03-07 - 2004-03-13 48 2004-03-14 - 2004-03-20 50 2004-03-21 - 2004-03-27 56 2004-03-28 - 2004-04-03 59 Output file when reading and writing contents: Web Search Interest: foobar ??????????? ? ? ? ????????? ????????? ???? ?????? Week foobar ?? ?? ?? ? ? ? ?? ??? ????? 2004-01-11 - 2004-01-17 44 ?? ?? ???? ? ? ?? ????????? 2004-01-25 - 2004-01-31 40 ?? ?? ?? ? ? ? ?? ?? ?????? 2004-02-08 - 2004-02-14 51 ?? ?? ???? ? ? ?? ????????? 2004-02-22 - 2004-02-28 61 ?? ?? ???? ? ? ?? ?? ?????? 2004-03-07 - 2004-03-13 48 ?? ?? ???? ? ? ?? ??? ?? ?? 2004-03-21 - 2004-03-27 56 ?? ?? ???? ? ? ?? ?? ?????? 2004-04-04 - 2004-04-10 69 ?? ?? ???? ? ? ?? ????????? 2004-04-18 - 2004-04-24 51 ?? ?? ???? ? ? ?? ?? ?????? 2004-05-02 - 2004-05-08 56 ?? ?? ?? ? ? ? ?? ????????? 2004-05-16 - 2004-05-22 54 ?? ?? ???? ? ? ?? ????????? 2004-05-30 - 2004-06-05 74 ?? ?? ?? ? ? ? ?? ????????? 2004-06-13 - 2004-06-19 50 ?? ?? ??? ? ? ?? ????????? 2004-06-27 - 2004-07-03 58 ?? ?? ?? ? ? ? ?? ??? ????? 2004-07-11 - 2004-07-17 59 ?? ?? ???? ? ? ?? ?????????
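
A symptom like this, where a file looks fine in its original form but sprouts CJK-looking characters after a plain read()/write() round trip, usually points at an encoding issue rather than at the script logic: if the download is not plain ASCII (UTF-16, or UTF-8 with a byte-order mark), copying it in text mode and then viewing the copy lets the byte pairs be reinterpreted as far-east characters. A minimal sketch that makes the encoding explicit is below; the 'utf-16' value is an assumption and may need changing to 'utf-8-sig' or similar, depending on what the I4S export actually uses.

# -*- coding: utf-8 -*-
# copy the trends file with an explicit encoding instead of a raw text-mode copy
import codecs

src = codecs.open(file, "r", encoding="utf-16")      # 'file' is the downloaded csv path, as above
contents = src.read()
src.close()

cfile = codecs.open("m.log", "w", encoding="utf-8")  # write the copy out as plain UTF-8
cfile.write(contents)
cfile.close()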

    Read the article

  • Dealing with external processes

    - by Jesse Aldridge
    I've been working on a gui app that needs to manage external processes. Working with external processes leads to a lot of issues that can make a programmer's life difficult. I feel like maintenence on this app is taking an unacceptably long time. I've been trying to list the things that make working with external processes difficult so that I can come up with ways of mitigating the pain. This kind of turned into a rant which I thought I'd post here in order to get some feedback and to provide some guidance to anybody thinking about sailing into these very murky waters. Here's what I've got so far: Output from the child can get mixed up with output from the parent. This can make both outputs misleading and hard to read. It can be hard to tell what came from where. It becomes harder to figure out what's going on when things are asynchronous. Here's a contrived example: import textwrap, os, time from subprocess import Popen test_path = 'test_file.py' with open(test_path, 'w') as file: file.write(textwrap.dedent(''' import time for i in range(3): print 'Hello %i' % i time.sleep(1)''')) proc = Popen('python -B "%s"' % test_path) for i in range(3): print 'Hello %i' % i time.sleep(1) os.remove(test_path) I guess I could have the child process write its output to a file. But it can be annoying to have to open up a file every time I want to see the result of a print statement. If I have code for the child process I could add a label, something like print 'child: Hello %i', but it can be annoying to do that for every print. And it adds some noise to the output. And of course I can't do it if I don't have access to the code. I could manually manage the process output. But then you open up a huge can of worms with threads and polling and stuff like that. A simple solution is to treat processes like synchronous functions, that is, no further code executes until the process completes. In other words, make the process block. But that doesn't work if you're building a gui app. Which brings me to the next problem... Blocking processes cause the gui to become unresponsive. import textwrap, sys, os from subprocess import Popen from PyQt4.QtGui import * from PyQt4.QtCore import * test_path = 'test_file.py' with open(test_path, 'w') as file: file.write(textwrap.dedent(''' import time for i in range(3): print 'Hello %i' % i time.sleep(1)''')) app = QApplication(sys.argv) button = QPushButton('Launch process') def launch_proc(): # Can't move the window until process completes proc = Popen('python -B "%s"' % test_path) proc.communicate() button.connect(button, SIGNAL('clicked()'), launch_proc) button.show() app.exec_() os.remove(test_path) Qt provides a process wrapper of its own called QProcess which can help with this. You can connect functions to signals to capture output relatively easily. This is what I'm currently using. But I'm finding that all these signals behave suspiciously like goto statements and can lead to spaghetti code. I think I want to get sort-of blocking behavior by having the 'finished' signal from QProcess call a function containing all the code that comes after the process call. I think that should work but I'm still a bit fuzzy on the details... Stack traces get interrupted when you go from the child process back to the parent process. If a normal function screws up, you get a nice complete stack trace with filenames and line numbers. If a subprocess screws up, you'll be lucky if you get any output at all. 
You end up having to do a lot more detective work every time something goes wrong. Speaking of which, output has a way of disappearing when dealing with external processes. Like if you run something via the Windows 'cmd' command, the console will pop up, execute the code, and then disappear before you have a chance to see the output. You have to pass the /k flag to make it stick around. Similar issues seem to crop up all the time. I suppose both problems 3 and 4 have the same root cause: no exception handling. Exception handling is meant to be used with functions, it doesn't work with processes. Maybe there's some way to get something like exception handling for processes? I guess that's what stderr is for? But dealing with two different streams can be annoying in itself. Maybe I should look into this more... Processes can hang and stick around in the background without you realizing it. So you end up yelling at your computer cuz it's going so slow until you finally bring up your task manager and see 30 instances of the same process hanging out in the background. Also, hanging background processes can interfere with other instances of the process in various fun ways, such as causing permissions errors by holding a handle to a file or something like that. It seems like an easy solution to this would be to have the parent process kill the child process on exit if the child process didn't close itself. But if the parent process crashes, cleanup code might not get called and the child can be left hanging. Also, if the parent waits for the child to complete, and the child is in an infinite loop or something, you can end up with two hanging processes. This problem can tie in to problem 2 for extra fun, causing your gui to stop responding entirely and force you to kill everything with the task manager. F***ing quotes. Parameters often need to be passed to processes. This is a headache in itself. Especially if you're dealing with file paths. Say... 'C:/My Documents/whatever/'. If you don't have quotes, the string will often be split at the space and interpreted as two arguments. If you need nested quotes you can use ' and ". But if you need to use more than two layers of quotes, you have to do some nasty escaping, for example: "cmd /k 'python \'path 1\' \'path 2\''". A good solution to this problem is passing parameters as a list rather than as a single string. Subprocess allows you to do this. Can't easily return data from a subprocess. You can use stdout of course. But what if you want to throw a print in there for debugging purposes? That's gonna screw up the parent if it's expecting output formatted a certain way. In functions you can print one string and return another and everything works just fine. Obscure command-line flags and a crappy terminal-based help system. These are problems I often run into when using OS-level apps. Like the /k flag I mentioned, for holding a cmd window open, whose idea was that? Unix apps don't tend to be much friendlier in this regard. Hopefully you can use Google or StackOverflow to find the answer you need. But if not, you've got a lot of boring reading and frustrating trial and error to do. External factors. This one's kind of fuzzy. But when you leave the relatively sheltered harbor of your own scripts to deal with external processes you find yourself having to deal with the "outside world" to a much greater extent. And that's a scary place. All sorts of things can go wrong. Just to give a random example: the cwd in which a process is run can modify its behavior. 
There are probably other issues, but those are the ones I've written down so far. Any other snags you'd like to add? Any suggestions for dealing with these problems?
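
Several of the snags above (mixed-up output, quoting of paths, getting data back from the child) shrink considerably with one fairly small subprocess pattern: pass the arguments as a list so no quoting gymnastics are needed, capture stdout and stderr separately, and re-print the child's lines with a label. A minimal sketch of that blocking flavor is below; in a gui the same work would hang off QProcess's finished signal instead, as discussed above.

# blocking flavor: list-style args (no manual quoting), captured streams, labeled output
import subprocess

def run_child(args):
    proc = subprocess.Popen(args,
                            stdout=subprocess.PIPE,   # keep child stdout apart from the parent's
                            stderr=subprocess.PIPE)
    out, err = proc.communicate()                     # waits for the child to exit
    for line in out.splitlines():
        print 'child stdout: %s' % line
    for line in err.splitlines():
        print 'child stderr: %s' % line
    return proc.returncode

if __name__ == '__main__':
    code = run_child(['python', '-c', "print 'Hello from the child'"])
    print 'child exited with code %i' % code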

    Read the article

  • Why is Java EE 6 better than Spring?

    - by arungupta
    Java EE 6 was released over 2 years ago and now there are 14 compliant application servers. In all my talks around the world, a question that is frequently asked is Why should I use Java EE 6 instead of Spring ? There are already several blogs covering that topic: Java EE wins over Spring by Bill Burke Why will I use Java EE instead of Spring in new Enterprise Java projects in 2012 ? by Kai Waehner (more discussion on TSS) Spring to Java EE migration (Part 1 and 2, 3 and 4 coming as well) by David Heffelfinger Spring to Java EE - A Migration Experience by Lincoln Baxter Migrating Spring to Java EE 6 by Bert Ertman and Paul Bakker at NLJUG Moving from Spring to Java EE 6 - The Age of Frameworks is Over at TSS Java EE vs Spring Shootout by Rohit Kelapure and Reza Rehman at JavaOne 2011 Java EE 6 and the Ewoks by Murat Yener Definite excuse to avoid Spring forever - Bert Ertman and Arun Gupta I will try to share my perspective in this blog. First of all, I'd like to start with a note: Thank you Spring framework for filling the interim gap and providing functionality that is now included in the mainstream Java EE 6 application servers. The Java EE platform has evolved over the years learning from frameworks like Spring and provides all the functionality to build an enterprise application. Thank you very much Spring framework! While Spring was revolutionary in its time and is still very popular and quite main stream in the same way Struts was circa 2003, it really is last generation's framework - some people are even calling it legacy. However my theory is "code is king". So my approach is to build/take a simple Hello World CRUD application in Java EE 6 and Spring and compare the deployable artifacts. I started looking at the official tutorial Developing a Spring Framework MVC Application Step-by-Step but it is using the older version 2.5. I wasn't able to find any updated version in the current 3.1 release. Next, I downloaded Spring Tool Suite and thought that would provide some template samples to get started. A least a quick search did not show any handy tutorials - either video or text-based. So I searched and found a link to their SVN repository at src.springframework.org/svn/spring-samples/. I tried the "mvc-basic" sample and the generated WAR file was 4.43 MB. While it was named a "basic" sample it seemed to come with 19 different libraries bundled but it was what I could find: ./WEB-INF/lib/aopalliance-1.0.jar./WEB-INF/lib/hibernate-validator-4.1.0.Final.jar./WEB-INF/lib/jcl-over-slf4j-1.6.1.jar./WEB-INF/lib/joda-time-1.6.2.jar./WEB-INF/lib/joda-time-jsptags-1.0.2.jar./WEB-INF/lib/jstl-1.2.jar./WEB-INF/lib/log4j-1.2.16.jar./WEB-INF/lib/slf4j-api-1.6.1.jar./WEB-INF/lib/slf4j-log4j12-1.6.1.jar./WEB-INF/lib/spring-aop-3.0.5.RELEASE.jar./WEB-INF/lib/spring-asm-3.0.5.RELEASE.jar./WEB-INF/lib/spring-beans-3.0.5.RELEASE.jar./WEB-INF/lib/spring-context-3.0.5.RELEASE.jar./WEB-INF/lib/spring-context-support-3.0.5.RELEASE.jar./WEB-INF/lib/spring-core-3.0.5.RELEASE.jar./WEB-INF/lib/spring-expression-3.0.5.RELEASE.jar./WEB-INF/lib/spring-web-3.0.5.RELEASE.jar./WEB-INF/lib/spring-webmvc-3.0.5.RELEASE.jar./WEB-INF/lib/validation-api-1.0.0.GA.jar And it is not even using any database! The app deployed fine on GlassFish 3.1.2 but the "@Controller Example" link did not work as it was missing the context root. With a bit of tweaking I could deploy the application and assume that the account got created because no error was displayed in the browser or server log. 
Next I generated the WAR for "mvc-ajax" and the 5.1 MB WAR had 20 JARs (1 removed, 2 added): ./WEB-INF/lib/aopalliance-1.0.jar./WEB-INF/lib/hibernate-validator-4.1.0.Final.jar./WEB-INF/lib/jackson-core-asl-1.6.4.jar./WEB-INF/lib/jackson-mapper-asl-1.6.4.jar./WEB-INF/lib/jcl-over-slf4j-1.6.1.jar./WEB-INF/lib/joda-time-1.6.2.jar./WEB-INF/lib/jstl-1.2.jar./WEB-INF/lib/log4j-1.2.16.jar./WEB-INF/lib/slf4j-api-1.6.1.jar./WEB-INF/lib/slf4j-log4j12-1.6.1.jar./WEB-INF/lib/spring-aop-3.0.5.RELEASE.jar./WEB-INF/lib/spring-asm-3.0.5.RELEASE.jar./WEB-INF/lib/spring-beans-3.0.5.RELEASE.jar./WEB-INF/lib/spring-context-3.0.5.RELEASE.jar./WEB-INF/lib/spring-context-support-3.0.5.RELEASE.jar./WEB-INF/lib/spring-core-3.0.5.RELEASE.jar./WEB-INF/lib/spring-expression-3.0.5.RELEASE.jar./WEB-INF/lib/spring-web-3.0.5.RELEASE.jar./WEB-INF/lib/spring-webmvc-3.0.5.RELEASE.jar./WEB-INF/lib/validation-api-1.0.0.GA.jar 2 more JARs for just doing Ajax. Anyway, deploying this application gave the following error: Caused by: java.lang.NoSuchMethodError: org.codehaus.jackson.map.SerializationConfig.<init>(Lorg/codehaus/jackson/map/ClassIntrospector;Lorg/codehaus/jackson/map/AnnotationIntrospector;Lorg/codehaus/jackson/map/introspect/VisibilityChecker;Lorg/codehaus/jackson/map/jsontype/SubtypeResolver;)V    at org.springframework.samples.mvc.ajax.json.ConversionServiceAwareObjectMapper.<init>(ConversionServiceAwareObjectMapper.java:20)    at org.springframework.samples.mvc.ajax.json.JacksonConversionServiceConfigurer.postProcessAfterInitialization(JacksonConversionServiceConfigurer.java:40)    at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.applyBeanPostProcessorsAfterInitialization(AbstractAutowireCapableBeanFactory.java:407) Seems like some incorrect repos in the "pom.xml". Next one is "mvc-showcase" and the 6.49 MB WAR now has 28 JARs as shown below: ./WEB-INF/lib/aopalliance-1.0.jar./WEB-INF/lib/aspectjrt-1.6.10.jar./WEB-INF/lib/commons-fileupload-1.2.2.jar./WEB-INF/lib/commons-io-2.0.1.jar./WEB-INF/lib/el-api-2.2.jar./WEB-INF/lib/hibernate-validator-4.1.0.Final.jar./WEB-INF/lib/jackson-core-asl-1.8.1.jar./WEB-INF/lib/jackson-mapper-asl-1.8.1.jar./WEB-INF/lib/javax.inject-1.jar./WEB-INF/lib/jcl-over-slf4j-1.6.1.jar./WEB-INF/lib/jdom-1.0.jar./WEB-INF/lib/joda-time-1.6.2.jar./WEB-INF/lib/jstl-api-1.2.jar./WEB-INF/lib/jstl-impl-1.2.jar./WEB-INF/lib/log4j-1.2.16.jar./WEB-INF/lib/rome-1.0.0.jar./WEB-INF/lib/slf4j-api-1.6.1.jar./WEB-INF/lib/slf4j-log4j12-1.6.1.jar./WEB-INF/lib/spring-aop-3.1.0.RELEASE.jar./WEB-INF/lib/spring-asm-3.1.0.RELEASE.jar./WEB-INF/lib/spring-beans-3.1.0.RELEASE.jar./WEB-INF/lib/spring-context-3.1.0.RELEASE.jar./WEB-INF/lib/spring-context-support-3.1.0.RELEASE.jar./WEB-INF/lib/spring-core-3.1.0.RELEASE.jar./WEB-INF/lib/spring-expression-3.1.0.RELEASE.jar./WEB-INF/lib/spring-web-3.1.0.RELEASE.jar./WEB-INF/lib/spring-webmvc-3.1.0.RELEASE.jar./WEB-INF/lib/validation-api-1.0.0.GA.jar The app at least deployed and showed results this time. But still no database! 
Next I tried building "jpetstore" and got the error: [ERROR] Failed to execute goal on project org.springframework.samples.jpetstore:Could not resolve dependencies for project org.springframework.samples:org.springframework.samples.jpetstore:war:1.0.0-SNAPSHOT: Failed to collect dependencies for [commons-fileupload:commons-fileupload:jar:1.2.1 (compile), org.apache.struts:com.springsource.org.apache.struts:jar:1.2.9 (compile), javax.xml.rpc:com.springsource.javax.xml.rpc:jar:1.1.0 (compile), org.apache.commons:com.springsource.org.apache.commons.dbcp:jar:1.2.2.osgi (compile), commons-io:commons-io:jar:1.3.2 (compile), hsqldb:hsqldb:jar:1.8.0.7 (compile), org.apache.tiles:tiles-core:jar:2.2.0 (compile), org.apache.tiles:tiles-jsp:jar:2.2.0 (compile), org.tuckey:urlrewritefilter:jar:3.1.0 (compile), org.springframework:spring-webmvc:jar:3.0.0.BUILD-SNAPSHOT (compile), org.springframework:spring-orm:jar:3.0.0.BUILD-SNAPSHOT (compile), org.springframework:spring-context-support:jar:3.0.0.BUILD-SNAPSHOT (compile), org.springframework.webflow:spring-js:jar:2.0.7.RELEASE (compile), org.apache.ibatis:com.springsource.com.ibatis:jar:2.3.4.726 (runtime), com.caucho:com.springsource.com.caucho:jar:3.2.1 (compile), org.apache.axis:com.springsource.org.apache.axis:jar:1.4.0 (compile), javax.wsdl:com.springsource.javax.wsdl:jar:1.6.1 (compile), javax.servlet:jstl:jar:1.2 (runtime), org.aspectj:aspectjweaver:jar:1.6.5 (compile), javax.servlet:servlet-api:jar:2.5 (provided), javax.servlet.jsp:jsp-api:jar:2.1 (provided), junit:junit:jar:4.6 (test)]: Failed to read artifact descriptor for org.springframework:spring-webmvc:jar:3.0.0.BUILD-SNAPSHOT: Could not transfer artifact org.springframework:spring-webmvc:pom:3.0.0.BUILD-SNAPSHOT from/to JBoss repository (http://repository.jboss.com/maven2): Access denied to: http://repository.jboss.com/maven2/org/springframework/spring-webmvc/3.0.0.BUILD-SNAPSHOT/spring-webmvc-3.0.0.BUILD-SNAPSHOT.pom It appears the sample is broken - maybe I was pulling from the wrong repository - would be great if someone were to point me at a good target to use here. With a 50% hit on samples in this repository, I started searching through numerous blogs, most of which have either outdated information (using XML-heavy Spring 2.5), some piece of configuration (which is a typical "feature" of Spring) is missing, or too much complexity in the sample. I finally found this blog that worked like a charm. This blog creates a trivial Spring MVC 3 application using Hibernate and MySQL. This application performs CRUD operations on a single table in a database using typical Spring technologies.  I downloaded the sample code from the blog, deployed it on GlassFish 3.1.2 and could CRUD the "person" entity. The source code for this application can be downloaded here. More details on the application statistics below. And then I built a similar CRUD application in Java EE 6 using NetBeans wizards in a couple of minutes. The source code for the application can be downloaded here and the WAR here. The Spring Source Tool Suite may also offer similar wizard-driven capabilities but this blog focus primarily on comparing the runtimes. The lack of STS tutorials was slightly disappointing as well. NetBeans however has tons of text-based and video tutorials and tons of material even by the community. One more bit on the download size of tools bundle ... 
NetBeans 7.1.1 "All" is 211 MB (which includes GlassFish and Tomcat) Spring Tool Suite  2.9.0 is 347 MB (~ 65% bigger) This blog is not about the tooling comparison so back to the Java EE 6 version of the application .... In order to run the Java EE version on GlassFish, copy the MySQL Connector/J to glassfish3/glassfish/domains/domain1/lib/ext directory and create a JDBC connection pool and JDBC resource as: ./bin/asadmin create-jdbc-connection-pool --datasourceclassname \\ com.mysql.jdbc.jdbc2.optional.MysqlDataSource --restype \\ javax.sql.DataSource --property \\ portNumber=3306:user=mysql:password=mysql:databaseName=mydatabase \\ myConnectionPool ./bin/asadmin create-jdbc-resource --connectionpoolid myConnectionPool jdbc/myDataSource I generated WARs for the two projects and the table below highlights some differences between them: Java EE 6 Spring WAR File Size 0.021030 MB 10.87 MB (~516x) Number of files 20 53 (> 2.5x) Bundled libraries 0 36 Total size of libraries 0 12.1 MB XML files 3 5 LoC in XML files 50 (11 + 15 + 24) 129 (27 + 46 + 16 + 11 + 19) (~ 2.5x) Total .properties files 1 Bundle.properties 2 spring.properties, log4j.properties Cold Deploy 5,339 ms 11,724 ms Second Deploy 481 ms 6,261 ms Third Deploy 528 ms 5,484 ms Fourth Deploy 484 ms 5,576 ms Runtime memory ~73 MB ~101 MB Some points worth highlighting from the table ... 516x WAR file, 10x deployment time - With 12.1 MB of libraries (for a very basic application) bundled in your application, the WAR file size and the deployment time will naturally go higher. The WAR file for Spring-based application is 516x bigger and the deployment time is double during the first deployment and ~ 10x during subsequent deployments. The Java EE 6 application is fully portable and will run on any Java EE 6 compliant application server. 36 libraries in the WAR - There are 14 Java EE 6 compliant application servers today. Each of those servers provide all the functionality like transactions, dependency injection, security, persistence, etc typically required of an enterprise or web application. There is no need to bundle 36 libraries worth 12.1 MB for a trivial CRUD application. These 14 compliant application servers provide all the functionality baked in. Now you can also deploy these libraries in the container but then you don't get the "portability" offered by Spring in that case. Does your typical Spring deployment actually do that ? 3x LoC in XML - The number of XML files is about 1.6x and the LoC is ~ 2.5x. So much XML seems circa 2003 when the Java language had no annotations. The XML files can be further reduced, e.g. faces-config.xml can be replaced without providing i18n, but I just want to compare stock applications. Memory usage - Both the applications were deployed on default GlassFish 3.1.2 installation and any additional memory consumed as part of deployment/access was attributed to the application. This is by no means scientific but at least provides an initial ballpark. This area definitely needs more investigation. Another table that compares typical Java EE 6 compliant application servers and the custom-stack created for a Spring application ... Java EE 6 Spring Web Container ? 53 MB (tcServer 2.6.3 Developer Edition) Security ? 12 MB (Spring Security 3.1.0) Persistence ? 6.3 MB (Hibernate 4.1.0, required) Dependency Injection ? 5.3 MB (Framework) Web Services ? 796 KB (Spring WS 2.0.4) Messaging ? 3.4 MB (RabbitMQ Server 2.7.1) 936 KB (Java client 936) OSGi ? 
1.3 MB (Spring OSGi 1.2.1) GlassFish and WebLogic (starting at 33 MB) 83.3 MB There are differentiating factors on both the stacks. But most of the functionality like security, persistence, and dependency injection is baked in a Java EE 6 compliant application server but needs to be individually managed and patched for a Spring application. This very quickly leads to a "stack explosion". The Java EE 6 servers are tested extensively on a variety of platforms in different combinations whereas a Spring application developer is responsible for testing with different JDKs, Operating Systems, Versions, Patches, etc. Oracle has both the leading OSS lightweight server with GlassFish and the leading enterprise Java server with WebLogic Server, both Java EE 6 and both with lightweight deployment options. The Web Container offered as part of a Java EE 6 application server not only deploys your enterprise Java applications but also provide operational management, diagnostics, and mission-critical capabilities required by your applications. The Java EE 6 platform also introduced the Web Profile which is a subset of the specifications from the entire platform. It is targeted at developers of modern web applications offering a reasonably complete stack, composed of standard APIs, and is capable out-of-the-box of addressing the needs of a large class of Web applications. As your applications grow, the stack can grow to the full Java EE 6 platform. The GlassFish Server Web Profile starting at 33MB (smaller than just the non-standard tcServer) provides most of the functionality typically required by a web application. WebLogic provides battle-tested functionality for a high throughput, low latency, and enterprise grade web application. No individual managing or patching, all tested and commercially supported for you! Note that VMWare does have a server, tcServer, but it is non-standard and not even certified to the level of the standard Web Profile most customers expect these days. Customers who choose this risk proprietary lock-in since VMWare does not seem to want to formally certify with either Java EE 6 Enterprise Platform or with Java EE 6 Web Profile but of course it would be great if they were to join the community and help their customers reduce the risk of deploying on VMWare software. Some more points to help you decide choose between Java EE 6 and Spring ... Freedom to choose container - There are 14 Java EE 6 compliant application servers today, with a variety of open source and commercial offerings. A Java EE 6 application can be deployed on any of those containers. So if you deployed your application on GlassFish today and would like to scale up with your demands then you can deploy the same application to WebLogic. And because of the portability of a Java EE 6 application, you can even take it a different vendor altogether. Spring requires a runtime which could be any of these app servers as well. But why use Spring when all the required functionality is already baked into the application server itself ? Spring also has a different definition of portability where they claim to bundle all the libraries in the WAR file and move to any application server. But we saw earlier how bloated that archive could be. The equivalent features in Spring runtime offerings (mainly tcServer) are not all open source, not as mature, and often require manual assembly.  
Vendor choice - The Java EE 6 platform is created through the Java Community Process, where all the big players like Oracle, IBM, RedHat, and Apache are contributing to make the platform successful. Each application server provides the basic Java EE 6 platform compliance and has its own competitive offerings. This allows you to choose an application server for deploying your Java EE 6 applications. If you are not happy with the support or features of one vendor, you can move your application to a different vendor because of the portability promise offered by the platform. Spring is a set of products from a single company: one price book, one support organization, one sustaining organization, one sales organization, etc. If any of those causes a customer headache, where do you go? Java EE, backed by multiple vendors, is a safer bet for those that are risk averse.

Production support - With Spring, you typically need to get support from two vendors - VMWare and the container provider. With Java EE 6, all of this is typically provided by one vendor. For example, Oracle offers commercial support for systems, operating systems, the JDK, the application server, and the applications on top of them. VMWare certainly offers complete production support, but do you really want to put all your eggs in one basket? Do you really use tcServer? ;-)

Maintainability - With Spring, you are likely building your own distribution from multiple JAR files: integrating, patching, versioning, etc., of all those components. Spring's claim is that multiple JAR files allow you to go à la carte and pick the latest versions of different components. But who is responsible for testing whether all these versions work together? Yep, you got it, it's YOU! If something does not work, who patches and maintains the JARs? Of course, you! Commercial support for such a configuration? On your own! The Java EE application servers manage all of this for you and provide a well-tested and commercially supported bundle.

While it is always good to realize that there is something new and improved that updates and replaces older frameworks like Spring, the good news is that not only does a Java EE 6 container offer what is described here, most will also let you deploy and run your Spring applications on them while you go through an upgrade to a more modern architecture. End result: you get the best of both worlds - keeping your legacy investment but moving to the more agile, lightweight world of Java EE 6.

A message to the Spring lovers ... The complexity in J2EE 1.2, 1.3, and 1.4 led to the genesis of Spring, but that was in 2004. This is 2012 and the name has changed to "Java EE 6" :-) There are tons of improvements in the Java EE platform to make it easy to use and powerful. Some examples (a minimal code sketch follows this list):

- Adding @Stateless on a POJO makes it an EJB
- EJBs can be packaged in a WAR with no special packaging or deployment descriptors
- "web.xml" and "faces-config.xml" are optional in most of the common cases
- Typesafe dependency injection is now part of the Java EE platform
- Adding @Path on a POJO allows you to publish it as a RESTful resource
- EJBs can be used as backing beans for Facelets-driven JSF pages, providing full MVC
- Java EE 6 WARs are known to be kilobytes in size and deployed in milliseconds
- Tons of other simplifications in the platform and application servers

So if you moved away from J2EE to Spring many years ago and have not looked at Java EE 6 (which has been out since Dec 2009), you should definitely try it out.
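To make those bullets concrete, here is a minimal, illustrative Java EE 6 sketch. It is not code from either of the projects compared above: the names RestConfig, Person, and PersonResource are invented for this example, and a persistence.xml defining the persistence unit is assumed to exist.

package example;

import java.util.List;

import javax.ejb.Stateless;
import javax.persistence.Entity;
import javax.persistence.EntityManager;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;
import javax.persistence.PersistenceContext;
import javax.ws.rs.ApplicationPath;
import javax.ws.rs.Consumes;
import javax.ws.rs.GET;
import javax.ws.rs.POST;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.core.Application;
import javax.xml.bind.annotation.XmlRootElement;

// Activates JAX-RS under /resources/* -- no web.xml entry is needed for this.
@ApplicationPath("resources")
class RestConfig extends Application { }

// A minimal entity, assumed to be listed in a persistence unit (persistence.xml).
// @XmlRootElement lets JAX-RS serialize it to XML.
@Entity
@XmlRootElement
class Person {
    @Id
    @GeneratedValue
    public Long id;
    public String name;
}

// One annotated POJO is simultaneously an EJB (pooling, container-managed
// transactions) and a RESTful resource -- no deployment descriptors, no bundled JARs.
@Stateless
@Path("persons")
public class PersonResource {

    // Injected by the container; the persistence unit would point at a data source
    // such as the jdbc/myDataSource resource created with asadmin earlier.
    @PersistenceContext
    private EntityManager em;

    @GET
    @Produces("application/xml")
    public List<Person> list() {
        // A JTA transaction is started and committed around this method automatically.
        return em.createQuery("SELECT p FROM Person p", Person.class).getResultList();
    }

    @POST
    @Consumes("application/xml")
    public void create(Person p) {
        em.persist(p);
    }
}

Dropped into a plain WAR on a Java EE 6 container such as GlassFish 3.1.2, a class like this needs no web.xml, no faces-config.xml, and nothing in WEB-INF/lib, which is exactly what drives the WAR-size and deployment-time differences in the tables above.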
Just be at least aware of what other alternatives are available instead of restricting yourself to one stack. Here are some workshops and screencasts worth trying: screencast #37 shows how to build an end-to-end application using NetBeans screencast #36 builds the same application using Eclipse javaee-lab-feb2012.pdf is a 3-4 hours self-paced hands-on workshop that guides you to build a comprehensive Java EE 6 application using NetBeans Each city generally has a "spring cleanup" program every year. It allows you to clean up the mess from your house. For your software projects, you don't need to wait for an annual event, just get started and reduce the technical debt now! Move away from your legacy Spring-based applications to a lighter and more modern approach of building enterprise Java applications using Java EE 6. Watch this beautiful presentation that explains how to migrate from Spring -> Java EE 6: List of files in the Java EE 6 project: ./index.xhtml./META-INF./person./person/Create.xhtml./person/Edit.xhtml./person/List.xhtml./person/View.xhtml./resources./resources/css./resources/css/jsfcrud.css./template.xhtml./WEB-INF./WEB-INF/classes./WEB-INF/classes/Bundle.properties./WEB-INF/classes/META-INF./WEB-INF/classes/META-INF/persistence.xml./WEB-INF/classes/org./WEB-INF/classes/org/javaee./WEB-INF/classes/org/javaee/javaeemysql./WEB-INF/classes/org/javaee/javaeemysql/AbstractFacade.class./WEB-INF/classes/org/javaee/javaeemysql/Person.class./WEB-INF/classes/org/javaee/javaeemysql/Person_.class./WEB-INF/classes/org/javaee/javaeemysql/PersonController$1.class./WEB-INF/classes/org/javaee/javaeemysql/PersonController$PersonControllerConverter.class./WEB-INF/classes/org/javaee/javaeemysql/PersonController.class./WEB-INF/classes/org/javaee/javaeemysql/PersonFacade.class./WEB-INF/classes/org/javaee/javaeemysql/util./WEB-INF/classes/org/javaee/javaeemysql/util/JsfUtil.class./WEB-INF/classes/org/javaee/javaeemysql/util/PaginationHelper.class./WEB-INF/faces-config.xml./WEB-INF/web.xml List of files in the Spring 3.x project: ./META-INF ./META-INF/MANIFEST.MF./WEB-INF./WEB-INF/applicationContext.xml./WEB-INF/classes./WEB-INF/classes/log4j.properties./WEB-INF/classes/org./WEB-INF/classes/org/krams ./WEB-INF/classes/org/krams/tutorial ./WEB-INF/classes/org/krams/tutorial/controller ./WEB-INF/classes/org/krams/tutorial/controller/MainController.class ./WEB-INF/classes/org/krams/tutorial/domain ./WEB-INF/classes/org/krams/tutorial/domain/Person.class ./WEB-INF/classes/org/krams/tutorial/service ./WEB-INF/classes/org/krams/tutorial/service/PersonService.class ./WEB-INF/hibernate-context.xml ./WEB-INF/hibernate.cfg.xml ./WEB-INF/jsp ./WEB-INF/jsp/addedpage.jsp ./WEB-INF/jsp/addpage.jsp ./WEB-INF/jsp/deletedpage.jsp ./WEB-INF/jsp/editedpage.jsp ./WEB-INF/jsp/editpage.jsp ./WEB-INF/jsp/personspage.jsp ./WEB-INF/lib ./WEB-INF/lib/antlr-2.7.6.jar ./WEB-INF/lib/aopalliance-1.0.jar ./WEB-INF/lib/c3p0-0.9.1.2.jar ./WEB-INF/lib/cglib-nodep-2.2.jar ./WEB-INF/lib/commons-beanutils-1.8.3.jar ./WEB-INF/lib/commons-collections-3.2.1.jar ./WEB-INF/lib/commons-digester-2.1.jar ./WEB-INF/lib/commons-logging-1.1.1.jar ./WEB-INF/lib/dom4j-1.6.1.jar ./WEB-INF/lib/ejb3-persistence-1.0.2.GA.jar ./WEB-INF/lib/hibernate-annotations-3.4.0.GA.jar ./WEB-INF/lib/hibernate-commons-annotations-3.1.0.GA.jar ./WEB-INF/lib/hibernate-core-3.3.2.GA.jar ./WEB-INF/lib/javassist-3.7.ga.jar ./WEB-INF/lib/jstl-1.1.2.jar ./WEB-INF/lib/jta-1.1.jar ./WEB-INF/lib/junit-4.8.1.jar ./WEB-INF/lib/log4j-1.2.14.jar 
./WEB-INF/lib/mysql-connector-java-5.1.14.jar ./WEB-INF/lib/persistence-api-1.0.jar ./WEB-INF/lib/slf4j-api-1.6.1.jar ./WEB-INF/lib/slf4j-log4j12-1.6.1.jar ./WEB-INF/lib/spring-aop-3.0.5.RELEASE.jar ./WEB-INF/lib/spring-asm-3.0.5.RELEASE.jar ./WEB-INF/lib/spring-beans-3.0.5.RELEASE.jar ./WEB-INF/lib/spring-context-3.0.5.RELEASE.jar ./WEB-INF/lib/spring-context-support-3.0.5.RELEASE.jar ./WEB-INF/lib/spring-core-3.0.5.RELEASE.jar ./WEB-INF/lib/spring-expression-3.0.5.RELEASE.jar ./WEB-INF/lib/spring-jdbc-3.0.5.RELEASE.jar ./WEB-INF/lib/spring-orm-3.0.5.RELEASE.jar ./WEB-INF/lib/spring-tx-3.0.5.RELEASE.jar ./WEB-INF/lib/spring-web-3.0.5.RELEASE.jar ./WEB-INF/lib/spring-webmvc-3.0.5.RELEASE.jar ./WEB-INF/lib/standard-1.1.2.jar ./WEB-INF/lib/xml-apis-1.0.b2.jar ./WEB-INF/spring-servlet.xml ./WEB-INF/spring.properties ./WEB-INF/web.xml So, are you excited about Java EE 6 ? Want to get started now ? Here are some resources: Java EE 6 SDK (including runtime, samples, tutorials etc) GlassFish Server Open Source Edition 3.1.2 (Community) Oracle GlassFish Server 3.1.2 (Commercial) Java EE 6 using WebLogic 12c and NetBeans (Video) Java EE 6 with NetBeans and GlassFish (Video) Java EE with Eclipse and GlassFish (Video)

    Read the article

  • Another "Windows 7 entry missing from Grub2" Question

    - by 4x10
    Like many before me, I had the following problem: after installing Ubuntu (with Windows 7 already installed), the GRUB boot loader wouldn't show Windows 7 as a boot option, though I can boot it fine if I use the "Choose Boot Device" options on the x220. The difference is that I am trying to use UEFI only, so many answers didn't really fit my problem, though I tried several things: running Boot Repair (which destroyed the Ubuntu boot loader); a custom entry in /etc/grub.d/40_custom for Windows, which doesn't show up; many update-grub runs and reboots; trying the Windows repair/recovery thing, and while I was there I also did bootrec.exe /FixBoot, then update-grub and a reboot again; and finally, because it was so much fun, I installed Linux all over again, formatting and deleting everything Linux-related before that. Now that I think of it, Ubuntu also didn't notice Windows being there during the setup, and it still doesn't according to the Boot Info from Boot Repair. Boot Info Script 0.61-git-patched [23 April 2012] ============================= Boot Info Summary: =============================== => No boot loader is installed in the MBR of /dev/sda. sda1: __________________________________________________________________________ File system: vfat Boot sector type: Windows 7: FAT32 Boot sector info: No errors found in the Boot Parameter Block. Operating System: Boot files: /efi/Boot/bootx64.efi /efi/ubuntu/grubx64.efi sda2: __________________________________________________________________________ File system: Boot sector type: - Boot sector info: Mounting failed: mount: unknown filesystem type '' sda3: __________________________________________________________________________ File system: ntfs Boot sector type: Windows Vista/7: NTFS Boot sector info: No errors found in the Boot Parameter Block. Operating System: Windows 7 Boot files: /Windows/System32/winload.exe sda4: __________________________________________________________________________ File system: ext4 Boot sector type: - Boot sector info: Operating System: Ubuntu precise (development branch) Boot files: /boot/grub/grub.cfg /etc/fstab sda5: __________________________________________________________________________ File system: ext4 Boot sector type: - Boot sector info: Operating System: Boot files: sda6: __________________________________________________________________________ File system: swap Boot sector type: - Boot sector info: ============================ Drive/Partition Info: ============================= Drive: sda _____________________________________________________________________ Disk /dev/sda: 320.1 GB, 320072933376 bytes 255 heads, 63 sectors/track, 38913 cylinders, total 625142448 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes Partition Boot Start Sector End Sector # of Sectors Id System /dev/sda1 1 625,142,447 625,142,447 ee GPT GUID Partition Table detected. 
Partition Start Sector End Sector # of Sectors System /dev/sda1 2,048 206,847 204,800 EFI System partition /dev/sda2 206,848 468,991 262,144 Microsoft Reserved Partition (Windows) /dev/sda3 468,992 170,338,303 169,869,312 Data partition (Windows/Linux) /dev/sda4 170,338,304 330,338,304 160,000,001 Data partition (Windows/Linux) /dev/sda5 330,338,305 617,141,039 286,802,735 Data partition (Windows/Linux) /dev/sda6 617,141,040 625,141,040 8,000,001 Swap partition (Linux) "blkid" output: ________________________________________________________________ Device UUID TYPE LABEL /dev/sda1 885C-ED1B vfat /dev/sda3 EE06CC0506CBCCB1 ntfs /dev/sda4 604dd3b2-64ca-4200-b8fb-820e8d0ca899 ext4 /dev/sda5 d62515fd-8120-4a74-b17b-0bdf244124a3 ext4 /dev/sda6 7078b649-fb2a-4c59-bd03-fd31ef440d37 swap ================================ Mount points: ================================= Device Mount_Point Type Options /dev/sda1 /boot/efi vfat (rw) /dev/sda4 / ext4 (rw,errors=remount-ro) /dev/sda5 /home ext4 (rw) =========================== sda4/boot/grub/grub.cfg: =========================== -------------------------------------------------------------------------------- # # DO NOT EDIT THIS FILE # # It is automatically generated by grub-mkconfig using templates # from /etc/grub.d and settings from /etc/default/grub # ### BEGIN /etc/grub.d/00_header ### if [ -s $prefix/grubenv ]; then set have_grubenv=true load_env fi set default="0" if [ "${prev_saved_entry}" ]; then set saved_entry="${prev_saved_entry}" save_env saved_entry set prev_saved_entry= save_env prev_saved_entry set boot_once=true fi function savedefault { if [ -z "${boot_once}" ]; then saved_entry="${chosen}" save_env saved_entry fi } function recordfail { set recordfail=1 if [ -n "${have_grubenv}" ]; then if [ -z "${boot_once}" ]; then save_env recordfail; fi; fi } function load_video { insmod efi_gop insmod efi_uga insmod video_bochs insmod video_cirrus } insmod part_gpt insmod ext2 set root='(hd0,gpt4)' search --no-floppy --fs-uuid --set=root 604dd3b2-64ca-4200-b8fb-820e8d0ca899 if loadfont /usr/share/grub/unicode.pf2 ; then set gfxmode=auto load_video insmod gfxterm insmod part_gpt insmod ext2 set root='(hd0,gpt4)' search --no-floppy --fs-uuid --set=root 604dd3b2-64ca-4200-b8fb-820e8d0ca899 set locale_dir=($root)/boot/grub/locale set lang=en_US insmod gettext fi terminal_output gfxterm if [ "${recordfail}" = 1 ]; then set timeout=-1 else set timeout=10 fi ### END /etc/grub.d/00_header ### ### BEGIN /etc/grub.d/05_debian_theme ### set menu_color_normal=white/black set menu_color_highlight=black/light-gray if background_color 44,0,30; then clear fi ### END /etc/grub.d/05_debian_theme ### ### BEGIN /etc/grub.d/10_linux ### function gfxmode { set gfxpayload="$1" if [ "$1" = "keep" ]; then set vt_handoff=vt.handoff=7 else set vt_handoff= fi } if [ ${recordfail} != 1 ]; then if [ -e ${prefix}/gfxblacklist.txt ]; then if hwmatch ${prefix}/gfxblacklist.txt 3; then if [ ${match} = 0 ]; then set linux_gfx_mode=keep else set linux_gfx_mode=text fi else set linux_gfx_mode=text fi else set linux_gfx_mode=keep fi else set linux_gfx_mode=text fi export linux_gfx_mode if [ "$linux_gfx_mode" != "text" ]; then load_video; fi menuentry 'Ubuntu, with Linux 3.2.0-20-generic' --class ubuntu --class gnu-linux --class gnu --class os { recordfail gfxmode $linux_gfx_mode insmod gzio insmod part_gpt insmod ext2 set root='(hd0,gpt4)' search --no-floppy --fs-uuid --set=root 604dd3b2-64ca-4200-b8fb-820e8d0ca899 linux /boot/vmlinuz-3.2.0-20-generic 
root=UUID=604dd3b2-64ca-4200-b8fb-820e8d0ca899 ro quiet splash $vt_handoff initrd /boot/initrd.img-3.2.0-20-generic } menuentry 'Ubuntu, with Linux 3.2.0-20-generic (recovery mode)' --class ubuntu --class gnu-linux --class gnu --class os { recordfail insmod gzio insmod part_gpt insmod ext2 set root='(hd0,gpt4)' search --no-floppy --fs-uuid --set=root 604dd3b2-64ca-4200-b8fb-820e8d0ca899 echo 'Loading Linux 3.2.0-20-generic ...' linux /boot/vmlinuz-3.2.0-20-generic root=UUID=604dd3b2-64ca-4200-b8fb-820e8d0ca899 ro recovery nomodeset echo 'Loading initial ramdisk ...' initrd /boot/initrd.img-3.2.0-20-generic } ### END /etc/grub.d/10_linux ### ### BEGIN /etc/grub.d/20_linux_xen ### ### END /etc/grub.d/20_linux_xen ### ### BEGIN /etc/grub.d/20_memtest86+ ### menuentry "Memory test (memtest86+)" { insmod part_gpt insmod ext2 set root='(hd0,gpt4)' search --no-floppy --fs-uuid --set=root 604dd3b2-64ca-4200-b8fb-820e8d0ca899 linux16 /boot/memtest86+.bin } menuentry "Memory test (memtest86+, serial console 115200)" { insmod part_gpt insmod ext2 set root='(hd0,gpt4)' search --no-floppy --fs-uuid --set=root 604dd3b2-64ca-4200-b8fb-820e8d0ca899 linux16 /boot/memtest86+.bin console=ttyS0,115200n8 } ### END /etc/grub.d/20_memtest86+ ### ### BEGIN /etc/grub.d/30_os-prober ### ### END /etc/grub.d/30_os-prober ### ### BEGIN /etc/grub.d/40_custom ### # This file provides an easy way to add custom menu entries. Simply type the # menu entries you want to add after this comment. Be careful not to change # the 'exec tail' line above. ### END /etc/grub.d/40_custom ### ### BEGIN /etc/grub.d/41_custom ### if [ -f $prefix/custom.cfg ]; then source $prefix/custom.cfg; fi ### END /etc/grub.d/41_custom ### -------------------------------------------------------------------------------- =============================== sda4/etc/fstab: ================================ -------------------------------------------------------------------------------- # /etc/fstab: static file system information. # # Use 'blkid' to print the universally unique identifier for a # device; this may be used with UUID= as a more robust way to name devices # that works even if disks are added and removed. See fstab(5). # # <file system> <mount point> <type> <options> <dump> <pass> proc /proc proc nodev,noexec,nosuid 0 0 # / was on /dev/sda4 during installation UUID=604dd3b2-64ca-4200-b8fb-820e8d0ca899 / ext4 errors=remount-ro 0 1 # /boot/efi was on /dev/sda1 during installation UUID=885C-ED1B /boot/efi vfat defaults 0 1 # /home was on /dev/sda5 during installation UUID=d62515fd-8120-4a74-b17b-0bdf244124a3 /home ext4 defaults 0 2 # swap was on /dev/sda6 during installation UUID=7078b649-fb2a-4c59-bd03-fd31ef440d37 none swap sw 0 0 -------------------------------------------------------------------------------- =================== sda4: Location of files loaded by Grub: ==================== GiB - GB File Fragment(s) 129.422874451 = 138.966753280 boot/grub/grub.cfg 1 83.059570312 = 89.184534528 boot/initrd.img-3.2.0-20-generic 2 101.393131256 = 108.870045696 boot/vmlinuz-3.2.0-20-generic 1 83.059570312 = 89.184534528 initrd.img 2 101.393131256 = 108.870045696 vmlinuz 1 ADDITIONAL INFORMATION : =================== log of boot-repair 2012-04-25__23h40 =================== boot-repair version : 3.18-0ppa3~precise boot-sav version : 3.18-0ppa4~precise glade2script version : 0.3.2.1-0ppa7~precise internet: connected python-software-properties version : 0.82.7 0 upgraded, 0 newly installed, 1 reinstalled, 0 to remove and 591 not upgraded. 
dpkg-preconfigure: unable to re-open stdin: No such file or directory boot-repair is executed in installed-session (Ubuntu precise (development branch) , precise , Ubuntu , x86_64) WARNING: GPT (GUID Partition Table) detected on '/dev/sda'! The util fdisk doesn't support GPT. Use GNU Parted. =================== OSPROBER: /dev/sda4:The OS now in use - Ubuntu precise (development branch) CurrentSession:linux =================== BLKID: /dev/sda3: UUID="EE06CC0506CBCCB1" TYPE="ntfs" /dev/sda1: UUID="885C-ED1B" TYPE="vfat" /dev/sda4: UUID="604dd3b2-64ca-4200-b8fb-820e8d0ca899" TYPE="ext4" /dev/sda5: UUID="d62515fd-8120-4a74-b17b-0bdf244124a3" TYPE="ext4" /dev/sda6: UUID="7078b649-fb2a-4c59-bd03-fd31ef440d37" TYPE="swap" 1 disks with OS, 1 OS : 1 Linux, 0 MacOS, 0 Windows, 0 unknown type OS. WARNING: GPT (GUID Partition Table) detected on '/dev/sda'! The util sfdisk doesn't support GPT. Use GNU Parted. =================== /etc/default/grub : # If you change this file, run 'update-grub' afterwards to update # /boot/grub/grub.cfg. # For full documentation of the options in this file, see: # info -f grub -n 'Simple configuration' GRUB_DEFAULT=0 #GRUB_HIDDEN_TIMEOUT=0 #GRUB_HIDDEN_TIMEOUT_QUIET=true GRUB_TIMEOUT=10 GRUB_DISTRIBUTOR=`lsb_release -i -s 2> /dev/null || echo Debian` GRUB_CMDLINE_LINUX_DEFAULT="quiet splash" GRUB_CMDLINE_LINUX="" # Uncomment to enable BadRAM filtering, modify to suit your needs # This works with Linux (no patch required) and with any kernel that obtains # the memory map information from GRUB (GNU Mach, kernel of FreeBSD ...) #GRUB_BADRAM="0x01234567,0xfefefefe,0x89abcdef,0xefefefef" # Uncomment to disable graphical terminal (grub-pc only) #GRUB_TERMINAL=console # The resolution used on graphical terminal # note that you can use only modes which your graphic card supports via VBE # you can see them in real GRUB with the command `vbeinfo' #GRUB_GFXMODE=640x480 # Uncomment if you don't want GRUB to pass "root=UUID=xxx" parameter to Linux #GRUB_DISABLE_LINUX_UUID=true # Uncomment to disable generation of recovery mode menu entries #GRUB_DISABLE_RECOVERY="true" # Uncomment to get a beep at grub start #GRUB_INIT_TUNE="480 440 1" EFI_OF_PART[1] (, ) =================== dmesg | grep EFI : [ 0.000000] EFI v2.00 by Lenovo [ 0.000000] Kernel-defined memdesc doesn't match the one from EFI! 
[ 0.000000] EFI: mem00: type=3, attr=0xf, range=[0x0000000000000000-0x0000000000001000) (0MB) [ 0.000000] EFI: mem01: type=7, attr=0xf, range=[0x0000000000001000-0x000000000004e000) (0MB) [ 0.000000] EFI: mem02: type=3, attr=0xf, range=[0x000000000004e000-0x0000000000058000) (0MB) [ 0.000000] EFI: mem03: type=10, attr=0xf, range=[0x0000000000058000-0x0000000000059000) (0MB) [ 0.000000] EFI: mem04: type=7, attr=0xf, range=[0x0000000000059000-0x000000000005e000) (0MB) [ 0.000000] EFI: mem05: type=4, attr=0xf, range=[0x000000000005e000-0x000000000005f000) (0MB) [ 0.000000] EFI: mem06: type=3, attr=0xf, range=[0x000000000005f000-0x00000000000a0000) (0MB) [ 0.000000] EFI: mem07: type=2, attr=0xf, range=[0x0000000000100000-0x00000000005b9000) (4MB) [ 0.000000] EFI: mem08: type=7, attr=0xf, range=[0x00000000005b9000-0x0000000020000000) (506MB) [ 0.000000] EFI: mem09: type=0, attr=0xf, range=[0x0000000020000000-0x0000000020200000) (2MB) [ 0.000000] EFI: mem10: type=7, attr=0xf, range=[0x0000000020200000-0x00000000364e4000) (354MB) [ 0.000000] EFI: mem11: type=2, attr=0xf, range=[0x00000000364e4000-0x000000003726a000) (13MB) [ 0.000000] EFI: mem12: type=7, attr=0xf, range=[0x000000003726a000-0x0000000040000000) (141MB) [ 0.000000] EFI: mem13: type=0, attr=0xf, range=[0x0000000040000000-0x0000000040200000) (2MB) [ 0.000000] EFI: mem14: type=7, attr=0xf, range=[0x0000000040200000-0x000000009df35000) (1501MB) [ 0.000000] EFI: mem15: type=2, attr=0xf, range=[0x000000009df35000-0x00000000d39a0000) (858MB) [ 0.000000] EFI: mem16: type=4, attr=0xf, range=[0x00000000d39a0000-0x00000000d39c0000) (0MB) [ 0.000000] EFI: mem17: type=7, attr=0xf, range=[0x00000000d39c0000-0x00000000d5df5000) (36MB) [ 0.000000] EFI: mem18: type=4, attr=0xf, range=[0x00000000d5df5000-0x00000000d6990000) (11MB) [ 0.000000] EFI: mem19: type=7, attr=0xf, range=[0x00000000d6990000-0x00000000d6b82000) (1MB) [ 0.000000] EFI: mem20: type=1, attr=0xf, range=[0x00000000d6b82000-0x00000000d6b9f000) (0MB) [ 0.000000] EFI: mem21: type=7, attr=0xf, range=[0x00000000d6b9f000-0x00000000d77b0000) (12MB) [ 0.000000] EFI: mem22: type=4, attr=0xf, range=[0x00000000d77b0000-0x00000000d780a000) (0MB) [ 0.000000] EFI: mem23: type=7, attr=0xf, range=[0x00000000d780a000-0x00000000d7826000) (0MB) [ 0.000000] EFI: mem24: type=4, attr=0xf, range=[0x00000000d7826000-0x00000000d7868000) (0MB) [ 0.000000] EFI: mem25: type=7, attr=0xf, range=[0x00000000d7868000-0x00000000d7869000) (0MB) [ 0.000000] EFI: mem26: type=4, attr=0xf, range=[0x00000000d7869000-0x00000000d786a000) (0MB) [ 0.000000] EFI: mem27: type=7, attr=0xf, range=[0x00000000d786a000-0x00000000d786b000) (0MB) [ 0.000000] EFI: mem28: type=4, attr=0xf, range=[0x00000000d786b000-0x00000000d786c000) (0MB) [ 0.000000] EFI: mem29: type=7, attr=0xf, range=[0x00000000d786c000-0x00000000d786d000) (0MB) [ 0.000000] EFI: mem30: type=4, attr=0xf, range=[0x00000000d786d000-0x00000000d825f000) (9MB) [ 0.000000] EFI: mem31: type=7, attr=0xf, range=[0x00000000d825f000-0x00000000d8261000) (0MB) [ 0.000000] EFI: mem32: type=4, attr=0xf, range=[0x00000000d8261000-0x00000000d82f7000) (0MB) [ 0.000000] EFI: mem33: type=7, attr=0xf, range=[0x00000000d82f7000-0x00000000d82f8000) (0MB) [ 0.000000] EFI: mem34: type=4, attr=0xf, range=[0x00000000d82f8000-0x00000000d8705000) (4MB) [ 0.000000] EFI: mem35: type=7, attr=0xf, range=[0x00000000d8705000-0x00000000d8706000) (0MB) [ 0.000000] EFI: mem36: type=4, attr=0xf, range=[0x00000000d8706000-0x00000000d8761000) (0MB) [ 0.000000] EFI: mem37: type=7, attr=0xf, 
range=[0x00000000d8761000-0x00000000d8768000) (0MB) [ 0.000000] EFI: mem38: type=4, attr=0xf, range=[0x00000000d8768000-0x00000000d9b9f000) (20MB) [ 0.000000] EFI: mem39: type=7, attr=0xf, range=[0x00000000d9b9f000-0x00000000d9e4c000) (2MB) [ 0.000000] EFI: mem40: type=2, attr=0xf, range=[0x00000000d9e4c000-0x00000000d9e52000) (0MB) [ 0.000000] EFI: mem41: type=3, attr=0xf, range=[0x00000000d9e52000-0x00000000da59f000) (7MB) [ 0.000000] EFI: mem42: type=5, attr=0x800000000000000f, range=[0x00000000da59f000-0x00000000da6c3000) (1MB) [ 0.000000] EFI: mem43: type=5, attr=0x800000000000000f, range=[0x00000000da6c3000-0x00000000da79f000) (0MB) [ 0.000000] EFI: mem44: type=6, attr=0x800000000000000f, range=[0x00000000da79f000-0x00000000da8b1000) (1MB) [ 0.000000] EFI: mem45: type=6, attr=0x800000000000000f, range=[0x00000000da8b1000-0x00000000da99f000) (0MB) [ 0.000000] EFI: mem46: type=0, attr=0xf, range=[0x00000000da99f000-0x00000000daa22000) (0MB) [ 0.000000] EFI: mem47: type=0, attr=0xf, range=[0x00000000daa22000-0x00000000daa9b000) (0MB) [ 0.000000] EFI: mem48: type=0, attr=0xf, range=[0x00000000daa9b000-0x00000000daa9c000) (0MB) [ 0.000000] EFI: mem49: type=0, attr=0xf, range=[0x00000000daa9c000-0x00000000daa9f000) (0MB) [ 0.000000] EFI: mem50: type=10, attr=0xf, range=[0x00000000daa9f000-0x00000000daadd000) (0MB) [ 0.000000] EFI: mem51: type=10, attr=0xf, range=[0x00000000daadd000-0x00000000dab9f000) (0MB) [ 0.000000] EFI: mem52: type=9, attr=0xf, range=[0x00000000dab9f000-0x00000000dabdc000) (0MB) [ 0.000000] EFI: mem53: type=9, attr=0xf, range=[0x00000000dabdc000-0x00000000dabff000) (0MB) [ 0.000000] EFI: mem54: type=4, attr=0xf, range=[0x00000000dabff000-0x00000000dac00000) (0MB) [ 0.000000] EFI: mem55: type=7, attr=0xf, range=[0x0000000100000000-0x000000021e600000) (4582MB) [ 0.000000] EFI: mem56: type=11, attr=0x8000000000000001, range=[0x00000000f80f8000-0x00000000f80f9000) (0MB) [ 0.000000] EFI: mem57: type=11, attr=0x8000000000000001, range=[0x00000000fed1c000-0x00000000fed20000) (0MB) [ 0.000000] ACPI: UEFI 00000000dabde000 0003E (v01 LENOVO TP-8D 00001280 PTL 00000002) [ 0.000000] ACPI: UEFI 00000000dabdd000 00042 (v01 PTL COMBUF 00000001 PTL 00000001) [ 0.000000] ACPI: UEFI 00000000dabdc000 00292 (v01 LENOVO TP-8D 00001280 PTL 00000002) [ 0.795807] fb0: EFI VGA frame buffer device [ 1.057243] EFI Variables Facility v0.08 2004-May-17 [ 9.122104] fb: conflicting fb hw usage inteldrmfb vs EFI VGA - removing generic driver ReadEFI: /dev/sda , N 128 , 0 , , PRStart 1024 , PRSize 128 WARNING: GPT (GUID Partition Table) detected on '/dev/sda'! The util fdisk doesn't support GPT. Use GNU Parted. =================== PARTITIONS & DISKS: sda4 : sda, not-sepboot, grubenv-ok grub2, grub-efi, update-grub, 64, with-boot, is-os, gpt-but-not-EFI, fstab-has-bad-efi, no-nt, no-winload, no-recov-nor-hid, no-bmgr, no-grldr, no-b-bcd, apt-get, grub-install, . sda3 : sda, maybesepboot, no-grubenv nogrub, no-docgrub, no-update-grub, 32, no-boot, no-os, gpt-but-not-EFI, part-has-no-fstab, no-nt, haswinload, no-recov-nor-hid, no-bmgr, no-grldr, no-b-bcd, nopakmgr, nogrubinstall, /mnt/boot-sav/sda3. sda1 : sda, maybesepboot, no-grubenv nogrub, no-docgrub, no-update-grub, 32, no-boot, no-os, is-correct-EFI, part-has-no-fstab, no-nt, no-winload, no-recov-nor-hid, no-bmgr, no-grldr, no-b-bcd, nopakmgr, nogrubinstall, /boot/efi. 
sda5 : sda, maybesepboot, no-grubenv nogrub, no-docgrub, no-update-grub, 32, no-boot, no-os, gpt-but-not-EFI, part-has-no-fstab, no-nt, no-winload, no-recov-nor-hid, no-bmgr, no-grldr, no-b-bcd, nopakmgr, nogrubinstall, /home. sda : GPT-BIS, GPT, no-BIOS_boot, has-correctEFI, 2048 sectors * 512 bytes =================== PARTED: Model: ATA HITACHI HTS72323 (scsi) Disk /dev/sda: 320GB Sector size (logical/physical): 512B/512B Partition Table: gpt Number Start End Size File system Name Flags 1 1049kB 106MB 105MB fat32 EFI system partition boot 2 106MB 240MB 134MB Microsoft reserved partition msftres 3 240MB 87.2GB 87.0GB ntfs Basic data partition 4 87.2GB 169GB 81.9GB ext4 5 169GB 316GB 147GB ext4 6 316GB 320GB 4096MB linux-swap(v1) =================== MOUNT: /dev/sda4 on / type ext4 (rw,errors=remount-ro) proc on /proc type proc (rw,noexec,nosuid,nodev) sysfs on /sys type sysfs (rw,noexec,nosuid,nodev) none on /sys/fs/fuse/connections type fusectl (rw) none on /sys/kernel/debug type debugfs (rw) none on /sys/kernel/security type securityfs (rw) udev on /dev type devtmpfs (rw,mode=0755) devpts on /dev/pts type devpts (rw,noexec,nosuid,gid=5,mode=0620) tmpfs on /run type tmpfs (rw,noexec,nosuid,size=10%,mode=0755) none on /run/lock type tmpfs (rw,noexec,nosuid,nodev,size=5242880) none on /run/shm type tmpfs (rw,nosuid,nodev) /dev/sda1 on /boot/efi type vfat (rw) /dev/sda5 on /home type ext4 (rw) gvfs-fuse-daemon on /home/vierlex/.gvfs type fuse.gvfs-fuse-daemon (rw,nosuid,nodev,user=vierlex) /dev/sda3 on /mnt/boot-sav/sda3 type fuseblk (rw,nosuid,nodev,allow_other,blksize=4096) /sys/block/sda: alignment_offset bdi capability dev device discard_alignment events events_async events_poll_msecs ext_range holders inflight power queue range removable ro sda1 sda2 sda3 sda4 sda5 sda6 size slaves stat subsystem trace uevent /dev: agpgart autofs block bsg btrfs-control bus char console core cpu cpu_dma_latency disk dri ecryptfs fb0 fd full fuse hpet input kmsg log mapper mcelog mei mem net network_latency network_throughput null oldmem port ppp psaux ptmx pts random rfkill rtc rtc0 sda sda1 sda2 sda3 sda4 sda5 sda6 sg0 shm snapshot snd stderr stdin stdout tpm0 uinput urandom usbmon0 usbmon1 usbmon2 v4l vga_arbiter video0 watchdog zero /dev/mapper: control /boot/efi: EFI /boot/efi/EFI: Boot Microsoft ubuntu /boot/efi/efi: Boot Microsoft ubuntu /boot/efi/efi/Boot: bootx64.efi /boot/efi/efi/ubuntu: grubx64.efi WARNING: GPT (GUID Partition Table) detected on '/dev/sda'! The util fdisk doesn't support GPT. Use GNU Parted. 
=================== DF: Filesystem Type Size Used Avail Use% Mounted on /dev/sda4 ext4 77G 4.1G 69G 6% / udev devtmpfs 3.9G 12K 3.9G 1% /dev tmpfs tmpfs 1.6G 864K 1.6G 1% /run none tmpfs 5.0M 0 5.0M 0% /run/lock none tmpfs 3.9G 152K 3.9G 1% /run/shm /dev/sda1 vfat 96M 18M 79M 19% /boot/efi /dev/sda5 ext4 137G 2.2G 128G 2% /home /dev/sda3 fuseblk 81G 30G 52G 37% /mnt/boot-sav/sda3 =================== FDISK: Disk /dev/sda: 320.1 GB, 320072933376 bytes 255 heads, 63 sectors/track, 38913 cylinders, total 625142448 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0xf34fe538 Device Boot Start End Blocks Id System /dev/sda1 1 625142447 312571223+ ee GPT =================== Before mainwindow FSCK no PASTEBIN yes WUBI no WINBOOT yes recommendedrepair, purge, QTY_OF_PART_FOR_REINSTAL 1 no-kernel-purge UNHIDEBOOT_ACTION yes (10s), noflag () PART_TO_REINSTALL_GRUB sda4, FORCE_GRUB no (sda) REMOVABLEDISK no USE_SEPARATEBOOTPART no (sda3) grub2 () UNCOMMENT_GFXMODE no ATA ADD_KERNEL_OPTION no (acpi=off) MBR_TO_RESTORE ( ) EFI detected. Please check the options. =================== Actions FSCK no PASTEBIN yes WUBI no WINBOOT no bootinfo, nombraction, QTY_OF_PART_FOR_REINSTAL 1 no-kernel-purge UNHIDEBOOT_ACTION no (10s), noflag () PART_TO_REINSTALL_GRUB sda4, FORCE_GRUB no (sda) REMOVABLEDISK no USE_SEPARATEBOOTPART no (sda3) grub2 () UNCOMMENT_GFXMODE no ATA ADD_KERNEL_OPTION no (acpi=off) MBR_TO_RESTORE ( ) No change has been performed on your computer. See you soon! internet: connected Thanks for your time and attention. EDIT: additional info requested. "=> No boot loader is installed in the MBR of /dev/sda. But maybe this is how it is supposed to work?" Yes, this is OK; the boot stuff seems to be on a separate partition, in my case sda1. I'm very new to this UEFI thing too. About missing files like bootmgr, I don't really have a clue :D but yes, maybe that's how it is supposed to be? What is not shown in the log for some reason: there are additional Microsoft boot files on sda1 under /efi/microsoft/ [much stuff]. I also remember doing some kind of hack to make a UEFI Windows 7 USB stick: http://jake.io/b/2011/installing-windows-7-with-uefi-boot-on-an-x220-from-usb/ In short: creating and placing bootx64.efi on the stick so it can be booted in UEFI mode. The boot order I decide in my BIOS. I read somewhere that the ThinkPad x220 (essential part of the serial number: 4921 http://www.lenovo.com/shop/americas/content/user_guides/x220_x220i_x220tablet_x220itablet_ug_en.pdf) doesn't really have a UEFI interface or something; still, these two options are listed with all the other usual devices you can give a boot priority to. Right now it looks like this: Boot Priority Order 1. ubuntu 2. Windows Boot Manager 3. USB FDD 4. USB HDD 5. ATA HDD0 HITACHI [random string]

    Read the article

  • Improving Partitioned Table Join Performance

    - by Paul White
    The query optimizer does not always choose an optimal strategy when joining partitioned tables. This post looks at an example, showing how a manual rewrite of the query can almost double performance, while reducing the memory grant to almost nothing. Test Data The two tables in this example use a common partitioning partition scheme. The partition function uses 41 equal-size partitions: CREATE PARTITION FUNCTION PFT (integer) AS RANGE RIGHT FOR VALUES ( 125000, 250000, 375000, 500000, 625000, 750000, 875000, 1000000, 1125000, 1250000, 1375000, 1500000, 1625000, 1750000, 1875000, 2000000, 2125000, 2250000, 2375000, 2500000, 2625000, 2750000, 2875000, 3000000, 3125000, 3250000, 3375000, 3500000, 3625000, 3750000, 3875000, 4000000, 4125000, 4250000, 4375000, 4500000, 4625000, 4750000, 4875000, 5000000 ); GO CREATE PARTITION SCHEME PST AS PARTITION PFT ALL TO ([PRIMARY]); There two tables are: CREATE TABLE dbo.T1 ( TID integer NOT NULL IDENTITY(0,1), Column1 integer NOT NULL, Padding binary(100) NOT NULL DEFAULT 0x,   CONSTRAINT PK_T1 PRIMARY KEY CLUSTERED (TID) ON PST (TID) );   CREATE TABLE dbo.T2 ( TID integer NOT NULL, Column1 integer NOT NULL, Padding binary(100) NOT NULL DEFAULT 0x,   CONSTRAINT PK_T2 PRIMARY KEY CLUSTERED (TID, Column1) ON PST (TID) ); The next script loads 5 million rows into T1 with a pseudo-random value between 1 and 5 for Column1. The table is partitioned on the IDENTITY column TID: INSERT dbo.T1 WITH (TABLOCKX) (Column1) SELECT (ABS(CHECKSUM(NEWID())) % 5) + 1 FROM dbo.Numbers AS N WHERE n BETWEEN 1 AND 5000000; In case you don’t already have an auxiliary table of numbers lying around, here’s a script to create one with 10 million rows: CREATE TABLE dbo.Numbers (n bigint PRIMARY KEY);   WITH L0 AS(SELECT 1 AS c UNION ALL SELECT 1), L1 AS(SELECT 1 AS c FROM L0 AS A CROSS JOIN L0 AS B), L2 AS(SELECT 1 AS c FROM L1 AS A CROSS JOIN L1 AS B), L3 AS(SELECT 1 AS c FROM L2 AS A CROSS JOIN L2 AS B), L4 AS(SELECT 1 AS c FROM L3 AS A CROSS JOIN L3 AS B), L5 AS(SELECT 1 AS c FROM L4 AS A CROSS JOIN L4 AS B), Nums AS(SELECT ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) AS n FROM L5) INSERT dbo.Numbers WITH (TABLOCKX) SELECT TOP (10000000) n FROM Nums ORDER BY n OPTION (MAXDOP 1); Table T1 contains data like this: Next we load data into table T2. The relationship between the two tables is that table 2 contains ‘n’ rows for each row in table 1, where ‘n’ is determined by the value in Column1 of table T1. There is nothing particularly special about the data or distribution, by the way. INSERT dbo.T2 WITH (TABLOCKX) (TID, Column1) SELECT T.TID, N.n FROM dbo.T1 AS T JOIN dbo.Numbers AS N ON N.n >= 1 AND N.n <= T.Column1; Table T2 ends up containing about 15 million rows: The primary key for table T2 is a combination of TID and Column1. The data is partitioned according to the value in column TID alone. Partition Distribution The following query shows the number of rows in each partition of table T1: SELECT PartitionID = CA1.P, NumRows = COUNT_BIG(*) FROM dbo.T1 AS T CROSS APPLY (VALUES ($PARTITION.PFT(TID))) AS CA1 (P) GROUP BY CA1.P ORDER BY CA1.P; There are 40 partitions containing 125,000 rows (40 * 125k = 5m rows). The rightmost partition remains empty. The next query shows the distribution for table 2: SELECT PartitionID = CA1.P, NumRows = COUNT_BIG(*) FROM dbo.T2 AS T CROSS APPLY (VALUES ($PARTITION.PFT(TID))) AS CA1 (P) GROUP BY CA1.P ORDER BY CA1.P; There are roughly 375,000 rows in each partition (the rightmost partition is also empty): Ok, that’s the test data done. 
Test Query and Execution Plan The task is to count the rows resulting from joining tables 1 and 2 on the TID column: SET STATISTICS IO ON; DECLARE @s datetime2 = SYSUTCDATETIME();   SELECT COUNT_BIG(*) FROM dbo.T1 AS T1 JOIN dbo.T2 AS T2 ON T2.TID = T1.TID;   SELECT DATEDIFF(Millisecond, @s, SYSUTCDATETIME()); SET STATISTICS IO OFF; The optimizer chooses a plan using parallel hash join, and partial aggregation: The Plan Explorer plan tree view shows accurate cardinality estimates and an even distribution of rows across threads (click to enlarge the image): With a warm data cache, the STATISTICS IO output shows that no physical I/O was needed, and all 41 partitions were touched: Running the query without actual execution plan or STATISTICS IO information for maximum performance, the query returns in around 2600ms. Execution Plan Analysis The first step toward improving on the execution plan produced by the query optimizer is to understand how it works, at least in outline. The two parallel Clustered Index Scans use multiple threads to read rows from tables T1 and T2. Parallel scan uses a demand-based scheme where threads are given page(s) to scan from the table as needed. This arrangement has certain important advantages, but does result in an unpredictable distribution of rows amongst threads. The point is that multiple threads cooperate to scan the whole table, but it is impossible to predict which rows end up on which threads. For correct results from the parallel hash join, the execution plan has to ensure that rows from T1 and T2 that might join are processed on the same thread. For example, if a row from T1 with join key value ‘1234’ is placed in thread 5’s hash table, the execution plan must guarantee that any rows from T2 that also have join key value ‘1234’ probe thread 5’s hash table for matches. The way this guarantee is enforced in this parallel hash join plan is by repartitioning rows to threads after each parallel scan. The two repartitioning exchanges route rows to threads using a hash function over the hash join keys. The two repartitioning exchanges use the same hash function so rows from T1 and T2 with the same join key must end up on the same hash join thread. Expensive Exchanges This business of repartitioning rows between threads can be very expensive, especially if a large number of rows is involved. The execution plan selected by the optimizer moves 5 million rows through one repartitioning exchange and around 15 million across the other. As a first step toward removing these exchanges, consider the execution plan selected by the optimizer if we join just one partition from each table, disallowing parallelism: SELECT COUNT_BIG(*) FROM dbo.T1 AS T1 JOIN dbo.T2 AS T2 ON T2.TID = T1.TID WHERE $PARTITION.PFT(T1.TID) = 1 AND $PARTITION.PFT(T2.TID) = 1 OPTION (MAXDOP 1); The optimizer has chosen a (one-to-many) merge join instead of a hash join. The single-partition query completes in around 100ms. If everything scaled linearly, we would expect that extending this strategy to all 40 populated partitions would result in an execution time around 4000ms. Using parallelism could reduce that further, perhaps to be competitive with the parallel hash join chosen by the optimizer. This raises a question. If the most efficient way to join one partition from each of the tables is to use a merge join, why does the optimizer not choose a merge join for the full query? 
Forcing a Merge Join Let’s force the optimizer to use a merge join on the test query using a hint: SELECT COUNT_BIG(*) FROM dbo.T1 AS T1 JOIN dbo.T2 AS T2 ON T2.TID = T1.TID OPTION (MERGE JOIN); This is the execution plan selected by the optimizer: This plan results in the same number of logical reads reported previously, but instead of 2600ms the query takes 5000ms. The natural explanation for this drop in performance is that the merge join plan is only using a single thread, whereas the parallel hash join plan could use multiple threads. Parallel Merge Join We can get a parallel merge join plan using the same query hint as before, and adding trace flag 8649: SELECT COUNT_BIG(*) FROM dbo.T1 AS T1 JOIN dbo.T2 AS T2 ON T2.TID = T1.TID OPTION (MERGE JOIN, QUERYTRACEON 8649); The execution plan is: This looks promising. It uses a similar strategy to distribute work across threads as seen for the parallel hash join. In practice though, performance is disappointing. On a typical run, the parallel merge plan runs for around 8400ms; slower than the single-threaded merge join plan (5000ms) and much worse than the 2600ms for the parallel hash join. We seem to be going backwards! The logical reads for the parallel merge are still exactly the same as before, with no physical IOs. The cardinality estimates and thread distribution are also still very good (click to enlarge): A big clue to the reason for the poor performance is shown in the wait statistics (captured by Plan Explorer Pro): CXPACKET waits require careful interpretation, and are most often benign, but in this case excessive waiting occurs at the repartitioning exchanges. Unlike the parallel hash join, the repartitioning exchanges in this plan are order-preserving ‘merging’ exchanges (because merge join requires ordered inputs): Parallelism works best when threads can just grab any available unit of work and get on with processing it. Preserving order introduces inter-thread dependencies that can easily lead to significant waits occurring. In extreme cases, these dependencies can result in an intra-query deadlock, though the details of that will have to wait for another time to explore in detail. The potential for waits and deadlocks leads the query optimizer to cost parallel merge join relatively highly, especially as the degree of parallelism (DOP) increases. This high costing resulted in the optimizer choosing a serial merge join rather than parallel in this case. The test results certainly confirm its reasoning. Collocated Joins In SQL Server 2008 and later, the optimizer has another available strategy when joining tables that share a common partition scheme. This strategy is a collocated join, also known as as a per-partition join. It can be applied in both serial and parallel execution plans, though it is limited to 2-way joins in the current optimizer. Whether the optimizer chooses a collocated join or not depends on cost estimation. The primary benefits of a collocated join are that it eliminates an exchange and requires less memory, as we will see next. Costing and Plan Selection The query optimizer did consider a collocated join for our original query, but it was rejected on cost grounds. The parallel hash join with repartitioning exchanges appeared to be a cheaper option. There is no query hint to force a collocated join, so we have to mess with the costing framework to produce one for our test query. 
Pretending that IOs cost 50 times more than usual is enough to convince the optimizer to use collocated join with our test query: -- Pretend IOs are 50x cost temporarily DBCC SETIOWEIGHT(50);   -- Co-located hash join SELECT COUNT_BIG(*) FROM dbo.T1 AS T1 JOIN dbo.T2 AS T2 ON T2.TID = T1.TID OPTION (RECOMPILE);   -- Reset IO costing DBCC SETIOWEIGHT(1); Collocated Join Plan The estimated execution plan for the collocated join is: The Constant Scan contains one row for each partition of the shared partitioning scheme, from 1 to 41. The hash repartitioning exchanges seen previously are replaced by a single Distribute Streams exchange using Demand partitioning. Demand partitioning means that the next partition id is given to the next parallel thread that asks for one. My test machine has eight logical processors, and all are available for SQL Server to use. As a result, there are eight threads in the single parallel branch in this plan, each processing one partition from each table at a time. Once a thread finishes processing a partition, it grabs a new partition number from the Distribute Streams exchange…and so on until all partitions have been processed. It is important to understand that the parallel scans in this plan are different from the parallel hash join plan. Although the scans have the same parallelism icon, tables T1 and T2 are not being co-operatively scanned by multiple threads in the same way. Each thread reads a single partition of T1 and performs a hash match join with the same partition from table T2. The properties of the two Clustered Index Scans show a Seek Predicate (unusual for a scan!) limiting the rows to a single partition: The crucial point is that the join between T1 and T2 is on TID, and TID is the partitioning column for both tables. A thread that processes partition ‘n’ is guaranteed to see all rows that can possibly join on TID for that partition. In addition, no other thread will see rows from that partition, so this removes the need for repartitioning exchanges. CPU and Memory Efficiency Improvements The collocated join has removed two expensive repartitioning exchanges and added a single exchange processing 41 rows (one for each partition id). Remember, the parallel hash join plan exchanges had to process 5 million and 15 million rows. The amount of processor time spent on exchanges will be much lower in the collocated join plan. In addition, the collocated join plan has a maximum of 8 threads processing single partitions at any one time. The 41 partitions will all be processed eventually, but a new partition is not started until a thread asks for it. Threads can reuse hash table memory for the new partition. The parallel hash join plan also had 8 hash tables, but with all 5,000,000 build rows loaded at the same time. The collocated plan needs memory for only 8 * 125,000 = 1,000,000 rows at any one time. Collocated Hash Join Performance The collated join plan has disappointing performance in this case. The query runs for around 25,300ms despite the same IO statistics as usual. This is much the worst result so far, so what went wrong? It turns out that cardinality estimation for the single partition scans of table T1 is slightly low. The properties of the Clustered Index Scan of T1 (graphic immediately above) show the estimation was for 121,951 rows. 
This is a small shortfall compared with the 125,000 rows actually encountered, but it was enough to cause the hash join to spill to physical tempdb: A level 1 spill doesn’t sound too bad, until you realize that the spill to tempdb probably occurs for each of the 41 partitions. As a side note, the cardinality estimation error is a little surprising because the system tables accurately show there are 125,000 rows in every partition of T1. Unfortunately, the optimizer uses regular column and index statistics to derive cardinality estimates here rather than system table information (e.g. sys.partitions). Collocated Merge Join We will never know how well the collocated parallel hash join plan might have worked without the cardinality estimation error (and the resulting 41 spills to tempdb) but we do know: Merge join does not require a memory grant; and Merge join was the optimizer’s preferred join option for a single partition join Putting this all together, what we would really like to see is the same collocated join strategy, but using merge join instead of hash join. Unfortunately, the current query optimizer cannot produce a collocated merge join; it only knows how to do collocated hash join. So where does this leave us? CROSS APPLY sys.partitions We can try to write our own collocated join query. We can use sys.partitions to find the partition numbers, and CROSS APPLY to get a count per partition, with a final step to sum the partial counts. The following query implements this idea: SELECT row_count = SUM(Subtotals.cnt) FROM ( -- Partition numbers SELECT p.partition_number FROM sys.partitions AS p WHERE p.[object_id] = OBJECT_ID(N'T1', N'U') AND p.index_id = 1 ) AS P CROSS APPLY ( -- Count per collocated join SELECT cnt = COUNT_BIG(*) FROM dbo.T1 AS T1 JOIN dbo.T2 AS T2 ON T2.TID = T1.TID WHERE $PARTITION.PFT(T1.TID) = p.partition_number AND $PARTITION.PFT(T2.TID) = p.partition_number ) AS SubTotals; The estimated plan is: The cardinality estimates aren’t all that good here, especially the estimate for the scan of the system table underlying the sys.partitions view. Nevertheless, the plan shape is heading toward where we would like to be. Each partition number from the system table results in a per-partition scan of T1 and T2, a one-to-many Merge Join, and a Stream Aggregate to compute the partial counts. The final Stream Aggregate just sums the partial counts. Execution time for this query is around 3,500ms, with the same IO statistics as always. This compares favourably with 5,000ms for the serial plan produced by the optimizer with the OPTION (MERGE JOIN) hint. This is another case of the sum of the parts being less than the whole – summing 41 partial counts from 41 single-partition merge joins is faster than a single merge join and count over all partitions. Even so, this single-threaded collocated merge join is not as quick as the original parallel hash join plan, which executed in 2,600ms. On the positive side, our collocated merge join uses only one logical processor and requires no memory grant. The parallel hash join plan used 16 threads and reserved 569 MB of memory:   Using a Temporary Table Our collocated merge join plan should benefit from parallelism. The reason parallelism is not being used is that the query references a system table. 
We can work around that by writing the partition numbers to a temporary table (or table variable): SET STATISTICS IO ON; DECLARE @s datetime2 = SYSUTCDATETIME();   CREATE TABLE #P ( partition_number integer PRIMARY KEY);   INSERT #P (partition_number) SELECT p.partition_number FROM sys.partitions AS p WHERE p.[object_id] = OBJECT_ID(N'T1', N'U') AND p.index_id = 1;   SELECT row_count = SUM(Subtotals.cnt) FROM #P AS p CROSS APPLY ( SELECT cnt = COUNT_BIG(*) FROM dbo.T1 AS T1 JOIN dbo.T2 AS T2 ON T2.TID = T1.TID WHERE $PARTITION.PFT(T1.TID) = p.partition_number AND $PARTITION.PFT(T2.TID) = p.partition_number ) AS SubTotals;   DROP TABLE #P;   SELECT DATEDIFF(Millisecond, @s, SYSUTCDATETIME()); SET STATISTICS IO OFF; Using the temporary table adds a few logical reads, but the overall execution time is still around 3500ms, indistinguishable from the same query without the temporary table. The problem is that the query optimizer still doesn’t choose a parallel plan for this query, though the removal of the system table reference means that it could if it chose to: In fact the optimizer did enter the parallel plan phase of query optimization (running search 1 for a second time): Unfortunately, the parallel plan found seemed to be more expensive than the serial plan. This is a crazy result, caused by the optimizer’s cost model not reducing operator CPU costs on the inner side of a nested loops join. Don’t get me started on that, we’ll be here all night. In this plan, everything expensive happens on the inner side of a nested loops join. Without a CPU cost reduction to compensate for the added cost of exchange operators, candidate parallel plans always look more expensive to the optimizer than the equivalent serial plan. Parallel Collocated Merge Join We can produce the desired parallel plan using trace flag 8649 again: SELECT row_count = SUM(Subtotals.cnt) FROM #P AS p CROSS APPLY ( SELECT cnt = COUNT_BIG(*) FROM dbo.T1 AS T1 JOIN dbo.T2 AS T2 ON T2.TID = T1.TID WHERE $PARTITION.PFT(T1.TID) = p.partition_number AND $PARTITION.PFT(T2.TID) = p.partition_number ) AS SubTotals OPTION (QUERYTRACEON 8649); The actual execution plan is: One difference between this plan and the collocated hash join plan is that a Repartition Streams exchange operator is used instead of Distribute Streams. The effect is similar, though not quite identical. The Repartition uses round-robin partitioning, meaning the next partition id is pushed to the next thread in sequence. The Distribute Streams exchange seen earlier used Demand partitioning, meaning the next partition id is pulled across the exchange by the next thread that is ready for more work. There are subtle performance implications for each partitioning option, but going into that would again take us too far off the main point of this post. Performance The important thing is the performance of this parallel collocated merge join – just 1350ms on a typical run. The list below shows all the alternatives from this post (all timings include creation, population, and deletion of the temporary table where appropriate) from quickest to slowest: Collocated parallel merge join: 1350ms Parallel hash join: 2600ms Collocated serial merge join: 3500ms Serial merge join: 5000ms Parallel merge join: 8400ms Collated parallel hash join: 25,300ms (hash spill per partition) The parallel collocated merge join requires no memory grant (aside from a paltry 1.2MB used for exchange buffers). 
This plan uses 16 threads at DOP 8, but 8 of those are (rather pointlessly) allocated to the parallel scan of the temporary table. These are minor concerns, but it turns out there is a way to address them if it bothers you.

Parallel Collocated Merge Join with Demand Partitioning

This final tweak replaces the temporary table with a hard-coded list of partition ids (dynamic SQL could be used to generate this query from sys.partitions; a sketch of that idea appears at the end of this post):

SELECT row_count = SUM(Subtotals.cnt)
FROM
(
    VALUES
        (1),(2),(3),(4),(5),(6),(7),(8),(9),(10),
        (11),(12),(13),(14),(15),(16),(17),(18),(19),(20),
        (21),(22),(23),(24),(25),(26),(27),(28),(29),(30),
        (31),(32),(33),(34),(35),(36),(37),(38),(39),(40),(41)
) AS P (partition_number)
CROSS APPLY
(
    SELECT cnt = COUNT_BIG(*)
    FROM dbo.T1 AS T1
    JOIN dbo.T2 AS T2
        ON T2.TID = T1.TID
    WHERE $PARTITION.PFT(T1.TID) = p.partition_number
    AND $PARTITION.PFT(T2.TID) = p.partition_number
) AS SubTotals
OPTION (QUERYTRACEON 8649);

The actual execution plan is:

The parallel collocated hash join plan is reproduced below for comparison:

The manual rewrite has another advantage that has not been mentioned so far: the partial counts (per partition) can be computed earlier than the partial counts (per thread) in the optimizer's collocated join plan. The earlier aggregation is performed by the extra Stream Aggregate under the nested loops join. The performance of the parallel collocated merge join is unchanged at around 1,350ms.

Final Words

It is a shame that the current query optimizer does not consider a collocated merge join (Connect item closed as Won't Fix). The example used in this post showed an improvement in execution time from 2,600ms to 1,350ms using a modestly-sized data set and limited parallelism. In addition, the memory requirement for the query was almost completely eliminated – down from 569MB to 1.2MB. The problem with the parallel hash join selected by the optimizer is that it attempts to process the full data set all at once (albeit using eight threads). It requires a large memory grant to hold all 5 million rows from table T1 across the eight hash tables, and does not take advantage of the divide-and-conquer opportunity offered by the common partitioning. The great thing about the collocated join strategies is that each parallel thread works on a single partition from both tables, reading rows, performing the join, and computing a per-partition subtotal, before moving on to a new partition.

From a thread's point of view…

If you have trouble visualizing what is happening from just looking at the parallel collocated merge join execution plan, let's look at it again, but from the point of view of just one thread operating between the two Parallelism (exchange) operators. Our thread picks up a single partition id from the Distribute Streams exchange, and starts a merge join using ordered rows from partition 1 of table T1 and partition 1 of table T2. By definition, this is all happening on a single thread. As rows join, they are added to a (per-partition) count in the Stream Aggregate immediately above the Merge Join. Eventually, either T1 (partition 1) or T2 (partition 1) runs out of rows and the merge join stops. The per-partition count from the aggregate passes on through the Nested Loops join to another Stream Aggregate, which is maintaining a per-thread subtotal. Our same thread now picks up a new partition id from the exchange (say it gets id 9 this time).
The count in the per-partition aggregate is reset to zero, and the processing of partition 9 of both tables proceeds just as it did for partition 1, and on the same thread. Each thread picks up a single partition id and processes all the data for that partition, completely independently from other threads working on other partitions. One thread might eventually process partitions (1, 9, 17, 25, 33, 41) while another is concurrently processing partitions (2, 10, 18, 26, 34) and so on for the other six threads at DOP 8. The point is that all 8 threads can execute independently and concurrently, continuing to process new partitions until the wider job (of which the thread has no knowledge!) is done. This divide-and-conquer technique can be much more efficient than simply splitting the entire workload across eight threads all at once.

Related Reading

- Understanding and Using Parallelism in SQL Server
- Parallel Execution Plans Suck

© 2013 Paul White – All Rights Reserved
Twitter: @SQL_Kiwi
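
Here is the sketch of the dynamic SQL idea mentioned in the Demand Partitioning section: build the VALUES list from sys.partitions and execute the generated statement. This is a minimal illustration only, assuming the same PFT partition function and T1/T2 tables used throughout this post; the variable names and the FOR XML PATH concatenation are incidental choices rather than anything the technique depends on:

DECLARE @PartitionList nvarchar(max);

-- Build the "(1),(2),(3),..." list from the partition metadata for T1
SELECT @PartitionList = STUFF(
    (
        SELECT N',(' + CONVERT(nvarchar(10), p.partition_number) + N')'
        FROM sys.partitions AS p
        WHERE p.[object_id] = OBJECT_ID(N'T1', N'U')
        AND p.index_id = 1
        ORDER BY p.partition_number
        FOR XML PATH('')
    ), 1, 1, N'');

DECLARE @sql nvarchar(max) = N'
SELECT row_count = SUM(Subtotals.cnt)
FROM (VALUES ' + @PartitionList + N') AS P (partition_number)
CROSS APPLY
(
    SELECT cnt = COUNT_BIG(*)
    FROM dbo.T1 AS T1
    JOIN dbo.T2 AS T2
        ON T2.TID = T1.TID
    WHERE $PARTITION.PFT(T1.TID) = P.partition_number
    AND $PARTITION.PFT(T2.TID) = P.partition_number
) AS Subtotals
OPTION (QUERYTRACEON 8649);';  -- same trace flag caveats as in the main text

EXECUTE sys.sp_executesql @sql;

The generated statement is the same as the hand-written VALUES version, so the plan shape and timings should be unchanged; the dynamic SQL just keeps the list in step with the partition function if new boundaries are added later.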


  • NTFS Corruption: Files created in Linux corrupted when Windows Boots

    - by Logan Mayfield
I'm getting some file loss and corruption on my Win7/Ubuntu 12.04 dual boot setup. I have a large shared NTFS partition. I have my Windows Docs/Music/etc. directories on that partition and have the comparable directories in Linux set up as symbolic links. I'm using ntfs-3g on the Linux side of things to manage the NTFS partition. The shared partition is on a logical partition along with my Linux /home, / and /swap partitions. The NTFS partition is mounted at boot time via fstab with the following options: ntfs-3g users,nls=utf8,locale=en_US.UTF-8,exec,rw

The problem seems to be confined to newly created and recently edited files. I have not seen data loss or corruption when creating/editing files in Windows and then moving over to Ubuntu. I've been using the sync command aggressively in Ubuntu to try to ensure everything is getting written to the HDD. I do not use hibernate in Windows, so I know it's not the usual problem of files going missing because of hibernation. I'm not seeing any mount-related issues in dmesg.

Most recently I had a set of files related to a LaTeX document go bad. Some of them show up in Ubuntu but I am unable to delete them. In the GUI file browser they are given thumbnails associated with files I created on my last boot of Windows. To be more specific: I created a few png files in Windows. The files corrupted by that Windows boot are associated with running PdfLatex on a file and are not image files. However, two of the corrupted files show up with the thumbnail image of one of the previously mentioned png files. The png files are not in the same directory as the LaTeX files, but they are both within the Documents folder tree. I've had success with using NTFS for shared data in the past and am hoping there's some quirk here I'm missing and it's not just bad luck. On one hand this appears to be some kind of Windows problem, as data loss occurs when I boot to Windows after having worked in Ubuntu for a while. However, I'm assuming it's more on the Ubuntu end as it requires the special NTFS drivers.

Edit for more info: This is a Lenovo Thinkpad L430. Purchased new in the last month. So it's a fairly fresh install. Many of the files on the shared partition were copied over from a previous NTFS-formatted shared partition on another HDD. As requested, here's a sample chkdsk log. Some of the files it mentions were files that got deleted off the partition while in Ubuntu. Others were created/edited but not deleted.

Checking file system on D: Volume dismounted. All opened handles to this volume are now invalid. Volume label is Files. CHKDSK is verifying files (stage 1 of 3)... Attribute record of type 0x80 and instance tag 0x2 is cross linked starting at 0x789f47 for possibly 0x21 clusters. Some clusters occupied by attribute of type 0x80 and instance tag 0x2 in file 0x42 is already in use. Deleting corrupt attribute record (128, "") from file record segment 66. 86496 file records processed. File verification completed. 385 large file records processed. 0 bad file records processed. 0 EA records processed. 0 reparse records processed. CHKDSK is verifying indexes (stage 2 of 3)... Deleted invalid filename Screenshot from 2012-09-09 09:51:27.png (72) in directory 46. The NTFS file name attribute in file 0x48 is incorrect. 53 00 63 00 72 00 65 00 65 00 6e 00 73 00 68 00 S.c.r.e.e.n.s.h. 6f 00 74 00 20 00 66 00 72 00 6f 00 6d 00 20 00 o.t. .f.r.o.m. . 32 00 30 00 31 00 32 00 2d 00 30 00 39 00 2d 00 2.0.1.2.-.0.9.-. 30 00 39 00 20 00 30 00 39 00 3a 00 35 00 31 00 0.9. .0.9.:.5.1.
3a 00 32 00 37 00 2e 00 70 00 6e 00 67 00 0d 00 :.2.7...p.n.g... 00 00 00 00 00 00 90 94 49 1f 5e 00 00 80 d4 00 ......I.^.... File 72 has been orphaned since all its filenames were invalid Windows will recover the file in the orphan recovery phase. Correcting minor file name errors in file 72. Index entry found.000 of index $I30 in file 0x5 points to unused file 0x11. Deleting index entry found.000 in index $I30 of file 5. Index entry found.001 of index $I30 in file 0x5 points to unused file 0x16. Deleting index entry found.001 in index $I30 of file 5. Index entry found.002 of index $I30 in file 0x5 points to unused file 0x15. Deleting index entry found.002 in index $I30 of file 5. Index entry DOWNLO~1 of index $I30 in file 0x28 points to unused file 0x2b6. Deleting index entry DOWNLO~1 in index $I30 of file 40. Unable to locate the file name attribute of index entry Screenshot from 2012-09-09 09:51:27.png of index $I30 with parent 0x2e in file 0x48. Deleting index entry Screenshot from 2012-09-09 09:51:27.png in index $I30 of file 46. An index entry of index $I30 in file 0x32 points to file 0x151e8 which is beyond the MFT. Deleting index entry latexsheet.tex in index $I30 of file 50. An index entry of index $I30 in file 0x58bc points to file 0x151eb which is beyond the MFT. Deleting index entry D8CZ82PK in index $I30 of file 22716. An index entry of index $I30 in file 0x58bc points to file 0x151f7 which is beyond the MFT. Deleting index entry EGA4QEAX in index $I30 of file 22716. An index entry of index $I30 in file 0x58bc points to file 0x151e9 which is beyond the MFT. Deleting index entry NGTB469M in index $I30 of file 22716. An index entry of index $I30 in file 0x58bc points to file 0x151fb which is beyond the MFT. Deleting index entry WU5RKXAB in index $I30 of file 22716. Index entry comp220-lab3.synctex.gz of index $I30 in file 0xda69 points to unused file 0xd098. Deleting index entry comp220-lab3.synctex.gz in index $I30 of file 55913. Unable to locate the file name attribute of index entry comp220-numberGrammars.aux of index $I30 with parent 0xda69 in file 0xa276. Deleting index entry comp220-numberGrammars.aux in index $I30 of file 55913. The file reference 0x500000000cd43 of index entry comp220-numberGrammars.out of index $I30 with parent 0xda69 is not the same as 0x600000000cd43. Deleting index entry comp220-numberGrammars.out in index $I30 of file 55913. The file reference 0x500000000cd45 of index entry comp220-numberGrammars.pdf of index $I30 with parent 0xda69 is not the same as 0xc00000000cd45. Deleting index entry comp220-numberGrammars.pdf in index $I30 of file 55913. An index entry of index $I30 in file 0xda69 points to file 0x15290 which is beyond the MFT. Deleting index entry gram.aux in index $I30 of file 55913. An index entry of index $I30 in file 0xda69 points to file 0x15291 which is beyond the MFT. Deleting index entry gram.out in index $I30 of file 55913. An index entry of index $I30 in file 0xda69 points to file 0x15292 which is beyond the MFT. Deleting index entry gram.pdf in index $I30 of file 55913. Unable to locate the file name attribute of index entry comp230-quiz1.synctex.gz of index $I30 with parent 0xda6f in file 0xd183. Deleting index entry comp230-quiz1.synctex.gz in index $I30 of file 55919. An index entry of index $I30 in file 0xf3cc points to file 0x15283 which is beyond the MFT. Deleting index entry require-transform.rkt in index $I30 of file 62412. An index entry of index $I30 in file 0xf3cc points to file 0x15284 which is beyond the MFT. 
Deleting index entry set.rkt in index $I30 of file 62412. An index entry of index $I30 in file 0xf497 points to file 0x15280 which is beyond the MFT. Deleting index entry logger.rkt in index $I30 of file 62615. An index entry of index $I30 in file 0xf497 points to file 0x15281 which is beyond the MFT. Deleting index entry misc.rkt in index $I30 of file 62615. An index entry of index $I30 in file 0xf497 points to file 0x15282 which is beyond the MFT. Deleting index entry more-scheme.rkt in index $I30 of file 62615. An index entry of index $I30 in file 0xf5bf points to file 0x15285 which is beyond the MFT. Deleting index entry core-layout.rkt in index $I30 of file 62911. An index entry of index $I30 in file 0xf5e0 points to file 0x15286 which is beyond the MFT. Deleting index entry ref.scrbl in index $I30 of file 62944. An index entry of index $I30 in file 0xf6f0 points to file 0x15287 which is beyond the MFT. Deleting index entry base-render.rkt in index $I30 of file 63216. An index entry of index $I30 in file 0xf6f0 points to file 0x15288 which is beyond the MFT. Deleting index entry html-properties.rkt in index $I30 of file 63216. An index entry of index $I30 in file 0xf6f0 points to file 0x15289 which is beyond the MFT. Deleting index entry html-render.rkt in index $I30 of file 63216. An index entry of index $I30 in file 0xf6f0 points to file 0x1528b which is beyond the MFT. Deleting index entry latex-prefix.rkt in index $I30 of file 63216. An index entry of index $I30 in file 0xf6f0 points to file 0x1528c which is beyond the MFT. Deleting index entry latex-render.rkt in index $I30 of file 63216. An index entry of index $I30 in file 0xf6f0 points to file 0x1528e which is beyond the MFT. Deleting index entry scribble.tex in index $I30 of file 63216. An index entry of index $I30 in file 0xf717 points to file 0x1528a which is beyond the MFT. Deleting index entry lang.rkt in index $I30 of file 63255. An index entry of index $I30 in file 0xf721 points to file 0x1528d which is beyond the MFT. Deleting index entry lang.rkt in index $I30 of file 63265. An index entry of index $I30 in file 0xf764 points to file 0x1528f which is beyond the MFT. Deleting index entry lang.rkt in index $I30 of file 63332. An index entry of index $I30 in file 0x14261 points to file 0x15270 which is beyond the MFT. Deleting index entry fddff3ae9ae2221207f144821d475c08ec3d05 in index $I30 of file 82529. An index entry of index $I30 in file 0x14621 points to file 0x15268 which is beyond the MFT. Deleting index entry FETCH_HEAD in index $I30 of file 83489. An index entry of index $I30 in file 0x14650 points to file 0x15272 which is beyond the MFT. Deleting index entry 86 in index $I30 of file 83536. An index entry of index $I30 in file 0x14651 points to file 0x15266 which is beyond the MFT. Deleting index entry pack-7f54ce9f8218d2cd8d6815b8c07461b50584027f.idx in index $I30 of file 83537. An index entry of index $I30 in file 0x14651 points to file 0x15265 which is beyond the MFT. Deleting index entry pack-7f54ce9f8218d2cd8d6815b8c07461b50584027f.pack in index $I30 of file 83537. An index entry of index $I30 in file 0x146f1 points to file 0x15275 which is beyond the MFT. Deleting index entry master in index $I30 of file 83697. An index entry of index $I30 in file 0x146f6 points to file 0x15276 which is beyond the MFT. Deleting index entry remotes in index $I30 of file 83702. An index entry of index $I30 in file 0x1477d points to file 0x15278 which is beyond the MFT. Deleting index entry pad.rkt in index $I30 of file 83837. 
An index entry of index $I30 in file 0x14797 points to file 0x1527c which is beyond the MFT. Deleting index entry pad1.rkt in index $I30 of file 83863. An index entry of index $I30 in file 0x14810 points to file 0x1527d which is beyond the MFT. Deleting index entry cm.rkt in index $I30 of file 83984. An index entry of index $I30 in file 0x14926 points to file 0x1527e which is beyond the MFT. Deleting index entry multi-file-search.rkt in index $I30 of file 84262. An index entry of index $I30 in file 0x149ef points to file 0x1527f which is beyond the MFT. Deleting index entry com.rkt in index $I30 of file 84463. An index entry of index $I30 in file 0x14b47 points to file 0x15202 which is beyond the MFT. Deleting index entry COMMIT_EDITMSG in index $I30 of file 84807. An index entry of index $I30 in file 0x14b47 points to file 0x15279 which is beyond the MFT. Deleting index entry index in index $I30 of file 84807. An index entry of index $I30 in file 0x14b4c points to file 0x15274 which is beyond the MFT. Deleting index entry master in index $I30 of file 84812. An index entry of index $I30 in file 0x14b61 points to file 0x1520b which is beyond the MFT. Deleting index entry 02 in index $I30 of file 84833. An index entry of index $I30 in file 0x14b61 points to file 0x1525a which is beyond the MFT. Deleting index entry 28 in index $I30 of file 84833. An index entry of index $I30 in file 0x14b61 points to file 0x15208 which is beyond the MFT. Deleting index entry 29 in index $I30 of file 84833. An index entry of index $I30 in file 0x14b61 points to file 0x1521f which is beyond the MFT. Deleting index entry 2c in index $I30 of file 84833. An index entry of index $I30 in file 0x14b61 points to file 0x15261 which is beyond the MFT. Deleting index entry 2e in index $I30 of file 84833. An index entry of index $I30 in file 0x14b61 points to file 0x151f0 which is beyond the MFT. Deleting index entry 45 in index $I30 of file 84833. An index entry of index $I30 in file 0x14b61 points to file 0x1523e which is beyond the MFT. Deleting index entry 47 in index $I30 of file 84833. An index entry of index $I30 in file 0x14b61 points to file 0x151e5 which is beyond the MFT. Deleting index entry 49 in index $I30 of file 84833. An index entry of index $I30 in file 0x14b61 points to file 0x15214 which is beyond the MFT. Deleting index entry 58 in index $I30 of file 84833. Index entry 6e of index $I30 in file 0x14b61 points to unused file 0xd182. Deleting index entry 6e in index $I30 of file 84833. Unable to locate the file name attribute of index entry a0 of index $I30 with parent 0x14b61 in file 0xd29c. Deleting index entry a0 in index $I30 of file 84833. An index entry of index $I30 in file 0x14b61 points to file 0x1521b which is beyond the MFT. Deleting index entry cd in index $I30 of file 84833. An index entry of index $I30 in file 0x14b61 points to file 0x15249 which is beyond the MFT. Deleting index entry d6 in index $I30 of file 84833. An index entry of index $I30 in file 0x14b61 points to file 0x15242 which is beyond the MFT. Deleting index entry df in index $I30 of file 84833. An index entry of index $I30 in file 0x14b61 points to file 0x15227 which is beyond the MFT. Deleting index entry ea in index $I30 of file 84833. An index entry of index $I30 in file 0x14b61 points to file 0x1522e which is beyond the MFT. Deleting index entry f3 in index $I30 of file 84833. An index entry of index $I30 in file 0x14b61 points to file 0x151f2 which is beyond the MFT. Deleting index entry ff in index $I30 of file 84833. 
An index entry of index $I30 in file 0x14b62 points to file 0x15254 which is beyond the MFT. Deleting index entry 1ed39b36ad4bd48c91d22cbafd7390f1ea38da in index $I30 of file 84834. An index entry of index $I30 in file 0x14b75 points to file 0x15224 which is beyond the MFT. Deleting index entry 96260247010fe9811fea773c08c5f3a314df3f in index $I30 of file 84853. An index entry of index $I30 in file 0x14b79 points to file 0x15219 which is beyond the MFT. Deleting index entry 8f689724ca23528dd4f4ab8b475ace6edcb8f5 in index $I30 of file 84857. An index entry of index $I30 in file 0x14b7c points to file 0x15223 which is beyond the MFT. Deleting index entry 1df17cf850656be42c947cba6295d29c248d94 in index $I30 of file 84860. An index entry of index $I30 in file 0x14b7c points to file 0x15217 which is beyond the MFT. Deleting index entry 31db8a3c72a3e44769bbd8db58d36f8298242c in index $I30 of file 84860. An index entry of index $I30 in file 0x14b7c points to file 0x15267 which is beyond the MFT. Deleting index entry 8e1254d755ff1882d61c07011272bac3612f57 in index $I30 of file 84860. An index entry of index $I30 in file 0x14b82 points to file 0x15246 which is beyond the MFT. Deleting index entry f959bfaf9643c1b9e78d5ecf8f669133efdbf3 in index $I30 of file 84866. An index entry of index $I30 in file 0x14b88 points to file 0x151fe which is beyond the MFT. Deleting index entry 7e9aa15b1196b2c60116afa4ffa613397f2185 in index $I30 of file 84872. An index entry of index $I30 in file 0x14b8a points to file 0x151ea which is beyond the MFT. Deleting index entry 73cb0cd248e494bb508f41b55d862e84cdd6e0 in index $I30 of file 84874. An index entry of index $I30 in file 0x14b8e points to file 0x15264 which is beyond the MFT. Deleting index entry bd555d9f0383cc14c317120149e9376a8094c4 in index $I30 of file 84878. An index entry of index $I30 in file 0x14b96 points to file 0x15212 which is beyond the MFT. Deleting index entry 630dba40562d991bc6cbb6fed4ba638542e9c5 in index $I30 of file 84886. An index entry of index $I30 in file 0x14b99 points to file 0x151ec which is beyond the MFT. Deleting index entry 478be31ca8e538769246e22bba3330d81dc3c8 in index $I30 of file 84889. An index entry of index $I30 in file 0x14b99 points to file 0x15258 which is beyond the MFT. Deleting index entry 66c60c0a0f3253bc9a5112697e4cbb0dfc0c78 in index $I30 of file 84889. An index entry of index $I30 in file 0x14b9c points to file 0x15238 which is beyond the MFT. Deleting index entry 1c7ceeddc2953496f9ffbfc0b6fb28846e3fe3 in index $I30 of file 84892. An index entry of index $I30 in file 0x14b9c points to file 0x15247 which is beyond the MFT. Deleting index entry ae6e32ffc49d897d8f8aeced970a90d3653533 in index $I30 of file 84892. An index entry of index $I30 in file 0x14ba0 points to file 0x15233 which is beyond the MFT. Deleting index entry f71c7d874e45179a32e138b49bf007e5bbf514 in index $I30 of file 84896. Index entry 2e04fefbd794f050d45e7a717d009e39204431 of index $I30 in file 0x14ba7 points to unused file 0xd097. Deleting index entry 2e04fefbd794f050d45e7a717d009e39204431 in index $I30 of file 84903. An index entry of index $I30 in file 0x14baa points to file 0x15241 which is beyond the MFT. Deleting index entry 0dda7dec1c635cd646dfef308e403c2843d5dc in index $I30 of file 84906. An index entry of index $I30 in file 0x14baa points to file 0x151fc which is beyond the MFT. Deleting index entry 98151e654dd546edcfdec630bc82d90619ac8e in index $I30 of file 84906. 
An index entry of index $I30 in file 0x14bb1 points to file 0x151e9 which is beyond the MFT. Deleting index entry 1997c5be62ffeebc99253cced7608415e38e4e in index $I30 of file 84913. An index entry of index $I30 in file 0x14bb1 points to file 0x1521d which is beyond the MFT. Deleting index entry 6bf3aedefd3ac62d9c49cad72d05e8c0ad242c in index $I30 of file 84913. An index entry of index $I30 in file 0x14bb1 points to file 0x151f4 which is beyond the MFT. Deleting index entry 907b755afdca14c00be0010962d0861af29264 in index $I30 of file 84913. An index entry of index $I30 in file 0x14bb3 points to file 0x15218 which is beyond the MFT. Deleting index entry


  • Extending NerdDinner: Adding Geolocated Flair

    - by Jon Galloway
NerdDinner is a website with the audacious goal of "Organizing the world's nerds and helping them eat in packs." Because nerds aren't likely to socialize with others unless a website tells them to do it. Scott Hanselman showed off a lot of the cool features we've added to NerdDinner lately during his popular talk at MIX10, Beyond File | New Company: From Cheesy Sample to Social Platform. Did you miss it? Go ahead and watch it, I'll wait.

One of the features we wanted to add was flair. You know about flair, right? It's a way to let folks who like your site show it off on their own site. For example, here's my StackOverflow flair: Great! So how could we add some of this flair stuff to NerdDinner?

What do we want to show?

If we're going to encourage our users to give up a bit of their beautiful website to show off a bit of ours, we need to think about what they'll want to show. For instance, my StackOverflow flair is all about me, not StackOverflow. So how will this apply to NerdDinner? Since NerdDinner is all about organizing local dinners, in order for the flair to be useful it needs to make sense for the person viewing the web page. If someone from Egypt visits my blog, they should see information about NerdDinners in Egypt. That's geolocation – localizing site content based on where the browser's sitting, and it makes sense for flair as well as entire websites. So we'll set up a simple little callout that prompts them to host a dinner in their area: Hopefully our flair works and there is a dinner near your viewers, so they'll see another view which lists upcoming dinners near them:

The Geolocation Part

Generally website geolocation is done by mapping the requestor's IP address to a geographic area. It's not an exact science, but I've always found it to be pretty accurate. There are (at least) three ways to handle it:

1. You pay somebody like MaxMind for a database (with regular updates) that sits on your server, and you use their API to do lookups. I used this on a pretty big project a few years ago and it worked well.
2. You use the HTML 5 Geolocation API or Google Gears or some other browser-based solution. I think those are cool (I use Google Gears a lot), but they're both in flux right now and I don't think either has a wide enough install base yet to rely on them. You might want to, but I've heard you do all kinds of crazy stuff, and sometimes it gets you in trouble. I don't mean talk out of line, but we all laugh behind your back a bit. But, hey, it's up to you. It's your flair or whatever.
3. There are some free webservices out there that will take an IP address and give you location information. Easy, and works for everyone. That's what we're doing.

I looked at a few different services and settled on IPInfoDB. It's free, has a great API, and even returns JSON, which is handy for Javascript use. The IP query is pretty simple.
We hit a URL like this:

http://ipinfodb.com/ip_query.php?ip=74.125.45.100&timezone=false

… and we get an XML response back like this…

<?xml version="1.0" encoding="UTF-8"?>
<Response>
    <Ip>74.125.45.100</Ip>
    <Status>OK</Status>
    <CountryCode>US</CountryCode>
    <CountryName>United States</CountryName>
    <RegionCode>06</RegionCode>
    <RegionName>California</RegionName>
    <City>Mountain View</City>
    <ZipPostalCode>94043</ZipPostalCode>
    <Latitude>37.4192</Latitude>
    <Longitude>-122.057</Longitude>
</Response>

So we'll build some data transfer classes to hold the location information, like this:

public class LocationInfo
{
    public string Country { get; set; }
    public string RegionName { get; set; }
    public string City { get; set; }
    public string ZipPostalCode { get; set; }
    public LatLong Position { get; set; }
}

public class LatLong
{
    public float Lat { get; set; }
    public float Long { get; set; }
}

And now hitting the service is pretty simple:

public static LocationInfo HostIpToPlaceName(string ip)
{
    string url = "http://ipinfodb.com/ip_query.php?ip={0}&timezone=false";
    url = String.Format(url, ip);
    var result = XDocument.Load(url);
    var location =
        (from x in result.Descendants("Response")
         select new LocationInfo
         {
             City = (string)x.Element("City"),
             RegionName = (string)x.Element("RegionName"),
             Country = (string)x.Element("CountryName"),
             ZipPostalCode = (string)x.Element("ZipPostalCode"),
             Position = new LatLong
             {
                 Lat = (float)x.Element("Latitude"),
                 Long = (float)x.Element("Longitude")
             }
         }).First();
    return location;
}

Getting The User's IP

Okay, but first we need the end user's IP, and you'd think it would be as simple as reading the value from HttpContext:

HttpContext.Current.Request.UserHostAddress

But you'd be wrong. Sorry. UserHostAddress just wraps HttpContext.Current.Request.ServerVariables["REMOTE_ADDR"], but that doesn't get you the IP for users behind a proxy. That's in another header, "HTTP_X_FORWARDED_FOR". So you can either hit a wrapper and then check a header, or just check two headers. I went for uniformity:

string SourceIP = string.IsNullOrEmpty(Request.ServerVariables["HTTP_X_FORWARDED_FOR"])
    ? Request.ServerVariables["REMOTE_ADDR"]
    : Request.ServerVariables["HTTP_X_FORWARDED_FOR"];

We're almost set to wrap this up, but first let's talk about our views. Yes, views, because we'll have two.

Selecting the View

We wanted to make it easy for people to include the flair in their sites, so we looked around at how other people were doing this. The StackOverflow folks have a pretty good flair system, which allows you to include the flair in your site as either an IFRAME reference or a Javascript include. We'll do both. We have a ServicesController to handle use of the site information outside of NerdDinner.com, so this fits in pretty well there. We'll be displaying the same information for both HTML and Javascript flair, so we can use one Flair controller action which will return a different view depending on the requested format.

Here's our general flow for our controller action:

1. Get the user's IP
2. Translate it to a location
3. Grab the top three upcoming dinners that are near that location
4. Select the view based on the format (defaulted to "html")
5. Return a FlairViewModel which contains the list of dinners and the location information

public ActionResult Flair(string format = "html")
{
    string SourceIP = string.IsNullOrEmpty(
        Request.ServerVariables["HTTP_X_FORWARDED_FOR"]) ?
        Request.ServerVariables["REMOTE_ADDR"] :
        Request.ServerVariables["HTTP_X_FORWARDED_FOR"];

    var location = GeolocationService.HostIpToPlaceName(SourceIP);
    var dinners = dinnerRepository.
        FindByLocation(location.Position.Lat, location.Position.Long).
        OrderByDescending(p => p.EventDate).Take(3);

    // Select the view we'll return.
    // Using a switch because we'll add in JSON and other formats later.
    string view;
    switch (format.ToLower())
    {
        case "javascript":
            view = "JavascriptFlair";
            break;
        default:
            view = "Flair";
            break;
    }

    return View(
        view,
        new FlairViewModel
        {
            Dinners = dinners.ToList(),
            LocationName = string.IsNullOrEmpty(location.City)
                ? "you"
                : String.Format("{0}, {1}", location.City, location.RegionName)
        }
    );
}

Note: I'm not in love with the logic here, but it seems like overkill to extract the switch statement away when we'll probably just have two or three views. What do you think?

The HTML View

The HTML version of the view is pretty simple – the only thing of any real interest here is the use of an extension method to truncate strings that would otherwise cause the titles to wrap.

public static string Truncate(this string s, int maxLength)
{
    if (string.IsNullOrEmpty(s) || maxLength <= 0)
        return string.Empty;
    else if (s.Length > maxLength)
        return s.Substring(0, maxLength) + "...";
    else
        return s;
}

So here's how the HTML view ends up looking:

<%@ Page Title="" Language="C#" Inherits="System.Web.Mvc.ViewPage<FlairViewModel>" %>
<%@ Import Namespace="NerdDinner.Helpers" %>
<%@ Import Namespace="NerdDinner.Models" %>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
    <title>Nerd Dinner</title>
    <link href="/Content/Flair.css" rel="stylesheet" type="text/css" />
</head>
<body>
    <div id="nd-wrapper">
        <h2 id="nd-header">NerdDinner.com</h2>
        <div id="nd-outer">
        <% if (Model.Dinners.Count == 0) { %>
            <div id="nd-bummer">
                Looks like there's no Nerd Dinners near <%:Model.LocationName %> in the near future.
                Why not <a target="_blank" href="http://www.nerddinner.com/Dinners/Create">host one</a>?</div>
        <% } else { %>
            <h3> Dinners Near You</h3>
            <ul>
            <% foreach (var item in Model.Dinners) { %>
                <li>
                    <%: Html.ActionLink(String.Format("{0} with {1} on {2}",
                        item.Title.Truncate(20), item.HostedBy, item.EventDate.ToShortDateString()),
                        "Details", "Dinners", new { id = item.DinnerID }, new { target = "_blank" })%></li>
            <% } %>
            </ul>
        <% } %>
            <div id="nd-footer">
                More dinners and fun at <a target="_blank" href="http://nrddnr.com">http://nrddnr.com</a></div>
        </div>
    </div>
</body>
</html>

You'd include this in a page using an IFRAME, like this:

<IFRAME height=230 marginHeight=0 src="http://nerddinner.com/services/flair" frameBorder=0 width=160 marginWidth=0 scrolling=no></IFRAME>

The Javascript view

The Javascript flair is written so you can include it in a webpage with a simple script include, like this:

<script type="text/javascript" src="http://nerddinner.com/services/flair?format=javascript"></script>

The goal of this view is very similar to the HTML embed view, with a few exceptions: We're creating a script element and adding it to the head of the document, which will then document.write out the content. Note that you have to consider if your users will actually have a <head> element in their documents, but for website flair use cases I think that's a safe bet. Since the content is being added to the existing page rather than shown in an IFRAME, all links need to be absolute.
That means we can’t use Html.ActionLink, since it generates relative routes. We need to escape everything since it’s being written out as strings. We need to set the content type to application/x-javascript. The easiest way to do that is to use the <%@ Page ContentType%> directive. <%@ Page Language="C#" Inherits="System.Web.Mvc.ViewPage<NerdDinner.Models.FlairViewModel>" ContentType="application/x-javascript" %> <%@ Import Namespace="NerdDinner.Helpers" %> <%@ Import Namespace="NerdDinner.Models" %> document.write('<script>var link = document.createElement(\"link\");link.href = \"http://nerddinner.com/content/Flair.css\";link.rel = \"stylesheet\";link.type = \"text/css\";var head = document.getElementsByTagName(\"head\")[0];head.appendChild(link);</script>'); document.write('<div id=\"nd-wrapper\"><h2 id=\"nd-header\">NerdDinner.com</h2><div id=\"nd-outer\">'); <% if (Model.Dinners.Count == 0) { %> document.write('<div id=\"nd-bummer\">Looks like there\'s no Nerd Dinners near <%:Model.LocationName %> in the near future. Why not <a target=\"_blank\" href=\"http://www.nerddinner.com/Dinners/Create\">host one</a>?</div>'); <% } else { %> document.write('<h3> Dinners Near You</h3><ul>'); <% foreach (var item in Model.Dinners) { %> document.write('<li><a target=\"_blank\" href=\"http://nrddnr.com/<%: item.DinnerID %>\"><%: item.Title.Truncate(20) %> with <%: item.HostedBy %> on <%: item.EventDate.ToShortDateString() %></a></li>'); <% } %> document.write('</ul>'); <% } %> document.write('<div id=\"nd-footer\"> More dinners and fun at <a target=\"_blank\" href=\"http://nrddnr.com\">http://nrddnr.com</a></div></div></div>'); Getting IP’s for Testing There are a variety of online services that will translate a location to an IP, which were handy for testing these out. I found http://www.itouchmap.com/latlong.html to be most useful, but I’m open to suggestions if you know of something better. Next steps I think the next step here is to minimize load – you know, in case people start actually using this flair. There are two places to think about – the NerdDinner.com servers, and the services we’re using for Geolocation. I usually think about caching as a first attack on server load, but that’s less helpful here since every user will have a different IP. Instead, I’d look at taking advantage of Asynchronous Controller Actions, a cool new feature in ASP.NET MVC 2. Async Actions let you call a potentially long-running webservice without tying up a thread on the server while waiting for the response. There’s some good info on that in the MSDN documentation, and Dino Esposito wrote a great article on Asynchronous ASP.NET Pages in the April 2010 issue of MSDN Magazine. But let’s think of the children, shall we? What about ipinfodb.com? Well, they don’t have specific daily limits, but they do throttle you if you put a lot of traffic on them. From their FAQ: We do not have a specific daily limit but queries that are at a rate faster than 2 per second will be put in "queue". If you stay below 2 queries/second everything will be normal. If you go over the limit, you will still get an answer for all queries but they will be slowed down to about 1 per second. This should not affect most users but for high volume websites, you can either use our IP database on your server or we can whitelist your IP for 5$/month (simply use the donate form and leave a comment with your server IP). 
Good programming practices such as not querying our API for all page views (you can store the data in a cookie or a database) will also help not reaching the limit. So the first step there is to save the geolocalization information in a time-limited cookie, which will allow us to look up the local dinners immediately without having to hit the geolocation service.
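
One way that first step might look is sketched below. This is a rough illustration only: the CachedGeolocation class, the NerdDinnerLocation cookie name, the pipe-delimited value format, and the one-day lifetime are all my assumptions, not NerdDinner code. It reuses the LocationInfo/LatLong classes and GeolocationService.HostIpToPlaceName shown earlier, and needs System.Web and System.Globalization.

// Rough sketch: cache the resolved location in a time-limited cookie so repeat
// visitors don't trigger another ipinfodb.com lookup. Names and format are illustrative.
public static class CachedGeolocation
{
    private const string CookieName = "NerdDinnerLocation"; // hypothetical cookie name

    public static LocationInfo GetOrLookup(HttpContextBase context, string ip)
    {
        // Try the cookie first. Value format: City|Region|Country|Zip|Lat|Long
        HttpCookie cookie = context.Request.Cookies[CookieName];
        if (cookie != null && !string.IsNullOrEmpty(cookie.Value))
        {
            string[] parts = cookie.Value.Split('|');
            if (parts.Length == 6)
            {
                return new LocationInfo
                {
                    City = HttpUtility.UrlDecode(parts[0]),
                    RegionName = HttpUtility.UrlDecode(parts[1]),
                    Country = HttpUtility.UrlDecode(parts[2]),
                    ZipPostalCode = HttpUtility.UrlDecode(parts[3]),
                    Position = new LatLong
                    {
                        Lat = float.Parse(parts[4], CultureInfo.InvariantCulture),
                        Long = float.Parse(parts[5], CultureInfo.InvariantCulture)
                    }
                };
            }
        }

        // Cache miss: hit the geolocation service once, then remember the answer for a day.
        LocationInfo location = GeolocationService.HostIpToPlaceName(ip);

        string value = string.Join("|", new[]
        {
            HttpUtility.UrlEncode(location.City),
            HttpUtility.UrlEncode(location.RegionName),
            HttpUtility.UrlEncode(location.Country),
            HttpUtility.UrlEncode(location.ZipPostalCode),
            location.Position.Lat.ToString(CultureInfo.InvariantCulture),
            location.Position.Long.ToString(CultureInfo.InvariantCulture)
        });

        context.Response.Cookies.Add(new HttpCookie(CookieName, value)
        {
            Expires = DateTime.UtcNow.AddDays(1) // time-limited, so stale locations age out
        });

        return location;
    }
}

The Flair action would then call CachedGeolocation.GetOrLookup(HttpContext, SourceIP) in place of the direct GeolocationService call, so only first-time visitors (or visitors whose cookie has expired) cost us a webservice round trip.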

