Search Results

Search found 1290 results on 52 pages for 'libreoffice writer'.

Page 48/52 | < Previous Page | 44 45 46 47 48 49 50 51 52  | Next Page >

  • Intermittent temporary GUI freeze in Ubuntu 11.10

    - by Oscar
    I've been using Ubuntu 11.10 for a month or so. In the last week it has started freezing randomly (every few hours or minutes). I can still move the mouse and switch to other terminals with Ctrl+Alt. I thought this was purely a GUI issue, as I can continue entering commands (mouse clicks and keys) which seem to be processed once the system resumes (generally 30 seconds to a few minutes later). I'm using GNOME and Metacity. I can't identify anything in particular that triggers the freezes, although saving a file in LibreOffice causes the system to hang. I tried disabling most of the services I've installed (Dropbox, AutoKey, etc.), but that doesn't help. Switching to another terminal and running top, the CPU column is shared roughly equally among all of my (non-root) processes; I have no idea what that signifies. My PC is unusable in this state. CPU model name: Pentium(R) Dual-Core CPU E6700 @ 3.20GHz. Here is the top output (root and kernel threads, all sitting at 0% CPU, are omitted):

         PID USER PR NI VIRT RES  SHR  S %CPU %MEM TIME+   COMMAND
        1499 ogga 20 0  404m 32m  13m  R 10   0.8  0:28.19 python
        1501 ogga 20 0  216m 13m  6224 R 10   0.3  0:18.28 ibus-x11
        1679 ogga 20 0  449m 34m  15m  R 10   0.9  0:41.10 gnome-panel
        1710 ogga 20 0  350m 15m  8324 R 10   0.4  0:18.25 bluetooth-apple
        1752 ogga 20 0  458m 37m  13m  R 10   0.9  0:22.62 autokey-gtk
        2081 ogga 20 0  354m 17m  9800 R 10   0.5  0:16.36 update-notifier
        5439 ogga 20 0  640m 104m 38m  R 10   2.6  0:45.17 chromium-browse
        5586 ogga 20 0  381m 42m  21m  R 10   1.1  0:20.17 chromium-browse
        6422 ogga 20 0  529m 59m  18m  R 10   1.5  0:28.15 sublime_text
        1362 ogga 20 0  264m 14m  7884 R  8   0.4  0:18.29 gnome-session
        1673 ogga 20 0  351m 17m  9768 R  8   0.4  0:21.78 metacity
        1708 ogga 20 0  249m 13m  7156 R  8   0.3  0:18.23 gnome-fallback-
        1709 ogga 20 0  572m 28m  15m  R  8   0.7  0:18.37 nautilus
        1722 ogga 20 0  467m 18m  9m   R  8   0.5  0:18.43 nm-applet
        1727 ogga 20 0  225m 12m  6304 R  8   0.3  0:18.24 polkit-gnome-au
        1731 ogga 20 0  422m 19m  10m  R  8   0.5  0:26.62 gnome-sound-app
        1735 ogga 20 0  306m 31m  13m  R  8   0.8  0:18.37 python
        1754 ogga 20 0  286m 16m  8912 R  8   0.4  0:18.90 vino-server
        1798 ogga 20 0  246m 15m  7476 R  8   0.4  0:18.25 gnome-screensav
        1851 ogga 20 0  185m 14m  7256 R  8   0.4  0:18.18 gdu-notificatio
        1923 ogga 20 0  251m 28m  11m  R  8   0.7  0:17.96 applet.py
        4085 ogga 20 0  378m 22m  11m  R  8   0.6  0:18.19 gnome-terminal
        4213 ogga 20 0  263m 73m  15m  S  2   1.9  3:57.44 skype

    And at another time:

         PID USER PR NI VIRT  RES  SHR  S %CPU %MEM TIME+   COMMAND
        1757 ogga 20 0  222m  9932 6300 R 13   0.2  0:05.69 polkit-gnome-au
        1559 ogga 20 0  152m  9764 6112 R 13   0.2  0:05.77 ibus-x11
        1786 ogga 20 0  457m  33m  13m  R 13   0.9  0:06.10 autokey-gtk
        1395 ogga 20 0  262m  12m  7880 R 12   0.3  0:05.88 gnome-session
        1557 ogga 20 0  403m  31m  13m  R 12   0.8  0:14.95 python
        1745 ogga 20 0  247m  11m  7196 R 12   0.3  0:05.69 gnome-fallback-
        1767 ogga 20 0  237m  26m  11m  R 12   0.7  0:05.87 python
        1713 ogga 20 0  440m  25m  13m  R 12   0.6  0:13.76 gnome-panel
        1747 ogga 20 0  348m  13m  8328 R 11   0.3  0:05.22 bluetooth-apple
        1754 ogga 20 0  465m  16m  10m  R 11   0.4  0:05.21 nm-applet
        1710 ogga 20 0  167m  11m  7564 R 11   0.3  0:05.21 metacity
        1761 ogga 20 0  406m  17m  9928 R 11   0.4  0:12.71 gnome-sound-app
        1789 ogga 20 0  283m  13m  8852 R 11   0.3  0:05.55 vino-server
        1815 ogga 20 0  243m  11m  7452 R 11   0.3  0:05.17 gnome-screensav
        1885 ogga 20 0  182m  11m  7256 R 11   0.3  0:05.18 gdu-notificatio
        1957 ogga 20 0  249m  25m  11m  R 11   0.7  0:05.32 applet.py
        2067 ogga 20 0  260m  12m  7828 R 11   0.3  0:05.21 update-notifier
        1975 ogga 20 0  292m  48m  11m  S  0   1.2  0:08.28 ubuntuone-syncd
        2363 ogga 20 0  21468 1384 988  R  0   0.0  0:00.01 top

    Thanks for any suggestions... Edit: I notice that my virtual machine (Win7 64-bit on VirtualBox) continues to respond most of the time during these 'freezes'. Edit 2: I suspect this has something to do with UI priority being too low, but I don't know enough about Linux to know how to address that.

    Read the article

  • General advice and guidelines on how to properly override object.GetHashCode()

    - by Svish
    According to MSDN, a hash function must have the following properties: If two objects compare as equal, the GetHashCode method for each object must return the same value. However, if two objects do not compare as equal, the GetHashCode methods for the two objects do not have to return different values. The GetHashCode method for an object must consistently return the same hash code as long as there is no modification to the object state that determines the return value of the object's Equals method. Note that this is true only for the current execution of an application, and that a different hash code can be returned if the application is run again. For the best performance, a hash function must generate a random distribution for all input. I keep finding myself in the following scenario: I have created a class, implemented IEquatable<T> and overridden object.Equals(object). MSDN states that: Types that override Equals must also override GetHashCode; otherwise, Hashtable might not work correctly. And that is usually where I get stuck, because how do you properly override object.GetHashCode()? I never really know where to start, and there seem to be a lot of pitfalls. Here at StackOverflow there are quite a few questions related to overriding GetHashCode, but most of them seem to be about quite particular cases and specific issues. So I would like to get a good compilation here: an overview with general advice and guidelines. What to do, what not to do, common pitfalls, where to start, etc. I would like it to be directed especially at C#, but I would think it works roughly the same way for other .NET languages as well(?). I think maybe the best way is to create one answer per topic, with a quick and short answer first (close to a one-liner if at all possible), then maybe some more information, and end with related questions, discussions, blog posts, etc., if there are any. I can then create one post as the accepted answer (to get it on top) with just a "table of contents". Try to keep it short and concise, and don't just link to other questions and blog posts: try to take the essence of them and then link to the source (especially since the source could disappear). Also, please try to edit and improve answers instead of creating lots of very similar ones. I am not a very good technical writer, but I will at least try to format answers so they look alike, create the table of contents, etc. I will also try to search up some of the related questions here at SO that answer parts of these and maybe pull out the essence of the ones I can manage. But since I am not very stable on this topic, I will try to stay away for the most part :p
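
    As a starting point for such a compilation, here is a minimal sketch of one common pattern (an illustration, not the definitive guideline this question hopes to collect): combine exactly the fields that Equals compares, using prime multipliers inside an unchecked block so overflow simply wraps around:

        public sealed class Point : IEquatable<Point>
        {
            private readonly int x;
            private readonly int y;

            public Point(int x, int y) { this.x = x; this.y = y; }

            public bool Equals(Point other)
            {
                return other != null && x == other.x && y == other.y;
            }

            public override bool Equals(object obj)
            {
                return Equals(obj as Point);
            }

            public override int GetHashCode()
            {
                // Combine the same fields Equals uses; keeping them immutable
                // means the hash never changes while the object sits in a Hashtable.
                unchecked
                {
                    int hash = 17;
                    hash = hash * 31 + x;
                    hash = hash * 31 + y;
                    return hash;
                }
            }
        }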

    Read the article

  • ASP.NET web forms as ASP.NET MVC

    - by lopkiju
    I am sorry for the possibly misleading title, but I have no idea for a proper one. Feel free to edit. Anyway, I am using ASP.NET Web Forms, and maybe this isn't how Web Forms is intended to be used, but I like to construct and populate HTML elements manually; it gives me more control. I don't use data binding and that kind of stuff. I use SqlConnection, SqlCommand and SqlDataReader, set the SQL string, etc. and read the data from the DataReader. Old school, if you like. :) I do create WebControls so that I don't have to copy-paste every time I need some control, but mostly I need WebControls to render as HTML so I can append that HTML into some other function that renders the final output with the control inside. I know I can render a control with control.RenderControl(writer), but this can only be done in (Pre)Render or RenderContents overrides. For example, I have a dal.cs file where all the static functions and voids that communicate with the database are stored. The functions mostly return strings so that they can be appended into some other function to render the final result. The reason I am doing it like this is that I want to separate the code from the HTML as much as I can, so that I don't do <% while (dataReader.Read()) %> in the HTML to display the data; I moved this into code-behind. I also use these functions to render in an HttpHandler for AJAX responses. That works perfectly, but when I want to add a control (an ASP.NET server control with a .cs extension, not .ascx) I don't know how to do it, so I see myself writing the same control as a function that returns a string, or another function inside that control that returns a string and does the job RenderContents would do, so that I can call that function when I need the control appended to another string. I know this may not be a very good practice. As I watch all the tutorials/videos about ASP.NET MVC, I think it suits my needs, since with MVC you have to construct everything (or most of it) yourself, which I am already doing right now with Web Forms. After this long intro, I want to ask how I can build my controls so I can use them as I mentioned (returning a string), or do I have to forget about server controls and build the controls as functions and use them that way? Is that even possible with ASP.NET server controls (.cs extension), or am I right when I say that I am not using them right? To be clear, I am talking about how to properly use Web Forms while avoiding data binders, because I want to construct everything myself (render HTML in code-behind). Someone might think that I am appending strings like "some " + "string", which I am not; I am using StringBuilder for that, so there's no slowness. Every opinion is welcome.
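
    For reference, rendering a server control to an HTML string outside the normal render phase is commonly done by driving RenderControl yourself. A minimal sketch (whether this fits an HttpHandler-based design like the one described is an open question, and controls that depend on the page lifecycle may still need to be added to a Page first):

        using System.IO;
        using System.Web.UI;

        public static class ControlRenderer
        {
            // Renders any server control to a string by handing it an
            // HtmlTextWriter backed by an in-memory StringWriter.
            public static string RenderToString(Control control)
            {
                using (var stringWriter = new StringWriter())
                using (var htmlWriter = new HtmlTextWriter(stringWriter))
                {
                    control.RenderControl(htmlWriter);
                    return stringWriter.ToString();
                }
            }
        }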

    Read the article

  • Please help with 'System.Data.DataRowView' does not contain a property with the name...

    - by Catalin
    My application works just fine until, under some unidentified conditions, it begins throwing these errors. When the errors start appearing, they appear all over the application, no matter whether the code is built using ObjectDataSource or SqlDataSource. However, apparently, if the code uses some "classic" code-behind data binding, that code does not throw errors (they do not show up in the application event log); the page is displayed, but it does not contain any data... Please help me track down these very strange errors! Here is the error log from the Event Viewer: Exception information: Exception type: HttpUnhandledException Exception message: Exception of type 'System.Web.HttpUnhandledException' was thrown. Request information: Request URL: http://SITE/agsis/Default.aspx?tabid=1281&error=DataBinding5/7/2009 10:23:03 AMa+'System.Data.DataRowView'+does+not+contain+a+property+with+the+name+'DenumireMaterie'. Request path: /agsis/Default.aspx User host address: 299.299.299.299 ;) User: georgeta Is authenticated: True Authentication Type: Forms Thread account name: NT AUTHORITY\NETWORK SERVICE Thread information: Thread ID: 1 Thread account name: NT AUTHORITY\NETWORK SERVICE Is impersonating: False Stack trace: at System.Web.UI.Page.HandleError(Exception e) at System.Web.UI.Page.ProcessRequestMain(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint) at System.Web.UI.Page.ProcessRequest(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint) at System.Web.UI.Page.ProcessRequest() at System.Web.UI.Page.ProcessRequest(HttpContext context) at ASP.errorpage_aspx.ProcessRequest(HttpContext context) at System.Web.HttpServerUtility.ExecuteInternal(IHttpHandler handler, TextWriter writer, Boolean preserveForm, Boolean setPreviousPage, VirtualPath path, VirtualPath filePath, String physPath, Exception error, String queryStringOverride) Here is the error log from my application: AssemblyVersion: 04.05.01 PortalID: 18 PortalName: Stiinte Economice UserID: 85 UserName: georgeta ActiveTabID: 1281 ActiveTabName: Plan Invatamant RawURL: /agsis/Default.aspx?tabid=1281&error=DataBinding%3a+'System.Data.DataRowView'+does+not+contain+a+property+with+the+name+'DenumireMaterie'. AbsoluteURL: /agsis/Default.aspx AbsoluteURLReferrer: http://SITE/agsis/Default.aspx?tabid=1281&ctl=Login&returnurl=%2fagsis%2fDefault.aspx%3ftabid%3d1281%26ctl%3dNotePlanSemestru%26mid%3d2801%26ID_PlanSemestru%3d518%26ID_PlanInvatamant%3d304%26ID_FC%3d206%26ID_FCForma%3d85%26ID_Domeniu%3d418%26ID_AnStudiu%3d7%26ID_Specializare%3d6522%26ID_AnUniv%3d27 UserAgent: Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1; SIMBAR={8C326639-C677-4045-B6D9-F59C330790E7}) DefaultDataProvider: DotNetNuke.Data.SqlDataProvider, DotNetNuke.SqlDataProvider ExceptionGUID: c56dc33e-973f-467a-a90d-2a8fc7193aec InnerException: DataBinding: 'System.Data.DataRowView' does not contain a property with the name 'ID_PlanInvatamant'. FileName: FileLineNumber: 0 FileColumnNumber: 0 Method: System.Web.UI.DataBinder.GetPropertyValue StackTrace: Message: DotNetNuke.Services.Exceptions.PageLoadException: DataBinding: 'System.Data.DataRowView' does not contain a property with the name 'ID_PlanInvatamant'. --- System.Web.HttpException: DataBinding: 'System.Data.DataRowView' does not contain a property with the name 'ID_PlanInvatamant'. 
at System.Web.UI.DataBinder.GetPropertyValue(Object container, String propName) at System.Web.UI.WebControls.GridView.CreateChildControls(IEnumerable dataSource, Boolean dataBinding) at System.Web.UI.WebControls.CompositeDataBoundControl.PerformDataBinding(IEnumerable data) at System.Web.UI.WebControls.GridView.PerformDataBinding(IEnumerable data) at System.Web.UI.WebControls.DataBoundControl.OnDataSourceViewSelectCallback(IEnumerable data) at System.Web.UI.WebControls.DataBoundControl.PerformSelect() at System.Web.UI.WebControls.BaseDataBoundControl.EnsureDataBound() at System.Web.UI.WebControls.CompositeDataBoundControl.CreateChildControls() at System.Web.UI.Control.EnsureChildControls() at System.Web.UI.Control.PreRenderRecursiveInternal() at System.Web.UI.Control.PreRenderRecursiveInternal() at System.Web.UI.Control.PreRenderRecursiveInternal() at System.Web.UI.Control.PreRenderRecursiveInternal() at System.Web.UI.Control.PreRenderRecursiveInternal() at System.Web.UI.Control.PreRenderRecursiveInternal() at System.Web.UI.Control.PreRenderRecursiveInternal() at System.Web.UI.Control.PreRenderRecursiveInternal() at System.Web.UI.Control.PreRenderRecursiveInternal() at System.Web.UI.Control.PreRenderRecursiveInternal() at System.Web.UI.Control.PreRenderRecursiveInternal() at System.Web.UI.Control.PreRenderRecursiveInternal() at System.Web.UI.Control.PreRenderRecursiveInternal() at System.Web.UI.Control.PreRenderRecursiveInternal() at System.Web.UI.Page.ProcessRequestMain(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint) --- End of inner exception stack trace --- Source: Server Name: SERVER2 Thank you, Catalin
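
    Since the messages suggest the query sometimes returns a result set without the expected columns, one illustrative debugging aid is to fail fast with a clearer message before binding. This assumes you can intercept the DataTable on its way to the grid; the helper below is hypothetical, not part of DotNetNuke:

        using System;
        using System.Data;

        static void AssertColumns(DataTable table, params string[] expected)
        {
            foreach (string name in expected)
            {
                if (!table.Columns.Contains(name))
                {
                    // Surfaces exactly which query produced the unexpected shape,
                    // instead of a generic DataBinding error deep inside the GridView.
                    throw new InvalidOperationException(
                        "Result set is missing expected column: " + name);
                }
            }
        }

    Calling AssertColumns(dt, "DenumireMaterie", "ID_PlanInvatamant") right after the data-access call would reveal whether the data layer or the binding is at fault.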

    Read the article

  • What's wrong with my triple DES wrapper?

    - by Chen Kinnrot
    It seems that my code adds 6 bytes to the result file after encrypt/decrypt is called. I tried it on an .mkv file. Please help. Here is my code: class TripleDESCryptoService : IEncryptor, IDecryptor { public void Encrypt(string inputFileName, string outputFileName, string key) { EncryptFile(inputFileName, outputFileName, key); } public void Decrypt(string inputFileName, string outputFileName, string key) { DecryptFile(inputFileName, outputFileName, key); } static void EncryptFile(string inputFileName, string outputFileName, string sKey) { var outFile = new FileStream(outputFileName, FileMode.OpenOrCreate, FileAccess.ReadWrite); // The cryptographic service provider we're going to use var cryptoAlgorithm = new TripleDESCryptoServiceProvider(); SetKeys(cryptoAlgorithm, sKey); // This object links data streams to cryptographic values var cryptoStream = new CryptoStream(outFile, cryptoAlgorithm.CreateEncryptor(), CryptoStreamMode.Write); // This stream writer will write the new file var encryptionStream = new BinaryWriter(cryptoStream); // This stream reader will read the file to encrypt var inFile = new FileStream(inputFileName, FileMode.Open, FileAccess.Read); var readwe = new BinaryReader(inFile); // Read the whole file to encrypt var date = readwe.ReadBytes((int)readwe.BaseStream.Length); // Write to the encryption stream encryptionStream.Write(date); // Wrap things up inFile.Close(); encryptionStream.Flush(); encryptionStream.Close(); } private static void SetKeys(SymmetricAlgorithm algorithm, string key) { var keyAsBytes = Encoding.ASCII.GetBytes(key); algorithm.IV = keyAsBytes.Take(algorithm.IV.Length).ToArray(); algorithm.Key = keyAsBytes.Take(algorithm.Key.Length).ToArray(); } static void DecryptFile(string inputFilename, string outputFilename, string sKey) { // The encrypted file var inFile = File.OpenRead(inputFilename); // The decrypted file var outFile = new FileStream(outputFilename, FileMode.OpenOrCreate, FileAccess.ReadWrite); // Prepare the encryption algorithm and read the key from the key file var cryptAlgorithm = new TripleDESCryptoServiceProvider(); SetKeys(cryptAlgorithm, sKey); // The cryptographic stream takes in the encrypted file var encryptionStream = new CryptoStream(inFile, cryptAlgorithm.CreateDecryptor(), CryptoStreamMode.Read); // Write the new unencrypted file var cleanStreamReader = new BinaryReader(encryptionStream); var cleanStreamWriter = new BinaryWriter(outFile); cleanStreamWriter.Write(cleanStreamReader.ReadBytes((int)inFile.Length)); cleanStreamWriter.Close(); outFile.Close(); cleanStreamReader.Close(); } }
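
    Two plausible suspects worth ruling out (assumptions, not a confirmed diagnosis): FileMode.OpenOrCreate does not truncate an existing output file, so leftover bytes from a previous, longer run can survive at the end; and casting Length to int breaks for files over 2 GB. A sketch of the decrypt path with both changes, reusing the question's SetKeys helper:

        using System.IO;
        using System.Security.Cryptography;

        static void DecryptFile(string inputFileName, string outputFileName, string key)
        {
            using (var algorithm = new TripleDESCryptoServiceProvider())
            {
                SetKeys(algorithm, key); // same helper as in the question
                using (var inFile = File.OpenRead(inputFileName))
                using (var cryptoStream = new CryptoStream(
                    inFile, algorithm.CreateDecryptor(), CryptoStreamMode.Read))
                // FileMode.Create truncates any stale output file.
                using (var outFile = new FileStream(
                    outputFileName, FileMode.Create, FileAccess.Write))
                {
                    // Buffered copy: no int cast of the file length needed.
                    var buffer = new byte[81920];
                    int read;
                    while ((read = cryptoStream.Read(buffer, 0, buffer.Length)) > 0)
                        outFile.Write(buffer, 0, read);
                }
            }
        }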

    Read the article

  • I have a problem adding additional content to my PDF

    - by Ayyappan.Anbalagan
    I am converting my DataSet into a PDF document. My DataSet contains the product bill details, so at the top of the PDF I need to add some more content, like my company name & address, the customer name, the date of the bill, and the bill number. I am using the code below to convert to PDF. public static void Exportdata(DataTable dataTable, HttpResponse Response, int val) { //String filename = String.Concat(name, "-", DateTime.Today.Day.ToString(), "/", DateTime.Today.Month.ToString(), "/", DateTime.Today.Year.ToString(), ".pdf"); Document pdfDoc = new Document(PageSize.A4, 30, 30, 40, 25); System.IO.MemoryStream mStream = new System.IO.MemoryStream(); PdfWriter writer = PdfWriter.GetInstance(pdfDoc, mStream); //int cols = 0; //int rows = 0; int cols = dataTable.Columns.Count; int rows = dataTable.Rows.Count; pdfDoc.Open(); iTextSharp.text.Table pdfTable = new iTextSharp.text.Table(cols, rows); pdfTable.BorderWidth = 1; pdfTable.Width = 100; pdfTable.Padding = 1; pdfTable.Spacing = 1; //creating table headers for (int i = 0; i < cols; i++) { Cell cellCols = new Cell(); Font ColFont = FontFactory.GetFont(FontFactory.HELVETICA, 8, Font.BOLD); Chunk chunkCols = new Chunk(dataTable.Columns[i].ColumnName, ColFont); cellCols.Add(chunkCols); pdfTable.AddCell(cellCols); } //creating table data (actual result) for (int k = 0; k < rows; k++) { for (int j = 0; j < cols; j++) { Cell cellRows = new Cell(); Font RowFont = FontFactory.GetFont(FontFactory.HELVETICA, 6); Chunk chunkRows = new Chunk(dataTable.Rows[k][j].ToString(), RowFont); cellRows.Add(chunkRows); pdfTable.AddCell(cellRows); } } pdfDoc.Add(pdfTable); pdfDoc.Close(); Response.ContentType = "application/octet-stream"; if (val == 1) { Response.AddHeader("Content-Disposition", "attachment; filename=Users.pdf"); } else if (val == 2) { Response.AddHeader("Content-Disposition", "attachment; filename=Customers.pdf"); } else if (val == 3) { Response.AddHeader("Content-Disposition", "attachment; filename=Materials.pdf"); } else { Response.AddHeader("Content-Disposition", "attachment; filename=Reports.pdf"); } Response.Clear(); Response.BinaryWrite(mStream.ToArray()); //Response.Write(mStream.ToString()); HttpContext.Current.ApplicationInstance.CompleteRequest(); Response.End(); }
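
    With iTextSharp, free-form header text is typically added as Paragraph objects after pdfDoc.Open() and before the table is added. A minimal sketch (the company/customer/bill variables here are placeholders, not part of the original code):

        // Insert between pdfDoc.Open() and pdfDoc.Add(pdfTable):
        Font headerFont = FontFactory.GetFont(FontFactory.HELVETICA, 12, Font.BOLD);
        Font detailFont = FontFactory.GetFont(FontFactory.HELVETICA, 9);

        pdfDoc.Add(new Paragraph("My Company Name, Street, City", headerFont));
        pdfDoc.Add(new Paragraph("Customer: " + customerName, detailFont));   // placeholder
        pdfDoc.Add(new Paragraph("Bill No: " + billNo + "    Date: " + billDate, detailFont));
        pdfDoc.Add(new Paragraph(" ")); // spacer line before the table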

    Read the article

  • Perl Moose::Util::TypeConstraints bug? What is this error about the type name containing invalid characters?

    - by alex8657
    I have spent hours tracking down a Moose::Util::TypeConstraints exception; I don't understand where it gets to check a type, or why it tells me that the name is incorrect. I reduced the error to a small example to try to locate the problem, and it only shows me that I don't get it. Have I hit a Moose::Util::TypeConstraints bug? aoffice:new alex$ perl -c ../codesnippets/typeconstrainterror.pl ../codesnippets/typeconstrainterror.pl syntax OK aoffice:new alex$ perl -d ../codesnippets/typeconstrainterror.pl (...) DB<1> r Something::File::LocalFile=HASH(0x100d1bfa8) contains invalid characters for a type name. Names can contain alphanumeric character, ":", and "." at /opt/local/lib/perl5/vendor_perl/5.10.1/darwin-multi-2level/Moose/Util/TypeConstraints.pm line 508 Moose::Util::TypeConstraints::_create_type_constraint('Something::File::LocalFile=HASH(0x100d1bfa8)', undef, undef, undef, undef) called at /opt/local/lib/perl5/vendor_perl/5.10.1/darwin-multi-2level/Moose/Util/TypeConstraints.pm line 285 Moose::Util::TypeConstraints::type('Something::File::LocalFile=HASH(0x100d1bfa8)') called at ../codesnippets/typeconstrainterror.pl line 7 Something::File::is_slink('Something::File::LocalFile=HASH(0x100d1bfa8)') called at ../codesnippets/typeconstrainterror.pl line 33 Debugged program terminated. Use q to quit or R to restart, use o inhibit_exit to avoid stopping after program termination, h q, h R or h o to get additional info. Below, the code that crashes: package Something::File; use Moose; has 'type' =>(is=>'ro', isa=>'Str', writer=>'_set_type' ); sub is_slink { my $self = shift; return ( $self->type eq 'slink' ); } no Moose; __PACKAGE__->meta->make_immutable; 1; package Something::File::LocalFile; use Moose; use Moose::Util::TypeConstraints; extends 'Something::File'; subtype 'PositiveInt' => as 'Int' => where { $_ >0 } => message { 'Only positive greater than zero integers accepted' }; no Moose; __PACKAGE__->meta->make_immutable; 1; my $a = Something::File::LocalFile->new; # $a->_set_type('slink'); print $a->is_slink ." end\n";

    Read the article

  • Merging two XML files into one XML file using Java

    - by dmurali
    I am stuck on how to proceed with combining two different XML files (which have the same structure). When I was doing some research on it, people said that an XML parser like DOM or StAX would have to be used. But can't I do it with regular IO streams? I am currently trying to do it with IO streams, but this is not solving my purpose; it's getting more complex. For example, what I have tried is: public class GUI { public static void main(String[] args) throws Exception { // Creates file to write to Writer output = null; output = new BufferedWriter(new FileWriter("C:\\merged.xml")); String newline = System.getProperty("line.separator"); output.write(""); // Read in xml file 1 FileInputStream in = new FileInputStream("C:\\1.xml"); BufferedReader br = new BufferedReader(new InputStreamReader(in)); String strLine; while ((strLine = br.readLine()) != null) { if (strLine.contains("<MemoryDump>")){ strLine = strLine.replace("<MemoryDump>", "xmlns:xsi"); } if (strLine.contains("</MemoryDump>")){ strLine = strLine.replace("</MemoryDump>", "xmlns:xsd"); } output.write(newline); output.write(strLine); System.out.println(strLine); } // Read in xml file 2 FileInputStream in = new FileInputStream("C:\\2.xml"); BufferedReader br1 = new BufferedReader(new InputStreamReader(in)); String strLine1; while ((strLine1 = br1.readLine()) != null) { if (strLine1.contains("<MemoryDump>")){ strLine1 = strLine1.replace("<MemoryDump>", ""); } if (strLine1.contains("</MemoryDump>")){ strLine1 = strLine1.replace("</MemoryDump>", ""); } output.write(newline); output.write(strLine1); System.out.println(strLine1); } } I request you to kindly let me know how I should proceed with merging two XML files, adding additional content as well. It would be great if you could provide me some example links. Thank you in advance!

    Read the article

  • migrating an embedded jetty server from v6 to v7

    - by Ceilingfish
    Hi chaps, I have an embedded server which I use in unit tests; it looks like this: public class UnitTestWebservices extends AbstractHandler { private Server server; private Map<Route,String> data = new HashMap<Route,String>(); public UnitTestWebservices(int port) throws Exception { server = new Server(port); server.setHandler(this); server.start(); } public void handle(String url, HttpServletRequest request, HttpServletResponse response, int arg3) throws IOException, ServletException { final Route route = Route.valueOf(request.getMethod(), url); final String content = data.get(route); if(content != null) { final ServletOutputStream stream = response.getOutputStream(); stream.print(content); stream.flush(); stream.close(); } else { response.sendError(HttpServletResponse.SC_NOT_FOUND); } } .... } That's written using version 6.1.24 of Jetty. I tried switching over to Jetty 7.1.1.v20100517 and updated the code to this: public class UnitTestWebservices extends AbstractHandler { private Server server; private Map<Route,String> data = new HashMap<Route,String>(); public UnitTestWebservices(int port) throws Exception { server = new Server(port); server.setHandler(this); server.start(); } public void handle(String url, Request request, HttpServletRequest servletRequest, HttpServletResponse response) throws IOException, ServletException { final Route route = Route.valueOf(request.getMethod(), url); final String content = data.get(route); request.setHandled(true); response.setContentType("application/json"); if(content != null) { response.setStatus(HttpServletResponse.SC_OK); final Writer stream = response.getWriter(); stream.append(content); } else { response.sendError(HttpServletResponse.SC_NOT_FOUND); } } } But whenever I try to make a request to the server, it hangs indefinitely. Has anyone experienced anything similar? It also printed this into the log: log4j:WARN No appenders could be found for logger (org.eclipse.jetty.util.log). log4j:WARN Please initialize the log4j system properly. org.eclipse.jetty.server.Server@670655dd STOPPED +-UnitTestWebservices@50ef5502 started

    Read the article

  • Bidirectional FIFO

    - by nunos
    I would like to implement a bidirectional FIFO. The code below works, but it is not bidirectional. I have searched all over the internet but haven't found any good examples... How can I do that? Thanks. WRITER.c: #include <stdio.h> #include <unistd.h> #include <string.h> #include <sys/types.h> #include <sys/wait.h> #include <fcntl.h> #define MAXLINE 4096 #define READ 0 #define WRITE 1 int main (int argc, char** argv) { int a, b, fd; do { fd=open("/tmp/myfifo",O_WRONLY); if (fd==-1) sleep(1); } while (fd==-1); while (1) { scanf("%d", &a); scanf("%d", &b); write(fd,&a,sizeof(int)); write(fd,&b,sizeof(int)); if (a == 0 && b == 0) { break; } } close(fd); return 0; } READER.c: #include <stdio.h> #include <unistd.h> #include <string.h> #include <sys/types.h> #include <sys/wait.h> #include <fcntl.h> #include <sys/stat.h> #define MAXLINE 4096 #define READ 0 #define WRITE 1 int main(void) { int n1, n2; int fd; mkfifo("/tmp/myfifo",0660); fd=open("/tmp/myfifo",O_RDONLY); while(read(fd, &n1, sizeof(int) )) { read(fd, &n2, sizeof(int)); if (n1 == 0 && n2 == 0) { break; } printf("soma: %d\n",n1+n2); printf("diferenca: %d\n", n1-n2); printf("divisao: %f\n", n1/(double)n2); printf("multiplicacao: %d\n", n1*n2); } close(fd); return 0; }

    Read the article

  • upgrading from MVC4 to MVC5 pre-Release

    - by Jack M
    I have made the dreadful error of upgrading from MVC4 to the MVC5 pre-release by updating the Razor and MVC WebPages assemblies in my references. I have System.Web.Mvc, System.Web.WebPages, System.Web.WebPages.Razor and System.Web.Razor as version v4.0.30319. When I run my application I get: [A]System.Web.WebPages.Razor.Configuration.HostSection cannot be cast to [B]System.Web.WebPages.Razor.Configuration.HostSection. Type A originates from 'System.Web.WebPages.Razor, Version=2.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35' in the context 'Default' at location 'C:\Windows\Microsoft.Net\assembly\GAC_MSIL\System.Web.WebPages.Razor\v4.0_2.0.0.0__31bf3856ad364e35\System.Web.WebPages.Razor.dll'. Type B originates from 'System.Web.WebPages.Razor, Version=3.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35' in the context 'Default' at location 'C:\Windows\Microsoft.NET\Framework64\v4.0.30319\Temporary ASP.NET Files\membership\c70f06fe\9163b1ca\assembly\dl3\291c956e\73c25daa_cf74ce01\System.Web.WebPages.Razor.dll'. Is this the same as http://www.asp.net/whitepapers/mvc4-release-notes? Thanks. Adding a stack trace: [InvalidCastException: [A]System.Web.WebPages.Razor.Configuration.HostSection cannot be cast to [B]System.Web.WebPages.Razor.Configuration.HostSection. Type A originates from 'System.Web.WebPages.Razor, Version=2.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35' in the context 'Default' at location 'C:\Windows\Microsoft.Net\assembly\GAC_MSIL\System.Web.WebPages.Razor\v4.0_2.0.0.0__31bf3856ad364e35\System.Web.WebPages.Razor.dll'. Type B originates from 'System.Web.WebPages.Razor, Version=3.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35' in the context 'Default' at location 'C:\Windows\Microsoft.NET\Framework64\v4.0.30319\Temporary ASP.NET Files\c70f06fe\9163b1ca\assembly\dl3\291c956e\73c25daa_cf74ce01\System.Web.WebPages.Razor.dll'.] 
System.Web.WebPages.Razor.WebRazorHostFactory.CreateHostFromConfig(String virtualPath, String physicalPath) +193 System.Web.WebPages.Razor.RazorBuildProvider.GetHostFromConfig() +51 System.Web.WebPages.Razor.RazorBuildProvider.CreateHost() +24 System.Web.WebPages.Razor.RazorBuildProvider.get_Host() +34 System.Web.WebPages.Razor.RazorBuildProvider.EnsureGeneratedCode() +85 System.Web.WebPages.Razor.RazorBuildProvider.get_CodeCompilerType() +34 System.Web.Compilation.BuildProvider.GetCompilerTypeFromBuildProvider(BuildProvider buildProvider) +189 System.Web.Compilation.BuildProvidersCompiler.ProcessBuildProviders() +265 System.Web.Compilation.BuildProvidersCompiler.PerformBuild() +21 System.Web.Compilation.BuildManager.CompileWebFile(VirtualPath virtualPath) +580 System.Web.Compilation.BuildManager.GetVPathBuildResultInternal(VirtualPath virtualPath, Boolean noBuild, Boolean allowCrossApp, Boolean allowBuildInPrecompile, Boolean throwIfNotFound, Boolean ensureIsUpToDate) +571 System.Web.Compilation.BuildManager.GetVPathBuildResultWithNoAssert(HttpContext context, VirtualPath virtualPath, Boolean noBuild, Boolean allowCrossApp, Boolean allowBuildInPrecompile, Boolean throwIfNotFound, Boolean ensureIsUpToDate) +203 System.Web.Compilation.BuildManager.GetVirtualPathObjectFactory(VirtualPath virtualPath, HttpContext context, Boolean allowCrossApp, Boolean throwIfNotFound) +249 System.Web.Compilation.BuildManager.GetCompiledType(VirtualPath virtualPath) +17 System.Web.Mvc.BuildManagerCompiledView.Render(ViewContext viewContext, TextWriter writer) +90 System.Web.Mvc.ViewResultBase.ExecuteResult(ControllerContext context) +380 System.Web.Mvc.ControllerActionInvoker.InvokeActionResultFilterRecursive(IList`1 filters, Int32 filterIndex, ResultExecutingContext preContext, ControllerContext controllerContext, ActionResult actionResult) +109 System.Web.Mvc.ControllerActionInvoker.InvokeActionResultFilterRecursive(IList`1 filters, Int32 filterIndex, ResultExecutingContext preContext, ControllerContext controllerContext, ActionResult actionResult) +890 System.Web.Mvc.ControllerActionInvoker.InvokeActionResultWithFilters(ControllerContext controllerContext, IList`1 filters, ActionResult actionResult) +97 System.Web.Mvc.Async.<>c__DisplayClass1e.<BeginInvokeAction>b__1b(IAsyncResult asyncResult) +241 System.Web.Mvc.Controller.<BeginExecuteCore>b__1d(IAsyncResult asyncResult, ExecuteCoreState innerState) +29 System.Web.Mvc.Async.WrappedAsyncVoid`1.CallEndDelegate(IAsyncResult asyncResult) +111 System.Web.Mvc.Controller.EndExecuteCore(IAsyncResult asyncResult) +53 System.Web.Mvc.Async.WrappedAsyncVoid`1.CallEndDelegate(IAsyncResult asyncResult) +19 System.Web.Mvc.MvcHandler.<BeginProcessRequest>b__4(IAsyncResult asyncResult, ProcessRequestState innerState) +51 System.Web.Mvc.Async.WrappedAsyncVoid`1.CallEndDelegate(IAsyncResult asyncResult) +111 System.Web.CallHandlerExecutionStep.System.Web.HttpApplication.IExecutionStep.Execute() +606 System.Web.HttpApplication.ExecuteStep(IExecutionStep step, Boolean& completedSynchronously) +288
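
    The exception text itself shows two versions of System.Web.WebPages.Razor (2.0.0.0 and 3.0.0.0) being loaded at once. The commonly cited fix for this upgrade is to point the Razor host sections in Views/web.config at the 3.0.0.0/5.0.0.0 assemblies; a sketch, to be checked against the exact package versions actually installed:

        <configSections>
          <sectionGroup name="system.web.webPages.razor"
              type="System.Web.WebPages.Razor.Configuration.RazorWebSectionGroup, System.Web.WebPages.Razor, Version=3.0.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35">
            <section name="host"
                type="System.Web.WebPages.Razor.Configuration.HostSection, System.Web.WebPages.Razor, Version=3.0.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"
                requirePermission="false" />
            <section name="pages"
                type="System.Web.WebPages.Razor.Configuration.RazorPagesSection, System.Web.WebPages.Razor, Version=3.0.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"
                requirePermission="false" />
          </sectionGroup>
        </configSections>
        <system.web.webPages.razor>
          <host factoryType="System.Web.Mvc.MvcWebRazorHostFactory, System.Web.Mvc, Version=5.0.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" />
        </system.web.webPages.razor>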

    Read the article

  • Zend database query result converts column values to null

    - by David Zapata
    Hi again. I am using the following instructions to get some records from my database. Create the needed models (from the params module): $obj_paramtype_model = new Params_Model_DbTable_Paramtype(); $obj_param_model = new Params_Model_DbTable_Param(); Getting the available locales from the database: // This returns a Zend_Db_Table_Row_Abstract class object $obj_paramtype = $obj_paramtype_model->getParamtypeByValue('available_locales'); // This is a query used to add conditions to the next sentence. This is executed from the Params_Model_DbTable_Param instance class, which depends on the Params_Model_DbTable_Paramtype class (the reference map and dependentTables arrays are fine in both classes) $obj_select = $this->select()->where('deleted_at IS NULL')->order('name'); // Execute the next query, applying the select restrictions. This returns a Zend_Db_Table_Rowset_Abstract class object. This means "find Params by Paramtype" $obj_params_rowset = $obj_paramtype->findDependentRowset('Params_Model_DbTable_Param', 'Paramtype', $obj_paramtype); // Here the Firebug log displays the queries... Zend_Registry::get('log')->debug($obj_params_rowset); I have a profiler for all my DB executions from Zend. At this point the log and profiler objects (which include Firebug writers) show the executed SQL queries, and the last line displays the resulting Zend_Db_Table_Rowset_Abstract class object. If I execute the SQL queries in some MySQL client, the results are as expected. But the Zend Firebug log writer displays as NULL the column values with Latin characters (ñ). In other words, the external SQL client shows es_CO | Español de Colombia and en_US | English of United States, but the query results from Zend display es_CO | null and en_US | English of United States. I've deleted the ñ character from Español de Colombia and the query results are just fine in my Zend log Firebug screen, and in the final Zend Form element. The MySQL database, tables and columns are in UTF-8 (utf8_unicode_ci collation). All my Zend Framework pages use the UTF-8 charset. I'm using XAMPP 1.7.1 (PHP 5.2.9, Apache on port 90 and MySQL 5.1.33-community) running on Windows 7 Ultimate; Zend Framework 1.10.1. I'm sorry if there is so much information, but I don't really know why this could happen, so I tried to provide as much related information as I could to help find an answer.

    Read the article

  • How do I access a secure website within a SharePoint web part?

    - by Bill
    How do I access a secure website within a SharePoint web part? The following code works fine as a console application, but if you run it in a web part, you will get an access violation: WebRequest request = WebRequest.Create("https://somesecuresite.com"); WebResponse firstResponse = null; try { firstResponse = request.GetResponse(); } catch (WebException ex) { writer.WriteLine("Error: " + ex.ToString()); return; } If you access a non-secure site, it also works. Any ideas? Error: System.Net.WebException: The underlying connection was closed: An unexpected error occurred on a receive. --- System.AccessViolationException: Attempted to read or write protected memory. This is often an indication that other memory is corrupt. at System.Net.UnsafeNclNativeMethods.NativePKI.CertVerifyCertificateChainPolicy(IntPtr policy, SafeFreeCertChain chainContext, ChainPolicyParameter& cpp, ChainPolicyStatus& ps) at System.Net.PolicyWrapper.VerifyChainPolicy(SafeFreeCertChain chainContext, ChainPolicyParameter& cpp) at System.Net.Security.SecureChannel.VerifyRemoteCertificate(RemoteCertValidationCallback remoteCertValidationCallback) at System.Net.Security.SslState.CompleteHandshake() at System.Net.Security.SslState.CheckCompletionBeforeNextReceive(ProtocolToken message, AsyncProtocolRequest asyncRequest) at System.Net.Security.SslState.StartSendBlob(Byte[] incoming, Int32 count, AsyncProtocolRequest asyncRequest) at System.Net.Security.SslState.ProcessReceivedBlob(Byte[] buffer, Int32 count, AsyncProtocolRequest asyncRequest) at System.Net.Security.SslState.StartReadFrame(Byte[] buffer, Int32 readBytes, AsyncProtocolRequest asyncRequest) at System.Net.Security.SslState.StartReceiveBlob(Byte[] buffer, AsyncProtocolRequest asyncRequest) at System.Net.Security.SslState.CheckCompletionBeforeNextReceive(ProtocolToken message, AsyncProtocolRequest asyncRequest) at System.Net.Security.SslState.StartSendBlob(Byte[] incoming, Int32 count, AsyncProtocolRequest asyncRequest) at System.Net.Security.SslState.ProcessReceivedBlob(Byte[] buffer, Int32 count, AsyncProtocolRequest asyncRequest) at System.Net.Security.SslState.StartReadFrame(Byte[] buffer, Int32 readBytes, AsyncProtocolRequest asyncRequest) at System.Net.Security.SslState.StartReceiveBlob(Byte[] buffer, AsyncProtocolRequest asyncRequest) at System.Net.Security.SslState.CheckCompletionBeforeNextReceive(ProtocolToken message, AsyncProtocolRequest asyncRequest) at System.Net.Security.SslState.StartSendBlob(Byte[] incoming, Int32 count, AsyncProtocolRequest asyncRequest) at System.Net.Security.SslState.ProcessReceivedBlob(Byte[] buffer, Int32 count, AsyncProtocolRequest asyncRequest) at System.Net.Security.SslState.StartReadFrame(Byte[] buffer, Int32 readBytes, AsyncProtocolRequest asyncRequest) at System.Net.Security.SslState.StartReceiveBlob(Byte[] buffer, AsyncProtocolRequest asyncRequest) at System.Net.Security.SslState.CheckCompletionBeforeNextReceive(ProtocolToken message, AsyncProtocolRequest asyncRequest) at System.Net.Security.SslState.StartSendBlob(Byte[] incoming, Int32 count, AsyncProtocolRequest asyncRequest) at System.Net.Security.SslState.ForceAuthentication(Boolean receiveFirst, Byte[] buffer, AsyncProtocolRequest asyncRequest) at System.Net.Security.SslState.ProcessAuthentication(LazyAsyncResult lazyResult) at System.Net.TlsStream.CallProcessAuthentication(Object state) at System.Threading.ExecutionContext.runTryCode(Object userData) at System.Runtime.CompilerServices.RuntimeHelpers.ExecuteCodeWithGuaranteedCleanup(TryCode code, CleanupCode
backoutCode, Object userData) at System.Threading.ExecutionContext.RunInternal(ExecutionContext executionContext, ContextCallback callback, Object state) at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state) at System.Net.TlsStream.ProcessAuthentication(LazyAsyncResult result) at System.Net.TlsStream.Write(Byte[] buffer, Int32 offset, Int32 size) at System.Net.PooledStream.Write(Byte[] buffer, Int32 offset, Int32 size) at System.Net.ConnectStream.WriteHeaders(Boolean async) --- End of inner exception stack trace --- at System.Net.HttpWebRequest.GetResponse()

    Read the article

  • Android USB Host Communication

    - by Kip Russell
    I'm working on a project that utilizes the USB host capabilities in Android 3.2. I'm suffering from a deplorable lack of knowledge and talent regarding USB/serial communication in general, and I'm also unable to find any good example code for what I need to do. I need to read from a USB communication device. Ex: when I connect via PuTTY (on my PC) I enter: >GO and the device starts spewing out data for me: pitch/roll/temp/checksum. Ex: $R1.217P-0.986T26.3*60 $R1.217P-0.986T26.3*60 $R1.217P-0.987T26.3*61 $R1.217P-0.986T26.3*60 $R1.217P-0.985T26.3*63 I can send the initial 'GO' command from the Android device, at which time I receive an echo of 'GO'. Then nothing else on any subsequent reads. How can I: 1) send the 'GO' command, and 2) read the stream of data that results? The USB device I'm working with has the following interfaces (endpoints): Device Class: Communication Device (0x2) Interfaces: Interface #0 Class: Communication Device (0x2) Endpoint #0 Direction: Inbound (0x80) Type: Interrupt (0x3) Poll Interval: 255 Max Packet Size: 32 Attributes: 000000011 Interface #1 Class: Communication Device Class (CDC) (0xa) Endpoint #0 Address: 129 Number: 1 Direction: Inbound (0x80) Type: Bulk (0x2) Poll Interval (0) Max Packet Size: 32 Attributes: 000000010 Endpoint #1 Address: 2 Number: 2 Direction: Outbound (0x0) Type: Bulk (0x2) Poll Interval (0) Max Packet Size: 32 Attributes: 000000010 I'm able to deal with permissions, connect to the device, find the correct interface and assign the endpoints. I'm just having trouble figuring out which technique to use to send the initial command and read the ensuing data. I've tried different combinations of bulkTransfer and controlTransfer with no luck. Thanks. I'm using interface #1 as seen below: public AcmDevice(UsbDeviceConnection usbDeviceConnection, UsbInterface usbInterface) { Preconditions.checkState(usbDeviceConnection.claimInterface(usbInterface, true)); this.usbDeviceConnection = usbDeviceConnection; UsbEndpoint epOut = null; UsbEndpoint epIn = null; // look for our bulk endpoints for (int i = 0; i < usbInterface.getEndpointCount(); i++) { UsbEndpoint ep = usbInterface.getEndpoint(i); Log.d(TAG, "EP " + i + ": " + ep.getType()); if (ep.getType() == UsbConstants.USB_ENDPOINT_XFER_BULK) { if (ep.getDirection() == UsbConstants.USB_DIR_OUT) { epOut = ep; } else if (ep.getDirection() == UsbConstants.USB_DIR_IN) { epIn = ep; } } } if (epOut == null || epIn == null) { throw new IllegalArgumentException("Not all endpoints found."); } AcmReader acmReader = new AcmReader(usbDeviceConnection, epIn); AcmWriter acmWriter = new AcmWriter(usbDeviceConnection, epOut); reader = new BufferedReader(acmReader); writer = new BufferedWriter(acmWriter); }

    Read the article

  • Mocking HtmlHelper throws NullReferenceException

    - by Matt Austin
    I know that there are a few questions on StackOverflow on this topic, but I haven't been able to get any of the suggestions to work for me. I've been banging my head against this for two days now, so it's time to ask for help... The following code snippet is a simplified unit test that demonstrates what I'm trying to do, which is basically to call RadioButtonFor in the Microsoft.Web.Mvc assembly from a unit test. var model = new SendMessageModel { SendMessageType = SendMessageType.Member }; var vd = new ViewDataDictionary(model); vd.TemplateInfo = new TemplateInfo { HtmlFieldPrefix = string.Empty }; var controllerContext = new ControllerContext(new Mock<HttpContextBase>().Object, new RouteData(), new Mock<ControllerBase>().Object); var viewContext = new Mock<ViewContext>(new object[] { controllerContext, new Mock<IView>().Object, vd, new TempDataDictionary(), new Mock<TextWriter>().Object }); viewContext.Setup(v => v.View).Returns(new Mock<IView>().Object); viewContext.Setup(v => v.ViewData).Returns(vd).Callback(() => {throw new Exception("ViewData extracted");}); viewContext.Setup(v => v.TempData).Returns(new TempDataDictionary()); viewContext.Setup(v => v.Writer).Returns(new Mock<TextWriter>().Object); viewContext.Setup(v => v.RouteData).Returns(new RouteData()); viewContext.Setup(v => v.HttpContext).Returns(new Mock<HttpContextBase>().Object); viewContext.Setup(v => v.Controller).Returns(new Mock<ControllerBase>().Object); viewContext.Setup(v => v.FormContext).Returns(new FormContext()); var mockContainer = new Mock<IViewDataContainer>(); mockContainer.Setup(x => x.ViewData).Returns(vd); var helper = new HtmlHelper<ISendMessageModel>(viewContext.Object, mockContainer.Object, new RouteCollection()); helper.RadioButtonFor(m => m.SendMessageType, "Member", cssClass: "selector"); If I remove the cssClass parameter then the code works OK, but it fails consistently when adding additional parameters. I've tried every combination of mocking, instantiating concrete types and using fakes that I can think of, but I always get a NullReferenceException when I call RadioButtonFor. Any help hugely appreciated!!
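
    One avenue worth trying (a hedged sketch, since the exact null dereference inside RadioButtonFor isn't identified here): construct a real ViewContext instead of a Mock<ViewContext>, so that internal members the extension method touches are genuinely populated rather than returning mock defaults:

        var viewData = new ViewDataDictionary(model);
        var controllerContext = new ControllerContext(
            new Mock<HttpContextBase>().Object, new RouteData(),
            new Mock<ControllerBase>().Object);

        // A concrete ViewContext: every property the helper may read is real.
        var viewContext = new ViewContext(
            controllerContext, new Mock<IView>().Object, viewData,
            new TempDataDictionary(), TextWriter.Null)
        {
            FormContext = new FormContext() // used by validation-related lookups
        };

        var container = new Mock<IViewDataContainer>();
        container.Setup(c => c.ViewData).Returns(viewData);

        var helper = new HtmlHelper<ISendMessageModel>(
            viewContext, container.Object, new RouteCollection());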

    Read the article

  • PHP use of undefined constant error

    - by user272899
    Using a great script to grab details from imdb, I would like to thank Fabian Beiner. Just one error i have encountered with it is: Use of undefined constant sys_get_temp_dir assumed 'sys_get_temp_dir' in '/path/to/directory' on line 49 This is the complete script <?php /** * IMDB PHP Parser * * This class can be used to retrieve data from IMDB.com with PHP. This script will fail once in * a while, when IMDB changes *anything* on their HTML. Guys, it's time to provide an API! * * @link http://fabian-beiner.de * @copyright 2010 Fabian Beiner * @author Fabian Beiner (mail [AT] fabian-beiner [DOT] de) * @license MIT License * * @version 4.1 (February 1st, 2010) * */ class IMDB { private $_sHeader = null; private $_sSource = null; private $_sUrl = null; private $_sId = null; public $_bFound = false; private $_oCookie = '/tmp/imdb-grabber-fb.tmp'; const IMDB_CAST = '#<a href="/name/(\w+)/" onclick="\(new Image\(\)\)\.src=\'/rg/castlist/position-(\d|\d\d)/images/b\.gif\?link=/name/(\w+)/\';">(.*)</a>#Ui'; const IMDB_COUNTRY = '#<a href="/Sections/Countries/(\w+)/">#Ui'; const IMDB_DIRECTOR = '#<a href="/name/(\w+)/" onclick="\(new Image\(\)\)\.src=\'/rg/directorlist/position-(\d|\d\d)/images/b.gif\?link=name/(\w+)/\';">(.*)</a><br/>#Ui'; const IMDB_GENRE = '#<a href="/Sections/Genres/(\w+|\w+\-\w+)/">(\w+|\w+\-\w+)</a>#Ui'; const IMDB_MPAA = '#<h5><a href="/mpaa">MPAA</a>:</h5>\s*<div class="info-content">\s*(.*)\s*</div>#Ui'; const IMDB_PLOT = '#<h5>Plot:</h5>\s*<div class="info-content">\s*(.*)\s*<a#Ui'; const IMDB_POSTER = '#<a name="poster" href="(.*)" title="(.*)"><img border="0" alt="(.*)" title="(.*)" src="(.*)" /></a>#Ui'; const IMDB_RATING = '#<b>(\d\.\d/10)</b>#Ui'; const IMDB_RELEASE_DATE = '#<h5>Release Date:</h5>\s*\s*<div class="info-content">\s*(.*) \((.*)\)#Ui'; const IMDB_RUNTIME = '#<h5>Runtime:</h5>\s*<div class="info-content">\s*(.*)\s*</div>#Ui'; const IMDB_SEARCH = '#<b>Media from&nbsp;<a href="/title/tt(\d+)/"#i'; const IMDB_TAGLINE = '#<h5>Tagline:</h5>\s*<div class="info-content">\s*(.*)\s*</div>#Ui'; const IMDB_TITLE = '#<title>(.*) \((.*)\)</title>#Ui'; const IMDB_URL = '#http://(.*\.|.*)imdb.com/(t|T)itle(\?|/)(..\d+)#i'; const IMDB_VOTES = '#&nbsp;&nbsp;<a href="ratings" class="tn15more">(.*) votes</a>#Ui'; const IMDB_WRITER = '#<a href="/name/(\w+)/" onclick="\(new Image\(\)\)\.src=\'/rg/writerlist/position-(\d|\d\d)/images/b\.gif\?link=name/(\w+)/\';">(.*)</a>#Ui'; const IMDB_REDIRECT = '#Location: (.*)#'; /** * Public constructor. * * @param string $sSearch */ public function __construct($sSearch) { if (function_exists(sys_get_temp_dir)) { $this->_oCookie = tempnam(sys_get_temp_dir(), 'imdb'); } $sUrl = $this->findUrl($sSearch); if ($sUrl) { $bFetch = $this->fetchUrl($this->_sUrl); $this->_bFound = true; } } /** * Little REGEX helper. * * @param string $sRegex * @param string $sContent * @param int $iIndex; */ private function getMatch($sRegex, $sContent, $iIndex = 1) { preg_match($sRegex, $sContent, $aMatches); if ($iIndex > count($aMatches)) return; if ($iIndex == null) { return $aMatches; } return $aMatches[(int)$iIndex]; } /** * Little REGEX helper, I should find one that works for both... ;/ * * @param string $sRegex * @param int $iIndex; */ private function getMatches($sRegex, $iIndex = null) { preg_match_all($sRegex, $this->_sSource, $aMatches); if ((int)$iIndex) return $aMatches[$iIndex]; return $aMatches; } /** * Save an image. * * @param string $sUrl */ private function saveImage($sUrl) { $sUrl = trim($sUrl); $bolDir = false; if (!is_dir(getcwd() . 
'/posters')) { if (mkdir(getcwd() . '/posters', 0777)) { $bolDir = true; } } $sFilename = getcwd() . '/posters/' . preg_replace("#[^0-9]#", "", basename($sUrl)) . '.jpg'; if (file_exists($sFilename)) { return 'posters/' . basename($sFilename); } if (is_dir(getcwd() . '/posters') OR $bolDir) { if (function_exists('curl_init')) { $oCurl = curl_init($sUrl); curl_setopt_array($oCurl, array ( CURLOPT_VERBOSE => 0, CURLOPT_HEADER => 0, CURLOPT_RETURNTRANSFER => 1, CURLOPT_TIMEOUT => 5, CURLOPT_CONNECTTIMEOUT => 5, CURLOPT_REFERER => $sUrl, CURLOPT_BINARYTRANSFER => 1)); $sOutput = curl_exec($oCurl); curl_close($oCurl); $oFile = fopen($sFilename, 'x'); fwrite($oFile, $sOutput); fclose($oFile); return 'posters/' . basename($sFilename); } else { $oImg = imagecreatefromjpeg($sUrl); imagejpeg($oImg, $sFilename); return 'posters/' . basename($sFilename); } return false; } return false; } /** * Find a valid Url out of the passed argument. * * @param string $sSearch */ private function findUrl($sSearch) { $sSearch = trim($sSearch); if ($aUrl = $this->getMatch(self::IMDB_URL, $sSearch, 4)) { $this->_sId = 'tt' . preg_replace('[^0-9]', '', $aUrl); $this->_sUrl = 'http://www.imdb.com/title/' . $this->_sId .'/'; return true; } else { $sTemp = 'http://www.imdb.com/find?s=all&q=' . str_replace(' ', '+', $sSearch) . '&x=0&y=0'; $bFetch = $this->fetchUrl($sTemp); if( $this->isRedirect() ) { return true; } else if ($bFetch) { if ($strMatch = $this->getMatch(self::IMDB_SEARCH, $this->_sSource)) { $this->_sUrl = 'http://www.imdb.com/title/tt' . $strMatch . '/'; unset($this->_sSource); return true; } } } return false; } /** * Find if result is redirected directly to exact movie. */ private function isRedirect() { if ($strMatch = $this->getMatch(self::IMDB_REDIRECT, $this->_sHeader)) { $this->_sUrl = $strMatch; unset($this->_sSource); unset($this->_sHeader); return true; } return false; } /** * Fetch data from given Url. * Uses cURL if installed, otherwise falls back to file_get_contents. * * @param string $sUrl * @param int $iTimeout; */ private function fetchUrl($sUrl, $iTimeout = 15) { $sUrl = trim($sUrl); if (function_exists('curl_init')) { $oCurl = curl_init($sUrl); curl_setopt_array($oCurl, array ( CURLOPT_VERBOSE => 0, CURLOPT_HEADER => 1, CURLOPT_FRESH_CONNECT => true, CURLOPT_RETURNTRANSFER => 1, CURLOPT_TIMEOUT => (int)$iTimeout, CURLOPT_CONNECTTIMEOUT => (int)$iTimeout, CURLOPT_REFERER => $sUrl, CURLOPT_FOLLOWLOCATION => 0, CURLOPT_COOKIEFILE => $this->_oCookie, CURLOPT_COOKIEJAR => $this->_oCookie, CURLOPT_USERAGENT => 'Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US; rv:1.9.2) Gecko/20100115 Firefox/3.6' )); $sOutput = curl_exec($oCurl); if ($sOutput === false) { return false; } $aInfo = curl_getinfo($oCurl); if ($aInfo['http_code'] != 200 && $aInfo['http_code'] != 302) { return false; } $sTmpHeader = strpos($sOutput, "\r\n\r\n"); $this->_sHeader = substr($sOutput, 0, $sTmpHeader); $this->_sSource = str_replace("\n", '', substr($sOutput, $sTmpHeader+1)); curl_close($oCurl); return true; } else { $sOutput = @file_get_contents($sUrl, 0); if (strpos($http_response_header[0], '200') === false){ return false; } $this->_sSource = str_replace("\n", '', (string)$sOutput); return true; } return false; } /** * Returns the cast. 
*/ public function getCast($iOutput = null, $bMore = true) { if ($this->_sSource) { $sReturned = $this->getMatches(self::IMDB_CAST, 4); if (is_array($sReturned)) { if ($iOutput) { foreach ($sReturned as $i => $sName) { if ($i >= $iOutput) break; $sReturn[] = $sName; } return implode(' / ', $sReturn) . (($bMore) ? '&hellip;' : ''); } return implode(' / ', $sReturned); } return $sReturned; } return 'n/A'; } /** * Returns the cast as links. */ public function getCastAsUrl($iOutput = null, $bMore = true) { if ($this->_sSource) { $sReturned1 = $this->getMatches(self::IMDB_CAST, 4); $sReturned2 = $this->getMatches(self::IMDB_CAST, 3); if (is_array($sReturned1)) { if ($iOutput) { foreach ($sReturned1 as $i => $sName) { if ($i >= $iOutput) break; $aReturn[] = '<a href="http://www.imdb.com/name/' . $sReturned2[$i] . '/">' . $sName . '</a>';; } return implode(' / ', $aReturn) . (($bMore) ? '&hellip;' : ''); } return implode(' / ', $sReturned); } return '<a href="http://www.imdb.com/name/' . $sReturned2 . '/">' . $sReturned1 . '</a>';; } return 'n/A'; } /** * Returns the countr(y|ies). */ public function getCountry() { if ($this->_sSource) { $sReturned = $this->getMatches(self::IMDB_COUNTRY, 1); if (is_array($sReturned)) { return implode(' / ', $sReturned); } return $sReturned; } return 'n/A'; } /** * Returns the countr(y|ies) as link(s). */ public function getCountryAsUrl() { if ($this->_sSource) { $sReturned = $this->getMatches(self::IMDB_COUNTRY, 1); if (is_array($sReturned)) { foreach ($sReturned as $sCountry) { $aReturn[] = '<a href="http://www.imdb.com/Sections/Countries/' . $sCountry . '/">' . $sCountry . '</a>'; } return implode(' / ', $aReturn); } return '<a href="http://www.imdb.com/Sections/Countries/' . $sReturned . '/">' . $sReturned . '</a>'; } return 'n/A'; } /** * Returns the director(s). */ public function getDirector() { if ($this->_sSource) { $sReturned = $this->getMatches(self::IMDB_DIRECTOR, 4); if (is_array($sReturned)) { return implode(' / ', $sReturned); } return $sReturned; } return 'n/A'; } /** * Returns the director(s) as link(s). */ public function getDirectorAsUrl() { if ($this->_sSource) { $sReturned1 = $this->getMatches(self::IMDB_DIRECTOR, 4); $sReturned2 = $this->getMatches(self::IMDB_DIRECTOR, 1); if (is_array($sReturned1)) { foreach ($sReturned1 as $i => $sDirector) { $aReturn[] = '<a href="http://www.imdb.com/name/' . $sReturned2[$i] . '/">' . $sDirector . '</a>'; } return implode(' / ', $aReturn); } return '<a href="http://www.imdb.com/name/' . $sReturned2 . '/">' . $sReturned1 . '</a>'; } return 'n/A'; } /** * Returns the genre(s). */ public function getGenre() { if ($this->_sSource) { $sReturned = $this->getMatches(self::IMDB_GENRE, 1); if (is_array($sReturned)) { return implode(' / ', $sReturned); } return $sReturned; } return 'n/A'; } /** * Returns the genre(s) as link(s). */ public function getGenreAsUrl() { if ($this->_sSource) { $sReturned = $this->getMatches(self::IMDB_GENRE, 1); if (is_array($sReturned)) { foreach ($sReturned as $i => $sGenre) { $aReturn[] = '<a href="http://www.imdb.com/Sections/Genres/' . $sGenre . '/">' . $sGenre . '</a>'; } return implode(' / ', $aReturn); } return '<a href="http://www.imdb.com/Sections/Genres/' . $sReturned . '/">' . $sReturned . '</a>'; } return 'n/A'; } /** * Returns the mpaa. */ public function getMpaa() { if ($this->_sSource) { return implode('' , $this->getMatches(self::IMDB_MPAA, 1)); } return 'n/A'; } /** * Returns the plot. 
*/ public function getPlot() { if ($this->_sSource) { return implode('' , $this->getMatches(self::IMDB_PLOT, 1)); } return 'n/A'; } /** * Download the poster, cache it and return the local path to the image. */ public function getPoster() { if ($this->_sSource) { if ($sPoster = $this->saveImage(implode("", $this->getMatches(self::IMDB_POSTER, 5)), 'poster.jpg')) { return $sPoster; } return implode('', $this->getMatches(self::IMDB_POSTER, 5)); } return 'n/A'; } /** * Returns the rating. */ public function getRating() { if ($this->_sSource) { return implode('', $this->getMatches(self::IMDB_RATING, 1)); } return 'n/A'; } /** * Returns the release date. */ public function getReleaseDate() { if ($this->_sSource) { return implode('', $this->getMatches(self::IMDB_RELEASE_DATE, 1)); } return 'n/A'; } /** * Returns the runtime of the current movie. */ public function getRuntime() { if ($this->_sSource) { return implode('', $this->getMatches(self::IMDB_RUNTIME, 1)); } return 'n/A'; } /** * Returns the tagline. */ public function getTagline() { if ($this->_sSource) { return implode('', $this->getMatches(self::IMDB_TAGLINE, 1)); } return 'n/A'; } /** * Get the release date of the current movie. */ public function getTitle() { if ($this->_sSource) { return implode('', $this->getMatches(self::IMDB_TITLE, 1)); } return 'n/A'; } /** * Returns the url. */ public function getUrl() { return $this->_sUrl; } /** * Get the votes of the current movie. */ public function getVotes() { if ($this->_sSource) { return implode('', $this->getMatches(self::IMDB_VOTES, 1)); } return 'n/A'; } /** * Get the year of the current movie. */ public function getYear() { if ($this->_sSource) { return implode('', $this->getMatches(self::IMDB_TITLE, 2)); } return 'n/A'; } /** * Returns the writer(s). */ public function getWriter() { if ($this->_sSource) { $sReturned = $this->getMatches(self::IMDB_WRITER, 4); if (is_array($sReturned)) { return implode(' / ', $sReturned); } return $sReturned; } return 'n/A'; } /** * Returns the writer(s) as link(s). */ public function getWriterAsUrl() { if ($this->_sSource) { $sReturned1 = $this->getMatches(self::IMDB_WRITER, 4); $sReturned2 = $this->getMatches(self::IMDB_WRITER, 1); if (is_array($sReturned1)) { foreach ($sReturned1 as $i => $sWriter) { $aReturn[] = '<a href="http://www.imdb.com/name/' . $sReturned2[$i] . '/">' . $sWriter . '</a>'; } return implode(' / ', $aReturn); } return '<a href="http://www.imdb.com/name/' . $sReturned2 . '/">' . $sReturned1 . '</a>'; } return 'n/A'; } } ?>
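For what it's worth, that notice is easy to fix, and the cause is visible in the constructor quoted above: function_exists() expects its argument as a string, but the call is written as function_exists(sys_get_temp_dir) with no quotes. PHP treats the bare word as a constant, finds no such constant defined, falls back to the literal string 'sys_get_temp_dir' and raises the notice - which is why the script still works despite the warning. Changing the call to function_exists('sys_get_temp_dir') silences the notice without changing the behaviour.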

    Read the article

  • C# 4: The Curious ConcurrentDictionary

    - by James Michael Hare
In my previous post (here) I did a comparison of the new ConcurrentQueue versus the old standard of a System.Collections.Generic Queue with simple locking. The results were exactly what I would have hoped: the ConcurrentQueue was faster with multi-threading in almost all situations. In addition, concurrent collections have the added benefit that you can enumerate them even if they're being modified. So I set out to see what the improvements would be for the ConcurrentDictionary: would it have the same performance benefits as the ConcurrentQueue did? Well, after running some tests and multiple tweaks and tunes, I have good and bad news. But first, let's look at the tests. Obviously there are many things we can do with a dictionary. One of the most notable uses, of course, in a multi-threaded environment is for a small, local in-memory cache. So I set about writing a very simple simulation of a cache, for which I created a test class that I'll just call an Accessor. This accessor will attempt to look up a key in the dictionary, and if the key exists, it stops (i.e. a cache "hit"). However, if the lookup fails, it will then try to add the key and value to the dictionary (i.e. a cache "miss"). So here's the Accessor that will run the tests:

1: internal class Accessor
2: {
3:     public int Hits { get; set; }
4:     public int Misses { get; set; }
5:     public Func<int, string> GetDelegate { get; set; }
6:     public Action<int, string> AddDelegate { get; set; }
7:     public int Iterations { get; set; }
8:     public int MaxRange { get; set; }
9:     public int Seed { get; set; }
10:
11:     public void Access()
12:     {
13:         var randomGenerator = new Random(Seed);
14:
15:         for (int i = 0; i < Iterations; i++)
16:         {
17:             // give a wide spread so will have some duplicates and some unique
18:             var target = randomGenerator.Next(1, MaxRange);
19:
20:             // attempt to grab the item from the cache
21:             var result = GetDelegate(target);
22:
23:             // if the item doesn't exist, add it
24:             if (result == null)
25:             {
26:                 AddDelegate(target, target.ToString());
27:                 Misses++;
28:             }
29:             else
30:             {
31:                 Hits++;
32:             }
33:         }
34:     }
35: }

Note that, so I could test different implementations, I defined a GetDelegate and an AddDelegate that will call the appropriate dictionary methods to add or retrieve items in the cache using various techniques. So let's examine the three techniques I decided to test:

Dictionary with mutex - Just your standard generic Dictionary with a simple lock construct on an internal object.
Dictionary with ReaderWriterLockSlim - Same Dictionary, but now using a lock designed to let multiple readers access simultaneously and only lock exclusively when a writer needs access.
ConcurrentDictionary - The new ConcurrentDictionary from System.Collections.Concurrent that is supposed to be optimized to allow multiple threads to access safely.

So the approach to each of these is also fairly straightforward. Let's look at the GetDelegate and AddDelegate implementations for the Dictionary with mutex lock:

1: Action<int, string> addDelegate = (key, val) =>
2: {
3:     lock (_mutex)
4:     {
5:         _dictionary[key] = val;
6:     }
7: };
8: Func<int, string> getDelegate = (key) =>
9: {
10:     lock (_mutex)
11:     {
12:         string val;
13:         return _dictionary.TryGetValue(key, out val) ? val : null;
14:     }
15: };

Nothing new or fancy here, just your basic lock on a private object and then query/insert into the Dictionary.
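For concreteness, here is a minimal sketch of how those delegates wire into the Accessor above; the Program scaffolding, field names, and test parameters are mine for illustration, not from the original post:

using System;
using System.Collections.Generic;

internal class Program
{
    private static readonly object _mutex = new object();
    private static readonly Dictionary<int, string> _dictionary =
        new Dictionary<int, string>();

    private static void Main()
    {
        var accessor = new Accessor
        {
            Iterations = 10000000, // total cache accesses for this accessor
            MaxRange = 100000,     // key spread; a smaller range means more hits
            Seed = 42,             // fixed seed keeps runs comparable
            AddDelegate = (key, val) => { lock (_mutex) { _dictionary[key] = val; } },
            GetDelegate = key =>
            {
                lock (_mutex)
                {
                    string val;
                    return _dictionary.TryGetValue(key, out val) ? val : null;
                }
            }
        };

        accessor.Access();
        Console.WriteLine("Hits: {0}, Misses: {1}", accessor.Hits, accessor.Misses);
    }
}

Run with a small MaxRange the accessor reports mostly hits; widen the range and misses dominate - which is exactly the knob the tests below turn.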
Now, for the Dictionary with ReaderWriterLockSlim it's a little more complex:

1: Action<int, string> addDelegate = (key, val) =>
2: {
3:     _readerWriterLock.EnterWriteLock();
4:     _dictionary[key] = val;
5:     _readerWriterLock.ExitWriteLock();
6: };
7: Func<int, string> getDelegate = (key) =>
8: {
9:     string val;
10:     _readerWriterLock.EnterReadLock();
11:     if (!_dictionary.TryGetValue(key, out val))
12:     {
13:         val = null;
14:     }
15:     _readerWriterLock.ExitReadLock();
16:     return val;
17: };

And finally, the ConcurrentDictionary, which, since it does all its own concurrency control, is remarkably elegant and simple:

1: Action<int, string> addDelegate = (key, val) =>
2: {
3:     _concurrentDictionary[key] = val;
4: };
5: Func<int, string> getDelegate = (key) =>
6: {
7:     string s;
8:     return _concurrentDictionary.TryGetValue(key, out s) ? s : null;
9: };

Then, I set up a test harness that would simply ask the user for the number of concurrent Accessors to attempt to Access the cache (as specified in Accessor.Access() above) and then let them fly and see how long it took them all to complete. Each of these tests was run with 10,000,000 cache accesses divided among the available Accessor instances. All times are in milliseconds.

1: Dictionary with Mutex Locking
2: ---------------------------------------------------
3: Accessors    Mostly Misses    Mostly Hits
4: 1            7916             3285
5: 10           8293             3481
6: 100          8799             3532
7: 1000         8815             3584
8:
9:
10: Dictionary with ReaderWriterLockSlim Locking
11: ---------------------------------------------------
12: Accessors    Mostly Misses    Mostly Hits
13: 1            8445             3624
14: 10           11002            4119
15: 100          11076            3992
16: 1000         14794            4861
17:
18:
19: Concurrent Dictionary
20: ---------------------------------------------------
21: Accessors    Mostly Misses    Mostly Hits
22: 1            17443            3726
23: 10           14181            1897
24: 100          15141            1994
25: 1000         17209            2128

The first test I did across the board is the Mostly Misses category. The mostly misses (more adds because data requested was not in the dictionary) shows an interesting trend. In both cases the Dictionary with the simple mutex lock is much faster, and the ConcurrentDictionary is the slowest solution. But this got me thinking, and a little research seemed to confirm it: maybe the ConcurrentDictionary is more optimized for concurrent "gets" than "adds". So since the ratio of misses to hits was 2 to 1, I decided to reverse that and see the results. So I tweaked the data so that the number of keys was much smaller than the number of iterations, to give me about a 2 to 1 ratio of hits to misses (twice as likely to already find the item in the cache than to need to add it). And indeed, here we see that the ConcurrentDictionary is faster than the standard Dictionary. I have a strong feeling that as the ratio of hits to misses gets higher and higher, these numbers get even better as well. This makes sense since the ConcurrentDictionary is read-optimized. Also note that I tried the tests with capacity and concurrency hints on the ConcurrentDictionary but saw very little improvement; I think this is largely because on the 10,000,000 hit test it quickly ramped up to the correct capacity and concurrency, and thus the impact was limited to the first few milliseconds of the run. So what does this tell us? Well, as in all things, ConcurrentDictionary is not a panacea. It won't solve all your woes and it shouldn't be the only Dictionary you ever use. So when should we use each?

Use System.Collections.Generic.Dictionary when:
You need a single-threaded Dictionary (no locking needed).
You need a multi-threaded Dictionary that is loaded only once at creation and never modified (no locking needed).
You need a multi-threaded Dictionary to store items where writes are far more prevalent than reads (locking needed).

And use System.Collections.Concurrent.ConcurrentDictionary when:
You need a multi-threaded Dictionary where reads are far more prevalent than writes.
You need to be able to iterate over the collection without locking it even if it's being modified.

Both Dictionaries have their strong suits; I have a feeling this is just one of those cases where you need to know from the design what you hope to use it for, and make your decision based on that criterion.
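One closing note of mine, not from the original post: the look-up-then-add pattern used by the Accessor can interleave, so two threads can both miss on the same key and both add it. That's harmless for a cache, but ConcurrentDictionary also offers GetOrAdd, which folds the lookup and the insert into a single call. A minimal sketch:

using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;

internal static class GetOrAddSketch
{
    private static readonly ConcurrentDictionary<int, string> _cache =
        new ConcurrentDictionary<int, string>();

    private static void Main()
    {
        // Hammer the cache from multiple threads with no explicit locking.
        Parallel.For(0, 1000000, i =>
        {
            int key = i % 1000; // small key range, so mostly hits

            // On a hit, GetOrAdd returns the stored value; on a miss it runs
            // the value factory and stores the result. The factory may run
            // more than once under contention, but only one value ever wins.
            string value = _cache.GetOrAdd(key, k => k.ToString());
        });

        Console.WriteLine("Distinct keys cached: {0}", _cache.Count);
    }
}

Because only one result is ever stored, callers always observe a single winning value even when the factory races.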

    Read the article

  • dasBlog

    - by Daniel Moth
    Some people like blogging on a site that is completely managed by someone else (e.g. http://wordpress.com/) and others, like me, prefer hosting their own blog at their own domain. In the latter case you need to decide what blog engine to install on your web space to power your blog. There are many free blog engines to choose from (e.g. the one from http://wordpress.org/). If, like me, you want to use a blog engine that is based on the .NET platform you have many choices including BlogEngine.NET, Subtext and the one I picked: dasBlog. In this post I'll describe the steps I took to get going with the open source dasBlog (home page, source page). A. Installing First I installed dasBlog on my local Windows 7 machine where I have IIS7 installed. To install dasBlog, I started by clicking the "Install" button on its web gallery page. After that I went through configuration, theming and adding content as described below. Once I was happy that everything was working correctly on the local machine, I set this up on a hosting service. I went for a Windows IIS7 shared hosting 3 month Economy plan from GoDaddy. The dasBlog site lists a bunch of other hosts. You can read the installation instructions for dasBlog, and with GoDaddy I just had to click one button since it is available as part of their quick-install apps. With GoDaddy I had a previewdns option that allowed me to play around and preview my site before going live. B. Configuring After it was installed (on local machine and/or hosting provider), I followed the obvious steps to create an admin user and logged in. This displays an admin navigation bar with the following options: 1. Navigator Links: I decided I was not going to use this feature. I manage links on the side of my blog manually elsewhere as part of the theme. So, I deleted every entry on this page and ignored it thereafter. 2. Blogroll: Ditto - same comment as for Navigator Links. 3. Content Filters: I did not delete (or add) these, but I did ensure both checkboxes are not checked. I.e. I am not using this feature now, but I may return to it in the future. 4. Activity: This is a read-only view of various statistics. So nothing to configure here, but useful to come back to for complementary statistics to whatever other statistical package you use (e.g. free stats as part of the hosting and I also use feedburner for syndication stats). 5. Cross-posting: I did not need that, so I turned it off via the Configuration Settings discussed next. 6. Configuration Settings: This is where the bulk of the configuration for the blog takes place and they are stored in a single XML file: Site.Config file. There are truly self-explanatory options to pick for Basic Settings, Services Settings and Services to Ping, Syndication Settings (this is where you link to your feedburner name if you have one) and Mail to Weblog Settings (I keep this turned off). There are also "Xml Storage System Settings" (I keep this turned off), "OpenId Settings" (I allow OpenID commenters), "Spammer Settings" (Enable captcha, never show email addresses) and "Comment settings" (Enable comments, don't allow on older posts, don't allow html). There are also Appearance Settings (I checked the "Use Post Title for Permalink", replaced spaces with hyphen and unchecked the "Use Unique Title"). Finally, there are also Notification Settings, but they are a bit of hit and miss in my case, in that I don’t always get the emails (still investigating this). C. 
Adding Content You can add content via the "Add Entry" link on the admin navigation bar or by configuring the "Mail to Weblog" settings and sending email or, do what I've started doing, use Live Writer (also the team has a blog). Another way to add content is programmatically if, for example, you are migrating content from another blog (and I'll cover that in separate post sharing the code). What you should know is that all blog content (posts and comments) live in XML files in a folder called "content" under your dasBlog installation. D. Theming There is a very good guide about themes for dasBlog, there is also a similar guide with screenshots (scroll down to "So how do I create a theme") and the dasBlog macro reference. When you install dasBlog, there are many themes available; each theme is in its own folder (representing the folder name) under the themes folder. You may have noticed that you can switch between these via the "Appearance Settings" described above (look for the combobox after the Default Theme label). I created my own theme by copy-pasting an existing theme folder, renaming it and then switching to it as the default. I then opened the folder in Visual Studio and hacked around the HTML in the 3 files (itemTemplate, homeTemplate and dayTemplate). These files have a blogtemplate file extension, which I temporarily renamed to HTML as I was editing them. There is no more advice I can offer here as this is a matter of taste and the aforementioned links is all I used. Personally, I had salvaged the CSS (and structure) from my previous blog and wanted to make this one match it as closely as possible - I think I have succeeded. E. If you run into any issue with dasBlog... ...use your favorite search engine to find answers. Many bloggers have been using this engine for a while and have documented issues and workarounds over time. One such example is ScottHa's dasBlog category; another example is therightstuff where I "borrowed" the idea/macro for the outlook-style on-page navigation. If you don't find what you want through searching, try posting a question to the forums. Comments about this post welcome at the original blog.
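One footnote on the storage model mentioned in section C above: since all posts and comments live as XML files under the content folder, you can take a quick inventory from outside dasBlog. A minimal sketch, where the path is hypothetical and the *.dayentry.xml pattern is my assumption about dasBlog's default file naming - verify it against your own content folder before relying on it:

using System;
using System.IO;

internal static class DasBlogContentSketch
{
    private static void Main()
    {
        // Hypothetical path to a dasBlog installation's content folder.
        string contentFolder = @"C:\inetpub\wwwroot\myblog\content";

        // dasBlog groups posts into one XML file per day; counting those
        // files gives a rough inventory of the blog (pattern assumed above).
        string[] dayFiles = Directory.GetFiles(contentFolder, "*.dayentry.xml");
        Console.WriteLine("Days with at least one post: {0}", dayFiles.Length);
    }
}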

    Read the article

  • CodePlex Daily Summary for Monday, June 14, 2010

CodePlex Daily Summary for Monday, June 14, 2010

New Projects
BD File Hash: BD File Hash is a convenient file hash and hash compare tool for Windows which currently works with MD5, SHA-1, and SHA-256 algorithms.
FileScan: This is an application that searches through a drive or directory structure for files matching a filter. This project was converted from VB to ...
genesis9: genesis9
HeinanOS: HeinanOS is an operating system developed mainly in C++. HeinanOS is a light OS (1.44 MB image) with a lot of capabilities and many more are being ...
MediaBrowserWS - Creates a Web Service for the popular MediaBrowser plugin: Creates a web service in Media Center for accessing your MediaBrowser collection. Allows for external devices (tablets/phones/laptops) to access a ...
MME: New Edition of Managed Menu Extensions for Visual Studio 2010. The main goal of "MME" is to provide easy access to adding Right Click menus in the ...
MVMMapper: Generate the ViewModel and its mapping to the Model when implementing MVVM in .NET. Developed using T4 templates. Current version supports Silver...
ProjectArDotNet: If I catch you I'll break you! If I catch you I'll nail you, I don't care if you're a minor!
Scriptagility for DotNetNuke: Scriptagility is a DotNetNuke module for Javascript developers. This module provides dynamic client scripting infrastructure for developing javascr...
simpleLinux Distro: SimpleLinux is a Linux distribution that is easy to use. Simple Linux website: http://simplelinux.tk
Tag Cloud Control for asp.net: Tag Cloud Control for asp.net allows the user to display the most important keywords in a tag cloud. Each Tag has its own navigation url to...
thefreeimdb: fsadie qw
UppityUp: UppityUp is a simple and light-weight tray application which monitors a remote server and shows a notification when it comes online. This is usefu...
Vivid3D 2 - DirectX 10 3D ToolKit: The sequel to my first ever engine, written several years ago. It is not based on it in any way.
VSIDev: VSI Dev
XTQXK_WORK: Actionscript 3.0
东坡博客: This is an ASP.NET MVC 2 blog.

New Releases
.NET Extensions - Extension Methods Library: Release 2010.08: Added extension methods for Bitmap manipulation (scaling for now): - Bitmap.ScaleToSize() - Bitmap.ScaleToSizeProportional() - Bitmap.ScaleProport...
Black Falcon Software's Database Data-Access-Layers: "SQLHELPER", "ORAHELPER" - Handling Binary Data: See attached document...
BTech Networking Library: BTech Networking Library: Same as previous, just new namespace; extended networking coming soon!!!
Community Forums NNTP bridge: Community Forums NNTP Bridge V37: Release of the Community Forums NNTP Bridge to access the social and answers MS forums with a single, open source NNTP bridge. This release has ad...
Generic Entity Model 2: GEM2 build 54383: This is the second BETA release of GEM2! Please see source code change sets for updates! The following implementation is not included in this release: My...
Hades: Projet Hadès - Official Demo - Version 0.1.0 Beta: ---------------------------------------------------------------------------- - Projet Hadès - Official Demo - Version 0.1.0 Beta ------------------...
HeinanOS: HeinanOS M1 Source Code: You can download HeinanOS M1 Source Code and contribute to HeinanOS development! Be aware that you should not use this code for your own systems! ...
HeinanOS: Milestone 1: This is the first major release for HeinanOS 1.0. Please note this is a PRE-RELEASE! This release includes the following features: -Bootable DOS-...
HKGolden Express: HKGoldenExpress (Build 201006131900): New features: (None) Bug fix: Incorrect message submit date of message/replies. (Note: Showing message submit date is enabled since Build 20100...
HKGolden Express: HKGoldenExpress (Build 201006140110): New features: (None) Bug fix: (None) Improvements: (None) Other changes: Set time zone of message date as Hong Kong. Adjusted the format of messa...
MediaCoder.NET: MediaCoder.NET v1.0 Beta 1.5: Installer file for MediaCoder.NET v1.0 beta 1.5. Now converts multiple files.
MME: First release: Features of this release: 1. One installer, MME.msi. However you can also install MMEMenuManagerSetup.vsix which installs a project template that e...
MSBuild Launch Pad (mPad): 1.1 Beta 1: Platform selection box is added.
MVMMapper: MVMMapper Release v 1.0.1: This release has no downloadable documentation. Please use the Documentation section to get started.
NginxTray: NginxTray 0.7 RC2: NginxTray 0.7 RC2
PowerAuras: PowerAuras-3.0.0K-beta3: New Auras: Item Name, Equipment Slot, Tracking. Changes from beta1: 5 new aura textures; fixed Tracking bug; added graphical equipment slot sele...
PowerAuras: PowerAuras-3.0.0K-beta4: New Auras: Item Name, Equipment Slot, Tracking. Changes from beta1: 5 new aura textures; fixed Tracking bug; added graphical equipment slot sele...
Scriptagility for DotNetNuke: Scriptagility 1.0 (Beta): Initial public release; please evaluate and give feedback.
SharpDevelop: SharpDevelop 4.0 Beta 1: Release notes: http://community.sharpdevelop.net/forums/t/11388.aspx
simpleLinux Distro: Project X3: This is an example of download for simpleLinux
SOAPI - StackOverflow API Parser/Wrapper Generator: SOAPI Beta 3: The SOAPI Beta 3 download will be made available later today when the initial documentation is complete. The previously available Beta 1 download h...
Sofa: Initial release V1.0: This is the first release of Sofa. As it is made of code previously used, as we tested it is a stable release. But bugs are always possible,...
Tag Cloud Control for asp.net: Tag Cloud Control for asp.net: Tag Cloud Control for asp.net allows the user to display the most important keywords in a tag cloud. Each Tag has its own navigation url to...
UppityUp: UppityUp v0.1: First functional version, supports monitoring availability by ping (ICMP) requests. Fit for general use. Consists of one standalone .exe file - no...
VCC: Latest build, v2.1.30613.0: Automatic drop of latest build
WindStyle ExifInfo for Windows Live Writer: 1.1.0.0: Add: Multiple languages (English and Simplified Chinese); Add: Insert multiple files; Fix: Error when inserting pictures without Exif info; Update: Icon...
Work Recorder - Hold on own time!: WorkRecorder 1.2: +Add a whole day chart
XsltDb - DotNetNuke Module Builder: 01.01.24: Syntax highlighting delivered! New samples for RadControls. On a single page you can find RadTreeView, RadRating, RadChart, RadFormDecorator, RadEdito...
xUnit.net Contrib: xunitcontrib 0.4 (ReSharper 5.0 RTM + dotCover): xunitcontrib release 0.4 (ReSharper runner). This release provides a test runner plugin for ReSharper 5.0, 4.5 and 4.1, targeting all versions of x...

Most Popular Projects
Community Forums NNTP bridge
RIA Services Essentials
NeatUpload
Bxf (Basic XAML Framework)
Agile Personal Development Methodology
.NET Transactional File Manager
SOLID by example
ASP.NET MVC Time Planner
WEI Share
Siverlight Project

Most Active Projects
jQuery Library for SharePoint Web Services
patterns & practices – Enterprise Library
NB_Store - Free DotNetNuke Ecommerce Catalog Module
Rhyduino - Arduino and Managed Code
Community Forums NNTP bridge
Cassandraemon
BlogEngine.NET
Lightweight Fluent Workflow
MediaCoder.NET
Andrew's XNA Helpers

    Read the article

  • Win a place at a SQL Server Masterclass with Kimberly Tripp and Paul Randal

    - by Testas
The top things YOU need to know about managing SQL Server - in one place, on one day - presented by two of the best SQL Server industry trainers! And you could be there courtesy of the UK SQL Server User Group and SQL Server Magazine! This week the UK SQL Server User Group will provide you with details of how to win a place at this must-see seminar. You can also register for the seminar yourself at: www.regonline.co.uk/kimtrippsql

More information about the seminar

Where: Radisson Edwardian Heathrow Hotel, London
When: Thursday 17th June 2010

This one-day MasterClass will focus on many of the top issues companies face when implementing and maintaining a SQL Server-based solution. In the case where a company has no dedicated DBA, IT managers sometimes struggle to keep the data tier performing well and the data available. This can be especially troublesome when the development team is unfamiliar with the effect application design choices have on database performance. The Microsoft SQL Server MasterClass 2010 is presented by Paul S. Randal and Kimberly L. Tripp, two of the most experienced and respected people in the SQL Server world. Together they have over 30 years combined experience working with SQL Server in the field, and on the SQL Server product team itself. This is a unique opportunity to hear them present at a UK event which will:

· Debunk many of the ingrained misconceptions around SQL Server's behaviour
· Show you disaster recovery techniques critical to preserving your company's life-blood - the data
· Explain how a common application design pattern can wreak havoc in the database
· Walk through the top-10 points to follow around operations and maintenance for a well-performing and available data tier!

Please Note: Agenda may be subject to change

Sessions Abstracts

KEYNOTE: Bridging the Gap Between Development and Production

Applications are commonly developed with little regard for how design choices will affect performance in production. This is often because developers don't realize the implications of their design on how SQL Server will be able to handle a high workload (e.g. blocking, fragmentation) and/or because there's no full-time trained DBA that can recognize production problems and help educate developers. The keynote sets the stage for the rest of the day, discussing some of the issues that can arise, explaining how some can be avoided, and highlighting some of the features in SQL 2008 that can help developers and DBAs make better use of SQL Server, and troubleshoot when things go wrong.

SESSION ONE: SQL Server Mythbusters

It's amazing how many myths and misconceptions have sprung up and persisted over the years about SQL Server - after many years helping people out on forums, newsgroups, and customer engagements, Paul and Kimberly have heard it all. Are there really non-logged operations? Can interrupting shrinks or rebuilds cause corruption? Can you override the server's MAXDOP setting? Will the server always do a table-scan to get a row count? Many myths lead to poor design choices and inappropriate maintenance practices, so these are just a few of many, many myths that Paul and Kimberly will debunk in this fast-paced session on how SQL Server operates and should be managed and maintained.

SESSION TWO: Database Recovery Techniques Demo-Fest

Even if a company has a disaster recovery strategy in place, they need to practice to make sure that the plan will work when a disaster does strike.
In this fast-paced demo session Paul and Kimberly will repeatedly do nasty things to databases and then show how they are recovered - demonstrating many techniques that can be used in production for disaster recovery. Not for the faint-hearted!

SESSION THREE: GUIDs: Use, Abuse, and How To Move Forward

Since the addition of the GUID (Microsoft's implementation of the UUID), my life as a consultant and "tuner" has been busy. I've seen databases designed with GUID keys run fairly well with small workloads but completely fall over and fail because they just cannot scale. And, I know why GUIDs are chosen - it simplifies the handling of parent/child rows in your batches so you can reduce round-trips or avoid dealing with identity values. And, yes, sometimes it's even for distributed databases and/or security that GUIDs are chosen. I'm not entirely against ever using a GUID, but overusing and abusing GUIDs just has to be stopped! Please, please, please let me give you better solutions and explanations on how to deal with your parent/child rows, round-trips and clustering keys!

SESSION FOUR: Essential Database Maintenance

In this session, Paul and Kimberly will run you through their top-ten database maintenance recommendations, with a lot of tips and tricks along the way. These are distilled from almost 30 years combined experience working with SQL Server customers and are geared towards making your databases more performant, more available, and more easily managed (to save you time!). Everything in this session will be practical and applicable to a wide variety of databases. Topics covered include: backups, shrinks, fragmentation, statistics, and much more! Focus will be on 2005, but we'll explain some of the key differences for 2000 and 2008 as well.

Speaker Biographies

Paul S. Randal and Kimberly L. Tripp

Paul and Kimberly are a husband-and-wife team who own and run SQLskills.com, a world-renowned SQL Server consulting and training company. They are both SQL Server MVPs and Microsoft Regional Directors, with over 30 years of combined experience on SQL Server. Paul worked on the SQL Server team for nine years in development and management roles, writing many of the DBCC commands, and ultimately with responsibility for the core Storage Engine for SQL Server 2008. Paul writes extensively on his blog (SQLskills.com/blogs/Paul) and for TechNet Magazine, for which he is also a Contributing Editor. Kimberly worked on the SQL Server team in the early 1990s as a tester and writer before leaving to found SQLskills and embrace her passion for teaching and consulting. Kimberly has been a staple at worldwide conferences since she first presented at TechEd in 1996, and she blogs at SQLskills.com/blogs/Kimberly. They have written Microsoft whitepapers and books for SQL Server 2000, 2005 and 2008, and are regular, top-rated presenters worldwide on database maintenance, high availability, disaster recovery, performance tuning, and SQL Server internals. Together they teach the SQL MCM certification and teach throughout Microsoft. In their spare time, they like to find frogfish in remote corners of the world.

    Read the article

  • RSS feeds in Orchard

    - by Bertrand Le Roy
When we added RSS to Orchard, we wanted to make it easy for any module to expose any content as a feed. We also wanted the rendering of the feed to be handled by Orchard in order to minimize the amount of work from the module developer. A typical example of content exposed this way is, of course, blog feeds. We have an IFeedManager interface for which you can get the built-in implementation through dependency injection. Look at the BlogController constructor for an example:

public BlogController(
    IOrchardServices services,
    IBlogService blogService,
    IBlogSlugConstraint blogSlugConstraint,
    IFeedManager feedManager,
    RouteCollection routeCollection) {

If you look a little further in that same controller, in the Item action, you'll see a call to the Register method of the feed manager:

_feedManager.Register(blog);

This in reality is a call into an extension method that is specialized for blogs, but we could have made the two calls to the actual generic Register directly in the action instead; that is just an implementation detail:

feedManager.Register(blog.Name, "rss", new RouteValueDictionary { { "containerid", blog.Id } });
feedManager.Register(blog.Name + " - Comments", "rss", new RouteValueDictionary { { "commentedoncontainer", blog.Id } });

What those two calls effectively do is register two feeds: one for the blog itself and one for the comments on the blog. For each call, the name of the feed is provided, then we have the type of feed ("rss") and some values to be injected into the generic RSS route that will be used later to route the feed to the right providers. This is all you have to do to expose a new feed. If you're only interested in exposing feeds, you can stop right there. If on the other hand you want to know what happens after that under the hood, carry on. What happens after that is that the feed manager will take care of formatting the link tag for the feed (see FeedManager.GetRegisteredLinks). The GetRegisteredLinks method itself will be called from a specialized filter, FeedFilter. FeedFilter is an MVC filter and the event we're interested in hooking into is OnResultExecuting, which happens after the controller action has returned an ActionResult and just before MVC executes that action result. In other words, our feed registration has already been called but the view is not yet rendered. Here's the code for OnResultExecuting:

model.Zones.AddAction("head:after", html => html.ViewContext.Writer.Write(
    _feedManager.GetRegisteredLinks(html)));

This is another piece of code whose execution is deferred. It is saying that whenever the time comes to render the "head" zone, this code should be called right after. The code itself is rendering the link tags. As a result of all that, here's what can be found in an Orchard blog's head section:

<link rel="alternate" type="application/rss+xml"
    title="Tales from the Evil Empire"
    href="/rss?containerid=5" />
<link rel="alternate" type="application/rss+xml"
    title="Tales from the Evil Empire - Comments"
    href="/rss?commentedoncontainer=5" />

The generic action that these two feeds point to is Index on FeedController. That controller has three important dependencies: an IFeedBuilderProvider, an IFeedQueryProvider and an IFeedItemBuilder. Different implementations of these interfaces can provide different formats of feeds, such as RSS and Atom. The Match method enables each of the competing providers to provide a priority for themselves based on arbitrary criteria that can be found on the FeedContext.
This means that a provider can be selected based not only on the desired format, but also on the nature of the objects being exposed as a feed or on something even more arbitrary such as the destination device (you could imagine for example giving shorter text only excerpts of posts on mobile devices, and full HTML on desktop). The key here is extensibility and dynamic competition and collaboration from unknown and loosely coupled parts. You’ll find this pattern pretty much everywhere in the Orchard architecture. The RssFeedBuilder implementation of IFeedBuilderProvider is also a regular controller with a Process action that builds a RssResult, which is itself a thin ActionResult wrapper around an XDocument. Let’s get back to the FeedController’s Index action. After having called into each known feed builder to get its priority on the currently requested feed, it will select the one with the highest priority. The next thing it needs to do is to actually fetch the data for the feed. This again is a collaborative effort from a priori unknown providers, the implementations of IFeedQueryProvider. There are several implementations by default in Orchard, the choice of which is again done through a Match method. ContainerFeedQuery for example chimes in when a “containerid” parameter is found in the context (see URL in the link tag above): public FeedQueryMatch Match(FeedContext context) { var containerIdValue = context.ValueProvider.GetValue("containerid"); if (containerIdValue == null) return null; return new FeedQueryMatch { FeedQuery = this, Priority = -5 }; } The actual work is done in the Execute method, which finds the right container content item in the Orchard database and adds elements for each of them. In other words, the feed query provider knows how to retrieve the list of content items to add to the feed. The last step is to translate each of the content items into feed entries, which is done by implementations of IFeedItemBuilder. There is no Match method this time. Instead, all providers are called with the collection of items (or more accurately with the FeedContext, but this contains the list of items, which is what’s relevant in most cases). Each provider can then choose to pick those items that it knows how to treat and transform them into the format requested. This enables the construction of heterogeneous feeds that expose content items of various types into a single feed. That will be extremely important when you’ll want to expose a single feed for all your site. So here are feeds in Orchard in a nutshell. The main point here is that there is a fair number of components involved, with some complexity in implementation in order to allow for extreme flexibility, but the part that you use to expose a new feed is extremely simple and light: declare that you want your content exposed as a feed and you’re done. There are cases where you’ll have to dive in and provide new implementations for some or all of the interfaces involved, but that requirement will only arise as needed. For example, you might need to create a new feed item builder to include your custom content type but that effort will be extremely focused on the specialized task at hand. The rest of the system won’t need to change. So what do you think?
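To recap the module author's side of the contract, here is a minimal sketch of a controller exposing its own feed through IFeedManager. The EventsController, the hard-coded container id, and the Orchard.Core.Feeds namespace are assumptions of mine for illustration; the Register call itself is the same one shown above.

using System.Web.Mvc;
using System.Web.Routing;
using Orchard.Core.Feeds; // assumed location of IFeedManager

public class EventsController : Controller
{
    private readonly IFeedManager _feedManager;

    // Orchard injects the built-in IFeedManager implementation,
    // exactly as it does for BlogController.
    public EventsController(IFeedManager feedManager)
    {
        _feedManager = feedManager;
    }

    public ActionResult Index()
    {
        // Declare the feed; FeedFilter renders the <link> tag into the head
        // zone, and the generic FeedController serves /rss?containerid=42.
        _feedManager.Register("Upcoming events", "rss",
            new RouteValueDictionary { { "containerid", 42 } });

        return View();
    }
}

Registering is declarative: the controller never builds XML itself; it simply names the feed and supplies the route values that later select the right query and item builders.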

    Read the article

  • Blogging tips for SQL Server professionals

    - by jamiet
For some time now I have been intending to put some material together relating my blogging experiences since I began blogging in 2004, and that led to me submitting a session for SQLBits recently where I intended to do just that. That didn't get enough votes to allow me to present, however, so instead I resolved to write a blog post about it, and Simon Sabin's recent post Blogging – how do you do it? has prompted me to get around to completing it. So, here I present a compendium of tips that I've picked up from authoring a fair few blog posts over the past 6 years.

Feedburner

Feedburner.com is a service that can consume your blog's default RSS feed and provide another, replacement, feed that has exactly the same content. You can then supply that replacement feed on your blog site for other people to consume in their RSS readers. Why would you want to do this? Well, two reasons actually:

It makes your blog portable. If you ever want to move your blog to a different URL you don't have to tell your subscribers to move to a different feed. The Feedburner feed is a pointer to your blog content rather than being a copy of it.
Feedburner will collect stats telling you how many people are subscribed to your feed, which RSS readers they use, stuff like that.

Here's a sample screenshot for http://sqlblog.com/blogs/jamie_thomson/: It also tells you what your most viewed posts are: Web stats like these are notoriously inaccurate, but then again the method of measurement here is not important; what IS important is that it gives you a trustworthy ranking of your blog posts, and (in my opinion) knowing which are your most popular posts is more important than knowing exactly how many views each post has had. This is just the tip of the iceberg of what Feedburner provides and I recommend every new blogger try it!

Monitor subscribers using Google Reader

If for some reason Feedburner is not to your taste or (more likely) you already have an established RSS feed that you do not want to change, then Google provide another way in which you can monitor your readership, in the shape of their online RSS reader, Google Reader. It provides, for every RSS feed, a collection of stats including the number of Google Reader users that have subscribed to that RSS feed. This is really valuable information and in fact I have been recording this statistic for mine and a number of other blogs for a few years now, and as such I can produce the following chart that indicates how readership is trending for those blogs over time: [Good news for my fellow SQLBlog bloggers.] As Stephen Few readily points out, it's not the numbers that are important but the trend.

Search Engine Optimisation (SEO)

SEO (or "How do I get my blog to show up in Google") is a massive area of expertise which I don't want (and am unable) to cover in much detail here, but there are some simple rules of thumb that will help:

Tags – If your blog engine offers the ability to add tags to your blog post, use them. Invariably those tags go into the meta section of the page HTML and search engines lap that stuff up. For example, from my recent post Microsoft publish Visual Studio 2010 Database Project Guidance:
Title – Search engines take notice of web page titles as well, so make them specific and descriptive (e.g. "Configuring dtsConfig connection strings") rather than esoteric and meaningless in a vain attempt to be humorous (e.g. "Last night a DJ saved my ETL batch")!
Title(2) – Make your title even more search engine friendly by mentioning high-level subject areas, not dissimilar to Twitter hashtags. For example, if you look at all of my posts related to SSIS you will notice that nearly all contain the word "SSIS" in the title, even if I had to shoehorn it in there by putting it in square brackets or similar. Another tip: if you ARE putting words into your titles in this artificial manner then put them at the end so that they're not that prominent in search engine results; they're there for the search engines to consume, not for human beings.
Images – Always add titles and alternate text (ALT attribute) to images in your blog post. If you use Windows 7 or Windows Vista then Live Writer (which Simon recommended) makes this easy for you.
Headings – If you want to highlight section headings use heading tags (e.g. <H1>, <H2>, <H3> etc…) rather than just formatting the text appropriately – again, Live Writer makes this easy. These tags give your blog posts structure that is understood by search engines and RSS readers alike. (I believe it makes them more amenable to CSS as well – though that's not something I know too much about.) If you check the HTML source for the blog post you're reading right now you'll be able to scan through and see where I have used heading tags.

Microsoft provide a free tool called the SEO Toolkit that will analyse your blog site (for free) and tell you what things you should change to improve SEO. Go read more and download for free at Search Engine Optimization Toolkit. Did I mention that it was free?

Miscellaneous Tips

If you are including code in your blog post then ensure it is formatted correctly. Use SQL Server Central's T-SQL prettifier for formatting T-SQL code.
Use images and videos. Personally speaking there's nothing I like less when reading a blog than paragraph after paragraph of text. Images make your blog more appealing, which means people are more likely to read what you have written.
Be original. Don't plagiarise other people's content and don't simply rewrite the contents of Books Online.
Every time you publish a blog post tweet a link to it. Include hashtags in your tweet that are more likely to grab people's attention.

That's probably enough for now - I hope this blog post proves useful to someone out there. If you would appreciate a related session at a forthcoming SQLBits conference then please let me know. This will likely be my last blog post for 2010, so I would like to take this opportunity to thank everyone who has commented on, linked to or read any of my blog posts in that time. 2011 is shaping up to be a very interesting year for SQL Server observers with the impending release of SQL Server code-named Denali, and I promise I'll have lots more content on that as the year progresses. Happy New Year. @Jamiet

    Read the article

  • SQL Server Master class winner

    - by Testas
The winner of the SQL Server MasterClass competition, courtesy of the UK SQL Server User Group and SQL Server Magazine:

Steve Hindmarsh

There is still time to register for the seminar yourself at: www.regonline.co.uk/kimtrippsql

More information about the seminar

Where: Radisson Edwardian Heathrow Hotel, London
When: Thursday 17th June 2010

This one-day MasterClass will focus on many of the top issues companies face when implementing and maintaining a SQL Server-based solution. In the case where a company has no dedicated DBA, IT managers sometimes struggle to keep the data tier performing well and the data available. This can be especially troublesome when the development team is unfamiliar with the effect application design choices have on database performance. The Microsoft SQL Server MasterClass 2010 is presented by Paul S. Randal and Kimberly L. Tripp, two of the most experienced and respected people in the SQL Server world. Together they have over 30 years combined experience working with SQL Server in the field, and on the SQL Server product team itself. This is a unique opportunity to hear them present at a UK event which will:

Debunk many of the ingrained misconceptions around SQL Server's behaviour
Show you disaster recovery techniques critical to preserving your company's life-blood - the data
Explain how a common application design pattern can wreak havoc in the database
Walk through the top-10 points to follow around operations and maintenance for a well-performing and available data tier!

Please Note: Agenda may be subject to change

Sessions Abstracts

KEYNOTE: Bridging the Gap Between Development and Production

Applications are commonly developed with little regard for how design choices will affect performance in production. This is often because developers don't realize the implications of their design on how SQL Server will be able to handle a high workload (e.g. blocking, fragmentation) and/or because there's no full-time trained DBA that can recognize production problems and help educate developers. The keynote sets the stage for the rest of the day, discussing some of the issues that can arise, explaining how some can be avoided, and highlighting some of the features in SQL 2008 that can help developers and DBAs make better use of SQL Server, and troubleshoot when things go wrong.

SESSION ONE: SQL Server Mythbusters

It's amazing how many myths and misconceptions have sprung up and persisted over the years about SQL Server - after many years helping people out on forums, newsgroups, and customer engagements, Paul and Kimberly have heard it all. Are there really non-logged operations? Can interrupting shrinks or rebuilds cause corruption? Can you override the server's MAXDOP setting? Will the server always do a table-scan to get a row count? Many myths lead to poor design choices and inappropriate maintenance practices, so these are just a few of many, many myths that Paul and Kimberly will debunk in this fast-paced session on how SQL Server operates and should be managed and maintained.

SESSION TWO: Database Recovery Techniques Demo-Fest

Even if a company has a disaster recovery strategy in place, they need to practice to make sure that the plan will work when a disaster does strike. In this fast-paced demo session Paul and Kimberly will repeatedly do nasty things to databases and then show how they are recovered - demonstrating many techniques that can be used in production for disaster recovery. Not for the faint-hearted!
SESSION THREE: GUIDs: Use, Abuse, and How To Move Forward

Since the addition of the GUID (Microsoft's implementation of the UUID), my life as a consultant and "tuner" has been busy. I've seen databases designed with GUID keys run fairly well with small workloads but completely fall over and fail because they just cannot scale. And, I know why GUIDs are chosen - it simplifies the handling of parent/child rows in your batches so you can reduce round-trips or avoid dealing with identity values. And, yes, sometimes it's even for distributed databases and/or security that GUIDs are chosen. I'm not entirely against ever using a GUID, but overusing and abusing GUIDs just has to be stopped! Please, please, please let me give you better solutions and explanations on how to deal with your parent/child rows, round-trips and clustering keys!

SESSION FOUR: Essential Database Maintenance

In this session, Paul and Kimberly will run you through their top-ten database maintenance recommendations, with a lot of tips and tricks along the way. These are distilled from almost 30 years combined experience working with SQL Server customers and are geared towards making your databases more performant, more available, and more easily managed (to save you time!). Everything in this session will be practical and applicable to a wide variety of databases. Topics covered include: backups, shrinks, fragmentation, statistics, and much more! Focus will be on 2005, but we'll explain some of the key differences for 2000 and 2008 as well.

Speaker Biographies

Paul S. Randal and Kimberly L. Tripp

Paul and Kimberly are a husband-and-wife team who own and run SQLskills.com, a world-renowned SQL Server consulting and training company. They are both SQL Server MVPs and Microsoft Regional Directors, with over 30 years of combined experience on SQL Server. Paul worked on the SQL Server team for nine years in development and management roles, writing many of the DBCC commands, and ultimately with responsibility for the core Storage Engine for SQL Server 2008. Paul writes extensively on his blog (SQLskills.com/blogs/Paul) and for TechNet Magazine, for which he is also a Contributing Editor. Kimberly worked on the SQL Server team in the early 1990s as a tester and writer before leaving to found SQLskills and embrace her passion for teaching and consulting. Kimberly has been a staple at worldwide conferences since she first presented at TechEd in 1996, and she blogs at SQLskills.com/blogs/Kimberly. They have written Microsoft whitepapers and books for SQL Server 2000, 2005 and 2008, and are regular, top-rated presenters worldwide on database maintenance, high availability, disaster recovery, performance tuning, and SQL Server internals. Together they teach the SQL MCM certification and teach throughout Microsoft. In their spare time, they like to find frogfish in remote corners of the world.

Speaker Testimonials

"To call them good trainers is an epic understatement. They know how to deliver technical material in ways that illustrate it well. I had to stop Paul at one point and ask him how long it took to build a particular slide because the animations were so good at conveying a hard-to-describe process."

"These are not beginner presenters, and they put an extreme amount of preparation and attention to detail into everything that they do. Completely, utterly professional."

"When it comes to the instructors themselves, Kimberly and Paul simply have no equal.
Not only are they both ultimate authorities, but they have endless enthusiasm about the material, and spot on delivery. If either ever got tired they never showed it, even after going all day and all week. We witnessed countless demos over the course of the week, some extremely involved, multi-step processes, and I can’t recall one that didn’t go the way it was supposed to." "You might think that with this extreme level of skill comes extreme levels of egotism and lack of patience. Nothing could be further from the truth. ... They simply know how to teach, and are approachable, humble, and patient." "The experience Paul and Kimberly have had with real live customers yields a lot more information and things to watch out for than you'd ever get from documentation alone." “Kimberly, I just wanted to send you an email to let you know how awesome you are! I have applied some of your indexing strategies to our website’s homegrown CMS and we are experiencing a significant performance increase. WOW....amazing tips delivered in an exciting way!  Thanks again” 

    Read the article

  • Brighton Rocks: UA Europe 2011

    - by ultan o'broin
User Assistance Europe 2011 was held in Brighton, UK. Having seen Quadrophenia a dozen times, I just had to go along (OK, I wanted to talk about messages in enterprise applications). Sadly, it rained a lot, though that was still eminently more tolerable than being stuck home in Dublin during Bloomsday. So, here are my somewhat selective highlights and observations from the conference, massively skewed towards my own interests, as usual. Enjoyed Leah Guren's (Cow TC) great start 'keynote' on the Cultural Dimensions of Software Help Usage. Starting out by revisiting Hofstede's and Hall's work on culture (how many times have I done this for Multilingual magazine?) and then Nielsen's findings on age as an indicator of performance, Leah showed how it is the expertise of the user that user assistance (UA) needs to be designed for (especially for high-end users), with some considerations made for age, while the gender and culture of users are not major factors. Help also needs to be contextual and concise, embedded close to the action. That users are saying things like "If I want help on Office, I go to Google" isn't all that profound at this stage, but it is always worth reiterating how search can be optimized to return better results for users. Interestingly, regardless of user education level, the issue of information quality - hinging on the lynchpin of terminology reflecting that of the user - is critical. Major takeaway for me there. Matthew Ellison's sessions on embedded help and demos were also impressive. Embedded help that is concise and contextual is definitely a powerful UX enabler, and I'm pleased to say that in Oracle Fusion Applications we have embraced the concept fully. Matthew also mentioned in his session about successful software demos that the principle of modality with demos is a must. Look no further than the See It!, Try It!, Know It!, and Do It! modes of Oracle User Productivity Kit demos, for example. I also found some key takeaways in the presentation by Marie-Louise Flacke on notes and warnings. Here, legal considerations seemed to take precedence over providing any real information to users. I was delighted when Marie-Louise called out the Oracle JDeveloper documentation as an exemplar of how to use notes and instructions instead of trying to scare the bejaysus out of people and not providing them with any real information they'd find useful instead. My own session on designing messages for enterprise applications was well attended. Know your user profiles (remember, user expertise is the king maker for UA, so write for each audience involved), how users really work, the required application business and UI rules, what your application technology supports, and how messages integrate with the enterprise help desk and support policies, and you will go much further than relying solely on the guideline of "writing messages in plain language". And, remember the value in warnings and confirmation messages too, and how you can use them smartly. I hope y'all got something from my presentation and from my answers to questions afterwards. Ellis Pratt stole the show with his presentation on applying game theory to software UA, using plenty of colorful, relevant examples (check out the Atlassian and Dropbox approaches, for example), and striking just the right balance between theory and practice.
Completely agree that the approach to take here is not to make UA itself a game, but to invoke UA as part of a bigger game dynamic (time-to-task completion, personal and communal goals, personal achievement and status, and so on). Sure, there are gotchas and limitations to gamification, and we need to do more research. However, we’ll hear a lot more about this subject in the coming years, particularly in the enterprise space. I hope.

I also heard good things about the different sessions on DITA usage (including one by Sonja Fuga that clearly opens the door for major innovation in the community content space using WordPress), the progressive disclosure of information (Cerys Willoughby), an overview of controlled language (or “information quality”, as I like to position it) solutions and rationale by Dave Gash, and others. I also spent time chatting with Mike Hamilton of MadCap Software, who showed me a cool demo of their Flare product and the Lingo translation solution. I liked the idea of their licensing model for workers-on-the-go; that’s smart UX-awareness in itself. I also chatted with Julian Murfitt of Mekon about uptake of DITA in the enterprise space.

In all, it’s worth attending UA Europe. I was surprised, however, not to see conference topics about mobile UA, community conversation and content, or search in its own right. These are unstoppable forces now, and the latter is pretty central to providing assistance to all but the most irredentist of hard-copy fetishists or advanced technical or functional users working away on the back end of applications and systems. I only saw one iPad, too (says the guy who carries three laptops). Tweeting was pretty much nonexistent during the event, so no community energy there. Perhaps all this can be addressed next year. I would love to see the next UA Europe event come to Dublin (despite Bloomsday, it’s not a bad place, really) now that hotels are so cheap and all.

So, what is my overall impression of the state of user assistance in Europe? Clearly, there are still many people in the industry who feel there is something broken with the traditional forms of user assistance (particularly printed doc) and that something needs to be done about it. I would suggest they move on and try to embrace change instead. Many others see new possibilities offered by UX and technology, as well as the reality of online user behavior in an increasingly connected world, and that is encouraging. Such thought leaders need to be listened to. As Ellis Pratt says in his great book, Trends in Technical Communication - Rethinking Help: “To stay relevant means taking a new perspective on the role (of technical writer), and delivering “products” over and above the traditional manual and online Help file... there are a number of new trends in this field - some complementary, some conflicting. Whatever trends emerge as the norm, it’s likely the status quo will change.” It already has, IMO.

I hear similar debates in the professional translation world about the onset of translation crowdsourcing (the Facebook model) and machine translation (trust me, that battle is over). Neither of these initiatives has put anyone out of a job and probably won’t, though the nature of the work might change. If anything, such innovations have increased the overall need for professional translators as user expectations rise, new audiences emerge, and organizations need to collate and curate user-generated content, combining it with their own.
Perhaps user assistance professionals can learn from other professions and grow accordingly.

    Read the article
