Search Results

Search found 25852 results on 1035 pages for 'linq query syntax'.


  • MapReduce in DryadLINQ and PLINQ

    - by JoshReuben
    MapReduce (see http://en.wikipedia.org/wiki/Mapreduce): The MapReduce pattern aims to handle large-scale computations across a cluster of servers, often involving massive amounts of data. "The computation takes a set of input key/value pairs, and produces a set of output key/value pairs. The developer expresses the computation as two Func delegates: Map and Reduce. Map - takes a single input pair and produces a set of intermediate key/value pairs. The MapReduce function groups results by key and passes them to the Reduce function. Reduce - accepts an intermediate key I and a set of values for that key. It merges together these values to form a possibly smaller set of values. Typically just zero or one output value is produced per Reduce invocation. The intermediate values are supplied to the user's Reduce function via an iterator." The canonical MapReduce example is counting word frequency in a text file.

    MapReduce using DryadLINQ (see http://research.microsoft.com/en-us/projects/dryadlinq/ and http://connect.microsoft.com/Dryad): DryadLINQ provides a simple and straightforward way to implement MapReduce operations. The implementation has two primary components: a Pair structure, which serves as a data container, and a MapReduce method, which counts word frequency and returns the most frequent words.

    The Pair structure has two properties: Word is a string that holds a word or key, and Count is an int that holds the word count. The structure also overrides ToString to simplify printing the results. The following example shows the Pair implementation.

        public struct Pair
        {
            private string word;
            private int count;

            public Pair(string w, int c)
            {
                word = w;
                count = c;
            }

            public int Count { get { return count; } }
            public string Word { get { return word; } }

            public override string ToString()
            {
                return word + ":" + count.ToString();
            }
        }

    The MapReduce method produces the results; the input data could be partitioned and distributed across the cluster. The method:

    1. Creates a DryadTable<LineRecord> object, inputTable, to represent the lines of input text. For partitioned data, use GetPartitionedTable<T> instead of GetTable<T> and pass the method a metadata file.
    2. Applies the SelectMany operator to inputTable to transform the collection of lines into a collection of words. The String.Split method converts each line into a collection of words. SelectMany concatenates the collections created by Split into a single IQueryable<string> collection named words, which represents all the words in the file.
    3. Performs the Map part of the operation by applying GroupBy to the words object. The GroupBy operation groups elements with the same key, which is defined by the selector delegate. This creates a higher-order collection whose elements are groups. In this case, the delegate is an identity function, so the key is the word itself and the operation creates a groups collection that consists of groups of identical words.
    4. Performs the Reduce part of the operation by applying Select to groups. This operation reduces the groups of words from Step 3 to an IQueryable<Pair> collection named counts that represents the unique words in the file and how many instances there are of each word. Each key value in groups represents a unique word, so Select creates one Pair object for each unique word. IGrouping.Count returns the number of items in the group, so each Pair object's Count member is set to the number of instances of the word.
    5. Applies OrderByDescending to counts.
    This operation sorts the input collection in descending order of frequency and creates an ordered collection named ordered.
    6. Applies Take to ordered to create an IQueryable<Pair> collection named top, which contains the k most common words in the input file and their frequency. The test code then uses the Pair object's ToString implementation to print those words and their frequency.

        public static IQueryable<Pair> MapReduce(string directory, string fileName, int k)
        {
            DryadDataContext ddc = new DryadDataContext("file://" + directory);
            DryadTable<LineRecord> inputTable = ddc.GetTable<LineRecord>(fileName);
            IQueryable<string> words = inputTable.SelectMany(x => x.line.Split(' '));
            IQueryable<IGrouping<string, string>> groups = words.GroupBy(x => x);
            IQueryable<Pair> counts = groups.Select(x => new Pair(x.Key, x.Count()));
            IQueryable<Pair> ordered = counts.OrderByDescending(x => x.Count);
            IQueryable<Pair> top = ordered.Take(k);
            return top;
        }

    To test:

        IQueryable<Pair> results = MapReduce(@"c:\DryadData\input", "TestFile.txt", 100);
        foreach (Pair words in results)
            Debug.Print(words.ToString());

    Note: DryadLINQ applications can use a more compact way to represent the query:

        return inputTable
            .SelectMany(x => x.line.Split(' '))
            .GroupBy(x => x)
            .Select(x => new Pair(x.Key, x.Count()))
            .OrderByDescending(x => x.Count)
            .Take(k);

    MapReduce using PLINQ: The pattern is relevant even for a single multi-core machine, however, and we can write our own PLINQ MapReduce in a few lines. The Map function takes a single input value and returns a set of mapped values, which corresponds to LINQ's SelectMany operator. These are then grouped according to an intermediate key, which corresponds to LINQ's GroupBy operator. The Reduce function takes each intermediate key and a set of values for that key, and produces any number of outputs per key, which corresponds to LINQ's SelectMany again. We can put all of this together to implement a MapReduce in PLINQ that returns a ParallelQuery<T>:

        public static ParallelQuery<TResult> MapReduce<TSource, TMapped, TKey, TResult>(
            this ParallelQuery<TSource> source,
            Func<TSource, IEnumerable<TMapped>> map,
            Func<TMapped, TKey> keySelector,
            Func<IGrouping<TKey, TMapped>, IEnumerable<TResult>> reduce)
        {
            return source
                .SelectMany(map)
                .GroupBy(keySelector)
                .SelectMany(reduce);
        }

    Here the map function takes in an input document and outputs all of the words in that document. The grouping phase groups all of the identical words together, so that the reduce phase can then count the words in each group and output a word/count pair for each grouping:

        var files = Directory.EnumerateFiles(dirPath, "*.txt").AsParallel();
        var counts = files.MapReduce(
            path => File.ReadLines(path).SelectMany(line => line.Split(delimiters)),
            word => word,
            group => new[] { new KeyValuePair<string, int>(group.Key, group.Count()) });
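    A small self-contained sketch of the PLINQ version in use, for readers who want to run it: this is my own illustrative harness, not part of the original article. It assumes the MapReduce extension method above is compiled in a static class in the same project; dirPath and the delimiter set are made-up inputs.

        using System;
        using System.Collections.Generic;
        using System.IO;
        using System.Linq;

        class WordCountDemo
        {
            static void Main()
            {
                // Assumed inputs: a folder of .txt files and a simple delimiter set.
                string dirPath = @"c:\temp\text";
                char[] delimiters = { ' ', '\t', '\r', '\n', '.', ',', ';', ':' };

                var counts = Directory.EnumerateFiles(dirPath, "*.txt")
                    .AsParallel()
                    .MapReduce(
                        path => File.ReadLines(path).SelectMany(
                            line => line.Split(delimiters, StringSplitOptions.RemoveEmptyEntries)),
                        word => word,
                        group => new[] { new KeyValuePair<string, int>(group.Key, group.Count()) });

                // Order the reduced pairs and print the 100 most frequent words,
                // mirroring the DryadLINQ test above.
                foreach (var pair in counts.OrderByDescending(p => p.Value).Take(100))
                    Console.WriteLine(pair.Key + ":" + pair.Value);
            }
        }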

    Read the article

  • elffile: ELF Specific File Identification Utility

    - by user9154181
    Solaris 11 has a new standard user level command, /usr/bin/elffile. elffile is a variant of the file utility that is focused exclusively on linker related files: ELF objects, archives, and runtime linker configuration files. All other files are simply identified as "non-ELF". The primary advantage of elffile over the existing file utility is in the area of archives — elffile examines the archive members and can produce a summary of the contents, or per-member details. The impetus to add elffile to Solaris came from the effort to extend the format of Solaris archives so that they could grow beyond their previous 32-bit file limits. That work introduced a new archive symbol table format. Now that there was more than one possible format, I thought it would be useful if the file utility could identify which format a given archive is using, leading me to extend the file utility: % cc -c ~/hello.c % ar r foo.a hello.o % file foo.a foo.a: current ar archive, 32-bit symbol table % ar r -S foo.a hello.o % file foo.a foo.a: current ar archive, 64-bit symbol table In turn, this caused me to think about all the things that I would like the file utility to be able to tell me about an archive. In particular, I'd like to be able to know what's inside without having to unpack it. The end result of that train of thought was elffile. Much of the discussion in this article is adapted from the PSARC case I filed for elffile in December 2010: PSARC 2010/432 elffile Why file Is No Good For Archives And Yet Should Not Be Fixed The standard /usr/bin/file utility is not very useful when applied to archives. When identifying an archive, a user typically wants to know 2 things: Is this an archive? Presupposing that the archive contains objects, which is by far the most common use for archives, what platform are the objects for? Are they for sparc or x86? 32 or 64-bit? Some confusing combination from varying platforms? The file utility provides a quick answer to question (1), as it identifies all archives as "current ar archive". It does nothing to answer the more interesting question (2). To answer that question, requires a multi-step process: Extract all archive members Use the file utility on the extracted files, examine the output for each file in turn, and compare the results to generate a suitable summary description. Remove the extracted files It should be easier and more efficient to answer such an obvious question. It would be reasonable to extend the file utility to examine archive contents in place and produce a description. However, there are several reasons why I decided not to do so: The correct design for this feature within the file utility would have file examine each archive member in turn, applying its full abilities to each member. This would be elegant, but also represents a rather dramatic redesign and re-implementation of file. Archives nearly always contain nothing but ELF objects for a single platform, so such generality in the file utility would be of little practical benefit. It is best to avoid adding new options to standard utilities for which other implementations of interest exist. In the case of the file utility, one concern is that we might add an option which later appears in the GNU version of file with a different and incompatible meaning. Indeed, there have been discussions about replacing the Solaris file with the GNU version in the past. This may or may not be desirable, and may or may not ever happen. Either way, I don't want to preclude it. 
    Examining archive members is an O(n) operation, and can be relatively slow with large archives. The file utility is supposed to be a very fast operation. I decided that extending file in this way is overkill, and that an investment in the file utility for better archive support would not be worth the cost. A solution that is more narrowly focused on ELF and other linker related files is really all that we need. The necessary code for doing this already exists within libelf. All that is missing is a small user-level wrapper to make that functionality available at the command line. In that vein, I considered adding an option for this to the elfdump utility. I examined elfdump carefully, and even wrote a prototype implementation. The added code is small and simple, but the conceptual fit with the rest of elfdump is poor. The result complicates elfdump syntax and documentation, definite signs that this functionality does not belong there. And so, I added this functionality as a new user level command.

    The elffile Command
    The syntax for this new command is

        elffile [-s basic | detail | summary] filename...

    Please see the elffile(1) manpage for additional details. To demonstrate how output from elffile looks, I will use the following files:

        File         Description
        config       A runtime linker configuration file produced with crle
        dwarf.o      An ELF object
        /etc/passwd  A text file
        mixed.a      Archive containing a mixture of ELF and non-ELF members
        mixed_elf.a  Archive containing ELF objects for different machines
        not_elf.a    Archive containing no ELF objects
        same_elf.a   Archive containing a collection of ELF objects for the same machine. This is the most common type of archive.

    The file utility identifies these files as follows:

        % file config dwarf.o /etc/passwd mixed.a mixed_elf.a not_elf.a same_elf.a
        config:      Runtime Linking Configuration 64-bit MSB SPARCV9
        dwarf.o:     ELF 64-bit LSB relocatable AMD64 Version 1
        /etc/passwd: ascii text
        mixed.a:     current ar archive, 32-bit symbol table
        mixed_elf.a: current ar archive, 32-bit symbol table
        not_elf.a:   current ar archive
        same_elf.a:  current ar archive, 32-bit symbol table

    By default, elffile uses its "summary" output style. This output differs from the output from the file utility in 2 significant ways:

    1. Files that are not an ELF object, archive, or runtime linker configuration file are identified as "non-ELF", whereas the file utility attempts further identification for such files.
    2. When applied to an archive, the elffile output includes a description of the archive's contents, without requiring member extraction or other additional steps.

    Applying elffile to the above files:

        % elffile config dwarf.o /etc/passwd mixed.a mixed_elf.a not_elf.a same_elf.a
        config:      Runtime Linking Configuration 64-bit MSB SPARCV9
        dwarf.o:     ELF 64-bit LSB relocatable AMD64 Version 1
        /etc/passwd: non-ELF
        mixed.a:     current ar archive, 32-bit symbol table, mixed ELF and non-ELF content
        mixed_elf.a: current ar archive, 32-bit symbol table, mixed ELF content
        not_elf.a:   current ar archive, non-ELF content
        same_elf.a:  current ar archive, 32-bit symbol table, ELF 64-bit LSB relocatable AMD64 Version 1

    The output for same_elf.a is of particular interest: The vast majority of archives contain only ELF objects for a single platform, and in this case, the default output from elffile answers both of the questions about archives posed at the beginning of this discussion, in a single efficient step. This makes elffile considerably more useful than file, within the realm of linker-related files.
    elffile can produce output in two other styles, "basic" and "detail". The basic style produces output that is the same as that from file, for linker-related files. The detail style produces per-member identification of archive contents. This can be useful when the archive contents are not homogeneous ELF objects, and more information is desired than the summary output provides:

        % elffile -s detail mixed.a
        mixed.a: current ar archive, 32-bit symbol table
        mixed.a(dwarf.o): ELF 32-bit LSB relocatable 80386 Version 1
        mixed.a(main.c): non-ELF content
        mixed.a(main.o): ELF 64-bit LSB relocatable AMD64 Version 1 [SSE]
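    As an aside (not from the original post): the "ELF vs. non-ELF" and "32/64-bit LSB/MSB" identification that file and elffile report comes straight from the first bytes of the ELF header, the e_ident array. A rough sketch of that check, written here in C# purely for illustration and unrelated to the actual libelf-based implementation:

        using System;
        using System.IO;

        class ElfIdent
        {
            static void Main(string[] args)
            {
                foreach (string path in args)
                {
                    // Read the first 6 bytes of e_ident: 4 magic bytes, EI_CLASS, EI_DATA.
                    byte[] ident = new byte[6];
                    int n;
                    using (FileStream fs = File.OpenRead(path))
                        n = fs.Read(ident, 0, ident.Length);

                    if (n < ident.Length || ident[0] != 0x7f || ident[1] != 'E'
                        || ident[2] != 'L' || ident[3] != 'F')
                    {
                        Console.WriteLine(path + ": non-ELF");
                        continue;
                    }

                    string elfClass = ident[4] == 1 ? "32-bit" : ident[4] == 2 ? "64-bit" : "unknown class";
                    string byteOrder = ident[5] == 1 ? "LSB" : ident[5] == 2 ? "MSB" : "unknown byte order";
                    Console.WriteLine(path + ": ELF " + elfClass + " " + byteOrder);
                }
            }
        }

    Identifying archive members additionally requires walking the ar(1) member headers, which is the part elffile's summary and detail modes automate.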

    Read the article

  • <%: %>, HtmlEncode, IHtmlString and MvcHtmlString

    - by Shaun
    One of my colleagues and friends, Robin, is playing with (and struggling with) ASP.NET MVC 2 on a project these days, while I'm struggling with an annoying client. Since it's his first time using ASP.NET MVC he was running into a lot of problems, and I was very happy to share my experience with him. Yesterday he told me that when he attempted to insert a <br /> element into his page, the <br /> was shown as part of the string rather than creating a new line, which is bad. After checking his code a bit I found that this was because he used a new ASP.NET markup syntax supported in .NET 4.0: "<%: %>". If you have been using ASP.NET MVC 1, or living in the .NET 3.5 world, it would be very common to use <%= %> to show something on the page from the backend code. But when you do that, you must ensure that the string to be displayed is Html-safe, which means all Html markup must be encoded; otherwise it might cause an XSS (cross-site scripting) problem. So you'd better encode anything you display on the page. In .NET 4.0 Microsoft introduced a new markup syntax to solve this problem: <%: %>. It encodes the content automatically, so you no longer need to check and verify your code manually for the XSS issue mentioned above. But this also means that it encodes everything, including the Html elements you actually want rendered. So I changed his code accordingly and it worked well.

    After helping him solve this problem, and finishing a spreadsheet for my boring project, I thought a bit more about <%: %>. Since it encodes everything, why does it render correctly when we use "<%: Html.TextBox("name") %>" to show a text box? As you know, Html.TextBox renders an "<input name="name" id="name" type="text"/>" element on the page. If <%: %> encoded everything, it should not display a text box. So I dug into the source code of MVC and found some comments in the MvcHtmlString class.

        // In ASP.NET 4, a new syntax <%: %> is being introduced in WebForms pages, where <%: expression %> is equivalent to
        // <%= HttpUtility.HtmlEncode(expression) %>. The intent of this is to reduce common causes of XSS vulnerabilities
        // in WebForms pages (WebForms views in the case of MVC). This involves the addition of an interface
        // System.Web.IHtmlString and a static method overload System.Web.HttpUtility::HtmlEncode(object). The interface
        // definition is roughly:
        // public interface IHtmlString {
        //     string ToHtmlString();
        // }
        // And the HtmlEncode(object) logic is roughly:
        // - If the input argument is an IHtmlString, return argument.ToHtmlString(),
        // - Otherwise, return HtmlEncode(Convert.ToString(argument)).
        //
        // Unfortunately this has the effect that calling <%: Html.SomeHelper() %> in an MVC application running on .NET 4
        // will end up encoding output that is already HTML-safe. As a result, we're changing our HTML helpers to return
        // MvcHtmlString where appropriate. <%= Html.SomeHelper() %> will continue to work in both .NET 3.5 and .NET 4, but
        // changing the return types to MvcHtmlString has the added benefit that <%: Html.SomeHelper() %> will also work
        // properly in .NET 4 rather than resulting in a double-encoded output. MVC developers in .NET 4 will then be able
        // to use the <%: %> syntax almost everywhere instead of having to remember where to use <%= %> and where to use
        // <%: %>.
        // This should help developers craft more secure web applications by default.
        //
        // To create an MvcHtmlString, use the static Create() method instead of calling the protected constructor.

    The comments say the encoding rule of <%: %> is: if the type of the content is IHtmlString it will NOT be encoded, since IHtmlString indicates the content is already Html-safe; otherwise the content is encoded with HtmlEncode. If we check the return type of the Html.TextBox method we will find that it's MvcHtmlString, which implements the IHtmlString interface dynamically. That is the reason why the "<input name="name" id="name" type="text"/>" markup was not encoded by <%: %>. So if we want to tell ASP.NET MVC, or rather the ASP.NET runtime, that some content is Html-safe and should not be encoded, we can convert the content into an IHtmlString. One resolution is to wrap the string in an MvcHtmlString ourselves; we can also create an extension method for a better developing experience.

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Web;
        using System.Web.Mvc;

        namespace ShaunXu.Blogs.IHtmlStringIssue
        {
            public static class Helpers
            {
                public static MvcHtmlString IsHtmlSafe(this string content)
                {
                    return MvcHtmlString.Create(content);
                }
            }
        }

    The view can then call this extension method on the string to be rendered, and the page renders correctly.

    Summary
    In this post I explained a bit about the new markup syntax in .NET 4.0, <%: %>, and its usage. I also explained a bit about how to control whether page content is encoded or not. We can see that ASP.NET MVC gives us more points of control over our web pages.

    Hope this helps,
    Shaun

    All documents and related graphics, codes are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.
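    A footnote to the encoding rule described above: here is a minimal sketch (my own illustration, not the actual System.Web source) of the HtmlEncode(object) dispatch those comments describe, showing what happens to a plain string versus an MvcHtmlString. It assumes references to the System.Web and System.Web.Mvc assemblies.

        using System;
        using System.Web;       // IHtmlString, HttpUtility (.NET 4)
        using System.Web.Mvc;   // MvcHtmlString

        public static class EncodeDemo
        {
            // Roughly what <%: expression %> does in .NET 4, per the comments above:
            // IHtmlString values pass through untouched, everything else is HTML-encoded.
            public static string EncodeLikeColonSyntax(object value)
            {
                var htmlSafe = value as IHtmlString;
                return htmlSafe != null
                    ? htmlSafe.ToHtmlString()
                    : HttpUtility.HtmlEncode(Convert.ToString(value));
            }

            public static void Main()
            {
                Console.WriteLine(EncodeLikeColonSyntax("Hello<br />World"));
                // Hello&lt;br /&gt;World   (encoded: a plain string is not IHtmlString)

                Console.WriteLine(EncodeLikeColonSyntax(MvcHtmlString.Create("Hello<br />World")));
                // Hello<br />World         (left alone: MvcHtmlString is treated as Html-safe)
            }
        }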

    Read the article

  • GLSL compiler messages from different vendors [on hold]

    - by revers
    I'm writing a GLSL shader editor and I want to parse GLSL compiler messages to make hyperlinks to invalid lines in a shader code. I know that these messages are vendor specific but currently I have access only to AMD's video cards. I want to handle at least NVidia's and Intel's hardware, apart from AMD's. If you have video card from different vendor than AMD, could you please give me the output of following C++ program: #include <GL/glew.h> #include <GL/freeglut.h> #include <iostream> using namespace std; #define STRINGIFY(X) #X static const char* fs = STRINGIFY( out vec4 out_Color; mat4 m; void main() { vec3 v3 = vec3(1.0); vec2 v2 = v3; out_Color = vec4(5.0 * v2.x, 1.0); vec3 k = 3.0; float = 5; } ); static const char* vs = STRINGIFY( in vec3 in_Position; void main() { vec3 v(5); gl_Position = vec4(in_Position, 1.0); } ); void printShaderInfoLog(GLint shader) { int infoLogLen = 0; int charsWritten = 0; GLchar *infoLog; glGetShaderiv(shader, GL_INFO_LOG_LENGTH, &infoLogLen); if (infoLogLen > 0) { infoLog = new GLchar[infoLogLen]; glGetShaderInfoLog(shader, infoLogLen, &charsWritten, infoLog); cout << "Log:\n" << infoLog << endl; delete [] infoLog; } } void printProgramInfoLog(GLint program) { int infoLogLen = 0; int charsWritten = 0; GLchar *infoLog; glGetProgramiv(program, GL_INFO_LOG_LENGTH, &infoLogLen); if (infoLogLen > 0) { infoLog = new GLchar[infoLogLen]; glGetProgramInfoLog(program, infoLogLen, &charsWritten, infoLog); cout << "Program log:\n" << infoLog << endl; delete [] infoLog; } } void initShaders() { GLuint v = glCreateShader(GL_VERTEX_SHADER); GLuint f = glCreateShader(GL_FRAGMENT_SHADER); GLint vlen = strlen(vs); GLint flen = strlen(fs); glShaderSource(v, 1, &vs, &vlen); glShaderSource(f, 1, &fs, &flen); GLint compiled; glCompileShader(v); bool succ = true; glGetShaderiv(v, GL_COMPILE_STATUS, &compiled); if (!compiled) { cout << "Vertex shader not compiled." << endl; succ = false; } printShaderInfoLog(v); glCompileShader(f); glGetShaderiv(f, GL_COMPILE_STATUS, &compiled); if (!compiled) { cout << "Fragment shader not compiled." << endl; succ = false; } printShaderInfoLog(f); GLuint p = glCreateProgram(); glAttachShader(p, v); glAttachShader(p, f); glLinkProgram(p); glUseProgram(p); printProgramInfoLog(p); if (!succ) { exit(-1); } delete [] vs; delete [] fs; } int main(int argc, char* argv[]) { glutInit(&argc, argv); glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGBA); glutInitWindowSize(600, 600); glutCreateWindow("Triangle Test"); glewInit(); GLenum err = glewInit(); if (GLEW_OK != err) { cout << "glewInit failed, aborting." << endl; exit(1); } cout << "Using GLEW " << glewGetString(GLEW_VERSION) << endl; const GLubyte* renderer = glGetString(GL_RENDERER); const GLubyte* vendor = glGetString(GL_VENDOR); const GLubyte* version = glGetString(GL_VERSION); const GLubyte* glslVersion = glGetString(GL_SHADING_LANGUAGE_VERSION); GLint major, minor; glGetIntegerv(GL_MAJOR_VERSION, &major); glGetIntegerv(GL_MINOR_VERSION, &minor); cout << "GL Vendor : " << vendor << endl; cout << "GL Renderer : " << renderer << endl; cout << "GL Version : " << version << endl; cout << "GL Version : " << major << "." << minor << endl; cout << "GLSL Version : " << glslVersion << endl; initShaders(); return 0; } On my video card it gives: Status: Using GLEW 1.7.0 GL Vendor : ATI Technologies Inc. GL Renderer : ATI Radeon HD 4250 GL Version : 3.3.11631 Compatibility Profile Context GL Version : 3.3 GLSL Version : 3.30 Vertex shader not compiled. 
Log: Vertex shader failed to compile with the following errors: ERROR: 0:1: error(#132) Syntax error: '5' parse error ERROR: error(#273) 1 compilation errors. No code generated Fragment shader not compiled. Log: Fragment shader failed to compile with the following errors: WARNING: 0:1: warning(#402) Implicit truncation of vector from size 3 to size 2. ERROR: 0:1: error(#174) Not enough data provided for construction constructor WARNING: 0:1: warning(#402) Implicit truncation of vector from size 1 to size 3. ERROR: 0:1: error(#132) Syntax error: '=' parse error ERROR: error(#273) 2 compilation errors. No code generated Program log: Vertex and Fragment shader(s) were not successfully compiled before glLinkProgram() was called. Link failed. Or if you like, you could give me other compiler messages than proposed by me. To summarize, the question is: What are GLSL compiler messages formats (INFOs, WARNINGs, ERRORs) for different vendors? Please give me examples or pattern explanation. EDIT: Ok, it seems that this question is too broad, then shortly: How does NVidia's and Intel's GLSL compilers present ERROR and WARNING messages? AMD/ATI uses patterns like this: ERROR: <position>:<line_number>: <message> WARNING: <position>:<line_number>: <message> (examples are above).
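    For what it's worth, the AMD/ATI pattern just described is straightforward to pick apart with a regular expression. A rough C# sketch of the idea (my own, and only valid as long as that pattern holds; NVidia and Intel drivers will need their own patterns once their message formats are known):

        using System;
        using System.Text.RegularExpressions;

        class GlslLogParser
        {
            // Matches lines such as:
            //   ERROR: 0:1: error(#132) Syntax error: '5' parse error
            //   WARNING: 0:1: warning(#402) Implicit truncation of vector from size 3 to size 2.
            static readonly Regex AmdMessage = new Regex(
                @"^(?<severity>ERROR|WARNING):\s+(?<position>\d+):(?<line>\d+):\s+(?<text>.*)$");

            static void Main()
            {
                string log = "WARNING: 0:1: warning(#402) Implicit truncation of vector from size 3 to size 2.\n"
                           + "ERROR: 0:1: error(#174) Not enough data provided for construction constructor\n"
                           + "ERROR: error(#273) 2 compilation errors. No code generated";

                foreach (string line in log.Split('\n'))
                {
                    Match m = AmdMessage.Match(line);
                    // Summary lines like "ERROR: error(#273) ..." carry no line number and simply won't match.
                    if (m.Success)
                        Console.WriteLine("{0} at line {1}: {2}",
                            m.Groups["severity"].Value, m.Groups["line"].Value, m.Groups["text"].Value);
                }
            }
        }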

    Read the article

  • CodePlex Daily Summary for Wednesday, June 26, 2013

    CodePlex Daily Summary for Wednesday, June 26, 2013Popular ReleasesNaked Objects: Naked Objects Release 5.5.0: This release includes a number of significant improvements to the usability of the UI, some of which involve new programming conventions or attributes: Action dialogs now appear as pop-up modal dialogs instead of as a new page; query-only actions have an Apply as well as an OK button. See https://nakedobjects.codeplex.com/workitem/175 When a reference object is expanded in-line there is a button to jump straight to an Edit view of that object see https://nakedobjects.codeplex.com/workitem/1...VeraCrypt: VeraCrypt version 1.0b: Changes since version 1.0a :Enhance RIPEMD160 implementation in BootLoaded by using the compiler uint32 type Don't position legacy flag in volume header for newer VeraCrypt releasesPlayer Framework by Microsoft: Player Framework for Windows 8 and WP8 (v1.3 beta): Preview: New MPEG DASH adaptive streaming plugin for WAMS. Preview: New Ultraviolet CFF plugin. Preview: New WP7 version with WP8 compatibility. (source code only) Source code is now available via CodePlex Git Misc bug fixes and improvements: WP8 only: Added optional fullscreen and mute buttons to default xaml JS only: protecting currentTime from returning infinity. Some videos would cause currentTime to be infinity which could cause errors in plugins expecting only finite values. (...SSIS DQS Matching Transformation: SSIS DQS Matching Transformation 1.0: Initial release of the SSIS DQS Matching Component.AssaultCube Reloaded: 2.5.8: SERVER OWNERS: note that the default maprot has changed once again. Linux has Ubuntu 11.10 32-bit precompiled binaries and Ubuntu 10.10 64-bit precompiled binaries, but you can compile your own as it also contains the source. If you are using Mac or other operating systems, please wait while we continue to try to package for those OSes. Or better yet, try to compile it. If it fails, download a virtual machine. The server pack is ready for both Windows and Linux, but you might need to compi...Compare .NET Objects: Version 1.7.2.0: If you like it, please rate it. :) Performance Improvements Fix for deleted row in a data table Added ability to ignore the collection order Fix for Ignoring by AttributesMicrosoft Ajax Minifier: Microsoft Ajax Minifier 4.95: update parser to allow for CSS3 calc( function to nest. add recognition of -pponly (Preprocess-Only) switch in AjaxMinManifestTask build task. Fix crashing bug in EXE when processing a manifest file using the -xml switch and an error message needs to be displayed (like a missing input file). Create separate Clean and Bundle build tasks for working with manifest files (AjaxMinManifestCleanTask and AjaxMinBundleTask). 
Removed the IsCleanOperation from AjaxMinManifestTask -- use AjaxMinMan...VG-Ripper & PG-Ripper: VG-Ripper 2.9.44: changes NEW: Added Support for "ImgChili.net" links FIXED: Auto UpdaterDocument.Editor: 2013.25: What's new for Document.Editor 2013.25: Improved Spell Check support Improved User Interface Minor Bug Fix's, improvements and speed upsStyleMVVM: 3.0.2: This is a minor feature and bug fix release Features: ExportWhenDebuggerIsAttacedAttribute - new attribute that marks an attribute to only be exported when the debugger is attahced InjectedFilterAttributeFilterProvider - new Attribute Filter provider for MVC that injects the attributes Performance Improvements - minor speed improvements all over, and Import collections is now 50% faster Bug Fixes: Open Generic Constraints are now respected when finding exports Fix for fluent registrat...WPF Composites: Version 4.3.0: In this Beta release, I broke my code out into two separate projects. There is a core FasterWPF.dll with the minimal required functionality. This can run with only the Aero.dll and the Rx .dll's. Then, I have a FasterWPFExtras .dll that requires and supports the Extended WPF Toolkit™ Community Edition V 1.9.0 (including Xceed DataGrid) and the Thriple .dll. This is for developers who want more . . . Finally, you may notice the other OPTIONAL .dll's available in the download such as the Dyn...Channel9's Absolute Beginner Series: Windows Phone 8: Entire source code for the Channel 9 series, Windows Phone 8 Development for Absolute Beginners.Indent Guides for Visual Studio: Indent Guides v13: ImportantThis release does not support Visual Studio 2010. The latest stable release for VS 2010 is v12.1. Version History Changed in v13 Added page width guide lines Added guide highlighting options Fixed guides appearing over collapsed blocks Fixed guides not appearing in newly opened files Fixed some potential crashes Fixed lines going through pragma statements Various updates for VS 2012 and VS 2013 Removed VS 2010 support Changed in v12.1: Fixed crash when unable to start...Fluent Ribbon Control Suite: Fluent Ribbon Control Suite 2.1.0 - Prerelease d: Fluent Ribbon Control Suite 2.1.0 - Prerelease d(supports .NET 3.5, 4.0 and 4.5) Includes: Fluent.dll (with .pdb and .xml) Showcase Application Samples (not for .NET 3.5) Foundation (Tabs, Groups, Contextual Tabs, Quick Access Toolbar, Backstage) Resizing (ribbon reducing & enlarging principles) Galleries (Gallery in ContextMenu, InRibbonGallery) MVVM (shows how to use this library with Model-View-ViewModel pattern) KeyTips ScreenTips Toolbars ColorGallery *Walkthrough (do...Magick.NET: Magick.NET 6.8.5.1001: Magick.NET compiled against ImageMagick 6.8.5.10. Breaking changes: - MagickNET.Initialize has been made obsolete because the ImageMagick files in the directory are no longer necessary. - MagickGeometry is no longer IDisposable. - Renamed dll's so they include the platform name. - Image profiles can now only be accessed and modified with ImageProfile classes. - Renamed DrawableBase to Drawable. - Removed Args part of PathArc/PathCurvetoArgs/PathQuadraticCurvetoArgs classes. The...Keyboard Image Viewer: 1.5.4: Upgraded folder picker dialog to better version on Win7+ Fixed bug that stopped slideshow from looping back to the start of the list. 
Added crash dialog that allows you to see and copy exception stack traces for fixing.DependencyAnalysis (Egg and Gherkin): 0.9.4: - Create Visual Studio 2012 Addin for ad-hoc analysis of your project - Display metrics in a grid - Adequate performing serialization between Addin (Visual Studio process) and AnalysisHost process - Display dependencies as graph (proximity graph) - Create a logo for the project - Constructors of anonymous types no longer hide constructors of the declaring type during "build dependencies" phase - Type descriptors were added multiple times to SubmoduleDescriptor. Types collection, same instanc...Bloomberg API Emulator: Bloomberg API Emulator (v 1.0.5): This version contains the full Java port of my original C# code. I just finished the MarketDataSubscription request type. I will start working on a C++ port of my C# code.Three-Dimensional Maneuver Gear for Minecraft: TDMG 1.1.0.0 for 1.5.2: CodePlex???(????????) ?????????(???1/4) ??????????? ?????????? ???????????(??????????) ??????????????????????? ↑????、?????????????????????(???????) ???、??????????、?????????????????????、????????1.5?????????? Shift+W(????)??????????????????10°、?10°(?????????)???Hyper-V Management Pack Extensions 2012: HyperVMPE2012 (v1.0.1.126): Hyper-V Management Pack Extensions 2012 Beta ReleaseNew Projects.Net Encryption App: This is a C#.Net desktop application that will let users encrypt and de-crypt files with the algorithm of their choosing.AutoSPDocumenter: AutoSPDocumenter utilises PowerShell to document SharePoint farms and provide output in usable formats.Azure Storage Redirector: Azure Storage Redirector. Redirects requests to Global Azure Storage to China Azure Storage. ?????Azure Storage?request??????Azure storagebrownbag: Simple project to show branching, merging, and shelvingChannel9's Absolute Beginner Series: Channel 9's absolute beginner series source code. From Windows Phone 7, Windows Phone 8, Windows Store applications, one stop area for all the seriesFAST for Sharepoint 2010 Query Tool (.NET 3.5): .NET 3.5 version of the FAST Search for SharePoint MOSS 2010 Query Tool (https://fastforsharepoint.codeplex.com/). For environments without .NET 4.0HAOest Framework: HAOest????ListManager: ListManager????? by HADB of HAOestMyPS: mypsNNRel: NNRelO - BV - 2: TestOpen XML SDK for JavaScript: Small JavaScript library that enables you to implement Open XML functionality anywhere you can use JavaScript.Orchard Prefix free: Provides a script manifest for the Prefix free script libraries.PVDesktop: PVDesktop is an application for designing and analyzing specific solar energy sites.Red the sound TowePlay: School project at ISEN Lille. Creation of a collaborative music creation software in C# using.NETScience Kits for Kids: This project is the service for Childhood Education which aged 4 - 8.SharePoint ULS for PowerShell: Allows PowerShell to log to the SharePoint ULS.SSIS DQS Matching Transformation: The SSIS DQS Matching Transformation uses Data Quality Services (DQS) to find duplicate data within the SSIS data flow.TARVOS Computer Networks Simulator: Discrete event-based network simulator, supports simulating MPLS architecture, several RSVP-TE protocol functionalities and fast recovery.Web API Explorer 4 DNN: Web API Explorer for DNN(R) aids module development allowing you to examine the Routing Table entries for a DotNetNuke(R) portal.

    Read the article

  • REST to Objects in C#

    RESTful interfaces for web services are all the rage for many Web 2.0 sites. If you want to consume these in a very simple fashion, LINQ to XML can do the job pretty easily in C#. If you go searching for help on this, you'll find a lot of incomplete solutions and fairly large toolkits and frameworks (guess how I know this), so this quick article is meant to be a no-fluff, just-stuff approach to making this work.

    POCO Objects
    Let's assume you have a Model that you want to suck data into from a RESTful web service. Ideally this is a Plain Old CLR Object, meaning it isn't infected with any persistence or serialization goop. It might look something like this:

        public class Entry
        {
            public int Id;
            public int UserId;
            public DateTime Date;
            public float Hours;
            public string Notes;
            public bool Billable;

            public override string ToString()
            {
                return String.Format("[{0}] User: {1} Date: {2} Hours: {3} Notes: {4} Billable {5}",
                    Id, UserId, Date, Hours, Notes, Billable);
            }
        }

    Note that this isn't a completely trivial object. Let's look at the API for the service.

    RESTful HTTP Service
    In this case, it's TickSpot's API, with the following sample output:

        <?xml version="1.0" encoding="UTF-8"?>
        <entries type="array">
          <entry>
            <id type="integer">24</id>
            <task_id type="integer">14</task_id>
            <user_id type="integer">3</user_id>
            <date type="date">2008-03-08</date>
            <hours type="float">1.00</hours>
            <notes>Had trouble with tribbles.</notes>
            <billable>true</billable> # Billable is an attribute inherited from the task
            <billed>true</billed> # Billed is an attribute to track whether the entry has been invoiced
            <created_at type="datetime">Tue, 07 Oct 2008 14:46:16 -0400</created_at>
            <updated_at type="datetime">Tue, 07 Oct 2008 14:46:16 -0400</updated_at>
            # The following attributes are derived and provided for informational purposes:
            <user_email>[email protected]</user_email>
            <task_name>Remove converter assembly</task_name>
            <sum_hours type="float">2.00</sum_hours>
            <budget type="float">10.00</budget>
            <project_name>Realign dilithium crystals</project_name>
            <client_name>Starfleet Command</client_name>
          </entry>
        </entries>

    I'm assuming in this case that I don't necessarily care about all of the data fields the service is returning; I just need some of them for my application's purposes. Thus, you can see there are more elements in the <entry> XML than I have in my Entry class.

    Get The XML with C#
    The next step is to get the XML. The following snippet does the heavy lifting once you pass it the appropriate URL:

        protected XElement GetResponse(string uri)
        {
            var request = WebRequest.Create(uri) as HttpWebRequest;
            request.UserAgent = ".NET Sample";
            request.KeepAlive = false;
            request.Timeout = 15 * 1000;

            var response = request.GetResponse() as HttpWebResponse;

            if (request.HaveResponse == true && response != null)
            {
                var reader = new StreamReader(response.GetResponseStream());
                return XElement.Parse(reader.ReadToEnd());
            }
            throw new Exception("Error fetching data.");
        }

    This is adapted from the Yahoo Developer article on Web Service REST calls. Once you have the XML, the last step is to get the data back as your POCO.
    Use LINQ-To-XML to Deserialize POCOs from XML
    This is done via the following code:

        public IEnumerable<Entry> List(DateTime startDate, DateTime endDate)
        {
            string additionalParameters = String.Format("start_date={0}&end_date={1}",
                startDate.ToShortDateString(), endDate.ToShortDateString());
            string uri = BuildUrl("entries", additionalParameters);

            XElement elements = GetResponse(uri);

            var entries = from e in elements.Elements()
                          where e.Name.LocalName == "entry"
                          select new Entry
                          {
                              Id = int.Parse(e.Element("id").Value),
                              UserId = int.Parse(e.Element("user_id").Value),
                              Date = DateTime.Parse(e.Element("date").Value),
                              Hours = float.Parse(e.Element("hours").Value),
                              Notes = e.Element("notes").Value,
                              Billable = bool.Parse(e.Element("billable").Value)
                          };
            return entries;
        }

    For completeness, here's the BuildUrl method for my TickSpot API wrapper:

        // Change these to your settings
        protected const string projectDomain = "DOMAIN.tickspot.com";
        private const string authParams = "[email protected]&password=MyTickSpotPassword";

        protected string BuildUrl(string apiMethod, string additionalParams)
        {
            if (projectDomain.Contains("DOMAIN"))
            {
                throw new ApplicationException("You must update your domain in ProjectRepository.cs.");
            }
            if (authParams.Contains("MyTickSpotPassword"))
            {
                throw new ApplicationException("You must update your email and password in ProjectRepository.cs.");
            }
            return string.Format("https://{0}/api/{1}?{2}&{3}", projectDomain, apiMethod, authParams, additionalParams);
        }

    That's it! Now go forth and consume XML and map it to classes you actually want to work with. Have fun!
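    For completeness, a short usage sketch (my own, not from the article): EntryRepository is a hypothetical name for the class that holds the List, GetResponse and BuildUrl members shown above.

        using System;
        using System.Collections.Generic;

        class Demo
        {
            static void Main()
            {
                // Hypothetical wrapper class containing the methods from this article.
                var repository = new EntryRepository();
                IEnumerable<Entry> entries = repository.List(DateTime.Today.AddDays(-7), DateTime.Today);
                foreach (Entry entry in entries)
                    Console.WriteLine(entry);   // relies on the ToString override on the Entry POCO
            }
        }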

    Read the article

  • CodePlex Daily Summary for Sunday, June 26, 2011

    CodePlex Daily Summary for Sunday, June 26, 2011Popular ReleasesDroid Builder: Droid Builder - 1.0.4194.38898: Support new type of patch package. Support plugin framework.Mosaic Project: Mosaic Alpha build 254: - Added horizontal scroll by mouse in fullscreen mode - Widgets now have fixed size - Reduced spacing between widgets - Widgets menu is scrollable by mouse now and not overlapping back button on small screens.Net Image Processor: v1.0: Initial release of the library containing the core architecture and two filters. To install, extract the library to somewhere sensible then reference as a file from your project in Visual Studio.Usage Agent: Usage Agent 9.0.8: Latest release. Changes include: - Fixes for Optus - Usage Delta statistic for BigPond - Eliminated the need for UAC prompt at every startupjQuery List DragSort: jQuery List DragSort 0.4.3: Fix item not dropping correctly on Chrome and jQuery 1.6KinectNUI: Jun 25 Alpha Release: Initial public version. No installer needed, just run the EXE.TerrariViewer: TerrariViewer v3.3 [v1.0.5 Compatible]: I have added support for all the new items in Terraria v1.0.5. I have also added the ability to put your character in hardcore mode or take them out via a simple checkbox on the stats tab. If you come across any bugs, please let me know immediately.Media Companion: MC 3.409b-1 Weekly: This weeks release is part way through a major rewrite of the TVShow code. This means that a few TV related features & functions are not fully operational at the moment. The reason for this release is so that people can see if their particular issue has been fixed during the week. Some issues may not be able to be fully checked due to the ongoing TV code refactoring. So, I would strongly suggest that you put this version into a separate folder, copy your settings folder across & test MC that...Terraria World Viewer: Version 1.5: Update June 24th Made compatible with the new tiles found in Terraria 1.0.5Kinect Earth Move: KinectEarthMove sample code: Sample code releasedThis is a sample code for Kinect for Windows SDK beta, which was demonstrated on Channel 9 Kinect for Windows SKD beta launch event on June 17 2011. Using color image and skeleton data from Kinect and user in front of Kinect can manipulate the earth between his/her hands.NetOffice - The easiest way to use Office in .NET: NetOffice Release 0.9b: Changes: - fix critical issue 262334 (AccessViolationException while using events in a COMAddin) - remove x64 Assemblies (not necessary) Includes: - Runtime Binaries and Source Code for .NET Framework:......v2.0, v3.0, v3.5, v4.0 - Tutorials in C# and VB.Net:..............................................................COM Proxy Management, Events, etc. - Examples in C# and VB.Net:............................................................Excel, Word, Outlook, PowerPoint, Access - COMAddi...MiniTwitter: 1.70: MiniTwitter 1.70 ???? ?? ????? xAuth ?? OAuth ??????? 1.70 ??????????????????????????。 ???????????????? Twitter ? Web ??????????、PIN ????????????????????。??????????????????、???????????????????????????。Total Commander SkyDrive File System Plugin (.wfx): Total Commander SkyDrive File System Plugin 0.8.7b: Total Commander SkyDrive File System Plugin version 0.8.7b. Bug fixes: - BROKEN PLUGIN by upgrading SkyDriveServiceClient version 2.0.1b. Please do not forget to express your opinion of the plugin by rating it! 
Donate (EUR)SkyDrive .Net API Client: SkyDrive .Net API Client 2.0.1b (RELOADED): SkyDrive .Net API Client assembly has been RELOADED in version 2.0.1b as a REAL API. It supports the followings: - Creating root and sub folders - Uploading and downloading files - Renaming and deleting folders and files Bug fixes: - BROKEN API (issue 6834) Please do not forget to express your opinion of the assembly by rating it! Donate (EUR)Mini SQL Query: Mini SQL Query v1.0.0.59794: This release includes the following enhancements: Added a Most Recently Used file list Added Row counts to the query (per tab) and table view windows Added the Command Timeout option, only valid for MSSQL for now - see options If you have no idea what this thing is make sure you check out http://pksoftware.net/MiniSqlQuery/Help/MiniSqlQueryQuickStart.docx for an introduction. PK :-]HydroDesktop - CUAHSI Hydrologic Information System Desktop Application: 1.2.591 Beta Release: 1.2.591 Beta Releasepatterns & practices: Project Silk: Project Silk Community Drop 12 - June 22, 2011: Changes from previous drop: Minor code changes. New "Introduction" chapter. New "Modularity" chapter. Updated "Architecture" chapter. Updated "Server-Side Implementation" chapter. Updated "Client Data Management and Caching" chapter. Guidance Chapters Ready for Review The Word documents for the chapters are included with the source code in addition to the CHM to help you provide feedback. The PDF is provided as a separate download for your convenience. Installation Overview To ins...DropBox Linker: DropBox Linker 1.3: Added "Get links..." dialog, that provides selective public files links copying Get links link added to tray menu as the default option Fixed URL encoding .NET Framework 4.0 Client Profile requiredDotNetNuke® Community Edition: 06.00.00 Beta: Beta 1 (Build 2300) includes many important enhancements to the user experience. The control panel has been updated for easier access to the most important features and additional forms have been adapted to the new pattern. This release also includes many bug fixes that make it more stable than previous CTP releases. Beta ForumsBlogEngine.NET: BlogEngine.NET 2.5 RC: BlogEngine.NET Hosting - Click Here! 3 Months FREE – BlogEngine.NET Hosting – Click Here! This is a Release Candidate version for BlogEngine.NET 2.5. The most current, stable version of BlogEngine.NET is version 2.0. Find out more about the BlogEngine.NET 2.5 RC here. If you want to extend or modify BlogEngine.NET, you should download the source code. To get started, be sure to check out our installation documentation. If you are upgrading from a previous version, please take a look at ...New Projects6_6_6_w_m_s_open: jwervxsdfcfcf: cfcfChairforce hackathon project: project for hackathonDot Net Nuke Ajax Modules: This is a small collection of modules I think on once in a while which intend to improve a little dnn's user experience.Gnosis Game Engine: A simple game engine for the XNA 4.0 frame work that I am working on, mostly as a learning experience. I found that XNA game engines either require you to pay or are XNA 4.0 incompatible, and so this is my solution to that problem.KA_WindowsPhone7_Samples: Sample Code for Windows Phone 7 from http://kevinashley.comKinect MIDI Controller: This tool allows you to use a Kinect Sensor as a MIDI Controller for your Digital Audio Workbench. The tool is written in C#, and uses Microsoft Kinect SDK. 
Mosaic Project: Mosaic is an application that brings Metro UI to your desktop by live widgets.Movie Gate: A movie database that is also able to play the movies with your favorit media player.Musical Collective: An open-source web service that enables Musicians to collaborate on songs. Written in ASP.NET MVC (C#).NcADS-MVC: Clasificados MVCPokeTD: Ein kleines 2D Pokemon Tower-Defense Spiel. Es ist in C# und XNA geschrieben.PRO-TOKOL: PRO-TOKOL Server is a Programmable Logic Controller communication driver. The project is 100% coded in .NET Managed code. So, the dll can be included in any .NET project. The project uses the Microsoft Workflow Foundation to implement the DF1 Receiver and Transmitter logic.ShumaDaf: small project for display movies info directly from file structure using mymovies.xml. program create one simple xml file and display it!Silverlight Policy Service: The windows service act as a server and listens on TCP port 943 using IPv4 and IPv6. The socket policy included in the project allows all silverlight client applications to connect to TCP ports 4502-4506.SkinObject Module Wrapper: The SkinObject Module Wrapper is a DotNetNuke module that will allow you to add any DNN SkinObject to a page dinamically as if it was a DNN Module. Without any skin modification you can now inject new SkinObjects to you pages, configure the properties and change them on the fly.SkyNet0.3: Program that one should not be able to close.Team Zero Game One: SVN for the personal project(s) of Team Zero - Game One. We are creating a free game in HTML5 canvas using the CAKE api, found here: http://code.google.com/p/cakejs/ The game is about programming a small robot to move through a maze, sneaking past guards and other obstacles, using event-based programming. We've seen a number of games that allow you to "program" a character, and thought it would be interesting to do a different take on it. The game is still in early production, and actively ...Test-Driven Scaffolding (TDS): TDS helps developers of C# function members (methods, indexers, etc.) to quickly write drivers for code under development; these can easily be converted later to NUnit tests. TDS consists of C# code that can be pasted into a new or existing project and removed when no longer needed.Usage Agent: The Usage Agent toolset is designed to help manage your ISP data usage without having to log into your ISP usage page. It can optionally monitor your network card throughput and produce reports on usage. Developed in VB.NET.

    Read the article

  • PhpMyAdmin Hangs On MySQL Error

    - by user75228
    I'm currently running PhpMyAdmin 4.0.10 (the latest version supporting PHP 4.2.X) on my Amazon EC2 connecting to a MySQL database on RDS. Everything works perfectly fine except actions that return a mysql error message. Whether I perform "any" kind of action that will return a mysql error, Phpmyadmin will hang with the yellow "Loading" box forever without displaying anything. For example, if I perform the following command in MySQL CLI : select * from 123; It instantly returns the following error : ERROR 1064 (42000): You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near '123' at line 1 which is completely normal because table 123 doesn't exist. However, if I execute the exact same command in the "SQL" box in Phpmyadmin, after I click "Go" it'll display "Loading" and stops there forever. Has anyone ever encountered this kind of issue with Phpmyadmin? Is this a bug or I have something wrong with my config.inc.php? Any help would be much appreciated. I also noticed these error messages in my apache error logs : /opt/apache/bin/httpd: symbol lookup error: /opt/php/lib/php/extensions/no-debug-non-zts-20060613/iconv.so: undefined symbol: libiconv_open /opt/apache/bin/httpd: symbol lookup error: /opt/php/lib/php/extensions/no-debug-non-zts-20060613/iconv.so: undefined symbol: libiconv_open /opt/apache/bin/httpd: symbol lookup error: /opt/php/lib/php/extensions/no-debug-non-zts-20060613/iconv.so: undefined symbol: libiconv_open Below are my config.inc.php settings : <?php /* vim: set expandtab sw=4 ts=4 sts=4: */ /** * phpMyAdmin sample configuration, you can use it as base for * manual configuration. For easier setup you can use setup/ * * All directives are explained in documentation in the doc/ folder * or at <http://docs.phpmyadmin.net/>. * * @package PhpMyAdmin */ /* * This is needed for cookie based authentication to encrypt password in * cookie */ $cfg['blowfish_secret'] = 'something_random'; /* YOU MUST FILL IN THIS FOR COOKIE AUTH! */ /* * Servers configuration */ $i = 0; /* * First server */ $i++; /* Authentication type */ $cfg['Servers'][$i]['auth_type'] = 'cookie'; /* Server parameters */ $cfg['Servers'][$i]['host'] = '*.rds.amazonaws.com'; $cfg['Servers'][$i]['connect_type'] = 'tcp'; $cfg['Servers'][$i]['compress'] = true; /* Select mysql if your server does not have mysqli */ $cfg['Servers'][$i]['extension'] = 'mysqli'; $cfg['Servers'][$i]['AllowNoPassword'] = false; $cfg['LoginCookieValidity'] = '3600'; /* * phpMyAdmin configuration storage settings. 
*/ /* User used to manipulate with storage */ $cfg['Servers'][$i]['controlhost'] = '*.rds.amazonaws.com'; $cfg['Servers'][$i]['controluser'] = 'pma'; $cfg['Servers'][$i]['controlpass'] = 'password'; /* Storage database and tables */ $cfg['Servers'][$i]['pmadb'] = 'phpmyadmin'; $cfg['Servers'][$i]['bookmarktable'] = 'pma__bookmark'; $cfg['Servers'][$i]['relation'] = 'pma__relation'; $cfg['Servers'][$i]['table_info'] = 'pma__table_info'; $cfg['Servers'][$i]['table_coords'] = 'pma__table_coords'; $cfg['Servers'][$i]['pdf_pages'] = 'pma__pdf_pages'; $cfg['Servers'][$i]['column_info'] = 'pma__column_info'; $cfg['Servers'][$i]['history'] = 'pma__history'; $cfg['Servers'][$i]['table_uiprefs'] = 'pma__table_uiprefs'; $cfg['Servers'][$i]['tracking'] = 'pma__tracking'; $cfg['Servers'][$i]['designer_coords'] = 'pma__designer_coords'; $cfg['Servers'][$i]['userconfig'] = 'pma__userconfig'; $cfg['Servers'][$i]['recent'] = 'pma__recent'; /* Contrib / Swekey authentication */ // $cfg['Servers'][$i]['auth_swekey_config'] = '/etc/swekey-pma.conf'; /* * End of servers configuration */ /* * Directories for saving/loading files from server */ $cfg['UploadDir'] = ''; $cfg['SaveDir'] = ''; /** * Defines whether a user should be displayed a "show all (records)" * button in browse mode or not. * default = false */ //$cfg['ShowAll'] = true; /** * Number of rows displayed when browsing a result set. If the result * set contains more rows, "Previous" and "Next". * default = 30 */ $cfg['MaxRows'] = 50; /** * disallow editing of binary fields * valid values are: * false allow editing * 'blob' allow editing except for BLOB fields * 'noblob' disallow editing except for BLOB fields * 'all' disallow editing * default = blob */ //$cfg['ProtectBinary'] = 'false'; /** * Default language to use, if not browser-defined or user-defined * (you find all languages in the locale folder) * uncomment the desired line: * default = 'en' */ //$cfg['DefaultLang'] = 'en'; //$cfg['DefaultLang'] = 'de'; /** * default display direction (horizontal|vertical|horizontalflipped) */ //$cfg['DefaultDisplay'] = 'vertical'; /** * How many columns should be used for table display of a database? * (a value larger than 1 results in some information being hidden) * default = 1 */ //$cfg['PropertiesNumColumns'] = 2; /** * Set to true if you want DB-based query history.If false, this utilizes * JS-routines to display query history (lost by window close) * * This requires configuration storage enabled, see above. * default = false */ //$cfg['QueryHistoryDB'] = true; /** * When using DB-based query history, how many entries should be kept? * * default = 25 */ //$cfg['QueryHistoryMax'] = 100; /* * You can find more configuration options in the documentation * in the doc/ folder or at <http://docs.phpmyadmin.net/>. */ ?>

    Read the article

  • Silverlight 4 Twitter Client &ndash; Part 3

    - by Max
    Finally, Silverlight 4 RC is released, and Windows Phone 7 Series will rely heavily on the Silverlight platform for its apps. It's really good news for Silverlight developers and designers. More information on this here. You can use SL 4 RC with VS 2010. SL 4 RC does not come with VS 2010; you need to download it separately and install it. So for the next part, be ready with VS 2010 and SL 4 RC, as we will start using them. With this momentum, let us go to the next part of our twitter client tutorial. This tutorial will cover setting your status in Twitter and also retrieving your friends' timeline.

    1) As everything in Silverlight is asynchronous, we need some visual representation showing that something is going on in the background. So what I did was to create a progress bar with an indeterminate animation. The XAML is below.

        <ProgressBar Maximum="100" Width="300" Height="50" Margin="20" Visibility="Collapsed"
                     IsIndeterminate="True" Name="progressBar1"
                     VerticalAlignment="Center" HorizontalAlignment="Center" />

    2) I will be toggling this progress bar to show the background work. So I wrote this small method, which I use to toggle the visibility of the progress bar. Just pass a bool to this method and it will toggle the visibility based on its current state.

        public void toggleProgressBar(bool Option)
        {
            if (Option)
            {
                if (progressBar1.Visibility == System.Windows.Visibility.Collapsed)
                    progressBar1.Visibility = System.Windows.Visibility.Visible;
            }
            else
            {
                if (progressBar1.Visibility == System.Windows.Visibility.Visible)
                    progressBar1.Visibility = System.Windows.Visibility.Collapsed;
            }
        }

    3) Now let us create a grid to hold a textbox and an update button. The XAML will look something like this:

        <Grid HorizontalAlignment="Center">
            <Grid.RowDefinitions>
                <RowDefinition Height="50"></RowDefinition>
            </Grid.RowDefinitions>
            <Grid.ColumnDefinitions>
                <ColumnDefinition Width="400"></ColumnDefinition>
                <ColumnDefinition Width="200"></ColumnDefinition>
            </Grid.ColumnDefinitions>
            <TextBox Name="TwitterStatus" Width="380" Height="50"></TextBox>
            <Button Name="UpdateStatus" Content="Update" Grid.Row="1" Grid.Column="2" Width="200" Height="50" Click="UpdateStatus_Click"></Button>
        </Grid>

    4) The click handler for this update button will again use the WebClient, this time to post values. The code is:

        private void UpdateStatus_Click(object sender, RoutedEventArgs e)
        {
            toggleProgressBar(true);
            string statusupdate = "status=" + TwitterStatus.Text;
            WebRequest.RegisterPrefix("https://", System.Net.Browser.WebRequestCreator.ClientHttp);

            WebClient myService = new WebClient();
            myService.AllowReadStreamBuffering = true;
            myService.UseDefaultCredentials = false;
            myService.Credentials = new NetworkCredential(GlobalVariable.getUserName(), GlobalVariable.getPassword());

            myService.UploadStringCompleted += new UploadStringCompletedEventHandler(myService_UploadStringCompleted);
            myService.UploadStringAsync(new Uri("https://twitter.com/statuses/update.xml"), statusupdate);

            this.Dispatcher.BeginInvoke(() => ClearTextBoxValue());
        }

    5) In the above code, we have an event handler which is fired once the request is completed (remember, Silverlight is asynchronous!). So in myService_UploadStringCompleted we will just toggle the progress bar and change some status text to say that it's done. The code for this is below; StatusMessage is just another TextBlock conveniently positioned on the page.
void myService_UploadStringCompleted(object sender, UploadStringCompletedEventArgs e){ if (e.Error != null) { StatusMessage.Text = "Status Update Failed: " + e.Error.Message.ToString(); } else { toggleProgressBar(false); TwitterCredentialsSubmit(); }} 6) Now let us look at fetching the friends updates of the logged in user and displaying it in a datagrid. So just define a data grid and set its autogenerate columns as true. 7) Let us first create a data structure for use with fetching the friends timeline. The code is something like below: namespace MaxTwitter.Classes{ public class Status { public Status() {} public string ID { get; set; } public string Text { get; set; } public string Source { get; set; } public string UserID { get; set; } public string UserName { get; set; } }} You can add as many fields as you want, for the list of fields, have a look at here. It will ask for your Twitter username and password, just provide them and this will display the xml file. Go through them pick and choose your desired fields and include in your Data Structure. 8) Now the web client request for this is similar to the one we saw in step 4. Just change the uri in the last but one step to https://twitter.com/statuses/friends_timeline.xml Be sure to change the event handler to something else and within that we will use XLINQ to fetch the required details for us. Now let us how this event handler fetches details. public void parseXML(string text){ XDocument xdoc; if(text.Length> 0) xdoc = XDocument.Parse(text); else xdoc = XDocument.Parse(@"I USED MY OWN LOCAL COPY OF XML FILE HERE FOR OFFLINE TESTING"); statusList = new List<Status>(); statusList = (from status in xdoc.Descendants("status") select new Status { ID = status.Element("id").Value, Text = status.Element("text").Value, Source = status.Element("source").Value, UserID = status.Element("user").Element("id").Value, UserName = status.Element("user").Element("screen_name").Value, }).ToList(); //MessageBox.Show(text); //this.Dispatcher.BeginInvoke(() => CallDatabindMethod(StatusCollection)); //MessageBox.Show(statusList.Count.ToString()); DataGridStatus.ItemsSource = statusList; StatusMessage.Text = "Datagrid refreshed."; toggleProgressBar(false);} in the event handler, we call this method with e.Result.ToString() Parsing XML files using LINQ is super cool, I love it.   I am stopping it here for  this post. Will post the completed files in next post, as I’ve worked on a few more features in this page and don’t want to confuse you. See you soon in my next post where will play with Twitter lists. Have a nice day! Technorati Tags: Silverlight,LINQ,XLINQ,Twitter API,Twitter,Network Credentials
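    The post above does not show the friends timeline request itself; as a rough sketch only, step 8 could look like the following, reusing the post's toggleProgressBar, parseXML, StatusMessage and GlobalVariable helpers (the method name FetchFriendsTimeline and the exact wiring are assumptions, not the author's posted code):

    private void FetchFriendsTimeline()
    {
        toggleProgressBar(true);
        // Same client HTTP stack and credentials as the status update in step 4.
        WebRequest.RegisterPrefix("https://", System.Net.Browser.WebRequestCreator.ClientHttp);
        WebClient timelineClient = new WebClient();
        timelineClient.UseDefaultCredentials = false;
        timelineClient.Credentials = new NetworkCredential(GlobalVariable.getUserName(), GlobalVariable.getPassword());
        // Fires once the whole response body has been downloaded.
        timelineClient.DownloadStringCompleted += (s, e) =>
        {
            if (e.Error == null)
                parseXML(e.Result.ToString()); // hand the XML to the LINQ to XML parser above
            else
                StatusMessage.Text = "Timeline fetch failed: " + e.Error.Message;
        };
        timelineClient.DownloadStringAsync(new Uri("https://twitter.com/statuses/friends_timeline.xml"));
    }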

    Read the article

  • Reading XML Content

    using System; using System.Collections.Generic; using System.Linq; using System.Text; using System.Xml.Linq; using System.Diagnostics; using System.Threading; using System.Xml; using System.Reflection; namespace XMLReading { class Program { static void Main(string[] args) { string fileName = @"C:\temp\t.xml"; List<EmergencyContactXMLDTO> emergencyContacts = new XmlReader<EmergencyContactXMLDTO, EmergencyContactXMLDTOMapper>().Read(fileName); foreach (var item in emergencyContacts) { Console.WriteLine(item.FileNb); } } } public class XmlReader<TDTO, TMAPPER> where TDTO : BaseDTO, new() where TMAPPER : PCPWXMLDTOMapper, new() { public List<TDTO> Read(String fileName) { XmlTextReader reader = new XmlTextReader(fileName); List<TDTO> emergencyContacts = new List<TDTO>(); while (true) { TMAPPER mapper = new TMAPPER(); bool isFound = SeekElement(reader, mapper.GetMainXMLTagName()); if (!isFound) break; TDTO dto = new TDTO(); foreach (var propertyKey in mapper.GetPropertyXMLMap()) { String dtoPropertyName = propertyKey.Key; String xmlPropertyName = propertyKey.Value; SeekElement(reader, xmlPropertyName); SetValue(dto, dtoPropertyName, reader.ReadElementString()); } emergencyContacts.Add(dto); } return emergencyContacts; } private void SetValue(Object dto, String propertyName, String value) { PropertyInfo prop = dto.GetType().GetProperty(propertyName, BindingFlags.Public | BindingFlags.Instance); prop.SetValue(dto, value, null); } private bool SeekElement(XmlTextReader reader, String elementName) { while (reader.Read()) { XmlNodeType nodeType = reader.MoveToContent(); if (nodeType != XmlNodeType.Element) { continue; } if (reader.Name == elementName) { return true; } } return false; } } public class BaseDTO { } public class EmergencyContactXMLDTO : BaseDTO { public string FileNb { get; set; } public string ContactName { get; set; } public string ContactPhoneNumber { get; set; } public string Relationship { get; set; } public string DoctorName { get; set; } public string DoctorPhoneNumber { get; set; } public string HospitalName { get; set; } } public interface PCPWXMLDTOMapper { Dictionary<string, string> GetPropertyXMLMap(); String GetMainXMLTagName(); } public class EmergencyContactXMLDTOMapper : PCPWXMLDTOMapper { public Dictionary<string, string> GetPropertyXMLMap() { return new Dictionary<string, string> { { "FileNb", "XFileNb" }, { "ContactName", "XContactName" }, { "ContactPhoneNumber", "XContactPhoneNumber" }, { "Relationship", "XRelationship" }, { "DoctorName", "XDoctorName" }, { "DoctorPhoneNumber", "XDoctorPhoneNumber" }, { "HospitalName", "XHospitalName" }, }; } public String GetMainXMLTagName() { return "EmergencyContact"; } } }
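    Because the reader walks forward through the stream with SeekElement, it expects each record's child elements in the same order as the mapper dictionary. The post does not include the input file, so the snippet below writes a purely hypothetical sample (made-up names and numbers, and an assumed <EmergencyContacts> root element) to C:\temp\t.xml so that Main has something to parse:

    using System.IO;

    class SampleInput
    {
        static void WriteSample()
        {
            // Hypothetical records; element names and their order match EmergencyContactXMLDTOMapper.
            const string xml = @"<?xml version=""1.0""?>
    <EmergencyContacts>
      <EmergencyContact>
        <XFileNb>1001</XFileNb>
        <XContactName>Jane Doe</XContactName>
        <XContactPhoneNumber>555-0100</XContactPhoneNumber>
        <XRelationship>Spouse</XRelationship>
        <XDoctorName>Dr. Smith</XDoctorName>
        <XDoctorPhoneNumber>555-0101</XDoctorPhoneNumber>
        <XHospitalName>General Hospital</XHospitalName>
      </EmergencyContact>
      <EmergencyContact>
        <XFileNb>1002</XFileNb>
        <XContactName>John Roe</XContactName>
        <XContactPhoneNumber>555-0200</XContactPhoneNumber>
        <XRelationship>Parent</XRelationship>
        <XDoctorName>Dr. Jones</XDoctorName>
        <XDoctorPhoneNumber>555-0201</XDoctorPhoneNumber>
        <XHospitalName>City Clinic</XHospitalName>
      </EmergencyContact>
    </EmergencyContacts>";
            Directory.CreateDirectory(@"C:\temp");
            File.WriteAllText(@"C:\temp\t.xml", xml);
        }
    }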

    Read the article

  • Making an asynchronous Client with boost::asio

    - by tag
    Hello, i'm trying to make an asynchronous Client with boost::asio, i use the daytime asynchronous Server(in the tutorial). However sometimes the Client don't receive the Message, sometimes it do :O I'm sorry if this is too much Code, but i don't know what's wrong :/ Client: #include <iostream> #include <stdio.h> #include <ostream> #include <boost/thread.hpp> #include <boost/bind.hpp> #include <boost/array.hpp> #include <boost/asio.hpp> using namespace std; using boost::asio::ip::tcp; class TCPClient { public: TCPClient(boost::asio::io_service& IO_Service, tcp::resolver::iterator EndPointIter); void Write(); void Close(); private: boost::asio::io_service& m_IOService; tcp::socket m_Socket; boost::array<char, 128> m_Buffer; size_t m_BufLen; private: void OnConnect(const boost::system::error_code& ErrorCode, tcp::resolver::iterator EndPointIter); void OnReceive(const boost::system::error_code& ErrorCode); void DoClose(); }; TCPClient::TCPClient(boost::asio::io_service& IO_Service, tcp::resolver::iterator EndPointIter) : m_IOService(IO_Service), m_Socket(IO_Service) { tcp::endpoint EndPoint = *EndPointIter; m_Socket.async_connect(EndPoint, boost::bind(&TCPClient::OnConnect, this, boost::asio::placeholders::error, ++EndPointIter)); } void TCPClient::Close() { m_IOService.post( boost::bind(&TCPClient::DoClose, this)); } void TCPClient::OnConnect(const boost::system::error_code& ErrorCode, tcp::resolver::iterator EndPointIter) { if (ErrorCode == 0) // Successful connected { m_Socket.async_receive(boost::asio::buffer(m_Buffer.data(), m_BufLen), boost::bind(&TCPClient::OnReceive, this, boost::asio::placeholders::error)); } else if (EndPointIter != tcp::resolver::iterator()) { m_Socket.close(); tcp::endpoint EndPoint = *EndPointIter; m_Socket.async_connect(EndPoint, boost::bind(&TCPClient::OnConnect, this, boost::asio::placeholders::error, ++EndPointIter)); } } void TCPClient::OnReceive(const boost::system::error_code& ErrorCode) { if (ErrorCode == 0) { std::cout << m_Buffer.data() << std::endl; m_Socket.async_receive(boost::asio::buffer(m_Buffer.data(), m_BufLen), boost::bind(&TCPClient::OnReceive, this, boost::asio::placeholders::error)); } else { DoClose(); } } void TCPClient::DoClose() { m_Socket.close(); } int main() { try { boost::asio::io_service IO_Service; tcp::resolver Resolver(IO_Service); tcp::resolver::query Query("127.0.0.1", "daytime"); tcp::resolver::iterator EndPointIterator = Resolver.resolve(Query); TCPClient Client(IO_Service, EndPointIterator); boost::thread ClientThread( boost::bind(&boost::asio::io_service::run, &IO_Service)); std::cout << "Client started." << std::endl; std::string Input; while (Input != "exit") { std::cin >> Input; } Client.Close(); ClientThread.join(); } catch (std::exception& e) { std::cerr << e.what() << std::endl; } } Server: http://www.boost.org/doc/libs/1_39_0/doc/html/boost_asio/tutorial/tutdaytime3/src.html Regards :)

    Read the article

  • WCF Data Service BeginSaveChanges not saving changes in Silverlight app

    - by Enigmativity
    I'm having a hell of a time getting WCF Data Services to work within Silverlight. I'm using the VS2010 RC. I've struggled with the cross domain issue requiring the use of clientaccesspolicy.xml & crossdomain.xml files in the web server root folder, but I just couldn't get this to work. I've resorted to putting both the Silverlight Web App & the WCF Data Service in the same project to get past this issue, but any advice here would be good. But now that I can actually see my data coming from the database and being displayed in a data grid within Silverlight I thought my troubles were over - but no. I can edit the data and the in-memory entity is changing, but when I call BeginSaveChanges (with the appropriate async EndSaveChangescall) I get no errors, but no data updates in the database. Here's my WCF Data Services code: public class MyDataService : DataService<MyEntities> { public static void InitializeService(DataServiceConfiguration config) { config.SetEntitySetAccessRule("*", EntitySetRights.All); config.SetServiceOperationAccessRule("*", ServiceOperationRights.All); config.DataServiceBehavior.MaxProtocolVersion = DataServiceProtocolVersion.V2; } protected override void OnStartProcessingRequest(ProcessRequestArgs args) { base.OnStartProcessingRequest(args); HttpContext context = HttpContext.Current; HttpCachePolicy c = HttpContext.Current.Response.Cache; c.SetCacheability(HttpCacheability.ServerAndPrivate); c.SetExpires(HttpContext.Current.Timestamp.AddSeconds(60)); c.VaryByHeaders["Accept"] = true; c.VaryByHeaders["Accept-Charset"] = true; c.VaryByHeaders["Accept-Encoding"] = true; c.VaryByParams["*"] = true; } } I've pinched the OnStartProcessingRequest code from Scott Hanselman's article Creating an OData API for StackOverflow including XML and JSON in 30 minutes. Here's my code from my Silverlight app: private MyEntities _wcfDataServicesEntities; private CollectionViewSource _customersViewSource; private ObservableCollection<Customer> _customers; private void UserControl_Loaded(object sender, RoutedEventArgs e) { if (!System.ComponentModel.DesignerProperties.GetIsInDesignMode(this)) { _wcfDataServicesEntities = new MyEntities(new Uri("http://localhost:7156/MyDataService.svc/")); _customersViewSource = this.Resources["customersViewSource"] as CollectionViewSource; DataServiceQuery<Customer> query = _wcfDataServicesEntities.Customer; query.BeginExecute(result => { _customers = new ObservableCollection<Customer>(); Array.ForEach(query.EndExecute(result).ToArray(), _customers.Add); Dispatcher.BeginInvoke(() => { _customersViewSource.Source = _customers; }); }, null); } } private void button1_Click(object sender, RoutedEventArgs e) { _wcfDataServicesEntities.BeginSaveChanges(r => { var response = _wcfDataServicesEntities.EndSaveChanges(r); string[] results = new[] { response.BatchStatusCode.ToString(), response.IsBatchResponse.ToString() }; _customers[0].FinAssistCompanyName = String.Join("|", results); }, null); } The response string I get back data binds to my grid OK and shows "-1|False". My intent is to get a proof-of-concept working here and then do the appropriate separation of concerns to turn this into a simple line-of-business app. I've spent hours and hours on this. I'm being driven insane. Any ideas how to get this working?
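    One thing worth checking here, offered as a likely cause rather than a confirmed diagnosis: the WCF Data Services client context does not watch property setters on entities held in a plain ObservableCollection, so every entity edited through the grid has to be flagged with UpdateObject before BeginSaveChanges; otherwise the save completes without error but with an empty changeset, and nothing reaches the database. A sketch of the button handler with that call added, using the same field names as above:

    private void button1_Click(object sender, RoutedEventArgs e)
    {
        // Flag the edited entities (here, crudely, all of them); without UpdateObject
        // the context has no pending changes and BeginSaveChanges sends nothing.
        foreach (Customer customer in _customers)
        {
            _wcfDataServicesEntities.UpdateObject(customer);
        }

        _wcfDataServicesEntities.BeginSaveChanges(r =>
        {
            DataServiceResponse response = _wcfDataServicesEntities.EndSaveChanges(r);
            Dispatcher.BeginInvoke(() =>
            {
                // Inspect the per-operation status codes rather than the batch-level flags.
                foreach (ChangeOperationResponse change in response)
                {
                    System.Diagnostics.Debug.WriteLine("Change status: " + change.StatusCode);
                }
            });
        }, null);
    }

    Loading the query results into a DataServiceCollection<Customer> instead of a plain ObservableCollection is another option, since that collection type reports property changes back to the context automatically.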

    Read the article

  • Linker error when compiling boost.asio example

    - by Alon
    Hi, I'm trying to learn a little bit C++ and Boost.Asio. I'm trying to compile the following code example: #include <iostream> #include <boost/array.hpp> #include <boost/asio.hpp> using boost::asio::ip::tcp; int main(int argc, char* argv[]) { try { if (argc != 2) { std::cerr << "Usage: client <host>" << std::endl; return 1; } boost::asio::io_service io_service; tcp::resolver resolver(io_service); tcp::resolver::query query(argv[1], "daytime"); tcp::resolver::iterator endpoint_iterator = resolver.resolve(query); tcp::resolver::iterator end; tcp::socket socket(io_service); boost::system::error_code error = boost::asio::error::host_not_found; while (error && endpoint_iterator != end) { socket.close(); socket.connect(*endpoint_iterator++, error); } if (error) throw boost::system::system_error(error); for (;;) { boost::array<char, 128> buf; boost::system::error_code error; size_t len = socket.read_some(boost::asio::buffer(buf), error); if (error == boost::asio::error::eof) break; // Connection closed cleanly by peer. else if (error) throw boost::system::system_error(error); // Some other error. std::cout.write(buf.data(), len); } } catch (std::exception& e) { std::cerr << e.what() << std::endl; } return 0; } With the following command line: g++ -I /usr/local/boost_1_42_0 a.cpp and it throws an unclear error: /tmp/ccCv9ZJA.o: In function `__static_initialization_and_destruction_0(int, int)': a.cpp:(.text+0x654): undefined reference to `boost::system::get_system_category()' a.cpp:(.text+0x65e): undefined reference to `boost::system::get_generic_category()' a.cpp:(.text+0x668): undefined reference to `boost::system::get_generic_category()' a.cpp:(.text+0x672): undefined reference to `boost::system::get_generic_category()' a.cpp:(.text+0x67c): undefined reference to `boost::system::get_system_category()' /tmp/ccCv9ZJA.o: In function `boost::system::error_code::error_code()': a.cpp:(.text._ZN5boost6system10error_codeC2Ev[_ZN5boost6system10error_codeC5Ev]+0x10): undefined reference to `boost::system::get_system_category()' /tmp/ccCv9ZJA.o: In function `boost::asio::error::get_system_category()': a.cpp:(.text._ZN5boost4asio5error19get_system_categoryEv[boost::asio::error::get_system_category()]+0x7): undefined reference to `boost::system::get_system_category()' /tmp/ccCv9ZJA.o: In function `boost::asio::detail::posix_thread::~posix_thread()': a.cpp:(.text._ZN5boost4asio6detail12posix_threadD2Ev[_ZN5boost4asio6detail12posix_threadD5Ev]+0x1d): undefined reference to `pthread_detach' /tmp/ccCv9ZJA.o: In function `boost::asio::detail::posix_thread::join()': a.cpp:(.text._ZN5boost4asio6detail12posix_thread4joinEv[boost::asio::detail::posix_thread::join()]+0x25): undefined reference to `pthread_join' /tmp/ccCv9ZJA.o: In function `boost::asio::detail::posix_tss_ptr<boost::asio::detail::call_stack<boost::asio::detail::task_io_service<boost::asio::detail::epoll_reactor<false> > >::context>::~posix_tss_ptr()': a.cpp:(.text._ZN5boost4asio6detail13posix_tss_ptrINS1_10call_stackINS1_15task_io_serviceINS1_13epoll_reactorILb0EEEEEE7contextEED2Ev[_ZN5boost4asio6detail13posix_tss_ptrINS1_10call_stackINS1_15task_io_serviceINS1_13epoll_reactorILb0EEEEEE7contextEED5Ev]+0xf): undefined reference to `pthread_key_delete' /tmp/ccCv9ZJA.o: In function `boost::asio::detail::posix_tss_ptr<boost::asio::detail::call_stack<boost::asio::detail::task_io_service<boost::asio::detail::epoll_reactor<false> > >::context>::posix_tss_ptr()': 
a.cpp:(.text._ZN5boost4asio6detail13posix_tss_ptrINS1_10call_stackINS1_15task_io_serviceINS1_13epoll_reactorILb0EEEEEE7contextEEC2Ev[_ZN5boost4asio6detail13posix_tss_ptrINS1_10call_stackINS1_15task_io_serviceINS1_13epoll_reactorILb0EEEEEE7contextEEC5Ev]+0x22): undefined reference to `pthread_key_create' collect2: ld returned 1 exit status How can I fix it? Thank you.

    Read the article

  • How to Access a descendant object's internal method in C#

    - by Giovanni Galbo
    I'm trying to access a method that is marked as internal in the parent class (which lives in its own assembly) from another object that inherits from the same parent. Let me explain what I'm trying to do... I want to create Service classes that return IEnumerable with an underlying List to non-Service classes (e.g. the UI) and optionally return an IEnumerable with an underlying IQueryable to other services. I wrote some sample code to demonstrate what I'm trying to accomplish, shown below. The example is not real life, so please remember that when commenting. All services would inherit from something like this (only relevant code shown): public class ServiceBase<T> { protected readonly ObjectContext _context; protected string _setName = String.Empty; public ServiceBase(ObjectContext context) { _context = context; } public IEnumerable<T> GetAll() { return GetAll(false); } //These are not the correct access modifiers.. I want something //that is accessible to children classes AND between descendant classes internal protected IEnumerable<T> GetAll(bool returnQueryable) { var query = _context.CreateQuery<T>(GetSetName()); if(returnQueryable) { return query; } else { return query.ToList(); } } private string GetSetName() { //Some code... return _setName; } } Inherited services would look like this: public class EmployeeService : ServiceBase<Employees> { public EmployeeService(ObjectContext context) : base(context) { } } public class DepartmentService : ServiceBase<Departments> { private readonly EmployeeService _employeeService; public DepartmentService(ObjectContext context, EmployeeService employeeService) : base(context) { _employeeService = employeeService; } public IList<Departments> DoSomethingWithEmployees(string lastName) { //won't work because method with this signature is not visible to this class var emps = _employeeService.GetAll(true); //more code... } } Because the parent class is reusable, it would live in a different assembly than the child services. With GetAll(bool returnQueryable) being marked internal, the children would not be able to see each other's GetAll(bool) method, just the public GetAll() method. I know that I can add a new internal GetAll method to each service (or perhaps an intermediary parent class within the same assembly) so that each child service within the assembly can see each other's method; but it seems unnecessary since the functionality is already available in the parent class. For example: internal IEnumerable<Employees> GetAll(bool returnIQueryable) { return base.GetAll(returnIQueryable); } Essentially what I want is for services to be able to access other service methods as IQueryable so that they can further refine the uncommitted results, while everyone else gets plain old lists. Any ideas? EDIT You know what, I had some fun playing a little code golf with this... but ultimately I wouldn't be able to use this scheme anyway because I pass interfaces around, not classes. So in my example GetAll(bool returnIQueryable) would not be in the interface, meaning I'd have to do casting, which goes against what I'm trying to accomplish. I'm not sure if I had a brain fart or if I was just too excited trying to get something that I thought was neat to work. Either way, thanks for the responses.
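    For the pre-EDIT version of the question, one common way to keep GetAll(bool) internal to the base assembly and still call it from the assembly that holds the services is the InternalsVisibleTo attribute; a minimal sketch, with a hypothetical assembly name:

    // In the assembly that contains ServiceBase<T>, e.g. in Properties/AssemblyInfo.cs.
    // "MyApp.Services" is a placeholder; use the real name of the assembly that holds
    // EmployeeService and DepartmentService. If that assembly is strong-named, the
    // attribute also needs its full public key.
    using System.Runtime.CompilerServices;

    [assembly: InternalsVisibleTo("MyApp.Services")]

    With that in place, the internal half of the protected internal GetAll(bool) becomes visible inside the services assembly, so _employeeService.GetAll(true) compiles without wrapper methods. It does not address the interface issue raised in the EDIT, though, since the member would still have to appear on an internal interface to be reachable through an interface reference.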

    Read the article

  • SqlBulkCopy is slow, doesn't utilize full network speed

    - by Alex
    Hi, for that past couple of weeks I have been creating generic script that is able to copy databases. The goal is to be able to specify any database on some server and copy it to some other location, and it should only copy the specified content. The exact content to be copied over is specified in a configuration file. This script is going to be used on some 10 different databases and run weekly. And in the end we are copying only about 3%-20% of databases which are as large as 500GB. I have been using the SMO assemblies to achieve this. This is my first time working with SMO and it took a while to create generic way to copy the schema objects, filegroups ...etc. (Actually helped find some bad stored procs). Overall I have a working script which is lacking on performance (and at times times out) and was hoping you guys would be able to help. When executing the WriteToServer command to copy large amount of data ( 6GB) it reaches my timeout period of 1hr. Here is the core code for copying table data. The script is written in PowerShell. $query = ("SELECT * FROM $selectedTable " + $global:selectiveTables.Get_Item($selectedTable)).Trim() Write-LogOutput "Copying $selectedTable : '$query'" $cmd = New-Object Data.SqlClient.SqlCommand -argumentList $query, $source $cmd.CommandTimeout = 120; $bulkData = ([Data.SqlClient.SqlBulkCopy]$destination) $bulkData.DestinationTableName = $selectedTable; $bulkData.BulkCopyTimeout = $global:tableCopyDataTimeout # = 3600 $reader = $cmd.ExecuteReader(); $bulkData.WriteToServer($reader); # Takes forever here on large tables The source and target databases are located on different servers so I kept track of the network speed as well. The network utilization never went over 1% which was quite surprising to me. But when I just transfer some large files between the servers, the network utilization spikes up to 10%. I have tried setting the $bulkData.BatchSize to 5000 but nothing really changed. Increasing the BulkCopyTimeout to an even greater amount would only solve the timeout. I really would like to know why the network is not being used fully. Anyone else had this problem? Any suggestions on networking or bulk copy will be appreciated. And please let me know if you need more information. Thanks. UPDATE I have tweaked several options that increase the performance of SqlBulkCopy, such as setting the transaction logging to simple and providing a table lock to SqlBulkCopy instead of the default row lock. Also some tables are better optimized for certain batch sizes. Overall, the duration of the copy was decreased by some 15%. And what we will do is execute the copy of each database simultaneously on different servers. But I am still having a timeout issue when copying one of the databases. When copying one of the larger databases, there is a table for which I consistently get the following exception: System.Data.SqlClient.SqlException: Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding. It is thrown about 16 after it starts copying the table which is no where near my BulkCopyTimeout. Even though I get the exception that table is fully copied in the end. Also, if I truncate that table and restart my process for that table only, the tables is copied over without any issues. But going through the process of copying that entire database fails always for that one table. I have tried executing the entire process and reseting the connection before copying that faulty table, but it still errored out. 
My SqlBulkCopy and Reader are closed after each table. Any suggestions as to what else could be causing the script to fail at the point each time?
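    For reference, the tuning described in the update (a table lock instead of row locks, an explicit batch size, and no hard bulk-copy timeout) looks roughly like this when written directly in C# against the same classes the PowerShell script drives; the connection strings, query and table name are placeholders:

    using System.Data;
    using System.Data.SqlClient;

    static void CopyTable(string sourceConnStr, string destConnStr, string selectSql, string destTable)
    {
        using (var source = new SqlConnection(sourceConnStr))
        using (var cmd = new SqlCommand(selectSql, source))
        {
            source.Open();
            cmd.CommandTimeout = 0; // let the source SELECT run as long as it needs
            using (IDataReader reader = cmd.ExecuteReader())
            using (var bulk = new SqlBulkCopy(destConnStr,
                       SqlBulkCopyOptions.TableLock | SqlBulkCopyOptions.UseInternalTransaction))
            {
                // TableLock takes one bulk-update lock instead of per-row locks;
                // UseInternalTransaction commits each batch separately.
                bulk.DestinationTableName = destTable;
                bulk.BatchSize = 10000;   // tune per table; 0 means a single batch
                bulk.BulkCopyTimeout = 0; // 0 disables the client-side bulk-copy timeout
                bulk.WriteToServer(reader);
            }
        }
    }

    One detail that may matter for the remaining failure: the script sets the source command's CommandTimeout to 120 seconds, and that timeout is separate from BulkCopyTimeout and can also surface as a timeout exception while WriteToServer is still pulling rows from the reader.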

    Read the article

  • Get User SID From Logon ID (Windows XP and Up)

    - by Dave Ruske
    I have a Windows service that needs to access registry hives under HKEY_USERS when users log on, either locally or via Terminal Server. I'm using a WMI query on win32_logonsession to receive events when users log on, and one of the properties I get from that query is a LogonId. To figure out which registry hive I need to access, now, I need the users's SID, which is used as a registry key name beneath HKEY_USERS. In most cases, I can get this by doing a RelatedObjectQuery like so (in C#): RelatedObjectQuery relatedQuery = new RelatedObjectQuery( "associators of {Win32_LogonSession.LogonId='" + logonID + "'} WHERE AssocClass=Win32_LoggedOnUser Role=Dependent" ); where "logonID" is the logon session ID from the session query. Running the RelatedObjectQuery will generally give me a SID property that contains exactly what I need. There are two issues I have with this. First and most importantly, the RelatedObjectQuery will not return any results for a domain user that logs in with cached credentials, disconnected from the domain. Second, I'm not pleased with the performance of this RelatedObjectQuery --- it can take up to several seconds to execute. Here's a quick and dirty command line program I threw together to experiment with the queries. Rather than setting up to receive events, this just enumerates the users on the local machine: using System; using System.Collections.Generic; using System.Text; using System.Management; namespace EnumUsersTest { class Program { static void Main( string[] args ) { ManagementScope scope = new ManagementScope( "\\\\.\\root\\cimv2" ); string queryString = "select * from win32_logonsession"; // for all sessions //string queryString = "select * from win32_logonsession where logontype = 2"; // for local interactive sessions only ManagementObjectSearcher sessionQuery = new ManagementObjectSearcher( scope, new SelectQuery( queryString ) ); ManagementObjectCollection logonSessions = sessionQuery.Get(); foreach ( ManagementObject logonSession in logonSessions ) { string logonID = logonSession["LogonId"].ToString(); Console.WriteLine( "=== {0}, type {1} ===", logonID, logonSession["LogonType"].ToString() ); RelatedObjectQuery relatedQuery = new RelatedObjectQuery( "associators of {Win32_LogonSession.LogonId='" + logonID + "'} WHERE AssocClass=Win32_LoggedOnUser Role=Dependent" ); ManagementObjectSearcher userQuery = new ManagementObjectSearcher( scope, relatedQuery ); ManagementObjectCollection users = userQuery.Get(); foreach ( ManagementObject user in users ) { PrintProperties( user.Properties ); } } Console.WriteLine( "\nDone! Press a key to exit..." ); Console.ReadKey( true ); } private static void PrintProperty( PropertyData pd ) { string value = "null"; string valueType = "n/a"; if ( null == pd.Value ) value = "null"; if ( pd.Value != null ) { value = pd.Value.ToString(); valueType = pd.Value.GetType().ToString(); } Console.WriteLine( " \"{0}\" = ({1}) \"{2}\"", pd.Name, valueType, value ); } private static void PrintProperties( PropertyDataCollection properties ) { foreach ( PropertyData pd in properties ) { PrintProperty( pd ); } } } } So... is there way to quickly and reliably obtain the user SID given the information I retrieve from WMI, or should I be looking at using something like SENS instead?
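    If the account name (domain plus user) is available from the session event by some other route, the SID that names the hive under HKEY_USERS can be computed locally with NTAccount.Translate instead of the slow associators query. A small sketch of that lookup follows; whether it resolves for a domain user logged on with cached credentials depends on what the local machine can still look up, so that case needs testing:

    using System;
    using System.Security.Principal;

    static class SidLookup
    {
        // Translates DOMAIN\user into its SID string (e.g. "S-1-5-21-..."),
        // which matches the key name under HKEY_USERS.
        // Throws IdentityNotMappedException if the name cannot be resolved.
        public static string GetSidString(string domain, string userName)
        {
            var account = new NTAccount(domain, userName);
            var sid = (SecurityIdentifier)account.Translate(typeof(SecurityIdentifier));
            return sid.Value;
        }
    }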

    Read the article

  • Using JUnit with App Engine and Eclipse

    - by Mark M
    I am having trouble setting up JUnit with App Engine in Eclipse. I have JUnit set up correctly, that is, I can run tests that don't involve the datastore or other services. However, when I try to use the datastore in my tests they fail. The code I am trying right now is from the App Engine site (see below): http://code.google.com/appengine/docs/java/tools/localunittesting.html#Running_Tests So far I have added the external JAR (using Eclipse) appengine-testing.jar. But when I run the tests I get the exception below. So, I am clearly not understanding the instructions to enable the services from the web page mentioned above. Can someone clear up the steps needed to make the App Engine services available in Eclipse? java.lang.NoClassDefFoundError: com/google/appengine/api/datastore/dev/LocalDatastoreService at com.google.appengine.tools.development.testing.LocalDatastoreServiceTestConfig.tearDown(LocalDatastoreServiceTestConfig.java:138) at com.google.appengine.tools.development.testing.LocalServiceTestHelper.tearDown(LocalServiceTestHelper.java:254) at com.cooperconrad.server.MemberTest.tearDown(MemberTest.java:28) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:44) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:41) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:37) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:73) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:46) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:180) at org.junit.runners.ParentRunner.access$000(ParentRunner.java:41) at org.junit.runners.ParentRunner$1.evaluate(ParentRunner.java:173) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:28) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:31) at org.junit.runners.ParentRunner.run(ParentRunner.java:220) at org.eclipse.jdt.internal.junit4.runner.JUnit4TestReference.run(JUnit4TestReference.java:46) at org.eclipse.jdt.internal.junit.runner.TestExecution.run(TestExecution.java:38) at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:467) at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:683) at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.run(RemoteTestRunner.java:390) at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.main(RemoteTestRunner.java:197) Caused by: java.lang.ClassNotFoundException: com.google.appengine.api.datastore.dev.LocalDatastoreService at java.net.URLClassLoader$1.run(URLClassLoader.java:202) at java.security.AccessController.doPrivileged(Native Method) at java.net.URLClassLoader.findClass(URLClassLoader.java:190) at java.lang.ClassLoader.loadClass(ClassLoader.java:307) at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301) at java.lang.ClassLoader.loadClass(ClassLoader.java:248) ... 
25 more Here is the actual code (pretty much copied from the site): package com.example; import static org.junit.Assert.*; import org.junit.After; import org.junit.Before; import org.junit.Test; import com.google.appengine.api.datastore.DatastoreService; import com.google.appengine.api.datastore.DatastoreServiceFactory; import com.google.appengine.api.datastore.Entity; import com.google.appengine.api.datastore.Query; import com.google.appengine.tools.development.testing.LocalDatastoreServiceTestConfig; import com.google.appengine.tools.development.testing.LocalServiceTestHelper; public class MemberTest { private final LocalServiceTestHelper helper = new LocalServiceTestHelper(new LocalDatastoreServiceTestConfig()); @Before public void setUp() { helper.setUp(); } @After public void tearDown() { helper.tearDown(); } // run this test twice to prove we're not leaking any state across tests private void doTest() { DatastoreService ds = DatastoreServiceFactory.getDatastoreService(); assertEquals(0, ds.prepare(new Query("yam")).countEntities()); ds.put(new Entity("yam")); ds.put(new Entity("yam")); assertEquals(2, ds.prepare(new Query("yam")).countEntities()); } @Test public void testInsert1() { doTest(); } @Test public void testInsert2() { doTest(); } @Test public void foo() { assertEquals(4, 2 + 2); } }

    Read the article

  • how openjpa2.0 enhances entities at runtime?

    - by Digambar Daund
    Below is my test code: package jee.jpa2; import java.util.List; import javax.persistence.EntityManager; import javax.persistence.EntityManagerFactory; import javax.persistence.EntityTransaction; import javax.persistence.Persistence; import javax.persistence.Query; import org.testng.annotations.BeforeClass; import org.testng.annotations.Test; @Test public class Tester { EntityManager em; EntityTransaction tx; EntityManagerFactory emf; @BeforeClass public void setup() { emf = Persistence.createEntityManagerFactory("basicPU", System.getProperties()); } @Test public void insert() { Item item = new Item(); for (int i = 0; i < 1000; ++i) { em = emf.createEntityManager(); tx = em.getTransaction(); tx.begin(); item.setId(null); em.persist(item); tx.commit(); em.clear(); em.close(); tx=null; em=null; } } @Test public void read() { em = emf.createEntityManager(); tx = em.getTransaction(); tx.begin(); Query findAll = em.createNamedQuery("findAll"); List<Item> all = findAll.getResultList(); for (Item item : all) { System.out.println(item); } tx.commit(); } } And here is the entity: package jee.jpa2; import javax.persistence.Column; import javax.persistence.Entity; import javax.persistence.GeneratedValue; import javax.persistence.GenerationType; import javax.persistence.Id; import javax.persistence.NamedQuery; @Entity @NamedQuery(name="findAll", query="SELECT i FROM Item i") public class Item { @Id @GeneratedValue(strategy = GenerationType.IDENTITY) @Column(name = "ID", nullable = false, updatable= false) protected Long id; protected String name; public Item() { name = "Digambar"; } public Long getId() { return id; } public void setId(Long id) { this.id = id; } public String getName() { return name; } public void setName(String name) { this.name = name; } @Override public String toString() { return String.format("Item [id=%s, name=%s]", id, name); } } After executing test I get Error: Item [id=1, name=Digambar] Item [id=2, name=Digambar] PASSED: read FAILED: insert <openjpa-2.0.0-r422266:935683 nonfatal store error> org.apache.openjpa.persistence.EntityExistsException: Attempt to persist detached object "jee.jpa2.Item-2". If this is a new instance, make sure any version and/or auto-generated primary key fields are null/default when persisting. FailedObject: jee.jpa2.Item-2 at org.apache.openjpa.kernel.BrokerImpl.persist(BrokerImpl.java:2563) at org.apache.openjpa.kernel.BrokerImpl.persist(BrokerImpl.java:2423) at org.apache.openjpa.kernel.DelegatingBroker.persist(DelegatingBroker.java:1069) at org.apache.openjpa.persistence.EntityManagerImpl.persist(EntityManagerImpl.java:705) at jee.jpa2.Tester.insert(Tester.java:33) Please Explain whats happening here?

    Read the article

  • Problem trying to achieve a join using the `comments` contrib in Django

    - by NiKo
    Hi, Django rookie here. I have this model, comments are managed with the django_comments contrib: class Fortune(models.Model): author = models.CharField(max_length=45, blank=False) title = models.CharField(max_length=200, blank=False) slug = models.SlugField(_('slug'), db_index=True, max_length=255, unique_for_date='pub_date') content = models.TextField(blank=False) pub_date = models.DateTimeField(_('published date'), db_index=True, default=datetime.now()) votes = models.IntegerField(default=0) comments = generic.GenericRelation( Comment, content_type_field='content_type', object_id_field='object_pk' ) I want to retrieve Fortune objects with a supplementary nb_comments value for each, counting their respectve number of comments ; I try this query: >>> Fortune.objects.annotate(nb_comments=models.Count('comments')) From the shell: >>> from django_fortunes.models import Fortune >>> from django.db.models import Count >>> Fortune.objects.annotate(nb_comments=Count('comments')) [<Fortune: My first fortune, from NiKo>, <Fortune: Another One, from Dude>, <Fortune: A funny one, from NiKo>] >>> from django.db import connection >>> connection.queries.pop() {'time': '0.000', 'sql': u'SELECT "django_fortunes_fortune"."id", "django_fortunes_fortune"."author", "django_fortunes_fortune"."title", "django_fortunes_fortune"."slug", "django_fortunes_fortune"."content", "django_fortunes_fortune"."pub_date", "django_fortunes_fortune"."votes", COUNT("django_comments"."id") AS "nb_comments" FROM "django_fortunes_fortune" LEFT OUTER JOIN "django_comments" ON ("django_fortunes_fortune"."id" = "django_comments"."object_pk") GROUP BY "django_fortunes_fortune"."id", "django_fortunes_fortune"."author", "django_fortunes_fortune"."title", "django_fortunes_fortune"."slug", "django_fortunes_fortune"."content", "django_fortunes_fortune"."pub_date", "django_fortunes_fortune"."votes" LIMIT 21'} Below is the properly formatted sql query: SELECT "django_fortunes_fortune"."id", "django_fortunes_fortune"."author", "django_fortunes_fortune"."title", "django_fortunes_fortune"."slug", "django_fortunes_fortune"."content", "django_fortunes_fortune"."pub_date", "django_fortunes_fortune"."votes", COUNT("django_comments"."id") AS "nb_comments" FROM "django_fortunes_fortune" LEFT OUTER JOIN "django_comments" ON ("django_fortunes_fortune"."id" = "django_comments"."object_pk") GROUP BY "django_fortunes_fortune"."id", "django_fortunes_fortune"."author", "django_fortunes_fortune"."title", "django_fortunes_fortune"."slug", "django_fortunes_fortune"."content", "django_fortunes_fortune"."pub_date", "django_fortunes_fortune"."votes" LIMIT 21 Can you spot the problem? Django won't LEFT JOIN the django_comments table with the content_type data (which contains a reference to the fortune one). 
This is the kind of query I'd like to be able to generate using the ORM: SELECT "django_fortunes_fortune"."id", "django_fortunes_fortune"."author", "django_fortunes_fortune"."title", COUNT("django_comments"."id") AS "nb_comments" FROM "django_fortunes_fortune" LEFT OUTER JOIN "django_comments" ON ("django_fortunes_fortune"."id" = "django_comments"."object_pk") LEFT OUTER JOIN "django_content_type" ON ("django_comments"."content_type_id" = "django_content_type"."id") GROUP BY "django_fortunes_fortune"."id", "django_fortunes_fortune"."author", "django_fortunes_fortune"."title", "django_fortunes_fortune"."slug", "django_fortunes_fortune"."content", "django_fortunes_fortune"."pub_date", "django_fortunes_fortune"."votes" LIMIT 21 But I don't manage to do it, so help from Django veterans would be much appreciated :) Hint: I'm using Django 1.2-DEV Thanks in advance for your help.

    Read the article

  • curl_init undefined?

    - by udaya
    Hi I am importing the contacts from gmail to my page ..... The process doesnot work due to this error 'curl_init' is not defined The suggestion i got is to 1.uncomment destination curl.dll 2.copy the following libraries to the windows/system32 dir. ssleay32.dll libeay32.dll 3.copy php_curl.dll to windows/system32 After trying all these i refreshed my xampp Even then error occurs This is my page where i am trying to import the gmail contacts ` // set URL and other appropriate options curl_setopt($ch, CURLOPT_URL, "http://www.example.com/"); curl_setopt($ch, CURLOPT_HEADER, 0); // grab URL and pass it to the browser curl_exec($ch); // close cURL resource, and free up system resources curl_close($ch); ? "HOSTED_OR_GOOGLE", "Email" = $_POST['Email'], echo "Passwd" = $_POST['Passwd'], "service" = "cp", "source" = "tutsmore/1.2" ); //Now we are going to post these datas to the clientLogin url. // Initialize the curl object with the $curl = curl_init($clientlogin_url); //Make the post true curl_setopt($curl, CURLOPT_POST, true); //Passing the above array of parameters. curl_setopt($curl, CURLOPT_POSTFIELDS, $clientlogin_post); //Set this for authentication and ssl communication. curl_setopt($curl, CURLOPT_HTTPAUTH, CURLAUTH_ANY); //provide false to not to check the server for the certificate. curl_setopt($curl, CURLOPT_SSL_VERIFYPEER, false); //Tell curl to just don't echo the data but return it to a variable. curl_setopt($curl, CURLOPT_RETURNTRANSFER, 1); //The variable containing response $response = curl_exec($curl); //Check whether the user is successfully login using the preg_match and save the auth key if the user //is successfully logged in preg_match("/Auth=([a-z0-9_-]+)/i", $response, $matches); $auth = $matches[1]; // Include the Auth string in the headers $headers = array("Authorization: GoogleLogin auth=" . $auth); // Make the request to the google contacts feed with the auth key $curl = curl_init('http://www.google.com/m8/feeds/contacts/default/full?max-results=10000'); //passing the headers of auth key. curl_setopt($curl, CURLOPT_HTTPHEADER, $headers); //Return the result in a variable curl_setopt($curl, CURLOPT_RETURNTRANSFER, 1); //the variable with the response. $feed = curl_exec($curl); //Create empty array of contacts echo "contacts".$contacts=array(); //Initialize the DOMDocument object $doc=new DOMDocument(); //Check whether the feed is empty //If not empty then load that feed. if (!empty($feed)) $doc-loadHTML($feed); //Initialize the domxpath object and provide the loaded feed $xpath=new DOMXPath($doc); //Get every entry tags from the feed. $query="//entry"; $data=$xpath-query($query); //Process each entry tag foreach ($data as $node) { //children of each entry tag. $entry_nodes=$node-childNodes; //Create a temproray array. $tempArray=array(); //Process the child node of the entry tag. foreach($entry_nodes as $child) { //get the tagname of the child node. 
$domNodesName=$child-nodeName; switch($domNodesName) { case 'title' : { $tempArray['fullName']=$child-nodeValue; } break; case 'email' : { if (strpos($child-getAttribute('rel'),'home')!==false) $tempArray['email_1']=$child-getAttribute('address'); elseif(strpos($child-getAttribute('rel'),'work')!=false) $tempArray['email_2']=$child-getAttribute('address'); elseif(strpos($child-getAttribute('rel'),'other')!==false) $tempArray['email_3']=$child-getAttribute('address'); } break; } } if (!empty($tempArray['email_1']))$contacts[$tempArray['email_1']]=$tempArray; if(!empty($tempArray['email_2'])) $contacts[$tempArray['email_2']]=$tempArray; if(!empty($tempArray['email_3'])) $contacts[$tempArray['email_3']]=$tempArray; } foreach($contacts as $key=$val) { //Echo the email echo $key.""; } } else { //The form ? " method="POST" Email: Password: tutsmore don't save your email and password trust us. ` code is completely provided for debugging if any optimization is needed i will try to optimize the code

    Read the article

  • Array Problem, need to sort via Keys

    - by sologhost
    Ok, not really sure how to do this. I have values that are being outputted from a SQL query like so: $row[0] = array('lid' => 1, 'llayout' => 1, 'lposition' => 1, 'mid' => 1, 'mlayout' => 1, 'mposition' => 0); $row[1] = array('lid' => 2, 'llayout' => 1, 'lposition' => 0, 'mid' => 2, 'mlayout' => 1, 'mposition' => 0); $row[2] = array('lid' => 2, 'llayout' => 1, 'lposition' => 0, 'mid' => 3, 'mlayout' => 1, 'mposition' => 1); $row[3] = array('lid' => 3, 'llayout' => 1, 'lposition' => 1, 'mid' => 4, 'mlayout' => 1, 'mposition' => 1); $row[4] = array('lid' => 4, 'llayout' => 1, 'lposition' => 2, 'mid' => 5, 'mlayout' => 1, 'mposition' => 0); etc. etc. Ok, so the best thing I can think of for this is to give lid and mid array keys and have it equal the mposition into an array within the while loop of query like so... $old[$row['lid']][$row['mid']] = $mposition; Now if I do this, I need to compare this array's keys with another array that I'll need to build based on a $_POST array[]. $new = array(); foreach($_POST as $id => $data) { // $id = column, but we still need to get each rows position... $id = str_replace('col_', '', $id); // now get the inner array... foreach($data as $pos => $idpos) $new[$id][$idpos] = $pos; } Ok, so now I have 2 arrays of info, 1 from the database, and another from the $_POST positions, I hope I got the array keys correct. Now I need to figure out which one's changed, comparing from the old to the new. And also, need to update the database with all of the new positions where new lid = the old lid, and the new mid = the old mid from each array. I'm sure I'll have to use array_key or array_key_intersect somehow, but not sure exactly how...??? Also, I don't think an UPDATE would be useful in a foreach loop, perhaps there's a way to do a CASE statement in the UPDATE query? Also, Am I going about this the right way? OR should I do it another way instead of using a muli-dimensional array for this.

    Read the article

  • Is this PHP/MySQL login script secure?

    - by NightMICU
    Greetings, A site I designed was compromised today, working on damage control at the moment. Two user accounts, including the primary administrator, were accessed without authorization. Please take a look at the log-in script that was in use, any insight on security holes would be appreciated. I am not sure if this was an SQL injection or possibly breach on a computer that had been used to access this area in the past. Thanks <?php //Start session session_start(); //Include DB config require_once('config.php'); //Error message array $errmsg_arr = array(); $errflag = false; //Connect to mysql server $link = mysql_connect(DB_HOST, DB_USER, DB_PASSWORD); if(!$link) { die('Failed to connect to server: ' . mysql_error()); } //Select database $db = mysql_select_db(DB_DATABASE); if(!$db) { die("Unable to select database"); } //Function to sanitize values received from the form. Prevents SQL injection function clean($str) { $str = @trim($str); if(get_magic_quotes_gpc()) { $str = stripslashes($str); } return mysql_real_escape_string($str); } //Sanitize the POST values $login = clean($_POST['login']); $password = clean($_POST['password']); //Input Validations if($login == '') { $errmsg_arr[] = 'Login ID missing'; $errflag = true; } if($password == '') { $errmsg_arr[] = 'Password missing'; $errflag = true; } //If there are input validations, redirect back to the login form if($errflag) { $_SESSION['ERRMSG_ARR'] = $errmsg_arr; session_write_close(); header("location: http://tapp-essexvfd.org/admin/index.php"); exit(); } //Create query $qry="SELECT * FROM user_control WHERE username='$login' AND password='".md5($_POST['password'])."'"; $result=mysql_query($qry); //Check whether the query was successful or not if($result) { if(mysql_num_rows($result) == 1) { //Login Successful session_regenerate_id(); //Collect details about user and assign session details $member = mysql_fetch_assoc($result); $_SESSION['SESS_MEMBER_ID'] = $member['user_id']; $_SESSION['SESS_USERNAME'] = $member['username']; $_SESSION['SESS_FIRST_NAME'] = $member['name_f']; $_SESSION['SESS_LAST_NAME'] = $member['name_l']; $_SESSION['SESS_STATUS'] = $member['status']; $_SESSION['SESS_LEVEL'] = $member['level']; //Get Last Login $_SESSION['SESS_LAST_LOGIN'] = $member['lastLogin']; //Set Last Login info $qry = "UPDATE user_control SET lastLogin = DATE_ADD(NOW(), INTERVAL 1 HOUR) WHERE user_id = $member[user_id]"; $login = mysql_query($qry) or die(mysql_error()); session_write_close(); if ($member['level'] != "3" || $member['status'] == "Suspended") { header("location: http://members.tapp-essexvfd.org"); //CHANGE!!! } else { header("location: http://tapp-essexvfd.org/admin/admin_main.php"); } exit(); }else { //Login failed header("location: http://tapp-essexvfd.org/admin/index.php"); exit(); } }else { die("Query failed"); } ?>

    Read the article

  • Action bar with Search View. Reverse compatibility issues

    - by suresh
    I am building a sample app to demonstrate SearchView with filter and other Action Bar items. I am able to successfully run this app on 4.2(Nexus 7). But it is not running on 2.3. I googled about the issue. Came to know that i should use SherLock Action bar. I just went to http://actionbarsherlock.com/download.html, downloaded the zip file and added the library as informed in the video: http://www.youtube.com/watch?v=4GJ6yY1lNNY&feature=player_embedde by WiseManDesigns. But still I am unable to figure out the issue. Here is my code: SearchViewActionBar.java public class SearchViewActionBar extends Activity implements SearchView.OnQueryTextListener { private SearchView mSearchView; private TextView mStatusView; int mSortMode = -1; private ListView mListView; private ArrayAdapter<String> mAdapter; protected CharSequence[] _options = { "Wild Life", "River", "Hill Station", "Temple", "Bird Sanctuary", "Hill", "Amusement Park"}; protected boolean[] _selections = new boolean[ _options.length ]; private final String[] mStrings = Cheeses.sCheeseStrings; @Override protected void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); getWindow().requestFeature(Window.FEATURE_ACTION_BAR); setContentView(R.layout.activity_main); // mStatusView = (TextView) findViewById(R.id.status_text); // mSearchView = (SearchView) findViewById(R.id.search_view); mListView = (ListView) findViewById(R.id.list_view); mListView.setAdapter(mAdapter = new ArrayAdapter<String>(this, android.R.layout.simple_list_item_1, mStrings)); mListView.setTextFilterEnabled(true); //setupSearchView(); } private void setupSearchView() { mSearchView.setIconifiedByDefault(true); mSearchView.setOnQueryTextListener(this); mSearchView.setSubmitButtonEnabled(false); //mSearchView.setQueryHint(getString(R.string.cheese_hunt_hint)); } @Override public boolean onCreateOptionsMenu(Menu menu) { super.onCreateOptionsMenu(menu); MenuInflater inflater = getMenuInflater(); inflater.inflate(R.menu.searchview_in_menu, menu); MenuItem searchItem = menu.findItem(R.id.action_search); mSearchView = (SearchView) searchItem.getActionView(); //setupSearchView(searchItem); setupSearchView(); return true; } @Override public boolean onPrepareOptionsMenu(Menu menu) { if (mSortMode != -1) { Drawable icon = menu.findItem(mSortMode).getIcon(); menu.findItem(R.id.action_sort).setIcon(icon); } return super.onPrepareOptionsMenu(menu); } @Override public boolean onOptionsItemSelected(MenuItem item) { String c="Category"; String s=(String) item.getTitle(); if(s.equals(c)) { System.out.println("same"); showDialog( 0 ); } //System.out.println(s); Toast.makeText(this, "Selected Item: " + item.getTitle(), Toast.LENGTH_SHORT).show(); return true; } protected Dialog onCreateDialog( int id ) { return new AlertDialog.Builder( this ) .setTitle( "Category" ) .setMultiChoiceItems( _options, _selections, new DialogSelectionClickHandler() ) .setPositiveButton( "SAVE", new DialogButtonClickHandler() ) .create(); } public class DialogSelectionClickHandler implements DialogInterface.OnMultiChoiceClickListener { public void onClick( DialogInterface dialog, int clicked, boolean selected ) { Log.i( "ME", _options[ clicked ] + " selected: " + selected ); } } public class DialogButtonClickHandler implements DialogInterface.OnClickListener { public void onClick( DialogInterface dialog, int clicked ) { switch( clicked ) { case DialogInterface.BUTTON_POSITIVE: printSelectedPlanets(); break; } } } protected void printSelectedPlanets() { for( int i = 0; i < _options.length; 
i++ ){ Log.i( "ME", _options[ i ] + " selected: " + _selections[i] ); } } public void onSort(MenuItem item) { mSortMode = item.getItemId(); invalidateOptionsMenu(); } public boolean onQueryTextChange(String newText) { if (TextUtils.isEmpty(newText)) { mListView.clearTextFilter(); } else { mListView.setFilterText(newText.toString()); } return true; } public boolean onQueryTextSubmit(String query) { mStatusView.setText("Query = " + query + " : submitted"); return false; } public boolean onClose() { mStatusView.setText("Closed!"); return false; } protected boolean isAlwaysExpanded() { return false; } }

    Read the article

  • Problem creating a database with PHP PDO

    - by Leandro Alonso
    Hello guys, I'm having a problem with a SQL query in my PHP Application. When the user access it for the first time, the app executes this query to create all the database: CREATE TABLE `databases` ( `id` bigint(20) NOT NULL auto_increment, `driver` varchar(45) NOT NULL, `server` text NOT NULL, `user` text NOT NULL, `password` text NOT NULL, `database` varchar(200) NOT NULL, PRIMARY KEY (`id`) ) ENGINE=InnoDB DEFAULT CHARSET=latin1 AUTO_INCREMENT=2 ; -- -------------------------------------------------------- -- -- Table structure for table `modules` -- CREATE TABLE `modules` ( `id` bigint(20) unsigned NOT NULL auto_increment, `title` varchar(100) NOT NULL, `type` varchar(150) NOT NULL, PRIMARY KEY (`id`) ) ENGINE=InnoDB DEFAULT CHARSET=latin1 AUTO_INCREMENT=29 ; -- -------------------------------------------------------- -- -- Table structure for table `modules_data` -- CREATE TABLE `modules_data` ( `id` bigint(20) NOT NULL auto_increment, `module_id` bigint(20) unsigned NOT NULL, `key` varchar(150) NOT NULL, `value` tinytext, PRIMARY KEY (`id`), KEY `fk_modules_data_modules` (`module_id`) ) ENGINE=InnoDB DEFAULT CHARSET=latin1 AUTO_INCREMENT=184 ; -- -------------------------------------------------------- -- -- Table structure for table `modules_position` -- CREATE TABLE `modules_position` ( `user_id` bigint(20) unsigned NOT NULL, `tab_id` bigint(20) unsigned NOT NULL, `module_id` bigint(20) unsigned NOT NULL, `column` smallint(1) default NULL, `line` smallint(1) default NULL, PRIMARY KEY (`user_id`,`tab_id`,`module_id`), KEY `fk_modules_order_users` (`user_id`), KEY `fk_modules_order_tabs` (`tab_id`), KEY `fk_modules_order_modules` (`module_id`) ) ENGINE=InnoDB DEFAULT CHARSET=latin1; -- -------------------------------------------------------- -- -- Table structure for table `tabs` -- CREATE TABLE `tabs` ( `id` bigint(20) unsigned NOT NULL auto_increment, `title` varchar(60) NOT NULL, `columns` smallint(1) NOT NULL, PRIMARY KEY (`id`) ) ENGINE=InnoDB DEFAULT CHARSET=latin1 AUTO_INCREMENT=12 ; -- -------------------------------------------------------- -- -- Table structure for table `tabs_has_modules` -- CREATE TABLE `tabs_has_modules` ( `tab_id` bigint(20) unsigned NOT NULL, `module_id` bigint(20) unsigned NOT NULL, PRIMARY KEY (`tab_id`,`module_id`), KEY `fk_tabs_has_modules_tabs` (`tab_id`), KEY `fk_tabs_has_modules_modules` (`module_id`) ) ENGINE=InnoDB DEFAULT CHARSET=latin1; -- -------------------------------------------------------- -- -- Table structure for table `users` -- CREATE TABLE `users` ( `id` bigint(20) unsigned NOT NULL auto_increment, `login` varchar(60) NOT NULL, `password` varchar(64) NOT NULL, `email` varchar(100) NOT NULL, `name` varchar(250) default NULL, `user_level` bigint(20) unsigned NOT NULL, PRIMARY KEY (`id`), KEY `fk_users_user_levels` (`user_level`) ) ENGINE=InnoDB DEFAULT CHARSET=latin1 AUTO_INCREMENT=4 ; -- -------------------------------------------------------- -- -- Table structure for table `users_has_tabs` -- CREATE TABLE `users_has_tabs` ( `user_id` bigint(20) unsigned NOT NULL, `tab_id` bigint(20) unsigned NOT NULL, `order` smallint(2) NOT NULL, `columns_width` varchar(255) default NULL, PRIMARY KEY (`user_id`,`tab_id`), KEY `fk_users_has_tabs_users` (`user_id`), KEY `fk_users_has_tabs_tabs` (`tab_id`) ) ENGINE=InnoDB DEFAULT CHARSET=latin1; -- -------------------------------------------------------- -- -- Table structure for table `user_levels` -- CREATE TABLE `user_levels` ( `id` bigint(20) unsigned NOT NULL auto_increment, `level` 
smallint(2) NOT NULL, PRIMARY KEY (`id`) ) ENGINE=InnoDB DEFAULT CHARSET=latin1 AUTO_INCREMENT=3 ; -- -------------------------------------------------------- -- -- Table structure for table `user_meta` -- CREATE TABLE `user_meta` ( `id` bigint(20) unsigned NOT NULL auto_increment, `user_id` bigint(20) unsigned default NULL, `key` varchar(150) NOT NULL, `value` longtext NOT NULL, PRIMARY KEY (`id`), KEY `fk_user_meta_users` (`user_id`) ) ENGINE=InnoDB DEFAULT CHARSET=latin1 AUTO_INCREMENT=4 ; -- -- Constraints for dumped tables -- -- -- Constraints for table `modules_data` -- ALTER TABLE `modules_data` ADD CONSTRAINT `fk_modules_data_modules` FOREIGN KEY (`module_id`) REFERENCES `modules` (`id`) ON DELETE CASCADE ON UPDATE NO ACTION; -- -- Constraints for table `modules_position` -- ALTER TABLE `modules_position` ADD CONSTRAINT `fk_modules_order_modules` FOREIGN KEY (`module_id`) REFERENCES `modules` (`id`) ON DELETE CASCADE ON UPDATE NO ACTION, ADD CONSTRAINT `fk_modules_order_tabs` FOREIGN KEY (`tab_id`) REFERENCES `tabs` (`id`) ON DELETE CASCADE ON UPDATE NO ACTION, ADD CONSTRAINT `fk_modules_order_users` FOREIGN KEY (`user_id`) REFERENCES `users` (`id`) ON DELETE CASCADE ON UPDATE NO ACTION; -- -- Constraints for table `users` -- ALTER TABLE `users` ADD CONSTRAINT `fk_users_user_levels` FOREIGN KEY (`user_level`) REFERENCES `user_levels` (`id`) ON DELETE NO ACTION ON UPDATE NO ACTION; -- -- Constraints for table `user_meta` -- ALTER TABLE `user_meta` ADD CONSTRAINT `fk_user_meta_users` FOREIGN KEY (`user_id`) REFERENCES `users` (`id`) ON DELETE CASCADE ON UPDATE NO ACTION; INSERT INTO `user_levels` VALUES(1, 10); INSERT INTO `user_levels` VALUES(2, 1); INSERT INTO `users` VALUES(1, 'admin', 'password', '[email protected]', NULL, 1); INSERT INTO `user_meta` VALUES (NULL, 1, 'last_tab', 1); In some environments i get this error: SQLSTATE[HY000]: General error: 1005 Can't create table 'dms.databases' (errno: 150) I tried everything that I could find on Google but nothing works. The strange part is that if I run this query in PhpMyAdmin he creates my database, without any error.

    Read the article

  • Flash Media Server Streaming: Content Protection

    - by dbemerlin
    Hi, i have to implement flash streaming for the relaunch of our video-on-demand system but either because i haven't worked with flash-related systems before or because i'm too stupid i cannot get the system to work as it has to. We need: Per file & user access control with checks on a WebService every minute if the lease time ran out mid-stream: cancelling the stream rtmp streaming dynamic bandwidth checking Video Playback with Flowplayer (existing license) I've got the streaming and bandwidth check working, i just can't seem to get the access control working. I have no idea how i know which file is played back or how i can play back a file depending on a key the user has entered. Server-Side Code (main.asc): application.onAppStart = function() { trace("Starting application"); this.payload = new Array(); for (var i=0; i < 1200; i++) { this.payload[i] = Math.random(); //16K approx } } application.onConnect = function( p_client, p_autoSenseBW ) { p_client.writeAccess = ""; trace("client at : " + p_client.uri); trace("client from : " + p_client.referrer); trace("client page: " + p_client.pageUrl); // try to get something from the query string: works var i = 0; for (i = 0; i < p_client.uri.length; ++i) { if (p_client.uri[i] == '?') { ++i; break; } } var loadVars = new LoadVars(); loadVars.decode(p_client.uri.substr(i)); trace(loadVars.toString()); trace(loadVars['foo']); // And accept the connection this.acceptConnection(p_client); trace("accepted!"); //this.rejectConnection(p_client); // A connection from Flash 8 & 9 FLV Playback component based client // requires the following code. if (p_autoSenseBW) { p_client.checkBandwidth(); } else { p_client.call("onBWDone"); } trace("Done connecting"); } application.onDisconnect = function(client) { trace("client disconnecting!"); } Client.prototype.getStreamLength = function(p_streamName) { trace("getStreamLength:" + p_streamName); return Stream.length(p_streamName); } Client.prototype.checkBandwidth = function() { application.calculateClientBw(this); } application.calculateClientBw = function(p_client) { /* lots of lines copied from an adobe sample, appear to work */ } Client-Side Code: <head> <script type="text/javascript" src="flowplayer-3.1.4.min.js"></script> </head> <body> <a class="rtmp" href="rtmp://xx.xx.xx.xx/vod_project/test_flv.flv" style="display: block; width: 520px; height: 330px" id="player"> </a> <script> $f( "player", "flowplayer-3.1.5.swf", { clip: { provider: 'rtmp', autoPlay: false, url: 'test_flv' }, plugins: { rtmp: { url: 'flowplayer.rtmp-3.1.3.swf', netConnectionUrl: 'rtmp://xx.xx.xx.xx/vod_project?foo=bar' } } } ); </script> </body> My first Idea was to get a key from the Query String, ask the web service about which file and user that key is for and play the file but i can't seem to find out how to play a file from server side. My second idea was to let flowplayer play a file, pass the key as query string and if the filename and key don't match then reject the connection but i can't seem to find out which file it's currently playing. The only remaining idea i have is: create a list of all files the user is allowed to open and set allowReadAccess or however it was called to allow those files, but that would be clumsy due to the current infrastructure. Any hints? Thanks.

    Read the article

< Previous Page | 608 609 610 611 612 613 614 615 616 617 618 619  | Next Page >