Search Results

Search found 30046 results on 1202 pages for 'linq via c series'.


  • Finding string in php

    - by Roshan
    I have some content and I want to search for a keyword in that content. Content: When charting multiple data series, or just to improve the appearance of your charts, you can control the fill for each series in the chart or each item in a series. The fill lets you specify a pattern that defines how Flex draws the chart element. You also use fills to specify the background colors of the chart or bands of background colors defined by the grid lines. Fills can be solid or can use linear and radial gradients. A gradient specifies a gradual color transition in the fill color therefore. Now, if the keyword we are searching for is "When", it is the starting word, so it should display as "When charting multiple data ..........." If the keyword is "draws", it lies in the middle, so it should be displayed as ".... how Flex draws....." If the keyword is "therefore", it is in the last position, so it should be displayed as ".........transition in the fill color therefore." Can anyone help me out with how to do this in a PHP server script?
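
    What is being asked for is a fixed-size excerpt around the keyword, with leading or trailing dots only when text was trimmed on that side. The question is about PHP, but here is a minimal sketch of the logic in C#; the Excerpt method name and the 40-character window are assumptions, and the same approach translates directly to strpos/substr in PHP.

        // Minimal sketch of the excerpt logic described above (assumed names and window size).
        using System;

        static class KeywordExcerpt
        {
            public static string Excerpt(string content, string keyword, int window = 40)
            {
                int pos = content.IndexOf(keyword, StringComparison.OrdinalIgnoreCase);
                if (pos < 0)
                    return string.Empty;                               // keyword not present

                int start = Math.Max(0, pos - window);
                int end = Math.Min(content.Length, pos + keyword.Length + window);

                string prefix = start > 0 ? "....." : "";              // text trimmed before the keyword
                string suffix = end < content.Length ? "....." : "";   // text trimmed after the keyword
                return prefix + content.Substring(start, end - start) + suffix;
            }
        }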


  • Infopath - Repeating sections

    - by witsEnd
    I have struggled with this for 2 days and posted this question at a couple of other sites, but with no luck. I am creating a form as follows. I have an "outer" repeating section in which the user enters a work item id and then selects as many names as he/she wishes from a series of drop-down lists. When the user clicks a button (also in this "outer" repeating section), feedback text boxes are added for each name selected, and text boxes showing the names selected from the drop-downs appear above those provided for feedback. I used the XPathNavigator.Clone() method to create the new feedback text boxes and the text boxes that display the selected user's name. (The username text box appears above the respective feedback text box.) This did the trick, but there is a problem. The user should be able to click "Insert item" on the outer repeating section to add another work item wherein he/she may again select as many names as they wish from the series of drop-downs, click a button, and have feedback and user name text boxes appear for each name selected from the drop-down box associated with the current work item (i.e., "outer" repeating section). I clicked Insert Item on the "outer" repeating section and selected 2 names from the second series of drop-downs, but when I try to clone a node, it is added at the end of the previous repeating section. I don't know how to navigate into (for lack of a better term) subsequent repeating sections. I am using InfoPath 2007 and I am aware that the current() method call is implicit, but this still doesn't work. I'm thinking I need to set up a Rule somewhere, but I am not sure how to do that. Any help would be appreciated.
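
    One way to scope the clone to a particular outer section from managed form code is to select that section with a positional XPath predicate first, so the appended nodes land inside it rather than at the end of the first section. This is a hedged sketch for InfoPath 2007 form code; the group and field names (my:workItems, my:workItem, my:feedback) are assumptions standing in for the real data source names.

        // Inside the button's Clicked handler. Requires: using System.Xml.XPath;
        // Group/field names below are hypothetical - substitute the real ones from the data source.
        XPathNavigator root = this.MainDataSource.CreateNavigator();

        int sectionIndex = 2;   // 1-based index of the outer repeating section to target
        XPathNavigator outerSection = root.SelectSingleNode(
            "/my:myFields/my:workItems/my:workItem[" + sectionIndex + "]",
            this.NamespaceManager);

        // Clone an existing feedback node from this section and append the copy here,
        // instead of at the end of the previous section.
        XPathNavigator template = outerSection.SelectSingleNode("my:feedback[1]", this.NamespaceManager);
        outerSection.AppendChild(template.Clone());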


  • FLEX: how to dynamically add LineSeries to CartesianChart

    - by Patrick
    hi, the LineSeries is not dynamically added to my CartesianChart... What's wrong in this code: ... private function chartComplete():void { var ls:LineSeries = new LineSeries(); ls.styleName = 'timeline'; ls.dataProvider = "{dataManager.tagViewTimelineModel.tags.getItemAt(0).yearPopularity}"; ls.yField = 'popularity'; //ls.s = "{new Stroke(0xCC33CC, 2)}"; AllChart.series[0] = ls; } ... <mx:CartesianChart id="AllChart" width="100%" height="100" creationComplete="chartComplete();"> <mx:horizontalAxis><mx:CategoryAxis id="horiz1" dataProvider="['1','2','3','4','5','6','7','8','9','10','11','23','13','14','15','16','17','18','19','20','21','22','23','24','25','26','27','28','29','30','31']"/></mx:horizontalAxis> <mx:horizontalAxisRenderers><mx:AxisRenderer axis="{horiz1}"/></mx:horizontalAxisRenderers> <mx:verticalAxis><mx:LinearAxis id="vert1" /></mx:verticalAxis> <mx:verticalAxisRenderers><mx:AxisRenderer axis="{vert1}"/></mx:verticalAxisRenderers> <mx:series> <mx:AreaSeries id="timeArea" styleName="timeArea" name="A" dataProvider="{dataManager.tagViewTimelineModel.tags.getItemAt(2).yearPopularity}" areaStroke="{new Stroke(0x0033CC, 2)}" areaFill="{new SolidColor(0x0033CC, 0.5)}" /> </mx:series> </mx:CartesianChart> I can only see the TimeLine if I added it with MXML: <mx:LineSeries styleName="timeLine" dataProvider="{dataManager.tagViewTimelineModel.tags.getItemAt(0).yearPopularity}" yField="popularity" stroke="{new Stroke(0xCC33CC, 2)}" /> But I need to update the view, and add N lines so I cannot do it with MXML. thanks


  • Create Folders from text file and place dummy file in them using a CustomAction

    - by Birkoff
    I want my msi installer to generate a set of folders in a particular location and put a dummy file in each directory. Currently I have the following CustomActions: <CustomAction Id="SMC_SetPathToCmd" Property="Cmd" Value="[SystemFolder]cmd.exe"/> <CustomAction Id="SMC_GenerateMovieFolders" Property="Cmd" ExeCommand="for /f &quot;tokens=* delims= &quot; %a in ([MBSAMPLECOLLECTIONS]movies.txt) do (echo %a)" /> <CustomAction Id="SMC_CopyDummyMedia" Property="Cmd" ExeCommand="for /f &quot;tokens=* delims= &quot; %a in ([MBSAMPLECOLLECTIONS]movies.txt) do (copy [MBSAMPLECOLLECTIONS]dummy.avi &quot;%a&quot;\&quot;%a&quot;.avi)" /> These are called in the InstallExecuteSequence: <Custom Action="SMC_SetPathToCmd" After="InstallFinalize"/> <Custom Action="SMC_GenerateMovieFolders" After="SMC_SetPathToCmd"/> <Custom Action="SMC_CopyDummyMedia" After="SMC_GenerateMovieFolders"/> The custom actions seem to start, but only a blank command prompt window is shown and the directories are not generated. The files needed for the customaction are copied to the correct directory: <Directory Id="WIX_DIR_COMMON_VIDEO"> <Directory Id="MBSAMPLECOLLECTIONS" Name="MB Sample Collections" /> </Directory> <DirectoryRef Id="MBSAMPLECOLLECTIONS"> <Component Id="SampleCollections" Guid="C481566D-4CA8-4b10-B08D-EF29ACDC10B5" DiskId="1"> <File Id="movies.txt" Name="movies.txt" Source="SampleCollections\movies.txt" Checksum="no" /> <File Id="series.txt" Name="series.txt" Source="SampleCollections\series.txt" Checksum="no" /> <File Id="dummy.avi" Name="dummy.avi" Source="SampleCollections\dummy.avi" Checksum="no" /> </Component> </DirectoryRef> What's wrong with these Custom Actions or is there a simpler way to do this?


  • Update C# Chart using BackgroundWorker

    - by Mark
    I am currently trying to hand off updating a chart on my form to a BackgroundWorker using: bwCharter.RunWorkerAsync(chart1); Which runs: private void bcCharter_DoWork(object sender, DoWorkEventArgs e) { System.Windows.Forms.DataVisualization.Charting.Chart chart = null; // Convert e.Argument to chart //.. // Converted.. chart.Series.Clear(); e.Result=chart; setChart(c.chart); } private void setChart(System.Windows.Forms.DataVisualization.Charting.Chart arg) { if (chart1.InvokeRequired) { chart1.Invoke(new MethodInvoker(delegate { setChart(arg); })); return; } chart1 = arg; } However, at the point of clearing the series, an exception is thrown. Basically, I want to do a whole lot more processing after clearing the series, which slows the GUI down completely - so I wanted this in another thread. I thought that by passing it as an argument, I should be safe, but apparently not! Interestingly, the chart is on a tab page. I can run this over and over if the tab page is in the background, but if I run this, look at the chart, hide it again, and re-run, it throws the exception. Obviously, it throws if the chart is in the foreground as well. Can anyone suggest what I can do differently? Thanks!
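
    The exception is the usual cross-thread control access problem: the Chart is a UI control, so it must not be touched from DoWork even when it is passed in as e.Argument. A common pattern is to do only the heavy number crunching on the worker and apply the results in RunWorkerCompleted, which runs back on the UI thread. The sketch below is illustrative rather than the poster's code; bwCharter, chart1, and ComputePoints are assumed names.

        // Requires: using System; using System.ComponentModel;
        //           using System.Windows.Forms.DataVisualization.Charting;
        // Assumes a Form with a Chart named chart1 and a BackgroundWorker named bwCharter.

        // Heavy work only - no Control access in DoWork.
        private void bwCharter_DoWork(object sender, DoWorkEventArgs e)
        {
            double[] points = ComputePoints();   // hypothetical long-running calculation
            e.Result = points;
        }

        // RunWorkerCompleted is raised back on the UI thread, so the chart is safe to touch here.
        private void bwCharter_RunWorkerCompleted(object sender, RunWorkerCompletedEventArgs e)
        {
            var points = (double[])e.Result;
            chart1.Series.Clear();
            Series series = chart1.Series.Add("Data");
            foreach (double p in points)
                series.Points.AddY(p);
        }

        private double[] ComputePoints()
        {
            // Stub standing in for the expensive processing described above.
            var data = new double[100];
            var rnd = new Random(0);
            for (int i = 0; i < data.Length; i++)
                data[i] = rnd.NextDouble() * 100;
            return data;
        }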


  • PHP code displayed in browser

    - by Drake
    So, I'm working on a databases project, and I'm trying to code incrementally. The problem is, when I go to test the PHP in the browser, it displays the PHP code after my use of "-". The HTML printing is displayed properly, which is AFTER the point where the - is. Here is the PHP: <?php function getGraphicNovel(){ include_once("./connect.php"); $db_connection = new mysqli($SERVER, $USERNAME, $PASSWORD, $DATABASE); if (mysqli_connect_errno()) { echo("Can't connect to MySQL Server. Error code: " . mysqli_connect_error()); return null; } $stmt = $db_connection->stmt_init(); $returnValue = "invalid"; if($stmt.prepare("select series from graphic_novel_main natural join graphic_novel_misc")) { $stmt->execute(); $stmt->bind_result($series); while ($stmt->fetch()) { echo "<tr><td>" . $series . "</td></tr>"; } $stmt->close(); } $db_connection->close(); } getGraphicNovel(); ?> Here is a link to the page. Hopefully it works for people outside the school's network. http://plato.cs.virginia.edu/~paw5k/cainedb/viewall.html If anyone knows why this is happening, your input would be great!


  • <msbuild/> task fails while <devenv/> succeeds for MFC application in CruiseControl.NET?

    - by ee
    The Overview I am working on a Continuous Integration build of a MFC appliction via CruiseControl.net and VS2010. When building my .sln, a "Visual Studio" CCNet task (<devenv/>) works, but a simple MSBuild wrapper script (see below) run via the CCNet <msbuild/> task fails with errors like: error RC1015: cannot open include file 'winres.h'.. error C1083: Cannot open include file: 'afxwin.h': No such file or directory error C1083: Cannot open include file: 'afx.h': No such file or directory The Question How can I adjust the build environment of my msbuild wrapper so that the application builds correctly? (Pretty clearly the MFC paths aren't right for the msbuild environment, but how do i fix it for MSBuild+VS2010+MFC+CCNet?) Background Details We have successfully upgraded an MFC application (.exe with some MFC extension .dlls) to Visual Studio 2010 and can compile the application without issue on developer machines. Now I am working on compiling the application on the CI server environment I did a full installation of VS2010 (Professional) on the build server. In this way, I knew everything I needed would be on the machine (one way or another) and that this would be consistent with developer machines. VS2010 is correctly installed on the CI server, and the devenv task works as expected I now have a wrapper MSBuild script that does some extended version processing and then builds the .sln for the application via an MSBuild task. This wrapper script is run via CCNet's MSBuild task and fails with the above mentioned errors The Simple MSBuild Wrapper <?xml version="1.0" encoding="utf-8"?> <Project ToolsVersion="4.0" DefaultTargets="Build" xmlns="http://schemas.microsoft.com/developer/msbuild/2003"> <Target Name="Build"> <!-- Doing some versioning stuff here--> <MSBuild Projects="target.sln" Properties="Configuration=ReleaseUnicode;Platform=Any CPU;..." /> </Target> </Project> My Assumptions This seems to be a missing/wrong configuration of include paths to standard header resources of the MFC persuasion I should be able to coerce the MSBuild environment to consider the relevant resource files from my VS2010 install and have this approach work. Given the vs2010 msbuild support for visual c++ projects (.vcxproj), shouldn't msbuilding a solution be pretty close to compiling via visual studio? But how do I do that? Am I setting Environment variables? Registry settings? I can see how one can inject additional directories in some cases, but this seems to need a more systemic configuration at the compiler defaults level. Update 1 This appears to only ever happen in two cases: resource compilation (rc.exe), and precompiled header (stdafx.h) compilation, and only for certain projects? I was thinking it was across the board, but indeed it appears only to be in these cases. I guess I will keep digging and hope someone has some insight they would be willing to share...


  • proftpd, dynamic IP, and filezilla: port troubles

    - by Yami
    The basic setup: Two computers, one running proftpd, one attempting to connect via filezilla. Both linux (xubuntu on the server, kubuntu on the client). Both are at the moment behind a router on a residential (read: dynamic IP) connection; the client is a laptop I plan to take away from the home network, so I'll need this to work externally. I have my router set up to allow specific ports forwarded to each machine and, where possible, have plugged in those numbers into proftpd (via gadmin, double-checking the config file) and filezilla. Attempting to connect via active mode using the internal IP works: Status: Connecting to 192.168.1.139:8085... Status: Connection established, waiting for welcome message... Response: 220 Crossroads FTP Command: USER <redacted> Response: 331 Password required for <redacted> Command: PASS ******* Response: 230 Anonymous access granted, restrictions apply Command: OPTS UTF8 ON Response: 200 UTF8 set to on Status: Connected Status: Retrieving directory listing... Command: PWD Response: 257 "/" is the current directory Command: TYPE I Response: 200 Type set to I Command: PORT 192,168,1,52,153,140 Response: 200 PORT command successful Command: LIST Response: 150 Opening ASCII mode data connection for file list Response: 226 Transfer complete Status: Directory listing successful Attempting to connect via the domain name, however, leads to issues; in active mode, the PORT is the last command to be received according to the server's logs, and in passive mode, it's the PASV command. This leads me to believe I'm being redirected to a bad port? Active Sample: Status: Resolving address of <url> Status: Connecting to <ip:port> Status: Connection established, waiting for welcome message... Response: 220 Crossroads FTP Command: USER <redacted> Response: 331 Password required for <redacted> Command: PASS ******* Response: 230 Anonymous access granted, restrictions apply Command: OPTS UTF8 ON Response: 200 UTF8 set to on Status: Connected Status: Retrieving directory listing... Command: PWD Response: 257 "/" is the current directory Command: TYPE I Response: 200 Type set to I Command: PORT 174,111,127,27,153,139 Response: 200 PORT command successful Command: LIST Error: Connection timed out Error: Failed to retrieve directory listing Passive sample: Status: Resolving address of ftp.bonsaiwebdesigns.com Status: Connecting to 174.111.127.27:8085... Status: Connection established, waiting for welcome message... Response: 220 Crossroads FTP Command: USER yamikuronue Response: 331 Password required for yamikuronue Command: PASS ******* Response: 230 Anonymous access granted, restrictions apply Command: OPTS UTF8 ON Response: 200 UTF8 set to on Status: Connected Status: Retrieving directory listing... Command: PWD Response: 257 "/" is the current directory Command: TYPE I Response: 200 Type set to I Command: PASV Response: 227 Entering Passive Mode (64,95,64,197,101,88). Command: LIST Error: Connection timed out Error: Failed to retrieve directory listing In both cases, the log file ends at "PORT" or "PASV" - there's no record of ever receiving a "LIST" command. Just above that I can see the attempt to connect actively via the internal IP, which does indeed include a LIST command. My config file includes "PassivePorts 20001-26999", which are the port forwards I set up for the ftp server, and "Port 8085", which is also forwarded to the same machine. I also have a MasqueradeAddress set up to prevent it from reporting its internal IP, which was an earlier issue I had. 
I think what I'm asking is, is there another setting someplace I have to change to get this setup to work?


  • nikto probe warning messages

    - by julio
    Hi-- I have a pretty standard VPS running Ubuntu 8.1, Apache 2.2, PHP 5 etc. -- standard Lamp stack. I am using suhosin and have tried my best to plug the obvious stuff, since I'm the only user-- there's no SSH access except via pubkey on a non-standard port, there's no root access by SSH, no FTP server running, iptables is set to discard anything outside of basically port 80 or my SSH port (there's no mail server or anything else). However, I've still been compromised (not badly as far as I can tell) probably by a SQL injection. I've locked down the SQL user (there's only one outside of root, and he's got limited priv, no file etc.) So I ran nikto to see what I'm doing wrong, and there's a list of things I've never seen, and can't find using "find" or any other method I'm aware of. See below: + /autologon.html?10514: Remotely Anywhere 5.10.415 is vulnerable to XSS attacks that can lead to cookie theft or privilege escalation. This is typically found on port 2000. + /servlet/webacc?User.html=noexist: Netware web access may reveal full path of the web server. Apply vendor patch or upgrade. + OSVDB-35878: /modules.php?name=Members_List&letter='%20OR%20pass%20LIKE%20'a%25'/*: PHP Nuke module allows user names and passwords to be viewed. + OSVDB-3092: /sitemap.xml: This gives a nice listing of the site content. + OSVDB-12184: /index.php?=PHPB8B5F2A0-3C92-11d3-A3A9-4C7B08C10000: PHP reveals potentially sensitive information via certain HTTP requests which contain specific QUERY strings. + OSVDB-12184: /some.php?=PHPE9568F36-D428-11d2-A769-00AA001ACF42: PHP reveals potentially sensitive information via certain HTTP requests which contain specific QUERY strings. + OSVDB-12184: /some.php?=PHPE9568F34-D428-11d2-A769-00AA001ACF42: PHP reveals potentially sensitive information via certain HTTP requests which contain specific QUERY strings. + OSVDB-12184: /some.php?=PHPE9568F35-D428-11d2-A769-00AA001ACF42: PHP reveals potentially sensitive information via certain HTTP requests which contain specific QUERY strings. + OSVDB-3092: /administrator/: This might be interesting... + OSVDB-3092: /Agent/: This might be interesting... + OSVDB-3092: /includes/: This might be interesting... + OSVDB-3092: /logs/: This might be interesting... + OSVDB-3092: /tmp/: This might be interesting... + ERROR: /servlet/Counter returned an error: error reading HTTP response + OSVDB-3268: /icons/: Directory indexing is enabled: /icons + OSVDB-3268: /images/: Directory indexing is enabled: /images + OSVDB-3299: /forumscalendar.php?calbirthdays=1&action=getday&day=2001-8-15&comma=%22;echo%20'';%20echo%20%60id%20%60;die();echo%22: Vbulletin allows remote command execution. See link + OSVDB-3299: /forumzcalendar.php?calbirthdays=1&action=getday&day=2001-8-15&comma=%22;echo%20'';%20echo%20%60id%20%60;die();echo%22: Vbulletin allows remote command execution. See link + OSVDB-3299: /htforumcalendar.php?calbirthdays=1&action=getday&day=2001-8-15&comma=%22;echo%20'';%20echo%20%60id%20%60;die();echo%22: Vbulletin allows remote command execution. See link + OSVDB-3299: /vbcalendar.php?calbirthdays=1&action=getday&day=2001-8-15&comma=%22;echo%20'';%20echo%20%60id%20%60;die();echo%22: Vbulletin allows remote command execution. See link + OSVDB-3299: /vbulletincalendar.php?calbirthdays=1&action=getday&day=2001-8-15&comma=%22;echo%20'';%20echo%20%60id%20%60;die();echo%22: Vbulletin allows remote command execution. 
See link + OSVDB-6659: /kCKAowoWuZkKCUPH7Mr675ILd9hFg1lnyc1tWUuEbkYkFCpCdEnCKkkd9L0bY34tIf9l6t2owkUp9nI5PIDmQzMokDbp71QFTZGxdnZhTUIzxVrQhVgwmPYsMK7g34DURzeiy3nyd4ezX5NtUozTGqMkxDrLheQmx4dDYlRx0vKaX41JX40GEMf21TKWxHAZSUxjgXUnIlKav58GZQ5LNAwSAn13l0w<font%20size=50>DEFACED<!--//--: MyWebServer 1.0.2 is vulnerable to HTML injection. Upgrade to a later version. I understand about the trace and index, but what about the vbulletin and autologin? I've searched, and I can't find any files like that on the server. I have no idea about the "MyWebServer" stuff, the PHP Nuke, or the Netware/servlet stuff-- there's nothing really on the server except a pretty standard Joomla site (updated to the latest version). Any help with these messages and/or what I'm doing wrong is very much appreciated.


  • simple and reliable centralized logging inside Amazon VPC

    - by Nakedible
    I need to set up centralized logging for a set of servers (10-20) in an Amazon VPC. The logging should be set up so as not to lose any log messages in case any single server goes offline - or in the case that an entire availability zone goes offline. It should also tolerate packet loss and other normal network conditions without losing or duplicating messages. It should store the messages durably, at the minimum on two different EBS volumes in two availability zones, but S3 is a good place as well. It should also be realtime so that the messages arrive within seconds of their generation to two different availability zones. I also need to sync logfiles not generated via syslog, so a syslog-only centralized logging solution would not fulfill all the needs, although I guess that limitation could be worked around. I have already reviewed a few solutions, and I will list them here: Flume to Flume to S3: I could set up two logservers as Flume hosts which would store log messages either locally or in S3, and configure all the servers with Flume to send all messages to both servers, using the end-to-end reliability options. That way the loss of a single server shouldn't cause lost messages and all messages would arrive in two availability zones in realtime. However, there would need to be some way to join the logs of the two servers, deduplicating all the messages delivered to both. This could be done by adding a unique id on the sending side to each message and then writing some manual deduplication runs on the logfiles. I haven't found an easy solution to the duplication problem. Logstash to Logstash to ElasticSearch: I could install Logstash on the servers and have them deliver to a central server via AMQP, with the durability options turned on. However, for this to work I would need to use some of the clustering-capable AMQP implementations, or fan out the delivery just as in the Flume case. AMQP seems to be yet another moving part with several implementations and no real guidance on what works best for this sort of setup. And I'm not entirely convinced that I could get actual end-to-end durability from Logstash to ElasticSearch, assuming crashing servers in between. The fan-out solutions run into the deduplication problem again. The best solution that would seem to handle all the cases would be Beetle, which seems to provide high availability and deduplication via a redis store. However, I haven't seen any guidance on how to set this up with Logstash, and Redis is one more moving part again for something that shouldn't be terribly difficult. Logstash to ElasticSearch: I could run Logstash on all the servers, have all the filtering and processing rules in the servers themselves and just have them log directly to a remote ElasticSearch server. I think this should bring me reliable logging and I can use the ElasticSearch clustering features to share the database transparently. However, I am not sure if the setup actually survives Logstash restarts and intermittent network problems without duplicating messages in a failover case or similar. But this approach sounds pretty promising. rsync: I could just rsync all the relevant log files to two different servers. The reliability aspect should be perfect here, as the files should be identical to the source files after a sync is done. However, doing an rsync several times per second doesn't sound fun. Also, I need the logs to be untamperable after they have been sent, so the rsyncs would need to be in append-only mode. 
And log rotations mess things up unless I'm careful. rsyslog with RELP: I could set up rsyslog to send messages to two remote hosts via RELP and have a local queue to store the messages. There is the deduplication problem again, and RELP itself might also duplicate some messages. However, this would only handle the things that log via syslog. None of these solutions seem terribly good, and they have many unknowns still, so I am asking for more information here from people who have set up centralized reliable logging as to what are the best tools to achieve that goal.


  • PowerShell Control over Nikon D3000 Camera

    My wife got me a Nikon D3000 camera for Christmas last year, and I'm loving it but still trying to wrap my head around some of its features. For instance, when you plug it into a computer via USB, it doesn't show up as a drive like most cameras I've used, but rather it shows up as Computer\D3000. After a bit of research, I've learned that this is because it implements the MTP/PTP protocol, and thus doesn't actually let Windows mount the camera's storage as a drive letter. Nikon describes the use of the MTP and PTP protocols in their cameras here. What I'm really trying to do is gain access to the camera's file system via PowerShell. I've been using a very handy PowerShell script to pull pictures off of my cameras and organize them into folders by date. I'd love to be able to do the same thing with my Nikon D3000, but so far I haven't been able to figure out how to get access to the files in PowerShell. If you know how, I'd appreciate any links/tips you can provide. All I could find is a shareware product called PTPdrive, which I'm not prepared to shell out money for (yet). (And yes, you can do much the same thing with Windows 7's Import Pictures and Videos wizard, which is pretty good too.) However, in my searching, I did find some really cool stuff you can do with PowerShell and one of these cameras, like actually taking pictures via PowerShell commands. Credit for this goes to James O'Neill and Mark Wilson. Here's what I was able to do: Taking Pictures via PowerShell with the D3000 First, connect your camera, turn it on, and launch PowerShell. Execute the following commands to see what commands your device supports. $dialog = New-Object -ComObject "WIA.CommonDialog" $device = $dialog.ShowSelectDevice() $device.Commands You should see something like this: Now, to take a picture, simply point your camera at something and then execute this command: $device.ExecuteCommand("{AF933CAC-ACAD-11D2-A093-00C04F72DC3C}") Imagine my surprise when this actually took a picture (with auto-focus). Imagine what you could do with a camera completely under the control of your computer. Time-lapse photography would be pretty simple, for instance, with a very simple loop that takes a picture and then sleeps for a minute (or whatever time period). Hooked up to a laptop for portability (and an A/C power supply), this would be pretty trivial to implement. I may have to give it a shot and report back.
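
    The closing time-lapse idea, a loop that takes a picture and then sleeps, fits in a few lines. Below is a hedged C# rendering of the same WIA automation driven above from PowerShell; the take-picture command GUID comes from the post, while the frame count and one-minute interval are arbitrary assumptions.

        // C# (.NET 4, dynamic COM binding) rendering of the PowerShell WIA calls above,
        // as a simple time-lapse loop. Requires the WIA COM component present on Windows.
        using System;
        using System.Threading;

        class TimeLapse
        {
            static void Main()
            {
                Type dialogType = Type.GetTypeFromProgID("WIA.CommonDialog");
                dynamic dialog = Activator.CreateInstance(dialogType);
                dynamic device = dialog.ShowSelectDevice();   // pick the camera, as in the post

                const string TakePicture = "{AF933CAC-ACAD-11D2-A093-00C04F72DC3C}";
                for (int frame = 0; frame < 60; frame++)      // 60 frames, one per minute (assumed)
                {
                    device.ExecuteCommand(TakePicture);
                    Thread.Sleep(TimeSpan.FromMinutes(1));
                }
            }
        }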


  • C# 5 Async, Part 2: Asynchrony Today

    - by Reed
    The .NET Framework has always supported asynchronous operations. However, different mechanisms for supporting them exist throughout the framework. While there are at least three separate asynchronous patterns used throughout the framework, only the latest is directly usable with the new Visual Studio Async CTP. Before delving into details on the new features, I will talk about existing asynchronous code, and demonstrate how to adapt it for use with the new pattern. The first asynchronous pattern used in the .NET framework was the Asynchronous Programming Model (APM). This pattern was based around callbacks. A method is used to start the operation. It is typically named BeginSomeOperation. This method is passed a callback defined as an AsyncCallback, and returns an object that implements IAsyncResult. Later, the IAsyncResult is used in a call to a method named EndSomeOperation, which blocks until completion and returns the value normally directly returned from the synchronous version of the operation. Often, the EndSomeOperation call would be called from the callback function passed, which allows you to write code that never blocks. While this pattern works perfectly to prevent blocking, it can make quite confusing code, and be difficult to implement. For example, the sample code provided for FileStream’s BeginRead/EndRead methods is not simple to understand. In addition, implementing your own asynchronous methods requires creating an entire class just to implement the IAsyncResult. Given the complexity of the APM, other options have been introduced in later versions of the framework. The next major pattern introduced was the Event-based Asynchronous Pattern (EAP). This provides a simpler pattern for asynchronous operations. It works by providing a method typically named SomeOperationAsync, which signals its completion via an event typically named SomeOperationCompleted. The EAP provides a simpler model for asynchronous programming. It is much easier to understand and use, and far simpler to implement. Instead of requiring a custom class and callbacks, the standard event mechanism in C# is used directly. For example, the WebClient class uses this extensively. A method is used, such as DownloadDataAsync, and the results are returned via the DownloadDataCompleted event. While the EAP is far simpler to understand and use than the APM, it is still not ideal. By separating your code into method calls and event handlers, the logic of your program gets more complex. It also typically loses the ability to block until the result is received, which is often useful. Blocking often requires writing the code to block by hand, which is error prone and adds complexity. As a result, .NET 4 introduced a third major pattern for asynchronous programming. The Task<T> class introduced a new, simpler concept for asynchrony. Task and Task<T> effectively represent an operation that will complete at some point in the future. This is a perfect model for thinking about asynchronous code, and is the preferred model for all new code going forward. Task and Task<T> provide all of the advantages of both the APM and the EAP models – you have the ability to block on results (via Task.Wait() or Task<T>.Result), and you can stay completely asynchronous via the use of Task Continuations. In addition, the Task class provides a new model for task composition and error and cancellation handling. This is a far superior option to the previous asynchronous patterns. 
The Visual Studio Async CTP extends the Task based asynchronous model, allowing it to be used in a much simpler manner.  However, it requires the use of Task and Task<T> for all operations.
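
    The article above walks through adapting existing asynchronous code to the Task model. As a hedged illustration (not taken from the article itself), the sketch below wraps the classic APM pair FileStream.BeginRead/EndRead in a Task<int> with Task.Factory.FromAsync and attaches a continuation instead of blocking on EndRead; the file name and buffer size are arbitrary assumptions.

        // Wrapping an APM Begin/End pair in a Task<int> (illustrative sketch).
        using System;
        using System.IO;
        using System.Threading.Tasks;

        class ApmToTask
        {
            static void Main()
            {
                var buffer = new byte[4096];
                var stream = new FileStream("data.bin", FileMode.Open, FileAccess.Read,
                                            FileShare.Read, 4096, useAsync: true);

                // FromAsync pairs BeginRead/EndRead into a single Task<int>.
                Task<int> readTask = Task<int>.Factory.FromAsync(
                    stream.BeginRead, stream.EndRead, buffer, 0, buffer.Length, null);

                // Stay asynchronous via a continuation rather than blocking on the result.
                readTask.ContinueWith(t =>
                {
                    Console.WriteLine("Read {0} bytes", t.Result);
                    stream.Dispose();
                });

                Console.ReadLine();   // keep the console alive for the demo
            }
        }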


  • Access Control Service: Transitioning between Active and Passive Scenarios

    - by Your DisplayName here!
    As I mentioned in my last post, ACS features a number of ways to transition between protocol and token types. One not so widely known transition is between passive sign ins (browser) and active service consumers. Let’s see how this works. We all know the usual WS-Federation handshake via passive redirect. But ACS also allows driving the sign in process yourself via specially crafted WS-Federation query strings. So you can use the following URL to sign in using LiveID via ACS. ACS will then redirect back to the registered reply URL in your application: GET /login.srf?   wa=wsignin1.0&   wtrealm=https%3a%2f%2faccesscontrol.windows.net%2f&   wreply=https%3a%2f%2fleastprivilege.accesscontrol.windows.net%3a443%2fv2%2fwsfederation&   wp=MBI_FED_SSL&   wctx=pr%3dwsfederation%26rm%3dhttps%253a%252f%252froadie%252facs2rp%252frest%252f The wsfederation bit in the wctx parameter indicates, that the response to the token request will be transmitted back to the relying party via a POST. So far so good – but how can an active client receive that token now? ACS knows an alternative way to send the token request response. Instead of doing the redirect back to the RP, it emits a page that in turn echoes the token response using JavaScript’s window.external.notify. The URL would look like this: GET /login.srf?   wa=wsignin1.0&   wtrealm=https%3a%2f%2faccesscontrol.windows.net%2f&   wreply=https%3a%2f%2fleastprivilege.accesscontrol.windows.net%3a443%2fv2%2fwsfederation&   wp=MBI_FED_SSL&   wctx=pr%3djavascriptnotify%26rm%3dhttps%253a%252f%252froadie%252facs2rp%252frest%252f ACS would then render a page that contains the following script block: <script type="text/javascript">     try{         window.external.Notify('token_response');     }     catch(err){         alert("Error ACS50021: windows.external.Notify is not registered.");     } </script> Whereas token_response is a JSON encoded string with the following format: {   "appliesTo":"...",   "context":null,   "created":123,   "expires":123,   "securityToken":"...",   "tokenType":"..." } OK – so how does this all come together now? As an active client (Silverlight, WPF, WP7, WinForms etc). application, you would host a browser control and use the above URL to trigger the right series of redirects. All the browser controls support one way or the other to register a callback whenever the window.external.notify function is called. This way you get the JSON string from ACS back into the hosting application – and voila you have the security token. When you selected the SWT token format in ACS – you can use that token e.g. for REST services. When you have selected SAML, you can use the token e.g. for SOAP services. In the next post I will show how to retrieve these URLs from ACS and a practical example using WPF.
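
    Given the token_response format shown above, the hosting application only needs a small JSON deserialization step once the browser control's notify callback fires. The following is a hedged C# sketch using DataContractJsonSerializer; the TokenResponse class and its property names are assumptions derived from the JSON fields listed in the post.

        // Deserializing the token_response JSON handed back via window.external.Notify.
        using System;
        using System.IO;
        using System.Runtime.Serialization;
        using System.Runtime.Serialization.Json;
        using System.Text;

        [DataContract]
        public class TokenResponse
        {
            [DataMember(Name = "appliesTo")]     public string AppliesTo     { get; set; }
            [DataMember(Name = "context")]       public string Context       { get; set; }
            [DataMember(Name = "created")]       public long   Created       { get; set; }
            [DataMember(Name = "expires")]       public long   Expires       { get; set; }
            [DataMember(Name = "securityToken")] public string SecurityToken { get; set; }
            [DataMember(Name = "tokenType")]     public string TokenType     { get; set; }
        }

        public static class TokenResponseParser
        {
            public static TokenResponse Parse(string json)
            {
                var serializer = new DataContractJsonSerializer(typeof(TokenResponse));
                using (var stream = new MemoryStream(Encoding.UTF8.GetBytes(json)))
                {
                    return (TokenResponse)serializer.ReadObject(stream);
                }
            }
        }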


  • Best practices for using the Entity Framework with WPF DataBinding

    - by Ken Smith
    I'm in the process of building my first real WPF application (i.e., the first intended to be used by someone besides me), and I'm still wrapping my head around the best way to do things in WPF. It's a fairly simple data access application using the still-fairly-new Entity Framework, but I haven't been able to find a lot of guidance online for the best way to use these two technologies (WPF and EF) together. So I thought I'd toss out how I'm approaching it, and see if anyone has any better suggestions. I'm using the Entity Framework with SQL Server 2008. The EF strikes me as both much more complicated than it needs to be, and not yet mature, but Linq-to-SQL is apparently dead, so I might as well use the technology that MS seems to be focusing on. This is a simple application, so I haven't (yet) seen fit to build a separate data layer around it. When I want to get at data, I use fairly simple Linq-to-Entity queries, usually straight from my code-behind, e.g.: var families = from family in entities.Family.Include("Person") orderby family.PrimaryLastName, family.Tag select family; Linq-to-Entity queries return an IOrderedQueryable result, which doesn't automatically reflect changes in the underlying data, e.g., if I add a new record via code to the entity data model, the existence of this new record is not automatically reflected in the various controls referencing the Linq query. Consequently, I'm throwing the results of these queries into an ObservableCollection, to capture underlying data changes: familyOC = new ObservableCollection<Family>(families.ToList()); I then map the ObservableCollection to a CollectionViewSource, so that I can get filtering, sorting, etc., without having to return to the database. familyCVS.Source = familyOC; familyCVS.View.Filter = new Predicate<object>(ApplyFamilyFilter); familyCVS.View.SortDescriptions.Add(new System.ComponentModel.SortDescription("PrimaryLastName", System.ComponentModel.ListSortDirection.Ascending)); familyCVS.View.SortDescriptions.Add(new System.ComponentModel.SortDescription("Tag", System.ComponentModel.ListSortDirection.Ascending)); I then bind the various controls and what-not to that CollectionViewSource: <ListBox DockPanel.Dock="Bottom" Margin="5,5,5,5" Name="familyList" ItemsSource="{Binding Source={StaticResource familyCVS}, Path=., Mode=TwoWay}" IsSynchronizedWithCurrentItem="True" ItemTemplate="{StaticResource familyTemplate}" SelectionChanged="familyList_SelectionChanged" /> When I need to add or delete records/objects, I manually do so from both the entity data model, and the ObservableCollection: private void DeletePerson(Person person) { entities.DeleteObject(person); entities.SaveChanges(); personOC.Remove(person); } I'm generally using StackPanel and DockPanel controls to position elements. Sometimes I'll use a Grid, but it seems hard to maintain: if you want to add a new row to the top of your grid, you have to touch every control directly hosted by the grid to tell it to use a new line. Uggh. (Microsoft has never really seemed to get the DRY concept.) I almost never use the VS WPF designer to add, modify or position controls. The WPF designer that comes with VS is sort of vaguely helpful to see what your form is going to look like, but even then, well, not really, especially if you're using data templates that aren't binding to data that's available at design time. If I need to edit my XAML, I take it like a man and do it manually. Most of my real code is in C# rather than XAML. 
As I've mentioned elsewhere, entirely aside from the fact that I'm not yet used to "thinking" in it, XAML strikes me as a clunky, ugly language, that also happens to come with poor designer and intellisense support, and that can't be debugged. Uggh. Consequently, whenever I can see clearly how to do something in C# code-behind that I can't easily see how to do in XAML, I do it in C#, with no apologies. There's been plenty written about how it's a good practice to almost never use code-behind in WPF page (say, for event-handling), but so far at least, that makes no sense to me whatsoever. Why should I do something in an ugly, clunky language with god-awful syntax, an astonishingly bad editor, and virtually no type safety, when I can use a nice, clean language like C# that has a world-class editor, near-perfect intellisense, and unparalleled type safety? So that's where I'm at. Any suggestions? Am I missing any big parts of this? Anything that I should really think about doing differently?
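
    Regarding the "separate data layer" point above, even a thin wrapper can keep the Linq-to-Entity queries and the ObservableCollection wiring out of the code-behind. The sketch below reuses the post's own query and delete logic; the repository class and the MyEntities context name are illustrative assumptions rather than part of the original code.

        // Thin data-access wrapper around the post's queries (illustrative sketch).
        using System.Collections.ObjectModel;
        using System.Linq;

        public class FamilyRepository
        {
            private readonly MyEntities entities = new MyEntities();   // hypothetical ObjectContext name

            public ObservableCollection<Family> LoadFamilies()
            {
                var families = from family in entities.Family.Include("Person")
                               orderby family.PrimaryLastName, family.Tag
                               select family;
                // ObservableCollection so the UI sees adds/removes made in code.
                return new ObservableCollection<Family>(families.ToList());
            }

            public void DeletePerson(Person person, ObservableCollection<Person> personOC)
            {
                entities.DeleteObject(person);
                entities.SaveChanges();
                personOC.Remove(person);
            }
        }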


  • How can I bind the nested viewmodels to properties of a control

    - by Robert
    I used Microsoft's Chart Control of the WPF toolkit to write my own chart control. I blogged about it here. My Chart control stacks the yaxes in the chart on top of each other. As you can read in the article this all works quite well. Now I want to create a viewmodel that controls the data and axes in the chart. So far I'm able to add axes to the chart and show them in the chart. But I have a problem when I try to add the lineseries because it has one DependentAxis and one InDependentAxis property. I don't know how to assign the proper xAxis and yAxis controls to it. Below you see part of the LineSeriesViewModel. It has a nested XAxisViewModel and YAxisViewModel property. public class LineSeriesViewModel : ViewModelBase, IChartComponent { XAxisViewModel _xAxis; public XAxisViewModel XAxis { get { return _xAxis; } set { _xAxis = value; RaisePropertyChanged(() => XAxis); } } //The YAxis Property look the same } The viewmodels all have their own datatemplate. The xaml code looks like this: <UserControl.Resources> <DataTemplate x:Key="xAxisTemplate" DataType="{x:Type l:YAxisViewModel}"> <chart:LinearAxis x:Name="yAxis" Orientation="Y" Location="Left" Minimum="0" Maximum="10" IsHitTestVisible="False" Width="50" /> </DataTemplate> <DataTemplate x:Key="yAxisTemplate" DataType="{x:Type l:XAxisViewModel}"> <chart:LinearAxis x:Name="xAxis" Orientation="X" Location="Bottom" Minimum="0" Maximum="100" IsHitTestVisible="False" Height="50" /> </DataTemplate> <DataTemplate DataType="{x:Type l:LineSeriesViewModel}"> <!--Binding doesn't work on the Dependent and IndependentAxis! --> <!--YAxis XAxis and Series are properties of the LineSeriesViewModel --> <l:FastLineSeries DependentAxis="{Binding Path=YAxis}" IndependentAxis="{Binding Path=XAxis}" ItemsSource="{Binding Path=Series}"/> </DataTemplate> <Style TargetType="ItemsControl"> <Setter Property="ItemsPanel"> <Setter.Value> <ItemsPanelTemplate> <!--My stacked chart control --> <l:StackedPanel x:Name="stackedPanel" Width="Auto" Height="Auto" Background="LightBlue"> </l:StackedPanel> </ItemsPanelTemplate> </Setter.Value> </Setter> </Style> </UserControl.Resources> <Grid HorizontalAlignment="Stretch" VerticalAlignment="Stretch" ClipToBounds="True"> <!-- View is an ObservableCollection of all axes and series--> <ItemsControl x:Name="chartItems" ItemsSource="{Binding Path=View}" Focusable="False"> </ItemsControl> </Grid> This code works quite well. When I add axes they get drawn. But the DependentAxis and InDependentAxis of the lineseries control stay null, so the series doesn't get drawn. How can I bind the nested viewmodels to the properties of a control?


  • iPhone App to Remotely Control Windows Media Center

    - by Barry-Jon
    Can anyone recommend an iPhone app, for non-jailbroken devices, that can be used to remotely control Windows (7, if it matters) Media Center from outside the home WiFi network? The objective is to be able to connect to the Media Center box during the day when I am not home. For instance, if I realise that the new series of [insert very trendy new show here] is starting tonight and I hadn’t set up a series link I could fire up the app and set my machine to record it. Solutions could include native iPhone apps, iPhone optimised web apps or regular web apps.


  • Sony Vaio CPU fan runs at full speed after installing Windows 8.1 preview

    - by greg27
    I installed the Windows 8.1 preview on my Sony Vaio E Series 14P (http://www.sony.com.au/product/sve14a27cg) and now my CPU fan is running at full speed all the time. When I first start my computer it runs normally, but a few seconds after reaching the Windows login screen the fan speed will suddenly jump up to max. It's pretty loud, so I'm hoping there's a solution! I've tried updating my bios to the latest version but that didn't help. I've tried programs like SpeedFan, but that wasn't able to detect my CPU fan at all. My CPU usage and temperature is normal and doesn't warrant the max fan speed. Someone else has reported the issue here: http://answers.microsoft.com/en-us/windows/forum/windows8_1_pr-hardware/sony-vaio-e-series-sve14a35cxh100-fan-throttle/45ec823a-2bc8-43ea-8557-b1a5dd0a6870 Is there any way to fix this without having to refresh/reinstall Windows?


  • Best solution for Multi-WAN failover (inside & out)?

    - by Sean O
    Looking for a way to setup 2 ISPs in failover mode, for both incoming & outgoing traffic, for our small (<100 devices) network. The leading contender for now seems to be the Peplink Balance 310. However, a reseller I spoke with said it's great for 100% outgoing connectivity, but didn't seem to be confident in its abilities to handle incoming traffic. This is important as we host our own web site, Exchange e-mail, and virtual desktops (RDP). Do any Peplink owners use this for failover of incoming traffic? Are there other devices I should be considering? We're currently using a Cisco 1800 series router & ASA 5500 series firewall, with Comcast & T-1 lines (the goal being to replace the T with DSL/FiOS {whenever that becomes availble}). Price range: ~$1000 - $2500 USD. Thanks.


  • Flex 4, chart, Error 1009

    - by Stephane
    Hello, I am new to flex development and I am stuck with this error TypeError: Error #1009: Cannot access a property or method of a null object reference. at mx.charts.series::LineSeries/findDataPoints()[E:\dev\4.0.0\frameworks\projects\datavisualization\src\mx\charts\series\LineSeries.as:1460] at mx.charts.chartClasses::ChartBase/findDataPoints()[E:\dev\4.0.0\frameworks\projects\datavisualization\src\mx\charts\chartClasses\ChartBase.as:2202] at mx.charts.chartClasses::ChartBase/mouseMoveHandler()[E:\dev\4.0.0\frameworks\projects\datavisualization\src\mx\charts\chartClasses\ChartBase.as:4882] I try to create a chart where I can move the points with the mouse I notice that the error doesn't occur if I move the point very slowly I have try to use the debugger and pint some debug every where without success All the behaviours were ok until I had the modifyData Please let me know if you have some experience with this kind of error I will appreciate any help. Is it also possible to remove the error throwing because after that the error occur if I click the dismiss all error button then the component work great there is the simple code of the chart <fx:Declarations> </fx:Declarations> <fx:Script> <![CDATA[ import mx.charts.HitData; import mx.charts.events.ChartEvent; import mx.charts.events.ChartItemEvent; import mx.collections.ArrayCollection; [Bindable] private var selectItem:Object; [Bindable] private var chartMouseY:int; [Bindable] private var hitData:Boolean=false; [Bindable] private var profitPeriods:ArrayCollection = new ArrayCollection( [ { period: "Qtr1 2006", profit: 32 }, { period: "Qtr2 2006", profit: 47 }, { period: "Qtr3 2006", profit: 62 }, { period: "Qtr4 2006", profit: 35 }, { period: "Qtr1 2007", profit: 25 }, { period: "Qtr2 2007", profit: 55 } ]); public function chartMouseUp(e:MouseEvent):void{ if(hitData){ hitData = false; } } private function chartMouseMove(e:MouseEvent):void { if(hitData){ var p:Point = new Point(linechart.mouseX,linechart.mouseY); var d:Array = linechart.localToData(p); chartMouseY=d[1]; modifyData(); } } public function modifyData():void { var idx:int = profitPeriods.getItemIndex(selectItem); var item:Object = profitPeriods.getItemAt(idx); item.profit = chartMouseY; profitPeriods.setItemAt(item,idx); } public function chartMouseDown(e:MouseEvent):void{ if(!hitData){ var hda:Array = linechart.findDataPoints(e.currentTarget.mouseX, e.currentTarget.mouseY); if (hda[0]) { selectItem = hda[0].item; hitData = true; } } } ]]> </fx:Script> <s:layout> <s:HorizontalLayout horizontalAlign="center" verticalAlign="middle" /> </s:layout> <s:Panel title="LineChart Control" > <s:VGroup > <s:HGroup> <mx:LineChart id="linechart" color="0x323232" height="500" width="377" mouseDown="chartMouseDown(event)" mouseMove="chartMouseMove(event)" mouseUp="chartMouseUp(event)" showDataTips="true" dataProvider="{profitPeriods}" > <mx:horizontalAxis> <mx:CategoryAxis categoryField="period"/> </mx:horizontalAxis> <mx:series> <mx:LineSeries yField="profit" form="segment" displayName="Profit"/> </mx:series> </mx:LineChart> <mx:Legend dataProvider="{linechart}" color="0x323232"/> </s:HGroup> <mx:Form> <mx:TextArea id="DEBUG" height="200" width="300"/> </mx:Form> </s:VGroup> </s:Panel> UPDATE 30/2010 : the null object is _renderData.filteredCache from the chartline the code call before error is the default mouseMoveHandler of chartBase to chartline. Is it possible to remove it ? or provide a filteredCache


  • Synchronizing an ERWin model with a Visual Studio 2008 GDR 2/2010 db project

    - by Grant Back
    I am looking for options to get our vast collection of DB objects across many DBs into source control (TFS 2010). Once we succeed here, we will work toward generating our alter scripts for a particular DB change via TFS build. The problem is, our data architecture group is responsible for maintaining the DB objects (excluding SPs), and they work within a model-centric process, via ERWin. What this means is that they maintain the DBs via ERWin models, and generate alters from them that are used to release changes. In order to achieve our goal of getting the DB objects (not just the ERWin models) into TFS, I believe the best option is to do this via Visual Studio DB projects. From what I can tell, there is very little urgency for CA to continue supporting an integration between ERWin and Visual Studio, which no longer works as of Visual Studio 2008 DB Ed. GDR. If I have been misled in this regard, please feel free to set me straight. One potential solution is to: Perform changes in the ERWin model. Take the alter script generated from ERWin, and import the script into the appropriate Visual Studio DB project, updating the objects in the DB project. Check the changed objects in the DB project into TFS. TFS Build executes to generate the alter scripts that will be used to push the changes through our release process. My question is, is this solution viable, or are there any other options?


  • scripting a google docs form submission

    - by justSteve
    I'm trying to create a bookmarklet that parses a page and sends the results to a Google Docs spreadsheet via a form that I've defined. The relevant bit of the script is: var form = document.createElement("form"); form.action = "http://spreadsheets.google.com/formResponse?formkey=Fd0SHgwQ3YwSFd5UHZpM1QxMlNOdlE6MA&ifq"; form.method = "POST"; form.id="ss-form"; form.innerHTML = ["<input id='entry_0' name = 'entry.0.single' value = '" + orderDate + "'/>", "<input name = 'entry.2.single' value = '" + email + "'/>", "<input name = 'entry.3.single' value = '" + customerID + "'/>", ].join(""); form.submit(); alert(form.innerHTML); // returns: Nothing is being saved to the form via the bookmarklet - any way to capture Google's response in my bookmarklet's code? (FWIW, I've injected jQuery via jQueryify.) EDIT: Firebug's Net panel isn't seeing any of the activity triggered by the bookmarklet - how about I approach this from Google's viewform method instead of formResponse? The form I'm trying to submit is located at: http://spreadsheets.google.com/viewform?hl=en&formkey=dFd0SHgwQ3YwSFd5UHZpM1QxMlNOdlE6MA How can I go about injecting the script values into that form and then submitting that - again...via script within the bookmarklet that would have been triggered while on the page being parsed?


  • MSSQL 2005 migration to 2008 Express Edition - Any complications?

    - by FullTrust
    Hey, I've developed an application that uses ASP.NET, Linq-to-SQL and MSSQL 2005. However, I would like to migrate it to MSSQL 2008. I don't have MSSQL 2008, so I was wondering if it's possible for me to detach my 2005 db and attach it within 2008 express edition, to test if it will work on my host's MSSQL 2008 server? I haven't done anything complicated (CRUD is done from Linq to SQL, and all stored procs are the ASP.NET Membership default ones). Would this work, or will I get an error since I'm 'downgrading' so to speak? If I download MSSQL 2008 express edition, it will be on the same system as my MSSQL 2005 Developer Edition. I'm hoping this won't cause any problems? Thanks


  • Mystery Key Value Coding Key

    - by Stephen Furlani
    Hello, I'm attempting to load data from an undocumented API (OsiriX). Getting the NSManagedObject like this: NSManagedObject *itemStudy = [[BrowserController databaseOutline] itemAtRow: [[BrowserController databaseOutline] selectedRow]]; works just fine. But getting the NSManagedObject like this: seriesArray = [_context executeFetchRequest:request error:&error]; NSManagedObject *itemSeries = [seriesArray objectAtIndex:0]; Generates an error when I call [itemSeries valueForKey:@"type"] 2010-05-27 11:04:48.178 rcOsirix[27712:7b03] Exception: [<NSManagedObject 0xd30fd0> valueForUndefinedKey:]: the entity Series is not key value coding-compliant for the key "type". This confuses me thoroughly. If I print the KVC values for itemSeries I get this list: 2010-05-27 11:04:48.167 rcOsirix[27712:7b03] KVC comment 2010-05-27 11:04:48.168 rcOsirix[27712:7b03] KVC date 2010-05-27 11:04:48.168 rcOsirix[27712:7b03] KVC dateAdded 2010-05-27 11:04:48.169 rcOsirix[27712:7b03] KVC dateOpened 2010-05-27 11:04:48.169 rcOsirix[27712:7b03] KVC displayStyle 2010-05-27 11:04:48.170 rcOsirix[27712:7b03] KVC id 2010-05-27 11:04:48.170 rcOsirix[27712:7b03] KVC modality 2010-05-27 11:04:48.170 rcOsirix[27712:7b03] KVC name 2010-05-27 11:04:48.171 rcOsirix[27712:7b03] KVC numberOfImages 2010-05-27 11:04:48.171 rcOsirix[27712:7b03] KVC numberOfKeyImages 2010-05-27 11:04:48.171 rcOsirix[27712:7b03] KVC rotationAngle 2010-05-27 11:04:48.172 rcOsirix[27712:7b03] KVC scale 2010-05-27 11:04:48.172 rcOsirix[27712:7b03] KVC seriesDICOMUID 2010-05-27 11:04:48.173 rcOsirix[27712:7b03] KVC seriesDescription 2010-05-27 11:04:48.173 rcOsirix[27712:7b03] KVC seriesInstanceUID 2010-05-27 11:04:48.173 rcOsirix[27712:7b03] KVC seriesSOPClassUID 2010-05-27 11:04:48.174 rcOsirix[27712:7b03] KVC stateText 2010-05-27 11:04:48.174 rcOsirix[27712:7b03] KVC thumbnail 2010-05-27 11:04:48.174 rcOsirix[27712:7b03] KVC windowLevel 2010-05-27 11:04:48.175 rcOsirix[27712:7b03] KVC windowWidth 2010-05-27 11:04:48.175 rcOsirix[27712:7b03] KVC xFlipped 2010-05-27 11:04:48.176 rcOsirix[27712:7b03] KVC xOffset 2010-05-27 11:04:48.176 rcOsirix[27712:7b03] KVC yFlipped 2010-05-27 11:04:48.176 rcOsirix[27712:7b03] KVC yOffset 2010-05-27 11:04:48.177 rcOsirix[27712:7b03] KVC mountedVolume 2010-05-27 11:04:48.177 rcOsirix[27712:7b03] KVC study 2010-05-27 11:04:48.178 rcOsirix[27712:7b03] KVC images The KVC for itemStudy is this: 2010-05-27 10:46:40.336 OsiriX[27266:a0f] KVC accessionNumber 2010-05-27 10:46:40.336 OsiriX[27266:a0f] KVC comment 2010-05-27 10:46:40.336 OsiriX[27266:a0f] KVC date 2010-05-27 10:46:40.336 OsiriX[27266:a0f] KVC dateAdded 2010-05-27 10:46:40.336 OsiriX[27266:a0f] KVC dateOfBirth 2010-05-27 10:46:40.336 OsiriX[27266:a0f] KVC dateOpened 2010-05-27 10:46:40.337 OsiriX[27266:a0f] KVC dictateURL 2010-05-27 10:46:40.337 OsiriX[27266:a0f] KVC expanded 2010-05-27 10:46:40.337 OsiriX[27266:a0f] KVC hasDICOM 2010-05-27 10:46:40.337 OsiriX[27266:a0f] KVC id 2010-05-27 10:46:40.337 OsiriX[27266:a0f] KVC institutionName 2010-05-27 10:46:40.337 OsiriX[27266:a0f] KVC lockedStudy 2010-05-27 10:46:40.337 OsiriX[27266:a0f] KVC modality 2010-05-27 10:46:40.338 OsiriX[27266:a0f] KVC name 2010-05-27 10:46:40.338 OsiriX[27266:a0f] KVC numberOfImages 2010-05-27 10:46:40.338 OsiriX[27266:a0f] KVC patientID 2010-05-27 10:46:40.338 OsiriX[27266:a0f] KVC patientSex 2010-05-27 10:46:40.338 OsiriX[27266:a0f] KVC patientUID 2010-05-27 10:46:40.338 OsiriX[27266:a0f] KVC performingPhysician 2010-05-27 10:46:40.339 OsiriX[27266:a0f] KVC referringPhysician 
2010-05-27 10:46:40.339 OsiriX[27266:a0f] KVC reportURL 2010-05-27 10:46:40.339 OsiriX[27266:a0f] KVC stateText 2010-05-27 10:46:40.339 OsiriX[27266:a0f] KVC studyInstanceUID 2010-05-27 10:46:40.339 OsiriX[27266:a0f] KVC studyName 2010-05-27 10:46:40.339 OsiriX[27266:a0f] KVC windowsState 2010-05-27 10:46:40.339 OsiriX[27266:a0f] KVC albums 2010-05-27 10:46:40.340 OsiriX[27266:a0f] KVC series If I use code: NSDictionary *props = [[item entity] propertiesByName]; for (NSString *s in [props allKeys]) { NSLog(@"KVC %@", s); } Yet itemStudy throws no error if I call [itemStudy valueForKey:@"type"] when it should because there's no KVC for @"type"!!! Granted, the objects are different but neither of them contain the key @"type" and they both should throw errors, yet the Osirix code Tests for both conditions: if ([[item valueForKey:@"type"] isEqualToString:@"Series"]) { ... } if ([[item valueForKey:@"type"] isEqualToString:@"Study"]) { ... } And throws no errors. Yet when I load an NSManagedObject of the same exact model and entity @"Series" it throws the 'no key value' when passed into the conditions above. Am I missing something? Both the superentity and subentities of itemSeries and itemStudy are nil so they don't inherit from something that has KVC @"type". I'm totally at a loss as to explain what is going on. --- EDIT --- I know no one can explain what is going on... but maybe where to start looking? How would itemStudy have the extra KVC @"type" that doesn't show up in it's property list? Thank you for your assistance, -Stephen


  • Parsing JSON into XML using Windows Phone

    - by Henry Edwards
    I have this code, but can't get it all working. I am trying to get a json string into xml. So that I can get a list of items when i parse the data. Is there a better way to parse json into xml. If so what's the best way to do it, and if possible could you give me a working example? The URL that is in the code is not the URL that i am using using System; using System.Collections.Generic; using System.Linq; using System.Net; using System.Windows; using System.Windows.Controls; using System.Windows.Documents; using System.Windows.Input; using System.Windows.Media; using System.Windows.Media.Animation; using System.Windows.Shapes; using Microsoft.Phone.Controls; using Newtonsoft.Json; using Newtonsoft.Json.Serialization; using Newtonsoft.Json.Converters; using Newtonsoft.Json.Utilities; using Newtonsoft.Json.Linq; using Newtonsoft.Json.Schema; using Newtonsoft.Json.Bson; using System.Xml; using System.Xml.Serialization; using System.Xml.Linq; using System.Xml.Linq.XDocument; using System.IO; namespace WindowsPhonePanoramaApplication3 { public partial class Page2 : PhoneApplicationPage { public Page2() { InitializeComponent(); } private void Form1_Load(object sender, EventArgs e1) { /* because the origional JSON string has multiple root's this needs to be added */ string json = "{BFBC2_GlobalStats:"; json += DownlodUrl("http://api.bfbcs.com/api/xbox360?globalstats"); json += "}"; XmlDocument doc = (XmlDocument)JsonConvert.DeserializeObject(json); textBox1.Text = GetXmlString(doc); } private string GetXmlString() { throw new NotImplementedException(); } private string DownlodUrl(string url) { string result = null; try { WebClient client = new WebClient(); result = client.DownloadString(url); } catch (Exception ex) { // handle error result = ex.Message; } return result; } private string GetXmlString(XmlDocument xmlDoc) { sw = new StringWriter(); XmlTextWriter xw = new XmlTextWriter(sw); xw.Formatting = System.Xml.Formatting.Indented; xmlDoc.WriteTo(xw); return sw.ToString(); } } } The URL outputs the following code: {"StopName":"Race Hill", "stopId":7553, "NaptanCode":"bridwja", "LongName":"Race Hill", "OperatorsCode1":" 5", "OperatorsCode2":" ", "OperatorsCode3":" ", "OperatorsCode4":"bridwja", "Departures":[ { "ServiceName":"", "Destination":"", "DepartureTimeAsString":"", "DepartureTime":"30/01/2012 00:00:00", "Notes":""}` Thanks for your responses. So Should i just leave the data a json and then view the data via that??? Is this a way to show the data from a json string. public void Load() { // form the URI UriBuilder uri = new UriBuilder("http://mysite.com/events.json"); WebClient proxy = new WebClient(); proxy.OpenReadCompleted += new OpenReadCompletedEventHandler(OnReadCompleted); proxy.OpenReadAsync(uri.Uri); } void OnReadCompleted(object sender, OpenReadCompletedEventArgs e) { if (e.Error == null) { var serializer = new DataContractJsonSerializer(typeof(EventList)); var events = (EventList)serializer.ReadObject(e.Result); foreach (var ev in events) { Items.Add(ev); } } } public ObservableCollection<EventDetails> Items { get; private set; } Edit: Have now kept the url as json and have now got it working by using the json way.


  • MSBuild "Wrapper" fails while VS2010 "Pure" compile succeeds for MFC application in CruiseControl.NE

    - by ee
    The Overview I am working on a Continuous Integration build of a MFC appliction via CruiseControl.net and VS2010. When building my .sln, a "Visual Studio" CCNet task (devenv) works, but a wrapper MSBuild script run via the CCNet MSBuild task fails with errors like: error RC1015: cannot open include file 'winres.h'.. error C1083: Cannot open include file: 'afxwin.h': No such file or directory error C1083: Cannot open include file: 'afx.h': No such file or directory The Question How can I adjust the build environment of my msbuild wrapper so that the application builds correctly? (Pretty clearly the MFC paths aren't right for the msbuild environment, but how do i fix it for MSBuild+VS2010+MFC+CCNet?) Background Details We have successfully upgraded an MFC application (.exe with some MFC extension .dlls) to Visual Studio 2010 and can compile the application without issue on developer machines. Now I am working on compiling the application on the CI server environment I did a full installation of VS2010 (Professional) on the build server. In this way, I knew everything I needed would be on the machine (one way or another) and that this would be consistent with developer machines. VS2010 is correctly installed on the CI server, and the devenv task works as expected I now have a wrapper MSBuild script that does some extended version processing and then builds the .sln for the application via an MSBuild task. This wrapper script is run via CCNet's MSBuild task and fails with the above mentioned errors My Assumptions This seems to be a missing/wrong configuration of include paths to standard header resources of the MFC persuasion I should be able to coerce the MSBuild environment to consider the relevant resource files from my VS2010 install and have this approach work. But how do I do that? Am I setting Environment variables? Registry settings? I can see how one can inject additional directories in some cases, but this seems to need a more systemic configuration at the compiler defaults level.

