Search Results

Search found 35275 results on 1411 pages for 'white list'.


  • directory services group query changing randomly

    - by yamspog
    I am seeing unusual behaviour in my ASP.NET application. I have code that uses Directory Services to find the AD groups for a given, authenticated user. The code goes something like:

        string username = "user";
        string domain = "LDAP://DC=domain,DC=com";
        DirectorySearcher search = new DirectorySearcher(domain);
        search.Filter = "(SAMAccountName=" + username + ")";

    I then query and get the list of groups for the given user. The problem is that the code was receiving the list of groups as a list of strings. With our latest release of the software, we are starting to receive the list of groups as a byte[]. The system will return string, suddenly return byte[], and then after a reboot it returns string again. Anyone have any ideas? Code sample:

        DirectoryEntry dirEntry = new DirectorySearcher("LDAP://" + ldapSearchBase);
        DirectorySearcher userSearcher = new DirectorySearcher(dirEntry)
        {
            SearchScope = SearchScope.Subtree,
            CacheResults = false,
            Filter = ("(" + txtLdapSearchNameFilter.Text + "=" + userName + ")")
        };
        userResult = userSearcher.FindOne();
        ResultPropertyValueCollection valCol = userResult.Properties["memberOf"];
        foreach (object val in valCol)
        {
            if (val is string)
            {
                distName = val.ToString();
            }
            else
            {
                distName = enc.GetString((Byte[])val);
            }
        }
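    Since the question leaves the `enc` encoder undefined, here is a minimal sketch of normalizing memberOf values regardless of how AD marshals them; the use of UTF-8 is an assumption, not something the question confirms:

        using System.Collections.Generic;
        using System.DirectoryServices;
        using System.Text;

        static class MemberOfHelper
        {
            // memberOf entries can arrive as string or as raw bytes;
            // decode the byte[] case explicitly instead of relying on ToString().
            public static List<string> GetGroupDns(SearchResult userResult)
            {
                var groups = new List<string>();
                foreach (object val in userResult.Properties["memberOf"])
                {
                    groups.Add(val as string ?? Encoding.UTF8.GetString((byte[])val));
                }
                return groups;
            }
        }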

    Read the article

  • Question about Displaying Documents and the CQWP in MOSS 2007

    - by Psycho Bob
    My organization is in the process of converting our intranet over to a SharePoint solution. Part of this intranet will be the movement and organization of all our internal documents. Currently, we have 11 pages of document links, each with its own subheadings. So far I have it set up so that each document has a custom field called "Page" with a check box list of all the document pages on the intranet site. On each individual page, I have set up a Content Query Web Part that displays the documents with the corresponding Page value set (i.e. if a document's Page value has "HR" checked, it will appear on the HR page). The goal of this setup is to allow the nontechnical personnel who will be responsible for maintaining the documents to upload new documents to the documents list and note the pages on which they should appear, without having to manually update the pages themselves. The problem I am having is that I cannot seem to find a good way to sort the documents into their subheadings once they are on the appropriate page. I could create individual check boxes for each page/subheading combination, but this would create a list of approximately 50-75 items. Does anyone have any ideas as to how I could accomplish this, either via the CQWP or by different means?

    Goals/Requirements of Installation:
    - Allow intranet documents to be maintained by nontechnical personnel
    - Display documents on the appropriate pages without the user having to edit the actual page or web part
    - Denote document page location using user-settable document attributes (if possible)
    - Maintain current intranet organization and workflow
    - Use only one document list without subdirectories

    NOTE: I am aware that this is not the most efficient or elegant way to do things, but these are the requirements I have been given for the project.

    Read the article

  • CentOS security for lazy admins

    - by Robby75
    I'm running CentOS 5.5 (basic LAMP with Parallels Power Panel and Plesk) and have thus far neglected security (because it's not my full-time job, and there is always something more important on my todo list). My server does not contain any secret data and no lives depend on it. Basically what I want is to make sure it does not become part of a botnet; that is "good enough" security in my case.

    Anyway, I don't want to become a full-time paranoid admin (constantly watching and patching everything because of some obscure problem), and I also don't care about most security problems like DoS attacks or problems that only exist under some arcane settings. I'm in search of a "happy medium": for example, a list of known important problems in the default installation of CentOS 5.5, and/or a list of security problems that have actually been exploited - not the typical endless list of buffer overflows that are "maybe" a problem in some special case. The problem I have with the usually recommended approaches (joining mailing lists, etc.) is that the really important problems (something for which an exploit exists, that is exploitable in a common setup, and where the attacker can do something really useful - i.e. not a DoS) are completely and utterly swamped by millions of tiny security alerts that surely are important for high-security servers, but not for me. Thanks for all suggestions!

    Read the article

  • Synchronizing files between Linux servers, through FTP

    - by Daniel Magliola
    I have the following configuration of servers:

    - 1 central Linux server, a VPS
    - 8 satellite Linux servers, "crappy shared hostings"

    I have a bunch of files that I need to have on all servers. Right now I'm copying them everywhere manually, but I want to be able to copy them to the central server and then have a scheduled process that runs every now and then and synchronizes them (only outwardly; no need to try to find "new" files on the satellite servers). There are a couple of catches though:

    - I can't have any custom software on the satellite servers, or do strange command line things that'll auto connect to them and send the files directly. I know this is the way these kinds of things are normally done, but the satellite servers are crappy shared hosting ones where I have absolutely no control over anything. I need to send the files over FTP.
    - I also need to have, on my central server, a list of the files that are available on each of the satellite servers, to make sure they are ready before I send traffic to them.

    If I were to do this manually, the steps would be:

    1. Get the list of files on a satellite server.
    2. Compare to my own, and send the files that are missing.
    3. Get the list of files again, and store it in my central database.

    I'd like to know what tools are out there that can alleviate as much of this as possible: first the syncing, and then getting the list of files available on the other server. I'm going to be doing everything from PHP; I'm not sure if there are good tools to "use FTP from PHP", which I'm pretty sure I'll have to do for step 3 at least. Thanks in advance for any ideas! Daniel
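    As a sketch of the push side, the central server (the only machine that needs extra software) could drive a one-way FTP mirror with the lftp client; the host, credentials, and paths here are placeholders, not values from the question:

        lftp -u user,password ftp.satellite1.example.com <<'EOF'
        # Upload anything missing or newer in the central copy (push-only mirror).
        mirror -R --only-newer /var/www/shared /public_html/shared
        # Print the remote listing so a wrapper script can capture it for the database.
        cls -1 /public_html/shared
        quit
        EOF

    Step 3 could then be done from PHP with its built-in FTP functions (ftp_connect, ftp_login, ftp_nlist) to fetch the same listing and store it.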

    Read the article

  • Backup solution, or, how Duplicati duped me

    - by blarghmaster
    TL/DR version: Mono + Duplicati.commandline.exe restore etc. etc. spits this out for several files regardless of what I try. I am able to list sets, list files in said sets, even do a verify, but each time I do a restore of any kind, I get errors to the effect of:

        Failed to restore file: "snapshot/blahblah/2005-11-07.tar.gz", Error message: The partial file record for snapshot/blahblah/2005-11-07.tar.gz does not match the file

    Any advice here, or an idea of where to look for a better solution?

    FULL STORY: I've recently put together a nice, clean, friendly backup solution for several servers, predominantly Linux, but occasionally a Windows box is added too. The solution as is meets all my requirements and does it well... save one: cross-compatibility. The solution is based on a combination of several elements, but eventually comes down to using Duplicity and Duplicati for the actual storage of files. The entire solution was ready to go before I realized that Duplicati does not, in fact, allow me to restore my files to a Linux box, regardless of what the command line under Mono might tell you. It just spits out errors on random zip and image files, for apparently no good reason, as I have tried several options to get it to restore, and several versions of Mono, including installing it pretty much lib-for-lib. There is no effective log file for the reasons for these errors, and even the "--debug-output=true" flag does nothing. I am able to list sets, list files in said sets, even do a verify, but each time I do a restore of any kind, I get errors to the effect of:

        Failed to restore file: "snapshot/blahblah/2005-11-07.tar.gz", Error message: The partial file record for snapshot/blahblah/2005-11-07.tar.gz does not match the file

    Now I could most likely use the friendly instructions on Duplicati's site and script a bash equivalent of the restore, but that's not exactly ideal. Any advice on this? Or possibly an alternative solution that presents the same benefits of Duplicati/Duplicity but that actually works across platforms?

    Read the article

  • Mailman delivery troubles

    - by stanigator
    I'm not sure if this is a good place to ask this question. It's about the mailing list management software called Mailman, from GNU. Here are the details:

    - Hosting provider: Vlexofree
    - Domain: www.sysil.com with Google Apps
    - Mailing list created from hosting cpanel: [email protected]

    I have registered a list of subscribers and tried sending an email to [email protected]. I got the following error message:

        Delivery to the following recipient failed permanently: [email protected]
        Technical details of permanent failure: Google tried to deliver your message, but it was rejected by the recipient domain. We recommend contacting the other email provider for further information about the cause of this error. The error that the other server returned was: 550 550-5.1.1 The email account that you tried to reach does not exist. Please try 550-5.1.1 double-checking the recipient's email address for typos or 550-5.1.1 unnecessary spaces. Learn more at 550 5.1.1 http://mail.google.com/support/bin/answer.py?answer=6596 23si6479194ewy.44 (state 14).

        ----- Original message -----
        MIME-Version: 1.0
        Received: by 10.216.90.136 with SMTP id e8mr1469147wef.110.1264220118960; Fri, 22 Jan 2010 20:15:18 -0800 (PST)
        Date: Fri, 22 Jan 2010 20:15:18 -0800
        Message-ID: <[email protected]>
        Subject:
        From: Stanley Lee <[email protected]>
        To: [email protected]
        Content-Type: multipart/alternative; boundary=0016e6dab0931bccc3047dcd2f1e

    Is there any way of fixing this problem? I would like this mailing list to work through my hosting and domain. Thanks in advance.

    Read the article

  • Unicorn 3.3.1 and Rack 1.1.0 issues?

    - by user41422
    I'm upgrading from Ruby Enterprise Edition 1.8.6 to the latest 1.8.7 version with Unicorn to facilitate an upgrade to Rails 2.3.10, and am running into some issues. Should I uninstall the older versions of these gems? Here are the log messages:

        I, [2011-02-02T22:06:16.328076 #30672] INFO -- : listening on addr=0.0.0.0:8080 fd=3
        I, [2011-02-02T22:06:16.333137 #30672] INFO -- : Refreshing Gem list
        /srv/ree/bin/unicorn_rails must be run inside RAILS_ROOT: #<Gem::LoadError: can't activate rack (~> 1.1.0, runtime) for ["actionpack-2.3.10", "rails-2.3.10"], already activated rack-1.2.1 for ["unicorn-3.3.1"]>
        I, [2011-02-02T22:07:12.259436 #30701] INFO -- : listening on addr=0.0.0.0:8080 fd=3
        I, [2011-02-02T22:07:12.259952 #30701] INFO -- : Refreshing Gem list
        /srv/ree/bin/unicorn_rails must be run inside RAILS_ROOT: #<Gem::LoadError: can't activate rack (~> 1.1.0, runtime) for ["actionpack-2.3.10", "rails-2.3.10"], already activated rack-1.2.1 for ["unicorn-3.3.1"]>
        I, [2011-02-02T22:09:27.787177 #30772] INFO -- : listening on addr=0.0.0.0:8080 fd=3
        I, [2011-02-02T22:09:27.787691 #30772] INFO -- : Refreshing Gem list
        /srv/ree/bin/unicorn_rails must be run inside RAILS_ROOT: #<Gem::LoadError: can't activate rack (~> 1.1.0, runtime) for ["actionpack-2.3.10", "rails-2.3.10"], already activated rack-1.2.1 for ["unicorn-3.3.1"]>
        I, [2011-02-02T22:10:44.175407 #30846] INFO -- : listening on addr=0.0.0.0:8080 fd=3
        I, [2011-02-02T22:10:44.175928 #30846] INFO -- : Refreshing Gem list
        /srv/ree/bin/unicorn_rails must be run inside RAILS_ROOT: #<Gem::LoadError: can't activate rack (~> 1.1.0, runtime) for ["actionpack-2.3.10", "rails-2.3.10"], already activated rack-1.2.1 for ["unicorn-3.3.1"]>
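    The log shows RubyGems activating unicorn's rack 1.2.1 before Rails asks for rack ~> 1.1.0. One hedged way out, assuming Bundler can be introduced to this Rails 2.3 app (an assumption; the question doesn't mention Bundler), is to let a single resolver pick a rack both sides accept:

        # Gemfile (sketch): pin rack in the range Rails 2.3.10 wants.
        source 'https://rubygems.org'
        gem 'rails', '2.3.10'
        gem 'rack', '~> 1.1.0'
        gem 'unicorn'

    The server would then be started through Bundler (bundle exec unicorn_rails) so the pinned rack is the one activated. Simply removing the newer rack (gem uninstall rack -v 1.2.1) is the blunter alternative, and may be what "uninstall the older versions" should actually be in reverse.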

    Read the article

  • Add files/folders/Apps to a new User Account

    - by odeho19
    I have created a new user acct. for my roommate. I/we don't really care much for IE, and I'd like to add Google Chrome and FF for her, to let her choose her own browser. Plus I'd like to add some other apps for her, and transfer some of her personal info that she's been storing on my administrative acct. (i.e. personal photos and such). So, like I said, the account is set up. But when I go to use Chrome, it won't open. It, along with other apps, is listed in the list of "approved" applications under the parental control list (Application Restrictions List). But they either don't show up when searched for (in her acct.) or won't open and operate. One other example is FastStone Capture. And then, I've got one application, Yahoo, "Unchecked", and yet there it is in her account, AND IT WORKS! I do not get it. The purpose of setting up an account for her was only to restrict her from deleting or uninstalling anything on the computer. I don't care (obviously within reason) what she installs, but NO TAKING AWAY! lol Here is a screen shot of the restrictions list: [screenshot not included]. So, here I sit, wondering what I did wrong, or forgot to do, in order for these to work. I don't care if Yahoo works or not. But her photos being transferred, Chrome/FF working, and FastStone Capture working are a priority for me. If anyone has any ideas, I would appreciate any advice. Thanks! OH Edit: Also, my account is an Administrative one. The account I am creating for her is just a Standard User acct.

    Read the article

  • FTP issues with Windows 2003 box and Filezilla

    - by vanhornRF
    We've set up a Windows 2003 server with IIS 6 and the FTP is Filezilla. I'm not a sys admin by any means and neither is my other developer. I'm trying to connect to FTP on a Mac with a Cyberduck FTP client. I'm running OSX 10.6.3. The problem is that I can connect to the FTP address, but when I do it will hang for a minute or two trying to list the directories. This address is pointed to the webroot folder so it's only trying to list the pertinent folders for the site we're working on. Eventually after a minute or two it will go through and list everything and then if you try and open a file or another folder, it will hang again and then eventually list the folders. My question: Is there some stupid default setting we're missing? Is this a common occurrence? Like I said I'm not a sys admin at all and I'm sure I'm missing some valuable questions for anyone that reads this, so please fire away if you can help and you need more info, I'll do my best to provide it. Thanks!

    Read the article

  • gitolite on mac doesn't add new user to authorized_keys

    - by crashbus
    I installed gitolite and everything works fine for me as admin. But when I'd like to add a new user, the new user can't connect to the server. After I looked into the file authorized_keys I saw that the new user wasn't added to the file. During the commit of the new public key I get some warnings:

        WARNING: split conf not set, gl-conf present for 'gitolite-admin'
        Counting objects: 6, done.
        Delta compression using up to 8 threads.
        Compressing objects: 100% (4/4), done.
        Writing objects: 100% (4/4), 882 bytes, done.
        Total 4 (delta 1), reused 0 (delta 0)
        remote: WARNING: split conf not set, gl-conf present for 'gitolite-admin'
        remote: WARNING: ?? @staff christianwaldmann markwelch
        remote: sh: find: command not found
        remote: sh: find: command not found
        remote: sh: sort: command not found
        remote: sh: find: command not found
        remote: /usr/local/bin/triggers/post-compile/update-gitweb-access-list: line 26: cut: command not found
        remote: /usr/local/bin/triggers/post-compile/update-gitweb-access-list: line 23: grep: command not found
        remote: /usr/local/bin/triggers/post-compile/update-gitweb-access-list: line 26: sort: command not found
        remote: /usr/local/bin/triggers/post-compile/update-gitweb-access-list: line 26: sed: command not found
        remote: sh: find: command not found
        remote: sh: find: command not found

    How can I fix it so that gitolite auto-adds the new user to authorized_keys?
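    The "command not found" lines suggest that the PATH seen by gitolite's hooks on the server is missing the standard directories where find, sort, grep, and sed live. A sketch of a fix, assuming the git user's shell init file is what the non-interactive SSH session actually reads on this Mac (that depends on the shell and sshd setup):

        # In the git/gitolite user's shell init file on the server,
        # make the standard tool directories visible to the hooks:
        export PATH=/usr/bin:/bin:/usr/sbin:/sbin:$PATH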

    Read the article

  • Lookup Multiple Results for Multiple Criteria

    - by Matt
    I've got a list of parent SKUs for items I need to create in my inventory system. This list has been finely pared down to the 165 products we would like to carry. However, each one of these 165 SKUs has between 2 and 8 child SKUs of different colors, sizes, etc. Those are stored on a different worksheet, mixed into around 2500 items. Those are the SKUs I need to input into my inventory system. Here is what it looks like. Sheet 1 is just SKUs (column A):

        A
        1
        2
        3
        4

    Sheet 2 is comprised of all the child SKUs, with parent SKUs in column B. Not all parents have the same number of children:

        A        B
        1BLKM    1
        1BLKL    1
        1BLUM    1
        2BLKM    2
        2BLKL    2
        2BLUM    2
        2ORAM    2
        3BLKM    3
        3BLUM    3

    I want to look up all of the child SKUs for the parent SKU list that has been fine-tuned. Parent SKU is included as a column on the child SKU worksheet. I need to look up all matches of the parent SKU, then continue to move down the parent SKU list until all matches for all 165 parent items have been found. It seems like every function I try can't use an array for input. Is there a way to do this with LOOKUP or some combination of INDEX, MATCH, ROW, etc.? Any way at all to do it without VBA? Or maybe even a VBA solution with code that I can understand, as someone who hasn't used VBA before.
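    One pattern that fits the "INDEX, MATCH, ROW" hint is an array-entered multi-result lookup. A sketch, assuming the pared-down parent SKUs sit in Sheet1 column A starting at A2 and the child/parent pairs occupy Sheet2!A2:B2500 (both ranges are assumptions):

        =IFERROR(INDEX(Sheet2!$A$2:$A$2500,
            SMALL(IF(Sheet2!$B$2:$B$2500=$A2, ROW(Sheet2!$B$2:$B$2500)-ROW(Sheet2!$B$2)+1),
                  COLUMN()-1)),
            "")

    Entered with Ctrl+Shift+Enter in B2 on Sheet1 and filled right and down, each row then lists that parent's children across the columns; IFERROR blanks the cells once a parent runs out of matches.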

    Read the article

  • VNC Address Book - Window Invisible? Working as intended?

    - by user1445967
    I have VNC Server and VNC Viewer on the same PC with Windows 7 Ultimate x64. Is VNC Address Book supposed to have a main application window? I can't tell if this is bugged or working as intended. When I start VNC Address book, it creates an item in the task list and also an icon in the notification area. If I click the task list item once, nothing happens. If I click again, it disappears from the task list but remains in the notification area**. If I right-click the icon on the notification area, there is an option 'open address book', which if chosen, creates the item in the task list again, with the exact same behavior. If I left or right click on the address book icon, I have ways to connect to servers in the address book. However I have no way to add/remove/edit servers in the address book. **Note that this behavior is identical to that of an application with a main window where there is an option to 'minimize to system tray / notification area'. Only here, no main window is visible, ever. If fixing this problem is beyond the scope of this site, that's fine, but at this point I have no real way to tell if the program is malfunctioning on my pc or if it is far more minimalist than I expected. There appears to be no forum on realvnc.com where I can ask about this, perhaps this is to prevent me from getting any help unless I pay to register?

    Read the article

  • DHCPD (Slackware) - Disabling auto-generation of gateway as DNS server

    - by Dogbert
    Good day, I am using a Linux workstation on Slackware 13.37. One "problem" I have had to deal with ever since 11.0 is the following:

    - DNS servers are queried and determined at startup by the DHCP daemon (DHCPD).
    - This is invoked at startup by a script located at /etc/rc.d/rc.dhcpd.
    - My DNS servers for my ISP are resolved correctly, and are stored in a list located at /etc/resolv.conf.
    - However, the one annoying problem is that my gateway IP (i.e. 192.168.1.1) is always automatically put at the top of the list in resolv.conf, meaning I always have to wait for a timeout before a valid DNS server is used to resolve an address (i.e. the query to 192.168.1.1 times out because it is not actually a DNS server, and then DHCP uses the next server in the list).

    I could lower my DNS resolution timeout so the gateway query times out quicker, but that's not what I want, as I don't want to degrade the abilities of legitimate DNS servers. What I would like to do is change how DHCPD operates so that it does NOT put my gateway IP address at the beginning of this list. I've searched via "man dhcpd", etc., and haven't found the exact answer yet. Any help on this issue is appreciated. Thank you all in advance for your time and assistance.
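    For what it's worth, the resolv.conf rewriting is done by the DHCP client rather than by dhcpd itself, and both common clients can be told to leave the nameserver list alone. A sketch (which of the two the boot scripts actually invoke on a given Slackware box is an assumption to verify; the addresses are placeholders):

        # If the client is dhcpcd, /etc/dhcpcd.conf can stop it from
        # touching resolv.conf entirely:
        nohook resolv.conf

        # If the client is dhclient, /etc/dhclient.conf can pin the servers
        # instead of accepting the ones the router advertises:
        # supersede domain-name-servers 203.0.113.1, 203.0.113.2;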

    Read the article

  • TFS 2010 Subfolder Permissions

    - by gmcalab
    I am a TFS admin. I have a TFS project in which a subfolder needs specific permissions to deny some users. So, I right-click on the folder in question, hit Properties, and click the Security tab. There I select the Windows User or Group radio button, then click Add. I put in the AD user that I want specific permissions for and hit Check Names. That resolves, so I click OK. Next, I select the permissions to Allow or Deny below in the Permissions list, and I hit OK. The permissions are honored by TFS; this user no longer has PendChange permission, as I was expecting. The odd thing is, I was expecting to be able to go back into the Security tab and see that user in the list of Users and Groups and see the current state. But the list is always empty. Not sure why, but the permissions are definitely being honored; I can re-add the user with different permissions and those are also honored. Any ideas why the current users are not showing up in the Users and Groups list under the Security tab for a folder's properties? I also used tf permission $\... to see if there were any permissions, but it always returns: There are no permissions set for this item (Inherit: Yes)
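    As a cross-check from the command line, the same ACL can be read and written with tf.exe; a sketch with a placeholder server path and account (the question's real path is elided):

        rem Show effective permissions on the folder:
        tf permission $/Project/SomeFolder /recursive

        rem Deny pending-change operations for one user:
        tf permission $/Project/SomeFolder /deny:PendChange /user:DOMAIN\jdoe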

    Read the article

  • How to show what will be updated by the next pull?

    - by ???
    In SVN, doing svn update will show a list of full paths with a status prefix:

        $ svn update
        M    foo/bar
        U    another/bar
        Revision 123

    I need this update list to do some post-processing work. After transferring the SVN repository to Git, I can't find a way to get the update list:

        $ git pull
        Updating 9607ca4..61584c3
        Fast-forward
         .gitignore                                         |    1 +
         uni/.gitignore                                     |    1 +
         uni/package/cooldeb/.gitignore                     |    1 +
         uni/package/cooldeb/Makefile.am                    |    2 +-
         uni/package/cooldeb/VERSION.av                     |   10 +-
         uni/package/cooldeb/cideb                          |   10 +-
         uni/package/cooldeb/cooldeb.sh                     |    2 +-
         uni/package/cooldeb/newdeb                         |   53 +++-
         ...update-and-deb-redist => update-and-deb-redist} |    5 +-
         uni/utils/2tree/{list2tree => 2tree}               |   12 +-
         uni/utils/2tree/ChangeLog                          |    4 +-
         uni/utils/2tree/Makefile.am                        |    2 +-

    I can translate Git's pull status list to SVN's format:

        M .gitignore
        M uni/.gitignore
        M uni/package/cooldeb/.gitignore
        M uni/package/cooldeb/Makefile.am
        M uni/package/cooldeb/VERSION.av
        M uni/package/cooldeb/cideb
        M uni/package/cooldeb/cooldeb.sh
        M uni/package/cooldeb/newdeb
        M ...update-and-deb-redist => update-and-deb-redist}
        M uni/utils/2tree/{list2tree => 2tree}
        M uni/utils/2tree/ChangeLog
        M uni/utils/2tree/Makefile.am

    However, some entries with long path names are abbreviated; for example, uni/package/cooldeb/update-and-deb-redist is abbreviated to ...update-and-deb-redist. I assume I can do this with Git directly; maybe I can configure git pull's output format. Any idea?
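    A sketch of one way to get un-abbreviated, svn-style status lines is to split the pull into a fetch and a diff against the incoming branch (origin/master here is an assumption; substitute the branch actually tracked):

        # List what the merge would touch, one "STATUS<TAB>full/path" line each:
        git fetch origin
        git diff --name-status HEAD...origin/master

    git diff --name-status never shortens paths, so its output can feed the same post-processing the svn update list did; after inspecting it, git merge origin/master completes the pull.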

    Read the article

  • cisco asa + action drop issue

    - by ghp
    Have created a tunnel between the 10.x.y.z network and 122.a.b.c... the tunnel is up and active, but when I try the packet-tracer command I get ACTION: drop in the output. I have also enabled same-security-traffic permit intra-interface. Can someone help me understand what this drop means?

        Result:
        input-interface: inside
        input-status: up
        input-line-status: up
        output-interface: outside
        output-status: up
        output-line-status: up
        Action: drop
        Drop-reason: (acl-drop) Flow is denied by configured rule

    Packet tracer output @Shane Madden: please find below the packet tracer output.

        CASA5K-A#
        CASA5K-A# config t
        CASA5K-A(config)# packet-tracer input inside tcp 10.x.y.112 0 122.a.b.c 0

        Phase: 1
        Type: ROUTE-LOOKUP
        Subtype: input
        Result: ALLOW
        Config:
        Additional Information:
        in 0.0.0.0 0.0.0.0 outside

        Phase: 2
        Type: ACCESS-LIST
        Subtype:
        Result: DROP
        Config: Implicit Rule
        Additional Information:

        Result:
        input-interface: inside
        input-status: up
        input-line-status: up
        output-interface: outside
        output-status: up
        output-line-status: up
        Action: drop
        Drop-reason: (acl-drop) Flow is denied by configured rule

        CASA5K-A(config)#

    The access-groups are as follows:

        access-group acl-inbound in interface outside
        access-group acl-outbound in interface inside

    and the access-lists are:

        access-list acl-inbound extended permit tcp any any gt 1023
        access-list acl-outbound extended permit ip object-group net-Source object net-dest
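    Worth noting as a test sketch: the trace above uses TCP port 0 for both source and destination, which no ordinary ACL entry (such as the gt 1023 rule) will match, so the verdict may change with realistic ports. The ports here are illustrative assumptions:

        packet-tracer input inside tcp 10.x.y.112 1025 122.a.b.c 80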

    Read the article

  • JBoss Seam: In ScopeType.PAGE I get: java.lang.IllegalStateException: No conversation context active

    - by Markos Fragkakis
    Hi all, I have a page-scoped component, which has an instance variable List with data, which I display in a datatable. This datatable has pagination, sorting and filtering. The first time gate into the page, I get this appended in my URL: ?conversationId=97. The page works correctly, and when I change datatable pages no now component is created. After a minute or two, and at seamingly random time, I get an exception saying that there is no context. I have not used @Create in my code or my navigation files. So, I have two questions: Why do I get this suffix in my URL? Why did a conversation start? Why the exception? The component is scoped to PAGE. If I received an exception, it should not be related to a conversation. Right? Or is the conversation the exception is referring a temporary conversation? Cheers! UPDATE: This is the page: <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html xmlns="http://www.w3.org/1999/xhtml" xmlns:ui="http://java.sun.com/jsf/facelets" xmlns:h="http://java.sun.com/jsf/html" xmlns:f="http://java.sun.com/jsf/core" xmlns:a4j="http://richfaces.org/a4j" xmlns:rich="http://richfaces.org/rich"> <body> <ui:composition template="/WEB-INF/facelets/templates/template.xhtml"> <ui:define name="content"> <!-- This method returns focus on the filter --> <script type="text/javascript"> function submitByEnter(event){ if (event.keyCode == 13) { if (event.preventDefault) { // Firefox event.preventDefault(); } else { // IE event.returnValue = false; } document.getElementById("refreshButton").click(); } } </script> <h:form prependId="false"> <h:commandButton action="Back" value="Back to home page" /> <br /> <p><h:outputText value="Applicants and Products (experimentation page)" class="page_title" /></p> <h:commandButton action="#{applicantProductListBean.showCreateApplicant}" value="Create Applicant" id="createApplicantButton"> </h:commandButton> <a4j:commandButton value="Refresh" id="refreshButton" action="#{applicantProductListBean.refreshData}" image="/images/icons/refresh48x48.gif" reRender="compositeTable, compositeScroller"> <!-- <f:setPropertyActionListener--> <!-- target="# {pageScrollerBean.applicantProductListPage}" value="1" />--> </a4j:commandButton> <rich:toolTip for="createApplicantButton" value="Create Applicant" /> <rich:dataTable styleClass="composite2DataTable" id="compositeTable" rows="1" columnClasses="col" value="#{applicantProductListBean.dataModel}" var="pageAppList"> <f:facet name="header"> <rich:columnGroup> <rich:column colspan="3"> <h:outputText styleClass="headerText" value="Applicants" /> </rich:column> <rich:column colspan="3"> <h:outputText styleClass="headerText" value="Products" /> </rich:column> <rich:column breakBefore="true"> <h:outputText styleClass="headerText" value="Applicant Name" /> <a4j:commandButton id="sortingApplicantNameButton" action="#{applicantProductListBean.toggleSorting('applicantName')}" image="/images/icons/sorting/#{sortingFilteringBean.applicantProductListSorting.sortingValues['applicantName']}.gif" reRender="sortingApplicantNameButton, sortingApplicantEmailButton, compositeTable, compositeScroller"> <!-- <f:setPropertyActionListener--> <!-- target="#{pageScrollerBean.applicantProductListPage}" value="1" />--> </a4j:commandButton> <br /> <h:inputText value="#{sortingFilteringBean.applicantProductListFiltering.filteringValues['applicantName']}" id="applicantNameFilterValue" onkeypress="return submitByEnter(event)"> </h:inputText> </rich:column> 
<rich:column> <h:outputText styleClass="headerText" value="Applicant Email" /> <a4j:commandButton id="sortingApplicantEmailButton" action="#{applicantProductListBean.toggleSorting('applicantEmail')}" image="/images/icons/sorting/#{sortingFilteringBean.applicantProductListSorting.sortingValues['applicantEmail']}.gif" reRender="sortingApplicantNameButton, sortingApplicantEmailButton, compositeTable, compositeScroller"> <!-- <f:setPropertyActionListener--> <!-- target="#{pageScrollerBean.applicantProductListPage}" value="1" />--> </a4j:commandButton> <br /> <h:inputText value="#{sortingFilteringBean.applicantProductListFiltering.filteringValues['applicantEmail']}" id="applicantEmailFilterValue" onkeypress="return submitByEnter(event)"> </h:inputText> </rich:column> <rich:column> <h:outputText styleClass="headerText" value="Applicant Actions" /> </rich:column> <rich:column> <h:outputText styleClass="headerText" value="Product Name" /> <a4j:commandButton id="sortingProductNameButton" action="#{applicantProductListBean.toggleSorting('productName')}" immediate="true" image="/images/icons/sorting/#{sortingFilteringBean.applicantProductListSorting.sortingValues['productName']}.gif" reRender="sortingProductNameButton, compositeTable, compositeScroller"> </a4j:commandButton> <br /> <h:inputText value="#{sortingFilteringBean.applicantProductListFiltering.filteringValues['productName']}" id="productNameFilterValue" onkeypress="return submitByEnter(event)"> </h:inputText> </rich:column> <rich:column> <h:outputText styleClass="headerText" value="Product Email" /> <br /> <h:inputText value="#{sortingFilteringBean.applicantProductListFiltering.filteringValues['productEmail']}" id="productEmailFilterValue" onkeypress="return submitByEnter(event)"> </h:inputText> </rich:column> <rich:column> <h:outputText styleClass="headerText" value="Product Actions" /> </rich:column> </rich:columnGroup> </f:facet> <rich:subTable rowClasses="odd_applicant_row, even_applicant_row" value="#{pageAppList}" var="app"> <rich:column styleClass=" internal_cell composite2TextContainingColumn" valign="top"> <h:outputText value="#{app.name}" /> </rich:column> <rich:column styleClass="internal_cell composite2TextContainingColumn" valign="top"> <h:outputText value="#{app.receiptEmail}" /> </rich:column> <rich:column valign="top" styleClass="buttonsColumn"> <h:commandButton action="#{applicantProductListBean.showUpdateApplicant(app)}" image="/images/icons/edit.jpg"> </h:commandButton> <!-- <rich:toolTip for="editApplicantButton" value="Edit Applicant" />--> <h:commandButton action="#{applicantProductListBean.showDeleteApplicant(app)}" image="/images/icons/delete.png"> </h:commandButton> <!-- <rich:toolTip for="deleteApplicantButton" value="Delete Applicant" />--> </rich:column> <rich:column colspan="3"> <table class="productsTableTable"> <tbody> <tr> <td class="createProductButtonTableCell"><h:commandButton action="#{applicantProductListBean.showCreateProduct(app)}" value="Create Product"> </h:commandButton> <!-- <rich:toolTip for="createProductButton" value="Create Product" />--> </td> </tr> <tr> <td><rich:dataTable value="#{app.products}" var="prod" rowClasses="odd_product_row, even_product_row"> <rich:column styleClass="internal_cell composite2TextContainingColumn"> <h:outputText value="#{prod.inventedName}" /> </rich:column> <rich:column styleClass="internal_cell composite2TextContainingColumn"> <h:outputText value="#{prod.receiptEmail}" /> </rich:column> <rich:column styleClass="buttonsColumn"> <h:commandButton 
action="#{applicantProductListBean.showUpdateProduct(prod)}" image="/images/icons/edit.jpg"> </h:commandButton> <!-- <rich:toolTip for="editProductButton" value="Edit Product" />--> <h:commandButton action="#{applicantProductListBean.showDeleteProduct(prod)}" image="/images/icons/delete.png"> <f:setPropertyActionListener target="#{productBean.product}" value="#{prod}" /> </h:commandButton> <!-- <rich:toolTip for="deleteProductButton" value="Delete Product" />--> </rich:column> </rich:dataTable></td> </tr> </tbody> </table> </rich:column> </rich:subTable> <f:facet name="footer"> <h:panelGrid columns="1" styleClass="applicantProductListFooter"> <h:outputText value="#{msgs.no_results}" rendered="#{(empty applicantProductListBean.dataModel) || (applicantProductListBean.dataModel.rowCount==0)}"/> <rich:datascroller align="center" for="compositeTable" page="#{pageScrollerBean.applicantProductListPage}" id="compositeScroller" reRender="compositeTable" renderIfSinglePage="false" fastControls="hide"> <f:facet name="first"> <h:outputText value="#{msgs.first}" styleClass="scrollerCell" /> </f:facet> <f:facet name="first_disabled"> <h:outputText value="#{msgs.first}" styleClass="scrollerCell" /> </f:facet> <f:facet name="last"> <h:outputText value="#{msgs.last}" styleClass="scrollerCell" /> </f:facet> <f:facet name="last_disabled"> <h:outputText value="#{msgs.last}" styleClass="scrollerCell" /> </f:facet> <f:facet name="next"> <h:outputText value="#{msgs.next}" styleClass="scrollerCell" /> </f:facet> <f:facet name="next_disabled"> <h:outputText value="#{msgs.next}" styleClass="scrollerCell" /> </f:facet> <f:facet name="previous"> <h:outputText value="#{msgs.previous}" styleClass="scrollerCell" /> </f:facet> <f:facet name="previous_disabled"> <h:outputText value="#{msgs.previous}" styleClass="scrollerCell" /> </f:facet> </rich:datascroller> </h:panelGrid> </f:facet> </rich:dataTable> </h:form> </ui:define> This is the backing bean: @Name("applicantProductListBean") @Scope(ScopeType.PAGE) public class ApplicantProductListBean extends BasePagedSortableFilterableListBean { /** * Public field for ad-hoc injection to work. 
*/ @EJB(name = "FacadeService") public ApplicantFacadeService applicantFacadeService; @Logger private static Log logger; private final int pageSize = 10; @Out(scope = ScopeType.CONVERSATION, required = false) Applicant currentApplicant; @Out(scope = ScopeType.CONVERSATION, required = false) Product product; @Create public void onCreate() { System.out.println("Create"); } @Override protected DataModel initDataModel(int pageSize) { // get filtering and sorting from session sorting = getSorting(); filtering = getFiltering(); // System.out.println("Initializing a Composite3DataModel"); // System.out.println("Pagesize: " + pageSize); // System.out.println("Filtering: " + filtering.getFilteringValues()); // System.out.println("Sorting: " + sorting.getSortingValues()); return new Composite3DataModel(1, sorting, filtering); } // Navigation methods /** * Navigation-returning method, returns the action to follow after pressing * the "Create Applicant" button * * @return the action to be taken */ public Navigation.ApplicantProductList showCreateApplicant() { return Navigation.ApplicantProductList.SHOW_CREATE_APPLICANT; } /** * Navigation-returning method, returns the action to follow after pressing * the "Edit Applicant" button * * @return the action to be taken */ public Navigation.ApplicantProductList showUpdateApplicant( Applicant applicant) { this.currentApplicant = applicant; return Navigation.ApplicantProductList.SHOW_UPDATE_APPLICANT; } /** * Navigation-returning method, returns the action to follow after pressing * the "Delete Applicant" button * * @return the action to be taken */ public Navigation.ApplicantProductList showDeleteApplicant( Applicant applicant) { this.currentApplicant = applicant; return Navigation.ApplicantProductList.SHOW_DELETE_APPLICANT; } /** * Navigation-returning method, returns the action to follow after pressing * the "Create Product" button * * @return the action to be taken */ public Navigation.ApplicantProductList showCreateProduct(Applicant app) { this.product = new Product(); this.product.setApplicant(app); return Navigation.ApplicantProductList.SHOW_CREATE_PRODUCT; } /** * Navigation-returning method, returns the action to follow after pressing * the "Edit Product" button * * @return the action to be taken */ public Navigation.ApplicantProductList showUpdateProduct(Product prod) { this.product = prod; return Navigation.ApplicantProductList.SHOW_UPDATE_PRODUCT; } /** * Navigation-returning method, returns the action to follow after pressing * the "Delete Product" button * * @return the action to be taken */ public Navigation.ApplicantProductList showDeleteProduct(Product prod) { this.product = prod; return Navigation.ApplicantProductList.SHOW_DELETE_PRODUCT; } /** * */ @Override public Sorting getSorting() { if (sorting == null) { return (getSortingFilteringBeanFromSession() .getApplicantProductListSorting()); } return sorting; } /** * */ @Override public void setSorting(Sorting sorting) { getSortingFilteringBeanFromSession().setApplicantProductListSorting( sorting); } /** * */ @Override public Filtering getFiltering() { if (filtering == null) { return (getSortingFilteringBeanFromSession() .getApplicantProductListFiltering()); } return filtering; } /** * */ @Override public void setFiltering(Filtering filtering) { getSortingFilteringBeanFromSession().setApplicantProductListFiltering( filtering); } /** * @return the currentApplicant */ public Applicant getCurrentApplicant() { return currentApplicant; } /** * @param currentApplicant * the currentApplicant to set 
*/ public void setCurrentApplicant(Applicant applicant) { this.currentApplicant = applicant; } /** * The model for this page * */ private class Composite3DataModel extends PagedSortableFilterableDataModel<List<Applicant>> { public Composite3DataModel(int pageSize, Sorting sorting, Filtering filtering) { super(pageSize, sorting, filtering); } @Override protected DataPage<List<Applicant>> fetchPage(int fakeStartRow, int fakePageSize) { // if (logger.isTraceEnabled()) { System.out.println("Getting page with fakeStartRow: " + fakeStartRow + " and fakePageSize " + fakePageSize); // } // to find the page size multiply the startRow and the fakePageSize // (which is 1) to the actual page size int startRow = fakeStartRow * ApplicantProductListBean.this.pageSize; int pageSize = fakePageSize * ApplicantProductListBean.this.pageSize; // if (logger.isTraceEnabled()) { System.out.println("Getting page with startRow: " + startRow + " and pageSize " + pageSize); // } List<Applicant> pageApplicants = applicantFacadeService .findPagedWithCriteria(startRow, pageSize, filtering, sorting); // List<Applicant> pageApplicants = applicantFacadeService // .findPagedWithDynamicQuery(startRow, pageSize, filtering, // sorting, true); // if (logger.isTraceEnabled()) { System.out.println("Set of applicants: " + pageApplicants.size()); // } List<List<Applicant>> pageApplicantsListContainer = new ArrayList<List<Applicant>>(); pageApplicantsListContainer.add(pageApplicants); DataPage<List<Applicant>> dataPage = new DataPage<List<Applicant>>( this.getRowCount(), fakeStartRow, pageApplicantsListContainer); return dataPage; } @Override protected int getDatasetSize() { // int size = getServiceFacade().countWithCriteria(filtering, // sorting); // int size = // applicantFacadeService.countWithDynamicQuery(filtering, sorting, // false); int size = (int) Math.ceil((double) applicantFacadeService .countWithCriteria(filtering, sorting, false) / pageSize); if (logger.isTraceEnabled()) { logger.trace("Got Dataset Size: " + size); } return size; } } /** * @return the product */ public Product getProduct() { return product; } /** * @param product * the product to set */ public void setProduct(Product product) { this.product = product; } }

    Read the article

  • How to edit an item in a listbox populated from a .csv file?

    - by Shuvo
    I am working on a project where my application can open a .csv file and read data from it. The .csv file contains the latitude and longitude of places. The application reads data from the file, shows it on a static map, and displays icons at the right places. The application can open multiple files at a time, and each opens in a new tab. But I am having trouble in a couple of cases:

    1. When I try to add a new point to the opened .csv file, I am able to write a new point to the same file, but instead of appending the new point's data to the existing data, it replaces the others and writes the new point only.
    2. I cannot use the SelectedIndexChanged event to perform an edit on the listbox and then save the file.

    Any direction would be great.

    using System; using System.Collections.Generic; using System.ComponentModel; using System.Data; using System.Drawing; using System.Linq; using System.Text; using System.Windows.Forms; using System.IO; namespace CourseworkExample { public partial class Form1 : Form { public GPSDataPoint gpsdp; List<GPSDataPoint> data; List<PictureBox> pictures; List<TabPage> tabs; public static int pn = 0; private TabPage currentComponent; private Bitmap bmp1; string[] symbols = { "hospital", "university" }; Image[] symbolImages; ListBox lb = new ListBox(); string name = ""; string path = ""; public Form1() { InitializeComponent(); data = new List<GPSDataPoint>(); pictures = new List<PictureBox>(); tabs = new List<TabPage>(); symbolImages = new Image[symbols.Length]; for (int i = 0; i < 2; i++) { string location = "data/" + symbols[i] + ".png"; symbolImages[i] = Image.FromFile(location); } } private void openToolStripMenuItem_Click(object sender, EventArgs e) { FileDialog ofd = new OpenFileDialog(); string filter = "CSV File (*.csv)|*.csv"; ofd.Filter = filter; DialogResult dr = ofd.ShowDialog(); if (dr.Equals(DialogResult.OK)) { int i = ofd.FileName.LastIndexOf("\\"); name = ofd.FileName; path = ofd.FileName; if (i > 0) { name = ofd.FileName.Substring(i + 1); path = ofd.FileName.Substring(0, i + 1); } TextReader input = new StreamReader(ofd.FileName); string mapName = input.ReadLine(); GPSDataPoint gpsD = new GPSDataPoint(); gpsD.setBounds(input.ReadLine()); string s; while ((s = input.ReadLine()) != null) { gpsD.addWaypoint(s); } input.Close(); TabPage tabPage = new TabPage(); tabPage.Location = new System.Drawing.Point(4, 22); tabPage.Name = "tabPage" + pn; lb.Width = 300; int selectedindex = lb.SelectedIndex; lb.Items.Add(mapName); lb.Items.Add("Bounds"); lb.Items.Add(gpsD.Bounds[0] + " " + gpsD.Bounds[1] + " " + gpsD.Bounds[2] + " " + gpsD.Bounds[3]); lb.Items.Add("Waypoint"); foreach (WayPoint wp in gpsD.DataList) { lb.Items.Add(wp.Name + " " + wp.Latitude + " " + wp.Longitude + " " + wp.Ele + " " + wp.Sym); } tabPage.Controls.Add(lb); pn++; tabPage.Padding = new System.Windows.Forms.Padding(3); tabPage.Size = new System.Drawing.Size(192, 74); tabPage.TabIndex = 0; tabPage.Text = name; tabPage.UseVisualStyleBackColor = true; tabs.Add(tabPage); tabControl1.Controls.Add(tabPage); tabPage = new TabPage(); tabPage.Location = new System.Drawing.Point(4, 22); tabPage.Name = "tabPage" + pn; pn++; tabPage.Padding = new System.Windows.Forms.Padding(3); tabPage.Size = new System.Drawing.Size(192, 74); tabPage.TabIndex = 0; tabPage.Text = mapName; string location = path + mapName; tabPage.UseVisualStyleBackColor = true; tabs.Add(tabPage); PictureBox pb = new PictureBox(); pb.Name = "pictureBox" + pn; pb.Image = Image.FromFile(location); tabControl2.Controls.Add(tabPage); pb.Width = pb.Image.Width;
pb.Height = pb.Image.Height; tabPage.Controls.Add(pb); currentComponent = tabPage; tabPage.Width = pb.Width; tabPage.Height = pb.Height; pn++; tabControl2.Width = pb.Width; tabControl2.Height = pb.Height; bmp1 = (Bitmap)pb.Image; int lx, ly; float realWidth = gpsD.Bounds[1] - gpsD.Bounds[3]; float imageW = pb.Image.Width; float dx = imageW * (gpsD.Bounds[1] - gpsD.getWayPoint(0).Longitude) / realWidth; float realHeight = gpsD.Bounds[0] - gpsD.Bounds[2]; float imageH = pb.Image.Height; float dy = imageH * (gpsD.Bounds[0] - gpsD.getWayPoint(0).Latitude) / realHeight; lx = (int)dx; ly = (int)dy; using (Graphics g = Graphics.FromImage(bmp1)) { Rectangle rect = new Rectangle(lx, ly, 20, 20); if (gpsD.getWayPoint(0).Sym.Equals("")) { g.DrawRectangle(new Pen(Color.Red), rect); } else { if (gpsD.getWayPoint(0).Sym.Equals("hospital")) { g.DrawImage(symbolImages[0], rect); } else { if (gpsD.getWayPoint(0).Sym.Equals("university")) { g.DrawImage(symbolImages[1], rect); } } } } pb.Image = bmp1; pb.Invalidate(); } } private void openToolStripMenuItem_Click_1(object sender, EventArgs e) { FileDialog ofd = new OpenFileDialog(); string filter = "CSV File (*.csv)|*.csv"; ofd.Filter = filter; DialogResult dr = ofd.ShowDialog(); if (dr.Equals(DialogResult.OK)) { int i = ofd.FileName.LastIndexOf("\\"); name = ofd.FileName; path = ofd.FileName; if (i > 0) { name = ofd.FileName.Substring(i + 1); path = ofd.FileName.Substring(0, i + 1); } TextReader input = new StreamReader(ofd.FileName); string mapName = input.ReadLine(); GPSDataPoint gpsD = new GPSDataPoint(); gpsD.setBounds(input.ReadLine()); string s; while ((s = input.ReadLine()) != null) { gpsD.addWaypoint(s); } input.Close(); TabPage tabPage = new TabPage(); tabPage.Location = new System.Drawing.Point(4, 22); tabPage.Name = "tabPage" + pn; ListBox lb = new ListBox(); lb.Width = 300; lb.Items.Add(mapName); lb.Items.Add("Bounds"); lb.Items.Add(gpsD.Bounds[0] + " " + gpsD.Bounds[1] + " " + gpsD.Bounds[2] + " " + gpsD.Bounds[3]); lb.Items.Add("Waypoint"); foreach (WayPoint wp in gpsD.DataList) { lb.Items.Add(wp.Name + " " + wp.Latitude + " " + wp.Longitude + " " + wp.Ele + " " + wp.Sym); } tabPage.Controls.Add(lb); pn++; tabPage.Padding = new System.Windows.Forms.Padding(3); tabPage.Size = new System.Drawing.Size(192, 74); tabPage.TabIndex = 0; tabPage.Text = name; tabPage.UseVisualStyleBackColor = true; tabs.Add(tabPage); tabControl1.Controls.Add(tabPage); tabPage = new TabPage(); tabPage.Location = new System.Drawing.Point(4, 22); tabPage.Name = "tabPage" + pn; pn++; tabPage.Padding = new System.Windows.Forms.Padding(3); tabPage.Size = new System.Drawing.Size(192, 74); tabPage.TabIndex = 0; tabPage.Text = mapName; string location = path + mapName; tabPage.UseVisualStyleBackColor = true; tabs.Add(tabPage); PictureBox pb = new PictureBox(); pb.Name = "pictureBox" + pn; pb.Image = Image.FromFile(location); tabControl2.Controls.Add(tabPage); pb.Width = pb.Image.Width; pb.Height = pb.Image.Height; tabPage.Controls.Add(pb); currentComponent = tabPage; tabPage.Width = pb.Width; tabPage.Height = pb.Height; pn++; tabControl2.Width = pb.Width; tabControl2.Height = pb.Height; bmp1 = (Bitmap)pb.Image; int lx, ly; float realWidth = gpsD.Bounds[1] - gpsD.Bounds[3]; float imageW = pb.Image.Width; float dx = imageW * (gpsD.Bounds[1] - gpsD.getWayPoint(0).Longitude) / realWidth; float realHeight = gpsD.Bounds[0] - gpsD.Bounds[2]; float imageH = pb.Image.Height; float dy = imageH * (gpsD.Bounds[0] - gpsD.getWayPoint(0).Latitude) / realHeight; lx = (int)dx; ly = (int)dy; 
using (Graphics g = Graphics.FromImage(bmp1)) { Rectangle rect = new Rectangle(lx, ly, 20, 20); if (gpsD.getWayPoint(0).Sym.Equals("")) { g.DrawRectangle(new Pen(Color.Red), rect); } else { if (gpsD.getWayPoint(0).Sym.Equals("hospital")) { g.DrawImage(symbolImages[0], rect); } else { if (gpsD.getWayPoint(0).Sym.Equals("university")) { g.DrawImage(symbolImages[1], rect); } } } } pb.Image = bmp1; pb.Invalidate(); MessageBox.Show(data.ToString()); } } private void exitToolStripMenuItem_Click(object sender, EventArgs e) { this.Close(); } private void addBtn_Click(object sender, EventArgs e) { string wayName = nameTxtBox.Text; float wayLat = Convert.ToSingle(latTxtBox.Text); float wayLong = Convert.ToSingle(longTxtBox.Text); float wayEle = Convert.ToSingle(elevTxtBox.Text); WayPoint wp = new WayPoint(wayName, wayLat, wayLong, wayEle); GPSDataPoint gdp = new GPSDataPoint(); data = new List<GPSDataPoint>(); gdp.Add(wp); lb.Items.Add(wp.Name + " " + wp.Latitude + " " + wp.Longitude + " " + wp.Ele + " " + wp.Sym); lb.Refresh(); StreamWriter sr = new StreamWriter(name); sr.Write(lb); sr.Close(); DialogResult result = MessageBox.Show("Save in New File?","Save", MessageBoxButtons.YesNo); if (result == DialogResult.Yes) { SaveFileDialog saveDialog = new SaveFileDialog(); saveDialog.FileName = "default.csv"; DialogResult saveResult = saveDialog.ShowDialog(); if (saveResult == DialogResult.OK) { sr = new StreamWriter(saveDialog.FileName, true); sr.WriteLine(wayName + "," + wayLat + "," + wayLong + "," + wayEle); sr.Close(); } } else { // sr = new StreamWriter(name, true); // sr.WriteLine(wayName + "," + wayLat + "," + wayLong + "," + wayEle); sr.Close(); } MessageBox.Show(name + path); } } } GPSDataPoint.cs using System; using System.Collections.Generic; using System.Linq; using System.Text; using System.IO; namespace CourseworkExample { public class GPSDataPoint { private float[] bounds; private List<WayPoint> dataList; public GPSDataPoint() { dataList = new List<WayPoint>(); } internal void setBounds(string p) { string[] b = p.Split(','); bounds = new float[b.Length]; for (int i = 0; i < b.Length; i++) { bounds[i] = Convert.ToSingle(b[i]); } } public float[] Bounds { get { return bounds; } } internal void addWaypoint(string s) { WayPoint wp = new WayPoint(s); dataList.Add(wp); } public WayPoint getWayPoint(int i) { if (i < dataList.Count) { return dataList[i]; } else return null; } public List<WayPoint> DataList { get { return dataList; } } internal void Add(WayPoint wp) { dataList.Add(wp); } } } WayPoint.cs using System; using System.Collections.Generic; using System.Linq; using System.Text; namespace CourseworkExample { public class WayPoint { private string name; private float ele; private float latitude; private float longitude; private string sym; public WayPoint(string name, float latitude, float longitude, float elevation) { this.name = name; this.latitude = latitude; this.longitude = longitude; this.ele = elevation; } public WayPoint() { name = "no name"; ele = 3.5F; latitude = 3.5F; longitude = 0.0F; sym = "no symbol"; } public WayPoint(string s) { string[] bits = s.Split(','); name = bits[0]; longitude = Convert.ToSingle(bits[2]); latitude = Convert.ToSingle(bits[1]); if (bits.Length > 4) sym = bits[4]; else sym = ""; try { ele = Convert.ToSingle(bits[3]); } catch (Exception e) { ele = 0.0f; } } public float Longitude { get { return longitude; } set { longitude = value; } } public float Latitude { get { return latitude; } set { latitude = value; } } public float Ele { get { return ele; } set 
{ ele = value; } } public string Name { get { return name; } set { name = value; } } public string Sym { get { return sym; } set { sym = value; } } } } .csv file data birthplace.png 51.483788,-0.351906,51.460745,-0.302982 Born Here,51.473805,-0.32532,-,hospital Danced here,51,483805,-0.32532,-,hospital
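    Regarding case 1, the overwrite is consistent with how the save is coded: new StreamWriter(name) truncates the target file. A minimal sketch of an append instead (csvPath and the field order are taken from the question's CSV layout; treat them as assumptions):

        // Append one waypoint as a new CSV line instead of rewriting the file.
        using (var sw = new System.IO.StreamWriter(csvPath, true)) // true = append mode
        {
            sw.WriteLine(string.Format("{0},{1},{2},{3}", wayName, wayLat, wayLong, wayEle));
        }

    For case 2, a ListBox item can be edited in place before re-saving, e.g. int i = lb.SelectedIndex; if (i >= 0) lb.Items[i] = editedText; followed by rewriting the whole file from lb.Items.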

    Read the article

  • Why does my application ask for a codec to play MVI (.MOV) video files while I can play them in WMP and QuickTime?

    - by Daniel Lip
    I have an application I wrote some time ago. Loading the video file is OK, but when I try to play/use the file I get a MessageBox saying that a codec is needed, and to use GSpot or search the internet. When I play these files from my hard disk with Windows Media Player or QuickTime there are no problems. The video files are named, for example, MVI_2483; in the file name properties I see the type: QuickTime Movie (.MOV). In my application I am using DirectShowLib-2005.dll; this is the class I use to extract from the video file (I use it in my application to extract only lightnings from the video). In Form1 I have a button click event that just starts the action:

    private void button8_Click(object sender, EventArgs e) { viewToolStripMenuItem.Enabled = false; fileToolStripMenuItem.Enabled = false; button2.Enabled = false; label14.Visible = false; label15.Visible = false; label21.Visible = false; label22.Visible = false; label24.Visible = false; label25.Visible = false; ExtractAutomatic = true; DirectoryInfo info = new DirectoryInfo(_videoFile); string dirName = info.Name; automaticModeDirectory = dirName + "_Automatic"; subDirectoryName = _outputDir + "\\" + automaticModeDirectory; if (secondPass == true) { Start(true); } Start(false); }

    This is the Start function in Form1:

    private void Start(bool secondpass) { setpicture(-1); if (Directory.Exists(_outputDir) && secondpass == false) { } else { Directory.CreateDirectory(_outputDir); } if (ExtractAutomatic == true) { string subDirectory_Automatic_Name = _outputDir + "\\" + automaticModeDirectory; Directory.CreateDirectory(subDirectory_Automatic_Name); f = new WmvAdapter(_videoFile, Path.Combine(subDirectory_Automatic_Name)); } else { string subDirectory_Manual_Name; if (Directory.Exists(subDirectoryName)) { subDirectory_Manual_Name = subDirectoryName; f = new WmvAdapter(_videoFile, Path.Combine(subDirectory_Manual_Name)); } else { subDirectory_Manual_Name = _outputDir + "\\" + averagesListTextFileDirectory + "_Manual"; Directory.CreateDirectory(subDirectory_Manual_Name); f = new WmvAdapter(_videoFile, Path.Combine(subDirectory_Manual_Name)); } } button1.Enabled = false; f.Secondpass = secondpass; f.FramesToSave = _fts; f.FrameCountAvailable += new WmvAdapter.FrameCountEventHandler(f_FrameCountAvailable); f.StatusChanged += new WmvAdapter.EventHandler(f_StatusChanged); f.ProgressChanged += new WmvAdapter.ProgressEventHandler(f_ProgressChanged); this.Text = "Processing Please Wait..."; label5.ForeColor = Color.Green; label5.Text = "Processing Please Wait"; button8.Enabled = false; button5.Enabled = false; label5.Visible = true; pictureBox1.Image = Lightnings_Extractor.Properties.Resources.Weather_Michmoret; Hrs = 0; //number of hours Min = 0; //number of Minutes Sec = 0; //number of Sec timeElapsed = 0; label10.Text = "00:00:00"; label11.Visible = false; label12.Visible = false; label9.Visible = false; label8.Visible = false; this.button1.Enabled = false; myTrackPanelss1.trackBar1.Enabled = false; this.checkBox2.Enabled = false; this.checkBox1.Enabled = false; numericUpDown1.Enabled = false; timer1.Start(); label2.Text = ""; label1.Visible = true; label2.Visible = true; label3.Visible = true; label4.Visible = true; f.Start(); }

    And this is the class (which is not my own class; I just changed it in some places) that is causing the problem: using System; using System.Diagnostics; using System.Drawing; using System.Drawing.Imaging; using System.IO; using System.Runtime.InteropServices; using DirectShowLib; using
System.Collections.Generic; using Extracting_Frames; using System.Windows.Forms; namespace Polkan.DataSource { internal class WmvAdapter : ISampleGrabberCB, IDisposable { #region Fields_Properties_and_Events bool dis = false; int count = 0; const string fileName = @"d:\histogramValues.dat"; private IFilterGraph2 _filterGraph; private IMediaControl _mediaCtrl; private IMediaEvent _mediaEvent; private int _width; private int _height; private readonly string _outFolder; private int _frameId; //better use a custom EventHandler that passes the results of the action to the subscriber. public delegate void EventHandler(object sender, EventArgs e); public event EventHandler StatusChanged; public delegate void FrameCountEventHandler(object sender, FrameCountEventArgs e); public event FrameCountEventHandler FrameCountAvailable; public delegate void ProgressEventHandler(object sender, ProgressEventArgs e); public event ProgressEventHandler ProgressChanged; private IMediaSeeking _mSeek; private long _duration = 0; private long _avgFrameTime = 0; //just save the averages to a List (not to fs) public List<double> AveragesList { get; set; } public List<long> histogramValuesList; public bool Secondpass { get; set; } public List<int> FramesToSave { get; set; } #endregion #region Constructors and Destructors public WmvAdapter(string file, string outFolder) { _outFolder = outFolder; try { SetupGraph(file); } catch { Dispose(); MessageBox.Show("A codec is required to load this video file. Please use http://www.headbands.com/gspot/ or search the web for the correct codec"); } } ~WmvAdapter() { CloseInterfaces(); } #endregion public void Dispose() { CloseInterfaces(); } public void Start() { EstimateFrameCount(); int hr = _mediaCtrl.Run(); WaitUntilDone(); DsError.ThrowExceptionForHR(hr); } public void WaitUntilDone() { int hr; const int eAbort = unchecked((int)0x80004004); do { System.Windows.Forms.Application.DoEvents(); EventCode evCode; if (dis == true) { return; } hr = _mediaEvent.WaitForCompletion(100, out evCode); }while (hr == eAbort); DsError.ThrowExceptionForHR(hr); OnStatusChanged(); } //Edit: added events protected virtual void OnStatusChanged() { if (StatusChanged != null) StatusChanged(this, new EventArgs()); } protected virtual void OnFrameCountAvailable(long frameCount) { if (FrameCountAvailable != null) FrameCountAvailable(this, new FrameCountEventArgs() { FrameCount = frameCount }); } protected virtual void OnProgressChanged(int frameID) { if (ProgressChanged != null) ProgressChanged(this, new ProgressEventArgs() { FrameID = frameID }); } /// <summary> build the capture graph for grabber. 
</summary> private void SetupGraph(string file) { ISampleGrabber sampGrabber = null; IBaseFilter capFilter = null; IBaseFilter nullrenderer = null; _filterGraph = (IFilterGraph2)new FilterGraph(); _mediaCtrl = (IMediaControl)_filterGraph; _mediaEvent = (IMediaEvent)_filterGraph; _mSeek = (IMediaSeeking)_filterGraph; var mediaFilt = (IMediaFilter)_filterGraph; try { // Add the video source int hr = _filterGraph.AddSourceFilter(file, "Ds.NET FileFilter", out capFilter); DsError.ThrowExceptionForHR(hr); // Get the SampleGrabber interface sampGrabber = new SampleGrabber() as ISampleGrabber; var baseGrabFlt = sampGrabber as IBaseFilter; ConfigureSampleGrabber(sampGrabber); // Add the frame grabber to the graph hr = _filterGraph.AddFilter(baseGrabFlt, "Ds.NET Grabber"); DsError.ThrowExceptionForHR(hr); // --------------------------------- // Connect the file filter to the sample grabber // Hopefully this will be the video pin, we could check by reading it's mediatype IPin iPinOut = DsFindPin.ByDirection(capFilter, PinDirection.Output, 0); // Get the input pin from the sample grabber IPin iPinIn = DsFindPin.ByDirection(baseGrabFlt, PinDirection.Input, 0); hr = _filterGraph.Connect(iPinOut, iPinIn); DsError.ThrowExceptionForHR(hr); // Add the null renderer to the graph nullrenderer = new NullRenderer() as IBaseFilter; hr = _filterGraph.AddFilter(nullrenderer, "Null renderer"); DsError.ThrowExceptionForHR(hr); // --------------------------------- // Connect the sample grabber to the null renderer iPinOut = DsFindPin.ByDirection(baseGrabFlt, PinDirection.Output, 0); iPinIn = DsFindPin.ByDirection(nullrenderer, PinDirection.Input, 0); hr = _filterGraph.Connect(iPinOut, iPinIn); DsError.ThrowExceptionForHR(hr); // Turn off the clock. This causes the frames to be sent // thru the graph as fast as possible hr = mediaFilt.SetSyncSource(null); DsError.ThrowExceptionForHR(hr); // Read and cache the image sizes SaveSizeInfo(sampGrabber); //Edit: get the duration hr = _mSeek.GetDuration(out _duration); DsError.ThrowExceptionForHR(hr); } finally { if (capFilter != null) { Marshal.ReleaseComObject(capFilter); } if (sampGrabber != null) { Marshal.ReleaseComObject(sampGrabber); } if (nullrenderer != null) { Marshal.ReleaseComObject(nullrenderer); } GC.Collect(); } } private void EstimateFrameCount() { try { //1sec / averageFrameTime double fr = 10000000.0 / _avgFrameTime; double frameCount = fr * (_duration / 10000000.0); OnFrameCountAvailable((long)frameCount); } catch { } } public double framesCounts() { double fr = 10000000.0 / _avgFrameTime; double frameCount = fr * (_duration / 10000000.0); return frameCount; } private void SaveSizeInfo(ISampleGrabber sampGrabber) { // Get the media type from the SampleGrabber var media = new AMMediaType(); int hr = sampGrabber.GetConnectedMediaType(media); DsError.ThrowExceptionForHR(hr); if ((media.formatType != FormatType.VideoInfo) || (media.formatPtr == IntPtr.Zero)) { throw new NotSupportedException("Unknown Grabber Media Format"); } // Grab the size info var videoInfoHeader = (VideoInfoHeader)Marshal.PtrToStructure(media.formatPtr, typeof(VideoInfoHeader)); _width = videoInfoHeader.BmiHeader.Width; _height = videoInfoHeader.BmiHeader.Height; //Edit: get framerate _avgFrameTime = videoInfoHeader.AvgTimePerFrame; DsUtils.FreeAMMediaType(media); GC.Collect(); } private void ConfigureSampleGrabber(ISampleGrabber sampGrabber) { var media = new AMMediaType { majorType = MediaType.Video, subType = MediaSubType.RGB24, formatType = FormatType.VideoInfo }; int hr = 
sampGrabber.SetMediaType(media); DsError.ThrowExceptionForHR(hr); DsUtils.FreeAMMediaType(media); GC.Collect(); hr = sampGrabber.SetCallback(this, 1); DsError.ThrowExceptionForHR(hr); } private void CloseInterfaces() { try { if (_mediaCtrl != null) { _mediaCtrl.Stop(); _mediaCtrl = null; dis = true; } } catch (Exception ex) { Debug.WriteLine(ex); } if (_filterGraph != null) { Marshal.ReleaseComObject(_filterGraph); _filterGraph = null; } GC.Collect(); } int ISampleGrabberCB.SampleCB(double sampleTime, IMediaSample pSample) { Marshal.ReleaseComObject(pSample); return 0; } int ISampleGrabberCB.BufferCB(double sampleTime, IntPtr pBuffer, int bufferLen) { if (Form1.ExtractAutomatic == true) { using (var bitmap = new Bitmap(_width, _height, _width * 3, PixelFormat.Format24bppRgb, pBuffer)) { if (!this.Secondpass) { long[] HistogramValues = Form1.GetHistogram(bitmap); long t = Form1.GetTopLumAmount(HistogramValues, 1000); Form1.averagesTest.Add(t); } else { //this is the changed part if (_frameId > 0) { if (Form1.averagesTest[_frameId] / 1000.0 - Form1.averagesTest[_frameId - 1] / 1000.0 > 150.0) { count = 6; } if (count > 0) { bitmap.RotateFlip(RotateFlipType.Rotate180FlipX); bitmap.Save(Path.Combine(_outFolder, _frameId.ToString("D6") + ".bmp")); count --; } } } _frameId++; //let only report each 100 frames for performance if (_frameId % 100 == 0) OnProgressChanged(_frameId); } } else { using (var bitmap = new Bitmap(_width, _height, _width * 3, PixelFormat.Format24bppRgb, pBuffer)) { if (!this.Secondpass) { //get avg double average = GetAveragePixelValue(bitmap); if (AveragesList == null) AveragesList = new List<double>(); //save avg AveragesList.Add(average); //***************************\\ // for (int i = 0; i < (int)framesCounts(); i++) // { // get histogram values long[] HistogramValues = Form1.GetHistogram(bitmap); if (histogramValuesList == null) histogramValuesList = new List<long>(256); histogramValuesList.AddRange(HistogramValues); //***************************\\ //} } else { if (FramesToSave != null && FramesToSave.Contains(_frameId)) { bitmap.RotateFlip(RotateFlipType.Rotate180FlipX); bitmap.Save(Path.Combine(_outFolder, _frameId.ToString("D6") + ".bmp")); // get histogram values long[] HistogramValues = Form1.GetHistogram(bitmap); if (histogramValuesList == null) histogramValuesList = new List<long>(256); histogramValuesList.AddRange(HistogramValues); using (BinaryWriter binWriter = new BinaryWriter(File.Open(fileName, FileMode.Create))) { for (int i = 0; i < histogramValuesList.Count; i++) { binWriter.Write(histogramValuesList[(int)i]); } binWriter.Close(); } } } _frameId++; //let only report each 100 frames for performance if (_frameId % 100 == 0) OnProgressChanged(_frameId); } } return 0; } /* int ISampleGrabberCB.SampleCB(double sampleTime, IMediaSample pSample) { Marshal.ReleaseComObject(pSample); return 0; } int ISampleGrabberCB.BufferCB(double sampleTime, IntPtr pBuffer, int bufferLen) { using (var bitmap = new Bitmap(_width, _height, _width * 3, PixelFormat.Format24bppRgb, pBuffer)) { if (!this.Secondpass) { //get avg double average = GetAveragePixelValue(bitmap); if (AveragesList == null) AveragesList = new List<double>(); //save avg AveragesList.Add(average); //***************************\\ // for (int i = 0; i < (int)framesCounts(); i++) // { // get histogram values long[] HistogramValues = Form1.GetHistogram(bitmap); if (histogramValuesList == null) histogramValuesList = new List<long>(256); histogramValuesList.AddRange(HistogramValues); long t = 
Form1.GetTopLumAmount(HistogramValues, 1000); //***************************\\ Form1.averagesTest.Add(t); // to add this list to a text file or binary file and read the averages from the file when its is Secondpass !!!!! //} } else { if (FramesToSave != null && FramesToSave.Contains(_frameId)) { bitmap.RotateFlip(RotateFlipType.Rotate180FlipX); bitmap.Save(Path.Combine(_outFolder, _frameId.ToString("D6") + ".bmp")); // get histogram values long[] HistogramValues = Form1.GetHistogram(bitmap); if (histogramValuesList == null) histogramValuesList = new List<long>(256); histogramValuesList.AddRange(HistogramValues); using (BinaryWriter binWriter = new BinaryWriter(File.Open(fileName, FileMode.Create))) { for (int i = 0; i < histogramValuesList.Count; i++) { binWriter.Write(histogramValuesList[(int)i]); } binWriter.Close(); } } for (int x = 1; x < Form1.averagesTest.Count; x++) { double fff = Form1.averagesTest[x] / 1000.0 - Form1.averagesTest[x - 1] / 1000.0; if (Form1.averagesTest[x] / 1000.0 - Form1.averagesTest[x - 1] / 1000.0 > 180.0) { bitmap.RotateFlip(RotateFlipType.Rotate180FlipX); bitmap.Save(Path.Combine(_outFolder, _frameId.ToString("D6") + ".bmp")); _frameId++; } } } _frameId++; //let only report each 100 frames for performance if (_frameId % 100 == 0) OnProgressChanged(_frameId); } return 0; }*/ private unsafe double GetAveragePixelValue(Bitmap bmp) { BitmapData bmData = null; try { bmData = bmp.LockBits(new Rectangle(0, 0, bmp.Width, bmp.Height), ImageLockMode.ReadOnly, PixelFormat.Format24bppRgb); int stride = bmData.Stride; IntPtr scan0 = bmData.Scan0; int w = bmData.Width; int h = bmData.Height; double sum = 0; long pixels = bmp.Width * bmp.Height; byte* p = (byte*)scan0.ToPointer(); for (int y = 0; y < h; y++) { p = (byte*)scan0.ToPointer(); p += y * stride; for (int x = 0; x < w; x++) { double i = ((double)p[0] + p[1] + p[2]) / 3.0; sum += i; p += 3; } //no offset incrementation needed when getting //the pointer at the start of each row } bmp.UnlockBits(bmData); double result = sum / (double)pixels; return result; } catch { try { bmp.UnlockBits(bmData); } catch { } } return -1; } } public class FrameCountEventArgs { public long FrameCount { get; set; } } public class ProgressEventArgs { public int FrameID { get; set; } } } I remember i had this codec problem/s before and i installed the codec/'s that were needed but in this case both quick time and windows media player can play the video files so why the application cant detect and find the codec/'s on my computer ? Gspot say that the codec is AVC1 but again wmp and quicktime play the video files no problems. The video files are from my digital camera !
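    One detail worth noting in the code above: the constructor's bare catch swallows the actual exception, so any failure inside SetupGraph (not only a missing codec) produces the generic codec message. A small diagnostic sketch - my suggestion, not part of the original code - that surfaces the underlying error instead:

    public WmvAdapter(string file, string outFolder) {
        _outFolder = outFolder;
        try {
            SetupGraph(file);
        }
        catch (Exception ex) {
            Dispose();
            // Show the real failure (often an HRESULT raised by DsError.ThrowExceptionForHR)
            // instead of assuming a missing codec.
            MessageBox.Show("Failed to build the DirectShow graph for \"" + file + "\":\r\n" + ex.Message);
            throw;  // let the caller decide how to handle it
        }
    }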

    Read the article

  • .NET HTML Sanitation for rich HTML Input

    - by Rick Strahl
    Recently I was working on updating a legacy application to MVC 4 that included free-form text input. When I set up the new site my initial approach was to not allow any rich HTML input, only simple text formatting that would respect a few simple HTML commands for bold, lists etc. and automatically handle line break processing for new lines and paragraphs. This is typical of what I do with most multi-line text input in my apps, and it works very well with very little development effort involved.

    Then the client sprung another note: Oh, by the way, we have a bunch of customers (real estate agents) who need to post complete HTML documents. Uh oh! There goes the simple theory. After some discussion and pleading on my part (<snicker>) to try and avoid this type of raw HTML input because of potential XSS issues, the client decided to go ahead and allow raw HTML input anyway.

    There have been lots of discussions on this subject on StackOverflow (and here and here), but after reading through some of the solutions I didn't really find anything that would work even closely for what I needed. Specifically, we need to be able to allow just about any HTML markup, with the exception of script code. Remote CSS and images need to be loaded, links need to work, and so on. While the 'legit' HTML posted by these agents is basic in nature, it does span most of the full gamut of HTML (4). Most of the XSS prevention/sanitizer solutions I found were way too aggressive and rendered the posted output unusable, mostly because they tend to strip any externally loaded content. In short, I needed a custom solution.

    I thought the best solution to this would be to use an HTML parser - in this case the Html Agility Pack - and then run through all the HTML markup provided and remove any of the blacklisted tags and a number of attributes that are prone to JavaScript injection. There's much discussion on whether to use blacklists vs. whitelists in the discussions mentioned above, but I found that whitelists make sense in simple scenarios where you might allow manual HTML input; when you need to allow a larger array of HTML functionality, a blacklist is probably easier to manage, as the vast majority of elements and attributes can be allowed. Also, whitelisting gets a bit more complex with HTML5 and the proliferation of new HTML tags, and most new tags generally don't affect XSS issues directly. Pure whitelisting based on elements and attributes also doesn't capture many edge cases (see some of the XSS cheat sheets listed below), so even with a whitelist, custom logic is still required to handle many of those edge cases.

    The Microsoft Web Protection Library (AntiXSS)

    My first thought was to check out the Microsoft AntiXSS library. Microsoft has an HTML encoding and sanitation library in the Microsoft Web Protection Library (formerly the AntiXSS Library) on CodePlex, which provides stricter functions for whitelist encoding and sanitation. Initially I thought the Sanitation class and its static members would do the trick for me, but I found that this library is way too restrictive for my needs. Specifically, the Sanitation class strips out images and links, which rendered the full HTML from our real estate clients completely useless. I didn't spend much time with it, but apparently I'm not alone in feeling this library is not really useful without some way to configure its operation.
To give you an example of what didn't work for me with the library here's a small and simple HTML fragment that includes script, img and anchor tags. I would expect the script to be stripped and everything else to be left intact. Here's the original HTML:var value = "<b>Here</b> <script>alert('hello')</script> we go. Visit the " + "<a href='http://west-wind.com'>West Wind</a> site. " + "<img src='http://west-wind.com/images/new.gif' /> " ; and the code to sanitize it with the AntiXSS Sanitize class:@Html.Raw(Microsoft.Security.Application.Sanitizer.GetSafeHtmlFragment(value)) This produced a not so useful sanitized string: Here we go. Visit the <a>West Wind</a> site. While it removed the <script> tag (good) it also removed the href from the link and the image tag altogether (bad). In some situations this might be useful, but for most tasks I doubt this is the desired behavior. While links can contain javascript: references and images can 'broadcast' information to a server, without configuration to tell the library what to restrict this becomes useless to me. I couldn't find any way to customize the white list, nor is there code available in this 'open source' library on CodePlex. Using Html Agility Pack for HTML Parsing The WPL library wasn't going to cut it. After doing a bit of research I decided the best approach for a custom solution would be to use an HTML parser and inspect the HTML fragment/document I'm trying to import. I've used the HTML Agility Pack before for a number of apps where I needed an HTML parser without requiring an instance of a full browser like the Internet Explorer Application object which is inadequate in Web apps. In case you haven't checked out the Html Agility Pack before, it's a powerful HTML parser library that you can use from your .NET code. It provides a simple, parsable HTML DOM model to full HTML documents or HTML fragments that let you walk through each of the elements in your document. If you've used the HTML or XML DOM in a browser before you'll feel right at home with the Agility Pack. Blacklist based HTML Parsing to strip XSS Code For my purposes of HTML sanitation, the process involved is to walk the HTML document one element at a time and then check each element and attribute against a blacklist. There's quite a bit of argument of what's better: A whitelist of allowed items or a blacklist of denied items. While whitelists tend to be more secure, they also require a lot more configuration. In the case of HTML5 a whitelist could be very extensive. For what I need, I only want to ensure that no JavaScript is executed, so a blacklist includes the obvious <script> tag plus any tag that allows loading of external content including <iframe>, <object>, <embed> and <link> etc. <form>  is also excluded to avoid posting content to a different location. I also disallow <head> and <meta> tags in particular for my case, since I'm only allowing posting of HTML fragments. There is also some internal logic to exclude some attributes or attributes that include references to JavaScript or CSS expressions. The default tag blacklist reflects my use case, but is customizable and can be added to. 
    Here's my HtmlSanitizer implementation:

    using System.Collections.Generic;
    using System.IO;
    using System.Xml;
    using HtmlAgilityPack;

    namespace Westwind.Web.Utilities {
        public class HtmlSanitizer {
            public HashSet<string> BlackList = new HashSet<string>() {
                { "script" }, { "iframe" }, { "form" }, { "object" },
                { "embed" }, { "link" }, { "head" }, { "meta" }
            };

            /// <summary>
            /// Cleans up an HTML string and removes HTML tags in the blacklist
            /// </summary>
            public static string SanitizeHtml(string html, params string[] blackList) {
                var sanitizer = new HtmlSanitizer();
                if (blackList != null && blackList.Length > 0) {
                    sanitizer.BlackList.Clear();
                    foreach (string item in blackList)
                        sanitizer.BlackList.Add(item);
                }
                return sanitizer.Sanitize(html);
            }

            /// <summary>
            /// Cleans up an HTML string by removing elements
            /// on the blacklist and all attributes that start
            /// with onXXX.
            /// </summary>
            public string Sanitize(string html) {
                var doc = new HtmlDocument();
                doc.LoadHtml(html);
                SanitizeHtmlNode(doc.DocumentNode);

                //return doc.DocumentNode.WriteTo();
                string output = null;

                // Use an XmlTextWriter to create self-closing tags
                using (StringWriter sw = new StringWriter()) {
                    XmlWriter writer = new XmlTextWriter(sw);
                    doc.DocumentNode.WriteTo(writer);
                    output = sw.ToString();

                    // strip off the XML doc header
                    if (!string.IsNullOrEmpty(output)) {
                        int at = output.IndexOf("?>");
                        output = output.Substring(at + 2);
                    }
                    writer.Close();
                }
                doc = null;
                return output;
            }

            private void SanitizeHtmlNode(HtmlNode node) {
                if (node.NodeType == HtmlNodeType.Element) {
                    // check for blacklist items and remove
                    if (BlackList.Contains(node.Name)) {
                        node.Remove();
                        return;
                    }

                    // remove CSS expressions and embedded script links
                    if (node.Name == "style") {
                        if (string.IsNullOrEmpty(node.InnerText)) {
                            if (node.InnerHtml.Contains("expression") || node.InnerHtml.Contains("javascript:"))
                                node.ParentNode.RemoveChild(node);
                        }
                    }

                    // remove script attributes
                    if (node.HasAttributes) {
                        for (int i = node.Attributes.Count - 1; i >= 0; i--) {
                            HtmlAttribute currentAttribute = node.Attributes[i];
                            var attr = currentAttribute.Name.ToLower();
                            var val = currentAttribute.Value.ToLower();

                            // remove event handlers
                            if (attr.StartsWith("on"))
                                node.Attributes.Remove(currentAttribute);

                            // remove script links on any attribute
                            else if (//(attr == "href" || attr == "src" || attr == "dynsrc" || attr == "lowsrc") &&
                                     val != null && val.Contains("javascript:"))
                                node.Attributes.Remove(currentAttribute);

                            // remove CSS expressions
                            else if (attr == "style" && val != null &&
                                     (val.Contains("expression") || val.Contains("javascript:") || val.Contains("vbscript:")))
                                node.Attributes.Remove(currentAttribute);
                        }
                    }
                }

                // Look through child nodes recursively
                if (node.HasChildNodes) {
                    for (int i = node.ChildNodes.Count - 1; i >= 0; i--) {
                        SanitizeHtmlNode(node.ChildNodes[i]);
                    }
                }
            }
        }
    }

    Please note: Use this as a starting point only for your own parsing, and review the code for your specific use case! If your needs are less lenient than mine were, you can make this much stricter by not allowing src and href attributes or CSS links if your HTML doesn't allow it. You can also check links for external URLs and disallow those - lots of options. The code is simple enough to make it easy to extend to fit your use cases more specifically. It's also quite easy to make this code work using a WhiteList approach if you want to go that route.
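    For reference, here is a quick usage sketch of the static entry point (the input strings are hypothetical):

    // Default blacklist:
    string clean = HtmlSanitizer.SanitizeHtml("<b>Hi</b> <script>alert('xss')</script>");

    // Custom blacklist - strip only <script> and <iframe>, allow everything else:
    string clean2 = HtmlSanitizer.SanitizeHtml(userHtml, "script", "iframe");

    Note that passing a custom blacklist replaces the default list entirely rather than adding to it.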
    The code above is semi-generic, allowing full-featured HTML fragments and only disallowing script-related content. The Sanitize method walks through each node of the document and then recursively drills into all of its children until the entire document has been traversed. Note that the code here uses an XmlTextWriter to write output - this is done to preserve XHTML-style self-closing tags, which are otherwise left as non-self-closing tags. The sanitizer code scans for blacklist elements and removes those elements not allowed. Note that the blacklist is configurable either in the instance class as a property or in the static method via the string parameter list. Additionally, the code goes through each element's attributes and looks for a host of rules gleaned from some of the XSS cheat sheets listed at the end of the post. Clearly there are a lot more XSS vulnerabilities, but a lot of them apply to ancient browsers (IE6 and versions of Netscape) - many of these glaring holes (like CSS expressions - WTF IE?) have been removed in modern browsers.

    What a Pain

    To be honest this is NOT a piece of code that I wanted to write. I think building anything related to XSS is better left to people who have far more knowledge of the topic than I do. Unfortunately, I was unable to find a tool that worked even closely for me, or even provided a working base. For the project I was working on I had no choice, and I'm sharing the code here merely as a baseline to start with and potentially expand on for specific needs. It's sad that the Microsoft Web Protection Library is currently such a train wreck - this is really something that should come from Microsoft as the systems vendor, or possibly a third party that provides security tools. Luckily for my application we are dealing with authenticated and validated users, so the user base is fairly well known and relatively small - this is not a wide-open Internet application that's directly public facing. As I mentioned earlier in the post, if I had my way I would simply not allow this type of raw HTML input in the first place, and instead rely on a more controlled HTML input mechanism like Markdown or even a good HTML edit control that can provide some limits on what types of input are allowed. Alas, in this case I was overridden and we had to go forward and allow *any* raw HTML posted. Sometimes I really feel sad that it's come this far - how many good applications and tools have been thwarted by fear of XSS (or worse) attacks? So many things that could be done *if* we had a more secure browser experience and didn't have to deal with every little script twerp trying to hack into Web pages and obscure browser bugs.
    So much time wasted building secure apps, so much time wasted by others trying to hack apps… We're a funny species - no other species manages to waste as much time, effort and resources as we humans do :-)

    Resources:
    - Code on GitHub
    - Html Agility Pack
    - XSS Cheat Sheet
    - XSS Prevention Cheat Sheet
    - Microsoft Web Protection Library (AntiXss)
    - StackOverflow links:
      http://stackoverflow.com/questions/341872/html-sanitizer-for-net
      http://blog.stackoverflow.com/2008/06/safe-html-and-xss/
      http://code.google.com/p/subsonicforums/source/browse/trunk/SubSonic.Forums.Data/HtmlScrubber.cs?r=61

    © Rick Strahl, West Wind Technologies, 2005-2012
    Posted in Security, HTML, ASP.NET, JavaScript

    Read the article

  • Management and Monitoring Tools for Windows Azure

    - by BuckWoody
    With such a large platform, Windows Azure has a lot of moving parts. We’ve done our best to keep the interface as simple as possible, while giving you the most control and visibility we can. However, as with most Microsoft products, there are multiple ways to do something – and I’ve always found that to be a good strength. Depending on the situation, I might want a graphical interface, a command-line interface, or just an API so I can incorporate the management into my own tools, or have third-party companies write other tools.

    While by no means exhaustive, I thought I might put together a quick list of a few tools you can use to manage and monitor Windows Azure components, from our IaaS, SaaS and PaaS offerings. Some of the products focus on one area more than another, but all are available today. I’ll try to maintain this list to keep it current, but make sure you check the date of this post’s update – if it’s more than six months old, it’s most likely out of date. Things move fast in the cloud.

    The Windows Azure Management Portal
    The primary tool for managing Windows Azure is our portal – most everything you need is there, from creating new services to querying a database. There are two versions as of this writing – a Silverlight client version, and a newer HTML5 version. The latter is being updated constantly to reach parity with the Silverlight client. There’s a balance in this portal between simplicity and power – we’re following the “less is more” approach, with increasing levels of detail as you work through the portal rather than overwhelming you with a single, long “more is more” page.
    You can find the Portal here: http://windowsazure.com (then click “Log In” and then “Portal”)

    Windows Azure Management API
    You can also use programming tools to either write your own interface, or simply provide management functions directly within your solution. You have two options – you can use the more universal REST APIs, which are a bit more complex but work with any system that can write to them, or the more approachable .NET API calls in code.
    You can find the reference for the APIs here: http://msdn.microsoft.com/en-us/library/windowsazure/ee460799.aspx
    All class libraries, for each part of Windows Azure: http://msdn.microsoft.com/en-us/library/ee393295.aspx

    PowerShell Command-lets
    PowerShell is one of the most powerful scripting languages I’ve used with Windows – and it’s baked into all of our products. When you need to work with multiple servers, scripting is really the only way to go, and the Windows Azure PowerShell Command-Lets allow you to work across most any part of the platform – and can even be used within the services themselves. You can do everything with them, from creating a new IaaS, PaaS or SaaS service, to controlling them and even working with security and more.
    You can find more about the Command-Lets here: http://wappowershell.codeplex.com/documentation (older link, still works, will point you to the new ones as well)
    We have command-line utilities for other operating systems as well: https://www.windowsazure.com/en-us/manage/downloads/
    Video walkthrough of using the Command-Lets: http://channel9.msdn.com/Events/BUILD/BUILD2011/SAC-859T

    System Center
    System Center is actually a suite of graphical tools you can use to manage, deploy, control, monitor and tune software from Microsoft and even other platforms. This will be the primary tool we’ll recommend for managing a hybrid or contiguous management process – and as time goes on you’ll see more and more features put into System Center for the entire Windows Azure suite of products.
    You can find the Management Pack and README for it here: http://www.microsoft.com/en-us/download/details.aspx?id=11324

    SQL Server Management Studio / Data Tools / Visual Studio
    SQL Server has built-in management and development tools, and since version 2008 R2, you can use them to manage Windows Azure Databases. Visual Studio also lets you connect to and manage portions of Windows Azure as well as Windows Azure Databases.
    You can read more about Visual Studio here: http://msdn.microsoft.com/en-us/library/windowsazure/ee405484
    You can read more about the SQL tools here: http://msdn.microsoft.com/en-us/library/windowsazure/ee621784.aspx

    Vendor-Provided Tools
    Microsoft does not suggest or endorse a specific third-party product. We do, however, use them, and see lots of other customers use them. You can browse to these sites to learn more, and chat with their folks directly on how they support Windows Azure.
    - Cerebrata: Tools for managing from the command line, graphical diagnostics, graphical storage management - http://www.cerebrata.com/
    - Quest Cloud Tools: Monitoring, storage management, and costing tools - http://communities.quest.com/community/cloud-tools
    - Paraleap: Monitoring tool - http://www.paraleap.com/AzureWatch
    - Cloudgraphs: Monitoring tool - http://www.cloudgraphs.com/
    - Opstera: Monitoring for Windows Azure and a scale-out pattern manager - http://www.opstera.com/products/Azureops/
    - Compuware: SaaS performance monitoring, load testing - http://www.compuware.com/application-performance-management/gomez-apm-products.html
    - SOASTA: Penetration and security testing - http://www.soasta.com/cloudtest/enterprise/
    - LoadStorm: Load-testing tool - http://loadstorm.com/windows-azure

    Open-Source Tools
    This is probably the most specific set of tools, and the list I’ll have to maintain most often. Smaller projects have a way of coming and going, so I’ll try to make sure this list is current.
    - Windows Azure MMC (I actually use this one a lot): http://wapmmc.codeplex.com/
    - Windows Azure Diagnostics Monitor: http://archive.msdn.microsoft.com/wazdmon
    - Azure Application Monitor: http://azuremonitor.codeplex.com/
    - Azure Web Log: http://www.xentrik.net/software/azure_web_log.html
    - Cloud Ninja: Multi-Tenant billing and performance monitor - http://cnmb.codeplex.com/
    - Cloud Samurai: Multi-Tenant management - http://cloudsamurai.codeplex.com/

    If you have additions to this list, please post them as a comment and I’ll research and then add them. Thanks!
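    As a footnote to the Management API section above, here is a rough C# sketch of calling the Service Management REST API to list hosted services. It assumes a management certificate is already installed and associated with the subscription; the subscription ID and certificate thumbprint are placeholders, and the x-ms-version header value varies by operation - treat this as an outline, not a definitive implementation:

    using System;
    using System.IO;
    using System.Net;
    using System.Security.Cryptography.X509Certificates;

    class ListHostedServices {
        static void Main() {
            string subscriptionId = "<your-subscription-id>";  // placeholder
            var url = "https://management.core.windows.net/" + subscriptionId + "/services/hostedservices";

            var request = (HttpWebRequest)WebRequest.Create(url);
            request.Headers.Add("x-ms-version", "2011-10-01");  // version varies by operation

            // Authenticate with a management certificate from the local store.
            var store = new X509Store(StoreName.My, StoreLocation.CurrentUser);
            store.Open(OpenFlags.ReadOnly);
            request.ClientCertificates.Add(
                store.Certificates.Find(X509FindType.FindByThumbprint, "<cert-thumbprint>", false)[0]);
            store.Close();

            using (var response = (HttpWebResponse)request.GetResponse())
            using (var reader = new StreamReader(response.GetResponseStream())) {
                Console.WriteLine(reader.ReadToEnd());  // XML list of hosted services
            }
        }
    }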

    Read the article

  • Developing web apps using ASP.NET MVC 3, Razor and EF Code First - Part 2

    - by shiju
    In my previous post, Developing web apps using ASP.NET MVC 3, Razor and EF Code First - Part 1, we discussed how to work with ASP.NET MVC 3 and EF Code First for developing web apps. We created a generic repository and unit of work with EF Code First for our ASP.NET MVC 3 application and did basic CRUD operations against a simple domain entity. In this post, I will demonstrate working with a domain entity with a deeper object graph, a Service Layer and View Models, and will also complete the rest of the demo application. The previous post covered CRUD operations against the Category entity; this post will focus on the Expense entity, which has an association with the Category entity. You can download the source code from http://efmvc.codeplex.com.

    The following frameworks will be used for this step-by-step tutorial:
    1. ASP.NET MVC 3 RTM
    2. EF Code First CTP 5
    3. Unity 2.0

    Domain Model

    Category entity:

    public class Category {
        public int CategoryId { get; set; }

        [Required(ErrorMessage = "Name Required")]
        [StringLength(25, ErrorMessage = "Must be less than 25 characters")]
        public string Name { get; set; }

        public string Description { get; set; }
        public virtual ICollection<Expense> Expenses { get; set; }
    }

    Expense entity:

    public class Expense {
        public int ExpenseId { get; set; }
        public string Transaction { get; set; }
        public DateTime Date { get; set; }
        public double Amount { get; set; }
        public int CategoryId { get; set; }
        public virtual Category Category { get; set; }
    }

    We have two domain entities - Category and Expense. A single category contains a list of expense transactions, and every expense transaction should have a category.

    Repository class for Expense transactions

    Let's create a repository class for handling CRUD operations for the Expense entity:

    public class ExpenseRepository : RepositoryBase<Expense>, IExpenseRepository {
        public ExpenseRepository(IDatabaseFactory databaseFactory)
            : base(databaseFactory) {
        }
    }

    public interface IExpenseRepository : IRepository<Expense> {
    }

    Service Layer

    If you are new to the Service Layer, check out Martin Fowler's article Service Layer. According to Martin Fowler, a Service Layer defines an application's boundary and its set of available operations from the perspective of interfacing client layers. It encapsulates the application's business logic, controlling transactions and coordinating responses in the implementation of its operations. Controller classes should be lightweight; do not put much business logic into them. We can use the service layer as the business logic layer and encapsulate the rules of the application.
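    The generic IRepository<T> and RepositoryBase<T> types come from Part 1 and are not repeated in this post. Purely for reference, a minimal sketch of the members used below (my reconstruction, not the original code from Part 1) might look like this:

    public interface IRepository<T> where T : class {
        void Add(T entity);
        void Delete(T entity);
        T GetById(int id);
        IEnumerable<T> GetMany(System.Linq.Expressions.Expression<Func<T, bool>> where);
    }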
    Let's create a service class that coordinates transactions for the Expense entity:

    public interface IExpenseService {
        IEnumerable<Expense> GetExpenses(DateTime startDate, DateTime endDate);
        Expense GetExpense(int id);
        void CreateExpense(Expense expense);
        void DeleteExpense(int id);
        void SaveExpense();
    }

    public class ExpenseService : IExpenseService {
        private readonly IExpenseRepository expenseRepository;
        private readonly IUnitOfWork unitOfWork;

        public ExpenseService(IExpenseRepository expenseRepository, IUnitOfWork unitOfWork) {
            this.expenseRepository = expenseRepository;
            this.unitOfWork = unitOfWork;
        }

        public IEnumerable<Expense> GetExpenses(DateTime startDate, DateTime endDate) {
            var expenses = expenseRepository.GetMany(exp => exp.Date >= startDate && exp.Date <= endDate);
            return expenses;
        }

        public void CreateExpense(Expense expense) {
            expenseRepository.Add(expense);
            unitOfWork.Commit();
        }

        public Expense GetExpense(int id) {
            var expense = expenseRepository.GetById(id);
            return expense;
        }

        public void DeleteExpense(int id) {
            var expense = expenseRepository.GetById(id);
            expenseRepository.Delete(expense);
            unitOfWork.Commit();
        }

        public void SaveExpense() {
            unitOfWork.Commit();
        }
    }

    View Models for Expense transactions

    In real-world ASP.NET MVC applications, we need to design model objects especially for our views. Our domain objects are mainly designed for the needs of the domain model, and they represent the domain of our application. View Model objects, on the other hand, are designed for the needs of our views. We have an Expense domain entity that has an association with Category. While creating a new Expense, we have to specify the Category the new Expense transaction belongs to. The user interface for an Expense transaction will have form fields representing the Expense entity and a CategoryId representing the Category. So let's create a view model representing the needs of Expense transactions:

    public class ExpenseViewModel {
        public int ExpenseId { get; set; }

        [Required(ErrorMessage = "Category Required")]
        public int CategoryId { get; set; }

        [Required(ErrorMessage = "Transaction Required")]
        public string Transaction { get; set; }

        [Required(ErrorMessage = "Date Required")]
        public DateTime Date { get; set; }

        [Required(ErrorMessage = "Amount Required")]
        public double Amount { get; set; }

        public IEnumerable<SelectListItem> Category { get; set; }
    }

    The ExpenseViewModel is designed for the purposes of the View template and contains all the validation rules. It has properties for mapping values to the Expense entity and a property Category for binding the list of Category values to a drop-down.

    Create Expense transaction

    Let's create action methods in the ExpenseController for creating expense transactions:

    public ActionResult Create() {
        var expenseModel = new ExpenseViewModel();
        var categories = categoryService.GetCategories();
        expenseModel.Category = categories.ToSelectListItems(-1);
        expenseModel.Date = DateTime.Today;
        return View(expenseModel);
    }

    [HttpPost]
    public ActionResult Create(ExpenseViewModel expenseViewModel) {
        if (!ModelState.IsValid) {
            var categories = categoryService.GetCategories();
            expenseViewModel.Category = categories.ToSelectListItems(expenseViewModel.CategoryId);
            return View("Save", expenseViewModel);
        }
        Expense expense = new Expense();
        ModelCopier.CopyModel(expenseViewModel, expense);
        expenseService.CreateExpense(expense);
        return RedirectToAction("Index");
    }

    In the Create action method for the HttpGet request, we create an instance of our view model ExpenseViewModel with Category information for the drop-down list, and pass the model object to the View template. The extension method ToSelectListItems is shown below:

    public static IEnumerable<SelectListItem> ToSelectListItems(this IEnumerable<Category> categories, int selectedId) {
        return categories.OrderBy(category => category.Name)
            .Select(category => new SelectListItem {
                Selected = (category.CategoryId == selectedId),
                Text = category.Name,
                Value = category.CategoryId.ToString()
            });
    }

    In the Create action method for HttpPost, our view model object ExpenseViewModel is bound to the posted form values. We need to create an instance of Expense for persistence, so we need to copy values from the ExpenseViewModel object to the Expense object. The ASP.NET MVC Futures assembly provides a static class ModelCopier that can be used for copying values between model objects. The ModelCopier class has two static methods: CopyCollection copies values between two collection objects, and CopyModel copies values between two model objects. We use the CopyModel method of the ModelCopier class to copy values from the expenseViewModel object to the expense object. Finally, we call the CreateExpense method of the ExpenseService class to persist the new expense transaction.

    List Expense transactions

    We want to list expense transactions based on a date range, so let's create an action method for filtering expense transactions with a specified date range:

    public ActionResult Index(DateTime? startDate, DateTime? endDate) {
        // If no date is passed, take the current month's first and last date
        DateTime dtNow;
        dtNow = DateTime.Today;
        if (!startDate.HasValue) {
            startDate = new DateTime(dtNow.Year, dtNow.Month, 1);
            endDate = startDate.Value.AddMonths(1).AddDays(-1);
        }
        // take the last date of the start date's month, if no end date is passed
        if (startDate.HasValue && !endDate.HasValue) {
            endDate = (new DateTime(startDate.Value.Year, startDate.Value.Month, 1)).AddMonths(1).AddDays(-1);
        }
        var expenses = expenseService.GetExpenses(startDate.Value, endDate.Value);
        // if the request is Ajax, return a partial view
        if (Request.IsAjaxRequest()) {
            return PartialView("ExpenseList", expenses);
        }
        // set start date and end date in the ViewBag dictionary
        ViewBag.StartDate = startDate.Value.ToShortDateString();
        ViewBag.EndDate = endDate.Value.ToShortDateString();
        // if the request is not Ajax
        return View(expenses);
    }

    We use the above Index action method for both Ajax requests and normal requests. If the request is an Ajax request, we return the partial view ExpenseList.

    Razor views for listing Expense information

    Let's create view templates in Razor for showing the list of Expense information.

    ExpenseList.cshtml:

    @model IEnumerable<MyFinance.Domain.Expense>
    <table>
        <tr>
            <th>Actions</th>
            <th>Category</th>
            <th>Transaction</th>
            <th>Date</th>
            <th>Amount</th>
        </tr>
        @foreach (var item in Model) {
            <tr>
                <td>
                    @Html.ActionLink("Edit", "Edit", new { id = item.ExpenseId })
                    @Ajax.ActionLink("Delete", "Delete", new { id = item.ExpenseId }, new AjaxOptions { Confirm = "Delete Expense?", HttpMethod = "Post", UpdateTargetId = "divExpenseList" })
                </td>
                <td>@item.Category.Name</td>
                <td>@item.Transaction</td>
                <td>@String.Format("{0:d}", item.Date)</td>
                <td>@String.Format("{0:F}", item.Amount)</td>
            </tr>
        }
    </table>
    <p>
        @Html.ActionLink("Create New Expense", "Create") |
        @Html.ActionLink("Create New Category", "Create", "Category")
    </p>

    Index.cshtml:

    @using MyFinance.Helpers;
    @model IEnumerable<MyFinance.Domain.Expense>
    @{
        ViewBag.Title = "Index";
    }
    <h2>Expense List</h2>
    <script src="@Url.Content("~/Scripts/jquery.unobtrusive-ajax.min.js")" type="text/javascript"></script>
    <script src="@Url.Content("~/Scripts/jquery-ui.js")" type="text/javascript"></script>
    <script src="@Url.Content("~/Scripts/jquery.ui.datepicker.js")" type="text/javascript"></script>
    <link href="@Url.Content("~/Content/jquery-ui-1.8.6.custom.css")" rel="stylesheet" type="text/css" />
    @using (Ajax.BeginForm(new AjaxOptions { UpdateTargetId = "divExpenseList", HttpMethod = "Get" }))
    {
        <table>
            <tr>
                <td>
                    <div>
                        Start Date: @Html.TextBox("StartDate", Html.Encode(String.Format("{0:mm/dd/yyyy}", ViewData["StartDate"].ToString())), new { @class = "ui-datepicker" })
                    </div>
                </td>
                <td>
                    <div>
                        End Date: @Html.TextBox("EndDate", Html.Encode(String.Format("{0:mm/dd/yyyy}", ViewData["EndDate"].ToString())), new { @class = "ui-datepicker" })
                    </div>
                </td>
                <td><input type="submit" value="Search By Transaction Date" /></td>
            </tr>
        </table>
    }
    <div id="divExpenseList">
        @Html.Partial("ExpenseList", Model)
    </div>
    <script type="text/javascript">
        $().ready(function () {
            $('.ui-datepicker').datepicker({
                dateFormat: 'mm/dd/yy',
                buttonImage: '@Url.Content("~/Content/calendar.gif")',
                buttonImageOnly: true,
                showOn: "button"
            });
        });
    </script>

    Ajax search functionality using Ajax.BeginForm

    The search functionality of the Index view provides Ajax functionality using Ajax.BeginForm. The Ajax.BeginForm() method writes an opening <form> tag to the response. You can use this method in a using block; in that case, the method renders the closing </form> tag at the end of the using block and the form is submitted asynchronously using JavaScript. The search functionality calls the Index action method, which returns the partial view ExpenseList to update the search results. We want to render the Ajax response onto the divExpenseList element, so we specified the UpdateTargetId as "divExpenseList" in the Ajax.BeginForm method.

    Add a jQuery DatePicker

    Our search functionality uses a date range, so we provide two date pickers using the jQuery datepicker. You need to add references to the following JavaScript files to work with the jQuery datepicker:

    - jquery-ui.js
    - jquery.ui.datepicker.js

    For theme support, the datepicker can use a customized CSS file; in our example we used "jquery-ui-1.8.6.custom.css". For more details about the datepicker component, visit the jQuery UI website at http://jqueryui.com/demos/datepicker. In the jQuery ready event, we used the following JavaScript function to initialize the UI element that shows the date picker:

    <script type="text/javascript">
        $().ready(function () {
            $('.ui-datepicker').datepicker({
                dateFormat: 'mm/dd/yy',
                buttonImage: '@Url.Content("~/Content/calendar.gif")',
                buttonImageOnly: true,
                showOn: "button"
            });
        });
    </script>

    Source Code

    You can download the source code from http://efmvc.codeplex.com/.

    Summary

    In this two-part series, we created a simple web application using ASP.NET MVC 3 RTM, Razor and EF Code First CTP 5. I have demonstrated patterns and practices such as Dependency Injection, the Repository pattern, Unit of Work, ViewModel and Service Layer. My primary objective was to demonstrate different practices and options for developing web apps using ASP.NET MVC 3 and EF Code First. You can implement these approaches in your own way for building web apps using ASP.NET MVC 3. I will refactor this demo app at a later time.

    Read the article

  • Big Data – Role of Cloud Computing in Big Data – Day 11 of 21

    - by Pinal Dave
    In yesterday’s blog post we learned the importance of NewSQL. In this article we will understand the role of the Cloud in the Big Data story.

    What is Cloud?

    Cloud has been one of the biggest buzzwords of the last few years. Everyone knows about the Cloud and it is extremely well defined online. In this article we will discuss the cloud in the context of Big Data. Cloud computing is a method of providing shared computing resources to applications that require dynamic resources. These resources include applications, computing, storage, networking, development and various deployment platforms. The fundamental idea of cloud computing is that it shares pretty much all the resources and delivers them to end users as a service. Examples of Cloud Computing and Big Data are Google and Amazon.com; both have fantastic Big Data offerings with the help of the cloud. We will discuss this later in this blog post.

    There are two different Cloud Deployment Models: 1) the Public Cloud and 2) the Private Cloud.

    Public Cloud
    The Public Cloud is cloud infrastructure built by commercial providers (Amazon, Rackspace etc.) who create a highly scalable data center that hides the complex infrastructure from the consumer and provides various services.

    Private Cloud
    The Private Cloud is cloud infrastructure built by a single organization that manages a highly scalable data center internally.

    Here is a quick comparison between Public Cloud and Private Cloud from Wikipedia:

                        Public Cloud                        Private Cloud
Cost    Initial cost        Typically zero                      Typically high
    Running cost        Unpredictable                       Unpredictable
    Customization       Impossible                          Possible
    Privacy             No (host has access to the data)    Yes
    Single sign-on      Impossible                          Possible
    Scaling up          Easy while within defined limits    Laborious but no limits

    Hybrid Cloud
    The Hybrid Cloud is cloud infrastructure built as a composition of two or more clouds, such as a public and a private cloud. A hybrid cloud gives the best of both worlds, as it combines multiple cloud deployment models.

    Cloud and Big Data – Common Characteristics

    Many characteristics of Cloud Architecture and Cloud Computing are also essentially important for Big Data. They highly overlap, and in many places it simply makes sense to use the power of both architectures and build a highly scalable framework. Here is the list of cloud computing characteristics important in Big Data:

    - Scalability
    - Elasticity
    - Ad-hoc Resource Pooling
    - Low Cost to Set Up Infrastructure
    - Pay on Use or Pay as you Go
    - Highly Available

    Leading Big Data Cloud Providers

    There are many players in the Big Data Cloud, but we will list a few of the known ones.

    Amazon
    Amazon is arguably the most popular Infrastructure as a Service (IaaS) provider. The history of how Amazon started in this business is very interesting. They started out with a massive infrastructure to support their own business. Gradually they figured out that their own resources were underutilized most of the time. They decided to get the maximum out of the resources they had, and hence they launched their Amazon Elastic Compute Cloud (Amazon EC2) service in 2006. Their products have evolved a lot recently, and this is now one of their primary businesses besides retail. Amazon also offers Big Data services under Amazon Web Services.
    Here is the list of the included services:
    - Amazon Elastic MapReduce – processes very high volumes of data
    - Amazon DynamoDB – a fully managed NoSQL (Not Only SQL) database service
    - Amazon Simple Storage Service (S3) – a web-scale service designed to store and accommodate any amount of data
    - Amazon High Performance Computing – provides low-latency, tuned high performance computing clusters
    - Amazon Redshift – a petabyte-scale data warehousing service

    Google
    Though Google is known for its search engine, we all know it is much more than that.
    - Google Compute Engine – offers secure, flexible computing from energy-efficient data centers
    - Google BigQuery – allows SQL-like queries to run against large datasets
    - Google Prediction API – a cloud-based machine learning tool

    Other Players
    Besides Amazon and Google, we also have other players in the Big Data market. Microsoft is also attempting Big Data with the cloud via Microsoft Azure. Additionally, Rackspace and NASA together initiated OpenStack. The goal of OpenStack is to provide a massively scaled, multitenant cloud that can run on any hardware.

    Things to Watch
    Cloud-based solutions integrate well with the Big Data story and are very economical to implement. However, there are a few things one should be careful about when deploying Big Data on cloud solutions. Here is a list of things to watch:
    - Data Integrity
    - Initial Cost
    - Recurring Cost
    - Performance
    - Data Access Security
    - Location
    - Compliance

    Every company has a different approach to Big Data and different rules and regulations. Based on various factors, each can implement its own custom Big Data solution on a cloud.

    Tomorrow
    In tomorrow’s blog post we will discuss the various Operational Databases supporting Big Data.

    Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Big Data, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL

    Read the article

  • Team Leaders & Authors - Manage and Report Workflow using "Print an Outline" in UPK

    - by [email protected]
    Did you know you can "print an outline"? You can print any outline or portion of an outline.

    Why might you want to "print an outline" in UPK? Have you ever wondered how many topics you have recorded, how many of your topics are ready for review, or even better, how many topics are complete? Do you need to report your project status to management? Maybe you just like to have a copy of your outline to refer to during development. Included in this output is the outline structure as well as the layout defined in the Details View of the Outline Editor.

    To print an outline, you must open either a module or section in the Outline Editor. A set of default data columns is automatically included in the output; however, you can configure which columns you want to appear in the report by switching to the Details view and customizing the columns. (To learn more about customizing your columns, refer to the Add and Remove Columns section of the Content Development.pdf guide.)

    To print an outline from the Outline Editor:
    1. Open a module or section document in the Outline Editor.
    2. Expand the documents to display the details that you want included in the report.
    3. On the File menu, choose Print and use the toolbar icons to print, view, or save the report to a file.

    Personally, I opt to save my outline in Microsoft Excel. Using the delivered features of Microsoft Excel, you can add columns of information, such as development notes, to your outline, or you can graph and chart your project status. As mentioned above, you can configure which columns appear in the outline. When utilizing the Print an Outline feature in conjunction with the Managing Workflow features of the UPK multi-user instance, you as a Team Lead or Author can better report project status. Read more about Managing Workflow below.

    Managing Workflow:
    The Properties toolpane contains special properties that allow authors to track document status or State as well as assign Document Ownership.

    Assign Content State
    The State property is an editable property for communicating the status of a document. This is particularly helpful when collaborating with other authors in a development team. Authors can assign a state to documents from the master list defined by the administrator. The default list of States includes (blank), Not Started, Draft, In Review, and Final. Administrators can customize the list by adding, deleting or renaming the values.

    To assign a State value to a document:
    1. Make sure you are working online.
    2. Display the Properties toolpane.
    3. Select the document(s) to which you want to assign a state. Note: You can select multiple documents using the standard Windows selection keys (CTRL+click and SHIFT+click).
    4. In the Workflow category, click in the State cell.
    5. Select a value from the list.

    Assign Document Ownership
    In many enterprises, multiple authors often work together developing content in a team environment. Team leaders typically handle large projects by assigning specific development responsibilities to authors. The Owner property allows team leaders and authors to assign documents to themselves and other authors to track who is responsible for a specific document. You view and change document assignments for a document using the Owner property in the Properties toolpane.

    To assign a document owner:
    1. Make sure you are working online.
    2. On the View menu, choose Properties.
    3. Select the document(s) to which you want to assign document responsibility. Note: You can select multiple documents using the standard Windows selection keys (CTRL+click and SHIFT+click).
    4. In the Workflow category, click in the Owner cell.
    5. Select a name from the list.

    Is anyone out there already using this feature? Share your ideas with the group. Those of you new to this feature, give it a test drive and let us know what you think.

    - Kathryn Lustenberger, Oracle UPK & Tutor Outbound Product Management

    Read the article

< Previous Page | 317 318 319 320 321 322 323 324 325 326 327 328  | Next Page >