Search Results

Search found 88714 results on 3549 pages for 'data type'.

  • Migrating BizTalk 2006 R2 to BizTalk 2010 XLANGs Issue

    - by SURESH GIRIRAJAN
    When we migrated some BizTalk apps from BizTalk 2006 R2 to BizTalk 2010, we ran into an issue when a .NET component was called inside an orchestration. In the .NET component we try to retrieve a promoted property; we had checked in the BizTalk group hub to validate that it was promoted, and there were no issues there. Only when we tried to access the data in the .NET component did we have a problem. We had just moved all the assemblies we had in BizTalk 2006 R2 over to BizTalk 2010 and didn’t recompile anything in the BizTalk 2010 environment. Looking further, a couple of new namespaces added under Microsoft.XLANGs… in BizTalk 2010 compared to BizTalk 2006 R2 caused the issue. So all we did to fix it was recompile the project in the 2010 environment, and it worked fine; it looks like a backward-compatibility issue.
        public static void Load(XLANGMessage msg)
        {
            try
            {
                // get the process id from context.
                object ctxVal = msg.GetPropertyValue(typeof(ProcessID));
                …
        }
    BizTalk 2010: error message in the event viewer:
        The service instance will remain suspended until administratively resumed or terminated. If resumed the instance will continue from its last persisted state and may re-throw the same unexpected exception.
        InstanceId: 441d73d3-2e84-49d2-b6bd-7218065b5e1d
        Shape name: Bulk Load
        ShapeId: bb959e56-9221-48be-a80f-24051196617d
        Exception thrown from: segment 1, progress 65
        Inner exception: A property cannot be associated with the type 'Tellago.Common.Schemas.ProcessId'.
        Exception type: InvalidPropertyTypeException
        Source: Microsoft.XLANGs.Engine
        Target Site: Microsoft.XLANGs.RuntimeTypes.MessagePropertyDefinition _getMessagePropertyDefinition(System.Type)
        The following is a stack trace that identifies the location where the exception occured
        at Microsoft.XLANGs.Core.XMessage._getMessagePropertyDefinition(Type propType)
        at Microsoft.XLANGs.Core.XMessage.GetContentProperty(Type propType)
        at Microsoft.XLANGs.Core.XMessage.GetPropertyValue(Type propType)
        at Microsoft.BizTalk.XLANGs.BTXEngine.BTXMessage.GetPropertyValue(Type propType)
        at Microsoft.XLANGs.Core.MessageWrapperForUserCode.GetPropertyValue(Type propType)
        at Tellago.Common.Components.Load(XLANGMessage msg)
        at Tellago.SuspensionProcess.segment1(StopConditions stopOn)
        at Microsoft.XLANGs.Core.SegmentScheduler.RunASegment(Segment s, StopConditions stopCond, Exception& exp)

    Read the article

  • Can I make a Compiz animation rule for Wine menus?

    - by satuon
    Wine menus appear and disappear with the "Glide 2" effect. This looks good for dialogs but not so good for menus. Can I make a custom rule to apply fade in/out instead? I include the animation section from my exported Compiz settings (default values are skipped): [animation] s0_open_matches = ((type=Dialog | ModalDialog | Normal | Unknown) | name=sun-awt-X11-XFramePeer | name=sun-awt-X11-XDialogPeer) & !(role=toolTipTip | role=qtooltip_label) & !(type=Normal & override_redirect=1) & !(name=gnome-screensaver);(type=Menu | PopupMenu | DropdownMenu | Dialog | ModalDialog | Normal);(type=Tooltip | Notification | Utility) & !(name=compiz) & !(title=notify-osd); s0_close_effects = animation:Glide 2;animation:Fade;animation:None; s0_close_matches = ((type=Dialog | ModalDialog | Normal | Unknown) | name=sun-awt-X11-XFramePeer | name=sun-awt-X11-XDialogPeer) & !(role=toolTipTip | role=qtooltip_label) & !(type=Normal & override_redirect=1) & !(name=gnome-screensaver);(type=Menu | PopupMenu | DropdownMenu | Dialog | ModalDialog | Normal);(type=Tooltip | Notification | Utility) & !(name=compiz) & !(title=notify-osd);
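    For what it's worth, here is a sketch of the mechanism rather than a tested answer: the animation plugin's effect and match settings are parallel, semicolon-separated lists, so a dedicated slot for Wine menus can be inserted in front of the existing rules and given the Fade effect. The (class=Wine) match below is only a placeholder; run xprop on a Wine window to see what class or name its menus actually report. The open_* settings would get the same treatment as the close_* ones shown here, and <existing rules> stands for the three match rules already present in s0_close_matches above.
    s0_close_effects = animation:Fade;animation:Glide 2;animation:Fade;animation:None;
    s0_close_matches = (class=Wine) & (type=Menu | PopupMenu | DropdownMenu | Unknown);<existing rules>;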

    Read the article

  • Dual boot Win 7 Ubuntu - home and boot partitions have no mount point

    - by cwmff
    After installing Ubuntu 12.04.3 on a Windows 7 laptop, running Ubuntu Disk Utility showed the following partition information:
    /dev/sda1 NTFS, Bootable, Filesystem, Labeled: System Reserved, 105MB, Not Mounted
    /dev/sda2 NTFS, no flag, Filesystem, no label, 84GB, Not Mounted
    /dev/sda3 Usage: Container for logical partitions. Partition Type: Extended (0x05). No flags. No label. Capacity: 416GB
    /dev/sda5 Usage: Filesystem. Partition Type: Linux (0x83). No partition label. No flags. Capacity: 999MB. Type: Ext4 (ver 1.0). Available: -. Label: -. Mount Point: Not Mounted
    /dev/sda8 Usage: Filesystem. Partition Type: Linux (0x83). No partition label. No flags. Capacity: 30GB. Type: Ext4 (ver 1.0). Available: -. Label: -. Mount Point: Mounted at /
    /dev/sda6 Usage: Filesystem. Partition Type: Linux (0x83). No partition label. No flags. Capacity: 377GB. Type: Ext4 (ver 1.0). Available: -. Label: -. Mount Point: Not Mounted
    /dev/sda7 Usage: Swap Space. Partition Type: Linux (0x82). No partition label. No flags. Capacity: 8.4GB.
    sda2 contains Windows 7 but has no mount point. sda6 should have been the Home partition and sda5 should have been the Boot partition, but the mount points seem to have been lost and now everything but swap has gone into the root partition, sda8 (the Home folder also seems to be within sda8). How do I go about getting sda6 used as the Home partition and sda5 as Boot?
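    A minimal sketch of what the fix would involve, under the assumption that sda5 and sda6 really are meant to be /boot and /home as described above. The UUIDs are placeholders to be filled in from the output of sudo blkid, and any data already sitting under /home on sda8 would need to be copied onto sda6 before switching over:
    # hypothetical /etc/fstab entries -- replace the UUIDs with the real ones reported by 'sudo blkid'
    UUID=<uuid-of-sda5>   /boot   ext4   defaults   0   2
    UUID=<uuid-of-sda6>   /home   ext4   defaults   0   2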

    Read the article

  • Discuss: PLs are characterised by which (iso)morphisms are implemented

    - by Yttrill
    I am interested to hear discussion of the proposition summarised in the title. As we know, programming language constructions admit a vast number of isomorphisms. In some languages, at some places in the translation process, some of these isomorphisms are implemented, whilst others require code to be written to implement them. For example, in my language Felix, the isomorphism between a type T and a tuple of one element of type T is implemented, meaning the two types are indistinguishable (identical). Similarly, a tuple of N values of the same type is not merely isomorphic to an array, it is an array: the isomorphism is implemented by the compiler. Many other isomorphisms are not implemented; for example, there is an isomorphism expressed by the following client code:
    match v with | ((?x,?y),?z) => x,(y,z)   // Felix
    match v with | ((x,y),z) -> x,(y,z)   (* Ocaml *)
    As another example, a type constructor C of int in Felix may be used directly as a function, whilst in Ocaml you must write a wrapper:
    let c x = C x
    Another isomorphism Felix implements is the elimination of unit values, including those in tuples. Felix can do this because (most) polymorphic values are monomorphised, which can be done because it is a whole-program analyser; Ocaml, for example, cannot do this easily because it supports separate compilation. For the same reason Felix performs type-class dispatch at compile time whilst Haskell passes around dictionaries. There are some quite surprising issues here. For example, an array is just a tuple, and tuples can be indexed at run time using a match, returning a value of a corresponding sum type. Indeed, to be correct, the index used is in fact a case of a unit sum with N summands, rather than an integer. Yet, in a real implementation, if the tuple is an array the index is replaced by an integer with a range check, and the result type is replaced by the common argument type of all the constructors: two isomorphisms are involved here, but they're implemented partly in the compiler translation and partly at run time.

    Read the article

  • Efficiently separating Read/Compute/Write steps for concurrent processing of entities in Entity/Component systems

    - by TravisG
    Setup I have an entity-component architecture where Entities can have a set of attributes (which are pure data with no behavior) and there exist systems that run the entity logic which act on that data. Essentially, in somewhat pseudo-code: Entity { id; map<id_type, Attribute> attributes; } System { update(); vector<Entity> entities; } A system that just moves along all entities at a constant rate might be MovementSystem extends System { update() { for each entity in entities position = entity.attributes["position"]; position += vec3(1,1,1); } } Essentially, I'm trying to parallelise update() as efficiently as possible. This can be done by running entire systems in parallel, or by giving each update() of one system a couple of components so different threads can execute the update of the same system, but for a different subset of entities registered with that system. Problem In reality, these systems sometimes require that entities interact(/read/write data from/to) each other, sometimes within the same system (e.g. an AI system that reads state from other entities surrounding the current processed entity), but sometimes between different systems that depend on each other (i.e. a movement system that requires data from a system that processes user input). Now, when trying to parallelize the update phases of entity/component systems, the phases in which data (components/attributes) from Entities are read and used to compute something, and the phase where the modified data is written back to entities need to be separated in order to avoid data races. Otherwise the only way (not taking into account just "critical section"ing everything) to avoid them is to serialize parts of the update process that depend on other parts. This seems ugly. To me it would seem more elegant to be able to (ideally) have all processing running in parallel, where a system may read data from all entities as it wishes, but doesn't write modifications to that data back until some later point. The fact that this is even possible is based on the assumption that modification write-backs are usually very small in complexity, and don't require much performance, whereas computations are very expensive (relatively). So the overhead added by a delayed-write phase might be evened out by more efficient updating of entities (by having threads work more % of the time instead of waiting). A concrete example of this might be a system that updates physics. The system needs to both read and write a lot of data to and from entities. Optimally, there would be a system in place where all available threads update a subset of all entities registered with the physics system. In the case of the physics system this isn't trivially possible because of race conditions. So without a workaround, we would have to find other systems to run in parallel (which don't modify the same data as the physics system), other wise the remaining threads are waiting and wasting time. However, that has disadvantages Practically, the L3 cache is pretty much always better utilized when updating a large system with multiple threads, as opposed to multiple systems at once, which all act on different sets of data. Finding and assembling other systems to run in parallel can be extremely time consuming to design well enough to optimize performance. Sometimes, it might even not be possible at all because a system just depends on data that is touched by all other systems. Solution? 
    In my thinking, a possible solution would be a system where the reading/updating and the writing of data are separated, so that in one expensive phase systems only read data and compute what they need to compute, and then in a separate, performance-wise cheap write phase the attributes of entities that need to be modified are finally written back to the entities.
    The Question: How might such a system be implemented to achieve optimal performance, as well as making the programmer's life easier? What are the implementation details of such a system, and what might have to be changed in the existing EC architecture to accommodate this solution?
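    Not the poster's engine, just a minimal C++ sketch of the deferred-write idea being described: during the parallel read/compute phase each system only records the writes it wants to make, and a cheap write phase applies them afterwards, so nothing mutates shared state while other systems are still reading. WriteBuffer, Position and movementUpdate are made-up names for illustration.
    #include <functional>
    #include <mutex>
    #include <vector>

    // Collects deferred writes recorded by worker threads during the read/compute
    // phase; apply() later runs them in a single, race-free write phase.
    class WriteBuffer {
    public:
        void record(std::function<void()> write) {
            std::lock_guard<std::mutex> lock(mutex_);
            pending_.push_back(std::move(write));
        }
        void apply() {
            for (auto& w : pending_) w();
            pending_.clear();
        }
    private:
        std::mutex mutex_;
        std::vector<std::function<void()>> pending_;
    };

    struct Position { float x, y, z; };

    // Hypothetical system update: reads entity data freely, defers its writes.
    void movementUpdate(std::vector<Position>& positions, WriteBuffer& buffer) {
        for (Position& p : positions) {
            Position next{p.x + 1, p.y + 1, p.z + 1};   // stands in for the expensive compute
            buffer.record([&p, next] { p = next; });    // cheap write, applied later
        }
    }

    int main() {
        std::vector<Position> positions(3);    // zero-initialized stand-ins for entities
        WriteBuffer buffer;
        movementUpdate(positions, buffer);     // in practice: many threads, each on an entity subset
        buffer.apply();                        // single write phase after all systems finish reading
    }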

    Read the article

  • How to Get a Smartphone-Style Word Suggestion on Windows

    - by Zainul Franciscus
    Have you ever wished that you could type faster and better in Windows? Then you’re in luck, because today we’ll show you how to get smartphone-style word suggestions in Windows. To accomplish that, you need to install AI Type, a piece of software that gives you word suggestions as you write in Windows. AI Type not only provides smartphone-style word suggestions for Windows, it also improves our writing by suggesting words according to their context. It will also try to match words according to the probability with which other users have used them. Installing AI Type is a breeze: just download the installer from the AI Type website, run the executable, fill in a registration form, and you’re all set to use AI Type for your daily writing. Once you’re done with the installation, AI Type appears in your system tray.

    Read the article

  • SD card won't appear after upgrade to 13.10

    - by Pixel
    My SD card won't mount when I put it into my laptop; everything was fine before the upgrade. The information about the SD card appears just fine when I type "sudo fdisk -l", it just says that it doesn't have a valid partition table. When I type "sudo blkid" I get the following answer:
    /dev/sda1: UUID="CCA8-9030" TYPE="vfat"
    /dev/sda2: UUID="8a1d135b-384b-432d-b608-64dcf09ada24" TYPE="ext2"
    /dev/sda3: UUID="7s6PtU-kj2Z-N8XD-0mzl-840i-i3HG-enlbAf" TYPE="LVM2_member"
    /dev/sr0: LABEL="Bamboo CD" TYPE="iso9660"
    /dev/mapper/ubuntu--vg-root: UUID="c9b521c8-7c9f-493b-95c8-a7d79c465318" TYPE="ext4"
    /dev/mapper/ubuntu--vg-swap_1: UUID="7f155ab6-e1b9-485b-a2bc-443c0622284d" TYPE="swap"
    When I use lsusb:
    Bus 002 Device 002: ID 8087:0024 Intel Corp. Integrated Rate Matching Hub
    Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
    Bus 001 Device 003: ID 13d3:5710 IMC Networks UVC VGA Webcam
    Bus 001 Device 002: ID 8087:0024 Intel Corp. Integrated Rate Matching Hub
    Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
    Bus 004 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
    Bus 003 Device 002: ID 046d:c52f Logitech, Inc. Unifying Receiver
    Bus 003 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
    I've read the other threads and I couldn't really find any good answers. My card reader was compatible with the previous version of Ubuntu, so technically it should still be compatible with the next version. Also, I can't erase what's on the card; it contains important data which I need... :/ If you need any more information just ask, I'll give it as soon as I can. Pixel.
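    A couple of generic checks, offered as a sketch rather than a diagnosis: watch the kernel log while inserting the card to confirm a device node appears, and if the card really was written without a partition table, try mounting the whole device rather than a partition. The mmcblk0 name is an assumption; use whatever device dmesg actually reports.
    dmesg | tail -n 20               # run right after inserting the card
    sudo mount /dev/mmcblk0 /mnt     # whole-device mount, if there is no partition table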

    Read the article

  • Used mountmanager now Ubuntu hangs on boot

    - by fpghost
    I was using MountManager in Ubuntu 12.04 to set user permissions for mounting hard drives. I set each partition to be mountable by everyone instead of admin only, then clicked Apply in the file menu and it gave me the message "successfully updated". Upon restarting, Ubuntu just hangs on the splash screen and does not boot any further. Windows still boots fine. How can I fix this? Please help, thanks. From the LiveUSB, my fstab looks like:
    overlayfs / overlayfs rw 0 0
    tmpfs /tmp tmpfs nosuid,nodev 0 0
    /dev/sda5 swap swap defaults 0 0
    /dev/sda7 swap swap defaults 0 0
    Is this corrupted? Other things that may be helpful: blkid returns
    /dev/loop0: TYPE="squashfs"
    /dev/sda1: LABEL="System Reserved" UUID="0AF26C31F26C22E5" TYPE="ntfs"
    /dev/sda2: UUID="5E1C88E31C88B813" TYPE="ntfs"
    /dev/sda3: UUID="94B2BB7DB2BB6282" TYPE="ntfs"
    /dev/sda5: UUID="41b66b9a-2b48-45cf-b59d-cd50e41ec971" TYPE="swap"
    /dev/sda6: UUID="c73ca79e-4fa4-4bde-967e-670593736f6a" TYPE="ext4"
    /dev/sda7: UUID="c05d659f-103c-4444-9dc4-3121b9e081d6" TYPE="swap"
    /dev/sdb1: LABEL="PENDRIVE" UUID="1DE8-0A49" TYPE="vfat"
    and cat /proc/mounts gives
    rootfs / rootfs rw 0 0
    sysfs /sys sysfs rw,nosuid,nodev,noexec,relatime 0 0
    proc /proc proc rw,nosuid,nodev,noexec,relatime 0 0
    udev /dev devtmpfs rw,relatime,size=1950000k,nr_inodes=206759,mode=755 0 0
    devpts /dev/pts devpts rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000 0 0
    tmpfs /run tmpfs rw,nosuid,relatime,size=783056k,mode=755 0 0
    /dev/sdb1 /cdrom vfat rw,relatime,fmask=0022,dmask=0022,codepage=cp437,iocharset=iso8859-1,shortname=mixed,errors=remount-ro 0 0
    /dev/loop0 /rofs squashfs ro,noatime 0 0
    tmpfs /cow tmpfs rw,noatime,mode=755 0 0
    /cow / overlayfs rw,relatime,lowerdir=//filesystem.squashfs,upperdir=/cow 0 0
    none /sys/fs/fuse/connections fusectl rw,relatime 0 0
    none /sys/kernel/debug debugfs rw,relatime 0 0
    none /sys/kernel/security securityfs rw,relatime 0 0
    tmpfs /tmp tmpfs rw,nosuid,nodev,relatime 0 0
    none /run/lock tmpfs rw,nosuid,nodev,noexec,relatime,size=5120k 0 0
    none /run/shm tmpfs rw,nosuid,nodev,relatime 0 0
    gvfs-fuse-daemon /home/ubuntu/.gvfs fuse.gvfs-fuse-daemon rw,nosuid,nodev,relatime,user_id=999,group_id=999 0 0
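    Assuming that really is the installed system's fstab rather than the live session's own (a live environment's fstab legitimately looks like the overlayfs/tmpfs lines above), the striking thing is that there is no entry for the root filesystem. As a sketch only, built from the blkid output above (sda6 is the only ext4 partition listed), the missing entries might look like the following, to be added from the live session after backing up the existing file:
    # hypothetical /etc/fstab entries, using the UUIDs reported by blkid above
    UUID=c73ca79e-4fa4-4bde-967e-670593736f6a  /     ext4  errors=remount-ro  0  1
    UUID=41b66b9a-2b48-45cf-b59d-cd50e41ec971  none  swap  sw                 0  0
    UUID=c05d659f-103c-4444-9dc4-3121b9e081d6  none  swap  sw                 0  0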

    Read the article

  • WCF AuthenticationService in IIS7 Error

    - by germandb
    I have a WCF Server running on IIS 7 using default application pool, with SSL activate, the services is installed in a SBS Server 2008. I implement client application services with wcf and SQL 2005 for setting the access control in my application. The application run under windows vista and is make with WPF. In my developer machine the application and the WCF services run well, the IIS i'm use for the trials is the local IIS 7 and the database is the SQL Server 2005 database hosting in my server. I'm using Visual Studio Project Designer to enable and configure client application services. using https://localhost/WcfServidorFundacion. When i'm change the authentication services location to https://WcfServices:5659/WcfServidorFundacion and recompile the application, the following error show up. Message: The web service returned the error status code: InternalServerError. Details of service failure: {"Message":" Error while processing your request ","StackTrace":"","ExceptionType":""} Stack Trace: en System.Net.HttpWebRequest.GetResponse() en System.Web.ClientServices.Providers.ProxyHelper.CreateWebRequestAndGetResponse(String serverUri, CookieContainer& cookies, String username, String connectionString, String connectionStringProvider, String[] paramNames, Object[] paramValues, Type returnType) InnerException: System.Net.WebException Message="Remote Server Error: (500) Interal Server Error." I can access the WCF service from the navigator using the url mentioned above and even make a webReference in my project. I make a capture of the response but I'cant post it because i don't have 10 reputation points I activate the error log in the IIS 7 server, and the result is a Warning in the ManagedPipilineHandler. I appreciate if any one can help me Errors & Warnings No.? Severity Event Module Name 132. view trace Warning -MODULE_SET_RESPONSE_ERROR_STATUS ModuleName ManagedPipelineHandler Notification 128 HttpStatus 500 HttpReason Internal Server Error HttpSubStatus 0 ErrorCode 0 ConfigExceptionInfo Notification EXECUTE_REQUEST_HANDLER ErrorCode La operación se ha completado correctamente. (0x0) Maybe this can help, is the web.config of my service <?xml version="1.0" encoding="utf-8"?> <!-- Nota: como alternativa para editar manualmente este archivo, puede utilizar la herramienta Administración de sitios web para configurar los valores de la aplicación. Utilice la opción Sitio Web->Configuración de Asp.Net en Visual Studio. 
Encontrará una lista completa de valores de configuración y comentarios en machine.config.comments, que se encuentra generalmente en \Windows\Microsoft.Net\Framework\v2.x\Config --> <configuration> <configSections> <sectionGroup name="system.web.extensions" type="System.Web.Configuration.SystemWebExtensionsSectionGroup, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"> <sectionGroup name="scripting" type="System.Web.Configuration.ScriptingSectionGroup, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"> <section name="scriptResourceHandler" type="System.Web.Configuration.ScriptingScriptResourceHandlerSection, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" requirePermission="false" allowDefinition="MachineToApplication" /> <sectionGroup name="webServices" type="System.Web.Configuration.ScriptingWebServicesSectionGroup, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"> <section name="jsonSerialization" type="System.Web.Configuration.ScriptingJsonSerializationSection, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" requirePermission="false" allowDefinition="Everywhere" /> <section name="profileService" type="System.Web.Configuration.ScriptingProfileServiceSection, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" requirePermission="false" allowDefinition="MachineToApplication" /> <section name="authenticationService" type="System.Web.Configuration.ScriptingAuthenticationServiceSection, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" requirePermission="false" allowDefinition="MachineToApplication" /> <section name="roleService" type="System.Web.Configuration.ScriptingRoleServiceSection, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" requirePermission="false" allowDefinition="MachineToApplication" /> </sectionGroup> </sectionGroup> </sectionGroup> </configSections> <appSettings /> <connectionStrings> <remove name="LocalMySqlServer" /> <remove name="LocalSqlServer" /> <add name="fundacionSelfAut" connectionString="Data Source=FUNDACIONSERVER/PRUEBAS;Initial Catalog=fundacion;User ID=wcfBaseDatos;Password=qwerty_2009;" providerName="System.Data.SqlClient" /> </connectionStrings> <system.web> <profile enabled="true" defaultProvider="SqlProfileProvider"> <providers> <clear /> <add name="SqlProfileProvider" type="System.Web.Profile.SqlProfileProvider" connectionStringName="fundacionSelfAut" applicationName="fundafe" /> </providers> <properties> <add name="FirstName" type="String" /> <add name="LastName" type="String" /> <add name="PhoneNumber" type="String" /> </properties> </profile> <roleManager enabled="true" defaultProvider="SqlRoleProvider"> <providers> <clear /> <add name="SqlRoleProvider" type="System.Web.Security.SqlRoleProvider" connectionStringName="fundacionSelfAut" applicationName="fundafe" /> </providers> </roleManager> <membership defaultProvider="SqlMembershipProvider"> <providers> <clear /> <add name="SqlMembershipProvider" type="System.Web.Security.SqlMembershipProvider" connectionStringName="fundacionSelfAut" applicationName="fundafe" enablePasswordRetrieval="false" enablePasswordReset="false" requiresQuestionAndAnswer="true" requiresUniqueEmail="true" passwordFormat="Hashed" /> </providers> </membership> <authentication mode="Forms" /> <compilation 
debug="true" strict="false" explicit="true"> <assemblies> <add assembly="System.Core, Version=3.5.0.0, Culture=neutral, PublicKeyToken=B77A5C561934E089" /> <add assembly="System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" /> </assemblies> </compilation> <!-- La sección <authentication> permite la configuración del modo de autenticación de seguridad utilizado por ASP.NET para identificar a un usuario entrante. --> <!-- La sección <customErrors> permite configurar las acciones que se deben llevar a cabo/cuando un error no controlado tiene lugar durante la ejecución de una solicitud. Específicamente, permite a los desarrolladores configurar páginas de error html que se mostrarán en lugar de un seguimiento de pila de errores. <customErrors mode="RemoteOnly" defaultRedirect="GenericErrorPage.htm"> <error statusCode="403" redirect="NoAccess.htm" /> <error statusCode="404" redirect="FileNotFound.htm" /> </customErrors> --> <pages> <controls> <add tagPrefix="asp" namespace="System.Web.UI" assembly="System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" /> </controls> </pages> <httpHandlers> <remove verb="*" path="*.asmx" /> <add verb="*" path="*.asmx" validate="false" type="System.Web.Script.Services.ScriptHandlerFactory, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" /> <add verb="*" path="*_AppService.axd" validate="false" type="System.Web.Script.Services.ScriptHandlerFactory, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" /> <add verb="GET,HEAD" path="ScriptResource.axd" type="System.Web.Handlers.ScriptResourceHandler, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" validate="false" /> </httpHandlers> <httpModules> <add name="ScriptModule" type="System.Web.Handlers.ScriptModule, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" /> </httpModules> <sessionState timeout="40" /> </system.web> <system.codedom> <compilers> <compiler language="c#;cs;csharp" extension=".cs" warningLevel="4" type="Microsoft.CSharp.CSharpCodeProvider, System, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089"> <providerOption name="CompilerVersion" value="v3.5" /> <providerOption name="WarnAsError" value="false" /> </compiler> </compilers> </system.codedom> <!-- La sección webServer del sistema es necesaria para ejecutar ASP.NET AJAX en Internet Information Services 7.0. Sin embargo, no es necesaria para la versión anterior de IIS. 
--> <system.webServer> <validation validateIntegratedModeConfiguration="false" /> <modules> <add name="ScriptModule" preCondition="integratedMode" type="System.Web.Handlers.ScriptModule, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" /> </modules> <handlers> <remove name="WebServiceHandlerFactory-Integrated" /> <add name="ScriptHandlerFactory" verb="*" path="*.asmx" preCondition="integratedMode" type="System.Web.Script.Services.ScriptHandlerFactory, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" /> <add name="ScriptHandlerFactoryAppServices" verb="*" path="*_AppService.axd" preCondition="integratedMode" type="System.Web.Script.Services.ScriptHandlerFactory, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" /> <add name="ScriptResource" preCondition="integratedMode" verb="GET,HEAD" path="ScriptResource.axd" type="System.Web.Handlers.ScriptResourceHandler, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" /> </handlers> <tracing> <traceFailedRequests> <add path="*"> <traceAreas> <add provider="ASP" verbosity="Verbose" /> <add provider="ASPNET" areas="Infrastructure,Module,Page,AppServices" verbosity="Verbose" /> <add provider="ISAPI Extension" verbosity="Verbose" /> <add provider="WWW Server" areas="Authentication,Security,Filter,StaticFile,CGI,Compression,Cache,RequestNotifications,Module" verbosity="Verbose" /> </traceAreas> <failureDefinitions statusCodes="401.3,500,403,404,405" /> </add> </traceFailedRequests> </tracing> <security> <authorization> <add accessType="Allow" users="germanbarbosa,informatica" /> </authorization> <authentication> <windowsAuthentication enabled="false" /> </authentication> </security> </system.webServer> <system.web.extensions> <scripting> <webServices> <authenticationService enabled="true" requireSSL="true" /> <profileService enabled="true" readAccessProperties="FirstName,LastName,PhoneNumber" /> <roleService enabled="true" /> </webServices> </scripting> </system.web.extensions> <system.serviceModel> <services> <!-- this enables the WCF AuthenticationService endpoint --> <service behaviorConfiguration="AppServiceBehaviors" name="System.Web.ApplicationServices.AuthenticationService"> <endpoint address="" binding="basicHttpBinding" bindingConfiguration="userHttps" bindingNamespace="http://asp.net/ApplicationServices/v200" contract="System.Web.ApplicationServices.AuthenticationService" /> </service> <!-- this enables the WCF RoleService endpoint --> <service behaviorConfiguration="AppServiceBehaviors" name="System.Web.ApplicationServices.RoleService"> <endpoint binding="basicHttpBinding" bindingConfiguration="userHttps" bindingNamespace="http://asp.net/ApplicationServices/v200" contract="System.Web.ApplicationServices.RoleService" /> </service> <!-- this enables the WCF ProfileService endpoint --> <service behaviorConfiguration="AppServiceBehaviors" name="System.Web.ApplicationServices.ProfileService"> <endpoint binding="basicHttpBinding" bindingNamespace="http://asp.net/ApplicationServices/v200" bindingConfiguration="userHttps" contract="System.Web.ApplicationServices.ProfileService" /> </service> </services> <bindings> <basicHttpBinding> <!-- Set up a binding that uses Username as the client credential type --> <binding name="userHttps"> <security mode="Transport"> </security> </binding> </basicHttpBinding> </bindings> <behaviors> <serviceBehaviors> <behavior name="AppServiceBehaviors"> 
<serviceMetadata httpGetEnabled="false" httpsGetEnabled="true" /> <serviceDebug includeExceptionDetailInFaults="true" /> <serviceAuthorization principalPermissionMode="UseAspNetRoles" roleProviderName="SqlRoleProvider" /> <serviceCredentials> <userNameAuthentication userNamePasswordValidationMode="MembershipProvider" membershipProviderName="SqlMembershipProvider" /> </serviceCredentials> </behavior> </serviceBehaviors> </behaviors> <serviceHostingEnvironment aspNetCompatibilityEnabled="true" /> </system.serviceModel> </configuration>

    Read the article

  • trying to validate user input in php

    - by user225269
    I'm trying to validate user input in php. This code will check if the values are null or not. If it is null, this will require the user to input the values that are null. When all the text boxes in the html form that came before this. This code will show the submit button, and that submit button will save the inputted data into the mysql database. But the problem is that the value that is saved is zero zero and zero, what might be the cause of this? <html> <head> <title>Admission Information Sheet</title> <meta http-equiv="Content-Type" content="text/html; Western (ISO-8859-1)"> <meta name="author" content=" "> <title> <style> input { font-size: 16px;} </style> <?php include('header.php'); ?> <div id="main_content"> </div> <?php include('footer.php'); ?> <table border="1" width="900" border="0" align="left" cellpadding="0" cellspacing="1" bgcolor="#CCCCCC"> <tr> <form name="form1.1" method="POST" action="aisaction.php"> <?php $NURSE = $_POST[nurse]; $TELNUM = $_POST[telnum]; $HOSPNUM = $_POST[hnum]; $ROOMNUM = $_POST[rnum]; $LASTNAME = $_POST[lname]; $FIRSTNAME = $_POST[fname]; $MIDNAME = $_POST[mname]; $AD = $_POST[ad]; $ADATE = $_POST[adate]; $ADTIME = $_POST[adtime]; $CSTAT = $_POST[cs]; $AGE = $_POST[age]; $BDAY = $_POST[bday]; $SEX = $_POST[sex]; ?> <td> <table width="100%" border="0" cellpadding="2" cellspacing="1" bgcolor="#FFFFFF"> <tr> <td colspan="12" style="background:#9ACD32; color:white; border:white 1px solid; text-align: center"><strong><font size="3">ADMISSION INFORMATION SHEET</strong></td> </tr> <tr> </td><br> <td width="54"><font size="3">Hospital #</td> <td width="3">:</td> <td width="168"><input type="display" name="hnum" disabled="true" value= "<?php print "$HOSPNUM";?>"><br> <font color="red"> <?php if(empty($HOSPNUM)) print "* Hospital Number required!<br>"; ?> </td> <td width="41"><font size="3">Room #</td> <td width="3">:</td> <td width="168"><input type="display" name="rnum" disabled="true" value= "<?php print "$ROOMNUM";?>"><br> <font color="red"> <?php if(empty($ROOMNUM)) print "* Room Number required!<br>"; ?> </td> <td width="67"><font size="3">Admission Date</td> <td width="3">:</td> <td width="168"><input type="display" name="adate" disabled="true" value= "<?php print "$ADATE";?>"><br> <font color="red"> <?php if(empty($ADATE)) print "* Admission Date required!<br>"; ?> </td> </tr> <tr> <td><font size="3">Last Name</td> <td>:</td> <td><input type="display" name="lname" disabled="true" value= "<?php print "$LASTNAME";?>"><br> <font color="red"> <?php if(empty($LASTNAME)) print "* Last Name required!<br>"; ?> </td> <td><font size="3">First Name</td> <td>:</td> <td><input type="display" name="fname" disabled="true" value= "<?php print "$FIRSTNAME";?>"><br> <font color="red"> <?php if(empty($FIRSTNAME)) print "* First Name required!<br>"; ?> </td> <td><font size="3">Middle Name</td> <td>:</td> <td><input type="display" name="mname" disabled="true" value= "<?php print "$MIDNAME";?>"><br> <font color="red"> <?php if(empty($MIDNAME)) print "* Middle Name required!<br>"; ?> </td> <td><font size="3">Admit time</td> <td>:</td> <td><input type="display" name="mname" disabled="true" value= "<?php print "$ADTIME";?>"><br> <font color="red"> <?php if(empty($ADTIME)) print "* Adtime required!<br>"; ?> </td> </tr> <tr> <td><font size="3">Civil Status</td> <td>:</td> <td><input type="display" name="cs" disabled="true" value= "<?php print "$CSTAT";?>"><br> <font color="red"> <?php if(empty($CSTAT)) print "* Civil Status required!<br>"; ?> </td> <td><font 
size="3">Age</td> <td>:</td> <td><input type="display" name="age" disabled="true" value= "<?php print "$AGE";?>"><br> <font color="red"> <?php if(empty($AGE)) print "* Age required!<br>"; ?> </td> <td><font size="3">Birthday</td> <td>:</td> <td><input type="display" name="bday" disabled="true" value= "<?php print "$BDAY";?>"><br> <font color="red"> <?php if(empty($BDAY)) print "* Birthday required!<br>"; ?> </td> </tr> <tr> <td><font size="3">Address</td> <td>:</td> <td><input type="display" name="address" disabled="true" value= "<?php print "$AD";?>"><br> <font color="red"> <?php if(empty($AD)) print "* Address required!<br>"; ?> </td> <td><font size="3">Telephone #</td> <td>:</td> <td><input type="display" name="telnum" disabled="true" value= "<?php print "$TELNUM";?>"></td> <td width="23"><font size="3">Sex</td> <td width="3">:</td> <td width="174"><input type="display" name="sex" disabled="true" value= "<?php print "$SEX";?>"><br> <font color="red"> <?php if(empty($SEX)) print "* Gender required!<br>"; ?> </td> </tr> <tr> <td><font size="3">Pls. Check</td> <td>:</td> <td><input name="stats1" type="checkbox" id="SSS" value="SSS">SSS</td> <td><font size="3"></td> <td>:</td> <td><input name="stats1" type="checkbox" id="nonmed" value="NonMedicare">Non Medicare</td> <td><font size="3"></td> <td>:</td> <td><input name="stats1" type="checkbox" id="sh" value="stockholder">Stockholder</td> </tr> <tr> <td><font size="3"></td> <td></td> <td><input name="stats1" type="checkbox" id="gsis" value="GSIS">GSIS</td> <td><font size="3"></td> <td></td> <td><input name="stats1" type="checkbox" id="senior" value="seniorcitizen">Senior-Citizen</td> <tr> <td><font size="3"></td> <td></td> <td><input name="stats1" type="checkbox" id="dep" value="dependent">Dependent</td> <td><font size="3"></td> <td></td> <td><input name="stats1" type="checkbox" id="emp" value="employee">Employee</td> </tr> <tr> <td><font size="3">Attending Nurse</td> <td>:</td> <td><input type="display" name="nurse" disabled="true" value= "<?php print "$NURSE";?>"><br> <font color="red"> <?php if(empty($NURSE)) print "* Admitting/Attending Nurse required!<br>"; ?> </td> </tr> <tr> <td>&nbsp;</td> <td>&nbsp;</td> <td><input type="button" value="Back" onClick="history.go(-1);return true;"> <?php $val1 = $_POST['NURSE']; if($_POST['NURSE'] !="") { ?> <form action="aisaction.php" method="POST" target="_window"> <input type="hidden" name="submit" value="yes"> <input type="submit" value="submit"> </form> <?php } ?> </td> </td> </tr> </table> </td> </form> </tr> </table> </head> </html>

    Read the article

  • How can I use Perl regular expressions to parse XML data?

    - by Luke
    I have a pretty long piece of XML that I want to parse. I want to remove everything except for the subclass-code and city. So that I am left with something like the example below. EXAMPLE TEST SUBCLASS|MIAMI CODE <?xml version="1.0" standalone="no"?> <web-export> <run-date>06/01/2010 <pub-code>TEST <ad-type>TEST <cat-code>Real Estate</cat-code> <class-code>TEST</class-code> <subclass-code>TEST SUBCLASS</subclass-code> <placement-description></placement-description> <position-description>Town House</position-description> <subclass3-code></subclass3-code> <subclass4-code></subclass4-code> <ad-number>0000284708-01</ad-number> <start-date>05/28/2010</start-date> <end-date>06/09/2010</end-date> <line-count>6</line-count> <run-count>13</run-count> <customer-type>Private Party</customer-type> <account-number>100099237</account-number> <account-name>DOE, JOHN</account-name> <addr-1>207 CLARENCE STREET</addr-1> <addr-2> </addr-2> <city>MIAMI</city> <state>FL</state> <postal-code>02910</postal-code> <country>USA</country> <phone-number>4014612880</phone-number> <fax-number></fax-number> <url-addr> </url-addr> <email-addr>[email protected]</email-addr> <pay-flag>N</pay-flag> <ad-description>DEANESTATES2BEDS2BATHSAPPLIANCED</ad-description> <order-source>Import</order-source> <order-status>Live</order-status> <payor-acct>100099237</payor-acct> <agency-flag>N</agency-flag> <rate-note></rate-note> <ad-content> MIAMI&#47;Dean Estates&#58; 2 beds&#44; 2 baths&#46; Applianced&#46; Central air&#46; Carpets&#46; Laundry&#46; 2 decks&#46; Pool&#46; Parking&#46; Close to everything&#46;No smoking&#46; No utilities&#46; &#36;1275 mo&#46; 401&#45;578&#45;1501&#46; </ad-content> </ad-type> </pub-code> </run-date> </web-export> PERL So what I want to do is open an existing file read the contents then use regular expressions to eliminate the unnecessary XML tags. 
open(READFILE, "FILENAME"); while(<READFILE>) { $_ =~ s/<\?xml version="(.*)" standalone="(.*)"\?>\n.*//g; $_ =~ s/<subclass-code>//g; $_ =~ s/<\/subclass-code>\n.*/|/g; $_ =~ s/(.*)PJ RER Houses /PJ RER Houses/g; $_ =~ s/\G //g; $_ =~ s/<city>//g; $_ =~ s/<\/city>\n.*//g; $_ =~ s/<(\/?)web-export>(.*)\n.*//g; $_ =~ s/<(\/?)run-date>(.*)\n.*//g; $_ =~ s/<(\/?)pub-code>(.*)\n.*//g; $_ =~ s/<(\/?)ad-type>(.*)\n.*//g; $_ =~ s/<(\/?)cat-code>(.*)<(\/?)cat-code>\n.*//g; $_ =~ s/<(\/?)class-code>(.*)<(\/?)class-code>\n.*//g; $_ =~ s/<(\/?)placement-description>(.*)<(\/?)placement-description>\n.*//g; $_ =~ s/<(\/?)position-description>(.*)<(\/?)position-description>\n.*//g; $_ =~ s/<(\/?)subclass3-code>(.*)<(\/?)subclass3-code>\n.*//g; $_ =~ s/<(\/?)subclass4-code>(.*)<(\/?)subclass4-code>\n.*//g; $_ =~ s/<(\/?)ad-number>(.*)<(\/?)ad-number>\n.*//g; $_ =~ s/<(\/?)start-date>(.*)<(\/?)start-date>\n.*//g; $_ =~ s/<(\/?)end-date>(.*)<(\/?)end-date>\n.*//g; $_ =~ s/<(\/?)line-count>(.*)<(\/?)line-count>\n.*//g; $_ =~ s/<(\/?)run-count>(.*)<(\/?)run-count>\n.*//g; $_ =~ s/<(\/?)customer-type>(.*)<(\/?)customer-type>\n.*//g; $_ =~ s/<(\/?)account-number>(.*)<(\/?)account-number>\n.*//g; $_ =~ s/<(\/?)account-name>(.*)<(\/?)account-name>\n.*//g; $_ =~ s/<(\/?)addr-1>(.*)<(\/?)addr-1>\n.*//g; $_ =~ s/<(\/?)addr-2>(.*)<(\/?)addr-2>\n.*//g; $_ =~ s/<(\/?)state>(.*)<(\/?)state>\n.*//g; $_ =~ s/<(\/?)postal-code>(.*)<(\/?)postal-code>\n.*//g; $_ =~ s/<(\/?)country>(.*)<(\/?)country>\n.*//g; $_ =~ s/<(\/?)phone-number>(.*)<(\/?)phone-number>\n.*//g; $_ =~ s/<(\/?)fax-number>(.*)<(\/?)fax-number>\n.*//g; $_ =~ s/<(\/?)url-addr>(.*)<(\/?)url-addr>\n.*//g; $_ =~ s/<(\/?)email-addr>(.*)<(\/?)email-addr>\n.*//g; $_ =~ s/<(\/?)pay-flag>(.*)<(\/?)pay-flag>\n.*//g; $_ =~ s/<(\/?)ad-description>(.*)<(\/?)ad-description>\n.*//g; $_ =~ s/<(\/?)order-source>(.*)<(\/?)order-source>\n.*//g; $_ =~ s/<(\/?)order-status>(.*)<(\/?)order-status>\n.*//g; $_ =~ s/<(\/?)payor-acct>(.*)<(\/?)payor-acct>\n.*//g; $_ =~ s/<(\/?)agency-flag>(.*)<(\/?)agency-flag>\n.*//g; $_ =~ s/<(\/?)rate-note>(.*)<(\/?)rate-note>\n.*//g; $_ =~ s/<ad-content>(.*)\n.*//g; $_ =~ s/\t(.*)\n.*//g; $_ =~ s/<\/ad-content>(.*)\n.*//g; } close( READFILE1 ); Is there an easier way of doing this? I don't want to use any modules. I know that it might make this easier but the file I am reading has a lot of data in it.

    Read the article

  • Metro: Introduction to the WinJS ListView Control

    - by Stephen.Walther
    The goal of this blog entry is to provide a quick introduction to the ListView control – just the bare minimum that you need to know to start using the control. When building Metro style applications using JavaScript, the ListView control is the primary control that you use for displaying lists of items. For example, if you are building a product catalog app, then you can use the ListView control to display the list of products. The ListView control supports several advanced features that I plan to discuss in future blog entries. For example, you can group the items in a ListView, you can create master/details views with a ListView, and you can efficiently work with large sets of items with a ListView. In this blog entry, we’ll keep things simple and focus on displaying a list of products. There are three things that you need to do in order to display a list of items with a ListView: Create a data source Create an Item Template Declare the ListView Creating the ListView Data Source The first step is to create (or retrieve) the data that you want to display with the ListView. In most scenarios, you will want to bind a ListView to a WinJS.Binding.List object. The nice thing about the WinJS.Binding.List object is that it enables you to take a standard JavaScript array and convert the array into something that can be bound to the ListView. It doesn’t matter where the JavaScript array comes from. It could be a static array that you declare or you could retrieve the array as the result of an Ajax call to a remote server. The following JavaScript file – named products.js – contains a list of products which can be bound to a ListView. (function () { "use strict"; var products = new WinJS.Binding.List([ { name: "Milk", price: 2.44 }, { name: "Oranges", price: 1.99 }, { name: "Wine", price: 8.55 }, { name: "Apples", price: 2.44 }, { name: "Steak", price: 1.99 }, { name: "Eggs", price: 2.44 }, { name: "Mushrooms", price: 1.99 }, { name: "Yogurt", price: 2.44 }, { name: "Soup", price: 1.99 }, { name: "Cereal", price: 2.44 }, { name: "Pepsi", price: 1.99 } ]); WinJS.Namespace.define("ListViewDemos", { products: products }); })(); The products variable represents a WinJS.Binding.List object. This object is initialized with a plain-old JavaScript array which represents an array of products. To avoid polluting the global namespace, the code above uses the module pattern and exposes the products using a namespace. The list of products is exposed to the world as ListViewDemos.products. To learn more about the module pattern and namespaces in WinJS, see my earlier blog entry: http://stephenwalther.com/blog/archive/2012/02/22/metro-namespaces-and-modules.aspx Creating the ListView Item Template The ListView control does not know how to render anything. It doesn’t know how you want each list item to appear. To get the ListView control to render something useful, you must create an Item Template. Here’s what our template for rendering an individual product looks like: <div id="productTemplate" data-win-control="WinJS.Binding.Template"> <div class="product"> <span data-win-bind="innerText:name"></span> <span data-win-bind="innerText:price"></span> </div> </div> This template displays the product name and price from the data source. Normally, you will declare your template in the same file as you declare the ListView control. In our case, both the template and ListView are declared in the default.html file. 
To learn more about templates, see my earlier blog entry: http://stephenwalther.com/blog/archive/2012/02/27/metro-using-templates.aspx Declaring the ListView The final step is to declare the ListView control in a page. Here’s the markup for declaring a ListView: <div data-win-control="WinJS.UI.ListView" data-win-options="{ itemDataSource:ListViewDemos.products.dataSource, itemTemplate:select('#productTemplate') }"> </div> You declare a ListView by adding the data-win-control to an HTML DIV tag. The data-win-options attribute is used to set two properties of the ListView. The ListView is associated with its data source with the itemDataSource property. Notice that the data source is ListViewDemos.products.dataSource and not just ListViewDemos.products. You need to associate the ListView with the dataSoure property. The ListView is associated with its item template with the help of the itemTemplate property. The ID of the item template — #productTemplate – is used to select the template from the page. Here’s what the complete version of the default.html page looks like: <!DOCTYPE html> <html> <head> <meta charset="utf-8"> <title>ListViewDemos</title> <!-- WinJS references --> <link href="//Microsoft.WinJS.0.6/css/ui-dark.css" rel="stylesheet"> <script src="//Microsoft.WinJS.0.6/js/base.js"></script> <script src="//Microsoft.WinJS.0.6/js/ui.js"></script> <!-- ListViewDemos references --> <link href="/css/default.css" rel="stylesheet"> <script src="/js/default.js"></script> <script src="/js/products.js" type="text/javascript"></script> <style type="text/css"> .product { width: 200px; height: 100px; border: white solid 1px; } </style> </head> <body> <div id="productTemplate" data-win-control="WinJS.Binding.Template"> <div class="product"> <span data-win-bind="innerText:name"></span> <span data-win-bind="innerText:price"></span> </div> </div> <div data-win-control="WinJS.UI.ListView" data-win-options="{ itemDataSource:ListViewDemos.products.dataSource, itemTemplate:select('#productTemplate') }"> </div> </body> </html> Notice that the page above includes a reference to the products.js file: <script src=”/js/products.js” type=”text/javascript”></script> The page above also contains a Template control which contains the ListView item template. Finally, the page includes the declaration of the ListView control. Summary The goal of this blog entry was to describe the minimal set of steps which you must complete to use the WinJS ListView control to display a simple list of items. You learned how to create a data source, declare an item template, and declare a ListView control.

    Read the article

  • The last MVVM you'll ever need?

    - by Nuri Halperin
    As my MVC projects mature and grow, the need to have some omnipresent, ambient model properties quickly emerge. The application no longer has only one dynamic pieced of data on the page: A sidebar with a shopping cart, some news flash on the side – pretty common stuff. The rub is that a controller is invoked in context of a single intended request. The rest of the data, even though it could be just as dynamic, is expected to appear on it's own. There are many solutions to this scenario. MVVM prescribes creating elaborate objects which expose your new data as a property on some uber-object with more properties exposing the "side show" ambient data. The reason I don't love this approach is because it forces fairly acute awareness of the view, and soon enough you have many MVVM objects laying around, and views have to start doing null-checks in order to ensure you really supplied all the values before binding to them. Ick. Just as unattractive is the ViewData dictionary. It's not strongly typed, and in both this and the MVVM approach someone has to populate these properties – n'est pas? Where does that live? With MVC2, we get the formerly-futures  feature Html.RenderAction(). The feature allows you plant a line in a view, of the format: <% Html.RenderAction("SessionInterest", "Session"); %> While this syntax looks very clean, I can't help being bothered by it. MVC was touting a very strong separation of concerns, the Model taking on the role of the business logic, the controller handling route and performing minimal view-choosing operations and the views strictly focused on rendering out angled-bracket tags. The RenderAction() syntax has the view calling some controller and invoking it inline with it's runtime rendering. This – to my taste – embeds too much  knowledge of controllers into the view's code – which was allegedly forbidden.  The one way flow "Controller Receive Data –> Controller invoke Model –> Controller select view –> Controller Hand data to view" now gets a "View calls controller and gets it's own data" which is not so one-way anymore. Ick. I toyed with some other solutions a bit, including some base controllers, special view classes etc. My current favorite though is making use of the ExpandoObject and dynamic features with C# 4.0. If you follow Phil Haack or read a bit from David Heyden you can see the general picture emerging. The game changer is that using the new dynamic syntax, one can sprout properties on an object and make use of them in the view. Well that beats having a bunch of uni-purpose MVVM's any day! Rather than statically exposed properties, we'll just use the capability of adding members at runtime. 
Armed with new ideas and syntax, I went to work: First, I created a factory method to enrich the focuse object: public static class ModelExtension { public static dynamic Decorate(this Controller controller, object mainValue) { dynamic result = new ExpandoObject(); result.Value = mainValue; result.SessionInterest = CodeCampBL.SessoinInterest(); result.TagUsage = CodeCampBL.TagUsage(); return result; } } This gives me a nice fluent way to have the controller add the rest of the ambient "side show" items (SessionInterest, TagUsage in this demo) and expose them all as the Model: public ActionResult Index() { var data = SyndicationBL.Refresh(TWEET_SOURCE_URL); dynamic result = this.Decorate(data); return View(result); } So now what remains is that my view knows to expect a dynamic object (rather than statically typed) so that the ASP.NET page compiler won't barf: <%@ Page Language="C#" Title="Ambient Demo" MasterPageFile="~/Views/Shared/Ambient.Master" Inherits="System.Web.Mvc.ViewPage<dynamic>" %> Notice the generic ViewPage<dynamic>. It doesn't work otherwise. In the page itself, Model.Value property contains the main data returned from the controller. The nice thing about this, is that the master page (Ambient.Master) also inherits from the generic ViewMasterPage<dynamic>. So rather than the page worrying about all this ambient stuff, the side bars and panels for ambient data all reside in a master page, and can be rendered using the RenderPartial() syntax: <% Html.RenderPartial("TagCloud", Model.SessionInterest as Dictionary<string, int>); %> Note here that a cast is necessary. This is because although dynamic is magic, it can't figure out what type this property is, and wants you to give it a type so its binder can figure out the right property to bind to at runtime. I use as, you can cast if you like. So there we go – no violation of MVC, no explosion of MVVM models and voila – right? Well, I could not let this go without a tweak or two more. The first thing to improve, is that some views may not need all the properties. In that case, it would be a waste of resources to populate every property. The solution to this is simple: rather than exposing properties, I change d the factory method to expose lambdas - Func<T> really. So only if and when a view accesses a member of the dynamic object does it load the data. public static class ModelExtension { // take two.. lazy loading! public static dynamic LazyDecorate(this Controller c, object mainValue) { dynamic result = new ExpandoObject(); result.Value = mainValue; result.SessionInterest = new Func<Dictionary<string, int>>(() => CodeCampBL.SessoinInterest()); result.TagUsage = new Func<Dictionary<string, int>>(() => CodeCampBL.TagUsage()); return result; } } Now that lazy loading is in place, there's really no reason not to hook up all and any possible ambient property. Go nuts! Add them all in – they won't get invoked unless used. This now requires changing the signature of usage on the ambient properties methods –adding some parenthesis to the master view: <% Html.RenderPartial("TagCloud", Model.SessionInterest() as Dictionary<string, int>); %> And, of course, the controller needs to call LazyDecorate() rather than the old Decorate(). The final touch is to introduce a convenience method to the my Controller class , so that the tedium of calling Decorate() everywhere goes away. 
This is done quite simply by adding a bunch of methods, matching View(object), View(string,object) signatures of the Controller class: public ActionResult Index() { var data = SyndicationBL.Refresh(TWEET_SOURCE_URL); return AmbientView(data); } //these methods can reside in a base controller for the solution: public ViewResult AmbientView(dynamic data) { dynamic result = ModelExtension.LazyDecorate(this, data); return View(result); } public ViewResult AmbientView(string viewName, dynamic data) { dynamic result = ModelExtension.LazyDecorate(this, data); return View(viewName, result); } The call to AmbientView now replaces any call the View() that requires the ambient data. DRY sattisfied, lazy loading and no need to replace core pieces of the MVC pipeline. I call this a good MVC day. Enjoy!

    Read the article

  • The blocking nature of aggregates

    - by Rob Farley
    I wrote a post recently about how query tuning isn’t just about how quickly the query runs – that if you have something (such as SSIS) that is consuming your data (and probably introducing a bottleneck), then it might be more important to have a query which focuses on getting the first bit of data out. You can read that post here.  In particular, we looked at two operators that could be used to ensure that a query returns only Distinct rows. and The Sort operator pulls in all the data, sorts it (discarding duplicates), and then pushes out the remaining rows. The Hash Match operator performs a Hashing function on each row as it comes in, and then looks to see if it’s created a Hash it’s seen before. If not, it pushes the row out. The Sort method is quicker, but has to wait until it’s gathered all the data before it can do the sort, and therefore blocks the data flow. But that was my last post. This one’s a bit different. This post is going to look at how Aggregate functions work, which ties nicely into this month’s T-SQL Tuesday. I’ve frequently explained about the fact that DISTINCT and GROUP BY are essentially the same function, although DISTINCT is the poorer cousin because you have less control over it, and you can’t apply aggregate functions. Just like the operators used for Distinct, there are different flavours of Aggregate operators – coming in blocking and non-blocking varieties. The example I like to use to explain this is a pile of playing cards. If I’m handed a pile of cards and asked to count how many cards there are in each suit, it’s going to help if the cards are already ordered. Suppose I’m playing a game of Bridge, I can easily glance at my hand and count how many there are in each suit, because I keep the pile of cards in order. Moving from left to right, I could tell you I have four Hearts in my hand, even before I’ve got to the end. By telling you that I have four Hearts as soon as I know, I demonstrate the principle of a non-blocking operation. This is known as a Stream Aggregate operation. It requires input which is sorted by whichever columns the grouping is on, and it will release a row as soon as the group changes – when I encounter a Spade, I know I don’t have any more Hearts in my hand. Alternatively, if the pile of cards are not sorted, I won’t know how many Hearts I have until I’ve looked through all the cards. In fact, to count them, I basically need to put them into little piles, and when I’ve finished making all those piles, I can count how many there are in each. Because I don’t know any of the final numbers until I’ve seen all the cards, this is blocking. This performs the aggregate function using a Hash Match. Observant readers will remember this from my Distinct example. You might remember that my earlier Hash Match operation – used for Distinct Flow – wasn’t blocking. But this one is. They’re essentially doing a similar operation, applying a Hash function to some data and seeing if the set of values have been seen before, but before, it needs more information than the mere existence of a new set of values, it needs to consider how many of them there are. A lot is dependent here on whether the data coming out of the source is sorted or not, and this is largely determined by the indexes that are being used. If you look in the Properties of an Index Scan, you’ll be able to see whether the order of the data is required by the plan. A property called Ordered will demonstrate this. 
In this particular example, the second plan is significantly faster, but is dependent on having ordered data. In fact, if I force a Stream Aggregate on unordered data (which I’m doing by telling it to use a different index), a Sort operation is needed, which makes my plan a lot slower. This is all very straight-forward stuff, and information that most people are fully aware of. I’m sure you’ve all read my good friend Paul White (@sql_kiwi)’s post on how the Query Optimizer chooses which type of aggregate function to apply. But let’s take a look at SQL Server Integration Services. SSIS gives us a Aggregate transformation for use in Data Flow Tasks, but it’s described as Blocking. The definitive article on Performance Tuning SSIS uses Sort and Aggregate as examples of Blocking Transformations. I’ve just shown you that Aggregate operations used by the Query Optimizer are not always blocking, but that the SSIS Aggregate component is an example of a blocking transformation. But is it always the case? After all, there are plenty of SSIS Performance Tuning talks out there that describe the value of sorted data in Data Flow Tasks, describing the IsSorted property that can be set through the Advanced Editor of your Source component. And so I set about testing the Aggregate transformation in SSIS, to prove for sure whether providing Sorted data would let the Aggregate transform behave like a Stream Aggregate. (Of course, I knew the answer already, but it helps to be able to demonstrate these things). A query that will produce a million rows in order was in order. Let me rephrase. I used a query which produced the numbers from 1 to 1000000, in a single field, ordered. The IsSorted flag was set on the source output, with the only column as SortKey 1. Performing an Aggregate function over this (counting the number of rows per distinct number) should produce an additional column with 1 in it. If this were being done in T-SQL, the ordered data would allow a Stream Aggregate to be used. In fact, if the Query Optimizer saw that the field had a Unique Index on it, it would be able to skip the Aggregate function completely, and just insert the value 1. This is a shortcut I wouldn’t be expecting from SSIS, but certainly the Stream behaviour would be nice. Unfortunately, it’s not the case. As you can see from the screenshots above, the data is pouring into the Aggregate function, and not being released until all million rows have been seen. It’s not doing a Stream Aggregate at all. This is expected behaviour. (I put that in bold, because I want you to realise this.) An SSIS transformation is a piece of code that runs. It’s a physical operation. When you write T-SQL and ask for an aggregation to be done, it’s a logical operation. The physical operation is either a Stream Aggregate or a Hash Match. In SSIS, you’re telling the system that you want a generic Aggregation, that will have to work with whatever data is passed in. I’m not saying that it wouldn’t be possible to make a sometimes-blocking aggregation component in SSIS. A Custom Component could be created which could detect whether the SortKeys columns of the input matched the Grouping columns of the Aggregation, and either call the blocking code or the non-blocking code as appropriate. One day I’ll make one of those, and publish it on my blog. I’ve done it before with a Script Component, but as Script components are single-use, I was able to handle the data knowing everything about my data flow already. 
As per my previous post – there are a lot of aspects in which tuning SSIS and tuning execution plans use similar concepts. In both situations, it really helps to have a feel for what’s going on behind the scenes. Considering whether an operation is blocking or not is extremely relevant to performance, and it’s not always obvious from the surface. In a future post, I’ll show the impact of blocking v non-blocking and synchronous v asynchronous components in SSIS, using some of LobsterPot’s Script Components and Custom Components as examples. When I get that sorted, I’ll make a Stream Aggregate component available for download.
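    To make the streaming-versus-blocking distinction concrete outside of SQL Server, here is a small conceptual sketch in C# (my own illustration, not code from the original post): the streaming version assumes its input is already sorted by the grouping key and can emit each group's count as soon as the key changes, while the hashing version accepts unordered input but must buffer every group before it can emit anything.

        using System;
        using System.Collections.Generic;

        static class AggregateSketch
        {
            // Stream aggregate: requires input ordered by key; releases a group as soon as the key changes.
            public static IEnumerable<KeyValuePair<string, int>> StreamCount(IEnumerable<string> sortedKeys)
            {
                string current = null;
                int count = 0;
                foreach (var key in sortedKeys)
                {
                    if (current != null && key != current)
                    {
                        yield return new KeyValuePair<string, int>(current, count); // non-blocking: row released early
                        count = 0;
                    }
                    current = key;
                    count++;
                }
                if (current != null)
                    yield return new KeyValuePair<string, int>(current, count);
            }

            // Hash aggregate: copes with unordered input, but emits nothing until every row has been seen.
            public static IEnumerable<KeyValuePair<string, int>> HashCount(IEnumerable<string> keys)
            {
                var buckets = new Dictionary<string, int>();
                foreach (var key in keys)
                {
                    int n;
                    buckets.TryGetValue(key, out n);
                    buckets[key] = n + 1;
                }
                foreach (var pair in buckets) // blocking: output starts only after the input is exhausted
                    yield return pair;
            }
        }

    The same trade-off drives the Query Optimizer's choice between a Stream Aggregate and a Hash Match, and it is why the generic SSIS Aggregate transformation has to behave like the second version.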

    Read the article

  • There is no ViewData item of type 'IEnumerable<SelectListItem>' that has the key 'xxx'.

    - by Jimbo
    There are a couple of posts about this on Stack Overflow, but none with an answer that seems to fix the problem in my current situation. I have a page with a table in it; each row has a number of text fields and a dropdown. All the dropdowns need to use the same SelectList data, so I have set it up as follows: Controller ViewData["Submarkets"] = new SelectList(submarketRep.AllOrdered(), "id", "name"); View <%= Html.DropDownList("submarket_0", (SelectList)ViewData["Submarkets"], "(none)") %> I have used exactly this setup in many places, but for some reason in this particular view I get the error: There is no ViewData item of type 'IEnumerable<SelectListItem>' that has the key 'submarket_0'.
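    A note not from the original question: this particular error message usually means the SelectList argument handed to Html.DropDownList evaluated to null – the helper then falls back to looking for a ViewData entry named after the control ("submarket_0") and cannot find one. A common cause is an action (often the POST action that redisplays the form) returning the view without repopulating ViewData["Submarkets"]. Below is a minimal sketch of a controller that sets it on every path that renders the view; SubmarketsController, SubmarketRepository and Submarket are hypothetical stand-ins for the questioner's types.

        using System.Collections.Generic;
        using System.Web.Mvc;

        // Hypothetical stand-ins for the types used in the question.
        public class Submarket { public int id { get; set; } public string name { get; set; } }
        public class SubmarketRepository
        {
            public IEnumerable<Submarket> AllOrdered() { return new List<Submarket>(); } // would query the database, ordered
        }

        public class SubmarketsController : Controller
        {
            private readonly SubmarketRepository submarketRep = new SubmarketRepository();

            public ActionResult Edit()
            {
                ViewData["Submarkets"] = new SelectList(submarketRep.AllOrdered(), "id", "name");
                return View();
            }

            [HttpPost]
            public ActionResult Edit(FormCollection form)
            {
                if (!ModelState.IsValid)
                {
                    // Repopulate before redisplaying; otherwise the DropDownList helper
                    // receives a null SelectList, falls back to ViewData["submarket_0"],
                    // and throws the "no ViewData item" error.
                    ViewData["Submarkets"] = new SelectList(submarketRep.AllOrdered(), "id", "name");
                    return View();
                }
                return RedirectToAction("Index");
            }
        }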

    Read the article

  • [C#] RSACryptoServiceProvider Using the Public Key to decrypt Private Key encrypted Data ?

    - by prixone
    Hello, i went thru lots and lots of searchs and results and pages considering this matter but havent found the solution yet. Like i described in the title i want to decrypt data using my Public Key which is something i am able to do with my PHP code, my certificate is generated from my PHP page. Here is a sample of what i am trying: public string Decrypt(string data) { X509Certificate2 cert = new X509Certificate2(); cert.Import(Encoding.ASCII.GetBytes(webApi["PublicDefaultKey"])); RSACryptoServiceProvider rsa = (RSACryptoServiceProvider)cert.PublicKey.Key; byte[] ciphertextBytes = Convert.FromBase64String(data); byte[] plaintextBytes = rsa.Decrypt(ciphertextBytes, false); return System.Text.Encoding.UTF8.GetString(plaintextBytes); } MessageBox.Show(Decrypt("pgcl93TpVfyHubuRlL72/PZN0nA4Q+HHx8Y15qGvUyTNpI6y7J13YG07ZVEyB7Dbgx63FSw9vEw1D1Z3bvNbI0gqalVfKTfHv5tKVc7Y6nQwQYwoODpUhVpa/K9OP1lqx4esnxqwJx95G0rqgJTdS+Yo773s5UcJrHzzbsX2z+w=")); here is the public key i am using for tests: -----BEGIN CERTIFICATE----- MIIDxDCCAy2gAwIBAgIBADANBgkqhkiG9w0BAQUFADCBozELMAkGA1UEBhMCVUsx EDAOBgNVBAgTB0VuZ2xhbmQxDzANBgNVBAcTBkxvbmRvbjESMBAGA1UEChMJTU1P Q2xldmVyMSMwIQYDVQQLExpNTU9DbGV2ZXIgRGV2ZWxvcGVyJ3MgVGVhbTESMBAG A1UEAxMJTU1PQ2xldmVyMSQwIgYJKoZIhvcNAQkBFhVzdXBwb3J0QG1tb2NsZXZl ci5jb20wHhcNMTAwNTE2MTkxNDQ4WhcNMTEwNTE2MTkxNDQ4WjCBozELMAkGA1UE BhMCVUsxEDAOBgNVBAgTB0VuZ2xhbmQxDzANBgNVBAcTBkxvbmRvbjESMBAGA1UE ChMJTU1PQ2xldmVyMSMwIQYDVQQLExpNTU9DbGV2ZXIgRGV2ZWxvcGVyJ3MgVGVh bTESMBAGA1UEAxMJTU1PQ2xldmVyMSQwIgYJKoZIhvcNAQkBFhVzdXBwb3J0QG1t b2NsZXZlci5jb20wgZ8wDQYJKoZIhvcNAQEBBQADgY0AMIGJAoGBAMozHr17PL+N seZyadobUmpIV+RKqmRUGX0USIdj0i0yvwvltu3AIKAyRhGz16053jZV2WeglCEj qfiewF9sYTAAoIVGtdd/sZvO4uUcng9crSzDo0CrEPs/Tn5SunmlmyFlZfdlqpAM XXLno/HMo9cza0CrcMnRokaTiu8szBeBAgMBAAGjggEEMIIBADAdBgNVHQ4EFgQU zip+3/hBIpjvdcSoWQ2rW+xDEWAwgdAGA1UdIwSByDCBxYAUzip+3/hBIpjvdcSo WQ2rW+xDEWChgamkgaYwgaMxCzAJBgNVBAYTAlVLMRAwDgYDVQQIEwdFbmdsYW5k MQ8wDQYDVQQHEwZMb25kb24xEjAQBgNVBAoTCU1NT0NsZXZlcjEjMCEGA1UECxMa TU1PQ2xldmVyIERldmVsb3BlcidzIFRlYW0xEjAQBgNVBAMTCU1NT0NsZXZlcjEk MCIGCSqGSIb3DQEJARYVc3VwcG9ydEBtbW9jbGV2ZXIuY29tggEAMAwGA1UdEwQF MAMBAf8wDQYJKoZIhvcNAQEFBQADgYEAyU6RLifBXSyQUkBFnTcI6h5ryujh+6o6 eQ1wuPOPmRdYJZuWx/5mjMpIF13sYlNorcOv5WaEnp8/Jfuwc9h/jXlcujser0UE WoaaFwK81O801Xkv2zEm2UUWiOabrGTIT4FVy3gCUXJYjaCnvfSdmkfLJOQxNHVt 4NTMp7IeF60= -----END CERTIFICATE----- by default, the PHP generates the key in PKCS as far as i know, from my C# code i can encrypt using my publick key with out any problems and decrypt it on my php page but so far i was not able to decrypt data sent from my php page encrypted with the private key which is something i can do on my php... i did like some help to understand what i am doing wrong... the error when i run the functions is "bad data" at rsa.Decrypt(ciphertextBytes, false); which i couldnt found anymore information at it... Thanks for any and all the help :)
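    A note not from the original question: RSACryptoServiceProvider.Decrypt always works with the private key, so the object obtained from cert.PublicKey.Key cannot reverse data produced by PHP's openssl_private_encrypt, which is why the call fails with "bad data". One possible approach is to perform the raw public-key operation with a third-party library such as BouncyCastle; the sketch below is an assumption-laden example that presumes the BouncyCastle library is referenced and that the PHP side used openssl_private_encrypt with its default PKCS#1 v1.5 padding. (The cleaner long-term design is usually to have PHP sign the data with openssl_sign and have C# verify it with VerifyData, since "encrypting with the private key" is really a signature.)

        using System;
        using System.IO;
        using Org.BouncyCastle.Crypto.Encodings;
        using Org.BouncyCastle.Crypto.Engines;
        using Org.BouncyCastle.OpenSsl;
        using Org.BouncyCastle.X509;

        static class PublicKeyDecryptSketch
        {
            // pemCertificate is the "-----BEGIN CERTIFICATE-----" block shown above;
            // base64Cipher is the value produced by the PHP side.
            public static string DecryptWithPublicKey(string pemCertificate, string base64Cipher)
            {
                var pemReader = new PemReader(new StringReader(pemCertificate));
                var certificate = (X509Certificate)pemReader.ReadObject();

                // PKCS#1 v1.5 block formatting around a raw RSA engine, run in the
                // "decrypt" direction with the public key (the openssl_public_decrypt counterpart).
                var cipher = new Pkcs1Encoding(new RsaEngine());
                cipher.Init(false, certificate.GetPublicKey());

                byte[] cipherBytes = Convert.FromBase64String(base64Cipher);
                byte[] plainBytes = cipher.ProcessBlock(cipherBytes, 0, cipherBytes.Length);
                return System.Text.Encoding.UTF8.GetString(plainBytes);
            }
        }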

    Read the article

  • is of a type that is invalid for use as a key column in an index.

    - by acidzombie24
    I get the error "Column 'key' in table 'misc_info' is of a type that is invalid for use as a key column in an index.", where key is an nvarchar(max). A quick Google search found this; however, it doesn't explain what the solution is. How do I create something like a Dictionary where the key and value are both strings, and obviously the key must be unique and single? My SQL statement was create table [misc_info] ( [id] INTEGER PRIMARY KEY IDENTITY NOT NULL, [key] nvarchar(max) UNIQUE NOT NULL, [value] nvarchar(max) NOT NULL);

    Read the article

  • Datapager in silverlight 4 -Nested datagrid visibility issue

    - by Archie
    I have a datagrid in silverlight with child datagrid nested in it. Also I have a DataPager on the outer datagrid. The code looks like this: <data:DataGrid x:Name="dgData" Width="600" ItemsSource="{Binding}" AutoGenerateColumns="False" IsReadOnly="True" HorizontalScrollBarVisibility="Hidden" CanUserSortColumns="False" RowDetailsVisibilityChanged="dgData_RowDetailsVisibilityChanged" Margin="20,0" Grid.RowSpan="2"> <data:DataGrid.Columns> <data:DataGridTextColumn Header="Item" Width="*" Binding="{Binding ItemName,Mode=TwoWay}"/> <data:DataGridTextColumn Header="Company" Width="*" Binding="{Binding Company,Mode=TwoWay}"/> </data:DataGrid.Columns> <data:DataGrid.RowDetailsTemplate> <DataTemplate> <data:DataGrid x:Name="dgRowDetail" Width="400" HorizontalScrollBarVisibility="Hidden" AutoGenerateColumns="False" Visibility="Collapsed"> <data:DataGrid.Columns> <data:DataGridTextColumn Header="Date" Width="*" Binding="{Binding Date,Mode=TwoWay}"/> <data:DataGridTextColumn Header="Price" Width="*" Binding="{Binding Price,Mode=TwoWay}"/> </data:DataGrid.Columns> </data:DataGrid> </DataTemplate> </data:DataGrid.RowDetailsTemplate> </data:DataGrid> <data:DataPager x:Name="dpData" HorizontalAlignment="Center" DisplayMode="FirstLastPreviousNextNumeric" Source="{Binding}"/> I have one PagedCollectionView pgv which is bound to outer datagrid as: DataContext = pgv; When the row is clicked I set the child datagrid's ItemsSource property to another PagedCollectionView. My problem is it works fine except for the first row for the first time. When I click on it, it doesn't fire the dgData_RowDetailsVisibilityChanged event. Also, when I click on second row, firstly first row fires the event and then the second row fires it and shows the nested grid. Please help.

    Read the article

  • cPickle ImportError: No module named multiarray

    - by Rafal
    Hello, I'm using cPickle to save my Database into file. The code looks like that: def Save_DataBase(): import cPickle from scipy import * from numpy import * a=Results.VersionName #filename='D:/results/'+a[a.find('/')+1:-a.find('/')-2]+Results.AssType[:3]+str(random.randint(0,100))+Results.Distribution+".lft" filename='D:/results/pppp.lft' plik=open(filename,'w') DataOutput=[[[DataBase.Arrays.Nodes,DataBase.Arrays.Links,DataBase.Arrays.Turns,DataBase.Arrays.Connectors,DataBase.Arrays.Zones], [DataBase.Nodes.Data,DataBase.Links.Data,DataBase.Turns.Data,DataBase.OrigConnectors.Data,DataBase.DestConnectors.Data,DataBase.Zones.Data], [DataBase.Nodes.DictionaryPy2Vis,DataBase.Links.DictionaryPy2Vis,DataBase.Turns.DictionaryPy2Vis,DataBase.OrigConnectors.DictionaryPy2Vis,DataBase.DestConnectors.DictionaryPy2Vis,DataBase.Zones.DictionaryPy2Vis], [DataBase.Nodes.DictionaryVis2Py,DataBase.Links.DictionaryVis2Py,DataBase.Turns.DictionaryVis2Py,DataBase.OrigConnectors.DictionaryVis2Py,DataBase.DestConnectors.DictionaryVis2Py,DataBase.Zones.DictionaryVis2Py], [DataBase.Paths.List]],[Results.VersionName,Results.noZones,Results.noNodes,Results.noLinks,Results.noTurns,Results.noTrips, Results.Times.VersionLoad,Results.Times.GetData,Results.Times.GetCoords,Results.Times.CrossTheTime,Results.Times.Plot_Cylinder, Results.AssType,Results.AssParam,Results.tStart,Results.tEnd,Results.Distribution,Results.tVector]] cPickle.dump(DataOutput, plik, protocol=0) plik.close()` And it works fine. Most of my Database rows are lists of a lists, vecor-like, or array-like data sets. But now when I input data, an error occurs: def Load_DataBase(): import cPickle from scipy import * from numpy import * filename='D:/results/pppp.lft' plik= open(filename, 'rb') """ first cPickle load approach """ A= cPickle.load(plik) """ fail """ """ Another approach - data format exact as in Output step above , also fails""" [[[DataBase.Arrays.Nodes,DataBase.Arrays.Links,DataBase.Arrays.Turns,DataBase.Arrays.Connectors,DataBase.Arrays.Zones], [DataBase.Nodes.Data,DataBase.Links.Data,DataBase.Turns.Data,DataBase.OrigConnectors.Data,DataBase.DestConnectors.Data,DataBase.Zones.Data], [DataBase.Nodes.DictionaryPy2Vis,DataBase.Links.DictionaryPy2Vis,DataBase.Turns.DictionaryPy2Vis,DataBase.OrigConnectors.DictionaryPy2Vis,DataBase.DestConnectors.DictionaryPy2Vis,DataBase.Zones.DictionaryPy2Vis], [DataBase.Nodes.DictionaryVis2Py,DataBase.Links.DictionaryVis2Py,DataBase.Turns.DictionaryVis2Py,DataBase.OrigConnectors.DictionaryVis2Py,DataBase.DestConnectors.DictionaryVis2Py,DataBase.Zones.DictionaryVis2Py], [DataBase.Paths.List]],[Results.VersionName,Results.noZones,Results.noNodes,Results.noLinks,Results.noTurns,Results.noTrips, Results.Times.VersionLoad,Results.Times.GetData,Results.Times.GetCoords,Results.Times.CrossTheTime,Results.Times.Plot_Cylinder, Results.AssType,Results.AssParam,Results.tStart,Results.tEnd,Results.Distribution,Results.tVector]]= cPickle.load(plik)` Error is (in both cases): A= cPickle.load(plik) ImportError: No module named multiarray Any Ideas? PS.

    Read the article

  • How to mock ISerializable classes with Moq?

    - by asmois
    Hi there, I'm completely new to Moq and am trying to create a mock of the System.Reflection.Assembly class. I'm using this code: var mockAssembly = new Mock<Assembly>(); mockAssembly.Setup( x => x.GetTypes() ).Returns( new Type[] { typeof( Type1 ), typeof( Type2 ) } ); But when I run the tests I get the following exception: System.ArgumentException : The type System.Reflection.Assembly implements ISerializable, but failed to provide a deserialization constructor Stack Trace: at Castle.DynamicProxy.Generators.BaseProxyGenerator.VerifyIfBaseImplementsGetObjectData(Type baseType) at Castle.DynamicProxy.Generators.ClassProxyGenerator.GenerateCode(Type[] interfaces, ProxyGenerationOptions options) at Castle.DynamicProxy.DefaultProxyBuilder.CreateClassProxy(Type classToProxy, Type[] additionalInterfacesToProxy, ProxyGenerationOptions options) at Castle.DynamicProxy.ProxyGenerator.CreateClassProxy(Type classToProxy, Type[] additionalInterfacesToProxy, ProxyGenerationOptions options, Object[] constructorArguments, IInterceptor[] interceptors) at Moq.Proxy.CastleProxyFactory.CreateProxy[T](ICallInterceptor interceptor, Type[] interfaces, Object[] arguments) at Moq.Mock`1.<InitializeInstance>b__0() at Moq.PexProtector.Invoke(Action action) at Moq.Mock`1.InitializeInstance() at Moq.Mock`1.OnGetObject() at Moq.Mock`1.get_Object() Could you recommend the right way to mock ISerializable classes (like System.Reflection.Assembly) with Moq? Thanks in advance!
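    A suggestion not from the original question: with this version of Moq/Castle, Assembly cannot be proxied because of the ISerializable check, so the usual workaround is not to mock Assembly at all but to hide it behind a small interface you own and mock that instead. A minimal sketch follows, where IAssemblyWrapper and AssemblyWrapper are made-up names and Type1/Type2 are the placeholder types from the question.

        using System;
        using System.Reflection;
        using Moq;

        // A thin abstraction you own, so tests never need to proxy Assembly itself.
        public interface IAssemblyWrapper
        {
            Type[] GetTypes();
        }

        public class AssemblyWrapper : IAssemblyWrapper
        {
            private readonly Assembly assembly;
            public AssemblyWrapper(Assembly assembly) { this.assembly = assembly; }
            public Type[] GetTypes() { return assembly.GetTypes(); }
        }

        // In the test, mock the interface instead of Assembly:
        // var mockAssembly = new Mock<IAssemblyWrapper>();
        // mockAssembly.Setup(x => x.GetTypes()).Returns(new[] { typeof(Type1), typeof(Type2) });

    Production code then takes an IAssemblyWrapper instead of an Assembly, and the real AssemblyWrapper is passed in at composition time.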

    Read the article

  • Error loading the report template:org.xml.sax.SAXParseException: cvc-complex-type.2.4.a: Invalid con

    - by Poonam
    Hi, I'm new to JasperReports and I have a problem when I call an iReport-designed report from a JSP; the following error occurs at run time: Error loading the report template: org.xml.sax.SAXParseException: cvc-complex-type.2.4.a: Invalid content was found starting with element 'head'. One of '{"http://jasperreports.sourceforge.net/jasperreports":field, "http://jasperreports.sourceforge.net/jasperreports":sortField, "http://jasperreports.sourceforge.net/jasperreports":variable, "http://jasperreports.sourceforge.net/jasperreports":filterExpression, "http://jasperreports.sourceforge.net/jasperreports":group, "http://jasperreports.sourceforge.net/jasperreports":background, "http://jasperreports.sourceforge.net/jasperreports":title, "http://jasperreports.sourceforge.net/jasperreports":pageHeader, "http://jasperreports.sourceforge.net/jasperreports":columnHeader, "http://jasperreports.sourceforge.net/jasperreports":detail, "http://jasperreports.sourceforge.net/jasperreports":columnFooter, "http://jasperreports.sourceforge.net/jasperreports":pageFooter, "http://jasperreports.sourceforge.net/jasperreports":lastPageFooter, "http://jasperreports.sourceforge.net/jasperreports":summary, "http://jasperreports.sourceforge.net/jasperreports":noData}' is expected. If anyone knows what causes this, please help me; you can also send an answer to my email id: [email protected]. Thanks in advance.

    Read the article

  • How do i specify wcf behaviorExtension class type without the assembly version number?

    - by Mr Bell
    I have a web app that uses a WCF service that utilizes a behaviorExtension like so: <behaviorExtensions> <add name="clientCredentialsExtension" type="Simon.Web.Giftcard.WCFSecurity.ClientCredentialsExtensionElement, Simon.Web.Giftcard, Version=1.0.3736.20411, Culture=neutral, PublicKeyToken=null"/> </behaviorExtensions> The problem is that this web app's version changes with every compile (I think), which invalidates this entry. How can I avoid having to change the version number every time I compile? Can I specify the extension in code somewhere?
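    A note not from the original question: the version usually changes on every compile because AssemblyInfo.cs uses the wildcard AssemblyVersion form (for example "1.0.*"), which makes the compiler generate new build/revision numbers each time, and under .NET 3.x the behaviorExtensions type attribute has to match the full assembly-qualified name exactly. The simplest workaround is to pin the version, as in the sketch below; alternatively, .NET 4 accepts the type and assembly name without Version/Culture/PublicKeyToken, and the behavior itself can be attached in code (e.g. via endpoint.Behaviors.Add(...)) so that no config type string is needed at all.

        // Properties/AssemblyInfo.cs
        using System.Reflection;

        // Replace a wildcard version such as [assembly: AssemblyVersion("1.0.*")],
        // which regenerates the build/revision numbers on every compile, with a
        // fixed value so the string in <behaviorExtensions> keeps matching:
        [assembly: AssemblyVersion("1.0.0.0")]
        [assembly: AssemblyFileVersion("1.0.0.0")]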

    Read the article

  • Where to change data for a "list" datasource in dashcode?

    - by erotsppa
    I created a new project in Dashcode and it automatically generated a mobile web app for me with a list and sample data sources. I see two data sources: one is labeled "datasource" and the other "list". I can see that the actual data for "datasource" is in a .js file, and that part is fine, but where is the actual data stored for "list"? The sample application came with a bunch of data for "list" and I am unable to change it. Any ideas?

    Read the article

< Previous Page | 554 555 556 557 558 559 560 561 562 563 564 565  | Next Page >